The EGS4 Code System: Solution of Gamma-ray and Electron Transport Problems
DOE R&D Accomplishments Database
Nelson, W. R.; Namito, Yoshihito
1990-03-01
In this paper we present an overview of the EGS4 Code System -- a general purpose package for the Monte Carlo simulation of the transport of electrons and photons. During the last 10-15 years EGS has been widely used to design accelerators and detectors for high-energy physics. More recently the code has been found to be of tremendous use in medical radiation physics and dosimetry. The problem-solving capabilities of EGS4 will be demonstrated by means of a variety of practical examples. To facilitate this review, we will take advantage of a new add-on package, called SHOWGRAF, to display particle trajectories in complicated geometries. These are shown as 2-D laser pictures in the written paper and as photographic slides of a 3-D high-resolution color monitor during the oral presentation. 11 refs., 15 figs.
A comparison between EGS4 and MCNP computer modeling of an in vivo X-ray fluorescence system.
Al-Ghorabie, F H; Natto, S S; Al-Lyhiani, S H
2001-03-01
The Monte Carlo computer codes EGS4 and MCNP were used to develop a theoretical model of a 180° geometry in vivo X-ray fluorescence system for the measurement of platinum concentration in head and neck tumors. The model included specification of the photon source, collimators, phantoms and detector. Theoretical results were compared and evaluated against X-ray fluorescence data obtained experimentally from an existing system developed by the Swansea In Vivo Analysis and Cancer Research Group. The EGS4 results agreed well with the MCNP results. However, agreement between the measured spectral shape obtained using the experimental X-ray fluorescence system and the simulated spectral shape obtained using the two Monte Carlo codes was relatively poor. The main reason for the disagreement arises from the basic assumptions which the two codes used in their calculations. Both codes assume a "free"-electron model for Compton interactions. This assumption leads to underestimates in the calculated spectra and invalidates direct comparison between predicted and experimental spectra.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba
2005-12-20
In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application bring new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial renovation than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet retain, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary. With the release of the EGS4 version, a deliberate attempt was made to present example problems in order to help the user "get started", and we follow that spirit in this report. A series of elementary tutorial user codes are presented in Chapter 3, with more sophisticated sample user codes described in Chapter 4. Novice EGS users will find it helpful to read through the initial sections of the EGS5 User Manual (provided in Appendix B of this report), proceeding then to work through the tutorials in Chapter 3. The User Manuals and other materials found in the appendices contain detailed flow charts, variable lists, and subprogram descriptions of EGS5 and PEGS. Included are step-by-step instructions for developing basic EGS5 user codes and for accessing all of the physics options available in EGS5 and PEGS. Once acquainted with the basic structure of EGS5, users should find the appendices the most frequently consulted sections of this report.
Fixed-Point Design of the Lattice-Reduction-Aided Iterative Detection and Decoding Receiver for Coded MIMO Systems
2011-01-01
[Only fragments of this report abstract survived extraction from its documentation page:] ...reliability, e.g., Turbo Codes [2] and Low Density Parity Check (LDPC) codes [3]. The challenge to apply both MIMO and ECC into wireless systems is on... ...illustrates the performance of coded LR-aided detectors.
Diagnostic x-ray dosimetry using Monte Carlo simulation.
Ioppolo, J L; Price, R I; Tuchyna, T; Buckley, C E
2002-05-21
An Electron Gamma Shower version 4 (EGS4) based user code was developed to simulate the absorbed dose in humans during routine diagnostic radiological procedures. Measurements of absorbed dose using thermoluminescent dosimeters (TLDs) were compared directly with EGS4 simulations of absorbed dose in homogeneous, heterogeneous and anthropomorphic phantoms. Realistic voxel-based models characterizing the geometry of the phantoms were used as input to the EGS4 code. The voxel geometry of the anthropomorphic Rando phantom was derived from a CT scan of Rando. The 100 kVp diagnostic energy x-ray spectra of the apparatus used to irradiate the phantoms were measured, and provided as input to the EGS4 code. The TLDs were placed at evenly spaced points symmetrically about the central beam axis, which was perpendicular to the cathode-anode x-ray axis, at a number of depths. The TLD measurements in the homogeneous and heterogeneous phantoms were on average within 7% of the values calculated by EGS4. Estimates of effective dose with errors less than 10% required fewer photon histories (1 × 10^7) than the calculation of dose profiles (1 × 10^9). The EGS4 code was able to satisfactorily predict, and thereby provide an instrument for reducing, patient and staff effective dose imparted during radiological investigations.
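The quoted history counts are consistent with the usual Monte Carlo scaling, in which relative statistical uncertainty falls as 1/√N. A minimal sketch (the scaling law is standard; the function name and target uncertainties are ours):

```python
def histories_needed(sigma_target, sigma_ref, n_ref):
    """Histories needed to reach sigma_target, given sigma_ref observed at n_ref.

    Uses the standard Monte Carlo scaling sigma ~ 1/sqrt(N).
    """
    return n_ref * (sigma_ref / sigma_target) ** 2

# Tightening relative uncertainty by a factor of 10 costs a factor of 100 in
# histories, consistent with the gap between the 1e7 and 1e9 counts quoted above.
print(histories_needed(0.01, 0.10, 1e7))  # -> 1e9
```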
Comparison of EGS4 and MCNP Monte Carlo codes when calculating radiotherapy depth doses.
Love, P A; Lewis, D G; Al-Affan, I A; Smith, C W
1998-05-01
The Monte Carlo codes EGS4 and MCNP have been compared when calculating radiotherapy depth doses in water. The aims of the work were to study (i) the differences between calculated depth doses in water for a range of monoenergetic photon energies and (ii) the relative efficiency of the two codes for different electron transport energy cut-offs. The depth doses from the two codes agree with each other within the statistical uncertainties of the calculations (1-2%). The relative depth doses also agree with data tabulated in the British Journal of Radiology Supplement 25. A discrepancy in the dose build-up region may be attributed to the different electron transport algorithms used by EGS4 and MCNP. This discrepancy is considerably reduced when the improved electron transport routines are used in the latest (4B) version of MCNP. Timing calculations show that EGS4 is at least 50% faster than MCNP for the geometries used in the simulations.
Pretest prediction of Semiscale Test S-07-10B [PWR]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobbe, C A
A best-estimate prediction of Semiscale Test S-07-10B was performed at INEL by EG&G Idaho as part of the RELAP4/MOD6 code assessment effort and as the Nuclear Regulatory Commission pretest calculation for the Small Break Experiment. The RELAP4/MOD6 Update 4 and RELAP4/MOD7 computer codes were used to analyze Semiscale Test S-07-10B, a 10% communicative cold leg break experiment. The Semiscale Mod-3 system utilized an electrically heated simulated core operating at a power level of 1.94 MW. The initial system pressure and temperature in the upper plenum were 2276 psia and 604°F, respectively.
Chiavassa, S; Lemosquet, A; Aubineau-Lanièce, I; de Carlan, L; Clairand, I; Ferrer, L; Bardiès, M; Franck, D; Zankl, M
2005-01-01
This paper aims at comparing dosimetric assessments performed with three Monte Carlo codes: EGS4, MCNP4c2 and MCNPX2.5e, using a realistic voxel phantom, namely the Zubal phantom, in two configurations of exposure. The first deals with an external irradiation corresponding to the example of a radiological accident. The results are obtained using the EGS4 and MCNP4c2 codes and expressed in terms of the mean absorbed dose (in Gy per source particle) for brain, lungs, liver and spleen. The second deals with an internal exposure corresponding to the treatment of a medullary thyroid cancer by a 131I-labelled radiopharmaceutical. The results are obtained by EGS4 and MCNPX2.5e and compared in terms of S-values (expressed in mGy per kBq and per hour) for liver, kidney, whole body and thyroid. The results of these two studies are presented, and differences between the codes are analysed and discussed.
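The S-value units above imply the standard MIRD-style dose estimate, dose = S-value × cumulated activity. A minimal sketch (the formula is standard MIRD; all numbers are hypothetical, not from the paper):

```python
def absorbed_dose_mGy(s_value_mGy_per_kBq_h, cumulated_activity_kBq_h):
    """MIRD-style estimate: dose = S-value x cumulated activity in the source region."""
    return s_value_mGy_per_kBq_h * cumulated_activity_kBq_h

# Hypothetical numbers for illustration only:
s_thyroid = 2.0e-4   # S-value, mGy per kBq per hour
a_tilde = 5.0e4      # cumulated 131I activity, kBq.h
print(absorbed_dose_mGy(s_thyroid, a_tilde))  # -> 10.0 mGy
```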
Comparisons between MCNP, EGS4 and experiment for clinical electron beams.
Jeraj, R; Keall, P J; Ostwald, P M
1999-03-01
Understanding the limitations of Monte Carlo codes is essential in order to avoid systematic errors in simulations, and to suggest further improvement of the codes. MCNP and EGS4, Monte Carlo codes commonly used in medical physics, were compared and evaluated against electron depth dose data and experimental backscatter results obtained using clinical radiotherapy beams. Different physical models and algorithms used in the codes give significantly different depth dose curves and electron backscattering factors. The default version of MCNP calculates electron depth dose curves which are too penetrating. The MCNP results agree better with experiment if the ITS-style energy-indexing algorithm is used. EGS4 underpredicts electron backscattering for high-Z materials. The results slightly improve if optimal PRESTA-I parameters are used. MCNP simulates backscattering well even for high-Z materials. To conclude the comparison, a timing study was performed. EGS4 is generally faster than MCNP and use of a large number of scoring voxels dramatically slows down the MCNP calculation. However, use of a large number of geometry voxels in MCNP only slightly affects the speed of the calculation.
NASA Astrophysics Data System (ADS)
Cook, S. J.
2009-05-01
Aquarius is a Windows application that models fluid flow and heat transport under conditions in which fluid buoyancy can significantly impact patterns and magnitudes of fluid flow. The package is designed as a visualization tool through which users can examine flow systems in a range of environments, from low-temperature aquifers to regions with elevated P-T regimes such as deep sedimentary basins, hydrothermal systems, and contact thermal aureoles. The package includes 4 components: (1) A finite-element mesh generator/assembler capable of representing complex geologic structures. Left-hand, right-hand and alternating linear triangles can be mixed within the mesh. Planar horizontal, planar vertical and cylindrical vertical coordinate sections are supported. (2) A menu-selectable system for setting properties and boundary/initial conditions. The design retains mathematical terminology for all input parameters such as scalars (e.g., porosity), tensors (e.g., permeability), and boundary/initial conditions (e.g., fixed potential). This makes the package an effective instructional aid by linking model requirements with the underlying mathematical concepts of partial differential equations and the solution logic of boundary/initial value problems. (3) Solution algorithms for steady-state and time-transient fluid flow/heat transport problems. For all models, the nonlinear global matrix equations are solved sequentially using over-relaxation techniques. Matrix storage design allows for large (e.g., 20,000-element) models to run efficiently on a typical PC. (4) A plotting system that supports contouring nodal data (e.g., head), vector plots for flux data (e.g., specific discharge), and colour gradient plots for elemental data (e.g., porosity), water properties (e.g., density), and performance measures (e.g., Peclet numbers). Display graphics can be printed or saved in standard graphic formats (e.g., jpeg). This package was developed from procedural codes in C written originally to model the hydrothermal flow system responsible for contact metamorphism of Utah's Alta Stock (Cook et al., AJS 1997). These codes were reprogrammed in Microsoft C# to take advantage of object-oriented design and the capabilities of Microsoft's .NET framework. The package is available at no cost by e-mail request from the author.
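The abstract notes that the nonlinear global matrix equations are solved with over-relaxation. A minimal sketch of successive over-relaxation (SOR) for a linear system (a textbook illustration, not the package's C# source):

```python
import numpy as np

def sor(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Solve A x = b by successive over-relaxation (SOR)."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(len(b)):
            # Use already-updated entries x[:i], older entries x_old[i+1:].
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.max(np.abs(x - x_old)) < tol:
            return x
    return x

# Small diagonally dominant system, for which SOR converges.
A = np.array([[4.0, -1.0, 0.0], [-1.0, 4.0, -1.0], [0.0, -1.0, 4.0]])
b = np.array([2.0, 4.0, 10.0])
print(sor(A, b))
```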
Decoding the disease-associated proteins encoded in the human chromosome 4.
Chen, Lien-Chin; Liu, Mei-Ying; Hsiao, Yung-Chin; Choong, Wai-Kok; Wu, Hsin-Yi; Hsu, Wen-Lian; Liao, Pao-Chi; Sung, Ting-Yi; Tsai, Shih-Feng; Yu, Jau-Song; Chen, Yu-Ju
2013-01-04
Chromosome 4 is the fourth largest chromosome, containing approximately 191 megabases (~6.4% of the human genome) with 757 protein-coding genes. A number of marker genes for many diseases have been found on this chromosome, including genes linked to genetic diseases (e.g., hepatocellular carcinoma) and to biomedical research areas (cardiac system, aging, metabolic disorders, immune system, cancer and stem cells), such as oncogenes and growth factors. As a pilot study for the chromosome 4-centric human proteome project (Chr 4-HPP), we present here a systematic analysis of the disease association, protein isoforms, and coding single nucleotide polymorphisms of these 757 protein-coding genes and their experimental evidence at the protein level. We also describe how the findings from the chromosome 4 project might be used to drive biomarker discovery and validation studies in disease-oriented projects, using the examples of secretomic and membrane proteomic approaches in cancer research. By integrating with cancer cell secretomes and several other existing databases in the public domain, we identified 141 chromosome 4-encoded proteins as cancer cell-secretable/sheddable proteins. Additionally, we identified 54 chromosome 4-encoded proteins that have been classified as cancer-associated proteins with successful selected or multiple reaction monitoring (SRM/MRM) assays developed. From literature annotation and topology analysis, 271 proteins were recognized as membrane proteins, while 27.9% of the 757 proteins do not have any experimental evidence at the protein level. In summary, the analysis revealed that chromosome 4 is a rich resource of cancer-associated proteins for biomarker verification projects and for drug target discovery projects.
Kotte, Amelia; Hill, Kaitlin A; Mah, Albert C; Korathu-Larson, Priya A; Au, Janelle R; Izmirian, Sonia; Keir, Scott S; Nakamura, Brad J; Higa-McMillan, Charmaine K
2016-11-01
This study examines implementation facilitators and barriers of a statewide roll-out of a measurement feedback system (MFS) in a youth public mental health system. 76% of all state care coordinators (N = 47) completed interviews, which were coded via content analysis until saturation. Facilitators (e.g., recognition of the MFS's clinical utility) and barriers (e.g., concerns about the MFS's reliability and validity) emerged, paralleling the Exploration, Adoption/Preparation, Implementation, and Sustainment framework outlined by Aarons et al. (Adm Policy Mental Health Mental Health Serv Res, 38:4-23, 2011). Sustainment efforts may leverage innovation fit, individual adopter, and system-related facilitators.
Emergency general surgery: definition and estimated burden of disease.
Shafi, Shahid; Aboutanos, Michel B; Agarwal, Suresh; Brown, Carlos V R; Crandall, Marie; Feliciano, David V; Guillamondegui, Oscar; Haider, Adil; Inaba, Kenji; Osler, Turner M; Ross, Steven; Rozycki, Grace S; Tominaga, Gail T
2013-04-01
Acute care surgery encompasses trauma, surgical critical care, and emergency general surgery (EGS). While the first two components are well defined, the scope of EGS practice remains unclear. This article describes the work of the American Association for the Surgery of Trauma to define EGS. A total of 621 unique International Classification of Diseases-9th Rev. (ICD-9) diagnosis codes were identified using billing data (calendar year 2011) from seven large academic medical centers that practice EGS. A modified Delphi methodology was used by the American Association for the Surgery of Trauma Committee on Severity Assessment and Patient Outcomes to review these codes and achieve consensus on the definition of primary EGS diagnosis codes. National Inpatient Sample data from 2009 were used to develop a national estimate of EGS burden of disease. Several unique ICD-9 codes were identified as primary EGS diagnoses. These encompass a wide spectrum of general surgery practice, including upper and lower gastrointestinal tract, hepatobiliary and pancreatic disease, soft tissue infections, and hernias. National Inpatient Sample estimates revealed over 4 million inpatient encounters nationally in 2009 for EGS diseases. This article provides the first list of ICD-9 diagnoses codes that define the scope of EGS based on current clinical practices. These findings have wide implications for EGS workforce training, access to care, and research.
Maljovec, D.; Liu, S.; Wang, B.; ...
2015-07-14
Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
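A minimal sketch of the DPRA pattern described above: sample stochastic parameters, run a system simulator once per sample, then cluster the resulting scenarios. The simulator here is a toy stand-in (the RELAP/RAVEN codes are not reproduced), and all parameter distributions are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def toy_simulator(failure_time, recovery_time):
    """Stand-in for one system-simulator run; returns a scenario feature vector."""
    overheat = max(0.0, recovery_time - failure_time)  # toy damage proxy
    return [failure_time, recovery_time, overheat]

# Stochastic elements: sampled component failure/recovery times (hypothetical).
scenarios = np.array([toy_simulator(rng.exponential(10.0), rng.exponential(12.0))
                      for _ in range(1000)])

# Organize the scenario set with a traditional clustering technique.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scenarios)
print(np.bincount(labels))  # scenario counts per cluster
```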
Performance Analysis of Hybrid ARQ Protocols in a Slotted Code Division Multiple-Access Network
1989-08-01
[Only fragments of this report abstract survived extraction, interleaved with reference-list residue:] ...achieved by using a low-rate (r = 0.5), high-constraint-length (e.g., 32) punctured convolutional code. Code puncturing provides for a variable-rate code... ...investigated the use of convolutional codes in Type II Hybrid ARQ protocols... (Reference residue: J. Hagenauer, "Rate Compatible Punctured Convolutional Codes," in Proc. Int. Conf. Commun., 21.4.1-21.4.5, 1987.)
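A minimal sketch of the puncturing idea in the fragments above: deleting selected output bits of a rate-1/2 mother code according to a puncturing pattern raises the effective rate (the pattern below is illustrative, not from the report):

```python
def puncture(coded_bits, pattern=(1, 1, 1, 0)):
    """Keep only the coded bits where the (cyclically repeated) pattern is 1.

    Applied to a rate-1/2 mother code, pattern (1,1,1,0) keeps 3 of every
    4 output bits, giving an effective rate of 2/3.
    """
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

coded = [1, 0, 1, 1, 0, 0, 1, 1]  # 8 output bits from 4 input bits (rate 1/2)
sent = puncture(coded)            # 6 bits survive -> 4/6 = rate 2/3
print(sent)
```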
ERIC Educational Resources Information Center
Ishii, David N.
2011-01-01
The purpose of this paper is to explore the use of a new coding system that incorporates the various types of metatalk that occurred during paired learners' engagement in a consciousness-raising task. On the basis of previous studies, metalanguage (e.g. with or without terminology), knowledge sources (e.g. intuition), and verbalisation strategies…
A Picture is Worth 1,000 Words. The Use of Clinical Images in Electronic Medical Records.
Ai, Angela C; Maloney, Francine L; Hickman, Thu-Trang; Wilcox, Allison R; Ramelson, Harley; Wright, Adam
2017-07-12
To understand how clinicians utilize image uploading tools in a homegrown electronic health record (EHR) system, a content analysis of patient notes containing non-radiological images from the EHR was conducted. Images from 4,000 random notes from July 1, 2009 - June 30, 2010 were reviewed and manually coded. Codes were assigned to four properties of the image: (1) image type, (2) role of image uploader (e.g. MD, NP, PA, RN), (3) practice type (e.g. internal medicine, dermatology, ophthalmology), and (4) image subject. 3,815 images from image-containing notes stored in the EHR were reviewed and manually coded. Of those images, 32.8% were clinical and 66.2% were non-clinical. The most common types of clinical images were photographs (38.0%), diagrams (19.1%), and scanned documents (14.4%). MDs uploaded 67.9% of clinical images, followed by RNs with 10.2% and genetic counselors with 6.8%. Dermatology (34.9%), ophthalmology (16.1%), and general surgery (10.8%) uploaded the most clinical images. The content of clinical images referencing body parts varied, with 49.8% of those images focusing on the head and neck region, 15.3% focusing on the thorax, and 13.8% focusing on the lower extremities. The diversity of image types, content, and uploaders within a homegrown EHR system reflected the versatility and importance of the image uploading tool. Understanding how users utilize image uploading tools in a clinical setting highlights important considerations for designing better EHR tools and the importance of interoperability between EHR systems and other health technology.
Subversion: The Neglected Aspect of Computer Security.
1980-06-01
[Only fragments of this report survived extraction:] ...fundamentally flawed. Recall from mathematics that it is sufficient to disprove a proposition (e.g., that a system is secure) by showing only one example where... ...made. This lack of protection is one of the fundamental reasons why the subversion of computer systems can be so effective. Later chapters will amplify... ...an area of code that will not be liable to revision. Operating system software, as pointed out earlier, is often riddled with design errors or subject...
From Requirements to Code: Issues and Learning in IS Students' Systems Development Projects
ERIC Educational Resources Information Center
Scott, Elsje
2008-01-01
The Computing Curricula (2005) place Information Systems (IS) at the intersection of exact sciences (e.g. General Systems Theory), technology (e.g. Computer Science), and behavioral sciences (e.g. Sociology). This presents particular challenges for teaching and learning, as future IS professionals need to be equipped with a wide range of…
Wang, R; Li, X A
2001-02-01
The dose parameters for the beta-particle-emitting 90Sr/90Y source for intravascular brachytherapy (IVBT) have been calculated by different investigators. At larger distances from the source, noticeable differences are seen in these parameters calculated using different Monte Carlo codes. The purpose of this work is to quantify as well as to understand these differences. We have compared a series of calculations using the EGS4, EGSnrc, and MCNP Monte Carlo codes. Data calculated and compared include the depth dose curve for a broad parallel beam of electrons, and radial dose distributions for point electron sources (monoenergetic or polyenergetic) and for a real 90Sr/90Y source. For the 90Sr/90Y source, the doses at the reference position (2 mm radial distance) calculated by the three codes agree within 2%. However, the differences between the doses calculated by the three codes can be over 20% in the radial distance range of interest in IVBT. The difference increases with radial distance from the source, and reaches 30% at the tail of the dose curve. These differences may be partially attributed to the different multiple scattering theories and Monte Carlo models for electron transport adopted in the three codes. Doses calculated by the EGSnrc code are more accurate than those by EGS4. The two calculations agree within 5% for radial distances <6 mm.
Simulation of HEAO 3 Background
2007-01-01
[Equation lost to extraction; the surviving definitions read: where i is a stable isotope in volume V, a_i is its fractional abundance, ...] Reference residue: [9] National Nuclear Data Center (NNDC), Brookhaven National Laboratory, Brookhaven, NY; [10] W. Nelson et al., "The EGS4 code system", SLAC-Report-265.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2011-01-01
This paper provides a review of the hybrid (Monte Carlo/deterministic) radiation transport methods and codes used at the Oak Ridge National Laboratory and examples of their application for increasing the efficiency of real-world, fixed-source Monte Carlo analyses. The two principal hybrid methods are (1) Consistent Adjoint Driven Importance Sampling (CADIS) for optimization of a localized detector (tally) region (e.g., flux, dose, or reaction rate at a particular location) and (2) Forward Weighted CADIS (FW-CADIS) for optimizing distributions (e.g., mesh tallies over all or part of the problem space) or multiple localized detector regions (e.g., simultaneous optimization of two or more localized tally regions). The two methods have been implemented and automated in both the MAVRIC sequence of SCALE 6 and ADVANTG, a code that works with the MCNP code. As implemented, the methods utilize the results of approximate, fast-running 3-D discrete ordinates transport calculations (with the Denovo code) to generate consistent space- and energy-dependent source and transport (weight windows) biasing parameters. These methods and codes have been applied to many relevant and challenging problems, including calculations of PWR ex-core thermal detector response, dose rates throughout an entire PWR facility, site boundary dose from arrays of commercial spent fuel storage casks, radiation fields for criticality accident alarm system placement, and detector response for special nuclear material detection scenarios and nuclear well-logging tools. Substantial computational speed-ups, generally O(10^2) to O(10^4), have been realized for all applications to date. This paper provides a brief review of the methods, their implementation, results of their application, and current development activities, as well as a considerable list of references for readers seeking more information about the methods and/or their applications.
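A minimal sketch of the CADIS idea named above, under its standard formulation: bias the source by the adjoint (importance) flux and set weight-window centers inversely proportional to it. The arrays and normalization here are illustrative, not the MAVRIC/ADVANTG implementation:

```python
import numpy as np

def cadis_parameters(source_q, adjoint_flux):
    """Consistent Adjoint Driven Importance Sampling (sketch).

    Biased source ~ q * phi_dagger (normalized to a pdf); weight-window
    centers w = R / phi_dagger, with R the estimated detector response,
    so that particle weight times importance stays roughly constant.
    """
    response = np.sum(source_q * adjoint_flux)           # R = <q, phi+>
    biased_source = source_q * adjoint_flux / response   # sampling pdf
    ww_centers = response / adjoint_flux                 # weight-window centers
    return biased_source, ww_centers

q = np.array([0.5, 0.3, 0.2])            # illustrative source distribution
phi_dag = np.array([1e-3, 1e-2, 1e-1])   # illustrative adjoint flux per cell
print(cadis_parameters(q, phi_dag))
```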
Utter, Garth H; Miller, Preston R; Mowery, Nathan T; Tominaga, Gail T; Gunter, Oliver; Osler, Turner M; Ciesla, David J; Agarwal, Suresh K; Inaba, Kenji; Aboutanos, Michel B; Brown, Carlos V R; Ross, Steven E; Crandall, Marie L; Shafi, Shahid
2015-05-01
The American Association for the Surgery of Trauma (AAST) recently established a grading system for uniform reporting of anatomic severity of several emergency general surgery (EGS) diseases. There are five grades of severity for each disease, ranging from I (lowest severity) to V (highest severity). However, the grading process requires manual chart review. We sought to evaluate whether International Classification of Diseases, 9th and 10th Revisions, Clinical Modification (ICD-9-CM, ICD-10-CM) codes might allow estimation of AAST grades for EGS diseases. The Patient Assessment and Outcomes Committee of the AAST reviewed all available ICD-9-CM and ICD-10-CM diagnosis codes relevant to 16 EGS diseases with available AAST grades. We then matched grades for each EGS disease with one or more ICD codes. We used the Official Coding Guidelines for ICD-9-CM and ICD-10-CM and the American Hospital Association's "Coding Clinic for ICD-9-CM" for coding guidance. The ICD codes did not allow for matching all five AAST grades of severity for each of the 16 diseases. With ICD-9-CM, six diseases mapped into four categories of severity (instead of five), another six diseases into three categories of severity, and four diseases into only two categories of severity. With ICD-10-CM, five diseases mapped into four categories of severity, seven diseases into three categories, and four diseases into two categories. Two diseases mapped into discontinuous categories of grades (two in ICD-9-CM and one in ICD-10-CM). Although resolution is limited, ICD-9-CM and ICD-10-CM diagnosis codes might have some utility in roughly approximating the severity of the AAST grades in the absence of more precise information. These ICD mappings should be validated and refined before widespread use to characterize EGS disease severity. In the long-term, it may be desirable to develop alternatives to ICD-9-CM and ICD-10-CM codes for routine collection of disease severity characteristics.
Shaping electromagnetic waves using software-automatically-designed metasurfaces.
Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie
2017-06-15
We present a fully digital procedure of designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discretely random arrangement of micro coding units. By combining an optimization algorithm and commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units with a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation with specific directions. Two complicated functional metasurfaces with circularly- and elliptically-shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing excellent performance of the automatic designs by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.
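A minimal sketch relating a 1-bit coding pattern to its far-field beams via the array-factor approximation (far field ~ 2-D Fourier transform of the aperture phase). This is a standard textbook approximation, not the commercial-software optimization used in the paper, and the pattern is random rather than optimized:

```python
import numpy as np

# 1-bit coding: digit 0 -> 0 phase, digit 1 -> pi phase on each macro unit.
rng = np.random.default_rng(1)
coding = rng.integers(0, 2, size=(16, 16))   # illustrative coding matrix
aperture = np.exp(1j * np.pi * coding)       # reflection phase per macro unit

# Far-field array factor ~ zero-padded 2-D FFT of the aperture.
pattern = np.abs(np.fft.fftshift(np.fft.fft2(aperture, s=(256, 256)))) ** 2
peak_uv = np.unravel_index(np.argmax(pattern), pattern.shape)
print("strongest beam at (u, v) bin:", peak_uv)
```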
egs_brachy: a versatile and fast Monte Carlo code for brachytherapy
NASA Astrophysics Data System (ADS)
Chamberland, Marc J. P.; Taylor, Randle E. P.; Rogers, D. W. O.; Thomson, Rowan M.
2016-12-01
egs_brachy is a versatile and fast Monte Carlo (MC) code for brachytherapy applications. It is based on the EGSnrc code system, enabling simulation of photons and electrons. Complex geometries are modelled using the EGSnrc C++ class library and egs_brachy includes a library of geometry models for many brachytherapy sources, in addition to eye plaques and applicators. Several simulation efficiency enhancing features are implemented in the code. egs_brachy is benchmarked by comparing TG-43 source parameters of three source models to previously published values. 3D dose distributions calculated with egs_brachy are also compared to ones obtained with the BrachyDose code. Well-defined simulations are used to characterize the effectiveness of many efficiency improving techniques, both as an indication of the usefulness of each technique and to find optimal strategies. Efficiencies and calculation times are characterized through single source simulations and simulations of idealized and typical treatments using various efficiency improving techniques. In general, egs_brachy shows agreement within uncertainties with previously published TG-43 source parameter values. 3D dose distributions from egs_brachy and BrachyDose agree at the sub-percent level. Efficiencies vary with radionuclide and source type, number of sources, phantom media, and voxel size. The combined effects of efficiency-improving techniques in egs_brachy lead to short calculation times: simulations approximating prostate and breast permanent implant (both with (2 mm)³ voxels) and eye plaque (with (1 mm)³ voxels) treatments take between 13 and 39 s, on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core, to achieve 2% average statistical uncertainty on doses within the PTV. egs_brachy will be released as free and open source software to the research community.
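For reference, the TG-43 source parameters benchmarked above enter the standard AAPM TG-43 dose-rate formalism (line-source form; this equation is from the TG-43 report, not from the egs_brachy paper itself):

```latex
\dot{D}(r,\theta) \;=\; S_K \,\Lambda\,
  \frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\; g_L(r)\, F(r,\theta),
\qquad r_0 = 1\ \text{cm},\quad \theta_0 = \pi/2,
```

where S_K is the air-kerma strength, Λ the dose-rate constant, G_L the line-source geometry function, g_L(r) the radial dose function, and F(r,θ) the 2D anisotropy function.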
NASA Technical Reports Server (NTRS)
Stevens, N. J.
1979-01-01
Cases where the charged-particle environment acts on the spacecraft (e.g., spacecraft charging phenomena) and cases where a system on the spacecraft causes the interaction (e.g., high voltage space power systems) are considered. Both categories were studied in ground simulation facilities to understand the processes involved and to measure the pertinent parameters. Computer simulations are based on the NASA Charging Analyzer Program (NASCAP) code. Analytical models are developed in this code and verified against the experimental data. Extrapolations from the small test samples to space conditions are made with this code. Typical results from laboratory and computer simulations are presented for both types of interactions. Extrapolations from these simulations to performance in space environments are discussed.
Benchmark of PENELOPE code for low-energy photon transport: dose comparisons with MCNP4 and EGS4.
Ye, Sung-Joon; Brezovich, Ivan A; Pareek, Prem; Naqvi, Shahid A
2004-02-07
The expanding clinical use of low-energy photon emitting 125I and 103Pd seeds in recent years has led to renewed interest in their dosimetric properties. Numerous papers pointed out that higher accuracy could be obtained in Monte Carlo simulations by utilizing newer libraries for the low-energy photon cross-sections, such as XCOM and EPDL97. The recently developed PENELOPE 2001 Monte Carlo code is user friendly and incorporates photon cross-section data from EPDL97. The code has been verified for clinical dosimetry of high-energy electron and photon beams, but has not yet been tested at low energies. In the present work, we have benchmarked the PENELOPE code for 10-150 keV photons. We computed radial dose distributions from 0 to 10 cm in water at photon energies of 10-150 keV using both PENELOPE and MCNP4C with either the DLC-146 or DLC-200 cross-section libraries, assuming a point source located at the centre of a cylinder 30 cm in diameter and 20 cm in length. Throughout the energy range of simulated photons (except for 10 keV), PENELOPE agreed within statistical uncertainties (at worst +/- 5%) with MCNP/DLC-146 in the entire region of 1-10 cm and with published EGS4 data up to 5 cm. The dose at 1 cm (or dose rate constant) from PENELOPE agreed with MCNP/DLC-146 and EGS4 data within approximately +/- 2% in the range of 20-150 keV, while MCNP/DLC-200 produced values up to 9% lower in the range of 20-100 keV than PENELOPE or the other codes. However, the differences among the four datasets became negligible above 100 keV.
12 CFR Appendix A to Subpart B of... - Commentary
Code of Federal Regulations, 2014 CFR
2014-01-01
...,” as defined in section 4A-501(b) of Article 4A, Funds Transfers, of the Uniform Commercial Code (UCC... refers to other provisions of the Uniform Commercial Code, e.g., definitions in article 1 of the UCC, these other provisions of the UCC, as approved by the National Conference of Commissioners on Uniform...
Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.
Uzun, Vassilya; Bilgin, Sami
2016-01-01
For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.
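A minimal sketch of generating such a tag with the third-party `qrcode` package (pip install qrcode[pil]); the URL, tag ID, and filename are hypothetical, and this is not the authors' implementation:

```python
import qrcode  # third-party package: pip install "qrcode[pil]"

# Encode a (hypothetical) link to the member's page on the identity website.
tag = qrcode.make("https://qr-id.example/tags/TR-000123")
tag.save("qr_identity_tag.png")  # printed on a bracelet, necklace, or ID card
```

Any smartphone QR scanner resolves the code to the website, where authorization rules decide how much of the record is shown.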
A coded tracking telemetry system
Howey, P.W.; Seegar, W.S.; Fuller, M.R.; Titus, K.; Amlaner, Charles J.
1989-01-01
We describe the general characteristics of an automated radio telemetry system designed to operate for prolonged periods on a single frequency. Each transmitter sends a unique coded signal to a receiving system that decodes and records only the appropriate, pre-programmed codes. A record of the time of each reception is stored on diskettes in a micro-computer. This system enables continuous monitoring of infrequent signals (e.g. one per minute or one per hour), thus extending operational life or allowing size reduction of the transmitter, compared to conventional wildlife telemetry. Furthermore, when using unique codes transmitted on a single frequency, biologists can monitor many individuals without exceeding the radio frequency allocations for wildlife.
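A minimal sketch of the receiver-side logic described above: accept only pre-programmed codes on the shared frequency and timestamp each reception. The codes and animal IDs are hypothetical:

```python
import datetime

PROGRAMMED_CODES = {0x2A: "falcon-01", 0x3D: "falcon-02"}  # code -> animal ID
log = []  # in the original system, written to diskette by a micro-computer

def on_reception(code):
    """Record only receptions whose code matches a pre-programmed transmitter."""
    if code in PROGRAMMED_CODES:
        now = datetime.datetime.now(datetime.timezone.utc)
        log.append((PROGRAMMED_CODES[code], now))

for received in (0x2A, 0x11, 0x3D):  # 0x11 is ignored as an unknown code
    on_reception(received)
print(log)
```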
VeryVote: A Voter Verifiable Code Voting System
NASA Astrophysics Data System (ADS)
Joaquim, Rui; Ribeiro, Carlos; Ferreira, Paulo
Code voting is a technique used to address the secure platform problem of remote voting. A code voting system consists of secretly sending, e.g. by mail, code sheets to voters that map their choices to entry codes in their ballot. While voting, the voter uses the code sheet to know what code to enter in order to vote for a particular candidate. In effect, the voter does the vote encryption and, since no malicious software on the PC has access to the code sheet, it is not able to change the voter's intention. However, without compromising the voter's privacy, the vote codes are not enough to prove that the vote is recorded and counted as cast by the election server.
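A minimal sketch of generating one voter's code sheet. The candidate list and code format are hypothetical, and real systems (including VeryVote) add reply codes and cryptographic audit trails on top of this:

```python
import secrets

CANDIDATES = ["Alice", "Bob", "Carol"]  # hypothetical ballot

def make_code_sheet(num_digits=8):
    """Map each candidate to a unique random entry code for one voter."""
    codes = set()
    while len(codes) < len(CANDIDATES):
        codes.add("".join(secrets.choice("0123456789") for _ in range(num_digits)))
    return dict(zip(CANDIDATES, sorted(codes)))

sheet = make_code_sheet()  # mailed secretly to the voter
print(sheet)               # the voter types the code next to their chosen candidate
```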
Acoustically Forced Coaxial Hydrogen/Liquid Oxygen Jet Flames
2016-05-15
[Briefing-chart form residue; recoverable fragments:] ...propellants be stored in condensed form - e.g., kerosene, liquid oxygen in rockets. ...Combustion systems can no longer be designed to meet modern...
A Novel Code System for Revealing Sources of Students' Difficulties with Stoichiometry
ERIC Educational Resources Information Center
Gulacar, Ozcan; Overton, Tina L.; Bowman, Charles R.; Fynewever, Herb
2013-01-01
A coding scheme is presented and used to evaluate solutions of seventeen students working on twenty-five stoichiometry problems in a think-aloud protocol. The stoichiometry problems are evaluated as a series of sub-problems (e.g., empirical formulas, mass percent, or balancing chemical equations), and the coding scheme was used to categorize each…
Verification and validation of RADMODL Version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kimball, K.D.
1993-03-01
RADMODL is a system of linked computer codes designed to calculate the radiation environment following an accident in which nuclear materials are released. The RADMODL code and the corresponding Verification and Validation (V&V) calculations (Appendix A) were developed for Westinghouse Savannah River Company (WSRC) by EGS Corporation (EGS). Each module of RADMODL is an independent code and was verified separately. The full system was validated by comparing the output of the various modules with the corresponding output of a previously verified version of the modules. The results of the verification and validation tests show that RADMODL correctly calculates the transport of radionuclides and radiation doses. As a result of this verification and validation effort, RADMODL Version 1.0 is certified for use in calculating the radiation environment following an accident.
NASA Astrophysics Data System (ADS)
Mense, Mario; Schindelhauer, Christian
We introduce the Read-Write-Coding-System (RWC) - a very flexible class of linear block codes that generate efficient and flexible erasure codes for storage networks. In particular, given a message x of k symbols and a codeword y of n symbols, an RW code defines additional parameters k ≤ r, w ≤ n that offer enhanced possibilities to adjust the fault-tolerance capability of the code. More precisely, an RWC provides linear (n,k,d)-codes that have (a) minimum distance d = n - r + 1 for any two codewords, and (b) for each codeword there exists a codeword for each other message with distance of at most w. Furthermore, depending on the values r, w and the code alphabet, different block codes such as parity codes (e.g. RAID 4/5) or Reed-Solomon (RS) codes (if r = k and thus, w = n) can be generated. In storage networks in which I/O accesses are very costly and redundancy is crucial, this flexibility has considerable advantages as r and w can optimally be adapted to read- or write-intensive applications; only w symbols must be updated if the message x changes completely, which is different from other codes that always need to rewrite y completely as x changes. In this paper, we first state a tight lower bound and basic conditions for all RW codes. Furthermore, we introduce special RW codes in which all mentioned parameters are adjustable even online, that is, those RW codes are adaptive to changing demands. Finally, we point out some useful properties regarding safety and security of the stored data.
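A minimal sketch of the parity special case mentioned above (RAID-4/5 style, over GF(2) via XOR), not the general RW construction: k data symbols are stored as n = k + 1 symbols, any r = k stored symbols recover an erased one, and a single-symbol update writes only w = 2 stored symbols:

```python
from functools import reduce
from operator import xor

def encode(data):
    """Store n = k + 1 symbols: the data plus one XOR parity symbol."""
    return data + [reduce(xor, data)]

def update(codeword, i, new_value):
    """Write w = 2 symbols: data[i] and the parity, via the XOR delta."""
    delta = codeword[i] ^ new_value
    codeword[i] = new_value
    codeword[-1] ^= delta
    return codeword

def recover(codeword, missing):
    """Read the other r = k symbols to rebuild one erased symbol."""
    return reduce(xor, (s for j, s in enumerate(codeword) if j != missing))

y = encode([5, 9, 12])        # k = 3 data symbols, n = 4 stored symbols
y = update(y, 0, 7)           # only 2 of the 4 stored symbols change
print(recover(y, 1) == y[1])  # True: the erased symbol is reconstructed
```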
The Environment-Power System Analysis Tool development program [for spacecraft power supplies]
NASA Technical Reports Server (NTRS)
Jongeward, Gary A.; Kuharski, Robert A.; Kennedy, Eric M.; Wilcox, Katherine G.; Stevens, N. John; Putnam, Rand M.; Roche, James C.
1989-01-01
The Environment Power System Analysis Tool (EPSAT) is being developed to provide engineers with the ability to assess the effects of a broad range of environmental interactions on space power systems. A unique user-interface-data-dictionary code architecture oversees a collection of existing and future environmental modeling codes (e.g., neutral density) and physical interaction models (e.g., sheath ionization). The user-interface presents the engineer with tables, graphs, and plots which, under supervision of the data dictionary, are automatically updated in response to parameter change. EPSAT thus provides the engineer with a comprehensive and responsive environmental assessment tool and the scientist with a framework into which new environmental or physical models can be easily incorporated.
Kivisalu, Trisha M; Lewey, Jennifer H; Shaffer, Thomas W; Canfield, Merle L
2016-01-01
The Rorschach Performance Assessment System (R-PAS) aims to provide an evidence-based approach to administration, coding, and interpretation of the Rorschach Inkblot Method (RIM). R-PAS analyzes individualized communications given by respondents to each card to code a wide pool of possible variables. Due to the large number of possible codes that can be assigned to these responses, it is important to consider the concordance rates among different assessors. This study investigated interrater reliability for R-PAS protocols. Data were analyzed from a nonpatient convenience sample of 50 participants who were recruited through networking, local marketing, and advertising efforts from January 2013 through October 2014. Blind recoding was used, and discrepancies between the initial and blind coders' ratings were analyzed for each variable with SPSS, yielding percent agreement and intraclass correlation values. Data for Location, Space, Contents, Synthesis, Vague, Pairs, Form Quality, Populars, Determinants, and Cognitive and Thematic codes are presented. Rates of agreement for 1,168 responses were higher for simpler coding (e.g., Location), whereas agreement was lower for more complex codes (e.g., Cognitive and Thematic codes). Overall, concordance rates achieved good to excellent agreement. Results suggest R-PAS is an effective method with high interrater reliability supporting its empirical basis.
Fracturing And Liquid CONvection
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-02-29
FALCON has been developed to enable simulation of the tightly coupled fluid-rock behavior in hydrothermal and engineered geothermal system (EGS) reservoirs, targeting the dynamics of fracture stimulation, fluid flow, rock deformation, and heat transport in a single integrated code, with the ultimate goal of providing a tool that can be used to test the viability of EGS in the United States and worldwide. Reliable reservoir performance predictions of EGS systems require accurate and robust modeling of the coupled thermal-hydrological-mechanical processes.
Arthur-Farraj, Peter J; Morgan, Claire C; Adamowicz, Martyna; Gomez-Sanchez, Jose A; Fazal, Shaline V; Beucher, Anthony; Razzaghi, Bonnie; Mirsky, Rhona; Jessen, Kristjan R; Aitman, Timothy J
2017-09-12
Repair Schwann cells play a critical role in orchestrating nerve repair after injury, but the cellular and molecular processes that generate them are poorly understood. Here, we perform a combined whole-genome, coding and non-coding RNA and CpG methylation study following nerve injury. We show that genes involved in the epithelial-mesenchymal transition are enriched in repair cells, and we identify several long non-coding RNAs in Schwann cells. We demonstrate that the AP-1 transcription factor C-JUN regulates the expression of certain microRNAs in repair Schwann cells, in particular miR-21 and miR-34. Surprisingly, unlike during development, changes in CpG methylation after injury are limited and restricted to specific locations, such as enhancer regions of Schwann cell-specific genes (e.g., Nedd4l), and close to local enrichment of AP-1 motifs. These genetic and epigenomic changes broaden our mechanistic understanding of the formation of repair Schwann cells during peripheral nervous system tissue repair.
Superwide-angle coverage code-multiplexed optical scanner.
Riza, Nabeel A; Arain, Muzammil A
2004-05-01
A superwide-angle coverage code-multiplexed optical scanner is presented that has the potential to provide 4π sr coverage. As a proof-of-concept experiment, an angular scan range of 288° for six randomly distributed beams is demonstrated. The proposed scanner achieves its superwide coverage by exploiting a combination of phase-encoded transmission and reflection holography within an in-line hologram recording-retrieval geometry. The basic scanner unit consists of one phase-only digital-mode spatial light modulator for code entry (i.e., beam scan control) and a holographic material, from which we obtained what we believe is a first-of-its-kind extremely wide coverage, low component count, high speed (e.g., microsecond domain), and large aperture (e.g., >1-cm diameter) scanner.
Implementing TCP/IP and a socket interface as a server in a message-passing operating system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hipp, E.; Wiltzius, D.
1990-03-01
The UNICOS 4.3BSD network code and socket transport interface are the basis of an explicit network server for NLTSS, a message passing operating system on the Cray YMP. A BSD socket user library provides access to the network server using an RPC mechanism. The advantages of this server methodology are its modularity and extensibility to migrate to future protocol suites (e.g. OSI) and transport interfaces. In addition, the network server is implemented in an explicit multi-tasking environment to take advantage of the Cray YMP multi-processor platform. 19 refs., 5 figs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Marte
The research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: 1) Develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation. 2) Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator. 3) Perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport. 4) Test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production. 5) Develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: 1) A true-3D hydro-thermal fracturing computer code that is particularly suited to EGS, 2) Documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock, 3) Documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications, and 4) A database of monitoring data, with a focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.
High-Speed Digital Interferometry
NASA Technical Reports Server (NTRS)
De Vine, Glenn; Shaddock, Daniel A.; Ware, Brent; Spero, Robert E.; Wuchenich, Danielle M.; Klipstein, William M.; McKenzie, Kirk
2012-01-01
Digitally enhanced heterodyne interferometry (DI) is a laser metrology technique employing pseudo-random noise (PRN) codes phase-modulated onto an optical carrier. Combined with heterodyne interferometry, the PRN code is used to select individual signals, returning the inherent interferometric sensitivity determined by the optical wavelength. The signal isolation arises from the autocorrelation properties of the PRN code, enabling both rejection of spurious signals (e.g., from scattered light) and multiplexing capability using a single metrology system. The minimum separation of optical components is determined by the wavelength of the PRN code.
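To make the autocorrelation-based signal selection concrete, the following sketch (not from the paper; the code length, amplitudes, and signal mix are illustrative assumptions) tags two superposed signals with independent PRN codes and recovers each one by despreading:

    import numpy as np

    rng = np.random.default_rng(0)
    N = 4096                                  # chips per code (illustrative)
    prn1 = rng.choice([-1, 1], N)             # two independent +/-1 PRN codes
    prn2 = rng.choice([-1, 1], N)

    sig = 0.7 * prn1 + 0.3 * prn2             # two code-tagged signals superposed

    # Despreading: multiply by the chosen code and average. The wanted term
    # survives; the other code's contribution averages toward ~1/sqrt(N),
    # which is the isolation mechanism the abstract describes.
    print(np.mean(sig * prn1))                # ~0.7: selects signal 1
    print(np.mean(sig * prn2))                # ~0.3: selects signal 2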
Automated apparatus and method of generating native code for a stitching machine
NASA Technical Reports Server (NTRS)
Miller, Jeffrey L. (Inventor)
2000-01-01
A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
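The decision logic described above lends itself to a compact sketch. The following is a hypothetical illustration, not the patented implementation; the G-code mnemonics and the constraint test are invented placeholders:

    def generate_stitch_code(points, constraint_between):
        """Emit one command per stitch; insert a head-condition change
        whenever a constraint lies between consecutive stitch points."""
        code = []
        for here, nxt in zip(points, points[1:]):
            if constraint_between(here, nxt):
                code.append("M50 REORIENT_HEAD")     # hypothetical mnemonic
            code.append(f"G01 X{nxt[0]:.2f} Y{nxt[1]:.2f}")
        return code

    path = [(0, 0), (0, 5), (4, 5)]
    # Toy constraint: a direction change in X forces a head reorientation.
    print("\n".join(generate_stitch_code(path, lambda a, b: a[0] != b[0])))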
Digital Controller For Emergency Beacon
NASA Technical Reports Server (NTRS)
Ivancic, William D.
1990-01-01
Prototype digital controller intended for use in 406-MHz emergency beacon. Undergoing development according to international specifications, 406-MHz emergency beacon system includes satellites providing worldwide monitoring of beacons, with Doppler tracking to locate each beacon within 5 km. Controller turns beacon on and off and generates binary codes identifying source (e.g., ship, aircraft, person, or vehicle on land). Codes transmitted by phase modulation. Knowing code, monitor attempts to communicate with user and uses code information to dispatch rescue team appropriate to type and location of carrier.
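As an illustration of binary codes transmitted by phase modulation, the sketch below applies binary phase-shift keying to a hypothetical identification word; the frequencies are downscaled stand-ins (the real beacon transmits at 406 MHz) and the bit pattern is invented:

    import numpy as np

    ident_bits = [1, 0, 1, 1, 0, 0, 1]        # invented source-ID code word
    fc, bit_rate, fs = 10.0, 1.0, 1000.0      # downscaled demo frequencies (Hz)
    spb = int(fs / bit_rate)                  # samples per bit
    t = np.arange(len(ident_bits) * spb) / fs
    # BPSK: bit 1 -> phase 0, bit 0 -> phase pi, held for one bit period.
    phase = np.repeat([0.0 if b else np.pi for b in ident_bits], spb)
    carrier = np.cos(2 * np.pi * fc * t + phase)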
A Novel Method for Estimating Transgender Status Using Electronic Medical Records
Roblin, Douglas; Barzilay, Joshua; Tolsma, Dennis; Robinson, Brandi; Schild, Laura; Cromwell, Lee; Braun, Hayley; Nash, Rebecca; Gerth, Joseph; Hunkeler, Enid; Quinn, Virginia P.; Tangpricha, Vin; Goodman, Michael
2016-01-01
Background: We describe a novel algorithm for identifying transgender people and determining their male-to-female (MTF) or female-to-male (FTM) identity in electronic medical records (EMR) of an integrated health system. Methods: A SAS program scanned Kaiser Permanente Georgia EMR from January 2006 through December 2014 for relevant diagnostic codes, and presence of specific keywords (e.g., “transgender” or “transsexual”) in clinical notes. Eligibility was verified by review of de-identified text strings containing targeted keywords, and if needed, by an additional in-depth review of records. Once transgender status was confirmed, FTM or MTF identity was assessed using a second SAS program and another round of text string reviews. Results: Of 813,737 members, 271 were identified as possibly transgender: 137 through keywords only, 25 through diagnostic codes only, and 109 through both codes and keywords. Of these individuals, 185 (68%, 95% confidence interval [CI]: 62-74%) were confirmed as definitely transgender. The proportions (95% CIs) of definite transgender status among persons identified via keywords, diagnostic codes, and both were 45% (37-54%), 56% (35-75%), and 100% (96-100%), respectively. Of the 185 definitely transgender people, 99 (54%, 95% CI: 46-61%) were MTF, 84 (45%, 95% CI: 38-53%) were FTM. For two persons, gender identity remained unknown. Prevalence of transgender people (per 100,000 members) was 4.4 (95% CI: 2.6-7.4) in 2006 and 38.7 (95% CI: 32.4-46.2) in 2014. Conclusions: The proposed method of identifying candidates for transgender health studies is low cost and relatively efficient. It can be applied in other similar health care systems. PMID:26907539
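The two-stage screen is easy to prototype. Below is a hedged re-sketch of the first stage in Python rather than SAS; the diagnostic codes and keyword list are illustrative assumptions, not the study's actual specification:

    import re

    DX_CODES = {"302.85", "F64.1"}            # assumed example diagnostic codes
    KEYWORDS = re.compile(r"\b(transgender|transsexual)\b", re.I)

    def screen(member):
        """Return how a member was flagged, or None if not flagged."""
        by_code = bool(DX_CODES & set(member["dx_codes"]))
        by_text = any(KEYWORDS.search(note) for note in member["notes"])
        if not (by_code or by_text):
            return None
        return ("both" if by_code and by_text
                else "codes_only" if by_code else "keywords_only")

    print(screen({"dx_codes": ["F64.1"],
                  "notes": ["pt identifies as transgender"]}))   # -> both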
Interfacing MCNPX and McStas for simulation of neutron transport
NASA Astrophysics Data System (ADS)
Klinkby, Esben; Lauritzen, Bent; Nonbøl, Erik; Kjær Willendrup, Peter; Filges, Uwe; Wohlmuther, Michael; Gallmeier, Franz X.
2013-02-01
Simulations of the target-moderator-reflector system at spallation sources are conventionally carried out using Monte Carlo codes such as MCNPX (Waters et al., 2007 [1]) or FLUKA (Battistoni et al., 2007; Ferrari et al., 2005 [2,3]), whereas simulations of neutron transport from the moderator and the instrument response are performed by neutron ray tracing codes such as McStas (Lefmann and Nielsen, 1999; Willendrup et al., 2004, 2011a,b [4-7]). The coupling between the two simulation suites typically consists of providing analytical fits of MCNPX neutron spectra to McStas. This method is generally successful but has limitations, as it does not, for example, allow for re-entry of neutrons into the MCNPX regime. Previous work to resolve such shortcomings includes the introduction of McStas-inspired supermirrors in MCNPX. In the present paper different approaches to interface MCNPX and McStas are presented and applied to a simple test case. The direct coupling between MCNPX and McStas allows for more accurate simulations of, e.g., complex moderator geometries, backgrounds, and interference between beam lines, as well as shielding requirements along the neutron guides.
Rare and low-frequency coding variants alter human adult height
Marouli, Eirini; Graff, Mariaelisa; Medina-Gomez, Carolina; Lo, Ken Sin; Wood, Andrew R; Kjaer, Troels R; Fine, Rebecca S; Lu, Yingchang; Schurmann, Claudia; Highland, Heather M; Rüeger, Sina; Thorleifsson, Gudmar; Justice, Anne E; Lamparter, David; Stirrups, Kathleen E; Turcot, Valérie; Young, Kristin L; Winkler, Thomas W; Esko, Tõnu; Karaderi, Tugce; Locke, Adam E; Masca, Nicholas GD; Ng, Maggie CY; Mudgal, Poorva; Rivas, Manuel A; Vedantam, Sailaja; Mahajan, Anubha; Guo, Xiuqing; Abecasis, Goncalo; Aben, Katja K; Adair, Linda S; Alam, Dewan S; Albrecht, Eva; Allin, Kristine H; Allison, Matthew; Amouyel, Philippe; Appel, Emil V; Arveiler, Dominique; Asselbergs, Folkert W; Auer, Paul L; Balkau, Beverley; Banas, Bernhard; Bang, Lia E; Benn, Marianne; Bergmann, Sven; Bielak, Lawrence F; Blüher, Matthias; Boeing, Heiner; Boerwinkle, Eric; Böger, Carsten A; Bonnycastle, Lori L; Bork-Jensen, Jette; Bots, Michiel L; Bottinger, Erwin P; Bowden, Donald W; Brandslund, Ivan; Breen, Gerome; Brilliant, Murray H; Broer, Linda; Burt, Amber A; Butterworth, Adam S; Carey, David J; Caulfield, Mark J; Chambers, John C; Chasman, Daniel I; Chen, Yii-Der Ida; Chowdhury, Rajiv; Christensen, Cramer; Chu, Audrey Y; Cocca, Massimiliano; Collins, Francis S; Cook, James P; Corley, Janie; Galbany, Jordi Corominas; Cox, Amanda J; Cuellar-Partida, Gabriel; Danesh, John; Davies, Gail; de Bakker, Paul IW; de Borst, Gert J.; de Denus, Simon; de Groot, Mark CH; de Mutsert, Renée; Deary, Ian J; Dedoussis, George; Demerath, Ellen W; den Hollander, Anneke I; Dennis, Joe G; Di Angelantonio, Emanuele; Drenos, Fotios; Du, Mengmeng; Dunning, Alison M; Easton, Douglas F; Ebeling, Tapani; Edwards, Todd L; Ellinor, Patrick T; Elliott, Paul; Evangelou, Evangelos; Farmaki, Aliki-Eleni; Faul, Jessica D; Feitosa, Mary F; Feng, Shuang; Ferrannini, Ele; Ferrario, Marco M; Ferrieres, Jean; Florez, Jose C; Ford, Ian; Fornage, Myriam; Franks, Paul W; Frikke-Schmidt, Ruth; Galesloot, Tessel E; Gan, Wei; Gandin, Ilaria; Gasparini, Paolo; Giedraitis, Vilmantas; Giri, Ayush; Girotto, Giorgia; Gordon, Scott D; Gordon-Larsen, Penny; Gorski, Mathias; Grarup, Niels; Grove, Megan L.; Gudnason, Vilmundur; Gustafsson, Stefan; Hansen, Torben; Harris, Kathleen Mullan; Harris, Tamara B; Hattersley, Andrew T; Hayward, Caroline; He, Liang; Heid, Iris M; Heikkilä, Kauko; Helgeland, Øyvind; Hernesniemi, Jussi; Hewitt, Alex W; Hocking, Lynne J; Hollensted, Mette; Holmen, Oddgeir L; Hovingh, G. Kees; Howson, Joanna MM; Hoyng, Carel B; Huang, Paul L; Hveem, Kristian; Ikram, M. 
Arfan; Ingelsson, Erik; Jackson, Anne U; Jansson, Jan-Håkan; Jarvik, Gail P; Jensen, Gorm B; Jhun, Min A; Jia, Yucheng; Jiang, Xuejuan; Johansson, Stefan; Jørgensen, Marit E; Jørgensen, Torben; Jousilahti, Pekka; Jukema, J Wouter; Kahali, Bratati; Kahn, René S; Kähönen, Mika; Kamstrup, Pia R; Kanoni, Stavroula; Kaprio, Jaakko; Karaleftheri, Maria; Kardia, Sharon LR; Karpe, Fredrik; Kee, Frank; Keeman, Renske; Kiemeney, Lambertus A; Kitajima, Hidetoshi; Kluivers, Kirsten B; Kocher, Thomas; Komulainen, Pirjo; Kontto, Jukka; Kooner, Jaspal S; Kooperberg, Charles; Kovacs, Peter; Kriebel, Jennifer; Kuivaniemi, Helena; Küry, Sébastien; Kuusisto, Johanna; La Bianca, Martina; Laakso, Markku; Lakka, Timo A; Lange, Ethan M; Lange, Leslie A; Langefeld, Carl D; Langenberg, Claudia; Larson, Eric B; Lee, I-Te; Lehtimäki, Terho; Lewis, Cora E; Li, Huaixing; Li, Jin; Li-Gao, Ruifang; Lin, Honghuang; Lin, Li-An; Lin, Xu; Lind, Lars; Lindström, Jaana; Linneberg, Allan; Liu, Yeheng; Liu, Yongmei; Lophatananon, Artitaya; Luan, Jian'an; Lubitz, Steven A; Lyytikäinen, Leo-Pekka; Mackey, David A; Madden, Pamela AF; Manning, Alisa K; Männistö, Satu; Marenne, Gaëlle; Marten, Jonathan; Martin, Nicholas G; Mazul, Angela L; Meidtner, Karina; Metspalu, Andres; Mitchell, Paul; Mohlke, Karen L; Mook-Kanamori, Dennis O; Morgan, Anna; Morris, Andrew D; Morris, Andrew P; Müller-Nurasyid, Martina; Munroe, Patricia B; Nalls, Mike A; Nauck, Matthias; Nelson, Christopher P; Neville, Matt; Nielsen, Sune F; Nikus, Kjell; Njølstad, Pål R; Nordestgaard, Børge G; Ntalla, Ioanna; O'Connel, Jeffrey R; Oksa, Heikki; Loohuis, Loes M Olde; Ophoff, Roel A; Owen, Katharine R; Packard, Chris J; Padmanabhan, Sandosh; Palmer, Colin NA; Pasterkamp, Gerard; Patel, Aniruddh P; Pattie, Alison; Pedersen, Oluf; Peissig, Peggy L; Peloso, Gina M; Pennell, Craig E; Perola, Markus; Perry, James A; Perry, John R.B.; Person, Thomas N; Pirie, Ailith; Polasek, Ozren; Posthuma, Danielle; Raitakari, Olli T; Rasheed, Asif; Rauramaa, Rainer; Reilly, Dermot F; Reiner, Alex P; Renström, Frida; Ridker, Paul M; Rioux, John D; Robertson, Neil; Robino, Antonietta; Rolandsson, Olov; Rudan, Igor; Ruth, Katherine S; Saleheen, Danish; Salomaa, Veikko; Samani, Nilesh J; Sandow, Kevin; Sapkota, Yadav; Sattar, Naveed; Schmidt, Marjanka K; Schreiner, Pamela J; Schulze, Matthias B; Scott, Robert A; Segura-Lepe, Marcelo P; Shah, Svati; Sim, Xueling; Sivapalaratnam, Suthesh; Small, Kerrin S; Smith, Albert Vernon; Smith, Jennifer A; Southam, Lorraine; Spector, Timothy D; Speliotes, Elizabeth K; Starr, John M; Steinthorsdottir, Valgerdur; Stringham, Heather M; Stumvoll, Michael; Surendran, Praveen; Hart, Leen M ‘t; Tansey, Katherine E; Tardif, Jean-Claude; Taylor, Kent D; Teumer, Alexander; Thompson, Deborah J; Thorsteinsdottir, Unnur; Thuesen, Betina H; Tönjes, Anke; Tromp, Gerard; Trompet, Stella; Tsafantakis, Emmanouil; Tuomilehto, Jaakko; Tybjaerg-Hansen, Anne; Tyrer, Jonathan P; Uher, Rudolf; Uitterlinden, André G; Ulivi, Sheila; van der Laan, Sander W; Van Der Leij, Andries R; van Duijn, Cornelia M; van Schoor, Natasja M; van Setten, Jessica; Varbo, Anette; Varga, Tibor V; Varma, Rohit; Edwards, Digna R Velez; Vermeulen, Sita H; Vestergaard, Henrik; Vitart, Veronique; Vogt, Thomas F; Vozzi, Diego; Walker, Mark; Wang, Feijie; Wang, Carol A; Wang, Shuai; Wang, Yiqin; Wareham, Nicholas J; Warren, Helen R; Wessel, Jennifer; Willems, Sara M; Wilson, James G; Witte, Daniel R; Woods, Michael O; Wu, Ying; Yaghootkar, Hanieh; Yao, Jie; Yao, Pang; Yerges-Armstrong, Laura M; Young, 
Robin; Zeggini, Eleftheria; Zhan, Xiaowei; Zhang, Weihua; Zhao, Jing Hua; Zhao, Wei; Zhao, Wei; Zheng, He; Zhou, Wei; Rotter, Jerome I; Boehnke, Michael; Kathiresan, Sekar; McCarthy, Mark I; Willer, Cristen J; Stefansson, Kari; Borecki, Ingrid B; Liu, Dajiang J; North, Kari E; Heard-Costa, Nancy L; Pers, Tune H; Lindgren, Cecilia M; Oxvig, Claus; Kutalik, Zoltán; Rivadeneira, Fernando; Loos, Ruth JF; Frayling, Timothy M; Hirschhorn, Joel N; Deloukas, Panos; Lettre, Guillaume
2016-01-01
Summary Height is a highly heritable, classic polygenic trait with ∼700 common associated variants identified so far through genome-wide association studies. Here, we report 83 height-associated coding variants with lower minor allele frequencies (range of 0.1-4.8%) and effects of up to 2 cm/allele (e.g. in IHH, STC2, AR and CRISPLD2), >10 times the average effect of common variants. In functional follow-up studies, rare height-increasing alleles of STC2 (+1-2 cm/allele) compromised proteolytic inhibition of PAPP-A and increased cleavage of IGFBP-4 in vitro, resulting in higher bioavailability of insulin-like growth factors. These 83 height-associated variants overlap genes mutated in monogenic growth disorders and highlight new biological candidates (e.g. ADAMTS3, IL11RA, NOX4) and pathways (e.g. proteoglycan/glycosaminoglycan synthesis) involved in growth. Our results demonstrate that sufficiently large sample sizes can uncover rare and low-frequency variants of moderate to large effect associated with polygenic human phenotypes, and that these variants implicate relevant genes and pathways. PMID:28146470
Malataras, G; Kappas, C; Lovelock, D M; Mohan, R
1997-01-01
This article presents a comparison between two implementations of an EGS4 Monte Carlo simulation of a radiation therapy machine. The first implementation was run on a high performance RISC workstation, and the second was run on an inexpensive PC. The simulation was performed using the MCRAD user code. The photon energy spectra, as measured at a plane transverse to the beam direction and containing the isocenter, were compared. The photons were also binned radially in order to compare the variation of the spectra with radius. With 500,000 photons recorded in each of the two simulations, the running times were 48 h and 116 h for the workstation and the PC, respectively. No significant statistical differences between the two implementations were found.
Hoehner, Christine M; Sabounchi, Nasim S; Brennan, Laura K; Hovmand, Peter; Kemner, Allison
2015-01-01
In the evaluation of the Healthy Kids, Healthy Communities initiative, investigators implemented Group Model Building (GMB) to promote systems thinking at the community level. As part of the GMB sessions held in each community partnership, participants created behavior-over-time graphs (BOTGs) to characterize their perceptions of changes over time related to policies, environments, collaborations, and social determinants in their community related to healthy eating, active living, and childhood obesity. Our objective was to describe the process of coding BOTGs and their trends. We conducted a descriptive study of trends among BOTGs from 11 domains (eg, active living environments, social determinants of health, funding) and relevant categories and subcategories based on the graphed variables. In addition, BOTGs were distinguished by whether the variables were positively (eg, access to healthy foods) or negatively (eg, screen time) associated with health. The GMB sessions were held in 49 community partnerships across the United States. Participants in the GMB sessions (n = 590; n = 5-21 per session) included key individuals engaged in or impacted by the policy, system, or environmental changes occurring in the community. Thirty codes were developed to describe the direction (increasing, decreasing, stable) and shape (linear, reinforcing, balancing, or oscillating) of trends from 1660 graphs. The patterns of trends varied by domain. For example, among variables positively associated with health, the prevalence of reinforcing increasing trends was highest for active living and healthy eating environments (37.4% and 29.3%, respectively), partnership and community capacity (38.8%), and policies (30.2%). Examination of trends of specific variables suggested both convergence (eg, for cost of healthy foods) and divergence (eg, for farmers' markets) of trends across partnerships. Behavior-over-time graphs provide a unique data source for understanding community-level trends and, when combined with causal maps and computer modeling, can yield insights about prevention strategies to address childhood obesity.
Color coding of control room displays: the psychocartography of visual layering effects.
Van Laar, Darren; Deshe, Ofer
2007-06-01
Our objective was to evaluate which of three color coding methods (monochrome, maximally discriminable, and visual layering), used to code four types of control room display format (bars, tables, trend, mimic), was superior in two classes of task (search, compare). It has recently been shown that color coding of visual layers, as used in cartography, may be used to color code any type of information display, but this has yet to be fully evaluated. Twenty-four people took part in a 2 (task) x 3 (coding method) x 4 (format) wholly repeated measures design. The dependent variables assessed were target location reaction time, error rates, workload, and subjective feedback. Overall, the visual layers coding method produced significantly faster reaction times than did the maximally discriminable and the monochrome methods for both the search and compare tasks. No significant difference in errors was observed between conditions for either task type. Significantly less perceived workload was experienced with the visual layers coding method, which was also rated more highly than the other coding methods on a 14-item visual display quality questionnaire. The visual layers coding method is superior to other color coding methods for control room displays when the method supports the user's task. The visual layers color coding method has wide applicability to the design of all complex information displays utilizing color coding, from the most maplike (e.g., air traffic control) to the most abstract (e.g., abstracted ecological display).
Analysis of automatic repeat request methods for deep-space downlinks
NASA Technical Reports Server (NTRS)
Pollara, F.; Ekroot, L.
1995-01-01
Automatic repeat request (ARQ) methods cannot increase the capacity of a memoryless channel. However, they can be used to decrease the complexity of the channel-coding system to achieve essentially error-free transmission and to reduce link margins when the channel characteristics are poorly predictable. This article considers ARQ methods on a power-limited channel (e.g., the deep-space channel), where it is important to minimize the total power needed to transmit the data, as opposed to a bandwidth-limited channel (e.g., terrestrial data links), where the spectral efficiency or the total required transmission time is the most relevant performance measure. In the analysis, we compare the performance of three reference concatenated coded systems used in actual deep-space missions to that obtainable by ARQ methods using the same codes, in terms of required power, time to transmit with a given number of retransmissions, and achievable probability of word error. The ultimate limits of ARQ with an arbitrary number of retransmissions are also derived.
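The trade-off analyzed above can be sketched with a few lines of arithmetic: for word-error probability p and at most k retransmissions per word, the expected number of transmissions and the residual error rate follow directly (a simplified truncated-ARQ model, not the article's full analysis):

    def arq_stats(p, k):
        """Expected transmissions per word and residual word-error rate for
        ARQ with word-error probability p and at most k retransmissions."""
        expected = sum((i + 1) * (1 - p) * p**i for i in range(k + 1))
        expected += (k + 1) * p ** (k + 1)    # all k+1 attempts failed
        residual = p ** (k + 1)
        return expected, residual

    for p in (0.1, 0.01):
        print(p, arq_stats(p, k=2))   # near-error-free at modest extra cost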
Nuclear fuel management optimization using genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1995-07-01
The code-independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning-of-cycle k_eff for a pressurized water reactor core loading with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k_eff after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
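A minimal skeleton of such a GA-based search is sketched below; the evaluate() stub stands in for an external reactor physics run, and the fitness and penalty forms are assumptions, not CIGARO's actual implementation:

    import random

    def evaluate(pattern):
        """Stub standing in for an external physics run (e.g., a
        CASMO-3/SIMULATE-3 evaluation of the loading pattern)."""
        keff = 1.0 + 0.001 * sum(pattern)           # hypothetical response
        peak = 1.0 + 0.05 * max(pattern)
        return keff - 10.0 * max(0.0, peak - 1.15)  # penalize power peaking

    def ga(n_pos=20, pop=30, gens=50, p_mut=0.1):
        population = [[random.randint(0, 3) for _ in range(n_pos)]
                      for _ in range(pop)]
        for _ in range(gens):
            population.sort(key=evaluate, reverse=True)
            parents = population[: pop // 2]          # keep the fitter half
            children = []
            while len(parents) + len(children) < pop:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, n_pos)
                child = a[:cut] + b[cut:]             # one-point crossover
                if random.random() < p_mut:           # point mutation
                    child[random.randrange(n_pos)] = random.randint(0, 3)
                children.append(child)
            population = parents + children
        return max(population, key=evaluate)

    print(ga())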
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-03-01
A novel lower-complexity construction scheme of quasi-cyclic low-density parity-check (QC-LDPC) codes for optical transmission systems is proposed based on the structure of the parity-check matrix for the Richardson-Urbanke (RU) algorithm. Furthermore, a novel irregular QC-LDPC(4288, 4020) code with a high code rate of 0.937 is constructed by this novel construction scheme. The simulation analyses show that the net coding gain (NCG) of the novel irregular QC-LDPC(4288, 4020) code is respectively 2.08 dB, 1.25 dB and 0.29 dB more than those of the classic RS(255, 239) code, the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code at the bit error rate (BER) of 10^-6. The irregular QC-LDPC(4288, 4020) code has lower encoding/decoding complexity compared with the LDPC(32640, 30592) code and the irregular QC-LDPC(3843, 3603) code. The proposed novel QC-LDPC(4288, 4020) code can be more suitable for the increasing development requirements of high-speed optical transmission systems.
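To illustrate the quasi-cyclic structure such construction schemes exploit, the sketch below assembles a small parity-check matrix from circulant permutation matrices; the circulant size and shift-exponent table are illustrative, not the paper's RU-structured design:

    import numpy as np

    def circulant(p, shift):
        """p x p identity cyclically shifted by `shift`; -1 means all-zero."""
        if shift < 0:
            return np.zeros((p, p), dtype=int)
        return np.roll(np.eye(p, dtype=int), shift, axis=1)

    p = 8                                     # circulant size (illustrative)
    exponents = [[0, 1, 3, -1],               # hypothetical base matrix of
                 [2, -1, 5, 7]]               # shift exponents
    H = np.block([[circulant(p, e) for e in row] for row in exponents])
    print(H.shape)                            # (16, 32): skeleton of a QC-LDPC H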
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, Joon H.; Siegel, Malcolm Dean; Arguello, Jose Guadalupe, Jr.
2011-03-01
This report describes a gap analysis performed in the process of developing the Waste Integrated Performance and Safety Codes (IPSC) in support of the U.S. Department of Energy (DOE) Office of Nuclear Energy Advanced Modeling and Simulation (NEAMS) Campaign. The goal of the Waste IPSC is to develop an integrated suite of computational modeling and simulation capabilities to quantitatively assess the long-term performance of waste forms in the engineered and geologic environments of a radioactive waste storage or disposal system. The Waste IPSC will provide this simulation capability (1) for a range of disposal concepts, waste form types, engineered repository designs, and geologic settings, (2) for a range of time scales and distances, (3) with appropriate consideration of the inherent uncertainties, and (4) in accordance with rigorous verification, validation, and software quality requirements. The gap analyses documented in this report were performed during an initial gap analysis to identify candidate codes and tools to support the development and integration of the Waste IPSC, and during follow-on activities that delved into more detailed assessments of the various codes that were acquired, studied, and tested. The current Waste IPSC strategy is to acquire and integrate the necessary Waste IPSC capabilities wherever feasible, and develop only those capabilities that cannot be acquired or suitably integrated, verified, or validated. The gap analysis indicates that significant capabilities may already exist in the existing THC codes, although there is no single code able to fully account for all physical and chemical processes involved in a waste disposal system. Large gaps exist in modeling chemical processes and their couplings with other processes. The coupling of chemical processes with flow transport and mechanical deformation remains challenging. The data for extreme environments (e.g., for elevated temperature and high ionic strength media) that are needed for repository modeling are severely lacking. In addition, most existing reactive transport codes were developed for non-radioactive contaminants, and they need to be adapted to account for radionuclide decay and in-growth. The accessibility to the source codes is generally limited. Because the problems of interest for the Waste IPSC are likely to result in relatively large computational models, a compact memory-usage footprint and a fast/robust solution procedure will be needed. A robust massively parallel processing (MPP) capability will also be required to provide reasonable turnaround times on the analyses that will be performed with the code. A performance assessment (PA) calculation for a waste disposal system generally requires a large number (hundreds to thousands) of model simulations to quantify the effect of model parameter uncertainties on the predicted repository performance. A set of codes for a PA calculation must be sufficiently robust and fast in terms of code execution. A PA system as a whole must be able to provide multiple alternative models for a specific set of physical/chemical processes, so that the users can choose various levels of modeling complexity based on their modeling needs. This requires PA codes, preferably, to be highly modularized. Most of the existing codes have difficulties meeting these requirements.
Based on the gap analysis results, we have made the following recommendations for the code selection and code development for the NEAMS Waste IPSC: (1) build fully coupled high-fidelity THCMBR codes using the existing SIERRA codes (e.g., ARIA and ADAGIO) and platform, (2) use DAKOTA to build an enhanced performance assessment system (EPAS), and (3) build a modular code architecture and key code modules for performance assessments. The key chemical calculation modules will be built by expanding the existing CANTERA capabilities as well as by extracting useful components from other existing codes.
31 CFR 29.106 - Representative payees.
Code of Federal Regulations, 2011 CFR
2011-07-01
... the same circumstances as each plan permits for non-Federal Benefit Payments under the plan. (See e.g., section 4-629(b) of the D.C. Code (1997) (applicable to the Police and Firefighters Plan).) ...
31 CFR 29.106 - Representative payees.
Code of Federal Regulations, 2010 CFR
2010-07-01
... the same circumstances as each plan permits for non-Federal Benefit Payments under the plan. (See e.g., section 4-629(b) of the D.C. Code (1997) (applicable to the Police and Firefighters Plan).) ...
31 CFR 29.106 - Representative payees.
Code of Federal Regulations, 2014 CFR
2014-07-01
... the same circumstances as each plan permits for non-Federal Benefit Payments under the plan. (See e.g., section 4-629(b) of the D.C. Code (1997) (applicable to the Police and Firefighters Plan).) ...
31 CFR 29.106 - Representative payees.
Code of Federal Regulations, 2013 CFR
2013-07-01
... the same circumstances as each plan permits for non-Federal Benefit Payments under the plan. (See e.g., section 4-629(b) of the D.C. Code (1997) (applicable to the Police and Firefighters Plan).) ...
31 CFR 29.106 - Representative payees.
Code of Federal Regulations, 2012 CFR
2012-07-01
... the same circumstances as each plan permits for non-Federal Benefit Payments under the plan. (See e.g., section 4-629(b) of the D.C. Code (1997) (applicable to the Police and Firefighters Plan).) ...
The Role of Ontologies in Schema-based Program Synthesis
NASA Technical Reports Server (NTRS)
Bures, Tomas; Denney, Ewen; Fischer, Bernd; Nistor, Eugen C.
2004-01-01
Program synthesis is the process of automatically deriving executable code from (non-executable) high-level specifications. It is more flexible and powerful than conventional code generation techniques that simply translate algorithmic specifications into lower-level code or only create code skeletons from structural specifications (such as UML class diagrams). Key to building a successful synthesis system is specializing to an appropriate application domain. The AUTOBAYES and AUTOFILTER systems, under development at NASA Ames, operate in the two domains of data analysis and state estimation, respectively. The central concept of both systems is the schema, a representation of reusable computational knowledge. This can take various forms, including high-level algorithm templates, code optimizations, datatype refinements, or architectural information. A schema also contains applicability conditions that are used to determine when it can be applied safely. These conditions can refer to the initial specification, to intermediate results, or to elements of the partially-instantiated code. Schema-based synthesis uses AI technology to recursively apply schemas to gradually refine a specification into executable code. This process proceeds in two main phases. A front-end gradually transforms the problem specification into a program represented in an abstract intermediate code. A backend then compiles this further down into a concrete target programming language of choice. A core engine applies schemas on the initial problem specification, then uses the output of those schemas as the input for other schemas, until the full implementation is generated. Since there might be different schemas that implement different solutions to the same problem, this process can generate an entire solution tree. AUTOBAYES and AUTOFILTER have reached the level of maturity where they enable users to solve interesting application problems, e.g., the analysis of Hubble Space Telescope images. They are large (in total around 100kLoC Prolog), knowledge-intensive systems that employ complex symbolic reasoning to generate a wide range of non-trivial programs for complex application domains. Their schemas can have complex interactions, which make it hard to change them in isolation or even understand what an existing schema actually does. Adding more capabilities by increasing the number of schemas will only worsen this situation, ultimately leading to the entropy death of the synthesis system. The root cause of this problem is that the domain knowledge is scattered throughout the entire system and only represented implicitly in the schema implementations. In our current work, we are addressing this problem by making explicit the knowledge from different parts of the synthesis system. Here, we discuss how Gruber's definition of an ontology as an explicit specification of a conceptualization matches our efforts in identifying and explicating the domain-specific concepts. We outline the roles ontologies play in schema-based synthesis and argue that they address different audiences and serve different purposes. Their first role is descriptive: they serve as explicit documentation, and help to understand the internal structure of the system. Their second role is prescriptive: they provide the formal basis against which the other parts of the system (e.g., schemas) can be checked.
Their final role is referential: ontologies also provide semantically meaningful "hooks" which allow schemas and tools to access the internal state of the program derivation process (e.g., fragments of the generated code) in domain-specific rather than language-specific terms, and thus to modify it in a controlled fashion. For discussion purposes we use AUTOLINEAR, a small synthesis system we are currently experimenting with, which can generate code for solving a system of linear equations, Ax = b.
Exploiting the cannibalistic traits of Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Collins, O.
1993-01-01
In Reed-Solomon codes and all other maximum distance separable codes, there is an intrinsic relationship between the size of the symbols in a codeword and the length of the codeword. Increasing the number of symbols in a codeword to improve the efficiency of the coding system thus requires using a larger set of symbols. However, long Reed-Solomon codes are difficult to implement, and many communications or storage systems cannot easily accommodate an increased symbol size, e.g., M-ary frequency shift keying (FSK) and photon-counting pulse-position modulation demand a fixed symbol size. A technique for sharing redundancy among many different Reed-Solomon codewords to achieve the efficiency attainable in long Reed-Solomon codes without increasing the symbol size is described. Techniques both for calculating the performance of these new codes and for determining their encoder and decoder complexities are presented. These complexities are usually found to be substantially lower than those of conventional Reed-Solomon codes of similar performance.
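The symbol-size/length coupling mentioned above follows from the Reed-Solomon length bound n <= q - 1 over GF(q); a quick numeric sketch:

    # Maximum RS codeword length over GF(2^m) is n = 2^m - 1 symbols, so a
    # longer (more efficient) code forces a larger symbol alphabet.
    for m in (4, 6, 8, 10):
        n = 2**m - 1
        print(f"GF(2^{m:2}): max length {n:4} symbols = {n * m:6} bits")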
Application of a Design Space Exploration Tool to Enhance Interleaver Generation
2009-06-24
Interleavers [2], originally dedicated to channel coding, are currently being reused in a large set of digital communication systems (e.g., equalization ...). A design space exploration tool that originally targeted interface synthesis is shown to be also suited to interleaver design space exploration. Our design flow can take as input ...
40 CFR 86.005-17 - On-board diagnostics.
Code of Federal Regulations, 2013 CFR
2013-07-01
... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...
40 CFR 86.005-17 - On-board diagnostics.
Code of Federal Regulations, 2012 CFR
2012-07-01
... other available operating parameters), and functionality checks for computer output components (proper... considered acceptable. (e) Storing of computer codes. The OBD system shall record and store in computer... monitors that can be considered continuously operating monitors (e.g., misfire monitor, fuel system monitor...
A Consistent System for Coding Laboratory Samples
NASA Astrophysics Data System (ADS)
Sih, John C.
1996-07-01
A formal laboratory coding system is presented to keep track of laboratory samples. Preliminary useful information regarding the sample (origin and history) is gained without consulting a research notebook. Since this system uses and retains the same research notebook page number for each new experiment (reaction), finding and distinguishing products (samples) of the same or different reactions becomes an easy task. Using this system multiple products generated from a single reaction can be identified and classified in a uniform fashion. Samples can be stored and filed according to stage and degree of purification, e.g. crude reaction mixtures, recrystallized samples, chromatographed or distilled products.
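A scheme of this kind is straightforward to mechanize. The sketch below invents a concrete ID format (researcher initials, notebook page, product letter, purification-stage suffix); the actual conventions of the published system may differ:

    def sample_id(initials, page, product, stage):
        """Build an ID that keeps the notebook page number for every
        product of a reaction, plus a purification-stage suffix."""
        stages = {"crude": "X", "recrystallized": "R",
                  "chromatographed": "C", "distilled": "D"}
        return f"{initials}-{page}{product}-{stages[stage]}"

    print(sample_id("JS", 123, "a", "crude"))            # JS-123a-X
    print(sample_id("JS", 123, "b", "chromatographed"))  # JS-123b-C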
Solutions for Coding Societal Events
2016-12-01
Goals of this effort include: (2) develop a prototype system for civil unrest event extraction, and (3) engineer BBN ACCENT (ACCurate Events from Natural Text) to support broad use. This technical report presents results on the extraction of a stream of events (e.g., protests, attacks) from unstructured text (e.g., news, social media).
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software, which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Yi; Fakcharoenphol, Perapon; Wang, Shihao
2013-12-01
TOUGH2-EGS-MP is a parallel numerical simulation program coupling geomechanics with fluid and heat flow in fractured and porous media, and is applicable for simulation of enhanced geothermal systems (EGS). TOUGH2-EGS-MP is based on the TOUGH2-MP code, the massively parallel version of TOUGH2. In TOUGH2-EGS-MP, the fully-coupled flow-geomechanics model is developed from linear elastic theory for thermo-poro-elastic systems and is formulated in terms of mean normal stress as well as pore pressure and temperature. Reservoir rock properties such as porosity and permeability depend on rock deformation, and the relationships between these two, obtained from poro-elasticity theories and empirical correlations, are incorporated into the simulation. This report provides the user with detailed information on the TOUGH2-EGS-MP mathematical model and instructions for using it for Thermal-Hydrological-Mechanical (THM) simulations. The mathematical model includes the fluid and heat flow equations, geomechanical equation, and discretization of those equations. In addition, the parallel aspects of the code, such as domain partitioning and communication between processors, are also included. Although TOUGH2-EGS-MP has the capability for simulating fluid and heat flows coupled with geomechanical effects, it is up to the user to select the specific coupling process, such as THM or only TH, in a simulation. There are several example problems illustrating applications of this program. These example problems are described in detail and their input data are presented. Their results demonstrate that this program can be used for field-scale geothermal reservoir simulation in porous and fractured media with fluid and heat flow coupled with geomechanical effects.
The Code of the Street and Violent Versus Property Crime Victimization.
McNeeley, Susan; Wilcox, Pamela
2015-01-01
Previous research has shown that individuals who adopt values in line with the code of the street are more likely to experience violent victimization (e.g., Stewart, Schreck, & Simons, 2006). This study extends this literature by examining the relationship between the street code and multiple types of violent and property victimization. This research investigates the relationship between street code-related values and 4 types of victimization (assault, breaking and entering, theft, and vandalism) using Poisson-based multilevel regression models. Belief in the street code was associated with higher risk of experiencing assault, breaking and entering, and vandalism, whereas theft victimization was not related to the street code. The results suggest that the code of the street influences victimization broadly--beyond violence--by increasing behavior that provokes retaliation from others in various forms.
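As a simplified illustration of the modeling approach, the sketch below fits a single-level Poisson regression of victimization counts on a street-code score using simulated data; the study's multilevel structure and covariates are omitted:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    street_code = rng.normal(size=500)          # standardized belief score
    lam = np.exp(-1.0 + 0.4 * street_code)      # assumed true rate model
    assaults = rng.poisson(lam)                 # simulated victimization counts

    X = sm.add_constant(street_code)
    fit = sm.GLM(assaults, X, family=sm.families.Poisson()).fit()
    print(fit.params)                           # slope should recover ~0.4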
NASA Technical Reports Server (NTRS)
1991-01-01
Recommendations are made after 32 interviews, lesson identification, lesson analysis, and mission characteristics identification. The major recommendations are as follows: (1) to develop end-to-end planning and scheduling operations concepts by mission class and to ensure their consideration in system life cycle documentation; (2) to create an organizational infrastructure at the Code 500 level, supported by a Directorate level steering committee with project representation, responsible for systems engineering of end-to-end planning and scheduling systems; (3) to develop and refine mission capabilities to assess impacts of early mission design decisions on planning and scheduling; and (4) to emphasize operational flexibility in the development of the Advanced Space Network, other institutional resources, external (e.g., project) capabilities and resources, operational software and support tools.
Comment on ‘egs_brachy: a versatile and fast Monte Carlo code for brachytherapy’
NASA Astrophysics Data System (ADS)
Yegin, Gultekin
2018-02-01
In a recent paper, Chamberland et al (2016 Phys. Med. Biol. 61 8214) develop a new Monte Carlo code called egs_brachy for brachytherapy treatments. It is based on EGSnrc, and written in the C++ programming language. In order to benchmark the egs_brachy code, the authors use it in various test case scenarios in which complex geometry conditions exist. Another EGSnrc based brachytherapy dose calculation engine, BrachyDose, is used for dose comparisons. The authors fail to prove that egs_brachy can produce reasonable dose values for brachytherapy sources in a given medium. The dose comparisons in the paper are erroneous and misleading. egs_brachy should not be used in any further research studies unless and until all the potential bugs are fixed in the code.
Fiedler, Jan; Baker, Andrew H; Dimmeler, Stefanie; Heymans, Stephane; Mayr, Manuel; Thum, Thomas
2018-05-23
Non-coding RNAs are increasingly recognized not only as regulators of various biological functions but also as targets for a new generation of RNA therapeutics and biomarkers. We hereby review recent insights relating to non-coding RNAs including microRNAs (e.g. miR-126, miR-146a), long non-coding RNAs (e.g. MIR503HG, GATA6-AS, SMILR) and circular RNAs (e.g. cZNF292) and their role in vascular diseases. This includes identification and therapeutic use of hypoxia-regulated non-coding RNAs and endogenous non-coding RNAs that regulate intrinsic smooth muscle cell signalling, age-related non-coding RNAs and non-coding RNAs involved in the regulation of mitochondrial biology and metabolic control. Finally, we discuss non-coding RNA species with biomarker potential.
PHOTOMETRIC ANALYSIS OF HS Aqr, EG Cep, VW LMi, AND DU Boo
DOE Office of Scientific and Technical Information (OSTI.GOV)
Djurasevic, G.; Latkovic, O.; Bastuerk, Oe.
2013-03-15
We analyze new multicolor light curves for four close late-type binaries: HS Aqr, EG Cep, VW LMi, and DU Boo, in order to determine the orbital and physical parameters of the systems and estimate the distances. The analysis is done using the modeling code of G. Djurasevic, and is based on up-to-date measurements of spectroscopic elements. All four systems have complex, asymmetric light curves that we model by including bright or dark spots on one or both components. Our findings indicate that HS Aqr and EG Cep are in semi-detached, while VW LMi and DU Boo are in overcontact configurations.
Barnett, Miya L.; Niec, Larissa N.; Acevedo-Polakovich, I. David
2013-01-01
This paper describes the initial evaluation of the Therapist-Parent Interaction Coding System (TPICS), a measure of in vivo therapist coaching for the evidence-based behavioral parent training intervention, parent-child interaction therapy (PCIT). Sixty-one video-recorded treatment sessions were coded with the TPICS to investigate (1) the variety of coaching techniques PCIT therapists use in the early stage of treatment, (2) whether parent skill-level guides a therapist’s coaching style and frequency, and (3) whether coaching mediates changes in parents’ skill levels from one session to the next. Results found that the TPICS captured a range of coaching techniques, and that parent skill-level prior to coaching did relate to therapists’ use of in vivo feedback. Therapists’ responsive coaching (e.g., praise to parents) was a partial mediator of change in parenting behavior from one session to the next for specific child-centered parenting skills; whereas directive coaching (e.g., modeling) did not relate to change. The TPICS demonstrates promise as a measure of coaching during PCIT with good reliability scores and initial evidence of construct validity. PMID:24839350
Make Movies out of Your Dynamical Simulations with OGRE!
NASA Astrophysics Data System (ADS)
Tamayo, Daniel; Douglas, R. W.; Ge, H. W.; Burns, J. A.
2013-10-01
We have developed OGRE, the Orbital GRaphics Environment, an open-source project comprising a graphical user interface that allows the user to view the output from several dynamical integrators (e.g., SWIFT) that are commonly used for academic work. One can interactively vary the display speed, rotate the view and zoom the camera. This makes OGRE a great tool for students or the general public to explore accurate orbital histories that may display interesting dynamical features, e.g. the destabilization of Solar System orbits under the Nice model, or interacting pairs of exoplanets. Furthermore, OGRE allows the user to choreograph sequences of transformations as the simulation is played to generate movies for use in public talks or professional presentations. The graphical user interface is coded using Qt to ensure portability across different operating systems. OGRE will run on Linux and Mac OS X. The program is available as a self-contained executable, or as source code that the user can compile. We are continually updating the code, and hope that people who find it useful will contribute to the development of new features.
Multi-optimization Criteria-based Robot Behavioral Adaptability and Motion Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pin, Francois G.
2002-06-01
Robotic tasks are typically defined in Task Space (e.g., the 3-D World), whereas robots are controlled in Joint Space (motors). The transformation from Task Space to Joint Space must consider the task objectives (e.g., high precision, strength optimization, torque optimization), the task constraints (e.g., obstacles, joint limits, non-holonomic constraints, contact or tool task constraints), and the robot kinematics configuration (e.g., tools, type of joints, mobile platform, manipulator, modular additions, locked joints). Commercially available robots are optimized for a specific set of tasks, objectives and constraints and, therefore, their control codes are extremely specific to a particular set of conditions. Thus, there exists a multiplicity of codes, each handling a particular set of conditions, but none suitable for use on robots with widely varying tasks, objectives, constraints, or environments. On the other hand, most DOE missions and tasks are typically "batches of one". Attempting to use commercial codes for such work requires significant personnel and schedule costs for re-programming or adding code to the robots whenever a change in task objective, robot configuration, number and type of constraint, etc. occurs. The objective of our project is to develop a "generic code" to implement this Task-Space to Joint-Space transformation that would allow robot behavior adaptation, in real time (at loop rate), to changes in task objectives, number and type of constraints, modes of controls, kinematics configuration (e.g., new tools, added module). Our specific goal is to develop a single code for the general solution of under-specified systems of algebraic equations that is suitable for solving the inverse kinematics of robots, is useable for all types of robots (mobile robots, manipulators, mobile manipulators, etc.) with no limitation on the number of joints and the number of controlled Task-Space variables, can adapt to real time changes in number and type of constraints and in task objectives, and can adapt to changes in kinematics configurations (change of module, change of tool, joint failure adaptation, etc.).
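One standard way to solve such under-specified inverse-kinematics systems, shown here purely as an illustration (this is the textbook damped least-squares method, not necessarily the project's algorithm), maps Task-Space velocities to Joint Space through a damped pseudoinverse of the Jacobian:

    import numpy as np

    def joint_step(J, task_vel, damping=0.01):
        """Damped least-squares mapping of a Task-Space velocity to Joint
        Space; works for any numbers of joints and task variables."""
        JT = J.T
        m = J.shape[0]                           # number of task variables
        return JT @ np.linalg.solve(J @ JT + damping * np.eye(m), task_vel)

    J = np.array([[1.0, 0.5, 0.2],               # hypothetical Jacobian:
                  [0.0, 1.0, 0.4]])              # 2 task variables, 3 joints
    print(joint_step(J, np.array([0.1, -0.05]))) # joint velocity command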
Study of steam condensation at sub-atmospheric pressure: setting a basic research using MELCOR code
NASA Astrophysics Data System (ADS)
Manfredini, A.; Mazzini, M.
2017-11-01
One of the most serious accidents that can occur in the experimental nuclear fusion reactor ITER is the break of one of the headers of the refrigeration system of the first wall of the Tokamak. This results in the discharge of a water-steam mixture into the vacuum vessel (VV), with consequent pressurization of this container. To prevent the pressure in the VV from exceeding 150 kPa absolute, a system discharges the steam into a suppression pool at an absolute pressure of 4.2 kPa. The computer codes used to analyze such accidents (e.g., RELAP5 or MELCOR) are not validated experimentally for these conditions. We therefore planned a program of basic research to obtain experimental data suitable for validating the heat transfer correlations used in these codes. After a thorough literature search on this topic, ACTA, in collaboration with the staff of ITER, defined the experimental matrix and designed the experimental apparatus. For the thermal-hydraulic design of the experiments, we performed a series of calculations with MELCOR. This code, however, was used in an unconventional mode, with the development of models suited respectively to low and high steam flow-rate tests. The article concludes with a discussion of where the experimental data fall within the map of the characteristic regimes of the phenomenon, showing the importance of the new knowledge acquired, particularly in the case of chugging.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kailkhura, Bhavya; Theagarajan, Lakshmi Narasimhan; Varshney, Pramod K.
2017-04-12
In this paper, we generalize the well-known index coding problem to exploit the structure in the source data to improve system throughput. In many applications (e.g., multimedia), the data to be transmitted may lie (or can be well approximated) in a low-dimensional subspace. We exploit this low-dimensional structure of the data using an algebraic framework to solve the index coding problem (referred to as subspace-aware index coding), as opposed to the traditional index coding problem, which is subspace-unaware. Also, we propose an efficient algorithm based on the alternating minimization approach to obtain near-optimal index codes for both the subspace-aware and -unaware cases. Our simulations indicate that under certain conditions, a significant throughput gain (about 90%) can be achieved by subspace-aware index codes over conventional subspace-unaware index codes.
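For contrast with the subspace-aware formulation, the toy example below works through the classical (subspace-unaware) index coding setup: three clients each hold two packets as side information, and a single XOR broadcast satisfies all three demands (the packet values and caching pattern are invented):

    packets = {"x1": 0b1010, "x2": 0b0110, "x3": 0b1100}

    # client -> (wanted packet, side information already cached)
    clients = {"A": ("x1", {"x2", "x3"}),
               "B": ("x2", {"x1", "x3"}),
               "C": ("x3", {"x1", "x2"})}

    broadcast = packets["x1"] ^ packets["x2"] ^ packets["x3"]  # one coded symbol

    for name, (want, has) in clients.items():
        decoded = broadcast
        for h in has:                          # strip known packets from the XOR
            decoded ^= packets[h]
        assert decoded == packets[want]
        print(name, "recovered", want, "=", bin(decoded))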
Applied Computational Transonic Aerodynamics,
1982-08-01
Considering first the body integral (2.95), we now have the situation that, with the effect of the boundary layer represented (e.g., through ...), ... Capabilities noted include (3) static aeroelastic distortion, (4) up to three interfering bodies of nacelle or store type, and (5) an improved method of treating the wing tip. To date, no modeling of nacelle or store pylons has been included in this code. In the NLR code [64], the effect of (finite) bodies and wing ...
Embedded DCT and wavelet methods for fine granular scalable video: analysis and comparison
NASA Astrophysics Data System (ADS)
van der Schaar-Mitrea, Mihaela; Chen, Yingwei; Radha, Hayder
2000-04-01
Video transmission over bandwidth-varying networks is becoming increasingly important due to emerging applications such as streaming of video over the Internet. The fundamental obstacle in designing such systems resides in the varying characteristics of the Internet (i.e. bandwidth variations and packet-loss patterns). In MPEG-4, a new SNR scalability scheme, called Fine-Granular-Scalability (FGS), is currently under standardization, which is able to adapt in real-time (i.e. at transmission time) to Internet bandwidth variations. The FGS framework consists of a non-scalable motion-predicted base-layer and an intra-coded fine-granular scalable enhancement layer. For example, the base layer can be coded using a DCT-based MPEG-4 compliant, highly efficient video compression scheme. Subsequently, the difference between the original and decoded base-layer is computed, and the resulting FGS-residual signal is intra-frame coded with an embedded scalable coder. In order to achieve high coding efficiency when compressing the FGS enhancement layer, it is crucial to analyze the nature and characteristics of residual signals common to the SNR scalability framework (including FGS). In this paper, we present a thorough analysis of SNR residual signals by evaluating their statistical properties, compaction efficiency and frequency characteristics. The signal analysis revealed that the energy compaction of the DCT and wavelet transforms is limited and the frequency characteristics of SNR residual signals decay rather slowly. Moreover, the blockiness artifacts of the low bit-rate coded base-layer result in artificial high frequencies in the residual signal. Subsequently, a variety of wavelet and embedded DCT coding techniques applicable to the FGS framework are evaluated and their results are interpreted based on the identified signal properties. As expected from the theoretical signal analysis, the rate-distortion performances of the embedded wavelet and DCT-based coders are very similar. However, improved results can be obtained for the wavelet coder by deblocking the base-layer prior to the FGS residual computation. Based on the theoretical analysis and our measurements, we can conclude that for an optimal complexity versus coding-efficiency trade-off, only limited wavelet decomposition (e.g. 2 stages) needs to be performed for the FGS-residual signal. Also, it was observed that the good rate-distortion performance of a coding technique for a certain image type (e.g. natural still-images) does not necessarily translate into similarly good performance for signals with different visual characteristics and statistical properties.
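The base-layer/residual split at the heart of FGS can be sketched in a few lines; below, a 1-D signal, a uniform coarse quantizer for the base layer, and MSB-first bit-plane decomposition of the residual stand in (as simplified assumptions) for the DCT/wavelet machinery discussed above:

    import numpy as np

    x = np.array([37, -12, 5, 90, -44, 3])      # original samples
    base = (x // 16) * 16                       # coarse base layer (step 16)
    residual = x - base                         # FGS enhancement-layer input

    # MSB-first bit planes of the residual magnitudes: truncating the list
    # anywhere still yields a usable (coarser) enhancement, which is the
    # fine-granularity property.
    mag = np.abs(residual)
    top = int(mag.max()).bit_length() - 1
    planes = [((mag >> b) & 1).tolist() for b in range(top, -1, -1)]
    print(planes)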
Java Source Code Analysis for API Migration to Embedded Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winter, Victor; McCoy, James A.; Guerrero, Jonathan
Embedded systems form an integral part of our technological infrastructure and oftentimes play a complex and critical role within larger systems. From the perspective of reliability, security, and safety, strong arguments can be made favoring the use of Java over C in such systems. In part, this argument is based on the assumption that suitable subsets of Java's APIs and extension libraries are available to embedded software developers. In practice, a number of Java-based embedded processors do not support the full features of the JVM. For such processors, source code migration is a mechanism by which key abstractions offered by APIs and extension libraries can be made available to embedded software developers. The analysis required for Java source code-level library migration is based on the ability to correctly resolve element references to their corresponding element declarations. A key challenge in this setting is how to perform analysis for incomplete source-code bases (e.g., subsets of libraries) from which types and packages have been omitted. This article formalizes an approach that can be used to extend code bases targeted for migration in such a manner that the threats associated with the analysis of incomplete code bases are eliminated.
TRIQS: A toolbox for research on interacting quantum systems
NASA Astrophysics Data System (ADS)
Parcollet, Olivier; Ferrero, Michel; Ayral, Thomas; Hafermann, Hartmut; Krivenko, Igor; Messio, Laura; Seth, Priyanka
2015-11-01
We present the TRIQS library, a Toolbox for Research on Interacting Quantum Systems. It is an open-source, computational physics library providing a framework for the quick development of applications in the field of many-body quantum physics, and in particular, strongly-correlated electronic systems. It supplies components to develop codes in a modern, concise and efficient way: e.g. Green's function containers, a generic Monte Carlo class, and simple interfaces to HDF5. TRIQS is a C++/Python library that can be used from either language. It is distributed under the GNU General Public License (GPLv3). State-of-the-art applications based on the library, such as modern quantum many-body solvers and interfaces between density-functional-theory codes and dynamical mean-field theory (DMFT) codes are distributed along with it.
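As a flavor of the Python side of the library, here is a minimal sketch in the spirit of the TRIQS 1.x documentation; the module paths and keyword names below follow the "pytriqs" convention as we recall it and should be treated as assumptions to check against the installed release:

```python
from pytriqs.gf.local import GfImFreq, iOmega_n, inverse
from pytriqs.archive import HDFArchive

# A Matsubara Green's function container for one orbital, filled with
# the free propagator G(iw_n) = 1 / (iw_n - eps) at eps = 0.5.
g = GfImFreq(indices=[0], beta=50.0, name="g0")
g << inverse(iOmega_n - 0.5)

# Persist the container through the library's HDF5 archive interface.
with HDFArchive("g0.h5", "w") as ar:
    ar["g0"] = g
```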
Hamann, Cara J; Peek-Asa, Corinne
2017-05-01
Among roadway users, bicyclists are considered vulnerable due to their high risk for injury when involved in a crash. Little is known about the circumstances leading to near crashes, crashes, and related injuries or how these vary by age and gender. The purpose of this study was to examine the rates and characteristics of safety-relevant events (crashes, near crashes, errors, and traffic violations) among adult and child bicyclists. Bicyclist trips were captured using Pedal Portal, a data acquisition and coding system which includes a GPS-enabled video camera and graphical user interface. A total of 179 safety-relevant events were manually coded from trip videos. Overall, child errors and traffic violations occurred at a rate of 1.9 per 100 min of riding, compared to 6.3 for adults. However, children rode on the sidewalk 56.4% of the time, compared with 12.7% for adults. For both adults and children, the highest safety-relevant event rates occurred on paved roadways with no bicycle facilities present (adults = 8.6 and children = 7.2 per 100 min of riding). Our study, the first naturalistic study to compare safety-relevant events among adults and children, indicates large variation in riding behavior and exposure between child and adult bicyclists. The majority of identified events were traffic violations, and we were not able to code all risk-relevant data (e.g., subtle avoidance behaviors, failure to check for traffic, probability of collision). Future naturalistic cycling studies would benefit from enhanced instrumentation (e.g., additional camera views) and coding protocols able to fill these gaps. Copyright © 2017 Elsevier Ltd. All rights reserved.
EG and G and NASA face seal codes comparison
NASA Technical Reports Server (NTRS)
Basu, Prit
1994-01-01
This viewgraph presentation presents the following results for the example comparison: EG&G code with face deformations suppressed and SPIRALG agree well with each other as well as with the experimental data; 0 rpm stiffness data calculated by EG&G code are about 70-100 percent lower than that by SPIRALG; there is no appreciable difference between 0 rpm and 16,000 rpm stiffness and damping coefficients calculated by SPIRALG; and the film damping above 500 psig calculated by SPIRALG is much higher than the O-Ring secondary seal damping (e.g. 50 lbf.s/in).
75 FR 32459 - Notice Announcing Preliminary Permit Drawing
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-08
... accordingly. \\2\\ 18 CFR 4.37 (2009). See, e.g., BPUS Generation Development, LLC, 126 FERC ¶ 61,132 (2009). On... drawing. Kimberly D. Bose, Secretary. [FR Doc. 2010-13559 Filed 6-7-10; 8:45 am] BILLING CODE 6717-01-P ...
ESAS Deliverable PS 1.1.2.3: Customer Survey on Code Generations in Safety-Critical Applications
NASA Technical Reports Server (NTRS)
Schumann, Johann; Denney, Ewen
2006-01-01
Automated code generators (ACG) are tools that convert a (higher-level) model of a software (sub-)system into executable code without the necessity for a developer to actually implement the code. Although both commercially supported and in-house tools have been used in many industrial applications, little data exists on how these tools are used in safety-critical domains (e.g., spacecraft, aircraft, automotive, nuclear). The aims of the survey, therefore, were threefold: 1) to determine if code generation is primarily used as a tool for prototyping, including design exploration and simulation, or for flight/production code; 2) to determine the verification issues with code generators relating, in particular, to qualification and certification in safety-critical domains; and 3) to determine perceived gaps in functionality of existing tools.
Phonological coding during reading.
Leinenger, Mallorie
2014-11-01
The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. (PsycINFO Database Record (c) 2014 APA, all rights reserved).
The Programming Language Python In Earth System Simulations
NASA Astrophysics Data System (ADS)
Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.
2004-12-01
Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson), and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM), the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we show the basic concepts of escript and how escript is used to implement a simulation code for interacting fault systems. We show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland, and SGI.
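To illustrate the two key abstractions, a minimal sketch of a Poisson-type solve is given below; it follows the esys.escript examples as we recall them, so the exact module and keyword names are assumptions that may differ between releases:

```python
from esys.escript import kronecker
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

# A 2-D rectangular finley mesh (the PDE-solver back end).
domain = Rectangle(l0=1.0, l1=1.0, n0=40, n1=40)

# linearPDE object: -div(A grad u) = Y with A = identity and Y = 1.
pde = LinearPDE(domain)
pde.setValue(A=kronecker(domain), Y=1.0)

# The solution comes back as a Data object holding values on the
# sample points of the underlying finite element mesh.
u = pde.getSolution()
```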
Identifying Human Factors Issues in Aircraft Maintenance Operations
NASA Technical Reports Server (NTRS)
Veinott, Elizabeth S.; Kanki, Barbara G.; Shafto, Michael G. (Technical Monitor)
1995-01-01
Maintenance operations incidents submitted to the Aviation Safety Reporting System (ASRS) between 1986-1992 were systematically analyzed in order to identify issues relevant to human factors and crew coordination. This exploratory analysis involved 95 ASRS reports which represented a wide range of maintenance incidents. The reports were coded and analyzed according to the type of error (e.g., wrong part, procedural error, non-procedural error), contributing factors (e.g., individual, within-team, cross-team, procedure, tools), result of the error (e.g., aircraft damage or not) as well as the operational impact (e.g., aircraft flown to destination, air return, delay at gate). The main findings indicate that procedural errors were most common (48.4%) and that individual and team actions contributed to the errors in more than 50% of the cases. As for operational results, most errors were either corrected after landing at the destination (51.6%) or required the flight crew to stop enroute (29.5%). Interactions among these variables are also discussed. This analysis is a first step toward developing a taxonomy of crew coordination problems in maintenance. By understanding what variables are important and how they are interrelated, we may develop intervention strategies that are better tailored to the human factor issues involved.
It takes two-coincidence coding within the dual olfactory pathway of the honeybee.
Brill, Martin F; Meyer, Anneke; Rössler, Wolfgang
2015-01-01
To rapidly process biologically relevant stimuli, sensory systems have developed a broad variety of coding mechanisms like parallel processing and coincidence detection. Parallel processing (e.g., in the visual system) increases both computational capacity and processing speed by simultaneously coding different aspects of the same stimulus. Coincidence detection is an efficient way to integrate information from different sources. Coincidence has been shown to promote associative learning and memory or stimulus feature detection (e.g., in auditory delay lines). Within the dual olfactory pathway of the honeybee, both of these mechanisms might be implemented by uniglomerular projection neurons (PNs) that transfer information from the primary olfactory center, the antennal lobe (AL), to a multimodal integration center, the mushroom body (MB). PNs from anatomically distinct tracts respond to the same stimulus space but have different physiological properties, characteristics that are prerequisites for parallel processing of different stimulus aspects. However, the PN pathways also display mirror-image-like anatomical trajectories that resemble the neuronal coincidence detectors known from auditory delay lines. To investigate temporal processing of olfactory information, we recorded PN odor responses simultaneously from both tracts and measured coincident activity of PNs within and between tracts. Our results show that coincidence levels are different within each of the two tracts. Coincidence also occurs between tracts, but to a minor extent compared to coincidence within tracts. Taken together, our findings support the relevance of spike timing in the coding of olfactory information (a temporal code).
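The within- and between-tract coincidence measure can be illustrated with a toy computation (a hypothetical sketch, not the authors' analysis pipeline; the spike times and the 5 ms window are made up):

```python
import numpy as np

def coincident_spikes(train_a, train_b, window=0.005):
    """Count spikes in train_a that have at least one spike in
    train_b within +/- window seconds (a crude coincidence detector)."""
    train_b = np.sort(train_b)
    idx = np.searchsorted(train_b, train_a)
    left = np.abs(train_a - train_b[np.clip(idx - 1, 0, len(train_b) - 1)])
    right = np.abs(train_a - train_b[np.clip(idx, 0, len(train_b) - 1)])
    return int(np.sum(np.minimum(left, right) <= window))

# Two hypothetical PN spike-time arrays (seconds).
pn1 = np.array([0.012, 0.034, 0.051, 0.090])
pn2 = np.array([0.013, 0.049, 0.120])
print(coincident_spikes(pn1, pn2))  # -> 2
```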
JACOB: an enterprise framework for computational chemistry.
Waller, Mark P; Dresselhaus, Thomas; Yang, Jack
2013-06-15
Here, we present just a collection of beans (JACOB): an integrated batch-based framework designed for the rapid development of computational chemistry applications. The framework expedites developer productivity by handling the generic infrastructure tier, and can be easily extended by user-specific scientific code. Paradigms from enterprise software engineering were rigorously applied to create a scalable, testable, secure, and robust framework. A centralized web application is used to configure and control the operation of the framework. The application-programming interface provides a set of generic tools for processing large-scale noninteractive jobs (e.g., systematic studies), or for coordinating systems integration (e.g., complex workflows). The code for the JACOB framework is open sourced and is available at: www.wallerlab.org/jacob. Copyright © 2013 Wiley Periodicals, Inc.
A Comprehensive Validation Approach Using The RAVEN Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Cogliati, Joshua J
2015-06-01
The RAVEN computer code, developed at the Idaho National Laboratory, is a generic software framework to perform parametric and probabilistic analysis based on the response of complex system codes. RAVEN is a multi-purpose probabilistic and uncertainty quantification platform, capable of communicating with any system code. A natural extension of the RAVEN capabilities is the implementation of an integrated validation methodology, involving several different metrics, that represents an evolution of the methods currently used in the field. State-of-the-art validation approaches use neither exploration of the input space through sampling strategies nor a comprehensive variety of metrics needed to interpret the code responses with respect to experimental data. The RAVEN code addresses both of these gaps. In the following sections, the employed methodology, and its application to the newly developed thermal-hydraulic code RELAP-7, is reported. The validation approach has been applied to an integral effect experiment, representing natural circulation, based on the activities performed by EG&G Idaho. Four different experiment configurations have been considered and nodalized.
Performance of MIMO-OFDM using convolution codes with QAM modulation
NASA Astrophysics Data System (ADS)
Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa
2014-04-01
The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error correction code) to detect and correct errors that occur during data transmission; one option is the convolutional code. This paper presents the performance of OFDM with the Space Time Block Code (STBC) diversity technique, using QAM modulation with code rate 1/2. The evaluation is done by analyzing the Bit Error Rate (BER) versus the energy per bit to noise power spectral density ratio (Eb/No). The scheme uses 256 subcarriers and transmits over a Rayleigh multipath fading channel. To achieve a BER of 10⁻³, the SISO-OFDM scheme requires an SNR of 10 dB; the 2×2 MIMO-OFDM scheme also requires 10 dB, while the 4×4 MIMO-OFDM scheme requires 5 dB, and adding convolutional coding to the 4×4 MIMO-OFDM scheme improves performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the 4×4 MIMO-OFDM system without coding, a saving of 7 dB relative to 2×2 MIMO-OFDM, and a significant power saving relative to the SISO-OFDM system.
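For orientation, a minimal 256-subcarrier OFDM modulator is sketched below (a hypothetical numpy illustration: Gray-coded 4-QAM mapping, IFFT, cyclic prefix; the convolutional encoder and STBC stages evaluated in the paper are omitted):

```python
import numpy as np

N_SC, CP_LEN = 256, 32                 # subcarriers, cyclic-prefix length

def qam4_map(bits):
    """Map bit pairs onto Gray-coded 4-QAM (QPSK) symbols."""
    b = bits.reshape(-1, 2)
    return ((1 - 2 * b[:, 0]) + 1j * (1 - 2 * b[:, 1])) / np.sqrt(2)

def ofdm_symbol(bits):
    """One OFDM symbol: QAM mapping, IFFT across subcarriers, then CP."""
    x = np.fft.ifft(qam4_map(bits), N_SC) * np.sqrt(N_SC)
    return np.concatenate([x[-CP_LEN:], x])   # prepend cyclic prefix

tx = ofdm_symbol(np.random.randint(0, 2, 2 * N_SC))
```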
A flooding induced station blackout analysis for a pressurized water reactor using the RISMC toolkit
Mandelli, Diego; Prescott, Steven; Smith, Curtis; ...
2015-05-17
In this paper we evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes that is responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., component/system activation) and to perform statistical analyses. In our case, the simulation of the flooding is performed by using an advanced smooth particle hydrodynamics code called NEUTRINO. The obtained results allow the user to investigate and quantify the impact of timing and sequencing of events on system safety. The impact of power uprate is determined in terms of both core damage probability and safety margins.
The photo-colorimetric space as a medium for the representation of spatial data
NASA Technical Reports Server (NTRS)
Kraiss, K. Friedrich; Widdel, Heino
1989-01-01
Spatial displays and instruments are usually used in the context of vehicle guidance, but it is hard to find applicable spatial formats in information retrieval and interaction systems. Human interaction with spatial data structures and the applicability of the CIE color space to improve dialogue transparency are discussed. A proposal is made to use the color space to code spatially represented data. The semantic distances of the categories of dialogue structures or, more generally, of database structures are determined empirically. Subsequently the distances are transformed and depicted in the color space. The concept is demonstrated for a car diagnosis system, where the category cooling system could, e.g., be coded in blue and the category ignition system in red. In this way a correspondence between color and semantic distance is achieved. Subcategories can be coded as luminance differences within the color space.
Rosenberg-Yunger, Zahava R S; Klassen, Anne F; Amin, Leila; Granek, Leeat; D'Agostino, Norma M; Boydell, Katherine M; Greenberg, Mark; Barr, Ronald D; Nathan, Paul C
2013-09-01
Despite the risk for late effects in adult survivors of cancer in childhood or adolescence, many survivors fail to transition from pediatric to adult long-term follow-up (LTFU) care. The purpose of this study was to identify the barriers and facilitators of transition from pediatric to adult LTFU care. In this qualitative study, 38 Canadian survivors of cancer in childhood or adolescence, currently aged 15-26 years, were interviewed using semi-structured, open-ended questions. Participants belonged to one of four groups: pre-transition (n=10), successful transition (n=11), failed to transition (n=7), and transitioned to an adult center but then dropped out of adult care (n=10). A constructivist grounded theory approach was used to analyze the interview data. This approach consisted of coding transcripts line by line to develop categories and using constant comparison to examine relationships within and across codes and categories. Interviewing continued until saturation was reached. Three interrelated themes were identified that affected the transition process: micro-level patient factors (e.g., due diligence, anxiety), meso-level support factors (e.g., family, friends), and macro-level system factors (e.g., appointments, communication, healthcare providers). Factors could act as facilitators to transition (e.g., family support), barriers to transition (e.g., difficulty booking appointments), or as both a barrier and a facilitator (e.g., anxiety). This study illustrates the interaction between multiple factors that facilitate and/or prevent transition from pediatric to adult LTFU cancer care. A number of recommendations are presented to address potential macro-level system barriers to successful transition.
NASA Astrophysics Data System (ADS)
Mazza, Mirko
2015-12-01
Reinforced concrete (r.c.) framed buildings designed in compliance with inadequate seismic classifications and code provisions exhibit in many cases a high vulnerability and need to be retrofitted. To this end, the insertion of a base isolation system allows a considerable reduction of the seismic loads transmitted to the superstructure. However, strong near-fault ground motions, which are characterised by long-duration horizontal pulses, may amplify the inelastic response of the superstructure and induce a failure of the isolation system. The above considerations point out the importance of checking the effectiveness of different isolation systems for retrofitting a r.c. framed structure. For this purpose, a numerical investigation is carried out with reference to a six-storey r.c. framed building which, originally designed as a fixed-base structure in compliance with the previous Italian code (DM96) for a medium-risk seismic zone, has to be retrofitted by insertion of an isolation system at the base for attaining the performance levels imposed by the current Italian code (NTC08) in a high-risk seismic zone. Besides the (fixed-base) original structure, three cases of base isolation are studied: elastomeric bearings acting alone (e.g. high-damping laminated rubber bearings, HDLRBs); an in-parallel combination of elastomeric and friction bearings (e.g. HDLRBs and steel-PTFE sliding bearings, SBs); and friction bearings acting alone (e.g. friction pendulum bearings, FPBs). The nonlinear analysis of the fixed-base and base-isolated structures subjected to horizontal components of near-fault ground motions is performed for checking plastic conditions at the potential critical (end) sections of the girders and columns as well as critical conditions of the isolation systems. Unexpectedly high values of ductility demand are highlighted at the lower floors of all base-isolated structures, while re-centring problems of the base isolation systems under near-fault earthquakes are expected when friction bearings act alone (i.e. FPBs) or in combination with HDLRBs (i.e. SBs).
NASA Astrophysics Data System (ADS)
Nekuchaev, A. O.; Shuteev, S. A.
2014-04-01
A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. The existing method, for example, uses 32 wavelengths in the NRZ code with an average power of 16 conventional units (16 ones and 16 zeros on average) and a transmission of 32 bits/cycle. In the new method, one of 124 wavelengths, each lasting one cycle (with no more than 16 obligatory different wavelengths at any time instant) and carrying 4 bits, is transmitted in every 1/16-cycle time slot, for an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction, FEC) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing deletions and errors simultaneously.
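The claimed rate is easy to check (our reading of the scheme, not the authors' derivation): each 1/16-cycle slot carries one wavelength chosen from a 16-symbol alphabet, i.e. 4 bits, so a full cycle carries 16 x 4 = 64 bits:

```python
import math

slots_per_cycle = 16                     # one wavelength per 1/16 cycle
bits_per_slot = math.log2(16)            # 16-ary choice -> 4 bits
print(slots_per_cycle * bits_per_slot)   # 64.0 bits per cycle
```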
The ASC Sequoia Programming Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seager, M
2008-08-06
In the late 1980's and early 1990's, Lawrence Livermore National Laboratory was deeply engrossed in determining the next generation programming model for the Integrated Design Codes (IDC) beyond vectorization for the Cray 1s series of computers. The vector model, developed in the mid 1970's first for the CDC 7600 and later extended from stack-based vector operations to memory-to-memory operations for the Cray 1s, lasted approximately 20 years (see Slide 5). The Cray vector era was deemed an extremely long-lived era as it allowed vector codes to be developed over time (the Cray 1s were faster in scalar mode than the CDC 7600) with vector unit utilization increasing incrementally over time. The other attributes of the Cray vector era at LLNL were that we developed, supported and maintained the operating system (LTSS and later NLTSS), communications protocols (LINCS), compilers (Civic Fortran77 and Model), operating system tools (e.g., batch system, job control scripting, loaders, debuggers, editors, graphics utilities, you name it) and math and highly machine-optimized libraries (e.g., SLATEC and STACKLIB). Although LTSS was adopted by Cray for early system generations, they later developed the COS and UNICOS operating systems and environment on their own. In the late 1970s and early 1980s two trends appeared that made the Cray vector programming model (described above, including both the hardware and system software aspects) seem potentially dated and slated for major revision. These trends were the appearance of low-cost CMOS microprocessors and their attendant departmental computers and mini-computers, and later workstations and personal computers. With the widespread adoption of Unix in the early 1980s, it appeared that LLNL (and the other DOE Labs) would be left out of the mainstream of computing without a rapid transition to these 'Killer Micros' and modern OS and tools environments. The other interesting advance in the period is that systems were being developed with multiple 'cores' and were called Symmetric Multi-Processor or Shared Memory Processor (SMP) systems. The parallel revolution had begun. The Laboratory started a small 'parallel processing project' in 1983 to study the new technology and its application to scientific computing with four people: Tim Axelrod, Pete Eltgroth, Paul Dubois and Mark Seager. Two years later, Eugene Brooks joined the team. This team focused on Unix and 'killer micro' SMPs. Indeed, Eugene Brooks was credited with coming up with the 'Killer Micro' term. After several generations of SMP platforms (e.g., the Sequent Balance 8000 with 8 33 MHz NS32032s, the Alliant FX/8 with 8 MC68020s and FPGA-based vector units, and finally the BBN Butterfly with 128 cores), it became apparent to us that the killer micro revolution would indeed overtake Crays and that we definitely needed a new programming and systems model. The model developed by Mark Seager and Dale Nielsen focused on both the system aspects (Slide 3) and the code development aspects (Slide 4). Although now succinctly captured in two attached slides, at the time there was tremendous ferment in the research community as to what parallel programming model would emerge, dominate and survive. In addition, we wanted a model that would provide portability between platforms of a single generation but also longevity over multiple, and hopefully many, generations. Only after we developed the 'Livermore Model' and worked it out in considerable detail did it become obvious that what we came up with was the right approach.
In a nutshell, the applications programming model of the Livermore Model posited that SMP parallelism would ultimately not scale indefinitely and one would have to bite the bullet and implement MPI parallelism within the Integrated Design Codes (IDC). We also had a major emphasis on doing everything in a completely standards-based, portable methodology with POSIX/Unix as the target environment. We decided against specialized libraries like STACKLIB for performance, but kept as many general-purpose, portable math libraries as were needed by the codes. Third, we assumed that the SMPs in clusters would evolve in time to become more powerful, feature-rich and, in particular, offer more cores. Thus, we focused on OpenMP and POSIX PThreads for programming SMP parallelism. These code porting efforts were led by Dale Nielsen, A-Division code group leader, and Randy Christensen, B-Division code group leader. Most of the porting effort revolved around removing 'Crayisms' in the codes: artifacts of LTSS/NLTSS, Civic compiler extensions beyond Fortran77, IO libraries, and dealing with new code control languages (we switched to Perl and later to Python). Adding MPI to the codes was initially problematic and error-prone because the programmers used MPI directly and sprinkled the calls throughout the code.
Fukushima Daiichi Unit 1 Ex-Vessel Prediction: Core Concrete Interaction
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robb, Kevin R; Farmer, Mitchell; Francis, Matthew W
Lower head failure and corium concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis was carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially dependent melt conditions and extent of spreading during relocation from the vessel. The results of the MELTSPREAD analysis are reported in a companion paper. This information was used as input for the long-term debris coolability analysis with CORQUENCH.
Fukushima Daiichi Unit 1 ex-vessel prediction: Core melt spreading
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farmer, M. T.; Robb, K. R.; Francis, M. W.
2016-10-31
Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, an analysis has been carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially dependent melt conditions and extent of spreading during relocation from the vessel. Lastly, this information was then used as input for the long-term debris coolability analysis with CORQUENCH that is reported in a companion paper.
32 CFR 623.4 - Accounting procedures.
Code of Federal Regulations, 2011 CFR
2011-07-01
... or O) indicating which loan of the day is first; e.g., A-first, B-second, etc. 51 “M”. 52-53 “G4” for... commercial bills of lading (CBL). Freight charges will be paid by the borrower. The CBL will cite proper project codes. NOTE: In emergencies where use of CBL would delay shipment, government bills of lading (GBL...
32 CFR 623.4 - Accounting procedures.
Code of Federal Regulations, 2013 CFR
2013-07-01
... or O) indicating which loan of the day is first; e.g., A-first, B-second, etc. 51 “M”. 52-53 “G4” for... commercial bills of lading (CBL). Freight charges will be paid by the borrower. The CBL will cite proper project codes. NOTE: In emergencies where use of CBL would delay shipment, government bills of lading (GBL...
Impact of Forecast and Model Error Correlations In 4dvar Data Assimilation
NASA Astrophysics Data System (ADS)
Zupanski, M.; Zupanski, D.; Vukicevic, T.; Greenwald, T.; Eis, K.; Vonder Haar, T.
A weak-constraint 4DVAR data assimilation system has been developed at the Cooperative Institute for Research in the Atmosphere (CIRA), Colorado State University. It is based on NCEP's ETA 4DVAR system, and it is fully parallel (MPI coding). CIRA's 4DVAR system is aimed at satellite data assimilation research, with current focus on assimilation of cloudy radiances and microwave satellite measurements. The most important improvement over the previous 4DVAR system is the degree of generality introduced into the new algorithm, namely for applications with different NWP models (e.g., RAMS, WRF, ETA, etc.) and for the choice of control variable. In current applications, the non-hydrostatic RAMS model and its adjoint are used, including all microphysical processes. The control variable includes potential temperature, velocity potential and stream function, vertical velocity, and seven mixing ratios with respect to all water phases. Since the statistics of the microphysical components of the control variable are not well known, special attention will be paid to the impact of the forecast and model (prior) error correlations on the 4DVAR analysis. In particular, the sensitivity of the analysis with respect to decorrelation length will be examined. The prior error covariances are modelled using the compactly supported, space-limited correlations developed at NASA DAO.
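Compactly supported correlation functions of this kind are typically of the Gaspari-Cohn type; the sketch below implements that standard fifth-order piecewise-rational function from the data assimilation literature (whether CIRA used exactly this form is our assumption):

```python
import numpy as np

def gaspari_cohn(r, c):
    """Gaspari-Cohn (1999) compactly supported correlation function.

    r: separation distance(s); c: half-support length scale. The
    correlation vanishes identically beyond 2c, which keeps prior
    error covariance matrices sparse."""
    z = np.abs(np.asarray(r, dtype=float)) / c
    f = np.zeros_like(z)
    near, far = z <= 1.0, (z > 1.0) & (z <= 2.0)
    zn, zf = z[near], z[far]
    f[near] = -0.25 * zn**5 + 0.5 * zn**4 + 0.625 * zn**3 - 5.0 / 3.0 * zn**2 + 1.0
    f[far] = (zf**5 / 12.0 - 0.5 * zf**4 + 0.625 * zf**3
              + 5.0 / 3.0 * zf**2 - 5.0 * zf + 4.0 - 2.0 / (3.0 * zf))
    return f

print(gaspari_cohn([0.0, 0.5, 1.5, 2.5], c=1.0))  # 1.0 at r=0, 0.0 beyond 2c
```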
Visual Representation of Eye Gaze Is Coded by a Nonopponent Multichannel System
ERIC Educational Resources Information Center
Calder, Andrew J.; Jenkins, Rob; Cassel, Anneli; Clifford, Colin W. G.
2008-01-01
To date, there is no functional account of the visual perception of gaze in humans. Previous work has demonstrated that left gaze and right gaze are represented by separate mechanisms. However, these data are consistent with either a multichannel system comprising separate channels for distinct gaze directions (e.g., left, direct, and right) or an…
NASA Astrophysics Data System (ADS)
Reutov, B. F.; Lazarev, M. V.; Ermakova, S. V.; Zisman, S. L.; Kaplanovich, L. S.; Svetushkov, V. V.
2016-07-01
In the 20th century, thermal power engineering in this country was oriented toward once-through cooling systems. More than 50% of the CHPP and NPP capacities with once-through cooling systems put into operation before the 1990s were large-scale water consumers but had minimal irretrievable water consumption. In 1995, the Water Code of the Russian Federation was adopted, in which restrictions on the application of once-through cooling systems for newly designed combined heat and power plants (CHPPs) were introduced for the first time. A ban on the application of once-through systems was imposed by the current Water Code of the Russian Federation (Federal law no. 74-FZ, Art. 60, Cl. 4) not only for new CHPPs but also for those to be modified. Clause 4 of Article 60 of the Water Code contravenes law no. 7-FZ "On Protection of the Environment," which has priority significance, since the water environment is only part of the natural environment; it also contravenes those articles of the Water Code that relate directly to electric power engineering, viz., Articles 46 and 62. In recent decades, the search for means to increase revenue charges and the economic pressure on the thermal power industry led to the statutory introduction of charges for the use of water by cooling systems, irrespective of their impact on the water quality of the source, the environment, the economic efficiency of power production, and the living conditions of the people. The long-range annual increase in water use charges forces the power generating companies to convert once-through service water supply installations to recirculating water supply systems and once-through-recirculating systems with multiple reuse of warm water, which drastically reduces the technical, economic, and ecological characteristics of power plant operation and also results in increased power rates for the population. This work comprehensively substantiates the demands of power engineering specialists that the ban on the development and construction of once-through service water supply systems be lifted, and presents proposals for new parameters, e.g., temperature and back pressure, for designing the low-potential equipment of steam-gas and steam-power plants.
Optimal Codes for the Burst Erasure Channel
NASA Technical Reports Server (NTRS)
Hamkins, Jon
2010-01-01
Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
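The block-interleaving idea is easy to demonstrate with the simplest MDS code, a single parity check (a toy numpy sketch, not the proposed deep-space design): codewords are written as rows and transmitted column-wise, so a burst of erasures is spread across rows, each row loses at most one symbol, and the parity recovers it.

```python
import numpy as np

K, DEPTH = 7, 8                        # SPC message symbols, interleaver depth

msg = np.random.randint(0, 256, (DEPTH, K), dtype=np.uint8)
parity = np.bitwise_xor.reduce(msg, axis=1, keepdims=True)
code = np.hstack([msg, parity])        # DEPTH codewords of length K + 1

tx = code.T.ravel().astype(int)        # transmit column-wise (interleave)
tx[20:20 + DEPTH] = -1                 # burst erasure of DEPTH symbols

rxw = tx.reshape(K + 1, DEPTH).T       # de-interleave back into codewords
for row in rxw:                        # at most one erasure lands per row
    (gap,) = np.where(row < 0)
    if gap.size:
        row[gap[0]] = np.bitwise_xor.reduce(np.delete(row, gap[0]))

assert (rxw[:, :K] == msg).all()       # all DEPTH codewords recovered
```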
Finite element methods in a simulation code for offshore wind turbines
NASA Astrophysics Data System (ADS)
Kurz, Wolfgang
1994-06-01
Offshore installation of wind turbines will become important for electricity supply in the future. Wind conditions above the sea are more favorable than on land, and appropriate locations on land are limited and restricted. The dynamic behavior of advanced wind turbines is investigated with digital simulations to reduce time and cost in the development and design phase. A wind turbine can be described and simulated as a multi-body system containing rigid and flexible bodies. Simulation of the non-linear motion of such a mechanical system using a multi-body system code is much faster than using a finite element code. However, a modal representation of the deformation field has to be incorporated in the multi-body system approach. The equations of motion of flexible bodies due to deformation are generated by finite element calculations. At Delft University of Technology the simulation code DUWECS has been developed, which simulates the non-linear behavior of wind turbines in the time domain. The wind turbine is divided into subcomponents which are represented by modules (e.g. rotor, tower, etc.).
Letter order is not coded by open bigrams
Kinoshita, Sachiko; Norris, Dennis
2013-01-01
Open bigram (OB) models (e.g., SERIOL: Whitney, 2001, 2008; Binary OB, Grainger & van Heuven, 2003; Overlap OB, Grainger et al., 2006; Local combination detector model, Dehaene et al., 2005) posit that letter order in a word is coded by a set of ordered letter pairs. We report three experiments using bigram primes in the same-different match task, investigating the effects of order reversal and the number of letters intervening between the letters in the target. Reversed bigrams (e.g., fo-OF, ob-ABOLISH) produced robust priming, in direct contradiction to the assumption that letter order is coded by the presence of ordered letter pairs. Also in contradiction to the core assumption of current open bigram models, non-contiguous bigrams spanning three letters in the target (e.g., bs-ABOLISH) showed robust priming effects, equivalent in size to contiguous bigrams (e.g., bo-ABOLISH). These results question the role of open bigrams in coding letter order. PMID:23914048
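For concreteness, the ordered letter pairs that such schemes assume can be enumerated directly (a toy sketch; the cited models differ in how they weight or restrict the pairs):

```python
from itertools import combinations

def open_bigrams(word, max_gap=None):
    """All ordered letter pairs of a word, optionally keeping only
    pairs separated by at most max_gap intervening letters."""
    return [word[i] + word[j]
            for i, j in combinations(range(len(word)), 2)
            if max_gap is None or j - i - 1 <= max_gap]

print(open_bigrams("abolish", max_gap=2))
# 'bs' (three intervening letters) is excluded at max_gap=2, yet the
# experiments found it primes ABOLISH about as well as 'bo' does.
```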
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alfonsi, Andrea; Rabiti, Cristian; Mandelli, Diego
2016-06-01
RAVEN is a software framework able to perform parametric and stochastic analysis based on the response of complex system codes. The initial development was aimed at providing dynamic risk analysis capabilities to the thermal-hydraulic code RELAP-7, currently under development at Idaho National Laboratory (INL). Although the initial goal has been fully accomplished, RAVEN is now a multi-purpose stochastic and uncertainty quantification platform, capable of communicating with any system code. In fact, the provided Application Programming Interfaces (APIs) allow RAVEN to interact with any code as long as all the parameters that need to be perturbed are accessible by input files or via python interfaces. RAVEN is capable of investigating system response and exploring the input space using various sampling schemes such as Monte Carlo, grid, or Latin hypercube. However, RAVEN's strength lies in its system feature discovery capabilities, such as constructing limit surfaces, separating regions of the input space leading to system failure, and using dynamic supervised learning techniques. The development of RAVEN started in 2012 when, within the Nuclear Energy Advanced Modeling and Simulation (NEAMS) program, the need to provide a modern risk evaluation framework arose. RAVEN's principal assignment is to provide the necessary software and algorithms in order to employ the concepts developed by the Risk Informed Safety Margin Characterization (RISMC) program. RISMC is one of the pathways defined within the Light Water Reactor Sustainability (LWRS) program. In the RISMC approach, the goal is not just to identify the frequency of an event potentially leading to a system failure, but the proximity (or lack thereof) to key safety-related events. Hence, the approach is interested in identifying and increasing the safety margins related to those events. A safety margin is a numerical value quantifying the probability that a safety metric (e.g. peak pressure in a pipe) is exceeded under certain conditions. Most of the capabilities, implemented with RELAP-7 as the principal focus, are easily deployable to other system codes. For this reason, several side activities have been carried out or are currently ongoing to couple RAVEN with different software (e.g. RELAP5-3D, any MOOSE-based App, etc.). The aim of this document is to provide a set of commented examples that can help the user to become familiar with the RAVEN code usage.
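The sampling-plus-limit-surface workflow can be mimicked in a few lines (a schematic numpy stand-in, not RAVEN's actual API; the response function, thresholds, and distributions below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def system_code(t_init, power):
    """Stand-in for an expensive system-code run: returns a peak
    temperature from a made-up linear response surface."""
    return 800.0 + 0.9 * t_init + 450.0 * power

# Monte Carlo exploration of a 2-D input space.
t_init = rng.normal(300.0, 20.0, 10_000)   # initial temperature [K]
power = rng.uniform(0.8, 1.2, 10_000)      # relative power level [-]
peak = system_code(t_init, power)

FAILURE = 1_600.0                          # safety-metric threshold [K]
print("failure probability ~", np.mean(peak > FAILURE))

# The limit surface is the locus system_code(x) == FAILURE; for this
# linear surrogate it is the line t_init = (FAILURE - 800 - 450*power) / 0.9.
```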
Gomez-Ramirez, Jaime; Costa, Tommaso
2017-12-01
Here we investigate whether systems that minimize prediction error, e.g. predictive coding, can also show creativity, or whether, on the contrary, prediction error minimization is unsuited to the design of systems that respond in creative ways to non-recurrent problems. We argue that there is a key ingredient, overlooked by researchers, that needs to be incorporated to understand intelligent behavior in biological and technical systems. This ingredient is boredom. We propose a mathematical model based on the Black-Scholes-Merton equation which provides mechanistic insights into the interplay between boredom and prediction pleasure as the key drivers of behavior. Copyright © 2017 Elsevier B.V. All rights reserved.
Benchmark Problems of the Geothermal Technologies Office Code Comparison Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study: benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research: stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jan; Ferrada, Juan J; Curd, Warren
During inductive plasma operation of ITER, fusion power will reach 500 MW with an energy multiplication factor of 10. The heat will be transferred by the Tokamak Cooling Water System (TCWS) to the environment using the secondary cooling system. Plasma operations are inherently safe even under the most severe postulated accident condition: a large in-vessel break that results in a loss-of-coolant accident. A functioning cooling water system is not required to ensure safe shutdown. Even though ITER is inherently safe, TCWS equipment (e.g., heat exchangers, piping, pressurizers) is classified as safety important components. This is because the water is predicted to contain low levels of radionuclides (e.g., activated corrosion products, tritium) with activity levels high enough to require the design of components to be in accordance with French regulations for nuclear pressure equipment, i.e., the French Order dated 12 December 2005 (ESPN). ESPN has extended the practical application of the methodology established by the Pressure Equipment Directive (97/23/EC) to nuclear pressure equipment, under French Decree 99-1046 dated 13 December 1999, and the Order dated 21 December 1999 (ESP). ASME codes and supplementary analyses (e.g., Failure Modes and Effects Analysis) will be used to demonstrate that the TCWS equipment meets these essential safety requirements. TCWS is being designed to provide not only cooling, with a capacity of approximately 1 GW energy removal, but also elevated-temperature baking of the first-wall/blanket, vacuum vessel, and divertor. Additional TCWS functions include chemical control of water, draining and drying for maintenance, and facilitation of leak detection/localization. The TCWS interfaces with the majority of ITER systems, including the secondary cooling system. U.S. ITER is responsible for design, engineering, and procurement of the TCWS, with industry support from an Engineering Services Organization (ESO) (AREVA Federal Services, with support from Northrop Grumman and OneCIS). The ITER International Organization (ITER-IO) is responsible for design oversight and equipment installation in Cadarache, France. TCWS equipment will be fabricated using ASME design codes with quality assurance and oversight by an Agreed Notified Body (approved by the French regulator) that will ensure regulatory compliance. This paper describes the TCWS design and how U.S. ITER and fabricators will use ASME codes to comply with EU Directives and French Orders and Decrees.
USDA-ARS?s Scientific Manuscript database
The environmental modeling community has historically been concerned with the proliferation of models and the effort associated with collective model development tasks (e.g., code generation, data provisioning and transformation, etc.). Environmental modeling frameworks (EMFs) have been developed to...
RELAP-7 Code Assessment Plan and Requirement Traceability Matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.
2016-10-01
RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods, and physical models over the last decades. Recently, INL has also been working to establish a code assessment plan, which aims to ensure an improved final product quality through the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements. To this end, we first survey the literature (i.e., international/domestic reports and research articles) addressing the desirable features generally required for advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate and identify the code development process and its present capability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Epiney, A.; Canepa, S.; Zerkak, O.
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either only system T-H evaluations or requiring interfaces to e.g. detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each of the dimensions represents the evolution of a specific analysis aspect, including e.g. code version, transient-specific simulation methodology, and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effects) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into defining reference schemes to be applied for downstream predictive simulations. In order to illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here. First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rohatgi, U.S.; Cheng, H.S.; Khan, H.J.
This document is the User's Manual for RAMONA-4B, the systems transient code for the Boiling Water Reactor (BWR) and Simplified Boiling Water Reactor (SBWR). The code uses a three-dimensional neutron-kinetics model coupled with a multichannel, nonequilibrium, drift-flux, two-phase flow model of the thermal hydraulics of the reactor vessel. The code is designed to analyze a wide spectrum of BWR core and system transients. Chapter 1 gives an overview of the code's capabilities and limitations; Chapter 2 describes the code's structure, lists major subroutines, and discusses the computer requirements. Chapter 3 covers the code and auxiliary codes, with instructions for running RAMONA-4B on Sun SPARC and IBM workstations. Chapter 4 contains component descriptions and detailed card-by-card input instructions. Chapter 5 provides samples of the tabulated output for the steady-state and transient calculations and discusses the plotting procedures for both. Three appendices contain important user and programmer information: lists of plot variables (Appendix A), listings of the input deck for the sample problem (Appendix B), and a description of the plotting program PAD (Appendix C). 24 refs., 18 figs., 11 tabs.
Final Technical Report for GO17004 Regulatory Logic: Codes and Standards for the Hydrogen Economy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakarado, Gary L.
The objectives of this project are to: develop a robust supporting research and development program to provide critical hydrogen behavior data and a detailed understanding of hydrogen combustion and safety across a range of scenarios, needed to establish setback distances in building codes and minimize the overall data gaps in code development; support and facilitate the completion of technical specifications by the International Organization for Standardization (ISO) for gaseous hydrogen refueling (TS 20012) and standards for on-board liquid (ISO 13985) and gaseous or gaseous blend (ISO 15869) hydrogen storage by 2007; support and facilitate the effort, led by the NFPA, to complete the draft Hydrogen Technologies Code (NFPA 2) by 2008; with experimental data and input from Technology Validation Program element activities, support and facilitate the completion of standards for bulk hydrogen storage (e.g., NFPA 55) by 2008; facilitate the adoption of the most recently available model codes (e.g., from the International Code Council [ICC]) in key regions; complete preliminary research and development on hydrogen release scenarios to support the establishment of setback distances in building codes and provide a sound basis for model code development and adoption; support and facilitate the development of Global Technical Regulations (GTRs) by 2010 for hydrogen vehicle systems under the United Nations Economic Commission for Europe, World Forum for Harmonization of Vehicle Regulations and Working Party on Pollution and Energy Program (ECE-WP29/GRPE); and support and facilitate the completion by 2012 of the codes and standards needed for the early commercialization and market entry of hydrogen energy technologies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ward, Robert Cameron; Steiner, Don
2004-06-15
The generation of runaway electrons during a thermal plasma disruption is a concern for the safe and economical operation of a tokamak power system. Runaway electrons have high energies, 10 to 300 MeV, and may cause extensive damage to plasma-facing components (PFCs) through large temperature increases, melting of metallic components, surface erosion, and possible burnout of coolant tubes. The EPQ code system was developed to simulate the thermal response of PFCs to a runaway electron impact. The EPQ code system consists of several parts: UNIX scripts that control the operation of an electron-photon Monte Carlo code to calculate the interaction of the runaway electrons with the plasma-facing materials; a finite difference code to calculate the thermal response, melting, and surface erosion of the materials; a code to process, scale, transform, and convert the electron Monte Carlo data to volumetric heating rates for use in the thermal code; and several minor and auxiliary codes for the manipulation and postprocessing of the data. The electron-photon Monte Carlo code used was Electron-Gamma-Shower (EGS), developed and maintained by the National Research Council of Canada. The Quick-Therm-Two-Dimensional-Nonlinear (QTTN) thermal code solves the two-dimensional cylindrical modified heat conduction equation using the QUICKEST third-order accurate and stable explicit finite difference method and is capable of tracking melting or surface erosion. The EPQ code system is validated using a series of analytical solutions and simulations of experiments. Verification of the QTTN thermal code against analytical solutions shows that the code with the QUICKEST method is better than 99.9% accurate. Benchmarking of the EPQ code system and QTTN against experiments showed that QTTN's erosion-tracking method is accurate within 30% and that EPQ is able to predict the occurrence of melting within the proper time constraints. Together, QTTN and EPQ are verified and validated as able to calculate the temperature distribution, phase change, and surface erosion successfully.
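To illustrate the kind of explicit finite-difference update a thermal code like QTTN performs, here is a minimal sketch. Note the hedges: QTTN uses the third-order QUICKEST scheme on a cylindrical grid, whereas this sketch uses the simpler first-order FTCS update on a Cartesian grid with placeholder material properties.

```python
import numpy as np

# Illustrative explicit finite-difference heat-conduction step (FTCS),
# a simpler stand-in for QTTN's third-order QUICKEST cylindrical solver.
alpha = 1e-5                       # thermal diffusivity, m^2/s (placeholder)
dx = dy = 1e-3                     # grid spacing, m
dt = 0.05 * dx**2 / alpha          # well below the FTCS limit dt <= dx^2/(4*alpha)

T = np.full((50, 50), 300.0)       # initial temperature field, K
T[0, :] = 1500.0                   # heated surface (e.g., runaway-electron strike)

def ftcs_step(T):
    Tn = T.copy()
    Tn[1:-1, 1:-1] += alpha * dt * (
        (T[2:, 1:-1] - 2*T[1:-1, 1:-1] + T[:-2, 1:-1]) / dx**2 +
        (T[1:-1, 2:] - 2*T[1:-1, 1:-1] + T[1:-1, :-2]) / dy**2
    )
    return Tn

for _ in range(1000):
    T = ftcs_step(T)
```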
2018 Ground Robotics Capabilities Conference and Exhibition
2018-04-11
Transportable Robot System (MTRS) Inc 1 Non-standard Equipment (approved) Explosive Ordnance Disposal Common Robotic System-Heavy (CRS-H) Inc 1 AROC: 3-Star...and engineering • AI risk mitigation methodologies and techniques are at best immature – e.g., V&V; probabilistic software analytics; code-level...controller to minimize potential UxS mishaps and unauthorized Command and Control (C2). • PSP-10 – Ensure that software systems which exhibit non
Numerical Simulation Applications in the Design of EGS Collab Experiment 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnston, Henry; White, Mark D.; Fu, Pengcheng
The United States Department of Energy, Geothermal Technologies Office (GTO) is funding a collaborative investigation of enhanced geothermal systems (EGS) processes at the meso-scale. This study, referred to as the EGS Collab project, is a unique opportunity for scientists and engineers to investigate the creation of fracture networks and the circulation of fluids across those networks under in-situ stress conditions. The EGS Collab project is envisioned to comprise three experiments; the site for the first experiment is on the 4850 Level (4,850 feet below ground surface) in phyllite of the Precambrian Poorman formation at the Sanford Underground Research Facility, located at the former Homestake Gold Mine in Lead, South Dakota. Principal objectives of the project are to develop a number of intermediate-scale field sites and to conduct well-controlled in situ experiments focused on rock fracture behavior and permeability enhancement. Data generated during these experiments will be compared against predictions of a suite of computer codes specifically designed to solve problems involving coupled thermal, hydrological, geomechanical, and geochemical processes. Comparisons between experimental and numerical simulation results will provide code developers with direction for improvements and verification of process models, build confidence in the suite of available numerical tools, and ultimately identify critical future development needs for the geothermal modeling community. Moreover, conducting thorough comparisons of models, modeling approaches, measurement approaches, and measured data via the EGS Collab project will serve to identify techniques that are most likely to succeed at the Frontier Observatory for Research in Geothermal Energy (FORGE), the GTO's flagship EGS research effort. As noted, outcomes from the EGS Collab project experiments will serve as benchmarks for computer code verification, but numerical simulation additionally plays an essential role in designing these meso-scale experiments. This paper describes specific numerical simulations supporting the design of Experiment 1, a field test involving hydraulic stimulation of two fractures from notched sections of the injection borehole and fluid circulation between sub-horizontal injection and production boreholes in each fracture individually and collectively, including the circulation of chilled water. Whereas the mine drift allows accurate and close placement of monitoring instrumentation around the developed fractures, active ventilation in the drift has cooled the rock mass within the experimental volume. Numerical simulations were executed to predict seismic events and magnitudes during stimulation, initial fracture orientations for smooth horizontal wellbores, pressure requirements for fracture initiation from notched wellbores, fracture propagation during stimulation between the injection and production boreholes, tracer travel times between the injection and production boreholes, produced fluid temperatures with chilled water injection, pressure limits on fluid circulation to avoid fracture growth, the temperature environment surrounding the 4850 Level drift, and fracture propagation within a stress field altered by drift excavation, ventilation cooling, and dewatering.
Layered Wyner-Ziv video coding.
Xu, Qian; Xiong, Zixiang
2006-12-01
Following recent theoretical works on successive Wyner-Ziv coding (WZC), we propose a practical layered Wyner-Ziv video coder using the DCT, nested scalar quantization, and irregular LDPC code based Slepian-Wolf coding (or lossless source coding with side information at the decoder). Our main novelty is to use the base layer of a standard scalable video coder (e.g., MPEG-4/H.26L FGS or H.263+) as the decoder side information and perform layered WZC for quality enhancement. Similar to FGS coding, there is no performance difference between layered and monolithic WZC when the enhancement bitstream is generated in our proposed coder. Using an H.26L coded version as the base layer, experiments indicate that WZC gives slightly worse performance than FGS coding when the channel (for both the base and enhancement layers) is noiseless. However, when the channel is noisy, extensive simulations of video transmission over wireless networks conforming to the CDMA2000 1X standard show that H.26L base layer coding plus Wyner-Ziv enhancement layer coding are more robust against channel errors than H.26L FGS coding. These results demonstrate that layered Wyner-Ziv video coding is a promising new technique for video streaming over wireless networks.
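The nested scalar quantization at the heart of practical Wyner-Ziv coding can be sketched in a few lines: the encoder transmits only a coset index, and the decoder resolves the ambiguity with its side information. The step size and coset modulus below are illustrative choices, not the parameters of the coder described above.

```python
import numpy as np

# Toy nested scalar quantization with decoder side information.
q, M = 1.0, 8                              # fine step and nesting (coset) modulus

def encode(x):
    return int(np.round(x / q)) % M        # transmit only the coset index

def decode(coset, y):
    # Pick the codeword in the coset closest to the side information y.
    k = int(np.round((y / q - coset) / M))
    return (k * M + coset) * q

x = 13.7                                   # source sample (e.g., a DCT coefficient)
y = x + np.random.normal(0, 0.8)           # correlated side information at decoder
print(decode(encode(x), y))                # recovers ~x while |x - y| < M*q/2
```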
48 CFR 452.219-70 - Size Standard and NAICS Code Information.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System. Size Standard and NAICS Code Information. 452.219-70, Section 452.219-70, Federal Acquisition Regulations System, DEPARTMENT OF... System Code(s) and business size standard(s) describing the products and/or services to be acquired under...
Numerical Simulation of the Emergency Condenser of the SWR-1000
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krepper, Eckhard; Schaffrath, Andreas; Aszodi, Attila
The SWR-1000 is a new, innovative boiling water reactor (BWR) concept developed by Siemens AG. This concept is characterized in particular by passive safety systems (e.g., four emergency condensers, four building condensers, eight passive pressure pulse transmitters, and six gravity-driven core-flooding lines). In the framework of the BWR Physics and Thermohydraulic Complementary Action to the European Union BWR Research and Development Cluster, emergency condenser tests were performed by Forschungszentrum Juelich at the NOKO test facility. Posttest calculations with ATHLET are presented, which aim at determining the removable power of the emergency condenser and its operation mode. The one-dimensional thermal-hydraulic code ATHLET was extended with the module KONWAR for calculating the heat transfer coefficient during condensation in horizontal tubes. In addition, results of conventional finite difference calculations using the code CFX-4 are presented, which investigate the natural convection during the heatup process on the secondary side of the NOKO test facility.
Patient-specific dosimetry based on quantitative SPECT imaging and 3D-DFT convolution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Akabani, G.; Hawkins, W.G.; Eckblade, M.B.
1999-01-01
The objective of this study was to validate the use of a 3-D discrete Fourier transform (3D-DFT) convolution method to carry out the dosimetry for I-131 for soft tissues in radioimmunotherapy procedures. To validate this convolution method, mathematical and physical phantoms were used as a basis of comparison with Monte Carlo transport (MCT) calculations, which were carried out using the EGS4 code system. The mathematical phantom consisted of a sphere containing uniform and nonuniform activity distributions. The physical phantom consisted of a cylinder containing uniform and nonuniform activity distributions. Quantitative SPECT reconstruction was carried out using the Circular Harmonic Transform (CHT) algorithm.
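The essence of the 3D-DFT convolution method is that absorbed dose can be computed as the convolution of the activity distribution with a dose point kernel, evaluated in Fourier space. A minimal numpy sketch follows; the kernel here is a made-up radially decaying placeholder rather than a physical I-131 kernel.

```python
import numpy as np

# Dose as a 3D-DFT convolution of an activity map with a point kernel.
A = np.zeros((32, 32, 32))
A[16, 16, 16] = 1.0                                   # activity distribution
z, y, x = np.ogrid[-16:16, -16:16, -16:16]
r = np.sqrt(x**2 + y**2 + z**2) + 0.5
K = np.exp(-r) / r**2                                 # placeholder dose point kernel
K /= K.sum()

# Circular convolution via FFT; zero-pad in practice to avoid wrap-around.
dose = np.real(np.fft.ifftn(np.fft.fftn(A) * np.fft.fftn(np.fft.ifftshift(K))))
```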
Caro, I; Stiles, W B
1997-01-01
Translating a verbal coding system from one language to another can yield unexpected insights into the process of communication in different cultures. This paper describes the problems and understandings we encountered as we translated a verbal response modes (VRM) taxonomy from English into Spanish. Standard translations of text (e.g., psychotherapeutic dialogue) systematically change the form of certain expressions, so supposedly equivalent expressions had different VRM codings in the two languages. Prominent examples of English forms whose translation had different codes in Spanish included tags, question forms, and "let's" expressions. Insofar as participants use such forms to convey nuances of their relationship, standard translations of counseling or psychotherapy sessions or other conversations may systematically misrepresent the relationship between the participants. The differences revealed in translating the VRM system point to subtle but important differences in the degrees of verbal directiveness and inclusion in English versus Spanish, which converge with other observations of differences in individualism and collectivism between Anglo and Hispanic cultures.
76 FR 44535 - Testing of Bisphenol A
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-26
... Web site is an "anonymous access" system, which means EPA will not know your identity or contact... INFORMATION: I. General Information A. Does this action apply to me? You may be potentially affected by this.... Paper recyclers (NAICS codes 322110, 322121, 3222), e.g., pulp mills, paper (except newsprint) mills...
Proactive Aging Among Holocaust Survivors: Striving for the Best Possible Life.
Elran-Barak, Roni; Barak, Adi; Lomranz, Jacob; Benyamini, Yael
2016-10-14
To investigate methods that older Holocaust survivors and their age peers use in order to maintain the best possible life and to examine associations between these methods and subjective well-being. Participants were 481 older Israelis (mean age 77.4 ± 6.7 years): Holocaust survivors (n = 164), postwar immigrants (n = 183), and prewar immigrants (n = 134). Measures included sociodemographics and indicators of health and well-being. Respondents were asked to answer an open-ended question: "What are the methods you use to maintain the best possible life?". Answers were coded into eight categories. Holocaust survivors were significantly less likely to mention methods coded as "Enjoyment" (32.3%) relative to postwar (43.7%) and prewar (46.2%) immigrants and significantly more likely to mention methods coded as "Maintaining good health" (39.0%) relative to postwar (27.9%) and prewar (21.6%) immigrants. Controlling for sociodemographics and health status, Holocaust survivors still differed from their peers. Aging Holocaust survivors tended to focus on more essential/fundamental needs (e.g., health), whereas their peers tended to focus on a wider range of needs (e.g., enjoyment) in their effort to maintain the best possible life. Our findings may add to the proactivity model of successful aging by suggesting that aging individuals in Israel use both proactive (e.g., health) and cognitive (e.g., accepting the present) adaptation methods, regardless of their reported history during the war.
Parrish, Robert M; Burns, Lori A; Smith, Daniel G A; Simmonett, Andrew C; DePrince, A Eugene; Hohenstein, Edward G; Bozkaya, Uğur; Sokolov, Alexander Yu; Di Remigio, Roberto; Richard, Ryan M; Gonthier, Jérôme F; James, Andrew M; McAlexander, Harley R; Kumar, Ashutosh; Saitow, Masaaki; Wang, Xiao; Pritchard, Benjamin P; Verma, Prakash; Schaefer, Henry F; Patkowski, Konrad; King, Rollin A; Valeev, Edward F; Evangelista, Francesco A; Turney, Justin M; Crawford, T Daniel; Sherrill, C David
2017-07-11
Psi4 is an ab initio electronic structure program providing methods such as Hartree-Fock, density functional theory, configuration interaction, and coupled-cluster theory. The 1.1 release represents a major update meant to automate complex tasks, such as geometry optimization using complete-basis-set extrapolation or focal-point methods. Conversion of the top-level code to a Python module means that Psi4 can now be used in complex workflows alongside other Python tools. Several new features have been added with the aid of libraries providing easy access to techniques such as density fitting, Cholesky decomposition, and Laplace denominators. The build system has been completely rewritten to simplify interoperability with independent, reusable software components for quantum chemistry. Finally, a wide range of new theoretical methods and analyses have been added to the code base, including functional-group and open-shell symmetry adapted perturbation theory, density-fitted coupled cluster with frozen natural orbitals, orbital-optimized perturbation and coupled-cluster methods (e.g., OO-MP2 and OO-LCCD), density-fitted multiconfigurational self-consistent field, density cumulant functional theory, algebraic-diagrammatic construction excited states, improvements to the geometry optimizer, and the "X2C" approach to relativistic corrections, among many other improvements.
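For readers unfamiliar with the Python-module usage highlighted in this release, a minimal session might look like the following; the molecule, basis set, and methods are arbitrary examples rather than anything specific to the 1.1 release notes.

```python
import psi4

# Minimal illustration of using Psi4 as a Python module.
psi4.set_memory("500 MB")

h2o = psi4.geometry("""
0 1
O
H 1 0.96
H 1 0.96 2 104.5
""")

psi4.set_options({"basis": "cc-pvdz"})
e_scf = psi4.energy("scf")        # Hartree-Fock energy
e_ccsd = psi4.energy("ccsd")      # coupled-cluster on the same molecule
print(e_scf, e_ccsd)
```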
Li, Ying
2016-09-16
Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Zhou, Guang-xiang; Gao, Wen-chun; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-01-01
According to the requirements of the increasing development of optical transmission systems, a novel construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on a subgroup of the finite-field multiplicative group is proposed. This construction method effectively avoids girth-4 phenomena and has advantages such as simpler construction, easier implementation, lower encoding/decoding complexity, better girth properties, and more flexible adjustment of code length and code rate. The simulation results show that the error correction performance of the QC-LDPC(3780,3540) code with a code rate of 93.7% constructed by the proposed method is excellent: its net coding gain is respectively 0.3 dB, 0.55 dB, 1.4 dB, and 1.98 dB higher than those of the QC-LDPC(5334,4962) code constructed by the method based on inverse-element characteristics of the finite-field multiplicative group, the SCG-LDPC(3969,3720) code constructed by the systematically constructed Gallager (SCG) random construction method, the LDPC(32640,30592) code in ITU-T G.975.1, and the classic RS(255,239) code of ITU-T G.975 that is widely used in optical transmission systems, all at a bit error rate (BER) of 10^-7. Therefore, the constructed QC-LDPC(3780,3540) code is well suited to optical transmission systems.
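Whether a QC-LDPC exponent matrix expands into a Tanner graph free of girth-4 cycles can be checked with the standard condition on 2x2 submatrices of the exponent matrix (Fossorier's condition). The sketch below implements that check; the example matrix and circulant size are arbitrary and do not reproduce the paper's subgroup-based construction.

```python
from itertools import combinations

# Girth-4 test for a QC-LDPC exponent matrix E with circulant size L:
# the expanded code has no length-4 cycle iff, for every row pair (a, b)
# and column pair (c, d),
#     E[a][c] - E[a][d] + E[b][d] - E[b][c] != 0 (mod L).
def has_girth_four(E, L):
    rows, cols = len(E), len(E[0])
    for a, b in combinations(range(rows), 2):
        for c, d in combinations(range(cols), 2):
            if (E[a][c] - E[a][d] + E[b][d] - E[b][c]) % L == 0:
                return True
    return False

E = [[1, 2, 4, 8],        # arbitrary illustrative exponent matrix
     [3, 6, 12, 24],
     [9, 18, 36, 72]]
print(has_girth_four(E, L=31))   # False: this expansion is girth-4-free
```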
Perceiving groups: The people perception of diversity and hierarchy.
Phillips, L Taylor; Slepian, Michael L; Hughes, Brent L
2018-05-01
The visual perception of individuals has received considerable attention (visual person perception), but little social psychological work has examined the processes underlying the visual perception of groups of people (visual people perception). Ensemble-coding is a visual mechanism that automatically extracts summary statistics (e.g., average size) of lower-level sets of stimuli (e.g., geometric figures), and also extends to the visual perception of groups of faces. Here, we consider whether ensemble-coding supports people perception, allowing individuals to form rapid, accurate impressions about groups of people. Across nine studies, we demonstrate that people visually extract high-level properties (e.g., diversity, hierarchy) that are unique to social groups, as opposed to individual persons. Observers rapidly and accurately perceived group diversity and hierarchy, or variance across race, gender, and dominance (Studies 1-3). Further, results persist when observers are given very short display times, backward pattern masks, color- and contrast-controlled stimuli, and absolute versus relative response options (Studies 4a-7b), suggesting robust effects supported specifically by ensemble-coding mechanisms. Together, we show that humans can rapidly and accurately perceive not only individual persons, but also emergent social information unique to groups of people. These people perception findings demonstrate the importance of visual processes for enabling people to perceive social groups and behave effectively in group-based social interactions.
76 FR 20611 - Electronic On-Board Recorders and Hours of Service Supporting Documents
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-13
..., used, and disseminated (e.g., in post-accident litigation or in personal litigation such as divorce proceedings). Based on the factors above, the Agency has determined that the statute requires it to protect...
Zaker, Neda; Zehtabian, Mehdi; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S
2016-03-08
Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross-sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross-sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code - MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low-energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes.
1982-10-01
e.g., providing voters in TMR systems and detection-switching requirements in standby-sparing systems. The application of mathematical theory of...and time redundancy required for error detection and correction, are interrelated. Mathematical modeling, when applied to fault tolerant systems, can... 1.1 Some Fundamental Principles... 1.2 Mathematical Theory of
Fracturing And Liquid CONvection
DOE Office of Scientific and Technical Information (OSTI.GOV)
2012-02-29
FALCON has been developed to enable simulation of the tightly coupled fluid-rock behavior in hydrothermal and engineered geothermal system (EGS) reservoirs, targeting the dynamics of fracture stimulation, fluid flow, rock deformation, and heat transport in a single integrated code, with the ultimate goal of providing a tool that can be used to test the viability of EGS in the United States and worldwide. Reliable reservoir performance predictions for EGS require accurate and robust modeling of the coupled thermal-hydrological-mechanical processes. Conventionally, these types of problems are solved using operator-splitting methods, usually by coupling a subsurface flow and heat transport simulator with a solid mechanics simulator via input files. FALCON eliminates the need for operator-splitting methods to simulate these systems, and the scalability of the underlying MOOSE architecture allows these tightly coupled processes to be simulated at the reservoir scale, allowing examination of the system as a whole (something the operator-splitting methodologies generally cannot do).
Transformation of Graphical ECA Policies into Executable PonderTalk Code
NASA Astrophysics Data System (ADS)
Romeikat, Raphael; Sinsel, Markus; Bauer, Bernhard
Rules are becoming more and more important in business modeling and systems engineering and are recognized as a high-level programming paradigm. For the effective development of rules it is desirable to start at a high level, e.g., with graphical rules, and to refine them into code of a particular rule language for implementation purposes later. A model-driven approach is presented in this paper to transform graphical rules into executable code in a fully automated way. The focus is on event-condition-action policies as a special rule type. These are modeled graphically and translated into the PonderTalk language. The approach may be extended to integrate other rule types and languages as well.
Improving Shipbuilding Productivity Through Use of Standards
1978-06-01
shipbuilding industry. In addition to the more familiar standards (e.g., ASME Boiler and Pressure Vessel Code, IEEE-45, etc.) this will include an...will simply reference valid standards as appropriate (e.g., ASME Boiler and Pressure Vessel Code), and will hopefully work hand in hand with the
Chen, Chia-Lin; Wang, Yuchuan; Lee, Jason J S; Tsui, Benjamin M W
2008-07-01
The authors developed and validated an efficient Monte Carlo simulation (MCS) workflow to facilitate small animal pinhole SPECT imaging research. This workflow seamlessly integrates two existing MCS tools: simulation system for emission tomography (SimSET) and GEANT4 application for emission tomography (GATE). Specifically, we retained the strength of GATE in describing complex collimator/detector configurations to meet the anticipated needs for studying advanced pinhole collimation (e.g., multipinhole) geometry, while inserting the fast SimSET photon history generator (PHG) to circumvent the relatively slow GEANT4 MCS code used by GATE in simulating photon interactions inside voxelized phantoms. For validation, data generated from this new SimSET-GATE workflow were compared with those from GATE-only simulations as well as experimental measurements obtained using a commercial small animal pinhole SPECT system. Our results showed excellent agreement (e.g., in system point response functions and energy spectra) between SimSET-GATE and GATE-only simulations, and, more importantly, a significant computational speedup (up to approximately 10-fold) provided by the new workflow. Satisfactory agreement between MCS results and experimental data were also observed. In conclusion, the authors have successfully integrated SimSET photon history generator in GATE for fast and realistic pinhole SPECT simulations, which can facilitate research in, for example, the development and application of quantitative pinhole and multipinhole SPECT for small animal imaging. This integrated simulation tool can also be adapted for studying other preclinical and clinical SPECT techniques.
Illusory conjunctions in simultanagnosia: coarse coding of visual feature location?
McCrea, Simon M; Buxbaum, Laurel J; Coslett, H Branch
2006-01-01
Simultanagnosia is a disorder characterized by an inability to see more than one object at a time. We report a simultanagnosic patient (ED) with bilateral posterior infarctions who produced frequent illusory conjunctions on tasks involving form and surface features (e.g., a red T) and form alone. ED also produced "blend" errors in which features of one familiar perceptual unit appeared to migrate to another familiar perceptual unit (e.g., "RO" read as "PQ"). ED often misread scrambled letter strings as a familiar word (e.g., "hmoe" read as "home"). Moreover, ED's success in reporting two letters in an array was inversely related to the distance between the letters. These findings are consistent with the hypothesis that ED's illusory conjunctions reflect coarse coding of visual feature location that is ameliorated in part by top-down information from object and word recognition systems; the findings are also consistent, however, with Treisman's Feature Integration Theory. Finally, the data provide additional support for the claim that the dorsal parieto-occipital cortex is implicated in the binding of visual feature information.
A Response Surface Methodology for Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Altus, Troy David; Sobieski, Jaroslaw (Technical Monitor)
2002-01-01
The report describes a new method for the optimization of engineering systems, such as aerospace vehicles, whose design must harmonize a number of subsystems and physical phenomena, each represented by a separate computer code, e.g., aerodynamics, structures, propulsion, performance, etc. To represent the system's internal couplings, the codes receive output from other codes as part of their inputs. The system analysis and optimization task is decomposed into subtasks that can be executed concurrently, each subtask conducted using local state and design variables while holding constant a set of the system-level design variables. The subtask results are stored in the form of Response Surfaces (RS) fitted in the space of the system-level variables, to be used as subtask surrogates in a system-level optimization whose purpose is to optimize the system objective(s) and to reconcile the system's internal couplings. By virtue of decomposition and execution concurrency, the method enables a broad work front in the organization of an engineering project involving a number of specialty groups that might be geographically dispersed, and it exploits the contemporary computing technology of massively concurrent and distributed processing. The report includes a demonstration test case of a supersonic business jet design.
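The core RS mechanic, fitting a cheap surrogate of an expensive subtask over the system-level variables and querying it during system-level optimization, can be sketched as follows. The quadratic basis, the stand-in subtask function, and the sampling plan are illustrative assumptions, not the report's actual formulation.

```python
import numpy as np

def subtask(x):
    # Stand-in for an expensive discipline analysis (e.g., a structures code).
    return 1.0 + 2.0*x[0] - 0.5*x[1] + 0.3*x[0]*x[1] + 0.8*x[1]**2

X = np.random.uniform(-1, 1, size=(30, 2))      # samples over system-level vars
y = np.array([subtask(x) for x in X])

def quad_features(X):
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])

coef, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)

def surrogate(x):
    # Cheap response-surface evaluation for the system-level optimizer.
    return quad_features(np.atleast_2d(x)) @ coef

print(surrogate([0.2, -0.3]), subtask([0.2, -0.3]))
```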
Position coding effects in a 2D scenario: the case of musical notation.
Perea, Manuel; García-Chamorro, Cristina; Centelles, Arnau; Jiménez, María
2013-07-01
How does the cognitive system encode the location of objects in a visual scene? In the past decade, this question has attracted much attention in the field of visual-word recognition (e.g., "jugde" is perceptually very close to "judge"). Letter transposition effects have been explained in terms of perceptual uncertainty or shared "open bigrams". In the present study, we focus on note position coding in music reading (i.e., a 2D scenario). The usual way to display music is the staff (i.e., a set of 5 horizontal lines and their resultant 4 spaces). When reading musical notation, it is critical to identify not only each note (temporal duration), but also its pitch (y-axis) and its temporal sequence (x-axis). To examine note position coding, we employed a same-different task in which two briefly and consecutively presented staves contained four notes. The experiment was conducted with experts (musicians) and non-experts (non-musicians). For the "different" trials, the critical conditions involved staves in which two internal notes were switched vertically, horizontally, or fully transposed, together with the appropriate control conditions. Results revealed that note position coding was only approximate at the early stages of processing and that this encoding process was modulated by expertise. We examine the implications of these findings for models of object position encoding.
Waiblinger, Christian; Brugger, Dominik; Schwarz, Cornelius
2015-01-01
Which physical parameter of vibrissa deflections is extracted by the rodent tactile system for discrimination? In particular, it remains unclear whether perception has access to instantaneous kinematic parameters (i.e., the details of the trajectory) or relies on temporal integration of the movement trajectory, such as frequency (e.g., spectral information) and intensity (e.g., mean speed). Here, we use a novel detection-of-change paradigm in head-fixed rats, which presents pulsatile vibrissa stimuli in seamless sequence for discrimination. This procedure ensures that processes of decision making can directly tap into sensory signals (no memory functions involved). We find that discrimination performance based on instantaneous kinematic cues far exceeds that provided by frequency and intensity. Neuronal modeling based on barrel cortex single units shows that small populations of sensitive neurons provide a transient signal that optimally fits the characteristics of the subject's perception. The present study is the first to show that perceptual read-out is superior in situations allowing the subject to base perception on detailed trajectory cues, that is, instantaneous kinematic variables. A possible impact of this finding on tactile systems of other species is suggested by evidence for instantaneous coding in primates as well.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelli, Diego; Prescott, Steven R; Smith, Curtis L
2011-07-01
In the Risk Informed Safety Margin Characterization (RISMC) approach we want to understand not just the frequency of an event like core damage, but how close we are (or are not) to key safety-related events and how we might increase our safety margins. The RISMC Pathway uses the probabilistic margin approach to quantify impacts to reliability and safety by coupling both probabilistic (via stochastic simulation) and mechanistic (via physics models) approaches. This coupling takes place through the interchange of physical parameters and operational or accident scenarios. In this paper we apply the RISMC approach to evaluate the impact of a power uprate on a pressurized water reactor (PWR) for a tsunami-induced flooding test case. This analysis is performed using the RISMC toolkit: the RELAP-7 and RAVEN codes. RELAP-7 is the new generation of system analysis codes responsible for simulating the thermal-hydraulic dynamics of PWR and boiling water reactor systems. RAVEN has two capabilities: to act as a controller of the RELAP-7 simulation (e.g., system activation) and to perform statistical analyses (e.g., run multiple RELAP-7 simulations where the sequencing/timing of events is changed according to a set of stochastic distributions). By using the RISMC toolkit, we can evaluate how the power uprate affects the system recovery measures needed to avoid core damage after the PWR loses all available AC power in a tsunami-induced flood. The simulation of the actual flooding is performed using a smoothed particle hydrodynamics code: NEUTRINO.
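Schematically, the RAVEN-driven statistical analysis described above amounts to a sampling loop around the physics code. In the hedged sketch below, run_relap7 is a stand-in toy response, and the distributions, threshold, and parameter names are invented for illustration.

```python
import random

def run_relap7(diesel_recovery_time, flood_arrival_time):
    # Stand-in physics: peak clad temperature grows with blackout duration.
    blackout = max(0.0, diesel_recovery_time - flood_arrival_time)   # hr
    return 600.0 + 150.0 * blackout                                  # K

LIMIT = 1477.0          # example clad damage threshold, K (~2200 F)
failures, N = 0, 10_000
for _ in range(N):
    t_flood = random.uniform(0.0, 1.0)           # illustrative distributions
    t_recovery = random.lognormvariate(1.0, 0.5)
    if run_relap7(t_recovery, t_flood) > LIMIT:
        failures += 1

print("P(core damage) ~", failures / N)
```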
DOE Office of Scientific and Technical Information (OSTI.GOV)
B.C. Lyons, S.C. Jardin, and J.J. Ramos
2012-06-28
A new code, the Neoclassical Ion-Electron Solver (NIES), has been written to solve for stationary, axisymmetric distribution functions (f) in the conventional banana regime for both ions and electrons, using a set of drift-kinetic equations (DKEs) with linearized Fokker-Planck-Landau collision operators. Solvability conditions on the DKEs determine the relevant non-adiabatic pieces of f (called h). We work in a 4D phase space in which Ψ defines a flux surface, θ is the poloidal angle, v is the total velocity referenced to the mean flow velocity, and λ is the dimensionless magnetic moment parameter. We expand h in finite elements in both v and λ. The Rosenbluth potentials, φ and ψ, which define the integral part of the collision operator, are expanded in Legendre series in cos χ, where χ is the pitch angle, in Fourier series in cos θ, and in finite elements in v. At each Ψ, we solve a block tridiagonal system for h_i (independent of f_e), then solve another block tridiagonal system for h_e (dependent on f_i). We demonstrate that such a formulation can be solved accurately and efficiently. NIES is coupled to the MHD equilibrium code JSOLVER [J. DeLucia et al., J. Comput. Phys. 37, pp. 183-204 (1980)], allowing us to work with realistic magnetic geometries. The bootstrap current is calculated as a simple moment of the distribution function. Results are benchmarked against the Sauter analytic formulas and can be used as a kinetic closure for an MHD code (e.g., M3D-C1 [S.C. Jardin et al., Computational Science & Discovery, 4 (2012)]).
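The block tridiagonal systems mentioned above can be solved with a block version of the Thomas algorithm (forward elimination, then back substitution). The following generic numpy sketch illustrates the idea; it is a textbook solver, not the NIES implementation.

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Solve a block tridiagonal system.
    A: sub-diagonal blocks (n-1), B: diagonal blocks (n),
    C: super-diagonal blocks (n-1), d: right-hand-side vectors (n)."""
    n = len(B)
    Bp, dp = [B[0]], [d[0]]
    for k in range(1, n):                       # forward elimination
        m = A[k-1] @ np.linalg.inv(Bp[k-1])
        Bp.append(B[k] - m @ C[k-1])
        dp.append(d[k] - m @ dp[k-1])
    x = [None] * n                              # back substitution
    x[-1] = np.linalg.solve(Bp[-1], dp[-1])
    for k in range(n - 2, -1, -1):
        x[k] = np.linalg.solve(Bp[k], dp[k] - C[k] @ x[k+1])
    return x

# Tiny 3-block example with 2x2 diagonally dominant blocks.
rng = np.random.default_rng(0)
B = [np.eye(2) * 4 + rng.random((2, 2)) for _ in range(3)]
A = [rng.random((2, 2)) for _ in range(2)]
C = [rng.random((2, 2)) for _ in range(2)]
d = [rng.random(2) for _ in range(3)]
x = block_thomas(A, B, C, d)
```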
NASA Astrophysics Data System (ADS)
Wetzstein, M.; Nelson, Andrew F.; Naab, T.; Burkert, A.
2009-10-01
We present a numerical code for simulating the evolution of astrophysical systems using particles to represent the underlying fluid flow. The code is written in Fortran 95 and is designed to be versatile, flexible, and extensible, with modular options that can be selected either at the time the code is compiled or at run time through a text input file. We include a number of general purpose modules describing a variety of physical processes commonly required in the astrophysical community and we expect that the effort required to integrate additional or alternate modules into the code will be small. In its simplest form the code can evolve the dynamical trajectories of a set of particles in two or three dimensions using a module which implements either a Leapfrog or Runge-Kutta-Fehlberg integrator, selected by the user at compile time. The user may choose to allow the integrator to evolve the system using individual time steps for each particle or with a single, global time step for all. Particles may interact gravitationally as N-body particles, and all or any subset may also interact hydrodynamically, using the smoothed particle hydrodynamic (SPH) method by selecting the SPH module. A third particle species can be included with a module to model massive point particles which may accrete nearby SPH or N-body particles. Such particles may be used to model, e.g., stars in a molecular cloud. Free boundary conditions are implemented by default, and a module may be selected to include periodic boundary conditions. We use a binary "Press" tree to organize particles for rapid access in gravity and SPH calculations. Modules implementing an interface with special purpose "GRAPE" hardware may also be selected to accelerate the gravity calculations. If available, forces obtained from the GRAPE coprocessors may be transparently substituted for those obtained from the tree, or both tree and GRAPE may be used as a combination GRAPE/tree code. The code may be run without modification on single processors or in parallel using OpenMP compiler directives on large-scale, shared memory parallel machines. We present simulations of several test problems, including a merger simulation of two elliptical galaxies with 800,000 particles. In comparison to the Gadget-2 code of Springel, the gravitational force calculation, which is the most costly part of any simulation including self-gravity, is ~4.6-4.9 times faster with VINE when tested on different snapshots of the elliptical galaxy merger simulation when run on an Itanium 2 processor in an SGI Altix. A full simulation of the same setup with eight processors is a factor of 2.91 faster with VINE. The code is available to the public under the terms of the Gnu General Public License.
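As a concrete illustration of one of the integrator options named above, here is a minimal kick-drift-kick leapfrog step with direct-summation softened gravity; VINE itself obtains forces from a tree or GRAPE hardware, and the units and softening length here are arbitrary.

```python
import numpy as np

G, EPS = 1.0, 1e-2          # gravitational constant and softening (illustrative)

def accel(pos, mass):
    # Direct-summation softened gravity; stand-in for the tree/GRAPE force.
    n = len(mass)
    a = np.zeros_like(pos)
    for i in range(n):
        dr = pos - pos[i]                          # vectors from particle i
        r2 = (dr**2).sum(axis=1) + EPS**2
        r2[i] = np.inf                             # skip self-interaction
        a[i] = G * (mass[:, None] * dr / r2[:, None]**1.5).sum(axis=0)
    return a

def leapfrog_step(pos, vel, mass, dt):
    vel = vel + 0.5 * dt * accel(pos, mass)        # half kick
    pos = pos + dt * vel                           # drift
    vel = vel + 0.5 * dt * accel(pos, mass)        # half kick
    return pos, vel
```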
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles of both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
Towards a Consolidated Approach for the Assessment of Evaluation Models of Nuclear Power Reactors
Epiney, A.; Canepa, S.; Zerkak, O.; ...
2016-11-02
The STARS project at the Paul Scherrer Institut (PSI) has adopted the TRACE thermal-hydraulic (T-H) code for best-estimate system transient simulations of the Swiss Light Water Reactors (LWRs). For analyses involving interactions between system and core, a coupling of TRACE with the SIMULATE-3K (S3K) LWR core simulator has also been developed. In this configuration, the TRACE code and the associated nuclear power reactor simulation models play a central role in achieving a comprehensive safety analysis capability. Thus, efforts have now been undertaken to consolidate the validation strategy by implementing a more rigorous and structured assessment approach for TRACE applications involving either system T-H evaluations alone or interfaces to, e.g., detailed core or fuel behavior models. The first part of this paper presents the preliminary concepts of this validation strategy. The principle is to systematically track the evolution of a given set of predicted physical Quantities of Interest (QoIs) over a multidimensional parametric space where each dimension represents the evolution of a specific analysis aspect, including, e.g., code version, transient-specific simulation methodology, and model "nodalisation". If properly set up, such an environment should provide code developers and code users with persistent (less affected by user effect) and quantified information (sensitivity of QoIs) on the applicability of a simulation scheme (codes, input models, methodology) for steady-state and transient analysis of full LWR systems. Through this, for each given transient/accident, critical paths of the validation process can be identified that could then translate into reference schemes to be applied for downstream predictive simulations. To illustrate this approach, the second part of this paper presents a first application of this validation strategy to an inadvertent blowdown event that occurred in a Swiss BWR/6. The transient was initiated by the spurious actuation of the Automatic Depressurization System (ADS). The validation approach progresses through a number of dimensions here. First, the same BWR system simulation model is assessed for different versions of the TRACE code, up to the most recent one. The second dimension is the "nodalisation" dimension, where changes to the input model are assessed. The third dimension is the "methodology" dimension; in this case, imposed power and an updated TRACE core model are investigated. For each step in each validation dimension, a common set of QoIs is investigated. For the steady-state results, these include fuel temperature distributions. For the transient part of the present study, the evaluated QoIs include the system pressure evolution and water carry-over into the steam line.
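Bookkeeping for such a multidimensional validation space can be sketched simply: index each run by its coordinates along the assessment dimensions and store the QoIs per run. The dimension values and QoI names below are illustrative placeholders, not STARS data.

```python
# Index each validation run by its coordinates along the assessment
# dimensions; store QoIs per run so their evolution can be tracked.
runs = {}

def record(code_version, nodalisation, methodology, qois):
    runs[(code_version, nodalisation, methodology)] = qois

record("TRACE_vA", "coarse", "imposed_power",
       {"peak_pressure_MPa": 7.41, "carryover_kg": 118.0})
record("TRACE_vB", "coarse", "imposed_power",
       {"peak_pressure_MPa": 7.38, "carryover_kg": 121.5})

# Track one QoI along the "code version" dimension, other coordinates fixed.
for (ver, nod, meth), q in sorted(runs.items()):
    if nod == "coarse" and meth == "imposed_power":
        print(ver, q["peak_pressure_MPa"])
```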
Simulations of Control Schemes for Inductively Coupled Plasma Sources
NASA Astrophysics Data System (ADS)
Ventzek, P. L. G.; Oda, A.; Shon, J. W.; Vitello, P.
1997-10-01
Process control issues are becoming increasingly important in plasma etching. Numerical experiments are an excellent test bench for evaluating a proposed control system: models are generally reliable enough to provide information about controller robustness and the fitness of diagnostics. We will present results from a two-dimensional plasma transport code with a multi-species plasma chemistry obtained from a global model [1-2]. We will show a correlation of external etch parameters (e.g., input power) with internal plasma parameters (e.g., species fluxes), which in turn are correlated with etch results (etch rate, uniformity, and selectivity), either by comparison to experiment or by using a phenomenological etch model. After process characterization, a control scheme can be evaluated, since the variable to be controlled (e.g., uniformity) is related to the measurable variable (e.g., a density) and an external parameter (e.g., coil current). We will present an evaluation using the HBr-Cl2 system as an example. [1] E. Meeks and J. W. Shon, IEEE Trans. on Plasma Sci., 23, 539, 1995. [2] P. Vitello, et al., IEEE Trans. on Plasma Sci., 24, 123, 1996.
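As a toy illustration of evaluating such a control scheme against a model, the sketch below closes a proportional-integral loop around a one-line stand-in for the plasma response; the gains, setpoint, and response function are placeholders, not outputs of the transport code.

```python
def plasma_density(power_w):
    # Stand-in for the 2-D transport model: density saturates with power.
    return 3.0e16 * power_w / (power_w + 200.0)

setpoint = 2.0e16                       # target density, m^-3 (illustrative)
Kp, Ki, dt = 1.0e-14, 4.0e-15, 1.0      # placeholder PI gains and time step
base_power, integral = 100.0, 0.0
power = base_power

for _ in range(200):
    err = setpoint - plasma_density(power)
    integral += err * dt
    power = base_power + Kp * err + Ki * integral   # controller output, W

print(power, plasma_density(power))     # settles near the steady-state solution
```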
NASA Technical Reports Server (NTRS)
Noble, Viveca K.
1993-01-01
There are various elements, such as radio frequency interference (RFI), which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable, and it becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and to evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal-to-noise ratio needed for a desired bit error rate. The use of concatenated coding, e.g., an inner convolutional code and an outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
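The CCSDS-recommended 16-bit CRC uses the CCITT generator polynomial x^16 + x^12 + x^5 + 1 (0x1021) with an all-ones preset; a bitwise Python sketch follows.

```python
def crc16_ccsds(data: bytes, crc: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE: polynomial 0x1021, preset 0xFFFF, MSB-first."""
    for byte in data:
        crc ^= byte << 8
        # Process one bit per iteration: shift left and XOR in the
        # polynomial whenever the top bit falls out of the register.
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

frame = b"space packet payload"
print(hex(crc16_ccsds(frame)))
# Sanity check: crc16_ccsds(b"123456789") == 0x29B1, the standard test value.
```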
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-17
... intends to post the status of the test orders, including recipients' responses, on the EPA Web site so... screening program using appropriate validated test systems and other scientifically relevant information to... chemicals. Scientific research and development services (NAICS code 5417), e.g., persons who conduct testing...
Analysis of quantum error-correcting codes: Symplectic lattice codes and toric codes
NASA Astrophysics Data System (ADS)
Harrington, James William
Quantum information theory is concerned with identifying how quantum mechanical resources (such as entangled quantum states) can be utilized for a number of information processing tasks, including data storage, computation, communication, and cryptography. Efficient quantum algorithms and protocols have been developed for performing some tasks (e.g., factoring large numbers, securely communicating over a public channel, and simulating quantum mechanical systems) that appear to be very difficult with just classical resources. In addition to identifying the separation between classical and quantum computational power, much of the theoretical focus in this field over the last decade has been concerned with finding novel ways of encoding quantum information that are robust against errors, which is an important step toward building practical quantum information processing devices. In this thesis I present some results on the quantum error-correcting properties of oscillator codes (also described as symplectic lattice codes) and toric codes. Any harmonic oscillator system (such as a mode of light) can be encoded with quantum information via symplectic lattice codes that are robust against shifts in the system's continuous quantum variables. I show the existence of lattice codes whose achievable rates match the one-shot coherent information over the Gaussian quantum channel. Also, I construct a family of symplectic self-dual lattices and search for optimal encodings of quantum information distributed between several oscillators. Toric codes provide encodings of quantum information into two-dimensional spin lattices that are robust against local clusters of errors and which require only local quantum operations for error correction. Numerical simulations of this system under various error models provide a calculation of the accuracy threshold for quantum memory using toric codes, which can be related to phase transitions in certain condensed matter models. I also present a local classical processing scheme for correcting errors on toric codes, which demonstrates that quantum information can be maintained in two dimensions by purely local (quantum and classical) resources.
Contextual barriers to lifestyle physical activity interventions in Hong Kong.
Eves, Frank F; Masters, Rich S W; McManus, Alison; Leung, Moon; Wong, Peggy; White, Mike J
2008-05-01
Increased lifestyle physical activity, for instance, use of active transport, is a current public health target. Active transport interventions that target stair climbing are consistently successful in English-speaking populations yet unsuccessful in Hong Kong. We report two further studies on active transport in the Hong Kong Chinese. Pedestrians on a mass transit escalator system (study 1) and in an air-conditioned shopping mall (study 2) were encouraged to take the stairs for their cardiovascular health by point-of-choice prompts. Observers coded sex, age, and walking on the mass transit system, with the additional variables of presence of children and bags coded in the shopping mall. In the first study, a 1-wk baseline was followed by 4 wk of intervention (N = 76,710) whereas in the second study (shopping mall) a 2-wk baseline was followed by a 2-wk intervention period (N = 18,257). A small but significant increase in stair climbing (+0.29%) on the mass transit system contrasted with no significant changes in the shopping mall (+0.09%). The active transport of walking on the mass transit system was reduced at higher rates of humidity and temperature, with steeper slopes for the effects of climate variables in men than in women. These studies confirm that lifestyle physical activity interventions do not have universal application. The context in which the behavior occurs (e.g., climate) may act as a barrier to active transport.
ogs6 - a new concept for porous-fractured media simulations
NASA Astrophysics Data System (ADS)
Naumov, Dmitri; Bilke, Lars; Fischer, Thomas; Rink, Karsten; Wang, Wenqing; Watanabe, Norihiro; Kolditz, Olaf
2015-04-01
OpenGeoSys (OGS) is a scientific open-source initiative for numerical simulation of thermo-hydro-mechanical/chemical (THMC) processes in porous and fractured media, continuously developed since the mid-eighties. The basic concept is to provide a flexible numerical framework for solving coupled multi-field problems. OGS mainly targets applications in environmental geoscience, e.g. in the fields of contaminant hydrology, water resources management, waste deposits, or geothermal energy systems, but it has recently also been applied successfully to new topics in energy storage. OGS actively participates in several international benchmarking initiatives, e.g. DECOVALEX (waste management), CO2BENCH (CO2 storage and sequestration), SeSBENCH (reactive transport processes) and HM-Intercomp (coupled hydrosystems). Despite the broad applicability of OGS in geo-, hydro- and energy-sciences, several shortcomings became obvious: the computational efficiency was limited, and the code structure had become too intricate for further efficient development. OGS-5 was designed for object-oriented FEM applications. However, in many multi-field problems a certain flexibility of tailored numerical schemes is essential. Therefore, a new concept was designed to overcome the existing bottlenecks. The paradigms for ogs6 are: flexibility of numerical schemes (FEM/FVM/FDM), computational efficiency (PetaScale ready), and developer- and user-friendliness. ogs6 has a module-oriented architecture based on thematic libraries (e.g. MeshLib, NumLib) on the large scale and uses an object-oriented approach for the small-scale interfaces. Usage of a linear algebra library (Eigen3) for the mathematical operations together with the ISO C++11 standard increases the expressiveness of the code and makes it more developer-friendly. The new C++ standard also makes the template meta-programming code used for compile-time optimizations more compact. We have transitioned the main code development to the GitHub code hosting system (https://github.com/ufz/ogs). The very flexible revision control system Git, in combination with issue tracking, developer feedback and code review, improves the code quality and the development process in general. The continuous testing procedure for the benchmarks, as established for OGS-5, is maintained. Additionally, unit tests, automatically triggered by any code change, are executed by two continuous integration frameworks (Jenkins CI, Travis CI), which build and test the code on different operating systems (Windows, Linux, Mac OS), in multiple configurations and with different compilers (GCC, Clang, Visual Studio). To improve the testing possibilities further, XML-based file input formats are introduced, helping with automatic validation of the user-contributed benchmarks. The first ogs6 prototype, version 6.0.1, has been implemented for solving generic elliptic problems. Next steps are the extension to transient, non-linear and coupled problems. Literature: [1] Kolditz O, Shao H, Wang W, Bauer S (eds) (2014): Thermo-Hydro-Mechanical-Chemical Processes in Fractured Porous Media: Modelling and Benchmarking - Closed Form Solutions. In: Terrestrial Environmental Sciences, Vol. 1, Springer, Heidelberg, ISBN 978-3-319-11893-2, 315pp.
http://www.springer.com/earth+sciences+and+geography/geology/book/978-3-319-11893-2 [2] Naumov D (2015): Computational Fluid Dynamics in Unconsolidated Sediments: Model Generation and Discrete Flow Simulations, PhD thesis, Technische Universität Dresden.
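The prototype's problem class, generic elliptic problems, can be illustrated with a minimal linear finite-element solve. The sketch below is an independent toy example in Python, not OGS/ogs6 code, and all names in it are made up.

```python
import numpy as np

def fem_poisson_1d(n=20, f=lambda x: np.ones_like(x)):
    """Minimal linear-FEM solve of -u'' = f on [0,1] with u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = np.diff(x)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for e in range(n):                       # assemble element by element
        k = (1.0 / h[e]) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        xm = 0.5 * (x[e] + x[e + 1])         # midpoint quadrature for the load
        A[e:e + 2, e:e + 2] += k
        b[e:e + 2] += f(np.array([xm]))[0] * h[e] / 2.0
    A[0, :] = A[-1, :] = 0.0                 # impose the Dirichlet conditions
    A[0, 0] = A[-1, -1] = 1.0
    b[0] = b[-1] = 0.0
    return x, np.linalg.solve(A, b)

x, u = fem_poisson_1d()
# Exact solution for f = 1 is u(x) = x(1-x)/2, i.e. 0.125 at the midpoint.
print("midpoint error:", abs(u[len(u) // 2] - 0.125))
```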
ERIC Educational Resources Information Center
Park, Insu
2010-01-01
The purpose of this study is to explore system users' behavior on IS under various circumstances (e.g., email usage and malware threats, online communication at the individual level, and IS usage in organizations). Specifically, the first essay develops a method for analyzing and predicting the impact category of malicious code, particularly…
Does Kaniso activate CASINO?: input coding schemes and phonology in visual-word recognition.
Acha, Joana; Perea, Manuel
2010-01-01
Most recent input coding schemes in visual-word recognition assume that letter position coding is orthographic rather than phonological in nature (e.g., SOLAR, open-bigram, SERIOL, and overlap). This assumption has been drawn - in part - from the fact that the transposed-letter effect (e.g., caniso activates CASINO) seems to be (mostly) insensitive to phonological manipulations (e.g., Perea & Carreiras, 2006, 2008; Perea & Pérez, 2009). However, one could argue that the lack of a phonological effect in prior research was due to the fact that the manipulation always occurred in internal letter positions - note that phonological effects tend to be stronger for the initial syllable (Carreiras, Ferrand, Grainger, & Perea, 2005). To reexamine this issue, we conducted a masked priming lexical decision experiment in which we compared the priming effect for transposed-letter pairs (e.g., caniso-CASINO vs. caviro-CASINO) and for pseudohomophone transposed-letter pairs (kaniso-CASINO vs. kaviro-CASINO). Results showed a transposed-letter priming effect for the correctly spelled pairs, but not for the pseudohomophone pairs. This is consistent with the view that letter position coding is (primarily) orthographic in nature.
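For readers unfamiliar with open-bigram coding, a deliberately simplified sketch (ignoring the position weighting used in actual SOLAR/SERIOL-style models) shows why a transposed-letter prime such as caniso overlaps strongly with CASINO:

```python
from itertools import combinations

def open_bigrams(word):
    """All ordered letter pairs of a word (a simple open-bigram code)."""
    return {a + b for a, b in combinations(word.lower(), 2)}

def overlap(prime, target):
    """Proportion of the target's bigrams that the prime shares."""
    p, t = open_bigrams(prime), open_bigrams(target)
    return len(p & t) / len(t)

# A transposed-letter prime preserves most bigrams of its base word,
# which is why 'caniso' is a far more effective prime than 'caviro'.
for prime in ("caniso", "caviro", "kaniso"):
    print(prime, "->", round(overlap(prime, "casino"), 2))
```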
NASA Astrophysics Data System (ADS)
Sihver, L.; Matthiä, D.; Koi, T.; Mancusi, D.
2008-10-01
Radiation exposure of aircrew is more and more recognized as an occupational hazard. The ionizing environment at standard commercial aircraft flight altitudes consists mainly of secondary particles, of which the neutrons give a major contribution to the dose equivalent. Accurate estimations of neutron spectra in the atmosphere are therefore essential for correct calculations of aircrew doses. Energetic solar particle events (SPE) could also lead to significantly increased dose rates, especially at routes close to the North Pole, e.g. for flights between Europe and the USA. It is also well known that the radiation environment encountered by personnel aboard low Earth orbit (LEO) spacecraft or aboard a spacecraft traveling outside the Earth's protective magnetosphere is much harsher compared with that within the atmosphere, since the personnel are exposed to radiation from both galactic cosmic rays (GCR) and SPE. The relative contribution to the dose from GCR when traveling outside the Earth's magnetosphere, e.g. to the Moon or Mars, is even greater, and reliable and accurate particle and heavy ion transport codes are essential to calculate the radiation risks for both aircrew and personnel on spacecraft. We have therefore performed calculations of neutron distributions in the atmosphere, total dose equivalents, and quality factors at different depths in a water sphere in an imaginary spacecraft during solar minimum in a geosynchronous orbit. The calculations were performed with the GEANT4 Monte Carlo (MC) code using both the binary cascade (BIC) model, which is part of the standard GEANT4 package, and the JQMD model, which is used in the particle and heavy ion transport code PHITS.
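The core of any Monte Carlo transport code is sampling free paths between interactions. As a toy illustration only, far simpler than the physics in GEANT4 or PHITS, the following sketch estimates slab transmission with exponentially distributed paths and pure absorption; all numbers are arbitrary.

```python
import math
import random

random.seed(1)

def transmit_fraction(mu=0.2, thickness=10.0, n=100_000):
    """Toy Monte Carlo: fraction of particles crossing a slab when the total
    interaction coefficient is mu (1/cm) and every interaction absorbs the
    particle (no scattering, for simplicity)."""
    passed = 0
    for _ in range(n):
        # Sample the free path from an exponential distribution.
        if random.expovariate(mu) > thickness:
            passed += 1
    return passed / n

mc = transmit_fraction()
print("MC:", mc, "analytic exp(-mu*x):", math.exp(-0.2 * 10.0))
```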
McKenzie, Sam; Keene, Chris; Farovik, Anja; Blandon, John; Place, Ryan; Komorowski, Robert; Eichenbaum, Howard
2016-01-01
Here we consider the value of neural population analysis as an approach to understanding how information is represented in the hippocampus and cortical areas and how these areas might interact as a brain system to support memory. We argue that models based on sparse coding of different individual features by single neurons in these areas (e.g., place cells, grid cells) are inadequate to capture the complexity of experience represented within this system. By contrast, population analyses of neurons with denser coding and mixed selectivity reveal new and important insights into the organization of memories. Furthermore, comparisons of the organization of information in interconnected areas suggest a model of hippocampal-cortical interactions that mediates the fundamental features of memory. PMID:26748022
Survey of adaptive image coding techniques
NASA Technical Reports Server (NTRS)
Habibi, A.
1977-01-01
The general problem of image data compression is discussed briefly with attention given to the use of Karhunen-Loeve transforms, suboptimal systems, and block quantization. A survey is then conducted encompassing the four categories of adaptive systems: (1) adaptive transform coding (adaptive sampling, adaptive quantization, etc.), (2) adaptive predictive coding (adaptive delta modulation, adaptive DPCM encoding, etc.), (3) adaptive cluster coding (blob algorithms and the multispectral cluster coding technique), and (4) adaptive entropy coding.
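As an illustration of category (2), a minimal adaptive delta modulation encoder might look like the sketch below; the specific step-adaptation rule and constants are illustrative assumptions, not taken from the survey.

```python
import math

def adm_encode(signal, step0=0.1, grow=1.5, shrink=0.66):
    """Adaptive delta modulation: 1-bit quantization of the difference
    between the input and a running estimate, with an adaptive step size."""
    est, step, prev_bit, bits = 0.0, step0, None, []
    for x in signal:
        bit = 1 if x >= est else 0
        bits.append(bit)
        # Adapt: enlarge the step while the bit repeats (slope overload),
        # shrink it when the bit alternates (granular noise).
        if prev_bit is not None:
            step *= grow if bit == prev_bit else shrink
        est += step if bit else -step
        prev_bit = bit
    return bits

sig = [math.sin(0.15 * i) for i in range(80)]
print(adm_encode(sig)[:20])
```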
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yidong Xia; Mitch Plummer; Robert Podgorney
2016-02-01
Performance of the heat production process over a 30-year period is assessed in a conceptual EGS model with a geothermal gradient of 65 K per km depth in the reservoir. Water is circulated through a pair of parallel wells connected by a set of single large wing fractures. The results indicate that the desirable output electric power rate and lifespan could be obtained under suitable material properties and system parameters. A sensitivity analysis on some design constraints and operation parameters indicates that 1) the fracture horizontal spacing has a profound effect on the long-term performance of heat production, 2) the downward deviation angle of the parallel doublet wells may help overcome the difficulty of vertical drilling to reach a favorable production temperature, and 3) the thermal energy production rate and lifespan depend closely on the water mass flow rate. The results also indicate that heat production can be improved when the horizontal fracture spacing, well deviation angle, and production flow rate are under reasonable conditions. To conduct the reservoir modeling and simulations, an open-source, finite element based, fully implicit, fully coupled hydrothermal code, namely FALCON, has been developed and used in this work. Compared with most other existing codes in this area, which are either closed-source or commercial, this new open-source code demonstrates a code development strategy that aims to provide unparalleled ease of user customization and multi-physics coupling. Test results have shown that the FALCON code is able to complete the long-term tests efficiently and accurately, thanks to the state-of-the-art nonlinear and linear solver algorithms implemented in the code.
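The dependence of production rate on mass flow follows from a first-law balance, P_th = m_dot * c_p * (T_prod - T_inj). A worked toy computation, with all operating numbers hypothetical rather than taken from the FALCON study:

```python
def thermal_power_mw(mdot_kg_s, t_prod_c, t_inj_c, cp=4186.0):
    """Thermal power carried by the circulating water, P = m_dot*cp*dT, in MW."""
    return mdot_kg_s * cp * (t_prod_c - t_inj_c) / 1e6

# Hypothetical EGS operating point (illustrative values only):
mdot, t_prod, t_inj, eta = 40.0, 180.0, 60.0, 0.12
p_th = thermal_power_mw(mdot, t_prod, t_inj)
print(f"thermal: {p_th:.1f} MWth, electric (eta={eta}): {p_th * eta:.1f} MWe")
```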
The disclosure of diagnosis codes can breach research participants' privacy.
Loukides, Grigorios; Denny, Joshua C; Malin, Bradley
2010-01-01
De-identified clinical data in standardized form (eg, diagnosis codes), derived from electronic medical records, are increasingly combined with research data (eg, DNA sequences) and disseminated to enable scientific investigations. This study examines whether released data can be linked with identified clinical records that are accessible via various resources to jeopardize patients' anonymity, and the ability of popular privacy protection methodologies to prevent such an attack. The study experimentally evaluates the re-identification risk of a de-identified sample of Vanderbilt's patient records involved in a genome-wide association study. It also measures the level of protection from re-identification, and data utility, provided by suppression and generalization. Privacy protection is quantified using the probability of re-identifying a patient in a larger population through diagnosis codes. Data utility is measured at a dataset level, using the percentage of retained information, as well as its description, and at a patient level, using two metrics based on the difference between the distribution of International Classification of Diseases (ICD) version 9 codes before and after applying privacy protection. More than 96% of 2800 patients' records are shown to be uniquely identified by their diagnosis codes with respect to a population of 1.2 million patients. Generalization is shown to further reduce the percentage of re-identifiable records by less than 2%, and over 99% of the three-digit ICD-9 codes need to be suppressed to prevent re-identification. Popular privacy protection methods are inadequate to deliver a sufficiently protected and useful result when sharing data derived from complex clinical systems. The development of alternative privacy protection models is thus required.
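The uniqueness-based risk measure can be sketched in a few lines: count how many records carry a combination of codes that no other record shares. The records and ICD-style codes below are invented for illustration only.

```python
from collections import Counter

def uniqueness(records):
    """Fraction of records whose exact set of diagnosis codes is unique
    in the population -- a simple proxy for re-identification risk."""
    keys = [frozenset(r) for r in records]
    counts = Counter(keys)
    return sum(1 for k in keys if counts[k] == 1) / len(keys)

# Toy population of ICD-9-like code sets (hypothetical codes):
population = [
    {"250.0", "401.9"},        # unique at full precision
    {"250.1", "401.9"},        # unique at full precision
    {"296.3"}, {"296.3"},      # shared combination
]
print(f"{uniqueness(population):.0%} of records are unique")

# Generalizing codes to 3 digits trades utility for privacy:
generalized = [{c.split(".")[0] for c in r} for r in population]
print(f"{uniqueness(generalized):.0%} unique after generalization")
```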
NASA Astrophysics Data System (ADS)
Selker, J. S.; Roques, C.; Higgins, C. W.; Good, S. P.; Hut, R.; Selker, A.
2015-12-01
The confluence of 3-dimensional printing, low-cost solid-state sensors, low-cost low-power digital controllers (e.g., Arduinos), and open-source publishing (e.g., GitHub) is poised to transform environmental sensing. The Open-Source Published Environmental Sensing (OPENS) laboratory has launched and is available for all to use. OPENS combines cutting-edge technologies and makes them available to the global environmental sensing community. OPENS includes a Maker lab space where people may collaborate in person or virtually via an on-line forum for the publication and discussion of environmental sensing technology (Corvallis, Oregon, USA; please feel free to request a free reservation for space and equipment use). The physical lab houses a test-bed for sensors, as well as a complete classical machine shop, 3-D printers, electronics development benches, and workstations for code development. OPENS will provide a web-based formal publishing framework wherein global students and scientists can publish peer-reviewed (with DOI) novel and evolutionary advancements in environmental sensor systems. This curated and peer-reviewed digital collection will include complete sets of "printable" parts and operating computer code for sensing systems. The physical lab will include all of the machines required to produce these sensing systems. These tools can be used in person or virtually, creating a truly global venue for advancement in monitoring earth's environment and agricultural systems. In this talk we will present an example of the design and publication process using the design and data from the OPENS-Permeameter. The publication includes 3-D printing code, Arduino (or other control/logging platform) operational code, sample data sets, and a full discussion of the design set in the scientific context of previous related devices. Editors for the peer-review process are currently sought - contact John.Selker@Oregonstate.edu or Clement.Roques@Oregonstate.edu.
Review: The transcripts associated with organ allograft rejection.
Halloran, Philip F; Venner, Jeffery M; Madill-Thomsen, Katelynn S; Einecke, Gunilla; Parkes, Michael D; Hidalgo, Luis G; Famulski, Konrad S
2018-04-01
The molecular mechanisms operating in human organ transplant rejection are best inferred from the mRNAs expressed in biopsies because the corresponding proteins often have low expression and short half-lives, while small non-coding RNAs lack specificity. Associations should be characterized in a population that rigorously identifies T cell-mediated (TCMR) and antibody-mediated rejection (ABMR). This is best achieved in kidney transplant biopsies, but the results are generalizable to heart, lung, or liver transplants. Associations can be universal (all rejection), TCMR-selective, or ABMR-selective, with universal being strongest and ABMR-selective weakest. Top universal transcripts are IFNG-inducible (eg, CXCL11, IDO1, WARS) or shared by effector T cells (ETCs) and NK cells (eg, KLRD1, CCL4). TCMR-selective transcripts are expressed in activated ETCs (eg, CTLA4, IFNG), activated macrophages (eg, ADAMDEC1), or IFNG-induced macrophages (eg, ANKRD22). ABMR-selective transcripts are expressed in NK cells (eg, FGFBP2, GNLY) and endothelial cells (eg, ROBO4, DARC). Transcript associations are highly reproducible between biopsy sets when the same rejection definitions, case mix, algorithm, and technology are applied, but exact ranks will vary. Previously published rejection-associated transcripts resemble universal and TCMR-selective transcripts due to incomplete representation of ABMR. Rejection-associated transcripts are never completely rejection-specific because they are shared with the stereotyped response-to-injury and innate immunity. © 2017 The American Society of Transplantation and the American Society of Transplant Surgeons.
The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics
NASA Astrophysics Data System (ADS)
Ganander, Hans
2003-10-01
For many reasons the size of wind turbines on the rapidly growing wind energy market is increasing. Relations between the aeroelastic properties of these new large turbines change. Modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues, the code and the design optimization. This technique can be used for the rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived according to the Lagrange equation using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to an actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as the need has arisen, and the interest in design optimization is growing.
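The Mathematica-to-Fortran workflow can be mimicked with open tools. The sketch below uses SymPy (an assumption of this illustration; the paper used Mathematica) to derive the Euler-Lagrange equation of a trivial one-degree-of-freedom model and emit Fortran, in the spirit of how VIDYN's equation-of-motion subroutines are generated:

```python
import sympy as sp

t = sp.symbols("t")
m, l, g = sp.symbols("m l g", positive=True)
th = sp.Function("theta")(t)

# Lagrangian of a point-mass pendulum (a stand-in for a turbine DOF).
L = sp.Rational(1, 2) * m * l**2 * sp.diff(th, t) ** 2 + m * g * l * sp.cos(th)

# Euler-Lagrange equation: d/dt(dL/d(th')) - dL/dth = 0.
eom = sp.simplify(sp.diff(sp.diff(L, sp.diff(th, t)), t) - sp.diff(L, th))
print(eom)  # m*l**2*theta'' + m*g*l*sin(theta)

# Replace the derivatives with plain symbols and emit Fortran source,
# as an automatic code generator would for each subroutine.
expr = eom.subs(sp.diff(th, t, 2), sp.symbols("thdd")).subs(th, sp.symbols("theta"))
print(sp.fcode(expr, standard=95))
```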
Zaker, Neda; Sina, Sedigheh; Koontz, Craig; Meigooni, Ali S.
2016-01-01
Monte Carlo simulations are widely used for calculation of the dosimetric parameters of brachytherapy sources. MCNP4C2, MCNP5, MCNPX, EGS4, EGSnrc, PTRAN, and GEANT4 are among the most commonly used codes in this field. Each of these codes utilizes a cross‐sectional library for the purpose of simulating different elements and materials with complex chemical compositions. The accuracies of the final outcomes of these simulations are very sensitive to the accuracies of the cross‐sectional libraries. Several investigators have shown that inaccuracies of some of the cross section files have led to errors in 125I and 103Pd parameters. The purpose of this study is to compare the dosimetric parameters of sample brachytherapy sources, calculated with three different versions of the MCNP code — MCNP4C, MCNP5, and MCNPX. In these simulations for each source type, the source and phantom geometries, as well as the number of the photons, were kept identical, thus eliminating the possible uncertainties. The results of these investigations indicate that for low‐energy sources such as 125I and 103Pd there are discrepancies in gL(r) values. Discrepancies up to 21.7% and 28% are observed between MCNP4C and other codes at a distance of 6 cm for 103Pd and 10 cm for 125I from the source, respectively. However, for higher energy sources, the discrepancies in gL(r) values are less than 1.1% for 192Ir and less than 1.2% for 137Cs between the three codes. PACS number(s): 87.56.bg PMID:27074460
Xenobiology: State-of-the-Art, Ethics, and Philosophy of New-to-Nature Organisms.
Schmidt, Markus; Pei, Lei; Budisa, Nediljko
The basic chemical constitution of all living organisms in the context of carbon-based chemistry consists of a limited number of small molecules and polymers. Until the twenty-first century, biology was mainly an analytical science and has now reached a point where it merges with engineering science, paving the way for synthetic biology. One of the objectives of synthetic biology is to try to change the chemical compositions of living cells, that is, to create an artificial biological diversity, which in turn fosters a new sub-field of synthetic biology, xenobiology. In particular, the genetic code in living systems is based on highly standardized chemistry composed of the same "letters" or nucleotides as informational polymers (DNA, RNA) and the 20 amino acids which serve as basic building blocks for proteins. The universality of the genetic code enables not only vertical gene transfer within the same species but also horizontal gene transfer across biological taxa, which requires a high degree of standardization and interconnectivity. Although some minor alterations of the standard genetic code are found in nature (e.g., proteins containing non-canonical amino acids exist in nature, and some organisms use alternative coding systems), all structurally deep chemistry changes within living systems are generally lethal, making the creation of artificial biological systems an extremely difficult challenge. In this context, one of the great challenges for bioscience is the development of a strategy for expanding the standard basic chemical repertoire of living cells. Attempts to alter the meaning of the genetic information stored in DNA as an informational polymer by changing the chemistry of the polymer (i.e., xeno-nucleic acids) or by changes in the genetic code have already yielded successful results. In the future this should enable the partial or full redirection of the biological information flow to generate "new" version(s) of the genetic code derived from the "old" biological world. In addition to the scientific challenges, the attempt to increase biochemical diversity also raises important ethical and philosophical issues. Although promoters of this branch of synthetic biology highlight the many potential applications to come (e.g., novel tools for diagnostics and fighting infectious diseases), such developments could also bring risks affecting social, political, and other structures of nearly all societies.
Software for Collaborative Engineering of Launch Rockets
NASA Technical Reports Server (NTRS)
Stanley, Thomas Troy
2003-01-01
The Rocket Evaluation and Cost Integration for Propulsion and Engineering (RECIPE) software enables collaborative computing with automated exchange of information in the design and analysis of launch rockets and other complex systems. RECIPE can interact with and incorporate a variety of programs, including legacy codes, that model aspects of a system from the perspectives of different technological disciplines (e.g., aerodynamics, structures, propulsion, trajectory, aeroheating, controls, and operations) and that are used by different engineers on different computers running different operating systems. RECIPE consists mainly of (1) ISCRM, a file-transfer subprogram that makes it possible for legacy codes executed in their original operating systems on their original computers to exchange data, and (2) CONES, an easy-to-use file-wrapper subprogram that enables the integration of legacy codes. RECIPE provides a tightly integrated conceptual framework that emphasizes connectivity among the programs used by the collaborators, linking these programs in a manner that provides some configuration control while facilitating collaborative engineering tradeoff studies, including design-to-cost studies. In comparison with prior collaborative-engineering schemes, one based on the use of RECIPE enables fewer engineers to do more in less time.
Toward Ultraintense Compact RBS Pump for Recombination 3.4 nm Laser via OFI
NASA Astrophysics Data System (ADS)
Suckewer, S.; Ren, J.; Li, S.; Lou, Y.; Morozov, A.; Turnbull, D.; Avitzour, Y.
In our presentation we review the progress made in developing a new ultrashort and ultraintense laser system based on a Raman backscattering (RBS) amplifier/compressor, from the time of the 10th XRL Conference in Berlin to the present time of the 11th XRL Conference in Belfast. One of the main objectives of the RBS laser system development is to use it for pumping a recombination X-ray laser on the transition to the ground state of CVI ions at 3.4 nm. Using an elaborate computer code, the processes of optical field ionization (OFI), electron energy distribution, and recombination were calculated. It was shown that in the very early stage of recombination, when the electron energy distribution is strongly non-Maxwellian, high gain can be generated in the transition from the first excited level n=2 to the ground level m=1. By adding a large amount of hydrogen gas to the initial gas containing carbon atoms (e.g. methane, CH4), the calculated gain reached values up to 150-200 cm-1. Taking into account this very encouraging result, we have proceeded with the arrangement of the experimental setup. We will present the observation of plasma channels and measurements of the electron density distribution required for the generation of gain at 3.4 nm.
The Magnetic Reconnection Code: an AMR-based fully implicit simulation suite
NASA Astrophysics Data System (ADS)
Germaschewski, K.; Bhattacharjee, A.; Ng, C.-S.
2006-12-01
Extended MHD models, which incorporate two-fluid effects, are promising candidates to enhance understanding of collisionless reconnection phenomena in laboratory, space and astrophysical plasma physics. In this paper, we introduce two simulation codes in the Magnetic Reconnection Code suite which integrate reduced and full extended MHD models. Numerical integration of these models comes with two challenges. First, small-scale spatial structures, e.g. thin current sheets, develop and must be well resolved by the code; adaptive mesh refinement (AMR) is employed to provide high resolution where needed while maintaining good performance. Secondly, the two-fluid effects in extended MHD give rise to dispersive waves, which lead to a very stringent CFL condition for explicit codes, while reconnection happens on a much slower time scale. We therefore use a fully implicit Crank-Nicolson time-stepping algorithm. Since no efficient preconditioners are available for our system of equations, we instead use a direct solver to handle the inner linear solves. This requires us to actually compute the Jacobian matrix, which is handled by a code generator that calculates the derivatives symbolically and then outputs code to calculate them.
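A toy version of such a scheme, assuming a small stand-in ODE system rather than the extended MHD equations: one Crank-Nicolson step solved by Newton iteration with an explicitly assembled Jacobian and a direct inner linear solve.

```python
import numpy as np

def f(u):
    """Toy stiff nonlinear RHS standing in for the extended-MHD operator."""
    return np.array([-1000.0 * u[0] + u[1] ** 2, u[0] - u[1]])

def jac(u):
    """Analytic Jacobian df/du (the MRC obtains this via symbolic codegen)."""
    return np.array([[-1000.0, 2.0 * u[1]], [1.0, -1.0]])

def crank_nicolson_step(u0, dt, newton_iters=8):
    """Solve u1 - u0 - dt/2*(f(u0) + f(u1)) = 0 for u1 with Newton,
    using a direct linear solve (no preconditioner required)."""
    u1 = u0.copy()
    for _ in range(newton_iters):
        r = u1 - u0 - 0.5 * dt * (f(u0) + f(u1))
        J = np.eye(2) - 0.5 * dt * jac(u1)
        u1 -= np.linalg.solve(J, r)   # direct inner solve
    return u1

u = np.array([1.0, 0.0])
for _ in range(10):                   # dt far beyond the explicit limit
    u = crank_nicolson_step(u, dt=0.1)
print(u)
```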
Self-assembly of proglycinin and hybrid proglycinin synthesized in vitro from cDNA
Dickinson, Craig D.; Floener, Liliane A.; Lilley, Glenn G.; Nielsen, Niels C.
1987-01-01
An in vitro system was developed that results in the self-assembly of subunit precursors into complexes that resemble those found naturally in the endoplasmic reticulum. Subunits of glycinin, the predominant seed protein of soybeans, were synthesized from modified cDNAs using a combination of the SP6 transcription and the rabbit reticulocyte translation systems. Subunits produced from plasmid constructions that encoded either Gy4 or Gy5 gene products, but modified such that their signal sequences were absent, self-assembled into trimers equivalent in size to those precursors found in the endoplasmic reticulum. In contrast, proteins synthesized in vitro from Gy4 constructs failed to self-assemble when the signal sequence was left intact (e.g., preproglycinin) or when the coding sequence was modified to remove 27 amino acids from an internal hydrophobic region, which is highly conserved among the glycinin subunits. Various hybrid subunits were also produced by trading portions of Gy4 and Gy5 cDNAs and all self-assembled in our system. The in vitro assembly system provides an opportunity to study the self-assembly of precursors and to probe for regions important for assembly. It will also be helpful in attempts to engineer beneficial nutritional changes into this important food protein. PMID:16593868
Verification of Modelica-Based Models with Analytical Solutions for Tritium Diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rader, Jordan D.; Greenwood, Michael Scott; Humrickhouse, Paul W.
2018-03-20
Here, tritium transport in metal and molten salt fluids combined with diffusion through high-temperature structural materials is an important phenomenon in both magnetic confinement fusion (MCF) and molten salt reactor (MSR) applications. For MCF, tritium is desirable to capture for fusion fuel. For MSRs, uncaptured tritium potentially can be released to the environment. In either application, quantifying the time- and space-dependent tritium concentration in the working fluid(s) and structural components is necessary. Whereas capability exists specifically for calculating tritium transport in such systems (e.g., using TMAP for fusion reactors), it is desirable to unify the calculation of tritium transport with other system variables such as dynamic fluid and structure temperature combined with control systems such as those that might be found in a system code. Some capability for radioactive trace substance transport exists in thermal-hydraulic systems codes (e.g., RELAP5-3D); however, this capability is not coupled to species diffusion through solids. Combined calculations of tritium transport and thermal-hydraulic solution have been demonstrated with TRIDENT, but only for a specific type of MSR. Researchers at Oak Ridge National Laboratory have developed a set of Modelica-based dynamic system modeling tools called TRANsient Simulation Framework Of Reconfigurable Models (TRANSFORM) that were used previously to model advanced fission reactors and associated systems. In this work, the augmented TRANSFORM library includes dynamically coupled fluid and solid trace substance transport and diffusion. Results from simulations are compared against analytical solutions for verification.
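A minimal sketch of such a verification step, assuming the classic semi-infinite-slab solution C(x,t) = C0 erfc(x / (2 sqrt(D t))) as the analytical reference (the paper's actual cases and parameters may differ; all numbers below are illustrative):

```python
import numpy as np
from scipy.special import erfc

D, C0, t_end = 1e-9, 1.0, 3600.0     # diffusivity (m^2/s), surface conc., time (s)
nx, L = 200, 1e-2                    # grid points, slab depth (m)
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
dt = 0.4 * dx**2 / D                 # respect the explicit stability limit
c = np.zeros(nx)
c[0] = C0                            # fixed surface concentration

t = 0.0
while t < t_end:                     # simple explicit finite-difference diffusion
    c[1:-1] += D * dt / dx**2 * (c[2:] - 2.0 * c[1:-1] + c[:-2])
    c[0], t = C0, t + dt

analytic = C0 * erfc(x / (2.0 * np.sqrt(D * t)))
print("max abs error vs analytic solution:", np.abs(c - analytic).max())
```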
A century of Gestalt psychology in visual perception: II. Conceptual and theoretical foundations.
Wagemans, Johan; Feldman, Jacob; Gepshtein, Sergei; Kimchi, Ruth; Pomerantz, James R; van der Helm, Peter A; van Leeuwen, Cees
2012-11-01
Our first review article (Wagemans et al., 2012) on the occasion of the centennial anniversary of Gestalt psychology focused on perceptual grouping and figure-ground organization. It concluded that further progress requires a reconsideration of the conceptual and theoretical foundations of the Gestalt approach, which is provided here. In particular, we review contemporary formulations of holism within an information-processing framework, allowing for operational definitions (e.g., integral dimensions, emergent features, configural superiority, global precedence, primacy of holistic/configural properties) and a refined understanding of its psychological implications (e.g., at the level of attention, perception, and decision). We also review 4 lines of theoretical progress regarding the law of Prägnanz-the brain's tendency of being attracted towards states corresponding to the simplest possible organization, given the available stimulation. The first considers the brain as a complex adaptive system and explains how self-organization solves the conundrum of trading between robustness and flexibility of perceptual states. The second specifies the economy principle in terms of optimization of neural resources, showing that elementary sensors working independently to minimize uncertainty can respond optimally at the system level. The third considers how Gestalt percepts (e.g., groups, objects) are optimal given the available stimulation, with optimality specified in Bayesian terms. Fourth, structural information theory explains how a Gestaltist visual system that focuses on internal coding efficiency yields external veridicality as a side effect. To answer the fundamental question of why things look as they do, a further synthesis of these complementary perspectives is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curca-Tivig, Florin; Merk, Stephan; Pautz, Andreas
2007-07-01
Anticipating future needs of our customers and willing to concentrate synergies and competences existing in the company for the benefit of our customers, AREVA NP decided in 2002 to develop the next generation of coupled neutronics/ core thermal-hydraulic (TH) code systems for fuel assembly and core design calculations for both, PWR and BWR applications. The global CONVERGENCE project was born: after a feasibility study of one year (2002) and a conceptual phase of another year (2003), development was started at the beginning of 2004. The present paper introduces the CONVERGENCE project, presents the main feature of the new code systemmore » ARCADIA{sup R} and concludes on customer benefits. ARCADIA{sup R} is designed to meet AREVA NP market and customers' requirements worldwide. Besides state-of-the-art physical modeling, numerical performance and industrial functionality, the ARCADIA{sup R} system is featuring state-of-the-art software engineering. The new code system will bring a series of benefits for our customers: e.g. improved accuracy for heterogeneous cores (MOX/ UOX, Gd...), better description of nuclide chains, and access to local neutronics/ thermal-hydraulics and possibly thermal-mechanical information (3D pin by pin full core modeling). ARCADIA is a registered trademark of AREVA NP. (authors)« less
Time trend of injection drug errors before and after implementation of bar-code verification system.
Sakushima, Ken; Umeki, Reona; Endoh, Akira; Ito, Yoichi M; Nasuhara, Yasuyuki
2015-01-01
Bar-code technology, used for verification of patients and their medication, could prevent medication errors in clinical practice. Retrospective analysis of electronically stored medical error reports was conducted in a university hospital. The number of reported medication errors of injected drugs, including wrong drug administration and administration to the wrong patient, was compared before and after implementation of the bar-code verification system for inpatient care. A total of 2867 error reports associated with injection drugs were extracted. Wrong patient errors decreased significantly after implementation of the bar-code verification system (17.4/year vs. 4.5/year, p< 0.05), although wrong drug errors did not decrease sufficiently (24.2/year vs. 20.3/year). The source of medication errors due to wrong drugs was drug preparation in hospital wards. Bar-code medication administration is effective for prevention of wrong patient errors. However, ordinary bar-code verification systems are limited in their ability to prevent incorrect drug preparation in hospital wards.
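One simple way to test such a change in yearly event counts (not necessarily the authors' method) is a conditional binomial test on the counts; the observation windows below are hypothetical, since the abstract reports only rates per year.

```python
from scipy.stats import binomtest

# Hypothetical observation windows (the paper reports rates per year):
years_before, years_after = 5, 4
errors_before = round(17.4 * years_before)   # 87 wrong-patient events
errors_after = round(4.5 * years_after)      # 18 wrong-patient events

# Conditional on the total, the 'before' count is binomial with
# p = t_before / (t_before + t_after) if the underlying rates are equal.
n = errors_before + errors_after
p0 = years_before / (years_before + years_after)
result = binomtest(errors_before, n, p0, alternative="greater")
print("one-sided p-value:", result.pvalue)
```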
Mapping among Number Words, Numerals, and Nonsymbolic Quantities in Preschoolers
ERIC Educational Resources Information Center
Hurst, Michelle; Anderson, Ursula; Cordes, Sara
2017-01-01
In mathematically literate societies, numerical information is represented in 3 distinct codes: a verbal code (i.e., number words); a digital, symbolic code (e.g., Arabic numerals); and an analogical code (i.e., quantities; Dehaene, 1992). To communicate effectively using these numerical codes, our understanding of number must involve an…
Automated Testcase Generation for Numerical Support Functions in Embedded Systems
NASA Technical Reports Server (NTRS)
Schumann, Johann; Schnieder, Stefan-Alexander
2014-01-01
We present a tool for the automatic generation of test stimuli for small numerical support functions, e.g., code for trigonometric functions, quaternions, filters, or table lookup. Our tool is based on KLEE to produce a set of test stimuli for full path coverage. We use a method of iterative deepening over abstractions to deal with floating-point values. During actual testing the stimuli exercise the code against a reference implementation. We illustrate our approach with results of experiments with low-level trigonometric functions, interpolation routines, and mathematical support functions from an open source UAS autopilot.
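The testing idea, exercising generated stimuli against a reference implementation within a tolerance, can be sketched as follows; the table-lookup sine and the sampled stimuli are illustrative stand-ins for KLEE-derived path-coverage inputs.

```python
import math

# Function under test: table lookup with linear interpolation, a typical
# embedded replacement for libm's sin on the range [0, pi/2].
N = 64
TABLE = [math.sin(i * (math.pi / 2) / N) for i in range(N + 1)]

def table_sin(x):
    pos = x / (math.pi / 2) * N
    i = min(int(pos), N - 1)
    frac = pos - i
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])

# Test stimuli: boundary values plus a sweep (a KLEE-like tool would
# derive these from path coverage instead of uniform sampling).
stimuli = [0.0, math.pi / 2] + [k * math.pi / 2 / 1000 for k in range(1001)]
worst = max(abs(table_sin(x) - math.sin(x)) for x in stimuli)
assert worst < 1e-3, worst
print("worst-case deviation from reference:", worst)
```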
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, L.C.; Deen, J.R.; Woodruff, W.L.
1995-02-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
ERIC Educational Resources Information Center
Vallesi, Antonino; Binns, Malcolm A.; Shallice, Tim
2008-01-01
The present study addresses the question of how such an abstract concept as time is represented by our cognitive system. Specifically, the aim was to assess whether temporal information is cognitively represented through left-to-right spatial coordinates, as already shown for other ordered sequences (e.g., numbers). In Experiment 1, the…
Numerical calculation of the neoclassical electron distribution function in an axisymmetric torus
NASA Astrophysics Data System (ADS)
Lyons, B. C.; Jardin, S. C.; Ramos, J. J.
2011-10-01
We solve for a stationary, axisymmetric electron distribution function (fe) in a torus using a drift-kinetic equation (DKE) with complete Landau collision operator. All terms are kept to gyroradius and collisionality orders relevant to high- temperature tokamaks (i.e., the neoclassical banana regime for electrons). A solubility condition on the DKE determines the non-Maxwellian pieces of fe (called fNMe) to all relevant orders. We work in a 4D phase space (ψ , θ , v , λ) , where ψ defines a flux surface, θ is the poloidal angle, v is the total velocity, and λ is the pitch angle parameter. We expand fNMe in finite elements in both v and λ. The Rosenbluth potentials, Φ and Ψ, which define the collision operator, are expanded in Legendre series in cos χ , where χ is the pitch angle, Fourier series in cos θ , and finite elements in v. At each ψ, we solve a block tridiagonal system for fNMe, Φ, and Ψ simultaneously, resulting in a neoclassical fe for the entire torus. Our goal is to demonstrate that such a formulation can be accurately and efficiently solved numerically. Results will be compared to other codes (e.g., NCLASS, NEO) and could be used as a kinetic closure for an MHD code (e.g., M3D-C1). Supported by the DOE SCGF and DOE Contract # DE-AC02-09CH11466. Based on analytic work by Ramos, PoP 17, 082502 (2010).
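The block tridiagonal solve at each ψ can be performed with a block Thomas algorithm; a generic sketch follows (independent of the physics, with random well-conditioned test blocks, so all sizes and values are illustrative).

```python
import numpy as np

def solve_block_tridiag(A, B, C, d):
    """Block Thomas algorithm for A[i] x[i-1] + B[i] x[i] + C[i] x[i+1] = d[i]
    with n blocks of size m. No pivoting across blocks, so each eliminated
    diagonal block must be well conditioned."""
    n = len(B)
    Bp = [b.copy() for b in B]
    dp = [v.copy() for v in d]
    for i in range(1, n):                      # forward elimination
        W = A[i] @ np.linalg.inv(Bp[i - 1])
        Bp[i] -= W @ C[i - 1]
        dp[i] -= W @ dp[i - 1]
    x = [None] * n
    x[-1] = np.linalg.solve(Bp[-1], dp[-1])    # back substitution
    for i in range(n - 2, -1, -1):
        x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
    return x

rng = np.random.default_rng(2)
n, m = 6, 3
B = [np.eye(m) * 4 + rng.normal(size=(m, m)) * 0.1 for _ in range(n)]
A = [rng.normal(size=(m, m)) * 0.1 for _ in range(n)]   # A[0] unused
C = [rng.normal(size=(m, m)) * 0.1 for _ in range(n)]   # C[-1] unused
d = [rng.normal(size=m) for _ in range(n)]
x = solve_block_tridiag(A, B, C, d)
print("residual:", max(np.linalg.norm(
    (A[i] @ x[i - 1] if i else 0) + B[i] @ x[i]
    + (C[i] @ x[i + 1] if i < n - 1 else 0) - d[i]) for i in range(n)))
```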
Seeing the mean: ensemble coding for sets of faces.
Haberman, Jason; Whitney, David
2009-06-01
We frequently encounter groups of similar objects in our visual environment: a bed of flowers, a basket of oranges, a crowd of people. How does the visual system process such redundancy? Research shows that rather than code every element in a texture, the visual system favors a summary statistical representation of all the elements. The authors demonstrate that although it may facilitate texture perception, ensemble coding also occurs for faces-a level of processing well beyond that of textures. Observers viewed sets of faces varying in emotionality (e.g., happy to sad) and assessed the mean emotion of each set. Although observers retained little information about the individual set members, they had a remarkably precise representation of the mean emotion. Observers continued to discriminate the mean emotion accurately even when they viewed sets of 16 faces for 500 ms or less. Modeling revealed that perceiving the average facial expression in groups of faces was not due to noisy representation or noisy discrimination. These findings support the hypothesis that ensemble coding occurs extremely fast at multiple levels of visual analysis. (c) 2009 APA, all rights reserved.
Error correcting coding-theory for structured light illumination systems
NASA Astrophysics Data System (ADS)
Porras-Aguilar, Rosario; Falaggis, Konstantinos; Ramos-Garcia, Ruben
2017-06-01
Intensity-discrete structured light illumination systems project a series of projection patterns for the estimation of the absolute fringe order using only the temporal grey-level sequence at each pixel. This work proposes the use of error-correcting codes for pixel-wise correction of measurement errors. The use of an error-correcting code is advantageous in many ways: it reduces the effect of random intensity noise, it corrects outliers near the border of the fringe commonly present when using intensity-discrete patterns, and it provides robustness in case of severe measurement errors (even for burst errors where whole frames are lost). The latter aspect is particularly interesting in environments with varying ambient light as well as in critical safety applications, e.g. monitoring of deformations of components in nuclear power plants, where high reliability is ensured even in case of short measurement disruptions. A special form of burst errors is the so-called salt-and-pepper noise, which can largely be removed with error-correcting codes using only the information of a given pixel. The performance of this technique is evaluated using both simulations and experiments.
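As a concrete toy instance (illustrative only, not the scheme proposed in the paper), fringe-order bits can be protected with a Hamming(7,4) code, which corrects any single flipped bit in a pixel's temporal sequence:

```python
import numpy as np

# Hamming(7,4): systematic generator and parity-check matrices over GF(2).
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(u):
    return (u @ G) % 2

def decode(r):
    """Correct any single bit error using the syndrome."""
    s = (H @ r) % 2
    if s.any():
        # The syndrome equals the column of H at the error position.
        err = int(np.where((H.T == s).all(axis=1))[0][0])
        r = r.copy()
        r[err] ^= 1
    return r[:4]                           # data bits are systematic

fringe_order = np.array([1, 0, 1, 1])      # a 4-bit fringe index
tx = encode(fringe_order)
rx = tx.copy()
rx[2] ^= 1                                 # a salt-and-pepper hit on one frame
assert (decode(rx) == fringe_order).all()
print("corrected fringe order:", decode(rx))
```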
2009-01-01
… global ocean color sensors (e.g., MODIS). Also, this resolution roughly matches the swath of MicroSAS radiometric measurements in the visible range.
NASA Technical Reports Server (NTRS)
Mazaheri, Alireza; Gnoffo, Peter A.; Johnston, Christopher O.; Kleb, Bil
2011-01-01
This user's manual provides in-depth information concerning the installation and execution of Laura, version 5. Laura is a structured, multi-block, computational aerothermodynamic simulation code. Version 5 represents a major refactoring of the original Fortran 77 Laura code toward a modular structure afforded by Fortran 95. The refactoring improved usability and maintainability by eliminating the requirement for problem-dependent re-compilations, providing more intuitive distribution of functionality, and simplifying interfaces required for multi-physics coupling. As a result, Laura now shares gas-physics modules, MPI modules, and other low-level modules with the Fun3D unstructured-grid code. In addition to internal refactoring, several new features and capabilities have been added, e.g., a GNU-standard installation process, parallel load balancing, automatic trajectory point sequencing, free-energy minimization, and coupled ablation and flowfield radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Grady, K; Davis, S D; Papaconstadopoulos, P
2014-08-15
A PTW microLion liquid ionization chamber and an Exradin A1SL air-filled ionization chamber have been modeled using the egs-chamber user code of the EGSnrc system to determine their perturbation effects in water in a 5 × 5 cm² 18 MV photon beam. A model of the Varian CL21EX linear accelerator was constructed using the BEAMnrc Monte Carlo code, and was validated by comparing measured PDDs and profiles from the microLion and A1SL chambers to calculated results that included chamber models. Measured PDDs for a 5 × 5 cm² field for the microLion chamber agreed with calculations to within 1.5% beyond a depth of 0.5 cm, and the A1SL PDDs agreed within 1.0% beyond 1.0 cm. Measured and calculated profiles at 10 cm depth agreed within 1.0% for both chambers inside the field, and within 4.0% near the field edge. Local percent differences increased up to 15% at 4 cm outside the field. The ratio of dose to water in the absence of the chamber relative to dose in the chamber's active volume as a function of off-axis distance was calculated using the egs-chamber correlated sampling technique. The dose ratio was nearly constant inside the field and consistent with the stopping power ratios of water to detector material, but varied up to 3.3% near the field edge and 5.2% at 4 cm outside the field. Once these perturbation effects are fully characterized for more field sizes and detectors, they could be applied to clinical water tank measurements for improved dosimetric accuracy.
14 CFR Sec. 1-4 - System of accounts coding.
Code of Federal Regulations, 2010 CFR
2010-01-01
... General Accounting Provisions Sec. 1-4 System of accounts coding. (a) A four digit control number is assigned for each balance sheet and profit and loss account. Each balance sheet account is numbered sequentially, within blocks, designating basic balance sheet classifications. The first two digits of the four...
Variable Coded Modulation software simulation
NASA Astrophysics Data System (ADS)
Sielicki, Thomas A.; Hamkins, Jon; Thorsen, Denise
This paper reports on the design and performance of a new Variable Coded Modulation (VCM) system. This VCM system comprises eight of NASA's recommended codes from the Consultative Committee for Space Data Systems (CCSDS) standards, including four turbo and four AR4JA/C2 low-density parity-check codes, together with six modulation types (BPSK, QPSK, 8-PSK, 16-APSK, 32-APSK, 64-APSK). The signaling protocol for the transmission mode is based on a CCSDS recommendation. The coded modulation may be dynamically chosen, block to block, to optimize throughput.
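The throughput of each VCM mode is the product of the code rate and the modulation's bits per symbol. A small sketch tabulating this (the code rates listed are illustrative placeholders, not the specific CCSDS rates used in the paper):

```python
mods = {"BPSK": 1, "QPSK": 2, "8-PSK": 3, "16-APSK": 4, "32-APSK": 5, "64-APSK": 6}
rates = [1/2, 2/3, 4/5, 7/8]   # illustrative code rates

# Information throughput in bits per channel symbol for each VCM mode;
# the link picks, block to block, the highest mode its SNR can support.
for name, bits_per_symbol in mods.items():
    effs = ", ".join(f"{r * bits_per_symbol:.2f}" for r in rates)
    print(f"{name:>7}: {effs} bit/symbol")
```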
Lee, Hwan Young; Yoo, Ji-Eun; Park, Myung Jin; Chung, Ukhee; Kim, Chong-Youl; Shin, Kyoung-Jin
2006-11-01
The present study analyzed 21 coding region SNP markers and one deletion motif for the determination of East Asian mitochondrial DNA (mtDNA) haplogroups by designing three multiplex systems which apply single base extension methods. Using two multiplex systems, all 593 Korean mtDNAs were allocated into 15 haplogroups: M, D, D4, D5, G, M7, M8, M9, M10, M11, R, R9, B, A, and N9. As the D4 haplotypes occurred most frequently in Koreans, the third multiplex system was used to further define D4 subhaplogroups: D4a, D4b, D4e, D4g, D4h, and D4j. This method allowed the complementation of coding region information with control region mutation motifs and the resultant findings also suggest reliable control region mutation motifs for the assignment of East Asian mtDNA haplogroups. These three multiplex systems produce good results in degraded samples as they contain small PCR products (101-154 bp) for single base extension reactions. SNP scoring was performed in 101 old skeletal remains using these three systems to prove their utility in degraded samples. The sequence analysis of mtDNA control region with high incidence of haplogroup-specific mutations and the selective scoring of highly informative coding region SNPs using the three multiplex systems are useful tools for most applications involving East Asian mtDNA haplogroup determination and haplogroup-directed stringent quality control.
Application of a GPU-Assisted Maxwell Code to Electromagnetic Wave Propagation in ITER
NASA Astrophysics Data System (ADS)
Kubota, S.; Peebles, W. A.; Woodbury, D.; Johnson, I.; Zolfaghari, A.
2014-10-01
The Low Field Side Reflectometer (LSFR) on ITER is envisioned to provide capabilities for electron density profile and fluctuation measurements in both the plasma core and edge. The current design for the Equatorial Port Plug 11 (EPP11) employs seven monostatic antennas for use with both fixed-frequency and swept-frequency systems. The present work examines the characteristics of this layout using the 3-D version of the GPU-Assisted Maxwell Code (GAMC-3D). Previous studies in this area were performed with either 2-D full-wave codes or 3-D ray- and beam-tracing. GAMC-3D is based on the FDTD method and can be run with either a fixed-frequency or modulated (e.g. FMCW) source, and with either a stationary or moving target (e.g. Doppler backscattering). The code is designed to run on a single NVIDIA Tesla GPU accelerator, and utilizes a technique based on the moving window method to overcome the size limitation of the onboard memory. Effects such as beam drift, linear mode conversion, and diffraction/scattering will be examined. Comparisons will be made with beam-tracing calculations using the complex eikonal method. Supported by U.S. DoE Grants DE-FG02-99ER54527 and DE-AC02-09CH11466, and the DoE SULI Program at PPPL.
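A minimal 1-D FDTD update with normalized fields and a toy source conveys the method's core loop; GAMC-3D's 3-D physics, GPU execution, and moving-window machinery are of course far beyond this sketch, and every number below is illustrative.

```python
import numpy as np

c = 3e8
nx, dx = 400, 1e-3
dt = 0.99 * dx / c                 # CFL-limited time step
S = c * dt / dx                    # Courant number (< 1 in 1-D)
Ey = np.zeros(nx)                  # E field at integer grid points
Hz = np.zeros(nx - 1)              # normalized H field at half points

for n in range(500):
    Hz += S * (Ey[1:] - Ey[:-1])             # Faraday update (normalized)
    Ey[1:-1] += S * (Hz[1:] - Hz[:-1])       # Ampere update (normalized)
    Ey[nx // 4] += np.exp(-((n - 60) / 20.0) ** 2)   # soft Gaussian source
print("peak |Ey| after 500 steps:", np.abs(Ey).max())
```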
A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong
2013-01-01
Based on the optimization and improvement of the construction method for systematically constructed Gallager (SCG) (4, k) codes, a novel SCG low-density parity-check (SCG-LDPC) (3969,3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC (6561,6240) code with a code rate of 95.1% is constructed by increasing the length of the SCG-LDPC (3969,3720) code, so that the code rate of LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC (6561,6240) code and the BCH (127,120) code with a code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH (127,120)+SCG-LDPC (6561,6240) concatenated code is respectively 2.28 dB and 0.48 dB more than those of the classic RS (255,239) code and the SCG-LDPC (6561,6240) code at a bit error rate (BER) of 10^-7.
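Net coding gain is commonly computed from Q-factors at the output and threshold BERs plus a rate-loss term; a sketch under that common definition (the paper may define NCG slightly differently, and the numbers below are illustrative, not its data):

```python
import math
from scipy.special import erfcinv

def q_db(ber):
    """Q-factor (dB) corresponding to a BER for binary signaling."""
    return 20 * math.log10(math.sqrt(2) * erfcinv(2 * ber))

def net_coding_gain(ber_out, ber_in, rate):
    """A common NCG definition: Q at the target output BER minus Q at the
    code's input-BER threshold, minus the rate-loss penalty 10*log10(1/R)."""
    return q_db(ber_out) - q_db(ber_in) + 10 * math.log10(rate)

# Illustrative numbers in the spirit of the abstract (not its exact results):
print(f"NCG ~ {net_coding_gain(1e-7, 4e-2, 0.945):.2f} dB")
```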
Experimental study of non-binary LDPC coding for long-haul coherent optical QPSK transmissions.
Zhang, Shaoliang; Arabaci, Murat; Yaman, Fatih; Djordjevic, Ivan B; Xu, Lei; Wang, Ting; Inada, Yoshihisa; Ogata, Takaaki; Aoki, Yasuhiro
2011-09-26
The performance of a rate-0.8 4-ary LDPC code has been studied in a 50 GHz-spaced 40 Gb/s DWDM system with PDM-QPSK modulation. A net effective coding gain of 10 dB is obtained at a BER of 10^-6. With the aid of time-interleaved polarization multiplexing and MAP detection, 10,560 km transmission over legacy dispersion-managed fiber is achieved without any countable errors. The proposed nonbinary quasi-cyclic LDPC code achieves an uncoded BER threshold at 4×10^-2. Potential issues such as phase ambiguity and coding length are also discussed for implementing LDPC in current coherent optical systems. © 2011 Optical Society of America
48 CFR 501.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Publication and code arrangement. 501.105-1 Section 501.105-1 Federal Acquisition Regulations System GENERAL SERVICES... 501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily...
48 CFR 501.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Publication and code arrangement. 501.105-1 Section 501.105-1 Federal Acquisition Regulations System GENERAL SERVICES... 501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily...
48 CFR 501.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Publication and code arrangement. 501.105-1 Section 501.105-1 Federal Acquisition Regulations System GENERAL SERVICES... 501.105-1 Publication and code arrangement. The GSAR is published in the following sources: (a) Daily...
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deen, J.R.; Woodruff, W.L.; Leal, L.E.
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
ISPOR Code of Ethics 2017 (4th Edition).
Santos, Jessica; Palumbo, Francis; Molsen-David, Elizabeth; Willke, Richard J; Binder, Louise; Drummond, Michael; Ho, Anita; Marder, William D; Parmenter, Louise; Sandhu, Gurmit; Shafie, Asrul A; Thompson, David
2017-12-01
As the leading health economics and outcomes research (HEOR) professional society, ISPOR has a responsibility to establish a uniform, harmonized international code for ethical conduct. ISPOR has updated its 2008 Code of Ethics to reflect the current research environment. This code addresses what is acceptable and unacceptable in research, from inception to the dissemination of its results. There are nine chapters: 1 - Introduction; 2 - Ethical Principles: respect, beneficence and justice, with reference to a non-exhaustive compilation of international, regional, and country-specific guidelines and standards; 3 - Scope: HEOR definitions and how HEOR and the Code relate to other research fields; 4 - Research Design Considerations: primary and secondary data related issues, e.g., participant recruitment, population and research setting, sample size/site selection, incentive/honorarium, administration databases, registration of retrospective observational studies and modeling studies; 5 - Data Considerations: privacy and data protection, combining, verification and transparency of research data, scientific misconduct, etc.; 6 - Sponsorship and Relationships with Others: roles of researchers, sponsors, key opinion leaders and advisory board members, research participants, and institutional review board (IRB) / independent ethics committee (IEC) approval and responsibilities; 7 - Patient Centricity and Patient Engagement: a new addition, with explanation and guidance; 8 - Publication and Dissemination; and 9 - Conclusion and Limitations. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Development of a CFD code for casting simulation
NASA Technical Reports Server (NTRS)
Murph, Jesse E.
1993-01-01
Because of high rejection rates for large structural castings (e.g., the Space Shuttle Main Engine Alternate Turbopump Design Program), a reliable casting simulation computer code is very desirable. This code would reduce both the development time and life cycle costs by allowing accurate modeling of the entire casting process. While this code could be used for other types of castings, the most significant reductions of time and cost would probably be realized in complex investment castings, where any reduction in the number of development castings would be of significant benefit. The casting process is conveniently divided into three distinct phases: (1) mold filling, where the melt is poured or forced into the mold cavity; (2) solidification, where the melt undergoes a phase change to the solid state; and (3) cool down, where the solidified part continues to cool to ambient conditions. While these phases may appear to be separate and distinct, temporal overlaps do exist between phases (e.g., local solidification occurring during mold filling), and some phenomenological events are affected by others (e.g., residual stresses depend on solidification and cooling rates). Therefore, a reliable code must accurately model all three phases and the interactions between each. While many codes have been developed (to various stages of complexity) to model the solidification and cool down phases, only a few codes have been developed to model mold filling.
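The solidification phase in particular lends itself to a compact numerical illustration. Below is a minimal, hypothetical 1-D sketch of solidification using an explicit enthalpy method in Python; the material constants, grid, and boundary conditions are invented placeholders for illustration and are not taken from the NASA code described above.

```python
import numpy as np

# Minimal 1-D enthalpy-method sketch of casting solidification (illustrative
# only). All material constants below are invented placeholders.
rho, cp, k, L = 7000.0, 600.0, 30.0, 2.7e5   # density, heat capacity, conductivity, latent heat
T_melt = 1700.0                               # melting temperature, K
nx, dx = 50, 0.002                            # grid cells and spacing, m
dt = 0.2 * rho * cp * dx**2 / k               # explicit stability limit, with margin

T = np.full(nx, T_melt + 50.0)                # melt poured slightly superheated
H = rho * (cp * T + L)                        # volumetric enthalpy (fully liquid)

def temperature(H):
    """Invert enthalpy -> temperature with an isothermal phase-change plateau."""
    return np.where(H >= rho * (cp * T_melt + L), (H / rho - L) / cp,   # liquid
           np.where(H <= rho * cp * T_melt, H / (rho * cp),             # solid
                    T_melt))                                            # mushy zone

for step in range(20000):
    T = temperature(H)
    T[0] = T[-1] = 300.0                      # chilled mold walls (Dirichlet)
    lap = (np.roll(T, 1) - 2 * T + np.roll(T, -1)) / dx**2
    H[1:-1] += dt * k * lap[1:-1]             # explicit conduction update

print("centerline temperature after cooling:", temperature(H)[nx // 2])
```

The enthalpy formulation is one common way to capture the latent-heat release at the phase change without tracking the solid/liquid front explicitly; a production casting code would couple this to mold filling and residual-stress models, as the abstract notes.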
A New Analytic-Adaptive Model for EGS Assessment, Development and Management Support
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danko, George L
To increase understanding of the energy extraction capacity of Enhanced Geothermal Systems (EGS), a numerical model development and application project was completed. The general objective of the project was to develop and apply a new, data-coupled Thermal-Hydrological-Mechanical-Chemical (T-H-M-C) model in which the four internal components can be freely selected from existing simulation software without merging and cross-combining a diverse set of computational codes. Eight tasks were completed during the project period. The results are reported in five publications, an MS thesis, twelve quarterly reports, and two annual reports to DOE. Two US patents were also issued during the project period, with one patent application originating prior to the start of the project. The “Multiphase Physical Transport Modeling Method and Modeling System” (U.S. Patent 8,396,693 B2, 2013), a key element in the GHE sub-model solution, is successfully used for EGS studies. The “Geothermal Energy Extraction System and Method” invention (U.S. Patent 8,430,166 B2, 2013) originates from the time of project performance, describing a new fluid flow control solution. The new, coupled T-H-M-C numerical model will help in analyzing and designing new, efficient EGS systems.
NASA Astrophysics Data System (ADS)
Ford, Eric B.
2009-05-01
We present the results of a highly parallel Kepler equation solver using the Graphics Processing Unit (GPU) on a commercial nVidia GeForce 280GTX and the "Compute Unified Device Architecture" (CUDA) programming environment. We apply this to evaluate a goodness-of-fit statistic (e.g., χ²) for Doppler observations of stars potentially harboring multiple planetary companions (assuming negligible planet-planet interactions). Given the high dimensionality of the model parameter space (at least five dimensions per planet), a global search is extremely computationally demanding. We expect that the underlying Kepler solver and model evaluator will be combined with a wide variety of more sophisticated algorithms to provide efficient global search, parameter estimation, model comparison, and adaptive experimental design for radial velocity and/or astrometric planet searches. We tested multiple implementations using single precision, double precision, pairs of single precision, and mixed precision arithmetic. We find that the vast majority of computations can be performed using single precision arithmetic, with selective use of compensated summation for increased precision. However, standard single precision is not adequate for calculating the mean anomaly from the time of observation and orbital period when evaluating the goodness-of-fit for real planetary systems and observational data sets. Using all double precision, our GPU code outperforms a similar code using a modern CPU by a factor of over 60. Using mixed precision, our GPU code provides a speed-up factor of over 600 when evaluating nsys > 1024 model planetary systems, each containing npl = 4 planets and assuming nobs = 256 observations of each system. We conclude that modern GPUs also offer a powerful tool for repeatedly evaluating Kepler's equation and a goodness-of-fit statistic for orbital models when presented with a large parameter space.
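The kernel being parallelized is the solution of Kepler's equation, M = E - e·sin E, for the eccentric anomaly E. A minimal CPU-side sketch in Python (Newton iteration, vectorized with NumPy) illustrates both the computation and the single- versus double-precision issue the abstract describes; the tolerances, orbital parameters, and iteration cap are illustrative assumptions, not values from the paper.

```python
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for E by Newton iteration.
    M: array of mean anomalies (radians); e: eccentricity."""
    E = M + e * np.sin(M)                        # reasonable starting guess
    for _ in range(max_iter):
        f = E - e * np.sin(E) - M
        E -= f / (1.0 - e * np.cos(E))           # Newton step
        if np.max(np.abs(f)) < tol:
            break
    return E

# Mean anomaly from time of observation and orbital period: this is the step
# that loses accuracy in single precision for long baselines, as noted above.
t = np.array([3652.5, 7305.0])                   # days since epoch (illustrative)
P, t_peri, e = 3.1, 0.0, 0.3
M = 2 * np.pi * (((t - t_peri) / P) % 1.0)       # reduce the phase before forming M
E64 = solve_kepler(M.astype(np.float64), e)
E32 = solve_kepler(M.astype(np.float32), np.float32(e), tol=1e-6)
print(np.abs(E64 - E32.astype(np.float64)))      # float32 loses several digits
```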
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, A.W.
1990-04-01
This paper describes an approach to solving air quality problems that frequently occur during iterations of the baseline change process. From a schedule standpoint, it is desirable to perform this evaluation in as short a time as possible, while budgetary pressures limit the size of the staff available to do the work. Without a method in place to deal with baseline change proposal requests, the environmental analysts may not be able to produce the analysis results in the time frame expected. Using a concept called the Rapid Response Air Quality Analysis System (RAAS), the problems of timing and cost become tractable. The system could be adapted to assess other atmospheric pathway impacts, e.g., acoustics or visibility. The air quality analysis system used to perform the environmental assessment (EA) analysis for the Salt Repository Project (part of the Civilian Radioactive Waste Management Program), and later to evaluate the consequences of proposed baseline changes, consists of three components: emission source data files; emission rates contained in spreadsheets; and impact assessment model codes. The spreadsheets contain user-written codes (macros) that calculate emission rates from (1) emission source data (e.g., numbers and locations of sources, detailed operating schedules, and source specifications including horsepower, load factor, and duty cycle); (2) emission factors such as those published by the U.S. Environmental Protection Agency; and (3) control efficiencies.
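The spreadsheet macros reduce to a simple calculation: an hourly emission rate is the product of source count, power, load factor, and duty cycle with an emission factor, reduced by any control efficiency. A hedged Python sketch of that calculation follows; all numeric values are invented placeholders, not data from the EA.

```python
# Hedged sketch of a RAAS-style emission-rate macro. All values below are
# invented placeholders, not data from the Salt Repository Project EA.
def emission_rate(n_sources, horsepower, load_factor, duty_cycle,
                  emission_factor_g_per_hp_hr, control_efficiency=0.0):
    """Return grams/hour emitted by a group of identical engine sources."""
    effective_power = n_sources * horsepower * load_factor * duty_cycle
    return effective_power * emission_factor_g_per_hp_hr * (1.0 - control_efficiency)

# e.g., 4 diesel engines, 250 hp each, 60% load, running 75% of each hour,
# an EPA-style NOx factor of 14 g/hp-hr, and 20% control efficiency:
nox = emission_rate(4, 250.0, 0.60, 0.75, 14.0, 0.20)
print(f"NOx: {nox:.0f} g/hr")
```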
Design implications for task-specific search utilities for retrieval and re-engineering of code
NASA Astrophysics Data System (ADS)
Iqbal, Rahat; Grzywaczewski, Adam; Halloran, John; Doctor, Faiyaz; Iqbal, Kashif
2017-05-01
The importance of information retrieval systems is unquestionable in modern society, and both individuals and enterprises recognise the benefits of being able to find information effectively. Current code-focused information retrieval systems such as Google Code Search, Codeplex or Koders produce results based on specific keywords. However, these systems do not take into account developers' context, such as development language, technology framework, goal of the project, project complexity and the developer's domain expertise. They also impose an additional cognitive burden on users in switching between different interfaces and clicking through to find the relevant code. Hence, they are not used by software developers. In this paper, we discuss how software engineers interact with information and general-purpose information retrieval systems (e.g. Google, Yahoo!) and investigate to what extent domain-specific search and recommendation utilities can be developed in order to support their work-related activities. In order to investigate this, we conducted a user study and found that software engineers followed many identifiable and repeatable work tasks and behaviours. These behaviours can be used to develop implicit relevance feedback-based systems based on the observed retention actions. Moreover, we discuss the implications for the development of task-specific search and collaborative recommendation utilities embedded with the Google standard search engine and Microsoft IntelliSense for retrieval and re-engineering of code. Based on implicit relevance feedback, we have implemented a prototype of the proposed collaborative recommendation system, which was evaluated in a controlled environment simulating the real-world situation of professional software engineers. The evaluation has achieved promising initial results on the precision and recall performance of the system.
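As an illustration of the implicit-relevance-feedback idea, here is a toy Python sketch that re-ranks code search results using observed retention actions (copying, editing after pasting, dwell time). The action names and weights are invented assumptions for illustration; the paper derives its behaviours empirically from the user study.

```python
# Toy implicit-relevance-feedback re-ranker. The action weights are invented
# assumptions; the study above infers such signals from observed behaviour.
ACTION_WEIGHTS = {"copied_snippet": 3.0, "edited_after_paste": 2.0,
                  "bookmarked": 1.5, "dwell_over_30s": 1.0, "bounced": -1.0}

def rerank(results, interaction_log):
    """Sort results by baseline score plus accumulated implicit feedback."""
    feedback = {}
    for doc_id, action in interaction_log:
        feedback[doc_id] = feedback.get(doc_id, 0.0) + ACTION_WEIGHTS.get(action, 0.0)
    return sorted(results,
                  key=lambda r: r["score"] + feedback.get(r["id"], 0.0),
                  reverse=True)

results = [{"id": "snippetA", "score": 1.2}, {"id": "snippetB", "score": 1.0}]
log = [("snippetB", "copied_snippet"), ("snippetA", "bounced")]
print([r["id"] for r in rerank(results, log)])   # snippetB now ranks first
```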
Majeran, Wojciech; Cai, Yang; Sun, Qi; van Wijk, Klaas J.
2005-01-01
Chloroplasts of maize (Zea mays) leaves differentiate into specific bundle sheath (BS) and mesophyll (M) types to accommodate C4 photosynthesis. Consequences for other plastid functions are not well understood but are addressed here through a quantitative comparative proteome analysis of purified M and BS chloroplast stroma. Three independent techniques were used, including cleavable stable isotope coded affinity tags. Enzymes involved in lipid biosynthesis, nitrogen import, and tetrapyrrole and isoprenoid biosynthesis are preferentially located in the M chloroplasts. By contrast, enzymes involved in starch synthesis and sulfur import preferentially accumulate in BS chloroplasts. The different soluble antioxidative systems, in particular peroxiredoxins, accumulate at higher levels in M chloroplasts. We also observed differential accumulation of proteins involved in expression of plastid-encoded proteins (e.g., EF-Tu, EF-G, and mRNA binding proteins) and thylakoid formation (VIPP1), whereas others were equally distributed. Enzymes related to the C4 shuttle, the carboxylation and regeneration phase of the Calvin cycle, and several regulators (e.g., CP12) distributed as expected. However, enzymes involved in triose phosphate reduction and triose phosphate isomerase are primarily located in the M chloroplasts, indicating that the M-localized triose phosphate shuttle should be viewed as part of the BS-localized Calvin cycle, rather than a parallel pathway. PMID:16243905
2014-10-01
... offer a practical solution to calculating the grain-scale heterogeneity present in the deformation field. Consequently, crystal plasticity models ... process/performance simulation codes (e.g., the crystal plasticity finite element method). SUBJECT TERMS: ICME; microstructure informatics; higher ... (iii) protocols for direct and efficient linking of materials models/databases into process/performance simulation codes (e.g., the crystal plasticity finite element method).
Knowing what and where: TMS evidence for the dual neural basis of geographical knowledge.
Hoffman, Paul; Crutch, Sebastian
2016-02-01
All animals acquire knowledge about the topography of their immediate environment through direct exploration. Uniquely, humans also acquire geographical knowledge indirectly through exposure to maps and verbal information, resulting in a rich database of global geographical knowledge. We used transcranial magnetic stimulation to investigate the structure and neural basis of this critical but poorly understood component of semantic knowledge. Participants completed tests of geographical knowledge that probed either information about spatial locations (e.g., France borders Spain) or non-spatial taxonomic information (e.g., France is a country). TMS applied to the anterior temporal lobe, a region that codes conceptual knowledge for words and objects, had a general disruptive effect on the geographical tasks. In contrast, stimulation of the intraparietal sulcus (IPS), a region involved in the coding of spatial and numerical information, had a highly selective effect on spatial geographical decisions but no effect on taxonomic judgements. Our results establish that geographical concepts lie at the intersection of two distinct neural representation systems, and provide insights into how the interaction of these systems shapes our understanding of the world. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
A Reference Architecture for Space Information Management
NASA Technical Reports Server (NTRS)
Mattmann, Chris A.; Crichton, Daniel J.; Hughes, J. Steven; Ramirez, Paul M.; Berrios, Daniel C.
2006-01-01
We describe a reference architecture for space information management systems that elegantly overcomes the rigid design of common information systems in many domains. The reference architecture consists of a set of flexible, reusable, independent models and software components that function in unison, but remain separately managed entities. The main guiding principle of the reference architecture is to separate the various models of information (e.g., data, metadata, etc.) from implemented system code, allowing each to evolve independently. System modularity, systems interoperability, and dynamic evolution of information system components are the primary benefits of the design of the architecture. The architecture requires the use of information models that are substantially more advanced than those used by the vast majority of information systems. These models are more expressive and can be more easily modularized, distributed and maintained than simpler models, e.g., configuration files and data dictionaries. Our current work focuses on formalizing the architecture within a CCSDS Green Book and evaluating the architecture within the context of the C3I initiative.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradley, Jon David; Oppel III, Fred J.; Hart, Brian E.
Umbra is a flexible simulation framework for complex systems that can be used by itself for modeling, simulation, and analysis, or to create specific applications. It has been applied to many operations, primarily dealing with robotics and system-of-systems simulations. This version, from 4.8 to 4.8.3b, incorporates bug fixes, refactored code, and new managed C++ wrapper code that can be used to bridge new applications written in C# to the C++ libraries. The new managed C++ wrapper code includes the (project/directory) components BasicSimulation, CSharpUmbraInterpreter, LogFileView, UmbraAboutBox, UmbraControls, UmbraMonitor and UmbraWrapper.
1989-02-01
... installs, and provides life-cycle support for information management systems. ... Provides information and reports to higher authority and the scientific community ... Evaluation and Survey Systems: develops systems to evaluate the effectiveness of quality-of-life programs and to improve the quality of personnel ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Friedrichs, D.R.
1980-01-01
The Assessment of Effectiveness of Geologic Isolation Systems (AEGIS) Program is developing and applying the methodology for assessing the far-field, long-term post-closure safety of deep geologic nuclear waste repositories. AEGIS is being performed by Pacific Northwest Laboratory (PNL) under contract with the Office of Nuclear Waste Isolation (ONWI) for the Department of Energy (DOE). One task within AEGIS is the development of methodology for analysis of the consequences (water pathway) from loss of repository containment as defined by various release scenarios. Analysis of the long-term, far-field consequences of release scenarios requires the application of numerical codes which simulate the hydrologic systems, model the transport of released radionuclides through the hydrologic systems to the biosphere, and, where applicable, assess the radiological dose to humans. The various input parameters required in the analysis are compiled in data systems. The data are organized and prepared by various input subroutines for use by the hydrologic and transport codes. The hydrologic models simulate the groundwater flow systems and provide water flow directions, rates, and velocities as inputs to the transport models. Outputs from the transport models are basically graphs of radionuclide concentration in the groundwater plotted against time. After dilution in the receiving surface-water body (e.g., lake, river, bay), these data are the input source terms for the dose models, if dose assessments are required. The dose models calculate radiation dose to individuals and populations. CIRMIS (Comprehensive Information Retrieval and Model Input Sequence) Data System is a storage and retrieval system for model input and output data, including graphical interpretation and display. This is the fourth of four volumes of the description of the CIRMIS Data System.
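The pipeline the abstract describes (a flow model feeding a transport model, whose concentration-versus-time output becomes the dose model's source term after dilution) can be summarized in a toy Python sketch. Every parameter value below is an invented placeholder, not an AEGIS or CIRMIS quantity.

```python
import numpy as np

# Toy sketch of the AEGIS-style chain: flow -> transport -> dilution -> dose.
# All parameter values are invented placeholders for illustration.
years = np.arange(0.0, 10000.0, 100.0)
velocity = 5.0                 # groundwater velocity, m/yr (from a flow model)
path_length = 5000.0           # repository-to-river distance, m
arrival = path_length / velocity

# Transport output: relative radionuclide concentration in groundwater vs. time
# (a dispersed pulse centered on the advective arrival time).
conc_gw = np.exp(-0.5 * ((years - arrival) / 300.0) ** 2)

# Dilution in the receiving surface-water body (e.g., a river).
conc_river = conc_gw / 200.0

# Dose model: intake rate times a dose-conversion factor (both placeholders).
intake_l_per_yr, dcf = 730.0, 5.0e-8
dose_rate = conc_river * intake_l_per_yr * dcf
print("peak dose rate occurs at year", years[np.argmax(dose_rate)])
```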
Kang, Jeeun; Yoon, Changhan; Lee, Jaejin; Kye, Sang-Bum; Lee, Yongbae; Chang, Jin Ho; Kim, Gi-Duck; Yoo, Yangmo; Song, Tai-kyong
2016-04-01
In this paper, we present a novel system-on-chip (SOC) solution for a portable ultrasound imaging system (PUS) for point-of-care applications. The PUS-SOC includes all of the signal processing modules (i.e., the transmit and dynamic receive beamformer modules, mid- and back-end processors, and color Doppler processors) as well as an efficient architecture for hardware-based imaging methods (e.g., dynamic delay calculation, multi-beamforming, and coded excitation and compression). The PUS-SOC was fabricated using a UMC 130-nm NAND process and has 16.8 GFLOPS of computing power with a total equivalent gate count of 12.1 million, which is comparable to a Pentium-4 CPU. The size and power consumption of the PUS-SOC are 27×27 mm^2 and 1.2 W, respectively. Based on the PUS-SOC, a prototype hand-held US imaging system was implemented. Phantom experiments demonstrated that the PUS-SOC can provide appropriate image quality for point-of-care applications with a compact PDA size (200×120×45 mm^3) and 3 hours of battery life.
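Of the hardware imaging methods listed, dynamic delay calculation is the easiest to illustrate: for each receive element, the focusing delay is the difference between the element-to-focus path and the on-axis path, divided by the sound speed, re-evaluated for every imaging depth. A small Python sketch follows; the array geometry is invented for illustration and this floating-point version is not the SOC's fixed-point pipeline.

```python
import numpy as np

# Dynamic receive-focusing delays for a linear array (illustrative geometry;
# not the PUS-SOC's actual hardware delay generator).
c = 1540.0                                    # speed of sound in tissue, m/s
pitch = 0.3e-3                                # element pitch, m
n_elem = 64
x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch   # element x-positions

def rx_delays(z_focus):
    """Per-element delay (s) so echoes from depth z_focus align on axis."""
    path = np.sqrt(z_focus**2 + x**2)         # element-to-focus distance
    return (path - z_focus) / c               # relative to the on-axis path

# "Dynamic" focusing re-evaluates the delays for every imaging depth:
for z in (10e-3, 30e-3, 60e-3):
    print(f"z = {z*1e3:4.0f} mm, edge-element delay = {rx_delays(z)[0]*1e9:6.1f} ns")
```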
Simpkin, D J
1989-02-01
A Monte Carlo calculation has been performed to determine the transmission of broad constant-potential x-ray beams through Pb, concrete, gypsum wallboard, steel and plate glass. The EGS4 code system was used with a simple broad-beam geometric model to generate exposure transmission curves for published 70, 100, 120 and 140-kVcp x-ray spectra. These curves are compared to measured three-phase-generated x-ray transmission data in the literature and found to be reasonable. For ease of calculation, the data are fitted to an equation previously shown to describe such curves quite well. These calculated transmission data are then used to create three-phase shielding tables for Pb and concrete, as well as other materials not available in Report No. 49 of the NCRP.
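The abstract does not name the fitting equation; the one conventionally used for broad-beam transmission curves is the three-parameter Archer model, B(x) = [(1 + β/α)·e^(αγx) - β/α]^(-1/γ), where x is the shield thickness. A hedged Python sketch of fitting it follows; the "measurements" are synthetic points generated from assumed parameters, not Simpkin's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def archer(x, alpha, beta, gamma):
    """Archer-model broad-beam transmission vs. shield thickness x."""
    return ((1 + beta / alpha) * np.exp(alpha * gamma * x)
            - beta / alpha) ** (-1.0 / gamma)

# Synthetic "measurements" generated from assumed parameters so the fit is
# well posed; illustrative numbers only, not data from the paper.
x_mm = np.linspace(0.0, 3.0, 13)                   # lead thickness, mm
T_meas = archer(x_mm, 2.5, 15.0, 0.5)

(alpha, beta, gamma), _ = curve_fit(archer, x_mm, T_meas, p0=(2.0, 10.0, 0.4))
print(f"fitted: alpha={alpha:.2f}/mm, beta={beta:.2f}/mm, gamma={gamma:.2f}")
print("transmission through 1.5 mm:", archer(1.5, alpha, beta, gamma))
```

Note that B(0) = 1 by construction, so the fitted curve always passes through unit transmission at zero thickness, which is one reason this form describes shielding data well.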
LDPC coded OFDM over the atmospheric turbulence channel.
Djordjevic, Ivan B; Vasic, Bane; Neifeld, Mark A
2007-05-14
Low-density parity-check (LDPC) coded optical orthogonal frequency division multiplexing (OFDM) is shown to significantly outperform LDPC coded on-off keying (OOK) over the atmospheric turbulence channel in terms of both coding gain and spectral efficiency. In the regime of strong turbulence at a bit-error rate of 10^-5, the coding gain improvement of the LDPC coded single-side-band unclipped-OFDM system with 64 sub-carriers is larger than the coding gain of the LDPC coded OOK system by 20.2 dB for quadrature-phase-shift keying (QPSK) and by 23.4 dB for binary-phase-shift keying (BPSK).
Whiteford, Kelly L.; Oxenham, Andrew J.
2015-01-01
The question of how frequency is coded in the peripheral auditory system remains unresolved. Previous research has suggested that slow rates of frequency modulation (FM) of a low carrier frequency may be coded via phase-locked temporal information in the auditory nerve, whereas FM at higher rates and/or high carrier frequencies may be coded via a rate-place (tonotopic) code. This hypothesis was tested in a cohort of 100 young normal-hearing listeners by comparing individual sensitivity to slow-rate (1-Hz) and fast-rate (20-Hz) FM at a carrier frequency of 500 Hz with independent measures of phase-locking (using dynamic interaural time difference, ITD, discrimination), level coding (using amplitude modulation, AM, detection), and frequency selectivity (using forward-masking patterns). All FM and AM thresholds were highly correlated with each other. However, no evidence was obtained for stronger correlations between measures thought to reflect phase-locking (e.g., slow-rate FM and ITD sensitivity), or between measures thought to reflect tonotopic coding (fast-rate FM and forward-masking patterns). The results suggest that either psychoacoustic performance in young normal-hearing listeners is not limited by peripheral coding, or that similar peripheral mechanisms limit both high- and low-rate FM coding. PMID:26627783
Cryptographic robustness of a quantum cryptography system using phase-time coding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Molotkov, S. N.
2008-01-15
A cryptographic analysis is presented of a new quantum key distribution protocol using phase-time coding. An upper bound is obtained for the error rate that guarantees secure key distribution. It is shown that the maximum tolerable error rate for this protocol depends on the counting rate in the control time slot. When no counts are detected in the control time slot, the protocol guarantees secure key distribution if the bit error rate in the sifted key does not exceed 50%. This protocol partially discriminates between errors due to system defects (e.g., imbalance of a fiber-optic interferometer) and eavesdropping. In the absence of eavesdropping, the counts detected in the control time slot are not caused by interferometer imbalance, which reduces the requirements for interferometer stability.
What do we do? Practices and learning strategies of medical education leaders.
Lieff, Susan; Albert, Mathieu
2012-01-01
Continuous changes in undergraduate and postgraduate medical education require faculty to assume a variety of new leadership roles. While numerous faculty development programmes have been developed, there is little evidence about the specific practices of medical education leaders or their learning strategies to help inform their design. This study aimed to explore what medical education leaders actually do, their learning strategies, and their recommendations for faculty development. A total of 16 medical education leaders from a variety of contexts within the faculty of medicine of a large North American medical school participated in semi-structured interviews to explore the nature of their work and the learning strategies they employ. Using thematic analysis, interview transcripts were coded inductively and then clustered into emergent themes. Findings clustered into four key themes of practice: (1) intrapersonal (e.g., self-awareness), (2) interpersonal (e.g., fostering informal networks), (3) organizational (e.g., creating a shared vision) and (4) systemic (e.g., strategic navigation). Learning strategies employed included learning from experience and example, reflective practice, and strategic mentoring or advanced training. Our findings illuminate a four-domain framework for understanding medical education leaders' practices and their learning preferences. While some of these findings are not unknown in the general leadership literature, our understanding of their application in medical education is unique. These practices and preferences have potential utility for conceptualizing a coherent and relevant approach to the design of faculty development strategies for medical education leadership.
Asymmetric soft-error resistant memory
NASA Technical Reports Server (NTRS)
Buehler, Martin G. (Inventor); Perlman, Marvin (Inventor)
1991-01-01
A memory system is provided, of the type that includes an error-correcting circuit that detects and corrects errors, and that more efficiently utilizes the capacity of a memory formed of groups of binary cells whose states can be inadvertently switched by ionizing radiation. Each memory cell has an asymmetric geometry, so that ionizing radiation causes a significantly greater probability of errors in one state than in the opposite state (e.g., an erroneous switch from '1' to '0' is far more likely than a switch from '0' to '1'). An asymmetric error-correcting coding circuit can be used with the asymmetric memory cells, which requires fewer bits than an efficient symmetric error-correcting code.
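One classical code in the same asymmetric spirit (given purely as an illustration; the patent's specific circuit is not reproduced here) is the Berger code, which appends the count of 0-bits in the data word as a check symbol. Any combination of unidirectional upsets, such as the radiation-induced 1-to-0 flips that dominate in an asymmetric cell, is then detectable. A Python sketch:

```python
# Berger-code sketch: an asymmetric-error-detecting code whose check symbol
# counts the 0-bits in the data word. All unidirectional (e.g., '1'->'0')
# errors are detectable. Illustrative only; not the patented circuit above.
def berger_encode(data_bits):
    """Return (data, check) where check is the zero-count of data, in binary."""
    zeros = data_bits.count(0)
    k = max(1, len(data_bits).bit_length())      # width of the check field
    check = [int(b) for b in format(zeros, f"0{k}b")]
    return data_bits, check

def berger_valid(data_bits, check):
    return data_bits.count(0) == int("".join(map(str, check)), 2)

data, check = berger_encode([1, 0, 1, 1, 0, 1, 1, 1])
assert berger_valid(data, check)
data[0] = 0                                      # a 1 -> 0 upset in the data
print("error detected:", not berger_valid(data, check))   # True
```

A 1-to-0 upset in the data raises its zero-count while an upset in the check field can only lower the stored count, so the two can never re-agree; that one-sidedness is what lets asymmetric schemes spend fewer check bits than symmetric codes.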
Baucom, Brian R W; Leo, Karena; Adamo, Colin; Georgiou, Panayiotis; Baucom, Katherine J W
2017-12-01
Observational behavioral coding methods are widely used for the study of relational phenomena. There are numerous guidelines for the development and implementation of these methods that include principles for creating new and adapting existing coding systems as well as principles for creating coding teams. While these principles have been successfully implemented in research on relational phenomena, the ever expanding array of phenomena being investigated with observational methods calls for a similar expansion of these principles. Specifically, guidelines are needed for decisions that arise in current areas of emphasis in couple research including observational investigation of related outcomes (e.g., relationship distress and psychological symptoms), the study of change in behavior over time, and the study of group similarities and differences in the enactment and perception of behavior. This article describes conceptual and statistical considerations involved in these 3 areas of research and presents principle- and empirically based rationale for design decisions related to these issues. A unifying principle underlying these guidelines is the need for careful consideration of fit between theory, research questions, selection of coding systems, and creation of coding teams. Implications of (mis)fit for the advancement of theory are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Sollie, Annet; Sijmons, Rolf H; Helsper, Charles; Numans, Mattijs E
2017-03-01
To assess the quality and reusability of coded cancer diagnoses in routine primary care data, and to identify factors that influence data quality and areas for improvement, a dynamic cohort study was performed in a Dutch network database containing 250,000 anonymized electronic medical records (EMRs) from 52 general practices. Coded data from 2000 to 2011 for the three most common cancer types (breast, colon and prostate cancer) were compared to the Netherlands Cancer Registry. Data quality is expressed in standardized incidence ratios (SIRs): the ratio between the number of coded cases observed in the primary care network database and the expected number of cases based on the Netherlands Cancer Registry. Ratios were multiplied by 100% for readability. The overall SIR was 91.5% (95% CI 88.5-94.5) and showed improvement over the years. SIRs differ between cancer types: from 71.5% for colon cancer in males to 103.9% for breast cancer. There are differences in data quality (SIRs 76.2%-99.7%) depending on the EMR system used, with SIRs up to 232.9% for breast cancer. Frequently observed errors in routine healthcare data can be classified as: lack of integrity checks, inaccurate use and/or lack of codes, and lack of EMR system functionality. Re-users of coded routine primary care EMR data should be aware that 30% of cancer cases can be missed. Up to 130% of cancer cases found in the EMR data can be false positive. The type of EMR system and the type of cancer influence the quality of the coded diagnosis registry. While data quality can be improved (e.g., through improving system design and by training EMR system users), re-use should be handled only by appropriately trained experts. Copyright © 2016. Published by Elsevier B.V.
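The quality metric is straightforward to reproduce: the SIR is the observed count of coded cases divided by the registry-expected count, times 100%. A small Python sketch with a normal-approximation Poisson confidence interval follows; the counts are invented for illustration, and the paper's exact CI method is not stated in the abstract.

```python
import math

def sir(observed, expected):
    """Standardized incidence ratio (as %) with an approximate 95% CI,
    treating the observed count as Poisson (normal approximation)."""
    ratio = observed / expected
    se = math.sqrt(observed) / expected           # SE of the ratio
    return 100 * ratio, 100 * (ratio - 1.96 * se), 100 * (ratio + 1.96 * se)

# e.g., 3500 coded cancer cases observed in the EMR network versus 3825
# expected from the national cancer registry (invented counts):
point, lo, hi = sir(3500, 3825)
print(f"SIR = {point:.1f}% (95% CI {lo:.1f}-{hi:.1f}%)")
```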
Proceedings of the 2004 NASA/ONR Circulation Control Workshop, Part 1
NASA Technical Reports Server (NTRS)
Jones, Gregory S. (Editor); Joslin, Ronald D. (Editor)
2005-01-01
As technological advances influence the efficiency and effectiveness of aerodynamic and hydrodynamic applications, designs and operations, this workshop was intended to address the technologies, systems, challenges and successes specific to Coanda driven circulation control in aerodynamics and hydrodynamics. A major goal of this workshop was to determine the 2004 state-of-the-art in circulation control and understand the roadblocks to its application. The workshop addressed applications, CFD, and experiments related to circulation control, emphasizing fundamental physics, systems analysis, and applied research. The workshop consisted of 34 single session oral presentations and written papers that focused on Naval hydrodynamic vehicles (e.g. submarines), Fixed Wing Aviation, V/STOL platforms, propulsion systems (including wind turbine systems), ground vehicles (automotive and trucks) and miscellaneous applications (e.g., poultry exhaust systems and vacuum systems). Several advanced CFD codes were benchmarked using a two-dimensional NCCR circulation control airfoil. The CFD efforts highlighted inconsistencies in turbulence modeling, separation and performance predictions.
LOINC, a universal standard for identifying laboratory observations: a 5-year update.
McDonald, Clement J; Huff, Stanley M; Suico, Jeffrey G; Hill, Gilbert; Leavelle, Dennis; Aller, Raymond; Forrey, Arden; Mercer, Kathy; DeMoor, Georges; Hook, John; Williams, Warren; Case, James; Maloney, Pat
2003-04-01
The Logical Observation Identifier Names and Codes (LOINC) database provides a universal code system for reporting laboratory and other clinical observations. Its purpose is to identify observations in electronic messages such as Health Level Seven (HL7) observation messages, so that when hospitals, health maintenance organizations, pharmaceutical manufacturers, researchers, and public health departments receive such messages from multiple sources, they can automatically file the results in the right slots of their medical records, research, and/or public health systems. For each observation, the database includes a code (of which 25 000 are laboratory test observations), a long formal name, a "short" 30-character name, and synonyms. The database comes with a mapping program called the Regenstrief LOINC Mapping Assistant (RELMA™) to assist the mapping of local test codes to LOINC codes and to facilitate browsing of the LOINC results. Both LOINC and RELMA are available at no cost from http://www.regenstrief.org/loinc/. The LOINC medical database carries records for >30 000 different observations. LOINC codes are being used by large reference laboratories and federal agencies, e.g., the CDC and the Department of Veterans Affairs, and are part of the Health Insurance Portability and Accountability Act (HIPAA) attachment proposal. Internationally, they have been adopted in Switzerland, Hong Kong, Australia, and Canada, and by the German national standards organization, the Deutsches Institut für Normung. Laboratories should include LOINC codes in their outbound HL7 messages so that clinical and research clients can easily integrate these results into their clinical and research repositories. Laboratories should also encourage instrument vendors to deliver LOINC codes in their instrument outputs and demand LOINC codes in HL7 messages they get from reference laboratories, to avoid the need to lump so many referral tests under the "send out lab" code.
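Concretely, "including LOINC codes in outbound HL7 messages" means populating the OBX-3 observation identifier with the LOINC code, a display name, and the "LN" coding-system label. A minimal Python sketch of building such an HL7 v2 OBX segment follows; the result value and units are fabricated sample data (2345-7 is, to the best of our knowledge, the LOINC code for glucose in serum/plasma).

```python
# Minimal HL7 v2 OBX segment carrying a LOINC-coded observation identifier.
# The observation value and units are fabricated sample data.
def obx_segment(set_id, loinc_code, display_name, value, units):
    fields = [
        "OBX", str(set_id), "NM",                    # NM = numeric result
        f"{loinc_code}^{display_name}^LN",           # OBX-3: LOINC-coded identifier
        "", str(value), units, "", "", "", "", "F",  # OBX-11 'F' = final result
    ]
    return "|".join(fields)

print(obx_segment(1, "2345-7", "Glucose SerPl-mCnc", 95, "mg/dL"))
# OBX|1|NM|2345-7^Glucose SerPl-mCnc^LN||95|mg/dL|||||F
```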
48 CFR 501.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Publication and code arrangement. 501.105-1 Section 501.105-1 Federal Acquisition Regulations System GENERAL SERVICES ADMINISTRATION GENERAL GENERAL SERVICES ADMINISTRATION ACQUISITION REGULATION SYSTEM Purpose, Authority, Issuance...
Reporting of Sepsis Cases for Performance Measurement Versus for Reimbursement in New York State.
Prescott, Hallie C; Cope, Tara M; Gesten, Foster C; Ledneva, Tatiana A; Friedrich, Marcus E; Iwashyna, Theodore J; Osborn, Tiffany M; Seymour, Christopher W; Levy, Mitchell M
2018-05-01
Under "Rory's Regulations," New York State Article 28 acute care hospitals were mandated to implement sepsis protocols and report patient-level data. This study sought to determine how well cases reported under state mandate align with discharge records in a statewide administrative database. Observational cohort study. First 27 months of mandated sepsis reporting (April 1, 2014, to June 30, 2016). Hospitalizations with sepsis at New York State Article 28 acute care hospitals. Sepsis regulations with mandated reporting. We compared cases reported to the New York State Department of Health Sepsis Clinical Database with discharge records in the Statewide Planning and Research Cooperative System database. We classified discharges as 1) "coded sepsis discharges"-a diagnosis code for severe sepsis or septic shock and 2) "possible sepsis discharges," using Dombrovskiy and Angus criteria. Of 111,816 sepsis cases reported to the New York State Department of Health Sepsis Clinical Database, 105,722 (94.5%) were matched to discharge records in Statewide Planning and Research Cooperative System. The percentage of coded sepsis discharges reported increased from 67.5% in the first quarter to 81.3% in the final quarter of the study period (mean, 77.7%). Accounting for unmatched cases, as many as 82.7% of coded sepsis discharges were potentially reported, whereas at least 17.3% were unreported. Compared with unreported discharges, reported discharges had higher rates of acute organ dysfunction (e.g., cardiovascular dysfunction 63.0% vs 51.8%; p < 0.001) and higher in-hospital mortality (30.2% vs 26.1%; p < 0.001). Hospital characteristics (e.g., number of beds, teaching status, volume of sepsis cases) were similar between hospitals with a higher versus lower percent of discharges reported, p values greater than 0.05 for all. Hospitals' percent of discharges reported was not correlated with risk-adjusted mortality of their submitted cases (Pearson correlation coefficient 0.11; p = 0.17). Approximately four of five discharges with a diagnosis code of severe sepsis or septic shock in the Statewide Planning and Research Cooperative System data were reported in the New York State Department of Health Sepsis Clinical Database. Incomplete reporting appears to be driven more by underrecognition than attempts to game the system, with minimal bias to risk-adjusted hospital performance measurement.
Characterization of gamma rays existing in the NMIJ standard neutron field.
Harano, H; Matsumoto, T; Ito, Y; Uritani, A; Kudo, K
2004-01-01
Our laboratory provides national standards on fast neutron fluence. Neutron fields are always accompanied by gamma rays produced in neutron sources and surroundings. We have characterised these gamma rays in the 5.0 MeV standard neutron field. Gamma ray measurement was performed using an NE213 liquid scintillator. Pulse shape discrimination was incorporated to separate the events induced by gamma rays from those by neutrons. The measured gamma ray spectra were unfolded with the HEPRO program package to obtain the spectral fluences using the response matrix prepared with the EGS4 code. Corrections were made for the gamma rays produced by neutrons in the detector assembly using the MCNP4C code. The effective dose equivalents were estimated to be of the order of 25 microSv at a neutron fluence of 10^7 neutrons cm^-2.
THERMINATOR 2: THERMal heavy IoN generATOR 2
NASA Astrophysics Data System (ADS)
Chojnacki, Mikołaj; Kisiel, Adam; Florkowski, Wojciech; Broniowski, Wojciech
2012-03-01
We present an extended version of THERMINATOR, a Monte Carlo event generator dedicated to studies of the statistical production of particles in relativistic heavy-ion collisions. The package is written in C++ and uses the CERN ROOT data-analysis environment. The largely increased functionality of the code contains the following main features: 1) The possibility of input of any shape of the freeze-out hypersurface and the expansion velocity field, including the 3+1-dimensional profiles, in particular those generated externally with various hydrodynamic codes. 2) The hypersurfaces may have variable thermal parameters, which allow studies departing significantly from the mid-rapidity region, where the baryon chemical potential becomes large. 3) We include a library of standard sets of hypersurfaces and velocity profiles describing the RHIC Au + Au data at √s = 200 GeV for various centralities, as well as those anticipated for the LHC Pb + Pb collisions at √s = 5.5 TeV. 4) A separate code, FEMTO-THERMINATOR, is provided to carry out the analysis of the pion-pion femtoscopic correlations, which are an important source of information concerning the size and expansion of the system. 5) We also include several useful scripts that carry out auxiliary tasks, such as obtaining an estimate of the number of elastic collisions after the freeze-out, counting particles flowing back into the fireball and violating causality (typically very few), or visualizing various results: the particle p_T-spectra, the elliptic flow coefficients, and the HBT correlation radii.
Program summary
Program title: THERMINATOR 2
Catalogue identifier: ADXL_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXL_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 423 444
No. of bytes in distributed program, including test data, etc.: 2 854 602
Distribution format: tar.gz
Programming language: C++ with the CERN ROOT libraries, BASH shell
Computer: Any with a C++ compiler and the CERN ROOT environment, ver. 5.26 or later; tested with Intel Core2 Duo CPU E8400 @ 3 GHz, 4 GB RAM
Operating system: Linux Ubuntu 10.10 x64 (gcc 4.4.5), ROOT 5.26; Linux Ubuntu 11.04 x64 (gcc Ubuntu/Linaro 4.5.2-8ubuntu4), ROOT 5.30/00 (compiled from source); Linux CentOS 5.2 (gcc Red Hat 4.1.2-42), ROOT 5.30/00 (compiled from source); Mac OS X 10.6.8 (i686-apple-darwin10-g++-4.2.1), ROOT 5.30/00 (for Mac OS X 10.6 x86-64 with gcc 4.2.1); cygwin-1.7.9-1 (gcc gcc4-g++-4.3.4-4), ROOT 5.30/00 (for cygwin gcc 4.3)
RAM: 30 MB (therm2_events), 150 MB (therm2_femto)
Classification: 11.2
Catalogue identifier of previous version: ADXL_v1_0
Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 669
External routines: CERN ROOT (http://root.cern.ch/drupal/)
Does the new version supersede the previous version?: Yes
Nature of problem: Particle production via statistical hadronization in relativistic heavy-ion collisions.
Solution method: Monte Carlo simulation, analyzed with ROOT.
Reasons for new version: The increased functionality of the code contains the following important features. The input of any shape of the freeze-out hypersurface and the expansion velocity field, including the 3+1-dimensional profiles, in particular those generated externally with the various popular hydrodynamic codes.
The hypersurfaces may have variable thermal parameters, which allows for studies departing significantly from the mid-rapidity region. We include a library of standard sets of hypersurfaces and velocity profiles describing the RHIC Au + Au and the LHC Pb + Pb data. A separate code, FEMTO-THERMINATOR, is provided to carry out the analysis of femtoscopic correlations.
Summary of revisions: THERMINATOR 2 incorporates major revisions to encompass the enhanced functionality.
Classes: The Integrator class has been expanded and a new subgroup of classes defined.
Model and abstract class: These classes are responsible for the physical models of the freeze-out process. The functionality and readability of the code have been substantially increased by implementing each freeze-out model in a different class. The Hypersurface class was added to handle the input from hydrodynamic codes. The hydro input is passed to the program as a lattice of the freeze-out hypersurface; that information is stored in the .xml files.
Input: THERMINATOR 2 programs are now controlled by *.ini files. The program parameters and the freeze-out model parameters are kept in separate .ini files.
Output: The event files generated by the therm2_events program are not backward compatible with the previous version. The event*.root file structure was expanded with two new TTree structures. From the particle entry it is possible to back-trace the whole cascade. Event text output is now optional. The ROOT macros produce *.eps figures with physics results, e.g., the pT-spectra, the elliptic-flow coefficient, rapidity distributions, etc. The THERMINATOR HBT package creates the ROOT files femto*.root (therm2_femto) and hbtfit*.root (therm2_hbtfit).
Directory structure: The directory structure has been reorganized. Source code resides in the build directory. The freeze-out model input files, event files, and ROOT macros are stored separately. The THERMINATOR 2 system, after installation, is able to run on a cluster.
Scripts: The package contains a few helpful BASH scripts; when running on a cluster, for example, the whole system can be executed via a single script.
Additional comments: Typical data file sizes (default configuration): 45 MB/500 events; 35 MB/correlation file (one k bin); 45 kB/fit file (projections and fits).
Running time: Default configuration at 3 GHz: primordial multiplicities, 70 min (calculated only once per case); 8 min/500 events; 10 min to draw all figures; 25 min/one k bin in the HBT analysis with 5000 events.
1998-11-01
... are already operational in the radar domain, e.g., in airborne radars. NATO fighter aircraft are equipped with transponder systems answering on ... Formatting and calibration of the data: mean RCS (radar cross section) for a frequency range (the bandwidth of the code used) and a sector; this module extracts the ... (cooperative) Papers presented at the Symposium of the RTO Systems Concepts and Integration Panel (SCI) held in Mannheim, Germany, 22-24 April 1998.
NASA Astrophysics Data System (ADS)
Han, B.; Li, Y.
2016-12-01
We present a three-dimensional (3D) forward and inverse modeling code for marine controlled-source electromagnetic (CSEM) surveys in anisotropic media. The forward solution is based on a primary/secondary field approach, in which secondary fields are solved using a staggered finite-volume (FV) method and primary fields are solved for 1D isotropic background models analytically. It is shown that it is rather straightforward to extend the isotropic 3D FV algorithm to a triaxial anisotropic one, although additional coefficients are required to account for the full tensor conductivity. To solve the linear system resulting from the FV discretization of Maxwell's equations, both iterative Krylov solvers (e.g. BiCGSTAB) and direct solvers (e.g. MUMPS) have been implemented, making the code flexible for different computing platforms and different problems. For iterative solutions, the linear system in terms of electromagnetic potentials (A-Phi) is used to precondition the original linear system, transforming the discretized curl-curl equations into discretized Laplace-like equations, so that much more favorable numerical properties are obtained. Numerical experiments suggest that this A-Phi preconditioner can dramatically improve the convergence rate of an iterative solver, and high accuracy can be achieved without divergence correction even at low frequencies. To efficiently calculate the sensitivities, i.e. the derivatives of CSEM data with respect to tensor conductivity, the adjoint method is employed. For inverse modeling, triaxial anisotropy is taken into account. Since the number of model parameters to be resolved for triaxial anisotropic media is two or three times that for isotropic media, the data-space version of the Gauss-Newton (GN) minimization method is preferred due to its lower computational cost compared with the traditional model-space GN method. We demonstrate the effectiveness of the code with synthetic examples.
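The solver strategy can be illustrated with SciPy: BiCGSTAB applied to a sparse system with a preconditioner supplied as a LinearOperator. The actual A-Phi preconditioner is problem-specific and is not reproduced here; an incomplete-LU factorization stands in purely to show the mechanics, and a generic 2-D discrete Laplacian stands in for the FV-discretized Maxwell system.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Generic sparse test system: a 2-D discrete Laplacian stands in for the
# FV-discretized Maxwell system; ILU stands in for the A-Phi preconditioner.
n = 50
A = sp.kronsum(sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n)),
               sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n))).tocsc()
b = np.random.default_rng(0).standard_normal(n * n)

ilu = spla.spilu(A, drop_tol=1e-4)
M = spla.LinearOperator(A.shape, ilu.solve)       # preconditioner as an operator

iters = 0
def count(xk):                                    # count BiCGSTAB iterations
    global iters
    iters += 1

x, info = spla.bicgstab(A, b, M=M, callback=count)
print("converged" if info == 0 else f"info={info}", "in", iters, "iterations")
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Running the same call without M= shows the point of the paper's preconditioning step: the unpreconditioned iteration count grows sharply with problem size, while the preconditioned solve stays nearly flat.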
FPGA-based LDPC-coded APSK for optical communication systems.
Zou, Ding; Lin, Changyu; Djordjevic, Ivan B
2017-02-20
In this paper, with the aid of mutual information and generalized mutual information (GMI) capacity analyses, it is shown that geometrically shaped APSK that mimics an optimal Gaussian distribution with equiprobable signaling, together with the corresponding Gray-mapping rules, can approach the Shannon limit more closely than conventional quadrature amplitude modulation (QAM) over a certain range of FEC overhead, for both 16-APSK and 64-APSK. Field-programmable gate array (FPGA) based LDPC-coded APSK emulation is conducted on block-interleaver-based and bit-interleaver-based systems; the results verify a significant improvement in hardware-efficient bit-interleaver-based systems. In bit-interleaver-based emulation, the LDPC-coded 64-APSK outperforms 64-QAM, in terms of symbol signal-to-noise ratio (SNR), by 0.1 dB, 0.2 dB, and 0.3 dB at spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz, respectively. It is found by emulation that LDPC-coded 64-APSK for spectral efficiencies of 4.8, 4.5, and 4.2 b/s/Hz is 1.6 dB, 1.7 dB, and 2.2 dB away from the GMI capacity.
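For reference, a ring-structured APSK constellation is simple to generate: 16-APSK places 4 points on an inner ring and 12 on an outer ring whose radius ratio is a design parameter. The sketch below uses γ = 2.57, one of the DVB-S2 design values, as an assumption; it does not reproduce the paper's specific geometric shaping, which is optimized against a Gaussian distribution.

```python
import numpy as np

def apsk16(gamma=2.57):
    """16-APSK: 4 inner-ring + 12 outer-ring points, normalized to unit
    average energy. gamma is the outer/inner radius ratio (a DVB-S2-style
    value used here as an assumption, not the paper's shaped design)."""
    inner = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
    outer = gamma * np.exp(1j * (np.pi / 12 + np.pi / 6 * np.arange(12)))
    pts = np.concatenate([inner, outer])
    return pts / np.sqrt(np.mean(np.abs(pts) ** 2))   # normalize Es = 1

c = apsk16()
print("average energy:", np.mean(np.abs(c) ** 2))     # -> 1.0
print("peak-to-average power ratio:", np.max(np.abs(c) ** 2))
```

The lower peak-to-average power ratio relative to 16-QAM is one reason APSK is attractive in nonlinearity-limited optical and satellite links.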
48 CFR 401.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Publication and code arrangement. 401.105-1 Section 401.105-1 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE GENERAL AGRICULTURE ACQUISITION REGULATION SYSTEM Purpose, Authority, Issuance 401.105-1 Publication and...
48 CFR 401.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 4 2014-10-01 2014-10-01 false Publication and code arrangement. 401.105-1 Section 401.105-1 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE GENERAL AGRICULTURE ACQUISITION REGULATION SYSTEM Purpose, Authority, Issuance 401.105-1 Publication and...
48 CFR 401.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 48 Federal Acquisition Regulations System 4 2013-10-01 2013-10-01 false Publication and code arrangement. 401.105-1 Section 401.105-1 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE GENERAL AGRICULTURE ACQUISITION REGULATION SYSTEM Purpose, Authority, Issuance 401.105-1 Publication and...
48 CFR 401.105-1 - Publication and code arrangement.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 48 Federal Acquisition Regulations System 4 2012-10-01 2012-10-01 false Publication and code arrangement. 401.105-1 Section 401.105-1 Federal Acquisition Regulations System DEPARTMENT OF AGRICULTURE GENERAL AGRICULTURE ACQUISITION REGULATION SYSTEM Purpose, Authority, Issuance 401.105-1 Publication and...
An overview of platforms for cloud based development.
Fylaktopoulos, G; Goumas, G; Skolarikis, M; Sotiropoulos, A; Maglogiannis, I
2016-01-01
This paper provides an overview of state-of-the-art technologies for software development in cloud environments. The surveyed systems cover the whole spectrum of cloud-based development, including integrated programming environments, code repositories, software modeling, composition and documentation tools, and application management and orchestration. In this work we evaluate the existing cloud development ecosystem based on a wide range of characteristics, such as applicability (e.g. programming and database technologies supported), productivity enhancement (e.g. editor capabilities, debugging tools), support for collaboration (e.g. repository functionality, version control) and post-development application hosting, and we compare the surveyed systems. The conducted survey shows that software engineering in the cloud era has made its initial steps, showing potential to provide concrete implementation and execution environments for cloud-based applications. However, a number of important challenges need to be addressed for this approach to be viable. These challenges are discussed in the article, and we conclude that although several steps have been made, a compact and reliable solution does not yet exist.
Verbeke, J. M.; Petit, O.
2016-06-01
From nuclear safeguards to homeland security applications, the need for better modeling of nuclear interactions has grown over the past decades. Current Monte Carlo radiation transport codes compute average quantities with great accuracy and performance; however, performance and averaging come at the price of limited interaction-by-interaction modeling. These codes often lack the capability of modeling interactions exactly: for a given collision, energy is not conserved, energies of emitted particles are uncorrelated, and multiplicities of prompt fission neutrons and photons are uncorrelated. Many modern applications require more exclusive quantities than averages, such as the fluctuations in certain observables (e.g., the neutron multiplicity) and correlations between neutrons and photons. In an effort to meet this need, the radiation transport Monte Carlo code TRIPOLI-4® was modified to provide a specific mode that models nuclear interactions in a fully analog way, replicating as much as possible the underlying physical process. Furthermore, the computational model FREYA (Fission Reaction Event Yield Algorithm) was coupled with TRIPOLI-4 to model complete fission events. As a result, FREYA automatically includes fluctuations as well as correlations resulting from conservation of energy and momentum.
Music 4C, a multi-voiced synthesis program with instruments defined in C
NASA Astrophysics Data System (ADS)
Beauchamp, James W.
2003-04-01
Music 4C is a program which runs under Unix (including Linux) and provides a means for the synthesis of arbitrary signals as defined by C code. The program is a loose translation of an earlier program, Music 4BF [H. S. Howe, Jr., Electronic Music Synthesis (Norton, 1975)]. A set of instrument definitions is driven by a numerical score which consists of a series of "events." Each event gives an instrument name, start time and duration, and a number of parameters (e.g., pitch) which describe the event. Each instrument definition consists of event parameters, performance variables, initializations, and a synthesis algorithmic code. Thus, the synthetic signal, no matter how complex, is precisely defined. Moreover, the resulting sounds can be overlaid in any arbitrary pattern. The program serves as a mixer of algorithmically produced sounds or recorded sounds taken from sample files or synthesized from spectrum files. A score file can be entered by hand, generated from a program, translated from a MIDI file, or generated from an alpha-numeric score using an auxiliary program, Notepro. Output sample files are in wav, snd, or aiff format. The program is provided as C source code for download.
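The event-driven structure is easy to mimic: each score event names an instrument, a start time, a duration, and parameters such as pitch, and the renderer sums every instrument's output into one buffer. A toy Python sketch in that spirit follows (a sine "instrument" and a three-event score; this is not Music 4C's actual score syntax).

```python
import numpy as np, wave

SR = 44100  # sample rate, Hz

def sine_instrument(dur, freq, amp=0.2):
    """A toy 'instrument': a sine tone with a linear fade-out envelope."""
    t = np.arange(int(dur * SR)) / SR
    return amp * np.sin(2 * np.pi * freq * t) * np.linspace(1, 0, t.size)

# A toy score: (instrument, start time s, duration s, pitch Hz) per event.
score = [(sine_instrument, 0.0, 1.0, 261.63),
         (sine_instrument, 0.5, 1.0, 329.63),
         (sine_instrument, 1.0, 1.5, 392.00)]

out = np.zeros(int(3.0 * SR))
for instr, start, dur, freq in score:             # overlay events, Music-4-style
    s = int(start * SR)
    sig = instr(dur, freq)
    out[s:s + sig.size] += sig

with wave.open("demo.wav", "wb") as f:            # write a 16-bit mono wav
    f.setnchannels(1); f.setsampwidth(2); f.setframerate(SR)
    f.writeframes((np.clip(out, -1, 1) * 32767).astype(np.int16).tobytes())
```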
EvoluCode: Evolutionary Barcodes as a Unifying Framework for Multilevel Evolutionary Data.
Linard, Benjamin; Nguyen, Ngoc Hoan; Prosdocimi, Francisco; Poch, Olivier; Thompson, Julie D
2012-01-01
Evolutionary systems biology aims to uncover the general trends and principles governing the evolution of biological networks. An essential part of this process is the reconstruction and analysis of the evolutionary histories of these complex, dynamic networks. Unfortunately, the methodologies for representing and exploiting such complex evolutionary histories in large-scale studies are currently limited. Here, we propose a new formalism, called EvoluCode (Evolutionary barCode), which allows the integration of different evolutionary parameters (e.g., sequence conservation, orthology, synteny, …) in a unifying format and facilitates the multilevel analysis and visualization of complex evolutionary histories at the genome scale. The advantages of the approach are demonstrated by constructing barcodes representing the evolution of the complete human proteome. Two large-scale studies are then described: (i) the mapping and visualization of the barcodes on the human chromosomes and (ii) automatic clustering of the barcodes to highlight protein subsets sharing similar evolutionary histories and their functional analysis. The methodologies developed here open the way to the efficient application of other data mining and knowledge extraction techniques in evolutionary systems biology studies. A database containing all EvoluCode data is available at: http://lbgi.igbmc.fr/barcodes.
1999-01-01
Some means currently under investigation include domain-specific languages which are easy to check (e.g., PLAN), proof-carrying code [NL96, Nec97] ... domain-specific language coupled to an extension system with heavyweight checks. In this way, the frequent (per-packet) dynamic checks are inexpensive ... to CISC architectures remains problematic. Typed assembly language [MWCG98] propagates type-safety information to the assembly-language level, so
2017-02-01
... scale blade servers (Dell PowerEdge) [20]. It must be recognized, however, that the findings are distributed over this collection of architectures, not ... current operating system designs run into millions of lines of code. Moreover, they compound the opportunity for compromise by granting device drivers ... properties (e.g., IP and MAC addresses) so as to invalidate an adversary's surveillance data. The current running and bootstrapping instances of the micro ...
NASA Astrophysics Data System (ADS)
Friesdorf, Florian; Pangercic, Dejan; Bubb, Heiner; Beetz, Michael
In mac, an ergonomic dialog-system and algorithms will be developed that enable human experts and companions to be integrated into the knowledge-gathering and decision-making processes of highly complex cognitive systems (e.g. the Assistive Household, as described further in the paper). To this end, we propose to join algorithms and methodologies from Ergonomics and Artificial Intelligence that: a) make cognitive systems more congenial for non-expert humans, b) facilitate their comprehension by utilizing a high-level expandable control code for human experts, and c) augment the representation of such cognitive systems into a “deep representation” obtained through interaction with human companions.
Climate tools in mainstream Linux distributions
NASA Astrophysics Data System (ADS)
McKinstry, Alastair
2015-04-01
Debian/meteorology is a project to integrate climate tools and analysis software into the mainstream Debian/Ubuntu Linux distributions. This work describes lessons learnt and recommends practices for scientific software to be adopted and maintained in OS distributions. In addition to standard analysis tools (cdo, grads, ferret, metview, ncl, etc.), software used by the Earth System Grid Federation was chosen for integration, to enable ESGF portals to be built on this base; however, exposing scientific codes via web APIs exposes security weaknesses that could normally be ignored. How tools are hardened, and what changes are required to handle security upgrades, are described. Second, integrating libraries and components (e.g. Python modules) requires planning by their authors: it is not sufficient to assume users can upgrade their code when you make incompatible changes. Here, practices are recommended to enable upgrades and co-installability of C, C++, Fortran and Python codes. Finally, software packages such as NetCDF and HDF5 can be built in multiple configurations. Tools may then expect incompatible versions of these libraries (e.g. serial and parallel) to be simultaneously available; how this was solved in Debian using "pkg-config" and shared library interfaces is described, and best practices for software writers to enable this are summarised.
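As a small illustration of the pkg-config approach mentioned above, the sketch below queries compile and link flags for alternate library builds. The module names `hdf5-serial` and `hdf5-openmpi` are assumptions for illustration; the .pc names actually shipped by a distribution may differ:

```python
import subprocess

# hypothetical illustration: serial and MPI HDF5 builds distinguished by
# separate pkg-config module names, so tools can pick one explicitly
def flags(pc_name):
    cflags = subprocess.run(["pkg-config", "--cflags", pc_name],
                            capture_output=True, text=True, check=True)
    libs = subprocess.run(["pkg-config", "--libs", pc_name],
                          capture_output=True, text=True, check=True)
    return cflags.stdout.split() + libs.stdout.split()

# e.g. ['-I/usr/include/hdf5/serial', ..., '-lhdf5'] on a system that
# installs such a module; raises CalledProcessError if the name is unknown
print(flags("hdf5-serial"))
```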
HELIOS: A new open-source radiative transfer code
NASA Astrophysics Data System (ADS)
Malik, Matej; Grosheintz, Luc; Grimm, Simon Lukas; Mendonça, João; Kitzmann, Daniel; Heng, Kevin
2015-12-01
I present the new open-source code HELIOS, developed to accurately describe radiative transfer in a wide variety of irradiated atmospheres. We employ a one-dimensional multi-wavelength two-stream approach with scattering. Written in CUDA C++, HELIOS exploits the GPU's potential for massive parallelization and is able to compute the TP-profile of an atmosphere in radiative equilibrium and the subsequent emission spectrum in a few minutes on a single computer (for 60 layers and 1000 wavelength bins). The required molecular opacities are obtained with the recently published code HELIOS-K [1], which calculates the line shapes from an input line list and resamples the numerous line-by-line data into a manageable k-distribution format. Based on simple equilibrium chemistry theory [2] we combine the k-distribution functions of the molecules H2O, CO2, CO & CH4 to generate a k-table, which we then employ in HELIOS. I present our results for the following: (i) various numerical tests, e.g. isothermal vs. non-isothermal treatment of layers; (ii) comparison of iteratively determined TP-profiles with their analytical parametric prescriptions [3] and of the corresponding spectra; (iii) benchmarks of TP-profiles & spectra for various elemental abundances; (iv) benchmarks of averaged TP-profiles & spectra for the exoplanets GJ1214b, HD189733b & HD209458b; (v) comparison with secondary eclipse data for HD189733b, XO-1b & CoRoT-2b. HELIOS is being developed, together with the dynamical core THOR and the chemistry solver VULCAN, in the group of Kevin Heng at the University of Bern as part of the Exoclimes Simulation Platform (ESP) [4], which is an open-source project aimed at providing community tools to model exoplanetary atmospheres. [1] Grimm & Heng 2015, arXiv:1503.03806. [2] Heng, Lyons & Tsai, arXiv:1506.05501; Heng & Lyons, arXiv:1507.01944. [3] e.g. Heng, Mendonca & Lee, 2014, ApJS, 215, 4. [4] exoclime.net
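On numerical test (i), the difference between isothermal and non-isothermal treatment of a layer can be illustrated in a few lines. This is a generic single-layer sketch, not HELIOS code; the band, optical depth, and temperatures are invented:

```python
import numpy as np

def planck(nu, T):
    # Planck spectral radiance B_nu(T), SI units
    h, c, kB = 6.626e-34, 2.998e8, 1.381e-23
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (kB * T))

def emergent(nu, tau, T_top, T_bot, I_in, isothermal=False, n=400):
    # intensity leaving the layer top: attenuated incoming intensity plus
    # thermal emission integrated over the layer's optical depth
    t = np.linspace(0.0, tau, n)                 # optical depth below the top
    T = np.full(n, 0.5 * (T_top + T_bot)) if isothermal \
        else T_top + (T_bot - T_top) * t / tau   # linear-in-tau profile
    return I_in * np.exp(-tau) + np.trapz(planck(nu, T) * np.exp(-t), t)

nu, tau = 3e13, 1.5                              # ~10-micron band, assumed tau
print(emergent(nu, tau, 900.0, 1100.0, 0.0))                   # non-isothermal
print(emergent(nu, tau, 900.0, 1100.0, 0.0, isothermal=True))  # isothermal
```

The isothermal variant weights the whole layer by one mean temperature, while the linear-in-tau variant weights warmer material near the bottom less because of the extra attenuation, which is the effect such tests quantify.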
Code of Federal Regulations, 2011 CFR
2011-07-01
... Ratings The Cardiovascular System § 4.100 Application of the evaluation criteria for diagnostic codes 7000... medical information does not sufficiently reflect the severity of the veteran's cardiovascular disability...
Code of Federal Regulations, 2014 CFR
2014-07-01
... Ratings The Cardiovascular System § 4.100 Application of the evaluation criteria for diagnostic codes 7000... medical information does not sufficiently reflect the severity of the veteran's cardiovascular disability...
Code of Federal Regulations, 2013 CFR
2013-07-01
... Ratings The Cardiovascular System § 4.100 Application of the evaluation criteria for diagnostic codes 7000... medical information does not sufficiently reflect the severity of the veteran's cardiovascular disability...
Code of Federal Regulations, 2012 CFR
2012-07-01
... Ratings The Cardiovascular System § 4.100 Application of the evaluation criteria for diagnostic codes 7000... medical information does not sufficiently reflect the severity of the veteran's cardiovascular disability...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Ratings The Cardiovascular System § 4.100 Application of the evaluation criteria for diagnostic codes 7000... medical information does not sufficiently reflect the severity of the veteran's cardiovascular disability...
Monte Carlo simulation of β-γ coincidence system using plastic scintillators in 4π geometry
NASA Astrophysics Data System (ADS)
Dias, M. S.; Piuvezam-Filho, H.; Baccarelli, A. M.; Takeda, M. N.; Koskinas, M. F.
2007-09-01
A modified version of a Monte Carlo code called Esquema, developed at the Nuclear Metrology Laboratory at IPEN, São Paulo, Brazil, has been applied to simulating a 4πβ(PS)-γ coincidence system designed for primary radionuclide standardisation. This system consists of a plastic scintillator in 4π geometry, for alpha or electron detection, coupled to a NaI(Tl) counter for gamma-ray detection. The response curves for monoenergetic electrons and photons were calculated previously with the PENELOPE code and applied as input data to Esquema. The latter code simulates all the disintegration processes, from the precursor nucleus to the ground state of the daughter radionuclide. As a result, the curve of observed disintegration rate as a function of the beta efficiency parameter can be simulated. A least-squares fit between the experimental activity values and the Monte Carlo calculation provided the actual radioactive source activity, without the need for conventional extrapolation procedures. Application of this methodology to 60Co and 133Ba radioactive sources is presented, with results in good agreement with a conventional 4πβ(PC)-γ coincidence system using a proportional counter.
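To make the final fitting step concrete, here is a minimal sketch of the least-squares scaling it implies. All numbers are invented and this is not the Esquema code; the Monte Carlo curve is assumed to be normalized to unit source activity:

```python
import numpy as np

# measured coincidence count rates at several beta-efficiency settings,
# and the simulated response curve at the same settings (A0 = 1)
eff = np.array([0.55, 0.65, 0.75, 0.85, 0.93])            # beta efficiencies
measured = np.array([1712., 1745., 1788., 1832., 1861.])  # counts/s, assumed
mc_shape = np.array([0.856, 0.872, 0.894, 0.916, 0.931])  # simulated shape

# least-squares scale factor: the activity that best maps the simulated
# curve onto the measurements (closed form for a one-parameter linear fit)
A0 = np.dot(mc_shape, measured) / np.dot(mc_shape, mc_shape)
print(f"estimated activity: {A0:.1f} Bq")
```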
DOE Office of Scientific and Technical Information (OSTI.GOV)
Danko, George L
To increase understanding of the energy extraction capacity of Enhanced Geothermal Systems (EGS), a numerical model development and application project was completed. The general objective of the project was to develop and apply a new, data-coupled Thermal-Hydrological-Mechanical-Chemical (T-H-M-C) model in which the four internal components can be freely selected from existing simulation software without merging and cross-combining a diverse set of computational codes. Eight tasks were completed during the project period. The results are reported in five publications, an MS thesis, twelve quarterly reports, and two annual reports to DOE. Two US patents were also issued during the project period, with one patent application originating prior to the start of the project. The “Multiphase Physical Transport Modeling Method and Modeling System” (U.S. Patent 8,396,693 B2, 2013), a key element in the GHE sub-model solution, is successfully used for EGS studies. The “Geothermal Energy Extraction System and Method" invention (U.S. Patent 8,430,166 B2, 2013) originates from the time of project performance, describing a new fluid flow control solution. The new, coupled T-H-M-C numerical model will help analyze and design new, efficient EGS systems.
Fostering Team Awareness in Earth System Modeling Communities
NASA Astrophysics Data System (ADS)
Easterbrook, S. M.; Lawson, A.; Strong, S.
2009-12-01
Existing Global Climate Models are typically managed and controlled at a single site, with varied levels of participation by scientists outside the core lab. As these models evolve to encompass a wider set of earth systems, this central control of the modeling effort becomes a bottleneck. But such models cannot evolve to become fully distributed open source projects unless they address the imbalance in the availability of communication channels: scientists at the core site have access to regular face-to-face communication with one another, while those at remote sites have access to only a subset of these conversations - e.g. formally scheduled teleconferences and user meetings. Because of this imbalance, critical decision making can be hidden from many participants, their code contributions can interact in unanticipated ways, and the community loses awareness of who knows what. We have documented some of these problems in a field study at one climate modeling centre, and started to develop tools to overcome these problems. We report on one such tool, TracSNAP, which analyzes the social network of the scientists contributing code to the model by extracting the data in an existing project code repository. The tool presents the results of this analysis to modelers and model users in a number of ways: recommendations of who has expertise on particular code modules, suggestions for code sections that are related to files being worked on, and visualizations of team communication patterns. The tool is currently available as a plugin for the Trac bug tracking system.
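The repository-mining idea behind such a tool can be sketched in a few lines: reduce each commit to an (author, files) record, then link authors through commonly touched files. The commit data below are invented, and TracSNAP's actual analysis is surely richer:

```python
from collections import defaultdict
from itertools import combinations

# hypothetical commit log excerpt: (author, files changed in the commit)
commits = [
    ("alice", ["ocean/mixing.f90", "ocean/tracers.f90"]),
    ("bob",   ["ocean/mixing.f90", "ice/thermo.f90"]),
    ("carol", ["ice/thermo.f90"]),
]

# invert: which authors have touched each file
touched = defaultdict(set)
for author, files in commits:
    for f in files:
        touched[f].add(author)

# connect authors who worked on the same file; weights count shared files
links = defaultdict(int)
for authors in touched.values():
    for a, b in combinations(sorted(authors), 2):
        links[(a, b)] += 1

print(dict(links))  # {('alice', 'bob'): 1, ('bob', 'carol'): 1}
```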
A novel construction method of QC-LDPC codes based on CRT for optical communications
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Liang, Meng-qi; Wang, Yong; Lin, Jin-zhao; Pang, Yu
2016-05-01
A novel construction method of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed based on the Chinese remainder theorem (CRT). The method can not only increase the code length without reducing the girth, but also greatly enhance the code rate, so it is easy to construct a high-rate code. The simulation results show that at a bit error rate (BER) of 10^-7, the net coding gain (NCG) of the regular QC-LDPC(4851, 4546) code is respectively 2.06 dB, 1.36 dB, 0.53 dB and 0.31 dB more than those of the classic RS(255, 239) code in ITU-T G.975, the LDPC(32640, 30592) code in ITU-T G.975.1, the QC-LDPC(3664, 3436) code constructed by the improved combining construction method based on CRT, and the irregular QC-LDPC(3843, 3603) code constructed by the construction method based on the Galois field (GF(q)) multiplicative group. Furthermore, all five codes have the same code rate of 0.937. Therefore, the regular QC-LDPC(4851, 4546) code constructed by the proposed construction method has excellent error-correction performance and is well suited for optical transmission systems.
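For readers unfamiliar with QC-LDPC codes, the expansion step they all rely on is easy to sketch: an exponent (base) matrix is expanded into a binary parity-check matrix built from circulant permutation blocks. The sketch below shows only that generic step; the shift values are illustrative and the paper's CRT-based choice of exponents is not reproduced here:

```python
import numpy as np

def circulant_permutation(p, shift):
    # p x p identity matrix with its columns cyclically shifted by 'shift'
    return np.roll(np.eye(p, dtype=np.uint8), shift, axis=1)

def qc_ldpc_parity(base, p):
    # expand an exponent matrix: entry s >= 0 becomes a circulant with
    # shift s, and entry -1 becomes an all-zero p x p block
    rows = []
    for row in base:
        blocks = [np.zeros((p, p), dtype=np.uint8) if s < 0
                  else circulant_permutation(p, s) for s in row]
        rows.append(np.hstack(blocks))
    return np.vstack(rows)

# toy 2 x 4 base matrix with circulant size p = 7 (values illustrative only)
base = [[0, 1, 2, 4],
        [0, 2, 4, 1]]
H = qc_ldpc_parity(base, 7)
print(H.shape)  # (14, 28): each base entry expanded to a 7 x 7 block
```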
TFaNS Tone Fan Noise Design/Prediction System. Volume 2; User's Manual; 1.4
NASA Technical Reports Server (NTRS)
Topol, David A.; Eversman, Walter
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides information on code input and file structure essential for potential users of TFaNS. This report is divided into three volumes: Volume 1, System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume 2, User's Manual, TFaNS Version 1.4; Volume 3, Evaluation of System Codes.
Mothers as Mediators of Cognitive Development: A Coding Manual. Updated.
ERIC Educational Resources Information Center
Friedman, Sarah L.; Sherman, Tracy L.
Coding systems developed for a study of the way mothers influence the cognitive development of their 2- to 4-year-old children are described in this report. The coding systems were developed for the analysis of data recorded on videotapes of 3 mother-child situations: 8 minutes of interaction starting with a reunion between mother and child, 5…
MacWilliams Identity for M-Spotty Weight Enumerator
NASA Astrophysics Data System (ADS)
Suzuki, Kazuyoshi; Fujiwara, Eiji
M-spotty byte error control codes are very effective for correcting/detecting errors in semiconductor memory systems that employ recent high-density RAM chips with wide I/O data (e.g., 8, 16, or 32 bits). In this case, the width of the I/O data is one byte. A spotty byte error is defined as random t-bit errors within a byte of length b bits, where 1 ≤ t ≤ b. Then, an error is called an m-spotty byte error if at least one spotty byte error is present in a byte. M-spotty byte error control codes are characterized by the m-spotty distance, which includes the Hamming distance as a special case for t = 1 or t = b. The MacWilliams identity provides the relationship between the weight distribution of a code and that of its dual code. The present paper presents the MacWilliams identity for the m-spotty weight enumerator of m-spotty byte error control codes. In addition, the present paper clarifies that the indicated identity includes the MacWilliams identity for the Hamming weight enumerator as a special case.
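For reference, the classical special case mentioned in the abstract is the MacWilliams identity for the Hamming weight enumerator of a linear [n, k] code C over GF(q), relating the enumerator of C to that of its dual code:

```latex
W_{C^{\perp}}(x, y) \;=\; \frac{1}{|C|}\, W_{C}\bigl(x + (q-1)\,y,\; x - y\bigr),
\qquad
W_{C}(x, y) \;=\; \sum_{c \in C} x^{\,n - \mathrm{wt}(c)}\, y^{\,\mathrm{wt}(c)},
```

where wt(c) is the Hamming weight of codeword c. The paper's contribution generalizes this relationship to the m-spotty weight enumerator.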
Moore, Brian C J
2003-03-01
To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.
NASA Astrophysics Data System (ADS)
Jöckel, P.; Kerkweg, A.; Buchholz-Dietsch, J.; Tost, H.; Sander, R.; Pozzer, A.
2008-03-01
The implementation of processes related to chemistry into Earth System Models and their coupling within such systems requires the consistent description of the chemical species involved. We provide a tool (written in Fortran95) to structure and manage information about constituents, hereinafter referred to as tracers, namely the Modular Earth Submodel System (MESSy) generic (i.e., infrastructure) submodel TRACER. With TRACER it is possible to define a multitude of tracer sets, depending on the spatio-temporal representation (i.e., the grid structure) of the model. The required information about a specific chemical species is split into the static meta-information about the characteristics of the species, and its (generally in time and space variable) abundance in the corresponding representation. TRACER moreover includes two submodels. One is TRACER_FAMILY, an implementation of the tracer family concept. It distinguishes between two types: type-1 families are usually applied to handle strongly related tracers (e.g., fast equilibrating species) for a specific process (e.g., advection). In contrast to this, type-2 families are applied for tagging techniques. Tagging means the artificial decomposition of one or more species into parts, which are additionally labelled (e.g., by the region of their primary emission) and then processed as the species itself. The type-2 family concept is designed to conserve the linear relationship between the family and its members. The second submodel is TRACER_PDEF, which corrects and budgets numerical negative overshoots that arise in many process implementations due to numerical limitations (e.g., rounding errors). The submodel therefore guarantees the positive definiteness of the tracers and stabilises the integration scheme. As a by-product, it further provides a global tracer mass diagnostic. Last but not least, we present the submodel PTRAC, which allows the definition of tracers via a Fortran95 namelist, as a complement to the standard tracer definition by application of the TRACER interface routines in the code. TRACER with its submodels and PTRAC can readily be applied to a variety of models without further requirements. The code and documentation are included in the electronic supplement.
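The separation TRACER makes between static meta-information and representation-dependent abundance can be sketched with a toy data model. This is a Python illustration of the concept only; the actual submodel is Fortran95 and its interface differs:

```python
from dataclasses import dataclass, field
from typing import Optional
import numpy as np

# static meta-information about a species, independent of any grid
@dataclass
class TracerMeta:
    name: str
    molar_mass: float                 # g/mol
    family: Optional[str] = None      # e.g. a type-1 advection family

# one tracer set per spatio-temporal representation (grid structure):
# the meta-information is shared, the abundance arrays are grid-specific
@dataclass
class TracerSet:
    representation: str
    meta: list = field(default_factory=list)
    abundance: dict = field(default_factory=dict)

    def define(self, meta: TracerMeta, shape):
        self.meta.append(meta)
        self.abundance[meta.name] = np.zeros(shape)

gridpoint = TracerSet("GP-3D")
gridpoint.define(TracerMeta("O3", 48.0), shape=(64, 32, 19))
```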
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram
The systems resilience research community has developed methods to manually insert additional source-program-level assertions to trap errors, and has also devised tools to conduct fault injection studies for scalar program codes. In this work, we contribute the first vector-oriented LLVM-level fault injector, VULFI, to help study the effects of faults in vector architectures that are of growing importance, especially for vectorizing loops. Using VULFI, we conduct a resiliency study of nine real-world vector benchmarks using Intel's AVX and SSE extensions as the target vector instruction sets, and offer the first reported understanding of how faults affect vector instruction sets. We take this work further toward automating the insertion of resilience assertions during compilation. This is based on our observation that during intermediate (e.g., LLVM-level) code generation to handle full and partial vectorization, modern compilers exploit (and explicate in their code documentation) critical invariants. These invariants are turned into error-checking code. We confirm the efficacy of these automatically inserted low-overhead error detectors for vectorized for-loops.
ERIC Educational Resources Information Center
Rhodes, Gillian; Jeffery, Linda; Boeing, Alexandra; Calder, Andrew J.
2013-01-01
Despite the discovery of body-selective neural areas in occipitotemporal cortex, little is known about how bodies are visually coded. We used perceptual adaptation to determine how body identity is coded. Brief exposure to a body (e.g., anti-Rose) biased perception toward an identity with opposite properties (Rose). Moreover, the size of this…
TacSat-4 COMMx, Advanced SATCOM Experiment
2009-01-01
Schein, M. T. Marley, C. T. Apland, R. E. Lee, B. D. Williams, E. D. Schaefer, S. R. Vernon, P. D. Schwartz, B. L. Kantsiper, E. J. Finnegan; The...Lee, B. D. Williams, E. D. Schaefer, P. D. Schwartz, R. Denissen, B. Kantsiper, E. J. Finnegan; The Johns Hopkins University Applied Physics...Mission Ops Lead, NRL Code 8233 Bob Kuzma, TacSat-4 Payload Controller, NRL Code 8242 Bob Skalitzky, TacSat-4 Power Systems, NRL Code 8244 Doug Bentz
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Lehua; Oldenburg, Curtis M.
TOGA is a numerical reservoir simulator for modeling non-isothermal flow and transport of water, CO2, multicomponent oil, and related gas components for applications including CO2-enhanced oil recovery (CO2-EOR) and geologic carbon sequestration in depleted oil and gas reservoirs. TOGA uses an approach based on the Peng-Robinson equation of state (PR-EOS) to calculate the thermophysical properties of the gas and oil phases, including the gas/oil components dissolved in the aqueous phase, and uses a mixing model to estimate the thermophysical properties of the aqueous phase. The phase behavior (e.g., occurrence and disappearance of the three phases, gas + oil + aqueous) and the partitioning of non-aqueous components (e.g., CO2, CH4, and n-oil components) between coexisting phases are modeled using K-values derived from assumptions of equal fugacity that have been demonstrated to be very accurate as shown by comparison to measured data. Models for saturated (water) vapor pressure and water solubility (in the oil phase) are used to calculate the partitioning of the water (H2O) component between the gas and oil phases. All components (e.g., CO2, H2O, and n hydrocarbon components) are allowed to be present in all phases (aqueous, gaseous, and oil). TOGA uses a multiphase version of Darcy's Law to model flow and transport through porous media of mixtures with up to three phases over a range of pressures and temperatures appropriate to hydrocarbon recovery and geologic carbon sequestration systems. Transport of the gaseous and dissolved components is by advection and Fickian molecular diffusion. New methods for phase partitioning and thermophysical property modeling in TOGA have been validated against experimental data published in the literature for describing phase partitioning and phase behavior. Flow and transport have been verified by testing against related TOUGH2 EOS modules and CMG. The code has also been validated against a CO2-EOR experimental core flood involving flow of three phases and 12 components. Results of simulations of a hypothetical 3D CO2-EOR problem involving three phases and multiple components are presented to demonstrate the field-scale capabilities of the new code. This user guide provides instructions for use and sample problems for verification and demonstration.
41 CFR 101-30.403-2 - Management codes.
Code of Federal Regulations, 2011 CFR
2011-07-01
....403-2 Section 101-30.403-2 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30.4-Use of the Federal Catalog System § 101-30.403-2 Management codes. For internal use within an...
41 CFR 101-30.403-2 - Management codes.
Code of Federal Regulations, 2013 CFR
2013-07-01
....403-2 Section 101-30.403-2 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30.4-Use of the Federal Catalog System § 101-30.403-2 Management codes. For internal use within an...
41 CFR 101-30.403-2 - Management codes.
Code of Federal Regulations, 2012 CFR
2012-07-01
....403-2 Section 101-30.403-2 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30.4-Use of the Federal Catalog System § 101-30.403-2 Management codes. For internal use within an...
41 CFR 101-30.403-2 - Management codes.
Code of Federal Regulations, 2014 CFR
2014-07-01
....403-2 Section 101-30.403-2 Public Contracts and Property Management Federal Property Management Regulations System FEDERAL PROPERTY MANAGEMENT REGULATIONS SUPPLY AND PROCUREMENT 30-FEDERAL CATALOG SYSTEM 30.4-Use of the Federal Catalog System § 101-30.403-2 Management codes. For internal use within an...
MILAMIN 2 - Fast MATLAB FEM solver
NASA Astrophysics Data System (ADS)
Dabrowski, Marcin; Krotkiewski, Marcin; Schmid, Daniel W.
2013-04-01
MILAMIN is a free and efficient MATLAB-based two-dimensional FEM solver utilizing unstructured meshes [Dabrowski et al., G-cubed (2008)]. The code consists of steady-state thermal diffusion and incompressible Stokes flow solvers implemented in approximately 200 lines of native MATLAB code. The brevity makes the code easily customizable. An important quality of MILAMIN is speed - it can handle millions of nodes within minutes on one CPU core of a standard desktop computer, and is faster than many commercial solutions. The new MILAMIN 2 allows three-dimensional modeling. It is designed as a set of functional modules that can be used as building blocks for efficient FEM simulations using MATLAB. The utilities are largely implemented as native MATLAB functions. For performance-critical parts we use MUTILS - a suite of compiled MEX functions optimized for shared memory multi-core computers. The most important features of MILAMIN 2 are: 1. Modular approach to defining, tracking, and discretizing the geometry of the model; 2. Interfaces to external mesh generators (e.g., Triangle, Fade2d, T3D) and mesh utilities (e.g., element type conversion, fast point location, boundary extraction); 3. Efficient computation of the stiffness matrix for a wide range of element types, anisotropic materials and three-dimensional problems; 4. Fast global matrix assembly using a dedicated MEX function; 5. Automatic integration rules; 6. Flexible prescription (spatial, temporal, and field functions) and efficient application of Dirichlet, Neumann, and periodic boundary conditions; 7. Treatment of transient and non-linear problems; 8. Various iterative and multi-level solution strategies; 9. Post-processing tools (e.g., numerical integration); 10. Visualization primitives using MATLAB, and VTK export functions. We provide a large number of examples that show how to implement a custom FEM solver using the MILAMIN 2 framework. The examples are MATLAB scripts of increasing complexity that address a given technical topic (e.g., creating meshes, reordering nodes, applying boundary conditions), a given numerical topic (e.g., using various solution strategies, non-linear iterations), or that present a fully-developed solver designed to address a scientific topic (e.g., performing Stokes flow simulations in synthetic porous medium). Reference: Dabrowski, M., M. Krotkiewski, and D. W. Schmid, MILAMIN: MATLAB-based finite element method solver for large problems, Geochem. Geophys. Geosyst., 9, Q04030, 2008.
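Feature 4, fast global matrix assembly, rests on a standard trick: collect all element contributions as (row, column, value) triplets and let the sparse constructor sum duplicate entries in one pass. A minimal Python/SciPy sketch of that trick (not MILAMIN's MATLAB/MEX implementation) follows:

```python
import numpy as np
from scipy.sparse import coo_matrix

def assemble(elems, ke_all, ndof):
    # elems:  (nel, nen) node indices for each element
    # ke_all: (nel, nen, nen) per-element stiffness matrices
    nel, nen = elems.shape
    rows = np.repeat(elems, nen, axis=1).ravel()  # row index of each entry
    cols = np.tile(elems, (1, nen)).ravel()       # column index of each entry
    vals = ke_all.reshape(nel, -1).ravel()
    # duplicate (row, col) pairs are summed by the COO constructor,
    # which performs the whole assembly without an explicit Python loop
    return coo_matrix((vals, (rows, cols)), shape=(ndof, ndof)).tocsr()
```

The same triplet pattern is what a MATLAB `sparse(I, J, V)` call (or a dedicated MEX routine) exploits; the per-element loop disappears into one vectorized constructor call.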
Monte Carlo treatment planning for molecular targeted radiotherapy within the MINERVA system
NASA Astrophysics Data System (ADS)
Lehmann, Joerg; Hartmann Siantar, Christine; Wessol, Daniel E.; Wemple, Charles A.; Nigg, David; Cogliati, Josh; Daly, Tom; Descalle, Marie-Anne; Flickinger, Terry; Pletcher, David; DeNardo, Gerald
2005-03-01
The aim of this project is to extend accurate and patient-specific treatment planning to new treatment modalities, such as molecular targeted radiation therapy, incorporating previously crafted and proven Monte Carlo and deterministic computation methods. A flexible software environment is being created that allows planning radiation treatment for these new modalities and combining different forms of radiation treatment with consideration of biological effects. The system uses common input interfaces, medical image sets for definition of patient geometry and dose reporting protocols. Previously, the Idaho National Engineering and Environmental Laboratory (INEEL), Montana State University (MSU) and Lawrence Livermore National Laboratory (LLNL) had accrued experience in the development and application of Monte Carlo based, three-dimensional, computational dosimetry and treatment planning tools for radiotherapy in several specialized areas. In particular, INEEL and MSU have developed computational dosimetry systems for neutron radiotherapy and neutron capture therapy, while LLNL has developed the PEREGRINE computational system for external beam photon-electron therapy. Building on that experience, the INEEL and MSU are developing the MINERVA (modality inclusive environment for radiotherapeutic variable analysis) software system as a general framework for computational dosimetry and treatment planning for a variety of emerging forms of radiotherapy. In collaboration with this development, LLNL has extended its PEREGRINE code to accommodate internal sources for molecular targeted radiotherapy (MTR), and has interfaced it with the plugin architecture of MINERVA. Results from the extended PEREGRINE code have been compared to published data from other codes, and found to be in general agreement (within 2% of EGS4 and 10% of MCNP) (Descalle et al 2003, Cancer Biother. Radiopharm. 18, 71-9). The code is currently being benchmarked against experimental data. The interpatient variability of the drug pharmacokinetics in MTR can only be properly accounted for by image-based, patient-specific treatment planning, as has been common in external beam radiation therapy for many years. MINERVA offers 3D Monte Carlo-based MTR treatment planning as its first integrated operational capability. The new MINERVA system will ultimately incorporate capabilities for a comprehensive list of radiation therapies. In progress are modules for external beam photon-electron therapy and boron neutron capture therapy (BNCT). Brachytherapy and proton therapy are planned. Through the open application programming interface (API), other groups can add their own modules and share them with the community.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brunett, A. J.; Fanning, T. H.
The United States has extensive experience with the design, construction, and operation of sodium-cooled fast reactors (SFRs) over the last six decades. Despite the closure of various facilities, the U.S. continues to dedicate research and development (R&D) efforts to the design of innovative experimental, prototype, and commercial facilities. Accordingly, in support of the rich operating history and ongoing design efforts, the U.S. has been developing and maintaining a series of tools with capabilities that envelop all facets of SFR design and safety analyses. This paper provides an overview of the current U.S. SFR analysis toolset, including codes such as SAS4A/SASSYS-1, MC2-3, SE2-ANL, PERSENT, NUBOW-3D, and LIFE-METAL, as well as the higher-fidelity tools (e.g. PROTEUS) being integrated into the toolset. Current capabilities of the codes are described and key ongoing development efforts are highlighted for some codes.
Evolutionary models of rotating dense stellar systems: challenges in software and hardware
NASA Astrophysics Data System (ADS)
Fiestas, Jose
2016-02-01
We present evolutionary models of rotating self-gravitating systems (e.g. globular clusters, galaxy cores). These models are characterized by the presence of initial axisymmetry due to rotation. Central black hole seeds are alternatively included in our models, and black hole growth due to consumption of stellar matter is simulated until the central potential dominates the kinematics in the core. The goal is to study the long-term evolution (~ Gyr) of relaxed dense stellar systems, which deviate from spherical symmetry, their morphology and final kinematics. For this purpose, we developed a 2D Fokker-Planck analytical code, whose results we confirm by detailed N-body techniques, applying a high-performance code developed for GPU machines. We compare our models to available observations of galactic rotating globular clusters, and conclude that initial rotation modifies significantly the shape and lifetime of these systems, and cannot be neglected in studying the evolution of globular clusters, and of the galaxy itself.
Problem Management Module: An Innovative System to Improve Problem List Workflow
Hodge, Chad M.; Kuttler, Kathryn G.; Bowes, Watson A.; Narus, Scott P.
2014-01-01
Electronic problem lists are essential to modern health record systems, with a primary goal to serve as the repository of a patient’s current health issues. Additionally, coded problems can be used to drive downstream activities such as decision support, evidence-based medicine, billing, and cohort generation for research. Meaningful Use also requires use of a coded problem list. Over the course of three years, Intermountain Healthcare developed a problem management module (PMM) that provided innovative functionality to improve clinical workflow and boost problem list adoption, e.g. smart search, user-customizable views, problem evolution, and problem timelines. In 23 months of clinical use, clinicians entered over 70,000 health issues, the percentage of free-text items dropped to 1.2%, completeness of problem list items increased by 14%, and more collaborative habits were initiated. PMID:25954372
The additional impact of liaison psychiatry on the future funding of general hospital services.
Udoh, G; Afif, M; MacHale, S
2012-01-01
An accurate coding system is fundamental in determining Casemix, which is likely to become a major determinant of future funding of health care services. Our aim was to determine whether the Hospital Inpatient Enquiry (HIPE) system's assigned codes for psychiatric disorders were accurate and reflective of liaison psychiatric input into patients' care. The HIPE system's coding for psychiatric disorders was compared with psychiatrists' coding for the same patients over a prospective 6-month period, using ICD-10 diagnostic criteria. A total of 262 cases were reviewed, of which 135 (51%) were male and 127 (49%) were female. The mean age was 49 years, ranging from 16 years to 87 years (SD 17.3). Our findings show a significant disparity between HIPE and psychiatrists' coding. Only 94 (36%) of the HIPE-coded cases were compatible with the psychiatrists' coding. The commonest cause of incompatibility was the coding personnel's failure to code for a psychiatric disorder in the presence of one, 117 (69.9%); others were coding for a different diagnosis, 36 (21%); coding for a psychiatric disorder in the absence of one, 11 (6.6%); and coding for a different sub-type and others, 2 (1.2%). HIPE data coded depression, 30 (11.5%), as the commonest diagnosis and general examination, 1 (0.4%), as the least common, but failed to code for dementia, illicit drug use and somatoform disorder despite their being coded for by the psychiatrists. In contrast, the psychiatrists coded delirium, 46 (18%), and dementia, 1 (0.4%), as the commonest and the least diagnosed disorders respectively. Given the marked increase in case complexity associated with psychiatric co-morbidities, future funding streams are at risk of inadequate payment for services rendered.
Stand-off detection of explosive particles by imaging Raman spectroscopy
NASA Astrophysics Data System (ADS)
Nordberg, Markus; Åkeson, Madeleine; Östmark, Henric; Carlsson, Torgny E.
2011-06-01
A multispectral imaging technique has been developed to detect and identify explosive particles, e.g. from a fingerprint, at stand-off distances using Raman spectroscopy. When handling IEDs as well as other explosive devices, residues can easily be transferred via fingerprints onto other surfaces, e.g. car handles, gear sticks and suitcases. By imaging the surface using the multispectral imaging Raman technique, the explosive particles can be identified and displayed using color-coding. The technique has been demonstrated by detecting fingerprints containing significant amounts of 2,4-dinitrotoluene (DNT), 2,4,6-trinitrotoluene (TNT) and ammonium nitrate at a distance of 12 m in less than 90 seconds (22 images × 4 seconds) [1]. For each measurement, a sequence of images, one image for each wave number, is recorded. The spectral data from each pixel are compared with reference spectra of the substances to be detected. The pixels are marked with different colors corresponding to the detected substances in the fingerprint. The system has now been further developed to become less complex and thereby less sensitive to the environment, such as temperature fluctuations. The optical resolution has been improved to less than 70 μm measured at 546 nm wavelength. The total detection time ranges from less than one minute to around five minutes depending on the size of the particles and how confident the identification should be. The results indicate a great potential for multispectral imaging Raman spectroscopy as a stand-off technique for detection of single explosive particles.
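The per-pixel identification step described above amounts to scoring each pixel's spectrum against a library of reference spectra. A minimal sketch of one plausible approach, normalized cross-correlation, follows; this is not the authors' algorithm, and all names and the threshold are illustrative:

```python
import numpy as np

def classify(cube, refs, threshold=0.8):
    # cube: (ny, nx, nbands) stack, one image per wavenumber
    # refs: dict mapping substance name -> (nbands,) reference spectrum
    ny, nx, nb = cube.shape
    pixels = cube.reshape(-1, nb).astype(float)
    pixels -= pixels.mean(axis=1, keepdims=True)
    pixels /= np.linalg.norm(pixels, axis=1, keepdims=True) + 1e-12
    labels = np.full(ny * nx, "", dtype=object)
    best = np.full(ny * nx, threshold)       # only accept scores above this
    for name, r in refs.items():
        r = r - r.mean()
        r /= np.linalg.norm(r) + 1e-12
        score = pixels @ r                   # normalized cross-correlation
        hit = score > best
        labels[hit], best[hit] = name, score[hit]
    return labels.reshape(ny, nx)            # map names to display colors
```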
Beyond the First Optical Depth: Fusing Optical Data From Ocean Color Imagery and Gliders
2009-01-01
...extreme weather (e.g., hurricanes) becoming a safe and efficient alternative to shipboard surveys [3]. Despite these benefits, data streams provided by...ECO-triplet puck, WetLabs). Unlike other glider types (e.g., spray, seaglider), the use of Slocums was especially advantageous in the WAP region to
Friendly Extensible Transfer Tool Beta Version
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collins, William P.; Gutierrez, Kenneth M.; McRee, Susan R.
2016-04-15
Often data transfer software is designed to meet specific requirements or apply to specific environments. Frequently, this requires source code integration for added functionality. An extensible data transfer framework is needed to more easily incorporate new capabilities in modular fashion. Using the FrETT framework, functionality may be incorporated (in many cases without need of source code) to handle new platform capabilities: I/O methods (e.g., platform-specific data access), network transport methods, and data processing (e.g., data compression).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burns, T.D. Jr.
1996-05-01
The Monte Carlo Model System (MCMS) for the Washington State University (WSU) Radiation Center provides a means through which core criticality and power distributions can be calculated, as well as providing a method for neutron and photon transport necessary for BNCT epithermal neutron beam design. The computational code used in this Model System is MCNP4A. The geometric capability of this Monte Carlo code allows the WSU system to be modeled very accurately. A working knowledge of the MCNP4A neutron transport code increases the flexibility of the Model System and is recommended; however, the eigenvalue/power density problems can be run with little direct knowledge of MCNP4A. Neutron and photon particle transport require more experience with the MCNP4A code. The Model System consists of two coupled subsystems: the Core Analysis and Source Plane Generator Model (CASP), and the BeamPort Shell Particle Transport Model (BSPT). The CASP Model incorporates the S(α,β) thermal treatment, and is run as a criticality problem yielding the system eigenvalue (k_eff), the core power distribution, and an implicit surface source for subsequent particle transport in the BSPT Model. The BSPT Model uses the source plane generated by a CASP run to transport particles through the thermal column beamport. The user can create filter arrangements in the beamport and then calculate characteristics necessary for assessing the BNCT potential of a given filter arrangement. Examples of the characteristics to be calculated are: neutron fluxes, neutron currents, fast neutron KERMAs and gamma KERMAs. The MCMS is a useful tool for the WSU system. Those unfamiliar with the MCNP4A code can use the MCMS transparently for core analysis, while more experienced users will find the particle transport capabilities very powerful for BNCT filter design.
Paramedic attitudes regarding prehospital analgesia.
Walsh, Brooks; Cone, David C; Meyer, Emily M; Larkin, Gregory L
2013-01-01
Although pain is a major reason why patients summon emergency medical services (EMS), prehospital medical providers administer analgesic agents at inappropriately low rates. One possible reason is the role of EMS provider attitudes. This study was conducted to elicit attitudes that may act as impediments or deterrents to administering analgesia in the prehospital environment. A qualitative methodology was employed. We recruited experienced paramedics, with at least one year of full-time fieldwork, from a variety of agencies in New England. We sought to include a balance of rural and urban as well as both private and hospital-based agencies. Participants at each site were selected through purposive sampling. A semistructured discussion guide was designed to elicit the paramedics' past experiences with administering analgesia, as well as reflections on their role in the care of patients in pain. Both interviews and focus groups were conducted. These sessions were recorded and transcribed verbatim. The transcripts were topic-analyzed and iteratively coded by two independent investigators utilizing the constant comparative method of Glaser and Strauss' Grounded Theory; coding ambiguities were resolved by consensus. Through a series of conceptual mapping and iterative code refinement, themes and domains were generated. Fifteen paramedics from five EMS agencies in three New England states were recruited. Major themes were: 1) a reluctance to administer opioids to patients without significant objective signs (e.g., deformity, hypertension); 2) a preoccupation with potential malingering; 3) ambivalence about the degree of pain control to target or to expect (e.g., aiming to "take the edge off"); 4) a fear of masking diagnostic symptoms; and 5) an aversion to aggressive dosing of opioids (e.g., initial doses of morphine did not exceed 5 mg). A number of potentially modifiable attitudinal barriers to appropriate pain management were revealed.
Exceptionally long 5' UTR short tandem repeats specifically linked to primates.
Namdar-Aligoodarzi, P; Mohammadparast, S; Zaker-Kandjani, B; Talebi Kakroodi, S; Jafari Vesiehsari, M; Ohadi, M
2015-09-10
We have previously reported genome-scale short tandem repeats (STRs) in the core promoter interval (i.e., −120 to +1 relative to the transcription start site) of protein-coding genes that have evolved identically in primates vs. non-primates. Those STRs may function as evolutionary switch codes for primate speciation. In the current study, we used the Ensembl database to analyze the 5' untranslated region (5' UTR) between +1 and +60 of the transcription start site of the entire set of human protein-coding genes annotated in the GeneCards database, in order to identify "exceptionally long" STRs (≥5 repeats), which may be of selective/adaptive advantage. The importance of this critical interval is its function as core promoter, and its effect on transcription and translation. In order to minimize ascertainment bias, we analyzed the evolutionary status of the human 5' UTR STRs of ≥5 repeats in several species encompassing six major orders and superorders across mammals, including primates, rodents, Scandentia, Laurasiatheria, Afrotheria, and Xenarthra. We introduce primate-specific STRs, and STRs which have expanded from mouse to primates. Identical co-occurrence of the identified STRs of rare average frequency between 0.006 and 0.0001 in primates supports a role for those motifs in processes that diverged primates from other mammals, such as neuronal differentiation (e.g. APOD and FGF4), and craniofacial development (e.g. FILIP1L). A number of the identified STRs of ≥5 repeats may be human-specific (e.g. ZMYM3 and DAZAP1). Future work is warranted to examine the importance of the listed genes in primate/human evolution, development, and disease.
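Detecting such STRs in a 5' UTR window is a simple pattern-matching exercise. A minimal sketch, with an invented sequence and a regular expression for repeat units of up to 6 bases occurring at least 5 times, could be (this is not the authors' pipeline):

```python
import re

def find_strs(seq, min_repeats=5, max_unit=6):
    # find tandem repeats: a unit of 1..max_unit bases repeated at least
    # min_repeats times; the lazy quantifier prefers the shortest unit
    pattern = r"(.{1,%d}?)\1{%d,}" % (max_unit, min_repeats - 1)
    hits = []
    for m in re.finditer(pattern, seq):
        unit = m.group(1)
        n = len(m.group(0)) // len(unit)
        hits.append((m.start() + 1, unit, n))  # 1-based position, unit, count
    return hits

# invented <=60 nt window, not a real 5' UTR
utr_window = "GGCGGCGGCGGCGGCATCACGTACGTACGTACGTACGTAACGCGCGCGCGTTA"
print(find_strs(utr_window))  # [(1, 'GGC', 5), (19, 'ACGT', 5), (41, 'CG', 5)]
```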
PopCORN: Hunting down the differences between binary population synthesis codes
NASA Astrophysics Data System (ADS)
Toonen, S.; Claeys, J. S. W.; Mennekens, N.; Ruiter, A. J.
2014-02-01
Context. Binary population synthesis (BPS) modelling is a very effective tool to study the evolution and properties of various types of close binary systems. The uncertainty in the parameters of the model and their effect on a population can be tested in a statistical way, which then leads to a deeper understanding of the underlying (sometimes poorly understood) physical processes involved. Several BPS codes exist that have been developed with different philosophies and aims. Although BPS has been very successful for studies of many populations of binary stars, in the particular case of the study of the progenitors of supernovae Type Ia, the predicted rates and ZAMS progenitors vary substantially between different BPS codes. Aims: To understand the predictive power of BPS codes, we study the similarities and differences in the predictions of four different BPS codes for low- and intermediate-mass binaries. We investigate the differences in the characteristics of the predicted populations, and whether they are caused by different assumptions made in the BPS codes or by numerical effects, e.g. a lack of accuracy in BPS codes. Methods: We compare a large number of evolutionary sequences for binary stars, starting with the same initial conditions following the evolution until the first (and when applicable, the second) white dwarf (WD) is formed. To simplify the complex problem of comparing BPS codes that are based on many (often different) assumptions, we equalise the assumptions as much as possible to examine the inherent differences of the four BPS codes. Results: We find that the simulated populations are similar between the codes. Regarding the population of binaries with one WD, there is very good agreement between the physical characteristics, the evolutionary channels that lead to the birth of these systems, and their birthrates. Regarding the double WD population, there is a good agreement on which evolutionary channels exist to create double WDs and a rough agreement on the characteristics of the double WD population. Regarding which progenitor systems lead to a single and double WD system and which systems do not, the four codes agree well. Most importantly, we find that for these two populations, the differences in the predictions from the four codes are not due to numerical differences, but because of different inherent assumptions. We identify critical assumptions for BPS studies that need to be studied in more detail. Appendices are available in electronic form at http://www.aanda.org
DOE Office of Scientific and Technical Information (OSTI.GOV)
Giuseppe Palmiotti
In this work, the implementation of a collision-history-based approach to sensitivity/perturbation calculations in the Monte Carlo code SERPENT is discussed. The proposed methods allow the calculation of the effects of nuclear data perturbation on several response functions: the effective multiplication factor, reaction rate ratios and bilinear ratios (e.g., effective kinetics parameters). SERPENT results are compared to ERANOS and TSUNAMI Generalized Perturbation Theory calculations for two fast metallic systems and for a PWR pin-cell benchmark. New methods for the calculation of sensitivities to angular scattering distributions are also presented, which adopt fully continuous (in energy and angle) Monte Carlo estimators.
P-adic valued models of swarm behaviour
NASA Astrophysics Data System (ADS)
Schumann, Andrew
2017-07-01
The swarm behaviour can be fully determined by attractants (food pieces) which change the directions of swarm propagation. If we assume that at each time step the swarm can detect not more than p − 1 attractants, then the swarm behaviour can be coded by p-adic integers. The main task of any swarm is to logistically optimize the road system connecting the reachable attractants. Meanwhile, the transporting network of the swarm has loops (circles) and changes permanently, e.g. the swarm occupies some attractants and leaves others. However, this complex dynamics can be effectively coded by p-adic integers. This allows us to represent the swarm behaviour as a calculation on p-adic valued strings.
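The coding idea can be sketched in a few lines: each time step contributes one base-p digit (here interpreted, as an assumption for illustration, as the index of the attractant chosen, with 0 for none), and the whole history maps to the p-adic integer Σ a_k p^k:

```python
# minimal sketch: a swarm history as a p-adic (base-p digit) expansion
def encode(choices, p):
    assert all(0 <= a < p for a in choices)
    return sum(a * p**k for k, a in enumerate(choices))

def decode(n, p, steps):
    digits = []
    for _ in range(steps):
        n, a = divmod(n, p)
        digits.append(a)
    return digits

p = 5                       # swarm senses at most p - 1 = 4 attractants per step
history = [2, 0, 4, 1, 3]   # hypothetical choices at steps 0..4
n = encode(history, p)
assert decode(n, p, len(history)) == history
```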
RNA damage in biological conflicts and the diversity of responding RNA repair systems
Burroughs, A. Maxwell; Aravind, L.
2016-01-01
RNA is targeted in biological conflicts by enzymatic toxins or effectors. A vast diversity of systems which repair or ‘heal’ this damage has only recently become apparent. Here, we summarize the known effectors, their modes of action, and RNA targets before surveying the diverse systems which counter this damage from a comparative genomics viewpoint. RNA-repair systems show a modular organization with extensive shuffling and displacement of the constituent domains; however, a general ‘syntax’ is strongly maintained whereby systems typically contain: an RNA ligase (either ATP-grasp or RtcB superfamilies), nucleotidyltransferases, enzymes modifying RNA-termini for ligation (phosphatases and kinases) or protection (methylases), and scaffold or cofactor proteins. We highlight poorly-understood or previously-uncharacterized repair systems and components, e.g. potential scaffolding cofactors (Rot/TROVE and SPFH/Band-7 modules) with their respective cognate non-coding RNAs (YRNAs and a novel tRNA-like molecule) and a novel nucleotidyltransferase associating with diverse ligases. These systems have been extensively disseminated by lateral transfer between distant prokaryotic and microbial eukaryotic lineages consistent with intense inter-organismal conflict. Components have also often been ‘institutionalized’ for non-conflict roles, e.g. in RNA-splicing and in RNAi systems (e.g. in kinetoplastids) which combine a distinct family of RNA-acting prim-pol domains with DICER-like proteins. PMID:27536007
Combined experimental and Monte Carlo verification of brachytherapy plans for vaginal applicators
NASA Astrophysics Data System (ADS)
Sloboda, Ron S.; Wang, Ruqing
1998-12-01
Dose rates in a phantom around a shielded and an unshielded vaginal applicator containing Selectron low-dose-rate sources were determined by experiment and Monte Carlo simulation. Measurements were performed with thermoluminescent dosimeters in a white polystyrene phantom using an experimental protocol geared for precision. Calculations for the same set-up were done using a version of the EGS4 Monte Carlo code system modified for brachytherapy applications, into which a new combinatorial geometry package developed by Bielajew was recently incorporated. Measured dose rates agree with Monte Carlo estimates to within 5% (1 SD) for the unshielded applicator, while highlighting some experimental uncertainties for the shielded applicator. Monte Carlo calculations were also done to determine a value for the effective transmission of the shield required for clinical treatment planning, and to estimate the dose rate in water at points in axial and sagittal planes transecting the shielded applicator. Comparison with dose rates generated by the planning system indicates that agreement is better than 5% (1 SD) at most positions. The precision thermoluminescent dosimetry protocol and modified Monte Carlo code are effective complementary tools for brachytherapy applicator dosimetry.
Northern Arabian Sea Circulation - Autonomous Research: Optimal Planning Systems (NASCar-OPS)
2015-09-30
vehicles (gliders, drifters, floats, and/or wave-gliders) - Provide guidance for persistent optimal sampling, including for long-duration observation...headings and relative operating speeds will be provided to the operational fleets of instruments and vehicles (e.g. gliders, drifters, floats or wave-gliders). We plan to use models specific to vehicle types (floats, wave-gliders, etc.). We also plan to further parallelize and optimize our codes
Hadden, Kellie L; LeFort, Sandra; O'Brien, Michelle; Coyte, Peter C; Guerriere, Denise N
2016-04-01
The purpose of the current study was to examine the concurrent and discriminant validity of the Child Facial Coding System for children with cerebral palsy. Eighty-five children (mean = 8.35 years, SD = 4.72 years) were videotaped during a passive joint stretch with their physiotherapist and during 3 time segments: baseline, passive joint stretch, and recovery. Children's pain responses were rated from videotape using the Numerical Rating Scale and Child Facial Coding System. Results indicated that Child Facial Coding System scores during the passive joint stretch significantly correlated with Numerical Rating Scale scores (r = .72, P < .01). Child Facial Coding System scores were also significantly higher during the passive joint stretch than the baseline and recovery segments (P < .001). Facial activity was not significantly correlated with the developmental measures. These findings suggest that the Child Facial Coding System is a valid method of identifying pain in children with cerebral palsy.
Finite elements numerical codes as primary tool to improve beam optics in NIO1
NASA Astrophysics Data System (ADS)
Baltador, C.; Cavenago, M.; Veltri, P.; Serianni, G.
2017-08-01
The RF negative ion source NIO1, built at Consorzio RFX in Padua (Italy), is aimed at investigating general issues in ion source physics in view of the full-size ITER injector MITICA as well as DEMO-relevant solutions, like energy recovery and alternative neutralization systems, crucial for neutral beam injectors in future fusion experiments. NIO1 has been designed to produce 9 H− beamlets (in a 3×3 pattern) of 15 mA each at 60 keV, using a three-electrode system downstream of the plasma source. At the moment the source is at an early operational stage and only operation at low power and low beam energy is possible. In particular, NIO1 is currently fitted with a too-strong set of SmCo co-extraction electron suppression magnets (CESM) in the extraction grid (EG), which will be replaced by a weaker set of ferrite magnets. A completely new set of magnets will also be designed and mounted on the new EG that will be installed next year, replacing the present one. In this paper, the finite element code OPERA 3D is used to investigate the effects of the three sets of magnets on beamlet optics. A comparison of numerical results with measurements will be provided where possible.
2006-08-25
interleaving schemes defined in 802.11a standard, although only 6 Mbps data rate with BPSK and 1/2 Convolutional coding and puncturing is used in our...16-QAM/64-QAM Convolutional Code K = 7 (64 states) K = 7 (64 states) Coding Rates 1/2, 2/3, 3/4 1/2, 2/3, 3/4 Channel Spacing (MHz) 20 10 Signal...Since 3G systems need to be backward compatible with 2G systems, they are a combination of existing and evolved equipments with data rate up to 2 Mbps
Methodology, status and plans for development and assessment of the code ATHLET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teschendorff, V.; Austregesilo, H.; Lerchl, G.
1997-07-01
The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is being developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) for the analysis of anticipated and abnormal plant transients, small and intermediate leaks, as well as large breaks in light water reactors. The aim of the code development is to cover the whole spectrum of design basis and beyond-design-basis accidents (without core degradation) for PWRs and BWRs with only one code. The main code features are: advanced thermal-hydraulics; modular code architecture; separation between physical models and numerical methods; pre- and post-processing tools; and portability. The code has features that are of special interest for applications to small leaks and transients with accident management, e.g. initialization by a steady-state calculation, a full-range drift-flux model, and dynamic mixture level tracking. The General Control Simulation Module of ATHLET is a flexible tool for the simulation of the balance-of-plant and control systems, including the various operator actions in the course of accident sequences with AM measures. The code development is accompanied by a systematic and comprehensive validation program. A large number of integral experiments and separate effect tests, including the major International Standard Problems, have been calculated by GRS and by independent organizations. The ATHLET validation matrix is a well-balanced set of integral and separate effects tests derived from the CSNI proposal emphasizing, however, the German combined ECC injection system, which was investigated in the UPTF, PKL and LOBI test facilities.
2008-10-01
...from the Navy Operational Global Atmospheric Prediction System (NOGAPS; Hogan and Rosmond, 1991) and assimilates data via the Navy Coupled Ocean...forecasts using Global, Atlantic, Gulf of Mexico, and northern Gulf of Mexico configurations of HYCOM. Proceedings, Ocean Optics XIX, Castelvecchio Pascoli
Mobile Code: The Future of the Internet
1999-01-01
code (mobile agents) to multiple proxies or servers; "customization" (e.g., re-formatting, filtering, metasearch); information overload; diversified... Mobile code is necessary, rather than client-side code, since many customization features (such as information monitoring) do not work if the...economic foundation for Web sites; many Web sites earn money solely from advertisements. If these sites allow mobile agents to easily access the content
CLUMPY: A code for γ-ray signals from dark matter structures
NASA Astrophysics Data System (ADS)
Charbonnier, Aldée; Combet, Céline; Maurin, David
2012-03-01
We present the first public code for semi-analytical calculation of the astrophysical J-factor of the γ-ray flux from dark matter annihilation/decay in the Galaxy, including dark matter substructures. The core of the code is the calculation of the line-of-sight integral of the dark matter density squared (for annihilations) or density (for decaying dark matter). The code can be used in three modes: i) to draw skymaps of the Galactic smooth component and/or the substructure contributions, ii) to calculate the flux from a specific halo (other than the Galactic halo, e.g., dwarf spheroidal galaxies), or iii) to perform simple statistical operations on a list of allowed DM profiles for a given object. Extragalactic contributions and other tracers of DM annihilation (e.g., positrons, anti-protons) will be included in a second release.
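To make the core computation concrete, the following Python sketch evaluates the line-of-sight integral of the density squared for an assumed NFW profile. The profile parameters, integration grid, and function names are illustrative choices of ours, not CLUMPY's actual defaults or interfaces.

import numpy as np

# Illustrative NFW profile; RHO_S and R_S are assumed values, not CLUMPY defaults.
RHO_S = 0.1   # GeV/cm^3, assumed normalization
R_S = 20.0    # kpc, assumed scale radius
R_SUN = 8.5   # kpc, galactocentric distance of the observer

def nfw(r_kpc):
    x = r_kpc / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def j_factor(psi_rad, s_max_kpc=100.0, n=20000):
    """Line-of-sight integral of rho^2 (annihilation case) at angle psi
    from the Galactic center, via simple trapezoidal quadrature."""
    s = np.linspace(1e-3, s_max_kpc, n)  # distance along the line of sight
    # Galactocentric radius as a function of line-of-sight distance s:
    r = np.sqrt(R_SUN**2 + s**2 - 2.0 * R_SUN * s * np.cos(psi_rad))
    return np.trapz(nfw(r) ** 2, s)      # units: GeV^2 cm^-6 kpc

print(j_factor(np.radians(1.0)))  # J-factor 1 degree from the Galactic center

For decaying dark matter the integrand would be nfw(r) rather than its square, mirroring the two cases described above.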
Avidan, Alexander; Weissman, Charles; Levin, Phillip D
2015-04-01
Quick response (QR) codes containing anesthesia syllabus data were introduced into an anesthesia information management system. The code was generated automatically at the conclusion of each case and was available for resident case logging using a smartphone or tablet. The goal of this study was to evaluate the use and usability/user-friendliness of such a system. Resident case logging practices were assessed prior to introducing the QR codes. QR code use and satisfaction among residents were reassessed at three and six months. Before QR code introduction only 12/23 (52.2%) residents maintained a case log. Most of the remaining residents (9/23, 39.1%) expected to receive a case list from the anesthesia information management system database at the end of their residency. At three and six months, 17/26 (65.4%) and 15/25 (60.0%) residents, respectively, were using the QR codes. Satisfaction was rated as very good or good. QR codes for residents' case logging with smartphones or tablets were successfully introduced into an anesthesia information management system and used by most residents. QR codes can be successfully implemented into medical practice to support data transfer. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
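As a rough illustration of the mechanism (not the authors' implementation), a case summary can be rendered as a QR symbol with the third-party Python qrcode package; the payload fields below are invented, since the abstract does not specify the syllabus format.

import json
import qrcode  # third-party package: pip install qrcode[pil]

# Hypothetical case-summary payload; the actual syllabus fields are not
# given in the abstract and are assumed here for illustration.
case = {
    "case_id": "2015-0412",
    "procedure": "laparoscopic appendectomy",
    "anesthesia": "general, ETT",
    "asa_class": 2,
}

img = qrcode.make(json.dumps(case))  # encode the JSON payload as a QR symbol
img.save("case_log_qr.png")          # residents scan this with a phone or tablet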
A Comparison of Global Indexing Schemes to Facilitate Earth Science Data Management
NASA Astrophysics Data System (ADS)
Griessbaum, N.; Frew, J.; Rilee, M. L.; Kuo, K. S.
2017-12-01
Recent advances in database technology have led to systems optimized for managing petabyte-scale multidimensional arrays. These array databases are a good fit for subsets of the Earth's surface that can be projected into a rectangular coordinate system with acceptable geometric fidelity. However, for global analyses, array databases must address the same distortions and discontinuities that apply to map projections in general. The array database SciDB supports enormous databases spread across thousands of computing nodes. Additionally, the following SciDB characteristics are particularly germane to the coordinate system problem: SciDB efficiently stores and manipulates sparse (i.e., mostly empty) arrays. SciDB arrays have 64-bit indexes. SciDB supports user-defined data types, functions, and operators. We have implemented two geospatial indexing schemes in SciDB. The simplest uses two array dimensions to represent longitude and latitude. For representation as 64-bit integers, the coordinates are multiplied by a scale factor large enough to yield an appropriate Earth surface resolution (e.g., a scale factor of 100,000 yields a resolution of approximately 1 m at the equator). Aside from the longitudinal discontinuity, the principal disadvantage of this scheme is its fixed scale factor. The second scheme uses a single array dimension to represent the bit-codes for locations in a hierarchical triangular mesh (HTM) coordinate system. A HTM maps the Earth's surface onto an octahedron, and then recursively subdivides each triangular face to the desired resolution. Earth surface locations are represented as the concatenation of an octahedron face code and a quadtree code within the face. Unlike our integerized lat-lon scheme, the HTM allows objects of different sizes (e.g., pixels with differing resolutions) to be represented in the same indexing scheme. We present an evaluation of the relative utility of these two schemes for managing and analyzing MODIS swath data.
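A minimal sketch of the first scheme, the integerized latitude-longitude index, assuming the stated scale factor of 100,000 (the helper names are ours):

SCALE = 100_000  # yields roughly 1 m resolution at the equator, per the abstract

def latlon_to_index(lat_deg, lon_deg):
    """Map (lat, lon) in degrees to a pair of 64-bit array indexes by
    fixed-point scaling; the shifts keep both dimensions non-negative."""
    i = round((lat_deg + 90.0) * SCALE)    # 0 .. 18_000_000
    j = round((lon_deg + 180.0) * SCALE)   # 0 .. 36_000_000
    return i, j

def index_to_latlon(i, j):
    return i / SCALE - 90.0, j / SCALE - 180.0

print(latlon_to_index(34.4140, -119.8489))  # e.g., a point near Santa Barbara

The fixed SCALE constant is exactly the limitation noted above: every object in the array shares one resolution, which the hierarchical triangular mesh avoids.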
Homogeneous Photodynamical Analysis of Kepler's Multiply-Transiting Systems
NASA Astrophysics Data System (ADS)
Ragozzine, Darin
To search for planets more like our own, NASA's Kepler Space Telescope (Kepler) discovered thousands of exoplanet candidates that cross in front of (transit) their parent stars (e.g., Twicken et al. 2016). The Kepler exoplanet data represent an incredible observational leap forward, as evidenced by hundreds of papers with thousands of citations. In particular, systems with multiple transiting planets combine the determination of physical properties of exoplanets (e.g., radii), the context provided by the system architecture, and insights from orbital dynamics. Such systems are the most information-rich exoplanetary systems (Ragozzine & Holman 2010). Thanks to Kepler's revolutionary dataset, understanding these Multi-Transiting Systems (MTSs) enables a wide variety of major science questions. Existing analyses of MTSs are incomplete and suboptimal, and our efficient and timely proposal will provide significant scientific gains (100 new mass measurements and 100 updated mass measurements). Furthermore, our homogeneous analysis enables future statistical analyses, including those necessary to characterize the small-planet mass-radius relation, with implications for understanding the formation, evolution, and habitability of planets. The overarching goal of this proposal is a complete homogeneous investigation of Kepler MTSs to provide detailed measurements (or constraints) on exoplanetary physical and orbital properties. Current investigations do not exploit the full power of the Kepler data; here we propose to use better data (Short Cadence observations), better methods (photodynamical modeling), and a better statistical method (Bayesian Differential Evolution Markov Chain Monte Carlo) in a homogeneous analysis of all 700 Kepler MTSs. These techniques are particularly valuable for understanding small terrestrial planets. We propose to extract the near-maximum amount of information from these systems through a series of three research objectives. Research Objective 1 (RO1): Gather and detrend publicly available light curves for Kepler MTSs; gather starting guesses of preliminary planetary and stellar parameters from the Kepler pipeline (e.g., Rowe et al. 2014) and other studies; and expand our existing photodynamical code (e.g., Mills & Fabrycky 2017) to handle all Kepler MTSs. All required data are publicly available, and our significant past expertise demonstrates our ability to complete these tasks. The new photodynamical code will be called the PhotoDynamical Multi-planet Model (PhoDyMM) and described in a paper. Research Objective 2 (RO2): Apply PhoDyMM to the 600 known systems with 2-3 transiting planets; publish these results, including full posterior distributions for all systems (to be housed at the NASA Exoplanet Archive). Research Objective 3 (RO3): Apply PhoDyMM to the 100 Kepler MTSs with 4 or more planets. This astrophysics data analysis is a major step beyond existing efforts and will provide the definitive physical and orbital properties for Kepler MTSs. It is clearly responsive to the Astrophysics Data Analysis Program and relevant to NASA Astrophysics Goals. PI Ragozzine and Co-I Fabrycky have participated in the Kepler prime science mission since its inception and have significant experience in all required areas. Co-I Mills has the most published uses of a photodynamical model on some of the most difficult-to-analyze exoplanetary systems (Kepler-11, Kepler-108, Kepler-223, Kepler-444).
We will employ best practices for Data Management, such as archiving posterior distributions and providing open access to PhoDyMM. PI Ragozzine's startup provided sufficient computational resources to perform the extensive analyses. He will be supported by a graduate student and unfunded undergraduates.
Chibani, Omar; Li, X Allen
2002-05-01
Three Monte Carlo photon/electron transport codes (GEPTS, EGSnrc, and MCNP) are benchmarked against dose measurements in homogeneous (both low- and high-Z) media as well as at interfaces. A brief overview of the physical models used by each code for photon and electron (positron) transport is given. Absolute calorimetric dose measurements for 0.5 and 1 MeV electron beams incident on homogeneous and multilayer media are compared with the predictions of the three codes. Comparison with dose measurements in two-layer media exposed to a 60Co gamma source is also performed. In addition, comparisons between the codes (including the EGS4 code) are done for (a) 0.05 to 10 MeV electron beams and positron point sources in lead, (b) high-energy photons (10 and 20 MeV) irradiating a multilayer phantom (water/steel/air), and (c) simulation of a 90Sr/90Y brachytherapy source. Good agreement is observed between the calorimetric electron dose measurements and the predictions of GEPTS and EGSnrc in both homogeneous and multilayer media. MCNP outputs are found to be dependent on the energy-indexing method (Default/ITS style). This dependence is significant in homogeneous media as well as at interfaces. MCNP(ITS) fits the experimental data more closely than MCNP(DEF), except for the case of Be. At low energy (0.05 and 0.1 MeV), MCNP(ITS) dose distributions in lead show higher maxima in comparison with GEPTS and EGSnrc. EGS4 produces too-penetrating electron dose distributions in high-Z media, especially at low energy (<0.1 MeV). For positrons, differences between GEPTS and EGSnrc are observed in lead because GEPTS distinguishes positrons from electrons in both its elastic multiple scattering and bremsstrahlung emission models. For the 60Co source, quite good agreement between calculations and measurements is observed with regard to the experimental uncertainty. For the other cases (10 and 20 MeV photon sources and the 90Sr/90Y beta source), good agreement is found between the three codes. In conclusion, differences between GEPTS and EGSnrc results are found to be very small for almost all media and energies studied. MCNP results depend significantly on the electron energy-indexing method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greene, N.M.; Petrie, L.M.; Westfall, R.M.
SCALE--a modular code system for Standardized Computer Analyses Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.2 of the system. The manual is divided into three volumes: Volume 1--for the control module documentation; Volume 2--for functional module documentation; and Volume 3--for documentation of the data libraries and subroutine libraries.
Anomalous Rayleigh scattering with dilute concentrations of elements of biological importance
NASA Astrophysics Data System (ADS)
Hugtenburg, Richard P.; Bradley, David A.
2004-01-01
The anomalous scattering factor (ASF) correction to the relativistic form-factor approximation for Rayleigh scattering is examined in support of its utilization in radiographic imaging. ASF-corrected total cross-section data have been generated on a low-resolution grid for the Monte Carlo code EGS4 for the biologically important elements K, Ca, Mn, Fe, Cu and Zn. Points in the fixed energy grid used by EGS4, as well as 8 other points in the vicinity of the K-edge, have been chosen to achieve an uncertainty in the ASF component of 20% according to the Thomas-Reiche-Kuhn sum rule and an energy resolution of 20 eV. Such data are useful for analysis of imaging with a quasi-monoenergetic source. Corrections to the sampled distribution of outgoing photons due to ASF are given, and new total cross-section data, including that of the photoelectric effect, have been computed using the Slater exchange self-consistent potential with the Latter tail. A measurement of Rayleigh scattering in a dilute aqueous solution of manganese (II) was performed, this system enabling determination of the absolute cross-section, although background subtraction was necessary to remove Kβ fluorescence and resonant Raman scattering occurring within several hundred eV of the edge. Measurements confirm the presence of below-edge bound-bound structure and variation in the structure due to the ionic state that are not currently included in tabulations.
NASA Astrophysics Data System (ADS)
Makrakis, Dimitrios; Mathiopoulos, P. Takis
A maximum likelihood sequential decoder for the reception of digitally modulated signals with single or multiamplitude constellations transmitted over a multiplicative, nonselective fading channel is derived. It is shown that its structure consists of a combination of envelope, multiple differential, and coherent detectors. The outputs of each of these detectors are jointly processed by means of an algorithm. This algorithm is presented in a recursive form. The derivation of the new receiver is general enough to accommodate uncoded as well as coded (e.g., trellis-coded) schemes. Performance evaluation results for a reduced-complexity trellis-coded QPSK system have demonstrated that the proposed receiver dramatically reduces the error floors caused by fading. At Eb/N0 = 20 dB the new receiver structure results in bit-error-rate reductions of more than three orders of magnitude compared to a conventional Viterbi receiver, while being reasonably simple to implement.
Assessing Seismic Hazards - Algorithms, Maps, and Emergency Scenarios
NASA Astrophysics Data System (ADS)
Ferriz, H.
2007-05-01
Public officials in charge of building codes, land use planning, and emergency response need sound estimates of seismic hazards. Sources may be well defined (e.g., active faults that have a surface trace) or diffuse (e.g., a subduction zone or a blind-thrust belt), but in both cases one can use a deterministic or worst-case scenario approach. For each scenario, a design earthquake is selected based on historic data or the known length of Holocene ruptures (as determined by geologic mapping). Horizontal ground accelerations (HGAs) can then be estimated at different distances from the earthquake epicenter using published attenuation relations (e.g., Seismological Res. Letters, v. 68, 1997) and estimates of the elastic properties of the substrate materials. No good algorithms are available to take into account reflection of elastic waves across other fault planes (e.g., a common effect in California, where there are many strands of the San Andreas fault), or amplification of waves in water-saturated alluvial and lacustrine basins (e.g., the Mexico City basin), but empirical relations can be developed by correlating historic damage patterns with predicted HGAs. The ultimate result is a map of HGAs. With this map, and with additional data on depth to groundwater and geotechnical properties of local soils, a liquefaction susceptibility map can be prepared, using published algorithms (e.g., J. of Geotech. Geoenv. Eng., v. 127, p. 817-833, 2001; Eng. Geology Practice in N. California, p. 579-594, 2001). Finally, the HGA estimates, digital elevation models, geologic structural data, and geotechnical properties of local geologic units can be used to prepare a slope failure susceptibility map (e.g., Eng. Geology Practice in N. California, p. 77-94, 2001). Seismic hazard maps are used by: (1) Building officials to determine areas of the city where special construction codes have to be implemented, and where existing buildings may need to be retrofitted. (2) Planning officials to evaluate plans for new growth (though in most cities land use patterns are historically established). (3) Emergency response officials to plan emergency operations. (4) Insurance commissioners to estimate losses and insurance claims (e.g., with FEMA's software HAZUS).
Simonaitis, Linas; McDonald, Clement J
2009-10-01
The utility of National Drug Codes (NDCs) and drug knowledge bases (DKBs) in the organization of prescription records from multiple sources was studied. The master files of most pharmacy systems include NDCs and local codes to identify the products they dispense. We obtained a large sample of prescription records from seven different sources. These records carried a national product code or a local code that could be translated into a national product code via their formulary master. We obtained mapping tables from five DKBs. We measured the degree to which the DKB mapping tables covered the national product codes carried in or associated with the sample of prescription records. Considering the total prescription volume, DKBs covered 93.0-99.8% of the product codes from three outpatient sources and 77.4-97.0% of the product codes from four inpatient sources. Among the inpatient sources, invented codes explained 36-94% of the noncoverage. Outpatient pharmacy sources rarely invented codes, which comprised only 0.11-0.21% of their total prescription volume, compared with inpatient pharmacy sources, for which invented codes comprised 1.7-7.4% of prescription volume. The distribution of prescribed products was highly skewed, with 1.4-4.4% of codes accounting for 50% of the message volume and 10.7-34.5% accounting for 90% of the message volume. DKBs cover the product codes used by outpatient sources sufficiently well to permit automatic mapping. Changes in policies and standards could increase coverage of the product codes used by inpatient sources.
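The coverage figures above are weighted by prescription volume: a product code counts as many times as the prescriptions that carry it. A minimal sketch of that measurement, with invented codes standing in for real NDCs:

from collections import Counter

# Hypothetical data: product codes carried by a sample of prescription
# records, and the set of codes covered by one drug knowledge base (DKB).
prescriptions = ["00093-0058", "00093-0058", "55111-0467", "LOCAL-123",
                 "00093-0058", "55111-0467", "LOCAL-999"]
dkb_codes = {"00093-0058", "55111-0467"}

volume = Counter(prescriptions)
total = sum(volume.values())
covered = sum(n for code, n in volume.items() if code in dkb_codes)
print(f"coverage by prescription volume: {covered / total:.1%}")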
LAS - LAND ANALYSIS SYSTEM, VERSION 5.0
NASA Technical Reports Server (NTRS)
Pease, P. B.
1994-01-01
The Land Analysis System (LAS) is an image analysis system designed to manipulate and analyze digital data in raster format and provide the user with a wide spectrum of functions and statistical tools for analysis. LAS offers these features under VMS with optional image display capabilities for IVAS and other display devices as well as the X-Windows environment. LAS provides a flexible framework for algorithm development as well as for the processing and analysis of image data. Users may choose between mouse-driven commands or the traditional command line input mode. LAS functions include supervised and unsupervised image classification, film product generation, geometric registration, image repair, radiometric correction and image statistical analysis. Data files accepted by LAS include formats such as Multi-Spectral Scanner (MSS), Thematic Mapper (TM) and Advanced Very High Resolution Radiometer (AVHRR). The enhanced geometric registration package now includes both image-to-image and map-to-map transformations. The over 200 LAS functions fall into image processing scenario categories which include: arithmetic and logical functions, data transformations, Fourier transforms, geometric registration, hard copy output, image restoration, intensity transformation, multispectral and statistical analysis, file transfer, tape profiling and file management, among others. Internal improvements to the LAS code have eliminated the VAX VMS dependencies and improved overall system performance. The maximum LAS image size has been increased to 20,000 lines by 20,000 samples with a maximum of 256 bands per image. The catalog management system used in earlier versions of LAS has been replaced by a more streamlined and maintenance-free method of file management. This system is not dependent on VAX/VMS and relies on file naming conventions alone to allow the use of identical LAS file names on different operating systems. While the LAS code has been improved, the original capabilities of the system have been preserved. These include maintaining associated image history, session logging, and batch, asynchronous and interactive modes of operation. The LAS application programs are integrated under version 4.1 of an interface called the Transportable Applications Executive (TAE). TAE 4.1 has four modes of user interaction: menu, direct command, tutor (or help), and dynamic tutor. In addition, TAE 4.1 allows the operation of LAS functions using mouse-driven commands under the TAE-Facelift environment provided with TAE 4.1. These modes of operation allow users, from the beginner to the expert, to exercise specific application options. LAS is written in C and FORTRAN 77 for use with DEC VAX computers running VMS with approximately 16 Mb of physical memory. This program runs under TAE 4.1. Since TAE 4.1 is not a current version of TAE, TAE 4.1 is included within the LAS distribution. Approximately 130,000 blocks (65 Mb) of disk storage space are necessary to store the source code and files generated by the installation procedure for LAS, and 44,000 blocks (22 Mb) are necessary for TAE 4.1 installation. The only other dependencies for LAS are the subroutine libraries for the specific display device(s) that will be used with LAS/DMS (e.g. X-Windows and/or IVAS). The standard distribution medium for LAS is a set of two 9-track 6250 BPI magnetic tapes in DEC VAX BACKUP format. It is also available on a set of two TK50 tape cartridges in DEC VAX BACKUP format.
This program was developed in 1986 and last updated in 1992.
The emerging role of epigenetics in rheumatic diseases.
Gay, Steffen; Wilson, Anthony G
2014-03-01
Epigenetics is a key mechanism regulating the expression of genes. There are three main and interrelated mechanisms: DNA methylation, post-translational modification of histone proteins, and non-coding RNA. Gene activation is generally associated with lower levels of DNA methylation in promoters and with distinct histone marks such as acetylation of amino acids in histones. Unlike the genetic code, the epigenome is altered by endogenous (e.g. hormonal) and environmental (e.g. diet, exercise) factors and changes with age. Recent evidence implicates epigenetic mechanisms in the pathogenesis of common rheumatic diseases, including RA, OA, SLE and scleroderma. Epigenetic drift has been implicated in age-related changes in the immune system that result in the development of a pro-inflammatory status termed inflammageing, potentially increasing the risk of age-related conditions such as polymyalgia rheumatica. Therapeutic targeting of the epigenome has shown promise in animal models of rheumatic diseases. Rapid advances in computational biology and DNA sequencing technology will lead to a more comprehensive understanding of the roles of epigenetics in the pathogenesis of common rheumatic diseases.
A prototype Knowledge-Based System to Aid Space System Restoration Management.
1986-12-01
Appendix B: Computation of Weights With AHP; Appendix C: ART Code; Appendix D: Test Outputs... Earth Coverage With Geosynchronous Satellites; Space System Configurations; AHP Hierarchy... AHP Hierarchy With Weights; TALK Schema Structure; ART Code for TALK Satellite C
Medical reliable network using concatenated channel codes through GSM network.
Ahmed, Emtithal; Kohno, Ryuji
2013-01-01
Although the 4th generation (4G) of the global mobile communication network, i.e. Long Term Evolution (LTE), coexisting with the 3rd generation (3G), has successfully started, the 2nd generation (2G), i.e. the Global System for Mobile communication (GSM), still plays an important role in many developing countries. Without any other reliable network infrastructure, GSM can be applied for tele-monitoring applications, where high mobility and low cost are necessary. A core objective of this paper is to introduce the design of a more reliable and dependable Medical Network Channel Code system (MNCC) over the GSM network. The MNCC design is based on a simple concatenated channel code, a cascade of an inner code (GSM) and an extra outer code (convolutional code), in order to protect medical data more robustly against channel errors than other data using the existing GSM network. In this paper, the MNCC system provides a bit error rate (BER) suitable for medical tele-monitoring of physiological signals, which is 10^-5 or less. The performance of the MNCC has been proven and investigated using computer simulations under different channel conditions, such as additive white Gaussian noise (AWGN), Rayleigh noise and burst noise. Generally, the MNCC system provides better performance than GSM alone.
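The outer code in such a concatenated design can be sketched as a standard rate-1/2 convolutional encoder. The constraint length K = 3 and generators (7, 5 octal) below are textbook choices for illustration, not necessarily the parameters of the MNCC design.

def conv_encode(bits, g1=0b111, g2=0b101, k=3):
    """Rate-1/2 convolutional encoder (K=3, generators 7 and 5 octal).
    Each input bit yields two coded bits; the encoded stream would then be
    carried as ordinary payload over the inner GSM channel code."""
    state = 0
    out = []
    for b in bits + [0] * (k - 1):  # flush the register with zero tail bits
        state = ((state << 1) | b) & ((1 << k) - 1)
        out.append(bin(state & g1).count("1") % 2)  # parity of tapped bits
        out.append(bin(state & g2).count("1") % 2)
    return out

print(conv_encode([1, 0, 1, 1]))  # 2(n + K - 1) coded bits for n input bits

At the receiver, the outer code would be decoded (e.g., by a Viterbi decoder) after the inner GSM decoding, which is what pushes the residual BER of the medical data below that of ordinary traffic.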
Chang, Pamara F
2017-08-01
To understand the dynamic experiences of parents undergoing the decision-making process regarding cochlear implants for their child(ren), thirty-three parents of d/Deaf children participated in semi-structured interviews. Interviews were digitally recorded, transcribed, and coded using iterative and thematic coding. The results from this study reveal four salient topics related to parents' decision-making process regarding cochlear implantation: 1) factors parents considered when making the decision to get the cochlear implant for their child (e.g., desire to acculturate the child into one community), 2) the extent to which parents' communities influence their decision-making (e.g., norms), 3) information sources parents seek and value when decision-making (e.g., parents value other parents' experiences most, compared with medical or online sources), and 4) personal experiences with stigma affecting their decision not to get the cochlear implant for their child. This study provides insights into values and perspectives that can be utilized to improve informed decision-making when making risky medical decisions with long-term implications. With thorough information provision that addresses parents' concerns and encompasses all aspects of the decision (i.e., medical, social and cultural), health professional teams could reduce the uncertainty and anxiety for parents in this decision-making process for cochlear implantation. Copyright © 2017 Elsevier B.V. All rights reserved.
A narrowband CDMA communications payload for little LEOS applications
NASA Astrophysics Data System (ADS)
Michalik, H.; Hävecker, W.; Ginati, A.
1996-09-01
In recent years, Code Division Multiple Access (CDMA) techniques have been investigated for application in Local Area Networks [J. A. Salehi, IEEE Trans. Commun. 37 (1989)] as well as in Mobile Communications [R. Kohno et al., IEEE Commun. Mag. Jan (1995)]. The main attraction of these techniques lies in the potentially higher throughput and capacity of such systems, under certain conditions, compared to conventional multi-access schemes like frequency and time division multiplexing. Mobile communication over a satellite link represents in some respects the "worst case" for operating a CDMA system. Considering, e.g., the uplink case from mobile to satellite, the imperfections due to different and time-varying channel conditions add to the well-known effects of Multiple Access Interference (MAI) between the simultaneously active users at the satellite receiver. In addition, bandwidth constraints exist for small systems due to the non-availability of large-bandwidth channels in the frequency bands of interest. As a result, for a given service in terms of user data rates, the practical code sequence lengths are limited, as is the available number of codes within a code set. In this paper, a communications payload for Small Satellite Applications with CDMA uplink and C/TDMA downlink under the constraint of bandwidth limitations is proposed. To optimise performance under the imperfections addressed above, the system provides power control and synchronisation for the CDMA uplink. The major objectives of this project are the study, development and testing of such a system for educational purposes and technology development at Hochschule Bremen.
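Typical CDMA spreading sequences of the kind at issue here are m-sequences produced by linear feedback shift registers (Gold codes are built from pairs of them). The sketch below, assuming the degree-5 primitive polynomial x^5 + x^2 + 1, shows how the register length fixes both the sequence length and, with it, the size of the available code set, the constraint noted above.

def m_sequence(taps, degree):
    """Generate one period of an m-sequence from a Fibonacci LFSR with the
    given feedback taps, e.g., taps=(5, 2) for x^5 + x^2 + 1."""
    state = [1] * degree               # any nonzero seed works
    out = []
    for _ in range(2 ** degree - 1):   # period of a maximal-length sequence
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]         # XOR of the tapped stages
        state = [fb] + state[:-1]
    return out

seq = m_sequence((5, 2), 5)
print(len(seq), seq[:10])              # a 31-chip spreading sequence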
Lee, Jin Hee; Hong, Ki Jeong; Kim, Do Kyun; Kwak, Young Ho; Jang, Hye Young; Kim, Hahn Bom; Noh, Hyun; Park, Jungho; Song, Bongkyu; Jung, Jae Yun
2013-12-01
A clinically sensible diagnosis grouping system (DGS) is needed for describing pediatric emergency diagnoses for research, medical resource preparedness, and making national policy for pediatric emergency medical care. The Pediatric Emergency Care Applied Research Network (PECARN) successfully developed such a DGS. We developed a modified PECARN DGS based on the different pediatric population of South Korea and validated the system to obtain accurate and comparable epidemiologic data on the pediatric emergent conditions of the selected population. The data source used to develop and validate the modified PECARN DGS was the National Emergency Department Information System of South Korea, which is coded with the International Classification of Diseases, 10th Revision (ICD-10) code system. To develop the modified DGS based on ICD-10 codes, we matched the selected ICD-10 codes with those of the PECARN DGS by the General Equivalence Mappings (GEMs). After converting ICD-10 codes to ICD-9 codes by GEMs, we matched the ICD-9 codes to PECARN DGS categories using the matrix developed by the PECARN group. Lastly, we conducted an expert panel survey using the Delphi method for the remaining diagnosis codes that were not matched. A total of 1879 ICD-10 codes were used in the development of the modified DGS. After 1078 (57.4%) of the 1879 ICD-10 codes were assigned to the modified DGS by the GEM and PECARN conversion tools, investigators assigned each of the remaining 801 codes (42.6%) to DGS subgroups by 2 rounds of electronic Delphi surveys, and we assigned the remaining 29 codes (4%) to the modified DGS at the second expert consensus meeting. The modified DGS accounts for 98.7% and 95.2% of the diagnoses in the 2008 and 2009 National Emergency Department Information System data sets. The modified DGS also exhibited strong construct validity using the concepts of age, sex, site of care, and season, and reflected the 2009 outbreak of H1N1 influenza in Korea. We developed and validated a clinically feasible and sensible DGS for describing pediatric emergent conditions in Korea. The modified PECARN DGS showed good comprehensiveness and demonstrated reliable construct validity. This modified DGS, based on the PECARN DGS framework, may be effectively implemented for research, reporting, and resource planning in the pediatric emergency system of South Korea.
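The two-step table lookup at the heart of the mapping can be sketched as follows; the code values and group names are invented placeholders, not entries from the real GEM or PECARN tables.

# Hypothetical mapping fragments for illustration; the real GEM and PECARN
# tables contain thousands of entries.
gem_icd10_to_icd9 = {"J45.909": "493.90", "S52.501A": "813.41"}
pecarn_icd9_to_dgs = {"493.90": "Asthma", "813.41": "Fracture of upper limb"}

def assign_dgs(icd10_code):
    """Two-step assignment: ICD-10 -> ICD-9 via GEMs, then ICD-9 -> DGS
    subgroup via the PECARN matrix. Unmatched codes fall through to the
    Delphi expert-panel queue (returned here as None)."""
    icd9 = gem_icd10_to_icd9.get(icd10_code)
    return pecarn_icd9_to_dgs.get(icd9) if icd9 else None

print(assign_dgs("J45.909"))  # -> 'Asthma'
print(assign_dgs("Z99.999"))  # -> None: goes to expert review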
Automated Instrumentation, Monitoring and Visualization of PVM Programs Using AIMS
NASA Technical Reports Server (NTRS)
Mehra, Pankaj; VanVoorst, Brian; Yan, Jerry; Lum, Henry, Jr. (Technical Monitor)
1994-01-01
We present views and analysis of the execution of several PVM (Parallel Virtual Machine) codes for Computational Fluid Dynamics on a network of Sparcstations, including: (1) NAS Parallel Benchmarks CG and MG; (2) a multi-partitioning algorithm for NAS Parallel Benchmark SP; and (3) an overset grid flowsolver. These views and analysis were obtained using our Automated Instrumentation and Monitoring System (AIMS) version 3.0, a toolkit for debugging the performance of PVM programs. We describe the architecture, operation and application of AIMS. The AIMS toolkit contains: (1) Xinstrument, which can automatically instrument various computational and communication constructs in message-passing parallel programs; (2) Monitor, a library of runtime trace-collection routines; (3) VK (Visual Kernel), an execution-animation tool with source-code clickback; and (4) Tally, a tool for statistical analysis of execution profiles. Currently, Xinstrument can handle C and Fortran 77 programs using PVM 3.2.x; Monitor has been implemented and tested on Sun 4 systems running SunOS 4.1.2; and VK uses X11R5 and Motif 1.2. Data and views obtained using AIMS clearly illustrate several characteristic features of executing parallel programs on networked workstations: (1) the impact of long message latencies; (2) the impact of multiprogramming overheads and associated load imbalance; (3) cache and virtual-memory effects; and (4) significant skews between workstation clocks. Interestingly, AIMS can compensate for constant skew (zero drift) by calibrating the skew between a parent and its spawned children. In addition, AIMS' skew-compensation algorithm can adjust timestamps in a way that eliminates physically impossible communications (e.g., messages going backwards in time). Our current efforts are directed toward creating new views to explain the observed performance of PVM programs. Some of the features planned for the near future include: (1) ConfigView, showing the physical topology of the virtual machine, inferred using specially formatted IP (Internet Protocol) packets; and (2) LoadView, synchronous animation of PVM-program execution and resource-utilization patterns.
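One way to realize the constant-skew compensation described above, consistent with the abstract but not taken from the AIMS sources, is to choose the smallest clock offset that makes every recorded message arrive no earlier than it was sent:

def calibrate_skew(messages, min_latency=0.0):
    """Estimate a constant clock offset between a parent and a spawned
    child so that no message appears to arrive before it was sent.
    `messages` holds (send_time_parent, recv_time_child) pairs; the
    returned offset is added to every child timestamp."""
    # The smallest (recv - send) difference bounds the skew: after
    # correction it must be at least min_latency.
    worst = min(recv - send for send, recv in messages)
    return max(0.0, min_latency - worst)

msgs = [(0.10, 0.08), (0.50, 0.49), (0.90, 0.95)]  # two arrive "before" sending
print(calibrate_skew(msgs))  # shift child clock forward to restore causality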
Biosignal PI, an Affordable Open-Source ECG and Respiration Measurement System
Abtahi, Farhad; Snäll, Jonatan; Aslamy, Benjamin; Abtahi, Shirin; Seoane, Fernando; Lindecrantz, Kaj
2015-01-01
Biomedical pilot projects, e.g., telemedicine, homecare, and animal and human trials, usually involve several physiological measurements. Technical development of these projects is time-consuming and, in particular, costly. A versatile but affordable biosignal measurement platform can help to reduce time and risk while keeping the focus on the important goal and making an efficient use of resources. In this work, an affordable and open source platform for development of physiological signal measurement systems is proposed. As a first step, an 8–12 lead electrocardiogram (ECG) and respiration monitoring system has been developed. Chips based on iCoupler technology have been used to achieve the electrical isolation required by IEC 60601 for patient safety. The results show the potential of this platform as a base for prototyping compact, affordable, and medically safe measurement systems. Further work involves both hardware and software development of additional modules. These modules may require development of front-ends for other biosignals, or may simply collect data wirelessly (e.g., via Bluetooth) from devices such as blood pressure monitors, scales, bioimpedance spectrometers, and blood glucose meters. All design and development documents, files and source codes will be available for non-commercial use through the project website, BiosignalPI.org. PMID:25545268
Four-year-olds use norm-based coding for face identity.
Jeffery, Linda; Read, Ainsley; Rhodes, Gillian
2013-05-01
Norm-based coding, in which faces are coded as deviations from an average face, is an efficient way of coding visual patterns that share a common structure and must be distinguished by subtle variations that define individuals. Adults and school-aged children use norm-based coding for face identity, but it is not yet known whether preschool-aged children do so as well. We reasoned that the transition to school could be critical in developing a norm-based system because school places new demands on children's face identification skills and substantially increases experience with faces. Consistent with this view, face identification performance improves steeply between ages 4 and 7. We used face identity aftereffects to test whether norm-based coding emerges between these ages. We found that 4-year-old children, like adults, showed larger face identity aftereffects for adaptors far from the average than for adaptors closer to the average, consistent with use of norm-based coding. We conclude that experience prior to age 4 is sufficient to develop a norm-based face-space and that failure to use norm-based coding cannot explain 4-year-old children's poor face identification skills. Copyright © 2013 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Moses, Annie M.; Golos, Debbie B.; Bennett, Colleen M.
2015-01-01
Early childhood educators need access to research-based practices and materials to help all children learn to read. Some theorists have suggested that individuals learn to read through "dual coding" (i.e., a verbal code and a nonverbal code) and may benefit from more than one route to literacy (e.g., dual coding theory). Although deaf…
A molecular dynamics implementation of the 3D Mercedes-Benz water model
NASA Astrophysics Data System (ADS)
Hynninen, T.; Dias, C. L.; Mkrtchyan, A.; Heinonen, V.; Karttunen, M.; Foster, A. S.; Ala-Nissila, T.
2012-02-01
The three-dimensional Mercedes-Benz model was recently introduced to account for the structural and thermodynamic properties of water. It treats water molecules as point-like particles with four dangling bonds in tetrahedral coordination, representing the H-bonds of water. Its conceptual simplicity renders the model attractive in studies where complex behaviors emerge from H-bond interactions in water, e.g., the hydrophobic effect. A molecular dynamics (MD) implementation of the model is non-trivial, and we outline here the mathematical framework of its force field. Useful routines written in modern Fortran are also provided. This open source code is free and can easily be modified to account for different physical contexts. The provided code allows both serial and MPI-parallelized execution.
Program summary:
Program title: CASHEW (Coarse Approach Simulator for Hydrogen-bonding Effects in Water)
Catalogue identifier: AEKM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKM_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 20 501
No. of bytes in distributed program, including test data, etc.: 551 044
Distribution format: tar.gz
Programming language: Fortran 90
Computer: Program has been tested on desktop workstations and a Cray XT4/XT5 supercomputer.
Operating system: Linux, Unix, OS X
Has the code been vectorized or parallelized?: The code has been parallelized using MPI.
RAM: Depends on size of system; about 5 MB for 1500 molecules.
Classification: 7.7
External routines: A random number generator, Mersenne Twister (http://www.math.sci.hiroshima-u.ac.jp/m-mat/MT/VERSIONS/FORTRAN/mt95.f90), is used. A copy of the code is included in the distribution.
Nature of problem: Molecular dynamics simulation of a new geometric water model.
Solution method: New force field for water molecules, velocity-Verlet integration, representation of molecules as rigid particles with rotations described using quaternion algebra.
Restrictions: Memory and CPU time limit the size of simulations.
Additional comments: Software web site: https://gitorious.org/cashew/
Running time: Depends on the size of the system. The sample tests provided only take a few seconds.
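For readers unfamiliar with the integrator named under "Solution method", here is a generic velocity-Verlet step for the translational degrees of freedom; the quaternion-based rotational update of CASHEW's rigid particles is omitted, and this is a textbook sketch rather than code from the CASHEW distribution.

import numpy as np

def velocity_verlet_step(pos, vel, force, mass, dt, force_fn):
    """One velocity-Verlet step: half-kick, drift, recompute forces, half-kick."""
    vel_half = vel + 0.5 * dt * force / mass   # half-kick
    pos_new = pos + dt * vel_half              # drift
    force_new = force_fn(pos_new)              # forces at the new positions
    vel_new = vel_half + 0.5 * dt * force_new / mass
    return pos_new, vel_new, force_new

# Toy example: a single particle in a harmonic well (not the MB force field).
f = lambda x: -x
x, v, F = np.array([1.0]), np.array([0.0]), f(np.array([1.0]))
for _ in range(10):
    x, v, F = velocity_verlet_step(x, v, F, 1.0, 0.1, f)
print(x, v)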
NASA Astrophysics Data System (ADS)
Fehenberger, Tobias
2018-02-01
This paper studies probabilistic shaping in a multi-span wavelength-division multiplexing optical fiber system with 64-ary quadrature amplitude modulation (QAM) input. In split-step fiber simulations and via an enhanced Gaussian noise model, three figures of merit are investigated: signal-to-noise ratio (SNR), achievable information rate (AIR) for capacity-achieving forward error correction (FEC) with bit-metric decoding, and the information rate achieved with low-density parity-check (LDPC) FEC. For the considered system parameters and different shaped input distributions, shaping is found to decrease the SNR by 0.3 dB yet simultaneously increase the AIR by up to 0.4 bit per 4D symbol. The information rates of LDPC-coded modulation with shaped 64QAM input are improved by up to 0.74 bit per 4D symbol, which is larger than the shaping gain observed in the AIRs. This increase is attributed to the reduced coding gap of the higher-rate code that is used for decoding the nonuniform QAM input.
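Shaped QAM inputs of the kind studied here are commonly drawn from the Maxwell-Boltzmann family; assuming that family (the abstract does not spell out the exact distributions used), the sketch below shows the entropy-versus-energy trade-off that probabilistic shaping exploits.

import numpy as np

# 64QAM constellation (unnormalized): an 8x8 grid of odd integers.
pam = np.arange(-7, 8, 2)
X = np.array([a + 1j * b for a in pam for b in pam])

def mb_distribution(lam):
    """Maxwell-Boltzmann input distribution p(x) ~ exp(-lam*|x|^2)."""
    w = np.exp(-lam * np.abs(X) ** 2)
    return w / w.sum()

for lam in (0.0, 0.02, 0.05):        # lam = 0 recovers uniform 64QAM
    p = mb_distribution(lam)
    H = -np.sum(p * np.log2(p))      # source entropy in bits per symbol
    E = np.sum(p * np.abs(X) ** 2)   # mean symbol energy
    print(f"lam={lam:.2f}  entropy={H:.2f} bit/sym  energy={E:.1f}")

Lowering the mean energy at a given entropy (or raising the entropy at a given energy) is what buys the AIR gain quantified above.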
Extended Plate and Beam Wall System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunderson, Patti
Home Innovation Research Labs studied the extended plate and beam wall (EP&B) system during a two-year period from mid-2015 to mid-2017 to determine the wall’s structural performance, moisture durability, constructability, and cost-effectiveness for use as a high-R enclosure system for energy code minimum and above-code performance in climate zones 4–8.
Perinetti, Giuseppe; Bianchet, Alberto; Franchi, Lorenzo; Contardo, Luca
2017-05-01
To date, little information is available regarding individual cervical vertebral maturation (CVM) morphologic changes. Moreover, contrasting results regarding the repeatability of the CVM method call for the use of objective and transparent reporting procedures. In this study, we used a rigorous, objective morphometric CVM staging system, called the "CVM code", applied to a 6-year longitudinal circumpubertal analysis of individual CVM morphologic changes, to identify cases outside the reported norms and to analyze individual maturation processes. From the files of the Oregon Growth Study, 32 subjects (17 boys, 15 girls) with 6 annual lateral cephalograms taken from 10 to 16 years of age were included, for a total of 221 recordings. A customized cephalometric analysis was used, and each recording was converted into a CVM code according to the concavities of cervical vertebrae C2 through C4 and the shapes of C3 and C4. The retrieved CVM codes, either falling within the reported norms (regular cases) or not (exception cases), were also converted into CVM stages. Overall, 31 exception cases (14%) were seen, most of them accounting for pubertal CVM stage 4. The overall durations of CVM stages 2 to 4 were about 1 year, even though only 4 subjects had regular annual durations of CVM stages 2 to 5. Whereas the overall CVM changes are consistent with previous reports, intersubject variability must be considered when dealing with individual treatment timing. Future research on CVM may take advantage of the CVM code system. Copyright © 2017 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
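The spirit of such an objective staging code can be sketched as a deterministic encoding of per-vertebra features. The symbols below are our invention for illustration; the published CVM code defines its own convention.

def cvm_code(conc_c2, conc_c3, conc_c4, shape_c3, shape_c4):
    """Build a compact code from the presence of a concavity at the lower
    border of C2-C4 and the body shapes of C3 and C4. Symbols here are
    illustrative: 1/0 for concavity present/absent; T/H/S/V for trapezoidal,
    horizontal rectangular, squared, vertical rectangular."""
    conc = "".join("1" if c else "0" for c in (conc_c2, conc_c3, conc_c4))
    return conc + shape_c3 + shape_c4

print(cvm_code(True, True, False, "T", "T"))  # e.g. '110TT'

Because the code is computed from measured features rather than assigned by eye, two observers working from the same cephalometric measurements must produce the same string, which is the transparency argument made above.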
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nutaro, James J
The purpose of this model was to facilitate the design of a control system that uses fine-grained control of residential and small commercial HVAC loads to counterbalance voltage swings caused by intermittent solar power sources (e.g., rooftop panels) installed in a distribution circuit. Included are the source code and a pre-compiled 64-bit DLL for adding building HVAC loads to an OpenDSS distribution circuit. As written, the Makefile assumes you are using the Microsoft C++ development tools.
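The kind of load model such a library exposes can be approximated by a first-order thermal model with a thermostat deadband. The toy sketch below (all parameter values invented) indicates the shape of such a model, not the released implementation.

def hvac_step(T_in, outdoor, on, dt_h=1/60, deadband=(21.0, 23.0),
              ua=0.3, cap=2.0, cool_kw=3.5):
    """One time step of a toy building/HVAC model: indoor temperature
    relaxes toward the outdoor temperature and a thermostat with a
    deadband switches the cooling load on and off."""
    low, high = deadband
    if T_in > high:
        on = True
    elif T_in < low:
        on = False
    cooling = cool_kw if on else 0.0
    dT = (ua * (outdoor - T_in) - cooling) / cap * dt_h
    return T_in + dT, on, cooling  # `cooling` is the load seen by the feeder

T, on = 22.0, False
for _ in range(180):                  # three hours at one-minute steps
    T, on, load = hvac_step(T, 32.0, on)
print(round(T, 2), on, load)

Staggering when a population of such loads switches on and off is the fine-grained control lever the model is meant to support.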
Robust Airborne Networking Extensions (RANGE)
2008-02-01
The IMUNES [13] project provides entire network stack virtualization and topology control inside a single FreeBSD machine. The emulated topology... "Multicast versus broadcast in a MANET," in ADHOC-NOW, 2004, pp. 14-27. [9] J. Mukherjee and R. Atwood, "Rendezvous point relocation in protocol independent... A computer with an Ethernet connection, or a Linux virtual machine on some other (e.g., Windows) operating system, should work. 2.1 Patching the source code
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin
2015-09-01
In this paper, a Golay complementary training sequence (TS)-based symbol synchronization scheme is proposed and experimentally demonstrated in a multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband over fiber (UWBoF) system with a variable-rate low-density parity-check (LDPC) code. Meanwhile, the coding gain and spectral efficiency of the variable-rate LDPC-coded MB-OFDM UWBoF system are investigated. By utilizing the non-periodic autocorrelation property of the Golay complementary pair, the start point of the LDPC-coded MB-OFDM UWB signal can be estimated accurately. After 100 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1×10^-3, the experimental results show that short-block-length 64QAM-LDPC coding provides coding gains of 4.5 dB, 3.8 dB and 2.9 dB for code rates of 62.5%, 75% and 87.5%, respectively.
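The synchronization idea rests on the complementary-pair property: the aperiodic autocorrelations of the two sequences sum to a delta function. The sketch below assumes a TS laid out as the two length-32 sequences back to back; the paper's exact TS format may differ.

import numpy as np

def golay_pair(m):
    """Recursively build a Golay complementary pair of length 2**m."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(5)                 # a length-32 pair
acf = lambda x: np.correlate(x, x, "full")
print(acf(a) + acf(b))               # 2N at zero lag, exactly 0 elsewhere

# Start-of-frame detection: correlate the received stream with each sequence.
ts = np.concatenate([a, b])          # assumed training-sequence layout
rng = np.random.default_rng(1)
frame = np.concatenate([0.1 * rng.standard_normal(40), ts,
                        0.1 * rng.standard_normal(60)])
corr = np.correlate(frame, a, "valid") ** 2 + np.correlate(frame, b, "valid") ** 2
# The two peaks sit len(a) samples apart; combining them sharpens the estimate.
start = int(np.argmax(corr[:-len(a)] + corr[len(a):]))
print(start)                          # -> 40, the true start point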
ERIC Educational Resources Information Center
Karabenick, Stuart A.; Brackney, Barbara E.; Dansky, Jeffrey; Schippers, John; Smith, Stephanie; Stephens, Sarah; Hicks, Brian
This study examined relationships between college students' (n=94) recall of important school-related events and the students' current academic engagement. Autobiographical narratives were coded for time period (e.g., middle school), theme (e.g., achievement), context (e.g., academics, sports), and the presence of goal-directed content (e.g.,…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ivanov, Oleg P.; Semin, Ilya A.; Potapov, Victor N.
Gamma-ray imaging is the most important way to identify unknown gamma-ray emitting objects in decommissioning, security, and accident recovery. Over the past two decades, systems for producing gamma images under these conditions evolved into more or less portable devices, and in recent years they have become hand-held devices. This is very important, especially in emergency situations and in measurements made for safety reasons. We describe the first integrated hand-held instrument for emergency and security applications. The device is based on coded aperture image formation, the position-sensitive gamma-ray (X-ray) detector Medipix2 (detectors produced by X-ray Imaging Europe) and a tablet computer. The development was aimed at creating a very low weight system with high angular resolution. We present some sample gamma-ray images taken with the camera. The main estimated parameters of the system are the following. The field of view of the video channel is ∼ 490 deg. The field of view of the gamma channel is ∼ 300 deg. The sensitivity of the system with a hexagonal mask, for a Cs-137 source (Eg = 662 keV), is, in units of dose, D ∼ 100 mR. This is less than an order of magnitude worse than for the heavy, non-hand-held systems (e.g., the Cartogam gamma camera by Canberra). The angular resolution of the gamma channel for Cs-137 sources (Eg = 662 keV) is about 1.20 deg. (authors)
Current Insights into Long Non-Coding RNAs (LncRNAs) in Prostate Cancer
Smolle, Maria A.; Bauernhofer, Thomas; Pummer, Karl; Calin, George A.; Pichler, Martin
2017-01-01
The importance of long non-coding RNAs (lncRNAs) in the pathogenesis of various malignancies has been uncovered over the last few years. Their dysregulation often contributes to or is a result of tumour progression. In prostate cancer, the most common malignancy in men, lncRNAs can promote castration resistance, cell proliferation, invasion, and metastatic spread. Expression patterns of lncRNAs often change during tumour progression; their expression levels may constantly rise (e.g., HOX transcript antisense RNA, HOTAIR) or steadily decrease (e.g., downregulated RNA in cancer, DRAIC). In prostate cancer, lncRNAs likewise have diagnostic (e.g., prostate cancer antigen 3, PCA3), prognostic (e.g., second chromosome locus associated with prostate-1, SChLAP1), and predictive (e.g., metastasis-associated lung adenocarcinoma transcript-1, MALAT-1) functions. Considering their dynamic role in prostate cancer, lncRNAs may also serve as therapeutic targets, helping to prevent the development of castration resistance, maintain stable disease, and inhibit metastatic spread. PMID:28241429
Proposed scheme for parallel 10Gb/s VSR system and its verilog HDL realization
NASA Astrophysics Data System (ADS)
Zhou, Yi; Chen, Hongda; Zuo, Chao; Jia, Jiuchun; Shen, Rongxuan; Chen, Xiongbin
2005-02-01
This paper proposes a novel scheme for a 10 Gb/s parallel Very Short Reach (VSR) optical communication system. The optimized scheme properly manages the SDH/SONET redundant bytes and adjusts the positions of the error-detecting and error-correcting bytes. Compared with the OIF-VSR4-01.0 proposal, the scheme adds a code process module. The SDH/SONET frames in the transmit direction are processed as follows: (1) The Framer-Serdes Interface (FSI) receives the 16×622.08 Mb/s STM-64 frame. (2) The STM-64 frame is byte-wise striped across 12 channels; all channels are data channels. During this process, the parity bytes and CRC bytes are generated in a similar way to OIF-VSR4-01.0 and stored in the code process module. (3) The code process module regularly conveys the additional parity bytes and CRC bytes to all 12 data channels. (4) After 8B/10B coding, the 12 channels are transmitted to the parallel VCSEL array. The receive process is approximately the reverse of the transmit process. By applying this scheme to a 10 Gb/s VSR system, the frame size in the VSR system is reduced from 15552×12 bytes to 14040×12 bytes, and the system redundancy is reduced noticeably.
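Step (2), the byte-wise striping of the frame across the 12 channels, can be sketched as follows (parity and CRC generation and the 8B/10B coding are omitted):

def stripe(frame, n_channels=12):
    """Byte-wise striping of an SDH/SONET frame across parallel channels."""
    return [frame[i::n_channels] for i in range(n_channels)]

def unstripe(channels):
    """Receive-side inverse: re-interleave the lanes byte by byte."""
    out = bytearray()
    for k in range(len(channels) * len(channels[0])):
        out.append(channels[k % len(channels)][k // len(channels)])
    return bytes(out)

frame = bytes(range(48))            # toy stand-in for an STM-64 frame
lanes = stripe(frame)
assert unstripe(lanes) == frame     # the receive path restores byte order
print([lane.hex() for lane in lanes[:3]])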
Why do you fear the bogeyman? An embodied predictive coding model of perceptual inference.
Pezzulo, Giovanni
2014-09-01
Why are we scared by nonperceptual entities such as the bogeyman, and why does the bogeyman only visit us during the night? Why does hearing a window squeaking in the night suggest to us the unlikely idea of a thief or a killer? And why is this more likely to happen after watching a horror movie? To answer these and similar questions, we need to put mind and body together again and consider the embodied nature of perceptual and cognitive inference. Predictive coding provides a general framework for perceptual inference; I propose to extend it by including interoceptive and bodily information. The resulting embodied predictive coding inference permits one to compare alternative hypotheses (e.g., is the sound I hear generated by a thief or the wind?) using the same inferential scheme as in predictive coding, but using both sensory and interoceptive information as evidence, rather than just considering sensory events. If you hear a window squeaking in the night after watching a horror movie, you may consider plausible a very unlikely hypothesis (e.g., a thief, or even the bogeyman) because it explains both what you sense (e.g., the window squeaking in the night) and how you feel (e.g., your high heart rate). The good news is that the inference that I propose is fully rational and gives minds and bodies equal dignity. The bad news is that it also gives an embodiment to the bogeyman, and a reason to fear it.
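The proposal amounts to multiplying likelihoods from two evidence streams, sensory and interoceptive, in an otherwise ordinary Bayesian comparison. A toy sketch with invented probability values:

# Toy Bayesian version of the example above, treating the sensory and the
# interoceptive signals as conditionally independent evidence.
priors = {"wind": 0.95, "thief": 0.05}
p_squeak = {"wind": 0.4, "thief": 0.7}         # P(window squeak | hypothesis)
p_racing_heart = {"wind": 0.05, "thief": 0.9}  # P(high heart rate | hypothesis)

def posterior(use_interoception):
    scores = {}
    for h, prior in priors.items():
        like = p_squeak[h] * (p_racing_heart[h] if use_interoception else 1.0)
        scores[h] = prior * like
    z = sum(scores.values())
    return {h: s / z for h, s in scores.items()}

print(posterior(False))  # sensory evidence alone: 'thief' stays unlikely
print(posterior(True))   # adding bodily evidence inflates P(thief)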
Arabic Natural Language Processing System Code Library
2014-06-01
Code Compilation; Training Instructions; Applying the System to New Examples; License; History; Important Note; Papers to... a slightly different English dependency scheme and contained a variety of improvements. However, the PropBank-style SRL module was not maintained... than those in the http://sourceforge.net/projects/miacp/ release. Important Note: This release contains a variety of bug fixes and other generally
Overview of codes and tools for nuclear engineering education
NASA Astrophysics Data System (ADS)
Yakovlev, D.; Pryakhin, A.; Medvedeva, L.
2017-01-01
Recent world trends in nuclear education have developed in the direction of social education, networking, virtual tools and codes. MEPhI, a global leader in the world education market, implements new advanced technologies for distance and online learning and for student research work. MEPhI has produced special codes, tools and web resources based on an internet platform to support education in the field of nuclear technology. At the same time, MEPhI actively uses codes and tools from third parties. Several types of tools are considered: calculation codes, nuclear data visualization tools, virtual labs, PC-based educational simulators for nuclear power plants (NPP), CLP4NET, education web platforms, and distance courses (MOOCs and controlled and managed content systems). The university pays special attention to integrated products such as CLP4NET, which is not a learning course, but serves to automate the process of learning through distance technologies. CLP4NET organizes all tools in the same information space. Up to now, MEPhI has achieved significant results in the field of distance education and online system implementation.
Outbreaks of Illness Associated with Recreational Water--United States, 2011-2012.
Hlavsa, Michele C; Roberts, Virginia A; Kahler, Amy M; Hilborn, Elizabeth D; Mecher, Taryn R; Beach, Michael J; Wade, Timothy J; Yoder, Jonathan S
2015-06-26
Outbreaks of illness associated with recreational water use result from exposure to chemicals or infectious pathogens in recreational water venues that are treated (e.g., pools and hot tubs or spas) or untreated (e.g., lakes and oceans). For 2011-2012, the most recent years for which finalized data were available, public health officials from 32 states and Puerto Rico reported 90 recreational water-associated outbreaks to CDC's Waterborne Disease and Outbreak Surveillance System (WBDOSS) via the National Outbreak Reporting System (NORS). The 90 outbreaks resulted in at least 1,788 cases, 95 hospitalizations, and one death. Among 69 (77%) outbreaks associated with treated recreational water, 36 (52%) were caused by Cryptosporidium. Among 21 (23%) outbreaks associated with untreated recreational water, seven (33%) were caused by Escherichia coli (E. coli O157:H7 or E. coli O111). Guidance, such as the Model Aquatic Health Code (MAHC), for preventing and controlling recreational water-associated outbreaks can be optimized when informed by national outbreak and laboratory (e.g., molecular typing of Cryptosporidium) data.
Preliminary results of 3D dose calculations with MCNP-4B code from a SPECT image.
Rodríguez Gual, M; Lima, F F; Sospedra Alfonso, R; González González, J; Calderón Marín, C
2004-01-01
Interface software was developed to generate the input file to run the Monte Carlo MCNP-4B code from a medical image in Interfile format version 3.3. The software was tested using a spherical phantom of tomography slices with a known cumulated activity distribution in Interfile format, generated with the IMAGAMMA medical image processing system. The 3D dose calculation obtained with the Monte Carlo MCNP-4B code was compared with the voxel S factor method. The results show a relative error between the two methods of less than 1%.
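The final comparison reduces to a voxel-wise relative difference between two 3D dose grids. A sketch, with synthetic grids standing in for the MCNP-4B and voxel S factor results:

import numpy as np

def relative_error(dose_a, dose_b, cutoff=0.01):
    """Voxel-wise relative difference between two dose grids, evaluated
    only where the dose is non-negligible."""
    mask = dose_b > cutoff * dose_b.max()   # ignore near-zero voxels
    return np.abs(dose_a[mask] - dose_b[mask]) / dose_b[mask]

# Toy 3D grids standing in for the two calculations on the spherical phantom.
rng = np.random.default_rng(0)
dose_s = rng.random((16, 16, 16)) + 0.5             # voxel S factor stand-in
dose_mc = dose_s * (1.0 + rng.normal(0.0, 0.002, dose_s.shape))
print(f"max relative error: {relative_error(dose_mc, dose_s).max():.2%}")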
Mitra, Monika; Smith, Lauren D; Smeltzer, Suzanne C; Long-Bellil, Linda M; Sammet Moring, Nechama; Iezzoni, Lisa I
2017-07-01
Women with physical disabilities are known to experience disparities in maternity care access and quality, and communication gaps with maternity care providers; however, there is little research exploring the maternity care experiences of women with physical disabilities from the perspective of their health care practitioners. This study explored health care practitioners' experiences and needs around providing perinatal care to women with physical disabilities in order to identify potential drivers of these disparities. We conducted semi-structured telephone interviews with 14 health care practitioners in the United States who provide maternity care to women with physical disabilities, identified through affiliation with disability-related organizations, publications and snowball sampling. Descriptive coding and content analysis techniques were used to develop an iterative codebook related to barriers to caring for this population. Public health theory regarding levels of barriers was applied to generate broad barrier categories, which were then analyzed using content analysis. Participant-reported barriers to providing optimal maternity care to women with physical disabilities were grouped into four levels: practitioner level (e.g., unwillingness to provide care), clinical practice level (e.g., lack of accessible office equipment like adjustable exam tables), system level (e.g., time limits, reimbursement policies), and barriers relating to lack of scientific evidence (e.g., lack of disability-specific clinical data). Participants endorsed barriers to providing optimal maternity care to women with physical disabilities. Our findings highlight the need for maternity care practice guidelines for women with physical disabilities, and for training and education regarding the maternity care needs of this population. Copyright © 2016 Elsevier Inc. All rights reserved.
Physicians' perceived roles, as well as barriers, towards caring for women sexual assault survivors
Amin, Priyanka; Buranosky, Raquel; Chang, Judy C.
2016-01-01
Background: Sexual assault (SA) affects about 40% of women in the US and has many mental and physical health sequelae. Physicians often do not address SA with patients, although SA survivors describe a desire to talk to physicians to obtain additional help. Little information exists on how providers perceive their roles in caring for women SA survivors and what barriers they face in providing this care. Methods: We performed a qualitative study using semi-structured one-on-one interviews with sixteen faculty physicians from five specialties: obstetrics and gynecology (four), internal medicine (four), family medicine (one), emergency medicine (three), and psychiatry (four). Interviews were conducted between July 2011 and July 2012, transcribed verbatim, and coded using a constant comparative approach. Once a final coding scheme was applied to all transcripts, we identified patterns and themes related to perceived roles and barriers to caring for SA survivors. Results: Physicians described two main categories of roles: clinical tasks (e.g. testing and treating for sexually transmitted infections, managing associated mental health sequelae) and interpersonal roles (e.g. providing support, acting as patient advocate). Physician barriers fell into three main categories: (1) internal barriers (e.g. discomfort with the topic of SA); (2) physician-patient communication; and (3) system obstacles (e.g. competing priorities for time). Conclusions: Although physicians describe key roles in caring for SA survivors, several barriers hinder their ability to fulfill these roles. Training interventions are needed to reduce these barriers and ultimately improve clinical care for SA survivors. PMID:27863981
Studies of Heat Transfer in Complex Internal Flows.
1982-01-01
48 CFR 52.204-7 - System for Award Management.
Code of Federal Regulations, 2013 CFR
2013-10-01
... for Award Management (JUL 2013) (a) Definitions. As used in this provision— Data Universal Numbering... information, including the DUNS number or the DUNS+4 number, the Commercial and Government Entity (CAGE) code... Zip Code. (iv) Company Mailing Address, City, State and Zip Code (if separate from physical). (v...
48 CFR 52.204-7 - System for Award Management.
Code of Federal Regulations, 2014 CFR
2014-10-01
... for Award Management (JUL 2013) (a) Definitions. As used in this provision— Data Universal Numbering... information, including the DUNS number or the DUNS+4 number, the Commercial and Government Entity (CAGE) code... Zip Code. (iv) Company Mailing Address, City, State and Zip Code (if separate from physical). (v...
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of the high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of a code constructed by this method has no cycles of length 4, which ensures that the resulting code has good distance properties. Simulation results show that at a bit error rate (BER) of 10^-6, in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is 0.2 dB and 0.4 dB higher, respectively, than those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
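The construction itself is not reproduced in this record; as a hedged illustration of the general QC-LDPC idea, the sketch below assembles a parity-check matrix from circulant permutation matrices whose shifts come from a toy exponent matrix (i*j mod p with p prime, a standard array-code choice that avoids length-4 cycles). The dimensions are illustrative, not those of the paper's (3780, 3540) code.

    import numpy as np

    def circulant_permutation(p, shift):
        # p x p identity matrix with its columns cyclically shifted
        return np.roll(np.eye(p, dtype=np.uint8), shift, axis=1)

    def qc_ldpc_parity_matrix(shifts, p):
        # Tile circulant blocks according to the exponent matrix
        rows, cols = len(shifts), len(shifts[0])
        H = np.zeros((rows * p, cols * p), dtype=np.uint8)
        for i in range(rows):
            for j in range(cols):
                H[i*p:(i+1)*p, j*p:(j+1)*p] = \
                    circulant_permutation(p, shifts[i][j])
        return H

    p = 7                                          # toy circulant size (prime)
    shifts = [[(i * j) % p for j in range(4)] for i in range(2)]
    H = qc_ldpc_parity_matrix(shifts, p)
    print(H.shape)                                 # (14, 28) toy parity-check matrix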
NASA Astrophysics Data System (ADS)
Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik
2015-04-01
The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends that have been experienced in recent decades and are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below) that disturb the thermal regimes of permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, and this lack can be partly attributed to the difficulty of verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on common test cases and/or against controlled experiments. Such inter-code comparisons can drive discussions on how to improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones. They range from simpler, purely thermal cases (benchmark T1) to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Some experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) serves as an interaction platform for the participants and hosts the test case database at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented. We will mainly focus on the inter-comparison of participant results for the coupled cases (TH1, TH2 & TH3). Further perspectives of the exercise will also be presented. Extensions to more complex physical conditions (e.g. unsaturated conditions and geometrical deformations) are contemplated. In addition, 1D vertical cases of interest to the climate modeling community will be proposed. Keywords: Permafrost; Numerical modeling; River-soil interaction; Arctic systems; soil freeze-thaw
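As a flavor of the 1D analytical targets mentioned above (Neumann, Lunardini), the sketch below solves the one-phase Stefan problem, a simplified special case in which the phase-change front advances as X(t) = 2*lambda*sqrt(alpha*t), with lambda fixed by a transcendental equation. The material parameters are illustrative, not the benchmark's.

    import numpy as np
    from scipy.optimize import brentq
    from scipy.special import erf

    alpha = 1.0e-6     # thermal diffusivity, m^2/s (illustrative)
    stefan = 0.25      # Stefan number c_p * dT / L (illustrative)

    # Front parameter: lam * exp(lam^2) * erf(lam) = St / sqrt(pi)
    f = lambda lam: lam * np.exp(lam**2) * erf(lam) - stefan / np.sqrt(np.pi)
    lam = brentq(f, 1e-9, 5.0)

    def front_position(t_seconds):
        # Phase-change front X(t) = 2 * lam * sqrt(alpha * t)
        return 2.0 * lam * np.sqrt(alpha * t_seconds)

    for days in (1, 10, 100):
        print(f"{days:4d} d : X = {front_position(days * 86400.0):.4f} m")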
Replica Exchange Molecular Dynamics in the Age of Heterogeneous Architectures
NASA Astrophysics Data System (ADS)
Roitberg, Adrian
2014-03-01
The rise of GPU-based codes has allowed MD to reach timescales only dreamed of five years ago. Even within this new paradigm there is still a need for advanced sampling techniques. Modern supercomputers (e.g. Blue Waters, Titan, Keeneland) have made available to users a significant number of GPUs and CPUs, which in turn translates into amazing opportunities for dream calculations. Replica-exchange based methods can optimally use this combination of codes and architectures to explore conformational variability in large systems. I will show our recent work in porting the program Amber to GPUs, and the support for replica exchange methods, where the replicated dimension can be temperature, pH, Hamiltonian, umbrella windows, or combinations of those schemes.
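The exchange step underlying all of these replicated dimensions is a Metropolis test; a minimal sketch for the temperature dimension follows. The acceptance rule is standard temperature-REMD, while the ladder and energies are synthetic stand-ins, not Amber output.

    import numpy as np

    KB = 0.0019872041  # Boltzmann constant, kcal/(mol K)

    def swap_accepted(energy_i, temp_i, energy_j, temp_j, rng):
        # Accept with probability min(1, exp[(beta_i - beta_j)(E_i - E_j)])
        beta_i, beta_j = 1.0 / (KB * temp_i), 1.0 / (KB * temp_j)
        delta = (beta_i - beta_j) * (energy_i - energy_j)
        return delta >= 0 or rng.random() < np.exp(delta)

    # Toy exchange sweep over a ladder of 8 temperatures
    rng = np.random.default_rng(1)
    temps = np.geomspace(300.0, 450.0, 8)                 # K
    energies = -5000.0 + 20.0 * rng.standard_normal(8)    # kcal/mol, synthetic
    for i in range(0, 7, 2):                              # even-pair attempts
        if swap_accepted(energies[i], temps[i], energies[i+1], temps[i+1], rng):
            # Swapping energies stands in for swapping configurations
            energies[i], energies[i+1] = energies[i+1], energies[i]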
Accelerating Climate Simulations Through Hybrid Computing
NASA Technical Reports Server (NTRS)
Zhou, Shujia; Sinno, Scott; Cruz, Carlos; Purcell, Mark
2009-01-01
Unconventional multi-core processors (e.g., IBM Cell B/E and NVIDIA GPUs) have emerged as accelerators in climate simulation. However, climate models typically run on parallel computers with conventional processors (e.g., Intel and AMD) using MPI. Connecting accelerators to this architecture efficiently and easily becomes a critical issue. When using MPI for the connection, we identified two challenges: (1) an identical MPI implementation is required on both systems; and (2) existing MPI code must be modified to accommodate the accelerators. In response, we have extended and deployed IBM Dynamic Application Virtualization (DAV) in a hybrid computing prototype system (one blade with two Intel quad-core processors and two IBM QS22 Cell blades, connected with InfiniBand), allowing compute-intensive functions to be seamlessly offloaded to remote, heterogeneous accelerators in a scalable, load-balanced manner. Currently, a climate solar radiation model running with multiple MPI processes has been offloaded to multiple Cell blades with approximately 10% network overhead.
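DAV itself is an IBM product whose API is not described in this record; purely to illustrate the plain-MPI offload pattern that the abstract contrasts it with, here is a minimal mpi4py master/worker sketch (the kernel and all names are hypothetical). Run with, e.g., mpiexec -n 2 python offload.py.

    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    def solar_radiation_kernel(columns):
        # Stand-in for the compute-intensive function being offloaded
        return np.tanh(columns).sum(axis=1)

    if rank == 0:
        # Host side: ship work to the accelerator rank, then collect
        work = np.random.rand(1024, 64)
        comm.send(work, dest=1, tag=11)
        result = comm.recv(source=1, tag=22)
        print("offloaded result shape:", result.shape)
    elif rank == 1:
        # Accelerator side: receive, compute, return
        work = comm.recv(source=0, tag=11)
        comm.send(solar_radiation_kernel(work), dest=0, tag=22)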
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
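The mode decision described above can be sketched with a toy Lagrangian cost J = D + lambda*R, choosing between intra mode (quantize the descriptor directly) and inter mode (quantize the residual against the matched descriptor from the previous frame). The uniform quantizer, rate proxy, and parameters below are illustrative stand-ins for the paper's actual coder.

    import numpy as np

    def quantize(x, step):
        return np.round(x / step) * step

    def rd_cost(vector, step, lam):
        # J = D + lambda * R, with rate approximated by the number
        # of nonzero quantized coefficients (a crude proxy)
        q = quantize(vector, step)
        distortion = np.sum((vector - q) ** 2)
        rate = np.count_nonzero(q)
        return distortion + lam * rate, q

    def encode_descriptor(current, previous, step=0.05, lam=0.1):
        # Choose intra vs inter mode for one local feature descriptor
        j_intra, q_intra = rd_cost(current, step, lam)
        j_inter, q_resid = rd_cost(current - previous, step, lam)
        if j_inter < j_intra:
            return "inter", previous + q_resid
        return "intra", q_intra

    rng = np.random.default_rng(2)
    prev = rng.random(128)                          # a SIFT-like descriptor
    curr = prev + 0.01 * rng.standard_normal(128)   # temporally redundant
    mode, recon = encode_descriptor(curr, prev)
    print(mode, float(np.abs(curr - recon).max()))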
1983-09-01
GENERAL ELECTROMAGNETIC MODEL FOR THE ANALYSIS OF COMPLEX SYSTEMS (GEMACS) Computer Code Documentation (Version 3). The BDM Corporation. Final Technical Report, February 1981 - July 1983. METHOD: The electric field at a segment observation point due to source patch j is obtained from the currents in the t1 and t2 directions on the source patch.
HitWalker2: visual analytics for precision medicine and beyond.
Bottomly, Daniel; McWeeney, Shannon K; Wilmot, Beth
2016-04-15
The lack of visualization frameworks to guide interpretation and facilitate discovery is a potential bottleneck for precision medicine, systems genetics and other studies. To address this, we have developed an interactive, reproducible, web-based prioritization approach that builds on our earlier work. HitWalker2 is highly flexible and can incorporate many data types and prioritization methods based upon available data and desired questions, allowing it to be used in a diverse range of studies such as cancer, infectious disease and psychiatric disorders. Source code is freely available at https://github.com/biodev/HitWalker2 and implemented using Python/Django, Neo4j and Javascript (D3.js and jQuery). We support major open source browsers (e.g. Firefox and Chromium/Chrome). wilmotb@ohsu.edu Supplementary data are available at Bioinformatics online. Additional information/instructions are available at https://github.com/biodev/HitWalker2/wiki. © The Author 2015. Published by Oxford University Press.
A study of tungsten spectra using large helical device and compact electron beam ion trap in NIFS
NASA Astrophysics Data System (ADS)
Morita, S.; Dong, C. F.; Goto, M.; Kato, D.; Murakami, I.; Sakaue, H. A.; Hasuo, M.; Koike, F.; Nakamura, N.; Oishi, T.; Sasaki, A.; Wang, E. H.
2013-07-01
Tungsten spectra have been observed from the Large Helical Device (LHD) and the Compact electron Beam Ion Trap (CoBIT) in wavelength ranges from the visible to the EUV. The EUV spectra with unresolved transition arrays (UTA), e.g., the 6g-4f, 5g-4f, 5f-4d and 5p-4d transitions for W24+-W33+, measured from LHD plasmas are compared with those measured from CoBIT with a monoenergetic electron beam (≤ 2 keV). The tungsten spectra from LHD are well analyzed based on the knowledge from CoBIT tungsten spectra. A C-R model code has been developed to explain the UTA spectra in detail. Radial profiles of EUV spectra from highly ionized tungsten ions have been measured and analyzed by an impurity transport simulation code with the ADPAK atomic database code to examine the ionization balance determined by ionization and recombination rate coefficients. As a first trial, analysis of the tungsten density in LHD plasmas is attempted from the radial profile of the Zn-like WXLV (W44+) 4p-4s transition at 60.9 Å based on the emission rate coefficient calculated with the HULLAC code. As a result, a total tungsten ion density of 3.5×10^10 cm^-3 at the plasma center is reasonably obtained. In order to observe spectra from tungsten ions in lower-ionized charge stages, which can give useful information on the tungsten influx in fusion plasmas, the ablation cloud of an impurity pellet is directly measured with visible spectroscopy. Many spectra from neutral and singly ionized tungsten are observed and some of them are identified. A magnetic forbidden line from highly ionized tungsten ions has been examined, and the Cd-like WXXVII (W26+) line at 3893.7 Å is identified as the ground-term fine-structure transition 3H5-3H4 of the 4f^2 configuration. The possibility of alpha particle diagnostics in D-T burning plasmas using the magnetic forbidden line is discussed.
2012-01-01
Background Procedures documented by general practitioners in primary care have not been studied in relation to procedure coding systems. We aimed to describe procedures documented by Swedish general practitioners in electronic patient records and to compare them to the Swedish Classification of Health Interventions (KVÅ) and SNOMED CT. Methods Procedures in 200 record entries were identified, coded, assessed in relation to two procedure coding systems and analysed. Results 417 procedures found in the 200 electronic patient record entries were coded with 36 different Classification of Health Interventions categories and 148 different SNOMED CT concepts. 22.8% of the procedures could not be coded with any Classification of Health Interventions category and 4.3% could not be coded with any SNOMED CT concept. 206 procedure-concept/category pairs were assessed as a complete match in SNOMED CT compared to 10 in the Classification of Health Interventions. Conclusions Procedures documented by general practitioners were present in nearly all electronic patient record entries. Almost all procedures could be coded using SNOMED CT. Classification of Health Interventions covered the procedures to a lesser extent and with a much lower degree of concordance. SNOMED CT is a more flexible terminology system that can be used for different purposes for procedure coding in primary care. PMID:22230095
Computer codes developed and under development at Lewis
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1992-01-01
The objective of this summary is to provide a brief description of: (1) codes developed or under development at LeRC; and (2) the development status of IPACS with some typical early results. The computer codes that have been developed and/or are under development at LeRC are listed in the accompanying charts. This list includes: (1) the code acronym; (2) select physics descriptors; (3) current enhancements; and (4) present (9/91) code status with respect to its availability and documentation. The computer codes list is grouped by related functions such as: (1) composite mechanics; (2) composite structures; (3) integrated and 3-D analysis; (4) structural tailoring; and (5) probabilistic structural analysis. These codes provide a broad computational simulation infrastructure (technology base-readiness) for assessing the structural integrity/durability/reliability of propulsion systems. These codes serve two other very important functions: they provide an effective means of technology transfer; and they constitute a depository of corporate memory.
Transposed Letter Priming with Horizontal and Vertical Text in Japanese and English Readers
ERIC Educational Resources Information Center
Witzel, Naoko; Qiao, Xiaomei; Forster, Kenneth
2011-01-01
It is well established that in masked priming, a target word (e.g., "JUDGE") is primed more effectively by a transposed letter (TL) prime (e.g., "jugde") than by an orthographic control prime (e.g., "junpe"). This is inconsistent with the slot coding schemes used in many models of visual word recognition. Several…
A Cloud Based Framework For Monitoring And Predicting Subsurface System Behaviour
NASA Astrophysics Data System (ADS)
Versteeg, R. J.; Rodzianko, A.; Johnson, D. V.; Soltanian, M. R.; Dwivedi, D.; Dafflon, B.; Tran, A. P.; Versteeg, O. J.
2015-12-01
Subsurface system behavior is driven and controlled by the interplay of physical, chemical, and biological processes which occur at multiple temporal and spatial scales. Capabilities to monitor, understand and predict this behavior in an effective and timely manner are needed both for scientific purposes and for effective subsurface system management. Such capabilities require three elements: models, data, and an enabling cyberinfrastructure which allows users to use these models and data in an effective manner. Under a DOE Office of Science funded STTR award, Subsurface Insights and LBNL have designed and implemented a cloud-based predictive assimilation framework (PAF) which automatically ingests, quality-controls and stores heterogeneous physical and chemical subsurface data and processes these data using different inversion and modeling codes to provide information on the current state and evolution of subsurface systems. PAF is implemented as a modular cloud-based software application with five components: (1) data acquisition, (2) data management, (3) data assimilation and processing, (4) visualization and result delivery and (5) orchestration. Server-side, PAF uses ZF2 (a PHP web application framework) and Python, with both open source (ODM2) and in-house developed data models. Client-side, PAF uses CSS and JS to allow for interactive data visualization and analysis. Client-side modularity (which allows for a responsive interface) is achieved by implementing each core capability of PAF (such as data visualization, user configuration and control, electrical geophysical monitoring and email/SMS alerts on data streams) as a SPA (Single Page Application). One of the recent enhancements is the full integration of a number of flow, mass transport and parameter estimation codes (e.g., MODFLOW, MT3DMS, PHT3D, TOUGH, PFLOTRAN) in this framework. This integration allows for autonomous and user-controlled modeling of hydrological and geochemical processes. In our presentation we will discuss our software architecture and present the results of using these codes, as well as the overall performance of our framework, using hydrological, geochemical and geophysical data from the LBNL SFA2 Rifle field site.
ERIC Educational Resources Information Center
Bowers, Jeffrey S.
2009-01-01
A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts (e.g. "dog"), that is, coded with their own dedicated…
Porting plasma physics simulation codes to modern computing architectures using the
NASA Astrophysics Data System (ADS)
Germaschewski, Kai; Abbott, Stephen
2015-11-01
Available computing power has continued to grow exponentially even after single-core performance saturated in the last decade. The increase has since been driven by more parallelism, both using more cores and having more parallelism in each core, e.g. in GPUs and Intel Xeon Phi. Adapting existing plasma physics codes is challenging, in particular as there is no single programming model that covers current and future architectures. We will introduce the open-source
Schneider, Adam D.; Jamali, Mohsen; Carriot, Jerome; Chacron, Maurice J.
2015-01-01
Efficient processing of incoming sensory input is essential for an organism's survival. A growing body of evidence suggests that sensory systems have developed coding strategies that are constrained by the statistics of the natural environment. Consequently, it is necessary to first characterize neural responses to natural stimuli to uncover the coding strategies used by a given sensory system. Here we report for the first time the statistics of vestibular rotational and translational stimuli experienced by rhesus monkeys during natural (e.g., walking, grooming) behaviors. We find that these stimuli can reach intensities as high as 1500 deg/s and 8 G. Recordings from afferents during naturalistic rotational and linear motion further revealed strongly nonlinear responses in the form of rectification and saturation, which could not be accurately predicted by traditional linear models of vestibular processing. Accordingly, we used linear–nonlinear cascade models and found that these could accurately predict responses to naturalistic stimuli. Finally, we tested whether the statistics of natural vestibular signals constrain the neural coding strategies used by peripheral afferents. We found that both irregular otolith and semicircular canal afferents, because of their higher sensitivities, were more optimized for processing natural vestibular stimuli as compared with their regular counterparts. Our results therefore provide the first evidence supporting the hypothesis that the neural coding strategies used by the vestibular system are matched to the statistics of natural stimuli. PMID:25855169
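As a schematic of the linear-nonlinear cascade models referenced above, the sketch below convolves a stimulus with a linear filter and applies a static nonlinearity with rectification and saturation; the filter, gains, and stimulus are synthetic, not fits to afferent data.

    import numpy as np

    def ln_cascade(stimulus, kernel, baseline=90.0, gain=0.5, r_max=300.0):
        # Linear stage: temporal filtering of the stimulus
        drive = np.convolve(stimulus, kernel, mode="same")
        # Static nonlinearity: rectification (floor at 0 spk/s)
        # and saturation (ceiling at r_max)
        return np.clip(baseline + gain * drive, 0.0, r_max)

    rng = np.random.default_rng(3)
    t = np.arange(0, 2.0, 1e-3)                            # 2 s at 1 kHz
    velocity = 400.0 * np.convolve(rng.standard_normal(t.size),
                                   np.ones(50) / 50, mode="same")  # deg/s
    kernel = np.exp(-np.arange(100) / 20.0)
    kernel /= kernel.sum()
    rate = ln_cascade(velocity, kernel)
    print(f"rate range: {rate.min():.1f} to {rate.max():.1f} spk/s")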
A Spherical Active Coded Aperture for 4π Gamma-ray Imaging
Hellfeld, Daniel; Barton, Paul; Gunter, Donald; ...
2017-09-22
Gamma-ray imaging facilitates the efficient detection, characterization, and localization of compact radioactive sources in cluttered environments. Fieldable detector systems employing active planar coded apertures have demonstrated broad energy sensitivity via both coded aperture and Compton imaging modalities. However, planar configurations suffer from a limited field-of-view, especially in the coded aperture mode. In order to improve upon this limitation, we introduce a novel design by rearranging the detectors into an active coded spherical configuration, resulting in a 4π isotropic field-of-view for both coded aperture and Compton imaging. This work focuses on the low-energy coded aperture modality and the optimization techniques used to determine the optimal number and configuration of 1 cm³ CdZnTe coplanar-grid detectors on a 14 cm diameter sphere with 192 available detector locations.
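The optimization method itself is not detailed in this record; as a hedged starting point for the geometry, the sketch below generates 192 quasi-uniform candidate detector sites on a 14 cm diameter sphere with a Fibonacci lattice (a common quasi-uniform construction, not necessarily the authors' procedure).

    import numpy as np

    def fibonacci_sphere(n_points, radius):
        # Quasi-uniform points on a sphere via the Fibonacci lattice
        golden = np.pi * (3.0 - np.sqrt(5.0))        # golden angle, rad
        k = np.arange(n_points)
        z = 1.0 - 2.0 * (k + 0.5) / n_points         # uniform in cos(theta)
        r_xy = np.sqrt(1.0 - z**2)
        phi = golden * k
        return radius * np.column_stack((r_xy * np.cos(phi),
                                         r_xy * np.sin(phi), z))

    positions = fibonacci_sphere(192, radius=0.07)   # 14 cm diameter sphere
    print(positions.shape)                            # (192, 3) candidate sites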
Evans, Spencer C; Amaro, Christina M; Herbert, Robyn; Blossom, Jennifer B; Roberts, Michael C
2018-01-01
If a doctoral dissertation represents an original investigation that makes a contribution to one's field, then dissertation research could, and arguably should, be disseminated into the scientific literature. However, the extent and nature of dissertation publication remains largely unknown within psychology. The present study investigated the peer-reviewed publication outcomes of psychology dissertation research in the United States. Additionally, we examined publication lag, scientific impact, and variations across subfields. To investigate these questions, we first drew a stratified random cohort sample of 910 psychology Ph.D. dissertations from ProQuest Dissertations & Theses. Next, we conducted comprehensive literature searches for peer-reviewed journal articles derived from these dissertations published 0-7 years thereafter. Published dissertation articles were coded for their bibliographic details, citation rates, and journal impact metrics. Results showed that only one-quarter (25.6% [95% CI: 23.0, 28.4]) of dissertations were ultimately published in peer-reviewed journals, with significant variations across subfields (range: 10.1 to 59.4%). Rates of dissertation publication were lower in professional/applied subfields (e.g., clinical, counseling) compared to research/academic subfields (e.g., experimental, cognitive). When dissertations were published, however, they often appeared in influential journals (e.g., Thomson Reuters Impact Factor M = 2.84 [2.45, 3.23], 5-year Impact Factor M = 3.49 [3.07, 3.90]) and were cited numerous times (Web of Science citations per year M = 3.65 [2.88, 4.42]). Publication typically occurred within 2-3 years after the dissertation year. Overall, these results indicate that the large majority of Ph.D. dissertation research in psychology does not get disseminated into the peer-reviewed literature. The non-publication of dissertation research appears to be a systemic problem affecting both research and training in psychology. Efforts to improve the quality and "publishability" of doctoral dissertation research could benefit psychological science on multiple fronts.
Chronic pain patients' perspectives of medical cannabis.
Piper, Brian J; Beals, Monica L; Abess, Alexander T; Nichols, Stephanie D; Martin, Maurice W; Cobb, Catherine M; DeKeuster, Rebecca M
2017-07-01
Medical cannabis (MC) is used for a variety of conditions including chronic pain. The goal of this report was to provide an in-depth qualitative exploration of patient perspectives on the strengths and limitations of MC. Members of MC dispensaries (N = 984) in New England including two-thirds with a history of chronic pain completed an online survey. In response to "How effective is medical cannabis in treating your symptoms or conditions?," with options of 0% "no relief" to 100% "complete relief," the average was 74.6% ± 0.6. The average amount spent on MC each year was $3064.47 ± 117.60, median = $2320.23, range = $52.14 to $52,140.00. Open-ended responses were coded into themes and subthemes. Analysis of answers to "What is it that you like most about MC?" (N = 2592 responses) identified 10 themes, including health benefits (36.0% of responses, eg, "Changes perception and experience of my chronic pain."), the product (14.2%, eg, "Knowing exactly what strain you are getting"), nonhealth benefits (14.1%), general considerations (10.3%), and medications (7.1%). Responses (N = 1678) to "What is it that you like least about MC?" identified 12 themes, including money (28.4%, eg, "The cost is expensive for someone on a fixed income"), effects (21.7%, eg, "The effects on my lungs"), the view of others (11.4%), access (8.2%), and method of administration (7.1%). These findings provide a patient-centered view on the advantages (eg, efficacy in pain treatment, reduced use of other medications) and disadvantages (eg, economic and stigma) of MC.
Dhakal, Sanjaya; Burwen, Dale R; Polakowski, Laura L; Zinderman, Craig E; Wise, Robert P
2014-03-01
Assess whether Medicare data are useful for monitoring tissue allograft safety and utilization. We used health care claims (billing) data from 2007 for 35 million fee-for-service Medicare beneficiaries, a predominantly elderly population. Using search terms for transplant-related procedures, we generated lists of ICD-9-CM and CPT® codes and assessed the frequency of selected allograft procedures. Step 1 used inpatient data and ICD-9-CM procedure codes. Step 2 added non-institutional provider (e.g., physician) claims, outpatient institutional claims, and CPT codes. We assembled preliminary lists of diagnosis codes for infections after selected allograft procedures. Many ICD-9-CM codes were ambiguous as to whether the procedure involved an allograft. Among 1.3 million persons with a procedure ascertained using the list of ICD-9-CM codes, only 1,886 claims clearly involved an allograft. CPT codes enabled better ascertainment of some allograft procedures (over 17,000 persons had corneal transplants and over 2,700 had allograft skin transplants). For spinal fusion procedures, CPT codes improved specificity for allografts; of nearly 100,000 patients with ICD-9-CM codes for spinal fusions, more than 34,000 had CPT codes indicating allograft use. Monitoring infrequent events (infections) after infrequent exposures (tissue allografts) requires large study populations. A strength of the large Medicare databases is the substantial number of certain allograft procedures. Limitations include lack of clinical detail and donor information. Medicare data can potentially augment passive reporting systems and may be useful for monitoring tissue allograft safety and utilization where codes clearly identify allograft use and coding algorithms can effectively screen for infections.
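The screening step generalizes to a simple code-list filter over claims; the pandas sketch below is illustrative only -- the column names, example codes, and rows are hypothetical, not drawn from Medicare files.

    import pandas as pd

    # Hypothetical claims extract; real Medicare files carry many more fields
    claims = pd.DataFrame({
        "beneficiary_id": [101, 102, 103, 104],
        "cpt": ["65710", "20930", "27130", "65710"],
    })

    # Hypothetical allograft code list built from the search-term review step
    ALLOGRAFT_CPT = {"65710", "20930"}

    allograft_claims = claims[claims["cpt"].isin(ALLOGRAFT_CPT)]
    print(len(allograft_claims), "claims screened as allograft-related")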
Advances in Geologic Disposal System Modeling and Application to Crystalline Rock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mariner, Paul E.; Stein, Emily R.; Frederick, Jennifer M.
The Used Fuel Disposition Campaign (UFDC) of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (OFCT) is conducting research and development (R&D) on geologic disposal of used nuclear fuel (UNF) and high-level nuclear waste (HLW). Two of the high priorities for UFDC disposal R&D are design concept development and disposal system modeling (DOE 2011). These priorities are directly addressed in the UFDC Generic Disposal Systems Analysis (GDSA) work package, which is charged with developing a disposal system modeling and analysis capability for evaluating disposal system performance for nuclear waste in geologic media (e.g., salt, granite, clay, and deep borehole disposal). This report describes specific GDSA activities in fiscal year 2016 (FY 2016) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code and the Dakota uncertainty sampling and propagation code. Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through engineered barriers and natural geologic barriers to the biosphere. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.
NASA Astrophysics Data System (ADS)
Tang, J.; Riley, W. J.
2015-12-01
Previous studies have identified four major sources of predictive uncertainty in modeling land biogeochemical (BGC) processes: (1) imperfect initial conditions (e.g., assumption of preindustrial equilibrium); (2) imperfect boundary conditions (e.g., climate forcing data); (3) parameterization (type I equifinality); and (4) model structure (type II equifinality). As if that were not enough to cause substantial sleep loss in modelers, we propose here a fifth element of uncertainty that results from implementation ambiguity that occurs when the model's mathematical description is translated into computational code. We demonstrate the implementation ambiguity using the example of nitrogen down regulation, a necessary process in modeling carbon-climate feedbacks. We show that, depending on common land BGC model interpretations of the governing equations for mineral nitrogen, there are three different implementations of nitrogen down regulation. We coded these three implementations in the ACME land model (ALM), and explored how they lead to different preindustrial and contemporary land biogeochemical states and fluxes. We also show how this implementation ambiguity can lead to different carbon-climate feedback estimates across the RCP scenarios. We conclude by suggesting how to avoid such implementation ambiguity in ESM BGC models.
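The specific ALM implementations are not reproduced in this record; the toy sketch below shows how a single governing statement ("consumers cannot take up more mineral N than exists") admits three different codings, in the spirit of the implementation ambiguity described above (all functional forms and numbers are illustrative).

    def n_downregulation(demand_plant, demand_microbe, n_mineral, dt):
        # Fraction of potential plant N demand satisfied when total
        # demand over a time step exceeds the mineral N pool.  The
        # three variants apportion the scarce pool differently,
        # mirroring how one equation can be coded three ways.
        total = (demand_plant + demand_microbe) * dt
        if total <= n_mineral:
            return dict(proportional=1.0, plant_first=1.0, microbe_first=1.0)
        avail = n_mineral / dt
        return {
            # 1) scale all consumers by the same factor
            "proportional": avail / (demand_plant + demand_microbe),
            # 2) plants draw first, microbes get the remainder
            "plant_first": min(1.0, avail / demand_plant),
            # 3) microbes immobilize first, plants get the remainder
            "microbe_first": max(0.0, (avail - demand_microbe) / demand_plant),
        }

    print(n_downregulation(2.0, 1.5, 1.0, dt=0.5))  # gN/m2/day, toy numbers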
Coupling of TRAC-PF1/MOD2, Version 5.4.25, with NESTLE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Knepper, P.L.; Hochreiter, L.E.; Ivanov, K.N.
1999-09-01
A three-dimensional (3-D) spatial kinetics capability within a thermal-hydraulics system code provides a more correct description of the core physics during reactor transients that involve significant variations in the neutron flux distribution. Coupled codes provide the ability to forecast safety margins in a best-estimate manner. The behavior of a reactor core and the feedback to the plant dynamics can be accurately simulated. For each time step, coupled codes are capable of resolving system interaction effects on neutronics feedback and are capable of describing local neutronics effects caused by the thermal-hydraulics and neutronics coupling. With the improvements in computational technology, modeling complex reactor behaviors with coupled thermal hydraulics and spatial kinetics is feasible. Previously, reactor analysis codes were limited to either a detailed thermal-hydraulics model with simplified kinetics or multidimensional neutron kinetics with a simplified thermal-hydraulics model. The authors discuss the coupling of the Transient Reactor Analysis Code (TRAC)-PF1/MOD2, Version 5.4.25, with the NESTLE code.
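A generic operator-split coupling of this kind can be sketched as a per-time-step feedback iteration; the code below illustrates the pattern only and is not the actual TRAC-PF1/NESTLE interface (both solver callables are hypothetical stand-ins).

    def advance_coupled_step(th_state, power, dt,
                             th_solver, kinetics_solver,
                             tol=1e-6, max_iters=20):
        # th_solver(state, power, dt) -> new TH state (fuel T, density, ...)
        # kinetics_solver(th_state)   -> new power distribution
        # Iterate the feedback loop until the power field stops changing.
        for _ in range(max_iters):
            th_state = th_solver(th_state, power, dt)
            new_power = kinetics_solver(th_state)
            change = max(abs(p - q) / max(abs(q), 1e-30)
                         for p, q in zip(new_power, power))
            power = new_power
            if change < tol:
                break
        return th_state, power

    # Toy usage: scalars stand in for full 3-D distributions
    th = {"fuel_T": 900.0}
    th_solver = lambda s, p, dt: {"fuel_T": s["fuel_T"] + 0.1 * (p[0] - 100.0)}
    kinetics = lambda s: [100.0 - 0.05 * (s["fuel_T"] - 900.0)]
    th, p = advance_coupled_step(th, [100.0], dt=0.01,
                                 th_solver=th_solver, kinetics_solver=kinetics)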
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lichtner, Peter C.; Hammond, Glenn E.; Lu, Chuan
PFLOTRAN solves a system of generally nonlinear partial differential equations describing multi-phase, multicomponent and multiscale reactive flow and transport in porous materials. The code is designed to run on massively parallel computing architectures as well as workstations and laptops (e.g. Hammond et al., 2011). Parallelization is achieved through domain decomposition using the PETSc (Portable Extensible Toolkit for Scientific Computation) libraries for the parallelization framework (Balay et al., 1997). PFLOTRAN has been developed from the ground up for parallel scalability and has been run on up to 2^18 processor cores with problem sizes up to 2 billion degrees of freedom. Written in object-oriented Fortran 90, the code requires the latest compilers compatible with Fortran 2003. At the time of this writing this requires gcc 4.7.x, Intel 12.1.x and PGI compilers. As a requirement of running problems with a large number of degrees of freedom, PFLOTRAN allows reading input data that is too large to fit into the memory allotted to a single processor core. The current limitation to the problem size PFLOTRAN can handle is the restriction of the HDF5 file format used for parallel IO to 32-bit integers. Noting that 2^32 = 4,294,967,296, this gives an estimate of the maximum problem size that can currently be run with PFLOTRAN. Hopefully this limitation will be remedied in the near future.
A good performance watermarking LDPC code used in high-speed optical fiber communication system
NASA Astrophysics Data System (ADS)
Zhang, Wenbo; Li, Chao; Zhang, Xiaoguang; Xi, Lixia; Tang, Xianfeng; He, Wenxue
2015-07-01
A watermarking LDPC code, a strategy designed to improve the performance of the traditional LDPC code, is introduced. By inserting pre-defined watermarking bits into the original LDPC code, a more accurate estimate of the noise level in the fiber channel can be obtained. This estimate is then used to modify the probability distribution function (PDF) that initializes the belief propagation (BP) decoding algorithm. The algorithm was tested in a 128 Gb/s PDM-DQPSK optical communication system, and the results showed that the watermarking LDPC code has better tolerance to polarization mode dispersion (PMD) and nonlinearity than the traditional LDPC code. Also, at a cost of about 2.4% of redundancy for the watermarking bits, the decoding efficiency of the watermarking LDPC code is about twice that of the traditional one.
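A minimal sketch of the watermark idea, transplanted to a BPSK/AWGN toy channel rather than the paper's 128 Gb/s PDM-DQPSK system: known watermark bits give a noise-variance estimate, which sets the channel LLRs that initialize BP decoding. The ~2.4% overhead mirrors the figure quoted above; all other values are synthetic.

    import numpy as np

    def llrs_from_watermark(received, watermark_idx, watermark_bits):
        # BPSK mapping: bit b -> symbol 1 - 2b
        ref = 1.0 - 2.0 * watermark_bits          # known transmitted symbols
        noise = received[watermark_idx] - ref
        sigma2 = np.mean(noise**2)                # noise-power estimate
        return 2.0 * received / sigma2, sigma2    # channel LLRs for BP

    rng = np.random.default_rng(4)
    n, n_wm = 1024, 24                            # ~2.4% watermark overhead
    bits = rng.integers(0, 2, n)
    wm_idx = np.linspace(0, n - 1, n_wm).astype(int)
    bits[wm_idx] = 0                              # pre-defined watermark bits
    tx = 1.0 - 2.0 * bits
    sigma = 0.6
    rx = tx + sigma * rng.standard_normal(n)
    llr, sigma2_hat = llrs_from_watermark(rx, wm_idx, bits[wm_idx])
    print(f"true sigma^2 = {sigma**2:.3f}, estimated = {sigma2_hat:.3f}")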
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1989-01-01
Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of all the best trellis codes for use with phase modulation is included. L×MPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate 1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to 15/16 coded 16-PSK). The parity check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered to be the best candidates for implementation of a high speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3×8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is included. This study was motivated by the fact that coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal-to-noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from a knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10^-2 to 10^-6, where these codes are most likely to operate in a concatenated system, must be determined by simulation.
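The distinction drawn here between asymptotic and real coding gain can be made concrete: for equal average symbol energy and spectral efficiency, the asymptotic figure follows directly from squared free distances, as in the hedged sketch below (the example numbers are illustrative, not the paper's).

    import numpy as np

    def asymptotic_coding_gain(d2_coded, d2_uncoded):
        # Asymptotic coding gain in dB from squared free distances,
        # assuming equal average symbol energy and equal spectral
        # efficiency for the coded and uncoded reference schemes.
        return 10.0 * np.log10(d2_coded / d2_uncoded)

    # Example: a trellis code with squared free distance 4.0 against an
    # uncoded QPSK reference (d^2 = 2.0 at unit symbol energy) is quoted
    # at ~3 dB asymptotically; the simulated gain at BER 10^-2 to 10^-6
    # is generally smaller, which is the point of the study above.
    print(f"{asymptotic_coding_gain(4.0, 2.0):.2f} dB")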
Studying de-implementation in health: an analysis of funded research grants.
Norton, Wynne E; Kennedy, Amy E; Chambers, David A
2017-12-04
Studying de-implementation, defined herein as reducing or stopping the use of a health service or practice provided to patients by healthcare practitioners and systems, has gained traction in recent years. De-implementing ineffective, unproven, harmful, overused, inappropriate, and/or low-value health services and practices is important for mitigating patient harm, improving processes of care, and reducing healthcare costs. A better understanding of the state of the science is needed to guide future objectives and funding initiatives. To this end, we characterized de-implementation research grants funded by the United States (US) National Institutes of Health (NIH) and the Agency for Healthcare Research and Quality (AHRQ). We used systematic methods to search, identify, and describe de-implementation research grants funded across all 27 NIH Institutes and Centers (ICs) and AHRQ from fiscal year 2000 through 2017. Eleven key terms and three funding opportunity announcements were used to search for research grants in the NIH Query, View and Report (QVR) system. Two coders identified eligible grants based on inclusion/exclusion criteria. A codebook was developed, pilot tested, and revised before coding the full grant applications of the final sample. A total of 1277 grants were identified through the QVR system; 542 remained after removing duplicates. After the multistep eligibility assessment and review process, 20 grant applications were coded. Most grants were funded by NIH (n = 15), with fewer funded by AHRQ, and a majority were funded between fiscal years 2015 and 2016 (n = 11). Grant proposals focused on de-implementing a range of health services and practices (e.g., medications, therapies, screening tests) across various health areas (e.g., cancer, cardiovascular disease) and delivery settings (e.g., hospitals, nursing homes, schools). Grants proposed to use a variety of study designs and research methods (e.g., experimental, observational, mixed methods) to accomplish study aims. Based on this systematic portfolio analysis of NIH- and AHRQ-funded research grants over the past 17 years, relatively few have focused on studying the de-implementation of ineffective, unproven, harmful, overused, inappropriate, and/or low-value health services and practices provided to patients by healthcare practitioners and systems. Strategies for raising the profile and growing the field of research on de-implementation are discussed.
Gene Trapping Using Gal4 in Zebrafish
Balciuniene, Jorune; Balciunas, Darius
2013-01-01
Large clutch size and external development of optically transparent embryos make zebrafish an exceptional vertebrate model system for in vivo insertional mutagenesis using fluorescent reporters to tag expression of mutated genes. Several laboratories have constructed and tested enhancer- and gene-trap vectors in zebrafish, using fluorescent proteins and Gal4- and lexA-based transcriptional activators as reporters [1-7]. These vectors had two potential drawbacks: suboptimal stringency (e.g. lack of ability to differentiate between enhancer- and gene-trap events) and low mutagenicity (e.g. integrations into genes rarely produced null alleles). Gene Breaking Transposons (GBTs) were developed to address these drawbacks [8-10]. We have modified one of the first GBT vectors, GBT-R15, for use with Gal4-VP16 as the primary gene trap reporter and added UAS:eGFP as the secondary reporter for direct detection of gene trap events. Application of Gal4-VP16 as the primary gene trap reporter provides two main advantages. First, it increases sensitivity for genes expressed at low expression levels. Second, it enables researchers to use gene trap lines as Gal4 drivers to direct expression of other transgenes in very specific tissues. This is especially pertinent for genes with non-essential or redundant functions, where gene trap integration may not result in overt phenotypes. The disadvantage of using Gal4-VP16 as the primary gene trap reporter is that genes coding for proteins with N-terminal signal sequences are not amenable to trapping, as the resulting Gal4-VP16 fusion proteins are unlikely to be able to enter the nucleus and activate transcription. Importantly, the use of Gal4-VP16 does not pre-select for nuclear proteins: we recovered gene trap mutations in genes encoding proteins which function in the nucleus, the cytoplasm and the plasma membrane. PMID:24121167
NASA Astrophysics Data System (ADS)
He, Jing; Wen, Xuejie; Chen, Ming; Chen, Lin; Su, Jinshu
2015-01-01
To improve the transmission performance of multiband orthogonal frequency division multiplexing (MB-OFDM) ultra-wideband (UWB) over optical fiber, a pre-coding scheme based on low-density parity-check (LDPC) codes is adopted and experimentally demonstrated in an intensity-modulated, direct-detection MB-OFDM UWB over fiber system. Meanwhile, a symbol synchronization and pilot-aided channel estimation scheme is implemented at the receiver of the MB-OFDM UWB over fiber system. The experimental results show that the LDPC pre-coding scheme works effectively in the MB-OFDM UWB over fiber system. After 70 km standard single-mode fiber (SSMF) transmission, at a bit error rate of 1 × 10^-3, the receiver sensitivity is improved by about 4 dB when the LDPC code rate is 75%.
Waldrop, Deborah P; Clemency, Brian; Lindstrom, Heather A; Clemency Cordes, Colleen
2015-09-01
Emergency 911 calls are often made when the end stage of an advanced illness is accompanied by alarming symptoms and substantial anxiety for family caregivers, particularly when an approaching death is not anticipated. How prehospital providers (paramedics and emergency medical technicians) manage emergency calls near death influences how and where people will die, if their end-of-life choices are upheld and how appropriately health care resources are used. The purpose of this study was to explore and describe how prehospital providers assess and manage end-of-life emergency calls. In-depth and in-person interviews were conducted with 43 prehospital providers. Interviews were audiotaped, transcribed, and entered into ATLAS.ti for data management and coding. Qualitative data analysis involved systematic and axial coding to identify and describe emergent themes. Four themes illustrate the nature and dynamics of emergency end-of-life calls: 1) multifocal assessment (e.g., of the patient, family, and environment), 2) family responses (e.g., emotional, behavioral), 3) conflicts (e.g., missing do-not-resuscitate order, patient-family conflicts), and 4) management of the dying process (e.g., family witnessed resuscitation or asking family to leave, decisions about hospital transport). After a rapid comprehensive multifocal assessment, family responses and the existence of conflicts mediate decision making about possible interventions. The importance of managing symptom crises and stress responses that accompany the dying process is particularly germane to quality care at life's end. The results suggest the importance of increasing prehospital providers' abilities to uphold advance directives and patients' end-of-life wishes while managing family emotions near death. Copyright © 2015 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.
Low Density Parity Check Codes: Bandwidth Efficient Channel Coding
NASA Technical Reports Server (NTRS)
Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu
2003-01-01
Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure. This results in power and size benefits. These codes also have a large minimum distance, as much as d_min = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present development of the LDPC flight encoder and decoder, its applications and status.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
SCALE--a modular code system for Standardized Computer Analyses for Licensing Evaluation--has been developed by Oak Ridge National Laboratory at the request of the US Nuclear Regulatory Commission. The SCALE system utilizes well-established computer codes and methods within standard analysis sequences that (1) allow an input format designed for the occasional user and/or novice, (2) automate the data processing and coupling between modules, and (3) provide accurate and reliable results. System development has been directed at problem-dependent cross-section processing and analysis of criticality safety, shielding, heat transfer, and depletion/decay problems. Since the initial release of SCALE in 1980, the code system has been heavily used for evaluation of nuclear fuel facility and package designs. This revision documents Version 4.3 of the system.
48 CFR 4.1803 - Verifying CAGE codes prior to award.
Code of Federal Regulations, 2014 CFR
2014-10-01
... registration in the System for Award Management (SAM). Active registrations in SAM have had the associated CAGE codes verified. (b) For entities not required to be registered in SAM, the contracting officer shall...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-22
... number. (vii) Medium code; how the data is recorded, e.g., barcode, contact memory button. (viii) Value, e.g., actual text or data string that is recorded in its human-readable form. (ix) Set (used to...
ERIC Educational Resources Information Center
National Fire Protection Association, Boston, MA.
These NFPA recommendations are phrased in terms of performance or objectives, the intent being to permit the utilization of any methods, devices, or materials which will produce the desired results. The major topics included are--(1) extinguishing systems, (2) standpipe and hose systems, (3) wetting agents, (4) fire hydrants, (5) water charges for…
A mega-analysis of memory reports from eight peer-reviewed false memory implantation studies.
Scoboria, Alan; Wade, Kimberley A; Lindsay, D Stephen; Azad, Tanjeem; Strange, Deryn; Ost, James; Hyman, Ira E
2017-02-01
Understanding that suggestive practices can promote false beliefs and false memories for childhood events is important in many settings (e.g., psychotherapeutic, medical, and legal). The generalisability of findings from memory implantation studies has been questioned due to variability in estimates across studies. Such variability is partly due to false memories having been operationalised differently across studies and to differences in memory induction techniques. We explored ways of defining false memory based on memory science and developed a reliable coding system that we applied to reports from eight published implantation studies (N = 423). Independent raters coded transcripts using seven criteria: accepting the suggestion, elaboration beyond the suggestion, imagery, coherence, emotion, memory statements, and not rejecting the suggestion. Using this scheme, 30.4% of cases were classified as false memories and another 23% were classified as having accepted the event to some degree. When the suggestion included self-relevant information, an imagination procedure, and was not accompanied by a photo depicting the event, the memory formation rate was 46.1%. Our research demonstrates a useful procedure for systematically combining data that are not amenable to meta-analysis, and provides the most valid estimate of false memory formation and associated moderating factors within the implantation literature to date.
Updated Chemical Kinetics and Sensitivity Analysis Code
NASA Technical Reports Server (NTRS)
Radhakrishnan, Krishnan
2005-01-01
An updated version of the General Chemical Kinetics and Sensitivity Analysis (LSENS) computer code has become available. A prior version of LSENS was described in "Program Helps to Determine Chemical-Reaction Mechanisms" (LEW-15758), NASA Tech Briefs, Vol. 19, No. 5 (May 1995), page 66. To recapitulate: LSENS solves complex, homogeneous, gas-phase, chemical-kinetics problems (e.g., combustion of fuels) that are represented by sets of many coupled, nonlinear, first-order ordinary differential equations. LSENS has been designed for flexibility, convenience, and computational efficiency. The present version of LSENS incorporates mathematical models for (1) a static system; (2) steady, one-dimensional inviscid flow; (3) reaction behind an incident shock wave, including boundary layer correction; (4) a perfectly stirred reactor; and (5) a perfectly stirred reactor followed by a plug-flow reactor. In addition, LSENS can compute equilibrium properties for the following assigned states: enthalpy and pressure, temperature and pressure, internal energy and volume, and temperature and volume. For static and one-dimensional-flow problems, including those behind an incident shock wave and following a perfectly stirred reactor calculation, LSENS can compute sensitivity coefficients of dependent variables and their derivatives, with respect to the initial values of dependent variables and/or the rate-coefficient parameters of the chemical reactions.
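LSENS itself is Fortran; purely to illustrate the class of problem it solves (stiff, coupled, nonlinear first-order ODEs for species concentrations), here is a hedged Python sketch of a toy mechanism A <-> B -> C integrated with a stiff BDF solver (the rate constants are invented, not from LSENS).

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy homogeneous gas-phase mechanism: A <-> B -> C, a small stiff
    # system of coupled nonlinear first-order ODEs of the kind LSENS solves
    kf, kr, k2 = 1.0e4, 1.0e2, 1.0

    def rates(t, y):
        a, b, c = y
        r1 = kf * a - kr * b        # A <-> B (fast, reversible)
        r2 = k2 * b                 # B -> C  (slow, irreversible)
        return [-r1, r1 - r2, r2]

    sol = solve_ivp(rates, (0.0, 10.0), [1.0, 0.0, 0.0],
                    method="BDF", rtol=1e-8, atol=1e-12)
    print({s: round(v, 6) for s, v in zip("ABC", sol.y[:, -1])})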
Biases in GNSS-Data Processing
NASA Astrophysics Data System (ADS)
Schaer, S. C.; Dach, R.; Lutz, S.; Meindl, M.; Beutler, G.
2010-12-01
Within the Global Positioning System (GPS), different types of pseudo-range measurements (P-code, C/A-code) are traditionally available on the first frequency, tracked by receivers with different technologies. For that reason, P1-C1 and P1-P2 Differential Code Biases (DCB) need to be considered in GPS data processing with a mix of different receiver types. Since the Block IIR-M series of GPS satellites also provides C/A-code on the second frequency, P2-C2 DCBs need to be added to the list of biases requiring maintenance. Potential quarter-cycle biases between different phase observables (specifically L2P and L2C) are another issue. When combining GNSS (currently GPS and GLONASS), careful consideration of inter-system biases (ISB) is indispensable, in particular when an adequate combination of individual GLONASS clock correction results from different sources (using, e.g., different software packages) is intended. Facing the GPS and GLONASS modernization programs and the upcoming GNSS, like the European Galileo and the Chinese Compass, an increasing number of types of biases is expected. The Center for Orbit Determination in Europe (CODE) has been monitoring these GPS- and GLONASS-related biases for a long time, based on RINEX files from the tracking network of the International GNSS Service (IGS) and in the frame of its data processing as one of the global analysis centers of the IGS. In the presentation we give an overview of the stability of the biases based on this monitoring. Biases derived from different sources are compared. Finally, we give an outlook on the potential handling of such biases given the big variety of signals and systems expected in the future.
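As a concrete example of where one such bias enters processing, the sketch below applies a satellite-specific P1-C1 DCB to align a C/A-code pseudorange with P-code observations; the numeric values are hypothetical, although biases of this kind are indeed published in nanoseconds (e.g. in CODE's DCB products) and must be converted to meters.

    C = 299792458.0  # speed of light, m/s

    def apply_p1c1_dcb(c1_range_m, dcb_p1c1_ns):
        # P1 ~= C1 + DCB(P1-C1), with the bias converted from ns to m
        return c1_range_m + dcb_p1c1_ns * 1e-9 * C

    c1 = 22_345_678.123            # measured C1 pseudorange, m (synthetic)
    dcb = -1.2                     # hypothetical P1-C1 DCB, ns
    print(f"P1-consistent range: {apply_p1c1_dcb(c1, dcb):.3f} m")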
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
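As an illustration of the interleaving idea (a minimal sketch only, not the CCSDS-specified implementation), a depth-4 symbol interleaver over (255,223) RS codewords can be written as:

    def interleave(codewords):
        # codewords: 4 RS codewords of 255 byte symbols each.
        # Symbols are read out column-wise, so a channel burst of length L
        # is split so each codeword sees at most ceil(L/4) symbol errors;
        # this stays within the t = 16 correction power for bursts up to 64 symbols.
        assert len(codewords) == 4 and all(len(cw) == 255 for cw in codewords)
        return [codewords[i][j] for j in range(255) for i in range(4)]

    def deinterleave(stream):
        cws = [[0] * 255 for _ in range(4)]
        for n, sym in enumerate(stream):
            cws[n % 4][n // 4] = sym
        return cws

    # Round-trip check with dummy codewords
    cws = [[(7 * i + j) % 256 for j in range(255)] for i in range(4)]
    assert deinterleave(interleave(cws)) == cws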
NASA Astrophysics Data System (ADS)
Soto, M. A.; Sahu, P. K.; Faralli, S.; Sacchi, G.; Bolognini, G.; Di Pasquale, F.; Nebendahl, B.; Rueck, C.
2007-07-01
The performance of distributed temperature sensor (DTS) systems based on spontaneous Raman scattering and coded OTDR is investigated. The evaluated DTS system, which is based on correlation coding, uses graded-index multimode fibers, operates over short-to-medium distances (up to 8 km) with high spatial and temperature resolutions (better than 1 m and 0.3 K at 4 km distance with 10 min measuring time) and high repeatability even throughout a wide temperature range.
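As background (a standard relation, not taken from the paper itself), Raman-based DTS infers temperature from the anti-Stokes/Stokes backscatter ratio:

\[
\frac{P_{AS}(T)}{P_{S}(T)} = \left(\frac{\lambda_S}{\lambda_{AS}}\right)^{4} \exp\!\left(-\frac{h\,c\,\Delta\nu}{k_B\,T}\right),
\]

where \(\Delta\nu\) is the Raman shift of the fiber, \(\lambda_S\) and \(\lambda_{AS}\) the Stokes and anti-Stokes wavelengths, and \(T\) the absolute temperature at the scattering location; correlation coding improves the signal-to-noise ratio of this weak backscatter without sacrificing spatial resolution.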
Kopf, Matthias; Klähn, Stephan; Scholz, Ingeborg; Hess, Wolfgang R; Voß, Björn
2015-04-22
In all studied organisms, a substantial portion of the transcriptome consists of non-coding RNAs that frequently execute regulatory functions. Here, we have compared the primary transcriptomes of the cyanobacteria Synechocystis sp. PCC 6714 and PCC 6803 under 10 different conditions. These strains share 2854 protein-coding genes and a 16S rRNA identity of 99.4%, indicating their close relatedness. Conserved major transcriptional start sites (TSSs) give rise to non-coding transcripts within the sigB gene, from the 5'UTRs of cmpA and isiA, and 168 loci in antisense orientation. Distinct differences include single nucleotide polymorphisms rendering promoters inactive in one of the strains, e.g., for cmpR and for the asRNA PsbA2R. Based on the genome-wide mapped location, regulation and classification of TSSs, non-coding transcripts were identified as the most dynamic component of the transcriptome. We identified a class of mRNAs that originate by read-through from an sRNA that accumulates as a discrete and abundant transcript while also serving as the 5'UTR. Such an sRNA/mRNA structure, which we name 'actuaton', represents another way for bacteria to remodel their transcriptional network. Our findings support the hypothesis that variations in the non-coding transcriptome constitute a major evolutionary element of inter-strain divergence and capability for physiological adaptation.
Owens, Mandy D; Rowell, Lauren N; Moyers, Theresa
2017-10-01
Motivational Interviewing (MI) is an evidence-based approach shown to be helpful for a variety of behaviors across many populations. Treatment fidelity is an important tool for understanding how and with whom MI may be most helpful. The Motivational Interviewing Treatment Integrity coding system was recently updated to incorporate new developments in the research and theory of MI, including the relational and technical hypotheses of MI (MITI 4.2). To date, no studies have examined the MITI 4.2 with forensic populations. In this project, twenty-two brief MI interventions with jail inmates were evaluated to test the reliability of the MITI 4.2. Validity of the instrument was explored using regression models to examine the associations between global scores (Empathy, Partnership, Cultivating Change Talk and Softening Sustain Talk) and outcomes. Reliability of this coding system with these data was strong. We found that therapists had lower ratings of Empathy with participants who had more extensive criminal histories. Both Relational and Technical global scores were associated with criminal histories as well as post-intervention ratings of motivation to decrease drug use. Findings indicate that the MITI 4.2 was reliable for coding sessions with jail inmates. Additionally, results provided information related to the relational and technical hypotheses of MI. Future studies can use the MITI 4.2 to better understand the mechanisms behind how MI works with this high-risk group. Published by Elsevier Ltd.
Grammar Coding in the "Oxford Advanced Learner's Dictionary of Current English."
ERIC Educational Resources Information Center
Wekker, Herman
1992-01-01
Focuses on the revised system of grammar coding for verbs in the fourth edition of the "Oxford Advanced Learner's Dictionary of Current English" (OALD4), comparing it with two other similar dictionaries. The OALD4 is shown to compare favorably with the other dictionaries on many criteria. (16 references) (VWL)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Naqvi, S
2014-06-15
Purpose: Most medical physics programs emphasize proficiency in routine clinical calculations and QA. The formulaic aspect of these calculations and the prescriptive nature of measurement protocols obviate the need to frequently apply basic physical principles, which, therefore, gradually decay away from memory. For example, few students appreciate the role of electron transport in photon dose, making it difficult to understand key concepts such as dose buildup, electronic disequilibrium effects and Bragg-Gray theory. These conceptual deficiencies manifest when the physicist encounters a new system, requiring knowledge beyond routine activities. Methods: Two interactive computer simulation tools are developed to facilitate deeper learning of physical principles. One is a Monte Carlo code written with a strong educational aspect. The code can “label” regions and interactions to highlight specific aspects of the physics, e.g., certain regions can be designated as “starters” or “crossers,” and any interaction type can be turned on and off. Full 3D tracks with specific portions highlighted further enhance the visualization of radiation transport problems. The second code calculates and displays trajectories of a collection of electrons under an arbitrary space/time-dependent Lorentz force using relativistic kinematics. Results: Using the Monte Carlo code, the student can interactively study photon and electron transport through visualization of dose components, particle tracks, and interaction types. The code can, for instance, be used to study the kerma-dose relationship, explore electronic disequilibrium near interfaces, or visualize kernels by using interaction forcing. The electromagnetic simulator enables the student to explore accelerating mechanisms and particle optics in devices such as cyclotrons and linacs. Conclusion: The proposed tools are designed to enhance understanding of abstract concepts by highlighting various aspects of the physics. The simulations serve as virtual experiments that give deeper and longer-lasting understanding of core principles. The student can then make sound judgements in novel situations encountered beyond routine clinical activities.
Microwave metamaterials—from passive to digital and programmable controls of electromagnetic waves
NASA Astrophysics Data System (ADS)
Cui, Tie Jun
2017-08-01
Since 2004, my group at Southeast University has been carrying out research into microwave metamaterials, which are classified into three categories: metamaterials based on the effective medium model, plasmonic metamaterials for spoof surface plasmon polaritons (SPPs), and coding and programmable metamaterials. For effective-medium metamaterials, we have developed a general theory to accurately describe effective permittivity and permeability in semi-analytical forms, from which we have designed and realized a three-dimensional (3D) wideband ground-plane invisibility cloak, a free-space electrostatic invisibility cloak, an electromagnetic black hole, optical/radar illusions, a radially anisotropic zero-index metamaterial for omni-directional radiation, and a nearly perfect power combination of a source array, etc. We have also considered the engineering applications of microwave metamaterials, such as a broadband and low-loss 3D transformation-optics lens for wide-angle scanning, a 3D planar gradient-index lens for high-gain radiation, and a random metasurface for reducing radar cross sections. In the area of plasmonic metamaterials, we proposed an ultrathin, narrow, and flexible corrugated metallic strip to guide SPPs with small bending and radiation losses, from which we designed and realized a series of SPP passive devices (e.g. power divider, coupler, filter, and resonator) and active devices (e.g. amplifier and duplexer). We also showed a significant feature of the ultrathin SPP waveguide in overcoming the challenge of signal integrity in traditional integrated circuits, which will help build a high-performance SPP wireless communication system. In the area of coding and programmable metamaterials, we proposed a new measure to describe a metamaterial from the viewpoint of information theory. We have illustrated theoretically and experimentally that coding metamaterials composed of digital units can be controlled by coding sequences, leading to different functions. When the digital state of each coding unit is controlled by a field programmable gate array, the resulting programmable metamaterial can manipulate electromagnetic waves in real time and generate many different functions.
PMD compensation in fiber-optic communication systems with direct detection using LDPC-coded OFDM.
Djordjevic, Ivan B
2007-04-02
The possibility of polarization-mode dispersion (PMD) compensation in fiber-optic communication systems with direct detection using a simple channel estimation technique and low-density parity-check (LDPC)-coded orthogonal frequency division multiplexing (OFDM) is demonstrated. It is shown that even for differential group delay (DGD) of 4/BW (BW is the OFDM signal bandwidth), the degradation due to the first-order PMD can be completely compensated for. Two classes of LDPC codes designed based on two different combinatorial objects (difference systems and product of combinatorial designs) suitable for use in PMD compensation are introduced.
Bracher, Michael; Corner, Dame Jessica; Wagland, Richard
2016-09-02
To provide the first systematic analysis of a national (Wales) sample of free-text comments from patients with cancer, to determine emerging themes and insights regarding experiences of cancer care in Wales. Thematic analysis of free-text data from a population-based survey. Adult patients with a confirmed cancer diagnosis treated within a 3-month period during 2012 in the 7 health boards and 1 trust providing cancer care in Wales. Free-text categorised by theme, coded as positive or negative, with ratios. Overarching themes are identified incorporating comment categories. 4672 respondents (of n=7352 survey respondents) provided free-text comments. Data were coded using a multistage approach: (1) coding of comments into general categories (eg, nursing, surgery, etc), (2) coding of subcategories within main categories (eg, nursing care, nursing communication, etc), (3) cross-sectional analysis to identify themes cutting across categories, (4) mapping of categories/subcategories to corresponding closed questions in the Wales Cancer Patient Experience Survey (WCPES) data for comparison. Most free-text respondents (82%, n=3818) provided positive comments about their cancer care, with 49% (n=2313) giving a negative comment (ratio 0.6:1, negative-to-positive). 3172 respondents (67.9% of free-text respondents) provided a comment mapping to 1 of 4 overarching themes: communication (n=1673, 35.8% of free-text respondents, ratio 1.0:1); waiting during the treatment and/or post-treatment phase (n=923, 19.8%, ratio 1.5:1); staffing and resource levels (n=671, 14.4%, ratio 5.3:1); and speed and quality of diagnostic care (n=374, 8.0%, ratio 1.5:1). Within these areas, constituent subthemes are discussed. This study presents specific areas of concern for patients with cancer, and reveals a number of themes present across the cancer journey. While the majority of comments were positive, analysis reveals concerns shared by significant numbers of respondents. Timely communication can help to manage these anxieties, even where delays or difficulties in treatment may be encountered. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
E-4 Test Facility Design Status
NASA Technical Reports Server (NTRS)
Ryan, Harry; Canady, Randy; Sewell, Dale; Rahman, Shamim; Gilbrech, Rick
2001-01-01
Combined-cycle propulsion technology is a strong candidate for meeting NASA space transportation goals. Extensive ground testing of integrated air-breathing/rocket systems (e.g., components, subsystems and engine systems) across all propulsion operational modes (e.g., ramjet, scramjet) will be needed to demonstrate this propulsion technology. Ground testing will occur at various test centers based on each center's expertise. Testing at the NASA John C. Stennis Space Center will be concentrated primarily on combined-cycle power packs and engine systems at sea-level conditions at a dedicated test facility, E-4. This paper highlights the status of the SSC E-4 Test Facility design.
Some practical universal noiseless coding techniques, part 3, module PSI14,K+
NASA Technical Reports Server (NTRS)
Rice, Robert F.
1991-01-01
The algorithmic definitions, performance characterizations, and application notes for a high-performance adaptive noiseless coding module are provided. Subsets of these algorithms are currently under development in custom very large scale integration (VLSI) at three NASA centers. The generality of coding algorithms recently reported is extended. The module incorporates a powerful adaptive noiseless coder for Standard Data Sources (i.e., sources whose symbols can be represented by uncorrelated non-negative integers, where smaller integers are more likely than larger ones). Coders can be specified to provide performance close to the data entropy over any desired dynamic range (of entropy) above 0.75 bit/sample. This is accomplished by adaptively choosing the best of many efficient variable-length coding options to use on each short block of data (e.g., 16 samples). All code options used for entropies above 1.5 bits/sample are 'Huffman equivalent', but they require no table lookups to implement. The coding can be performed directly on data that have been preprocessed to exhibit the characteristics of a standard source. Alternatively, a built-in predictive preprocessor can be used where applicable. This built-in preprocessor includes the familiar 1-D predictor followed by a function that maps the prediction error sequences into the desired standard form. Additionally, an external prediction can be substituted if desired. A broad range of issues dealing with the interface between the coding module and the data systems it might serve are further addressed. These issues include: multidimensional prediction, archival access, sensor noise, rate control, code rate improvements outside the module, and the optimality of certain internal code options.
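The flavor of such an adaptive variable-length coder can be conveyed with a simplified Golomb-Rice sketch (illustrative only; the module's actual code options and selection rule differ):

    def rice_encode(n, k):
        # Unary-coded quotient, '0' terminator, then k-bit remainder;
        # efficient when the sample n is on the order of 2**k.
        q, r = n >> k, n & ((1 << k) - 1)
        bits = "1" * q + "0"
        return bits + format(r, "0{}b".format(k)) if k else bits

    def encode_block(block, k_max=8):
        # Adaptively pick the cheapest code option for each short block
        # (e.g., 16 samples), as the module does.
        best_k = min(range(k_max),
                     key=lambda k: sum(len(rice_encode(n, k)) for n in block))
        return best_k, "".join(rice_encode(n, best_k) for n in block)

    # Mapped prediction errors: non-negative, small values more likely
    block = [0, 1, 3, 2, 0, 5, 1, 0, 2, 1, 4, 0, 1, 2, 0, 3]
    k, bits = encode_block(block)
    print(k, len(bits), "bits versus", len(block) * 8, "for fixed 8-bit coding")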
Coded Modulation in C and MATLAB
NASA Technical Reports Server (NTRS)
Hamkins, Jon; Andrews, Kenneth S.
2011-01-01
This software, written separately in C and MATLAB as stand-alone packages with equivalent functionality, implements encoders and decoders for a set of nine error-correcting codes and modulators and demodulators for five modulation types. The software can be used as a single program to simulate the performance of such coded modulation. The error-correcting codes implemented are the nine accumulate repeat-4 jagged accumulate (AR4JA) low-density parity-check (LDPC) codes, which have been approved for international standardization by the Consultative Committee for Space Data Systems, and which are scheduled to fly on a series of NASA missions in the Constellation Program. The software implements the encoder and decoder functions, and contains compressed versions of generator and parity-check matrices used in these operations.
Geographic Information Systems: A Primer
1990-10-01
Approved for public release; distribution unlimited. ...utilizing sophisticated integrated databases (usually vector-based), avoid the indirect value coding scheme by recognizing names or direct magnitudes ...intricate involvement required by the operator in order to establish a functional coding scheme. A simple raster system, in which cell values indicate
An Upgrade of the Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) Software
NASA Technical Reports Server (NTRS)
Mason, Michelle L.; Rufer, Shann J.
2015-01-01
The Imaging for Hypersonic Experimental Aeroheating Testing (IHEAT) code is used at NASA Langley Research Center to analyze global aeroheating data on wind tunnel models tested in the Langley Aerothermodynamics Laboratory. One-dimensional, semi-infinite heating data derived from IHEAT are used to design thermal protection systems to mitigate the risks due to aeroheating loads on hypersonic vehicles, such as re-entry vehicles during descent and landing procedures. This code was originally written in the PV-WAVE programming language to analyze phosphor thermography data from the two-color, relative-intensity system developed at Langley. To increase the efficiency, functionality, and reliability of IHEAT, the code was migrated to MATLAB syntax and compiled as a stand-alone executable file labeled version 4.0. New features of IHEAT 4.0 include options to batch process all of the data from a wind tunnel run, to map the two-dimensional heating distribution to a three-dimensional computer-aided design model of the vehicle to be viewed in Tecplot, and to extract data along a segmented line that follows a feature of interest in the data. Results from IHEAT 4.0 were compared on a pixel level to the output images from the legacy code to validate the program. The differences between the two codes were on the order of 10^-5 to 10^-7. IHEAT 4.0 replaces the PV-WAVE version as the production code for aeroheating experiments conducted in the hypersonic facilities at NASA Langley.
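For background, the one-dimensional, semi-infinite conduction model underlying such data reductions relates the surface temperature rise to the heat transfer coefficient h in the standard form (shown here as context, not as the exact IHEAT 4.0 implementation):

\[
\frac{T_s(t) - T_i}{T_{aw} - T_i} = 1 - e^{\beta^{2}}\operatorname{erfc}(\beta),
\qquad
\beta = \frac{h\sqrt{t}}{\sqrt{\rho c k}},
\]

where \(\rho\), \(c\) and \(k\) are the substrate density, specific heat and thermal conductivity, \(T_i\) is the initial temperature, and \(T_{aw}\) the adiabatic wall temperature; the measured phosphor surface temperature history \(T_s(t)\) is inverted for \(h\).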
JDFTx: Software for joint density-functional theory
Sundararaman, Ravishankar; Letchworth-Weaver, Kendra; Schwarz, Kathleen A.; ...
2017-11-14
Density-functional theory (DFT) has revolutionized computational prediction of atomic-scale properties from first principles in physics, chemistry and materials science. Continuing development of new methods is necessary for accurate predictions of new classes of materials and properties, and for connecting to nano- and mesoscale properties using coarse-grained theories. JDFTx is a fully-featured open-source electronic DFT software designed specifically to facilitate rapid development of new theories, models and algorithms. Using an algebraic formulation as an abstraction layer, compact C++11 code automatically performs well on diverse hardware including GPUs (Graphics Processing Units). This code hosts the development of joint density-functional theory (JDFT) that combines electronic DFT with classical DFT and continuum models of liquids for first-principles calculations of solvated and electrochemical systems. In addition, the modular nature of the code makes it easy to extend and interface with, facilitating the development of multi-scale toolkits that connect to ab initio calculations, e.g. photo-excited carrier dynamics combining electron and phonon calculations with electromagnetic simulations.
Non-contact assessment of melanin distribution via multispectral temporal illumination coding
NASA Astrophysics Data System (ADS)
Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.
2015-03-01
Melanin is a pigment that is highly absorptive in the UV and visible electromagnetic spectra. It is responsible for perceived skin tone, and protects against harmful UV effects. Abnormal melanin distribution is often an indicator of melanoma. We propose a novel non-contact approach that uses multispectral temporal illumination coding to estimate the two-dimensional melanin distribution based on its absorptive characteristics. In the proposed system, a novel multispectral, cross-polarized, temporally-coded illumination sequence is synchronized with a camera to measure reflectance under both multispectral and ambient illumination. This allows us to eliminate the ambient illumination contribution from the acquired reflectance measurements, and also to determine the melanin distribution in an observed region based on the spectral properties of melanin using the Beer-Lambert law. Using this information, melanin distribution maps can be generated for objective, quantitative assessment of the skin type of individuals. We show that the melanin distribution map correctly identifies areas with high melanin densities (e.g., nevi).
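In outline, the estimation rests on the Beer-Lambert law (schematic notation of ours; the paper's full model also includes the calibration and ambient-elimination steps described above):

\[
R(\lambda) \propto \exp\!\big(-2\,\ell\,[\,\varepsilon_{\mathrm{mel}}(\lambda)\,c_{\mathrm{mel}} + \mu_{\mathrm{other}}(\lambda)\,]\big),
\]

where \(R(\lambda)\) is the measured reflectance, \(\ell\) an effective path length, \(\varepsilon_{\mathrm{mel}}\) the wavelength-dependent melanin absorptivity and \(c_{\mathrm{mel}}\) the melanin concentration; comparing \(\log R(\lambda)\) across wavelengths at which \(\varepsilon_{\mathrm{mel}}\) differs strongly isolates \(c_{\mathrm{mel}}\) pixel by pixel, which yields the melanin distribution map.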
Verification of the predictive capabilities of the 4C code cryogenic circuit model
NASA Astrophysics Data System (ADS)
Zanino, R.; Bonifetto, R.; Hoa, C.; Richard, L. Savoldi
2014-01-01
The 4C code was developed to model thermal-hydraulics in superconducting magnet systems and related cryogenic circuits. It consists of three coupled modules: a quasi-3D thermal-hydraulic model of the winding; a quasi-3D model of heat conduction in the magnet structures; and an object-oriented acausal model of the cryogenic circuit. In the last couple of years the code and its different modules have undergone a series of validation exercises against experimental data, including data from the supercritical He loop HELIOS at CEA Grenoble. However, all of this analysis work was done after the experiments had been performed. In this paper a first demonstration is given of the predictive capabilities of the 4C code cryogenic circuit module. To that end, a set of ad hoc experimental scenarios has been designed, including different heating and control strategies. Simulations with the cryogenic circuit module of 4C were then performed before the experiment. The comparison presented here between the code predictions and the results of the HELIOS measurements gives the first proof of the excellent predictive capability of the 4C code cryogenic circuit module.
Fukushima Daiichi Unit 1 Ex-Vessel Prediction: Core-Concrete Interaction
Robb, Kevin R.; Farmer, Mitchell T.; Francis, Matthew W.
2016-10-31
Lower head failure and corium-concrete interaction were predicted to occur at Fukushima Daiichi Unit 1 (1F1) by several different system-level code analyses, including MELCOR v2.1 and MAAP5. Although these codes capture a wide range of accident phenomena, they do not contain detailed models for ex-vessel core melt behavior. However, specialized codes exist for the analysis of ex-vessel melt spreading (e.g., MELTSPREAD) and long-term debris coolability (e.g., CORQUENCH). On this basis, in this paper an analysis was carried out to further evaluate ex-vessel behavior for 1F1 using MELTSPREAD and CORQUENCH. Best-estimate melt pour conditions predicted by MELCOR v2.1 and MAAP5 were used as input. MELTSPREAD was then used to predict the spatially dependent melt conditions and extent of spreading during relocation from the vessel. The results of the MELTSPREAD analysis are reported in a companion paper. This information was used as input for the long-term debris coolability analysis with CORQUENCH. For the MELCOR-based melt pour scenario, CORQUENCH predicted the melt would readily cool within 2.5 h after the pour, and the sumps would experience limited ablation (approximately 18 cm) under water-flooded conditions. Finally, for the MAAP-based melt pour scenarios, CORQUENCH predicted that the melt would cool in approximately 22.5 h, and the sumps would experience approximately 65 cm of concrete ablation under water-flooded conditions.
[Adjustment of the German DRG system in 2009].
Wenke, A; Franz, D; Pühse, G; Volkmer, B; Roeder, N
2009-07-01
The 2009 version of the German DRG system brought significant changes for urology concerning the coding of diagnoses, medical procedures and the DRG structure. In view of the political situation and considerable economic pressure, a critical analysis of the 2009 German DRG system is warranted. The relevant diagnoses, medical procedures and German DRGs in the versions 2008 and 2009 were analysed based on the publications of the German DRG Institute (InEK) and the German Institute of Medical Documentation and Information (DIMDI). Changes for 2009 focus on the development of the DRG structure, DRG validation and codes for medical procedures to be used for very complex cases. The outcome of these changes for German hospitals may vary depending on the range of activities. The German DRG system again gained complexity. High demands are made on correct and complete coding of complex urology cases. The quality of case allocation in the German DRG system was improved. On the one hand some of the old problems (e.g. enterostomata) still persist, while on the other hand new problems evolved out of the attempt to improve the case allocation of highly complex and expensive cases. Time will tell whether the increase in highly specialized DRGs with low case numbers will endure and reach acceptable rates of annual fluctuation.
Plug and Play PV Systems for American Homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoepfner, Christian
2016-12-22
The core objectives of the Plug & Play PV Systems Project were to develop a PV system that can be installed on a residential rooftop for less than $1.50/W in 2020, and in less than 10 hours (from point of purchase to commissioning). The Fraunhofer CSE team's approach to this challenge involved a holistic approach to system design, hardware and software, that makes Plug & Play PV systems: quick, easy, and safe to install; easy to demonstrate as code compliant; and permitted, inspected, and interconnected via an electronic process. Throughout the three years of work during this Department of Energy SunShot funded project, the team engaged in a substantive way with inspectional services departments and utilities, manufacturers, installers, and distributors. We received iterative feedback on the system design and on ideas for how such systems can be commercialized. This ultimately led us to conceive of Plug & Play PV Systems as a framework, with a variety of components compatible with the Plug & Play PV approach, including string or microinverters, and conventional modules or emerging lightweight modules. The framework enables a broad group of manufacturers to participate in taking Plug & Play PV Systems to market, and increases the market size for such systems. Key aspects of the development effort centered on the system hardware and associated engineering work, the development of a Plug & Play PV Server to enable the electronic permitting, inspection and interconnection process, understanding the details of code compliance and, on occasion, supporting applications for modifications to the code to allow lightweight modules, for example. We have published a number of papers on our testing and assessment of novel technologies (e.g., adhered lightweight modules) and on the electronic architecture.
VISAGE: Interactive Visual Graph Querying.
Pienta, Robert; Navathe, Shamkant; Tamersoy, Acar; Tong, Hanghang; Endert, Alex; Chau, Duen Horng
2016-06-01
Extracting useful patterns from large network datasets has become a fundamental challenge in many domains. We present VISAGE, an interactive visual graph querying approach that empowers users to construct expressive queries, without writing complex code (e.g., finding money laundering rings of bankers and business owners). Our contributions are as follows: (1) we introduce graph autocomplete , an interactive approach that guides users to construct and refine queries, preventing over-specification; (2) VISAGE guides the construction of graph queries using a data-driven approach, enabling users to specify queries with varying levels of specificity, from concrete and detailed (e.g., query by example), to abstract (e.g., with "wildcard" nodes of any types), to purely structural matching; (3) a twelve-participant, within-subject user study demonstrates VISAGE's ease of use and the ability to construct graph queries significantly faster than using a conventional query language; (4) VISAGE works on real graphs with over 468K edges, achieving sub-second response times for common queries.
NASA Astrophysics Data System (ADS)
Deng, Rui; Yu, Jianjun; He, Jing; Wei, Yiran
2018-05-01
In this paper, we experimentally demonstrated a complete real-time 4-level pulse amplitude modulation (PAM-4) Q-band radio-over-fiber (RoF) system with optical heterodyning and envelope detector (ED) down-conversion. A cost-efficient real-time implementation scheme of cascaded multi-modulus algorithm (CMMA) equalization is also proposed and applied in the system for signal recovery. In addition, to improve the transmission performance of the system, an interleaved Reed-Solomon (RS) code is applied in the real-time system. Although there is serious power impulse noise in the system, the system can still achieve a bit error rate (BER) below 1 × 10^-7 after 25 km standard single-mode fiber (SSMF) transmission and 1 m wireless transmission.
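The idea behind multi-modulus blind equalization of PAM-4 can be sketched as follows (a generic floating-point form for illustration; the paper's real-time cascaded fixed-point implementation differs in detail, and the tap count, step size and modulus normalization here are assumptions):

    import numpy as np

    def mma_equalize(x, num_taps=11, mu=1e-3):
        # Blind FIR equalization of a real-valued PAM-4 sequence x.
        # Normalized PAM-4 levels {-3, -1, +1, +3} give candidate moduli {1, 3};
        # the error term drives |y| toward the modulus nearest the output.
        w = np.zeros(num_taps)
        w[num_taps // 2] = 1.0                      # center-spike initialization
        moduli = np.array([1.0, 3.0])
        y = np.empty(len(x) - num_taps + 1)
        for n in range(len(y)):
            u = x[n:n + num_taps][::-1]             # regressor (most recent first)
            y[n] = np.dot(w, u)
            R = moduli[np.argmin(np.abs(moduli - abs(y[n])))]
            e = y[n] * (y[n] ** 2 - R ** 2)         # multi-modulus error
            w -= mu * e * u                         # LMS-style tap update
        return y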
Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images
NASA Technical Reports Server (NTRS)
Fischer, Bernd
2004-01-01
Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. The AutoBayes schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach with symbolic computation to derive closed-form solutions whenever possible. This is a major advantage over other statistical data analysis systems, which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model, which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.
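To give a feel for the kind of code such a synthesis targets, here is a hand-written sketch of the two-component Gaussian mixture EM that underlies a simple intensity-based segmentation (our illustration, not AutoBayes output; the closed-form M-step updates are exactly the kind of solution AutoBayes derives symbolically):

    import numpy as np

    def em_two_gaussians(x, iters=50):
        # Classify 1-D pixel intensities x into two clusters (background/object).
        mu = np.array([x.min(), x.max()], dtype=float)
        var = np.array([x.var(), x.var()]) + 1e-9
        pi = np.array([0.5, 0.5])
        for _ in range(iters):
            # E-step: responsibilities of each component for each pixel
            d = x[:, None] - mu
            p = pi * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)
            r = p / p.sum(axis=1, keepdims=True)
            # M-step: closed-form parameter updates
            nk = r.sum(axis=0)
            mu = (r * x[:, None]).sum(axis=0) / nk
            d = x[:, None] - mu
            var = (r * d ** 2).sum(axis=0) / nk + 1e-9
            pi = nk / len(x)
        return mu, var, pi, r.argmax(axis=1)    # parameters and hard labels

    # Example: segment a synthetic bimodal "image"
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.1, 500)])
    print(em_two_gaussians(x)[0])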
Shaped Charge Jet Penetration of Discontinuous Media
1977-07-01
operational at the Ballistic Research Laboratory. These codes are OIL, TOIL, DORF, and HELP, which are Eulerian formulated, and HEMP, which ...ELastic Plastic) is a FORTRAN code developed by Systems, Science and Software, Inc. It evolved from three major hydrodynamic codes previously developed ...introduced into the treatment of moving surfaces. The HELP code, using the von Mises yield condition, treats materials as being elastic-plastic. The input for
Popota, F D; Aguiar, P; España, S; Lois, C; Udias, J M; Ros, D; Pavia, J; Gispert, J D
2015-01-07
In this work a comparison between experimental and simulated data using the GATE and PeneloPET Monte Carlo simulation packages is presented. All simulated setups, as well as the experimental measurements, followed exactly the guidelines of the NEMA NU 4-2008 standards using the microPET R4 scanner. The comparison focused on spatial resolution, sensitivity, scatter fraction and counting rate performance. Both GATE and PeneloPET showed reasonable agreement with the experimental measurements for the spatial resolution, although they led to slight underestimations for points close to the edge. Good agreement between experiments and simulations was obtained for the system's sensitivity and scatter fraction for an energy window of 350-650 keV, as well as for the counting rate simulations. The latter was the most complicated test to perform, since each code demands different specifications for the characterization of the system's dead time. Although simulated and experimental results were in excellent agreement for both simulation codes, PeneloPET demanded more information about the behavior of the real data acquisition system. To our knowledge, this constitutes the first validation of these Monte Carlo codes against the full NEMA NU 4-2008 standards for small animal PET imaging systems.
Audit of Clinical Coding of Major Head and Neck Operations
Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean
2009-01-01
INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correlate to a payment bracket. The aim of this study was to determine whether allocated procedure codes for major head and neck operations were correct and reflective of the work undertaken. HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period was assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised codes generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals for the work carried out within the NHS in a fair and consistent manner. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration.
MPEG4: coding for content, interactivity, and universal accessibility
NASA Astrophysics Data System (ADS)
Reader, Cliff
1996-01-01
MPEG4 is a natural extension of audiovisual coding, and yet from many perspectives breaks new ground as a standard. New coding techniques are being introduced, of course, but they will work on new data structures. The standard itself has a new architecture, and will use a new operational model when implemented on equipment that is likely to have innovative system architecture. The author introduces the background developments in technology and applications that are driving or enabling the standard, introduces the focus of MPEG4, and enumerates the new functionalities to be supported. Key applications in interactive TV and heterogeneous environments are discussed. The architecture of MPEG4 is described, followed by a discussion of the multiphase MPEG4 communication scenarios, and issues of practical implementation of MPEG4 terminals. The paper concludes with a description of the MPEG4 workplan. In summary, MPEG4 has two fundamental attributes. First, it is the coding of audiovisual objects, which may be natural or synthetic data in two or three dimensions. Second, the heart of MPEG4 is its syntax: the MPEG4 Syntactic Descriptive Language -- MSDL.
Kim, Dong-Sun; Kwon, Jin-San
2014-01-01
Research on real-time health systems has received great attention during recent years, and the need for high-quality personal multichannel medical signal compression for personal medical product applications is increasing. The international MPEG-4 audio lossless coding (ALS) standard supports a joint channel-coding scheme for improving the compression performance of multichannel signals, and it is a very efficient compression method for multichannel biosignals. However, the computational complexity of such a multichannel coding scheme is significantly greater than that of other lossless audio encoders. In this paper, we present a multichannel hardware encoder based on a low-complexity joint-coding technique and a shared multiplier scheme for portable devices. A joint-coding decision method and a reference channel selection scheme are modified for a low-complexity joint coder. The proposed joint-coding decision method determines the optimized joint-coding operation based on the relationship between the cross correlation of residual signals and the compression ratio. The reference channel selection is designed to select a channel for the entropy coding of the joint coding. The hardware encoder operates at a 40 MHz clock frequency and supports two-channel parallel encoding for the multichannel monitoring system. Experimental results show that the compression ratio increases by 0.06%, whereas the computational complexity decreases by 20.72% compared to the MPEG-4 ALS reference software encoder. In addition, the compression ratio increases by about 11.92% compared to the single-channel-based biosignal lossless data compressor.
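The correlation-driven decision can be illustrated schematically (the threshold value and exact rule here are assumptions for illustration; the paper defines the actual decision method):

    import numpy as np

    def joint_coding_decision(res_a, res_b, threshold=0.6):
        # Normalized cross-correlation of the two channels' prediction residuals.
        a = res_a - res_a.mean()
        b = res_b - res_b.mean()
        rho = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        # Strongly correlated residuals compress better jointly (e.g., by coding
        # one channel plus a difference signal); otherwise code independently.
        return "joint" if abs(rho) > threshold else "independent"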
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gonzalez Gonzalez, R.; Petruzzi, A.; D'Auria, F.
2012-07-01
Atucha-2 is a Siemens-designed PHWR reactor under construction in the Republic of Argentina. Its geometrical complexity and special features (e.g., oblique control rods, positive void coefficient) required the development and validation of a complex three-dimensional (3D) neutron kinetics (NK) coupled thermal-hydraulic (TH) model. Reactor shut-down is obtained by the oblique CRs and, during accidental conditions, by an emergency shut-down system (JDJ) injecting a highly concentrated boron solution (boron clouds) into the moderator tank; the boron cloud reconstruction is obtained using a CFD (CFX) code calculation. A complete LBLOCA calculation implies the application of the RELAP5-3D© system code. Within the framework of the third Agreement 'NA-SA - Univ. of Pisa' a new RELAP5-3D control system for the boron injection system was developed and implemented in the validated coupled RELAP5-3D/NESTLE model of the Atucha-2 NPP. The aim of this activity is to find the limiting case (maximum break area size) for the Peak Cladding Temperature for LOCAs under fixed boundary conditions. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gutierrez, Marte
2013-12-31
This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems (EGS). The specific objectives of the proposal are to: develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; perform laboratory-scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: a true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and a database of monitoring data, with a focus on Acoustic Emissions (AE), from lab-scale modeling and field case histories of EGS reservoir creation.
Xyce parallel electronic simulator : users' guide.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.
2011-05-01
This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state of the art in the following areas: (1) capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; (2) improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; (3) device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to meet the unique needs of the laboratory.
Perea, Manuel; Acha, Joana
2009-02-01
Recently, a number of input coding schemes (e.g., the SOLAR model, SERIOL model, open-bigram model, overlap model) have been proposed that capture the transposed-letter priming effect (i.e., faster response times for jugde-JUDGE than for jupte-JUDGE). In their current versions, these coding schemes do not assume any processing differences between vowels and consonants. However, in a lexical decision task, Perea and Lupker (2004, JML; Lupker, Perea, & Davis, 2008, L&CP) reported that transposed-letter priming effects occurred for consonant transpositions but not for vowel transpositions. This finding poses a challenge for these recently proposed coding schemes. Here, we report four masked priming experiments that examine whether this consonant/vowel dissociation in transposed-letter priming is task-specific. In Experiment 1, we used a lexical decision task and found a transposed-letter priming effect only for consonant transpositions. In Experiments 2-4, we employed a same-different task, which taps early perceptual processes, and found a robust transposed-letter priming effect that did not interact with consonant/vowel status. We examine the implications of these findings for the front end of models of visual word recognition.
Can 100Gb/s wavelengths be deployed using 10Gb/s engineering rules?
NASA Astrophysics Data System (ADS)
Saunders, Ross; Nicholl, Gary; Wollenweber, Kevin; Schmidt, Ted
2007-09-01
A key challenge set by carriers for 40Gb/s deployments was that the 40Gb/s wavelengths should be deployable over existing 10Gb/s DWDM systems, using 10Gb/s link engineering design rules. Typical 10Gb/s link engineering rules are: 1. Polarization Mode Dispersion (PMD) tolerance of 10ps (mean); 2. Chromatic Dispersion (CD) tolerance of +/-700ps/nm; 3. Operation at 50GHz channel spacing, including transit through multiple cascaded [R]OADMs; 4. Optical reach up to 2,000km. By using a combination of advanced modulation formats and adaptive dispersion compensation (technologies rarely seen at 10Gb/s outside of the submarine systems space), vendors did respond to the challenge and broadly met this requirement. As we now start to explore feasible technologies for 100Gb/s optical transport, driven by 100GE port availability on core IP routers, the carrier challenge remains the same: 100Gb/s links should be deployable over existing 10Gb/s DWDM systems using the 10Gb/s link engineering rules listed above. To meet this challenge, optical transport technology must evolve to yet another level of complexity and maturity in both modulation formats and adaptive compensation techniques. Many clues as to how this might be achieved can be gained by first studying sister telecommunications industries, e.g. satellite (QPSK, QAM, LDPC FEC codes), wireless (advanced DSP, MSK), and HDTV (TCM). The optical industry is not a pioneer of new ideas in modulation schemes and coding theory; we will always be followers. However, we do have the responsibility of developing the highest-capacity "modems" on the planet to carry the core backbone traffic of the Internet. As such, the key to our success will be to analyze the pros and cons of advanced modulation/coding techniques and balance this with the practical limitations of high-speed electronics processing speed and the challenges of real-world optical layer impairments. This invited paper will present a view on which advanced technologies are likely candidates to support 100GE optical IP transport over existing 10Gb/s DWDM systems, using 10Gb/s link engineering rules.
The CCONE Code System and its Application to Nuclear Data Evaluation for Fission and Other Reactions
NASA Astrophysics Data System (ADS)
Iwamoto, O.; Iwamoto, N.; Kunieda, S.; Minato, F.; Shibata, K.
2016-01-01
A computer code system, CCONE, was developed for nuclear data evaluation within the JENDL project. The CCONE code system integrates various nuclear reaction models needed to describe reactions induced by nucleons, light charged nuclei up to alpha particles, and photons. The code is written in the C++ programming language using object-oriented technology. At first, it was applied to neutron-induced reaction data on actinides, which were compiled into the JENDL Actinide File 2008 and JENDL-4.0. It has been extensively used in various nuclear data evaluations for both actinide and non-actinide nuclei. The CCONE code has been upgraded for nuclear data evaluation at higher incident energies for neutron-, proton-, and photon-induced reactions. It was also used for estimating β-delayed neutron emission. This paper describes the CCONE code system, indicating the concept and design of the coding and inputs. Details of the formulation for modeling the direct, pre-equilibrium and compound reactions are presented. Applications to nuclear data evaluations such as neutron-induced reactions on actinides and medium-heavy nuclei, high-energy nucleon-induced reactions, photonuclear reactions and β-delayed neutron emission are described.
Viterbi decoding for satellite and space communication.
NASA Technical Reports Server (NTRS)
Heller, J. A.; Jacobs, I. M.
1971-01-01
Convolutional coding and Viterbi decoding, along with binary phase-shift keyed modulation, is presented as an efficient system for reliable communication on power limited satellite and space channels. Performance results, obtained theoretically and through computer simulation, are given for optimum short constraint length codes for a range of code constraint lengths and code rates. System efficiency is compared for hard receiver quantization and 4- and 8-level soft quantization. The effects on performance of varying certain parameters relevant to decoder complexity and cost are examined. Quantitative performance degradation due to imperfect carrier phase coherence is evaluated and compared to that of an uncoded system. As an example of decoder performance versus complexity, a recently implemented 2-Mbit/sec constraint length 7 Viterbi decoder is discussed. Finally a comparison is made between Viterbi and sequential decoding in terms of suitability to various system requirements.
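For concreteness, hard-decision Viterbi decoding of a small rate-1/2 convolutional code can be sketched as follows (a constraint-length-3 toy example for illustration; the decoder discussed in the paper is constraint length 7 and uses soft quantization):

    def parity(x):
        return bin(x).count("1") & 1

    def conv_encode(msg, G=(0b111, 0b101)):
        # Rate-1/2, constraint length 3 encoder; reg holds the 2 previous bits.
        reg, out = 0, []
        for u in msg:
            full = (u << 2) | reg
            out += [parity(full & g) for g in G]
            reg = full >> 1
        return out

    def viterbi_decode(bits, G=(0b111, 0b101)):
        # States are the 2 previous input bits; Hamming-metric survivor search.
        n_states, INF = 4, float("inf")
        metric = [0.0] + [INF] * (n_states - 1)
        paths = [[] for _ in range(n_states)]
        for i in range(0, len(bits), 2):
            r = bits[i:i + 2]
            new_metric = [INF] * n_states
            new_paths = [None] * n_states
            for s in range(n_states):
                if metric[s] == INF:
                    continue
                for u in (0, 1):
                    full = (u << 2) | s
                    out = [parity(full & g) for g in G]
                    ns = full >> 1
                    m = metric[s] + sum(o != b for o, b in zip(out, r))
                    if m < new_metric[ns]:
                        new_metric[ns], new_paths[ns] = m, paths[s] + [u]
            metric, paths = new_metric, new_paths
        return paths[metric.index(min(metric))]

    # Round trip over a noiseless channel recovers the message
    msg = [1, 0, 1, 1, 0, 0]
    assert viterbi_decode(conv_encode(msg)) == msg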
Learning to See: Research in Training a Robot Vision System
2008-12-01
Seraji (2001) and Howard et al. (2001) was focused... [Figure residue removed; the recoverable axis labels were "Features Per Segment" plotted against "Distance Threshold" (0.1-0.5) and "Trafficability Score".]
Joe Iovenitti
2013-05-15
The Engineered Geothermal System (EGS) Exploration Methodology Project is developing an exploration approach for EGS through the integration of geoscientific data. The Project chose the Dixie Valley Geothermal System in Nevada as a field laboratory site for methodology calibration purposes because, in the public domain, it is a highly characterized geothermal system in the Basin and Range, with a considerable amount of geoscience and, most importantly, well data. This Baseline Conceptual Model report summarizes the results of the first three project tasks: (1) collect and assess the existing public domain geoscience data, (2) design and populate a GIS database, and (3) develop a baseline (existing data) geothermal conceptual model, evaluate geostatistical relationships, and generate baseline, coupled EGS favorability/trust maps from +1km above sea level (asl) to -4km asl for the Calibration Area (Dixie Valley Geothermal Wellfield) to identify EGS drilling targets at a scale of 5km x 5km. It presents (1) an assessment of the readily available public domain data and some proprietary data provided by Terra-Gen Power, LLC, (2) a re-interpretation of these data as required, (3) an exploratory geostatistical data analysis, (4) the baseline geothermal conceptual model, and (5) the EGS favorability/trust mapping. The conceptual model presented applies to both the hydrothermal system and EGS in the Dixie Valley region.
Definition of the Semisubmersible Floating System for Phase II of OC4
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robertson, A.; Jonkman, J.; Masciola, M.
Phase II of the Offshore Code Comparison Collaboration Continuation (OC4) project involved modeling of a semisubmersible floating offshore wind system. This report documents the specifications of the floating system, which were needed by the OC4 participants for building aero-hydro-servo-elastic models.
I-Ching, dyadic groups of binary numbers and the geno-logic coding in living bodies.
Hu, Zhengbing; Petoukhov, Sergey V; Petukhova, Elena S
2017-12-01
The ancient Chinese book I-Ching was written a few thousand years ago. It introduces the system of symbols Yin and Yang (equivalents of 0 and 1). It had a powerful impact on the culture, medicine and science of ancient China and several other countries. From the modern standpoint, I-Ching declares the importance of dyadic groups of binary numbers for Nature. The system of I-Ching is represented by tables with dyadic groups of 4 bigrams, 8 trigrams and 64 hexagrams, which were declared fundamental archetypes of Nature. The ancient Chinese did not know about the genetic code of protein sequences of amino acids, but this code is organized in accordance with the I-Ching: in particular, the genetic code is constructed on DNA molecules using 4 nitrogenous bases, 16 doublets, and 64 triplets. The article also describes the use of dyadic groups as a foundation of the bio-mathematical doctrine of the geno-logic code, which exists in parallel with the known genetic code of amino acids but serves a different goal: to code the inherited algorithmic processes using the logical holography and the spectral logic of systems of genetic Boolean functions. Some relations of this doctrine with the I-Ching are discussed. In addition, the ratios of musical harmony that can be revealed in the parameters of DNA structure are also represented in the I-Ching book. Copyright © 2017 Elsevier Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Sumner, Tyler S.
2016-04-17
An advanced system analysis tool, SAM, is being developed for fast-running, improved-fidelity, whole-plant transient analyses at Argonne National Laboratory under DOE-NE’s Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE’s Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are included for a code-to-code comparison.
Peng, Hui; Lan, Chaowang; Liu, Yuansheng; Liu, Tao; Blumenstein, Michael; Li, Jinyan
2017-10-03
Disease-related protein-coding genes have been widely studied, but disease-related non-coding genes remain largely unknown. This work introduces a new vector to represent diseases and applies the newly vectorized data in a positive-unlabeled learning algorithm to predict and rank disease-related long non-coding RNA (lncRNA) genes. This novel vector representation for diseases consists of two sub-vectors. The first is composed of 45 elements characterizing the information entropies of the disease gene distribution over 45 chromosome substructures. This idea is supported by our observation that some substructures (e.g., the chromosome 6 p-arm) are highly preferred by disease-related protein-coding genes, while some (e.g., the 21 p-arm) are not favored at all. The second sub-vector is 30-dimensional, characterizing the distribution of disease-gene-enriched KEGG pathways in comparison with our manually created pathway groups. The second sub-vector complements the first to differentiate between various diseases. Our prediction method outperforms the state-of-the-art methods on benchmark datasets for prioritizing disease-related lncRNA genes. The method also works well when only the sequence information of an lncRNA gene is known, or even when a given disease has no currently recognized long non-coding genes.
TFaNS Tone Fan Noise Design/Prediction System. Volume 3; Evaluation of System Codes
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report evaluates TFaNS against full-scale and ADP 22-inch rig data using the semi-empirical wake modelling in the system. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Version 1.4; Volume III: Evaluation of System Codes.
Development of probabilistic internal dosimetry computer code
NASA Astrophysics Data System (ADS)
Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki
2017-02-01
Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was constructed. Based on the developed system, we implemented a probabilistic internal-dose-assessment code in MATLAB so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g., the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose, in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases of severe internal exposure, the causation probability of a deterministic health effect can be derived from the dose distribution, and a high statistical value (e.g., the 95th percentile of the distribution) can be used to determine the appropriate intervention. The distribution-based sensitivity analysis can also be used to quantify the contribution of each factor to the dose uncertainty, which is essential information for reducing and optimizing the uncertainty in the internal dose assessment. Therefore, the present study can contribute to retrospective dose assessment for accidental internal exposure scenarios, as well as to internal dose monitoring optimization and uncertainty reduction.
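The core of such a code is straightforward to sketch: sample each uncertain component from its distribution, push the samples through the dose model, and read percentiles off the resulting distribution. The following toy Monte Carlo does this for a single urine bioassay; the lognormal parameters, the excretion fraction, and the dose coefficient are invented illustrative values, not data or models from the paper.

```python
# Toy Monte Carlo propagation from one bioassay measurement to a dose distribution.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

measured = 500.0                                                      # Bq in a urine sample
meas_factor = rng.lognormal(mean=0.0, sigma=0.20, size=N)             # measurement uncertainty
excreted_frac = rng.lognormal(mean=np.log(0.02), sigma=0.35, size=N)  # biokinetic fraction (hypothetical)
dose_coeff = rng.lognormal(mean=np.log(1.1e-8), sigma=0.30, size=N)   # Sv/Bq intake (hypothetical)

intake = measured * meas_factor / excreted_frac   # intake consistent with the sample
dose = intake * dose_coeff                        # committed effective dose, Sv

for p in (2.5, 5, 50, 95, 97.5):                  # the percentiles reported above
    print(f"{p:5.1f}th percentile: {np.percentile(dose, p):.2e} Sv")
```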
Understanding multi-scale structural evolution in granular systems through gMEMS
NASA Astrophysics Data System (ADS)
Walker, David M.; Tordesillas, Antoinette
2013-06-01
We show how the rheological response of a material to applied loads can be systematically coded, analyzed and succinctly summarized according to an individual grain's properties (e.g. kinematics). Individual grains are considered as their own smart sensors, akin to microelectromechanical systems (e.g. gyroscopes, accelerometers), each capable of recognizing its evolving role within self-organizing building-block structures (e.g. contact cycles and force chains). A symbolic time series is used to represent their participation in such self-assembled building blocks, and a complex network summarizing their interrelationship with other grains is constructed. In particular, relationships between grain time series are determined according to the information-theoretic Hamming distance or the metric Euclidean distance. We then use topological distance to find network communities, enabling groups of grains at remote physical distances in the material to share a classification. In essence, grains with similar structural and functional roles at different scales are identified together. This taxonomy distills the dissipative structural rearrangements of grains down to their essential features and thus provides pointers for objective physics-based internal-variable formalisms used in the construction of robust predictive continuum models.
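A minimal sketch of that pipeline -- one symbolic series per grain, pairwise Hamming distances, a threshold graph, community detection -- might look as follows. The three-symbol alphabet, the threshold, and the random stand-in data are assumptions, and the modularity-based community finder stands in for whatever topological grouping the authors actually used.

```python
import itertools
import random
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

random.seed(0)
SYMBOLS = "CFN"   # e.g. C = in a contact cycle, F = in a force chain, N = neither
series = {g: [random.choice(SYMBOLS) for _ in range(50)] for g in range(30)}

def hamming(a, b):
    """Number of time steps at which two grains play different roles."""
    return sum(x != y for x, y in zip(a, b))

G = nx.Graph()
G.add_nodes_from(series)
for i, j in itertools.combinations(series, 2):
    if hamming(series[i], series[j]) <= 28:   # arbitrary similarity threshold
        G.add_edge(i, j)

# Grains in the same community share a structural/functional classification,
# even if they sit far apart in the physical material.
for k, community in enumerate(greedy_modularity_communities(G)):
    print(f"community {k}: grains {sorted(community)}")
```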
Joint Services Electronics Program Annual Progress Report.
1985-11-01
Experiments with (one-symbol-memory) adaptive Huffman codes were performed, and the compression achieved was compared with that of Ziv-Lempel coding. As was expected... [Report front-matter residue removed; the recoverable contents entries include: 4. Information Systems -- 4.1 Real Time Statistical Data Processing (T. Kailath); 4.2 Data Compression for Computer Data Structures (J. Gill).]
NASA Technical Reports Server (NTRS)
Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush
2006-01-01
This viewgraph presentation reviews uplink coding. The purposes and goals of the briefing are to: (1) show a plan for using uplink coding and describe its benefits; (2) define possible solutions and their applicability to different types of uplink, including emergency uplink; (3) concur on the conclusions so that a plan to use the proposed uplink system can be embarked upon; (4) identify the need for the development of appropriate technology and its infusion into the DSN; and (5) gain advocacy to implement uplink coding in flight projects. Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).
Coding for reliable satellite communications
NASA Technical Reports Server (NTRS)
Gaarder, N. T.; Lin, S.
1986-01-01
This research project was set up to study various kinds of coding techniques for error control in satellite and space communications for NASA Goddard Space Flight Center. During the project period, researchers investigated the following areas: (1) decoding of Reed-Solomon codes in terms of dual basis; (2) concatenated and cascaded error control coding schemes for satellite and space communications; (3) use of hybrid coding schemes (error correction and detection incorporated with retransmission) to improve system reliability and throughput in satellite communications; (4) good codes for simultaneous error correction and error detection, and (5) error control techniques for ring and star networks.
Malik, Azhar H; Shimazoe, Kenji; Takahashi, Hiroyuki
2013-01-01
In order to obtain the plasma time activity curve (PTAC), the input function for almost all quantitative PET studies, patient blood is sampled manually from an artery or vein, a procedure with various drawbacks. Recently, a novel compact Time-over-Threshold (ToT) based Pr:LuAG-APD animal PET tomograph was developed in our laboratory; it has 10% energy resolution, 4.2 ns time resolution and 1.76 mm spatial resolution. The measured spatial resolution shows much promise for imaging blood vessels, i.e., arteries of diameter 2.3-2.4 mm, and hence for measuring the PTAC for quantitative PET studies. To find the measurement time required to obtain reasonable counts for image reconstruction, the most important parameter is the sensitivity of the system. Usually, small-animal PET systems are characterized by using a point source in air. We used the Electron Gamma Shower 5 (EGS5) code to simulate a point source at different positions inside the sensitive volume of the tomograph, and the axial and radial variations in the sensitivity were studied in air and in a phantom-equivalent water cylinder. An average sensitivity difference of 34% in the axial direction and 24.6% in the radial direction is observed when the point source is placed inside the water cylinder instead of air.
HEC Applications on Columbia Project
NASA Technical Reports Server (NTRS)
Taft, Jim
2004-01-01
NASA's Columbia system consists of a cluster of twenty 512-processor SGI Altix systems. Each of these systems is 3 TFLOP/s in peak performance - approximately the same as the entire compute capability at NAS just one year ago. Each 512p system is a single-system-image machine with one Linux OS, one high-performance file system, and one globally shared memory. The NAS Terascale Applications Group (TAG) is chartered to assist in scaling NASA's mission-critical codes to at least 512p in order to significantly improve emergency response during flight operations, as well as to provide significant improvements in the codes and in the rate of scientific discovery across the scientific disciplines within NASA's Missions. Recent accomplishments are 4x improvements to codes in the ocean modeling community, 10x performance improvements in a number of computational fluid dynamics codes used in aero-vehicle design, and 5x improvements in a number of space science codes dealing in extreme physics. The TAG group will continue its scaling work to 2048p and beyond (10240 CPUs) as the Columbia system becomes fully operational and the upgrades to the SGI NUMAlink memory fabric are in place. The NUMAlink upgrades dramatically improve system scalability for a single application. These upgrades will allow a number of codes to execute faster at higher fidelity than ever before on any other system, thus increasing the rate of scientific discovery even further.
Canham-Chervak, Michelle; Steelman, Ryan A; Schuh, Anna; Jones, Bruce H
2016-11-01
Injuries are a barrier to military medical readiness, and overexertion has historically been a leading mechanism of injury among active duty U.S. Army soldiers. Details are needed to inform prevention planning. The Defense Medical Surveillance System (DMSS) was queried for unique medical encounters among active duty Army soldiers consistent with the military injury definition and assigned an overexertion external cause code (ICD-9: E927.0-E927.9) in 2014 (n=21,891). Most (99.7%) were outpatient visits and 60% were attributed specifically to sudden strenuous movement. Among the 41% (n=9,061) of visits with an activity code (ICD-9: E001-E030), running was the most common activity (n=2,891, 32%); among the 19% (n=4,190) with a place of occurrence code (ICD-9: E849.0-E849.9), the leading location was recreation/sports facilities (n=1,332, 32%). External cause codes provide essential details, but the data represented less than 4% of all injury-related medical encounters among U.S. Army soldiers in 2014. Efforts to improve external cause coding are needed, and could be aligned with training on and enforcement of ICD-10 coding guidelines throughout the Military Health System.
BOCA BASIC BUILDING CODE. 4TH ED., 1965 AND 1967. BOCA BASIC BUILDING CODE ACCUMULATIVE SUPPLEMENT.
ERIC Educational Resources Information Center
Building Officials Conference of America, Inc., Chicago, IL.
NATIONALLY RECOGNIZED STANDARDS FOR THE EVALUATION OF MINIMUM SAFE PRACTICE OR FOR DETERMINING THE PERFORMANCE OF MATERIALS OR SYSTEMS OF CONSTRUCTION HAVE BEEN COMPILED AS AN AID TO DESIGNERS AND LOCAL OFFICIALS. THE CODE PRESENTS REGULATIONS IN TERMS OF MEASURED PERFORMANCE RATHER THAN IN RIGID SPECIFICATION OF MATERIALS OR METHODS. THE AREAS…
Numerical Electromagnetic Code (NEC)-Basic Scattering Code. Part 2. Code Manual
1979-09-01
[OCR-garbled equations removed.] The section describes imaging of the source axes for a magnetic source: the unit vectors defining the source coordinate system, VSOURC(1,1..3), are mirrored to give the image coordinate system axes, VIMAG(1,1..3). VNC: x, y, and z components of the end-cap unit normal. OUTPUT VARIABLE VIMAG: x, y, and z components defining the source image coordinate system axes.
Performance of Low-Density Parity-Check Coded Modulation
NASA Astrophysics Data System (ADS)
Hamkins, J.
2011-02-01
This article presents the simulated performance of a family of nine AR4JA low-density parity-check (LDPC) codes when used with each of five modulations. In each case, the decoder inputs are code-bit log-likelihood ratios computed from the received (noisy) modulation symbols using a general formula which applies to arbitrary modulations. Suboptimal soft-decision and hard-decision demodulators are also explored. Bit-interleaving and various mappings of bits to modulation symbols are considered. A number of subtle decoder algorithm details are shown to affect performance, especially in the error-floor region. Among these are quantization dynamic range and step size, clipping degree-one variable nodes, "Jones clipping" of variable nodes, approximations of the min* function, and partial hard-limiting of messages from check nodes. Using these decoder optimizations, all coded modulations simulated here are free of error floors down to codeword error rates below 10^{-6}. The purpose of generating this performance data is to aid system engineers in determining an appropriate code and modulation to use under specific power and bandwidth constraints, and to provide information needed to design a variable/adaptive coded modulation (VCM/ACM) system using the AR4JA codes.
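The general LLR formula referred to above is, for a received symbol y over AWGN, LLR_k = ln( Σ_{s: bit k of s = 0} e^{-|y-s|²/(2σ²)} / Σ_{s: bit k of s = 1} e^{-|y-s|²/(2σ²)} ). A small sketch for Gray-mapped QPSK follows; the constellation, labeling, and noise level are illustrative choices, not the article's configuration.

```python
import numpy as np

def bit_llrs(y, constellation, labels, noise_var):
    """y: complex samples; labels[i]: bit tuple of constellation[i];
    noise_var: per-dimension Gaussian noise variance."""
    nbits = len(labels[0])
    llrs = np.empty((len(y), nbits))
    for n, yn in enumerate(y):
        metric = np.exp(-np.abs(yn - constellation) ** 2 / (2 * noise_var))
        for k in range(nbits):
            p0 = sum(m for m, lab in zip(metric, labels) if lab[k] == 0)
            p1 = sum(m for m, lab in zip(metric, labels) if lab[k] == 1)
            llrs[n, k] = np.log(p0 / p1)
    return llrs

qpsk = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
labels = [(0, 0), (0, 1), (1, 1), (1, 0)]            # Gray mapping
rng = np.random.default_rng(0)
tx = rng.choice(qpsk, 8)
rx = tx + rng.normal(0, 0.3, 8) + 1j * rng.normal(0, 0.3, 8)
print(bit_llrs(rx, qpsk, labels, noise_var=0.09))    # sign gives the hard decision
```

The same loop works for any labeled constellation; only the `constellation` and `labels` arrays change.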
Parallelization of a Monte Carlo particle transport simulation code
NASA Astrophysics Data System (ADS)
Hadjidoukas, P.; Bousis, C.; Emfietzoglou, D.
2010-05-01
We have developed a high-performance version of the Monte Carlo particle transport simulation code MC4. The original application code, developed in Visual Basic for Applications (VBA) for Microsoft Excel, was first rewritten in the C programming language to improve code portability. Several pseudo-random number generators have also been integrated and studied. The new MC4 version was then parallelized for shared- and distributed-memory multiprocessor systems using the Message Passing Interface. Two parallel pseudo-random number generator libraries (SPRNG and DCMT) have been seamlessly integrated. The performance speedup of parallel MC4 has been studied on a variety of parallel computing architectures, including an Intel Xeon server with 4 dual-core processors, a Sun cluster consisting of 16 nodes of 2 dual-core AMD Opteron processors, and a 200 dual-processor HP cluster. For large problem sizes, which are limited only by the physical memory of the multiprocessor server, the speedup results are almost linear on all systems. We have validated the parallel implementation against the serial VBA and C implementations using the same random number generator. Our experimental results on the transport and energy loss of electrons in a water medium show that the serial and parallel codes are equivalent in accuracy. The present improvements allow for the study of higher particle energies with the use of more accurate physical models, and improve statistics as more particle tracks can be simulated in a short response time.
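Monte Carlo transport parallelizes naturally because particle histories are independent; the skeleton below shows the MPI pattern, with each rank running its own random stream and the tallies reduced to rank 0. The "physics" here is a trivial stand-in for MC4, and the seeding scheme is a simple placeholder for the SPRNG/DCMT stream libraries the paper uses.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_TOTAL = 1_000_000
n_local = N_TOTAL // size                       # histories per rank
rng = np.random.default_rng(12345 + rank)       # distinct stream per rank

depths = rng.exponential(scale=1.0, size=n_local)   # toy "penetration depths"
local_sum = depths.sum()

total = comm.reduce(local_sum, op=MPI.SUM, root=0)  # gather tallies on rank 0
if rank == 0:
    print(f"mean depth over {size * n_local} histories: {total / (size * n_local):.4f}")
# run with: mpiexec -n 4 python mc_skeleton.py
```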
A qualitative content analysis of peer mentoring video calls in adolescents with chronic illness.
Ahola Kohut, Sara; Stinson, Jennifer; Forgeron, Paula; van Wyk, Margaret; Harris, Lauren; Luca, Stephanie
2018-05-01
This article endeavored to determine the topics of discussion during open-ended peer mentoring between adolescents and young adults living with chronic illness. This study occurred alongside a study of the iPeer2Peer Program. Fifty-two calls (7 mentor-mentee pairings) were audio recorded, transcribed verbatim, and analyzed using inductive coding with an additional 30 calls (21 mentor-mentee pairings) coded to ensure representativeness of the data. Three categories emerged: (1) illness impact (e.g., relationships, school/work, self-identity, personal stories), (2) self-management (e.g., treatment adherence, transition to adult care, coping strategies), and (3) non-illness-related adolescent issues (e.g., post-secondary goals, hobbies, social environments). Differences in discussed topics were noted between sexes and by diagnosis. Peer mentors provided informational, appraisal, and emotional support to adolescents.
A (31,15) Reed-Solomon Code for large memory systems
NASA Technical Reports Server (NTRS)
Lim, R. S.
1979-01-01
This paper describes the encoding and the decoding of a (31,15) Reed-Solomon Code for multiple-burst error correction for large memory systems. The decoding procedure consists of four steps: (1) syndrome calculation, (2) error-location polynomial calculation, (3) error-location numbers calculation, and (4) error values calculation. The principal features of the design are the use of a hardware shift register for both high-speed encoding and syndrome calculation, and the use of a commercially available (31,15) decoder for decoding Steps 2, 3 and 4.
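Step 1 of the decoding, syndrome calculation, is easy to show concretely: for a (31,15) RS code over GF(2^5) there are 2t = 16 syndromes S_j = r(α^j), and any nonzero syndrome signals an error. The sketch below assumes the primitive polynomial x^5 + x^2 + 1 with α = x; the paper's actual field representation is not stated.

```python
PRIM_POLY = 0b100101          # x^5 + x^2 + 1 (assumed primitive polynomial)

exp = [0] * 62                # antilog table, doubled to avoid a mod in gf_mul
log = [0] * 32
x = 1
for i in range(31):
    exp[i] = x
    log[x] = i
    x <<= 1
    if x & 0b100000:          # reduce modulo the field polynomial
        x ^= PRIM_POLY
for i in range(31, 62):
    exp[i] = exp[i - 31]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

def syndromes(received, two_t=16):
    """received: 31 GF(32) symbols, highest-order coefficient first."""
    out = []
    for j in range(1, two_t + 1):
        s = 0
        for c in received:                # Horner evaluation of r(alpha^j)
            s = gf_mul(s, exp[j]) ^ c
        out.append(s)
    return out

codeword = [0] * 31                       # the all-zero word is a codeword
print(any(syndromes(codeword)))           # -> False: no error detected
codeword[5] = 7                           # inject one symbol error
print(any(syndromes(codeword)))           # -> True: nonzero syndromes
```

In the paper's design this step is done by the same hardware shift register used for encoding; Steps 2-4 then solve for the error locations and values from these 16 syndromes.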
Histone H4 Methyltransferase Suv420h2 Maintains Fidelity of Osteoblast Differentiation.
Khani, Farzaneh; Thaler, Roman; Paradise, Christopher R; Deyle, David R; Kruijthof-de Julio, Marianne; Galindo, Mario; Gordon, Jonathan A; Stein, Gary S; Dudakovic, Amel; van Wijnen, Andre J
2017-05-01
Osteogenic lineage commitment and progression is controlled by multiple signaling pathways (e.g., WNT, BMP, FGF) that converge on bone-related transcription factors. Access of osteogenic transcription factors to chromatin is controlled by epigenetic regulators that generate post-translational modifications of histones ("histone code"), as well as read, edit and/or erase these modifications. Our understanding of the biological role of epigenetic regulators in osteoblast differentiation remains limited. Therefore, we performed next-generation RNA sequencing (RNA-seq) and established which chromatin-related proteins are robustly expressed in mouse bone tissues (e.g., fracture callus, calvarial bone). These studies also revealed that cells with increased osteogenic potential have higher levels of the H4K20 methyl transferase Suv420h2 compared to other methyl transferases (e.g., Suv39h1, Suv39h2, Suv420h1, Ezh1, Ezh2). We find that all six epigenetic regulators are transiently expressed at different stages of osteoblast differentiation in culture, with maximal mRNAs levels of Suv39h1 and Suv39h2 (at day 3) preceding maximal expression of Suv420h1 and Suv420h2 (at day 7) and developmental stages that reflect, respectively, early and later collagen matrix deposition. Loss of function analysis of Suv420h2 by siRNA depletion shows loss of H4K20 methylation and decreased expression of bone biomarkers (e.g., alkaline phosphatase/Alpl) and osteogenic transcription factors (e.g., Sp7/Osterix). Furthermore, Suv420h2 is required for matrix mineralization during osteoblast differentiation. We conclude that Suv420h2 controls the H4K20 methylome of osteoblasts and is critical for normal progression of osteoblastogenesis. J. Cell. Biochem. 118: 1262-1272, 2017. © 2016 Wiley Periodicals, Inc.
40 CFR 262.87 - Reporting and recordkeeping.
Code of Federal Regulations, 2012 CFR
2012-07-01
... movements subject to this subpart, persons (e.g., exporters, recognized traders) who meet the definition of...(s) and applicable waste code(s) from the appropriate OECD waste list incorporated by reference in... imprisonment. (b) Exception reports. Any person who meets the definition of primary exporter in § 262.51 or who...
40 CFR 262.87 - Reporting and recordkeeping.
Code of Federal Regulations, 2011 CFR
2011-07-01
... movements subject to this subpart, persons (e.g., exporters, recognized traders) who meet the definition of...(s) and applicable waste code(s) from the appropriate OECD waste list incorporated by reference in... imprisonment. (b) Exception reports. Any person who meets the definition of primary exporter in § 262.51 or who...
40 CFR 262.87 - Reporting and recordkeeping.
Code of Federal Regulations, 2013 CFR
2013-07-01
... movements subject to this subpart, persons (e.g., exporters, recognized traders) who meet the definition of...(s) and applicable waste code(s) from the appropriate OECD waste list incorporated by reference in... imprisonment. (b) Exception reports. Any person who meets the definition of primary exporter in § 262.51 or who...
40 CFR 262.87 - Reporting and recordkeeping.
Code of Federal Regulations, 2014 CFR
2014-07-01
... movements subject to this subpart, persons (e.g., exporters, recognized traders) who meet the definition of...(s) and applicable waste code(s) from the appropriate OECD waste list incorporated by reference in... imprisonment. (b) Exception reports. Any person who meets the definition of primary exporter in § 262.51 or who...
Schmitz, Matthew; Forst, Linda
2016-02-15
Inclusion of information about a patient's work, industry, and occupation in the electronic health record (EHR) could facilitate occupational health surveillance, better health outcomes, prevention activities, and identification of workers' compensation cases. The US National Institute for Occupational Safety and Health (NIOSH) has developed an autocoding system for "industry" and "occupation" based on 1990 Bureau of the Census codes; its effectiveness requires evaluation in conjunction with promoting the mandatory addition of these variables to the EHR. The objective of the study was to evaluate the intercoder reliability of NIOSH's Industry and Occupation Computerized Coding System (NIOCCS) when applied to data collected in a community survey conducted under the Affordable Care Act, and to determine the proportion of records that are autocoded using NIOCCS. Standard Occupational Classification (SOC) codes are used by several federal agencies in databases that capture demographic, employment, and health information to harmonize variables related to work activities among these data sources. There were 359 industry and occupation responses that were hand-coded by 2 investigators, who came to a consensus on every code. The same variables were autocoded using NIOCCS at the high and moderate confidence criteria levels. Kappa was .84 both for agreement between hand coders and for the hand-coder consensus code versus NIOCCS high-confidence-level codes, for the first 2 digits of the SOC code. For 4 digits, NIOCCS coding versus investigator coding ranged from kappa=.56 to .70. In this study, NIOCCS was able to achieve production rates (i.e., to autocode) of 31%-36% of entered variables at the "high confidence" level and 49%-58% at the "medium confidence" level. Autocoding (production) rates are somewhat lower than those reported by NIOSH. Agreement between manually coded and autocoded data is "substantial" at the 2-digit level, but only "fair" to "good" at the 4-digit level. This work serves as a baseline for performance of NIOCCS by investigators in the field. Further field testing will clarify NIOCCS effectiveness in terms of ability to assign codes and coding accuracy, and will clarify its value as inclusion of these occupational variables in the EHR is promoted.
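Cohen's kappa, the agreement statistic used throughout the study, corrects raw agreement for agreement expected by chance: κ = (p_o − p_e)/(1 − p_e). A quick sketch, computed from the definition and cross-checked with scikit-learn; the example labels are fabricated 2-digit SOC prefixes, not the study's data.

```python
from collections import Counter
from sklearn.metrics import cohen_kappa_score

coder_a = ["11", "11", "29", "47", "29", "11", "53", "47", "29", "11"]
coder_b = ["11", "29", "29", "47", "29", "11", "53", "11", "29", "11"]

def kappa(a, b):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n                 # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_exp = sum(ca[k] * cb[k] for k in set(a) | set(b)) / n**2    # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

print(kappa(coder_a, coder_b))              # from the definition
print(cohen_kappa_score(coder_a, coder_b))  # same value from sklearn
```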
Cloud-based design of high average power traveling wave linacs
NASA Astrophysics Data System (ADS)
Kutsaev, S. V.; Eidelman, Y.; Bruhwiler, D. L.; Moeller, P.; Nagler, R.; Barbe Welzel, J.
2017-12-01
The design of industrial high average power traveling wave linacs must accurately consider some specific effects. For example, acceleration of high current beam reduces power flow in the accelerating waveguide. Space charge may influence the stability of longitudinal or transverse beam dynamics. Accurate treatment of beam loading is central to the design of high-power TW accelerators, and it is especially difficult to model in the meter-scale region where the electrons are nonrelativistic. Currently, there are two types of available codes: tracking codes (e.g. PARMELA or ASTRA) that cannot solve self-consistent problems, and particle-in-cell codes (e.g. Magic 3D or CST Particle Studio) that can model the physics correctly but are very time-consuming and resource-demanding. Hellweg is a special tool for quick and accurate electron dynamics simulation in traveling wave accelerating structures. The underlying theory of this software is based on the differential equations of motion. The effects considered in this code include beam loading, space charge forces, and external magnetic fields. We present the current capabilities of the code, provide benchmarking results, and discuss future plans. We also describe the browser-based GUI for executing Hellweg in the cloud.
14 CFR 25.1435 - Hydraulic systems.
Code of Federal Regulations, 2010 CFR
2010-01-01
.... Pressure vessels containing gas: high pressure (e.g., accumulators) -- proof pressure factor 3.0, ultimate pressure factor 4.0; low pressure (e.g., reservoirs...). Element design. Each element of the hydraulic system must be designed to: (1) Withstand the proof pressure... ultimate pressure without rupture. The proof and ultimate pressures are defined in terms of the design...
Tsujikawa, Kenji; Kuwayama, Kenji; Miyaguchi, Hajime; Kanamori, Tatsuyuki; Iwata, Yuko T; Yoshida, Takemi; Inoue, Hiroyuki
2008-02-04
We tried to develop a library search system using a portable attenuated total reflection Fourier transform infrared (ATR-FT-IR) spectrometer for on-site identification of 3,4-methylenedioxymethamphetamine (MDMA) and 3,4-methylenedioxyamphetamine (MDA) tablets. The library consisted of spectra from mixtures of controlled drugs (e.g. MDMA and ketamine), adulterants (e.g. caffeine), and diluents (e.g. lactose). Of the seven library search algorithms, the derivative correlation coefficient showed the best discriminant capability, which was further enhanced by segmentation of the search area. The optimized search algorithm was validated with positive samples (n=154; e.g. standard mixtures containing a controlled drug, and confiscated MDMA/MDA tablets) and negative samples (n=56; e.g. medicinal tablets). All validation samples except four were judged correctly. Final criteria for positive identification were decided on the basis of the validation results. In conclusion, a portable ATR-FT-IR spectrometer with our library search system would be a useful tool for on-site identification of amphetamine-type stimulant tablets.
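The winning metric -- Pearson correlation of first-derivative spectra -- takes only a few lines, shown below on synthetic Gaussian-peak mixtures rather than real ATR-FT-IR spectra; the peak positions and library entries are invented for illustration.

```python
import numpy as np

wn = np.linspace(600, 1800, 1200)                  # wavenumber axis, cm^-1

def peak(center, width=15.0):
    return np.exp(-((wn - center) ** 2) / (2 * width**2))

library = {
    "MDMA + lactose":      peak(1040) + 0.8 * peak(1490),
    "ketamine + caffeine": peak(1250) + 0.6 * peak(1700),
    "lactose only":        peak(1040),
}

def deriv_corr(a, b):
    """Pearson correlation of the first-derivative spectra."""
    da, db = np.diff(a), np.diff(b)
    return np.corrcoef(da, db)[0, 1]

rng = np.random.default_rng(0)
query = peak(1040) + 0.8 * peak(1490) + rng.normal(0, 0.02, wn.size)

for name in sorted(library, key=lambda k: deriv_corr(query, library[k]), reverse=True):
    print(f"{deriv_corr(query, library[name]):+.3f}  {name}")
```

Differentiating before correlating suppresses baseline offsets and slow drifts, which is why this metric discriminates better than correlation on the raw spectra; the segmentation step in the paper applies the same score per spectral region.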
Geochemical modelling of EGS fracture stimulation applying weak and strong acid treatments
NASA Astrophysics Data System (ADS)
Sigfusson, Bergur; Sif Pind Aradottir, Edda
2015-04-01
Engineered geothermal systems (EGS) provide geothermal power by tapping into the Earth's deep geothermal resources that are otherwise not exploitable due to lack of water or fractures, location, or rock type. EGS technologies have the potential to cost-effectively produce large amounts of electricity almost anywhere in the world. The EGS technology creates permeability in the rock by hydro-fracturing the reservoir with cold water pumped into the first well (the injection well) at a high pressure. The second well (the production well) intersects the stimulated fracture system and returns the hot water to the surface, where electricity can be generated. A significant technological hurdle is ensuring an effective connection between the wells and the fracture system and controlling the deep-rooted fractures (which can exceed 5,000 m depth). A large area for heat transfer and sufficient mass flow need to be ensured between wells without creating fast-flowing paths in the fracture network. Maintaining flow through the fracture system can incur a considerable energy penalty on the overall process. Therefore, chemical methods to maintain fractures and prevent scaling can be necessary to prevent excessive pressure build-up in the re-injection wells of EGS systems. The effect of different acid treatments on the porosity development of selected rock types was simulated with the aid of the PetraSim interface to the TOUGHREACT simulation code. The thermodynamic and kinetic database of Aradottir et al. (2014) was expanded to include new minerals and the most important fluoride-bearing species involved in mineral reactions during acid stimulation of geothermal systems. A series of simulations with injection waters containing hydrofluoric acid, hydrochloric acid and CO2, or mixtures thereof, were then carried out, and porosity development in the fracture system was monitored. The periodic injection of weak acid mixtures into EGS systems may be cost-effective in some isolated cases to prevent pressure build-up, thereby lowering pumping costs during operation. Selection of the acid is, though, highly dependent on the chemistry of the reservoir in question. Reference: Aradottir, E. S. P., Gunnarsson, I., Sigfusson, B., Gunnarsson, G., Juliusson, B. M., Gunnlaugsson, E., Sigurdardóttir, H., Arnarson, M. T., Sonnenthal, E., 2014. Toward Cleaner Geothermal Energy Utilization: Capturing and Sequestering CO2 and H2S Emissions from Geothermal Power Plants. Transport in Porous Media. DOI 10.1007/s11242-014-0316-5
Special issue on network coding
NASA Astrophysics Data System (ADS)
Monteiro, Francisco A.; Burr, Alister; Chatzigeorgiou, Ioannis; Hollanti, Camilla; Krikidis, Ioannis; Seferoglu, Hulya; Skachek, Vitaly
2017-12-01
Future networks are expected to depart from traditional routing schemes in order to embrace network coding (NC)-based schemes, which have created a lot of interest in both academia and industry in recent years. Under the NC paradigm, symbols are transported through the network by combining several information streams originating from the same or different sources. This special issue contains thirteen papers, some dealing with design aspects of NC and related concepts (e.g., fountain codes) and some showcasing the application of NC to new services and technologies, such as multi-view video streaming or underwater sensor networks. One can find papers that show how NC makes data transmission more robust to packet losses, faster to decode, and more resilient to network changes, such as dynamic topologies and different user options, and how NC can improve the overall throughput. This issue also includes papers showing that NC principles can be used at different layers of the networks (including the physical layer) and how the same fundamental principles can lead to new distributed storage systems. Some of the papers in this issue have a theoretical nature, including code design, while others describe hardware testbeds and prototypes.
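The canonical illustration of the NC principle surveyed here is the "butterfly" network: a relay XORs two equal-length packets, and each sink recovers the packet it is missing from the single coded transmission. A toy sketch, not any specific paper's scheme:

```python
pkt_a = b"alpha_packet_0001"
pkt_b = b"bravo_packet_0002"

coded = bytes(x ^ y for x, y in zip(pkt_a, pkt_b))   # relay sends a XOR b

# Sink 1 overheard pkt_a directly; sink 2 overheard pkt_b directly.
recovered_b = bytes(x ^ y for x, y in zip(pkt_a, coded))
recovered_a = bytes(x ^ y for x, y in zip(pkt_b, coded))

assert recovered_a == pkt_a and recovered_b == pkt_b
print("both sinks decoded; the bottleneck link carried one packet, not two")
```

Practical schemes generalize this from XOR to random linear combinations over a finite field, which is what makes NC robust to losses and topology changes.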
Neutron Source Facility Training Simulator Based on EPICS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Young Soo; Wei, Thomas Y.; Vilim, Richard B.
A plant operator training simulator has been developed for training plant operators and for design verification of the plant control system (PCS) and plant protection system (PPS) of the Kharkov Institute of Physics and Technology Neutron Source Facility. The simulator provides the operator interface for the whole plant, including the sub-critical assembly coolant loop, target coolant loop, secondary coolant loop, and other facility systems. The operator interface is implemented on the Experimental Physics and Industrial Control System (EPICS), a comprehensive software development platform for distributed control systems. Since its development at Argonne National Laboratory, EPICS has been widely adopted in the experimental physics community, e.g. for control of accelerator facilities; this work is the first implementation for a nuclear facility. The main parts of the operator interface are the plant control panel and the plant protection panel. The development involved implementation of the process variable database, sequence logic, and graphical user interface (GUI) for the PCS and PPS utilizing EPICS and related software tools, e.g. the sequencer for sequence logic and Control System Studio (CSS-BOY) for the graphical user interface. For functional verification of the PCS and PPS, a plant model is interfaced, which is a physics-based model of the facility coolant loops implemented as a numerical computer code. The training simulator was tested and demonstrated its effectiveness in various plant operation sequences, e.g. start-up, shut-down, maintenance, and refueling. It was also tested for verification of the plant protection system under various trip conditions.
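In an EPICS system everything is addressed as a process variable (PV) over Channel Access, so a test script or GUI element reduces to reads, writes, and monitors. A sketch using the pyepics bindings; the PV names are invented for illustration (real names would come from the facility's EPICS database), and a running IOC is required for the calls to succeed.

```python
from epics import PV, caget, caput

caput("NSF:PCS:PumpSpeedSP", 1200.0)          # write a control setpoint
flow = caget("NSF:PCS:PrimaryLoopFlow")       # read back a process variable
print(f"primary loop flow: {flow}")

def on_trip(pvname=None, value=None, **kw):
    """Callback fired by pyepics whenever the monitored PV changes."""
    if value:
        print(f"{pvname} tripped!")

trip_pv = PV("NSF:PPS:HighTempTrip", callback=on_trip)  # monitor a PPS trip flag
```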
A real-time chirp-coded imaging system with tissue attenuation compensation.
Ramalli, A; Guidi, F; Boni, E; Tortoli, P
2015-07-01
In ultrasound imaging, pulse compression methods based on the transmission (TX) of long coded pulses and matched receive filtering can be used to improve the penetration depth while preserving the axial resolution (coded imaging). The performance of most of these methods is affected by the frequency-dependent attenuation of tissue, which causes mismatch of the receive filter. This, together with the additional computational load involved, has probably so far limited the implementation of pulse compression methods in real-time imaging systems. In this paper, a real-time, low-computational-cost coded-imaging system operating on the beamformed and demodulated data received by a linear array probe is presented. The system has been implemented by extending the firmware and the software of the ULA-OP research platform. In particular, pulse compression is performed by exploiting the computational resources of a single digital signal processor. Each image line is produced in less than 20 μs, so that, e.g., 192-line frames can be generated at up to 200 fps. Although the system may work with a large class of codes, this paper focuses on tests of linear frequency-modulated chirps. The new system has been used to experimentally investigate the effects of tissue attenuation so that the design of the receive compression filter can be guided accordingly. Tests made with different chirp signals confirm that, although the attainable compression gain in attenuating media is lower than the theoretical value expected for a given TX time-bandwidth product (BT), good SNR gains can be obtained. For example, by using a chirp signal having BT=19, a 13 dB compression gain has been measured. By adapting the frequency band of the receiver to the band of the received echo, the signal-to-noise ratio and the penetration depth have been further increased, as shown by real-time tests conducted on phantoms and in vivo. In particular, a 2.7 dB SNR increase has been measured with a novel attenuation compensation scheme, which only requires shifting the demodulation frequency by 1 MHz. The proposed method is characterized by its simplicity and easy implementation. Copyright © 2015 Elsevier B.V. All rights reserved.
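The essence of chirp pulse compression is a correlation of the received line with the transmit replica, which concentrates the long pulse's energy into a short peak with an SNR gain on the order of BT. A minimal sketch; the sample rate, band, and noise level are illustrative choices, not the BT = 19 design measured in the paper.

```python
import numpy as np
from scipy.signal import chirp

fs = 100e6                                    # sample rate, Hz
T = 10e-6                                     # pulse length
t = np.arange(0, T, 1 / fs)
tx = chirp(t, f0=3e6, t1=T, f1=7e6)           # 3-7 MHz linear FM -> BT = 40

rng = np.random.default_rng(0)
echo = np.concatenate([np.zeros(500), 0.05 * tx, np.zeros(500)])
echo += rng.normal(0, 0.05, echo.size)        # echo buried at ~0 dB SNR

mf = np.correlate(echo, tx, mode="same")      # matched filter = correlation with replica
peak = int(np.argmax(np.abs(mf)))
print(f"peak at sample {peak} (expected near {500 + len(tx) // 2})")
```

The mismatch problem the paper addresses arises because tissue attenuation pulls the echo's spectrum downward in frequency, so the fixed replica `tx` no longer matches the received band; shifting the demodulation frequency, as the authors do, re-centers the receiver on the attenuated echo.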
NASA Technical Reports Server (NTRS)
Topol, David A.
1999-01-01
TFaNS is the Tone Fan Noise Design/Prediction System developed by Pratt & Whitney under contract to NASA Lewis (presently NASA Glenn). The purpose of this system is to predict tone noise emanating from a fan stage, including the effects of reflection and transmission by the rotor and stator and by the duct inlet and nozzle. These effects have been added to an existing annular duct/isolated stator noise prediction capability. TFaNS consists of: the codes that compute the acoustic properties (reflection and transmission coefficients) of the various elements and write them to files; CUP3D, the Fan Noise Coupling Code that reads these files, solves the coupling problem, and outputs the desired noise predictions; and AWAKEN, the CFD/Measured Wake Postprocessor, which reformats CFD wake predictions and/or measured wake data so they can be used by the system. This volume of the report provides technical background for TFaNS, including the organization of the system and CUP3D technical documentation. This document also provides information for code developers who must write Acoustic Property Files in the CUP3D format. This report is divided into three volumes: Volume I: System Description, CUP3D Technical Documentation, and Manual for Code Developers; Volume II: User's Manual, TFaNS Version 1.4; Volume III: Evaluation of System Codes.
Scott, Erika; Bell, Erin; Krupa, Nicole; Hirabayashi, Liane; Jenkins, Paul
2017-09-01
Agriculture and logging are dangerous industries, and though data on fatal injury exists, less is known about non-fatal injury. Establishing a non-fatal injury surveillance system is a top priority. Pre-hospital care reports and hospitalization data were explored as a low-cost option for ongoing surveillance of occupational injury. Using pre-hospital care report free-text and location codes, along with hospital ICD-9-CM external cause of injury codes, we created a surveillance system that tracked farm and logging injuries. In Maine and New Hampshire, 1585 injury events were identified (2008-2010). The incidence of injuries was 12.4/1000 for agricultural workers, compared to 10.4/1000 to 12.2/1000 for logging workers. These estimates are consistent with other recent estimates. This system is limited to traumatic injury for which medical treatment is administered, and is limited by the accuracy of coding and spelling. This system has the potential to be both sustainable and low cost. © 2017 Wiley Periodicals, Inc.
Accessing the Food Systems in Urban and Rural Minnesotan Communities
ERIC Educational Resources Information Center
Smith, Chery; Miller, Hannah
2011-01-01
Objective: Explore how urban and rural Minnesotans access the food system and to investigate whether community infrastructure supports a healthful food system. Design: Eight (4 urban and 4 rural) focus groups were conducted. Setting and Participants: Eight counties with urban influence codes of 1, 2, 4, 5, 8, and 10. Fifty-nine (urban, n = 27;…
Pesticide-Related Hospitalizations Among Children and Teenagers in Texas, 2004-2013.
Trueblood, Amber B; Shipp, Eva; Han, Daikwon; Ross, Jennifer; Cizmas, Leslie H
2016-01-01
Acute exposure to pesticides is associated with nausea, headaches, rashes, eye irritation, seizures, and, in severe cases, death. We characterized pesticide-related hospitalizations in Texas among children and teenagers for 2004-2013 to better understand exposures in this population, which is less well understood than pesticide exposure among adults. We abstracted information on pesticide-related hospitalizations from hospitalization data using pesticide-related International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes and E-codes. We calculated the prevalence of pesticide-related hospitalizations among children and teenagers aged ≤19 years for all hospitalizations, unintentional exposures, intentional exposures, pesticide classifications, and illness severity. We also calculated age- and sex-specific prevalence of pesticide-related hospitalizations among children. The prevalence of pesticide-related hospitalizations among children and teenagers was 2.1 per 100,000 population. The prevalence of pesticide-related hospitalizations per 100,000 population was 2.7 for boys and 1.5 for girls. The age-specific prevalence per 100,000 population was 5.3 for children aged 0-4 years, 0.3 for children and teenagers aged 5-14 years, and 2.3 for teenagers aged 15-19 years. Children aged 0-4 years had the highest prevalence of unintentional exposures, whereas teenagers aged 15-19 years had the highest prevalence of intentional exposures. Commonly reported pesticide categories were organophosphates/carbamates, disinfectants, rodenticides, and other pesticides (e.g., pyrethrins, pyrethroids). Of the 158 pesticide-related hospitalizations, most were coded as having minor (n=86) or moderate (n=40) illness severity. Characterizing the prevalence of pesticide-related hospitalizations among children and teenagers leads to a better understanding of the burden of pesticide exposures, including the type of pesticides used and the severity of potential health effects. This study found differences in the frequency of pesticide-related hospitalizations by sex, age, and intent (e.g., unintentional vs. intentional).
Epistemic Sensibility: Third Dimension of Virtue Epistemology
ERIC Educational Resources Information Center
Belbase, Shashidhar
2012-01-01
The author tries to argue how epistemic sensibility as virtue sensibility can complement virtue epistemology. Many philosophers interrelated virtue reliabilism (e.g., Brogaard, 2006) and virtue responsibilism (e.g., Code, 1987) to virtue epistemology as two dimensions with many diverging and a few converging characters. The possible new dimension…
Morley, Katherine I; Wallace, Joshua; Denaxas, Spiros C; Hunter, Ross J; Patel, Riyaz S; Perel, Pablo; Shah, Anoop D; Timmis, Adam D; Schilling, Richard J; Hemingway, Harry
2014-01-01
National electronic health records (EHR) are increasingly used for research, but identifying disease cases is challenging due to differences in the information captured between sources (e.g. primary and secondary care). Our objective was to provide a transparent, reproducible model for integrating these data using atrial fibrillation (AF), a chronic condition diagnosed and managed in multiple ways in different healthcare settings, as a case study. Potentially relevant codes for AF screening, diagnosis, and management were identified in four coding systems: Read (primary care diagnoses and procedures), British National Formulary (BNF; primary care prescriptions), ICD-10 (secondary care diagnoses) and OPCS-4 (secondary care procedures). From these we developed a phenotype algorithm via expert review and analysis of linked EHR data from 1998 to 2010 for a cohort of 2.14 million UK patients aged ≥ 30 years. The cohort was also used to evaluate the phenotype by examining associations between incident AF and known risk factors. The phenotype algorithm incorporated 286 codes: 201 Read, 63 BNF, 18 ICD-10, and four OPCS-4. Incident AF diagnoses were recorded for 72,793 patients, but only 39.6% (N = 28,795) were recorded in both primary and secondary care. An additional 7,468 potential cases were inferred from data on treatment and pre-existing conditions. The proportion of cases identified from each source differed by diagnosis age; inferred diagnoses contributed a greater proportion of younger cases (≤ 60 years), while older patients (≥ 80 years) were mainly diagnosed in secondary care. Associations of risk factors (hypertension, myocardial infarction, heart failure) with incident AF defined using different EHR sources were comparable in magnitude to those from traditional consented cohorts. A single EHR source is not sufficient to identify all patients, nor will it provide a representative sample. Combining multiple data sources and integrating information on treatment and comorbid conditions can substantially improve case identification.
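The shape of such a phenotype algorithm is simple set logic over per-source code lists, with treatment-based inference as a fallback. A sketch under stated assumptions: I48 is the real ICD-10 AF family, but the Read and BNF entries below are placeholders, and the real algorithm weighs 286 codes rather than these few.

```python
# Tiny illustrative code lists; the paper's algorithm uses 286 codes.
AF_ICD10 = {"I48", "I48.0", "I48.1"}
AF_READ = {"G573.", "G5730"}          # hypothetical Read code list
AF_BNF = {"0208020V0"}                # hypothetical anticoagulant BNF code

def classify(patient):
    in_secondary = bool(patient["icd10"] & AF_ICD10)
    in_primary = bool(patient["read"] & AF_READ)
    if in_primary and in_secondary:
        return "diagnosed in both primary and secondary care"
    if in_primary or in_secondary:
        return "diagnosed in one source"
    if patient["bnf"] & AF_BNF:       # treatment-based inference fallback
        return "inferred from treatment"
    return "not a case"

p = {"icd10": {"I48"}, "read": set(), "bnf": {"0208020V0"}}
print(classify(p))                    # -> diagnosed in one source
```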
Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System
NASA Technical Reports Server (NTRS)
Taft, James R.
2000-01-01
The shared-memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames, has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256-CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16-CPU C90 system. All of this was achieved without any major modification to the original vector-based code. The OVERFLOW-MLP code is now in production on the in-house Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512-CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next-generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16-CPU C90. At this rate, expected workloads would require over 100 C90 CPU-years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs. It is one of the largest consumers of NASA supercomputing cycles, and large simulations of highly resolved full aircraft are routinely undertaken. Typical large problems might require hundreds of Cray C90 CPU hours to complete. The dramatic performance gains with the 256-CPU steger system are exciting: obtaining results in hours instead of months is revolutionizing the way in which aircraft manufacturers are looking at future aircraft simulation work. Figure 2 below is a current state-of-the-art plot of OVERFLOW-MLP performance on the 512-CPU Lomax system. As can be seen, the chart indicates that OVERFLOW-MLP continues to scale linearly with CPU count up to 512 CPUs on a large 35-million-point full-aircraft RANS simulation. At this point performance is such that a fully converged simulation of 2500 time steps is completed in less than 2 hours of elapsed time. Further work over the next few weeks will improve the performance of this code even further. The LAURA code has been converted to the MLP format as well. This code is currently being optimized for the 512-CPU system. Performance statistics indicate that the goal of 100 GFLOP/s will be achieved by year's end. This amounts to 20x the 16-CPU C90 result and strongly demonstrates the viability of the new parallel systems in rapidly solving very large simulations in a production environment.
Park, Seong C; Finnell, John T
2012-01-01
In 2009, Indianapolis launched an electronic medical record system within their ambulances and started to exchange patient data with the Indiana Network for Patient Care (INPC). This unique system allows EMS personnel to get important information prior to the patient's arrival at the hospital. In this descriptive study, we found EMS personnel requested patient data on 14% of all transports, with a "success" match rate of 46% and a match "failure" rate of 17%. The three major factors causing match "failure" were ZIP code (55%), patient name (22%), and birth date (12%). We conclude that the ZIP code matching process needs to be improved by limiting ZIP codes to 5 digits instead of using ZIP+4 codes. Non-ZIP code identifiers may be a better choice due to inaccuracies and changes of the ZIP code in a patient's record.
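The suggested fix, truncating ZIP+4 values to the 5-digit ZIP before matching, is simple to state in code. A minimal sketch, with hypothetical inputs:

```python
import re

def normalize_zip(zip_code: str) -> str:
    """Reduce ZIP+4 forms such as '46202-5149' (or '462025149') to '46202'."""
    digits = re.sub(r"\D", "", zip_code or "")
    return digits[:5]

def zips_match(ems_zip: str, registry_zip: str) -> bool:
    a, b = normalize_zip(ems_zip), normalize_zip(registry_zip)
    return bool(a) and a == b

# A ZIP+4 value no longer defeats the match against a 5-digit record:
assert zips_match("46202-5149", "46202")
```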
1974-07-31
Multiple scoring regions are permitted and these may be either finite volume regions or point detectors or both. Other scores of interest, e.g., heating, count rates, etc., are calculated as functions of energy, time and position.
RETRACTED — PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2012-11-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method for mitigating polarization mode dispersion (PMD) in high-speed optical fiber communication systems. In this paper, low-density parity-check (LDPC) codes, one of the most promising FEC codes, are introduced into the PMD mitigation scheme with D-FPSs to achieve better performance. The scrambling speed of the FPS for an LDPC (2040, 1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes yields an incremental improvement in error correction, with a PMD tolerance of 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a viable substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
PMD mitigation through interleaving LDPC codes with polarization scramblers
NASA Astrophysics Data System (ADS)
Han, Dahai; Chen, Haoran; Xi, Lixia
2013-09-01
The combination of forward error correction (FEC) and distributed fast polarization scramblers (D-FPSs) has been shown to be an effective method for mitigating polarization mode dispersion (PMD) in high-speed optical fiber communication systems. In this article, low-density parity-check (LDPC) codes, one of the most promising FEC codes, are introduced into the PMD mitigation scheme with D-FPSs to achieve better performance. The scrambling speed of the FPS for an LDPC (2040, 1903) coded system is discussed, and a reasonable speed of 10 MHz is obtained from the simulation results. For easy application in practical large-scale integrated (LSI) circuits, the number of iterations in decoding the LDPC codes is also investigated. The PMD tolerance and cut-off optical signal-to-noise ratio (OSNR) of the LDPC codes are compared with Reed-Solomon (RS) codes under different conditions. In the simulation, interleaving the LDPC codes yields an incremental improvement in error correction, with a PMD tolerance of 10 ps at OSNR = 11.4 dB. The results show that LDPC codes are a viable substitute for traditional RS codes with D-FPSs, and all of the executable code files are open to researchers who have a practical LSI platform for PMD mitigation.
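Both records above pair LDPC coding with interleaving so that error bursts (e.g. brief intervals of poor polarization state between scrambler updates) are spread across codewords. A generic block interleaver illustrates the idea; this is a sketch, not the authors' simulation code:

```python
import numpy as np

def block_interleave(bits: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Write row-wise into a rows x cols array, read out column-wise, so a
    burst of consecutive channel errors lands in different codewords."""
    assert bits.size == rows * cols
    return bits.reshape(rows, cols).T.ravel()

def block_deinterleave(bits: np.ndarray, rows: int, cols: int) -> np.ndarray:
    """Invert block_interleave."""
    assert bits.size == rows * cols
    return bits.reshape(cols, rows).T.ravel()

# Round trip on a toy block (rows = codewords, cols = symbols per codeword):
data = np.arange(12)
coded = block_interleave(data, 3, 4)
assert np.array_equal(block_deinterleave(coded, 3, 4), data)
```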
TU-AB-BRC-08: Egs-brachy, a Fast and Versatile Monte Carlo Code for Brachytherapy Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chamberland, M; Taylor, R; Rogers, D
2016-06-15
Purpose: To introduce egs-brachy, a new, fast, and versatile Monte Carlo code for brachytherapy applications. Methods: egs-brachy is an EGSnrc user-code based on the EGSnrc C++ class library (egs++). Complex phantom, applicator, and source model geometries are built using the egs++ geometry module. egs-brachy uses a tracklength estimator to score collision kerma in voxels. Interaction, spectrum, energy fluence, and phase space scoring are also implemented. Phase space sources and particle recycling may be used to improve simulation efficiency. HDR treatments (e.g. stepping a source through dwell positions) can be simulated. Standard brachytherapy seeds, as well as electron and miniature x-ray tube sources, are fully modelled. Variance reduction techniques for electron source simulations are implemented (bremsstrahlung cross section enhancement, uniform bremsstrahlung splitting, and Russian roulette). TG-43 parameters of seeds are computed and compared to published values. Example simulations of various treatments are carried out on a single 2.5 GHz Intel Xeon E5-2680 v3 processor core. Results: TG-43 parameters calculated with egs-brachy show excellent agreement with published values. Using a phase space source, 2% average statistical uncertainty in the PTV ((2 mm)³ voxels) can be achieved in 10 s for 100 ¹²⁵I or ¹⁰³Pd seeds in a 36.2 cm³ prostate PTV, 31 s for 64 ¹⁰³Pd seeds in a 64 cm³ breast PTV, and 56 s for a miniature x-ray tube in a 27 cm³ breast PTV. Comparable uncertainty is reached in 12 s in a (1 mm)³ water voxel 5 mm away from a COMS 16 mm eye plaque with 13 ¹⁰³Pd seeds. Conclusion: The accuracy of egs-brachy has been demonstrated through benchmarking calculations. Calculation times are sufficiently fast to allow full MC simulations for routine treatment planning for diverse brachytherapy treatments (LDR, HDR, miniature x-ray tube). egs-brachy will be available as free and open-source software to the medical physics research community. This work is partially funded by the Canada Research Chairs program, the Natural Sciences and Engineering Research Council of Canada, and the Ontario Ministry of Research and Innovation (Ontario Early Researcher Award).
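The abstract names a tracklength estimator for collision kerma without spelling it out: each photon contributes E·(μen/ρ)(E)·L/V to every voxel it crosses, where L is the track length in that voxel. A schematic sketch under that reading, with illustrative names only (not egs-brachy's API):

```python
def score_tracklength_kerma(kerma, segments, mu_en_over_rho, voxel_volume):
    """Accumulate per-voxel collision kerma from one photon's track.

    kerma: dict voxel_index -> score accumulated so far (per history)
    segments: iterable of (voxel_index, energy_MeV, tracklength_cm) pairs
        produced by the transport/geometry engine for this photon
    mu_en_over_rho: callable E -> mass energy-absorption coeff. (cm^2/g)
    voxel_volume: voxel volume (cm^3)
    """
    for voxel, energy, length in segments:
        # K ~ sum over track segments of E * (mu_en/rho)(E) * L / V
        kerma[voxel] = kerma.get(voxel, 0.0) + \
            energy * mu_en_over_rho(energy) * length / voxel_volume
```

The appeal of this estimator is that every voxel a photon crosses contributes to the score, not just voxels where an interaction happens, which is what makes the short run times quoted above plausible.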
Association between implementation of a code stroke system and poststroke epilepsy.
Chen, Ziyi; Churilov, Leonid; Chen, Ziyuan; Naylor, Jillian; Koome, Miriam; Yan, Bernard; Kwan, Patrick
2018-03-27
We aimed to investigate the effect of a code stroke system on the development of poststroke epilepsy. We retrospectively analyzed consecutive patients treated with IV thrombolysis under or outside the code stroke system between 2003 and 2012. Patients were followed up for at least 2 years or until death. Factors with p < 0.1 in univariate comparisons were selected for multivariable logistic and Cox regression. A total of 409 patients met the eligibility criteria. Their median age at stroke onset was 75 years (interquartile range 64-83 years); 220 (53.8%) were male. The median follow-up duration was 1,074 days (interquartile range 119-1,671 days). Thirty-two patients (7.8%) had poststroke seizures during follow-up, comprising 7 (1.7%) with acute symptomatic seizures and 25 (6.1%) with late-onset seizures. Twenty-six patients (6.4%) fulfilled the definition of poststroke epilepsy. Three hundred eighteen patients (77.8%) were treated with the code stroke system while 91 (22.2%) were not. After adjustment for age and stroke etiology, use of the code stroke system was associated with decreased odds of poststroke epilepsy (odds ratio = 0.36, 95% confidence interval 0.14-0.87, p = 0.024). Cox regression showed lower adjusted hazard rates for poststroke epilepsy within 5 years for patients managed under the code stroke system (hazard ratio = 0.60, 95% confidence interval 0.47-0.79, p < 0.001). The code stroke system was associated with reduced odds and instantaneous risk of poststroke epilepsy. Further studies are required to identify the contribution of the individual components and mechanisms against epileptogenesis after stroke. This study provides Class III evidence that for people with acute ischemic stroke, implementation of a code stroke system reduces the risk of poststroke epilepsy. © 2018 American Academy of Neurology.
The NJOY Nuclear Data Processing System, Version 2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macfarlane, Robert; Muir, Douglas W.; Boicourt, R. M.
The NJOY Nuclear Data Processing System, version 2016, is a comprehensive computer code package for producing pointwise and multigroup cross sections and related quantities from evaluated nuclear data in the ENDF-4 through ENDF-6 legacy card-image formats. NJOY works with evaluated files for incident neutrons, photons, and charged particles, producing libraries for a wide variety of particle transport and reactor analysis codes.
A study of tungsten spectra using large helical device and compact electron beam ion trap in NIFS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morita, S.; Goto, M.; Murakami, I.
2013-07-11
Tungsten spectra have been observed from the Large Helical Device (LHD) and the Compact electron Beam Ion Trap (CoBIT) in wavelength ranges from visible to EUV. The EUV spectra with unresolved transition arrays (UTA), e.g., 6g–4f, 5g–4f, 5f–4d and 5p–4d transitions of W²⁴⁺–W³³⁺, measured from LHD plasmas are compared with those measured from CoBIT with a monoenergetic electron beam (≤2 keV). The tungsten spectra from LHD are well analyzed based on the knowledge from CoBIT tungsten spectra. A C-R model code has been developed to explain the UTA spectra in detail. Radial profiles of EUV spectra from highly ionized tungsten ions have been measured and analyzed by an impurity transport simulation code with the ADPAK atomic database code to examine the ionization balance determined by ionization and recombination rate coefficients. As a first trial, analysis of the tungsten density in LHD plasmas is attempted from the radial profile of the Zn-like W XLV (W⁴⁴⁺) 4p–4s transition at 60.9 Å based on the emission rate coefficient calculated with the HULLAC code. As a result, a total tungsten ion density of 3.5 × 10¹⁰ cm⁻³ at the plasma center is reasonably obtained. In order to observe spectra from tungsten ions in lower ionized charge stages, which can give useful information on the tungsten influx in fusion plasmas, the ablation cloud of an impurity pellet is directly measured with visible spectroscopy. Many spectra from neutral and singly ionized tungsten are observed and some of them are identified. A magnetic forbidden line from highly ionized tungsten ions has been examined, and the Cd-like W XXVII (W²⁶⁺) line at 3893.7 Å is identified as the ground-term fine-structure transition 4f² ³H₅–³H₄. The possibility of α-particle diagnostics in D-T burning plasmas using the magnetic forbidden line is discussed.
Kopf, Matthias; Klähn, Stephan; Scholz, Ingeborg; Hess, Wolfgang R.; Voß, Björn
2015-01-01
In all studied organisms, a substantial portion of the transcriptome consists of non-coding RNAs that frequently execute regulatory functions. Here, we have compared the primary transcriptomes of the cyanobacteria Synechocystis sp. PCC 6714 and PCC 6803 under 10 different conditions. These strains share 2854 protein-coding genes and a 16S rRNA identity of 99.4%, indicating their close relatedness. Conserved major transcriptional start sites (TSSs) give rise to non-coding transcripts within the sigB gene, from the 5′UTRs of cmpA and isiA, and 168 loci in antisense orientation. Distinct differences include single nucleotide polymorphisms rendering promoters inactive in one of the strains, e.g., for cmpR and for the asRNA PsbA2R. Based on the genome-wide mapped location, regulation and classification of TSSs, non-coding transcripts were identified as the most dynamic component of the transcriptome. We identified a class of mRNAs that originate by read-through from an sRNA that accumulates as a discrete and abundant transcript while also serving as the 5′UTR. Such an sRNA/mRNA structure, which we name ‘actuaton’, represents another way for bacteria to remodel their transcriptional network. Our findings support the hypothesis that variations in the non-coding transcriptome constitute a major evolutionary element of inter-strain divergence and capability for physiological adaptation. PMID:25902393
Bruni, Rebecca A; Laupacis, Andreas; Levinson, Wendy; Martin, Douglas K
2007-11-16
As no health system can afford to provide all possible services and treatments for the people it serves, each system must set priorities. Priority setting decision makers are increasingly involving the public in policy making. This study focuses on public engagement in a key priority setting context that plagues every health system around the world: wait list management. The purpose of this study is to describe and evaluate priority setting for the Ontario Wait Time Strategy, with special attention to public engagement. This study was conducted at the Ontario Wait Time Strategy in Ontario, Canada which is part of a Federal-Territorial-Provincial initiative to improve access and reduce wait times in five areas: cancer, cardiac, sight restoration, joint replacements, and diagnostic imaging. There were two sources of data: (1) over 25 documents (e.g. strategic planning reports, public updates), and (2) 28 one-on-one interviews with informants (e.g. OWTS participants, MOHLTC representatives, clinicians, patient advocates). Analysis used a modified thematic technique in three phases: open coding, axial coding, and evaluation. The Ontario Wait Time Strategy partially meets the four conditions of 'accountability for reasonableness'. The public was not directly involved in the priority setting activities of the Ontario Wait Time Strategy. Study participants identified both benefits (supporting the initiative, experts of the lived experience, a publicly funded system and sustainability of the healthcare system) and concerns (personal biases, lack of interest to be involved, time constraints, and level of technicality) for public involvement in the Ontario Wait Time Strategy. Additionally, the participants identified concern for the consequences (sustainability, cannibalism, and a class system) resulting from the Ontario Wait Times Strategy. We described and evaluated a wait time management initiative (the Ontario Wait Time Strategy) with special attention to public engagement, and provided a concrete plan to operationalize a strategy for improving public involvement in this, and other, wait time initiatives.
Bruni, Rebecca A; Laupacis, Andreas; Levinson, Wendy; Martin, Douglas K
2007-01-01
Background As no health system can afford to provide all possible services and treatments for the people it serves, each system must set priorities. Priority setting decision makers are increasingly involving the public in policy making. This study focuses on public engagement in a key priority setting context that plagues every health system around the world: wait list management. The purpose of this study is to describe and evaluate priority setting for the Ontario Wait Time Strategy, with special attention to public engagement. Methods This study was conducted at the Ontario Wait Time Strategy in Ontario, Canada which is part of a Federal-Territorial-Provincial initiative to improve access and reduce wait times in five areas: cancer, cardiac, sight restoration, joint replacements, and diagnostic imaging. There were two sources of data: (1) over 25 documents (e.g. strategic planning reports, public updates), and (2) 28 one-on-one interviews with informants (e.g. OWTS participants, MOHLTC representatives, clinicians, patient advocates). Analysis used a modified thematic technique in three phases: open coding, axial coding, and evaluation. Results The Ontario Wait Time Strategy partially meets the four conditions of 'accountability for reasonableness'. The public was not directly involved in the priority setting activities of the Ontario Wait Time Strategy. Study participants identified both benefits (supporting the initiative, experts of the lived experience, a publicly funded system and sustainability of the healthcare system) and concerns (personal biases, lack of interest to be involved, time constraints, and level of technicality) for public involvement in the Ontario Wait Time Strategy. Additionally, the participants identified concern for the consequences (sustainability, cannibalism, and a class system) resulting from the Ontario Wait Times Strategy. Conclusion We described and evaluated a wait time management initiative (the Ontario Wait Time Strategy) with special attention to public engagement, and provided a concrete plan to operationalize a strategy for improving public involvement in this, and other, wait time initiatives. PMID:18021393
Current and anticipated uses of thermal-hydraulic codes in Germany
DOE Office of Scientific and Technical Information (OSTI.GOV)
Teschendorff, V.; Sommer, F.; Depisch, F.
1997-07-01
In Germany, one third of the electrical power is generated by nuclear plants. ATHLET and S-RELAP5 are successfully applied for safety analyses of the existing PWR and BWR reactors and possible future reactors, e.g. EPR. Continuous development and assessment of thermal-hydraulic codes are necessary in order to meet present and future needs of licensing organizations, utilities, and vendors. Desired improvements include thermal-hydraulic models, multi-dimensional simulation, computational speed, interfaces to coupled codes, and code architecture. Real-time capability will be essential for application in full-scope simulators. Comprehensive code validation and quantification of uncertainties are prerequisites for future best-estimate analyses.
Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.
Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen
2014-02-01
The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relatively static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are first classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio of the AVC (MPEG-4 Advanced Video Coding) high profile on surveillance videos, with only a slight increase in encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
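A toy sketch of the block-classification and background-difference steps described above, assuming a background frame has already been modeled from the input; thresholds and names are invented for illustration:

```python
import numpy as np

def classify_block(block, bg_block, pix_thresh=20, fg_frac=0.5):
    """Label a block by comparing it with the modeled background frame."""
    changed = np.abs(block.astype(int) - bg_block.astype(int)) > pix_thresh
    ratio = changed.mean()
    if ratio < 0.05:
        return "background"   # use background reference prediction (BRP)
    if ratio < fg_frac:
        return "hybrid"       # use background difference prediction (BDP)
    return "foreground"       # fall back to conventional inter prediction

def bdp_residual(block, bg_block):
    """BDP codes the block in the background-difference domain, so static
    background pixels are subtracted out before prediction."""
    return block.astype(int) - bg_block.astype(int)
```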
Hodge, Meryl C; Dixon, Stephanie; Garg, Amit X; Clemens, Kristin K
2017-06-01
To determine the positive predictive value and sensitivity of an International Statistical Classification of Diseases and Related Health Problems, 10th Revision, coding algorithm for hospital encounters concerning hypoglycemia. We carried out 2 retrospective studies in Ontario, Canada. We examined medical records from 2002 through 2014, in which older adults (mean age, 76) were assigned at least 1 code for hypoglycemia (E15, E160, E161, E162, E1063, E1163, E1363, E1463). The positive predictive value of the algorithm was calculated using a gold-standard definition (blood glucose value <4 mmol/L or physician diagnosis of hypoglycemia). To determine the algorithm's sensitivity, we used linked healthcare databases to identify older adults (mean age, 77) with laboratory plasma glucose values <4 mmol/L during a hospital encounter that took place between 2003 and 2011. We assessed how frequently a code for hypoglycemia was present. We also examined the algorithm's performance in differing clinical settings (e.g. inpatient vs. emergency department, by hypoglycemia severity). The positive predictive value of the algorithm was 94.0% (95% confidence interval 89.3% to 97.0%), and its sensitivity was 12.7% (95% confidence interval 11.9% to 13.5%). It performed better in the emergency department and in cases of more severe hypoglycemia (plasma glucose values <3.5 mmol/L compared with ≥3.5 mmol/L). Our hypoglycemia algorithm has a high positive predictive value but is limited in sensitivity. Although we can be confident that older adults who are assigned 1 of these codes truly had a hypoglycemia event, many episodes will not be captured by studies using administrative databases. Copyright © 2017 Diabetes Canada. Published by Elsevier Inc. All rights reserved.
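The two reported metrics are simple ratios over the gold-standard classification. A minimal sketch; the example counts are chosen only to reproduce the quoted percentages and are not the study's actual cell counts:

```python
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

def sensitivity(tp: int, fn: int) -> float:
    """Sensitivity: TP / (TP + FN)."""
    return tp / (tp + fn)

# Illustrative counts only, chosen to match the reported figures:
print(f"PPV = {ppv(235, 15):.1%}")                   # 94.0%
print(f"sensitivity = {sensitivity(127, 873):.1%}")  # 12.7%
```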
DOE Office of Scientific and Technical Information (OSTI.GOV)
EMAM, M; Eldib, A; Lin, M
2014-06-01
Purpose: An in-house Monte Carlo based treatment planning system (MC TPS) has been developed for modulated electron radiation therapy (MERT). Our preliminary MERT planning experience called for a more user-friendly graphical user interface. The current work aimed to design graphical windows and tools to facilitate the contouring and planning process. Methods: Our in-house GUI MC TPS is built on a set of EGS4 user codes, namely MCPLAN and MCBEAM, in addition to an in-house optimization code named MCOPTIM. The patient virtual phantom is constructed using the tomographic images in DICOM format exported from clinical treatment planning systems (TPS). Treatment target volumes and critical structures are usually contoured on the clinical TPS and then sent as a structure set file. In our GUI program we developed a visualization tool to allow the planner to visualize the DICOM images and delineate the various structures. We implemented an option in our code for automatic contouring of the patient body and lungs. We also created an interface window displaying a three-dimensional representation of the target and a graphical representation of the treatment beams. Results: The new GUI features helped streamline the planning process. The implemented contouring option eliminated the need for performing this step on the clinical TPS. The auto-detection option for contouring the outer patient body and lungs was tested on patient CTs and was shown to be accurate compared with that of the clinical TPS. The three-dimensional representation of the target and the beams allows better selection of the gantry, collimator and couch angles. Conclusion: An in-house GUI program has been developed for more efficient MERT planning. The aiding tools implemented in the program save time and give better control of the planning process.
Low-cost coding of directivity information for the recording of musical instruments
NASA Astrophysics Data System (ADS)
Braasch, Jonas; Martens, William L.; Woszczyk, Wieslaw
2004-05-01
Most musical instruments radiate sound according to characteristic spatial directivity patterns. These patterns are usually not only strongly frequency dependent, but also time-variant functions of various parameters of the instrument, such as pitch and the playing technique applied (e.g., plucking versus bowing of string instruments). To capture the directivity information when recording an instrument, Warusfel and Misdariis (2001) proposed to record an instrument using four channels, one for the monopole and the others for three orthogonal dipole parts. In the new recording setup presented here, it is proposed to store one channel at a high sampling frequency, along with directivity information that is updated only every few milliseconds. Taking the binaural sluggishness of the human auditory system into account in this way provides a low-cost coding scheme for subsequent reproduction of time-variant directivity patterns.
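A sketch of the proposed format as a data layout: one full-rate audio channel plus monopole-and-three-dipole directivity weights refreshed only every few milliseconds. The rates and structure here are assumptions for illustration, not the authors' specification:

```python
import numpy as np

FS = 48_000        # assumed audio sampling rate (Hz)
UPDATE_MS = 5      # assumed directivity refresh interval (ms)
HOP = FS * UPDATE_MS // 1000   # audio samples per directivity frame (240)

def pack(audio: np.ndarray, directivity: np.ndarray) -> dict:
    """audio: (n,) full-rate mono signal; directivity: (n // HOP, 4) weights
    for the monopole and three orthogonal dipole components per frame."""
    assert directivity.shape == (len(audio) // HOP, 4)
    return {"fs": FS, "hop": HOP, "audio": audio, "directivity": directivity}

# Side-info cost: 4 coefficients at 200 updates/s against 48,000 samples/s,
# i.e. the directivity stream adds under 2% to the audio data rate.
```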
Enhanced decoding for the Galileo S-band mission
NASA Technical Reports Server (NTRS)
Dolinar, S.; Belongie, M.
1993-01-01
A coding system under consideration for the Galileo S-band low-gain antenna mission is a concatenated system using a variable redundancy Reed-Solomon outer code and a (14,1/4) convolutional inner code. The 8-bit Reed-Solomon symbols are interleaved to depth 8, and the eight 255-symbol codewords in each interleaved block have redundancies 64, 20, 20, 20, 64, 20, 20, and 20, respectively (or equivalently, the codewords have 191, 235, 235, 235, 191, 235, 235, and 235 8-bit information symbols, respectively). This concatenated code is to be decoded by an enhanced decoder that utilizes a maximum likelihood (Viterbi) convolutional decoder; a Reed-Solomon decoder capable of processing erasures; an algorithm for declaring erasures in undecoded codewords based on known erroneous symbols in neighboring decodable words; a second Viterbi decoding operation (redecoding) constrained to follow only paths consistent with the known symbols from previously decodable Reed-Solomon codewords; and a second Reed-Solomon decoding operation using the output from the Viterbi redecoder and additional erasure declarations to the extent possible. It is estimated that this code and decoder can achieve a decoded bit error rate of 1 × 10⁻⁷ at a concatenated code signal-to-noise ratio of 0.76 dB. By comparison, a threshold of 1.17 dB is required for a baseline coding system consisting of the same (14,1/4) convolutional code, a (255,223) Reed-Solomon code with constant redundancy 32 also interleaved to depth 8, a one-pass Viterbi decoder, and a Reed-Solomon decoder incapable of declaring or utilizing erasures. The relative gain of the enhanced system is thus 0.41 dB. It is predicted from analysis based on an assumption of infinite interleaving that the coding gain could be further improved by approximately 0.2 dB if four stages of Viterbi decoding and four levels of Reed-Solomon redundancy are permitted. Confirmation of this effect and specification of the optimum four-level redundancy profile for depth-8 interleaving is currently in progress.
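The redundancy profile quoted above fixes the information content and average rate of the outer code; a quick check under the stated 255-symbol, depth-8 structure (numbers from the abstract, helper code illustrative):

```python
# Parity symbols per codeword in one depth-8 interleaved block of
# 255-symbol Reed-Solomon codewords, as quoted in the abstract.
REDUNDANCY = [64, 20, 20, 20, 64, 20, 20, 20]

info_symbols = [255 - r for r in REDUNDANCY]
print(info_symbols)   # [191, 235, 235, 235, 191, 235, 235, 235]

# Average outer-code rate across the interleaved block:
rate = sum(info_symbols) / (255 * len(REDUNDANCY))
print(f"average RS rate = {rate:.3f}")   # 0.878, vs 223/255 = 0.875 baseline
```

The two high-redundancy codewords act as anchors: once decoded, their known symbols seed the erasure declarations and constrained Viterbi redecoding that recover the weaker codewords.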
Supporting Operational Data Assimilation Capabilities to the Research Community
NASA Astrophysics Data System (ADS)
Shao, H.; Hu, M.; Stark, D. R.; Zhou, C.; Beck, J.; Ge, G.
2017-12-01
The Developmental Testbed Center (DTC), in partnership with the National Centers for Environmental Prediction (NCEP) and other operational and research institutions, provides operational data assimilation capabilities to the research community and helps transition research advances to operations. The primary data assimilation systems currently supported by the DTC are the Gridpoint Statistical Interpolation (GSI) system and the National Oceanic and Atmospheric Administration (NOAA) Ensemble Kalman Filter (EnKF) system. GSI is a variational-based system being used for daily operations at NOAA, NCEP, the National Aeronautics and Space Administration, and other operational agencies. Recently, GSI has evolved into a four-dimensional EnVar system. Since 2009, the DTC has been releasing the GSI code to the research community annually and providing user support. In addition to GSI, the DTC began supporting the ensemble-based EnKF data assimilation system in 2015. EnKF shares the observation operator with GSI and therefore, like GSI, can assimilate both conventional and non-conventional data (e.g., satellite radiance). Currently, EnKF is being implemented as part of the GSI-based hybrid EnVar system for NCEP Global Forecast System operations. This paper will summarize the current code management and support framework for these two systems, followed by a description of available community services and facilities. Also presented is the pathway for researchers to contribute their development to the daily operations of these data assimilation systems.
26 CFR 1.441-1 - Period for computation of taxable income.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...
26 CFR 1.441-1 - Period for computation of taxable income.
Code of Federal Regulations, 2013 CFR
2013-04-01
... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...
26 CFR 1.441-1 - Period for computation of taxable income.
Code of Federal Regulations, 2012 CFR
2012-04-01
... Internal Revenue Code, and the regulations thereunder. (2) Length of taxable year. Except as otherwise provided in the Internal Revenue Code and the regulations thereunder (e.g., § 1.441-2 regarding 52-53-week... and definitions. The general rules and definitions in this paragraph (b) apply for purposes of...
Tools for Designing and Analyzing Structures
NASA Technical Reports Server (NTRS)
Luz, Paul L.
2005-01-01
Structural Design and Analysis Toolset is a collection of approximately 26 Microsoft Excel spreadsheet programs, each of which performs calculations within a different subdiscipline of structural design and analysis. These programs present input and output data in user-friendly, menu-driven formats. Although these programs cannot solve complex cases like those treated by larger finite element codes, they yield solutions to numerous common problems more rapidly than the finite element codes, thereby making it possible to quickly perform multiple preliminary analyses, e.g., to establish approximate limits prior to detailed analyses by the larger finite element codes. These programs perform different types of calculations, as follows: 1. determination of geometric properties for a variety of standard structural components; 2. analysis of static, vibrational, and thermal-gradient loads and deflections in certain structures (mostly beams and, in the case of thermal gradients, mirrors); 3. kinetic energies of fans; 4. detailed analysis of stress and buckling in beams, plates, columns, and a variety of shell structures; and 5. temperature-dependent properties of materials, including figures of merit that characterize strength, stiffness, and deformation response to thermal gradients.
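As an example of the kind of quick closed-form calculation such spreadsheet programs automate (item 2 in the list above), consider the tip deflection of an end-loaded cantilever, δ = PL³/(3EI); the inputs below are arbitrary illustration values:

```python
def cantilever_tip_deflection(P: float, L: float, E: float, I: float) -> float:
    """delta = P * L^3 / (3 * E * I) for an end-loaded cantilever.
    P: end load (N); L: length (m); E: Young's modulus (Pa);
    I: area moment of inertia (m^4). Returns deflection (m)."""
    return P * L**3 / (3 * E * I)

# 1 kN at the tip of a 2 m aluminium beam (E = 69 GPa, I = 1e-6 m^4):
print(f"{cantilever_tip_deflection(1e3, 2.0, 69e9, 1e-6) * 1e3:.1f} mm")  # 38.6 mm
```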
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jankovsky, Zachary Kyle; Denman, Matthew R.
It is difficult to assess the consequences of a transient in a sodium-cooled fast reactor (SFR) using traditional probabilistic risk assessment (PRA) methods, as numerous safety-related systems have passive characteristics. Often there is significant dependence on the value of continuous stochastic parameters rather than binary success/failure determinations. One form of dynamic PRA uses a system simulator to represent the progression of a transient, tracking events through time in a discrete dynamic event tree (DDET). In order to function in a DDET environment, a simulator must have characteristics that make it amenable to changing physical parameters midway through the analysis. The SAS4A SFR system analysis code did not have these characteristics as received. This report describes the code modifications made to allow dynamic operation as well as the linking to a Sandia DDET driver code. A test case is briefly described to demonstrate the utility of the changes.
New quantum codes derived from a family of antiprimitive BCH codes
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin
The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q²-ary BCH codes with length n = q^(2m) + 1 (also called antiprimitive BCH codes in the literature), where q ≥ 4 is a power of 2 and m ≥ 2. By a detailed analysis of some useful properties of q²-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q²-ary primitive BCH codes. Consequently, via the Hermitian construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
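The coset analysis the abstract refers to is easy to reproduce for small parameters. A sketch computing q²-ary cyclotomic cosets modulo n = q^(2m) + 1; the parameter choice (q = 4, m = 2, so q² = 16 and n = 257) is one admissible example, not a result from the paper:

```python
def cyclotomic_cosets(q2: int, n: int):
    """Partition {0, ..., n-1} into q2-ary cyclotomic cosets
    C_s = {s, s*q2, s*q2^2, ...} mod n."""
    seen, cosets = set(), []
    for s in range(n):
        if s in seen:
            continue
        coset, x = [], s
        while x not in coset:
            coset.append(x)
            x = (x * q2) % n
        seen.update(coset)
        cosets.append(sorted(coset))
    return cosets

cosets = cyclotomic_cosets(16, 4**4 + 1)   # q = 4, m = 2, n = 257
print(len(cosets), cosets[1])   # 65 cosets; coset of 1 is [1, 16, 241, 256]
```

Dual-containing conditions of the kind cited above amount to checking which cosets can appear among a code's defining set without colliding with the negatives of their q-power images.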
Subband/transform functions for image processing
NASA Technical Reports Server (NTRS)
Glover, Daniel
1993-01-01
Functions for image data processing written for use with the MATLAB(TM) software package are presented. These functions provide the capability to transform image data with block transformations (such as the Walsh Hadamard) and to produce spatial frequency subbands of the transformed data. Block transforms are equivalent to simple subband systems. The transform coefficients are reordered using a simple permutation to give subbands. The low frequency subband is a low resolution version of the original image, while the higher frequency subbands contain edge information. The transform functions can be cascaded to provide further decomposition into more subbands. If the cascade is applied to all four of the first stage subbands (in the case of a four band decomposition), then a uniform structure of sixteen bands is obtained. If the cascade is applied only to the low frequency subband, an octave structure of seven bands results. Functions for the inverse transforms are also given. These functions can be used for image data compression systems. The transforms do not in themselves produce data compression, but prepare the data for quantization and compression. Sample quantization functions for subbands are also given. A typical compression approach is to subband the image data, quantize it, then use statistical coding (e.g., run-length coding followed by Huffman coding) for compression. Contour plots of image data and subbanded data are shown.
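A minimal 1-D sketch of the block-transform-to-subband idea described above; the MATLAB package works on 2-D image data, and the function names here are illustrative:

```python
import numpy as np

# Orthonormal 4-point Walsh-Hadamard transform, rows in sequency order.
H4 = np.array([[1,  1,  1,  1],
               [1,  1, -1, -1],
               [1, -1, -1,  1],
               [1, -1,  1, -1]]) / 2.0

def block_transform(x: np.ndarray) -> np.ndarray:
    """Apply the 4-point transform to consecutive length-4 blocks of x."""
    return x.reshape(-1, 4) @ H4.T

def to_subbands(coeffs: np.ndarray) -> np.ndarray:
    """Permute block coefficients into 4 subbands: subband k collects
    coefficient k from every block; subband 0 is a low-resolution
    version of the signal, the rest carry edge detail."""
    return coeffs.T.copy()

x = np.arange(16.0)
subbands = to_subbands(block_transform(x))
print(subbands[0])   # lowpass subband (block sums / 2): [ 3. 11. 19. 27.]
```

Cascading amounts to feeding a subband (all four, or just subband 0) back through the same two steps, giving the uniform or octave structures the abstract describes.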
RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2012-06-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2.
RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, G.; Epiney, A. S.
2012-07-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the RELAP5-3D model developed for Exercise 2. (authors)
Amodal processing in human prefrontal cortex.
Tamber-Rosenau, Benjamin J; Dux, Paul E; Tombu, Michael N; Asplund, Christopher L; Marois, René
2013-07-10
Information enters the cortex via modality-specific sensory regions, whereas actions are produced by modality-specific motor regions. Intervening central stages of information processing map sensation to behavior. Humans perform this central processing in a flexible, abstract manner such that sensory information in any modality can lead to response via any motor system. Cognitive theories account for such flexible behavior by positing amodal central information processing (e.g., "central executive," Baddeley and Hitch, 1974; "supervisory attentional system," Norman and Shallice, 1986; "response selection bottleneck," Pashler, 1994). However, the extent to which brain regions embodying central mechanisms of information processing are amodal remains unclear. Here we apply multivariate pattern analysis to functional magnetic resonance imaging (fMRI) data to compare response selection, a cognitive process widely believed to recruit an amodal central resource across sensory and motor modalities. We show that most frontal and parietal cortical areas known to activate across a wide variety of tasks code modality, casting doubt on the notion that these regions embody a central processor devoid of modality representation. Importantly, regions of anterior insula and dorsolateral prefrontal cortex consistently failed to code modality across four experiments. However, these areas code at least one other task dimension, process (instantiated as response selection vs response execution), ensuring that failure to find coding of modality is not driven by insensitivity of multivariate pattern analysis in these regions. We conclude that abstract encoding of information modality is primarily a property of subregions of the prefrontal cortex.
Iparraguirre, Leire; Muñoz-Culla, Maider; Prada-Luengo, Iñigo; Castillo-Triviño, Tamara; Olascoaga, Javier; Otaegui, David
2017-09-15
Multiple sclerosis is an autoimmune disease, with higher prevalence in women, in whom the immune system is dysregulated. This dysregulation has been shown to correlate with changes in transcriptome expression as well as in gene-expression regulators, such as non-coding RNAs (e.g. microRNAs). Indeed, some of these have been suggested as biomarkers for multiple sclerosis, even though few biomarkers have reached clinical practice. Recently, a novel family of non-coding RNAs, circular RNAs, has emerged as a new player in the complex network of gene-expression regulation. A microRNA regulation function through a 'sponge system' and an RNA splicing regulation function have been proposed for the circular RNAs. This regulatory role, together with their high stability in biofluids, makes them seemingly good candidates as biomarkers. Given the dysregulation of both the protein-coding and non-coding transcriptome that has been reported in multiple sclerosis patients, we hypothesised that circular RNA expression may also be altered. Therefore, we carried out expression profiling of 13,617 circular RNAs in peripheral blood leucocytes from multiple sclerosis patients and healthy controls, finding 406 differentially expressed (P-value < 0.05, fold change > 1.5), and demonstrate after validation that circ_0005402 and circ_0035560 are underexpressed in multiple sclerosis patients and could be used as biomarkers of the disease. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DCU@TRECMed 2012: Using Ad-Hoc Baselines for Domain-Specific Retrieval
2012-11-01
description to extend the query, for example: Patients with complicated GERD who receive endoscopy will be extended with Gastroesophageal reflux disease ... Diseases and Related Health Problems, version 9) for the patient's admission or discharge status [1, 5]; treating negation (e.g. negative test results or ...) ... codes were mapped to a description of the code, usually a short phrase/sentence. For instance, the ICD9 code 253.5 corresponds to the disease Diabetes
A scaling relationship for impact-induced melt volume
NASA Astrophysics Data System (ADS)
Nakajima, M.; Rubie, D. C.; Melosh, H., IV; Jacobson, S. A.; Golabek, G.; Nimmo, F.; Morbidelli, A.
2016-12-01
During the late stages of planetary accretion, protoplanets experience a number of giant impacts and extensive mantle melting. The impactor's core sinks through the molten part of the target mantle (magma ocean) and experiences metal-silicate partitioning (e.g., Stevenson, 1990). For understanding the chemical evolution of the planetary mantle and core, we need to determine the impact-induced melt volume because the partitioning strongly depends on the ranges of the pressures and temperatures within the magma ocean. Previous studies have investigated the effects of small impacts (i.e. impact cratering) on melt volume, but the effects of giant impacts are not yet well understood. Here, we perform giant impact simulations to derive a scaling law for melt volume as a function of impact velocity, impact angle, and impactor-to-target mass ratio. We use two different numerical codes, namely a smoothed particle hydrodynamics (SPH) code we developed (a particle method) and the iSALE code (a grid-based method), to compare their outcomes. Our simulations show that these two codes generally agree as long as the same equation of state is used. We also find that some of the previous models developed for small impacts (e.g., Abramov et al., 2012) overestimate giant impact melt volume by orders of magnitude, partly because these models do not consider the self-gravity of the impacting bodies. Therefore, these models may not be extrapolated to large impacts. Our simulations also show that melt volume can be scaled by the total mass of the system. In this presentation, we further discuss geochemical implications for giant impacts on planets, including Earth and Mars.
Hjerpe, Per; Boström, Kristina Bengtsson; Lindblad, Ulf; Merlo, Juan
2012-12-01
To investigate the impact on ICD coding behaviour of a new case-mix reimbursement system based on coded patient diagnoses. The main hypothesis was that after the introduction of the new system the coding of chronic diseases like hypertension and cancer would increase and the variance in propensity for coding would decrease on both physician and health care centre (HCC) levels. Cross-sectional multilevel logistic regression analyses were performed in periods covering the time before and after the introduction of the new reimbursement system. Skaraborg primary care, Sweden. All patients (n = 76 546 to 79 826) 50 years of age and older visiting 468 to 627 physicians at the 22 public HCCs in five consecutive time periods of one year each. Registered codes for hypertension and cancer diseases in Skaraborg primary care database (SPCD). After the introduction of the new reimbursement system the adjusted prevalence of hypertension and cancer in SPCD increased from 17.4% to 32.2% and from 0.79% to 2.32%, respectively, probably partly due to an increased diagnosis coding of indirect patient contacts. The total variance in the propensity for coding declined simultaneously at the physician level for both diagnosis groups. Changes in the healthcare reimbursement system may directly influence the contents of a research database that retrieves data from clinical practice. This should be taken into account when using such a database for research purposes, and the data should be validated for each diagnosis.
NASA Astrophysics Data System (ADS)
Pryadko, Leonid P.; Dumer, Ilya; Kovalev, Alexey A.
2015-03-01
We construct a lower (existence) bound for the threshold of scalable quantum computation which is applicable to all stabilizer codes, including degenerate quantum codes with sublinear distance scaling. The threshold is based on enumerating irreducible operators in the normalizer of the code, i.e., those that cannot be decomposed into a product of two such operators with non-overlapping support. For quantum LDPC codes with logarithmic or power-law distances, we get threshold values which are parametrically better than the existing analytical bound based on percolation. The new bound also gives a finite threshold when applied to other families of degenerate quantum codes, e.g., the concatenated codes. This research was supported in part by the NSF Grant PHY-1416578 and by the ARO Grant W911NF-11-1-0027.
Infant differential behavioral responding to discrete emotions.
Walle, Eric A; Reschke, Peter J; Camras, Linda A; Campos, Joseph J
2017-10-01
Emotional communication regulates the behaviors of social partners. Research on individuals' responding to others' emotions typically compares responses to a single negative emotion with responses to a neutral or positive emotion. Furthermore, coding of such responses routinely measures surface-level features of the behavior (e.g., approach vs. avoidance) rather than its underlying function (e.g., the goal of the approach or avoidant behavior). This investigation examined infants' responding to others' emotional displays across 5 discrete emotions: joy, sadness, fear, anger, and disgust. Specifically, 16-, 19-, and 24-month-old infants observed an adult communicate a discrete emotion toward a stimulus during a naturalistic interaction. Infants' responses were coded to capture the function of their behaviors (e.g., exploration, prosocial behavior, and security seeking). The results revealed a number of instances indicating that infants use different functional behaviors in response to discrete emotions. Differences in behaviors across emotions were clearest in the 24-month-old infants, though younger infants also demonstrated some differential use of behaviors in response to discrete emotions. This is the first comprehensive study to identify differences in how infants respond with goal-directed behaviors to discrete emotions. Additionally, the inclusion of a function-based coding scheme and interpersonal paradigms may be informative for future emotion research with children and adults. Possible developmental accounts for the observed behaviors and the benefits of coding techniques emphasizing the function of social behavior over their form are discussed. (PsycINFO Database Record © 2017 APA, all rights reserved).
The Nature and Neural Correlates of Semantic Association versus Conceptual Similarity
Jackson, Rebecca L.; Hoffman, Paul; Pobric, Gorana; Lambon Ralph, Matthew A.
2015-01-01
The ability to represent concepts and the relationships between them is critical to human cognition. How does the brain code relationships between items that share basic conceptual properties (e.g., dog and wolf) while simultaneously representing associative links between dissimilar items that co-occur in particular contexts (e.g., dog and bone)? To clarify the neural bases of these semantic components in neurologically intact participants, both types of semantic relationship were investigated in an fMRI study optimized for anterior temporal lobe (ATL) coverage. The clear principal finding was that the same core semantic network (ATL, superior temporal sulcus, ventral prefrontal cortex) was equivalently engaged when participants made semantic judgments on the basis of association or conceptual similarity. Direct comparisons revealed small, weaker differences for conceptual similarity > associative decisions (e.g., inferior prefrontal cortex) and associative > conceptual similarity (e.g., ventral parietal cortex) which appear to reflect graded differences in task difficulty. Indeed, once reaction time was entered as a covariate into the analysis, no associative versus category differences remained. The paper concludes with a discussion of how categorical/feature-based and associative relationships might be represented within a single, unified semantic system. PMID:25636912
NASA. Lewis Research Center Advanced Modulation and Coding Project: Introduction and overview
NASA Technical Reports Server (NTRS)
Budinger, James M.
1992-01-01
The Advanced Modulation and Coding Project at LeRC is sponsored by the Office of Space Science and Applications, Communications Division, Code EC, at NASA Headquarters and conducted by the Digital Systems Technology Branch of the Space Electronics Division. Advanced Modulation and Coding is one of three focused technology development projects within the branch's overall Processing and Switching Program. The program consists of industry contracts for developing proof-of-concept (POC) and demonstration model hardware, university grants for analyzing advanced techniques, and in-house integration and testing of performance verification and systems evaluation. The Advanced Modulation and Coding Project is broken into five elements: (1) bandwidth- and power-efficient modems; (2) high-speed codecs; (3) digital modems; (4) multichannel demodulators; and (5) very high-data-rate modems. At least one contract and one grant were awarded for each element.
Bellis, Jennifer R; Kirkham, Jamie J; Nunn, Anthony J; Pirmohamed, Munir
2014-12-17
National Health Service (NHS) hospitals in the UK use a system of coding for patient episodes. The coding system used is the International Classification of Disease (ICD-10). There are ICD-10 codes which may be associated with adverse drug reactions (ADRs) and there is a possibility of using these codes for ADR surveillance. This study aimed to determine whether ADRs prospectively identified in children admitted to a paediatric hospital were coded appropriately using ICD-10. The electronic admission abstract for each patient with at least one ADR was reviewed. A record was made of whether the ADR(s) had been coded using ICD-10. Of 241 ADRs, 76 (31.5%) were coded using at least one ICD-10 ADR code. Of the oncology ADRs, 70/115 (61%) were coded using an ICD-10 ADR code compared with 6/126 (4.8%) non-oncology ADRs (difference in proportions 56%, 95% CI 46.2% to 65.8%; p < 0.001). The majority of ADRs detected in a prospective study at a paediatric centre would not have been identified if the study had relied on ICD-10 codes as a single means of detection. Data derived from administrative healthcare databases are not reliable for identifying ADRs by themselves, but may complement other methods of detection.
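The quoted difference in proportions and its confidence interval can be re-derived from the reported counts with a standard Wald interval; the small discrepancy from the published 46.2% to 65.8% interval would reflect a different interval method or rounding:

```python
from math import sqrt

def diff_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
    """Wald 95% CI for the difference between two independent proportions."""
    p1, p2 = x1 / n1, x2 / n2
    d = p1 - p2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return d, d - z * se, d + z * se

# Oncology vs non-oncology ADRs coded with an ICD-10 ADR code (counts from
# the abstract: 70/115 vs 6/126):
d, lo, hi = diff_ci(70, 115, 6, 126)
print(f"{d:.1%} (95% CI {lo:.1%} to {hi:.1%})")   # ~56.1% (46.4% to 65.8%)
```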