2012-01-01
Background Routine cytomegalovirus (CMV) screening during pregnancy is not recommended in the United States and the extent to which it is performed is unknown. Using a medical claims database, we computed rates of CMV-specific testing among pregnant women. Methods We used medical claims from the 2009 Truven Health MarketScan® Commercial databases. We computed CMV-specific testing rates using CPT codes. Results We identified 77,773 pregnant women, of whom 1,668 (2%) had a claim for CMV-specific testing. CMV-specific testing was significantly associated with older age, Northeast or urban residence, and a diagnostic code for mononucleosis. We identified 44 women with a diagnostic code for mononucleosis, of whom 14% had CMV-specific testing. Conclusions Few pregnant women had CMV-specific testing, suggesting that screening for CMV infection during pregnancy is not commonly performed. In the absence of national surveillance for CMV infections during pregnancy, healthcare claims are a potential source for monitoring practices of CMV-specific testing. PMID:23198949
Utilization of genetic tests: analysis of gene-specific billing in Medicare claims data.
Lynch, Julie A; Berse, Brygida; Dotson, W David; Khoury, Muin J; Coomer, Nicole; Kautter, John
2017-08-01
We examined the utilization of precision medicine tests among Medicare beneficiaries through analysis of gene-specific tier 1 and 2 billing codes developed by the American Medical Association in 2012. We conducted a retrospective cross-sectional study. The primary source of data was 2013 Medicare 100% fee-for-service claims. We identified claims billed for each laboratory test, the number of patients tested, expenditures, and the diagnostic codes indicated for testing. We analyzed variations in testing by patient demographics and region of the country. Pharmacogenetic tests were billed most frequently, accounting for 48% of the expenditures for new codes. The most common indications for testing were breast cancer, long-term use of medications, and disorders of lipid metabolism. There was underutilization of guideline-recommended tumor mutation tests (e.g., epidermal growth factor receptor) and substantial overutilization of a test discouraged by guidelines (methylenetetrahydrofolate reductase). Methodology-based tier 2 codes represented 15% of all claims billed with the new codes. The highest rate of testing per beneficiary was in Mississippi and the lowest rate was in Alaska. Gene-specific billing codes significantly improved our ability to conduct population-level research of precision medicine. Analysis of these data in conjunction with clinical records should be conducted to validate findings. Genet Med advance online publication 26 January 2017.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-21
... exceed the test facility limits and reduces the number of functional tests for specific valve designs... addresses reducing the number of functional tests for specific valve designs. The NRC has identified no... the required test pressure for the new Class 1 incompressible-fluid, pressure-relief valve designs...
Standardized verification of fuel cycle modeling
Feng, B.; Dixon, B.; Sunny, E.; ...
2016-04-05
A nuclear fuel cycle systems modeling and code-to-code comparison effort was coordinated across multiple national laboratories to verify the tools needed to perform fuel cycle analyses of the transition from a once-through nuclear fuel cycle to a sustainable potential future fuel cycle. For this verification study, a simplified example transition scenario was developed to serve as a test case for the four systems codes involved (DYMOND, VISION, ORION, and MARKAL), each used by a different laboratory participant. In addition, all participants produced spreadsheet solutions for the test case to check all the mass flows and reactor/facility profiles on a year-by-year basis throughout the simulation period. The test case specifications describe a transition from the current US fleet of light water reactors to a future fleet of sodium-cooled fast reactors that continuously recycle transuranic elements as fuel. After several initial coordinated modeling and calculation attempts, it was revealed that most of the differences in code results were not due to different code algorithms or calculation approaches, but due to different interpretations of the input specifications among the analysts. Therefore, the specifications for the test case itself were iteratively updated to remove ambiguity and to help calibrate interpretations. In addition, a few corrections and modifications were made to the codes as well, which led to excellent agreement between all codes and spreadsheets for this test case. Although no fuel cycle transition analysis codes matched the spreadsheet results exactly, all remaining differences in the results were due to fundamental differences in code structure and/or were thoroughly explained. As a result, the specifications and example results are provided so that they can be used to verify additional codes in the future for such fuel cycle transition scenarios.
Administrative database code accuracy did not vary notably with changes in disease prevalence.
van Walraven, Carl; English, Shane; Austin, Peter C
2016-11-01
Previous mathematical analyses of diagnostic tests based on the categorization of a continuous measure have found that test sensitivity and specificity vary significantly by disease prevalence. This study determined if the accuracy of diagnostic codes varied by disease prevalence. We used data from two previous studies in which the true status of renal disease and primary subarachnoid hemorrhage, respectively, had been determined. In multiple stratified random samples from the two previous studies having varying disease prevalence, we measured the accuracy of diagnostic codes for each disease using sensitivity, specificity, and positive and negative predictive value. Diagnostic code sensitivity and specificity did not change notably across clinically sensible ranges of disease prevalence. In contrast, positive and negative predictive values changed significantly with disease prevalence. Disease prevalence had no important influence on the sensitivity and specificity of diagnostic codes in administrative databases. Copyright © 2016 Elsevier Inc. All rights reserved.
Liang, Su-Ying; Phillips, Kathryn A.; Wang, Grace; Keohane, Carol; Armstrong, Joanne; Morris, William M.; Haas, Jennifer S.
2012-01-01
Background Administrative claims and medical records are important data sources to examine healthcare utilization and outcomes. Little is known about identifying personalized medicine technologies in these sources. Objectives To describe agreement, sensitivity, and specificity of administrative claims compared to medical records for two pairs of targeted tests and treatments for breast cancer. Research Design Retrospective analysis of medical records linked to administrative claims from a large health plan. We examined whether agreement varied by factors that facilitate tracking in claims (coding and cost) and that enhance medical record completeness (records from multiple providers). Subjects Women (35 – 65 years) with incident breast cancer diagnosed in 2006–2007 (n=775). Measures Use of human epidermal growth factor receptor 2 (HER2) and gene expression profiling (GEP) testing, trastuzumab and adjuvant chemotherapy in claims and medical records. Results Agreement between claims and records was substantial for GEP, trastuzumab, and chemotherapy, and lowest for HER2 tests. GEP, an expensive test with unique billing codes, had higher agreement (91.6% vs. 75.2%), sensitivity (94.9% vs. 76.7%), and specificity (90.1% vs. 29.2%) than HER2, a test without unique billing codes. Trastuzumab, a treatment with unique billing codes, had slightly higher agreement (95.1% vs. 90%) and sensitivity (98.1% vs. 87.9%) than adjuvant chemotherapy. Conclusions Higher agreement and specificity were associated with services that had unique billing codes and high cost. Administrative claims may be sufficient for examining services with unique billing codes. Medical records provide better data for identifying tests lacking specific codes and for research requiring detailed clinical information. PMID:21422962
Automatic Testcase Generation for Flight Software
NASA Technical Reports Server (NTRS)
Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.
2008-01-01
The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) A blackbox approach that views the system as a blackbox, and uses a grammar-based specification of the system's inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system. 2) A whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both these approaches are model checking and symbolic execution, as implemented in the Ames' Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing which check only some of the system executions and therefore may miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammars. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive. ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
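The blackbox approach above can be illustrated with a small sketch. The grammar, command names, and size bound below are toy assumptions for illustration only, not the actual SCL grammar or the JPF-based generator described in the abstract; the point is the exhaustive enumeration of all legal inputs up to a prespecified limit.

```python
# Hedged sketch of grammar-based exhaustive input generation (toy grammar,
# not the actual SCL grammar or the JPF-based generator described above).

from itertools import product

# Toy grammar: a script is 1..MAX_CMDS commands; each command is a verb
# applied to a target, optionally followed by a numeric argument.
VERBS = ["SET", "GET", "RESET"]
TARGETS = ["HEATER", "VALVE_A", "VALVE_B"]
ARGS = ["", " 0", " 1"]          # "" means no argument
MAX_CMDS = 2                     # prespecified size bound

def all_commands():
    """Enumerate every command the toy grammar allows."""
    for verb, target, arg in product(VERBS, TARGETS, ARGS):
        yield f"{verb} {target}{arg}"

def all_scripts(max_cmds=MAX_CMDS):
    """Enumerate every legal script up to the size bound."""
    commands = list(all_commands())
    for n in range(1, max_cmds + 1):
        for combo in product(commands, repeat=n):
            yield "\n".join(combo)

if __name__ == "__main__":
    scripts = list(all_scripts())
    print(f"{len(scripts)} legal scripts up to {MAX_CMDS} commands")
    print("--- example ---")
    print(scripts[0])
```

For a real grammar the enumeration would be driven by the model checker rather than nested loops, but the combinatorial structure of "all inputs up to a bound" is the same.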
Jayasinghe, Sanjay; Macartney, Kristine
2013-01-30
Hospital discharge records and laboratory data have shown a substantial early impact from the rotavirus vaccination program that commenced in 2007 in Australia. However, these assessments are affected by the validity and reliability of hospital discharge coding and stool testing to measure the true incidence of hospitalised disease. The aim of this study was to assess the validity of these data sources for disease estimation, both before and after vaccine introduction. All hospitalisations at a major paediatric centre in children aged <5 years from 2000 to 2009 containing acute gastroenteritis (AGE) ICD 10 AM diagnosis codes were linked to hospital laboratory stool testing data. The validity of the rotavirus-specific diagnosis code (A08.0) and the incidence of hospitalisations attributable to rotavirus by both direct estimation and with adjustments for non-testing and miscoding were calculated for pre- and post-vaccination periods. A laboratory record of stool testing was available for 36% of all AGE hospitalisations (n=4948). The rotavirus code had high specificity (98.4%; 95% CI, 97.5-99.1%) and positive predictive value (96.8%; 94.8-98.3%), and modest sensitivity (61.6%; 58-65.1%). Of all rotavirus test positive hospitalisations only a third had a rotavirus code. The estimated annual average number of rotavirus hospitalisations, following adjustment for non-testing and miscoding, was 5- and 6-fold higher than identified, respectively, from testing and coding alone. Direct and adjusted estimates yielded similar percentage reductions in annual average rotavirus hospitalisations of over 65%. Due to the limited use of stool testing and the poor sensitivity of the rotavirus-specific diagnosis code, routine hospital discharge and laboratory data substantially underestimate the true incidence of rotavirus hospitalisations and absolute vaccine impact. However, these data can still be used to monitor vaccine impact as the effects of miscoding and under-testing appear to be comparable between pre- and post-vaccination periods. Copyright © 2012 Elsevier Ltd. All rights reserved.
Automated Source-Code-Based Testing of Object-Oriented Software
NASA Astrophysics Data System (ADS)
Gerlich, Ralf; Gerlich, Rainer; Dietrich, Carsten
2014-08-01
With the advent of languages such as C++ and Java in mission- and safety-critical space on-board software, new challenges for testing and specifically automated testing arise. In this paper we discuss some of these challenges, consequences and solutions based on an experiment in automated source- code-based testing for C++.
Acquisition Handbook - Update. Comprehensive Approach to Reusable Defensive Software (CARDS)
1994-03-25
designs, and implementation components (source code, test plans, procedures and results, and system/software documentation). This handbook provides a...activities where software components are acquired, evaluated, tested and sometimes modified. In addition to serving as a facility for the acquisition and...systems from such components [1]. Implementation components are at the lowest level and consist of: specifications; detailed designs; code, test
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2011 CFR
2011-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2012 CFR
2012-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2014 CFR
2014-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... or will be developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar...
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2010 CFR
2010-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... developed exclusively with Government funds; (ii) Studies, analyses, test data, or similar data produced for...
Experimental and analytical comparison of flowfields in a 110 N (25 lbf) H2/O2 rocket
NASA Technical Reports Server (NTRS)
Reed, Brian D.; Penko, Paul F.; Schneider, Steven J.; Kim, Suk C.
1991-01-01
A gaseous hydrogen/gaseous oxygen 110 N (25 lbf) rocket was examined through the RPLUS code using the full Navier-Stokes equations with finite rate chemistry. Performance tests were conducted on the rocket in an altitude test facility. Preliminary parametric analyses were performed for a range of mixture ratios and fuel film cooling percentages. It is shown that the computed values of specific impulse and characteristic exhaust velocity follow the trend of the experimental data. Specific impulse computed by the code is lower than the comparable test values by about two to three percent. The computed characteristic exhaust velocity values are lower than the comparable test values by three to four percent. Thrust coefficients computed by the code are found to be within two percent of the measured values. It is concluded that the discrepancy between computed and experimental performance values could not be attributed to experimental uncertainty.
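The two performance figures being compared, specific impulse and characteristic exhaust velocity, follow from standard definitions. The sketch below uses assumed, illustrative numbers (not the actual 110 N test data or RPLUS results) to show how the quoted two-to-four-percent differences would be computed.

```python
# Hedged sketch of the performance quantities compared above (illustrative
# numbers only; not the actual 110 N rocket test or RPLUS code results).

G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(thrust_n, mdot_kg_s):
    """Isp = F / (mdot * g0), in seconds."""
    return thrust_n / (mdot_kg_s * G0)

def char_exhaust_velocity(pc_pa, throat_area_m2, mdot_kg_s):
    """c* = Pc * At / mdot, in m/s."""
    return pc_pa * throat_area_m2 / mdot_kg_s

def percent_diff(computed, measured):
    return 100.0 * (computed - measured) / measured

if __name__ == "__main__":
    # Illustrative values for a small H2/O2 thruster (assumed, not measured data).
    thrust = 110.0            # N
    mdot = 0.026              # kg/s total propellant flow
    pc = 5.2e5                # Pa chamber pressure
    at = 1.6e-4               # m^2 throat area

    isp = specific_impulse(thrust, mdot)
    cstar = char_exhaust_velocity(pc, at, mdot)
    print(f"Isp = {isp:6.1f} s,  c* = {cstar:6.1f} m/s")
    # A code value 2-3% below a test value would show up as:
    print(f"diff = {percent_diff(isp * 0.975, isp):.1f} %")
```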
Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Garg, Vijay; Ameri, Ali
2005-01-01
The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: Application of the Glenn-HT code to specific configurations made available under Turbine Based Combined Cycle (TBCC), and Ultra Efficient Engine Technology (UEET) projects. Validating the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.
48 CFR 252.227-7013 - Rights in technical data-Noncommercial items.
Code of Federal Regulations, 2013 CFR
2013-10-01
... causing a computer to perform a specific operation or series of operations. (3) Computer software means computer programs, source code, source code listings, object code listings, design details, algorithms... funds; (ii) Studies, analyses, test data, or similar data produced for this contract, when the study...
Engineering High Assurance Distributed Cyber Physical Systems
2015-01-15
decisions: number of interacting agents and co-dependent decisions made in real-time without causing interference. To engineer a high assurance DART...environment specification, architecture definition, domain-specific languages, design patterns, code-generation, analysis, test-generation, and simulation...include synchronization between the models and source code, debugging at the model level, expression of the design intent, and quality of service
76 FR 30551 - Specifications for Packagings
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-26
... 178 Specifications for Packagings CFR Correction In Title 49 of the Code of Federal Regulations, Parts... design qualification test and each periodic retest on a packaging, a test report must be prepared. The test report must be maintained at each location where the packaging is manufactured and each location...
Unit Testing for the Application Control Language (ACL) Software
NASA Technical Reports Server (NTRS)
Heinich, Christina Marie
2014-01-01
In the software development process, code needs to be tested before it can be packaged for release in order to make sure the program actually does what it is supposed to do, as well as to check how the program deals with errors and edge cases (such as negative or very large numbers). One of the major parts of the testing process is unit testing, where you test specific units of the code to make sure each individual part of the code works. This project is about unit testing many different components of the ACL software and fixing any errors encountered. To do this, mocks of other objects need to be created and every line of code needs to be exercised to make sure every case is accounted for. Mocks are important to make because they give direct control of the environment the unit lives in instead of attempting to work with the entire program. This makes it easier to achieve the second goal of exercising every line of code.
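A minimal sketch of the unit-testing-with-mocks workflow described above, assuming hypothetical function and object names (this is not the actual ACL code): the mock replaces the collaborator so both the normal path and the error path of the unit can be exercised in isolation.

```python
# Hedged sketch of unit testing with a mock (hypothetical names; not the
# actual ACL software). The collaborator is mocked so the unit under test
# runs in a fully controlled environment.

import unittest
from unittest.mock import Mock

def send_command(link, command):
    """Unit under test: send a command over a link and report success."""
    if not command:
        raise ValueError("empty command")    # edge case handled explicitly
    link.transmit(command.encode("ascii"))
    return link.ack_received()

class SendCommandTest(unittest.TestCase):
    def test_successful_send(self):
        link = Mock()
        link.ack_received.return_value = True
        self.assertTrue(send_command(link, "NOOP"))
        link.transmit.assert_called_once_with(b"NOOP")

    def test_empty_command_rejected(self):
        # Exercise the error path so every line of the unit is covered.
        with self.assertRaises(ValueError):
            send_command(Mock(), "")

if __name__ == "__main__":
    unittest.main()
```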
Reference Materials and Subject Matter Knowledge Codes for Airman Knowledge Testing
DOT National Transportation Integrated Search
2004-06-08
The listings of reference materials and subject matter knowledge codes have been prepared by the Federal Aviation Administration (FAA) to establish specific references for all knowledge standards. The listings contain reference materials to be ...
RELAP5-3D Resolution of Known Restart/Backup Issues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mesina, George L.; Anderson, Nolan A.
2014-12-01
The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a “verification file” that records double precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
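The "verification file" idea, recording double-precision sums of key variables and comparing them between consecutive code versions, can be sketched as follows. The variable names, file format, and tolerance are assumptions for illustration, not RELAP5-3D internals.

```python
# Hedged sketch of sequential-verification (regression) testing via a
# "verification file" of double-precision sums of key variables; names,
# file format, and tolerance are assumptions, not RELAP5-3D internals.

import json

def verification_sums(state):
    """Reduce key solution arrays to double-precision sums."""
    return {name: float(sum(values)) for name, values in state.items()}

def compare(baseline_file, current_sums, tol=0.0):
    """Compare current sums against a stored baseline; tol=0.0 demands
    exact reproduction of the sums (null testing)."""
    with open(baseline_file) as f:
        baseline = json.load(f)
    failures = []
    for name, ref in baseline.items():
        cur = current_sums.get(name)
        if cur is None or abs(ref - cur) > tol:
            failures.append(name)
    return failures

if __name__ == "__main__":
    # Hypothetical solution state from a short-running test case.
    state = {"pressure": [1.0e5, 1.2e5, 0.9e5], "void_fraction": [0.0, 0.1, 0.05]}
    sums = verification_sums(state)
    with open("baseline.json", "w") as f:      # written once for the baseline version
        json.dump(sums, f)
    print("failures:", compare("baseline.json", sums))   # expect []
```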
Survey Probe Infrared Celestial Experiment (SPICE).
1985-01-01
amplitude modulation (PAM) Word clock Bit clock 3.1.2.3.2 Each signal shall be buffered and short-circuit proofed and capable of delivering a signal...TABLE OF CONTENTS I Appendix A I SPICE II Electronic Test Report Appendix B VC409-0001 Pulse Code Modulator Specification - SPICE I VC409-0001-21 Pulse...Code Modulator Specification - SPICE II Appendix C - Specifications AA0209-103 Evacuation AA0209-104 Cryogen Filling AA0209-105 Leak Rate AA0209-106
L1 and L2 Picture Naming in Mandarin-English Bilinguals: A Test of Bilingual Dual Coding Theory
ERIC Educational Resources Information Center
Jared, Debra; Poh, Rebecca Pei Yun; Paivio, Allan
2013-01-01
This study examined the nature of bilinguals' conceptual representations and the links from these representations to words in L1 and L2. Specifically, we tested an assumption of the Bilingual Dual Coding Theory that conceptual representations include image representations, and that learning two languages in separate contexts can result in…
Creating and Testing Simulation Software
NASA Technical Reports Server (NTRS)
Heinich, Christina M.
2013-01-01
The goal of this project is to learn about the software development process, specifically the process to test and fix components of the software. The paper will cover the techniques of testing code, and the benefits of using one style of testing over another. It will also discuss the overall software design and development lifecycle, and how code testing plays an integral role in it. Coding is notorious for always needing to be debugged due to coding errors or faulty program design. Writing tests either before or during program creation that cover all aspects of the code provide a relatively easy way to locate and fix errors, which will in turn decrease the necessity to fix a program after it is released for common use. The backdrop for this paper is the Spaceport Command and Control System (SCCS) Simulation Computer Software Configuration Item (CSCI), a project whose goal is to simulate a launch using simulated models of the ground systems and the connections between them and the control room. The simulations will be used for training and to ensure that all possible outcomes and complications are prepared for before the actual launch day. The code being tested is the Programmable Logic Controller Interface (PLCIF) code, the component responsible for transferring the information from the models to the model Programmable Logic Controllers (PLCs), basic computers that are used for very simple tasks.
Large liquid rocket engine transient performance simulation system
NASA Technical Reports Server (NTRS)
Mason, J. R.; Southwick, R. D.
1989-01-01
Phase 1 of the Rocket Engine Transient Simulation (ROCETS) program consists of seven technical tasks: architecture; system requirements; component and submodel requirements; submodel implementation; component implementation; submodel testing and verification; and subsystem testing and verification. These tasks were completed. Phase 2 of ROCETS consists of two technical tasks: Technology Test Bed Engine (TTBE) model data generation; and system testing verification. During this period specific coding of the system processors was begun and the engineering representations of Phase 1 were expanded to produce a simple model of the TTBE. As the code was completed, some minor modifications to the system architecture centering on the global variable common, GLOBVAR, were necessary to increase processor efficiency. The engineering modules completed during Phase 2 are listed: INJT00 - main injector; MCHB00 - main chamber; NOZL00 - nozzle thrust calculations; PBRN00 - preburner; PIPE02 - compressible flow without inertia; PUMP00 - polytropic pump; ROTR00 - rotor torque balance/speed derivative; and TURB00 - turbine. Detailed documentation of these modules is in the Appendix. In addition to the engineering modules, several submodules were also completed. These submodules include combustion properties, component performance characteristics (maps), and specific utilities. Specific coding was begun on the system configuration processor. All functions necessary for multiple module operation were completed but the SOLVER implementation is still under development. This system, the Verification Checkout Facility (VCF), allows interactive comparison of module results to stored data as well as provides an intermediate checkout of the processor code. After validation using the VCF, the engineering modules and submodules were used to build a simple TTBE.
Prati, Gabriele; Pietrantoni, Luca
2013-01-01
The aim of the present study was to examine the comprehension of gesture in a situation in which the communicator cannot (or can only with difficulty) use verbal communication. Based on theoretical considerations, we expected to obtain higher semantic comprehension for emblems (gestures with a direct verbal definition or translation that is well known by all members of a group, or culture) compared to illustrators (gestures regarded as spontaneous and idiosyncratic and that do not have a conventional definition). Based on the extant literature, we predicted higher semantic specificity associated with arbitrarily coded and iconically coded emblems compared to intrinsically coded illustrators. Using a scenario of emergency evacuation, we tested the difference in semantic specificity between different categories of gestures. 138 participants saw 10 videos each illustrating a gesture performed by a firefighter. They were requested to imagine themselves in a dangerous situation and to report the meaning associated with each gesture. The results showed that intrinsically coded illustrators were more successfully understood than arbitrarily coded emblems, probably because the meaning of intrinsically coded illustrators is immediately comprehensible without recourse to symbolic interpretation. Furthermore, there was no significant difference between the comprehension of iconically coded emblems and that of both arbitrarily coded emblems and intrinsically coded illustrators. It seems that the difference between the latter two types of gestures was supported by their difference in semantic specificity, although in a direction opposite to that predicted. These results are in line with those of Hadar and Pinchas-Zamir (2004), which showed that iconic gestures have higher semantic specificity than conventional gestures.
Verifying a computational method for predicting extreme ground motion
Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.
2011-01-01
In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.
2017-10-01
for all project Aims. Timeline- months 3-6. Status: completed. Task 6: Complete primary analyses and hypothesis testing for Aim 2, including...glucose. For each of these lab tests, each VA site can name them something different and can change names over time. Labs should be linked to Logical...Observation Identifiers Names (LOINC) codes, an international standard system that assigns a numeric code to specific lab tests. However, VA data
The MINERVA Software Development Process
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony; Munoz, Cesar A.; Dutle, Aaron M.
2017-01-01
This paper presents a software development process for safety-critical software components of cyber-physical systems. The process is called MINERVA, which stands for Mirrored Implementation Numerically Evaluated against Rigorously Verified Algorithms. The process relies on formal methods for rigorously validating code against its requirements. The software development process uses: (1) a formal specification language for describing the algorithms and their functional requirements, (2) an interactive theorem prover for formally verifying the correctness of the algorithms, (3) test cases that stress the code, and (4) numerical evaluation on these test cases of both the algorithm specifications and their implementations in code. The MINERVA process is illustrated in this paper with an application to geo-containment algorithms for unmanned aircraft systems. These algorithms ensure that the position of an aircraft never leaves a predetermined polygon region and provide recovery maneuvers when the region is inadvertently exited.
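The MINERVA step of numerically evaluating both the algorithm specification and its implementation on the same stress cases can be sketched as below. A toy point-in-rectangle check stands in for the geo-containment algorithms, with exact rational arithmetic playing the role of the formal specification and floating point playing the role of the shipped code; all names and cases are illustrative assumptions, not the actual MINERVA artifacts.

```python
# Hedged sketch of evaluating a specification and its implementation on the
# same stress cases (toy point-in-rectangle check; illustrative only).

from fractions import Fraction

def spec_inside(px, py, rect):
    """'Specification': exact rational arithmetic, treated as the reference."""
    x0, y0, x1, y1 = (Fraction(v) for v in rect)
    return x0 <= Fraction(px) <= x1 and y0 <= Fraction(py) <= y1

def impl_inside(px, py, rect):
    """'Implementation': ordinary floating point, as it would ship in code."""
    x0, y0, x1, y1 = (float(v) for v in rect)
    return x0 <= float(px) <= x1 and y0 <= float(py) <= y1

if __name__ == "__main__":
    rect = ("0", "0", "1", "1")
    # Stress cases that sit exactly on or just off the region boundary.
    cases = [("1", "1"), ("1.0000000001", "0.5"), ("0.3333333333333333", "1")]
    for px, py in cases:
        s, i = spec_inside(px, py, rect), impl_inside(px, py, rect)
        flag = "OK  " if s == i else "DIFF"
        print(f"{flag} point=({px},{py}) spec={s} impl={i}")
```

Any DIFF line flags a case where the implementation's numerics diverge from the rigorously verified specification, which is exactly the kind of discrepancy the mirrored evaluation is meant to surface.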
Testing and Performance Analysis of the Multichannel Error Correction Code Decoder
NASA Technical Reports Server (NTRS)
Soni, Nitin J.
1996-01-01
This report provides the test results and performance analysis of the multichannel error correction code decoder (MED) system for a regenerative satellite with asynchronous, frequency-division multiple access (FDMA) uplink channels. It discusses the system performance relative to various critical parameters: the coding length, data pattern, unique word value, unique word threshold, and adjacent-channel interference. Testing was performed under laboratory conditions and used a computer control interface with specifically developed control software to vary these parameters. Needed technologies - the high-speed Bose Chaudhuri-Hocquenghem (BCH) codec from Harris Corporation and the TRW multichannel demultiplexer/demodulator (MCDD) - were fully integrated into the mesh very small aperture terminal (VSAT) onboard processing architecture and were demonstrated.
Testing Scientific Software: A Systematic Literature Review.
Kanewala, Upulee; Bieman, James M
2014-10-01
Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques.
Seeing the Invisible: Embedding Tests in Code That Cannot be Modified
NASA Technical Reports Server (NTRS)
O'Malley, Owen; Mansouri-Samani, Masoud; Mehlitz, Peter; Penix, John
2005-01-01
Characterizing and observing valid software behavior during testing can be very difficult in flight systems. To address this issue, we evaluated several approaches to increasing test observability on the Shuttle Abort Flight Management (SAFM) system. To increase test observability, we added probes into the running system to evaluate the internal state and analyze test data. To minimize the impact of the instrumentation and reduce manual effort, we used Aspect-Oriented Programming (AOP) tools to instrument the source code. We developed and elicited a spectrum of properties, from generic to application-specific properties, to be monitored via the instrumentation. To evaluate additional approaches, SAFM was ported to Linux, enabling the use of gcov for measuring test coverage, Valgrind for looking for memory usage errors, and libraries for finding non-normal floating point values. An in-house C++ source code scanning tool was also used to identify violations of SAFM coding standards, and other potentially problematic C++ constructs. Using these approaches with the existing test data sets, we were able to verify several important properties, confirm several problems and identify some previously unidentified issues.
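The probe idea can be sketched with a Python decorator standing in for the AOP instrumentation (an analogue, not the actual SAFM tooling); the monitored property here is a check for non-finite floating point results, in the spirit of the non-normal float checks mentioned above.

```python
# Hedged sketch of adding property-checking probes without editing the code
# under test; a decorator stands in for AOP weaving, and the property is a
# generic "result must be finite" check. Names are illustrative.

import functools
import math

violations = []

def monitor_finite(func):
    """Probe: record a violation whenever the wrapped function returns a
    non-finite floating point value (NaN or infinity)."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        if isinstance(result, float) and not math.isfinite(result):
            violations.append((func.__name__, args, result))
        return result
    return wrapper

# Code "under test" is instrumented without modifying its body.
@monitor_finite
def dynamic_pressure(rho, velocity):
    return 0.5 * rho * velocity * velocity

if __name__ == "__main__":
    dynamic_pressure(1.225, 250.0)     # nominal case, passes silently
    dynamic_pressure(1.225, 1e200)     # overflows to +inf and is recorded
    print("violations:", violations)
```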
Laminar fMRI and computational theories of brain function.
Stephan, K E; Petzschner, F H; Kasper, L; Bayer, J; Wellstein, K V; Stefanics, G; Pruessmann, K P; Heinzle, J
2017-11-02
Recently developed methods for functional MRI at the resolution of cortical layers (laminar fMRI) offer a novel window into neurophysiological mechanisms of cortical activity. Beyond physiology, laminar fMRI also offers an unprecedented opportunity to test influential theories of brain function. Specifically, hierarchical Bayesian theories of brain function, such as predictive coding, assign specific computational roles to different cortical layers. Combined with computational models, laminar fMRI offers a unique opportunity to test these proposals noninvasively in humans. This review provides a brief overview of predictive coding and related hierarchical Bayesian theories, summarises their predictions with regard to layered cortical computations, examines how these predictions could be tested by laminar fMRI, and considers methodological challenges. We conclude by discussing the potential of laminar fMRI for clinically useful computational assays of layer-specific information processing. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Geng, Steven M.
1987-01-01
A free-piston Stirling engine performance code is being upgraded and validated at the NASA Lewis Research Center under an interagency agreement between the Department of Energy's Oak Ridge National Laboratory and NASA Lewis. Many modifications were made to the free-piston code in an attempt to decrease the calibration effort. A procedure was developed that made the code calibration process more systematic. Engine-specific calibration parameters are often used to bring predictions and experimental data into better agreement. The code was calibrated to a matrix of six experimental data points. Predictions of the calibrated free-piston code are compared with RE-1000 free-piston Stirling engine sensitivity test data taken at NASA Lewis. Reasonable agreement was obtained between the code prediction and the experimental data over a wide range of engine operating conditions.
Analysis of PANDA Passive Containment Cooling Steady-State Tests with the Spectra Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stempniewicz, Marek M
2000-07-15
Results of post test simulation of the PANDA passive containment cooling (PCC) steady-state tests (S-series tests), performed at the PANDA facility at the Paul Scherrer Institute, Switzerland, are presented. The simulation has been performed using the computer code SPECTRA, a thermal-hydraulic code designed specifically for analyzing containment behavior of nuclear power plants. Results of the present calculations are compared to the measurement data as well as the results obtained earlier with the codes MELCOR, TRAC-BF1, and TRACG. The calculated PCC efficiencies are somewhat lower than the measured values. Similar underestimation of PCC efficiencies had been obtained in the past with the other computer codes. To explain this difference, it is postulated that condensate coming into the tubes forms a stream of liquid in one or two tubes, leaving most of the tubes unaffected. The condensate entering the water box is assumed to fall down in the form of droplets. With these assumptions, the results calculated with SPECTRA are close to the experimental data. It is concluded that the SPECTRA code is a suitable tool for analyzing containments of advanced reactors, equipped with passive containment cooling systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobromir Panayotov; Andrew Grief; Brad J. Merrill
'Fusion for Energy' (F4E) develops designs and implements the European Test Blanket Systems (TBS) in ITER - Helium-Cooled Lithium-Lead (HCLL) and Helium-Cooled Pebble-Bed (HCPB). Safety demonstration is an essential element for the integration of the TBS in ITER, and accident analyses are one of its critical segments. A systematic approach to the accident analyses was established under the F4E contract on TBS safety analyses. F4E technical requirements together with AMEC and INL efforts resulted in a comprehensive methodology for fusion breeding blanket accident analyses. It addresses the specific design, materials, and phenomena of the breeding blankets while remaining consistent with the methodology already applied to ITER accident analyses. The methodology consists of several phases. First, the reference scenarios are selected on the basis of FMEA studies. Second, in elaborating the accident analysis specifications, phenomena identification and ranking tables are used to identify the requirements to be met by the code(s) and TBS models. The limitations of the codes are thereby identified and possible solutions to be built into the models are proposed; these include, among others, the loose coupling of different codes or code versions in order to simulate multi-fluid flows and phenomena. Code selection and the issue of the accident analysis specifications conclude this second step. The breeding blanket and ancillary system models are then built. This work shares the challenges met and the solutions used in developing both the MELCOR and RELAP5 models of the HCLL and HCPB TBSs. The developed models are subsequently qualified by comparison with finite-element analyses, by code-to-code comparison, and by sensitivity studies. Finally, the qualified models are used to execute the accident analyses of specific scenarios. Where possible, the methodology phases are illustrated in the paper by a limited number of tables and figures. Detailed descriptions of each phase and its results, as well as applications of the methodology to the EU HCLL and HCPB TBSs, will be published in separate papers. The developed methodology is applicable to accident analyses of other TBSs to be tested in ITER, as well as to DEMO breeding blankets.
Seismology software: state of the practice
NASA Astrophysics Data System (ADS)
Smith, W. Spencer; Zeng, Zheng; Carette, Jacques
2018-05-01
We analyzed the state of practice for software development in the seismology domain by comparing 30 software packages on four aspects: product, implementation, design, and process. We found room for improvement in most seismology software packages. The principal areas of concern include a lack of adequate requirements and design specification documents, a lack of test data to assess reliability, a lack of examples to get new users started, and a lack of technological tools to assist with managing the development process. To assist going forward, we provide recommendations for a document-driven development process that includes a problem statement, development plan, requirement specification, verification and validation (V&V) plan, design specification, code, V&V report, and a user manual. We also provide advice on tool use, including issue tracking, version control, code documentation, and testing tools.
Testing Strategies for Model-Based Development
NASA Technical Reports Server (NTRS)
Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.
2006-01-01
This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.
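The requirements coverage metric can be sketched as follows: each requirement is encoded as an "if antecedent then consequent" predicate over an input/output pair, and a requirement counts as covered only when at least one test case makes its antecedent true. The requirements and system below are illustrative assumptions, not the report's actual metrics or models.

```python
# Hedged sketch of an objective requirements-coverage metric; requirement
# names, predicates, and the toy system are illustrative assumptions.

def altitude_req(inp, out):
    # "If altitude < 500 ft then the warning flag shall be set."
    return (inp["altitude"] < 500, out["warning"])

def speed_req(inp, out):
    # "If speed > 250 kt below 10000 ft then the overspeed flag shall be set."
    return (inp["speed"] > 250 and inp["altitude"] < 10000, out["overspeed"])

REQUIREMENTS = {"ALT-1": altitude_req, "SPD-1": speed_req}

def requirements_coverage(test_cases, system_under_test):
    covered = set()
    for inp in test_cases:
        out = system_under_test(inp)
        for name, req in REQUIREMENTS.items():
            antecedent, _consequent = req(inp, out)
            if antecedent:
                covered.add(name)
    return len(covered) / len(REQUIREMENTS), covered

if __name__ == "__main__":
    def toy_system(inp):   # stand-in for code generated from the model
        return {"warning": inp["altitude"] < 500,
                "overspeed": inp["speed"] > 250 and inp["altitude"] < 10000}

    tests = [{"altitude": 300, "speed": 120}, {"altitude": 8000, "speed": 200}]
    ratio, covered = requirements_coverage(tests, toy_system)
    print(f"requirements coverage = {ratio:.0%} ({sorted(covered)})")
```

Here the second requirement is never triggered, so coverage is 50%; adding a test with high speed at low altitude would raise it to 100%.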
Guidelines for the Design, Fabrication, Testing, Installation and Operation of Srf Cavities
NASA Astrophysics Data System (ADS)
Theilacker, J.; Carter, H.; Foley, M.; Hurh, P.; Klebaner, A.; Krempetz, K.; Nicol, T.; Olis, D.; Page, T.; Peterson, T.; Pfund, P.; Pushka, D.; Schmitt, R.; Wands, R.
2010-04-01
Superconducting Radio-Frequency (SRF) cavities containing cryogens under pressure pose a potential rupture hazard to equipment and personnel. Generally, pressure vessels fall within the scope of the ASME Boiler and Pressure Vessel Code; however, the use of niobium as a material for the SRF cavities is beyond the applicability of the Code. Fermilab developed a guideline to ensure sound engineering practices governing the design, fabrication, testing, installation and operation of SRF cavities. The objective of the guideline is to reduce hazards and to achieve a level of safety equivalent to that afforded by the ASME Code. The guideline addresses concerns specific to SRF cavities in the areas of materials, design and analysis, welding and brazing, pressure relieving requirements, pressure testing and quality control.
A high order approach to flight software development and testing
NASA Technical Reports Server (NTRS)
Steinbacher, J.
1981-01-01
The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.
NASA Astrophysics Data System (ADS)
Prettyman, T. H.; Gardner, R. P.; Verghese, K.
1993-08-01
A new specific purpose Monte Carlo code called McENL for modeling the time response of epithermal neutron lifetime tools is described. The weight windows technique, employing splitting and Russian roulette, is used with an automated importance function based on the solution of an adjoint diffusion model to improve the code efficiency. Complete composition and density correlated sampling is also included in the code, and can be used to study the effect on tool response of small variations in the formation, borehole, or logging tool composition and density. An illustration of the latter application is given for the density of a thermal neutron filter. McENL was benchmarked against test-pit data for the Mobil pulsed neutron porosity tool and was found to be very accurate. Results of the experimental validation and details of code performance are presented.
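The weight-window technique with splitting and Russian roulette can be sketched as follows. The window bounds here are illustrative assumptions rather than the adjoint-diffusion-based importances used in McENL, but the weight bookkeeping (splitting heavy particles, rouletting light ones while conserving expected weight) is the standard scheme.

```python
# Hedged sketch of the weight-window variance-reduction technique mentioned
# above; window bounds are illustrative, not McENL's adjoint-based values.

import random

def apply_weight_window(weight, w_low, w_high, w_survive):
    """Return a list of particle weights after applying the window.

    - weight > w_high: split into roughly equal-weight copies.
    - weight < w_low : play Russian roulette; survivors get w_survive,
                       so the expected total weight is conserved.
    - otherwise      : the particle passes through unchanged.
    """
    if weight > w_high:
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:
        if random.random() < weight / w_survive:
            return [w_survive]       # survives with boosted weight
        return []                    # killed
    return [weight]

if __name__ == "__main__":
    random.seed(1)
    for w in (5.0, 0.4, 0.01):
        print(w, "->", apply_weight_window(w, w_low=0.25, w_high=2.0, w_survive=1.0))
```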
Aging and the Baseline Code: An Alternative to the "Normless Elderly."
ERIC Educational Resources Information Center
Offenbacher, Deborah I.; Poster, Constance H.
1985-01-01
A projective test administered to 120 older persons revealed a "baseline normative code" to which respondents held themselves and their contemporaries. Findings suggest that, in the absence of age-specific norms, the elderly do not become "normless" but develop their own normative prescriptions to fit their past socialization and present…
NASA Astrophysics Data System (ADS)
Carles, Guillem; Ferran, Carme; Carnicer, Artur; Bosch, Salvador
2012-01-01
A computational imaging system based on wavefront coding is presented. Wavefront coding provides an extension of the depth-of-field at the expense of a slight reduction of image quality. This trade-off results from the amount of coding used. By using spatial light modulators, a flexible coding is achieved which permits it to be increased or decreased as needed. In this paper a computational method is proposed for evaluating the output of a wavefront coding imaging system equipped with a spatial light modulator, with the aim of thus making it possible to implement the most suitable coding strength for a given scene. This is achieved in an unsupervised manner, thus the whole system acts as a dynamically self-adaptable imaging system. The program presented here controls the spatial light modulator and the camera, and also processes the images in a synchronised way in order to implement the dynamic system in real time. A prototype of the system was implemented in the laboratory and illustrative examples of the performance are reported in this paper.
Program summary
Program title: DynWFC (Dynamic WaveFront Coding)
Catalogue identifier: AEKC_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEKC_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 10 483
No. of bytes in distributed program, including test data, etc.: 2 437 713
Distribution format: tar.gz
Programming language: Labview 8.5 and NI Vision and MinGW C Compiler
Computer: Tested on PC Intel® Pentium®
Operating system: Tested on Windows XP
Classification: 18
Nature of problem: The program implements an enhanced wavefront coding imaging system able to adapt the degree of coding to the requirements of a specific scene. The program controls the acquisition by a camera, the display of a spatial light modulator and the image processing operations synchronously. The spatial light modulator is used to implement the phase mask with flexibility given the trade-off between depth-of-field extension and image quality achieved. The action of the program is to evaluate the depth-of-field requirements of the specific scene and subsequently control the coding established by the spatial light modulator, in real time.
Testing Scientific Software: A Systematic Literature Review
Kanewala, Upulee; Bieman, James M.
2014-01-01
Context Scientific software plays an important role in critical decision making, for example making weather predictions based on climate models, and computation of evidence for research publications. Recently, scientists have had to retract publications due to errors caused by software faults. Systematic testing can identify such faults in code. Objective This study aims to identify specific challenges, proposed solutions, and unsolved problems faced when testing scientific software. Method We conducted a systematic literature survey to identify and analyze relevant literature. We identified 62 studies that provided relevant information about testing scientific software. Results We found that challenges faced when testing scientific software fall into two main categories: (1) testing challenges that occur due to characteristics of scientific software such as oracle problems and (2) testing challenges that occur due to cultural differences between scientists and the software engineering community such as viewing the code and the model that it implements as inseparable entities. In addition, we identified methods to potentially overcome these challenges and their limitations. Finally we describe unsolved challenges and how software engineering researchers and practitioners can help to overcome them. Conclusions Scientific software presents special challenges for testing. Specifically, cultural differences between scientist developers and software engineers, along with the characteristics of the scientific software make testing more difficult. Existing techniques such as code clone detection can help to improve the testing process. Software engineers should consider special challenges posed by scientific software such as oracle problems when developing testing techniques. PMID:25125798
Generating Test Templates via Automated Theorem Proving
NASA Technical Reports Server (NTRS)
Kancherla, Mani Prasad
1997-01-01
Testing can be used during the software development process to maintain fidelity between evolving specifications, program designs, and code implementations. We use a form of specification-based testing that employs an automated theorem prover to generate test templates. A similar approach was developed using a model checker on state-intensive systems. This method applies to systems with functional rather than state-based behaviors. This approach allows for the use of incomplete specifications to aid in generation of tests for potential failure cases. We illustrate the technique on the canonical triangle testing problem and discuss its use on analysis of a spacecraft scheduling system.
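For the canonical triangle problem, specification-based test templates pair a symbolic precondition with a concrete witness and an expected outcome. The sketch below builds the templates from an explicit case split rather than from an automated theorem prover, so it only illustrates the shape of the generated tests, not the method of the paper.

```python
# Hedged sketch of specification-based test templates for the canonical
# triangle problem; templates here come from a hand-written case split,
# not from theorem proving as in the work described above.

def classify(a, b, c):
    """Implementation under test."""
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Each template pairs a symbolic precondition with a concrete witness and
# the expected outcome; an incomplete spec would simply yield fewer templates.
TEST_TEMPLATES = [
    ("a=b=c",               (3, 3, 3), "equilateral"),
    ("a=b, c distinct",     (3, 3, 5), "isosceles"),
    ("all distinct, valid", (3, 4, 5), "scalene"),
    ("degenerate: a+b=c",   (1, 2, 3), "not a triangle"),
    ("violates inequality", (1, 1, 9), "not a triangle"),
]

if __name__ == "__main__":
    for name, sides, expected in TEST_TEMPLATES:
        got = classify(*sides)
        status = "PASS" if got == expected else "FAIL"
        print(f"{status} [{name}] classify{sides} -> {got}")
```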
Comparison Between Simulated and Experimentally Measured Performance of a Four Port Wave Rotor
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Wilson, Jack; Welch, Gerard E.
2007-01-01
Performance and operability testing has been completed on a laboratory-scale, four-port wave rotor of the type suitable for use as a topping cycle on a gas turbine engine. Many design aspects and performance estimates for the wave rotor were determined using a time-accurate, one-dimensional, computational fluid dynamics-based simulation code developed specifically for wave rotors. The code follows a single rotor passage as it moves past the various ports, which in this reference frame become boundary conditions. This paper compares wave rotor performance predicted with the code to that measured during laboratory testing. Both on- and off-design operating conditions were examined. Overall, the match between code and rig was found to be quite good. At operating points where there were disparities, the assumption of larger than expected internal leakage rates successfully realigned code predictions and laboratory measurements. Possible mechanisms for such leakage rates are discussed.
High-Content Optical Codes for Protecting Rapid Diagnostic Tests from Counterfeiting.
Gökçe, Onur; Mercandetti, Cristina; Delamarche, Emmanuel
2018-06-19
Warnings and reports on counterfeit diagnostic devices are released several times a year by regulators and public health agencies. Unfortunately, mishandling, altering, and counterfeiting point-of-care diagnostics (POCDs) and rapid diagnostic tests (RDTs) is lucrative, relatively simple, and can lead to devastating consequences. Here, we demonstrate how to implement optical security codes in silicon- and nitrocellulose-based flow paths for device authentication using a smartphone. The codes are created by inkjet spotting inks directly on nitrocellulose or on micropillars. Codes containing up to 32 elements per mm² and 8 colors can encode as many as 10^45 combinations. Codes on silicon micropillars can be erased by setting a continuous flow path across the entire array of code elements or, for nitrocellulose, by simply wicking a liquid across the code. Static or labile code elements can further be formed on nitrocellulose to create a hidden code using poly(ethylene glycol) (PEG) or glycerol additives to the inks. More advanced codes having a specific deletion sequence can also be created in silicon microfluidic devices using an array of passive routing nodes, which activate in a particular, programmable sequence. Such codes are simple to fabricate, easy to view, and efficient in coding information; they can be ideally used in combination with information on a package to protect diagnostic devices from counterfeiting.
Assessment of algorithms to identify patients with thrombophilia following venous thromboembolism.
Delate, Thomas; Hsiao, Wendy; Kim, Benjamin; Witt, Daniel M; Meyer, Melissa R; Go, Alan S; Fang, Margaret C
2016-01-01
Routine testing for thrombophilia following venous thromboembolism (VTE) is controversial. The use of large datasets to study the clinical impact of thrombophilia testing on patterns of care and patient outcomes may enable more efficient analysis of this practice in a wide range of settings. We set out to examine how accurately algorithms using International Classification of Diseases 9th Revision (ICD-9) codes and/or pharmacy data reflect laboratory-confirmed thrombophilia diagnoses. A random sample of adult Kaiser Permanente Colorado patients diagnosed with unprovoked VTE between 1/2004 and 12/2010 underwent medical record abstraction of thrombophilia test results. Algorithms using "ICD-9" (positive if a thrombophilia ICD-9 code was present), "Extended anticoagulation (AC)" (positive if AC therapy duration was >6 months), and "ICD-9 & Extended AC" (positive for both) criteria to identify possible thrombophilia cases were tested. Using positive thrombophilia laboratory results as the gold standard, the sensitivity, specificity, positive predictive value (PPV), and negative predictive value of each algorithm were calculated, along with 95% confidence intervals (CIs). In our cohort of 636 patients, sensitivities were low (<50%) for each algorithm. "ICD-9" yielded the highest PPV (41.5%, 95% CI 26.3-57.9%) and a high specificity (95.9%, 95% CI 94.0-97.4%). "Extended AC" had the highest sensitivity but lowest specificity, and "ICD-9 & Extended AC" had the highest specificity but lowest sensitivity. ICD-9 codes for thrombophilia are highly specific for laboratory-confirmed cases, but all algorithms had low sensitivities. Further development of methods to identify thrombophilia patients in large datasets is warranted. Copyright © 2015 Elsevier Ltd. All rights reserved.
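To make the algorithm-validation metrics used above concrete, the sketch below computes sensitivity, specificity, PPV, and NPV with normal-approximation confidence intervals from a 2x2 comparison against a laboratory gold standard. The counts are hypothetical and are not the study's data, and the interval method is an assumption (the paper may use a different CI formulation).

```python
import math

def metrics_with_ci(tp, fp, fn, tn, z=1.96):
    """Validation metrics for a case-finding algorithm against a gold standard."""
    def prop_ci(num, den):
        p = num / den
        half = z * math.sqrt(p * (1 - p) / den)  # normal approximation
        return round(p, 3), round(max(0.0, p - half), 3), round(min(1.0, p + half), 3)
    return {
        "sensitivity": prop_ci(tp, tp + fn),
        "specificity": prop_ci(tn, tn + fp),
        "ppv": prop_ci(tp, tp + fp),
        "npv": prop_ci(tn, tn + fn),
    }

# Hypothetical counts for an "ICD-9 code present" algorithm (invented for illustration).
print(metrics_with_ci(tp=18, fp=27, fn=22, tn=533))
```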
DCU@TRECMed 2012: Using Ad-Hoc Baselines for Domain-Specific Retrieval
2012-11-01
description to extend the query, for example: Patients with complicated GERD who receive endoscopy will be extended with Gastroesophageal reflux disease ... Diseases and Related Health Problems, version 9) for the patient’s admission or discharge status [1, 5]; treating negation (e.g. negative test results or...codes were mapped to a description of the code, usually a short phrase/sentence. For instance, the ICD9 code 253.5 corresponds to the disease Diabetes
LogiKit - assisting complex logic specification and implementation for embedded control systems
NASA Astrophysics Data System (ADS)
Diglio, A.; Nicolodi, B.
2002-07-01
LogiKit provides an overall lifecycle solution. LogiKit is a powerful software engineering CASE toolkit for requirements specification, simulation and documentation. LogiKit also provides an automatic Ada software design, code and unit test generator.
Verification of Gyrokinetic codes: Theoretical background and applications
NASA Astrophysics Data System (ADS)
Tronko, Natalia; Bottino, Alberto; Görler, Tobias; Sonnendrücker, Eric; Told, Daniel; Villard, Laurent
2017-05-01
In fusion plasmas, the strong magnetic field allows the fast gyro-motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, the gyrokinetic (GK) codes play a major role in the understanding of the development and the saturation of turbulence and in the prediction of the subsequent transport. Naturally, these codes require thorough verification and validation. Here, we present a new and generic theoretical framework and specific numerical applications to test the faithfulness of the implemented models to theory and to verify the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which has rarely been done and therefore makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools such as variational formulation of dynamics for systematization of basic GK code's equations to access the limits of their applicability. The verification of the numerical scheme is proposed via the benchmark effort. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC) and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations implemented in the ORB5 and GENE codes using the Lagrangian variational formulation. At the computational level, detailed verifications of global electromagnetic test cases developed from the CYCLONE Base Case are considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.
Control of Technology Transfer at JPL
NASA Technical Reports Server (NTRS)
Oliver, Ronald
2006-01-01
Controlled Technology: 1) Design: preliminary or critical design data, schematics, technical flow charts, SNV code/diagnostics, logic flow diagrams, wirelist, ICDs, detailed specifications or requirements. 2) Development: constraints, computations, configurations, technical analyses, acceptance criteria, anomaly resolution, detailed test plans, detailed technical proposals. 3) Production: process or how-to: assemble, operate, repair, maintain, modify. 4) Manufacturing: technical instructions, specific parts, specific materials, specific qualities, specific processes, specific flow. 5) Operations: how to operate, contingency or standard operating plans, Ops handbooks. 6) Repair: repair instructions, troubleshooting schemes, detailed schematics. 7) Test: specific procedures, data, analysis, detailed test plan and retest plans, detailed anomaly resolutions, detailed failure causes and corrective actions, troubleshooting, trended test data, flight readiness data. 8) Maintenance: maintenance schedules and plans, methods for regular upkeep, overhaul instructions. 9) Modification: modification instructions, upgrade kit parts, including software
ERIC Educational Resources Information Center
Cowan, Earl; And Others
The curriculum guide for welding instruction contains 16 units presented in six sections. Each unit is divided into the following areas, each of which is color coded: terminal objectives, specific objectives, suggested activities, and instructional materials; information sheet; transparency masters; assignment sheet; test; and test answers. The…
NASA Technical Reports Server (NTRS)
Biringen, S. H.; Mcmillan, O. J.
1980-01-01
The use of a computer code for the calculation of two-dimensional inlet flow fields in a supersonic free stream and a nonorthogonal mesh-generation code are illustrated by specific examples. Input, output, and program operation and use are given and explained for the case of supercritical inlet operation at a subdesign Mach number (free-stream Mach number = 2.09) for an isentropic-compression, drooped-cowl inlet. Source listings of the computer codes are also provided.
Juselius Baghdassarian, Eva; Nilsson Markhed, Maria; Lindström, Eva; Nilsson, Björn M; Lewander, Tommy
2018-06-01
To evaluate the performances of two auditory brainstem response (ABR) profiling tests as potential biomarkers and diagnostic support for schizophrenia and adult attention-deficit hyperactivity disorder (ADHD), respectively, in an investigator-initiated blinded study design. Male and female patients with schizophrenia (n=26) and adult ADHD (n=24) meeting Diagnostic and Statistical Manual of Mental Disorders Fourth Edition (DSM IV) diagnostic criteria and healthy controls (n=58) comprised the analysis set (n=108) of the total number of study participants (n=119). Coded sets of randomized ABR recordings were analysed by an independent party blinded to clinical diagnoses before a joint code-breaking session. The ABR profiling test for schizophrenia identified schizophrenia patients versus controls with a sensitivity of 84.6% and a specificity of 93.1%. The ADHD test identified patients with adult ADHD versus controls with a sensitivity of 87.5% and a specificity of 91.4%. The ABR profiling tests discriminated schizophrenia and ADHD versus healthy controls with high sensitivity and specificity. The methods deserve to be further explored in larger clinical studies including a broad range of psychiatric disorders to determine their utility as potential diagnostic biomarkers.
Injecting Errors for Testing Built-In Test Software
NASA Technical Reports Server (NTRS)
Gender, Thomas K.; Chow, James
2010-01-01
Two algorithms have been conceived to enable automated, thorough testing of built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users' application programs via commands or callable interfaces and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers.
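A minimal sketch of the first algorithm's masking idea: intercept a value read from a hardware device and AND it with a device-specific mask so the BIT routine sees a value it does not expect. The device names and mask values below are hypothetical, not taken from the original BIT software.

```python
# Hypothetical device-specific error masks (illustrative only).
ERROR_MASKS = {
    "status_register": 0xF0,    # forces the low nibble to zero
    "memory_word":     0x00FF,  # clears the high byte
}

def read_device(device, raw_value, inject_error=False):
    """Return the raw value, or a corrupted value produced by ANDing with the device's mask."""
    if inject_error:
        return raw_value & ERROR_MASKS[device]
    return raw_value

# The BIT routine expects 0xA5 from the status register; the injected error yields 0xA0.
value = read_device("status_register", 0xA5, inject_error=True)
assert value == 0xA0
```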
Accelerating execution of the integrated TIGER series Monte Carlo radiation transport codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, L.M.; Hochstedler, R.D.
1997-02-01
Execution of the integrated TIGER series (ITS) of coupled electron/photon Monte Carlo radiation transport codes has been accelerated by modifying the FORTRAN source code for more efficient computation. Each member code of ITS was benchmarked and profiled with a specific test case that directed the acceleration effort toward the most computationally intensive subroutines. Techniques for accelerating these subroutines included replacing linear search algorithms with binary versions, replacing the pseudo-random number generator, reducing program memory allocation, and proofing the input files for geometrical redundancies. All techniques produced identical or statistically similar results to the original code. Final benchmark timing of the accelerated code resulted in speed-up factors of 2.00 for TIGER (the one-dimensional slab geometry code), 1.74 for CYLTRAN (the two-dimensional cylindrical geometry code), and 1.90 for ACCEPT (the arbitrary three-dimensional geometry code).
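One of the named techniques, replacing linear searches with binary versions, is illustrated below for the common task of locating the energy bin in a tabulated grid. The grid values and function names are illustrative and are not taken from the ITS source.

```python
import bisect

energy_grid = [0.01, 0.1, 0.5, 1.0, 5.0, 10.0, 20.0]  # MeV, ascending (illustrative)

def find_bin_linear(grid, energy):
    # Original-style linear scan: O(n) comparisons per lookup.
    for i in range(len(grid) - 1):
        if grid[i] <= energy < grid[i + 1]:
            return i
    raise ValueError("energy outside tabulated range")

def find_bin_binary(grid, energy):
    # Binary version: O(log n) comparisons, identical result.
    i = bisect.bisect_right(grid, energy) - 1
    if i < 0 or i >= len(grid) - 1:
        raise ValueError("energy outside tabulated range")
    return i

assert find_bin_linear(energy_grid, 2.0) == find_bin_binary(energy_grid, 2.0) == 3
```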
SPIN: An Inversion Code for the Photospheric Spectral Line
NASA Astrophysics Data System (ADS)
Yadav, Rahul; Mathew, Shibu K.; Tiwary, Alok Ranjan
2017-08-01
Inversion codes are the most useful tools to infer the physical properties of the solar atmosphere from the interpretation of Stokes profiles. In this paper, we present the details of a new Stokes Profile INversion code (SPIN) developed specifically to invert the spectro-polarimetric data of the Multi-Application Solar Telescope (MAST) at Udaipur Solar Observatory. The SPIN code has adopted Milne-Eddington approximations to solve the polarized radiative transfer equation (RTE) and for the purpose of fitting a modified Levenberg-Marquardt algorithm has been employed. We describe the details and utilization of the SPIN code to invert the spectro-polarimetric data. We also present the details of tests performed to validate the inversion code by comparing the results from the other widely used inversion codes (VFISV and SIR). The inverted results of the SPIN code after its application to Hinode/SP data have been compared with the inverted results from other inversion codes.
Verification of Gyrokinetic codes: theoretical background and applications
NASA Astrophysics Data System (ADS)
Tronko, Natalia
2016-10-01
In fusion plasmas, the strong magnetic field allows the fast gyro-motion to be systematically removed from the description of the dynamics, resulting in a considerable model simplification and gain of computational time. Nowadays, the gyrokinetic (GK) codes play a major role in the understanding of the development and the saturation of turbulence and in the prediction of the consequent transport. We present a new and generic theoretical framework and specific numerical applications to test the validity and the domain of applicability of existing GK codes. For a sound verification process, the underlying theoretical GK model and the numerical scheme must be considered at the same time, which makes this approach pioneering. At the analytical level, the main novelty consists in using advanced mathematical tools such as variational formulation of dynamics for systematization of basic GK code's equations to access the limits of their applicability. The indirect verification of the numerical scheme is proposed via the benchmark process. In this work, specific examples of code verification are presented for two GK codes: the multi-species electromagnetic ORB5 (PIC), and the radially global version of GENE (Eulerian). The proposed methodology can be applied to any existing GK code. We establish a hierarchy of reduced GK Vlasov-Maxwell equations using the generic variational formulation. Then, we derive and include the models implemented in ORB5 and GENE inside this hierarchy. At the computational level, detailed verification of global electromagnetic test cases based on the CYCLONE Base Case are considered, including a parametric β-scan covering the transition from ITG to KBM and the spectral properties at the nominal β value.
49 CFR 178.338-16 - Inspection and testing.
Code of Federal Regulations, 2010 CFR
2010-10-01
49 CFR § 178.338-16 (Specifications for Containers for Motor Vehicle Transportation): Inspection and testing... Section VIII of the ASME Code. The welder and the welding procedure must be qualified in accordance with...
Measuring and Specifying Combinatorial Coverage of Test Input Configurations
Kuhn, D. Richard; Kacker, Raghu N.; Lei, Yu
2015-01-01
A key issue in testing is how many tests are needed for a required level of coverage or fault detection. Estimates are often based on error rates in initial testing, or on code coverage. For example, tests may be run until a desired level of statement or branch coverage is achieved. Combinatorial methods present an opportunity for a different approach to estimating required test set size, using characteristics of the test set. This paper describes methods for estimating the coverage of, and ability to detect, t-way interaction faults of a test set based on a covering array. We also develop a connection between (static) combinatorial coverage and (dynamic) code coverage, such that if a specific condition is satisfied, 100% branch coverage is assured. Using these results, we propose practical recommendations for using combinatorial coverage in specifying test requirements. PMID:28133442
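A small sketch of the (static) combinatorial-coverage idea described above: for every pair of parameters, count which value combinations appear somewhere in the test set and report the covered fraction. The parameter names and test rows are hypothetical, not from the paper.

```python
from itertools import combinations, product

# Hypothetical test set: each row assigns a value to parameters A, B, C (binary here).
tests = [
    {"A": 0, "B": 0, "C": 0},
    {"A": 1, "B": 1, "C": 0},
    {"A": 0, "B": 1, "C": 1},
    {"A": 1, "B": 0, "C": 1},
]
values = {"A": [0, 1], "B": [0, 1], "C": [0, 1]}

def pairwise_coverage(tests, values):
    """Fraction of all 2-way value combinations that appear in at least one test."""
    covered = total = 0
    for p, q in combinations(sorted(values), 2):
        seen = {(t[p], t[q]) for t in tests}
        total += len(values[p]) * len(values[q])
        covered += len(seen & set(product(values[p], values[q])))
    return covered / total

print(pairwise_coverage(tests, values))  # 1.0: these 4 tests form a pairwise covering array
```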
McBee, Morgan P; Laor, Tal; Pryor, Rebecca M; Smith, Rachel; Hardin, Judy; Ulland, Lisa; May, Sally; Zhang, Bin; Towbin, Alexander J
2018-02-01
The purpose of this study was to adapt our radiology reports to provide the documentation required for specific International Classification of Diseases, tenth rev (ICD-10) diagnosis coding. Baseline data were analyzed to identify the reports with the greatest number of unspecified ICD-10 codes assigned by computer-assisted coding software. A two-part quality improvement initiative was subsequently implemented. The first component involved improving clinical histories by utilizing technologists to obtain information directly from the patients or caregivers, which was then imported into the radiologist's report within the speech recognition software. The second component involved standardization of report terminology and creation of four different structured report templates to determine which yielded the fewest reports with an unspecified ICD-10 code assigned by an automated coding engine. In all, 12,077 reports were included in the baseline analysis. Of these, 5,151 (43%) had an unspecified ICD-10 code. The majority of deficient reports were for radiographs (n = 3,197; 62%). Inadequacies included insufficient clinical history provided and lack of detailed fracture descriptions. Therefore, the focus was standardizing terminology and testing different structured reports for radiographs obtained for fractures. At baseline, 58% of radiography reports contained a complete clinical history with improvement to >95% 8 months later. The total number of reports that contained an unspecified ICD-10 code improved from 43% at baseline to 27% at completion of this study (P < .0001). The number of radiology studies with a specific ICD-10 code can be improved through quality improvement methodology, specifically through the use of technologist-acquired clinical histories and structured reporting. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
TRAC-PF1/MOD1 support calculations for the MIST/OTIS program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujita, R.K.; Knight, T.D.
1984-01-01
We are using the Transient Reactor Analysis Code (TRAC), specifically version TRAC-PF1/MOD1, to perform analyses in support of the Multi-Loop Integral-System Test (MIST) and the Once-Through Integral-System (OTIS) experiment program. We have analyzed Geradrohr Dampferzeuger Anlage (GERDA) Test 1605AA to benchmark the TRAC-PF1/MOD1 code against phenomena expected to occur in a raised-loop B&W plant during a small-break loss-of-coolant accident (SBLOCA). These results show that the code can calculate both single- and two-phase natural circulation, flow interruption, boiler-condenser-mode (BCM) heat transfer, and primary-system refill in a B&W-type geometry with low-elevation auxiliary feedwater. 19 figures, 7 tables.
1991-08-01
specifications are taken primarily from the 1983 version of the ASME Boiler and Pressure Vessel Code. Other design requirements were developed from standard safe... rules and practices of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code to provide a safe and reliable system
ERIC Educational Resources Information Center
Guan, Connie Qun; Liu, Ying; Chan, Derek Ho Leung; Ye, Feifei; Perfetti, Charles A.
2011-01-01
Learning to write words may strengthen orthographic representations and thus support word-specific recognition processes. This hypothesis applies especially to Chinese because its writing system encourages character-specific recognition that depends on accurate representation of orthographic form. We report 2 studies that test this hypothesis in…
Pankhurst, Louise; Macfarlane-Smith, Louissa; Buchanan, James; Anson, Luke; Davies, Kerrie; O'Connor, Lily; Ashwin, Helen; Pike, Graham; Dingle, Kate E; Peto, Timothy Ea; Wordsworth, Sarah; Walker, A Sarah; Wilcox, Mark H; Crook, Derrick W
2014-08-01
Every year approximately 5000-9000 patients are admitted to a hospital with diarrhoea, which in up to 90% of cases has a non-infectious cause. As a result, single rooms are 'blocked' by patients with non-infectious diarrhoea, while patients with infectious diarrhoea are still in open bays because of a lack of free side rooms. A rapid test for differentiating infectious from non-infectious diarrhoea could be very beneficial for patients. To evaluate MassCode multiplex polymerase chain reaction (PCR) for the simultaneous diagnosis of multiple enteropathogens directly from stool, in terms of sensitivity/specificity to detect four common important enteropathogens: Clostridium difficile, Campylobacter spp., Salmonella spp. and norovirus. A retrospective study of fixed numbers of samples positive for C. difficile (n = 200), Campylobacter spp. (n = 200), Salmonella spp. (n = 100) and norovirus (n = 200) plus samples negative for all these pathogens (n = 300). Samples were sourced from NHS microbiology laboratories in Oxford and Leeds where initial diagnostic testing was performed according to Public Health England methodology. Researchers carrying out MassCode assays were blind to this information. A questionnaire survey, examining current practice for infection control teams and microbiology laboratories managing infectious diarrhoea, was also carried out. MassCode assays were carried out at Oxford University Hospitals NHS Trust. Further multiplex assays, carried out using Luminex, were run on the same set of samples at Leeds Teaching Hospitals NHS Trust. The questionnaire was completed by various NHS trusts. Sensitivity and specificity to detect C. difficile, Campylobacter spp., Salmonella spp., and norovirus. Nucleic acids were extracted from 948 clinical samples using an optimised protocol (200 Campylobacter spp., 199 C. difficile, 60 S. enterica, 199 norovirus and 295 negative samples; some samples contained more than one pathogen). Using the MassCode assay, sensitivities for each organism compared with standard microbiological testing ranged from 43% to 94% and specificities from 95% to 98%, with particularly poor performance for S. enterica. Relatively large numbers of unexpected positives not confirmed with quantitative PCR were also observed, particularly for S. enterica, Giardia lamblia and Cryptosporidium spp. As the results indicated that S. enterica detection might provide generic challenges to other multiplex assays for gastrointestinal pathogens, the Luminex xTag(®) gastrointestinal assay was also run blinded on the same extracts (937/948 remaining) and on re-extracted samples (839/948 with sufficient material). For Campylobacter spp., C. difficile and norovirus, high sensitivities (> 92%) and specificities (> 96%) were observed. For S. enterica, on the original MassCode/Oxford extracts, Luminex sensitivity compared with standard microbiological testing was 84% [95% confidence interval (CI) 73% to 93%], but this dropped to 46% on a fresh extract, very similar to MassCode, with a corresponding increase in specificity from 92% to 99%. Overall agreement on the per-sample diagnosis compared with combined microbiology plus PCR for the main four/all pathogens was 85.6%/64.7%, 87.0%/82.9% and 89.8%/86.8% for the MassCode assay, Luminex assay/MassCode extract and Luminex assay/fresh extract, respectively. Luminex assay results from fresh extracts implied that 5% of samples did not represent infectious diarrhoea, even though enteropathogens were genuinely present. 
Managing infectious diarrhoea was a significant burden for infection control teams (taking 21% of their time) and better diagnostics were identified as having major potential benefits for patients. Overall, the Luminex xTag gastrointestinal panel showed similar or superior sensitivity and specificity to the MassCode assay. However, on fresh extracts, this test had low sensitivity to detect a key enteric pathogen, S. enterica; making it an unrealistic option for most microbiology laboratories. Extraction efficiency appears to be a major obstacle for nucleic acid-based tests for this organism, and possibly the whole Enterobacteriaceae family. To improve workflows in service microbiology laboratories, to reduce workload for infection control practitioners, and to improve outcomes for NHS patients, further research on deoxyribonucleic acid-based multiplex gastrointestinal diagnostics is urgently needed. The Health Technology Assessment programme of the National Institute for Health Research.
ERIC Educational Resources Information Center
New South Wales Dept. of Education, Sydney (Australia).
As part of a series of tests to measure mastery of specific skills in the natural sciences, copies of tests 39 through 50 include: (39) using a code; (40) naming the parts of a microscope; (41) calculating density and predicting flotation; (42) estimating metric length; (43) using SI symbols; (44) using s=vt; (45) applying a novel theory; (46)…
Thermodynamic properties of UF6 measured with a ballistic piston compressor
NASA Technical Reports Server (NTRS)
Sterritt, D. E.; Lalos, G. T.; Schneider, R. T.
1973-01-01
From experiments performed with a ballistic piston compressor, certain thermodynamic properties of uranium hexafluoride were investigated. Difficulties presented by the nonideal processes encountered in ballistic compressors are discussed and a computer code BCCC (Ballistic Compressor Computer Code) is developed to analyze the experimental data. The BCCC unfolds the thermodynamic properties of uranium hexafluoride from the helium-uranium hexafluoride mixture used as the test gas in the ballistic compressor. The thermodynamic properties deduced include the specific heat at constant volume, the ratio of specific heats for UF6, and the viscous coupling constant of helium-uranium hexafluoride mixtures.
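For orientation, the kind of ideal-gas mixture relations typically assumed when unfolding the UF6 contribution from helium-UF6 mixture data are shown below; this is a generic sketch on a molar basis (mole fractions x, molar specific heats c_v, universal gas constant R), not necessarily the exact formulation used in the BCCC code.

```latex
c_{v,\mathrm{mix}} = x_{\mathrm{He}}\, c_{v,\mathrm{He}} + x_{\mathrm{UF_6}}\, c_{v,\mathrm{UF_6}},
\qquad
\gamma_{\mathrm{mix}} = \frac{c_{p,\mathrm{mix}}}{c_{v,\mathrm{mix}}} = 1 + \frac{R}{c_{v,\mathrm{mix}}}
```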
Development and feasibility testing of the Pediatric Emergency Discharge Interaction Coding Scheme.
Curran, Janet A; Taylor, Alexandra; Chorney, Jill; Porter, Stephen; Murphy, Andrea; MacPhee, Shannon; Bishop, Andrea; Haworth, Rebecca
2017-08-01
Discharge communication is an important aspect of high-quality emergency care. This study addresses the gap in knowledge on how to describe discharge communication in a paediatric emergency department (ED). The objective of this feasibility study was to develop and test a coding scheme to characterize discharge communication between health-care providers (HCPs) and caregivers who visit the ED with their children. The Pediatric Emergency Discharge Interaction Coding Scheme (PEDICS) and coding manual were developed following a review of the literature and an iterative refinement process involving HCP observations, inter-rater assessments and team consensus. The coding scheme was pilot-tested through observations of HCPs across a range of shifts in one urban paediatric ED. Overall, 329 patient observations were carried out across 50 observational shifts. Inter-rater reliability was evaluated in 16% of the observations. The final version of the PEDICS contained 41 communication elements. Kappa scores were greater than .60 for the majority of communication elements. The most frequently observed communication elements were under the Introduction node and the least frequently observed were under the Social Concerns node. HCPs initiated the majority of the communication. Pediatric Emergency Discharge Interaction Coding Scheme addresses an important gap in the discharge communication literature. The tool is useful for mapping patterns of discharge communication between HCPs and caregivers. Results from our pilot test identified deficits in specific areas of discharge communication that could impact adherence to discharge instructions. The PEDICS would benefit from further testing with a different sample of HCPs. © 2017 The Authors. Health Expectations Published by John Wiley & Sons Ltd.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stofleth, Jerome H.; Tribble, Megan Kimberly; Crocker, Robert W.
2017-05-01
The V27 containment vessel was procured by the US Army Recovered Chemical Material Directorate (RCMD) as a replacement vessel for use on the P2 Explosive Destruction Systems. It is the third EDS vessel to be fabricated under Code Case 2564 of the ASME Boiler and Pressure Vessel Code, which provides rules for the design of impulsively loaded vessels. The explosive rating for the vessel, based on the Code Case, is nine (9) pounds TNT-equivalent for up to 637 detonations. This report documents the results of explosive tests that were done on the vessel at Sandia National Laboratories in Albuquerque, New Mexico, to qualify the vessel for explosive use. The primary qualification test consisted of six 1.5-pound charges of Composition C-4 (equivalent to 11.25 pounds TNT) distributed around the vessel in accordance with the User Design Specification. Four subsequent tests using less explosive evaluated the effects of slight variations in orientation of the charges. All vessel acceptance criteria were met.
Larouche, Geneviève; Chiquette, Jocelyne; Plante, Marie; Pelletier, Sylvie; Simard, Jacques; Dorval, Michel
2016-11-01
In Canada, recommendations for clinical management of hereditary breast and ovarian cancer among individuals carrying a deleterious BRCA1 or BRCA2 mutation have been available since 2007. Eight years later, very little is known about the uptake of screening and risk-reduction measures in this population. Because Canada's public health care system falls under provincial jurisdictions, using provincial health care administrative databases appears to be a valuable option to assess management of BRCA1/2 mutation carriers. The objective was to explore the usefulness of public health insurance administrative databases in British Columbia, Ontario, and Quebec to assess management after BRCA1/2 genetic testing. Official public health insurance documents were considered potentially useful if they had specific procedure codes, and pertained to procedures performed in the public and private health care systems. All 3 administrative databases have specific procedure codes for mammography and breast ultrasounds. Only Quebec and Ontario have a specific procedure code for breast magnetic resonance imaging. It is impossible to assess, on an individual basis, the frequency of other screening exams, with the exception of CA-125 testing in British Columbia. Screenings done in private practice are excluded from the administrative databases unless covered by special agreements for reimbursement, such as all breast imaging exams in Ontario and mammograms in British Columbia and Quebec. There are no specific procedure codes for risk-reduction surgeries for breast and ovarian cancer. Population-based assessment of breast and ovarian cancer risk management strategies other than mammographic screening, using only administrative data, is currently challenging in the 3 Canadian provinces studied. Copyright © 2016 Canadian Association of Radiologists. Published by Elsevier Inc. All rights reserved.
Schultheiss, Oliver C.
2013-01-01
Traditionally, implicit motives (i.e., non-conscious preferences for specific classes of incentives) are assessed through semantic coding of imaginative stories. The present research tested the marker-word hypothesis, which states that implicit motives are reflected in the frequencies of specific words. Using Linguistic Inquiry and Word Count (LIWC; Pennebaker et al., 2001), Study 1 identified word categories that converged with a content-coding measure of the implicit motives for power, achievement, and affiliation in picture stories collected in German and US student samples, showed discriminant validity with self-reported motives, and predicted well-validated criteria of implicit motives (gender difference for the affiliation motive; in interaction with personal-goal progress: emotional well-being). Study 2 demonstrated LIWC-based motive scores' causal validity by documenting their sensitivity to motive arousal. PMID:24137149
DD3MAT - a code for yield criteria anisotropy parameters identification.
NASA Astrophysics Data System (ADS)
Barros, P. D.; Carvalho, P. D.; Alves, J. L.; Oliveira, M. C.; Menezes, L. F.
2016-08-01
This work presents the main strategies and algorithms adopted in the DD3MAT in-house code, specifically developed for identifying the anisotropy parameters. The algorithm adopted is based on the minimization of an error function, using a downhill simplex method. The set of experimental values can consider yield stresses and r-values obtained from in-plane tension, for different angles to the rolling direction (RD), the yield stress and r-value obtained for a biaxial stress state, and yield stresses from shear tests also performed at different angles to RD. All these values can be defined for a specific value of plastic work. Moreover, it can also include the yield stresses obtained from in-plane compression tests. The anisotropy parameters are identified for an AA2090-T3 aluminium alloy, highlighting the importance of the user intervention to improve the numerical fit.
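The parameter-identification step described here, minimizing an error function over directional experimental data with a downhill simplex search, can be sketched as follows. The model function is a deliberately simple stand-in for a real anisotropic yield criterion, and the data are invented; only the optimization pattern mirrors the abstract.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical normalized yield stresses from uniaxial tension at angles to RD.
angles_deg = np.array([0.0, 45.0, 90.0])
sigma_exp = np.array([1.00, 0.96, 1.02])

def sigma_model(params, angles_deg):
    # Stand-in directional model (NOT a real yield criterion): truncated cosine series.
    a0, a2, a4 = params
    th = np.radians(angles_deg)
    return a0 + a2 * np.cos(2 * th) + a4 * np.cos(4 * th)

def error_function(params):
    # Sum of squared relative differences between model and experiment.
    r = (sigma_model(params, angles_deg) - sigma_exp) / sigma_exp
    return float(np.sum(r ** 2))

result = minimize(error_function, x0=[1.0, 0.0, 0.0], method="Nelder-Mead")  # downhill simplex
print(result.x, result.fun)
```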
Nonspecific gastrointestinal (GI) disease is a common cause of GI-related hospitalizations in U.S. elderly (82.9% of all cases) and it peaks concurrently with viral enteritis, suggesting a lack of diagnostic testing. The lack of etiological specificity in the current coding syste...
Lin, M.C.; Vreeman, D.J.; Huff, S.M.
2012-01-01
Objectives We wanted to develop a method for evaluating the consistency and usefulness of LOINC code use across different institutions, and to evaluate the degree of interoperability that can be attained when using LOINC codes for laboratory data exchange. Our specific goals were to: 1) Determine if any contradictory knowledge exists in LOINC. 2) Determine how many LOINC codes were used in a truly interoperable fashion between systems. 3) Provide suggestions for improving the semantic interoperability of LOINC. Methods We collected Extensional Definitions (EDs) of LOINC usage from three institutions. The version space approach was used to divide LOINC codes into small sets, which made auditing of LOINC use across the institutions feasible. We then compared pairings of LOINC codes from the three institutions for consistency and usefulness. Results The number of LOINC codes evaluated were 1,917, 1,267 and 1,693 as obtained from ARUP, Intermountain and Regenstrief respectively. There were 2,022, 2,030, and 2,301 version spaces among ARUP & Intermountain, Intermountain & Regenstrief and ARUP & Regenstrief respectively. Using the EDs as the gold standard, there were 104, 109 and 112 pairs containing contradictory knowledge and there were 1,165, 765 and 1,121 semantically interoperable pairs. The interoperable pairs were classified into three levels: 1) Level I – No loss of meaning, complete information was exchanged by identical codes. 2) Level II – No loss of meaning, but processing of data was needed to make the data completely comparable. 3) Level III – Some loss of meaning. For example, tests with a specific ‘method’ could be rolled-up with tests that were ‘methodless’. Conclusions There are variations in the way LOINC is used for data exchange that result in some data not being truly interoperable across different enterprises. To improve its semantic interoperability, we need to detect and correct any contradictory knowledge within LOINC and add computable relationships that can be used for making reliable inferences about the data. The LOINC committee should also provide detailed guidance on best practices for mapping from local codes to LOINC codes and for using LOINC codes in data exchange. PMID:22306382
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to different VERA codes.
Réfega, Susana; Girard-Misguich, Fabienne; Bourdieu, Christiane; Péry, Pierre; Labbé, Marie
2003-04-02
Specific antibodies were produced ex vivo from intestinal culture of Eimeria tenella infected chickens. The specificity of these intestinal antibodies was tested against different parasite stages. These antibodies were used to immunoscreen first generation schizont and sporozoite cDNA libraries permitting the identification of new E. tenella antigens. We obtained a total of 119 cDNA clones which were subjected to sequence analysis. The sequences coding for the proteins inducing local immune responses were compared with nucleotide or protein databases and with expressed sequence tags (ESTs) databases. We identified new Eimeria genes coding for heat shock proteins, a ribosomal protein, a pyruvate kinase and a pyridoxine kinase. Specific features of other sequences are discussed.
Validation of asthma recording in the Clinical Practice Research Datalink (CPRD)
Morales, Daniel R; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J; Quint, Jennifer K
2017-01-01
Objectives The optimal method of identifying people with asthma from electronic health records in primary care is not known. The aim of this study is to determine the positive predictive value (PPV) of different algorithms using clinical codes and prescription data to identify people with asthma in the United Kingdom Clinical Practice Research Datalink (CPRD). Methods 684 participants registered with a general practitioner (GP) practice contributing to CPRD between 1 December 2013 and 30 November 2015 were selected according to one of eight predefined potential asthma identification algorithms. A questionnaire was sent to the GPs to confirm asthma status and provide additional information to support an asthma diagnosis. Two study physicians independently reviewed and adjudicated the questionnaires and additional information to form a gold standard for asthma diagnosis. The PPV was calculated for each algorithm. Results 684 questionnaires were sent, of which 494 (72%) were returned and 475 (69%) were complete and analysed. All five algorithms including a specific Read code indicating asthma or non-specific Read code accompanied by additional conditions performed well. The PPV for asthma diagnosis using only a specific asthma code was 86.4% (95% CI 77.4% to 95.4%). Extra information on asthma medication prescription (PPV 83.3%), evidence of reversibility testing (PPV 86.0%) or a combination of all three selection criteria (PPV 86.4%) did not result in a higher PPV. The algorithm using non-specific asthma codes, information on reversibility testing and respiratory medication use scored highest (PPV 90.7%, 95% CI 82.8% to 98.7%), but had a much lower identifiable population. Algorithms based on asthma symptom codes had low PPVs (43.1% to 57.8%). Conclusions People with asthma can be accurately identified from UK primary care records using specific Read codes. The inclusion of spirometry or asthma medications in the algorithm did not clearly improve accuracy. Ethics and dissemination The protocol for this research was approved by the Independent Scientific Advisory Committee (ISAC) for MHRA Database Research (protocol number 15_257) and the approved protocol was made available to the journal and reviewers during peer review. Generic ethical approval for observational research using the CPRD with approval from ISAC has been granted by a Health Research Authority Research Ethics Committee (East Midlands-Derby, REC reference number 05/MRE04/87). The results will be submitted for publication and will be disseminated through research conferences and peer-reviewed journals. PMID:28801439
TRAC-PF1/MOD1 pretest predictions of MIST experiments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boyack, B.E.; Steiner, J.L.; Siebe, D.A.
Los Alamos National Laboratory is a participant in the Integral System Test (IST) program initiated in June 1983 to provide integral system test data on specific issues and phenomena relevant to post small-break loss-of-coolant accidents (SBLOCAs) in Babcock and Wilcox plant designs. The Multi-Loop Integral System Test (MIST) facility is the largest single component in the IST program. During Fiscal Year 1986, Los Alamos performed five MIST pretest analyses. The five experiments were chosen on the basis of their potential either to approach the facility limits or to challenge the predictive capability of the TRAC-PF1/MOD1 code. Three SBLOCA tests were examined which included nominal test conditions, throttled auxiliary feedwater and asymmetric steam-generator cooldown, and reduced high-pressure-injection (HPI) capacity, respectively. Also analyzed were two "feed-and-bleed" cooling tests with reduced HPI and delayed HPI initiation. Results of the tests showed that the MIST facility limits would not be approached in the five tests considered. Early comparisons with preliminary test data indicate that the TRAC-PF1/MOD1 code is correctly calculating the dominant phenomena occurring in the MIST facility during the tests. Posttest analyses are planned to provide a quantitative assessment of the code's ability to predict MIST transients.
Transient Ejector Analysis (TEA) code user's guide
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1993-01-01
A FORTRAN computer program for the semi-analytic prediction of unsteady thrust-augmenting ejector performance has been developed, based on a theoretical analysis for ejectors. That analysis blends classic self-similar turbulent jet descriptions with control-volume mixing region elements. Division of the ejector into an inlet, diffuser, and mixing region allowed flexibility in the modeling of the physics for each region. In particular, the inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumption that pressure is the forcing function in those regions. Only the mixing region is assumed to be dominated by viscous effects. The present work provides an overview of the code structure, a description of the required input and output data file formats, and the results for a test case. Since there are limitations to the code for applications outside the bounds of the test case, the user should consider TEA as a research code (not as a production code), designed specifically as an implementation of the proposed ejector theory. Program error flags are discussed, and some diagnostic routines are presented.
NASA Technical Reports Server (NTRS)
Clark, R. T.; Mccallister, R. D.
1982-01-01
The particular coding option identified as providing the best level of coding gain performance in an LSI-efficient implementation was the optimal constraint length five, rate one-half convolutional code. To determine the specific set of design parameters which optimally matches this decoder to the LSI constraints, a breadboard MCD (maximum-likelihood convolutional decoder) was fabricated and used to generate detailed performance trade-off data. The extensive performance testing data gathered during this design tradeoff study are summarized, and the functional and physical MCD chip characteristics are presented.
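For orientation, the kind of code being decoded here, a rate one-half, constraint-length-five convolutional code, can be sketched with a small encoder. The generator taps below are illustrative only and are not necessarily the optimal pair selected for the MCD chip.

```python
def conv_encode(bits, g1=0b10111, g2=0b11001):
    """Rate-1/2 convolutional encoder with constraint length 5 (illustrative generator taps)."""
    state = 0  # 4-bit shift register holding the previous input bits
    out = []
    for b in bits:
        reg = (b << 4) | state                     # the 5 bits seen by the encoder
        out.append(bin(reg & g1).count("1") % 2)   # parity of tapped bits, generator 1
        out.append(bin(reg & g2).count("1") % 2)   # parity of tapped bits, generator 2
        state = (reg >> 1) & 0b1111                # shift the newest bit into the register
    return out

print(conv_encode([1, 0, 1, 1, 0, 0, 0, 0]))  # two output bits per input bit
```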
HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual
NASA Technical Reports Server (NTRS)
Moitra, Anutosh
1989-01-01
A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids are explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
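The algebraic homotopy at the heart of such grid generation can be sketched as a blend between a body curve and an outer-boundary curve in each cross-sectional plane, with the blended curves stacked to form the grid. The shapes and the linear blending function below are illustrative only, not HOMAR's actual relations.

```python
import numpy as np

def homotopic_grid(inner, outer, n_radial):
    """Blend two closed curves (arrays of shape (n_points, 2)) into a 2-D grid.

    grid[j] = (1 - s_j) * inner + s_j * outer, with s_j in [0, 1].
    """
    s = np.linspace(0.0, 1.0, n_radial)[:, None, None]
    return (1.0 - s) * inner[None, :, :] + s * outer[None, :, :]

# Illustrative cross-section: an ellipse-like body inside a circular far-field boundary.
theta = np.linspace(0.0, 2.0 * np.pi, 73)
inner = np.stack([0.5 * np.cos(theta), 0.2 * np.sin(theta)], axis=1)
outer = np.stack([3.0 * np.cos(theta), 3.0 * np.sin(theta)], axis=1)
grid = homotopic_grid(inner, outer, n_radial=21)   # shape (21, 73, 2)
print(grid.shape)
```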
ERIC Educational Resources Information Center
Rønberg, Louise Flensted; Petersen, Dorthe Klint
2016-01-01
This study explores the incidence of poor comprehenders, that is, children identified as having reading comprehension difficulties, despite age-appropriate word reading skills. It supports the findings that some children do show poor reading comprehension, despite age-appropriate word reading, as measured with a phonological coding test. However,…
Automated encoding of clinical documents based on natural language processing.
Friedman, Carol; Shagina, Lyudmila; Lussier, Yves; Hripcsak, George
2004-01-01
The aim of this study was to develop a method based on natural language processing (NLP) that automatically maps an entire clinical document to codes with modifiers and to quantitatively evaluate the method. An existing NLP system, MedLEE, was adapted to automatically generate codes. The method involves matching of structured output generated by MedLEE consisting of findings and modifiers to obtain the most specific code. Recall and precision applied to Unified Medical Language System (UMLS) coding were evaluated in two separate studies. Recall was measured using a test set of 150 randomly selected sentences, which were processed using MedLEE. Results were compared with a reference standard determined manually by seven experts. Precision was measured using a second test set of 150 randomly selected sentences from which UMLS codes were automatically generated by the method and then validated by experts. Recall of the system for UMLS coding of all terms was .77 (95% CI.72-.81), and for coding terms that had corresponding UMLS codes recall was .83 (.79-.87). Recall of the system for extracting all terms was .84 (.81-.88). Recall of the experts ranged from .69 to .91 for extracting terms. The precision of the system was .89 (.87-.91), and precision of the experts ranged from .61 to .91. Extraction of relevant clinical information and UMLS coding were accomplished using a method based on NLP. The method appeared to be comparable to or better than six experts. The advantage of the method is that it maps text to codes along with other related information, rendering the coded output suitable for effective retrieval.
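The matching step described above, structured output of findings and modifiers matched to obtain the most specific code, can be illustrated with a toy lookup that prefers the longest matching combination of finding plus modifiers. The terms and codes are invented and are not UMLS content or MedLEE logic.

```python
# Invented finding/modifier-to-code table (NOT actual UMLS codes).
CODE_TABLE = {
    ("pneumonia", "left", "lower lobe"): "C0001",
    ("pneumonia", "left"): "C0002",
    ("pneumonia",): "C0003",
}

def most_specific_code(finding, modifiers):
    """Try the finding with progressively fewer modifiers until a code matches."""
    for k in range(len(modifiers), -1, -1):
        key = (finding, *modifiers[:k])
        if key in CODE_TABLE:
            return CODE_TABLE[key]
    return None

assert most_specific_code("pneumonia", ["left", "lower lobe"]) == "C0001"
assert most_specific_code("pneumonia", ["right"]) == "C0003"
```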
Three-Dimensional Simulation of Traveling-Wave Tube Cold-Test Characteristics Using MAFIA
NASA Technical Reports Server (NTRS)
Kory, Carol L.; Wilson, Jeffrey D.
1995-01-01
The three-dimensional simulation code MAFIA was used to compute the cold-test parameters - frequency-phase dispersion, beam on-axis interaction impedance, and attenuation - for two types of traveling-wave tube (TWT) slow-wave circuits. The potential for this electromagnetic computer modeling code to reduce the time and cost of TWT development is demonstrated by the high degree of accuracy achieved in calculating these parameters. Generalized input files were developed for ferruled coupled-cavity and TunneLadder slow-wave circuits. These files make it easy to model circuits of arbitrary dimensions. The utility of these files was tested by applying each to a specific TWT slow-wave circuit and comparing the results with experimental data. Excellent agreement was obtained.
Standardized development of computer software. Part 2: Standards
NASA Technical Reports Server (NTRS)
Tausworthe, R. C.
1978-01-01
This monograph contains standards for software development and engineering. The book sets forth rules for design, specification, coding, testing, documentation, and quality assurance audits of software; it also contains detailed outlines for the documentation to be produced.
Earthquake Early Warning ShakeAlert System: Testing and certification platform
Cochran, Elizabeth S.; Kohler, Monica D.; Given, Douglas; Guiwits, Stephen; Andrews, Jennifer; Meier, Men-Andrin; Ahmad, Mohammad; Henson, Ivan; Hartog, Renate; Smith, Deborah
2017-01-01
Earthquake early warning systems provide warnings to end users of incoming moderate to strong ground shaking from earthquakes. An earthquake early warning system, ShakeAlert, is providing alerts to beta end users in the western United States, specifically California, Oregon, and Washington. An essential aspect of the earthquake early warning system is the development of a framework to test modifications to code to ensure functionality and assess performance. In 2016, a Testing and Certification Platform (TCP) was included in the development of the Production Prototype version of ShakeAlert. The purpose of the TCP is to evaluate the robustness of candidate code that is proposed for deployment on ShakeAlert Production Prototype servers. TCP consists of two main components: a real‐time in situ test that replicates the real‐time production system and an offline playback system to replay test suites. The real‐time tests of system performance assess code optimization and stability. The offline tests comprise a stress test of candidate code to assess if the code is production ready. The test suite includes over 120 events including local, regional, and teleseismic historic earthquakes, recentering and calibration events, and other anomalous and potentially problematic signals. Two assessments of alert performance are conducted. First, point‐source assessments are undertaken to compare magnitude, epicentral location, and origin time with the Advanced National Seismic System Comprehensive Catalog, as well as to evaluate alert latency. Second, we describe assessment of the quality of ground‐motion predictions at end‐user sites by comparing predicted shaking intensities to ShakeMaps for historic events and implement a threshold‐based approach that assesses how often end users initiate the appropriate action, based on their ground‐shaking threshold. TCP has been developed to be a convenient streamlined procedure for objectively testing algorithms, and it has been designed with flexibility to accommodate significant changes in development of new or modified system code. It is expected that the TCP will continue to evolve along with the ShakeAlert system, and the framework we describe here provides one example of how earthquake early warning systems can be evaluated.
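A minimal sketch of the threshold-based end-user assessment described above: compare predicted and observed shaking intensities against a user's action threshold and tally correct alerts, missed alerts, and false alerts. The intensity values and threshold are hypothetical, not ShakeAlert data.

```python
def assess_alerts(predicted, observed, threshold):
    """Classify each event for one end user with a fixed ground-shaking action threshold."""
    tally = {"correct_alert": 0, "missed_alert": 0, "false_alert": 0, "correct_no_alert": 0}
    for p, o in zip(predicted, observed):
        if p >= threshold and o >= threshold:
            tally["correct_alert"] += 1
        elif p < threshold and o >= threshold:
            tally["missed_alert"] += 1
        elif p >= threshold and o < threshold:
            tally["false_alert"] += 1
        else:
            tally["correct_no_alert"] += 1
    return tally

# Hypothetical predicted vs. observed intensities for five test events.
print(assess_alerts([4.2, 5.6, 3.1, 6.0, 2.5], [4.8, 5.1, 3.9, 4.4, 2.0], threshold=4.5))
```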
Build-up Approach to Updating the Mock Quiet Spike(TradeMark) Beam Model
NASA Technical Reports Server (NTRS)
Herrera, Claudia Y.; Pak, Chan-gi
2007-01-01
A crucial part of aircraft design is ensuring that the required margin for flutter is satisfied. A trustworthy flutter analysis, which begins by possessing an accurate dynamics model, is necessary for this task. Traditionally, a model was updated manually by fine tuning specific stiffness parameters until the analytical results matched test data. This is a time consuming iterative process. NASA Dryden Flight Research Center has developed a mode matching code to execute this process in a more efficient manner. Recently, this code was implemented in the F-15B/Quiet Spike(TradeMark) (Gulfstream Aerospace Corporation, Savannah, Georgia) model update. A build-up approach requiring several ground vibration test configurations and a series of model updates was implemented in order to determine the connection stiffness between aircraft and test article. The mode matching code successfully updated various models for the F-15B/Quiet Spike(TradeMark) project to within 1 percent error in frequency and the modal assurance criteria values ranged from 88.51-99.42 percent.
On transform coding tools under development for VP10
NASA Astrophysics Data System (ADS)
Parker, Sarah; Chen, Yue; Han, Jingning; Liu, Zoe; Mukherjee, Debargha; Su, Hui; Wang, Yongzhe; Bankoski, Jim; Li, Shunyao
2016-09-01
Google started the WebM Project in 2010 to develop open-source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next edition codec, VP10, that achieves at least a generational improvement in coding efficiency over VP9. Starting from VP9, a set of new experimental coding tools have already been added to VP10 to achieve decent coding gains. Subsequently, Google joined a consortium of major tech companies called the Alliance for Open Media to jointly develop a new codec, AV1. As a result, the VP10 effort is largely expected to merge with AV1. In this paper, we focus primarily on new tools in VP10 that improve coding of the prediction residue using transform coding techniques. Specifically, we describe tools that increase the flexibility of available transforms, allowing the codec to handle a more diverse range of residue structures. Results are presented on a standard test set.
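A toy illustration of the "more flexible transforms" idea: try each separable combination of 1-D DCT and DST on a residual block and keep the one with the cheapest coefficients, using the sum of absolute coefficient values as a crude stand-in for rate. This is only a conceptual sketch, not VP10/AV1 code, and the cost measure and block are assumptions.

```python
import numpy as np
from scipy.fft import dct, dst
from itertools import product

def transform_2d(block, row_kind, col_kind):
    f = {"DCT": lambda x, ax: dct(x, type=2, axis=ax, norm="ortho"),
         "DST": lambda x, ax: dst(x, type=2, axis=ax, norm="ortho")}
    return f[col_kind](f[row_kind](block, 1), 0)

def best_transform(block):
    # Pick the row/column transform pair with the smallest L1 coefficient cost.
    costs = {(r, c): np.abs(transform_2d(block, r, c)).sum()
             for r, c in product(["DCT", "DST"], repeat=2)}
    return min(costs, key=costs.get), costs

rng = np.random.default_rng(0)
residual = rng.normal(size=(8, 8))          # hypothetical 8x8 prediction residue
choice, costs = best_transform(residual)
print(choice, round(costs[choice], 2))
```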
Torres-Montúfar, Alejandro; Borsch, Thomas; Ochoterena, Helga
2018-05-01
The conceptualization and coding of characters is a difficult issue in phylogenetic systematics, no matter which inference method is used when reconstructing phylogenetic trees or if the characters are just mapped onto a specific tree. Complex characters are groups of features that can be divided into simpler hierarchical characters (reductive coding), although the implied hierarchical relational information may change depending on the type of coding (composite vs. reductive). Up to now, there is no common agreement to either code characters as complex or simple. Phylogeneticists have discussed which coding method is best but have not incorporated the heuristic process of reciprocal illumination to evaluate the coding. Composite coding allows to test whether 1) several characters were linked resulting in a structure described as a complex character or trait or 2) independently evolving characters resulted in the configuration incorrectly interpreted as a complex character. We propose that complex characters or character states should be decomposed iteratively into simpler characters when the original homology hypothesis is not corroborated by a phylogenetic analysis, and the character or character state is retrieved as homoplastic. We tested this approach using the case of fruit types within subfamily Cinchonoideae (Rubiaceae). The iterative reductive coding of characters associated with drupes allowed us to unthread fruit evolution within Cinchonoideae. Our results show that drupes and berries are not homologous. As a consequence, a more precise ontology for the Cinchonoideae drupes is required.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Y. Q.; Shemon, E. R.; Thomas, J. W.
SHARP is an advanced modeling and simulation toolkit for the analysis of nuclear reactors. It comprises several components, including physical modeling tools, tools to integrate the physics codes for multi-physics analyses, and a set of tools to couple the codes within the MOAB framework. Physics modules currently include the neutronics code PROTEUS, the thermal-hydraulics code Nek5000, and the structural mechanics code Diablo. This manual focuses on performing multi-physics calculations with the SHARP ToolKit. Manuals for the three individual physics modules are available with the SHARP distribution to help the user either carry out the primary multi-physics calculation with basic knowledge or perform further advanced development with in-depth knowledge of these codes. This manual provides step-by-step instructions on employing SHARP, including how to download and install the code, how to build the drivers for a test case, how to perform a calculation, and how to visualize the results. Since SHARP has some specific library and environment dependencies, it is highly recommended that the user read this manual prior to installing SHARP. Verification test cases are included to check proper installation of each module. It is suggested that the new user first follow the step-by-step instructions provided for a test problem in this manual to understand the basic procedure of using SHARP before using SHARP for his/her own analysis. Both reference output and scripts are provided along with the test cases in order to verify correct installation and execution of the SHARP package. At the end of this manual, detailed instructions are provided on how to create a new test case so that the user can perform novel multi-physics calculations with SHARP. Frequently asked questions are listed at the end of this manual to help the user troubleshoot issues.
Auto Code Generation for Simulink-Based Attitude Determination Control System
NASA Technical Reports Server (NTRS)
MolinaFraticelli, Jose Carlos
2012-01-01
This paper details the work done to auto-generate C code from a Simulink-based Attitude Determination Control System (ADCS) to be used on target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component of the flight software of a satellite. The generated code can be used for carrying out hardware-in-the-loop testing of components for a satellite in a convenient manner with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly, as is, on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can introduce new complications into the simulation. The execution order of these models can change based on these modifications. Great care must be taken to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, the process can be considered a success since all the output requirements are met. Based on these results, it can be argued that this generated C code can be effectively used by any desired platform as long as it follows the specific memory requirements established in the Simulink model.
Proceedings of the 21st DOE/NRC Nuclear Air Cleaning Conference; Sessions 1--8
DOE Office of Scientific and Technical Information (OSTI.GOV)
First, M.W.
1991-02-01
Separate abstracts have been prepared for the papers presented at the meeting on nuclear facility air cleaning technology in the following specific areas of interest: air cleaning technologies for the management and disposal of radioactive wastes; Canadian waste management program; radiological health effects models for nuclear power plant accident consequence analysis; filter testing; US standard codes on nuclear air and gas treatment; European community nuclear codes and standards; chemical processing off-gas cleaning; incineration and vitrification; adsorbents; nuclear codes and standards; mathematical modeling techniques; filter technology; safety; containment system venting; and nuclear air cleaning programs around the world. (MB)
Bachanek-Bankowska, Katarzyna; Mero, Herieth R; Wadsworth, Jemma; Mioulet, Valerie; Sallu, Raphael; Belsham, Graham J; Kasanga, Christopher J; Knowles, Nick J; King, Donald P
2016-11-01
Rapid, reliable and accurate diagnostic methods provide essential support to programmes that monitor and control foot-and-mouth disease (FMD). While pan-specific molecular tests for FMD virus (FMDV) detection are well established and widely used in endemic and FMD-free countries, current serotyping methods mainly rely either on antigen detection ELISAs or nucleotide sequencing approaches. This report describes the development of a panel of serotype-specific real-time RT-PCR assays (rRT-PCR) tailored to detect FMDV lineages currently circulating in East Africa. These assays target sequences within the VP1-coding region that share high intra-lineage identity, but do not cross-react with FMD viruses from other serotypes that circulate in the region. These serotype-specific assays operate with the same thermal profile as the pan-diagnostic tests, making it possible to run them in parallel and produce CT values comparable to the pan-diagnostic test detecting the 3D-coding region. These assays were evaluated alongside the established pan-specific molecular test using field samples and virus isolates collected from Tanzania, Kenya and Ethiopia that had been previously characterised by nucleotide sequencing. Samples (n=71) representing serotype A (topotype AFRICA, lineage G-I), serotype O (topotypes EA-2 and EA-4), serotype SAT 1 (topotype I (NWZ)) and serotype SAT 2 (topotype IV) were correctly identified with these rRT-PCR assays. Furthermore, FMDV RNA from samples that did not contain infectious virus could still be serotyped using these assays. These serotype-specific real-time RT-PCR assays can detect and characterise FMDVs currently circulating in East Africa and hence improve disease control in this region. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
1980-10-01
Specifications are given for the shipping, marking, inspection, testing, and start up of equipment to be used in a proposed wood fuel cogeneration system in Maine. Couplings, mechanical drives, electric motors, spare parts, coatings, assembling, and materials handling and packaging are covered. Both OSHA and noise control regulations are included along with the ASME code.
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.; Dieck, Ronald H.; Chuang, Isaac
1987-01-01
A preliminary uncertainty analysis was performed for the High Area Ratio Rocket Nozzle test program which took place at the altitude test capsule of the Rocket Engine Test Facility at the NASA Lewis Research Center. Results from the study establish the uncertainty of measured and calculated parameters required for the calculation of rocket engine specific impulse. A generalized description of the uncertainty methodology used is provided. Specific equations and a detailed description of the analysis are presented. Verification of the uncertainty analysis model was performed by comparison with results from the experimental program's data reduction code. Final results include an uncertainty for specific impulse of 1.30 percent. The largest contributors to this uncertainty were calibration errors from the test capsule pressure and thrust measurement devices.
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.; Dieck, Ronald H.; Chuang, Isaac
1987-01-01
A preliminary uncertainty analysis has been performed for the High Area Ratio Rocket Nozzle test program which took place at the altitude test capsule of the Rocket Engine Test Facility at the NASA Lewis Research Center. Results from the study establish the uncertainty of measured and calculated parameters required for the calculation of rocket engine specific impulse. A generalized description of the uncertainty methodology used is provided. Specific equations and a detailed description of the analysis are presented. Verification of the uncertainty analysis model was performed by comparison with results from the experimental program's data reduction code. Final results include an uncertainty for specific impulse of 1.30 percent. The largest contributors to this uncertainty were calibration errors from the test capsule pressure and thrust measurement devices.
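The specific-impulse uncertainty quoted above results from combining the uncertainties of the measured quantities; a minimal root-sum-square propagation sketch for Isp = F / (mdot * g0) is shown below, with hypothetical thrust and flow-rate uncertainties rather than the study's actual error budget.

```python
import math

G0 = 9.80665  # standard gravity, m/s^2

def isp(thrust_n, mdot_kg_s):
    """Specific impulse in seconds: Isp = F / (mdot * g0)."""
    return thrust_n / (mdot_kg_s * G0)

def isp_relative_uncertainty(rel_u_thrust, rel_u_mdot):
    """Root-sum-square propagation for the quotient F / mdot."""
    return math.sqrt(rel_u_thrust**2 + rel_u_mdot**2)

# Hypothetical measurement uncertainties (fractions of reading).
u_thrust = 0.010   # e.g., thrust measurement calibration
u_mdot   = 0.008   # e.g., propellant flow rate determination

print(f"Isp = {isp(4000.0, 1.30):.1f} s, "
      f"uncertainty = {100 * isp_relative_uncertainty(u_thrust, u_mdot):.2f} percent")
```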
Infrastructure for Rapid Development of Java GUI Programs
NASA Technical Reports Server (NTRS)
Jones, Jeremy; Hostetter, Carl F.; Wheeler, Philip
2006-01-01
The Java Application Shell (JAS) is a software framework that accelerates the development of Java graphical-user-interface (GUI) application programs by enabling the reuse of common, proven GUI elements, as distinguished from writing custom code for GUI elements. JAS is a software infrastructure upon which Java interactive application programs and graphical user interfaces (GUIs) for those programs can be built as sets of plug-ins. JAS provides an application-programming interface that is extensible by application-specific plug-ins that describe and encapsulate both specifications of a GUI and application-specific functionality tied to the specified GUI elements. The desired GUI elements are specified in Extensible Markup Language (XML) descriptions instead of in compiled code. JAS reads and interprets these descriptions, then creates and configures a corresponding GUI from a standard set of generic, reusable GUI elements. These elements are then attached (again, according to the XML descriptions) to application-specific compiled code and scripts. An application program constructed by use of JAS as its core can be extended by writing new plug-ins and replacing existing plug-ins. Thus, JAS solves many problems that Java programmers generally solve anew for each project, thereby reducing development and testing time.
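JAS itself is a Java framework, but the core idea, building a GUI from an XML description wired to application-specific code, can be sketched compactly. The Python/Tkinter illustration below is only an analogy with a made-up XML schema; it is not the JAS API.

```python
import tkinter as tk
import xml.etree.ElementTree as ET

# Made-up XML description of a window; JAS uses its own, richer schema.
GUI_XML = """
<window title="Demo">
  <label text="Target file"/>
  <button text="Run" action="run_job"/>
</window>
"""

# Application-specific "plug-in" code that the generic GUI elements are wired to.
ACTIONS = {"run_job": lambda: print("running job...")}

def build_gui(xml_text):
    spec = ET.fromstring(xml_text)
    root = tk.Tk()
    root.title(spec.get("title", ""))
    for element in spec:
        if element.tag == "label":
            tk.Label(root, text=element.get("text")).pack()
        elif element.tag == "button":
            tk.Button(root, text=element.get("text"),
                      command=ACTIONS[element.get("action")]).pack()
    return root

if __name__ == "__main__":
    build_gui(GUI_XML).mainloop()
```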
Space Telecommunications Radio System (STRS) Compliance Testing
NASA Technical Reports Server (NTRS)
Handler, Louis M.
2011-01-01
The Space Telecommunications Radio System (STRS) defines an open architecture for software defined radios. This document describes the testing methodology to aid in determining the degree of compliance to the STRS architecture. Non-compliances are reported to the software and hardware developers as well as the NASA project manager so that any non-compliances may be fixed or waivers issued. Since the software developers may be divided into those that provide the operating environment including the operating system and STRS infrastructure (OE) and those that supply the waveform applications, the tests are divided accordingly. The static tests are also divided by the availability of an automated tool that determines whether the source code and configuration files contain the appropriate items. Thus, there are six separate step-by-step test procedures described as well as the corresponding requirements that they test. The six types of STRS compliance tests are: STRS application automated testing, STRS infrastructure automated testing, STRS infrastructure testing by compiling WFCCN with the infrastructure, STRS configuration file testing, STRS application manual code testing, and STRS infrastructure manual code testing. Examples of the input and output of the scripts are shown in the appendices as well as more specific information about what to configure and test in WFCCN for non-compliance. In addition, each STRS requirement is listed and the type of testing briefly described. Attached is also a set of guidelines on what to look for in addition to the requirements to aid in the document review process.
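The automated static tests described above amount to scanning source and configuration files for required items; the sketch below shows that style of check with made-up required identifiers rather than the actual STRS requirement set.

```python
import re

# Hypothetical required items; the real STRS checks look for the architecture's
# specified method names and configuration keywords.
REQUIRED_SOURCE_CALLS = ["APP_Instance", "APP_Start", "APP_Stop"]
REQUIRED_CONFIG_KEYS = ["waveform_name", "operating_environment"]

def missing_items(text, required, whole_word=True):
    """Return the required items that do not appear in the given file text."""
    if whole_word:
        return [it for it in required if not re.search(rf"\b{re.escape(it)}\b", text)]
    return [it for it in required if it not in text]

# In-memory stand-ins for a waveform source file and its configuration file.
source_text = "STRS_Result APP_Instance(...) { }  STRS_Result APP_Start(...) { }"
config_text = "waveform_name = WFCCN_demo"

report = {
    "source": missing_items(source_text, REQUIRED_SOURCE_CALLS),
    "config": missing_items(config_text, REQUIRED_CONFIG_KEYS, whole_word=False),
}
print("non-compliances:", report)
```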
Determination of photovoltaic concentrator optical design specifications using performance modeling
NASA Astrophysics Data System (ADS)
Kerschen, Kevin A.; Levy, Sheldon L.
The strategy used to develop an optical design specification for a 500X concentration photovoltaic module to be used with a 28-percent-efficient concentrator photovoltaic cell is reported. The computer modeling code (PVOPTICS) developed for this purpose, a Fresnel lens design strategy, and optical component specification procedures are described. Comparisons are made between the predicted performance and the measured performance of components fabricated to those specifications. An acrylic lens and a reflective secondary optical element have been tested, showing efficiencies exceeding 88 percent.
NASA Technical Reports Server (NTRS)
1974-01-01
A feasibility unit suitable for use as a voice recorder on the space shuttle was developed. A modification, development, and test program is described. A LM-DSEA recorder was modified to achieve the following goals: (1) redesign case to allow in-flight cartridge change; (2) time code change from LM code to IRIG-B 100 pps code; (3) delete cold plate requirements (also requires deletion of long-term thermal vacuum operation at 0.00001 MMHg); (4) implement track sequence reset during cartridge change; (5) reduce record time per cartridge because of unavailability of LM thin-base tape; and (6) add an internal Vox key circuit to turn on/off transport and electronics with voice data input signal. The recorder was tested at both the LM and shuttle vibration levels. The modified recorder achieved the same level of flutter during vibration as the DSEA recorder prior to modification. Several improvements were made over the specification requirements. The high manufacturing cost is discussed.
NASA Technical Reports Server (NTRS)
Quaranto, Kristy
2014-01-01
This internship provided an opportunity to work with NASA's Ground Support Equipment (GSE) for the Spaceport Command and Control System (SCCS) at Kennedy Space Center as a remote display developer, under NASA technical mentor Kurt Leucht. The main focus was on creating remote displays and applications for the hypergolic and high-pressure helium subsystem team to help control the filling of the respective tanks. As a remote display and application developer for the GSE hypergolic and high-pressure helium subsystem team, the intern was responsible for creating and testing graphical remote displays and applications to be used in the Launch Control Center (LCC) on the Firing Room's computers. To become more familiar with the subsystem, the intern attended multiple project meetings and gathered the team's specific requirements for what needed to be included in the software. After receiving the requirements for the displays, the next step was to create displays with both visual appeal and logical order using the Display Editor on the Virtual Machine (VM). In doing so, all Compact Unique Identifiers (CUIs), which are associated with specific components within the subsystem, needed to be included in each respective display for the system to run properly. Once a display was created, it was tested with the Test Driver, also found on the VM, to ensure that it ran as intended. The Test Driver is an application that checks that all the CUIs in the display are running properly and returning the correct form of information. After a display was created and tested locally, it went through further testing and evaluation before being deemed suitable for actual use. For the remote applications, the intern was responsible for a project focused on channelizing each component included in each display. The core of the application code was created by setting up spreadsheets and having an auto test generator produce the complete code structure. This application code was then loaded and run in a testing environment to ensure that it ran as anticipated. By the end of the semester-long experience at NASA's Kennedy Space Center, the intern had gained knowledge and experience in various areas of both display and application development and testing. This was demonstrated by creating multiple successful remote displays that will one day be used by the hypergolic and high-pressure helium subsystem team in the LCC's firing rooms to service the new Orion spacecraft. The completed display channelization application will be used to receive verification from NASA quality engineers.
Interactive specification acquisition via scenarios: A proposal
NASA Technical Reports Server (NTRS)
Hall, Robert J.
1992-01-01
Some reactive systems are most naturally specified by giving large collections of behavior scenarios. These collections not only specify the behavior of the system, but also provide good test suites for validating the implemented system. Due to the complexity of the systems and the number of scenarios, however, it appears that automated assistance is necessary to make this software development process workable. Interactive Specification Acquisition Tool (ISAT) is a proposed interactive system for supporting the acquisition and maintenance of a formal system specification from scenarios, as well as automatic synthesis of control code and automated test generation. This paper discusses the background, motivation, proposed functions, and implementation status of ISAT.
CBT Specific Process in Exposure-Based Treatments: Initial Examination in a Pediatric OCD Sample
Benito, Kristen Grabill; Conelea, Christine; Garcia, Abbe M.; Freeman, Jennifer B.
2012-01-01
Cognitive-Behavioral theory and empirical support suggest that optimal activation of fear is a critical component for successful exposure treatment. Using this theory, we developed coding methodology for measuring CBT-specific process during exposure. We piloted this methodology in a sample of young children (N = 18) who previously received CBT as part of a randomized controlled trial. Results supported the preliminary reliability and predictive validity of coding variables with 12 week and 3 month treatment outcome data, generally showing results consistent with CBT theory. However, given our limited and restricted sample, additional testing is warranted. Measurement of CBT-specific process using this methodology may have implications for understanding mechanism of change in exposure-based treatments and for improving dissemination efforts through identification of therapist behaviors associated with improved outcome. PMID:22523609
Measurements for liquid rocket engine performance code verification
NASA Technical Reports Server (NTRS)
Praharaj, Sarat C.; Palko, Richard L.
1986-01-01
The goal of the rocket engine performance code verification tests is to obtain Isp with an accuracy of 0.25% or less. This needs to be done during the sequence of four related tests (two reactive and two hot-gas simulation) to best utilize the loss separation technique recommended in this study. In addition to Isp, measurements of the input and output parameters for the codes are needed. This study has shown two things in regard to obtaining the Isp uncertainty within the 0.25% target. First, this target is generally not being realized at the present time, and second, the instrumentation and testing technology does exist to obtain this 0.25% uncertainty goal. However, achieving this goal will require carefully planned, designed, and conducted testing. In addition, the test-stand (or system) dynamics must be evaluated in the pre-test and post-test phases of the design of the experiment and data analysis, respectively, always keeping in mind that a 0.25% overall uncertainty in Isp is targeted. A table gives the maximum allowable uncertainty required for obtaining Isp with 0.25% uncertainty, the currently quoted instrument specification, and the present test uncertainty for each parameter. In general, it appears that measurement of the mass flow parameter within the required uncertainty may be the most difficult.
The structure of affective action representations: temporal binding of affective response codes.
Eder, Andreas B; Müsseler, Jochen; Hommel, Bernhard
2012-01-01
Two experiments examined the hypothesis that preparing an action with a specific affective connotation involves the binding of this action to an affective code reflecting this connotation. This integration into an action plan should lead to a temporary occupation of the affective code, which should impair the concurrent representation of affectively congruent events, such as the planning of another action with the same valence. This hypothesis was tested with a dual-task setup that required a speeded choice between approach- and avoidance-type lever movements after having planned and before having executed an evaluative button press. In line with the code-occupation hypothesis, slower lever movements were observed when the lever movement was affectively compatible with the prepared evaluative button press than when the two actions were affectively incompatible. Lever movements related to approach and avoidance and evaluative button presses thus seem to share a code that represents affective meaning. A model of affective action control that is based on the theory of event coding is discussed.
Temporal Code-Driven Stimulation: Definition and Application to Electric Fish Signaling
Lareo, Angel; Forlim, Caroline G.; Pinto, Reynaldo D.; Varona, Pablo; Rodriguez, Francisco de Borja
2016-01-01
Closed-loop activity-dependent stimulation is a powerful methodology to assess information processing in biological systems. In this context, the development of novel protocols, their implementation in bioinformatics toolboxes and their application to different description levels open up a wide range of possibilities in the study of biological systems. We developed a methodology for studying biological signals representing them as temporal sequences of binary events. A specific sequence of these events (code) is chosen to deliver a predefined stimulation in a closed-loop manner. The response to this code-driven stimulation can be used to characterize the system. This methodology was implemented in a real time toolbox and tested in the context of electric fish signaling. We show that while there are codes that evoke a response that cannot be distinguished from a control recording without stimulation, other codes evoke a characteristic distinct response. We also compare the code-driven response to open-loop stimulation. The discussed experiments validate the proposed methodology and the software toolbox. PMID:27766078
Temporal Code-Driven Stimulation: Definition and Application to Electric Fish Signaling.
Lareo, Angel; Forlim, Caroline G; Pinto, Reynaldo D; Varona, Pablo; Rodriguez, Francisco de Borja
2016-01-01
Closed-loop activity-dependent stimulation is a powerful methodology to assess information processing in biological systems. In this context, the development of novel protocols, their implementation in bioinformatics toolboxes and their application to different description levels open up a wide range of possibilities in the study of biological systems. We developed a methodology for studying biological signals representing them as temporal sequences of binary events. A specific sequence of these events (code) is chosen to deliver a predefined stimulation in a closed-loop manner. The response to this code-driven stimulation can be used to characterize the system. This methodology was implemented in a real time toolbox and tested in the context of electric fish signaling. We show that while there are codes that evoke a response that cannot be distinguished from a control recording without stimulation, other codes evoke a characteristic distinct response. We also compare the code-driven response to open-loop stimulation. The discussed experiments validate the proposed methodology and the software toolbox.
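A minimal sketch of the closed-loop idea, independent of the authors' real-time toolbox: the signal is binarized into one event per time bin, a sliding window is compared against the chosen code, and stimulation is triggered when the code is detected. The threshold, code word, and signal below are illustrative.

```python
import numpy as np

def binarize(signal, threshold):
    """One binary event per time bin: 1 if the sample crosses the threshold."""
    return (np.asarray(signal) > threshold).astype(int)

def code_driven_stimulation(events, code):
    """Return the bin indices at which the target code is completed."""
    code = list(code)
    hits = []
    for i in range(len(events) - len(code) + 1):
        if list(events[i:i + len(code)]) == code:
            hits.append(i + len(code) - 1)   # deliver the stimulus at the end of the pattern
    return hits

# Hypothetical discharge-like signal and target code word.
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 200)
events = binarize(signal, threshold=1.0)
print("stimulation bins:", code_driven_stimulation(events, code=[1, 0, 1, 1]))
```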
Host computer software specifications for a zero-g payload manhandling simulator
NASA Technical Reports Server (NTRS)
Wilson, S. W.
1986-01-01
The HP PASCAL source code was developed for the Mission Planning and Analysis Division (MPAD) of NASA/JSC, and takes the place of detailed flow charts defining the host computer software specifications for MANHANDLE, a digital/graphical simulator that can be used to analyze the dynamics of on-orbit (zero-g) payload manhandling operations. Input and output data for representative test cases are included.
SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.
Chu, Annie; Cui, Jenny; Dinov, Ivo D
2009-03-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (simple linear regression, multiple linear regression, one-way and two-way ANOVA). In addition, tests for sample comparisons were implemented: the t-test in the parametric category, and the Wilcoxon rank-sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis-test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), in the hope of contributing to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and general utilities that can be applied to various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most up-to-date information and newly added models.
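SOCR Analyses is a Java toolkit, but the same parametric and non-parametric comparisons it implements can be illustrated in a few lines of SciPy; the data below are made up.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
group_a = rng.normal(10.0, 2.0, 30)
group_b = rng.normal(11.0, 2.0, 30)

# Parametric two-sample comparison.
t_stat, t_p = stats.ttest_ind(group_a, group_b)

# Non-parametric counterparts mentioned in the abstract.
w_stat, w_p = stats.ranksums(group_a, group_b)   # Wilcoxon rank-sum
h_stat, h_p = stats.kruskal(group_a, group_b)    # Kruskal-Wallis

print(f"t-test p={t_p:.3f}, rank-sum p={w_p:.3f}, Kruskal-Wallis p={h_p:.3f}")
```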
Cracking Taste Codes by Tapping into Sensory Neuron Impulse Traffic
Frank, Marion E.; Lundy, Robert F.; Contreras, Robert J.
2008-01-01
Insights into the biological basis for mammalian taste quality coding began with electrophysiological recordings from "taste" nerves, and this technique continues to produce essential information today. Chorda tympani (geniculate ganglion) neurons, which are particularly involved in taste quality discrimination, are specialists or generalists. Specialists respond to stimuli characterized by a single taste quality as defined by behavioral cross-generalization in conditioned taste tests. Generalists respond to electrolytes that elicit multiple aversive qualities. Na+-salt (N) specialists in rodents and sweet-stimulus (S) specialists in multiple orders of mammals are well characterized. Specialists are associated with species' nutritional needs, and their activation is known to be malleable by internal physiological conditions and contaminated external caloric sources. S specialists, associated with the heterodimeric G-protein-coupled receptor T1R, and N specialists, associated with the epithelial sodium channel ENaC, are consistent with labeled-line coding from taste bud to afferent neuron. Yet S-specialist neurons and behavior are less specific than T1R2-3 in encompassing glutamate, and E generalist neurons are much less specific than a candidate sour receptor, a PKD TRP channel, in encompassing salts and bitter stimuli. Specialist labeled lines for nutrients and generalist patterns for aversive electrolytes may be transmitting taste information to the brain side by side. However, specific roles of generalists in taste quality coding may be resolved by selecting stimuli and stimulus levels found in natural situations. T2Rs, participating in reflexes via the glossopharyngeal nerve, became highly diversified in mammalian phylogenesis as they evolved to deal with dangerous substances within specific environmental niches. Establishing the information afferent neurons traffic to the brain about natural taste stimuli embedded in dynamic complex mixtures will ultimately "crack taste codes." PMID:18824076
Analysis of visual quality improvements provided by known tools for HDR content
NASA Astrophysics Data System (ADS)
Kim, Jaehwan; Alshina, Elena; Lee, JongSeok; Park, Youngo; Choi, Kwang Pyo
2016-09-01
In this paper, the visual quality of different solutions for high dynamic range (HDR) compression is analyzed using MPEG test content. We also simulate a method for efficient HDR compression that is based on the statistical properties of the signal. The method is compliant with the HEVC specification and is also easily compatible with alternative methods that might require HEVC specification changes. It was subjectively tested on commercial TVs and compared with alternative solutions for HDR coding. Subjective visual quality tests were performed on a commercial SUHD TV (Samsung JS9500) with peak luminance up to 1000 nits. The solution based on statistical properties shows improvements in both objective performance and visual quality compared with other HDR solutions, while remaining compatible with the HEVC specification.
Circular codes revisited: a statistical approach.
Gonzalez, D L; Giannerini, S; Rosa, R
2011-04-21
In 1996 Arquès and Michel [1996. A complementary circular code in the protein coding genes. J. Theor. Biol. 182, 45-58] discovered the existence of a common circular code in eukaryote and prokaryote genomes. Since then, circular code theory has provoked great interest and underwent a rapid development. In this paper we discuss some theoretical issues related to the synchronization properties of coding sequences and circular codes with particular emphasis on the problem of retrieval and maintenance of the reading frame. Motivated by the theoretical discussion, we adopt a rigorous statistical approach in order to try to answer different questions. First, we investigate the covering capability of the whole class of 216 self-complementary, C(3) maximal codes with respect to a large set of coding sequences. The results indicate that, on average, the code proposed by Arquès and Michel has the best covering capability but, still, there exists a great variability among sequences. Second, we focus on such code and explore the role played by the proportion of the bases by means of a hierarchy of permutation tests. The results show the existence of a sort of optimization mechanism such that coding sequences are tailored as to maximize or minimize the coverage of circular codes on specific reading frames. Such optimization clearly relates the function of circular codes with reading frame synchronization. Copyright © 2011 Elsevier Ltd. All rights reserved.
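A minimal sketch of the "covering" computation: count the fraction of codons, read in each of the three frames of a coding sequence, that belong to a given trinucleotide code. The code set and sequence below are placeholders; substitute the published 20-codon X0 code to reproduce the actual analysis.

```python
# Hypothetical trinucleotide code standing in for the Arquès-Michel X0 code.
CODE = {"AAC", "ATC", "GAC", "GTT", "CTG"}

def frame_coverage(sequence, code, frame):
    """Fraction of codons in reading frame 0, 1 or 2 that belong to the code."""
    codons = [sequence[i:i + 3] for i in range(frame, len(sequence) - 2, 3)]
    return sum(c in code for c in codons) / len(codons)

# Made-up coding sequence for illustration.
seq = "ATGAACATCGACGTTCTGAACATCTAA"
for frame in range(3):
    print(f"frame {frame}: coverage = {frame_coverage(seq, CODE, frame):.2f}")
```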
Witt, Jana; Elwyn, Glyn; Wood, Fiona; Rogers, Mark T; Menon, Usha; Brain, Kate
2014-11-01
To test whether the coping in deliberation (CODE) framework can be adapted to a specific preference-sensitive medical decision: risk-reducing bilateral salpingo-oophorectomy (RRSO) in women at increased risk of ovarian cancer. We performed a systematic literature search to identify issues important to women during deliberations about RRSO. Three focus groups with patients (most were pre-menopausal and untested for genetic mutations) and 11 interviews with health professionals were conducted to determine which issues mattered in the UK context. Data were used to adapt the generic CODE framework. The literature search yielded 49 relevant studies, which highlighted various issues and coping options important during deliberations, including mutation status, risks of surgery, family obligations, physician recommendation, peer support and reliable information sources. Consultations with UK stakeholders confirmed most of these factors as pertinent influences on deliberations. Questions in the generic framework were adapted to reflect the issues and coping options identified. The generic CODE framework was readily adapted to a specific preference-sensitive medical decision, showing that deliberations and coping are linked during deliberations about RRSO. Adapted versions of the CODE framework may be used to develop tailored decision support methods and materials in order to improve patient-centred care. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Statistical core design methodology using the VIPRE thermal-hydraulics code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lloyd, M.W.; Feltus, M.A.
1994-12-31
This Penn State Statistical Core Design Methodology (PSSCDM) is unique because it not only includes the EPRI correlation/test data standard deviation but also the computational uncertainty for the VIPRE code model and the new composite box design correlation. The resultant PSSCDM equation mimics the EPRI DNBR correlation results well, with an uncertainty of 0.0389. The combined uncertainty yields a new DNBR limit of 1.18 that will provide more plant operational flexibility. This methodology and its associated correlation and unique coefficients are for a very particular VIPRE model; thus, the correlation will be specifically linked with the lumped channel and subchannel layout. The results of this research and methodology, however, can be applied to plant-specific VIPRE models.
Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cipiti, Benjamin B.; Shoman, Nathan
The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program and specific model changes to enable more streamlined integration in the future. This report describes the model changes and plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.
7 CFR 1924.5 - Planning development work.
Code of Federal Regulations, 2014 CFR
2014-01-01
... cash to be furnished by the borrower, proceeds from cost sharing programs such as Agricultural...) Drawings, specifications, and estimates will fully describe the work. Technical data, tests, or engineering... building code. (i) Agricultural buildings that are not intended for human habitation are exempt from these...
7 CFR 1924.5 - Planning development work.
Code of Federal Regulations, 2013 CFR
2013-01-01
... cash to be furnished by the borrower, proceeds from cost sharing programs such as Agricultural...) Drawings, specifications, and estimates will fully describe the work. Technical data, tests, or engineering... building code. (i) Agricultural buildings that are not intended for human habitation are exempt from these...
7 CFR 1924.5 - Planning development work.
Code of Federal Regulations, 2012 CFR
2012-01-01
... cash to be furnished by the borrower, proceeds from cost sharing programs such as Agricultural...) Drawings, specifications, and estimates will fully describe the work. Technical data, tests, or engineering... building code. (i) Agricultural buildings that are not intended for human habitation are exempt from these...
21 CFR 177.1430 - Isobutylene-butene copolymers.
Code of Federal Regulations, 2013 CFR
2013-04-01
...) of this section. (b) Specifications: Isobutylene-butene copolymers Molecular weight (range) Viscosity...: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. (2) Viscosity. Viscosity shall be determined by ASTM method D445-74, “Test for Kinematic Viscosity of Transparent and...
21 CFR 177.1430 - Isobutylene-butene copolymers.
Code of Federal Regulations, 2012 CFR
2012-04-01
...) of this section. (b) Specifications: Isobutylene-butene copolymers Molecular weight (range) Viscosity...: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. (2) Viscosity. Viscosity shall be determined by ASTM method D445-74, “Test for Kinematic Viscosity of Transparent and...
21 CFR 177.1430 - Isobutylene-butene copolymers.
Code of Federal Regulations, 2011 CFR
2011-04-01
...) of this section. (b) Specifications: Isobutylene-butene copolymers Molecular weight (range) Viscosity...: http://www.archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. (2) Viscosity. Viscosity shall be determined by ASTM method D445-74, “Test for Kinematic Viscosity of Transparent and...
Qualification test and analysis report: Solar collectors
NASA Technical Reports Server (NTRS)
1978-01-01
Test results show that the Owens-Illinois Sunpak™ Model SEC 601 air-cooled collector meets the national standards and codes as defined in the Subsystem Performance Specification and Verification Plan of NASA/MSFC, dated October 28, 1976. The program calls for the development, fabrication, qualification and delivery of an air-cooled solar collector for solar heating, combined heating and cooling, and/or hot water systems.
A graphics-card implementation of Monte-Carlo simulations for cosmic-ray transport
NASA Astrophysics Data System (ADS)
Tautz, R. C.
2016-05-01
A graphics card implementation of a test-particle simulation code is presented that is based on the CUDA extension of the C/C++ programming language. The original CPU version has been developed for the calculation of cosmic-ray diffusion coefficients in artificial Kolmogorov-type turbulence. In the new implementation, the magnetic turbulence generation, which is the most time-consuming part, is separated from the particle transport and is performed on a graphics card. In this article, the modification of the basic approach of integrating test particle trajectories to employ the SIMD (single instruction, multiple data) model is presented and verified. The efficiency of the new code is tested and several language-specific accelerating factors are discussed. For the example of isotropic magnetostatic turbulence, sample results are shown and a comparison to the results of the CPU implementation is performed.
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Harvey, J. M.; Katz, I.
1977-01-01
The NASCAP (NASA Charging Analyzer Program) code simulates the charging process for a complex object in either tenuous plasma or ground test environment. Detailed specifications needed to run the code are presented. The object definition section, OBJDEF, allows the test object to be easily defined in the cubic mesh. The test object is composed of conducting sections which may be wholly or partially covered with thin dielectric coatings. The potential section, POTENT, obtains the electrostatic potential in the space surrounding the object. It uses the conjugate gradient method to solve the finite element formulation of Poisson's equation. The CHARGE section of NASCAP treats charge redistribution among the surface cells of the object as well as charging through radiation bombardment. NASCAP has facilities for extensive graphical output, including several types of object display plots, potential contour plots, space charge density contour plots, current density plots, and particle trajectory plots.
Experiences with Cray multi-tasking
NASA Technical Reports Server (NTRS)
Miya, E. N.
1985-01-01
The issues involved in modifying an existing code for multitasking are explored. They include Cray extensions to FORTRAN, an examination of the application code under study, designing workable modifications, specific code modifications to the VAX and Cray versions, performance, and efficiency results. The finished product is a faster, fully synchronous, parallel version of the original program. A production program is partitioned by hand to run on two CPUs. Loop splitting multitasks three key subroutines. Simply dividing subroutine data and control structure down the middle of a subroutine is not safe; simple division produces results that are inconsistent with uniprocessor runs. The safest way to partition the code is to transfer one block of loops at a time and check the results of each on a test case. Other issues include debugging and performance. Task startup and maintenance (e.g., synchronization) are potentially expensive.
A possible role for a paralemniscal auditory pathway in the coding of slow temporal information
Abrams, Daniel A.; Nicol, Trent; Zecker, Steven; Kraus, Nina
2010-01-01
Low-frequency temporal information present in speech is critical for normal perception; however, the neural mechanism underlying the differentiation of slow rates in acoustic signals is not known. Data from the rat trigeminal system suggest that the paralemniscal pathway may be specifically tuned to code low-frequency temporal information. We tested whether this phenomenon occurs in the auditory system by measuring the representation of temporal rate in lemniscal and paralemniscal auditory thalamus and cortex in guinea pig. Similar to the trigeminal system, responses measured in auditory thalamus indicate that slow rates are differentially represented in a paralemniscal pathway. In cortex, both lemniscal and paralemniscal neurons indicated sensitivity to slow rates. We speculate that a paralemniscal pathway in the auditory system may be specifically tuned to code low-frequency temporal information present in acoustic signals. These data suggest that somatosensory and auditory modalities have parallel sub-cortical pathways that separately process slow rates and the spatial representation of the sensory periphery. PMID:21094680
High-Level Prediction Signals in a Low-Level Area of the Macaque Face-Processing Hierarchy.
Schwiedrzik, Caspar M; Freiwald, Winrich A
2017-09-27
Theories like predictive coding propose that lower-order brain areas compare their inputs to predictions derived from higher-order representations and signal their deviation as a prediction error. Here, we investigate whether the macaque face-processing system, a three-level hierarchy in the ventral stream, employs such a coding strategy. We show that after statistical learning of specific face sequences, the lower-level face area ML computes the deviation of actual from predicted stimuli. But these signals do not reflect the tuning characteristic of ML. Rather, they exhibit identity specificity and view invariance, the tuning properties of higher-level face areas AL and AM. Thus, learning appears to endow lower-level areas with the capability to test predictions at a higher level of abstraction than what is afforded by the feedforward sweep. These results provide evidence for computational architectures like predictive coding and suggest a new quality of functional organization of information-processing hierarchies beyond pure feedforward schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
Novel inter and intra prediction tools under consideration for the emerging AV1 video codec
NASA Astrophysics Data System (ADS)
Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil
2017-09-01
Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second generation codec released by the WebM project, VP9, is currently served by YouTube, and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next edition codec AV1, in a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.
A Software Development Platform for Wearable Medical Applications.
Zhang, Ruikai; Lin, Wei
2015-10-01
Wearable medical devices have become a leading trend in the healthcare industry. Microcontrollers are computers on a chip with sufficient processing power and are the preferred embedded computing units in those devices. We have developed a software platform specifically for the design of wearable medical applications with a small code footprint on the microcontrollers. It is supported by the open source real-time operating system FreeRTOS and supplemented with a set of standard APIs for the architecture-specific hardware interfaces on the microcontrollers for data acquisition and wireless communication. We modified the tick counter routine in FreeRTOS to include a real-time soft clock. When combined with the multitasking features in FreeRTOS, the platform offers quick development of wearable applications and easy porting of the application code to different microprocessors. Test results have demonstrated that application software developed using this platform is highly efficient in CPU usage while maintaining a small code footprint to accommodate the limited memory space in microcontrollers.
CoNNeCT Baseband Processor Module Boot Code SoftWare (BCSW)
NASA Technical Reports Server (NTRS)
Yamamoto, Clifford K.; Orozco, David S.; Byrne, D. J.; Allen, Steven J.; Sahasrabudhe, Adit; Lang, Minh
2012-01-01
This software provides essential startup and initialization routines for the CoNNeCT baseband processor module (BPM) hardware upon power-up. A command and data handling (C&DH) interface is provided via 1553 and diagnostic serial interfaces to invoke operational, reconfiguration, and test commands within the code. The BCSW has features unique to the hardware it is responsible for managing. In this case, the CoNNeCT BPM is configured with an updated CPU (Atmel AT697 SPARC processor) and a unique set of memory and I/O peripherals that require customized software to operate. These features include configuration of new AT697 registers, interfacing to a new HouseKeeper with a flash controller interface, a new dual Xilinx configuration/scrub interface, and an updated 1553 remote terminal (RT) core. The BCSW is intended to provide a "safe" mode for the BPM when initially powered on or when an unexpected trap occurs, causing the processor to reset. The BCSW allows the 1553 bus controller in the spacecraft or payload controller to operate the BPM over 1553 to upload code; upload Xilinx bit files; perform rudimentary tests; read, write, and copy the non-volatile flash memory; and configure the Xilinx interface. Commands also exist over 1553 to cause the CPU to jump or call a specified address to begin execution of user-supplied code. This may be in the form of a real-time operating system, test routine, or specific application code to run on the BPM.
Generating Safety-Critical PLC Code From a High-Level Application Software Specification
NASA Technical Reports Server (NTRS)
2008-01-01
The benefits of automatic application code generation are widely accepted within the software engineering community. These benefits include raised abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at Kennedy Space Center recognized the need for PLC code generation while developing the new ground checkout and launch processing system, called the Launch Control System (LCS). Engineers developed a process and a prototype software tool that automatically translates a high-level representation or specification of application software into ladder logic that executes on a PLC. All the computer hardware in the LCS is planned to be commercial off the shelf (COTS), including industrial controllers or PLCs that are connected to the sensors and end items out in the field. Most of the software in LCS is also planned to be COTS, with only small adapter software modules that must be developed in order to interface between the various COTS software products. A domain-specific language (DSL) is a programming language designed to perform tasks and to solve problems in a particular domain, such as ground processing of launch vehicles. The LCS engineers created a DSL for developing test sequences of ground checkout and launch operations of future launch vehicle and spacecraft elements, and they are developing a tabular specification format that uses the DSL keywords and functions familiar to the ground and flight system users. The tabular specification format, or tabular spec, allows most ground and flight system users to document how the application software is intended to function and requires little or no software programming knowledge or experience. A small sample from a prototype tabular spec application is shown.
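A hedged illustration of the translation step, not the LCS prototype itself: each row of a hypothetical tabular spec names a condition and a command, and a small generator emits structured-text-style PLC logic (the real tool emits ladder logic).

```python
# Hypothetical tabular spec rows: (step name, measurement, comparison, setpoint, command).
TABULAR_SPEC = [
    ("open_vent",  "TANK_PRESS", ">", 135.0, "VENT_VALVE := TRUE"),
    ("close_vent", "TANK_PRESS", "<", 120.0, "VENT_VALVE := FALSE"),
]

def generate_plc_logic(rows):
    """Translate spec rows into IEC 61131-3 structured-text-like rungs (illustrative only)."""
    lines = []
    for step, meas, op, setpoint, command in rows:
        lines.append(f"(* step: {step} *)")
        lines.append(f"IF {meas} {op} {setpoint} THEN")
        lines.append(f"    {command};")
        lines.append("END_IF;")
    return "\n".join(lines)

print(generate_plc_logic(TABULAR_SPEC))
```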
Introduction to study and simulation of low rate video coding schemes
NASA Technical Reports Server (NTRS)
1992-01-01
During this period, simulators for the various HDTV systems proposed to the FCC were developed. These simulators will be tested using test sequences from the MPEG committee. The results will be extrapolated to HDTV video sequences. Currently, the simulator for the compression aspects of the Advanced Digital Television (ADTV) system has been completed. Other HDTV proposals are at various stages of development. A brief overview of the ADTV system is given. Some coding results obtained using the simulator are discussed. These results are compared to those obtained using the CCITT H.261 standard. The results are evaluated in the context of the CCSDS specifications, and some suggestions are made as to how the ADTV system could be implemented in the NASA network.
40 CFR 205.169 - Labeling requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... standard) dBA for the following motorcycles: (list of model specific codes). Installation of this exhaust... designed and marketed for motorcycles, and tested in accordance with § 205.168 as a constituent of a complete exhaust system comprising non-original equipment components (other than itself), as provided for...
40 CFR 205.169 - Labeling requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
... standard) dBA for the following motorcycles: (list of model specific codes). Installation of this exhaust... designed and marketed for motorcycles, and tested in accordance with § 205.168 as a constituent of a complete exhaust system comprising non-original equipment components (other than itself), as provided for...
14 CFR 91.1019 - Conducting tests and inspections.
Code of Federal Regulations, 2010 CFR
2010-01-01
... program manager under this subpart is complying with title 49 of the United States Code, applicable regulations, and the program manager's management specifications. (b) The program manager must— (1) Make available to the Administrator at the program manager's principal base of operations, or at a place approved...
Code of Federal Regulations, 2010 CFR
2010-10-01
... Specifications for Containers for Motor Vehicle Transportation § 178.345-14 Marking. (a) General. The manufacturer shall certify that each cargo tank motor vehicle has been designed, constructed and tested in....348) cargo tank requirements and, when applicable, with Section VIII of the ASME Code (IBR, see § 171...
Billing code algorithms to identify cases of peripheral artery disease from administrative data
Fan, Jin; Arruda-Olson, Adelaide M; Leibson, Cynthia L; Smith, Carin; Liu, Guanghui; Bailey, Kent R; Kullo, Iftikhar J
2013-01-01
Objective To construct and validate billing code algorithms for identifying patients with peripheral arterial disease (PAD). Methods We extracted all encounters and line item details including PAD-related billing codes at Mayo Clinic Rochester, Minnesota, between July 1, 1997 and June 30, 2008; 22 712 patients evaluated in the vascular laboratory were divided into training and validation sets. Multiple logistic regression analysis was used to create an integer code score from the training dataset, and this was tested in the validation set. We applied a model-based code algorithm to patients evaluated in the vascular laboratory and compared this with a simpler algorithm (presence of at least one of the ICD-9 PAD codes 440.20–440.29). We also applied both algorithms to a community-based sample (n=4420), followed by a manual review. Results The logistic regression model performed well in both training and validation datasets (c statistic=0.91). In patients evaluated in the vascular laboratory, the model-based code algorithm provided better negative predictive value. The simpler algorithm was reasonably accurate for identification of PAD status, with lesser sensitivity and greater specificity. In the community-based sample, the sensitivity (38.7% vs 68.0%) of the simpler algorithm was much lower, whereas the specificity (92.0% vs 87.6%) was higher than the model-based algorithm. Conclusions A model-based billing code algorithm had reasonable accuracy in identifying PAD cases from the community, and in patients referred to the non-invasive vascular laboratory. The simpler algorithm had reasonable accuracy for identification of PAD in patients referred to the vascular laboratory but was significantly less sensitive in a community-based sample. PMID:24166724
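The simpler of the two algorithms, presence of at least one ICD-9 code in the 440.20-440.29 range, can be written in a few lines; the claim records below are fabricated, and the model-based algorithm would instead sum weighted contributions from several codes into an integer score.

```python
PAD_ICD9_PREFIX = "440.2"   # covers ICD-9 codes 440.20 through 440.29

def has_pad_simple(claim_codes):
    """Simple algorithm: at least one ICD-9 PAD code 440.20-440.29 on any claim."""
    return any(code.startswith(PAD_ICD9_PREFIX) for code in claim_codes)

# Fabricated patient claim histories keyed by patient id.
patients = {
    "p001": ["401.9", "440.21", "250.00"],
    "p002": ["414.01", "272.4"],
}
for pid, codes in patients.items():
    print(pid, "PAD" if has_pad_simple(codes) else "no PAD")
```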
Inspecting Engineering Samples
2017-12-08
Dr. Doug Rabin (Code 671) and PI La Vida Cooper (Code 564) inspect engineering samples of the HAS-2 imager, which will be tested and read out using a custom ASIC with a 16-bit ADC (analog-to-digital converter) and CDS (correlated double sampling) circuit designed by the Code 564 ASIC group as part of an FY10 IRAD. The purpose of the IRAD was to develop a high-resolution digitizer for Heliophysics applications such as imaging. Future goals for the collaboration include characterization testing and eventually a sounding rocket flight of the integrated system. *ASIC = Application Specific Integrated Circuit. NASA/GSFC/Chris Gunn
NASA Lewis Stirling engine computer code evaluation
NASA Technical Reports Server (NTRS)
Sullivan, Timothy J.
1989-01-01
In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Across all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvements to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuhn, J K; von Fuchs, G F; Zob, A P
1980-05-01
Two water tank component simulation models have been selected and upgraded. These models are called the CSU Model and the Extended SOLSYS Model. The models have been standardized and links have been provided for operation in the TRNSYS simulation program. The models are described in analytical terms as well as in computer code. Specific water tank tests were performed for the purpose of model validation. Agreement between model data and test data is excellent. A description of the limitations has also been included. Streamlining results and criteria for the reduction of computer time have also been shown for both water tank computer models. Computer codes for the models and instructions for operating these models in TRNSYS have also been included, making the models readily available for DOE and industry use. Rock bed component simulation models have been reviewed and a model selected and upgraded. This model is a logical extension of the Mumma-Marvin model. Specific rock bed tests have been performed for the purpose of validation. Data have been reviewed for consistency. Details of the test results concerned with rock characteristics and pressure drop through the bed have been explored and are reported.
NASA Technical Reports Server (NTRS)
Rarig, P. L.
1980-01-01
A program to calculate upwelling infrared radiation was modified to operate efficiently on the STAR-100. The modified software processes specific test cases significantly faster than the initial STAR-100 code. For example, a midlatitude summer atmospheric model is executed in less than 2% of the time originally required on the STAR-100. Furthermore, the optimized program performs extra operations to save the calculated absorption coefficients. Some of the advantages and pitfalls of virtual memory and vector processing are discussed, along with strategies used to avoid loss of accuracy and computing power. Results from the vectorized code, in terms of speed, cost, and relative error with respect to serial code solutions, are encouraging.
Global magnetosphere simulations using constrained-transport Hall-MHD with CWENO reconstruction
NASA Astrophysics Data System (ADS)
Lin, L.; Germaschewski, K.; Maynard, K. M.; Abbott, S.; Bhattacharjee, A.; Raeder, J.
2013-12-01
We present a new CWENO (Centrally-Weighted Essentially Non-Oscillatory) reconstruction-based MHD solver for the OpenGGCM global magnetosphere code. The solver was built using libMRC, a library for creating efficient parallel PDE solvers on structured grids. The use of libMRC gives us access to its core functionality: an automated code generation framework that takes a user-provided PDE right-hand side in symbolic form and generates efficient, architecture-specific parallel code. libMRC also supports block-structured adaptive mesh refinement and implicit time stepping through integration with the PETSc library. We validate the new CWENO Hall-MHD solver against existing solvers both on standard test problems and in global magnetosphere simulations.
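The symbolic-to-code workflow described above can be illustrated with a minimal sketch; SymPy's C-code printer stands in here for the automated generator, and the toy advection right-hand side is an assumed example, not libMRC's actual interface.

# Minimal sketch of generating compilable code from a symbolic PDE right-hand side.
# This is NOT libMRC; SymPy's ccode stands in for the generator described above.
import sympy as sp

# Assumed symbols for a toy advection right-hand side.
ux, uy, drho_dx, drho_dy = sp.symbols('u_x u_y drho_dx drho_dy')
rhs = -(ux * drho_dx + uy * drho_dy)

# Emit a C assignment for the symbolic expression, as a code generator might.
print(sp.ccode(rhs, assign_to='rhs_rho'))
# prints something like: rhs_rho = -drho_dx*u_x - drho_dy*u_y;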
Summary of Documentation for DYNA3D-ParaDyn's Software Quality Assurance Regression Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zywicz, Edward
The Software Quality Assurance (SQA) regression test suite for DYNA3D (Zywicz and Lin, 2015) and ParaDyn (DeGroot, et al., 2015) currently contains approximately 600 problems divided into 21 suites, and is a required component of ParaDyn’s SQA plan (Ferencz and Oliver, 2013). The regression suite allows developers to ensure that software modifications do not unintentionally alter the code response. The entire regression suite is run prior to permanently incorporating any software modification or addition. When code modifications alter test problem results, the specific cause must be determined and fully understood before the software changes and revised test answers can be incorporated. The regression suite is executed on LLNL platforms using a Python script and an associated data file. The user specifies the DYNA3D or ParaDyn executable, number of processors to use, test problems to run, and other options to the script. The data file details how each problem and its answer extraction scripts are executed. For each problem in the regression suite there exists an input deck, an eight-processor partition file, an answer file, and various extraction scripts. These scripts assemble a temporary answer file in a specific format from the simulation results. The temporary and stored answer files are compared to a specific level of numerical precision, and when differences are detected the test problem is flagged as failed. Presently, numerical results are stored and compared to 16 digits. At this accuracy level different processor types, compilers, number of partitions, etc. impact the results to various degrees. Thus, for consistency purposes the regression suite is run with ParaDyn using 8 processors on machines with a specific processor type (currently the Intel Xeon E5530 processor). For non-parallel regression problems, i.e., the two XFEM problems, DYNA3D is used instead. When environments or platforms change, executables using the current source code and the new resource are created and the regression suite is run. If differences in answers arise, the new answers are retained provided that the differences are inconsequential. This bootstrap approach allows the test suite answers to evolve in a controlled manner with a high level of confidence. Developers also run the entire regression suite with (serial) DYNA3D. While these results normally differ from the stored (parallel) answers, abnormal termination or wildly different values are strong indicators of potential issues.
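The answer-file comparison step described above (temporary versus stored answers, matched to a fixed number of digits) can be sketched as follows; the whitespace-separated file layout and the rounding-based comparison are simplifying assumptions, not the actual LLNL script.

# Illustrative sketch of a regression-answer comparison like the one described above.
# Assumes one whitespace-separated floating-point value stream per answer file.
def answers_match(stored_path, temp_path, sig_digits=16):
    """Compare two answer files value-by-value to sig_digits significant digits."""
    with open(stored_path) as f_stored, open(temp_path) as f_temp:
        stored = [float(x) for x in f_stored.read().split()]
        temp = [float(x) for x in f_temp.read().split()]
    if len(stored) != len(temp):
        return False
    fmt = "{{:.{}e}}".format(sig_digits - 1)
    # Round both values to the requested number of significant digits before comparing.
    return all(fmt.format(a) == fmt.format(b) for a, b in zip(stored, temp))

# A test problem would be flagged as failed when answers_match(...) returns False.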
7 CFR 801.12 - Design requirements incorporated by reference.
Code of Federal Regulations, 2010 CFR
2010-01-01
... reference: General Code (1.10.) G-S.5.5. Money Values, Mathematical Agreement G-T.1. Acceptance Tolerances G.... Tolerance Values T.3. For Test Weight Per Bushel Indications or Recorded Representations UR.3.2. Other... Standards and Technology's (NIST) Handbook 44, “Specifications, Tolerances, and Other Technical Requirements...
High density arrays of micromirrors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folta, J. M.; Decker, J. Y.; Kolman, J.
We established and achieved our goal to (1) fabricate and evaluate test structures based on the micromirror design optimized for maskless lithography applications, (2) perform system analysis and code development for the maskless lithography concept, and (3) identify specifications for micromirror arrays (MMAs) for LLNL's adaptive optics (AO) applications and conceptualize new devices.
An att site-based recombination reporter system for genome engineering and synthetic DNA assembly.
Bland, Michael J; Ducos-Galand, Magaly; Val, Marie-Eve; Mazel, Didier
2017-07-14
Direct manipulation of the genome is a widespread technique for genetic studies and synthetic biology applications. The tyrosine and serine site-specific recombination systems of bacteriophages HK022 and ΦC31 are widely used for stable directional exchange and relocation of DNA sequences, making them valuable tools in these contexts. We have developed site-specific recombination tools that allow the direct selection of recombination events by embedding the attB site from each system within the β-lactamase resistance coding sequence (bla). The HK and ΦC31 tools were developed by placing the attB sites from each system into the signal peptide cleavage site coding sequence of bla. All possible open reading frames (ORFs) were inserted and tested for recombination efficiency and bla activity. Efficient recombination was observed for all tested ORFs (3 for HK, 6 for ΦC31) as shown through a cointegrate formation assay. The bla gene with the embedded attB site was functional for eight of the nine constructs tested. The HK/ΦC31 att-bla system offers a simple way to directly select recombination events, thus enhancing the use of site-specific recombination systems for carrying out precise, large-scale DNA manipulation, and adding useful tools to the genetics toolbox. We further show the power and flexibility of bla to be used as a reporter for recombination.
Multiloop integral system test (MIST): Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gloudemans, J.R.
1991-04-01
The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock and Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST program is reported in 11 volumes. Volumes 2 through 8 pertain to groups of Phase 3 tests by type; Volume 9 presents inter-group comparisons; Volume 10 provides comparisons between the RELAP5/MOD2 calculations and MIST observations, and Volume 11 (with addendum) presents the later Phase 4 tests. This is Volume 1 of the MIST final report, a summary of the entire MIST program. Major topics include Test Advisory Group (TAG) issues, facility scaling and design, test matrix, observations, comparison of RELAP5 calculations to MIST observations, and MIST versus the TAG issues. MIST generated consistent integral-system data covering a wide range of transient interactions. MIST provided insight into integral system behavior and assisted the code effort. The MIST observations addressed each of the TAG issues. 11 refs., 29 figs., 9 tabs.
Main steam line break accident simulation of APR1400 using the model of ATLAS facility
NASA Astrophysics Data System (ADS)
Ekariansyah, A. S.; Deswandri; Sunaryo, Geni R.
2018-02-01
A main steam line break simulation for the APR1400, an advanced PWR design, has been performed using the RELAP5 code. The simulation was conducted in a model of the thermal-hydraulic test facility called ATLAS, which represents a scaled-down facility of the APR1400 design. The main steam line break event is described in an open-access safety report document, whose initial conditions and assumptions for the analysis were used in performing the simulation and analysis of the selected parameters. The objective of this work was to conduct a benchmark activity by comparing the simulation results of the CESEC-III code, a conservative-approach code, with the results of RELAP5 as a best-estimate code. Based on the simulation results, a general similarity in the behavior of the selected parameters was observed between the two codes. However, the degree of accuracy still needs further research and analysis through comparison with other best-estimate codes. Uncertainties arising from the ATLAS model should be minimized by taking into account more specific data in developing the APR1400 model.
Great Expectations: Is there Evidence for Predictive Coding in Auditory Cortex?
Heilbron, Micha; Chait, Maria
2017-08-04
Predictive coding is possibly one of the most influential, comprehensive, and controversial theories of neural function. While proponents praise its explanatory potential, critics object that key tenets of the theory are untested or even untestable. The present article critically examines existing evidence for predictive coding in the auditory modality. Specifically, we identify five key assumptions of the theory and evaluate each in the light of animal, human and modeling studies of auditory pattern processing. For the first two assumptions - that neural responses are shaped by expectations and that these expectations are hierarchically organized - animal and human studies provide compelling evidence. The anticipatory, predictive nature of these expectations also enjoys empirical support, especially from studies on unexpected stimulus omission. However, for the existence of separate error and prediction neurons, a key assumption of the theory, evidence is lacking. More work exists on the proposed oscillatory signatures of predictive coding, and on the relation between attention and precision. However, results on these latter two assumptions are mixed or contradictory. Looking to the future, more collaboration between human and animal studies, aided by model-based analyses, will be needed to test specific assumptions and implementations of predictive coding - and, as such, help determine whether this popular grand theory can fulfill its expectations. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Vatne, Torun M; Finset, Arnstein; Ørnes, Knut; Ruland, Cornelia M
2010-09-01
Adult patients present concerns as defined in the Verona Coding Definitions of Emotional Sequences (VR-CoDES), but we do not know how children express their concerns during medical consultations. This study aimed to evaluate the applicability of VR-CoDES to pediatric oncology consultations. Twenty-eight pediatric consultations were coded with the VR-CoDES, and the material was also qualitatively analyzed for descriptive purposes. Five consultations were randomly selected for reliability testing, and descriptive statistics were computed. Perfect inter-rater reliability for concerns and moderate reliability for cues were obtained. Cues and/or concerns were present in over half of the consultations. Cues were more frequent than concerns, with the majority of cues being verbal hints to hidden concerns or non-verbal cues. Intensity of expressions, limitations in vocabulary, commonality of statements, and complexity of the setting complicated the use of VR-CoDES. Child-specific cues were observed: use of the imperative, cues about past experiences, and use of onomatopoeia. Children with cancer express concerns during medical consultations. VR-CoDES is a reliable tool for coding concerns in pediatric data sets. For future applications in pediatric settings, an appendix should be developed to incorporate the child-specific traits. Copyright (c) 2010 Elsevier Ireland Ltd. All rights reserved.
HDTS 2017.0 Testing and verification document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteside, Tad S.
2017-08-01
This report is a continuation of the series of Hunter Dose Tracking System (HDTS) Quality Assurance documents including (Foley and Powell, 2010; Dixon, 2012). In this report we have created a suite of automated test cases and a system to analyze the results of those tests as well as documented the methodology to ensure the field system performs within specifications. The software test cases cover all of the functions and interactions of functions that are practical to test. With the developed framework, if software defects are discovered, it will be easy to create one or more test cases to reproduce the defect and ensure that code changes correct the defect. These tests confirm HDTS version 2017.0 performs according to its specifications and documentation and that its performance meets the needs of its users at the Savannah River Site.
Comparison of theoretical and flight-measured local flow aerodynamics for a low-aspect-ratio fin
NASA Technical Reports Server (NTRS)
Johnson, J. B.; Sandlin, D. R.
1984-01-01
Flight test and theoretical aerodynamic data were obtained for a flight test fixture mounted on the underside of an F-104G aircraft. The theoretical data were generated using two codes: a two-dimensional transonic code called Code H, and a three-dimensional subsonic and supersonic code called wing-body. Pressure distributions generated by the codes for the flight test fixture, as well as boundary-layer displacement thickness generated by the two-dimensional code, were compared to the flight test data. The two-dimensional code pressure distributions compared well except at the minimum pressure point and trailing edge. Shock locations compared well except at high transonic speeds. The three-dimensional code pressure distributions compared well except at the trailing edge of the flight test fixture. The two-dimensional code does not predict the displacement thickness of the flight test fixture well.
The effect of multiple internal representations on context-rich instruction
NASA Astrophysics Data System (ADS)
Lasry, Nathaniel; Aulls, Mark W.
2007-11-01
We discuss n-coding, a theoretical model of multiple internal mental representations. The n-coding construct is developed from a review of cognitive and imaging data that demonstrates the independence of information processed along different modalities, such as verbal, visual, kinesthetic, logico-mathematical, and social modalities. A study testing the effectiveness of the n-coding construct in classrooms is presented. Four sections differing in the level of n-coding opportunities were compared. Besides a traditional-instruction section used as a control group, each of the remaining three sections was given context-rich problems, which differed by the level of n-coding opportunities designed into their laboratory environment. To measure the effectiveness of the construct, problem-solving skills were assessed as conceptual learning using the Force Concept Inventory. We also developed several new measures that take students' confidence in concepts into account. Our results show that the n-coding construct is useful in designing context-rich environments and can be used to increase learning gains in problem solving, conceptual knowledge, and concept confidence. Specifically, when using props in designing context-rich problems, we find n-coding to be a useful construct in guiding which additional dimensions need to be attended to.
Photovoltaic module certification and laboratory accreditation criteria development
NASA Astrophysics Data System (ADS)
Osterwald, Carl R.; Zerlaut, Gene; Hammond, Robert; D'Aiello, Robert
1996-01-01
This paper overviews a model product certification and test laboratory accreditation program for photovoltaic (PV) modules that was recently developed by the National Renewable Energy Laboratory and Arizona State University. The specific objective of this project was to produce a document that details the equipment, facilities, quality assurance procedures, and technical expertise an accredited laboratory needs for performance and qualification testing of PV modules, along with the specific tests needed for a module design to be certified. The document was developed in conjunction with a criteria development committee consisting of representatives from 30 U.S. PV manufacturers, end users, standards and codes organizations, and testing laboratories. The intent is to lay the groundwork for a future U.S. PV certification and accreditation program that will be beneficial to the PV industry as a whole.
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G.; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J.; Arruda-Olson, Adelaide M.
2016-01-01
Objective Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm to billing code algorithms, using ankle-brachial index (ABI) test results as the gold standard. Methods We compared the performance of the NLP algorithm to 1) results of gold standard ABI; 2) previously validated algorithms based on relevant ICD-9 diagnostic codes (simple model) and 3) a combination of ICD-9 codes with procedural codes (full model). A dataset of 1,569 PAD patients and controls was randomly divided into training (n= 935) and testing (n= 634) subsets. Results We iteratively refined the NLP algorithm in the training set including narrative note sections, note types and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP: 91.8%, full model: 81.8%, simple model: 83%, P<.001), PPV (NLP: 92.9%, full model: 74.3%, simple model: 79.9%, P<.001), and specificity (NLP: 92.5%, full model: 64.2%, simple model: 75.9%, P<.001). Conclusions A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. PMID:28189359
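The performance figures reported above (accuracy, PPV, specificity against the ABI gold standard) follow from a standard 2x2 confusion matrix; a small sketch of that bookkeeping, with made-up counts, is shown below.

# Standard 2x2 confusion-matrix metrics of the kind reported above.
def test_characteristics(tp, fp, fn, tn):
    """Accuracy, PPV, sensitivity and specificity from a 2x2 confusion matrix."""
    return {
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "ppv": tp / (tp + fp),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Made-up counts for illustration only; these are not the study's data.
print(test_characteristics(tp=300, fp=23, fn=27, tn=284))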
Unsteady Flow Interactions Between the LH2 Feed Line and SSME LPFP Inducer
NASA Technical Reports Server (NTRS)
Dorney, Dan; Griffin, Lisa; Marcu, Bogdan; Williams, Morgan
2006-01-01
An extensive computational effort has been performed in order to investigate the nature of unsteady flow in the fuel line supplying the three Space Shuttle Main Engines during flight. Evidence of high cycle fatigue (HCF) in the flow liner one diameter upstream of the Low Pressure Fuel Pump inducer has been observed in several locations. The analysis presented in this report has the objective of determining the driving mechanisms inducing HCF and the associated fluid flow phenomena. The simulations have been performed using two different computational codes, the NASA MSFC PHANTOM code and the Pratt and Whitney Rocketdyne ENIGMA code. The fuel flow through the flow liner and the pump inducer have been modeled in full three-dimensional geometry, and the results of the computations compared with test data taken during hot fire tests at NASA Stennis Space Center, and cold-flow water flow test data obtained at NASA MSFC. The numerical results indicate that unsteady pressure fluctuations at specific frequencies develop in the duct at the flow-liner location. Detailed frequency analysis of the flow disturbances is presented. The unsteadiness is believed to be an important source for fluctuating pressures generating high cycle fatigue.
Fluoridation: a violation of medical ethics and human rights.
Cross, Douglas W; Carton, Robert J
2003-01-01
Silicofluorides, widely used in water fluoridation, are unlicensed medicinal substances, administered to large populations without informed consent or supervision by a qualified medical practitioner. Fluoridation fails the test of reliability and specificity, and, lacking toxicity testing of silicofluorides, constitutes unlawful medical research. It is banned in most of Europe; European Union human rights legislation makes it illegal. Silicofluorides have never been submitted to the U.S. FDA for approval as medicines. The ethical validity of fluoridation policy does not stand up to scrutiny relative to the Nuremberg Code and other codes of medical ethics, including the Council of Europe's Biomedical Convention of 1999. The police power of the State has been used in the United States to override health concerns, with the support of the courts, which have given deference to health authorities.
Metasurfaced Reverberation Chamber.
Sun, Hengyi; Li, Zhuo; Gu, Changqing; Xu, Qian; Chen, Xinlei; Sun, Yunhe; Lu, Shengchen; Martin, Ferran
2018-01-25
The concept of a metasurfaced reverberation chamber (RC) is introduced in this paper. It is shown that by coating the chamber wall with a rotating 1-bit random coding metasurface, it is possible to enlarge the test zone of the RC while maintaining field uniformity as good as that in a traditional RC with mechanical stirrers. A 1-bit random coding diffusion metasurface is designed to obtain all-direction backscattering under normal incidence. Three specific cases are studied for comparison, including a (traditional) mechanical stirrer RC, a mechanical stirrer RC with a fixed diffusion metasurface, and an RC with a rotating diffusion metasurface. Simulation results show that the compact rotating diffusion metasurface can act as a stirrer with good stirring efficiency. By using such a rotating diffusion metasurface, the test region of the RC can be greatly extended.
43 CFR 11.64 - Injury determination phase-testing and sampling methods.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...
43 CFR 11.64 - Injury determination phase-testing and sampling methods.
Code of Federal Regulations, 2014 CFR
2014-10-01
.... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...
43 CFR 11.64 - Injury determination phase-testing and sampling methods.
Code of Federal Regulations, 2013 CFR
2013-10-01
.... In developing these objectives, the availability of information from response actions relating to the...), test cases proving the code works, and any alteration of previously documented code made to adapt the... computer code (if any), test cases proving the code works, and any alteration of previously documented code...
Proceedings of the First NASA Formal Methods Symposium
NASA Technical Reports Server (NTRS)
Denney, Ewen (Editor); Giannakopoulou, Dimitra (Editor); Pasareanu, Corina S. (Editor)
2009-01-01
Topics covered include: Model Checking - My 27-Year Quest to Overcome the State Explosion Problem; Applying Formal Methods to NASA Projects: Transition from Research to Practice; TLA+: Whence, Wherefore, and Whither; Formal Methods Applications in Air Transportation; Theorem Proving in Intel Hardware Design; Building a Formal Model of a Human-Interactive System: Insights into the Integration of Formal Methods and Human Factors Engineering; Model Checking for Autonomic Systems Specified with ASSL; A Game-Theoretic Approach to Branching Time Abstract-Check-Refine Process; Software Model Checking Without Source Code; Generalized Abstract Symbolic Summaries; A Comparative Study of Randomized Constraint Solvers for Random-Symbolic Testing; Component-Oriented Behavior Extraction for Autonomic System Design; Automated Verification of Design Patterns with LePUS3; A Module Language for Typing by Contracts; From Goal-Oriented Requirements to Event-B Specifications; Introduction of Virtualization Technology to Multi-Process Model Checking; Comparing Techniques for Certified Static Analysis; Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder; jFuzz: A Concolic Whitebox Fuzzer for Java; Machine-Checkable Timed CSP; Stochastic Formal Correctness of Numerical Algorithms; Deductive Verification of Cryptographic Software; Coloured Petri Net Refinement Specification and Correctness Proof with Coq; Modeling Guidelines for Code Generation in the Railway Signaling Context; Tactical Synthesis Of Efficient Global Search Algorithms; Towards Co-Engineering Communicating Autonomous Cyber-Physical Systems; and Formal Methods for Automated Diagnosis of Autosub 6000.
Many human accelerated regions are developmental enhancers
Capra, John A.; Erwin, Genevieve D.; McKinsey, Gabriel; Rubenstein, John L. R.; Pollard, Katherine S.
2013-01-01
The genetic changes underlying the dramatic differences in form and function between humans and other primates are largely unknown, although it is clear that gene regulatory changes play an important role. To identify regulatory sequences with potentially human-specific functions, we and others used comparative genomics to find non-coding regions conserved across mammals that have acquired many sequence changes in humans since divergence from chimpanzees. These regions are good candidates for performing human-specific regulatory functions. Here, we analysed the DNA sequence, evolutionary history, histone modifications, chromatin state and transcription factor (TF) binding sites of a combined set of 2649 non-coding human accelerated regions (ncHARs) and predicted that at least 30% of them function as developmental enhancers. We prioritized the predicted ncHAR enhancers using analysis of TF binding site gain and loss, along with the functional annotations and expression patterns of nearby genes. We then tested both the human and chimpanzee sequence for 29 ncHARs in transgenic mice, and found 24 novel developmental enhancers active in both species, 17 of which had very consistent patterns of activity in specific embryonic tissues. Of these ncHAR enhancers, five drove expression patterns suggestive of different activity for the human and chimpanzee sequence at embryonic day 11.5. The changes to human non-coding DNA in these ncHAR enhancers may modify the complex patterns of gene expression necessary for proper development in a human-specific manner and are thus promising candidates for understanding the genetic basis of human-specific biology. PMID:24218637
Corbi, N; Libri, V; Fanciulli, M; Tinsley, J M; Davies, K E; Passananti, C
2000-06-01
Up-regulation of utrophin gene expression is recognized as a plausible therapeutic approach in the treatment of Duchenne muscular dystrophy (DMD). We have designed and engineered new zinc finger-based transcription factors capable of binding and activating transcription from the promoter of the dystrophin-related gene, utrophin. Using the recognition 'code' that proposes specific rules between zinc finger primary structure and potential DNA binding sites, we engineered a new gene named 'Jazz' that encodes for a three-zinc finger peptide. Jazz belongs to the Cys2-His2 zinc finger type and was engineered to target the nine base pair DNA sequence: 5'-GCT-GCT-GCG-3', present in the promoter region of both the human and mouse utrophin gene. The entire zinc finger alpha-helix region, containing the amino acid positions that are crucial for DNA binding, was specifically chosen on the basis of the contacts more frequently represented in the available list of the 'code'. Here we demonstrate that Jazz protein binds specifically to the double-stranded DNA target, with a dissociation constant of about 32 nM. Band shift and super-shift experiments confirmed the high affinity and specificity of Jazz protein for its DNA target. Moreover, we show that chimeric proteins, named Gal4-Jazz and Sp1-Jazz, are able to drive the transcription of a test gene from the human utrophin promoter.
Automated generation of lattice QCD Feynman rules
NASA Astrophysics Data System (ADS)
Hart, A.; von Hippel, G. M.; Horgan, R. R.; Müller, E. H.
2009-12-01
The derivation of the Feynman rules for lattice perturbation theory from actions and operators is complicated, especially for highly improved actions such as HISQ. This task is, however, both important and particularly suitable for automation. We describe a suite of software to generate and evaluate Feynman rules for a wide range of lattice field theories with gluons and (relativistic and/or heavy) quarks. Our programs are capable of dealing with actions as complicated as (m)NRQCD and HISQ. Automated differentiation methods are used to calculate also the derivatives of Feynman diagrams. Program summary. Program title: HiPPY, HPsrc Catalogue identifier: AEDX_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEDX_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GPLv2 (see Additional comments below) No. of lines in distributed program, including test data, etc.: 513 426 No. of bytes in distributed program, including test data, etc.: 4 893 707 Distribution format: tar.gz Programming language: Python, Fortran95 Computer: HiPPy: Single-processor workstations. HPsrc: Single-processor workstations and MPI-enabled multi-processor systems Operating system: HiPPy: Any for which Python v2.5.x is available. HPsrc: Any for which a standards-compliant Fortran95 compiler is available Has the code been vectorised or parallelised?: Yes RAM: Problem specific, typically less than 1 GB for either code Classification: 4.4, 11.5 Nature of problem: Derivation and use of perturbative Feynman rules for complicated lattice QCD actions. Solution method: An automated expansion method implemented in Python (HiPPy) and code to use expansions to generate Feynman rules in Fortran95 (HPsrc). Restrictions: No general restrictions. Specific restrictions are discussed in the text. Additional comments: The HiPPy and HPsrc codes are released under the second version of the GNU General Public Licence (GPL v2). Therefore anyone is free to use or modify the code for their own calculations. As part of the licensing, we ask that any publications including results from the use of this code or of modifications of it cite Refs. [1,2] as well as this paper. Finally, we also ask that details of these publications, as well as of any bugs or required or useful improvements of this core code, would be communicated to us. Running time: Very problem specific, depending on the complexity of the Feynman rules and the number of integration points. Typically between a few minutes and several weeks. The installation tests provided with the program code take only a few seconds to run. References: A. Hart, G.M. von Hippel, R.R. Horgan, L.C. Storoni, Automatically generating Feynman rules for improved lattice field theories, J. Comput. Phys. 209 (2005) 340-353, doi:10.1016/j.jcp.2005.03.010, arXiv:hep-lat/0411026. M. Lüscher, P. Weisz, Efficient Numerical Techniques for Perturbative Lattice Gauge Theory Computations, Nucl. Phys. B 266 (1986) 309, doi:10.1016/0550-3213(86)90094-5.
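The core step that HiPPy automates, expanding lattice link variables in powers of the bare coupling to read off vertex factors, can be illustrated in a few lines of SymPy; this toy expansion of a single scalar link U = exp(i a g A) is only a cartoon of what the package does for realistic actions with color structure.

# Toy illustration of the expansion step automated by HiPPy: expand a single
# link variable U = exp(i*a*g*A) in the bare coupling g to pick off vertex factors.
# Real actions (HISQ, NRQCD) involve products of many links and color matrices.
import sympy as sp

a, g, A = sp.symbols('a g A')
U = sp.exp(sp.I * a * g * A)             # a single scalar "link" variable
vertices = sp.series(U, g, 0, 4).removeO()
print(sp.expand(vertices))
# terms through O(g^3): 1 + I*a*g*A - a**2*g**2*A**2/2 - I*a**3*g**3*A**3/6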
Potential Energy Cost Savings from Increased Commercial Energy Code Compliance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Athalye, Rahul A.
2016-08-22
An important question for commercial energy code compliance is: “How much energy cost savings can better compliance achieve?” This question is in sharp contrast to prior efforts that used a checklist of code requirements, each of which was graded pass or fail. Percent compliance for any given building was simply the percent of individual requirements that passed. A field investigation method is being developed that goes beyond the binary approach to determine how much energy cost savings is not realized. Prototype building simulations were used to estimate the energy cost impact of varying levels of non-compliance for newly constructed office buildings in climate zone 4C. Field data collected from actual buildings on specific conditions relative to code requirements was then applied to the simulation results to find the potential lost energy savings for a single building or for a sample of buildings. This new methodology was tested on nine office buildings in climate zone 4C. The amount of additional energy cost savings they could have achieved had they complied fully with the 2012 International Energy Conservation Code is determined. This paper will present the results of the test, lessons learned, describe follow-on research that is needed to verify that the methodology is both accurate and practical, and discuss the benefits that might accrue if the method were widely adopted.
Good, Ryan J; Leroue, Matthew K; Czaja, Angela S
2018-06-07
Noninvasive positive pressure ventilation (NIPPV) is increasingly used in critically ill pediatric patients, despite limited data on safety and efficacy. Administrative data may be a good resource for observational studies. Therefore, we sought to assess the performance of the International Classification of Diseases, Ninth Revision procedure code for NIPPV. Patients admitted to the PICU requiring NIPPV or heated high-flow nasal cannula (HHFNC) over the 11-month study period were identified from the Virtual PICU System database. The gold standard was manual review of the electronic health record to verify the use of NIPPV or HHFNC among the cohort. The presence or absence of a NIPPV procedure code was determined by using administrative data. Test characteristics with 95% confidence intervals (CIs) were generated, comparing administrative data with the gold standard. Among the cohort ( n = 562), the majority were younger than 5 years, and the most common primary diagnosis was bronchiolitis. Most (82%) required NIPPV, whereas 18% required only HHFNC. The NIPPV code had a sensitivity of 91.1% (95% CI: 88.2%-93.6%) and a specificity of 57.6% (95% CI: 47.2%-67.5%), with a positive likelihood ratio of 2.15 (95% CI: 1.70-2.71) and negative likelihood ratio of 0.15 (95% CI: 0.11-0.22). Among our critically ill pediatric cohort, NIPPV procedure codes had high sensitivity but only moderate specificity. On the basis of our study results, there is a risk of misclassification, specifically failure to identify children who require NIPPV, when using administrative data to study the use of NIPPV in this population. Copyright © 2018 by the American Academy of Pediatrics.
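The likelihood ratios quoted above follow directly from the reported sensitivity and specificity (LR+ = sensitivity/(1 - specificity), LR- = (1 - sensitivity)/specificity); a two-line check reproduces them.

# Quick check that the reported likelihood ratios follow from sensitivity/specificity.
sens, spec = 0.911, 0.576
print(round(sens / (1 - spec), 2))   # 2.15  (positive likelihood ratio)
print(round((1 - sens) / spec, 2))   # 0.15  (negative likelihood ratio)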
SOCR Analyses – an Instructional Java Web-based Statistical Analysis Toolkit
Chu, Annie; Cui, Jenny; Dinov, Ivo D.
2011-01-01
The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses like linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as t-test in the parametric category; and Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, in the non-parametric category. SOCR Analyses also include several hypothesis test models, such as Contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with API (Application Programming Interface) have been implemented in statistical summary, least square solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most updated information and newly added models. PMID:21546994
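SOCR Analyses itself is a Java web toolkit, but the nonparametric comparisons it implements have direct SciPy analogues; the following sketch, with toy data, is offered only as a command-line counterpart for readers outside the Java environment.

# SciPy analogues of the nonparametric sample-comparison tests listed above.
# Toy data for illustration only; this is not part of SOCR Analyses.
from scipy import stats

a = [1.1, 2.3, 2.9, 3.8, 4.0]
b = [2.0, 2.5, 3.1, 4.2, 5.5]
c = [0.9, 1.8, 2.2, 2.7, 3.3]

print(stats.ranksums(a, b))              # Wilcoxon rank sum test (two samples)
print(stats.kruskal(a, b, c))            # Kruskal-Wallis test (k samples)
print(stats.friedmanchisquare(a, b, c))  # Friedman's test (repeated measures)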
DiClemente, Carlo C; Crouch, Taylor Berens; Norwood, Amber E Q; Delahanty, Janine; Welsh, Christopher
2015-03-01
Screening, brief intervention, and referral to treatment (SBIRT) has become an empirically supported and widely implemented approach in primary and specialty care for addressing substance misuse. Accordingly, training of providers in SBIRT has increased exponentially in recent years. However, the quality and fidelity of training programs and subsequent interventions are largely unknown because of the lack of SBIRT-specific evaluation tools. The purpose of this study was to create a coding scale to assess quality and fidelity of SBIRT interactions addressing alcohol, tobacco, illicit drugs, and prescription medication misuse. The scale was developed to evaluate performance in an SBIRT residency training program. Scale development was based on training protocol and competencies with consultation from Motivational Interviewing coding experts. Trained medical residents practiced SBIRT with standardized patients during 10- to 15-min videotaped interactions. This study included 25 tapes from the Family Medicine program coded by 3 unique coder pairs with varying levels of coding experience. Interrater reliability was assessed for overall scale components and individual items via intraclass correlation coefficients. Coder pair-specific reliability was also assessed. Interrater reliability was excellent overall for the scale components (>.85) and nearly all items. Reliability was higher for more experienced coders, though still adequate for the trained coder pair. Descriptive data demonstrated a broad range of adherence and skills. Subscale correlations supported concurrent and discriminant validity. Data provide evidence that the MD3 SBIRT Coding Scale is a psychometrically reliable coding system for evaluating SBIRT interactions and can be used to evaluate implementation skills for fidelity, training, assessment, and research. Recommendations for refinement and further testing of the measure are discussed. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Water NSTF Design, Instrumentation, and Test Planning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisowski, Darius D.; Gerardi, Craig D.; Hu, Rui
The following report serves as a formal introduction to the water-based Natural convection Shutdown heat removal Test Facility (NSTF) program at Argonne. Since 2005, this US Department of Energy (DOE) sponsored program has conducted large scale experimental testing to generate high-quality and traceable validation data for guiding design decisions of the Reactor Cavity Cooling System (RCCS) concept for advanced reactor designs. The most recent facility iteration, and focus of this report, is the operation of a 1/2 scale model of a water-RCCS concept. Several features of the NSTF prototype align with the conceptual design that has been publicly released for the AREVA 625 MWt SC-HTGR. The design of the NSTF also retains all aspects common to a fundamental boiling water thermosiphon, and thus is well poised to provide necessary experimental data to advance basic understanding of natural circulation phenomena and contribute to computer code validation. Overall, the NSTF program operates to support the DOE vision of aiding US vendors in design choices of future reactor concepts, advancing the maturity of codes for licensing, and ultimately developing safe and reliable reactor technologies. In this report, the top-level program objectives, testing requirements, and unique considerations for the water cooled test assembly are discussed, and presented in sufficient depth to support defining the program’s overall scope and purpose. A discussion of the proposed 6-year testing program is then introduced, which outlines the specific strategy and testing plan for facility operations. The proposed testing plan has been developed to meet the top-level objective of conducting high-quality test operations that span across a broad range of single- and two-phase operating conditions. Details of characterization, baseline test cases, accident scenario, and parametric variations are provided, including discussions of later-stage test cases that examine the influence of geometric variations and off-normal configurations. The facility design follows, including as-built dimensions and specifications of the various mechanical and liquid systems, design choices for the test section, water storage tank, and network piping. Specifications of the instrumentation suite are then presented, along with specific information on performance windows, measurement uncertainties, and installation locations. Finally, descriptions of the control systems and heat removal networks are provided, which have been engineered to support precise quantification of energy balances and facilitate well-controlled test operations.
Flexible Generation of Kalman Filter Code
NASA Technical Reports Server (NTRS)
Richardson, Julian; Wilson, Edward
2006-01-01
Domain-specific program synthesis can automatically generate high-quality code in complex domains from succinct specifications, but the range of programs which can be generated by a given synthesis system is typically narrow. Obtaining code which falls outside this narrow scope necessitates either 1) extension of the code generator, which is usually very expensive, or 2) manual modification of the generated code, which is often difficult and which must be redone whenever changes are made to the program specification. In this paper, we describe adaptations and extensions of the AUTOFILTER Kalman filter synthesis system which greatly extend the range of programs that can be generated. Users augment the input specification with a specification of code fragments and how those fragments should interleave with or replace parts of the synthesized filter. This allows users to generate a much wider range of programs without needing to modify the synthesis system or edit generated code. We demonstrate the usefulness of the approach by applying it to the synthesis of a complex state estimator which combines code from several Kalman filters with user-specified code. The work described in this paper allows the complex design decisions necessary for real-world applications to be reflected in the synthesized code. When executed on simulated input data, the generated state estimator was found to produce estimates comparable to those produced by a hand-coded estimator.
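As a point of reference for the kind of code such a synthesis system emits, a generic (hand-written, not AUTOFILTER-generated) linear Kalman filter predict/update step might look like the following sketch.

# Generic linear Kalman filter predict/update step (NumPy), shown only as a point
# of reference; this is hand-written illustration, not AUTOFILTER output.
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update with measurement z
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new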
Imaging Sensor Flight and Test Equipment Software
NASA Technical Reports Server (NTRS)
Freestone, Kathleen; Simeone, Louis; Robertson, Byran; Frankford, Maytha; Trice, David; Wallace, Kevin; Wilkerson, DeLisa
2007-01-01
The Lightning Imaging Sensor (LIS) is one of the components onboard the Tropical Rainfall Measuring Mission (TRMM) satellite, and was designed to detect and locate lightning over the tropics. The LIS flight code was developed to run on a single onboard digital signal processor, and has operated the LIS instrument since 1997 when the TRMM satellite was launched. The software provides controller functions to the LIS Real-Time Event Processor (RTEP) and onboard heaters, collects the lightning event data from the RTEP, compresses and formats the data for downlink to the satellite, collects housekeeping data and formats the data for downlink to the satellite, provides command processing and interface to the spacecraft communications and data bus, and provides watchdog functions for error detection. The Special Test Equipment (STE) software was designed to operate specific test equipment used to support the LIS hardware through development, calibration, qualification, and integration with the TRMM spacecraft. The STE software provides the capability to control instrument activation, commanding (including both data formatting and user interfacing), data collection, decompression, and display and image simulation. The LIS STE code was developed for the DOS operating system in the C programming language. Because of the many unique data formats implemented by the flight instrument, the STE software was required to comprehend the same formats, and translate them for the test operator. The hardware interfaces to the LIS instrument using both commercial and custom computer boards, requiring that the STE code integrate this variety into a working system. In addition, the requirement to provide RTEP test capability dictated the need to provide simulations of background image data with short-duration lightning transients superimposed. This led to the development of unique code used to control the location, intensity, and variation above background for simulated lightning strikes at user-selected locations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messner, Mark C.; Sham, Sam; Wang, Yanli
This report summarizes the experiments performed in FY17 on Gr. 91 steels. The testing of Gr. 91 has technical significance because, currently, it is the only approved material for Class A construction that is strongly cyclic softening. Specific FY17 testing includes the following activities for Gr. 91 steel. First, two types of key feature testing have been initiated, including two-bar thermal ratcheting and Simplified Model Testing (SMT). The goal is to qualify the Elastic – Perfectly Plastic (EPP) design methodologies and to support incorporation of these rules for Gr. 91 into the ASME Division 5 Code. The preliminary SMT test results show that Gr. 91 is most damaging when tested with compression hold mode under the SMT creep fatigue testing condition. Two-bar thermal ratcheting test results over a temperature range of 350 to 650 °C were compared with the EPP strain limits code case evaluation, and the results show that the EPP strain limits code case is conservative. The material information obtained from these key feature tests can also be used to verify its material model. Second, to provide experimental data in support of the viscoplastic material model development at Argonne National Laboratory, selective tests were performed to evaluate the effect of cyclic softening on strain rate sensitivity and creep rates. The results show the prior cyclic loading history decreases the strain rate sensitivity and increases creep rates. In addition, isothermal cyclic stress-strain curves were generated at six different temperatures, and nonisothermal thermomechanical testing was also performed to provide data to calibrate the viscoplastic material model.
Shahraz, Saeid; Lagu, Tara; Ritter, Grant A; Liu, Xiadong; Tompkins, Christopher
2017-03-01
Selection of International Classification of Diseases (ICD)-based coded information for complex conditions such as severe sepsis is a subjective process and the results are sensitive to the codes selected. We use an innovative data exploration method to guide ICD-based case selection for severe sepsis. Using the Nationwide Inpatient Sample, we applied Latent Class Analysis (LCA) to determine if medical coders follow any uniform and sensible coding for observations with severe sepsis. We examined whether ICD-9 codes specific to sepsis (038.xx for septicemia, a subset of 995.9 codes representing Systemic Inflammatory Response syndrome, and 785.52 for septic shock) could all be members of the same latent class. Hospitalizations coded with sepsis-specific codes could be assigned to a latent class of their own. This class constituted 22.8% of all potential sepsis observations. The probability of an observation with any sepsis-specific codes being assigned to the residual class was near 0. The chance of an observation in the residual class having a sepsis-specific code as the principal diagnosis was close to 0. Validity of sepsis class assignment is supported by empirical results, which indicated that in-hospital deaths in the sepsis-specific class were around 4 times as likely as that in the residual class. The conventional methods of defining severe sepsis cases in observational data substantially misclassify sepsis cases. We suggest a methodology that helps reliable selection of ICD codes for conditions that require complex coding.
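The sepsis-specific code set described above (038.xx septicemia, a subset of 995.9 codes for SIRS, and 785.52 septic shock) can be screened for in a few lines; the particular 995.9x members listed here are an assumption for illustration, since the abstract does not enumerate them.

# Sketch of flagging ICD-9 codes as "sepsis-specific" in the sense used above.
SIRS_SUBSET = {"995.91", "995.92"}  # assumed members of the 995.9x subset

def is_sepsis_specific(icd9):
    """True if the ICD-9 code falls in the sepsis-specific set described above."""
    return icd9.startswith("038.") or icd9 in SIRS_SUBSET or icd9 == "785.52"

print([c for c in ["038.9", "995.91", "785.52", "486"] if is_sepsis_specific(c)])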
Orso, Massimiliano; Serraino, Diego; Abraha, Iosief; Fusco, Mario; Giovannini, Gianni; Casucci, Paola; Cozzolino, Francesco; Granata, Annalisa; Gobbato, Michele; Stracci, Fabrizio; Ciullo, Valerio; Vitale, Maria Francesca; Eusebi, Paolo; Orlandi, Walter; Montedori, Alessandro; Bidoli, Ettore
2018-04-20
To assess the accuracy of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes in identifying subjects with melanoma. A diagnostic accuracy study comparing melanoma ICD-9-CM codes (index test) with medical chart (reference standard). Case ascertainment was based on neoplastic lesion of the skin and a histological diagnosis from a primary or metastatic site positive for melanoma. Administrative databases from Umbria Region, Azienda Sanitaria Locale (ASL) Napoli 3 Sud (NA) and Friuli Venezia Giulia (FVG) Region. 112, 130 and 130 cases (subjects with melanoma) were randomly selected from Umbria, NA and FVG, respectively; 94 non-cases (subjects without melanoma) were randomly selected from each unit. Sensitivity and specificity for ICD-9-CM code 172.x located in primary position. The most common melanoma subtype was malignant melanoma of skin of trunk, except scrotum (ICD-9-CM code: 172.5), followed by malignant melanoma of skin of lower limb, including hip (ICD-9-CM code: 172.7). The mean age of the patients ranged from 60 to 61 years. Most of the diagnoses were performed in surgical departments.The sensitivities were 100% (95% CI 96% to 100%) for Umbria, 99% (95% CI 94% to 100%) for NA and 98% (95% CI 93% to 100%) for FVG. The specificities were 88% (95% CI 80% to 93%) for Umbria, 77% (95% CI 69% to 85%) for NA and 79% (95% CI 71% to 86%) for FVG. The case definition for melanoma based on clinical or instrumental diagnosis, confirmed by histological examination, showed excellent sensitivities and good specificities in the three operative units. Administrative databases from the three operative units can be used for epidemiological and outcome research of melanoma. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Orso, Massimiliano; Serraino, Diego; Fusco, Mario; Giovannini, Gianni; Casucci, Paola; Cozzolino, Francesco; Granata, Annalisa; Gobbato, Michele; Stracci, Fabrizio; Ciullo, Valerio; Vitale, Maria Francesca; Orlandi, Walter; Montedori, Alessandro; Bidoli, Ettore
2018-01-01
Objectives To assess the accuracy of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes in identifying subjects with melanoma. Design A diagnostic accuracy study comparing melanoma ICD-9-CM codes (index test) with medical chart (reference standard). Case ascertainment was based on neoplastic lesion of the skin and a histological diagnosis from a primary or metastatic site positive for melanoma. Setting Administrative databases from Umbria Region, Azienda Sanitaria Locale (ASL) Napoli 3 Sud (NA) and Friuli Venezia Giulia (FVG) Region. Participants 112, 130 and 130 cases (subjects with melanoma) were randomly selected from Umbria, NA and FVG, respectively; 94 non-cases (subjects without melanoma) were randomly selected from each unit. Outcome measures Sensitivity and specificity for ICD-9-CM code 172.x located in primary position. Results The most common melanoma subtype was malignant melanoma of skin of trunk, except scrotum (ICD-9-CM code: 172.5), followed by malignant melanoma of skin of lower limb, including hip (ICD-9-CM code: 172.7). The mean age of the patients ranged from 60 to 61 years. Most of the diagnoses were performed in surgical departments. The sensitivities were 100% (95% CI 96% to 100%) for Umbria, 99% (95% CI 94% to 100%) for NA and 98% (95% CI 93% to 100%) for FVG. The specificities were 88% (95% CI 80% to 93%) for Umbria, 77% (95% CI 69% to 85%) for NA and 79% (95% CI 71% to 86%) for FVG. Conclusions The case definition for melanoma based on clinical or instrumental diagnosis, confirmed by histological examination, showed excellent sensitivities and good specificities in the three operative units. Administrative databases from the three operative units can be used for epidemiological and outcome research of melanoma. PMID:29678984
47 CFR 11.61 - Tests of EAS procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... EAS header codes, Attention Signal, Test Script and EOM code. (i) Tests in odd numbered months shall... substitute for a monthly test, activation must include transmission of the EAS header codes, Attention Signal, emergency message and EOM code and comply with the visual message requirements in § 11.51. To substitute for...
Langner, Ingo; Mikolajczyk, Rafael; Garbe, Edeltraut
2011-08-17
Health insurance claims data are increasingly used for health services research in Germany. Hospital diagnoses in these data are coded according to the International Classification of Diseases, German modification (ICD-10-GM). Due to the historical division into West and East Germany, different coding practices might persist in both former parts. Additionally, the introduction of Diagnosis Related Groups (DRGs) in Germany in 2003/2004 might have changed the coding. The aim of this study was to investigate regional and temporal variations in coding of hospitalisation diagnoses in Germany. We analysed hospitalisation diagnoses for oesophageal bleeding (OB) and upper gastrointestinal bleeding (UGIB) from the official German Hospital Statistics provided by the Federal Statistical Office. Bleeding diagnoses were classified as "specific" (origin of bleeding provided) or "unspecific" (origin of bleeding not provided) coding. We studied regional (former East versus West Germany) differences in incidence of hospitalisations with specific or unspecific coding for OB and UGIB and temporal variations between 2000 and 2005. For each year, incidence ratios of hospitalisations for former East versus West Germany were estimated with log-linear regression models adjusting for age, gender and population density. Significant differences in specific and unspecific coding between East and West Germany and over time were found for both OB and UGIB hospitalisation diagnoses. For example, in 2002, incidence ratios of hospitalisations for East versus West Germany were 1.24 (95% CI 1.16-1.32) for specific and 0.67 (95% CI 0.60-0.74) for unspecific OB diagnoses and 1.43 (95% CI 1.36-1.51) for specific and 0.83 (95% CI 0.80-0.87) for unspecific UGIB. Regional differences nearly disappeared and time trends were less marked when combined specific and unspecific diagnoses of OB or UGIB were used. During the study period, there were substantial regional and temporal variations in the coding of OB and UGIB diagnoses in hospitalised patients. Possible explanations for the observed regional variations are different coding preferences, further influenced by changes in coding and reimbursement rules. Analysing groups of diagnoses including specific and unspecific codes reduces the influence of varying coding practices.
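A minimal sketch of the kind of log-linear (Poisson) model used for such adjusted incidence ratios is given below; it assumes a statsmodels formula interface and an invented stratified data set, so column names, counts and covariates are placeholders rather than the study's variables.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented stratified counts: hospitalisations and person-years by region and age group.
df = pd.DataFrame({
    "cases":        [120, 95, 310, 260],
    "person_years": [4.0e5, 3.5e5, 1.6e6, 1.5e6],
    "region":       ["East", "East", "West", "West"],
    "age_group":    ["65plus", "under65", "65plus", "under65"],
})

# Log-linear (Poisson) model with a person-years offset; exponentiated
# coefficients are adjusted incidence ratios (here West vs. East as reference).
model = smf.glm(
    "cases ~ region + age_group",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["person_years"]),
).fit()
print(np.exp(model.params))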
Schütz, U; Reichel, H; Dreinhöfer, K
2007-01-01
We introduce a grouping system for clinical practice which allows the separation of DRG coding into specific orthopaedic groups based on anatomic regions, operative procedures, therapeutic interventions and morbidity-equivalent diagnosis groups. This makes possible a differentiated, goal-oriented analysis of the hospital's internal DRG data. We analysed the group-specific difference in coding quality between primary coding by the orthopaedic surgeon and final coding by the medical controlling department. In a consecutive series of 1600 patients, parallel documentation and group-specific comparison of the relevant DRG parameters were carried out in every case after primary and final coding. In the analysis of the group-specific share of additional CaseMix coding, the group "spine surgery" dominated, closely followed by the groups "arthroplasty" and "surgery due to infection, tumours, diabetes". Altogether, additional cost-weight-relevant coding was necessary most frequently in the latter group (84%), followed by the group "spine surgery" (65%). In DRGs representing conservative orthopaedic treatment, documented procedures had nearly no influence on the cost weight. The introduced system of case group analysis in internal DRG documentation can lead to the detection of specific problems in primary coding and of cost-weight-relevant changes in the case mix. As an instrument for internal process control in the orthopaedic field, it can serve as a communicative interface between an economically oriented classification of hospital performance and the specific problem-solving of the medical staff involved in department management.
Collisionless stellar hydrodynamics as an efficient alternative to N-body methods
NASA Astrophysics Data System (ADS)
Mitchell, Nigel L.; Vorobyov, Eduard I.; Hensler, Gerhard
2013-01-01
The dominant constituents of the Universe's matter are believed to be collisionless in nature and thus their modelling in any self-consistent simulation is extremely important. For simulations that deal only with dark matter or stellar systems, the conventional N-body technique is fast, memory efficient and relatively simple to implement. However when extending simulations to include the effects of gas physics, mesh codes are at a distinct disadvantage compared to Smooth Particle Hydrodynamics (SPH) codes. Whereas implementing the N-body approach into SPH codes is fairly trivial, the particle-mesh technique used in mesh codes to couple collisionless stars and dark matter to the gas on the mesh has a series of significant scientific and technical limitations. These include spurious entropy generation resulting from discreteness effects, poor load balancing and increased communication overhead which spoil the excellent scaling in massively parallel grid codes. In this paper we propose the use of the collisionless Boltzmann moment equations as a means to model the collisionless material as a fluid on the mesh, implementing it into the massively parallel FLASH Adaptive Mesh Refinement (AMR) code. This approach which we term `collisionless stellar hydrodynamics' enables us to do away with the particle-mesh approach and since the parallelization scheme is identical to that used for the hydrodynamics, it preserves the excellent scaling of the FLASH code already demonstrated on peta-flop machines. We find that the classic hydrodynamic equations and the Boltzmann moment equations can be reconciled under specific conditions, allowing us to generate analytic solutions for collisionless systems using conventional test problems. We confirm the validity of our approach using a suite of demanding test problems, including the use of a modified Sod shock test. By deriving the relevant eigenvalues and eigenvectors of the Boltzmann moment equations, we are able to use high order accurate characteristic tracing methods with Riemann solvers to generate numerical solutions which show excellent agreement with our analytic solutions. We conclude by demonstrating the ability of our code to model complex phenomena by simulating the evolution of a two-armed spiral galaxy whose properties agree with those predicted by the swing amplification theory.
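For orientation, the first two velocity moments of the collisionless Boltzmann equation, which a "collisionless stellar hydrodynamics" scheme of this kind evolves on the mesh, can be written in the generic form below; the closure adopted for the dispersion tensor is specific to the paper and is not reproduced here.

\begin{align}
  \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\,\mathbf{u}) &= 0, \\
  \frac{\partial (\rho\,\mathbf{u})}{\partial t}
    + \nabla \cdot \left(\rho\,\mathbf{u}\otimes\mathbf{u} + \boldsymbol{\Pi}\right)
    &= -\rho\,\nabla\Phi,
\end{align}

where $\rho$ is the stellar mass density, $\mathbf{u}$ the mean streaming velocity, $\boldsymbol{\Pi}$ the (generally anisotropic) velocity-dispersion tensor, and $\Phi$ the gravitational potential.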
DOE Office of Scientific and Technical Information (OSTI.GOV)
Argueello, J.G.; Dohrmann, C.R.; Carne, T.G.
The combined analysis/test effort described in this paper compares predictions with measured data from a step-relaxation test in the absence of significant wind-driven aerodynamic loading. The process described here is intended to illustrate a method for validation of time domain codes for structural analysis of wind turbine structures. Preliminary analyses were performed to investigate the transient dynamic response that the rotating Sandia 34 m Vertical Axis Wind Turbine (VAWT) would undergo when one of the two blades was excited by step-relaxation. The calculations served two purposes. The first was for pretest planning to evaluate the relative importance of the various forces that would be acting on the structure during the test and to determine if the applied force in the step-relaxation would be sufficient to produce an excitation that was distinguishable from that produced by the aerodynamic loads. The second was to provide predictions that could subsequently be compared to the data from the test. The test was carried out specifically to help in the validation of the time-domain structural dynamics code, VAWT-SDS, which predicts the dynamic response of VAWTs subject to transient events. Post-test comparisons with the data were performed and showed a qualitative agreement between pretest predictions and measured response. However, they also showed that there was significantly more damping in the measurements than included in the predictions. Efforts to resolve this difference, including post-test analyses, were undertaken and are reported herein. The overall effort described in this paper represents a major step in the process of arriving at a validated structural dynamics code.
1985-11-01
User Interface that consists of a set of callable execution time routines available to an application program for form processing. IISS Function Screen...provisions for test consists of the normal testing techniques that are accomplished during the construction process. They consist of design and code...application presents a form to the user which must be filled in with information for processing by that application. The application then
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemaire, H.; Barat, E.; Carrel, F.
In this work, we tested maximum-likelihood expectation-maximization (MLEM) algorithms optimized for gamma imaging applications on two recent coded-mask gamma cameras. We took advantage of the respective characteristics of the GAMPIX and Caliste HD-based gamma cameras: noise reduction thanks to the mask/anti-mask procedure but limited energy resolution for GAMPIX, and high energy resolution for Caliste HD. One of our short-term perspectives is to test MAPEM algorithms that integrate prior values, adapted to the gamma imaging application, for the data to be reconstructed. (authors)
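The MLEM update itself is compact; the Python sketch below shows the standard iteration for emission-imaging reconstruction under the usual Poisson model, with a random toy system matrix standing in for the actual coded-mask camera response of GAMPIX or Caliste HD.

import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    # A: (n_detector_bins, n_pixels) system matrix; y: measured counts per bin.
    x = np.ones(A.shape[1])           # flat initial image estimate
    sens = A.sum(axis=0) + eps        # per-pixel sensitivity, A^T 1
    for _ in range(n_iter):
        forward = A @ x + eps         # expected counts given the current image
        x *= (A.T @ (y / forward)) / sens
    return x

# Toy usage with a random response matrix (placeholder for a real camera response).
rng = np.random.default_rng(0)
A = rng.random((64, 32))
truth = rng.random(32)
y = rng.poisson(100 * (A @ truth))
print(mlem(A, y)[:5])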
HMPT: Hazardous Waste Transportation Live 27928, Test 27929
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simpson, Lewis Edward
2016-03-17
HMPT: Hazardous Waste Transportation (Live 27928, suggested one time, and associated Test 27929, required initially and every 36 months) addresses the Department of Transportation (DOT) function-specific training requirements within the hazardous materials packaging and transportation (HMPT) Los Alamos National Laboratory (LANL) lab-wide training. This course addresses the requirements of the DOT that are unique to hazardous waste shipments. Appendix B provides the Title 40 Code of Federal Regulations (CFR) reference material needed for this course.
The accuracy of burn diagnosis codes in health administrative data: A validation study.
Mason, Stephanie A; Nathens, Avery B; Byrne, James P; Fowler, Rob; Gonzalez, Alejandro; Karanicolas, Paul J; Moineddin, Rahim; Jeschke, Marc G
2017-03-01
Health administrative databases may provide rich sources of data for the study of outcomes following burn. We aimed to determine the accuracy of International Classification of Diseases diagnoses codes for burn in a population-based administrative database. Data from a regional burn center's clinical registry of patients admitted between 2006-2013 were linked to administrative databases. Burn total body surface area (TBSA), depth, mechanism, and inhalation injury were compared between the registry and administrative records. The sensitivity, specificity, and positive and negative predictive values were determined, and coding agreement was assessed with the kappa statistic. 1215 burn center patients were linked to administrative records. TBSA codes were highly sensitive and specific for ≥10 and ≥20% TBSA (89/93% sensitive and 95/97% specific), with excellent agreement (κ, 0.85/κ, 0.88). Codes were weakly sensitive (68%) in identifying ≥10% TBSA full-thickness burn, though highly specific (86%) with moderate agreement (κ, 0.46). Codes for inhalation injury had limited sensitivity (43%) but high specificity (99%) with moderate agreement (κ, 0.54). Burn mechanism had excellent coding agreement (κ, 0.84). Administrative data diagnosis codes accurately identify burn by burn size and mechanism, while identification of inhalation injury or full-thickness burns is less sensitive but highly specific. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
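As a hedged aside, the kappa agreement statistic quoted above can be reproduced from paired labels in a few lines; the example below uses scikit-learn on made-up registry versus administrative-code flags, not the study's records.

from sklearn.metrics import cohen_kappa_score, confusion_matrix

# Made-up paired labels: 1 = burn of >=10% TBSA, per the clinical registry
# (reference standard) and per the administrative diagnosis codes.
registry = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
admin    = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]

print(confusion_matrix(registry, admin))
print("kappa:", cohen_kappa_score(registry, admin))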
Technical Support Document for Version 3.6.1 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2009-09-29
This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards.
Code of Fair Testing Practices in Education (Revised)
ERIC Educational Resources Information Center
Educational Measurement: Issues and Practice, 2005
2005-01-01
A note from the Working Group of the Joint Committee on Testing Practices: The "Code of Fair Testing Practices in Education (Code)" prepared by the Joint Committee on Testing Practices (JCTP) has just been revised for the first time since its initial introduction in 1988. The revision of the Code was inspired primarily by the revision of…
Methodology, status and plans for development and assessment of Cathare code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestion, D.; Barre, F.; Faydide, B.
1997-07-01
This paper presents the methodology, status and plans for the development, assessment and uncertainty evaluation of the Cathare code. Cathare is a thermalhydraulic code developed by CEA (DRN), IPSN, EDF and FRAMATOME for PWR safety analysis. First, the status of the code development and assessment is presented, together with the general strategy used for developing and assessing the code. Analytical experiments with separate effect tests, and component tests, are used for the development and validation of closure laws. Successive Revisions of constitutive laws are implemented in successive Versions of the code and assessed. System tests or integral tests are used to validate the general consistency of the Revision. Each delivery of a code Version + Revision is fully assessed and documented. A methodology is being developed to determine the uncertainty on all constitutive laws of the code using calculations of many analytical tests and applying the Discrete Adjoint Sensitivity Method (DASM). Finally, the plans for future developments of the code are presented. They concern the optimization of code performance through parallel computing (the code will be used for real-time full-scope plant simulators), the coupling with many other codes (neutronic codes, severe accident codes), and the application of the code to containment thermalhydraulics. Physical improvements are also required in the field of low-pressure transients and in the 3-D model.
Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions
NASA Astrophysics Data System (ADS)
Kwak, Kyujin; Yang, Seungwon
2015-08-01
The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which remain unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn a lot of attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations for the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for astronomically important molecules are now collected in databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects the spatial distribution of specific species. We present the development procedure of this code and some test problems used to verify and validate it.
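A minimal sketch of the non-equilibrium chemistry ingredient is shown below: a two-reaction network advanced with scipy's ODE integrator, the kind of per-cell source term a hydrodynamics code can couple to the flow. The species, rate coefficients and time span are invented placeholders, not entries from the public astrochemical databases.

from scipy.integrate import solve_ivp

k_form = 1.0e-9    # cm^3 s^-1, illustrative rate for A + B -> AB
k_diss = 1.0e-10   # s^-1, illustrative photodissociation rate for AB -> A + B

def rhs(t, n):
    nA, nB, nAB = n                 # number densities (cm^-3)
    form = k_form * nA * nB
    diss = k_diss * nAB
    return [-form + diss, -form + diss, form - diss]

n0 = [1.0e4, 1.0e3, 0.0]            # initial densities of A, B, AB
sol = solve_ivp(rhs, (0.0, 1.0e10), n0, method="LSODA", rtol=1e-8)
print(sol.y[:, -1])                 # abundances after roughly 300 yr of evolution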
Development of a Grid-Based Gyro-Kinetic Simulation Code
NASA Astrophysics Data System (ADS)
Lapillonne, Xavier; Brunetti, Maura; Tran, Trach-Minh; Brunner, Stephan
2006-10-01
A grid-based semi-Lagrangian code using cubic spline interpolation is being developed at CRPP for solving the electrostatic drift-kinetic equations [M. Brunetti et al., Comp. Phys. Comm. 163, 1 (2004)] in a cylindrical system. This 4-dim code, CYGNE, is part of a project with the long-term aim of studying microturbulence in toroidal fusion devices, in the more general frame of gyro-kinetic equations. Towards their non-linear phase, the simulations from this code are subject to significant overshoot problems, reflected by the development of regions where the distribution function becomes negative, which leads to poor energy conservation. This has motivated the study of alternative schemes. On the one hand, new time integration algorithms are considered in the semi-Lagrangian frame. On the other hand, fully Eulerian schemes, which separate time and space discretisation (method of lines), are investigated. In particular, the Essentially Non-Oscillatory (ENO) approach, constructed so as to minimize the overshoot problem, has been considered. All these methods have first been tested in the simpler case of the 2-dim guiding-center model for the Kelvin-Helmholtz instability, which makes it possible to address the specific issue of the E x B drift also met in the more complex gyrokinetic-type equations. Based on these preliminary studies, the most promising methods are being implemented and tested in CYGNE.
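To make the semi-Lagrangian idea concrete, the sketch below advects a 1-D profile at constant velocity on a periodic grid by tracing characteristics backwards and interpolating with a periodic cubic spline; this is only a toy analogue under simplifying assumptions, not the 4-dim CYGNE scheme.

import numpy as np
from scipy.interpolate import CubicSpline

nx, L, v, dt, nsteps = 128, 1.0, 0.5, 0.01, 100
x = np.linspace(0.0, L, nx, endpoint=False)
f = np.exp(-200.0 * (x - 0.5 * L) ** 2)            # initial 1-D profile

for _ in range(nsteps):
    # Periodic cubic spline of the current solution.
    spline = CubicSpline(np.append(x, L), np.append(f, f[0]), bc_type="periodic")
    x_dep = (x - v * dt) % L                        # foot of the characteristic
    f = spline(x_dep)                               # interpolate at departure points

print(f.max())    # the peak should remain close to 1 for this smooth profile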
Development and validation of a low-frequency modeling code for high-moment transmitter rod antennas
NASA Astrophysics Data System (ADS)
Jordan, Jared Williams; Sternberg, Ben K.; Dvorak, Steven L.
2009-12-01
The goal of this research is to develop and validate a low-frequency modeling code for high-moment transmitter rod antennas to aid in the design of future low-frequency TX antennas with high magnetic moments. To accomplish this goal, a quasi-static modeling algorithm was developed to simulate finite-length, permeable-core, rod antennas. This quasi-static analysis is applicable for low frequencies where eddy currents are negligible, and it can handle solid or hollow cores with winding insulation thickness between the antenna's windings and its core. The theory was programmed in Matlab, and the modeling code has the ability to predict the TX antenna's gain, maximum magnetic moment, saturation current, series inductance, and core series loss resistance, provided the user enters the corresponding complex permeability for the desired core magnetic flux density. In order to utilize the linear modeling code to model the effects of nonlinear core materials, it is necessary to use the correct complex permeability for a specific core magnetic flux density. In order to test the modeling code, we demonstrated that it can accurately predict changes in the electrical parameters associated with variations in the rod length and the core thickness for antennas made out of low carbon steel wire. These tests demonstrate that the modeling code was successful in predicting the changes in the rod antenna characteristics under high-current nonlinear conditions due to changes in the physical dimensions of the rod provided that the flux density in the core was held constant in order to keep the complex permeability from changing.
Test Analysis Tools to Ensure Higher Quality of On-Board Real Time Software for Space Applications
NASA Astrophysics Data System (ADS)
Boudillet, O.; Mescam, J.-C.; Dalemagne, D.
2008-08-01
EADS Astrium Space Transportation, at its Les Mureaux premises, is responsible for the onboard SW of the French M51 nuclear deterrent missile. It has also developed over 1 million lines of code, mostly in Ada, for the Automated Transfer Vehicle (ATV) onboard SW and for the flight control SW of the ARIANE5 launcher that put the ATV into orbit. As part of the ATV SW, ASTRIUM ST developed the first Category A SW ever qualified for a European space application. To ensure that all this embedded SW is developed to the highest quality and reliability level, specific development tools have been designed to cover source code verification, automated validation testing and complete target instruction coverage verification. Three such dedicated tools are presented here.
76 FR 12600 - Review of the Emergency Alert System
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... appropriate, various administrative procedures for national tests, including test codes to be used and pre... administrative procedures for national tests, including test codes to be used and pre-test outreach. B. Summary... test codes to be used and pre-test outreach, the Commission has instructed the Bureau to factor in the...
Afzal, Naveed; Sohn, Sunghwan; Abram, Sara; Scott, Christopher G; Chaudhry, Rajeev; Liu, Hongfang; Kullo, Iftikhar J; Arruda-Olson, Adelaide M
2017-06-01
Lower extremity peripheral arterial disease (PAD) is highly prevalent and affects millions of individuals worldwide. We developed a natural language processing (NLP) system for automated ascertainment of PAD cases from clinical narrative notes and compared the performance of the NLP algorithm with billing code algorithms, using ankle-brachial index test results as the gold standard. We compared the performance of the NLP algorithm to (1) results of gold standard ankle-brachial index; (2) previously validated algorithms based on relevant International Classification of Diseases, Ninth Revision diagnostic codes (simple model); and (3) a combination of International Classification of Diseases, Ninth Revision codes with procedural codes (full model). A dataset of 1569 patients with PAD and controls was randomly divided into training (n = 935) and testing (n = 634) subsets. We iteratively refined the NLP algorithm in the training set including narrative note sections, note types, and service types, to maximize its accuracy. In the testing dataset, when compared with both simple and full models, the NLP algorithm had better accuracy (NLP, 91.8%; full model, 81.8%; simple model, 83%; P < .001), positive predictive value (NLP, 92.9%; full model, 74.3%; simple model, 79.9%; P < .001), and specificity (NLP, 92.5%; full model, 64.2%; simple model, 75.9%; P < .001). A knowledge-driven NLP algorithm for automatic ascertainment of PAD cases from clinical notes had greater accuracy than billing code algorithms. Our findings highlight the potential of NLP tools for rapid and efficient ascertainment of PAD cases from electronic health records to facilitate clinical investigation and eventually improve care by clinical decision support. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
Extremely accurate sequential verification of RELAP5-3D
Mesina, George L.; Aumiller, David L.; Buschman, Francis X.
2015-11-19
Large computer programs like RELAP5-3D solve complex systems of governing, closure and special process equations to model the underlying physics of nuclear power plants. Further, these programs incorporate many other features for physics, input, output, data management, user-interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. For RELAP5-3D, verification and validation are restricted to nuclear power plant applications. Verification means ensuring that the program is built right by checking that it meets its design specifications, comparing coding to algorithms and equations and comparing calculations against analytical solutions and the method of manufactured solutions. Sequential verification performs these comparisons initially, but thereafter only compares code calculations between consecutive code versions to demonstrate that no unintended changes have been introduced. Recently, an automated, highly accurate sequential verification method has been developed for RELAP5-3D. The method also makes it possible to test that no unintended consequences result from code development in the following code capabilities: repeating a timestep advancement, continuing a run from a restart file, multiple cases in a single code execution, and modes of coupled/uncoupled operation. In conclusion, mathematical analyses of the adequacy of the checks used in the comparisons are provided.
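A hedged sketch of the sequential-verification step is given below: archived results from the previous code version are compared element-wise against the new version's results, and any drift beyond a tight tolerance is flagged. The file format, tolerance and function name are illustrative assumptions, not RELAP5-3D internals.

import numpy as np

def sequential_verify(prev_csv, new_csv, rtol=1e-12, atol=0.0):
    # Compare the new code version's results against the archived results of the
    # previous version; differences beyond tolerance signal an unintended change.
    prev = np.loadtxt(prev_csv, delimiter=",")
    new = np.loadtxt(new_csv, delimiter=",")
    if prev.shape != new.shape:
        return False, "result arrays differ in shape"
    worst = float(np.max(np.abs(prev - new))) if prev.size else 0.0
    return np.allclose(prev, new, rtol=rtol, atol=atol), f"max abs difference = {worst:.3e}"

# Hypothetical usage:
# ok, msg = sequential_verify("results_v4.3.4.csv", "results_v4.3.5.csv")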
Ogunrin, Olubunmi A; Daniel, Folasade; Ansa, Victor
2016-12-01
Responsibility for protection of research participants from harm and exploitation rests on Research Ethics Committees and principal investigators. The Nigerian National Code of Health Research Ethics defines the responsibilities of stakeholders in research, so knowledge of it among researchers will likely aid the ethical conduct of research. The levels of awareness and knowledge of the Code among biomedical researchers in southern Nigerian research institutions were assessed. Four institutions were selected using a stratified random sampling technique. Research participants were selected by purposive sampling and completed a pre-tested structured questionnaire. A total of 102 biomedical researchers completed the questionnaires. Thirty percent of the participants were aware of the National Code though 64% had attended at least one training seminar in research ethics. Twenty-five percent had a fairly acceptable knowledge (scores 50%-74%) and 10% had excellent knowledge of the code (score ≥75%). Ninety-five percent expressed intentions to learn more about the National Code and agreed that it is highly relevant to the ethical conduct of research. Awareness and knowledge of the Code were found to be very limited among biomedical researchers in southern Nigeria. There is a need to improve awareness and knowledge through ethics seminars and training. Use of existing Nigeria-specific online training resources is also encouraged.
Code of Federal Regulations, 2011 CFR
2011-10-01
... design pressure in the ASME Code, Section VIII (IBR, see § 171.7 of this subchapter). Holding time is the... 49 Transportation 3 2011-10-01 2011-10-01 false Requirements for the design, construction... FOR PACKAGINGS Specifications for Portable Tanks § 178.277 Requirements for the design, construction...
Code of Federal Regulations, 2012 CFR
2012-10-01
... design pressure in the ASME Code, Section VIII (IBR, see § 171.7 of this subchapter). Holding time is the... 49 Transportation 3 2012-10-01 2012-10-01 false Requirements for the design, construction... FOR PACKAGINGS Specifications for Portable Tanks § 178.277 Requirements for the design, construction...
Code of Federal Regulations, 2014 CFR
2014-10-01
... design pressure in the ASME Code, Section VIII (IBR, see § 171.7 of this subchapter). Holding time is the... 49 Transportation 3 2014-10-01 2014-10-01 false Requirements for the design, construction... FOR PACKAGINGS Specifications for Portable Tanks § 178.277 Requirements for the design, construction...
Code of Federal Regulations, 2010 CFR
2010-10-01
... design pressure in the ASME Code, Section VIII (IBR, see § 171.7 of this subchapter). Holding time is the... 49 Transportation 2 2010-10-01 2010-10-01 false Requirements for the design, construction... FOR PACKAGINGS Specifications for Portable Tanks § 178.277 Requirements for the design, construction...
Code of Federal Regulations, 2013 CFR
2013-10-01
... design pressure in the ASME Code, Section VIII (IBR, see § 171.7 of this subchapter). Holding time is the... 49 Transportation 3 2013-10-01 2013-10-01 false Requirements for the design, construction... FOR PACKAGINGS Specifications for Portable Tanks § 178.277 Requirements for the design, construction...
ERIC Educational Resources Information Center
Linebarger, Deborah L.; Piotrowski, Jessica Taylor
2010-01-01
Educational TV has been consistently linked to children's learning. In this research, educational TV characteristics were identified, coded, and tested for their influence on children's program-specific comprehension and vocabulary outcomes. Study 1 details a content analysis of TV features including a program's macrostructure (i.e., narrative or…
Labor Market Frictions and Production Efficiency in Public Schools. Working Paper 163
ERIC Educational Resources Information Center
Kim, Dongwoo; Koedel, Cory; Ni, Shawn; Podgursky, Michael
2016-01-01
State-specific licensing policies and pension plans create mobility costs for educators who cross state lines. We empirically test whether these costs affect production in schools--a hypothesis that follows directly from economic theory on labor frictions--using geo-coded data from the lower-48 states. We find that achievement is lower in…
Concrete Finisher: Apprenticeship Course Outline. Apprenticeship and Industry Training. 4805.2
ERIC Educational Resources Information Center
Alberta Advanced Education, 2005
2005-01-01
The graduate of the Concrete Finisher apprenticeship program is a certified journeyperson who will be able to: (1) perform tests to confirm concrete quality; (2) interpret building codes, plans and specifications as they apply to the trade; (3) place and finish concrete in a professional manner; (4) cut, patch, maintain and repair concrete…
Computerized scoring algorithms for the Autobiographical Memory Test.
Takano, Keisuke; Gutenbrunner, Charlotte; Martens, Kris; Salmon, Karen; Raes, Filip
2018-02-01
Reduced specificity of autobiographical memories is a hallmark of depressive cognition. Autobiographical memory (AM) specificity is typically measured by the Autobiographical Memory Test (AMT), in which respondents are asked to describe personal memories in response to emotional cue words. Due to this free descriptive responding format, the AMT relies on experts' hand scoring for subsequent statistical analyses. This manual coding potentially impedes research activities in big data analytics such as large epidemiological studies. Here, we propose computerized algorithms to automatically score AM specificity for the Dutch (adult participants) and English (youth participants) versions of the AMT by using natural language processing and machine learning techniques. The algorithms showed reliable performances in discriminating specific and nonspecific (e.g., overgeneralized) autobiographical memories in independent testing data sets (area under the receiver operating characteristic curve > .90). Furthermore, outcome values of the algorithms (i.e., decision values of support vector machines) showed a gradient across similar (e.g., specific and extended memories) and different (e.g., specific memory and semantic associates) categories of AMT responses, suggesting that, for both adults and youth, the algorithms well capture the extent to which a memory has features of specific memories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
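To illustrate one plausible shape of such an algorithm, the sketch below trains a bag-of-words support vector classifier whose decision value grades how "specific" a memory description looks; the four training sentences and labels are invented, and the published scoring algorithms rely on far richer NLP features and large hand-coded corpora.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented training responses: 1 = specific memory, 0 = nonspecific (categoric/extended).
texts = [
    "last tuesday I missed the bus to my sister's graduation",
    "the afternoon we got lost hiking near the old quarry",
    "I always feel nervous before exams",
    "summer holidays at my grandparents' house",
]
labels = [1, 1, 0, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(texts, labels)

new_response = "the morning my father taught me to ride a bike"
# The signed decision value plays the role of a graded specificity score.
print(clf.predict([new_response]), clf.decision_function([new_response]))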
Multiloop Integral System Test (MIST): MIST Facility Functional Specification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Habib, T F; Koksal, C G; Moskal, T E
1991-04-01
The Multiloop Integral System Test (MIST) is part of a multiphase program started in 1983 to address small-break loss-of-coolant accidents (SBLOCAs) specific to Babcock and Wilcox designed plants. MIST is sponsored by the US Nuclear Regulatory Commission, the Babcock and Wilcox Owners Group, the Electric Power Research Institute, and Babcock and Wilcox. The unique features of the Babcock and Wilcox design, specifically the hot leg U-bends and steam generators, prevented the use of existing integral system data or existing integral facilities to address the thermal-hydraulic SBLOCA questions. MIST was specifically designed and constructed for this program, and an existing facility -- the Once Through Integral System (OTIS) -- was also used. Data from MIST and OTIS are used to benchmark the adequacy of system codes, such as RELAP5 and TRAC, for predicting abnormal plant transients. The MIST Functional Specification documents as-built design features, dimensions, instrumentation, and test approach. It also presents the scaling basis for the facility and serves to define the scope of work for the facility design and construction. 13 refs., 112 figs., 38 tabs.
Overview of the relevant CFD work at Thiokol Corporation
NASA Technical Reports Server (NTRS)
Chwalowski, Pawel; Loh, Hai-Tien
1992-01-01
An in-house developed proprietary advanced computational fluid dynamics code called SHARP (Trademark) is a primary tool for many flow simulations and design analyses. The SHARP code is a time dependent, two dimensional (2-D) axisymmetric numerical solution technique for the compressible Navier-Stokes equations. The solution technique in SHARP uses a vectorizable implicit, second order accurate in time and space, finite volume scheme based on an upwind flux-difference splitting of a Roe-type approximated Riemann solver, Van Leer's flux vector splitting, and a fourth order artificial dissipation scheme with a preconditioning to accelerate the flow solution. Turbulence is simulated by an algebraic model, and ultimately the kappa-epsilon model. Some other capabilities of the code are 2-D two-phase Lagrangian particle tracking and cell blockages. Extensive development and testing has been conducted on the 3-D version of the code with flow, combustion, and turbulence interactions. The emphasis here is on the specific applications of SHARP in Solid Rocket Motor design. Information is given in viewgraph form.
Doebling, Scott William
2016-10-22
This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.
Schmoock, Gernot; Elschner, Mandy; Sprague, Lisa D
2015-03-07
A frame-shift mutation in the flagellum motor gene motB coding for the chemotaxis MotB protein of Burkholderia mallei has been utilized to design a conventional duplex PCR assay with fluorescent labelled primers. Species specificity was tested with a panel of 13 Burkholderia type strains. A total of 41 B. mallei field strains, 36 B. pseudomallei field strains, and 1 B. thailandensis field strain from different geographic regions were tested and correctly identified. Testing of 55 non-Burkholderia bacterial species revealed 100% specificity of the assay. The minimum detection limit was 1 pg DNA or 160 GE for B. mallei and 130 GE for B. pseudomallei, respectively. This assay enables the clear distinction between B. mallei and B. pseudomallei/B. thailandensis.
Instrument Systems Analysis and Verification Facility (ISAVF) users guide
NASA Technical Reports Server (NTRS)
Davis, J. F.; Thomason, J. O.; Wolfgang, J. L.
1985-01-01
The ISAVF facility is primarily an interconnected system of computers, special purpose real time hardware, and associated generalized software systems, which will permit the Instrument System Analysts, Design Engineers and Instrument Scientists to perform trade-off studies, specification development, instrument modeling, and verification of the instrument hardware performance. It is not the intent of the ISAVF to duplicate or replace existing special purpose facilities such as the Code 710 Optical Laboratories or the Code 750 Test and Evaluation facilities. The ISAVF will provide data acquisition and control services for these facilities, as needed, using remote computer stations attached to the main ISAVF computers via dedicated communication lines.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solovyev, V.V.; Salamov, A.A.; Lawrence, C.B.
1994-12-31
Discriminant analysis is applied to the problem of recognizing 5'-, internal and 3'-exons in human DNA sequences. Specific recognition functions were developed for revealing exons of particular types. The method is based on a splice site prediction algorithm that uses the linear Fisher discriminant to combine information about significant triplet frequencies of various functional parts of splice site regions and oligonucleotide preferences in protein coding and intron regions. The accuracy of our splice site recognition function is about 97%. A discriminant function for 5'-exon prediction includes the hexanucleotide composition of the upstream region, triplet composition around the ATG codon, ORF coding potential, donor splice site potential and composition of the downstream intron region. For internal exon prediction, we combine in a discriminant function the characteristics describing the 5'-intron region, donor splice site, coding region, acceptor splice site and 3'-intron region for each open reading frame flanked by GT and AG base pairs. The accuracy of precise internal exon recognition on a test set of 451 exon and 246693 pseudoexon sequences is 77% with a specificity of 79% and a level of pseudoexon ORF prediction of 99.96%. The recognition quality computed at the level of individual nucleotides is 89% for exon sequences and 98% for intron sequences. A discriminant function for 3'-exon prediction includes the octanucleotide composition of the upstream intron region, triplet composition around the stop codon, ORF coding potential, acceptor splice site potential and hexanucleotide composition of the downstream region. We unite these three discriminant functions in the exon-predicting program FEX (find exons). FEX exactly predicts 70% of 1016 exons from the test set of 181 complete genes with a specificity of 73%, and 89% of exons are exactly or partially predicted. On average, 85% of nucleotides were predicted accurately with a specificity of 91%.
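A toy analogue of this approach is sketched below: a linear (Fisher-type) discriminant fitted to triplet-frequency vectors of synthetic sequences. The sequences, GC-bias labels and feature set are invented for illustration, whereas the FEX functions combine many additional signals (splice-site scores, ORF potential, flanking composition).

from itertools import product
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

TRIPLETS = ["".join(p) for p in product("ACGT", repeat=3)]

def triplet_freqs(seq):
    # Sliding-window triplet frequencies, a simple compositional feature vector.
    kmers = [seq[i:i + 3] for i in range(len(seq) - 2)]
    counts = np.array([kmers.count(t) for t in TRIPLETS], dtype=float)
    return counts / max(counts.sum(), 1.0)

rng = np.random.default_rng(1)

def random_seq(gc):
    # Synthetic sequence with a chosen GC content (stand-in for exons/pseudoexons).
    p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]
    return "".join(rng.choice(list("ACGT"), size=120, p=p))

X = np.array([triplet_freqs(random_seq(0.6)) for _ in range(100)]
             + [triplet_freqs(random_seq(0.4)) for _ in range(100)])
y = np.array([1] * 100 + [0] * 100)     # 1 = "exon-like", 0 = "pseudoexon-like"

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))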
Not all order memory is equal: Test demands reveal dissociations in memory for sequence information.
Jonker, Tanya R; MacLeod, Colin M
2017-02-01
Remembering the order of a sequence of events is a fundamental feature of episodic memory. Indeed, a number of formal models represent temporal context as part of the memory system, and memory for order has been researched extensively. Yet, the nature of the code(s) underlying sequence memory is still relatively unknown. Across 4 experiments that manipulated encoding task, we found evidence for 3 dissociable facets of order memory. Experiment 1 introduced a test requiring a judgment of which of 2 alternatives had immediately followed a word during encoding. This measure revealed better retention of interitem associations following relational encoding (silent reading) than relatively item-specific encoding (judging referent size), a pattern consistent with that observed in previous research using order reconstruction tests. In sharp contrast, Experiment 2 demonstrated the reverse pattern: Memory for the studied order of 2 sequentially presented items was actually better following item-specific encoding than following relational encoding. Experiment 3 reproduced this dissociation in a single experiment using both tests. Experiment 4 extended these findings by further dissociating the roles of relational encoding and item strength in the 2 tests. Taken together, these results indicate that memory for event sequence is influenced by (a) interitem associations, (b) the emphasized directionality of an association, and (c) an item's strength independent of other items. Memory for order is more complicated than has been portrayed in theories of memory and its nuances should be carefully considered when designing tests and models of temporal and relational memory. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Shiff, Natalie Jane; Oen, Kiem; Rabbani, Rasheda; Lix, Lisa M
2017-09-01
We validated case ascertainment algorithms for juvenile idiopathic arthritis (JIA) in the provincial health administrative databases of Manitoba, Canada. A population-based pediatric rheumatology clinical database from April 1st 1980 to March 31st 2012 was used to test case definitions in individuals diagnosed at ≤15 years of age. The case definitions varied the number of diagnosis codes (1, 2, or 3), time frame (1, 2 or 3 years), time between diagnoses (ever, >1 day, or ≥8 weeks), and physician specialty. Positive predictive value (PPV), sensitivity, and specificity with 95% confidence intervals (CIs) are reported. A case definition of 1 hospitalization or ≥2 diagnoses in 2 years by any provider ≥8 weeks apart using diagnosis codes for rheumatoid arthritis and ankylosing spondylitis produced a sensitivity of 89.2% (95% CI 86.8, 91.6), specificity of 86.3% (95% CI 83.0, 89.6), and PPV of 90.6% (95% CI 88.3, 92.9) when seronegative enthesopathy and arthropathy (SEA) was excluded as JIA; and sensitivity of 88.2% (95% CI 85.7, 90.7), specificity of 90.4% (95% CI 87.5, 93.3), and PPV of 93.9% (95% CI 92.0, 95.8) when SEA was included as JIA. This study validates case ascertainment algorithms for JIA in Canadian administrative health data using diagnosis codes for both rheumatoid arthritis (RA) and ankylosing spondylitis, to better reflect current JIA classification than codes for RA alone. Researchers will be able to use these results to define cohorts for population-based studies.
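As a hedged illustration, the pandas sketch below applies a case definition of the same shape (one hospitalization, or two physician claims within two years at least eight weeks apart); the column names and claims are invented and do not reflect the Manitoba data layout.

import pandas as pd

# Invented claims records; "setting" stands in for the administrative service field.
claims = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2, 3],
    "service_date": pd.to_datetime([
        "2010-01-05", "2010-04-20", "2011-01-10",
        "2010-02-01", "2010-02-15", "2010-06-01",
    ]),
    "setting": ["physician", "physician", "physician",
                "physician", "physician", "hospital"],
})

def meets_definition(person_claims, window_days=730, gap_days=56):
    # One hospitalization with a qualifying code counts outright.
    if (person_claims["setting"] == "hospital").any():
        return True
    # Otherwise look for two physician claims >= 8 weeks apart within 2 years.
    dates = person_claims["service_date"].sort_values().tolist()
    for i, d1 in enumerate(dates):
        for d2 in dates[i + 1:]:
            if gap_days <= (d2 - d1).days <= window_days:
                return True
    return False

print(claims.groupby("person_id").apply(meets_definition))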
Influence of rubbing on rotor dynamics, part 2
NASA Technical Reports Server (NTRS)
Muszynska, Agnes; Bently, Donald E.; Franklin, Wesley D.; Hayashida, Robert D.; Kingsley, Lori M.; Curry, Arthur E.
1989-01-01
Rotor dynamic behavior depends considerably on the extent to which the specific physical phenomena accompanying rotor rubbing against the stator are involved. The experimental results of rotor-to-stator rubbing contact are analyzed. A computer code for obtaining numerical calculations of rotor-to-stator rubbing system dynamic responses is described, and computer-generated results are provided. The reduced dynamic data from a High Pressure Fuel Turbopump (HPFTP) hot-fire test are given. The results support some significant conclusions. Information is provided on the electronic instrumentation used in the experimental testing.
Applying Quantum Monte Carlo to the Electronic Structure Problem
NASA Astrophysics Data System (ADS)
Powell, Andrew D.; Dawes, Richard
2016-06-01
Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC), was tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
Fracture mechanics life analytical methods verification testing
NASA Technical Reports Server (NTRS)
Favenesi, J. A.; Clemons, T. G.; Riddell, W. T.; Ingraffea, A. R.; Wawrzynek, P. A.
1994-01-01
The objective was to evaluate NASCRAC (trademark) version 2.0, a second generation fracture analysis code, for verification and validity. NASCRAC was evaluated using a combination of comparisons to the literature, closed-form solutions, numerical analyses, and tests. Several limitations and minor errors were detected. Additionally, a number of major flaws were discovered. These major flaws were generally due to application of a specific method or theory, not due to programming logic. Results are presented for the following program capabilities: K versus a, J versus a, crack opening area, life calculation due to fatigue crack growth, tolerable crack size, proof test logic, tearing instability, creep crack growth, crack transitioning, crack retardation due to overloads, and elastic-plastic stress redistribution. It is concluded that the code is an acceptable fracture tool for K solutions of simplified geometries, for a limited number of J and crack opening area solutions, and for fatigue crack propagation with the Paris equation and constant amplitude loads when the Paris equation is applicable.
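Since fatigue-life calculation with the Paris equation is one of the capabilities listed, a minimal sketch of that integration is shown below; the material constants, geometry factor and stresses are made-up numbers, and NASCRAC's own solution procedure is certainly more elaborate.

import numpy as np

C, m = 1.0e-12, 3.0        # Paris constants (m/cycle, MPa*sqrt(m)); made-up values
Y = 1.12                   # geometry factor, treated as constant here
delta_sigma = 100.0        # constant-amplitude stress range, MPa
a, a_final = 0.001, 0.02   # initial and tolerable crack sizes, m

cycles, da = 0.0, 1.0e-5   # fixed-step integration over crack length
while a < a_final:
    delta_K = Y * delta_sigma * np.sqrt(np.pi * a)   # stress-intensity range
    dadN = C * delta_K ** m                          # Paris law: da/dN = C * (dK)^m
    cycles += da / dadN
    a += da

print(f"estimated fatigue life: {cycles:,.0f} cycles")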
Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne
2008-03-01
The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.
Code of Federal Regulations, 2014 CFR
2014-10-01
... standards of safety, decency, and sanitation and in conformity with applicable codes, specifications and standards. (b) Applicable codes, specifications, and standards shall include any disaster resistant building code that meets the minimum requirements of the National Flood Insurance Program (NFIP) as well as...
Code of Federal Regulations, 2011 CFR
2011-10-01
... standards of safety, decency, and sanitation and in conformity with applicable codes, specifications and standards. (b) Applicable codes, specifications, and standards shall include any disaster resistant building code that meets the minimum requirements of the National Flood Insurance Program (NFIP) as well as...
Code of Federal Regulations, 2012 CFR
2012-10-01
... standards of safety, decency, and sanitation and in conformity with applicable codes, specifications and standards. (b) Applicable codes, specifications, and standards shall include any disaster resistant building code that meets the minimum requirements of the National Flood Insurance Program (NFIP) as well as...
Code of Federal Regulations, 2013 CFR
2013-10-01
... standards of safety, decency, and sanitation and in conformity with applicable codes, specifications and standards. (b) Applicable codes, specifications, and standards shall include any disaster resistant building code that meets the minimum requirements of the National Flood Insurance Program (NFIP) as well as...
Code of Federal Regulations, 2010 CFR
2010-10-01
... standards of safety, decency, and sanitation and in conformity with applicable codes, specifications and standards. (b) Applicable codes, specifications, and standards shall include any disaster resistant building code that meets the minimum requirements of the National Flood Insurance Program (NFIP) as well as...
HDTS 2017.1 Testing and Verification Document
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteside, T.
2017-12-01
This report is a continuation of the series of Hunter Dose Tracking System (HDTS) Quality Assurance documents including (Foley and Powell, 2010; Dixon, 2012; Whiteside, 2017b). In this report we have created a suite of automated test cases and a system to analyze the results of those tests as well as documented the methodology to ensure the field system performs within specifications. The software test cases cover all of the functions and interactions of functions that are practical to test. With the developed framework, if software defects are discovered, it will be easy to create one or more test cases to reproduce the defect and ensure that code changes correct the defect.
Is phonology bypassed in normal or dyslexic development?
Pennington, B F; Lefly, D L; Van Orden, G C; Bookman, M O; Smith, S D
1987-01-01
A pervasive assumption in most accounts of normal reading and spelling development is that phonological coding is important early in development but is subsequently superseded by faster, orthographic coding which bypasses phonology. We call this assumption, which derives from dual process theory, the developmental bypass hypothesis. The present study tests four specific predictions of the developmental bypass hypothesis by comparing dyslexics and nondyslexics from the same families in a cross-sectional design. The four predictions are: 1) that phonological coding skill develops early in normal readers and soon reaches asymptote, whereas orthographic coding skill has a protracted course of development; 2) that the correlation of adult reading or spelling performance with phonological coding skill is considerably less than the correlation with orthographic coding skill; 3) that dyslexics who are mainly deficient in phonological coding skill should be able to bypass this deficit and eventually close the gap in reading and spelling performance; and 4) that the greatest differences between dyslexics and developmental controls on measures of phonological coding skill should be observed early rather than late in development. None of the four predictions of the developmental bypass hypothesis were upheld. Phonological coding skill continued to develop in nondyslexics until adulthood. It accounted for a substantial (32-53 percent) portion of the variance in reading and spelling performance in adult nondyslexics, whereas orthographic coding skill did not account for a statistically reliable portion of this variance. The dyslexics differed little across age in phonological coding skill, but made linear progress in orthographic coding skill, surpassing spelling-age (SA) controls by adulthood. Nonetheless, they did not close the gap in reading and spelling performance. Finally, dyslexics were significantly worse than SA (and Reading Age [RA]) controls in phonological coding skill only in adulthood.
Vo, Elaine; Davila, Jessica A; Hou, Jason; Hodge, Krystle; Li, Linda T; Suliburk, James W; Kao, Lillian S; Berger, David H; Liang, Mike K
2013-08-01
Large databases provide a wealth of information for researchers, but identifying patient cohorts often relies on the use of current procedural terminology (CPT) codes. In particular, studies of stoma surgery have been limited by the accuracy of CPT codes in identifying and differentiating ileostomy procedures from colostomy procedures. It is important to make this distinction because the prevalence of complications associated with stoma formation and reversal differ dramatically between types of stoma. Natural language processing (NLP) is a process that allows text-based searching. The Automated Retrieval Console is an NLP-based software that allows investigators to design and perform NLP-assisted document classification. In this study, we evaluated the role of CPT codes and NLP in differentiating ileostomy from colostomy procedures. Using CPT codes, we conducted a retrospective study that identified all patients undergoing a stoma-related procedure at a single institution between January 2005 and December 2011. All operative reports during this time were reviewed manually to abstract the following variables: formation or reversal and ileostomy or colostomy. Sensitivity and specificity for validation of the CPT codes against the mastery surgery schedule were calculated. Operative reports were evaluated by use of NLP to differentiate ileostomy- from colostomy-related procedures. Sensitivity and specificity for identifying patients with ileostomy or colostomy procedures were calculated for CPT codes and NLP for the entire cohort. CPT codes performed well in identifying stoma procedures (sensitivity 87.4%, specificity 97.5%). A total of 664 stoma procedures were identified by CPT codes between 2005 and 2011. The CPT codes were adequate in identifying stoma formation (sensitivity 97.7%, specificity 72.4%) and stoma reversal (sensitivity 74.1%, specificity 98.7%), but they were inadequate in identifying ileostomy (sensitivity 35.0%, specificity 88.1%) and colostomy (75.2% and 80.9%). NLP performed with greater sensitivity, specificity, and accuracy than CPT codes in identifying stoma procedures and stoma types. Major differences where NLP outperformed CPT included identifying ileostomy (specificity 95.8%, sensitivity 88.3%, and accuracy 91.5%) and colostomy (97.6%, 90.5%, and 92.8%, respectively). CPT codes can identify effectively patients who have had stoma procedures and are adequate in distinguishing between formation and reversal; however, CPT codes cannot differentiate ileostomy from colostomy. NLP can be used to differentiate between ileostomy- and colostomy-related procedures. The role of NLP in conjunction with electronic medical records in data retrieval warrants further investigation. Published by Mosby, Inc.
Development and Assessment of CTF for Pin-resolved BWR Modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K; Wysocki, Aaron J; Collins, Benjamin S
2017-01-01
CTF is the modernized and improved version of the subchannel code, COBRA-TF. It has been adopted by the Consortium for Advanced Simulation of Light Water Reactors (CASL) for subchannel analysis applications and thermal hydraulic feedback calculations in the Virtual Environment for Reactor Applications Core Simulator (VERA-CS). CTF is now jointly developed by Oak Ridge National Laboratory and North Carolina State University. Until now, CTF has been used for pressurized water reactor modeling and simulation in CASL, but in the future it will be extended to boiling water reactor designs. This required development activities to integrate the code into the VERA-CS workflow and to make it more efficient for full-core, pin-resolved simulations. Additionally, there is a significant emphasis on producing high quality tools that follow a regimented software quality assurance plan in CASL. Part of this plan involves performing validation and verification assessments on the code that are easily repeatable and tied to specific code versions. This work has resulted in the CTF validation and verification matrix being expanded to include several two-phase flow experiments, including the General Electric 3 x 3 facility and the BWR Full-Size Fine Mesh Bundle Tests (BFBT). Comparisons with both experimental databases are reasonable, but the BFBT analysis reveals a tendency of CTF to overpredict void, especially in the slug flow regime. The execution of these tests is fully automated, analysis is documented in the CTF Validation and Verification manual, and the tests have become part of the CASL continuous regression testing system. This paper will summarize these recent developments and some of the two-phase assessments that have been performed on CTF.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, M.E.
1997-12-05
This V&V report includes analysis of two revisions of the DMS [data management system] System Requirements Specification (SRS) and the Preliminary System Design Document (PSDD); the source code for the DMS Communication Module (DMSCOM) messages; the source code for selected DMS screens; and the code for the BWAS simulator. BDM Federal analysts used a series of matrices to: compare the requirements in the SRS to the specifications found in the System Design Document (SDD), to ensure the design supports the business functions; compare the discrete parts of the SDD with each other, to ensure that the design is consistent and cohesive; compare the source code of the DMS Communication Module with the specifications, to ensure that the resultant messages will support the design; compare the source code of selected screens to the specifications, to ensure that the resultant system screens will support the design; and compare the source code of the BWAS simulator with the requirements to interface with DMS messages and data transfers relating to BWAS operations.
Immunochromatographic diagnostic test analysis using Google Glass.
Feng, Steve; Caire, Romain; Cortazar, Bingen; Turan, Mehmet; Wong, Andrew; Ozcan, Aydogan
2014-03-25
We demonstrate a Google Glass-based rapid diagnostic test (RDT) reader platform capable of qualitative and quantitative measurements of various lateral flow immunochromatographic assays and similar biomedical diagnostic tests. Using a custom-written Glass application and without any external hardware attachments, one or more RDTs labeled with Quick Response (QR) code identifiers are simultaneously imaged using the built-in camera of the Google Glass through its hands-free, voice-controlled interface and digitally transmitted to a server for digital processing. The acquired JPEG images are automatically processed to locate all the RDTs and, for each RDT, to produce a quantitative diagnostic result, which is returned to the Google Glass (i.e., the user) and also stored on a central server along with the RDT image, QR code, and other related information (e.g., demographic data). The same server also provides a dynamic spatiotemporal map and real-time statistics for uploaded RDT results accessible through Internet browsers. We tested this Google Glass-based diagnostic platform using qualitative (i.e., yes/no) human immunodeficiency virus (HIV) and quantitative prostate-specific antigen (PSA) tests. For the quantitative RDTs, we measured activated tests at various concentrations ranging from 0 to 200 ng/mL for free and total PSA. This wearable RDT reader platform running on Google Glass combines a hands-free sensing and image capture interface with powerful servers running our custom image processing codes, and it can be quite useful for real-time spatiotemporal tracking of various diseases and personal medical conditions, providing a valuable tool for epidemiology and mobile health.
Sensory feedback in prosthetics: a standardized test bench for closed-loop control.
Dosen, Strahinja; Markovic, Marko; Hartmann, Cornelia; Farina, Dario
2015-03-01
Closing the control loop by providing sensory feedback to the user of a prosthesis is an important challenge, with major impact on the future of prosthetics. Developing and comparing closed-loop systems is a difficult task, since there are many different methods and technologies that can be used to implement each component of the system. Here, we present a test bench developed in Matlab Simulink for configuring and testing the closed-loop human control system in standardized settings. The framework comprises a set of connected generic blocks with normalized inputs and outputs, which can be customized by selecting specific implementations from a library of predefined components. The framework is modular and extensible and it can be used to configure, compare and test different closed-loop system prototypes, thereby guiding the development towards an optimal system configuration. The use of the test bench was demonstrated by investigating two important aspects of closed-loop control: performance of different electrotactile feedback interfaces (spatial versus intensity coding) during a pendulum stabilization task and feedforward methods (joystick versus myocontrol) for force control. The first experiment demonstrated that in the case of trained subjects the intensity coding might be superior to spatial coding. In the second experiment, the control of force was rather poor even with a stable and precise control interface (joystick), demonstrating that inherent characteristics of the prosthesis can be an important limiting factor when considering the overall effectiveness of the closed-loop control. The presented test bench is an important instrument for investigating different aspects of human manual control with sensory feedback.
Chishima, Takafumi; Iwakiri, Junichi
2018-01-01
It has been recently suggested that transposable elements (TEs) are re-used as functional elements of long non-coding RNAs (lncRNAs). This is supported by some examples such as the human endogenous retrovirus subfamily H (HERVH) elements contained within lncRNAs and expressed specifically in human embryonic stem cells (hESCs), as required to maintain hESC identity. There are at least two unanswered questions about all lncRNAs. How many TEs are re-used within lncRNAs? Are there any other TEs that affect tissue specificity of lncRNA expression? To answer these questions, we comprehensively identify TEs that are significantly related to tissue-specific expression levels of lncRNAs. We downloaded lncRNA expression data corresponding to normal human tissue from the Expression Atlas and transformed the data into tissue specificity estimates. Then, Fisher’s exact tests were performed to verify whether the presence or absence of TE-derived sequences influences the tissue specificity of lncRNA expression. Many TE–tissue pairs associated with tissue-specific expression of lncRNAs were detected, indicating that multiple TE families can be re-used as functional domains or regulatory sequences of lncRNAs. In particular, we found that the antisense promoter region of L1PA2, a LINE-1 subfamily, appears to act as a promoter for lncRNAs with placenta-specific expression. PMID:29315213
A proto-code of ethics and conduct for European nurse directors.
Stievano, Alessandro; De Marinis, Maria Grazia; Kelly, Denise; Filkins, Jacqueline; Meyenburg-Altwarg, Iris; Petrangeli, Mauro; Tschudin, Verena
2012-03-01
The proto-code of ethics and conduct for European nurse directors was developed as a strategic and dynamic document for nurse managers in Europe. It invites critical dialogue, reflective thinking about different situations, and the development of specific codes of ethics and conduct by nursing associations in different countries. The term proto-code is used for this document so that specifically country-orientated or organization-based and practical codes can be developed from it to guide professionals in more particular or situation-explicit reflection and values. The proto-code of ethics and conduct for European nurse directors was designed and developed by the European Nurse Directors Association's (ENDA) advisory team. This article gives short explanations of the code's preamble and two main parts: Nurse directors' ethical basis, and Principles of professional practice, which is divided into six specific points: competence, care, safety, staff, life-long learning and multi-sectorial working.
Methodology, status and plans for development and assessment of TUF and CATHENA codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luxat, J.C.; Liu, W.S.; Leung, R.K.
1997-07-01
An overview is presented of the Canadian two-fluid computer codes TUF and CATHENA with specific focus on the constraints imposed during development of these codes and the areas of application for which they are intended. Additionally a process for systematic assessment of these codes is described which is part of a broader, industry based initiative for validation of computer codes used in all major disciplines of safety analysis. This is intended to provide both the licensee and the regulator in Canada with an objective basis for assessing the adequacy of codes for use in specific applications. Although focused specifically on CANDU reactors, Canadian experience in developing advanced two-fluid codes to meet wide-ranging application needs while maintaining past investment in plant modelling provides a useful contribution to international efforts in this area.
Certifying Auto-Generated Flight Code
NASA Technical Reports Server (NTRS)
Denney, Ewen
2008-01-01
Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
Reggiani, Claudio; Coppens, Sandra; Sekhara, Tayeb; Dimov, Ivan; Pichon, Bruno; Lufin, Nicolas; Addor, Marie-Claude; Belligni, Elga Fabia; Digilio, Maria Cristina; Faletra, Flavio; Ferrero, Giovanni Battista; Gerard, Marion; Isidor, Bertrand; Joss, Shelagh; Niel-Bütschi, Florence; Perrone, Maria Dolores; Petit, Florence; Renieri, Alessandra; Romana, Serge; Topa, Alexandra; Vermeesch, Joris Robert; Lenaerts, Tom; Casimir, Georges; Abramowicz, Marc; Bontempi, Gianluca; Vilain, Catheline; Deconinck, Nicolas; Smits, Guillaume
2017-07-19
Tissue-specific integrative omics has the potential to reveal new genic elements important for developmental disorders. Two pediatric patients with global developmental delay and intellectual disability phenotype underwent array-CGH genetic testing, both showing a partial deletion of the DLG2 gene. From independent human and murine omics datasets, we combined copy number variations, histone modifications, developmental tissue-specific regulation, and protein data to explore the molecular mechanism at play. Integrating genomics, transcriptomics, and epigenomics data, we describe two novel DLG2 promoters and coding first exons expressed in human fetal brain. Their murine conservation and protein-level evidence allowed us to produce new DLG2 gene models for human and mouse. These new genic elements are deleted in 90% of 29 patients (public and in-house) showing partial deletion of the DLG2 gene. The patients' clinical characteristics expand the neurodevelopmental phenotypic spectrum linked to DLG2 gene disruption to cognitive and behavioral categories. While protein-coding genes are regarded as well known, our work shows that integration of multiple omics datasets can unveil novel coding elements. From a clinical perspective, our work demonstrates that two new DLG2 promoters and exons are crucial for the neurodevelopmental phenotypes associated with this gene. In addition, our work brings evidence for the lack of cross-annotation in human versus mouse reference genomes and nucleotide versus protein databases.
Eger, E; Pinel, P; Dehaene, S; Kleinschmidt, A
2015-05-01
Macaque electrophysiology has revealed neurons responsive to number in lateral (LIP) and ventral (VIP) intraparietal areas. Recently, fMRI pattern recognition revealed information discriminative of individual numbers in human parietal cortex but without precisely localizing the relevant sites or testing for subregions with different response profiles. Here, we defined the human functional equivalents of LIP (feLIP) and VIP (feVIP) using neurophysiologically motivated localizers. We applied multivariate pattern recognition to investigate whether both regions represent numerical information and whether number codes are position specific or invariant. In a delayed number comparison paradigm with laterally presented numerosities, parietal cortex discriminated between numerosities better than early visual cortex, and discrimination generalized across hemifields in parietal, but not early visual cortex. Activation patterns in the 2 parietal regions of interest did not differ in the coding of position-specific or position-independent number information, but in the expression of a numerical distance effect which was more pronounced in feLIP. Thus, the representation of number in parietal cortex is at least partially position invariant. Both feLIP and feVIP contain information about individual numerosities in humans, but feLIP hosts a coarser representation of numerosity than feVIP, compatible with either broader tuning or a summation code. © The Author 2013. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Autonomous Flying Controls Testbed
NASA Technical Reports Server (NTRS)
Motter, Mark A.
2005-01-01
The Flying Controls Testbed (FLiC) is a relatively small and inexpensive unmanned aerial vehicle developed specifically to test highly experimental flight control approaches. The most recent version of the FLiC is configured with 16 independent aileron segments, supports the implementation of C-coded experimental controllers, and is capable of fully autonomous flight from takeoff roll to landing, including flight test maneuvers. The test vehicle is basically a modified Army target drone, AN/FQM-117B, developed as part of a collaboration between the Aviation Applied Technology Directorate (AATD) at Fort Eustis, Virginia and NASA Langley Research Center. Several vehicles have been constructed and collectively have flown over 600 successful test flights.
Improving the accuracy of operation coding in surgical discharge summaries
Martinou, Eirini; Shouls, Genevieve; Betambeau, Nadine
2014-01-01
Procedural coding in surgical discharge summaries is extremely important; as well as communicating to healthcare staff which procedures have been performed, it also provides information that is used by the hospital's coding department. The OPCS code (Office of Population, Censuses and Surveys Classification of Surgical Operations and Procedures) is used to generate the tariff that allows the hospital to be reimbursed for the procedure. We felt that the OPCS coding on discharge summaries was often incorrect within our breast and endocrine surgery department. A baseline measurement over two months demonstrated that 32% of operations had been incorrectly coded, resulting in an incorrect tariff being applied and an estimated loss to the Trust of £17,000. We developed a simple but specific OPCS coding table in collaboration with the clinical coding team and breast surgeons that summarised all operations performed within our department. This table was disseminated across the team, specifically to the junior doctors who most frequently complete the discharge summaries. Re-audit showed 100% of operations were accurately coded, demonstrating the effectiveness of the coding table. We suggest that specifically designed coding tables be introduced across each surgical department to ensure accurate OPCS codes are used to produce better quality surgical discharge summaries and to ensure correct reimbursement to the Trust. PMID:26734286
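As an illustration of the intervention described above, a coding table can be as simple as a lookup from the operations a department performs to their agreed OPCS codes. The sketch below is a hedged illustration only: the operation names and code strings are invented placeholders, not verified OPCS-4 entries.

    # Hypothetical department-specific OPCS coding table; the codes below are placeholders.
    OPCS_TABLE = {
        "wide local excision of breast lesion": "PLACEHOLDER-001",
        "simple mastectomy": "PLACEHOLDER-002",
        "sentinel lymph node biopsy": "PLACEHOLDER-003",
    }

    def lookup_opcs(operation_name):
        """Return the agreed code for a named operation, or flag it for the coding team."""
        return OPCS_TABLE.get(operation_name.strip().lower(), "REFER TO CLINICAL CODING TEAM")

    print(lookup_opcs("Simple mastectomy"))  # -> PLACEHOLDER-002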
CFD Simulation of Liquid Rocket Engine Injectors
NASA Technical Reports Server (NTRS)
Farmer, Richard; Cheng, Gary; Chen, Yen-Sen; Garcia, Roberto (Technical Monitor)
2001-01-01
Detailed design issues associated with liquid rocket engine injectors and combustion chamber operation require CFD methodology which simulates highly three-dimensional, turbulent, vaporizing, and combusting flows. The primary utility of such simulations involves predicting multi-dimensional effects caused by specific injector configurations. SECA, Inc. and Engineering Sciences, Inc. have been developing appropriate computational methodology for NASA/MSFC for the past decade. CFD tools and computers have improved dramatically during this time period; however, the physical submodels used in these analyses must still remain relatively simple in order to produce useful results. Simulations of clustered coaxial and impinger injector elements for hydrogen and hydrocarbon fuels, which account for real fluid properties, are the immediate goal of this research. The spray combustion codes are based on the FDNS CFD code and are structured to represent homogeneous and heterogeneous spray combustion. The homogeneous spray model treats the flow as a continuum of multi-phase, multicomponent fluids which move without thermal or velocity lags between the phases. Two heterogeneous models were developed: (1) a volume-of-fluid (VOF) model which represents the liquid core of coaxial or impinger jets and their atomization and vaporization, and (2) a Blob model which represents the injected streams as a cloud of droplets the size of the injector orifice which subsequently exhibit particle interaction, vaporization, and combustion. All of these spray models are computationally intensive, but this is unavoidable to accurately account for the complex physics and combustion which are to be predicted. Work is currently in progress to parallelize these codes to improve their computational efficiency. These spray combustion codes were used to simulate the three test cases which are the subject of the 2nd International Workshop on Rocket Combustion Modeling. Such test cases are considered by these investigators to be very valuable for code validation because combustion kinetics, turbulence models and atomization models based on low-pressure experiments of hydrogen-air combustion do not adequately verify analytical or CFD submodels which are necessary to simulate rocket engine combustion. We wish to emphasize that the simulations which we prepared for this meeting are meant to test the accuracy of the approximations used in our general purpose spray combustion models, rather than represent a definitive analysis of each of the experiments which were conducted. Our goal is to accurately predict local temperatures and mixture ratios in rocket engines; hence predicting individual experiments is used only for code validation. To replace the conventional JANNAF standard axisymmetric finite-rate (TDK) computer code for performance prediction with CFD cases, such codes must possess two features. Firstly, they must be as easy to use and of comparable run times for conventional performance predictions. Secondly, they must provide more detailed predictions of the flowfields near the injector face. Specifically, they must accurately predict the convective mixing of injected liquid propellants in terms of the injector element configurations.
New Millenium Inflatable Structures Technology
NASA Technical Reports Server (NTRS)
Mollerick, Ralph
1997-01-01
Specific applications where inflatable technology can enable or enhance future space missions are tabulated. The applicability of the inflatable technology to large aperture infra-red astronomy missions is discussed. Space flight validation and risk reduction are emphasized along with the importance of analytical tools in deriving structurally sound concepts and performing optimizations using compatible codes. Deployment dynamics control, fabrication techniques, and system testing are addressed.
International News in the Canadian and American Press: A Comparative News Flow Study.
ERIC Educational Resources Information Center
Sparkes, Vernone M.; Robinson, Gertrude Joch
This study tested the power of "elite nation" factors (trade, population, and gross national product) to predict the amount of foreign news coverage for specific countries. A composite week for the first quarter of 1975 was randomly drawn, and ten Canadian and twenty-nine United States newspapers were coded for all news items reported on…
Code of Federal Regulations, 2011 CFR
2011-07-01
... building code that have been incorporated to limit destruction of records. The report should make specific... Association, and any testing or modeling or other sources used in the design. (b) NARA action. (1) NARA will... determination. Before any consultation, NARA may ask the agency for additional clarifying information. NARA will...
Code of Federal Regulations, 2013 CFR
2013-07-01
... building code that have been incorporated to limit destruction of records. The report should make specific... Association, and any testing or modeling or other sources used in the design. (b) NARA action. (1) NARA will... determination. Before any consultation, NARA may ask the agency for additional clarifying information. NARA will...
Code of Federal Regulations, 2014 CFR
2014-07-01
... building code that have been incorporated to limit destruction of records. The report should make specific... Association, and any testing or modeling or other sources used in the design. (b) NARA action. (1) NARA will... determination. Before any consultation, NARA may ask the agency for additional clarifying information. NARA will...
Code of Federal Regulations, 2012 CFR
2012-07-01
... building code that have been incorporated to limit destruction of records. The report should make specific... Association, and any testing or modeling or other sources used in the design. (b) NARA action. (1) NARA will... determination. Before any consultation, NARA may ask the agency for additional clarifying information. NARA will...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, S.A.
1997-07-01
The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current-generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to include in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.
NASA Technical Reports Server (NTRS)
Hall, William B.
1992-01-01
Verbal discussions during the biannual meeting of the Industry Advisory Committee for Carbon-phenolic constituent test methodology, which is constituted under the Solid Propulsion Integrity Program (SPIP), are addressed. The items on the agenda are: (1) NASA video tape library; (2) product code identification; (3) NMR progress; (4) IR and DMTA workshop; (5) aerospace database update; (6) M vision database demonstration; (7) constituent fingerprinting; (8) cured materials test development; (9) engineering needs for computer modeling; and (10) review action items. The materials prepared to support some of the oral presentations are also included in the Appendix.
OpenMP 4.5 Validation and Verification Suite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pophale, Swaroop S; Bernholdt, David E; Hernandez, Oscar R
2017-12-15
OpenMP, a directive-based programming API, introduces directives for accelerator devices that programmers are starting to use more frequently in production codes. To make sure OpenMP directives work correctly across architectures, it is critical to have a mechanism that tests for an implementation's conformance to the OpenMP standard. This testing process can uncover ambiguities in the OpenMP specification, which helps compiler developers and users make better use of the standard. We fill this gap through our validation and verification test suite that focuses on the offload directives available in OpenMP 4.5.
Villanueva, Pía; Nudel, Ron; Hoischen, Alexander; Fernández, María Angélica; Simpson, Nuala H; Gilissen, Christian; Reader, Rose H; Jara, Lillian; Echeverry, María Magdalena; Echeverry, Maria Magdalena; Francks, Clyde; Baird, Gillian; Conti-Ramsden, Gina; O'Hare, Anne; Bolton, Patrick F; Hennessy, Elizabeth R; Palomino, Hernán; Carvajal-Carmona, Luis; Veltman, Joris A; Cazier, Jean-Baptiste; De Barbieri, Zulema; Fisher, Simon E; Newbury, Dianne F
2015-03-01
Children affected by Specific Language Impairment (SLI) fail to acquire age appropriate language skills despite adequate intelligence and opportunity. SLI is highly heritable, but the understanding of underlying genetic mechanisms has proved challenging. In this study, we use molecular genetic techniques to investigate an admixed isolated founder population from the Robinson Crusoe Island (Chile), who are affected by a high incidence of SLI, increasing the power to discover contributory genetic factors. We utilize exome sequencing in selected individuals from this population to identify eight coding variants that are of putative significance. We then apply association analyses across the wider population to highlight a single rare coding variant (rs144169475, Minor Allele Frequency of 4.1% in admixed South American populations) in the NFXL1 gene that confers a nonsynonymous change (N150K) and is significantly associated with language impairment in the Robinson Crusoe population (p = 2.04 × 10⁻⁴, 8 variants tested). Subsequent sequencing of NFXL1 in 117 UK SLI cases identified four individuals with heterozygous variants predicted to be of functional consequence. We conclude that coding variants within NFXL1 confer an increased risk of SLI within a complex genetic model.
Independent Validation and Verification of automated information systems in the Department of Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunteman, W.J.; Caldwell, R.
1994-07-01
The Department of Energy (DOE) has established an Independent Validation and Verification (IV&V) program for all classified automated information systems (AIS) operating in compartmented or multi-level modes. The IV&V program was established in DOE Order 5639.6A and described in the manual associated with the Order. This paper describes the DOE IV&V program, the IV&V process and activities, the expected benefits from an IV&V, and the criteria and methodologies used during an IV&V. The first IV&V under this program was conducted on the Integrated Computing Network (ICN) at Los Alamos National Laboratory and several lessons learned are presented. The DOE IV&V program is based on the following definitions. An IV&V is defined as the use of expertise from outside an AIS organization to conduct validation and verification studies on a classified AIS. Validation is defined as the process of applying the specialized security test and evaluation procedures, tools, and equipment needed to establish acceptance for joint usage of an AIS by one or more departments or agencies and their contractors. Verification is the process of comparing two levels of an AIS specification for proper correspondence (e.g., security policy model with top-level specifications, top-level specifications with source code, or source code with object code).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kontogeorgakos, D.; Derstine, K.; Wright, A.
2013-06-01
The purpose of the TREAT reactor is to generate large transient neutron pulses in test samples without over-heating the core to simulate fuel assembly accident conditions. The power transients in the present HEU core are inherently self-limiting such that the core prevents itself from overheating even in the event of a reactivity insertion accident. The objective of this study was to support the assessment of the feasibility of the TREAT core conversion based on the present reactor performance metrics and the technical specifications of the HEU core. The LEU fuel assembly studied had the same overall design, materials (UO2 particles finely dispersed in graphite) and impurities content as the HEU fuel assembly. The Monte Carlo N-Particle code (MCNP) and the point kinetics code TREKIN were used in the analyses.
Pretest aerosol code comparisons for LWR aerosol containment tests LA1 and LA2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wright, A.L.; Wilson, J.H.; Arwood, P.C.
The Light-Water-Reactor (LWR) Aerosol Containment Experiments (LACE) are being performed in Richland, Washington, at the Hanford Engineering Development Laboratory (HEDL) under the leadership of an international project board and the Electric Power Research Institute. These tests have two objectives: (1) to investigate, at large scale, the inherent aerosol retention behavior in LWR containments under simulated severe accident conditions, and (2) to provide an experimental data base for validating aerosol behavior and thermal-hydraulic computer codes. Aerosol computer-code comparison activities are being coordinated at the Oak Ridge National Laboratory. For each of the six LACE tests, "pretest" calculations (for code-to-code comparisons) and "posttest" calculations (for code-to-test data comparisons) are being performed. The overall goals of the comparison effort are (1) to provide code users with experience in applying their codes to LWR accident-sequence conditions and (2) to evaluate and improve the code models.
Tests of Exoplanet Atmospheric Radiative Transfer Codes
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Challener, Ryan; DeLarme, Emerson; Cubillos, Patricio; Blecic, Jasmina; Foster, Austin; Garland, Justin
2016-10-01
Atmospheric radiative transfer codes are used both to predict planetary spectra and in retrieval algorithms to interpret data. Observational plans, theoretical models, and scientific results thus depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. In the process of writing our own code, we became aware of several others with artifacts of unknown origin and even outright errors in their spectra. We present a series of tests to verify atmospheric radiative-transfer codes. These include: simple, single-line line lists that, when combined with delta-function abundance profiles, should produce a broadened line that can be verified easily; isothermal atmospheres that should produce analytically-verifiable blackbody spectra at the input temperatures; and model atmospheres with a range of complexities that can be compared to the output of other codes. We apply the tests to our own code, Bayesian Atmospheric Radiative Transfer (BART) and to several other codes. The test suite is open-source software. We propose this test suite as a standard for verifying current and future radiative transfer codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G.
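One of the tests described above, an isothermal atmosphere whose emitted spectrum must match the analytic blackbody curve at the input temperature, can be automated very simply. The sketch below is a generic illustration under that assumption, not BART's actual test harness; the tolerance and wavelength grid are arbitrary choices.

    import numpy as np

    h, c, kB = 6.62607015e-34, 2.99792458e8, 1.380649e-23  # SI constants

    def planck(wavelength_m, T):
        """Planck spectral radiance B_lambda(T) in W m^-3 sr^-1."""
        return (2.0 * h * c**2 / wavelength_m**5) / np.expm1(h * c / (wavelength_m * kB * T))

    def isothermal_test(rt_spectrum, wavelengths_m, T, rtol=1e-3):
        """Pass if a radiative-transfer code's isothermal output matches the blackbody curve."""
        return np.allclose(rt_spectrum, planck(wavelengths_m, T), rtol=rtol)

    wl = np.linspace(0.5e-6, 10.0e-6, 200)                   # 0.5-10 micron grid
    print(isothermal_test(planck(wl, 1500.0), wl, 1500.0))   # trivially True for the analytic curve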
Small passenger car transmission test-Chevrolet 200 transmission
NASA Technical Reports Server (NTRS)
Bujold, M. P.
1980-01-01
The small passenger car transmission was tested to supply electric vehicle manufacturers with technical information regarding the performance of commercially available transmissions, which would enable them to design a more energy-efficient vehicle. With this information the manufacturers could estimate vehicle driving range as well as speed and torque requirements for specific road load performance characteristics. A 1979 Chevrolet Model 200 automatic transmission was tested per a passenger car automatic transmission test code (SAE J651b), which required drive performance, coast performance, and no-load test conditions. The transmission attained maximum efficiencies in the mid-eighty percent range for both drive performance tests and coast performance tests. Torque, speed and efficiency curves map the complete performance characteristics of the Chevrolet Model 200 transmission.
Methodology for extracting local constants from petroleum cracking flows
Chang, Shen-Lin; Lottes, Steven A.; Zhou, Chenn Q.
2000-01-01
A methodology provides for the extraction of local chemical kinetic model constants for use in a reacting flow computational fluid dynamics (CFD) computer code with chemical kinetic computations to optimize the operating conditions or design of the system, including retrofit design improvements to existing systems. The coupled CFD and kinetic computer code is used in combination with data obtained from a matrix of experimental tests to extract the kinetic constants. Local fluid dynamic effects are implicitly included in the extracted local kinetic constants for each particular application system to which the methodology is applied. The extracted local kinetic model constants work well over a fairly broad range of operating conditions for specific and complex reaction sets in specific and complex reactor systems. While disclosed in terms of use in a Fluid Catalytic Cracking (FCC) riser, the inventive methodology has application in virtually any reaction set to extract constants for any particular application and reaction set formulation. The methodology includes the steps of: (1) selecting the test data sets for various conditions; (2) establishing the general trend of the parametric effect on the measured product yields; (3) calculating product yields for the selected test conditions using coupled computational fluid dynamics and chemical kinetics; (4) adjusting the local kinetic constants to match calculated product yields with experimental data; and (5) validating the determined set of local kinetic constants by comparing the calculated results with experimental data from additional test runs at different operating conditions.
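Step (4) above is, in essence, a parameter-estimation loop: the kinetic constants are adjusted until the model's predicted yields reproduce the measured yields. The following sketch shows that idea only in outline; predict_yields is a stand-in for the coupled CFD/kinetics calculation, and the test matrix and yields are invented numbers, not data from the patent or any experiment.

    import numpy as np
    from scipy.optimize import least_squares

    def predict_yields(k, conditions):
        """Stand-in for the coupled CFD + kinetics model: yield = k0 * t_res * exp(-k1 / T)."""
        temperature, residence_time = conditions[:, 0], conditions[:, 1]
        return k[0] * residence_time * np.exp(-k[1] / temperature)

    # Hypothetical test matrix (temperature [K], residence time [s]) and measured yields
    conditions = np.array([[800.0, 2.0], [850.0, 2.5], [900.0, 3.0]])
    measured = np.array([0.30, 0.42, 0.55])

    fit = least_squares(lambda k: predict_yields(k, conditions) - measured, x0=[1.0, 1000.0])
    print("extracted local kinetic constants:", fit.x)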
Beyond Molecular Codes: Simple Rules to Wire Complex Brains
Hassan, Bassem A.; Hiesinger, P. Robin
2015-01-01
Summary Molecular codes, like postal zip codes, are generally considered a robust way to ensure the specificity of neuronal target selection. However, a code capable of unambiguously generating complex neural circuits is difficult to conceive. Here, we re-examine the notion of molecular codes in the light of developmental algorithms. We explore how molecules and mechanisms that have been considered part of a code may alternatively implement simple pattern formation rules sufficient to ensure wiring specificity in neural circuits. This analysis delineates a pattern-based framework for circuit construction that may contribute to our understanding of brain wiring. PMID:26451480
From Verified Models to Verifiable Code
NASA Technical Reports Server (NTRS)
Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.
Conversion of the agent-oriented domain-specific language ALAS into JavaScript
NASA Astrophysics Data System (ADS)
Sredojević, Dejan; Vidaković, Milan; Okanović, Dušan; Mitrović, Dejan; Ivanović, Mirjana
2016-06-01
This paper shows generation of JavaScript code from code written in agent-oriented domain-specific language ALAS. ALAS is an agent-oriented domain-specific language for writing software agents that are executed within XJAF middleware. Since the agents can be executed on various platforms, they must be converted into a language of the target platform. We also try to utilize existing tools and technologies to make the whole conversion process as simple as possible, as well as faster and more efficient. We use the Xtext framework that is compatible with Java to implement ALAS infrastructure - editor and code generator. Since Xtext supports Java, generation of Java code from ALAS code is straightforward. To generate a JavaScript code that will be executed within the target JavaScript XJAF implementation, Google Web Toolkit (GWT) is used.
Rover Attitude and Pointing System Simulation Testbed
NASA Technical Reports Server (NTRS)
Vanelli, Charles A.; Grinblat, Jonathan F.; Sirlin, Samuel W.; Pfister, Sam
2009-01-01
The MER (Mars Exploration Rover) Attitude and Pointing System Simulation Testbed Environment (RAPSSTER) provides a simulation platform used for the development and test of GNC (guidance, navigation, and control) flight algorithm designs for the Mars rovers, which was specifically tailored to the MERs, but has since been used in the development of rover algorithms for the Mars Science Laboratory (MSL) as well. The software provides an integrated simulation and software testbed environment for the development of Mars rover attitude and pointing flight software. It provides an environment that is able to run the MER GNC flight software directly (as opposed to running an algorithmic model of the MER GNC flight code). This improves simulation fidelity and confidence in the results. Furthermore, the simulation environment allows the user to single-step through its execution, pausing and restarting at will. The system also provides for the introduction of simulated faults specific to Mars rover environments that cannot be replicated in other testbed platforms, to stress test the GNC flight algorithms under examination. The software provides facilities to do these stress tests in ways that cannot be done in the real-time flight system testbeds, such as time-jumping (both forwards and backwards), and introduction of simulated actuator faults that would be difficult, expensive, and/or destructive to implement in the real-time testbeds. Actual flight-quality codes can be incorporated back into the development-test suite of GNC developers, closing the loop between the GNC developers and the flight software developers. The software provides fully automated scripting, allowing multiple tests to be run with varying parameters, without human supervision.
Dimerization drives EGFR endocytosis through two sets of compatible endocytic codes.
Wang, Qian; Chen, Xinmei; Wang, Zhixiang
2015-03-01
We have shown previously that epidermal growth factor (EGF) receptor (EGFR) endocytosis is controlled by EGFR dimerization. However, it is not clear how the dimerization drives receptor internalization. We propose that EGFR endocytosis is driven by dimerization, bringing two sets of endocytic codes, one contained in each receptor monomer, in close proximity. Here, we tested this hypothesis by generating specific homo- or hetero-dimers of various receptors and their mutants. We show that ErbB2 and ErbB3 homodimers are endocytosis deficient owing to the lack of endocytic codes. Interestingly, EGFR-ErbB2 or EGFR-ErbB3 heterodimers are also endocytosis deficient. Moreover, the heterodimer of EGFR and the endocytosis-deficient mutant EGFRΔ1005-1017 is also impaired in endocytosis. These results indicate that two sets of endocytic codes are required for receptor endocytosis. We found that an EGFR-PDGFRβ heterodimer is endocytosis deficient, although both EGFR and PDGFRβ homodimers are endocytosis-competent, indicating that two compatible sets of endocytic codes are required. Finally, we found that to mediate the endocytosis of the receptor dimer, the two sets of compatible endocytic codes, one contained in each receptor molecule, have to be spatially coordinated. © 2015. Published by The Company of Biologists Ltd.
Highly selective BSA imprinted polyacrylamide hydrogels facilitated by a metal-coding MIP approach.
El-Sharif, H F; Yapati, H; Kalluru, S; Reddy, S M
2015-12-01
We report the fabrication of metal-coded molecularly imprinted polymers (MIPs) using hydrogel-based protein imprinting techniques. A Co(II) complex was prepared using (E)-2-((2 hydrazide-(4-vinylbenzyl)hydrazono)methyl)phenol; along with iron(III) chloroprotoporphyrin (Hemin), vinylferrocene (VFc), zinc(II) protoporphyrin (ZnPP) and protoporphyrin (PP), these complexes were introduced into the MIPs as co-monomers for metal-coding of non-metalloprotein imprints. Results indicate a 66% enhancement of bovine serum albumin (BSA) protein binding capacities (Q, mg/g) via metal-ion/ligand exchange properties within the metal-coded MIPs. Specifically, Co(II)-complex-based MIPs exhibited 92 ± 1% specific binding with Q values of 5.7 ± 0.45 mg BSA/g polymer and imprinting factors (IF) of 14.8 ± 1.9 (MIP/non-imprinted (NIP) control). The selectivity of our Co(II)-coded BSA MIPs was also tested using bovine haemoglobin (BHb), lysozyme (Lyz), and trypsin (Tryp). By evaluating imprinting factors (K), each of the latter proteins was found to have lower affinity in comparison to the cognate BSA template. The hydrogels were further characterised by thermal analysis and differential scanning calorimetry (DSC) to assess optimum polymer composition. The development of hydrogel-based molecularly imprinted polymer (HydroMIPs) technology for the memory imprinting of proteins and for protein biosensor development presents many possibilities, including uses in bio-sample clean-up or selective extraction, replacement of biological antibodies in immunoassays and biosensors for medicine and the environment. Biosensors for proteins and viruses are currently expensive to develop because they require the use of expensive antibodies. Because of their biomimicry capabilities (and their potential to act as synthetic antibodies), HydroMIPs potentially offer a route to the development of new low-cost biosensors. Herein, a metal ion-mediated imprinting approach was employed to metal-code our hydrogel-based MIPs for the selective recognition of bovine serum albumin (BSA). Specifically, Co(II)-complex based MIPs exhibited a 66% enhancement (in comparison to our normal MIPs), exhibiting 92 ± 1% specific binding with Q values of 5.7 ± 0.45 mg BSA/g polymer and imprinting factors (IF) of 14.8 ± 1.9 (MIP/non-imprinted (NIP) control). The proposed metal-coded MIPs for protein recognition are intended to lead to unprecedented improvement in MIP selectivity and to future biosensor development that relies on electrochemical redox processes. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
Janjua, Naveed Zafar; Islam, Nazrul; Kuo, Margot; Yu, Amanda; Wong, Stanley; Butt, Zahid A; Gilbert, Mark; Buxton, Jane; Chapinal, Nuria; Samji, Hasina; Chong, Mei; Alvarez, Maria; Wong, Jason; Tyndall, Mark W; Krajden, Mel
2018-05-01
Large linked healthcare administrative datasets could be used to monitor programs providing prevention and treatment services to people who inject drugs (PWID). However, diagnostic codes in administrative datasets do not differentiate non-injection from injection drug use (IDU). We validated algorithms based on diagnostic codes and prescription records representing IDU in administrative datasets against interview-based IDU data. The British Columbia Hepatitis Testers Cohort (BC-HTC) includes ∼1.7 million individuals tested for HCV/HIV or reported HBV/HCV/HIV/tuberculosis cases in BC from 1990 to 2015, linked to administrative datasets including physician visit, hospitalization and prescription drug records. IDU, assessed through interviews as part of enhanced surveillance at the time of HIV or HCV/HBV diagnosis from a subset of cases included in the BC-HTC (n = 6559), was used as the gold standard. ICD-9/ICD-10 codes for IDU and injecting-related infections (IRI) were grouped with records of opioid substitution therapy (OST) into multiple IDU algorithms in administrative datasets. We assessed the performance of IDU algorithms through calculation of sensitivity, specificity, positive predictive, and negative predictive values. Sensitivity was highest (90-94%) and specificity was lowest (42-73%) for algorithms based either on IDU or on IRI and drug misuse codes. Algorithms requiring both drug misuse and IRI had lower sensitivity (57-60%) and higher specificity (90-92%). An optimal combination of sensitivity and specificity was found for two medical visits or a single hospitalization for injectable drugs, with OST (83%/82%) and without OST (78%/83%), respectively. Based on algorithms that included two medical visits, a single hospitalization or OST records, there were 41,358 recent PWID in BC (1.2% of individuals aged 11-65 years) based on health encounters during the 3-year period 2013-2015. Algorithms for identifying PWID using diagnostic codes in linked administrative data could be used for tracking the progress of programming aimed at PWID. With population-based datasets, this tool can be used to inform much-needed estimates of PWID population size. Copyright © 2018 Elsevier B.V. All rights reserved.
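A hedged sketch of the case-finding rule highlighted above (two physician visits or one hospitalization coded for injectable-drug use, or any OST record): the record layout and field names below are assumptions for illustration, not the BC-HTC schema.

    def likely_recent_pwid(person):
        """Apply the two-visits / one-hospitalization / OST rule to one person's linked records."""
        return (person.get("idu_physician_visits", 0) >= 2
                or person.get("idu_hospitalizations", 0) >= 1
                or person.get("ost_dispensations", 0) >= 1)

    # Toy example with hypothetical counts from a 3-year observation window
    cohort = [
        {"id": 1, "idu_physician_visits": 3, "idu_hospitalizations": 0, "ost_dispensations": 0},
        {"id": 2, "idu_physician_visits": 1, "idu_hospitalizations": 0, "ost_dispensations": 0},
    ]
    print([p["id"] for p in cohort if likely_recent_pwid(p)])  # -> [1]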
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William
This paper documents the escape of high explosive (HE) products problem. The problem, first presented by Fickett & Rivard, tests the implementation and numerical behavior of a high explosive detonation and energy release model and its interaction with an associated compressible hydrodynamics simulation code. The problem simulates the detonation of a finite-length, one-dimensional piece of HE that is driven by a piston from one end and adjacent to a void at the other end. The HE equation of state is modeled as a polytropic ideal gas. The HE detonation is assumed to be instantaneous with an infinitesimal reaction zone. Via judicious selection of the material specific heat ratio, the problem has an exact solution with linear characteristics, enabling a straightforward calculation of the physical variables as a function of time and space. Lastly, implementation of the exact solution in the Python code ExactPack is discussed, as are verification cases for the exact solution code.
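In code-verification studies of this kind, the exact solution is typically evaluated on the hydrocode's mesh and the two profiles are compared through a discrete error norm. The sketch below shows that pattern only; exact_profile is a placeholder rather than the real Fickett & Rivard solution, and it does not reproduce the ExactPack API.

    import numpy as np

    def rms_error(computed, exact):
        """Root-mean-square difference between a hydrocode result and the exact solution."""
        return np.sqrt(np.mean((computed - exact) ** 2))

    x = np.linspace(0.0, 1.0, 401)                               # 1D mesh through the HE region
    exact_profile = np.exp(-x)                                   # placeholder, not the real solution
    computed_profile = exact_profile + 1e-3 * np.sin(40.0 * x)   # mock hydrocode output
    print("RMS error:", rms_error(computed_profile, exact_profile))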
JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning
NASA Astrophysics Data System (ADS)
Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro
2015-12-01
We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.
NASA Technical Reports Server (NTRS)
1974-01-01
An analysis of the requirements for the Earth Observatory Satellite (EOS) system specifications is presented. The analysis consists of requirements obtained from existing documentation and those derived from functional analysis. The requirements follow the hierarchy of program, mission, system, and subsystem. The code for designating specific requirements is explained. Among the subjects considered are the following: (1) the traffic model, (2) space shuttle related performance, (3) booster related performance, (4) the data collection system, (5) spacecraft structural tests, and (6) the ground support requirements.
Verification testing of the compression performance of the HEVC screen content coding extensions
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng
2017-09-01
This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements show a very substantial benefit in coding efficiency for the SCC extensions, and provided consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
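The Bjøntegaard-delta (BD) bit-rate metric mentioned above is computed by fitting a curve through each codec's (quality, log-rate) points and integrating over the overlapping quality range; the percentage difference in average rate at equal quality is the BD-rate. The sketch below is a common simplified form of that calculation (cubic fit in the log-rate domain), not the exact procedure used in the verification test; the rate/PSNR numbers are invented.

    import numpy as np

    def bd_rate(rates_ref, psnr_ref, rates_test, psnr_test):
        """Average bit-rate difference (%) of 'test' vs. 'ref' at equal PSNR; negative = saving."""
        p_ref = np.polyfit(psnr_ref, np.log(rates_ref), 3)
        p_test = np.polyfit(psnr_test, np.log(rates_test), 3)
        lo, hi = max(min(psnr_ref), min(psnr_test)), min(max(psnr_ref), max(psnr_test))
        int_ref = np.polyval(np.polyint(p_ref), hi) - np.polyval(np.polyint(p_ref), lo)
        int_test = np.polyval(np.polyint(p_test), hi) - np.polyval(np.polyint(p_test), lo)
        return (np.exp((int_test - int_ref) / (hi - lo)) - 1.0) * 100.0

    # Hypothetical rate (kbps) / PSNR (dB) points at four quality levels
    print(bd_rate([1000, 2000, 4000, 8000], [34, 37, 40, 43],
                  [600, 1200, 2400, 4800], [34, 37, 40, 43]))  # ~ -40% (a bit-rate saving)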
Moles: Tool-Assisted Environment Isolation with Closures
NASA Astrophysics Data System (ADS)
de Halleux, Jonathan; Tillmann, Nikolai
Isolating test cases from environment dependencies is often desirable, as it increases test reliability and reduces test execution time. However, code that calls non-virtual methods or consumes sealed classes is often impossible to test in isolation. Moles is a new lightweight framework which addresses this problem. For any .NET method, Moles allows test-code to provide alternative implementations, given as .NET delegates, for which C# provides very concise syntax while capturing local variables in a closure object. Using code instrumentation, the Moles framework will redirect calls to provided delegates instead of the original methods. The Moles framework is designed to work together with the dynamic symbolic execution tool Pex to enable automated test generation. In a case study, testing code programmed against the Microsoft SharePoint Foundation API, we achieved full code coverage while running tests in isolation without an actual SharePoint server. The Moles framework integrates with .NET and Visual Studio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klasky, Marc Louis; Myers, Steven Charles; James, Michael R.
To facilitate the timely execution of System Threat Reviews (STRs) for DNDO, and also to develop a methodology for performing STRs, LANL performed comparisons of several radiation transport codes (MCNP, GADRAS, and Gamma-Designer) that have been previously utilized to compute radiation signatures. While each of these codes has strengths, it is of paramount interest to determine the limitations of each of the respective codes and also to identify the most time-efficient means by which to produce computational results, given the large number of parametric cases that are anticipated in performing STRs. These comparisons serve to identify regions of applicability for each code and provide estimates of uncertainty that may be anticipated. Furthermore, while performing these comparisons, the sensitivity of the results to modeling assumptions was also examined. These investigations serve to enable the creation of the LANL methodology for performing STRs. Given the wide variety of radiation test sources, scenarios, and detectors, LANL calculated comparisons of the following parameters: decay data, multiplicity, device (n,γ) leakages, and radiation transport through representative scenes and shielding. This investigation was performed to understand potential limitations of utilizing specific codes for different aspects of the STR challenges.
Opinion survey on proposals for improving code stroke in Murcia Health District V, 2014.
González-Navarro, M; Martínez-Sánchez, M A; Morales-Camacho, V; Valera-Albert, M; Atienza-Ayala, S V; Limiñana-Alcaraz, G
2017-05-01
Stroke is a time-dependent neurological disease. Health District V in the Murcia Health System has certain demographic and geographical characteristics that make it necessary to create specific improvement strategies to ensure proper functioning of code stroke (CS). The study objectives were to assess local professionals' opinions about code stroke activation and procedure, and to share these suggestions with the regional multidisciplinary group for code stroke. This cross-sectional and descriptive study used the Delphi technique to develop a questionnaire for doctors and nurses working at all care levels in Area V. An anonymous electronic survey was sent to 154 professionals. The analysis was performed using the SWOT method (Strengths, Weaknesses, Opportunities, and Threats). Researchers collected 51 questionnaires. The main proposals were providing training, promoting communication with the neurologist, overcoming physical distances, using diagnostic imaging tests, motivating professionals, and raising awareness in the general population. Most of the interventions proposed by the participants have been listed in published literature. These improvement proposals were forwarded to the Regional Code Stroke Improvement Group. Copyright © 2015 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.
Formal specification and verification of Ada software
NASA Technical Reports Server (NTRS)
Hird, Geoffrey R.
1991-01-01
The use of formal methods in software development achieves levels of quality assurance unobtainable by other means. The Larch approach to specification is described, and the specification of avionics software designed to implement the logic of a flight control system is given as an example. Penelope, an Ada verification environment, is described. The Penelope user inputs mathematical definitions, Larch-style specifications, and Ada code, and performs machine-assisted proofs that the code obeys its specifications. As an example, the verification of a binary search function is considered. Emphasis is given to techniques assisting the reuse of a verification effort on modified code.
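Penelope works from Larch-style specifications and Ada code; purely as an illustration of pairing a routine with the pre- and postconditions one would prove against (not Larch or Ada syntax), a binary search and its specification can be sketched as:

    def binary_search(a, key):
        """Precondition: a is sorted in ascending order.
        Postcondition: returns an index i with a[i] == key, or -1 if key is absent."""
        lo, hi = 0, len(a) - 1
        while lo <= hi:
            mid = (lo + hi) // 2
            if a[mid] == key:
                return mid
            if a[mid] < key:
                lo = mid + 1
            else:
                hi = mid - 1
        return -1

    # Runtime checks standing in for the machine-assisted proof obligations.
    a = [1, 3, 5, 7, 9]
    assert all(a[binary_search(a, k)] == k for k in a)
    assert binary_search(a, 4) == -1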
Posttest analysis of the FFTF inherent safety tests
DOE Office of Scientific and Technical Information (OSTI.GOV)
Padilla, A. Jr.; Claybrook, S.W.
Inherent safety tests were performed during 1986 in the 400-MW (thermal) Fast Flux Test Facility (FFTF) reactor to demonstrate the effectiveness of an inherent shutdown device called the gas expansion module (GEM). The GEM device provided a strong negative reactivity feedback during loss-of-flow conditions by increasing the neutron leakage as a result of an expanding gas bubble. The best-estimate pretest calculations for these tests were performed using the IANUS plant analysis code (Westinghouse Electric Corporation proprietary code) and the MELT/SIEX3 core analysis code. These two codes were also used to perform the required operational safety analyses for the FFTF reactor and plant. Although it was intended to also use the SASSYS systems (core and plant) analysis code, the calibration of the SASSYS code for FFTF core and plant analysis was not completed in time to perform pretest analyses. The purpose of this paper is to present the results of the posttest analysis of the 1986 FFTF inherent safety tests using the SASSYS code.
Software engineering and automatic continuous verification of scientific software
NASA Astrophysics Data System (ADS)
Piggott, M. D.; Hill, J.; Farrell, P. E.; Kramer, S. C.; Wilson, C. R.; Ham, D.; Gorman, G. J.; Bond, T.
2011-12-01
Software engineering of scientific code is challenging for a number of reasons including pressure to publish and a lack of awareness of the pitfalls of software engineering by scientists. The Applied Modelling and Computation Group at Imperial College is a diverse group of researchers that employ best practice software engineering methods whilst developing open source scientific software. Our main code is Fluidity - a multi-purpose computational fluid dynamics (CFD) code that can be used for a wide range of scientific applications from earth-scale mantle convection, through basin-scale ocean dynamics, to laboratory-scale classic CFD problems, and is coupled to a number of other codes including nuclear radiation and solid modelling. Our software development infrastructure consists of a number of free tools that could be employed by any group that develops scientific code and has been developed over a number of years with many lessons learnt. A single code base is developed by over 30 people for which we use Bazaar for revision control, making good use of the strong branching and merging capabilities. Using features of Canonical's Launchpad platform, such as code review, blueprints for designing features and bug reporting gives the group, partners and other Fluidity users an easy-to-use platform to collaborate and allows the induction of new members of the group into an environment where software development forms a central part of their work. The code repository is coupled to an automated test and verification system which performs over 20,000 tests, including unit tests, short regression tests, code verification and large parallel tests. Included in these tests are build tests on HPC systems, including local and UK National HPC services. The testing of code in this manner leads to a continuous verification process; not a discrete event performed once development has ceased. Much of the code verification is done against the "gold standard" of analytical solutions via the method of manufactured solutions. By developing and verifying code in tandem we avoid a number of pitfalls in scientific software development and advocate similar procedures for other scientific code applications.
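The abstract does not give solver details; as a minimal sketch of the method of manufactured solutions it describes, one can pick an exact solution, derive the corresponding forcing term, and confirm the expected convergence order of a simple discretization (here a second-order scheme for -u'' = f on [0, 1]; all names and parameters are illustrative):

    import math

    def mms_max_error(n):
        # Manufactured solution u(x) = sin(pi x), hence the forcing term f = pi^2 sin(pi x).
        h = 1.0 / n
        f = [math.pi ** 2 * math.sin(math.pi * i * h) for i in range(n + 1)]
        u = [0.0] * (n + 1)            # boundary values u(0) = u(1) = 0
        for _ in range(20000):         # Gauss-Seidel sweeps (illustrative, not efficient)
            for i in range(1, n):
                u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
        return max(abs(u[i] - math.sin(math.pi * i * h)) for i in range(n + 1))

    e_coarse, e_fine = mms_max_error(10), mms_max_error(20)
    order = math.log(e_coarse / e_fine) / math.log(2.0)
    print(f"observed order of accuracy ~ {order:.2f}")   # expect ~2 for this scheme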
Coding conventions and principles for a National Land-Change Modeling Framework
Donato, David I.
2017-07-14
This report establishes specific rules for writing computer source code for use with the National Land-Change Modeling Framework (NLCMF). These specific rules consist of conventions and principles for writing code primarily in the C and C++ programming languages. Collectively, these coding conventions and coding principles create an NLCMF programming style. In addition to detailed naming conventions, this report provides general coding conventions and principles intended to facilitate the development of high-performance software implemented with code that is extensible, flexible, and interoperable. Conventions for developing modular code are explained in general terms and also enabled and demonstrated through the appended templates for C++ base source-code and header files. The NLCMF limited-extern approach to module structure, code inclusion, and cross-module access to data is both explained in the text and then illustrated through the module templates. Advice on the use of global variables is provided.
Validity of data in the Danish Colorectal Cancer Screening Database.
Thomsen, Mette Kielsholm; Njor, Sisse Helle; Rasmussen, Morten; Linnemann, Dorte; Andersen, Berit; Baatrup, Gunnar; Friis-Hansen, Lennart Jan; Jørgensen, Jens Christian Riis; Mikkelsen, Ellen Margrethe
2017-01-01
In Denmark, a nationwide screening program for colorectal cancer was implemented in March 2014. Along with this, a clinical database for program monitoring and research purposes was established. The aim of this study was to estimate the agreement and validity of diagnosis and procedure codes in the Danish Colorectal Cancer Screening Database (DCCSD). All individuals with a positive immunochemical fecal occult blood test (iFOBT) result who were invited to screening in the first 3 months since program initiation were identified. From these, a sample of 150 individuals was selected using stratified random sampling by age, gender and region of residence. Data from the DCCSD were compared with data from hospital records, which were used as the reference. Agreement, sensitivity, specificity and positive and negative predictive values were estimated for categories of codes "clean colon", "colonoscopy performed", "overall completeness of colonoscopy", "incomplete colonoscopy", "polypectomy", "tumor tissue left behind", "number of polyps", "lost polyps", "risk group of polyps" and "colorectal cancer and polyps/benign tumor". Hospital records were available for 136 individuals. Agreement was highest for "colorectal cancer" (97.1%) and lowest for "lost polyps" (88.2%). Sensitivity varied between moderate and high, with 60.0% for "incomplete colonoscopy" and 98.5% for "colonoscopy performed". Specificity was 92.7% or above, except for the categories "colonoscopy performed" and "overall completeness of colonoscopy", where the specificity was low; however, the estimates were imprecise. A high level of agreement between categories of codes in DCCSD and hospital records indicates that DCCSD reflects the hospital records well. Further, the validity of the categories of codes varied from moderate to high. Thus, the DCCSD may be a valuable data source for future research on colorectal cancer screening.
GROUND-WATER MODEL TESTING: SYSTEMATIC EVALUATION AND TESTING OF CODE FUNCTIONALITY AND PERFORMANCE
Effective use of ground-water simulation codes as management decision tools requires the establishment of their functionality, performance characteristics, and applicability to the problem at hand. This is accomplished through application of a systematic code-testing protocol and...
NASA Technical Reports Server (NTRS)
Keba, John E.
1996-01-01
Rotordynamic coefficients obtained from testing two different hydrostatic bearings are compared to values predicted by two different computer programs. The first set of test data is from a relatively long (L/D=1) orifice-compensated hydrostatic bearing tested in water by Texas A&M University (TAMU Bearing No. 9). The second bearing is a shorter (L/D=.37) bearing and was tested in a lower viscosity fluid by the Rocketdyne Division of Rockwell (Rocketdyne 'Generic' Bearing) at similar rotating speeds and pressures. Computed predictions of bearing rotordynamic coefficients were obtained from the cylindrical seal code 'ICYL', one of the industrial seal codes developed for NASA-LeRC by Mechanical Technology Inc., and from the hydrodynamic bearing code 'HYDROPAD'. The comparison highlights the effect the bearing design has on the accuracy of the predictions. The TAMU Bearing No. 9 test data is closely matched by the predictions obtained from the HYDROPAD code (except for added mass terms), whereas significant differences exist between the data from the Rocketdyne 'Generic' bearing and the code predictions. The results suggest that some aspects of the fluid behavior in the shorter, higher Reynolds number 'Generic' bearing may not be modeled accurately in the codes. The ICYL code predictions for flowrate and direct stiffness approximately equal those of HYDROPAD. Significant differences in cross-coupled stiffness and the damping terms were obtained relative to HYDROPAD and both sets of test data. Several observations are included concerning application of the ICYL code.
Factor Structure and Incremental Validity of the Enhanced Computer- Administered Tests
1992-07-01
Abstract fragment (report-form residue): the enhanced computer-administered tests were evaluated for incremental validity in predicting performance in the mechanical maintenance specialties. Subject terms: aptitude tests; ASVAB (Armed Services Vocational Aptitude Battery); CAT-ASVAB, a computerized adaptive testing version of the ASVAB; the psychomotor portion of the General Aptitude Test Battery (GATB).
Position specific variation in the rate of evolution in transcription factor binding sites
Moses, Alan M; Chiang, Derek Y; Kellis, Manolis; Lander, Eric S; Eisen, Michael B
2003-01-01
Background The binding sites of sequence specific transcription factors are an important and relatively well-understood class of functional non-coding DNAs. Although a wide variety of experimental and computational methods have been developed to characterize transcription factor binding sites, they remain difficult to identify. Comparison of non-coding DNA from related species has shown considerable promise in identifying these functional non-coding sequences, even though relatively little is known about their evolution. Results Here we analyse the genome sequences of the budding yeasts Saccharomyces cerevisiae, S. bayanus, S. paradoxus and S. mikatae to study the evolution of transcription factor binding sites. As expected, we find that both experimentally characterized and computationally predicted binding sites evolve slower than surrounding sequence, consistent with the hypothesis that they are under purifying selection. We also observe position-specific variation in the rate of evolution within binding sites. We find that the position-specific rate of evolution is positively correlated with degeneracy among binding sites within S. cerevisiae. We test theoretical predictions for the rate of evolution at positions where the base frequencies deviate from background due to purifying selection and find reasonable agreement with the observed rates of evolution. Finally, we show how the evolutionary characteristics of real binding motifs can be used to distinguish them from artefacts of computational motif finding algorithms. Conclusion As has been observed for protein sequences, the rate of evolution in transcription factor binding sites varies with position, suggesting that some regions are under stronger functional constraint than others. This variation likely reflects the varying importance of different positions in the formation of the protein-DNA complex. The characterization of the pattern of evolution in known binding sites will likely contribute to the effective use of comparative sequence data in the identification of transcription factor binding sites and is an important step toward understanding the evolution of functional non-coding DNA. PMID:12946282
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-20
... (Code 324), Field Border (Code 386), Filter Strip (Code 393), Land Smoothing (Code 466), Livestock... the implementation requirement document to the specifications and plans. Filter Strip (Code 393)--The...
ERIC Educational Resources Information Center
Blackburn, Angelique Michelle
2013-01-01
Bilinguals sometimes outperform age-matched monolinguals on non-language tasks involving cognitive control. But the bilingual advantage is not consistently found in every experiment and may reflect specific attributes of the bilinguals tested. The goal of this dissertation was to determine if the way in which bilinguals use language, specifically…
Residential tornado safe room from commodity wood products – impact and wind pressure testing
Robert H. Falk; James J. Bridwell; C. Adam Senalik; Marshall Begel
2018-01-01
A tornado safe room is a shelter designed to provide protection during a tornado and is specifically engineered to resist the high wind pressures and debris impact generated by these high wind events. The required performance criteria for these shelters have been established and are found in the International Code Council Standard for the Design and Construction of Storm...
Bayesian Inference in the Modern Design of Experiments
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2008-01-01
This paper provides an elementary tutorial overview of Bayesian inference and its potential for application in aerospace experimentation in general and wind tunnel testing in particular. Bayes Theorem is reviewed and examples are provided to illustrate how it can be applied to objectively revise prior knowledge by incorporating insights subsequently obtained from additional observations, resulting in new (posterior) knowledge that combines information from both sources. A logical merger of Bayesian methods and certain aspects of Response Surface Modeling is explored. Specific applications to wind tunnel testing, computational code validation, and instrumentation calibration are discussed.
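For context, the theorem at the core of the tutorial can be stated in its standard form (not specific to the wind tunnel examples in the paper):

\[
p(\theta \mid y) \;=\; \frac{p(y \mid \theta)\,p(\theta)}{\int p(y \mid \theta')\,p(\theta')\,d\theta'} \;\propto\; p(y \mid \theta)\,p(\theta),
\]

where p(\theta) encodes the prior knowledge, p(y \mid \theta) is the likelihood of the new observations y, and the posterior p(\theta \mid y) is the revised knowledge combining both sources.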
Equations of state for detonation products of high energy PBX explosives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, E. L.; Helm, F. H.; Finger, M.
1977-08-01
It has become apparent that the accumulated changes in the analysis of cylinder test data, in the material specifications, and in the hydrodynamic code simulation of the cylinder test necessitated an update of the detonation product EOS description for explosives in common use at LLL. The explosives reviewed are PBX-9404-3, LX-04-1, LX-10-1, LX-14-0 and LX-09-1. In order to maintain the proper relation of predicted performance of these standard explosives, they have been revised as a single set.
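The abstract does not state which functional form was refit; for context, the Jones-Wilkins-Lee (JWL) form widely used for detonation-product equations of state calibrated to cylinder-test data is

\[
p(V,E) \;=\; A\left(1-\frac{\omega}{R_1 V}\right)e^{-R_1 V} \;+\; B\left(1-\frac{\omega}{R_2 V}\right)e^{-R_2 V} \;+\; \frac{\omega E}{V},
\]

where V is the relative volume, E the internal energy per unit initial volume, and A, B, R_1, R_2, and \omega are parameters fitted for each explosive.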
Imitation Learning Errors Are Affected by Visual Cues in Both Performance and Observation Phases.
Mizuguchi, Takashi; Sugimura, Ryoko; Shimada, Hideaki; Hasegawa, Takehiro
2017-08-01
Mechanisms of action imitation were examined. Previous studies have suggested that success or failure of imitation is determined at the point of observing an action. In other words, cognitive processing after observation is not related to the success of imitation; 20 university students participated in each of three experiments in which they observed a series of object manipulations consisting of four elements (hands, tools, object, and end points) and then imitated the manipulations. In Experiment 1, a specific initially observed element was color coded, and the specific manipulated object at the imitation stage was identically color coded; participants accurately imitated the color coded element. In Experiment 2, a specific element was color coded at the observation but not at the imitation stage, and there were no effects of color coding on imitation. In Experiment 3, participants were verbally instructed to attend to a specific element at the imitation stage, but the verbal instructions had no effect. Thus, the success of imitation may not be determined at the stage of observing an action, and color coding can provide a clue for imitation at the imitation stage.
Lakshminarayan, Kamakshi; Larson, Joseph C.; Virnig, Beth; Fuller, Candace; Allen, Norrina Bai; Limacher, Marian; Winkelmayer, Wolfgang C.; Safford, Monika M.; Burwen, Dale R.
2014-01-01
Background and Purpose Many studies use medical record review for ascertaining outcomes. One large, longitudinal study, the Women’s Health Initiative (WHI) ascertains strokes using participant self-report and subsequent physician review of medical records. This is resource-intensive. Herein, we assess whether Medicare data can reliably assess stroke events in the WHI. Methods Subjects were WHI participants with fee-for-service Medicare. Four stroke definitions were created for Medicare data using discharge diagnoses in hospitalization claims. Definition 1: stroke codes in any position; Definition 2: primary position stroke codes; Definitions 3 & 4: hemorrhagic and ischemic stroke codes respectively. WHI data were randomly split into training (50%) and test sets. A concordance matrix was used to examine agreement between WHI and Medicare stroke diagnosis. A WHI stroke and a Medicare stroke were considered a match if they occurred within +/− 7 days of each other. Refined analyses excluded Medicare events where medical records were unavailable for comparison. Results Training data (n=24,428): There were 577 WHI strokes and 557 Medicare strokes using definition 1. Of these, 478 were a match. Algorithm performance: Specificity 99.7%; Negative Predictive Value 99.7%; Sensitivity 82.8%; Positive Predictive Value 85.8%; kappa 0.84. Performance was similar for test data. While specificity and negative predictive value exceeded 99%, sensitivity ranged from 75 to 88% and positive predictive value ranged from 80 to 90% across stroke definitions. Conclusion Medicare data appear useful for population-based stroke research; however the performance characteristics depend on the definition selected. PMID:24525955
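Two of the reported training-set figures follow directly from the counts given above (577 adjudicated WHI strokes, 557 claims-defined strokes, 478 matched within +/- 7 days); a minimal sketch of the arithmetic (specificity and NPV additionally require the number of participants with neither event, which is why they approach 100% in this large cohort):

    whi_strokes = 577        # reference-standard events (WHI adjudication)
    medicare_strokes = 557   # claims-defined events (definition 1)
    matched = 478            # events identified by both within +/- 7 days

    sensitivity = matched / whi_strokes        # ~0.828, matching the reported 82.8%
    ppv = matched / medicare_strokes           # ~0.858, matching the reported 85.8%
    print(f"sensitivity = {sensitivity:.1%}, PPV = {ppv:.1%}")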
Validation and verification of the laser range safety tool (LRST)
NASA Astrophysics Data System (ADS)
Kennedy, Paul K.; Keppler, Kenneth S.; Thomas, Robert J.; Polhamus, Garrett D.; Smith, Peter A.; Trevino, Javier O.; Seaman, Daniel V.; Gallaway, Robert A.; Crockett, Gregg A.
2003-06-01
The U.S. Dept. of Defense (DOD) is currently developing and testing a number of High Energy Laser (HEL) weapons systems. DOD range safety officers now face the challenge of designing safe methods of testing HELs on DOD ranges. In particular, safety officers need to ensure that diffuse and specular reflections from HEL system targets, as well as direct beam paths, are contained within DOD boundaries. If both the laser source and the target are moving, as they are for the Airborne Laser (ABL), a complex series of calculations is required and manual calculations are impractical. Over the past 5 years, the Optical Radiation Branch of the Air Force Research Laboratory (AFRL/HEDO), the ABL System Program Office, Logicon-RDA, and Northrop Grumman have worked together to develop a computer model called the Laser Range Safety Tool (LRST), specifically designed for HEL reflection hazard analyses. The code, which is still under development, is currently tailored to support the ABL program. AFRL/HEDO has led an LRST Validation and Verification (V&V) effort since 1998, in order to determine if code predictions are accurate. This paper summarizes LRST V&V efforts to date including: i) comparison of code results with laboratory measurements of reflected laser energy and with reflection measurements made during actual HEL field tests, and ii) validation of LRST's hazard zone computations.
Micromagnetic Code Development of Advanced Magnetic Structures Final Report CRADA No. TC-1561-98
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, Charles J.; Shi, Xizeng
The specific goals of this project were to: Further develop the previously written micromagnetic code DADIMAG (DOE code release number 980017); Validate the code. The resulting code was expected to be more realistic and useful for simulations of magnetic structures of specific interest to Read-Rite programs. We also planned to further develop the code for use in internal LLNL programs. This project complemented LLNL CRADA TC-840-94 between LLNL and Read-Rite, which allowed for simulations of the advanced magnetic head development completed under the CRADA. TC-1561-98 was effective concurrently with LLNL non-exclusive copyright license (TL-1552-98) to Read-Rite for DADIMAG Version 2 executable code.
NASA Technical Reports Server (NTRS)
Radhakrishnan, K.
1984-01-01
The efficiency and accuracy of several algorithms recently developed for the efficient numerical integration of stiff ordinary differential equations are compared. The methods examined include two general-purpose codes, EPISODE and LSODE, and three codes (CHEMEQ, CREK1D, and GCKP84) developed specifically to integrate chemical kinetic rate equations. The codes are applied to two test problems drawn from combustion kinetics. The comparisons show that LSODE is the fastest code currently available for the integration of combustion kinetic rate equations. An important finding is that an iterative solution of the algebraic energy conservation equation to compute the temperature does not result in significant errors. In addition, this method is more efficient than evaluating the temperature by integrating its time derivative. Significant reductions in computational work are realized by updating the rate constants (k = A T^N exp(-E/RT)) only when the temperature change exceeds an amount delta T that is problem dependent. An approximate expression for the automatic evaluation of delta T is derived and is shown to result in increased efficiency.
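As an illustration of the rate-constant update strategy described above (a sketch only, not the code compared in the paper; the Arrhenius parameters and threshold are placeholders):

    import math

    A, N, E_OVER_R = 1.0e12, 0.5, 15000.0   # placeholder Arrhenius parameters
    DELTA_T = 5.0                           # problem-dependent update threshold

    _last_T, _last_k = None, None

    def rate_constant(T):
        """Return k = A * T**N * exp(-E/(R*T)), recomputing only when T has
        drifted more than DELTA_T from the temperature of the last evaluation."""
        global _last_T, _last_k
        if _last_T is None or abs(T - _last_T) > DELTA_T:
            _last_T = T
            _last_k = A * T ** N * math.exp(-E_OVER_R / T)
        return _last_k

    print(rate_constant(1500.0), rate_constant(1503.0), rate_constant(1510.0))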
NASA Technical Reports Server (NTRS)
Ngan, Angelen; Biezad, Daniel
1996-01-01
A study has been conducted to develop and to analyze a FORTRAN computer code for performing agility analysis on fighter aircraft configurations. This program is one of the modules of the NASA Ames ACSYNT (AirCraft SYNThesis) design code. The background of the agility research in the aircraft industry and a survey of a few agility metrics are discussed. The methodology, techniques, and models developed for the code are presented. The validity of the existing code was evaluated by comparing with existing flight test data. A FORTRAN program was developed for a specific metric, PM (Pointing Margin), as part of the agility module. Example trade studies using the agility module along with ACSYNT were conducted using a McDonnell Douglas F/A-18 Hornet aircraft model. The sensitivity of thrust loading, wing loading, and thrust vectoring on agility criteria was investigated. The module can compare the agility potential between different configurations and has the capability to optimize agility performance in the preliminary design process. This research provides a new and useful design tool for analyzing fighter performance during air combat engagements in preliminary design.
Development of an agility assessment module for preliminary fighter design
NASA Technical Reports Server (NTRS)
Ngan, Angelen; Bauer, Brent; Biezad, Daniel; Hahn, Andrew
1996-01-01
A FORTRAN computer program is presented to perform agility analysis on fighter aircraft configurations. This code is one of the modules of the NASA Ames ACSYNT (AirCraft SYNThesis) design code. The background of the agility research in the aircraft industry and a survey of a few agility metrics are discussed. The methodology, techniques, and models developed for the code are presented. FORTRAN programs were developed for two specific metrics, CCT (Combat Cycle Time) and PM (Pointing Margin), as part of the agility module. The validity of the code was evaluated by comparing with existing flight test data. Example trade studies using the agility module along with ACSYNT were conducted using Northrop F-20 Tigershark and McDonnell Douglas F/A-18 Hornet aircraft models. The sensitivity of thrust loading and wing loading on agility criteria was investigated. The module can compare the agility potential between different configurations and has the capability to optimize agility performance in the preliminary design process. This research provides a new and useful design tool for analyzing fighter performance during air combat engagements.
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
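The claimed computation reduces to a sum of the three time-period-specific terms; a minimal sketch with hypothetical dollar values:

    def future_facility_conditions(maintenance_cost, modernization_factor, backlog_factor):
        # Future facility conditions = maintenance cost + modernization factor + backlog factor.
        return maintenance_cost + modernization_factor + backlog_factor

    print(future_facility_conditions(1_200_000, 350_000, 80_000))   # hypothetical inputs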
González-de Paz, Luis; Devant-Altimir, Meritxell; Kostov, Belchin; Mitjavila-López, Joan; Navarro-Rubio, M Dolors; Sisó-Almirall, Antoni
2013-12-01
Assessing ethical endorsement is crucial to the study of professional performance and moral conduct. There are no specific instruments that verify patients' and professionals' experiences of ethical practice in the specific area of primary health care (PHC). To study the psychometric properties of two questionnaires to identify professional and patient endorsement of normative ethics. A methodological study conducted in PHC centres from an urban area (Barcelona). A group of items from an ethical code were generated using a qualitative study with focus groups. Items underwent expert validation, item refinement and test-retest reliability. Two groups of items for PHC professionals and patients were validated. The structure of the constructs and the internal consistency were studied after participants completed the questionnaires. Principal component analysis with supplementary variables showed the utility of the validated questionnaires. The patients' questionnaire consisted of 17 general items plus 11 additional items on specific conditions, and the health professionals' questionnaire contained 24 general and 9 specific items. The construct of the questionnaires comprised a three-factor solution for patients and a five-factor solution for professionals. Principal component analysis with supplementary variables showed that patients with higher scores on ethical perception were associated with better opinions on health care quality and more confidence in professionals. In PHC professionals, higher scores were associated with effective knowledge of the code. Both questionnaires showed good psychometric properties and are valid to screen ethical attitudes. The instrument warrants further testing and use with culturally diverse patients and PHC professionals.
Establishment of a Beta Test Center for the NPARC Code at Central State University
NASA Technical Reports Server (NTRS)
Okhio, Cyril B.
1996-01-01
Central State University has received a supplementary award to purchase computer workstations for the NPARC (National Propulsion Ames Research Center) computational fluid dynamics code BETA Test Center. The computational code has also been acquired for installation on the workstations. The acquisition of this code is an initial step for CSU in joining an alliance composed of NASA, AEDC, The Aerospace Industry, and academia. A post-Doctoral research Fellow from a neighboring university will assist the PI in preparing a template for Tutorial documents for the BETA test center. The major objective of the alliance is to establish a national applications-oriented CFD capability, centered on the NPARC code. By joining the alliance, the BETA test center at CSU will allow the PI, as well as undergraduate and post-graduate students to test the capability of the NPARC code in predicting the physics of aerodynamic/geometric configurations that are of interest to the alliance. Currently, CSU is developing a once a year, hands-on conference/workshop based upon the experience acquired from running other codes similar to the NPARC code in the first year of this grant.
Gel, Bernat; Díez-Villanueva, Anna; Serra, Eduard; Buschbeck, Marcus; Peinado, Miguel A; Malinverni, Roberto
2016-01-15
Statistically assessing the relation between a set of genomic regions and other genomic features is a common challenging task in genomic and epigenomic analyses. Randomization based approaches implicitly take into account the complexity of the genome without the need of assuming an underlying statistical model. regioneR is an R package that implements a permutation test framework specifically designed to work with genomic regions. In addition to the predefined randomization and evaluation strategies, regioneR is fully customizable allowing the use of custom strategies to adapt it to specific questions. Finally, it also implements a novel function to evaluate the local specificity of the detected association. regioneR is an R package released under Artistic-2.0 License. The source code and documents are freely available through Bioconductor (http://www.bioconductor.org/packages/regioneR). rmalinverni@carrerasresearch.org. © The Author 2015. Published by Oxford University Press.
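regioneR itself is an R/Bioconductor package; as a language-neutral illustration of the underlying permutation-test idea (not regioneR's API, with toy regions and a naive overlap statistic), one can compare an observed overlap against overlaps obtained from randomized region sets:

    import random

    def count_overlaps(regions_a, regions_b):
        # Number of regions in A that intersect at least one region in B (naive check).
        return sum(any(a0 < b1 and b0 < a1 for b0, b1 in regions_b)
                   for a0, a1 in regions_a)

    def permutation_p_value(regions_a, regions_b, genome_length, n_perm=1000, seed=0):
        rng = random.Random(seed)
        observed = count_overlaps(regions_a, regions_b)
        hits = 0
        for _ in range(n_perm):
            # Randomize region set A while preserving region lengths.
            shuffled = []
            for a0, a1 in regions_a:
                start = rng.randrange(genome_length - (a1 - a0))
                shuffled.append((start, start + (a1 - a0)))
            if count_overlaps(shuffled, regions_b) >= observed:
                hits += 1
        return (hits + 1) / (n_perm + 1)   # add-one correction for a valid p-value

    a = [(100, 200), (1000, 1100), (5000, 5050)]
    b = [(150, 250), (1020, 1080), (9000, 9100)]
    print(permutation_p_value(a, b, genome_length=10_000))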
Identifying Pediatric Severe Sepsis and Septic Shock: Accuracy of Diagnosis Codes.
Balamuth, Fran; Weiss, Scott L; Hall, Matt; Neuman, Mark I; Scott, Halden; Brady, Patrick W; Paul, Raina; Farris, Reid W D; McClead, Richard; Centkowski, Sierra; Baumer-Mouradian, Shannon; Weiser, Jason; Hayes, Katie; Shah, Samir S; Alpern, Elizabeth R
2015-12-01
To evaluate accuracy of 2 established administrative methods of identifying children with sepsis using a medical record review reference standard. Multicenter retrospective study at 6 US children's hospitals. Subjects were children >60 days to <19 years of age and identified in 4 groups based on International Classification of Diseases, Ninth Revision, Clinical Modification codes: (1) severe sepsis/septic shock (sepsis codes); (2) infection plus organ dysfunction (combination codes); (3) subjects without codes for infection, organ dysfunction, or severe sepsis; and (4) infection but not severe sepsis or organ dysfunction. Combination codes were allowed, but not required within the sepsis codes group. We determined the presence of reference standard severe sepsis according to consensus criteria. Logistic regression was performed to determine whether addition of codes for sepsis therapies improved case identification. A total of 130 out of 432 subjects met the reference standard definition of severe sepsis. Sepsis codes had sensitivity 73% (95% CI 70-86), specificity 92% (95% CI 87-95), and positive predictive value 79% (95% CI 70-86). Combination codes had sensitivity 15% (95% CI 9-22), specificity 71% (95% CI 65-76), and positive predictive value 18% (95% CI 11-27). Slight improvements in model characteristics were observed when codes for vasoactive medications and endotracheal intubation were added to sepsis codes (c-statistic 0.83 vs 0.87, P = .008). Sepsis specific International Classification of Diseases, Ninth Revision, Clinical Modification codes identify pediatric patients with severe sepsis in administrative data more accurately than a combination of codes for infection plus organ dysfunction. Copyright © 2015 Elsevier Inc. All rights reserved.
Code Verification Results of an LLNL ASC Code on Some Tri-Lab Verification Test Suite Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, S R; Bihari, B L; Salari, K
As scientific codes become more complex and involve larger numbers of developers and algorithms, chances for algorithmic implementation mistakes increase. In this environment, code verification becomes essential to building confidence in the code implementation. This paper will present first results of a new code verification effort within LLNL's B Division. In particular, we will show results of code verification of the LLNL ASC ARES code on the test problems: Su Olson non-equilibrium radiation diffusion, Sod shock tube, Sedov point blast modeled with shock hydrodynamics, and Noh implosion.
Specific and Modular Binding Code for Cytosine Recognition in Pumilio/FBF (PUF) RNA-binding Domains
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dong, Shuyun; Wang, Yang; Cassidy-Amstutz, Caleb
2011-10-28
Pumilio/fem-3 mRNA-binding factor (PUF) proteins possess a recognition code for bases A, U, and G, allowing designed RNA sequence specificity of their modular Pumilio (PUM) repeats. However, recognition side chains in a PUM repeat for cytosine are unknown. Here we report identification of a cytosine-recognition code by screening random amino acid combinations at conserved RNA recognition positions using a yeast three-hybrid system. This C-recognition code is specific and modular as specificity can be transferred to different positions in the RNA recognition sequence. A crystal structure of a modified PUF domain reveals specific contacts between an arginine side chain and the cytosine base. We applied the C-recognition code to design PUF domains that recognize targets with multiple cytosines and to generate engineered splicing factors that modulate alternative splicing. Finally, we identified a divergent yeast PUF protein, Nop9p, that may recognize natural target RNAs with cytosine. This work deepens our understanding of natural PUF protein target recognition and expands the ability to engineer PUF domains to recognize any RNA sequence.
DYNA3D/ParaDyn Regression Test Suite Inventory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Jerry I.
2016-09-01
The following table constitutes an initial assessment of feature coverage across the regression test suite used for DYNA3D and ParaDyn. It documents the regression test suite at the time of preliminary release 16.1 in September 2016. The columns of the table represent groupings of functionalities, e.g., material models. Each problem in the test suite is represented by a row in the table. All features exercised by the problem are denoted by a check mark (√) in the corresponding column. The definition of “feature” has not been subdivided to its smallest unit of user input, e.g., algorithmic parameters specific to a particular type of contact surface. This represents a judgment to provide code developers and users a reasonable impression of feature coverage without expanding the width of the table by several multiples. All regression testing is run in parallel, typically with eight processors, except problems involving features only available in serial mode. Many are strictly regression tests acting as a check that the codes continue to produce adequately repeatable results as development unfolds; compilers change and platforms are replaced. A subset of the tests represents true verification problems that have been checked against analytical or other benchmark solutions. Users are welcomed to submit documented problems for inclusion in the test suite, especially if they are heavily exercising, and dependent upon, features that are currently underrepresented.
Time for change: a roadmap to guide the implementation of the World Anti-Doping Code 2015
Dvorak, Jiri; Baume, Norbert; Botré, Francesco; Broséus, Julian; Budgett, Richard; Frey, Walter O; Geyer, Hans; Harcourt, Peter Rex; Ho, Dave; Howman, David; Isola, Victor; Lundby, Carsten; Marclay, François; Peytavin, Annie; Pipe, Andrew; Pitsiladis, Yannis P; Reichel, Christian; Robinson, Neil; Rodchenkov, Grigory; Saugy, Martial; Sayegh, Souheil; Segura, Jordi; Thevis, Mario; Vernec, Alan; Viret, Marjolaine; Vouillamoz, Marc; Zorzoli, Mario
2014-01-01
A medical and scientific multidisciplinary consensus meeting was held from 29 to 30 November 2013 on Anti-Doping in Sport at the Home of FIFA in Zurich, Switzerland, to create a roadmap for the implementation of the 2015 World Anti-Doping Code. The consensus statement and accompanying papers set out the priorities for the antidoping community in research, science and medicine. The participants achieved consensus on a strategy for the implementation of the 2015 World Anti-Doping Code. Key components of this strategy include: (1) sport-specific risk assessment, (2) prevalence measurement, (3) sport-specific test distribution plans, (4) storage and reanalysis, (5) analytical challenges, (6) forensic intelligence, (7) psychological approach to optimise the most deterrent effect, (8) the Athlete Biological Passport (ABP) and confounding factors, (9) data management system (Anti-Doping Administration & Management System, ADAMS), (10) education, (11) research needs and necessary advances, (12) inadvertent doping and (13) management and ethics: biological data. True implementation of the 2015 World Anti-Doping Code will depend largely on the ability to align thinking around these core concepts and strategies. FIFA, jointly with all other engaged International Federations of sports (IFs), the International Olympic Committee (IOC) and World Anti-Doping Agency (WADA), are ideally placed to lead transformational change with the unwavering support of the wider antidoping community. The outcome of the consensus meeting was the creation of the ad hoc Working Group charged with the responsibility of moving this agenda forward. PMID:24764550
Song, Jiang-Hua; Cao, Jia-Shu; Wang, Cheng-Gang
2013-01-01
KEY MESSAGE: BcMF11 as a non-coding RNA gene has an essential role in pollen development, and might be useful for regulating the pollen fertility of crops by antisense RNA technology. We previously identified a 828-bp full-length cDNA of BcMF11, a novel pollen-specific non-coding mRNA-like gene from Chinese cabbage (Brassica campestris L. ssp. chinensis Makino). However, little information is known about the function of BcMF11 in pollen development. To investigate its exact biological roles in pollen development, the BcMF11 cDNA was antisense inhibited in transgenic Chinese cabbage under the control of a tapetum-specific promoter BcA9 and a constitutive promoter CaMV 35S. Antisense RNA transgenic plants displayed decreasing expression of BcMF11 and showed distinct morphological defects. Pollen germination tests in vitro and in vivo on the transgenic plants suggested that inhibition of BcMF11 decreased pollen germination efficiency and delayed the pollen tubes' extension in the style. Under scanning electron microscopy, many shrunken and collapsed pollen grains were detected in the antisense BcMF11 transgenic Chinese cabbage. Further cytological observation revealed abnormal pollen development process in transgenic plants, including delayed degradation of tapetum, asynchronous separation of microspore, and aborted development of pollen grain. These results suggest that BcMF11, as a non-coding RNA, plays an essential role in pollen development and male fertility.
Researcher Perceptions of Ethical Guidelines and Codes of Conduct
Giorgini, Vincent; Mecca, Jensen T.; Gibson, Carter; Medeiros, Kelsey; Mumford, Michael D.; Connelly, Shane; Devenport, Lynn D.
2014-01-01
Ethical codes of conduct exist in almost every profession. Field-specific codes of conduct have been around for decades, each articulating specific ethical and professional guidelines. However, there has been little empirical research on researchers’ perceptions of these codes of conduct. In the present study, we interviewed faculty members in six research disciplines and identified five themes bearing on the circumstances under which they use ethical guidelines and the underlying reasons for not adhering to such guidelines. We then identify problems with the manner in which codes of conduct in academia are constructed and offer solutions for overcoming these problems. PMID:25635845
DOE Office of Scientific and Technical Information (OSTI.GOV)
Raimondo, E.; Capman, J.L.; Herovard, M.
1985-05-01
Requirements for qualification of electrical equipment used in French-built nuclear power plants are stated in a national code, the RCC-E, or Regles de Construction et de Conception des Materiels Electriques. Under the RCC-E, safety related equipment is assigned to one of three different categories, according to location in the plant and anticipated normal, accident and post-accident behavior. Qualification tests differ for each category and procedures range in scope from the standard seismic test to the highly stringent VISA program, which specifies a predetermined sequence of aging, radiation, seismic and simulated accident testing. A network of official French test facilities was developed specifically to meet RCC-E requirements.
Development tests for the 2.5 megawatt Mod-2 wind turbine generator
NASA Technical Reports Server (NTRS)
Andrews, J. S.; Baskin, J. M.
1982-01-01
The 2.5 megawatt MOD-2 wind turbine generator test program is discussed. The development of the 2.5 megawatt MOD-2 wind turbine generator included an extensive program of testing which encompassed verification of analytical procedures, component development, and integrated system verification. The test program was to assure achievement of the thirty year design operational life of the wind turbine system as well as to minimize costly design modifications which would otherwise have been required during on site system testing. Computer codes were modified, fatigue life of structure and dynamic components were verified, mechanical and electrical component and subsystems were functionally checked and modified where necessary to meet system specifications, and measured dynamic responses of coupled systems confirmed analytical predictions.
Do third-party plans really pay for CVS care?
Soden, Richard
2002-04-01
Until specific CPT and ICD-9 codes are created and approved for CVS, and until there is uniform agreement that CVS is a true medical anomaly (or not), each practitioner will have to decide on how to bill for the signs and symptoms of Computer Vision Syndrome. If the practitioner chooses to view CVS as a medical problem, then the same guidelines and rules for all other patients should be followed with appropriate documentation using CPT and ICD-9 coding. If the practitioner chooses to consider CVS solely as an optical problem, this is a "noncovered" service and the patient or any applicable optical plan will be responsible for payment. One final note: each practitioner who tests for CVS will also have to determine if there is a separate fee for CVS testing. If the practitioner considers CVS to be a medical problem, it may be applicable to include testing for Computer Vision Syndrome as an incidental test to the medical office visit. If the practitioner chooses to consider CVS to be purely an optical problem, it may be appropriate to add an appropriate charge to the noncovered examination. Whatever the decision is, there must be consistency from patient to patient.
Boeri, Eduardo J.; Wanke, María M.; Madariaga, María J.; Teijeiro, María L.; Elena, Sebastian A.; Trangoni, Marcos D.
2018-01-01
Aim: This study aimed to compare the sensitivity (S), specificity (Sp), and positive likelihood ratios (LR+) of four polymerase chain reaction (PCR) assays for the detection of Brucella spp. in dogs’ clinical samples. Materials and Methods: A total of 595 samples of whole blood, urine, and genital fluids were evaluated between October 2014 and November 2016. To compare PCR assays, the gold standard was defined using a combination of different serological and microbiological tests. Bacterial isolation from urine and blood cultures was carried out. Serological methods such as rapid slide agglutination test, indirect enzyme-linked immunosorbent assay, agar gel immunodiffusion test, and buffered plate antigen test were performed. Four genes were evaluated: (i) The gene coding for the BCSP31 protein, (ii) the ribosomal gene coding for the 16S-23S intergenic spacer region, (iii) the gene coding for porins omp2a/omp2b, and (iv) the gene coding for the insertion sequence IS711. Results: The results obtained were as follows: (1) For the primers that amplify the gene coding for the BCSP31 protein: S: 45.64% (confidence interval [CI] 39.81-51.46), Sp: 95.62% (CI 93.13-98.12), and LR+: 10.43 (CI 6.04-18); (2) for the primers that amplify the ribosomal gene of the 16S-23S rDNA intergenic spacer region: S: 69.80% (CI 64.42-75.18), Sp: 95.62% (CI 93.13-98.12), and LR+: 11.52 (CI 7.31-18.13); (3) for the primers that amplify the omp2a and omp2b genes: S: 39.26% (CI 33.55-44.97), Sp: 97.31% (CI 95.30-99.32), and LR+ 14.58 (CI 7.25-29.29); and (4) for the primers that amplify the insertion sequence IS711: S: 22.82% (CI 17.89-27.75), Sp: 99.66% (CI 98.84-100), and LR+ 67.77 (CI 9.47-484.89). Conclusion: We concluded that the gene coding for the 16S-23S rDNA intergenic spacer region was the one that best detected Brucella spp. in canine clinical samples. PMID:29657404
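For reference, each reported positive likelihood ratio is derived from the corresponding sensitivity and specificity; for the BCSP31 assay, for example,

\[
LR^{+} \;=\; \frac{\text{sensitivity}}{1-\text{specificity}} \;=\; \frac{0.4564}{1-0.9562} \;\approx\; 10.4,
\]

consistent with the reported value of 10.43.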
NASA Technical Reports Server (NTRS)
Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.
1989-01-01
The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.
NASA Technical Reports Server (NTRS)
Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell
1999-01-01
AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.
Chan, Jason; Mack, David R.; Manuel, Douglas G.; Mojaverian, Nassim; de Nanassy, Joseph
2017-01-01
Importance Celiac disease (CD) is a common pediatric illness, and awareness of gluten-related disorders including CD is growing. Health administrative data represents a unique opportunity to conduct population-based surveillance of this chronic condition and assess the impact of caring for children with CD on the health system. Objective The objective of the study was to validate an algorithm based on health administrative data diagnostic codes to accurately identify children with biopsy-proven CD. We also evaluated trends over time in the use of health services related to CD by children in Ontario, Canada. Study design and setting We conducted a retrospective cohort study and validation study of population-based health administrative data in Ontario, Canada. All cases of biopsy-proven CD diagnosed 2005–2011 in Ottawa were identified through chart review from a large pediatric health care center, and linked to the Ontario health administrative data to serve as the positive reference standard. All other children living within Ottawa served as the negative reference standard. Case-identifying algorithms based on outpatient physician visits with associated ICD-9 code for CD plus endoscopy billing code were constructed and tested. Sensitivity, specificity, PPV and NPV were tested for each algorithm (with 95% CI). Poisson regression, adjusting for sex and age at diagnosis, was used to explore the trend in outpatient visits associated with a CD diagnostic code from 1995–2011. Results The best algorithm to identify CD consisted of an endoscopy billing claim followed by 1 or more adult or pediatric gastroenterologist encounters after the endoscopic procedure. The sensitivity, specificity, PPV, and NPV for the algorithm were: 70.4% (95% CI 61.1–78.4%), >99.9% (95% CI >99.9->99.9%), 53.3% (95% CI 45.1–61.4%) and >99.9% (95% CI >99.9->99.9%) respectively. It identified 1289 suspected CD cases from Ontario-wide administrative data. There was a 9% annual increase in the use of this combination of CD-associated diagnostic codes in physician billing data (RR 1.09, 95% CI 1.07–1.10, P<0.001). Conclusions With its current structure and variables Ontario health administrative data is not suitable in identifying incident pediatric CD cases. The tested algorithms suffer from poor sensitivity and/or poor PPV, which increase the risk of case misclassification that could lead to biased estimation of CD incidence rate. This study reinforced the importance of validating the codes used to identify cohorts or outcomes when conducting research using health administrative data. PMID:28662204
The Relationship Between Financial Incentives and Quality of Diabetes Care in Ontario, Canada
Kiran, Tara; Victor, J. Charles; Kopp, Alexander; Shah, Baiju R.; Glazier, Richard H.
2012-01-01
OBJECTIVE We assessed the impact of a diabetes incentive code introduced for primary care physicians in Ontario, Canada, in 2002 on quality of diabetes care at the population and patient level. RESEARCH DESIGN AND METHODS We analyzed administrative data for 757,928 Ontarians with diabetes to examine the use of the code and receipt of three evidence-based monitoring tests from 2006 to 2008. We assessed testing rates over time and before and after billing of the incentive code. RESULTS One-quarter of Ontarians with diabetes had an incentive code billed by their physician. The proportion receiving the optimal number of all three monitoring tests (HbA1c, cholesterol, and eye tests) rose gradually from 16% in 2000 to 27% in 2008. Individuals who were younger, lived in rural areas, were not enrolled in a primary care model, or had a mental illness were less likely to receive all three recommended tests. Patients with higher numbers of incentive code billings in 2006–2008 were more likely to receive recommended testing but also were more likely to have received the highest level of recommended testing prior to introduction of the incentive code. Following the same patients over time, improvement in recommended testing was no greater after billing of the first incentive code than before. CONCLUSIONS The diabetes incentive code led to minimal improvement in quality of diabetes care at the population and patient level. Our findings suggest that physicians who provide the highest quality care prior to incentives may be those most likely to claim incentive payments. PMID:22456866
Juul, Malene; Bertl, Johanna; Guo, Qianyun; Nielsen, Morten Muhlig; Świtnicki, Michał; Hornshøj, Henrik; Madsen, Tobias; Hobolth, Asger; Pedersen, Jakob Skou
2017-01-01
Non-coding mutations may drive cancer development. Statistical detection of non-coding driver regions is challenged by a varying mutation rate and uncertainty of functional impact. Here, we develop a statistically founded non-coding driver-detection method, ncdDetect, which includes sample-specific mutational signatures, long-range mutation rate variation, and position-specific impact measures. Using ncdDetect, we screened non-coding regulatory regions of protein-coding genes across a pan-cancer set of whole-genomes (n = 505); the screen ranked known drivers at the top and identified new candidates. For individual candidates, the presence of non-coding mutations is associated with altered expression or decreased patient survival across an independent pan-cancer sample set (n = 5454). This includes an antigen-presenting gene (CD1A), where 5’UTR mutations correlate significantly with decreased survival in melanoma. Additionally, mutations in a base-excision-repair gene (SMUG1) correlate with a C-to-T mutational signature. Overall, we find that a rich model of mutational heterogeneity facilitates non-coding driver identification and integrative analysis points to candidates of potential clinical relevance. DOI: http://dx.doi.org/10.7554/eLife.21778.001 PMID:28362259
Improving the sensitivity and specificity of the abbreviated injury scale coding system.
Kramer, C F; Barancik, J I; Thode, H C
1990-01-01
The Abbreviated Injury Scale with Epidemiologic Modifications (AIS 85-EM) was developed to make it possible to code information about anatomic injury types and locations that, although generally available from medical records, is not codable under the standard Abbreviated Injury Scale, published by the American Association for Automotive Medicine in 1985 (AIS 85). In a population-based sample of 3,223 motor vehicle trauma cases, 68 percent of the patients had one or more injuries that were coded to the AIS 85 body region nonspecific category external. When the same patients' injuries were coded using the AIS 85-EM coding procedure, only 15 percent of the patients had injuries that could not be coded to a specific body region. With AIS 85-EM, the proportion of codable head injury cases increased from 16 percent to 37 percent, thereby improving the potential for identifying cases with head and threshold brain injury. The data suggest that body region coding of all injuries is necessary to draw valid and reliable conclusions about changes in injury patterns and their sequelae. The increased specificity of body region coding improves assessments of the efficacy of injury intervention strategies and countermeasure programs using epidemiologic methodology. PMID:2116633
NASA Astrophysics Data System (ADS)
Astuti, DT; Pujiastuti, Y.; Suparman, SHK; Damiri, N.; Nugraha, S.; Sembiring, ER; Mulawarman
2018-01-01
Bacillus thuringiensis is a gram-positive bacterium that produces toxic crystal proteins (δ-endotoxins) specific to target insects but not toxic to humans or non-target organisms. This study aimed to isolate B. thuringiensis from soils of Sekayu sub-district, Banyuasin, South Sumatra, and to test its toxicity to lepidopteran larvae. Fifty soil samples were taken from Musi Banyuasin District: 15 from Kayuare strip 2, 20 from Kayuare and 15 from Lumpatan. Isolation, characterization, identification and screening tests were conducted in the Pest and Disease laboratory, Faculty of Agriculture, Sriwijaya University. Isolate codes were assigned based on the area of origin of the samples. Of the 50 bacterial isolates obtained, 15 showed the same morphological and physiological characteristics as B. thuringiensis: round, white colonies with wrinkled edges, a smooth surface, raised elevation, aerobic growth, and gram-positive staining. The B. thuringiensis-positive isolates included the following codes: KJ2D5, KJ2N1, KJ2N4, KJ2B3, KJ3R1, KJ3R2, KJ3R3, KJ3R5, KJ3J3, KJ3J4, KJ3P1, DLM5, DLKK12, and DLKK23. Screening tests on insects of the order Lepidoptera showed that six isolates were toxic to Plutella xylostella and Spodoptera litura, namely isolate codes DLM5, KJ3R3, KJ3R5, KJ3J4, KJ3P1, and DLKK23.
ERIC Educational Resources Information Center
Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed
2013-01-01
Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…
Fundamental differences between optimization code test problems in engineering applications
NASA Technical Reports Server (NTRS)
Eason, E. D.
1984-01-01
The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
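One way to appreciate the Class One/Class Two distinction is to degrade the evaluation precision of an objective function artificially and compare a gradient-based method against a direct-search method. The sketch below is a hypothetical illustration only (not the codes or test problems from the study), using SciPy's BFGS and Nelder-Mead implementations on a quadratic whose value is rounded to a few significant digits:

```python
# Hypothetical sketch: effect of limited evaluation precision on two optimizers.
import numpy as np
from scipy.optimize import minimize

def objective(x, digits=None):
    f = (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2
    if digits is not None:  # "Class Two": objective known only to a few significant digits
        f = float(np.format_float_positional(f, precision=digits, fractional=False))
    return f

x0 = np.array([5.0, 5.0])
for digits in (None, 3):                     # None = full precision ("Class One")
    for method in ("BFGS", "Nelder-Mead"):   # gradient-based vs. direct search
        res = minimize(objective, x0, args=(digits,), method=method)
        print(digits, method, res.x, res.nfev)
```

With low-precision evaluations, finite-difference gradients become unreliable, which tends to penalize the gradient-based method more than the direct search, consistent with the observation in the abstract.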
Lupo, Michela; Ferlazzo, Fabio; Aloise, Fabio; Di Nocera, Francesco; Tedesco, Anna Maria; Cardillo, Chiara; Leggio, Maria
2018-04-27
Several studies have demonstrated that the processing of visuospatial memory for locations in reaching space and in navigational space is supported by independent systems, and that the coding of visuospatial information depends on the modality of the presentation (i.e., sequential or simultaneous). However, these lines of evidence and the most common neuropsychological tests used by clinicians to investigate visuospatial memory have several limitations (e.g., they are unable to analyze all the subcomponents of this function and are not directly comparable). Therefore, we developed a new battery of tests that is able to investigate these subcomponents. We recruited 71 healthy subjects who underwent sequential and simultaneous navigational tests by using an innovative sensorized platform, as well as comparable paper tests to evaluate the same components in reaching space (Exp. 1). Consistent with the literature, the principal-component method of analysis used in this study demonstrated the presence of distinct memory for sequences in different portions of space, but no distinction was found for simultaneous presentation, suggesting that different modalities of eye gaze exploration are used when subjects have to perform different types of tasks. For this purpose, an infrared Tobii Eye-Tracking X50 system was used in both spatial conditions (Exp. 2), showing that a clear effect of the presentation modality was due to the specific strategy used by subjects to explore the stimuli in space. Given these findings, the neuropsychological battery established in the present study allows us to show basic differences in the normal coding of stimuli, which can explain the specific visuospatial deficits found in various neurological conditions.
Kaplowitz, Stan A; Perlstadt, Harry; D'Onofrio, Gail; Melnick, Edward R; Baum, Carl R; Kirrane, Barbara M; Post, Lori A
2012-01-01
We derived a clinical decision rule for determining which young children need testing for lead poisoning. We developed an equation that combines lead exposure self-report questions with the child's census-block housing and socioeconomic characteristics, personal demographic characteristics, and Medicaid status. This equation better predicts elevated blood lead level (EBLL) than one using ZIP code and Medicaid status. A survey regarding potential lead exposure was administered from October 2001 to January 2003 to Michigan parents at pediatric clinics (n=3,396). These self-report survey data were linked to a statewide clinical registry of blood lead level (BLL) tests. Sensitivity and specificity were calculated and then used to estimate the cost-effectiveness of the equation. The census-block group prediction equation explained 18.1% of the variance in BLLs. Replacing block group characteristics with the self-report questions and dichotomized ZIP code risk explained only 12.6% of the variance. Adding three self-report questions to the census-block group model increased the variance explained to 19.9% and increased specificity with no loss in sensitivity in detecting EBLLs of ≥ 10 micrograms per deciliter. Relying solely on self-reports of lead exposure predicted BLL less effectively than the block group model. However, adding three of 13 self-report questions to our clinical decision rule significantly improved prediction of which children require a BLL test. Using the equation as the clinical decision rule would annually eliminate more than 7,200 unnecessary tests in Michigan and save more than $220,000.
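A prediction equation of this kind is typically a regression model that combines individual survey responses with area-level covariates and applies a threshold to the predicted risk. The sketch below is a hypothetical illustration of that structure using scikit-learn; the variable names, data, and threshold are invented and are not the Michigan model:

```python
# Hypothetical sketch of a lead-testing decision rule: logistic regression
# combining self-report answers with census-block-group covariates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(0, 2, n),   # self-report: pre-1950 housing (yes/no)
    rng.integers(0, 2, n),   # self-report: peeling paint (yes/no)
    rng.integers(0, 2, n),   # Medicaid enrollment (yes/no)
    rng.random(n),           # block-group share of older housing
    rng.random(n),           # block-group poverty rate
])
y = rng.integers(0, 2, n)    # 1 = elevated blood lead level (invented labels)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba(X)[:, 1]
recommend_test = risk >= 0.05   # threshold trades sensitivity against tests avoided
print(recommend_test.mean())
```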
Effect of two doses of ginkgo biloba extract (EGb 761) on the dual-coding test in elderly subjects.
Allain, H; Raoul, P; Lieury, A; LeCoz, F; Gandon, J M; d'Arbigny, P
1993-01-01
The subjects of this double-blind study were 18 elderly men and women (mean age, 69.3 years) with slight age-related memory impairment. In a crossover-study design, each subject received placebo or an extract of Ginkgo biloba (EGb 761) (320 mg or 600 mg) 1 hour before performing a dual-coding test that measures the speed of information processing; the test consists of several coding series of drawings and words presented at decreasing times of 1920, 960, 480, 240, and 120 ms. The dual-coding phenomenon (a break point between coding verbal material and images) was demonstrated in all the tests. After placebo, the break point was observed at 960 ms and dual coding beginning at 1920 ms. After each dose of the ginkgo extract, the break point (at 480 ms) and dual coding (at 960 ms) were significantly shifted toward a shorter presentation time, indicating an improvement in the speed of information processing.
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Standardized Definitions for Code Verification Test Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doebling, Scott William
This document contains standardized definitions for several commonly used code verification test problems. These definitions are intended to contain sufficient information to set up the test problem in a computational physics code. These definitions are intended to be used in conjunction with exact solutions to these problems generated using ExactPack (www.github.com/lanl/exactpack).
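Code verification of this kind usually reduces to computing an error norm of the numerical solution against the exact solution and checking the observed order of convergence under grid refinement. A minimal, generic sketch of that bookkeeping (not tied to ExactPack's API, which should be consulted directly):

```python
# Generic verification sketch: discrete L2 error against an exact solution and
# observed convergence order from errors at two grid resolutions.
import numpy as np

def l2_error(numerical, exact, dx):
    return np.sqrt(np.sum((numerical - exact) ** 2) * dx)

def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
    return np.log(err_coarse / err_fine) / np.log(refinement_ratio)

# Illustrative errors for a nominally second-order scheme at 100 and 200 cells.
errs = {100: 4.0e-4, 200: 1.0e-4}            # hypothetical values
print(observed_order(errs[100], errs[200]))   # -> ~2.0, as expected for 2nd order
```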
Verification and benchmark testing of the NUFT computer code
NASA Astrophysics Data System (ADS)
Lee, K. H.; Nitao, J. J.; Kulshrestha, A.
1993-10-01
This interim report presents results of work completed in the ongoing verification and benchmark testing of the NUFT (Nonisothermal Unsaturated-saturated Flow and Transport) computer code. NUFT is a suite of multiphase, multicomponent models for numerical solution of thermal and isothermal flow and transport in porous media, with application to subsurface contaminant transport problems. The code simulates the coupled transport of heat, fluids, and chemical components, including volatile organic compounds. Grid systems may be cartesian or cylindrical, with one-, two-, or fully three-dimensional configurations possible. In this initial phase of testing, the NUFT code was used to solve seven one-dimensional unsaturated flow and heat transfer problems. Three verification and four benchmarking problems were solved. In the verification testing, excellent agreement was observed between NUFT results and the analytical or quasianalytical solutions. In the benchmark testing, results of code intercomparison were very satisfactory. From these testing results, it is concluded that the NUFT code is ready for application to field and laboratory problems similar to those addressed here. Multidimensional problems, including those dealing with chemical transport, will be addressed in a subsequent report.
Nada: A new code for studying self-gravitating tori around black holes
NASA Astrophysics Data System (ADS)
Montero, Pedro J.; Font, José A.; Shibata, Masaru
2008-09-01
We present a new two-dimensional numerical code called Nada designed to solve the full Einstein equations coupled to the general relativistic hydrodynamics equations. The code is mainly intended for studies of self-gravitating accretion disks (or tori) around black holes, although it is also suitable for regular spacetimes. Concerning technical aspects, the Einstein equations are formulated and solved in the code using a formulation of the standard 3+1 Arnowitt-Deser-Misner canonical formalism, the so-called Baumgarte-Shapiro-Shibata-Nakamura approach. A key feature of the code is that derivative terms in the spacetime evolution equations are computed using a fourth-order centered finite difference approximation in conjunction with the Cartoon method to impose the axisymmetry condition under Cartesian coordinates (the choice in Nada), and the puncture/moving puncture approach to carry out black hole evolutions. Correspondingly, the general relativistic hydrodynamics equations are written in flux-conservative form and solved with high-resolution, shock-capturing schemes. We perform and discuss a number of tests to assess the accuracy and expected convergence of the code, namely, (single) black hole evolutions, shock tubes, evolutions of both spherical and rotating relativistic stars in equilibrium, and the gravitational collapse of a spherical relativistic star leading to the formation of a black hole. In addition, paving the way for specific applications of the code, we also present results from fully general relativistic numerical simulations of a system formed by a black hole surrounded by a self-gravitating torus in equilibrium.
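The fourth-order centered stencil mentioned above approximates a first derivative from two neighbors on each side of a grid point. A small stand-alone sketch of that stencil (illustrative only, not code from Nada):

```python
# Fourth-order centered finite-difference approximation of df/dx:
# f'(x) ~ (-f(x+2h) + 8 f(x+h) - 8 f(x-h) + f(x-2h)) / (12 h)
import numpy as np

def d_dx_4th(f, h):
    """Interior-point derivative of a sampled 1-D array f with spacing h."""
    df = np.empty_like(f)
    df[2:-2] = (-f[4:] + 8 * f[3:-1] - 8 * f[1:-3] + f[:-4]) / (12 * h)
    df[:2] = np.nan    # boundary points would need one-sided stencils
    df[-2:] = np.nan
    return df

x = np.linspace(0, 2 * np.pi, 101)
h = x[1] - x[0]
err = np.nanmax(np.abs(d_dx_4th(np.sin(x), h) - np.cos(x)))
print(err)   # error scales as h**4 under grid refinement
```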
Modeling radiation belt dynamics using a 3-D layer method code
NASA Astrophysics Data System (ADS)
Wang, C.; Ma, Q.; Tao, X.; Zhang, Y.; Teng, S.; Albert, J. M.; Chan, A. A.; Li, W.; Ni, B.; Lu, Q.; Wang, S.
2017-08-01
A new 3-D diffusion code using a recently published layer method has been developed to analyze radiation belt electron dynamics. The code guarantees the positivity of the solution even when mixed diffusion terms are included. Unlike most of the previous codes, our 3-D code is developed directly in equatorial pitch angle (α0), momentum (p), and L shell coordinates; this eliminates the need to transform back and forth between (α0,p) coordinates and adiabatic invariant coordinates. Using (α0,p,L) is also convenient for direct comparison with satellite data. The new code has been validated by various numerical tests, and we apply the 3-D code to model the rapid electron flux enhancement following the geomagnetic storm on 17 March 2013, which is one of the Geospace Environment Modeling Focus Group challenge events. An event-specific global chorus wave model, an AL-dependent statistical plasmaspheric hiss wave model, and a recently published radial diffusion coefficient formula from Time History of Events and Macroscale Interactions during Substorms (THEMIS) statistics are used. The simulation results show good agreement with satellite observations, in general, supporting the scenario that the rapid enhancement of radiation belt electron flux for this event results from an increased level of the seed population by radial diffusion, with subsequent acceleration by chorus waves. Our results prove that the layer method can be readily used to model global radiation belt dynamics in three dimensions.
Villanueva, Pía; Nudel, Ron; Hoischen, Alexander; Fernández, María Angélica; Simpson, Nuala H.; Gilissen, Christian; Reader, Rose H.; Jara, Lillian; Echeverry, Maria Magdalena; Francks, Clyde; Baird, Gillian; Conti-Ramsden, Gina; O’Hare, Anne; Bolton, Patrick F.; Hennessy, Elizabeth R.; Palomino, Hernán; Carvajal-Carmona, Luis; Veltman, Joris A.; Cazier, Jean-Baptiste; De Barbieri, Zulema
2015-01-01
Children affected by Specific Language Impairment (SLI) fail to acquire age appropriate language skills despite adequate intelligence and opportunity. SLI is highly heritable, but the understanding of underlying genetic mechanisms has proved challenging. In this study, we use molecular genetic techniques to investigate an admixed isolated founder population from the Robinson Crusoe Island (Chile), who are affected by a high incidence of SLI, increasing the power to discover contributory genetic factors. We utilize exome sequencing in selected individuals from this population to identify eight coding variants that are of putative significance. We then apply association analyses across the wider population to highlight a single rare coding variant (rs144169475, Minor Allele Frequency of 4.1% in admixed South American populations) in the NFXL1 gene that confers a nonsynonymous change (N150K) and is significantly associated with language impairment in the Robinson Crusoe population (p = 2.04 × 10–4, 8 variants tested). Subsequent sequencing of NFXL1 in 117 UK SLI cases identified four individuals with heterozygous variants predicted to be of functional consequence. We conclude that coding variants within NFXL1 confer an increased risk of SLI within a complex genetic model. PMID:25781923
Benyo, B; Biro, J C; Benyo, Z
2004-01-01
The theory of "codon-amino acid coevolution" was first proposed by Woese in 1967. It suggests that there is a stereochemical matching - that is, affinity - between amino acids and certain of the base triplet sequences that code for those amino acids. We have constructed a common periodic table of codons and amino acids, where the nucleic acid table showed perfect axial symmetry for codons and the corresponding amino acid table also displayed periodicity regarding the biochemical properties (charge and hydrophobicity) of the 20 amino acids and the position of the stop signals. The table indicates that the middle (2nd) base of the codon has a prominent role in determining some of the structural features of the amino acids. The possibility that physical contact between codons and amino acids might exist was tested on restriction enzymes. Many recognition site-like sequences were found in the coding sequences of these enzymes and as many as 73 examples of codon-amino acid co-location were observed in the 7 known 3D structures (December 2003) of endonuclease-nucleic acid complexes. These results indicate that the smallest possible units of specific nucleic acid-protein interaction are indeed the stereochemically compatible codons and amino acids.
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc
1998-01-01
For long linear block codes, maximum likelihood decoding based on full code trellises would be very hard to implement if not impossible. In this case, we may wish to trade error performance for the reduction in decoding complexity. Sub-optimum soft-decision decoding of a linear block code based on a low-weight sub-trellis can be devised to provide an effective trade-off between error performance and decoding complexity. This chapter presents such a suboptimal decoding algorithm for linear block codes. This decoding algorithm is iterative in nature and based on an optimality test. It has the following important features: (1) a simple method to generate a sequence of candidate code-words, one at a time, for test; (2) a sufficient condition for testing a candidate code-word for optimality; and (3) a low-weight sub-trellis search for finding the most likely (ML) code-word.
Users manual for updated computer code for axial-flow compressor conceptual design
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
An existing computer code that determines the flow path for an axial-flow compressor either for a given number of stages or for a given overall pressure ratio was modified for use in air-breathing engine conceptual design studies. This code uses a rapid approximate design methodology that is based on isentropic simple radial equilibrium. Calculations are performed at constant-span-fraction locations from tip to hub. Energy addition per stage is controlled by specifying the maximum allowable values for several aerodynamic design parameters. New modeling was introduced to the code to overcome perceived limitations. Specific changes included variable rather than constant tip radius, flow path inclination added to the continuity equation, input of mass flow rate directly rather than indirectly as inlet axial velocity, solution for the exact value of overall pressure ratio rather than for any value that met or exceeded it, and internal computation of efficiency rather than the use of input values. The modified code was shown to be capable of computing efficiencies that are compatible with those of five multistage compressors and one fan that were tested experimentally. This report serves as a users manual for the revised code, Compressor Spanline Analysis (CSPAN). The modeling modifications, including two internal loss correlations, are presented. Program input and output are described. A sample case for a multistage compressor is included.
Studying the genetic basis of speciation in high gene flow marine invertebrates
2016-01-01
A growing number of genes responsible for reproductive incompatibilities between species (barrier loci) exhibit the signals of positive selection. However, the possibility that genes experiencing positive selection diverge early in speciation and commonly cause reproductive incompatibilities has not been systematically investigated on a genome-wide scale. Here, I outline a research program for studying the genetic basis of speciation in broadcast spawning marine invertebrates that uses a priori genome-wide information on a large, unbiased sample of genes tested for positive selection. A targeted sequence capture approach is proposed that scores single-nucleotide polymorphisms (SNPs) in widely separated species populations at an early stage of allopatric divergence. The targeted capture of both coding and non-coding sequences enables SNPs to be characterized at known locations across the genome and at genes with known selective or neutral histories. The neutral coding and non-coding SNPs provide robust background distributions for identifying FST-outliers within genes that can, in principle, identify specific mutations experiencing diversifying selection. If natural hybridization occurs between species, the neutral coding and non-coding SNPs can provide a neutral admixture model for genomic clines analyses aimed at finding genes exhibiting strong blocks to introgression. Strongylocentrotid sea urchins are used as a model system to outline the approach but it can be used for any group that has a complete reference genome available. PMID:29491951
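FST-outlier scans of the kind proposed above compare a per-SNP divergence estimate against a background distribution built from putatively neutral sites. A minimal sketch of a simple two-population FST estimator with an empirical outlier cutoff (illustrative only; published estimators such as Weir and Cockerham's differ in detail):

```python
# Minimal two-population FST per SNP: FST = (HT - HS) / HT, where HS is the mean
# within-population expected heterozygosity and HT uses the pooled allele
# frequency. Neutral coding/non-coding SNPs supply the background distribution.
import numpy as np

def fst(p1, p2):
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    hs = 0.5 * (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2))
    pbar = 0.5 * (p1 + p2)
    ht = 2 * pbar * (1 - pbar)
    return np.where(ht > 0, (ht - hs) / ht, 0.0)

rng = np.random.default_rng(1)
p1 = rng.random(5000)                                    # neutral background frequencies
p2 = np.clip(p1 + rng.normal(0, 0.05, 5000), 0, 1)       # mild neutral divergence
neutral = fst(p1, p2)
candidates = fst(np.array([0.05, 0.50]), np.array([0.90, 0.55]))
cutoff = np.quantile(neutral, 0.99)                       # empirical 99th percentile
print(candidates > cutoff)                                # flag putative FST outliers
```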
An embedded barcode for "connected" malaria rapid diagnostic tests.
Scherr, Thomas F; Gupta, Sparsh; Wright, David W; Haselton, Frederick R
2017-03-29
Many countries are shifting their efforts from malaria control to disease elimination. New technologies will be necessary to meet the more stringent demands of elimination campaigns, including improved quality control of malaria diagnostic tests, as well as an improved means for communicating test results among field healthcare workers, test manufacturers, and national ministries of health. In this report, we describe and evaluate an embedded barcode within standard rapid diagnostic tests as one potential solution. This information-augmented diagnostic test operates on the familiar principles of traditional lateral flow assays and simply replaces the control line with a control grid patterned in the shape of a QR (quick response) code. After the test is processed, the QR code appears on both positive and negative tests. In this report we demonstrate how this multipurpose code can be used not only to fulfill the control line role of test validation, but also to embed test manufacturing details, serve as a trigger for image capture, enable registration for image analysis, and correct for lighting effects. An accompanying mobile phone application automatically captures an image of the test when the QR code is recognized, decodes the QR code, performs image processing to determine the concentration of the malarial biomarker histidine-rich protein 2 at the test line, and transmits the test results and QR code payload to a secure web portal. This approach blends automated, sub-nanomolar biomarker detection with near real-time reporting to provide quality assurance data that will help to achieve malaria elimination.
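On the phone side, the trigger-and-decode step described above can be prototyped with OpenCV's built-in QR detector. The sketch below is a hypothetical illustration of that step only; capture, registration, lighting correction, and the HRP2 quantification are not shown, and the function names are not from the authors' application:

```python
# Hypothetical sketch: detect and decode the embedded QR control grid in a
# photographed lateral-flow test, and treat a successful decode as the trigger
# for the rest of the image analysis.
import cv2

def read_control_code(image_path):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    detector = cv2.QRCodeDetector()
    payload, corners, _ = detector.detectAndDecode(img)
    if not payload:
        return None            # no valid control grid: test invalid or retake image
    return payload, corners    # payload: manufacturing details; corners: registration

result = read_control_code("rdt_photo.jpg")
print("decoded:" if result else "no QR found", result)
```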
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wodtke, C.H.; Frizzell, D.R.; Plunkett, W.A.
1986-06-01
Procedure WPS-1003 is qualified under Section IX of the ASME Boiler and Pressure Vessel Code for gas tungsten arc welding of aluminum alloys 6061 and 6063 (P-23), in thickness range 0.035 to 0.516 inch; filler metal is ER4043 (F-23) or ER5356 (F-22); shielding gas is argon.
West, Devin M.; McCauley, Lindsay M.; Sorensen, Jeffrey S.; Jephson, Al R.
2016-01-01
The pneumococcal urine antigen test increases specific microbiological diagnosis over conventional culture methods in pneumonia patients. Data are limited regarding its yield and effect on antibiotic prescribing among patients with community-onset pneumonia in clinical practice. We performed a secondary analysis of 2837 emergency department patients admitted to seven Utah hospitals over 2 years with International Classification of Diseases, version 9 (ICD-9) diagnostic codes and radiographic evidence of pneumonia. Mean age was 64.2 years, 47.2% were male and all-cause 30-day mortality was 9.6%. Urinary antigen testing was performed in 1110 (39%) patients, yielding 134 (12%) positives. Intensive care unit patients were more likely to undergo testing and to have a positive result (15% versus 8.8% for ward patients; p<0.01). Patients with risk factors for healthcare-associated pneumonia had fewer urinary antigen tests performed, but 8.4% were positive. Physicians changed to targeted antibiotic therapy in 20 (15%) patients and de-escalated antibiotic therapy in 76 (57%). In 38 (28%) patients, antibiotics were not changed. Only one patient who was changed to targeted therapy suffered clinical relapse. Length of stay and mortality were lower in patients receiving targeted therapy. Pneumococcal urinary antigen testing is an inexpensive, noninvasive test that favourably influenced antibiotic prescribing in a “real world”, multi-hospital observational study. PMID:28053969
Definition of a prospective payment system to reimburse emergency departments.
Levaggi, Rosella; Montefiori, Marcello
2013-10-11
Payers are increasingly turning to Prospective Payment Systems (PPSs) because they incentivize efficiency, but their application to emergency departments (EDs) is difficult because of the high level of uncertainty and variability in the cost of treating each patient. To the best of our knowledge, our work represents the first attempt at defining a PPS for this part of hospital activity. Data were specifically collected for this study and relate to 1011 patients who were triaged at an ED of a major Italian hospital, during 1 week in December 2010. The cost for each patient was analytically estimated by adding up several components: 1) physician and other staff costs that were imputed on the basis of the time each physician claimed to have spent treating the patient; 2) the cost for each test/treatment each patient actually underwent; 3) overhead costs, shared among patients using the time elapsed between first examination and discharge from the ED. The distribution of costs by triage code shows that, although the average cost increases across the four triage groups, the variance within each code is quite high. The maximum cost for a yellow code is €1074.7, compared with €680 for red, the most serious code. Using cluster analysis, the red code cluster is enveloped by yellow, and their costs are therefore indistinguishable, while green codes span all cost groups. This suggests that triage code alone is not a good proxy for the patient cost, and that other cost drivers need to be included. Crude triage codes cannot be used to define PPSs because they are not sufficiently correlated with costs and are characterized by large variances. However, if combined with other information, such as the number of laboratory and non-laboratory tests/examinations, it is possible to define cost groups that are sufficiently homogeneous to be reimbursed prospectively. This should discourage strategic behavior and allow the ED to break even or create profits, which can be reinvested to improve services. The study provides health policy administrators with a new and feasible tool to implement prospective payment for EDs, and improve planning and cost control.
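The per-patient cost construction described above is a simple sum of three components. A hypothetical sketch of that bookkeeping, with field names and rates invented purely for illustration:

```python
# Hypothetical sketch of the per-patient ED cost build-up:
# staff time + tests/treatments + time-based share of overheads.
def patient_cost(staff_minutes, staff_rate_per_min,
                 test_costs, minutes_in_ed, overhead_rate_per_min):
    staff = staff_minutes * staff_rate_per_min         # time staff actually spent on the patient
    tests = sum(test_costs)                             # each test/treatment actually performed
    overhead = minutes_in_ed * overhead_rate_per_min    # shared by time from first exam to discharge
    return staff + tests + overhead

# Illustrative yellow-code patient: 35 staff-minutes, three tests, 4 hours in the ED.
print(patient_cost(35, 1.2, [18.0, 42.5, 95.0], 240, 0.4))
```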
Development of real-time software environments for NASA's modern telemetry systems
NASA Technical Reports Server (NTRS)
Horner, Ward; Sabia, Steve
1989-01-01
An effort has been made to maintain maximum performance and flexibility for NASA-Goddard's VLSI telemetry system elements through the development of two real-time systems: (1) the Base System Environment, which supports generic system integration and furnishes the basic porting of various manufacturers' cards, and (2) the Modular Environment for Data Systems, which supports application-specific developments and furnishes designers with a set of tested generic library functions that can be employed to speed up the development of such application-specific real-time codes. The performance goals and design rationale for these two systems are discussed.
The Standard Hydrometeorological Exchange Format (SHEF) is a documented set of rules for coding data in a form suited to both visual and computer recognition, together with the information needed to describe the data. The current SHEF specification, installation and usage instructions, and the source code (.gz archive) are available for download.
ERIC Educational Resources Information Center
Moral, Cristian; de Antonio, Angelica; Ferre, Xavier; Lara, Graciela
2015-01-01
Introduction: In this article we propose a qualitative analysis tool--a coding system--that can support the formalisation of the information-seeking process in a specific field: research in computer science. Method: In order to elaborate the coding system, we have conducted a set of qualitative studies, more specifically a focus group and some…
van der Mei, Sijrike F; Dijkers, Marcel P J M; Heerkens, Yvonne F
2011-12-01
To examine to what extent the concept and the domains of participation as defined in the International Classification of Functioning, Disability and Health (ICF) are represented in general cancer-specific health-related quality of life (HRQOL) instruments. Using the ICF linking rules, two coders independently extracted the meaningful concepts of ten instruments and linked these to ICF codes. The proportion of concepts that could be linked to ICF codes ranged from 68 to 95%. Although all instruments contained concepts linked to Participation (Chapters d7-d9 of the classification of 'Activities and Participation'), the instruments covered only a small part of all available ICF codes. The proportion of ICF codes in the instruments that were participation related ranged from 3 to 35%. 'Major life areas' (d8) was the most frequently used Participation Chapter, with d850 'remunerative employment' as the most used ICF code. The number of participation-related ICF codes covered in the instruments is limited. General cancer-specific HRQOL instruments only assess social life of cancer patients to a limited degree. This study's information on the content of these instruments may guide researchers in selecting the appropriate instrument for a specific research purpose.
Software engineering for ESO's VLT project
NASA Astrophysics Data System (ADS)
Filippi, G.
1994-12-01
This paper reports on the experience at the European Southern Observatory on the application of software engineering techniques to a 200 man-year control software project for the Very Large Telescope (VLT). This shall provide astronomers, before the end of the century, with one of the most powerful telescopes in the world. From the definition of the general model, described in the software management plan, specific activities have been and will be defined: standards for documents and for code development, design approach using a CASE tool, the process of reviewing both documentation and code, quality assurance, test strategy, etc. The initial choices, the current implementation and the future planned activities are presented and, where feedback is already available, pros and cons are discussed.
TANDEM: matching proteins with tandem mass spectra.
Craig, Robertson; Beavis, Ronald C
2004-06-12
Tandem mass spectra obtained from fragmenting peptide ions contain some peptide sequence specific information, but often there is not enough information to sequence the original peptide completely. Several proprietary software applications have been developed to attempt to match the spectra with a list of protein sequences that may contain the sequence of the peptide. The application TANDEM was written to provide the proteomics research community with a set of components that can be used to test new methods and algorithms for performing this type of sequence-to-data matching. The source code and binaries for this software are available at http://www.proteome.ca/opensource.html, for Windows, Linux and Macintosh OSX. The source code is made available under the Artistic License, from the authors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Driscoll, Frederick R.
The University of Washington (UW) - Northwest National Marine Renewable Energy Center (UW-NNMREC) and the National Renewable Energy Laboratory (NREL) will collaborate to advance research and development (R&D) of Marine Hydrokinetic (MHK) renewable energy technology, specifically renewable energy captured from ocean tidal currents. UW-NNMREC is endeavoring to establish infrastructure, capabilities and tools to support in-water testing of marine energy technology. NREL is leveraging its experience and capabilities in field testing of wind systems to develop protocols and instrumentation to advance field testing of MHK systems. Under this work, UW-NNMREC and NREL will work together to develop a common instrumentation system and testing methodologies, standards and protocols. UW-NNMREC is also establishing simulation capabilities for MHK turbine and turbine arrays. NREL has extensive experience in wind turbine array modeling and is developing several computer-based numerical simulation capabilities for MHK systems. Under this CRADA, UW-NNMREC and NREL will work together to augment single device and array modeling codes. As part of this effort UW-NNMREC will also work with NREL to run simulations on NREL's high performance computer system.
Constrained-transport Magnetohydrodynamics with Adaptive Mesh Refinement in CHARM
NASA Astrophysics Data System (ADS)
Miniati, Francesco; Martin, Daniel F.
2011-07-01
We present the implementation of a three-dimensional, second-order accurate Godunov-type algorithm for magnetohydrodynamics (MHD) in the adaptive-mesh-refinement (AMR) cosmological code CHARM. The algorithm is based on the full 12-solve spatially unsplit corner-transport-upwind (CTU) scheme. The fluid quantities are cell-centered and are updated using the piecewise-parabolic method (PPM), while the magnetic field variables are face-centered and are evolved through application of the Stokes theorem on cell edges via a constrained-transport (CT) method. The so-called multidimensional MHD source terms required in the predictor step for high-order accuracy are applied in a simplified form which reduces their complexity in three dimensions without loss of accuracy or robustness. The algorithm is implemented on an AMR framework which requires specific synchronization steps across refinement levels. These include face-centered restriction and prolongation operations and a reflux-curl operation, which maintains a solenoidal magnetic field across refinement boundaries. The code is tested against a large suite of test problems, including convergence tests in smooth flows, shock-tube tests, classical two- and three-dimensional MHD tests, a three-dimensional shock-cloud interaction problem, and the formation of a cluster of galaxies in a fully cosmological context. The magnetic field divergence is shown to remain negligible throughout.
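A constrained-transport update is usually checked by monitoring the discrete divergence of the face-centered field, which should remain at machine precision. A small generic sketch of that diagnostic on a uniform Cartesian grid (illustrative only, not CHARM code):

```python
# Discrete divergence of a face-centered magnetic field on a uniform grid:
# div B in cell (i,j,k) = (Bx[i+1]-Bx[i])/dx + (By[j+1]-By[j])/dy + (Bz[k+1]-Bz[k])/dz
import numpy as np

def cell_divergence(bx, by, bz, dx, dy, dz):
    """bx, by, bz are face-centered components with shapes
    (nx+1, ny, nz), (nx, ny+1, nz), (nx, ny, nz+1)."""
    return ((bx[1:, :, :] - bx[:-1, :, :]) / dx +
            (by[:, 1:, :] - by[:, :-1, :]) / dy +
            (bz[:, :, 1:] - bz[:, :, :-1]) / dz)

nx = ny = nz = 16
dx = dy = dz = 1.0 / nx
# A trivially divergence-free field: uniform Bx, zero By and Bz.
bx = np.ones((nx + 1, ny, nz))
by = np.zeros((nx, ny + 1, nz))
bz = np.zeros((nx, ny, nz + 1))
print(np.max(np.abs(cell_divergence(bx, by, bz, dx, dy, dz))))   # ~0 to machine precision
```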
Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code
NASA Astrophysics Data System (ADS)
Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.
2015-12-01
WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields and motions in 6 DOF, as well as multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time-series, state space radiation, and WEC-Sim compatibility with BEMIO (open source AQWA/WAMIT/NEMOH coefficient parser).
Esophageal function testing: Billing and coding update.
Khan, A; Massey, B; Rao, S; Pandolfino, J
2018-01-01
Esophageal function testing is being increasingly utilized in diagnosis and management of esophageal disorders. There have been several recent technological advances in the field to allow practitioners the ability to more accurately assess and treat such conditions, but there has been a relative lack of education in the literature regarding the associated Common Procedural Terminology (CPT) codes and methods of reimbursement. This review, commissioned and supported by the American Neurogastroenterology and Motility Society Council, aims to summarize each of the CPT codes for esophageal function testing and show the trends of associated reimbursement, as well as recommend coding methods in a practical context. We also aim to encourage many of these codes to be reviewed on a gastrointestinal (GI) societal level, by providing evidence of both discrepancies in coding definitions and inadequate reimbursement in this new era of esophageal function testing. © 2017 John Wiley & Sons Ltd.
Subjective evaluation of next-generation video compression algorithms: a case study
NASA Astrophysics Data System (ADS)
De Simone, Francesca; Goldmann, Lutz; Lee, Jong-Seok; Ebrahimi, Touradj; Baroncini, Vittorio
2010-08-01
This paper describes the details and the results of the subjective quality evaluation performed at EPFL, as a contribution to the effort of the Joint Collaborative Team on Video Coding (JCT-VC) for the definition of the next-generation video coding standard. The performance of 27 coding technologies have been evaluated with respect to two H.264/MPEG-4 AVC anchors, considering high definition (HD) test material. The test campaign involved a total of 494 naive observers and took place over a period of four weeks. While similar tests have been conducted as part of the standardization process of previous video coding technologies, the test campaign described in this paper is by far the most extensive in the history of video coding standardization. The obtained subjective quality scores show high consistency and support an accurate comparison of the performance of the different coding solutions.
NASA Technical Reports Server (NTRS)
Iscoe, Neil; Liu, Zheng-Yang; Feng, Guohui; Yenne, Britt; Vansickle, Larry; Ballantyne, Michael
1992-01-01
Domain-specific knowledge is required to create specifications, generate code, and understand existing systems. Our approach to automating software design is based on instantiating an application domain model with industry-specific knowledge and then using that model to achieve the operational goals of specification elicitation and verification, reverse engineering, and code generation. Although many different specification models can be created from any particular domain model, each specification model is consistent and correct with respect to the domain model.
Signatures of criticality arise from random subsampling in simple population models.
Nonnenmacher, Marcel; Behrens, Christian; Berens, Philipp; Bethge, Matthias; Macke, Jakob H
2017-10-01
The rise of large-scale recordings of neuronal activity has fueled the hope to gain new insights into the collective activity of neural ensembles. How can one link the statistics of neural population activity to underlying principles and theories? One attempt to interpret such data builds upon analogies to the behaviour of collective systems in statistical physics. Divergence of the specific heat-a measure of population statistics derived from thermodynamics-has been used to suggest that neural populations are optimized to operate at a "critical point". However, these findings have been challenged by theoretical studies which have shown that common inputs can lead to diverging specific heat. Here, we connect "signatures of criticality", and in particular the divergence of specific heat, back to statistics of neural population activity commonly studied in neural coding: firing rates and pairwise correlations. We show that the specific heat diverges whenever the average correlation strength does not depend on population size. This is necessarily true when data with correlations is randomly subsampled during the analysis process, irrespective of the detailed structure or origin of correlations. We also show how the characteristic shape of specific heat capacity curves depends on firing rates and correlations, using both analytically tractable models and numerical simulations of a canonical feed-forward population model. To analyze these simulations, we develop efficient methods for characterizing large-scale neural population activity with maximum entropy models. We find that, consistent with experimental findings, increases in firing rates and correlation directly lead to more pronounced signatures. Thus, previous reports of thermodynamical criticality in neural populations based on the analysis of specific heat can be explained by average firing rates and correlations, and are not indicative of an optimized coding strategy. We conclude that a reliable interpretation of statistical tests for theories of neural coding is possible only in reference to relevant ground-truth models.
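The key claim is that the specific heat diverges whenever the average pairwise correlation does not shrink with population size, which is exactly what random subsampling of a correlated population produces. This can be illustrated with a toy common-input model; the sketch below is hypothetical and is not the authors' dichotomized-Gaussian model or their maximum entropy fits:

```python
# Toy illustration: under random subsampling, the average pairwise correlation
# of a common-input population does not depend on the subsample size n.
import numpy as np

rng = np.random.default_rng(0)
T, N = 20000, 500
common = rng.normal(size=(T, 1))                        # shared input to all neurons
private = rng.normal(size=(T, N))                       # independent private noise
spikes = (0.6 * common + private > 1.0).astype(float)   # binary activity with correlations

def mean_pairwise_corr(x):
    c = np.corrcoef(x, rowvar=False)
    off_diagonal = c[~np.eye(c.shape[0], dtype=bool)]
    return off_diagonal.mean()

for n in (20, 50, 100, 200):
    sub = spikes[:, rng.choice(N, size=n, replace=False)]
    print(n, round(mean_pairwise_corr(sub), 3))          # roughly constant across n
```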
Numerical relativity for D dimensional axially symmetric space-times: Formalism and code tests
NASA Astrophysics Data System (ADS)
Zilhão, Miguel; Witek, Helvi; Sperhake, Ulrich; Cardoso, Vitor; Gualtieri, Leonardo; Herdeiro, Carlos; Nerozzi, Andrea
2010-04-01
The numerical evolution of Einstein’s field equations in a generic background has the potential to answer a variety of important questions in physics: from applications to the gauge-gravity duality, to modeling black hole production in TeV gravity scenarios, to analysis of the stability of exact solutions, and to tests of cosmic censorship. In order to investigate these questions, we extend numerical relativity to more general space-times than those investigated hitherto, by developing a framework to study the numerical evolution of D dimensional vacuum space-times with an SO(D-2) isometry group for D≥5, or SO(D-3) for D≥6. Performing a dimensional reduction on a (D-4) sphere, the D dimensional vacuum Einstein equations are rewritten as a 3+1 dimensional system with source terms, and presented in the Baumgarte, Shapiro, Shibata, and Nakamura formulation. This allows the use of existing 3+1 dimensional numerical codes with small adaptations. Brill-Lindquist initial data are constructed in D dimensions and a procedure to match them to our 3+1 dimensional evolution equations is given. We have implemented our framework by adapting the Lean code and perform a variety of simulations of nonspinning black hole space-times. Specifically, we present a modified moving puncture gauge, which facilitates long-term stable simulations in D=5. We further demonstrate the internal consistency of the code by studying convergence and comparing numerical versus analytic results in the case of geodesic slicing for D=5, 6.
Nicholson, Amanda; Ford, Elizabeth; Davies, Kevin A.; Smith, Helen E.; Rait, Greta; Tate, A. Rosemary; Petersen, Irene; Cassell, Jackie
2013-01-01
Background Research using electronic health records (EHRs) relies heavily on coded clinical data. Due to variation in coding practices, it can be difficult to aggregate the codes for a condition in order to define cases. This paper describes a methodology to develop ‘indicator markers’ found in patients with early rheumatoid arthritis (RA); these are a broader range of codes which may allow a probabilistic case definition to use in cases where no diagnostic code is yet recorded. Methods We examined EHRs of 5,843 patients in the General Practice Research Database, aged ≥30y, with a first coded diagnosis of RA between 2005 and 2008. Lists of indicator markers for RA were developed initially by panels of clinicians drawing up code-lists and then modified based on scrutiny of available data. The prevalence of indicator markers, and their temporal relationship to RA codes, was examined in patients from 3y before to 14d after recorded RA diagnosis. Findings Indicator markers were common throughout EHRs of RA patients, with 83.5% having 2 or more markers. 34% of patients received a disease-specific prescription before RA was coded; 42% had a referral to rheumatology, and 63% had a test for rheumatoid factor. 65% had at least one joint symptom or sign recorded and in 44% this was at least 6-months before recorded RA diagnosis. Conclusion Indicator markers of RA may be valuable for case definition in cases which do not yet have a diagnostic code. The clinical diagnosis of RA is likely to occur some months before it is coded, shown by markers frequently occurring ≥6 months before recorded diagnosis. It is difficult to differentiate delay in diagnosis from delay in recording. Information concealed in free text may be required for the accurate identification of patients and to assess the quality of care in general practice. PMID:23451024
ERIC Educational Resources Information Center
Davis, Colin J.; Bowers, Jeffrey S.
2006-01-01
Five theories of how letter position is coded are contrasted: position-specific slot-coding, Wickelcoding, open-bigram coding (discrete and continuous), and spatial coding. These theories make different predictions regarding the relative similarity of three different types of pairs of letter strings: substitution neighbors,…
Core Physics and Kinetics Calculations for the Fissioning Plasma Core Reactor
NASA Technical Reports Server (NTRS)
Butler, C.; Albright, D.
2007-01-01
Highly efficient, compact nuclear reactors would provide high specific impulse spacecraft propulsion. This analysis and numerical simulation effort has focused on the technical feasibility issues related to the nuclear design characteristics of a novel reactor design. The Fissioning Plasma Core Reactor (FPCR) is a shockwave-driven gaseous-core nuclear reactor, which uses magnetohydrodynamic effects to generate electric power to be used for propulsion. The nuclear design of the system depends on two major calculations: core physics calculations and kinetics calculations. Presently, core physics calculations have concentrated on the use of the MCNP4C code. However, initial results from other codes such as COMBINE/VENTURE and SCALE4a are also shown. Several significant modifications were made to the ISR-developed QCALC1 kinetics analysis code. These modifications include testing the state of the core materials, an improvement to the calculation of the material properties of the core, the addition of an adiabatic core temperature model and improvement of the first order reactivity correction model. The accuracy of these modifications has been verified, and the accuracy of the point-core kinetics model used by the QCALC1 code has also been validated. Previously calculated kinetics results for the FPCR were described in the ISR report, "QCALC1: A code for FPCR Kinetics Model Feasibility Analysis" dated June 1, 2002.
Genome-Wide Discovery of Long Non-Coding RNAs in Rainbow Trout.
Al-Tobasei, Rafet; Paneru, Bam; Salem, Mohamed
2016-01-01
The ENCODE project revealed that ~70% of the human genome is transcribed. While only 1-2% of the RNAs encode proteins, the rest are non-coding RNAs. Long non-coding RNAs (lncRNAs) form a diverse class of non-coding RNAs that are longer than 200 nt. Emerging evidence indicates that lncRNAs play critical roles in various cellular processes including regulation of gene expression. LncRNAs show low levels of gene expression and sequence conservation, which make their computational identification in genomes difficult. In this study, more than two billion Illumina sequence reads were mapped to the reference genome using the TopHat and Cufflinks software. Transcripts shorter than 200 nt, with an ORF longer than 83–100 amino acids, or with significant homology to the NCBI nr protein database were removed. In addition, a computational pipeline was used to filter the remaining transcripts based on a protein-coding-score test. Depending on the filtering stringency conditions, between 31,195 and 54,503 lncRNAs were identified, with only 421 matching known lncRNAs in other species. A digital gene expression atlas revealed 2,935 tissue-specific and 3,269 ubiquitously-expressed lncRNAs. This study annotates lncRNAs in the rainbow trout genome and provides a valuable resource for functional genomics research in salmonids.
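The filtering logic described above (length, ORF size, protein homology, coding-potential score) amounts to a simple pass over the assembled transcripts. The sketch below is a hypothetical outline of that logic only; the thresholds mirror those quoted in the abstract, and the homology and coding-score inputs stand in for external tools (such as a BLAST search against NCBI nr and a coding-potential classifier) that are not implemented here:

```python
# Hypothetical outline of the lncRNA filtering steps described above.
# `has_nr_hit` and `coding_score` are assumed to come from external tools.

MIN_LENGTH = 200   # nt; shorter transcripts are discarded
MAX_ORF_AA = 100   # transcripts with longer ORFs are treated as protein-coding

def longest_orf_aa(seq):
    """Length (in amino acids) of the longest stop-free run on forward frames (simplified)."""
    stops = {"TAA", "TAG", "TGA"}
    best = 0
    for frame in range(3):
        run = 0
        for i in range(frame, len(seq) - 2, 3):
            codon = seq[i:i + 3].upper()
            run = 0 if codon in stops else run + 1
            best = max(best, run)
    return best

def is_putative_lncrna(seq, has_nr_hit, coding_score, score_cutoff=0.5):
    if len(seq) < MIN_LENGTH:
        return False
    if longest_orf_aa(seq) > MAX_ORF_AA:
        return False
    if has_nr_hit:                      # significant protein homology: not a lncRNA
        return False
    return coding_score < score_cutoff  # low coding potential: keep as lncRNA
```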
Turco, Gina; Schnable, James C.; Pedersen, Brent; Freeling, Michael
2013-01-01
Conserved non-coding sequences (CNS) are islands of non-coding sequence that, like protein coding exons, show less divergence in sequence between related species than functionless DNA. Several CNSs have been demonstrated experimentally to function as cis-regulatory regions. However, the specific functions of most CNSs remain unknown. Previous searches for CNS in plants have either anchored on exons and only identified nearby sequences or required years of painstaking manual annotation. Here we present an open source tool that can accurately identify CNSs between any two related species with sequenced genomes, including both those immediately adjacent to exons and distal sequences separated by >12 kb of non-coding sequence. We have used this tool to characterize new motifs, associate CNSs with additional functions, and identify previously undetected genes encoding RNA and protein in the genomes of five grass species. We provide a list of 15,363 orthologous CNSs conserved across all grasses tested. We were also able to identify regulatory sequences present in the common ancestor of grasses that have been lost in one or more extant grass lineages. Lists of orthologous gene pairs and associated CNSs are provided for reference inbred lines of arabidopsis, Japonica rice, foxtail millet, sorghum, brachypodium, and maize. PMID:23874343
Comparative Study of Advanced Turbulence Models for Turbomachinery
NASA Technical Reports Server (NTRS)
Hadid, Ali H.; Sindir, Munir M.
1996-01-01
A computational study has been undertaken to assess the performance of advanced phenomenological turbulence models coded in a modular form to describe incompressible turbulent flow behavior in two-dimensional/axisymmetric and three-dimensional complex geometry. The models include a variety of two-equation models (single and multi-scale k-epsilon models with different near-wall treatments) and second-moment algebraic and full Reynolds stress closure models. These models were systematically assessed to evaluate their performance in complex flows with rotation, curvature and separation. The models are coded as self-contained modules that can be interfaced with a number of flow solvers. These modules are stand-alone satellite programs that come with their own formulation, finite-volume discretization scheme, solver and boundary condition implementation. They will take as input (from any generic Navier-Stokes solver) the velocity field, grid (structured H-type grid) and computational domain specification (boundary conditions), and will deliver, depending on the model used, turbulent viscosity, or the components of the Reynolds stress tensor. There are separate 2D/axisymmetric and/or 3D decks for each module considered. The modules are tested using Rocketdyne's proprietary code REACT. The code utilizes an efficient solution procedure to solve the Navier-Stokes equations in a non-orthogonal body-fitted coordinate system. The differential equations are discretized over a finite-volume grid using a non-staggered variable arrangement, and an efficient solution procedure based on the SIMPLE algorithm is used for the velocity-pressure coupling. The modules developed have been interfaced and tested using finite-volume, pressure-correction CFD solvers which are widely used in the CFD community. Other solvers can also be used to test these modules since they are independently structured with their own discretization scheme and solver methodology. Many of these modules have been independently tested by Professor C.P. Chen and his group at the University of Alabama at Huntsville (UAH) by interfacing them with their own flow solver (MAST).
NASA Technical Reports Server (NTRS)
Zhang, Yuhan; Lu, Dr. Thomas
2010-01-01
The objectives of this project were to develop an ROI (Region of Interest) detector using Haar-like features, similar to the face detection in Intel's OpenCV library, implement it in Matlab code, and test the performance of the new ROI detector against the existing ROI detector that uses the Optimal Trade-off Maximum Average Correlation Height (OTMACH) filter. The ROI detector included three parts: (1) automated Haar-like feature selection, to find a small set of the most relevant Haar-like features for detecting ROIs that contained a target; (2) given the small set of Haar-like features from the previous step, a neural network needed to be trained to recognize ROIs with targets by taking the Haar-like features as inputs; (3) using the trained neural network from the previous step, a filtering method needed to be developed to process the neural network responses into a small set of regions of interest. All three parts needed to be coded in Matlab. The parameters in the detector needed to be trained by machine learning and tested with specific datasets. Since the OpenCV library and Haar-like features were not available in Matlab, the Haar-like feature calculation needed to be implemented in Matlab. Code for Adaptive Boosting and max/min filters in Matlab could be found on the Internet but needed to be integrated to serve the purpose of this project. The performance of the new detector was tested by comparing the accuracy and the speed of the new detector against the existing OTMACH detector. Speed referred to the average speed in finding the regions of interest in an image. Accuracy was measured by the number of false positives (false alarms) at the same detection rate between the two detectors.
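The integral-image trick is what makes Haar-like features cheap to evaluate; the sketch below (Python/NumPy rather than the project's Matlab code, with an illustrative two-rectangle feature layout) shows the idea: any rectangle sum reduces to four table lookups.

    import numpy as np

    def integral_image(img):
        """Cumulative sum over rows and columns; any rectangle sum then
        follows from four corner lookups."""
        return img.cumsum(axis=0).cumsum(axis=1)

    def rect_sum(ii, r0, c0, r1, c1):
        """Sum of img[r0:r1, c0:c1] using the integral image ii (exclusive upper bounds)."""
        total = ii[r1 - 1, c1 - 1]
        if r0 > 0:
            total -= ii[r0 - 1, c1 - 1]
        if c0 > 0:
            total -= ii[r1 - 1, c0 - 1]
        if r0 > 0 and c0 > 0:
            total += ii[r0 - 1, c0 - 1]
        return total

    def haar_two_rect_vertical(ii, r, c, h, w):
        """Two-rectangle Haar-like feature: left half-window sum minus right half-window sum."""
        left = rect_sum(ii, r, c, r + h, c + w // 2)
        right = rect_sum(ii, r, c + w // 2, r + h, c + w)
        return left - right

    # toy usage on a random image patch
    img = np.random.rand(64, 64)
    ii = integral_image(img)
    print(haar_two_rect_vertical(ii, 10, 10, 16, 16))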
ERIC Educational Resources Information Center
Leach, Mark M.; Oakland, Thomas
2007-01-01
Ethics codes are designed to protect the public by prescribing behaviors professionals are expected to exhibit. Although test use is universal, albeit reflecting strong Western influences, previous studies that examine the degree to which issues pertaining to test development and use are addressed in ethics codes of national psychological…
Summary of Data from the First AIAA CFD Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Levy, David W.; Zickuhr, Tom; Vassberg, John; Agrawal, Shreekant; Wahls, Richard A.; Pirzadeh, Shahyar; Hemsch, Michael J.
2002-01-01
The results from the first AIAA CFD Drag Prediction Workshop are summarized. The workshop was designed specifically to assess the state of the art of computational fluid dynamics methods for force and moment prediction. An impartial forum was provided to evaluate the effectiveness of existing computer codes and modeling techniques, and to identify areas needing additional research and development. The subject of the study was the DLR-F4 wing-body configuration, which is representative of transport aircraft designed for transonic flight. Specific test cases were required so that valid comparisons could be made. Optional test cases included constant-C_L drag-rise predictions typically used in airplane design by industry. Results are compared to experimental data from three wind tunnel tests. A total of 18 international participants using 14 different codes submitted data to the workshop. No particular grid type or turbulence model was more accurate when compared to the others or to the wind tunnel data. Most of the results overpredicted C_L0 and C_D0, but induced drag (dC_D/dC_L^2) agreed fairly well. Drag rise at high Mach number was underpredicted, however, especially at high C_L. On average, the drag data were fairly accurate, but the scatter was greater than desired. The results show that well-validated Reynolds-Averaged Navier-Stokes CFD methods are sufficiently accurate to make design decisions based on predicted drag.
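The induced-drag factor quoted above, dC_D/dC_L^2, is simply the slope of a straight-line fit of C_D against C_L^2; a minimal sketch with hypothetical polar points (not workshop data):

    import numpy as np

    # hypothetical drag-polar points (CL, CD) from a polar sweep
    cl = np.array([0.30, 0.40, 0.50, 0.60])
    cd = np.array([0.0235, 0.0262, 0.0296, 0.0338])

    # least-squares fit of CD = CD0 + k * CL^2; k approximates dCD/dCL^2
    k, cd0 = np.polyfit(cl ** 2, cd, 1)
    print(round(cd0, 4), round(k, 4))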
Analysis of space telescope data collection systems
NASA Technical Reports Server (NTRS)
Ingels, F. M.
1984-01-01
The Multiple Access (MA) communication link of the Space Telescope (ST) is described. An expected performance bit error rate is presented. The historical perspective and rationale behind the ESTL space shuttle end-to-end tests are given. The concatenated coding scheme using a convolutional encoder for the outer coder is developed. The ESTL end-to-end tests on the space shuttle communication link are described. Most important is how a concatenated coding system will perform. This is a go/no-go system with respect to received signal-to-noise ratio. A discussion of the verification requirements and specification document is presented, and those sections that apply to the Space Telescope data and communications system are discussed. The Space Telescope System consists of the Space Telescope Orbiting Observatory (ST), the Space Telescope Science Institute, and the Space Telescope Operations Control Center (STOCC). The MA system consists of the ST, the return link from the ST via the Tracking and Data Relay Satellite System to White Sands, and from White Sands via the Domestic Communications Satellite to the STOCC.
van Dyck, Peter C; Rinaldo, Piero; McDonald, Clement; Howell, R Rodney; Zuckerman, Alan; Downing, Gregory
2010-01-01
Capture, coding and communication of newborn screening (NBS) information represent a challenge for public health laboratories, health departments, hospitals, and ambulatory care practices. An increasing number of conditions targeted for screening and the complexity of interpretation contribute to a growing need for integrated information-management strategies. This makes NBS an important test of tools and architecture for electronic health information exchange (HIE) in this convergence of individual patient care and population health activities. For this reason, the American Health Information Community undertook three tasks described in this paper. First, a newborn screening use case was established to facilitate standards harmonization for common terminology and interoperability specifications guiding HIE. Second, newborn screening coding and terminology were developed for integration into electronic HIE activities. Finally, clarification of privacy, security, and clinical laboratory regulatory requirements governing information exchange was provided, serving as a framework to establish pathways for improving screening program timeliness, effectiveness, and efficiency of quality patient care services. PMID:20064796
C code generation from Petri-net-based logic controller specification
NASA Astrophysics Data System (ADS)
Grobelny, Michał; Grobelna, Iwona; Karatkevich, Andrei
2017-08-01
The article focuses on the programming of logic controllers. It is important that the program code of a logic controller executes flawlessly according to the primary specification. In the presented approach we generate C code for an AVR microcontroller from a rule-based logical model of a control process derived from a control interpreted Petri net. The same logical model is also used for formal verification of the specification by means of the model checking technique. The proposed rule-based logical model and formal rules of transformation ensure that the obtained implementation is consistent with the already verified specification. The approach is validated by practical experiments.
Amoroso, P J; Smith, G S; Bell, N S
2000-04-01
Accurate injury cause data are essential for injury prevention research. U.S. military hospitals, unlike civilian hospitals, use the NATO STANAG system for cause-of-injury coding. Reported deficiencies in civilian injury cause data suggested a need to specifically evaluate the STANAG. The Total Army Injury and Health Outcomes Database (TAIHOD) was used to evaluate worldwide Army injury hospitalizations, especially STANAG Trauma, Injury, and Place of Occurrence coding. We conducted a review of hospital procedures at Tripler Army Medical Center (TAMC) including injury cause and intent coding, potential crossover between acute injuries and musculoskeletal conditions, and data for certain hospital patients who are not true admissions. We also evaluated the use of free-text injury comment fields in three hospitals. Army-wide review of injury records coding revealed full compliance with cause coding, although nonspecific codes appeared to be overused. A small but intensive single hospital records review revealed relatively poor intent coding but good activity and cause coding. Data on specific injury history were present on most acute injury records and 75% of musculoskeletal conditions. Place of Occurrence coding, although inherently nonspecific, was over 80% accurate. Review of text fields produced additional details of the injuries in over 80% of cases. STANAG intent coding specificity was poor, while coding of cause of injury was at least comparable to civilian systems. The strengths of military hospital data systems are an exceptionally high compliance with injury cause coding, the availability of free text, and capture of all population hospital records without regard to work-relatedness. Simple changes in procedures could greatly improve data quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Jun Soo; Choi, Yong Joon
The RELAP-7 code verification and validation activities are ongoing under the code assessment plan proposed in the previous document (INL-EXT-16-40015). Among the list of V&V test problems in the ‘RELAP-7 code V&V RTM (Requirements Traceability Matrix)’, the RELAP-7 7-equation model has been tested with additional demonstration problems and the results of these tests are reported in this document. In this report, we describe the testing process, the test cases that were conducted, and the results of the evaluation.
Generic Kalman Filter Software
NASA Technical Reports Server (NTRS)
Lisano, Michael E., II; Crues, Edwin Z.
2005-01-01
The Generic Kalman Filter (GKF) software provides a standard basis for the development of application-specific Kalman-filter programs. Historically, Kalman filters have been implemented by customized programs that must be written, coded, and debugged anew for each unique application, then tested and tuned with simulated or actual measurement data. Total development times for typical Kalman-filter application programs have ranged from weeks to months. The GKF software can simplify the development process and reduce the development time by eliminating the need to re-create the fundamental implementation of the Kalman filter for each new application. The GKF software is written in the ANSI C programming language. It contains a generic Kalman-filter-development directory that, in turn, contains code for a generic Kalman-filter function; more specifically, it contains a generically designed and generically coded implementation of linear, linearized, and extended Kalman filtering algorithms, including algorithms for state- and covariance-update and -propagation functions. The mathematical theory that underlies the algorithms is well known and has been reported extensively in the open technical literature. Also contained in the directory are a header file that defines generic Kalman-filter data structures and prototype functions, and template versions of application-specific subfunction and calling navigation/estimation routine code and headers. Once the user has provided a calling routine and the required application-specific subfunctions, the application-specific Kalman-filter software can be compiled and executed immediately. During execution, the generic Kalman-filter function is called from a higher-level navigation or estimation routine that preprocesses measurement data and post-processes output data. The generic Kalman-filter function uses the aforementioned data structures and five implementation-specific subfunctions, which have been developed by the user on the basis of the aforementioned templates. The GKF software can be used to develop many different types of unfactorized Kalman filters. A developer can choose to implement either a linearized or an extended Kalman filter algorithm, without having to modify the GKF software. Control dynamics can be taken into account or neglected in the filter-dynamics model. Filter programs developed by use of the GKF software can be made to propagate equations of motion for linear or nonlinear dynamical systems that are deterministic or stochastic. In addition, filter programs can be made to operate in user-selectable "covariance analysis" and "propagation-only" modes that are useful in design and development stages.
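The propagate/update cycle that the GKF implements in ANSI C is the textbook one; below is a compact Python sketch of the linear case (an illustration only, not the GKF code), followed by a toy constant-velocity tracker.

    import numpy as np

    def kf_propagate(x, P, F, Q):
        """Propagate state estimate x and covariance P through dynamics F with process-noise covariance Q."""
        return F @ x, F @ P @ F.T + Q

    def kf_update(x_pred, P_pred, z, H, R):
        """Update with measurement z, observation model H and measurement-noise covariance R."""
        y = z - H @ x_pred                      # innovation
        S = H @ P_pred @ H.T + R                # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x_pred + K @ y
        P = (np.eye(len(x)) - K @ H) @ P_pred
        return x, P

    # example: 1-D constant-velocity tracker fed four position measurements
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]]); Q = 0.01 * np.eye(2)
    H = np.array([[1.0, 0.0]]);            R = np.array([[0.25]])
    x, P = np.zeros(2), np.eye(2)
    for z in (0.9, 2.1, 2.9, 4.2):
        x, P = kf_propagate(x, P, F, Q)
        x, P = kf_update(x, P, np.array([z]), H, R)
    print(x)   # estimated position and velocity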
Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder
NASA Technical Reports Server (NTRS)
Staats, Matt
2009-01-01
We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
The commerce of professional psychology and the new ethics code.
Koocher, G P
1994-11-01
The 1992 version of the American Psychological Association's Ethical Principles of Psychologists and Code of Conduct brings some changes in requirements and new specificity to the practice of psychology. The impact of the new code on therapeutic contracts, informed consent to psychological services, advertising, financial aspects of psychological practice, and other topics related to the commerce of professional psychology are discussed. The genesis of many new thrusts in the code is reviewed from the perspective of psychological service provider. Specific recommendations for improved attention to ethical matters in professional practice are made.
Rcount: simple and flexible RNA-Seq read counting.
Schmid, Marc W; Grossniklaus, Ueli
2015-02-01
Analysis of differential gene expression by RNA sequencing (RNA-Seq) is frequently done using feature counts, i.e. the number of reads mapping to a gene. However, commonly used count algorithms (e.g. HTSeq) do not address the problem of reads aligning with multiple locations in the genome (multireads) or reads aligning with positions where two or more genes overlap (ambiguous reads). Rcount specifically addresses these issues. Furthermore, Rcount allows the user to assign priorities to certain feature types (e.g. higher priority for protein-coding genes compared to rRNA-coding genes) or to add flanking regions. Rcount provides a fast and easy-to-use graphical user interface requiring no command line or programming skills. It is implemented in C++ using the SeqAn (www.seqan.de) and the Qt libraries (qt-project.org). Source code and 64 bit binaries for (Ubuntu) Linux, Windows (7) and MacOSX are released under the GPLv3 license and are freely available on github.com/MWSchmid/Rcount. marcschmid@gmx.ch Test data, genome annotation files, useful Python and R scripts and a step-by-step user guide (including run-time and memory usage tests) are available on github.com/MWSchmid/Rcount. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
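As an illustration of the priority idea (higher-priority feature types win when a read overlaps several features, and remaining ambiguous reads are split), here is a toy Python sketch; it is not Rcount's C++ implementation, and the priority table and tie-splitting rule are assumptions made for the example.

    from collections import defaultdict

    # user-assigned priorities: higher wins when a read overlaps several feature types
    PRIORITY = {"protein_coding": 2, "rRNA": 1}

    def count_reads(read_hits, priority=PRIORITY):
        """read_hits: list of lists; each inner list holds (gene_id, feature_type)
        pairs the read overlaps. Returns fractional counts per gene."""
        counts = defaultdict(float)
        for hits in read_hits:
            if not hits:
                continue                                  # unmapped read
            best = max(priority.get(ftype, 0) for _, ftype in hits)
            kept = {g for g, ftype in hits if priority.get(ftype, 0) == best}
            for g in kept:
                counts[g] += 1.0 / len(kept)              # split ambiguous reads evenly
        return dict(counts)

    # toy usage: the second read overlaps both an rRNA gene and a protein-coding gene
    reads = [[("geneA", "protein_coding")],
             [("geneA", "protein_coding"), ("rna1", "rRNA")],
             [("geneB", "protein_coding"), ("geneC", "protein_coding")]]
    print(count_reads(reads))   # geneA: 2.0, geneB: 0.5, geneC: 0.5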
NASA Technical Reports Server (NTRS)
Dame, L. T.; Stouffer, D. C.
1986-01-01
A tool for the mechanical analysis of nickel base single crystal superalloys, specifically Rene N4, used in gas turbine engine components is developed. This is achieved by a rate dependent anisotropic constitutive model implemented in a nonlinear three dimensional finite element code. The constitutive model is developed from metallurigical concepts utilizing a crystallographic approach. A non Schmid's law formulation is used to model the tension/compression asymmetry and orientation dependence in octahedral slip. Schmid's law is a good approximation to the inelastic response of the material in cube slip. The constitutive equations model the tensile behavior, creep response, and strain rate sensitivity of these alloys. Methods for deriving the material constants from standard tests are presented. The finite element implementation utilizes an initial strain method and twenty noded isoparametric solid elements. The ability to model piecewise linear load histories is included in the finite element code. The constitutive equations are accurately and economically integrated using a second order Adams-Moulton predictor-corrector method with a dynamic time incrementing procedure. Computed results from the finite element code are compared with experimental data for tensile, creep and cyclic tests at 760 deg C. The strain rate sensitivity and stress relaxation capabilities of the model are evaluated.
Chumney, Elinor C G; Biddle, Andrea K; Simpson, Kit N; Weinberger, Morris; Magruder, Kathryn M; Zelman, William N
2004-01-01
As cost-effectiveness analyses (CEAs) are increasingly used to inform policy decisions, there is a need for more information on how different cost determination methods affect cost estimates and the degree to which the resulting cost-effectiveness ratios (CERs) may be affected. The lack of specificity of diagnosis-related groups (DRGs) could mean that they are ill-suited for costing applications in CEAs. Yet, the implications of using International Classification of Diseases-9th edition (ICD-9) codes or a form of disease-specific risk group stratification instead of DRGs have yet to be clearly documented. Our objective was to demonstrate the implications of different disease coding mechanisms for costs and the magnitude of error that could be introduced in head-to-head comparisons of resulting CERs. We based our analyses on a previously published Markov model for HIV/AIDS therapies. We used the Healthcare Cost and Utilisation Project Nationwide Inpatient Sample (HCUP-NIS) data release 6, which contains all-payer data on hospital inpatient stays from selected states. We added costs for the mean number of hospitalisations, derived from analyses based on either DRG or ICD-9 codes or risk group stratification cost weights, to the standard outpatient and prescription drug costs to yield an estimate of total charges for each AIDS-defining illness (ADI). Finally, we estimated the Markov model three times with the appropriate ADI cost weights to obtain CERs specific to the use of either DRG or ICD-9 codes or risk group. Contrary to expectations, we found that the choice of disease-specific coding/grouping assumptions, whether by DRG codes, ICD-9 codes or risk group, resulted in very similar CER estimates for highly active antiretroviral therapy. The large variations in the specific ADI cost weights across the three different coding approaches were especially interesting. However, because no one approach produced consistently higher estimates than the others, the Markov model's weighted cost per event and resulting CERs were remarkably close in value to one another. Although DRG codes are based on broader categories and contain less information than ICD-9 codes, in practice the choice of whether to use DRGs or ICD-9 codes may have little effect on CEA results in heterogeneous conditions such as HIV/AIDS.
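The arithmetic behind an incremental cost-effectiveness ratio is simple enough to sketch; the numbers below are made up, and only the hospitalization cost weight is varied between the three coding approaches, mirroring the structure of the comparison rather than its data.

    def cer(cost_new, cost_old, effect_new, effect_old):
        """Incremental cost-effectiveness ratio: extra cost per extra unit of effect
        (e.g. per quality-adjusted life-year)."""
        return (cost_new - cost_old) / (effect_new - effect_old)

    # hypothetical Markov-model totals under three hospitalization cost weightings
    baseline = {"cost": 52_000.0, "qaly": 4.10}
    for label, treated_cost in [("DRG", 61_500.0), ("ICD-9", 62_300.0), ("risk group", 61_900.0)]:
        treated = {"cost": treated_cost, "qaly": 4.85}
        print(label, round(cer(treated["cost"], baseline["cost"],
                               treated["qaly"], baseline["qaly"])))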
Blair, Stephanie; Duthie, Grant; Robertson, Sam; Hopkins, William; Ball, Kevin
2018-05-17
Wearable inertial measurement systems (IMS) allow for three-dimensional analysis of human movements in a sport-specific setting. This study examined the concurrent validity of an IMS (Xsens MVN system) for measuring lower extremity and pelvis kinematics in comparison to a Vicon motion analysis system (MAS) during kicking. Thirty footballers from Australian football (n = 10), soccer (n = 10), rugby league and rugby union (n = 10) clubs completed 20 kicks across four conditions. Concurrent validity was assessed using a linear mixed-modelling approach, which allowed the partition of between- and within-subject variance from the device measurement error. Results were expressed in raw and standardised units for assessments of differences in means and measurement error, and interpreted via non-clinical magnitude-based inferences. Trivial to small differences were found in linear velocities (foot and pelvis), angular velocities (knee, shank and thigh), sagittal joint (knee and hip) and segment angle (shank and pelvis) means (mean difference: 0.2-5.8%) between the IMS and MAS in Australian football, soccer and the rugby codes. Trivial to small measurement errors (from 0.1 to 5.8%) were found between the IMS and MAS in all kinematic parameters. The IMS demonstrated acceptable levels of concurrent validity compared to a MAS when measuring kicking biomechanics across the four football codes. Wearable IMS offers various benefits over MAS, such as out-of-laboratory testing, larger measurement range and quick data output, to help improve the ecological validity of biomechanical testing and the timing of feedback. The results advocate the use of IMS to quantify biomechanics of high-velocity movements in sport-specific settings. Copyright © 2018 Elsevier Ltd. All rights reserved.
Analysis of the OPERA 15-pin experiment with SABRE-2P. [LMFBR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rose, S.D.; Carbajo, J.J.
The OPERA (Out-of-Pile Expulsion and Reentry Apparatus) experiment simulates the initial phase of a pump coastdown without scram of a liquid-metal fast breeder reactor, specifically the Fast Flux Test Facility. The test section is a 15-pin 60° triangular sector designed to simulate a full-size 61-pin hexagonal bundle. A previous study indicates this to be an adequate simulation. In this paper, experimental results from the OPERA 15-pin experiment performed at ANL in 1982 are compared to analytical calculations obtained with the SABRE-2P code at ORNL.
NASA Technical Reports Server (NTRS)
Hardman, R. R.; Mahan, J. R.; Smith, M. H.; Gelhausen, P. A.; Van Dalsem, W. R.
1991-01-01
The need for a validation technique for computational fluid dynamics (CFD) codes in STOVL applications has led to research efforts to apply infrared thermal imaging techniques to visualize gaseous flow fields. Specifically, a heated, free-jet test facility was constructed. The gaseous flow field of the jet exhaust was characterized using an infrared imaging technique in the 2 to 5.6 micron wavelength band as well as conventional pitot tube and thermocouple methods. These infrared images are compared to computer-generated images using the equations of radiative exchange based on the temperature distribution in the jet exhaust measured with the thermocouple traverses. Temperature and velocity measurement techniques, infrared imaging, and the computer model of the infrared imaging technique are presented and discussed. From the study, it is concluded that infrared imaging techniques coupled with the radiative exchange equations applied to CFD models are a valid method to qualitatively verify CFD codes used in STOVL applications.
Experimental identification of closely spaced modes using NExT-ERA
NASA Astrophysics Data System (ADS)
Hosseini Kordkheili, S. A.; Momeni Massouleh, S. H.; Hajirezayi, S.; Bahai, H.
2018-01-01
This article presents a study on the capability of the time-domain OMA method NExT-ERA to identify closely spaced structural dynamic modes. A survey of the literature reveals that few experimental studies have been conducted on the effectiveness of the NExT-ERA methodology specifically in the case of closely spaced modes. In this paper we present the formulation for NExT-ERA. This formulation is then implemented in an algorithm and an in-house code to identify the modal parameters of different systems using their generated time history data. Some numerical models are first investigated to validate the code. Two different case studies are presented, involving a plate with closely spaced modes and a pulley ring with an even greater degree of closeness between repeated modes. Both structures are excited by random impulses under laboratory conditions. The resulting time-response acceleration data are then used as input to the developed code to extract the modal parameters of the structures. The accuracy of the results is checked against those obtained from experimental tests.
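For readers unfamiliar with the ERA half of NExT-ERA, a bare-bones single-channel sketch is given below (Python, not the in-house code described in the paper); NExT would first convert ambient responses into correlation functions, which are then treated like the free decay simulated here. The Hankel sizes and the two-mode test signal are illustrative choices.

    import numpy as np

    def era_modes(y, dt, n_modes=2, rows=40, cols=40):
        """Eigensystem Realisation Algorithm on a single free-decay (or NExT correlation)
        signal y sampled at dt. Returns (natural frequencies in Hz, damping ratios)."""
        H0 = np.array([[y[i + j] for j in range(cols)] for i in range(rows)])
        H1 = np.array([[y[i + j + 1] for j in range(cols)] for i in range(rows)])
        U, s, Vt = np.linalg.svd(H0)
        r = 2 * n_modes                              # keep two singular values per mode
        S_inv_sqrt = np.diag(1.0 / np.sqrt(s[:r]))
        A = S_inv_sqrt @ U[:, :r].T @ H1 @ Vt[:r].T @ S_inv_sqrt
        lam = np.log(np.linalg.eigvals(A)) / dt      # continuous-time poles
        freqs = np.abs(lam) / (2 * np.pi)
        zetas = -lam.real / np.abs(lam)
        keep = lam.imag > 0                          # one of each conjugate pair
        return freqs[keep], zetas[keep]

    # toy usage: two closely spaced, lightly damped modes at 10.0 and 10.6 Hz
    dt, t = 0.002, np.arange(0, 4, 0.002)
    y = (np.exp(-0.5 * t) * np.cos(2 * np.pi * 10.0 * t)
         + np.exp(-0.6 * t) * np.cos(2 * np.pi * 10.6 * t))
    print(era_modes(y, dt))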
The MCUCN simulation code for ultracold neutron physics
NASA Astrophysics Data System (ADS)
Zsigmond, G.
2018-02-01
Ultracold neutrons (UCN) have very low kinetic energies (0-300 neV) and can therefore be stored in specific material or magnetic confinements for many hundreds of seconds. This makes them a very useful tool for probing fundamental symmetries of nature (for instance, charge-parity violation probed by neutron electric dipole moment experiments) and for contributing important parameters to Big Bang nucleosynthesis (neutron lifetime measurements). Improved precision experiments are in construction at new and planned UCN sources around the world. MC simulations play an important role in the optimization of such systems with a large number of parameters, but also in the estimation of systematic effects, in benchmarking of analysis codes, or as part of the analysis. The MCUCN code written at PSI has been extensively used for the optimization of the UCN source optics and in the optimization and analysis of (test) experiments within the nEDM project based at PSI. In this paper we present the main features of MCUCN and interesting benchmark and application examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
TESP combines existing domain simulators in the electric power grid with new transactive agents, growth models and evaluation scripts. The existing domain simulators include GridLAB-D for the distribution grid and single-family residential buildings, MATPOWER for transmission and bulk generation, and EnergyPlus for large buildings. More are planned for subsequent versions of TESP. The new elements are: TEAgents - simulate market participants and transactive systems for market clearing; some of this functionality was extracted from GridLAB-D and implemented in Python for customization by PNNL and others. Growth Model - a means for simulating system changes over a multiyear period, including both normal load growth and specific investment decisions; customizable in Python code. Evaluation Script - a means of evaluating different transactive systems through customizable post-processing in Python code. TESP provides a method for other researchers and vendors to design transactive systems and test them in a virtual environment. It allows customization of the key components by modifying Python code.
Airfoil Vibration Dampers program
NASA Technical Reports Server (NTRS)
Cook, Robert M.
1991-01-01
The Airfoil Vibration Damper program has consisted of an analysis phase and a testing phase. During the analysis phase, a state-of-the-art computer code was developed, which can be used to guide designers in the placement and sizing of friction dampers. The use of this computer code was demonstrated by performing representative analyses on turbine blades from the High Pressure Oxidizer Turbopump (HPOTP) and High Pressure Fuel Turbopump (HPFTP) of the Space Shuttle Main Engine (SSME). The testing phase of the program consisted of performing friction damping tests on two different cantilever beams. Data from these tests provided an empirical check on the accuracy of the computer code developed in the analysis phase. Results of the analysis and testing showed that the computer code can accurately predict the performance of friction dampers. In addition, a valuable set of friction damping data was generated, which can be used to aid in the design of friction dampers, as well as provide benchmark test cases for future code developers.
Multi-level bandwidth efficient block modulation codes
NASA Technical Reports Server (NTRS)
Lin, Shu
1989-01-01
The multilevel technique is investigated for combining block coding and modulation. There are four parts. In the first part, a formulation is presented for signal sets on which modulation codes are to be constructed. Distance measures on a signal set are defined and their properties are developed. In the second part, a general formulation is presented for multilevel modulation codes in terms of component codes with appropriate Euclidean distances. The distance properties, Euclidean weight distribution and linear structure of multilevel modulation codes are investigated. In the third part, several specific methods for constructing multilevel block modulation codes with interdependency among component codes are proposed. Given a multilevel block modulation code C with no interdependency among the binary component codes, the proposed methods give a multilevel block modulation code C' which has the same rate as C, a minimum squared Euclidean distance not less than that of C, a trellis diagram with the same number of states as that of C, and a smaller number of nearest-neighbor codewords than that of C. In the last part, the error performance of block modulation codes is analyzed for an AWGN channel based on soft-decision maximum likelihood decoding. Error probabilities of some specific codes are evaluated based on their Euclidean weight distributions and simulation results.
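A minimal numerical illustration of the squared-Euclidean-distance bookkeeping: the sketch below builds a toy three-level code over 8-PSK with natural labeling and exhaustively computes its minimum squared Euclidean distance. The component codes and labeling are illustrative choices, not the constructions proposed in the report.

    import itertools
    import numpy as np

    def psk8(label):
        """Map a 3-bit label (b2, b1, b0) to an 8-PSK point with natural labeling."""
        k = label[0] * 4 + label[1] * 2 + label[2]
        return np.exp(2j * np.pi * k / 8)

    # toy binary component codes of length 4, one per level
    rep4    = [(0, 0, 0, 0), (1, 1, 1, 1)]                                          # rate 1/4
    parity4 = [c for c in itertools.product((0, 1), repeat=4) if sum(c) % 2 == 0]   # rate 3/4
    all4    = list(itertools.product((0, 1), repeat=4))                             # rate 1

    def multilevel_codewords(levels):
        """Combine one codeword per level into a sequence of 8-PSK symbols."""
        for combo in itertools.product(*levels):
            yield np.array([psk8(bits) for bits in zip(*combo)])

    words = list(multilevel_codewords([rep4, parity4, all4]))
    msed = min(np.sum(np.abs(a - b) ** 2)
               for a, b in itertools.combinations(words, 2))
    print(len(words), round(msed, 3))   # 256 codewords and their minimum squared distance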
QuASAR-MPRA: accurate allele-specific analysis for massively parallel reporter assays.
Kalita, Cynthia A; Moyerbrailean, Gregory A; Brown, Christopher; Wen, Xiaoquan; Luca, Francesca; Pique-Regi, Roger
2018-03-01
The majority of the human genome is composed of non-coding regions containing regulatory elements such as enhancers, which are crucial for controlling gene expression. Many variants associated with complex traits are in these regions, and may disrupt gene regulatory sequences. Consequently, it is important to not only identify true enhancers but also to test if a variant within an enhancer affects gene regulation. Recently, allele-specific analysis in high-throughput reporter assays, such as massively parallel reporter assays (MPRAs), have been used to functionally validate non-coding variants. However, we are still missing high-quality and robust data analysis tools for these datasets. We have further developed our method for allele-specific analysis QuASAR (quantitative allele-specific analysis of reads) to analyze allele-specific signals in barcoded read counts data from MPRA. Using this approach, we can take into account the uncertainty on the original plasmid proportions, over-dispersion, and sequencing errors. The provided allelic skew estimate and its standard error also simplifies meta-analysis of replicate experiments. Additionally, we show that a beta-binomial distribution better models the variability present in the allelic imbalance of these synthetic reporters and results in a test that is statistically well calibrated under the null. Applying this approach to the MPRA data, we found 602 SNPs with significant (false discovery rate 10%) allele-specific regulatory function in LCLs. We also show that we can combine MPRA with QuASAR estimates to validate existing experimental and computational annotations of regulatory variants. Our study shows that with appropriate data analysis tools, we can improve the power to detect allelic effects in high-throughput reporter assays. http://github.com/piquelab/QuASAR/tree/master/mpra. fluca@wayne.edu or rpique@wayne.edu. Supplementary data are available online at Bioinformatics. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
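A reduced sketch of a beta-binomial test for allelic imbalance is shown below; it omits the plasmid-proportion and sequencing-error handling that QuASAR-MPRA performs, the overdispersion value is an assumed input, and SciPy >= 1.4 is assumed for scipy.stats.betabinom.

    from scipy.stats import betabinom

    def allelic_imbalance_pvalue(ref, alt, rho=0.02):
        """Two-sided test of 50/50 allelic balance under a beta-binomial null.
        rho is the overdispersion (rho -> 0 recovers the plain binomial)."""
        n = ref + alt
        a = b = 0.5 * (1.0 / rho - 1.0)          # mean 0.5, overdispersion rho
        lower = betabinom.cdf(ref, n, a, b)
        upper = betabinom.sf(ref - 1, n, a, b)
        return min(1.0, 2.0 * min(lower, upper))

    # toy usage: 70 reference vs 30 alternate barcode reads
    print(allelic_imbalance_pvalue(70, 30))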
NASA Astrophysics Data System (ADS)
Zamani, K.; Bombardelli, F. A.
2014-12-01
Verification of geophysics codes is imperative to avoid serious academic as well as practical consequences. When access to a given source code is not possible, the Method of Manufactured Solutions (MMS) cannot be employed for code verification. In contrast, employing the Method of Exact Solutions (MES) has several practical advantages. In this research, we first provide four new one-dimensional analytical solutions designed for code verification; these solutions are able to uncover particular imperfections of solvers of the advection-diffusion-reaction (ADR) equation, such as in nonlinear advection, diffusion or source terms, as well as non-constant-coefficient equations. After that, we provide a solution of Burgers' equation in a novel setup. The proposed solutions satisfy continuity of mass for the ambient flow, which is a crucial factor for coupled hydrodynamics-transport solvers. Then, we use the derived analytical solutions for code verification. To clarify gray-literature issues in the verification of transport codes, we designed a comprehensive test suite to uncover any imperfection in transport solvers via a hierarchical increase in the level of the tests' complexity. The test suite includes hundreds of unit tests and system tests that check the corresponding portions of the code. Example checks in the suite start with a simple case of unidirectional advection, then bidirectional advection and tidal flow, and build up to nonlinear cases. We design tests to check nonlinearity in velocity, dispersivity and reactions. The concealing effect of scales (Peclet and Damkohler numbers) on the mesh-convergence study and appropriate remedies are also discussed. For cases in which appropriate benchmarks for a mesh-convergence study are not available, we utilize symmetry. Auxiliary subroutines for automation of the test suite and report generation are designed. All in all, the test package is not only a robust tool for code verification but also provides comprehensive insight into the capabilities of ADR solvers. Such information is essential for any rigorous computational modeling of the ADR equation for surface/subsurface pollution transport. We also convey our experience in finding several errors that were not detectable with routine verification techniques.
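A mesh-convergence check against an exact (MES) solution usually reduces to estimating the observed order of accuracy from errors on successively refined grids; a minimal sketch with made-up error values:

    import math

    def observed_order(err_coarse, err_fine, refinement_ratio=2.0):
        """Observed order of accuracy from errors on two grids differing by a constant
        refinement ratio: p = log(e_coarse / e_fine) / log(r)."""
        return math.log(err_coarse / err_fine) / math.log(refinement_ratio)

    # errors against an exact solution on successively halved grids (illustrative values)
    errors = [4.0e-2, 1.1e-2, 2.8e-3]
    for coarse, fine in zip(errors, errors[1:]):
        print(round(observed_order(coarse, fine), 2))   # should approach the formal order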
Frye, Victoria; Wilton, Leo; Hirshfield, Sabina; Chiasson, Mary Ann; Lucy, Debbie; Usher, DaShawn; McCrossin, Jermaine; Greene, Emily; Koblin, Beryl
2018-01-01
Promoting consistent HIV testing is critical among young, Black Men Who Have Sex With Men (MSM) and transgender women who are overrepresented among new HIV cases in the United States. New HIV test options are available, including mobile unit testing, one-minute testing, at home or self-testing and couples HIV testing and counseling (CHTC). In the context of these newer options, the objective of this study was to explore whether and how preferences for specific characteristics of the tests acted as barriers to and/or facilitators of testing in general and consistent testing specifically among young Black MSM and transgender women aged 16 to 29. We conducted 30 qualitative, semi-structured, in-depth interviews with young, Black, gay, bisexual or MSM and transgender women in the New York City metropolitan area to identify preferences for specific HIV tests and aspects of HIV testing options. Participants were primarily recruited from online and mobile sites, followed by community-based, face-to-face recruitment strategies to specifically reach younger participants. Thematic coding was utilized to analyze the qualitative data based on a grounded theoretical approach. We identified how past experiences, perceived test characteristics (e.g., accuracy, cost, etc.) and beliefs about the "fit" between the individual, and the test relate to preferred testing methods and consistent testing. Three major themes emerged as important to preferences for HIV testing methods: the perceived accuracy of the test method, venue characteristics, and lack of knowledge or experience with the newer testing options, including self-testing and CHTC. These findings suggest that increasing awareness of and access to newer HIV testing options (e.g., free or reduced price on home or self-tests or CHTC available at all testing venues) is critical if these new options are to facilitate increased levels of consistent testing among young, Black MSM and transgender women. Addressing perceptions of test accuracy and supporting front line staff in creating welcoming and safe testing environments may be key intervention targets. Connecting young Black MSM and transgender women to the best test option, given preferences for specific characteristics, may support more and more consistent HIV testing.
Numerical Analysis of 2-D and 3-D MHD Flows Relevant to Fusion Applications
Khodak, Andrei
2017-08-21
Here, the analysis of many fusion applications such as liquid-metal blankets requires application of computational fluid dynamics (CFD) methods for electrically conductive liquids in geometrically complex regions and in the presence of a strong magnetic field. A current state-of-the-art general-purpose CFD code allows modeling of the flow in complex geometric regions, with simultaneous conjugated heat transfer analysis in liquid and surrounding solid parts. Together with a magnetohydrodynamics (MHD) capability, the general-purpose CFD code will be a valuable tool for the design and optimization of fusion devices. This paper describes an introduction of MHD capability into the general-purpose CFD code CFX, part of the ANSYS Workbench. The code was adapted for MHD problems using a magnetic induction approach. CFX allows introduction of user-defined variables using transport or Poisson equations. For MHD adaptation of the code, three additional transport equations were introduced for the components of the magnetic field, in addition to the Poisson equation for electric potential. The Lorentz force is included in the momentum transport equation as a source term. Fusion applications usually involve very strong magnetic fields, with values of the Hartmann number of up to tens of thousands. In this situation the system of MHD equations becomes very rigid, with very large source terms and very strong variable gradients. To increase system robustness, special measures were introduced during the iterative convergence process, such as linearization using source coefficients for momentum equations. The MHD implementation in the general-purpose CFD code was tested against benchmarks specifically selected for liquid-metal blanket applications. Results of numerical simulations using the present implementation closely match analytical solutions for a Hartmann number of up to 1500 for a 2-D laminar flow in a duct of square cross section, with conducting and nonconducting walls. Results for a 3-D test case are also included.
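The momentum source term mentioned above is the Lorentz body force obtained from Ohm's law for a moving conductor; a minimal sketch with illustrative liquid-metal values (not the CFX implementation, which solves transport equations for the induced field rather than prescribing it):

    import numpy as np

    def lorentz_force(u, B, E, sigma):
        """Lorentz body-force density f = J x B with Ohm's law J = sigma * (E + u x B).
        All vectors are 3-component arrays; sigma is the electrical conductivity."""
        J = sigma * (E + np.cross(u, B))
        return np.cross(J, B)

    # toy usage: conducting fluid moving across a strong transverse field
    u = np.array([1.0, 0.0, 0.0])     # velocity, m/s
    B = np.array([0.0, 0.0, 4.0])     # magnetic field, T
    E = np.zeros(3)                   # electric field, V/m
    print(lorentz_force(u, B, E, sigma=7.7e5))   # retarding force along -x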
Gap Analysis of Material Properties Data for Ferritic/Martensitic HT-9 Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, Neil R.; Serrano De Caro, Magdalena; Rodriguez, Edward A.
2012-08-28
The US Department of Energy (DOE), Office of Nuclear Energy (NE), is supporting the development of an ASME Code Case for adoption of 12Cr-1Mo-VW ferritic/martensitic (F/M) steel, commonly known as HT-9, primarily for use in elevated temperature design of liquid-metal fast reactors (LMFR) and components. In 2011, Los Alamos National Laboratory (LANL) nuclear engineering staff began assisting in the development of a small modular reactor (SMR) design concept, previously known as the Hyperion Module, now called the Gen4 Module. LANL staff immediately proposed HT-9 for the reactor vessel and components, as well as fuel clad and ducting, due to its superior thermal qualities. Although the ASME material Code Case, for adoption of HT-9 as an approved elevated temperature material for LMFR service, is the ultimate goal of this project, there are several key deliverables that must first be successfully accomplished. The most important key deliverable is the research, accumulation, and documentation of specific material parameters: physical, mechanical, and environmental, which becomes the basis for an ASME Code Case. Time-independent tensile and ductility data and time-dependent creep and creep-rupture behavior are some of the material properties required for a successful ASME Code case. Although this report provides a cursory review of the available data, a much more comprehensive study of open-source data would be necessary. This report serves three purposes: (a) provides a list of already existing material data information that could ultimately be made available to the ASME Code, (b) determines the HT-9 material properties data missing from available sources that would be required and (c) estimates the necessary material testing required to close the gap. Ultimately, the gap analysis demonstrates that certain material properties testing will be required to fulfill the necessary information package for an ASME Code Case.
Statistical Analysis of CFD Solutions from the Third AIAA Drag Prediction Workshop
NASA Technical Reports Server (NTRS)
Morrison, Joseph H.; Hemsch, Michael J.
2007-01-01
The first AIAA Drag Prediction Workshop, held in June 2001, evaluated the results from an extensive N-version test of a collection of Reynolds-Averaged Navier-Stokes CFD codes. The code-to-code scatter was more than an order of magnitude larger than desired for design and experimental validation of cruise conditions for a subsonic transport configuration. The second AIAA Drag Prediction Workshop, held in June 2003, emphasized the determination of installed pylon-nacelle drag increments and grid refinement studies. The code-to-code scatter was significantly reduced compared to the first DPW, but still larger than desired. However, grid refinement studies showed no significant improvement in code-to-code scatter with increasing grid refinement. The third Drag Prediction Workshop focused on the determination of installed side-of-body fairing drag increments and grid refinement studies for clean attached flow on wing alone configurations and for separated flow on the DLR-F6 subsonic transport model. This work evaluated the effect of grid refinement on the code-to-code scatter for the clean attached flow test cases and the separated flow test cases.
QR code for medical information uses.
Fontelo, Paul; Liu, Fang; Ducut, Erick G
2008-11-06
We developed QR code online tools, and simulated and tested QR code applications for medical information uses, including scanning QR code labels, URLs and authentication. Our results show possible applications for QR code in medicine.
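As an illustration of how little code such a label generator needs, the sketch below uses the third-party Python qrcode package (an assumption made for the example; the authors' tools were web-based) to encode a hypothetical record URL:

    import qrcode  # third-party package: pip install qrcode[pil]

    # encode a hypothetical medication-summary URL into a printable label
    img = qrcode.make("https://example.org/patient/summary?id=DEMO")
    img.save("label.png")

    # scanning the label returns the encoded string, which a records system
    # could use to retrieve the corresponding document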
Performance Measures of Diagnostic Codes for Detecting Opioid Overdose in the Emergency Department.
Rowe, Christopher; Vittinghoff, Eric; Santos, Glenn-Milo; Behar, Emily; Turner, Caitlin; Coffin, Phillip O
2017-04-01
Opioid overdose mortality has tripled in the United States since 2000 and opioids are responsible for more than half of all drug overdose deaths, which reached an all-time high in 2014. Opioid overdoses resulting in death, however, represent only a small fraction of all opioid overdose events and efforts to improve surveillance of this public health problem should include tracking nonfatal overdose events. International Classification of Disease (ICD) diagnosis codes, increasingly used for the surveillance of nonfatal drug overdose events, have not been rigorously assessed for validity in capturing overdose events. The present study aimed to validate the use of ICD, 9th revision, Clinical Modification (ICD-9-CM) codes in identifying opioid overdose events in the emergency department (ED) by examining multiple performance measures, including sensitivity and specificity. Data on ED visits from January 1, 2012, to December 31, 2014, including clinical determination of whether the visit constituted an opioid overdose event, were abstracted from electronic medical records for patients prescribed long-term opioids for pain from any of six safety net primary care clinics in San Francisco, California. Combinations of ICD-9-CM codes were validated in the detection of overdose events as determined by medical chart review. Both sensitivity and specificity of different combinations of ICD-9-CM codes were calculated. Unadjusted logistic regression models with robust standard errors and accounting for clustering by patient were used to explore whether overdose ED visits with certain characteristics were more or less likely to be assigned an opioid poisoning ICD-9-CM code by the documenting physician. Forty-four (1.4%) of 3,203 ED visits among 804 patients were determined to be opioid overdose events. Opioid-poisoning ICD-9-CM codes (E850.0-E850.2, 965.00-965.09) identified overdose ED visits with a sensitivity of 25.0% (95% confidence interval [CI] = 13.6% to 37.8%) and specificity of 99.9% (95% CI = 99.8% to 100.0%). Expanding the ICD-9-CM codes to include both nonspecified and general (i.e., without a decimal modifier) drug poisoning and drug abuse codes identified overdose ED visits with a sensitivity of 56.8% (95% CI = 43.6% to 72.7%) and specificity of 96.2% (95% CI = 94.8% to 97.2%). Additional ICD-9-CM codes not explicitly relevant to opioid overdose were necessary to further enhance sensitivity. Among the 44 overdose ED visits, neither naloxone administration during the visit, whether the patient responded to the naloxone, nor the specific opioids involved were associated with the assignment of an opioid poisoning ICD-9-CM code (p ≥ 0.05). Tracking opioid overdose ED visits by diagnostic coding is fairly specific but insensitive, and coding was not influenced by administration of naloxone or the specific opioids involved. The reason for the high rate of missed cases is uncertain, although these results suggest that a more clearly defined case definition for overdose may be necessary to ensure effective opioid overdose surveillance. Changes in coding practices under ICD-10 might help to address these deficiencies. © 2016 by the Society for Academic Emergency Medicine.
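Sensitivity and specificity here come straight from the 2x2 table of code-based case status against chart review; the counts below are reconstructed to be roughly consistent with the reported 25.0% sensitivity and 99.9% specificity and are illustrative only:

    def sens_spec(tp, fn, tn, fp):
        """Sensitivity and specificity of a code-based case definition against a chart-review gold standard."""
        return tp / (tp + fn), tn / (tn + fp)

    # illustrative counts: 44 true overdose visits among 3,203 ED visits,
    # 11 of which carried an opioid-poisoning ICD-9-CM code
    sens, spec = sens_spec(tp=11, fn=33, tn=3156, fp=3)
    print(round(100 * sens, 1), round(100 * spec, 1))   # 25.0 99.9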
Technical Support Document for Version 3.4.0 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2007-09-14
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards.
Lewis Structures Technology, 1988. Volume 1: Structural Dynamics
NASA Technical Reports Server (NTRS)
1988-01-01
The specific purpose of the symposium was to familiarize the engineering structures community with the depth and range of research performed by the Structures Division of the Lewis Research Center and its academic and industrial partners. Sessions covered vibration control, fracture mechanics, ceramic component reliability, parallel computing, nondestructive testing, dynamical systems, fatigue and damage, wind turbines, hot section technology, structural mechanics codes, computational methods for dynamics, structural optimization, and applications of structural dynamics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1978-12-04
The following appendices are included: Dynamic Simulation Program (ODSP-3); sample results of dynamic simulation; trip report - NH3 safety precautions/accident records; trip report - US Coast Guard Headquarters; OTEC power system development, preliminary design test program report; medium turbine generator inspection point program; net energy analysis; bus bar cost of electricity; OTEC technical specifications; and engineering drawings. (WHK)
Poly(A) code analyses reveal key determinants for tissue-specific mRNA alternative polyadenylation
Weng, Lingjie; Li, Yi; Xie, Xiaohui; Shi, Yongsheng
2016-01-01
mRNA alternative polyadenylation (APA) is a critical mechanism for post-transcriptional gene regulation and is often regulated in a tissue- and/or developmental stage-specific manner. An ultimate goal for the APA field has been to be able to computationally predict APA profiles under different physiological or pathological conditions. As a first step toward this goal, we have assembled a poly(A) code for predicting tissue-specific poly(A) sites (PASs). Based on a compendium of over 600 features that have known or potential roles in PAS selection, we have generated and refined a machine-learning algorithm using multiple high-throughput sequencing-based data sets of tissue-specific and constitutive PASs. This code can predict tissue-specific PASs with >85% accuracy. Importantly, by analyzing the prediction performance based on different RNA features, we found that PAS context, including the distance between alternative PASs and the relative position of a PAS within the gene, is a key feature for determining the susceptibility of a PAS to tissue-specific regulation. Our poly(A) code provides a useful tool for not only predicting tissue-specific APA regulation, but also for studying its underlying molecular mechanisms. PMID:27095026
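The train/evaluate loop behind such a predictor is generic; the sketch below uses scikit-learn logistic regression on a stand-in feature matrix (random placeholder features, not the published compendium of over 600 features):

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # stand-in feature matrix: rows = poly(A) sites, columns = sequence/context features
    # (e.g. distance between alternative PASs, relative position within the gene, motif scores)
    X = rng.normal(size=(500, 20))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # 1 = tissue-specific PAS

    model = LogisticRegression(max_iter=1000)
    print(cross_val_score(model, X, y, cv=5, scoring="accuracy").mean())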
Operational rate-distortion performance for joint source and channel coding of images.
Ruf, M J; Modestino, J W
1999-01-01
This paper describes a methodology for evaluating the operational rate-distortion behavior of combined source and channel coding schemes, with particular application to images. In particular, we demonstrate use of the operational rate-distortion function to obtain the optimum tradeoff between source coding accuracy and channel error protection under the constraint of a fixed transmission bandwidth for the investigated transmission schemes. Furthermore, we develop information-theoretic bounds on performance for specific source and channel coding systems and demonstrate that our combined source-channel coding methodology, applied to different schemes, results in operational rate-distortion performance which closely approaches these theoretical limits. We concentrate specifically on a wavelet-based subband source coding scheme and the use of binary rate-compatible punctured convolutional (RCPC) codes for transmission over the additive white Gaussian noise (AWGN) channel. Explicit results for real-world images demonstrate the efficacy of this approach.
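The operational tradeoff can be illustrated by brute force: fix the transmitted rate, sweep the channel-code rate, and pick the split that minimizes expected end-to-end distortion. The distortion model and residual error probabilities below are made-up placeholders, not the RCPC/wavelet system of the paper:

    # fixed channel bandwidth: R_TOTAL bits/sample leave the transmitter;
    # a channel code of rate r leaves r * R_TOTAL bits/sample for the source coder
    R_TOTAL = 2.0
    SIGMA2 = 1.0  # source variance; distortion incurred if a block is lost

    # assumed residual block-error probabilities for increasingly weak protection
    CHANNEL_OPTIONS = {1/3: 1e-6, 1/2: 1e-4, 2/3: 1e-2, 8/9: 1e-1}

    def expected_distortion(code_rate, p_err):
        source_rate = code_rate * R_TOTAL
        d_source = SIGMA2 * 2.0 ** (-2.0 * source_rate)    # classic Gaussian D(R) model
        return (1.0 - p_err) * d_source + p_err * SIGMA2    # lost block -> full variance

    best = min(CHANNEL_OPTIONS.items(), key=lambda kv: expected_distortion(*kv))
    print("best channel code rate:", round(best[0], 3))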
Evaluation in industry of a draft code of practice for manual handling.
Ashby, Liz; Tappin, David; Bentley, Tim
2004-05-01
This paper reports findings from a study which evaluated the draft New Zealand Code of Practice for Manual Handling. The evaluation assessed the ease of use, applicability and validity of the Code and in particular the associated manual handling hazard assessment tools, within New Zealand industry. The Code was studied in a sample of eight companies from four sectors of industry. Subjective feedback and objective findings indicated that the Code was useful, applicable and informative. The manual handling hazard assessment tools incorporated in the Code could be adequately applied by most users, with risk assessment outcomes largely consistent with the findings of researchers using more specific ergonomics methodologies. However, some changes were recommended to the risk assessment tools to improve usability and validity. The evaluation concluded that both the Code and the tools within it would benefit from simplification, improved typography and layout, and industry-specific information on manual handling hazards.
Regulation of mammalian cell differentiation by long non-coding RNAs
Hu, Wenqian; Alvarez-Dominguez, Juan R; Lodish, Harvey F
2012-01-01
Differentiation of specialized cell types from stem and progenitor cells is tightly regulated at several levels, both during development and during somatic tissue homeostasis. Many long non-coding RNAs have been recognized as an additional layer of regulation in the specification of cellular identities; these non-coding species can modulate gene-expression programmes in various biological contexts through diverse mechanisms at the transcriptional, translational or messenger RNA stability levels. Here, we summarize findings that implicate long non-coding RNAs in the control of mammalian cell differentiation. We focus on several representative differentiation systems and discuss how specific long non-coding RNAs contribute to the regulation of mammalian development. PMID:23070366
A Monte-Carlo Benchmark of TRIPOLI-4® and MCNP on ITER neutronics
NASA Astrophysics Data System (ADS)
Blanchet, David; Pénéliau, Yannick; Eschbach, Romain; Fontaine, Bruno; Cantone, Bruno; Ferlet, Marc; Gauthier, Eric; Guillon, Christophe; Letellier, Laurent; Proust, Maxime; Mota, Fernando; Palermo, Iole; Rios, Luis; Guern, Frédéric Le; Kocan, Martin; Reichle, Roger
2017-09-01
Radiation protection and shielding studies are often based on the extensive use of 3D Monte-Carlo neutron and photon transport simulations. The ITER organization hence recommends the use of the MCNP-5 code (version 1.60), in association with the FENDL-2.1 neutron cross section data library, specifically dedicated to fusion applications. The MCNP reference model of the ITER tokamak, the 'C-lite', is being continuously developed and improved. This article proposes to develop an alternative model, equivalent to the 'C-lite', but for the Monte-Carlo code TRIPOLI-4®. A benchmark study is defined to test this new model. Since one of the most critical areas for ITER neutronics analysis concerns the assessment of radiation levels and Shutdown Dose Rates (SDDR) behind the Equatorial Port Plugs (EPP), the benchmark is conducted to compare the neutron flux through the EPP. This problem is quite challenging with regard to the complex geometry and considering the important neutron flux attenuation, ranging from 10^14 down to 10^8 n·cm^-2·s^-1. Such a code-to-code comparison provides independent validation of the Monte-Carlo simulations, improving the confidence in neutronic results.
Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
Han, Dahai; Gu, Yanjie; Zhang, Min
2017-08-10
An optimized scheme of pulse symmetrical position-orthogonal space-time block codes (PSP-OSTBC) is proposed and applied with m-pulse position modulation (m-PPM) without the use of a complex decoding algorithm in an optical multi-input multi-output (MIMO) ultraviolet (UV) communication system. The proposed scheme breaks through the limitation of the traditional Alamouti code and is suitable for high-order m-PPM in a UV scattering channel, verified by both simulation experiments and field tests with specific parameters. The performances of 1×1, 2×1, and 2×2 PSP-OSTBC systems with 4-PPM are compared experimentally as the optimal tradeoff between modulation and coding in practical application. Meanwhile, the feasibility of the proposed scheme for 8-PPM is examined by a simulation experiment as well. The results suggest that the proposed scheme makes the system insensitive to the influence of path loss with a larger channel capacity, and a higher diversity gain and coding gain with a simple decoding algorithm will be achieved by employing the orthogonality of m-PPM in an optical-MIMO-based ultraviolet scattering channel.
Toward Supersonic Retropropulsion CFD Validation
NASA Technical Reports Server (NTRS)
Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl
2011-01-01
This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, involving time-dependent shocks and vortex shedding, the design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a RANS (Reynolds-Averaged Navier-Stokes) 2-equation model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
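The observed order-of-accuracy tests mentioned above follow a standard grid-refinement procedure. The sketch below shows the Richardson-type estimate typically used for such tests; the drag-coefficient values and refinement ratio are hypothetical illustrations, not data from the study.

```python
import math

def observed_order_of_accuracy(f_coarse, f_medium, f_fine, refinement_ratio):
    """Estimate the observed order of accuracy p of a scalar quantity
    computed on three systematically refined grids (Richardson-style estimate).

    Assumes a constant grid refinement ratio r between successive grids.
    """
    # p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r)
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(refinement_ratio)

# Hypothetical drag-coefficient values on coarse, medium, and fine grids (r = 2)
p = observed_order_of_accuracy(1.250, 1.215, 1.205, refinement_ratio=2.0)
print(f"observed order of accuracy: {p:.2f}")
```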
A measure of short-term visual memory based on the WISC-R coding subtest.
Collaer, M L; Evans, J R
1982-07-01
Adapted the Coding subtest of the WISC-R to provide a measure of visual memory. Three hundred and five children, aged 8 through 12, were administered the Coding test using standard directions. A few seconds after completion the key was taken away, and each was given a paper with only the digits and asked to write the appropriate matching symbol below each. This was termed "Coding Recall." To provide validity data, a subgroup of 50 Ss also was administered the Attention Span for Letters subtest from the Detroit Tests of Learning Aptitude (as a test of visual memory for sequences of letters) and a Bender Gestalt recall test (as a measure of visual memory for geometric forms). Coding Recall means and standard deviations are reported separately by sex and age level. Implications for clinicians are discussed. Reservations about clinical use of the data are given in view of the possible lack of representativeness of the sample used and the limited reliability and validity of Coding Recall.
Basic and Morphological Properties of Bukit Goh Bauxite
NASA Astrophysics Data System (ADS)
Hasan, Muzamir; Nor Azmi, Ahmad Amirul Faez Ahmad; Tam, Weng Long; Phang, Biao Yu; Azizul Moqsud, M.
2018-03-01
An investigation conducted by the International Maritime Organization (IMO) into the loss of the Bulk Jupiter, which was carrying bauxite from Kuantan, uncovered evidence suggesting that liquefaction led to the loss of stability. This research analysed Bukit Goh bauxite and compared it against the International Maritime Solid Bulk Cargoes (IMSBC) Code. Four samples were selected at Bukit Goh, Kuantan: two from the Bukit Goh mine and two from the stockpiles. They were tested to identify the basic and morphological properties of the bauxite by referring to GEOSPEC 3: Model Specification for Soil Testing, covering particle size distribution, moisture content and specific gravity as well as morphology. Laboratory tests included the hydrometer test, small pycnometer test, dry sieve test and Field Emission Scanning Electron Microscope (FESEM) test. The results show that the average moisture content of raw Bukit Goh bauxite is 20.64%, which exceeds the recommended maximum of 10%. The average fines content of the raw bauxite is 37.75%, whereas the IMSBC Code specifies no more than 30%. Therefore, the bauxite from the Bukit Goh mine does not meet the minimum requirements of the IMSBC Code and needs to undergo a beneficiation process for better quality and safety.
Convolutional coding combined with continuous phase modulation
NASA Technical Reports Server (NTRS)
Pizzi, S. V.; Wilson, S. G.
1985-01-01
Background theory and specific coding designs for combined coding/modulation schemes utilizing convolutional codes and continuous-phase modulation (CPM) are presented. In this paper the case of r = 1/2 coding onto a 4-ary CPM is emphasized, with short-constraint length codes presented for continuous-phase FSK, double-raised-cosine, and triple-raised-cosine modulation. Coding buys several decibels of coding gain over the Gaussian channel, with an attendant increase of bandwidth. Performance comparisons in the power-bandwidth tradeoff with other approaches are made.
Pretest analysis document for Test S-FS-6
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shaw, R.A.; Hall, D.G.
This report documents the pretest analyses completed for Semiscale Test S-FS-6. This test will simulate a transient initiated by a 100% break in a steam generator bottom feedwater line downstream of the check valve. The initial conditions represent normal operating conditions for a C-E System 80 nuclear power plant. Predictions of transients resulting from feedwater line breaks in these plants have indicated that significant primary system overpressurization may occur. The enclosed analyses include a RELAP5/MOD2/CY21 code calculation and preliminary results from a facility hot, integrated test which was conducted to near S-FS-6 specifications. The results of these analyses indicate that the test objectives for Test S-FS-6 can be achieved. The primary system overpressurization will pose no threat to personnel or plant integrity.
Diagnosis - Using automatic test equipment and artificial intelligence expert systems
NASA Astrophysics Data System (ADS)
Ramsey, J. E., Jr.
Three expert systems (ATEOPS, ATEFEXPERS, and ATEFATLAS), which were created to direct automatic test equipment (ATE), are reviewed. The purpose of the project was to develop an expert system to troubleshoot the converter-programmer power supply card for the F-15 aircraft and have that expert system direct the automatic test equipment. Each expert system uses a different knowledge base or inference engine, basing the testing on the circuit schematic, test requirements document, or ATLAS code. Implementing generalized modules allows the expert systems to be used for any different unit under test. Using converted ATLAS to LISP code allows the expert system to direct any ATE using ATLAS. The constraint propagated frame system allows for the expansion of control by creating the ATLAS code, checking the code for good software engineering techniques, directing the ATE, and changing the test sequence as needed (planning).
NASA Astrophysics Data System (ADS)
Price-Whelan, Adrian M.
2016-01-01
Now more than ever, scientific results are dependent on sophisticated software and analysis. Why should we trust code written by others? How do you ensure your own code produces sensible results? How do you make sure it continues to do so as you update, modify, and add functionality? Software testing is an integral part of code validation and writing tests should be a requirement for any software project. I will talk about Python-based tools that make managing and running tests much easier and explore some statistics for projects hosted on GitHub that contain tests.
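As an illustration of the Python-based testing workflow described above, the following is a minimal pytest-style test module. The function under test and its name are hypothetical stand-ins for project code; only the testing pattern itself is the point.

```python
# test_photometry.py -- minimal example of a Python unit test, runnable with pytest.
import math

import pytest


def magnitude_from_flux(flux, zero_point=25.0):
    """Convert a flux into an astronomical magnitude (hypothetical project code)."""
    if flux <= 0:
        raise ValueError("flux must be positive")
    return zero_point - 2.5 * math.log10(flux)


def test_known_value():
    # A flux of 1.0 should return exactly the zero point.
    assert magnitude_from_flux(1.0) == pytest.approx(25.0)


def test_invalid_input_raises():
    # Invalid input should fail loudly rather than return a silent NaN.
    with pytest.raises(ValueError):
        magnitude_from_flux(-1.0)
```

Running `pytest` in the project directory discovers and executes such tests automatically, which is what makes re-running them after every change cheap.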
ELEFANT: a user-friendly multipurpose geodynamics code
NASA Astrophysics Data System (ADS)
Thieulot, C.
2014-07-01
A new finite element code for the solution of the Stokes and heat transport equations is presented. It has purposely been designed to address geological flow problems in two and three dimensions at crustal and lithospheric scales. The code relies on the Marker-in-Cell technique and Lagrangian markers are used to track materials in the simulation domain which allows recording of the integrated history of deformation; their (number) density is variable and dynamically adapted. A variety of rheologies has been implemented including nonlinear thermally activated dislocation and diffusion creep and brittle (or plastic) frictional models. The code is built on the Arbitrary Lagrangian Eulerian kinematic description: the computational grid deforms vertically and allows for a true free surface while the computational domain remains of constant width in the horizontal direction. The solution to the large system of algebraic equations resulting from the finite element discretisation and linearisation of the set of coupled partial differential equations to be solved is obtained by means of the efficient parallel direct solver MUMPS whose performance is thoroughly tested, or by means of the WISMP and AGMG iterative solvers. The code accuracy is assessed by means of many geodynamically relevant benchmark experiments which highlight specific features or algorithms, e.g., the implementation of the free surface stabilisation algorithm, the (visco-)plastic rheology implementation, the temperature advection, the capacity of the code to handle large viscosity contrasts. A two-dimensional application to salt tectonics presented as case study illustrates the potential of the code to model large scale high resolution thermo-mechanically coupled free surface flows.
Epidemiology of angina pectoris: role of natural language processing of the medical record
Pakhomov, Serguei; Hemingway, Harry; Weston, Susan A.; Jacobsen, Steven J.; Rodeheffer, Richard; Roger, Véronique L.
2007-01-01
Background The diagnosis of angina is challenging as it relies on symptom descriptions. Natural language processing (NLP) of the electronic medical record (EMR) can provide access to such information contained in free text that may not be fully captured by conventional diagnostic coding. Objective To test the hypothesis that NLP of the EMR improves angina pectoris (AP) ascertainment over diagnostic codes. Methods Billing records of in- and out-patients were searched for ICD-9 codes for AP, chronic ischemic heart disease and chest pain. EMR clinical reports were searched electronically for 50 specific non-negated natural language synonyms to these ICD-9 codes. The two methods were compared to a standardized assessment of angina by Rose questionnaire for three diagnostic levels: unspecified chest pain, exertional chest pain, and Rose angina. Results Compared to the Rose questionnaire, the true positive rate of EMR-NLP for unspecified chest pain was 62% (95%CI:55–67) vs. 51% (95%CI:44–58) for diagnostic codes (p<0.001). For exertional chest pain, the EMR-NLP true positive rate was 71% (95%CI:61–80) vs. 62% (95%CI:52–73) for diagnostic codes (p=0.10). Both approaches had 88% (95%CI:65–100) true positive rate for Rose angina. The EMR-NLP method consistently identified more patients with exertional chest pain over 28-month follow-up. Conclusion EMR-NLP method improves the detection of unspecified and exertional chest pain cases compared to diagnostic codes. These findings have implications for epidemiological and clinical studies of angina pectoris. PMID:17383310
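To illustrate the kind of non-negated keyword search described above, here is a deliberately simplified sketch. The synonym list, negation cues, and matching logic are illustrative assumptions and do not reproduce the study's EMR-NLP system.

```python
import re

# Illustrative synonym list and negation cues; the study's actual 50 synonyms
# and NLP pipeline are not reproduced here.
ANGINA_TERMS = ["angina", "exertional chest pain", "chest pain on exertion"]
NEGATION_CUES = ["no", "denies", "without", "negative for"]


def mentions_non_negated_term(sentence, terms=ANGINA_TERMS, cues=NEGATION_CUES):
    """Return True if the sentence contains a target term that is not
    preceded by a simple negation cue within the same sentence."""
    text = sentence.lower()
    for term in terms:
        match = re.search(re.escape(term), text)
        if match:
            preceding = text[: match.start()]
            if not any(re.search(rf"\b{re.escape(cue)}\b", preceding) for cue in cues):
                return True
    return False


report = "No angina at rest. Reports exertional chest pain when climbing stairs."
for sentence in report.split("."):
    if sentence.strip():
        print(sentence.strip(), "->", mentions_non_negated_term(sentence))
```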
Sierra/Aria 4.48 Verification Manual.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierra Thermal Fluid Development Team
Presented in this document is a portion of the tests that exist in the Sierra Thermal/Fluids verification test suite. Each of these tests is run nightly with the Sierra/TF code suite and the results of the test checked under mesh refinement against the correct analytic result. For each of the tests presented in this document the test setup, derivation of the analytic solution, and comparison of the code results to the analytic solution is provided. This document can be used to confirm that a given code capability is verified or referenced as a compilation of example problems.
NASA Technical Reports Server (NTRS)
Ni, Jianjun David
2011-01-01
This presentation briefly discusses a research effort on mitigation techniques of pulsed radio frequency interference (RFI) on a Low-Density-Parity-Check (LDPC) code. This problem is of considerable interest in the context of providing reliable communications to the space vehicle which might suffer severe degradation due to pulsed RFI sources such as large radars. The LDPC code is one of the modern forward-error-correction (FEC) codes which have the decoding performance to approach the Shannon Limit. The LDPC code studied here is the AR4JA (2048, 1024) code recommended by the Consultative Committee for Space Data Systems (CCSDS) and it has been chosen for some spacecraft design. Even though this code is designed as a powerful FEC code in the additive white Gaussian noise channel, simulation data and test results show that the performance of this LDPC decoder is severely degraded when exposed to the pulsed RFI specified in the spacecraft's transponder specifications. An analysis work (through modeling and simulation) has been conducted to evaluate the impact of the pulsed RFI and a few implemental techniques have been investigated to mitigate the pulsed RFI impact by reshuffling the soft-decision-data available at the input of the LDPC decoder. The simulation results show that the LDPC decoding performance of codeword error rate (CWER) under pulsed RFI can be improved up to four orders of magnitude through a simple soft-decision-data reshuffle scheme. This study reveals that an error floor of LDPC decoding performance appears around CWER=1E-4 when the proposed technique is applied to mitigate the pulsed RFI impact. The mechanism causing this error floor remains unknown; further investigation is necessary.
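One simple way to preprocess the soft-decision data before LDPC decoding, in the spirit of the reshuffle scheme described above, is to erase (zero out) the log-likelihood ratios of symbols received during a known RFI pulse, so that the decoder treats them as uninformative rather than confidently wrong. The sketch below illustrates that idea only; it is an assumed, simplified preprocessing step, not the scheme evaluated in the study.

```python
import numpy as np

def erase_llrs_during_pulse(llrs, pulse_mask):
    """Set the soft-decision log-likelihood ratios (LLRs) of symbols hit by a
    pulsed interferer to zero (an 'erasure') before LDPC decoding.

    llrs       : 1-D array of decoder-input LLRs for one codeword
    pulse_mask : boolean array, True where the RFI pulse was active
    """
    cleaned = np.array(llrs, dtype=float)
    cleaned[np.asarray(pulse_mask, dtype=bool)] = 0.0
    return cleaned

# Toy example: 8 LLRs, the middle two corrupted by a pulse
llrs = np.array([+4.1, -3.8, +5.0, -9.7, +8.9, -4.2, +3.6, -5.1])
mask = np.array([False, False, False, True, True, False, False, False])
print(erase_llrs_during_pulse(llrs, mask))
```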
Code-to-Code Comparison, and Material Response Modeling of Stardust and MSL using PATO and FIAT
NASA Technical Reports Server (NTRS)
Omidy, Ali D.; Panerai, Francesco; Martin, Alexandre; Lachaud, Jean R.; Cozmuta, Ioana; Mansour, Nagi N.
2015-01-01
This report provides a code-to-code comparison between PATO, a recently developed high fidelity material response code, and FIAT, NASA's legacy code for ablation response modeling. The goal is to demonstrate that FIAT and PATO generate the same results when using the same models. Test cases of increasing complexity are used, from both arc-jet testing and flight experiments. When using the exact same physical models, material properties and boundary conditions, the two codes give results that agree to within 2%. The minor discrepancy is attributed to the inclusion of the gas phase heat capacity (cp) in the energy equation in PATO, and not in FIAT.
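At its simplest, a code-to-code comparison of this kind reduces to computing point-wise relative differences between the two codes' predictions of the same quantity. The sketch below uses hypothetical temperature histories, not the Stardust or MSL cases.

```python
import numpy as np

def max_relative_difference(reference, candidate):
    """Maximum point-wise relative difference between two codes' predictions
    of the same quantity (e.g., bondline temperature vs. time)."""
    reference = np.asarray(reference, dtype=float)
    candidate = np.asarray(candidate, dtype=float)
    return np.max(np.abs(candidate - reference) / np.abs(reference))

# Hypothetical temperature histories (K) from two material-response codes
fiat_temperature = np.array([300.0, 450.0, 780.0, 1150.0, 1320.0])
pato_temperature = np.array([300.0, 455.0, 790.0, 1160.0, 1335.0])

diff = max_relative_difference(fiat_temperature, pato_temperature)
print(f"max relative difference: {100 * diff:.2f}%")
```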
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hu, Rui; Sumner, Tyler S.
2016-04-17
An advanced system analysis tool SAM is being developed for fast-running, improved-fidelity, and whole-plant transient analyses at Argonne National Laboratory under DOE-NE's Nuclear Energy Advanced Modeling and Simulation (NEAMS) program. As an important part of code development, companion validation activities are being conducted to ensure the performance and validity of the SAM code. This paper presents the benchmark simulations of two EBR-II tests, SHRT-45R and BOP-302R, whose data are available through the support of DOE-NE's Advanced Reactor Technology (ART) program. The code predictions of major primary coolant system parameters are compared with the test results. Additionally, the SAS4A/SASSYS-1 code simulation results are also included for a code-to-code comparison.
Predicting Regulatory Compliance in Beer Advertising on Facebook.
Noel, Jonathan K; Babor, Thomas F
2017-11-01
The prevalence of alcohol advertising has been growing on social media platforms. The purpose of this study was to evaluate alcohol advertising on Facebook for regulatory compliance and thematic content. A total of 50 Budweiser and Bud Light ads posted on Facebook within 1 month of the 2015 NFL Super Bowl were evaluated for compliance with a self-regulated alcohol advertising code and for thematic content. An exploratory sensitivity/specificity analysis was conducted to determine if thematic content could predict code violations. The code violation rate was 82%, with violations prevalent in guidelines prohibiting the association of alcohol with success (Guideline 5) and health benefits (Guideline 3). Overall, 21 thematic content areas were identified. Displaying the product (62%) and adventure/sensation seeking (52%) were the most prevalent. There was perfect specificity (100%) for 10 content areas for detecting any code violation (animals, negative emotions, positive emotions, games/contests/promotions, female characters, minorities, party, sexuality, night-time, sunrise) and high specificity (>80%) for 10 content areas for detecting violations of guidelines intended to protect minors (animals, negative emotions, famous people, friendship, games/contests/promotions, minorities, responsibility messages, sexuality, sunrise, video games). The high prevalence of code violations indicates a failure of self-regulation to prevent potentially harmful content from appearing in alcohol advertising, including explicit code violations (e.g. sexuality). Routine violations indicate an unwillingness to restrict advertising content for public health purposes, and statutory restrictions may be necessary to sufficiently deter alcohol producers from repeatedly violating marketing codes. Violations of a self-regulated alcohol advertising code are prevalent in a sample of beer ads published on Facebook near the US National Football League's Super Bowl. Overall, 16 thematic content areas demonstrated high specificity for code violations. Alcohol advertising codes should be updated to expressly prohibit the use of such content. © The Author 2017. Medical Council on Alcohol and Oxford University Press. All rights reserved.
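The sensitivity/specificity analysis described above can be reproduced in outline with a few lines of bookkeeping over binary content and violation labels. The sketch below uses illustrative labels, not the study's 50-ad sample.

```python
def sensitivity_specificity(content_present, violation):
    """Compute sensitivity and specificity of a binary content indicator
    (e.g., 'ad depicts a party scene') for predicting a code violation."""
    tp = sum(1 for c, v in zip(content_present, violation) if c and v)
    fn = sum(1 for c, v in zip(content_present, violation) if not c and v)
    tn = sum(1 for c, v in zip(content_present, violation) if not c and not v)
    fp = sum(1 for c, v in zip(content_present, violation) if c and not v)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

# Illustrative labels for 10 ads (not the study data)
content = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
violates = [1, 1, 1, 0, 0, 0, 1, 1, 0, 1]
print(sensitivity_specificity(content, violates))
```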
Benchmark Problems of the Geothermal Technologies Office Code Comparison Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Mark D.; Podgorney, Robert; Kelkar, Sharad M.
A diverse suite of numerical simulators is currently being applied to predict or understand the performance of enhanced geothermal systems (EGS). To build confidence and identify critical development needs for these analytical tools, the United States Department of Energy, Geothermal Technologies Office has sponsored a Code Comparison Study (GTO-CCS), with participants from universities, industry, and national laboratories. A principal objective for the study was to create a community forum for improvement and verification of numerical simulators for EGS modeling. Teams participating in the study were those representing U.S. national laboratories, universities, and industries, and each team brought unique numerical simulation capabilities to bear on the problems. Two classes of problems were developed during the study, benchmark problems and challenge problems. The benchmark problems were structured to test the ability of the collection of numerical simulators to solve various combinations of coupled thermal, hydrologic, geomechanical, and geochemical processes. This class of problems was strictly defined in terms of properties, driving forces, initial conditions, and boundary conditions. Study participants submitted solutions to problems for which their simulation tools were deemed capable or nearly capable. Some participating codes were originally developed for EGS applications whereas some others were designed for different applications but can simulate processes similar to those in EGS. Solution submissions from both were encouraged. In some cases, participants made small incremental changes to their numerical simulation codes to address specific elements of the problem, and in other cases participants submitted solutions with existing simulation tools, acknowledging the limitations of the code. The challenge problems were based on the enhanced geothermal systems research conducted at Fenton Hill, near Los Alamos, New Mexico, between 1974 and 1995. The problems involved two phases of research, stimulation, development, and circulation in two separate reservoirs. The challenge problems had specific questions to be answered via numerical simulation in three topical areas: 1) reservoir creation/stimulation, 2) reactive and passive transport, and 3) thermal recovery. Whereas the benchmark class of problems was designed to test capabilities for modeling coupled processes under strictly specified conditions, the stated objective for the challenge class of problems was to demonstrate what new understanding of the Fenton Hill experiments could be realized via the application of modern numerical simulation tools by recognized expert practitioners.
Posttest analysis of international standard problem 10 using RELAP4/MOD7. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, M.; Davis, C.B.; Peterson, A.C. Jr.
RELAP4/MOD7, a best estimate computer code for the calculation of thermal and hydraulic phenomena in a nuclear reactor or related system, is the latest version in the RELAP4 code development series. This paper evaluates the capability of RELAP4/MOD7 to calculate refill/reflood phenomena. This evaluation uses the data of International Standard Problem 10, which is based on West Germany's KWU PKL refill/reflood experiment K9A. The PKL test facility represents a typical West German four-loop, 1300 MW pressurized water reactor (PWR) in reduced scale while maintaining prototypical volume-to-power ratio. The PKL facility was designed to specifically simulate the refill/reflood phase of a hypothetical loss-of-coolant accident (LOCA).
Egocentric and nonegocentric coding in memory for spatial layout: Evidence from scene recognition
2005-01-01
Much contemporary research has suggested that memories for spatial layout are stored with a preferred orientation. The present paper examines whether spatial memories are also stored with a preferred viewpoint position. Participants viewed images of an arrangement of objects taken from a single viewpoint, and were subsequently tested on their ability to recognize the arrangement from novel viewpoints that had been translated in either the lateral or depth dimension. Lateral and forward displacements of the viewpoint resulted in increasing response latencies and errors. Backward displacement showed no such effect, nor did lateral translation that resulted in a centered “canonical” view of the arrangement. These results further constrain the specificity of spatial memory, while also providing some evidence that nonegocentric spatial information is coded in memory. PMID:16933759
Multitasking the three-dimensional transport code TORT on CRAY platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azmy, Y.Y.; Barnett, D.A.; Burre, C.A.
1996-04-01
The multitasking options in the three-dimensional neutral particle transport code TORT originally implemented for Cray's CTSS operating system are revived and extended to run on Cray Y/MP and C90 computers using the UNICOS operating system. These include two coarse-grained domain decompositions: across octants, and across directions within an octant, termed Octant Parallel (OP) and Direction Parallel (DP), respectively. Parallel performance of the DP is significantly enhanced by increasing the task grain size and reducing load imbalance via dynamic scheduling of the discrete angles among the participating tasks. Substantial Wall Clock speedup factors, approaching 4.5 using 8 tasks, have been measured in a time-sharing environment, and generally depend on the test problem specifications, number of tasks, and machine loading during execution.
The pros and cons of code validation
NASA Technical Reports Server (NTRS)
Bobbitt, Percy J.
1988-01-01
Computational and wind tunnel error sources are examined and quantified using specific calculations of experimental data, and a substantial comparison of theoretical and experimental results, or a code validation, is discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain gage balances, electronically scanned pressure systems, hot film gages, hot wire anemometers, and laser velocimeters. Computational error sources include math model equation sets, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiments, wind tunnel research has the advantage of a more realistic transition process in a free-transition test, rather than relying on the choice of the right turbulence model.
User Manual for the PROTEUS Mesh Tools
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Micheal A.; Shemon, Emily R.
2015-06-01
This report describes the various mesh tools that are provided with the PROTEUS code giving both descriptions of the input and output. In many cases the examples are provided with a regression test of the mesh tools. The most important mesh tools for any user to consider using are the MT_MeshToMesh.x and the MT_RadialLattice.x codes. The former allows the conversion between most mesh types handled by PROTEUS while the second allows the merging of multiple (assembly) meshes into a radial structured grid. Note that the mesh generation process is recursive in nature and that each input specific for a given mesh tool (such as .axial or .merge) can be used as “mesh” input for any of the mesh tools discussed in this manual.
Finch, Caroline F; Orchard, John W; Twomey, Dara M; Saad Saleem, Muhammad; Ekegren, Christina L; Lloyd, David G; Elliott, Bruce C
2014-04-01
To compare Orchard Sports Injury Classification System (OSICS-10) sports medicine diagnoses assigned by a clinical and non-clinical coder. Assessment of intercoder agreement. Community Australian football. 1082 standardised injury surveillance records. Direct comparison of the four-character hierarchical OSICS-10 codes assigned by two independent coders (a sports physician and an epidemiologist). Adjudication by a third coder (biomechanist). The coders agreed on the first character 95% of the time and on the first two characters 86% of the time. They assigned the same four-digit OSICS-10 code for only 46% of the 1082 injuries. The majority of disagreements occurred for the third character; 85% were because one coder assigned a non-specific 'X' code. The sports physician code was deemed correct in 53% of cases and the epidemiologist in 44%. Reasons for disagreement included the physician not using all of the collected information and the epidemiologist lacking specific anatomical knowledge. Sports injury research requires accurate identification and classification of specific injuries and this study found an overall high level of agreement in coding according to OSICS-10. The fact that the majority of the disagreements occurred for the third OSICS character highlights the fact that increasing complexity and diagnostic specificity in injury coding can result in a loss of reliability and demands a high level of anatomical knowledge. Injury report form details need to reflect this level of complexity and data management teams need to include a broad range of expertise.
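Agreement on the first one, two, or four characters of a hierarchical code such as OSICS-10 amounts to comparing code prefixes record by record. The sketch below uses made-up codes, not the study data.

```python
def prefix_agreement(codes_a, codes_b, depth):
    """Fraction of records where two coders agree on the first `depth`
    characters of a hierarchical injury code (e.g., OSICS-10)."""
    pairs = list(zip(codes_a, codes_b))
    agree = sum(1 for a, b in pairs if a[:depth].upper() == b[:depth].upper())
    return agree / len(pairs)

# Illustrative four-character codes from two coders (not the study data)
physician     = ["KJXX", "TMMX", "ASPX", "QHIP", "KJTX"]
epidemiologist = ["KJTX", "TMMX", "ASXX", "QHIP", "KJXX"]

for depth in (1, 2, 4):
    print(f"agreement on first {depth} character(s): "
          f"{prefix_agreement(physician, epidemiologist, depth):.2f}")
```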
MELCOR/CONTAIN LMR Implementation Report-Progress FY15
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L.; Louie, David L.Y.
2016-01-01
This report describes the progress of the CONTAIN-LMR sodium physics and chemistry models to be implemented into MELCOR 2.1. It also describes the progress to implement these models into CONTAIN 2 as well. In the past two years, the implementation included the addition of sodium equations of state and sodium properties from two different sources. The first source is based on the previous work done by Idaho National Laboratory by modifying MELCOR to include a liquid lithium equation of state as a working fluid to model nuclear fusion safety research. The second source uses properties generated for the SIMMER code. Testing and results from this implementation of sodium properties are given. In addition, the CONTAIN-LMR code was derived from an early version of the CONTAIN code. Many physical models that were developed since this early version of CONTAIN are not captured by this early code version. Therefore, CONTAIN 2 is being updated with the sodium models in CONTAIN-LMR in order to facilitate verification of these models with the MELCOR code. Although CONTAIN 2, which represents the latest development of CONTAIN, now contains many of the sodium-specific models, this work is not complete due to challenges from the lower cell architecture in CONTAIN 2, which is different from CONTAIN-LMR. This implementation should be completed in the coming year, while sodium models from CONTAIN-LMR are being integrated into MELCOR. For testing, CONTAIN decks have been developed for verification and validation use. In terms of implementing the sodium models into MELCOR, a separate sodium model branch was created for this document. Because of massive development in the mainstream MELCOR 2.1 code and the requirement to merge the latest code version into this branch, the integration of the sodium models was re-directed to implement the sodium chemistry models first. This change led to delays of the actual implementation. To aid the future implementation of sodium models, a new sodium chemistry package was created. Thus, reporting for the implementation of the sodium chemistry is discussed in this report.
Ultra-wideband communication system prototype using orthogonal frequency coded SAW correlators.
Gallagher, Daniel R; Kozlovski, Nikolai Y; Malocha, Donald C
2013-03-01
This paper presents preliminary ultra-wideband (UWB) communication system results utilizing orthogonal frequency coded SAW correlators. Orthogonal frequency coding (OFC) and pseudo-noise (PN) coding provide a means for spread-spectrum UWB. The use of OFC spectrally spreads a PN sequence beyond that of CDMA, allowing for improved correlation gain. The transceiver approach is still very similar to that of the CDMA approach, but provides greater code diversity. Use of SAW correlators eliminates many of the costly components that are typically needed in the intermediate frequency (IF) section in the transmitter and receiver, and greatly reduces the signal processing requirements. Development and results of an experimental prototype system with center frequency of 250 MHz are presented. The prototype system is configured using modular RF components and benchtop pulse generator and frequency source. The SAW correlation filters used in the test setup were designed using 7 chip frequencies within the transducer. The fractional bandwidth of approximately 29% was implemented to exceed the defined UWB specification. Discussion of the filter design and results are presented and are compared with packaged device measurements. A prototype UWB system using OFC SAW correlators is demonstrated in wired and wireless configurations. OFC-coded SAW filters are used for generation of a transmitted spread-spectrum UWB and matched filter correlated reception. Autocorrelation and cross-correlation system outputs are compared. The results demonstrate the feasibility of UWB SAW correlators for use in UWB communication transceivers.
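The correlation behaviour described above can be illustrated with a simplified baseband sketch: a waveform is built from PN-weighted chips at orthogonal frequencies and passed through a matched filter. The sample rate, chip frequencies, and codes below are arbitrary illustrative values, not the 250 MHz, 7-chip device parameters.

```python
import numpy as np

fs = 8_000.0          # sample rate (arbitrary units for this sketch)
chip_duration = 0.01  # seconds per chip
chip_freqs = [500.0, 700.0, 900.0, 1100.0]   # orthogonal chip frequencies
pn_code = [+1, -1, +1, +1]                    # PN polarity per chip

t_chip = np.arange(0, chip_duration, 1.0 / fs)
waveform = np.concatenate(
    [sign * np.cos(2 * np.pi * f * t_chip) for sign, f in zip(pn_code, chip_freqs)]
)

# Matched filter = time-reversed copy of the coded waveform
matched = waveform[::-1]
autocorr = np.convolve(waveform, matched)

# A mismatched PN code gives a much lower correlation peak
other = np.concatenate(
    [s * np.cos(2 * np.pi * f * t_chip) for s, f in zip([+1, +1, -1, +1], chip_freqs)]
)
crosscorr = np.convolve(other, matched)

print("autocorrelation peak  :", np.max(np.abs(autocorr)))
print("cross-correlation peak:", np.max(np.abs(crosscorr)))
```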
Testing stellar evolution models with detached eclipsing binaries
NASA Astrophysics Data System (ADS)
Higl, J.; Weiss, A.
2017-12-01
Stellar evolution codes, as all other numerical tools, need to be verified. One of the standard stellar objects that allow stringent tests of stellar evolution theory and models, are detached eclipsing binaries. We have used 19 such objects to test our stellar evolution code, in order to see whether standard methods and assumptions suffice to reproduce the observed global properties. In this paper we concentrate on three effects that contain a specific uncertainty: atomic diffusion as used for standard solar model calculations, overshooting from convective regions, and a simple model for the effect of stellar spots on stellar radius, which is one of the possible solutions for the radius problem of M dwarfs. We find that in general old systems need diffusion to allow for, or at least improve, an acceptable fit, and that systems with convective cores indeed need overshooting. Only one system (AI Phe) requires the absence of it for a successful fit. To match stellar radii for very low-mass stars, the spot model proved to be an effective approach, but depending on model details, requires a high percentage of the surface being covered by spots. We briefly discuss improvements needed to further reduce the freedom in modelling and to allow an even more restrictive test by using these objects.
1996-01-01
Figure 3-1 shows pseudo-code for a test bench with two application nodes. The outer test bench wrapper consists of three functions, including pipeline_init and an exit function (exit_func). The application wrapper is contained in the pipeline routine.
NASA Astrophysics Data System (ADS)
Wuttke, Manfred W.
2017-04-01
At LIAG, we use numerical models to develop and enhance understanding of coupled transport processes and to predict the dynamics of the system under consideration. Topics include geothermal heat utilization, subrosion processes, and spontaneous underground coal fires. Although the details make it inconvenient if not impossible to apply a single code implementation to all systems, their investigations go along similar paths: they all depend on the solution of coupled transport equations. We thus saw a need for a modular code system with open access for the various communities to maximize the shared synergistic effects. To this purpose we develop the oops! (open object-oriented parallel solutions) toolkit, a C++ class library for the numerical solution of mathematical models of coupled thermal, hydraulic and chemical processes. This is used to develop problem-specific libraries like acme (amendable coal-fire modeling exercise), a class library for the numerical simulation of coal fires, and applications like kobra (Kohlebrand, German for coal fire), a numerical simulation code for standard coal-fire models. The basic principle of the oops! code system is the provision of data types for the description of space- and time-dependent data fields, for the terms of partial differential equations (PDEs), and for their discretisation and solution methods. Coupling of different processes, each described by its particular PDE, is modeled by an automatic timescale-ordered operator-splitting technique. acme is a derived coal-fire-specific application library, depending on oops!. If specific functionalities of general interest are implemented and have been tested, they will be assimilated into the main oops! library. Interfaces to external pre- and post-processing tools are easily implemented. Thus a construction kit which can be arbitrarily amended is formed. With the kobra application, constructed with acme, we study the processes and propagation of shallow coal-seam fires, in particular in Xinjiang, China, as well as analyze and interpret results from lab experiments.
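The timescale-ordered operator splitting mentioned above can be sketched in a few lines: the operators are sorted by their characteristic timescale and applied sequentially within each time step. The sketch below, with toy relaxation operators, illustrates the idea only; it is not the oops!/acme C++ implementation, and all names and rates are assumptions.

```python
# Minimal sketch of timescale-ordered operator splitting, assuming each coupled
# process is represented by an operator with a characteristic timescale and an
# advance(state, dt) method.

class Operator:
    def __init__(self, name, timescale, rate):
        self.name = name
        self.timescale = timescale   # characteristic timescale of the process
        self.rate = rate             # toy parameter: exponential relaxation rate

    def advance(self, state, dt):
        # Toy update: relax the state toward zero at this process's rate.
        return state * (1.0 - self.rate * dt)


def split_step(state, operators, dt):
    """Advance the coupled system by dt, applying the operator with the
    fastest (smallest) timescale first (timescale-ordered Lie splitting)."""
    for op in sorted(operators, key=lambda o: o.timescale):
        state = op.advance(state, dt)
    return state


ops = [
    Operator("heat conduction", timescale=10.0, rate=0.02),
    Operator("oxygen transport", timescale=1.0, rate=0.10),
    Operator("reaction", timescale=0.1, rate=0.30),
]

temperature = 100.0
for _ in range(5):
    temperature = split_step(temperature, ops, dt=0.05)
print(temperature)
```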
Validity of data in the Danish Colorectal Cancer Screening Database
Thomsen, Mette Kielsholm; Njor, Sisse Helle; Rasmussen, Morten; Linnemann, Dorte; Andersen, Berit; Baatrup, Gunnar; Friis-Hansen, Lennart Jan; Jørgensen, Jens Christian Riis; Mikkelsen, Ellen Margrethe
2017-01-01
Background In Denmark, a nationwide screening program for colorectal cancer was implemented in March 2014. Along with this, a clinical database for program monitoring and research purposes was established. Objective The aim of this study was to estimate the agreement and validity of diagnosis and procedure codes in the Danish Colorectal Cancer Screening Database (DCCSD). Methods All individuals with a positive immunochemical fecal occult blood test (iFOBT) result who were invited to screening in the first 3 months since program initiation were identified. From these, a sample of 150 individuals was selected using stratified random sampling by age, gender and region of residence. Data from the DCCSD were compared with data from hospital records, which were used as the reference. Agreement, sensitivity, specificity and positive and negative predictive values were estimated for categories of codes “clean colon”, “colonoscopy performed”, “overall completeness of colonoscopy”, “incomplete colonoscopy”, “polypectomy”, “tumor tissue left behind”, “number of polyps”, “lost polyps”, “risk group of polyps” and “colorectal cancer and polyps/benign tumor”. Results Hospital records were available for 136 individuals. Agreement was highest for “colorectal cancer” (97.1%) and lowest for “lost polyps” (88.2%). Sensitivity varied between moderate and high, with 60.0% for “incomplete colonoscopy” and 98.5% for “colonoscopy performed”. Specificity was 92.7% or above, except for the categories “colonoscopy performed” and “overall completeness of colonoscopy”, where the specificity was low; however, the estimates were imprecise. Conclusion A high level of agreement between categories of codes in DCCSD and hospital records indicates that DCCSD reflects the hospital records well. Further, the validity of the categories of codes varied from moderate to high. Thus, the DCCSD may be a valuable data source for future research on colorectal cancer screening. PMID:28255255
Dichotomous scoring of Trails B in patients referred for a dementia evaluation.
Schmitt, Andrew L; Livingston, Ronald B; Smernoff, Eric N; Waits, Bethany L; Harris, James B; Davis, Kent M
2010-04-01
The Trail Making Test is a popular neuropsychological test and its interpretation has traditionally used time-based scores. This study examined an alternative approach to scoring that is simply based on the examinees' ability to complete the test. If an examinee is able to complete Trails B successfully, they are coded as "completers"; if not, they are coded as "noncompleters." To assess this approach to scoring Trails B, the performance of 97 diagnostically heterogeneous individuals referred for a dementia evaluation was examined. In this sample, 55 individuals successfully completed Trails B and 42 individuals were unable to complete it. Point-biserial correlations indicated a moderate-to-strong association (r(pb)=.73) between the Trails B completion variable and the Total Scale score of the Repeatable Battery for the Assessment of Neurological Status (RBANS), which was larger than the correlation between the Trails B time-based score and the RBANS Total Scale score (r(pb)=.60). As a screen for dementia status, Trails B completion showed a sensitivity of 69% and a specificity of 100% in this sample. These results suggest that dichotomous scoring of Trails B might provide a brief and clinically useful measure of dementia status.
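The point-biserial correlation reported above is simply the Pearson correlation between the dichotomous completion variable and a continuous score. A minimal sketch with illustrative (non-study) data:

```python
import numpy as np

def point_biserial(binary, continuous):
    """Point-biserial correlation: Pearson correlation between a 0/1 variable
    (e.g., completed Trails B or not) and a continuous score (e.g., RBANS)."""
    binary = np.asarray(binary, dtype=float)
    continuous = np.asarray(continuous, dtype=float)
    return np.corrcoef(binary, continuous)[0, 1]

# Illustrative data (not the study sample): 1 = completed Trails B, 0 = did not
completed = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
rbans_total = [95, 88, 102, 70, 64, 91, 68, 99, 72, 61]
print(f"r_pb = {point_biserial(completed, rbans_total):.2f}")
```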
Nimptsch, Ulrike
2016-06-01
To investigate changes in comorbidity coding after the introduction of diagnosis related groups (DRGs) based prospective payment and whether trends differ regarding specific comorbidities. Nationwide administrative data (DRG statistics) from German acute care hospitals from 2005 to 2012. Observational study to analyze trends in comorbidity coding in patients hospitalized for common primary diseases and the effects on comorbidity-related risk of in-hospital death. Comorbidity coding was operationalized by Elixhauser diagnosis groups. The analyses focused on adult patients hospitalized for the primary diseases of heart failure, stroke, and pneumonia, as well as hip fracture. When focusing the total frequency of diagnosis groups per record, an increase in depth of coding was observed. Between-hospital variations in depth of coding were present throughout the observation period. Specific comorbidity increases were observed in 15 of the 31 diagnosis groups, and decreases in comorbidity were observed for 11 groups. In patients hospitalized for heart failure, shifts of comorbidity-related risk of in-hospital death occurred in nine diagnosis groups, in which eight groups were directed toward the null. Comorbidity-adjusted outcomes in longitudinal administrative data analyses may be biased by nonconstant risk over time, changes in completeness of coding, and between-hospital variations in coding. Accounting for such issues is important when the respective observation period coincides with changes in the reimbursement system or other conditions that are likely to alter clinical coding practice. © Health Research and Educational Trust.
Analysis of Phenix end-of-life natural convection test with the MARS-LMR code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jeong, H. Y.; Ha, K. S.; Lee, K. L.
The end-of-life test of the Phenix reactor performed by the CEA provided an opportunity to have reliable and valuable test data for the validation and verification of an SFR system analysis code. KAERI joined this international program for the analysis of the Phenix end-of-life natural circulation test coordinated by the IAEA from 2008. The main objectives of this study were to evaluate the capability of the existing SFR system analysis code MARS-LMR and to identify any limitations of the code. The analysis was performed in three stages: pre-test analysis, blind post-test analysis, and final post-test analysis. In the pre-test analysis, the design conditions provided by the CEA were used to obtain a prediction of the test. The blind post-test analysis was based on the test conditions measured during the tests, but the test results were not provided by the CEA. The final post-test analysis was performed to predict the test results as accurately as possible by improving the previous modeling of the test. Based on the pre-test analysis and blind test analysis, the modeling for heat structures in the hot pool and cold pool, steel structures in the core, heat loss from roof and vessel, and the flow path at core outlet were reinforced in the final analysis. The results of the final post-test analysis could be characterized into three different phases. In the early phase, MARS-LMR simulated the heat-up process correctly due to the enhanced heat structure modeling. In the mid phase, before the opening of the SG casing, the code reproduced the decrease of core outlet temperature successfully. Finally, in the later phase the increase of heat removal by the opening of the SG casing was well predicted with the MARS-LMR code. (authors)
Code of Federal Regulations, 2010 CFR
2010-10-01
... laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding System Code...
Code of Federal Regulations, 2011 CFR
2011-10-01
... laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding System Code...
Di Rosa, Elisa; Bardi, Lara; Umiltà, Carlo; Masina, Fabio; Forgione, Margherita; Mapelli, Daniela
2017-08-01
The concept of stimulus response compatibility (SRC) refers to the existence of a privileged association between a specific stimulus feature and a specific response feature. Two examples of SRC are the Spatial Numerical Association of Response Codes (SNARC) and the Markedness Association of Response Codes (MARC) effects. According to the polarity correspondence principle, these two SRC effects occur because of a match between the most salient dimensions of stimulus and response. Specifically, the SNARC effect would be caused by a match between right-sided responses and large numbers, while a match between right-sided responses and even numbers would give rise to the MARC effect. The aim of the present study was to test the validity of the polarity correspondence principle in explaining these two SRC effects. To this end, we applied transcranial direct current stimulation (tDCS) over left and right posterior parietal cortex (PPC), which is thought to be the neural basis of salience processing, during a parity judgement task. Results showed that cathodal tDCS over the PPC significantly reduced the MARC effect but did not affect the SNARC effect, suggesting a dissociation between the two effects. That is, the MARC would rely on a salience processing mechanism, whereas the SNARC would not. Despite this interpretation is in need of further experimental confirmations (i.e., testing different tasks or using different tDCS montages), our results suggest that the polarity correspondence principle can be a plausible explanation only for the MARC effect but not for the SNARC effect. Copyright © 2017 Elsevier Ltd. All rights reserved.
Benchmarking the Multidimensional Stellar Implicit Code MUSIC
NASA Astrophysics Data System (ADS)
Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.
2017-04-01
We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton Krylov method. A physics based preconditioning technique which can be adjusted to target varying physics is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities, and the decay of the Taylor-Green vortex. Additionally we show a test of hydrostatic equilibrium, in a stellar environment which is dominated by radiative effects. In this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes including ATHENA and the PENCIL code. MUSIC is able to both reproduce behaviour from established and widely-used codes as well as results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.
Technical Support Document for Version 3.9.0 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2011-09-01
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC is no longer included, but those sections remain in this document for reference purposes.
Technical Support Document for Version 3.9.1 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2012-09-01
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC is no longer included, and beginning with version 3.9.0, support for the 2000 and 2001 IECC is also no longer included; those sections remain in this document for reference purposes.
Learning Discriminative Binary Codes for Large-scale Cross-modal Retrieval.
Xu, Xing; Shen, Fumin; Yang, Yang; Shen, Heng Tao; Li, Xuelong
2017-05-01
Hashing based methods have attracted considerable attention for efficient cross-modal retrieval on large-scale multimedia data. The core problem of cross-modal hashing is how to learn compact binary codes that construct the underlying correlations between heterogeneous features from different modalities. A majority of recent approaches aim at learning hash functions to preserve the pairwise similarities defined by given class labels. However, these methods fail to explicitly explore the discriminative property of class labels during hash function learning. In addition, they usually discard the discrete constraints imposed on the to-be-learned binary codes, and compromise to solve a relaxed problem with quantization to obtain the approximate binary solution. Therefore, the binary codes generated by these methods are suboptimal and less discriminative to different classes. To overcome these drawbacks, we propose a novel cross-modal hashing method, termed discrete cross-modal hashing (DCH), which directly learns discriminative binary codes while retaining the discrete constraints. Specifically, DCH learns modality-specific hash functions for generating unified binary codes, and these binary codes are viewed as representative features for discriminative classification with class labels. An effective discrete optimization algorithm is developed for DCH to jointly learn the modality-specific hash function and the unified binary codes. Extensive experiments on three benchmark data sets highlight the superiority of DCH under various cross-modal scenarios and show its state-of-the-art performance.
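In outline, cross-modal hashing retrieval maps each modality into a shared binary code space and ranks database items by Hamming distance to the query code. The sketch below uses random projections as placeholders for the learned modality-specific hash functions, so it illustrates the retrieval mechanics rather than the DCH optimization itself; the feature dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
code_length = 16

# Placeholder "hash functions": random projections standing in for the learned
# modality-specific functions; sign thresholding binarizes the projected features.
W_image = rng.normal(size=(code_length, 512))   # 512-d image features (assumed)
W_text = rng.normal(size=(code_length, 300))    # 300-d text features (assumed)

def hash_codes(features, projection):
    """Project features and binarize; rows of the result are binary codes."""
    return (projection @ features.T > 0).astype(np.uint8).T

def hamming_distance(a, b):
    return int(np.count_nonzero(a != b))

# Toy database of 5 images and one text query
image_features = rng.normal(size=(5, 512))
text_query = rng.normal(size=(1, 300))

db_codes = hash_codes(image_features, W_image)
query_code = hash_codes(text_query, W_text)[0]

ranking = sorted(range(len(db_codes)),
                 key=lambda i: hamming_distance(db_codes[i], query_code))
print("retrieval order:", ranking)
```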
Synthesizing Safety Conditions for Code Certification Using Meta-Level Programming
NASA Technical Reports Server (NTRS)
Eusterbrock, Jutta
2004-01-01
In code certification the code consumer publishes a safety policy and the code producer generates a proof that the produced code is in compliance with the published safety policy. In this paper, a novel viewpoint approach towards an implementational re-use oriented framework for code certification is taken. It adopts ingredients from Necula's approach for proof-carrying code, but in this work safety properties can be analyzed on a higher code level than assembly language instructions. It consists of three parts: (1) The specification language is extended to include generic pre-conditions that shall ensure safety at all states that can be reached during program execution. Actual safety requirements can be expressed by providing domain-specific definitions for the generic predicates which act as interface to the environment. (2) The Floyd-Hoare inductive assertion method is refined to obtain proof rules that allow the derivation of the proof obligations in terms of the generic safety predicates. (3) A meta-interpreter is designed and experimentally implemented that enables automatic synthesis of proof obligations for submitted programs by applying the modified Floyd-Hoare rules. The proof obligations have two separate conjuncts, one for functional correctness and another for the generic safety obligations. Proof of the generic obligations, having provided the actual safety definitions as context, ensures domain-specific safety of program execution in a particular environment and is simpler than full program verification.
Static Verification for Code Contracts
NASA Astrophysics Data System (ADS)
Fähndrich, Manuel
The Code Contracts project [3] at Microsoft Research enables programmers on the .NET platform to author specifications in existing languages such as C# and VisualBasic. To take advantage of these specifications, we provide tools for documentation generation, runtime contract checking, and static contract verification.
Benchmarking of Improved DPAC Transient Deflagration Analysis Code
Laurinat, James E.; Hensel, Steve J.
2017-09-27
The deflagration pressure analysis code (DPAC) has been upgraded for use in modeling hydrogen deflagration transients. The upgraded code is benchmarked using data from vented hydrogen deflagration tests conducted at the HYDRO-SC Test Facility at the University of Pisa. DPAC originally was written to calculate peak pressures for deflagrations in radioactive waste storage tanks and process facilities at the Savannah River Site. Upgrades include the addition of a laminar flame speed correlation for hydrogen deflagrations and a mechanistic model for turbulent flame propagation, incorporation of inertial effects during venting, and inclusion of the effect of water vapor condensation on vessel walls. In addition, DPAC has been coupled with chemical equilibrium with applications (CEA), a NASA combustion chemistry code. The deflagration tests are modeled as end-to-end deflagrations. As a result, the improved DPAC code successfully predicts both the peak pressures during the deflagration tests and the times at which the pressure peaks.
Standardizing clinical laboratory data for secondary use.
Abhyankar, Swapna; Demner-Fushman, Dina; McDonald, Clement J
2012-08-01
Clinical databases provide a rich source of data for answering clinical research questions. However, the variables recorded in clinical data systems are often identified by local, idiosyncratic, and sometimes redundant and/or ambiguous names (or codes) rather than unique, well-organized codes from standard code systems. This reality discourages research use of such databases, because researchers must invest considerable time in cleaning up the data before they can ask their first research question. Researchers at MIT developed MIMIC-II, a nearly complete collection of clinical data about intensive care patients. Because its data are drawn from existing clinical systems, it has many of the problems described above. In collaboration with the MIT researchers, we have begun a process of cleaning up the data and mapping the variable names and codes to LOINC codes. Our first step, which we describe here, was to map all of the laboratory test observations to LOINC codes. We were able to map 87% of the unique laboratory tests that cover 94% of the total number of laboratory test results. Of the 13% of tests that we could not map, nearly 60% were due to test names whose real meaning could not be discerned and 29% represented tests that were not yet included in the LOINC table. These results suggest that LOINC codes cover most of the laboratory tests used in critical care. We have delivered this work to the MIMIC-II researchers, who have included it in their standard MIMIC-II database release so that researchers who use this database in the future will not have to do this work. Published by Elsevier Inc.
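As a toy illustration of the mapping bookkeeping described above, the snippet below uses a small local-code-to-LOINC dictionary (the local codes and result counts are invented, not MIMIC-II data; the LOINC codes are shown for illustration) and reports coverage both per unique test and weighted by result volume, mirroring the 87%/94% style of summary.

```python
# Hypothetical local lab codes mapped to LOINC; local names and counts are invented.
local_to_loinc = {
    "NA-SERUM": "2951-2",    # Sodium [Moles/volume] in Serum or Plasma
    "K-SERUM": "2823-3",     # Potassium
    "HGB": "718-7",          # Hemoglobin
    "MYSTERY-42": None,      # meaning could not be discerned -> left unmapped
}
result_counts = {"NA-SERUM": 1200, "K-SERUM": 1100, "HGB": 900, "MYSTERY-42": 150}

mapped = [code for code, loinc in local_to_loinc.items() if loinc is not None]
unique_coverage = len(mapped) / len(local_to_loinc)
volume_coverage = sum(result_counts[c] for c in mapped) / sum(result_counts.values())
print(f"unique tests mapped: {unique_coverage:.0%}; results covered: {volume_coverage:.0%}")
```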
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1994-01-01
A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.
Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of anaphylaxis. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis health outcome of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify anaphylaxis and including validation estimates of the coding algorithms. Our search revealed limited literature focusing on anaphylaxis that provided administrative and claims data-based algorithms and validation estimates. Only four studies identified via literature searches provided validated algorithms; however, two additional studies were identified by Mini-Sentinel collaborators and were incorporated. The International Classification of Diseases, Ninth Revision, codes varied, as did the positive predictive value, depending on the cohort characteristics and the specific codes used to identify anaphylaxis. Research needs to be conducted on designing validation studies to test anaphylaxis algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
A global view of the nonprotein-coding transcriptome in Plasmodium falciparum
Raabe, Carsten A.; Sanchez, Cecilia P.; Randau, Gerrit; Robeck, Thomas; Skryabin, Boris V.; Chinni, Suresh V.; Kube, Michael; Reinhardt, Richard; Ng, Guey Hooi; Manickam, Ravichandran; Kuryshev, Vladimir Y.; Lanzer, Michael; Brosius, Juergen; Tang, Thean Hock; Rozhdestvensky, Timofey S.
2010-01-01
Nonprotein-coding RNAs (npcRNAs) represent an important class of regulatory molecules that act in many cellular pathways. Here, we describe the experimental identification and validation of the small npcRNA transcriptome of the human malaria parasite Plasmodium falciparum. We identified 630 novel npcRNA candidates. Based on sequence and structural motifs, 43 of them belong to the C/D and H/ACA-box subclasses of small nucleolar RNAs (snoRNAs) and small Cajal body-specific RNAs (scaRNAs). We further observed the exonization of a functional H/ACA snoRNA gene, which might contribute to the regulation of ribosomal protein L7a gene expression. Some of the small npcRNA candidates are from telomeric and subtelomeric repetitive regions, suggesting their potential involvement in maintaining telomeric integrity and subtelomeric gene silencing. We also detected 328 cis-encoded antisense npcRNAs (asRNAs) complementary to P. falciparum protein-coding genes of a wide range of biochemical pathways, including determinants of virulence and pathology. All cis-encoded asRNA genes tested exhibit lifecycle-specific expression profiles. For all but one of the respective sense–antisense pairs, we deduced concordant patterns of expression. Our findings have important implications for a better understanding of gene regulatory mechanisms in P. falciparum, revealing an extended and sophisticated npcRNA network that may control the expression of housekeeping genes and virulence factors. PMID:19864253
48 CFR 304.7001 - Numbering acquisitions.
Code of Federal Regulations, 2010 CFR
2010-10-01
... contracting office identification codes currently in use is contained in the DCIS Users' Manual, available at... than one code may apply in a specific situation, or for additional codes, refer to the DCIS Users' Manual or consult with the cognizant DCIS coordinator/focal point for guidance on which code governs...
One Speaker, Two Languages. Cross-Disciplinary Perspectives on Code-Switching.
ERIC Educational Resources Information Center
Milroy, Lesley, Ed.; Muysken, Pieter, Ed.
Fifteen articles review code-switching in the four major areas: policy implications in specific institutional and community settings; perspectives of social theory of code-switching as a form of speech behavior in particular social contexts; the grammatical analysis of code-switching, including factors that constrain switching even within a…
Evaluation plan for space station network interface units
NASA Technical Reports Server (NTRS)
Weaver, Alfred C.
1990-01-01
Outlined here is a procedure for evaluating network interface units (NIUs) produced for the Space Station program. The procedures should be equally applicable to the data management system (DMS) testbed NIUs produced by Honeywell and IBM. The evaluation procedures are divided into four areas. Performance measurement tools are hardware and software that must be developed in order to evaluate NIU performance. Performance tests are a series of tests, each of which documents some specific characteristic of NIU and/or network performance. In general, these performance tests quantify the speed, capacity, latency, and reliability of message transmission under a wide variety of conditions. Functionality tests are a series of tests and code inspections that demonstrate the functionality of the particular subset of ISO protocols which have been implemented in a given NIU. Conformance tests are a series of tests which would expose whether or not selected features within the ISO protocols are present and interoperable.
Two-fluid 2.5D code for simulations of small scale magnetic fields in the lower solar atmosphere
NASA Astrophysics Data System (ADS)
Piantschitsch, Isabell; Amerstorfer, Ute; Thalmann, Julia Katharina; Hanslmeier, Arnold; Lemmerer, Birgit
2015-08-01
Our aim is to investigate magnetic reconnection as a result of the time evolution of magnetic flux tubes in the solar chromosphere. A new numerical two-fluid code was developed, which will perform a 2.5D simulation of the dynamics from the upper convection zone up to the transition region. The code is based on the Total Variation Diminishing Lax-Friedrichs method and includes the effects of ion-neutral collisions, ionisation/recombination, thermal/resistive diffusivity as well as collisional/resistive heating. What is innovative about our newly developed code is the inclusion of a two-fluid model in combination with the use of analytically constructed vertically open magnetic flux tubes, which are used as initial conditions for our simulation. First magnetohydrodynamic (MHD) tests have already shown good agreement with known results of numerical MHD test problems such as the Orszag-Tang vortex test, the Current Sheet test, or the Spherical Blast Wave test. Furthermore, the single-fluid approach will also be applied to the initial conditions, in order to compare the different rates of magnetic reconnection in both codes, the two-fluid code and the single-fluid one.
EMPHASIS™/Nevada CABANA User Guide Version 2.1.2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turner, C. David; Bohnhoff, William J.; Powell, Jennifer L.
2017-11-15
The CABle ANAlysis (CABANA) portion of the EMPHASIS™ suite is designed specifically for the simulation of cable SGEMP. The code can be used to evaluate the response of a specific cable design to a threat or to compare and minimize the relative response of different designs. This document provides user-specific information to facilitate the application of the code to cables of interest.
NASA Technical Reports Server (NTRS)
Kimmel, W. M.; Kuhn, N. S.; Berry, R. F.; Newman, J. A.
2001-01-01
An overview and status of current activities seeking alternatives to 200 grade 18Ni Steel CVM alloy for cryogenic wind tunnel models is presented. Specific improvements in material selection have been researched including availability, strength, fracture toughness and potential for use in transonic wind tunnel testing. Potential benefits from utilizing damage tolerant life-prediction methods, recently developed fatigue crack growth codes and upgraded NDE methods are also investigated. Two candidate alloys are identified and accepted for cryogenic/transonic wind tunnel models and hardware.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wodtke, C.H.; Frizzell, D.R.; Plunkett, W.A.
1986-06-01
Procedure WPS-1002 is qualified under Section IX of the ASME Boiler and Pressure Vessel Code for gas tungsten arc welding of aluminum alloys 3004, 5052, 5154, and 5454 (P-22), in thickness range 0.062 to 0.5 inches; filler metal is ER4043 (F-23) for 3004, and ER5356 (F-22) for other alloys; shielding gas is argon.
Software engineering project management - A state-of-the-art report
NASA Technical Reports Server (NTRS)
Thayer, R. H.; Lehman, J. H.
1977-01-01
The management of software engineering projects in the aerospace industry was investigated. The survey assessed such features as contract type, specification preparation techniques, software documentation required by customers, planning and cost-estimating, quality control, the use of advanced program practices, software tools and test procedures, the education levels of project managers, programmers and analysts, work assignment, automatic software monitoring capabilities, design and coding reviews, production times, success rates, and organizational structure of the projects.
Shinde, Satomi K.; Danov, Stacy; Chen, Chin-Chih; Clary, Jamie; Harper, Vicki; Bodfish, James W.; Symons, Frank J.
2014-01-01
Objectives The main aim of the study was to generate initial convergent validity evidence for the Pain and Discomfort Scale (PADS) for use with non-verbal adults with intellectual disabilities (ID). Methods Forty-four adults with intellectual disability (mean age = 46, 52 % male) were evaluated using a standardized sham-controlled and blinded sensory testing protocol, from which FACS and PADS scores were tested for (1) sensitivity to an array of calibrated sensory stimuli, (2) specificity (active vs. sham trials), and (3) concordance. Results The primary findings were that participants were reliably coded using both FACS and PADS approaches as being reactive to the sensory stimuli (FACS: F[2, 86] = 4.71, P < .05, PADS: F[2, 86] = 21.49, P < .05) (sensitivity evidence), not reactive during the sham stimulus trials (FACS: F[1, 43]= 3.77, p = .06, PADS: F[1, 43] = 5.87, p = .02) (specificity evidence), and there were significant (r = .41 – .51, p < .01) correlations between PADS and FACS (convergent validity evidence). Discussion FACS is an objective coding platform for facial expression. It requires intensive training and resources for scoring. As such it may be limited for clinical application. PADS was designed for clinical application. PADS scores were comparable to FACS scores under controlled evaluation conditions providing partial convergent validity evidence for its use. PMID:24135902
Testing of Error-Correcting Sparse Permutation Channel Codes
NASA Technical Reports Server (NTRS)
Shcheglov, Kirill, V.; Orlov, Sergei S.
2008-01-01
A computer program performs Monte Carlo direct numerical simulations for testing sparse permutation channel codes, which offer strong error-correction capabilities at high code rates and are considered especially suitable for storage of digital data in holographic and volume memories. A word in a code of this type is characterized by, among other things, a sparseness parameter (M) and a fixed number (K) of 1 or "on" bits in a channel block length of N.
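A hedged sketch of the kind of Monte Carlo exercise described above, using a generic constant-weight codebook (K "on" bits in a block of length N) with nearest-codeword decoding as a stand-in; the actual sparse permutation code construction and decoder in the software are not reproduced here.

```python
import numpy as np
rng = np.random.default_rng(1)

def random_codebook(n_words, N, K):
    """Each codeword has exactly K 'on' bits in a channel block of length N."""
    book = np.zeros((n_words, N), dtype=int)
    for word in book:
        word[rng.choice(N, size=K, replace=False)] = 1
    return book

def word_error_rate(book, p_flip, trials=2000):
    """Direct numerical simulation over a binary symmetric channel with
    minimum-Hamming-distance decoding."""
    n_words, N = book.shape
    errors = 0
    for _ in range(trials):
        sent = rng.integers(n_words)
        received = book[sent] ^ (rng.random(N) < p_flip)   # random bit flips
        decoded = np.argmin(np.abs(book - received).sum(axis=1))
        errors += decoded != sent
    return errors / trials

book = random_codebook(n_words=64, N=32, K=4)
print("estimated word error rate:", word_error_rate(book, p_flip=0.02))
```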
Guidelines for development structured FORTRAN programs
NASA Technical Reports Server (NTRS)
Earnest, B. M.
1984-01-01
Computer programming and coding standards were compiled to serve as guidelines for the uniform writing of FORTRAN 77 programs at NASA Langley. Software development philosophy, documentation, general coding conventions, and specific FORTRAN coding constraints are discussed.
Quantized phase coding and connected region labeling for absolute phase retrieval.
Chen, Xiangcheng; Wang, Yuwei; Wang, Yajun; Ma, Mengchao; Zeng, Chunnian
2016-12-12
This paper proposes an absolute phase retrieval method for complex object measurement based on quantized phase-coding and connected region labeling. A specific code sequence is embedded into the quantized phase of three coded fringes. Connected regions of different codes are labeled and assigned 3-digit codes combining the current period and its neighbors. Wrapped phase spanning more than 36 periods can be restored with reference to the code sequence. Experimental results verify the capability of the proposed method to measure multiple isolated objects.
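The following is a minimal sketch of the neighbor-code idea as described, not the published algorithm: each fringe period carries one quantized code, a period is identified by the triplet (left neighbor, own code, right neighbor), and the recovered period index converts wrapped phase to absolute phase. The code sequence and the uniqueness of the triplets are assumptions of the example.

```python
import math

# Illustrative code sequence embedded in the fringes (values are made up).
code_sequence = [0, 1, 2, 3, 1, 0, 2, 1, 3, 0, 3, 2]

# Label each period by its 3-digit code (previous, current, next); assumes unique triplets.
triplet_to_period = {
    (code_sequence[k - 1], code_sequence[k], code_sequence[k + 1]): k
    for k in range(1, len(code_sequence) - 1)
}

def absolute_phase(wrapped_phase, triplet):
    """Unwrap one pixel: add 2*pi times the period index recovered from its triplet."""
    k = triplet_to_period[triplet]
    return wrapped_phase + 2.0 * math.pi * k

print(absolute_phase(1.0, (3, 1, 0)))   # period index 4 -> 1.0 + 8*pi
```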
System Design Description for the TMAD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finfrock, S.H.
This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.
Recombinant blood group proteins for use in antibody screening and identification tests.
Seltsam, Axel; Blasczyk, Rainer
2009-11-01
The present review elucidates the potentials of recombinant blood group proteins (BGPs) for red blood cell (RBC) antibody detection and identification in pretransfusion testing and the achievements in this field so far. Many BGPs have been eukaryotically and prokaryotically expressed in sufficient quantity and quality for RBC antibody testing. Recombinant BGPs can be incorporated in soluble protein reagents or solid-phase assays such as ELISA, color-coded microsphere and protein microarray chip-based techniques. Because novel recombinant protein-based assays use single antigens, a positive reaction of a serum with the recombinant protein directly indicates the presence and specificity of the target antibody. Inversely, conventional RBC-based assays use panels of human RBCs carrying a huge number of blood group antigens at the same time and require negative reactions of samples with antigen-negative cells for indirect determination of antibody specificity. Because of their capacity for single-step, direct RBC antibody determination, recombinant protein-based assays may greatly facilitate and accelerate the identification of common and rare RBC antibodies.
Sharp, Ian; Patton, James; Listenberger, Molly; Case, Emily
2011-08-08
Recent research that tests interactive devices for prolonged therapy practice has revealed new prospects for robotics combined with graphical and other forms of biofeedback. Previous human-robot interactive systems have required different software commands to be implemented for each robot, leading to unnecessary developmental overhead time each time a new system becomes available. For example, when a haptic/graphic virtual reality environment has been coded for one specific robot to provide haptic feedback, that specific robot would not be able to be traded for another robot without recoding the program. However, recent efforts in the open source community have proposed a wrapper class approach that can elicit nearly identical responses regardless of the robot used. The result can lead researchers across the globe to perform similar experiments using shared code. Therefore, modular "switching out" of one robot for another would not affect development time. In this paper, we outline the successful creation and implementation of a wrapper class for one robot into the open-source H3DAPI, which integrates the software commands most commonly used by all robots.
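The wrapper-class pattern the abstract refers to can be sketched as below; the class and method names are hypothetical and do not correspond to the actual H3DAPI interface, which is not spelled out here.

```python
from abc import ABC, abstractmethod

class HapticRobot(ABC):
    """Hypothetical wrapper interface: experiment code talks only to this class,
    so one robot can be swapped for another without recoding the experiment."""

    @abstractmethod
    def end_effector_position(self):
        """Return the (x, y, z) position in metres."""

    @abstractmethod
    def command_force(self, fx, fy, fz):
        """Apply a force in newtons at the end effector."""

class SimulatedRobot(HapticRobot):
    """Stand-in device so the sketch runs without hardware."""
    def end_effector_position(self):
        return (0.1, 0.0, -0.05)
    def command_force(self, fx, fy, fz):
        print(f"applying force ({fx:.2f}, {fy:.2f}, {fz:.2f}) N")

def run_trial(robot: HapticRobot, stiffness=20.0):
    # Simple virtual spring pulling the hand toward the origin.
    x, y, z = robot.end_effector_position()
    robot.command_force(-stiffness * x, -stiffness * y, -stiffness * z)

run_trial(SimulatedRobot())
```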
PHISICS/RELAP5-3D Adaptive Time-Step Method Demonstrated for the HTTR LOFC#1 Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baker, Robin Ivey; Balestra, Paolo; Strydom, Gerhard
A collaborative effort between Japan Atomic Energy Agency (JAEA) and Idaho National Laboratory (INL) as part of the Civil Nuclear Energy Working Group is underway to model the high temperature engineering test reactor (HTTR) loss of forced cooling (LOFC) transient that was performed in December 2010. The coupled version of RELAP5-3D, a thermal fluids code, and PHISICS, a neutronics code, were used to model the transient. The focus of this report is to summarize the changes made to the PHISICS-RELAP5-3D code for implementing an adaptive time step methodology into the code for the first time, and to test it using the full HTTR PHISICS/RELAP5-3D model developed by JAEA and INL and the LOFC simulation. Various adaptive schemes are available based on flux or power convergence criteria that allow significantly larger time steps to be taken by the neutronics module. The report includes a description of the HTTR and the associated PHISICS/RELAP5-3D model test results as well as the University of Rome sub-contractor report documenting the adaptive time step theory and methodology implemented in PHISICS/RELAP5-3D. Two versions of the HTTR model were tested using 8 and 26 energy groups. It was found that most of the new adaptive methods lead to significant improvements in the LOFC simulation time required without significant accuracy penalties in the prediction of the fission power and the fuel temperature. In the best performing 8 group model scenarios, a LOFC simulation of 20 hours could be completed in real-time, or even less than real-time, compared with the previous version of the code that completed the same transient 3-8 times slower than real-time. A few of the user choice combinations between the methodologies available and the tolerance settings did however result in unacceptably high errors or insignificant gains in simulation time. The study is concluded with recommendations on which methods to use for this HTTR model. An important caveat is that these findings are very model-specific and cannot be generalized to other PHISICS/RELAP5-3D models.
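One plausible shape of such an adaptive scheme, written here as a generic sketch rather than the PHISICS/RELAP5-3D implementation: the relative change of the fission power between iterates decides whether a step is accepted, shrunk and retried, or followed by a larger step. The tolerances, growth factors, and the advance() interface are assumptions of the example.

```python
def adaptive_march(advance, t_end, dt0, rel_tol=1e-4, grow=2.0, shrink=0.5,
                   dt_min=1e-4, dt_max=3600.0):
    """advance(t, dt) -> (power_before, power_after) for one coupled step.
    The relative power change drives acceptance and the next step size."""
    t, dt = 0.0, dt0
    while t < t_end:
        p_old, p_new = advance(t, dt)
        rel_change = abs(p_new - p_old) / max(abs(p_old), 1e-30)
        if rel_change > rel_tol and dt > dt_min:
            dt = max(dt * shrink, dt_min)      # reject: retry with a smaller step
            continue
        t += dt                                # accept the step
        if rel_change < 0.1 * rel_tol:
            dt = min(dt * grow, dt_max)        # comfortably converged: enlarge the step
    return t
```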
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fritz, David J.; Harrison, Christopher B.; Perr, C. W.
Choreographer is a "moving target defense system", designed to protect against attacks aimed at IP addresses without corresponding domain name system (DNS) lookups. It coordinates actions between a DNS server and a Network Address Translation (NAT) device to regularly change which publicly available IP addresses' traffic will be routed to the protected device versus routed to a honeypot. More details about how Choreographer operates can be found in Section 2: Introducing Choreographer. Operational considerations for the successful deployment of Choreographer can be found in Section 3. The Testing & Evaluation (T&E) for Choreographer involved 3 phases: Pre-testing, Code Analysis, andmore » Operational Testing. Pre-testing, described in Section 4, involved installing and configuring an instance of Choreographer and verifying it would operate as expected for a simple use case. Our findings were that it was simple and straightforward to prepare a system for a Choreographer installation as well as configure Choreographer to work in a representative environment. Code Analysis, described in Section 5, consisted of running a static code analyzer (HP Fortify) and conducting dynamic analysis tests using the Valgrind instrumentation framework. Choreographer performed well, such that only a few errors that might possibly be problematic in a given operating situation were identified. Operational Testing, described in Section 6, involved operating Choreographer in a representative environment created through Emulytics TM . Depending upon the amount of server resources dedicated to Choreographer vis-á-vis the amount of client traffic handled, Choreographer had varying degrees of operational success. In an environment with a poorly resourced Choreographer server and as few as 50-100 clients, Choreographer failed to properly route traffic over half the time. Yet, with a well-resourced server, Choreographer handled over 1000 clients without missrouting. Choreographer demonstrated sensitivity to low-latency connections as well as high volumes of traffic. In addition, depending upon the frequency of new connection requests and the size of the address range that Choreographer has to work with, it is possible for all benefits of Choreographer to be ameliorated by its need to allow DNS servers rather than the end client to make DNS requests. Conclusions and Recommendations, listed in Section 7, address the need to understand the specific use case where Choreographer would be deployed to assess whether there would be problems resulting from the operational considerations described in Section 3 or performance concerns from the results of Operational Testing in Section 6. Deployed in an appropriate architecture with sufficiently light traffic volumes and a well-provisioned server, it is quite likely that Choreographer would perform satisfactorily. Thus, we recommend further detailed testing, to potentially include Red Team testing, at such time a specific use case is identified« less
The Role of Ontologies in Schema-based Program Synthesis
NASA Technical Reports Server (NTRS)
Bures, Tomas; Denney, Ewen; Fischer, Bernd; Nistor, Eugen C.
2004-01-01
Program synthesis is the process of automatically deriving executable code from (non-executable) high-level specifications. It is more flexible and powerful than conventional code generation techniques that simply translate algorithmic specifications into lower-level code or only create code skeletons from structural specifications (such as UML class diagrams). Key to building a successful synthesis system is specializing to an appropriate application domain. The AUTOBAYES and AUTOFILTER systems, under development at NASA Ames, operate in the two domains of data analysis and state estimation, respectively. The central concept of both systems is the schema, a representation of reusable computational knowledge. This can take various forms, including high-level algorithm templates, code optimizations, datatype refinements, or architectural information. A schema also contains applicability conditions that are used to determine when it can be applied safely. These conditions can refer to the initial specification, to intermediate results, or to elements of the partially-instantiated code. Schema-based synthesis uses AI technology to recursively apply schemas to gradually refine a specification into executable code. This process proceeds in two main phases. A front-end gradually transforms the problem specification into a program represented in an abstract intermediate code. A backend then compiles this further down into a concrete target programming language of choice. A core engine applies schemas on the initial problem specification, then uses the output of those schemas as the input for other schemas, until the full implementation is generated. Since there might be different schemas that implement different solutions to the same problem, this process can generate an entire solution tree. AUTOBAYES and AUTOFILTER have reached the level of maturity where they enable users to solve interesting application problems, e.g., the analysis of Hubble Space Telescope images. They are large (in total around 100kLoC Prolog), knowledge-intensive systems that employ complex symbolic reasoning to generate a wide range of non-trivial programs for complex application domains. Their schemas can have complex interactions, which make it hard to change them in isolation or even understand what an existing schema actually does. Adding more capabilities by increasing the number of schemas will only worsen this situation, ultimately leading to the entropy death of the synthesis system. The root cause of this problem is that the domain knowledge is scattered throughout the entire system and only represented implicitly in the schema implementations. In our current work, we are addressing this problem by making explicit the knowledge from different parts of the synthesis system. Here, we discuss how Gruber's definition of an ontology as an explicit specification of a conceptualization matches our efforts in identifying and explicating the domain-specific concepts. We outline the roles ontologies play in schema-based synthesis and argue that they address different audiences and serve different purposes. Their first role is descriptive: they serve as explicit documentation, and help to understand the internal structure of the system. Their second role is prescriptive: they provide the formal basis against which the other parts of the system (e.g., schemas) can be checked.
Their final role is referential: ontologies also provide semantically meaningful "hooks" which allow schemas and tools to access the internal state of the program derivation process (e.g., fragments of the generated code) in domain-specific rather than language-specific terms, and thus to modify it in a controlled fashion. For discussion purposes we use AUTOLINEAR, a small synthesis system we are currently experimenting with, which can generate code for solving a system of linear equations, Az = b.
Synchronizing Defense Transportation System Reference Files
1998-02-01
Table of code source titles, including the ASME Boiler and Pressure Vessel Code, Federal Specification, Military Specification, BS, AWS, DIN, JIS, and Food and Drug Administration Therapeutic Equivalence; page references omitted.
Trellis phase codes for power-bandwidth efficient satellite communications
NASA Technical Reports Server (NTRS)
Wilson, S. G.; Highfill, J. H.; Hsu, C. D.; Harkness, R.
1981-01-01
Support work on improved power and spectrum utilization on digital satellite channels was performed. Specific attention is given to the class of signalling schemes known as continuous phase modulation (CPM). The specific work described in this report addresses: analytical bounds on error probability for multi-h phase codes, power and bandwidth characterization of 4-ary multi-h codes, and initial results of channel simulation to assess the impact of band limiting filters and nonlinear amplifiers on CPM performance.
Orthographic Coding: Brain Activation for Letters, Symbols, and Digits.
Carreiras, Manuel; Quiñones, Ileana; Hernández-Cabrera, Juan Andrés; Duñabeitia, Jon Andoni
2015-12-01
The present experiment investigates the input coding mechanisms of 3 common printed characters: letters, numbers, and symbols. Despite research in this area, it is yet unclear whether the identity of these 3 elements is processed through the same or different brain pathways. In addition, some computational models propose that the position-in-string coding of these elements responds to general flexible mechanisms of the visual system that are not character-specific, whereas others suggest that the position coding of letters responds to specific processes that are different from those that guide the position-in-string assignment of other types of visual objects. Here, in an fMRI study, we manipulated character position and character identity through the transposition or substitution of 2 internal elements within strings of 4 elements. Participants were presented with 2 consecutive visual strings and asked to decide whether they were the same or different. The results showed: 1) that some brain areas responded more to letters than to numbers and vice versa, suggesting that processing may follow different brain pathways; 2) that the left parietal cortex is involved in letter identity, and critically in letter position coding, specifically contributing to the early stages of the reading process; and that 3) a stimulus-specific mechanism for letter position coding is operating during orthographic processing. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Problem-Based Test: An "In Vitro" Experiment to Analyze the Genetic Code
ERIC Educational Resources Information Center
Szeberenyi, Jozsef
2010-01-01
Terms to be familiar with before you start to solve the test: genetic code, translation, synthetic polynucleotide, leucine, serine, filter precipitation, radioactivity measurement, template, mRNA, tRNA, rRNA, aminoacyl-tRNA synthesis, ribosomes, degeneration of the code, wobble, initiation, and elongation of protein synthesis, initiation codon.…
A Test of Two Alternative Cognitive Processing Models: Learning Styles and Dual Coding
ERIC Educational Resources Information Center
Cuevas, Joshua; Dawson, Bryan L.
2018-01-01
This study tested two cognitive models, learning styles and dual coding, which make contradictory predictions about how learners process and retain visual and auditory information. Learning styles-based instructional practices are common in educational environments despite a questionable research base, while the use of dual coding is less…
Preliminary Analysis of the Transient Reactor Test Facility (TREAT) with PROTEUS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Connaway, H. M.; Lee, C. H.
The neutron transport code PROTEUS has been used to perform preliminary simulations of the Transient Reactor Test Facility (TREAT). TREAT is an experimental reactor designed for the testing of nuclear fuels and other materials under transient conditions. It operated from 1959 to 1994, when it was placed on non-operational standby. The restart of TREAT to support the U.S. Department of Energy's resumption of transient testing is currently underway. Both single assembly and assembly-homogenized full core models have been evaluated. Simulations were performed using a historic set of WIMS-ANL-generated cross-sections as well as a new set of Serpent-generated cross-sections. To support this work, further analyses were also performed using additional codes in order to investigate particular aspects of TREAT modeling. DIF3D and the Monte-Carlo codes MCNP and Serpent were utilized in these studies. MCNP and Serpent were used to evaluate the effect of geometry homogenization on the simulation results and to support code-to-code comparisons. New meshes for the PROTEUS simulations were created using the CUBIT toolkit, with additional meshes generated via conversion of selected DIF3D models to support code-to-code verifications. All current analyses have focused on code-to-code verifications, with additional verification and validation studies planned. The analysis of TREAT with PROTEUS-SN is an ongoing project. This report documents the studies that have been performed thus far, and highlights key challenges to address in future work.
Specifications for a coupled neutronics thermal-hydraulics SFR test case
NASA Astrophysics Data System (ADS)
Tassone, A.; Smirnov, A. D.; Tikhomirov, G. V.
2017-01-01
Coupled neutronics/thermal-hydraulics calculations for the design of nuclear reactors are a growing trend in the scientific community. This approach allows the mutual feedbacks between the neutronic distribution and the thermal-hydraulics properties of the materials composing the reactor to be properly represented, details which are often lost when separate analyses are performed. In this work, a test case for a generation IV sodium-cooled fast reactor (SFR), based on the ASTRID concept developed by CEA, is proposed. Two sub-assemblies (SA) characterized by different fuel enrichment and layout are considered. Specifications for the test case are provided, including geometrical data, material compositions, thermo-physical properties, and coupling scheme details. Serpent and ANSYS-CFX are used as reference in the description of suitable inputs for performing the benchmark, but the use of other code combinations for the purpose of validation of the results is encouraged. The expected outcomes of the test case are the axial distributions of the volumetric power generation term (q‴), density, and temperature for the fuel, the cladding, and the coolant.
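A schematic of the fixed-point coupling loop implied by the specification, with placeholder callables standing in for the neutronics and CFD solvers (these are not Serpent or ANSYS-CFX calls): the power distribution feeds the thermal-hydraulics solve, the resulting temperatures and densities feed back into the neutronics solve, and the exchange repeats until the power field stops changing.

```python
def coupled_iteration(neutronics, thermal_hydraulics, max_iter=20, tol=1e-3):
    """Picard-style exchange between two solvers.  `neutronics(feedback)` returns a
    list of nodal powers (q'''); `thermal_hydraulics(power)` returns the feedback
    fields (fuel/cladding/coolant temperatures and densities).  Interfaces assumed."""
    power = neutronics(feedback=None)              # initial solve without feedback
    feedback = None
    for _ in range(max_iter):
        feedback = thermal_hydraulics(power)       # q''' -> temperatures, densities
        new_power = neutronics(feedback)
        change = max(abs(p1 - p0) / max(abs(p0), 1e-30)
                     for p0, p1 in zip(power, new_power))
        power = new_power
        if change < tol:                           # converged power distribution
            break
    return power, feedback
```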
The temperature dependence of the tensile properties of thermally treated Alloy 690 tubing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harrod, D.L.; Gold, R.E.; Larsson, B.
1992-12-31
Tensile tests were run in air on full tube cross-sections of 22.23 mm OD by 1.27 mm wall thickness Alloy 690 steam generator production tubes from ten (10) heats of material at eight (8) temperatures between room temperature and 760°C. The tubing was manufactured to specification requirements consistent with the EPRI guidelines for Alloy 690 tubing. The room temperature stress-strain curves are described quite well by the Voce equation. Ductile fracture by dimpled rupture was observed at all test temperatures. The elevated temperature tensile properties are compared with design data given in the ASME Code.
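For reference, one common form of the Voce hardening law mentioned above, written with generic symbols (the constants fitted to these heats are not given in the abstract and are not reproduced here), is

\sigma(\varepsilon_p) = \sigma_s - (\sigma_s - \sigma_0)\,\exp\!\left(-\varepsilon_p / \varepsilon_c\right)

where \sigma_0 is the initial yield stress, \sigma_s the saturation stress, \varepsilon_p the plastic strain, and \varepsilon_c a characteristic strain fitted to the data.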
FEDEF: A High Level Architecture Federate Development Framework
2010-09-01
require code changes for operability between HLA specifications. Configuration of federate requirements such as publications, subscriptions, time management, and management protocol should occur outside of federate source code, allowing for federate reusability without code modification and re
Review of codes, standards, and regulations for natural gas locomotives.
DOT National Transportation Integrated Search
2014-06-01
This report identified, collected, and summarized relevant international codes, standards, and regulations with potential : applicability to the use of natural gas as a locomotive fuel. Few international or country-specific codes, standards, and regu...
Creation and Delivery of New Superpixelized DIRBE Map Products
NASA Technical Reports Server (NTRS)
Weiland, J.
1998-01-01
Phase 1 called for the following tasks: (1) completion of code to generate intermediate files containing the individual DIRBE observations which would be used to make the superpixelized maps; (2) completion of code necessary to generate the maps themselves; and (3) quality control on test-case maps in the form of point-source extraction and photometry. Items 1 and 2 are well in hand and the tested code is nearly complete. A few test maps have been generated for the tests mentioned in item 3. Map generation is not in production mode yet.
Testability, Test Automation and Test Driven Development for the Trick Simulation Toolkit
NASA Technical Reports Server (NTRS)
Penn, John
2014-01-01
This paper describes the adoption of a Test Driven Development approach and a Continuous Integration System in the development of the Trick Simulation Toolkit, a generic simulation development environment for creating high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. It describes the approach, and the significant benefits seen, such as fast, thorough and clear test feedback every time code is checked into the code repository. It also describes an approach that encourages development of code that is testable and adaptable.
Creep of A508/533 Pressure Vessel Steel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard Wright
2014-08-01
Evaluation of potential Reactor Pressure Vessel (RPV) steels has been carried out as part of the pre-conceptual Very High Temperature Reactor (VHTR) design studies. These design studies have generally focused on American Society of Mechanical Engineers (ASME) Code status of the steels, temperature limits, and allowable stresses. Initially, three candidate materials were identified by this process: conventional light water reactor (LWR) RPV steels A508 and A533, 2¼Cr-1Mo in the annealed condition, and Grade 91 steel. The low strength of 2¼Cr-1Mo at elevated temperature has eliminated this steel from serious consideration as the VHTR RPV candidate material. Discussions with the very few vendors that can potentially produce large forgings for nuclear pressure vessels indicate a strong preference for conventional LWR steels. This preference is based in part on extensive experience with forging these steels for nuclear components. It is also based on the inability to cast large ingots of the Grade 91 steel due to segregation during ingot solidification, thus restricting the possible mass of forging components and increasing the amount of welding required for completion of the RPV. Grade 91 steel is also prone to weld cracking and must be post-weld heat treated to ensure adequate high-temperature strength. There are also questions about the ability to produce, and very importantly, verify the through thickness properties of thick sections of Grade 91 material. The availability of large components, ease of fabrication, and nuclear service experience with the A508 and A533 steels strongly favor their use in the RPV for the VHTR. Lowering the gas outlet temperature for the VHTR to 750°C from 950 to 1000°C, proposed in early concept studies, further strengthens the justification for this material selection. This steel is allowed in the ASME Boiler and Pressure Vessel Code for nuclear service up to 371°C (700°F); certain excursions above that temperature are allowed by Code Case N-499-2 (now incorporated as an appendix to Section III Division 5 of the Code). This Code Case was developed with a rather sparse data set and focused primarily on rolled plate material (A533 specification). Confirmatory tests of creep behavior of both A508 and A533 are described here that are designed to extend the database in order to build higher confidence in ensuring the structural integrity of the VHTR RPV during off-normal conditions. A number of creep-rupture tests were carried out at temperatures above the 371°C (700°F) Code limit; longer term tests designed to evaluate minimum creep behavior are ongoing. A limited amount of rupture testing was also carried out on welded material. All of the rupture data from the current experiments is compared to historical values from the testing carried out to develop Code Case N-499-2. It is shown that the A508/533 basemetal tested here fits well with the rupture behavior reported from the historical testing. The presence of weldments significantly reduces the time to rupture. The primary purpose of this report is to summarize and record the experimental results in a single document.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, R.N.
This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: the report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
Finch, Caroline F; Orchard, John W; Twomey, Dara M; Saad Saleem, Muhammad; Ekegren, Christina L; Lloyd, David G; Elliott, Bruce C
2014-01-01
Objective To compare Orchard Sports Injury Classification System (OSICS-10) sports medicine diagnoses assigned by a clinical and non-clinical coder. Design Assessment of intercoder agreement. Setting Community Australian football. Participants 1082 standardised injury surveillance records. Main outcome measurements Direct comparison of the four-character hierarchical OSICS-10 codes assigned by two independent coders (a sports physician and an epidemiologist). Adjudication by a third coder (biomechanist). Results The coders agreed on the first character 95% of the time and on the first two characters 86% of the time. They assigned the same four-digit OSICS-10 code for only 46% of the 1082 injuries. The majority of disagreements occurred for the third character; 85% were because one coder assigned a non-specific ‘X’ code. The sports physician code was deemed correct in 53% of cases and the epidemiologist in 44%. Reasons for disagreement included the physician not using all of the collected information and the epidemiologist lacking specific anatomical knowledge. Conclusions Sports injury research requires accurate identification and classification of specific injuries and this study found an overall high level of agreement in coding according to OSICS-10. The fact that the majority of the disagreements occurred for the third OSICS character highlights the fact that increasing complexity and diagnostic specificity in injury coding can result in a loss of reliability and demands a high level of anatomical knowledge. Injury report form details need to reflect this level of complexity and data management teams need to include a broad range of expertise. PMID:22919021
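The prefix-level agreement figures quoted above (95% on the first character, 86% on the first two, 46% on all four) can be computed with a few lines; the codes in the example below are invented placeholders, not study data.

```python
def agreement_by_prefix(codes_a, codes_b, n_chars):
    """Fraction of records where two coders agree on the first n characters
    of the hierarchical OSICS-10 code."""
    pairs = list(zip(codes_a, codes_b))
    return sum(a[:n_chars] == b[:n_chars] for a, b in pairs) / len(pairs)

physician      = ["KJXX", "TMMX", "AXXX", "QMSX"]   # invented example codes
epidemiologist = ["KJXX", "TMXX", "AJXX", "QMSX"]

for n in (1, 2, 4):
    print(f"agreement on first {n} character(s): "
          f"{agreement_by_prefix(physician, epidemiologist, n):.0%}")
```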
NASA Astrophysics Data System (ADS)
Leclaire, N.; Cochet, B.; Le Dauphin, F. X.; Haeck, W.; Jacquet, O.
2014-06-01
The present paper aims at providing experimental validation for the use of the MORET 5 code for advanced reactor concepts involving thorium and heavy water. It therefore constitutes an opportunity to test and improve the thermal-scattering data of heavy water and also to test the recent implementation of probability tables in the MORET 5 code.
Evaluation of deflectometry for E-ELT optics.
NASA Astrophysics Data System (ADS)
Sironi, G.; Canestrari, R.; Civitani, M. M.
A deflectometry facility was developed at the Italian National Institute for Astrophysics (INAF-OAB) in the context of the ASTRI project to characterize free-form segments for Cherenkov optics. The test works as an inverse Ronchi test in combination with a ray-tracing code: the under-test surface is illuminated by a known light pattern and the pattern warped by local surface errors is observed. Knowing the geometry of the system, it is possible to retrieve the surface normal vectors. This contribution presents the analysis of the upgrades and of the configuration modifications required to allow the use of deflectometry in the realization of optical components suitable for the European Extremely Large Telescope and, as a specific case, to support the manufacturing of the Multi-conjugate Adaptive Optics Relay (MAORY) module.
NASA Technical Reports Server (NTRS)
Farassat, F.; Dunn, M. H.; Padula, S. L.
1986-01-01
The development of a high speed propeller noise prediction code at Langley Research Center is described. The code utilizes two recent acoustic formulations in the time domain for subsonic and supersonic sources. The structure and capabilities of the code are discussed. A grid size study for accuracy and speed of execution on a computer is also presented. The code is tested against an earlier Langley code. Considerable increases in accuracy and speed of execution are observed. Some examples of noise prediction of a high speed propeller for which acoustic test data are available are given. A brief derivation of the formulations used is given in an appendix.
Simulation to Flight Test for a UAV Controls Testbed
NASA Technical Reports Server (NTRS)
Motter, Mark A.; Logan, Michael J.; French, Michael L.; Guerreiro, Nelson M.
2006-01-01
The NASA Flying Controls Testbed (FLiC) is a relatively small and inexpensive unmanned aerial vehicle developed specifically to test highly experimental flight control approaches. The most recent version of the FLiC is configured with 16 independent aileron segments, supports the implementation of C-coded experimental controllers, and is capable of fully autonomous flight from takeoff roll to landing, including flight test maneuvers. The test vehicle is basically a modified Army target drone, AN/FQM-117B, developed as part of a collaboration between the Aviation Applied Technology Directorate (AATD) at Fort Eustis, Virginia and NASA Langley Research Center. Several vehicles have been constructed and collectively have flown over 600 successful test flights, including a fully autonomous demonstration at the Association of Unmanned Vehicle Systems International (AUVSI) UAV Demo 2005. Simulations based on wind tunnel data are being used to further develop advanced controllers for implementation and flight test.
MsSpec-1.0: A multiple scattering package for electron spectroscopies in material science
NASA Astrophysics Data System (ADS)
Sébilleau, Didier; Natoli, Calogero; Gavaza, George M.; Zhao, Haifeng; Da Pieve, Fabiana; Hatada, Keisuke
2011-12-01
We present a multiple scattering package to calculate the cross-section of various spectroscopies namely photoelectron diffraction (PED), Auger electron diffraction (AED), X-ray absorption (XAS), low-energy electron diffraction (LEED) and Auger photoelectron coincidence spectroscopy (APECS). This package is composed of three main codes, computing respectively the cluster, the potential and the cross-section. In the latter case, in order to cover a range of energies as wide as possible, three different algorithms are provided to perform the multiple scattering calculation: full matrix inversion, series expansion or correlation expansion of the multiple scattering matrix. Numerous other small Fortran codes or bash/csh shell scripts are also provided to perform specific tasks. The cross-section code is built by the user from a library of subroutines using a makefile. Program summaryProgram title: MsSpec-1.0 Catalogue identifier: AEJT_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEJT_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 504 438 No. of bytes in distributed program, including test data, etc.: 14 448 180 Distribution format: tar.gz Programming language: Fortran 77 Computer: Any Operating system: Linux, MacOs RAM: Bytes Classification: 7.2 External routines: Lapack ( http://www.netlib.org/lapack/) Nature of problem: Calculation of the cross-section of various spectroscopies. Solution method: Multiple scattering. Running time: The test runs provided only take a few seconds to run.
Effects of visual and verbal interference tasks on olfactory memory: the role of task complexity.
Annett, J M; Leslie, J C
1996-08-01
Recent studies have demonstrated that visual and verbal suppression tasks interfere with olfactory memory in a manner which is partially consistent with a dual coding interpretation. However, it has been suggested that total task complexity rather than modality specificity of the suppression tasks might account for the observed pattern of results. This study addressed the issue of whether or not the level of difficulty and complexity of suppression tasks could explain the apparent modality effects noted in earlier experiments. A total of 608 participants were each allocated to one of 19 experimental conditions involving interference tasks which varied suppression type (visual or verbal), nature of complexity (single, double or mixed) and level of difficulty (easy, optimal or difficult) and presented with 13 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Both recognition and recall performance showed an overall effect for suppression nature, suppression level and time of testing with no effect for suppression type. The results lend only limited support to Paivio's (1986) dual coding theory, but have a number of characteristics which suggest that an adequate account of olfactory memory may be broadly similar to current theories of face and object recognition. All of these phenomena might be dealt with by an appropriately modified version of dual coding theory.
Proceedings of the Workshop on software tools for distributed intelligent control systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herget, C.J.
1990-09-01
The Workshop on Software Tools for Distributed Intelligent Control Systems was organized by Lawrence Livermore National Laboratory for the United States Army Headquarters Training and Doctrine Command and the Defense Advanced Research Projects Agency. The goals of the workshop were to identify the current state of the art in tools which support control systems engineering design and implementation, identify research issues associated with writing software tools which would provide a design environment to assist engineers in multidisciplinary control design and implementation, formulate a potential investment strategy to resolve the research issues and develop public domain code which can form the core of more powerful engineering design tools, and recommend test cases to focus the software development process and test associated performance metrics. Recognizing that the development of software tools for distributed intelligent control systems will require a multidisciplinary effort, experts in systems engineering, control systems engineering, and computer science were invited to participate in the workshop. In particular, experts who could address the following topics were selected: operating systems, engineering data representation and manipulation, emerging standards for manufacturing data, mathematical foundations, coupling of symbolic and numerical computation, user interface, system identification, system representation at different levels of abstraction, system specification, system design, verification and validation, automatic code generation, and integration of modular, reusable code.
Qiu, Xianbo; Song, Liuwei; Yang, Shuo; Guo, Meng; Yuan, Quan; Ge, Shengxiang; Min, Xiaoping; Xia, Ningshao
2016-01-01
A fast and low-cost method for HBV genotyping, especially for genotypes A, B, C and D, was developed and tested. A classifier was used to detect and analyze, by pattern recognition, a one-step immunoassay lateral flow strip functionalized with genotype-specific monoclonal antibodies (mAbs) on multiple capture lines, for point-of-care (POC) diagnostics. The fluorescent signals from the capture lines and the background of the strip were collected via multiple optical channels in parallel. A digital HBV genotyping model, whose inputs are the fluorescent signals and whose outputs are a group of genotype-specific digital binary codes (0/1), was developed based on the HBV genotyping strategy. A companion decoding table was established to cover all possible pairings between the states of the genotype-specific digital binary codes and the HBV genotyping results. A logical analyzing module was constructed to process the detected signals in parallel without program control, and its outputs were used to drive a set of LED indicators that indicate the HBV genotype. Compared to nucleic acid analysis of HBV, the developed method achieves much faster HBV genotyping at significantly lower cost. PMID:27306485
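To make the decoding step above concrete, here is a minimal and purely hypothetical Python sketch: fluorescent signals from genotype-specific capture lines are thresholded into a binary code and looked up in a decoding table. The threshold, the line order and the table entries are placeholders, not the published assay parameters.

# Hypothetical sketch of the decoding step: signals are thresholded against the
# strip background to form a binary code, which indexes a decoding table.
THRESHOLD = 0.2   # assumed signal-to-background cut-off (illustrative)

DECODING_TABLE = {           # assumed mapping from line pattern to genotype
    (1, 0, 0, 0): "A",
    (0, 1, 0, 0): "B",
    (0, 0, 1, 0): "C",
    (0, 0, 0, 1): "D",
}

def genotype(signals, background):
    code = tuple(int(s / background > 1 + THRESHOLD) for s in signals)
    return DECODING_TABLE.get(code, "indeterminate")

print(genotype([0.9, 0.05, 0.04, 0.03], background=0.05))   # -> "A"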
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
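The following toy Python sketch illustrates, by analogy only, the kind of annotation the abstract refers to: a loop invariant and a postcondition that a verification condition generator would translate into proof obligations. Real certification systems, including AUTOBAYES, express such annotations in a formal logic rather than as runtime assertions; the function and its invariant are invented for illustration.

# Illustrative only: loop-invariant and postcondition annotations, written here
# as runtime assertions rather than formal logic.
def mean(xs):
    total, count = 0.0, 0
    for x in xs:
        # invariant: total equals the sum of the first `count` elements seen so far
        assert count == 0 or abs(total - sum(xs[:count])) < 1e-12
        total += x
        count += 1
    assert count == len(xs)          # postcondition obligation
    return total / count if count else 0.0

print(mean([1.0, 2.0, 3.0]))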
Lee, Theresa M; Tu, Karen; Wing, Laura L; Gershon, Andrea S
2017-05-15
Little is known about using electronic medical records to identify patients with chronic obstructive pulmonary disease to improve quality of care. Our objective was to develop electronic medical record algorithms that can accurately identify patients with chronic obstructive pulmonary disease. A retrospective chart abstraction study was conducted on data from the Electronic Medical Record Administrative data Linked Database (EMRALD®) housed at the Institute for Clinical Evaluative Sciences. Abstracted charts provided the reference standard based on available physician diagnoses, chronic obstructive pulmonary disease-specific medications, smoking history and pulmonary function testing. Chronic obstructive pulmonary disease electronic medical record algorithms using combinations of terminology in the cumulative patient profile (CPP; problem list/past medical history), physician billing codes (chronic bronchitis/emphysema/other chronic obstructive pulmonary disease), and prescriptions, were tested against the reference standard. Sensitivity, specificity, and positive/negative predictive values (PPV/NPV) were calculated. There were 364 patients with chronic obstructive pulmonary disease identified in a 5889 randomly sampled cohort aged ≥ 35 years (prevalence = 6.2%). The electronic medical record algorithm consisting of ≥ 3 physician billing codes for chronic obstructive pulmonary disease per year; documentation in the CPP; tiotropium prescription; or ipratropium (or its formulations) prescription and a chronic obstructive pulmonary disease billing code had sensitivity of 76.9% (95% CI: 72.2-81.2), specificity of 99.7% (99.5-99.8), PPV of 93.6% (90.3-96.1), and NPV of 98.5% (98.1-98.8). Electronic medical record algorithms can accurately identify patients with chronic obstructive pulmonary disease in primary care records. They can be used to enable further studies of practice patterns and chronic obstructive pulmonary disease management in primary care. NOVEL ALGORITHM SEARCH TECHNIQUE: Researchers develop an algorithm that can accurately search through electronic health records to find patients with chronic lung disease. Mining population-wide data for information on patients diagnosed and treated with chronic obstructive pulmonary disease (COPD) in primary care could help inform future healthcare and spending practices. Theresa Lee at the University of Toronto, Canada, and colleagues used an algorithm to search electronic medical records and identify patients with COPD from doctors' notes, prescriptions and symptom histories. They carefully adjusted the algorithm to improve sensitivity and predictive value by adding details such as specific medications, physician codes related to COPD, and different combinations of terminology in doctors' notes. The team accurately identified 364 patients with COPD in a randomly selected cohort of 5889 people. Their results suggest opportunities for broader, informative studies of COPD in wider populations.
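For illustration, the best-performing rule reported above can be written as a small boolean function. The field names are hypothetical and the real EMRALD extraction logic is not reproduced here.

# Sketch of the reported rule, with hypothetical field names:
# >=3 COPD billing codes per year, OR COPD documented in the CPP,
# OR a tiotropium prescription, OR an ipratropium prescription plus a COPD billing code.
def flag_copd(copd_billing_codes_per_year, copd_in_cpp,
              has_tiotropium_rx, has_ipratropium_rx, any_copd_billing_code):
    return (
        copd_billing_codes_per_year >= 3
        or copd_in_cpp
        or has_tiotropium_rx
        or (has_ipratropium_rx and any_copd_billing_code)
    )

# example: a patient with a CPP mention but no qualifying prescriptions
print(flag_copd(1, True, False, False, True))   # -> True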
An Experiment in Scientific Program Understanding
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Owen, Karl (Technical Monitor)
2000-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
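A toy sketch of the semantic-declaration idea, under the assumption that dimensional consistency is one of the checks such expert parsers could perform: primitive variables are tagged with physical dimensions and a formula is checked against them. The declarations and the example formula are illustrative only, not taken from the codes studied in the paper.

# Toy sketch: semantic declarations as physical dimensions, plus a tiny checker.
from collections import Counter

DECLS = {                       # dimensions as exponents of (kg, m, s)
    "rho": Counter({"kg": 1, "m": -3}),
    "u":   Counter({"m": 1, "s": -1}),
    "p":   Counter({"kg": 1, "m": -1, "s": -2}),
}

def dims_of_product(*names):
    total = Counter()
    for name in names:
        total.update(DECLS[name])
    return {k: v for k, v in total.items() if v}

# dynamic pressure rho * u * u should carry the same dimensions as p
lhs = dims_of_product("rho", "u", "u")
rhs = {k: v for k, v in DECLS["p"].items() if v}
print("dimensionally consistent:", lhs == rhs)   # -> True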
Lack of harmonization in sweat testing for cystic fibrosis - a national survey.
Christiansen, Anne Lindegaard; Nybo, Mads
2014-11-01
Sweat testing is used in the diagnosis of cystic fibrosis. Interpretation of the sweat test depends, however, on the method performed, since conductivity, osmolality and chloride concentration can all be measured as part of a sweat test. The aim of this study was to investigate how performance of the test is organized in Denmark. Departments conducting the sweat test were contacted and interviewed following a premade questionnaire. They were asked about methods performed, applied NPU (Nomenclature for Properties and Units) code, reference interval, recommended interpretation and referenced literature. Fourteen departments performed the sweat test. One department measured chloride and sodium concentration, while 13 departments measured conductivity. One department used a non-existent NPU code, two departments applied NPU codes inconsistent with the method performed, four departments applied no NPU code and seven applied a correct NPU code. Ten of the departments measuring conductivity applied reference intervals. Nine departments measuring conductivity had recommendations of a normal area, a grey zone and a pathological value, while four departments only applied a normal and grey zone or a pathological value. Cut-off values for the normal, grey and pathological areas were, like the reference intervals, inconsistent. There is inconsistent use of NPU codes, reference intervals and interpretation of sweat conductivity in the process of diagnosing cystic fibrosis. Because diagnosing cystic fibrosis is a combined effort between local pediatric departments, biochemical and genetic departments and cystic fibrosis centers, a national harmonization is necessary to ensure correct clinical use.
SolTrace Background | Concentrating Solar Power | NREL
codes was written to model a very specific optical geometry, and each one built upon the others in an evolutionary way. Examples of such codes include: OPTDSH, a code written to model circular aperture parabolic
The radiation fields around a proton therapy facility: A comparison of Monte Carlo simulations
NASA Astrophysics Data System (ADS)
Ottaviano, G.; Picardi, L.; Pillon, M.; Ronsivalle, C.; Sandri, S.
2014-02-01
A proton therapy test facility with a beam current lower than 10 nA on average, and an energy up to 150 MeV, is planned to be sited at the Frascati ENEA Research Center, in Italy. The accelerator is composed of a sequence of linear sections. The first one is a commercial 7 MeV proton linac, from which the beam is injected into an SCDTL (Side Coupled Drift Tube Linac) structure reaching the energy of 52 MeV. Then a conventional CCL (Coupled Cavity Linac) with side coupling cavities completes the accelerator. The linear structure has the important advantage that the main radiation losses during the acceleration process occur for protons with energy below 20 MeV, with a consequent low production of neutrons and secondary radiation. From the radiation protection point of view, the source of radiation for this facility is then almost completely located at the final target. Physical and geometrical models of the device have been developed and implemented into radiation transport computer codes based on the Monte Carlo method. The aim is to assess the radiation field around the main source in support of the safety analysis. For the assessment, independent researchers used two different Monte Carlo computer codes, FLUKA (FLUktuierende KAskade) and MCNPX (Monte Carlo N-Particle eXtended). Both are general purpose tools for calculations of particle transport and interactions with matter, covering an extended range of applications including proton beam analysis. Nevertheless, each utilizes its own nuclear cross-section libraries and specific physics models for particle types and energies. The models implemented into the codes are described and the results are presented. The differences between the two calculations are reported and discussed, pointing out the disadvantages and advantages of each code in this specific application.
Dynamic and Widespread lncRNA Expression in a Sponge and the Origin of Animal Complexity
Gaiti, Federico; Fernandez-Valverde, Selene L.; Nakanishi, Nagayasu; Calcino, Andrew D.; Yanai, Itai; Tanurdzic, Milos; Degnan, Bernard M.
2015-01-01
Long noncoding RNAs (lncRNAs) are important developmental regulators in bilaterian animals. A correlation has been claimed between the lncRNA repertoire expansion and morphological complexity in vertebrate evolution. However, this claim has not been tested by examining morphologically simple animals. Here, we undertake a systematic investigation of lncRNAs in the demosponge Amphimedon queenslandica, a morphologically simple, early-branching metazoan. We combine RNA-Seq data across multiple developmental stages of Amphimedon with a filtering pipeline to conservatively predict 2,935 lncRNAs. These include intronic overlapping lncRNAs, exonic antisense overlapping lncRNAs, long intergenic nonprotein coding RNAs, and precursors for small RNAs. Sponge lncRNAs are remarkably similar to their bilaterian counterparts in being relatively short with few exons and having low primary sequence conservation relative to protein-coding genes. As in bilaterians, a majority of sponge lncRNAs exhibit typical hallmarks of regulatory molecules, including high temporal specificity and dynamic developmental expression. Specific lncRNA expression profiles correlate tightly with conserved protein-coding genes likely involved in a range of developmental and physiological processes, such as the Wnt signaling pathway. Although the majority of Amphimedon lncRNAs appears to be taxonomically restricted with no identifiable orthologs, we find a few cases of conservation between demosponges in lncRNAs that are antisense to coding sequences. Based on the high similarity in the structure, organization, and dynamic expression of sponge lncRNAs to their bilaterian counterparts, we propose that these noncoding RNAs are an ancient feature of the metazoan genome. These results are consistent with lncRNAs regulating the development of animals, regardless of their level of morphological complexity. PMID:25976353
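As a loose illustration of what a conservative lncRNA filtering pipeline does, the sketch below keeps transcripts that are long enough to qualify as lncRNAs yet lack protein-coding signatures. The thresholds are common conventions used here as placeholders, not the criteria applied to the Amphimedon data set.

# Illustrative filter only; thresholds are generic placeholders.
def looks_like_lncrna(transcript_length_nt, longest_orf_aa, has_protein_hit):
    return (
        transcript_length_nt >= 200      # conventional minimum length for lncRNAs
        and longest_orf_aa < 100         # no substantial open reading frame
        and not has_protein_hit          # no similarity to known proteins
    )

print(looks_like_lncrna(850, 42, False))   # -> True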
Identifying personal microbiomes using metagenomic codes
Franzosa, Eric A.; Huang, Katherine; Meadow, James F.; Gevers, Dirk; Lemon, Katherine P.; Bohannan, Brendan J. M.; Huttenhower, Curtis
2015-01-01
Community composition within the human microbiome varies across individuals, but it remains unknown if this variation is sufficient to uniquely identify individuals within large populations or stable enough to identify them over time. We investigated this by developing a hitting set-based coding algorithm and applying it to the Human Microbiome Project population. Our approach defined body site-specific metagenomic codes: sets of microbial taxa or genes prioritized to uniquely and stably identify individuals. Codes capturing strain variation in clade-specific marker genes were able to distinguish among 100s of individuals at an initial sampling time point. In comparisons with follow-up samples collected 30–300 d later, ∼30% of individuals could still be uniquely pinpointed using metagenomic codes from a typical body site; coincidental (false positive) matches were rare. Codes based on the gut microbiome were exceptionally stable and pinpointed >80% of individuals. The failure of a code to match its owner at a later time point was largely explained by the loss of specific microbial strains (at current limits of detection) and was only weakly associated with the length of the sampling interval. In addition to highlighting patterns of temporal variation in the ecology of the human microbiome, this work demonstrates the feasibility of microbiome-based identifiability—a result with important ethical implications for microbiome study design. The datasets and code used in this work are available for download from huttenhower.sph.harvard.edu/idability. PMID:25964341
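A toy greedy sketch of the hitting-set idea behind these metagenomic codes: choose features present in the target individual such that every other individual lacks at least one of them. This is an illustration only, not the published implementation (which is available at the URL above).

# Greedy sketch: build a small code distinguishing one target from all others.
def greedy_code(target_features, other_individuals):
    unresolved = list(other_individuals)      # individuals not yet excluded
    code = set()
    while unresolved:
        # pick the target feature missing from the most unresolved individuals
        best = max(target_features,
                   key=lambda f: sum(f not in o for o in unresolved))
        if all(best in o for o in unresolved):
            raise ValueError("target cannot be distinguished")
        code.add(best)
        unresolved = [o for o in unresolved if best in o]
    return code

target = {"taxonA", "geneX", "strain7"}
others = [{"taxonA", "geneX"}, {"geneX", "strain7"}, {"taxonA"}]
print(greedy_code(target, others))   # e.g. {"strain7", "taxonA"}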
NASA Astrophysics Data System (ADS)
Loevenbruck, Anne; Arpaia, Luca; Ata, Riadh; Gailler, Audrey; Hayashi, Yutaka; Hébert, Hélène; Heinrich, Philippe; Le Gal, Marine; Lemoine, Anne; Le Roy, Sylvestre; Marcer, Richard; Pedreros, Rodrigo; Pons, Kevin; Ricchiuto, Mario; Violeau, Damien
2017-04-01
This study is part of the joint actions carried out within TANDEM (Tsunamis in northern AtlaNtic: Definition of Effects by Modeling). This French project, mainly dedicated to the appraisal of coastal effects due to tsunami waves on the French coastlines, was initiated after the catastrophic 2011 Tohoku-Oki tsunami. This event, which tragically struck Japan, drew attention to the importance of tsunami risk assessment, in particular when nuclear facilities are involved. As a contribution to this challenging task, the TANDEM partners intend to provide guidance for the French Atlantic area based on numerical simulation. One of the identified objectives consists of designing, adapting and validating simulation codes for tsunami hazard assessment. Besides an integral benchmarking work package, the outstanding database of the 2011 event offers the TANDEM partners the opportunity to test their numerical tools with a real case. As a prerequisite, among the numerous published seismic source models derived from the inversion of the various available records, a couple of coseismic slip distributions have been selected to provide common initial input parameters for the tsunami computations. After possible adaptations or specific developments, the different codes are employed to simulate the Tohoku-Oki tsunami from its source to the northeast Japanese coastline. The results are tested against the numerous tsunami measurements and, when relevant, comparisons of the different codes are carried out. First, the results related to the oceanic propagation phase are compared with the offshore records. Then, the modeled coastal impacts are tested against the onshore data. Flooding at a regional scale is considered, but high resolution simulations are also performed with some of the codes. They allow a detailed examination of the runup amplitudes and timing, as well as the complexity of the tsunami interaction with the coastal structures. The work is supported by the TANDEM project within the framework of the French PIA grant ANR-11-RSNR-00023.
ASME AG-1 Section FC Qualified HEPA Filters; a Particle Loading Comparison - 13435
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stillo, Andrew; Ricketts, Craig I.
High Efficiency Particulate Air (HEPA) Filters used to protect personnel, the public and the environment from airborne radioactive materials are designed, manufactured and qualified in accordance with ASME AG-1 Code section FC (HEPA Filters) [1]. The qualification process requires that filters manufactured in accordance with this ASME AG-1 code section must meet several performance requirements. These requirements include performance specifications for resistance to airflow, aerosol penetration, resistance to rough handling, resistance to pressure (includes high humidity and water droplet exposure), resistance to heated air, spot flame resistance and a visual/dimensional inspection. None of these requirements evaluates the particle loading capacity of a HEPA filter design. Concerns over the particle loading capacity of the different designs included within the ASME AG-1 section FC code [1] have been voiced in the recent past. Additionally, the ability of a filter to maintain its integrity if subjected to severe operating conditions (such as elevated relative humidity, fog conditions or elevated temperature) after loading in use over long service intervals is also a major concern. Although currently qualified HEPA filter media are likely to have similar loading characteristics when evaluated independently, filter pleat geometry can have a significant impact on the in-situ particle loading capacity of filter packs. Aerosol particle characteristics, such as size and composition, may also have a significant impact on filter loading capacity. Test results comparing filter loading capacities for three different aerosol particles and three different filter pack configurations are reviewed. The information presented represents an empirical performance comparison among the filter designs tested. The results may serve as a basis for further discussion toward the possible development of a particle loading test to be included in the qualification requirements of ASME AG-1 Code sections FC and FK [1]. (authors)
A Fast Healthcare Interoperability Resources (FHIR) layer implemented over i2b2.
Boussadi, Abdelali; Zapletal, Eric
2017-08-14
Standards and technical specifications have been developed to define how the information contained in Electronic Health Records (EHRs) should be structured, semantically described, and communicated. Current trends rely on differentiating the representation of data instances from the definition of clinical information models. The dual model approach, which combines a reference model (RM) and a clinical information model (CIM), puts this software design pattern into practice. The most recent initiative, proposed by HL7, is called Fast Healthcare Interoperability Resources (FHIR). The aim of our study was to investigate the feasibility of applying the FHIR standard to modeling and exposing EHR data of the Georges Pompidou European Hospital (HEGP) Informatics for Integrating Biology and the Bedside (i2b2) clinical data warehouse (CDW). We implemented a FHIR server over i2b2 to expose EHR data in relation to five FHIR resources: DiagnosticReport, MedicationOrder, Patient, Encounter, and Medication. The architecture of the server combines a Data Access Object design pattern and FHIR resource providers, implemented using the Java HAPI FHIR API. Two types of queries were tested: Query type #1 requests the server to display DiagnosticReport resources for which the diagnosis code is equal to a given ICD-10 code. A total of 80 DiagnosticReport resources, corresponding to 36 patients, were displayed. Query type #2 requests the server to display MedicationOrder resources for which the FHIR Medication identification code is equal to a given code expressed in a French coding system. A total of 503 MedicationOrder resources, corresponding to 290 patients, were displayed. Results were validated by manually comparing the results of each request to the results displayed by an ad-hoc SQL query. We showed the feasibility of implementing a Java layer over the i2b2 database model to expose data of the CDW as a set of FHIR resources. An important part of this work was the structural and semantic mapping between the i2b2 model and the FHIR RM. To accomplish this, developers must manually browse the specifications of the FHIR standard. Our source code is freely available and can be adapted for use in other i2b2 sites.
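To illustrate the kind of query described above from the client side, here is a minimal Python sketch against a hypothetical FHIR endpoint. The base URL is invented, and the search parameter and ICD-10 system URI may differ between FHIR versions and server configurations; the server-side HAPI FHIR resource providers themselves are not reproduced.

# Sketch only: client-side FHIR search for DiagnosticReport resources by ICD-10 code.
import requests

BASE = "http://example.org/fhir"                         # hypothetical endpoint
params = {
    "code": "http://hl7.org/fhir/sid/icd-10|I10",        # token search: system|code
    "_format": "json",
}
bundle = requests.get(f"{BASE}/DiagnosticReport", params=params, timeout=30).json()
print(bundle.get("total"), "matching DiagnosticReport resources")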
NASA Technical Reports Server (NTRS)
Jenkins, R. M.
1983-01-01
The present effort represents an extension of previous work wherein a calculation model for performing rapid pitchline optimization of axial gas turbine geometry, including blade profiles, is developed. The model requires no specification of geometric constraints. Output includes aerodynamic performance (adiabatic efficiency), hub-tip flow-path geometry, blade chords, and estimates of blade shape. Presented herein is a verification of the aerodynamic performance portion of the model, whereby detailed turbine test-rig data, including rig geometry, is input to the model to determine whether tested performance can be predicted. An array of seven (7) NASA single-stage axial gas turbine configurations is investigated, ranging in size from 0.6 kg/s to 63.8 kg/s mass flow and in specific work output from 153 J/g to 558 J/g at design (hot) conditions; stage loading factor ranges from 1.15 to 4.66.
Odour intensity learning in fruit flies
Yarali, Ayse; Ehser, Sabrina; Hapil, Fatma Zehra; Huang, Ju; Gerber, Bertram
2009-01-01
Animals' behaviour towards odours depends on both odour quality and odour intensity. While neuronal coding of odour quality is fairly well studied, how odour intensity is treated by olfactory systems is less clear. Here we study odour intensity processing at the behavioural level, using the fruit fly Drosophila melanogaster. We trained flies by pairing a MEDIUM intensity of an odour with electric shock, and then, at a following test phase, measured flies' conditioned avoidance of either this previously trained MEDIUM intensity or a LOWer or a HIGHer intensity. With respect to 3-octanol, n-amylacetate and 4-methylcyclohexanol, we found that conditioned avoidance is strongest when training and test intensities match, speaking for intensity-specific memories. With respect to a fourth odour, benzaldehyde, on the other hand, we found no such intensity specificity. These results form the basis for further studies of odour intensity processing at the behavioural, neuronal and molecular level. PMID:19586944
40 CFR 62.15265 - How do I monitor the load of my municipal waste combustion unit?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Mechanical Engineers (ASME PTC 4.1—1964): Test Code for Steam Generating Units, Power Test Code 4.1-1964... of Mechanical Engineers, Service Center, 22 Law Drive, Post Office Box 2900, Fairfield, NJ 07007. You....archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. (4) Design, construct...
40 CFR 62.15265 - How do I monitor the load of my municipal waste combustion unit?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Mechanical Engineers (ASME PTC 4.1—1964): Test Code for Steam Generating Units, Power Test Code 4.1-1964... of Mechanical Engineers, Service Center, 22 Law Drive, Post Office Box 2900, Fairfield, NJ 07007. You....archives.gov/federal_register/code_of_federal_regulations/ibr_locations.html. (4) Design, construct...