Audit of Clinical Coding of Major Head and Neck Operations
Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean
2009-01-01
INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correlate to a payment bracket. The aim of this study was to determine whether allocated procedure codes for major head and neck operations were correct and reflective of the work undertaken. HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period was assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised codes generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals for the work carried out within the NHS in a fair and consistent manner. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration. PMID:19220944
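The audit above compares initially assigned procedure codes against revised codes case by case, flagging cases whose code set changed (and would therefore need HRG re-grouping). A minimal sketch of that comparison step; the case IDs and OPCS-style codes below are invented placeholders, not data from the study:

```python
# Sketch of the audit comparison: for each case, compare the procedure codes
# assigned by the coding department with the revised codes, and flag cases
# whose code set changed. All codes here are illustrative placeholders.

def audit_cases(initial, revised):
    """Return per-case flags: True if the revised code set differs."""
    changed = {}
    for case_id in revised:
        before = set(initial.get(case_id, []))
        after = set(revised[case_id])
        changed[case_id] = before != after
    return changed

initial_codes = {"case1": ["E29.1"], "case2": ["F34.2", "Y60.1"], "case3": []}
revised_codes = {"case1": ["E29.1"], "case2": ["F34.2"], "case3": ["E19.1"]}

flags = audit_cases(initial_codes, revised_codes)
print(flags)  # case3 was never coded initially, so it is flagged as changed
```

Cases flagged `True` are the ones where the payment banding could have altered; "case3" mirrors the study's finding that some cases were never coded at all.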
Liu, Charles; Kayima, Peter; Riesel, Johanna; Situma, Martin; Chang, David; Firth, Paul
2017-11-01
The lack of a classification system for surgical procedures in resource-limited settings hinders outcomes measurement and reporting. Existing procedure coding systems are prohibitively large and expensive to implement. We describe the creation and prospective validation of 3 brief procedure code lists applicable in low-resource settings, based on analysis of surgical procedures performed at Mbarara Regional Referral Hospital, Uganda's second largest public hospital. We reviewed operating room logbooks to identify all surgical operations performed at Mbarara Regional Referral Hospital during 2014. Based on the documented indication for surgery and procedure(s) performed, we assigned each operation up to 4 procedure codes from the International Classification of Diseases, 9th Revision, Clinical Modification. Coding of procedures was performed by 2 investigators, and a random 20% of procedures were coded by both investigators. These codes were aggregated to generate procedure code lists. During 2014, 6,464 surgical procedures were performed at Mbarara Regional Referral Hospital, to which we assigned 435 unique procedure codes. Substantial inter-rater reliability was achieved (κ = 0.7037). The 111 most common procedure codes accounted for 90% of all codes assigned, 180 accounted for 95%, and 278 accounted for 98%. We considered these sets of codes as 3 procedure code lists. In a prospective validation, we found that these lists described 83.2%, 89.2%, and 92.6% of surgical procedures performed at Mbarara Regional Referral Hospital during August to September of 2015, respectively. Empirically generated brief procedure code lists based on International Classification of Diseases, 9th Revision, Clinical Modification can be used to classify almost all surgical procedures performed at a Ugandan referral hospital. 
Such a standardized procedure coding system may enable better surgical data collection for administration, research, and quality improvement in resource-limited settings. Copyright © 2017 Elsevier Inc. All rights reserved.
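The 90%, 95%, and 98% lists above are frequency-ranked prefixes of the full code distribution. A sketch of how such a brief code list can be derived from raw code assignments; the codes and counts below are invented for illustration:

```python
from collections import Counter

# Count how often each procedure code was assigned, then take the shortest
# prefix of codes (most frequent first) whose cumulative share of all
# assignments reaches a chosen coverage target.

def brief_list(code_assignments, target):
    counts = Counter(code_assignments)
    total = sum(counts.values())
    covered, selected = 0, []
    for code, n in counts.most_common():
        selected.append(code)
        covered += n
        if covered / total >= target:
            break
    return selected

# toy distribution: 100 assignments across four ICD-9-CM-style codes
codes = ["74.1"] * 50 + ["79.35"] * 30 + ["86.22"] * 15 + ["53.0"] * 5
print(brief_list(codes, 0.90))  # three codes suffice to cover 90% here
```

Raising the target lengthens the list, exactly as in the study's 111/180/278-code lists for 90%/95%/98% coverage.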
Practice patterns of academic general thoracic and adult cardiac surgeons.
Ingram, Michael T; Wisner, David H; Cooke, David T
2014-10-01
We hypothesized that academic adult cardiac surgeons (CSs) and general thoracic surgeons (GTSs) would have distinct practice patterns of, not just case-mix, but also time devoted to outpatient care, involvement in critical care, and work relative value unit (wRVU) generation for the procedures they perform. We queried the University Health System Consortium-Association of American Medical Colleges Faculty Practice Solution Center database for fiscal years 2007-2008, 2008-2009, and 2009-2010 for the frequency of inpatient and outpatient current procedural terminology coding and wRVU data of academic GTSs and CSs. The Faculty Practice Solution Center database is a compilation of productivity and payer data from 86 academic institutions. The greatest wRVU generating current procedural terminology codes for CSs were, in order, coronary artery bypass grafting, aortic valve replacement, and mitral valve replacement. In contrast, open lobectomy, video-assisted thoracic surgery wedge, and video-assisted thoracic surgery lobectomy were greatest for GTSs. The 10 greatest wRVU-generating procedures for CSs generated more wRVUs than those for GTSs (P<.001). Although CSs generated significantly more hospital inpatient evaluation and management (E & M) wRVUs than did GTSs (P<.001), only 2.5% of the total wRVUs generated by CSs were from E & M codes versus 18.8% for GTSs. Critical care codes were 1.5% of total evaluation and management billing for both CSs and GTSs. Academic CSs and GTSs have distinct practice patterns. CSs receive greater reimbursement for services because of the greater wRVUs of the procedures performed compared with GTSs, and evaluation and management coding is a more important wRVU generator for GTSs. The results of our study could guide academic CS and GTS practice structure and time prioritization. Copyright © 2014 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
On the symbolic manipulation and code generation for elasto-plastic material matrices
NASA Technical Reports Server (NTRS)
Chang, T. Y.; Saleeb, A. F.; Wang, P. S.; Tan, H. Q.
1991-01-01
A computerized procedure for symbolic manipulations and FORTRAN code generation of an elasto-plastic material matrix for finite element applications is presented. Special emphasis is placed on expression simplifications during intermediate derivations, optimal code generation, and interface with the main program. A systematic procedure is outlined to avoid redundant algebraic manipulations. Symbolic expressions of the derived material stiffness matrix are automatically converted to RATFOR code, which is then translated into FORTRAN statements through a preprocessor. To minimize the interface problem with the main program, a template file is prepared so that the translated FORTRAN statements can be merged into the file to form a subroutine (or a submodule). Three constitutive models, namely von Mises plasticity, the Drucker-Prager model, and a concrete plasticity model, are used as illustrative examples.
XSECT: A computer code for generating fuselage cross sections - user's manual
NASA Technical Reports Server (NTRS)
Ames, K. R.
1982-01-01
A computer code, XSECT, has been developed to generate fuselage cross sections from a given area distribution and wing definition. The cross sections are generated to match the wing definition while conforming to the area requirement. An iterative procedure is used to generate each cross section. Fuselage area balancing may be included in this procedure if desired. The code is intended as an aid for engineers who must first design a wing under certain aerodynamic constraints and then design a fuselage for the wing such that the constraints remain satisfied. This report contains the information necessary for accessing and executing the code, which is written in FORTRAN to execute on the Cyber 170 series computers (NOS operating system) and produces graphical output for a Tektronix 4014 CRT. The LRC graphics software is used in combination with the interface between this software and the PLOT 10 software.
Cracking the code: the accuracy of coding shoulder procedures and the repercussions.
Clement, N D; Murray, I R; Nie, Y X; McBirnie, J M
2013-05-01
Coding of patients' diagnosis and surgical procedures is subject to error levels of up to 40%, with consequences for the distribution of resources and financial recompense. Our aim was to explore and address the reasons behind coding errors of shoulder diagnoses and surgical procedures and to evaluate a potential solution. A retrospective review of 100 patients who had undergone surgery was carried out. Coding errors were identified and the reasons explored. A coding proforma was designed to address these errors and was prospectively evaluated for 100 patients. The financial implications were also considered. Retrospective analysis revealed that only 54 patients (54%) had an entirely correct primary diagnosis assigned, and only 7 patients (7%) had a correct procedure code assigned. Coders identified indistinct clinical notes and poor clarity of procedure codes as reasons for errors. The proforma was significantly more likely to assign the correct diagnosis (odds ratio 18.2, p < 0.0001) and the correct procedure code (odds ratio 310.0, p < 0.0001). Using the proforma resulted in a £28,562 increase in revenue for the 100 patients evaluated relative to the income generated from the coding department. High error levels for coding are due to misinterpretation of notes and ambiguity of procedure codes. This can be addressed by allowing surgeons to assign the diagnosis and procedure using a simplified list that is passed directly to coding.
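The odds ratios reported above compare the proforma's accuracy against the retrospective baseline. A minimal sketch of the calculation: the 54/100 baseline figure is from the abstract, while the 96/100 proforma figure is a hypothetical round number chosen only to give an odds ratio in the same region as the reported 18.2:

```python
# Odds ratio from two proportions: odds of a correct assignment under
# condition A divided by odds under condition B. The 96/100 count below is
# an illustrative assumption, not the study's actual prospective count.

def odds_ratio(correct_a, total_a, correct_b, total_b):
    odds_a = correct_a / (total_a - correct_a)
    odds_b = correct_b / (total_b - correct_b)
    return odds_a / odds_b

or_diagnosis = odds_ratio(96, 100, 54, 100)
print(round(or_diagnosis, 1))  # ~20.4 with these illustrative counts
```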
Optimizing a liquid propellant rocket engine with an automated combustor design code (AUTOCOM)
NASA Technical Reports Server (NTRS)
Hague, D. S.; Reichel, R. H.; Jones, R. T.; Glatt, C. R.
1972-01-01
A procedure for automatically designing a liquid propellant rocket engine combustion chamber in an optimal fashion is outlined. The procedure is contained in a digital computer code, AUTOCOM. The code is applied to an existing engine, and design modifications are generated which provide a substantial potential payload improvement over the existing design. Computer time requirements for this payload improvement were small, approximately four minutes in the CDC 6600 computer.
Dhakal, Sanjaya; Burwen, Dale R; Polakowski, Laura L; Zinderman, Craig E; Wise, Robert P
2014-03-01
Assess whether Medicare data are useful for monitoring tissue allograft safety and utilization. We used health care claims (billing) data from 2007 for 35 million fee-for-service Medicare beneficiaries, a predominantly elderly population. Using search terms for transplant-related procedures, we generated lists of ICD-9-CM and CPT® codes and assessed the frequency of selected allograft procedures. Step 1 used inpatient data and ICD-9-CM procedure codes. Step 2 added non-institutional provider (e.g., physician) claims, outpatient institutional claims, and CPT codes. We assembled preliminary lists of diagnosis codes for infections after selected allograft procedures. Many ICD-9-CM codes were ambiguous as to whether the procedure involved an allograft. Among 1.3 million persons with a procedure ascertained using the list of ICD-9-CM codes, only 1,886 claims clearly involved an allograft. CPT codes enabled better ascertainment of some allograft procedures (over 17,000 persons had corneal transplants and over 2,700 had allograft skin transplants). For spinal fusion procedures, CPT codes improved specificity for allografts; of nearly 100,000 patients with ICD-9-CM codes for spinal fusions, more than 34,000 had CPT codes indicating allograft use. Monitoring infrequent events (infections) after infrequent exposures (tissue allografts) requires large study populations. A strength of the large Medicare databases is the substantial number of certain allograft procedures. Limitations include lack of clinical detail and donor information. Medicare data can potentially augment passive reporting systems and may be useful for monitoring tissue allograft safety and utilization where codes clearly identify allograft use and coding algorithms can effectively screen for infections.
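The two-step ascertainment above (screen on ICD-9-CM, then confirm allograft use with CPT) can be sketched as a simple claims filter. All code lists and claims below are invented placeholders, not the study's actual lists:

```python
# Step 1: select claims by ICD-9-CM procedure code (sensitive screen).
# Step 2: require a CPT code indicating allograft use (specific confirm).

SPINAL_FUSION_ICD9 = {"81.04", "81.05"}   # hypothetical screening list
ALLOGRAFT_CPT = {"20930", "20931"}        # hypothetical confirmation list

claims = [
    {"id": 1, "icd9": ["81.04"], "cpt": ["20931"]},
    {"id": 2, "icd9": ["81.05"], "cpt": ["20936"]},  # non-allograft CPT code
    {"id": 3, "icd9": ["47.09"], "cpt": ["20931"]},  # not a fusion claim
]

screened = [c for c in claims if SPINAL_FUSION_ICD9 & set(c["icd9"])]
confirmed = [c for c in screened if ALLOGRAFT_CPT & set(c["cpt"])]
print(len(screened), len(confirmed))  # 2 screened, 1 confirmed as allograft
```

The gap between `screened` and `confirmed` mirrors the study's finding that ICD-9-CM alone is ambiguous about allograft use and CPT codes improve specificity.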
Subotin, Michael; Davis, Anthony R
2016-09-01
Natural language processing methods for medical auto-coding, or automatic generation of medical billing codes from electronic health records, generally assign each code independently of the others. They may thus assign codes for closely related procedures or diagnoses to the same document, even when they do not tend to occur together in practice, simply because the right choice can be difficult to infer from the clinical narrative. We propose a method that injects awareness of the propensities for code co-occurrence into this process. First, a model is trained to estimate the conditional probability that one code is assigned by a human coder, given that another code is known to have been assigned to the same document. Then, at runtime, an iterative algorithm is used to apply this model to the output of an existing statistical auto-coder to modify the confidence scores of the codes. We tested this method in combination with a primary auto-coder for International Statistical Classification of Diseases-10 procedure codes, achieving a 12% relative improvement in F-score over the primary auto-coder baseline. The proposed method can be used, with appropriate features, in combination with any auto-coder that generates codes with different levels of confidence. The promising results obtained for International Statistical Classification of Diseases-10 procedure codes suggest that the proposed method may have wider applications in auto-coding. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
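The iterative rescoring step described above can be sketched as follows. The update rule, mixing weight, and all numbers below are illustrative assumptions, not the authors' actual model; the ICD-10-PCS-style codes are placeholders:

```python
# Given an auto-coder's initial confidence per code and pairwise conditional
# probabilities P(code_i assigned | code_j assigned), iteratively nudge each
# code's score toward its expected co-occurrence support from the other
# currently likely codes. This is a hedged sketch of the general idea only.

def rescore(scores, cond_prob, rounds=5, mix=0.5):
    scores = dict(scores)
    for _ in range(rounds):
        new = {}
        for code, s in scores.items():
            others = [o for o in scores if o != code]
            # expected support: co-occurrence probability weighted by the
            # current confidence of each other candidate code
            support = sum(scores[o] * cond_prob.get((code, o), 0.0)
                          for o in others) / max(len(others), 1)
            new[code] = (1 - mix) * s + mix * support
        scores = new
    return scores

initial = {"0DT60ZZ": 0.6, "0DT70ZZ": 0.55, "0DTJ0ZZ": 0.2}
cond = {("0DT60ZZ", "0DTJ0ZZ"): 0.7, ("0DTJ0ZZ", "0DT60ZZ"): 0.7,
        ("0DT60ZZ", "0DT70ZZ"): 0.05, ("0DT70ZZ", "0DT60ZZ"): 0.05}
final = rescore(initial, cond)
# 0DT70ZZ ends up lowest: it has little co-occurrence support from the others
print(sorted(final, key=final.get, reverse=True))
```

Note how the code with strong co-occurrence support ("0DTJ0ZZ") overtakes the closely related but rarely co-occurring one ("0DT70ZZ"), which is the behavior the method targets.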
HOMAR: A computer code for generating homotopic grids using algebraic relations: User's manual
NASA Technical Reports Server (NTRS)
Moitra, Anutosh
1989-01-01
A computer code for fast automatic generation of quasi-three-dimensional grid systems for aerospace configurations is described. The code employs a homotopic method to algebraically generate two-dimensional grids in cross-sectional planes, which are stacked to produce a three-dimensional grid system. Implementation of the algebraic equivalents of the homotopic relations for generating body geometries and grids is explained. Procedures for controlling grid orthogonality and distortion are described. Test cases with description and specification of inputs are presented in detail. The FORTRAN computer program and notes on implementation and use are included.
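The homotopic idea above continuously deforms one boundary curve into another to fill a cross-sectional plane with grid points. A minimal generic illustration using linear blending (the simplest homotopy) between two boundary curves; this is a textbook algebraic-grid sketch, not HOMAR's actual formulation:

```python
import math

# Blend an inner boundary curve into an outer one: each grid level k uses a
# homotopy parameter t in [0, 1] to interpolate between corresponding points.

def blend_grid(inner, outer, n_levels):
    """inner/outer: equal-length lists of (x, y); returns n_levels point rows."""
    grid = []
    for k in range(n_levels):
        t = k / (n_levels - 1)              # homotopy parameter in [0, 1]
        row = [((1 - t) * xi + t * xo, (1 - t) * yi + t * yo)
               for (xi, yi), (xo, yo) in zip(inner, outer)]
        grid.append(row)
    return grid

# inner boundary: unit circle; outer boundary: circle of radius 3
angles = [2 * math.pi * i / 16 for i in range(16)]
inner = [(math.cos(a), math.sin(a)) for a in angles]
outer = [(3 * math.cos(a), 3 * math.sin(a)) for a in angles]
grid = blend_grid(inner, outer, 5)
print(len(grid), len(grid[0]))  # 5 radial levels x 16 points each
```

Stacking such 2D cross-sectional grids along the body axis yields the quasi-three-dimensional grid system the abstract describes.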
Improving the accuracy of operation coding in surgical discharge summaries
Martinou, Eirini; Shouls, Genevieve; Betambeau, Nadine
2014-01-01
Procedural coding in surgical discharge summaries is extremely important; as well as communicating to healthcare staff which procedures have been performed, it also provides information that is used by the hospital's coding department. The OPCS code (Office of Population Censuses and Surveys Classification of Surgical Operations and Procedures) is used to generate the tariff that allows the hospital to be reimbursed for the procedure. We felt that the OPCS coding on discharge summaries was often incorrect within our breast and endocrine surgery department. A baseline measurement over two months demonstrated that 32% of operations had been incorrectly coded, resulting in an incorrect tariff being applied and an estimated loss to the Trust of £17,000. We developed a simple but specific OPCS coding table in collaboration with the clinical coding team and breast surgeons that summarised all operations performed within our department. This table was disseminated across the team, specifically to the junior doctors who most frequently complete the discharge summaries. Re-audit showed 100% of operations were accurately coded, demonstrating the effectiveness of the coding table. We suggest that specifically designed coding tables be introduced across each surgical department to ensure accurate OPCS codes are used to produce better quality surgical discharge summaries and to ensure correct reimbursement to the Trust. PMID:26734286
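The departmental coding table described above is essentially a fixed lookup from operation name to OPCS code. A minimal sketch; the operation names and codes below are illustrative placeholders, not the department's actual table:

```python
# A fixed operation -> OPCS code lookup that a junior doctor can apply when
# writing the discharge summary. Unrecognised operations are not guessed.

OPCS_TABLE = {
    "wide local excision of breast lesion": "B28.3",
    "mastectomy": "B27.4",
    "sentinel lymph node biopsy": "T87.4",
}

def code_operation(operation):
    try:
        return OPCS_TABLE[operation.lower().strip()]
    except KeyError:
        # operations outside the table go back to the coding team
        return None

print(code_operation("Mastectomy"))      # B27.4
print(code_operation("appendicectomy"))  # None -> refer to coders
```

Keeping the table small and department-specific is what made it usable at the point of discharge-summary writing.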
Adaptive EAGLE dynamic solution adaptation and grid quality enhancement
NASA Technical Reports Server (NTRS)
Luong, Phu Vinh; Thompson, J. F.; Gatlin, B.; Mastin, C. W.; Kim, H. J.
1992-01-01
In the effort described here, the elliptic grid generation procedure in the EAGLE grid code was separated from the main code into a subroutine, and a new subroutine which evaluates several grid quality measures at each grid point was added. The elliptic grid routine can now be called, either by a computational fluid dynamics (CFD) code to generate a new adaptive grid based on flow variables and quality measures through multiple adaptation, or by the EAGLE main code to generate a grid based on quality measure variables through static adaptation. Arrays of flow variables can be read into the EAGLE grid code for use in static adaptation as well. These major changes in the EAGLE adaptive grid system make it easier to convert any CFD code that operates on a block-structured grid (or single-block grid) into a multiple adaptive code.
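The grid quality measures mentioned above typically include metrics such as cell aspect ratio and skewness. A sketch of two common textbook choices, computed for a single quadrilateral cell; these particular formulas are generic, not necessarily the ones EAGLE evaluates:

```python
import math

# Aspect ratio (longest edge over shortest edge) and a skewness angle
# (deviation of the cell diagonals from orthogonality) for a quad cell.

def edge(p, q):
    return math.hypot(q[0] - p[0], q[1] - p[1])

def quality(cell):
    """cell: four (x, y) corners in order a, b, c, d."""
    a, b, c, d = cell
    edges = [edge(a, b), edge(b, c), edge(c, d), edge(d, a)]
    aspect = max(edges) / min(edges)
    # angle between the two diagonals; 90 degrees means no skew
    d1 = (c[0] - a[0], c[1] - a[1])
    d2 = (d[0] - b[0], d[1] - b[1])
    cosang = (d1[0] * d2[0] + d1[1] * d2[1]) / (math.hypot(*d1) * math.hypot(*d2))
    skew_deg = abs(90.0 - math.degrees(math.acos(cosang)))
    return aspect, skew_deg

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(quality(square))  # aspect 1.0, skew ~0 for a unit square
```

Evaluating such measures at each grid point, as the modified EAGLE subroutine does, lets the elliptic solver adapt the grid away from high-aspect-ratio or highly skewed cells.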
NASA Technical Reports Server (NTRS)
Spradley, L.; Pearson, M.
1979-01-01
The General Interpolants Method (GIM), a three-dimensional, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws, is described. The Navier-Stokes equations written for an Eulerian system are considered. The conversion of the GIM code to the STAR-100 computer, and the implementation of 'GIM-ON-STAR', is discussed.
Rodrigues, J M; Trombert-Paviot, B; Baud, R; Wagner, J; Meusnier-Carriot, F
1998-01-01
GALEN has developed a language-independent common reference model based on a medically oriented ontology, together with practical tools and techniques for managing healthcare terminology, including natural language processing. GALEN-IN-USE is the current phase, which applied the modelling and the tools to the development or updating of coding systems for surgical procedures in different national coding centres co-operating within the European Federation of Coding Centres (EFCC), to create a language-independent knowledge repository for multicultural Europe. We used an integrated set of artificial intelligence terminology tools, named the CLAssification Manager workbench, to process French professional medical language rubrics into intermediate dissections and into the GRAIL reference ontology model representation. From this language-independent concept model representation we generate controlled French natural language. The French national coding centre is then able to retrieve the initial professional rubrics with different categories of concepts, to compare the professional language proposed by expert clinicians to the French generated controlled vocabulary, and to finalize the linguistic labels of the coding system in relation to the meanings of the conceptual system structure.
From Novice to Expert: Problem Solving in ICD-10-PCS Procedural Coding
Rousse, Justin Thomas
2013-01-01
The benefits of converting to ICD-10-CM/PCS have been well documented in recent years. One of the greatest challenges in the conversion, however, is how to train the workforce in the code sets. The International Classification of Diseases, Tenth Revision, Procedure Coding System (ICD-10-PCS) has been described as a language requiring higher-level reasoning skills because of the system's increased granularity. Training and problem-solving strategies required for correct procedural coding are unclear. The objective of this article is to propose that the acquisition of rule-based logic will need to be augmented with self-evaluative and critical thinking. Awareness of how this process works is helpful for established coders as well as for a new generation of coders who will master the complexities of the system. PMID:23861674
CFD analysis of turbopump volutes
NASA Technical Reports Server (NTRS)
Ascoli, Edward P.; Chan, Daniel C.; Darian, Armen; Hsu, Wayne W.; Tran, Ken
1993-01-01
An effort is underway to develop a procedure for the regular use of CFD analysis in the design of turbopump volutes. Airflow data to be taken at NASA Marshall will be used to validate the CFD code and overall procedure. Initial focus has been on preprocessing (geometry creation, translation, and grid generation). Volute geometries have been acquired electronically and imported into the CATIA CAD system and RAGGS (Rockwell Automated Grid Generation System) via the IGES standard. An initial grid topology has been identified and grids have been constructed for turbine inlet and discharge volutes. For CFD analysis of volutes to be used regularly, a procedure must be defined to meet engineering design needs in a timely manner. Thus, a compromise must be established between making geometric approximations, the selection of grid topologies, and possible CFD code enhancements. While the initial grid developed approximated the volute tongue with a zero thickness, final computations should more accurately account for the geometry in this region. Additionally, grid topologies will be explored to minimize skewness and high aspect ratio cells that can affect solution accuracy and slow code convergence. Finally, as appropriate, code modifications will be made to allow for new grid topologies in an effort to expedite the overall CFD analysis process.
ICD-10 procedure codes produce transition challenges.
Boyd, Andrew D; Li, Jianrong 'John'; Kenost, Colleen; Zaim, Samir Rachid; Krive, Jacob; Mittal, Manish; Satava, Richard A; Burton, Michael; Smith, Jacob; Lussier, Yves A
2018-01-01
The transition of procedure coding from ICD-9-CM-Vol-3 to ICD-10-PCS has generated problems for the medical community at large, resulting from the lack of clarity required to integrate two non-congruent coding systems. We hypothesized that quantifying these issues with network topology analyses offers a better understanding of the issues, and therefore we developed solutions (online tools) to empower hospital administrators and researchers to address these challenges. Five topologies were identified: "identity" (I), "class-to-subclass" (C2S), "subclass-to-class" (S2C), "convoluted" (C), and "no mapping" (NM). The procedure codes in the 2010 Illinois Medicaid dataset (3,290 patients, 116 institutions) were categorized as C=55%, C2S=40%, I=3%, NM=2%, and S2C=1%. The majority of the problematic and ambiguous mappings (convoluted) pertained to operations in ophthalmology, cardiology, urology, gyneco-obstetrics, and dermatology. Finally, the algorithms were expanded into a user-friendly tool to identify problematic topologies and specify lists of procedural codes utilized by medical professionals and researchers for mitigating error-prone translations, simplifying research, and improving quality. http://www.lussiergroup.org/transition-to-ICD10PCS PMID:29888037
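The five mapping topologies above can be classified from a forward code map and its reverse. A sketch with invented toy codes (the real analysis used the official ICD-9-CM to ICD-10-PCS mapping files); the classification rules below are a plausible reading of the topology names, not the authors' exact algorithm:

```python
# Classify a source code's mapping topology from a forward map
# (old code -> set of new codes) and the reverse map (new code -> sources).

def topology(code, fwd, rev):
    targets = fwd.get(code, set())
    if not targets:
        return "no mapping"
    sources = set().union(*(rev.get(t, set()) for t in targets))
    if len(targets) == 1 and sources == {code}:
        return "identity"
    if sources == {code}:
        return "class-to-subclass"   # one source fans out, no sharing
    if len(targets) == 1:
        return "subclass-to-class"   # several sources collapse to one target
    return "convoluted"              # many-to-many entanglement

fwd = {"A": {"x"}, "B": {"y", "z"}, "C": {"w"}, "D": {"w"},
       "E": {"y", "q"}, "G": {"m", "n"}}
rev = {"x": {"A"}, "y": {"B", "E"}, "z": {"B"}, "w": {"C", "D"},
       "q": {"E"}, "m": {"G"}, "n": {"G"}}

for code in ["A", "B", "C", "D", "E", "G", "F"]:
    print(code, topology(code, fwd, rev))
```

Codes "B" and "E" come out convoluted because they share a target, which is the many-to-many entanglement the abstract identifies as the dominant (55%) and most error-prone case.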
Optical encryption and QR codes: secure and noise-free information retrieval.
Barrera, John Fredy; Mira, Alejandro; Torroba, Roberto
2013-03-11
We introduce for the first time the concept of an information "container" applied before a standard optical encrypting procedure. The "container" selected is a QR code, which offers the main advantage of being tolerant to pollutant speckle noise. Moreover, the QR code can be read by smartphones, a massively used device. Additionally, the QR code adds another layer of security to the benefits the optical methods provide. The QR code is generated by means of freely available software. The concept development proves that the speckle noise polluting the outcomes of normal optical encrypting procedures can be avoided, making the adoption of these techniques more attractive. Actual smartphone-collected results are shown to validate our proposal.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 2 is the User's Guide, and describes the program's general features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
Aiello, Francesco; Durgin, Jonathan; Daniel, Vijaya; Messina, Louis; Doucet, Danielle; Simons, Jessica; Jenkins, James; Schanzer, Andres
2017-10-01
Fenestrated endovascular aneurysm repair (FEVAR) allows endovascular treatment of thoracoabdominal and juxtarenal aneurysms previously outside the indications of use for standard devices. However, because of considerable device costs and increased procedure time, FEVAR is thought to result in financial losses for medical centers and physicians. We hypothesized that surgeon leadership in the coding, billing, and contractual negotiations for FEVAR procedures will increase medical center contribution margin (CM) and physician reimbursement. At the UMass Memorial Center for Complex Aortic Disease, a vascular surgeon with experience in medical finances is supported to manage the billing and coding of FEVAR procedures for medical center and physician reimbursement. A comprehensive financial analysis was performed for all FEVAR procedures (2011-2015), independent of insurance status, patient presentation, or type of device used. Medical center CM (actual reimbursement minus direct costs) was determined for each index FEVAR procedure and for all related subsequent procedures, inpatient or outpatient, 3 months before and 1 year subsequent to the index FEVAR procedure. Medical center CM for outpatient clinic visits, radiology examinations, vascular laboratory studies, and cardiology and pulmonary evaluations related to FEVAR were also determined. Surgeon reimbursement for index FEVAR procedure, related adjunct procedures, and assistant surgeon reimbursement were also calculated. All financial analyses were performed and adjudicated by the UMass Department of Finance. The index hospitalization for 63 FEVAR procedures incurred $2,776,726 of direct costs and generated $3,027,887 in reimbursement, resulting in a positive CM of $251,160. Subsequent related hospital procedures (n = 26) generated a CM of $144,473. Outpatient clinic visits, radiologic examinations, and vascular laboratory studies generated an additional CM of $96,888. 
Direct cost analysis revealed that grafts accounted for the largest proportion of costs (55%), followed by supplies (12%), bed (12%), and operating room (10%). Total medical center CM for all FEVAR services was $492,521. Average surgeon reimbursements per FEVAR from 2011 to 2015 increased from $1601 to $2480 while the surgeon payment denial rate declined from 50% to 0%. Surgeon-led negotiations with the Centers for Medicare & Medicaid Services during 2015 resulted in a 27% increase in physician reimbursement for the remainder of 2015 ($2480 vs $3068/case) and a 91% increase in reimbursement from 2011 ($1601 vs $3068). Assistant surgeon reimbursement also increased ($266 vs $764). Concomitant FEVAR-related procedures generated an additional $27,347 in surgeon reimbursement. Physician leadership in the coding, billing, and contractual negotiations for FEVAR results in a positive medical center CM and increased physician reimbursement. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Characteristics of health interventions: a systematic analysis of the Austrian Procedure Catalogue.
Neururer, Sabrina B; Pfeiffer, Karl-Peter
2012-01-01
The Austrian Procedure Catalogue contains 1,500 codes for health interventions used for performance-oriented hospital financing in Austria. It offers a multiaxial taxonomy. The aim of this study is to identify characteristics of medical procedures. Therefore a definition analysis followed by a typological analysis was conducted. Search strings were generated out of code descriptions regarding the heart, large vessels and cardiovascular system. Their definitions were looked up in the Pschyrembel Clinical Dictionary and documented. Out of these definitions, types which represent characteristics of health interventions were abstracted. The three axes of the Austrian Procedure Catalogue were approved as well as new, relevant information identified. The results are the foundation of a further enhancement of the Austrian Procedure Catalogue.
The FORTRAN static source code analyzer program (SAP) system description
NASA Technical Reports Server (NTRS)
Decker, W.; Taylor, W.; Merwarth, P.; Oneill, M.; Goorevich, C.; Waligora, S.
1982-01-01
A source code analyzer program (SAP) designed to assist personnel in conducting studies of FORTRAN programs is described. The SAP scans FORTRAN source code and produces reports that present statistics and measures of the statements and structures that make up a module. The processing performed by SAP and the routines, COMMON blocks, and files used by SAP are described. The system generation procedure for SAP is also presented.
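A static analyzer of this kind starts by classifying source lines. A deliberately crude sketch in the spirit of SAP, tallying fixed-form FORTRAN comment lines, continuation lines, and statements; the real SAP computed far richer measures, and the sample routine below is invented:

```python
# Classify fixed-form FORTRAN lines: column 1 'C'/'c'/'*' marks a comment,
# a non-blank, non-zero character in column 6 marks a continuation line,
# and everything else non-blank counts as a statement.

def fortran_stats(source):
    stats = {"comments": 0, "continuations": 0, "statements": 0}
    for line in source.splitlines():
        if not line.strip():
            continue
        if line[:1] in ("C", "c", "*"):
            stats["comments"] += 1
        elif len(line) > 5 and line[5] not in (" ", "0"):
            stats["continuations"] += 1
        else:
            stats["statements"] += 1
    return stats

sample = """C     COMPUTE THE SUM OF AN ARRAY
      SUBROUTINE TOTAL(A, N, S)
      S = 0.0
      DO 10 I = 1, N
         S = S + A(I)
   10 CONTINUE
      END"""
print(fortran_stats(sample))
```

Aggregating such per-line counts over the routines of a module yields the kind of module-level statistics SAP reported.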
Mr.CAS-A minimalistic (pure) Ruby CAS for fast prototyping and code generation
NASA Astrophysics Data System (ADS)
Ragni, Matteo
There are Computer Algebra System (CAS) products on the market with complete solutions for the manipulation of analytical models. But exporting a model that implements specific algorithms on specific platforms, for target languages or for a particular numerical library, is often a rigid procedure that requires manual post-processing. This work presents a Ruby library that exposes core CAS capabilities, i.e. simplification, substitution, evaluation, etc. The library aims at programmers who need to rapidly prototype and generate numerical code for different target languages, while keeping the mathematical expressions separate from the code generation rules, where best practices for numerical conditioning are implemented. The library is written in pure Ruby and is compatible with most Ruby interpreters.
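The workflow the library targets — simplify, substitute, evaluate, then emit target-language code with the expression kept separate from the generation rules — can be illustrated with SymPy in Python (an analogous tool, not the Ruby library itself):

```python
import sympy as sp

# Symbolic model: define an expression, then simplify it.
x, y = sp.symbols("x y")
expr = sp.sin(x)**2 + sp.cos(x)**2 + x*y
simplified = sp.simplify(expr)        # reduces to x*y + 1

# Substitution and evaluation, two of the core CAS operations.
value = simplified.subs({x: 2, y: 3})

# Code generation for a target language (here C), kept separate
# from the mathematical model itself.
c_source = sp.ccode(simplified)
```

The same simplified expression can be re-emitted for a different target language without touching the model, which is the separation the abstract emphasizes.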
Surface grid generation for complex three-dimensional geometries
NASA Technical Reports Server (NTRS)
Luh, Raymond Ching-Chung
1988-01-01
An outline is presented for the creation of surface grids from primitive geometry data such as obtained from CAD/CAM systems. The general procedure is applicable to any geometry including full aircraft with wing, nacelle, and empennage. When developed in an interactive graphics environment, a code based on this procedure is expected to substantially improve the turn around time for generating surface grids on complex geometries. Results are shown for a general hypersonic airplane geometry.
Ruffing, T; Huchzermeier, P; Muhm, M; Winkler, H
2014-05-01
Precise coding is an essential requirement for generating a valid DRG. The aim of our study was to evaluate the quality of the initial coding of surgical procedures, and to introduce our "hybrid model" of a surgical specialist supervising medical coding together with a non-physician for case auditing. The department's DRG-responsible physician, as a surgical specialist, has profound knowledge of both surgery and DRG coding. At a Level 1 hospital, 1000 coded cases of surgical procedures were checked. In our department, the model of a DRG-responsible physician who is both surgeon and encoder has proven its worth for many years. The initial surgical DRG coding had to be corrected by the DRG-responsible physician in 42.2% of cases. On average, one hour per working day was necessary. The implementation of a DRG-responsible physician is a simple, effective way to connect medical and business expertise without interface problems. Permanent feedback promotes both medical and economic sensitivity, improving coding quality.
Shaping electromagnetic waves using software-automatically-designed metasurfaces.
Zhang, Qian; Wan, Xiang; Liu, Shuo; Yuan Yin, Jia; Zhang, Lei; Jun Cui, Tie
2017-06-15
We present a fully digital procedure for designing reflective coding metasurfaces to shape reflected electromagnetic waves. The design procedure is completely automatic, controlled by a personal computer. In detail, the macro coding units of the metasurface are automatically divided into several types (e.g. two types for 1-bit coding, four types for 2-bit coding, etc.), and each type of macro coding unit is formed by a discrete random arrangement of micro coding units. By combining an optimization algorithm with commercial electromagnetic software, the digital patterns of the macro coding units are optimized to possess a constant phase difference for the reflected waves. The apertures of the designed reflective metasurfaces are formed by arranging the macro coding units in a certain coding sequence. To experimentally verify the performance, a coding metasurface is fabricated by automatically designing two digital 1-bit unit cells, which are arranged in an array to constitute a periodic coding metasurface that generates the required four-beam radiation in specific directions. Two complicated functional metasurfaces with circularly and elliptically shaped radiation beams are realized by automatically designing 4-bit macro coding units, showing the excellent performance of automatic design by software. The proposed method provides a smart tool to realize various functional devices and systems automatically.
Estimating the costs of VA ambulatory care.
Phibbs, Ciaran S; Bhandari, Aman; Yu, Wei; Barnett, Paul G
2003-09-01
This article reports how we matched Common Procedure Terminology (CPT) codes with Medicare payment rates and aggregate Veterans Affairs (VA) budget data to estimate the costs of every VA ambulatory encounter. Converting CPT codes to encounter-level costs was more complex than a simple match of Medicare reimbursements to CPT codes. About 40 percent of the CPT codes used in VA, representing about 20 percent of procedures, did not have a Medicare payment rate and required other cost estimates. Reconciling aggregated estimated costs to the VA budget allocations for outpatient care produced final VA cost estimates that were lower than projected Medicare reimbursements. The methods used to estimate costs for encounters could be replicated for other settings. They are potentially useful for any system that does not generate billing data, when CPT codes are simpler to collect than billing data, or when there is a need to standardize cost estimates across data sources.
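The two-step estimate described above — price each encounter's CPT codes with Medicare rates, falling back to another estimate where no rate exists, then reconcile aggregate costs to the budget — can be sketched as follows. All codes, rates, and the flat fallback rate are hypothetical:

```python
def estimate_encounter_costs(encounters, medicare_rates, default_rate, budget):
    """Estimate a cost per encounter, then reconcile totals to a budget.

    encounters    : list of lists of CPT codes (one inner list per encounter)
    medicare_rates: dict mapping CPT code -> Medicare payment rate
    default_rate  : fallback estimate for codes without a Medicare rate
    budget        : aggregate budget the final estimates must sum to
    """
    # Step 1: raw cost per encounter from matched rates plus fallbacks.
    raw = [sum(medicare_rates.get(code, default_rate) for code in codes)
           for codes in encounters]
    # Step 2: scale every estimate so the total matches the budget allocation.
    scale = budget / sum(raw)
    return [cost * scale for cost in raw]
```

When projected Medicare reimbursements exceed the budget, as the article reports for VA outpatient care, the reconciliation factor is below 1 and every encounter-level estimate is scaled down proportionally.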
A general multiblock Euler code for propulsion integration. Volume 3: User guide for the Euler code
NASA Technical Reports Server (NTRS)
Chen, H. C.; Su, T. Y.; Kao, T. J.
1991-01-01
This manual explains the procedures for using the general multiblock Euler (GMBE) code developed under NASA contract NAS1-18703. The code was developed for the aerodynamic analysis of geometrically complex configurations in either free air or wind tunnel environments (vol. 1). The complete flow field is divided into a number of topologically simple blocks within each of which surface fitted grids and efficient flow solution algorithms can easily be constructed. The multiblock field grid is generated with the BCON procedure described in volume 2. The GMBE utilizes a finite volume formulation with an explicit time stepping scheme to solve the Euler equations. A multiblock version of the multigrid method was developed to accelerate the convergence of the calculations. This user guide provides information on the GMBE code, including input data preparations with sample input files and a sample Unix script for program execution in the UNICOS environment.
Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H
1999-01-01
GALEN has developed a new generation of terminology tools based on a language-independent concept reference model that uses a compositional formalism allowing computer processing and multiple reuses. During the 4th Framework Programme project Galen-In-Use we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures (CCAM) in France. On the one hand we contributed to a language-independent knowledge repository for multicultural Europe. On the other hand we support the traditional process for creating a new coding system in medicine, which is highly labour-intensive, with artificial intelligence tools using a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW to process French professional medical language rubrics produced by the national colleges of surgeons into intermediate dissections and then into the Grail reference ontology model representation. From this language-independent concept model representation we generate, on the one hand, controlled French natural language to support the finalization of the linguistic labels in relation to the meanings of the conceptual system structure. On the other hand, the third-generation classification manager proves to be very powerful in retrieving the initial professional rubrics with different categories of concepts within a semantic network.
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work the performance of two neutron spectrum unfolding codes, one based on iterative procedures and one on artificial neural networks, is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meters. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is designed using neural network technology. The artificial intelligence approach of the neural network does not solve mathematical equations: by using the knowledge stored in the synaptic weights of a properly trained neural network, the code is capable of unfolding the neutron spectrum and simultaneously calculating 15 dosimetric quantities, needing as input only the count rates measured with a Bonner sphere system. Similarities of the NSDUAZ and NSDann codes are: they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. Differences between the codes are: the NSDUAZ code was designed using classical iterative approaches and needs an initial guess spectrum in order to initiate the iterative procedure; in NSDUAZ, a programming routine was designed to calculate 7 IAEA instrument survey meters using the fluence-to-dose conversion coefficients; the NSDann code uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained neural network.
In contrast to iterative procedures, the neural network approach makes it possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer tool called Neutron Spectrometry and Dosimetry computer tool was designed. The results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
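The iterative branch of such codes can be illustrated with a generic multiplicative unfolding update (a simplified SPUNIT-like scheme; the abstract does not specify the actual algorithm). `response[d][e]`, the detector response matrix, the count rates, and the initial guess spectrum are all assumptions for the sketch:

```python
def unfold_spectrum(response, counts, guess, iterations=50):
    """Generic multiplicative iterative spectrum unfolding (sketch).

    response : response[d][e], detector d response per unit fluence in bin e
    counts   : measured count rate for each detector
    guess    : initial guess spectrum, one value per energy bin
    """
    phi = list(guess)
    n_det, n_bin = len(response), len(phi)
    for _ in range(iterations):
        # Forward-fold the current spectrum estimate through the responses.
        predicted = [sum(response[d][e] * phi[e] for e in range(n_bin))
                     for d in range(n_det)]
        # Multiplicative correction: pull each bin toward measured/predicted.
        for e in range(n_bin):
            num = sum(response[d][e] * counts[d] / predicted[d] for d in range(n_det))
            den = sum(response[d][e] for d in range(n_det))
            phi[e] *= num / den
    return phi
```

The quality of the initial guess matters for such schemes, which is why NSDUAZ automates its selection from the IAEA compendium.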
Assessment of the MPACT Resonance Data Generation Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Williams, Mark L.
Currently, heterogeneous models are being used to generate resonance self-shielded cross-section tables as a function of background cross section for important nuclides such as 235U and 238U, by performing the CENTRM (Continuous Energy Transport Model) slowing-down calculation with the MOC (Method of Characteristics) spatial discretization and ESSM (Embedded Self-Shielding Method) calculations to obtain background cross sections. The resonance self-shielded cross-section tables are then converted into subgroup data, which are used to estimate problem-dependent self-shielded cross sections in MPACT (Michigan Parallel Characteristics Transport Code). Although this procedure has been developed, and resonance data have thus been generated and validated by benchmark calculations, no assessment has ever been performed to review whether the resonance data are properly generated by the procedure and utilized in MPACT. This study focuses on assessing the procedure and its proper use in MPACT.
Keltie, Kim; Cole, Helen; Arber, Mick; Patrick, Hannah; Powell, John; Campbell, Bruce; Sims, Andrew
2014-11-28
Several authors have developed and applied methods to routine data sets to identify the nature and rate of complications following interventional procedures. But, to date, there has been no systematic search for such methods. The objective of this article was to find, classify and appraise published methods, based on analysis of clinical codes, which used routine healthcare databases in a United Kingdom setting to identify complications resulting from interventional procedures. A literature search strategy was developed to identify published studies that referred, in the title or abstract, to the name or acronym of a known routine healthcare database and to complications from procedures or devices. The following data sources were searched in February and March 2013: Cochrane Methods Register, Conference Proceedings Citation Index - Science, Econlit, EMBASE, Health Management Information Consortium, Health Technology Assessment database, MathSciNet, MEDLINE, MEDLINE in-process, OAIster, OpenGrey, Science Citation Index Expanded and ScienceDirect. Of the eligible papers, those which reported methods using clinical coding were classified and summarised in tabular form using the following headings: routine healthcare database; medical speciality; method for identifying complications; length of follow-up; method of recording comorbidity. The benefits and limitations of each approach were assessed. 
From 3688 papers identified by the literature search, 44 reported the use of clinical codes to identify complications, from which four distinct methods were identified: 1) searching the index admission for specified clinical codes, 2) searching a sequence of admissions for specified clinical codes, 3) searching for specified clinical codes for complications from procedures and devices within the International Classification of Diseases 10th revision (ICD-10) coding scheme, which is the methodology recommended by the NHS Classification Service, and 4) conducting manual clinical review of diagnostic and procedure codes. The four distinct methods for identifying complications from coded data offer great potential in generating new evidence on the quality and safety of new procedures using routine data. However, the most robust method, using the methodology recommended by the NHS Classification Service, was the least frequently used, highlighting that much valuable observational data is being ignored.
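Method 2 above — searching a sequence of admissions for specified clinical codes within a follow-up window — can be sketched in a few lines. The codes and the 90-day window are illustrative assumptions, not values from the review:

```python
from datetime import date, timedelta

def find_complications(admissions, index_date, target_codes, follow_up_days=90):
    """Scan a patient's admissions within a follow-up window for any of a
    specified set of complication codes (method-2 style sketch).

    admissions   : list of (admission_date, [clinical codes]) tuples
    index_date   : date of the index procedure
    target_codes : set of clinical codes denoting complications of interest
    """
    window_end = index_date + timedelta(days=follow_up_days)
    hits = []
    for adm_date, codes in admissions:
        if index_date <= adm_date <= window_end:
            found = target_codes.intersection(codes)
            if found:
                hits.append((adm_date, sorted(found)))
    return hits
```

Run over a routine database, the per-patient hits aggregate into a complication rate for the procedure; the review's other methods differ mainly in which admissions and which code ranges are searched.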
Nonperturbative methods in HZE ion transport
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Costen, Robert C.; Shinn, Judy L.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport. The code is established to operate on the Langley Research Center nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code is highly efficient and compares well with the perturbation approximations.
A compendium of controlled diffusion blades generated by an automated inverse design procedure
NASA Technical Reports Server (NTRS)
Sanz, Jose M.
1989-01-01
A set of sample cases was produced to test an automated design procedure developed at the NASA Lewis Research Center for the design of controlled diffusion blades. The range of application of the automated design procedure is documented. The results presented include characteristic compressor and turbine blade sections produced with the automated design code as well as various other airfoils produced with the base design method prior to the incorporation of the automated procedure.
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 1: Analysis description
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the two-dimensional or axisymmetric, Reynolds averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 1 is the Analysis Description, and describes in detail the governing equations, the turbulence model, the linearization of the equations and boundary conditions, the time and space differencing formulas, the ADI solution procedure, and the artificial viscosity models.
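The alternating direction implicit (ADI) idea used by PROTEUS can be illustrated on a much simpler problem, the 2-D heat equation on a uniform grid with fixed boundary values. This is a didactic analogue of the solution procedure, not the PROTEUS scheme itself:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (Thomas algorithm).
    a: sub-diagonal (a[0] unused), b: diagonal, c: super-diagonal (c[-1] unused)."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(u, alpha):
    """One ADI time step for u_t = u_xx + u_yy on a square grid with fixed
    boundary values; alpha = dt / (2 * dx**2). Implicit in x, then in y."""
    n = len(u)
    half = [row[:] for row in u]
    # Sweep 1: implicit in x (tridiagonal over i), explicit in y.
    for j in range(1, n - 1):
        a = [-alpha] * (n - 2); b = [1 + 2 * alpha] * (n - 2); c = [-alpha] * (n - 2)
        d = [u[i][j] + alpha * (u[i][j - 1] - 2 * u[i][j] + u[i][j + 1])
             for i in range(1, n - 1)]
        d[0] += alpha * u[0][j]          # known boundary contributions
        d[-1] += alpha * u[n - 1][j]
        for k, i in enumerate(range(1, n - 1)):
            half[i][j] = thomas(a, b, c, d)[k] if k == 0 else half[i][j]
        sol = thomas(a, b, c, d)
        for k, i in enumerate(range(1, n - 1)):
            half[i][j] = sol[k]
    # Sweep 2: implicit in y (tridiagonal over j), explicit in x.
    new = [row[:] for row in half]
    for i in range(1, n - 1):
        a = [-alpha] * (n - 2); b = [1 + 2 * alpha] * (n - 2); c = [-alpha] * (n - 2)
        d = [half[i][j] + alpha * (half[i - 1][j] - 2 * half[i][j] + half[i + 1][j])
             for j in range(1, n - 1)]
        d[0] += alpha * half[i][0]
        d[-1] += alpha * half[i][n - 1]
        sol = thomas(a, b, c, d)
        for k, j in enumerate(range(1, n - 1)):
            new[i][j] = sol[k]
    return new
```

Each sweep only requires tridiagonal solves, which is what makes the fully coupled implicit update affordable; PROTEUS applies the same splitting to the far larger linearized Navier-Stokes system.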
Automatic generation of user material subroutines for biomechanical growth analysis.
Young, Jonathan M; Yao, Jiang; Ramasubramanian, Ashok; Taber, Larry A; Perucchio, Renato
2010-10-01
The analysis of the biomechanics of growth and remodeling in soft tissues requires the formulation of specialized pseudoelastic constitutive relations. The nonlinear finite element analysis package ABAQUS allows the user to implement such specialized material responses through the coding of a user material subroutine called UMAT. However, hand coding UMAT subroutines is a challenge even for simple pseudoelastic materials and requires substantial time to debug and test the code. To resolve this issue, we develop an automatic UMAT code generation procedure for pseudoelastic materials using the symbolic mathematics package MATHEMATICA and extend the UMAT generator to include continuum growth. The performance of the automatically coded UMAT is tested by simulating the stress-stretch response of a material defined by a Fung-orthotropic strain energy function, subject to uniaxial stretching, equibiaxial stretching, and simple shear in ABAQUS. The MATHEMATICA UMAT generator is then extended to include continuum growth by adding a growth subroutine to the automatically generated UMAT. The MATHEMATICA UMAT generator correctly derives the variables required in the UMAT code, quickly providing a ready-to-use UMAT. In turn, the UMAT accurately simulates the pseudoelastic response. In order to test the growth UMAT, we simulate the growth-based bending of a bilayered bar with differing fiber directions in a nongrowing passive layer. The anisotropic passive layer, being topologically tied to the growing isotropic layer, causes the bending bar to twist laterally. The results of simulations demonstrate the validity of the automatically coded UMAT, used in both standardized tests of hyperelastic materials and for a biomechanical growth analysis.
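The generator's central step — differentiate a strain energy function symbolically and emit ready-to-paste Fortran — can be sketched with SymPy standing in for MATHEMATICA, and a simple 1-D incompressible neo-Hookean law standing in for the Fung-orthotropic one:

```python
import sympy as sp

lam, c = sp.symbols("lambda c", positive=True)

# Illustrative 1-D strain energy function (not the paper's Fung law).
W = c * (lam**2 + 2 / lam - 3)

# The "UMAT generator" step: derive the stress by symbolic differentiation.
P = sp.diff(W, lam)            # equals 2*c*(lam - 1/lam**2)

# Emit Fortran suitable for pasting into a user subroutine, as the
# MATHEMATICA generator does for ABAQUS UMATs.
fortran_line = sp.fcode(sp.simplify(P), assign_to="stress")
```

For a full UMAT the generator must also derive the consistent material tangent (second derivatives of W), which is exactly the error-prone hand calculation the paper automates.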
Approximate Green's function methods for HZE transport in multilayered materials
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.
NASA Technical Reports Server (NTRS)
Warsi, Saif A.
1989-01-01
A detailed operating manual is presented for a grid generation program that produces 3-D meshes for advanced turboprops. The code uses both algebraic and elliptic partial differential equation methods to generate single rotation and counterrotation, H or C type meshes for the z - r planes and H type for the z - theta planes. The code allows easy specification of geometrical constraints (such as blade angle, location of bounding surfaces, etc.) and mesh control parameters (point distribution near blades and nacelle, number of grid points desired, etc.), and it has good runtime diagnostics. An overview of the mesh generation procedure is provided, along with a sample input dataset with a detailed explanation of all inputs, and example meshes.
Neural network decoder for quantum error correcting codes
NASA Astrophysics Data System (ADS)
Krastanov, Stefan; Jiang, Liang
Artificial neural networks form a family of extremely powerful - albeit still poorly understood - tools used in anything from image and sound recognition through text generation to, in our case, decoding. We present a straightforward Recurrent Neural Network architecture capable of deducing the correcting procedure for a quantum error-correcting code from a set of repeated stabilizer measurements. We discuss the fault-tolerance of our scheme and the cost of training the neural network for a system of a realistic size. Such decoders are especially interesting when applied to codes, like the quantum LDPC codes, that lack known efficient decoding schemes.
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, L.C.; Deen, J.R.; Woodruff, W.L.
1995-02-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test Reactors (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure used to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code are compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly enriched heavy-water-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
A Coded Structured Light System Based on Primary Color Stripe Projection and Monochrome Imaging
Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano
2013-01-01
Coded Structured Light techniques represent one of the most attractive research areas within the field of optical metrology. The coding procedures are typically based on projecting either a single pattern or a temporal sequence of patterns to provide 3D surface data. In this context, multi-slit or stripe colored patterns may be used with the aim of reducing the number of projected images. However, color imaging sensors require the use of calibration procedures to address crosstalk effects between different channels and to reduce the chromatic aberrations. In this paper, a Coded Structured Light system has been developed by integrating a color stripe projector and a monochrome camera. A discrete coding method, which combines spatial and temporal information, is generated by sequentially projecting and acquiring a small set of fringe patterns. The method allows the concurrent measurement of geometrical and chromatic data by exploiting the benefits of using a monochrome camera. The proposed methodology has been validated by measuring nominal primitive geometries and free-form shapes. The experimental results have been compared with those obtained by using a time-multiplexing gray code strategy. PMID:24129018
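The time-multiplexing gray-code strategy used as the comparison baseline can be sketched directly: project one binary stripe image per bit, then invert the gray code at each pixel to recover its stripe index. Pattern width and bit depth here are arbitrary illustration values:

```python
def gray_code_patterns(n_bits, width):
    """Temporal gray-code stripe patterns, one binary row per bit.
    Column x is lit in pattern b iff bit b of gray(x) is set."""
    def gray(x):
        return x ^ (x >> 1)
    return [[(gray(x) >> b) & 1 for x in range(width)] for b in range(n_bits)]

def decode_column(bits):
    """Recover the stripe index from the per-pattern bits observed at a pixel."""
    g = sum(bit << b for b, bit in enumerate(bits))
    x = 0
    while g:              # inverse gray code: fold the bits back down
        x ^= g
        g >>= 1
    return x
```

Gray coding ensures adjacent stripes differ in exactly one bit, so a decoding error at a stripe boundary displaces the index by at most one column; the paper's discrete spatio-temporal coding trades some of this robustness for a smaller pattern set.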
Trombert-Paviot, B; Rodrigues, J M; Rogers, J E; Baud, R; van der Haring, E; Rassinoux, A M; Abrial, V; Clavel, L; Idir, H
2000-09-01
Generalised architecture for languages, encyclopedia and nomenclatures in medicine (GALEN) has developed a new generation of terminology tools based on a language-independent model describing the semantics and allowing computer processing, multiple reuses, and natural language understanding applications to facilitate the sharing and maintenance of consistent medical knowledge. During the European Union 4th Framework Programme project GALEN-IN-USE, and later within two contracts with the national health authorities, we applied the modelling and the tools to the development of a new multipurpose coding system for surgical procedures named CCAM in a minority-language country, France. On the one hand, we contributed to a language-independent knowledge repository and multilingual semantic dictionaries for multicultural Europe. On the other hand, we support the traditional process for creating a new coding system in medicine, which is highly labour-intensive, with artificial intelligence tools using a medically oriented recursive ontology and natural language processing. We used an integrated software package named CLAW (for classification workbench) to process French professional medical language rubrics produced by the national colleges of surgeons domain experts into intermediate dissections and then into the Grail reference ontology model representation. From this language-independent concept model representation, on the one hand, we generate with the LNAT natural language generator controlled French natural language to support the finalization of the linguistic labels (first generation) in relation to the meanings of the conceptual system structure. On the other hand, the Claw classification manager proves to be very powerful in retrieving the initial domain experts' rubric list with different categories of concepts (second generation) within a semantic structured representation (third generation), a bridge to the electronic patient record detailed terminology.
Thermal-hydraulic interfacing code modules for CANDU reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, W.S.; Gold, M.; Sills, H.
1997-07-01
The approach for CANDU reactor safety analysis in Ontario Hydro Nuclear (OHN) and Atomic Energy of Canada Limited (AECL) is presented. Reflecting the unique characteristics of CANDU reactors, the procedure of coupling the thermal-hydraulics, reactor physics and fuel channel/element codes in the safety analysis is described. The experience generated in the Canadian nuclear industry may be useful to other types of reactors in the areas of reactor safety analysis.
Development of the general interpolants method for the CYBER 200 series of supercomputers
NASA Technical Reports Server (NTRS)
Stalnaker, J. F.; Robinson, M. A.; Spradley, L. W.; Kurzius, S. C.; Thoenes, J.
1988-01-01
The General Interpolants Method (GIM) is a 3-D, time-dependent, hybrid procedure for generating numerical analogs of the conservation laws. This study is directed toward the development and application of the GIM computer code for fluid dynamic research applications as implemented for the Cyber 200 series of supercomputers. Elliptic and quasi-parabolic versions of the GIM code are discussed. Turbulence models, both algebraic and differential-equation based, were added to the basic viscous code. An equilibrium reacting chemistry model and an implicit finite difference scheme are also included.
NASA Technical Reports Server (NTRS)
Chima, R. V.; Strazisar, A. J.
1982-01-01
Two- and three-dimensional inviscid solutions for the flow in a transonic axial compressor rotor at design speed are compared with probe and laser anemometer measurements at near-stall and maximum-flow operating points. Experimental details of the laser anemometer system and computational details of the two-dimensional axisymmetric code and the three-dimensional Euler code are described. Comparisons are made between relative Mach number and flow angle contours, shock location, and shock strength. A procedure for using an efficient axisymmetric code to generate downstream pressure input for computationally expensive Euler codes is discussed. A film supplement shows the calculations of the two operating points with the time-marching Euler code.
The Use of a Code-generating System for the Derivation of the Equations for Wind Turbine Dynamics
NASA Astrophysics Data System (ADS)
Ganander, Hans
2003-10-01
For many reasons, the size of wind turbines on the rapidly growing wind energy market is increasing, and the relations between the aeroelastic properties of these new large turbines are changing. Modifications of turbine designs and control concepts are also influenced by the growing size. All these trends require the development of computer codes for design and certification. Moreover, there is a strong desire for design optimization procedures, which require fast codes. General codes, e.g. finite element codes, normally allow such modifications and improvements of existing wind turbine models relatively easily. However, the calculation times of such codes are unfavourably long, certainly for optimization use. The use of an automatic code-generating system is an alternative that addresses both key issues: the code itself and design optimization. This technique can be used for the rapid generation of codes for particular wind turbine simulation models. These ideas have been followed in the development of new versions of the wind turbine simulation code VIDYN. The equations of the simulation model were derived from the Lagrange equations using Mathematica®, which was directed to output the results in Fortran code format. In this way the simulation code is automatically adapted to the actual turbine model, in terms of subroutines containing the equations of motion, definitions of parameters and degrees of freedom. Since the start in 1997, these methods, constituting a systematic way of working, have been used to develop specific, efficient calculation codes. The experience with this technique has been very encouraging, inspiring the continued development of new versions of the simulation code as needs have arisen, and interest in design optimization is growing.
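The derive-then-emit workflow described above can be sketched with an open-source stack, using sympy in place of Mathematica; the point-mass pendulum Lagrangian here is an illustrative stand-in for an actual turbine degree of freedom, not VIDYN's model:

```python
import sympy as sp

t = sp.symbols('t')
m, l, g = sp.symbols('m l g', positive=True)
theta = sp.Function('theta')(t)

# Lagrangian of a point-mass pendulum (stand-in for one turbine DOF)
T = sp.Rational(1, 2) * m * (l * theta.diff(t))**2  # kinetic energy
V = -m * g * l * sp.cos(theta)                      # potential energy
L = T - V

# Euler-Lagrange equation: d/dt(dL/dq') - dL/dq = 0
eom = (L.diff(theta.diff(t)).diff(t) - L.diff(theta)).simplify()

# Solve for the angular acceleration ...
theta_dd = sp.solve(sp.Eq(eom, 0), theta.diff(t, 2))[0]

# ... and emit it as Fortran source, mirroring the Mathematica-to-Fortran step
fortran_src = sp.fcode(theta_dd.subs(theta, sp.Symbol('theta')),
                       assign_to='theta_dd', source_format='free')
print(fortran_src)
```

The emitted assignment (here `theta_dd = -g*sin(theta)/l`) is exactly the kind of generated subroutine body the abstract describes, regenerated automatically whenever the model changes.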
The Italian experience on T/H best estimate codes: Achievements and perspectives
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alemberti, A.; D'Auria, F.; Fiorino, E.
1997-07-01
Thermal-hydraulic system codes are complex tools developed to simulate power plant behavior during off-normal conditions. Among the objectives of code calculations, the evaluation of safety margins, operator training, and the optimization of plant design and of emergency operating procedures are those most considered in the field of nuclear safety. The first generation of codes was developed in the United States at the end of the 1960s. Since that time, research groups all over the world have started developing their own codes. At the beginning of the 1980s, second-generation codes were proposed; these differ from the first-generation codes in the number of balance equations solved (six instead of three) and in the sophistication of the constitutive models and of the adopted numerics. The capabilities of available computers have been fully exploited over the years. The authors then summarize some of the major steps in the process of developing, modifying, and advancing the capabilities of the codes. They touch on the fact that Italian, and for that matter non-American, researchers have not been intimately involved in much of this work. They then describe the application of these codes in Italy, even though no nuclear power plants are operating or under construction there at this time. Much of this effort is directed at the general question of plant safety in the face of transient-type events.
Munasinghe, A; Chang, D; Mamidanna, R; Middleton, S; Joy, M; Penninckx, F; Darzi, A; Livingston, E; Faiz, O
2014-07-01
Significant variation in colorectal surgery outcomes exists between different countries. Better understanding of the sources of variable outcomes using administrative data requires alignment of differing clinical coding systems. We aimed to map similar diagnoses and procedures across administrative coding systems used in different countries. Administrative data were collected in a central database as part of the Global Comparators (GC) Project. In order to unify these data, a systematic translation of diagnostic and procedural codes was undertaken. Codes for colorectal diagnoses, resections, operative complications and reoperative interventions were mapped across the respective national healthcare administrative coding systems. Discharge data from January 2006 to June 2011 for patients who had undergone colorectal surgical resections were analysed to generate risk-adjusted models for mortality, length of stay, readmissions and reoperations. In all, 52 544 case records were collated from 31 institutions in five countries. Mapping of all the coding systems was achieved so that diagnosis and procedures from the participant countries could be compared. Using the aligned coding systems to develop risk-adjusted models, the 30-day mortality rate for colorectal surgery was 3.95% (95% CI 0.86-7.54), the 30-day readmission rate was 11.05% (5.67-17.61), the 28-day reoperation rate was 6.13% (3.68-9.66) and the mean length of stay was 14 (7.65-46.76) days. The linkage of international hospital administrative data that we developed enabled comparison of documented surgical outcomes between countries. This methodology may facilitate international benchmarking. Colorectal Disease © 2014 The Association of Coloproctology of Great Britain and Ireland.
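The cross-system mapping described above can be sketched as a simple crosswalk table keyed by coding system and code; the codes and the shared concept label below are illustrative placeholders, not the GC Project's actual mapping:

```python
# Hypothetical crosswalk: (coding system, code) -> shared concept label.
# All entries are illustrative only, not a validated clinical mapping.
CROSSWALK = {
    ("OPCS-4", "H33"): "colorectal_resection",
    ("ICD-9-CM-Vol3", "45.79"): "colorectal_resection",
    ("CCAM", "HHFA001"): "colorectal_resection",
}

def shared_concept(system: str, code: str):
    """Return the unified concept for a national code, or None if unmapped."""
    return CROSSWALK.get((system, code))
```

Once every discharge record carries a shared concept label, cases from different countries can be pooled into a single risk-adjusted model, which is the benchmarking step the abstract reports.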
NASA Technical Reports Server (NTRS)
Nguyen, D. T.; Al-Nasra, M.; Zhang, Y.; Baddourah, M. A.; Agarwal, T. K.; Storaasli, O. O.; Carmona, E. A.
1991-01-01
Several parallel-vector computational improvements to the unconstrained optimization procedure are described which speed up the structural analysis-synthesis process. A fast parallel-vector Choleski-based equation solver, pvsolve, is incorporated into the well-known SAP-4 general-purpose finite-element code. The new code, denoted PV-SAP, is tested for static structural analysis. Initial results on a four processor CRAY 2 show that using pvsolve reduces the equation solution time by a factor of 14-16 over the original SAP-4 code. In addition, parallel-vector procedures for the Golden Block Search technique and the BFGS method are developed and tested for nonlinear unconstrained optimization. A parallel version of an iterative solver and the pvsolve direct solver are incorporated into the BFGS method. Preliminary results on nonlinear unconstrained optimization test problems, using pvsolve in the analysis, show excellent parallel-vector performance indicating that these parallel-vector algorithms can be used in a new generation of finite-element based structural design/analysis-synthesis codes.
ORNL Resolved Resonance Covariance Generation for ENDF/B-VII.1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, Luiz C.; Guber, Klaus H.; Wiarda, Dorothea
2012-12-01
Resonance-parameter covariance matrix (RPCM) evaluations in the resolved resonance region were performed at Oak Ridge National Laboratory (ORNL) for the chromium isotopes, titanium isotopes, 19F, 58Ni, 60Ni, 35Cl, 37Cl, 39K, 41K, 55Mn, 233U, 235U, 238U, and 239Pu using the computer code SAMMY. The retroactive approach of SAMMY was used to generate the RPCM for 233U. For 235U, the approach used for covariance generation was similar to the retroactive approach, with the distinction that real experimental data were used rather than data generated from the resonance parameters. The RPCMs for 238U and 239Pu were generated together with the resonance parameter evaluations. The RPCMs were then converted into ENDF format using the FILE32 representation. Alternatively, for computer storage reasons, FILE32 was converted into the FILE33 cross-section covariance matrix (CSCM) representation. Both representations were processed using the computer code PUFF-IV. This paper describes the procedures used to generate the RPCM and CSCM in the resonance region for ENDF/B-VII.1. The impact of data uncertainty on nuclear reactor benchmark calculations is also presented.
Integrated Graphics Operations and Analysis Lab Development of Advanced Computer Graphics Algorithms
NASA Technical Reports Server (NTRS)
Wheaton, Ira M.
2011-01-01
The focus of this project is to aid the IGOAL in researching and implementing algorithms for advanced computer graphics. First, this project focused on porting the current International Space Station (ISS) Xbox experience to the web. Previously, the ISS interior fly-around education and outreach experience ran only on an Xbox 360. One of the goals was to turn this experience into something that can be put on NASA's educational site for anyone to access. The current code works in the Unity game engine, which has cross-platform capability but is not 100% compatible. The tasks for an intern completing this portion consisted of gaining familiarity with Unity and the current ISS Xbox code, porting the Xbox code to the web as is, and modifying the code to work well as a web application. In addition, a procedurally generated cloud algorithm will be developed. Currently, the clouds used in AGEA animations and the Xbox experiences are a texture map. The goal is to create a procedurally generated cloud algorithm that provides dynamically generated clouds for both AGEA animations and the Xbox experiences. This task consists of gaining familiarity with AGEA and the plug-in interface, developing the algorithm, creating an AGEA plug-in to implement the algorithm inside AGEA, and creating a Unity script to implement the algorithm for the Xbox. This portion of the project could not be completed in the time frame of the internship; however, the IGOAL will continue to work on it in the future.
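The abstract does not specify which cloud algorithm the IGOAL chose; a common approach for procedurally generated clouds is fractal value noise (fractal Brownian motion, fBm), sketched here in plain Python as one plausible realization:

```python
import math

def _hash(ix, iy, seed=0):
    # Integer hash mapping lattice coordinates to a pseudo-random value in [0, 1]
    h = (ix * 374761393 + iy * 668265263 + seed * 982451653) & 0xFFFFFFFF
    h = ((h ^ (h >> 13)) * 1274126177) & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0xFFFFFFFF

def _smooth(t):
    return t * t * (3 - 2 * t)  # smoothstep easing for C1-continuous blending

def value_noise(x, y, seed=0):
    # Bilinearly interpolate hashed lattice values with smoothstep weights
    ix, iy = math.floor(x), math.floor(y)
    sx, sy = _smooth(x - ix), _smooth(y - iy)
    n00, n10 = _hash(ix, iy, seed), _hash(ix + 1, iy, seed)
    n01, n11 = _hash(ix, iy + 1, seed), _hash(ix + 1, iy + 1, seed)
    top = n00 + sx * (n10 - n00)
    bottom = n01 + sx * (n11 - n01)
    return top + sy * (bottom - top)

def cloud_density(x, y, octaves=4):
    # fBm: sum octaves of noise at doubling frequency and halving amplitude
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for octave in range(octaves):
        total += amplitude * value_noise(x * frequency, y * frequency, seed=octave)
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total / norm  # normalized density in [0, 1]
```

Evaluating `cloud_density` over a grid yields a dynamic, tileable cloud field; because the function is deterministic in its inputs, the same field can be regenerated identically inside both an AGEA plug-in and a Unity script.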
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Andrew; Haves, Philip; Jegi, Subhash
This paper describes a software system for automatically generating a reference (baseline) building energy model from the proposed (as-designed) building energy model. This system is built using the OpenStudio Software Development Kit (SDK) and is designed to operate on building energy models in the OpenStudio file format.
2012-01-01
Background Procedures documented by general practitioners in primary care have not been studied in relation to procedure coding systems. We aimed to describe procedures documented by Swedish general practitioners in electronic patient records and to compare them to the Swedish Classification of Health Interventions (KVÅ) and SNOMED CT. Methods Procedures in 200 record entries were identified, coded, assessed in relation to two procedure coding systems and analysed. Results 417 procedures found in the 200 electronic patient record entries were coded with 36 different Classification of Health Interventions categories and 148 different SNOMED CT concepts. 22.8% of the procedures could not be coded with any Classification of Health Interventions category and 4.3% could not be coded with any SNOMED CT concept. 206 procedure-concept/category pairs were assessed as a complete match in SNOMED CT compared to 10 in the Classification of Health Interventions. Conclusions Procedures documented by general practitioners were present in nearly all electronic patient record entries. Almost all procedures could be coded using SNOMED CT. Classification of Health Interventions covered the procedures to a lesser extent and with a much lower degree of concordance. SNOMED CT is a more flexible terminology system that can be used for different purposes for procedure coding in primary care. PMID:22230095
CFD analyses for advanced pump design
NASA Technical Reports Server (NTRS)
Dejong, F. J.; Choi, S.-K.; Govindan, T. R.
1994-01-01
As one of the activities of the NASA/MSFC Pump Stage Technology Team, the present effort was focused on using CFD in the design and analysis of high-performance rocket engine pumps. Under this effort, a three-dimensional Navier-Stokes code was used for various inducer and impeller flow field calculations. An existing algebraic grid generation procedure was extended to allow for nonzero blade thickness, splitter blades, and hub/shroud cavities upstream or downstream of the (main) blades. This resulted in a fast, robust inducer/impeller geometry/grid generation package. Problems associated with running a compressible flow code to simulate an incompressible flow were resolved; related aspects of the numerical algorithm (viz., the matrix preconditioning, the artificial dissipation, and the treatment of low Mach number flows) were addressed. As shown by the calculations performed under the present effort, the resulting code, in conjunction with the grid generation package, is an effective tool for the rapid solution of three-dimensional viscous inducer and impeller flows.
Missed surgical intensive care unit billing: potential financial impact of 24/7 faculty presence.
Hendershot, Kimberly M; Bollins, John P; Armen, Scott B; Thomas, Yalaunda M; Steinberg, Steven M; Cook, Charles H
2009-07-01
To efficiently capture evaluation and management (E&M) and procedural billing in our surgical intensive care unit (SICU), we have developed an electronic billing system that links to the electronic medical record (EMR). In this system, only notes electronically signed and coded by an attending generate billing charges. We hypothesized that capture of missed billing during nighttime and weekends might be sufficient to subsidize 24/7 in-house attending coverage. A retrospective review of the EMRs of all SICU patients during a 2-month period was performed. Note type, date, time, attending signature, and coding were analyzed. Notes without an attending signature, diagnosis, or current procedural terminology (CPT) code were considered incomplete and identified as "missed billing." Four hundred and forty-three patients had 465 admissions generating 2,896 notes. Overall, 76% of notes were signed and coded by an attending and billed. Incomplete (not billed) notes represented an overall missed billing opportunity of $159,138 for the 2-month period (approximately $954,000 annually). Unbilled E&M encounters during weekdays totaled $54,758, whereas unbilled E&M and procedures from weeknights and weekends totaled $88,408 ($44,566 and $43,842, respectively). Missed billing after hours thus represents approximately $530K annually, extrapolating to approximately $220K in collections from our payer mix. Surprisingly, missed E&M and procedural billing during weekdays totaled $70,730 (approximately $425K billing, approximately $170K collections annually), and typically represented patients seen but transferred from the SICU before attending documentation was completed. Capture of nighttime and weekend ICU collections alone may be insufficient to add faculty or incentivize in-house coverage, but could certainly complement other in-house derived revenues to such ends. In addition, missed daytime billing in busy modern ICUs can be substantial, and use of an EMR to identify missed billing opportunities can help create solutions to recover these revenues.
A procedure for automating CFD simulations of an inlet-bleed problem
NASA Technical Reports Server (NTRS)
Chyu, Wei J.; Rimlinger, Mark J.; Shih, Tom I.-P.
1995-01-01
A procedure was developed to improve the turn-around time for computational fluid dynamics (CFD) simulations of an inlet-bleed problem involving oblique shock-wave/boundary-layer interactions on a flat plate with bleed into a plenum through one or more circular holes. This procedure is embodied in a preprocessor called AUTOMAT. With AUTOMAT, once data for the geometry and flow conditions have been specified (either interactively or via a namelist), it will automatically generate all input files needed to perform a three-dimensional Navier-Stokes simulation of the prescribed inlet-bleed problem by using the PEGASUS and OVERFLOW codes. The input files automatically generated by AUTOMAT include those for the grid system and those for the initial and boundary conditions. The grid systems automatically generated by AUTOMAT are multi-block structured grids of the overlapping type. Results obtained by using AUTOMAT are presented to illustrate its capability.
Nonlinear dynamic simulation of single- and multi-spool core engines
NASA Technical Reports Server (NTRS)
Schobeiri, T.; Lippke, C.; Abouelkheir, M.
1993-01-01
In this paper, a new computational method for accurate simulation of the nonlinear dynamic behavior of single- and multi-spool core engines, turbofan engines, and power generation gas turbine engines is presented. In order to perform the simulation, a modularly structured computer code has been developed which includes individual mathematical modules representing various engine components. The generic structure of the code enables the dynamic simulation of arbitrary engine configurations, ranging from single-spool thrust generation to multi-spool thrust/power generation engines, under adverse dynamic operating conditions. For precise simulation of turbine and compressor components, row-by-row calculation procedures were implemented that account for the specific turbine and compressor cascade and blade geometry and characteristics. The dynamic behavior of the subject engine is calculated by solving a number of systems of partial differential equations, which describe the unsteady behavior of the individual components. In order to ensure the capability, accuracy, robustness, and reliability of the code, comprehensive critical performance assessment and validation tests were performed. As representatives, three different transient cases with single- and multi-spool thrust and power generation engines were simulated. The transient cases range from operation with a prescribed fuel schedule, to extreme load changes, to generator and turbine shutdown.
Generation of signature databases with fast codes
NASA Astrophysics Data System (ADS)
Bradford, Robert A.; Woodling, Arthur E.; Brazzell, James S.
1990-09-01
Using the FASTSIG signature code to generate optical signature databases for the Ground-based Surveillance and Tracking System (GSTS) Program has improved the efficiency of the database generation process. The goal of the current GSTS database is to provide standardized, threat-representative target signatures that can easily be used for acquisition and track studies, discrimination algorithm development, and system simulations. Large databases, with as many as eight interpolation parameters, are required to maintain the fidelity demanded by discrimination and to generalize their application to other strategic systems. As the need increases for quick availability of long-wave infrared (LWIR) target signatures for an evolving design-to-threat, FASTSIG has become a database generation alternative to the industry-standard Optical Signatures Code (OSC). FASTSIG, developed in 1985 to meet the unique strategic-systems demands imposed by the discrimination function, has the significant advantage of running faster than the OSC, typically requiring two percent of the CPU time. It uses analytical approximations to model axisymmetric targets with the fidelity required for discrimination analysis. Access to the signature database is accomplished through the waveband integration and interpolation software INTEG and SIGNAT. This paper gives details of this procedure, as well as sample interpolated signatures, and also covers sample verification by comparison to the OSC, in order to establish the fidelity of the FASTSIG-generated database.
Experimental implementation of the Bacon-Shor code with 10 entangled photons
NASA Astrophysics Data System (ADS)
Gimeno-Segovia, Mercedes; Sanders, Barry C.
The number of qubits that can be effectively controlled in quantum experiments is growing, reaching a regime where small quantum error-correcting codes can be tested. The Bacon-Shor code is a simple quantum code that protects against the effect of an arbitrary single-qubit error. In this work, we propose an experimental implementation of said code in a post-selected linear optical setup, similar to the recently reported 10-photon GHZ generation experiment. In the procedure we propose, an arbitrary state is encoded into the protected Shor code subspace, and after undergoing a controlled single-qubit error, is successfully decoded. BCS appreciates financial support from Alberta Innovates, NSERC, China's 1000 Talent Plan and the Institute for Quantum Information and Matter, which is an NSF Physics Frontiers Center(NSF Grant PHY-1125565) with support of the Moore Foundation(GBMF-2644).
NASA Technical Reports Server (NTRS)
Wey, Thomas; Liu, Nan-Suey
2015-01-01
This paper summarizes the procedures of (1) generating control volumes anchored at the nodes of a mesh; and (2) generating staggered control volumes via mesh reconstructions, in terms of either mesh realignment or mesh refinement, as well as presents sample results from their applications to the numerical solution of a single-element LDI combustor using a releasable edition of the National Combustion Code (NCC).
Development of Yield and Tensile Strength Design Curves for Alloy 617
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nancy Lybeck; T. -L. Sham
2013-10-01
The U.S. Department of Energy Very High Temperature Reactor Program is acquiring data in preparation for developing an Alloy 617 Code Case for inclusion in the nuclear section of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel (B&PV) Code. A draft code case was previously developed, but the effort was suspended before acceptance by ASME. As part of the draft code case effort, a database was compiled of yield and tensile strength data from tests performed in air. Yield strength and tensile strength at temperature are used to set the time-independent allowable stress for construction materials in B&PV Code, Section III, Subsection NH. The yield and tensile strength data used for the draft code case have been augmented with additional data generated by Idaho National Laboratory and Oak Ridge National Laboratory in the U.S. and by CEA in France. The standard ASME Section II procedure for generating yield and tensile strength at temperature is presented, along with alternate methods that accommodate the change in temperature trends seen at high temperatures, resulting in a more consistent design margin over the temperature range of interest.
Improving accuracy of clinical coding in surgery: collaboration is key.
Heywood, Nick A; Gill, Michael D; Charlwood, Natasha; Brindle, Rachel; Kirwan, Cliona C
2016-08-01
Clinical coding data provide the basis for Hospital Episode Statistics and Healthcare Resource Group codes. High accuracy of this information is required for payment by results, allocation of health and research resources, and public health data and planning. We sought to identify the level of accuracy of clinical coding in general surgical admissions across hospitals in the Northwest of England. Clinical coding departments identified a total of 208 emergency general surgical patients discharged between 1st March and 15th August 2013 from seven hospital trusts (median = 20, range = 16-60). Blinded re-coding was performed by a senior clinical coder and clinician, with results compared with the original coding outcome. Recorded codes were generated from OPCS-4 & ICD-10. Of all cases, 194 of 208 (93.3%) had at least one coding error and 9 of 208 (4.3%) had errors in both primary diagnosis and primary procedure. Errors were found in 64 of 208 (30.8%) of primary diagnoses and 30 of 137 (21.9%) of primary procedure codes. Median tariff using original codes was £1411.50 (range, £409-9138). Re-calculation using updated clinical codes showed a median tariff of £1387.50, P = 0.997 (range, £406-10,102). The most frequent reasons for incorrect coding were "coder error" and a requirement for "clinical interpretation of notes". Errors in clinical coding are multifactorial and have significant impact on primary diagnosis, potentially affecting the accuracy of Hospital Episode Statistics data and in turn the allocation of health care resources and public health planning. As we move toward surgeon specific outcomes, surgeons should increase collaboration with coding departments to ensure the system is robust. Copyright © 2016 Elsevier Inc. All rights reserved.
Comparison of procedure coding systems for level 1 and 2 hospitals in South Africa.
Montewa, Lebogang; Hanmer, Lyn; Reagon, Gavin
2013-01-01
The ability of three procedure coding systems to reflect the procedure concepts extracted from patient records from six hospitals was compared, in order to inform decision making about a procedure coding standard for South Africa. A convenience sample of 126 procedure concepts was extracted from patient records at three level 1 hospitals and three level 2 hospitals. Each procedure concept was coded using ICPC-2, ICD-9-CM, and CCSA-2001. The extent to which each code assigned actually reflected the procedure concept was evaluated (between 'no match' and 'complete match'). For the study sample, CCSA-2001 was found to reflect the procedure concepts most completely, followed by ICD-9-CM and then ICPC-2. In practice, decision making about procedure coding standards would depend on multiple factors in addition to coding accuracy.
Statistical evaluation of PACSTAT random number generation capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepel, G.F.; Toland, M.R.; Harty, H.
1988-05-01
This report summarizes the work performed in verifying the general-purpose Monte Carlo driver program PACSTAT. The main objective of the work was to verify the performance of PACSTAT's random number generation capabilities. Secondary objectives were to document (using controlled configuration management procedures) changes made in PACSTAT at Pacific Northwest Laboratory, and to assure that PACSTAT input and output files satisfy quality assurance traceability constraints. Upon receipt of the PRIME version of the PACSTAT code from the Basalt Waste Isolation Project, Pacific Northwest Laboratory staff converted the code to run on Digital Equipment Corporation (DEC) VAXs. The modifications to PACSTAT were implemented using the WITNESS configuration management system, with the modifications themselves intended to make the code as portable as possible. Certain modifications were made to make the PACSTAT input and output files conform to quality assurance traceability constraints. 10 refs., 17 figs., 6 tabs.
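A standard building block behind this kind of RNG verification is a chi-square test of uniformity; the sketch below is a minimal illustration of the idea, not PACSTAT's actual test suite:

```python
import random

def chi_square_uniformity(samples, bins=10):
    """Pearson chi-square statistic for samples expected to be uniform on [0, 1)."""
    counts = [0] * bins
    for u in samples:
        counts[min(int(u * bins), bins - 1)] += 1  # clamp guards u == 1.0
    expected = len(samples) / bins
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(2024)
stat = chi_square_uniformity([rng.random() for _ in range(10_000)], bins=10)
# A healthy generator gives a statistic near bins - 1 = 9 degrees of freedom;
# values far above the chi-square critical value flag non-uniformity.
```

A degenerate "generator" that always returns the same value makes the failure mode obvious: all the mass lands in one bin and the statistic explodes.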
Coding for urologic office procedures.
Dowling, Robert A; Painter, Mark
2013-11-01
This article summarizes current best practices for documenting, coding, and billing common office-based urologic procedures. Topics covered include general principles, basic and advanced urologic coding, creation of medical records that support compliant coding practices, bundled codes and unbundling, global periods, modifiers for procedure codes, when to bill for evaluation and management services during the same visit, coding for supplies, and laboratory and radiology procedures pertinent to urology practice. Detailed information is included for the most common urology office procedures, and suggested resources and references are provided. This information is of value to physicians, office managers, and their coding staff. Copyright © 2013 Elsevier Inc. All rights reserved.
Ready for "Code Red"? Pre-Plan for Safety
ERIC Educational Resources Information Center
Baker, Davis E.
2006-01-01
In this article, the author, a principal of Buckeye Valley High School, Delaware, Ohio, focuses on how to generate a building readiness plan. He suggests that school administrators should have a readily available notebook of emergency response procedures to ensure students' safety. Among other things, he recommends creation of a building…
Stey, Anne M; Ko, Clifford Y; Hall, Bruce Lee; Louie, Rachel; Lawson, Elise H; Gibbons, Melinda M; Zingmond, David S; Russell, Marcia M
2014-08-01
Identifying iatrogenic injuries using existing data sources is important for improved transparency in the occurrence of intraoperative events. There is evidence that procedure codes are reliably recorded in claims data. The objective of this study was to assess whether concurrent splenic procedure codes in patients undergoing colectomy procedures are reliably coded in claims data as compared with clinical registry data. Patients who underwent colectomy procedures in the absence of neoplastic diagnosis codes were identified from American College of Surgeons (ACS) NSQIP data linked with Medicare inpatient claims data file (2005 to 2008). A κ statistic was used to assess coding concordance between ACS NSQIP and Medicare inpatient claims, with ACS NSQIP serving as the reference standard. A total of 11,367 colectomy patients were identified from 212 hospitals. There were 114 patients (1%) who had a concurrent splenic procedure code recorded in either ACS NSQIP or Medicare inpatient claims. There were 7 patients who had a splenic injury diagnosis code recorded in either data source. Agreement of splenic procedure codes between the data sources was substantial (κ statistic 0.72; 95% CI, 0.64-0.79). Medicare inpatient claims identified 81% of the splenic procedure codes recorded in ACS NSQIP, and 99% of the patients without a splenic procedure code. It is feasible to use Medicare claims data to identify splenic injuries occurring during colectomy procedures, as claims data have moderate sensitivity and excellent specificity for capturing concurrent splenic procedure codes compared with ACS NSQIP. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
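The agreement measure used above is Cohen's kappa; a minimal implementation for two parallel label sequences (e.g. splenic-procedure code present or absent in each data source) looks like this:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two parallel label sequences.

    Assumes chance agreement p_e < 1 (the raters are not both constant).
    """
    assert len(rater_a) == len(rater_b) and len(rater_a) > 0
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of cases where both sources agree
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each source's marginal label frequencies
    p_e = sum((rater_a.count(label) / n) * (rater_b.count(label) / n)
              for label in labels)
    return (p_o - p_e) / (1 - p_e)
```

Perfect concordance yields kappa = 1, agreement no better than chance yields kappa = 0, and a value around 0.72, as reported above, falls in the range conventionally called substantial agreement.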
Exploiting loop level parallelism in nonprocedural dataflow programs
NASA Technical Reports Server (NTRS)
Gokhale, Maya B.
1987-01-01
This paper discusses how loop-level parallelism is detected in a nonprocedural dataflow program, and how a procedural program with concurrent loops is scheduled. Also discussed is a program restructuring technique which may be applied to recursive equations so that concurrent loops may be generated for a seemingly iterative computation. A compiler which generates C code for the language has been implemented. The scheduling component of the compiler and the restructuring transformation are described.
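The core idea, that loop iterations with no cross-iteration dependences may execute concurrently, can be shown with a small sketch; the paper's compiler emits C, so Python's executor here is purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # The loop body reads only its own index: no cross-iteration
    # dependence, so iterations may run in any order or in parallel.
    return i * i

# Sequential schedule of the loop
sequential = [body(i) for i in range(8)]

# The same loop scheduled concurrently; map preserves result order
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(body, range(8)))

assert sequential == parallel
```

A recurrence such as `a[i] = a[i-1] + x[i]` would not qualify as written, which is exactly the case the restructuring transformation targets: rewriting the recursion so that independent, concurrently schedulable loops emerge.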
Dynamical generation of noiseless quantum subsystems
Viola; Knill; Lloyd
2000-10-16
We combine dynamical decoupling and universal control methods for open quantum systems with coding procedures. By exploiting a general algebraic approach, we show how appropriate encodings of quantum states result in obtaining universal control over dynamically generated noise-protected subsystems with limited control resources. In particular, we provide a constructive scheme based on two-body Hamiltonians for performing universal quantum computation over large noiseless spaces which can be engineered in the presence of arbitrary linear quantum noise.
Coded excitation ultrasonic needle tracking: An in vivo study.
Xia, Wenfeng; Ginsberg, Yuval; West, Simeon J; Nikitichev, Daniil I; Ourselin, Sebastien; David, Anna L; Desjardins, Adrien E
2016-07-01
Accurate and efficient guidance of medical devices to procedural targets lies at the heart of interventional procedures. Ultrasound imaging is commonly used for device guidance, but determining the location of the device tip can be challenging. Various methods have been proposed to track medical devices during ultrasound-guided procedures, but widespread clinical adoption has remained elusive. With ultrasonic tracking, the location of a medical device is determined by ultrasonic communication between the ultrasound imaging probe and a transducer integrated into the medical device. The signal-to-noise ratio (SNR) of the transducer data is an important determinant of the depth in tissue at which tracking can be performed. In this paper, the authors present a new generation of ultrasonic tracking in which coded excitation is used to improve the SNR without spatial averaging. A fiber optic hydrophone was integrated into the cannula of a 20 gauge insertion needle. This transducer received transmissions from the ultrasound imaging probe, and the data were processed to obtain a tracking image of the needle tip. Excitation using Barker or Golay codes was performed to improve the SNR, and conventional bipolar excitation was performed for comparison. The performance of the coded excitation ultrasonic tracking system was evaluated in an in vivo ovine model with insertions to the brachial plexus and the uterine cavity. Coded excitation significantly increased the SNRs of the tracking images, as compared with bipolar excitation. During an insertion to the brachial plexus, the SNR was increased by factors of 3.5 for Barker coding and 7.1 for Golay coding. During insertions into the uterine cavity, these factors ranged from 2.9 to 4.2 for Barker coding and 5.4 to 8.5 for Golay coding. The maximum SNR was 670, which was obtained with Golay coding during needle withdrawal from the brachial plexus. 
Range sidelobe artifacts were observed in tracking images obtained with Barker coded excitation, and they were visually absent with Golay coded excitation. The spatial tracking accuracy was unaffected by coded excitation. Coded excitation is a viable method for improving the SNR in ultrasonic tracking without compromising spatial accuracy. This method provided SNR increases that are consistent with theoretical expectations, even in the presence of physiological motion. With the ultrasonic tracking system in this study, the SNR increases will have direct clinical implications in a broad range of interventional procedures by improving visibility of medical devices at large depths.
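The sidelobe behavior reported above follows from the defining property of Golay complementary pairs: their aperiodic autocorrelations sum to zero at every nonzero lag, so range sidelobes cancel after pulse compression, whereas a single Barker code only keeps sidelobes low. A minimal sketch of that property (illustrative; the code length here is arbitrary and this is not the authors' tracking software):

```python
def acf(seq, lag):
    # Aperiodic autocorrelation of a +/-1 code at the given lag.
    return sum(seq[i] * seq[i + lag] for i in range(len(seq) - lag))

# A length-4 Golay complementary pair.
a = [1, 1, 1, -1]
b = [1, 1, -1, 1]

sidelobe_sums = [acf(a, lag) + acf(b, lag) for lag in range(1, 4)]  # all zero
main_lobe = acf(a, 0) + acf(b, 0)                                  # 2N = 8
```

The individual autocorrelations of `a` and `b` have nonzero sidelobes; only their sum cancels, which is why Golay tracking transmits both codes and sums the responses.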
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.
1987-01-01
Analysis results for multiple steam generator blowdown caused by an auxiliary feedwater steam-line break performed with the RETRAN-02 MOD 003 computer code are presented to demonstrate the capabilities of the RETRAN code to predict system transient response for verifying changes in operational procedures and supporting plant equipment modifications. A typical four-loop Westinghouse pressurized water reactor was modeled using best-estimate versus worst case licensing assumptions. This paper presents analyses performed to evaluate the necessity of implementing an auxiliary feedwater steam-line isolation modification. RETRAN transient analysis can be used to determine core cooling capability response, departure from nucleate boiling ratio (DNBR) status, and reactor trip signal actuation times.
An assessment of the adaptive unstructured tetrahedral grid, Euler Flow Solver Code FELISA
NASA Technical Reports Server (NTRS)
Djomehri, M. Jahed; Erickson, Larry L.
1994-01-01
A three-dimensional solution-adaptive Euler flow solver for unstructured tetrahedral meshes is assessed, and the accuracy and efficiency of the method for predicting sonic boom pressure signatures about simple generic models are demonstrated. Comparison of computational and wind tunnel data and enhancement of numerical solutions by means of grid adaptivity are discussed. The mesh generation is based on the advancing front technique. The FELISA code consists of two solvers, the Taylor-Galerkin and the Runge-Kutta-Galerkin schemes, both of which are spatially discretized by the usual Galerkin weighted residual finite-element methods but with different explicit time-marching schemes to steady state. The solution-adaptive grid procedure is based on either remeshing or mesh refinement techniques. An alternative geometry adaptive procedure is also incorporated.
Financial implications of nonoperative fracture care at an academic trauma center.
Appleton, Paul; Chacko, Aron; Rodriguez, Edward K
2012-11-01
To determine if nonoperative fracture Current Procedural Terminology codes generate a significant portion of annual revenues in an academic practice. Retrospective review of an orthopaedic trauma practice's billings during fiscal year 2008. An urban level-1 trauma center. Outpatient clinic, and all consults to the orthopaedic trauma service in the emergency room and hospital wards staffed by an attending traumatologist. An analysis was made of relative value units (RVUs) generated by operative and nonoperative care, separating the latter into clinic, consults, and closed (nonoperative) fracture treatment. A total of 19,815 RVUs were generated by the trauma service during the 2008 fiscal year. Emergency department and ward consults generated 2176 (11%) of RVUs, whereas the outpatient clinic generated an additional 1313 (7%) of RVUs. Nonoperative (closed) fracture care generated 2725 (14%) RVUs, whereas surgical procedures were responsible for the remaining 13,490 (68%) of RVUs. In terms of overall financial reimbursement, nonoperative management, consults, and office visits generated 31% of income for the trauma service. Although the largest financial contribution to a busy surgical practice is operative procedures, one must not overlook the important impact of nonoperative fracture care and consults. In our academic center, nearly one-third of all income was generated from nonsurgical procedures. In the current medical/financial climate, one must be diligent in optimizing the finances of trauma care to sustain an economically viable practice. Economic Level IV. See Instructions for Authors for a complete description of levels of evidence.
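The reported percentages are simple shares of the annual RVU total; a quick check of the arithmetic (figures taken from the abstract; rounding to whole percents reproduces the reported shares):

```python
rvus = {
    "consults": 2176,               # emergency department and ward consults
    "clinic": 1313,                 # outpatient clinic visits
    "closed_fracture_care": 2725,   # nonoperative (closed) fracture care
    "operative": 13490,             # surgical procedures
}
total = 19815  # total RVUs generated by the trauma service in FY2008

shares = {k: round(100 * v / total) for k, v in rvus.items()}
# shares -> consults 11%, clinic 7%, closed fracture care 14%, operative 68%
```

Consults, clinic, and closed fracture care together account for roughly a third of RVUs, consistent with the ~31% of income the authors attribute to nonoperative work.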
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 2 2013-10-01 2013-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 42 Public Health 2 2011-10-01 2011-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 2 2014-10-01 2014-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 42 Public Health 2 2012-10-01 2012-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 42 Public Health 2 2010-10-01 2010-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
Kindergarten students' explanations during science learning
NASA Astrophysics Data System (ADS)
Harris, Karleah
The study examines kindergarten students' explanations during science learning. The data on children's explanations are drawn from videotaped and transcribed discourse collected from four public kindergarten science classrooms engaged in a life science inquiry unit on the life cycle of the monarch butterfly. The inquiry unit was implemented as part of a larger intervention conducted under the Scientific Literacy Project or SLP (Mantzicopoulos, Patrick & Samarapungavan, 2005). The children's explanation data were coded and analyzed using quantitative content analysis procedures. The coding procedures involved initial "top down" explanation categories derived from the existing theoretical and empirical literature on scientific explanation and the nature of students' explanations, followed by an inductive or "bottom up" analysis that evaluated and refined the categorization scheme as needed. The analyses provide important descriptive data on the nature and frequency of children's explanations generated in classroom discourse during the inquiry unit. The study also examines how teacher discourse strategies during classroom science discourse are related to children's explanations. Teacher discourse strategies were coded and analyzed following the same procedures as the children's explanations. The results suggest that (a) kindergarten students are capable of generating a variety of explanations during inquiry-based science learning; (b) teachers use a variety of classroom discourse strategies to support children's explanations during inquiry-based science learning; and (c) the ratio of conceptual discourse (e.g., asking for or modeling explanations, asking for clarifications) to non-conceptual discourse (e.g., classroom management discourse) is related to the ratio of explanatory to non-explanatory discourse produced by children during inquiry-based science learning.
NASA Technical Reports Server (NTRS)
Kiris, Cetin
1995-01-01
Development of an incompressible Navier-Stokes solution procedure was performed for the analysis of liquid rocket engine pump components and mechanical heart assist devices. The solution procedure for the propulsion systems is applicable to incompressible Navier-Stokes flows in a steadily rotating frame of reference for any general complex configuration. The computer codes were tested on different complex configurations, such as liquid rocket engine inducers and impellers. As a spin-off technology from the turbopump component simulations, a flow analysis for an axial heart pump was conducted. The baseline Left Ventricular Assist Device (LVAD) design was improved by adding an inducer geometry adapted from the liquid rocket engine pump. The time-accurate mode of the incompressible Navier-Stokes code was validated with the flapping foil experiment by using different domain decomposition methods. In the flapping foil experiment, two upstream NACA 0025 foils perform high-frequency synchronized motion and generate unsteady flow conditions for a downstream larger stationary foil. Fairly good agreement was obtained between unsteady experimental data and numerical results from two different moving boundary procedures. The incompressible Navier-Stokes code (INS3D) has been extended for heat transfer applications. The temperature equation was written for both forced and natural convection phenomena. Flow in a square duct was used as a validation case for both natural and forced convection.
Algorithms for Zonal Methods and Development of Three Dimensional Mesh Generation Procedures.
1984-02-01
…a more complete set of equations is used, but their effect is imposed by means of a right-hand-side forcing function, not by means of a zonal boundary… Modifications of flow-simulation algorithms are discussed, with computational tests in two dimensions… The explicit finite-difference code of Magnus and … used to simplify the task of grid generation and to achieve computational efficiency, without an adverse effect on flow-field algorithms… More recently, …
NASA Astrophysics Data System (ADS)
Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui
2016-07-01
Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises the collection of drill core data, karst cave stochastic model generation, SLIDE simulation, and bisection method optimization. Borehole investigations are performed, and the statistical result shows that the length of the karst cave fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and acceptance-rejection method are used to reproduce the length of the karst cave and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst cave on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
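The two samplers named above are standard Monte Carlo techniques. A minimal sketch of both (illustrative stand-ins for the KCSMG code; the mean cave length and the carbonatite density used here are hypothetical, not the mine's fitted distributions):

```python
import math
import random

def karst_cave_length(mean_length, rng):
    # Inverse-transform sampling: if U ~ Uniform(0, 1), then
    # -mean * ln(1 - U) follows a negative exponential distribution.
    return -mean_length * math.log(1.0 - rng.random())

def carbonatite_length(pdf, x_max, pdf_max, rng):
    # Acceptance-rejection sampling for a density with no tractable inverse
    # CDF (a stand-in for the empirical carbonatite length distribution):
    # propose uniformly, accept with probability pdf(x) / pdf_max.
    while True:
        x = rng.uniform(0.0, x_max)
        if rng.uniform(0.0, pdf_max) <= pdf(x):
            return x

rng = random.Random(42)
caves = [karst_cave_length(2.0, rng) for _ in range(10000)]
# Hypothetical triangular density 2x on [0, 1] as the rejection target.
rocks = [carbonatite_length(lambda x: 2.0 * x, 1.0, 2.0, rng) for _ in range(5000)]
```

The sample means converge to the distribution means (2.0 for the exponential, 2/3 for the triangular density), which is the usual sanity check before feeding generated geometry into the slope model.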
Prediction task guided representation learning of medical codes in EHR.
Cui, Liwen; Xie, Xiaolei; Shen, Zuojun
2018-06-18
There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e., the generation of medical code vectors is independent of prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require many samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpora for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in the predictive capability of the generated medical code vectors, especially for limited training samples. Copyright © 2018. Published by Elsevier Inc.
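To make the supervised-versus-unsupervised distinction concrete, here is a deliberately simple task-guided representation: score each code by the outcome rate among patients who carry it. This is plain target encoding, not the paper's PTGHRA method, and the patient records below are hypothetical:

```python
from collections import defaultdict

def target_encode(records, outcomes):
    # Represent each medical code by the outcome rate among patients who
    # have it: a minimal task-guided alternative to unsupervised vectors.
    count = defaultdict(int)
    positive = defaultdict(int)
    for codes, y in zip(records, outcomes):
        for code in set(codes):
            count[code] += 1
            positive[code] += y
    return {code: positive[code] / count[code] for code in count}

# Hypothetical patients: lists of diagnosis codes plus a binary outcome.
records = [["I10", "E11"], ["I10"], ["E11", "N18"], ["N18"]]
outcomes = [1, 0, 1, 1]
vectors = target_encode(records, outcomes)
```

Unlike an unsupervised embedding, the same codes would receive different representations under a different prediction target, which is the point the abstract makes about task dependence.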
PROTEUS two-dimensional Navier-Stokes computer code, version 1.0. Volume 3: Programmer's reference
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Benson, Thomas J.; Suresh, Ambady
1990-01-01
A new computer code was developed to solve the 2-D or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The thin-layer or Euler equations may also be solved. Turbulence is modeled using an algebraic eddy viscosity model. The objective was to develop a code for aerospace applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The equations are written in nonorthogonal body-fitted coordinates, and solved by marching in time using a fully-coupled alternating-direction-implicit procedure with generalized first- or second-order time differencing. All terms are linearized using second-order Taylor series. The boundary conditions are treated implicitly, and may be steady, unsteady, or spatially periodic. Simple Cartesian or polar grids may be generated internally by the program. More complex geometries require an externally generated computational coordinate system. The documentation is divided into three volumes. Volume 3 is the Programmer's Reference, and describes the program structure, the FORTRAN variables stored in common blocks, and the details of each subprogram.
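The fully-coupled ADI procedure mentioned above advances the solution by sweeping one coordinate direction at a time, and each implicit sweep reduces to (block-)tridiagonal linear solves along grid lines. As a minimal sketch of that 1-D kernel, here is the scalar Thomas algorithm in Python; PROTEUS itself is FORTRAN and solves coupled block systems, so this is illustrative only:

```python
def thomas_solve(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm.

    sub[0] and sup[-1] are unused; all inputs are lists of length n.
    """
    n = len(diag)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):  # forward elimination
        denom = diag[i] - sub[i] * cp[i - 1]
        cp[i] = sup[i] / denom if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Small symmetric test system with exact solution [1, 1, 1].
x = thomas_solve([0.0, 1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0, 0.0], [3.0, 4.0, 3.0])
```

The O(n) cost of these line solves, versus O(n³) for a general dense solve, is what makes implicit time marching on large grids affordable.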
Medicare's "Global" terrorism: where is the pay for performance?
Reed, R Lawrence; Luchette, Fred A; Esposito, Thomas J; Pyrz, Karen; Gamelli, Richard L
2008-02-01
The Centers for Medicare and Medicaid Services (CMS) payment policies for surgical operations are based on a global package concept. CMS' physician fee schedule splits the global package into preoperative, intraoperative, and postoperative components of each procedure. We hypothesized that these global package component valuations were often lower than comparable evaluation and management (E&M) services and that billing for E&M services instead of the operation could often be more profitable. Our billing database and Trauma Registry were queried for the operative procedures and hospital lengths of stay for trauma patients during the past 5 years. Determinations of preoperative, intraoperative, and postoperative payments were calculated for 10-day and 90-day global packages, comparing them to CMS payments for comparable E&M codes. Of 90-day and 10-day Current Procedural Terminology codes, 88% and 100%, respectively, do not pay for the comprehensive history and physical that trauma patients usually receive, whereas 41% and 98%, respectively, do not even meet payment levels for a simple history and physical. Of 90-day global package procedures, 70% would have generated more revenue had comprehensive daily visits been billed instead of the operation ($3,057,500 vs. $1,658,058). For 10-day global package procedures, 56% would have generated more revenue with merely problem-focused daily visits instead of the operation ($161,855 vs. $156,318). Medicare's global surgical package underpays E&M services in trauma patients. In most cases, trauma surgeons would fare better by not billing for operations to receive higher reimbursement for E&M services that are considered "bundled" in the global package payment.
Wing Weight Optimization Under Aeroelastic Loads Subject to Stress Constraints
NASA Technical Reports Server (NTRS)
Kapania, Rakesh K.; Issac, J.; Macmurdy, D.; Guruswamy, Guru P.
1997-01-01
A minimum weight optimization of the wing under aeroelastic loads subject to stress constraints is carried out. The loads for the optimization are based on aeroelastic trim. The design variables are the thickness of the wing skins and planform variables. The composite plate structural model incorporates first-order shear deformation theory, the wing deflections are expressed using Chebyshev polynomials, and a Rayleigh-Ritz procedure is adopted for the structural formulation. The aerodynamic pressures provided by the aerodynamic code at a discrete number of grid points are represented as a bilinear distribution on the composite plate code to solve for the deflections and stresses in the wing. The lifting-surface aerodynamic code FAST is presently being used to generate the pressure distribution over the wing. The envisioned ENSAERO/Plate is an aeroelastic analysis code which combines ENSAERO version 3.0 (for analysis of wing-body configurations) with the composite plate code.
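The Rayleigh-Ritz formulation expands the deflection field in Chebyshev polynomials, which are cheap to evaluate through their three-term recurrence. A minimal sketch of the basis and a Ritz-style expansion (one-dimensional for clarity and illustrative only; the paper's structural model is a 2-D composite plate):

```python
def chebyshev_T(n, x):
    # Chebyshev polynomials of the first kind via the recurrence
    # T0 = 1, T1 = x, T_{n+1}(x) = 2x*T_n(x) - T_{n-1}(x).
    t_prev, t_curr = 1.0, x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
    return t_curr

def deflection(coeffs, x):
    # Ritz-style expansion: deflection approximated as a weighted sum of
    # Chebyshev basis functions; the coefficients are the Ritz unknowns.
    return sum(c * chebyshev_T(n, x) for n, c in enumerate(coeffs))
```

In the Ritz procedure, the coefficients of such an expansion become the unknowns of a small algebraic system, replacing a full finite-element discretization of the plate.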
Proteus three-dimensional Navier-Stokes computer code, version 1.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 3D was developed to solve the three-dimensional, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This User's Guide describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
NASA Technical Reports Server (NTRS)
Wey, Thomas; Liu, Nan-Suey
2013-01-01
This paper summarizes the procedures of generating a polyhedral mesh derived from hanging-node elements as well as presents sample results from its application to the numerical solution of a single element lean direct injection (LDI) combustor using an open-source version of the National Combustion Code (NCC).
A Study of Interior Wiring, Color Coding, and Switching Principles by Simulation and Practice.
ERIC Educational Resources Information Center
McCormick, B. G.; McCormick, Robert S.
After a preliminary introduction and a chapter on wiring and electricity safety procedures, this study text proceeds to offer a general coverage of single and polyphase alternating current electrical systems used to power factories, farms, small businesses, and homes. Electrical power, from its generation to its application, is discussed, with the…
Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres
2016-08-01
Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. 
This increase, however, was due to errors by institution coders, with six inappropriately used codes resulting in a higher overall reimbursement that was subsequently corrected. Assessment of physician documentation showed improvement, with decreased documentation errors at each period (11% vs 3.1% vs 0.6%; P = .02). Overall, between period 1 and period 3, multidisciplinary coding resulted in a significant increase in additional reimbursement ($17.63 per procedure; P = .004) and wRVUs (0.50 per procedure; P = .01). External validation at a second academic institution was performed to assess coding accuracy during period 1. Similar to institution 1, traditional coding revealed an 11% loss in reimbursement ($13,178 vs $14,630; P = .007) and a 12% loss in wRVU (293 vs 329; P = .01) compared with multidisciplinary coding. Physician involvement in the coding of endovascular procedures leads to improved procedural coding accuracy, increased wRVU assignments, and increased physician reimbursement. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Engineering calculations for communications satellite systems planning
NASA Technical Reports Server (NTRS)
Martin, C. H.; Gonsalvez, D. J.; Levis, C. A.; Wang, C. W.
1983-01-01
Progress is reported on a computer code to improve the efficiency of spectrum and orbit utilization for the Broadcasting Satellite Service in the 12 GHz band for Region 2. It implements a constrained gradient search procedure using an exponential objective function based on aggregate signal-to-noise ratio and an extended line search in the gradient direction. The procedure is tested against a manually generated initial scenario and appears to work satisfactorily. In this test it was assumed that alternate channels use orthogonal polarizations at any one satellite location.
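The search strategy described, gradient steps on an exponential objective with a line search along the gradient direction, can be sketched on a toy one-dimensional orbital-slot model. The SNR proxy and every parameter below are hypothetical stand-ins, not the report's engineering model:

```python
import math

def margin(positions, i):
    # Toy SNR proxy: a satellite's margin grows with its angular separation
    # from the nearest other satellite. Purely illustrative.
    return min(abs(positions[i] - positions[j])
               for j in range(len(positions)) if j != i)

def objective(positions):
    # Exponential objective: low-margin (interference-prone) slots dominate
    # the cost, mimicking an aggregate signal-to-noise criterion.
    return sum(math.exp(-margin(positions, i)) for i in range(len(positions)))

def gradient(positions, h=1e-6):
    # Central-difference gradient of the objective.
    g = []
    for i in range(len(positions)):
        up = list(positions); up[i] += h
        dn = list(positions); dn[i] -= h
        g.append((objective(up) - objective(dn)) / (2.0 * h))
    return g

def descend(positions, step=1.0, shrink=0.5, tries=30):
    # One gradient step with a backtracking line search along -gradient.
    g = gradient(positions)
    base = objective(positions)
    for _ in range(tries):
        cand = [p - step * gi for p, gi in zip(positions, g)]
        if objective(cand) < base:
            return cand
        step *= shrink
    return positions

slots = [0.0, 0.4, 0.5, 2.0]   # initial orbital slots, toy units
before = objective(slots)
for _ in range(100):
    slots = descend(slots)
after = objective(slots)       # smaller than `before`: slots spread apart
```

The backtracking loop is a crude form of the "extended line search" mentioned above: the step is repeatedly shrunk until the objective improves, guaranteeing monotone descent.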
Vo, Elaine; Davila, Jessica A; Hou, Jason; Hodge, Krystle; Li, Linda T; Suliburk, James W; Kao, Lillian S; Berger, David H; Liang, Mike K
2013-08-01
Large databases provide a wealth of information for researchers, but identifying patient cohorts often relies on the use of Current Procedural Terminology (CPT) codes. In particular, studies of stoma surgery have been limited by the accuracy of CPT codes in identifying and differentiating ileostomy procedures from colostomy procedures. It is important to make this distinction because the prevalence of complications associated with stoma formation and reversal differs dramatically between types of stoma. Natural language processing (NLP) is a process that allows text-based searching. The Automated Retrieval Console is NLP-based software that allows investigators to design and perform NLP-assisted document classification. In this study, we evaluated the role of CPT codes and NLP in differentiating ileostomy from colostomy procedures. Using CPT codes, we conducted a retrospective study that identified all patients undergoing a stoma-related procedure at a single institution between January 2005 and December 2011. All operative reports during this time were reviewed manually to abstract the following variables: formation or reversal and ileostomy or colostomy. Sensitivity and specificity for validation of the CPT codes against the master surgery schedule were calculated. Operative reports were evaluated by use of NLP to differentiate ileostomy- from colostomy-related procedures. Sensitivity and specificity for identifying patients with ileostomy or colostomy procedures were calculated for CPT codes and NLP for the entire cohort. CPT codes performed well in identifying stoma procedures (sensitivity 87.4%, specificity 97.5%). A total of 664 stoma procedures were identified by CPT codes between 2005 and 2011.
The CPT codes were adequate in identifying stoma formation (sensitivity 97.7%, specificity 72.4%) and stoma reversal (sensitivity 74.1%, specificity 98.7%), but they were inadequate in identifying ileostomy (sensitivity 35.0%, specificity 88.1%) and colostomy (sensitivity 75.2%, specificity 80.9%). NLP performed with greater sensitivity, specificity, and accuracy than CPT codes in identifying stoma procedures and stoma types. Major differences where NLP outperformed CPT included identifying ileostomy (specificity 95.8%, sensitivity 88.3%, and accuracy 91.5%) and colostomy (97.6%, 90.5%, and 92.8%, respectively). CPT codes can effectively identify patients who have had stoma procedures and are adequate in distinguishing between formation and reversal; however, CPT codes cannot differentiate ileostomy from colostomy. NLP can be used to differentiate between ileostomy- and colostomy-related procedures. The role of NLP in conjunction with electronic medical records in data retrieval warrants further investigation. Published by Mosby, Inc.
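The NLP approach succeeds here largely because operative reports name the stoma type explicitly, information that shared CPT codes discard. A deliberately simple keyword rule in that spirit (a toy stand-in for the Automated Retrieval Console, not its actual classification method):

```python
def classify_stoma_report(report_text):
    # Toy keyword rule for stoma-type classification from free text.
    # A real system learns from annotated documents; this illustrative
    # rule just inspects the report for the stoma type named by the surgeon.
    text = report_text.lower()
    if "ileostomy" in text:
        return "ileostomy"
    if "colostomy" in text:
        return "colostomy"
    return "unclassified"
```

Because a single CPT code can cover either stoma type, even a crude text rule like this can separate cases that the codes alone cannot.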
Code of Federal Regulations, 2010 CFR
2010-10-01
... laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding System Code...
Code of Federal Regulations, 2011 CFR
2011-10-01
... laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding System Code...
Good, Ryan J; Leroue, Matthew K; Czaja, Angela S
2018-06-07
Noninvasive positive pressure ventilation (NIPPV) is increasingly used in critically ill pediatric patients, despite limited data on safety and efficacy. Administrative data may be a good resource for observational studies. Therefore, we sought to assess the performance of the International Classification of Diseases, Ninth Revision procedure code for NIPPV. Patients admitted to the PICU requiring NIPPV or heated high-flow nasal cannula (HHFNC) over the 11-month study period were identified from the Virtual PICU System database. The gold standard was manual review of the electronic health record to verify the use of NIPPV or HHFNC among the cohort. The presence or absence of a NIPPV procedure code was determined by using administrative data. Test characteristics with 95% confidence intervals (CIs) were generated, comparing administrative data with the gold standard. Among the cohort (n = 562), the majority were younger than 5 years, and the most common primary diagnosis was bronchiolitis. Most (82%) required NIPPV, whereas 18% required only HHFNC. The NIPPV code had a sensitivity of 91.1% (95% CI: 88.2%-93.6%) and a specificity of 57.6% (95% CI: 47.2%-67.5%), with a positive likelihood ratio of 2.15 (95% CI: 1.70-2.71) and a negative likelihood ratio of 0.15 (95% CI: 0.11-0.22). Among our critically ill pediatric cohort, NIPPV procedure codes had high sensitivity but only moderate specificity. On the basis of our study results, there is a risk of misclassification, specifically failure to identify children who require NIPPV, when using administrative data to study the use of NIPPV in this population. Copyright © 2018 by the American Academy of Pediatrics.
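The reported likelihood ratios follow directly from sensitivity and specificity; recomputing them from the abstract's own figures:

```python
def likelihood_ratios(sensitivity, specificity):
    # LR+ = sens / (1 - spec); LR- = (1 - sens) / spec.
    positive = sensitivity / (1.0 - specificity)
    negative = (1.0 - sensitivity) / specificity
    return positive, negative

# Values reported above for the NIPPV procedure code.
lr_pos, lr_neg = likelihood_ratios(0.911, 0.576)
# round(lr_pos, 2) == 2.15 and round(lr_neg, 2) == 0.15,
# matching the published point estimates.
```

The modest LR+ reflects the code's weak specificity: a recorded NIPPV code only roughly doubles the odds that the child actually received NIPPV.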
Villalobos Gámez, Juan Luis; González Pérez, Cristina; García-Almeida, José Manuel; Martínez Reina, Alfonso; Del Río Mata, José; Márquez Fernández, Efrén; Rioja Vázquez, Rosalía; Barranco Pérez, Joaquín; Enguix Armada, Alfredo; Rodríguez García, Luis Miguel; Bernal Losada, Olga; Osorio Fernández, Diego; Mínguez Mañanes, Alfredo; Lara Ramos, Carlos; Dani, Laila; Vallejo Báez, Antonio; Martínez Martín, Jesús; Fernández Ovies, José Manuel; Tinahones Madueño, Francisco Javier; Fernández-Crehuet Navajas, Joaquín
2014-06-01
The high prevalence of disease-related hospital malnutrition justifies the need for screening tools and early detection in patients at risk of malnutrition, followed by an assessment targeted towards diagnosis and treatment. At the same time, there is clear undercoding of malnutrition diagnoses and of the procedures to correct it. Objectives: to describe the INFORNUT program/process and its development as an information system; to quantify performance in its different phases; to cite other tools used as a coding source; to calculate the coding rates for malnutrition diagnoses and related procedures; to show the relationship to Mean Stay, Mortality Rate and Urgent Readmission; and to quantify its impact on the hospital Complexity Index and its effect on the justification of Hospitalization Costs. The INFORNUT® process is based on an automated screening program for systematic detection and early identification of malnourished patients on hospital admission, as well as their assessment, diagnosis, documentation and reporting. Of total admissions with stays longer than three days incurred in 2008 and 2010, we recorded patients who underwent analytical screening with an alert for a medium or high risk of malnutrition, as well as the subgroup of patients in whom we were able to administer the complete INFORNUT® process, generating a report for each. Other documentary coding sources are cited. From the Minimum Basic Data Set, the codes defined in the SEDOM-SENPE consensus were analyzed. The data were processed with the Alcor-DRG program. Rates (in ‰ of discharges) of diagnoses of malnutrition, procedures, and procedure-related diagnoses were calculated for 2009 and 2010. These rates were compared with the mean rates in Andalusia. The contribution of these codes to the Complexity Index was estimated and, from the cost accounting data, the fraction of the hospitalization cost justified by this activity was estimated. RESULTS are summarized for both study years.
With respect to process performance, more than 3,600 patients per year (30% of admissions with a stay > 3 days) underwent analytical screening. Half of these patients were at medium or high risk, and a nutritional assessment using INFORNUT® was completed for 55% of them, generating approximately 1,000 reports per year. Our coding rates exceeded the mean rates in Andalusia, being 3.5 times higher for diagnoses (35‰), 2.5 times higher for procedures (50‰) and five times higher for procedure-related diagnoses in the same patient (25‰). The Mean Stay of patients coded with malnutrition at discharge was 31.7 days, compared to 9.5 days for the hospital overall. The Mortality Rate for the same patients (21.8%) was almost five times the hospital mean, and Urgent Readmissions (5.5%) were 1.9 times higher. The impact of this coding on the hospital Complexity Index was four hundredths (from 2.08 to 2.12 in 2009 and from 2.15 to 2.19 in 2010). This translates into a justified hospitalization cost of 2,000,000, five to six times the cost of artificial nutrition. The process facilitated access to the diagnosis of malnutrition and to understanding the risk of developing it, as well as to the prescription of procedures and/or supplements to correct it. Interdisciplinary team coordination, the participatory process and the tools used improved coding rates to results far above the Andalusian mean. These results help to adjust the hospital Complexity Index (Case Mix) upwards, as well as to explain hospitalization costs. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Study of an External Neutron Source for an Accelerator-Driven System using the PHITS Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugawara, Takanori; Iwasaki, Tomohiko; Chiba, Takashi
A code system for the Accelerator Driven System (ADS) has been under development for analyzing dynamic behaviors of a subcritical core coupled with an accelerator. This code system, named DSE (Dynamics calculation code system for a Subcritical system with an External neutron source), consists of an accelerator part and a reactor part. The accelerator part employs a database, calculated with PHITS, for investigating accelerator-related effects such as changes in beam energy, beam diameter, void generation, and target level. This analysis method may introduce some errors into dynamics calculations, since the neutron source data derived from the database carry errors from the fitting or interpolation procedures. In this study, the effects of various events are investigated to confirm that the method based on the database is appropriate.
Proteus two-dimensional Navier-Stokes computer code, version 2.0. Volume 2: User's guide
NASA Technical Reports Server (NTRS)
Towne, Charles E.; Schwab, John R.; Bui, Trong T.
1993-01-01
A computer code called Proteus 2D was developed to solve the two-dimensional planar or axisymmetric, Reynolds-averaged, unsteady compressible Navier-Stokes equations in strong conservation law form. The objective in this effort was to develop a code for aerospace propulsion applications that is easy to use and easy to modify. Code readability, modularity, and documentation were emphasized. The governing equations are solved in generalized nonorthogonal body-fitted coordinates, by marching in time using a fully-coupled ADI solution procedure. The boundary conditions are treated implicitly. All terms, including the diffusion terms, are linearized using second-order Taylor series expansions. Turbulence is modeled using either an algebraic or two-equation eddy viscosity model. The thin-layer or Euler equations may also be solved. The energy equation may be eliminated by the assumption of constant total enthalpy. Explicit and implicit artificial viscosity may be used. Several time step options are available for convergence acceleration. The documentation is divided into three volumes. This is the User's Guide, and describes the program's features, the input and output, the procedure for setting up initial conditions, the computer resource requirements, the diagnostic messages that may be generated, the job control language used to run the program, and several test cases.
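The ADI time-marching idea at the heart of such solvers is easiest to see on a model problem. The following is a sketch of one Peaceman-Rachford ADI step for the 2D heat equation, not the Proteus algorithm itself (which solves the fully coupled compressible Navier-Stokes system); each half-step is implicit in one coordinate direction, reducing the 2D implicit problem to tridiagonal solves:

```python
import numpy as np
from scipy.linalg import solve_banded

def adi_step(u, r):
    """One Peaceman-Rachford ADI step for u_t = u_xx + u_yy on the unit
    square with homogeneous Dirichlet boundaries; r = dt / (2*h*h).
    Each half-step is implicit in one direction and explicit in the other."""
    n = u.shape[0] - 2               # interior points per direction
    ab = np.zeros((3, n))            # banded form of (I - r * delta^2)
    ab[0, 1:] = -r                   # superdiagonal
    ab[1, :] = 1.0 + 2.0 * r         # diagonal
    ab[2, :-1] = -r                  # subdiagonal
    # half-step 1: implicit in x (axis 0), explicit in y (axis 1)
    rhs = u[1:-1, 1:-1] + r * (u[1:-1, :-2] - 2.0 * u[1:-1, 1:-1] + u[1:-1, 2:])
    half = u.copy()
    half[1:-1, 1:-1] = solve_banded((1, 1), ab, rhs)
    # half-step 2: implicit in y, explicit in x
    rhs = half[1:-1, 1:-1] + r * (half[:-2, 1:-1] - 2.0 * half[1:-1, 1:-1] + half[2:, 1:-1])
    u_new = u.copy()
    u_new[1:-1, 1:-1] = solve_banded((1, 1), ab, rhs.T).T
    return u_new

# Decay of the (1,1) Fourier mode, whose exact rate is exp(-2*pi**2 * t)
x = np.linspace(0.0, 1.0, 33)
u = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
dt, h = 1e-3, x[1] - x[0]
for _ in range(100):
    u = adi_step(u, dt / (2.0 * h * h))
```

After 100 steps (t = 0.1) the peak amplitude should be close to exp(-2*pi**2 * 0.1), illustrating why ADI is attractive for fully-coupled implicit marching.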
Cavallo, Jaime A.; Ousley, Jenny; Barrett, Christopher D.; Baalman, Sara; Ward, Kyle; Borchardt, Malgorzata; Thomas, J. Ross; Perotti, Gary; Frisella, Margaret M.; Matthews, Brent D.
2013-01-01
INTRODUCTION Expenditures on material supplies and medications constitute the greatest per capita costs for surgical missions. We hypothesized that supply acquisition at nonprofit organization (NPO) costs would lead to significant cost-savings compared to supply acquisition at US academic institution costs, from the provider perspective, for hernia repairs and minor procedures during a surgical mission in the Dominican Republic (DR). METHODS Items acquired for a surgical mission were uniquely QR-coded for accurate consumption accounting. Both NPO and US academic institution unit costs were associated with each item in an electronic inventory system. Medication doses were recorded, and QR codes for consumed items were scanned into a record for each sampled procedure. Mean material costs and cost savings ± SDs were calculated in US dollars for each procedure type. Cost-minimization analyses between the NPO and US academic institution platforms for each procedure type were performed using a two-tailed Wilcoxon matched-pairs test with α=0.05. Item utilization analyses generated lists of the most frequently used materials by procedure type. RESULTS The mean cost savings of supply acquisition at NPO costs for each procedure type were as follows: $482.86 ± $683.79 for unilateral inguinal hernia repair (IHR, n=13); $332.46 ± $184.09 for bilateral inguinal hernia repair (BIHR, n=3); $127.26 ± $13.18 for hydrocelectomy (HC, n=9); $232.92 ± $56.49 for femoral hernia repair (FHR, n=3); $120.90 ± $30.51 for umbilical hernia repair (UHR, n=8); $36.59 ± $17.76 for minor procedures (MP, n=26); and $120.66 ± $14.61 for pediatric inguinal hernia repair (PIHR, n=7). CONCLUSION Supply acquisition at NPO costs leads to significant cost-savings compared to supply acquisition at US academic institution costs from the provider perspective for IHR, HC, UHR, MP, and PIHR during a surgical mission to the DR.
Item utilization analysis can generate minimum-necessary material lists for each procedure type to reproduce cost-savings for subsequent missions. PMID:24162140
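The cost-minimization comparison described above amounts to a paired nonparametric test on per-procedure material costs. A sketch with SciPy, using fabricated illustrative numbers rather than the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical per-procedure material costs (USD) for the same 13 unilateral
# hernia repairs, priced at US academic-institution vs NPO acquisition cost
academic = rng.uniform(300.0, 900.0, size=13)
npo = academic * rng.uniform(0.2, 0.8, size=13)  # NPO platform is cheaper

stat, p = wilcoxon(academic, npo)  # two-tailed paired test, as in the study
savings = academic - npo
print(f"mean savings = ${savings.mean():.2f} +/- {savings.std(ddof=1):.2f}")
print(f"Wilcoxon statistic = {stat}, p = {p:.5f} (alpha = 0.05)")
```

Because every paired difference here is positive by construction, the exact two-sided p-value is far below 0.05.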
Development of code evaluation criteria for assessing predictive capability and performance
NASA Technical Reports Server (NTRS)
Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.
1993-01-01
Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ndong, Mamadou; Lauvergnat, David; Nauts, André
2013-11-28
We present new techniques for an automatic computation of the kinetic energy operator in analytical form. These techniques are based on the use of the polyspherical approach and are extended to take into account Cartesian coordinates as well. An automatic procedure is developed where analytical expressions are obtained by symbolic calculations. This procedure is a full generalization of the one presented in Ndong et al., [J. Chem. Phys. 136, 034107 (2012)]. The correctness of the new implementation is analyzed by comparison with results obtained from the TNUM program. We give several illustrations that could be useful for users of the code. In particular, we discuss some cyclic compounds which are important in photochemistry. Among others, we show that choosing a well-adapted parameterization and decomposition into subsystems can allow one to avoid singularities in the kinetic energy operator. We also discuss a relation between polyspherical and Z-matrix coordinates: this comparison could be helpful for building an interface between the new code and a quantum chemistry package.
GENIE - Generation of computational geometry-grids for internal-external flow configurations
NASA Technical Reports Server (NTRS)
Soni, B. K.
1988-01-01
Progress realized in the development of the master geometry-grid generation code GENIE is presented. The grid refinement process is enhanced by strategies that utilize Bézier curves/surfaces and splines along with the weighted transfinite interpolation technique, and by formulating a new forcing function for the elliptic solver based on the minimization of a non-orthogonality functional. A two-step grid adaptation procedure is developed by optimally blending adaptive weightings with the weighted transfinite interpolation technique. Examples of 2D and 3D grids are provided to illustrate the success of these methods.
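Transfinite interpolation builds interior grid points by blending the four boundary curves. A minimal unweighted (bilinearly blended) TFI sketch in Python, not GENIE's actual weighted scheme:

```python
import numpy as np

def tfi_grid(bottom, top, left, right):
    """Bilinearly blended transfinite interpolation from four boundary curves.

    bottom/top are (n, 2) arrays, left/right are (m, 2) arrays; the curves
    must share corner points (bottom[0] == left[0], etc.). Returns an
    (n, m, 2) array of grid point coordinates.
    """
    n, m = len(bottom), len(left)
    xi = np.linspace(0.0, 1.0, n)[:, None, None]   # (n, 1, 1)
    eta = np.linspace(0.0, 1.0, m)[None, :, None]  # (1, m, 1)
    b, t = bottom[:, None, :], top[:, None, :]
    l, r = left[None, :, :], right[None, :, :]
    # edge blending minus the doubly-counted corner contributions
    return ((1 - eta) * b + eta * t + (1 - xi) * l + xi * r
            - (1 - xi) * (1 - eta) * bottom[0] - xi * eta * top[-1]
            - xi * (1 - eta) * bottom[-1] - (1 - xi) * eta * top[0])

# Sanity check on the unit square: TFI must reproduce the uniform grid
s = np.linspace(0.0, 1.0, 11)
bottom = np.stack([s, np.zeros_like(s)], axis=1)
top = np.stack([s, np.ones_like(s)], axis=1)
left = np.stack([np.zeros_like(s), s], axis=1)
right = np.stack([np.ones_like(s), s], axis=1)
g = tfi_grid(bottom, top, left, right)
```

Weighted TFI replaces the linear blending functions xi and eta with stretched distributions to cluster points, which is the refinement the abstract refers to.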
Yang, Lei; Naylor, Gavin J P
2016-01-01
We determined the complete mitochondrial genome sequence (16,760 bp) of the peacock skate Pavoraja nitida using a long-PCR based next generation sequencing method. It has 13 protein-coding genes, 22 tRNA genes, 2 rRNA genes, and 1 control region in the typical vertebrate arrangement. Primers, protocols, and procedures used to obtain this mitogenome are provided. We anticipate that this approach will facilitate rapid collection of mitogenome sequences for studies on phylogenetic relationships, population genetics, and conservation of cartilaginous fishes.
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. Software development requires the generation of a considerable amount of code; the engineers who write it make mistakes, and producing a large body of code with high reliability takes considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces; these tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively, and automatically generated code is reputed to be as efficient when executed as the best manually generated code. The various code generators offer different levels of reliability checking: some check only the finished product, while others also allow checking of individual modules and combined sets of modules. Considering NASA's requirement for reliability, in-house verification of these claims is warranted.
Locality-preserving logical operators in topological stabilizer codes
NASA Astrophysics Data System (ADS)
Webster, Paul; Bartlett, Stephen D.
2018-01-01
Locality-preserving logical operators in topological codes are naturally fault tolerant, since they preserve the correctability of local errors. Using a correspondence between such operators and gapped domain walls, we describe a procedure for finding all locality-preserving logical operators admitted by a large and important class of topological stabilizer codes. In particular, we focus on those equivalent to a stack of a finite number of surface codes of any spatial dimension, where our procedure fully specifies the group of locality-preserving logical operators. We also present examples of how our procedure applies to codes with different boundary conditions, including color codes and toric codes, as well as more general codes such as Abelian quantum double models and codes with fermionic excitations in more than two dimensions.
Numerical methods for stiff systems of two-point boundary value problems
NASA Technical Reports Server (NTRS)
Flaherty, J. E.; Omalley, R. E., Jr.
1983-01-01
Numerical procedures are developed for constructing asymptotic solutions of certain nonlinear singularly perturbed vector two-point boundary value problems having boundary layers at one or both endpoints. The asymptotic approximations are generated numerically and can either be used as is or furnish a general-purpose two-point boundary value code with an initial approximation and the nonuniform computational mesh needed for such problems. The procedures are applied to a model problem that has multiple solutions and to problems describing the deformation of a thin nonlinear elastic beam resting on an elastic foundation.
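The role of a nonuniform mesh clustered in the boundary layers can be illustrated on a linear model problem with SciPy's general-purpose collocation solver (a modern stand-in for the two-point boundary value codes the abstract mentions, not the authors' procedure):

```python
import numpy as np
from scipy.integrate import solve_bvp

eps = 1e-3  # perturbation parameter; boundary layers of width ~sqrt(eps)

def rhs(x, y):
    # y[0] = u, y[1] = u'; the singularly perturbed problem eps*u'' = u
    return np.vstack([y[1], y[0] / eps])

def bc(ya, yb):
    return np.array([ya[0] - 1.0, yb[0] - 1.0])  # u(0) = u(1) = 1

# Nonuniform initial mesh, geometrically clustered toward both endpoints
a = np.geomspace(1e-4, 0.499, 40)
x = np.concatenate([[0.0], a, (1.0 - a)[::-1], [1.0]])
sol = solve_bvp(rhs, bc, x, np.zeros((2, x.size)), tol=1e-5, max_nodes=20000)

# Exact solution: layers at both ends, outer solution ~0
s = np.sqrt(eps)
exact = (np.sinh((1 - sol.x) / s) + np.sinh(sol.x / s)) / np.sinh(1 / s)
err = np.max(np.abs(sol.y[0] - exact))
print(sol.status, err)
```

With a uniform 80-point initial mesh the solver must insert far more nodes to resolve the layers; the clustered mesh gives it a usable starting point, which is exactly the service the asymptotic approximations above are meant to provide.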
Code of Federal Regulations, 2010 CFR
2010-04-01
... CONSUMPTION INFANT FORMULA QUALITY CONTROL PROCEDURES Quality Control Procedures for Assuring Nutrient Content of Infant Formulas § 106.90 Coding. The manufacturer shall code all infant formulas in conformity...
Reynolds-averaged Navier-Stokes based ice accretion for aircraft wings
NASA Astrophysics Data System (ADS)
Lashkajani, Kazem Hasanzadeh
This thesis addresses one of the current issues in flight safety: increasing icing simulation capabilities for the prediction of complex 2D and 3D glaze ice shapes over aircraft surfaces. During the 1980s and 1990s, the field of aero-icing was established to support design and certification of aircraft flying in icing conditions. The multidisciplinary technologies used in such codes were: aerodynamics (panel method), droplet trajectory calculations (Lagrangian framework), a thermodynamic module (Messinger model) and a geometry module (ice accretion). These are embedded in a quasi-steady module to simulate the time-dependent ice accretion process (multi-step procedure). The objective of the present research is to upgrade the aerodynamic module from a Laplace solver to a Reynolds-Averaged Navier-Stokes (RANS) equations solver. The advantages are many. First, the physical model allows accounting for viscous effects in the aerodynamic module. Second, the solution of the aero-icing module directly provides the means for characterizing the aerodynamic effects of icing, such as loss of lift and increased drag. Third, the use of a finite volume approach to solving the partial differential equations allows rigorous mesh and time convergence analysis. Finally, the approaches developed in 2D can be easily transposed to 3D problems. The research was performed in three major steps, each providing insights into the overall numerical approaches. The most important realization comes from the need to develop specific mesh generation algorithms to ensure feasible solutions in very complex multi-step aero-icing calculations. The contributions are presented in chronological order of their realization. First, a new framework for a RANS-based two-dimensional ice accretion code, CANICE2D-NS, is developed. A multi-block RANS code from the U. of Liverpool (named PMB) provides the aerodynamic field using the Spalart-Allmaras turbulence model.
The ICEM-CFD commercial tool is used for the iced airfoil remeshing and field smoothing. The new coupling is fully automated and capable of multi-step ice accretion simulations via a quasi-steady approach. In addition, the framework allows for flow analysis and aerodynamic performance prediction of the iced airfoils. The convergence of the quasi-steady algorithm is verified and identifies the need for an order of magnitude increase in the number of multi-time steps in icing simulations to achieve solver-independent solutions. Second, a multi-block Navier-Stokes code, NSMB, is coupled with the CANICE2D icing framework. Attention is paid to the implementation of the ONERA roughness model within the Spalart-Allmaras turbulence model, and to the convergence of the steady and quasi-steady iterative procedure. Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases. The results of CANICE2D-NS show good agreement with experimental data, both in terms of predicted ice shapes and in the aerodynamic analysis of predicted and experimental ice shapes. Third, an efficient single-block structured Navier-Stokes CFD code, NSCODE, is coupled with the CANICE2D-NS icing framework. Attention is paid to the implementation of the Boeing roughness model within the Spalart-Allmaras turbulence model, and to acceleration of the convergence of the steady and quasi-steady iterative procedures. Effects of uniform surface roughness in quasi-steady ice accretion simulation are analyzed through different validation test cases, including code-to-code comparisons with the same framework coupled with the NSMB Navier-Stokes solver. The efficiency of the J-multigrid approach to solve the flow equations on complex iced geometries is demonstrated.
Since it was noted in all these calculations that the ICEM-CFD grid generation package produced a number of issues, such as poor mesh quality and smoothing deficiencies (notably grid shocks), a fourth study proposes a new mesh generation algorithm. A PDE-based multi-block structured grid generation code, NSGRID, is developed for this purpose. The study includes the development of novel mesh generation algorithms over complex glaze ice shapes containing multi-curvature ice accretion geometries, such as single/double ice horns. The twofold approach tackles surface geometry discretization as well as field mesh generation. An adaptive curvilinear curvature control algorithm is constructed by solving a 1D elliptic PDE with periodic source terms. This method controls the arclength grid spacing so that highly convex and concave curvature regions around ice horns are appropriately captured, and it is shown to effectively treat the grid shock problem. Then, a novel blended method is developed by defining combinations of source terms with 2D elliptic equations. The source terms include two common control functions, Sorenson and Spekreijse, and an additional third source term to improve orthogonality. This blended method is shown to be very effective at improving grid quality metrics for complex glaze ice meshes with RANS resolution. The performance, in terms of residual reduction per nonlinear iteration, of several solution algorithms (Point-Jacobi, Gauss-Seidel, ADI, point and line SOR) is discussed within the context of a full multigrid operator. Details are given on the various formulations used in the linearization process. It is shown that the performance of the solution algorithm depends on the type of control function used. Finally, the algorithms are validated on standard complex experimental ice shapes, demonstrating the applicability of the methods.
Finally, the automated framework for RANS-based two-dimensional multi-step ice accretion, CANICE2D-NS, is developed, coupled with a multi-block Navier-Stokes CFD code, NSCODE2D, a multi-block elliptic grid generation code, NSGRID2D, and a multi-block Eulerian droplet solver, NSDROP2D (developed at Polytechnique Montreal). The framework allows Lagrangian and Eulerian droplet computations within a chimera approach treating multi-element geometries. The code was tested on public and confidential validation test cases, including standard NATO cases. In addition, up to a 10-times speedup is observed in the mesh generation procedure by using the implicit line-SOR and ADI smoothers within a multigrid procedure. The results demonstrate the benefits and robustness of the new framework in predicting ice shapes and aerodynamic performance parameters.
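The elliptic smoothers discussed above (Point-Jacobi, Gauss-Seidel, SOR) all iterate toward a grid satisfying elliptic equations in computational space. A toy Point-Jacobi sketch, with no control functions and so far simpler than the Sorenson/Spekreijse solvers in NSGRID:

```python
import numpy as np

def jacobi_smooth(x, y, iters=2000):
    """Point-Jacobi sweeps of the Laplace equations in computational space
    (x_xixi + x_etaeta = 0, likewise for y), boundary nodes held fixed.
    A toy stand-in for an elliptic grid solver without control functions."""
    for _ in range(iters):
        for f in (x, y):
            # whole-array assignment: RHS is evaluated from the old values
            f[1:-1, 1:-1] = 0.25 * (f[:-2, 1:-1] + f[2:, 1:-1]
                                    + f[1:-1, :-2] + f[1:-1, 2:])
    return x, y

# Distort the interior of a uniform unit-square grid, then smooth it back
n = 17
s = np.linspace(0.0, 1.0, n)
x, y = np.meshgrid(s, s, indexing="ij")
rng = np.random.default_rng(1)
x[1:-1, 1:-1] += 0.02 * rng.standard_normal((n - 2, n - 2))
y[1:-1, 1:-1] += 0.02 * rng.standard_normal((n - 2, n - 2))
x, y = jacobi_smooth(x, y)
```

With straight boundaries the converged grid is the uniform one; the source terms (control functions) discussed in the thesis are what let a production solver cluster lines and enforce orthogonality instead of relaxing to this Laplacian limit.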
Code of Federal Regulations, 2013 CFR
2013-10-01
... diagnostic laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding...
Code of Federal Regulations, 2012 CFR
2012-10-01
... diagnostic laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding...
Code of Federal Regulations, 2014 CFR
2014-10-01
... diagnostic laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding...
13 CFR 121.1103 - What are the procedures for appealing a NAICS code designation?
Code of Federal Regulations, 2010 CFR
2010-01-01
... appealing a NAICS code designation? 121.1103 Section 121.1103 Business Credit and Assistance SMALL BUSINESS... Determinations and Naics Code Designations § 121.1103 What are the procedures for appealing a NAICS code... code designation and applicable size standard must be served and filed within 10 calendar days after...
The numerical design of a spherical baroclinic experiment for Spacelab flights
NASA Technical Reports Server (NTRS)
Fowlis, W. W.; Roberts, G. O.
1982-01-01
The near-zero G environment of Spacelab is the basis of a true spherical experimental model of synoptic scale baroclinic atmospheric processes, using a radial dielectric body force analogous to gravity over a volume of liquid within two concentric spheres. The baroclinic motions are generated by corotating the spheres and imposing thermal boundary conditions, such that the liquid is subjected to a stable radial gradient and a latitudinal gradient. Owing to mathematical difficulties associated with the spherical geometry, quantitative design criteria can be acquired only by means of numerical models. The procedure adopted required the development of two computer codes based on the Navier-Stokes equations. The codes, of which the first calculates axisymmetric steady flow solutions and the second determines the growth or decay rates of linear wave perturbations with different wave numbers, are combined to generate marginal stability curves.
Development of a New 47-Group Library for the CASL Neutronics Simulators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Williams, Mark L; Wiarda, Dorothea
The CASL core simulator MPACT is under development for coupled neutronics and thermal-hydraulics simulation of pressurized light water reactors. The key characteristics of the MPACT code include a subgroup method for resonance self-shielding and a whole-core solver with a 1D/2D synthesis method. The ORNL AMPX/SCALE code packages have been significantly improved to support various intermediate resonance self-shielding approximations, such as the subgroup and embedded self-shielding methods. New 47-group AMPX and MPACT libraries based on ENDF/B-VII.0, whose group structure comes from the HELIOS library, have been generated for the CASL core simulator MPACT. The new 47-group MPACT library includes all nuclear data required for static and transient core simulations. This study discusses the detailed procedure used to generate the 47-group AMPX and MPACT libraries and benchmark results for the VERA progression problems.
The Design and Evaluation of "CAPTools"--A Computer Aided Parallelization Toolkit
NASA Technical Reports Server (NTRS)
Yan, Jerry; Frumkin, Michael; Hribar, Michelle; Jin, Haoqiang; Waheed, Abdul; Johnson, Steve; Cross, Jark; Evans, Emyr; Ierotheou, Constantinos; Leggett, Pete;
1998-01-01
Writing applications for high performance computers is a challenging task. Although writing code by hand still offers the best performance, it is extremely costly and often not very portable. The Computer Aided Parallelization Tools (CAPTools) are a toolkit designed to help automate the mapping of sequential FORTRAN scientific applications onto multiprocessors. CAPTools consists of the following major components: an inter-procedural dependence analysis module that incorporates user knowledge; a 'self-propagating' data partitioning module driven via user guidance; an execution control mask generation and optimization module for the user to fine tune parallel processing of individual partitions; a program transformation/restructuring facility for source code clean up and optimization; a set of browsers through which the user interacts with CAPTools at each stage of the parallelization process; and a code generator supporting multiple programming paradigms on various multiprocessors. Besides describing the rationale behind the architecture of CAPTools, the parallelization process is illustrated via case studies involving structured and unstructured meshes. The programming process and the performance of the generated parallel programs are compared against other programming alternatives based on the NAS Parallel Benchmarks, ARC3D and other scientific applications. Based on these results, a discussion on the feasibility of constructing architectural independent parallel applications is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chapman, J.C.
This report discusses comparisons of a RELAP5 posttest calculation of the recovery portion of the Semiscale Mod-2B test S-SG-1 with the test data. The posttest calculation was performed with the RELAP5/MOD2 cycle 36.02 code without updates. The recovery procedure that was calculated mainly consisted of secondary feed-and-steam using auxiliary feedwater injection and the atmospheric dump valve of the unaffected steam generator (the steam generator without the tube rupture). A second procedure, initiated after the trends of the secondary feed-and-steam procedure had been established, was to stop the safety injection that had been provided by two trains of both the charging and high-pressure injection systems. The Semiscale Mod-2B configuration is a small-scale (1/1705), nonnuclear, instrumented model of a Westinghouse four-loop pressurized water reactor power plant. S-SG-1 was a single-tube, cold-side, steam generator tube rupture experiment. The comparison of the posttest calculation and data included comparing the general trends and driving mechanisms of the responses, the phenomena, and the individual responses of the main parameters.
NASA Technical Reports Server (NTRS)
Martin, Carl J., Jr.
1996-01-01
This report describes a structural optimization procedure developed for use with the Engineering Analysis Language (EAL) finite element analysis system. The procedure is written primarily in the EAL command language. Three external processors, written in FORTRAN, generate equivalent stiffnesses and evaluate stress and local buckling constraints for the sections. Several built-up structural sections were coded into the design procedures. These structural sections were selected for use in aircraft design but are suitable for other applications. Sensitivity calculations use the semi-analytic method, and an extensive effort has been made to increase the execution speed and reduce the storage requirements. An approximate sensitivity update method is also included, which can significantly reduce computational time. The optimization is performed by an implementation of the MINOS V5.4 linear programming routine within a sequential linear programming procedure.
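Sequential linear programming repeatedly linearizes the objective and constraints about the current design and solves the resulting LP inside move limits. A toy sketch using SciPy's `linprog` in place of MINOS (hypothetical problem and parameter choices, not the report's procedure):

```python
import numpy as np
from scipy.optimize import linprog

def slp(grad_f, g, grad_g, x0, move=1.0, iters=40, shrink=0.8):
    """Toy sequential linear programming: minimize f(x) s.t. g(x) <= 0 by
    solving linearized subproblems min c.d s.t. g(x) + A d <= 0, |d| <= move,
    with move limits shrunk each iteration to force convergence."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        c = grad_f(x)                       # linearized objective
        res = linprog(c,
                      A_ub=np.atleast_2d(grad_g(x)),
                      b_ub=-np.atleast_1d(g(x)),
                      bounds=[(-move, move)] * x.size,
                      method="highs")
        x = x + res.x
        move *= shrink                      # shrinking move limits
    return x

# Example: min (x1-1)^2 + (x2-2)^2  s.t.  x1 + x2 <= 2  -> optimum (0.5, 1.5)
grad_f = lambda x: 2.0 * (x - np.array([1.0, 2.0]))
g = lambda x: np.array([x[0] + x[1] - 2.0])
grad_g = lambda x: np.array([[1.0, 1.0]])
x = slp(grad_f, g, grad_g, [0.0, 0.0])
```

The move limits play the role the report assigns to them in any SLP scheme: without them the LP is unbounded whenever the linearized objective points away from all constraints.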
MAGI: many-component galaxy initializer
NASA Astrophysics Data System (ADS)
Miki, Yohei; Umemura, Masayuki
2018-04-01
Providing initial conditions is an essential procedure for numerical simulations of galaxies. The initial conditions for idealized individual galaxies in N-body simulations should resemble observed galaxies and be dynamically stable for time-scales much longer than their characteristic dynamical times. However, generating a galaxy model ab initio as a system in dynamical equilibrium is a difficult task, since a galaxy contains several components, including a bulge, disc, and halo. Moreover, it is desirable that the initial-condition generator be fast and easy to use. We have now developed an initial-condition generator for galactic N-body simulations that satisfies these requirements. The developed generator adopts a distribution-function-based method, and it supports various kinds of density models, including custom-tabulated inputs and the presence of more than one disc. We tested the dynamical stability of systems generated by our code, representing early- and late-type galaxies, with N = 2097 152 and 8388 608 particles, respectively, and we found that the model galaxies maintain their initial distributions for at least 1 Gyr. The execution times required to generate the two models were 8.5 and 221.7 seconds, respectively, which is negligible compared to typical execution times for N-body simulations. The code is provided as open-source software and is publicly and freely available at https://bitbucket.org/ymiki/magi.
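A distribution-function-based sampler is easiest to see for a single spherical component. The classic recipe for a Plummer sphere (inverse-transform radii plus von Neumann rejection for the speed) is sketched below; this is a textbook single-component example, far simpler than MAGI's multi-component machinery:

```python
import numpy as np

def plummer(n, seed=0):
    """Sample positions and velocities for an N-body Plummer sphere in units
    G = M = a = 1, using the classic Aarseth/Henon recipe: invert the
    cumulative mass profile for r, then draw the speed as a fraction q of
    the local escape speed from f(q) ~ q^2 (1 - q^2)^(7/2) by rejection."""
    rng = np.random.default_rng(seed)
    r = (rng.random(n) ** (-2.0 / 3.0) - 1.0) ** -0.5  # M(<r) inverted

    def isotropic(k):
        v = rng.standard_normal((k, 3))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    pos = r[:, None] * isotropic(n)
    # rejection sampling of q; the target density is bounded above by 0.1
    q = np.empty(n)
    filled = 0
    while filled < n:
        qt = rng.random(n)
        y = rng.random(n) * 0.1
        ok = qt[y < qt ** 2 * (1.0 - qt ** 2) ** 3.5]
        take = min(n - filled, ok.size)
        q[filled:filled + take] = ok[:take]
        filled += take
    v_esc = np.sqrt(2.0) * (1.0 + r * r) ** -0.25  # local escape speed
    vel = (q * v_esc)[:, None] * isotropic(n)
    return pos, vel

pos, vel = plummer(4000)
```

A quick equilibrium check: the total kinetic energy of a Plummer model with G = M = a = 1 is 3*pi/64, roughly 0.147, and the sampled value should scatter tightly around it.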
Kageyama, Kyoko; Jimba, Koichi; Hashimoto, Satoru
2013-04-01
Under the code of civil procedure, a civil action begins when a plaintiff files a suit; conversely, if no suit is filed, no action is started. We explain the essential principles of the code of civil procedure and present the systems associated with expediting trials (briefs, preliminary oral arguments, preparatory proceedings, inquiry to the opponent, organized proceedings, the technical adviser system, etc.). The law has been amended repeatedly with the aim of suitably expediting trials. We should make appropriate use of the present code of civil procedure and can then expect the quick conclusion of trials.
Automatic Generation of Algorithms for the Statistical Analysis of Planetary Nebulae Images
NASA Technical Reports Server (NTRS)
Fischer, Bernd
2004-01-01
Analyzing data sets collected in experiments or by observations is a core scientific activity. Typically, experimental and observational data are fraught with uncertainty, and the analysis is based on a statistical model of the conjectured underlying processes. The large data volumes collected by modern instruments make computer support indispensable for this. Consequently, scientists spend significant amounts of their time on the development and refinement of data analysis programs. AutoBayes [GF+02, FS03] is a fully automatic synthesis system for generating statistical data analysis programs. Externally, it looks like a compiler: it takes an abstract problem specification and translates it into executable code. Its input is a concise description of a data analysis problem in the form of a statistical model as shown in Figure 1; its output is optimized and fully documented C/C++ code which can be linked dynamically into the Matlab and Octave environments. Internally, however, it is quite different: AutoBayes derives a customized algorithm implementing the given model using a schema-based process, and then further refines and optimizes the algorithm into code. A schema is a parameterized code template with associated semantic constraints which define and restrict the template's applicability. The schema parameters are instantiated in a problem-specific way during synthesis as AutoBayes checks the constraints against the original model or, recursively, against emerging sub-problems. AutoBayes's schema library contains problem decomposition operators (which are justified by theorems in a formal logic in the domain of Bayesian networks) as well as machine learning algorithms (e.g., EM, k-Means) and numeric optimization methods (e.g., Nelder-Mead simplex, conjugate gradient). AutoBayes augments this schema-based approach by symbolic computation to derive closed-form solutions whenever possible.
This is a major advantage over other statistical data analysis systems which use numerical approximations even in cases where closed-form solutions exist. AutoBayes is implemented in Prolog and comprises approximately 75,000 lines of code. In this paper, we take one typical scientific data analysis problem, analyzing planetary nebulae images taken by the Hubble Space Telescope, and show how AutoBayes can be used to automate the implementation of the necessary analysis programs. We initially follow the analysis described by Knuth and Hajian [KH02] and use AutoBayes to derive code for the published models. We show the details of the code derivation process, including the symbolic computations and automatic integration of library procedures, and compare the results of the automatically generated and manually implemented code. We then go beyond the original analysis and use AutoBayes to derive code for a simple image segmentation procedure based on a mixture model which can be used to automate a manual preprocessing step. Finally, we combine the original approach with the simple segmentation, which yields a more detailed analysis. This also demonstrates that AutoBayes makes it easy to combine different aspects of data analysis.
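As an illustration of the kind of algorithm named in the schema library above, here is a minimal 1-D k-means in Python. This is a hand-written sketch for orientation only, not AutoBayes output (AutoBayes would emit documented C/C++ derived from a statistical model):

```python
def kmeans_1d(data, centers, iters=20):
    """Minimal 1-D k-means: alternate assignment and update steps."""
    for _ in range(iters):
        # Assignment step: each point goes to its nearest center
        clusters = [[] for _ in centers]
        for x in data:
            i = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[i].append(x)
        # Update step: each center becomes the mean of its cluster
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

print(kmeans_1d([1.0, 1.1, 0.9, 10.0, 10.2, 9.8], [0.0, 5.0]))
```

A schema-based synthesizer parameterizes exactly this template (distance measure, update rule, convergence test) and instantiates it per problem.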
Code-division multiple-access protocol for active RFID systems
NASA Astrophysics Data System (ADS)
Mazurek, Gustaw; Szabatin, Jerzy
2008-01-01
Most Radio Frequency Identification (RFID) systems operating in the HF and UHF bands employ narrowband modulations (FSK or ASK) with Manchester coding. However, these simple transmission schemes are vulnerable to narrowband interference (NBI) generated by other radio systems working in the same frequency band, and also suffer from the collision problem and need special anti-collision procedures. This becomes especially important when operating in a noisy, crowded industrial environment. In this paper we show the performance of an RFID system with DS-CDMA transmission in comparison to a standard system with the FSK modulation defined in ISO 18000-7. Our simulation results show that, without any bandwidth expansion, the immunity against NBI can be improved by 8 dB and the system capacity can be 7 times higher when using DS-CDMA transmission instead of FSK modulation with Manchester coding.
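The NBI immunity described above comes from correlating the received chips against the spreading sequence. A toy Python sketch under simplifying assumptions (a length-8 balanced code and interference modeled as a constant offset; a real DS-CDMA link and ISO 18000-7 are considerably more involved):

```python
PN = [1, -1, 1, 1, -1, 1, -1, -1]   # balanced ±1 spreading code (chips sum to 0)

def spread(bit):
    """Transmit one data bit (+1/-1) as a sequence of chips."""
    return [bit * c for c in PN]

def despread(received):
    """Correlate against the code; a balanced code cancels a constant jammer."""
    corr = sum(r * c for r, c in zip(received, PN)) / len(PN)
    return 1 if corr > 0 else -1

# Strong narrowband interference modeled as a +5 offset on every chip:
jammed = [r + 5.0 for r in spread(-1)]
print(despread(jammed))  # → -1: the bit survives the jammer
```

Because the code chips sum to zero, the constant interference term integrates out in the correlation, which is the intuition behind the reported 8 dB NBI improvement.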
User's manual for three-dimensional analysis of propeller flow fields
NASA Technical Reports Server (NTRS)
Chaussee, D. S.; Kutler, P.
1983-01-01
A detailed operating manual is presented for the prop-fan computer code and supporting programs recently developed by Kutler, Chaussee, Sorenson, and Pulliam while at NASA's Ames Research Center. This code solves the inviscid Euler equations using an implicit numerical procedure developed by Beam and Warming of Ames. A description of the underlying theory, numerical techniques, and boundary conditions with equations, formulas, and methods for the mesh generation program (MGP), three-dimensional prop-fan flow field program (3DPFP), and data reduction program (DRP) is provided, together with complete operating instructions. In addition, a programmer's manual is also provided to assist the user interested in modifying the codes. Included in the programmer's manual for each program is a description of the input and output variables, flow charts, program listings, sample input and output data, and operating hints.
Andrade, Xavier; Aspuru-Guzik, Alán
2013-10-08
We discuss the application of graphical processing units (GPUs) to accelerate real-space density functional theory (DFT) calculations. To make our implementation efficient, we have developed a scheme to expose the data parallelism available in the DFT approach; this is applied to the different procedures required for a real-space DFT calculation. We present results for current-generation GPUs from AMD and Nvidia, which show that our scheme, implemented in the free code Octopus, can reach a sustained performance of up to 90 GFlops for a single GPU, representing a significant speed-up when compared to the CPU version of the code. Moreover, for some systems, our implementation can outperform a GPU Gaussian basis set code, showing that the real-space approach is a competitive alternative for DFT simulations on GPUs.
Creep and Creep-Fatigue Crack Growth at Structural Discontinuities and Welds
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. F. W. Brust; Dr. G. M. Wilkowski; Dr. P. Krishnaswamy
2010-01-27
The subsection ASME NH high temperature design procedure does not admit crack-like defects into the structural components. The US NRC identified the lack of treatment of crack growth within NH as a limitation of the code and thus this effort was undertaken. This effort is broken into two parts. Part 1, summarized here, involved examining all high temperature creep-fatigue crack growth codes being used today and from these, the task objective was to choose a methodology that is appropriate for possible implementation within NH. The second part of this task, which has just started, is to develop design rules for possible implementation within NH. This second part is a challenge since all codes require step-by-step analysis procedures to be undertaken in order to assess the crack growth and life of the component. Simple rules for design do not exist in any code at present. The codes examined in this effort included R5, RCC-MR (A16), BS 7910, API 579, and ATK (and some lesser known codes). There are several reasons that the capability for assessing cracks in high temperature nuclear components is desirable. These include: (1) Some components that are part of GEN IV reactors may have geometries that have sharp corners - which are essentially cracks. Design of these components within the traditional ASME NH procedure is quite challenging. It is natural to ensure adequate life design by modeling these features as cracks within a creep-fatigue crack growth procedure. (2) Workmanship flaws in welds sometimes occur and are accepted in some ASME code sections. It can be convenient to consider these as flaws when making a design life assessment. (3) Non-destructive Evaluation (NDE) and inspection methods after fabrication are limited in the size of the crack or flaw that can be detected. It is often convenient to perform a life assessment using a flaw of a size that represents the maximum size that can elude detection.
(4) Flaws that are observed using in-service detection methods often need to be addressed as plants age. Shutdown inspection intervals can only be designed using creep and creep-fatigue crack growth techniques. (5) The use of crack growth procedures can aid in examining the seriousness of creep damage in structural components. How cracks grow can be used to assess margins on components and lead to further safe operation. After examining the pros and cons of all these methods, the R5 code was chosen as the most up-to-date and validated high temperature creep and creep-fatigue code currently used in the world. R5 is considered the leader because the code: (1) has well established and validated rules, (2) has a team of experts continually improving and updating it, (3) has software that can be used by designers, (4) has extensive validation in many parts against available data from BE resources as well as input from Imperial College's database, and (5) was specifically developed for use in nuclear plants. R5 was specifically developed for use in the gas cooled nuclear reactors which operate in the UK, and much of the experience is based on the materials and temperatures experienced in these reactors. If the next generation advanced reactors to be built in the US used these same materials within the same temperature ranges as these reactors, then R5 may be appropriate for consideration of direct implementation within ASME code NH or Section XI. However, until more verification and validation of these creep/fatigue crack growth rules for the specific materials and temperatures to be used in the GEN IV reactors are complete, ASME should consider delaying this implementation. With this in mind, it is this author's opinion that R5 methods are the best available for code use today.
The focus of this work was to examine the literature for creep and creep-fatigue crack growth procedures that are well established in codes in other countries and to choose a procedure to consider for implementation into ASME NH. It is very important to recognize that all creep and creep-fatigue crack growth procedures that are part of high temperature design codes are related and very similar. This effort made no attempt to develop a new creep-fatigue crack growth predictive methodology. Rather, examination of current procedures was the only goal. The uncertainties in the R5 crack growth methods and recommendations for further work are also summarized here.
Integration of rocket turbine design and analysis through computer graphics
NASA Technical Reports Server (NTRS)
Hsu, Wayne; Boynton, Jim
1988-01-01
An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. The graphics are used to generate the blade profiles, their stacking, finite element generation, and analysis presentation through color graphics. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.
Update and evaluation of decay data for spent nuclear fuel analyses
NASA Astrophysics Data System (ADS)
Simeonov, Teodosi; Wemple, Charles
2017-09-01
Studsvik's approach to spent nuclear fuel analyses combines isotopic concentrations and multi-group cross-sections, calculated by the CASMO5 or HELIOS2 lattice transport codes, with core irradiation history data from the SIMULATE5 reactor core simulator and tabulated isotopic decay data. These data sources are used and processed by the code SNF to predict spent nuclear fuel characteristics. Recent advances in the generation procedure for the SNF decay data are presented. The SNF decay data includes basic data, such as decay constants, atomic masses and nuclide transmutation chains; radiation emission spectra for photons from radioactive decay, alpha-n reactions, bremsstrahlung, and spontaneous fission, electrons and alpha particles from radioactive decay, and neutrons from radioactive decay, spontaneous fission, and alpha-n reactions; decay heat production; and electro-atomic interaction data for bremsstrahlung production. These data are compiled from fundamental (ENDF, ENSDF, TENDL) and processed (ESTAR) sources for nearly 3700 nuclides. A rigorous evaluation procedure of internal consistency checks and comparisons to measurements and benchmarks, and code-to-code verifications is performed at the individual isotope level and using integral characteristics on a fuel assembly level (e.g., decay heat, radioactivity, neutron and gamma sources). Significant challenges are presented by the scope and complexity of the data processing, a dearth of relevant detailed measurements, and reliance on theoretical models for some data.
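Two of the basic data items listed above, decay constants and transmutation chains, can be illustrated with a small Python sketch. The half-lives below are invented for the example, and this two-member chain is only a toy next to the ~3700-nuclide library described (this is not SNF's processing code):

```python
import math

def decay_constant(half_life):
    """lambda = ln(2) / T_half, the basic constant tabulated per nuclide."""
    return math.log(2) / half_life

def bateman_daughter(n1_0, lam1, lam2, t):
    """Daughter population of a parent -> daughter chain (Bateman solution),
    the kind of transmutation-chain relation a decay library encodes."""
    return n1_0 * lam1 / (lam2 - lam1) * (math.exp(-lam1 * t) - math.exp(-lam2 * t))

lam = decay_constant(10.0)    # a nuclide with a half-life of 10 time units
print(math.exp(-lam * 10.0))  # → 0.5: half the parent inventory remains
```

A consistency-check suite of the sort the abstract mentions would compare such analytic chain solutions against independently computed integral quantities (decay heat, radioactivity) at the assembly level.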
Evaluating a Dental Diagnostic Terminology in an Electronic Health Record
White, Joel M.; Kalenderian, Elsbeth; Stark, Paul C.; Ramoni, Rachel L.; Vaderhobli, Ram; Walji, Muhammad F.
2011-01-01
Standardized treatment procedure codes and terms are routinely used in dentistry. Utilization of a diagnostic terminology is common in medicine, but there is not a satisfactory or commonly standardized dental diagnostic terminology available at this time. Recent advances in dental informatics have provided an opportunity for inclusion of diagnostic codes and terms as part of treatment planning and documentation in the patient treatment history. This article reports the results of the use of a diagnostic coding system in a large dental school’s predoctoral clinical practice. A list of diagnostic codes and terms, called Z codes, was developed by dental faculty members. The diagnostic codes and terms were implemented into an electronic health record (EHR) for use in a predoctoral dental clinic. The utilization of diagnostic terms was quantified. The validity of Z code entry was evaluated by comparing the diagnostic term entered to the procedure performed, where valid diagnosis-procedure associations were determined by consensus among three calibrated academically based dentists. A total of 115,004 dental procedures were entered into the EHR during the year sampled. Of those, 43,053 were excluded from this analysis because they represent diagnosis or other procedures unrelated to treatments. Among the 71,951 treatment procedures, 27,973 had diagnoses assigned to them with an overall utilization of 38.9 percent. Of the 147 available Z codes, ninety-three were used (63.3 percent). There were 335 unique procedures provided and 2,127 procedure/diagnosis pairs captured in the EHR. Overall, 76.7 percent of the diagnoses entered were valid. We conclude that dental diagnostic terminology can be incorporated within an electronic health record and utilized in an academic clinical environment. Challenges remain in the development of terms and implementation and ease of use that, if resolved, would improve the utilization. PMID:21546594
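The validity analysis described above amounts to checking each entered (diagnosis, procedure) pair against a consensus table. A minimal Python sketch; the code values below are invented placeholders, not actual Z codes or procedure codes:

```python
# Consensus-approved diagnosis/procedure associations (hypothetical values)
VALID_PAIRS = {("Z01", "D2140"), ("Z03", "D7140")}

def validity_rate(observed_pairs):
    """Fraction of entered (diagnosis, procedure) pairs judged valid."""
    valid = sum(1 for pair in observed_pairs if pair in VALID_PAIRS)
    return valid / len(observed_pairs)

print(validity_rate([("Z01", "D2140"), ("Z01", "D7140")]))  # → 0.5
```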
The aerodynamic characteristics of vortex ingestion for the F/A-18 inlet duct
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.
1991-01-01
A Reduced Navier-Stokes (RNS) solution technique was successfully combined with the concept of partitioned geometry and mesh generation to form a very efficient 3D RNS code aimed at the analysis-design engineering environment. Partitioned geometry and mesh generation is a pre-processor to augment existing geometry and grid generation programs which allows the solver to (1) recluster an existing gridfile mesh lattice, and (2) perturb an existing gridfile definition to alter the cross-sectional shape and inlet duct centerline distribution without returning to the external geometry and grid generator. The present results provide a quantitative validation of the initial value space marching 3D RNS procedure and demonstrate accurate predictions of the engine face flow field, both with a separation present in the inlet duct and when vortex generators are installed to suppress flow separation. The present results also demonstrate the ability of the 3D RNS procedure to analyze the flow physics associated with vortex ingestion in general geometry ducts such as the F/A-18 inlet. At the conditions investigated, these interactions are basically inviscid-like, i.e., the dominant aerodynamic characteristics have their origin in inviscid flow theory.
Generating Code Review Documentation for Auto-Generated Mission-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2009-01-01
Model-based design and automated code generation are increasingly used at NASA to produce actual flight code, particularly in the Guidance, Navigation, and Control domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently auto-generated code still needs to be fully tested and certified. We have thus developed AUTOCERT, a generator-independent plug-in that supports the certification of auto-generated code. AUTOCERT takes a set of mission safety requirements, and formally verifies that the autogenerated code satisfies these requirements. It generates a natural language report that explains why and how the code complies with the specified requirements. The report is hyper-linked to both the program and the verification conditions and thus provides a high-level structured argument containing tracing information for use in code reviews.
Information quality measurement of medical encoding support based on usability.
Puentes, John; Montagner, Julien; Lecornu, Laurent; Cauvin, Jean-Michel
2013-12-01
Medical encoding support systems for diagnoses and medical procedures are an emerging technology that is beginning to play a key role in billing, reimbursement, and health policy decisions. A significant problem in exploiting these systems is how to measure the appropriateness of any automatically generated list of codes, in terms of fitness for use, i.e. their quality. Until now, only information retrieval performance measurements have been applied to estimate the accuracy of codes lists as a quality indicator. Such measurements do not give the value of codes lists for practical medical encoding, and cannot be used to globally compare the quality of multiple codes lists. This paper defines and validates a new encoding information quality measure that addresses the problem of measuring medical codes lists quality. It is based on a usability study of how expert coders and physicians apply computer-assisted medical encoding. The proposed measure, named ADN, evaluates codes Accuracy, Dispersion and Noise, and is adapted to the variable length and content of generated codes lists, coping with limitations of previous measures. According to the ADN measure, the information quality of a codes list is fully represented by a single point within a suitably constrained feature space. Using a single scheme, our approach can reliably measure and compare the information quality of hundreds of codes lists, showing their practical value for medical encoding. Its pertinence is demonstrated by simulation and by application to real data corresponding to 502 inpatient stays in four clinic departments. Results are compared to the consensus of three expert coders who also coded this anonymized database of discharge summaries, and to five information retrieval measures. Information quality assessment applying the ADN measure showed the degree of encoding-support system variability from one clinic department to another, providing a global evaluation of quality measurement trends.
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Algorithm Building and Learning Programming Languages Using a New Educational Paradigm
NASA Astrophysics Data System (ADS)
Jain, Anshul K.; Singhal, Manik; Gupta, Manu Sheel
2011-08-01
This research paper presents a new concept of using a single tool to associate syntax of various programming languages, algorithms and basic coding techniques. A simple framework has been programmed in Python that helps students learn skills to develop algorithms, and implement them in various programming languages. The tool provides an innovative and a unified graphical user interface for development of multimedia objects, educational games and applications. It also aids collaborative learning amongst students and teachers through an integrated mechanism based on Remote Procedure Calls. The paper also elucidates an innovative method for code generation to enable students to learn the basics of programming languages using drag-n-drop methods for image objects.
PEGASUS 5: An Automated Pre-Processor for Overset-Grid CFD
NASA Technical Reports Server (NTRS)
Rogers, Stuart E.; Suhs, Norman; Dietz, William; Rogers, Stuart; Nash, Steve; Chan, William; Tramel, Robert; Onufer, Jeff
2006-01-01
This viewgraph presentation reviews the use and requirements of PEGASUS 5, a code which performs a pre-processing step for the overset CFD method. The code prepares the overset volume grids for the flow solver by computing the domain connectivity database and blanking out grid points which are contained inside a solid body. PEGASUS 5 successfully automates most of the overset process, leading to a dramatic reduction in user input over previous generations of overset software and to as much as an order-of-magnitude reduction in both turn-around time and user expertise requirements. It is not, however, a "black-box" procedure; care must be taken to examine the resulting grid system.
Design of hat-stiffened composite panels loaded in axial compression
NASA Astrophysics Data System (ADS)
Paul, T. K.; Sinha, P. K.
An integrated step-by-step analysis procedure for the design of axially compressed stiffened composite panels is outlined. The analysis makes use of the effective width concept. A computer code, BUSTCOP, is developed incorporating various aspects of buckling such as skin buckling, stiffener crippling and column buckling. Other salient features of the computer code include capabilities for generation of data based on micromechanics theories and hygrothermal analysis, and for prediction of strength failure. Parametric studies carried out on a hat-stiffened structural element indicate that, for all practical purposes, composite panels exhibit higher structural efficiency. Some hybrid laminates with outer layers made of aluminum alloy also show great promise for flight vehicle structural applications.
Certifying Auto-Generated Flight Code
NASA Technical Reports Server (NTRS)
Denney, Ewen
2008-01-01
Model-based design and automated code generation are being used increasingly at NASA. Many NASA projects now use MathWorks Simulink and Real-Time Workshop for at least some of their modeling and code development. However, there are substantial obstacles to more widespread adoption of code generators in safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. Moreover, the regeneration of code can require complete recertification, which offsets many of the advantages of using a generator. Indeed, manual review of autocode can be more challenging than for hand-written code. Since the direct V&V of code generators is too laborious and complicated due to their complex (and often proprietary) nature, we have developed a generator plug-in to support the certification of the auto-generated code. Specifically, the AutoCert tool supports certification by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews. The generated documentation also contains substantial tracing information, allowing users to trace between model, code, documentation, and V&V artifacts. This enables missions to obtain assurance about the safety and reliability of the code without excessive manual V&V effort and, as a consequence, eases the acceptance of code generators in safety-critical contexts. The generation of explicit certificates and textual reports is particularly well-suited to supporting independent V&V. The primary contribution of this approach is the combination of human-friendly documentation with formal analysis. The key technical idea is to exploit the idiomatic nature of auto-generated code in order to automatically infer logical annotations. 
The annotation inference algorithm itself is generic, and parametrized with respect to a library of coding patterns that depend on the safety policies and the code generator. The patterns characterize the notions of definitions and uses that are specific to the given safety property. For example, for initialization safety, definitions correspond to variable initializations while uses are statements which read a variable, whereas for array bounds safety, definitions are the array declarations, while uses are statements which access an array variable. The inferred annotations are thus highly dependent on the actual program and the properties being proven. The annotations, themselves, need not be trusted, but are crucial to obtain the automatic formal verification of the safety properties without requiring access to the internals of the code generator. The approach has been applied to both in-house and commercial code generators, but is independent of the particular generator used. It is currently being adapted to flight code generated using MathWorks Real-Time Workshop, an automatic code generator that translates from Simulink/Stateflow models into embedded C code.
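The initialization-safety example above (definitions are variable initializations, uses are statements that read a variable) can be sketched as a simple pattern scan over straight-line code. The statement syntax and patterns below are invented for illustration and are not AutoCert's actual pattern library:

```python
import re

# A "definition" pattern: an assignment statement (assumed C-like syntax)
ASSIGN = re.compile(r"^\s*(\w+)\s*=\s*(.*);")

def uninitialized_uses(statements):
    """Report variables read before any definition has been seen."""
    defined, violations = set(), []
    for stmt in statements:
        m = ASSIGN.match(stmt)
        rhs = m.group(2) if m else stmt
        for var in re.findall(r"[A-Za-z_]\w*", rhs):
            if var not in defined:
                violations.append(var)   # a "use" with no prior definition
        if m:
            defined.add(m.group(1))
    return violations

print(uninitialized_uses(["x = 1;", "y = x + z;"]))  # → ['z']
```

In the real setting, such matches do not merely flag errors: they seed the logical annotations from which verification conditions are generated and formally proven.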
NASA Technical Reports Server (NTRS)
Weed, Richard Allen; Sankar, L. N.
1994-01-01
An increasing amount of research activity in computational fluid dynamics has been devoted to the development of efficient algorithms for parallel computing systems. The increasing performance-to-price ratio of engineering workstations has led to research on procedures for implementing a parallel computing system composed of distributed workstations. This thesis proposal outlines an ongoing research program to develop efficient strategies for performing three-dimensional flow analysis on distributed computing systems. The PVM parallel programming interface was used to modify an existing three-dimensional flow solver, the TEAM code developed by Lockheed for the Air Force, to function as a parallel flow solver on clusters of workstations. Steady flow solutions were generated for three different wing and body geometries to validate the code and evaluate code performance. The proposed research will extend the parallel code development to determine the most efficient strategies for unsteady flow simulations.
NASA Technical Reports Server (NTRS)
Swift, Daniel W.
1991-01-01
The primary methodology during the grant period has been the use of micro- or meso-scale simulations to address specific questions concerning magnetospheric processes related to the aurora and substorm morphology. This approach, while useful in providing some answers, has its limitations. Many of the problems relating to the magnetosphere are inherently global and kinetic. Effort during the last year of the grant period has increasingly focused on development of a global-scale hybrid code to model the entire, coupled magnetosheath - magnetosphere - ionosphere system. In particular, numerical procedures for curvilinear coordinate generation and exactly conservative differencing schemes for hybrid codes in curvilinear coordinates have been developed. The new computer algorithms and the massively parallel computer architectures now make this global code a feasible proposition. Support provided by this project has played an important role in laying the groundwork for the eventual development of a global-scale code to model and forecast magnetospheric weather.
The Social Interactive Coding System (SICS): An On-Line, Clinically Relevant Descriptive Tool.
ERIC Educational Resources Information Center
Rice, Mabel L.; And Others
1990-01-01
The Social Interactive Coding System (SICS) assesses the continuous verbal interactions of preschool children as a function of play areas, addressees, script codes, and play levels. This paper describes the 26 subjects and the setting involved in SICS development, coding definitions and procedures, training procedures, reliability, sample…
Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking
NASA Technical Reports Server (NTRS)
Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward
2011-01-01
To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. 
This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk calculation procedure.
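The dispatch logic described above, routing X-ray exams through PCXMC and CT exams through the emulated ImPACT calculation, can be sketched in Python. The record fields, dose stand-ins, and risk coefficient below are all invented placeholders (the actual code is MATLAB interfacing with a NASA database and the PCXMC executable):

```python
def run_pcxmc(exam):
    # Stand-in for shelling out to the PCXMC executable (hypothetical)
    return 0.1 * exam["exposure"]

def impact_dose(exam):
    # Stand-in for the emulated ImPACT spreadsheet lookup (hypothetical)
    return 0.5 * exam["exposure"]

def process_exams(exams, risk_per_msv=5.5e-5):
    """Route each exam record by modality, then attach a nominal risk estimate.
    The risk coefficient is an illustrative placeholder, not NCRP's model."""
    results = []
    for exam in exams:
        dose = run_pcxmc(exam) if exam["modality"] == "XRAY" else impact_dose(exam)
        results.append({"dose_mSv": dose, "risk": dose * risk_per_msv})
    return results
```

In the described system the equivalent loop also writes each result back to the database, from which the annual and post-flight exposure reports are generated.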
Comparing the coding of complications in Queensland and Victorian admitted patient data.
Michel, Jude L; Cheng, Diana; Jackson, Terri J
2011-08-01
To examine differences between Queensland and Victorian coding of hospital-acquired conditions and suggest ways to improve the usefulness of these data in the monitoring of patient safety events. Secondary analysis of admitted patient episode data collected in Queensland and Victoria. Comparison of depth of coding, and patterns in the coding of ten commonly coded complications of five elective procedures. Comparison of the mean complication codes assigned per episode revealed Victoria assigns more valid codes than Queensland for all procedures, with the difference between the states being statistically significant in all cases. The proportion of the codes flagged as complications was consistently lower for Queensland when comparing 10 common complications for each of the five selected elective procedures. The estimated complication rates for the five procedures showed Victoria to have an apparently higher complication rate than Queensland for 35 of the 50 complications examined. Our findings demonstrate that the coding of complications is more comprehensive in Victoria than in Queensland. It is known that inconsistencies exist between states in routine hospital data quality. Comparative use of patient safety indicators should be viewed with caution until standards are improved across Australia. More exploration of data quality issues is needed to identify areas for improvement.
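The core comparison above, mean complication codes assigned per episode by state, reduces to simple aggregation. The sketch below uses invented toy records and field names, not the actual Queensland or Victorian datasets.

```python
# Toy sketch: mean complication codes per episode, grouped by state.
# Records and field names are hypothetical illustrations.
episodes = [
    {"state": "VIC", "complication_codes": 2},
    {"state": "VIC", "complication_codes": 1},
    {"state": "QLD", "complication_codes": 1},
    {"state": "QLD", "complication_codes": 0},
]

def mean_codes(records, state):
    """Average number of complication codes per episode for one state."""
    vals = [r["complication_codes"] for r in records if r["state"] == state]
    return sum(vals) / len(vals)

vic = mean_codes(episodes, "VIC")
qld = mean_codes(episodes, "QLD")
```

A real analysis would additionally test whether the between-state difference is statistically significant, as the study did.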
Natural Language Interface for Safety Certification of Safety-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2011-01-01
Model-based design and automated code generation are being used increasingly at NASA. The trend is to move beyond simulation and prototyping to actual flight code, particularly in the guidance, navigation, and control domain. However, there are substantial obstacles to more widespread adoption of code generators in such safety-critical domains. Since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently the generated code still needs to be fully tested and certified. The AutoCert generator plug-in supports the certification of automatically generated code by formally verifying that the generated code is free of different safety violations, by constructing an independently verifiable certificate, and by explaining its analysis in a textual form suitable for code reviews.
Multispectral Terrain Background Simulation Techniques For Use In Airborne Sensor Evaluation
NASA Astrophysics Data System (ADS)
Weinberg, Michael; Wohlers, Ronald; Conant, John; Powers, Edward
1988-08-01
A background simulation code developed at Aerodyne Research, Inc., called AERIE is designed to reflect the major sources of clutter that are of concern to staring and scanning sensors of the type being considered for various airborne threat warning (both aircraft and missiles) sensors. The code is a first principles model that could be used to produce a consistent image of the terrain for various spectral bands, i.e., provide the proper scene correlation both spectrally and spatially. The code utilizes both topographic and cultural features to model terrain, typically from DMA data, with a statistical overlay of the critical underlying surface properties (reflectance, emittance, and thermal factors) to simulate the resulting texture in the scene. Strong solar scattering from water surfaces is included with allowance for wind driven surface roughness. Clouds can be superimposed on the scene using physical cloud models and an analytical representation of the reflectivity obtained from scattering off spherical particles. The scene generator is augmented by collateral codes that allow for the generation of images at finer resolution. These codes provide interpolation of the basic DMA databases using fractal procedures that preserve the high frequency power spectral density behavior of the original scene. Scenes are presented illustrating variations in altitude, radiance, resolution, material, thermal factors, and emissivities. The basic models utilized for simulation of the various scene components and various "engineering level" approximations are incorporated to reduce the computational complexity of the simulation.
Users Manual for the NASA Lewis Ice Accretion Prediction Code (LEWICE)
NASA Technical Reports Server (NTRS)
Ruff, Gary A.; Berkowitz, Brian M.
1990-01-01
LEWICE is an ice accretion prediction code that applies a time-stepping procedure to calculate the shape of an ice accretion. The potential flow field is calculated in LEWICE using the Douglas Hess-Smith 2-D panel code (S24Y). This potential flow field is then used to calculate the trajectories of particles and the impingement points on the body. These calculations are performed to determine the distribution of liquid water impinging on the body, which then serves as input to the icing thermodynamic code. The icing thermodynamic model is based on the work of Messinger, but contains several major modifications and improvements. This model is used to calculate the ice growth rate at each point on the surface of the geometry. By specifying an icing time increment, the ice growth rate can be interpreted as an ice thickness which is added to the body, resulting in the generation of new coordinates. This procedure is repeated, beginning with the potential flow calculations, until the desired icing time is reached. The operation of LEWICE is illustrated through the use of five examples. These examples are representative of the types of applications expected for LEWICE. All input and output is discussed, along with many of the diagnostic messages contained in the code. Several error conditions that may occur in the code for certain icing conditions are identified, and a course of action is recommended. LEWICE has been used to calculate a variety of ice shapes, but should still be considered a research code. The code should be exercised further to identify any shortcomings and inadequacies. Any modifications identified as a result of these cases, or of additional experimental results, should be incorporated into the model. Using it as a test bed for improvements to the ice accretion model is one important application of LEWICE.
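The time-stepping procedure described above can be caricatured in a few lines. This is only a structural sketch: the growth-rate function here is a placeholder, not the Messinger-based thermodynamic model LEWICE actually uses, and the flow-field and trajectory calculations are reduced to comments.

```python
# Sketch of a time-stepping accretion loop: each step would recompute the
# flow field and droplet impingement, then convert a growth rate into an
# ice-thickness increment added to the body. The growth model is invented.

def ice_growth_rate(thickness):
    """Hypothetical growth rate that slows as the shape blunts."""
    return 1.0 / (1.0 + thickness)

def accrete(total_time, dt):
    thickness = 0.0
    t = 0.0
    while t < total_time:
        # (1) potential flow calculation would go here
        # (2) droplet trajectories / impingement distribution here
        # (3) growth rate over dt interpreted as added thickness
        thickness += ice_growth_rate(thickness) * dt
        t += dt
    return thickness

final = accrete(total_time=10.0, dt=1.0)
```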
Structure Limits for a 30mm Annular Piston.
1988-05-01
... block at the rear, ending the cycle. III. STRESS ANALYSIS PROCEDURE. Stress data was generated using the SAAS-II finite element computer code.
TOWARD THE DEVELOPMENT OF A CONSENSUS MATERIALS DATABASE FOR PRESSURE TECHNOLOGY APPLICATIONS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swindeman, Robert W; Ren, Weiju
The ASME construction code books specify materials and fabrication procedures that are acceptable for pressure technology applications. However, with few exceptions, the materials properties provided in the ASME code books include no statistics or other information pertaining to material variability. Such information is central to the prediction and prevention of failure events. Many sources of materials data exist that provide variability information, but such sources do not necessarily represent a consensus of experts with respect to the reported trends. Such a need has been identified by the ASME Standards Technology, LLC, and initial steps have been taken to address these needs; however, these steps are limited to project-specific applications only, such as the joint DOE-ASME project on materials for Generation IV nuclear reactors. In contrast to light-water reactor technology, the experience base for the Generation IV nuclear reactors is somewhat lacking, and heavy reliance must be placed on model development and predictive capability. The database for model development is being assembled and includes existing code alloys such as alloy 800H and 9Cr-1Mo-V steel. Ownership and use rights are potential barriers that must be addressed.
External-Compression Supersonic Inlet Design Code
NASA Technical Reports Server (NTRS)
Slater, John W.
2011-01-01
A computer code named SUPIN has been developed to perform aerodynamic design and analysis of external-compression, supersonic inlets. The baseline set of inlets includes axisymmetric pitot, two-dimensional single-duct, axisymmetric outward-turning, and two-dimensional bifurcated-duct inlets. The aerodynamic methods are based on low-fidelity analytical and numerical procedures. The geometric methods are based on planar geometry elements. SUPIN has three modes of operation: 1) generate the inlet geometry from an explicit set of geometry information, 2) size and design the inlet geometry and analyze the aerodynamic performance, and 3) compute the aerodynamic performance of a specified inlet geometry. The aerodynamic performance quantities include inlet flow rates, total pressure recovery, and drag. The geometry output from SUPIN includes inlet dimensions, cross-sectional areas, coordinates of planar profiles, and surface grids suitable for input to grid generators for analysis by computational fluid dynamics (CFD) methods. The input data file for SUPIN and the output file from SUPIN are text (ASCII) files. The surface grid files are output as formatted Plot3D or stereolithography (STL) files. SUPIN executes in batch mode and is available as a Microsoft Windows executable and Fortran95 source code with a makefile for Linux.
Standard interface files and procedures for reactor physics codes, version III
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, B.M.
Standards and procedures for promoting the exchange of reactor physics codes are updated to Version-III status. Standards covering program structure, interface files, file handling subroutines, and card input format are included. The implementation status of the standards in codes and the extension of the standards to new code areas are summarized.
NASA Technical Reports Server (NTRS)
Liu, D. D.; Kao, Y. F.; Fung, K. Y.
1989-01-01
A transonic equivalent strip (TES) method was further developed for unsteady flow computations of arbitrary wing planforms. The TES method consists of two consecutive correction steps to a given nonlinear code such as LTRAN2; namely, the chordwise mean flow correction and the spanwise phase correction. The computation procedure requires direct pressure input from other computed or measured data. Otherwise, it does not require airfoil shape or grid generation for given planforms. To validate the computed results, four swept wings of various aspect ratios, including those with control surfaces, are selected as computational examples. Overall trends in unsteady pressures are established with those obtained by XTRAN3S codes, Isogai's full potential code and measured data by NLR and RAE. In comparison with these methods, the TES has achieved considerable saving in computer time and reasonable accuracy which suggests immediate industrial applications.
From Verified Models to Verifiable Code
NASA Technical Reports Server (NTRS)
Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.
2009-01-01
Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.
Development tests for the 2.5 megawatt Mod-2 wind turbine generator
NASA Technical Reports Server (NTRS)
Andrews, J. S.; Baskin, J. M.
1982-01-01
The 2.5 megawatt MOD-2 wind turbine generator test program is discussed. The development of the 2.5 megawatt MOD-2 wind turbine generator included an extensive program of testing which encompassed verification of analytical procedures, component development, and integrated system verification. The test program was to assure achievement of the thirty year design operational life of the wind turbine system as well as to minimize costly design modifications which would otherwise have been required during on-site system testing. Computer codes were modified, fatigue life of structural and dynamic components was verified, mechanical and electrical components and subsystems were functionally checked and modified where necessary to meet system specifications, and measured dynamic responses of coupled systems confirmed analytical predictions.
Pillai, Anilkumar; Medford, Andrew R L
2013-01-01
Correct coding is essential for accurate reimbursement for clinical activity. Published data confirm that significant aberrations in coding occur, leading to considerable financial inaccuracies especially in interventional procedures such as endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA). Previous data reported a 15% coding error for EBUS-TBNA in a U.K. service. We hypothesised that greater physician involvement with coders would reduce EBUS-TBNA coding errors and financial disparity. The study was done as a prospective cohort study in the tertiary EBUS-TBNA service in Bristol. 165 consecutive patients between October 2009 and March 2012 underwent EBUS-TBNA for evaluation of unexplained mediastinal adenopathy on computed tomography. The chief coder was prospectively electronically informed of all procedures and cross-checked on a prospective database and by Trust Informatics. Cost and coding analysis was performed using the 2010-2011 tariffs. All 165 procedures (100%) were coded correctly as verified by Trust Informatics. This compares favourably with the 14.4% coding inaccuracy rate for EBUS-TBNA in a previous U.K. prospective cohort study [odds ratio 201.1 (1.1-357.5), p = 0.006]. Projected income loss was GBP 40,000 per year in the previous study, compared to a GBP 492,195 income here with no coding-attributable loss in revenue. Greater physician engagement with coders prevents coding errors and financial losses which can be significant especially in interventional specialties. The intervention can be as cheap, quick and simple as a prospective email to the coding team with cross-checks by Trust Informatics and against a procedural database. We suggest that all specialties should engage more with their coders using such a simple intervention to prevent revenue losses. Copyright © 2013 S. Karger AG, Basel.
28 CFR 36.604 - Procedure following preliminary determination of equivalency.
Code of Federal Regulations, 2014 CFR
2014-07-01
... State Laws or Local Building Codes § 36.604 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite... enforcement of the code, at which interested individuals, including individuals with disabilities, are...
28 CFR 36.604 - Procedure following preliminary determination of equivalency.
Code of Federal Regulations, 2011 CFR
2011-07-01
... State Laws or Local Building Codes § 36.604 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite... enforcement of the code, at which interested individuals, including individuals with disabilities, are...
28 CFR 36.604 - Procedure following preliminary determination of equivalency.
Code of Federal Regulations, 2012 CFR
2012-07-01
... State Laws or Local Building Codes § 36.604 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite... enforcement of the code, at which interested individuals, including individuals with disabilities, are...
28 CFR 36.604 - Procedure following preliminary determination of equivalency.
Code of Federal Regulations, 2013 CFR
2013-07-01
... State Laws or Local Building Codes § 36.604 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite... enforcement of the code, at which interested individuals, including individuals with disabilities, are...
Evaluation of flaws in carbon steel piping. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zahoor, A.; Gamble, R.M.; Mehta, H.S.
1986-10-01
The objective of this program was to develop flaw evaluation procedures and allowable flaw sizes for ferritic piping used in light water reactor (LWR) power generation facilities. The program results provide relevant ASME Code groups with the information necessary to define flaw evaluation procedures, allowable flaw sizes, and their associated bases for Section XI of the code. Because there are several possible flaw-related failure modes for ferritic piping over the LWR operating temperature range, three analysis methods were employed to develop the evaluation procedures. These include limit load analysis for plastic collapse, elastic-plastic fracture mechanics (EPFM) analysis for ductile tearing, and linear elastic fracture mechanics (LEFM) analysis for non-ductile crack extension. To ensure the appropriate analysis method is used in an evaluation, a step-by-step procedure also is provided to identify the relevant acceptance standard or procedure on a case-by-case basis. The tensile strength and toughness properties required to complete the flaw evaluation for any of the three analysis methods are included in the evaluation procedure. The flaw evaluation standards are provided in tabular form for the plastic collapse and ductile tearing modes, where the allowable part-through flaw depth is defined as a function of load and flaw length. For non-ductile crack extension, linear elastic fracture mechanics analysis methods, similar to those in Appendix A of Section XI, are defined. Evaluation flaw sizes and procedures are developed for both longitudinal and circumferential flaw orientations and normal/upset and emergency/faulted operating conditions. The tables are based on margins on load of 2.77 and 1.39 for circumferential flaws and 3.0 and 1.5 for longitudinal flaws for normal/upset and emergency/faulted conditions, respectively.
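The margins on load quoted in the abstract can be applied mechanically: the allowable service load is the predicted failure load divided by the margin for the flaw orientation and operating condition. The sketch below shows only that bookkeeping; predicting the failure load itself (limit load, EPFM, or LEFM) is outside its scope, and the function and variable names are invented.

```python
# Sketch: selecting the margin on load by flaw orientation and operating
# condition, per the values quoted in the abstract.
MARGINS = {
    ("circumferential", "normal_upset"): 2.77,
    ("circumferential", "emergency_faulted"): 1.39,
    ("longitudinal", "normal_upset"): 3.0,
    ("longitudinal", "emergency_faulted"): 1.5,
}

def allowable_load(failure_load, orientation, condition):
    """Allowable service load = predicted failure load / margin."""
    return failure_load / MARGINS[(orientation, condition)]

a = allowable_load(100.0, "circumferential", "normal_upset")
```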
NASA Technical Reports Server (NTRS)
Steger, J. L.; Dougherty, F. C.; Benek, J. A.
1983-01-01
A mesh system composed of multiple overset body-conforming grids is described for adapting finite-difference procedures to complex aircraft configurations. In this so-called 'chimera mesh,' a major grid is generated about a main component of the configuration and overset minor grids are used to resolve all other features. Methods for connecting overset multiple grids and modifications of flow-simulation algorithms are discussed. Computational tests in two dimensions indicate that the use of multiple overset grids can simplify the task of grid generation without an adverse effect on flow-field algorithms and computer code complexity.
Atmospheric Correction of Satellite Imagery Using Modtran 3.5 Code
NASA Technical Reports Server (NTRS)
Gonzales, Fabian O.; Velez-Reyes, Miguel
1997-01-01
When performing satellite remote sensing of the earth in the solar spectrum, atmospheric scattering and absorption effects provide the sensors with corrupted information about the target's radiance characteristics. We are faced with the problem of reconstructing the signal that was reflected from the target from the data sensed by the remote sensing instrument. This article presents a method for simulating radiance characteristic curves of satellite images using a MODTRAN 3.5 band model (BM) code to solve the radiative transfer equation (RTE), and proposes a method for the implementation of an adaptive system for automated atmospheric corrections. The simulation procedure is carried out as follows: (1) for each satellite digital image, a radiance characteristic curve is obtained by performing a digital number (DN) to radiance conversion; (2) using MODTRAN 3.5, a simulation of the image's characteristic curves is generated; (3) the output of the code is processed to generate radiance characteristic curves for the simulated cases. The simulation algorithm was used to simulate Landsat Thematic Mapper (TM) images for two types of locations: the ocean surface and a forest surface. The simulation procedure was validated by computing the error between the empirical and simulated radiance curves. While results in the visible region of the spectrum were not very accurate, those for the infrared region of the spectrum were encouraging. This information can be used for correction of the atmospheric effects. For the simulation over ocean, the lowest error produced in this region was of the order of 10^-5 and up to 14 times smaller than errors in the visible region. For the same spectral region in the forest case, the lowest error produced was of the order of 10^-4, and up to 41 times smaller than errors in the visible region.
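Step (1) of the procedure, DN-to-radiance conversion, is a linear calibration, and the validation step reduces to an error metric between curves. The sketch below uses invented gain/offset values and simulated radiances; real Landsat TM calibration uses published per-band coefficients, and the paper's exact error metric is not specified here.

```python
# Sketch: linear DN-to-radiance calibration and a mean relative error
# between empirical and simulated radiance curves. All numbers invented.

def dn_to_radiance(dn, gain, offset):
    """Linear radiance calibration: L = gain * DN + offset."""
    return gain * dn + offset

def mean_abs_rel_error(empirical, simulated):
    """Average |empirical - simulated| / |empirical| over the curve."""
    return sum(abs(e - s) / abs(e)
               for e, s in zip(empirical, simulated)) / len(empirical)

dns = [50, 120, 200]
empirical = [dn_to_radiance(d, gain=0.8, offset=1.5) for d in dns]
simulated = [41.0, 96.0, 160.0]   # stand-in for MODTRAN output
err = mean_abs_rel_error(empirical, simulated)
```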
Idaho National Engineering Laboratory code assessment of the Rocky Flats transuranic waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-07-01
This report is an assessment of the content codes associated with transuranic waste shipped from the Rocky Flats Plant in Golden, Colorado, to INEL. The primary objective of this document is to characterize and describe the transuranic wastes shipped to INEL from Rocky Flats by item description code (IDC). This information will aid INEL in determining if the waste meets the waste acceptance criteria (WAC) of the Waste Isolation Pilot Plant (WIPP). The waste covered by this content code assessment was shipped from Rocky Flats between 1985 and 1989. These years coincide with the dates for information available in the Rocky Flats Solid Waste Information Management System (SWIMS). The majority of waste shipped during this time was certified to the existing WIPP WAC. This waste is referred to as precertified waste. Reassessment of these precertified waste containers is necessary because of changes in the WIPP WAC. To accomplish this assessment, the analytical and process knowledge available on the various IDCs used at Rocky Flats was evaluated. Rocky Flats sources for this information include employee interviews, SWIMS, the Transuranic Waste Certification Program, the Transuranic Waste Inspection Procedure, Backlog Waste Baseline Books, the WIPP Experimental Waste Characterization Program (headspace analysis), and other related documents, procedures, and programs. Summaries are provided of: (a) certification information, (b) waste description, (c) generation source, (d) recovery method, (e) waste packaging and handling information, (f) container preparation information, (g) assay information, (h) inspection information, (i) analytical data, and (j) RCRA characterization.
A Semantic Analysis Method for Scientific and Engineering Code
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.
1998-01-01
This paper develops a procedure to statically analyze aspects of the meaning or semantics of scientific and engineering code. The analysis involves adding semantic declarations to a user's code and parsing this semantic knowledge with the original code using multiple expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. In practice, a user would submit code with semantic declarations of primitive variables to the analysis procedure, and its semantic parsers would automatically recognize and document some static, semantic concepts and locate some program semantic errors. A prototype implementation of this analysis procedure is demonstrated. Further, the relationship between the fundamental algebraic manipulations of equations and the parsing of expressions is explained. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
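A drastically reduced illustration of the idea, semantic declarations on primitive variables enabling a checker to flag inconsistent expressions, is a dimensional-consistency check. Everything below is a toy: the paper's method uses multiple expert parsers over real source code, not a unit lookup table.

```python
# Toy: unit declarations attached to variables let a checker reject
# physically meaningless additions, one kind of semantic error.
UNITS = {"velocity": "m/s", "time": "s", "distance": "m"}  # declarations

def check_add(unit_a, unit_b):
    """Addition is only meaningful between like units."""
    if unit_a != unit_b:
        raise ValueError(f"cannot add {unit_a} to {unit_b}")
    return unit_a

ok = check_add(UNITS["distance"], UNITS["distance"])   # fine: m + m
try:
    check_add(UNITS["velocity"], UNITS["time"])        # m/s + s: flagged
    flagged = False
except ValueError:
    flagged = True
```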
Grid orthogonality effects on predicted turbine midspan heat transfer and performance
NASA Technical Reports Server (NTRS)
Boyle, R. J.; Ameri, A. A.
1995-01-01
The effect of five different C type grid geometries on the predicted heat transfer and aerodynamic performance of a turbine stator is examined. Predictions were obtained using two flow analysis codes. One was a finite difference analysis, and the other was a finite volume analysis. Differences among the grids in terms of heat transfer and overall performance were small. The most significant difference among the five grids occurred in the prediction of pitchwise variation in total pressure. There was consistency between results obtained with each of the flow analysis codes when the same grid was used. A grid generating procedure in which the viscous grid is embedded within an inviscid type grid resulted in the best overall performance.
Acceptance criteria for welds in ASTM A106 grade B steel pipe and plate
NASA Technical Reports Server (NTRS)
Hudson, C. M.; Wright, D. B., Jr.; Leis, B. N.
1986-01-01
Based on the RECERT Program findings, NASA-Langley funded a fatigue study of code-unacceptable welds. Usage curves were developed which were based on the structural integrity of the welds. The details of this study are presented in NASA CR-178114. The information presented is a condensation and reinterpretation of the information in NASA CR-178114. This condensation and reinterpretation generated usage curves for welds having: (1) indications 0.20-inch deep by 0.40-inch long, and (2) indications 0.195-inch deep by 8.4-inches long. These curves were developed using the procedures used in formulating the design curves in Section VIII, Division 2 of the American Society of Mechanical Engineers Boiler and Pressure Vessel Code.
NASA Technical Reports Server (NTRS)
Manhardt, P. D.
1982-01-01
The CMC fluid mechanics program system was developed to translate finite element numerical solution methodology, applied to nonlinear field problems, into a versatile computer code for comprehensive flow field analysis. Data procedures for the CMC three-dimensional Parabolic Navier-Stokes (PNS) algorithm are presented. Along with general data procedures, a juncture corner flow standard test case data deck is described. A listing of the data deck and an explanation of grid generation methodology are presented. Tabulations of all commands and variables available to the user are described. These are in alphabetical order with cross-reference numbers which refer to storage addresses.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-16
... hospital payment systems; hospital medical care delivery systems; provider billing and accounting systems; APC groups; Current Procedural Terminology codes; Health Care Common Procedure Coding System (HCPCS) codes; the use of, and payment for, drugs, medical devices, and other services in the outpatient setting...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heuze, F.E.
1983-03-01
An attempt to model the complex thermal and mechanical phenomena occurring in the disposal of high-level nuclear wastes in rock at high power loading is described. Such processes include melting of the rock, convection of the molten material, and very high stressing of the rock mass, leading to new fracturing. Because of the phase changes and the wide temperature ranges considered, realistic models must provide for coupling of the thermal and mechanical calculations, for large deformations, and for steady-state temperature-dependent creep of the rock mass. Explicit representation of convection would be desirable, as would the ability to show fracture development and migration of fluids in cracks. Enhancements to SNAGRE consisted of: array modifications to accommodate complex variations of thermal and mechanical properties with temperature; introduction of the ability to calculate thermally induced stresses; improved management of the minimum time step and minimum temperature step to increase code efficiency; introduction of a variable heat-generation algorithm to accommodate heat decay of the nuclear materials; streamlining of the code by general editing and extensive deletion of coding used in mesh generation; and updating of the program users' manual. The enhanced LLNL version of the code was renamed LSANGRE. Phase changes were handled by introducing sharp variations in the specific heat of the rock in a narrow range about the melting point. The accuracy of this procedure was tested successfully on a melting slab problem. LSANGRE replicated the results of both the analytical solution and calculations with the finite difference TRUMP code. Following enhancement and verification, a purely thermal calculation was carried to 10^5 years. It went beyond the extent of maximum melt and into the beginning of the cooling phase.
Fitzgerald, J E F; Ravindra, P; Lepore, M; Armstrong, A; Bhangu, A; Maxwell-Armstrong, C A
2013-01-01
In many countries healthcare commissioning bodies (state or insurance-based) reimburse hospitals for their activity. The costs associated with post-graduate clinical training as part of this are poorly understood. This study quantified the financial revenue generated by surgical trainees in the out-patient clinic setting. A retrospective analysis of surgical out-patient ambulatory care appointments under 6 full-time equivalent Consultants (Attendings) in one hospital over 2 months. Clinic attendance lists were generated from the Patient Access System. Appointments were categorised as: 'new', 'review' or 'procedure' as per the Department of Health Payment by Results (PbR) Outpatient Tariff (Outpatient Treatment Function Code 104; Outpatient Procedure Code OPRSI1). During the study period 78 clinics offered 1184 appointments; 133 of these were not attended (11.2%). Of those attended 1029 had sufficient detail for analysis (98%). 261 (25.4%) patients were seen by a trainee. Applying PbR reimbursement criteria to these gave a projected annual income of £GBP 218,712 (€EU 266,527; $USD 353,657) generated by 6 surgical trainees (Residents). This is equivalent to approximately £GBP 36,452 (€EU 44,415; $USD 58,943) per trainee annually compared to £GBP 48,732 (€EU 59,378; $USD 78,800) per Consultant. This projected yearly income off-set 95% of the trainee's basic salary. Surgical trainees generated a quarter of the out-patient clinic activity related income in this study, with each trainee producing three-quarters of that generated by a Consultant. This offers considerable commercial value to hospitals. Although this must offset productivity differences and overall running costs, training bodies should ensure hospitals offer an appropriate return. In a competitive market hospitals could be invited to compete for trainees, with preference given to those providing excellence in training. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.
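The projection arithmetic in the abstract is a simple annualisation: income observed over the 2-month sample is scaled by six to a yearly figure, then divided among the six trainees. The sketch below reproduces that calculation using the per-trainee figure quoted in the abstract; the function names are invented and no actual PbR tariff values are used.

```python
# Sketch of the annualisation used in the study: scale a 2-month sample
# to 12 months, then apportion per trainee.

def projected_annual_income(sample_income_gbp, sample_months=2):
    """Scale sample-period income to a full year."""
    return sample_income_gbp * (12 / sample_months)

def per_trainee(annual_income_gbp, n_trainees):
    return annual_income_gbp / n_trainees

# 2-month trainee-generated income implied by the abstract's figures
sample = 36452.0
annual = projected_annual_income(sample)      # GBP 218,712, as reported
each = per_trainee(annual, n_trainees=6)      # GBP 36,452 per trainee
```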
Translating MAPGEN to ASPEN for MER
NASA Technical Reports Server (NTRS)
Rabideau, Gregg R.; Knight, Russell L.; Lenda, Matthew; Maldague, Pierre F.
2013-01-01
This software translates MAPGEN (Europa and APGEN) domains to ASPEN, and the resulting domain can be used to perform planning for the Mars Exploration Rover (MER). In other words, this is a conversion of two distinct planning languages (both declarative and procedural) to a third (declarative) planning language in order to solve the problem of faithful translation from mixed-domain representations into the ASPEN Modeling Language. The MAPGEN planning system is an example of a hybrid procedural/declarative system where the advantages of each are leveraged to produce an effective planner/scheduler for MER tactical planning. The adaptation of the planning system (ASPEN) was investigated, and, with some translation, much of the procedural knowledge encoding is amenable to declarative knowledge encoding. The approach was to compose translators from the core languages used for adapting MAPGEN, which consist of Europa and APGEN. Europa is a constraint-based planner/scheduler where domains are encoded using a declarative model. APGEN is also constraint-based, in that it tracks constraints on resources, states, and other variables. Domains are encoded in both constraints and code snippets that execute according to a forward sweep through the plan. Europa and APGEN communicate with each other using proxy activities in APGEN that represent constraints and/or tokens in Europa. The composition of a translator from Europa to ASPEN was fairly straightforward, as ASPEN is also a declarative planning system, and the specific uses of Europa for the MER domain matched ASPEN's native encoding fairly closely. On the other hand, translating from APGEN to ASPEN was considerably more involved. On the surface, the types of activities and resources one encodes in APGEN appear to map one-to-one to the activities, state variables, and resources in ASPEN.
But when one looks into how resources are profiled and activities are expanded, one finds code snippets that inspect information available at the moment in time being planned and decide, on the fly, what the appropriate profile or expansion is. APGEN is actually a forward (in time) sweeping discrete event simulator, where the model is composed of code snippets that are artfully interleaved by the engine to produce a plan/schedule. To solve this problem, representative code is simulated as a declarative series of task expansions. Predominantly, three types of procedural models were translated: loops, if statements, and code blocks. Loops and if statements were handled using controlled task expansion, and code blocks were handled using constraint networks that reproduce the results procedural execution order would generate. One advantage with respect to performance for MAPGEN is the use of APGEN's GUI. This GUI is written in C++ and Motif, and performs very well for large plans.
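The translation of procedural loops and if statements into controlled task expansions can be illustrated with a toy sketch. This is not the actual MAPGEN/ASPEN translator; the task names and the unrolling strategy are invented for illustration only.

```python
# Toy illustration: a bounded procedural loop becomes an ordered declarative
# decomposition of subtasks, and an if-statement becomes a controlled
# expansion that selects one branch as the task's decomposition.
def expand_loop(task_name: str, n_iterations: int) -> list[str]:
    """Unroll a bounded loop into named subtasks a declarative planner can schedule."""
    return [f"{task_name}_iter_{i}" for i in range(n_iterations)]

def expand_if(task_name: str, condition: bool) -> list[str]:
    """Translate an if-statement into a branch-selecting task expansion."""
    return [f"{task_name}_then"] if condition else [f"{task_name}_else"]

plan = expand_loop("heat_sample", 3) + expand_if("transmit", True)
print(plan)
# -> ['heat_sample_iter_0', 'heat_sample_iter_1', 'heat_sample_iter_2', 'transmit_then']
```

The key idea is that the procedural control flow disappears: what remains is a flat, ordered set of tasks whose constraints a declarative planner like ASPEN can reason about directly.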
Scale/TSUNAMI Sensitivity Data for ICSBEP Evaluations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rearden, Bradley T; Reed, Davis Allan; Lefebvre, Robert A
2011-01-01
The Tools for Sensitivity and Uncertainty Analysis Methodology Implementation (TSUNAMI) software developed at Oak Ridge National Laboratory (ORNL) as part of the Scale code system provides unique methods for code validation, gap analysis, and experiment design. For TSUNAMI analysis, sensitivity data are generated for each application and each existing or proposed experiment used in the assessment. The validation of diverse sets of applications requires potentially thousands of data files to be maintained and organized by the user, and a growing number of these files are available through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE) distributed through the International Criticality Safety Benchmark Evaluation Program (ICSBEP). To facilitate the use of the IHECSBE benchmarks in rigorous TSUNAMI validation and gap analysis techniques, ORNL generated SCALE/TSUNAMI sensitivity data files (SDFs) for several hundred benchmarks for distribution with the IHECSBE. For the 2010 edition of IHECSBE, the sensitivity data were generated using 238-group cross-section data based on ENDF/B-VII.0 for 494 benchmark experiments. Additionally, ORNL has developed a quality assurance procedure to guide the generation of Scale inputs and sensitivity data, as well as a graphical user interface to facilitate the use of sensitivity data in identifying experiments and applying them in validation studies.
Sabatini, Linda M; Mathews, Charles; Ptak, Devon; Doshi, Shivang; Tynan, Katherine; Hegde, Madhuri R; Burke, Tara L; Bossler, Aaron D
2016-05-01
The increasing use of advanced nucleic acid sequencing technologies for clinical diagnostics and therapeutics has made it vital to understand the costs of performing these procedures and their value to patients, providers, and payers. The Association for Molecular Pathology invested in a cost and value analysis of specific genomic sequencing procedures (GSPs) newly coded by the American Medical Association Current Procedural Terminology Editorial Panel. Cost data and work effort, including the development and use of data analysis pipelines, were gathered from representative laboratories currently performing these GSPs. Results were aggregated to generate representative cost ranges given the complexity and variability of performing the tests. Cost-impact models for three clinical scenarios were generated with assistance from key opinion leaders: impact of using a targeted gene panel in optimizing care for patients with advanced non-small-cell lung cancer, use of a targeted gene panel in the diagnosis and management of patients with sensorineural hearing loss, and exome sequencing in the diagnosis and management of children with neurodevelopmental disorders of unknown genetic etiology. Each model demonstrated value by either reducing health care costs or identifying appropriate care pathways. The templates generated will aid laboratories in assessing their individual costs, considering the value structure in their own patient populations, and contributing their data to the ongoing dialogue regarding the impact of GSPs on improving patient care. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Reynolds, Kellin; Barnhill, Danny; Sias, Jamie; Young, Amy; Polite, Florencia Greer
2014-12-01
A portable electronic method of providing instructional feedback and recording an evaluation of resident competency immediately following surgical procedures has not previously been documented in obstetrics and gynecology. This report presents a unique electronic format that documents resident competency and encourages verbal communication between faculty and residents immediately following operative procedures. The Microsoft Tag system and SurveyMonkey platform were linked by a 2-D QR code using Microsoft QR code generator. Each resident was given a unique code (TAG) embedded onto an ID card. An evaluation form was attached to each resident's file in SurveyMonkey. Postoperatively, supervising faculty scanned the resident's TAG with a smartphone and completed the brief evaluation using the phone's screen. The evaluation was reviewed with the resident and automatically submitted to the resident's educational file. The evaluation system was quickly accepted by residents and faculty. Of 43 residents and faculty in the study, 38 (88%) responded to a survey 8 weeks after institution of the electronic evaluation system. Thirty (79%) of the 38 indicated it was superior to the previously used handwritten format. The electronic system demonstrated improved utilization compared with paper evaluations, with a mean of 23 electronic evaluations submitted per resident during a 6-month period versus 14 paper assessments per resident during an earlier period of 6 months. This streamlined portable electronic evaluation is an effective tool for direct, formative feedback for residents, and it creates a longitudinal record of resident progress. Satisfaction with, and use of, this evaluation system was high.
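The TAG-to-evaluation-form linkage described above can be sketched with standard-library code. The URL pattern, parameter name, and base address below are assumptions for illustration; the actual system used Microsoft Tag and SurveyMonkey, whose services are not reproduced here.

```python
# Sketch: each resident gets a unique TAG embedded in their ID-card code;
# scanning it opens an evaluation form pre-linked to that resident's file.
import uuid
from urllib.parse import urlencode

def make_resident_tag() -> str:
    """Generate a unique identifier to embed in a resident's ID-card code."""
    return uuid.uuid4().hex[:8]

def evaluation_url(tag: str, base: str = "https://example.com/eval") -> str:
    """URL a supervisor's phone would open after scanning the resident's TAG
    (hypothetical endpoint; the study used a SurveyMonkey form per resident)."""
    return f"{base}?{urlencode({'resident_tag': tag})}"

tag = make_resident_tag()
print(evaluation_url(tag))
```

Because the TAG is fixed per resident, every submitted form routes automatically to the same longitudinal record, which is what makes the six-month progress tracking possible.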
Estimation of the behavior factor of existing RC-MRF buildings
NASA Astrophysics Data System (ADS)
Vona, Marco; Mastroberti, Monica
2018-01-01
In recent years, several research groups have studied a new generation of analysis methods for seismic response assessment of existing buildings. Nevertheless, many important developments are still needed in order to define more reliable and effective assessment procedures. Moreover, for existing buildings it should be highlighted that, owing to the low level of knowledge typically attainable, linear elastic analysis is often the only analysis method allowed. These codes (such as NTC2008 and EC8) consider linear dynamic analysis with a behavior factor as the reference method for the evaluation of seismic demand. This type of analysis is based on a linear-elastic structural model subject to a design spectrum, obtained by reducing the elastic spectrum through a behavior factor. The behavior factor (reduction factor or q factor in some codes) is used to reduce the elastic spectrum ordinate or the forces obtained from a linear analysis in order to take into account the non-linear structural capacities. The behavior factor should be defined based on several parameters that influence the seismic nonlinear capacity, such as mechanical material characteristics, structural system, irregularity and design procedures. In practical applications, there is still an evident lack of detailed rules and accurate behavior factor values adequate for existing buildings. In this work, some investigations of the seismic capacity of the main existing RC-MRF building types have been carried out. In order to make a correct evaluation of the seismic force demand, actual behavior factor values coherent with a force-based seismic safety assessment procedure have been proposed and compared with the values reported in the Italian seismic code, NTC08.
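The role of the behavior factor q can be shown with a one-line calculation: the design spectral ordinate is the elastic one divided by q. The numeric values below are illustrative placeholders, not the NTC08 or EC8 spectrum.

```python
# Minimal sketch of force reduction by a behavior factor q, as used in
# force-based seismic assessment: design demand = elastic demand / q.
def design_spectrum(se_elastic: float, q: float) -> float:
    """Design spectral ordinate from the elastic ordinate and behavior factor q."""
    if q < 1.0:
        raise ValueError("behavior factor must be >= 1")
    return se_elastic / q

# e.g. an elastic ordinate of 0.50 g with q = 3 gives a design demand of ~0.167 g
print(round(design_spectrum(0.50, 3.0), 3))  # -> 0.167
```

The study's point is that the appropriate q for an existing, possibly under-designed RC-MRF building may differ substantially from the code-tabulated values, which directly changes this reduced demand.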
Posttest calculations of bundle quench test CORA-13 with ATHLET-CD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bestele, J.; Trambauer, K.; Schubert, J.D.
Gesellschaft fuer Anlagen- und Reaktorsicherheit is developing, in cooperation with the Institut fuer Kernenergetik und Energiesysteme, Stuttgart, the system code Analysis of Thermalhydraulics of Leaks and Transients with Core Degradation (ATHLET-CD). The code consists of detailed models of the thermal hydraulics of the reactor coolant system. This thermo-fluid dynamics module is coupled with modules describing the early phase of the core degradation, like cladding deformation, oxidation and melt relocation, and the release and transport of fission products. The assessment of the code is being done by the analysis of separate effect tests, integral tests, and plant events. The code will be applied to the verification of severe accident management procedures. The out-of-pile test CORA-13 was conducted by Forschungszentrum Karlsruhe in their CORA test facility. The test consisted of two phases, a heatup phase and a quench phase. At the beginning of the quench phase, a sharp peak in the hydrogen generation rate was observed. Both phases of the test have been calculated with the system code ATHLET-CD. Special efforts have been made to simulate the heat losses and the flow distribution in the test facility and the thermal hydraulics during the quench phase. In addition to previous calculations, the material relocation and the quench phase have been modeled. The temperature increase during the heatup phase, the starting time of the temperature escalation, and the maximum temperatures have been calculated correctly. At the beginning of the quench phase, an increased hydrogen generation rate has been calculated as measured in the experiment.
Development of probabilistic multimedia multipathway computer codes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, C.; LePoire, D.; Gnanapragasam, E.
2002-01-01
The deterministic multimedia dose/risk assessment codes RESRAD and RESRAD-BUILD have been widely used for many years for evaluation of sites contaminated with residual radioactive materials. The RESRAD code applies to the cleanup of sites (soils) and the RESRAD-BUILD code applies to the cleanup of buildings and structures. This work describes the procedure used to enhance the deterministic RESRAD and RESRAD-BUILD codes for probabilistic dose analysis. A six-step procedure was used in developing default parameter distributions and the probabilistic analysis modules. These six steps include (1) listing and categorizing parameters; (2) ranking parameters; (3) developing parameter distributions; (4) testing parameter distributions for probabilistic analysis; (5) developing probabilistic software modules; and (6) testing probabilistic modules and integrated codes. The procedures used can be applied to the development of other multimedia probabilistic codes. The probabilistic versions of RESRAD and RESRAD-BUILD codes provide tools for studying the uncertainty in dose assessment caused by uncertain input parameters. The parameter distribution data collected in this work can also be applied to other multimedia assessment tasks and multimedia computer codes.
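Step (5) above, wrapping a deterministic code in a probabilistic module, amounts to a Monte Carlo loop over sampled input parameters. The dose model and the parameter distributions in this sketch are invented placeholders, not RESRAD's actual equations or default distributions.

```python
# Sketch of a probabilistic wrapper around a deterministic dose calculation:
# sample uncertain inputs from their distributions, run the deterministic
# model per sample, and study the resulting distribution of doses.
import random
import statistics

def deterministic_dose(soil_concentration: float, ingestion_rate: float) -> float:
    """Stand-in for one deterministic code run: dose ~ concentration x intake."""
    return soil_concentration * ingestion_rate * 1e-3

def probabilistic_dose(n_samples: int = 1000, seed: int = 42) -> list[float]:
    rng = random.Random(seed)          # seeded for reproducible QA runs
    doses = []
    for _ in range(n_samples):
        conc = rng.lognormvariate(1.0, 0.5)   # assumed lognormal concentration
        intake = rng.uniform(50.0, 150.0)     # assumed uniform ingestion rate
        doses.append(deterministic_dose(conc, intake))
    return doses

doses = probabilistic_dose()
print(statistics.mean(doses) > 0)  # output is a distribution, not a single value
```

The percentiles of `doses` are what the probabilistic RESRAD versions report in place of a single deterministic number.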
Farzandipour, Mehrdad; Sheikhtaheri, Abbas
2009-01-01
To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible effective factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
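The odds-ratio statistic named above can be computed from a 2x2 table with a few lines of standard-library code. The counts used here are invented for illustration, not data from the Kashan study.

```python
# Odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald 95% CI on log(OR),
# the standard formulation behind "OR and the 95 percent confidence interval".
from math import exp, log, sqrt

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical example: accurate/inaccurate coding with vs. without
# tabular-index rechecking (83% vs. 72% accuracy out of 100 records each).
or_, lo, hi = odds_ratio_ci(83, 17, 72, 28)
print(round(or_, 2))  # -> 1.9
```

An OR above 1 with a CI excluding 1 would indicate that rechecking against the tabular index is associated with higher coding accuracy.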
NASA Astrophysics Data System (ADS)
Treloar, W. J.; Taylor, G. E.; Flenley, J. R.
2004-12-01
This is the first of a series of papers on the theme of automated pollen analysis. The automation of pollen analysis could result in numerous advantages for the reconstruction of past environments, with larger data sets made practical, objectivity and fine resolution sampling. There are also applications in apiculture and medicine. Previous work on the classification of pollen using texture measures has been successful with small numbers of pollen taxa. However, as the number of pollen taxa to be identified increases, more features may be required to achieve a successful classification. This paper describes the use of simple geometric measures to augment the texture measures. The feasibility of this new approach is tested using scanning electron microscope (SEM) images of 12 taxa of fresh pollen taken from reference material collected on Henderson Island, Polynesia. Pollen images were captured directly from a SEM connected to a PC. A threshold grey-level was set and binary images were then generated. Pollen edges were then located and the boundaries were traced using a chain coding system. A number of simple geometric variables were calculated directly from the chain code of the pollen and a variable selection procedure was used to choose the optimal subset to be used for classification. The efficiency of these variables was tested using a leave-one-out classification procedure. The system successfully split the original 12 taxa sample into five sub-samples containing no more than six pollen taxa each. The further subdivision of echinate pollen types was then attempted with a subset of four pollen taxa. A set of difference codes was constructed for a range of displacements along the chain code. From these difference codes probability variables were calculated. A variable selection procedure was again used to choose the optimal subset of probabilities that may be used for classification. 
The efficiency of these variables was again tested using a leave-one-out classification procedure. The proportion of correctly classified pollen ranged from 81% to 100% depending on the subset of variables used. The best set of variables had an overall classification rate averaging at about 95%. This is comparable with the classification rates from the earlier texture analysis work for other types of pollen. Copyright
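The boundary chain coding and difference codes described above can be illustrated with 8-direction Freeman codes on a tiny invented boundary; the actual feature set and taxa data from the study are not reproduced.

```python
# Freeman chain code: each boundary step becomes a direction index 0..7
# (counter-clockwise from east). Difference codes, the basis of the
# probability variables, are successive changes of direction mod 8.
DIRS = {(1, 0): 0, (1, 1): 1, (0, 1): 2, (-1, 1): 3,
        (-1, 0): 4, (-1, -1): 5, (0, -1): 6, (1, -1): 7}

def chain_code(boundary):
    """Chain code of a closed boundary given as ordered pixel coordinates."""
    codes = []
    for (x0, y0), (x1, y1) in zip(boundary, boundary[1:] + boundary[:1]):
        codes.append(DIRS[(x1 - x0, y1 - y0)])
    return codes

def difference_code(codes):
    """Rotation-invariant difference code: successive direction changes mod 8."""
    return [(codes[(i + 1) % len(codes)] - codes[i]) % 8 for i in range(len(codes))]

square = [(0, 0), (1, 0), (1, 1), (0, 1)]  # a minimal square boundary
cc = chain_code(square)
print(cc, difference_code(cc))  # -> [0, 2, 4, 6] [2, 2, 2, 2]
```

Simple geometric variables (perimeter, turning statistics) fall out of the chain code directly, which is why it is a cheap source of classification features.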
NASA Technical Reports Server (NTRS)
Hall, E. J.; Topp, D. A.; Delaney, R. A.
1996-01-01
The overall objective of this study was to develop a 3-D numerical analysis for compressor casing treatment flowfields. The current version of the computer code resulting from this study is referred to as ADPAC (Advanced Ducted Propfan Analysis Codes-Version 7). This report is intended to serve as a computer program user's manual for the ADPAC code developed under Tasks 6 and 7 of the NASA Contract. The ADPAC program is based on a flexible multiple-block grid discretization scheme permitting coupled 2-D/3-D mesh block solutions with application to a wide variety of geometries. Aerodynamic calculations are based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. Steady flow predictions are accelerated by a multigrid procedure. An iterative implicit algorithm is available for rapid time-dependent flow calculations, and an advanced two-equation turbulence model is incorporated to predict complex turbulent flows. The consolidated code generated during this study is capable of executing in either a serial or parallel computing mode from a single source code. Numerous examples are given in the form of test cases to demonstrate the utility of this approach for predicting the aerodynamics of modern turbomachinery configurations.
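The four-stage Runge-Kutta time-marching update mentioned above can be sketched on a scalar model problem. The stage coefficients used here are the commonly cited Jameson-type values, an assumption for illustration rather than ADPAC's actual settings, and the residual is a toy decay term, not the finite-volume flux routines.

```python
# Multi-stage explicit time marching: each stage restarts from u0 and applies
# a scaled residual evaluation, u_k = u0 + a_k * dt * R(u_{k-1}).
def rk4_stage_step(u0: float, dt: float, residual, alphas=(0.25, 1/3, 0.5, 1.0)):
    u = u0
    for a in alphas:
        u = u0 + a * dt * residual(u)
    return u

# Model problem du/dt = -u; one step should closely track exp(-dt).
u1 = rk4_stage_step(1.0, 0.1, lambda u: -u)
print(u1)  # close to exp(-0.1) = 0.904837...
```

For linear problems this staging reproduces fourth-order accuracy while needing only one stored state, which is part of its appeal for large finite-volume flow solvers.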
Canbay, Ferhat; Levent, Vecdi Emre; Serbes, Gorkem; Ugurdag, H. Fatih; Goren, Sezer
2016-01-01
The authors aimed to develop an application for producing different architectures to implement dual tree complex wavelet transform (DTCWT) having near shift-invariance property. To obtain a low-cost and portable solution for implementing the DTCWT in multi-channel real-time applications, various embedded-system approaches are realised. For comparison, the DTCWT was implemented in C language on a personal computer and on a PIC microcontroller. However, in the former approach portability and in the latter desired speed performance properties cannot be achieved. Hence, implementation of the DTCWT on a reconfigurable platform such as field programmable gate array, which provides portable, low-cost, low-power, and high-performance computing, is considered as the most feasible solution. At first, they used the system generator DSP design tool of Xilinx for algorithm design. However, the design implemented by using such tools is not optimised in terms of area and power. To overcome all these drawbacks mentioned above, they implemented the DTCWT algorithm by using Verilog Hardware Description Language, which has its own difficulties. To overcome these difficulties, simplify the usage of proposed algorithms and the adaptation procedures, a code generator program that can produce different architectures is proposed. PMID:27733925
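A code generator of the kind described, a program that emits hardware description language text from a few architecture parameters, can be sketched minimally. The module layout, port names, and FIR-style interface below are invented for illustration; the authors' actual generator is not reproduced in the abstract.

```python
# Minimal sketch of a Verilog code generator: varying n_taps and width yields
# different architectures from the same template, which is the core idea of
# generating architectures programmatically instead of hand-writing HDL.
def generate_module(name: str, n_taps: int, width: int = 16) -> str:
    taps = ",\n    ".join(
        f"input signed [{width - 1}:0] coef_{i}" for i in range(n_taps)
    )
    return (
        f"module {name} (\n"
        f"    input clk,\n"
        f"    input signed [{width - 1}:0] sample_in,\n"
        f"    {taps},\n"
        f"    output reg signed [{2 * width - 1}:0] result\n"
        f");\n"
        f"endmodule\n"
    )

src = generate_module("dtcwt_stage", 4)
print(src.splitlines()[0])  # -> module dtcwt_stage (
```

Each distinct parameter set produces a distinct, synthesizable module skeleton, which is how one generator program can cover a family of DTCWT architectures.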
76 FR 12600 - Review of the Emergency Alert System
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-08
... appropriate, various administrative procedures for national tests, including test codes to be used and pre... administrative procedures for national tests, including test codes to be used and pre-test outreach. B. Summary... test codes to be used and pre-test outreach, the Commission has instructed the Bureau to factor in the...
28 CFR 36.605 - Procedure following preliminary denial of certification.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Local Building Codes § 36.605 Procedure following preliminary denial of certification. (a) If the Assistant Attorney General makes a preliminary determination to deny certification of a code under § 36.603... specification of the manner in which the code could be amended in order to qualify for certification. (b) The...
28 CFR 36.605 - Procedure following preliminary denial of certification.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Local Building Codes § 36.605 Procedure following preliminary denial of certification. (a) If the Assistant Attorney General makes a preliminary determination to deny certification of a code under § 36.603... specification of the manner in which the code could be amended in order to qualify for certification. (b) The...
28 CFR 36.605 - Procedure following preliminary denial of certification.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Local Building Codes § 36.605 Procedure following preliminary denial of certification. (a) If the Assistant Attorney General makes a preliminary determination to deny certification of a code under § 36.603... specification of the manner in which the code could be amended in order to qualify for certification. (b) The...
28 CFR 36.605 - Procedure following preliminary denial of certification.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Local Building Codes § 36.605 Procedure following preliminary denial of certification. (a) If the Assistant Attorney General makes a preliminary determination to deny certification of a code under § 36.603... specification of the manner in which the code could be amended in order to qualify for certification. (b) The...
28 CFR 36.606 - Procedure following preliminary denial of certification.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Local Building Codes § 36.606 Procedure following preliminary denial of certification. (a) If the Assistant Attorney General makes a Preliminary determination to deny certification of a code under § 36.604... specification of the manner in which the code could be amended in order to qualify for certification. (b) The...
The purpose of this SOP is to define the strategy for the Global Coding of Scanned Forms. This procedure applies to the Arizona NHEXAS project and the "Border" study. Keywords: Coding; scannable forms.
The National Human Exposure Assessment Survey (NHEXAS) is a federal interag...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-20
... Panel. This expertise encompasses hospital payment systems; hospital medical-care delivery systems; provider billing systems; APC groups, Current Procedural Terminology codes, and alpha-numeric Healthcare Common Procedure Coding System codes; and the use of, and payment for, drugs and medical devices in the...
Automatic Coding of Dialogue Acts in Collaboration Protocols
ERIC Educational Resources Information Center
Erkens, Gijsbert; Janssen, Jeroen
2008-01-01
Although protocol analysis can be an important tool for researchers to investigate the process of collaboration and communication, the use of this method of analysis can be time consuming. Hence, an automatic procedure for coding dialogue acts was developed. This procedure helps to determine the communicative function of messages in online…
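An automatic dialogue-act coder of this kind is, at its simplest, a rule-based classifier over message text. The rules and category names below are invented to illustrate the idea; they are not the coding scheme used by the authors.

```python
# Toy rule-based dialogue-act coder: the first matching pattern assigns the
# communicative-function code; unmatched messages fall back to "statement".
import re

RULES = [
    (r"\?\s*$", "question"),
    (r"^(yes|no|ok|okay)\b", "response"),
    (r"^(let's|we should|you could)\b", "proposal"),
]

def code_dialogue_act(message: str) -> str:
    text = message.strip().lower()
    for pattern, act in RULES:
        if re.search(pattern, text):
            return act
    return "statement"

print(code_dialogue_act("Let's start with the first section"))  # -> proposal
```

Real coders of this type typically combine many such lexical cues with discourse context, but the rule-lookup structure is the same.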
Computation of Steady and Unsteady Laminar Flames: Theory
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas; Radhakrishnan, Krishnan; Zhou, Ruhai
1999-01-01
In this paper we describe the numerical analysis underlying our efforts to develop an accurate and reliable code for simulating flame propagation using complex physical and chemical models. We discuss our spatial and temporal discretization schemes, which in our current implementations range in order from two to six. In space we use staggered meshes to define discrete divergence and gradient operators, allowing us to approximate complex diffusion operators while maintaining ellipticity. Our temporal discretization is based on the use of preconditioning to produce a highly efficient linearly implicit method with good stability properties. High order for time accurate simulations is obtained through the use of extrapolation or deferred correction procedures. We also discuss our techniques for computing stationary flames. The primary issue here is the automatic generation of initial approximations for the application of Newton's method. We use a novel time-stepping procedure, which allows the dynamic updating of the flame speed and forces the flame front towards a specified location. Numerical experiments are presented, primarily for the stationary flame problem. These illustrate the reliability of our techniques, and the dependence of the results on various code parameters.
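The staggered-mesh construction described above can be shown in one dimension: gradients live on faces between cells, divergence maps face values back to cell centers, and their composition yields the compact Laplacian. The grid size and test function are illustrative choices, not the paper's flame configuration.

```python
# 1-D staggered-mesh discrete operators: grad (centers -> faces) and
# div (faces -> centers); div(grad) gives the standard 3-point Laplacian,
# which is exact for quadratics.
def gradient_faces(u, dx):
    """Gradient at interior faces from cell-centered values."""
    return [(u[i + 1] - u[i]) / dx for i in range(len(u) - 1)]

def divergence_centers(g, dx):
    """Divergence at interior cell centers from face values."""
    return [(g[i + 1] - g[i]) / dx for i in range(len(g) - 1)]

dx = 0.1
centers = [i * dx for i in range(6)]
u = [x * x for x in centers]          # u = x^2, so div(grad u) = 2 exactly
lap = divergence_centers(gradient_faces(u, dx), dx)
print([round(v, 6) for v in lap])  # -> [2.0, 2.0, 2.0, 2.0]
```

Defining the operators this way is what preserves ellipticity of the composed diffusion operator even when diffusion coefficients vary.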
Audit of accuracy of clinical coding in oral surgery.
Naran, S; Hudovsky, A; Antscherl, J; Howells, S; Nouraei, S A R
2014-10-01
We aimed to study the accuracy of clinical coding within oral surgery and to identify ways in which it can be improved. We undertook a multidisciplinary audit of a sample of 646 day case patients who had had oral surgery procedures between 2011 and 2012. We compared the codes given with their case notes and amended any discrepancies. The accuracy of coding was assessed for primary and secondary diagnoses and procedures, and for health resource groupings (HRGs). The financial impact of coding Subjectivity, Variability and Error (SVE) was assessed by reference to national tariffs. The audit resulted in 122 (19%) changes to primary diagnoses. The codes for primary procedures changed in 224 (35%) cases; 310 (48%) morbidities and complications had been missed, and 266 (41%) secondary procedures had been missed or were incorrect. This led to at least one change of coding in 496 (77%) patients, and to HRG changes in 348 (54%) patients. The financial impact of this was £114 in lost revenue per patient. There is a high incidence of coding errors in oral surgery because of the large number of day cases, a lack of awareness by clinicians of coding issues, and because clinical coders are not always familiar with the large number of highly specialised abbreviations used. Accuracy of coding can be improved through the use of a well-designed proforma, and standards can be maintained by the use of an ongoing data quality assurance programme. Copyright © 2014. Published by Elsevier Ltd.
Patrick, Hannah; Sims, Andrew; Burn, Julie; Bousfield, Derek; Colechin, Elaine; Reay, Christopher; Alderson, Neil; Goode, Stephen; Cunningham, David; Campbell, Bruce
2013-03-01
New devices and procedures are often introduced into health services when the evidence base for their efficacy and safety is limited. The authors sought to assess the availability and accuracy of routinely collected Hospital Episodes Statistics (HES) data in the UK and their potential contribution to the monitoring of new procedures. Four years of HES data (April 2006-March 2010) were analysed to identify episodes of hospital care involving a sample of 12 new interventional procedures. HES data were cross checked against other relevant sources including national or local registers and manufacturers' information. HES records were available for all 12 procedures during the entire study period. Comparative data sources were available from national (5), local (2) and manufacturer (2) registers. Factors found to affect comparisons were miscoding, alternative coding and inconsistent use of subsidiary codes. The analysis of provider coverage showed that HES is sensitive at detecting centres which carry out procedures, but specificity is poor in some cases. Routinely collected HES data have the potential to support quality improvements and evidence-based commissioning of devices and procedures in health services but achievement of this potential depends upon the accurate coding of procedures.
Flexible Generation of Kalman Filter Code
NASA Technical Reports Server (NTRS)
Richardson, Julian; Wilson, Edward
2006-01-01
Domain-specific program synthesis can automatically generate high quality code in complex domains from succinct specifications, but the range of programs which can be generated by a given synthesis system is typically narrow. Obtaining code which falls outside this narrow scope necessitates either 1) extension of the code generator, which is usually very expensive, or 2) manual modification of the generated code, which is often difficult and which must be redone whenever changes are made to the program specification. In this paper, we describe adaptations and extensions of the AUTOFILTER Kalman filter synthesis system which greatly extend the range of programs which can be generated. Users augment the input specification with a specification of code fragments and how those fragments should interleave with or replace parts of the synthesized filter. This allows users to generate a much wider range of programs without their needing to modify the synthesis system or edit generated code. We demonstrate the usefulness of the approach by applying it to the synthesis of a complex state estimator which combines code from several Kalman filters with user-specified code. The work described in this paper allows the complex design decisions necessary for real-world applications to be reflected in the synthesized code. When executed on simulated input data, the generated state estimator was found to produce comparable estimates to those produced by a hand-coded estimator.
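For readers unfamiliar with the target domain, the kind of code a Kalman filter synthesizer emits looks like the following minimal scalar filter. This is a generic textbook predict/update cycle sketched for illustration, not output of AUTOFILTER itself.

```python
# One predict/update cycle of a scalar Kalman filter estimating a constant
# state: x is the estimate, p its variance, z a noisy measurement,
# q the process-noise variance, r the measurement-noise variance.
def kalman_step(x, p, z, q, r):
    p = p + q                      # predict: uncertainty grows by process noise
    k = p / (p + r)                # Kalman gain
    x = x + k * (z - x)            # update: blend prediction with measurement
    p = (1 - k) * p                # posterior variance shrinks
    return x, p

x, p = 0.0, 1.0
for z in [1.2, 0.9, 1.1, 1.0]:     # noisy measurements of a true value near 1.0
    x, p = kalman_step(x, p, z, q=1e-4, r=0.1)
print(round(x, 2))                 # estimate converges toward ~1.0
```

Synthesis systems like AUTOFILTER generate multi-dimensional versions of this loop (with matrix gains and covariances) from a high-level model specification; the paper's contribution is letting users splice their own code fragments into that generated loop.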
Grid generation about complex three-dimensional aircraft configurations
NASA Technical Reports Server (NTRS)
Klopfer, Goetz H.
1991-01-01
The problem of obtaining three-dimensional grids with sufficient resolution to resolve all the flow or other physical features of interest is addressed. The generation of a computational grid involves a series of compromises to resolve several conflicting requirements. On one hand, one would like the grid to be fine enough, and not too skewed, to reduce the numerical errors and to adequately resolve the pertinent physical features of the flow field about the aircraft. On the other hand, the capabilities of present or even future supercomputers are finite and the number of mesh points must be limited to a reasonable number: one which is usually much less than desired for numerical accuracy. One technique to overcome this limitation is the 'zonal' grid approach. In this method, the overall field is subdivided into smaller zones or blocks, in each of which an independent grid is generated with enough grid density to resolve the flow features in that zone. The zonal boundaries or interfaces require special boundary conditions such that the conservation properties of the governing equations are observed. Much work has been done on 3-D zonal approaches with nonconservative zonal interfaces. A 3-D zonal conservative interfacing method that is efficient and easy to implement was developed during the past year. During the course of the work, it became apparent that it would be much more feasible to do the conservative interfacing with cell-centered finite volume codes instead of the originally planned finite difference codes. Accordingly, the CNS code was converted to finite volume form. This new version of the code is named CNSFV. The original multi-zonal interfacing capability of the CNS code was enhanced by generalizing the procedure to allow for completely arbitrarily shaped zones with no mesh continuity between the zones. While this zoning capability works well for most flow situations, it is still nonconservative.
The conservative interface algorithm was also implemented but was not completely validated.
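The conservation argument behind a conservative zonal interface can be illustrated on a toy problem. A sketch, assuming 1D upwind advection on a periodic domain, where the flux at each face (including the zonal interface) is computed once and shared by both adjacent cells:

```python
# Why a conservative zonal interface matters in a cell-centred finite-volume
# scheme: the flux leaving zone 1 must equal the flux entering zone 2,
# otherwise the total conserved quantity drifts. Hypothetical 1D setup:
# cells 0-2 form "zone 1" and cells 3-5 form "zone 2".

def advect(u, dt_dx, a=1.0):
    """One upwind step of linear advection (a > 0), periodic domain."""
    n = len(u)
    flux = [a * u[i] for i in range(n)]     # flux through the right face of cell i
    # Each face flux appears once with + and once with -, so sum(u) telescopes.
    return [u[i] - dt_dx * (flux[i] - flux[i - 1]) for i in range(n)]

u = [0.0, 1.0, 2.0, 1.0, 0.0, 0.0]
total_before = sum(u)
for _ in range(10):
    u = advect(u, 0.5)                      # CFL number 0.5, stable
total_after = sum(u)
# Because the interface flux between cells 2 and 3 is computed once and used
# by both sides, sum(u) is conserved to round-off.
```

A nonconservative interface would evaluate the two sides of that face independently, and the telescoping cancellation, and hence conservation, would be lost.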
NASA Astrophysics Data System (ADS)
Hori, T.; Ichimura, T.
2015-12-01
Here we propose a system for monitoring and forecasting of crustal activity, especially great interplate earthquake generation and its preparation processes in subduction zones. Basically, we model great earthquake generation as frictional instability on the subducting plate boundary, so spatio-temporal variation in slip velocity on the plate interface should be monitored and forecasted. Although we can obtain continuous dense surface deformation data on land and partly at the sea bottom, the data obtained are not fully utilized for monitoring and forecasting. It is necessary to develop a physics-based data analysis system including (1) a structural model with the 3D geometry of the plate interface and material properties such as elasticity and viscosity, (2) calculation code for crustal deformation and seismic wave propagation using (1), and (3) inverse analysis or data assimilation code for both structure and fault slip using (1) and (2). To accomplish this, it is at least necessary to develop highly reliable large-scale simulation code to calculate crustal deformation and seismic wave propagation for 3D heterogeneous structure. Ichimura et al. (2014, SC14) developed an unstructured FE non-linear seismic wave simulation code, which achieved physics-based urban earthquake simulation enhanced by 10.7 BlnDOF x 30 K time-steps. Ichimura et al. (2013, GJI) developed a high-fidelity FEM simulation code, with mesh generator, to calculate crustal deformation in and around Japan with complicated surface topography and subducting plate geometry at 1 km mesh resolution. Further, for inverse analyses, Errol et al. (2012, BSSA) developed a waveform inversion code for modeling 3D crustal structure, and Agata et al. (2015, this meeting) improved the high-fidelity FEM code to apply an adjoint method for estimating fault slip and asthenosphere viscosity. Hence, we have large-scale simulation and analysis tools for monitoring.
Furthermore, we are developing methods for forecasting the slip velocity variation on the plate interface. The basic concept is given in Hori et al. (2014, Oceanography), which introduces an ensemble-based sequential data assimilation procedure. Although the prototype described there is for an elastic half-space model, we will apply it to 3D heterogeneous structure with the high-fidelity FE model.
Problem Formulation and Alternative Generation in the Decision Making Process
1988-06-30
… procedure will work satisfactorily (not optimally) as long as the organism has ample time to carry … among which the priorities are worked out. Neither problems nor opportunities can be considered for the agenda unless they are noticed, and except for …
Shick, G L; Hoover, L W; Moore, A N
1979-04-01
A data base was developed for a computer-assisted personnel data system for a university hospital department of dietetics which would store data on employees' employment, personnel information, attendance records, and termination. Development of the data base required designing computer programs and files, coding directions and forms for card input, and developing forms and procedures for on-line transmission. A program was written to compute accrued vacation, sick leave, and holiday time, and to generate historical records.
1986-11-01
Maclean, Donald; Younes, Hakim Ben; Forrest, Margaret; Towers, Hazel K
2012-03-01
Accurate and timely clinical data are required for clinical and organisational purposes and are especially important for patient management, audit of surgical performance and the electronic health record. The recent introduction of computerised theatre management systems has enabled real-time (point-of-care) operative procedure coding by clinical staff. However, the accuracy of these data is unknown. The aim of this Scottish study was to compare the accuracy of theatre nurses' real-time coding on the local theatre management system with the central Scottish Morbidity Record (SMR01). Paired procedural codes were recorded, qualitatively graded for precision and compared (n = 1038). In this study, real-time, point-of-care coding by theatre nurses resulted in significant coding errors compared with the central SMR01 database. Improved collaboration between full-time coders and clinical staff using computerised decision support systems is suggested.
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
Sasaki, Akinori; Hiraoka, Eiji; Homma, Yosuke; Takahashi, Osamu; Norisue, Yasuhiro; Kawai, Koji; Fujitani, Shigeki
2017-01-01
Code status discussion is associated with a decrease in invasive procedures among terminally ill cancer patients. We investigated the association between code status discussion on admission and the incidence of invasive procedures, cardiopulmonary resuscitation (CPR), and opioid use among inpatients with advanced stages of cancer and noncancer diseases. We performed a retrospective cohort study in a single center, Ito Municipal Hospital, Japan. Participants were patients who were admitted to the Department of Internal Medicine between October 1, 2013 and August 30, 2015, with advanced-stage cancer and noncancer disease. We collected demographic data and ascertained whether code status discussion took place within 24 hours of admission, and whether invasive procedures, including central venous catheter placement, intubation with mechanical ventilation, and CPR for cardiac arrest, and opioid treatment were performed. We investigated the factors associated with CPR events by using multivariate logistic regression analysis. Among the total 232 patients, code status was discussed with 115 patients on admission, of whom 114 (99.1%) had do-not-resuscitate (DNR) orders. Code status was not discussed with the remaining 117 patients on admission, of whom 69 (59%) had subsequent code status discussion with resultant DNR orders. Code status discussion on admission decreased the incidence of central venous catheter placement, intubation with mechanical ventilation, and CPR in both cancer and noncancer patients. It tended to increase the rate of opioid use. Code status discussion on admission was the only factor associated with the decreased use of CPR (P < 0.001, odds ratio = 0.03, 95% CI = 0.004-0.21) on multivariate logistic regression analysis. Code status discussion on admission is associated with a decrease in invasive procedures and CPR in cancer and noncancer patients.
Physicians should be educated about code status discussion to improve end-of-life care.
28 CFR 36.605 - Procedure following preliminary determination of equivalency.
Code of Federal Regulations, 2010 CFR
2010-07-01
... State Laws or Local Building Codes § 36.605 Procedure following preliminary determination of equivalency... of the preliminary determination of equivalency with respect to the particular code, and invite...
A qualitative study of DRG coding practice in hospitals under the Thai Universal Coverage scheme.
Pongpirul, Krit; Walker, Damian G; Winch, Peter J; Robinson, Courtland
2011-04-08
In the Thai Universal Coverage health insurance scheme, hospital providers are paid for their inpatient care using Diagnosis Related Group-based retrospective payment, for which the quality of the diagnosis and procedure codes is crucial. However, there has been limited understanding of which health care professions are involved and how the diagnosis and procedure coding is actually done within hospital settings. The objective of this study is to detail hospital coding structure and process, and to describe the roles of key hospital staff and other related internal dynamics in Thai hospitals that affect the quality of data submitted for inpatient care reimbursement. The research involved qualitative semi-structured interviews with 43 participants at 10 hospitals chosen to represent a range of hospital sizes (small/medium/large), locations (urban/rural), and types (public/private). Hospital coding practice has structural and process components. While the structural component includes human resources, hospital committees, and information technology infrastructure, the process component comprises all activities from patient discharge to submission of the diagnosis and procedure codes. At least eight health care professional disciplines are involved in the coding process, which comprises seven major steps, each of which involves different hospital staff: 1) Discharge Summarization, 2) Completeness Checking, 3) Diagnosis and Procedure Coding, 4) Code Checking, 5) Relative Weight Challenging, 6) Coding Report, and 7) Internal Audit. Hospital coding practice can be affected by at least five main factors: 1) Internal Dynamics, 2) Management Context, 3) Financial Dependency, 4) Resource and Capacity, and 5) External Factors. Hospital coding practice comprises both structural and process components, involves many health care professional disciplines, and varies greatly across hospitals as a result of five main factors.
Information retrieval based on single-pixel optical imaging with quick-response code
NASA Astrophysics Data System (ADS)
Xiao, Yin; Chen, Wen
2018-04-01
The quick-response (QR) code technique is combined with ghost imaging (GI) to recover original information with high quality. An image is first transformed into a QR code. The QR code is then treated as the input image in the input plane of a ghost imaging setup. After measurements, the traditional correlation algorithm of ghost imaging is used to reconstruct an image (in QR code form) with low quality. With this low-quality image as an initial guess, a Gerchberg-Saxton-like algorithm is used to improve its contrast, which serves as a post-processing step. Taking advantage of the high error-correction capability of QR codes, the original information can be recovered with high quality. Compared to the previous method, our method can obtain a high-quality image with comparatively fewer measurements, which means that the time-consuming post-processing procedure can be avoided to some extent. In addition, for conventional ghost imaging, the larger the image size, the more measurements are needed. For our method, however, images of different sizes can be converted into QR codes of the same small size by using a QR generator. Hence, for larger images, the time required to recover the original information with high quality is dramatically reduced. Our method also makes it easy to recover a color image in a ghost imaging setup, because it is not necessary to divide the color image into three channels and recover them separately.
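The traditional correlation algorithm mentioned above can be sketched for a tiny 1D object standing in for the QR code. The speckle patterns and object below are hypothetical, and the Gerchberg-Saxton-like refinement is omitted:

```python
# Sketch of correlation-based reconstruction in computational ghost imaging:
# G(x) = <B * I(x)> - <B><I(x)>, where B is the single-pixel (bucket) signal
# and I(x) the illumination pattern value at position x.
import random

random.seed(0)
N = 16                                   # tiny 1D "image" standing in for a QR code
obj = [1 if i % 3 == 0 else 0 for i in range(N)]

M = 5000                                 # number of speckle-pattern measurements
patterns, buckets = [], []
for _ in range(M):
    p = [random.random() for _ in range(N)]
    patterns.append(p)
    # bucket detector: total light transmitted by the object
    buckets.append(sum(pi * oi for pi, oi in zip(p, obj)))

mean_b = sum(buckets) / M
recon = []
for x in range(N):
    mean_i = sum(p[x] for p in patterns) / M
    cov = sum(b * p[x] for b, p in zip(buckets, patterns)) / M - mean_b * mean_i
    recon.append(cov)

# Bright object pixels should show larger correlation than dark ones.
bright = sum(recon[i] for i in range(N) if obj[i]) / sum(obj)
dark = sum(recon[i] for i in range(N) if not obj[i]) / (N - sum(obj))
```

The low-contrast reconstruction this produces is exactly the kind of initial guess the abstract feeds to the iterative refinement, after which the QR decoder's error correction recovers the payload.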
HangOut: generating clean PSI-BLAST profiles for domains with long insertions.
Kim, Bong-Hyun; Cong, Qian; Grishin, Nick V
2010-06-15
Profile-based similarity search is an essential step in structure-function studies of proteins. However, inclusion of non-homologous sequence segments into a profile causes its corruption and results in false positives. Profile corruption is common in multidomain proteins, and single domains with long insertions are a significant source of errors. We developed a procedure (HangOut) that, for a single domain with specified insertion position, cleans erroneously extended PSI-BLAST alignments to generate better profiles. HangOut is implemented in Python 2.3 and runs on all Unix-compatible platforms. The source code is available under the GNU GPL license at http://prodata.swmed.edu/HangOut/. Supplementary data are available at Bioinformatics online.
Ancient DNA sequence revealed by error-correcting codes.
Brandão, Marcelo M; Spoladore, Larissa; Faria, Luzinete C B; Rocha, Andréa S L; Silva-Filho, Marcio C; Palazzo, Reginaldo
2015-07-10
A previously described DNA sequence generator algorithm (DNA-SGA) using error-correcting codes has been employed as a computational tool to address the evolutionary pathway of the genetic code. The code-generated sequence alignment demonstrated that a residue mutation revealed by the code can be found in the same position in sequences of distantly related taxa. Furthermore, the code-generated sequences do not promote amino acid changes in the deviant genomes through codon reassignment. A Bayesian evolutionary analysis of both code-generated and homologous sequences of the Arabidopsis thaliana malate dehydrogenase gene indicates an approximately 1 MYA divergence time from the MDH code-generated sequence node to its paralogous sequences. The DNA-SGA helps to determine the plesiomorphic state of DNA sequences because a single nucleotide alteration often occurs in distantly related taxa and can be found in the alternative codon patterns of noncanonical genetic codes. As a consequence, the algorithm may reveal an earlier stage of the evolution of the standard code.
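The DNA-SGA itself is not reproduced here, but the underlying principle, an error-correcting code that both reveals and repairs a single changed symbol, can be illustrated with the classic Hamming(7,4) code over bits:

```python
# Hamming(7,4) sketch: encode 4 data bits with 3 parity bits, then locate
# and correct a single flipped "residue" from the parity syndrome.
# This is a generic illustration, not the DNA-SGA construction itself.

def hamming_encode(d):
    """d: 4 data bits -> 7-bit codeword [p1, p2, d0, p3, d1, d2, d3]."""
    p1 = d[0] ^ d[1] ^ d[3]      # parity over codeword positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]      # parity over positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]      # parity over positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_correct(c):
    """Return the codeword with any single-bit error repaired."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3      # 1-based position of the error, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1
    return c

word = hamming_encode([1, 0, 1, 1])
corrupted = list(word)
corrupted[4] ^= 1                        # flip one symbol
assert hamming_correct(corrupted) == word  # the code reveals and repairs it
```

In the same spirit, a code-generated sequence differing from an observed sequence at one position flags that position as a candidate mutation, which is the comparison the alignment analysis above exploits.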
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
What if pediatric residents could bill for their outpatient services?
Ng, M; Lawless, S T
2001-10-01
We prospectively studied the billing and coding practices of pediatric residents in outpatient clinics and extrapolated our results to assess the financial implications of billing inaccuracies. Using Medicare as a common measure of "currency," we also used the relative value unit (RVU) and ambulatory payment class methodologies as means of assessing the productivity and financial value of resident-staffed pediatric clinics. Residents were asked to submit voluntarily shadow billing forms and documentation of outpatient clinic visits. Documentation of work was assessed by a blinded reviewer, and Current Procedural Terminology evaluation and management codes were assigned. Comparisons between resident codes and calculated codes were made. Financial implications of physician productivity were calculated in terms of dollar amounts and RVUs. Resource intensity was measured using the ambulatory payment class methodology. A total of 344 charts were reviewed. Coding agreement for health maintenance visits was 86%, whereas agreement for acute care visits was 38%. Eighty-three percent of coding disagreement in the latter group resulted from undercoding by residents. Errors accounted for a 4.79% difference in potential reimbursement for all visit types and a 19.10% difference for acute care visits. No significant differences in shadow billing discrepancies were found between different levels of training. Residents were predicted to generate $67 230, $87 593, and $96 072 in Medicare revenue in the outpatient clinic setting during each successive year of training. On average, residents generated 1.17 +/- 0.01 and 0.81 +/- 0.02 work RVUs for each health maintenance visit and office visit, respectively. Annual productivity from outpatient clinic settings was estimated at 548, 735, and 893 work RVUs in postgraduate levels 1, 2, and 3, respectively.
When pediatric residents are not trained adequately in proper coding practices, the potential for billing discrepancies is high and potential reimbursement differences may be substantial. Discussion of financial issues should be considered in curriculum development.
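The agreement and reimbursement arithmetic behind a shadow-billing comparison can be sketched as follows; the CPT-style codes, RVU values, and dollar conversion factor below are illustrative placeholders, not the study's data:

```python
# Sketch of shadow-billing analysis: agreement rate between resident-assigned
# and reviewer-assigned codes, and the revenue gap implied by undercoding.
# All codes and dollar figures are hypothetical examples.

def agreement_rate(resident_codes, reviewer_codes):
    """Fraction of visits where the resident's code matched the reviewer's."""
    matches = sum(r == v for r, v in zip(resident_codes, reviewer_codes))
    return matches / len(resident_codes)

def reimbursement_gap(resident_rvus, reviewer_rvus, dollars_per_rvu=36.0):
    """Positive gap = revenue lost to undercoding (hypothetical conversion factor)."""
    return (sum(reviewer_rvus) - sum(resident_rvus)) * dollars_per_rvu

res = ["99212", "99213", "99212", "99214"]   # resident-assigned E/M codes
rev = ["99213", "99213", "99213", "99214"]   # blinded reviewer's codes

rate = agreement_rate(res, rev)              # 2 of 4 codes match -> 0.5
```

Scaled over a year of clinic visits, even a modest per-visit RVU shortfall of this kind compounds into the percentage-level reimbursement differences the study reports.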
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sibaev, M.; Crittenden, D. L., E-mail: deborah.crittenden@canterbury.ac.nz
In this paper, we outline a general, scalable, and black-box approach for calculating high-order strongly coupled force fields in rectilinear normal mode coordinates, based upon constructing low-order expansions in curvilinear coordinates with naturally limited mode-mode coupling, and then transforming between coordinate sets analytically. The optimal balance between accuracy and efficiency is achieved by transforming from 3-mode-representation quartic force fields in curvilinear normal mode coordinates to 4-mode-representation sextic force fields in rectilinear normal modes. Using this reduced mode-representation strategy introduces an error of only 1 cm⁻¹ in fundamental frequencies, on average, across a sizable test set of molecules. We demonstrate that if it is feasible to generate an initial semi-quartic force field in curvilinear normal mode coordinates from ab initio data, then the subsequent coordinate transformation procedure will be relatively fast with modest memory demands. This procedure facilitates solving the nuclear vibrational problem, as all required integrals can be evaluated analytically. Our coordinate transformation code is implemented within the extensible PyPES library program package, at http://sourceforge.net/projects/pypes-lib-ext/.
NASA Technical Reports Server (NTRS)
Cicon, D. E.; Sofrin, T. G.
1995-01-01
This report describes a procedure for enhancing the use of the basic rotating microphone system so as to determine the forward propagating mode components of the acoustic field in the inlet duct at the microphone plane in order to predict more accurate far-field radiation patterns. In addition, a modification was developed to obtain, from the same microphone readings, the forward acoustic modes generated at the fan face, which is generally some distance downstream of the microphone plane. Both these procedures employ computer-simulated calibrations of sound propagation in the inlet duct, based upon the current radiation code. These enhancement procedures were applied to previously obtained rotating microphone data for the 17-inch ADP fan. The forward mode components at the microphone plane were obtained and were used to compute corresponding far-field directivities. The second main task of the program involved finding the forward wave modes generated at the fan face in terms of the same total radial mode structure measured at the microphone plane. To obtain satisfactory results with the ADP geometry it was necessary to limit consideration to the propagating modes. Sensitivity studies were also conducted to establish guidelines for use in other fan configurations.
Unitary reconstruction of secret for stabilizer-based quantum secret sharing
NASA Astrophysics Data System (ADS)
Matsumoto, Ryutaroh
2017-08-01
We propose a unitary procedure to reconstruct the quantum secret for a quantum secret sharing scheme constructed from stabilizer quantum error-correcting codes. Erasure-correcting procedures for stabilizer codes need to add missing shares for reconstruction of the quantum secret, while unitary reconstruction procedures for a certain class of quantum secret sharing schemes are known to work without adding missing shares. The proposed procedure also works without adding missing shares.
Proposal for a new content model for the Austrian Procedure Catalogue.
Neururer, Sabrina B; Pfeiffer, Karl P
2013-01-01
The Austrian Procedure Catalogue is used for procedure coding in Austria. Its architecture and content have some major weaknesses. The aim of this study is the presentation of a new potential content model for this classification system, consisting of the main characteristics of health interventions. It is visualized using a UML class diagram. Based on this proposition, an implementation of an ontology for procedure coding is planned.
X-ray absorption radiography for high pressure shock wave studies
NASA Astrophysics Data System (ADS)
Antonelli, L.; Atzeni, S.; Batani, D.; Baton, S. D.; Brambrink, E.; Forestier-Colleoni, P.; Koenig, M.; Le Bel, E.; Maheut, Y.; Nguyen-Bui, T.; Richetta, M.; Rousseaux, C.; Ribeyre, X.; Schiavi, A.; Trela, J.
2018-01-01
The study of laser-compressed matter, both warm dense matter (WDM) and hot dense matter (HDM), is relevant to several research areas, including materials science, astrophysics, and inertial confinement fusion. X-ray absorption radiography is a unique tool to diagnose compressed WDM and HDM. The application of radiography to shock-wave studies is presented and discussed. In addition to the standard Abel inversion to recover a density map from a transmission map, a procedure has been developed to generate synthetic radiographs using density maps produced by the hydrodynamics code DUED. This procedure takes into account both source-target geometry and source size (which plays a non-negligible role in the interpretation of the data), and allows transmission data to be reproduced with a good degree of accuracy.
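The standard Abel inversion mentioned above can be sketched with a simple "onion peeling" discretisation; the ring geometry and density profile below are hypothetical, not DUED output:

```python
# Onion-peeling sketch of Abel inversion: recover ring densities of an
# axisymmetric object from chord-integrated projections, solving from the
# outermost ring inward. Geometry and densities are illustrative.
import math

def chord(i, j, dr):
    """Path length of the chord at radius i*dr through ring [j*dr, (j+1)*dr]."""
    ro, ri, y = (j + 1) * dr, j * dr, i * dr
    return 2.0 * (math.sqrt(ro * ro - y * y) - math.sqrt(max(ri * ri - y * y, 0.0)))

def onion_peel(projection, dr=1.0):
    """projection[i]: line integral along the chord at radius i*dr."""
    n = len(projection)
    rho = [0.0] * n
    for i in reversed(range(n)):            # peel from the outside in
        acc = projection[i]
        for j in range(i + 1, n):
            acc -= rho[j] * chord(i, j, dr)  # subtract outer-ring contributions
        rho[i] = acc / chord(i, i, dr)
    return rho

# Forward-project a known radial density, then invert it back.
true_rho = [2.0, 1.0, 0.5, 0.0]
proj = [sum(true_rho[j] * chord(i, j, 1.0) for j in range(i, 4)) for i in range(4)]
rec = onion_peel(proj)
```

Because the discretised system is upper triangular, back-substitution recovers the ring densities exactly (to round-off); with noisy experimental transmission data, regularised variants of this inversion are normally preferred.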
Progress of High Efficiency Centrifugal Compressor Simulations Using TURBO
NASA Technical Reports Server (NTRS)
Kulkarni, Sameer; Beach, Timothy A.
2017-01-01
Three-dimensional, time-accurate, and phase-lagged computational fluid dynamics (CFD) simulations of the High Efficiency Centrifugal Compressor (HECC) stage were generated using the TURBO solver. Changes to the TURBO Parallel Version 4 source code were made in order to properly model the no-slip boundary condition along the spinning hub region for centrifugal impellers. A startup procedure was developed to generate a converged flow field in TURBO. This procedure initialized computations on a coarsened mesh generated by the Turbomachinery Gridding System (TGS) and relied on a method of systematically increasing wheel speed and backpressure. Baseline design-speed TURBO results generally overpredicted total pressure ratio, adiabatic efficiency, and the choking flow rate of the HECC stage as compared with the design-intent CFD results of Code Leo. Including diffuser fillet geometry in the TURBO computation resulted in a 0.6 percent reduction in the choking flow rate and led to a better match with design-intent CFD. Diffuser fillets reduced annulus cross-sectional area but also reduced corner separation, and thus blockage, in the diffuser passage. It was found that the TURBO computations are somewhat insensitive to inlet total pressure changing from the TURBO default inlet pressure of 14.7 pounds per square inch (101.35 kilopascals) down to 11.0 pounds per square inch (75.83 kilopascals), the inlet pressure of the component test. Off-design tip clearance was modeled in TURBO in two computations: one in which the blade tip geometry was trimmed by 12 mils (0.3048 millimeters), and another in which the hub flow path was moved to reflect a 12-mil axial shift in the impeller hub, creating a step at the hub. The one-dimensional results of these two computations indicate non-negligible differences between the two modeling approaches.
Schrock, Linda E
2008-07-01
This article reviews the literature to date and reports on a new study that documented the frequency of manual code-requiring blood glucose (BG) meters that were miscoded at the time of the patient's initial appointment in a hospital-based outpatient diabetes education program. Between January 1 and May 31, 2007, the type of BG meter and the accuracy of the patient's meter code (if required) and procedure for checking BG were checked during the initial appointment with the outpatient diabetes educator. If indicated, reeducation regarding the procedure for the BG meter code entry and/or BG test was provided. Of the 65 patients who brought their meter requiring manual entry of a code number or code chip to the initial appointment, 16 (25%) were miscoded at the time of the appointment. Two additional problems, one of dead batteries and one of improperly stored test strips, were identified and corrected at the first appointment. These findings underscore the importance of checking the patient's BG meter code (if required) and procedure for testing BG at each encounter with a health care professional or providing the patient with a meter that does not require manual entry of a code number or chip to match the container of test strips (i.e., an autocode meter).
Progress in The Semantic Analysis of Scientific Code
NASA Technical Reports Server (NTRS)
Stewart, Mark
2000-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
Simulations of Coherent Synchrotron Radiation Effects in Electron Machines
NASA Astrophysics Data System (ADS)
Migliorati, M.; Schiavi, A.; Dattoli, G.
2007-09-01
Coherent synchrotron radiation (CSR) generated by high-intensity electron beams can be a source of undesirable effects limiting the performance of storage rings. The complexity of the physical mechanisms underlying the interplay between the electron beam and the CSR demands reliable simulation codes. In the past, codes based on Lie algebraic techniques have been very efficient for treating transport problems in accelerators. The extension of these methods to the nonlinear case is ideally suited to treat the wakefield-beam interaction. In this paper we report on the development of a numerical code, based on the solution of the Vlasov equation, which includes the nonlinear contribution due to wakefields. The proposed solution method exploits an algebraic technique that uses exponential operators. We show that, in the case of CSR wakefields, the integration procedure is capable of reproducing the onset of an instability which leads to microbunching of the beam, thus increasing the CSR at short wavelengths. In addition, considerations on the instability threshold for Gaussian bunches are also reported.
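The solution method above is built on exponential operators. As a minimal illustration of the idea (not the paper's Vlasov scheme), a linear system u' = Ju has the exact propagator exp(dt·J), and composing many small exponential steps reproduces a single large one; for the rotation generator J = [[0, 1], [-1, 0]] the propagator is a rotation matrix:

```python
import math

def propagate(x, p, dt):
    """Apply the exact exponential propagator exp(dt*J) of the rotation
    generator J = [[0, 1], [-1, 0]] to the phase-space point (x, p)."""
    c, s = math.cos(dt), math.sin(dt)
    return c * x + s * p, -s * x + c * p

# Composing 100 small exponential steps over one full period returns the
# point (1, 0) to itself, up to floating-point error.
x, p = 1.0, 0.0
for _ in range(100):
    x, p = propagate(x, p, 2.0 * math.pi / 100)
```

This exactness under composition is what makes exponential-operator splittings attractive for long-time integration of transport problems.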
The purpose of this SOP is to define the strategy for the global coding of scanned forms. This procedure applies to the Arizona NHEXAS project and the Border study. Keywords: Coding; scannable forms.
The U.S.-Mexico Border Program is sponsored by the Environmental Health Workg...
The purpose of this SOP is to define the global coding scheme to be used in the working and master databases. This procedure applies to all of the databases used during the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; databases.
The National Human Exposu...
Parzeller, Markus; Zedler, Barbara
2013-01-01
The article deals with the new regulations in the German Civil Code (BGB) which came into effect in Germany on 26 Feb 2013 as the Patient Rights Act (PatRG). In Part I, the legislative procedure, the treatment contract and the contracting parties (Section 630a Civil Code), the applicable regulations (Section 630b Civil Code) and the obligations to cooperate and inform (Section 630c Civil Code) are discussed and critically analysed.
Validation of Living Donor Nephrectomy Codes
Lam, Ngan N.; Lentine, Krista L.; Klarenbach, Scott; Sood, Manish M.; Kuwornu, Paul J.; Naylor, Kyla L.; Knoll, Gregory A.; Kim, S. Joseph; Young, Ann; Garg, Amit X.
2018-01-01
Background: Use of administrative data for outcomes assessment in living kidney donors is increasing given the rarity of complications and challenges with loss to follow-up. Objective: To assess the validity of living donor nephrectomy in health care administrative databases compared with the reference standard of manual chart review. Design: Retrospective cohort study. Setting: 5 major transplant centers in Ontario, Canada. Patients: Living kidney donors between 2003 and 2010. Measurements: Sensitivity and positive predictive value (PPV). Methods: Using administrative databases, we conducted a retrospective study to determine the validity of diagnostic and procedural codes for living donor nephrectomies. The reference standard was living donor nephrectomies identified through the province’s tissue and organ procurement agency, with verification by manual chart review. Operating characteristics (sensitivity and PPV) of various algorithms using diagnostic, procedural, and physician billing codes were calculated. Results: During the study period, there were a total of 1199 living donor nephrectomies. Overall, the best algorithm for identifying living kidney donors was the presence of 1 diagnostic code for kidney donor (ICD-10 Z52.4) and 1 procedural code for kidney procurement/excision (1PC58, 1PC89, 1PC91). Compared with the reference standard, this algorithm had a sensitivity of 97% and a PPV of 90%. The diagnostic and procedural codes performed better than the physician billing codes (sensitivity 60%, PPV 78%). Limitations: The donor chart review and validation study was performed in Ontario and may not be generalizable to other regions. Conclusions: An algorithm consisting of 1 diagnostic and 1 procedural code can be reliably used to conduct health services research that requires the accurate determination of living kidney donors at the population level. PMID:29662679
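The operating characteristics reported above reduce to set comparisons between algorithm-flagged cases and the chart-review reference standard. A generic sketch with toy data (not the study's actual data pipeline):

```python
# Sensitivity = true positives / all reference-standard cases;
# PPV = true positives / all cases flagged by the algorithm.

def sensitivity_ppv(reference_ids, algorithm_ids):
    """Compute sensitivity and positive predictive value of a code-based
    algorithm against a reference standard, given patient identifiers."""
    reference = set(reference_ids)
    flagged = set(algorithm_ids)
    true_positives = len(reference & flagged)
    sensitivity = true_positives / len(reference)
    ppv = true_positives / len(flagged)
    return sensitivity, ppv

# Toy example: 10 true donors; the algorithm flags 9 of them plus 1 false positive.
ref = range(10)
alg = list(range(1, 10)) + [99]
sens, ppv = sensitivity_ppv(ref, alg)
```

With real data, `reference_ids` would come from the procurement-agency chart review and `algorithm_ids` from the diagnostic/procedural code query.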
Auto Code Generation for Simulink-Based Attitude Determination Control System
NASA Technical Reports Server (NTRS)
MolinaFraticelli, Jose Carlos
2012-01-01
This paper details the work done to auto generate C code from a Simulink-Based Attitude Determination Control System (ADCS) to be used in target platforms. NASA Marshall engineers have developed an ADCS Simulink simulation to be used as a component for the flight software of a satellite. This generated code can be used for carrying out hardware-in-the-loop testing of satellite components in a convenient manner with easily tunable parameters. Due to the nature of embedded hardware components such as microcontrollers, this simulation code cannot be used directly on the target platform and must first be converted into C code; this process is known as auto code generation. In order to generate C code from this simulation, it must be modified to follow specific standards set in place by the auto code generation process. Some of these modifications include changing certain simulation models into their atomic representations, which can introduce new complications into the simulation. The execution order of these models can change based on these modifications. Great care must be taken to maintain a working simulation that can also be used for auto code generation. After modifying the ADCS simulation for the auto code generation process, it is shown that the difference between the output data of the former and that of the latter is within acceptable bounds. Thus, the process is a success, since all the output requirements are met. Based on these results, it can be argued that this generated C code can be effectively used by any desired platform as long as it follows the specific memory requirements established in the Simulink model.
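The final validation step, checking that the simulation output and the auto-generated code output differ only within acceptable bounds, amounts to an element-wise tolerance comparison. A generic sketch (the tolerance value is an arbitrary placeholder):

```python
def outputs_match(reference, generated, tol=1e-6):
    """True if every generated sample is within tol of the reference sample."""
    if len(reference) != len(generated):
        return False
    return all(abs(r - g) <= tol for r, g in zip(reference, generated))

ok = outputs_match([1.0, 2.0, 3.0], [1.0, 2.0000004, 3.0])   # within bounds
bad = outputs_match([1.0, 2.0, 3.0], [1.0, 2.1, 3.0])        # out of bounds
```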
Nonlinear, nonbinary cyclic group codes
NASA Technical Reports Server (NTRS)
Solomon, G.
1992-01-01
New cyclic group codes of length 2(exp m) - 1 over (m - j)-bit symbols are introduced. These codes can be systematically encoded and decoded algebraically. The code rates are very close to Reed-Solomon (RS) codes and are much better than Bose-Chaudhuri-Hocquenghem (BCH) codes (a former alternative). The binary (m - j)-tuples are identified with a subgroup of the binary m-tuples which represents the field GF(2 exp m). Encoding is systematic and involves a two-stage procedure consisting of the usual linear feedback register (using the division or check polynomial) and a small table lookup. For low rates, a second shift-register encoding operation may be invoked. Decoding uses the RS error-correcting procedures for the m-tuple codes for m = 4, 5, and 6.
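The first encoding stage above is the usual linear-feedback (polynomial-division) register. As a simplified binary illustration (the codes in the paper are nonbinary and add a small table lookup on top of this), systematic cyclic encoding over GF(2) divides x^r·m(x) by the generator polynomial and appends the remainder:

```python
def cyclic_encode(message_bits, generator_bits):
    """Systematic cyclic encoding over GF(2): append the remainder of
    x^r * m(x) divided by g(x), where r = deg(g). Bits are lists of 0/1,
    highest-degree coefficient first."""
    r = len(generator_bits) - 1
    register = list(message_bits) + [0] * r      # x^r * m(x)
    for i in range(len(message_bits)):
        if register[i]:                          # feedback when leading bit is 1
            for j, g in enumerate(generator_bits):
                register[i + j] ^= g
    parity = register[-r:]                       # remainder = parity bits
    return list(message_bits) + parity

# (7,4) cyclic Hamming code with g(x) = x^3 + x + 1 -> [1, 0, 1, 1]
codeword = cyclic_encode([1, 0, 1, 1], [1, 0, 1, 1])
```

Since the message here equals g(x), the remainder is zero and the codeword is the message followed by three zero parity bits.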
An Experiment in Scientific Code Semantic Analysis
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.
1998-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers will automatically recognize and document some static, semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. This ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
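The kind of static semantic check such parsers enable can be illustrated with a toy dimensional-analysis pass (purely illustrative, not the paper's parser): propagate physical dimensions through expressions from the declared primitives and flag inconsistent additions:

```python
def add_dims(a, b):
    """Addition is only meaningful between identical dimensions."""
    if a != b:
        raise ValueError(f"dimension mismatch: {a} + {b}")
    return a

def mul_dims(a, b):
    """Multiplication adds exponents per base unit (metres, seconds here)."""
    return tuple(x + y for x, y in zip(a, b))

# Dimensions as (metre exponent, second exponent) tuples.
VELOCITY, TIME, LENGTH = (1, -1), (0, 1), (1, 0)

distance = mul_dims(VELOCITY, TIME)   # velocity * time -> length: consistent
try:
    add_dims(VELOCITY, TIME)          # velocity + time: a semantic error
    consistent = True
except ValueError:
    consistent = False
```

A real semantic parser would attach such dimensions (and richer discipline-specific concepts) to variables via the user's annotations, then walk the parsed code.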
One-way quantum repeaters with quantum Reed-Solomon codes
NASA Astrophysics Data System (ADS)
Muralidharan, Sreraman; Zou, Chang-Ling; Li, Linshu; Jiang, Liang
2018-05-01
We show that quantum Reed-Solomon codes constructed from classical Reed-Solomon codes can approach the capacity of the quantum erasure channel of d-level systems for large dimension d. We study the performance of one-way quantum repeaters with these codes and obtain a significant improvement in key generation rate compared to previously investigated encoding schemes with quantum parity codes and quantum polynomial codes. We also compare the three generations of quantum repeaters using quantum Reed-Solomon codes and identify parameter regimes where each generation performs the best.
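The capacity being approached is that of the d-dimensional quantum erasure channel, which is known in closed form: Q = (1 - 2p)·log2(d) qubits per channel use for erasure probability p < 1/2, and zero otherwise. A small sketch comparing it with the rate of an [[n, k]] code over d-level systems (the formulas are standard results; the function names are ours):

```python
import math

def erasure_capacity(p, d):
    """Quantum capacity of the d-dimensional erasure channel, in qubits
    per channel use: (1 - 2p) * log2(d) for p < 1/2, else 0."""
    return max(0.0, (1.0 - 2.0 * p) * math.log2(d))

def code_rate(n, k, d):
    """Rate of an [[n, k]] code over d-level systems, in qubits per use."""
    return (k / n) * math.log2(d)
```

"Approaching capacity" then means choosing code families whose `code_rate` can be pushed toward `erasure_capacity` at the channel's erasure probability as d grows.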
Jézéquel, Laetitia; Loeper, Jacqueline; Pompon, Denis
2008-11-01
Combinatorial libraries coding for mosaic enzymes with predefined crossover points constitute useful tools to address and model structure-function relationships and for functional optimization of enzymes based on multivariate statistics. The presented method, called sequence-independent generation of a chimera-ordered library (SIGNAL), allows easy shuffling of any predefined amino acid segment between two or more proteins. This method is particularly well adapted to the exchange of protein structural modules. The procedure could also be well suited to generate ordered combinatorial libraries independent of sequence similarities in a robotized manner. Sequence segments to be recombined are first extracted by PCR from a single-stranded template coding for an enzyme of interest using a biotin-avidin-based method. This technique allows the reduction of parental template contamination in the final library. Specific PCR primers allow amplification of two complementary mosaic DNA fragments, overlapping in the region to be exchanged. Fragments are finally reassembled using a fusion PCR. The process is illustrated via the construction of a set of mosaic CYP2B enzymes using this highly modular approach.
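The segment exchange at the heart of the procedure has a simple in-silico analogue (illustrative only; the real method performs the swap via biotin-avidin extraction, mosaic PCR fragments, and fusion PCR):

```python
def swap_segment(parent_a, parent_b, start, end):
    """Return parent_a with positions [start, end) replaced by the
    corresponding segment of parent_b (sequences of equal length)."""
    assert len(parent_a) == len(parent_b)
    return parent_a[:start] + parent_b[start:end] + parent_a[end:]

# Toy 8-base parents; exchange the predefined segment at positions 2..4.
mosaic = swap_segment("AAAAAAAA", "GGGGGGGG", 2, 5)
```

Enumerating such swaps over all predefined crossover points is what generates an ordered combinatorial library of mosaics.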
Endobronchial Ultrasound: Clinical Uses and Professional Reimbursements.
Gildea, Thomas R; Nicolacakis, Katina
2016-12-01
Endobronchial ultrasonography (EBUS) has become an invaluable tool in the diagnosis of patients with a variety of thoracic abnormalities. The majority of EBUS procedures are used to diagnose and stage mediastinal and hilar abnormalities, as well as peripheral pulmonary targets, with a probe-based technology. Nearly 1,000 articles have been written about its use and utility. New Current Procedural Terminology (CPT) codes have been introduced in 2016 to better capture the work and clinical use associated with the various types of EBUS procedures. The existing 31620 code has been deleted and replaced by three new codes: 31652, 31653, and 31654. These new codes have been through the valuation process, and the new rule for reimbursement has been active since January 1, 2016 with National Correct Coding Initiative correction as of April 1, 2016. The impact of these new codes will result in a net reduction in professional and technical reimbursement. This article describes the current use of EBUS and explains the current codes and professional reimbursement. Copyright © 2016 American College of Chest Physicians. Published by Elsevier Inc. All rights reserved.
Gong, Dan; Jun, Lin; Tsai, James C
2015-05-01
To calculate the association between Medicare payment and service volume for 6 commonly performed glaucoma procedures. Retrospective, longitudinal database study. A 100% dataset of all glaucoma procedures performed on Medicare Part B beneficiaries within the United States from 2005 to 2009. Fixed-effects regression model using Medicare Part B carrier data for all 50 states and the District of Columbia, controlling for time-invariant carrier-specific characteristics, national trends in glaucoma service volume, Medicare beneficiary population, number of ophthalmologists, and income per capita. Payment-volume elasticities, defined as the percent change in service volume per 1% change in Medicare payment, for laser trabeculoplasty (Current Procedural Terminology [CPT] code 65855), trabeculectomy without previous surgery (CPT code 66170), trabeculectomy with previous surgery (CPT code 66172), aqueous shunt to reservoir (CPT code 66180), laser iridotomy (CPT code 66761), and scleral reinforcement with graft (CPT code 67255). The payment-volume elasticity was nonsignificant for 4 of 6 procedures studied: laser trabeculoplasty (elasticity, -0.27; 95% confidence interval [CI], -1.31 to 0.77; P = 0.61), trabeculectomy without previous surgery (elasticity, -0.42; 95% CI, -0.85 to 0.01; P = 0.053), trabeculectomy with previous surgery (elasticity, -0.28; 95% CI, -0.83 to 0.28; P = 0.32), and aqueous shunt to reservoir (elasticity, -0.47; 95% CI, -3.32 to 2.37; P = 0.74). Two procedures yielded significant associations between Medicare payment and service volume. For laser iridotomy, the payment-volume elasticity was -1.06 (95% CI, -1.39 to -0.72; P < 0.001): for every 1% decrease in CPT code 66761 payment, laser iridotomy service volume increased by 1.06%. 
For scleral reinforcement with graft, the payment-volume elasticity was -2.92 (95% CI, -5.72 to -0.12; P = 0.041): for every 1% decrease in CPT code 67255 payment, scleral reinforcement with graft service volume increased by 2.92%. This study calculated the association between Medicare payment and service volume for 6 commonly performed glaucoma procedures and found varying magnitudes of payment-volume elasticities, suggesting that the volume response to changes in Medicare payments, if present, is not uniform across all Medicare procedures. Copyright © 2015 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
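The payment-volume elasticity used above is the slope of a log-log relationship between service volume and payment: a slope of e means a 1% payment change is associated with an e% volume change. A hedged sketch with synthetic data (the study's actual model was a fixed-effects panel regression with several controls, not this simple OLS):

```python
import math

def elasticity_ols(payments, volumes):
    """Slope of an ordinary least-squares fit of log(volume) on log(payment)."""
    xs = [math.log(p) for p in payments]
    ys = [math.log(v) for v in volumes]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Synthetic data generated with true elasticity -1.06 (the value reported
# above for laser iridotomy); the fit recovers it.
payments = [80.0, 90.0, 100.0, 110.0, 120.0]
volumes = [p ** -1.06 * 1e6 for p in payments]
e = elasticity_ols(payments, volumes)
```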
1981-12-01
[Garbled extraction; recoverable content: documentation of the Texas Instruments Ada Optimizing Compiler code generators, which produce a Symbol Map (SYMAP), Statement Map (SMAP), and Type Map (TMAP) as library files per library unit and subunit, and of the PUNIT compiler command that drives them.]
2009-09-01
[Garbled extraction, including report-cover form residue; recoverable content: a sample of programs of instruction (POIs) was categorized, coded, and statistically analyzed using a mixed-methods approach combining qualitative coding procedures with basic statistics, in support of transitioning courses to a distributed (blended) learning format.]
Code OK3 - An upgraded version of OK2 with beam wobbling function
NASA Astrophysics Data System (ADS)
Ogoyski, A. I.; Kawata, S.; Popov, P. H.
2010-07-01
For computer simulations on heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 including an important capability of wobbling beam illumination. The wobbling beam introduces a unique possibility for a smooth mechanism of inertial fusion target implosion, so that sufficient fusion energy is released to construct a fusion reactor in future. New version program summary: Program title: OK3. Catalogue identifier: ADST_v3_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 221 517. No. of bytes in distributed program, including test data, etc.: 2 471 015. Distribution format: tar.gz. Programming language: C++. Computer: PC (Pentium 4, 1 GHz or more recommended). Operating system: Windows or UNIX. RAM: 2048 MBytes. Classification: 19.7. Catalogue identifier of previous version: ADST_v2_0. Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143. Does the new version supersede the previous version?: Yes. Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., a precise beam energy deposition is essentially important [1]. Codes OK1 and OK2 have been developed to simulate the heavy ion beam energy deposition in three-dimensional arbitrarily shaped targets [2, 3]. Wobbling beam illumination is important to smooth the beam energy deposition nonuniformity in HIF, so that a uniform target implosion is realized and a sufficient fusion output energy is released. Solution method: The OK3 code works on the base of OK1 and OK2 [2, 3].
The code simulates a multi-beam illumination on a target with arbitrary shape and structure, including the beam wobbling function. Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important one being the beam wobbling function. Summary of revisions: In the code OK3, beams are subdivided into many bunches. The displacement of each bunch center from the initial beam direction is calculated. Code OK3 allows the beamlet number to vary from bunch to bunch. That reduces the calculation error, especially in the case of very complicated mesh structures with big internal holes. The target temperature rises during the time of energy deposition. Some procedures are improved to perform faster. Energy conservation is checked at each step of the calculation process and corrected if necessary. New procedures included in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in chamber and pellet systems. Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process. Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process. Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process. Modified procedures in OK3: Procedure InitBeam( ) initializes the beam radius and coefficients A1, A2, A3, A4 and A5 for Gauss-distributed beams [2]; it is enlarged in OK3 and can set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow beamlet number variation from bunch to bunch during the deposition. Procedure ijkSp( ) and procedure Hole( ) are modified to perform faster.
Procedure Espl( ) and procedure ChechE( ) are modified to increase the calculation accuracy. Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation in energy deposition non-uniformity. This procedure is not included in code OK2 because of its limited applications (for spherical targets only). It is taken from code OK1 and modified to perform with code OK3. Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination, as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a Pentium 4, 2.4 GHz CPU. References: A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377. A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172. A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.
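The wobbling geometry can be sketched in a few lines (illustrative only; the names and numerical details are not taken from OK3): a bunch center displaced from the nominal beam axis and rotated about the impinging direction traces a circle on the target, smoothing the deposited energy over time:

```python
import math

def wobble_offset(radius, omega, t):
    """Transverse displacement of a bunch center at time t for a beam
    wobbled on a circle of the given radius at angular frequency omega."""
    angle = omega * t
    return radius * math.cos(angle), radius * math.sin(angle)

# A full wobble period returns the bunch center to its starting offset.
x0, y0 = wobble_offset(1.0, 2.0 * math.pi, 0.0)
x1, y1 = wobble_offset(1.0, 2.0 * math.pi, 1.0)
```

Successive bunches deposited at successive times thus land at successive angles, which is the smoothing mechanism the abstract attributes to wobbling illumination.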
Observing heliospheric neutral atoms at 1 AU
NASA Astrophysics Data System (ADS)
Heerikhuisen, Jacob; Pogorelov, Nikolai; Florinski, Vladimir; Zank, Gary
2006-09-01
Although in situ observations of distant heliospheric plasma by the Voyagers has proven to be extremely enlightening, such point observations need to be complemented with global measurements taken remotely to obtain a complete picture of the heliosphere and local interstellar environment. Neutral atoms, with their contempt for magnetic fields, provide useful probes of the plasma that generated them. However, there will be a number of ambiguities in neutral atom readings that require a deeper understanding of the plasma processes generating neutral atoms, as well as the loss mechanisms on their flight to the observation point. We introduce a procedure for generating all-sky maps of energetic H-atoms, calculated directly in our Monte-Carlo neutral atom code. Results obtained for a self-consistent axisymmetric MHD-Boltzmann calculation, as well as several non-selfconsistent 3D sky maps, will be presented.
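The basic step behind such an all-sky map is binning Monte-Carlo atom arrival directions into longitude/latitude pixels and accumulating a count (or flux) per pixel. An illustrative sketch, not the authors' code:

```python
def bin_sky_map(directions, n_lon=36, n_lat=18):
    """Count atoms per pixel on an n_lon x n_lat longitude/latitude grid.
    Directions are (lon, lat) pairs in degrees, lon in [0, 360), lat in [-90, 90]."""
    grid = [[0] * n_lon for _ in range(n_lat)]
    for lon, lat in directions:
        i = int((lat + 90.0) / 180.0 * n_lat)   # latitude row
        j = int(lon / 360.0 * n_lon)            # longitude column
        grid[min(i, n_lat - 1)][min(j, n_lon - 1)] += 1
    return grid

# Three toy atoms: two arriving from nearby directions, one from elsewhere.
atoms = [(10.0, 0.0), (12.0, 2.0), (200.0, -45.0)]
sky = bin_sky_map(atoms)
```

In a real energetic-neutral-atom map each count would be weighted by particle energy and survival probability against loss processes en route to 1 AU.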
Pretest analysis document for Semiscale Test S-FS-1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, T.H.
This report documents the pretest analysis calculation completed with the RELAP5/MOD2/CY21 code for Semiscale Test S-FS-1. The test will simulate the double-ended offset shear of the main steam line at the exit of the broken loop steam generator (downstream of the flow restrictor) and the subsequent plant recovery. The recovery portion of the test consists of a plant stabilization phase and a plant cooldown phase. The recovery procedures involve normal charging/letdown operation, pressurizer heater operation, secondary steam and feed of the unaffected steam generator, and pressurizer auxiliary spray. The test will be terminated after the unaffected steam generator and pressurizer pressures and liquid levels are stable, and the average primary fluid temperature is stable at about 480 K (405°F) for at least 10 minutes.
Engine structures modeling software system: Computer code. User's manual
NASA Technical Reports Server (NTRS)
1992-01-01
ESMOSS is a specialized software system for the construction of geometric descriptive and discrete analytical models of engine parts, components and substructures which can be transferred to finite element analysis programs such as NASTRAN. The software architecture of ESMOSS is designed in modular form with a central executive module through which the user controls and directs the development of the analytical model. Modules consist of a geometric shape generator, a library of discretization procedures, interfacing modules to join both geometric and discrete models, a deck generator to produce input for NASTRAN and a 'recipe' processor which generates geometric models from parametric definitions. ESMOSS can be executed both in interactive and batch modes. Interactive mode is considered to be the default mode and that mode will be assumed in the discussion in this document unless stated otherwise.
Late-presenting dural tear: incidence, risk factors, and associated complications.
Durand, Wesley M; DePasse, J Mason; Kuris, Eren O; Yang, JaeWon; Daniels, Alan H
2018-04-18
Unrecognized and inadequately repaired intraoperative durotomies may lead to cerebrospinal fluid leak, pseudomeningocele, and other complications. Few studies have investigated durotomy that is unrecognized intraoperatively and requires additional postoperative management (hereafter, late-presenting dural tear [LPDT]), although estimates of LPDT range from 0.6 to 8.3 per 1,000 spinal surgeries. These single-center studies are based on relatively small sample sizes for an event of this rarity, all with <10 patients experiencing LPDT. This investigation is the largest yet conducted on LPDT, and sought to identify incidence, risk factors for, and complications associated with LPDT. This observational cohort study employed the American College of Surgeons National Surgical Quality Improvement Program dataset (years 2012-2015). Patients who underwent spine surgery were identified based on presence of primary listed Current Procedural Terminology (CPT) codes corresponding to spinal fusion or isolated posterior decompression without fusion. The primary variable in this study was occurrence of LPDT, identified as reoperation or readmission with durotomy-specific CPT or International Classification of Diseases, Ninth Revision, Clinical Modification codes but without durotomy codes present for the index procedure. Descriptive statistics were generated. Bivariate and multivariate analyses were conducted using chi-square tests and multiple logistic regression, respectively, generating both risk factors for LPDT and independent association of LPDT with postoperative complications. Statistical significance was defined as p<.05. In total, 86,212 patients were analyzed. The overall rate of reoperation or readmission without reoperation for LPDT was 2.0 per 1,000 patients (n=174). Of LPDT patients, 97.7% required one or more unplanned reoperations (n=170), and 5.7% of patients (n=10) required two reoperations. 
On multivariate analysis, lumbar procedures (odds ratio [OR] 2.79, p<.0001, vs. cervical), procedures involving both cervical and lumbar levels (OR 3.78, p=.0338, vs. cervical only), procedures with decompression only (OR 1.72, p=.0017, vs. fusion and decompression), and operative duration ≥250 minutes (OR 1.70, p=.0058, vs. <250 minutes) were associated with increased likelihood of LPDT. Late-presenting dural tear was significantly associated with surgical site infection (SSI) (OR 2.54, p<.0001), wound disruption (OR 2.24, p<.0001), sepsis (OR 2.19, p<.0001), thromboembolism (OR 1.71, p<.0001), acute kidney injury (OR 1.59, p=.0281), pneumonia (OR 1.14, p=.0269), and urinary tract infection (UTI) (OR 1.08, p=.0057). Late-presenting dural tears occurred in 2.0 per 1,000 patients who underwent spine surgery. Patients who underwent lumbar procedures, decompression procedures, and procedures with operative duration ≥250 minutes were at increased risk for LPDT. Further, LPDT was independently associated with increased likelihood of SSI, sepsis, pneumonia, UTI, wound dehiscence, thromboembolism, and acute kidney injury. As LPDT is associated with markedly increased morbidity and potential liability risk, spine surgeons should be aware of best-practice management for LPDT and consider it a rare, but possible etiology for developing postoperative complications. Copyright © 2018 Elsevier Inc. All rights reserved.
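The case-identification logic described above can be sketched as a set test on procedure codes: a late-presenting dural tear is flagged when durotomy-specific codes appear at reoperation or readmission but not at the index procedure. The code values and field names below are placeholders for illustration, not the study's exact NSQIP variables:

```python
# Placeholder durotomy-repair CPT codes (illustrative, not the study's list).
DUROTOMY_CODES = {"63707", "63709"}

def is_lpdt(index_codes, followup_codes):
    """True if durotomy-specific codes occur only in the follow-up
    encounter (reoperation/readmission), not at the index procedure."""
    index = set(index_codes)
    followup = set(followup_codes)
    return bool(followup & DUROTOMY_CODES) and not (index & DUROTOMY_CODES)

# Index fusion without a durotomy code, readmission with a repair code -> LPDT.
flag = is_lpdt(["22612"], ["63707"])
# Durotomy already coded at the index procedure -> recognized, not an LPDT.
no_flag = is_lpdt(["22612", "63707"], ["63707"])
```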
Psychometric challenges and proposed solutions when scoring facial emotion expression codes.
Olderbak, Sally; Hildebrandt, Andrea; Pinkpank, Thomas; Sommer, Werner; Wilhelm, Oliver
2014-12-01
Coding of facial emotion expressions is increasingly performed by automated emotion expression scoring software; however, there is limited discussion on how best to score the resulting codes. We present a discussion of facial emotion expression theories and a review of contemporary emotion expression coding methodology. We highlight methodological challenges pertinent to scoring software-coded facial emotion expression codes and present important psychometric research questions centered on comparing competing scoring procedures of these codes. Then, on the basis of a time series data set collected to assess individual differences in facial emotion expression ability, we derive, apply, and evaluate several statistical procedures, including four scoring methods and four data treatments, to score software-coded emotion expression data. These scoring procedures are illustrated to inform analysis decisions pertaining to the scoring and data treatment of other emotion expression questions and under different experimental circumstances. Overall, we found applying loess smoothing and controlling for baseline facial emotion expression and facial plasticity are recommended methods of data treatment. When scoring facial emotion expression ability, maximum score is preferred. Finally, we discuss the scoring methods and data treatments in the larger context of emotion expression research.
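Two of the recommended steps, baseline correction and maximum-score aggregation, can be sketched minimally (the loess smoothing and facial-plasticity controls of the full procedure are omitted here):

```python
def score_expression(series, baseline):
    """Subtract a baseline expression level from a time series of
    software-coded intensities (clipping at zero) and return the maximum
    corrected value as the expression-ability score."""
    corrected = [max(0.0, v - baseline) for v in series]
    return max(corrected)

# Toy series: resting intensity 0.2, peak raw intensity 0.9 -> score 0.7.
score = score_expression([0.2, 0.4, 0.9, 0.5], baseline=0.2)
```

Controlling for the resting (baseline) face ensures the score reflects the elicited expression rather than the participant's neutral physiognomy.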
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
Automated Concurrent Blackboard System Generation in C++
NASA Technical Reports Server (NTRS)
Kaplan, J. A.; McManus, J. W.; Bynum, W. L.
1999-01-01
In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX™ workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
Automatic Certification of Kalman Filters for Reliable Code Generation
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd; Schumann, Johann; Richardson, Julian
2005-01-01
AUTOFILTER is a tool for automatically deriving Kalman filter code from high-level declarative specifications of state estimation problems. It can generate code with a range of algorithmic characteristics and for several target platforms. The tool has been designed with reliability of the generated code in mind and is able to automatically certify that the code it generates is free from various error classes. Since documentation is an important part of software assurance, AUTOFILTER can also automatically generate various human-readable documents, containing both design and safety related information. We discuss how these features address software assurance standards such as DO-178B.
PCC Framework for Program-Generators
NASA Technical Reports Server (NTRS)
Kong, Soonho; Choi, Wontae; Yi, Kwangkeun
2009-01-01
In this paper, we propose a proof-carrying code framework for program-generators. The enabling technique is abstract parsing, a static string analysis technique, which is used as a component for generating and validating certificates. Our framework provides an efficient solution for certifying program-generators whose safety properties are expressed in terms of the grammar representing the generated program. The fixed-point solution of the analysis is generated and attached with the program-generator on the code producer side. The consumer receives the code with a fixed-point solution and validates that the received fixed point is indeed a fixed point of the received code. This validation can be done in a single pass.
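The proof-carrying code pattern above, where the producer computes a fixed point iteratively and the consumer validates it in a single pass, can be sketched with a much simpler analysis than abstract parsing. The graph, the reachability analysis, and all names below are hypothetical stand-ins; the framework itself works on grammars of generated programs:

```python
# Minimal sketch of "ship the fixed point, validate in one pass", using set
# reachability in place of abstract parsing. The graph is hypothetical.

def compute_reachable(graph, start):
    """Producer side: iterate to a fixed point (may take many passes)."""
    reachable = {start}
    changed = True
    while changed:
        changed = False
        for node in list(reachable):
            for succ in graph.get(node, []):
                if succ not in reachable:
                    reachable.add(succ)
                    changed = True
    return reachable

def validate_reachable(graph, start, candidate):
    """Consumer side: a single pass checks the fixed-point property."""
    if start not in candidate:
        return False
    # Closed under successors, i.e. F(candidate) is contained in candidate.
    return all(succ in candidate
               for node in candidate
               for succ in graph.get(node, []))

graph = {"a": ["b"], "b": ["c"], "c": ["a"], "d": ["e"]}
cert = compute_reachable(graph, "a")
print(sorted(cert), validate_reachable(graph, "a", cert))
```

Note that the consumer only checks closure, so any sound over-approximation would also validate; this asymmetry (expensive computation on the producer side, cheap checking on the consumer side) is the point of the proof-carrying code arrangement.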
Incorporating Manual and Autonomous Code Generation
NASA Technical Reports Server (NTRS)
McComas, David
1998-01-01
Code can be generated manually or by code-generating software tools, but how do you integrate the two? This article looks at a design methodology that combines object-oriented design with autonomous code generation for attitude control flight software. Recent improvements in space flight computers are allowing software engineers to spend more time engineering the applications software. The application developed was the attitude control flight software for an astronomical satellite called the Microwave Anisotropy Probe (MAP). The MAP flight system is being designed, developed, and integrated at NASA's Goddard Space Flight Center. The MAP controls engineers are using Integrated Systems Inc.'s MATRIXx for their controls analysis. In addition to providing a graphical analysis environment, MATRIXx includes an automatic code generation facility called AutoCode. This article examines the forces that shaped the final design and describes three highlights of the design process: (1) defining the manual-to-autonomous code interface; (2) applying object-oriented design to the manual flight code; and (3) implementing the object-oriented design in C.
Gene-Auto: Automatic Software Code Generation for Real-Time Embedded Systems
NASA Astrophysics Data System (ADS)
Rugina, A.-E.; Thomas, D.; Olive, X.; Veran, G.
2008-08-01
This paper gives an overview of the Gene-Auto ITEA European project, which aims at building a qualified C code generator from mathematical models under Matlab-Simulink and Scilab-Scicos. The project is driven by major European industry partners active in the real-time embedded systems domain. The Gene-Auto code generator will significantly improve current development processes in such domains by shortening the time to market and by guaranteeing the quality of the generated code through the use of formal methods. The first version of the Gene-Auto code generator has already been released and has gone through a validation phase on real-life case studies defined by each project partner. The validation results are being taken into account in the implementation of the second version of the code generator. The partners aim at introducing the Gene-Auto results into industrial development by 2010.
Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van
2018-04-01
In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in close-geometry gamma spectroscopy. It included the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, showing good agreement between the two codes. Finally, the validity of the developed procedure was confirmed by a proficiency test calculating the activities of various radionuclides. The radioactivity measurements with both detectors using the advanced analytical procedure received 'Accepted' status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.
An Experiment in Scientific Program Understanding
NASA Technical Reports Server (NTRS)
Stewart, Mark E. M.; Owen, Karl (Technical Monitor)
2000-01-01
This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.
Zhang, Fangzheng; Ge, Xiaozhong; Gao, Bindong; Pan, Shilong
2015-08-24
A novel scheme for photonic generation of a phase-coded microwave signal is proposed, and its application in one-dimensional distance measurement is demonstrated. The proposed signal generator has a simple and compact structure based on a single dual-polarization modulator. Besides, the generated phase-coded signal is stable and free from DC and low-frequency backgrounds. An experiment is carried out: a 2 Gb/s phase-coded signal at 20 GHz is successfully generated, and the recovered phase information agrees well with the input 13-bit Barker code. To further investigate the performance of the proposed signal generator, its application in one-dimensional distance measurement is demonstrated. The measurement accuracy is less than 1.7 centimeters within a measurement range of ~2 meters. The experimental results verify the feasibility of the proposed phase-coded microwave signal generator and provide strong evidence to support its practical applications.
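The 13-bit Barker code mentioned above is chosen for ranging because its aperiodic autocorrelation has the lowest possible sidelobes (magnitude at most 1 against a peak of 13), which is what makes the recovered phase profile useful for precise distance measurement. That property is easy to check numerically:

```python
# Autocorrelation check of the 13-bit Barker code used in the phase-coding
# experiment above: peak 13, all sidelobe magnitudes at most 1.

BARKER_13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def autocorrelation(code, lag):
    """Aperiodic autocorrelation at a non-negative lag."""
    return sum(code[i] * code[i + lag] for i in range(len(code) - lag))

peak = autocorrelation(BARKER_13, 0)
sidelobes = [autocorrelation(BARKER_13, lag) for lag in range(1, 13)]
print(peak, max(abs(s) for s in sidelobes))
```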
A Fast Optimization Method for General Binary Code Learning.
Shen, Fumin; Zhou, Xiang; Yang, Yang; Song, Jingkuan; Shen, Heng; Tao, Dacheng
2016-09-22
Hashing or binary code learning has been recognized to accomplish efficient near neighbor search, and has thus attracted broad interest in recent retrieval, vision and learning studies. One main challenge of learning to hash arises from the involvement of discrete variables in binary code optimization. While the widely used continuous relaxation may achieve high learning efficiency, the pursued codes are typically less effective due to accumulated quantization error. In this work, we propose a novel binary code optimization method, dubbed Discrete Proximal Linearized Minimization (DPLM), which directly handles the discrete constraints during the learning process. Specifically, the discrete (thus nonsmooth nonconvex) problem is reformulated as minimizing the sum of a smooth loss term with a nonsmooth indicator function. The obtained problem is then efficiently solved by an iterative procedure with each iteration admitting an analytical discrete solution, which is thus shown to converge very fast. In addition, the proposed method supports a large family of empirical loss functions, which is particularly instantiated in this work by both supervised and unsupervised hashing losses, together with the bit uncorrelation and balance constraints. In particular, the proposed DPLM with a supervised ℓ2 loss encodes the whole NUS-WIDE database into 64-bit binary codes within 10 seconds on a standard desktop computer. The proposed approach is extensively evaluated on several large-scale datasets and the generated binary codes are shown to achieve very promising results on both retrieval and classification tasks.
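The "analytical discrete solution" at each iteration can be illustrated on a toy instance: for a smooth loss with codes constrained to {-1, +1}^n, each proximal linearized step reduces to a closed-form sign operation. The quadratic loss, target vector, and step size below are hypothetical simplifications, not the paper's hashing losses:

```python
# Toy discrete proximal linearized step: for smooth L(b) = 0.5 * ||b - z||^2
# with b constrained to {-1, +1}^n, each iteration has the closed-form
# solution sign(b - eta * grad L(b)). Target z and eta are hypothetical.

def sign(x):
    return 1 if x >= 0 else -1

def loss(b, z):
    return 0.5 * sum((bi - zi) ** 2 for bi, zi in zip(b, z))

def dplm_step(b, z, eta):
    grad = [bi - zi for bi, zi in zip(b, z)]       # gradient of L at b
    return [sign(bi - eta * g) for bi, g in zip(b, grad)]

z = [0.9, -0.4, 0.1, -2.0]        # real-valued target (hypothetical)
b = [-1, -1, -1, 1]               # arbitrary binary starting code
for _ in range(10):
    b = dplm_step(b, z, eta=1.0)

print(b)
```

On this separable toy problem the iteration lands on sign(z) immediately and stays there, which mirrors the fast convergence the abstract reports for the real (coupled) hashing objectives.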
Correct coding for laboratory procedures during assisted reproductive technology cycles.
2016-04-01
This document provides updated coding information for services related to assisted reproductive technology procedures. This document replaces the 2012 ASRM document of the same name. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Utilization of an Academic Nursing Center.
ERIC Educational Resources Information Center
Cole, Frank L.; Mackey, Thomas
1996-01-01
Using data from an academic nursing center that cared for 3,263 patients over eight months, diseases were classified using International Classification of Diseases codes, and procedures were classified using Current Procedural Terminology codes. Patterns of health care emerged, with implications for clinical teaching. (SK)
Storage and retrieval of mass spectral information
NASA Technical Reports Server (NTRS)
Hohn, M. E.; Humberston, M. J.; Eglinton, G.
1977-01-01
Computer handling of mass spectra serves two main purposes: the interpretation of the occasional, problematic mass spectrum, and the identification of the large number of spectra generated in the gas-chromatographic-mass spectrometric (GC-MS) analysis of complex natural and synthetic mixtures. Methods available fall into the three categories of library search, artificial intelligence, and learning machine. Optional procedures for coding, abbreviating and filtering a library of spectra minimize time and storage requirements. Newer techniques make increasing use of probability and information theory in accessing files of mass spectral information.
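The library-search category surveyed above reduces, in its simplest form, to coding each spectrum as a peak list, abbreviating it to a fixed-length vector, and ranking library entries by a similarity score. The binning scheme, the cosine measure, and the two library spectra below are hypothetical illustrations, not any specific historical system:

```python
# Minimal library-search sketch: spectra coded as (m/z, intensity) peak
# lists, abbreviated to fixed bin vectors, matched by cosine similarity.
# Library contents and binning are hypothetical.

import math

def to_bins(peaks, n_bins=100):
    """Abbreviate a peak list into a fixed-length intensity vector."""
    vec = [0.0] * n_bins
    for mz, intensity in peaks:
        vec[min(int(mz), n_bins - 1)] += intensity
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

library = {
    "hexane":  [(41, 40.0), (43, 100.0), (57, 85.0), (86, 10.0)],
    "benzene": [(51, 20.0), (77, 25.0), (78, 100.0)],
}

unknown = [(43, 95.0), (41, 45.0), (57, 80.0), (86, 12.0)]
query = to_bins(unknown)
best = max(library, key=lambda name: cosine(query, to_bins(library[name])))
print(best)
```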
Cryptographic salting for security enhancement of double random phase encryption schemes
NASA Astrophysics Data System (ADS)
Velez Zea, Alejandro; Fredy Barrera, John; Torroba, Roberto
2017-10-01
Security in optical encryption techniques is a subject of great importance, especially in light of recent reports of successful attacks. We propose a new procedure to reinforce the ciphertexts generated in double random phase encrypting experimental setups. This ciphertext is protected by multiplexing with a ‘salt’ ciphertext coded with the same setup. We present an experimental implementation of the ‘salting’ technique. Thereafter, we analyze the resistance of the ‘salted’ ciphertext under some of the commonly known attacks reported in the literature, demonstrating the validity of our proposal.
Prediction of sound radiation from different practical jet engine inlets
NASA Technical Reports Server (NTRS)
Zinn, B. T.; Meyer, W. L.
1981-01-01
Computer codes, capable of producing accurate results for nondimensional wave numbers (based on duct radius) of up to 20, were developed and used to generate results for various other inlet configurations. Both reflection coefficients and radiation patterns were calculated by the integral solution procedure for the following five inlet configurations: the NASA Langley Bellmouth, the NASA Lewis JT-15D-1 ground test nacelle, and three hyperbolic inlets of 50, 70, and 90 degrees. Results obtained are compared with results from other experimental and theoretical studies.
ERIC Educational Resources Information Center
New Mexico Univ., Albuquerque. American Indian Law Center.
The Model Children's Code was developed to provide a legally correct model code that American Indian tribes can use to enact children's codes that fulfill their legal, cultural and economic needs. Code sections cover the court system, jurisdiction, juvenile offender procedures, minor-in-need-of-care, and termination. Almost every Code section is…
New GOES satellite synchronized time code generation
NASA Technical Reports Server (NTRS)
Fossler, D. E.; Olson, R. K.
1984-01-01
The TRAK Systems' GOES Satellite Synchronized Time Code Generator is described. TRAK Systems has developed this timing instrument to supply improved accuracy over most existing GOES receiver clocks. A classical time code generator is integrated with a GOES receiver.
NASA Astrophysics Data System (ADS)
Ivankovic, D.; Dadic, V.
2009-04-01
Some oceanographic parameters have to be inserted into the database manually; others (for example, data from a CTD probe) are loaded from various files. All of these parameters require visualization, validation, and manipulation from research vessels or scientific institutions, as well as public presentation. For these purposes, a web-based system containing dynamic SQL procedures and Java applets was developed. The technology background is an Oracle 10g relational database and Oracle Application Server. Web interfaces are developed using PL/SQL stored database procedures (mod PL/SQL). Additional parts for data visualization include Java applets and JavaScript. The mapping tool is the Google Maps API (JavaScript), with a Java applet as an alternative. The graph is realized as a dynamically generated web page containing a Java applet. The mapping tool and graph are georeferenced: a click on some part of the graph automatically initiates a zoom or marker at the location where the parameter was measured. This feature is very useful for data validation. The code for data manipulation and visualization is partially realized with dynamic SQL, which allows us to separate data definition from the code for data manipulation. Adding a new parameter to the system requires only its data definition and description, without programming an interface for that kind of data.
DRG benchmarking study establishes national coding norms.
Vaul, J H
1998-05-01
With the increase in fraud and abuse investigations, healthcare financial managers should examine their organization's medical record coding procedures. The Federal government and third-party payers are looking specifically for improper billing of outpatient services, unbundling of procedures to increase payment, assigning higher-paying DRG codes for inpatient claims, and other abuses. A recent benchmarking study of Medicare Provider Analysis and Review (MEDPAR) data has established national norms for hospital coding and case mix based on DRGs and has revealed the majority of atypical coding cases fall into six DRG pairs. Organizations with a greater percentage of atypical cases--those more likely to be scrutinized by Federal investigators--will want to conduct suitable review and be sure appropriate documentation exists to justify the coding.
Boan, Andrea D; Voeks, Jenifer H; Feng, Wuwei Wayne; Bachman, David L; Jauch, Edward C; Adams, Robert J; Ovbiagele, Bruce; Lackland, Daniel T
2014-01-01
The use of International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9) diagnostic codes can identify racial disparities in ischemic stroke hospitalizations; however, inclusion of revascularization procedure codes as acute stroke events may affect the magnitude of the risk difference. This study assesses the impact of excluding revascularization procedure codes from the ICD-9 definition of ischemic stroke, compared with the traditional inclusive definition, on racial disparity estimates for stroke incidence and recurrence. Patients discharged with a diagnosis of ischemic stroke (ICD-9 codes 433.00-434.91 and 436) were identified from a statewide inpatient discharge database from 2010 to 2012. Race- and age-specific disparity estimates of stroke incidence and recurrence and 1-year cumulative recurrent stroke rates were compared between the routinely used traditional classification and a modified classification of stroke that excluded primary ICD-9 cerebral revascularization procedure codes (38.12, 00.61, and 00.63). The traditional classification identified 7878 stroke hospitalizations, whereas the modified classification resulted in 18% fewer hospitalizations (n = 6444). The age-specific black-to-white rate ratios were significantly higher in the modified than in the traditional classification for stroke incidence (rate ratio, 1.50; 95% confidence interval [CI], 1.43-1.58 vs. rate ratio, 1.24; 95% CI, 1.18-1.30, respectively). In whites, the 1-year cumulative recurrence rate was significantly reduced by 46% (45-64 years) and 49% (≥ 65 years) in the modified classification, largely explained by a higher rate of cerebral revascularization procedures among whites. There were nonsignificant reductions of 14% (45-64 years) and 19% (≥ 65 years) among blacks.
Including cerebral revascularization procedure codes overestimates hospitalization rates for ischemic stroke and significantly underestimates the racial disparity estimates in stroke incidence and recurrence. Copyright © 2014 National Stroke Association. Published by Elsevier Inc. All rights reserved.
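The black-to-white rate ratios and confidence intervals reported above are standard incidence-rate-ratio calculations; the computation can be sketched with a log-scale Wald interval. The event counts and person-years below are hypothetical round numbers, not the study's data:

```python
# Incidence rate ratio of group A vs group B with a 95% log-scale Wald
# confidence interval. Counts and person-years here are hypothetical.

import math

def rate_ratio_ci(events_a, py_a, events_b, py_b, z=1.96):
    rr = (events_a / py_a) / (events_b / py_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = math.exp(math.log(rr) - z * se_log)
    hi = math.exp(math.log(rr) + z * se_log)
    return rr, lo, hi

rr, lo, hi = rate_ratio_ci(events_a=300, py_a=100_000,
                           events_b=200, py_b=100_000)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```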
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION INFANT FORMULA QUALITY CONTROL PROCEDURES Quality Control Procedures for Assuring Nutrient Content...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION INFANT FORMULA QUALITY CONTROL PROCEDURES Quality Control Procedures for Assuring Nutrient Content...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION INFANT FORMULA QUALITY CONTROL PROCEDURES Quality Control Procedures for Assuring Nutrient Content...
Punzalan, Florencio Rusty; Kunieda, Yoshitoshi; Amano, Akira
2015-01-01
Clinical and experimental studies involving human hearts can have certain limitations. Methods such as computer simulations can be an important alternative or supplemental tool. Physiological simulation at the tissue or organ level typically involves the handling of partial differential equations (PDEs). Boundary conditions and distributed parameters, such as those used in pharmacokinetics simulation, add to the complexity of the PDE solution. These factors can tailor PDE solutions and their corresponding program code to specific problems. Boundary condition and parameter changes in the customized code are usually error-prone and time-consuming. We propose a general approach for handling PDEs and boundary conditions in computational models using a replacement scheme for discretization. This study is an extension of a program generator that we introduced in a previous publication. The program generator can generate code for multi-cell simulations of cardiac electrophysiology. Improvements to the system allow it to handle simultaneous equations in the biological function model as well as implicit PDE numerical schemes. The replacement scheme involves substituting all partial differential terms with numerical solution equations. Once the model and boundary equations are discretized with the numerical solution scheme, instances of the equations are generated to undergo dependency analysis. The result of the dependency analysis is then used to generate the program code. The resulting program code is in the Java or C programming language. To validate the automatic handling of boundary conditions in the program code generator, we generated simulation code using the FHN, Luo-Rudy 1, and Hund-Rudy cell models and ran cell-to-cell coupling and action potential propagation simulations. One of the simulations is based on a published experiment, and the simulation results are compared with the experimental data.
We conclude that the proposed program code generator can be used to generate code for physiological simulations and provides a tool for studying cardiac electrophysiology. PMID:26356082
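The replacement scheme described above, substituting each partial differential term with its discretized form, can be illustrated for a single 1-D diffusion term with no-flux boundaries, which is the simplest caricature of cell-to-cell electrical coupling. The grid size, coefficients, and initial pulse below are hypothetical, and the scheme is a generic explicit finite difference, not the generator's own numerics:

```python
# Sketch of the term-replacement idea: the continuous term d2u/dx2 is
# substituted by (u[i-1] - 2*u[i] + u[i+1]) / dx**2, with zero-flux
# (Neumann) boundaries handled by a ghost value equal to the edge value.
# All parameters are hypothetical.

def diffusion_step(u, d, dt, dx):
    """One explicit Euler step of du/dt = d * d2u/dx2."""
    n = len(u)
    new = [0.0] * n
    for i in range(n):
        # Zero-flux boundary: ghost cell copies the edge value, which makes
        # the scheme conserve the total amount exactly.
        left = u[i - 1] if i > 0 else u[i]
        right = u[i + 1] if i < n - 1 else u[i]
        new[i] = u[i] + dt * d * (left - 2 * u[i] + right) / dx ** 2
    return new

u = [0.0] * 10
u[5] = 1.0                                         # initial pulse
for _ in range(100):
    u = diffusion_step(u, d=1.0, dt=0.1, dx=1.0)   # dt*d/dx^2 = 0.1, stable

print(round(sum(u), 6))
```

The conserved total is a quick check that the discretized boundary equations were substituted consistently, which is exactly the kind of error the abstract says hand-customized code is prone to.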
Trends in Utilization of Vocal Fold Injection Procedures.
Rosow, David E
2015-11-01
Office-based vocal fold injections have become increasingly popular over the past 15 years. Examination of trends in procedure coding for vocal fold injections in the United States from 2000 to 2012 was undertaken to see if they reflect this shift. The US Part B Medicare claims database was queried from 2000 through 2012 for multiple Current Procedural Terminology codes. Over the period studied, the number of nonoperative laryngoscopic injections (31513, 31570) and operative medialization laryngoplasties (31588) remained constant. Operative vocal fold injection (31571) demonstrated marked linear growth over the 12-year study period, from 744 procedures in 2000 to 4788 in 2012, an increase of >640%. The dramatic increase in the use of code 31571 reflects an increasing share of vocal fold injections being performed in the operating room and not in an office setting, running counter to the prevailing trend toward awake, office-based injection procedures. © American Academy of Otolaryngology—Head and Neck Surgery Foundation 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakhai, B.
A new method for solving radiation transport problems is presented. The heart of the technique is a new cross section processing procedure for the calculation of group-to-point and point-to-group cross section sets. The method is ideally suited for problems which involve media with highly fluctuating cross sections, where the results of traditional multigroup calculations are beclouded by the group averaging procedures employed. Extensive computational efforts, which would be required to evaluate double integrals in the multigroup treatment numerically, prohibit iteration to optimize the energy boundaries. On the other hand, use of point-to-point techniques (as in the stochastic technique) is often prohibitively expensive due to the large computer storage requirement. The pseudo-point code is a hybrid of the two aforementioned methods (group-to-group and point-to-point) - hence the name pseudo-point - that reduces the computational efforts of the former and the large core requirements of the latter. The pseudo-point code generates the group-to-point or the point-to-group transfer matrices, and can be coupled with existing transport codes to calculate pointwise energy-dependent fluxes. This approach yields much more detail than is available from conventional energy-group treatments. Due to the speed of this code, several iterations can be performed (in affordable computing effort) to optimize the energy boundaries and the weighting functions. The pseudo-point technique is demonstrated by solving six problems, each depicting a certain aspect of the technique. The results are presented as flux vs. energy at various spatial intervals. The sensitivity of the technique to the energy grid and the savings in computational effort are clearly demonstrated.
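The group-averaging error the pseudo-point method is designed to avoid can be seen in a toy flux-weighted collapse: a sharp resonance in a pointwise cross section is flattened into a single group constant far below the resonance peak. The resonance shape, the 1/E flux weight, and the group boundaries below are all hypothetical illustrations:

```python
# Toy flux-weighted group collapse: a narrow resonance near E = 7 (arbitrary
# units) in a pointwise cross section becomes a single group constant far
# below the peak. Cross section, flux weight, and boundaries are hypothetical.

def group_average(energies, sigma, flux, lo, hi):
    """Flux-weighted group constant over [lo, hi) (simple rectangle rule)."""
    num = sum(s * f for e, s, f in zip(energies, sigma, flux) if lo <= e < hi)
    den = sum(f for e, f in zip(energies, flux) if lo <= e < hi)
    return num / den

energies = [float(e) for e in range(1, 11)]
sigma = [1.0 if abs(e - 7.0) > 0.5 else 500.0 for e in energies]
flux = [1.0 / e for e in energies]                 # 1/E weighting

sigma_g = group_average(energies, sigma, flux, lo=1.0, hi=11.0)
print(round(sigma_g, 2), max(sigma))
```

The collapsed constant sits between the background value and the resonance peak; a pointwise (or pseudo-point) treatment retains the full 500-barn-scale fluctuation that the single group constant washes out.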
NASA Astrophysics Data System (ADS)
Brandelik, Andreas
2009-07-01
CALCMIN, an open source Visual Basic program, was implemented in EXCEL™. The program was primarily developed to support geoscientists in their routine task of calculating structural formulae of minerals on the basis of chemical analysis mainly obtained by electron microprobe (EMP) techniques. Calculation programs for various minerals are already included in the form of sub-routines. These routines are arranged in separate modules containing a minimum of code. The architecture of CALCMIN allows the user to easily develop new calculation routines or modify existing routines with little knowledge of programming techniques. By means of a simple mouse-click, the program automatically generates a rudimentary framework of code using the object model of the Visual Basic Editor (VBE). Within this framework simple commands and functions, which are provided by the program, can be used, for example, to perform various normalization procedures or to output the results of the computations. For the clarity of the code, element symbols are used as variables initialized by the program automatically. CALCMIN does not set any boundaries in complexity of the code used, resulting in a wide range of possible applications. Thus, matrix and optimization methods can be included, for instance, to determine end member contents for subsequent thermodynamic calculations. Diverse input procedures are provided, such as the automated read-in of output files created by the EMP. Furthermore, a subsequent filter routine enables the user to extract specific analyses in order to use them for a corresponding calculation routine. An event-driven, interactive operating mode was selected for easy application of the program. CALCMIN leads the user from the beginning to the end of the calculation process.
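The structural-formula calculation CALCMIN automates follows a standard normalization: convert oxide weight percent to moles, tally the oxygens each oxide contributes, and scale the cations to a fixed oxygen basis. A sketch in Python (CALCMIN itself is Visual Basic in EXCEL™) on an ideal forsterite analysis, which is a hypothetical input chosen because the answer is known, Mg2SiO4 on a 4-oxygen basis:

```python
# Sketch of a structural-formula calculation from electron microprobe-style
# oxide wt% data: normalize cations to a fixed number of oxygens. The
# analysis is ideal forsterite (hypothetical input); molar masses are
# standard values.

OXIDES = {
    #  oxide: (molar mass g/mol, cations per oxide, oxygens per oxide)
    "SiO2": (60.084, 1, 2),
    "MgO":  (40.304, 1, 1),
    "FeO":  (71.844, 1, 1),
}

def structural_formula(wt_percent, oxygens_basis):
    moles = {ox: w / OXIDES[ox][0] for ox, w in wt_percent.items()}
    total_o = sum(m * OXIDES[ox][2] for ox, m in moles.items())
    scale = oxygens_basis / total_o
    return {ox: m * OXIDES[ox][1] * scale for ox, m in moles.items()}

analysis = {"SiO2": 42.7, "MgO": 57.3}        # ideal forsterite, wt%
cations = structural_formula(analysis, oxygens_basis=4)
print({ox: round(c, 3) for ox, c in cations.items()})
```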
Large Coded Aperture Mask for Spaceflight Hard X-ray Images
NASA Technical Reports Server (NTRS)
Vigneau, Danielle N.; Robinson, David W.
2002-01-01
The 2.6 square meter coded aperture mask is a vital part of the Burst Alert Telescope on the Swift mission. A random, but known pattern of more than 50,000 lead tiles, each 5 mm square, was bonded to a large honeycomb panel which projects a shadow on the detector array during a gamma ray burst. A two-year development process was necessary to explore ideas, apply techniques, and finalize procedures to meet the strict requirements for the coded aperture mask. Challenges included finding a honeycomb substrate with minimal gamma ray attenuation, selecting an adhesive with adequate bond strength to hold the tiles in place but soft enough to allow the tiles to expand and contract without distorting the panel under large temperature gradients, and eliminating excess adhesive from all untiled areas. The largest challenge was to find an efficient way to bond the > 50,000 lead tiles to the panel with positional tolerances measured in microns. In order to generate the desired bondline, adhesive was applied and allowed to cure to each tile. The pre-cured tiles were located in a tool to maintain positional accuracy, wet adhesive was applied to the panel, and it was lowered to the tile surface with synchronized actuators. Using this procedure, the entire tile pattern was transferred to the large honeycomb panel in a single bond. The pressure for the bond was achieved by enclosing the entire system in a vacuum bag. Thermal vacuum and acoustic tests validated this approach. This paper discusses the methods, materials, and techniques used to fabricate this very large and unique coded aperture mask for the Swift mission.
Method and apparatus for determining position using global positioning satellites
NASA Technical Reports Server (NTRS)
Ward, John (Inventor); Ward, William S. (Inventor)
1998-01-01
A global positioning satellite receiver having an antenna for receiving an L1 signal from a satellite. The L1 signal is processed by a preamplifier stage including a band pass filter and a low noise amplifier and output as a radio frequency (RF) signal. A mixer receives and de-spreads the RF signal in response to a pseudo-random noise code, i.e., Gold code, generated by an internal pseudo-random noise code generator. A microprocessor enters a code tracking loop, such that during the code tracking loop, it addresses the pseudo-random code generator to cause the pseudo-random code generator to sequentially output pseudo-random codes corresponding to satellite codes used to spread the L1 signal, until correlation occurs. When an output of the mixer is indicative of the occurrence of correlation between the RF signal and the generated pseudo-random codes, the microprocessor enters an operational state which slews the receiver code sequence to stay locked with the satellite code sequence. The output of the mixer is provided to a detector which, in turn, controls certain routines of the microprocessor. The microprocessor will output pseudorange information according to an interrupt routine in response to detection of correlation. The pseudorange information is to be telemetered to a ground station which determines the position of the global positioning satellite receiver.
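The Gold codes the receiver above correlates against can be generated with the standard GPS C/A scheme: two 10-stage linear feedback shift registers (G1 and G2) whose outputs are combined, with a pair of G2 phase-selector taps choosing the satellite. The tap positions below follow the published C/A definition (PRN 1 uses taps 2 and 6); the function name is our own:

```python
# GPS C/A Gold code generation: G1 feedback taps (3, 10), G2 feedback taps
# (2, 3, 6, 8, 9, 10), registers initialized to all ones; the chip is
# G1 output XOR two phase-selector taps of G2.

def ca_code(phase_taps):
    """Generate one 1023-chip C/A Gold code (chips as 0/1)."""
    g1 = [1] * 10
    g2 = [1] * 10
    t1, t2 = phase_taps
    chips = []
    for _ in range(1023):
        chips.append(g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1])
        fb1 = g1[2] ^ g1[9]
        fb2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]
        g1 = [fb1] + g1[:9]
        g2 = [fb2] + g2[:9]
    return chips

prn1 = ca_code((2, 6))                  # PRN 1 phase-selector taps
signed = [1 - 2 * c for c in prn1]      # map 0/1 chips to +1/-1
# Periodic autocorrelation of a length-1023 Gold code takes only the
# values {1023, 63, -1, -65}, which is what makes correlation-based
# lock detection in the tracking loop work.
acf = {sum(signed[i] * signed[(i + lag) % 1023] for i in range(1023))
       for lag in range(1023)}
print(sorted(acf))
```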
FY2012 summary of tasks completed on PROTEUS-thermal work.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C.H.; Smith, M.A.
2012-06-06
PROTEUS is a suite of neutronics codes, both old and new, that can be used within the SHARP codes being developed under the NEAMS program. Discussion here is focused on updates and verification and validation activities of the SHARP neutronics code, DeCART, for application to thermal reactor analysis. As part of the development of SHARP tools, the different versions of the DeCART code created for PWR, BWR, and VHTR analysis were integrated. Verification and validation tests for the integrated version were started, and the generation of cross section libraries based on the subgroup method was revisited for the targeted reactor types. The DeCART code has been reorganized in preparation for an efficient integration of the different versions for PWR, BWR, and VHTR analysis. In DeCART, the old-fashioned common blocks and header files have been replaced by advanced memory structures. However, the changing of variable names was minimized in order to limit problems with the code integration. Since the remaining stability problems of DeCART were mostly caused by the CMFD methodology and modules, significant work was performed to determine whether they could be replaced by more stable methods and routines. The cross section library is a key element in obtaining accurate solutions. Thus, the procedure for generating cross section libraries was revisited to provide libraries tailored to the targeted reactor types. To improve accuracy in the cross section library, an attempt was made to replace the CENTRM code by the MCNP Monte Carlo code as a tool for obtaining reference resonance integrals. The use of the Monte Carlo code allows us to minimize problems or approximations that CENTRM introduces, since the accuracy of the subgroup data is limited by that of the reference solutions.
The use of MCNP requires an additional set of libraries without resonance cross sections so that reference calculations can be performed for a unit cell in which only one isotope of interest, among the isotopes in the composition, includes resonance cross sections. The OECD MHTGR-350 benchmark core was simulated using DeCART as the initial focus of the verification/validation efforts. Among the benchmark problems, Exercise 1 of Phase 1 is a steady-state benchmark case for the neutronics calculation, for which block-wise cross sections were provided in 26 energy groups. This type of problem was designed for a homogenized-geometry solver like DIF3D rather than the high-fidelity code DeCART. Instead of the homogenized block cross sections given in the benchmark, the VHTR-specific 238-group ENDF/B-VII.0 library of DeCART was used directly for preliminary calculations. Initial results showed that the multiplication factors of a fuel pin and a fuel block with or without a control rod hole were off by 6, -362, and -183 pcm Δk from comparable MCNP solutions, respectively. The 2-D and 3-D one-third core calculations were also conducted for the all-rods-out (ARO) and all-rods-in (ARI) configurations, producing reasonable results. Figure 1 illustrates the intermediate (1.5 eV - 17 keV) and thermal (below 1.5 eV) group flux distributions. As seen in VHTR cores with annular fuels, the intermediate group fluxes are relatively high in the fuel region, but the thermal group fluxes are higher in the inner and outer graphite reflector regions than in the fuel region. To support the current project, a new three-year I-NERI collaboration involving ANL and KAERI was started in November 2011, focused on performing in-depth verification and validation of high-fidelity multi-physics simulation codes for LWR and VHTR. 
The work scope includes generating improved cross section libraries for the targeted reactor types, developing benchmark models for verification and validation of the neutronics code with or without thermo-fluid feedback, and performing detailed comparisons of predicted reactor parameters against both Monte Carlo solutions and experimental measurements. The following list summarizes the work conducted so far for the PROTEUS-Thermal tasks: (1) Unification of the different versions of DeCART was initiated, and at the same time code modernization was conducted to make code unification efficient; (2) Regeneration of cross section libraries was attempted for the targeted reactor types, and the procedure for generating cross section libraries was updated by replacing CENTRM with MCNP for reference resonance integrals; (3) The MHTGR-350 benchmark core was simulated using DeCART with the VHTR-specific 238-group ENDF/B-VII.0 library, and MCNP calculations were performed for comparison; and (4) Benchmark problems for PWR and BWR analysis were prepared for the DeCART verification/validation effort. In the coming months, the work listed above will be completed. Cross section libraries will be generated with optimized group structures for specific reactor types.
Automated apparatus and method of generating native code for a stitching machine
NASA Technical Reports Server (NTRS)
Miller, Jeffrey L. (Inventor)
2000-01-01
A computer system automatically generates CNC code for a stitching machine. The computer determines the locations of a present stitching point and a next stitching point. If a constraint is not found between the present stitching point and the next stitching point, the computer generates code for making a stitch at the next stitching point. If a constraint is found, the computer generates code for changing a condition (e.g., direction) of the stitching machine's stitching head.
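The generate-or-redirect loop described above can be sketched as follows. The point format, constraint representation, and command names (`STITCH`, `TURN_HEAD`) are illustrative assumptions, not the machine's actual native-code dialect:

```python
def generate_cnc(points, constraints):
    """Emit one stitch command per point; if a constraint lies between the
    present point and the next one, first emit a head-condition change
    (here a direction change) before stitching at the next point."""
    program = [f"STITCH {points[0]}"]
    for present, nxt in zip(points, points[1:]):
        if (present, nxt) in constraints:           # constraint found
            program.append(f"TURN_HEAD {present} -> {nxt}")
        program.append(f"STITCH {nxt}")             # stitch at next point
    return program

# a constraint between (0, 1) and (1, 1) forces a direction change first
prog = generate_cnc([(0, 0), (0, 1), (1, 1)], {((0, 1), (1, 1))})
```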
NASA Technical Reports Server (NTRS)
Stahara, S. S.
1984-01-01
An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.
ESAS Deliverable PS 1.1.2.3: Customer Survey on Code Generations in Safety-Critical Applications
NASA Technical Reports Server (NTRS)
Schumann, Johann; Denney, Ewen
2006-01-01
Automated code generators (ACG) are tools that convert a (higher-level) model of a software (sub-)system into executable code without the necessity for a developer to actually implement the code. Although both commercially supported and in-house tools have been used in many industrial applications, little data exists on how these tools are used in safety-critical domains (e.g., spacecraft, aircraft, automotive, nuclear). The aims of the survey, therefore, were threefold: 1) to determine if code generation is primarily used as a tool for prototyping, including design exploration and simulation, or for flight/production code; 2) to determine the verification issues with code generators relating, in particular, to qualification and certification in safety-critical domains; and 3) to determine perceived gaps in functionality of existing tools.
Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes
NASA Technical Reports Server (NTRS)
Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun
1996-01-01
In this paper, the bit error probability P_b for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P_b is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P_b ≈ (d_H/N)P_s, where P_s represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P_b when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
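The quoted high-SNR approximation is a one-liner; the example values below (code length, minimum distance, block error probability) are made up for illustration only:

```python
def bit_error_approx(block_error_prob, d_H, N):
    """High-SNR approximation from the abstract: P_b ≈ (d_H / N) * P_s,
    which the paper shows holds for systematic encoding only."""
    return (d_H / N) * block_error_prob

# e.g. a length-64 code with d_H = 10 and block error probability 1e-4
p_b = bit_error_approx(1e-4, d_H=10, N=64)   # roughly 1.56e-05
```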
The purpose of this SOP is to define the coding strategy for coding and coding verification of hand-entered data. It applies to the coding of all physical forms, especially those coded by hand. The strategy was developed for use in the Arizona NHEXAS project and the "Border" st...
26 CFR 301.6331-2 - Procedures and restrictions on levies.
Code of Federal Regulations, 2010 CFR
2010-04-01
... certified mail to the taxpayer's last known address. For further guidance regarding the definition of last...— (i) The Internal Revenue Code provisions and the procedures relating to levy and sale of property... (including the use of an installment agreement under section 6159); and (iv) The Internal Revenue Code...
26 CFR 301.6331-2 - Procedures and restrictions on levies.
Code of Federal Regulations, 2011 CFR
2011-04-01
... certified mail to the taxpayer's last known address. For further guidance regarding the definition of last...— (i) The Internal Revenue Code provisions and the procedures relating to levy and sale of property... (including the use of an installment agreement under section 6159); and (iv) The Internal Revenue Code...
26 CFR 301.6331-2 - Procedures and restrictions on levies.
Code of Federal Regulations, 2012 CFR
2012-04-01
... certified mail to the taxpayer's last known address. For further guidance regarding the definition of last...— (i) The Internal Revenue Code provisions and the procedures relating to levy and sale of property... (including the use of an installment agreement under section 6159); and (iv) The Internal Revenue Code...
26 CFR 301.6331-2 - Procedures and restrictions on levies.
Code of Federal Regulations, 2014 CFR
2014-04-01
... certified mail to the taxpayer's last known address. For further guidance regarding the definition of last...— (i) The Internal Revenue Code provisions and the procedures relating to levy and sale of property... (including the use of an installment agreement under section 6159); and (iv) The Internal Revenue Code...
Random Sequence for Optimal Low-Power Laser Generated Ultrasound
NASA Astrophysics Data System (ADS)
Vangi, D.; Virga, A.; Gulino, M. S.
2017-08-01
Low-power laser-generated ultrasounds have lately been gaining importance in research, thanks to the possibility of investigating a mechanical component's structural integrity through a non-contact, Non-Destructive Testing (NDT) procedure. The ultrasounds are, however, very low in amplitude, making it necessary to use pre-processing and post-processing operations on the signals to detect them. The cross-correlation technique is used in this work, meaning that a random signal must be used as the laser input. For this purpose, a highly random and simple-to-create code called the T sequence, capable of enhancing ultrasound detectability, is introduced (not previously available in the state of the art). Several important parameters which characterize the T sequence can influence the process: the number of pulses N_pulses, the pulse duration δ, and the distance between pulses d_pulses. A finite element (FE) model of a 3 mm steel disk was initially developed to analytically study the longitudinal ultrasound generation mechanism and the obtainable outputs. Later, experimental tests showed that the T sequence is highly flexible for ultrasound detection purposes, making it optimal to use high N_pulses and δ but low d_pulses. In the end, apart from describing the phenomena that arise in the low-power laser generation process, the results of this study are also important for setting up an effective NDT procedure using this technology.
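As a sketch of why cross-correlation recovers such low-amplitude signals, the snippet below buries a random ±1 pulse code (a stand-in for the paper's T sequence; the amplitudes, lengths, noise level, and delay are made-up assumptions) in noise and locates it by the correlation peak:

```python
import random

def cross_correlate(signal, template):
    """Sliding-dot-product cross-correlation; the peak marks the delay at
    which the emitted code appears in the received signal."""
    n, m = len(signal), len(template)
    return [sum(signal[k + j] * template[j] for j in range(m))
            for k in range(n - m + 1)]

rng = random.Random(0)
template = [rng.choice((-1.0, 1.0)) for _ in range(200)]   # random pulse code
signal = [0.3 * rng.gauss(0, 1) for _ in range(400)]       # noise floor
for j, t in enumerate(template):                           # weak echo at delay 50
    signal[50 + j] += 0.2 * t

cc = cross_correlate(signal, template)
peak = max(range(len(cc)), key=cc.__getitem__)
# the correlation peak sits at the true delay even though the echo
# amplitude (0.2) is below the noise level (0.3)
```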
Critical Care Coding for Neurologists.
Nuwer, Marc R; Vespa, Paul M
2015-10-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Coding of Neuroinfectious Diseases.
Barkley, Gregory L
2015-12-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Diagnostic Coding for Epilepsy.
Williams, Korwyn; Nuwer, Marc R; Buchhalter, Jeffrey R
2016-02-01
Accurate coding is an important function of neurologic practice. This contribution to Continuum is part of an ongoing series that presents helpful coding information along with examples related to the issue topic. Tips for diagnosis coding, Evaluation and Management coding, procedure coding, or a combination are presented, depending on which is most applicable to the subject area of the issue.
Environmental factor(tm) system: RCRA hazardous waste handler information (on CD-ROM). Data file
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-11-01
Environmental Factor(trademark) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity, and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous waste generation, management, and minimization by companies who are large quantity generators; and (3) Data on the waste management practices of treatment, storage, and disposal (TSD) facilities from the EPA Biennial Reporting System, which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status, and more. (2) View compliance information - dates of evaluation, violation, enforcement, and corrective action. (3) Look up facilities by waste processing categories of marketing, transporting, processing, and energy recovery. (4) Use owner/operator information and names, titles, and telephone numbers of project managers for prospecting. (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases and search and retrieval software on two CD-ROMs, an installation diskette, and a User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving, and exporting.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanford, J.
The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist's long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC's responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities. 
This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist's experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rodrigues, A.
The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist's long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC's responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities. 
This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist's experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.
Real-time implementation of second generation of audio multilevel information coding
NASA Astrophysics Data System (ADS)
Ali, Murtaza; Tewfik, Ahmed H.; Viswanathan, V.
1994-03-01
This paper describes a real-time implementation of a novel wavelet-based audio compression method. The method is based on the discrete wavelet transform (DWT) representation of signals. A bit allocation procedure is used to allocate bits to the transform coefficients in an adaptive fashion. The bit allocation procedure has been designed to take advantage of the masking effect in human hearing. The procedure minimizes the number of bits required to represent each frame of the audio signal at a fixed distortion level. The real-time implementation provides almost transparent compression of monophonic CD-quality audio signals (sampled at 44.1 kHz and quantized using 16 bits/sample) at bit rates of 64-78 kbit/s. Our implementation uses two ASPI Elf boards, each of which is built around a TI TMS320C31 DSP chip. The time required for encoding a mono CD signal is about 92 percent of real time, and that for decoding about 61 percent.
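The masking-driven bit allocation idea can be sketched as a greedy loop: at each step, one bit goes to the band whose quantization noise most exceeds its masking threshold. This is a generic illustration of the technique, not the paper's actual algorithm; the threshold values and the noise model (each added bit halves the noise amplitude, i.e. about -6 dB) are simplifying assumptions:

```python
import heapq

def allocate_bits(masking_thresholds, total_bits):
    """Greedy perceptual bit allocation sketch: repeatedly give one bit to
    the band whose relative quantization noise most exceeds its masking
    threshold; each bit halves that band's noise."""
    n = len(masking_thresholds)
    noise = [1.0] * n            # relative noise amplitude per band
    bits = [0] * n
    # max-heap (negated) keyed by noise-to-threshold ratio
    heap = [(-(noise[i] / masking_thresholds[i]), i) for i in range(n)]
    heapq.heapify(heap)
    for _ in range(total_bits):
        _, i = heapq.heappop(heap)
        bits[i] += 1
        noise[i] /= 2.0          # one more bit: noise amplitude halves
        heapq.heappush(heap, (-(noise[i] / masking_thresholds[i]), i))
    return bits

# bands with lower masking thresholds (noise more audible) get more bits
allocation = allocate_bits([0.1, 0.4, 0.8], 8)
```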
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.
1980-01-01
Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis to Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Plasma and magnetic field comparisons in the ionosheath are also provided with actual spacecraft data obtained by the Pioneer Venus Orbiter.
NASA Technical Reports Server (NTRS)
Vetter, A. A.; Maxwell, C. D.; Swean, T. F., Jr.; Demetriades, S. T.; Oliver, D. A.; Bangerter, C. D.
1981-01-01
Data from sufficiently well-instrumented, short-duration experiments at AEDC/HPDE, Reynolds Metal Co., and Hercules, Inc., are compared to analyses with multidimensional and time-dependent simulations with the STD/MHD computer codes. These analyses reveal detailed features of major transient events, severe loss mechanisms, and anomalous MHD behavior. In particular, these analyses predicted higher-than-design voltage drops, Hall voltage overshoots, and asymmetric voltage drops before the experimental data were available. The predictions obtained with these analyses are in excellent agreement with the experimental data and the failure predictions are consistent with the experiments. The design of large, high-interaction or advanced MHD experiments will require application of sophisticated, detailed and comprehensive computational procedures in order to account for the critical mechanisms which led to the observed behavior in these experiments.
Automated Simplification of Full Chemical Mechanisms
NASA Technical Reports Server (NTRS)
Norris, A. T.
1997-01-01
A code has been developed to automatically simplify full chemical mechanisms. The method employed is based on the Intrinsic Low Dimensional Manifold (ILDM) method of Maas and Pope. The ILDM method is a dynamical systems approach to the simplification of large chemical kinetic mechanisms. By identifying low-dimensional attracting manifolds, the method allows complex full mechanisms to be parameterized by just a few variables, in effect generating reduced chemical mechanisms by an automatic procedure. The resulting mechanisms, however, still retain all the species used in the full mechanism. Full and skeletal mechanisms for various fuels are simplified to a two-dimensional manifold, and the resulting mechanisms are found to compare well with the full mechanisms and show significant improvement over global one-step mechanisms, such as those by Westbrook and Dryer. In addition, by using an ILDM reaction mechanism in a CFD code, a considerable improvement in turn-around time can be achieved.
Aerodynamic heating on AFE due to nonequilibrium flow with variable entropy at boundary layer edge
NASA Technical Reports Server (NTRS)
Ting, P. C.; Rochelle, W. C.; Bouslog, S. A.; Tam, L. T.; Scott, C. D.; Curry, D. M.
1991-01-01
A method of predicting the aerobrake aerothermodynamic environment on the NASA Aeroassist Flight Experiment (AFE) vehicle is described. Results of a three-dimensional inviscid nonequilibrium solution are used as input to an axisymmetric nonequilibrium boundary layer program to predict AFE convective heating rates. Inviscid flow field properties are obtained from the Euler option of the Viscous Reacting Flow (VRFLO) code at the boundary layer edge. Heating rates on the AFE surface are generated with the Boundary Layer Integral Matrix Procedure (BLIMP) code for a partially catalytic surface composed of Reusable Surface Insulation (RSI) tiles. The 1864 kg AFE will fly an aerobraking trajectory, simulating return from geosynchronous Earth orbit, with a 75 km perigee and a 10 km/sec entry velocity. Results of this analysis will provide principal investigators and thermal analysts with aeroheating environments for experiment and thermal protection system design.
Hybrid real-code ant colony optimisation for constrained mechanical design
NASA Astrophysics Data System (ADS)
Pholdee, Nantiwat; Bureerat, Sujin
2016-01-01
This paper proposes a hybrid meta-heuristic based on integrating a local search simplex downhill (SDH) method into the search procedure of real-code ant colony optimisation (ACOR). This hybridisation leads to five hybrid algorithms where a Monte Carlo technique, a Latin hypercube sampling technique (LHS) and a translational propagation Latin hypercube design (TPLHD) algorithm are used to generate an initial population. Also, two numerical schemes for selecting an initial simplex are investigated. The original ACOR and its hybrid versions along with a variety of established meta-heuristics are implemented to solve 17 constrained test problems where a fuzzy set theory penalty function technique is used to handle design constraints. The comparative results show that the hybrid algorithms are the top performers. Using the TPLHD technique gives better results than the other sampling techniques. The hybrid optimisers are a powerful design tool for constrained mechanical design problems.
Model-Based Development of Automotive Electronic Climate Control Software
NASA Astrophysics Data System (ADS)
Kakade, Rupesh; Murugesan, Mohan; Perugu, Bhupal; Nair, Mohanan
With the increasing complexity of software in today's products, writing and maintaining thousands of lines of code is a tedious task, so an alternative methodology must be employed. Model-based development is one candidate that offers several benefits and allows engineers to focus on their domain of expertise rather than on writing large volumes of code. In this paper, we discuss the application of model-based development to the electronic climate control software of vehicles. A back-to-back testing approach is presented that ensures a flawless and smooth transition from legacy designs to model-based development. The use of Simulink Report Generator to create design documents from the models is presented, along with its use to run the simulation model and capture the results in the test report. Test automation using a model-based development tool, supporting a single set of test cases across several testing levels and a test procedure that is independent of the software and hardware platform, is also presented.
McNair, Peter D; Jackson, Terri J; Borovnicar, Daniel J
2010-07-05
To model the effect of excluding payment for eight hospital-acquired conditions (HACs) on hospital payments in Victoria, Australia. Retrospective ecological study using the Victorian Admitted Episodes Dataset. The analysis involved all acute inpatient admissions to Victorian public and private hospitals between 1 July 2007 and 30 June 2008. Each admission record includes up to 40 diagnosis and procedure codes from which payments are calculated. The model deleted diagnosis codes for eight HACs from all records, then recalculated payments to estimate the impact of a policy of non-payment for HACs. The effect on hospital payments of excluding diagnosis codes for eight HACs. 2,047,133 cases with total estimated payments of $4902 million were identified; 994 cases (0.05%) had one or more diagnoses meeting the code definition for a definable HAC, representing total payments of $24.1 million. In-hospital falls and pressure ulcers were the most commonly coded HACs. Applying a model that excluded HAC diagnosis codes changed the diagnosis-related group for 134 cases (13.5%), thereby generating a $448,630 reduction in payments. Introducing a non-payment for HACs policy similar to that introduced by Medicare in the United States would have little direct financial impact in the Australian context, although additional savings would accrue if HAC rates were reduced. Such a policy could add further incentive to current initiatives aimed at reducing HACs.
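The study's payment model reduces to: strip HAC diagnosis codes from each admission record and re-run the payment grouper. The `drg_payment` grouper, code names, and dollar amounts below are toy stand-ins, not the Victorian casemix grouper or the paper's data:

```python
def payment_without_hacs(admission, hac_codes, grouper):
    """Sketch of the study's model: delete any hospital-acquired-condition
    diagnosis codes from the record, then re-derive the payment from the
    remaining codes via the (hypothetical) grouper function."""
    cleaned = [c for c in admission["diagnoses"] if c not in hac_codes]
    return grouper(frozenset(cleaned))

# toy grouper: a falls code bumps the record into a higher-paying group
def drg_payment(diagnoses):
    return 12000 if "FALL" in diagnoses else 9500

case = {"diagnoses": ["PNEUMONIA", "FALL"]}
before = drg_payment(frozenset(case["diagnoses"]))
after = payment_without_hacs(case, {"FALL"}, drg_payment)
saving = before - after
# excluding the HAC code reduces the modeled payment for this record
```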
A Hadoop-Based Algorithm of Generating DEM Grid from Point Cloud Data
NASA Astrophysics Data System (ADS)
Jian, X.; Xiao, X.; Chengfang, H.; Zhizhong, Z.; Zhaohui, W.; Dengzhong, Z.
2015-04-01
Airborne LiDAR technology has proven to be one of the most powerful tools for obtaining high-density, high-accuracy and significantly detailed surface information of terrain and surface objects within a short time, from which a high-quality Digital Elevation Model (DEM) can be extracted. Point cloud data generated from the pre-processed data should be classified by segmentation algorithms so as to separate terrain points from other points, followed by a procedure of interpolating the selected points to turn them into DEM data. The whole procedure takes a long time and substantial computing resources because of the high point density, and has been the focus of a number of studies. Hadoop is a distributed system infrastructure developed by the Apache Foundation, which contains a highly fault-tolerant distributed file system (HDFS) with a high transmission rate and a parallel programming model (Map/Reduce). Such a framework is appropriate for improving the efficiency of DEM generation algorithms. Point cloud data of Dongting Lake acquired by a Riegl LMS-Q680i laser scanner was used as the original data to generate a DEM by a Hadoop-based algorithm implemented in Linux, followed by a traditional procedure programmed in C++ as the comparative experiment. The algorithms' efficiency, coding complexity, and performance-cost ratio were then discussed for the comparison. The results demonstrate that the algorithm's speed depends on the size of the point set and the density of the DEM grid; the non-Hadoop implementation can achieve high performance when memory is big enough, but the multi-node Hadoop implementation achieves a higher performance-cost ratio when the point set is very large.
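The map/reduce split described above can be sketched in miniature: the map step keys each point by its DEM grid cell, and the reduce step interpolates a cell elevation from its member points. The simple per-cell mean used here is a placeholder for whatever interpolation (e.g. IDW or TIN) a real pipeline would use:

```python
from collections import defaultdict
from statistics import mean

def map_points(points, cell):
    """Map step: emit (grid_cell, elevation) for each LiDAR point."""
    for x, y, z in points:
        yield (int(x // cell), int(y // cell)), z

def reduce_cells(mapped):
    """Reduce step: group elevations by cell and interpolate one value
    per cell (here, a simple mean as a stand-in for IDW/TIN)."""
    cells = defaultdict(list)
    for key, z in mapped:
        cells[key].append(z)
    return {key: mean(zs) for key, zs in cells.items()}

points = [(0.2, 0.3, 10.0), (0.8, 0.1, 12.0), (1.5, 0.4, 20.0)]
dem = reduce_cells(map_points(points, cell=1.0))
# → {(0, 0): 11.0, (1, 0): 20.0}
```

On Hadoop, `map_points` and `reduce_cells` would run as distributed map and reduce tasks over HDFS-resident point files; the per-key grouping is exactly what the shuffle phase provides.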
Preliminary Results from the Application of Automated Adjoint Code Generation to CFL3D
NASA Technical Reports Server (NTRS)
Carle, Alan; Fagan, Mike; Green, Lawrence L.
1998-01-01
This report describes preliminary results obtained using an automated adjoint code generator for Fortran to augment a widely-used computational fluid dynamics flow solver to compute derivatives. These preliminary results with this augmented code suggest that, even in its infancy, the automated adjoint code generator can accurately and efficiently deliver derivatives for use in transonic Euler-based aerodynamic shape optimization problems with hundreds to thousands of independent design variables.
Huo, Jinhai; Yang, Ming; Tina Shih, Ya-Chen
2018-03-01
The "meaningful use of certified electronic health record" policy requires eligible professionals to record smoking status for more than 50% of all individuals aged 13 years or older in 2011 to 2012. To explore whether the coding to document smoking behavior has increased over time and to assess the accuracy of smoking-related diagnosis and procedure codes in identifying previous and current smokers. We conducted an observational study with 5,423,880 enrollees from the year 2009 to 2014 in the Truven Health Analytics database. Temporal trends of smoking coding, sensitivity, specificity, positive predictive value, and negative predictive value were measured. The rate of coding of smoking behavior improved significantly by the end of the study period. The proportion of patients in the claims data recorded as current smokers increased 2.3-fold and the proportion of patients recorded as previous smokers increased 4-fold during the 6-year period. The sensitivity of each International Classification of Diseases, Ninth Revision, Clinical Modification code was generally less than 10%. The diagnosis code of tobacco use disorder (305.1X) was the most sensitive code (9.3%) for identifying smokers. The specificities of these codes and the Current Procedural Terminology codes were all more than 98%. A large improvement in the coding of current and previous smoking behavior has occurred since the inception of the meaningful use policy. Nevertheless, the use of diagnosis and procedure codes to identify smoking behavior in administrative data is still unreliable. This suggests that quality improvements toward medical coding on smoking behavior are needed to enhance the capability of claims data for smoking-related outcomes research. Copyright © 2018 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
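The validation metrics named above are the standard 2x2 contingency measures. The sketch below computes them from toy counts; the numbers are made up, chosen only to echo the study's "low sensitivity, high specificity" pattern:

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Standard 2x2 accuracy measures used to assess how well claims
    codes identify true smokers (tp/fp/fn/tn are case counts)."""
    return {
        "sensitivity": tp / (tp + fn),   # coded smokers / true smokers
        "specificity": tn / (tn + fp),   # coded non-smokers / true non-smokers
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# illustrative counts: a highly specific but insensitive diagnosis code
m = diagnostic_accuracy(tp=93, fp=10, fn=907, tn=8990)
```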
Levesque, Eric; Hoti, Emir; de La Serna, Sofia; Habouchi, Houssam; Ichai, Philippe; Saliba, Faouzi; Samuel, Didier; Azoulay, Daniel
2013-03-01
In the French healthcare system, the intensive care budget allocated is directly dependent on the activity level of the center. To evaluate this activity level, it is necessary to code the medical diagnoses and procedures performed on Intensive Care Unit (ICU) patients. The aim of this study was to evaluate the effects of using an Intensive Care Information System (ICIS) on the incidence of coding errors and its impact on the ICU budget allocated. Since 2005, the documentation on and monitoring of every patient admitted to our ICU has been carried out using an ICIS; however, the coding process was performed manually until 2008. This study focused on two periods: the period of manual coding (year 2007) and the period of computerized coding (year 2008), covering a total of 1403 ICU patients. The time spent on the coding process, the rate of coding errors (defined as patients missed/not coded or wrongly identified as undergoing major procedure/s) and the financial impact were evaluated for these two periods. With computerized coding, the time per admission decreased significantly (from 6.8 ± 2.8 min in 2007 to 3.6 ± 1.9 min in 2008, p<0.001). Similarly, a reduction in coding errors was observed (7.9% vs. 2.2%, p<0.001). This decrease in coding errors reduced the difference between the potential and actual ICU financial supplements obtained in the respective years (a €194,139 loss in 2007 vs. a €1628 loss in 2008). Using dedicated computer programs improves the labor-intensive process of manual coding by shortening the time required as well as reducing errors, which in turn positively impacts the ICU budget allocation. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
Development of Cross Section Library and Application Programming Interface (API)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Marin-Lafleche, A.; Smith, M. A.
2014-04-09
The goal of NEAMS neutronics is to develop a high-fidelity deterministic neutron transport code termed PROTEUS for use on all reactor types of interest, but focused primarily on sodium-cooled fast reactors. While PROTEUS-SN has demonstrated good accuracy for homogeneous fast reactor problems and partially heterogeneous fast reactor problems, the simulation results were not satisfactory when applied to fully heterogeneous thermal problems like the Advanced Test Reactor (ATR). This is mainly attributed to the quality of cross section data for heterogeneous geometries, since the conventional cross section generation approach does not work accurately for such irregular and complex geometries. Therefore, one of the NEAMS neutronics tasks since FY12 has been the development of a procedure to generate appropriate cross sections for a heterogeneous geometry core.
Hand washing compliance among retail food establishment workers in Minnesota.
Allwood, Paul B; Jenkins, Timothy; Paulus, Colleen; Johnson, Lars; Hedberg, Craig W
2004-12-01
Inadequate hand washing by food workers is an important contributing factor to foodborne disease outbreaks in retail food establishments (RFEs). We conducted a survey of RFEs to investigate the effect of hand washing training, availability of hand washing facilities, and the ability of the person in charge (PIC) to describe hand washing according to the Minnesota Food Code (food code) on workers' ability to demonstrate food code-compliant hand washing. Only 52% of the PICs could describe the hand washing procedure outlined in the food code, and only 48% of workers could demonstrate code-compliant hand washing. The most common problems observed were failure to wash for 20 s and failure to use a fingernail brush. There was a strong positive association between the PIC being a certified food manager and being able to describe the food code hand washing procedure (odds ratio [OR], 5.5; 95% confidence interval [CI], 2.2 to 13.7), and there was an even stronger association between the PIC being able to describe hand washing and workers being able to demonstrate code-compliant hand washing (OR, 15; 95% CI, 6 to 37). Significant associations were detected among correct hand washing demonstration, physical infrastructure for hand washing, and the hand washing training methods used by the establishment. However, the principal determinant of successful hand washing demonstration was the PIC's ability to describe proper hand washing procedure. These results suggest that improving hand washing practices among food workers will require interventions that address PIC knowledge of hand washing requirement and procedure and the development and implementation of effective hand washing training methods.
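The odds ratios reported above come from standard 2x2-table analysis; a brief sketch of the usual Woolf (logit) method, with made-up counts rather than the survey's actual data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with a 95% CI by Woolf's logit method for a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Made-up counts: say 40 of 60 certified-manager PICs vs. 20 of 60
# uncertified PICs could describe code-compliant hand washing.
or_, lo, hi = odds_ratio_ci(a=40, b=20, c=20, d=40)
print(f"OR={or_:.1f} (95% CI {lo:.1f} to {hi:.1f})")
```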
A CFD/CSD Interaction Methodology for Aircraft Wings
NASA Technical Reports Server (NTRS)
Bhardwaj, Manoj K.
1997-01-01
With advanced subsonic transports and military aircraft operating in the transonic regime, it is becoming important to determine the effects of the coupling between aerodynamic loads and elastic forces. Since aeroelastic effects can contribute significantly to the design of these aircraft, there is a strong need in the aerospace industry to predict these aero-structure interactions computationally. To perform static aeroelastic analysis in the transonic regime, high fidelity computational fluid dynamics (CFD) analysis tools must be used in conjunction with high fidelity computational structural dynamics (CSD) analysis tools due to the nonlinear behavior of the aerodynamics in the transonic regime. There is also a need to be able to use a wide variety of CFD and CSD tools to predict these aeroelastic effects in the transonic regime. Because source codes are not always available, it is necessary to couple the CFD and CSD codes without alteration of the source codes. In this study, an aeroelastic coupling procedure is developed which will perform static aeroelastic analysis using any CFD and CSD code with little code integration. The aeroelastic coupling procedure is demonstrated on an F/A-18 Stabilator using NASTD (an in-house McDonnell Douglas CFD code) and NASTRAN. In addition, the Aeroelastic Research Wing (ARW-2) is used for demonstration of the aeroelastic coupling procedure by using ENSAERO (NASA Ames Research Center CFD code) and a finite element wing-box code (developed as part of this research).
NASA Astrophysics Data System (ADS)
Hoffman, Kenneth J.; Keithley, Hudson
1994-12-01
There are few systems which aggregate standardized pertinent clinical observations of discrete patient problems and resolutions. The systematic information supplied by clinicians is generally provided to justify reimbursement from insurers. Insurers, by their nature, are expert in modeling health care costs by diagnosis, procedures, and population risk groups. Medically, they rely on clinician-generated diagnostic and coded procedure information. Clinicians document a patient's status at a discrete point in time through narrative. Clinical notes do not support aggregate and systematic analysis of outcomes. A methodology exists and has been used by the US Army Drug and Alcohol Program to model the clinical activities, associated costs, and data requirements of an outpatient clinic. This has broad applicability for a comprehensive health care system for which patient costs and data requirements can be established.
A Rapid Aerodynamic Design Procedure Based on Artificial Neural Networks
NASA Technical Reports Server (NTRS)
Rai, Man Mohan
2001-01-01
An aerodynamic design procedure that uses neural networks to model the functional behavior of the objective function in design space has been developed. This method incorporates several improvements to an earlier method that employed a strategy called parameter-based partitioning of the design space in order to reduce the computational costs associated with design optimization. As with the earlier method, the current method uses a sequence of response surfaces to traverse the design space in search of the optimal solution. The new method yields significant reductions in computational costs by using composite response surfaces with better generalization capabilities and by exploiting synergies between the optimization method and the simulation codes used to generate the training data. These reductions in design optimization costs are demonstrated for a turbine airfoil design study where a generic shape is evolved into an optimal airfoil.
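The response-surface idea described above can be illustrated in miniature: sample an expensive objective at a few design points, fit a cheap surrogate, and optimize the surrogate instead. A sketch with a toy one-variable objective standing in for a CFD-evaluated cost function:

```python
import numpy as np

# Toy "expensive" objective sampled at a few design points,
# standing in for CFD evaluations of an airfoil shape parameter.
def objective(x):
    return (x - 1.7) ** 2 + 0.5

x_train = np.linspace(0.0, 3.0, 7)
y_train = objective(x_train)

# Fit a quadratic response surface to the samples, then minimize the
# cheap surrogate instead of rerunning the simulation.
c2, c1, c0 = np.polyfit(x_train, y_train, 2)
x_opt = -c1 / (2.0 * c2)   # vertex of the fitted parabola
print(round(x_opt, 3))  # 1.7
```

In the actual method the surrogate is a neural-network composite response surface over many design variables, refreshed as new simulation data arrive; the quadratic here only shows the surrogate-then-optimize loop.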
Boundary layer integral matrix procedure: Verification of models
NASA Technical Reports Server (NTRS)
Bonnett, W. S.; Evans, R. M.
1977-01-01
The three turbulence models currently available in the JANNAF version of the Aerotherm Boundary Layer Integral Matrix Procedure (BLIMP-J) code were studied. The BLIMP-J program is the standard prediction method for boundary layer effects in liquid rocket engine thrust chambers. Experimental data from flow fields with large edge-to-wall temperature ratios are compared to the predictions of the three turbulence models contained in BLIMP-J. In addition, test conditions necessary to generate additional data on a flat plate or in a nozzle are given. It is concluded that the Cebeci-Smith turbulence model should be the recommended model for the prediction of boundary layer effects in liquid rocket engines. In addition, the effects of homogeneous chemical reaction kinetics were examined for a hydrogen/oxygen system. Results show that for most flows, kinetics are probably only significant at stoichiometric mixture ratios.
Development of a thermal and structural analysis procedure for cooled radial turbines
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Deanna, Russell G.
1988-01-01
A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analyses. An inviscid, quasi three-dimensional code computes the external free stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous one-dimensional internal flow code solving the momentum and energy equations. These boundary conditions are input to a three-dimensional heat conduction code for calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results from this case are included.
Spectral fitting, shock layer modeling, and production of nitrogen oxides and excited nitrogen
NASA Technical Reports Server (NTRS)
Blackwell, H. E.
1991-01-01
An analysis was made of N2 emission from an 8.72 MJ/kg shock layer at the 2.54, 1.91, and 1.27 cm positions; vibrational state distributions, temperatures, and relative electronic state populations were obtained from the data sets. Other recorded arc jet N2 and air spectral data were reviewed and NO emission characteristics were studied. A review of operational procedures of the DSMC code was made. Information on other appropriate codes and modifications, including ionization, was compiled, and the applicability of the reviewed codes to the task requirements was determined. A review was also made of computational procedures used in the CFD codes of Li and other codes on JSC computers. An analysis was made of problems associated with integrating the specific chemical kinetics applicable to the task into CFD codes.
Nonequilibrium chemistry boundary layer integral matrix procedure
NASA Technical Reports Server (NTRS)
Tong, H.; Buckingham, A. C.; Morse, H. L.
1973-01-01
The development of an analytic procedure for the calculation of nonequilibrium boundary layer flows over surfaces of arbitrary catalycities is described. An existing equilibrium boundary layer integral matrix code was extended to include nonequilibrium chemistry while retaining all of the general boundary condition features built into the original code. For particular application to the pitch-plane of shuttle type vehicles, an approximate procedure was developed to estimate the nonequilibrium and nonisentropic state at the edge of the boundary layer.
Meyer, Anne-Marie; Kuo, Tzy-Mey; Chang, YunKyung; Carpenter, William R; Chen, Ronald C; Sturmer, Til
2017-05-01
Systematic coding systems are used to define clinically meaningful outcomes when leveraging administrative claims data for research. How and when these codes are applied within a research study can have implications for the study validity and their specificity can vary significantly depending on treatment received. Data are from the Surveillance, Epidemiology, and End Results-Medicare linked dataset. We use propensity score methods in a retrospective cohort of prostate cancer patients first examined in a recently published radiation oncology comparative effectiveness study. With the narrowly defined outcome definition, the toxicity event outcome rate ratio was 0.88 per 100 person-years (95% confidence interval, 0.71-1.08). With the broadly defined outcome, the rate ratio was comparable, with 0.89 per 100 person-years (95% confidence interval, 0.76-1.04), although individual event rates were doubled. Some evidence of surveillance bias was suggested by a higher rate of endoscopic procedures the first year of follow-up in patients who received proton therapy compared with those receiving intensity-modulated radiation treatment (11.15 vs. 8.90, respectively). This study demonstrates the risk of introducing bias through subjective application of procedure codes. Careful consideration is required when using procedure codes to define outcomes in administrative data.
Accuracy of clinical coding for procedures in oral and maxillofacial surgery.
Khurram, S A; Warner, C; Henry, A M; Kumar, A; Mohammed-Ali, R I
2016-10-01
Clinical coding has important financial implications, and discrepancies in the assigned codes can directly affect the funding of a department and hospital. Over the last few years, numerous oversights have been noticed in the coding of oral and maxillofacial (OMF) procedures. To establish the accuracy and completeness of coding, we retrospectively analysed the records of patients during two time periods: March to May 2009 (324 patients), and January to March 2014 (200 patients). Two investigators independently collected and analysed the data to ensure accuracy and remove bias. A large proportion of operations were not assigned all the relevant codes, and only 32-33% were correct in both cycles. To our knowledge, this is the first reported audit of clinical coding in OMFS, and it highlights serious shortcomings that have substantial financial implications. Better input by the surgical team and improved communication between the surgical and coding departments will improve accuracy. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Conversion of the agent-oriented domain-specific language ALAS into JavaScript
NASA Astrophysics Data System (ADS)
Sredojević, Dejan; Vidaković, Milan; Okanović, Dušan; Mitrović, Dejan; Ivanović, Mirjana
2016-06-01
This paper describes the generation of JavaScript code from code written in the agent-oriented domain-specific language ALAS. ALAS is an agent-oriented domain-specific language for writing software agents that are executed within the XJAF middleware. Since the agents can be executed on various platforms, they must be converted into a language of the target platform. We also try to utilize existing tools and technologies to make the whole conversion process as simple, fast and efficient as possible. We use the Xtext framework, which is compatible with Java, to implement the ALAS infrastructure - editor and code generator. Since Xtext supports Java, generation of Java code from ALAS code is straightforward. To generate JavaScript code that will be executed within the target JavaScript XJAF implementation, the Google Web Toolkit (GWT) is used.
Wenke, A; Gaber, A; Hertle, L; Roeder, N; Pühse, G
2012-07-01
Precise and complete coding of diagnoses and procedures is of value for optimizing revenues within the German diagnosis-related groups (G-DRG) system. The implementation of effective structures for coding is cost-intensive. The aim of this study was to determine whether these higher costs can be offset by complete acquisition of comorbidities and complications. Calculations were based on DRG data of the Department of Urology, University Hospital of Münster, Germany, covering all patients treated in 2009. The data were regrouped and subjected to a process of simulation (increase and decrease of patient clinical complexity levels, PCCL) with the help of recently developed software. In urology, PCCL and the resulting profits were found to depend strongly on the quantity and quality of secondary diagnosis coding. Departmental budgetary procedures can be optimized when coding is effective. The new simulation tool can be a valuable aid to improve profits available for distribution. Nevertheless, the calculation of time use and financial needs by this procedure is subject to specific departmental terms and conditions. Completeness of coding of (secondary) diagnoses must be the ultimate administrative goal of patient case documentation in urology.
A MATLAB based 3D modeling and inversion code for MT data
NASA Astrophysics Data System (ADS)
Singh, Arun; Dehiya, Rahul; Gupta, Pravin K.; Israil, M.
2017-07-01
The development of a MATLAB based computer code, AP3DMT, for modeling and inversion of 3D Magnetotelluric (MT) data is presented. The code comprises two independent components: grid generator code and modeling/inversion code. The grid generator code performs model discretization and acts as an interface by generating various I/O files. The inversion code performs core computations in modular form - forward modeling, data functionals, sensitivity computations and regularization. These modules can be readily extended to other similar inverse problems like Controlled-Source EM (CSEM). The modular structure of the code provides a framework useful for implementation of new applications and inversion algorithms. The use of MATLAB and its libraries makes it more compact and user friendly. The code has been validated on several published models. To demonstrate its versatility and capabilities the results of inversion for two complex models are presented.
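The regularization module's role can be illustrated with a generic Tikhonov-regularized least-squares update of the kind used in Gauss-Newton-style EM inversion; the matrices below are toy stand-ins, not AP3DMT's actual implementation (Python is used here in place of MATLAB):

```python
import numpy as np

def regularized_step(J, d, lam):
    """Solve the Tikhonov-regularized normal equations
    (J^T J + lam * I) m = J^T d for the model update m."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ d)

# Toy sensitivity matrix and noise-free data (stand-ins for MT quantities).
rng = np.random.default_rng(2)
J = rng.random((30, 5))                        # 30 data, 5 model parameters
m_true = np.array([1.0, -0.5, 2.0, 0.0, 0.3])  # "true" model
d = J @ m_true                                  # synthetic data
m = regularized_step(J, d, lam=1e-8)           # tiny damping: near-exact recovery
print(np.allclose(m, m_true, atol=1e-4))  # True
```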
Comparison of theoretical and flight-measured local flow aerodynamics for a low-aspect-ratio fin
NASA Technical Reports Server (NTRS)
Johnson, J. B.; Sandlin, D. R.
1984-01-01
Flight test and theoretical aerodynamic data were obtained for a flight test fixture mounted on the underside of an F-104G aircraft. The theoretical data were generated using two codes: a two dimensional transonic code called Code H, and a three dimensional subsonic and supersonic code called wing-body. Pressure distributions generated by the codes for the flight test fixture, as well as boundary layer displacement thicknesses generated by the two dimensional code, were compared to the flight test data. The two dimensional code pressure distributions compared well except at the minimum pressure point and trailing edge. Shock locations compared well except at high transonic speeds. The three dimensional code pressure distributions compared well except at the trailing edge of the flight test fixture. The two dimensional code does not predict the displacement thickness of the flight test fixture well.
Completing and Adapting Models of Biological Processes
NASA Technical Reports Server (NTRS)
Margaria, Tiziana; Hinchey, Michael G.; Raffelt, Harald; Rash, James L.; Rouff, Christopher A.; Steffen, Bernhard
2006-01-01
We present a learning-based method for model completion and adaptation, which is based on the combination of two approaches: 1) R2D2C, a technique for mechanically transforming system requirements via provably equivalent models to running code, and 2) automata learning-based model extrapolation. The intended impact of this new combination is to make model completion and adaptation accessible to experts of the field, like biologists or engineers. The principle is briefly illustrated by generating models of biological procedures concerning gene activities in the production of proteins, although the main application is going to concern autonomic systems for space exploration.
NASA Astrophysics Data System (ADS)
Zulai, Luis G. T.; Durand, Fábio R.; Abrão, Taufik
2015-05-01
In this article, an energy-efficiency mechanism for next-generation passive optical networks is investigated through heuristic particle swarm optimization. These next-generation networks combine 10-gigabit Ethernet wavelength division multiplexing with optical code division multiplexing over a passive optical network, building on a legacy 10-gigabit Ethernet PON with the advantage of using only a single en/decoder pair of optical code division multiplexing technology, thus eliminating the en/decoder at each optical network unit. The proposed joint mechanism is based on the sleep-mode power-saving scheme for a 10-gigabit Ethernet passive optical network, combined with a power control procedure aiming to adjust the transmitted power of the active optical network units while maximizing the overall network energy efficiency. The particle swarm optimization based power control algorithm establishes the optimal transmitted power in each optical network unit according to the network's pre-defined quality of service requirements. The objective is to control the power consumption of the optical network unit according to the traffic demand by adjusting its transmitter power in an attempt to maximize the number of transmitted bits with minimum energy consumption, achieving maximal system energy efficiency. Numerical results have revealed that it is possible to save 75% of energy consumption with the proposed particle swarm optimization based sleep-mode energy-efficiency mechanism, compared to 55% energy savings when just a sleep-mode-based mechanism is deployed.
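The particle-swarm step at the heart of the mechanism can be sketched generically; the quadratic objective below is a stand-in for the paper's energy-efficiency function, and all parameter values are illustrative:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=100, seed=3):
    """Minimal global-best particle swarm optimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (n_particles, dim))   # candidate power settings
    v = np.zeros_like(x)
    pbest = x.copy()                             # each particle's best position
    pbest_val = np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()     # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Toy stand-in for the energy-efficiency objective: a quadratic bowl
# whose minimum plays the role of the optimal transmit-power vector.
best = pso_minimize(lambda p: np.sum((p - 0.25) ** 2), dim=3)
print(np.allclose(best, 0.25, atol=0.05))  # True
```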
Thorogood, Adrian; Joly, Yann; Knoppers, Bartha Maria; Nilsson, Tommy; Metrakos, Peter; Lazaris, Anthoula; Salman, Ayat
2014-12-23
This article outlines procedures for the feedback of individual research data to participants. This feedback framework was developed in the context of a personalized medicine research project in Canada. Researchers in this domain have an ethical obligation to return individual research results and/or material incidental findings that are clinically significant, valid and actionable to participants. Communication of individual research data must proceed in an ethical and efficient manner. Feedback involves three procedural steps: assessing the health relevance of a finding, re-identifying the affected participant, and communicating the finding. Re-identification requires researchers to break the code in place to protect participant identities. Coding systems replace personal identifiers with a numerical code. Double coding systems provide added privacy protection by separating research data from personal identifying data with a third "linkage" database. A trusted and independent intermediary, the "keyholder", controls access to this linkage database. Procedural guidelines for the return of individual research results and incidental findings are lacking. This article outlines a procedural framework for the three steps of feedback: assessment, re-identification, and communication. This framework clarifies the roles of the researcher, Research Ethics Board, and keyholder in the process. The framework also addresses challenges posed by coding systems. Breaking the code involves privacy risks and should only be carried out in clearly defined circumstances. Where a double coding system is used, the keyholder plays an important role in balancing the benefits of individual feedback with the privacy risks of re-identification. Feedback policies should explicitly outline procedures for the assessment of findings, and the re-identification and contact of participants. The responsibilities of researchers, the Research Ethics Board, and the keyholder must be clearly defined. 
We provide general guidelines for keyholders involved in feedback. We also recommend that Research Ethics Boards should not be directly involved in the assessment of individual findings. Hospitals should instead establish formal, interdisciplinary clinical advisory committees to help researchers determine whether or not an uncertain finding should be returned.
JPEG 2000 Encoding with Perceptual Distortion Control
NASA Technical Reports Server (NTRS)
Watson, Andrew B.; Liu, Zhen; Karam, Lina J.
2008-01-01
An alternative approach has been devised for encoding image data in compliance with JPEG 2000, the most recent still-image data-compression standard of the Joint Photographic Experts Group. Heretofore, JPEG 2000 encoding has been implemented by several related schemes classified as rate-based distortion-minimization encoding. In each of these schemes, the end user specifies a desired bit rate and the encoding algorithm strives to attain that rate while minimizing a mean squared error (MSE). While rate-based distortion minimization is appropriate for transmitting data over a limited-bandwidth channel, it is not the best approach for applications in which the perceptual quality of reconstructed images is a major consideration. A better approach for such applications is the present alternative one, denoted perceptual distortion control, in which the encoding algorithm strives to compress data to the lowest bit rate that yields at least a specified level of perceptual image quality. Some additional background information on JPEG 2000 is prerequisite to a meaningful summary of JPEG 2000 encoding with perceptual distortion control. The JPEG 2000 encoding process includes two subprocesses known as tier-1 and tier-2 coding. In order to minimize the MSE for the desired bit rate, a rate-distortion-optimization subprocess is introduced between the tier-1 and tier-2 subprocesses. In tier-1 coding, each coding block is independently bit-plane coded from the most-significant-bit (MSB) plane to the least-significant-bit (LSB) plane, using three coding passes per plane (except for the MSB plane, which is coded using only one "clean up" coding pass). For M bit planes, this subprocess involves a total of (3M - 2) coding passes. An embedded bit stream is then generated for each coding block. Information on the reduction in distortion and the increase in the bit rate associated with each coding pass is collected.
This information is then used in a rate-control procedure to determine the contribution of each coding block to the output compressed bit stream.
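The tier-1 pass count stated above (3M - 2 passes for M bit planes) follows directly from the scheme: one clean-up pass for the MSB plane plus three passes for each remaining plane. A trivial sketch:

```python
def tier1_coding_passes(num_bit_planes):
    """Total tier-1 coding passes for a code block with M bit planes:
    one clean-up pass for the MSB plane, then three passes
    (significance, refinement, clean-up) for each of the M-1 others."""
    m = num_bit_planes
    return 1 + 3 * (m - 1)   # = 3M - 2

print(tier1_coding_passes(1), tier1_coding_passes(8))  # 1 22
```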
A neutron spectrum unfolding computer code based on artificial neural networks
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2014-02-01
The Bonner Spheres Spectrometer consists of a thermal neutron sensor placed at the center of a number of moderating polyethylene spheres of different diameters. From the measured readings, information can be derived about the spectrum of the neutron field where the measurements were made. Disadvantages of the Bonner system are the weight associated with each sphere and the need to sequentially irradiate the spheres, requiring long exposure periods. Provided a well-established response matrix and adequate irradiation conditions, the most delicate part of neutron spectrometry is the unfolding process. The derivation of the spectral information is not simple because the unknown is not given directly as a result of the measurements. The drawbacks associated with traditional unfolding procedures have motivated the need for complementary approaches. Novel methods based on Artificial Intelligence, mainly Artificial Neural Networks, have been widely investigated. In this work, a neutron spectrum unfolding code based on neural net technology is presented. This code, called NSDann (Neutron Spectrometry and Dosimetry with Artificial Neural networks), was designed with a graphical interface and is easy, friendly and intuitive to use. The core of the code is an embedded neural network architecture previously optimized using the robust design of artificial neural networks methodology. The code was designed for a Bonner Sphere System based on a 6LiI(Eu) neutron detector and a response matrix expressed in 60 energy bins taken from an International Atomic Energy Agency compilation. A notable feature of the code is that, as input data for unfolding the neutron spectrum, only seven count rates measured with seven Bonner spheres are required; simultaneously the code calculates 15 dosimetric quantities as well as the total flux for radiation protection purposes.
This code generates a full report with all information of the unfolding in the HTML format. NSDann unfolding code is freely available, upon request to the authors.
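The unfolding problem the code addresses can be stated compactly: the measured sphere counts c relate to the spectrum φ through the response matrix R via c = Rφ, and the network learns the ill-posed inverse map from 7 counts to 60 spectrum bins. A toy illustration of the forward model with random stand-in data (not the IAEA response matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in dimensions from the abstract: 7 Bonner spheres, 60 energy bins.
R = rng.random((7, 60))      # response matrix (sphere x energy bin), random here
phi_true = rng.random(60)    # "true" spectrum in arbitrary units
counts = R @ phi_true        # forward model: the 7 count rates the spheres yield

# The unfolding task is the ill-posed inverse: recover 60 spectrum bins from
# only 7 count rates. NSDann trains a neural network on (counts, spectrum)
# pairs to learn this mapping; here we only exhibit the shapes involved.
print(counts.shape, phi_true.shape)  # (7,) (60,)
```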
Industrial Facility Combustion Energy Use
McMillan, Colin
2016-08-01
Facility-level industrial combustion energy use is calculated from greenhouse gas emissions data reported by large emitters (>25,000 metric tons CO2e per year) under the U.S. EPA's Greenhouse Gas Reporting Program (GHGRP, https://www.epa.gov/ghgreporting). The calculation applies EPA default emissions factors to reported fuel use by fuel type. Additional facility information is included with calculated combustion energy values, such as industry type (six-digit NAICS code), location (lat, long, zip code, county, and state), combustion unit type, and combustion unit name. Further identification of combustion energy use is provided by calculating energy end use (e.g., conventional boiler use, co-generation/CHP use, process heating, other facility support) by manufacturing NAICS code. Manufacturing facilities are matched by their NAICS code and reported fuel type with the proportion of combustion fuel energy for each end use category identified in the 2010 Energy Information Administration Manufacturing Energy Consumption Survey (MECS, http://www.eia.gov/consumption/manufacturing/data/2010/). MECS data are adjusted to account for data that were withheld or whose end use was unspecified following the procedure described in Fox, Don B., Daniel Sutter, and Jefferson W. Tester. 2011. The Thermal Spectrum of Low-Temperature Energy Use in the United States, NY: Cornell Energy Institute.
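The core calculation amounts to backing out combustion energy from reported CO2 by dividing by a fuel-specific emission factor; a minimal sketch, where the factor values are illustrative placeholders rather than the EPA defaults actually used:

```python
# Illustrative placeholder factors in metric tons CO2 per MMBtu
# (not the EPA default values the dataset actually applies).
EMISSION_FACTORS = {
    "natural_gas": 0.0531,
    "coal_bituminous": 0.0933,
}

def combustion_energy_mmbtu(co2_metric_tons, fuel):
    """Back-calculate combustion energy from reported CO2 emissions."""
    return co2_metric_tons / EMISSION_FACTORS[fuel]

energy = combustion_energy_mmbtu(co2_metric_tons=53_100, fuel="natural_gas")
print(round(energy))  # 1000000 (MMBtu, at the assumed factor)
```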
Maljovec, D.; Liu, S.; Wang, B.; ...
2015-07-14
Here, dynamic probabilistic risk assessment (DPRA) methodologies couple system simulator codes (e.g., RELAP and MELCOR) with simulation controller codes (e.g., RAVEN and ADAPT). Whereas system simulator codes model system dynamics deterministically, simulation controller codes introduce both deterministic (e.g., system control logic and operating procedures) and stochastic (e.g., component failures and parameter uncertainties) elements into the simulation. Typically, a DPRA is performed by sampling values of a set of parameters and simulating the system behavior for that specific set of parameter values. For complex systems, a major challenge in using DPRA methodologies is to analyze the large number of scenarios generated, where clustering techniques are typically employed to better organize and interpret the data. In this paper, we focus on the analysis of two nuclear simulation datasets that are part of the risk-informed safety margin characterization (RISMC) boiling water reactor (BWR) station blackout (SBO) case study. We provide the domain experts a software tool that encodes traditional and topological clustering techniques within an interactive analysis and visualization environment, for understanding the structures of such high-dimensional nuclear simulation datasets. We demonstrate through our case study that both types of clustering techniques complement each other for enhanced structural understanding of the data.
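The clustering step described above can be sketched with a minimal k-means over scenario feature vectors; the data are random stand-ins, not the RISMC datasets, and the implementation is deliberately simplified:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: cluster simulation scenarios (rows of X)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]  # init from data points
    for _ in range(iters):
        # assign each scenario to its nearest center
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each center to the mean of its members
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Stand-in "scenarios": two well-separated blobs in a 3-feature space.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.1, (20, 3)), rng.normal(5, 0.1, (20, 3))])
labels, _ = kmeans(X, k=2)
print(labels.shape)  # (40,)
```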
Generating Customized Verifiers for Automatically Generated Code
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2008-01-01
Program verification using Hoare-style techniques requires many logical annotations. We have previously developed a generic annotation inference algorithm that weaves in all annotations required to certify safety properties for automatically generated code. It uses patterns to capture generator- and property-specific code idioms and property-specific meta-program fragments to construct the annotations. The algorithm is customized by specifying the code patterns and integrating them with the meta-program fragments for annotation construction. However, this is difficult since it involves tedious and error-prone low-level term manipulations. Here, we describe an annotation schema compiler that largely automates this customization task using generative techniques. It takes a collection of high-level declarative annotation schemas tailored towards a specific code generator and safety property, and generates all customized analysis functions and glue code required for interfacing with the generic algorithm core, thus effectively creating a customized annotation inference algorithm. The compiler raises the level of abstraction and simplifies schema development and maintenance. It also takes care of some more routine aspects of formulating patterns and schemas, in particular handling of irrelevant program fragments and irrelevant variance in the program structure, which reduces the size, complexity, and number of different patterns and annotation schemas that are required. The improvements described here make it easier and faster to customize the system to a new safety property or a new generator, and we demonstrate this by customizing it to certify frame safety of space flight navigation code that was automatically generated from Simulink models by MathWorks' Real-Time Workshop.
Research on Automatic Programming
1975-12-31
Sequential processes, deadlocks, and semaphore primitives, Ph.D. Thesis, Harvard University, November 1974; Center for Research in Computing...verified. 13 Code generated to effect the synchronization makes use of the ECL control extension facility (Prenner's CI, see [Prenner]). The... semaphore operations [Dijkstra] is being developed. Initial results for this code generator are very encouraging; in many cases generated code is
Verification and Validation: High Charge and Energy (HZE) Transport Codes and Future Development
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Mertens, Christopher J.; Blattnig, Steve R.; Clowdsley, Martha S.; Cucinotta, Francis A.; Tweed, John; Heinbockel, John H.; Walker, Steven A.; Nealy, John E.
2005-01-01
In the present paper, we give the formalism for further developing a fully three-dimensional HZETRN code using marching procedures; development of a new Green's function code is also discussed. The final Green's function code is capable of validation not only in the space environment but also in ground-based laboratories with directed beams of ions of specific energy, characterized with detailed diagnostic particle spectrometer devices. Special emphasis is given to verification of the computational procedures and validation of the resultant computational model using laboratory and spaceflight measurements. Due to historical requirements, two parallel development paths for computational model implementation using marching procedures and Green's function techniques are followed. A new version of the HZETRN code capable of simulating HZE ions with either laboratory or space boundary conditions is under development. Validation of computational models at this time is particularly important for President Bush's Initiative to develop infrastructure for human exploration, with the first target demonstration of the Crew Exploration Vehicle (CEV) in low Earth orbit in 2008.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-04
...-5-000] Revisions to Procedural Regulations Governing Filing, Indexing and Service by Oil Pipelines, Electronic Tariff Filings; Notice of Changes to eTariff Part 341 Type of Filing Codes Order No. 780... available eTariff Type of Filing Codes (TOFC) will be modified as follows: \\2\\ \\1\\ Filing, Indexing and...
ERIC Educational Resources Information Center
Food and Drug Administration (DHHS/PHS), Rockville, MD.
This document provides information, standards, and behavioral objectives for standardization and certification of retail food inspection personnel in the Food and Drug Administration (FDA). The procedures described in the document are based on the FDA Food Code, updated to reflect current Food Code provisions and to include a more refined focus on…
Ethical and educational considerations in coding hand surgeries.
Lifchez, Scott D; Leinberry, Charles F; Rivlin, Michael; Blazar, Philip E
2014-07-01
To assess treatment coding knowledge and practices among residents, fellows, and attending hand surgeons. Through the use of 6 hypothetical cases, we developed a coding survey to assess coding knowledge and practices. We e-mailed this survey to residents, fellows, and attending hand surgeons. Additionally, we asked 2 professional coders to code these cases. A total of 71 participants completed the survey out of 134 people to whom the survey was sent (response rate = 53%). We observed marked disparity in codes chosen among surgeons and among professional coders. Results of this study indicate that coding knowledge, not just its ethical application, had a major role in coding procedures accurately. Surgical coding is an essential part of a hand surgeon's practice and is not well learned during residency or fellowship. Whereas ethical issues such as deliberate unbundling and upcoding may have a role in inaccurate coding, lack of knowledge among surgeons and coders has a major role as well. Coding has a critical role in every hand surgery practice. Inconsistencies among those polled in this study reveal that an increase in education on coding during training and improvement in the clarity and consistency of the Current Procedural Terminology coding rules themselves are needed. Copyright © 2014 American Society for Surgery of the Hand. Published by Elsevier Inc. All rights reserved.
Secure ADS-B authentication system and method
NASA Technical Reports Server (NTRS)
Viggiano, Marc J (Inventor); Valovage, Edward M (Inventor); Samuelson, Kenneth B (Inventor); Hall, Dana L (Inventor)
2010-01-01
A secure system for authenticating the identity of ADS-B systems, including: an authenticator, including a unique id generator and a transmitter transmitting the unique id to one or more ADS-B transmitters; one or more ADS-B transmitters, including a receiver receiving the unique id, one or more secure processing stages merging the unique id with the ADS-B transmitter's identification, data and secret key and generating a secure code identification and a transmitter transmitting a response containing the secure code and ADSB transmitter's data to the authenticator; the authenticator including means for independently determining each ADS-B transmitter's secret key, a receiver receiving each ADS-B transmitter's response, one or more secure processing stages merging the unique id, ADS-B transmitter's identification and data and generating a secure code, and comparison processing comparing the authenticator-generated secure code and the ADS-B transmitter-generated secure code and providing an authentication signal based on the comparison result.
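The challenge-response flow described above can be sketched as follows. The patent does not specify the cryptographic primitive behind its "secure processing stages"; HMAC-SHA256 is used here purely as an illustrative stand-in, and all identifiers and keys are hypothetical.

```python
# Sketch of the ADS-B authentication exchange: the authenticator issues a
# unique id (challenge); the transmitter merges it with its identity, data,
# and secret key into a secure code; the authenticator independently
# recomputes the code and compares. HMAC-SHA256 is an assumed primitive.
import hashlib
import hmac
import os

def make_secure_code(secret_key: bytes, unique_id: bytes,
                     transmitter_id: bytes, data: bytes) -> bytes:
    """Merge the challenge with the transmitter's identification and data."""
    return hmac.new(secret_key, unique_id + transmitter_id + data,
                    hashlib.sha256).digest()

# Authenticator: generate and transmit the unique id
unique_id = os.urandom(16)

# ADS-B transmitter: compute the secure code and respond
secret = b"shared-secret"                     # hypothetical pre-shared key
response = make_secure_code(secret, unique_id, b"ICAO-A1B2C3", b"pos=...")

# Authenticator: independently determine the key, recompute, and compare
expected = make_secure_code(secret, unique_id, b"ICAO-A1B2C3", b"pos=...")
authenticated = hmac.compare_digest(response, expected)
```

A fresh `unique_id` per exchange prevents a recorded response from being replayed later, which is the point of the challenge-response structure.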
File-Based One-Way BISON Coupling Through VERA: User's Manual
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stimpson, Shane G.
Activities to incorporate fuel performance capabilities into the Virtual Environment for Reactor Applications (VERA) are receiving increasing attention [1–6]. The multiphysics emphasis is expanding as the neutronics (MPACT) and thermal-hydraulics (CTF) packages are becoming more mature. Capturing the finer details of fuel phenomena (swelling, densification, relocation, gap closure, etc.) is the natural next step in the VERA development process since these phenomena are currently not directly taken into account. While several codes could be used to accomplish this, the BISON fuel performance code [8,9] being developed by the Idaho National Laboratory (INL) is the focus of ongoing work in the Consortium for Advanced Simulation of Light Water Reactors (CASL). Built on INL's MOOSE framework [10], BISON uses the finite element method for geometric representation and a Jacobian-free Newton-Krylov (JFNK) scheme to solve systems of partial differential equations for various fuel characteristic relationships. There are several modes of operation in BISON, but this work uses a 2D azimuthally symmetric (R-Z) smeared-pellet model. This manual is intended to cover (1) the procedure pertaining to the standalone BISON one-way coupling from VERA and (2) the procedure to generate BISON fuel temperature tables that VERA can use.
Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation
NASA Technical Reports Server (NTRS)
Edwards, Thomas A.; Flores, Jolen
1989-01-01
Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.
Investigating the Simulink Auto-Coding Process
NASA Technical Reports Server (NTRS)
Gualdoni, Matthew J.
2016-01-01
Model based program design is the most clear and direct way to develop algorithms and programs for interfacing with hardware. While coding "by hand" results in a more tailored product, the ever-growing size and complexity of modern-day applications can cause the project work load to quickly become unreasonable for one programmer. This has generally been addressed by splitting the product into separate modules to allow multiple developers to work in parallel on the same project; however, this introduces new potential for errors in the process. The fluidity, reliability, and robustness of the code rely on the abilities of the programmers to communicate their methods to one another; furthermore, multiple programmers invite multiple potentially differing coding styles into the same product, which can cause a loss of readability or even module incompatibility. Fortunately, MathWorks has implemented an auto-coding feature that allows programmers to design their algorithms through the use of models and diagrams in the graphical programming environment Simulink, allowing the designer to visually determine what the hardware is to do. From here, the auto-coding feature handles converting the project into another programming language. This type of approach allows the designer to clearly see how the software will be directing the hardware without the need to try to interpret large amounts of code. In addition, it speeds up the programming process, minimizing the amount of man-hours spent on a single project, thus reducing the chance of human error as well as project turnover time. One such project that has benefited from the auto-coding procedure is Ramses, a portion of the GNC flight software on-board Orion that has been implemented primarily in Simulink. Currently, however, auto-coding Ramses into C++ requires 5 hours of code generation time. 
This causes issues if the tool ever needs to be debugged, as this code generation will need to occur with each edit to any part of the program; additionally, this is lost time that could be spent testing and analyzing the code. This is one of the more prominent issues with the auto-coding process, and while much information is available with regard to optimizing Simulink designs to produce efficient and reliable C++ code, not much research has been made public on how to reduce the code generation time. It is of interest to develop some insight as to what causes code generation times to be so significant, and determine if there are architecture guidelines or a desirable auto-coding configuration set to assist in streamlining this step of the design process for particular applications. To address the issue at hand, the Simulink coder was studied at a foundational level. For each different component type made available by the software, the features, auto-code generation time, and the format of the generated code were analyzed and documented. Tools were developed and documented to expedite these studies, particularly in the area of automating sequential builds to ensure accurate data was obtained. Next, the Ramses model was examined in an attempt to determine the composition and the types of technologies used in the model. This enabled the development of a model that uses similar technologies, but takes a fraction of the time to auto-code to reduce the turnaround time for experimentation. Lastly, the model was used to run a wide array of experiments and collect data to obtain knowledge about where to search for bottlenecks in the Ramses model. 
The resulting contributions of the overall effort consist of an experimental model for further investigation into the subject, as well as several automation tools to assist in analyzing the model, and a reference document offering insight into the auto-coding process, including documentation of the tools used in the model analysis, data illustrating some potential problem areas in the auto-coding process, and recommendations on areas or practices in the current Ramses model that should be further investigated. Several skills had to be built up over the course of the internship project. First and foremost, my Simulink skills have improved drastically, as much of my experience had been modeling electronic circuits as opposed to software models. Furthermore, I am now comfortable working with the Simulink Auto-coder, a tool I had never used until this summer; this tool also tested my critical thinking and C++ knowledge, as I had to interpret the C++ code it was generating and attempt to understand how the Simulink model affected the generated code. I had come into the internship with a solid understanding of Matlab code, but had done very little in using it to automate tasks, particularly Simulink tasks; along the same lines, I had rarely used shell script to automate and interface with programs, which I gained a fair amount of experience with this summer, including how to use regular expressions. Lastly, soft skills are an area everyone can continuously improve on; having never worked with NASA engineers, who to me seem to be a completely different breed than what I am used to (commercial electronic engineers), I learned to utilize the wealth of knowledge present at JSC. I wish I had come into the internship knowing exactly how helpful everyone in my branch would be, as I would have picked up on this sooner. 
I hope that having gained such a strong foundation in Simulink over this summer will open the opportunity to return to work on this project, or potentially other opportunities within the division. The idea of leaving a project I devoted ten weeks to is a hard one to cope with, so having the chance to pick up where I left off sounds appealing; alternatively, I am interested to see if there are any openings in the future that would allow me to work on a project that is more in line with my research in estimation algorithms. Regardless, this summer has been a milestone in my professional career, and I hope this has started a long-term relationship between JSC and myself. I really enjoy the thought of building on my experience here over future summers while I work to complete my PhD at Missouri University of Science and Technology.
Balla, Fadi; Garwe, Tabitha; Motghare, Prasenjeet; Stamile, Tessa; Kim, Jennifer; Mahnken, Heidi; Lees, Jason
The Accreditation Council for Graduate Medical Education (ACGME) case log captures resident operative experience based on Current Procedural Terminology (CPT) codes and is used to track operative experience during residency. With increasing emphasis on resident operative experiences, coding is more important than ever. It has been shown in other surgical specialties at similar institutions that the residents' ACGME case log may not accurately reflect their operative experience. What barriers may influence this remains unclear. As the only objective measure of resident operative experience, an accurate case log is paramount in representing one's operative experience. This study aims to determine the accuracy of procedural coding by general surgical residents at a single institution. Data were collected from 2 consecutive graduating classes of surgical residents' ACGME case logs from 2008 to 2014. A total of 5799 entries from 7 residents were collected. The CPT codes entered by residents were compared to departmental billing records submitted by the attending surgeon for each procedure. Assigned CPT codes by institutional American Academy of Professional Coders certified abstract coders were considered the "gold standard." A total of 4356 (75.12%) of 5799 entries were identified in billing records. Excel 2010 and SAS 9.3 were used for analysis. In the event of multiple codes for the same patient, any match between resident codes and billing record codes was considered a "correct" entry. A 4-question survey was distributed to all current general surgical residents at our institution for feedback on coding habits, limitations to accurate coding, and opinions on ACGME case log representation of their operative experience. All 7 residents had a low percentage of correctly entered CPT codes. The overall accuracy proportion for all residents was 52.82% (range: 43.32%-60.07%). Only 1 resident showed significant improvement in accuracy during his/her training (p = 0.0043). 
The survey response rate was 100%. Survey results indicated that inability to find the precise code within the ACGME search interface and unfamiliarity with available CPT codes were by far the most common perceived barriers to accuracy. Survey results also indicated that most residents (74%) believe that they code accurately most of the time and agree that their case log would accurately represent their operative experience (66.6%). This is the first study to evaluate the correctness of residents' ACGME case logs in general surgery. The degree of inaccuracy found here necessitates further investigation into the etiology of these discrepancies. Instruction on coding practices should also benefit the residents after graduation. Optimizing communication among attendings and residents, improving the ACGME coding search interface, and implementing consistent coding practices could improve accuracy, giving a more realistic view of residents' operative experience. Published by Elsevier Inc.
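The matching rule used in the study above ("any match between resident codes and billing record codes was considered a correct entry") can be sketched directly. The patient identifiers and CPT codes below are made up for illustration; the billing records play the role of the gold standard.

```python
# Sketch of the case-log accuracy computation: a resident entry counts as
# correct if any of its CPT codes intersects the billing record's codes
# for the same patient. All data here are hypothetical.
def accuracy(resident_logs, billing_records):
    """Both args: dict mapping patient id -> set of CPT code strings."""
    matched = [pid for pid, codes in resident_logs.items()
               if pid in billing_records and codes & billing_records[pid]]
    return len(matched) / len(resident_logs)

resident = {"p1": {"44950"}, "p2": {"47562"}, "p3": {"49505"}}
billing  = {"p1": {"44950", "44970"}, "p2": {"47563"}, "p3": {"49505"}}
rate = accuracy(resident, billing)   # 2 of 3 entries have a matching code
```

Entries absent from billing records (about a quarter of entries in the study) simply never match, which is why the denominator matters when reporting accuracy.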
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gordienko, P. V., E-mail: gorpavel@vver.kiae.ru; Kotsarev, A. V.; Lizorkin, M. P.
2014-12-15
The procedure for recovering pin-by-pin energy-release fields in the BIPR-8 code is briefly described, together with the BIPR-8 algorithm used in nodal computation of the reactor core, on which the recovery of pin-by-pin energy-release fields is based. The description and results of verification using the pin-by-pin energy-release recovery module and the TVS-M program are given.
Surface Modeling and Grid Generation of Orbital Sciences X34 Vehicle. Phase 1
NASA Technical Reports Server (NTRS)
Alter, Stephen J.
1997-01-01
The surface modeling and grid generation requirements, motivations, and methods used to develop Computational Fluid Dynamic volume grids for the X34-Phase 1 are presented. The requirements set forth by the Aerothermodynamics Branch at the NASA Langley Research Center serve as the basis for the final techniques used in the construction of all volume grids, including grids for parametric studies of the X34. The Integrated Computer Engineering and Manufacturing code for Computational Fluid Dynamics (ICEM/CFD), the Grid Generation code (GRIDGEN), the Three-Dimensional Multi-block Advanced Grid Generation System (3DMAGGS) code, and Volume Grid Manipulator (VGM) code are used to enable the necessary surface modeling, surface grid generation, volume grid generation, and grid alterations, respectively. All volume grids generated for the X34, as outlined in this paper, were used for CFD simulations within the Aerothermodynamics Branch.
NASA Technical Reports Server (NTRS)
Clark, Kenneth; Watney, Garth; Murray, Alexander; Benowitz, Edward
2007-01-01
A computer program translates Unified Modeling Language (UML) representations of state charts into source code in the C, C++, and Python computing languages. ("State charts" here signifies graphical descriptions of states and state transitions of a spacecraft or other complex system.) The UML representations constituting the input to this program are generated by using a UML-compliant graphical design program to draw the state charts. The generated source code is consistent with the "quantum programming" approach, which is so named because it involves discrete states and state transitions that have features in common with states and state transitions in quantum mechanics. Quantum programming enables efficient implementation of state charts, suitable for real-time embedded flight software. In addition to source code, the autocoder program generates a graphical-user-interface (GUI) program that, in turn, generates a display of state transitions in response to events triggered by the user. The GUI program is wrapped around, and can be used to exercise the state-chart behavior of, the generated source code. Once the expected state-chart behavior is confirmed, the generated source code can be augmented with a software interface to the rest of the software with which the source code is required to interact.
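The translation step can be illustrated with a toy generator that emits a C-style switch dispatch from a flat transition table. This is a minimal sketch only: the actual autocoder consumes UML drawings and implements the richer quantum-programming event framework, not a flat table.

```python
# Illustrative state-chart-to-code translation: emit a C switch statement
# from a table of (state, event) -> next-state transitions.
def generate_c_dispatch(transitions):
    """transitions: dict mapping (state, event) -> next state (all strings)."""
    lines = ["switch (state) {"]
    for state in sorted({s for s, _ in transitions}):
        lines.append(f"case {state}:")
        for (src, event), dst in sorted(transitions.items()):
            if src == state:
                lines.append(f"    if (event == {event}) state = {dst};")
        lines.append("    break;")
    lines.append("}")
    return "\n".join(lines)

# Hypothetical two-state chart for a fault-protection fragment
chart = {("IDLE", "START"): "RUNNING", ("RUNNING", "FAULT"): "SAFE_MODE"}
print(generate_c_dispatch(chart))
```

Even this toy version shows the appeal of the approach: the transition table is the single source of truth, and the repetitive dispatch code is derived from it mechanically.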
Towards a Framework for Generating Tests to Satisfy Complex Code Coverage in Java Pathfinder
NASA Technical Reports Server (NTRS)
Staats, Matt
2009-01-01
We present work on a prototype tool based on the JavaPathfinder (JPF) model checker for automatically generating tests satisfying the MC/DC code coverage criterion. Using the Eclipse IDE, developers and testers can quickly instrument Java source code with JPF annotations covering all MC/DC coverage obligations, and JPF can then be used to automatically generate tests that satisfy these obligations. The prototype extension to JPF enables various tasks useful in automatic test generation to be performed, such as test suite reduction and execution of generated tests.
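The MC/DC criterion the tool targets can be made concrete with a brute-force sketch: for a decision such as `(a and b) or c`, each condition needs a pair of tests that differ only in that condition and flip the decision outcome, demonstrating its independent effect. JPF derives such tests by model checking; the exhaustive enumeration below is purely illustrative and only feasible for tiny decisions.

```python
# Illustrative MC/DC obligation finder: for each condition index, find a
# pair of input vectors differing only in that condition where the
# decision's outcome flips.
from itertools import product

def decision(a, b, c):
    return (a and b) or c

def mcdc_pairs(n_conditions, decision):
    """Return {condition index: (test, flipped_test)} independence pairs."""
    pairs = {}
    tests = list(product([False, True], repeat=n_conditions))
    for i in range(n_conditions):
        for t in tests:
            flipped = list(t)
            flipped[i] = not flipped[i]
            flipped = tuple(flipped)
            if decision(*t) != decision(*flipped):
                pairs[i] = (t, flipped)
                break
    return pairs

pairs = mcdc_pairs(3, decision)
# Each of the 3 conditions gets a pair showing its independent effect
```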
Nouraei, S A R; Hudovsky, A; Frampton, A E; Mufti, U; White, N B; Wathen, C G; Sandhu, G S; Darzi, A
2015-06-01
Clinical coding is the translation of clinical activity into a coded language. Coded data drive hospital reimbursement and are used for audit and research, and benchmarking and outcomes management purposes. We undertook a 2-center audit of coding accuracy across surgery. Clinician-auditor multidisciplinary teams reviewed the coding of 30,127 patients and assessed accuracy at primary and secondary diagnosis and procedure levels, morbidity level, complications assignment, and financial variance. Postaudit data of a randomly selected sample of 400 cases were reaudited by an independent team. At least 1 coding change occurred in 15,402 patients (51%). There were 3911 (13%) and 3620 (12%) changes to primary diagnoses and procedures, respectively. In 5183 (17%) patients, the Health Resource Grouping changed, resulting in income variance of £3,974,544 (+6.2%). The morbidity level changed in 2116 (7%) patients (P < 0.001). The number of assigned complications rose from 2597 (8.6%) to 2979 (9.9%) (P < 0.001). Reaudit resulted in further primary diagnosis and procedure changes in 8.7% and 4.8% of patients, respectively. The coded data are a key engine for knowledge-driven health care provision. They are used, increasingly at individual surgeon level, to benchmark performance. Surgical clinical coding is prone to subjectivity, variability, and error (SVE). Having a specialty-by-specialty understanding of the nature and clinical significance of informatics variability and adopting strategies to reduce it, are necessary to allow accurate assumptions and informed decisions to be made concerning the scope and clinical applicability of administrative data in surgical outcomes improvement.
SU-A-210-01: Why Should We Learn Radiation Oncology Billing?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, H.
The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist’s long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC’s responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities. 
This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist’s experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.
SU-A-210-02: Medical Physics Opportunities at the NRC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abogunde, M.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The purpose of this student annual meeting is to address topics that are becoming more relevant to medical physicists, but are not frequently addressed, especially for students and trainees just entering the field. The talk is divided into two parts: medical billing and regulations. Hsinshun Wu – Why should we learn radiation oncology billing? Many medical physicists do not like to be involved with medical billing or coding during their career. They believe billing is not their responsibility and sometimes they even refuse to participate in the billing process if given the chance. This presentation will talk about a physicist’smore » long career and share his own experience that knowing medical billing is not only important and necessary for every young medical physicist, but that good billing knowledge could provide a valuable contribution to his/her medical physics development. Learning Objectives: The audience will learn the basic definition of Current Procedural Terminology (CPT) codes performed in a Radiation Oncology Department. Understand the differences between hospital coding and physician-based or freestanding coding. Apply proper CPT coding for each Radiation Oncology procedure. Each procedure with its specific CPT code will be discussed in detail. The talk will focus on the process of care and use of actual workflow to understand each CPT code. Example coding of a typical Radiation Oncology procedure. Special procedure coding such as brachytherapy, proton therapy, radiosurgery, and SBRT. Maryann Abogunde – Medical physics opportunities at the Nuclear Regulatory Commission (NRC) The NRC’s responsibilities include the regulation of medical uses of byproduct (radioactive) materials and oversight of medical use end-users (licensees) through a combination of regulatory requirements, licensing, safety oversight including inspection and enforcement, operational experience evaluation, and regulatory support activities. 
This presentation will explore the career options for medical physicists in the NRC, how the NRC interacts with clinical medical physicists, and a physicist’s experience as a regulator. Learning Objectives: Explore non-clinical career pathways for medical physics students and trainees at the Nuclear Regulatory Commission. Overview of NRC medical applications and medical use regulations. Understand the skills needed for physicists as regulators. Abogunde is funded to attend the meeting by her employer, the NRC.
Rolston, John D; Han, Seunggu J; Chang, Edward F
2017-03-01
The American College of Surgeons (ACS) National Surgical Quality Improvement Program (NSQIP) provides a rich database of North American surgical procedures and their complications. Yet no external source has validated the accuracy of the information within this database. Using records from the 2006 to 2013 NSQIP database, we applied two methods to identify errors: (1) mismatches between the Current Procedural Terminology (CPT) code that was used to identify the surgical procedure, and the International Classification of Diseases (ICD-9) post-operative diagnosis: i.e., a diagnosis that is incompatible with a certain procedure. (2) Primary anesthetic and CPT code mismatching: i.e., anesthesia not indicated for a particular procedure. Analyzing data for movement disorders, epilepsy, and tumor resection, we found evidence of CPT code and postoperative diagnosis mismatches in 0.4-100% of cases, depending on the CPT code examined. When analyzing anesthetic data from brain tumor, epilepsy, trauma, and spine surgery, we found evidence of miscoded anesthesia in 0.1-0.8% of cases. National databases like NSQIP are an important tool for quality improvement. Yet all databases are subject to errors, and measures of internal consistency show that errors affect up to 100% of case records for certain procedures in NSQIP. Steps should be taken to improve data collection on the front end of NSQIP, and also to ensure that future studies with NSQIP take steps to exclude erroneous cases from analysis. Copyright © 2016 Elsevier Ltd. All rights reserved.
New procedures to evaluate visually lossless compression for display systems
NASA Astrophysics Data System (ADS)
Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim
2017-09-01
Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promise a new level of coding quality, but require new techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of the new ISO/IEC 29170-2, a procedure for evaluation of lossless coding and reports the new work by JPEG to extend the procedure in two important ways, for HDR content and for evaluating the differences between still images, panning images and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as, those used for television evaluation have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand evaluation of visually lossless coding for high dynamic range images, slowly moving images, i.e., panning, and image sequences. These requirements are the basis for new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
ERIC Educational Resources Information Center
Meyer, Linda A.; And Others
This manual describes the model--specifically the observation procedures and coding systems--used in a longitudinal study of how children learn to comprehend what they read, with particular emphasis on science texts. Included are procedures for the following: identifying students; observing--recording observations and diagraming the room; writing…
Environmental Factor(tm) system: RCRA hazardous waste handler information (on cd-rom). Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1996-04-01
Environmental Factor(tm) RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous wastes generation, management and minimization by companies who are large quantity generators, and (3) Data on the waste management practices of treatment, storage and disposal (TSD) facilities from the EPA Biennial Reporting System which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status and more; (2) View compliance information - dates of evaluation, violation, enforcement and corrective action; (3) Lookup facilities by waste processing categories of marketing, transporting, processing and energy recovery; (4) Use owner/operator information and names, titles and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving and exporting. Hotline support is also available for no additional charge.
Environmental Factor{trademark} system: RCRA hazardous waste handler information
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1999-03-01
Environmental Factor{trademark} RCRA Hazardous Waste Handler Information on CD-ROM unleashes the invaluable information found in two key EPA data sources on hazardous waste handlers and offers cradle-to-grave waste tracking. It's easy to search and display: (1) Permit status, design capacity and compliance history for facilities found in the EPA Resource Conservation and Recovery Information System (RCRIS) program tracking database; (2) Detailed information on hazardous wastes generation, management and minimization by companies who are large quantity generators, and (3) Data on the waste management practices of treatment, storage and disposal (TSD) facilities from the EPA Biennial Reporting System which is collected every other year. Environmental Factor's powerful database retrieval system lets you: (1) Search for RCRA facilities by permit type, SIC code, waste codes, corrective action or violation information, TSD status, generator and transporter status and more; (2) View compliance information -- dates of evaluation, violation, enforcement and corrective action; (3) Lookup facilities by waste processing categories of marketing, transporting, processing and energy recovery; (4) Use owner/operator information and names, titles and telephone numbers of project managers for prospecting; and (5) Browse detailed data on TSD facility and large quantity generators' activities such as onsite waste treatment, disposal, or recycling, offsite waste received, and waste generation and management. The product contains databases, search and retrieval software on two CD-ROMs, an installation diskette and User's Guide. Environmental Factor has online context-sensitive help from any screen and a printed User's Guide describing installation and step-by-step procedures for searching, retrieving and exporting. Hotline support is also available for no additional charge.
Schmidt, M K; van Leeuwen, F E; Klaren, H M; Tollenaar, R A; van 't Veer, L J
2004-03-20
To answer research questions concerning the course of disease and the optimal treatment of hereditary breast cancer, genetic typing together with the clinical and tumour characteristics of breast cancer patients is an important source of information. Part of the incidence of breast cancer can be explained by BRCA1 and BRCA2 germline mutations, which with current techniques can be retrospectively analysed in stored, paraffin-embedded tissue samples. In view of the implications of BRCA1- or BRCA2-carrier status for patients and other family members and the lack of clear legal regulations regarding the procedures to be followed when analysis is performed on historical material and no individual informed consent can be asked from the patients, an appropriate procedure for coding such data or rendering it anonymous is of great importance. By using the coding procedure described in this article, it becomes possible to follow and to work out in greater detail the guidelines of the code for 'Proper secondary use of human tissue' of the Federation of Biomedical Scientific Societies and to use these valuable databases again in the future.
Speech coding at low to medium bit rates
NASA Astrophysics Data System (ADS)
Leblanc, Wilfred Paul
1992-09-01
Improved search techniques coupled with improved codebook design methodologies are proposed to improve the performance of conventional code-excited linear predictive coders for speech. Improved methods for quantizing the short term filter are developed by employing a tree search algorithm and joint codebook design to multistage vector quantization. Joint codebook design procedures are developed to design locally optimal multistage codebooks. Weighting during centroid computation is introduced to improve the outlier performance of the multistage vector quantizer. Multistage vector quantization is shown to be robust both to input characteristics and to channel errors. Spectral distortions of about 1 dB are obtained at rates of 22-28 bits/frame. Structured codebook design procedures for excitation in code-excited linear predictive coders are compared to general codebook design procedures. Little is lost using significant structure in the excitation codebooks while greatly reducing the search complexity. Sparse multistage configurations are proposed for reducing computational complexity and memory size. Improved search procedures are applied to code-excited linear prediction which attempt joint optimization of the short term filter, the adaptive codebook, and the excitation. Improvements in signal to noise ratio of 1-2 dB are realized in practice.
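The multistage structure described above can be illustrated with a minimal sketch (toy two-dimensional codebooks of our own invention, not the thesis's trained designs): each stage quantizes the residual left by the previous stage, so storage grows as the sum of the stage codebook sizes rather than their product.

```python
# Hedged toy sketch of two-stage (multistage) vector quantization.
# The codebooks below are invented for illustration only.

def nearest(codebook, v):
    """Return the codeword closest to v in squared Euclidean distance."""
    return min(codebook, key=lambda c: sum((a - b) ** 2 for a, b in zip(c, v)))

stage1 = [(0.0, 0.0), (1.0, 1.0)]        # coarse first-stage codebook
stage2 = [(0.0, 0.0), (0.25, -0.25)]     # refines the stage-1 residual

v = (1.2, 0.8)
c1 = nearest(stage1, v)                          # coarse quantization
residual = tuple(a - b for a, b in zip(v, c1))   # error left by stage 1
c2 = nearest(stage2, residual)                   # quantize the residual
reconstruction = tuple(a + b for a, b in zip(c1, c2))
```

Summing the selected codewords gives the reconstruction; a joint (tree) search over both stages, as proposed in the work above, can find better codeword pairs than this greedy stage-by-stage search.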
47 CFR 11.61 - Tests of EAS procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... EAS header codes, Attention Signal, Test Script and EOM code. (i) Tests in odd numbered months shall... substitute for a monthly test, activation must include transmission of the EAS header codes, Attention Signal, emergency message and EOM code and comply with the visual message requirements in § 11.51. To substitute for...
Experimental QR code optical encryption: noise-free data recovering.
Barrera, John Fredy; Mira-Agudelo, Alejandro; Torroba, Roberto
2014-05-15
We report, to our knowledge for the first time, the experimental implementation of a quick response (QR) code as a "container" in an optical encryption system. A joint transform correlator architecture in an interferometric configuration is chosen as the experimental scheme. As the implementation is not possible in a single step, a multiplexing procedure to encrypt the QR code of the original information is applied. Once the QR code is correctly decrypted, the speckle noise present in the recovered QR code is eliminated by a simple digital procedure. Finally, the original information is retrieved completely free of any kind of degradation after reading the QR code. Additionally, we propose and implement a new protocol in which the reception of the encrypted QR code and its decryption, the digital block processing, and the reading of the decrypted QR code are performed employing only one device (smartphone, tablet, or computer). The overall method proves to produce an outcome far more attractive, making the adoption of the technique a plausible option. Experimental results are presented to demonstrate the practicality of the proposed security system.
Software Certification - Coding, Code, and Coders
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Holzmann, Gerard J.
2011-01-01
We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.
Green, Tyler; Smith, Terry; Hodges, Richard; Fry, W Mark
2017-12-01
Record keeping within research animal care facilities is a key part of the guidelines set forth by national regulatory bodies and mandated by federal laws. Research facilities must maintain records of animal health issues, procedures and usage. Facilities are also required to maintain records regarding regular husbandry such as general animal checks, feeding and watering. The level of record keeping has the potential to generate excessive amounts of paper which must be retained in a fashion as to be accessible. In addition it is preferable not to retain within administrative areas any paper records which may have been in contact with animal rooms. Here, we present a flexible, simple and inexpensive process for the generation and storage of electronic animal husbandry records using smartphone technology over a WiFi or cellular network.
The numerical solution of ordinary differential equations by the Taylor series method
NASA Technical Reports Server (NTRS)
Silver, A. H.; Sullivan, E.
1973-01-01
A programming implementation of the Taylor series method is presented for solving ordinary differential equations. The compiler is written in PL/1, and the target language is FORTRAN IV. The reduction of a differential system to rational form is described along with the procedures required for automatic numerical integration. The Taylor method is compared with two other methods for a number of differential equations. Algorithms using the Taylor method to find the zeroes of a given differential equation and to evaluate partial derivatives are presented. An annotated listing of the PL/1 program which performs the reduction and code generation is given. Listings of the FORTRAN routines used by the Taylor series method are included along with a compilation of all the recurrence formulas used to generate the Taylor coefficients for non-rational functions.
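The recurrence-based integration described above lends itself to a small illustration. This is a hedged sketch in Python rather than the paper's PL/1-to-FORTRAN pipeline, and the example ODE is our own choice: for y' = y², y(0) = 1, the Cauchy product of the series with itself gives a recurrence for the Taylor coefficients, and the truncated series reproduces the exact solution 1/(1 − t).

```python
# Hedged sketch, not the paper's compiler: Taylor-series solution of
# y' = y**2, y(0) = 1 (exact solution 1/(1 - t)). Writing y = sum a_n t^n,
# the Cauchy product for y**2 gives the coefficient recurrence
#   (n + 1) * a[n + 1] = sum_{k=0..n} a[k] * a[n - k].

def taylor_coeffs(n_terms, a0=1.0):
    """Generate Taylor coefficients of y' = y**2 from the recurrence."""
    a = [a0]
    for n in range(n_terms - 1):
        cauchy = sum(a[k] * a[n - k] for k in range(n + 1))
        a.append(cauchy / (n + 1))
    return a

def eval_series(a, t):
    """Evaluate the truncated Taylor series at t (Horner omitted for clarity)."""
    return sum(c * t ** k for k, c in enumerate(a))

coeffs = taylor_coeffs(20)          # for this ODE every coefficient is 1
approx = eval_series(coeffs, 0.1)   # close to 1/(1 - 0.1)
```

For |t| well inside the radius of convergence the truncation error falls off geometrically, which is why the method compares favorably on smooth problems.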
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
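The first-order moment method summarized above can be sketched on a toy function (our own choice, not the quasi 3-D Euler code): the output variance is approximated from sensitivity derivatives at the input means, then checked against a Monte Carlo estimate, mirroring the validation strategy in the abstract.

```python
import math
import random

# Hedged sketch of first-order statistical moment propagation.
# f and the input statistics below are invented for illustration:
#   sigma_f^2 ≈ (df/dx1)^2 * sigma1^2 + (df/dx2)^2 * sigma2^2
# for independent, normally distributed x1, x2.

def f(x1, x2):
    return x1 ** 2 + 3.0 * x2

mu1, s1 = 2.0, 0.05
mu2, s2 = 1.0, 0.10

# Moment-method estimate using analytic sensitivity derivatives at the means.
d1, d2 = 2.0 * mu1, 3.0
sigma_moment = math.sqrt((d1 * s1) ** 2 + (d2 * s2) ** 2)

# Monte Carlo check, as in the validation described above.
rng = random.Random(0)
samples = [f(rng.gauss(mu1, s1), rng.gauss(mu2, s2)) for _ in range(200_000)]
mean = sum(samples) / len(samples)
sigma_mc = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
```

For small input variances the two estimates agree closely; the moment method needs only derivative evaluations, not thousands of CFD runs, which is the point of the approach.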
High-performance parallel analysis of coupled problems for aircraft propulsion
NASA Technical Reports Server (NTRS)
Felippa, C. A.; Farhat, C.; Lanteri, S.; Maman, N.; Piperno, S.; Gumaste, U.
1994-01-01
This research program deals with the application of high-performance computing methods for the analysis of complete jet engines. We initiated this program by applying the two-dimensional parallel aeroelastic codes to the interior gas flow problem of a bypass jet engine. The fluid mesh generation, domain decomposition, and solution capabilities were successfully tested. We then focused attention on methodology for the partitioned analysis of the interaction of the gas flow with a flexible structure and with the fluid mesh motion that results from these structural displacements. This is treated by a new arbitrary Lagrangian-Eulerian (ALE) technique that models the fluid mesh motion as that of a fictitious mass-spring network. New partitioned analysis procedures to treat this coupled three-component problem are developed. These procedures involved delayed corrections and subcycling. Preliminary results on the stability, accuracy, and MPP computational efficiency are reported.
NASA Technical Reports Server (NTRS)
Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.
1987-01-01
It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
Identifying Vasopressor and Inotrope Use for Health Services Research
Fawzy, Ashraf; Bradford, Mark; Lindenauer, Peter K.
2016-01-01
Rationale: Identifying vasopressor and inotrope (vasopressor) use from administrative claims data may provide an important resource to study the epidemiology of shock. Objectives: Determine accuracy of identifying vasopressor use using International Classification of Disease, Ninth Revision, Clinical Modification (ICD-9-CM) coding. Methods: Using administrative data enriched with pharmacy billing files (Premier, Inc., Charlotte, NC), we identified two cohorts: adult patients admitted with a diagnosis of sepsis from 2010 to 2013 or pulmonary embolism (PE) from 2008 to 2011. Vasopressor administration was obtained using pharmacy billing files (dopamine, dobutamine, epinephrine, milrinone, norepinephrine, phenylephrine, vasopressin) and compared with ICD-9-CM procedure code for vasopressor administration (00.17). We estimated performance characteristics of the ICD-9-CM code and compared patients’ characteristics and mortality rates according to vasopressor identification method. Measurements and Main Results: Using either pharmacy data or the ICD-9-CM procedure code, 29% of 541,144 patients in the sepsis cohort and 5% of 81,588 patients in the PE cohort were identified as receiving a vasopressor. In the sepsis cohort, the ICD-9-CM procedure code had low sensitivity (9.4%; 95% confidence interval, 9.2–9.5), which increased over time. Results were similar in the PE cohort (sensitivity, 5.8%; 95% confidence interval, 5.1–6.6). The ICD-9-CM code exhibited high specificity in the sepsis (99.8%) and PE (100%) cohorts. However, patients identified as receiving vasopressors by ICD-9-CM code had significantly higher unadjusted in-hospital mortality, had more acute organ failures, and were more likely hospitalized in the Northeast and West. Conclusions: The ICD-9-CM procedure code for vasopressor administration has low sensitivity and selects for higher severity of illness in studies of shock. 
Temporal changes in sensitivity would likely make longitudinal shock surveillance using ICD-9-CM inaccurate. PMID:26653145
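The performance characteristics reported above follow from a standard 2x2 comparison against the pharmacy-billing gold standard. A hedged illustration with invented counts (not the study's data) shows the computation; the counts are chosen only so that the sensitivity lands near the ~9.4% reported for the sepsis cohort.

```python
# Hedged sketch with hypothetical counts, not the study's data:
# sensitivity and specificity of ICD-9-CM procedure code 00.17
# against vasopressor use identified from pharmacy billing files.

def sens_spec(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)   # coded cases among true vasopressor cases
    specificity = tn / (tn + fp)   # uncoded cases among true non-cases
    return sensitivity, specificity

# Hypothetical cohort of 10,000 patients: 2,900 received vasopressors per
# pharmacy data, but only 270 of them carry the ICD-9-CM code.
sens, spec = sens_spec(tp=270, fp=10, fn=2630, tn=7090)
```

With such counts sensitivity is low (here about 9.3%) while specificity stays near 100%, reproducing the qualitative pattern the study describes: the code misses most true cases but rarely flags false ones.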
Probability Quantization for Multiplication-Free Binary Arithmetic Coding
NASA Technical Reports Server (NTRS)
Cheung, K. -M.
1995-01-01
A method has been developed to improve on Witten's binary arithmetic coding procedure of tracking a high value and a low value. The new method approximates the probability of the less probable symbol, which improves the worst-case coding efficiency.
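The interval-tracking idea referenced above can be sketched in miniature. This is our own floating-point toy, not Cheung's exact scheme (real multiplication-free coders work in fixed-point registers): approximating the probability of the less probable symbol (LPS) by a power of two, 2^-k, turns the interval update into a shift instead of a multiplication.

```python
# Hedged toy sketch of binary arithmetic coding with a power-of-two
# LPS probability. Floating point is used for clarity only; the
# division by (1 << k) stands in for the hardware shift.

def encode_interval(bits, k=2):
    """Narrow [low, low + width) for each input bit; LPS probability 2**-k."""
    low, width = 0.0, 1.0
    for b in bits:
        lps_width = width / (1 << k)   # width * 2**-k: a shift, no multiply
        if b == 1:                     # LPS: take the top slice
            low += width - lps_width
            width = lps_width
        else:                          # MPS: keep the bottom slice
            width -= lps_width
    return low, width

low, width = encode_interval([0, 1, 0, 0, 1], k=2)
```

Any number inside the final interval identifies the bit string; the coarser the power-of-two approximation, the larger the worst-case efficiency loss, which is the trade-off the method above improves.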
DOE Office of Scientific and Technical Information (OSTI.GOV)
Virtanen, E.; Haapalehto, T.; Kouhia, J.
1995-09-01
Three experiments were conducted to study the behavior of the new horizontal steam generator construction of the PACTEL test facility. In the experiments the secondary side coolant level was reduced stepwise. The experiments were calculated with two computer codes, RELAP5/MOD3.1 and APROS version 2.11. A similar nodalization scheme was used for both codes so that the results may be compared. Only the steam generator was modelled and the rest of the facility was given as a boundary condition. The results show that both codes calculate the behaviour of the primary side of the steam generator well. On the secondary side both codes calculate lower steam temperatures in the upper part of the heat exchange tube bundle than was measured in the experiments.
Unaligned instruction relocation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.
2018-01-23
In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.
IGB grid: User's manual (A turbomachinery grid generation code)
NASA Technical Reports Server (NTRS)
Beach, T. A.; Hoffman, G.
1992-01-01
A grid generation code called IGB is presented for use in computational investigations of turbomachinery flowfields. It contains a combination of algebraic and elliptic techniques coded for use on an interactive graphics workstation. The instructions for use and a test case are included.
TIGER: Turbomachinery interactive grid generation
NASA Technical Reports Server (NTRS)
Soni, Bharat K.; Shih, Ming-Hsin; Janus, J. Mark
1992-01-01
A three dimensional, interactive grid generation code, TIGER, is being developed for analysis of flows around ducted or unducted propellers. TIGER is a customized grid generator that combines new technology with methods from general grid generation codes. The code generates multiple block, structured grids around multiple blade rows with a hub and shroud for either C grid or H grid topologies. The code is intended for use with a Euler/Navier-Stokes solver also being developed, but is general enough for use with other flow solvers. TIGER features a silicon graphics interactive graphics environment that displays a pop-up window, graphics window, and text window. The geometry is read as a discrete set of points with options for several industrial standard formats and NASA standard formats. Various splines are available for defining the surface geometries. Grid generation is done either interactively or through a batch mode operation using history files from a previously generated grid. The batch mode operation can be done either with a graphical display of the interactive session or with no graphics so that the code can be run on another computer system. Run time can be significantly reduced by running on a Cray-YMP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
A.A. Bingham; R.M. Ferrer; A.M. Ougouag
2009-09-01
An accurate and computationally efficient two or three-dimensional neutron diffusion model will be necessary for the development, safety parameters computation, and fuel cycle analysis of a prismatic Very High Temperature Reactor (VHTR) design under the Next Generation Nuclear Plant Project (NGNP). For this purpose, an analytical nodal Green’s function solution for the transverse integrated neutron diffusion equation is developed in two and three-dimensional hexagonal geometry. This scheme is incorporated into HEXPEDITE, a code first developed by Fitzpatrick and Ougouag. HEXPEDITE neglects non-physical discontinuity terms that arise in the transverse leakage due to the application of the transverse integration procedure to hexagonal geometry and cannot account for the effects of burnable poisons across nodal boundaries. The test code being developed for this document accounts for these terms by maintaining an inventory of neutrons, using the nodal balance equation as a constraint on the neutron flux equation. The method developed in this report is intended to restore neutron conservation and increase the accuracy of the code by adding these terms to the transverse integrated flux solution and applying the nodal Green’s function solution to the resulting equation to derive a semi-analytical solution.
Kimia, Amir A; Savova, Guergana; Landschaft, Assaf; Harper, Marvin B
2015-07-01
Electronically stored clinical documents may contain both structured data and unstructured data. The use of structured clinical data varies by facility, but clinicians are familiar with coded data such as International Classification of Diseases, Ninth Revision codes and Systematized Nomenclature of Medicine-Clinical Terms codes, and commonly other data including patient chief complaints or laboratory results. Most electronic health records have much more clinical information stored as unstructured data, for example, clinical narrative such as history of present illness, procedure notes, and clinical decision making are stored as unstructured data. Despite the importance of this information, electronic capture or retrieval of unstructured clinical data has been challenging. The field of natural language processing (NLP) is undergoing rapid development, and existing tools can be successfully used for quality improvement, research, healthcare coding, and even billing compliance. In this brief review, we provide examples of successful uses of NLP using emergency medicine physician visit notes for various projects and the challenges of retrieving specific data and finally present practical methods that can run on a standard personal computer as well as high-end state-of-the-art funded processes run by leading NLP informatics researchers.
Hosseini, Seyed Abolfazl; Esmaili Paeen Afrakoti, Iman
2018-01-17
The purpose of the present study was to reconstruct the energy spectrum of a poly-energetic neutron source using an algorithm developed based on an Adaptive Neuro-Fuzzy Inference System (ANFIS). ANFIS is a kind of artificial neural network based on the Takagi-Sugeno fuzzy inference system. The ANFIS algorithm uses the advantages of both fuzzy inference systems and artificial neural networks to improve the effectiveness of algorithms in various applications such as modeling, control and classification. The neutron pulse height distributions used as input data in the training procedure for the ANFIS algorithm were obtained from the simulations performed by the MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Taking into account the normalization condition of each energy spectrum, 4300 neutron energy spectra were generated randomly. (The value in each bin was generated randomly, and finally a normalization of each generated energy spectrum was performed.) The randomly generated neutron energy spectra were considered as output data of the developed ANFIS computational code in the training step. To calculate the neutron energy spectrum using conventional methods, an inverse problem with an approximately singular response matrix (with the determinant of the matrix close to zero) should be solved. Solving the inverse problem with conventional methods unfolds the neutron energy spectrum with low accuracy. Application of iterative algorithms to the solution of such a problem, or utilizing intelligent algorithms (in which there is no need to solve the problem), is usually preferred for unfolding of the energy spectrum. Therefore, the main reason for development of intelligent algorithms like ANFIS for unfolding of neutron energy spectra is to avoid solving the inverse problem.
In the present study, the unfolded neutron energy spectra of 252Cf and 241Am-9Be neutron sources using the developed computational code were found to have excellent agreement with the reference data. Also, the unfolded energy spectra of the neutron sources as obtained using ANFIS were more accurate than the results reported from calculations performed using artificial neural networks in previously published papers. © The Author(s) 2018. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.
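The training-set construction described above (random bin values followed by normalization) can be sketched as follows; the bin count and seeding are assumptions for illustration, not details from the paper:

```python
import random

# Sketch of the training-set construction described above (details assumed):
# each energy bin gets a random value, then the spectrum is normalized so
# that its bins sum to one, matching the stated normalization condition.
def random_spectrum(n_bins: int, seed: int = 0) -> list:
    rng = random.Random(seed)
    bins = [rng.random() for _ in range(n_bins)]
    total = sum(bins)
    return [b / total for b in bins]

# e.g. one of the 4300 randomly generated spectra (64 bins assumed)
spectrum = random_spectrum(64)
print(abs(sum(spectrum) - 1.0) < 1e-12)  # True: normalization holds
```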
Lowe, Jeanne R; Raugi, Gregory J; Reiber, Gayle E; Whitney, Joanne D
2013-01-01
The purpose of this cohort study was to evaluate the effect of a 1-year intervention of an electronic medical record wound care template on the completeness of wound care documentation and medical coding compared to a similar time interval for the fiscal year preceding the intervention. From October 1, 2006, to September 30, 2007, a "good wound care" intervention was implemented at a rural Veterans Affairs facility to prevent amputations in veterans with diabetes and foot ulcers. The study protocol included a template with foot ulcer variables embedded in the electronic medical record to facilitate data collection, support clinical decision making, and improve ordering and medical coding. The intervention group showed significant differences in complete documentation of good wound care compared to the historic control group (χ² = 15.99, P < .001), complete documentation of coding for diagnoses and procedures (χ² = 30.23, P < .001), and complete documentation of both good wound care and coding for diagnoses and procedures (χ² = 14.96, P < .001). An electronic wound care template improved documentation of evidence-based interventions and facilitated coding for wound complexity and procedures.
A concatenated coding scheme for error control
NASA Technical Reports Server (NTRS)
Lin, S.
1985-01-01
A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the generator polynomial of the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example, but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
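The outer-code generator polynomial named above, X^16+X^12+X^5+1, is the CRC-16 polynomial 0x1021 in truncated form. A minimal sketch of the corresponding division, using the zero-initialized, non-reflected parameter set (CRC-16/XMODEM): note that the X.25 frame check sequence as actually transmitted additionally uses bit reflection and an all-ones initial value.

```python
# Sketch: the outer code's generator polynomial X^16 + X^12 + X^5 + 1 is the
# CRC-16 polynomial 0x1021 (truncated form). The routine below is plain
# MSB-first polynomial division with zero initialization and no bit
# reflection (the CRC-16/XMODEM parameter set); the X.25 frame check
# sequence as actually transmitted additionally uses bit reflection and an
# all-ones initial value.
def crc16_poly_0x1021(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_poly_0x1021(b"123456789")))  # 0x31c3 (the XMODEM check value)
```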
Fractal-Based Image Compression
1989-09-01
[Figure-list fragments: a Mercedes-Benz symbol generated using an IFS code; U-A and A-0 ferns generated with RIFS codes; construction of the Mercedes-Benz symbol using RIFS; the regenerated perfect image of the Mercedes-Benz symbol using RIFS.] ...quite often, it cannot be done with a reasonable number of transforms. As an example, the Mercedes-Benz symbol generated using an IFS code is illustrated
Model-Driven Engineering: Automatic Code Generation and Beyond
2015-03-01
and Weblogic as well as cloud environments such as Microsoft Azure and Amazon Web Services®. Finally, while the generated code has dependencies on...code generation in the context of the full system lifecycle from development to sustainment. Acquisition programs in government or large commercial...Acquirers are concerned with the full system lifecycle, and they need confidence that the development methods will enable the system to meet the functional
Deductive Glue Code Synthesis for Embedded Software Systems Based on Code Patterns
NASA Technical Reports Server (NTRS)
Liu, Jian; Fu, Jicheng; Zhang, Yansheng; Bastani, Farokh; Yen, I-Ling; Tai, Ann; Chau, Savio N.
2006-01-01
Automated code synthesis is a constructive process that can be used to generate programs from specifications. It can thus greatly reduce software development cost and time. The use of a formal code synthesis approach for software generation further increases the dependability of the system. Though code synthesis has many potential benefits, the synthesis techniques are still limited. Meanwhile, components are widely used in embedded system development. Applying code synthesis to the component-based software development (CBSD) process can greatly enhance the capability of code synthesis while reducing component composition effort. In this paper, we discuss the issues and techniques involved in applying deductive code synthesis techniques to CBSD. For deductive synthesis in CBSD, a rule base is the key to inferring appropriate component compositions. We use code patterns, which have been proposed to capture the typical usages of components, to guide the development of rules. Several general composition operations have been identified to facilitate systematic composition. We present the technique for rule development and for the automated generation of new patterns from existing code patterns. A case study of using this method to build a real-time control system is also presented.
Nouraei, S A R; O'Hanlon, S; Butler, C R; Hadovsky, A; Donald, E; Benjamin, E; Sandhu, G S
2009-02-01
To audit the accuracy of otolaryngology clinical coding and identify ways of improving it. Prospective multidisciplinary audit, using the 'national standard clinical coding audit' methodology supplemented by 'double-reading and arbitration'. Teaching-hospital otolaryngology and clinical coding departments. Otolaryngology inpatient and day-surgery cases. Concordance between initial coding performed by a coder (first cycle) and final coding by a clinician-coder multidisciplinary team (MDT; second cycle) for primary and secondary diagnoses and procedures, and Health Resource Groupings (HRG) assignment. 1250 randomly selected cases were studied. Coding errors occurred in 24.1% of cases (301/1250). The clinician-coder MDT reassigned 48 primary diagnoses and 186 primary procedures and identified a further 209 initially missed secondary diagnoses and procedures. In 203 cases, the patient's initial HRG changed. Incorrect coding caused an average revenue loss of 174.90 pounds per patient (14.7%), of which 60% of the total income variance was due to miscoding of eight highly complex head and neck cancer cases. The 'HRG drift' created the appearance of disproportionate resource utilisation when treating 'simple' cases. At our institution the total cost of maintaining a clinician-coder MDT was 4.8 times lower than the income regained through the double-reading process. This large audit of otolaryngology practice identifies a large degree of error in coding on discharge. This leads to significant loss of departmental revenue and, given that the same data are used for benchmarking and for making decisions about resource allocation, it distorts the picture of clinical practice. These errors can be rectified by implementing a cost-effective clinician-coder double-reading multidisciplinary team as part of a data-assurance clinical governance framework, which we recommend should be established in hospitals.
Jones, Lyell K; Ney, John P
2016-12-01
Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.
Fister, Karin; Fister, Iztok; Murovec, Jana; Bohanec, Borut
2017-02-01
Plant breeders' rights are undergoing dramatic changes due to changes in patent rights concerning plant variety protection. Although differences in the interpretation of the »breeder's exemption« (termed the research exemption in the 1991 UPOV convention) did exist in the past in some countries, allowing breeders to use protected varieties as parents in the creation of new plant varieties, current developments brought about by the patenting of conventionally bred varieties at the European Patent Office (such as EP2140023B1) have opened new challenges. Legal restrictions on germplasm availability are therefore imposed on breeders while, at the same time, no practical information is given on how to distinguish protected from non-protected varieties. We propose here a novel approach that would solve this problem by inserting short DNA stretches (labels) into protected plant varieties by genetic transformation. This information would then be available to breeders through a simple and standardized procedure. We propose that such a procedure consist of using a pair of universal primers that generate a sequence in a PCR reaction, which can be read and translated into ordinary text by a computer application. To demonstrate the feasibility of such an approach, we conducted a case study. Using the Agrobacterium tumefaciens transformation protocol, we inserted a stretch of DNA code into Nicotiana benthamiana. We also developed an on-line application that encodes any text message into a DNA nucleotide code and, on sequencing, decodes it back into text. In the presented case study, a short sequence encoding the phrase »Hello world« was transformed into a DNA sequence that was inserted in the plant genome. The encoded message was reconstructed from the resulting T1 seedlings with 100 % accuracy. The feasibility and possible other applications of this approach are discussed.
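The text-to-DNA coding idea above can be sketched as follows. This is an illustrative scheme, not the authors' actual encoding: each byte becomes four nucleotides at 2 bits per base, with an assumed base mapping.

```python
# Illustrative sketch of the idea, not the authors' exact scheme: encode each
# byte of text as four nucleotides (2 bits per base) and decode it back.
# The base mapping below is an assumption chosen for demonstration.
NUC = "ACGT"  # 00 -> A, 01 -> C, 10 -> G, 11 -> T

def text_to_dna(text: str) -> str:
    return "".join(NUC[(b >> s) & 3]
                   for b in text.encode("utf-8") for s in (6, 4, 2, 0))

def dna_to_text(dna: str) -> str:
    data = bytearray()
    for i in range(0, len(dna), 4):
        b = 0
        for base in dna[i:i + 4]:
            b = (b << 2) | NUC.index(base)
        data.append(b)
    return data.decode("utf-8")

seq = text_to_dna("Hello world")
print(dna_to_text(seq))  # Hello world
```

A practical in-planta label would also need error tolerance against sequencing noise, which this round-trip sketch ignores.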
featsel: A framework for benchmarking of feature selection algorithms and cost functions
NASA Astrophysics Data System (ADS)
Reis, Marcelo S.; Estrela, Gustavo; Ferreira, Carlos Eduardo; Barrera, Junior
In this paper, we introduce featsel, a framework for benchmarking of feature selection algorithms and cost functions. This framework allows the user to deal with the search space as a Boolean lattice and has its core coded in C++ for computational efficiency purposes. Moreover, featsel includes Perl scripts to add new algorithms and/or cost functions, generate random instances, plot graphs and organize results into tables. Besides, this framework already comes with dozens of algorithms and cost functions for benchmarking experiments. We also provide illustrative examples, in which featsel outperforms the popular Weka workbench in feature selection procedures on data sets from the UCI Machine Learning Repository.
Leadless Pacing: Current State and Future Direction.
Merkel, Matthias; Grotherr, Philipp; Radzewitz, Andrea; Schmitt, Claus
2017-12-01
Leadless pacing is now an established alternative to conventional pacing with a subcutaneous pocket and transvenous lead for patients with a class I or II single-chamber pacing indication. Available 12-month follow-up data show a 48% lower rate of major complications in patients with Micra™ compared to a historical control group in a nonrandomized study [1]. There is one system with Food and Drug Administration (FDA) approval and two with the Communauté Européenne (CE) mark. The OPS code for the implantation is 8-83d.3, and the procedure has recently been rated as a "new Examination and Treatment Method (NUB)" in the German DRG system, meaning adequate reimbursement is negotiable with health insurance providers. The systems offer similar generator longevity and programming possibilities as conventional pacemaker systems, including rate response, remote monitoring, and MRI safety. The biggest downsides to date are the limitation to single-chamber stimulation, the lack of long-term data, and concerns about handling of the system at the end of its life span. However, implant procedure complication rates and procedure times do not exceed those of conventional pacemaker operations, provided proper training and patient selection are in place.
Experimental determination of the correlation properties of plasma turbulence using 2D BES systems
NASA Astrophysics Data System (ADS)
Fox, M. F. J.; Field, A. R.; van Wyk, F.; Ghim, Y.-c.; Schekochihin, A. A.; the MAST Team
2017-04-01
A procedure is presented to map from the spatial correlation parameters of a turbulent density field (the radial and binormal correlation lengths and wavenumbers, and the fluctuation amplitude) to correlation parameters that would be measured by a beam emission spectroscopy (BES) diagnostic. The inverse mapping is also derived, which results in resolution criteria for recovering correct correlation parameters, depending on the spatial response of the instrument quantified in terms of point-spread functions (PSFs). Thus, a procedure is presented that allows for a systematic comparison between theoretical predictions and experimental observations. This procedure is illustrated using the Mega-Ampere Spherical Tokamak BES system and the validity of the underlying assumptions is tested on fluctuating density fields generated by direct numerical simulations using the gyrokinetic code GS2. The measurement of the correlation time, by means of the cross-correlation time-delay method, is also investigated and is shown to be sensitive to the fluctuating radial component of velocity, as well as to small variations in the spatial properties of the PSFs.
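The cross-correlation time-delay method mentioned above can be sketched on synthetic signals (invented here, not BES data): the lag that maximizes the cross-correlation of two channels estimates the delay between them.

```python
import random

# Sketch of the cross-correlation time-delay method mentioned above, on
# synthetic signals (not BES data): the lag that maximizes the
# cross-correlation of two channels estimates the delay between them.
random.seed(0)
n, true_lag = 256, 7
x = [random.gauss(0, 1) for _ in range(n)]
y = [x[(i - true_lag) % n] + 0.05 * random.gauss(0, 1) for i in range(n)]

def xcorr_lag(y: list, x: list, max_lag: int) -> int:
    """Return the lag in [-max_lag, max_lag] maximizing sum y[i] * x[i-lag]."""
    n = len(x)
    best, best_lag = float("-inf"), 0
    for lag in range(-max_lag, max_lag + 1):
        c = sum(y[i] * x[i - lag] for i in range(max_lag, n - max_lag))
        if c > best:
            best, best_lag = c, lag
    return best_lag

print(xcorr_lag(y, x, 20))  # 7: the imposed delay is recovered
```

The paper's point is that instrument PSFs distort exactly this kind of estimate, which is why the forward/inverse mappings are needed.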
SIGACE Code for Generating High-Temperature ACE Files; Validation and Benchmarking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharma, Amit R.; Ganesan, S.; Trkov, A.
2005-05-24
A code named SIGACE has been developed as a tool for MCNP users within the scope of a research contract awarded by the Nuclear Data Section of the International Atomic Energy Agency (IAEA) (Ref: 302-F4-IND-11566 B5-IND-29641). A new recipe has been evolved for generating high-temperature ACE files for use with the MCNP code. Under this scheme the low-temperature ACE file is first converted to an ENDF formatted file using the ACELST code and then Doppler broadened, essentially limited to the data in the resolved resonance region, to any desired higher temperature using SIGMA1. The SIGACE code then generates a high-temperature ACE file for use with the MCNP code. A thinning routine has also been introduced in the SIGACE code for reducing the size of the ACE files. The SIGACE code and the recipe for generating ACE files at higher temperatures have been applied to the SEFOR fast reactor benchmark problem (the sodium-cooled fast reactor benchmark described in the ENDF-202/BNL-19302, 1974 document). The calculated Doppler coefficient is in good agreement with the experimental value. A similar calculation using ACE files generated directly with the NJOY system also agrees with our SIGACE computed results. The SIGACE code and the recipe were further applied to study the numerical benchmark configuration of selected idealized PWR pin cell configurations with five different fuel enrichments as reported by Mosteller and Eisenhart. The SIGACE code, which has been tested with several FENDL/MC files, will be available, free of cost, upon request, from the Nuclear Data Section of the IAEA.
Application of Quantum Gauss-Jordan Elimination Code to Quantum Secret Sharing Code
NASA Astrophysics Data System (ADS)
Diep, Do Ngoc; Giang, Do Hoang; Phu, Phan Huy
2017-12-01
The QSS codes associated with a MSP code are based on finding an invertible matrix V, solving the system vATMB (s a) = s. We propose a quantum Gauss-Jordan Elimination Procedure to produce such a pivotal matrix V by using the Grover search code. The complexity of solving is of square-root order of the cardinal number of the unauthorized set, √(2^|B|).
Aerodynamic characterization of the jet of an arc wind tunnel
NASA Astrophysics Data System (ADS)
Zuppardi, Gennaro; Esposito, Antonio
2016-11-01
It is well known that, owing to a very aggressive environment and a rather high rarefaction level of the arc wind tunnel jet, the measurement of fluid-dynamic parameters is difficult. For this reason, the aerodynamic characterization of the jet also relies on computer codes simulating the operation of the tunnel. The present authors have already used such a computing procedure successfully for tests in the arc wind tunnel (SPES) in Naples, Italy. In the present work an improved procedure is proposed. Like the former, the present procedure relies on two codes working in tandem: 1) a one-dimensional code simulating the inviscid, thermally non-conducting flow field in the torch, the mixing chamber and the nozzle up to the position, along the nozzle axis, of the continuum breakdown; 2) a Direct Simulation Monte Carlo (DSMC) code simulating the flow field in the remaining part of the nozzle. In the present procedure, the DSMC simulation covers both the nozzle and the test chamber. An interesting problem, considered in this paper by means of the present procedure, is the simulation of the flow field around a Pitot tube and of the related measurement of the stagnation pressure. The measured stagnation pressure, under rarefied conditions, may be as much as four times the theoretical value; therefore a substantial correction has to be applied to the measured pressure. In the present paper a correction factor for the stagnation pressure measured in SPES is proposed. The analysis relies on twelve tests made in SPES.
Keane, Frank; Hammond, Laura; Kelliher, Gerry; Mealy, Ken
2017-12-12
In the year to July 2017, surgical disciplines accounted for 73% of the total national inpatient and day case waiting list and, of these, day cases accounted for 72%. Their proper classification is therefore important so that patients can be managed and treated in the most suitable and efficient setting. We set out to sub-classify the different elective surgical day cases treated in Irish public hospitals in order to assess their need to be managed as day cases and the consistency of practice between hospitals. We analysed all elective day cases that came under the care of surgeons between January 2014 and December 2016 and sub-classified them into those that were (A) true day case surgical procedures; (B) minor surgery or outpatient procedures; (C) gastrointestinal endoscopies; (D) day case, non-surgical interventions and (E) unclassified or having no primary procedure identified. Of 813,236 day case surgical interventions performed over 3 years, 26% were adjudged to accord with group A, 41% with B, 23% with C, 5% with D and 5% with E. The ratio of A to B procedures did not vary significantly across the range of hospital types. However, there were some notable variations in coding and practices between hospitals. Our findings show that many day cases should have been performed as outpatient procedures and that there were variations in coding and practices between hospitals that could not be easily explained. Outpatient procedure coding and a better, more consistent, classification of day cases are both required to better manage this group of patients.
Creation and Delivery of New Superpixelized DIRBE Map Products
NASA Technical Reports Server (NTRS)
Weiland, J.
1998-01-01
Phase 1 called for the following tasks: (1) completion of code to generate intermediate files containing the individual DIRBE observations which would be used to make the superpixelized maps; (2) completion of code necessary to generate the maps themselves; and (3) quality control on test-case maps in the form of point-source extraction and photometry. Items 1 and 2 are well in hand and the tested code is nearly complete. A few test maps have been generated for the tests mentioned in item 3. Map generation is not in production mode yet.
Rassinoux, Anne-Marie; Baud, Robert H; Rodrigues, Jean-Marie; Lovis, Christian; Geissbühler, Antoine
2007-01-01
The importance of clinical communication between providers, consumers and others, as well as the requirement for computer interoperability, strengthens the need for sharing commonly accepted terminologies. Under the directives of the World Health Organization (WHO), an approach is currently being conducted in Australia to adopt a standardized terminology for medical procedures that is intended to become an international reference. In order to achieve such a standard, a collaborative approach is adopted, in line with the successful experiment conducted for the development of the new French coding system CCAM. Different coding centres are involved in setting up a semantic representation of each term using a formal ontological structure expressed through a logic-based representation language. From this language-independent representation, multilingual natural language generation (NLG) is performed to produce noun phrases in various languages that are then compared for consistency with the original terms. Outcomes are presented for the assessment of the International Classification of Health Interventions (ICHI) and its translation into Portuguese. The initial results clearly emphasize the feasibility and cost-effectiveness of the proposed method for handling both a different classification and an additional language. NLG tools, based on ontology-driven semantic representation, facilitate the discovery of ambiguous and inconsistent terms and, as such, should be promoted for establishing coherent international terminologies.
A decoding procedure for the Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Lim, R. S.
1978-01-01
A decoding procedure is described for the (n,k) t-error-correcting Reed-Solomon (RS) code, along with an implementation of the (31,15) RS code for the I4-TENEX central system. This code can be used for error correction in large archival memory systems. The principal features of the decoder are a Galois field arithmetic unit implemented by microprogramming a microprocessor, and syndrome calculation using the g(x) encoding shift register. Complete decoding of the (31,15) code is expected to take less than 500 microseconds. The syndrome calculation is performed in hardware using the encoding shift register and a modified Chien search. The error location polynomial is computed using Lin's table, which is an interpretation of Berlekamp's iterative algorithm. The error location numbers are calculated using the Chien search. Finally, the error values are computed using Forney's method.
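The first decoding step named above, syndrome calculation, can be sketched in software. For illustration this works over GF(2^8) with a common primitive polynomial (an assumed choice; the paper's (31,15) code is over GF(2^5), and its hardware evaluates syndromes with the encoding shift register rather than in software):

```python
# Sketch of the first decoding step named above, syndrome calculation, for a
# Reed-Solomon code. For illustration this works over GF(2^8) with the
# common primitive polynomial x^8+x^4+x^3+x^2+1 (0x11D, an assumed choice);
# the paper's (31,15) code instead uses GF(2^5).
def gf_mul(a: int, b: int, poly: int = 0x11D) -> int:
    """Multiply two GF(2^8) elements, reducing modulo the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def syndromes(received: list, n_syn: int, alpha: int = 2) -> list:
    """Evaluate the received polynomial at alpha^1 .. alpha^n_syn (Horner)."""
    out, x = [], 1
    for _ in range(n_syn):
        x = gf_mul(x, alpha)
        s = 0
        for coeff in received:
            s = gf_mul(s, x) ^ coeff
        out.append(s)
    return out

# The all-zero word is a codeword of every linear code, so syndromes vanish.
print(syndromes([0] * 10, 4))  # [0, 0, 0, 0]
```

Nonzero syndromes would then feed the error-locator computation (Berlekamp's algorithm), the Chien search, and Forney's method as described above.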
A Combinatorial Geometry Computer Description of the MEP-021A Generator Set
1979-02-01
[Report-documentation fragments:] Keywords: generator computer description, gasoline generator, GIFT, MEP-021A. ...This GIFT code is also stored on magnetic tape for future vulnerability analysis. ...the Geometric Information for Targets (GIFT) computer code. The GIFT code traces shotlines through a COM-GEOM description from any specified attack
Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne
2008-03-01
The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.
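Cohen's kappa, one of the agreement statistics used above, can be sketched from first principles; the two raters' categorical ratings below are invented for illustration:

```python
from collections import Counter

# Sketch: Cohen's kappa, one of the agreement statistics cited above,
# computed from two raters' categorical ratings (ratings invented here).
def cohens_kappa(r1: list, r2: list) -> float:
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in c1) / (n * n)  # chance agreement
    return (observed - expected) / (1 - expected)

rater1 = ["yes", "yes", "no", "yes", "no", "no"]
rater2 = ["yes", "no", "no", "yes", "no", "yes"]
print(round(cohens_kappa(rater1, rater2), 3))  # 0.333
```

Kappa corrects raw percent agreement for the agreement expected by chance, which is why it is preferred for monitoring rating consistency.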
A versatile calibration procedure for portable coded aperture gamma cameras and RGB-D sensors
NASA Astrophysics Data System (ADS)
Paradiso, V.; Crivellaro, A.; Amgarou, K.; de Lanaute, N. Blanc; Fua, P.; Liénard, E.
2018-04-01
The present paper proposes a versatile procedure for the geometrical calibration of coded aperture gamma cameras and RGB-D depth sensors, using only one radioactive point source and a simple experimental set-up. Calibration data is then used for accurately aligning radiation images retrieved by means of the γ-camera with the respective depth images computed with the RGB-D sensor. The system resulting from such a combination is thus able to retrieve, automatically, the distance of radioactive hotspots by means of pixel-wise mapping between gamma and depth images. This procedure is of great interest for a wide number of applications, ranging from precise automatic estimation of the shape and distance of radioactive objects to Augmented Reality systems. Incidentally, the corresponding results validated the choice of a perspective design model for a coded aperture γ-camera.
Evidence-Based Imaging Guidelines and Medicare Payment Policy
Sistrom, Christopher L; McKay, Niccie L
2008-01-01
Objective This study examines the relationship between evidence-based appropriateness criteria for neurologic imaging procedures and Medicare payment determinations. The primary research question is whether Medicare is more likely to pay for imaging procedures as the level of appropriateness increases. Data Sources The American College of Radiology Appropriateness Criteria (ACRAC) for neurological imaging, ICD-9-CM codes, CPT codes, and payment determinations by the Medicare Part B carrier for Florida and Connecticut. Study Design Cross-sectional study of appropriateness criteria and Medicare Part B payment policy for neurological imaging. In addition to descriptive and bivariate statistics, multivariate logistic regression on payment determination (yes or no) was performed. Data Collection Methods The American College of Radiology Appropriateness Criteria (ACRAC) documents specific to neurological imaging, ICD-9-CM codes, and CPT codes were used to create 2,510 medical condition/imaging procedure combinations, with associated appropriateness scores (coded as low/middle/high). Principal Findings As the level of appropriateness increased, more medical condition/imaging procedure combinations were payable (low = 61 percent, middle = 70 percent, and high = 74 percent). Logistic regression indicated that the odds of a medical condition/imaging procedure combination with a middle level of appropriateness being payable was 48 percent higher than for an otherwise similar combination with a low appropriateness score (95 percent CI on odds ratio=1.19–1.84). The odds ratio for being payable between high and low levels of appropriateness was 2.25 (95 percent CI: 1.66–3.04). Conclusions Medicare could improve its payment determinations by taking advantage of existing clinical guidelines, appropriateness criteria, and other authoritative resources for evidence-based practice. 
Such an approach would give providers a financial incentive that is aligned with best-practice medicine. In particular, Medicare should review and update its payment policies to reflect current information on the appropriateness of alternative imaging procedures for the same medical condition. PMID:18454778
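The odds-ratio arithmetic behind findings like those above can be sketched with Woolf's method on a 2x2 table of appropriateness level by payment determination. The counts below are invented from the reported payable percentages for illustration only; they are not the study's data and do not reproduce its adjusted estimates.

```python
import math

# Sketch of the odds-ratio arithmetic behind findings like those above,
# using Woolf's method on a 2x2 table of appropriateness level (high vs.
# low) by payment determination (payable vs. not). The counts below are
# invented for illustration and are NOT the study's data.
def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Odds ratio and ~95% CI for the 2x2 table [[a, b], [c, d]]."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    return or_, or_ * math.exp(-z * se), or_ * math.exp(z * se)

# hypothetical counts: high appropriateness 74 payable / 26 not; low: 61 / 39
or_, lo, hi = odds_ratio_ci(74, 26, 61, 39)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```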
The MOLGENIS toolkit: rapid prototyping of biosoftware at the push of a button.
Swertz, Morris A; Dijkstra, Martijn; Adamusiak, Tomasz; van der Velde, Joeri K; Kanterakis, Alexandros; Roos, Erik T; Lops, Joris; Thorisson, Gudmundur A; Arends, Danny; Byelas, George; Muilu, Juha; Brookes, Anthony J; de Brock, Engbert O; Jansen, Ritsert C; Parkinson, Helen
2010-12-21
There is a huge demand on bioinformaticians to provide their biologists with user friendly and scalable software infrastructures to capture, exchange, and exploit the unprecedented amounts of new *omics data. We here present MOLGENIS, a generic, open source, software toolkit to quickly produce the bespoke MOLecular GENetics Information Systems needed. The MOLGENIS toolkit provides bioinformaticians with a simple language to model biological data structures and user interfaces. At the push of a button, MOLGENIS' generator suite automatically translates these models into a feature-rich, ready-to-use web application including database, user interfaces, exchange formats, and scriptable interfaces. Each generator is a template of SQL, JAVA, R, or HTML code that would require much effort to write by hand. This 'model-driven' method ensures reuse of best practices and improves quality because the modeling language and generators are shared between all MOLGENIS applications, so that errors are found quickly and improvements are shared easily by a re-generation. A plug-in mechanism ensures that both the generator suite and generated product can be customized just as much as hand-written software. In recent years we have successfully evaluated the MOLGENIS toolkit for the rapid prototyping of many types of biomedical applications, including next-generation sequencing, GWAS, QTL, proteomics and biobanking. Writing 500 lines of model XML typically replaces 15,000 lines of hand-written programming code, which allows for quick adaptation if the information system is not yet to the biologist's satisfaction. Each application generated with MOLGENIS comes with an optimized database back-end, user interfaces for biologists to manage and exploit their data, programming interfaces for bioinformaticians to script analysis tools in R, Java, SOAP, REST/JSON and RDF, a tab-delimited file format to ease upload and exchange of data, and detailed technical documentation. 
Existing databases can be quickly enhanced with MOLGENIS generated interfaces using the 'ExtractModel' procedure. The MOLGENIS toolkit provides bioinformaticians with a simple model to quickly generate flexible web platforms for all possible genomic, molecular and phenotypic experiments with a richness of interfaces not provided by other tools. All the software and manuals are available free as LGPLv3 open source at http://www.molgenis.org.
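The model-driven generation idea above can be sketched in miniature. This is not MOLGENIS' actual generator templates; the entity and field names are invented, and real generators emit far more than DDL (user interfaces, APIs, documentation):

```python
# Sketch of the model-driven idea (not MOLGENIS' actual generator templates):
# a tiny data model is translated into SQL DDL, the kind of boilerplate a
# generator suite emits so it need not be written by hand. Entity and field
# names are invented.
MODEL = {"name": "Sample", "fields": [("id", "INTEGER"), ("label", "TEXT")]}

def generate_sql(model: dict) -> str:
    cols = ",\n  ".join(f"{name} {sqltype}" for name, sqltype in model["fields"])
    return f"CREATE TABLE {model['name']} (\n  {cols}\n);"

print(generate_sql(MODEL))
```

The leverage comes from regeneration: improving one template improves every application generated from it, which is the "errors are found quickly and improvements are shared easily" point above.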
Tompkins, Connie A.; Meigh, Kimberly M.; Prat, Chantel S.
2015-01-01
Purpose This study examined right hemisphere (RH) neuroanatomical correlates of lexical–semantic deficits that predict narrative comprehension in adults with RH brain damage. Coarse semantic coding and suppression deficits were related to lesions by voxel-based lesion symptom mapping. Method Participants were 20 adults with RH cerebrovascular accidents. Measures of coarse coding and suppression deficits were computed from lexical decision reaction times at short (175 ms) and long (1000 ms) prime-target intervals. Lesions were drawn on magnetic resonance imaging images and through normalization were registered on an age-matched brain template. Voxel-based lesion symptom mapping analysis was applied to build a general linear model at each voxel. Z score maps were generated for each deficit, and results were interpreted using automated anatomical labeling procedures. Results A deficit in coarse semantic activation was associated with lesions to the RH posterior middle temporal gyrus, dorsolateral prefrontal cortex, and lenticular nuclei. A maintenance deficit for coarsely coded representations involved the RH temporal pole and dorsolateral prefrontal cortex more medially. Ineffective suppression implicated lesions to the RH inferior frontal gyrus and subcortical regions, as hypothesized, along with the rostral temporal pole. Conclusion Beyond their scientific implications, these lesion–deficit correspondences may help inform the clinical diagnosis and enhance decisions about candidacy for deficit-focused treatment to improve narrative comprehension in individuals with RH damage. PMID:26425785
Yang, Ying; Tompkins, Connie A; Meigh, Kimberly M; Prat, Chantel S
2015-11-01
This study examined right hemisphere (RH) neuroanatomical correlates of lexical-semantic deficits that predict narrative comprehension in adults with RH brain damage. Coarse semantic coding and suppression deficits were related to lesions by voxel-based lesion symptom mapping. Participants were 20 adults with RH cerebrovascular accidents. Measures of coarse coding and suppression deficits were computed from lexical decision reaction times at short (175 ms) and long (1000 ms) prime-target intervals. Lesions were drawn on magnetic resonance imaging images and through normalization were registered on an age-matched brain template. Voxel-based lesion symptom mapping analysis was applied to build a general linear model at each voxel. Z score maps were generated for each deficit, and results were interpreted using automated anatomical labeling procedures. A deficit in coarse semantic activation was associated with lesions to the RH posterior middle temporal gyrus, dorsolateral prefrontal cortex, and lenticular nuclei. A maintenance deficit for coarsely coded representations involved the RH temporal pole and dorsolateral prefrontal cortex more medially. Ineffective suppression implicated lesions to the RH inferior frontal gyrus and subcortical regions, as hypothesized, along with the rostral temporal pole. Beyond their scientific implications, these lesion-deficit correspondences may help inform the clinical diagnosis and enhance decisions about candidacy for deficit-focused treatment to improve narrative comprehension in individuals with RH damage.
Automatic Testcase Generation for Flight Software
NASA Technical Reports Server (NTRS)
Bushnell, David Henry; Pasareanu, Corina; Mackey, Ryan M.
2008-01-01
The TacSat3 project is applying Integrated Systems Health Management (ISHM) technologies to an Air Force spacecraft for operational evaluation in space. The experiment will demonstrate the effectiveness and cost of ISHM and vehicle systems management (VSM) technologies through onboard operation for extended periods. We present two approaches to automatic testcase generation for ISHM: 1) a blackbox approach that treats the system under test as opaque and uses a grammar-based specification of its inputs to automatically generate *all* inputs that satisfy the specifications (up to prespecified limits); these inputs are then used to exercise the system; 2) a whitebox approach that performs analysis and testcase generation directly on a representation of the internal behaviour of the system under test. The enabling technologies for both approaches are model checking and symbolic execution, as implemented in the Ames Java PathFinder (JPF) tool suite. Model checking is an automated technique for software verification. Unlike simulation and testing, which check only some of the system's executions and may therefore miss errors, model checking exhaustively explores all possible executions. Symbolic execution evaluates programs with symbolic rather than concrete values and represents variable values as symbolic expressions. We are applying the blackbox approach to generating input scripts for the Spacecraft Command Language (SCL) from Interface and Control Systems. SCL is an embedded interpreter for controlling spacecraft systems. TacSat3 will be using SCL as the controller for its ISHM systems. We translated the SCL grammar into a program that outputs scripts conforming to the grammar. Running JPF on this program generates all legal input scripts up to a prespecified size. Script generation can also be targeted to specific parts of the grammar of interest to the developers. These scripts are then fed to the SCL Executive.
ICS's in-house coverage tools will be run to measure code coverage. Because the scripts exercise all parts of the grammar, we expect them to provide high code coverage. This blackbox approach is suitable for systems for which we do not have access to the source code. We are applying whitebox test generation to the Spacecraft Health INference Engine (SHINE) that is part of the ISHM system. In TacSat3, SHINE will execute an on-board knowledge base for fault detection and diagnosis. SHINE converts its knowledge base into optimized C code which runs onboard TacSat3. SHINE can translate its rules into an intermediate representation (Java) suitable for analysis with JPF. JPF will analyze SHINE's Java output using symbolic execution, producing testcases that can provide either complete or directed coverage of the code. Automatically generated test suites can provide full code coverage and be quickly regenerated when code changes. Because our tools analyze executable code, they fully cover the delivered code, not just models of the code. This approach also provides a way to generate tests that exercise specific sections of code under specific preconditions. This capability gives us more focused testing of specific sections of code.
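The blackbox idea above can be sketched as exhaustive enumeration of every input a grammar can produce, up to a size bound. The toy grammar below is an illustrative stand-in, not the actual SCL specification, and plain recursive enumeration stands in for JPF's model-checking search:

```python
# Hedged sketch: enumerate all inputs derivable from a small grammar up to
# a depth bound, mimicking grammar-based blackbox testcase generation.
# TOY_GRAMMAR is invented for illustration; the real system uses the SCL
# grammar and Java PathFinder.

TOY_GRAMMAR = {
    "script": [["cmd"], ["cmd", ";", "script"]],
    "cmd": [["set", "var", "num"], ["get", "var"]],
    "var": [["x"], ["y"]],
    "num": [["0"], ["1"]],
}

def expand(symbol, depth):
    """Yield every token sequence derivable from `symbol` within `depth` steps."""
    if symbol not in TOY_GRAMMAR:          # terminal token
        yield [symbol]
        return
    if depth == 0:
        return
    for production in TOY_GRAMMAR[symbol]:
        # Cartesian product of the expansions of each symbol in the production
        partials = [[]]
        for sym in production:
            partials = [p + e for p in partials for e in expand(sym, depth - 1)]
        yield from partials

scripts = sorted(" ".join(toks) for toks in expand("script", 4))
print(len(scripts))   # all legal scripts within the bound
```

As in the abstract, generation can be "targeted" by starting the expansion at a sub-symbol such as `cmd` instead of the start symbol.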
PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra
NASA Astrophysics Data System (ADS)
Sibaev, Marat; Crittenden, Deborah L.
2016-06-01
The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).
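The screening-plus-sparse-storage step described above can be sketched as follows. The threshold value and the stand-in "matrix element" function are illustrative assumptions, not PyVCI's actual integrals or defaults:

```python
# Hedged sketch: keep only VCI matrix elements above a numerical screening
# threshold, in a dictionary-of-keys sparse format, instead of a dense array.

THRESHOLD = 1e-8  # assumed screening cutoff, not PyVCI's default

def element(i, j):
    """Stand-in for a VCI Hamiltonian matrix element <i|H|j> (toy values)."""
    return 1.0 / (1 + abs(i - j)) if abs(i - j) < 3 else 1e-12

def build_sparse(n):
    """Return {(i, j): value}, keeping only non-negligible elements."""
    h = {}
    for i in range(n):
        for j in range(i, n):          # Hamiltonian is symmetric
            v = element(i, j)
            if abs(v) >= THRESHOLD:    # numerical screening
                h[(i, j)] = v
    return h

h = build_sparse(100)
# A dense upper triangle would hold 100*101/2 = 5050 entries;
# screening keeps only the near-diagonal band.
print(len(h))
```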
Pythran: enabling static optimization of scientific Python programs
NASA Astrophysics Data System (ADS)
Guelton, Serge; Brunet, Pierrick; Amini, Mehdi; Merlini, Adrien; Corbillon, Xavier; Raynaud, Alan
2015-01-01
Pythran is an open source static compiler that turns modules written in a subset of Python language into native ones. Assuming that scientific modules do not rely much on the dynamic features of the language, it trades them for powerful, possibly inter-procedural, optimizations. These optimizations include detection of pure functions, temporary allocation removal, constant folding, Numpy ufunc fusion and parallelization, explicit thread-level parallelism through OpenMP annotations, false variable polymorphism pruning, and automatic vector instruction generation such as AVX or SSE. In addition to these compilation steps, Pythran provides a C++ runtime library that leverages the C++ STL to provide generic containers, and the Numeric Template Toolbox for Numpy support. It takes advantage of modern C++11 features such as variadic templates, type inference, move semantics and perfect forwarding, as well as classical idioms such as expression templates. Unlike the Cython approach, Pythran input code remains compatible with the Python interpreter. Output code is generally as efficient as the annotated Cython equivalent, if not more, but without the backward compatibility loss.
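The compatibility point made above, that Pythran input remains valid Python, can be illustrated with a minimal module. The function is a generic example, not from the paper; the `# pythran export` comment tells the compiler which signature to compile natively, while the file runs unchanged under the interpreter:

```python
# Hedged illustration of the Pythran workflow: plain Python that also
# carries a Pythran export annotation. Compiling is optional; the module
# works under CPython as-is.

# pythran export pairwise_sum(float list)
def pairwise_sum(xs):
    """Sum adjacent pairs - a simple loop a static compiler can optimize."""
    total = 0.0
    for i in range(len(xs) - 1):
        total += xs[i] + xs[i + 1]
    return total

print(pairwise_sum([1.0, 2.0, 3.0, 4.0]))
```

Running `pythran` on such a file produces a native extension module with the same interface, which is the "no backward compatibility loss" contrast with Cython drawn in the abstract.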
Palombo, Daniela J; Kapson, Heather S; Lafleche, Ginette; Vasterling, Jennifer J; Marx, Brian P; Franz, Molly; Verfaellie, Mieke
2015-07-01
Although loss of consciousness associated with moderate or severe traumatic brain injury (TBI) is thought to interfere with encoding of the TBI event, little is known about the effects of mild TBI (mTBI), which typically involves only transient disruption in consciousness. Blast-exposed Afghanistan and Iraq War veterans were asked to recall the blast event. Participants were stratified based on whether the blast was associated with probable mTBI (n = 50) or not (n = 25). Narratives were scored for organizational structure (i.e., coherence) using the Narrative Coherence Coding Scheme (Reese et al., 2011) and for episodic recollection using the Autobiographical Interview Coding Procedures (Levine et al., 2002). The mTBI group produced narratives that were less coherent but contained more episodic details than those of the no-TBI group. These results suggest that mTBI interferes with the organizational quality of memory in a manner that is independent of episodic detail generation. (c) 2015 APA, all rights reserved.
1984-08-01
3. Water-quality, sediment, and biological parameters, associated units, EPA STORET codes, container type, preservative and methods used for... (Section III.B). Water samples were collected and preserved according to approved EPA (1974) or American Public Health Association (APHA) (1975)... procedures. Water-quality parameters tested, associated units, EPA STORET codes, test procedures, and preservation techniques used throughout the...
[Orthopedic and trauma surgery in the German DRG system. Recent developments].
Franz, D; Schemmann, F; Selter, D D; Wirtz, D C; Roeder, N; Siebert, H; Mahlke, L
2012-07-01
Orthopedics and trauma surgery are subject to continuous medical advancement. Correct and performance-based case allocation by German diagnosis-related groups (G-DRG) is a major challenge. This article analyzes and assesses current developments in orthopedics and trauma surgery in the areas of coding of diagnoses and medical procedures and the development of the 2012 G-DRG system. The relevant diagnoses, medical procedures and G-DRGs in the versions 2011 and 2012 were analyzed based on the publications of the German DRG Institute (InEK) and the German Institute of Medical Documentation and Information (DIMDI). Changes were made to the International Classification of Diseases (ICD) coding of complex cases with medical complications and to the procedure coding for spinal surgery and for hand and foot surgery. The G-DRG structures were modified for endoprosthetic surgery on ankle, shoulder and elbow joints. The definition of modular structured endoprostheses was clarified. The G-DRG system for orthopedic and trauma surgery appears to be largely consolidated. The current phase of the evolution of the G-DRG system is primarily aimed at developing the most exact possible descriptions and definitions of the content and mutual delimitation of operation and procedure codes (OPS). This is an essential prerequisite for correct and performance-based case allocation in the G-DRG system.
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2013 CFR
2013-01-01
Title 7 (Agriculture), Volume 11, revised as of 2013-01-01: Compliance with National Electrical Safety Code (NESC)... UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE; ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES; Electric System Design; § 1724.50 Compliance with National Electrical Safety Code...
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2010 CFR
2010-01-01
Title 7 (Agriculture), Volume 11, revised as of 2010-01-01: Compliance with National Electrical Safety Code (NESC)... UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE; ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES; Electric System Design; § 1724.50 Compliance with National Electrical Safety Code...
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2011 CFR
2011-01-01
Title 7 (Agriculture), Volume 11, revised as of 2011-01-01: Compliance with National Electrical Safety Code (NESC)... UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE; ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES; Electric System Design; § 1724.50 Compliance with National Electrical Safety Code...
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2012 CFR
2012-01-01
Title 7 (Agriculture), Volume 11, revised as of 2012-01-01: Compliance with National Electrical Safety Code (NESC)... UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE; ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES; Electric System Design; § 1724.50 Compliance with National Electrical Safety Code...
7 CFR 1724.50 - Compliance with National Electrical Safety Code (NESC).
Code of Federal Regulations, 2014 CFR
2014-01-01
Title 7 (Agriculture), Volume 11, revised as of 2014-01-01: Compliance with National Electrical Safety Code (NESC)... UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE; ELECTRIC ENGINEERING, ARCHITECTURAL SERVICES AND DESIGN POLICIES AND PROCEDURES; Electric System Design; § 1724.50 Compliance with National Electrical Safety Code...
ON UPGRADING THE NUMERICS IN COMBUSTION CHEMISTRY CODES. (R824970)
A method of updating and reusing legacy FORTRAN codes for combustion simulations is presented using the DAEPACK software package. The procedure is demonstrated on two codes that come with the CHEMKIN-II package, CONP and SENKIN, for the constant-pressure batch reactor simulati...
O'Neill, Liam; Dexter, Franklin; Park, Sae-Hwan; Epstein, Richard H
2017-09-01
Recently, there has been interest in activity-based cost accounting for inpatient surgical procedures to facilitate "value based" analyses. Research 10-20 years ago, performed using data from 3 large teaching hospitals, found that activity-based cost accounting was practical and useful for modeling surgeons and subspecialties, but inaccurate for individual procedures. We hypothesized that these older results would apply to hundreds of hospitals, currently evaluable using administrative databases. Observational study. State of Texas hospital discharge abstract data for the 1st quarter of 2016, 4th quarter of 2015, 1st quarter of 2015, and 4th quarter of 2014. Discharged from an acute care hospital in Texas with at least 1 major therapeutic ("operative") procedure. Counts of discharges for each procedure or combination of procedures, classified by ICD-10-PCS or ICD-9-CM. At the average hospital, most surgical discharges were for procedures performed at most once a month at the hospital (54%, 95% confidence interval [CI] 51% to 55%). At the average hospital, approximately 90% of procedures were performed at most once a month at the hospital (93%, CI 93% to 94%). The percentages were insensitive to the quarter of the year. The percentages were 3% to 6% greater with ICD-10-PCS than with the superseded ICD-9-CM. There are many different procedure codes, and many different combinations of codes, relative to the number of different hospital discharges. Since most procedures at most hospitals are performed no more than once a month, activity-based cost accounting with a sample size sufficient to be useful is impractical for the vast majority of procedures, in contrast to analysis by surgeon and/or subspecialty. Copyright © 2017 Elsevier Inc. All rights reserved.
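The frequency analysis behind this conclusion amounts to tallying procedure codes over a quarter and asking what share occur no more than roughly once a month. A minimal sketch, using fabricated toy records rather than Texas discharge data (the codes below are examples only):

```python
# Hedged sketch: count discharges per procedure code and report the share
# of codes, and of cases, that occur <= 3 times in a quarter (~once/month).
# The discharge list is invented toy data, not real hospital abstracts.

from collections import Counter

discharges = (["0SRC0J9"] * 40      # one common procedure code (example)
              + ["0DTJ4ZZ"] * 2     # rare codes...
              + ["0FB03ZX"] * 1
              + ["0UT94ZZ"] * 3)

counts = Counter(discharges)
rare = [code for code, n in counts.items() if n <= 3]   # <= once a month
share_of_codes = len(rare) / len(counts)
share_of_cases = sum(counts[c] for c in rare) / sum(counts.values())
print(round(share_of_codes, 2), round(share_of_cases, 2))
```

Even in this toy sample, most distinct codes are rare while a few common codes dominate the caseload, which is the pattern the abstract reports at scale.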
Bhagavatula, Pradeep; Xiang, Qun; Szabo, Aniko; Eichmiller, Fredrick; Kuthy, Raymond A; Okunseri, Christopher E
2012-12-21
Studies on rural-urban differences in dental care have primarily focused on differences in utilization rates and preventive dental services. Little is known about rural-urban differences in the use of a wider range of dental procedures. This study examined patterns of preventive, restorative, endodontic, and extraction procedures provided to children enrolled in Delta Dental of Wisconsin (DDWI). We analyzed DDWI enrollment and claims data for children aged 0-18 years from 2002 to 2008. We modified and used a rural and urban classification based on ZIP codes developed by the Wisconsin Area Health Education Center (AHEC). We categorized the ZIP codes into 6 AHEC categories (3 rural and 3 urban). Descriptive and multivariable analyses using generalized linear mixed models (GLMM) were used to examine the patterns of dental procedures provided to children. The Tukey-Kramer adjustment was used to control for multiple comparisons. Approximately 50%, 67%, and 68% of enrollees in inner-city Milwaukee, Rural 1 (less than 2,500 people), and suburban Milwaukee had at least one annual dental visit, respectively. Children in inner-city Milwaukee had the lowest utilization rates for all procedures examined, except for endodontic procedures. Compared to children from inner-city Milwaukee, children in other locations had significantly more preventive procedures. Children in Rural 1 ZIP codes had more restorative, endodontic and extraction procedures compared to children from all other regions. We found significant geographic variation in dental procedures received by children enrolled in DDWI.
Bruck, Johannes C
2006-01-01
The WHO describes health as physical, mental and social well-being. Ever since the establishment of plastic surgery, aesthetic surgery has been an integral part of this medical specialty. It aims at restoring subjective well-being by employing plastic surgical procedures as described in the educational code and regulations for specialists in plastic surgery. This code confirms that plastic surgery comprises cosmetic procedures for the entire body, which have to be applied with regard to psychological exploration and selection criteria. A wide variety of opinions resulting from very different motivations shows how difficult it is to differentiate aesthetic surgery as a therapeutic procedure from beauty surgery as a primarily economic service. Jurisdiction, guidelines for professional conduct and ethical codes have tried to resolve this question. Regardless of the intention and ability of the health insurers, it is currently established that the moral and legal evaluation of advertisements for medical procedures depends on their purpose: advertising with the intent of luring patients into cosmetic procedures that do not aim to correct a subjective physical disorder does not comply with a medical indication. If, however, the initiative originates with the patient requesting the amelioration of a subjective disorder of his body, a medical indication can be assumed.
1997-04-01
[Table of data element names and codes: DATA COLLABORATORS; NUMBER OF DATA RECEIVERS; AUTHORIZED ITEM IDENTIFICATION DATA COLLABORATOR CODE; DATA ELEMENT TERMINATOR CODE; TYPE OF SCREENING CODE; OUTPUT DATA...; REFERENCE NUMBER CATEGORY CODE (RNCC); REFERENCE NUMBER VARIATION CODE (RNVC).]
[Hand surgery in the German DRG System 2007].
Franz, D; Windolf, J; Kaufmann, M; Siebert, C H; Roeder, N
2007-05-01
Hand surgery often requires only a short length of stay in hospital. Patients' comorbidity is low. Many hand surgery procedures do not need inpatient structures. Up until 2006, specialized hand surgery procedures could not be coded, and the DRG structure did not separate very complex from less complex operations. Specialized hospitals needed a proper case allocation of their patients within the G-DRG system. The differentiation of the DRG structure concerning hand surgery increased in version 2007 of the G-DRG system. The main parameter for DRG splitting is the complexity of the operation. Furthermore, additional criteria such as more than one significant OR procedure, the patient's age, or special diagnoses influence case allocation. A special OPS code for complex cases treated with hand surgery was implemented. The changes in the DRG structure and the implementation of the new OPS code for complex cases establish a strong basis for the identification of different patient costs. Different case allocation leads to different economic impacts on departments of hand surgery. Whether the new OPS code becomes a DRG splitting parameter has to be calculated by the German DRG Institute for further DRG versions.
Comparison of three coding strategies for a low cost structure light scanner
NASA Astrophysics Data System (ADS)
Xiong, Hanwei; Xu, Jun; Xu, Chenxi; Pan, Ming
2014-12-01
Coded structured light is widely used for 3D scanning, and different coding strategies are adopted to suit different goals. In this paper, three coding strategies are compared, and one of them is selected to implement a low-cost structured-light scanner for under €100. To reach this goal, the projector and the video camera must be the cheapest available, which leads to some problems related to light coding. A cheap projector cannot generate a complex intensity pattern; even if it could, the pattern could not be captured by a cheap camera. Based on Gray code, three different strategies are implemented and compared, called phase-shift, line-shift, and bit-shift, respectively. The bit-shift Gray code is the contribution of this paper, in which a simple, stable light pattern is used to generate dense (mean point distance < 0.4 mm) and accurate (mean error < 0.1 mm) results. The full algorithm details and some examples are presented in the paper.
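The Gray-code basis shared by all three strategies can be sketched directly: each projector column index is encoded so that adjacent columns differ in exactly one bit, which makes decoding robust to edge noise. Only the standard Gray code construction is shown; the paper's bit-shift variant additionally shifts these patterns and is not reproduced here:

```python
# Hedged sketch of Gray-coded column patterns for structured light.
# `column_patterns` builds the binary on/off frames a projector would show.

def gray(n):
    """Standard binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def column_patterns(width, bits):
    """patterns[b][x] is the on/off state of column x in projected frame b."""
    return [[(gray(x) >> b) & 1 for x in range(width)]
            for b in range(bits)]

pats = column_patterns(8, 3)
codes = [gray(x) for x in range(8)]
print(codes)   # adjacent entries differ in exactly one bit
```

A camera decoding the captured frames recovers `gray(x)` per pixel and inverts it to the column index, from which depth follows by triangulation.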
FY17 Status Report on NEAMS Neutronics Activities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C. H.; Jung, Y. S.; Smith, M. A.
2017-09-30
Under the U.S. DOE NEAMS program, a high-fidelity neutronics code system has been developed to support the multiphysics modeling and simulation capability named SHARP. The neutronics code system includes the high-fidelity neutronics code PROTEUS, the cross section library and preprocessing tools, the multigroup cross section generation code MC2-3, the in-house mesh generation tool, the perturbation and sensitivity analysis code PERSENT, and post-processing tools. The main objectives of the NEAMS neutronics activities in FY17 are to continue development of an advanced nodal solver in PROTEUS for use in nuclear reactor design and analysis projects, implement a simplified sub-channel-based thermal-hydraulic (T/H) capability into PROTEUS to efficiently compute the thermal feedback, improve the performance of PROTEUS-MOCEX using numerical acceleration and code optimization, improve the cross section generation tools including MC2-3, and continue to perform verification and validation tests for PROTEUS.
NASA Technical Reports Server (NTRS)
Knight, J. C.; Hamm, R. W.
1984-01-01
PASCAL/48 is a programming language for the Intel MCS-48 series of microcomputers. In particular, it can be used with the Intel 8748. It is designed to allow the programmer to control most of the instructions being generated and the allocation of storage. The language can be used instead of ASSEMBLY language in most applications while allowing the user the necessary degree of control over hardware resources. Although it is called PASCAL/48, the language differs in many ways from PASCAL. The program structure and statements of the two languages are similar, but the expression mechanism and data types are different. The PASCAL/48 cross-compiler is written in PASCAL and runs on the CDC CYBER NOS system. It generates object code in Intel hexadecimal format that can be used to program the MCS-48 series of microcomputers. This reference manual defines the language, describes the predeclared procedures, lists error messages, illustrates use, and includes language syntax diagrams.
NASA Astrophysics Data System (ADS)
Jenness, Tim; Currie, Malcolm J.; Tilanus, Remo P. J.; Cavanagh, Brad; Berry, David S.; Leech, Jamie; Rizzi, Luca
2015-10-01
With the advent of modern multidetector heterodyne instruments that can result in observations generating thousands of spectra per minute it is no longer feasible to reduce these data as individual spectra. We describe the automated data reduction procedure used to generate baselined data cubes from heterodyne data obtained at the James Clerk Maxwell Telescope (JCMT). The system can automatically detect baseline regions in spectra and automatically determine regridding parameters, all without input from a user. Additionally, it can detect and remove spectra suffering from transient interference effects or anomalous baselines. The pipeline is written as a set of recipes using the ORAC-DR pipeline environment with the algorithmic code using Starlink software packages and infrastructure. The algorithms presented here can be applied to other heterodyne array instruments and have been applied to data from historical JCMT heterodyne instrumentation.
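The automatic baseline detection described above can be illustrated with a minimal, self-contained sketch: channels whose values sit within a few robust (median absolute) deviations of the median are treated as emission-free, and their mean defines the baseline level to subtract. This is not the ORAC-DR/Starlink implementation, which fits baselines per detector; a constant level is used here for brevity:

```python
# Hedged sketch of automatic baseline-region detection in a spectrum.

def baseline(spectrum, k=3.0):
    """Subtract a constant baseline estimated from emission-free channels."""
    s = sorted(spectrum)
    med = s[len(s) // 2]                                  # robust center
    deviations = sorted(abs(v - med) for v in spectrum)
    mad = deviations[len(spectrum) // 2]                  # robust spread
    # Channels within k robust deviations of the median are "baseline";
    # the `or` guards against a zero spread on flat data.
    quiet = [v for v in spectrum if abs(v - med) <= k * (mad or 1e-30)]
    level = sum(quiet) / len(quiet)
    return [v - level for v in spectrum]

# Toy spectrum: a flat baseline near 1.0 with an emission line in channels 3-5.
spec = [1.1, 0.9, 1.0, 6.0, 7.0, 6.5, 1.05, 0.95]
out = baseline(spec)
```

The line channels are excluded from the estimate automatically, with no user-supplied baseline regions, which is the key property the pipeline exploits.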
DEAN: A program for dynamic engine analysis
NASA Technical Reports Server (NTRS)
Sadler, G. G.; Melcher, K. J.
1985-01-01
The Dynamic Engine Analysis program, DEAN, is a FORTRAN code implemented on the IBM/370 mainframe at NASA Lewis Research Center for digital simulation of turbofan engine dynamics. DEAN is an interactive program which allows the user to simulate engine subsystems as well as full engine systems with relative ease. The nonlinear first-order ordinary differential equations which define the engine model may be solved by one of four integration schemes: a second-order Runge-Kutta, a fourth-order Runge-Kutta, an Adams predictor-corrector, or Gear's method for stiff systems. The numerical data generated by the model equations are displayed at specified intervals, between which the user may choose to modify various parameters affecting the model equations and transient execution. Following the transient run, versatile graphics capabilities allow close examination of the data. DEAN's modeling procedure and capabilities are demonstrated by generating a model of a simple compressor rig.
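One of the four integrators listed, the classical fourth-order Runge-Kutta scheme, can be sketched in a few lines (in Python rather than DEAN's FORTRAN). The test system dy/dt = -y is illustrative only, not an engine model:

```python
# Hedged sketch of classical RK4 for a first-order ODE y' = f(t, y),
# the kind of scheme DEAN applies to its engine model equations.

import math

def rk4_step(f, t, y, h):
    """Advance y' = f(t, y) by one step of size h with classical RK4."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate y' = -y from y(0) = 1 to t = 1; exact answer is exp(-1).
y, t, h = 1.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    y = rk4_step(lambda t, y: -y, t, y, h)
    t += h
print(abs(y - math.exp(-1.0)) < 1e-8)   # RK4 tracks the exact solution
```

For stiff engine dynamics an explicit scheme like this forces tiny steps, which is why DEAN also offers Gear's implicit method.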
Integration, warehousing, and analysis strategies of Omics data.
Gedela, Srinubabu
2011-01-01
"-Omics" is a current suffix for numerous types of large-scale biological data generation procedures, which naturally demand the development of novel algorithms for data storage and analysis. With next-generation genome sequencing burgeoning, it is pivotal to decipher a coding site on the genome, a gene's function, and information on transcripts, beyond the mere availability of sequence information. To explore a genome and downstream molecular processes, we need numerous results at the various levels of cellular organization, obtained by utilizing different experimental designs, data analysis strategies and methodologies. Hence the need for controlled vocabularies and data integration to annotate, store, and update the flow of experimental data. This chapter explores key methodologies to merge Omics data by semantic data carriers, discusses controlled vocabularies expressed in eXtensible Markup Language (XML), and provides practical guidance, databases, and software links supporting the integration of Omics data.
The purpose of this SOP is to define the coding strategy for the Descriptive Questionnaire. This questionnaire was developed for use in the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; descriptive questionnaire.
The National Human Exposure Assessment...
Klann, Jeffrey G; Phillips, Lori C; Turchin, Alexander; Weiler, Sarah; Mandl, Kenneth D; Murphy, Shawn N
2015-12-11
Interoperable phenotyping algorithms, needed to identify patient cohorts meeting eligibility criteria for observational studies or clinical trials, require medical data in a consistent structured, coded format. Data heterogeneity limits such algorithms' applicability. Existing approaches are often not widely interoperable, or have low sensitivity due to reliance on the lowest common denominator (ICD-9 diagnoses). In the Scalable Collaborative Infrastructure for a Learning Healthcare System (SCILHS) we endeavor to use the widely available Current Procedural Terminology (CPT) procedure codes with ICD-9. Unfortunately, CPT changes drastically from year to year - codes are retired and replaced. Longitudinal analysis requires grouping retired and current codes. BioPortal provides a navigable CPT hierarchy, which we imported into the Informatics for Integrating Biology and the Bedside (i2b2) data warehouse and analytics platform. However, this hierarchy does not include retired codes. We compared BioPortal's 2014AA CPT hierarchy with Partners Healthcare's SCILHS datamart, comprising three million patients' data over 15 years. 573 CPT codes were not present in 2014AA (6.5 million occurrences). No existing terminology provided hierarchical linkages for these missing codes, so we developed a method that automatically places missing codes in the most specific "grouper" category, using the numerical similarity of CPT codes. Two informaticians reviewed the results. We incorporated the final table into our i2b2 SCILHS/PCORnet ontology, deployed it at seven sites, and performed a gap analysis and an evaluation against several phenotyping algorithms. The reviewers found the method placed the code correctly with 97% precision when considering only miscategorizations ("correctness precision") and 52% precision using a gold standard of optimal placement ("optimality precision").
High correctness precision meant that codes were placed in a reasonable hierarchical position that a reviewer can quickly validate. Lower optimality precision meant that codes were often not placed in the optimal hierarchical subfolder. The seven sites encountered few occurrences of codes outside our ontology, 93% of which comprised just four codes. Our hierarchical approach correctly grouped retired and non-retired codes in most cases and extended the temporal reach of several important phenotyping algorithms. We developed a simple, easily validated, automated method to place retired CPT codes into the BioPortal CPT hierarchy. This complements existing hierarchical terminologies, which do not include retired codes. The approach's utility is confirmed by the high correctness precision and the successful grouping of retired with non-retired codes.
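The numerical-similarity placement idea can be sketched as choosing the most specific "grouper" whose numeric CPT range contains a retired code. The grouper ranges and labels below are invented for illustration; the real method works against the BioPortal 2014AA hierarchy:

```python
# Hedged reconstruction of the placement heuristic: among groupers whose
# numeric range contains the code, pick the narrowest. Ranges/labels are
# illustrative assumptions, not the actual CPT hierarchy.

groupers = [                      # (low, high, label), broad to specific
    (10000, 69999, "Surgery"),
    (33000, 37799, "Cardiovascular surgery"),
    (33510, 33536, "Coronary artery bypass"),
]

def place(code):
    """Return the narrowest grouper containing `code`, or None."""
    containing = [g for g in groupers if g[0] <= code <= g[1]]
    return min(containing, key=lambda g: g[1] - g[0], default=None)

print(place(33517)[2])
```

This matches the paper's observation that such placements are easy to validate: a reviewer only has to confirm the code belongs under the chosen grouper, not search the whole hierarchy.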
Hu, Jianwei; Gauld, Ian C.
2014-12-01
The U.S. Department of Energy's Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of the NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracy of such analyses. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.
Python-Assisted MODFLOW Application and Code Development
NASA Astrophysics Data System (ADS)
Langevin, C.
2013-12-01
The U.S. Geological Survey (USGS) has a long history of developing and maintaining free, open-source software for hydrological investigations. The MODFLOW program is one of the most popular hydrologic simulation programs released by the USGS, and it is considered to be the most widely used groundwater flow simulation code. MODFLOW was written using a modular design and a procedural FORTRAN style, which resulted in code that could be understood, modified, and enhanced by many hydrologists. The code is fast, and because it uses standard FORTRAN it can be run on most operating systems. Most MODFLOW users rely on proprietary graphical user interfaces for constructing models and viewing model results. Some recent efforts, however, have focused on construction of MODFLOW models using open-source Python scripts. Customizable Python packages, such as FloPy (https://code.google.com/p/flopy), can be used to generate input files, read simulation results, and visualize results in two and three dimensions. Automating this sequence of steps leads to models that can be reproduced directly from original data and rediscretized in space and time. Python is also being used in the development and testing of new MODFLOW functionality. New packages and numerical formulations can be quickly prototyped and tested first with Python programs before implementation in MODFLOW. This is made possible by the flexible object-oriented design capabilities available in Python, the ability to call FORTRAN code from Python, and the ease with which linear systems of equations can be solved using SciPy, for example. Once new features are added to MODFLOW, Python can then be used to automate comprehensive regression testing and ensure reliability and accuracy of new versions prior to release.
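The abstract above notes that linear systems of equations can be solved easily with SciPy when prototyping MODFLOW-style functionality. As a minimal, hypothetical sketch (not MODFLOW or FloPy code), the snippet below sets up and solves the finite-difference system for steady-state 1-D confined groundwater flow with fixed heads at both ends:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

# Toy problem: d/dx(K dh/dx) = 0 on n cells, fixed heads at both ends.
# All values are illustrative, not from any real model.
n = 11
K = 1.0                      # uniform hydraulic conductivity (arbitrary units)
h_left, h_right = 10.0, 5.0  # boundary heads

# Standard second-difference matrix for the n-2 interior unknowns
A = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n - 2, n - 2), format="csc") * K
b = np.zeros(n - 2)
b[0] -= K * h_left           # fold the boundary heads into the right-hand side
b[-1] -= K * h_right

h = np.empty(n)
h[0], h[-1] = h_left, h_right
h[1:-1] = spsolve(A, b)

# With uniform K and no sources, the exact head profile is linear
print(np.allclose(h, np.linspace(h_left, h_right, n)))  # True
```

The same pattern (assemble a sparse operator, solve, compare against a known solution) is what makes Python convenient for prototyping a formulation before committing it to FORTRAN.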
Construction of self-dual codes in the Rosenbloom-Tsfasman metric
NASA Astrophysics Data System (ADS)
Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin
2017-12-01
Linear codes are basic and very useful objects in coding theory. Generally, a linear code is a code over a finite field in the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is an important one, because it contains some of the best known error-correcting codes. The concept of the Hamming metric has been developed into the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric differs from the Euclidean inner product used to define duality in the Hamming metric, and most codes which are self-dual in the Hamming metric are not so in the RT-metric. The generator matrix is central to constructing a code because it contains a basis of the code. Therefore, in this paper, we give some theorems and methods to construct self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix, and we illustrate every kind of construction with examples.
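To make the notion of RT-duality concrete, here is a small brute-force sketch over GF(2). It assumes the commonly used index-reversed form of the RT inner product, ⟨u, v⟩ = Σᵢ uᵢ·v₍ₙ₋₁₋ᵢ₎, and a toy generator matrix chosen for illustration (neither is taken from the paper):

```python
import itertools
import numpy as np

def rt_inner(u, v):
    # RT-style inner product over GF(2): pair u_i with v_{n-1-i}
    return int(np.dot(u, v[::-1]) % 2)

def codewords(G):
    # Enumerate all messages m and collect the codewords m*G mod 2
    k, _ = G.shape
    return {tuple(int(x) for x in np.mod(np.dot(m, G), 2))
            for m in itertools.product([0, 1], repeat=k)}

def rt_dual(C, n):
    # Brute-force dual: all vectors orthogonal (in the RT sense) to every codeword
    return {v for v in itertools.product([0, 1], repeat=n)
            if all(rt_inner(np.array(v), np.array(c)) == 0 for c in C)}

G = np.array([[1, 1]])     # toy length-2, dimension-1 code
C = codewords(G)
print(C == rt_dual(C, 2))  # True: this code equals its RT-dual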
Failure to Follow Written Procedures
DOT National Transportation Integrated Search
2017-12-01
Most tasks in aviation have a mandated written procedure to be followed, specified under Title 14 of the Code of Federal Regulations (14 CFR) Section 43.13(a). However, the incidence of Failure to Follow Procedure (FFP) events continues to be a major iss...
An artificial viscosity method for the design of supercritical airfoils
NASA Technical Reports Server (NTRS)
Mcfadden, G. B.
1979-01-01
A numerical technique is presented for the design of two-dimensional supercritical wing sections with low wave drag. The method is a design mode of the analysis code H, which gives excellent agreement with experimental results and is widely used in the aircraft industry. Topics covered include the partial differential equations of transonic flow; the computational procedure and results; the design procedure; a convergence theorem; and a description of the code.
JPRS Report: East Asia, Southeast Asia, LPDR Criminal Code, Courts, and Criminal Procedure.
1991-03-05
JPRS Report: East Asia, Southeast Asia. LPDR Criminal Code, Courts, and Criminal Procedure. ...prostitution, will be imprisoned for three to five years. Article 124. Incest. Anyone who has sexual intercourse with parents, step-parents... This consists of facts which indicate whether there have been actions dangerous to society, the guilt of the persons who undertook the
[Orthopedic and trauma surgery in the German-DRG-System 2009].
Franz, D; Windolf, J; Siebert, C H; Roeder, N
2009-01-01
The German DRG system was advanced into version 2009. For orthopedic and trauma surgery, significant changes were made concerning the coding of diagnoses and medical procedures and concerning the DRG structure. We analyzed relevant diagnoses, medical procedures and G-DRGs in the 2008 and 2009 versions based on the publications of the German DRG institute (InEK) and the German Institute of Medical Documentation and Information (DIMDI). Changes for 2009 focussed on the development of the DRG structure, DRG validation and codes for medical procedures to be used for very complex cases. The outcome of these changes for German hospitals may vary depending on their range of activities. The G-DRG system has again gained complexity, and high demands are made on the correct and complete coding of complex orthopedic and trauma surgery cases. The quality of case allocation within the G-DRG system was improved. Nevertheless, further adjustments of the G-DRG system, especially for cases with severe injuries, are necessary.
Development of a thermal and structural analysis procedure for cooled radial turbines
NASA Technical Reports Server (NTRS)
Kumar, Ganesh N.; Deanna, Russell G.
1988-01-01
A procedure for computing the rotor temperature and stress distributions in a cooled radial turbine is considered. Existing codes for modeling the external mainstream flow and the internal cooling flow are used to compute boundary conditions for the heat transfer and stress analysis. The inviscid, quasi-three-dimensional code computes the external free stream velocity. The external velocity is then used in a boundary layer analysis to compute the external heat transfer coefficients. Coolant temperatures are computed by a viscous three-dimensional internal flow code for the momentum and energy equations. These boundary conditions are input to a three-dimensional heat conduction code for the calculation of rotor temperatures. The rotor stress distribution may be determined for the given thermal, pressure and centrifugal loading. The procedure is applied to a cooled radial turbine which will be tested at the NASA Lewis Research Center. Representative results are given.
Lin, Ching-Heng; Wu, Nai-Yuan; Lai, Wei-Shao; Liou, Der-Ming
2015-01-01
Electronic medical records with encoded entries should enhance the semantic interoperability of document exchange. However, it remains a challenge to encode the narrative concept and to transform the coded concepts into a standard entry-level document. This study aimed to use a novel approach for the generation of entry-level interoperable clinical documents. Using HL7 clinical document architecture (CDA) as the example, we developed three pipelines to generate entry-level CDA documents. The first approach was a semi-automatic annotation pipeline (SAAP), the second was a natural language processing (NLP) pipeline, and the third merged the above two pipelines. We randomly selected 50 test documents from the i2b2 corpora to evaluate the performance of the three pipelines. The 50 randomly selected test documents contained 9365 words, including 588 Observation terms and 123 Procedure terms. For the Observation terms, the merged pipeline had a significantly higher F-measure than the NLP pipeline (0.89 vs 0.80, p<0.0001), but a similar F-measure to that of the SAAP (0.89 vs 0.87). For the Procedure terms, the F-measure was not significantly different among the three pipelines. The combination of a semi-automatic annotation approach and the NLP application seems to be a solution for generating entry-level interoperable clinical documents.
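The F-measures compared above (e.g., 0.89 vs 0.80) are the standard harmonic mean of precision and recall. A minimal sketch with toy counts (hypothetical, not the study's data):

```python
# F-measure (F1) from true positives, false positives, and false negatives
def f_measure(tp, fp, fn):
    precision = tp / (tp + fp)   # fraction of extracted terms that are correct
    recall = tp / (tp + fn)      # fraction of gold-standard terms that were found
    return 2 * precision * recall / (precision + recall)

# Toy example: 80 of 100 gold terms found, with 10 spurious extractions
print(round(f_measure(tp=80, fp=10, fn=20), 3))  # → 0.842
```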
An efficient code for the simulation of nonhydrostatic stratified flow over obstacles
NASA Technical Reports Server (NTRS)
Pihos, G. G.; Wurtele, M. G.
1981-01-01
The physical model and computational procedure of the code is described in detail. The code is validated in tests against a variety of known analytical solutions from the literature and is also compared against actual mountain wave observations. The code will receive as initial input either mathematically idealized or discrete observational data. The form of the obstacle or mountain is arbitrary.
Kölch, Michael; Vogel, Harald
2016-01-01
According to German law (Para. 1631b, German Civil Code), the placement of children and adolescents involving seclusion and restraint actions must be approved by a family court. We analyzed the family court data of a court district in Berlin (Tempelhof-Kreuzberg) concerning cases of “placement of minors” between 2008 and 2011. A total of 474 such procedures were discovered. After data clearing and correction of cases (e. g., because of emergency interventions of the youth welfare system taking children into custody according to Para. 42, German Social Code VIII), 376 cases remained. Of these 376 procedures in the years 2008 to 2011, 127 cases concerned children and adolescents according to Para. 1631b German Civil Code, and 249 procedures were settled either by dismissal, withdrawal or by repealing the initial decision to place the child with restraint or seclusion by means of an interim order or by filing an appeal against the final decision. Of the 127 procedures, 68 concerned girls, who were on average slightly younger than boys (14.5 years vs. 15.1 years). In two-thirds of the procedures, the children and adolescents were German citizens. The majority of the youths involved were living at home at the time of the procedure, but in 15 % of the cases the youths were homeless. Most of the adolescents were treated with restraint in child and adolescent psychiatry. The most frequently quoted reasons for seclusion were substance abuse, suicide risk and running away from home/being homeless.
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A
2013-10-29
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexandre E [Chappaqua, NY; Gschwind, Michael K [Chappaqua, NY; Gunnels, John A [Yorktown Heights, NY
2012-08-28
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
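The "splat" at the heart of both patents is the replication of a single scalar across all lanes of a SIMD register so it can combine with a full vector in one operation. A minimal sketch of the idea using NumPy broadcasting as a stand-in for SIMD lanes (illustrative only; real splat instructions operate on hardware registers):

```python
import numpy as np

scalar = 3.0
vec = np.array([1.0, 2.0, 4.0, 8.0])  # one SIMD register's worth of data

# "Load and splat": the scalar is loaded once and replicated across all lanes,
# so the multiply below is a single vector operation instead of a scalar loop.
splat = np.full_like(vec, scalar)      # [3., 3., 3., 3.]
print((splat * vec).tolist())          # → [3.0, 6.0, 12.0, 24.0]
```

Deciding where to insert such splat operations, and when to delete redundant ones, is exactly the placement problem the mechanisms above address.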
NASA Technical Reports Server (NTRS)
Cross, James H., II; Morrison, Kelly I.; May, Charles H., Jr.; Waddel, Kathryn C.
1989-01-01
The first phase of a three-phase effort to develop a new graphically oriented specification language which will facilitate the reverse engineering of Ada source code into graphical representations (GRs) as well as the automatic generation of Ada source code is described. A simplified view of the three phases of Graphical Representations for Algorithms, Structure, and Processes for Ada (GRASP/Ada) with respect to three basic classes of GRs is presented. Phase 1 concentrated on the derivation of an algorithmic diagram, the control structure diagram (CSD) (CRO88a) from Ada source code or Ada PDL. Phase 2 includes the generation of architectural and system level diagrams such as structure charts and data flow diagrams and should result in a requirements specification for a graphically oriented language able to support automatic code generation. Phase 3 will concentrate on the development of a prototype to demonstrate the feasibility of this new specification language.
The procedure execution manager and its application to Advanced Photon Source operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borland, M.
1997-06-01
The Procedure Execution Manager (PEM) combines a complete scripting environment for coding accelerator operation procedures with a manager application for executing and monitoring the procedures. PEM is based on Tcl/Tk, a supporting widget library, and the dp-tcl extension for distributed processing. The scripting environment provides support for distributed, parallel execution of procedures along with join and abort operations. Nesting of procedures is supported, permitting the same code to run as a top-level procedure under operator control or as a subroutine under control of another procedure. The manager application allows an operator to execute one or more procedures in automatic, semi-automatic, or manual modes. It also provides a standard way for operators to interact with procedures. A number of successful applications of PEM to accelerator operations have been made to date. These include start-up, shutdown, and other control of the positron accumulator ring (PAR), low-energy transport (LET) lines, and the booster rf systems. The PAR/LET procedures make nested use of PEM's ability to run parallel procedures. There are also a number of procedures to guide and assist tune-up operations, to make accelerator physics measurements, and to diagnose equipment. Because of the success of the existing procedures, expanded use of PEM is planned.
A CellML simulation compiler and code generator using ODE solving schemes
2012-01-01
Models written in description languages such as CellML are becoming a popular solution to the handling of complex cellular physiological models in biological function simulations. However, in order to fully simulate a model, boundary conditions and ordinary differential equation (ODE) solving schemes have to be combined with it. Though boundary conditions can be described in CellML, it is difficult to explicitly specify ODE solving schemes using existing tools. In this study, we define an ODE solving scheme description language based on XML and propose a code generation system for biological function simulations. In the proposed system, biological simulation programs using various ODE solving schemes can be easily generated. We designed a two-stage approach where the system generates the equation set associating the physiological model variable values at a certain time t with values at t + Δt in the first stage. The second stage generates the simulation code for the model. This approach enables the flexible construction of code generation modules that can support complex sets of formulas. We evaluate the relationship between models and their calculation accuracies by simulating complex biological models using various ODE solving schemes. Using the FHN model simulation, results showed good qualitative and quantitative correspondence with the theoretical predictions. Results for the Luo-Rudy 1991 model showed that only first order precision was achieved. In addition, running the generated code in parallel on a GPU made it possible to speed up the calculation time by a factor of 50. The CellML Compiler source code is available for download at http://sourceforge.net/projects/cellmlcompiler. PMID:23083065
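The "first stage" described above amounts to generating an update rule relating the model state at t to the state at t + Δt. A hand-written sketch of what such generated code looks like for the FHN (FitzHugh-Nagumo) model with a forward-Euler scheme, using standard textbook parameter values (this is an illustration, not output of the CellML Compiler):

```python
import math

def fhn_step(v, w, dt, I=0.5, a=0.7, b=0.8, tau=12.5):
    # One forward-Euler update: state at t -> state at t + dt
    dv = v - v**3 / 3 - w + I          # membrane potential dynamics
    dw = (v + a - b * w) / tau         # slow recovery variable
    return v + dt * dv, w + dt * dw

v, w = -1.0, 1.0                       # arbitrary initial state
for _ in range(1000):                  # integrate 10 time units at dt = 0.01
    v, w = fhn_step(v, w, dt=0.01)
print(f"v = {v:.3f}, w = {w:.3f}")
```

Swapping the scheme (e.g., to Runge-Kutta) changes only how `fhn_step` is generated, which is the flexibility the description language is designed to capture.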
NASA Astrophysics Data System (ADS)
Rielly, Matthew Robert
An existing numerical model (known as the Bergen code) is used to investigate finite amplitude ultrasound propagation through multiple layers of tissue-like media. This model uses a finite difference method to solve the nonlinear parabolic KZK wave equation. The code is modified to include an arbitrary frequency dependence of absorption and transmission effects for wave propagation across a plane interface at normal incidence. In addition the code is adapted to calculate the total intensity loss associated with the absorption of the fundamental and nonlinearly generated harmonics. Measurements are also taken of the axial nonlinear pressure field generated from a circular focused, 2.25 MHz source, through single and multiple layered tissue mimicking fluids, for source pressures in the range from 13 kPa to 310 kPa. Two tissue mimicking fluids are developed to provide acoustic properties similar to amniotic fluid and a typical soft tissue. The values of the nonlinearity parameter, sound velocity and frequency dependence of attenuation for both fluids are presented, and the measurement procedures employed to obtain these characteristics are described in detail. These acoustic parameters, together with the measured source conditions are used as input to the numerical model, allowing the experimental conditions to be simulated. Extensive comparisons are made between the model's predictions and the axial pressure field measurements. Results are presented in the frequency domain showing the fundamental and three subsequent harmonic amplitudes on axis, as a function of axial distance. These show that significant nonlinear distortion can occur through media with characteristics typical of tissue. Time domain waveform comparisons are also made. An excellent agreement is found between theory and experiment indicating that the model can be used to predict nonlinear ultrasound propagation through multiple layers of tissue-like media. 
The numerical code is also used to model the intensity loss through layered tissue mimics and results are presented illustrating the effects of altering the layered medium on the magnitude and spatial distribution of intensity loss.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parchevsky, K. V.; Zhao, J.; Hartlep, T.
We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.
TAIR- TRANSONIC AIRFOIL ANALYSIS COMPUTER CODE
NASA Technical Reports Server (NTRS)
Dougherty, F. C.
1994-01-01
The Transonic Airfoil analysis computer code, TAIR, was developed to employ a fast, fully implicit algorithm to solve the conservative full-potential equation for the steady transonic flow field about an arbitrary airfoil immersed in a subsonic free stream. The full-potential formulation is considered exact under the assumptions of irrotational, isentropic, and inviscid flow. These assumptions are valid for a wide range of practical transonic flows typical of modern aircraft cruise conditions. The primary features of TAIR include: a new fully implicit iteration scheme which is typically many times faster than classical successive line overrelaxation algorithms; a new, reliable artificial density spatial differencing scheme treating the conservative form of the full-potential equation; and a numerical mapping procedure capable of generating curvilinear, body-fitted finite-difference grids about arbitrary airfoil geometries. Three aspects emphasized during the development of the TAIR code were reliability, simplicity, and speed. The reliability of TAIR comes from two sources: the new algorithm employed and the implementation of effective convergence monitoring logic. TAIR achieves ease of use by employing a "default mode" that greatly simplifies code operation, especially by inexperienced users, and many useful options including: several airfoil-geometry input options, flexible user controls over program output, and a multiple solution capability. The speed of the TAIR code is attributed to the new algorithm and the manner in which it has been implemented. Input to the TAIR program consists of airfoil coordinates, aerodynamic and flow-field convergence parameters, and geometric and grid convergence parameters. The airfoil coordinates for many airfoil shapes can be generated in TAIR from just a few input parameters.
Most of the other input parameters have default values which allow the user to run an analysis in the default mode by specifying only a few input parameters. Output from TAIR may include aerodynamic coefficients, the airfoil surface solution, convergence histories, and printer plots of Mach number and density contour maps. The TAIR program is written in FORTRAN IV for batch execution and has been implemented on a CDC 7600 computer with a central memory requirement of approximately 155K (octal) of 60 bit words. The TAIR program was developed in 1981.
A Comparison of Fatigue Design Methods
2001-04-05
Boiler and Pressure Vessel Code does not... Engineers, "ASME Boiler and Pressure Vessel Code," ASME, 3 Park Ave., New York, NY 10016-5990. [4] Langer, B. F., "Design of Pressure Vessels Involving... and Pressure Vessel Code [3] presents these methods and has expanded the procedures to other pressure vessels besides nuclear pressure vessels.
The purpose of this SOP is to describe the coding strategy for the Questionnaire Feedback form. This Questionnaire Feedback form was developed for use during the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; questionnaire feedback form.
The National Hu...
The purpose of this SOP is to define the coding strategy for the Diet Diary Questionnaire. This questionnaire was developed for use during the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; diet diary questionnaire.
The National Human Exposure Assessme...
The purpose of this SOP is to define the coding strategy for the Technician Walk-Through Questionnaire. This questionnaire was developed for use during the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; technician walk-through questionnaire.
The Nationa...
The purpose of this SOP is to define the coding strategy for the Descriptive Questionnaire. This questionnaire was developed for use in the Arizona NHEXAS project and the Border study. Keywords: data; coding; descriptive questionnaire.
The U.S.-Mexico Border Program is sponso...
The Attorney General's Proposed Voluntary Student Code of Conduct.
ERIC Educational Resources Information Center
Texas State Attorney General's Office, Austin.
Intended as a guide for Texas school districts wishing to adopt or modify a student code of conduct, this proposed code describes a positive learning atmosphere, specifies conduct that disrupts such an environment, assures the rights and responsibilities of students, and standardizes procedures to be used in responding to disciplinary problems.…
Content Analysis Coding Schemes for Online Asynchronous Discussion
ERIC Educational Resources Information Center
Weltzer-Ward, Lisa
2011-01-01
Purpose: Researchers commonly utilize coding-based analysis of classroom asynchronous discussion contributions as part of studies of online learning and instruction. However, this analysis is inconsistent from study to study with over 50 coding schemes and procedures applied in the last eight years. The aim of this article is to provide a basis…
Continuities in Reading Acquisition, Reading Skill, and Reading Disability.
ERIC Educational Resources Information Center
Perfetti, Charles A.
1986-01-01
Learning to read depends on eventual mastery of coding procedures, and even skilled reading depends on coding processes low in cost to processing resources. Reading disability may be understood as a point on an ability continuum or a wide range of coding ability. Instructional goals of word reading skill, including rapid and fluent word…
New double-byte error-correcting codes for memory systems
NASA Technical Reports Server (NTRS)
Feng, Gui-Liang; Wu, Xinen; Rao, T. R. N.
1996-01-01
Error-correcting or error-detecting codes have been used in the computer industry to increase reliability, reduce service costs, and maintain data integrity. The single-byte error-correcting and double-byte error-detecting (SbEC-DbED) codes have been successfully used in computer memory subsystems. There are many methods to construct double-byte error-correcting (DBEC) codes. In the present paper we construct a class of double-byte error-correcting codes, which are more efficient than those known to be optimum, and a decoding procedure for our codes is also considered.
Three-dimensional turbopump flowfield analysis
NASA Technical Reports Server (NTRS)
Sharma, O. P.; Belford, K. A.; Ni, R. H.
1992-01-01
A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of a flowfield in turbopumps is described and examples of flowfields are discussed to illustrate that physics based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark quality data from two and three-dimensional cascades were used to verify the code. The predictive capabilities of the present CFD code were demonstrated by computing the flow through a radial impeller and a multistage axial flow turbine. Results of the program indicate that the present code operated in a two-dimensional mode is a cost effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.
Gilmore-Bykovskyi, Andrea L.
2015-01-01
Mealtime behavioral symptoms are distressing and frequently interrupt eating for the individual experiencing them and others in the environment. In order to enable identification of potential antecedents to mealtime behavioral symptoms, a computer-assisted coding scheme was developed to measure caregiver person-centeredness and behavioral symptoms for nursing home residents with dementia during mealtime interactions. The purpose of this pilot study was to determine the acceptability and feasibility of procedures for video-capturing naturally-occurring mealtime interactions between caregivers and residents with dementia, to assess the feasibility, ease of use, and inter-observer reliability of the coding scheme, and to explore the clinical utility of the coding scheme. Trained observers coded 22 observations. Data collection procedures were feasible and acceptable to caregivers, residents and their legally authorized representatives. Overall, the coding scheme proved to be feasible, easy to execute and yielded good to very good inter-observer agreement following observer re-training. The coding scheme captured clinically relevant, modifiable antecedents to mealtime behavioral symptoms, but would be enhanced by the inclusion of measures for resident engagement and consolidation of items for measuring caregiver person-centeredness that co-occurred and were difficult for observers to distinguish. PMID:25784080
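Inter-observer agreement of the kind reported above is commonly quantified with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch with toy labels (hypothetical, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # Observed agreement: fraction of items both observers coded identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each observer's marginal label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb[k] for k in ca) / n**2
    return (observed - expected) / (1 - expected)

a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
b = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg"]
print(round(cohens_kappa(a, b), 3))  # → 0.75
```

Values around 0.6-0.8 are conventionally read as "good to very good" agreement, which is the range the coding scheme achieved after observer re-training.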
a Framework for Distributed Mixed Language Scientific Applications
NASA Astrophysics Data System (ADS)
Quarrie, D. R.
The Object Management Group has defined an architecture (CORBA) for distributed object applications based on an Object Request Broker and Interface Definition Language. This project builds upon this architecture to establish a framework for the creation of mixed language scientific applications. A prototype compiler has been written that generates FORTRAN 90 or Eiffel stubs and skeletons and the required C++ glue code from an input IDL file that specifies object interfaces. This generated code can be used directly for non-distributed mixed language applications or in conjunction with the C++ code generated from a commercial IDL compiler for distributed applications. A feasibility study is presently underway to see whether a fully integrated software development environment for distributed, mixed-language applications can be created by modifying the back-end code generator of a commercial CASE tool to emit IDL.
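The core of the prototype compiler described above is mechanical: read an interface description, emit stub code in a target language. A toy sketch of that idea, with an invented dictionary standing in for a parsed IDL file and Python output standing in for the FORTRAN 90/Eiffel/C++ the project actually generates (all names here are hypothetical):

```python
# Toy "parsed IDL": an interface with its method names and parameter lists
interface = {"name": "Histogram", "methods": {"fill": ["value"], "mean": []}}

def generate_stub(iface):
    # Emit a stub class: one not-yet-implemented method per IDL operation
    lines = [f"class {iface['name']}Stub:"]
    for method, params in iface["methods"].items():
        args = ", ".join(["self"] + params)
        lines.append(f"    def {method}({args}):")
        lines.append(f"        raise NotImplementedError('{method}')")
    return "\n".join(lines)

print(generate_stub(interface).splitlines()[0])  # → class HistogramStub:
```

A real IDL compiler additionally emits the skeleton (server side) and marshalling glue, but the stub half follows this same template-expansion pattern.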
QX MAN: Q and X file manipulation
NASA Technical Reports Server (NTRS)
Krein, Mark A.
1992-01-01
QX MAN is a grid and solution file manipulation program written primarily for the PARC code and the GRIDGEN family of grid generation codes. QX MAN combines many of the features frequently encountered in grid generation, grid refinement, the setting-up of initial conditions, and post processing. QX MAN allows the user to manipulate single block and multi-block grids (and their accompanying solution files) by splitting, concatenating, rotating, translating, re-scaling, and stripping or adding points. In addition, QX MAN can be used to generate an initial solution file for the PARC code. The code was written to provide several formats for input and output in order for it to be useful in a broad spectrum of applications.
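Splitting and concatenating blocks, two of the QX MAN operations listed above, can be sketched with NumPy arrays standing in for grid coordinate files (a toy 2-D grid, not QX MAN's actual file format). Note that the shared grid line is repeated in both blocks so they stay point-matched, as multi-block solvers typically require:

```python
import numpy as np

# Toy single-block grid: 5 x 9 points on the unit square
x, y = np.meshgrid(np.linspace(0.0, 1.0, 9), np.linspace(0.0, 1.0, 5))

# Split at the 5th grid line, duplicating it into both blocks
left, right = x[:, :5], x[:, 4:]

# Concatenate back, dropping the duplicated interface line from one block
rejoined = np.concatenate([left, right[:, 1:]], axis=1)
print(np.array_equal(rejoined, x))  # True: split + concatenate round-trips
```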
Automatically generated code for relativistic inhomogeneous cosmologies
NASA Astrophysics Data System (ADS)
Bentivegna, Eloisa
2017-02-01
The applications of numerical relativity to cosmology are on the rise, contributing insight into such cosmological problems as structure formation, primordial phase transitions, gravitational-wave generation, and inflation. In this paper, I present the infrastructure for the computation of inhomogeneous dust cosmologies which was used recently to measure the effect of nonlinear inhomogeneity on the cosmic expansion rate. I illustrate the code's architecture, provide evidence for its correctness in a number of familiar cosmological settings, and evaluate its parallel performance for grids of up to several billion points. The code, which is available as free software, is based on the Einstein Toolkit infrastructure, and in particular leverages the automated code generation capabilities provided by its component Kranc.
NASA Astrophysics Data System (ADS)
Zou, Ding; Djordjevic, Ivan B.
2016-02-01
Forward error correction (FEC) is one of the key technologies enabling next-generation high-speed fiber-optic communications. In this paper, we propose a rate-adaptive scheme using a class of generalized low-density parity-check (GLDPC) codes with a Hamming code as the local code. We show that, with the proposed unified GLDPC decoder architecture, variable net coding gains (NCGs) can be achieved with no error floor at BERs down to 10^-15, making the scheme a viable solution for next-generation high-speed fiber-optic communications.
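The Hamming component code at the heart of such a GLDPC construction can be illustrated with classic Hamming(7,4) syndrome decoding; this sketch shows only the local single-error-correction step, not the paper's decoder architecture:

```python
# Hamming(7,4) local decoder sketch: with this standard parity-check
# matrix, whose columns are the binary numbers 1..7, the syndrome of a
# received word is exactly the position of a single-bit error.

H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]

def hamming_correct(word):
    syndrome = 0
    for row in H:
        bit = sum(r * w for r, w in zip(row, word)) % 2
        syndrome = (syndrome << 1) | bit
    if syndrome:                  # nonzero syndrome = 1-based error position
        word = word[:]
        word[syndrome - 1] ^= 1   # flip the erroneous bit
    return word

codeword = [0, 0, 0, 0, 0, 0, 0]
received = [0, 0, 1, 0, 0, 0, 0]  # single-bit error at position 3
corrected = hamming_correct(received)
```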
An interactive method for digitizing zone maps
NASA Technical Reports Server (NTRS)
Giddings, L. E.; Thompson, E. J.
1975-01-01
A method is presented for digitizing maps that consist of zones, such as contour or climatic zone maps. A color-coded map is prepared by any convenient process. The map is then read into memory of an Image 100 computer by means of its table scanner, using colored filters. Zones are separated and stored in themes, using standard classification procedures. Thematic data are written on magnetic tape and these data, appropriately coded, are combined to make a digitized image on tape. Step-by-step procedures are given for digitization of crop moisture index maps with this procedure. In addition, a complete example of the digitization of a climatic zone map is given.
25 CFR 11.503 - Applicable civil procedure.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 25 Indians 1 2010-04-01 2010-04-01 false Applicable civil procedure. 11.503 Section 11.503 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.503 Applicable civil procedure. The procedure to be followed in civil...
25 CFR 11.503 - Applicable civil procedure.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 25 Indians 1 2014-04-01 2014-04-01 false Applicable civil procedure. 11.503 Section 11.503 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.503 Applicable civil procedure. The procedure to be followed in civil...
25 CFR 11.503 - Applicable civil procedure.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 25 Indians 1 2013-04-01 2013-04-01 false Applicable civil procedure. 11.503 Section 11.503 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.503 Applicable civil procedure. The procedure to be followed in civil...
25 CFR 11.503 - Applicable civil procedure.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 25 Indians 1 2011-04-01 2011-04-01 false Applicable civil procedure. 11.503 Section 11.503 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.503 Applicable civil procedure. The procedure to be followed in civil...
25 CFR 11.503 - Applicable civil procedure.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 25 Indians 1 2012-04-01 2011-04-01 true Applicable civil procedure. 11.503 Section 11.503 Indians BUREAU OF INDIAN AFFAIRS, DEPARTMENT OF THE INTERIOR LAW AND ORDER COURTS OF INDIAN OFFENSES AND LAW AND ORDER CODE Civil Actions § 11.503 Applicable civil procedure. The procedure to be followed in civil...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Begovich, C.L.; Eckerman, K.F.; Schlatter, E.C.
1981-08-01
The DARTAB computer code combines radionuclide environmental exposure data with dosimetric and health effects data to generate tabulations of the predicted impact of radioactive airborne effluents. DARTAB is independent of the environmental transport code used to generate the environmental exposure data and the codes used to produce the dosimetric and health effects data. Therefore human dose and risk calculations need not be added to every environmental transport code. Options are included in DARTAB to permit the user to request tabulations by various topics (e.g., cancer site, exposure pathway, etc.) to facilitate characterization of the human health impacts of the effluents. The DARTAB code was written at ORNL for the US Environmental Protection Agency, Office of Radiation Programs.
Development of an Automatic Differentiation Version of the FPX Rotor Code
NASA Technical Reports Server (NTRS)
Hu, Hong
1996-01-01
The ADIFOR2.0 automatic differentiator is applied to the FPX rotor code along with the grid generator GRGN3. FPX is an eXtended Full-Potential CFD code for rotor calculations. An automatic differentiation version of the code is obtained, which provides both non-geometry and geometry sensitivity derivatives. The sensitivity derivatives obtained via automatic differentiation are presented and compared with divided-difference-generated derivatives. The study shows that the automatic differentiation method gives accurate derivative values in an efficient manner.
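Forward-mode automatic differentiation of the kind ADIFOR applies can be sketched with dual numbers; the toy function below stands in for the CFD analysis, and the comparison against a central divided difference mirrors the study's verification approach:

```python
import math

# Dual numbers carry a value and a derivative; arithmetic on them
# propagates exact derivatives via the chain rule, which is what
# forward-mode AD does to a whole code. The function f is a toy
# stand-in, not the FPX rotor analysis.

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, other):
        o = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def sin(x):
    if isinstance(x, Dual):
        return Dual(math.sin(x.val), math.cos(x.val) * x.der)
    return math.sin(x)

def f(x):                                   # toy analysis: x^2 + sin(x)
    return x * x + sin(x)

x0 = 1.3
exact = f(Dual(x0, 1.0)).der                # AD gives 2*x0 + cos(x0) exactly
h = 1e-6
divided = (f(x0 + h) - f(x0 - h)) / (2 * h)  # divided-difference estimate
```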
Bhagavatula, Pradeep; Xiang, Qun; Eichmiller, Fredrick; Szabo, Aniko; Okunseri, Christopher
2014-01-01
Most studies on the provision of dental procedures have focused on Medicaid enrollees known to have inadequate access to dental care. Little information on private insurance enrollees exists. This study documents the rates of preventive, restorative, endodontic, and surgical dental procedures provided to children enrolled in Delta Dental of Wisconsin (DDWI) in Milwaukee. We analyzed DDWI claims data for Milwaukee children aged 0-18 years between 2002 and 2008. We linked the ZIP codes of enrollees to the 2000 U.S. Census information to derive racial/ethnic estimates in the different ZIP codes. We estimated the rates of preventive, restorative, endodontic, and surgical procedures provided to children in different racial/ethnic groups based on the population estimates derived from the U.S. Census data. Descriptive and multivariable analysis was done using Poisson regression modeling on dental procedures per year. In 7 years, a total of 266,380 enrollees were covered in 46 ZIP codes in the database. Approximately, 64 percent, 44 percent, and 49 percent of White, African American, and Hispanic children had at least one dental visit during the study period, respectively. The rates of preventive procedures increased up to the age of 9 years and decreased thereafter among children in all three racial groups included in the analysis. African American and Hispanic children received half as many preventive procedures as White children. Our study shows that substantial racial disparities may exist in the types of dental procedures that were received by children. © 2012 American Association of Public Health Dentistry.
Evaluation of three coding schemes designed for improved data communication
NASA Technical Reports Server (NTRS)
Snelsire, R. W.
1974-01-01
Three coding schemes designed for improved data communication are evaluated. Four block codes are evaluated relative to a quality function, which is a function of both the amount of data rejected and the error rate. The Viterbi maximum likelihood decoding algorithm as a decoding procedure is reviewed. This evaluation is obtained by simulating the system on a digital computer. Short constraint length rate 1/2 quick-look codes are studied, and their performance is compared to general nonsystematic codes.
General phase spaces: from discrete variables to rotor and continuum limits
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Pascazio, Saverio; Devoret, Michel H.
2017-12-01
We provide a basic introduction to discrete-variable, rotor, and continuous-variable quantum phase spaces, explaining how the latter two can be understood as limiting cases of the first. We extend the limit-taking procedures used to travel between phase spaces to a general class of Hamiltonians (including many local stabilizer codes) and provide six examples: the Harper equation, the Baxter parafermionic spin chain, the Rabi model, the Kitaev toric code, the Haah cubic code (which we generalize to qudits), and the Kitaev honeycomb model. We obtain continuous-variable generalizations of all models, some of which are novel. The Baxter model is mapped to a chain of coupled oscillators and the Rabi model to the optomechanical radiation pressure Hamiltonian. The procedures also yield rotor versions of all models, five of which are novel many-body extensions of the almost Mathieu equation. The toric and cubic codes are mapped to lattice models of rotors, with the toric code case related to U(1) lattice gauge theory.
TAS: A Transonic Aircraft/Store flow field prediction code
NASA Technical Reports Server (NTRS)
Thompson, D. S.
1983-01-01
A numerical procedure has been developed that has the capability to predict the transonic flow field around an aircraft with an arbitrarily located, separated store. The TAS code, the product of a joint General Dynamics/NASA ARC/AFWAL research and development program, will serve as the basis for a comprehensive predictive method for aircraft with arbitrary store loadings. This report describes the numerical procedures employed to simulate the flow field around a configuration of this type. The validity of TAS code predictions is established by comparison with existing experimental data. In addition, future areas of development of the code are outlined. A brief description of code utilization is also given in the Appendix. The aircraft/store configuration is simulated using a mesh embedding approach. The computational domain is discretized by three meshes: (1) a planform-oriented wing/body fine mesh, (2) a cylindrical store mesh, and (3) a global Cartesian crude mesh. This embedded mesh scheme enables simulation of stores with fins of arbitrary angular orientation.
Exposure calculation code module for reactor core analysis: BURNER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vondy, D.R.; Cunningham, G.W.
1979-02-01
The code module BURNER for nuclear reactor exposure calculations is presented. The computer requirements are shown, as are the reference data and interface data file requirements, and the programmed equations and procedure of calculation are described. The operating history of a reactor is followed over the period between solutions of the space, energy neutronics problem. The end-of-period nuclide concentrations are determined given the necessary information. A steady state, continuous fueling model is treated in addition to the usual fixed fuel model. The control options provide flexibility to select among an unusually wide variety of programmed procedures. The code also provides a user option to make a number of auxiliary calculations and print such information as the local gamma source, cumulative exposure, and a fine-scale power density distribution in a selected zone. The code is used locally in a system for computation which contains the VENTURE diffusion theory neutronics code and other modules.
Demonstration of Automatically-Generated Adjoint Code for Use in Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Green, Lawrence; Carle, Alan; Fagan, Mike
1999-01-01
Gradient-based optimization requires accurate derivatives of the objective function and constraints. These gradients may have previously been obtained by manual differentiation of analysis codes, symbolic manipulators, finite-difference approximations, or existing automatic differentiation (AD) tools such as ADIFOR (Automatic Differentiation in FORTRAN). Each of these methods has certain deficiencies, particularly when applied to complex, coupled analyses with many design variables. Recently, a new AD tool called ADJIFOR (Automatic Adjoint Generation in FORTRAN), based upon ADIFOR, was developed and demonstrated. Whereas ADIFOR implements forward-mode (direct) differentiation throughout an analysis program to obtain exact derivatives via the chain rule of calculus, ADJIFOR implements the reverse-mode counterpart of the chain rule to obtain exact adjoint form derivatives from FORTRAN code. Automatically-generated adjoint versions of the widely-used CFL3D computational fluid dynamics (CFD) code and an algebraic wing grid generation code were obtained with just a few hours processing time using the ADJIFOR tool. The codes were verified for accuracy and were shown to compute the exact gradient of the wing lift-to-drag ratio, with respect to any number of shape parameters, in about the time required for 7 to 20 function evaluations. The codes have now been executed on various computers with typical memory and disk space for problems with up to 129 x 65 x 33 grid points, and for hundreds to thousands of independent variables. These adjoint codes are now used in a gradient-based aerodynamic shape optimization problem for a swept, tapered wing. For each design iteration, the optimization package constructs an approximate, linear optimization problem, based upon the current objective function, constraints, and gradient values. 
The optimizer subroutines are called within a design loop employing the approximate linear problem until an optimum shape is found, the design loop limit is reached, or no further design improvement is possible due to active design variable bounds and/or constraints. The resulting shape parameters are then used by the grid generation code to define a new wing surface and computational grid. The lift-to-drag ratio and its gradient are computed for the new design by the automatically-generated adjoint codes. Several optimization iterations may be required to find an optimum wing shape. Results from two sample cases will be discussed. The reader should note that this work primarily represents a demonstration of use of automatically- generated adjoint code within an aerodynamic shape optimization. As such, little significance is placed upon the actual optimization results, relative to the method for obtaining the results.
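The design loop described above can be caricatured with a toy gradient step plus bound projection; the quadratic objective and all names below are illustrative stand-ins for the approximate linear subproblem and the lift-to-drag gradient, not the actual optimization package:

```python
# Toy gradient-based design loop: update the design variables along the
# negative gradient, then project back onto the variable bounds (a crude
# stand-in for the approximate linear problem with active bounds).

def optimize(grad, x, bounds, step=0.1, iters=100):
    lo, hi = bounds
    for _ in range(iters):
        x = [xi - step * gi for xi, gi in zip(x, grad(x))]
        x = [min(max(xi, lo), hi) for xi in x]   # enforce design bounds
    return x

# stand-in objective: minimize sum((x_i - 2)^2), gradient 2*(x_i - 2)
grad = lambda x: [2 * (xi - 2.0) for xi in x]
design = optimize(grad, [0.0, 5.0], bounds=(-1.0, 3.0))  # converges near 2
```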
The Design of Integrated Information System for High Voltage Metering Lab
NASA Astrophysics Data System (ADS)
Ma, Yan; Yang, Yi; Xu, Guangke; Gu, Chao; Zou, Lida; Yang, Feng
2018-01-01
With the development of the smart grid, intelligent, information-based management of the high-voltage metering lab has become increasingly urgent. In this paper we design an integrated information system that automates the entire workflow, from accepting instruments and performing experiments to generating reports, signing reports, and handling instrument claims. By creating a database of all calibrated instruments, using two-dimensional codes, integrating report templates in advance, establishing bookmarks, and transmitting electronic signatures online, we largely eliminate manual procedures. These techniques simplify the complex processes of account management and report transmission. After more than a year of operation, our work efficiency has improved by about forty percent on average, and accuracy and data reliability are much higher as well.
Spent Pot Lining Characterization Framework
NASA Astrophysics Data System (ADS)
Ospina, Gustavo; Hassan, Mohamed I.
2017-09-01
Spent pot lining (SPL) management represents a major concern for aluminum smelters. There are two key elements for spent pot lining management: recycling and safe storage. Spent pot lining waste can potentially have beneficial uses in co-firing in cement plants. Also, safe storage of SPL is of utmost importance. Gas generation of SPL reaction with water and ignition sensitivity must be studied. However, determining the feasibility of SPL co-firing and developing the required procedures for safe storage rely on determining experimentally all the necessary SPL properties along with the appropriate test methods, recognized by emissions standards and fire safety design codes. The applicable regulations and relevant SPL properties for this purpose are presented along with the corresponding test methods.
Gschwind, Michael K
2013-07-23
Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
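The runtime strategy the patent describes, executing the aggressively optimized version and switching to the conservative version on failure, can be sketched as follows (function names and exception choices here are illustrative, not from the patent):

```python
# Run the aggressively compiled version first; if it raises one of the
# new exceptions the unsafe optimization may introduce, fall back to the
# conservatively compiled version for that input.

def run_with_fallback(aggressive, conservative, *args):
    try:
        return aggressive(*args)
    except (ArithmeticError, ValueError):
        return conservative(*args)   # safe version handles the edge case

# e.g. an "optimized" reciprocal that skips the zero check
fast = lambda x: 1.0 / x
safe = lambda x: float('inf') if x == 0 else 1.0 / x

result = run_with_fallback(fast, safe, 0.0)  # fast raises, safe answers
```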
Tramontano, A; Macchiato, M F
1986-01-01
An algorithm to determine the probability that a reading frame codes for a protein is presented. It is based on the results of our previous studies on the thermodynamic characteristics of a translated reading frame. We also develop a prediction procedure to distinguish between coding and non-coding reading frames. The procedure is based on the characteristics of the putative product of the DNA sequence and not on the periodicity characteristics of the sequence, so the prediction is not biased by the presence of overlapping translated reading frames or by translated reading frames on the complementary DNA strand. PMID:3753761
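The general idea of scoring a reading frame by properties of its putative protein product, rather than by sequence periodicity, can be sketched as below; the hydrophobic-residue fraction is a stand-in score, not the authors' thermodynamic measure:

```python
from itertools import product

# Translate a reading frame with the standard codon table (compact TCAG
# layout), then score the putative protein by a simple product property.

bases = 'TCAG'
amino = ('FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRR'
         'IIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG')
CODON_TABLE = {a + b + c: amino[i]
               for i, (a, b, c) in enumerate(product(bases, repeat=3))}
HYDROPHOBIC = set('AVLIMFWC')

def translate(dna, frame=0):
    protein = []
    for i in range(frame, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == '*':            # stop codon ends the putative product
            break
        protein.append(aa)
    return ''.join(protein)

def hydrophobic_fraction(dna, frame=0):
    p = translate(dna, frame)
    return sum(aa in HYDROPHOBIC for aa in p) / len(p) if p else 0.0

score = hydrophobic_fraction('ATGGTTCTTTAA')  # Met-Val-Leu, then stop
```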
Pressure Mapping and Efficiency Analysis of an EPPLER 857 Hydrokinetic Turbine
NASA Astrophysics Data System (ADS)
Clark, Tristan
A conceptual energy ship is presented to provide renewable energy. The ship, driven by the wind, drags a hydrokinetic turbine through the water. The power generated is used to run electrolysis on board, and the resulting hydrogen is taken back to shore to be used as an energy source. The basin efficiency (power/(thrust × velocity)) of the hydrokinetic turbine (HKT) plays a vital role in this process. In order to extract the maximum allowable power from the flow, the blades need to be optimized. Structural analysis of the blade is also important, as the blade will undergo high pressure loads from the water. A procedure for the analysis of a preliminary hydrokinetic turbine blade design is developed. The blade was designed by a non-optimized Blade Element Momentum Theory (BEMT) code. Six simulations were run, with varying mesh resolution, turbulence models, and flow region size. The procedure provides a detailed explanation of the entire process, from geometry and mesh generation to post-processing analysis tools. The efficiency results from the simulations are used to study the effects of mesh resolution, flow region size, and turbulence model. The results are compared to the BEMT model design targets. Static pressure maps are created that can be used for structural analysis of the blades.
Methodology for the structural design of single spoke accelerating cavities at Fermilab
Passarelli, Donato; Wands, Robert H.; Merio, Margherita; ...
2016-10-01
Fermilab is planning to upgrade its accelerator complex to deliver a more powerful and intense proton-beam for neutrino experiments. In the framework of the so-called Proton Improvement Plan-II (PIP-II), we are designing and developing a cryomodule containing superconducting accelerating cavities, the Single Spoke Resonators of type 1 (SSR1). In this paper, we present the sequence of analysis and calculations performed for the structural design of these cavities, using the rules of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (BPVC). The lack of an accepted procedure for addressing the design, fabrication, and inspection of such unique pressure vessels makes the task demanding and challenging every time. Several factors such as exotic materials, unqualified brazing procedures, limited nondestructive examination, and the general R&D nature of these early generations of cavity design, conspire to make it impractical to obtain full compliance with all ASME BPVC requirements. However, the presented approach allowed us to validate the design of these new generation of single spoke cavities with values of maximum allowable working pressure that exceed the safety requirements. This set of rules could be used as a starting point for the structural design and development of similar objects.
Methodology for the structural design of single spoke accelerating cavities at Fermilab
NASA Astrophysics Data System (ADS)
Passarelli, Donato; Wands, Robert H.; Merio, Margherita; Ristori, Leonardo
2016-10-01
Fermilab is planning to upgrade its accelerator complex to deliver a more powerful and intense proton-beam for neutrino experiments. In the framework of the so-called Proton Improvement Plan-II (PIP-II), we are designing and developing a cryomodule containing superconducting accelerating cavities, the Single Spoke Resonators of type 1 (SSR1). In this paper, we present the sequence of analysis and calculations performed for the structural design of these cavities, using the rules of the American Society of Mechanical Engineers (ASME) Boiler and Pressure Vessel Code (BPVC). The lack of an accepted procedure for addressing the design, fabrication, and inspection of such unique pressure vessels makes the task demanding and challenging every time. Several factors such as exotic materials, unqualified brazing procedures, limited nondestructive examination, and the general R&D nature of these early generations of cavity design, conspire to make it impractical to obtain full compliance with all ASME BPVC requirements. However, the presented approach allowed us to validate the design of this new generation of single spoke cavities with values of maximum allowable working pressure that exceeds the safety requirements. This set of rules could be used as a starting point for the structural design and development of similar objects.
NASA Technical Reports Server (NTRS)
Boger, David A.; Govindan, T. R.; McDonald, Henry
1997-01-01
Previous work at NASA LeRC has shown that flow distortions in aircraft engine inlet ducts can be significantly reduced by mounting vortex generators, or small wing sections, on the inside surface of the engine inlet. The placement of the vortex generators is an important factor in obtaining the optimal effect over a wide operating envelope. In this regard, the only alternative to a long and expensive test program which would search out this optimal configuration is a good prediction procedure which could narrow the field of search. Such a procedure has been developed in collaboration with NASA LeRC, and results obtained by NASA personnel indicate that it shows considerable promise for predicting the viscous turbulent flow in engine inlet ducts in the presence of vortex generators. The prediction tool is a computer code which numerically solves the reduced Navier-Stokes equations and so is commonly referred to as RNS3D. Obvious deficiencies in RNS3D have been addressed in previous work. Primarily, it is known that the predictions of the mean velocity field of a turbulent boundary layer flow approaching separation are not in good agreement with data. It was suggested that the use of an algebraic mixing-length turbulence model in RNS3D is at least partly to blame for this. Additionally, the current turbulence model includes an assumption of isotropy which will ultimately fail to capture turbulence-driven secondary flow known to exist in noncircular ducts.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schulz, C.; Givens, C.; Bhatt, R.
2003-02-24
Idaho National Engineering and Environmental Laboratory (INEEL) is conducting an effort to characterize approximately 620 drums of remote-handled (RH-) transuranic (TRU) waste currently in its inventory that were generated at the Argonne National Laboratory-East (ANL-E) Alpha Gamma Hot Cell Facility (AGHCF) between 1971 and 1995. The waste was generated at the AGHCF during the destructive examination of irradiated and unirradiated fuel pins, targets, and other materials from reactor programs at ANL-West (ANL-W) and other Department of Energy (DOE) reactors. In support of this effort, Shaw Environmental and Infrastructure (formerly IT Corporation) developed an acceptable knowledge (AK) collection and management program based on existing contact-handled (CH)-TRU waste program requirements and proposed RH-TRU waste program requirements in effect in July 2001. Consistent with Attachments B-B6 of the Waste Isolation Pilot Plant (WIPP) Hazardous Waste Facility Permit (HWFP) and the proposed Class 3 permit modification (Attachment R [RH-WAP] of this permit), the draft AK Summary Report prepared under the AK procedure describes the waste generating process and includes determinations in the following areas based on AK: physical form (currently identified at the Waste Matrix Code level); waste stream delineation; applicability of hazardous waste numbers for hazardous waste constituents; and prohibited items. In addition, the procedure requires and the draft summary report contains information supporting determinations in the areas of defense relationship and radiological characterization.
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
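The construction is easy to reproduce: concatenate the 2-bit Gray codes of the bases, take pairwise Hamming distances, and check that the resulting matrix is symmetric with constant row and column sums (hence doubly stochastic once normalized). A minimal sketch:

```python
from itertools import product

# Paper's base encoding: C=00, U=10, G=11, A=01. An n-base word gets the
# concatenation of its bases' codes; the numerical matrix holds pairwise
# Hamming distances between those 2n-bit codes.

BASE_CODE = {'C': '00', 'U': '10', 'G': '11', 'A': '01'}

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def distance_matrix(n):
    words = [''.join(w) for w in product('CUGA', repeat=n)]
    codes = [''.join(BASE_CODE[b] for b in w) for w in words]
    return [[hamming(ci, cj) for cj in codes] for ci in codes]

M = distance_matrix(2)                    # 16x16 matrix for dinucleotides
row_sums = [sum(row) for row in M]
col_sums = [sum(col) for col in zip(*M)]
# symmetric, with every row and column summing to the same constant
```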
Exploring Hill Ciphers with Graphing Calculators.
ERIC Educational Resources Information Center
St. John, Dennis
1998-01-01
Explains how to code and decode messages using Hill ciphers which combine matrix multiplication and modular arithmetic. Discusses how a graphing calculator can facilitate the matrix and modular arithmetic used in the coding and decoding procedures. (ASK)
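The matrix-times-vector mod 26 arithmetic that the graphing calculator facilitates looks like this in a short sketch, using a common 2x2 textbook key (an illustrative choice, not necessarily the article's example):

```python
# Hill cipher sketch: encryption multiplies 2-letter blocks by a key
# matrix mod 26; decryption uses the key's modular inverse matrix.
# The key's determinant must be coprime with 26 for the inverse to exist.

KEY = [[3, 3], [2, 5]]                    # det = 9, gcd(9, 26) = 1

def _mat_vec(m, v, mod=26):
    return [(m[0][0] * v[0] + m[0][1] * v[1]) % mod,
            (m[1][0] * v[0] + m[1][1] * v[1]) % mod]

def _inverse_key(m, mod=26):
    det = (m[0][0] * m[1][1] - m[0][1] * m[1][0]) % mod
    det_inv = pow(det, -1, mod)           # modular inverse (Python 3.8+)
    return [[( m[1][1] * det_inv) % mod, (-m[0][1] * det_inv) % mod],
            [(-m[1][0] * det_inv) % mod, ( m[0][0] * det_inv) % mod]]

def hill(text, key):
    nums = [ord(c) - ord('A') for c in text]
    out = []
    for i in range(0, len(nums), 2):      # process digraphs
        out.extend(_mat_vec(key, nums[i:i + 2]))
    return ''.join(chr(n + ord('A')) for n in out)

cipher = hill("HELP", KEY)                # encode
plain = hill(cipher, _inverse_key(KEY))   # decode recovers "HELP"
```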
Validating LES for Jet Aeroacoustics
NASA Technical Reports Server (NTRS)
Bridges, James
2011-01-01
Engineers charged with making jet aircraft quieter have long dreamed of being able to see exactly how turbulent eddies produce sound, and this dream is now coming true with the advent of large eddy simulation (LES). Two obvious challenges remain: validating the LES codes at the resolution required to see the fluid-acoustic coupling, and interpreting the massive datasets that result. This paper primarily addresses the former: the use of advanced experimental techniques such as particle image velocimetry (PIV) and Raman and Rayleigh scattering to validate the computer codes and procedures used to create LES solutions. It also addresses the latter problem in discussing which measures critical for aeroacoustics should be used in validating LES codes. These new diagnostic techniques deliver measurements and flow statistics of increasing sophistication and capability, but what of their accuracy? And what measures are to be used in validation? This paper argues that the issue of accuracy be addressed by cross-facility and cross-disciplinary examination of modern datasets, along with increased reporting of internal quality checks in PIV analysis. Further, it is argued that the appropriate validation metrics for aeroacoustic applications are increasingly complicated statistics that have been shown in aeroacoustic theory to be critical to flow-generated sound.
Severgnini, Marco; Bicciato, Silvio; Mangano, Eleonora; Scarlatti, Francesca; Mezzelani, Alessandra; Mattioli, Michela; Ghidoni, Riccardo; Peano, Clelia; Bonnal, Raoul; Viti, Federica; Milanesi, Luciano; De Bellis, Gianluca; Battaglia, Cristina
2006-06-01
Meta-analysis of microarray data is increasingly important, considering both the availability of multiple platforms using disparate technologies and the accumulation in public repositories of data sets from different laboratories. We addressed the issue of comparing gene expression profiles from two microarray platforms by devising a standardized investigative strategy. We tested this procedure by studying MDA-MB-231 cells, which undergo apoptosis on treatment with resveratrol. Gene expression profiles were obtained using high-density, short-oligonucleotide, single-color microarray platforms: GeneChip (Affymetrix) and CodeLink (Amersham). Interplatform analyses were carried out on 8414 common transcripts represented on both platforms, as identified by LocusLink ID, representing 70.8% and 88.6% of annotated GeneChip and CodeLink features, respectively. We identified 105 differentially expressed genes (DEGs) on CodeLink and 42 DEGs on GeneChip. Among them, only 9 DEGs were commonly identified by both platforms. Multiple analyses (BLAST alignment of probes with target sequences, gene ontology, literature mining, and quantitative real-time PCR) permitted us to investigate the factors contributing to the generation of platform-dependent results in single-color microarray experiments. An effective approach to cross-platform comparison involves microarrays of similar technologies, samples prepared by identical methods, and a standardized battery of bioinformatic and statistical analyses.
HowTo - Easy use of global unique identifier
NASA Astrophysics Data System (ADS)
Czerniak, A.; Fleischer, D.; Schirnick, C.
2013-12-01
The GEOMAR sample and core repository holds several thousand samples and cores collected over the last decades. In the current project, we bring this collection up to date by tagging every sample and core with a unique identifier, in our case the International Geo Sample Number (IGSN). This work is done with our digital-ink and handwriting-recognition implementation; the smart-pen technology saves time and resources when recording the information for every sample or core. The recording procedure follows several systematic steps: 1. Gather all information about the core or sample, such as the cruise number and the responsible person. 2. Tag it with a unique identifier, in our case a QR code. 3. Record the storage location of the sample or core. After the information is transmitted from the smart pen (currently via USB, though wireless is also an option) to our server infrastructure, linking to other information begins. Once linked in our Virtual Research Environment (VRE) via the unique identifier (IGSN), a sample or core can be located, and the QR code links back from the core or sample to the IGSN and additional scientific information. The QR code carries all the important information and is simple to produce by the thousands.
48 CFR 1503.500-71 - Procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... for: (1) A written code of business ethics and conduct and an ethics training program for all employees; (2) Periodic reviews of company business practices, procedures, policies and internal controls...
Results from the Veterans Health Administration ICD-10-CM/PCS Coding Pilot Study.
Weems, Shelley; Heller, Pamela; Fenton, Susan H
2015-01-01
The Veterans Health Administration (VHA) of the US Department of Veterans Affairs has been preparing for the October 1, 2015, conversion to the International Classification of Diseases, Tenth Revision, Clinical Modification and Procedural Coding System (ICD-10-CM/PCS) for more than four years. The VHA's Office of Informatics and Analytics ICD-10 Program Management Office established an ICD-10 Learning Lab to explore expected operational challenges. This study was conducted to determine the effects of the classification system conversion on coding productivity. ICD codes are integral to VHA business processes and are used for purposes such as clinical studies, performance measurement, workload capture, cost determination, Veterans Equitable Resource Allocation (VERA) determination, morbidity and mortality classification, indexing of hospital records by disease and operations, data storage and retrieval, research purposes, and reimbursement. The data collection for this study occurred in multiple VHA sites across several months using standardized methods. It is commonly accepted that coding productivity will decrease with the implementation of ICD-10-CM/PCS. The findings of this study suggest that the decrease will be more significant for inpatient coding productivity (64.5 percent productivity decrease) than for ambulatory care coding productivity (6.7 percent productivity decrease). This study reveals the following important points regarding ICD-10-CM/PCS coding productivity: 1. Ambulatory care ICD-10-CM coding productivity is not expected to decrease as significantly as inpatient ICD-10-CM/PCS coding productivity. 2. Coder training and type of record (inpatient versus outpatient) affect coding productivity. 3. Inpatient coding productivity is decreased when a procedure requiring ICD-10-PCS coding is present. It is highly recommended that organizations perform their own analyses to determine the effects of ICD-10-CM/PCS implementation on coding productivity. PMID:26396553
Performance and Architecture Lab Modeling Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
2014-06-19
Analytical application performance models are critical for diagnosing performance-limiting resources, optimizing systems, and designing machines. Creating models, however, is difficult. Furthermore, models are frequently expressed in forms that are hard to distribute and validate. The Performance and Architecture Lab Modeling tool, or Palm, is a modeling tool designed to make application modeling easier. Palm provides a source code modeling annotation language. Not only does the modeling language divide the modeling task into subproblems, it formally links an application's source code with its model. This link is important because a model's purpose is to capture application behavior. Furthermore, this link makes it possible to define rules for generating models according to source code organization. Palm generates hierarchical models according to well-defined rules. Given an application, a set of annotations, and a representative execution environment, Palm will generate the same model. A generated model is an executable program whose constituent parts directly correspond to the modeled application. Palm generates models by combining top-down (human-provided) semantic insight with bottom-up static and dynamic analysis. A model's hierarchy is defined by static and dynamic source code structure. Because Palm coordinates models and source code, Palm's models are 'first-class' and reproducible. Palm automates common modeling tasks. For instance, Palm incorporates measurements to focus attention, represent constant behavior, and validate models. Palm's workflow is as follows. The workflow's input is source code annotated with Palm modeling annotations. The most important annotation models an instance of a block of code. Given annotated source code, the Palm Compiler produces executables and the Palm Monitor collects a representative performance profile. The Palm Generator synthesizes a model based on the static and dynamic mapping of annotations to program behavior. The model, an executable program, is a hierarchical composition of annotation functions, synthesized functions, statistics for runtime values, and performance measurements.
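The abstract does not reproduce Palm's annotation syntax; purely as a hypothetical sketch of the underlying idea, linking a block of code to an analytical cost expression and recording the measured runtime alongside the model's prediction might look like this (the decorator and the model form are invented for illustration, not Palm's language):

```python
import time

# Hypothetical sketch of a Palm-style modeling annotation: attach an
# analytical cost model to a block of code and record measured runtime
# next to the model prediction, so the model can be validated later.
measurements = []

def model(cost_fn):
    def wrap(fn):
        def inner(n):
            t0 = time.perf_counter()
            result = fn(n)
            elapsed = time.perf_counter() - t0
            # (function name, predicted cost, measured seconds)
            measurements.append((fn.__name__, cost_fn(n), elapsed))
            return result
        return inner
    return wrap

@model(cost_fn=lambda n: n)      # predicted cost grows linearly in n
def sum_block(n):
    return sum(range(n))

print(sum_block(1000))
```

The hierarchical composition Palm describes would correspond to nesting such annotated blocks.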
Production Level CFD Code Acceleration for Hybrid Many-Core Architectures
NASA Technical Reports Server (NTRS)
Duffy, Austen C.; Hammond, Dana P.; Nielsen, Eric J.
2012-01-01
In this work, a novel graphics processing unit (GPU) distributed sharing model for hybrid many-core architectures is introduced and employed in the acceleration of a production-level computational fluid dynamics (CFD) code. The latest generation graphics hardware allows multiple processor cores to simultaneously share a single GPU through concurrent kernel execution. This feature has allowed the NASA FUN3D code to be accelerated in parallel with up to four processor cores sharing a single GPU. For codes to scale and fully use resources on these and the next generation machines, codes will need to employ some type of GPU sharing model, as presented in this work. Findings include the effects of GPU sharing on overall performance. A discussion of the inherent challenges that parallel unstructured CFD codes face in accelerator-based computing environments is included, with considerations for future generation architectures. This work was completed by the author in August 2010, and reflects the analysis and results of the time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, S.A.
1997-07-01
The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current generation safety analysis codes are being modified to replace simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermo-hydraulic codes will need to have included in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and PRA practitioners who will increasingly use real-time simulation for evaluating PRA success criteria in near real-time to validate PRA results for specific configurations and plant system unavailabilities.
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deen, J.R.; Woodruff, W.L.; Leal, L.E.
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh water moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D{sub 2}O moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Anguera, M Teresa; Portell, Mariona; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana
2018-01-01
Indirect observation is a recent concept in systematic observation. It largely involves analyzing textual material generated either indirectly from transcriptions of audio recordings of verbal behavior in natural settings (e.g., conversation, group discussions) or directly from narratives (e.g., letters of complaint, tweets, forum posts). It may also feature seemingly unobtrusive objects that can provide relevant insights into daily routines. All these materials constitute an extremely rich source of information for studying everyday life, and they are continuously growing with the burgeoning of new technologies for data recording, dissemination, and storage. Narratives are an excellent vehicle for studying everyday life, and quantitization is proposed as a means of integrating qualitative and quantitative elements. However, this analysis requires a structured system that enables researchers to analyze varying forms and sources of information objectively. In this paper, we present a methodological framework detailing the steps and decisions required to quantitatively analyze a set of data that was originally qualitative. We provide guidelines on study dimensions, text segmentation criteria, ad hoc observation instruments, data quality controls, and coding and preparation of text for quantitative analysis. The quality control stage is essential to ensure that the code matrices generated from the qualitative data are reliable. We provide examples of how an indirect observation study can produce data for quantitative analysis and also describe the different software tools available for the various stages of the process. The proposed method is framed within a specific mixed methods approach that involves collecting qualitative data and subsequently transforming these into matrices of codes (not frequencies) for quantitative analysis to detect underlying structures and behavioral patterns. 
The data collection and quality control procedures fully meet the requirement of flexibility and provide new perspectives on data integration in the study of biopsychosocial aspects in everyday contexts.
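The transformation of coded qualitative segments into matrices of codes can be sketched as follows; the segments and code labels are invented examples, not data from the study:

```python
# Sketch: turn coded text segments into a binary code matrix
# (rows = segments, columns = codes) ready for quantitative analysis.
segments = [
    {"id": "s1", "codes": {"complaint", "request"}},
    {"id": "s2", "codes": {"request"}},
    {"id": "s3", "codes": {"complaint", "praise"}},
]

# Column order: the sorted set of all codes used across segments.
codes = sorted({c for seg in segments for c in seg["codes"]})
matrix = [[1 if c in seg["codes"] else 0 for c in codes]
          for seg in segments]

print(codes)
for seg, row in zip(segments, matrix):
    print(seg["id"], row)
```

Note that the matrix records presence of codes, not frequencies, matching the approach described above; pattern-detection techniques would then operate on this matrix.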
SLHAplus: A library for implementing extensions of the standard model
NASA Astrophysics Data System (ADS)
Bélanger, G.; Christensen, Neil D.; Pukhov, A.; Semenov, A.
2011-03-01
We provide a library to facilitate the implementation of new models in codes such as matrix element and event generators or codes for computing dark matter observables. The library contains an SLHA reader routine as well as diagonalisation routines. This library is available in CalcHEP and micrOMEGAs. The implementation of models based on this library is supported by LanHEP and FeynRules. Program summary: Program title: SLHAplus_1.3; Catalogue identifier: AEHX_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEHX_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html; No. of lines in distributed program, including test data, etc.: 6283; No. of bytes in distributed program, including test data, etc.: 52 119; Distribution format: tar.gz; Programming language: C; Computer: IBM PC, MAC; Operating system: UNIX (Linux, Darwin, Cygwin); RAM: 2000 MB; Classification: 11.1. Nature of problem: Implementation of extensions of the standard model in matrix element and event generators and codes for dark matter observables. Solution method: For generic extensions of the standard model we provide routines for reading files that adopt the standard format of the SUSY Les Houches Accord (SLHA) file. The procedure has been generalized to take into account an arbitrary number of blocks so that the reader can be used in generic models including non-supersymmetric ones. The library also contains routines to diagonalize real and complex mass matrices with either unitary or bi-unitary transformations as well as routines for evaluating the running strong coupling constant, running quark masses and effective quark masses. Running time: 0.001 sec
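The SLHA format the reader handles is line-oriented: a BLOCK header followed by whitespace-separated index/value entries, with `#` comments. A minimal sketch of parsing such a file (in Python rather than the library's C, and not the SLHAplus reader itself) might be:

```python
# Sketch: parse SLHA-style BLOCK entries into a dict keyed by
# (block name, entry index). Minimal illustration only; the real
# SLHAplus reader handles arbitrary blocks, matrices, and decays.
sample = """\
BLOCK MASS   # mass spectrum
   25   1.25E+02   # h0
   6    1.73E+02   # top
BLOCK SMINPUTS
   1    1.27934E+02
"""

def read_slha(text):
    data, block = {}, None
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments
        if not line:
            continue
        parts = line.split()
        if parts[0].upper() == "BLOCK":
            block = parts[1].upper()
        elif block is not None:
            data[(block, int(parts[0]))] = float(parts[1])
    return data

spectrum = read_slha(sample)
print(spectrum[("MASS", 25)])
```

Generalizing the reader to an arbitrary number of blocks, as the abstract describes, is what lets the same routine serve non-supersymmetric models.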
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
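The first-order moment matching described above rests on the standard linearization sigma_f^2 ≈ sum_i (df/dx_i)^2 sigma_i^2 for independent normal inputs. A toy comparison against Monte Carlo, with a simple linear function standing in for the CFD output (for which first order is exact), might look like:

```python
import math
import random

# Toy check of first-order moment propagation against Monte Carlo.
# f stands in for the CFD output; it is NOT the quasi-1D Euler code.
def f(x, y):
    return 3.0 * x + 2.0 * y   # linear, so first order is exact

mu_x, sig_x = 1.0, 0.1
mu_y, sig_y = 2.0, 0.2

# First-order propagation: sigma_f^2 = (df/dx)^2 sx^2 + (df/dy)^2 sy^2
sig_f = math.sqrt((3.0 * sig_x) ** 2 + (2.0 * sig_y) ** 2)

random.seed(0)
samples = [f(random.gauss(mu_x, sig_x), random.gauss(mu_y, sig_y))
           for _ in range(100_000)]
mean_mc = sum(samples) / len(samples)
var_mc = sum((s - mean_mc) ** 2 for s in samples) / len(samples)

print(round(sig_f, 3), round(math.sqrt(var_mc), 3))
```

For a nonlinear output the two estimates diverge, which is why the paper adds second-order terms and validates against Monte Carlo.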
Scaling up spike-and-slab models for unsupervised feature learning.
Goodfellow, Ian J; Courville, Aaron; Bengio, Yoshua
2013-08-01
We describe the use of two spike-and-slab models for modeling real-valued data, with an emphasis on their applications to object recognition. The first model, which we call spike-and-slab sparse coding (S3C), is a preexisting model for which we introduce a faster approximate inference algorithm. We introduce a deep variant of S3C, which we call the partially directed deep Boltzmann machine (PD-DBM) and extend our S3C inference algorithm for use on this model. We describe learning procedures for each. We demonstrate that our inference procedure for S3C enables scaling the model to unprecedented large problem sizes, and demonstrate that using S3C as a feature extractor results in very good object recognition performance, particularly when the number of labeled examples is low. We show that the PD-DBM generates better samples than its shallow counterpart, and that unlike DBMs or DBNs, the PD-DBM may be trained successfully without greedy layerwise training.
Modular Engine Noise Component Prediction System (MCP) Program Users' Guide
NASA Technical Reports Server (NTRS)
Golub, Robert A. (Technical Monitor); Herkes, William H.; Reed, David H.
2004-01-01
This is a user's manual for Modular Engine Noise Component Prediction System (MCP). This computer code allows the user to predict turbofan engine noise estimates. The program is based on an empirical procedure that has evolved over many years at The Boeing Company. The data used to develop the procedure include both full-scale engine data and small-scale model data, and include testing done by Boeing, by the engine manufacturers, and by NASA. In order to generate a noise estimate, the user specifies the appropriate engine properties (including both geometry and performance parameters), the microphone locations, the atmospheric conditions, and certain data processing options. The version of the program described here allows the user to predict three components: inlet-radiated fan noise, aft-radiated fan noise, and jet noise. MCP predicts one-third octave band noise levels over the frequency range of 50 to 10,000 Hertz. It also calculates overall sound pressure levels and certain subjective noise metrics (e.g., perceived noise levels).
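One-third octave bands space their center frequencies by a factor of 2^(1/3) about the 1 kHz reference band; the 50 to 10,000 Hz range covered by MCP can be enumerated as a sketch (band selection tolerance is an assumption here, and nominal standard frequencies are rounded values of these):

```python
# Sketch: one-third octave band center frequencies from 50 Hz to
# 10 kHz, generated from the 1000 Hz reference band (base-2 spacing).
centers = []
n = -30
while True:
    f = 1000.0 * 2 ** (n / 3.0)
    if f > 10_000 * 1.06:      # small tolerance around the top band
        break
    if f >= 50 / 1.06:         # small tolerance around the bottom band
        centers.append(round(f, 1))
    n += 1

print(len(centers))
print(centers[0], centers[-1])
```

This yields 24 bands, which is the resolution at which MCP reports its spectra before summing to overall levels and subjective metrics.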
Haas, Brian J; Papanicolaou, Alexie; Yassour, Moran; Grabherr, Manfred; Blood, Philip D; Bowden, Joshua; Couger, Matthew Brian; Eccles, David; Li, Bo; Lieber, Matthias; MacManes, Matthew D; Ott, Michael; Orvis, Joshua; Pochet, Nathalie; Strozzi, Francesco; Weeks, Nathan; Westerman, Rick; William, Thomas; Dewey, Colin N; Henschel, Robert; LeDuc, Richard D; Friedman, Nir; Regev, Aviv
2013-08-01
De novo assembly of RNA-seq data enables researchers to study transcriptomes without the need for a genome sequence; this approach can be usefully applied, for instance, in research on 'non-model organisms' of ecological and evolutionary importance, cancer samples or the microbiome. In this protocol we describe the use of the Trinity platform for de novo transcriptome assembly from RNA-seq data in non-model organisms. We also present Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes. In the procedure, we provide a workflow for genome-independent transcriptome analysis leveraging the Trinity platform. The software, documentation and demonstrations are freely available from http://trinityrnaseq.sourceforge.net. The run time of this protocol is highly dependent on the size and complexity of data to be analyzed. The example data set analyzed in the procedure detailed herein can be processed in less than 5 h.
Nodal network generator for CAVE3
NASA Technical Reports Server (NTRS)
Palmieri, J. V.; Rathjen, K. A.
1982-01-01
A new extension of CAVE3 code was developed that automates the creation of a finite difference math model in digital form ready for input to the CAVE3 code. The new software, Nodal Network Generator, is broken into two segments. One segment generates the model geometry using a Tektronix Tablet Digitizer and the other generates the actual finite difference model and allows for graphic verification using Tektronix 4014 Graphic Scope. Use of the Nodal Network Generator is described.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-16
... Interest and Penalty Suspension Provisions Under Section 6404(g) of the Internal Revenue Code AGENCY.... SUMMARY: This document contains final regulations under section 6404(g)(2)(E) of the Internal Revenue Code... Procedure and Administration Regulations (26 CFR part 301) by adding rules under section 6404(g) relating to...
The purpose of this SOP is to define the coding strategy for the Technician Walk-Through Questionnaire. This questionnaire was developed for use during the Arizona NHEXAS project and the Border study. Keywords: data; coding; technician walk-through questionnaire.
The U.S.-Mexi...
The purpose of this SOP is to define the coding strategy for the Food Diary Follow Up Questionnaire. This questionnaire was developed for use in the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; food diary follow up questionnaire.
The National Human Ex...
The purpose of this SOP is to define the coding strategy for the Diet Diary Questionnaire. This questionnaire was developed for use during the Arizona NHEXAS project and the Border study. Keywords: data; coding; diet diary questionnaire.
The U.S.-Mexico Border Program is spon...
The purpose of this SOP is to define the coding strategy for the Time Diary and Activity Questionnaire. This questionnaire was developed for use in the Arizona NHEXAS project and the "Border" study. Keywords: Data; Coding; Time Diary and Activity Questionnaire.
The National Hu...
The purpose of this SOP is to describe the coding strategy for the Questionnaire Feedback form. This Questionnaire Feedback form was developed for use during the Arizona NHEXAS project and the Border study. Keywords: data; coding; questionnaire feedback form.
The U.S.-Mexico B...
NHEXAS PHASE I ARIZONA STUDY--STANDARD OPERATING PROCEDURE FOR CODING: ARIZONA LAB DATA (UA-D-13.0)
The purpose of this SOP is to define the coding strategy for Arizona Lab Data. This strategy was developed for use in the Arizona NHEXAS project and the "Border" study. Keywords: data; coding; lab data forms.
The National Human Exposure Assessment Survey (NHEXAS) is a federal ...