DOE Office of Scientific and Technical Information (OSTI.GOV)
Adrian Miron; Joshua Valentine; John Christenson
2009-10-01
The current state of the art in nuclear fuel cycle (NFC) modeling is an eclectic mixture of codes with various levels of applicability, flexibility, and availability. In support of advanced fuel cycle systems analyses, especially those by the Advanced Fuel Cycle Initiative (AFCI), the University of Cincinnati, in collaboration with Idaho State University, carried out a detailed review of the existing codes describing various aspects of the nuclear fuel cycle and identified the research and development needs required for a comprehensive model of the global nuclear energy infrastructure and the associated nuclear fuel cycles. Relevant information obtained on the NFC codes was compiled into a relational database that allows easy access to the various codes' properties. Additionally, the research analyzed the gaps in the NFC computer codes with respect to their potential integration into programs that perform comprehensive NFC analysis.
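Such a catalog lends itself to a simple relational layout. The sketch below uses hypothetical table and column names (the report does not specify its schema) to show how per-code properties could be stored and queried:

```python
import sqlite3

# Hypothetical two-table schema: a registry of fuel-cycle codes and a
# key/value table of their properties, queryable by attribute.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE code (id INTEGER PRIMARY KEY, name TEXT, developer TEXT);
    CREATE TABLE property (code_id INTEGER REFERENCES code(id),
                           attribute TEXT, value TEXT);
""")
con.execute("INSERT INTO code VALUES (1, 'EXAMPLE-NFC', 'Example Lab')")
con.executemany("INSERT INTO property VALUES (1, ?, ?)",
                [("availability", "export controlled"),
                 ("scope", "front end")])
# Look up every code's value for one attribute:
rows = con.execute("""
    SELECT c.name, p.value FROM code c JOIN property p ON p.code_id = c.id
    WHERE p.attribute = 'scope'
""").fetchall()
print(rows)
```

The key/value `property` table keeps the schema stable as new code attributes are added to the survey.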
Contracting to improve your revenue cycle performance.
Welter, Terri L; Semko, George A; Miller, Tony; Lauer, Roberta
2007-09-01
The following key drivers of commercial contract variability can have a material effect on your hospital's revenue cycle: Claim form variance. Benefit design. Contract complexity. Coding variance. Medical necessity. Precertification/authorization. Claim adjudication/appeal requirements. Additional documentation requirements. Timeliness of payment. Third-party payer activity.
PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin Mohammed; Som, Sibendu
This code consists of a Fortran 90 implementation of the parallel perturbation model to compute cyclic variability in spark ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock and in an overall reduction in the reliability of the engine. Numerical prediction of CCV in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales, and hence the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code, and a 10-fold speedup over conventional multi-cycle LES was demonstrated in predicting the CCV for a motored engine. Recently, the model has also been applied to fired engines, including port fuel injected (PFI) and direct injection spark ignition engines, and the preliminary results are very encouraging.
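The idea of trading one long serial simulation for many short, independently perturbed ones can be illustrated with a toy surrogate standing in for the CFD solver. All function names and numbers below are illustrative, not part of PPM4CCV:

```python
import random
import statistics

def engine_cycle(u0, seed):
    """Surrogate for one simulated engine cycle: returns a peak-pressure-like
    metric as a function of the perturbed initial velocity scale u0 plus
    in-cycle randomness. Stands in for a full LES cycle."""
    rng = random.Random(seed)
    return 50.0 + 5.0 * u0 + rng.gauss(0.0, 1.0)

def ccv_parallel_perturbation(n_runs=100, cycles_per_run=3,
                              base_u0=1.0, perturb=0.05):
    """Estimate CCV (coefficient of variation of the cycle metric) from many
    short perturbed runs instead of one long multi-cycle simulation."""
    samples = []
    for i in range(n_runs):          # in practice these runs execute in parallel
        rng = random.Random(1000 + i)
        u0 = base_u0 * (1.0 + rng.uniform(-perturb, perturb))  # perturbed IC
        for c in range(cycles_per_run):
            samples.append(engine_cycle(u0, seed=i * 10 + c))
    mean = statistics.mean(samples)
    cov = statistics.stdev(samples) / mean
    return mean, cov

mean, cov = ccv_parallel_perturbation()
print(f"mean metric = {mean:.1f}, COV = {100 * cov:.1f}%")
```

Because each run spans only a few cycles, wall-clock time is set by the longest run rather than by the total cycle count, which is the source of the reported speedup.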
Entanglement-assisted quantum quasicyclic low-density parity-check codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor
2009-03-01
We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many four-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
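The four-cycle condition is easy to test: a length-4 cycle exists in the Tanner graph of a parity-check matrix H exactly when two distinct rows share 1s in two or more columns, i.e. when an off-diagonal entry of H·Hᵀ exceeds 1. A minimal sketch (the matrices are toy examples, not codes from the paper):

```python
import numpy as np

def has_four_cycle(H):
    """True if the Tanner graph of parity-check matrix H contains a
    length-4 cycle: two distinct rows sharing 1s in >= 2 columns."""
    overlap = H @ H.T
    np.fill_diagonal(overlap, 0)       # ignore each row's overlap with itself
    return bool((overlap > 1).any())

# Two rows overlapping in two columns -> girth 4 (the situation that
# dual-containing constructions tend to force):
H_bad = np.array([[1, 1, 0, 1],
                  [1, 1, 1, 0]])
# Rows pairwise overlapping in at most one column -> no 4-cycles:
H_good = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 1],
                   [1, 1, 0, 0]])
print(has_four_cycle(H_bad), has_four_cycle(H_good))   # True False
```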
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance, and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
Implementation of Energy Code Controls Requirements in New Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike
Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement, and verification is beyond the expertise of most building code officials, yet studies that measure the savings from energy codes assume that the requirements are implemented and working correctly. The objective of the current research is to evaluate the degree to which high-impact controls requirements included in commercial energy codes are properly designed, commissioned, and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy-code-required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code-required control measures are being designed, commissioned, correctly implemented, and functioning in new buildings. The third step involved compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.
An Object Oriented Analysis Method for Ada and Embedded Systems
1989-12-01
expansion of the paradigm from the coding and designing activities into the earlier activity of requirements analysis. This chapter begins by discussing the application of... response time: 0.1 seconds. Step 1e: Identify Known Restrictions on the Software. The cruise control system object code must fit within 16K of memory... application of object-oriented techniques to the coding and design phases of the life cycle, as well as various approaches to requirements analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sesonske, A.
1980-08-01
Detailed core management arrangements are developed requiring four operating cycles for the transition from the present three-batch loading to an extended-burnup four-batch plan for Zion-1. The ARMP code EPRI-NODE-P was used for core modeling. Although this work is preliminary, uranium and economic savings during the transition cycles appear to be on the order of 6 percent.
NASA Technical Reports Server (NTRS)
Walton, J. T.
1994-01-01
The development of a single-stage-to-orbit aerospace vehicle intended to be launched horizontally into low Earth orbit, such as the National Aero-Space Plane (NASP), has concentrated on the use of the supersonic combustion ramjet (scramjet) propulsion cycle. SRGULL, a scramjet cycle analysis code, is an engineer's tool capable of nose-to-tail, hydrogen-fueled, airframe-integrated scramjet simulation in a real gas flow with equilibrium thermodynamic properties. This program facilitates initial estimates of scramjet cycle performance by linking a two-dimensional forebody, inlet, and nozzle code with a one-dimensional combustor code. Five computer codes (SCRAM, SEAGUL, INLET, Program HUD, and GASH), originally developed at NASA Langley Research Center in support of hypersonic technology, are integrated in this program to analyze changing flow conditions. The one-dimensional combustor code is based on the combustor subroutine from SCRAM, and the two-dimensional coding is based on an inviscid Euler program (SEAGUL). Kinetic energy efficiency input for sidewall area variation modeling can be calculated by the INLET program code. At the completion of inviscid component analysis, Program HUD, an integral boundary layer code based on the Spalding-Chi method, is applied to determine the friction coefficient, which is then used in a modified Reynolds analogy to calculate heat transfer. Real gas flow properties such as flow composition, enthalpy, entropy, and density are calculated by the subroutine GASH. Combustor input conditions are taken by one-dimensionalizing the two-dimensional inlet exit flow. The SEAGUL portions of this program are limited to supersonic flows, but the combustor (SCRAM) section can handle supersonic and dual-mode operation. SRGULL has been compared to scramjet engine tests with excellent results. SRGULL was written in FORTRAN 77 on an IBM PC compatible using IBM's FORTRAN/2 or Microway's NDP386 F77 compiler.
The program is fully user interactive, but can also run in batch mode. It operates under the UNIX, VMS, NOS, and DOS operating systems. The source code is not directly compatible with all PC compilers (e.g., Lahey or Microsoft FORTRAN) due to block and segment size requirements. The SRGULL executable code requires about 490K of RAM and a math coprocessor on PCs. The SRGULL program was developed in 1989, although the component programs originated in the 1960s and 1970s. IBM, IBM PC, and DOS are registered trademarks of International Business Machines. VMS is a registered trademark of Digital Equipment Corporation. UNIX is a registered trademark of Bell Laboratories. NOS is a registered trademark of Control Data Corporation.
Report on SNL RCBC control options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, R.; Vilim, R. B.
The attractive performance of the S-CO2 recompression cycle arises from the thermo-physical properties of carbon dioxide near the critical point. However, to ensure efficient operation of the cycle near the critical point, precise control of the heat removal rate by the Printed Circuit Heat Exchanger (PCHE) upstream of the main compressor is required. Accomplishing this task is not trivial because of the large variations in fluid properties with respect to temperature and pressure near the critical point. The use of a model-based approach for the design of a robust feedback regulator is being investigated to achieve acceptable control of heat removal rate at different operating conditions. A first step in this procedure is the development of a dynamic model of the heat exchanger. In this work, a one-dimensional (1-D) control-oriented model of the PCHE was developed using the General Plant Analyzer and System Simulator (GPASS) code. GPASS is a transient simulation code that supports analysis and control of power conversion cycles based on the S-CO2 Brayton cycle. This modeling capability was used this fiscal year to analyze experimental data obtained from the heat exchanger in the SNL recompression Brayton cycle. The analysis suggested that the error in the water flowrate measurement was greater than required for achieving precise control of heat removal rate. Accordingly, a new water flowmeter was installed, significantly improving the quality of the measurement. Comparison of heat exchanger measurements in subsequent experiments with code simulations yielded good agreement, establishing a reliable basis for the use of the GPASS PCHE model for future development of a model-based feedback controller.
Support for life-cycle product reuse in NASA's SSE
NASA Technical Reports Server (NTRS)
Shotton, Charles
1989-01-01
The Software Support Environment (SSE) is a software factory for the production of Space Station Freedom Program operational software. The SSE is to be centrally developed and maintained and used to configure software production facilities in the field. The PRC product TTCQF provides for an automated qualification process and analysis of existing code that can be used for software reuse. The interrogation subsystem permits user queries of the reusable data and components which have been identified by an analyzer and qualified with associated metrics. The concept includes reuse of non-code life-cycle components such as requirements and designs. Possible types of reusable life-cycle components include templates, generics, and as-is items. Qualification of reusable elements requires analysis (separation of candidate components into primitives), qualification (evaluation of primitives for reusability according to reusability criteria) and loading (placing qualified elements into appropriate libraries). There can be different qualifications for different installations, methodologies, applications and components. Identifying reusable software and related components is labor-intensive and is best carried out as an integrated function of an SSE.
Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S.
2002-07-01
As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches, so the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. A Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined through a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke an automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step. (authors)
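A minimal sketch of the idea, with toy stand-ins for both codes and a hypothetical reactivity-change trigger (the paper's actual criteria are more elaborate): the neutronics update is skipped on most thermal-hydraulic steps (the Leap Frog), but performed on every sub-step when a sudden reactivity change is detected:

```python
def run(n_steps=20, dt=0.5, leap=4, drho_max=0.1, n_sub=10):
    """Leap Frog coupling with automatic time-step sub-cycling (toy sketch)."""
    rho = lambda t: 0.0 if t < 5.0 else 0.5   # imposed reactivity step at t = 5 s
    power = 1.0
    neutronics_calls = 0
    for k in range(n_steps):
        t0, t1 = k * dt, (k + 1) * dt
        # (a real T/H code would advance its own solution over [t0, t1] here)
        if abs(rho(t1) - rho(t0)) > drho_max:
            # sudden reactivity change: sub-cycle, updating neutronics each sub-step
            for j in range(1, n_sub + 1):
                power = 1.0 + 10.0 * rho(t0 + j * (t1 - t0) / n_sub)
                neutronics_calls += 1
        elif (k + 1) % leap == 0:
            # Leap Frog: update neutronics only every `leap` T/H steps
            power = 1.0 + 10.0 * rho(t1)
            neutronics_calls += 1
    return power, neutronics_calls

power, neutronics_calls = run()
print(power, neutronics_calls)   # 15 neutronics calls instead of 20
```

The decoupled steps buy the run-time reduction; the sub-cycling around the reactivity step preserves accuracy exactly where the large time step would otherwise lose it.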
Coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ɛ^-(d-1) error correction cycles. Here ɛ << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
National Combustion Code Parallel Performance Enhancements
NASA Technical Reports Server (NTRS)
Quealy, Angela; Benyo, Theresa (Technical Monitor)
2002-01-01
The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.
Methodology for Evaluating Cost-effectiveness of Commercial Energy Code Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Liu, Bing
This document lays out the U.S. Department of Energy’s (DOE’s) method for evaluating the cost-effectiveness of energy code proposals and editions. The evaluation is applied to provisions or editions of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 and the International Energy Conservation Code (IECC). The method follows standard life-cycle cost (LCC) economic analysis procedures. Cost-effectiveness evaluation requires three steps: 1) evaluating the energy and energy cost savings of code changes, 2) evaluating the incremental and replacement costs related to the changes, and 3) determining the cost-effectiveness of energy code changes based on those costs and savings over time.
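Step 3 reduces to a standard present-value comparison. A sketch with illustrative inputs (the 30-year period, 3% discount rate, and dollar figures are assumptions for the example, not DOE's published parameters):

```python
def present_value(annual_saving, years, rate):
    """Present value of a uniform annual saving (standard annuity factor)."""
    return annual_saving * (1 - (1 + rate) ** -years) / rate

def code_change_cost_effective(annual_energy_cost_saving, incremental_cost,
                               years=30, rate=0.03):
    """A change is cost-effective when the discounted energy cost savings
    (step 1) exceed the incremental first cost (step 2)."""
    pv = present_value(annual_energy_cost_saving, years, rate)
    return pv - incremental_cost, pv >= incremental_cost

# Illustrative change: $120/yr energy cost savings against a $1500 first cost.
net, ok = code_change_cost_effective(120.0, 1500.0)
print(round(net, 2), ok)
```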
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael; Jonlin, Duane; Nadel, Steven
Today’s building energy codes focus on prescriptive requirements for features of buildings that are directly controlled by the design and construction teams and verifiable by municipal inspectors. Although these code requirements have had a significant impact, they fail to influence a large slice of the building energy use pie – including not only miscellaneous plug loads, cooking equipment, and commercial/industrial processes, but the maintenance and optimization of the code-mandated systems as well. Currently, code compliance is verified only through the end of construction, and there are no limits or consequences for the actual energy use in an occupied building. In the future, our suite of energy regulations will likely expand to include building efficiency, energy use, or carbon emission budgets for buildings over their full life cycle. Intelligent building systems, extensive renewable energy, and a transition from fossil fuel to electric heating systems will likely be required to meet ultra-low-energy targets. This paper lays out the authors’ perspectives on how buildings may evolve over the course of the 21st century and the roles that codes and regulations will play in shaping those buildings of the future.
Analysis and specification tools in relation to the APSE
NASA Technical Reports Server (NTRS)
Hendricks, John W.
1986-01-01
Ada and the Ada Programming Support Environment (APSE) specifically address the phases of the system/software life cycle which follow after the user's problem has been translated into system and software development specifications. The waterfall model of the life cycle identifies the analysis and requirements definition phases as preceding program design and coding. Since Ada is a programming language and the APSE is a programming support environment, they are primarily targeted to support program (code) development, testing, and maintenance. The use of Ada-based or Ada-related specification languages (SLs) and program design languages (PDLs) can extend the use of Ada back into the software design phases of the life cycle. Recall that the standardization of the APSE as a programming support environment is only now happening after many years of evolutionary experience with diverse sets of programming support tools. Restricting consideration to one, or even a few, chosen specification and design tools could be a real mistake for an organization or a major project such as the Space Station, which will need to deal with an increasingly complex level of system problems. To require that everything be Ada-like, be implemented in Ada, run directly under the APSE, and fit into a rigid waterfall model of the life cycle would turn a promising support environment into a straitjacket for progress.
Development of a Benchmark Example for Delamination Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2010-01-01
The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging, but further assessment for mixed-mode delamination is required.
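Cycle counts per growth increment in such analyses typically come from integrating a Paris-type growth law, da/dN = C·G(a)^m. A sketch with wholly hypothetical constants and a toy energy release rate curve G(a):

```python
def cycles_for_growth(a0, a1, C, m, G_of_a, n_sub=1000):
    """Number of load cycles to grow the delamination from length a0 to a1,
    integrating dN = da / (C * G(a)**m) with the midpoint rule."""
    da = (a1 - a0) / n_sub
    total = 0.0
    for i in range(n_sub):
        a_mid = a0 + (i + 0.5) * da
        total += da / (C * G_of_a(a_mid) ** m)
    return total

# Toy DCB-like case: energy release rate falling with delamination length
# under constant applied displacement. All constants are hypothetical,
# not the benchmark's calibrated values.
G = lambda a: 200.0 / a ** 2     # G in J/m^2, a in mm
N = cycles_for_growth(30.0, 31.0, C=1e-2, m=3.0, G_of_a=G)
print(round(N))   # cycles accumulated over a 1 mm growth increment
```

A benchmark comparison would tabulate such per-increment cycle counts from the reference solution and check them against the commercial code's predictions increment by increment.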
Cheng, Chao; Ung, Matthew; Grant, Gavin D.; Whitfield, Michael L.
2013-01-01
The cell cycle is a complex and highly supervised process that must proceed with regulatory precision to achieve successful cellular division. Despite their wide application, microarray time course experiments have several limitations in identifying cell cycle genes. We thus propose a computational model to predict human cell cycle genes based on transcription factor (TF) binding and regulatory motif information in their promoters. We utilize ENCODE ChIP-seq data and motif information as predictors to discriminate cell cycle genes from non-cell cycle genes. Our results show that both the trans TF features and the cis motif features are predictive of cell cycle genes, and a combination of the two types of features can further improve prediction accuracy. We apply our model to a complete list of GENCODE promoters to predict novel cell cycle driving promoters for both protein-coding genes and non-coding RNAs such as lincRNAs. We find that a similar percentage of lincRNAs are cell cycle regulated as protein-coding genes, suggesting the importance of non-coding RNAs in cell division. The model we propose here provides not only a practical tool for identifying novel cell cycle genes with high accuracy, but also new insights into cell cycle regulation by TFs and cis-regulatory elements. PMID:23874175
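The two-feature-class prediction can be sketched as a logistic regression on synthetic data standing in for ChIP-seq and motif scores (the paper's actual features and classifier details are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: each promoter gets one trans (TF-binding) and one cis
# (motif) feature; cell cycle genes (y = 1) are enriched for both.
n = 400
y = rng.integers(0, 2, n)
tf_feature = rng.normal(y * 1.0, 1.0)      # would be a ChIP-seq signal
motif_feature = rng.normal(y * 1.0, 1.0)   # would be a motif match score
X = np.column_stack([np.ones(n), tf_feature, motif_feature])

def fit_logistic(X, y, lr=0.1, iters=2000):
    """Plain gradient-ascent logistic regression."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)   # gradient of the log-likelihood
    return w

w = fit_logistic(X, y)
accuracy = float((((1.0 / (1.0 + np.exp(-X @ w))) > 0.5) == y).mean())
print(f"training accuracy with both feature types: {accuracy:.2f}")
```

Dropping either column of X and refitting would show each feature class is individually predictive but weaker than the combination, mirroring the paper's trans/cis comparison.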
Deterministically estimated fission source distributions for Monte Carlo k-eigenvalue problems
Biondo, Elliott D.; Davidson, Gregory G.; Pandya, Tara M.; ...
2018-04-30
The standard Monte Carlo (MC) k-eigenvalue algorithm involves iteratively converging the fission source distribution using a series of potentially time-consuming inactive cycles before quantities of interest can be tallied. One strategy for reducing the computational time requirements of these inactive cycles is the Sourcerer method, in which a deterministic eigenvalue calculation is performed to obtain an improved initial guess for the fission source distribution. This method has been implemented in the Exnihilo software suite within SCALE using the SPN or SN solvers in Denovo and the Shift MC code. The efficacy of this method is assessed with different Denovo solution parameters for a series of typical k-eigenvalue problems including small criticality benchmarks, full-core reactors, and a fuel cask. Here it is found that, in most cases, when a large number of histories per cycle are required to obtain a detailed flux distribution, the Sourcerer method can be used to reduce the computational time requirements of the inactive cycles.
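The effect of a better starting source can be illustrated with deterministic power iteration on a toy fission matrix: starting from an approximate dominant eigenvector (mimicking a coarse deterministic solve) converges in fewer "inactive cycles" than the conventional flat guess. All matrices and tolerances below are illustrative:

```python
import numpy as np

def inactive_cycles_needed(A, s0, tol=1e-6, max_iters=1000):
    """Iterate the fission source s <- A s (renormalized) until the source
    distribution stops changing; return the number of cycles used."""
    s = s0 / s0.sum()
    for i in range(1, max_iters + 1):
        s_new = A @ s
        s_new /= s_new.sum()
        if np.abs(s_new - s).max() < tol:
            return i
        s = s_new
    return max_iters

# Toy fission matrix with a dominant mode (stand-in for the operator a real
# MC code effectively iterates on).
rng = np.random.default_rng(1)
A = rng.random((50, 50)) + np.diag(np.linspace(1.0, 3.0, 50))

flat = np.ones(50)   # conventional flat initial source guess
# "Deterministic" guess: dominant eigenvector plus a little model error,
# mimicking a coarse SN/SPN solve feeding the MC code.
evals, evecs = np.linalg.eig(A)
det_guess = np.abs(evecs[:, np.argmax(evals.real)].real) + 1e-3

n_flat = inactive_cycles_needed(A, flat)
n_det = inactive_cycles_needed(A, det_guess)
print(n_flat, n_det)   # the deterministic guess needs fewer inactive cycles
```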
Small non-coding RNAs in streptomycetes.
Heueis, Nona; Vockenhuber, Michael-Paul; Suess, Beatrix
2014-01-01
Streptomycetes are Gram-positive, GC-rich, soil-dwelling bacteria occurring ubiquitously throughout nature. They undergo extensive morphological changes from spores to filamentous mycelia and produce a plethora of secondary metabolites. Owing to their complex life cycle, streptomycetes require efficient regulatory machinery for the control of gene expression and therefore possess a large diversity of regulators. Within this review we summarize the current knowledge about the importance of small non-coding RNAs for the control of gene expression in these organisms.
L3.PHI.CTF.P10.02-rev2 Coupling of Subchannel T/H (CTF) and CRUD Chemistry (MAMBA1D)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K.; Palmtag, Scott; Collins, Benjamin S.
2015-05-15
The purpose of this milestone is to create a preliminary capability for modeling light water reactor (LWR) thermal-hydraulic (T/H) behavior and CRUD growth using the CTF subchannel code and the subgrid version of the MAMBA CRUD chemistry code, MAMBA1D. In part, this is a follow-on to Milestone L3.PHI.VCS.P9.01, which is documented in Report CASL-U-2014-0188-000, titled "Development of CTF Capability for Modeling Reactor Operating Cycles with Crud Growth". As the title suggests, the previous milestone set up a framework for modeling reactor operating cycles with CTF. The framework also facilitated coupling to a CRUD chemistry capability for modeling CRUD growth throughout the reactor operating cycle. To demonstrate the capability, a simple CRUD "surrogate" tool was developed and coupled to CTF; however, it was noted that CRUD growth predictions by the surrogate were not considered realistic. This milestone builds on L3.PHI.VCS.P9.01 by replacing this simple surrogate tool with the more advanced MAMBA1D CRUD chemistry code. Completing this task involves addressing unresolved tasks from Milestone L3.PHI.VCS.P9.01, setting up an interface to MAMBA1D, and extracting new T/H information from CTF that was not previously required by the simple surrogate tool. Specific challenges encountered during this milestone include (1) treatment of the CRUD erosion model, which requires local turbulent kinetic energy (TKE), a value that CTF does not calculate, and (2) treatment of the MAMBA1D CRUD chimney boiling model in the CTF rod heat transfer solution. To demonstrate this new T/H and CRUD modeling capability, two sets of simulations were performed: (1) an 18-month cycle simulation of a quarter-symmetry model of Watts Bar and (2) a simulation of Assemblies G69 and G70 from Seabrook Cycle 5. The Watts Bar simulation is merely a demonstration of the capability.
The simulation of the Seabrook cycle, which had experienced CRUD-related fuel rod failures, had actual CRUD-scrape data to compare with results. As the results show, the initial CTF/MAMBA1D-predicted CRUD thicknesses were about half of their expected values, so further investigation will be required for this simulation.
NASA Technical Reports Server (NTRS)
Sapyta, Joe; Reid, Hank; Walton, Lew
1993-01-01
The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal-hydraulic computer codes; capabilities for PBR/reactor application; thermal-hydraulic code limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moisseytsev, A.; Sienicki, J. J.
2011-04-12
The analysis of specific control strategies and dynamic behavior of the supercritical carbon dioxide (S-CO{sub 2}) Brayton cycle has been extended to the two reactor types selected for continued development under the Generation IV Nuclear Energy Systems Initiative; namely, the Very High Temperature Reactor (VHTR) and the Sodium-Cooled Fast Reactor (SFR). Direct application of the standard S-CO{sub 2} recompression cycle to the VHTR was found to be challenging because of the mismatch in the temperature drop of the He gaseous reactor coolant through the He-to-CO{sub 2} reactor heat exchanger (RHX) versus the temperature rise of the CO{sub 2} through themore » RHX. The reference VHTR features a large temperature drop of 450 C between the assumed core outlet and inlet temperatures of 850 and 400 C, respectively. This large temperature difference is an essential feature of the VHTR enabling a lower He flow rate reducing the required core velocities and pressure drop. In contrast, the standard recompression S-CO{sub 2} cycle wants to operate with a temperature rise through the RHX of about 150 C reflecting the temperature drop as the CO{sub 2} expands from 20 MPa to 7.4 MPa in the turbine and the fact that the cycle is highly recuperated such that the CO{sub 2} entering the RHX is effectively preheated. Because of this mismatch, direct application of the standard recompression cycle results in a relatively poor cycle efficiency of 44.9%. However, two approaches have been identified by which the S-CO{sub 2} cycle can be successfully adapted to the VHTR and the benefits of the S-CO{sub 2} cycle, especially a significant gain in cycle efficiency, can be realized. The first approach involves the use of three separate cascaded S-CO{sub 2} cycles. Each S-CO{sub 2} cycle is coupled to the VHTR through its own He-to-CO{sub 2} RHX in which the He temperature is reduced by 150 C. 
The three cycles have efficiencies of 54, 50, and 44%, respectively, resulting in a net cycle efficiency of 49.3%. The other approach involves reducing the minimum cycle pressure significantly below the critical pressure such that the temperature drop in the turbine is increased, while the minimum cycle temperature is maintained above the critical temperature to prevent the formation of a liquid phase. The latter approach also involves the addition of a precooler and a third compressor before the main compressor to retain the benefits of compression near the critical point with the main compressor. For a minimum cycle pressure of 1 MPa, a cycle efficiency of 49.5% is achieved. Either approach opens the door to applying the S-CO2 cycle to the VHTR. In contrast, the SFR system typically has a core outlet-inlet temperature difference of about 150 C, such that the standard recompression cycle is ideally suited for direct application to the SFR. The ANL Plant Dynamics Code has been modified for application to the VHTR and SFR when the reactor-side dynamic behavior is calculated with another system-level computer code, such as SAS4A/SASSYS-1 in the SFR case. The key modification involves modeling heat exchange in the RHX, accepting time-dependent tabular input from the reactor code, and generating time-dependent tabular input to the reactor code such that both the reactor and S-CO2 cycle sides can be calculated in a convergent iterative scheme. This approach retains the modeling benefits provided by the detailed reactor system-level code and can be applied to any reactor system type incorporating an S-CO2 cycle. This approach was applied to the particular calculation of a scram scenario for an SFR in which the main and intermediate sodium pumps are not tripped and the generator is not disconnected from the electrical grid, in order to enhance heat removal from the reactor system, thereby enhancing the cooldown rate of the Na-to-CO2 RHX.
The reactor side is calculated with SAS4A/SASSYS-1 while the S-CO2 cycle is calculated with the Plant Dynamics Code, with a number of iterations over a timescale of 500 seconds. It is found that the RHX undergoes a maximum cooldown rate of approximately -0.3 C/s. The Plant Dynamics Code was also modified to decrease its running time by replacing the compressible flow form of the momentum equation with an incompressible flow equation for use inside the cooler and recuperators, where the CO2 has a compressibility similar to that of a liquid. Appendices provide a quasi-static control strategy for an SFR as well as the self-adaptive linear function fitting algorithm developed to produce the tabular data for input to the reactor code and Plant Dynamics Code from the detailed output of the other code.
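The net efficiency of the three cascaded cycles follows from a heat-weighted average of the stage efficiencies. A minimal sketch, assuming each cascade stage absorbs an equal third of the reactor heat (an assumption implied by the equal 150 C He temperature drop per stage and a roughly constant He specific heat):

```python
# Heat-weighted net efficiency of three cascaded S-CO2 cycles.
# Equal heat fractions per stage are an assumption of this sketch.
stage_efficiencies = [0.54, 0.50, 0.44]
heat_fractions = [1 / 3, 1 / 3, 1 / 3]

net_efficiency = sum(eta * q for eta, q in zip(stage_efficiencies, heat_fractions))
print(f"net cycle efficiency: {net_efficiency:.1%}")  # 49.3%, matching the abstract
```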
Standardized verification of fuel cycle modeling
Feng, B.; Dixon, B.; Sunny, E.; ...
2016-04-05
A nuclear fuel cycle systems modeling and code-to-code comparison effort was coordinated across multiple national laboratories to verify the tools needed to perform fuel cycle analyses of the transition from a once-through nuclear fuel cycle to a sustainable potential future fuel cycle. For this verification study, a simplified example transition scenario was developed to serve as a test case for the four systems codes involved (DYMOND, VISION, ORION, and MARKAL), each used by a different laboratory participant. In addition, all participants produced spreadsheet solutions for the test case to check all the mass flows and reactor/facility profiles on a year-by-year basis throughout the simulation period. The test case specifications describe a transition from the current US fleet of light water reactors to a future fleet of sodium-cooled fast reactors that continuously recycle transuranic elements as fuel. After several initial coordinated modeling and calculation attempts, it was revealed that most of the differences in code results were not due to different code algorithms or calculation approaches, but due to different interpretations of the input specifications among the analysts. Therefore, the specifications for the test case itself were iteratively updated to remove ambiguity and to help calibrate interpretations. In addition, a few corrections and modifications were made to the codes as well, which led to excellent agreement between all codes and spreadsheets for this test case. Although no fuel cycle transition analysis codes matched the spreadsheet results exactly, all remaining differences in the results were due to fundamental differences in code structure and/or were thoroughly explained. As a result, the specifications and example results are provided so that they can be used to verify additional codes in the future for such fuel cycle transition scenarios.
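The spreadsheet checks described above amount to year-by-year conservation bookkeeping. A toy sketch of such a transition tally (fleet sizes, dates, and the retirement rate are invented, not benchmark values):

```python
# Toy year-by-year fleet transition: LWRs retire on a fixed schedule and
# are replaced one-for-one by SFRs. The spreadsheet-style verification
# is that total capacity balances in every year of the simulation.
lwr, sfr = 100, 0                 # illustrative starting fleet
history = []
for year in range(2030, 2051):
    retired = min(5, lwr)         # illustrative retirement rate per year
    lwr -= retired
    sfr += retired
    history.append((year, lwr, sfr))

# verification pass: the balance must close in every simulated year
assert all(l + s == 100 for _, l, s in history)
```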
Dry Air Cooler Modeling for Supercritical Carbon Dioxide Brayton Cycle Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moisseytsev, A.; Sienicki, J. J.; Lv, Q.
Modeling for commercially available and cost-effective dry air coolers, such as those manufactured by Harsco Industries, has been implemented in the Argonne National Laboratory Plant Dynamics Code for system-level dynamic analysis of supercritical carbon dioxide (sCO2) Brayton cycles. The modeling can now be utilized to optimize and simulate sCO2 Brayton cycles with dry air cooling, whereby heat is rejected directly to the atmospheric heat sink without the need for cooling towers that require makeup water for evaporative losses. It has sometimes been stated that a benefit of the sCO2 Brayton cycle is that it enables dry air cooling, implying that the Rankine steam cycle does not. A preliminary and simple examination of a Rankine superheated steam cycle and an air-cooled condenser indicates that dry air cooling can be utilized with both cycles provided that the cycle conditions are selected appropriately.
microRNAs of parasites: current status and future perspectives
USDA-ARS?s Scientific Manuscript database
MicroRNAs (miRNAs) are a class of endogenous non-coding small RNAs regulating gene expression in eukaryotes at the post-transcriptional level. The complex life cycles of parasites may require the ability to respond to environmental and developmental signals through miRNA-mediated gene expression. Ov...
The Use of a Pseudo Noise Code for DIAL Lidar
NASA Technical Reports Server (NTRS)
Burris, John F.
2010-01-01
Retrievals of CO2 profiles within the planetary boundary layer (PBL) are required to understand CO2 transport over regional scales and for validating future spaceborne CO2 remote sensing instruments, such as the CO2 Laser Sounder for the ASCENDS mission. We report the use of a return-to-zero (RZ) pseudo noise (PN) code modulation technique for making range-resolved measurements of CO2 within the PBL using commercial, off-the-shelf components. Conventional range-resolved measurements require laser pulse widths that are shorter than the desired spatial resolution and pulse spacing such that returns from only a single pulse are observed by the receiver at one time (for the PBL, pulse separations must be greater than approximately 2000 m). This imposes a serious limitation when using available fiber lasers because of the resulting low duty cycle (less than 0.001) and consequent low average laser output power. RZ PN code modulation enables a fiber laser to operate at much higher duty cycles (approaching 0.1), thereby more effectively utilizing the amplifier's output. This results in an increase in received counts by approximately two orders of magnitude. The approach involves employing two back-to-back CW fiber amplifiers seeded at the appropriate on- and offline CO2 wavelengths (approximately 1572 nm) using distributed feedback diode lasers modulated by a PN code at rates significantly above 1 megahertz. An assessment of the technique, discussions of measurement precision and error sources, as well as preliminary data will be presented.
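The duty-cycle gain from RZ PN-code modulation can be illustrated with a short sketch. The 3-bit register, its taps, and the 20% intra-chip pulse width below are illustrative choices, not the instrument's actual parameters:

```python
# Generate a 7-chip maximal-length PN sequence (3-bit Fibonacci LFSR,
# primitive polynomial x^3 + x + 1), then build a return-to-zero (RZ)
# waveform in which each '1' chip is high for only 20% of the chip period.
def m_sequence(nbits=3):
    state = [1] * nbits
    seq = []
    for _ in range(2 ** nbits - 1):
        seq.append(state[-1])
        feedback = state[0] ^ state[-1]      # taps for x^3 + x + 1
        state = [feedback] + state[:-1]
    return seq

chips = m_sequence()                         # [1, 1, 1, 0, 1, 0, 0]
samples_per_chip, pulse_samples = 10, 2      # RZ pulse: 20% of each chip
waveform = []
for chip in chips:
    waveform += [chip] * pulse_samples + [0] * (samples_per_chip - pulse_samples)

duty_cycle = sum(waveform) / len(waveform)   # ~0.11, vs ~0.5 for NRZ chips
```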
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1980-01-01
The computational techniques are described which are utilized at Lewis Research Center to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements. Cycle performance and engine weight can be calculated, along with costs and installation effects, as opposed to fuel consumption alone. Almost any conceivable turbine engine cycle can be studied. These computer codes are: NNEP, WATE, LIFCYC, INSTAL, and POD DRG. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
NASA Astrophysics Data System (ADS)
Ortega, Jesus Daniel
This work focuses on the development of a solar power thermal receiver for a supercritical carbon dioxide (sCO2) Brayton power cycle to produce ~1 MWe. Closed-loop sCO2 Brayton cycles are being evaluated in combination with concentrating solar power to provide higher thermal-to-electric conversion efficiencies relative to conventional steam Rankine cycles. High temperatures (923-973 K) and pressures (20-25 MPa) are required in the solar receiver to achieve thermal efficiencies of ~50%, making concentrating solar power (CSP) technologies a competitive alternative to current power generation methods. In this study, the CSP receiver is required to achieve an outlet temperature of 923 K at 25 MPa or 973 K at 20 MPa to meet the operating needs. To select a compatible receiver tube material, an extensive material review was performed based on the ASME Boiler and Pressure Vessel Code and the ASME B31.1 and B31.3 piping codes. Subsequently, a thermal-structural model was developed using commercial computational fluid dynamics (CFD) and structural mechanics software for designing and analyzing the tubular receiver that could provide the heat input for a ~2 MWth plant. These results were used to perform an analytical cumulative damage creep-fatigue analysis to estimate the work-life of the tubes. In sequence, an optical-thermal-fluid model was developed to evaluate the resulting thermal efficiency of the tubular receiver from the NSTTF heliostat field. The ray-tracing tool SolTrace was used to obtain the heat-flux distribution on the surfaces of the receiver. The K-ω SST turbulence model and P-1 radiation model used in Fluent were coupled with SolTrace to provide the heat flux distribution on the receiver surface. The creep-fatigue analysis displays the damage accumulated due to the cycling and the permanent deformation of the tubes; nonetheless, the tubes are able to support the required lifetime.
The receiver surface temperatures were found to be within the safe operational limit while exhibiting a receiver thermal efficiency of ~85%. Future work includes the completion of a cyclic loading analysis to be performed using the Larson-Miller creep model in nCode Design Life to corroborate the structural integrity of the receiver over the desired lifetime of ~10,000 cycles.
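The Larson-Miller correlation mentioned above relates temperature and time to creep rupture via LMP = T(C + log10 t_r). A hedged sketch of a cumulative creep-fatigue damage tally; the constant C = 20, the LMP value, hold time, and fatigue life are generic illustrative numbers, not material data for these receiver tubes:

```python
def rupture_time_hours(T_kelvin, lmp, C=20.0):
    """Invert the Larson-Miller relation LMP = T * (C + log10(t_r))."""
    return 10 ** (lmp / T_kelvin - C)

T = 950.0                    # tube metal temperature, K (illustrative)
lmp_at_stress = 24000.0      # LMP at the operating stress (illustrative)
t_r = rupture_time_hours(T, lmp_at_stress)

n_cycles = 10_000            # target lifetime from the abstract
hold_hours = 8.0             # assumed on-sun hold per cycle
creep_damage = n_cycles * hold_hours / t_r     # Robinson time-fraction rule
fatigue_damage = n_cycles / 50_000             # Miner rule with assumed N_f
total_damage = creep_damage + fatigue_damage   # design target: < 1 with margin
```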
NASA Technical Reports Server (NTRS)
Choo, Y. K.; Staiger, P. J.
1982-01-01
The code was designed to analyze performance at valves-wide-open design flow. The code can model conventional steam cycles as well as cycles that include such special features as process steam extraction and induction and feedwater heating by external heat sources. Convenience features and extensions to the special features were incorporated into the PRESTO code. The features are described, and detailed examples illustrating the use of both the original and the special features are given.
Prospective scenarios of nuclear energy evolution over the 21st century
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massara, S.; Tetart, P.; Garzenne, C.
2006-07-01
In this paper, different world scenarios of nuclear energy development over the 21st century are analyzed by means of the EDF fuel cycle simulation code for nuclear scenario studies, TIRELIRE-STRATEGIE. Three nuclear demand scenarios are considered, and the performance of different nuclear strategies in satisfying these scenarios is analyzed and discussed, focusing on natural uranium consumption and industrial requirements related to the nuclear reactors and the associated fuel cycle facilities. Both thermal-spectrum systems (Pressurized Water Reactor and High Temperature Gas-cooled Reactor) and Fast Reactors are investigated. (authors)
SDTM - SYSTEM DESIGN TRADEOFF MODEL FOR SPACE STATION FREEDOM RELEASE 1.1
NASA Technical Reports Server (NTRS)
Chamberlin, R. G.
1994-01-01
Although extensive knowledge of space station design exists, the information is widely dispersed. The Space Station Freedom Program (SSFP) needs policies and procedures that ensure the use of consistent design objectives throughout its organizational hierarchy. The System Design Tradeoff Model (SDTM) produces information that can be used for this purpose. SDTM is a mathematical model of a set of possible designs for Space Station Freedom. Using the SDTM program, one can find the particular design which provides specified amounts of resources to Freedom's users at the lowest total (or life cycle) cost. One can also compare alternative design concepts by changing the set of possible designs, while holding the specified user services constant, and then comparing costs. Finally, both costs and user services can be varied simultaneously when comparing different designs. SDTM selects its solution from a set of feasible designs. Feasibility constraints include safety considerations, minimum levels of resources required for station users, budget allocation requirements, time limitations, and Congressional mandates. The total, or life cycle, cost includes all of the U.S. costs of the station: design and development, purchase of hardware and software, assembly, and operations throughout its lifetime. The SDTM development team has identified, for a variety of possible space station designs, the subsystems that produce the resources to be modeled. The team has also developed formulas for the cross consumption of resources by other resources, as functions of the amounts of resources produced. SDTM can find the values of station resources, so that subsystem designers can choose new design concepts that further reduce the station's life cycle cost. The fundamental input to SDTM is a set of formulas that describe the subsystems which make up a reference design. Most of the formulas identify how the resources required by each subsystem depend upon the size of the subsystem. 
Some of the formulas describe how the subsystem costs depend on size. The formulas can be complicated and nonlinear (if nonlinearity is needed to describe how designs change with size). SDTM's outputs are amounts of resources, life-cycle costs, and marginal costs. SDTM will run on IBM PC/XTs, ATs, and 100% compatibles with 640K of RAM and at least 3Mb of fixed-disk storage. A printer which can print in 132-column mode is also required, and a mathematics co-processor chip is highly recommended. This code is written in Turbo C 2.0. However, since the developers used a modified version of the proprietary Vitamin C source code library, the complete source code is not available. The executable is provided, along with all non-proprietary source code. This program was developed in 1989.
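The core selection step SDTM performs, namely picking the cheapest feasible design that delivers the required user resources, can be sketched over a toy discrete design set. All design names, resource figures, and costs below are invented:

```python
# Pick the lowest life-cycle-cost design that meets every resource
# requirement. Designs and requirements are illustrative only.
designs = [
    {"name": "A", "power_kw": 75, "crew_hours": 120, "life_cycle_cost": 18.0},
    {"name": "B", "power_kw": 90, "crew_hours": 100, "life_cycle_cost": 21.0},
    {"name": "C", "power_kw": 90, "crew_hours": 130, "life_cycle_cost": 19.5},
]
required = {"power_kw": 80, "crew_hours": 110}

feasible = [d for d in designs
            if all(d[k] >= need for k, need in required.items())]
best = min(feasible, key=lambda d: d["life_cycle_cost"])   # design "C"
```

Real SDTM runs evaluate nonlinear sizing formulas instead of a fixed table, but the feasibility-filter-then-minimize structure is the same.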
Revenue cycle management, Part II.
Crew, Matt
2007-01-01
The proper management of your revenue cycle requires the application of "best practices" and the continual monitoring and measuring of the entire cycle. The correct technology will enable you to gain the insight and efficiencies needed in the ever-changing healthcare economy. The revenue cycle is a process that begins when you negotiate payor contracts, set fees, and schedule appointments and continues until claims are paid in full. Every single step in the cycle carries equal importance. Monitoring all phases and a commitment to continually communicating the results will allow you to achieve unparalleled success. In part I of this article, we explored the importance of contracting, scheduling, and case management as well as coding and clinical documentation. We will now take a closer look at the benefits charge capture, claim submission, payment posting, accounts receivable follow-up, and reporting can mean to your practice.
User's manual for PRESTO: A computer code for the performance of regenerative steam turbine cycles
NASA Technical Reports Server (NTRS)
Fuller, L. C.; Stovall, T. K.
1979-01-01
Standard turbine cycles for baseload power plants and cycles with such additional features as process steam extraction and induction and feedwater heating by external heat sources may be modeled. Peaking and high back pressure cycles are also included. The code's methodology is to use the expansion line efficiencies, exhaust loss, leakages, mechanical losses, and generator losses to calculate the heat rate and generator output. A general description of the code is given as well as the instructions for input data preparation. Appended are two complete example cases.
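The bottom-line quantities PRESTO reports, generator output net of losses and the resulting heat rate, reduce to simple arithmetic once the loss models have been evaluated. A back-of-envelope sketch with invented numbers, not PRESTO's actual loss correlations:

```python
# Heat rate = cycle heat input per unit of electrical output (Btu/kWh).
heat_input_btu_per_hr = 7.0e9     # illustrative boiler heat input
gross_generator_kw = 8.0e5        # illustrative gross generator output
mechanical_losses_kw = 5.0e3      # illustrative mechanical losses
generator_losses_kw = 1.0e4       # illustrative generator losses

net_output_kw = gross_generator_kw - mechanical_losses_kw - generator_losses_kw
heat_rate = heat_input_btu_per_hr / net_output_kw   # ~8900 Btu/kWh here
```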
ABSIM. Simulation of Absorption Systems in Flexible and Modular Form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grossman, G.
1994-06-01
The computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components. When all the equations have been established, a mathematical solver routine is employed to solve them simultaneously. Property subroutines contained in a separate database serve to provide thermodynamic properties of the working fluids. The code is user-oriented and requires a relatively simple input containing the given operating conditions and the working fluid at each state point. The user conveys to the computer an image of the cycle by specifying the different components and their interconnections. Based on this information, the program calculates the temperature, flowrate, concentration, pressure, and vapor fraction at each state point in the system and the heat duty at each unit, from which the coefficient of performance may be determined. A graphical user interface is provided to facilitate interactive input and study of the output.
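The solution strategy described above, where each unit subroutine contributes residual equations and a solver drives them all to zero simultaneously, can be sketched with two invented toy components sharing two unknowns:

```python
# Two toy "unit subroutines" contribute residuals in the state-point
# unknowns x = (T_out, m_dot); both equations are invented for illustration.
def residuals(x):
    T, m = x
    r1 = m * 4.2 * (T - 300.0) - 50.0     # toy heat-exchanger energy balance
    r2 = m - 0.5 - 0.001 * (T - 300.0)    # toy flow/valve characteristic
    return [r1, r2]

def solve(x0, tol=1e-10, h=1e-6):
    """Newton iteration on a 2x2 system with a finite-difference Jacobian."""
    x = list(x0)
    for _ in range(100):
        r = residuals(x)
        if max(abs(v) for v in r) < tol:
            break
        J = [[0.0, 0.0], [0.0, 0.0]]
        for j in range(2):                 # finite-difference Jacobian columns
            xp = list(x)
            xp[j] += h
            rp = residuals(xp)
            for i in range(2):
                J[i][j] = (rp[i] - r[i]) / h
        det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
        dx0 = (-r[0] * J[1][1] + r[1] * J[0][1]) / det   # Cramer's rule
        dx1 = (-r[1] * J[0][0] + r[0] * J[1][0]) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

T_out, m_dot = solve([310.0, 0.5])
```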
Nuclear data uncertainty propagation by the XSUSA method in the HELIOS2 lattice code
NASA Astrophysics Data System (ADS)
Wemple, Charles; Zwermann, Winfried
2017-09-01
Uncertainty quantification has been extensively applied to nuclear criticality analyses for many years and has recently begun to be applied to depletion calculations. However, regulatory bodies worldwide are trending toward requiring such analyses for reactor fuel cycle calculations, which also requires uncertainty propagation for isotopics and nuclear reaction rates. XSUSA is a proven methodology for cross section uncertainty propagation based on random sampling of the nuclear data according to covariance data in multi-group representation; HELIOS2 is a lattice code widely used for commercial and research reactor fuel cycle calculations. This work describes a technique to automatically propagate the nuclear data uncertainties via the XSUSA approach through fuel lattice calculations in HELIOS2. Application of the XSUSA methodology in HELIOS2 presented some unusual challenges because of the highly-processed multi-group cross section data used in commercial lattice codes. Currently, uncertainties based on the SCALE 6.1 covariance data file are being used, but the implementation can be adapted to other covariance data in multi-group structure. Pin-cell and assembly depletion calculations, based on models described in the UAM-LWR Phase I and II benchmarks, are performed and uncertainties in multiplication factor, reaction rates, isotope concentrations, and delayed-neutron data are calculated. With this extension, it will be possible for HELIOS2 users to propagate nuclear data uncertainties directly from the microscopic cross sections to subsequent core simulations.
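The sampling step at the heart of the XSUSA approach can be sketched in a few lines. For simplicity this toy version perturbs three group cross sections independently (XSUSA itself draws correlated samples from the full covariance matrix), and a stand-in function replaces the HELIOS2 lattice calculation; all numbers are invented:

```python
import random
import statistics

random.seed(1)

nominal_xs = [1.2, 0.9, 0.4]      # barns, 3 energy groups (illustrative)
rel_std = [0.02, 0.03, 0.04]      # relative 1-sigma per group (illustrative)

def sample_xs():
    """Draw one randomly perturbed cross-section set."""
    return [xs * random.gauss(1.0, s) for xs, s in zip(nominal_xs, rel_std)]

def toy_lattice_keff(xs):
    """Stand-in for one lattice-code run: a smooth response to the data."""
    return 1.00 + 0.05 * xs[0] - 0.02 * xs[1] + 0.01 * xs[2]

keff_samples = [toy_lattice_keff(sample_xs()) for _ in range(1000)]
keff_uncertainty = statistics.stdev(keff_samples)   # propagated 1-sigma in k
```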
Greif, Gonzalo; Rodriguez, Matias; Alvarez-Valin, Fernando
2017-01-01
American trypanosomiasis is a chronic and endemic disease which affects millions of people. Trypanosoma cruzi, its causative agent, has a life cycle that involves complex morphological and functional transitions, as well as a variety of environmental conditions. This requires a tight regulation of gene expression, which is achieved mainly by post-transcriptional regulation. In this work we conducted an RNAseq analysis of the three major life cycle stages of T. cruzi: amastigotes, epimastigotes and trypomastigotes. This analysis allowed us to delineate specific transcriptomic profiling for each stage, and also to identify those biological processes of major relevance in each state. Stage-specific expression profiling evidenced the plasticity of T. cruzi to adapt quickly to different conditions, with particular focus on membrane remodeling and metabolic shifts along the life cycle. Epimastigotes, which replicate in the gut of insect vectors, showed higher expression of genes related to energy metabolism, mainly Krebs cycle, respiratory chain and oxidative phosphorylation related genes, and anabolism-related genes associated with nucleotide and steroid biosynthesis; also, a general down-regulation of surface glycoprotein coding genes was seen at this stage. Trypomastigotes, living extracellularly in the bloodstream of mammals, express a plethora of surface proteins and signaling genes involved in invasion and evasion of immune response. Amastigotes mostly express membrane transporters and genes involved in regulation of cell cycle, and also express a specific subset of surface glycoprotein coding genes. In addition, these results allowed us to improve the annotation of the Dm28c genome, identifying new ORFs, and set the stage for construction of networks of co-expression, which can give clues about coded proteins of unknown functions. PMID:28286708
Linearized Aeroelastic Solver Applied to the Flutter Prediction of Real Configurations
NASA Technical Reports Server (NTRS)
Reddy, Tondapu S.; Bakhle, Milind A.
2004-01-01
A fast-running unsteady aerodynamics code, LINFLUX, was previously developed for predicting turbomachinery flutter. This linearized code, based on a frequency domain method, models the effects of steady blade loading through a nonlinear steady flow field. The LINFLUX code, which is 6 to 7 times faster than the corresponding nonlinear time domain code, is suitable for use in the initial design phase. Earlier, this code was verified through application to a research fan, and it was shown that the predictions of work per cycle and flutter compared well with those from a nonlinear time-marching aeroelastic code, TURBO-AE. Now, the LINFLUX code has been applied to real configurations: fans developed under the Energy Efficient Engine (E-cubed) Program and the Quiet Aircraft Technology (QAT) project. The LINFLUX code starts with a steady nonlinear aerodynamic flow field and solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the turbomachinery blades. First, a steady aerodynamic solution is computed for given operating conditions using the nonlinear unsteady aerodynamic code TURBO-AE. A blade vibration analysis is done to determine the frequencies and mode shapes of the vibrating blades, and an interface code is used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor is used to interpolate the mode shapes from the structural dynamics mesh onto the computational fluid dynamics mesh. Then, LINFLUX is used to calculate the unsteady aerodynamic pressure distribution for a given vibration mode, frequency, and interblade phase angle. Finally, a post-processor uses the unsteady pressures to calculate the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping.
Results of flutter calculations from the LINFLUX code are presented for (1) the E-cubed fan developed under the E-cubed program and (2) the Quiet High Speed Fan (QHSF) developed under the Quiet Aircraft Technology project. The results are compared with those obtained from the TURBO-AE code. A graph of the work done per vibration cycle for the first vibration mode of the E-cubed fan is shown. It can be seen that the LINFLUX results show a very good comparison with TURBO-AE results over the entire range of interblade phase angle. The work done per vibration cycle for the first vibration mode of the QHSF fan is shown. Once again, the LINFLUX results compare very well with the results from the TURBO-AE code.
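The flutter criterion in these comparisons is the sign of the aerodynamic work done on the blade per vibration cycle. A single-degree-of-freedom sketch (amplitude, frequency, force, and phase are all invented) shows the integral and its closed form:

```python
import math

omega = 2 * math.pi * 200.0   # vibration frequency, rad/s (illustrative)
amp = 1.0e-3                  # modal amplitude, m (illustrative)
F0, phi = 50.0, 0.3           # unsteady force amplitude, N, and phase (illustrative)

# Integrate F * v over one period; positive net work fed into the blade
# means negative aerodynamic damping, i.e. a flutter-unstable mode.
n = 10_000
dt = (2 * math.pi / omega) / n
work = 0.0
for i in range(n):
    t = i * dt
    v = amp * omega * math.cos(omega * t)      # blade surface velocity
    F = F0 * math.sin(omega * t + phi)         # unsteady aerodynamic force
    work += F * v * dt

# closed form for this 1-DOF case: W = pi * F0 * amp * sin(phi)
```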
THRSTER: A THRee-STream Ejector Ramjet Analysis and Design Tool
NASA Technical Reports Server (NTRS)
Chue, R. S.; Sabean, J.; Tyll, J.; Bakos, R. J.
2000-01-01
An engineering tool for analyzing ejectors in rocket based combined cycle (RBCC) engines has been developed. A key technology for multi-cycle RBCC propulsion systems is the ejector which functions as the compression stage of the ejector ramjet cycle. The THRee STream Ejector Ramjet analysis tool was developed to analyze the complex aerothermodynamic and combustion processes that occur in this device. The formulated model consists of three quasi-one-dimensional streams, one each for the ejector primary flow, the secondary flow, and the mixed region. The model space marches through the mixer, combustor, and nozzle to evaluate the solution along the engine. In its present form, the model is intended for an analysis mode in which the diffusion rates of the primary and secondary into the mixed stream are stipulated. The model offers the ability to analyze the highly two-dimensional ejector flowfield while still benefiting from the simplicity and speed of an engineering tool. To validate the developed code, wall static pressure measurements from the Penn-State and NASA-ART RBCC experiments were used to compare with the results generated by the code. The calculated solutions were generally found to have satisfactory agreement with the pressure measurements along the engines, although further modeling effort may be required when a strong shock train is formed at the rocket exhaust. The range of parameters in which the code would generate valid results are presented and discussed.
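The space-marching structure of the three-stream model can be sketched in a few lines: stipulated fractions of the primary and secondary streams transfer into the mixed stream at each step, with total mass conserved. Step count, flow rates, and the entrainment fraction are invented:

```python
# Minimal quasi-1D space-marching sketch: at each axial step a stipulated
# fraction of the primary and secondary streams diffuses into the mixed
# stream. A real model would also march momentum and energy equations.
n_steps = 100
m_primary, m_secondary, m_mixed = 10.0, 40.0, 0.0   # kg/s (illustrative)
entrain_frac = 0.02   # stipulated diffusion rate per step (illustrative)

for _ in range(n_steps):
    dp = entrain_frac * m_primary
    ds = entrain_frac * m_secondary
    m_primary -= dp
    m_secondary -= ds
    m_mixed += dp + ds

total = m_primary + m_secondary + m_mixed   # mass conserved: 50.0 kg/s
```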
A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.
2004-01-01
The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. The limitation made the design optimization problematic. The deficiencies have been alleviated through use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.
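The surrogate idea, replacing the expensive analysis with a cheap fitted approximation and querying the approximation inside the optimizer, can be sketched in one variable. The quadratic interpolant below stands in for the paper's regression and neural-network approximators, and the "expensive" function is invented:

```python
def expensive_analysis(x):
    """Stand-in for a costly analysis run (a FLOPS call in the paper)."""
    return 3.0 + 2.0 * x + 0.5 * x * x

def surrogate(x, pts):
    """Lagrange quadratic through three sampled (x, y) points."""
    (x0, y0), (x1, y1), (x2, y2) = pts
    return (y0 * (x - x1) * (x - x2) / ((x0 - x1) * (x0 - x2))
            + y1 * (x - x0) * (x - x2) / ((x1 - x0) * (x1 - x2))
            + y2 * (x - x0) * (x - x1) / ((x2 - x0) * (x2 - x1)))

# three "expensive" training runs, then cheap reanalysis everywhere else
pts = [(x, expensive_analysis(x)) for x in (0.0, 2.0, 4.0)]
approx = surrogate(1.5, pts)   # queried by the optimizer instead of the full code
```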
Utilization of recently developed codes for high power Brayton and Rankine cycle power systems
NASA Technical Reports Server (NTRS)
Doherty, Michael P.
1993-01-01
Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited just to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and state future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.
A multi-armed bandit approach to superquantile selection
2017-06-01
Keywords: decision learning, machine learning, intelligence processing, intelligence cycle, quantitative finance. Thesis submitted in partial fulfillment of the requirements for the degree of Master of Science in Operations Research, Naval Postgraduate School, June 2017. Thesis Advisor: Roberto S. Szechtman; Second Reader: Michael P. Atkinson.
Independent rate and temporal coding in hippocampal pyramidal cells.
Huxter, John; Burgess, Neil; O'Keefe, John
2003-10-23
In the brain, hippocampal pyramidal cells use temporal as well as rate coding to signal spatial aspects of the animal's environment or behaviour. The temporal code takes the form of a phase relationship to the concurrent cycle of the hippocampal electroencephalogram theta rhythm. These two codes could each represent a different variable. However, this requires the rate and phase to vary independently, in contrast to recent suggestions that they are tightly coupled, both reflecting the amplitude of the cell's input. Here we show that the time of firing and firing rate are dissociable, and can represent two independent variables: respectively the animal's location within the place field, and its speed of movement through the field. Independent encoding of location together with actions and stimuli occurring there may help to explain the dual roles of the hippocampus in spatial and episodic memory, or may indicate a more general role of the hippocampus in relational/declarative memory.
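The dissociation can be caricatured in a few lines (a toy model with made-up constants, not the paper's analysis): phase depends only on position and rate only on speed, so the two channels carry independent information.

```python
# Toy place cell: spike phase within the theta cycle encodes position in
# the place field, while firing rate encodes running speed.
def place_cell(position, speed):
    """position in [0, 1] across the field; speed in arbitrary units."""
    phase_deg = 360.0 * (1.0 - position)   # phase precession: late -> early
    rate_hz = 5.0 + 20.0 * speed           # rate grows with speed
    return phase_deg, rate_hz

# Same position at two speeds: phase is unchanged, rate is not.
phase_slow, rate_slow = place_cell(0.25, speed=0.2)
phase_fast, rate_fast = place_cell(0.25, speed=1.0)
print(phase_slow == phase_fast, rate_fast > rate_slow)
```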
National Combustion Code: Parallel Implementation and Performance
NASA Technical Reports Server (NTRS)
Quealy, A.; Ryder, R.; Norris, A.; Liu, N.-S.
2000-01-01
The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. CORSAIR-CCD is the current baseline reacting flow solver for NCC. This is a parallel, unstructured grid code which uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC flow solver to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This paper describes the parallel implementation of the NCC flow solver and summarizes its current parallel performance on an SGI Origin 2000. Earlier parallel performance results on an IBM SP-2 are also included. The performance improvements which have enabled a turnaround of less than 15 hours for a 1.3 million element fully reacting combustion simulation are described.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, E.J.; McNeilly, G.S.
The existing National Center for Atmospheric Research (NCAR) code in the Hamburg Oceanic Carbon Cycle Circulation Model and the Hamburg Large-Scale Geostrophic Ocean General Circulation Model was modernized and reduced in size while still producing an equivalent end result. A reduction in the size of the existing code from more than 50,000 lines to approximately 7,500 lines in the new code has made the new code much easier to maintain. The existing code in the Hamburg model uses legacy NCAR graphics (including even emulated CALCOMP subroutines) to display graphical output. The new code uses only current (version 3.1) NCAR subroutines.
The seasonal-cycle climate model
NASA Technical Reports Server (NTRS)
Marx, L.; Randall, D. A.
1981-01-01
The seasonal cycle run, which will become the control run for the comparison with runs utilizing codes and parameterizations developed by outside investigators, is discussed. The climate model currently exists in two parallel versions: one running on the Amdahl and the other running on the CYBER 203. These two versions are as nearly identical as machine capability and the requirement for high speed performance will allow. Developmental changes are made on the Amdahl/CMS version for ease of testing and rapidity of turnaround. The changes are subsequently incorporated into the CYBER 203 version using vectorization techniques where speed improvement can be realized. The 400 day seasonal cycle run serves as a control run for both medium and long range climate forecasts as well as sensitivity studies.
An automatic editing algorithm for GPS data
NASA Technical Reports Server (NTRS)
Blewitt, Geoffrey
1990-01-01
An algorithm has been developed to automatically edit Global Positioning System data such that outlier deletion, cycle slip identification, and correction are independent of clock instability, selective availability, receiver-satellite kinematics, and tropospheric conditions. This algorithm, called TurboEdit, operates on undifferenced, dual frequency carrier phase data, and requires the use of P code pseudorange data and a smoothly varying ionospheric electron content. TurboEdit was tested on the large data set from the CASA Uno experiment, which contained over 2500 cycle slips. Analyst intervention was required on 1 percent of the station-satellite passes, almost all of these problems being due to difficulties in extrapolating variations in the ionospheric delay. The algorithm is presently being adapted for real time data editing in the Rogue receiver for continuous monitoring applications.
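The core idea behind such editing can be sketched as follows (a simplified illustration, not the TurboEdit algorithm itself): a dual-frequency, geometry-free phase combination varies smoothly with ionospheric delay, so a sudden epoch-to-epoch jump flags a cycle slip, and its rounded size gives an integer correction estimate.

```python
import numpy as np

def find_cycle_slips(lgf, threshold=0.5):
    """Flag jumps in a geometry-free phase series (in cycles).

    Returns (epoch, estimated integer slip) pairs wherever the
    epoch-to-epoch change exceeds the threshold.
    """
    slips = []
    for i, d in enumerate(np.diff(lgf)):
        if abs(d) > threshold:
            slips.append((i + 1, round(d)))
    return slips

# Smooth ionospheric trend with an injected 3-cycle slip at epoch 30.
epochs = np.arange(60, dtype=float)
series = 0.001 * epochs**2 + 0.01 * epochs
series[30:] += 3.0
print(find_cycle_slips(series))   # → [(30, 3)]
```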
Turbopump Design and Analysis Approach for Nuclear Thermal Rockets
NASA Technical Reports Server (NTRS)
Chen, Shu-cheng S.; Veres, Joseph P.; Fittje, James E.
2006-01-01
A rocket propulsion system, whether it is a chemical rocket or a nuclear thermal rocket, is fairly complex in detail but rather simple in principle. Among all the interacting parts, three components stand out: they are pumps and turbines (turbopumps), and the thrust chamber. To obtain an understanding of the overall rocket propulsion system characteristics, one starts from analyzing the interactions among these three components. It is therefore of utmost importance to be able to satisfactorily characterize the turbopump, level by level, at all phases of a vehicle design cycle. Here at NASA Glenn Research Center, as the starting phase of a rocket engine design, specifically a Nuclear Thermal Rocket Engine design, we adopted the approach of using a high level system cycle analysis code (NESS) to obtain an initial analysis of the operational characteristics of a turbopump required in the propulsion system. A set of turbopump design codes (PumpDes and TurbDes) were then executed to obtain sizing and performance characteristics of the turbopump that were consistent with the mission requirements. A set of turbopump analyses codes (PUMPA and TURBA) were applied to obtain the full performance map for each of the turbopump components; a two dimensional layout of the turbopump based on these mean line analyses was also generated. Adequacy of the turbopump conceptual design will later be determined by further analyses and evaluation. In this paper, descriptions and discussions of the aforementioned approach are provided and future outlooks are discussed.
Science and Observation Recommendations for Future NASA Carbon Cycle Research
NASA Technical Reports Server (NTRS)
McClain, Charles R.; Collatz, G. J.; Kawa, S. R.; Gregg, W. W.; Gervin, J. C.; Abshire, J. B.; Andrews, A. E.; Behrenfeld, M. J.; Demaio, L. D.; Knox, R. G.
2002-01-01
Between October 2000 and June 2001, an Agency-wide planning, effort was organized by elements of NASA Goddard Space Flight Center (GSFC) to define future research and technology development activities. This planning effort was conducted at the request of the Associate Administrator of the Office of Earth Science (Code Y), Dr. Ghassem Asrar, at NASA Headquarters (HQ). The primary points of contact were Dr. Mary Cleave, Deputy Associate Administrator for Advanced Planning at NASA HQ (Headquarters) and Dr. Charles McClain of the Office of Global Carbon Studies (Code 970.2) at GSFC. During this period, GSFC hosted three workshops to define the science requirements and objectives, the observational and modeling requirements to meet the science objectives, the technology development requirements, and a cost plan for both the science program and new flight projects that will be needed for new observations beyond the present or currently planned. The plan definition process was very intensive as HQ required the final presentation package by mid-June 2001. This deadline was met and the recommendations were ultimately refined and folded into a broader program plan, which also included climate modeling, aerosol observations, and science computing technology development, for contributing to the President's Climate Change Research Initiative. This technical memorandum outlines the process and recommendations made for cross-cutting carbon cycle research as presented in June. A separate NASA document outlines the budget profiles or cost analyses conducted as part of the planning effort.
Identification of limit cycles in multi-nonlinearity, multiple path systems
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Barron, O. L.
1979-01-01
A method of analysis which identifies limit cycles in autonomous systems with multiple nonlinearities and multiple forward paths is presented. The FORTRAN code for implementing the Harmonic Balance Algorithm is reported. The FORTRAN code is used to identify limit cycles in multiple path and nonlinearity systems while retaining the effects of several harmonic components.
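For a single nonlinearity, the harmonic balance idea reduces to the classic describing-function condition G(jω) = -1/N(A). The sketch below (an illustrative single-path example, not the report's multi-nonlinearity algorithm) locates the predicted limit cycle for an ideal relay of output level M, whose describing function is N(A) = 4M/(πA), in series with an example plant.

```python
import numpy as np

def G(w):
    """Example linear plant G(s) = 1/(s(s+1)(s+2)) evaluated at s = jw."""
    s = 1j * w
    return 1.0 / (s * (s + 1.0) * (s + 2.0))

# Harmonic balance: a limit cycle is predicted where the Nyquist plot
# crosses -1/N(A), i.e. the negative real axis for an ideal relay.
ws = np.linspace(0.5, 3.0, 100001)
im = np.imag(G(ws))
idx = int(np.where(np.sign(im[:-1]) != np.sign(im[1:]))[0][0])
w_lc = float(ws[idx])                      # limit-cycle frequency (rad/s)

M = 1.0                                    # relay output level
A_lc = 4.0 * M * abs(G(w_lc)) / np.pi      # balanced first-harmonic amplitude
print(round(w_lc, 3), round(A_lc, 3))      # → 1.414 0.212
```

For this plant the analytical answer is ω = √2 rad/s and A = 2M/(3π), so the numerical balance can be checked directly.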
A CPU benchmark for protein crystallographic refinement.
Bourne, P E; Hendrickson, W A
1990-01-01
The CPU time required to complete a cycle of restrained least-squares refinement of a protein structure from X-ray crystallographic data using the FORTRAN codes PROTIN and PROLSQ are reported for 48 different processors, ranging from single-user workstations to supercomputers. Sequential, vector, VLIW, multiprocessor, and RISC hardware architectures are compared using both a small and a large protein structure. Representative compile times for each hardware type are also given, and the improvement in run-time when coding for a specific hardware architecture considered. The benchmarks involve scalar integer and vector floating point arithmetic and are representative of the calculations performed in many scientific disciplines.
A high order approach to flight software development and testing
NASA Technical Reports Server (NTRS)
Steinbacher, J.
1981-01-01
The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moisseytsev, A.; Sienicki, J. J.
2011-11-07
Significant progress has been made in the ongoing development of the Argonne National Laboratory (ANL) Plant Dynamics Code (PDC), the investigation and development of control strategies, and the analysis of system transient behavior for supercritical carbon dioxide (S-CO{sub 2}) Brayton cycles. Several code modifications have been introduced during FY2011 to extend the range of applicability of the PDC and to improve its calculational stability and speed. A new and innovative approach was developed to couple the Plant Dynamics Code for S-CO{sub 2} cycle calculations with SAS4A/SASSYS-1 Liquid Metal Reactor Code System calculations for the transient system-level behavior on the reactor side of a Sodium-Cooled Fast Reactor (SFR) or Lead-Cooled Fast Reactor (LFR). The new code system allows use of the full capabilities of both codes such that whole-plant transients can now be simulated without additional user interaction. Several other code modifications, including the introduction of compressor surge control, a new approach for determining the solution time step for efficient computational speed, an updated treatment of S-CO{sub 2} cycle flow mergers and splits, a modified enthalpy equation to improve the treatment of negative flow, and a revised solution of the reactor heat exchanger (RHX) equations coupling the S-CO{sub 2} cycle to the reactor, were introduced to the PDC in FY2011. All of these modifications have improved the code's computational stability and speed, while not significantly affecting the results of transient calculations. The improved PDC was used to continue the investigation of S-CO{sub 2} cycle control and transient behavior. The coupled PDC-SAS4A/SASSYS-1 code capability was used to study the dynamic characteristics of an S-CO{sub 2} cycle coupled to an SFR plant.
Cycle control was investigated in terms of the ability of the cycle to respond to a linear reduction in the electrical grid demand from 100% to 0% at a rate of 5%/minute. It was determined that utilization of turbine throttling control below 50% load improves the cycle efficiency significantly. Consequently, the cycle control strategy has been updated to include turbine throttle valve control. The new control strategy still relies on inventory control in the 50%-90% load range and turbine bypass for fine and fast generator output adjustments, but it now also includes turbine throttling control in the 0%-50% load range. In an attempt to investigate the feasibility of using the S-CO{sub 2} cycle for normal decay heat removal from the reactor, the cycle control study was extended beyond the investigation of normal load following. It was shown that such operation is possible with the extension of the inventory and turbine throttling controls. However, cycle operation in this range is calculated to be so inefficient that energy would need to be supplied from the electrical grid: assuming the generator could be operated in a motoring mode, an input electrical energy of about 20% of the nominal plant output electrical power level would be needed to maintain circulation of the CO{sub 2} in the cycle. The work on investigation of cycle operation at low power levels will be continued in the future. In addition to the cycle control study, the coupled PDC-SAS4A/SASSYS-1 code system was also used to simulate thermal transients in the sodium-to-CO{sub 2} heat exchanger. Several possible conditions with the potential to introduce significant changes to the heat exchanger temperatures were identified and simulated. The conditions range from reactor scram and primary sodium pump failure or intermediate sodium pump failure on the reactor side to pipe breaks and valve malfunctions on the S-CO{sub 2} side.
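The load-range logic described above can be summarized in a small dispatcher (ranges as quoted; the actual PDC control laws are continuous and far richer):

```python
def control_mechanisms(load_pct):
    """Control mechanisms active at a given fraction of rated load."""
    active = ["turbine bypass"]                # fine/fast output adjustment
    if load_pct < 50.0:
        active.append("turbine throttling")    # new low-load control
    elif load_pct <= 90.0:
        active.append("inventory control")     # mid-range control
    return active

print(control_mechanisms(30.0))
print(control_mechanisms(70.0))
```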
It was found that the maximum possible rate of the heat exchanger wall temperature change for the particular heat exchanger design assumed is limited to ±7 °C/s for less than 10 seconds. Modeling in the Plant Dynamics Code has been compared with available data from the Sandia National Laboratories (SNL) small-scale S-CO{sub 2} Brayton cycle demonstration that is being assembled in a phased approach, currently at Barber-Nichols Inc. and at SNL in the future. The available data was obtained with an earlier configuration of the S-CO{sub 2} loop involving only a single turbo-alternator-compressor (TAC) instead of two TACs, a single low temperature recuperator (LTR) instead of both a LTR and a high temperature recuperator (HTR), and fewer electric heaters than the full set to be installed later. Due to the absence of the full heating capability as well as the lack of a high temperature recuperator providing additional recuperation, the temperature conditions obtained with the loop are too low to be prototypical of the S-CO{sub 2} cycle.
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin; Schlensinger, Adam
2011-01-01
Sinusoidal jitter is produced by simply modulating a clock frequency sinusoidally with a given frequency and amplitude, but the result can be expressed as phase jitter, frequency jitter, or cycle-to-cycle jitter, rms or peak, in absolute units or normalized to the base clock frequency. Jitter using other waveforms requires calculating and downloading these waveforms to an arbitrary waveform generator, and helping the user manage relationships among phase jitter crest factor, frequency jitter crest factor, and cycle-to-cycle jitter (CCJ) crest factor. Software was developed for managing these relationships, automatically configuring the generator, and saving test results documentation. Tighter management of clock jitter and jitter sensitivity is required by new codes that further extend the already high performance of space communication links, completely correcting symbol error rates higher than 10 percent, and therefore typically requiring demodulation and symbol synchronization hardware to operate at signal-to-noise ratios of less than one. To accomplish this, greater demands are also made on transmitter performance, and measurement techniques are needed to confirm performance. It was discovered early that sinusoidal jitter can be stepped on a grid such that one can connect points by constant phase jitter, constant frequency jitter, or constant cycle-to-cycle jitter. The tool automates adherence to a grid while also allowing adjustments off-grid. The jitter can be set by the user on any dimension and the others are calculated. The calculations are all recorded, allowing the data to be rapidly plotted or re-plotted against different interpretations just by changing pointers to columns. A key advantage is taking data on a carefully controlled grid, which allowed a single data set to be post-analyzed many different ways.
Another innovation was building a software tool to provide very tight coupling between the generator and the recorded data product, and the operator's worksheet. Together, these allowed the operator to sweep the jitter stimulus quickly along any of three dimensions and focus on the response of the system under test (response was jitter transfer ratio, or performance degradation to the symbol or codeword error rate). Additionally, managing multi-tone and noise waveforms automated a tedious manual process, and provided almost instantaneous decision- making control over test flow. The code was written in LabVIEW, and calls Agilent instrument drivers to write to the generator hardware.
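For the sinusoidal case, the relationships among the three expressions of jitter can be written in closed form. The sketch below uses standard small-jitter approximations derived here (not taken from the article): for phase jitter tau(t) = A sin(2 pi fm t), with A in seconds, on a clock of frequency f0, the peak frequency jitter is 2 pi f0 fm A and the peak cycle-to-cycle jitter is 4 A sin^2(pi fm / f0).

```python
import math

def sinusoidal_jitter(A, fm, f0):
    """Peak frequency jitter (Hz) and peak cycle-to-cycle jitter (s)
    for sinusoidal phase jitter of amplitude A seconds at fm Hz."""
    freq_pk = 2.0 * math.pi * f0 * fm * A
    ccj_pk = 4.0 * A * math.sin(math.pi * fm / f0) ** 2
    return freq_pk, ccj_pk

# Example: 1 ns peak phase jitter, 10 kHz modulation, 10 MHz clock.
freq_pk, ccj_pk = sinusoidal_jitter(A=1e-9, fm=1e4, f0=1e7)
print(freq_pk, ccj_pk)
```

Stepping any one of the three quantities while holding the modulation frequency fixed determines the other two, which is the grid relationship the tool exploits.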
Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Ashe, Thomas L.; Otting, William D.
1993-01-01
The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance to significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
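The flavor of such a parametric model can be conveyed by a much-reduced sketch (illustrative temperatures, efficiencies, and working-fluid properties are assumed here, not taken from the NASA model): thermal efficiency of a recuperated closed Brayton cycle from a few input parameters.

```python
def cbc_efficiency(T1, T4, r, gamma=1.67, eta_c=0.85, eta_t=0.90, eps=0.92):
    """Recuperated closed Brayton cycle thermal efficiency.

    T1: compressor inlet temperature (K), T4: turbine inlet (K),
    r: compressor pressure ratio, eps: recuperator effectiveness,
    gamma: ratio of specific heats (e.g. a He-Xe working fluid).
    """
    k = (gamma - 1.0) / gamma
    T2 = T1 * (1.0 + (r**k - 1.0) / eta_c)      # compressor exit
    T5 = T4 * (1.0 - eta_t * (1.0 - r**-k))     # turbine exit
    T3 = T2 + eps * (T5 - T2)                   # recuperator exit, cold side
    w_net = (T4 - T5) - (T2 - T1)               # net specific work / cp
    q_in = T4 - T3                              # heat added / cp
    return w_net / q_in

eta = cbc_efficiency(T1=400.0, T4=1300.0, r=2.0)
print(round(eta, 3))
```

Wrapping such a function in an optimizer over the pressure ratio and cycle temperature ratio mirrors the optimization-driver role described above.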
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles including both type I and type II are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation well combines cooperation gain and channel coding gain, and outperforms the coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
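In the parity-check matrix picture, a girth-4 cycle exists precisely when two rows share ones in two or more columns, which gives a compact check (a generic test, not the paper's joint design procedure):

```python
import itertools
import numpy as np

def has_girth4_cycle(H):
    """A Tanner-graph 4-cycle exists iff two rows of the parity-check
    matrix H have ones in two or more common columns."""
    H = np.asarray(H)
    for r1, r2 in itertools.combinations(range(H.shape[0]), 2):
        if np.sum(H[r1] & H[r2]) >= 2:
            return True
    return False

H_bad = [[1, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1]]     # rows 0 and 1 overlap in columns 0 and 1
H_good = [[1, 1, 0, 0],
          [1, 0, 1, 0],
          [0, 1, 0, 1]]    # no pair of rows overlaps twice
print(has_girth4_cycle(H_bad), has_girth4_cycle(H_good))
```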
Water cycle algorithm: A detailed standard code
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
Inspired by the observation of the water cycle process and the movements of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA), has recently been proposed. Lately, an increasing number of WCA applications have appeared and the WCA has been utilized in different optimization fields. This paper provides detailed open source code for the WCA, whose performance and efficiency have been demonstrated for solving optimization problems. The WCA has an interesting and simple concept, and this paper aims to use its source code to provide a step-by-step explanation of the process it follows.
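The core move of the WCA (as given in the standard formulation, with C = 2) drifts a stream toward its guiding river or sea: X(t+1) = X(t) + rand · C · (X_guide − X(t)). A minimal sketch of just this move, omitting the evaporation and raining steps of the full algorithm:

```python
import random

def flow(position, guide, C=2.0):
    """One WCA move: drift each coordinate toward the guide solution."""
    return [x + random.random() * C * (g - x) for x, g in zip(position, guide)]

def sphere(x):
    return sum(v * v for v in x)

random.seed(1)
sea = [0.1, -0.2]                 # best solution found so far
stream = [2.0, 3.0]               # a candidate solution
for _ in range(200):
    stream = flow(stream, sea)
print(round(sphere(stream), 2))   # converges to sphere(sea) = 0.05
```

Because the per-coordinate contraction factor averages one half, repeated flow moves pull every stream onto its guide; the evaporation/raining steps of the full algorithm then reintroduce diversity.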
NASA Technical Reports Server (NTRS)
Manderscheid, J. M.; Kaufman, A.
1985-01-01
Turbine blades for reusable space propulsion systems are subject to severe thermomechanical loading cycles that result in large inelastic strains and very short lives. These components require the use of anisotropic high-temperature alloys to meet the safety and durability requirements of such systems. To assess the effects on blade life of material anisotropy, cyclic structural analyses are being performed for the first stage high-pressure fuel turbopump blade of the space shuttle main engine. The blade alloy is directionally solidified MAR-M 246. The analyses are based on a typical test stand engine cycle. Stress-strain histories at the airfoil critical location are computed using the MARC nonlinear finite-element computer code. The MARC solutions are compared to cyclic response predictions from a simplified structural analysis procedure developed at the NASA Lewis Research Center.
Kneifel, Joshua; O'Rear, Eric; Webb, David; O'Fallon, Cheyney
2018-02-01
To conduct a more complete analysis of low-energy and net-zero energy buildings that considers both the operating and embodied energy/emissions, members of the building community look to life-cycle assessment (LCA) methods. This paper examines differences in the relative impacts of cost-optimal energy efficiency measure combinations depicting residential buildings up to and beyond net-zero energy consumption on operating and embodied flows, using data from the Building Industry Reporting and Design for Sustainability (BIRDS) Low-Energy Residential Database. Results indicate that net-zero performance leads to a large increase in embodied flows (over 40%) that offsets some of the reductions in operational flows, but overall life-cycle flows are still reduced by over 60% relative to the state energy code. Overall, building designs beyond net-zero performance can partially offset embodied flows with negative operational flows by replacing traditional electricity generation with solar production, but would require an additional 8.34 kW (18.54 kW in total) of due-south-facing solar PV to reach net-zero total life-cycle flows. Such a system would meet over 239% of operational consumption of the most energy efficient design considered in this study and over 116% of a state code-compliant building design in its initial year of operation.
Impacts of Model Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.
The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states that have codes fundamentally different from the national model energy codes or that do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code's requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
Edwards, N
2008-10-01
The international introduction of performance-based building codes calls for a re-examination of indicators used to monitor their implementation. Indicators used in the building sector have a business orientation, target the life cycle of buildings, and guide asset management. In contrast, indicators used in the health sector focus on injury prevention, have a behavioural orientation, lack specificity with respect to features of the built environment, and do not take into account patterns of building use or building longevity. Suggestions for metrics that bridge the building and health sectors are discussed. The need for integrated surveillance systems in health and building sectors is outlined. It is time to reconsider commonly used epidemiological indicators in the field of injury prevention and determine their utility to address the accountability requirements of performance-based codes.
STAR-CCM+ Verification and Validation Plan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pointer, William David
2016-09-30
The commercial Computational Fluid Dynamics (CFD) code STAR-CCM+ provides general purpose finite volume method solutions for fluid dynamics and energy transport. This document defines plans for verification and validation (V&V) of the base code and models implemented within the code by the Consortium for Advanced Simulation of Light water reactors (CASL). The software quality assurance activities described herein are part of the overall software life cycle defined in the CASL Software Quality Assurance (SQA) Plan [Sieger, 2015]. STAR-CCM+ serves as the principal foundation for development of an advanced predictive multi-phase boiling simulation capability within CASL. The CASL Thermal Hydraulics Methods (THM) team develops advanced closure models required to describe the subgrid-resolution behavior of secondary fluids or fluid phases in multiphase boiling flows within the Eulerian-Eulerian framework of the code. These include wall heat partitioning models that describe the formation of vapor on the surface and the forces that define bubble/droplet dynamic motion. The CASL models are implemented as user coding or field functions within the general framework of the code. This report defines procedures and requirements for V&V of the multi-phase CFD capability developed by CASL THM. Results of V&V evaluations will be documented in a separate STAR-CCM+ V&V assessment report. This report is expected to be a living document and will be updated as additional validation cases are identified and adopted as part of the CASL THM V&V suite.
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-01-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's law: S(f,P) = 1/((1 - f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
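Amdahl's law is two lines of code, and makes the caveat above concrete: even a 95%-parallel task saturates well below linear speedup (illustrative numbers, not MCNP measurements).

```python
def amdahl_speedup(f, P):
    """Amdahl's law: f = fraction that multiprocesses, P = processors."""
    return 1.0 / ((1.0 - f) + f / P)

print(round(amdahl_speedup(0.95, 16), 2))      # → 9.14
print(round(amdahl_speedup(0.95, 10**6), 1))   # → 20.0, the 1/(1-f) ceiling
```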
NASA Technical Reports Server (NTRS)
Mclennan, G. A.
1986-01-01
This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to determine the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.
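The cost-function input scheme described above can be sketched as follows (component names and numbers are invented for illustration): a cost may be supplied as an equation or as tabular data, and the resulting functions combined into a more general one.

```python
import bisect

def equation_cost(area_m2):
    """Example equation-form cost: heat exchanger cost vs. area."""
    return 1500.0 + 120.0 * area_m2

def tabular_cost(table):
    """Build a cost function that linearly interpolates (x, cost) pairs."""
    xs = [p[0] for p in table]
    ys = [p[1] for p in table]
    def cost(x):
        i = min(max(bisect.bisect_right(xs, x), 1), len(xs) - 1)
        t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return cost

pump_cost = tabular_cost([(0.0, 500.0), (10.0, 1400.0), (20.0, 2100.0)])

def system_cost(area_m2, pump_kw):
    """Combined cost function built from the two input forms."""
    return equation_cost(area_m2) + pump_cost(pump_kw)

print(system_cost(5.0, 15.0))   # → 3850.0
```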
Development of an Aeroelastic Analysis Including a Viscous Flow Model
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Bakhle, Milind A.
2001-01-01
Under this grant, Version 4 of the three-dimensional Navier-Stokes aeroelastic code (TURBO-AE) has been developed and verified. The TURBO-AE Version 4 aeroelastic code allows flutter calculations for a fan, compressor, or turbine blade row. This code models a vibrating three-dimensional bladed disk configuration and the associated unsteady flow (including shocks and viscous effects) to calculate the aeroelastic instability using a work-per-cycle approach. Phase-lagged (time-shift) periodic boundary conditions are used to model the phase lag between adjacent vibrating blades. The direct-store approach is used for this purpose to reduce the computational domain to a single interblade passage. A disk storage option, implemented using direct access files, is available to reduce the large memory requirements of the direct-store approach. Other researchers have implemented 3D inlet/exit boundary conditions based on eigen-analysis. Appendix A: Aeroelastic calculations based on three-dimensional Euler analysis. Appendix B: Unsteady aerodynamic modeling of blade vibration using the TURBO-V3.1 code.
Fries, Pascal; Nikolić, Danko; Singer, Wolf
2007-07-01
Activated neuronal groups typically engage in rhythmic synchronization in the gamma-frequency range (30-100 Hz). Experimental and modeling studies demonstrate that each gamma cycle is framed by synchronized spiking of inhibitory interneurons. Here, we review evidence suggesting that the resulting rhythmic network inhibition interacts with excitatory input to pyramidal cells such that the more excited cells fire earlier in the gamma cycle. Thus, the amplitude of excitatory drive is recoded into phase values of discharges relative to the gamma cycle. This recoding enables transmission and read out of amplitude information within a single gamma cycle without requiring rate integration. Furthermore, variation of phase relations can be exploited to facilitate or inhibit exchange of information between oscillating cell assemblies. The gamma cycle could thus serve as a fundamental computational mechanism for the implementation of a temporal coding scheme that enables fast processing and flexible routing of activity, supporting fast selection and binding of distributed responses. This review is part of the INMED/TINS special issue Physiogenic and pathogenic oscillations: the beauty and the beast, based on presentations at the annual INMED/TINS symposium (http://inmednet.com).
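The amplitude-to-phase recoding described above can be made concrete with a toy model (not from the review; parameters are invented): inhibition from the interneuron volley decays through each gamma cycle, so a more strongly driven pyramidal cell crosses the falling inhibition level, and therefore fires, earlier in the cycle.

```python
import math

# Toy model of gamma-cycle phase recoding (illustrative parameters): a cell
# fires when its excitatory drive exceeds the exponentially decaying
# inhibition left by the synchronized interneuron volley.
def firing_phase(drive, i0=10.0, tau=3.0, cycle_ms=25.0):
    """Time (ms) within one gamma cycle at which drive exceeds decaying
    inhibition; None if the cell stays silent this cycle."""
    if drive >= i0:
        return 0.0                       # exceeds even the peak inhibition
    t = tau * math.log(i0 / drive)       # solve drive = i0 * exp(-t / tau)
    return t if t < cycle_ms else None   # too weak: silent this cycle

phases = [firing_phase(d) for d in (8.0, 4.0, 2.0)]   # weaker drive, later phase
```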
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, D.; Levine, S.L.; Luoma, J.
1992-01-01
The Three Mile Island unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Problems in accuracy were encountered for cycles 8 and higher as the core lifetime was increased beyond 500 effective full-power days. This is because the heavier loaded cores in both {sup 235}U and {sup 10}B have harder neutron spectra, which produces a change in the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for the cores containing the increased amount of {sup 10}B required in the BP rods. In the authors' study, a technique has been developed to take into account the change in the transport effect in the baffle region by modifying the fast neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when comparing with measured data.
Validation of a program for supercritical power plant calculations
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian
2011-12-01
This article describes the validation of a supercritical steam cycle model. The model was created with the commercial program GateCycle and validated using an in-house code of the Institute of Power Engineering and Turbomachinery, which has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for the high- and low-pressure regenerative heat exchangers, and pressure losses in heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the in-house code. Quantities such as the thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, and the heat provided to and taken from the steam cycle were compared. The last step of the analysis was the calculation of relative errors of the compared values; the method used for these calculations is presented in the paper. The resulting relative errors are very small, generally not exceeding 0.1%. Based on our analysis, it can be concluded that using the GateCycle software for calculations of supercritical power plants is possible.
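The comparison step can be sketched as follows (the numbers below are illustrative placeholders, not the paper's data, and the exact error definition used by the authors is not reproduced here): the relative error, in percent, of each GateCycle quantity against the in-house reference code.

```python
# Sketch of a code-to-code validation comparison: percent relative error of
# each compared quantity against the reference code. Values are invented for
# illustration only.
def relative_errors(reference, compared):
    return {k: abs(compared[k] - reference[k]) / abs(reference[k]) * 100.0
            for k in reference}

inhouse   = {"net_power_MW": 900.0, "efficiency": 0.4860, "heat_in_MW": 1851.8}
gatecycle = {"net_power_MW": 900.1, "efficiency": 0.4858, "heat_in_MW": 1852.9}
errs = relative_errors(inhouse, gatecycle)   # here, all below 0.1%
```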
Radiosensitivity of Mammalian Cells
Walters, R. A.; Petersen, D. F.
1968-01-01
Radiation effects on macromolecular synthesis essential for the Chinese hamster cell to traverse the life cycle and to divide have been investigated. Life-cycle analysis techniques employing inhibitors of macromolecular synthesis were used in determining the kinetics of cell growth for specific segments of the population following spontaneous recovery from radiation-induced division delay. The results indicated that recovery does not occur in the absence of functional protein synthesis. Under conditions which inhibit normal RNA and DNA synthesis, irradiated cells can recover the capacity to traverse the life cycle and to divide. The stability of mRNA species coding for proteins essential for division in irradiated cells was also measured. The mean functional lifetime of these mRNA species was 1 hr. The data demonstrate the existence of a specific segment of the population consisting of cells which have completed transcription related to division but not concomitant translation and which can recover from the radiation injury without synthesis of additional RNA. Thus, initial recovery of the ability to divide has an obligate requirement for protein synthesis but no corresponding requirement for nucleic acid synthesis during the period when original messenger remains intact. PMID:5753224
Benoit, Beatrice; He, Chun Hua; Zhang, Fan; Votruba, Sarah M; Tadros, Wael; Westwood, J Timothy; Smibert, Craig A; Lipshitz, Howard D; Theurkauf, William E
2009-03-01
Genetic control of embryogenesis switches from the maternal to the zygotic genome during the maternal-to-zygotic transition (MZT), when maternal mRNAs are destroyed, high-level zygotic transcription is initiated, the replication checkpoint is activated and the cell cycle slows. The midblastula transition (MBT) is the first morphological event that requires zygotic gene expression. The Drosophila MBT is marked by blastoderm cellularization and follows 13 cleavage-stage divisions. The RNA-binding protein Smaug is required for cleavage-independent maternal transcript destruction during the Drosophila MZT. Here, we show that smaug mutants also disrupt syncytial blastoderm stage cell-cycle delays, DNA replication checkpoint activation, cellularization, and high-level zygotic expression of protein-coding and microRNA genes. We also show that Smaug protein levels increase through the cleavage divisions and peak when the checkpoint is activated and zygotic transcription initiates, and that transgenic expression of Smaug in an anterior-to-posterior gradient produces a concomitant gradient in the timing of maternal transcript destruction, cleavage cell-cycle delays, zygotic gene transcription, cellularization and gastrulation. Smaug accumulation thus coordinates progression through the MZT.
Optimization of wave rotors for use as gas turbine engine topping cycles
NASA Technical Reports Server (NTRS)
Wilson, Jack; Paxson, Daniel E.
1995-01-01
Use of a wave rotor as a topping cycle for a gas turbine engine can improve specific power and reduce specific fuel consumption. Maximum improvement requires the wave rotor to be optimized for best performance at the mass flow of the engine. The optimization is a trade-off between losses due to friction and passage opening time, and rotational effects. An experimentally validated, one-dimensional CFD code, which includes these effects, has been used to calculate wave rotor performance, and find the optimum configuration. The technique is described, and results given for wave rotors sized for engines with sea level mass flows of 4, 26, and 400 lb/sec.
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
NASA Astrophysics Data System (ADS)
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing power dissipation is a major challenge in applying LDPC code decoders to practical digital communication systems. In this paper, we propose a low-power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message-compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques reduces the power dissipation while maintaining the decoding throughput. Simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared with decoders based on the overlapped schedule and the rapid convergence schedule without the proposed techniques, respectively.
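One well-known form of intermediate-message compression for min-sum LDPC decoding can illustrate why such compression saves memory (the paper's exact scheme may differ; this is a hedged sketch): every check-to-variable magnitude on a row equals either the smallest or the second-smallest input magnitude, so the whole row can be stored as (min1, min2, index of min1, per-edge sign bits) instead of one full message per edge.

```python
# Hedged sketch of min-sum check-node message compression (a standard
# technique; not necessarily the paper's scheme).
def compress_check_messages(llrs):
    """Compress a check node's input LLRs to the min-sum summary."""
    mags = [abs(v) for v in llrs]
    idx = mags.index(min(mags))
    min1 = mags[idx]
    min2 = min(m for i, m in enumerate(mags) if i != idx)
    signs = [v < 0.0 for v in llrs]
    return min1, min2, idx, signs

def decompress(min1, min2, idx, signs):
    """Reconstruct all check-to-variable messages from the summary."""
    odd_total = sum(signs) % 2 == 1
    out = []
    for i, own_neg in enumerate(signs):
        mag = min2 if i == idx else min1     # exclude the edge's own input
        neg = odd_total != own_neg           # sign parity of the other edges
        out.append(-mag if neg else mag)
    return out
```

For a row of degree d, this stores two magnitudes, one index, and d sign bits rather than d full-precision magnitudes, which is where the memory-capacity and write-power savings come from.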
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, S.M.
1995-01-01
The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations reported herein is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k{sub eff}) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of three reactor critical configurations for the Sequoyah Unit 2 Cycle 3. This unit and cycle were chosen because of the relevance in spent fuel benchmark applications: (1) the unit had a significantly long downtime of 2.7 years during the middle of cycle (MOC) 3, and (2) the core consisted entirely of burned fuel at the MOC restart.
The first benchmark critical calculation was the MOC restart at hot, full-power (HFP) critical conditions. The other two benchmark critical calculations were the beginning-of-cycle (BOC) startup at both hot, zero-power (HZP) and HFP critical conditions. These latter calculations were used to check for consistency in the calculated results for different burnups and downtimes. The k{sub eff} results were in the range of 1.00014 to 1.00259 with a standard deviation of less than 0.001.
Interactive-graphic flowpath plotting for turbine engines
NASA Technical Reports Server (NTRS)
Corban, R. R.
1981-01-01
An engine cycle program capable of simulating the design and off-design performance of arbitrary turbine engines is described, along with a computer code which, when used in conjunction with the cycle code, can predict the weight of the engines. A graphics subroutine was added to the code to let the engineer visualize the designed engine with more clarity by producing an overall view of the engine for output on a graphics device, using IBM-370 graphics subroutines. In addition, with the engine drawn on a graphics screen, the program allows the interactive user to change the code's inputs so that the engine can be redrawn and reweighed. These improvements allow better use of the code in conjunction with the engine program.
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-01-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors P and the fraction f of task time that multiprocesses, can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications, this theoretical limit cannot be achieved because of additional terms (e.g., multitasking overhead, memory overlap, etc.) that are not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing because the particle tracks are generally independent, and the precision of the result increases as the square root of the number of particles tracked.
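Amdahl's law as quoted in the abstract is straightforward to evaluate; the example values below are illustrative, not from the report:

```python
# Amdahl's law: speedup of a job of which a fraction f multiprocesses
# perfectly over P processors, S(f, P) = 1 / ((1 - f) + f / P).
def amdahl_speedup(f, p):
    return 1.0 / ((1.0 - f) + f / p)

# e.g. Monte Carlo tracking that is 95% parallelizable on 8 processors
s = amdahl_speedup(0.95, 8)   # about 5.9x, well short of the 8x ideal
```

Note the asymptote: even with unlimited processors, a job that is 95% parallel can never exceed a 20x speedup, which is why the serial fraction, and the overhead terms the abstract mentions, dominate at high processor counts.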
A Fixed Point VHDL Component Library for a High Efficiency Reconfigurable Radio Design Methodology
NASA Technical Reports Server (NTRS)
Hoy, Scott D.; Figueiredo, Marco A.
2006-01-01
Advances in Field Programmable Gate Array (FPGA) technologies enable the implementation of reconfigurable radio systems for both ground and space applications. The development of such systems challenges the current design paradigms and requires more robust design techniques to meet the increased system complexity. Among these techniques is the development of component libraries to reduce design cycle time and to improve design verification, consequently increasing the overall efficiency of the project development process while increasing design success rates and reducing engineering costs. This paper describes the reconfigurable radio component library developed at the Software Defined Radio Applications Research Center (SARC) at Goddard Space Flight Center (GSFC) Microwave and Communications Branch (Code 567). The library is a set of fixed-point VHDL components that link the Digital Signal Processing (DSP) simulation environment with the FPGA design tools. This provides a direct synthesis path based on the latest developments of the VHDL tools, as proposed by the IEEE VHDL 2004 effort, which allows for the simulation and synthesis of fixed-point math operations while maintaining bit and cycle accuracy. The VHDL fixed-point reconfigurable radio component library does not require the use of FPGA-vendor-specific automatic component generators and provides a generic path from high-level DSP simulations implemented in Mathworks Simulink to any FPGA device. Access to the components' synthesizable source code provides full design verification capability.
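The bit-accuracy requirement above can be illustrated with a small sketch (not the SARC library itself; format and behavior are illustrative): a Q1.15 two's-complement fixed-point representation with saturation, the kind of arithmetic a DSP simulation must reproduce exactly for the simulation and the synthesized FPGA logic to agree bit-for-bit.

```python
# Illustrative Q1.15 fixed-point arithmetic with saturation: quantize a real
# value to a 16-bit two's-complement integer with 15 fractional bits, and
# multiply two such values, truncating back to the same format.
def to_fixed(x, frac_bits=15, total_bits=16):
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    raw = int(round(x * (1 << frac_bits)))
    return max(lo, min(hi, raw))            # saturate instead of wrapping

def fixed_mul(a_raw, b_raw, frac_bits=15, total_bits=16):
    lo, hi = -(1 << (total_bits - 1)), (1 << (total_bits - 1)) - 1
    prod = (a_raw * b_raw) >> frac_bits     # truncate extra fractional bits
    return max(lo, min(hi, prod))

a = to_fixed(0.5)       # 16384 in Q1.15
b = to_fixed(-0.25)     # -8192
c = fixed_mul(a, b)     # represents -0.125
```

A component library fixes choices like these (width, fractional bits, saturation vs. wrap, truncate vs. round) once, so every design reuses verified, bit-exact building blocks.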
Simulation on reactor TRIGA Puspati core kinetics fueled with thorium (Th) based fuel element
NASA Astrophysics Data System (ADS)
Mohammed, Abdul Aziz; Pauzi, Anas Muhamad; Rahman, Shaik Mohmmed Haikhal Abdul; Zin, Muhamad Rawi Muhammad; Jamro, Rafhayudi; Idris, Faridah Mohamad
2016-01-01
In confronting global energy requirements and the search for better technologies, there is a real case for widening the range of potential variations in the design of nuclear power plants. Smaller and simpler reactors are attractive, provided they can meet safety and security standards and non-proliferation issues. On the fuel cycle side, thorium fuel cycles produce much less plutonium and other radioactive transuranic elements than uranium fuel cycles. Although not fissile itself, Th-232 will absorb slow neutrons to produce uranium-233 (233U), which is fissile. By introducing thorium, the number of highly enriched uranium fuel elements can be reduced while maintaining the core neutronic performance. This paper describes the core kinetics of a small research reactor core like TRIGA fueled with a Th-filled fuel element matrix using the general-purpose Monte Carlo N-Particle (MCNP) code.
[Representation of knowledge in respiratory medicine: ontology should help the coding process].
Blanc, F-X; Baneyx, A; Charlet, J; Housset, B
2010-09-01
Access to medical knowledge is a major issue for health professionals and requires the development of terminologies. The objective of the reported work was to construct an ontology of respiratory medicine, i.e., an organized and formalized terminology composed of specific knowledge. The purpose is to help the medico-economic coding process and to represent the relevant knowledge about the patient. Our research covers the whole life cycle of an ontology, from the development of a methodology, to building the ontology from texts, to its use in an operational system. A computerized tool, based on the ontology, supports both medico-economic coding and graphical medical coding; the latter will be used to index hospital reports. Our ontology contains 1913 concepts and covers all the knowledge included in the PMSI part of the SPLF thesaurus. Our tool has been evaluated and showed a recall of 80% and an accuracy of 85% for the medico-economic coding. The work presented in this paper justifies the approach that has been used. It must be continued on a larger scale to validate our coding principles and the possibility of querying patient reports for clinical research. Copyright © 2010. Published by Elsevier Masson SAS.
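The recall and accuracy figures reported above can be computed in the usual way for a coding task; the sketch below uses invented example codes, not the SPLF/PMSI data: recall is the fraction of reference codes the tool retrieved, and precision (accuracy of the proposed codes) is the fraction of proposed codes that are correct.

```python
# Sketch of a coding-tool evaluation: compare the set of codes proposed by
# the tool against a reference coding. The code lists are invented examples.
def recall_precision(reference, proposed):
    ref, prop = set(reference), set(proposed)
    tp = len(ref & prop)                 # codes both proposed and correct
    return tp / len(ref), tp / len(prop)

ref_codes  = ["J44.1", "J45.9", "J18.9", "J96.0", "J06.9"]
tool_codes = ["J44.1", "J45.9", "J18.9", "J96.1"]
r, p = recall_precision(ref_codes, tool_codes)   # r = 0.6, p = 0.75
```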
An Object-Oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2009-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept among several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and for the weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It would also facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case.
NDEC: A NEA platform for nuclear data testing, verification and benchmarking
NASA Astrophysics Data System (ADS)
Díez, C. J.; Michel-Sendis, F.; Cabellos, O.; Bossant, M.; Soppera, N.
2017-09-01
The selection, testing, verification and benchmarking of evaluated nuclear data consists, in practice, of putting an evaluated file through a number of checking steps in which different computational codes verify that the file and the data it contains comply with different requirements. These requirements range from format compliance to good performance in application cases, while physical constraints and agreement with experimental data are verified at the same time. At the NEA, the NDEC (Nuclear Data Evaluation Cycle) platform aims at providing, in a user-friendly interface, a thorough diagnosis of the quality of a submitted evaluated nuclear data file. This diagnosis is based on the results of the different computational codes and routines which carry out the aforementioned verifications, tests and checks. NDEC also seeks synergies with other existing NEA tools and databases, such as JANIS, DICE or NDaST, including them in its working scheme. This paper presents NDEC, its current development status and its usage in the JEFF nuclear data project.
Computer-aided design of antenna structures and components
NASA Technical Reports Server (NTRS)
Levy, R.
1976-01-01
This paper discusses computer-aided design procedures for antenna reflector structures and related components. The primary design aid is a computer program that establishes cross sectional sizes of the structural members by an optimality criterion. Alternative types of deflection-dependent objectives can be selected for designs subject to constraints on structure weight. The computer program has a special-purpose formulation to design structures of the type frequently used for antenna construction. These structures, in common with many in other areas of application, are represented by analytical models that employ only the three translational degrees of freedom at each node. The special-purpose construction of the program, however, permits coding and data management simplifications that provide advantages in problem size and execution speed. Size and speed are essentially governed by the requirements of structural analysis and are relatively unaffected by the added requirements of design. Computation times to execute several design/analysis cycles are comparable to the times required by general-purpose programs for a single analysis cycle. Examples in the paper illustrate effective design improvement for structures with several thousand degrees of freedom and within reasonable computing times.
Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System
NASA Technical Reports Server (NTRS)
Taft, James R.
2000-01-01
The shared memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames, has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256-CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16-CPU C90 system. All of this was achieved without any major modification to the original vector-based code. The OVERFLOW-MLP code is now in production on the in-house Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512-CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next-generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16-CPU C90. At this rate, expected workloads would require over 100 C90 CPU-years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs.
It is one of the largest consumers of NASA supercomputing cycles, and large simulations of highly resolved full aircraft are routinely undertaken. Typical large problems might require 100s of Cray C90 CPU hours to complete. The dramatic performance gains with the 256-CPU steger system are exciting. Obtaining results in hours instead of months is revolutionizing the way in which aircraft manufacturers are looking at future aircraft simulation work. Figure 2 below is a current state-of-the-art plot of OVERFLOW-MLP performance on the 512-CPU Lomax system. As can be seen, the chart indicates that OVERFLOW-MLP continues to scale linearly with CPU count up to 512 CPUs on a large 35-million-point full-aircraft RANS simulation. At this point performance is such that a fully converged simulation of 2500 time steps is completed in less than 2 hours of elapsed time. Further work over the next few weeks will improve the performance of this code even further. The LAURA code has been converted to the MLP format as well. This code is currently being optimized for the 512-CPU system. Performance statistics indicate that the goal of 100 GFLOP/s will be achieved by year's end. This amounts to 20x the 16-CPU C90 result and strongly demonstrates the viability of the new parallel systems rapidly solving very large simulations in a production environment.
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with the verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP in a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, relatively about 1% in the control rod reactivity, and 1% in the sodium void reactivity.
Military Interoperable Digital Hospital Testbed (MIDHT)
2013-10-01
activities are selected highlights completed by Northrop Grumman during the year. Cycle 4 development:
- Increased the max_allowed_packet size in MySQL ...deployment with the Java install that is required by CONNECT v3.3.1.3.
- Updated the MIDHT code base to work with the CONNECT v.3.3.1.3 Core Libraries...
- Provided TATRC the CONNECTUniversalClientGUI binaries for use with CONNECT v3.3.1.3
- Created and deployed a common Java library for the CONNECT
NASA Technical Reports Server (NTRS)
Jones, Scott M.
2007-01-01
This document is intended as an introduction to the analysis of gas turbine engine cycles using the Numerical Propulsion System Simulation (NPSS) code. It is assumed that the analyst has a firm understanding of fluid flow, gas dynamics, thermodynamics, and turbomachinery theory. The purpose of this paper is to provide for the novice the information necessary to begin cycle analysis using NPSS. This paper and the annotated example serve as a starting point and by no means cover the entire range of information and experience necessary for engine performance simulation. NPSS syntax is presented but for a more detailed explanation of the code the user is referred to the NPSS User Guide and Reference document (ref. 1).
Parametric Studies of the Ejector Process within a Turbine-Based Combined-Cycle Propulsion System
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Walker, James F.; Trefny, Charles J.
1999-01-01
Performance characteristics of the ejector process within a turbine-based combined-cycle (TBCC) propulsion system are investigated using the NPARC Navier-Stokes code. The TBCC concept integrates a turbine engine with a ramjet into a single propulsion system that may efficiently operate from takeoff to high Mach number cruise. At the operating point considered, corresponding to a flight Mach number of 2.0, an ejector serves to mix flow from the ramjet duct with flow from the turbine engine. The combined flow then passes through a diffuser where it is mixed with hydrogen fuel and burned. Three sets of fully turbulent Navier-Stokes calculations are compared with predictions from a cycle code developed specifically for the TBCC propulsion system. A baseline ejector system is investigated first. The Navier-Stokes calculations indicate that the flow leaving the ejector is not completely mixed, which may adversely affect the overall system performance. Two additional sets of calculations are presented; one set that investigated a longer ejector region (to enhance mixing) and a second set which also utilized the longer ejector but replaced the no-slip surfaces of the ejector with slip (inviscid) walls in order to resolve discrepancies with the cycle code. The three sets of Navier-Stokes calculations and the TBCC cycle code predictions are compared to determine the validity of each of the modeling approaches.
Lisman, John
2005-01-01
In the hippocampus, oscillations in the theta and gamma frequency range occur together and interact in several ways, indicating that they are part of a common functional system. It is argued that these oscillations form a coding scheme that is used in the hippocampus to organize the readout from long-term memory of the discrete sequence of upcoming places, as cued by current position. This readout of place cells has been analyzed in several ways. First, plots of the theta phase of spikes vs. position on a track show a systematic progression of phase as rats run through a place field. This is termed the phase precession. Second, two cells with nearby place fields have a systematic difference in phase, as indicated by a cross-correlation having a peak with a temporal offset that is a significant fraction of a theta cycle. Third, several different decoding algorithms demonstrate the information content of theta phase in predicting the animal's position. It appears that small phase differences corresponding to jitter within a gamma cycle do not carry information. This evidence, together with the finding that principal cells fire preferentially at a given gamma phase, supports the concept of theta/gamma coding: a given place is encoded by the spatial pattern of neurons that fire in a given gamma cycle (the exact timing within a gamma cycle being unimportant); sequential places are encoded in sequential gamma subcycles of the theta cycle (i.e., with different discrete theta phase). It appears that this general form of coding is not restricted to readout of information from long-term memory in the hippocampus because similar patterns of theta/gamma oscillations have been observed in multiple brain regions, including regions involved in working memory and sensory integration. It is suggested that dual oscillations serve a general function: the encoding of multiple units of information (items) in a way that preserves their serial order.
The relationship of such coding to that proposed by Singer and von der Malsburg is discussed; in their scheme, theta is not considered. It is argued that what theta provides is the absolute phase reference needed for encoding order. Theta/gamma coding therefore bears some relationship to the concept of "word" in digital computers, with word length corresponding to the number of gamma cycles within a theta cycle, and discrete phase corresponding to the ordered "place" within a word. Copyright 2005 Wiley-Liss, Inc.
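The discrete "word" analogy above can be made concrete with a toy sketch. This is purely illustrative (the slot count and item names are hypothetical, not from the paper): each gamma subcycle within one theta cycle acts as an ordered slot, and serial order is recoverable from discrete phase alone.

```python
# Illustrative sketch of theta/gamma coding as a discrete "word":
# each of the n_gamma slots within one theta cycle holds one sequential
# item, and order is carried by the slot index (discrete theta phase).
def encode_sequence(items, n_gamma=7):
    """Assign each upcoming item to a gamma subcycle of the theta cycle."""
    if len(items) > n_gamma:
        raise ValueError("sequence longer than gamma slots per theta cycle")
    return {slot: item for slot, item in enumerate(items)}

def decode_sequence(slot_map):
    """Recover serial order from the discrete gamma-slot phase alone."""
    return [slot_map[s] for s in sorted(slot_map)]

places = ["A", "B", "C", "D"]
word = encode_sequence(places)
assert decode_sequence(word) == places  # order preserved by phase
```

The "word length" here is `n_gamma`, mirroring the paper's point that the number of gamma cycles per theta cycle bounds how many ordered items one theta cycle can hold.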
NASA Astrophysics Data System (ADS)
Zhou, Yihui; Ou, Yu-Chen; Lee, George C.; O'Connor, Jerome S.
2010-09-01
Use of stainless reinforcing steel (SRS) in reinforced concrete (RC) structures is a promising solution to corrosion issues. However, for SRS to be used in seismic applications, several mechanical properties need to be investigated. These include specified and actual yield strengths, tensile strengths, uniform elongations, and low-cycle fatigue behavior. Three types of SRS (Talley S24100, Talley 316LN, and Talley 2205) were tested, and the results are reported in this paper. They were compared with the properties of A706 carbon reinforcing steel (RS), which is typical for seismic applications, and MMFX II, which is a high-strength, corrosion-resistant RS. Low-cycle fatigue tests of the RS coupons were conducted under strain control with constant amplitude to obtain strain-life models of the steels. Test results show that the SRSs have slightly lower moduli of elasticity, higher uniform elongations before necking, and better low-cycle fatigue performance than A706 and MMFX II. All five types of RS tested satisfy the requirements of the ACI 318 code on the lower limit of the tensile-to-yield strength ratio. Except for Talley 2205, the other four types of RS investigated meet the ACI 318 requirement that the actual yield strength not exceed the specified yield strength by more than 18 ksi (124 MPa). Among the three types of SRS tested, Talley S24100 possesses the highest uniform elongation before necking and the best low-cycle fatigue performance.
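Strain-life models fitted from constant-amplitude strain-controlled tests are commonly of the Coffin-Manson form. As a hedged sketch (the material constants below are generic illustrations, not the fitted values from these tests), the fatigue life for a given strain amplitude can be found by inverting the strain-life curve:

```python
import math

def strain_amplitude(n_f, sigma_f=1000.0, E=200000.0, b=-0.09,
                     eps_f=0.5, c=-0.6):
    """Coffin-Manson strain-life relation: elastic + plastic terms.
    Parameter values are illustrative, not this paper's fitted constants."""
    return (sigma_f / E) * (2 * n_f) ** b + eps_f * (2 * n_f) ** c

def cycles_to_failure(eps_a, lo=1.0, hi=1e9):
    """Invert the strain-life curve by bisection in log space
    (amplitude decreases monotonically with life)."""
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if strain_amplitude(mid) > eps_a:
            lo = mid
        else:
            hi = mid
    return lo

n = cycles_to_failure(0.01)  # life at 1% strain amplitude
```

Larger strain amplitudes yield shorter lives, which is the relationship the constant-amplitude coupon tests in the paper are designed to measure.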
Benchmark of FDNS CFD Code For Direct Connect RBCC Test Data
NASA Technical Reports Server (NTRS)
Ruf, J. H.
2000-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with experimental data from the Pennsylvania State University's (PSU) Propulsion Engineering Research Center (PERC) rocket based combined cycle (RBCC) rocket-ejector experiments. The PERC RBCC experimental hardware was in a direct-connect configuration in diffusion and afterburning (DAB) operation. The objective of the present work was to validate the Finite Difference Navier Stokes (FDNS) CFD code for the rocket-ejector mode internal fluid mechanics and combustion phenomena. A second objective was to determine the best application procedures for using FDNS as a predictive/engineering tool. Three-dimensional CFD analysis was performed. Solution methodology and grid requirements are discussed. CFD results are compared to experimental data for static pressure, Raman spectroscopy species distribution data, and RBCC net thrust and specific impulse.
The Navy/NASA Engine Program (NNEP89): A user's manual
NASA Technical Reports Server (NTRS)
Plencner, Robert M.; Snyder, Christopher A.
1991-01-01
An engine simulation computer code called NNEP89 was written to perform 1-D steady state thermodynamic analysis of turbine engine cycles. By using a very flexible method of input, a set of standard components are connected at execution time to simulate almost any turbine engine configuration that the user could imagine. The code was used to simulate a wide range of engine cycles from turboshafts and turboprops to air turborockets and supersonic cruise variable cycle engines. Off-design performance is calculated through the use of component performance maps. A chemical equilibrium model is incorporated to adequately predict chemical dissociation as well as model virtually any fuel. NNEP89 is written in standard FORTRAN77 with clear structured programming and extensive internal documentation. The standard FORTRAN77 programming allows it to be installed onto most mainframe computers and workstations without modification. The NNEP89 code was derived from the Navy/NASA Engine program (NNEP). NNEP89 provides many improvements and enhancements to the original NNEP code and incorporates features which make it easier to use for the novice user. This is a comprehensive user's guide for the NNEP89 code.
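The connect-components-at-execution-time idea can be sketched as follows. This is a toy illustration in Python, not NNEP89 input syntax, and the component numbers (pressure ratio, losses, turbine inlet temperature) are invented for the example:

```python
# Toy sketch (not NNEP89 syntax): standard components are modeled as plain
# functions acting on a flow-station dict, and an engine configuration is
# just the ordered list of components assembled at run time.
def inlet(st):
    st["Pt"] *= 0.98          # duct total-pressure loss
    return st

def compressor(st):
    st["Pt"] *= 8.0           # overall pressure ratio
    st["Tt"] *= 1.9           # temperature rise across compressor
    return st

def burner(st):
    st["Tt"] = 1600.0         # turbine inlet total temperature, K
    st["Pt"] *= 0.95          # combustor pressure drop
    return st

def nozzle(st):
    st["Pt"] *= 0.99
    return st

def run_cycle(config, st):
    for component in config:
        st = component(st)
    return st

turbojet = [inlet, compressor, burner, nozzle]   # wired at execution time
out = run_cycle(turbojet, {"Tt": 288.15, "Pt": 101325.0})
```

Swapping or reordering entries in `turbojet` yields a different engine configuration without changing any component code, which is the flexibility the abstract describes.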
Comparison of Engine Cycle Codes for Rocket-Based Combined Cycle Engines
NASA Technical Reports Server (NTRS)
Waltrup, Paul J.; Auslender, Aaron H.; Bradford, John E.; Carreiro, Louis R.; Gettinger, Christopher; Komar, D. R.; McDonald, J.; Snyder, Christopher A.
2002-01-01
This paper summarizes the results from a one-day workshop on Rocket-Based Combined Cycle (RBCC) Engine Cycle Codes held in Monterey, CA, in November 2000 at the 2000 JANNAF JPM, with the authors as primary participants. The objectives of the workshop were to discuss and compare the merits of existing RBCC engine cycle codes being used by government and industry to predict RBCC engine performance and interpret experimental results. These merits included physical and chemical modeling, accuracy, and user friendliness. The ultimate purpose of the workshop was to identify the best codes for analyzing RBCC engines and to document any potential shortcomings, not to demonstrate the merits or deficiencies of any particular engine design. Five cases representative of the operating regimes of typical RBCC engines were used as the basis of these comparisons. These included Mach 0 sea-level static, Mach 1.0 and Mach 2.5 Air-Augmented-Rocket (AAR), Mach 4 subsonic combustion ramjet or dual-mode scramjet, and Mach 8 scramjet operating modes. Specifications of a generic RBCC engine geometry and concomitant component operating efficiencies, bypass ratios, fuel/oxidizer/air equivalence ratios, and flight dynamic pressures were provided. The engine included an air inlet, isolator duct, axial rocket motor/injector, axial wall fuel injectors, diverging combustor, and exit nozzle. Gaseous hydrogen was used as the fuel, with the rocket portion of the system using a gaseous H2/O2 propellant system to avoid cryogenic issues. The results of the workshop, even after post-workshop adjudication of differences, were surprising. They showed that the codes predicted essentially the same performance at the Mach 0 and 1 conditions, but progressively diverged from a common value (for example, for fuel specific impulse, Isp) as the flight Mach number increased, with the largest differences at Mach 8. The example cases and results are compared and discussed in this paper.
Automotive Gas Turbine Power System-Performance Analysis Code
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
1997-01-01
An open-cycle gas turbine numerical modeling code suitable for thermodynamic performance analysis (i.e., thermal efficiency, specific fuel consumption, cycle state points, working fluid flow rates, etc.) of automotive and aircraft powerplant applications has been generated at the NASA Lewis Research Center's Power Technology Division. This code can be made available to automotive gas turbine preliminary design efforts, either in its present version or, assuming that resources can be obtained to incorporate empirical models for component weight and packaging volume, in a later version that includes the weight-volume estimator feature. The paper contains a brief discussion of the capabilities of the presently operational version of the code, including a listing of input and output parameters and actual sample output listings.
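The abstract does not show the code's internals, but the kind of state-point bookkeeping such a cycle code performs can be sketched with the simplest case: ideal-gas Brayton cycle thermal efficiency as a function of compressor pressure ratio (component efficiencies, variable specific heats, and real-gas effects are deliberately omitted here).

```python
# Minimal sketch of open-cycle gas turbine thermodynamics: the ideal
# Brayton cycle thermal efficiency depends only on pressure ratio and
# the specific-heat ratio gamma.
def brayton_thermal_efficiency(pressure_ratio, gamma=1.4):
    """Ideal Brayton cycle: eta = 1 - r^(-(gamma - 1)/gamma)."""
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

eta = brayton_thermal_efficiency(10.0)  # ~0.48 at r = 10, gamma = 1.4
```

A production analysis code like the one described layers component efficiencies, pressure losses, and fuel properties on top of this basic relation to produce the state points and specific fuel consumption named in the abstract.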
Development of Benchmark Examples for Static Delamination Propagation and Fatigue Growth Predictions
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2011-01-01
The development of benchmark examples for static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of an End-Notched Flexure (ENF) specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, static benchmark examples were created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall, the results are encouraging but further assessment for mixed-mode delamination is required.
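Counting cycles per growth increment, as described above, typically rests on a Paris-type power law relating growth rate to the energy release rate. The following is a hedged sketch of that bookkeeping only; the power-law form and constants are illustrative assumptions, not the benchmark's calibrated inputs.

```python
# Hedged sketch: a Paris-type growth law of the kind used in cyclic
# delamination analyses, counting cycles consumed per fixed growth
# increment. C, m, and the G values are illustrative, not calibrated.
def cycles_per_increment(delta_a, G_max, C=1e-3, m=6.0):
    """da/dN = C * G_max^m  =>  dN = da / (C * G_max^m)."""
    return delta_a / (C * G_max ** m)

# The energy release rate typically rises as an ENF delamination grows,
# so each successive increment consumes fewer cycles.
total_cycles = 0.0
for G in (0.20, 0.25, 0.30):   # illustrative G_max per increment, kJ/m^2
    total_cycles += cycles_per_increment(delta_a=0.5, G_max=G)
```

The strong sensitivity of the cycle count to the exponent `m` is one reason the paper found that selecting appropriate input parameters was not straightforward and often required iteration.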
Small Changes Yield Large Results at NIST's Net-Zero Energy Residential Test Facility.
Fanney, A Hunter; Healy, William; Payne, Vance; Kneifel, Joshua; Ng, Lisa; Dougherty, Brian; Ullah, Tania; Omar, Farhad
2017-12-01
The Net-Zero Energy Residential Test Facility (NZERTF) was designed to be approximately 60 % more energy efficient than homes meeting the 2012 International Energy Conservation Code (IECC) requirements. The thermal envelope minimizes heat loss/gain through the use of advanced framing and enhanced insulation. A continuous air/moisture barrier resulted in an air exchange rate of 0.6 air changes per hour at 50 Pa. The home incorporates a vast array of extensively monitored renewable and energy efficient technologies including an air-to-air heat pump system with a dedicated dehumidification cycle; a ducted heat-recovery ventilation system; a whole house dehumidifier; a photovoltaic system; and a solar domestic hot water system. During its first year of operation the NZERTF produced an energy surplus of 1023 kWh. Based on observations during the first year, changes were made to determine if further improvements in energy performance could be obtained. The changes consisted of installing a thermostat that incorporated control logic to minimize the use of auxiliary heat, using a whole house dehumidifier in lieu of the heat pump's dedicated dehumidification cycle, and reducing the ventilation rate to a value that met but did not exceed code requirements. During the second year of operation the NZERTF produced an energy surplus of 2241 kWh. This paper describes the facility, compares the performance data for the two years, and quantifies the energy impact of the weather conditions and operational changes.
An Object-oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2008-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept among several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and for the weight calculations. The tighter integration between NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It would also facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented
Lustenberger, Nadia A; Prodinger, Birgit; Dorjbal, Delgerjargal; Rubinelli, Sara; Schmitt, Klaus; Scheel-Sailer, Anke
2017-09-23
To illustrate how routinely written narrative admission and discharge reports of a rehabilitation program for eight youths with chronic neurological health conditions can be transformed to the International Classification of Functioning, Disability and Health (ICF). First, a qualitative content analysis was conducted by building meaningful units from text segments of the reports assigned to the five elements of the Rehab-Cycle®: goal; assessment; assignment; intervention; evaluation. Second, the meaningful units were linked to the ICF using the refined ICF Linking Rules. With the first step of the transformation, the emphasis of the narrative reports changed to a process-oriented interdisciplinary layout, revealing three thematic blocks of goals: mobility, self-care, and mental and social functions. The 95 unique linked ICF codes could be grouped into clinically meaningful, goal-centered ICF codes. Between the two independent linkers, the agreement rate improved after complementing the rules with additional agreements. The ICF Linking Rules can be used to compile standardized health information from narrative reports if the reports are structured beforehand. The process requires time and expertise. To implement the ICF into common practice, the findings provide a starting point for reporting rehabilitation that builds upon existing practice and adheres to international standards. Implications for Rehabilitation: This study provides evidence that routinely collected health information from rehabilitation practice can be transformed to the International Classification of Functioning, Disability and Health by using the "ICF Linking Rules"; however, this requires time and expertise. The Rehab-Cycle®, including assessments, assignments, goal setting, interventions, and goal evaluation, serves as a feasible framework for structuring this rehabilitation program and ensures that the complexity of local practice is appropriately reflected.
The refined "ICF Linking Rules" lead to a standardized transformation process for narrative text and thus to higher quality and increased transparency. As a next step, the resulting format of goal codes supplemented by goal-clarifying codes could be validated to strengthen the implementation of the International Classification of Functioning, Disability and Health in rehabilitation routine while respecting the variety of clinical practice.
Arsenic Detoxification by Geobacter Species.
Dang, Yan; Walker, David J F; Vautour, Kaitlin E; Dixon, Steven; Holmes, Dawn E
2017-02-15
Insight into the mechanisms for arsenic detoxification by Geobacter species is expected to improve the understanding of global cycling of arsenic in iron-rich subsurface sedimentary environments. Analysis of 14 different Geobacter genomes showed that all of these species have genes coding for an arsenic detoxification system (ars operon), and several have genes required for arsenic respiration (arr operon) and methylation (arsM). Genes encoding four arsenic repressor-like proteins were detected in the genome of G. sulfurreducens; however, only one (ArsR1) regulated transcription of the ars operon. Elimination of arsR1 from the G. sulfurreducens chromosome resulted in enhanced transcription of genes coding for the arsenic efflux pump (Acr3) and arsenate reductase (ArsC). When the gene coding for Acr3 was deleted, cells were not able to grow in the presence of either the oxidized or reduced form of arsenic, while arsC deletion mutants could grow in the presence of arsenite but not arsenate. These studies shed light on how Geobacter influences arsenic mobility in anoxic sediments and may help us develop methods to remediate arsenic contamination in the subsurface. This study examines arsenic transformation mechanisms utilized by Geobacter, a genus of iron-reducing bacteria that are predominant in many anoxic iron-rich subsurface environments. Geobacter species play a major role in microbially mediated arsenic release from metal hydroxides in the subsurface. This release raises arsenic concentrations in drinking water to levels that are high enough to cause major health problems. Therefore, information obtained from studies of Geobacter should shed light on arsenic cycling in iron-rich subsurface sedimentary environments, which may help reduce arsenic-associated illnesses. These studies should also help in the development of biosensors that can be used to detect arsenic contaminants in anoxic subsurface environments. 
We examined 14 different Geobacter genomes and found that all of these species possess genes coding for an arsenic detoxification system (ars operon), and some also have genes required for arsenic respiration (arr operon) and arsenic methylation (arsM). Copyright © 2017 American Society for Microbiology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tome, Carlos N; Caro, J A; Lebensohn, R A
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, but fully coupled fuel simulation codes are required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
NASA Technical Reports Server (NTRS)
Burris, John
2011-01-01
We report the use of a return-to-zero pseudo-noise (RZPN) modulation technique for making range-resolved measurements of CO2 within the planetary boundary layer (PBL) using commercial, off-the-shelf components. Conventional range-resolved DIAL measurements require laser pulse widths that are significantly shorter than the desired spatial resolution and necessitate using pulses whose temporal spacing is such that scattered returns from only a single pulse are observed by the receiver at any one time (for the PBL, pulse separations must be greater than approximately 20 microseconds). This imposes significant operational limitations when using currently available fiber lasers because of the resulting low duty cycle (less than approximately 0.0005) and consequent low average laser output power. The RZPN modulation technique enables a fiber laser to operate at much higher duty cycles (approaching 0.04), thereby more effectively utilizing the amplifier's output. This increases the counts received by approximately two orders of magnitude. Our approach involves employing two distributed feedback (DFB) lasers, each modulated by a different RZPN code, whose outputs are then amplified by a CW fiber amplifier. One laser is tuned to a CO2 absorption line; the other operates offline, thereby permitting the simultaneous acquisition of both on- and offline signals using independent RZPN codes. This minimizes the impact of atmospheric turbulence on the measurement. The on- and offline signals are retrieved by deconvolving the return signal using the appropriate kernels.
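The reason a pseudo-noise code permits range resolution despite overlapping returns is its near-delta circular autocorrelation. The following is a hedged sketch of that principle only (a short synthetic code and profile, not the instrument's actual kernels or processing):

```python
# Hedged sketch of PN-code range retrieval: a bipolar m-sequence has a
# circular autocorrelation of N at lag 0 and -1 elsewhere, so the range
# profile can be recovered from the summed, overlapped returns by
# cross-correlating with the code kernel.
CODE = [1, 1, 1, -1, 1, -1, -1]        # length-7 m-sequence (bipolar)
N = len(CODE)

def simulate_return(profile):
    """Receiver sees the circular convolution of code and range profile."""
    return [sum(CODE[(k - j) % N] * profile[j] for j in range(N))
            for k in range(N)]

def recover_profile(ret):
    """Cross-correlate with the code; correct for the -1 sidelobe floor."""
    corr = [sum(ret[k] * CODE[(k - j) % N] for k in range(N))
            for j in range(N)]
    total = sum(ret)                    # equals sum(profile) for this code
    return [(c + total) / (N + 1) for c in corr]

profile = [0.0, 3.0, 1.0, 0.0, 0.5, 0.0, 0.0]   # synthetic backscatter
assert recover_profile(simulate_return(profile)) == profile
```

The real system works with far longer codes and noisy photon-counting returns, but the correlation step above is the essence of the kernel deconvolution the abstract describes.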
10 CFR 436.42 - Evaluation of Life-Cycle Cost Effectiveness.
Code of Federal Regulations, 2011 CFR
2011-01-01
Federal agencies evaluate the life-cycle cost effectiveness of energy-efficient products using the life-cycle cost analysis method in part 436, subpart A, of title 10 of the Code of Federal Regulations (Federal Energy Management and Planning Programs, Agency Procurement of Energy Efficient Products, § 436.42, Evaluation of Life-Cycle Cost Effectiveness).
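A life-cycle cost comparison of the kind this regulation calls for can be sketched as follows. The discount rate, study period, and cost figures below are illustrative assumptions, not the prescribed FEMP values, and the real method also handles non-fuel O&M costs, replacement costs, and salvage value.

```python
# Hedged sketch in the spirit of a life-cycle cost (LCC) evaluation:
# an alternative is cost effective if its investment plus discounted
# energy costs undercut the base case. Inputs are illustrative only.
def present_value(annual_cost, rate, years):
    """Discount a constant annual cost over the study period."""
    return sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))

def life_cycle_cost(investment, annual_energy_cost, rate=0.03, years=25):
    return investment + present_value(annual_energy_cost, rate, years)

base = life_cycle_cost(investment=0.0, annual_energy_cost=1200.0)
efficient = life_cycle_cost(investment=5000.0, annual_energy_cost=700.0)
cost_effective = efficient < base   # lower LCC wins under this method
```

Here the $5000 premium is recovered through discounted energy savings over the study period, so the efficient alternative has the lower life-cycle cost.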
Migration of the Gaudi and LHCb software repositories from CVS to Subversion
NASA Astrophysics Data System (ADS)
Clemencic, M.; Degaudenzi, H.; LHCb Collaboration
2011-12-01
A common code repository is of primary importance in a distributed development environment such as large HEP experiments. CVS (Concurrent Versions System) has been used in past years at CERN for the hosting of shared software repositories, among which were the repositories for the Gaudi Framework and the LHCb software projects. Many alternative systems for sharing code and revisions among several developers have since been produced, mainly to overcome the limitations of CVS, and CERN has recently started a new code-hosting service based on the version control system Subversion. The differences between CVS and Subversion and the way the code was organized in the Gaudi and LHCb CVS repositories required careful study and planning of the migration. Special care was taken to define the organization of the new Subversion repository. To minimize disruption to the development cycle, the migration was gradual, aided by tools developed explicitly to hide the differences between the two systems. The principles guiding the migration steps, the organization of the Subversion repository, and the tools developed are presented, as well as the problems encountered from both the librarian and user points of view.
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.; Jones, Scott M.
1991-01-01
This analysis and this computer code apply to full, split, and dual expander cycles. Heat regeneration from the turbine exhaust to the pump exhaust is allowed. The combustion process is modeled as one of chemical equilibrium in an infinite-area or a finite-area combustor. Gas composition in the nozzle may be either equilibrium or frozen during expansion. This report, which serves as a user's guide for the computer code, describes the system, the analysis methodology, and the program input and output. Sample calculations are included to show effects of key variables such as nozzle area ratio and oxidizer-to-fuel mass ratio.
Enhancement/upgrade of Engine Structures Technology Best Estimator (EST/BEST) Software System
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2003-01-01
This report describes the work performed during the contract period and the capabilities included in the EST/BEST software system. The developed EST/BEST software system includes the integrated NESSUS, IPACS, COBSTRAN, and ALCCA computer codes required to perform the engine cycle mission and component structural analysis. Also, the interactive input generators for the NESSUS, IPACS, and COBSTRAN computer codes have been developed and integrated with the EST/BEST software system. The input generator allows the user to create input from scratch as well as edit existing input files interactively. Since it has been integrated with the EST/BEST software system, it enables the user to modify EST/BEST-generated files and perform the analysis to evaluate the benefits. Appendix A gives details of how to use the newly added features in the EST/BEST software system.
HRB-22 preirradiation thermal analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Acharya, R.; Sawa, K.
1995-05-01
This report describes the preirradiation thermal analysis of the HRB-22 capsule designed for irradiation in the removable beryllium (RB) position of the High Flux Isotope Reactor (HFIR) at Oak Ridge National Laboratory (ORNL). CACA-2, a heavy-isotope and fission-product concentration calculational code for experimental irradiation capsules, was used to determine time-dependent fission power for the fuel compacts. The Heat Engineering and Transfer in Nine Geometries (HEATING) computer code, version 7.2, was used to solve the steady-state heat conduction problem. The diameters of the graphite fuel body that contains the compacts and of the primary pressure vessel were selected such that the requirements of running the compacts at an average temperature of < 1,250 C and not exceeding a maximum fuel temperature of 1,350 C were met throughout the four cycles of irradiation.
NASA Technical Reports Server (NTRS)
Foster, Lancert E.; Saunders, John D., Jr.; Sanders, Bobby W.; Weir, Lois J.
2012-01-01
NASA is focused on technologies for combined cycle, air-breathing propulsion systems to enable reusable launch systems for access to space. Turbine Based Combined Cycle (TBCC) propulsion systems offer specific impulse (Isp) improvements over rocket-based propulsion systems in the subsonic takeoff and return mission segments, along with improved safety. Among the most critical TBCC enabling technologies are: 1) mode transition from the low speed propulsion system to the high speed propulsion system, 2) high Mach turbine engine development, and 3) innovative turbine based combined cycle integration. To address these challenges, NASA initiated an experimental mode transition task, including analytical methods to assess the state of the art of propulsion system performance and design codes. One effort has been the Combined-Cycle Engine Large Scale Inlet Mode Transition Experiment (CCE-LIMX), which is a fully integrated TBCC propulsion system with flowpath sizing consistent with previous NASA and DoD proposed hypersonic experimental flight test plans. This experiment was tested in the NASA GRC 10- by 10-Foot Supersonic Wind Tunnel (SWT) Facility. The goal of this activity is to address key hypersonic combined-cycle engine issues, including: (1) dual integrated inlet operability and performance issues: unstart constraints, distortion constraints, bleed requirements, and controls; (2) mode-transition sequence elements caused by switching between the turbine and the ramjet/scramjet flowpaths (imposed variable-geometry requirements); and (3) turbine engine transients (and associated time scales) during transition. Testing of the initial inlet and dynamic characterization phases was completed, and smooth mode transition was demonstrated. A database focused on a Mach 4 transition speed with limited off-design elements was developed and will serve to guide future TBCC system studies and to validate higher level analyses.
IPAC-Inlet Performance Analysis Code
NASA Technical Reports Server (NTRS)
Barnhart, Paul J.
1997-01-01
A series of analyses have been developed which permit the calculation of the performance of common inlet designs. The methods presented are useful for determining the inlet weight flows, total pressure recovery, and aerodynamic drag coefficients for given inlet geometric designs. Limited geometric input data is required to use this inlet performance prediction methodology. The analyses presented here may also be used to perform inlet preliminary design studies. The calculated inlet performance parameters may be used in subsequent engine cycle analyses or installed engine performance calculations for existing uninstalled engine data.
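As an illustration of the kind of quantity such an inlet-performance code returns, the sketch below evaluates inlet total pressure recovery versus flight Mach number using the MIL-E-5008B ram-recovery standard. This is a generic textbook correlation, not necessarily the method IPAC itself uses.

```python
def ram_recovery(mach):
    """Inlet total-pressure recovery vs. flight Mach number per the
    MIL-E-5008B standard (illustrative stand-in for IPAC's methods)."""
    if mach <= 1.0:
        return 1.0           # no shock losses in subsonic flight
    elif mach <= 5.0:
        return 1.0 - 0.075 * (mach - 1.0) ** 1.35
    else:                    # hypersonic branch of the standard
        return 800.0 / (mach ** 4 + 935.0)

for m in (0.8, 1.5, 2.5, 4.0):
    print(f"M = {m:.1f}: recovery = {ram_recovery(m):.3f}")
```

A real inlet analysis would add weight flow and drag bookkeeping for the specific inlet geometry; the correlation above only captures the recovery trend with Mach number.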
Time-Shifted Boundary Conditions Used for Navier-Stokes Aeroelastic Solver
NASA Technical Reports Server (NTRS)
Srivastava, Rakesh
1999-01-01
Under the Advanced Subsonic Technology (AST) Program, an aeroelastic analysis code (TURBO-AE) based on the Navier-Stokes equations is currently under development at NASA Lewis Research Center's Machine Dynamics Branch. For a blade row, aeroelastic instability can occur at any of the possible interblade phase angles (IBPAs). Analyzing small IBPAs is very computationally expensive because a large number of blade passages must be simulated. To reduce the computational cost of these analyses, we used time-shifted, or phase-lagged, boundary conditions in the TURBO-AE code. These conditions can be used to reduce the computational domain to a single blade passage by requiring the boundary conditions across the passage to be lagged depending on the IBPA being analyzed. The time-shifted boundary conditions currently implemented are based on the direct-store method. This method requires large amounts of data to be stored over a period of the oscillation cycle. On CRAY computers this is not a major problem because solid-state devices can be used for fast input and output to read and write the data onto a disk instead of storing it in core memory.
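The direct-store idea can be sketched as a ring buffer holding one oscillation cycle of boundary data, from which the periodic neighbor's state is read back with a time lag set by the IBPA. The class and names below are illustrative, not TURBO-AE internals.

```python
import numpy as np

def phase_lag_steps(ibpa_deg, steps_per_cycle):
    """Time-step offset corresponding to an interblade phase angle."""
    return int(round(ibpa_deg / 360.0 * steps_per_cycle))

class DirectStoreBC:
    """Direct-store phase-lagged boundary condition (illustrative sketch).

    Stores the boundary solution over one oscillation cycle in a ring
    buffer; the periodic neighbor's state is the stored state lagged
    by the IBPA, so only a single blade passage need be simulated."""

    def __init__(self, n_points, steps_per_cycle, ibpa_deg):
        self.buf = np.zeros((steps_per_cycle, n_points))
        self.lag = phase_lag_steps(ibpa_deg, steps_per_cycle)
        self.n = steps_per_cycle

    def store(self, step, boundary_state):
        self.buf[step % self.n] = boundary_state

    def lagged(self, step):
        return self.buf[(step - self.lag) % self.n]
```

The storage cost is one cycle of boundary data per periodic face, which is the trade the abstract alludes to: memory (or fast disk) in exchange for simulating one passage instead of many.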
LINFLUX-AE: A Turbomachinery Aeroelastic Code Based on a 3-D Linearized Euler Solver
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, M. A.; Trudell, J. J.; Mehmed, O.; Stefko, G. L.
2004-01-01
This report describes the development and validation of LINFLUX-AE, a turbomachinery aeroelastic code based on the linearized unsteady 3-D Euler solver, LINFLUX. A helical fan with flat plate geometry is selected as the test case for numerical validation. The steady solution required by LINFLUX is obtained from the nonlinear Euler/Navier Stokes solver TURBO-AE. The report briefly describes the salient features of LINFLUX and the details of the aeroelastic extension. The aeroelastic formulation is based on a modal approach. An eigenvalue formulation is used for flutter analysis. The unsteady aerodynamic forces required for flutter are obtained by running LINFLUX for each mode, interblade phase angle and frequency of interest. The unsteady aerodynamic forces for forced response analysis are obtained from LINFLUX for the prescribed excitation, interblade phase angle, and frequency. The forced response amplitude is calculated from the modal summation of the generalized displacements. The unsteady pressures, work done per cycle, eigenvalues and forced response amplitudes obtained from LINFLUX are compared with those obtained from LINSUB, TURBO-AE, ASTROP2, and ANSYS.
NASA Astrophysics Data System (ADS)
Punov, Plamen; Milkov, Nikolay; Danel, Quentin; Perilhon, Christelle; Podevin, Pierre; Evtimov, Teodossi
2017-02-01
An optimization study of the Rankine cycle as a function of diesel engine operating mode is presented. The Rankine cycle here is studied as a waste heat recovery system that uses the engine exhaust gases as its heat source. The engine exhaust gas parameters (temperature, mass flow, and composition) were defined by means of numerical simulation in the advanced simulation software AVL Boost. The engine simulation model had previously been validated, and the Vibe function parameters were defined as a function of engine load. The Rankine cycle output power and efficiency were numerically estimated by means of a simulation code written in Python(x,y). This code includes a discretized heat exchanger model and simplified models of the pump and the expander based on their isentropic efficiencies. The Rankine cycle simulation revealed the optimum values of working fluid mass flow and evaporation pressure for the given heat source. Thus, the optimal Rankine cycle performance was obtained over the engine operating map.
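The structure of such an optimization can be sketched as a grid search over evaporation pressure and working-fluid mass flow. The net-power surrogate below is a deliberately crude stand-in with no real fluid properties; the paper's code uses a discretized heat-exchanger model and isentropic component efficiencies instead.

```python
def net_power(p_evap_bar, m_dot, eta_exp=0.7, eta_pump=0.6):
    """Toy surrogate for Rankine net power (kW) vs. evaporation pressure
    and mass flow. NOT real fluid properties; shape only: heat pickup
    saturates at the heat-source limit, cycle efficiency rises with
    pressure, pump work grows with both pressure and mass flow."""
    heat_in = min(m_dot * 400.0, 80.0)                  # heat-source limit, kW
    eta_cycle = eta_exp * (1.0 - (1.0 / p_evap_bar) ** 0.25)
    pump_work = m_dot * 0.5 * p_evap_bar * (1.0 - eta_pump)
    return heat_in * eta_cycle - pump_work

# Grid search over the two decision variables, as in the paper's study.
best = max((net_power(p, m), p, m)
           for p in range(5, 41, 5)
           for m in [0.05 * k for k in range(1, 11)])
print(best)
```

The optimum lands where the heat pickup just saturates: raising mass flow further only adds pump work, which mirrors the existence of an optimum mass flow reported in the abstract.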
Overview of the Turbine Based Combined Cycle Discipline
NASA Technical Reports Server (NTRS)
Thomas, Scott R.; Walker, James F.; Pittman, James L.
2009-01-01
The NASA Fundamental Aeronautics Hypersonics project is focused on technologies for combined cycle, airbreathing propulsion systems to enable reusable launch systems for access to space. Turbine Based Combined Cycle (TBCC) propulsion systems offer specific impulse (Isp) improvements over rocket-based propulsion systems in the subsonic takeoff and return mission segments and offer improved safety. Additional benefits include the potential for more aircraft-like operations with expanded launch site capability and reduced system maintenance. The most critical TBCC enabling technologies identified in the National Aeronautics Institute (NAI) study were: 1) mode transition from the low speed propulsion system to the high speed propulsion system, 2) high Mach turbine engine development, 3) transonic aero-propulsion performance, 4) low-Mach-number dual-mode scramjet operation, 5) innovative 3-D flowpath concepts and 6) innovative turbine based combined cycle integration. To address several of these key TBCC challenges, NASA's Hypersonics project (TBCC Discipline) initiated an experimental mode transition task that includes an analytic research endeavor to assess the state of the art of propulsion system performance and design codes. This initiative includes inlet fluid and turbine performance codes and engineering-level algorithms. The effort has been focused on the Combined Cycle Engine Large-Scale Inlet Mode Transition Experiment (CCE LIMX), a fully integrated TBCC propulsion system with flow path sizing consistent with previous NASA and DoD proposed hypersonic experimental flight test plans. This experiment is being tested in the NASA-GRC 10 x 10 Supersonic Wind Tunnel (SWT) Facility.
The goal of this activity is to address key hypersonic combined-cycle-engine issues: (1) dual integrated inlet operability and performance (unstart constraints, distortion constraints, bleed requirements, controls, and operability margins), (2) mode-transition constraints imposed by the turbine and the ramjet/scramjet flow paths (imposed variable geometry requirements), (3) turbine engine transients (and associated time scales) during transition, (4) high-altitude turbine engine re-light, and (5) the operating constraints of a Mach 3-7 combustor (specific to the TBCC). The model will be tested in several test phases to develop a unique TBCC database to assess and validate design and analysis tools and to address operability, integration, and interaction issues for this class of advanced propulsion systems. The test article and all support equipment are complete and available at the facility. The test article installation and facility build-up in preparation for the inlet performance and operability characterization are near completion, and testing is planned to commence in FY11.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzgrewe, F.; Hegedues, F.; Paratte, J.M.
1995-03-01
The light water reactor BOXER code was used to determine the fast azimuthal neutron fluence distribution at the inner surface of the reactor pressure vessel after the tenth cycle of a pressurized water reactor (PWR). Using a cross-section library in 45 groups, fixed-source calculations in transport theory and x-y geometry were carried out to determine the fast azimuthal neutron flux distribution at the inner surface of the pressure vessel for four different cycles. From these results, the fast azimuthal neutron fluence after the tenth cycle was estimated and compared with the results obtained from scraping test experiments. In these experiments, small samples of material were taken from the inner surface of the pressure vessel, and the fast neutron fluence was then determined from the measured activity of the samples. The BOXER and scraping test results show maximal differences of 15%, which is very good considering the factor of 10{sup 3} neutron attenuation between the reactor core and the pressure vessel. To compare the BOXER results with an independent code, the 21st cycle of the PWR was also calculated with the TWODANT two-dimensional transport code, using the same group structure and cross-section library. Deviations in the fast azimuthal flux distribution were found to be <3%, which verifies the accuracy of the BOXER results.
Chen, Wenxi; Kitazawa, Masumi; Togawa, Tatsuo
2009-09-01
This paper proposes a method to estimate a woman's menstrual cycle based on the hidden Markov model (HMM). A tiny device was developed that attaches around the abdominal region to measure cutaneous temperature at 10-min intervals during sleep. The measured temperature data were encoded as a two-dimensional image (QR code, i.e., quick response code) and displayed in the LCD window of the device. A mobile phone captured the QR code image, decoded the information and transmitted the data to a database server. The collected data were analyzed by three steps to estimate the biphasic temperature property in a menstrual cycle. The key step was an HMM-based step between preprocessing and postprocessing. A discrete Markov model, with two hidden phases, was assumed to represent higher- and lower-temperature phases during a menstrual cycle. The proposed method was verified by the data collected from 30 female participants, aged from 14 to 46, over six consecutive months. By comparing the estimated results with individual records from the participants, 71.6% of 190 menstrual cycles were correctly estimated. The sensitivity and positive predictability were 91.8 and 96.6%, respectively. This objective evaluation provides a promising approach for managing premenstrual syndrome and birth control.
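A minimal version of the biphasic decoding step can be sketched as a two-state HMM with Gaussian emissions decoded by the Viterbi algorithm. The temperature means, spread, and self-transition probability below are assumed illustrative values, not the paper's fitted parameters.

```python
import numpy as np

def viterbi_two_phase(temps, mu=(36.3, 36.8), sigma=0.15, p_stay=0.95):
    """Decode lower/higher temperature phases with a 2-state HMM
    (Gaussian emissions; parameters illustrative, not the paper's)."""
    temps = np.asarray(temps, dtype=float)
    # Log emission likelihoods for each state (constant terms cancel).
    log_e = np.stack([-0.5 * ((temps - m) / sigma) ** 2 for m in mu])
    log_a = np.log(np.array([[p_stay, 1 - p_stay],
                             [1 - p_stay, p_stay]]))
    T = temps.size
    delta = np.zeros((2, T))
    psi = np.zeros((2, T), dtype=int)
    delta[:, 0] = np.log(0.5) + log_e[:, 0]
    for t in range(1, T):
        scores = delta[:, t - 1][:, None] + log_a        # (from, to)
        psi[:, t] = np.argmax(scores, axis=0)            # best predecessor
        delta[:, t] = scores[psi[:, t], range(2)] + log_e[:, t]
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[:, -1])
    for t in range(T - 2, -1, -1):                       # backtrack
        path[t] = psi[path[t + 1], t + 1]
    return path  # 0 = lower phase, 1 = higher phase

temps = [36.3] * 10 + [36.8] * 8
print(viterbi_two_phase(temps))
```

The self-transition probability acts as the smoothing knob: it suppresses single-night temperature spikes from being decoded as phase changes, which is the point of using an HMM rather than a simple threshold.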
Value-Based Requirements Traceability: Lessons Learned
NASA Astrophysics Data System (ADS)
Egyed, Alexander; Grünbacher, Paul; Heindl, Matthias; Biffl, Stefan
Traceability from requirements to code is mandated by numerous software development standards. These standards, however, are not explicit about the appropriate level of quality of trace links. From a technical perspective, trace quality should meet the needs of the intended trace utilizations. Unfortunately, long-term trace utilizations are typically unknown at the time of trace acquisition which represents a dilemma for many companies. This chapter suggests ways to balance the cost and benefits of requirements traceability. We present data from three case studies demonstrating that trace acquisition requires broad coverage but can tolerate imprecision. With this trade-off our lessons learned suggest a traceability strategy that (1) provides trace links more quickly, (2) refines trace links according to user-defined value considerations, and (3) supports the later refinement of trace links in case the initial value consideration has changed over time. The scope of our work considers the entire life cycle of traceability instead of just the creation of trace links.
NASA Astrophysics Data System (ADS)
Lahaye, S.; Huynh, T. D.; Tsilanizara, A.
2016-03-01
Uncertainty quantification of outputs of interest in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long term deposits. Most of those outputs are functions of the isotopic vector density, which is estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate the uncertainty from nuclear data inputs to isotopic concentrations and decay heat by two different methods. This paper shows comparisons between those two codes on a Uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1 and JENDL-2011) is also examined. All results show good agreement between both codes and methods, ensuring the reliability of both approaches for a given evaluation.
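One common propagation method is stochastic sampling of the nuclear data. The one-nuclide sketch below propagates a decay-constant uncertainty to the concentration by Monte Carlo; actual codes such as DARWIN/PEPIN2 and MENDEL handle full depletion chains and covariance matrices, so this only shows the principle.

```python
import math
import random

random.seed(1)  # reproducible sampling

def propagate(lambda_mean, lambda_rel_unc, t, n_samples=10000):
    """Monte Carlo propagation of a decay-constant uncertainty to the
    concentration N(t)/N0 = exp(-lambda * t) for a single nuclide."""
    samples = []
    for _ in range(n_samples):
        lam = random.gauss(lambda_mean, lambda_rel_unc * lambda_mean)
        samples.append(math.exp(-lam * t))
    mean = sum(samples) / n_samples
    var = sum((s - mean) ** 2 for s in samples) / (n_samples - 1)
    return mean, math.sqrt(var)

mean, std = propagate(lambda_mean=1e-3, lambda_rel_unc=0.05, t=1000.0)
print(f"N(t)/N0 = {mean:.4f} +/- {std:.4f}")
```

For this single-exponential case the sampled standard deviation should agree with the first-order estimate |dN/dlambda| * sigma_lambda, which is a useful sanity check on either method.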
NASA Astrophysics Data System (ADS)
Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.
2018-02-01
The use of computer programs for PWR-type core neutronic design parameter analysis has been demonstrated in previous studies, which included validation of the computed neutronic parameters against measured data and benchmark calculations. In this study, validation and analysis of the AP1000 first cycle core radial power peaking factor were performed using the CITATION module of the SRAC2006 computer code. The code has also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution was calculated in two-dimensional X-Y geometry using a quarter-core model. The purpose of this research is to determine the accuracy of the SRAC2006 code and to assess the safety performance of the AP1000 core during its first operating cycle. The core calculations were carried out for several conditions: without a Rod Cluster Control Assembly (RCCA), with insertion of a single RCCA (AO, M1, M2, MA, MB, MC, MD), and with insertion of multiple RCCAs (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum fuel rod power factor in the fuel assembly was approximately 1.406. The analysis showed that the two-dimensional CITATION module of the SRAC2006 code is accurate for AP1000 power distribution calculations without RCCA and with MA + MB RCCA insertion. The power peaking factors for the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertion, remain below the safety limit (about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core can therefore be considered safe during its first operating cycle.
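The safety check reported above reduces to comparing a peaking factor against a limit. A minimal sketch, with invented rod powers (the 1.798 limit is quoted from the abstract):

```python
def radial_peaking_factor(rod_powers):
    """Radial power peaking factor: hottest rod relative to the
    core-average rod power (dimensionless)."""
    avg = sum(rod_powers) / len(rod_powers)
    return max(rod_powers) / avg

# Illustrative normalized rod powers, not SRAC2006/CITATION output:
powers = [0.92, 1.05, 1.10, 1.41, 0.98, 1.02, 0.95, 1.00]
f = radial_peaking_factor(powers)
print(f"peaking factor = {f:.3f}, within limit: {f < 1.798}")
```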
Computational Infrastructure for Engine Structural Performance Simulation
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1997-01-01
Select computer codes developed over the years to simulate specific aspects of engine structures are described. These codes include blade impact integrated multidisciplinary analysis and optimization, progressive structural fracture, quantification of uncertainties for structural reliability and risk, benefits estimation of new technology insertion, and hierarchical simulation of engine structures made from metal matrix and ceramic matrix composites. Collectively, these codes constitute a unique infrastructure ready to credibly evaluate new and future engine structural concepts throughout the development cycle: from initial concept, to design and fabrication, to service performance, maintenance, and repairs, to retirement for cause, and even to possible recycling. Stated differently, they provide 'virtual' concurrent engineering for the total life-cycle cost of engine structures.
SERIIUS-MAGEEP Visiting Scholars Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortega, Jesus D.
2014-08-28
Recent studies have found closed-loop supercritical carbon dioxide (s-CO 2) Brayton cycles to be higher energy-density systems in comparison to equivalent superheated steam Rankine systems. At turbine inlet conditions of 700°C and 20 MPa, a cycle thermal efficiency of ~50% can be achieved. Achieving these high efficiencies will help concentrating solar power (CSP) technologies to become a competitive alternative to current power generation methods. To incorporate an s-CO 2 Brayton power cycle in a solar power tower system, the development of a solar receiver capable of providing an outlet temperature of 700°C (at 20 MPa) is necessary. To satisfy the temperature requirements of an s-CO 2 Brayton cycle with recuperation and recompression, the s-CO 2 must undergo a temperature rise of ~200°C as it flows through the solar receiver. The main objective is to develop an optical-thermal-fluid and structural model to validate a tubular receiver that will receive a heat input of ~0.33 MWth from the heliostat field at the National Solar Thermal Test Facility (NSTTF), Albuquerque, NM, USA. We also commenced the development of computational models and testing of air receivers being developed by the Indian Institute of Science (IISc) and the Indian Institute of Technology in Bombay (IIT-B). The helical tubular receiver is expected to counteract the effect of thermal expansion while using a cavity to reduce the radiative and convective losses. Initially, this receiver will be tested for a temperature range of 100-300°C under 1 MPa of pressurized air. The helical air receiver will be exposed to 10 kWth to achieve a temperature rise of ~200°C. Preliminary tests to validate the modeling will be performed before the design and construction of a larger scale receiver. Lastly, I focused on the development of a new computational tool that would allow us to perform a nodal creep-fatigue analysis on the receivers and heat exchangers being developed.
This tool was developed using MATLAB and is capable of processing the combined results obtained from ANSYS Fluent and ANSYS Structural, a capability that was limited when using commercial software. The main advantage of this code is that it can be modified to run in parallel, making it more affordable and faster than available commercial codes. The code is in the process of validation and is currently being compared to nCode DesignLife.
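The receiver sizing quoted above follows from a simple energy balance, m_dot = Q / (cp * dT). In the sketch below the heat input (~0.33 MWth) and temperature rise (~200°C) come from the abstract, while the cp value is an assumed representative number for s-CO2 near the stated conditions, not a quoted one.

```python
def required_mass_flow(q_kw, cp_kj_per_kg_k, delta_t_k):
    """Receiver energy balance: mass flow needed to absorb heat input
    q_kw (kW) with a temperature rise delta_t_k (K)."""
    return q_kw / (cp_kj_per_kg_k * delta_t_k)

# cp ~ 1.2 kJ/kg-K is an assumed representative s-CO2 value near 20 MPa.
m_dot = required_mass_flow(q_kw=330.0, cp_kj_per_kg_k=1.2, delta_t_k=200.0)
print(f"required s-CO2 mass flow ~ {m_dot:.2f} kg/s")
```

Since cp of s-CO2 varies strongly with temperature and pressure, a real model (like the optical-thermal-fluid model described) would integrate properties along the flow path rather than use a single cp.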
Enhanced absorption cycle computer model
NASA Astrophysics Data System (ADS)
Grossman, G.; Wilk, M.
1993-09-01
Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the user's manual.
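At its simplest, the figure of merit such a simulation reports is the COP: cooling delivered per unit of driving heat. A minimal sketch with invented unit duties (the real code assembles these from modular unit subroutines and working-fluid property routines):

```python
def absorption_cop(q_evaporator, q_generator, w_pump=0.0):
    """Coefficient of performance of an absorption chiller: cooling duty
    per unit of driving heat (solution-pump work is usually small)."""
    return q_evaporator / (q_generator + w_pump)

# Illustrative duties (kW), not outputs of the report's code:
print(round(absorption_cop(q_evaporator=10.0, q_generator=14.0), 3))
```

In the modular simulation described above, these duties are not inputs but outcomes: the solver determines temperature, flow rate, concentration, pressure, and vapor fraction at each state point first, and the COP falls out of the resulting unit heat duties.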
Combinatorial pulse position modulation for power-efficient free-space laser communications
NASA Technical Reports Server (NTRS)
Budinger, James M.; Vanderaar, M.; Wagner, P.; Bibyk, Steven
1993-01-01
A new modulation technique called combinatorial pulse position modulation (CPPM) is presented as a power-efficient alternative to quaternary pulse position modulation (QPPM) for direct-detection, free-space laser communications. The special case of 16C4PPM is compared to QPPM in terms of data throughput and bit error rate (BER) performance for similar laser power and pulse duty cycle requirements. The increased throughput from CPPM enables the use of forward error corrective (FEC) encoding for a net decrease in the amount of laser power required for a given data throughput compared to uncoded QPPM. A specific, practical case of coded CPPM is shown to reduce the amount of power required to transmit and receive a given data sequence by at least 4.7 dB. Hardware techniques for maximum likelihood detection and symbol timing recovery are presented.
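The throughput advantage of 16C4PPM over QPPM can be checked by counting symbols: choosing 4 pulse slots out of 16 gives C(16,4) = 1820 patterns, of which a power-of-two subset carries data. The sketch below uses a simple truncation of the enumeration as the codebook mapping, which is one possible mapping, not necessarily the paper's.

```python
from itertools import combinations
from math import comb, floor, log2

n_slots, n_pulses = 16, 4
n_symbols = comb(n_slots, n_pulses)          # distinct 4-of-16 pulse patterns
bits_per_symbol = floor(log2(n_symbols))     # usable bits with a binary map

# QPPM baseline: one pulse per 4 slots carries 2 bits, so 16 slots carry:
qppm_bits_same_slots = 2 * (n_slots // 4)

# One simple codebook: the first 2**bits_per_symbol patterns in order.
codebook = list(combinations(range(n_slots), n_pulses))[:2 ** bits_per_symbol]
print(n_symbols, bits_per_symbol, qppm_bits_same_slots)
```

So 16C4PPM carries 10 bits where QPPM carries 8 over the same 16 slots at the same pulse duty cycle; that 25% raw-throughput margin is what leaves room for the FEC overhead mentioned in the abstract.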
RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade
2014-09-30
Hardware counters were used to measure several performance metrics, including the number of double-precision (DP) floating-point operations (FLOPs) ... 0.2 DP FLOPs per CPU cycle. Experience with production science code is that it is possible to achieve execution rates in the range of 0.5 to 1.0 DP FLOPs per cycle. Looking at the ratio of vectorized DP FLOPs to total DP FLOPs, we see (Figure PROF) that for most of the execution time the ...
10 CFR 434.607 - Life cycle cost analysis criteria.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle cost...
10 CFR 434.607 - Life cycle cost analysis criteria.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle cost...
10 CFR 434.607 - Life cycle cost analysis criteria.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle cost...
A multicenter collaborative approach to reducing pediatric codes outside the ICU.
Hayes, Leslie W; Dobyns, Emily L; DiGiovine, Bruno; Brown, Ann-Marie; Jacobson, Sharon; Randall, Kelly H; Wathen, Beth; Richard, Heather; Schwab, Carolyn; Duncan, Kathy D; Thrasher, Jodi; Logsdon, Tina R; Hall, Matthew; Markovitz, Barry
2012-03-01
The Child Health Corporation of America formed a multicenter collaborative to decrease the rate of pediatric codes outside the ICU by 50%, double the days between these events, and improve the patient safety culture scores by 5 percentage points. A multidisciplinary pediatric advisory panel developed a comprehensive change package of process improvement strategies and measures for tracking progress. Learning sessions, conference calls, and data submission facilitated collaborative group learning and implementation. Twenty Child Health Corporation of America hospitals participated in this 12-month improvement project. Each hospital identified at least 1 noncritical care target unit in which to implement selected elements of the change package. Strategies to improve prevention, detection, and correction of the deteriorating patient ranged from relatively simple, foundational changes to more complex, advanced changes. Each hospital selected a broad range of change package elements for implementation using rapid-cycle methodologies. The primary outcome measure was reduction in codes per 1000 patient days. Secondary outcomes were days between codes and change in patient safety culture scores. Code rate for the collaborative did not decrease significantly (3% decrease). Twelve hospitals reported additional data after the collaborative and saw significant improvement in code rates (24% decrease). Patient safety culture scores improved by 4.5% to 8.5%. A complex process, such as patient deterioration, requires sufficient time and effort to achieve improved outcomes and create a deeply embedded culture of patient safety. The collaborative model can accelerate improvements achieved by individual institutions.
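The primary outcome metric above is a simple rate. The sketch below shows the calculation with invented counts chosen to produce a 24% decrease like the one reported; these are not the study's actual data.

```python
def codes_per_1000_patient_days(n_codes, patient_days):
    """Standard event-rate normalization used for code rates."""
    return 1000.0 * n_codes / patient_days

def percent_change(before, after):
    return 100.0 * (after - before) / before

# Illustrative counts only, not the collaborative's data:
before = codes_per_1000_patient_days(n_codes=25, patient_days=50000)
after = codes_per_1000_patient_days(n_codes=19, patient_days=50000)
print(round(percent_change(before, after)))
```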
Quicklook overview of model changes in Melcor 2.2: Rev 6342 to Rev 9496
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L.
2017-05-01
MELCOR 2.2 is a significant official release of the MELCOR code with many new models and model improvements. This report provides the code user with a quick review and characterization of new models added, changes to existing models, the effect of code changes during this code development cycle (rev 6342 to rev 9496), and a preview of validation results with this code version. More detailed information can be found in the code Subversion logs as well as the User Guide and Reference Manuals.
Density-matrix simulation of small surface codes under current and projected experimental noise
NASA Astrophysics Data System (ADS)
O'Brien, T. E.; Tarasinski, B.; DiCarlo, L.
2017-09-01
We present a density-matrix simulation of the quantum memory and computing performance of the distance-3 logical qubit Surface-17, following a recently proposed quantum circuit and using experimental error parameters for transmon qubits in a planar circuit QED architecture. We use this simulation to optimize components of the QEC scheme (e.g., trading off stabilizer measurement infidelity for reduced cycle time) and to investigate the benefits of feedback harnessing the fundamental asymmetry of relaxation-dominated error in the constituent transmons. A lower-order approximate calculation extends these predictions to the distance-5 Surface-49. These results clearly indicate error rates below the fault-tolerance threshold of the surface code, and the potential for Surface-17 to perform beyond the break-even point of quantum memory. However, Surface-49 is required to surpass the break-even point of computation at state-of-the-art qubit relaxation times and readout speeds.
Design of supercritical swept wings
NASA Technical Reports Server (NTRS)
Garabedian, P.; Mcfadden, G.
1982-01-01
Computational fluid dynamics is used to examine problems inherent to transonic three-dimensional flow past supercritical swept wings. The formulation of a boundary value problem for the flow past the wing is provided, including consideration of weak shock waves and the use of parabolic coordinates. A swept wing code is developed which requires a mesh of 152 x 10 x 12 points and 200 time cycles. A formula for wave drag is derived, based on the idea that the conservation form of the momentum equation becomes an entropy inequality measuring the drag, expressible in terms of a small-disturbance equation for a potential function in two dimensions. The entropy inequality has been incorporated into a two-dimensional code for the analysis of transonic flow over airfoils. A method of artificial viscosity is explored for optimum pressure distributions in design, involving a free boundary problem that considers speed over only a portion of the wing.
Computational Fluid Dynamics Analysis Method Developed for Rocket-Based Combined Cycle Engine Inlet
NASA Technical Reports Server (NTRS)
1997-01-01
Renewed interest in hypersonic propulsion systems has led to research programs investigating combined cycle engines that are designed to operate efficiently across the flight regime. The Rocket-Based Combined Cycle Engine is a propulsion system under development at the NASA Lewis Research Center. This engine integrates a high specific impulse, low thrust-to-weight, airbreathing engine with a low-impulse, high thrust-to-weight rocket. From takeoff to Mach 2.5, the engine operates as an air-augmented rocket. At Mach 2.5, the engine becomes a dual-mode ramjet; and beyond Mach 8, the rocket is turned back on. One Rocket-Based Combined Cycle Engine variation known as the "Strut-Jet" concept is being investigated jointly by NASA Lewis, the U.S. Air Force, Gencorp Aerojet, General Applied Science Labs (GASL), and Lockheed Martin Corporation. Work thus far has included wind tunnel experiments and computational fluid dynamics (CFD) investigations with the NPARC code. The CFD method was initiated by modeling the geometry of the Strut-Jet with the GRIDGEN structured grid generator. Grids representing a subscale inlet model and the full-scale demonstrator geometry were constructed. These grids modeled one-half of the symmetric inlet flow path, including the precompression plate, diverter, center duct, side duct, and combustor. After the grid generation, full Navier-Stokes flow simulations were conducted with the NPARC Navier-Stokes code. The Chien low-Reynolds-number k-e turbulence model was employed to simulate the high-speed turbulent flow. Finally, the CFD solutions were postprocessed with a Fortran code. This code provided wall static pressure distributions, pitot pressure distributions, mass flow rates, and internal drag. These results were compared with experimental data from a subscale inlet test for code validation; then they were used to help evaluate the demonstrator engine net thrust.
Hinze, Jacob F.; Nellis, Gregory F.; Anderson, Mark H.
2017-09-21
Supercritical Carbon Dioxide (sCO 2) power cycles have the potential to deliver high efficiency at low cost. However, in order for an sCO 2 cycle to reach high efficiency, highly effective recuperators are needed. These recuperative heat exchangers must transfer heat at a rate that is substantially larger than the heat transfer to the cycle itself and can therefore represent a significant portion of the power block costs. Regenerators are proposed as a cost saving alternative to high cost printed circuit recuperators for this application. A regenerator is an indirect heat exchanger which periodically stores and releases heat to the working fluid. The simple design of a regenerator can be made more inexpensively compared to current options. The objective of this paper is a detailed evaluation of regenerators as a competing technology for recuperators within an sCO 2 Brayton cycle. The level of the analysis presented here is sufficient to identify issues with the regenerator system in order to direct future work and also to clarify the potential advantage of pursuing this technology. A reduced order model of a regenerator is implemented into a cycle model of an sCO 2 Brayton cycle. An economic analysis investigates the cost savings that is possible by switching from recuperative heat exchangers to switched-bed regenerators. The cost of the regenerators was estimated using the amount of material required if the pressure vessel is sized using ASME Boiler Pressure Vessel Code (BPVC) requirements. The cost of the associated valves is found to be substantial for the regenerator system and is estimated in collaboration with an industrial valve supplier. The result of this analysis suggests that a 21.2% reduction in the contribution to the Levelized Cost of Electricity (LCoE) from the power block can be realized by switching to a regenerator-based system.
Resident challenges with daily life in Chinese long-term care facilities: A qualitative pilot study.
Song, Yuting; Scales, Kezia; Anderson, Ruth A; Wu, Bei; Corazzini, Kirsten N
As traditional family-based care in China declines, the demand for residential care increases. Knowledge of residents' experiences with long-term care (LTC) facilities is essential to improving quality of care. This pilot study aimed to describe residents' experiences in LTC facilities, particularly as they related to physical function. Semi-structured open-ended interviews were conducted in two facilities with residents stratified by three functional levels (n = 5). Directed content analysis was guided by the Adaptive Leadership Framework. A two-cycle coding approach was used, with first-cycle descriptive coding and second-cycle dramaturgical coding. Interviews provided examples of challenges faced by residents in meeting their daily care needs. Five themes emerged: staff care, care from family members, physical environment, other residents in the facility, and personal strategies. Findings demonstrate the significance of organizational context for care quality and reveal foci for future research. Copyright © 2017 Elsevier Inc. All rights reserved.
OECD/NEA Ongoing activities related to the nuclear fuel cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornet, S.M.; McCarthy, K.; Chauvin, N.
2013-07-01
As part of its role in encouraging international collaboration, the OECD Nuclear Energy Agency is coordinating a series of projects related to the nuclear fuel cycle. The Nuclear Science Committee (NSC) Working Party on Scientific Issues of the Nuclear Fuel Cycle (WPFC) comprises five expert groups covering all aspects of the fuel cycle from front end to back end. Activities related to fuels, materials, physics, separation chemistry, and fuel cycle scenarios are being undertaken. By publishing state-of-the-art reports and organizing workshops, the groups are able to disseminate recent research advances to the international community. Current activities mainly focus on advanced nuclear systems, and experts are analyzing results and establishing the challenges associated with the adoption of new materials and fuels. By comparing different codes, the Expert Group on Advanced Fuel Cycle Scenarios aims to gain further understanding of the scientific issues and specific national needs associated with the implementation of advanced fuel cycles. At the back end of the fuel cycle, separation technologies (aqueous and pyrochemical processing) are being assessed. Current and future activities comprise studies on minor actinide separation and post-Fukushima studies. Regular workshops are also organized to discuss recent developments in partitioning and transmutation. In addition, the Nuclear Development Committee (NDC) focuses on the analysis of the economics of nuclear power across the fuel cycle in the context of changes in electricity markets, social acceptance, and technological advances, and assesses the availability of the nuclear fuel and infrastructure required for the deployment of existing and future nuclear power. The Expert Group on the Economics of the Back End of the Nuclear Fuel Cycle (EBENFC), in particular, is assessing economic and financial issues related to the long-term management of spent nuclear fuel.
Johnson, Derek; Heltzel, Robert; Nix, Andrew; Barrow, Rebekah
2017-03-01
With the advent of unconventional natural gas resources, new research focuses on the efficiency and emissions of the prime movers powering these fleets. These prime movers also play important roles in emissions inventories for this sector. Industry seeks to reduce operating costs by decreasing the fuel demands of these high-horsepower engines, but conducting in-field or full-scale research on new technologies is cost prohibitive. As such, this research completed extensive in-use data collection efforts for the engines powering over-the-road trucks, drilling engines, and hydraulic stimulation pump engines. These engine activity data were processed to create representative test cycles using a Markov Chain Monte Carlo (MCMC) simulation method. Such cycles can be applied under controlled environments on scaled engines for future research. In addition to MCMC, genetic algorithms were used to improve the overall performance values of the test cycles, and smoothing was applied to ensure regression criteria were met during implementation on a test engine and dynamometer. The variations in cycle and in-use statistics are presented along with comparisons to conventional test cycles used for emissions compliance. Development of representative engine dynamometer test cycles from in-use activity data is crucial to understanding fuel efficiency and emissions for engine operating modes that differ from the cycles mandated by the Code of Federal Regulations. Representative cycles were created for the prime movers of unconventional well development: over-the-road (OTR) trucks, drilling engines, and hydraulic fracturing engines. The representative cycles are implemented on scaled engines to reduce fuel consumption during research and development of new technologies in controlled laboratory environments.
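The cycle-generation step described above can be sketched as follows. This is an illustrative first-order Markov chain fitted to a discretized load trace; the function names, bin count, and synthetic data are assumptions, not the authors' actual code.

```python
import numpy as np

def build_transition_matrix(states, n_bins=4):
    """Estimate a first-order Markov transition matrix from an
    engine load trace discretized into n_bins states."""
    counts = np.zeros((n_bins, n_bins))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0          # guard against unvisited states
    return counts / row_sums

def sample_cycle(P, start, length, rng):
    """Draw a candidate test cycle by walking the Markov chain."""
    cycle = [start]
    for _ in range(length - 1):
        cycle.append(rng.choice(len(P), p=P[cycle[-1]]))
    return cycle

rng = np.random.default_rng(0)
observed = rng.integers(0, 4, size=5000)   # stand-in for logged in-use loads
P = build_transition_matrix(observed)
candidate = sample_cycle(P, start=0, length=600, rng=rng)
```

In the study, many such candidates would then be scored against the in-use statistics (and further tuned by genetic algorithms) before one is accepted as the representative cycle.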
Uranium oxide fuel cycle analysis in VVER-1000 with VISTA simulation code
NASA Astrophysics Data System (ADS)
Mirekhtiary, Seyedeh Fatemeh; Abbasi, Akbar
2018-02-01
The VVER-1000 nuclear power plant generates about 20-25 tons of spent fuel per year. In this research, the transmutation of Uranium Oxide (UOX) fuel was calculated using the nuclear fuel cycle simulation system (VISTA) code. In this simulation, we evaluated the back-end components of the fuel cycle: Spent Fuel (SF), Actinide Inventory (AI), and Fission Product (FP) radioisotopes. The SF, AI, and FP values obtained were 23.792178 ton/y, 22.811139 ton/y, and 0.981039 ton/y, respectively. The obtained values for spent fuel, major actinides, minor actinides, and fission products were 23.8 ton/year, 22.795 ton/year, 0.024 ton/year, and 0.981 ton/year, respectively.
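Using only the figures quoted above, the implied back-end mass balance (spent fuel = actinide inventory + fission products) can be checked directly:

```python
# Figures quoted in the abstract (ton/y):
sf, ai, fp = 23.792178, 22.811139, 0.981039
assert abs(sf - (ai + fp)) < 1e-6      # SF = actinides + fission products

# Second breakdown: major actinides, minor actinides, fission products.
major, minor, fp2 = 22.795, 0.024, 0.981
total = major + minor + fp2
print(round(total, 3))  # 23.8 ton/year, matching the reported spent-fuel total
```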
Feasibility of coded vibration in a vibro-ultrasound system for tissue elasticity measurement.
Zhao, Jinxin; Wang, Yuanyuan; Yu, Jinhua; Li, Tianjie; Zheng, Yong-Ping
2016-07-01
The ability of various methods for elasticity measurement and imaging is hampered by the vibration amplitude on biological tissues. Based on the inference that coded excitation will improve the performance of the cross-correlation function of the tissue displacement waves, the idea of exerting encoded external vibration on tested samples to measure their elasticity is proposed. It was implemented by integrating a programmable vibration generation function into a customized vibro-ultrasound system to generate Barker coded vibration for elasticity measurement. Experiments were conducted on silicone phantoms and porcine muscles. The results showed that coded excitation of the vibration enhanced the accuracy and robustness of the elasticity measurement, especially in low signal-to-noise ratio scenarios. In the phantom study, the shear modulus values measured with coded vibration had an R² = 0.993 linear correlation to those of the reference indentation, while for the single-cycle pulse the R² decreased to 0.987. In the porcine muscle study, the coded vibration also obtained shear modulus values more accurate than those of the single-cycle pulse by 0.16 kPa and 0.33 kPa at two different depths. These results demonstrate the feasibility and potential of coded vibration for enhancing the quality of elasticity measurement and imaging.
Why and how Mastering an Incremental and Iterative Software Development Process
NASA Astrophysics Data System (ADS)
Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe
2004-06-01
One of the key issues regularly mentioned in the current software crisis of the space domain relates to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages:- It permits systematic management and incorporation of requirements changes over the development cycle at minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed in priority, validating the architecture concept very early without the details.- A software prototype is available very quickly. It improves the communication between system and software teams, as it enables very early and efficient checking of the common understanding of the system requirements.- It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development.
In any case, it greatly improves the learning curve of the software team. These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and induces many difficulties, such as:- How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable?- How to distinguish stable/unstable and dimensioning/standard requirements?- How to plan the development of each increment?- How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc.Several solutions envisaged or already deployed by EADS SPACE Transportation are presented, both from a methodological and a technological point of view:- How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software, and simulation teams in a very iterative and reactive way.- How the CMM approach can help by better formalizing Requirements Management and Planning processes.- How Automatic Code Generation with "certified" tools (SCADE) can still dramatically shorten the development cycle.The presentation then concludes with an evaluation of the cost and schedule reduction based on a pilot application, comparing figures on two similar projects: one with the classical waterfall process, the other with an iterative and incremental approach.
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.; Saltsman, James F.
1993-01-01
A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage is given. The code is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and for predicting cyclic life for complex cycle types under both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
Three Years of Global Positioning System Experience on International Space Station
NASA Technical Reports Server (NTRS)
Gomez, Susan
2005-01-01
The International Space Station global positioning system (GPS) receiver was activated in April 2002. Since that time, numerous software anomalies surfaced that had to be worked around. Some of the software problems required waivers, such as the time function, while others required extensive operator intervention, such as numerous power cycles. Eventually, enough anomalies surfaced that the three pieces of code included in the GPS unit were re-written and the GPS units were upgraded. The technical aspects of the problems are discussed, as well as the underlying causes that led to the delivery of a product that has had numerous problems. The technical aspects included physical phenomena that were not well understood, such as the effect that the ionosphere would have on the GPS measurements. The underlying causes were traced to inappropriate use of legacy software, changing requirements, inadequate software processes, unrealistic schedules, incorrect contract type, and unclear ownership responsibilities.
Gate-to-gate Life-Cycle Inventory of Hardboard Production in North America
Richard Bergman
2014-01-01
Whole-building life-cycle assessments (LCAs) populated by life-cycle inventory (LCI) data are incorporated into environmental footprint software tools for establishing green building certification by building professionals and code. However, LCI data on some wood building products are still needed to help fill gaps in the data and thus provide a more complete picture...
Linear chirp phase perturbing approach for finding binary phased codes
NASA Astrophysics Data System (ADS)
Li, Bing C.
2017-05-01
Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes with low sidelobes in order to reduce interference and false detection. Barker codes satisfy these requirements and have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (equal to or less than 13), while many applications, including low probability of intercept radar and spread spectrum communication, require much longer codes. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, and they all require very expensive computation for large code lengths. Therefore these techniques are limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker codes, linear chirps, and P3 phases, we propose a new approach to finding binary codes. Experiments show that the proposed method is able to find long, low-sidelobe binary phased codes (code length >500) with reasonable computational cost.
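The Barker-code property invoked above (a mainlobe of N with all sidelobes of magnitude 1 or less) is easy to verify numerically; the snippet below computes the aperiodic autocorrelation of the length-13 Barker code:

```python
import numpy as np

# Length-13 Barker code: the longest known Barker sequence.
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])

def autocorr_peaks(code):
    """Return (mainlobe, peak sidelobe magnitude) of the aperiodic
    autocorrelation of a +/-1 code."""
    full = np.correlate(code, code, mode="full")
    n = len(code)
    mainlobe = full[n - 1]               # zero-lag term
    sidelobes = np.delete(full, n - 1)   # all nonzero lags
    return int(mainlobe), int(np.abs(sidelobes).max())

peak, psl = autocorr_peaks(barker13)
print(peak, psl)  # 13 1 -- mainlobe 13, peak sidelobe magnitude 1
```

A brute-force search for longer codes with comparably low sidelobes is what makes the problem expensive: the search space doubles with every added bit.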
Buchensky, Celeste; Almirón, Paula; Mantilla, Brian Suarez; Silber, Ariel M; Cricco, Julia A
2010-11-01
Trypanosoma cruzi, the etiologic agent of Chagas' disease, requires several cofactors, one of which is heme. Because this organism is unable to synthesize heme, which serves as a prosthetic group for several heme proteins (including the respiratory chain complexes), heme must be acquired from the environment. Given this deficiency, it is an open question how heme A, the essential cofactor for eukaryotic CcO enzymes, is acquired by this parasite. In the present work, we provide evidence for the presence and functionality of genes coding for heme O and heme A synthases, which catalyze the synthesis of heme O and its conversion into heme A, respectively. The functions of these T. cruzi proteins were evaluated using yeast complementation assays, and the mRNA levels of their respective genes were analyzed at the different T. cruzi life stages. It was observed that the amount of mRNA coding for these proteins changes during the parasite life cycle, suggesting that this variation could reflect different respiratory requirements in the different parasite life stages. © 2010 Federation of European Microbiological Societies. Published by Blackwell Publishing Ltd. All rights reserved.
MARTe: A Multiplatform Real-Time Framework
NASA Astrophysics Data System (ADS)
Neto, André C.; Sartori, Filippo; Piccolo, Fabio; Vitelli, Riccardo; De Tommasi, Gianmaria; Zabeo, Luca; Barbalace, Antonio; Fernandes, Horacio; Valcarcel, Daniel F.; Batista, Antonio J. N.
2010-04-01
Development of real-time applications is usually associated with non-portable code targeted at specific real-time operating systems. The boundary between hardware drivers, system services, and user code is commonly not well defined, making development on the target host significantly difficult. The Multithreaded Application Real-Time executor (MARTe) is a framework built over a multiplatform library that allows the execution of the same code in different operating systems. The framework provides the high-level interfaces with hardware, external configuration programs, and user interfaces, while assuring hard real-time performance. End-users of the framework are required to define and implement algorithms inside a well-defined block of software, named a Generic Application Module (GAM), that is executed by the real-time scheduler. Each GAM is reconfigurable with a set of predefined configuration meta-parameters and interchanges information using a set of data pipes that are provided as inputs and required as outputs. Using these connections, different GAMs can be chained either in series or in parallel. GAMs can be developed and debugged in a non-real-time system and, only once the robustness of the code and the correctness of the algorithm are verified, deployed to the real-time system. The software also supplies a large set of utilities that greatly ease the interaction with and debugging of a running system. Among the most useful are a highly efficient real-time logger, HTTP introspection of real-time objects, and HTTP remote configuration. MARTe is currently being used to successfully drive the plasma vertical stabilization controller on the largest magnetic confinement fusion device in the world, with a control loop cycle of 50 µs and a jitter under 1 µs. In this particular project, MARTe is used with the Real-Time Application Interface (RTAI)/Linux operating system, exploiting new x86 multicore processor technology.
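The GAM chaining described above can be illustrated with a toy scheduler. The class and function names here are illustrative stand-ins for the pattern (named data pipes connecting reconfigurable modules), not the actual MARTe C++ API:

```python
class GAM:
    """Toy stand-in for a Generic Application Module: consumes named
    input pipes, produces named output pipes (illustrative only)."""
    def __init__(self, name, inputs, outputs, fn):
        self.name, self.inputs, self.outputs, self.fn = name, inputs, outputs, fn

    def execute(self, pipes):
        results = self.fn(*[pipes[k] for k in self.inputs])
        for k, v in zip(self.outputs, results):
            pipes[k] = v

def run_chain(gams, pipes, cycles):
    """Mimic the real-time scheduler: execute the GAM chain each cycle."""
    for _ in range(cycles):
        for gam in gams:          # GAMs chained in series via data pipes
            gam.execute(pipes)
    return pipes

# A two-GAM chain: a scaling "filter" stage, then a proportional gain.
chain = [
    GAM("filter", ["raw"], ["filtered"], lambda x: (0.5 * x,)),
    GAM("control", ["filtered"], ["actuator"], lambda x: (2.0 * x,)),
]
out = run_chain(chain, {"raw": 4.0}, cycles=1)
print(out["actuator"])  # 4.0
```

Because each module only sees its named pipes, a chain like this can be rewired or re-parameterized from configuration data without touching the modules themselves, which is the portability property the framework exploits.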
Correct coding for laboratory procedures during assisted reproductive technology cycles.
2016-04-01
This document provides updated coding information for services related to assisted reproductive technology procedures. This document replaces the 2012 ASRM document of the same name. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortega, Jesus; Khivsara, Sagar; Christian, Joshua
A supercritical carbon dioxide (sCO2) Brayton cycle is an emerging high energy-density cycle undergoing extensive research due to the appealing thermo-physical properties of sCO2 and single-phase operation. Development of a solar receiver capable of delivering sCO2 at 20 MPa and 700 °C is required for implementation of the high-efficiency (~50%) solar-powered sCO2 Brayton cycle. In this work, candidate materials are extensively reviewed, along with tube size optimization using the ASME Boiler and Pressure Vessel Code. Moreover, the temperature and pressure distributions obtained from the thermal-fluid modeling (presented in a complementary publication) are used to evaluate the thermal and mechanical stresses, along with a detailed creep-fatigue analysis of the tubes. The lifetime performance of the receiver tubes was approximated using the resulting body stresses. A cyclic loading analysis is performed by coupling the Strain-Life approach and the Larson-Miller creep model. The structural integrity of the receiver was examined, and it was found that the stresses can be withstood by specific tubes, determined by a parametric geometric analysis. Furthermore, the creep-fatigue analysis displayed the damage accumulation due to cycling, and the permanent deformation on the tubes showed that the tubes can operate for the full lifetime of the receiver.
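The linear damage-summation idea behind coupling a Strain-Life fatigue approach with a Larson-Miller creep model can be sketched as follows. The constant C ≈ 20 and all numeric inputs are generic textbook assumptions for illustration, not values from this study:

```python
import math

def fatigue_damage(n_applied, n_allowed):
    """Miner's-rule fatigue damage fraction."""
    return n_applied / n_allowed

def creep_damage(hold_h_per_cycle, n_cycles, rupture_h):
    """Time-fraction creep damage: accumulated hold time / rupture life."""
    return hold_h_per_cycle * n_cycles / rupture_h

def larson_miller(T_kelvin, time_h, C=20.0):
    """Larson-Miller parameter P = T * (C + log10 t); C ~ 20 is a
    common assumed material constant."""
    return T_kelvin * (C + math.log10(time_h))

# Generic numbers for illustration (not from the study):
n = 1000
Df = fatigue_damage(n, n_allowed=5000)        # 0.2
Dc = creep_damage(0.1, n, rupture_h=500.0)    # 0.2
print(Df + Dc)  # total damage fraction 0.4 (failure predicted when >= 1)
```

In a real creep-fatigue assessment the allowable combination of Df and Dc comes from a material-specific interaction diagram rather than a simple sum to unity, but the bookkeeping has this shape.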
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, with comparisons against experimental data and against solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing in the combustor of a candidate rocket-based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
JavaGenes and Condor: Cycle-Scavenging Genetic Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Langhirt, Eric; Livny, Miron; Ramamurthy, Ravishankar; Soloman, Marvin; Traugott, Steve
2000-01-01
A genetic algorithm code, JavaGenes, was written in Java and used to evolve pharmaceutical drug molecules and digital circuits. JavaGenes was run under the Condor cycle-scavenging batch system managing 100-170 desktop SGI workstations. Genetic algorithms mimic biological evolution by evolving solutions to problems using crossover and mutation. While most genetic algorithms evolve strings or trees, JavaGenes evolves graphs representing (currently) molecules and circuits. Java was chosen as the implementation language because the genetic algorithm requires random splitting and recombining of graphs, a complex data structure manipulation with ample opportunities for memory leaks, loose pointers, out-of-bound indices, and other hard-to-find bugs. Java's garbage-collection memory management, lack of pointer arithmetic, and array-bounds index checking prevent these bugs from occurring, substantially reducing development time. While a run-time performance penalty must be paid, the only unacceptable performance we encountered was using standard Java serialization to checkpoint and restart the code. This was fixed by a two-day implementation of custom checkpointing. JavaGenes is minimally integrated with Condor; in other words, JavaGenes must do its own checkpointing and I/O redirection. A prototype Java-aware version of Condor was developed using standard Java serialization for checkpointing. For the prototype to be useful, standard Java serialization must be significantly optimized. JavaGenes is approximately 8700 lines of code, and a few thousand JavaGenes jobs have been run. Most jobs ran for a few days. Results include proof that genetic algorithms can evolve directed and undirected graphs, development of a novel crossover operator for graphs, a paper in the journal Nanotechnology, and another paper in preparation.
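The graph crossover mentioned above can be sketched in its simplest form. This is a generic edge-subset recombination for illustration only (and in Python rather than Java for brevity), not the novel operator developed for JavaGenes:

```python
import random

def graph_crossover(parent_a, parent_b, rng):
    """Illustrative crossover on edge sets: the child inherits a random
    subset of edges from each parent. NOT the paper's operator."""
    kept_a = {e for e in parent_a if rng.random() < 0.5}
    kept_b = {e for e in parent_b if rng.random() < 0.5}
    return kept_a | kept_b

rng = random.Random(42)
a = {(0, 1), (1, 2), (2, 3)}
b = {(0, 2), (1, 3), (3, 4)}
child = graph_crossover(a, b, rng)
assert child <= a | b   # every child edge came from one of the parents
```

Even this toy version shows why graph recombination is delicate: unlike string crossover, the child may be disconnected or violate domain constraints (valence rules for molecules, fan-in limits for circuits), so real operators must repair or reject invalid offspring.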
A study of power cycles using supercritical carbon dioxide as the working fluid
NASA Astrophysics Data System (ADS)
Schroder, Andrew Urban
A real fluid heat engine power cycle analysis code has been developed for analyzing the zero dimensional performance of a general recuperated, recompression, precompression supercritical carbon dioxide power cycle with reheat and a unique shaft configuration. With the proposed shaft configuration, several smaller compressor-turbine pairs could be placed inside of a pressure vessel in order to avoid high speed, high pressure rotating seals. The small compressor-turbine pairs would share some resemblance with a turbocharger assembly. Variation in fluid properties within the heat exchangers is taken into account by discretizing zero dimensional heat exchangers. The cycle analysis code allows for multiple reheat stages, as well as an option for the main compressor to be powered by a dedicated turbine or an electrical motor. Variation in performance with respect to design heat exchanger pressure drops and minimum temperature differences, precompressor pressure ratio, main compressor pressure ratio, recompression mass fraction, main compressor inlet pressure, and low temperature recuperator mass fraction have been explored throughout a range of each design parameter. Turbomachinery isentropic efficiencies are implemented and the sensitivity of the cycle performance and the optimal design parameters is explored. Sensitivity of the cycle performance and optimal design parameters is studied with respect to the minimum heat rejection temperature and the maximum heat addition temperature. A hybrid stochastic and gradient based optimization technique has been used to optimize critical design parameters for maximum engine thermal efficiency. A parallel design exploration mode was also developed in order to rapidly conduct the parameter sweeps in this design space exploration. A cycle thermal efficiency of 49.6% is predicted with a 320K [47°C] minimum temperature and 923K [650°C] maximum temperature. 
The real fluid heat engine power cycle analysis code was expanded to study a theoretical recuperated Lenoir cycle using supercritical carbon dioxide as the working fluid. The real fluid cycle analysis code was also enhanced to study a combined cycle engine cascade. Two engine cascade configurations were studied. The first consisted of a traditional open loop gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 65.0% using a peak temperature of 1,890K [1,617°C]. The second configuration consisted of a hybrid natural gas powered solid oxide fuel cell and gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 73.1%. Both configurations had a minimum temperature of 306K [33°C]. The hybrid stochastic and gradient based optimization technique was used to optimize all engine design parameters for each engine in the cascade such that the entire engine cascade achieved the maximum thermal efficiency. The parallel design exploration mode was also utilized in order to understand the impact of different design parameters on the overall engine cascade thermal efficiency. Two dimensional conjugate heat transfer (CHT) numerical simulations of a straight, equal height channel heat exchanger using supercritical carbon dioxide were conducted at various Reynolds numbers and channel lengths.
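As a rough illustration of the recuperated Brayton cycle analysis described above, the toy model below uses ideal-gas relations with assumed component efficiencies; the thesis' real-fluid sCO2 code would use property tables and discretized heat exchangers instead, which is why its predicted efficiencies differ:

```python
def recuperated_brayton_eta(T_min, T_max, r, gamma=1.28,
                            eta_c=0.85, eta_t=0.90, eps=0.95):
    """Thermal efficiency of a toy ideal-gas recuperated Brayton cycle.
    gamma and all component efficiencies are assumed values."""
    k = (gamma - 1.0) / gamma
    T2 = T_min * (1.0 + (r**k - 1.0) / eta_c)    # compressor outlet
    T5 = T_max * (1.0 - eta_t * (1.0 - r**-k))   # turbine outlet
    T3 = T2 + eps * (T5 - T2)                    # recuperator cold-side outlet
    w_net = (T_max - T5) - (T2 - T_min)          # specific work / cp
    q_in = T_max - T3                            # external heat addition / cp
    return w_net / q_in

# Same temperature limits as the single-cycle result above (320 K / 923 K):
eta = recuperated_brayton_eta(T_min=320.0, T_max=923.0, r=3.0)
print(round(eta, 3))
```

The gap between such an ideal-gas estimate and the 49.6% real-fluid result is largely explained by the strong property variation of CO2 near its critical point, which reduces compression work and motivates the recompression layout.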
A network coding based routing protocol for underwater sensor networks.
Wu, Huayang; Chen, Min; Guan, Xin
2012-01-01
Due to the particularities of the underwater environment, negative factors seriously interfere with data transmission rates, reliability of data communication, communication range, and the network throughput and energy consumption of underwater sensor networks (UWSNs). Thus, when routing protocols for underwater sensor networks are studied, it is essential to give full consideration to node energy savings while maintaining quick, correct, and effective data transmission and extending the network life cycle. In this paper, we propose a novel routing algorithm for UWSNs. To increase energy consumption efficiency and extend network lifetime, we propose a time-slot based routing algorithm (TSBR). We designed a probability-balanced mechanism and applied it to TSBR. The theory of network coding is introduced to TSBR to meet the requirement of further reducing node energy consumption and extending network lifetime. Hence, time-slot based balanced network coding (TSBNC) comes into being. We evaluated the proposed time-slot based balanced routing algorithm and compared it with other classical underwater routing protocols. The simulation results show that the proposed protocol can reduce the probability of node conflicts, shorten the process of routing construction, balance the energy consumption of each node, and effectively prolong the network lifetime.
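The core benefit of network coding invoked above can be shown with the classic XOR relay example; this illustrates the principle of saving transmissions (and hence node energy), not the TSBNC protocol itself:

```python
def xor_packets(p, q):
    """XOR two equal-length packets byte-by-byte (the relay's coding op)."""
    return bytes(x ^ y for x, y in zip(p, q))

# Classic butterfly example: a relay broadcasts A XOR B once; a sink
# already holding A recovers B (and vice versa), saving a transmission.
pkt_a = b"\x01\x02\x03"
pkt_b = b"\x10\x20\x30"
coded = xor_packets(pkt_a, pkt_b)
recovered_b = xor_packets(coded, pkt_a)
assert recovered_b == pkt_b
```

In an energy-constrained UWSN, every transmission avoided this way directly extends node battery life, which is why network coding pairs naturally with the time-slot scheduling described above.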
Coupling a Supercritical Carbon Dioxide Brayton Cycle to a Helium-Cooled Reactor.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Middleton, Bobby; Pasch, James Jay; Kruizenga, Alan Michael
2016-01-01
This report outlines the thermodynamics of a supercritical carbon dioxide (sCO2) recompression closed Brayton cycle (RCBC) coupled to a helium-cooled nuclear reactor. The baseline reactor design for the study is the AREVA High Temperature Gas-Cooled Reactor (HTGR). Using the AREVA HTGR nominal operating parameters, an initial thermodynamic study was performed using Sandia's deterministic RCBC analysis program. Utilizing the output of the RCBC thermodynamic analysis, preliminary values for reactor power and for helium flow rate through the reactor were calculated in Sandia's HelCO2 code. Research regarding materials requirements was then conducted to determine aspects of corrosion related to both helium and sCO2, as well as some mechanical considerations for the pressures and temperatures that will be seen by the piping and other components. This analysis resulted in a list of materials-related research items that need to be conducted in the future. A short assessment of the dry heat rejection advantages of sCO2 Brayton cycles was also included. This assessment lists some items that should be investigated in the future to better understand how sCO2 Brayton cycles and nuclear power can maximally contribute to optimizing the water efficiency of carbon-free power generation.
Low Cycle Fatigue and Creep-Fatigue Behavior of Alloy 617 at High Temperature
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabet, Celine; Carroll, Laura; Wright, Richard
Alloy 617 is the leading candidate material for an intermediate heat exchanger (IHX) application of the Very High Temperature Nuclear Reactor (VHTR), expected to have an outlet temperature as high as 950 degrees C. Acceptance of Alloy 617 in Section III of the ASME Code for nuclear construction requires a detailed understanding of the creep-fatigue behavior. Initial creep-fatigue work on Alloy 617 suggests a more dominant role of environment with increasing temperature and/or hold times, evidenced through changes in creep-fatigue crack growth mechanisms and failure life. Continuous cycle fatigue and creep-fatigue testing of Alloy 617 was conducted at 950 degrees C and 0.3% and 0.6% total strain in air to simulate damage modes expected in a VHTR application. Continuous cycle specimens exhibited transgranular cracking. Intergranular cracking was observed in the creep-fatigue specimens, although evidence of grain boundary cavitation was not observed. Despite the absence of grain boundary cavitation to accelerate crack propagation, the addition of a hold time at peak tensile strain was detrimental to cycle life. This suggests that creep-fatigue interaction may occur by a different mechanism or that the environment may be partially responsible for accelerating failure.
2014-01-01
Background The genome is pervasively transcribed but most transcripts do not code for proteins, constituting non-protein-coding RNAs. Despite increasing numbers of functional reports of individual long non-coding RNAs (lncRNAs), assessing the extent of functionality among the non-coding transcriptional output of mammalian cells remains intricate. In the protein-coding world, transcripts differentially expressed in the context of processes essential for the survival of multicellular organisms have been instrumental in the discovery of functionally relevant proteins and their deregulation is frequently associated with diseases. We therefore systematically identified lncRNAs expressed differentially in response to oncologically relevant processes and cell-cycle, p53 and STAT3 pathways, using tiling arrays. Results We found that up to 80% of the pathway-triggered transcriptional responses are non-coding. Among these we identified very large macroRNAs with pathway-specific expression patterns and demonstrated that these are likely continuous transcripts. MacroRNAs contain elements conserved in mammals and sauropsids, which in part exhibit conserved RNA secondary structure. Comparing evolutionary rates of a macroRNA to adjacent protein-coding genes suggests a local action of the transcript. Finally, in different grades of astrocytoma, a tumor disease unrelated to the initially used cell lines, macroRNAs are differentially expressed. Conclusions It has been shown previously that the majority of expressed non-ribosomal transcripts are non-coding. We now conclude that differential expression triggered by signaling pathways gives rise to a similar abundance of non-coding content. It is thus unlikely that the prevalence of non-coding transcripts in the cell is a trivial consequence of leaky or random transcription events. PMID:24594072
Expert system validation in prolog
NASA Technical Reports Server (NTRS)
Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline
1988-01-01
An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, deadends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary with the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations which match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.
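EVA's cycle checker traverses the connection graph looking for rule chains that loop back on themselves. The idea can be sketched outside Prolog; the following Python sketch (illustrative only, not EVA's A*-based Prolog implementation) enumerates unique cycles in a rule-dependency graph by depth-first search, deduplicating rotations of the same cycle.

```python
def find_cycles(graph):
    """Collect every cycle reachable in a directed rule-dependency graph,
    given as an adjacency dict {rule: [rules it chains to]}."""
    cycles = []
    def dfs(node, path):
        for nxt in graph.get(node, []):
            if nxt in path:
                # path loops back: record the cycle portion, closed
                cycles.append(path[path.index(nxt):] + [nxt])
            else:
                dfs(nxt, path + [nxt])
    for start in graph:
        dfs(start, [start])
    return cycles

def unique_cycles(graph):
    """Deduplicate cycles that are rotations of each other by
    canonicalizing each cycle to start at its smallest node."""
    seen, out = set(), []
    for cyc in find_cycles(graph):
        nodes = cyc[:-1]                       # drop the closing repeat
        i = nodes.index(min(nodes))
        key = tuple(nodes[i:] + nodes[:i])     # canonical rotation
        if key not in seen:
            seen.add(key)
            out.append(list(key))
    return out
```

For example, `unique_cycles({"r1": ["r2"], "r2": ["r3"], "r3": ["r1"], "r4": ["r1"]})` reports the single loop r1 -> r2 -> r3 -> r1 once, even though it is reachable from several starting rules.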
NASA Technical Reports Server (NTRS)
Shull, Forrest; Feldmann, Raimund; Haingaertner, Ralf; Regardie, Myrna; Seaman, Carolyn
2007-01-01
It is often the case in software projects that when schedule and budget resources are limited, the Verification and Validation (V&V) activities suffer. Fewer V&V activities can be afforded and moreover, short-term challenges can result in V&V activities being scaled back or dropped altogether. As a result, too often the default solution is to save activities for improving software quality until too late in the life-cycle, relying on late-term code inspections followed by thorough testing activities to reduce defect counts to acceptable levels. As many project managers realize, however, this is a resource-intensive way of achieving the required quality for software. The Full Life-cycle Defect Management Assessment Initiative, funded by NASA's Office of Safety and Mission Assurance under the Software Assurance Research Program, aims to address these problems by: Improving the effectiveness of early life-cycle V&V activities to make their benefits more attractive to team leads. Specifically, we focus on software inspection, a proven method that can be applied to any software work product, long before executable code has been developed; Better communicating this effectiveness to software development teams, along with suggestions for parameters to improve in the future to increase effectiveness; Analyzing the impact of early life-cycle V&V on the effectiveness and cost required for late life-cycle V&V activities, such as testing, in order to make the tradeoffs more apparent. This white paper reports on an initial milestone in this work, the development of a preliminary model of inspection effectiveness across multiple NASA Centers. This model contributes toward reaching our project goals by: Allowing an examination of inspection parameters, across different types of projects and different work products, for an analysis of factors that impact defect detection effectiveness.
Allowing a comparison of this NASA-specific model to existing recommendations in the literature regarding how to plan effective inspections. Forming a baseline model which can be extended to incorporate factors describing: the numbers and types of defects that are missed by inspections; how such defects flow downstream through software development phases; how effectively they can be caught by testing activities in the late stages of development. The model has been implemented in a prototype web-enabled decision-support tool which allows developers to enter their inspection data and receive feedback based on a comparison against the model. The tool also allows users to access reusable materials (such as checklists) from projects included in the baseline. Both the tool itself and the model underlying it will continue to be extended throughout the remainder of this initiative. As results of analyzing inspection effectiveness for defect containment are determined, they can be shared via the tool and also via updates to existing training courses on metrics and software inspections. Moreover, the tool will help satisfy key CMMI requirements for the NASA Centers, as it will enable NASA to take a global view across peer review results for various types of projects to identify systemic problems. This analysis can result in continuous improvements to the approach to verification.
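One standard technique for quantifying inspection effectiveness, offered here purely as an illustrative sketch and not as the NASA model described above, is capture-recapture: when two inspectors review the same work product independently, the overlap in their findings yields a Lincoln-Petersen estimate of the total defect population, and hence of the fraction the inspection actually caught.

```python
def estimate_total_defects(found_a, found_b):
    """Lincoln-Petersen capture-recapture estimate of the total number
    of defects, given the defect-id sets reported by two independent
    inspectors: N_hat = |A| * |B| / |A intersect B|."""
    a, b = set(found_a), set(found_b)
    overlap = len(a & b)
    if overlap == 0:
        raise ValueError("no common defects; estimate is undefined")
    return len(a) * len(b) / overlap

def inspection_effectiveness(found_a, found_b):
    """Fraction of the estimated defect population actually found."""
    total = estimate_total_defects(found_a, found_b)
    return len(set(found_a) | set(found_b)) / total
```

For example, if inspector A reports defects {1..6} and inspector B reports {4..9}, the overlap of 3 gives an estimated population of 12 and an effectiveness of 0.75, suggesting roughly three defects remain for downstream testing to catch.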
Interfacing Computer Aided Parallelization and Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Biegel, Bryan A. (Technical Monitor)
2003-01-01
When porting sequential applications to parallel computer architectures, the program developer will typically go through several cycles of source code optimization and performance analysis. We have started a project to develop an environment where the user can jointly navigate through program structure and performance data information in order to make efficient optimization decisions. In a prototype implementation we have interfaced the CAPO computer aided parallelization tool with the Paraver performance analysis tool. We describe both tools and their interface and give an example of how the interface helps within the program development cycle of a benchmark code.
Cell cycle dependent intracellular distribution of two spliced isoforms of TCP/ILF3 proteins.
Xu, You Hai; Leonova, Tatyana; Grabowski, Gregory A
2003-12-01
TCP80 is an approximately 80 kDa mammalian cytoplasmic protein that binds to a set of mRNAs and inhibits their translation in vitro and ex vivo. This protein has high sequence similarity to interleukin-2 enhancer-binding factors (NF90/ILF3) and the M-phase phosphoprotein (MPP4)/DRBP76. A 110 kDa immunologic isoform of TCP80/NF90/MPP4/DRBP76, termed TCP110, also is present in cytoplasm and nuclei of many types of cells. A cDNA sequence coding for TCP110 was derived by 5' RACE. The TCP110 sequence is identical to ILF3. The gene coding for TCP110/ILF3 mapped to human chromosome 19 and the gene organization was analyzed using TCP80 and TCP110/ILF3 cDNA sequences. The TCP/ILF3 gene spans >34.8 kb and contains 21 exons. At least one alternatively spliced product involving exons 19-21 exists and predicts the formation of either TCP80 or TCP110/ILF3. However, the functional relationships of TCP80 and TCP110/ILF3 required elucidation. The metabolic turnover rates and subcellular distribution of TCP80 and TCP110/ILF3 during the cell cycle showed TCP80 to be relatively stable (t1/2 = 5 days) in the cytoplasmic compartment. In comparison, TCP110/ILF3 migrated between the cytoplasmic and nuclear compartments during the cell cycle. The TCP110 C-terminal segment contains an additional nuclear localizing signal that plays a role in its nuclear translocation. This study indicates that the multiple cellular functions, i.e., translation control, interleukin-2 enhancer binding, or cell division, of TCP/ILF3 are fulfilled by alternatively spliced isoforms.
Long non-coding RNA CRNDE promotes tumor growth in medulloblastoma.
Song, H; Han, L-M; Gao, Q; Sun, Y
2016-06-01
Medulloblastoma is the most common malignant brain tumor in children. Despite remarkable advances over the past decades, a novel therapeutic strategy is urgently required to increase long-term survival. This study aimed to understand the role of a long non-coding RNA (lncRNA), colorectal neoplasia differentially expressed (CRNDE), in medulloblastoma tumor growth. The transcript level of CRNDE was initially examined in dissected clinical tissues and cultured cancerous cells. Effects of CRNDE knockdown on cell viability and colony formation in vitro were assessed using the CCK-8 and colony formation assays, respectively. Cell cycle progression and survival were also determined after CRNDE knockdown. A xenograft mouse model of human medulloblastoma was established by injecting nude mice with medulloblastoma cells stably depleted of CRNDE expression. Our data suggest that transcript levels of CRNDE are elevated in clinical medulloblastoma tissues relative to adjacent non-cancerous tissues. Knockdown of CRNDE significantly slowed cell proliferation rates and inhibited colony formation in Daoy and D341 cells. Tumor growth in vivo was also inhibited after CRNDE knockdown. Moreover, after knockdown of CRNDE, cell cycle progression was arrested in S phase and apoptosis was promoted by 15-20% in Daoy and D341 cells. In vivo data further showed that proliferating cell nuclear antigen (PCNA) was decreased, whereas the apoptosis initiator cleaved-caspase-3 was increased upon CRNDE knockdown in cancerous tissues from the mouse model. All these data suggest that CRNDE promotes tumor growth both in vitro and in vivo. This growth-promotion effect might be achieved via arresting cell cycle progression and inhibiting apoptosis. Therapeutics against CRNDE may be a novel strategy for the treatment of medulloblastoma.
The use of Tcl and Tk to improve design and code reutilization
NASA Technical Reports Server (NTRS)
Rodriguez, Lisbet; Reinholtz, Kirk
1995-01-01
Tcl and Tk facilitate design and code reuse in the ZIPSIM series of high-performance, high-fidelity spacecraft simulators. Tcl and Tk provide a framework for the construction of the Graphical User Interfaces for the simulators. The interfaces are architected such that a large proportion of the design and code is used for several applications, which has reduced design time and life-cycle costs.
Muon catalyzed fusion beam window mechanical strength testing and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ware, A.G.; Zabriskie, J.M.
A thin aluminum window (0.127 mm (0.005-inch) thick x 146 mm (5 3/4-inch) diameter) of 2024-T6 alloy was modeled and analyzed using the ABAQUS non-linear finite element analysis code. A group of windows was fabricated, heat-treated and subsequently tested. Testing included both ultimate burst pressure and fatigue. Fatigue testing cycles involved ''oil-canning'' behavior representing vacuum purge and reversal to pressure. Test results are compared to predictions and the mode of failure is discussed. Operational requirements, based on the above analysis and correlational testing, for the actual beam windows are discussed. 1 ref., 3 figs.
Algorithm for calculating turbine cooling flow and the resulting decrease in turbine efficiency
NASA Technical Reports Server (NTRS)
Gauntner, J. W.
1980-01-01
An algorithm is presented for calculating both the quantity of compressor bleed flow required to cool the turbine and the decrease in turbine efficiency caused by the injection of cooling air into the gas stream. The algorithm, which is intended for an axial-flow, air-cooled turbine, takes the form of a subroutine for use in a properly written thermodynamic cycle code. Ten different cooling configurations are available for each row of cooled airfoils in the turbine. Results from the algorithm are substantiated by comparison with flows predicted by major engine manufacturers for given bulk metal temperatures and given cooling configurations. A list of definitions for the terms in the subroutine is presented.
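The bleed-flow side of such an algorithm typically starts from a required cooling effectiveness set by the gas, metal, and coolant temperatures. As a hedged sketch only: the correlation form and the constant `b` below are illustrative textbook assumptions, not the subroutine described in this report.

```python
def cooling_effectiveness(t_gas, t_metal, t_coolant):
    """Required cooling effectiveness: phi = (Tg - Tm) / (Tg - Tc),
    i.e. how far the metal must be held below the gas temperature
    relative to the available coolant temperature drop."""
    return (t_gas - t_metal) / (t_gas - t_coolant)

def coolant_fraction(phi, b=0.07):
    """Illustrative correlation for the bleed fraction per blade row:
    m_c / m_g = b * phi / (1 - phi). The technology constant b is a
    hypothetical placeholder, not a value from the report."""
    return b * phi / (1.0 - phi)
```

For example, a 1600 K gas stream, a 1200 K allowable bulk metal temperature, and 800 K compressor bleed air give phi = 0.5, so this illustrative correlation would charge the row a bleed fraction equal to `b` itself.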
Application of Aeroelastic Solvers Based on Navier Stokes Equations
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Srivastava, Rakesh
2001-01-01
The propulsion element of the NASA Advanced Subsonic Technology (AST) initiative is directed towards increasing the overall efficiency of current aircraft engines. This effort requires an increase in the efficiency of various components, such as fans, compressors, turbines etc. Improvement in engine efficiency can be accomplished through the use of lighter materials, larger diameter fans and/or higher-pressure ratio compressors. However, each of these has the potential to result in aeroelastic problems such as flutter or forced response. To address the aeroelastic problems, the Structural Dynamics Branch of NASA Glenn has been involved in the development of numerical capabilities for analyzing the aeroelastic stability characteristics and forced response of wide chord fans, multi-stage compressors and turbines. In order to design an engine to safely perform a set of desired tasks, accurate information of the stresses on the blade during the entire cycle of blade motion is required. This requirement in turn demands that accurate knowledge of steady and unsteady blade loading is available. To obtain the steady and unsteady aerodynamic forces for the complex flows around the engine components, for the flow regimes encountered by the rotor, an advanced compressible Navier-Stokes solver is required. A finite volume based Navier-Stokes solver has been developed at Mississippi State University (MSU) for solving the flow field around multistage rotors. The focus of the current research effort, under NASA Cooperative Agreement NCC3-596, was on developing an aeroelastic analysis code (entitled TURBO-AE) based on the Navier-Stokes solver developed by MSU. The TURBO-AE code has been developed for flutter analysis of turbomachine components and delivered to NASA and its industry partners. The code has been verified, validated, and is being applied by NASA Glenn and by aircraft engine manufacturers to analyze the aeroelastic stability characteristics of modern fans, compressors and turbines.
Neutron Environment Calculations for Low Earth Orbit
NASA Technical Reports Server (NTRS)
Clowdsley, M. S.; Wilson, J. W.; Shinn, J. L.; Badavi, F. F.; Heinbockel, J. H.; Atwell, W.
2001-01-01
The long term exposure of astronauts on the developing International Space Station (ISS) requires an accurate knowledge of the internal exposure environment for human risk assessment and other onboard processes. The natural environment is moderated by the solar wind, which varies over the solar cycle. The HZETRN high charge and energy transport code developed at NASA Langley Research Center can be used to evaluate the neutron environment on ISS. A time dependent model for the ambient environment in low earth orbit is used. This model includes GCR radiation moderated by the Earth's magnetic field, trapped protons, and a recently completed model of the albedo neutron environment formed through the interaction of galactic cosmic rays with the Earth's atmosphere. Using this code, the neutron environments for space shuttle missions were calculated and comparisons were made to measurements by the Johnson Space Center with onboard detectors. The models discussed herein are being developed to evaluate the natural and induced environment data for the Intelligent Synthesis Environment Project and eventual use in spacecraft optimization.
Prediction of thermal cycling induced cracking in polymer matrix composites
NASA Technical Reports Server (NTRS)
Mcmanus, Hugh L.
1994-01-01
The work done in the period August 1993 through February 1994 on the 'Prediction of Thermal Cycling Induced Cracking In Polymer Matrix Composites' program is summarized. Most of the work performed in this period, as well as the previous one, is described in detail in the attached Master's thesis, 'Analysis of Thermally Induced Damage in Composite Space Structures,' by Cecelia Hyun Seon Park. Work on a small thermal cycling and aging chamber was concluded in this period. The chamber was extensively tested and calibrated. Temperatures can be controlled very precisely, and are very uniform in the test chamber. Based on results obtained in the previous period of this program, further experimental progressive cracking studies were carried out. The laminates tested were selected to clarify the differences between the behaviors of thick and thin ply layers, and to explore other variables such as stacking sequence and scaling effects. Most specimens tested were made available from existing stock at Langley Research Center. One laminate type had to be constructed from available prepreg material at Langley Research Center. Specimens from this laminate were cut and prepared at MIT. Thermal conditioning was carried out at Langley Research Center, and at the newly constructed MIT facility. Specimens were examined by edge inspection and by crack configuration studies, in which specimens were sanded down in order to examine the distribution of cracks within the specimens. A method for predicting matrix cracking due to decreasing temperatures and/or thermal cycling in all plies of an arbitrary laminate was implemented as a computer code. The code also predicts changes in properties due to the cracking. Extensive correlations between test results and code predictions were carried out. The computer code was documented and is ready for distribution.
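The driver of the matrix cracking that such a code predicts is the thermal-expansion mismatch between a ply and the laminate that constrains it. A first-order, one-dimensional sketch (with hypothetical property values; the actual code performs full laminate analysis and property degradation) is:

```python
def ply_thermal_stress(e_t, alpha_ply, alpha_lam, temp_drop):
    """Tensile transverse stress in a ply constrained by the laminate
    after cooling by temp_drop (K), first order:
    sigma = E_t * (alpha_ply - alpha_lam) * dT."""
    return e_t * (alpha_ply - alpha_lam) * temp_drop

def cracking_onset_drop(e_t, alpha_ply, alpha_lam, strength_t):
    """Temperature drop at which the transverse stress reaches the
    ply's transverse strength, i.e. predicted first matrix cracking."""
    return strength_t / (e_t * (alpha_ply - alpha_lam))
```

With illustrative graphite/epoxy-like values (transverse modulus 10 GPa, ply transverse CTE 26e-6/K against a laminate CTE of 2e-6/K), a 100 K cooldown produces roughly 24 MPa of transverse stress, showing why deep thermal cycles crack thick ply groups first.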
Engine Cycle Analysis for a Particle Bed Reactor Nuclear Rocket
1991-03-01
Appendices include output for a bleed cycle with a 2000 MW PBR and uncooled nozzle, output for a bleed cycle with a 2000 MW PBR and cooled nozzle, and output for an expander cycle with a 2000 MW PBR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moisseytsev, A.; Sienicki, J. J.
2012-05-10
Significant progress has been made on the development of a control strategy for the supercritical carbon dioxide (S-CO{sub 2}) Brayton cycle enabling removal of power from an autonomous load following Sodium-Cooled Fast Reactor (SFR) down to decay heat levels such that the S-CO{sub 2} cycle can be used to cool the reactor until decay heat can be removed by the normal shutdown heat removal system or a passive decay heat removal system such as Direct Reactor Auxiliary Cooling System (DRACS) loops with DRACS in-vessel heat exchangers. This capability of the new control strategy eliminates the need for use of a separate shutdown heat removal system which might also use supercritical CO{sub 2}. It has been found that this capability can be achieved by introducing a new control mechanism involving shaft speed control for the common shaft joining the turbine and two compressors following reduction of the load demand from the electrical grid to zero. Following disconnection of the generator from the electrical grid, heat is removed from the intermediate sodium circuit through the sodium-to-CO{sub 2} heat exchanger, the turbine solely drives the two compressors, and heat is rejected from the cycle through the CO{sub 2}-to-water cooler. To investigate the effectiveness of shaft speed control, calculations are carried out using the coupled Plant Dynamics Code-SAS4A/SASSYS-1 code for a linear load reduction transient for a 1000 MWt metallic-fueled SFR with autonomous load following. No deliberate motion of control rods or adjustment of sodium pump speeds is assumed to take place. It is assumed that the S-CO{sub 2} turbomachinery shaft speed linearly decreases from 100 to 20% nominal following reduction of grid load to zero. The reactor power is calculated to autonomously decrease down to 3% nominal, providing a lengthy window in time for the switchover to the normal shutdown heat removal system or for a passive decay heat removal system to become effective. 
However, the calculations reveal that the compressor conditions are calculated to approach surge such that the need for a surge control system for each compressor is identified. Thus, it is demonstrated that the S-CO{sub 2} cycle can operate in the initial decay heat removal mode even with autonomous reactor control. Because external power is not needed to drive the compressors, the results show that the S-CO{sub 2} cycle can be used for initial decay heat removal for a lengthy interval in time in the absence of any off-site electrical power. The turbine provides sufficient power to drive the compressors. Combined with autonomous reactor control, this represents a significant safety advantage of the S-CO{sub 2} cycle by maintaining removal of the reactor power until the core decay heat falls to levels well below those for which the passive decay heat removal system is designed. The new control strategy is an alternative to a split-shaft layout involving separate power and compressor turbines which had previously been identified as a promising approach enabling heat removal from a SFR at low power levels. The current results indicate that the split-shaft configuration does not provide any significant benefits for the S-CO{sub 2} cycle over the current single-shaft layout with shaft speed control. It has been demonstrated that when connected to the grid the single-shaft cycle can effectively follow the load over the entire range. No compressor speed variation is needed while power is delivered to the grid. When the system is disconnected from the grid, the shaft speed can be changed as effectively as it would be with the split-shaft arrangement. In the split-shaft configuration, zero generator power means disconnection of the power turbine, such that the resulting system will be almost identical to the single-shaft arrangement. 
Without this advantage of the split-shaft configuration, the economic benefits of the single-shaft arrangement, provided by just one turbine and lower losses at the design point, are more important to the overall cycle performance. Therefore, the single-shaft configuration shall be retained as the reference arrangement for S-CO{sub 2} cycle power converter preconceptual designs. Improvements to the ANL Plant Dynamics Code have been carried out. The major code improvement is the introduction of a restart capability which simplifies investigation of control strategies for very long transients. Another code modification is transfer of the entire code to a new Intel Fortran compiler; the execution of the code using the new compiler was verified by demonstrating that the same results are obtained as when the previous Compaq Visual Fortran compiler was used.
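A rough way to see why reducing shaft speed to 20% of nominal slashes the power the turbine must supply to the compressors is the fan affinity laws, under which absorbed power scales with the cube of shaft speed at similar operating points. This is an order-of-magnitude sketch only, not the Plant Dynamics Code turbomachinery model (which, as noted, must also track compressor surge margin).

```python
def compressor_power_fraction(speed_fraction):
    """Affinity-law estimate of compressor power at reduced shaft speed,
    as a fraction of design power: P/P_design ~ (N/N_design)**3."""
    return speed_fraction ** 3

def head_fraction(speed_fraction):
    """Companion affinity law: developed head scales with speed squared."""
    return speed_fraction ** 2
```

At 20% nominal speed this estimate puts compressor power demand below 1% of design, consistent with the turbine alone being able to drive both compressors during decay heat removal.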
Formal Analysis of the Remote Agent Before and After Flight
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Lowry, Mike; Park, SeungJoon; Pecheur, Charles; Penix, John; Visser, Willem; White, Jon L.
2000-01-01
This paper describes two separate efforts that used the SPIN model checker to verify deep space autonomy flight software. The first effort occurred at the beginning of a spiral development process and found five concurrency errors early in the design cycle that the developers acknowledge would not have been found through testing. This effort required a substantial manual modeling effort involving both abstraction and translation from the prototype LISP code to the PROMELA language used by SPIN. This experience and others led to research to address the gap between formal method tools and the development cycle used by software developers. The Java PathFinder tool which directly translates from Java to PROMELA was developed as part of this research, as well as automatic abstraction tools. In 1999 the flight software flew on a space mission, and a deadlock occurred in a sibling subsystem to the one which was the focus of the first verification effort. A second quick-response "cleanroom" verification effort found the concurrency error in a short amount of time. The error was isomorphic to one of the concurrency errors found during the first verification effort. The paper demonstrates that formal methods tools can find concurrency errors that indeed lead to loss of spacecraft functions, even for the complex software required for autonomy. Second, it describes progress in automatic translation and abstraction that eventually will enable formal methods tools to be inserted directly into the aerospace software development cycle.
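The kind of concurrency error SPIN finds can be illustrated with a toy explicit-state search. The sketch below is Python rather than PROMELA, and vastly simpler than the Remote Agent models: it exhaustively explores a two-process lock-ordering model and finds the classic deadlock in which each process holds one lock and waits for the other.

```python
def find_deadlock(initial, successors, is_final):
    """Exhaustive explicit-state search: return a reachable state that
    has no outgoing transitions and is not a valid final state
    (a deadlock), or None if every stuck state is a proper ending."""
    seen, stack = set(), [initial]
    while stack:
        state = stack.pop()
        if state in seen:
            continue
        seen.add(state)
        succ = successors(state)
        if not succ and not is_final(state):
            return state                    # stuck but not finished
        stack.extend(succ)
    return None

def two_lock_successors(state):
    """P1 acquires lock A then B; P2 acquires B then A; each releases
    both when done. State is (pc1, pc2, a_held, b_held)."""
    pc1, pc2, a, b = state
    succ = []
    if pc1 == 0 and not a: succ.append((1, pc2, True, b))       # P1 takes A
    if pc1 == 1 and not b: succ.append((2, pc2, a, True))       # P1 takes B
    if pc1 == 2:           succ.append((3, pc2, False, False))  # P1 releases
    if pc2 == 0 and not b: succ.append((pc1, 1, a, True))       # P2 takes B
    if pc2 == 1 and not a: succ.append((pc1, 2, True, b))       # P2 takes A
    if pc2 == 2:           succ.append((pc1, 3, False, False))  # P2 releases
    return succ

deadlock = find_deadlock((0, 0, False, False), two_lock_successors,
                         lambda s: s[0] == 3 and s[1] == 3)
```

The search reports the state where both processes sit at program counter 1 holding one lock each; fixing the lock acquisition order makes the search return None. Real model checkers add the counterexample trace, partial-order reduction, and temporal-logic properties on top of this core idea.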
Energy Savings Analysis of the Proposed Revision of the Washington D.C. Non-Residential Energy Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Athalye, Rahul A.; Hart, Philip R.
This report presents the results of an assessment of savings for the proposed Washington D.C. energy code relative to ASHRAE Standard 90.1-2010. It includes annual and life cycle savings for site energy, source energy, energy cost, and carbon dioxide emissions that would result from adoption and enforcement of the proposed code for newly constructed buildings in Washington D.C. over a five year period.
Techniques for the analysis of data from coded-mask X-ray telescopes
NASA Technical Reports Server (NTRS)
Skinner, G. K.; Ponman, T. J.; Hammersley, A. P.; Eyles, C. J.
1987-01-01
Several techniques useful in the analysis of data from coded-mask telescopes are presented. Methods of handling changes in the instrument pointing direction are reviewed and ways of using FFT techniques to do the deconvolution considered. Emphasis is on techniques for optimally-coded systems, but it is shown that the range of systems included in this class can be extended through the new concept of 'partial cycle averaging'.
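For an optimally coded (URA-style) mask, the deconvolution reduces to a circular cross-correlation of the detector counts with a balanced decoding array, which is exactly the operation FFT methods accelerate. A direct O(n^2) one-dimensional sketch follows; it is illustrative only, and the length-7 cyclic-difference-set mask used in the example is an assumption, not an instrument described here.

```python
def correlate_decode(detector, mask):
    """Reconstruct a 1-D sky image from a coded-mask shadowgram by
    circular cross-correlation with the balanced decoding array
    G = 2*mask - 1. Direct O(n^2) form; an FFT gives the same
    result in O(n log n)."""
    n = len(mask)
    g = [2 * m - 1 for m in mask]
    return [sum(detector[(i + j) % n] * g[j] for j in range(n))
            for i in range(n)]

def shadowgram(mask, source_pos):
    """Detector counts cast by a unit point source: the mask pattern
    cyclically shifted by the source position (no noise)."""
    n = len(mask)
    return [mask[(i - source_pos) % n] for i in range(n)]
```

With the perfect-difference-set mask [0,1,1,0,1,0,0] (open cells at the quadratic residues mod 7), a point source decodes to a sharp peak at its true position over a perfectly flat sidelobe floor, which is the defining property of an optimally coded system.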
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycles. In this paper, by combining the PEG algorithm and the quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding with an efficiency of 93.7%.
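The quasi-cyclic construction builds a long parity-check matrix from a small base matrix whose entries are cyclic shifts of a z x z identity block, with -1 denoting an all-zero block. A minimal sketch of that expansion plus a parity check follows; it illustrates the construction only, not the paper's PEG-optimized irregular design.

```python
def expand_qc(base, z):
    """Expand a quasi-cyclic base matrix into a full binary parity-check
    matrix H. Entry -1 -> z x z zero block; entry s >= 0 -> identity
    block cyclically shifted right by s."""
    rows, cols = len(base) * z, len(base[0]) * z
    h = [[0] * cols for _ in range(rows)]
    for bi, brow in enumerate(base):
        for bj, s in enumerate(brow):
            if s < 0:
                continue
            for k in range(z):
                h[bi * z + k][bj * z + (k + s) % z] = 1
    return h

def syndrome(h, codeword):
    """Parity checks H * c^T mod 2; an all-zero syndrome means the
    word satisfies every check."""
    return [sum(hij * cj for hij, cj in zip(row, codeword)) % 2
            for row in h]
```

Because each block is a shifted identity, the expanded matrix keeps the base matrix's column weights while the shift values can be chosen (as in the paper) to avoid short cycles in the Tanner graph.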
Dominguez, Daniel; Tsai, Yi-Hsuan; Gomez, Nicholas; Jha, Deepak Kumar; Davis, Ian; Wang, Zefeng
2016-01-01
Progression through the cell cycle is largely dependent on waves of periodic gene expression, and the regulatory networks for these transcriptome dynamics have emerged as critical points of vulnerability in various aspects of tumor biology. Through RNA-sequencing of human cells during two continuous cell cycles (>2.3 billion paired reads), we identified over 1,000 mRNAs, non-coding RNAs and pseudogenes with periodic expression. Periodic transcripts are enriched in functions related to DNA metabolism, mitosis, and DNA damage response, indicating these genes likely represent putative cell cycle regulators. Using our set of periodic genes, we developed a new approach termed “mitotic trait” that can classify primary tumors and normal tissues by their transcriptome similarity to different cell cycle stages. By analyzing >4,000 tumor samples in The Cancer Genome Atlas (TCGA) and other expression data sets, we found that mitotic trait significantly correlates with genetic alterations, tumor subtype and, notably, patient survival. We further defined a core set of 67 genes with robust periodic expression in multiple cell types. Proteins encoded by these genes function as major hubs of protein-protein interaction and are mostly required for cell cycle progression. The core genes also have unique chromatin features including increased levels of CTCF/RAD21 binding and H3K36me3. Loss of these features in uterine and kidney cancers is associated with altered expression of the core 67 genes. Our study suggests new chromatin-associated mechanisms for periodic gene regulation and offers a predictor of cancer patient outcomes. PMID:27364684
NASA Technical Reports Server (NTRS)
Summanen, T.; Kyroelae, E.
1995-01-01
We have developed a computer code which can be used to study 3-dimensional and time-dependent effects of the solar cycle on the interplanetary (IP) hydrogen distribution. The code is based on inverted Monte Carlo simulation. In this work we have modelled the temporal behaviour of the solar ionisation rate. We have assumed that during most of the solar cycle there is an anisotropic latitudinal structure, but right at the solar maximum the anisotropy disappears. The effects of this behaviour are discussed both in regard to the IP hydrogen distribution and the IP Lyman-alpha intensity.
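The core quantity such a simulation tracks is the probability that a hydrogen atom survives ionisation along its trajectory, exp of minus the time-integrated ionisation rate. The sketch below is a deterministic illustration with an assumed latitudinal form for the rate; it is not the inverted Monte Carlo code described above.

```python
import math

def ionization_rate(beta_eq, anisotropy, latitude_rad):
    """Latitude-dependent ionisation rate with an equator-to-pole
    anisotropy, beta = beta_eq * (1 - A * sin^2(latitude)). The
    functional form is an illustrative assumption; setting A = 0
    recovers the isotropic solar-maximum case."""
    return beta_eq * (1.0 - anisotropy * math.sin(latitude_rad) ** 2)

def survival_probability(rates, dt):
    """Probability an H atom survives ionisation along its path,
    exp(-integral beta dt), with the integral evaluated by the
    trapezoid rule over rate samples spaced dt apart."""
    integral = sum((a + b) / 2.0 * dt for a, b in zip(rates, rates[1:]))
    return math.exp(-integral)
```

Sampling `ionization_rate` along each simulated trajectory and feeding the samples to `survival_probability` is the weighting step that a (direct or inverted) Monte Carlo scheme applies to every test particle.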
Theory-based model for the pedestal, edge stability and ELMs in tokamaks
NASA Astrophysics Data System (ADS)
Pankin, A. Y.; Bateman, G.; Brennan, D. P.; Schnack, D. D.; Snyder, P. B.; Voitsekhovitch, I.; Kritz, A. H.; Janeschitz, G.; Kruger, S.; Onjun, T.; Pacher, G. W.; Pacher, H. D.
2006-04-01
An improved model for triggering edge localized mode (ELM) crashes is developed for use within integrated modelling simulations of the pedestal and ELM cycles at the edge of H-mode tokamak plasmas. The new model is developed by using the BALOO, DCON and ELITE ideal MHD stability codes to derive parametric expressions for the ELM triggering threshold. The whole toroidal mode number spectrum is studied with these codes. The DCON code applies to low mode numbers, while the BALOO code applies to only high mode numbers and the ELITE code applies to intermediate and high mode numbers. The variables used in the parametric stability expressions are the normalized pressure gradient and the parallel current density, which drive ballooning and peeling modes. Two equilibria motivated by DIII-D geometry with different plasma triangularities are studied. It is found that the stable region in the high triangularity discharge covers a much larger region of parameter space than the corresponding stability region in the low triangularity discharge. The new ELM trigger model is used together with a previously developed model for pedestal formation and ELM crashes in the ASTRA integrated modelling code to follow the time evolution of the temperature profiles during ELM cycles. The ELM frequencies obtained in the simulations of low and high triangularity discharges are observed to increase with increasing heating power. There is a transition from second stability to first ballooning mode stability as the heating power is increased in the high triangularity simulations. The results from the ideal MHD stability codes are compared with results from the resistive MHD stability code NIMROD.
Double-multiple streamtube model for Darrieus wind turbines
NASA Technical Reports Server (NTRS)
Paraschivoiu, I.
1981-01-01
An analytical model is proposed for calculating the rotor performance and aerodynamic blade forces for Darrieus wind turbines with curved blades. The method of analysis uses a multiple-streamtube model, divided into two parts: one modeling the upstream half-cycle of the rotor and the other the downstream half-cycle. The upwind and downwind components of the induced velocities at each level of the rotor were obtained using the principle of two actuator disks in tandem. Variation of the induced velocities in the two parts of the rotor produces larger forces in the upstream zone and smaller forces in the downstream zone. Comparisons of the overall rotor performance with previous methods and field test data show the important improvement obtained with the present model. The calculations were made using the computer code CARDAA developed at IREQ. The double-multiple streamtube model presented has two major advantages: it requires a much shorter computer time than the three-dimensional vortex model and is more accurate than the multiple-streamtube model in predicting the aerodynamic blade loads.
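The "two actuator disks in tandem" principle can be illustrated with textbook momentum theory, assuming uniform disk loading and a fully developed wake between the disks. This is a minimal sketch of the idea, not the CARDAA blade-element implementation, and the wind speed and thrust coefficient are made-up values.

```python
# Two actuator disks in tandem (momentum theory sketch): the upstream disk
# slows the flow, so the downstream disk sees a lower velocity and hence
# produces smaller forces, as in the double-multiple streamtube model.
import math

def induction_factor(ct):
    """Solve Ct = 4a(1-a) for the axial induction factor a."""
    return 0.5 * (1.0 - math.sqrt(1.0 - ct))

v_inf = 10.0  # free-stream wind speed, m/s (illustrative)
ct = 0.6      # assumed thrust coefficient of each half-cycle "disk"

a1 = induction_factor(ct)
v_upstream_disk = v_inf * (1.0 - a1)       # velocity at the upstream disk
v_equilibrium = v_inf * (1.0 - 2.0 * a1)   # fully developed wake feeds disk 2

a2 = induction_factor(ct)
v_downstream_disk = v_equilibrium * (1.0 - a2)

# Larger induced velocity deficit downstream -> smaller blade forces there.
print(v_upstream_disk > v_downstream_disk)  # True
```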
Zaborowska, Justyna; Isa, Nur F.
2015-01-01
Positive transcription elongation factor b (P‐TEFb), which comprises cyclin‐dependent kinase 9 (CDK9) kinase and cyclin T subunits, is an essential kinase complex in human cells. Phosphorylation of the negative elongation factors by P‐TEFb is required for productive elongation of transcription of protein‐coding genes by RNA polymerase II (pol II). In addition, P‐TEFb‐mediated phosphorylation of the carboxyl‐terminal domain (CTD) of the largest subunit of pol II mediates the recruitment of transcription and RNA processing factors during the transcription cycle. CDK9 also phosphorylates p53, a tumor suppressor that plays a central role in cellular responses to a range of stress factors. Many viral factors affect transcription by recruiting or modulating the activity of CDK9. In this review, we will focus on how the function of CDK9 is regulated by viral gene products. The central role of CDK9 in viral life cycles suggests that drugs targeting the interaction between viral products and P‐TEFb could be effective anti‐viral agents. PMID:27398404
Nouraei, S A R; O'Hanlon, S; Butler, C R; Hadovsky, A; Donald, E; Benjamin, E; Sandhu, G S
2009-02-01
To audit the accuracy of otolaryngology clinical coding and identify ways of improving it. Prospective multidisciplinary audit, using the 'national standard clinical coding audit' methodology supplemented by 'double-reading and arbitration'. Teaching-hospital otolaryngology and clinical coding departments. Otolaryngology inpatient and day-surgery cases. Concordance between initial coding performed by a coder (first cycle) and final coding by a clinician-coder multidisciplinary team (MDT; second cycle) for primary and secondary diagnoses and procedures, and Healthcare Resource Group (HRG) assignment. 1250 randomly selected cases were studied. Coding errors occurred in 24.1% of cases (301/1250). The clinician-coder MDT reassigned 48 primary diagnoses and 186 primary procedures and identified a further 209 initially missed secondary diagnoses and procedures. In 203 cases, the patient's initial HRG changed. Incorrect coding caused an average revenue loss of £174.90 per patient (14.7%), and 60% of the total income variance was due to miscoding of eight highly complex head and neck cancer cases. The 'HRG drift' created the appearance of disproportionate resource utilisation when treating 'simple' cases. At our institution, the total cost of maintaining a clinician-coder MDT was 4.8 times lower than the income regained through the double-reading process. This large audit of otolaryngology practice identifies a large degree of error in coding on discharge. This leads to significant loss of departmental revenue and, given that the same data are used for benchmarking and for making decisions about resource allocation, distorts the picture of clinical practice. These errors can be rectified by implementing a cost-effective clinician-coder double-reading multidisciplinary team as part of a data-assurance clinical governance framework, which we recommend should be established in hospitals.
Environmental performance of green building code and certification systems.
Suh, Sangwon; Tomar, Shivira; Leighton, Matthew; Kneifel, Joshua
2014-01-01
We examined the potential life-cycle environmental impact reduction of three green building code and certification (GBCC) systems: LEED, ASHRAE 189.1, and IgCC. A recently completed whole-building life cycle assessment (LCA) database from NIST was applied to a prototype building model specification by NREL. TRACI 2.0 from EPA was used for life cycle impact assessment (LCIA). The results showed that the baseline building model generates about 18 thousand metric tons CO2-equiv. of greenhouse gases (GHGs) and consumes 6 terajoules (TJ) of primary energy and 328 million liters of water over its life cycle. Overall, GBCC-compliant building models generated 0% to 25% less environmental impact than the baseline case (average 14% reduction). The largest reductions were associated with acidification (25%), human health-respiratory (24%), and global warming (GW) (22%), while no reductions were observed for ozone layer depletion (OD) and land use (LU). The performances of the three GBCC-compliant building models, measured in life-cycle impact reduction, were comparable. A sensitivity analysis showed that the comparative results were reasonably robust, although some results were relatively sensitive to behavioral parameters, including employee transportation and purchased electricity during the occupancy phase (average sensitivity coefficients 0.26-0.29).
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
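The single-letter (4 x 4) case of this construction can be checked directly from the stated assignments (C=00, U=10, G=11, A=01): the matrix of pairwise Hamming distances is symmetric with constant row and column sums, so it becomes doubly stochastic after scaling. The codon-level matrices in the paper extend this by Kronecker-style growth; this sketch covers only the base case.

```python
# Hamming-distance matrix over the Gray-code assignments of the four bases.
code = {"C": "00", "U": "10", "G": "11", "A": "01"}
bases = ["C", "U", "G", "A"]

def hamming(a, b):
    """Number of positions at which two equal-length bit strings differ."""
    return sum(x != y for x, y in zip(a, b))

D = [[hamming(code[r], code[c]) for c in bases] for r in bases]

row_sums = [sum(row) for row in D]
col_sums = [sum(D[i][j] for i in range(4)) for j in range(4)]
symmetric = all(D[i][j] == D[j][i] for i in range(4) for j in range(4))

print(D)  # [[0, 1, 2, 1], [1, 0, 1, 2], [2, 1, 0, 1], [1, 2, 1, 0]]
print(row_sums, col_sums, symmetric)  # every row/column sums to 4; symmetric

# Dividing by the constant sum yields a doubly stochastic, symmetric matrix.
S = [[d / 4 for d in row] for row in D]
```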
45 CFR 162.1011 - Valid code sets.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 45 Public Welfare 1 2012-10-01 2012-10-01 false Valid code sets. 162.1011 Section 162.1011 Public Welfare DEPARTMENT OF HEALTH AND HUMAN SERVICES ADMINISTRATIVE DATA STANDARDS AND RELATED REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1011 Valid code sets. Each code set is valid within the dates...
NASA Astrophysics Data System (ADS)
Wei, Pei; Gu, Rentao; Ji, Yuefeng
2014-06-01
As an innovative and promising technology, network coding has been introduced to passive optical networks (PON) in recent years to support inter-optical-network-unit (ONU) communication, yet the signaling process and dynamic bandwidth allocation (DBA) in PON with network coding (NC-PON) still need further study. Thus, we propose a joint signaling and DBA scheme for efficiently supporting differentiated services of inter-ONU communication in NC-PON. In the proposed joint scheme, the signaling process lays the foundation for fulfilling network coding in PON, and it can not only avoid the potential threat to downstream security in previous schemes but is also suitable for the proposed hybrid dynamic bandwidth allocation (HDBA) scheme. In HDBA, a DBA cycle is divided into two sub-cycles so that different coding, scheduling, and bandwidth allocation strategies can be applied to differentiated classes of services. Besides, as network traffic load varies, the entire upstream transmission window for all REPORT messages slides accordingly, leaving the transmission time of one or two sub-cycles to overlap with the bandwidth allocation calculation time at the optical line terminal (OLT), so that upstream idle time can be efficiently eliminated. Performance evaluation results validate that, compared with the two existing DBA algorithms deployed in NC-PON, HDBA demonstrates the best quality of service (QoS) support in terms of delay for all classes of services, and in particular guarantees the end-to-end delay bound of high-class services. Specifically, HDBA can eliminate the queuing delay and scheduling delay of high-class services, reduce those of lower-class services by at least 20%, and reduce the average end-to-end delay of all services by over 50%. Moreover, HDBA also achieves maximum delay fairness between coded and uncoded lower-class services, and medium delay fairness for high-class services.
Stirling engine external heat system design with heat pipe heater
NASA Technical Reports Server (NTRS)
Godett, Ted M.; Ziph, Benjamin
1986-01-01
This final report presents the conceptual design of a liquid-fueled external heating system (EHS) and the preliminary design of a heat pipe heater for the STM4-120 Stirling cycle engine, to meet the Air Force mobile electric power (MEP) requirement for units in the range of 20 to 60 kW. The EHS design had the following constraints: (1) packaging requirements limited the overall system dimensions to about 330 mm x 250 mm x 100 mm; (2) heat flux to the sodium heat pipe evaporator was limited to an average of 100 kW/m² and a maximum of 550 kW/m² based on previous experience; and (3) the heat pipe operating temperature was specified to be 800 C based on the heat input requirements of the STM4-120. An analysis code was developed to optimize the EHS performance parameters, and an analytical development of the sodium heat pipe heater was performed; both are presented and discussed. In addition, construction techniques were evaluated and scale-model heat pipe testing performed.
Code of Federal Regulations, 2012 CFR
2012-01-01
... leased buildings exempt from State and local code requirements in fire protection? 102-80.85 Section 102... Fire Prevention State and Local Codes § 102-80.85 Are Federally owned and leased buildings exempt from State and local code requirements in fire protection? Federally owned buildings are generally exempt...
Galactic and solar radiation exposure to aircrew during a solar cycle.
Lewis, B J; Bennett, L G I; Green, A R; McCall, M J; Ellaschuk, B; Butler, A; Pierre, M
2002-01-01
An on-going investigation using a tissue-equivalent proportional counter (TEPC) has been carried out to measure the ambient dose equivalent rate of the cosmic radiation exposure of aircrew during a solar cycle. A semi-empirical model has been derived from these data to allow for the interpolation of the dose rate for any global position. The model has been extended to an altitude of up to 32 km with further measurements made on board aircraft and several balloon flights. The effects of changing solar modulation during the solar cycle are characterised by correlating the dose rate data to different solar potential models. Through integration of the dose-rate function over a great circle flight path or between given waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAIRE) has been further developed for estimation of the route dose from galactic cosmic radiation exposure. This estimate is provided in units of ambient dose equivalent as well as effective dose, based on E/H*(10) scaling functions as determined from transport code calculations with LUIN and FLUKA. This experimentally based treatment has also been compared with the CARI-6 and EPCARD codes that are derived solely from theoretical transport calculations. Using TEPC measurements taken aboard the International Space Station, ground based neutron monitoring, GOES satellite data and transport code analysis, an empirical model has been further proposed for estimation of aircrew exposure during solar particle events. This model has been compared to results obtained during recent solar flare events.
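The route-dose idea, integrating a dose-rate function along a great-circle flight path, can be sketched as follows. The dose-rate model here is a made-up placeholder rather than the PCAIRE semi-empirical model, and the waypoints are illustrative only.

```python
# Sketch of route-dose estimation: numerically integrate a dose-rate
# function along the great circle between two waypoints. dose_rate() is
# an invented stand-in (dose rises with altitude and |latitude|).
import math

def slerp(p1, p2, f):
    """Point a fraction f along the great circle between (lat, lon) in radians."""
    lat1, lon1 = p1
    lat2, lon2 = p2
    d = math.acos(math.sin(lat1) * math.sin(lat2)
                  + math.cos(lat1) * math.cos(lat2) * math.cos(lon2 - lon1))
    a = math.sin((1 - f) * d) / math.sin(d)
    b = math.sin(f * d) / math.sin(d)
    x = a * math.cos(lat1) * math.cos(lon1) + b * math.cos(lat2) * math.cos(lon2)
    y = a * math.cos(lat1) * math.sin(lon1) + b * math.cos(lat2) * math.sin(lon2)
    z = a * math.sin(lat1) + b * math.sin(lat2)
    return math.atan2(z, math.hypot(x, y)), math.atan2(y, x)

def dose_rate(lat, alt_km):
    """Placeholder dose rate (uSv/h), NOT a physical model."""
    return 0.1 * alt_km * (1.0 + abs(math.sin(lat)))

def route_dose(p1, p2, alt_km, hours, steps=100):
    """Trapezoidal integral of dose rate over the flight (uSv)."""
    total = 0.0
    for i in range(steps):
        lat_a, _ = slerp(p1, p2, i / steps)
        lat_b, _ = slerp(p1, p2, (i + 1) / steps)
        total += 0.5 * (dose_rate(lat_a, alt_km) + dose_rate(lat_b, alt_km))
    return total * hours / steps

yyz = (math.radians(43.7), math.radians(-79.6))  # illustrative waypoints
lhr = (math.radians(51.5), math.radians(-0.5))
print(route_dose(yyz, lhr, alt_km=11, hours=7))
```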
Aerothermo-Structural Analysis of Low Cost Composite Nozzle/Inlet Components
NASA Technical Reports Server (NTRS)
Shivakumar, Kuwigai; Challa, Preeli; Sree, Dave; Reddy, D.
1999-01-01
This research is a cooperative effort among the Turbomachinery and Propulsion Division of NASA Glenn, the CCMR of NC A&T State University, and Tuskegee University. NC A&T is the lead center and Tuskegee University is the participating institution. The objectives of the research were to develop an integrated aerodynamic, thermal, and structural analysis code for the design of aircraft engine components, such as nozzles and inlets, made of textile composites; conduct design studies on typical inlets for hypersonic transportation vehicles; set up standard test examples; and finally manufacture a scaled-down composite inlet. These objectives are accomplished through the following seven tasks: (1) identify the relevant public-domain codes for all three types of analysis; (2) evaluate the codes for accuracy of results and computational efficiency; (3) develop aero-thermal and thermal-structural mapping algorithms; (4) integrate all the codes into one single code; (5) write a graphical user interface to improve the user-friendliness of the code; (6) conduct test studies for a rocket-based combined-cycle engine inlet; and finally (7) fabricate a demonstration inlet model using textile preform composites. Tasks one, two, and six are being pursued. We selected and evaluated NPARC for flow-field analysis, CSTEM for in-depth thermal analysis of inlets and nozzles, and FRAC3D for stress analysis. These codes have been independently verified for accuracy and performance. In addition, a graphical user interface based on micromechanics analysis for laminated as well as textile composites was developed. A demonstration of this code will be made at the conference. A rocket-based combined-cycle engine was selected for test studies. Flow-field analyses of various inlet geometries were studied. Integration of the codes is being continued. The codes developed are being applied to a candidate example of the trailblazer engine proposed for space transportation.
A successful development of the code will provide a simpler, faster and user-friendly tool for conducting design studies of aircraft and spacecraft engines, applicable in high speed civil transport and space missions.
Observations Regarding Use of Advanced CFD Analysis, Sensitivity Analysis, and Design Codes in MDO
NASA Technical Reports Server (NTRS)
Newman, Perry A.; Hou, Gene J. W.; Taylor, Arthur C., III
1996-01-01
Observations regarding the use of advanced computational fluid dynamics (CFD) analysis, sensitivity analysis (SA), and design codes in gradient-based multidisciplinary design optimization (MDO) reflect our perception of the interactions required of CFD and our experience in recent aerodynamic design optimization studies using CFD. Sample results from these latter studies are summarized for conventional optimization (analysis - SA codes) and simultaneous analysis and design optimization (design code) using both Euler and Navier-Stokes flow approximations. The amount of computational resources required for aerodynamic design using CFD via analysis - SA codes is greater than that required for design codes. Thus, an MDO formulation that utilizes the more efficient design codes where possible is desired. However, in the aerovehicle MDO problem, the various disciplines that are involved have different design points in the flight envelope; therefore, CFD analysis - SA codes are required at the aerodynamic 'off design' points. The suggested MDO formulation is a hybrid multilevel optimization procedure that consists of both multipoint CFD analysis - SA codes and multipoint CFD design codes that perform suboptimizations.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-05-16
... Request; Bar Code Label Requirement for Human Drug and Biological Products AGENCY: Food and Drug... and clearance. Bar Code Label Requirement for Human Drug and Biological Products--(OMB Control Number... that required human drug product and biological product labels to have bar codes. The rule required bar...
Wortman, Jeremy R; Goud, Asha; Raja, Ali S; Marchello, Dana; Sodickson, Aaron
2014-12-01
The purpose of this study was to measure the effects of use of a structured physician order entry system for trauma CT on the communication of clinical information and on coding practices and reimbursement efficiency. This study was conducted between April 1, 2011, and January 14, 2013, at a level I trauma center with 59,000 annual emergency department visits. On March 29, 2012, a structured order entry system was implemented for head through pelvis trauma CT, so-called pan-scan CT. This study compared the following factors before and after implementation: communication of clinical signs and symptoms and mechanism of injury, primary International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM) code category, success of reimbursement, and time required for successful reimbursement for the examination. Chi-square statistics were used to compare all categoric variables before and after the intervention, and the Wilcoxon rank sum test was used to compare billing cycle times. A total of 457 patients underwent pan-scan CT in 2734 distinct examinations. After the intervention, there was a 62% absolute increase in requisitions containing clinical signs or symptoms (from 0.4% to 63%, p<0.0001) and a 99% absolute increase in requisitions providing mechanism of injury (from 0.4% to 99%, p<0.0001). There was a 19% absolute increase in primary ICD-9-CM codes representing clinical signs or symptoms (from 2.9% to 21.8%, p<0.0001), and a 7% absolute increase in reimbursement success for examinations submitted to insurance carriers (from 83.0% to 89.7%, p<0.0001). For reimbursed studies, there was a 14.7-day reduction in mean billing cycle time (from 68.4 days to 53.7 days, p=0.008). Implementation of structured physician order entry for trauma CT was associated with significant improvement in the communication of clinical history to radiologists. 
The improvement was also associated with changes in coding practices, greater billing efficiency, and an increase in reimbursement success.
Embedded real-time image processing hardware for feature extraction and clustering
NASA Astrophysics Data System (ADS)
Chiu, Lihu; Chang, Grant
2003-08-01
Printronix, Inc. uses scanner-based image systems to perform print quality measurements for line-matrix printers. The size of the image samples and the image definition required make commercial scanners convenient to use. The image processing is relatively well defined, and we are able to simplify many of the calculations into hardware equations and C code. The process of rapidly prototyping the system using DSP-based C code gets the algorithms well defined early in the development cycle. Once a working system is defined, the rest of the process involves splitting the task up between the FPGA and the DSP implementation. Deciding which of the two to use, the DSP or the FPGA, is a simple matter of trial benchmarking. There are two kinds of benchmarking: one for speed, and the other for memory. The more memory-intensive algorithms should run in the DSP, and the simple real-time tasks can use the FPGA most effectively. Once the task is split, we can decide on which platform each algorithm should be executed. This involves prototyping all the code in the DSP, then timing various blocks of the algorithm. Slow routines can be optimized using the compiler tools and, if further reduction in time is needed, converted into tasks that the FPGA can perform.
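The trial-benchmarking step, timing candidate blocks and checking their memory footprint to decide the DSP/FPGA split, can be sketched in ordinary host-side code. The two processing blocks, workloads, and classifications below are invented stand-ins, not Printronix's algorithms.

```python
# Sketch of "trial benchmarking": measure time and peak memory of each
# algorithm block. Simple, memory-light blocks are FPGA candidates;
# memory-intensive ones stay on the DSP. Workloads are invented.
import timeit
import tracemalloc

def time_block(fn, repeat=5, number=100):
    """Best-of-N average wall time for one call of fn (seconds)."""
    return min(timeit.repeat(fn, repeat=repeat, number=number)) / number

def peak_memory(fn):
    """Peak Python heap allocation during one call of fn (bytes)."""
    tracemalloc.start()
    fn()
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return peak

def threshold_block():
    # Simple per-pixel operation: a natural FPGA candidate.
    return [1 if p > 128 else 0 for p in range(512)]

def histogram_block():
    # Builds a lookup table: more memory-bound, better suited to the DSP.
    hist = {}
    for p in range(512):
        hist[p % 256] = hist.get(p % 256, 0) + 1
    return hist

for name, fn in [("threshold", threshold_block), ("histogram", histogram_block)]:
    print(name, time_block(fn), peak_memory(fn))
```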
NASA Astrophysics Data System (ADS)
Endrizzi, S.; Gruber, S.; Dall'Amico, M.; Rigon, R.
2013-12-01
This contribution describes the new version of GEOtop, which emerges after almost eight years of development from the original version. GEOtop now integrates the 3D Richards equation with a new numerical method; improvements were made to the treatment of surface waters by using the shallow water equation. The freezing-soil module was greatly improved, and the evapotranspiration-vegetation modelling is now based on a double-layer scheme. Here we discuss the rationale for each choice that was made, and we compare the differences between the current solutions and the old solutions. In doing so, we highlight the issues that we faced during development, including the trade-off between complexity and simplicity of the code, the requirements of shared development, the different branches that were opened during the evolution of the code, and why we think that a code like GEOtop is indeed necessary. Models where the hydrological cycle is simplified can be built on the basis of perceptual models that neglect some fundamental aspects of the hydrological processes, of which some examples are presented. At the same time, process-based models like GEOtop can also neglect some fundamental process: but this is made evident by comparison with measurements, especially when data are imposed ex ante instead of calibrated.
Nuclear Engine System Simulation (NESS) version 2.0
NASA Technical Reports Server (NTRS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.
1993-01-01
The topics are presented in viewgraph form and include the following; nuclear thermal propulsion (NTP) engine system analysis program development; nuclear thermal propulsion engine analysis capability requirements; team resources used to support NESS development; expanded liquid engine simulations (ELES) computer model; ELES verification examples; NESS program development evolution; past NTP ELES analysis code modifications and verifications; general NTP engine system features modeled by NESS; representative NTP expander, gas generator, and bleed engine system cycles modeled by NESS; NESS program overview; NESS program flow logic; enabler (NERVA type) nuclear thermal rocket engine; prismatic fuel elements and supports; reactor fuel and support element parameters; reactor parameters as a function of thrust level; internal shield sizing; and reactor thermal model.
Numerical Assessment of Four-Port Through-Flow Wave Rotor Cycles with Passage Height Variation
NASA Technical Reports Server (NTRS)
Paxson, D. E.; Lindau, Jules W.
1997-01-01
The potential for improved performance of wave rotor cycles through the use of passage height variation is examined. A quasi-one-dimensional CFD code with experimentally validated loss models is used to determine the flowfield in the wave rotor passages. Results indicate that a carefully chosen passage height profile can produce substantial performance gains. Numerical performance data are presented for a specific profile, in a four-port, through-flow cycle design which yielded a computed 4.6% increase in design point pressure ratio over a comparably sized rotor with constant passage height. In a small gas turbine topping cycle application, this increased pressure ratio would reduce specific fuel consumption to 22% below the un-topped engine; a significant improvement over the already impressive 18% reductions predicted for the constant passage height rotor. The simulation code is briefly described. The method used to obtain rotor passage height profiles with enhanced performance is presented. Design and off-design results are shown using two different computational techniques. The paper concludes with some recommendations for further work.
NASA Astrophysics Data System (ADS)
Nekuchaev, A. O.; Shuteev, S. A.
2014-04-01
A new method of data transmission in DWDM systems along existing long-distance fiber-optic communication lines is proposed. The existing method uses, for example, 32 wavelengths in the NRZ code with an average power of 16 conventional units (on average, 16 ones and 16 zeros) and a rate of 32 bits/cycle. In the new method, at every instant of a 1/16 cycle, one of 124 wavelengths (each lasting one cycle, with no more than 16 obligatory different wavelengths present at any time instant) is transmitted, carrying 4 bits, with an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction, FEC) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing erasures and errors simultaneously.
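The quoted rate figures can be cross-checked with simple arithmetic. This is only a numerical restatement of the abstract's numbers, not a simulation of the optical scheme.

```python
# Back-of-envelope check of the throughput and power figures above.

# Existing NRZ scheme: 32 wavelengths carrying 1 bit each per cycle;
# about half the wavelengths are "on" at any time -> average power ~16 units.
nrz_bits_per_cycle = 32 * 1
nrz_avg_power = 32 / 2

# Proposed scheme: a cycle is split into 16 slots (1/16 cycle each), and
# each slot's transmitted wavelength symbol carries 4 bits, at an average
# power of 15 conventional units.
slots_per_cycle = 16
bits_per_symbol = 4
new_bits_per_cycle = slots_per_cycle * bits_per_symbol

print(nrz_bits_per_cycle, new_bits_per_cycle)  # 32 vs 64: the rate doubles
print(new_bits_per_cycle / nrz_bits_per_cycle)
```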
Holonomic surface codes for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco
2018-02-01
Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.
Evaluation of the efficiency and fault density of software generated by code generators
NASA Technical Reports Server (NTRS)
Schreur, Barbara
1993-01-01
Flight computers and flight software are used for GN&C (guidance, navigation, and control), engine controllers, and avionics during missions. Software development requires the generation of a considerable amount of code. The engineers who generate the code make mistakes, and the generation of a large body of code with high reliability requires considerable time. Computer-aided software engineering (CASE) tools are available which generate code automatically from inputs supplied through graphical interfaces. These tools are referred to as code generators. In theory, code generators could write highly reliable code quickly and inexpensively. The various code generators offer different levels of reliability checking. Some check only the finished product, while some allow checking of individual modules and combined sets of modules as well. Considering NASA's requirement for reliability, an evaluation against in-house, manually generated code is needed. Furthermore, automatically generated code is reputed to be as efficient as the best manually generated code when executed. In-house verification is warranted.
The application of domain-driven design in NMS
NASA Astrophysics Data System (ADS)
Zhang, Jinsong; Chen, Yan; Qin, Shengjun
2011-12-01
In the traditional data-model-driven design approach, the system analysis and design phases are often separated, so requirement information cannot be expressed explicitly. The method also tends to lead developers toward process-oriented programming, leaving the code between modules or between layers disordered, which makes it hard to meet system scalability requirements. This paper proposes a software hierarchy based on a rich domain model, following domain-driven design and named FHRDM; the Webwork + Spring + Hibernate (WSH) framework is then adopted. Domain-driven design aims to construct a domain model that meets both the demands of the field in which the software exists and the needs of software development. In this way, problems in Navigational Maritime System (NMS) development, such as large business volumes, difficult requirement elicitation, high development costs, and long development cycles, can be resolved.
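The contrast between a data-model-driven (anemic) design and the rich domain model advocated above can be sketched in a few lines. The `Voyage` entity and its rules below are hypothetical illustrations, not taken from the NMS system described in the paper:

```python
from dataclasses import dataclass, field

# Anemic model: data only; business rules end up scattered in procedural code.
@dataclass
class VoyageRecord:
    waypoints: list = field(default_factory=list)

# Rich domain model: the entity owns its invariants and behaviour.
@dataclass
class Voyage:
    waypoints: list = field(default_factory=list)

    def add_waypoint(self, name: str) -> None:
        if name in self.waypoints:          # domain rule lives with the data
            raise ValueError(f"duplicate waypoint: {name}")
        self.waypoints.append(name)

    @property
    def is_planned(self) -> bool:
        # A voyage needs at least a departure and a destination.
        return len(self.waypoints) >= 2

voyage = Voyage()
voyage.add_waypoint("Shanghai")
voyage.add_waypoint("Singapore")
```

Keeping the rule inside the entity means every caller, in any layer, gets the same behaviour, which is the scalability argument the paper makes for the rich-model hierarchy.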
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tiradani, Anthony; Altunay, Mine; Dagenhart, David
The Decision Engine is a critical component of the HEP Cloud Facility. It provides the functionality of resource scheduling for disparate resource providers, including those which may have a cost or a restricted allocation of cycles. Along with the architecture, design, and requirements for the Decision Engine, this document provides the rationale and explanations for various design decisions. In some cases, requirements and interfaces for a limited subset of external services are included in this document. This document is intended to be a high-level design. The design represented in this document is not complete and does not break everything down in detail. The class structures and pseudo-code exist for example purposes to illustrate desired behaviors, and as such, should not be taken literally. The protocols and behaviors are the important items to take from this document. This project is still in prototyping mode, so flaws and inconsistencies may exist and should be noted and treated as failures.
Coding in Stroke and Other Cerebrovascular Diseases.
Korb, Pearce J; Jones, William
2017-02-01
Accurate coding is critical for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of coding principles for patients with strokes and other cerebrovascular diseases and includes an illustrative case as a review of coding principles in a patient with acute stroke.
Controlled longitudinal emittance blow-up using band-limited phase noise in CERN PSB
NASA Astrophysics Data System (ADS)
Quartullo, D.; Shaposhnikova, E.; Timko, H.
2017-07-01
Controlled longitudinal emittance blow-up (from 1 eVs to 1.4 eVs) for LHC beams in the CERN PS Booster is currently achieved using sinusoidal phase modulation of a dedicated high-harmonic RF system. In 2021, after the LHC injectors upgrade, 3 eVs should be extracted to the PS. Even if the current method can satisfy the new requirements, it relies on improvements to the low-level RF system. In this paper another method of blow-up was considered, namely the injection of band-limited phase noise in the main RF system (h=1), never tried in the PSB but already used in the CERN SPS and LHC under different conditions (longer cycles). This technique, which lowers the peak line density and therefore the impact of intensity effects in the PSB and the PS, can also be complementary to the present method. The longitudinal space charge, dominant in the PSB, causes significant synchrotron frequency shifts with intensity, and its effect should be taken into account. Another complication arises from the interaction of the phase loop with the injected noise, since both act on the RF phase. All these elements were studied in simulations of the PSB cycle with the BLonD code, and the required blow-up was achieved.
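Band-limited phase noise of the kind injected into the main RF system can be generated by shaping white noise in the frequency domain. The sketch below is a generic illustration, not the BLonD implementation; the band edges, revolution frequency, and rms amplitude are made-up values, not PSB machine parameters:

```python
import numpy as np

def band_limited_phase_noise(n_turns, f_low, f_high, f_rev, rms_rad, seed=0):
    """Generate per-turn phase noise whose spectrum is confined to
    [f_low, f_high] by zeroing all other Fourier components of white noise."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n_turns)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_turns, d=1.0 / f_rev)   # one sample per turn
    band = (freqs >= f_low) & (freqs <= f_high)
    spectrum[~band] = 0.0                             # kill out-of-band components
    noise = np.fft.irfft(spectrum, n=n_turns)
    noise *= rms_rad / np.std(noise)                  # rescale to target rms (rad)
    return noise

# Illustrative numbers only: a band around an assumed synchrotron frequency.
phase_noise = band_limited_phase_noise(
    n_turns=20000, f_low=600.0, f_high=900.0, f_rev=1.8e6, rms_rad=0.05)
```

Sweeping the band across the (intensity-shifted) synchrotron frequency distribution is what lets such noise diffuse particles and blow up the emittance in a controlled way.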
Modelling of radiation field around spent fuel container.
Kryuchkov, E F; Opalovsky, V A; Tikhomirov, G V
2005-01-01
Operation of nuclear reactors leads to the production of spent nuclear fuel (SNF). There are two basic strategies for SNF management: ultimate disposal of SNF in geological formations, and recycling or repeated utilisation of reprocessed SNF. In both options, there is an urgent need to study the radiation properties of SNF, and information about them is required at all stages of SNF management. In order to achieve more effective utilisation of nuclear materials, new fuel cycles are under development based on uranium-plutonium, uranium-thorium and some other types of nuclear fuel. These promising types of nuclear fuel are characterised by quite different radiation properties at all stages of the nuclear fuel cycle (NFC). Comparative analysis is therefore required of the radiation properties of different nuclear fuel types at different NFC stages. The results presented here were obtained from numerical analysis of the radiation field around transport containers of different SNF types and in SNF storage. The calculations were carried out with the computer code packages SCALE-4.3 and MCNP-4C. Comparison of the dose parameters obtained for different models of the transport container with experimental data allowed us to draw certain conclusions about the errors in the numerical results caused by the approximate geometrical description of the transport container.
ASME code considerations for the compact heat exchanger
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nestell, James; Sham, Sam
2015-08-31
The mission of the U.S. Department of Energy (DOE), Office of Nuclear Energy is to advance nuclear power in order to meet the nation's energy, environmental, and energy security needs. Advanced high temperature reactor systems such as sodium fast reactors and high and very high temperature gas-cooled reactors are being considered for the next generation of nuclear reactor plant designs. The coolants for these high temperature reactor systems include liquid sodium and helium gas. Supercritical carbon dioxide (sCO₂), a fluid at a temperature and pressure above the critical point of CO₂, is currently being investigated by DOE as a working fluid for a nuclear or fossil-heated recompression closed Brayton cycle energy conversion system that operates at 550°C (1022°F) at 200 bar (2900 psi). Higher operating temperatures are envisioned in future developments. All of these design concepts require a highly effective heat exchanger that transfers heat from the nuclear or chemical reactor to the chemical process fluid or to the power cycle. In the nuclear designs described above, heat is transferred from the primary to the secondary loop via an intermediate heat exchanger (IHX) and then from the intermediate loop to either a working process or a power cycle via a secondary heat exchanger (SHX). The IHX is a component in the primary coolant loop which will be classified as "safety related." The intermediate loop will likely be classified as "not safety related but important to safety." These safety classifications have a direct bearing on heat exchanger design approaches for the IHX and SHX. The very high temperatures being considered for the VHTR will require the use of very high temperature alloys for the IHX and SHX. Material cost considerations alone will dictate that the IHX and SHX be highly effective; that is, provide high heat transfer area in a small volume. This feature must be accompanied by low pressure drop and mechanical reliability and robustness.
Classic shell and tube designs will be large and costly, and may only be appropriate in steam generator service in the SHX where boiling inside the tubes occurs. For other energy conversion systems, all of these features can be met in a compact heat exchanger design. This report will examine some of the ASME Code issues that will need to be addressed to allow use of a Code-qualified compact heat exchanger in IHX or SHX nuclear service. Most effort will focus on the IHX, since the safety-related (Class A) design rules are more extensive than those for important-to-safety (Class B) or commercial rules that are relevant to the SHX.
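The demand that a compact heat exchanger be "highly effective" can be made concrete with the standard effectiveness-NTU relation for a counterflow exchanger. This is a textbook formula, not anything specific to the IHX/SHX designs discussed in the report, and the numbers below are illustrative:

```python
import math

def counterflow_effectiveness(ntu, cr):
    """Effectiveness of a counterflow heat exchanger from the classical
    epsilon-NTU relation; cr is the heat-capacity-rate ratio Cmin/Cmax."""
    if abs(cr - 1.0) < 1e-12:            # balanced-flow limiting case
        return ntu / (1.0 + ntu)
    e = math.exp(-ntu * (1.0 - cr))
    return (1.0 - e) / (1.0 - cr * e)

# Compact designs pack large heat transfer area into a small volume,
# which raises NTU = UA/Cmin and hence effectiveness:
for ntu in (1.0, 3.0, 10.0):
    eff = counterflow_effectiveness(ntu, 0.9)
    print(f"NTU = {ntu:4.1f}  ->  effectiveness = {eff:.3f}")
```

The monotonic rise of effectiveness with NTU is the quantitative form of the report's point that high heat transfer area in a small volume is what makes the compact geometry attractive.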
Superconducting quantum circuits at the surface code threshold for fault tolerance.
Barends, R; Kelly, J; Megrant, A; Veitia, A; Sank, D; Jeffrey, E; White, T C; Mutus, J; Fowler, A G; Campbell, B; Chen, Y; Chen, Z; Chiaro, B; Dunsworth, A; Neill, C; O'Malley, P; Roushan, P; Vainsencher, A; Wenner, J; Korotkov, A N; Cleland, A N; Martinis, John M
2014-04-24
A quantum computer can solve hard problems, such as prime factoring, database searching and quantum simulation, at the cost of needing to protect fragile quantum states from error. Quantum error correction provides this protection by distributing a logical state among many physical quantum bits (qubits) by means of quantum entanglement. Superconductivity is a useful phenomenon in this regard, because it allows the construction of large quantum circuits and is compatible with microfabrication. For superconducting qubits, the surface code approach to quantum computing is a natural choice for error correction, because it uses only nearest-neighbour coupling and rapidly cycled entangling gates. The gate fidelity requirements are modest: the per-step fidelity threshold is only about 99 per cent. Here we demonstrate a universal set of logic gates in a superconducting multi-qubit processor, achieving an average single-qubit gate fidelity of 99.92 per cent and a two-qubit gate fidelity of up to 99.4 per cent. This places Josephson quantum computing at the fault-tolerance threshold for surface code error correction. Our quantum processor is a first step towards the surface code, using five qubits arranged in a linear array with nearest-neighbour coupling. As a further demonstration, we construct a five-qubit Greenberger-Horne-Zeilinger state using the complete circuit and full set of gates. The results demonstrate that Josephson quantum computing is a high-fidelity technology, with a clear path to scaling up to large-scale, fault-tolerant quantum circuits.
Generating Code Review Documentation for Auto-Generated Mission-Critical Software
NASA Technical Reports Server (NTRS)
Denney, Ewen; Fischer, Bernd
2009-01-01
Model-based design and automated code generation are increasingly used at NASA to produce actual flight code, particularly in the Guidance, Navigation, and Control domain. However, since code generators are typically not qualified, there is no guarantee that their output is correct, and consequently auto-generated code still needs to be fully tested and certified. We have thus developed AUTOCERT, a generator-independent plug-in that supports the certification of auto-generated code. AUTOCERT takes a set of mission safety requirements, and formally verifies that the autogenerated code satisfies these requirements. It generates a natural language report that explains why and how the code complies with the specified requirements. The report is hyper-linked to both the program and the verification conditions and thus provides a high-level structured argument containing tracing information for use in code reviews.
Francis, Brian R.
2015-01-01
Although analysis of the genetic code has allowed explanations for its evolution to be proposed, little evidence exists in biochemistry and molecular biology to offer an explanation for the origin of the genetic code. In particular, two features of biology make the origin of the genetic code difficult to understand. First, nucleic acids are highly complicated polymers requiring numerous enzymes for biosynthesis. Secondly, proteins have a simple backbone with a set of 20 different amino acid side chains synthesized by a highly complicated ribosomal process in which mRNA sequences are read in triplets. Apparently, both nucleic acid and protein syntheses have extensive evolutionary histories. Supporting these processes is a complex metabolism and at the hub of metabolism are the carboxylic acid cycles. This paper advances the hypothesis that the earliest predecessor of the nucleic acids was a β-linked polyester made from malic acid, a highly conserved metabolite in the carboxylic acid cycles. In the β-linked polyester, the side chains are carboxylic acid groups capable of forming interstrand double hydrogen bonds. Evolution of the nucleic acids involved changes to the backbone and side chain of poly(β-d-malic acid). Conversion of the side chain carboxylic acid into a carboxamide or a longer side chain bearing a carboxamide group, allowed information polymers to form amide pairs between polyester chains. Aminoacylation of the hydroxyl groups of malic acid and its derivatives with simple amino acids such as glycine and alanine allowed coupling of polyester synthesis and protein synthesis. Use of polypeptides containing glycine and l-alanine for activation of two different monomers with either glycine or l-alanine allowed simple coded autocatalytic synthesis of polyesters and polypeptides and established the first genetic code. 
A primitive cell capable of supporting electron transport, thioester synthesis, reduction reactions, and synthesis of polyesters and polypeptides is proposed. The cell consists of an iron-sulfide particle enclosed by tholin, a heterogeneous organic material that is produced by Miller-Urey type experiments that simulate conditions on the early Earth. As the synthesis of nucleic acids evolved from β-linked polyesters, the singlet coding system for replication evolved into a four nucleotide/four amino acid process (AMP = aspartic acid, GMP = glycine, UMP = valine, CMP = alanine) and then into the triplet ribosomal process that permitted multiple copies of protein to be synthesized independent of replication. This hypothesis reconciles the “genetics first” and “metabolism first” approaches to the origin of life and explains why there are four bases in the genetic alphabet. PMID:25679748
ATLAS offline software performance monitoring and optimization
NASA Astrophysics Data System (ADS)
Chauhan, N.; Kabra, G.; Kittelmann, T.; Langenberg, R.; Mandrysch, R.; Salzburger, A.; Seuster, R.; Ritsch, E.; Stewart, G.; van Eldik, N.; Vitillo, R.; Atlas Collaboration
2014-06-01
In a complex multi-developer, multi-package software environment, such as the ATLAS offline framework Athena, tracking the performance of the code can be a non-trivial task in itself. In this paper we describe improvements in the instrumentation of ATLAS offline software that have given considerable insight into the performance of the code and helped to guide the optimization work. The first tool we used to instrument the code is PAPI, which is a programming interface for accessing hardware performance counters. PAPI events can count floating point operations, cycles, instructions and cache accesses. Triggering PAPI to start/stop counting for each algorithm and processed event results in a good understanding of the algorithm level performance of ATLAS code. Further data can be obtained using Pin, a dynamic binary instrumentation tool. Pin tools can be used to obtain similar statistics as PAPI, but advantageously without requiring recompilation of the code. Fine grained routine and instruction level instrumentation is also possible. Pin tools can additionally interrogate the arguments to functions, like those in linear algebra libraries, so that a detailed usage profile can be obtained. These tools have characterized the extensive use of vector and matrix operations in ATLAS tracking. Currently, CLHEP is used here, which is not an optimal choice. To help evaluate replacement libraries a testbed has been set up allowing comparison of the performance of different linear algebra libraries (including CLHEP, Eigen and SMatrix/SVector). Results are then presented via the ATLAS Performance Management Board framework, which runs daily with the current development branch of the code and monitors reconstruction and Monte-Carlo jobs. This framework analyses the CPU and memory performance of algorithms, and an overview of the results is presented on a web page.
These tools have provided the insight necessary to plan and implement performance enhancements in ATLAS code by identifying the most common operations, with the call parameters well understood, and allowing improvements to be quantified in detail.
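The per-algorithm start/stop counting pattern described above can be mimicked in a few lines. This is a generic wall-clock sketch, not PAPI (which counts hardware events through a C API) and not the actual Athena instrumentation; the algorithm names and workloads are stand-ins:

```python
import time
from collections import defaultdict
from contextlib import contextmanager

class AlgTimer:
    """Accumulate elapsed time and call counts per algorithm, started and
    stopped around each algorithm invocation for each processed event."""
    def __init__(self):
        self.total = defaultdict(float)
        self.calls = defaultdict(int)

    @contextmanager
    def measure(self, name):
        t0 = time.perf_counter()
        try:
            yield
        finally:
            self.total[name] += time.perf_counter() - t0
            self.calls[name] += 1

    def report(self):
        return {n: (self.calls[n], self.total[n]) for n in self.total}

timer = AlgTimer()
for event in range(100):                     # loop over "events"
    with timer.measure("tracking"):
        sum(i * i for i in range(1000))      # stand-in for a reconstruction algorithm
    with timer.measure("calorimetry"):
        sum(range(500))
```

A real hardware-counter backend would swap the `perf_counter` calls for counter reads, but the bookkeeping per (algorithm, event) is the same.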
Development of code evaluation criteria for assessing predictive capability and performance
NASA Technical Reports Server (NTRS)
Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.
1993-01-01
Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sullivan, Edmund J.; Anderson, Michael T.
In May 2010, the NRC issued a proposed notice of rulemaking that includes a provision to add a new section to its rules to require licensees to implement ASME Code Case N-770, ‘‘Alternative Examination Requirements and Acceptance Standards for Class 1 PWR Piping and Vessel Nozzle Butt Welds Fabricated with UNS N06082 or UNS W86182 Weld Filler Material With or Without the Application of Listed Mitigation Activities, Section XI, Division 1,’’ with 15 conditions. Code Case N-770 contains baseline and inservice inspection (ISI) requirements for unmitigated butt welds fabricated with Alloy 82/182 material and preservice and ISI requirements for mitigated butt welds. The NRC stated that application of ASME Code Case N-770 is necessary because the inspections currently required by the ASME Code, Section XI, were not written to address stress corrosion cracking of Alloy 82/182 butt welds, and the safety consequences of inadequate inspections can be significant. The NRC expects to issue the final rule incorporating this code case into its regulations in the spring 2011 time frame. This paper discusses the new examination requirements, the conditions that NRC is imposing, and the major concerns with implementation of the new Code Case.
Stochastic many-body problems in ecology, evolution, neuroscience, and systems biology
NASA Astrophysics Data System (ADS)
Butler, Thomas C.
Using the tools of many-body theory, I analyze problems in four different areas of biology dominated by strong fluctuations: The evolutionary history of the genetic code, spatiotemporal pattern formation in ecology, spatiotemporal pattern formation in neuroscience and the robustness of a model circadian rhythm circuit in systems biology. In the first two research chapters, I demonstrate that the genetic code is extremely optimal (in the sense that it manages the effects of point mutations or mistranslations efficiently), more than an order of magnitude beyond what was previously thought. I further show that the structure of the genetic code implies that early proteins were probably only loosely defined. Both the nature of early proteins and the extreme optimality of the genetic code are interpreted in light of recent theory [1] as evidence that the evolution of the genetic code was driven by evolutionary dynamics that were dominated by horizontal gene transfer. I then explore the optimality of a proposed precursor to the genetic code. The results show that the precursor code has only limited optimality, which is interpreted as evidence that the precursor emerged prior to translation, or else never existed. In the next part of the dissertation, I introduce a many-body formalism for reaction-diffusion systems described at the mesoscopic scale with master equations. I first apply this formalism to spatially-extended predator-prey ecosystems, resulting in the prediction that many-body correlations and fluctuations drive population cycles in time, called quasicycles. Most of these results were previously known, but were derived using the system size expansion [2, 3]. I next apply the analytical techniques developed in the study of quasi-cycles to a simple model of Turing patterns in a predator-prey ecosystem. This analysis shows that fluctuations drive the formation of a new kind of spatiotemporal pattern formation that I name "quasi-patterns." 
These quasi-patterns exist over a much larger range of physically accessible parameters than the patterns predicted in mean field theory and therefore account for the apparent observations in ecology of patterns in regimes where Turing patterns do not occur. I further show that quasi-patterns have statistical properties that allow them to be distinguished empirically from mean field Turing patterns. I next analyze a model of visual cortex in the brain that has striking similarities to the activator-inhibitor model of ecosystem quasi-pattern formation. Through analysis of the resulting phase diagram, I show that the architecture of the neural network in the visual cortex is configured to make the visual cortex robust to unwanted internally generated spatial structure that interferes with normal visual function. I also predict that some geometric visual hallucinations are quasi-patterns and that the visual cortex supports a new phase of spatially scale invariant behavior present far from criticality. In the final chapter, I explore the effects of fluctuations on cycles in systems biology, specifically the pervasive phenomenon of circadian rhythms. By exploring the behavior of a generic stochastic model of circadian rhythms, I show that the circadian rhythm circuit exploits leaky mRNA production to safeguard the cycle from failure. I also show that this safeguard mechanism is highly robust to changes in the rate of leaky mRNA production. Finally, I explore the failure of the deterministic model in two different contexts, one where the deterministic model predicts cycles where they do not exist, and another context in which cycles are not predicted by the deterministic model.
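Quasi-cycles of the kind described above can be reproduced with an exact stochastic (Gillespie) simulation of a minimal predator-prey reaction scheme. The scheme and rate constants below are illustrative choices, not the model analyzed in the dissertation:

```python
import random

def gillespie_predator_prey(a0, b0, birth, predation, death, t_max, seed=1):
    """Exact stochastic simulation of the reactions
    A -> 2A (prey birth), A + B -> 2B (predation), B -> 0 (predator death)."""
    random.seed(seed)
    a, b, t = a0, b0, 0.0
    traj = [(t, a, b)]
    while t < t_max:
        rates = [birth * a, predation * a * b, death * b]
        total = sum(rates)
        if total == 0.0:                 # absorbing state: nothing can fire
            break
        t += random.expovariate(total)   # time to next reaction
        r = random.uniform(0.0, total)   # pick which reaction fires
        if r < rates[0]:
            a += 1                       # prey birth
        elif r < rates[0] + rates[1]:
            a -= 1; b += 1               # predation converts prey to predator
        else:
            b -= 1                       # predator death
        traj.append((t, a, b))
    return traj

# Fluctuations around the deterministic fixed point (death/predation, birth/predation)
# show up as noisy oscillations, i.e. quasi-cycles.
traj = gillespie_predator_prey(a0=100, b0=50, birth=1.0,
                               predation=0.02, death=1.0, t_max=20.0)
```

In the deterministic (mean-field) limit this system only spirals into its fixed point; it is the demographic noise resolved by the exact simulation that sustains the oscillations.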
Jones, Lyell K; Ney, John P
2016-12-01
Accurate coding is critically important for clinical practice and research. Ongoing changes to diagnostic and billing codes require the clinician to stay abreast of coding updates. Payment for health care services, data sets for health services research, and reporting for medical quality improvement all require accurate administrative coding. This article provides an overview of administrative coding for patients with muscle disease and includes a case-based review of diagnostic and Evaluation and Management (E/M) coding principles in patients with myopathy. Procedural coding for electrodiagnostic studies and neuromuscular ultrasound is also reviewed.
NASA Technical Reports Server (NTRS)
Norris, Andrew
2003-01-01
The goal was to perform a 3D simulation of the GE90 combustor as part of a full turbofan engine simulation. The requirements of high fidelity and fast turn-around time demand a massively parallel code. The National Combustion Code (NCC) was chosen for this task, as it supports up to 999 processors and includes state-of-the-art combustion models. Also required is the ability to take inlet conditions from the compressor code and give exit conditions to the turbine code.
21 CFR 610.67 - Bar code label requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... or to blood and blood components intended for transfusion. For blood and blood components intended...) BIOLOGICS GENERAL BIOLOGICAL PRODUCTS STANDARDS Labeling Standards § 610.67 Bar code label requirements. Biological products must comply with the bar code requirements at § 201.25 of this chapter. However, the bar...
21 CFR 610.67 - Bar code label requirements.
Code of Federal Regulations, 2013 CFR
2013-04-01
... or to blood and blood components intended for transfusion. For blood and blood components intended...) BIOLOGICS GENERAL BIOLOGICAL PRODUCTS STANDARDS Labeling Standards § 610.67 Bar code label requirements. Biological products must comply with the bar code requirements at § 201.25 of this chapter. However, the bar...
21 CFR 610.67 - Bar code label requirements.
Code of Federal Regulations, 2014 CFR
2014-04-01
... or to blood and blood components intended for transfusion. For blood and blood components intended...) BIOLOGICS GENERAL BIOLOGICAL PRODUCTS STANDARDS Labeling Standards § 610.67 Bar code label requirements. Biological products must comply with the bar code requirements at § 201.25 of this chapter. However, the bar...
21 CFR 610.67 - Bar code label requirements.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Bar code label requirements. 610.67 Section 610.67 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS GENERAL BIOLOGICAL PRODUCTS STANDARDS Labeling Standards § 610.67 Bar code label requirements...
21 CFR 610.67 - Bar code label requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Bar code label requirements. 610.67 Section 610.67 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) BIOLOGICS GENERAL BIOLOGICAL PRODUCTS STANDARDS Labeling Standards § 610.67 Bar code label requirements...
45 CFR 162.1002 - Medical data code sets.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2012-10-01 2012-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...
45 CFR 162.1002 - Medical data code sets.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2014-10-01 2014-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...
45 CFR 162.1002 - Medical data code sets.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2013-10-01 2013-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...
45 CFR 162.1002 - Medical data code sets.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2011-10-01 2011-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...
45 CFR 162.1002 - Medical data code sets.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Terminology, Fourth Edition (CPT-4), as maintained and distributed by the American Medical Association, for... 45 Public Welfare 1 2010-10-01 2010-10-01 false Medical data code sets. 162.1002 Section 162.1002... REQUIREMENTS ADMINISTRATIVE REQUIREMENTS Code Sets § 162.1002 Medical data code sets. The Secretary adopts the...
2012 financial outlook: physicians and podiatrists.
Schaum, Kathleen D
2012-04-01
Although the nationally unadjusted average Medicare allowable rates have not increased or decreased significantly, the new codes, the new coding regulations, the NCCI edits, and the Medicare contractors' local coverage determinations (LCDs) will greatly impact physicians' and podiatrists' revenue in 2012. Therefore, every wound care physician and podiatrist should take the time to update their charge sheets and their data entry systems with correct codes, units, and appropriate charges (that account for all the resources needed to perform each service or procedure). They should carefully read the LCDs that are pertinent to the work they perform. If the LCDs contain language that is unclear or incorrect, physicians and podiatrists should contact the Medicare contractor medical director and request a revision through the LCD Reconsideration Process. Medicare has stabilized the MPFS allowable rates for 2012-now physicians and podiatrists must do their part to implement the new coding, payment, and coverage regulations. To be sure that the entire revenue process is working properly, physicians and podiatrists should conduct quarterly, if not monthly, audits of their revenue cycle. Healthcare providers will maintain a healthy revenue cycle by conducting internal audits before outside auditors conduct audits that result in repayments that could have been prevented.
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
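The qualitative difference between coherent and Pauli (incoherent) error accumulation can be seen in a one-qubit toy calculation, far simpler than the repetition-code analysis in the paper: under repeated coherent rotations the amplitudes add, so the flip probability grows quadratically with the number of cycles, whereas under stochastic (twirled) flips the probabilities add and growth is linear.

```python
import numpy as np

eps, cycles = 0.01, 200                  # rotation angle per cycle, number of cycles
p = np.sin(eps / 2) ** 2                 # per-cycle flip probability of the twirled channel

# Coherent accumulation: N rotations by eps compose into one rotation by N*eps,
# so the flip probability is sin^2(N*eps/2) ~ (N*eps/2)^2 for small angles.
coherent = np.sin(cycles * eps / 2) ** 2

# Pauli accumulation: probability of an odd number of independent flips,
# 0.5*(1-(1-2p)^N) ~ N*p for small p.
pauli = 0.5 * (1.0 - (1.0 - 2.0 * p) ** cycles)

print(f"coherent: {coherent:.3e}   pauli: {pauli:.3e}")
```

After enough cycles the coherent channel fails far sooner than the Pauli estimate, which is the single-qubit analogue of the paper's regime-of-validity statement.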
Eisman, Robert C.; Phelps, Melissa A. S.; Kaufman, Thomas
2015-01-01
The formation of the pericentriolar matrix (PCM) and a fully functional centrosome in syncytial Drosophila melanogaster embryos requires the rapid transport of Cnn during initiation of the centrosome replication cycle. We show a Cnn and Polo kinase interaction is apparently required during embryogenesis and involves the exon 1A-initiating coding exon, suggesting a subset of Cnn splice variants is regulated by Polo kinase. During PCM formation exon 1A Cnn-Long Form proteins likely bind Polo kinase before phosphorylation by Polo for Cnn transport to the centrosome. Loss of either of these interactions in a portion of the total Cnn protein pool is sufficient to remove native Cnn from the pool, thereby altering the normal localization dynamics of Cnn to the PCM. Additionally, Cnn-Short Form proteins are required for polar body formation, a process known to require Polo kinase after the completion of meiosis. Exon 1A Cnn-LF and Cnn-SF proteins, in conjunction with Polo kinase, are required at the completion of meiosis and for the formation of functional centrosomes during early embryogenesis. PMID:26447129
NASA Astrophysics Data System (ADS)
Jia, M.; Sun, Y.; Paz-Soldan, C.; Nazikian, R.; Gu, S.; Liu, Y. Q.; Abrams, T.; Bykov, I.; Cui, L.; Evans, T.; Garofalo, A.; Guo, W.; Gong, X.; Lasnier, C.; Logan, N. C.; Makowski, M.; Orlov, D.; Wang, H. H.
2018-05-01
Experiments using Resonant Magnetic Perturbations (RMPs), with a rotating n = 2 toroidal harmonic combined with a stationary n = 3 toroidal harmonic, have validated predictions that divertor heat and particle flux can be dynamically controlled while maintaining Edge Localized Mode (ELM) suppression in the DIII-D tokamak. Here, n is the toroidal mode number. ELM suppression over one full cycle of a rotating n = 2 RMP mixed with a static n = 3 RMP field has been achieved. Prominent heat flux splitting on the outer divertor has been observed during ELM suppression by RMPs in the low-collisionality regime in DIII-D. Strong changes in the three-dimensional heat and particle flux footprint in the divertor were observed during the application of the mixed toroidal harmonic magnetic perturbations. These results agree well with modeling of the edge magnetic field structure using the TOP2D code, which takes into account the plasma response from the MARS-F code. These results expand the potential effectiveness of the RMP ELM suppression technique for the simultaneous control of divertor heat and particle load required in ITER.
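The mixed-harmonic field can be pictured as a rotating n = 2 wave superposed on a static n = 3 wave. A minimal sketch (arbitrary amplitudes and rotation frequency, not DIII-D values) shows how the location of the peak perturbation, and hence the flux footprint, sweeps toroidally in time:

```python
import numpy as np

# Toy superposition: rotating n=2 harmonic plus static n=3 harmonic.
# Amplitudes a2, a3 and frequency omega are illustrative assumptions.
def mixed_rmp(phi, t, a2=1.0, a3=0.5, omega=2 * np.pi):
    """Perturbation amplitude versus toroidal angle phi at time t."""
    return a2 * np.cos(2 * phi - omega * t) + a3 * np.cos(3 * phi)

phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)
snap0 = mixed_rmp(phi, t=0.0)
snap1 = mixed_rmp(phi, t=0.25)   # quarter period of the n=2 rotation
# The peak of the combined perturbation moves toroidally as t advances,
# which is what redistributes divertor heat and particle loads.
print(phi[np.argmax(snap0)], phi[np.argmax(snap1)])
```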
Fast interrupt platform for extended DOS
NASA Technical Reports Server (NTRS)
Duryea, T. W.
1995-01-01
Extended DOS offers the unique combination of a simple operating system which allows direct access to the interrupt tables, 32 bit protected mode access to 4096 MByte address space, and the use of industry standard C compilers. The drawback is that fast interrupt handling requires both 32 bit and 16 bit versions of each real-time process interrupt handler to avoid mode switches on the interrupts. A set of tools has been developed which automates the process of transforming the output of a standard 32 bit C compiler to 16 bit interrupt code which directly handles the real mode interrupts. The entire process compiles one set of source code via a make file, which boosts productivity by making the management of the compile-link cycle very simple. The software components are in the form of classes written mostly in C. A foreground process written as a conventional application which can use the standard C libraries can communicate with the background real-time classes via a message passing mechanism. The platform thus enables the integration of high performance real-time processing into a conventional application framework.
Propeller performance and weight predictions appended to the Navy/NASA engine program
NASA Technical Reports Server (NTRS)
Plencner, R. M.; Senty, P.; Wickenheiser, T. J.
1983-01-01
The Navy/NASA Engine Performance (NNEP) is a general purpose computer program currently employed by government, industry and university personnel to simulate the thermodynamic cycles of turbine engines. NNEP is a modular program which has the ability to evaluate the performance of an arbitrary engine configuration defined by the user. In 1979, a program to calculate engine weight (WATE-2) was developed by Boeing's Military Division under NASA contract. This program uses a preliminary design approach to determine engine weights and dimensions. Because the thermodynamic and configuration information required by the weight code was available in NNEP, the weight code was appended to NNEP. Due to increased emphasis on fuel economy, a renewed interest has developed in propellers. This report describes the modifications developed by NASA to both NNEP and WATE-2 to determine the performance, weight and dimensions of propellers and the corresponding gearbox. The propeller performance model has three options, two of which are based on propeller map interpolation. Propeller and gearbox weights are obtained from empirical equations which may easily be modified by the user.
Heat Transfer and Fluid Dynamics Measurements in the Expansion Space of a Stirling Cycle Engine
NASA Technical Reports Server (NTRS)
Jiang, Nan; Simon, Terrence W.
2006-01-01
The heater (or acceptor) of a Stirling engine, where most of the thermal energy is accepted into the engine by heat transfer, is the hottest part of the engine. Almost as hot is the adjacent expansion space of the engine. In the expansion space, the flow is oscillatory, impinging on a two-dimensional concavely-curved surface. Knowing the heat transfer on the inside surface of the engine head is critical to the engine design for efficiency and reliability. However, the flow in this region is not well understood and support is required to develop the CFD codes needed to design modern Stirling engines of high efficiency and power output. The present project is to experimentally investigate the flow and heat transfer in the heater head region. Flow fields and heat transfer coefficients are measured to characterize the oscillatory flow as well as to supply experimental validation for the CFD Stirling engine design codes. Presented also is a discussion of how these results might be used for heater head and acceptor region design calculations.
1983-10-01
SYSTEMS OBJECTIVES. This study was conducted as part of a continuing effort to obtain actual (historical) life cycle costs of major Army systems from...Procurement, AMS Code for RDTE, etc.). System life cycle costs cut across appropriation lines. A common architecture should be prerequisite to... life cycle costs of major Army systems have not been successful, but attention recently has been directed toward the possibility that a significant
21 CFR 206.10 - Code imprint required.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Code imprint required. 206.10 Section 206.10 Food...: GENERAL IMPRINTING OF SOLID ORAL DOSAGE FORM DRUG PRODUCTS FOR HUMAN USE § 206.10 Code imprint required... imprint that, in conjunction with the product's size, shape, and color, permits the unique identification...
47 CFR 11.52 - EAS code and Attention Signal Monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false EAS code and Attention Signal Monitoring... SYSTEM (EAS) Emergency Operations § 11.52 EAS code and Attention Signal Monitoring requirements. (a) EAS Participants must be capable of receiving the Attention Signal required by § 11.32(a)(9) and emergency messages...
NASA Astrophysics Data System (ADS)
Doi, Masafumi; Tokutomi, Tsukasa; Hachiya, Shogo; Kobayashi, Atsuro; Tanakamaru, Shuhei; Ning, Sheyang; Ogura Iwasaki, Tomoko; Takeuchi, Ken
2016-08-01
NAND flash memory’s reliability degrades with increasing endurance, retention-time and/or temperature. After a comprehensive evaluation of 1X nm triple-level cell (TLC) NAND flash, two highly reliable techniques are proposed. The first proposal, quick low-density parity check (Quick-LDPC), requires only one cell read in order to accurately estimate a bit-error rate (BER) that includes the effects of temperature, write and erase (W/E) cycles and retention-time. As a result, 83% read latency reduction is achieved compared to conventional AEP-LDPC. Also, W/E cycling is extended by 100% compared with conventional Bose-Chaudhuri-Hocquenghem (BCH) error-correcting code (ECC). The second proposal, dynamic threshold voltage optimization (DVO) has two parts, adaptive V Ref shift (AVS) and V TH space control (VSC). AVS reduces read error and latency by adaptively optimizing the reference voltage (V Ref) based on temperature, W/E cycles and retention-time. AVS stores the optimal V Ref’s in a table in order to enable one cell read. VSC further improves AVS by optimizing the voltage margins between V TH states. DVO reduces BER by 80%.
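The idea behind the adaptive V Ref shift can be sketched with a toy two-state threshold-voltage model (Gaussian V TH distributions and all numbers are assumptions for illustration, not measurements from the paper): the read reference that minimizes BER sits where the two distributions cross, and that point drifts with retention time, temperature, and W/E cycles.

```python
import math

# Toy model: two neighbouring V_TH states as equal-sigma Gaussians.
def ber(v_ref, mu0, mu1, sigma):
    """Bit-error rate reading at v_ref, equal state priors assumed."""
    q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))  # Gaussian tail
    # state0 misread as 1 (tail above v_ref) + state1 misread as 0
    return 0.5 * (q((v_ref - mu0) / sigma) + q((mu1 - v_ref) / sigma))

mu0, mu1, sigma = 1.0, 2.0, 0.15   # assumed state means (V) and spread
best = min((v / 1000 for v in range(1000, 2001)),
           key=lambda v: ber(v, mu0, mu1, sigma))
print(best)  # for equal sigmas the optimum is the midpoint, 1.5 V
```

As the distributions shift and widen with wear, the optimal reference moves; storing the tracked optimum in a table, as AVS does, is what permits a single cell read.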
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This standard provides rules for the construction of Class 1 nuclear components, parts, and appurtenances for use at elevated temperatures. This standard is a complete set of requirements only when used in conjunction with Section III of the ASME Boiler and Pressure Vessel Code (ASME Code) and addenda, ASME Code Cases 1592, 1593, 1594, 1595, and 1596, and RDT E 15-2NB. Unmodified paragraphs of the referenced Code Cases are not repeated in this standard but are a part of the requirements of this standard.
NASA Astrophysics Data System (ADS)
Fable, E.; Angioni, C.; Ivanov, A. A.; Lackner, K.; Maj, O.; Medvedev, S. Yu; Pautasso, G.; Pereverzev, G. V.; Treutterer, W.; the ASDEX Upgrade Team
2013-07-01
The modelling of tokamak scenarios requires the simultaneous solution of both the time evolution of the plasma kinetic profiles and of the magnetic equilibrium. Their dynamical coupling involves additional complications, which are not present when the two physical problems are solved separately. Difficulties arise in maintaining consistency in the time evolution among quantities which appear in both the transport and the Grad-Shafranov equations, specifically the poloidal and toroidal magnetic fluxes as a function of each other and of the geometry. The required consistency can be obtained by means of iteration cycles, which are performed outside the equilibrium code and which can have different convergence properties depending on the chosen numerical scheme. When these external iterations are performed, the stability of the coupled system becomes a concern. In contrast, if these iterations are not performed, the coupled system is numerically stable, but can become physically inconsistent. By employing a novel scheme (Fable E et al 2012 Nucl. Fusion submitted), which ensures stability and physical consistency among the same quantities that appear in both the transport and magnetic equilibrium equations, a newly developed version of the ASTRA transport code (Pereverzev G V et al 1991 IPP Report 5/42), which is coupled to the SPIDER equilibrium code (Ivanov A A et al 2005 32nd EPS Conf. on Plasma Physics (Tarragona, 27 June-1 July) vol 29C (ECA) P-5.063), in both prescribed- and free-boundary modes is presented here for the first time. The ASTRA-SPIDER coupled system is then applied to the specific study of the modelling of controlled current ramp-up in ASDEX Upgrade discharges.
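The external iteration cycles mentioned above are, in essence, a fixed-point (Picard) iteration between two solvers. The toy scalar maps below stand in for the transport and equilibrium solves; under-relaxation is one standard way to keep such a coupling stable. This is a generic sketch, not the ASTRA-SPIDER scheme.

```python
# Stand-in scalar maps for the two coupled solves (toy coefficients).
def transport(equilibrium_state):
    return 1.0 + 0.5 * equilibrium_state       # toy "transport" update

def equilibrium(profile):
    return 0.8 * profile                        # toy "equilibrium" update

x, relax = 0.0, 0.5
for _ in range(100):
    x_new = equilibrium(transport(x))
    x = (1 - relax) * x + relax * x_new         # under-relaxed coupling
print(x)  # converges to the fixed point of x = 0.8 + 0.4 x, i.e. 4/3
```

Without under-relaxation (relax = 1) this particular map still contracts, but stiffer couplings can oscillate or diverge, which is the stability concern the abstract raises for externally iterated transport-equilibrium systems.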
ION EFFECTS IN THE APS PARTICLE ACCUMULATOR RING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvey, J.; Harkay, K.; Yao, CY.
2017-06-25
Trapped ions in the APS Particle Accumulator Ring (PAR) lead to a positive coherent tune shift in both planes, which increases along the PAR cycle as more ions accumulate. This effect has been studied using an ion simulation code developed at SLAC. After modifying the code to include a realistic vacuum profile, multiple ionization, and the effect of shaking the beam to measure the tune, the simulation agrees well with our measurements. This code has also been used to evaluate the possibility of ion instabilities at the high bunch charge needed for the APS-Upgrade.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dillon, Heather E.; Antonopoulos, Chrissi A.; Solana, Amy E.
As the model energy codes are improved to reach efficiency levels 50 percent greater than current codes, use of on-site renewable energy generation is likely to become a code requirement. This requirement will be needed because traditional mechanisms for code improvement, including envelope, mechanical and lighting, have been pressed to the end of reasonable limits. Research has been conducted to determine the mechanism for implementing this requirement (Kaufman 2011). Kaufmann et al. determined that the most appropriate way to structure an on-site renewable requirement for commercial buildings is to define the requirement in terms of an installed power density per unit of roof area. This provides a mechanism that is suitable for the installation of photovoltaic (PV) systems on future buildings to offset electricity and reduce the total building energy load. Kaufmann et al. suggested that an appropriate maximum for the requirement in the commercial sector would be 4 W/ft{sup 2} of roof area or 0.5 W/ft{sup 2} of conditioned floor area. As with all code requirements, there must be an alternative compliance path for buildings that may not reasonably meet the renewables requirement. This might include conditions like shading (which makes rooftop PV arrays less effective), unusual architecture, undesirable roof pitch, unsuitable building orientation, or other issues. In the short term, alternative compliance paths including high performance mechanical equipment, dramatic envelope changes, or controls changes may be feasible. These options may be less expensive than many renewable systems, which will require careful balance of energy measures when setting the code requirement levels. As the stringency of the code continues to increase however, efficiency trade-offs will be maximized, requiring alternative compliance options to be focused solely on renewable electricity trade-offs or equivalent programs.
One alternate compliance path includes purchase of Renewable Energy Credits (RECs). Each REC represents a specified amount of renewable electricity production and provides an offset of environmental externalities associated with non-renewable electricity production. The purpose of this paper is to explore the possible issues with RECs and comparable alternative compliance options. Existing codes have been examined to determine energy equivalence between the energy generation requirement and the RECs alternative over the life of the building. The price equivalence of the requirement and the alternative is determined to consider the economic drivers for a market decision. This research includes case studies that review how the few existing codes have incorporated RECs and some of the issues inherent with REC markets. Section 1 of the report reviews compliance options including RECs, green energy purchase programs, shared solar agreements and leases, and other options. Section 2 provides detailed case studies on codes that include RECs and community-based alternative compliance methods. The ways in which the existing code requirements structure alternative compliance options like RECs are the focus of the case studies. Section 3 explores the possible structure of the renewable energy generation requirement in the context of energy and price equivalence. The prices of RECs have shown high variation by market and over time, which makes it critical for code language tied to a renewable energy generation requirement to be updated frequently, or the requirement will not remain price-equivalent over time. Section 4 of the report provides a maximum-case estimate for impact to the PV market and the REC market based on the Kaufmann et al. proposed requirement levels.
If all new buildings in the commercial sector complied with the requirement to install rooftop PV arrays, nearly 4,700 MW of solar would be installed in 2012, a major increase from EIA estimates of 640 MW of solar generation capacity installed in 2009. The residential sector could contribute roughly an additional 2,300 MW based on the same code requirement levels of 4 W/ft{sup 2} of roof area. Section 5 of the report provides a basic framework for draft code language recommendations based on the analysis of the alternative compliance levels.
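Under the proposed requirement levels, the installed-capacity arithmetic for a single building is straightforward. The building sizes below are assumptions for illustration:

```python
# Proposed maxima from the report: 4 W/ft^2 of roof area, or
# 0.5 W/ft^2 of conditioned floor area.
ROOF_W_PER_FT2 = 4.0
FLOOR_W_PER_FT2 = 0.5

def required_pv_watts(roof_ft2, floor_ft2):
    """Return the requirement computed on each basis (code language
    would pick one); building areas here are hypothetical."""
    return ROOF_W_PER_FT2 * roof_ft2, FLOOR_W_PER_FT2 * floor_ft2

by_roof, by_floor = required_pv_watts(roof_ft2=20_000, floor_ft2=80_000)
print(by_roof / 1e3, by_floor / 1e3)  # 80.0 kW by roof, 40.0 kW by floor
```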
Numerical investigation of two- and three-dimensional heat transfer in expander cycle engines
NASA Technical Reports Server (NTRS)
Burch, Robert L.; Cheung, Fan-Bill
1993-01-01
The concept of using tube canting for enhancing the hot-side convective heat transfer in a cross-stream tubular rocket combustion chamber is evaluated using a CFD technique in this study. The heat transfer at the combustor wall is determined from the flow field generated by a modified version of the PARC Navier-Stokes Code, using the actual dimensions, fluid properties, and design parameters of a split-expander demonstrator cycle engine. The effects of artificial dissipation on convergence and solution accuracy are investigated. Heat transfer results predicted by the code are presented. The use of CFD in heat transfer calculations is critically examined to demonstrate the care needed in the use of artificial dissipation for good convergence and accurate solutions.
Performance Benefits for Wave Rotor-Topped Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Jones, Scott M.; Welch, Gerard E.
1996-01-01
The benefits of wave rotor-topping in turboshaft engines, subsonic high-bypass turbofan engines, auxiliary power units, and ground power units are evaluated. The thermodynamic cycle performance is modeled using a one-dimensional steady-state code; wave rotor performance is modeled using one-dimensional design/analysis codes. Design and off-design engine performance is calculated for baseline engines and wave rotor-topped engines, where the wave rotor acts as a high pressure spool. The wave rotor-enhanced engines are shown to have benefits in specific power and specific fuel flow over the baseline engines without increasing turbine inlet temperature. The off-design steady-state behavior of a wave rotor-topped engine is shown to be similar to a conventional engine. Mission studies are performed to quantify aircraft performance benefits for various wave rotor cycle and weight parameters. Gas turbine engine cycles most likely to benefit from wave rotor-topping are identified. Issues of practical integration and the corresponding technical challenges with various engine types are discussed.
Heuristic rules embedded genetic algorithm for in-core fuel management optimization
NASA Astrophysics Data System (ADS)
Alim, Fatih
The objective of this study was to develop a unique methodology and a practical tool for designing the loading pattern (LP) and burnable poison (BP) pattern for a given Pressurized Water Reactor (PWR) core. Because of the large number of possible combinations for the fuel assembly (FA) loading in the core, the design of the core configuration is a complex optimization problem. It requires finding an optimal FA arrangement and BP placement in order to achieve maximum cycle length while satisfying the safety constraints. Genetic Algorithms (GA) have already been used to solve this problem for LP optimization for both PWR and Boiling Water Reactor (BWR) cores. The GA, which is a stochastic method, works with a group of solutions and uses random variables to make decisions. Based on the theory of evolution, the GA involves natural selection and reproduction of the individuals in the population for the next generation. The GA works by creating an initial population, evaluating it, and then improving the population by applying evolutionary operators. To solve this optimization problem, an LP optimization package, the GARCO (Genetic Algorithm Reactor Code Optimization) code, was developed in the framework of this thesis. This code is applicable to all types of PWR cores having different geometries and structures, with an unlimited number of FA types in the inventory. To reach this goal, an innovative GA was developed by modifying the classical representation of the genotype. To obtain the best result in a shorter time, not only the representation but also the algorithm was changed, to use in-core fuel management heuristic rules. The improved GA code was tested to demonstrate and verify the advantages of the new enhancements. The developed methodology is explained in this thesis, and preliminary results are shown for the VVER-1000 hexagonal-geometry core and the TMI-1 PWR.
The core physics code used for the VVER in this research is Moby-Dick, which was developed by SKODA Inc. to analyze the VVER. The SIMULATE-3 code, an advanced two-group nodal code, is used to analyze the TMI-1.
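The GA loop described above can be sketched for a permutation-encoded loading pattern. The toy fitness function below is a placeholder for the core physics evaluation (GARCO couples to codes such as Moby-Dick or SIMULATE-3); the operators shown (tournament selection, order crossover, swap mutation) are standard choices for permutation genotypes, not necessarily GARCO's modified representation.

```python
import random

random.seed(0)        # reproducible toy run
N_POS = 12            # core positions (toy size)

def fitness(pattern):
    # Placeholder objective: reward placing higher-numbered "fresh"
    # assemblies away from position 0, mimicking a peaking penalty.
    return sum(a * i for i, a in enumerate(pattern))

def tournament(pop, k=3):
    return max(random.sample(pop, k), key=fitness)

def order_crossover(p1, p2):
    # Order crossover (OX) keeps each child a valid permutation.
    a, b = sorted(random.sample(range(N_POS), 2))
    child = [None] * N_POS
    child[a:b] = p1[a:b]
    rest = [g for g in p2 if g not in child]
    for i in range(N_POS):
        if child[i] is None:
            child[i] = rest.pop(0)
    return child

def mutate(p, rate=0.2):
    if random.random() < rate:
        i, j = random.sample(range(N_POS), 2)
        p[i], p[j] = p[j], p[i]    # swap two positions
    return p

pop = [random.sample(range(N_POS), N_POS) for _ in range(40)]
for _ in range(60):   # generations
    pop = [mutate(order_crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop))]
best = max(pop, key=fitness)
print(best, fitness(best))
```

In GARCO the evaluation step is the expensive part (a full depletion calculation per candidate), which is precisely why embedding heuristic rules to prune poor candidates shortens the search.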
FEDEF: A High Level Architecture Federate Development Framework
2010-09-01
require code changes for operability between HLA specifications. Configuration of federate requirements such as publications, subscriptions, time ... management , and management protocol should occur outside of federate source code, allowing for federate reusability without code modification and re
NASA Astrophysics Data System (ADS)
Rizzo, Axel; Vaglio-Gaudard, Claire; Martin, Julie-Fiona; Noguère, Gilles; Eschbach, Romain
2017-09-01
DARWIN2.3 is the reference package used for fuel cycle applications in France. It solves the Boltzmann and Bateman equations in a coupled manner, with the European JEFF-3.1.1 nuclear data library, to compute the fuel cycle values of interest. It includes both deterministic transport codes APOLLO2 (for light water reactors) and ERANOS2 (for fast reactors), and the DARWIN/PEPIN2 depletion code, each of them being developed by CEA/DEN with the support of its industrial partners. The DARWIN2.3 package has been experimentally validated for pressurized and boiling water reactors, as well as for sodium fast reactors; this experimental validation relies on the analysis of post-irradiation experiments (PIE). The DARWIN2.3 experimental validation work points out some isotopes for which the depleted concentration calculation can be improved. Some other nuclides have no available experimental validation, and their concentration calculation uncertainty is provided by the propagation of a priori nuclear data uncertainties. This paper describes the work plan of studies initiated this year to improve the accuracy of the DARWIN2.3 depleted material balance calculation for some nuclides of interest for the fuel cycle.
The psychology of elite cycling: a systematic review.
Spindler, David J; Allen, Mark S; Vella, Stewart A; Swann, Christian
2018-09-01
This systematic review sought to synthesise what is currently known about the psychology of elite cycling. Nine electronic databases were searched in March 2017 for studies reporting an empirical test of any psychological construct in an elite cycling sample. Fourteen studies (total n = 427) met inclusion criteria. Eight studies were coded as having high risk of bias. Themes extracted included mood, anxiety, self-confidence, pain, and cognitive function. Few studies had similar objectives meaning that in many instances findings could not be synthesised in a meaningful way. Nevertheless, there was some cross-study evidence that elite cyclists have more positive mood states (relative to normative scores), pre-race anxiety impairs performance (among male cyclists), and associative strategies are perceived as helpful for pain management. Among single studies coded as having low risk of bias, evidence suggests that implicit beliefs affect decision making performance, elite cyclists are less susceptible to mental fatigue (than non-elite cyclists), and better leadership skills relates to greater social labouring. Limitations include non-standardisation of measures, lack of follow-up data, small sample sizes, and overall poor research quality. The findings of this systematic review might be used to inform research and theory development on the psychology of elite endurance cycling.
Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S
2013-12-01
To audit the accuracy of clinical coding in otolaryngology, assess the effectiveness of previously implemented interventions, and determine ways in which it can be further improved. Prospective clinician-auditor multidisciplinary audit of clinical coding accuracy. Elective and emergency ENT admissions and day-case activity. Concordance between initial coding and the clinician-auditor multidisciplinary team (MDT) coding in respect of primary and secondary diagnoses and procedures, health resource groupings (HRGs) and tariffs. The audit of 3131 randomly selected otolaryngology patients between 2010 and 2012 resulted in 420 instances of change to the primary diagnosis (13%) and 417 changes to the primary procedure (13%). In 1420 cases (44%), there was at least one change to the initial coding, and 514 (16%) HRGs changed. There was an income variance of £343,169, or £109.46 per patient. The highest rates of HRG change were observed in head and neck surgery (in particular skull-base surgery), laryngology (within that, tracheostomy), and emergency admissions (especially epistaxis management). A randomly selected sample of 235 patients from the audit was subjected to a second audit by a second clinician-auditor multidisciplinary team. There were 12 further HRG changes (5%), and at least one further coding change occurred in 57 patients (24%). These changes were significantly lower than those observed in the pre-audit sample, but were also significantly greater than zero. Asking surgeons to 'code in theatre' and applying these codes without further quality assurance to activity resulted in an HRG error rate of 45%. The full audit sample was regrouped under HRG 3.5 and was compared with a previous audit of 1250 patients performed between 2007 and 2008.
This comparison showed a reduction in the baseline rate of health resource groupings change from 16% during the first audit cycle to 9% in the current audit cycle (P < 0.001). Otolaryngology coding is complex and susceptible to subjectivity, variability and error. Coding variability can be improved, but not eliminated through regular education supported by an audit programme. © 2013 John Wiley & Sons Ltd.
Modeling and optimization of a hybrid solar combined cycle (HYCS)
NASA Astrophysics Data System (ADS)
Eter, Ahmad Adel
2011-12-01
The main objective of this thesis is to investigate the feasibility of integrating concentrated solar power (CSP) technology with conventional combined cycle technology for electric generation in Saudi Arabia. The generated electricity can be used locally to meet the annually increasing demand. Specifically, it can be utilized to meet the demand during the hours 10 am-3 pm and prevent blackout hours of some industrial sectors. The proposed CSP design gives flexibility in system operation, since it works as a conventional combined cycle during night time and switches to work as a hybrid solar combined cycle during day time. The first objective of the thesis is to develop a thermo-economical mathematical model that can simulate the performance of a hybrid solar-fossil fuel combined cycle. The second objective is to develop a computer simulation code that can solve the thermo-economical mathematical model using available software such as E.E.S. The developed simulation code is used to analyze the thermo-economic performance of different configurations of integrating the CSP with the conventional fossil fuel combined cycle, in order to identify the optimal integration configuration. This optimal integration configuration has been investigated further to achieve the optimal design of the solar field that gives the optimal solar share. Thermo-economical performance metrics available in the literature have been used in the present work to assess the thermo-economic performance of the investigated configurations. The economic and environmental impacts of integrating CSP with the conventional fossil fuel combined cycle are estimated and discussed. Finally, the optimal integration configuration is found to be solarization of the steam side in the conventional combined cycle, with a solar multiple of 0.38, which needs 29 hectares; the LEC of the HYCS is 63.17 $/MWh under Dhahran weather conditions.
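A levelized electricity cost (LEC) figure such as the 63.17 $/MWh quoted above is typically computed by annualizing capital with a capital recovery factor and dividing total annual cost by annual generation. The sketch below uses placeholder inputs, not the thesis's data:

```python
def crf(i, n):
    """Capital recovery factor for discount rate i over n years."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def lec(capital_usd, om_usd_per_yr, fuel_usd_per_yr, mwh_per_yr,
        discount=0.08, years=25):
    """Levelized cost in $/MWh; all inputs here are illustrative."""
    annualized = crf(discount, years) * capital_usd
    return (annualized + om_usd_per_yr + fuel_usd_per_yr) / mwh_per_yr

# Hypothetical plant: $250M capital, $4M/yr O&M, $12M/yr fuel, 550 GWh/yr.
print(lec(capital_usd=250e6, om_usd_per_yr=4e6,
          fuel_usd_per_yr=12e6, mwh_per_yr=550e3))
```

In a hybrid solar configuration the solar field raises capital cost while displacing fuel cost, so the LEC comparison across configurations hinges on exactly this trade-off.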
1989-02-01
installs, and provides life cycle support for information management systems. 16. Provides information and reports to higher authority and the scientific com...instruction/policy. 29 November New Employees Margaret Overton Paula Augustine Staffing Clerk Clerk Typist Code OOB Code I I GS-203-4 GS-322-4 Sylvia ...Evaluation and Survey Systems-Develops systems to evaluate the effectiveness of quality of life programs and to improve the quality of personnel
Coupled field effects in BWR stability simulations using SIMULATE-3K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borkowski, J.; Smith, K.; Hagrman, D.
1996-12-31
The SIMULATE-3K code is the transient analysis version of the Studsvik advanced nodal reactor analysis code, SIMULATE-3. Recent developments have focused on further broadening the range of transient applications by refinement of core thermal-hydraulic models and on comparison with boiling water reactor (BWR) stability measurements performed at Ringhals unit 1, during the startups of cycles 14 through 17.
Accuracy of clinical coding for procedures in oral and maxillofacial surgery.
Khurram, S A; Warner, C; Henry, A M; Kumar, A; Mohammed-Ali, R I
2016-10-01
Clinical coding has important financial implications, and discrepancies in the assigned codes can directly affect the funding of a department and hospital. Over the last few years, numerous oversights have been noticed in the coding of oral and maxillofacial (OMF) procedures. To establish the accuracy and completeness of coding, we retrospectively analysed the records of patients during two time periods: March to May 2009 (324 patients), and January to March 2014 (200 patients). Two investigators independently collected and analysed the data to ensure accuracy and remove bias. A large proportion of operations were not assigned all the relevant codes, and only 32% - 33% were correct in both cycles. To our knowledge, this is the first reported audit of clinical coding in OMFS, and it highlights serious shortcomings that have substantial financial implications. Better input by the surgical team and improved communication between the surgical and coding departments will improve accuracy. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
Analysis on burnup step effect for evaluating reactor criticality and fuel breeding ratio
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saputra, Geby; Purnama, Aditya Rizki; Permana, Sidik
The criticality condition of a reactor is one of the important factors in evaluating reactor operation, and the nuclear fuel breeding ratio (BR) is another, indicating nuclear fuel sustainability. This study analyzes the effect of the burnup step and the cycle operation step on the evaluated criticality condition of the reactor as well as on the breeding performance. The burnup step is varied from 10 days up to 800 days, and the cycle operation from 1 cycle up to 8 cycles. In addition, calculation efficiency as a function of the number of computer processors used to run the analysis (time efficiency of the calculation) has also been investigated. The optimization method for the reactor design analysis, which used a large fast breeder reactor type as the reference case, was performed by adopting an established reactor design code, JOINT-FR. The results show that the criticality condition becomes higher, and the breeding ratio lower, for smaller burnup steps (days). Some nuclides contribute to better criticality at smaller burnup steps because of their individual half-lives. The calculation time for different burnup steps correlates with the additional time required for more detailed step calculations, although the time consumed is not directly proportional to the number of divisions of the burnup time step.
Electrofishing power requirements in relation to duty cycle
Miranda, L.E.; Dolan, C.R.
2004-01-01
Under controlled laboratory conditions we measured the electrical peak power required to immobilize (i.e., narcotize or tetanize) fish of various species and sizes with duty cycles (i.e., percentage of time a field is energized) ranging from 1.5% to 100%. Electrofishing effectiveness was closely associated with duty cycle. Duty cycles of 10-50% required the least peak power to immobilize fish; peak power requirements increased gradually above 50% duty cycle and sharply below 10%. Small duty cycles can increase field strength by making possible higher instantaneous peak voltages that allow the threshold power needed to immobilize fish to radiate farther away from the electrodes. Therefore, operating within the 10-50% range of duty cycles would allow a larger radius of immobilization action than operating with higher duty cycles. This 10-50% range of duty cycles also coincided with some of the highest margins of difference between the electrical power required to narcotize and that required to tetanize fish. This observation is worthy of note because proper use of duty cycle could help reduce the mortality associated with tetany documented by some authors. Although electrofishing with intermediate duty cycles can potentially increase effectiveness of electrofishing, our results suggest that immobilization response is not fully accounted for by duty cycle because of a potential interaction between pulse frequency and duration that requires further investigation.
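The duty-cycle relationships described above can be sketched numerically; the following is an illustrative back-of-the-envelope calculation with hypothetical numbers, not code or data from the study.

```python
# Illustrative sketch (numbers are hypothetical, not from the study):
# duty cycle is the fraction of time the field is energized, and for a
# fixed average power budget, the attainable peak power scales as 1/duty.

def duty_cycle(pulse_width_s, frequency_hz):
    """Fraction of time a pulsed field is energized."""
    return pulse_width_s * frequency_hz

def peak_power_for_average(avg_power_w, duty):
    """Peak power available when the average power budget is fixed."""
    return avg_power_w / duty

d = duty_cycle(0.002, 60)                      # 2 ms pulses at 60 Hz
print(round(d, 4))                             # 0.12 (12% duty cycle)
print(round(peak_power_for_average(600.0, d)))  # 5000 W peak from a 600 W average
```

This makes concrete why small duty cycles permit the higher instantaneous peak voltages discussed in the abstract: halving the duty cycle doubles the peak power available from the same average budget.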
RELAP-7 Code Assessment Plan and Requirement Traceability Matrix
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, Junsoo; Choi, Yong-joon; Smith, Curtis L.
2016-10-01
RELAP-7, a safety analysis code for nuclear reactor systems, is under development at Idaho National Laboratory (INL). Overall, the code development is directed towards leveraging the advancements in computer science technology, numerical solution methods, and physical models made over the last decades. Recently, INL has also been putting effort into establishing a code assessment plan, which aims to ensure improved final product quality throughout the RELAP-7 development process. The ultimate goal of this plan is to propose a suitable way to systematically assess the wide range of software requirements for RELAP-7, including the software design, user interface, and technical requirements. To this end, we first survey the literature (i.e., international and domestic reports and research articles) addressing the desirable features generally required of advanced nuclear system safety analysis codes. In addition, the V&V (verification and validation) efforts as well as the legacy issues of several recently developed codes (e.g., RELAP5-3D, TRACE V5.0) are investigated. Lastly, this paper outlines the Requirement Traceability Matrix (RTM) for RELAP-7, which can be used to systematically evaluate and track the code development process and its present capability.
NASA Technical Reports Server (NTRS)
Denney, Ewen W.; Fischer, Bernd
2009-01-01
Model-based development and automated code generation are increasingly used for production code in safety-critical applications, but since code generators are typically not qualified, the generated code must still be fully tested, reviewed, and certified. This is particularly arduous for mathematical and control engineering software, which requires reviewers to trace subtle details of textbook formulas and algorithms to the code, and to match requirements (e.g., physical units or coordinate frames) not represented explicitly in models or code. Both tasks are complicated by the often opaque nature of auto-generated code. We address these problems by developing a verification-driven approach to traceability and documentation. We apply the AUTOCERT verification system to identify and then verify mathematical concepts in the code, based on a mathematical domain theory, and then use these verified traceability links between concepts, code, and verification conditions to construct a natural language report that provides a high-level structured argument explaining why and how the code uses the assumptions and complies with the requirements. We have applied our approach to generate review documents for several sub-systems of NASA's Project Constellation.
A Validation of Object-Oriented Design Metrics as Quality Indicators
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio
1997-01-01
This paper presents the results of a study in which we empirically investigated a suite of object-oriented (OO) design metrics introduced in earlier work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to earlier work in which the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.
A Validation of Object-Oriented Design Metrics
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.
1995-01-01
This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of object-oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method, and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.
Experimental and Analytical Performance of a Dual Brayton Power Conversion System
NASA Technical Reports Server (NTRS)
Lavelle, Thomas A.; Hervol, David S.; Briggs, Maxwell; Owen, A. Karl
2009-01-01
The interactions between two closed Brayton cycle (CBC) power conversion units (PCU) which share a common gas inventory and heat source have been studied experimentally using the Dual Brayton Power Conversion System (DBPCS) and analytically using the Closed-Cycle System Simulation (CCSS) computer code. Selected operating modes include steady-state operation at equal and unequal shaft speeds and various start-up scenarios. Equal shaft speed steady-state tests were conducted for heater exit temperatures of 840 to 950 K and speeds of 50 to 90 krpm, providing a system performance map. Unequal shaft speed steady-state testing over the same operating conditions shows that the power produced by each Brayton is sensitive to the operating conditions of the other due to redistribution of gas inventory. Startup scenarios show that starting the engines one at a time can dramatically reduce the required motoring energy. Although the DBPCS is not considered a flight-like system, these insights, as well as the operational experience gained from operating and modeling this system, provide valuable information for the future development of Brayton systems.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ), and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA input into an XML file that is used as input to the different VERA codes.
The cell cycle of early mammalian embryos: lessons from genetic mouse models.
Artus, Jérôme; Babinet, Charles; Cohen-Tannoudji, Michel
2006-03-01
Genes coding for cell cycle components predicted to be essential for its regulation have been shown to be dispensable in mice, at the whole organism level. Such studies have highlighted the extraordinary plasticity of the embryonic cell cycle and suggest that many aspects of in vivo cell cycle regulation remain to be discovered. Here, we discuss the particularities of the mouse early embryonic cell cycle and review the mutations that result in cell cycle defects during mouse early embryogenesis, including deficiencies for genes of the cyclin family (cyclin A2 and B1), genes involved in cell cycle checkpoints (Mad2, Bub3, Chk1, Atr), genes involved in ubiquitin and ubiquitin-like pathways (Uba3, Ubc9, Cul1, Cul3, Apc2, Apc10, Csn2) as well as genes the function of which had not been previously ascribed to cell cycle regulation (Cdc2P1, E4F and Omcg1).
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Jorgenson, Philip, C. E.; Jones, Scott M.
2014-01-01
The main focus of this study is to apply a computational tool for the flow analysis of the engine that has been tested with ice crystal ingestion in the Propulsion Systems Laboratory (PSL) of NASA Glenn Research Center. A data point was selected for analysis during which the engine experienced a full roll back event due to the ice accretion on the blades and flow path of the low pressure compressor. The computational tool consists of the Numerical Propulsion System Simulation (NPSS) engine system thermodynamic cycle code, and an Euler-based compressor flow analysis code, that has an ice particle melt estimation code with the capability of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Decreasing the performance characteristics of the low pressure compressor (LPC) within the NPSS cycle analysis resulted in matching the overall engine performance parameters measured during testing at data points in short time intervals through the progression of the roll back event. Detailed analysis of the fan-core and LPC with the compressor flow analysis code simulated the effects of ice accretion by increasing the aerodynamic blockage and pressure losses through the low pressure compressor until achieving a match with the NPSS cycle analysis results, at each scan. With the additional blockages and losses in the LPC, the compressor flow analysis code results were able to numerically reproduce the performance that was determined by the NPSS cycle analysis, which was in agreement with the PSL engine test data. The compressor flow analysis indicated that the blockage due to ice accretion in the LPC exit guide vane stators caused the exit guide vane (EGV) to be nearly choked, significantly reducing the air flow rate into the core. This caused the LPC to eventually be in stall due to increasing levels of diffusion in the rotors and high incidence angles in the inlet guide vane (IGV) and EGV stators. 
The flow analysis indicating compressor stall is substantiated by the video images of the IGV taken during the PSL test, which showed water on the surface of the IGV flowing upstream out of the engine, indicating flow reversal, which is characteristic of a stalled compressor.
Performance outlook of the SCRAP receiver
NASA Astrophysics Data System (ADS)
Lubkoll, Matti; von Backström, Theodor W.; Harms, Thomas M.
2016-05-01
A combined cycle (CC) concentrating solar power (CSP) plant provides significant potential to achieve an efficiency increase and an electricity cost reduction compared to current single-cycle plants. A CC CSP system requires a receiver technology capable of effectively transferring heat from concentrated solar irradiation to the pressurized air stream of a gas turbine. The small number of pressurized air receivers demonstrated to date have practical limitations when operating at high temperatures and pressures; a robust, scalable, and efficient system has yet to be developed and commercialized. A novel receiver system, the Spiky Central Receiver Air Pre-heater (SCRAP), has been proposed to comply with these requirements. The SCRAP system is conceived as an efficient and robust pressurized air receiver that could be implemented in CC CSP concepts or in standalone solar Brayton cycles without a bottoming Rankine cycle. The presented work expands on previous publications on the thermal modeling of the receiver system. Based on the analysis of a single heat transfer element (spike), predictions for its thermal performance can be made. To this end, the existing thermal model was improved with heat transfer characteristics for the jet impingement region of the spike tip as well as heat transfer models simulating the interaction with the ambient. While the jet impingement cooling effect was simulated with a commercial CFD code, the ambient heat transfer model was based on simplifying assumptions in order to employ empirical and analytical equations. The thermal efficiency of a spike under design conditions (flux 1.0 MW/m2, air outlet temperature just below 800 °C) was calculated at approximately 80%, where convective heat losses account for 16.2% of the absorbed radiation and radiative heat losses for only 2.9%. The low radiative loss arises because peak surface temperatures occur at the roots of the spikes; it can thus be concluded that the geometric receiver layout helps to limit radiative heat losses.
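The loss figures quoted above imply a simple energy balance; the following sketch merely checks the arithmetic, assuming convective and radiative losses are the only deficits from the absorbed radiation.

```python
# Arithmetic check of the quoted SCRAP loss figures (assumes convective and
# radiative losses are the only deficits from the absorbed radiation).
def thermal_efficiency(convective_frac, radiative_frac):
    return 1.0 - convective_frac - radiative_frac

eff = thermal_efficiency(0.162, 0.029)  # 16.2% convective, 2.9% radiative
print(round(eff, 3))                    # 0.809, consistent with "approximately 80%"
```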
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-04
.... This Rule specifically requires the adoption of a code of ethics by an investment advisor to include... requiring supervised persons to report any violations of the code of ethics promptly to the chief compliance... designated in the code of ethics; and (v) provisions requiring the investment advisor to provide each of the...
Beermann, Julia; Kirste, Dominique; Iwanov, Katharina; Lu, Dongchao; Kleemiß, Felix; Kumarswamy, Regalla; Schimmel, Katharina; Bär, Christian; Thum, Thomas
2018-01-01
The mammalian cell cycle is a complex and tightly controlled event. Myriads of different control mechanisms are involved in its regulation. Long non-coding RNAs (lncRNA) have emerged as important regulators of many cellular processes including cellular proliferation. However, a more global and unbiased approach to identify lncRNAs with importance for cell proliferation is missing. Here, we present a lentiviral shRNA library-based approach for functional lncRNA profiling. We validated our library approach in NIH3T3 (3T3) fibroblasts by identifying lncRNAs critically involved in cell proliferation. Using stringent selection criteria we identified lncRNA NR_015491.1 out of 3842 different RNA targets represented in our library. We termed this transcript Ntep (non-coding transcript essential for proliferation), as a bona fide lncRNA essential for cell cycle progression. Inhibition of Ntep in 3T3 and primary fibroblasts prevented normal cell growth and expression of key fibroblast markers. Mechanistically, we discovered that Ntep is important to activate P53 concomitant with increased apoptosis and cell cycle blockade in late G2/M. Our findings suggest Ntep to serve as an important regulator of fibroblast proliferation and function. In summary, our study demonstrates the applicability of an innovative shRNA library approach to identify long non-coding RNA functions in a massive parallel approach. PMID:29099486
Trypsteen, Wim; Mohammadi, Pejman; Van Hecke, Clarissa; Mestdagh, Pieter; Lefever, Steve; Saeys, Yvan; De Bleser, Pieter; Vandesompele, Jo; Ciuffi, Angela; Vandekerckhove, Linos; De Spiegelaere, Ward
2016-10-26
Studying the effects of HIV infection on the host transcriptome has typically focused on protein-coding genes. However, recent advances in the field of RNA sequencing revealed that long non-coding RNAs (lncRNAs) add an extensive additional layer to the cell's molecular network. Here, we performed transcriptome profiling throughout a primary HIV infection in vitro to investigate lncRNA expression at the different HIV replication cycle processes (reverse transcription, integration and particle production). Subsequently, guilt-by-association, transcription factor and co-expression analysis were performed to infer biological roles for the lncRNAs identified in the HIV-host interplay. Many lncRNAs were suggested to play a role in mechanisms relying on proteasomal and ubiquitination pathways, apoptosis, DNA damage responses and cell cycle regulation. Through transcription factor binding analysis, we found that lncRNAs display a distinct transcriptional regulation profile as compared to protein coding mRNAs, suggesting that mRNAs and lncRNAs are independently modulated. In addition, we identified five differentially expressed lncRNA-mRNA pairs with mRNA involvement in HIV pathogenesis with possible cis regulatory lncRNAs that control nearby mRNA expression and function. Altogether, the present study demonstrates that lncRNAs add a new dimension to the HIV-host interplay and should be further investigated as they may represent targets for controlling HIV replication.
PML tumor suppressor protein is required for HCV production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuroki, Misao; Research Fellow of the Japan Society for the Promotion of Science; Center for AIDS Research, Kumamoto University, Kumamoto 860-0811
2013-01-11
Highlights: • PML tumor suppressor protein is required for HCV production. • PML is dispensable for HCV RNA replication. • HCV could not alter formation of PML-NBs. • INI1 and DDX5, PML-related proteins, are involved in the HCV life cycle. -- Abstract: The PML tumor suppressor protein, which forms discrete nuclear structures termed PML nuclear bodies (PML-NBs), has been associated with several cellular functions, including cell proliferation, apoptosis, and antiviral defense. Recently, it was reported that the HCV core protein colocalizes with PML in PML-NBs and abrogates PML function through interaction with PML. However, the role of PML in the HCV life cycle is unknown. To test whether or not PML affects the HCV life cycle, we examined the level of secreted HCV core and the infectivity of HCV in the culture supernatants, as well as the level of HCV RNA, in HuH-7-derived RSc cells (in which HCV-JFH1 can infect and efficiently replicate) stably expressing short hairpin RNA targeted to PML. In this context, the level of secreted HCV core and the infectivity in the supernatants from PML knockdown cells were remarkably reduced, whereas the level of HCV RNA in the PML knockdown cells was not significantly affected in spite of very effective knockdown of PML. In fact, we showed that PML is unrelated to HCV RNA replication using the subgenomic HCV-JFH1 replicon RNA, JRN/3-5B. Furthermore, the infectivity of HCV-like particles in the culture supernatants was significantly reduced in PML knockdown JRN/3-5B cells expressing the core to NS2 coding region of the HCV-JFH1 genome using the trans-packaging system. Finally, we also demonstrated that INI1 and DDX5, the PML-related proteins, are involved in HCV production. Taken together, these findings suggest that PML is required for HCV production.
Development of Web Interfaces for Analysis Codes
NASA Astrophysics Data System (ADS)
Emoto, M.; Watanabe, T.; Funaba, H.; Murakami, S.; Nagayama, Y.; Kawahata, K.
Several codes have been developed to analyze plasma physics. However, most of them were developed to run on supercomputers, so users who typically work on personal computers (PCs) find them difficult to use. In order to facilitate the widespread use of these codes, a user-friendly interface is required. The authors propose Web interfaces for these codes. To demonstrate the usefulness of this approach, the authors developed Web interfaces for two analysis codes. The first is for FIT, developed by Murakami, which is used to analyze NBI heat deposition and related quantities. Because it requires electron density profiles, electron temperatures, and ion temperatures as polynomial expressions, those unfamiliar with the experiments, especially visitors from other institutes, find it difficult to use. The second is for visualizing the lines of force in the LHD (Large Helical Device), developed by Watanabe, and is used to analyze the interference caused by the lines of force resulting from the various structures installed in the vacuum vessel of the LHD. This code runs on PCs; however, it requires that the necessary parameters be edited manually. Using these Web interfaces, users can execute both codes interactively.
Center for Extended Magnetohydrodynamics Modeling - Final Technical Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, Scott
This project funding supported approximately 74 percent of a Ph.D. graduate student, not including costs of travel and supplies. We had a highly successful research project, including the development of a second-order implicit electromagnetic kinetic ion hybrid model [Cheng 2013, Sturdevant 2016], direct comparisons with the extended MHD NIMROD code and kinetic simulation [Schnack 2013], modeling of slab tearing modes using the fully kinetic ion hybrid model, and finally, modeling global tearing modes in cylindrical geometry using gyrokinetic simulation [Chen 2015, Chen 2016]. We developed an electromagnetic second-order implicit kinetic ion fluid electron hybrid model [Cheng 2013]. As a first step, we assumed isothermal electrons, but we have included drift-kinetic electrons in similar models [Chen 2011]. We used this simulation to study the nonlinear evolution of the tearing mode in slab geometry, including nonlinear evolution and saturation [Cheng 2013]. Later, we compared this model directly to extended MHD calculations using the NIMROD code [Schnack 2013]. In this study, we investigated the ion-temperature-gradient instability with an extended MHD code for the first time and obtained reasonable agreement with the kinetic calculation in terms of linear frequency, growth rate, and mode structure. We then extended this model to include orbit averaging and sub-cycling of the ions and compared directly to gyrokinetic theory [Sturdevant 2016]. This work was highlighted in an invited talk at the International Conference on the Numerical Simulation of Plasmas in 2015. The orbit-averaging sub-cycling multi-scale algorithm is amenable to hybrid architectures with GPUs or math co-processors. Additionally, our participation in the Center for Extended Magnetohydrodynamics Modeling motivated our research on developing the capability for gyrokinetic simulation to model a global tearing mode.
We did this in cylindrical geometry, where the results could be benchmarked against existing eigenmode calculations. First, we developed a gyrokinetic code capable of simulating long wavelengths using a fluid electron model [Chen 2015] and benchmarked this code with an eigenmode calculation. Besides having to rewrite the field solver due to the breakdown of the gyrokinetic ordering at long wavelengths, very high radial resolution was required. We developed a technique in which we used the solution from the eigenmode solver to specify radial boundary conditions, allowing for very high radial resolution of the inner solution. Using this technique enabled us to use our direct algorithm with gyrokinetic ions and drift-kinetic electrons [Chen 2016]. This work was highlighted in an invited talk at the American Physical Society - Division of Plasma Physics in 2015.
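The orbit-averaging/sub-cycling idea mentioned above, advancing particles on a finer time step while the fields are held over a coarser step, can be caricatured as follows. This is a hedged illustration with invented names and numbers, not the published algorithm.

```python
# Illustrative sub-cycling sketch (not the published algorithm): particles
# take nsub fine semi-implicit Euler steps per coarse field step, with the
# acceleration field frozen over the coarse step.
def subcycled_push(x, v, accel, dt_field, nsub):
    dt = dt_field / nsub
    for _ in range(nsub):
        v += accel(x) * dt  # kick from the frozen field
        x += v * dt         # drift
    return x, v

# Constant acceleration of 2.0 over one unit field step, 10 sub-steps.
x, v = subcycled_push(0.0, 0.0, lambda x: 2.0, 1.0, 10)
print(round(v, 12))  # 2.0
```

The payoff of such schemes is that the expensive field solve runs once per coarse step while the cheap particle push resolves the fast ion motion.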
Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Garg, Vijay; Ameri, Ali
2005-01-01
The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate, and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects, and validation of the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.
Mazar, Joseph; Rosado, Amy; Shelley, John; Marchica, John; Westmoreland, Tamarah J
2017-01-01
The long non-coding RNA GAS5 has been shown to modulate cancer proliferation in numerous human cancer systems and has been correlated with successful patient outcomes. Our examination of GAS5 in neuroblastoma has revealed robust expression in both MYCN-amplified and non-amplified cell lines. Knockdown of GAS5 in vitro resulted in defects in cell proliferation and apoptosis, and induced cell cycle arrest. Further analysis of GAS5 clones revealed multiple novel splice variants, two of which inversely modulated with MYCN status. Complementation studies of the variants post-knockdown of GAS5 indicated alternate phenotypes, with one variant (FL) considerably enhancing cell proliferation by rescuing cell cycle arrest and the other (C2) driving apoptosis, suggesting a unique role for each in neuroblastoma cancer physiology. Global sequencing and ELISA arrays revealed that the loss of GAS5 induced p53, BRCA1, and GADD45A, which appeared to modulate cell cycle arrest in concert. Complementation with only the FL GAS5 clone could rescue cell cycle arrest, stabilizing HDM2 and leading to the loss of p53. Together, these data offer novel therapeutic targets in the form of lncRNA splice variants for separate challenges against cancer growth and cell death. PMID:28035057
Using concatenated quantum codes for universal fault-tolerant quantum gates.
Jochym-O'Connor, Tomas; Laflamme, Raymond
2014-01-10
We propose a method for universal fault-tolerant quantum computation using concatenated quantum error correcting codes. The concatenation scheme exploits the transversal properties of two different codes, combining them to provide a means to protect against low-weight arbitrary errors. We give the required properties of the error correcting codes to ensure universal fault tolerance and discuss a particular example using the 7-qubit Steane and 15-qubit Reed-Muller codes. Namely, other than computational basis state preparation as required by the DiVincenzo criteria, our scheme requires no special ancillary state preparation to achieve universality, as opposed to schemes such as magic state distillation. We believe that optimizing the codes used in such a scheme could provide a useful alternative to state distillation schemes that exhibit high overhead costs.
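The parameter bookkeeping for such a concatenation can be sketched as follows; this assumes the standard result that concatenating an outer [[n1,1,d1]] code with an inner [[n2,1,d2]] code yields a code of length n1·n2 whose distance is at least d1·d2.

```python
# Sketch of concatenated-code parameters. Concatenating an outer [[n1,1,d1]]
# code with an inner [[n2,1,d2]] code gives an [[n1*n2, 1, >= d1*d2]] code;
# the third entry returned is therefore a lower bound on the distance.
def concatenated_params(outer, inner):
    n1, k1, d1 = outer
    n2, _, d2 = inner
    return (n1 * n2, k1, d1 * d2)

steane = (7, 1, 3)        # [[7,1,3]] Steane code
reed_muller = (15, 1, 3)  # [[15,1,3]] quantum Reed-Muller code
print(concatenated_params(steane, reed_muller))  # (105, 1, 9)
```

For the Steane/Reed-Muller pair discussed in the abstract, the concatenated block thus uses 105 physical qubits per logical qubit with distance at least 9.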
Survey Of Lossless Image Coding Techniques
NASA Astrophysics Data System (ADS)
Melnychuck, Paul W.; Rabbani, Majid
1989-04-01
Many image transmission/storage applications requiring some form of data compression additionally require that the decoded image be an exact replica of the original. Lossless image coding algorithms meet this requirement by generating a decoded image that is numerically identical to the original. Several lossless coding techniques are modifications of well-known lossy schemes, whereas others are new. Traditional Markov-based models and newer arithmetic coding techniques are applied to predictive coding, bit plane processing, and lossy-plus-residual coding. Generally speaking, the compression ratios offered by these techniques are in the range of 1.6:1 to 3:1 for 8-bit pictorial images. Compression ratios for 12-bit radiological images approach 3:1, as these images have less detailed structure, and hence their higher pel correlation leads to greater removal of image redundancy.
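The benefit of predictive coding for highly correlated images can be illustrated with a minimal previous-pel predictor on synthetic data; this is a hedged sketch of the general principle, not one of the surveyed algorithms.

```python
import math
from collections import Counter

def entropy_bits(symbols):
    """Zeroth-order entropy in bits/symbol: a bound on lossless code length."""
    counts = Counter(symbols)
    n = len(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Smooth synthetic 8-bit scanline with high pel-to-pel correlation.
line = [100 + i // 2 for i in range(256)]

# Previous-pel predictive coding: transmit differences instead of raw values.
residuals = [line[0]] + [line[i] - line[i - 1] for i in range(1, len(line))]

# Residuals concentrate near zero, so their entropy is far lower than the
# entropy of the raw pel values, which is what enables lossless compression.
print(entropy_bits(line) > entropy_bits(residuals))  # True
```

An entropy coder (e.g., arithmetic coding) applied to the residuals then approaches this lower bound, which is the mechanism behind the 1.6:1 to 3:1 ratios quoted above.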
Manley, Ray; Satiani, Bhagwan
2009-11-01
With the widening gap between overhead expenses and reimbursement, management of the revenue cycle is a critical part of a successful vascular surgery practice. It is important to review the data on all the components of the revenue cycle: payer contracting, appointment scheduling, preregistration, registration process, coding and capturing charges, proper billing of patients and insurers, follow-up of accounts receivable, and finally using appropriate benchmarking. The industry benchmarks used should be those of peers in identical groups. Warning signs of poor performance are discussed enabling the practice to formulate a performance improvement plan.
Louie, Ke'ale W; Saera-Vila, Alfonso; Kish, Phillip E; Colacino, Justin A; Kahana, Alon
2017-11-09
Tissue regeneration requires a series of steps, beginning with generation of the necessary cell mass, followed by cell migration into damaged area, and ending with differentiation and integration with surrounding tissues. Temporal regulation of these steps lies at the heart of the regenerative process, yet its basis is not well understood. The ability of zebrafish to dedifferentiate mature "post-mitotic" myocytes into proliferating myoblasts that in turn regenerate lost muscle tissue provides an opportunity to probe the molecular mechanisms of regeneration. Following subtotal excision of adult zebrafish lateral rectus muscle, dedifferentiating residual myocytes were collected at two time points prior to cell cycle reentry and compared to uninjured muscles using RNA-seq. Functional annotation (GAGE or K-means clustering followed by GO enrichment) revealed a coordinated response encompassing epigenetic regulation of transcription, RNA processing, and DNA replication and repair, along with protein degradation and translation that would rewire the cellular proteome and metabolome. Selected candidate genes were phenotypically validated in vivo by morpholino knockdown. Rapidly induced gene products, such as the Polycomb group factors Ezh2 and Suz12a, were necessary for both efficient dedifferentiation (i.e. cell reprogramming leading to cell cycle reentry) and complete anatomic regeneration. In contrast, the late activated gene fibronectin was important for efficient anatomic muscle regeneration but not for the early step of myocyte cell cycle reentry. Reprogramming of a "post-mitotic" myocyte into a dedifferentiated myoblast requires a complex coordinated effort that reshapes the cellular proteome and rewires metabolic pathways mediated by heritable yet nuanced epigenetic alterations and molecular switches, including transcription factors and non-coding RNAs. 
Our studies show that temporal regulation of gene expression is programmatically linked to distinct steps in the regeneration process, with immediate early expression driving dedifferentiation and reprogramming, and later expression facilitating anatomical regeneration.
A Large Scale Code Resolution Service Network in the Internet of Things
Yu, Haining; Zhang, Hongli; Fang, Binxing; Yu, Xiangzhan
2012-01-01
In the Internet of Things, a code resolution service provides a discovery mechanism for a requester to immediately obtain the information resources associated with a particular product code. In large-scale application scenarios, a code resolution service faces serious issues involving heterogeneity, big data, and data ownership. A code resolution service network is required to address these issues. Firstly, a list of requirements for the network architecture and code resolution services is proposed. Secondly, in order to eliminate code resolution conflicts and code resolution overloads, a code structure is presented to create a uniform namespace for code resolution records. Thirdly, we propose a loosely coupled distributed network consisting of heterogeneous, independent, collaborating code resolution services, and a SkipNet-based code resolution service named SkipNet-OCRS, which not only inherits DHT's advantages but also supports administrative control and autonomy. For the external behaviors of SkipNet-OCRS, a novel external behavior mode named QRRA mode is proposed to enhance security and reduce requester complexity. For the internal behaviors of SkipNet-OCRS, an improved query algorithm is proposed to increase query efficiency. It is shown by analysis that integrating SkipNet-OCRS into our resolution service network can meet our proposed requirements. Finally, simulation experiments verify the excellent performance of SkipNet-OCRS.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-08
... Advisers Act Rule 204A-1. This Rule specifically requires the adoption of a code of ethics by an investment...) provisions requiring supervised persons to report any violations of the code of ethics promptly to the chief... designated in the code of ethics; and (v) provisions requiring the investment advisor to provide each of the...
NASA Astrophysics Data System (ADS)
Toride, N.; Matsuoka, K.
2017-12-01
In order to predict the fate and transport of nitrogen in a reduced paddy field as a result of decomposition of organic matter, we implemented within the PHREEQC program a modified coupled carbon and nitrogen cycling model based on the LEACHM code. SOM decay processes from organic carbon (Org-C) to biomass carbon (Bio-C), humus carbon (Hum-C), and carbon dioxide (CO2) were described using first-order kinetics. Bio-C was recycled into the organic pool. When oxygen was available under aerobic conditions, O2 was used as the electron acceptor to produce CO2. When O2 availability was low, other electron acceptors such as NO3-, Mn4+, Fe3+, and SO42- were used depending on the redox potential. Decomposition of Org-N was related to the carbon cycle using the C/N ratio. Mineralization and immobilization were determined based on available NH4-N and the nitrogen demand for the formation of biomass and humus. Although nitrification was independently described with a first-order decay process, denitrification was linked with the SOM decay since NO3- was an electron acceptor for the CO2 production. Proton reactions were coupled with the nitrification from NH4+ to NO3- and the ammonium generation from NH3 to NH4+. Furthermore, cation and anion exchange reactions were included with the permanent negative charges and the pH-dependent variable charges. The carbon and nitrogen cycling model described with PHREEQC was linked with HYDRUS-1D using the HP1 code. Various nitrogen and carbon transport scenarios were demonstrated for the application of organic matter to a saturated paddy soil.
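The first-order pool structure described above lends itself to a compact numerical sketch. The snippet below integrates the Org-C/Bio-C/Hum-C/CO2 transfers with explicit Euler steps; all rate constants, partition fractions, and initial pool sizes are hypothetical illustrations, not values from the study.

```python
# Minimal sketch of the first-order SOM decay kinetics described above.
# Rate constants, partition fractions, and pool sizes are hypothetical.
def step(pools, k_org=0.01, k_bio=0.005, f_bio=0.4, f_hum=0.2, dt=1.0):
    """Advance [Org-C, Bio-C, Hum-C, CO2] pools by one time step (explicit Euler).
    Decayed Org-C splits into biomass (f_bio), humus (f_hum), and CO2;
    decayed Bio-C is recycled back into the organic pool."""
    org, bio, hum, co2 = pools
    d_org = k_org * org * dt          # Org-C decayed this step
    d_bio = k_bio * bio * dt          # Bio-C recycled this step
    org += d_bio - d_org
    bio += f_bio * d_org - d_bio
    hum += f_hum * d_org
    co2 += (1.0 - f_bio - f_hum) * d_org
    return [org, bio, hum, co2]

pools = [1000.0, 50.0, 200.0, 0.0]    # mg C per kg soil, hypothetical
for _ in range(365):                  # one simulated year, daily steps
    pools = step(pools)
```

Because every outgoing flux reappears in another pool, total carbon is conserved exactly, which is a useful sanity check on any such scheme.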
Nuclear shell model code CRUNCHER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Resler, D.A.; Grimes, S.M.
1988-05-01
A new nuclear shell model code CRUNCHER, patterned after the code VLADIMIR, has been developed. While CRUNCHER and VLADIMIR employ the techniques of an uncoupled basis and the Lanczos process, improvements in the new code allow it to handle much larger problems than the previous code and to perform them more efficiently. Tests involving a moderately sized calculation indicate that CRUNCHER running on a SUN 3/260 workstation requires approximately one-half the central processing unit (CPU) time required by VLADIMIR running on a CRAY-1 supercomputer.
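The Lanczos process mentioned above reduces a large symmetric (Hamiltonian) matrix to a small tridiagonal matrix whose extreme eigenvalues approximate those of the original. A generic sketch follows; it is illustrative only, not the CRUNCHER implementation, and the small test matrix is hypothetical.

```python
import math

# Generic Lanczos iteration: build the diagonal (alphas) and off-diagonal
# (betas) of a tridiagonal matrix T similar to the symmetric input A.
# Illustrative sketch only, not the CRUNCHER shell-model implementation.
def lanczos(A, m):
    n = len(A)
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    v = [1.0 / math.sqrt(n)] * n          # arbitrary normalized start vector
    v_prev = [0.0] * n
    alphas, betas = [], []
    beta = 0.0
    for _ in range(m):
        w = [dot(row, v) for row in A]    # w = A v
        alpha = dot(w, v)
        w = [wi - alpha * vi - beta * pi  # orthogonalize against v, v_prev
             for wi, vi, pi in zip(w, v, v_prev)]
        beta = math.sqrt(dot(w, w))
        alphas.append(alpha)
        betas.append(beta)
        if beta < 1e-12:                  # invariant subspace found
            break
        v_prev, v = v, [wi / beta for wi in w]
    return alphas, betas

# Run n steps on a small symmetric matrix: T is then similar to A,
# so the diagonal of T sums to the trace of A.
alphas, betas = lanczos([[2.0, 1.0, 0.0],
                         [1.0, 3.0, 1.0],
                         [0.0, 1.0, 4.0]], 3)
```

In practice shell-model codes run far fewer Lanczos steps than the matrix dimension; the extreme eigenvalues of the small tridiagonal matrix converge quickly to the lowest states of the Hamiltonian.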
ERIC Educational Resources Information Center
American Inst. of Architects, Washington, DC.
A model building code for fallout shelters was drawn up for inclusion in four national model building codes. Discussion is given of fallout shelters with respect to (1) nuclear radiation, (2) national policies, and (3) community planning. Fallout shelter requirements for shielding, space, ventilation, construction, and services such as electrical…
Schrock, Linda E
2008-07-01
This article reviews the literature to date and reports on a new study that documented the frequency of manual code-requiring blood glucose (BG) meters that were miscoded at the time of the patient's initial appointment in a hospital-based outpatient diabetes education program. Between January 1 and May 31, 2007, the type of BG meter and the accuracy of the patient's meter code (if required) and procedure for checking BG were checked during the initial appointment with the outpatient diabetes educator. If indicated, reeducation regarding the procedure for the BG meter code entry and/or BG test was provided. Of the 65 patients who brought their meter requiring manual entry of a code number or code chip to the initial appointment, 16 (25%) were miscoded at the time of the appointment. Two additional problems, one of dead batteries and one of improperly stored test strips, were identified and corrected at the first appointment. These findings underscore the importance of checking the patient's BG meter code (if required) and procedure for testing BG at each encounter with a health care professional or providing the patient with a meter that does not require manual entry of a code number or chip to match the container of test strips (i.e., an autocode meter).
Trivedi, Amit Kumar; Malik, Shalie; Rani, Sangeeta; Kumar, Vinod
2015-06-01
Eukaryotic cells produce chemical energy in the form of ATP by oxidative phosphorylation of metabolic fuels via a series of enzyme-mediated biochemical reactions. We propose that the rates of these reactions are altered as per the energy needs of the seasonal metabolic states in avian migrants. To investigate this, black-headed buntings were photoperiodically induced with non-migratory, premigratory, migratory and post-migratory phenotypes. High plasma levels of free fatty acids, citrate (an intermediate that begins the TCA cycle) and malate dehydrogenase (mdh, an enzyme involved at the end of the TCA cycle) confirmed increased availability of metabolic reserves and substrates to the TCA cycle during the premigratory and migratory states, respectively. Further, the daily expression pattern of genes coding for enzymes involved in the oxidative decarboxylation of pyruvate to acetyl-CoA (pdc and pdk) and oxidative phosphorylation in the TCA cycle (cs, ogdh, sdhd and mdh) was monitored in the hypothalamus and liver. The reciprocal relationship between pdc and pdk expression conformed with the altered requirements of acetyl-CoA for the TCA cycle in different metabolic states. Except for pdk, all genes had a daily expression pattern, with high mRNA expression during the day in the premigratory/migratory phenotypes, and at night (cs, ogdh, sdhd and mdh) in the non-migratory phenotype. Differences in mRNA expression patterns of pdc, sdhd and mdh, but not of pdk, cs and ogdh, between the hypothalamus and liver indicated a tissue-dependent metabolism in buntings. These results suggest the adaptation of oxidative phosphorylation pathway(s) at the gene level to the seasonal alternations in metabolism in migratory songbirds.
Stationary Liquid Fuel Fast Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Won Sik; Grandy, Andrew; Boroski, Andrew
For effective burning of hazardous transuranic (TRU) elements of used nuclear fuel, a transformational advanced reactor concept named SLFFR (Stationary Liquid Fuel Fast Reactor) was proposed based on stationary molten metallic fuel. The fuel enters the reactor vessel in a solid form, and then it is heated to molten temperature in a small melting heater. The fuel is contained within a closed, thick container with penetrating coolant channels, and thus it is neither mixed with the coolant nor circulated through the primary heat transfer circuit. The makeup fuel is semi-continuously added to the system, and thus a very small excess reactivity is required. Gaseous fission products are also removed continuously, and a fraction of the fuel is periodically drawn off from the fuel container to a processing facility where non-gaseous mixed fission products and other impurities are removed and then the cleaned fuel is recycled into the fuel container. A reference core design and a preliminary plant system design of a 1000 MWt TRU-burning SLFFR concept were developed using TRU-Ce-Co fuel, a Ta-10W fuel container, and sodium coolant. Conservative design approaches were adopted to stay within the current material performance database. Detailed neutronics and thermal-fluidic analyses were performed to develop a reference core design. Region-dependent 33-group cross sections were generated based on the ENDF/B-VII.0 data using the MC2-3 code. Core and fuel cycle analyses were performed in theta-r-z geometries using the DIF3D and REBUS-3 codes. Reactivity coefficients and kinetics parameters were calculated using the VARI3D perturbation theory code. Thermo-fluidic analyses were performed using the ANSYS FLUENT computational fluid dynamics (CFD) code. Figure 0.1 shows a schematic radial layout of the reference 1000 MWt SLFFR core, and Table 0.1 summarizes the main design parameters of the SLFFR-1000 loop plant. The fuel container is a 2.5 cm thick cylinder with an inner radius of 87.5 cm.
The fuel container is penetrated by twelve hexagonal control assembly (CA) guide tubes, each of which has 3.0 mm thickness and 69.4 mm flat-to-flat outer distance. The distance between two neighboring CA guide tubes is selected to be 26 cm to provide adequate space for the CA driving systems. The fuel container has 18181 penetrating coolant tubes of 6.0 mm inner diameter and 2.0 mm thickness. The coolant tubes are arranged in a triangular lattice with a lattice pitch of 1.21 cm. The fuel, structure, and coolant volume fractions inside the fuel container are 0.386, 0.383, and 0.231, respectively. Separate steel reflectors and B4C shields are used outside of the fuel container. Six gas expansion modules (GEMs) of 5.0 cm thickness are introduced in the radial reflector region. Between the radial reflector and the fuel container is a 2.5 cm sodium gap. The TRU inventory at the beginning of equilibrium cycle (BOEC) is 5081 kg, whereas the TRU inventory at the beginning of life (BOL) was 3541 kg. This is because the equilibrium cycle fuel contains a significantly smaller fissile fraction than the LWR TRU feed. The fuel inventory at BOEC is composed of 34.0 a/o TRU, 41.4 a/o Ce, 23.6 a/o Co, and 1.03 a/o solid fission products. Since uranium-free fuel is used, a theoretical maximum TRU consumption rate of 1.011 kg/day is achieved. The semi-continuous fuel cycle based on the 300-batch, 1-day cycle approximation yields a burnup reactivity loss of 26 pcm/day and requires daily reprocessing of 32.5 kg of SLFFR fuel. This yields a daily TRU charge rate of 17.45 kg, including a makeup TRU feed of 1.011 kg recovered from the LWR used fuel. The charged TRU-Ce-Co fuel is composed of 34.4 a/o TRU, 40.6 a/o Ce, and 25.0 a/o Co.
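The reported in-container volume fractions can be roughly cross-checked from the stated tube geometry alone. The sketch below considers a single triangular unit cell and ignores the guide tubes, container wall, and edge cells, so it only approximates the quoted 0.386/0.383/0.231 fuel/structure/coolant split.

```python
import math

# Rough check of the in-container volume fractions from the stated unit-cell
# geometry: 6.0 mm ID / 2.0 mm wall coolant tubes on a 1.21 cm triangular
# pitch. Guide tubes, container wall, and edge cells are ignored.
pitch = 12.1                    # mm, triangular lattice pitch
r_in, r_out = 3.0, 5.0          # mm, coolant tube inner / outer radius

cell = math.sqrt(3.0) / 2.0 * pitch ** 2        # area of one triangular cell
coolant = math.pi * r_in ** 2                   # coolant flow area per cell
structure = math.pi * (r_out ** 2 - r_in ** 2)  # tube wall area per cell
fuel = cell - coolant - structure               # remainder is fuel

fracs = [round(a / cell, 3) for a in (fuel, structure, coolant)]
```

This yields roughly 0.38 fuel, 0.40 structure, and 0.22 coolant; the small differences from the reported values are consistent with the structures the single-cell estimate ignores.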
Vector Adaptive/Predictive Encoding Of Speech
NASA Technical Reports Server (NTRS)
Chen, Juin-Hwey; Gersho, Allen
1989-01-01
A vector adaptive/predictive technique for digital encoding of speech signals yields decoded speech of very good quality after transmission at a coding rate of 9.6 kb/s and of reasonably good quality at 4.8 kb/s, requiring 3 to 4 million multiplications and additions per second. Code-excited linear prediction also yields speech of high quality but requires 600 million multiplications and additions per second at an encoding rate of 4.8 kb/s. By combining advantages of adaptive/predictive coding and code-excited linear prediction, the vector adaptive/predictive coding technique bridges the gaps in performance and complexity between the two.
Lu, Ming-Chi; Hsieh, Min-Chih; Koo, Malcolm; Lai, Ning-Sheng
2016-01-01
Primary Sjögren's syndrome (pSS) is a progressive systemic autoimmune disorder with a strong female predominance. Hormonal influences are thought to play a role in the development of pSS. However, no studies have specifically evaluated the association between irregular menstrual cycles and pSS. Therefore, using a health claims database, this study investigated the risk of pSS in women with irregular menstrual cycles. We conducted a case-control study using Taiwan's National Health Insurance Research Database. A total of 360 patients diagnosed with pSS (International Classification of Diseases, ninth revision, clinical modification, ICD-9-CM code 710.2) between 2001 and 2012 were identified. Controls were frequency-matched at a rate of 5:1 to the cases by five-year age interval and index year. Both cases and controls were retrospectively traced back until 2001 for the diagnosis of irregular menstrual cycles (ICD-9-CM code 626.4). The risk of pSS was assessed using multivariate logistic regression analyses. Irregular menstrual cycles were significantly associated with pSS [adjusted odds ratio (AOR) = 1.38, p = 0.027], after adjusting for insured amount, urbanization level, and thyroid disorder. In addition, when the data were stratified by three age categories, only the patients in the age category of 45-55 years showed a significant association between irregular menstrual cycles and pSS (AOR = 1.74, p = 0.005). In this nationwide, population-based case-control study, we found a significantly increased risk of pSS in female patients with irregular menstrual cycles, particularly those in their mid-forties to mid-fifties.
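For illustration, an unadjusted odds ratio from a 2x2 case-control table can be computed as below. The counts are hypothetical; the study's AOR of 1.38 came from multivariate logistic regression on matched data, which this sketch does not reproduce.

```python
import math

# Unadjusted odds ratio from a 2x2 exposure/outcome table, with a 95% CI
# from the normal approximation on log(OR). All counts are hypothetical;
# the study's adjusted OR came from multivariate logistic regression.
def odds_ratio(a, b, c, d):
    """a: exposed cases, b: unexposed cases,
    c: exposed controls, d: unexposed controls."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio(60, 300, 230, 1570)        # hypothetical counts
```

With these made-up counts the point estimate lands near the study's reported effect size, but only by construction; the value of the sketch is the formula OR = (a·d)/(b·c) and its log-scale confidence interval.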
Methods for nuclear air-cleaning-system accident-consequence assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrae, R.W.; Bolstad, J.W.; Gregory, W.S.
1982-01-01
This paper describes a multilaboratory research program that is directed toward addressing many questions that analysts face when performing air cleaning accident consequence assessments. The program involves developing analytical tools and supportive experimental data that will be useful in making more realistic assessments of accident source terms within and up to the atmospheric boundaries of nuclear fuel cycle facilities. The types of accidents considered in this study include fires, explosions, spills, tornadoes, criticalities, and equipment failures. The main focus of the program is developing an accident analysis handbook (AAH). We will describe the contents of the AAH, which include descriptions of selected nuclear fuel cycle facilities, process unit operations, source-term development, and accident consequence analyses. Three computer codes designed to predict gas and material propagation through facility air cleaning systems are described. These computer codes address accidents involving fires (FIRAC), explosions (EXPAC), and tornadoes (TORAC). The handbook relies on many illustrative examples to show the analyst how to approach accident consequence assessments. We will use the FIRAC code and a hypothetical fire scenario to illustrate the accident analysis capability.
Classification of robust heteroclinic cycles for vector fields in R^3 with symmetry
NASA Astrophysics Data System (ADS)
Hawker, David; Ashwin, Peter
2005-09-01
We consider a classification of robust heteroclinic cycles in the positive octant of R^3 under the action of the symmetry group (Z_2)^3. We introduce a coding system to represent different classes up to a topological equivalence, and produce a characterization of all types of robust heteroclinic cycle that can arise in this situation. These cycles may or may not contain the origin within the cycle. We proceed to find a connection between our problem and meandric numbers. We find a direct correlation between the number of classes of robust heteroclinic cycle that do not include the origin and the 'Mercedes-Benz' sequence of integers characterizing meanders through a 'Y-shaped' configuration. We investigate upper and lower bounds for the number of classes possible for robust cycles between n equilibria, one of which may be the origin.
Efficient Polar Coding of Quantum Information
NASA Astrophysics Data System (ADS)
Renes, Joseph M.; Dupuis, Frédéric; Renner, Renato
2012-08-01
Polar coding, introduced in 2008 by Arıkan, is the first (very) efficiently encodable and decodable coding scheme whose information transmission rate provably achieves the Shannon bound for classical discrete memoryless channels in the asymptotic limit of large block sizes. Here, we study the use of polar codes for the transmission of quantum information. Focusing on the case of qubit Pauli channels and qubit erasure channels, we use classical polar codes to construct a coding scheme that asymptotically achieves a net transmission rate equal to the coherent information using efficient encoding and decoding operations and code construction. Our codes generally require preshared entanglement between sender and receiver, but for channels with a sufficiently low noise level we demonstrate that the rate of preshared entanglement required is zero.
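The classical primitive underlying polar codes is Arıkan's GF(2) transform x = u·F^⊗n with kernel F = [[1,0],[1,1]]. A minimal sketch of that classical transform follows (bit-reversal permutation omitted); it illustrates the classical building block only, not the quantum coding scheme of the paper.

```python
# The GF(2) butterfly x = u * F^(tensor n), F = [[1,0],[1,1]], that underlies
# classical polar coding (bit-reversal permutation omitted). A sketch of the
# classical primitive only, not the quantum construction described above.
def polar_transform(u):
    x = list(u)
    n = len(x)                    # must be a power of two
    step = 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                x[j] ^= x[j + step]   # upper branch: u1 XOR u2
        step *= 2
    return x
```

Because F squared is the identity over GF(2), applying the transform twice recovers the input, so the same butterfly serves as both encoder and its inverse.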
Development of an Aeroelastic Code Based on an Euler/Navier-Stokes Aerodynamic Solver
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Stefko, George L.; Janus, Mark J.
1996-01-01
This paper describes the development of an aeroelastic code (TURBO-AE) based on an Euler/Navier-Stokes unsteady aerodynamic analysis. A brief review of the relevant research in the area of propulsion aeroelasticity is presented. The paper briefly describes the original Euler/Navier-Stokes code (TURBO) and then details the development of the aeroelastic extensions. The aeroelastic formulation is described. The modeling of the dynamics of the blade using a modal approach is detailed, along with the grid deformation approach used to model the elastic deformation of the blade. The work-per-cycle approach used to evaluate aeroelastic stability is described. Representative results used to verify the code are presented. The paper concludes with an evaluation of the development thus far, and some plans for further development and validation of the TURBO-AE code.
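The work-per-cycle criterion mentioned above can be illustrated with a harmonic single-degree-of-freedom sketch: integrate the aerodynamic force against the blade velocity over one oscillation period; positive net work fed into the blade indicates aeroelastic instability, negative net work indicates damping. The force model and numbers here are illustrative, not TURBO-AE's.

```python
import math

# Work per cycle W = integral of F dh over one period for harmonic motion
# h(t) = h0 sin(wt) and a force lagging by phase phi. Positive W (energy fed
# into the blade) indicates flutter; negative W indicates damping.
# Illustrative single-degree-of-freedom model, not TURBO-AE's force model.
def work_per_cycle(h0, f0, phi, n=10000):
    w = 1.0                          # rad/s; W is independent of w here
    dt = 2 * math.pi / (w * n)
    work = 0.0
    for k in range(n):
        t = k * dt
        force = f0 * math.sin(w * t + phi)
        dh_dt = h0 * w * math.cos(w * t)
        work += force * dh_dt * dt   # accumulate F * dh
    return work

# Analytic value is pi * f0 * h0 * sin(phi): a force in phase with the
# displacement (phi = 0) does no net work over a cycle.
```

The sign of the phase lag between force and motion decides stability, which is exactly what the work-per-cycle evaluation in an aeroelastic code extracts from the unsteady aerodynamic solution.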
Design of an MR image processing module on an FPGA chip
NASA Astrophysics Data System (ADS)
Li, Limin; Wyrwicz, Alice M.
2015-06-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that through graphical coding the design work can be greatly simplified. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. No off-chip hardware resources are required, which increases portability of the core. Direct matrix transposition usually required for execution of 2D FFT is completely avoided using our newly-designed address generation unit, which saves considerable on-chip block RAMs and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128 × 128 images at speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments.
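The row/column decomposition the module exploits can be sketched in plain Python: a 2D FFT is a 1D FFT along every row followed by a 1D FFT along every column, and the column pass can read data with strided indexing instead of physically transposing the matrix, which is the role of the FPGA's address generation unit. Illustrative only, not the FPGA design.

```python
import cmath

# A 2D FFT is separable: 1D FFTs along each row, then along each column.
# The FPGA replaces the transpose with strided address generation; here we
# simply index column-wise. Plain-Python illustration, not the FPGA design.
def fft(a):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return a[:]
    even, odd = fft(a[0::2]), fft(a[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return [even[k] + tw[k] for k in range(n // 2)] + \
           [even[k] - tw[k] for k in range(n // 2)]

def fft2d(m):
    rows = [fft(row) for row in m]                       # pass 1: rows
    cols = [fft([rows[i][j] for i in range(len(rows))])  # pass 2: columns,
            for j in range(len(rows[0]))]                # via strided reads
    return [[cols[j][i] for j in range(len(cols))] for i in range(len(rows))]
```

A delta at the origin transforms to an all-ones spectrum, a convenient check that both passes are wired up correctly.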
The advanced software development workstation project
NASA Technical Reports Server (NTRS)
Fridge, Ernest M., III; Pitman, Charles L.
1991-01-01
The Advanced Software Development Workstation (ASDW) task is researching and developing the technologies required to support Computer Aided Software Engineering (CASE) with the emphasis on those advanced methods, tools, and processes that will be of benefit to support all NASA programs. Immediate goals are to provide research and prototype tools that will increase productivity, in the near term, in projects such as the Software Support Environment (SSE), the Space Station Control Center (SSCC), and the Flight Analysis and Design System (FADS) which will be used to support the Space Shuttle and Space Station Freedom. Goals also include providing technology for development, evolution, maintenance, and operations. The technologies under research and development in the ASDW project are targeted to provide productivity enhancements during the software life cycle phase of enterprise and information system modeling, requirements generation and analysis, system design and coding, and system use and maintenance. On-line user's guides will assist users in operating the developed information system with knowledge base expert assistance.
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose.
Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009, Cycle 145A through Cycle 151B, was successfully completed during 2012. This major effort supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR Core Safety Analysis Package (CSAP) preparation process, in parallel with the established PDQ-based methodology, beginning late in Fiscal Year 2012. Acquisition of the advanced SERPENT (VTT-Finland) and MC21 (DOE-NR) Monte Carlo stochastic neutronics simulation codes was also initiated during the year and some initial applications of SERPENT to ATRC experiment analysis were demonstrated. These two new codes will offer significant additional capability, including the possibility of full-3D Monte Carlo fuel management support capabilities for the ATR at some point in the future. Finally, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system has been implemented and initial computational results have been obtained. This capability will have many applications as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation.
Accuracy and time requirements of a bar-code inventory system for medical supplies.
Hanson, L B; Weinswig, M H; De Muth, J E
1988-02-01
The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.
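The one-way analysis of variance used to compare issue times across the three measurement periods reduces to a ratio of between-group to within-group variance. A sketch with hypothetical timing data (not the study's measurements):

```python
# One-way ANOVA F statistic, as used to compare issue times across the three
# measurement periods. The timing data below are hypothetical illustrations,
# not the study's measurements.
def f_statistic(groups):
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))  # df = (k-1, n-k)

manual  = [12.1, 11.8, 12.5, 11.9]   # minutes per order, hypothetical
one_mo  = [14.0, 13.6, 14.3, 13.9]
four_mo = [13.2, 13.5, 12.9, 13.1]
F = f_statistic([manual, one_mo, four_mo])
```

The F value is then compared against the F distribution with (k−1, n−k) degrees of freedom; identical group means drive the statistic to zero.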
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rousseau, Aymeric
2013-02-01
Several tools already exist to develop detailed plant models, including GT-Power, AMESim, CarSim, and SimScape. The objective of Autonomie is not to provide a language to develop detailed models; rather, Autonomie supports the assembly and use of models from design to simulation to analysis with complete plug-and-play capabilities. Autonomie provides a plug-and-play architecture to support this ideal use of modeling and simulation for math-based automotive control system design. Models in the standard format create building blocks, which are assembled at runtime into a simulation model of a vehicle, system, subsystem, or component to simulate. All parts of the graphical user interface (GUI) are designed to be flexible to support architectures, systems, components, and processes not yet envisioned. This allows the software to be molded to individual uses, so it can grow as requirements and technical knowledge expand. This flexibility also allows for implementation of legacy code, including models, controller code, processes, drive cycles, and post-processing equations. A library of useful and tested models and processes is included as part of the software package to support a full range of simulation and analysis tasks immediately. Autonomie also includes a configuration and database management front end to facilitate the storage, versioning, and maintenance of all required files, such as the models themselves, the models' supporting files, test data, and reports. During the duration of the CRADA, Argonne worked closely with GM to implement and demonstrate each one of their requirements. A use case was developed by GM for every requirement and demonstrated by Argonne. Each of the new features was verified by GM experts through a series of gate reviews. Once all the requirements were validated, they were presented to the directors as part of GM's Gate process.
47 CFR 11.51 - EAS code and Attention Signal Transmission requirements.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false EAS code and Attention Signal Transmission... SYSTEM (EAS) Emergency Operations § 11.51 EAS code and Attention Signal Transmission requirements. (a... programming before EAS message transmission should not cause television receivers to mute EAS audio messages...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Athalye, Rahul A.
The US Department of Energy’s most recent commercial energy code compliance evaluation efforts focused on determining a percent compliance rating for states to help them meet requirements under the American Recovery and Reinvestment Act (ARRA) of 2009. That approach included a checklist of code requirements, each of which was graded pass or fail. Percent compliance for any given building was simply the percent of individual requirements that passed. With its binary approach to compliance determination, the previous methodology failed to answer some important questions. In particular, how much energy cost could be saved by better compliance with the commercial energy code, and what are the relative priorities of code requirements from an energy cost savings perspective? This paper explores an analytical approach and pilot study using a single building type and climate zone to answer those questions.
Long Cycle Life Secondary Lithium Cells Utilizing Tetrahydrofuran.
1984-04-01
MAVRIC Flutter Model Transonic Limit Cycle Oscillation Test
NASA Technical Reports Server (NTRS)
Edwards, John W.; Schuster, David M.; Spain, Charles V.; Keller, Donald F.; Moses, Robert W.
2001-01-01
The Models for Aeroelastic Validation Research Involving Computation semi-span wind-tunnel model (MAVRIC-I), a business jet wing-fuselage flutter model, was tested in NASA Langley's Transonic Dynamics Tunnel with the goal of obtaining experimental data suitable for Computational Aeroelasticity code validation at transonic separation onset conditions. This research model is notable for its inexpensive construction and instrumentation installation procedures. Unsteady pressures and wing responses were obtained for three wingtip configurations: clean, tipstore, and winglet. Traditional flutter boundaries were measured over the range of M = 0.6 to 0.9, and maps of Limit Cycle Oscillation (LCO) behavior were made in the range of M = 0.85 to 0.95. Effects of dynamic pressure and angle-of-attack were measured. Testing in both R134a heavy gas and air provided unique data on Reynolds number, transition effects, and the effect of speed of sound on LCO behavior. The data set provides excellent code validation test cases for the important class of flow conditions involving shock-induced transonic flow separation onset at low wing angles, including Limit Cycle Oscillation behavior.
NASA Technical Reports Server (NTRS)
Suder, Kenneth L.; Prahst, Patricia S.; Thorp, Scott A.
2011-01-01
NASA's Fundamental Aeronautics Program is investigating turbine-based combined cycle (TBCC) propulsion systems for access to space because they provide the potential for aircraft-like space-launch operations that may significantly reduce launch costs and improve safety. To this end, the National Aeronautics and Space Administration (NASA) and General Electric (GE) teamed to design a Mach 4 variable cycle turbofan/ramjet engine for access to space. Enabling the wide operating range of a Mach 4+ variable cycle turbofan/ramjet required the development of a unique fan stage design capable of multi-point operation to accommodate variations in bypass ratio (10×), fan speed (7×), inlet mass flow (3.5×), inlet pressure (8×), and inlet temperature (3×). In this paper, NASA characterizes the TBCC engine fan stage's aerodynamic performance and stability limits over a wide operating range, including power-on and hypersonic-unique "windmill" operation. Herein, we present the fan stage design and the experimental test results of the fan stage operating from 15 to 100 percent corrected design speed. In the companion paper, we provide an assessment of the ability of NASA's APNASA code to predict the fan stage performance and operability over a wide range of speed and bypass ratio.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Doucet, M.; Durant Terrasson, L.; Mouton, J.
2006-07-01
Criticality safety evaluations implement requirements to demonstrate sufficient subcritical margins outside of the reactor environment, for example in fuel fabrication plants. Basic criticality data (i.e., criticality standards) are used in the determination of subcritical margins for all processes involving plutonium or enriched uranium. There are several international criticality standards, e.g., ARH-600, which is one that the US nuclear industry relies on. The French Nuclear Safety Authority (DGSNR and its advising body IRSN) has requested AREVA NP to review the criticality standards used for the evaluation of its Low Enriched Uranium fuel fabrication plants with CRISTAL V0, the recently updated French criticality evaluation package. Criticality safety is a concern for every phase of the fabrication process, including UF{sub 6} cylinder storage, UF{sub 6}-UO{sub 2} conversion, powder storage, pelletizing, rod loading, assembly fabrication, and assembly transportation. Until 2003, the accepted criticality standards were based on the French CEA work performed in the late seventies with the APOLLO1 cell/assembly computer code. APOLLO1 is a spectral code, used for evaluating the basic characteristics of fuel assemblies for reactor physics applications, which has been enhanced to perform criticality safety calculations. Throughout the years, CRISTAL, starting with APOLLO1 and MORET 3 (a 3D Monte Carlo code), has been improved to account for the growth of its qualification database and for increasing user requirements. Today, CRISTAL V0 is an up-to-date computational tool incorporating a modern basic microscopic cross section set based on JEF2.2 and the comprehensive APOLLO2 and MORET 4 codes. APOLLO2 is well suited for criticality standards calculations as it includes a sophisticated self-shielding approach, a P{sub ij} flux determination, and a 1D transport (S{sub n}) process.
CRISTAL V0 is the result of more than five years of development work focusing on theoretical approaches and the implementation of user-friendly graphical interfaces. Due to its comprehensive physical simulation and thanks to its broad qualification database with more than a thousand benchmark/calculation comparisons, CRISTAL V0 provides outstanding and reliable accuracy for criticality evaluations for configurations covering the entire fuel cycle (i.e., from enrichment, pellet/assembly fabrication, and transportation to fuel reprocessing). After a brief description of the calculation scheme and the physics algorithms used in this code package, results for the various fissile media encountered in a UO{sub 2} fuel fabrication plant will be detailed and discussed. (authors)
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The paper describes the computational techniques employed in determining the optimal propulsion systems for future aircraft applications and in identifying system tradeoffs and technology requirements. The computer programs used to perform calculations for all the factors that enter into the process of selecting the optimum combinations of airplanes and engines are examined. Attention is given to the description of the computer codes, including NNEP, WATE, LIFCYC, INSTAL, and POD DRG. A process is illustrated by which turbine engines can be evaluated with respect to fuel consumption, engine weight, cost, and installation effects. Examples show the benefits of variable geometry and the tradeoff between fuel burned and engine weight. Future plans for further improvements in the analytical modeling of engine systems are also described.
A high-order language for a system of closely coupled processing elements
NASA Technical Reports Server (NTRS)
Feyock, S.; Collins, W. R.
1986-01-01
The research reported in this paper was occasioned by the requirements of the Real-Time Digital Simulator (RTDS) project under way at NASA Lewis Research Center. The RTDS simulation scheme employs a network of CPUs running lock-step cycles in the parallel computation of jet airplane simulations. The project's need for a high order language (HOL) that would allow non-experts to write simulation applications and that could be implemented on a possibly varying network can best be fulfilled by using the programming language Ada. We describe how the simulation problems can be modeled in Ada, how to map a single, multi-processing Ada program into code for individual processors regardless of network reconfiguration, and why some Ada language features are particularly well-suited to network simulations.
Trevors, J T
2012-12-01
The hypothesis is proposed that during the organization of pre-biotic bacterial cell(s), high-energy electrical discharges, infrared radiation (IR), thermosynthesis and possibly pre-photosynthesis were central to the origin of life. High-energy electrical discharges generated some simple organic molecules available for the origin of life. Infrared radiation, both incoming to the Earth and generated on the cooling Earth with day/night and warming/cooling cycles, was a component of heat engine thermosynthesis before enzymes and the genetic code were present. Eventually, a primitive forerunner of photosynthesis and the capability to capture visible light emerged. In addition, the dual particle-wave nature of light is discussed from the perspective that life requires light acting both as a wave and particle.
Van Laere, Sven; Nyssen, Marc; Verbeke, Frank
2017-01-01
Clinical coding is a requirement to provide valuable data for billing, epidemiology and health care resource allocation. In sub-Saharan Africa, we observe a growing awareness of the need for coding of clinical data, not only in health insurances, but also in governments and the hospitals. Presently, coding systems in sub-Saharan Africa are often used for billing purposes. In this paper we consider the use of a nomenclature to also have a clinical impact. Often coding systems are assumed to be complex and too extensive to be used in daily practice. Here, we present a method for constructing a new nomenclature based on existing coding systems by considering a minimal subset in the sub-Saharan region. Evaluation of completeness will be done nationally using the requirements of national registries. The nomenclature requires an extension character for dealing with codes that have to be used for multiple registries. Hospitals will benefit most by using this extension character.
Code of Federal Regulations, 2010 CFR
2010-07-01
...-cycle vehicles not requiring particulate emission measurements. 86.109-94 Section 86.109-94 Protection... Year New Light-Duty Vehicles and New Light-Duty Trucks and New Otto-Cycle Complete Heavy-Duty Vehicles; Test Procedures § 86.109-94 Exhaust gas sampling system; Otto-cycle vehicles not requiring particulate...
Convolutional code performance in planetary entry channels
NASA Technical Reports Server (NTRS)
Modestino, J. W.
1974-01-01
The planetary entry channel is modeled for communication purposes representing turbulent atmospheric scattering effects. The performance of short and long constraint length convolutional codes is investigated in conjunction with coherent BPSK modulation and Viterbi maximum likelihood decoding. Algorithms for sequential decoding are studied in terms of computation and/or storage requirements as a function of the fading channel parameters. The performance of the coded coherent BPSK system is compared with the coded incoherent MFSK system. Results indicate that: some degree of interleaving is required to combat time correlated fading of channel; only modest amounts of interleaving are required to approach performance of memoryless channel; additional propagational results are required on the phase perturbation process; and the incoherent MFSK system is superior when phase tracking errors are considered.
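The interleaving conclusion can be made concrete with a small sketch. The block interleaver below is a hypothetical illustration (not taken from the paper): symbols are written into a matrix by rows and read out by columns, so a burst of time-correlated fading errors lands on symbols that belonged to different codewords.

```python
# Minimal block interleaver sketch (illustrative; dimensions are arbitrary).

def interleave(symbols, rows, cols):
    """Write row-by-row into a rows x cols array, read out column-by-column."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse operation: write column-by-column, read out row-by-row."""
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = interleave(data, rows=3, cols=4)
# A burst of 'rows' consecutive channel errors in tx hits symbols that were
# originally a full row apart, so each row (codeword) absorbs at most one.
assert deinterleave(tx, rows=3, cols=4) == data
```

Deeper interleaving (more rows) tolerates longer fading bursts, at the cost of latency, which is why only "modest amounts" suffice once the fade duration is short relative to the interleaver span.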
Potential Effects of Leak-Before-Break on Light Water Reactor Design.
1985-08-26
Boiler and Pressure Vessel Code. In fact, section 3 of that code was created for nuclear applications. This... Boiler and Pressure Vessel Code. The only major change which leak-before-break would require in these analyses would be that all piping to be considered...XI of the ASME Boiler and Pressure Vessel Code, and is already required for all Class I piping systems in the plant. Class I systems are those
NASA Technical Reports Server (NTRS)
Whalen, Michael; Schumann, Johann; Fischer, Bernd
2002-01-01
Code certification is a lightweight approach to demonstrating software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.
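As a rough illustration of the kind of annotation involved (a hypothetical sketch, not AUTOBAYES output), the loop invariant below is written as a runtime assertion; a verification condition generator would instead turn such an annotation into a first-order proof obligation to be discharged by a theorem prover.

```python
# Hypothetical sketch: the loop-invariant annotation a certifier must supply,
# expressed here as runtime assertions rather than formal annotations.

def summed(xs):
    """Sum a list; the invariant relates the partial total to the prefix."""
    total = 0
    i = 0
    while i < len(xs):
        # invariant: total == sum(xs[:i])  -- would become a proof obligation
        assert total == sum(xs[:i])
        total += xs[i]
        i += 1
    # postcondition: total == sum(xs)  -- the certified property
    assert total == sum(xs)
    return total

print(summed([1, 2, 3]))  # -> 6
```

Writing such invariants by hand for realistic programs is exactly the time-consuming, error-prone step the synthesis-based approach automates.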
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-20
... (Code 324), Field Border (Code 386), Filter Strip (Code 393), Land Smoothing (Code 466), Livestock... the implementation requirement document to the specifications and plans. Filter Strip (Code 393)--The...
Strydom, G.; Epiney, A. S.; Alfonsi, Andrea; ...
2015-12-02
The PHISICS code system has been under development at INL since 2010. It consists of several modules providing improved coupled core simulation capability: INSTANT (3D nodal transport core calculations), MRTAU (depletion and decay heat generation), and modules performing criticality searches, fuel shuffling, and generalized perturbation. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D was finalized in 2013, and as part of the verification and validation effort the first phase of the OECD/NEA MHTGR-350 Benchmark has now been completed. The theoretical basis and latest development status of the coupled PHISICS/RELAP5-3D tool are described in more detail in a concurrent paper. This paper provides an overview of the OECD/NEA MHTGR-350 Benchmark and presents the results of Exercises 2 and 3 defined for Phase I. Exercise 2 required the modelling of a stand-alone thermal fluids solution at End of Equilibrium Cycle for the Modular High Temperature Reactor (MHTGR). The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 required a coupled neutronics and thermal fluids solution, and the PHISICS/RELAP5-3D code suite was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of results obtained with the traditional RELAP5-3D “ring” model approach against a much more detailed model that includes kinetics feedback on the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity that can be obtained by this “block” model is illustrated with comparison results on the temperature, power density, and flux distributions.
Furthermore, it is shown that the ring model leads to significantly lower fuel temperatures (up to 10%) when compared with the higher fidelity block model, and that the additional model development and run-time efforts are worth the gains obtained in the improved spatial temperature and flux distributions.
Foreign Object Damage Identification in Turbine Engines
NASA Technical Reports Server (NTRS)
Strack, William; Zhang, Desheng; Turso, James; Pavlik, William; Lopez, Isaac
2005-01-01
This report summarizes the collective work of a five-person team from different organizations examining the problem of detecting foreign object damage (FOD) events in turbofan engines from gas path thermodynamic and bearing accelerometer sensors, and determining the severity of damage to each component (diagnosis). Several detection and diagnostic approaches were investigated, and a software tool (FODID) was developed to assist researchers in detecting and diagnosing FOD events. These approaches include (1) fan efficiency deviation computed from upstream and downstream temperature/pressure measurements, (2) gas path weighted least squares estimation of component health parameter deficiencies, (3) Kalman filter estimation of component health parameters, and (4) use of structural vibration signal processing to detect both large and small FOD events. The last three of these approaches require a significant amount of computation in conjunction with a physics-based analytic model of the underlying phenomenon: the NPSS thermodynamic cycle code for approaches 1 to 3 and the DyRoBeS reduced-order rotor dynamics code for approach 4. A potential application of the FODID software tool, in addition to its detection/diagnosis role, is using its sensitivity results to help identify the best types of sensors and their optimum locations within the gas path, and similarly for bearing accelerometers.
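Approach (1) can be illustrated with the standard textbook relation for fan adiabatic (isentropic) efficiency from upstream/downstream total pressure and temperature; the report's exact formulation and the NPSS model details are not reproduced here, and the numbers below are hypothetical.

```python
# Sketch of approach (1): fan isentropic efficiency from measured total
# pressures/temperatures. A sudden drop relative to baseline flags damage.
# Assumed: textbook relation, constant gamma; numbers are made up.

GAMMA = 1.4  # ratio of specific heats for air

def fan_efficiency(p1, t1, p2, t2):
    """Isentropic efficiency: ideal temperature rise / actual temperature rise."""
    pressure_ratio = p2 / p1
    ideal_rise = pressure_ratio ** ((GAMMA - 1.0) / GAMMA) - 1.0
    actual_rise = t2 / t1 - 1.0
    return ideal_rise / actual_rise

baseline = fan_efficiency(101.3, 288.0, 170.0, 336.0)
# Hypothetical post-FOD reading: less pressure rise for the same work input.
damaged = fan_efficiency(101.3, 288.0, 165.0, 336.0)
assert damaged < baseline  # the efficiency deviation signals a possible FOD event
```

The diagnostic signal is the deviation from a healthy baseline, not the absolute value, which is why the model-based estimators (approaches 2 and 3) track health-parameter shifts over time.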
ExaSAT: An exascale co-design tool for performance modeling
Unat, Didem; Chan, Cy; Zhang, Weiqun; ...
2015-02-09
One of the emerging challenges to designing HPC systems is understanding and projecting the requirements of exascale applications. In order to determine the performance consequences of different hardware designs, analytic models are essential because they can provide fast feedback to the co-design centers and chip designers without costly simulations. However, current attempts to analytically model program performance typically rely on the user manually specifying a performance model. Here we introduce the ExaSAT framework that automates the extraction of parameterized performance models directly from source code using compiler analysis. The parameterized analytic model enables quantitative evaluation of a broad range of hardware design trade-offs and software optimizations on a variety of different performance metrics, with a primary focus on data movement as a metric. Finally, we demonstrate the ExaSAT framework’s ability to perform deep code analysis of a proxy application from the Department of Energy Combustion Co-design Center to illustrate its value to the exascale co-design process. ExaSAT analysis provides insights into the hardware and software trade-offs and lays the groundwork for exploring a more targeted set of design points using cycle-accurate architectural simulators.
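The flavor of such a parameterized analytic model can be sketched in a few lines. This is a toy roofline-style estimate, not ExaSAT's actual model; the kernel flop and byte counts below are assumptions for a hypothetical stencil sweep.

```python
# Toy analytic performance model: predict runtime from counted flops and
# bytes moved, then scan a hardware parameter (memory bandwidth) without
# running any simulation. All kernel parameters are hypothetical.

def predicted_time(flops, bytes_moved, peak_flops, mem_bw):
    """Runtime lower bound: the slower of compute time and data-movement time."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

n = 256 ** 3             # hypothetical 256^3 grid
flops = 8 * n            # assumed flops per grid point
bytes_moved = 16 * n     # assumed: one 8-byte read + one 8-byte write per point

for bw in (1e11, 4e11, 1.6e12):   # candidate memory bandwidths (bytes/s)
    t = predicted_time(flops, bytes_moved, peak_flops=1e13, mem_bw=bw)
    print(f"BW={bw:.0e} B/s -> {t * 1e3:.2f} ms")
```

With these counts the kernel stays data-movement bound across the whole bandwidth scan, the kind of conclusion that motivates ExaSAT's primary focus on data movement as a metric.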
OPserver: opacities and radiative accelerations on demand
NASA Astrophysics Data System (ADS)
Mendoza, C.; González, J.; Seaton, M. J.; Buerger, P.; Bellorín, A.; Meléndez, M.; Rodríguez, L. S.; Delahaye, F.; Zeippen, C. J.; Palacios, E.; Pradhan, A. K.
2009-05-01
We report on developments carried out within the Opacity Project (OP) to upgrade atomic database services to comply with e-infrastructure requirements. We give a detailed description of an interactive, online server for astrophysical opacities, referred to as OPserver, to be used in sophisticated stellar modelling where Rosseland mean opacities and radiative accelerations are computed at every depth point and each evolution cycle. This is crucial, for instance, in chemically peculiar stars and in the exploitation of the new asteroseismological data. OPserver, downloadable with the new OPCD_3.0 release from the Centre de Données Astronomiques de Strasbourg, France, computes mean opacities and radiative data for arbitrary chemical mixtures from the OP monochromatic opacities. It is essentially a client-server network restructuring and optimization of the suite of codes included in the earlier OPCD_2.0 release. The server can be installed locally or, alternatively, accessed remotely from the Ohio Supercomputer Center, Columbus, Ohio, USA. The client is an interactive web page or a subroutine library that can be linked to the user code. The suitability of this scheme in grid computing environments is emphasized, and its extension to other atomic database services for astrophysical purposes is discussed.
Studies of auroral X-ray imaging from high altitude spacecraft
NASA Technical Reports Server (NTRS)
Mckenzie, D. L.; Mizera, P. F.; Rice, C. J.
1980-01-01
Results of a study of techniques for imaging the aurora from a high altitude satellite at X-ray wavelengths are summarized. The X-ray observations allow the straightforward derivation of the primary auroral X-ray spectrum and can be made at all local times, day and night. Five candidate imaging systems are identified: X-ray telescope, multiple pinhole camera, coded aperture, rastered collimator, and imaging collimator. Examples of each are specified, subject to common weight and size limits which allow them to be intercompared. The imaging ability of each system is tested using a wide variety of sample spectra which are based on previous satellite observations. The study shows that the pinhole camera and coded aperture are both good auroral imaging systems. The two collimated detectors are significantly less sensitive. The X-ray telescope provides better image quality than the other systems in almost all cases, but a limitation to energies below about 4 keV prevents this system from providing the spectral data essential to deriving electron spectra, energy input to the atmosphere, and atmospheric densities and conductivities. The orbit selection requires a tradeoff between spatial resolution and duty cycle.
Supersonics Project - Airport Noise Tech Challenge
NASA Technical Reports Server (NTRS)
Bridges, James
2010-01-01
The Airport Noise Tech Challenge research effort under the Supersonics Project is reviewed. While the goal of "Improved supersonic jet noise models validated on innovative nozzle concepts" remains the same, the success of the research effort has caused the thrust of the research to be modified going forward. The main activities from FY06-10 focused on development and validation of jet noise prediction codes. This required innovative diagnostic techniques to be developed and deployed, extensive jet noise and flow databases to be created, and computational tools to be developed and validated. Furthermore, in FY09-10, systems studies commissioned by the Supersonics Project showed that viable supersonic aircraft were within reach using variable cycle engine architectures if exhaust nozzle technology could provide 3-5 dB of suppression. The Project then began to focus on integrating the technologies being developed in its Tech Challenge areas to bring about successful system designs. Consequently, the Airport Noise Tech Challenge area has shifted efforts from developing jet noise prediction codes to using them to develop low-noise nozzle concepts for integration into supersonic aircraft. The new plan of research is briefly presented by technology and timelines.
Impact of Reactor Operating Parameters on Cask Reactivity in BWR Burnup Credit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilas, Germina; Betzler, Benjamin R; Ade, Brian J
This paper discusses the effect of reactor operating parameters used in fuel depletion calculations on spent fuel cask reactivity, with relevance for boiling-water reactor (BWR) burnup credit (BUC) applications. Assessments that used generic BWR fuel assembly and spent fuel cask configurations are presented. The considered operating parameters, which were independently varied in the depletion simulations for the assembly, included fuel temperature, bypass water density, specific power, and operating history. Different operating history scenarios were considered for the assembly depletion to determine the effect of relative power distribution during the irradiation cycles, as well as the downtime between cycles. Depletion, decay, and criticality simulations were performed using computer codes and associated nuclear data within the SCALE code system. Results quantifying the dependence of cask reactivity on the assembly depletion parameters are presented herein.
High-Fidelity Three-Dimensional Simulation of the GE90
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Norris, Andrew; Veres, Joseph P.
2004-01-01
A full-engine simulation of the three-dimensional flow in the GE90 94B high-bypass ratio turbofan engine has been achieved. Through the exploitation of parallel processing, the simulation would take less than 11 hr of wall-clock time if started from scratch. The simulation of the compressor components, the cooled high-pressure turbine, and the low-pressure turbine was performed using the APNASA turbomachinery flow code. The combustor flow and chemistry were simulated using the National Combustion Code (NCC). The engine simulation matches the engine thermodynamic cycle for a sea-level takeoff condition. The simulation is started at the inlet of the fan and progresses downstream. Comparisons with the cycle point are presented. A detailed look at the blockage in the turbomachinery is presented as one measure to assess and view the solution and the multistage interaction effects.
The Gift Code User Manual. Volume I. Introduction and Input Requirements
1975-07-01
The GIFT code is a FORTRAN computer program. The basic input to the GIFT code is data called...
26 CFR 1.42-5 - Monitoring compliance with low-income housing credit requirements.
Code of Federal Regulations, 2012 CFR
2012-04-01
... be required to retain the original local health, safety, or building code violation reports or... account local health, safety, and building codes (or other habitability standards), and the State or local government unit responsible for making local health, safety, or building code inspections did not issue a...
26 CFR 1.42-5 - Monitoring compliance with low-income housing credit requirements.
Code of Federal Regulations, 2010 CFR
2010-04-01
... be required to retain the original local health, safety, or building code violation reports or... account local health, safety, and building codes (or other habitability standards), and the State or local government unit responsible for making local health, safety, or building code inspections did not issue a...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-08
... production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311... Federal Food, Drug, and Cosmetic Act (FFDCA) requesting an exemption from the requirement of a tolerance...? You may be potentially affected by this action if you are an agricultural producer, food manufacturer...
Bar Coding the U. S. Government Bill of Lading and the Material Inspection and Receiving Report.
1984-12-01
of respondents K because some of the replies did not respond to this question.) TABLE 3-2. DD 250 PROCESSING CAPABILITIES AUTOMATED - BAR CODE...Proposed minimum data elements (both human readable and bar coded) required and why? (3) Proposed signature requirement changes and why? (4) Proposed
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.; He, Jiali; White, Gregory S.
1997-01-01
Turbo coding using iterative SOVA decoding and M-ary differentially coherent or non-coherent modulation can provide an effective coding modulation solution: (1) Energy efficient with relatively simple SOVA decoding and small packet lengths, depending on BEP required; (2) Low number of decoding iterations required; and (3) Robustness in fading with channel interleaving.
VLA telemetry performance with concatenated coding for Voyager at Neptune
NASA Technical Reports Server (NTRS)
Dolinar, S. J., Jr.
1988-01-01
Current plans for supporting the Voyager encounter at Neptune include the arraying of the Deep Space Network (DSN) antennas at Goldstone, California, with the National Radio Astronomy Observatory's Very Large Array (VLA) in New Mexico. Not designed as a communications antenna, the VLA signal transmission facility suffers a disadvantage in that the received signal is subjected to a gap or blackout period of approximately 1.6 msec once every 5/96 sec control cycle. Previous analyses showed that the VLA data gaps could cause disastrous performance degradation in a VLA stand-alone system and modest degradation when the VLA is arrayed equally with Goldstone. New analysis indicates that the earlier predictions for concatenated code performance were overly pessimistic for most combinations of system parameters, including those of Voyager-VLA. The periodicity of the VLA gap cycle tends to guarantee that all Reed-Solomon codewords will receive an average share of erroneous symbols from the gaps. However, large deterministic fluctuations in the number of gapped symbols from codeword to codeword may occur for certain combinations of code parameters, gap cycle parameters, and data rates. Several mechanisms for causing these fluctuations are identified and analyzed. Even though graceful degradation is predicted for the Voyager-VLA parameters, catastrophic degradation greater than 2 dB can occur for a VLA stand-alone system at certain non-Voyager data rates inside the range of the actual Voyager rates. Thus, it is imperative that all of the Voyager-VLA parameters be very accurately known and precisely controlled.
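The gap-accounting argument can be sketched numerically. The parameters below are illustrative stand-ins, not the actual Voyager-VLA values: symbols transmitted at a fixed rate are marked as lost when they fall inside the periodic blackout, and the hits are tallied per Reed-Solomon codeword. When the codeword duration is commensurate with the gap period, the same symbol positions are hit in every codeword, the deterministic-fluctuation mechanism the analysis warns about.

```python
# Count how many symbols of each codeword fall inside the periodic blackout.
# All times are integer microseconds so the modular arithmetic is exact.
# Parameters are hypothetical, chosen only to illustrate the two regimes.

def gapped_per_codeword(n_codewords, symbols_per_cw, symbol_us,
                        gap_period_us, gap_len_us):
    counts = []
    for cw in range(n_codewords):
        hit = sum(
            ((cw * symbols_per_cw + s) * symbol_us) % gap_period_us < gap_len_us
            for s in range(symbols_per_cw)
        )
        counts.append(hit)
    return counts

# Incommensurate timing: gapped symbols spread near-evenly over codewords.
spread = gapped_per_codeword(8, 260, 150, 52_000, 1_600)
# Codeword duration exactly one gap period: identical symbol positions are
# hit in every codeword -- the counts lock in place deterministically.
locked = gapped_per_codeword(8, 260, 200, 52_000, 1_600)
print(spread, locked)
```

In the locked regime every codeword loses the same symbols, so whether performance degrades gracefully or catastrophically depends on whether that fixed loss stays within the code's erasure-correcting capability, which is why the code, gap-cycle, and data-rate parameters must be known and controlled together.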
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palmer, M.E.
1997-12-05
This V and V Report includes analysis of two revisions of the DMS [data management system] System Requirements Specification (SRS) and the Preliminary System Design Document (PSDD); the source code for the DMS Communication Module (DMSCOM) messages; the source code for selected DMS screens; and the code for the BWAS Simulator. BDM Federal analysts used a series of matrices to: compare the requirements in the System Requirements Specification (SRS) to the specifications found in the System Design Document (SDD), to ensure the design supports the business functions; compare the discrete parts of the SDD with each other, to ensure that the design is consistent and cohesive; compare the source code of the DMS Communication Module with the specifications, to ensure that the resultant messages will support the design; compare the source code of selected screens to the specifications, to ensure that the resultant system screens will support the design; and compare the source code of the BWAS Simulator with the requirements to interface with DMS messages and data transfers relating to BWAS operations.
Guidance and Control Software Project Data - Volume 2: Development Documents
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J. (Editor)
2008-01-01
The Guidance and Control Software (GCS) project was the last in a series of software reliability studies conducted at Langley Research Center between 1977 and 1994. The technical results of the GCS project were recorded after the experiment was completed. Some of the support documentation produced as part of the experiment, however, is serving an unexpected role far beyond its original project context. Some of the software used as part of the GCS project was developed to conform to the RTCA/DO-178B software standard, "Software Considerations in Airborne Systems and Equipment Certification," used in the civil aviation industry. That standard requires extensive documentation throughout the software development life cycle, including plans, software requirements, design and source code, verification cases and results, and configuration management and quality control data. The project documentation that includes this information is open for public scrutiny without the legal or safety implications associated with comparable data from an avionics manufacturer. This public availability has afforded an opportunity to use the GCS project documents for DO-178B training. This report provides a brief overview of the GCS project, describes the 4-volume set of documents and the role they are playing in training, and includes the development documents from the GCS project. Volume 2 contains three appendices: A. Guidance and Control Software Development Specification; B. Design Description for the Pluto Implementation of the Guidance and Control Software; and C. Source Code for the Pluto Implementation of the Guidance and Control Software.
An analysis of international nuclear fuel supply options
NASA Astrophysics Data System (ADS)
Taylor, J'tia Patrice
As the global demand for energy grows, many nations are considering developing or increasing nuclear capacity as a viable, long-term power source. Assessing the possible expansion of nuclear power and the intricate relationships---which cover the range of economics, security, and material supply and demand---between established and aspirant nuclear generating entities requires models and system analysis tools that integrate all aspects of the nuclear enterprise. Computational tools and methods now exist across diverse research areas, such as operations research and nuclear engineering, to develop such a tool. This dissertation aims to develop methodologies, and to employ and expand on existing resources, to create a multipurpose tool for analyzing international nuclear fuel supply options. The dissertation comprises two distinct components: the development of the Material, Economics, and Proliferation Assessment Tool (MEPAT), and analysis of fuel cycle scenarios using the tool. MEPAT is intended for unrestricted distribution and therefore relies on publicly available and open-source codes wherever possible. MEPAT is built using the Powersim Studio platform, which is widely used in systems analysis. MEPAT development is divided into three modules focusing on material movement, nonproliferation, and economics. The material movement module tracks material quantity in each process of the fuel cycle and in each nuclear program with respect to ownership, location and composition. The material movement module builds on techniques employed by fuel cycle models such as the Verifiable Fuel Cycle Simulation (VISION) code developed at the Idaho National Laboratory under the Advanced Fuel Cycle Initiative (AFCI) for the analysis of domestic fuel cycles. Material movement parameters such as lending and reactor preference, as well as fuel cycle parameters such as process times and material factors, are user-specified through a Microsoft Excel© data spreadsheet.
The material movement module is the largest of the three, and the two other modules, which assess the nonproliferation and economics of the options, depend on its output. Proliferation resistance measures from the literature are modified and incorporated in MEPAT. The nonproliferation module allows the user to specify defining attributes for the fuel cycle processes, and determines significant quantities of materials as well as measures of proliferation resistance. The measure depends on user input and material information. The economics module allows the user to specify costs associated with different processes and other aspects of the fuel cycle. The simulation tool then calculates economic measures that relate the cost of the fuel cycle to electricity production. The second part of this dissertation consists of an examination of four fuel supply scenarios using MEPAT. The first is a simple scenario illustrating the modules and basic functions of MEPAT. The second scenario recreates a fuel supply study reported earlier in the literature and, for validation, compares MEPAT results with the published ones. The third, more realistic, scenario includes four nuclear programs with one program entering the nuclear energy market. The fourth scenario assesses the reactor options available to the Hashemite Kingdom of Jordan, which is currently evaluating options for introducing nuclear power in the country. The methodology developed and implemented in MEPAT to analyze the material movement, proliferation resistance, and economics of nuclear fuel supply options is expected to simplify the assessment of different reactor and fuel options available to utilities, government agencies, and international organizations.
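The stock-and-flow bookkeeping a material movement module performs can be sketched in a few lines. This is a highly simplified illustration, not MEPAT's actual model: batches carry (owner, location, mass) attributes and sit in each process stage for a fixed residence time before moving on; the stage names and process times are invented.

```python
from collections import deque

# Hypothetical fuel-cycle stages with invented residence times (in steps).
PROCESS_TIME = {'enrichment': 2, 'fabrication': 1, 'reactor': 4}
STAGES = ['enrichment', 'fabrication', 'reactor']

def step(pipelines, t):
    """Advance the pipeline one time step, moving finished batches downstream."""
    for i, stage in enumerate(STAGES):
        q = pipelines[stage]
        while q and t - q[0]['entered'] >= PROCESS_TIME[stage]:
            batch = q.popleft()
            if i + 1 < len(STAGES):
                batch['entered'] = t
                batch['location'] = STAGES[i + 1]
                pipelines[STAGES[i + 1]].append(batch)

pipes = {s: deque() for s in STAGES}
pipes['enrichment'].append({'owner': 'program A', 'location': 'enrichment',
                            'kg': 500.0, 'entered': 0})
for t in range(1, 5):
    step(pipes, t)
```

Tracking ownership and location as attributes on each batch, as above, is what lets a system-dynamics model answer "whose material is where" questions at any simulated time.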
Postirradiation Testing Laboratory (327 Building)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kammenzind, D.E.
A Standards/Requirements Identification Document (S/RID) is the total list of the Environment, Safety and Health (ES and H) requirements to be implemented by a site, facility, or activity. These requirements are appropriate to the life cycle phase to achieve an adequate level of protection for worker and public health and safety, and the environment during design, construction, operation, decontamination and decommissioning, and environmental restoration. S/RIDs are living documents, to be revised appropriately based on a change in the site's or facility's mission or configuration, a change in the facility's life cycle phase, or a change to the applicable standards/requirements. S/RIDs encompass health and safety, environmental, and safety-related safeguards and security (S and S) standards/requirements related to the functional areas listed in the US Department of Energy (DOE) Environment, Safety and Health Configuration Guide. The Fluor Daniel Hanford (FDH) Contract S/RID contains standards/requirements, applicable to FDH and FDH subcontractors, necessary for safe operation of Project Hanford Management Contract (PHMC) facilities, that are not the direct responsibility of the facility manager (e.g., a site-wide fire department). Facility S/RIDs contain standards/requirements applicable to a specific facility that are the direct responsibility of the facility manager. S/RIDs are prepared by those responsible for managing the operation of facilities or the conduct of activities that present a potential threat to the health and safety of workers, the public, or the environment, including: Hazard Category 1 and 2 nuclear facilities and activities, as defined in DOE 5480.23; and selected Hazard Category 3 nuclear and Low Hazard non-nuclear facilities and activities, as agreed upon by RL.
The Postirradiation Testing Laboratory (PTL) S/RID contains standards/requirements that are necessary for safe operation of the PTL facility, and other buildings/areas that are the direct responsibility of the specific facility manager. The specific DOE Orders, regulations, industry codes/standards, guidance documents and good industry practices that serve as the basis for each element/subelement are identified and aligned with each subelement.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Portmann, Greg; /LBL, Berkeley; Safranek, James
The LOCO algorithm has been used by many accelerators around the world. Although the uses for LOCO vary, the most common use has been to find calibration errors and correct the optics functions. The light source community in particular has made extensive use of the LOCO algorithms to tightly control the beta function and coupling. Maintaining high-quality beam parameters requires constant attention, so a relatively large effort was put into software development for the LOCO application. The LOCO code was originally written in FORTRAN. This code worked fine but was somewhat awkward to use. For instance, the FORTRAN code itself did not calculate the model response matrix; it required a separate modeling code such as MAD to calculate the model matrix, after which one manually loaded the data into the LOCO code. As the number of people interested in LOCO grew, it became necessary to make it easier to use. The decision to port LOCO to Matlab was relatively easy: it is best to use a matrix programming language with good graphics capability; Matlab was also being used for high-level machine control; and the accelerator modeling code AT [5] had already been developed for Matlab. Since LOCO requires collecting and processing a relatively large amount of data, it is very helpful to have the LOCO code compatible with the high-level machine control [3]. A number of new features were added while porting the code from FORTRAN, and new methods continue to evolve [7], [9]. Although Matlab LOCO was written with AT as the underlying tracking code, a mechanism to connect to other modeling codes has been provided.
Quantum computing with Majorana fermion codes
NASA Astrophysics Data System (ADS)
Litinski, Daniel; von Oppen, Felix
2018-05-01
We establish a unified framework for Majorana-based fault-tolerant quantum computation with Majorana surface codes and Majorana color codes. All logical Clifford gates are implemented with zero-time overhead. This is done by introducing a protocol for Pauli product measurements with tetrons and hexons which only requires local 4-Majorana parity measurements. An analogous protocol is used in the fault-tolerant setting, where tetrons and hexons are replaced by Majorana surface code patches, and parity measurements are replaced by lattice surgery, still only requiring local few-Majorana parity measurements. To this end, we discuss twist defects in Majorana fermion surface codes and adapt the technique of twist-based lattice surgery to fermionic codes. Moreover, we propose a family of codes that we refer to as Majorana color codes, which are obtained by concatenating Majorana surface codes with small Majorana fermion codes. Majorana surface and color codes can be used to decrease the space overhead and stabilizer weight compared to their bosonic counterparts.
Development of Advanced Life Cycle Costing Methods for Technology Benefit/Cost/Risk Assessment
NASA Technical Reports Server (NTRS)
Yackovetsky, Robert (Technical Monitor)
2002-01-01
The overall objective of this three-year grant is to provide NASA Langley's System Analysis Branch with improved affordability tools and methods based on probabilistic cost assessment techniques. In order to accomplish this objective, the Aerospace Systems Design Laboratory (ASDL) needs to pursue more detailed affordability, technology impact, and risk prediction methods and to demonstrate them on a variety of advanced commercial transports. The affordability assessment, which is a cornerstone of ASDL methods, relies on the Aircraft Life Cycle Cost Analysis (ALCCA) program originally developed by NASA Ames Research Center and enhanced by ASDL. This grant proposed to improve ALCCA in support of the project objective by updating the research, design, test, and evaluation cost module, as well as the engine development cost module. Investigations into enhancements to ALCCA include improved engine development cost, process-based costing, supportability cost, and system reliability with airline loss of revenue for system downtime. A probabilistic, stand-alone version of ALCCA/FLOPS will also be developed under this grant in order to capture the uncertainty involved in technology assessments. FLOPS (FLight Optimization System program) is an aircraft synthesis and sizing code developed by NASA Langley Research Center. This probabilistic version of the coupled program will be used within a Technology Impact Forecasting (TIF) method to determine what types of technologies would have to be infused in a system in order to meet customer requirements. A probabilistic analysis of the CERs (cost estimating relationships) within ALCCA will also be carried out under this contract in order to gain some insight into the most influential costs and the impact that code fidelity could have on future RDS (Robust Design Simulation) studies.
Low-Latency and Energy-Efficient Data Preservation Mechanism in Low-Duty-Cycle Sensor Networks.
Jiang, Chan; Li, Tao-Shen; Liang, Jun-Bin; Wu, Heng
2017-05-06
Similar to traditional wireless sensor networks (WSN), the nodes in low-duty-cycle sensor networks (LDC-WSN) have only limited memory and energy. However, different from WSN, the nodes in LDC-WSN often sleep most of their time to preserve their energies. The sleeping feature causes serious data transmission delays. However, each source node that has sensed data needs to quickly disseminate its data to other nodes in the network for redundant storage. Otherwise, data would be lost because its source node may be destroyed by outer forces in a harsh environment. The quick dissemination requirement conflicts with the sleeping delay in the network. How to quickly disseminate all the source data to all the nodes with limited memory in the network for effective preservation is a challenging issue. In this paper, a low-latency and energy-efficient data preservation mechanism in LDC-WSN is proposed. The mechanism is totally distributed. The data can be disseminated to the network with low latency by using a revised probabilistic broadcasting mechanism, and then stored by the nodes with LT (Luby Transform) codes, a well-known class of rateless erasure codes. After the process of data dissemination and storage completes, some nodes may die due to being destroyed by outer forces. If a mobile sink enters the network at any time and from any place to collect the data, it can recover all of the source data by visiting a small portion of the surviving nodes in the network. Theoretical analyses and simulation results show that our mechanism outperforms existing mechanisms in data dissemination delay and energy efficiency.
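The LT-code storage idea can be sketched compactly: each stored symbol is the XOR of a randomly chosen set of source blocks, and a collector recovers the sources with a peeling decoder. This is a minimal sketch using the ideal soliton degree distribution; the paper's actual mechanism also involves probabilistic broadcasting, and practical systems use the robust soliton distribution instead.

```python
import random

def ideal_soliton(k):
    # rho(1) = 1/k, rho(d) = 1/(d*(d-1)) for d = 2..k
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode(blocks, n_out, rng):
    """Produce n_out coded symbols, each the XOR of a random subset of blocks."""
    k = len(blocks)
    degrees = rng.choices(range(1, k + 1), weights=ideal_soliton(k), k=n_out)
    out = []
    for deg in degrees:
        idx = rng.sample(range(k), deg)
        val = 0
        for i in idx:
            val ^= blocks[i]
        out.append((frozenset(idx), val))
    return out

def lt_decode(k, symbols):
    """Peeling decoder: repeatedly resolve degree-1 symbols."""
    known = {}
    pending = [[set(s), v] for s, v in symbols]
    progress = True
    while progress:
        progress = False
        for sym in pending:
            s, v = sym
            for i in [i for i in s if i in known]:  # strip recovered blocks
                s.discard(i)
                v ^= known[i]
            sym[1] = v
            if len(s) == 1:
                i = next(iter(s))
                if i not in known:
                    known[i] = v
                    progress = True
    return known

rng = random.Random(7)
blocks = [21, 55, 3, 99, 140, 7, 250, 68]   # 8 source blocks (toy data)
enc = lt_encode(blocks, 80, rng)            # generous overhead for the demo
rec = lt_decode(len(blocks), enc)
```

The rateless property is what matters for the survival argument in the abstract: the sink does not need any particular stored symbols, only enough of them.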
The Simpsons program 6-D phase space tracking with acceleration
NASA Astrophysics Data System (ADS)
Machida, S.
1993-12-01
A particle tracking code, Simpsons, covering 6-D phase space including energy ramping has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality, much as real machines do, unlike existing synchrotron tracking codes, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as a function of time are read from an input file to define a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
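The time-driven approach can be illustrated with a toy longitudinal tracker: the rf voltage is a user-supplied ramp curve in time, and a particle's (phase, energy-offset) pair is advanced turn by turn. Every machine number below is invented for illustration; a real code tracks full 6-D coordinates and realistic ramp tables.

```python
import math

def v_rf(t):
    """Toy rf voltage ramp curve (V) as a function of time, read-in in a real code."""
    return 1.0e5 * (1.0 + t)

def track(phi0, dE0, n_turns, T_rev=1.0e-6, E0=1.0e9, eta=0.02, h=1, phi_s=0.0):
    phi, dE, t = phi0, dE0, 0.0
    traj = []
    for _ in range(n_turns):
        # energy kick from the rf cavity (eV); stable oscillation for phi_s = 0
        dE += v_rf(t) * (math.sin(phi_s) - math.sin(phi))
        # phase slip per turn due to the energy offset
        phi += 2.0 * math.pi * h * eta * dE / E0
        t += T_rev
        traj.append((phi, dE))
    return traj

traj = track(0.3, 0.0, 500)
```

Because time, not element index, drives the loop, machine parameters (here `v_rf`) can follow arbitrary curves during the cycle, which is the point made in the abstract.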
NASA Technical Reports Server (NTRS)
Adams, Thomas; VanBaalen, Mary
2009-01-01
The Radiation Health Office (RHO) determines each astronaut's cancer risk by using models that associate risk with the radiation dose astronauts receive from spaceflight missions. The baryon transport code (BRYNTRN), the high charge (Z) and energy transport code (HZETRN), and computer risk models are used to determine the effective dose received by astronauts in Low Earth orbit (LEO). This code uses an approximation of the Boltzmann transport equation. The purpose of the project is to run this code for various International Space Station (ISS) flight parameters in order to gain a better understanding of how this code responds to different scenarios. The project will determine how variations in one set of parameters, such as the point in the solar cycle and altitude, can affect the radiation exposure of astronauts during ISS missions. This project will benefit NASA by improving mission dosimetry.
Comparison Between Simulated and Experimentally Measured Performance of a Four Port Wave Rotor
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Wilson, Jack; Welch, Gerard E.
2007-01-01
Performance and operability testing has been completed on a laboratory-scale, four-port wave rotor of the type suitable for use as a topping cycle on a gas turbine engine. Many design aspects and performance estimates for the wave rotor were determined using a time-accurate, one-dimensional, computational fluid dynamics-based simulation code developed specifically for wave rotors. The code follows a single rotor passage as it moves past the various ports, which in this reference frame become boundary conditions. This paper compares wave rotor performance predicted with the code to that measured during laboratory testing. Both on- and off-design operating conditions were examined. Overall, the match between code and rig was found to be quite good. At operating points where there were disparities, the assumption of larger than expected internal leakage rates successfully realigned code predictions and laboratory measurements. Possible mechanisms for such leakage rates are discussed.
Fast and Adaptive Lossless Onboard Hyperspectral Data Compression System
NASA Technical Reports Server (NTRS)
Aranki, Nazeeh I.; Keymeulen, Didier; Kimesh, Matthew A.
2012-01-01
Modern hyperspectral imaging systems are able to acquire far more data than can be downlinked from a spacecraft. Onboard data compression helps to alleviate this problem, but requires a system capable of power efficiency and high throughput. Software solutions have limited throughput performance and are power-hungry. Dedicated hardware solutions can provide both high throughput and power efficiency, while taking the load off the main processor. Thus a hardware compression system was developed. The implementation uses a field-programmable gate array (FPGA). The implementation is based on the fast lossless (FL) compression algorithm reported in Fast Lossless Compression of Multispectral-Image Data (NPO-42517), NASA Tech Briefs, Vol. 30, No. 8 (August 2006), page 26, which achieves excellent compression performance and has low complexity. This algorithm performs predictive compression using an adaptive filtering method and uses adaptive Golomb coding. The implementation also packetizes the coded data. The FL algorithm is well suited for implementation in hardware. In the FPGA implementation, one sample is compressed every clock cycle, which makes for a fast and practical real-time solution for space applications. Benefits of this implementation are: 1) The underlying algorithm achieves a combination of low complexity and compression effectiveness that exceeds that of techniques currently in use. 2) The algorithm requires no training data or other specific information about the nature of the spectral bands for a fixed instrument dynamic range. 3) Hardware acceleration provides a throughput improvement of 10 to 100 times vs. the software implementation. A prototype of the compressor is available in software, but it runs at a speed that does not meet spacecraft requirements. The hardware implementation targets Xilinx Virtex IV FPGAs and makes the use of this compressor practical for Earth satellites as well as beyond-Earth missions with hyperspectral instruments.
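The "predictive compression plus adaptive Golomb coding" idea can be sketched in a few lines. This is only an illustration of the general technique: a trivial previous-sample predictor and a crude running-mean rule for the Rice parameter, not the FL algorithm's actual predictor or adaptation logic.

```python
def zigzag(e):
    """Map a signed prediction residual to a non-negative integer."""
    return 2 * e if e >= 0 else -2 * e - 1

def rice_bits(u, k):
    """Golomb-Rice codeword: unary quotient, '0' terminator, k remainder bits."""
    bits = '1' * (u >> k) + '0'
    if k:
        bits += format(u & ((1 << k) - 1), f'0{k}b')
    return bits

def encode(samples):
    out, prev, acc, n = [], 0, 4, 1          # acc/n tracks the mean residual
    for s in samples:
        u = zigzag(s - prev)
        k = 0
        while (n << (k + 1)) <= acc:         # pick k so 2^k ~ mean residual
            k += 1
        out.append(rice_bits(u, k))
        acc += u; n += 1; prev = s
    return ''.join(out)

def decode(bits, count):
    out, prev, acc, n, pos = [], 0, 4, 1, 0  # mirrors the encoder's state
    for _ in range(count):
        k = 0
        while (n << (k + 1)) <= acc:
            k += 1
        q = 0
        while bits[pos] == '1':
            q += 1; pos += 1
        pos += 1                              # skip the '0' terminator
        r = int(bits[pos:pos + k], 2) if k else 0
        pos += k
        u = (q << k) | r
        e = (u >> 1) if u % 2 == 0 else -((u + 1) >> 1)
        prev += e
        out.append(prev); acc += u; n += 1
    return out

samples = [100, 102, 101, 105, 104, 104]
code = encode(samples)
```

Because encoder and decoder update the same running statistics, no side information is needed, which is why this style of coder needs no training data.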
A Systematic Method for Verification and Validation of Gyrokinetic Microstability Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bravenec, Ronald
My original proposal for the period Feb. 15, 2014 through Feb. 14, 2017 called for an integrated validation and verification effort carried out by myself with collaborators. The validation component would require experimental profile and power-balance analysis. In addition, it would require running the gyrokinetic codes varying the input profiles within experimental uncertainties to seek agreement with experiment before discounting a code as invalidated. Therefore, validation would require a major increase of effort over my previous grant periods, which covered only code verification (code benchmarking). Consequently, I had requested full-time funding. Instead, I am being funded at somewhat less than half time (5 calendar months per year). As a consequence, I decided to forego the validation component and to only continue the verification efforts.
Digital video technologies and their network requirements
DOE Office of Scientific and Technical Information (OSTI.GOV)
R. P. Tsang; H. Y. Chen; J. M. Brandt
1999-11-01
Coded digital video signals are considered to be one of the most difficult data types to transport due to their real-time requirements and high bit rate variability. In this study, the authors discuss the coding mechanisms incorporated by the major compression standards bodies, i.e., JPEG and MPEG, as well as more advanced coding mechanisms such as wavelet and fractal techniques. The relationship between the applications which use these coding schemes and their network requirements is the major focus of this study. Specifically, the authors relate network latency, channel transmission reliability, random access speed, buffering, and network bandwidth with the various coding techniques as a function of the applications which use them. Such applications include High-Definition Television, Video Conferencing, Computer-Supported Collaborative Work (CSCW), and Medical Imaging.
Suh, Robert D; Goldin, Jonathan G; Wallace, Amanda B; Sheehan, Ramon E; Heinze, Stefan B; Gitlitz, Barbara J; Figlin, Robert A
2004-05-01
To assess the technical feasibility and safety of weekly outpatient percutaneous computed tomographic (CT)-guided intratumoral injections of interleukin-2 (IL-2) plasmid DNA in a wide variety of superficial and deep tumor sites. Twenty-nine patients with metastatic renal cell carcinoma and a total of 30 lesions measuring 1.0 cm² or greater in accessible thoracic (n = 15) or abdominal (n = 15) locations underwent up to three cycles of six weekly intratumoral IL-2 plasmid DNA injections. CT was used to guide needle placement and injection. After injection cycle 1, patients whose tumors demonstrated stable (≤25% increase and ≤50% decrease in product of lesion diameters) or decreased size (>50% decrease in product of lesion diameters) advanced to injection cycle 2. Patients whose lesions decreased in size by more than 50% over the course of injection cycle 2 were eligible to begin injection cycle 3. An acceptable safety and technical feasibility profile for this technique was deemed to be (a) a safety and feasibility profile similar to that of single-needle biopsy and (b) an absence of serious adverse events (as defined in Title 21 of the Code of Federal Regulations) and/or unacceptable toxicities (as graded according to the National Cancer Institute Common Toxicity Criteria). A total of 284 intratumoral injections were performed, with a mean of 9.8 injections (range, 6-18 injections) received by each patient. Technical success (needle placement and injection of gene therapy agent) was achieved in all cases. Complications were experienced after 42 (14.8%) of the 284 injections. The most common complication was pneumothorax (at 32 [28.6%] of 112 intrathoracic injections), for which only one patient required catheter drainage. Complications occurred randomly throughout injection cycles and did not appear to increase as patients received more injections (P = .532). No patient experienced serious adverse events or unacceptable toxicities.
Percutaneous CT-guided intratumoral immunotherapy injections are technically feasible and can be safely performed.
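The size-change thresholds used to gate advancement between injection cycles amount to a small decision rule. A sketch, where the inputs are products of lesion diameters and the category labels ("stable", "decreased", "increased") are ours:

```python
def classify(baseline, current):
    """Classify lesion response from the product of lesion diameters.

    stable:    <=25% increase and <=50% decrease  (advances to next cycle)
    decreased: >50% decrease                      (also advances)
    increased: >25% increase
    """
    change = (current - baseline) / baseline
    if change < -0.50:
        return "decreased"
    if change <= 0.25:
        return "stable"
    return "increased"
```

For example, a lesion-diameter product going from 100 to 40 is a 60% decrease and classifies as "decreased", while 100 to 125 sits exactly at the 25% boundary and remains "stable".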
Synchronization Analysis and Simulation of a Standard IEEE 802.11G OFDM Signal
2004-03-01
Figure 26 Convolutional Encoder Parameters. Figure 27 Puncturing Parameters. As per Table 3, the required code rate is r = 3/4, which requires... to achieve the higher data rates required by the Standard 802.11b was accomplished by using packet binary convolutional coding (PBCC). Essentially... higher data rates are achieved by using convolutional coding combined with BPSK or QPSK modulation. The data is first encoded with a rate one-half
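Obtaining rate 3/4 from a rate-1/2 mother code works by puncturing: some coded bits are simply not transmitted. A sketch using the K=7 generator polynomials 133/171 (octal) associated with 802.11 OFDM; the specific puncturing mask here (dropping B1 and A2 of every six coded bits) follows the common 802.11a/g pattern, and the input bits are arbitrary:

```python
G0, G1 = 0o133, 0o171   # K=7 generator polynomials, octal

def conv_encode(bits):
    """Rate-1/2 convolutional encoder: two parity bits per input bit."""
    state, out = 0, []
    for b in bits:
        state = ((state << 1) | b) & 0x7F          # 7-bit shift register
        out.append(bin(state & G0).count('1') & 1)  # parity w.r.t. G0 -> A
        out.append(bin(state & G1).count('1') & 1)  # parity w.r.t. G1 -> B
    return out

def puncture_34(coded):
    """Rate-3/4 puncturing: of each group [A0 B0 A1 B1 A2 B2], drop B1 and A2."""
    keep = [1, 1, 1, 0, 0, 1]
    return [c for i, c in enumerate(coded) if keep[i % 6]]

data = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0]
tx = puncture_34(conv_encode(data))   # 12 data bits -> 16 transmitted bits
```

The rate check is just arithmetic: 12 input bits produce 24 coded bits, and the mask keeps 4 of every 6, leaving 16 transmitted bits, i.e. 12/16 = 3/4.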
Jeong, Jong Seob; Chang, Jin Ho; Shung, K. Kirk
2009-01-01
For noninvasive treatment of prostate tissue using high intensity focused ultrasound (HIFU), this paper proposes a design of an integrated multi-functional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6 MHz array in the center row for imaging and two 4 MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulation and experimental results, we verified whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy with IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performance. The measured −6 dB and −20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and the range sidelobe level was measured to be −48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded −6 dB and −20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be −40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy. PMID:19811994
Jeong, Jong Seob; Chang, Jin Ho; Shung, K Kirk
2009-09-01
For noninvasive treatment of prostate tissue using high-intensity focused ultrasound, this paper proposes a design of an integrated multifunctional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6-MHz array in the center row for imaging and two 4-MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center-row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulation and experimental results, we verified whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy by IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performances. The measured -6 dB and -20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and a range sidelobe level was measured to be -48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded -6 dB and -20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be -40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy.
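The autocorrelation property that makes the 13-bit Barker code attractive for coded excitation is easy to check directly: the aperiodic autocorrelation peaks at 13 at zero lag while every sidelobe has magnitude at most 1.

```python
# The length-13 Barker sequence and its aperiodic autocorrelation.
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def acorr(seq):
    n = len(seq)
    return [sum(seq[i] * seq[i + lag] for i in range(n - lag))
            for lag in range(n)]

r = acorr(barker13)
print(r[0], max(abs(x) for x in r[1:]))  # → 13 1
```

That 13:1 peak-to-sidelobe ratio (about 22 dB before any receive filtering) is what lets pulse compression recover axial resolution while the long coded burst carries more energy than a single-cycle pulse.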
Boundary layer simulator improvement
NASA Technical Reports Server (NTRS)
Praharaj, S. C.; Schmitz, C.; Frost, C.; Engel, C. D.; Fuller, C. E.; Bender, R. L.; Pond, J.
1984-01-01
High chamber pressure expander cycles proposed for orbit transfer vehicles depend primarily on the heat energy transmitted from the combustion products through the thrust chamber wall. The heat transfer to the nozzle wall is affected by such variables as wall roughness, relaminarization, and the presence of particles in the flow. Motor performance losses for these nozzles with thick boundary layers are predicted inaccurately by the existing procedure coded in BLIMPJ. Modifications and innovations to the code are examined. Updated routines are listed.
EASY-II Renaissance: n, p, d, α, γ-induced Inventory Code System
NASA Astrophysics Data System (ADS)
Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.
2014-04-01
The European Activation SYstem has been re-engineered and re-written in modern programming languages so as to answer today's and tomorrow's needs in terms of activation, transmutation, depletion, decay and processing of radioactive materials. The new FISPACT-II inventory code development project has allowed us to embed many more features in terms of energy range: up to GeV; incident particles: alpha, gamma, proton, deuteron and neutron; and neutron physics: self-shielding effects, temperature dependence and covariance, so as to cover all anticipated application needs: nuclear fission and fusion, accelerator physics, isotope production, stockpile and fuel cycle stewardship, materials characterization and life, and storage cycle management. In parallel, the maturity of modern, truly general purpose libraries encompassing thousands of target isotopes such as TENDL-2012, the evolution of the ENDF-6 format and the capabilities of the latest generation of processing codes PREPRO, NJOY and CALENDF have allowed the activation code to be fed with more robust, complete and appropriate data: cross sections with covariance, probability tables in the resonance ranges, kerma, dpa, gas and radionuclide production and 24 decay types. All such data for the five most important incident particles (n, p, d, α, γ), are placed in evaluated data files up to an incident energy of 200 MeV. The resulting code system, EASY-II is designed as a functional replacement for the previous European Activation System, EASY-2010. It includes many new features and enhancements, but also benefits already from the feedback from extensive validation and verification activities performed with its predecessor.
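At the heart of any activation/inventory calculation of this kind is the solution of decay-chain equations. A minimal sketch of the simplest case, the analytic two-member Bateman solution for a parent A decaying to a daughter B (the decay constants below are arbitrary illustrative numbers; a real code handles thousands of coupled nuclides numerically):

```python
import math

def bateman(n_a0, lam_a, lam_b, t):
    """Two-member decay chain A -> B -> (removed), pure decay, no source term."""
    n_a = n_a0 * math.exp(-lam_a * t)
    n_b = (n_a0 * lam_a / (lam_b - lam_a)
           * (math.exp(-lam_a * t) - math.exp(-lam_b * t)))
    return n_a, n_b

n_a, n_b = bateman(1.0, 0.1, 0.2, 5.0)  # daughter grows in, then decays away
```

Transmutation adds production terms from the cross-section data (the TENDL-derived libraries mentioned above) to these same equations, turning the chain into a large sparse ODE system.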
Lisman, John E; Jensen, Ole
2013-03-20
Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes.
Group delay variations of GPS transmitting and receiving antennas
NASA Astrophysics Data System (ADS)
Wanninger, Lambert; Sumaya, Hael; Beer, Susanne
2017-09-01
GPS code pseudorange measurements exhibit group delay variations at the transmitting and the receiving antenna. We calibrated C1 and P2 delay variations with respect to dual-frequency carrier phase observations and obtained nadir-dependent corrections for 32 satellites of the GPS constellation in early 2015 as well as elevation-dependent corrections for 13 receiving antenna models. The combined delay variations reach up to 1.0 m (3.3 ns) in the ionosphere-free linear combination for specific pairs of satellite and receiving antennas. Applying these corrections to the code measurements improves code/carrier single-frequency precise point positioning, ambiguity fixing based on the Melbourne-Wübbena linear combination, and determination of ionospheric total electron content. It also affects fractional cycle biases and differential code biases.
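The ionosphere-free linear combination referenced above is the standard dual-frequency construction P_IF = (f1² P1 − f2² P2)/(f1² − f2²); a minimal sketch using the GPS L1/L2 carrier frequencies (example values are illustrative, not from the paper):

```python
# Ionosphere-free (IF) linear combination of dual-frequency code
# pseudoranges, using the standard GPS L1/L2 carrier frequencies.

F_L1 = 1575.42e6  # Hz, GPS L1 carrier
F_L2 = 1227.60e6  # Hz, GPS L2 carrier

def ionosphere_free(p1, p2, f1=F_L1, f2=F_L2):
    """First-order ionospheric delay cancels in this combination."""
    return (f1**2 * p1 - f2**2 * p2) / (f1**2 - f2**2)

# The combination amplifies code biases: a delay variation on the L1/L2
# observables enters the IF combination with these coefficients.
k1 = F_L1**2 / (F_L1**2 - F_L2**2)   # ≈ +2.546
k2 = -F_L2**2 / (F_L1**2 - F_L2**2)  # ≈ -1.546
```

The roughly 2.5x/1.5x amplification is why sub-meter group delay variations can reach the 1.0 m level quoted in the abstract once combined.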
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2014-04-01
The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal-hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark and presents selected results of the three steady-state exercises (1-3) defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D "ring" model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedback on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.
GCS component development cycle
NASA Astrophysics Data System (ADS)
Rodríguez, Jose A.; Macias, Rosa; Molgo, Jordi; Guerra, Dailos; Pi, Marti
2012-09-01
The GTC is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). First light was on 13 July 2007 and the telescope has been in its operation phase since then. The GTC control system (GCS) is a distributed object- and component-oriented system based on RT-CORBA, and it is responsible for the management and operation of the telescope, including its instrumentation. GCS has used the Rational Unified Process (RUP) in its development. RUP is an iterative software development process framework. After analysing (use cases) and designing (UML) any of the GCS subsystems, an initial description of its component interfaces is obtained, and from that information a component specification is written. To improve code productivity, GCS has adopted code generation to transform this component specification into the skeleton of component classes based on a software framework called the Device Component Framework. Using the GCS development tools, based on javadoc and gcc, in only one step the component is generated, compiled and deployed, ready to be tested for the first time through our GUI inspector. The main advantages of this approach are the following: it reduces the learning curve of new developers and the development error rate, allows a systematic use of design patterns in the development and software reuse, speeds up delivery of the software product while improving design consistency and design quality, and eliminates the future refactoring process otherwise required for the code.
Rural School District Dress Code Implementation: Perceptions of Stakeholders after First Year
ERIC Educational Resources Information Center
Wright, Krystal M.
2012-01-01
Schools are continuously searching for solutions to solve truancy, academic, behavioral, safety, and climate issues. One of the latest trends in education is requiring students to adhere to dress codes as a solution to these issues. Dress codes can range from slightly restrictive clothing to the requiring of a uniform. Many school district…
Error-trellis Syndrome Decoding Techniques for Convolutional Codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1984-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
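The core idea of syndrome decoding can be sketched in a toy setting (not the rate-3/4 Wyner-Ash code from the abstract): for a systematic convolutional code, the decoder re-encodes the received information bits and XORs the result with the received parity; an all-zero syndrome means no detectable channel errors. The parity generator used here is an arbitrary illustrative choice.

```python
# Toy syndrome computation for a systematic rate-1/2 convolutional code.
# Assumed parity generator (illustrative only): p[t] = u[t] ^ u[t-1] ^ u[t-2].

def parity(u):
    return [u[t] ^ (u[t-1] if t >= 1 else 0) ^ (u[t-2] if t >= 2 else 0)
            for t in range(len(u))]

def syndrome(rx_info, rx_parity):
    """Re-encode the received info bits and XOR with the received parity.
    An all-zero syndrome indicates no detectable channel errors."""
    return [p ^ r for p, r in zip(parity(rx_info), rx_parity)]

u = [1, 0, 1, 1, 0]
p = parity(u)
clean = syndrome(u, p)        # all zeros on a clean channel
p_err = p[:]
p_err[2] ^= 1                 # flip one parity bit in the channel
noisy = syndrome(u, p_err)    # nonzero exactly where the error occurred
```

The syndrome depends only on the error pattern, not on the transmitted data, which is why a syndrome trellis can be much smaller than the full code trellis used by Viterbi decoding.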
Error-trellis syndrome decoding techniques for convolutional codes
NASA Technical Reports Server (NTRS)
Reed, I. S.; Truong, T. K.
1985-01-01
An error-trellis syndrome decoding technique for convolutional codes is developed. This algorithm is then applied to the entire class of systematic convolutional codes and to the high-rate, Wyner-Ash convolutional codes. A special example of the one-error-correcting Wyner-Ash code, a rate 3/4 code, is treated. The error-trellis syndrome decoding method applied to this example shows in detail how much more efficient syndrome decoding is than Viterbi decoding if applied to the same problem. For standard Viterbi decoding, 64 states are required, whereas in the example only 7 states are needed. Also, within the 7 states required for decoding, many fewer transitions are needed between the states.
Session on High Speed Civil Transport Design Capability Using MDO and High Performance Computing
NASA Technical Reports Server (NTRS)
Rehder, Joe
2000-01-01
Since the inception of CAS in 1992, NASA Langley has been conducting research into applying multidisciplinary optimization (MDO) and high performance computing toward reducing aircraft design cycle time. The focus of this research has been the development of a series of computational frameworks and associated applications that increased in capability, complexity, and performance over time. The culmination of this effort is an automated high-fidelity analysis capability for a high speed civil transport (HSCT) vehicle installed on a network of heterogeneous computers with a computational framework built using Common Object Request Broker Architecture (CORBA) and Java. The main focus of the research in the early years was the development of the Framework for Interdisciplinary Design Optimization (FIDO) and associated HSCT applications. While the FIDO effort was eventually halted, work continued on HSCT applications of ever increasing complexity. The current application, HSCT4.0, employs high fidelity CFD and FEM analysis codes. For each analysis cycle, the vehicle geometry and computational grids are updated using new values for design variables. Processes for aeroelastic trim, loads convergence, displacement transfer, stress and buckling, and performance have been developed. In all, a total of 70 processes are integrated in the analysis framework. Many of the key processes include automatic differentiation capabilities to provide sensitivity information that can be used in optimization. A software engineering process was developed to manage this large project. Defining the interactions among 70 processes turned out to be an enormous, but essential, task. A formal requirements document was prepared that defined data flow among processes and subprocesses. A design document was then developed that translated the requirements into actual software design. 
A validation program was defined and implemented to ensure that codes integrated into the framework produced the same results as their standalone counterparts. Finally, a Commercial Off the Shelf (COTS) configuration management system was used to organize the software development. A computational environment, CJOpt, based on the Common Object Request Broker Architecture (CORBA) and the Java programming language, has been developed as a framework for multidisciplinary analysis and optimization. The environment exploits the parallelism inherent in the application and distributes the constituent disciplines on machines best suited to their needs. In CJOpt, a discipline code is "wrapped" as an object. An interface to the object identifies the functionality (services) provided by the discipline, defined in Interface Definition Language (IDL) and implemented using Java. The results of using the HSCT4.0 capability are described. A summary of lessons learned is also presented. The use of some of the processes, codes, and techniques by industry is highlighted. The application of the methodology developed in this research to other aircraft is described. Finally, we show how the experience gained is being applied to entirely new vehicles, such as the Reusable Space Transportation System. Additional information is contained in the original.
Hardware Implementation of Serially Concatenated PPM Decoder
NASA Technical Reports Server (NTRS)
Moision, Bruce; Hamkins, Jon; Barsoum, Maged; Cheng, Michael; Nakashima, Michael
2009-01-01
A prototype decoder for a serially concatenated pulse position modulation (SCPPM) code has been implemented in a field-programmable gate array (FPGA). At the time of this reporting, this is the first known hardware SCPPM decoder. The SCPPM coding scheme, conceived for free-space optical communications with both deep-space and terrestrial applications in mind, is an improvement of several dB over the conventional Reed-Solomon PPM scheme. The design of the FPGA SCPPM decoder is based on a turbo decoding algorithm that requires relatively low computational complexity while delivering error-rate performance within approximately 1 dB of channel capacity. The SCPPM encoder consists of an outer convolutional encoder, an interleaver, an accumulator, and an inner modulation encoder (more precisely, a mapping of bits to PPM symbols). Each code is describable by a trellis (a finite directed graph). The SCPPM decoder consists of an inner soft-in-soft-out (SISO) module, a de-interleaver, an outer SISO module, and an interleaver connected in a loop (see figure). Each SISO module applies the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm to compute a-posteriori bit log-likelihood ratios (LLRs) from a-priori LLRs by traversing the code trellis in the forward and backward directions. The SISO modules iteratively refine the LLRs by passing the estimates between one another, much like the working of a turbine engine. Extrinsic information (the difference between the a-posteriori and a-priori LLRs) is exchanged rather than the a-posteriori LLRs to minimize undesired feedback. All computations are performed in the logarithmic domain, wherein multiplications are translated into additions, thereby reducing complexity and sensitivity to fixed-point implementation roundoff errors.
To lower the memory required for storing channel likelihood data and the amount of data transfer between the decoder and the receiver, one can discard the majority of channel likelihoods, using only the remainder in operation of the decoder. This is accomplished in the receiver by transmitting only the subset of likelihoods that correspond to the time slots containing the largest numbers of observed photons during each PPM symbol period. The assumed number of observed photons in the remaining time slots is set to the mean of a noise slot. In low background noise, selecting a small subset in this manner results in only negligible loss. Other features of the decoder design that reduce complexity and increase speed include (1) quantization of metrics in an efficient procedure chosen to incur no more than a small performance loss and (2) the use of the max-star function, which allows a sum of exponentials to be computed by simple operations involving only an addition, a subtraction, and a table lookup. Another prominent feature of the design is a provision for access to interleaver and de-interleaver memory in a single clock cycle, eliminating the multiple-clock-cycle latency characteristic of prior interleaver and de-interleaver designs.
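The max-star function mentioned above is the standard Jacobian logarithm of log-domain turbo decoding: ln(e^a + e^b) = max(a, b) + ln(1 + e^(−|a−b|)), where the correction term is what a hardware design implements as a small table lookup. A minimal sketch:

```python
import math

# Max-star (Jacobian logarithm) used in log-domain BCJR/turbo decoding:
#   ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a - b|)
# The correction term ln(1 + e^-|a-b|) is bounded by ln(2), which is why
# hardware implementations realize it as a small table lookup.

def max_star(a, b):
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

# Agrees with the direct computation where the direct form is safe:
a, b = 2.0, 0.5
direct = math.log(math.exp(a) + math.exp(b))
# ...and stays finite where the direct form would overflow (e.g. a = b = 700).
large = max_star(700.0, 700.0)   # 700 + ln 2
```

Working in the log domain this way is what turns the multiplications of the BCJR recursions into additions, as the abstract notes.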
Mora, Erika; Franco, G
2010-01-01
The recently introduced Italian law on the protection of workers' health states that the occupational health physician (competent physician) is required to act according to the Code of Ethics of the International Commission on Occupational Health (ICOH). This paper aims at examining the articles of legislative decree 81/2008 dealing with informed consent and confidentiality compared with the corresponding points of the ICOH Ethics Code. Analysis of the relationship between articles 25 and 39 (informed consent) and 18, 20 and 39 (confidentiality) of the decree shows that there are some points of disagreement between the legal requirements and the Code of Ethics, in particular concerning prescribed health surveillance, consent based on appropriate information (points 8, 10 and 12 of the Code) and some aspects of confidentiality (points 10, 20, 21, 22 and 23 of the Code). Although the competent physician is required to act according to the law, the decisional process could lead to a violation of workers' autonomy.
Bandwidth efficient CCSDS coding standard proposals
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Perez, Lance C.; Wang, Fu-Quan
1992-01-01
The basic concatenated coding system for the space telemetry channel consists of a Reed-Solomon (RS) outer code, a symbol interleaver/deinterleaver, and a bandwidth-efficient trellis inner code. A block diagram of this configuration is shown. The system may operate with or without the outer code and interleaver. In this recommendation, the outer code remains the (255,223) RS code over GF(2^8) with an error-correcting capability of t = 16 eight-bit symbols. This code's excellent performance and the existence of fast, cost-effective decoders justify its continued use. The purpose of the interleaver/deinterleaver is to distribute burst errors out of the inner decoder over multiple codewords of the outer code. This utilizes the error-correcting capability of the outer code more efficiently and reduces the probability of an RS decoder failure. Since the space telemetry channel is not considered bursty, the required interleaving depth is primarily a function of the inner decoding method. A diagram of an interleaver with depth 4 that is compatible with the (255,223) RS code is shown. Specific interleaver requirements are discussed after the inner code recommendations.
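A depth-4 block symbol interleaver of the kind described can be sketched as follows: four RS codewords are written in as rows and read out column by column, so a burst of up to four consecutive channel symbol errors is spread across four different codewords. This is a generic illustration, not the CCSDS hardware design.

```python
# Sketch of a depth-4 block symbol interleaver for 255-symbol RS codewords:
# write DEPTH codewords as rows, transmit column by column. A burst of up
# to DEPTH consecutive channel symbol errors then hits DEPTH different
# codewords, each within the t = 16 correction capability of the RS code.

DEPTH, N = 4, 255

def interleave(codewords):
    assert len(codewords) == DEPTH and all(len(c) == N for c in codewords)
    return [codewords[row][col] for col in range(N) for row in range(DEPTH)]

def deinterleave(stream):
    cws = [[0] * N for _ in range(DEPTH)]
    for i, sym in enumerate(stream):
        cws[i % DEPTH][i // DEPTH] = sym
    return cws

# Round trip with dummy codewords:
cws = [[(row * N + col) % 256 for col in range(N)] for row in range(DEPTH)]
stream = interleave(cws)
# A 4-symbol burst at stream positions 100..103 lands in 4 distinct rows:
burst_rows = {i % DEPTH for i in range(100, 104)}
```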
7 CFR 4274.337 - Other regulatory requirements.
Code of Federal Regulations, 2012 CFR
2012-01-01
... with the seismic provisions of one of the following model building codes or the latest edition of that...) Uniform Building Code; (ii) 1993 Building Officials and Code Administrators International, Inc. (BOCA) National Building Code; or (iii) 1992 Amendments to the Southern Building Code Congress International...
7 CFR 4274.337 - Other regulatory requirements.
Code of Federal Regulations, 2013 CFR
2013-01-01
... with the seismic provisions of one of the following model building codes or the latest edition of that...) Uniform Building Code; (ii) 1993 Building Officials and Code Administrators International, Inc. (BOCA) National Building Code; or (iii) 1992 Amendments to the Southern Building Code Congress International...
38 CFR 39.63 - Architectural design standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 5000, Building Construction and Safety Code, and the 2002 edition of the National Electrical Code, NFPA... 5000, Building Construction and Safety Code. Where the adopted codes state conflicting requirements... the standards set forth in this section, all applicable local and State building codes and regulations...
7 CFR 4274.337 - Other regulatory requirements.
Code of Federal Regulations, 2014 CFR
2014-01-01
... with the seismic provisions of one of the following model building codes or the latest edition of that...) Uniform Building Code; (ii) 1993 Building Officials and Code Administrators International, Inc. (BOCA) National Building Code; or (iii) 1992 Amendments to the Southern Building Code Congress International...
38 CFR 39.63 - Architectural design standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 5000, Building Construction and Safety Code, and the 2002 edition of the National Electrical Code, NFPA... 5000, Building Construction and Safety Code. Where the adopted codes state conflicting requirements... the standards set forth in this section, all applicable local and State building codes and regulations...
38 CFR 39.63 - Architectural design standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 5000, Building Construction and Safety Code, and the 2002 edition of the National Electrical Code, NFPA... 5000, Building Construction and Safety Code. Where the adopted codes state conflicting requirements... the standards set forth in this section, all applicable local and State building codes and regulations...
Perdigão, J; Logarinho, E; Avides, M C; Sunkel, C E
1999-12-01
Replication protein A (RPA) is a highly conserved multifunctional heterotrimeric complex, involved in DNA replication, repair, recombination, and possibly transcription. Here, we report the cloning of the gene that codes for the largest subunit of the Drosophila melanogaster RPA homolog, dmRPA70. In situ hybridization showed that dmRPA70 RNA is present in developing embryos during the first 16 cycles. After this point, dmRPA70 expression is downregulated in cells that enter a G1 phase and exit the mitotic cycle, becoming restricted to brief bursts of accumulation from late G1 to S phase. This pattern of regulated expression is also observed in the developing eye imaginal disc. In addition, we have shown that the presence of cyclin E is necessary and sufficient to drive the expression of dmRPA70 in embryonic cells arrested in G1 but is not required in tissues undergoing endoreduplication. Immunolocalization showed that in early developing embryos, the dmRPA70 protein associates with chromatin from the end of mitosis until the beginning of the next prophase in a dynamic speckled pattern that is strongly suggestive of its association with replication foci.
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2012-01-01
The development of benchmark examples for quasi-static delamination propagation and cyclic delamination onset and growth prediction is presented and demonstrated for Abaqus/Standard. The example is based on a finite element model of a Double-Cantilever Beam specimen. The example is independent of the analysis software used and allows the assessment of the automated delamination propagation, onset and growth prediction capabilities in commercial finite element codes based on the virtual crack closure technique (VCCT). First, a quasi-static benchmark example was created for the specimen. Second, based on the static results, benchmark examples for cyclic delamination growth were created. Third, the load-displacement relationship from a propagation analysis and the benchmark results were compared, and good agreement could be achieved by selecting the appropriate input parameters. Fourth, starting from an initially straight front, the delamination was allowed to grow under cyclic loading. The number of cycles to delamination onset and the number of cycles during delamination growth for each growth increment were obtained from the automated analysis and compared to the benchmark examples. Again, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. The benchmarking procedure proved valuable by highlighting the issues associated with choosing the input parameters of the particular implementation. Selecting the appropriate input parameters, however, was not straightforward and often required an iterative procedure. Overall the results are encouraging, but further assessment for mixed-mode delamination is required.
NASA Technical Reports Server (NTRS)
Vanfossen, G. J.
1983-01-01
A system which would allow a substantially increased output from a turboshaft engine for brief periods in emergency situations, with little or no loss of turbine stress-rupture life, is proposed and studied analytically. The increased engine output is obtained by overtemperaturing the turbine; however, the temperature of the compressor bleed air used for hot-section cooling is lowered by injecting and evaporating water. This decrease in cooling air temperature can offset the effect of increased gas temperature and increased shaft speed and thus keep turbine blade stress-rupture life constant. The analysis utilized the NASA-Navy-Engine-Program (NNEP) computer code to model the turboshaft engine in both design and off-design modes. This report is concerned with the effect of the proposed method of power augmentation on the engine cycle and turbine components. A simple-cycle turboshaft engine with a 16:1 pressure ratio and a 1533 K (2760 R) turbine inlet temperature operating at sea-level static conditions was studied to determine the possible power increase and the effect on turbine stress-rupture life that could be expected using the proposed emergency cooling scheme. The analysis showed that a 54 percent increase in output power can be achieved with no loss in gas generator turbine stress-rupture life. A 231 K (415 F) rise in turbine inlet temperature is required for this level of augmentation. The required water flow rate was found to be 0.0109 kg of water per kg of engine air flow.
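The evaporative cooling effect can be checked to order of magnitude with a simple energy balance, ΔT ≈ w·h_fg/cp, where w is the water-to-air mass ratio. The property values below are generic textbook assumptions, not numbers from the report.

```python
# Rough energy-balance sketch of evaporative cooling of engine air.
# Property values are generic assumptions, not from the report:
H_FG = 2.26e6    # J/kg, latent heat of vaporization of water (assumed)
CP_AIR = 1005.0  # J/(kg K), specific heat of air (assumed)

def evaporative_temp_drop(water_to_air_ratio):
    """Temperature drop from fully evaporating water into an air stream."""
    return water_to_air_ratio * H_FG / CP_AIR

# Using the report's 0.0109 kg water per kg of *engine* air flow. Averaged
# over the whole engine flow this is ~25 K; the drop in the bleed cooling
# stream itself is larger, since only a fraction of the flow is cooled.
dT_engine_avg = evaporative_temp_drop(0.0109)
```

This back-of-envelope figure shows why a modest water flow can meaningfully offset the 231 K turbine inlet temperature rise when concentrated in the small bleed stream.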
49 CFR 602.15 - Grant requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... either State or locally adopted building codes or standards, the higher of the competing minimums would... title 49, United States Code, as well as cross-cutting requirements, including but not limited to those...
49 CFR 602.15 - Grant requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... either State or locally adopted building codes or standards, the higher of the competing minimums would... title 49, United States Code, as well as cross-cutting requirements, including but not limited to those...
NASA Astrophysics Data System (ADS)
Yuan, F.; Wang, G.; Painter, S. L.; Tang, G.; Xu, X.; Kumar, J.; Bisht, G.; Hammond, G. E.; Mills, R. T.; Thornton, P. E.; Wullschleger, S. D.
2017-12-01
In Arctic tundra ecosystems, soil freezing and thawing is one of the dominant physical processes through which biogeochemical (e.g., carbon and nitrogen) cycles are tightly coupled. Besides hydraulic transport, freezing and thawing can cause pore water movement and aqueous species gradients, which are additional mechanisms for soil nitrogen (N) reactive transport in tundra ecosystems. In this study, we have fully coupled the aboveground processes of the Land Model (ALM) of an in-development Earth system model (the Accelerated Climate Modeling for Energy project, ACME) with a state-of-the-art massively parallel 3-D subsurface thermal-hydrology and reactive transport code, PFLOTRAN. The resulting coupled ALM-PFLOTRAN model is a Land Surface Model (LSM) capable of resolving 3-D soil thermal-hydrological-biogeochemical cycles. This version of PFLOTRAN incorporates the CLM-CN Converging Trophic Cascade (CTC) model and a simple but robust soil N cycle, including NH4+ absorption-desorption and gas dissolution-degassing processes. It also implements thermal-hydrology mode codes with three newly modified freezing-thawing algorithms that greatly improve computing performance with respect to the numerical stiffness at the freezing point. Here we tested the model in fully coupled 3-D mode at the Next-Generation Ecosystem Experiments-Arctic (NGEE-Arctic) intensive field study site at the Barrow Environmental Observatory (BEO), AK. The simulations show that (1) synchronous coupling of soil thermal-hydrology and biogeochemistry in 3-D can greatly impact ecosystem dynamics across the polygonal tundra landscape, and (2) freezing-thawing cycles add complexity to the system, resulting in greater vertical and lateral mobility of soil N, depending upon local micro-topography. As a preliminary experiment, the model was also implemented for the Pan-Arctic region in 1-D column mode (i.e., no lateral connections), showing significant differences compared to stand-alone ALM.
The developed ALM-PFLOTRAN coupling code, embedded within the ESM, will be used for Pan-Arctic regional evaluation of climate-change-induced ecosystem responses and their feedbacks to the climate system at various scales.
Unsteady Flow Interactions Between the LH2 Feed Line and SSME LPFP Inducer
NASA Technical Reports Server (NTRS)
Dorney, Dan; Griffin, Lisa; Marcu, Bogdan; Williams, Morgan
2006-01-01
An extensive computational effort has been performed in order to investigate the nature of unsteady flow in the fuel line supplying the three Space Shuttle Main Engines during flight. Evidence of high cycle fatigue (HCF) in the flow liner one diameter upstream of the Low Pressure Fuel Pump inducer has been observed in several locations. The analysis presented in this report has the objective of determining the driving mechanisms inducing HCF and the associated fluid flow phenomena. The simulations have been performed using two different computational codes, the NASA MSFC PHANTOM code and the Pratt and Whitney Rocketdyne ENIGMA code. The fuel flow through the flow liner and the pump inducer have been modeled in full three-dimensional geometry, and the results of the computations compared with test data taken during hot fire tests at NASA Stennis Space Center, and cold-flow water flow test data obtained at NASA MSFC. The numerical results indicate that unsteady pressure fluctuations at specific frequencies develop in the duct at the flow-liner location. Detailed frequency analysis of the flow disturbances is presented. The unsteadiness is believed to be an important source for fluctuating pressures generating high cycle fatigue.
A VHDL Interface for Altera Design Files
1990-01-01
this requirement dictated that all prototype products developed during this research would have to mirror standard VHDL code. In fact, the final... product would have to meet the syntactic and semantic requirements of standard VHDL. The coding style used to create the transformation program was the...
Code of Federal Regulations, 2014 CFR
2014-07-01
... regional building codes, the following rules of precedence apply: (1) Between differing levels of fire... cannot be reconciled with a requirement of this part, the local or regional code applies. (b) If any of... require documentation of the mandatory nature of the conflicting code and the inability to reconcile that...
Code of Federal Regulations, 2012 CFR
2012-07-01
... regional building codes, the following rules of precedence apply: (1) Between differing levels of fire... cannot be reconciled with a requirement of this part, the local or regional code applies. (b) If any of... require documentation of the mandatory nature of the conflicting code and the inability to reconcile that...
Code of Federal Regulations, 2011 CFR
2011-07-01
... regional building codes, the following rules of precedence apply: (1) Between differing levels of fire... cannot be reconciled with a requirement of this part, the local or regional code applies. (b) If any of... require documentation of the mandatory nature of the conflicting code and the inability to reconcile that...
Code of Federal Regulations, 2013 CFR
2013-07-01
... regional building codes, the following rules of precedence apply: (1) Between differing levels of fire... cannot be reconciled with a requirement of this part, the local or regional code applies. (b) If any of... require documentation of the mandatory nature of the conflicting code and the inability to reconcile that...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arndt, S.A.
1997-07-01
The real-time reactor simulation field is currently at a crossroads in terms of the capability to perform real-time analysis using the most sophisticated computer codes. Current-generation safety analysis codes are being modified to replace the simplified codes that were specifically designed to meet the competing requirements of real-time applications. The next generation of thermal-hydraulic codes will need to include in their specifications the specific requirement for use in a real-time environment. Use of the codes in real-time applications imposes much stricter requirements on robustness, reliability and repeatability than do design and analysis applications. In addition, the need for code use by a variety of users is a critical issue for real-time users, trainers and emergency planners who currently use real-time simulation, and for PRA practitioners who will increasingly use real-time simulation to evaluate PRA success criteria in near real time, validating PRA results for specific configurations and plant system unavailabilities.
Performance (Off-Design) Cycle Analysis for a Turbofan Engine With Interstage Turbine Burner
NASA Technical Reports Server (NTRS)
Liew, K. H.; Urip, E.; Yang, S. L.; Mattingly, J. D.; Marek, C. J.
2005-01-01
This report presents the performance of a steady-state, dual-spool, separate-exhaust turbofan engine with an interstage turbine burner (ITB) serving as a secondary combustor. The ITB, which is located in the transition duct between the high- and the low-pressure turbines, is a relatively new concept for increasing specific thrust and lowering pollutant emissions in modern jet-engine propulsion. A detailed off-design performance analysis of ITB engines is written in Microsoft® Excel (Redmond, Washington) macro code with Visual Basic for Applications to calculate engine performance over the entire operating envelope. Several design-point engine cases are pre-selected for off-design analysis using a parametric cycle-analysis code developed previously in Microsoft® Excel. The off-design code calculates engine performance (i.e., thrust and thrust-specific fuel consumption) at various flight conditions and throttle settings.
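The two headline figures such an off-design code reports can be sketched in a few lines (this is a generic illustration, not the report's Excel/VBA implementation; the numbers are made up):

```python
# Minimal sketch of the two performance figures an off-design cycle
# analysis reports: specific thrust and thrust-specific fuel consumption.
# Illustrative values only; not from the report.

def specific_thrust(thrust_N, mdot_air):
    """Thrust per unit air mass flow, in N/(kg/s)."""
    return thrust_N / mdot_air

def tsfc(mdot_fuel, thrust_N):
    """Fuel flow per unit thrust, in kg/(N*s); lower is better."""
    return mdot_fuel / thrust_N

F, mdot_a, mdot_f = 80e3, 100.0, 1.6   # N, kg/s, kg/s (hypothetical)
st = specific_thrust(F, mdot_a)
s = tsfc(mdot_f, F)
```

An ITB raises specific thrust at a given turbine inlet temperature; the off-design sweep then shows how TSFC trades off against throttle setting and flight condition.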
Long-time efficacy of the surface code in the presence of a super-Ohmic environment
NASA Astrophysics Data System (ADS)
López-Delgado, D. A.; Novais, E.; Mucciolo, E. R.; Caldeira, A. O.
2017-06-01
We study the long-time evolution of a quantum memory coupled to a bosonic environment on which quantum error correction (QEC) is performed using the surface code. The memory's evolution encompasses N QEC cycles, each of them yielding a non-error syndrome. This assumption makes our analysis independent of the recovery process. We map the expression for the time evolution of the memory onto the partition function of an equivalent statistical-mechanical spin system. In the super-Ohmic dissipation case, the long-time evolution of the memory has the same behavior as the time evolution for just one QEC cycle. For this case we find analytical expressions for the critical parameters of the order-disorder phase transition of an equivalent spin system. These critical parameters determine the threshold value for the system-environment coupling below which it is possible to preserve the memory's state.
NASA Technical Reports Server (NTRS)
Semenov, Boris V.; Acton, Charles H., Jr.; Bachman, Nathaniel J.; Elson, Lee S.; Wright, Edward D.
2005-01-01
The SPICE system of navigation and ancillary data possesses a number of traits that make its use in modern space missions of all types highly cost efficient. The core of the system is a software library providing API interfaces for storing and retrieving such data as trajectories, orientations, time conversions, and instrument geometry parameters. Applications used at any stage of a mission life cycle can call SPICE APIs to access this data and compute geometric quantities required for observation planning, engineering assessment and science data analysis. SPICE is implemented in three different languages, supported on 20+ computer environments, and distributed with complete source code and documentation. It includes capabilities that are extensively tested by everyday use in many active projects and are applicable to all types of space missions - flyby, orbiters, observatories, landers and rovers. While a customer's initial SPICE adaptation for the first mission or experiment requires a modest effort, this initial effort pays off because adaptation for subsequent missions/experiments is just a small fraction of the initial investment, with the majority of tools based on SPICE requiring no or very minor changes.
Continuous Codes and Standards Improvement (CCSI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivkin, Carl H; Burgess, Robert M; Buttner, William J
2015-10-21
As of 2014, the majority of the codes and standards required to initially deploy hydrogen technologies infrastructure in the United States have been promulgated. These codes and standards will be field tested through their application to actual hydrogen technologies projects. Continuous codes and standards improvement (CCSI) is a process of identifying code issues that arise during project deployment and then developing code solutions to these issues. These solutions would typically be proposed amendments to codes and standards. The process is continuous because, as technology and the state of safety knowledge develop, there will be a need to monitor the application of codes and standards and improve them based on information gathered during their application. This paper will discuss code issues that have surfaced through hydrogen technologies infrastructure project deployment and potential code changes that would address these issues. The issues that this paper will address include (1) setback distances for bulk hydrogen storage, (2) code-mandated hazard analyses, (3) sensor placement and communication, (4) the use of approved equipment, and (5) system monitoring and maintenance requirements.
The adaption and use of research codes for performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liebetrau, A.M.
1987-05-01
Models of real-world phenomena are developed for many reasons. The models are usually, if not always, implemented in the form of a computer code. The characteristics of a code are determined largely by its intended use. Realizations or implementations of detailed mathematical models of complex physical and/or chemical processes are often referred to as research or scientific (RS) codes. Research codes typically require large amounts of computing time. One example of an RS code is a finite-element code for solving complex systems of differential equations that describe mass transfer through some geologic medium. Considerable computing time is required because computations are done at many points in time and/or space. Codes used to evaluate the overall performance of real-world physical systems are called performance assessment (PA) codes. Performance assessment codes are used to conduct simulated experiments involving systems that cannot be directly observed. Thus, PA codes usually involve repeated simulations of system performance in situations that preclude the use of conventional experimental and statistical methods. 3 figs.
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
To meet the needs of high-speed optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of a code constructed by this method has no cycles of length 4, which ensures that the resulting code has good distance properties. Simulation results show that, at a bit error rate (BER) of 10⁻⁶ under the same simulation conditions, the net coding gain (NCG) of the proposed QC-LDPC(3780,3540) code, with a code rate of 93.7%, is improved by 2.18 dB and 1.6 dB compared with the RS(255,239) code in ITU-T G.975 and the LDPC(32640,30592) code in ITU-T G.975.1, respectively. In addition, the NCG of the proposed QC-LDPC(3780,3540) code is 0.2 dB and 0.4 dB higher than those of the SG-QC-LDPC(3780,3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780,3540) code based on two arbitrary sets of a finite field, respectively. Thus, the proposed QC-LDPC(3780,3540) code can be well applied in optical communication systems.
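The general shape of such constructions can be sketched briefly: an exponent matrix built from powers of an element of the multiplicative group of GF(p), with each entry expanded into a circulant permutation matrix. The parameters below are toy values chosen for illustration; this sketch does not reproduce the paper's specific construction or its girth and NCG guarantees.

```python
# Sketch of a quasi-cyclic LDPC construction in the spirit of finite-field
# methods: an exponent matrix E[i][j] over the multiplicative group of GF(p)
# (p prime, so arithmetic is plain mod-p), each entry expanded into a z-by-z
# circulant permutation matrix (CPM).  Toy parameters; the girth-6 property
# claimed by the paper's construction is NOT guaranteed by this sketch.

def exponent_matrix(rows, cols, p, alpha):
    # E[i][j] = alpha^(i*j) mod p, alpha a primitive element of GF(p)
    return [[pow(alpha, i * j, p) for j in range(cols)] for i in range(rows)]

def expand_cpm(shift, z):
    # z x z identity matrix cyclically right-shifted by `shift`
    return [[1 if (c - r) % z == shift % z else 0 for c in range(z)]
            for r in range(z)]

def expand(E, z):
    H = []
    for row in E:
        blocks = [expand_cpm(s, z) for s in row]
        for r in range(z):
            H.append([b[r][c] for b in blocks for c in range(z)])
    return H

E = exponent_matrix(3, 6, p=7, alpha=3)   # 3 is a primitive root mod 7
H = expand(E, z=7)
print(len(H), len(H[0]))                  # 21 x 42 sparse parity-check matrix
```

Each block row contributes exactly one 1 per column of each CPM, so every row of H has weight 6 and every column weight 3, the regular sparsity pattern a QC-LDPC decoder exploits.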
NASA Technical Reports Server (NTRS)
Lee, P. J.
1984-01-01
For rate 1/N convolutional codes, a recursive algorithm for finding the transfer function bound on bit error rate (BER) at the output of a Viterbi decoder is described. This technique is very fast and requires very little storage, since all unnecessary operations are eliminated. Using this technique, we find and plot bounds on the BER performance of known codes of rate 1/2 with K ≤ 18 and rate 1/3 with K ≤ 14. When more than one reported code with the same parameters is known, we select the code that minimizes the required signal-to-noise ratio for a desired bit error rate of 10⁻⁶. This criterion of code goodness had previously been found to be more useful than the maximum-free-distance criterion and was used in the code search procedures for very short constraint length codes. This very efficient technique can also be used for searches of longer constraint length codes.
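The form of the bound being computed can be shown with a truncated union-bound sketch for the textbook rate-1/2, K=3 (7,5) convolutional code, whose bit-weight spectrum B_d = (d-4)·2^(d-5) for d ≥ d_free = 5 is standard. This illustrates the quantity Pb ≤ Σ B_d Q(√(2dR·Eb/N0)), not the paper's recursive evaluation of the full transfer function.

```python
import math

# Truncated union (transfer-function-style) bound on Viterbi-decoder BER for
# the classic rate-1/2, K=3 (7,5) convolutional code over the AWGN channel.
# The bit-weight spectrum B_d = (d-4)*2^(d-5), d >= d_free = 5, is standard
# for this code.  Pb <= sum_d B_d * Q(sqrt(2*d*R*Eb/N0)), truncated at d = 25.

def qfunc(x):
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ber_bound(ebno_db, rate=0.5, dfree=5, terms=21):
    ebno = 10.0 ** (ebno_db / 10.0)
    total = 0.0
    for d in range(dfree, dfree + terms):
        b_d = (d - 4) * 2 ** (d - 5)
        total += b_d * qfunc(math.sqrt(2.0 * d * rate * ebno))
    return total

for db in (4.0, 6.0, 8.0):
    print(db, ber_bound(db))
```

The geometric decay of Q(·) against the geometric growth of B_d makes the truncated sum converge quickly at usable Eb/N0, which is why such bounds are cheap enough to drive code searches.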
Nuclear fuel management optimization using genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1995-07-01
The code-independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning-of-cycle k_eff for a pressurized water reactor core loading with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k_eff after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
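The penalty-function approach can be sketched with a toy GA: maximize an invented surrogate objective (a stand-in for beginning-of-cycle k_eff) minus a penalty when a surrogate "peaking" measure exceeds a limit. Everything here, the fitness function included, is hypothetical; a real system such as CIGARO would call a reactor physics code at the evaluation step.

```python
import random

# Toy genetic-algorithm sketch in the spirit of loading-pattern optimization:
# maximize a hypothetical objective (stand-in for beginning-of-cycle k_eff)
# with a penalty when a "peaking" surrogate exceeds a limit.  The fitness
# function is invented for illustration only.

random.seed(1)
GENES, POP, GENERATIONS, LIMIT = 8, 30, 60, 0.6

def fitness(x):
    keff = sum(x) / len(x)                   # surrogate objective
    peak = max(x)                            # surrogate power peaking
    return keff - 10.0 * max(0.0, peak - LIMIT)

def offspring(a, b):
    cut = random.randrange(1, GENES)         # one-point crossover
    child = a[:cut] + b[cut:]
    child[random.randrange(GENES)] = random.random()  # one-point mutation
    return child

pop = [[random.random() for _ in range(GENES)] for _ in range(POP)]
f0 = max(fitness(p) for p in pop)            # best fitness before evolution
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                  # elitist selection
    pop = elite + [offspring(random.choice(elite), random.choice(elite))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print(round(fitness(best), 3), round(max(best), 3))
```

Because the elite half of the population survives unchanged each generation, the best fitness is non-decreasing, the property that makes such penalty-driven GAs robust loading-pattern searchers.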
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-07-03
The 2012 International Energy Conservation Code (IECC) yields positive benefits for Michigan homeowners. Moving to the 2012 IECC from the Michigan Uniform Energy Code is cost-effective over a 30-year life cycle. On average, Michigan homeowners will save $10,081 with the 2012 IECC. Each year, the reduction to energy bills will significantly exceed increased mortgage costs. After accounting for up-front costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 1 year for the 2012 IECC. Average annual energy savings are $604 for the 2012 IECC.
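The year-one cash-flow arithmetic behind the "net positive in 1 year" claim can be sketched directly. The $604 annual energy savings figure comes from the report above; the incremental construction cost, down-payment fraction, and mortgage terms below are hypothetical placeholders, not the report's numbers.

```python
# Simple first-year cash-flow sketch for an energy-code upgrade financed in a
# mortgage.  annual_savings is from the report; added_cost, down_frac, rate,
# and years are hypothetical placeholders.

def first_year_net(annual_savings, added_cost, down_frac, rate, years):
    down = added_cost * down_frac                  # cash paid up front
    financed = added_cost - down
    r = rate / 12.0                                # standard mortgage payment math
    n = years * 12
    monthly = financed * r / (1.0 - (1.0 + r) ** -n)
    return annual_savings - down - 12.0 * monthly  # year-one net cash position

net = first_year_net(annual_savings=604.0, added_cost=2000.0,
                     down_frac=0.10, rate=0.05, years=30)
print(round(net, 2))
```

With these assumed terms the annual savings exceed the down payment plus the first year of added mortgage payments, the same shape of comparison the report makes.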
NASA Technical Reports Server (NTRS)
Bever, G. A.
1981-01-01
The flight test data requirements at the NASA Dryden Flight Research Center increased in complexity, and more advanced instrumentation became necessary to accomplish mission goals. This paper describes the way in which an airborne computer was used to perform real-time calculations on critical flight test parameters during a flight test on a winglet-equipped KC-135A aircraft. With the computer, an airborne flight test engineer can select any sensor for airborne display in several formats, including engineering units. The computer can not only calculate values derived from the sensor outputs but also interact with the data acquisition system: it can change the data cycle format and data rate, and even insert the derived values into the pulse code modulation (PCM) bit stream for recording.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stimpson, Shane G; Powers, Jeffrey J; Clarno, Kevin T
The Consortium for Advanced Simulation of Light Water Reactors (CASL) aims to provide high-fidelity, multiphysics simulations of light water reactors (LWRs) by coupling a variety of codes within the Virtual Environment for Reactor Analysis (VERA). One of the primary goals of CASL is to predict local cladding failure through pellet-clad interaction (PCI). This capability is currently being pursued through several different approaches, such as with Tiamat, which is a simulation tool within VERA that more tightly couples the MPACT neutron transport solver, the CTF thermal hydraulics solver, and the MOOSE-based Bison-CASL fuel performance code. However, the process in this paper focuses on running fuel performance calculations with Bison-CASL to predict PCI using the multicycle output data from coupled neutron transport/thermal hydraulics simulations. In recent work within CASL, Watts Bar Unit 1 has been simulated over 12 cycles using the VERA core simulator capability based on MPACT and CTF. Using the output from these simulations, Bison-CASL results can be obtained without rerunning all 12 cycles, while providing some insight into PCI indicators. Multicycle Bison-CASL results are presented and compared against results from the FRAPCON fuel performance code. There are several quantities of interest in considering PCI and subsequent fuel rod failures, such as the clad hoop stress and maximum centerline fuel temperature, particularly as a function of time. Bison-CASL performs single-rod simulations using representative power and temperature distributions, providing high-resolution results for these and a number of other quantities. This will assist in identifying fuel rods as potential failure locations for use in further analyses.
Requirements for migration of NSSD code systems from LTSS to NLTSS
NASA Technical Reports Server (NTRS)
Pratt, M.
1984-01-01
The purpose of this document is to address the requirements necessary for a successful conversion of the Nuclear Design (ND) application code systems to the NLTSS environment. The ND application code system community can be characterized as large-scale scientific computation carried out on supercomputers. NLTSS is a distributed operating system being developed at LLNL to replace the LTSS system currently in use. The implications of the change are examined, including a description of the computational environment and user community in ND. The discussion then turns to requirements, first in general terms and then specifically, including a proposal for managing the transition.
PCG: A prototype incremental compilation facility for the SAGA environment, appendix F
NASA Technical Reports Server (NTRS)
Kimball, Joseph John
1985-01-01
A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.
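The core idea, regenerating code only for routines whose syntax tree actually changed, can be modeled in a few lines. This is a Python stand-in for illustration only; the real pcg facility works directly on the SAGA editor's Pascal syntax trees, and the hash-keyed cache below is an assumed mechanism, not a description of pcg's internals.

```python
import hashlib

# Toy model of incremental code generation: object code is regenerated only
# for routines whose (serialized) syntax tree changed since the last compile.
# Hypothetical routine names and a hash-keyed cache are used for illustration.

cache = {}  # routine name -> (tree hash, generated "object code")

def codegen(name, tree_text):
    return f"<code for {name}: {len(tree_text)} chars of tree>"

def compile_unit(routines):
    """Recompile only new or modified routines; return their names."""
    recompiled = []
    for name, tree_text in routines.items():
        h = hashlib.sha256(tree_text.encode()).hexdigest()
        if cache.get(name, (None, None))[0] != h:   # changed or never compiled
            cache[name] = (h, codegen(name, tree_text))
            recompiled.append(name)
    return recompiled

unit = {"ReadInput": "tree-v1", "Solve": "tree-v1", "Report": "tree-v1"}
print(compile_unit(unit))   # first pass: every routine compiles
unit["Solve"] = "tree-v2"   # the editor modifies one routine
print(compile_unit(unit))   # only the modified routine recompiles
```

An editor-integrated facility can do even better than hashing, since it knows exactly which subtrees it touched, which is precisely the advantage the pcg approach exploits.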
Recent Upgrades to the NASA Ames Mars General Circulation Model: Applications to Mars' Water Cycle
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, M. A.; Haberle, R. M.; Montmessin, F.; Wilson, R. J.; Schaeffer, J.
2008-09-01
We report on recent improvements to the NASA Ames Mars general circulation model (GCM), a robust 3D climate-modeling tool that is state-of-the-art in terms of its physics parameterizations and subgrid-scale processes, and which can be applied to investigate physical and dynamical processes of the present (and past) Mars climate system. The most recent version (gcm2.1, v.24) of the Ames Mars GCM utilizes a more generalized radiation code (based on a two-stream approximation with correlated k's); an updated transport scheme (van Leer formulation); a cloud microphysics scheme that assumes a log-normal particle size distribution whose first two moments are treated as atmospheric tracers, and which includes the nucleation, growth and sedimentation of ice crystals. Atmospheric aerosols (e.g., dust and water-ice) can either be radiatively active or inactive. We apply this version of the Ames GCM to investigate key aspects of the present water cycle on Mars. Atmospheric dust is partially interactive in our simulations; namely, the radiation code "sees" a prescribed distribution that follows the MGS thermal emission spectrometer (TES) year-one measurements with a self-consistent vertical depth scale that varies with season. The cloud microphysics code interacts with a transported dust tracer column whose surface source is adjusted to maintain the TES distribution. The model is run from an initially dry state with a better representation of the north residual cap (NRC) which accounts for both surface-ice and bare-soil components. A seasonally repeatable water cycle is obtained within five Mars years. Our sub-grid scale representation of the NRC provides for a more realistic flux of moisture to the atmosphere and a much drier water cycle consistent with recent spacecraft observations (e.g., Mars Express PFS, corrected MGS/TES) compared to models that assume a spatially uniform and homogeneous north residual polar cap.
Wen, Dong-Yue; Lin, Peng; Pang, Yu-Yan; Chen, Gang; He, Yun; Dang, Yi-Wu; Yang, Hong
2018-05-05
BACKGROUND Long non-coding RNAs (lncRNAs) have a role in physiological and pathological processes, including cancer. The aim of this study was to investigate the expression of the long intergenic non-protein coding RNA 665 (LINC00665) gene and the cell cycle in hepatocellular carcinoma (HCC) using database analysis including The Cancer Genome Atlas (TCGA), the Gene Expression Omnibus (GEO), and quantitative real-time polymerase chain reaction (qPCR). MATERIAL AND METHODS Expression levels of LINC00665 were compared between human tissue samples of HCC and adjacent normal liver, clinicopathological correlations were made using TCGA and the GEO, and qPCR was performed to validate the findings. Other public databases were searched for other genes associated with LINC00665 expression, including The Atlas of Noncoding RNAs in Cancer (TANRIC), the Multi Experiment Matrix (MEM), Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG) and protein-protein interaction (PPI) networks. RESULTS Overexpression of LINC00665 in patients with HCC was significantly associated with gender, tumor grade, stage, and tumor cell type. Overexpression of LINC00665 in patients with HCC was significantly associated with overall survival (OS) (HR=1.477; 95% CI: 1.046-2.086). Bioinformatics analysis identified 469 related genes and further analysis supported a hypothesis that LINC00665 regulates pathways in the cell cycle to facilitate the development and progression of HCC through ten identified core genes: CDK1, BUB1B, BUB1, PLK1, CCNB2, CCNB1, CDC20, ESPL1, MAD2L1, and CCNA2. CONCLUSIONS Overexpression of the lncRNA, LINC00665 may be involved in the regulation of cell cycle pathways in HCC through ten identified hub genes.
Residential photovoltaic module and array requirements study, appendices
NASA Technical Reports Server (NTRS)
Nearhoof, S. L.; Oster, J. R.
1979-01-01
Regional building code variations, federal and city codes, and the national electric code are reviewed for their possible effects on the design of photovoltaic modules. Problems that photovoltaic arrays may impose on the insurability of residences are also discussed. Mounting configurations are developed for the modules, and grounding, wiring, terminal, and voltage requirements are established. Installation and materials costs are presented along with performance criteria.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-13
...-adviser are subject to the provisions of Rule 204A-1 under the Advisers Act relating to codes of ethics. This Rule requires investment advisers to adopt a code of ethics that reflects the fiduciary nature of... specifically requires the adoption of a code of ethics by an investment advisor to include, at a minimum: (i...
A Modified Through-Flow Wave Rotor Cycle with Combustor Bypass Ducts
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Nalim, M. Razi
1998-01-01
A wave rotor cycle is described which avoids the inherent problem of combustor exhaust gas recirculation (EGR) found in four-port, through-flow wave rotor cycles currently under consideration for topping gas turbine engines. The recirculated hot gas is eliminated by the judicious placement of a bypass duct which transfers gas from one end of the rotor to the other. The resulting cycle, when analyzed numerically, yields an absolute mean rotor temperature 18% below the already impressive value of the conventional four-port cycle (approximately the turbine inlet temperature). The absolute temperature of the gas leading to the combustor is also reduced from the conventional four-port design by 22%. The overall design point pressure ratio of this new bypass cycle is approximately the same as the conventional four-port cycle. This paper will describe the EGR problem and the bypass cycle solution including relevant wave diagrams. Performance estimates of design and off-design operation of a specific wave rotor will be presented. The results were obtained using a one-dimensional numerical simulation and design code.
Subsonic Aircraft With Regression and Neural-Network Approximators Designed
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Hopkins, Dale A.
2004-01-01
At the NASA Glenn Research Center, NASA Langley Research Center's Flight Optimization System (FLOPS) and the design optimization testbed COMETBOARDS with regression and neural-network-analysis approximators have been coupled to obtain a preliminary aircraft design methodology. For a subsonic aircraft, the optimal design, that is the airframe-engine combination, is obtained by the simulation. The aircraft is powered by two high-bypass-ratio engines with a nominal thrust of about 35,000 lbf. It is to carry 150 passengers at a cruise speed of Mach 0.8 over a range of 3000 n mi and to operate on a 6000-ft runway. The aircraft design utilized a neural network and a regression-approximations-based analysis tool, along with a multioptimizer cascade algorithm that uses sequential linear programming, sequential quadratic programming, the method of feasible directions, and then sequential quadratic programming again. Optimal aircraft weight versus the number of design iterations is shown. The central processing unit (CPU) time to solution is given. It is shown that the regression-method-based analyzer exhibited a smoother convergence pattern than the FLOPS code. The optimum weight obtained by the approximation technique and the FLOPS code differed by 1.3 percent. Prediction by the approximation technique exhibited no error for the aircraft wing area and turbine entry temperature, whereas it was within 2 percent for most other parameters. Cascade strategy was required by FLOPS as well as the approximators. The regression method had a tendency to hug the data points, whereas the neural network exhibited a propensity to follow a mean path. The performance of the neural network and regression methods was considered adequate. It was at about the same level for small, standard, and large models with redundancy ratios (defined as the number of input-output pairs to the number of unknown coefficients) of 14, 28, and 57, respectively. 
On an SGI Octane workstation (Silicon Graphics, Inc., Mountain View, CA), the regression training required a fraction of a CPU second, whereas neural network training took between 1 and 9 min. For a single analysis cycle, the 3-sec CPU time required by the FLOPS code was reduced to milliseconds by the approximators. For design calculations, the time with the FLOPS code was 34 min; it was reduced to 2 sec with the regression method and to 4 min with the neural network technique. The performance of the regression and neural network methods was found to be satisfactory for the analysis and design optimization of the subsonic aircraft.
Practices in Code Discoverability: Astrophysics Source Code Library
NASA Astrophysics Data System (ADS)
Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.
2012-09-01
Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now holds over 340 codes and continues to grow; in 2011 it has added an average of 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.
Depth assisted compression of full parallax light fields
NASA Astrophysics Data System (ADS)
Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.
2015-03-01
Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling, followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only improves rate-distortion performance but also better preserves the structure of the perceived light fields.
ERIC Educational Resources Information Center
Morris, Suzanne E.
2010-01-01
This paper provides a review of institutional authorship policies as required by the "Australian Code for the Responsible Conduct of Research" (the "Code") (National Health and Medical Research Council (NHMRC), the Australian Research Council (ARC) & Universities Australia (UA) 2007), and assesses them for Code compliance.…
Encrypted holographic data storage based on orthogonal-phase-code multiplexing.
Heanue, J F; Bashaw, M C; Hesselink, L
1995-09-10
We describe an encrypted holographic data-storage system that combines orthogonal-phase-code multiplexing with a random-phase key. The system offers the security advantages of random-phase coding but retains the low cross-talk performance and the minimum code storage requirements typical in an orthogonal-phase-code-multiplexing system.
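The two ingredients can be sketched with binary (0/π) phases represented as ±1: mutually orthogonal codes taken as rows of a Walsh-Hadamard matrix, and a fixed random phase key multiplied onto every code. This is a simplified illustration of the principle, not the paper's holographic implementation.

```python
import random

# Sketch of orthogonal-phase-code multiplexing with a random-phase key,
# using binary phases 0/pi represented as +1/-1.  Orthogonality between
# distinct keyed codes is preserved because the same key multiplies both,
# so key terms square away in the inner product.

def hadamard(n):                      # Sylvester construction; n a power of two
    H = [[1]]
    while len(H) < n:
        H = [r + r for r in H] + [r + [-x for x in r] for r in H]
    return H

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

random.seed(7)
N = 8
codes = hadamard(N)                                  # orthogonal phase codes
key = [random.choice([1, -1]) for _ in range(N)]     # random binary-phase key
keyed = [[c * k for c, k in zip(row, key)] for row in codes]

print([dot(keyed[0], keyed[j]) for j in range(N)])   # [8, 0, 0, 0, 0, 0, 0, 0]
```

Without knowledge of the key, an individual keyed code looks like a random phase pattern, which is the security property; cross-talk stays minimal because pairwise inner products remain exactly zero.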
The Nuremberg Code subverts human health and safety by requiring animal modeling.
Greek, Ray; Pippus, Annalea; Hansen, Lawrence A
2012-07-08
Background The requirement that animals be used in research and testing in order to protect humans was formalized in the Nuremberg Code and subsequent national and international laws, codes, and declarations. Discussion We review the history of these requirements and contrast what was known via science about animal models then with what is known now. We further analyze the predictive value of animal models when used as test subjects for human response to drugs and disease. We explore the use of animals for models in toxicity testing as an example of the problem with using animal models. Summary We conclude that the requirements for animal testing found in the Nuremberg Code were based on scientifically outdated principles, compromised by people with a vested interest in animal experimentation, serve no useful function, increase the cost of drug development, and prevent otherwise safe and efficacious drugs and therapies from being implemented. PMID:22769234
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montégiani, Jean-François; Gaudin, Émilie; Després, Philippe
2014-08-15
In peptide receptor radionuclide therapy (PRRT), large inter-patient variability in absorbed radiation dose per administered activity mandates individualized dosimetry to evaluate therapeutic efficacy and toxicity. We created a reliable GPU-calculated dosimetry code (irtGPUMCD) and assessed ¹⁷⁷Lu-octreotate renal dosimetry in eight patients (4 cycles of approximately 7.4 GBq). irtGPUMCD was derived from a brachytherapy dosimetry code (bGPUMCD), which was adapted to ¹⁷⁷Lu PRRT dosimetry. Serial quantitative single-photon emission computed tomography (SPECT) images were obtained from three SPECT/CT acquisitions performed at 4, 24, and 72 hours after ¹⁷⁷Lu-octreotate administration, and registered with non-rigid deformation of CT volumes, to obtain the 4D quantitative biodistribution of ¹⁷⁷Lu-octreotate. Local energy deposition from the β disintegrations was assumed. Using Monte Carlo gamma-photon transport, irtGPUMCD computed the dose rate at each time point. The average kidney absorbed dose was obtained from 1-cm³ VOI dose-rate samples on each cortex, subjected to a biexponential curve fit. Integration of the resulting time-dose-rate curve yielded the renal absorbed dose. The mean renal dose per administered activity was 0.48 ± 0.13 Gy/GBq (range: 0.30–0.71 Gy/GBq). Comparison with another PRRT dosimetry code (VRAK: Voxelized Registration and Kinetics) showed fair accordance with irtGPUMCD (11.4 ± 6.8%, range: 3.3–26.2%). These results suggest the possibility of using the irtGPUMCD code to personalize administered activity in PRRT, which could improve clinical outcomes by maximizing per-cycle tumor doses without exceeding the tolerable renal dose.
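The final step of the chain, integrating a time-dose-rate curve to an absorbed dose, can be sketched simply. The paper fits a biexponential; with only three illustrative samples, the sketch below instead integrates trapezoidally to the last sample and attaches a mono-exponential tail fitted to the final two points. All numbers are hypothetical, and the mono-exponential tail is a stated simplification of the paper's method.

```python
import math

# Simplified absorbed-dose integration: trapezoid rule over the sampled
# dose rates, plus an analytic mono-exponential tail to infinity fitted to
# the last two samples.  Illustrative numbers only; the published method
# uses a biexponential fit over the whole curve.

def absorbed_dose(times_h, rates_gy_per_h):
    # assume the dose rate from t=0 to the first sample equals the first sample
    ts = [0.0] + list(times_h)
    rs = [rates_gy_per_h[0]] + list(rates_gy_per_h)
    dose = sum((ts[i + 1] - ts[i]) * (rs[i] + rs[i + 1]) / 2.0
               for i in range(len(ts) - 1))
    # mono-exponential tail r(t) = r_n * exp(-lam*(t - t_n)) for t > t_n
    lam = math.log(rs[-2] / rs[-1]) / (ts[-1] - ts[-2])
    dose += rs[-1] / lam              # closed-form integral of the tail
    return dose

d = absorbed_dose([4.0, 24.0, 72.0], [0.030, 0.012, 0.003])
print(round(d, 3))                    # total absorbed dose in Gy
```

The tail term matters: here it contributes roughly a tenth of the total, which is why late imaging time points (the 72 h acquisition above) are needed to pin down the effective clearance rate.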
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. 
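The three-bit repetition error-correction scheme the study refers to has a simple textbook form: each bit is sent three times and decoded by majority vote, which corrects any single bit flip per triplet. A minimal sketch:

```python
# Classical three-bit repetition code: triplicate each bit on encode,
# majority-vote each triplet on decode.  Corrects any single bit flip
# within a triplet, the error model discussed above.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(coded):
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

msg = [1, 0, 1]
tx = encode(msg)
tx[4] ^= 1                      # flip one bit of the middle triplet
print(decode(tx) == msg)        # True: the single error is corrected
```

Two flips within one triplet defeat the majority vote, which matches the study's observation that error magnitudes must stay below the single-bit-flip threshold for the scheme to protect the message.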
PMID:23966987
Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.
2000-01-01
A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected. It required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantify the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to have an influence on rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
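The design-of-experiments idea can be illustrated at reduced scale: a two-level full factorial in three factors (a deliberate simplification of the paper's three-level, six-variable D-optimal design), with main effects estimated as the difference between mean responses at each factor's high and low settings. The response function below is synthetic, standing in for the Isp efficiency obtained from CFD runs.

```python
import itertools

# Minimal design-of-experiments illustration: two-level full factorial in
# three coded factors (-1/+1), with main effects computed as mean(high) -
# mean(low).  The response is a synthetic stand-in for CFD-derived Isp
# efficiency; factor names and coefficients are invented.

def response(x1, x2, x3):
    # linear effects in x1, x2, a bilinear x1*x2 interaction, x3 inert
    return 10.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x1 * x2 + 0.0 * x3

runs = list(itertools.product([-1, 1], repeat=3))   # 2^3 = 8 runs
ys = [response(*r) for r in runs]

def main_effect(k):
    hi = [y for r, y in zip(runs, ys) if r[k] == 1]
    lo = [y for r, y in zip(runs, ys) if r[k] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)

print([round(main_effect(k), 3) for k in range(3)])  # [4.0, -2.0, 0.0]
```

Because the design is orthogonal, the x1·x2 interaction averages out of the main effects (each effect equals twice its linear coefficient, and the inert factor comes out zero); estimating bilinear and curvilinear terms as the paper does requires a richer model fit, which D-optimal selection keeps affordable at 36 runs instead of 729.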
Sporophyte Formation and Life Cycle Completion in Moss Requires Heterotrimeric G-Proteins
Hackenberg, Dieter; Quatrano, Ralph
2016-01-01
In this study, we report the functional characterization of heterotrimeric G-proteins from a nonvascular plant, the moss Physcomitrella patens. In plants, G-proteins have been characterized from only a few angiosperms to date, where their involvement has been shown during regulation of multiple signaling and developmental pathways affecting overall plant fitness. In addition to its unparalleled evolutionary position in the plant lineages, the P. patens genome also codes for a unique assortment of G-protein components, which includes two copies of Gβ and Gγ genes, but no canonical Gα. Instead, a single gene encoding an extra-large Gα (XLG) protein exists in the P. patens genome. Here, we demonstrate that in P. patens the canonical Gα is biochemically and functionally replaced by an XLG protein, which works in the same genetic pathway as one of the Gβ proteins to control its development. Furthermore, the specific G-protein subunits in P. patens are essential for its life cycle completion. Deletion of the genomic locus of PpXLG or PpGβ2 results in smaller, slower growing gametophores. Normal reproductive structures develop on these gametophores, but they are unable to form any sporophyte, the only diploid stage in the moss life cycle. Finally, the mutant phenotypes of ΔPpXLG and ΔPpGβ2 can be complemented by the homologous genes from Arabidopsis, AtXLG2 and AtAGB1, respectively, suggesting an overall conservation of their function throughout the plant evolution. PMID:27550997
Characterization of a novel ADAM protease expressed by Pneumocystis carinii.
Kennedy, Cassie C; Kottom, Theodore J; Limper, Andrew H
2009-08-01
Pneumocystis species are opportunistic fungal pathogens that cause severe pneumonia in immunocompromised hosts. Recent evidence has suggested that unidentified proteases are involved in Pneumocystis life cycle regulation. Proteolytically active ADAM (named for "a disintegrin and metalloprotease") family molecules have been identified in some fungal organisms, such as Aspergillus fumigatus and Schizosaccharomyces pombe, and some have been shown to participate in life cycle regulation. Accordingly, we sought to characterize ADAM-like molecules in the fungal opportunistic pathogen, Pneumocystis carinii (PcADAM). After an in silico search of the P. carinii genomic sequencing project identified a 329-bp partial sequence with homology to known ADAM proteins, the full-length PcADAM sequence was obtained by PCR extension cloning, yielding a final coding sequence of 1,650 bp. Sequence analysis detected the presence of a typical ADAM catalytic active site (HEXXHXXGXXHD). Expression of PcADAM over the Pneumocystis life cycle was analyzed by Northern blot. Southern and contour-clamped homogenous electronic field blot analysis demonstrated its presence in the P. carinii genome. Expression of PcADAM was observed to be increased in Pneumocystis cysts compared to trophic forms. The full-length gene was subsequently cloned and heterologously expressed in Saccharomyces cerevisiae. Purified PcADAMp protein was proteolytically active in casein zymography, requiring divalent zinc. Furthermore, native PcADAMp extracted directly from freshly isolated Pneumocystis organisms also exhibited protease activity. This is the first report of protease activity attributable to a specific, characterized protein in the clinically important opportunistic fungal pathogen Pneumocystis.
Performance of MIMO-OFDM using convolution codes with QAM modulation
NASA Astrophysics Data System (ADS)
Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa
2014-04-01
The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error-correction code) to detect and correct errors that occur during data transmission; one option is the convolutional code. This paper presents the performance of OFDM using the Space Time Block Code (STBC) diversity technique with QAM modulation and code rate ½. The evaluation is done by analyzing the Bit Error Rate (BER) versus the Energy per Bit to Noise Power Spectral Density Ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in an OFDM system. Achieving a BER of 10⁻³ requires 10 dB SNR in the SISO-OFDM scheme, and the 2×2 MIMO-OFDM scheme likewise requires 10 dB. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolutional coding to the 4×4 MIMO-OFDM scheme improves performance to 0 dB for the same BER. This demonstrates a power saving of 3 dB over the uncoded 4×4 MIMO-OFDM system, a 7 dB saving over 2×2 MIMO-OFDM, and significant power savings over the SISO-OFDM system.
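A rate-½ convolutional encoder of the kind used for such channel coding can be sketched as a shift register with two generator taps. The generator polynomials below (7 and 5 octal) are a common illustrative choice; the paper does not state which polynomials it used:

```python
def conv_encode(bits, g1=0b111, g2=0b101, K=3):
    """Rate-1/2 convolutional encoder: for each input bit, emit one
    parity bit per generator polynomial over a K-bit shift register."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)  # parity of tapped bits
        out.append(bin(state & g2).count("1") % 2)
    return out

coded = conv_encode([1, 0, 1, 1])
assert len(coded) == 8          # rate 1/2: two output bits per input bit
```

At the receiver, a Viterbi decoder would exploit this redundancy to correct transmission errors, which is what shifts the BER curve left on the Eb/No axis.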
NASA Astrophysics Data System (ADS)
Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh
2014-06-01
For several years, Monte Carlo burnup/depletion codes have appeared which couple Monte Carlo codes, simulating the neutron transport, to deterministic methods that handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine 3-dimensional effects and to avoid the multi-group hypotheses made by deterministic solvers. The counterpart is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology to avoid the repetitive and time-expensive Monte Carlo simulations and to replace them with perturbation calculations: the different burnup steps may be seen as perturbations of the isotopic concentration of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the precise calculation scheme designed to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a PWR-like assembly, studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
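The correlated-sampling idea (re-using the histories of one reference Monte Carlo simulation to evaluate a perturbed configuration via likelihood-ratio weights) can be sketched on a toy one-dimensional flight-path problem. This illustrates the principle only, not the TRIPOLI-4® implementation:

```python
import math
import random

def correlated_sampling_estimate(samples, sigma_ref, sigma_pert):
    """Re-use flight paths sampled from exp(-sigma_ref * x) to estimate a
    tally under a perturbed cross section, via likelihood-ratio weights."""
    total = 0.0
    for x in samples:
        # weight = perturbed pdf / reference pdf for the same history
        w = (sigma_pert * math.exp(-sigma_pert * x)) / (
            sigma_ref * math.exp(-sigma_ref * x))
        total += w * x                    # tally: mean free path
    return total / len(samples)

random.seed(1)
sigma_ref, sigma_pert = 1.0, 1.1
samples = [random.expovariate(sigma_ref) for _ in range(200_000)]
est = correlated_sampling_estimate(samples, sigma_ref, sigma_pert)
assert abs(est - 1.0 / sigma_pert) < 0.02   # true mean free path = 1/sigma
```

The same sampled histories answer a question about a different (perturbed) medium, which is why each burnup step need not trigger a fresh transport simulation.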
ODECS -- A computer code for the optimal design of S.I. engine control strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arsie, I.; Pianese, C.; Rizzo, G.
1996-09-01
The computer code ODECS (Optimal Design of Engine Control Strategies) for the design of spark-ignition engine control strategies is presented. The code was developed from the authors' work in this field, building on original contributions in engine stochastic optimization and dynamical modeling. It has a modular structure and consists of a user interface for defining, executing, and analyzing computations performed with four independent modules. These modules allow the following calculations: (1) definition of the engine mathematical model from steady-state experimental data; (2) engine cycle test trajectory corresponding to a vehicle transient simulation test such as the ECE15 or FTP drive test schedule; (3) evaluation of the optimal engine control maps with a steady-state approach; and (4) engine dynamic cycle simulation and optimization of static control maps and/or dynamic compensation strategies, taking into account dynamical effects due to the unsteady fluxes of air and fuel and the influence of combustion chamber wall thermal inertia on fuel consumption and emissions. Moreover, the last two modules can account for errors generated by non-deterministic behavior of sensors and actuators and their influence on overall engine performance, and compute robust strategies that are less sensitive to stochastic effects. The paper describes the four modules together with significant results from the simulation and the calculation of optimal control strategies for dynamic transient tests.
Structural analysis of cylindrical thrust chambers, volume 3
NASA Technical Reports Server (NTRS)
Pearson, M. L.
1981-01-01
A system of three computer programs is described for use in conjunction with the BOPACE finite element program. The programs are demonstrated by analyzing cumulative plastic deformation in a regeneratively cooled rocket thrust chamber. The codes provide the capability to predict geometric and material nonlinear behavior of cyclically loaded structures without performing a cycle-by-cycle analysis over the life of the structure. The program set consists of a BOPACE restart tape reader routine, an extrapolation program, and a plot package.
Towards 100,000 CPU Cycle-Scavenging by Genetic Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Biegel, Bryan A. (Technical Monitor)
2001-01-01
We examine a web-centric design using standard tools such as web servers, web browsers, PHP, and mySQL. We also consider the applicability of Information Power Grid tools such as the Globus (no relation to the author) Toolkit. We intend to implement this architecture with JavaGenes running on at least two cycle-scavengers: Condor and United Devices. JavaGenes, a genetic algorithm code written in Java, will be used to evolve multi-species reactive molecular force field parameters.
Air breathing engine/rocket trajectory optimization
NASA Technical Reports Server (NTRS)
Smith, V. K., III
1979-01-01
This research has focused on improving the mathematical models of the air-breathing propulsion systems, which can be mated with the rocket engine model and incorporated in trajectory optimization codes. Improved engine simulations provided accurate representation of the complex cycles proposed for advanced launch vehicles, thereby increasing the confidence in propellant use and payload calculations. The versatile QNEP (Quick Navy Engine Program) was modified to allow treatment of advanced turboaccelerator cycles using hydrogen or hydrocarbon fuels and operating in the vehicle flow field.
12 CFR 252.145 - Mid-cycle stress test.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 12 Banks and Banking 4 2013-01-01 2013-01-01 false Mid-cycle stress test. 252.145 Section 252.145... (CONTINUED) ENHANCED PRUDENTIAL STANDARDS (REGULATION YY) Company-Run Stress Test Requirements for Covered Companies § 252.145 Mid-cycle stress test. (a) Mid-cycle stress test requirement. In addition to the stress...
12 CFR 252.145 - Mid-cycle stress test.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 12 Banks and Banking 4 2014-01-01 2014-01-01 false Mid-cycle stress test. 252.145 Section 252.145... (CONTINUED) ENHANCED PRUDENTIAL STANDARDS (REGULATION YY) Company-Run Stress Test Requirements for Covered Companies § 252.145 Mid-cycle stress test. (a) Mid-cycle stress test requirement. In addition to the stress...
Conceptual design study of small long-life PWR based on thorium cycle fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, M. Nurul; Su'ud, Zaki; Waris, Abdul
2014-09-30
The neutronic performance of a small long-life Pressurized Water Reactor (PWR) using thorium-cycle-based fuel has been investigated. The thorium cycle, which has a higher conversion ratio in the thermal region than the uranium cycle, produces a significant amount of ²³³U during burnup. The cell burnup calculations were performed with the PIJ SRAC code using a nuclear data library based on JENDL 3.3, while the multi-energy-group diffusion calculations were optimized in whole-core cylindrical two-dimensional R-Z geometry with SRAC-CITATION. This study introduces a thorium nitride fuel system with ZIRLO as the cladding material. The optimized 350 MWt small long-life PWR yields small excess reactivity and reduced power peaking during its operation.
Carroll, Norman V; Rupp, Michael T; Holdford, David A
2014-03-01
The need for accurate calculation of long-term care (LTC) pharmacies' costs to dispense (CTD) has become more important as payers have moved toward reimbursement models based on pharmacies' actual acquisition cost for drug products and the Centers for Medicare & Medicaid Services (CMS) has implemented requirements that LTC pharmacies must dispense prescriptions for certain branded drugs in 14-day-or-less quantities. To (a) calculate the average cost that the typical independently owned, closed-door LTC pharmacy currently incurs to dispense and deliver a prescription to the resident of a client LTC facility and (b) estimate how CMS-mandated changes to a 14-day-or-less dispensing cycle would affect the typical LTC pharmacy's average CTD. The data requirements and measurement model were developed by academic researchers in consultation with an industry advisory committee of independent LTC pharmacy owners. A survey instrument was constructed to collect financial and operating data required to calculate the CTD. Surveys were distributed via 3 dissemination channels to approximately 1,000 independently owned, closed-door LTC pharmacies. The National Community Pharmacists Association mailed surveys to their LTC members; 3 major national wholesalers distributed surveys to their LTC customers through their newsletters; and 3 LTC group purchasing organizations distributed the surveys to their members through emails, newsletters, mailings, and/or regional meetings. Each pharmacy's CTD was calculated by dividing total LTC dispensing-related costs by the total number of prescriptions dispensed. Dispensing-related costs included costs incurred to physically dispense and deliver prescriptions (e.g., dispensing pharmacists' and technicians' salaries and costs of medication containers) and costs incurred to support the dispensing function (e.g., salaries of delivery and medical records personnel).
A model based on dispensing-related fixed, variable, and semivariable costs was developed to examine the impact of shorter dispensing cycles on LTC pharmacies' CTD. A prescription volume increase of 19% was assumed based on converting only solid oral branded drugs to short-cycle dispensing. A diverse sample of 64 closed-door LTC pharmacies returned usable surveys. Sales from dispensing to LTC facilities accounted for more than 98% of total sales. Respondents indicated that they currently dispensed 23% of total doses in 14-day-or-less cycles and 76% in 28-31 day cycles. Most pharmacies used automated medication packaging technology, heat and cold package sealers, bar code systems, sterile compounding hoods, LTC printers or labelers, and electronic prescribing. The median CTD was $13.54 with an interquartile range (25th to 75th percentiles) of $10.51 to $17.66. More than half of dispensing-related costs were from personnel expense, of which pharmacists and managers accounted for more than 40%. The results of the fixed and variable cost modeling suggested that converting solid oral brand-name drugs from 30-day to 14-day dispensing cycles would lower the median per prescription CTD to between $11.63 and $12.54, depending on the assumptions made about the effects of semivariable costs. However, this decrease in per prescription dispensing cost is dwarfed by an increase in total dispensing cost incurred by pharmacies that results from doubling the monthly volume of short-cycle prescriptions that must be dispensed. The result is that the typical LTC pharmacy in our sample incurred a CTD of $13.54 if the medication is dispensed in a 30-day cycle or $23.26 if the medication is dispensed in two 14-day cycles (at a cost of $11.63 for each cycle dispensed). Our results indicated a median CTD of $13.54 for the typical independently owned, closed-door LTC pharmacy. 
Moving to a shorter cycle would reduce pharmacies' average per-prescription CTD but would increase the number of prescriptions dispensed per month. Our results indicated that transitioning solid oral branded products to 14-day cycles would reduce the median CTD to a minimum of $11.63 but would increase total dispensing costs because each solid oral branded prescription would require twice the number of monthly dispensing events.
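The cost arithmetic reported above is simple enough to verify directly: halving the cycle length roughly doubles the number of dispensing events, so a lower per-event CTD still yields a higher monthly total.

```python
def monthly_dispensing_cost(ctd_per_event, cycles_per_month):
    """Total monthly cost to dispense one medication: per-event cost
    times the number of dispensing events in the month."""
    return ctd_per_event * cycles_per_month

# Figures from the study: $13.54 per 30-day fill vs $11.63 per 14-day fill.
cost_30_day = monthly_dispensing_cost(13.54, 1)   # one event per month
cost_14_day = monthly_dispensing_cost(11.63, 2)   # two events per month
assert round(cost_14_day, 2) == 23.26             # matches the reported $23.26
assert cost_14_day > cost_30_day                  # shorter cycles cost more in total
```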
Full core analysis of IRIS reactor by using MCNPX.
Amin, E A; Bashter, I I; Hassan, Nabil M; Mustafa, S S
2016-07-01
This paper describes a neutronic analysis of the fresh-fuelled IRIS (International Reactor Innovative and Secure) reactor with the MCNPX code. The analysis included criticality calculations, radial and axial power distributions, the nuclear peaking factor, and the axial offset percent at the beginning of the fuel cycle. The effective multiplication factor obtained by MCNPX is compared with previous calculations by the HELIOS/NESTLE, CASMO/SIMULATE, modified CORD-2 nodal, and SAS2H/KENO-V code systems. The k-eff value obtained by MCNPX is found to be closest to the CORD-2 value. The radial and axial powers are compared with other published results obtained with the SAS2H/KENO-V code. Moreover, the WIMS-D5 code is used to study the effect of enriched boron in the form of ZrB2 on the effective multiplication factor (k-eff) of the fuel pin. In this part of the calculation, k-eff is calculated at different concentrations of Boron-10 in mg/cm at different stages of burnup of the unit cell. The results of this part are compared with published results obtained with the HELIOS code. Copyright © 2016 Elsevier Ltd. All rights reserved.
Goddard Visiting Scientist Program
NASA Technical Reports Server (NTRS)
2000-01-01
Under this Indefinite Delivery Indefinite Quantity (IDIQ) contract, USRA was expected to provide short-term (from 1 day up to 1 year) personnel as required to provide a Visiting Scientists Program to support the Earth Sciences Directorate (Code 900) at the Goddard Space Flight Center. The Contractor was to have a pool, or have access to a pool, of scientific talent, both domestic and international, at all levels (graduate student to senior scientist), that would support the technical requirements of the following laboratories and divisions within Code 900: 1) Global Change Data Center (902); 2) Laboratory for Atmospheres (Code 910); 3) Laboratory for Terrestrial Physics (Code 920); 4) Space Data and Computing Division (Code 930); 5) Laboratory for Hydrospheric Processes (Code 970). The research activities described below for each organization within Code 900 were intended to comprise the general scope of effort covered under the Visiting Scientist Program.
Spatiotemporal coding of inputs for a system of globally coupled phase oscillators
NASA Astrophysics Data System (ADS)
Wordsworth, John; Ashwin, Peter
2008-12-01
We investigate the spatiotemporal coding of low amplitude inputs to a simple system of globally coupled phase oscillators with coupling function g(ϕ)=-sin(ϕ+α)+rsin(2ϕ+β) that has robust heteroclinic cycles (slow switching between cluster states). The inputs correspond to detuning of the oscillators. It was recently noted that globally coupled phase oscillators can encode their frequencies in the form of spatiotemporal codes of a sequence of cluster states [P. Ashwin, G. Orosz, J. Wordsworth, and S. Townley, SIAM J. Appl. Dyn. Syst. 6, 728 (2007)]. Concentrating on the case of N=5 oscillators we show in detail how the spatiotemporal coding can be used to resolve all of the information that relates the individual inputs to each other, providing that a long enough time series is considered. We investigate robustness to the addition of noise and find a remarkable stability, especially of the temporal coding, to the addition of noise even for noise of a comparable magnitude to the inputs.
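The dynamics studied can be sketched by Euler-integrating dθᵢ/dt = ωᵢ + (1/N)Σⱼ g(θᵢ − θⱼ) with the stated coupling function; the parameter values below are illustrative, not necessarily those used in the paper:

```python
import math

def step(theta, omega, alpha, beta, r, dt):
    """One Euler step for N globally coupled phase oscillators with
    g(phi) = -sin(phi + alpha) + r*sin(2*phi + beta)."""
    N = len(theta)
    new = []
    for i in range(N):
        coupling = sum(
            -math.sin(theta[i] - theta[j] + alpha)
            + r * math.sin(2 * (theta[i] - theta[j]) + beta)
            for j in range(N)) / N
        new.append((theta[i] + dt * (omega[i] + coupling)) % (2 * math.pi))
    return new

# N = 5 oscillators; small detunings act as the low-amplitude inputs.
theta = [0.1, 2.0, 4.0, 1.0, 5.0]
omega = [1.0 + 1e-3 * i for i in range(5)]       # detuned frequencies
for _ in range(1000):
    theta = step(theta, omega, alpha=1.8, beta=-2.0, r=0.2, dt=0.01)
assert len(theta) == 5 and all(0 <= t < 2 * math.pi for t in theta)
```

In the heteroclinic regime, long runs of such a simulation visit a sequence of cluster states whose order encodes the detunings, which is the spatiotemporal code analyzed in the paper.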
Intrasystem Analysis Program (IAP) code summaries
NASA Astrophysics Data System (ADS)
Dobmeier, J. J.; Drozd, A. L. S.; Surace, J. A.
1983-05-01
This report contains detailed descriptions and capabilities of the codes that comprise the Intrasystem Analysis Program. The four codes are: Intrasystem Electromagnetic Compatibility Analysis Program (IEMCAP), General Electromagnetic Model for the Analysis of Complex Systems (GEMACS), Nonlinear Circuit Analysis Program (NCAP), and Wire Coupling Prediction Models (WIRE). IEMCAP is used for computer-aided evaluation of electromagnetic compatibility (EMC) at all stages of an Air Force system's life cycle, applicable to aircraft, space/missile, and ground-based systems. GEMACS utilizes a Method of Moments (MOM) formalism with the Electric Field Integral Equation (EFIE) for the solution of electromagnetic radiation and scattering problems. The code employs both full matrix decomposition and Banded Matrix Iteration solution techniques and is expressly designed for large problems. NCAP is a circuit analysis code which uses the Volterra approach to solve for the transfer functions and node voltages of weakly nonlinear circuits. The WIRE programs apply multiconductor transmission line theory to the prediction of cable coupling for specific classes of problems.
Toward a first-principles integrated simulation of tokamak edge plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C S; Klasky, Scott A; Cummings, Julian
2008-01-01
Performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD, for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.
Interface requirements to couple thermal-hydraulic codes to 3D neutronic codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langenbuch, S.; Austregesilo, H.; Velkov, K.
1997-07-01
The present situation of thermal-hydraulic codes and 3D neutronics codes is briefly described, and general considerations for coupling these codes are discussed. Two different basic approaches to coupling are identified, and their relative advantages and disadvantages are discussed. The implementation of the coupling for 3D neutronics codes in the ATHLET system is presented. This interface is now used for coupling three different 3D neutronics codes.
Ground Systems Development Environment (GSDE) software configuration management
NASA Technical Reports Server (NTRS)
Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo
1992-01-01
This report presents a review of the software configuration management (CM) plans developed for the Space Station Training Facility (SSTF) and the Space Station Control Center. The scope of the CM assessed in this report is the Systems Integration and Testing Phase of the Ground Systems development life cycle. This is the period following coding and unit test and preceding delivery to operational use. This report is one of a series from a study of the interfaces among the Ground Systems Development Environment (GSDE), the development systems for the SSTF and the SSCC, and the target systems for SSCC and SSTF. This is the last report in the series. The focus of this report is on the CM plans developed by the contractors for the Mission Systems Contract (MSC) and the Training Systems Contract (TSC). CM requirements are summarized and described in terms of operational software development. The software workflows proposed in the TSC and MSC plans are reviewed in this context, and evaluated against the CM requirements defined in earlier study reports. Recommendations are made to improve the effectiveness of CM while minimizing its impact on the developers.
Design of an MR image processing module on an FPGA chip.
Li, Limin; Wyrwicz, Alice M
2015-06-01
We describe the design and implementation of an image processing module on a single-chip Field-Programmable Gate Array (FPGA) for real-time image processing. We also demonstrate that graphical coding can greatly simplify the design work. The processing module is based on a 2D FFT core. Our design is distinguished from previously reported designs in two respects. First, no off-chip hardware resources are required, which increases portability of the core. Second, the direct matrix transposition usually required to execute a 2D FFT is completely avoided using our newly designed address generation unit, which saves considerable on-chip block RAM and clock cycles. The image processing module was tested by reconstructing multi-slice MR images from both phantom and animal data. The tests on static data show that the processing module is capable of reconstructing 128×128 images at a speed of 400 frames/second. The tests on simulated real-time streaming data demonstrate that the module works properly under the timing conditions necessary for MRI experiments. Copyright © 2015 Elsevier Inc. All rights reserved.
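The core operation the module performs, Cartesian k-space to image via an inverse 2D FFT, can be sketched in NumPy as a software reference for the FPGA pipeline (a generic reconstruction, not the authors' HDL design):

```python
import numpy as np

def reconstruct(kspace):
    """Reconstruct an MR image from centered Cartesian k-space data via
    an inverse 2D FFT, the operation the FPGA module's 2D FFT core performs."""
    return np.abs(np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace))))

# Round trip on a synthetic 128x128 "phantom": FFT to k-space, then back.
phantom = np.zeros((128, 128))
phantom[48:80, 48:80] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(phantom)))
image = reconstruct(kspace)
assert np.allclose(image, phantom, atol=1e-9)
```

On the FPGA, the same row/column FFT passes are performed in fixed point, with the address generation unit supplying the transposed access order that this NumPy version gets for free.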
Using a Magnetic Flux Transport Model to Predict the Solar Cycle
NASA Technical Reports Server (NTRS)
Lyatskaya, S.; Hathaway, D.; Winebarger, A.
2007-01-01
We present the results of an investigation into the use of a magnetic flux transport model to predict the amplitude of future solar cycles. Recently Dikpati, de Toma, & Gilman (2006) showed how their dynamo model could be used to accurately predict the amplitudes of the last eight solar cycles and offered a prediction for the next solar cycle - a large amplitude cycle. Cameron & Schussler (2007) found that they could reproduce this predictive skill with a simple 1-dimensional surface flux transport model - provided they used the same parameters and data as Dikpati, de Toma, & Gilman. However, when they tried incorporating the data in what they argued was a more realistic manner, they found that the predictive skill dropped dramatically. We have written our own code for examining this problem and have incorporated updated and corrected data for the source terms - the emergence of magnetic flux in active regions. We present both the model itself and our results from it - in particular our tests of its effectiveness at predicting solar cycles.
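A 1-dimensional surface flux transport model of the kind Cameron & Schussler used advects the surface field with a meridional flow and spreads it with a supergranular diffusivity. A toy explicit discretization (coefficients and profiles are illustrative, not the authors' calibration):

```python
import numpy as np

def sft_step(B, v, eta, dx, dt):
    """One explicit step of a 1-D surface flux transport equation:
    dB/dt = -d(vB)/dx + eta * d2B/dx2  (advection by a meridional flow
    plus supergranular diffusion), on a periodic grid."""
    flux = v * B
    adv = -(np.roll(flux, -1) - np.roll(flux, 1)) / (2 * dx)
    dif = eta * (np.roll(B, -1) - 2 * B + np.roll(B, 1)) / dx**2
    return B + dt * (adv + dif)

# A bipolar "active region" source diffuses and drifts with the flow.
x = np.linspace(-1, 1, 200)
B = np.exp(-((x - 0.1) / 0.05) ** 2) - np.exp(-((x + 0.1) / 0.05) ** 2)
v = 0.1 * np.sin(np.pi * x)           # toy meridional flow profile
for _ in range(500):
    B = sft_step(B, v, eta=1e-3, dx=x[1] - x[0], dt=1e-3)
assert abs(B.sum()) < 1e-8            # the periodic scheme conserves net flux
```

Predictive use of such a model then reduces to feeding it the observed active-region sources, which is exactly where the choice of source data was shown to matter.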
Aging, Counterfeiting Configuration Control (AC3)
2010-01-31
SARA continuously polls contributing data sources on a data-specific refresh cycle. SARA supports a continuous risk topology assessment by the program...function was demonstrated at the breadboard level based on comparison of North American Industry Classification System (NAICS) codes. Other
Development of high-fidelity multiphysics system for light water reactor analysis
NASA Astrophysics Data System (ADS)
Magedanz, Jeffrey W.
There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them, and its consequent thermal resistance. These operational tendencies require higher fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for Thermal Hydraulics, TORT-TD for Neutron Kinetics, and FRAPTRAN for Fuel Performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a Thermal Hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such coupling is written. 
While coupling involves combining codes into a single executable, they are usually still developed and maintained separately. It should thus be a design objective to minimize the changes to those codes, and keep the changes to each code free of dependence on the details of the other codes. This will ease the incorporation of new versions of the code into the coupling, as well as re-use of parts of the coupling to couple with different codes. In order to fulfill this objective, an interface for each code was created in the form of an object-oriented abstract data type. Object-oriented programming is an effective method for enforcing a separation between different parts of a program, and clarifying the communication between them. The interfaces enable the main program to control the codes in terms of high-level functionality. This differs from the established practice of a master/slave relationship, in which the slave code is incorporated into the master code as a set of subroutines. While this PhD research continues previous work with a coupling between CTF and TORT-TD, it makes two major original contributions: (1) using a fuel-performance code, instead of a thermal-hydraulics code's simplified built-in models, to model the feedback from the fuel rods, and (2) the design of an object-oriented interface as an innovative method to interact with a coupled code in a high-level, easily-understandable manner. The resulting code system will serve as a tool to study the question of under what conditions, and to what extent, these higher-fidelity methods will provide benefits to reactor core analysis. (Abstract shortened by UMI.)
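The object-oriented abstract data type described above, with each coupled code hidden behind high-level operations so the driver never touches code internals, might look like the following sketch. The class and method names are hypothetical, not the dissertation's actual interface:

```python
from abc import ABC, abstractmethod

class PhysicsCode(ABC):
    """Abstract interface for a coupled code: the driver depends only on
    these high-level operations, never on any one code's internals."""

    @abstractmethod
    def advance(self, dt: float) -> None:
        """Advance this code's solution by one coupling time step."""

    @abstractmethod
    def export_fields(self) -> dict:
        """Return the fields other codes need as feedback."""

    @abstractmethod
    def import_fields(self, fields: dict) -> None:
        """Accept feedback fields computed by the other codes."""

class FuelPerformance(PhysicsCode):
    """Stand-in for a fuel-performance code behind the interface."""
    def __init__(self):
        self.fuel_temp = 600.0
        self.power = 0.0
    def advance(self, dt):
        self.fuel_temp += 1.0 * dt       # placeholder physics
    def export_fields(self):
        return {"fuel_temperature": self.fuel_temp}
    def import_fields(self, fields):
        self.power = fields.get("pin_power", 0.0)

# The driver loop talks only to the interface, not to any one code.
codes = [FuelPerformance()]
for code in codes:
    code.import_fields({"pin_power": 200.0})
    code.advance(0.5)
assert codes[0].export_fields()["fuel_temperature"] == 600.5
```

Swapping in a new version of any code then means reimplementing one class, not editing the driver, which is the re-use property the design aims for.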
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... jurisdictions. If a lender or other interested party is notified that a State or local building code has been... in accordance with the applicable State or local building code, plus those additional requirements... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model code provisions for use in...
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... jurisdictions. If a lender or other interested party is notified that a State or local building code has been... in accordance with the applicable State or local building code, plus those additional requirements... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model code provisions for use in...
24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... jurisdictions. If a lender or other interested party is notified that a State or local building code has been... in accordance with the applicable State or local building code, plus those additional requirements... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model code provisions for use in...
2013-01-01
Background Myelosuppressive chemotherapy can lead to dose-limiting febrile neutropenia. Prophylactic use of recombinant human G-CSF such as daily filgrastim and once-per-cycle pegfilgrastim may reduce the incidence of febrile neutropenia. This comparative study examined the effect of pegfilgrastim versus daily filgrastim on the risk of hospitalization. Methods This retrospective United States claims analysis utilized 2004–2009 data for filgrastim- and pegfilgrastim-treated patients receiving chemotherapy for non-Hodgkin’s lymphoma (NHL) or breast, lung, ovarian, or colorectal cancers. Cycles in which pegfilgrastim or filgrastim was administered within 5 days from initiation of chemotherapy (considered to represent prophylaxis) were pooled for analysis. Neutropenia-related hospitalization and other healthcare encounters were defined with a “narrow” criterion for claims with an ICD-9 code for neutropenia and with a “broad” criterion for claims with an ICD-9 code for neutropenia, fever, or infection. Odds ratios (OR) for hospitalization and 95% confidence intervals (CI) were estimated by generalized estimating equation (GEE) models and adjusted for patient, tumor, and treatment characteristics. Per-cycle healthcare utilization and costs were examined for cycles with pegfilgrastim or filgrastim prophylaxis. Results We identified 3,535 patients receiving G-CSF prophylaxis, representing 12,056 chemotherapy cycles (11,683 pegfilgrastim, 373 filgrastim). The mean duration of filgrastim prophylaxis in the sample was 4.8 days. The mean duration of pegfilgrastim prophylaxis in the sample was 1.0 day, consistent with the recommended dosage of pegfilgrastim - a single injection once per chemotherapy cycle. 
Cycles with prophylactic pegfilgrastim were associated with a decreased risk of neutropenia-related hospitalization (narrow definition: OR = 0.43, 95% CI: 0.16–1.13; broad definition: OR = 0.38, 95% CI: 0.24–0.59) and all-cause hospitalization (OR = 0.50, 95% CI: 0.35–0.72) versus cycles with prophylactic filgrastim. For neutropenia-related utilization by setting of care, there were more ambulatory visits and hospitalizations per cycle associated with filgrastim prophylaxis than with pegfilgrastim prophylaxis. Mean per-cycle neutropenia-related costs were also higher with prophylactic filgrastim than with prophylactic pegfilgrastim. Conclusions In this comparative effectiveness study, pegfilgrastim prophylaxis was associated with a reduced risk of neutropenia-related or all-cause hospitalization relative to filgrastim prophylaxis. PMID:23298389
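For intuition about the reported odds ratios, the unadjusted odds ratio and its Wald 95% confidence interval can be computed directly from a 2x2 table. Note the study's estimates came from adjusted GEE models, not from this simple formula, and the counts in the usage example are made up.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald (log-scale) confidence interval.

    2x2 table layout:
      a = exposed with event,   b = exposed without event,
      c = unexposed with event, d = unexposed without event.
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) by Woolf's method.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 10/100 hospitalized in one group, 20/100 in the other.
print(odds_ratio_ci(10, 90, 20, 80))
```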
Guide to Permitting Hydrogen Motor Fuel Dispensing Facilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivkin, Carl; Buttner, William; Burgess, Robert
2016-03-28
The purpose of this guide is to assist project developers, permitting officials, code enforcement officials, and other parties involved in developing permit applications and approving the implementation of hydrogen motor fuel dispensing facilities. The guide facilitates the identification of the elements to be addressed in the permitting of a project as it progresses through the approval process; the specific requirements associated with those elements; and the applicable (or potentially applicable) codes and standards by which to determine whether the specific requirements have been met. The guide attempts to identify all applicable codes and standards relevant to the permitting requirements.
The GRO remote terminal system
NASA Technical Reports Server (NTRS)
Zillig, David J.; Valvano, Joe
1994-01-01
In March 1992, NASA HQ challenged GSFC/Code 531 to propose a fast, low-cost approach to close the Tracking Data Relay Satellite System (TDRSS) Zone-of-Exclusion (ZOE) over the Indian Ocean in order to provide global communications coverage for the Compton Gamma Ray Observatory (GRO) spacecraft. GRO had lost its tape recording capability which limited its valuable science data return to real-time contacts with the TDRS-E and TDRS-W synchronous data relay satellites, yielding only approximately 62 percent of the possible data obtainable. To achieve global coverage, a TDRS spacecraft would have to be moved over the Indian Ocean out of line-of-sight control of White Sands Ground Terminal (WSGT). To minimize operations life cycle costs, Headquarters also set a goal for remote control, from the WSGT, of the overseas ground station which was required for direct communications with TDRS-1. On August 27, 1992, Code 531 was given the go ahead to implement the proposed GRO Relay Terminal System (GRTS). This paper describes the Remote Ground Relay Terminal (RGRT) which went operational at the Canberra Deep Space Communications Complex (CDSCC) in Canberra, Australia in December 1993 and is currently augmenting the TDRSS constellation in returning between 80-100 percent of GRO science data under the control of a single operator at WSGT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gehin, Jess C; Godfrey, Andrew T; Evans, Thomas M
The Consortium for Advanced Simulation of Light Water Reactors (CASL) is developing a collection of methods and software products known as VERA, the Virtual Environment for Reactor Applications, including a core simulation capability called VERA-CS. A key milestone for this endeavor is to validate VERA against measurements from operating nuclear power reactors. The first step in validation against plant data is to determine the ability of VERA to accurately simulate the initial startup physics tests for Watts Bar Nuclear Power Station, Unit 1 (WBN1) cycle 1. VERA-CS calculations were performed with the Insilico code developed at ORNL using cross-section processing from the SCALE system and the transport capabilities within the Denovo transport code using the SPN method. The calculations were performed with ENDF/B-VII.0 cross sections in 252 groups (collapsed to 23 groups for the 3D transport solution). The key results of the comparison of calculations with measurements include initial criticality, critical control rod configurations, control rod worth, differential boron worth, and the isothermal temperature reactivity coefficient (ITC). The VERA results for these parameters show good agreement with measurements, with the exception of the ITC, which requires additional investigation. Results are also compared to those obtained with Monte Carlo methods and a current industry core simulator.
A CFD analysis of blade row interactions within a high-speed axial compressor
NASA Astrophysics Data System (ADS)
Richman, Michael Scott
Aircraft engine design presents many technical and financial hurdles. In an effort to streamline the design process, save money, and improve reliability and performance, many manufacturers are relying on computational fluid dynamic simulations. An overarching goal of the design process for military aircraft engines is to reduce size and weight while maintaining (or improving) reliability. Designers often turn to the compression system to accomplish this goal. As pressure ratios increase and the number of compression stages decreases, many problems arise; for example, stability and high cycle fatigue (HCF) become significant as individual stage loading is increased. CFD simulations have recently been employed to assist in the understanding of these aeroelastic problems. For accurate multistage blade row HCF prediction, it is imperative that advanced three-dimensional blade row unsteady aerodynamic interaction codes be validated with appropriate benchmark data. This research addresses this required validation process for TURBO, an advanced three-dimensional multi-blade row turbomachinery CFD code. The solution/prediction accuracy is characterized, identifying key flow field parameters driving the inlet guide vane (IGV) and stator response to the rotor-generated forcing functions. The result is a quantified evaluation of the ability of TURBO to predict not only the fundamental flow field characteristics but also the three-dimensional blade loading.
Patil, Yogita; Müller, Nicolai; Schink, Bernhard; ...
2017-02-20
Anaerobium acetethylicum strain GluBS11 T belongs to the family Lachnospiraceae within the order Clostridiales. It is a Gram-positive, non-motile and strictly anaerobic bacterium isolated from biogas slurry that was originally enriched with gluconate as carbon source (Patil, et al., Int J Syst Evol Microbiol 65:3289-3296, 2015). Here we describe the draft genome sequence of strain GluBS11 T and provide a detailed insight into its physiological and metabolic features. The draft genome sequence is 4,609,043 bp, distributed among 105 scaffolds assembled using the SPAdes genome assembler method. It comprises in total 4,132 genes, of which 4,008 were predicted to be protein coding genes, 124 RNA genes and 867 pseudogenes. The G+C content was 43.51 mol%. The annotated genome of strain GluBS11 T contains putative genes coding for the pentose phosphate pathway, the Embden-Meyerhof-Parnas pathway, the Entner-Doudoroff pathway and the tricarboxylic acid cycle. The genome revealed the presence of most of the necessary genes required for the fermentation of glucose and gluconate to acetate, ethanol, and hydrogen gas. However, a candidate gene for production of formate was not identified.
BRYNTRN: A baryon transport model
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Nealy, John E.; Chun, Sang Y.; Hong, B. S.; Buck, Warren W.; Lamkin, S. L.; Ganapol, Barry D.; Khan, Ferdous; Cucinotta, Francis A.
1989-01-01
The development of an interaction data base and a numerical solution to the transport of baryons through an arbitrary shield material based on a straight ahead approximation of the Boltzmann equation are described. The code is most accurate for continuous energy boundary values, but gives reasonable results for discrete spectra at the boundary using even a relatively coarse energy grid (30 points) and large spatial increments (1 cm in H2O). The resulting computer code is self-contained, efficient and ready to use. The code requires only a very small fraction of the computer resources required for Monte Carlo codes.
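A minimal sketch of the idea behind spatial marching in the straight-ahead approximation: the uncollided beam is attenuated step by step through the shield. This is not BRYNTRN's actual algorithm (which includes collision source terms and an energy grid); it only illustrates marching a flux estimate over coarse spatial increments, with an assumed attenuation coefficient.

```python
import math

def march_flux(phi0, sigma, dx, steps):
    """March an uncollided-flux estimate through a slab shield.

    In the straight-ahead approximation each spatial step of width dx
    attenuates the beam by exp(-sigma * dx); secondary-particle source
    terms, which a full transport code would add here, are omitted.
    Returns the flux history at each depth, starting with phi0.
    """
    phi = phi0
    history = [phi]
    for _ in range(steps):
        phi *= math.exp(-sigma * dx)
        history.append(phi)
    return history
```

With sigma = 0.1 cm^-1 and 1 cm increments (the coarse spacing the abstract mentions), ten steps reduce the beam by a factor of e.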
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation
NASA Technical Reports Server (NTRS)
Holt, James B.; Ruf, Joe
1999-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.
NASA Astrophysics Data System (ADS)
Tsilanizara, A.; Gilardi, N.; Huynh, T. D.; Jouanne, C.; Lahaye, S.; Martinez, J. M.; Diop, C. M.
2014-06-01
Knowledge of the decay heat and its associated uncertainties is an important issue for the safety of nuclear facilities. Many codes are available to estimate the decay heat; ORIGEN, FISPACT, and DARWIN/PEPIN2 are among them. MENDEL is a new depletion code developed at CEA, with a new software architecture, devoted to the calculation of physical quantities related to fuel cycle studies, in particular decay heat. The purpose of this paper is to present a probabilistic approach to assess decay heat uncertainty due to the decay data uncertainties from nuclear data evaluations such as JEFF-3.1.1 or ENDF/B-VII.1. This probabilistic approach is based on both the MENDEL code and the URANIE software, which is a CEA uncertainty analysis platform. As preliminary applications, single thermal fission of uranium-235 and plutonium-239 and a PWR UOx spent fuel cell are investigated.
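The probabilistic approach can be illustrated for a single nuclide: sample the decay constant from its evaluated uncertainty and propagate the spread to the decay heat by Monte Carlo. This is a hedged sketch of the general technique, not the MENDEL/URANIE implementation; the function names and the Gaussian sampling assumption are illustrative.

```python
import math
import random
import statistics

def decay_heat(lam, n0, energy_mev, t):
    """Decay heat (MeV/s) of a single nuclide at time t:
    activity lam * N(t) times the mean energy released per decay."""
    return lam * n0 * math.exp(-lam * t) * energy_mev

def sample_heat_uncertainty(lam_mean, lam_rel_sigma, n0, energy_mev, t,
                            n_samples=10000, seed=1):
    """Propagate a relative 1-sigma uncertainty on the decay constant
    to the decay heat by simple Monte Carlo sampling; returns the
    sample mean and standard deviation of the heat."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        lam = rng.gauss(lam_mean, lam_rel_sigma * lam_mean)
        samples.append(decay_heat(lam, n0, energy_mev, t))
    return statistics.mean(samples), statistics.stdev(samples)
```

A full decay-heat code sums such contributions over hundreds of nuclides, with correlated uncertainties on decay constants, branching ratios, and energies taken from the evaluation files.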
Cell Cycle Regulation of Stem Cells by MicroRNAs.
Mens, Michelle M J; Ghanbari, Mohsen
2018-06-01
MicroRNAs (miRNAs) are a class of small non-coding RNA molecules involved in the regulation of gene expression. They are involved in the fine-tuning of fundamental biological processes such as proliferation, differentiation, survival and apoptosis in many cell types. Emerging evidence suggests that miRNAs regulate critical pathways involved in stem cell function. Several miRNAs have been suggested to target transcripts that directly or indirectly coordinate the cell cycle progression of stem cells. Moreover, previous studies have shown that altered expression levels of miRNAs can contribute to pathological conditions, such as cancer, due to the loss of cell cycle regulation. However, the precise mechanism underlying miRNA-mediated regulation of cell cycle in stem cells is still incompletely understood. In this review, we discuss current knowledge of miRNAs regulatory role in cell cycle progression of stem cells. We describe how specific miRNAs may control cell cycle associated molecules and checkpoints in embryonic, somatic and cancer stem cells. We further outline how these miRNAs could be regulated to influence cell cycle progression in stem cells as a potential clinical application.
Life Prediction for a CMC Component Using the NASALIFE Computer Code
NASA Technical Reports Server (NTRS)
Gyekenyesi, John Z.; Murthy, Pappu L. N.; Mital, Subodh K.
2005-01-01
The computer code, NASALIFE, was used to provide estimates for life of an SiC/SiC stator vane under varying thermomechanical loading conditions. The primary intention of this effort is to show how the computer code NASALIFE can be used to provide reasonable estimates of life for practical propulsion system components made of advanced ceramic matrix composites (CMC). Simple loading conditions provided readily observable and acceptable life predictions. Varying the loading conditions such that low cycle fatigue and creep were affected independently provided expected trends in the results for life due to varying loads and life due to creep. Analysis was based on idealized empirical data for the 9/99 Melt Infiltrated SiC fiber reinforced SiC.
NASA Technical Reports Server (NTRS)
Clement, J. D.; Kirby, K. D.
1973-01-01
Exploratory calculations were performed for several gas core breeder reactor configurations. The computational method involved the use of the MACH-1 one dimensional diffusion theory code and the THERMOS integral transport theory code for thermal cross sections. Computations were performed to analyze thermal breeder concepts and nonbreeder concepts. Analysis of breeders was restricted to the (U-233)-Th breeding cycle, and computations were performed to examine a range of parameters. These parameters include U-233 to hydrogen atom ratio in the gaseous cavity, carbon to thorium atom ratio in the breeding blanket, cavity size, and blanket size.
Late-onset urea cycle disorder in adulthood unmasked by severe malnutrition.
Wells, Diana L; Thomas, Jillian B; Sacks, Gordon S; Zouhary, L Anna
2014-01-01
Urea cycle disorders (UCDs) most often involve inherited deficiencies in genes that code for enzymes normally used by the urea cycle to break down nitrogen. UCDs lead to serious metabolic complications, including severe neurologic decompensation related to hyperammonemia. Although the majority of UCDs are revealed soon after birth, stressful events in adulthood can lead to unmasking of a partial, late-onset UCD. In this report, we describe a late-onset UCD unmasked by severe malnutrition. Early, specialized nutrition therapy is a fundamental aspect of treating hyperammonemic crises in patients with UCD. The case presented here demonstrates the importance of early recognition of UCD and appropriate intervention with nutrition support.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, Jun-jun; Department of Obstetrics and Gynecology of Shanghai Medical College, Fudan University, 138 Yixueyuan Road, Shanghai 200032; Shanghai Key Laboratory of Female Reproductive Endocrine-Related Diseases, 413 Zhaozhou Road, Shanghai 200011
HOX transcript antisense RNA (HOTAIR) is a well-known long non-coding RNA (lncRNA) whose dysregulation correlates with poor prognosis and malignant progression in many forms of cancer. Here, we investigate the expression pattern, clinical significance, and biological function of HOTAIR in serous ovarian cancer (SOC). Clinically, we found that HOTAIR levels were overexpressed in SOC tissues compared with normal controls and that HOTAIR overexpression was correlated with an advanced FIGO stage and a high histological grade. Multivariate analysis revealed that HOTAIR is an independent prognostic factor for predicting overall survival in SOC patients. We demonstrated that HOTAIR silencing inhibited A2780 and OVCA429 SOC cell proliferation in vitro and that the anti-proliferative effects of HOTAIR silencing also occurred in vivo. Further investigation into the mechanisms responsible for the growth inhibitory effects by HOTAIR silencing revealed that its knockdown resulted in the induction of cell cycle arrest and apoptosis through certain cell cycle-related and apoptosis-related proteins. Together, these results highlight a critical role of HOTAIR in SOC cell proliferation and contribute to a better understanding of the importance of dysregulated lncRNAs in SOC progression. - Highlights: • HOTAIR overexpression correlates with an aggressive tumour phenotype and a poor prognosis in SOC. • HOTAIR promotes SOC cell proliferation both in vitro and in vivo. • The proliferative role of HOTAIR is associated with regulation of the cell cycle and apoptosis.
A Framework for Robust Multivariable Optimization of Integrated Circuits in Space Applications
NASA Technical Reports Server (NTRS)
DuMonthier, Jeffrey; Suarez, George
2013-01-01
Application Specific Integrated Circuit (ASIC) design for space applications involves multiple challenges of maximizing performance, minimizing power and ensuring reliable operation in extreme environments. This is a complex multidimensional optimization problem which must be solved early in the development cycle of a system, because the time required for testing and qualification severely limits opportunities to modify and iterate. Manual design techniques, which generally involve simulation at one or a few corners with a very limited set of simultaneously variable parameters to keep the problem tractable, are inefficient and not guaranteed to achieve the best possible results within the performance envelope defined by the process and environmental requirements. What is required is a means to automate design parameter variation, allow the designer to specify operational constraints and performance goals, and analyze the results in a way that facilitates identifying the tradeoffs defining the performance envelope over the full set of process and environmental corner cases. The system developed by the Mixed Signal ASIC Group (MSAG) at the Goddard Space Flight Center is implemented as a framework of software modules, templates and function libraries. It integrates CAD tools and a mathematical computing environment, and can be customized for new circuit designs with only a modest amount of effort, as most common tasks are already encapsulated. Customization is required for simulation test benches to determine performance metrics and for cost function computation. Templates provide a starting point for both, while toolbox functions minimize the code required. Once a test bench has been coded to optimize a particular circuit, it is also used to verify the final design.
The combination of test bench and cost function can then serve as a template for similar circuits or be re-used to migrate the design to different processes by re-running it with the new process specific device models. The system has been used in the design of time to digital converters for laser ranging and time-of-flight mass spectrometry to optimize analog, mixed signal and digital circuits such as charge sensitive amplifiers, comparators, delay elements, radiation tolerant dual interlocked (DICE) flip-flops and two of three voter gates.
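The core loop of such a framework, stripped of the CAD-tool coupling, is a sweep over design parameters evaluated at every process/environment corner, with constraint checks and a cost ranking. The sketch below is a generic illustration under assumed interfaces (the `metric`, `cost`, and `constraints` callables stand in for the simulation test bench and cost function the abstract describes); it is not the MSAG system's actual code.

```python
from itertools import product

def sweep(param_grid, corners, metric, cost, constraints):
    """Exhaustive design-space sweep with worst-case ranking.

    param_grid:  dict mapping parameter name -> list of candidate values.
    corners:     list of process/environment corner identifiers.
    metric:      metric(params, corner) -> dict of performance numbers
                 (stands in for running the simulation test bench).
    cost:        cost(result) -> scalar to minimize.
    constraints: constraints(result) -> True if the result is acceptable.

    Keeps only designs that meet the constraints at *every* corner and
    returns (best_params, worst_case_cost) for the best survivor.
    """
    best = None
    for values in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), values))
        results = [metric(params, c) for c in corners]
        if all(constraints(r) for r in results):
            worst = max(cost(r) for r in results)  # rank by worst corner
            if best is None or worst < best[1]:
                best = (params, worst)
    return best
```

Ranking by the worst corner is what makes the result robust: a design that excels at nominal conditions but fails a constraint at one corner is discarded outright.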
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-25
... estrous cycles to allow for fixed time artificial insemination in lactating dairy cows and beef cows.\\1... insemination in lactating dairy cows and beef cows. Administer to each cow 100 [micro]g gonadorelin by...
Genetic Programming-based Phononic Bandgap Structure Design
2011-09-01
... derivative-based methods is that they require a good starting location to find the global minimum of a function. As can be seen from figure 2, there are many...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-30
... limited to: Crop production (NAICS code 111). Animal production (NAICS code 112). Food manufacturing (NAICS code 311). Pesticide manufacturing (NAICS code 32532). This listing is not intended to be... commodities, Feed additives, Food additives, Pesticides and pests, Reporting and recordkeeping requirements...
42 CFR 52b.12 - What are the minimum requirements of construction and equipment?
Code of Federal Regulations, 2014 CFR
2014-10-01
...-8400). (3) ICBO “Uniform Building Code,” Volumes 1-3 (1997). International Conference of Building...-4406). (4) BOCA National Building Code (1996) 1998 Supplement, Building Officials and Code... Southern Building Code Congress (SBCC), 900 Montclair Road, Birmingham, AL 35213-1206 (telephone 205-591...
NASA Technical Reports Server (NTRS)
1975-01-01
A system is presented which processes FORTRAN-based software systems to surface potential problems before they become execution malfunctions. The system complements the diagnostic capabilities of compilers, loaders, and execution monitors rather than duplicating these functions. It also emphasizes frequent sources of FORTRAN problems that require inordinate manual effort to identify. The principal value of the system is extracting small sections of unusual code from the bulk of normal sequences. Code structures likely to cause immediate or future problems are brought to the user's attention. These messages stimulate timely corrective action of solid errors and promote identification of 'tricky' code. Corrective action may require recoding or simply extending software documentation to explain the unusual technique.
Numerical Studies of Impurities in Fusion Plasmas
DOE R&D Accomplishments Database
Hulse, R. A.
1982-09-01
The coupled partial differential equations used to describe the behavior of impurity ions in magnetically confined controlled fusion plasmas require numerical solution for cases of practical interest. Computer codes developed for impurity modeling at the Princeton Plasma Physics Laboratory are used as examples of the types of codes employed for this purpose. These codes solve for the impurity ionization state densities and associated radiation rates using atomic physics appropriate for these low-density, high-temperature plasmas. The simpler codes solve local equations in zero spatial dimensions while more complex cases require codes which explicitly include transport of the impurity ions simultaneously with the atomic processes of ionization and recombination. Typical applications are discussed and computational results are presented for selected cases of interest.
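The zero-dimensional case described above reduces to coupled rate equations for the charge-state densities: dn_z/dt = S_{z-1} n_{z-1} - (S_z + R_z) n_z + R_{z+1} n_{z+1}, with S the ionization and R the recombination rates. The explicit-Euler step below is a minimal illustration of that balance (the Princeton codes use more sophisticated, stiffness-tolerant solvers); the rate values in the test are arbitrary.

```python
def step_charge_states(n, ionize, recomb, dt):
    """One explicit-Euler step of the zero-dimensional ionization balance.

    n:       densities per charge state, n[0] = neutral .. n[-1] = top state.
    ionize:  ionize[z] is the rate from state z to z+1 (top entry unused).
    recomb:  recomb[z] is the rate from state z to z-1 (entry 0 unused).

    Each state gains from ionization of the state below and recombination
    of the state above, and loses by its own ionization and recombination,
    so total particle number is conserved.
    """
    z_max = len(n)
    new = [0.0] * z_max
    for z in range(z_max):
        gain = (ionize[z - 1] * n[z - 1] if z > 0 else 0.0) \
             + (recomb[z + 1] * n[z + 1] if z < z_max - 1 else 0.0)
        loss = ((ionize[z] if z < z_max - 1 else 0.0)
                + (recomb[z] if z > 0 else 0.0)) * n[z]
        new[z] = n[z] + dt * (gain - loss)
    return new
```

The spatially resolved codes the abstract mentions add a transport (divergence-of-flux) term to each of these equations and solve them on a radial grid.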
Evaluation of isotopic composition of fast reactor core in closed nuclear fuel cycle
NASA Astrophysics Data System (ADS)
Tikhomirov, Georgy; Ternovykh, Mikhail; Saldikov, Ivan; Fomichenko, Peter; Gerasimov, Alexander
2017-09-01
The strategy of the development of nuclear power in Russia provides for the use of fast power reactors in a closed nuclear fuel cycle. The PRORYV (i.e., «Breakthrough» in Russian) project is currently under development. Within the framework of this project, fast reactors BN-1200 and BREST-OD-300 should be built to, inter alia, demonstrate the feasibility of closed nuclear fuel cycle technologies with plutonium as a main source of energy. Russia has a large inventory of plutonium, accumulated as a result of reprocessing of spent fuel from thermal power reactors and conversion of nuclear weapons. This plutonium will be used for fabrication of initial fuel assemblies for fast reactors. The closed nuclear fuel cycle concept of the PRORYV assumes a self-sufficient mode of operation, with fuel regeneration by neutron capture in non-enriched uranium, which is used as a raw material. Operating modes of the reactors and their characteristics should be chosen so as to achieve this self-sufficient mode through the use of fissile isotopes while refueling with depleted uranium, and to maintain this state during the entire period of reactor operation. Modeling of the fuel handling processes is therefore an important issue. To address it, the code REPRORYV (Recycle for PRORYV) has been developed. It simulates nuclide streams in the non-reactor stages of the closed fuel cycle, while various verified codes can be used to evaluate the in-core characteristics of a reactor. Using this approach, this study considers various options for nuclide streams and assesses the impact of plutonium content in the fuel, fuel processing conditions, losses during processing, and initial uncertainties on the neutron-physical characteristics of the reactor.
Fuel cycle for a fusion neutron source
NASA Astrophysics Data System (ADS)
Ananyev, S. S.; Spitsyn, A. V.; Kuteev, B. V.
2015-12-01
The concept of a tokamak-based stationary fusion neutron source (FNS) for scientific research (neutron diffraction, etc.), tests of structural materials for future fusion reactors, nuclear waste transmutation, fission reactor fuel production, and control of subcritical nuclear systems (fusion-fission hybrid reactor) is being developed in Russia. The fuel cycle system is one of the most important systems of the FNS: it provides circulation and reprocessing of the deuterium-tritium fuel mixture in all fusion reactor systems: the vacuum chamber, neutral injection system, cryogenic pumps, tritium purification system, separation system, storage system, and tritium-breeding blanket. The existing technologies need to be significantly upgraded, since the engineering solutions adopted in the ITER project can be only partially used in the FNS (considering the capacity factor higher than 0.3, tritium flow up to 200 m3Pa/s, and temperature of reactor elements up to 650°C). The deuterium-tritium fuel cycle of the stationary FNS is considered. The TC-FNS computer code developed for estimating the tritium distribution in the systems of the FNS is described. The code calculates tritium flows and inventory in tokamak systems (vacuum chamber, cryogenic pumps, neutral injection system, fuel mixture purification system, isotope separation system, tritium storage system) and takes into account tritium loss in the fuel cycle due to thermonuclear burnup and β decay. For the two facility versions considered, FNS-ST and DEMO-FNS, the amount of fuel mixture needed for uninterrupted operation of all fuel cycle systems is 0.9 and 1.4 kg, respectively, and the tritium consumption is 0.3 and 1.8 kg per year, including 35 and 55 g/yr, respectively, due to tritium decay.
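The β-decay contribution to tritium loss can be estimated from first principles. The sketch below assumes a fixed inventory and the standard tritium half-life of about 12.32 years; it ignores burnup, breeding, and the distribution of tritium across subsystems that a code like TC-FNS tracks, so it only roughly brackets the 35-55 g/yr decay losses quoted above.

```python
import math

T_HALF_YEARS = 12.32  # tritium half-life in years (assumed standard value)

def tritium_decay_loss(inventory_g, years=1.0):
    """Mass of tritium (grams) lost to beta decay from a static inventory
    over the given time span: m0 * (1 - exp(-lambda * t))."""
    lam = math.log(2) / T_HALF_YEARS
    return inventory_g * (1.0 - math.exp(-lam * years))
```

A static 1 kg inventory loses roughly 55 g to decay in a year; the smaller figures in the paper reflect that only part of each facility's fuel is held up in-cycle at any time.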
Orbit Transfer Vehicle (OTV) engine phase A study
NASA Technical Reports Server (NTRS)
Mellish, J. A.
1978-01-01
Requirements for the orbit transfer vehicle engine were examined. Engine performance/weight sensitivities, the effect of a service life of 300 start/shutdown cycles between overhauls on the maximum engine operating pressure, and the sensitivity of the engine design point (i.e., thrust chamber pressure and nozzle area ratio) to the specified performance requirements are among the factors studied. Preliminary engine system analyses were conducted on the staged combustion, expander, and gas generator engine cycles. Hydrogen and oxygen pump discharge pressure requirements are shown for various engine cycles. Performance of the engine cycles is compared.
New developments and prospects on COSI, the simulation software for fuel cycle analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eschbach, R.; Meyer, M.; Coquelet-Pascal, C.
2013-07-01
COSI, software developed by the Nuclear Energy Direction of the CEA, is a code simulating a pool of nuclear power plants with its associated fuel cycle facilities. This code has been designed to study various short-, medium- and long-term options for the introduction of various types of nuclear reactors and for the use of the associated nuclear materials. In the frame of the French Act for waste management, scenario studies are carried out with COSI to compare different options for the evolution of the French reactor fleet and options for the partitioning and transmutation of plutonium and minor actinides. Those studies aim in particular at evaluating the sustainability of Sodium-cooled Fast Reactor (SFR) deployment and the possibility to transmute minor actinides. The COSI6 version is a completely renewed software released in 2006. COSI6 is now coupled with the last version of CESAR (CESAR5.3, based on JEFF3.1.1 nuclear data), allowing calculations on irradiated fuel with 200 fission products and 100 heavy nuclides. A new release is planned in 2013, including in particular the coupling with a recommended database of reactors. An exercise of validation of COSI6, carried out on the French historic PWR fleet, has been performed. During this exercise, quantities such as cumulative natural uranium consumption, cumulative depleted uranium, UOX/MOX spent fuel storage, stocks of reprocessed uranium, plutonium content in fresh MOX fuel, and the annual production of high-level waste have been computed by COSI6 and compared to industrial data. The results have allowed validation of the essential phases of the fuel cycle computation and reinforce the credibility of the results provided by the code.
Remarks on CFD validation: A Boeing Commercial Airplane Company perspective
NASA Technical Reports Server (NTRS)
Rubbert, Paul E.
1987-01-01
Requirements and meaning of validation of computational fluid dynamics codes are discussed. Topics covered include: validating a code, validating a user, and calibrating a code. All results are presented in viewgraph format.
System Design Description for the TMAD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finfrock, S.H.
This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.
NASA Technical Reports Server (NTRS)
Seidel, D. A.
1994-01-01
The Program for Solving the General-Frequency Unsteady Two-Dimensional Transonic Small-Disturbance Equation, XTRAN2L, is used to calculate time-accurate, finite-difference solutions of the nonlinear, small-disturbance potential equation for two-dimensional transonic flow about airfoils. The code can treat forced harmonic, pulse, or aeroelastic transient motions. XTRAN2L uses a transonic small-disturbance equation that incorporates a time-accurate finite-difference scheme. Airfoil flow tangency boundary conditions are defined to include airfoil contour, chord deformation, nondimensional plunge displacement, pitch, and trailing-edge control surface deflection. Forced harmonic motion can be based on (1) harmonic coefficients computed from information in each quarter period of the last cycle of harmonic motion, or (2) Fourier analyses of the last cycle of motion. In pulse motion (an alternative to forced harmonic motion), the airfoil is given a small prescribed pulse in a given mode of motion and the aerodynamic transients are calculated. An aeroelastic transient capability is available within XTRAN2L, wherein the structural equations of motion are coupled with the aerodynamic solution procedure for simultaneous time integration. The wake is represented as a slit downstream of the airfoil trailing edge. XTRAN2L includes nonreflecting far-field boundary conditions. XTRAN2L was developed on a CDC CYBER mainframe running under NOS 2.4. It is written in FORTRAN 5 and uses overlays to minimize storage requirements; the program requires 120K of memory in overlayed form. XTRAN2L was developed in 1987.
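The second option above, extracting harmonic coefficients by Fourier analysis of the last cycle of motion, can be sketched as follows (a generic illustration, not XTRAN2L's Fortran implementation):

```python
import numpy as np

def harmonic_coefficients(last_cycle, n_harmonics=3):
    """Fourier-analyze one full cycle of a periodic signal, returning the
    mean a0 and the cosine/sine coefficients a_k, b_k of the first
    n_harmonics, so that signal ~ a0 + sum_k (a_k cos k*w*t + b_k sin k*w*t)."""
    n = len(last_cycle)
    c = np.fft.rfft(last_cycle) / n
    a0 = c[0].real
    a = 2.0 * c[1:n_harmonics + 1].real
    b = -2.0 * c[1:n_harmonics + 1].imag
    return a0, a, b

# Check against a known signal: 0.5 + cos(t) + 0.3*sin(2t) over one cycle.
t = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
a0, a, b = harmonic_coefficients(0.5 + np.cos(t) + 0.3 * np.sin(2.0 * t))
```

In an unsteady aerodynamic context, the same analysis applied to the lift or moment time history of the last cycle yields the in-phase and out-of-phase airload components.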
Negative feedback regulation of wild-type p53 biosynthesis.
Mosner, J; Mummenbrauer, T; Bauer, C; Sczakiel, G; Grosse, F; Deppert, W
1995-01-01
When growth-arrested mouse fibroblasts re-entered the cell cycle, the rise in tumour suppressor p53 mRNA level markedly preceded the rise in expression of the p53 protein. Furthermore, gamma-irradiation of such cells led to a rapid increase in p53 protein biosynthesis even in the presence of the transcription inhibitor actinomycin D. Both findings strongly suggest that p53 biosynthesis in these cells is regulated at the translational level. We present evidence for an autoregulatory control of p53 expression by a negative feedback loop: p53 mRNA has a predicted tendency to form a stable stem-loop structure that involves the 5'-untranslated region (5'-UTR) plus some 280 nucleotides of the coding sequence. p53 binds tightly to the 5'-UTR region and inhibits the translation of its own mRNA, most likely mediated by the p53-intrinsic RNA re-annealing activity. The inhibition of p53 biosynthesis requires wild-type p53, as it is not observed with MethA mutant p53. p53-catalysed translational inhibition is selective; it might be restricted to p53 mRNA and a few other mRNAs that are able to form extensive stem-loop structures. Release from negative feedback regulation of p53 biosynthesis, e.g. after damage-induced nuclear transport of p53, might provide a means for rapidly increasing p53 protein levels when p53 is required to act as a cell-cycle checkpoint determinant after DNA damage. PMID:7556087
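The qualitative behaviour of such a negative feedback loop can be illustrated with a toy rate equation in which the protein inhibits its own synthesis; all parameters here are hypothetical and not fitted to p53 data:

```python
def simulate(feedback, k_syn=1.0, k_deg=0.1, K=2.0, dt=0.01, steps=20000):
    """Euler integration of a toy translational negative-feedback loop:
    synthesis is inhibited as protein P accumulates (hypothetical rates,
    not fitted to p53 measurements)."""
    P = 0.0
    for _ in range(steps):
        synthesis = k_syn / (1.0 + P / K) if feedback else k_syn
        P += dt * (synthesis - k_deg * P)
    return P

p_fb = simulate(feedback=True)     # feedback holds the steady state well below
p_free = simulate(feedback=False)  # the unregulated level k_syn / k_deg
```

Releasing the feedback (e.g. by sequestering the repressor, as proposed for damage-induced nuclear transport of p53) lets the protein level climb toward the much higher unregulated steady state, which is the rapid-increase behaviour the abstract describes.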
Computer Description of the M561 Utility Truck
1984-10-01
The combinatorial geometry (Com-Geom) description documented in this report is used as input to the Geometric Information for Targets (GIFT) computer code to generate target vulnerability data, which in turn supports GIFT computer code sustainability predictions for Army Spare Components Requirements for Combat (SPARC). This report documents the Com-Geom description of the M561 utility truck.
2006-09-01
and resource sponsors to assign SSP codes to HR billets. A researcher-developed survey of 183 HROs and/or supervisors found: (a) There is a reality-driven trend (insufficient...the KSAs necessary to successfully fulfill the requirements of HR billets. The Navy SSP was developed as a means of defining officer
Software Engineering Laboratory (SEL) compendium of tools, revision 1
NASA Technical Reports Server (NTRS)
1982-01-01
A set of programs used to aid software product development is listed. Known as software tools, such programs include requirements analyzers, design languages, precompilers, code auditors, code analyzers, and software librarians. Abstracts, resource requirements, documentation, processing summaries, and availability are indicated for most tools.
78 FR 25321 - Proposed Collection; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-30
..., Copies Available From: Securities and Exchange Commission, Office of Investor Education and Advocacy... seq.) Rule 204A-1 (the ``Code of Ethics Rule'') requires investment advisers registered with the..., including transactions in any mutual fund managed by the adviser. The Code of Ethics Rule requires access...
24 CFR 983.155 - Completion of housing.
Code of Federal Regulations, 2012 CFR
2012-04-01
... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...
24 CFR 983.155 - Completion of housing.
Code of Federal Regulations, 2014 CFR
2014-04-01
... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...
24 CFR 983.155 - Completion of housing.
Code of Federal Regulations, 2011 CFR
2011-04-01
... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...
24 CFR 983.155 - Completion of housing.
Code of Federal Regulations, 2013 CFR
2013-04-01
... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...
24 CFR 983.155 - Completion of housing.
Code of Federal Regulations, 2010 CFR
2010-04-01
... with local requirements (such as code and zoning requirements); and (ii) An architect's certification that the housing complies with: (A) HUD housing quality standards; (B) State, local, or other building codes; (C) Zoning; (D) The rehabilitation work write-up (for rehabilitated housing) or the work...
Plastid transformation for Rubisco engineering and protocols for assessing expression.
Whitney, Spencer M; Sharwood, Robert E
2014-01-01
The assimilation of CO2 within chloroplasts is catalyzed by the bifunctional enzyme ribulose-1,5-bisphosphate carboxylase/oxygenase, Rubisco. Within higher plants the Rubisco large subunit gene, rbcL, is encoded in the plastid genome, while the Rubisco small subunit gene, RbcS, is encoded in the nucleus by a multi-gene family. Rubisco is considered a poor catalyst due to its slow turnover rate and its additional fixation of O2, which can result in wasteful loss of carbon through the energy-requiring photorespiratory cycle. Improving the carboxylation efficiency and CO2/O2 selectivity of Rubisco within higher plants has been a long-term goal, greatly advanced in recent times by plastid transformation techniques. Here we present experimental methodologies for efficiently engineering Rubisco in the plastids of a tobacco master-line and analyzing leaf Rubisco content.
Functional description of signal processing in the Rogue GPS receiver
NASA Technical Reports Server (NTRS)
Thomas, J. B.
1988-01-01
Over the past year, two Rogue GPS prototype receivers have been assembled and successfully subjected to a variety of laboratory and field tests. A functional description is presented of signal processing in the Rogue receiver, tracing the signal from RF input to the output values of group delay, phase, and data bits. The receiver can track up to eight satellites without time multiplexing among satellites or channels, simultaneously measuring both group delay and phase for each of three channels (L1-C/A, L1-P, L2-P). The Rogue signal processing described requires generation of the code for all three channels. Receiver functional design, which emphasized accuracy, reliability, flexibility, and dynamic capability, is summarized. A detailed functional description of signal processing is presented, including C/A-channel and P-channel processing, carrier-aided averaging of group delays, checks for cycle slips, acquisition, and distinctive features.
Heterochromatin-Encoded Satellite RNAs Induce Breast Cancer.
Zhu, Quan; Hoong, Nien; Aslanian, Aaron; Hara, Toshiro; Benner, Christopher; Heinz, Sven; Miga, Karen H; Ke, Eugene; Verma, Sachin; Soroczynski, Jan; Yates, John R; Hunter, Tony; Verma, Inder M
2018-06-07
Heterochromatic repetitive satellite RNAs are extensively transcribed in a variety of human cancers, including BRCA1 mutant breast cancer. Aberrant expression of satellite RNAs in cultured cells induces the DNA damage response, activates cell cycle checkpoints, and causes defects in chromosome segregation. However, the mechanism by which satellite RNA expression leads to genomic instability is not well understood. Here we provide evidence that increased levels of satellite RNAs in mammary glands induce tumor formation in mice. Using mass spectrometry, we further show that genomic instability induced by satellite RNAs occurs through interactions with BRCA1-associated protein networks required for the stabilization of DNA replication forks. Additionally, de-stabilized replication forks likely promote the formation of RNA-DNA hybrids in cells expressing satellite RNAs. These studies lay the foundation for developing novel therapeutic strategies that block the effects of non-coding satellite RNAs in cancer cells. Copyright © 2018 Elsevier Inc. All rights reserved.
Cell module and fuel conditioner
NASA Technical Reports Server (NTRS)
Hoover, D. Q., Jr.
1980-01-01
The computer code for the detailed analytical model of the MK-2 stacks is described. An ERC proprietary matrix is incorporated in the stacks. The mechanical behavior of the stack during thermal cycles under compression was determined. A 5-cell stack of the MK-2 design was fabricated and tested. Designs for the next three stacks were selected and component fabrication initiated. A 3-cell stack, which verified the use of wet assembly and a new acid fill procedure, was fabricated and tested. Components for the 2 kW test facility were received or fabricated, and construction of the facility is underway. The definition of fuel and water is used in a study of the fuel conditioning subsystem. Kinetic data on several catalysts, both crushed and pellets, were obtained in the differential reactor. A preliminary definition of the equipment requirements for treating tap and recovered water was developed.
Parametric Design of Injectors for LDI-3 Combustors
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Mongia, Hukam; Lee, Phil
2015-01-01
Application of a partially calibrated National Combustion Code (NCC) for providing guidance in the design of the 3rd generation of the Lean-Direct Injection (LDI) multi-element combustion configuration (LDI-3) is summarized. NCC was used to perform non-reacting and two-phase reacting flow computations on several LDI-3 injector configurations in a single-element and a five-element injector array. All computations were performed with a consistent approach for mesh-generation, turbulence, spray simulations, ignition and chemical kinetics-modeling. Both qualitative and quantitative assessment of the computed flowfield characteristics of the several design options led to selection of an optimal LDI-3 injector design that met all the requirements including effective area, aerodynamics and fuel-air mixing criteria. Computed LDI-3 emissions (namely, NOx, CO and UHC) will be compared with the prior generation LDI-2 combustor experimental data at relevant engine cycle conditions.
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Diachin, L F; Garaizar, F X; Henson, V E
2009-10-12
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.
VizieR Online Data Catalog: Slug analysis of star clusters in NGC 628 & 7793 (Krumholz+, 2015)
NASA Astrophysics Data System (ADS)
Krumholz, M. R.; Adamo, A.; Fumagalli, M.; Wofford, A.; Calzetti, D.; Lee, J. C.; Whitmore, B. C.; Bright, S. N.; Grasha, K.; Gouliermis, D. A.; Kim, H.; Nair, P.; Ryon, J. E.; Smith, L. J.; Thilker, D.; Ubeda, L.; Zackrisson, E.
2016-02-01
In this paper we use slug, the Stochastically Lighting Up Galaxies code (da Silva et al. 2012ApJ...745..145D, 2014MNRAS.444.3275D; Krumholz et al. 2015MNRAS.452.1447K), and its post-processing tool for analysis of star cluster properties, cluster_slug, to analyze an initial sample of clusters from the LEGUS (Calzetti et al. 2015AJ....149...51C). A description of the steps required to produce final cluster catalogs of the Legacy Extragalactic UV Survey (LEGUS) targets can be found in Calzetti et al. (2015AJ....149...51C), and in A. Adamo et al. (2015, in preparation). LEGUS is an HST Cycle 21 Treasury program that is imaging 50 nearby galaxies in five broadbands with the WFC3/UVIS, from the NUV to the I band. (1 data file).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ebert, D.
1997-07-01
This is a report on the CSNI Workshop on Transient Thermal-Hydraulic and Neutronic Codes Requirements held at Annapolis, Maryland, USA, November 5-8, 1996. The experts' meeting drew 140 participants from 21 countries; 65 invited papers were presented. The meeting was divided into five areas: (1) current and prospective plans of thermal-hydraulic code development; (2) current and anticipated uses of thermal-hydraulic codes; (3) advances in modeling of thermal-hydraulic phenomena and associated additional experimental needs; (4) numerical methods in multi-phase flows; and (5) programming languages, code architectures, and user interfaces. The workshop consensus identified the following important action items to be addressed by the international community in order to maintain and improve the calculational capability: (a) preserve current code expertise and institutional memory; (b) preserve the ability to use the existing investment in plant transient analysis codes; (c) maintain essential experimental capabilities; (d) develop advanced measurement capabilities to support future code validation work; (e) integrate existing analytical capabilities so as to improve performance and reduce operating costs; (f) exploit the proven advances in code architecture, numerics, graphical user interfaces, and modularization in order to improve code performance and scrutability; and (g) more effectively utilize user experience in modifying and improving the codes.
Orbit determination using real tracking data from FY3C-GNOS
NASA Astrophysics Data System (ADS)
Xiong, Chao; Lu, Chuanfang; Zhu, Jun; Ding, Huoping
2017-08-01
China is currently developing the BeiDou Navigation Satellite System, also known as BDS. The nominal (regional) BDS constellation, which was able to provide preliminary regional positioning and navigation functions, was composed of fourteen satellites (5 GEO, 5 IGSO, and 4 MEO) and was completed by the end of 2013. The Global Navigation Satellite System Occultation Sounder (GNOS) on board the Fengyun-3C (FY3C) satellite, the first BDS/GPS-compatible radio occultation (RO) sounder in the world, was launched on 23 September 2013. The GNOS instrument is capable of tracking up to 6 BeiDou satellites and more than 8 GPS satellites. We first present a quality analysis using one week of onboard BDS/GPS measurements collected by GNOS. Satellite visibility, multipath combinations, and the ratio of cycle slips are analyzed. The analysis of satellite visibility shows that over one week the BDS receiver can track up to 6 healthy satellites. The analysis of multipath combinations (MPC) suggests more multipath is present for BDS than for GPS on the CA code (B1 MPC is 0.597 m, L1 MPC is 0.326 m), but less on the P code (B2 MPC is 0.421 m, L2 MPC is 0.673 m). More cycle slips occur for the BDS than for the GPS receiver, as shown by the ratio of total satellites to cycle slips observed over a 24 h period; the maximum and average values of this ratio for BDS measurements are 72 and 50.29, respectively, smaller than the 368 and 278.71 obtained from GPS measurements. Second, the results of reduced-dynamic orbit determination using BDS/GPS code and phase measurements, a standalone BDS SPP (Single Point Positioning) kinematic solution, and real-time orbit determination using BDS/GPS code measurements are presented and analyzed. Using an overlap analysis, the orbit consistency of FY3C-GNOS is about 3.80 cm. The precision of BDS-only solutions is about 22 cm.
The precision of the FY3C-GNOS orbit with Helmert variance component estimation improves slightly after the BDS observations are added for one week (October 10-16, 2013); in the three-dimensional direction, the orbit precision is improved by 0.31 cm. BDS code observations already allow standalone positioning with an RMS accuracy of at least 22 m using the BDS broadcast ephemeris, while the accuracy is at least 5 m using the BDS precise ephemeris. The standard deviations of the differences of real-time orbit determination with Dynamic Model Compensation using BDS/GPS, GPS, and BDS code measurements are 1.24 m, 1.27 m, and 6.67 m in the three-dimensional direction, respectively. Adding the BDS observations slightly improves the convergence time of real-time orbit determination by 17 s and its accuracy by 0.03 m. The results obtained in this paper are already rather promising.
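The multipath combination (MPC) statistics quoted above are typically formed with the standard code-minus-carrier combination; a sketch follows, with the usage example on GPS L1/L2 frequencies and the carrier phases assumed to be already expressed in metres:

```python
def multipath_combination(P1, L1, L2, f1, f2):
    """Code-minus-carrier multipath combination (teqc-style MP1):
      MP1 = P1 - (1 + 2/(a-1))*L1 + (2/(a-1))*L2,  a = (f1/f2)**2
    P1 is the pseudorange and L1, L2 are carrier phases, all in metres.
    Geometry, clocks, and the first-order ionosphere cancel, leaving code
    multipath plus noise and a constant ambiguity bias."""
    a = (f1 / f2) ** 2
    return P1 - (1.0 + 2.0 / (a - 1.0)) * L1 + (2.0 / (a - 1.0)) * L2

# Ideal error-free observation (GPS L1/L2 frequencies): MP1 is zero.
mp1 = multipath_combination(2.0e7, 2.0e7, 2.0e7, 1575.42e6, 1227.60e6)
```

For BDS the same combination is formed with the B1/B2 carrier frequencies; the RMS of the detrended combination over an arc gives the MPC values reported in the abstract.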
NASA Astrophysics Data System (ADS)
Neveu, M.; Felton, R.; Domagal-Goldman, S. D.; Desch, S. J.; Arney, G. N.
2017-12-01
About 20 Earth-sized planets (0.6-1.6 Earth masses and radii) have now been discovered beyond our solar system [1]. Although such planets are prime targets in the upcoming search for atmospheric biosignatures, their composition, geology, and climate are essentially unconstrained. Yet, developing an understanding of how these factors influence planetary evolution through time and space is essential to establishing abiotic backgrounds against which any deviations can provide evidence for biological activity. To this end, we are building coupled geophysical-geochemical models of abiotic carbon cycling on such planets. Our models are controlled by atmospheric factors such as temperature and composition, and compute interior inputs to atmospheric species. They account for crustal weathering, ocean-atmosphere equilibria, and exchange with the deep interior as a function of planet composition and size (and, eventually, age). Planets in other solar systems differ from the Earth not only in their bulk physical properties, but also likely in their bulk chemical composition [2], which influences key parameters such as the vigor of mantle convection and the near-surface redox state. Therefore, simulating how variations in such parameters affect carbon cycling requires us to simulate the above processes from first principles, rather than by using arbitrary parameterizations derived from observations, as is often done with models of carbon cycling on Earth [3] or extrapolations thereof [4]. As a first step, we have developed a kinetic model of crustal weathering using the PHREEQC code [5] and kinetic data from [6]. We will present the ability of such a model to replicate Earth's carbon cycle using, for the time being, parameterizations for surface-interior-atmosphere exchange processes such as volcanism (e.g., [7]). [1] exoplanet.eu, 7/28/2017. [2] Young et al. (2014) Astrobiology 14, 603-626. [3] Lerman & Wu (2008) Kinetics of Global Geochemical Cycles.
In Kinetics of Water-Rock Interaction (Brantley et al., eds.), Springer, New York. [4] Edson et al. (2012) Astrobiology 12, 562-571. [5] Parkhurst & Appelo (2013) USGS Techniques and Methods 6-A43. [6] Palandri & Kharaka (2008) USGS Report 2004-1068. [7] Kite et al. (2009) ApJ 700, 1732-1749.
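The kinetic rate laws compiled in [6] follow the Palandri & Kharaka form, combining a 25 °C rate constant, an Arrhenius temperature correction, and a pH-dependent activity term. The sketch below uses illustrative parameter values, not values from the compilation:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def dissolution_rate(log_k25, Ea_kJ, T, pH=7.0, n_H=0.0):
    """Surface-normalized mineral dissolution rate (mol m^-2 s^-1) for one
    mechanism in the Palandri & Kharaka form:
      rate = k25 * exp(-Ea/R * (1/T - 1/298.15)) * (a_H+)**n_H
    Parameter values used in the example below are illustrative only."""
    k25 = 10.0 ** log_k25
    arrhenius = math.exp(-(Ea_kJ * 1000.0) / R * (1.0 / T - 1.0 / 298.15))
    return k25 * arrhenius * (10.0 ** (-pH)) ** n_H

# Neutral-mechanism rate for a hypothetical silicate at 25 C and 50 C:
r25 = dissolution_rate(-12.0, 60.0, 298.15)
r50 = dissolution_rate(-12.0, 60.0, 323.15)  # faster at higher temperature
```

In a PHREEQC run, such expressions appear as RATES blocks and are multiplied by reactive surface area and a saturation-state term to drive the weathering fluxes.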
Zuo, Zhibin; Ma, Long; Gong, Zuode; Xue, Lande; Wang, Qibao
2018-05-26
Long non-coding RNAs (lncRNAs) have gained a lot of attention because they participate in several human disorders, including tumors. This study determined the role of the lncRNA CASC15 (cancer susceptibility candidate 15) in the development of tongue squamous cell carcinoma (TSCC). Here, we identified that CASC15 expression was upregulated in TSCC samples and cell lines. We showed that overexpression of CASC15 promoted cell proliferation, cell cycle progression, and migration in TSCC. In addition, we revealed that miR-33a-5p expression was downregulated in TSCC tissues and cell lines. Moreover, we showed that CASC15 expression was negatively correlated with miR-33a-5p expression in TSCC tissues. Ectopic expression of miR-33a-5p suppressed cell proliferation, cell cycle progression, and migration in TSCC. Elevated expression of CASC15 suppressed miR-33a-5p expression and promoted ZEB1 expression in SCC4 cells. Ectopic expression of CASC15 promoted TSCC cell proliferation, cell cycle progression, and migration through targeting miR-33a-5p. These results suggest that lncRNA CASC15 and miR-33a-5p might be exploited as new markers of TSCC and are potential treatment targets for TSCC patients.
Development and Implementation of Dynamic Scripts to Execute Cycled GSI/WRF Forecasts
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi; Berndt, Emily; Li, Xuanli; Watson, Leela
2014-01-01
The Weather Research and Forecasting (WRF) numerical weather prediction (NWP) model and Gridpoint Statistical Interpolation (GSI) data assimilation (DA) are the operational systems that make up the North American Mesoscale (NAM) model and the NAM Data Assimilation System (NDAS) analysis used by National Weather Service forecasters. The Developmental Testbed Center (DTC) manages and distributes the code for the WRF and GSI, but it is up to individual researchers to link the systems together and write scripts to run the systems, which can take considerable time for those not familiar with the code. The objective of this project is to develop and disseminate a set of dynamic scripts that mimic the unique cycling configuration of the operational NAM to enable researchers to develop new modeling and data assimilation techniques that can be easily transferred to operations. The current version of the SPoRT GSI/WRF Scripts (v3.0.1) is compatible with WRF v3.3 and GSI v3.0.
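The cycling logic these scripts implement can be sketched as a simple loop in which each GSI analysis uses the previous WRF forecast as its background; the file names below are hypothetical placeholders, not those of the SPoRT scripts:

```python
from datetime import datetime, timedelta

def run_cycles(start, n_cycles, interval_h=6):
    """Skeleton of an NDAS-style cycled GSI/WRF workflow. Each cycle runs
    a GSI analysis using the previous WRF forecast as background, then a
    WRF forecast from that analysis, which becomes the next background.
    File names are hypothetical placeholders, not the SPoRT scripts' own."""
    background = "cold_start"                # first cycle starts cold
    log = []
    t = start
    for _ in range(n_cycles):
        analysis = f"gsi_anl.{t:%Y%m%d%H}"   # GSI: background + obs -> analysis
        forecast = f"wrfout.{t:%Y%m%d%H}"    # WRF: analysis -> forecast
        log.append((background, analysis, forecast))
        background = forecast                # warm-cycle into the next analysis
        t += timedelta(hours=interval_h)
    return log

steps = run_cycles(datetime(2014, 1, 1, 0), n_cycles=4)
```

The real scripts wrap each of these two steps with namelist preparation, observation staging, and executable launches, but the warm-cycling dependency between consecutive cycles is the essential structure.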
Cis-acting RNA elements in the Hepatitis C virus RNA genome
Sagan, Selena M.; Chahal, Jasmin; Sarnow, Peter
2017-01-01
Hepatitis C virus (HCV) infection is a rapidly increasing global health problem with an estimated 170 million people infected worldwide. HCV is a hepatotropic, positive-sense RNA virus of the family Flaviviridae. As a positive-sense RNA virus, the HCV genome itself must serve as a template for translation, replication and packaging. The viral RNA must therefore be a dynamic structure that is able to readily accommodate structural changes to expose different regions of the genome to viral and cellular proteins to carry out the HCV life cycle. The ∼9600 nucleotide viral genome contains a single long open reading frame flanked by 5′ and 3′ non-coding regions that contain cis-acting RNA elements important for viral translation, replication and stability. Additional cis-acting RNA elements have also been identified in the coding sequences as well as in the 3′ end of the negative-strand replicative intermediate. Herein, we provide an overview of the importance of these cis-acting RNA elements in the HCV life cycle. PMID:25576644
NASA Technical Reports Server (NTRS)
Smith, Crawford F.; Podleski, Steve D.
1993-01-01
The proper use of a computational fluid dynamics code requires a good understanding of the particular code being applied. In this report, results obtained with CFL3D, a thin-layer Navier-Stokes code, are compared with results from PARC3D, a full Navier-Stokes code. To gain an understanding of the use of CFL3D, a simple problem was chosen in which several key features of the code could be exercised: a cone in supersonic flow at an angle of attack. The issues of grid resolution, grid blocking, and multigridding with CFL3D are explored. The use of multigridding resulted in a significant reduction in the computational time required to solve the problem. The solutions obtained are compared with results from the full Navier-Stokes solver PARC3D, and the CFL3D results compared well with the PARC3D solutions.
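The speedup from multigridding comes from pairing cheap smoothing with a coarse-grid correction for the slowly converging smooth error. A minimal two-grid sketch for the 1D Poisson problem (illustrative only; CFL3D's multigrid operates on the Navier-Stokes equations) is:

```python
import numpy as np

def jacobi(u, f, h, sweeps, w=2/3):
    """Weighted-Jacobi smoothing sweeps for -u'' = f with zero boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    """Residual r = f - A u for the 1D Poisson operator A u = -u''."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)
    return r

def two_grid_cycle(u, f, h):
    """Pre-smooth, solve the residual equation exactly on a grid twice as
    coarse, interpolate the correction back, then post-smooth."""
    u = jacobi(u, f, h, 3)
    rc = residual(u, f, h)[::2]              # restrict residual by injection
    m = len(rc) - 2
    A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / (2 * h) ** 2
    ec = np.zeros_like(rc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])  # exact coarse-grid solve
    fine = np.arange(len(u))
    u += np.interp(fine, fine[::2], ec)      # linear prolongation + correction
    return jacobi(u, f, h, 3)

n, h = 65, 1.0 / 64
f, u = np.ones(n), np.zeros(n)
r0 = np.linalg.norm(residual(u, f, h))
for _ in range(3):
    u = two_grid_cycle(u, f, h)
r3 = np.linalg.norm(residual(u, f, h))       # far smaller than r0
```

Smoothing alone stalls on the smooth error modes; the coarse-grid correction eliminates them at a fraction of the fine-grid cost, which is the same mechanism behind the runtime reduction reported for CFL3D.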
Fast H.264/AVC FRExt intra coding using belief propagation.
Milani, Simone
2011-01-01
In the H.264/AVC FRExt coder, the performance of Intra coding significantly surpasses that of previous still-image coding standards, such as JPEG2000, thanks to a massive use of spatial prediction. Unfortunately, the adoption of an extensive set of predictors induces a significant increase in the computational complexity required by the rate-distortion optimization routine. The paper presents a complexity reduction strategy that aims at reducing the computational load of Intra coding with a small loss in compression performance. The proposed algorithm relies on selecting a reduced set of prediction modes according to their probabilities, which are estimated by a belief-propagation procedure. Experimental results show that the proposed method saves up to 60% of the coding time required by an exhaustive rate-distortion optimization method with a negligible loss in performance. Moreover, it permits accurate control of the computational complexity, unlike other methods where the complexity depends upon the coded sequence.
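The mode-pruning idea, keeping only the most probable predictors for full rate-distortion optimization, can be sketched as follows; the mode probabilities and the coverage threshold are illustrative stand-ins for the belief-propagation estimates used in the paper:

```python
def prune_modes(mode_probs, coverage=0.9, min_modes=2):
    """Select a reduced candidate set of intra prediction modes: keep the
    most probable modes until their cumulative probability reaches
    `coverage` (an illustrative threshold, not the paper's). Only this
    subset then undergoes full rate-distortion optimization."""
    ranked = sorted(mode_probs.items(), key=lambda kv: kv[1], reverse=True)
    chosen, total = [], 0.0
    for mode, p in ranked:
        chosen.append(mode)
        total += p
        if total >= coverage and len(chosen) >= min_modes:
            break
    return chosen

# 4x4 intra modes ranked by hypothetical belief-propagation marginals:
probs = {"DC": 0.35, "vertical": 0.30, "horizontal": 0.20,
         "diag_down_left": 0.10, "diag_down_right": 0.05}
subset = prune_modes(probs)
```

Raising or lowering the coverage threshold trades compression performance against coding time, which is how such a scheme gives direct control over computational complexity.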
NASA Astrophysics Data System (ADS)
Watanabe, Junpei; Ishikawa, Hiroaki; Arouette, Xavier; Matsumoto, Yasuaki; Miki, Norihisa
2012-06-01
In this paper, we present a vibrational Braille code display with large-displacement micro-electro-mechanical systems (MEMS) actuator arrays. Tactile receptors are more sensitive to vibrational stimuli than to static ones; therefore, when each cell of the Braille code vibrates at an optimal frequency, subjects can recognize the codes more efficiently. We fabricated a vibrational Braille code display whose cells are actuators consisting of piezoelectric actuators and a hydraulic displacement amplification mechanism (HDAM). The HDAM, which encapsulates an incompressible liquid in microchambers between two flexible polymer membranes, can amplify the displacement of the MEMS actuator. We investigated the voltage required for subjects to recognize Braille codes when each cell, i.e., the large-displacement MEMS actuator, vibrated at various frequencies. Lower voltages were required at vibration frequencies above 50 Hz than below 50 Hz, which verified that the proposed vibrational Braille code display efficiently exploits the characteristics of human tactile receptors.