Identification of limit cycles in multi-nonlinearity, multiple path systems
NASA Technical Reports Server (NTRS)
Mitchell, J. R.; Barron, O. L.
1979-01-01
A method of analysis which identifies limit cycles in autonomous systems with multiple nonlinearities and multiple forward paths is presented. The FORTRAN code for implementing the Harmonic Balance Algorithm is reported. The FORTRAN code is used to identify limit cycles in multiple path and nonlinearity systems while retaining the effects of several harmonic components.
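A minimal single-loop illustration of the harmonic-balance idea the abstract describes may help. The reported FORTRAN code handles multiple nonlinearities, multiple paths, and several harmonics; the sketch below keeps only the first harmonic for one saturation nonlinearity, and the plant G(s), the saturation, and the search grid are illustrative assumptions, not the paper's system.

```python
# First-harmonic (describing-function) balance sketch: a limit cycle is
# predicted where 1 + N(A) * G(jw) = 0.  Plant, nonlinearity, and grid are
# illustrative assumptions; the reported code treats multiple nonlinearities,
# multiple forward paths, and higher harmonics.
import numpy as np

def describing_function_saturation(A, limit=1.0):
    """First-harmonic gain N(A) of a unit-slope saturation with level `limit`."""
    if A <= limit:
        return 1.0
    r = limit / A
    return (2.0 / np.pi) * (np.arcsin(r) + r * np.sqrt(1.0 - r * r))

def G(s):
    """Assumed linear forward path: G(s) = 10 / (s (s + 1) (s + 2))."""
    return 10.0 / (s * (s + 1.0) * (s + 2.0))

best = None
for w in np.linspace(0.1, 10.0, 300):
    for A in np.linspace(0.1, 20.0, 300):
        residual = abs(1.0 + describing_function_saturation(A) * G(1j * w))
        if best is None or residual < best[0]:
            best = (residual, A, w)

residual, A, w = best
print(f"predicted limit cycle: amplitude ~ {A:.2f}, frequency ~ {w:.2f} rad/s "
      f"(balance residual {residual:.3f})")
```

For this assumed plant the search settles near an amplitude of about 2 and a frequency near 1.4 rad/s, the classic describing-function prediction for a saturation in this loop.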
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adrian Miron; Joshua Valentine; John Christenson
2009-10-01
The current state of the art in nuclear fuel cycle (NFC) modeling is an eclectic mixture of codes with various levels of applicability, flexibility, and availability. In support of advanced fuel cycle systems analyses, especially those by the Advanced Fuel Cycle Initiative (AFCI), the University of Cincinnati in collaboration with Idaho State University carried out a detailed review of the existing codes describing various aspects of the nuclear fuel cycle and identified the research and development needs required for a comprehensive model of the global nuclear energy infrastructure and the associated nuclear fuel cycles. Relevant information obtained on the NFC codes was compiled into a relational database that allows easy access to the various codes' properties. Additionally, the research analyzed the gaps in the NFC computer codes with respect to their potential integration into programs that perform comprehensive NFC analysis.
Standardized verification of fuel cycle modeling
Feng, B.; Dixon, B.; Sunny, E.; ...
2016-04-05
A nuclear fuel cycle systems modeling and code-to-code comparison effort was coordinated across multiple national laboratories to verify the tools needed to perform fuel cycle analyses of the transition from a once-through nuclear fuel cycle to a sustainable potential future fuel cycle. For this verification study, a simplified example transition scenario was developed to serve as a test case for the four systems codes involved (DYMOND, VISION, ORION, and MARKAL), each used by a different laboratory participant. In addition, all participants produced spreadsheet solutions for the test case to check all the mass flows and reactor/facility profiles on a year-by-year basis throughout the simulation period. The test case specifications describe a transition from the current US fleet of light water reactors to a future fleet of sodium-cooled fast reactors that continuously recycle transuranic elements as fuel. After several initial coordinated modeling and calculation attempts, it was revealed that most of the differences in code results were not due to different code algorithms or calculation approaches, but due to different interpretations of the input specifications among the analysts. Therefore, the specifications for the test case itself were iteratively updated to remove ambiguity and to help calibrate interpretations. In addition, a few corrections and modifications were made to the codes as well, which led to excellent agreement between all codes and spreadsheets for this test case. Although no fuel cycle transition analysis codes matched the spreadsheet results exactly, all remaining differences in the results were due to fundamental differences in code structure and/or were thoroughly explained. As a result, the specifications and example results are provided so that they can be used to verify additional codes in the future for such fuel cycle transition scenarios.
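A minimal sketch of the kind of year-by-year spreadsheet bookkeeping used for these checks is given below. All numbers (fleet size, fuel loadings per GWe-year, and the linear transition schedule) are placeholder assumptions and do not reproduce the benchmark specification; only the structure of the comparison is illustrated.

```python
# Illustrative year-by-year mass-flow bookkeeping for an LWR-to-SFR transition,
# in the spirit of the spreadsheet checks described above.  All numbers are
# placeholder assumptions, not the benchmark specification.
LWR_FUEL_PER_GWE_YR = 20.0   # t heavy metal per GWe-year (assumed)
SFR_FUEL_PER_GWE_YR = 10.0   # t heavy metal per GWe-year (assumed)

def transition_schedule(year, start=2030, end=2070, total_gwe=100.0):
    """Linear hand-off of capacity from LWRs to SFRs between `start` and `end`."""
    frac_sfr = min(max((year - start) / (end - start), 0.0), 1.0)
    return total_gwe * (1.0 - frac_sfr), total_gwe * frac_sfr

rows = []
for year in range(2020, 2101):
    lwr_gwe, sfr_gwe = transition_schedule(year)
    rows.append({
        "year": year,
        "lwr_gwe": lwr_gwe,
        "sfr_gwe": sfr_gwe,
        "uox_loaded_t": lwr_gwe * LWR_FUEL_PER_GWE_YR,
        "tru_fuel_loaded_t": sfr_gwe * SFR_FUEL_PER_GWE_YR,
    })

# A code-to-code verification would compare columns like these, year by year.
for row in rows[::20]:
    print(row)
```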
NASA Technical Reports Server (NTRS)
Mclennan, G. A.
1986-01-01
This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to determine the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.
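A minimal ideal-Rankine sketch of the kind of state-point bookkeeping ANL/RBC performs is shown below. The fixed enthalpy values stand in for the tabular working-fluid property input the code actually reads, and the component efficiencies and mass flow are illustrative assumptions.

```python
# Minimal Rankine bottoming-cycle power/efficiency estimate.  The enthalpies
# below stand in for the tabular working-fluid properties the ANL/RBC code
# reads; they and the component efficiencies are illustrative assumptions.
def rankine_bottoming_cycle(m_dot=5.0,              # kg/s working fluid (assumed)
                            h_turbine_in=3000.0e3,  # J/kg at boiler exit
                            h_turbine_out_s=2300.0e3,  # J/kg isentropic exit
                            h_cond_out=200.0e3,     # J/kg saturated liquid
                            h_pump_out=210.0e3,     # J/kg after ideal pump
                            eta_turbine=0.85,
                            eta_pump=0.75):
    w_turbine = eta_turbine * (h_turbine_in - h_turbine_out_s)  # J/kg
    w_pump = (h_pump_out - h_cond_out) / eta_pump               # J/kg
    q_in = h_turbine_in - (h_cond_out + w_pump)                 # J/kg from source gas
    power = m_dot * (w_turbine - w_pump)                        # W net
    return power, (w_turbine - w_pump) / q_in

power, eta = rankine_bottoming_cycle()
print(f"net power ~ {power/1e3:.0f} kW, cycle efficiency ~ {eta:.1%}")
```

With these assumed numbers the cycle delivers roughly 2.9 MW at about 21% efficiency, typical of a bottoming cycle on a hot gas stream.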
Parametric Studies of the Ejector Process within a Turbine-Based Combined-Cycle Propulsion System
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Walker, James F.; Trefny, Charles J.
1999-01-01
Performance characteristics of the ejector process within a turbine-based combined-cycle (TBCC) propulsion system are investigated using the NPARC Navier-Stokes code. The TBCC concept integrates a turbine engine with a ramjet into a single propulsion system that may efficiently operate from takeoff to high Mach number cruise. At the operating point considered, corresponding to a flight Mach number of 2.0, an ejector serves to mix flow from the ramjet duct with flow from the turbine engine. The combined flow then passes through a diffuser where it is mixed with hydrogen fuel and burned. Three sets of fully turbulent Navier-Stokes calculations are compared with predictions from a cycle code developed specifically for the TBCC propulsion system. A baseline ejector system is investigated first. The Navier-Stokes calculations indicate that the flow leaving the ejector is not completely mixed, which may adversely affect the overall system performance. Two additional sets of calculations are presented; one set that investigated a longer ejector region (to enhance mixing) and a second set which also utilized the longer ejector but replaced the no-slip surfaces of the ejector with slip (inviscid) walls in order to resolve discrepancies with the cycle code. The three sets of Navier-Stokes calculations and the TBCC cycle code predictions are compared to determine the validity of each of the modeling approaches.
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate joint design of quasi-cyclic low-density-parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding in the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, including both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and the channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of the coded cooperation employing jointly designed QC-LDPC codes is better than those of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
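For context, the standard girth-4 test on a QC-LDPC exponent matrix is sketched below: a length-4 cycle exists exactly when some 2x2 submatrix of circulant shift values satisfies the alternating-sum condition modulo the circulant size. The example exponent matrix and circulant sizes are illustrative, not the jointly designed matrices of the paper.

```python
# Standard girth-4 test for a QC-LDPC exponent matrix: a length-4 cycle exists
# iff some 2x2 submatrix of shift values e satisfies
#   e[i1][j1] - e[i1][j2] + e[i2][j2] - e[i2][j1] == 0  (mod Z),
# with all four blocks non-zero (-1 denotes an all-zero block).  The example
# matrix and circulant sizes Z are illustrative, not the paper's joint design.
from itertools import combinations

def has_girth4(exponent_matrix, Z):
    rows, cols = len(exponent_matrix), len(exponent_matrix[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            e = [exponent_matrix[i1][j1], exponent_matrix[i1][j2],
                 exponent_matrix[i2][j2], exponent_matrix[i2][j1]]
            if -1 in e:          # a zero block breaks the candidate cycle
                continue
            if (e[0] - e[1] + e[2] - e[3]) % Z == 0:
                return True
    return False

E = [[0, 1, 2, 3],
     [0, 2, 4, 6],
     [0, 3, 6, 9]]
print("girth-4 cycle with Z=16:", has_girth4(E, Z=16))  # False for this E
print("girth-4 cycle with Z=4: ", has_girth4(E, Z=4))   # True: 0-2+6-0 = 4 = 0 mod 4
```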
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance, and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
Utilization of recently developed codes for high power Brayton and Rankine cycle power systems
NASA Technical Reports Server (NTRS)
Doherty, Michael P.
1993-01-01
Two recently developed FORTRAN computer codes for high power Brayton and Rankine thermodynamic cycle analysis for space power applications are presented. The codes were written in support of an effort to develop a series of subsystem models for multimegawatt Nuclear Electric Propulsion, but their use is not limited to nuclear heat sources or to electric propulsion. Code development background, a description of the codes, some sample input/output from one of the codes, and future plans/implications for the use of these codes by NASA's Lewis Research Center are provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moisseytsev, A.; Sienicki, J. J.
2011-11-07
Significant progress has been made in the ongoing development of the Argonne National Laboratory (ANL) Plant Dynamics Code (PDC), the ongoing investigation and development of control strategies, and the analysis of system transient behavior for supercritical carbon dioxide (S-CO{sub 2}) Brayton cycles. Several code modifications have been introduced during FY2011 to extend the range of applicability of the PDC and to improve its calculational stability and speed. A new and innovative approach was developed to couple the Plant Dynamics Code for S-CO{sub 2} cycle calculations with SAS4A/SASSYS-1 Liquid Metal Reactor Code System calculations for the transient system level behavior on the reactor side of a Sodium-Cooled Fast Reactor (SFR) or Lead-Cooled Fast Reactor (LFR). The new code system allows use of the full capabilities of both codes such that whole-plant transients can now be simulated without additional user interaction. Several other code modifications, including the introduction of compressor surge control, a new approach for determining the solution time step for efficient computational speed, an updated treatment of S-CO{sub 2} cycle flow mergers and splits, a modified enthalpy equation to improve the treatment of negative flow, and a revised solution of the reactor heat exchanger (RHX) equations coupling the S-CO{sub 2} cycle to the reactor, were introduced to the PDC in FY2011. All of these modifications have improved the code computational stability and computational speed, while not significantly affecting the results of transient calculations. The improved PDC was used to continue the investigation of S-CO{sub 2} cycle control and transient behavior. The coupled PDC-SAS4A/SASSYS-1 code capability was used to study the dynamic characteristics of a S-CO{sub 2} cycle coupled to a SFR plant. Cycle control was investigated in terms of the ability of the cycle to respond to a linear reduction in the electrical grid demand from 100% to 0% at a rate of 5%/minute. It was determined that utilization of turbine throttling control below 50% load improves the cycle efficiency significantly. Consequently, the cycle control strategy has been updated to include turbine throttle valve control. The new control strategy still relies on inventory control in the 50%-90% load range and turbine bypass for fine and fast generator output adjustments, but it now also includes turbine throttling control in the 0%-50% load range. In an attempt to investigate the feasibility of using the S-CO{sub 2} cycle for normal decay heat removal from the reactor, the cycle control study was extended beyond the investigation of normal load following. It was shown that such operation is possible with the extension of the inventory and the turbine throttling controls. However, the cycle operation in this range is calculated to be so inefficient that energy would need to be supplied from the electrical grid, assuming that the generator could be operated in a motoring mode with an input electrical energy from the grid having a magnitude of about 20% of the nominal plant output electrical power level, in order to maintain circulation of the CO{sub 2} in the cycle. The work on investigation of cycle operation at low power level will be continued in the future. In addition to the cycle control study, the coupled PDC-SAS4A/SASSYS-1 code system was also used to simulate thermal transients in the sodium-to-CO{sub 2} heat exchanger.
Several possible conditions with the potential to introduce significant changes to the heat exchanger temperatures were identified and simulated. The conditions range from reactor scram and primary or intermediate sodium pump failure on the reactor side to pipe breaks and valve malfunctions on the S-CO{sub 2} side. It was found that the maximum possible rate of heat exchanger wall temperature change for the particular heat exchanger design assumed is limited to ±7 °C/s for less than 10 seconds. Modeling in the Plant Dynamics Code has been compared with available data from the Sandia National Laboratories (SNL) small-scale S-CO{sub 2} Brayton cycle demonstration that is being assembled in a phased approach, currently at Barber-Nichols Inc. and at SNL in the future. The available data were obtained with an earlier configuration of the S-CO{sub 2} loop involving only a single turbo-alternator-compressor (TAC) instead of two TACs, a single low temperature recuperator (LTR) instead of both a LTR and a high temperature recuperator (HTR), and fewer electric heaters than the full set to be installed later. Due to the absence of the full heating capability as well as the lack of a high temperature recuperator providing additional recuperation, the temperature conditions obtained with the loop are too low to be prototypical of the S-CO{sub 2} cycle.
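The updated load-range logic described above can be summarized in a few lines of code. The thresholds (throttling below 50%, inventory control in the 50-90% range, bypass for fine and fast trim) are as stated in the abstract; the selection function itself is a sketch, not the Plant Dynamics Code logic.

```python
# Illustrative encoding of the updated S-CO2 cycle control strategy described
# above.  The mapping function is a sketch, not the Plant Dynamics Code.
def primary_control_action(load_fraction):
    """Return the primary control mechanism for a given grid load fraction."""
    if not 0.0 <= load_fraction <= 1.0:
        raise ValueError("load fraction must be between 0 and 1")
    if load_fraction < 0.50:
        return "turbine throttle valve control"
    if load_fraction <= 0.90:
        return "CO2 inventory control"
    return "turbine bypass control"

for load in (1.0, 0.75, 0.30, 0.0):
    print(f"{load:4.0%} load -> {primary_control_action(load)} "
          f"(+ turbine bypass for fine/fast trim)")
```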
NASA Technical Reports Server (NTRS)
Jones, Scott M.
2007-01-01
This document is intended as an introduction to the analysis of gas turbine engine cycles using the Numerical Propulsion System Simulation (NPSS) code. It is assumed that the analyst has a firm understanding of fluid flow, gas dynamics, thermodynamics, and turbomachinery theory. The purpose of this paper is to provide for the novice the information necessary to begin cycle analysis using NPSS. This paper and the annotated example serve as a starting point and by no means cover the entire range of information and experience necessary for engine performance simulation. NPSS syntax is presented but for a more detailed explanation of the code the user is referred to the NPSS User Guide and Reference document (ref. 1).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moisseytsev, A.; Sienicki, J. J.
2011-04-12
The analysis of specific control strategies and dynamic behavior of the supercritical carbon dioxide (S-CO{sub 2}) Brayton cycle has been extended to the two reactor types selected for continued development under the Generation IV Nuclear Energy Systems Initiative; namely, the Very High Temperature Reactor (VHTR) and the Sodium-Cooled Fast Reactor (SFR). Direct application of the standard S-CO{sub 2} recompression cycle to the VHTR was found to be challenging because of the mismatch between the temperature drop of the He gaseous reactor coolant through the He-to-CO{sub 2} reactor heat exchanger (RHX) and the temperature rise of the CO{sub 2} through the RHX. The reference VHTR features a large temperature drop of 450 C between the assumed core outlet and inlet temperatures of 850 and 400 C, respectively. This large temperature difference is an essential feature of the VHTR enabling a lower He flow rate, reducing the required core velocities and pressure drop. In contrast, the standard recompression S-CO{sub 2} cycle tends to operate with a temperature rise through the RHX of about 150 C, reflecting the temperature drop as the CO{sub 2} expands from 20 MPa to 7.4 MPa in the turbine and the fact that the cycle is highly recuperated such that the CO{sub 2} entering the RHX is effectively preheated. Because of this mismatch, direct application of the standard recompression cycle results in a relatively poor cycle efficiency of 44.9%. However, two approaches have been identified by which the S-CO{sub 2} cycle can be successfully adapted to the VHTR and the benefits of the S-CO{sub 2} cycle, especially a significant gain in cycle efficiency, can be realized. The first approach involves the use of three separate cascaded S-CO{sub 2} cycles. Each S-CO{sub 2} cycle is coupled to the VHTR through its own He-to-CO{sub 2} RHX in which the He temperature is reduced by 150 C. The three cycles have efficiencies of 54, 50, and 44%, respectively, resulting in a net cycle efficiency of 49.3%. The other approach involves reducing the minimum cycle pressure significantly below the critical pressure such that the temperature drop in the turbine is increased while the minimum cycle temperature is maintained above the critical temperature to prevent the formation of a liquid phase. The latter approach also involves the addition of a precooler and a third compressor before the main compressor to retain the benefits of compression near the critical point with the main compressor. For a minimum cycle pressure of 1 MPa, a cycle efficiency of 49.5% is achieved. Either approach opens the door to applying the S-CO{sub 2} cycle to the VHTR. In contrast, the SFR system typically has a core outlet-inlet temperature difference of about 150 C such that the standard recompression cycle is ideally suited for direct application to the SFR. The ANL Plant Dynamics Code has been modified for application to the VHTR and SFR when the reactor side dynamic behavior is calculated with another system level computer code such as SAS4A/SASSYS-1 in the SFR case. The key modification involves modeling heat exchange in the RHX, accepting time dependent tabular input from the reactor code, and generating time dependent tabular input to the reactor code such that both the reactor and S-CO{sub 2} cycle sides can be calculated in a convergent iterative scheme.
This approach retains the modeling benefits provided by the detailed reactor system level code and can be applied to any reactor system type incorporating a S-CO{sub 2} cycle. This approach was applied to the particular calculation of a scram scenario for a SFR in which the main and intermediate sodium pumps are not tripped and the generator is not disconnected from the electrical grid in order to enhance heat removal from the reactor system thereby enhancing the cooldown rate of the Na-to-CO{sub 2} RHX. The reactor side is calculated with SAS4A/SASSYS-1 while the S-CO{sub 2} cycle is calculated with the Plant Dynamics Code with a number of iterations over a timescale of 500 seconds. It is found that the RHX undergoes a maximum cooldown rate of approximately -0.3 °C/s. The Plant Dynamics Code was also modified to decrease its running time by replacing the compressible flow form of the momentum equation with an incompressible flow equation for use inside of the cooler or recuperators where the CO{sub 2} has a compressibility similar to that of a liquid. Appendices provide a quasi-static control strategy for a SFR as well as the self-adaptive linear function fitting algorithm developed to produce the tabular data for input to the reactor code and Plant Dynamics Code from the detailed output of the other code.
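As a quick check of the cascaded-cycle figure quoted above, assume each of the three S-CO2 cycles absorbs one third of the reactor heat (equal 150 C slices of the He temperature drop with roughly constant He heat capacity); the equal-share assumption is mine, the efficiencies are those quoted in the abstract.

```latex
% Net efficiency of three cascaded S-CO2 cycles, each assumed to absorb one
% third of the reactor heat:
\[
  \eta_{\text{net}} \approx \frac{\eta_1 + \eta_2 + \eta_3}{3}
                    = \frac{0.54 + 0.50 + 0.44}{3}
                    \approx 0.493 ,
\]
```

consistent with the 49.3% net cycle efficiency quoted for the cascaded option.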
Enhanced absorption cycle computer model
NASA Astrophysics Data System (ADS)
Grossman, G.; Wilk, M.
1993-09-01
Absorption heat pumps have received renewed and increasing attention in the past two decades. The rising cost of electricity has made the particular features of this heat-powered cycle attractive for both residential and industrial applications. Solar-powered absorption chillers, gas-fired domestic heat pumps, and waste-heat-powered industrial temperature boosters are a few of the applications recently subjected to intensive research and development. The absorption heat pump research community has begun to search for both advanced cycles in various multistage configurations and new working fluid combinations with potential for enhanced performance and reliability. The development of working absorption systems has created a need for reliable and effective system simulations. A computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components and property subroutines containing thermodynamic properties of the working fluids. The user conveys to the computer an image of his cycle by specifying the different subunits and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system, and the heat duty at each unit, from which the coefficient of performance (COP) may be determined. This report describes the code and its operation, including improvements introduced into the present version. Simulation results are described for LiBr-H2O triple-effect cycles, LiCl-H2O solar-powered open absorption cycles, and NH3-H2O single-effect and generator-absorber heat exchange cycles. An appendix contains the user's manual.
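The modular approach described above can be illustrated with a toy example: each unit subroutine contributes residual equations (mass and energy balances), and a solver finds the unknown state points simultaneously. The single "absorber" unit, the linear property fit, and all numbers below are illustrative stand-ins for the code's unit and property subroutines.

```python
# Minimal sketch of the modular solution strategy: each unit subroutine
# returns residuals of its governing balances, and the residuals are solved
# simultaneously for the unknown state points.  The one-unit example, the
# linear property fit, and all numbers are illustrative assumptions.
from scipy.optimize import fsolve

def h_solution(T, x):
    """Toy enthalpy fit for a LiBr-H2O-like solution, kJ/kg (assumed)."""
    return 2.0 * T - 50.0 * x

def absorber(unknowns, m_sol=1.0, m_vap=0.1, h_vap=2600.0, q_out=300.0):
    """Residuals of the salt-mass and energy balances around an absorber."""
    T_out, x_out = unknowns
    x_in, T_in = 0.60, 90.0
    mass = m_sol * x_in - (m_sol + m_vap) * x_out            # LiBr balance
    energy = (m_sol * h_solution(T_in, x_in) + m_vap * h_vap
              - (m_sol + m_vap) * h_solution(T_out, x_out) - q_out)
    return [mass, energy]

T_out, x_out = fsolve(absorber, x0=[40.0, 0.55])
print(f"absorber outlet: T = {T_out:.1f} C, x = {x_out:.3f}")
```

In the full code many such unit blocks are connected according to the user's cycle image and solved together, from which the COP follows.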
1983-10-01
SYSTEMS OBJECTIVES. This study was conducted as part of a continuing effort to obtain actual (historical) life cycle costs of major Army systems from ... Procurement, AMS Code for RDTE, etc.). System life cycle costs cut across appropriation lines. A common architecture should be prerequisite to ... life cycle costs of major Army systems have not been successful, but attention recently has been directed toward the possibility that a significant
Techniques for the analysis of data from coded-mask X-ray telescopes
NASA Technical Reports Server (NTRS)
Skinner, G. K.; Ponman, T. J.; Hammersley, A. P.; Eyles, C. J.
1987-01-01
Several techniques useful in the analysis of data from coded-mask telescopes are presented. Methods of handling changes in the instrument pointing direction are reviewed and ways of using FFT techniques to do the deconvolution considered. Emphasis is on techniques for optimally-coded systems, but it is shown that the range of systems included in this class can be extended through the new concept of 'partial cycle averaging'.
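A minimal one-dimensional sketch of the FFT-based cross-correlation decoding discussed above is given below for a cyclic (optimally coded) system. The random mask, the two point sources, and the balanced +1/-1 decoding weights are illustrative assumptions, not the instrument's coding pattern.

```python
# 1-D sketch of FFT-based cross-correlation decoding for a cyclic coded mask.
# Mask, sources, and decoding weights are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 64
mask = rng.integers(0, 2, n)                # open (1) / closed (0) elements
decoder = np.where(mask == 1, 1.0, -1.0)    # balanced correlation weights

source = np.zeros(n)
source[[10, 37]] = [5.0, 2.0]               # two assumed point sources

# Detector counts for a cyclic system: circular convolution of source and mask.
detector = np.real(np.fft.ifft(np.fft.fft(source) * np.fft.fft(mask)))

# Decode by cross-correlating the detector image with the decoding array,
# again via FFTs (the conjugate turns convolution into correlation).
sky = np.real(np.fft.ifft(np.fft.fft(detector) * np.conj(np.fft.fft(decoder))))

print("reconstructed peaks at bins:", sorted(np.argsort(sky)[-2:]))
```

Because the balanced decoder's correlation with the mask is sharply peaked at zero lag, the reconstruction recovers peaks near the assumed source positions (bins 10 and 37), up to coding noise from the finite random mask.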
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moisseytsev, A.; Sienicki, J. J.
2012-05-10
Significant progress has been made on the development of a control strategy for the supercritical carbon dioxide (S-CO{sub 2}) Brayton cycle enabling removal of power from an autonomous load following Sodium-Cooled Fast Reactor (SFR) down to decay heat levels such that the S-CO{sub 2} cycle can be used to cool the reactor until decay heat can be removed by the normal shutdown heat removal system or a passive decay heat removal system such as Direct Reactor Auxiliary Cooling System (DRACS) loops with DRACS in-vessel heat exchangers. This capability of the new control strategy eliminates the need for use of a separate shutdown heat removal system which might also use supercritical CO{sub 2}. It has been found that this capability can be achieved by introducing a new control mechanism involving shaft speed control for the common shaft joining the turbine and two compressors following reduction of the load demand from the electrical grid to zero. Following disconnection of the generator from the electrical grid, heat is removed from the intermediate sodium circuit through the sodium-to-CO{sub 2} heat exchanger, the turbine solely drives the two compressors, and heat is rejected from the cycle through the CO{sub 2}-to-water cooler. To investigate the effectiveness of shaft speed control, calculations are carried out using the coupled Plant Dynamics Code-SAS4A/SASSYS-1 code for a linear load reduction transient for a 1000 MWt metallic-fueled SFR with autonomous load following. No deliberate motion of control rods or adjustment of sodium pump speeds is assumed to take place. It is assumed that the S-CO{sub 2} turbomachinery shaft speed linearly decreases from 100 to 20% nominal following reduction of grid load to zero. The reactor power is calculated to autonomously decrease down to 3% nominal, providing a lengthy window in time for the switchover to the normal shutdown heat removal system or for a passive decay heat removal system to become effective. However, the calculations reveal that the compressor conditions approach surge, such that the need for a surge control system for each compressor is identified. Thus, it is demonstrated that the S-CO{sub 2} cycle can operate in the initial decay heat removal mode even with autonomous reactor control. Because external power is not needed to drive the compressors, the results show that the S-CO{sub 2} cycle can be used for initial decay heat removal for a lengthy interval in time in the absence of any off-site electrical power. The turbine provides sufficient power to drive the compressors. Combined with autonomous reactor control, this represents a significant safety advantage of the S-CO{sub 2} cycle by maintaining removal of the reactor power until the core decay heat falls to levels well below those for which the passive decay heat removal system is designed. The new control strategy is an alternative to a split-shaft layout involving separate power and compressor turbines which had previously been identified as a promising approach enabling heat removal from a SFR at low power levels. The current results indicate that the split-shaft configuration does not provide any significant benefits for the S-CO{sub 2} cycle over the current single-shaft layout with shaft speed control. It has been demonstrated that when connected to the grid the single-shaft cycle can effectively follow the load over the entire range. No compressor speed variation is needed while power is delivered to the grid.
When the system is disconnected from the grid, the shaft speed can be changed as effectively as it would be with the split-shaft arrangement. In the split-shaft configuration, zero generator power means disconnection of the power turbine, such that the resulting system will be almost identical to the single-shaft arrangement. Without this advantage of the split-shaft configuration, the economic benefits of the single-shaft arrangement, provided by just one turbine and lower losses at the design point, are more important to the overall cycle performance. Therefore, the single-shaft configuration shall be retained as the reference arrangement for S-CO{sub 2} cycle power converter preconceptual designs. Improvements to the ANL Plant Dynamics Code have been carried out. The major code improvement is the introduction of a restart capability, which simplifies investigation of control strategies for very long transients. Another code modification is the transfer of the entire code to a new Intel Fortran compiler; the execution of the code using the new compiler was verified by demonstrating that the same results are obtained as with the previous Compaq Visual Fortran compiler.
Comparison of Engine Cycle Codes for Rocket-Based Combined Cycle Engines
NASA Technical Reports Server (NTRS)
Waltrup, Paul J.; Auslender, Aaron H.; Bradford, John E.; Carreiro, Louis R.; Gettinger, Christopher; Komar, D. R.; McDonald, J.; Snyder, Christopher A.
2002-01-01
This paper summarizes the results from a one-day workshop on Rocket-Based Combined Cycle (RBCC) Engine Cycle Codes held in Monterey, CA, in November 2000 at the 2000 JANNAF JPM, with the authors as primary participants. The objectives of the workshop were to discuss and compare the merits of existing RBCC engine cycle codes being used by government and industry to predict RBCC engine performance and interpret experimental results. These merits included physical and chemical modeling, accuracy, and user friendliness. The ultimate purpose of the workshop was to identify the best codes for analyzing RBCC engines and to document any potential shortcomings, not to demonstrate the merits or deficiencies of any particular engine design. Five cases representative of the operating regimes of typical RBCC engines were used as the basis of these comparisons. These included Mach 0 sea level static, Mach 1.0 and Mach 2.5 Air-Augmented-Rocket (AAR), Mach 4 subsonic combustion ramjet or dual-mode scramjet, and Mach 8 scramjet operating modes. Specification of a generic RBCC engine geometry and concomitant component operating efficiencies, bypass ratios, fuel/oxidizer/air equivalence ratios, and flight dynamic pressures were provided. The engine included an air inlet, isolator duct, axial rocket motor/injector, axial wall fuel injectors, diverging combustor, and exit nozzle. Gaseous hydrogen was used as the fuel, with the rocket portion of the system using a gaseous H2/O2 propellant system to avoid cryogenic issues. The results of the workshop, even after post-workshop adjudication of differences, were surprising. They showed that the codes predicted essentially the same performance at the Mach 0 and 1 conditions, but progressively diverged from a common value (for example, for fuel specific impulse, Isp) as the flight Mach number increased, with the largest differences at Mach 8. The example cases and results are compared and discussed in this paper.
1989-02-01
installs, and provides life cycle support for information management systems. 16. Provides information and reports to higher authority and the scientific com... instruction/policy. ... Evaluation and Survey Systems - Develops systems to evaluate the effectiveness of quality of life programs and to improve the quality of personnel
Automotive Gas Turbine Power System-Performance Analysis Code
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
1997-01-01
An open cycle gas turbine numerical modelling code suitable for thermodynamic performance analysis (i.e., thermal efficiency, specific fuel consumption, cycle state points, working fluid flow rates, etc.) of automotive and aircraft powerplant applications has been generated at the NASA Lewis Research Center's Power Technology Division. This code can be made available to automotive gas turbine preliminary design efforts, either in its present version or, assuming that resources can be obtained to incorporate empirical models for component weight and packaging volume, in a later version that includes the weight-volume estimator feature. The paper contains a brief discussion of the capabilities of the presently operational version of the code, including a listing of input and output parameters and actual sample output listings.
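A minimal open Brayton-cycle state-point calculation of the kind such a code performs is sketched below. The constant specific heats, component efficiencies, pressure ratio, turbine inlet temperature, and fuel heating value are illustrative assumptions, not defaults of the NASA code.

```python
# Minimal open-cycle gas turbine performance estimate with constant specific
# heats.  All parameter values are illustrative assumptions.
def gas_turbine_performance(pr=8.0, t1=288.0, t3=1300.0,
                            eta_c=0.85, eta_t=0.88,
                            cp=1005.0, gamma=1.4, lhv=43.0e6):
    t2s = t1 * pr ** ((gamma - 1.0) / gamma)
    t2 = t1 + (t2s - t1) / eta_c                   # compressor exit, K
    t4s = t3 / pr ** ((gamma - 1.0) / gamma)
    t4 = t3 - eta_t * (t3 - t4s)                   # turbine exit, K
    w_net = cp * ((t3 - t4) - (t2 - t1))           # J per kg of air
    q_in = cp * (t3 - t2)                          # J per kg of air
    eta_th = w_net / q_in
    sfc = (q_in / lhv) / w_net * 3.6e9             # g fuel per kWh (approx.)
    return eta_th, sfc

eta, sfc = gas_turbine_performance()
print(f"thermal efficiency ~ {eta:.1%}, SFC ~ {sfc:.0f} g/kWh")
```

With these assumed inputs the simple model returns roughly 32% thermal efficiency and about 260 g/kWh, in the range expected for a small recuperator-free gas turbine.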
An Object Oriented Analysis Method for Ada and Embedded Systems
1989-12-01
expansion of the paradigm from the coding and designing activities into the earlier activity of requirements analysis. It begins by discussing the application of ... response time: 0.1 seconds. Step 1e: Identify Known Restrictions on the Software. "The cruise control system object code must fit within 16K of memory." ... application of object-oriented techniques to the coding and design phases of the life cycle, as well as various approaches to requirements analysis.
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.; Jones, Scott M.
1991-01-01
This analysis and this computer code apply to full, split, and dual expander cycles. Heat regeneration from the turbine exhaust to the pump exhaust is allowed. The combustion process is modeled as one of chemical equilibrium in an infinite-area or a finite-area combustor. Gas composition in the nozzle may be either equilibrium or frozen during expansion. This report, which serves as a users guide for the computer code, describes the system, the analysis methodology, and the program input and output. Sample calculations are included to show effects of key variables such as nozzle area ratio and oxidizer-to-fuel mass ratio.
Feasibility of coded vibration in a vibro-ultrasound system for tissue elasticity measurement.
Zhao, Jinxin; Wang, Yuanyuan; Yu, Jinhua; Li, Tianjie; Zheng, Yong-Ping
2016-07-01
The ability of various methods for elasticity measurement and imaging is hampered by the vibration amplitude on biological tissues. Based on the inference that coded excitation will improve the performance of the cross-correlation function of the tissue displacement waves, the idea of exerting encoded external vibration on tested samples for measuring their elasticity is proposed. It was implemented by integrating a programmable vibration generation function into a customized vibro-ultrasound system to generate Barker coded vibration for elasticity measurement. Experiments were conducted on silicone phantoms and porcine muscles. The results showed that coded excitation of the vibration enhanced the accuracy and robustness of the elasticity measurement, especially in low signal-to-noise ratio scenarios. In the phantom study, the shear modulus measured with coded vibration had an R² = 0.993 linear correlation with the reference indentation values, while for the single-cycle pulse the R² decreased to 0.987. In the porcine muscle study, the coded vibration also yielded shear modulus values that were more accurate than those from the single-cycle pulse by 0.16 kPa and 0.33 kPa at two different depths. These results demonstrate the feasibility and potential of coded vibration for enhancing the quality of elasticity measurement and imaging.
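The benefit of a Barker-coded excitation over a single-cycle pulse comes from its sharp, low-sidelobe autocorrelation, which makes cross-correlation-based displacement tracking robust at low SNR. The sketch below illustrates this with a Barker-13 sequence; the waveform construction and noise level are illustrative assumptions, not the paper's setup.

```python
# Barker-coded excitation: sharp autocorrelation mainlobe, unit sidelobes,
# which keeps cross-correlation tracking usable at low SNR.  Waveform and
# noise level are illustrative assumptions.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

acf = np.correlate(barker13, barker13, mode="full")
mainlobe = acf.max()
sidelobe = np.abs(np.delete(acf, acf.argmax())).max()
print(f"Barker-13 mainlobe/sidelobe ratio: {mainlobe / sidelobe:.0f}")  # -> 13

# The correlation peak stays detectable when noise is added to the received
# (displacement) signal, unlike a single short pulse of the same amplitude.
rng = np.random.default_rng(1)
received = np.concatenate([np.zeros(20), barker13, np.zeros(20)])
noisy = received + rng.normal(0.0, 0.8, received.size)
peak = np.argmax(np.correlate(noisy, barker13, mode="valid"))
print("estimated code start index:", peak)  # expected near 20
```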
ABSIM. Simulation of Absorption Systems in Flexible and Modular Form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grossman, G.
1994-06-01
The computer code has been developed for simulation of absorption systems at steady state in a flexible and modular form, making it possible to investigate various cycle configurations with different working fluids. The code is based on unit subroutines containing the governing equations for the system's components. When all the equations have been established, a mathematical solver routine is employed to solve them simultaneously. Property subroutines contained in a separate database serve to provide thermodynamic properties of the working fluids. The code is user-oriented and requires a relatively simple input containing the given operating conditions and the working fluid at each state point. The user conveys to the computer an image of the cycle by specifying the different components and their interconnections. Based on this information, the program calculates the temperature, flow rate, concentration, pressure, and vapor fraction at each state point in the system and the heat duty at each unit, from which the coefficient of performance may be determined. A graphical user interface is provided to facilitate interactive input and study of the output.
Computational Fluid Dynamics Analysis Method Developed for Rocket-Based Combined Cycle Engine Inlet
NASA Technical Reports Server (NTRS)
1997-01-01
Renewed interest in hypersonic propulsion systems has led to research programs investigating combined cycle engines that are designed to operate efficiently across the flight regime. The Rocket-Based Combined Cycle Engine is a propulsion system under development at the NASA Lewis Research Center. This engine integrates a high specific impulse, low thrust-to-weight, airbreathing engine with a low-impulse, high thrust-to-weight rocket. From takeoff to Mach 2.5, the engine operates as an air-augmented rocket. At Mach 2.5, the engine becomes a dual-mode ramjet; and beyond Mach 8, the rocket is turned back on. One Rocket-Based Combined Cycle Engine variation known as the "Strut-Jet" concept is being investigated jointly by NASA Lewis, the U.S. Air Force, Gencorp Aerojet, General Applied Science Labs (GASL), and Lockheed Martin Corporation. Work thus far has included wind tunnel experiments and computational fluid dynamics (CFD) investigations with the NPARC code. The CFD method was initiated by modeling the geometry of the Strut-Jet with the GRIDGEN structured grid generator. Grids representing a subscale inlet model and the full-scale demonstrator geometry were constructed. These grids modeled one-half of the symmetric inlet flow path, including the precompression plate, diverter, center duct, side duct, and combustor. After the grid generation, full Navier-Stokes flow simulations were conducted with the NPARC Navier-Stokes code. The Chien low-Reynolds-number k-e turbulence model was employed to simulate the high-speed turbulent flow. Finally, the CFD solutions were postprocessed with a Fortran code. This code provided wall static pressure distributions, pitot pressure distributions, mass flow rates, and internal drag. These results were compared with experimental data from a subscale inlet test for code validation; then they were used to help evaluate the demonstrator engine net thrust.
Holonomic surface codes for fault-tolerant quantum computation
NASA Astrophysics Data System (ADS)
Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco
2018-02-01
Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.
Fuel cycle for a fusion neutron source
NASA Astrophysics Data System (ADS)
Ananyev, S. S.; Spitsyn, A. V.; Kuteev, B. V.
2015-12-01
The concept of a tokamak-based stationary fusion neutron source (FNS) for scientific research (neutron diffraction, etc.), tests of structural materials for future fusion reactors, nuclear waste transmutation, fission reactor fuel production, and control of subcritical nuclear systems (fusion-fission hybrid reactor) is being developed in Russia. The fuel cycle system is one of the most important systems of the FNS; it provides circulation and reprocessing of the deuterium-tritium fuel mixture in all fusion reactor systems: the vacuum chamber, neutral injection system, cryogenic pumps, tritium purification system, separation system, storage system, and tritium-breeding blanket. The existing technologies need to be significantly upgraded since the engineering solutions adopted in the ITER project can be only partially used in the FNS (considering the capacity factor higher than 0.3, tritium flow up to 200 m³ Pa/s, and temperature of reactor elements up to 650°C). The deuterium-tritium fuel cycle of the stationary FNS is considered. The TC-FNS computer code developed for estimating the tritium distribution in the systems of the FNS is described. The code calculates tritium flows and inventory in tokamak systems (vacuum chamber, cryogenic pumps, neutral injection system, fuel mixture purification system, isotope separation system, tritium storage system) and takes into account tritium loss in the fuel cycle due to thermonuclear burnup and β decay. For the two facility versions considered, FNS-ST and DEMO-FNS, the amount of fuel mixture needed for uninterrupted operation of all fuel cycle systems is 0.9 and 1.4 kg, respectively, and the tritium consumption is 0.3 and 1.8 kg per year, including 35 and 55 g/yr, respectively, due to tritium decay.
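A minimal compartment sketch of the kind of bookkeeping TC-FNS performs is shown below: a circulating tritium inventory depleted by thermonuclear burnup and β decay. The fuelling rate, burnup fraction, time step, and starting inventory are illustrative assumptions, not FNS-ST or DEMO-FNS parameters; breeding and processing delays are ignored.

```python
# Minimal tritium-inventory bookkeeping in the spirit of the TC-FNS code:
# one circulating inventory with losses from burnup and beta decay.
# All rates and the starting inventory are illustrative assumptions.
import math

T_HALF_LIFE_S = 12.32 * 365.25 * 24 * 3600      # tritium half-life, s
LAMBDA = math.log(2.0) / T_HALF_LIFE_S          # decay constant, 1/s

def simulate(years=1.0, dt=3600.0,
             fuelling_rate=2.0e-6,   # kg/s of tritium injected (assumed)
             burn_fraction=0.01,     # fraction burned per pass (assumed)
             inventory0=1.0):        # kg circulating at start (assumed)
    inventory, burned, decayed = inventory0, 0.0, 0.0
    steps = int(years * 365.25 * 24 * 3600 / dt)
    for _ in range(steps):
        burn = fuelling_rate * burn_fraction * dt   # consumed in the plasma
        decay = LAMBDA * inventory * dt             # lost to beta decay
        inventory -= burn + decay                   # breeding ignored here
        burned += burn
        decayed += decay
    return inventory, burned, decayed

inv, burned, decayed = simulate()
print(f"after 1 year: inventory {inv:.3f} kg, burned {burned*1e3:.0f} g, "
      f"decayed {decayed*1e3:.0f} g")
```

Even this toy model reproduces the order of magnitude of the decay loss quoted above (tens of grams per year from a roughly kilogram-scale inventory).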
NASA Astrophysics Data System (ADS)
Lahaye, S.; Huynh, T. D.; Tsilanizara, A.
2016-03-01
Uncertainty quantification of interest outputs in the nuclear fuel cycle is an important issue for nuclear safety, from nuclear facilities to long term deposits. Most of those outputs are functions of the isotopic vector density, which is estimated by fuel cycle codes such as DARWIN/PEPIN2, MENDEL, ORIGEN, or FISPACT. The CEA code systems DARWIN/PEPIN2 and MENDEL propagate, by two different methods, the uncertainty from nuclear data inputs to isotopic concentrations and decay heat. This paper presents comparisons between those two codes for a Uranium-235 thermal fission pulse. The effect of the choice of nuclear data evaluation (ENDF/B-VII.1, JEFF-3.1.1, and JENDL-2011) is also examined. All results show good agreement between both codes and methods, supporting the reliability of both approaches for a given evaluation.
Cheng, Chao; Ung, Matthew; Grant, Gavin D.; Whitfield, Michael L.
2013-01-01
The cell cycle is a complex and highly supervised process that must proceed with regulatory precision to achieve successful cellular division. Despite their wide application, microarray time course experiments have several limitations in identifying cell cycle genes. We thus propose a computational model to predict human cell cycle genes based on transcription factor (TF) binding and regulatory motif information in their promoters. We utilize ENCODE ChIP-seq data and motif information as predictors to discriminate cell cycle against non-cell cycle genes. Our results show that both the trans-TF features and the cis-motif features are predictive of cell cycle genes, and a combination of the two types of features can further improve prediction accuracy. We apply our model to a complete list of GENCODE promoters to predict novel cell cycle driving promoters for both protein-coding genes and non-coding RNAs such as lincRNAs. We find that a similar percentage of lincRNAs are cell cycle regulated as protein-coding genes, suggesting the importance of non-coding RNAs in cell division. The model we propose here provides not only a practical tool for identifying novel cell cycle genes with high accuracy, but also new insights on cell cycle regulation by TFs and cis-regulatory elements. PMID:23874175
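A minimal sketch of the kind of discriminative model the paper describes is given below: promoter features (TF ChIP-seq binding signals plus motif scores) feeding a classifier of cell-cycle vs non-cell-cycle genes. The synthetic feature matrix, the labels, and the logistic-regression choice are illustrative assumptions, not the paper's data or model.

```python
# Sketch of discriminating cell-cycle from non-cell-cycle genes from promoter
# features (TF binding + motif scores).  Synthetic data and the logistic
# regression choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_genes, n_tf, n_motif = 500, 20, 10

X_tf = rng.normal(size=(n_genes, n_tf))        # ChIP-seq signal stand-ins
X_motif = rng.normal(size=(n_genes, n_motif))  # motif score stand-ins
X = np.hstack([X_tf, X_motif])

# Synthetic labels: a few "informative" TFs and one motif drive the outcome.
logits = 1.5 * X_tf[:, 0] + 1.0 * X_tf[:, 1] + 0.8 * X_motif[:, 0]
y = (logits + rng.normal(scale=0.5, size=n_genes) > 0).astype(int)

for name, features in [("TF only", X_tf), ("motif only", X_motif), ("combined", X)]:
    auc = cross_val_score(LogisticRegression(max_iter=1000), features, y,
                          cv=5, scoring="roc_auc").mean()
    print(f"{name:10s} mean AUC = {auc:.2f}")
```

On this synthetic data the combined feature set scores best, mirroring the paper's finding that trans and cis features are individually predictive and stronger together.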
NASA Astrophysics Data System (ADS)
Punov, Plamen; Milkov, Nikolay; Danel, Quentin; Perilhon, Christelle; Podevin, Pierre; Evtimov, Teodossi
2017-02-01
An optimization study of the Rankine cycle as a function of diesel engine operating mode is presented. The Rankine cycle here is studied as a waste heat recovery system which uses the engine exhaust gases as the heat source. The engine exhaust gas parameters (temperature, mass flow, and composition) were defined by means of numerical simulation in the advanced simulation software AVL Boost. The engine simulation model was previously validated, and the Vibe function parameters were defined as a function of engine load. The Rankine cycle output power and efficiency were numerically estimated by means of a simulation code in Python(x,y). This code includes a discretized heat exchanger model and simplified models of the pump and the expander based on their isentropic efficiencies. The Rankine cycle simulation revealed the optimum values of working fluid mass flow and evaporation pressure for the given heat source. Thus, the optimal Rankine cycle performance was obtained over the engine operating map.
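The optimization loop itself can be sketched as a simple sweep over working-fluid mass flow and evaporation pressure, keeping the point that maximizes net power for the available exhaust heat. The toy cycle model, the efficiency trend, and the parameter ranges below are illustrative assumptions; the paper's code uses a discretized heat-exchanger model with isentropic pump and expander models.

```python
# Sketch of the optimisation loop: sweep mass flow and evaporation pressure,
# keep the point with maximum net Rankine power for a given exhaust heat.
# The toy cycle model and ranges are illustrative assumptions.
import numpy as np

def cycle_net_power(m_dot, p_evap, q_available=50e3):
    """Toy model: efficiency rises with p_evap; heat uptake capped by the source."""
    eta = 0.10 + 0.04 * np.log(p_evap / 5e5)        # assumed efficiency trend
    q_absorbed = min(m_dot * 2.0e6, q_available)    # 2 MJ/kg evaporation demand (assumed)
    return eta * q_absorbed

best = max(
    ((cycle_net_power(m, p), m, p)
     for m in np.linspace(0.005, 0.05, 10)
     for p in np.linspace(5e5, 30e5, 11)),
    key=lambda t: t[0],
)
power, m_opt, p_opt = best
print(f"optimum: m_dot = {m_opt*1e3:.0f} g/s, p_evap = {p_opt/1e5:.0f} bar, "
      f"net power ~ {power/1e3:.1f} kW")
```

In a real study this evaluation would be repeated for each engine operating point to build the optimum map over the engine load/speed range.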
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1980-01-01
The computational techniques utilized at Lewis Research Center to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. Cycle performance and engine weight can be calculated, along with costs and installation effects, as opposed to fuel consumption alone. Almost any conceivable turbine engine cycle can be studied. These computer codes are: NNEP, WATE, LIFCYC, INSTAL, and POD DRG. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
Uranium oxide fuel cycle analysis in VVER-1000 with VISTA simulation code
NASA Astrophysics Data System (ADS)
Mirekhtiary, Seyedeh Fatemeh; Abbasi, Akbar
2018-02-01
The VVER-1000 nuclear power plant generates about 20-25 tons of spent fuel per year. In this research, the transmutation of Uranium Oxide (UOX) fuel was calculated using the nuclear fuel cycle simulation system (VISTA) code, and the back-end components of the fuel cycle were evaluated: Spent Fuel (SF), Actinide Inventory (AI), and Fission Product (FP) radioisotopes. The SF, AI, and FP values were 23.792178 ton/yr, 22.811139 ton/yr, and 0.981039 ton/yr, respectively. The corresponding values for spent fuel, major actinides, minor actinides, and fission products were 23.8 ton/yr, 22.795 ton/yr, 0.024 ton/yr, and 0.981 ton/yr, respectively.
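As a quick consistency check of the reported back-end inventories (values taken from the abstract, in ton/yr):

```latex
% Back-end mass balance implied by the reported VISTA results (ton/yr):
\[
  \underbrace{22.811}_{\text{AI}} + \underbrace{0.981}_{\text{FP}} \approx 23.792 = \text{SF},
  \qquad
  \underbrace{22.795}_{\text{major act.}} + \underbrace{0.024}_{\text{minor act.}}
    + \underbrace{0.981}_{\text{FP}} \approx 23.8 .
\]
```

Both sums close on the reported spent fuel mass, so the actinide and fission product inventories are mutually consistent.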
Environmental performance of green building code and certification systems.
Suh, Sangwon; Tomar, Shivira; Leighton, Matthew; Kneifel, Joshua
2014-01-01
We examined the potential life-cycle environmental impact reduction of three green building code and certification (GBCC) systems: LEED, ASHRAE 189.1, and IgCC. A recently completed whole-building life cycle assessment (LCA) database of NIST was applied to a prototype building model specification by NREL. TRACI 2.0 of EPA was used for life cycle impact assessment (LCIA). The results showed that the baseline building model generates about 18 thousand metric tons CO2-equiv. of greenhouse gases (GHGs) and consumes 6 terajoule (TJ) of primary energy and 328 million liter of water over its life-cycle. Overall, GBCC-compliant building models generated 0% to 25% less environmental impacts than the baseline case (average 14% reduction). The largest reductions were associated with acidification (25%), human health-respiratory (24%), and global warming (GW) (22%), while no reductions were observed for ozone layer depletion (OD) and land use (LU). The performances of the three GBCC-compliant building models measured in life-cycle impact reduction were comparable. A sensitivity analysis showed that the comparative results were reasonably robust, although some results were relatively sensitive to the behavioral parameters, including employee transportation and purchased electricity during the occupancy phase (average sensitivity coefficients 0.26-0.29).
NASA Astrophysics Data System (ADS)
Nekuchaev, A. O.; Shuteev, S. A.
2014-04-01
A new method of data transmission in DWDM systems over existing long-distance fiber-optic communication lines is proposed. The existing method, for example, uses 32 wavelengths in the NRZ code with an average power of 16 conventional units (16 ones and 16 zeros on average) and transmits 32 bits/cycle. In the new method, at every instant of a 1/16 cycle, one of 124 wavelengths is transmitted, each with a duration of one cycle (so that no more than 16 different wavelengths are present at any time instant) and carrying 4 bits, giving an average power of 15 conventional units and a rate of 64 bits/cycle. Cross modulation and double Rayleigh scattering are significantly decreased owing to the uniform distribution of power over time at different wavelengths. The time redundancy (forward error correction (FEC)) is about 7% and allows one to achieve a coding gain of about 6 dB by detecting and removing erasures and errors simultaneously.
Stochastic many-body problems in ecology, evolution, neuroscience, and systems biology
NASA Astrophysics Data System (ADS)
Butler, Thomas C.
Using the tools of many-body theory, I analyze problems in four different areas of biology dominated by strong fluctuations: The evolutionary history of the genetic code, spatiotemporal pattern formation in ecology, spatiotemporal pattern formation in neuroscience and the robustness of a model circadian rhythm circuit in systems biology. In the first two research chapters, I demonstrate that the genetic code is extremely optimal (in the sense that it manages the effects of point mutations or mistranslations efficiently), more than an order of magnitude beyond what was previously thought. I further show that the structure of the genetic code implies that early proteins were probably only loosely defined. Both the nature of early proteins and the extreme optimality of the genetic code are interpreted in light of recent theory [1] as evidence that the evolution of the genetic code was driven by evolutionary dynamics that were dominated by horizontal gene transfer. I then explore the optimality of a proposed precursor to the genetic code. The results show that the precursor code has only limited optimality, which is interpreted as evidence that the precursor emerged prior to translation, or else never existed. In the next part of the dissertation, I introduce a many-body formalism for reaction-diffusion systems described at the mesoscopic scale with master equations. I first apply this formalism to spatially-extended predator-prey ecosystems, resulting in the prediction that many-body correlations and fluctuations drive population cycles in time, called quasicycles. Most of these results were previously known, but were derived using the system size expansion [2, 3]. I next apply the analytical techniques developed in the study of quasi-cycles to a simple model of Turing patterns in a predator-prey ecosystem. This analysis shows that fluctuations drive the formation of a new kind of spatiotemporal pattern formation that I name "quasi-patterns." These quasi-patterns exist over a much larger range of physically accessible parameters than the patterns predicted in mean field theory and therefore account for the apparent observations in ecology of patterns in regimes where Turing patterns do not occur. I further show that quasi-patterns have statistical properties that allow them to be distinguished empirically from mean field Turing patterns. I next analyze a model of visual cortex in the brain that has striking similarities to the activator-inhibitor model of ecosystem quasi-pattern formation. Through analysis of the resulting phase diagram, I show that the architecture of the neural network in the visual cortex is configured to make the visual cortex robust to unwanted internally generated spatial structure that interferes with normal visual function. I also predict that some geometric visual hallucinations are quasi-patterns and that the visual cortex supports a new phase of spatially scale invariant behavior present far from criticality. In the final chapter, I explore the effects of fluctuations on cycles in systems biology, specifically the pervasive phenomenon of circadian rhythms. By exploring the behavior of a generic stochastic model of circadian rhythms, I show that the circadian rhythm circuit exploits leaky mRNA production to safeguard the cycle from failure. I also show that this safeguard mechanism is highly robust to changes in the rate of leaky mRNA production. 
Finally, I explore the failure of the deterministic model in two different contexts, one where the deterministic model predicts cycles where they do not exist, and another context in which cycles are not predicted by the deterministic model.
Brayton Power Conversion System Parametric Design Modelling for Nuclear Electric Propulsion
NASA Technical Reports Server (NTRS)
Ashe, Thomas L.; Otting, William D.
1993-01-01
The parametrically based closed Brayton cycle (CBC) computer design model was developed for inclusion into the NASA LeRC overall Nuclear Electric Propulsion (NEP) end-to-end systems model. The code is intended to provide greater depth to the NEP system modeling which is required to more accurately predict the impact of specific technology on system performance. The CBC model is parametrically based to allow for conducting detailed optimization studies and to provide for easy integration into an overall optimizer driver routine. The power conversion model includes the modeling of the turbines, alternators, compressors, ducting, and heat exchangers (hot-side heat exchanger and recuperator). The code predicts performance to significant detail. The system characteristics determined include estimates of mass, efficiency, and the characteristic dimensions of the major power conversion system components. These characteristics are parametrically modeled as a function of input parameters such as the aerodynamic configuration (axial or radial), turbine inlet temperature, cycle temperature ratio, power level, lifetime, materials, and redundancy.
Nuclear fuel management optimization using genetic algorithms
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeChaine, M.D.; Feltus, M.A.
1995-07-01
The code independent genetic algorithm reactor optimization (CIGARO) system has been developed to optimize nuclear reactor loading patterns. It uses genetic algorithms (GAs) and a code-independent interface, so any reactor physics code (e.g., CASMO-3/SIMULATE-3) can be used to evaluate the loading patterns. The system is compared to other GA-based loading pattern optimizers. Tests were carried out to maximize the beginning of cycle k{sub eff} for a pressurized water reactor core loading with a penalty function to limit power peaking. The CIGARO system performed well, increasing the k{sub eff} after lowering the peak power. Tests of a prototype parallel evaluation method showed the potential for a significant speedup.
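A minimal sketch of the GA fitness used for this kind of loading-pattern search is shown below: maximize beginning-of-cycle k_eff with a penalty when the peaking factor exceeds a limit, with the physics evaluation delegated to an external code. Here the evaluator is a random stub standing in for a CASMO-3/SIMULATE-3 call, and the peaking limit, penalty weight, and GA parameters are illustrative assumptions.

```python
# GA fitness sketch for loading-pattern optimisation: maximise BOC k_eff with
# a power-peaking penalty.  `run_physics_code` is a stub standing in for an
# external reactor-physics code; all numbers are illustrative assumptions.
import random

PEAKING_LIMIT = 1.65
PENALTY_WEIGHT = 0.5

def run_physics_code(loading_pattern):
    """Stub evaluator: returns (k_eff, peaking factor) for a pattern."""
    random.seed(hash(tuple(loading_pattern)) & 0xFFFF)
    return random.uniform(1.00, 1.15), random.uniform(1.3, 2.0)

def fitness(loading_pattern):
    k_eff, peaking = run_physics_code(loading_pattern)
    penalty = PENALTY_WEIGHT * max(0.0, peaking - PEAKING_LIMIT)
    return k_eff - penalty

# Crude GA step: tournament-select a parent, swap two assembly positions,
# keep the population size constant by dropping the worst pattern.
population = [random.sample(range(40), 40) for _ in range(20)]
for _ in range(50):
    parent = max(random.sample(population, 3), key=fitness)
    child = parent[:]
    i, j = random.sample(range(40), 2)
    child[i], child[j] = child[j], child[i]
    population.append(child)
    population.remove(min(population, key=fitness))

best = max(population, key=fitness)
print("best penalised fitness:", round(fitness(best), 4))
```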
NASA Technical Reports Server (NTRS)
Csank, Jeffrey; Stueber, Thomas
2012-01-01
An inlet system is being tested to evaluate methodologies for a turbine based combined cycle propulsion system to perform a controlled inlet mode transition. Prior to wind tunnel based hardware testing of controlled mode transitions, simulation models are used to test, debug, and validate potential control algorithms. One candidate simulation package for this purpose is the High Mach Transient Engine Cycle Code (HiTECC). The HiTECC simulation package models the inlet system, propulsion systems, thermal energy, geometry, nozzle, and fuel systems. This paper discusses the modification and redesign of the simulation package and control system to represent the NASA large-scale inlet model for Combined Cycle Engine mode transition studies, mounted in NASA Glenn's 10- by 10-Foot Supersonic Wind Tunnel. This model will be used for designing and testing candidate control algorithms before implementation.
Long-time efficacy of the surface code in the presence of a super-Ohmic environment
NASA Astrophysics Data System (ADS)
López-Delgado, D. A.; Novais, E.; Mucciolo, E. R.; Caldeira, A. O.
2017-06-01
We study the long-time evolution of a quantum memory coupled to a bosonic environment on which quantum error correction (QEC) is performed using the surface code. The memory's evolution encompasses N QEC cycles, each of them yielding a nonerror syndrome. This assumption makes our analysis independent of the recovery process. We map the expression for the time evolution of the memory onto the partition function of an equivalent statistical-mechanical spin system. In the super-Ohmic dissipation case the long-time evolution of the memory has the same behavior as the time evolution for just one QEC cycle. For this case we find analytical expressions for the critical parameters of the order-disorder phase transition of an equivalent spin system. These critical parameters determine the threshold value for the system-environment coupling below which it is possible to preserve the memory's state.
Analysis and specification tools in relation to the APSE
NASA Technical Reports Server (NTRS)
Hendricks, John W.
1986-01-01
Ada and the Ada Programming Support Environment (APSE) specifically address the phases of the system/software life cycle which follow after the user's problem has been translated into system and software development specifications. The waterfall model of the life cycle identifies the analysis and requirements definition phases as preceding program design and coding. Since Ada is a programming language and the APSE is a programming support environment, they are primarily targeted to support program (code) development, testing, and maintenance. The use of Ada-based or Ada-related specification languages (SLs) and program design languages (PDLs) can extend the use of Ada back into the software design phases of the life cycle. Recall that the standardization of the APSE as a programming support environment is only now happening after many years of evolutionary experience with diverse sets of programming support tools. Restricting consideration to one, or even a few, chosen specification and design tools could be a real mistake for an organization or a major project such as the Space Station, which will need to deal with an increasingly complex level of system problems. To require that everything be Ada-like, be implemented in Ada, run directly under the APSE, and fit into a rigid waterfall model of the life cycle would turn a promising support environment into a straitjacket for progress.
EASY-II Renaissance: n, p, d, α, γ-induced Inventory Code System
NASA Astrophysics Data System (ADS)
Sublet, J.-Ch.; Eastwood, J. W.; Morgan, J. G.
2014-04-01
The European Activation SYstem has been re-engineered and re-written in modern programming languages so as to answer today's and tomorrow's needs in terms of activation, transmutation, depletion, decay and processing of radioactive materials. The new FISPACT-II inventory code development project has allowed us to embed many more features in terms of energy range: up to GeV; incident particles: alpha, gamma, proton, deuteron and neutron; and neutron physics: self-shielding effects, temperature dependence and covariance, so as to cover all anticipated application needs: nuclear fission and fusion, accelerator physics, isotope production, stockpile and fuel cycle stewardship, materials characterization and life, and storage cycle management. In parallel, the maturity of modern, truly general-purpose libraries encompassing thousands of target isotopes such as TENDL-2012, the evolution of the ENDF-6 format and the capabilities of the latest generation of processing codes PREPRO, NJOY and CALENDF have allowed the activation code to be fed with more robust, complete and appropriate data: cross sections with covariance, probability tables in the resonance ranges, kerma, dpa, gas and radionuclide production and 24 decay types. All such data for the five most important incident particles (n, p, d, α, γ) are placed in evaluated data files up to an incident energy of 200 MeV. The resulting code system, EASY-II, is designed as a functional replacement for the previous European Activation System, EASY-2010. It includes many new features and enhancements, but also benefits already from the feedback from extensive validation and verification activities performed with its predecessor.
A Thermal Management Systems Model for the NASA GTX RBCC Concept
NASA Technical Reports Server (NTRS)
Traci, Richard M.; Farr, John L., Jr.; Laganelli, Tony; Walker, James (Technical Monitor)
2002-01-01
The Vehicle Integrated Thermal Management Analysis Code (VITMAC) was further developed to aid the analysis, design, and optimization of propellant and thermal management concepts for advanced propulsion systems. The computational tool is based on engineering level principles and models. A graphical user interface (GUI) provides a simple and straightforward method to assess and evaluate multiple concepts before undertaking more rigorous analysis of candidate systems. The tool incorporates the Chemical Equilibrium and Applications (CEA) program and the RJPA code to permit heat transfer analysis of both rocket and air breathing propulsion systems. Key parts of the code have been validated with experimental data. The tool was specifically tailored to analyze rocket-based combined-cycle (RBCC) propulsion systems being considered for space transportation applications. This report describes the computational tool and its development and verification for NASA GTX RBCC propulsion system applications.
Leap Frog and Time Step Sub-Cycle Scheme for Coupled Neutronics and Thermal-Hydraulic Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, S.
2002-07-01
As a result of advancing TCP/IP-based inter-process communication technology, more and more legacy thermal-hydraulic codes have been coupled with neutronics codes to provide best-estimate capabilities for reactivity-related reactor transient analysis. Most of the coupling schemes are based on closely coupled serial or parallel approaches. Therefore, the execution of the coupled codes usually requires significant CPU time when a complicated system is analyzed. The Leap Frog scheme has been used to reduce the run time. The extent of the decoupling is usually determined based on a trial-and-error process for a specific analysis. It is the intent of this paper to develop a set of general criteria which can be used to invoke the automatic Leap Frog algorithm. The algorithm will not only provide the run time reduction but also preserve the accuracy. The criteria will also serve as the basis of an automatic time step sub-cycle scheme when a sudden reactivity change is introduced and the thermal-hydraulic code is marching with a relatively large time step.
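To make the coupling pattern concrete, the toy loop below exchanges feedback between a point-kinetics-style neutronics update and a lumped thermal-hydraulics update only every few steps (the leap-frog decoupling) and sub-cycles the neutronics when a sudden reactivity change is detected. The physics, time scales, and trigger threshold are placeholders for illustration only, not the general criteria developed in the paper.

    # Toy coupling of a point-kinetics "neutronics" update and a lumped
    # "thermal-hydraulics" update; all constants are illustrative assumptions.

    def external_reactivity(t):
        return 0.0015 if t >= 2.0 else 0.0      # sudden reactivity insertion at t = 2 s

    def neutronics_step(power, reactivity, dt, alpha=50.0):
        return power * (1.0 + alpha * reactivity * dt)

    def th_step(temperature, power, dt, heat_capacity=20.0, cooling=0.05):
        return temperature + dt * (power / heat_capacity - cooling * (temperature - 300.0))

    def run(t_end=10.0, dt_th=0.1, exchange_every=5, subcycle_threshold=0.0005):
        power, temperature = 1.0, 300.0
        reactivity, last_reactivity = 0.0, 0.0
        t, step, history = 0.0, 0, []
        while t < t_end:
            if step % exchange_every == 0:       # leap-frog: refresh feedback only here
                last_reactivity = reactivity
                reactivity = external_reactivity(t) - 1.0e-5 * (temperature - 300.0)
            # Sub-cycle the neutronics inside the (large) TH step after a sudden change.
            n_sub = 10 if abs(reactivity - last_reactivity) > subcycle_threshold else 1
            for _ in range(n_sub):
                power = neutronics_step(power, reactivity, dt_th / n_sub)
            temperature = th_step(temperature, power, dt_th)
            t += dt_th
            step += 1
            history.append((round(t, 1), round(power, 3), round(temperature, 1)))
        return history

    if __name__ == "__main__":
        for t, p, temp in run()[::10]:
            print(f"t={t:5.1f} s  power={p:7.3f}  T={temp:6.1f} K")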
Lisman, John
2005-01-01
In the hippocampus, oscillations in the theta and gamma frequency range occur together and interact in several ways, indicating that they are part of a common functional system. It is argued that these oscillations form a coding scheme that is used in the hippocampus to organize the readout from long-term memory of the discrete sequence of upcoming places, as cued by current position. This readout of place cells has been analyzed in several ways. First, plots of the theta phase of spikes vs. position on a track show a systematic progression of phase as rats run through a place field. This is termed the phase precession. Second, two cells with nearby place fields have a systematic difference in phase, as indicated by a cross-correlation having a peak with a temporal offset that is a significant fraction of a theta cycle. Third, several different decoding algorithms demonstrate the information content of theta phase in predicting the animal's position. It appears that small phase differences corresponding to jitter within a gamma cycle do not carry information. This evidence, together with the finding that principal cells fire preferentially at a given gamma phase, supports the concept of theta/gamma coding: a given place is encoded by the spatial pattern of neurons that fire in a given gamma cycle (the exact timing within a gamma cycle being unimportant); sequential places are encoded in sequential gamma subcycles of the theta cycle (i.e., with different discrete theta phase). It appears that this general form of coding is not restricted to readout of information from long-term memory in the hippocampus because similar patterns of theta/gamma oscillations have been observed in multiple brain regions, including regions involved in working memory and sensory integration. It is suggested that dual oscillations serve a general function: the encoding of multiple units of information (items) in a way that preserves their serial order. The relationship of such coding to that proposed by Singer and von der Malsburg is discussed; in their scheme, theta is not considered. It is argued that what theta provides is the absolute phase reference needed for encoding order. Theta/gamma coding therefore bears some relationship to the concept of "word" in digital computers, with word length corresponding to the number of gamma cycles within a theta cycle, and discrete phase corresponding to the ordered "place" within a word. Copyright 2005 Wiley-Liss, Inc.
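A toy rendering of this scheme, under the stated assumption that each item occupies one gamma subcycle of a theta cycle so that serial order is carried by discrete theta phase, is sketched below; the frequencies and item labels are illustrative only.

    import math

    def theta_gamma_schedule(items, theta_freq=8.0, gamma_freq=56.0):
        """Assign each item to one gamma subcycle of a theta cycle and return the
        discrete theta phase (radians) at which its cell assembly would fire.
        A toy illustration of theta/gamma coding; frequencies are illustrative."""
        gamma_per_theta = int(gamma_freq // theta_freq)   # "word length" in gamma cycles
        if len(items) > gamma_per_theta:
            raise ValueError("more items than gamma subcycles in one theta cycle")
        theta_period = 1.0 / theta_freq
        gamma_period = theta_period / gamma_per_theta
        schedule = []
        for slot, item in enumerate(items):
            t_fire = (slot + 0.5) * gamma_period          # middle of the gamma subcycle
            phase = 2.0 * math.pi * t_fire / theta_period # discrete theta phase encodes order
            schedule.append((item, slot, round(phase, 3)))
        return schedule

    if __name__ == "__main__":
        for item, slot, phase in theta_gamma_schedule(["place A", "place B", "place C"]):
            print(f"{item}: gamma subcycle {slot}, theta phase {phase} rad")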
CELCAP: A Computer Model for Cogeneration System Analysis
NASA Technical Reports Server (NTRS)
1985-01-01
A description of the CELCAP cogeneration analysis program is presented. A detailed description of the methodology used by the Naval Civil Engineering Laboratory in developing the CELCAP code and the procedures for analyzing cogeneration systems for a given user are given. The four engines modeled in CELCAP are: gas turbine with exhaust heat boiler, diesel engine with waste heat boiler, single automatic-extraction steam turbine, and back-pressure steam turbine. Both the design point and part-load performances are taken into account in the engine models. The load model describes how the hourly electric and steam demand of the user is represented by 24 hourly profiles. The economic model describes how the annual and life-cycle operating costs, which include the costs of fuel, purchased electricity, and operation and maintenance of engines and boilers, are calculated. The CELCAP code structure and principal functions of the code are described to show how the various components of the code are related to each other. Three examples of the application of the CELCAP code are given to illustrate the versatility of the code. The examples shown represent cases of system selection, system modification, and system optimization.
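As a simplified illustration of that economic roll-up, the sketch below sums fuel, purchased electricity, and O&M into an annual operating cost and discounts an escalating cost stream into a life-cycle figure. The rates, escalation, and discounting convention are assumptions for illustration, not CELCAP's.

    def annual_operating_cost(fuel_mmbtu, fuel_price, purchased_kwh, elec_price, om_cost):
        """Annual operating cost: fuel + purchased electricity + O&M (CELCAP-style inputs)."""
        return fuel_mmbtu * fuel_price + purchased_kwh * elec_price + om_cost

    def life_cycle_cost(annual_cost, years=20, discount_rate=0.07, escalation=0.03):
        """Present worth of an escalating annual cost stream; the discounting
        convention here is an assumption for illustration, not CELCAP's."""
        return sum(annual_cost * (1 + escalation) ** n / (1 + discount_rate) ** n
                   for n in range(1, years + 1))

    if __name__ == "__main__":
        annual = annual_operating_cost(fuel_mmbtu=120_000, fuel_price=4.0,
                                       purchased_kwh=2.0e6, elec_price=0.08,
                                       om_cost=150_000.0)
        print(f"annual operating cost : ${annual:,.0f}")
        print(f"20-year life-cycle cost: ${life_cycle_cost(annual):,.0f}")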
Fluid Distribution for In-space Cryogenic Propulsion
NASA Technical Reports Server (NTRS)
Lear, William
2005-01-01
The ultimate goal of this task is to enable the use of a single supply of cryogenic propellants for three distinct spacecraft propulsion missions: main propulsion, orbital maneuvering, and attitude control. A fluid distribution system is sought which allows large propellant flows during the first two missions while still allowing control of small propellant flows during attitude control. Existing research has identified the probable benefits of a combined thermal management/power/fluid distribution system based on the Solar Integrated Thermal Management and Power (SITMAP) cycle. Both a numerical model and an experimental model are constructed in order to predict the performance of such an integrated thermal management/propulsion system. This research task provides a numerical model and an experimental apparatus which will simulate an integrated thermal/power/fluid management system based on the SITMAP cycle, and assess its feasibility for various space missions. Various modifications are made to the cycle, such as the addition of a regeneration process that allows heat to be transferred into the working fluid prior to the solar collector, thereby reducing the collector size and weight. Fabri choking analysis was also accounted for. Finally, the cycle is to be optimized for various space missions based on a mass-based figure of merit, namely the System Mass Ratio (SMR). The theoretical and experimental results from these models are used to develop a design code (the JETSIT code), which is able to provide design parameters for such a system over a range of cooling loads, power generation, and attitude control thrust levels. The performance gains and mass savings will be compared to those of existing spacecraft systems.
NASA Technical Reports Server (NTRS)
Choo, Y. K.; Staiger, P. J.
1982-01-01
The code was designed to analyze performance at valves-wide-open design flow. The code can model conventional steam cycles as well as cycles that include such special features as process steam extraction and induction and feedwater heating by external heat sources. Convenience features and extensions to the special features were incorporated into the PRESTO code. The features are described, and detailed examples illustrating the use of both the original and the special features are given.
Development and Implementation of Dynamic Scripts to Execute Cycled GSI/WRF Forecasts
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi; Berndt, Emily; Li, Xuanli; Watson, Leela
2014-01-01
The Weather Research and Forecasting (WRF) numerical weather prediction (NWP) model and Gridpoint Statistical Interpolation (GSI) data assimilation (DA) are the operational systems that make up the North American Mesoscale (NAM) model and the NAM Data Assimilation System (NDAS) analysis used by National Weather Service forecasters. The Developmental Testbed Center (DTC) manages and distributes the code for the WRF and GSI, but it is up to individual researchers to link the systems together and write scripts to run the systems, which can take considerable time for those not familiar with the code. The objective of this project is to develop and disseminate a set of dynamic scripts that mimic the unique cycling configuration of the operational NAM to enable researchers to develop new modeling and data assimilation techniques that can be easily transferred to operations. The current version of the SPoRT GSI/WRF Scripts (v3.0.1) is compatible with WRF v3.3 and GSI v3.0.
Experimental Validation of a Closed Brayton Cycle System Transient Simulation
NASA Technical Reports Server (NTRS)
Johnson, Paul K.; Hervol, David S.
2006-01-01
The Brayton Power Conversion Unit (BPCU) located at NASA Glenn Research Center (GRC) in Cleveland, Ohio was used to validate the results of a computational code known as Closed Cycle System Simulation (CCSS). Conversion system thermal transient behavior was the focus of this validation. The BPCU was operated at various steady state points and then subjected to transient changes involving shaft rotational speed and thermal energy input. These conditions were then duplicated in CCSS. Validation of the CCSS BPCU model provides confidence in developing future Brayton power system performance predictions, and helps to guide high power Brayton technology development.
VLA telemetry performance with concatenated coding for Voyager at Neptune
NASA Technical Reports Server (NTRS)
Dolinar, S. J., Jr.
1988-01-01
Current plans for supporting the Voyager encounter at Neptune include the arraying of the Deep Space Network (DSN) antennas at Goldstone, California, with the National Radio Astronomy Observatory's Very Large Array (VLA) in New Mexico. Not designed as a communications antenna, the VLA signal transmission facility suffers a disadvantage in that the received signal is subjected to a gap or blackout period of approximately 1.6 msec once every 5/96 sec control cycle. Previous analyses showed that the VLA data gaps could cause disastrous performance degradation in a VLA stand-alone system and modest degradation when the VLA is arrayed equally with Goldstone. New analysis indicates that the earlier predictions for concatenated code performance were overly pessimistic for most combinations of system parameters, including those of Voyager-VLA. The periodicity of the VLA gap cycle tends to guarantee that all Reed-Solomon codewords will receive an average share of erroneous symbols from the gaps. However, large deterministic fluctuations in the number of gapped symbols from codeword to codeword may occur for certain combinations of code parameters, gap cycle parameters, and data rates. Several mechanisms for causing these fluctuations are identified and analyzed. Even though graceful degradation is predicted for the Voyager-VLA parameters, catastrophic degradation greater than 2 dB can occur for a VLA stand-alone system at certain non-Voyager data rates inside the range of the actual Voyager rates. Thus, it is imperative that all of the Voyager-VLA parameters be very accurately known and precisely controlled.
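A back-of-the-envelope sketch of that bookkeeping is given below: for a chosen data rate it counts how many 8-bit symbols of each Reed-Solomon (255,223) codeword (the outer code of Voyager's concatenated coding) overlap the 1.6 msec gap that recurs every 5/96 sec. The example data rates and the assumption that the gap sits at the start of each control cycle are illustrative, not the actual Voyager-VLA parameters.

    # Count how many 8-bit Reed-Solomon symbols of each (255,223) codeword fall inside
    # the VLA's 1.6 ms gap in every 5/96 s control cycle. The data rates used and the
    # position of the gap within the cycle are illustrative assumptions.

    GAP = 1.6e-3          # s, blackout per control cycle
    CYCLE = 5.0 / 96.0    # s, control cycle period
    SYMBOL_BITS = 8
    CODEWORD_SYMBOLS = 255

    def gapped_symbols_per_codeword(data_rate_bps, n_codewords=20):
        symbol_time = SYMBOL_BITS / data_rate_bps
        counts = []
        for cw in range(n_codewords):
            gapped = 0
            for s in range(CODEWORD_SYMBOLS):
                start = (cw * CODEWORD_SYMBOLS + s) * symbol_time
                phase = start % CYCLE
                # Symbol is "gapped" if it starts inside this cycle's gap or runs past the
                # cycle boundary into the next gap (symbol_time < GAP for these rates).
                if phase < GAP or phase + symbol_time > CYCLE:
                    gapped += 1
            counts.append(gapped)
        return counts

    if __name__ == "__main__":
        for rate in (21_600, 14_400):   # bps, example rates only
            print(rate, "bps ->", gapped_symbols_per_codeword(rate))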
National Combustion Code Parallel Performance Enhancements
NASA Technical Reports Server (NTRS)
Quealy, Angela; Benyo, Theresa (Technical Monitor)
2002-01-01
The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. The unstructured grid, reacting flow code uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC code to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This report describes recent parallel processing modifications to NCC that have improved the parallel scalability of the code, enabling a two hour turnaround for a 1.3 million element fully reacting combustion simulation on an SGI Origin 2000.
Intrasystem Analysis Program (IAP) code summaries
NASA Astrophysics Data System (ADS)
Dobmeier, J. J.; Drozd, A. L. S.; Surace, J. A.
1983-05-01
This report contains detailed descriptions and capabilities of the codes that comprise the Intrasystem Analysis Program. The four codes are: Intrasystem Electromagnetic Compatibility Analysis Program (IEMCAP), General Electromagnetic Model for the Analysis of Complex Systems (GEMACS), Nonlinear Circuit Analysis Program (NCAP), and Wire Coupling Prediction Models (WIRE). IEMCAP is used for computer-aided evaluation of electromagnetic compatibility (EMC) at all stages of an Air Force system's life cycle, applicable to aircraft, space/missile, and ground-based systems. GEMACS utilizes a Method of Moments (MOM) formalism with the Electric Field Integral Equation (EFIE) for the solution of electromagnetic radiation and scattering problems. The code employs both full matrix decomposition and Banded Matrix Iteration solution techniques and is expressly designed for large problems. NCAP is a circuit analysis code which uses the Volterra approach to solve for the transfer functions and node voltages of weakly nonlinear circuits. The Wire Programs deal with the application of multiconductor transmission line theory to the prediction of cable coupling for specific classes of problems.
Methods for nuclear air-cleaning-system accident-consequence assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrae, R.W.; Bolstad, J.W.; Gregory, W.S.
1982-01-01
This paper describes a multilaboratory research program that is directed toward addressing many questions that analysts face when performing air cleaning accident consequence assessments. The program involves developing analytical tools and supportive experimental data that will be useful in making more realistic assessments of accident source terms within and up to the atmospheric boundaries of nuclear fuel cycle facilities. The types of accidents considered in this study include fires, explosions, spills, tornadoes, criticalities, and equipment failures. The main focus of the program is developing an accident analysis handbook (AAH). We describe the contents of the AAH, which include descriptions of selected nuclear fuel cycle facilities, process unit operations, source-term development, and accident consequence analyses. Three computer codes designed to predict gas and material propagation through facility air cleaning systems are described. These computer codes address accidents involving fires (FIRAC), explosions (EXPAC), and tornadoes (TORAC). The handbook relies on many illustrative examples to show the analyst how to approach accident consequence assessments. We use the FIRAC code and a hypothetical fire scenario to illustrate the accident analysis capability.
PARALLEL PERTURBATION MODEL FOR CYCLE TO CYCLE VARIABILITY PPM4CCV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ameen, Muhsin Mohammed; Som, Sibendu
This code consists of a Fortran 90 implementation of the parallel perturbation model to compute cyclic variability in spark ignition (SI) engines. Cycle-to-cycle variability (CCV) is known to be detrimental to SI engine operation, resulting in partial burn and knock and in an overall reduction in the reliability of the engine. Numerical prediction of cycle-to-cycle variability (CCV) in SI engines is extremely challenging for two key reasons: (i) high-fidelity methods such as large eddy simulation (LES) are required to accurately capture the in-cylinder turbulent flow field, and (ii) CCV is experienced over long timescales and hence the simulations need to be performed for hundreds of consecutive cycles. In the new technique, the strategy is to perform multiple parallel simulations, each of which encompasses 2-3 cycles, by effectively perturbing the simulation parameters such as the initial and boundary conditions. The PPM4CCV code is a pre-processing code and can be coupled with any engine CFD code. PPM4CCV was coupled with the Converge CFD code, and a ten-fold speedup was demonstrated over the conventional multi-cycle LES in predicting the CCV for a motored engine. Recently, the model has also been applied to fired engines, including port fuel injected (PFI) and direct injection spark ignition engines, and the preliminary results are very encouraging.
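The essential strategy, many short independently perturbed runs executed in parallel instead of one long serial run, can be sketched as below. The toy "cycle simulation", the perturbation magnitudes, and the CCV metric are placeholders; in the real workflow each worker would launch a 2-3 cycle CFD job whose initial and boundary conditions were perturbed by the pre-processor.

    import random
    import statistics
    from concurrent.futures import ProcessPoolExecutor

    def toy_cycle_simulation(seed):
        """Placeholder for a 2-3 cycle engine CFD run; returns a fake peak pressure.
        The perturbed quantities below are hypothetical stand-ins for the initial and
        boundary conditions varied by the pre-processor."""
        rng = random.Random(seed)
        swirl = 1.0 + rng.gauss(0.0, 0.05)       # perturbed initial flow field
        wall_temp = 400.0 + rng.gauss(0.0, 5.0)  # perturbed boundary condition
        return 45.0 * swirl + 0.01 * (wall_temp - 400.0) + rng.gauss(0.0, 0.5)

    def estimate_ccv(n_parallel_runs=32):
        with ProcessPoolExecutor() as pool:
            peaks = list(pool.map(toy_cycle_simulation, range(n_parallel_runs)))
        mean = statistics.mean(peaks)
        cov = statistics.stdev(peaks) / mean     # coefficient of variation of peak pressure
        return mean, cov

    if __name__ == "__main__":
        mean_peak, cov = estimate_ccv()
        print(f"mean peak pressure ~ {mean_peak:.1f} bar, CoV ~ {100 * cov:.1f}%")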
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Jorgenson, Philip, C. E.; Jones, Scott M.
2014-01-01
The main focus of this study is to apply a computational tool for the flow analysis of the engine that has been tested with ice crystal ingestion in the Propulsion Systems Laboratory (PSL) of NASA Glenn Research Center. A data point was selected for analysis during which the engine experienced a full roll back event due to the ice accretion on the blades and flow path of the low pressure compressor. The computational tool consists of the Numerical Propulsion System Simulation (NPSS) engine system thermodynamic cycle code, and an Euler-based compressor flow analysis code, that has an ice particle melt estimation code with the capability of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Decreasing the performance characteristics of the low pressure compressor (LPC) within the NPSS cycle analysis resulted in matching the overall engine performance parameters measured during testing at data points in short time intervals through the progression of the roll back event. Detailed analysis of the fan-core and LPC with the compressor flow analysis code simulated the effects of ice accretion by increasing the aerodynamic blockage and pressure losses through the low pressure compressor until achieving a match with the NPSS cycle analysis results, at each scan. With the additional blockages and losses in the LPC, the compressor flow analysis code results were able to numerically reproduce the performance that was determined by the NPSS cycle analysis, which was in agreement with the PSL engine test data. The compressor flow analysis indicated that the blockage due to ice accretion in the LPC exit guide vane stators caused the exit guide vane (EGV) to be nearly choked, significantly reducing the air flow rate into the core. This caused the LPC to eventually be in stall due to increasing levels of diffusion in the rotors and high incidence angles in the inlet guide vane (IGV) and EGV stators. The flow analysis indicating compressor stall is substantiated by the video images of the IGV taken during the PSL test, which showed water on the surface of the IGV flowing upstream out of the engine, indicating flow reversal, which is characteristic of a stalled compressor.
User's manual for PRESTO: A computer code for the performance of regenerative steam turbine cycles
NASA Technical Reports Server (NTRS)
Fuller, L. C.; Stovall, T. K.
1979-01-01
Standard turbine cycles for baseload power plants and cycles with such additional features as process steam extraction and induction and feedwater heating by external heat sources may be modeled. Peaking and high back pressure cycles are also included. The code's methodology is to use the expansion line efficiencies, exhaust loss, leakages, mechanical losses, and generator losses to calculate the heat rate and generator output. A general description of the code is given as well as the instructions for input data preparation. Appended are two complete example cases.
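A much-simplified version of that bookkeeping is sketched below: turbine shaft work is taken from an ideal enthalpy drop and an expansion-line efficiency, the exhaust, leakage, mechanical, and generator losses are subtracted to get generator output, and heat rate follows as heat input per unit of electrical output. The loss model and all numbers are illustrative assumptions, not PRESTO's correlations.

    def generator_output_kw(ideal_enthalpy_drop_kj_per_kg, steam_flow_kg_per_s,
                            expansion_line_eff=0.88, exhaust_loss_kj_per_kg=25.0,
                            mech_loss_frac=0.01, gen_eff=0.985, leakage_frac=0.02):
        """Gross generator output from a single expansion; the loss model is a
        simplified stand-in for PRESTO's expansion-line/exhaust-loss treatment."""
        effective_flow = steam_flow_kg_per_s * (1.0 - leakage_frac)
        shaft_specific_work = (ideal_enthalpy_drop_kj_per_kg * expansion_line_eff
                               - exhaust_loss_kj_per_kg)
        shaft_power_kw = effective_flow * shaft_specific_work
        return shaft_power_kw * (1.0 - mech_loss_frac) * gen_eff

    def heat_rate_kj_per_kwh(heat_input_kw, generator_output_kw):
        """Cycle heat rate: thermal input per unit of electrical output."""
        return heat_input_kw * 3600.0 / generator_output_kw

    if __name__ == "__main__":
        p_gen = generator_output_kw(ideal_enthalpy_drop_kj_per_kg=1200.0,
                                    steam_flow_kg_per_s=500.0)
        print(f"generator output ~ {p_gen / 1000.0:.0f} MW")
        print(f"heat rate ~ {heat_rate_kj_per_kwh(1.25e6, p_gen):.0f} kJ/kWh")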
Performance of the OVERFLOW-MLP and LAURA-MLP CFD Codes on the NASA Ames 512 CPU Origin System
NASA Technical Reports Server (NTRS)
Taft, James R.
2000-01-01
The shared memory Multi-Level Parallelism (MLP) technique, developed last year at NASA Ames, has been very successful in dramatically improving the performance of important NASA CFD codes. This new and very simple parallel programming technique was first inserted into the OVERFLOW production CFD code in FY 1998. The OVERFLOW-MLP code's parallel performance scaled linearly to 256 CPUs on the NASA Ames 256 CPU Origin 2000 system (steger). Overall performance exceeded 20.1 GFLOP/s, or about 4.5x the performance of a dedicated 16 CPU C90 system. All of this was achieved without any major modification to the original vector-based code. The OVERFLOW-MLP code is now in production on the in-house Origin systems as well as being used offsite at commercial aerospace companies. Partially as a result of this work, NASA Ames has purchased a new 512 CPU Origin 2000 system to further test the limits of parallel performance for NASA codes of interest. This paper presents the performance obtained from the latest optimization efforts on this machine for the LAURA-MLP and OVERFLOW-MLP codes. The Langley Aerothermodynamics Upwind Relaxation Algorithm (LAURA) code is a key simulation tool in the development of the next generation shuttle, interplanetary reentry vehicles, and nearly all "X" plane development. This code sustains about 4-5 GFLOP/s on a dedicated 16 CPU C90. At this rate, expected workloads would require over 100 C90 CPU years of computing over the next few calendar years. It is not feasible to expect that this would be affordable or available to the user community. Dramatic performance gains on cheaper systems are needed. This code is expected to be perhaps the largest consumer of NASA Ames compute cycles per run in the coming year. The OVERFLOW CFD code is extensively used in the government and commercial aerospace communities to evaluate new aircraft designs. It is one of the largest consumers of NASA supercomputing cycles, and large simulations of highly resolved full aircraft are routinely undertaken. Typical large problems might require 100s of Cray C90 CPU hours to complete. The dramatic performance gains with the 256 CPU steger system are exciting. Obtaining results in hours instead of months is revolutionizing the way in which aircraft manufacturers are looking at future aircraft simulation work. Figure 2 below is a current state-of-the-art plot of OVERFLOW-MLP performance on the 512 CPU Lomax system. As can be seen, the chart indicates that OVERFLOW-MLP continues to scale linearly with CPU count up to 512 CPUs on a large 35 million point full aircraft RANS simulation. At this point performance is such that a fully converged simulation of 2500 time steps is completed in less than 2 hours of elapsed time. Further work over the next few weeks will improve the performance of this code even further. The LAURA code has been converted to the MLP format as well. This code is currently being optimized for the 512 CPU system. Performance statistics indicate that the goal of 100 GFLOP/s will be achieved by year's end. This amounts to 20x the 16 CPU C90 result and strongly demonstrates the viability of the new parallel systems in rapidly solving very large simulations in a production environment.
Closed Cycle Engine Program Used in Solar Dynamic Power Testing Effort
NASA Technical Reports Server (NTRS)
Ensworth, Clint B., III; McKissock, David B.
1998-01-01
NASA Lewis Research Center is testing the world's first integrated solar dynamic power system in a simulated space environment. This system converts solar thermal energy into electrical energy by using a closed-cycle gas turbine and alternator. A NASA-developed analysis code called the Closed Cycle Engine Program (CCEP) has been used for both pretest predictions and post-test analysis of system performance. The solar dynamic power system has a reflective concentrator that focuses solar thermal energy into a cavity receiver. The receiver is a heat exchanger that transfers the thermal power to a working fluid, an inert gas mixture of helium and xenon. The receiver also uses a phase-change material to store the thermal energy so that the system can continue producing power when there is no solar input power, such as when an Earth-orbiting satellite is in eclipse. The system uses a recuperated closed Brayton cycle to convert thermal power to mechanical power. Heated gas from the receiver expands through a turbine that turns an alternator and a compressor. The system also includes a gas cooler and a radiator, which reject waste cycle heat, and a recuperator, a gas-to-gas heat exchanger that improves cycle efficiency by recovering thermal energy.
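A textbook-level sketch of the recuperated closed Brayton cycle described here, using ideal-gas relations for a monatomic He-Xe mixture, is given below; the component efficiencies, temperatures, and pressure ratio are illustrative assumptions rather than CCEP inputs.

    R_UNIV = 8.314  # kJ/kmol-K

    def brayton_recuperated(t_compressor_in=400.0, t_turbine_in=1100.0,
                            pressure_ratio=1.9, molar_mass=40.0,
                            eta_c=0.83, eta_t=0.88, recup_effectiveness=0.92):
        """Recuperated closed Brayton cycle on an ideal monatomic He-Xe mixture.
        Returns cycle thermal efficiency and specific net work (kJ/kg).
        Component efficiencies and states are illustrative, not CCEP values."""
        cp = 2.5 * R_UNIV / molar_mass        # kJ/kg-K, monatomic ideal gas
        gamma = 5.0 / 3.0
        exp = (gamma - 1.0) / gamma

        t2 = t_compressor_in * (1.0 + (pressure_ratio ** exp - 1.0) / eta_c)   # compressor exit
        t5 = t_turbine_in * (1.0 - eta_t * (1.0 - pressure_ratio ** -exp))     # turbine exit
        t3 = t2 + recup_effectiveness * (t5 - t2)                              # recuperator exit

        q_in = cp * (t_turbine_in - t3)                    # receiver heat addition
        w_net = cp * ((t_turbine_in - t5) - (t2 - t_compressor_in))
        return w_net / q_in, w_net

    if __name__ == "__main__":
        eta, w = brayton_recuperated()
        print(f"cycle efficiency ~ {100 * eta:.1f}%, net specific work ~ {w:.0f} kJ/kg")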
Computation of the phase response curve: a direct numerical approach.
Govaerts, W; Sautois, B
2006-04-01
Neurons are often modeled by dynamical systems--parameterized systems of differential equations. A typical behavioral pattern of neurons is periodic spiking; this corresponds to the presence of stable limit cycles in the dynamical systems model. The phase resetting and phase response curves (PRCs) describe the reaction of the spiking neuron to an input pulse at each point of the cycle. We develop a new method for computing these curves as a by-product of the solution of the boundary value problem for the stable limit cycle. The method is mathematically equivalent to the adjoint method, but our implementation is computationally much faster and more robust than any existing method. In fact, it can compute PRCs even where the limit cycle can hardly be found by time integration, for example, because it is close to another stable limit cycle. In addition, we obtain the discretized phase response curve in a form that is ideally suited for most applications. We present several examples and provide the implementation in a freely available Matlab code.
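For contrast with the boundary-value approach of the paper, the sketch below computes a phase response curve the brute-force way: integrate a spiking oscillator, deliver a small pulse at a chosen phase, and measure how much the subsequent spikes are advanced or delayed. The FitzHugh-Nagumo model, pulse size, and spike criterion are illustrative assumptions; this is the naive direct method, not the authors' adjoint-equivalent computation.

    # Brute-force PRC for a FitzHugh-Nagumo oscillator (illustrative parameters only).
    A, B, TAU, I_EXT = 0.7, 0.8, 12.5, 0.5
    DT = 0.001

    def step(v, w):
        dv = v - v ** 3 / 3.0 - w + I_EXT
        dw = (v + A - B * w) / TAU
        return v + DT * dv, w + DT * dw

    def spike_times(t_max, pulse_at=None, pulse_size=0.0):
        """Integrate from rest and record upward crossings of v = 1.0 (the 'spikes')."""
        v, w, t, times = 0.0, 0.0, 0.0, []
        while t < t_max:
            if pulse_at is not None and pulse_at <= t < pulse_at + DT:
                v += pulse_size                      # brief depolarizing pulse
            v_new, w_new = step(v, w)
            if v < 1.0 <= v_new:
                times.append(t)
            v, w, t = v_new, w_new, t + DT
        return times

    if __name__ == "__main__":
        base = spike_times(400.0)
        period = base[-1] - base[-2]                 # period after transients
        t_ref = base[-6]                             # reference spike well inside the run
        for frac in [0.1 * k for k in range(10)]:
            pert = spike_times(400.0, pulse_at=t_ref + frac * period, pulse_size=0.2)
            # Compare the third spike after the reference spike in each run.
            base_next = [t for t in base if t > t_ref][2]
            pert_next = [t for t in pert if t > t_ref][2]
            shift = (base_next - pert_next) / period  # > 0 means the pulse advanced the phase
            print(f"stimulus phase {frac:.1f}: phase shift {shift:+.4f}")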
Long Cycle Life Secondary Lithium Cells Utilizing Tetrahydrofuran.
1984-04-01
Development and Implementation of Dynamic Scripts to Execute Cycled WRF/GSI Forecasts
NASA Technical Reports Server (NTRS)
Zavodsky, Bradley; Srikishen, Jayanthi; Berndt, Emily; Li, Xuanli; Watson, Leela
2014-01-01
Automating the coupling of data assimilation (DA) and modeling systems is a unique challenge in the numerical weather prediction (NWP) research community. In recent years, the Development Testbed Center (DTC) has released well-documented tools such as the Weather Research and Forecasting (WRF) model and the Gridpoint Statistical Interpolation (GSI) DA system that can be easily downloaded, installed, and run by researchers on their local systems. However, developing a coupled system in which the various preprocessing, DA, model, and postprocessing capabilities are all integrated can be labor-intensive if one has little experience with any of these individual systems. Additionally, operational modeling entities generally have specific coupling methodologies that can take time to understand and develop code to implement properly. To better enable collaborating researchers to perform modeling and DA experiments with GSI, the Short-term Prediction Research and Transition (SPoRT) Center has developed a set of Perl scripts that couple GSI and WRF in a cycling methodology consistent with the use of real-time, regional observation data from the National Centers for Environmental Prediction (NCEP)/Environmental Modeling Center (EMC). Because Perl is open source, the code can be easily downloaded and executed regardless of the user's native shell environment. This paper will provide a description of this open-source code and descriptions of a number of the use cases that have been performed by SPoRT collaborators using the scripts on different computing systems.
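The cycling pattern these scripts implement can be illustrated schematically; the Python sketch below (the SPoRT scripts themselves are Perl) loops over analysis times, runs a data assimilation step on the previous short forecast, and launches the next forecast from the new analysis. The wrapper script names and file naming are hypothetical placeholders, not part of the DTC GSI/WRF releases or the SPoRT package.

    import subprocess
    from datetime import datetime, timedelta

    # "run_gsi.sh" and "run_wrf.sh" stand in for site-specific wrapper scripts that
    # stage namelists, observations, and boundary files before calling the real GSI
    # and WRF executables; they are hypothetical, not part of the DTC releases.

    def cycle(start, n_cycles, interval_hours=6):
        background = "wrfinput_d01"                  # cold-start background for cycle 0
        t = start
        for _ in range(n_cycles):
            stamp = t.strftime("%Y%m%d%H")
            # 1. Data assimilation: GSI updates the background with this cycle's observations.
            subprocess.run(["./run_gsi.sh", stamp, background], check=True)
            analysis = f"analysis_{stamp}"           # produced by the GSI wrapper
            # 2. Forecast: WRF integrates from the analysis past the next analysis time.
            subprocess.run(["./run_wrf.sh", stamp, analysis], check=True)
            # 3. The short forecast valid at the next cycle time becomes the new background.
            t += timedelta(hours=interval_hours)
            background = f"forecast_{t:%Y%m%d%H}"    # produced by the WRF wrapper

    if __name__ == "__main__":
        cycle(datetime(2014, 1, 1, 0), n_cycles=4)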
NASA Technical Reports Server (NTRS)
Follen, G.; Naiman, C.; auBuchon, M.
2000-01-01
Within NASA's High Performance Computing and Communication (HPCC) program, NASA Glenn Research Center is developing an environment for the analysis/design of propulsion systems for aircraft and space vehicles called the Numerical Propulsion System Simulation (NPSS). The NPSS focuses on the integration of multiple disciplines such as aerodynamics, structures, and heat transfer, along with the concept of numerical zooming between 0-dimensional and 1-, 2-, and 3-dimensional component engine codes. The vision for NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Current "state-of-the-art" engine simulations are 0-dimensional in that there is no axial, radial or circumferential resolution within a given component (e.g. a compressor or turbine has no internal station designations). In these 0-dimensional cycle simulations the individual component performance characteristics typically come from a table look-up (map) with adjustments for off-design effects such as variable geometry, Reynolds effects, and clearances. Zooming one or more of the engine components to a higher order, physics-based analysis means a higher order code is executed and the results from this analysis are used to adjust the 0-dimensional component performance characteristics within the system simulation. By drawing on the results from more predictive, physics-based higher order analysis codes, "cycle" simulations are refined to closely model and predict the complex physical processes inherent to engines. As part of the overall development of the NPSS, NASA and industry began the process of defining and implementing an object class structure that enables Numerical Zooming between the NPSS Version I (0-dimensional) and higher order 1-, 2- and 3-dimensional analysis codes. The NPSS Version I preserves the historical cycle engineering practices but also extends these classical practices into the area of numerical zooming for use within a company's design system. What follows here is a description of successfully zooming 1-dimensional (row-by-row) high-pressure compressor results back to an NPSS 0-dimensional engine simulation and a discussion of the results illustrated using an advanced data visualization tool. This type of high-fidelity system-level analysis, made possible by the zooming capability of the NPSS, will greatly improve the fidelity of the engine system simulation and enable the engine system to be "pre-validated" prior to commitment to engine hardware.
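The zooming loop can be caricatured as follows: balance the 0-D cycle with its map-based component model, call a higher-order analysis at the matched operating point, and fold the difference back into the map before re-balancing. Every function in the sketch is a toy placeholder, not an NPSS class or the actual compressor analysis.

    def map_efficiency(corrected_flow, pressure_ratio):
        """0-D table-lookup surrogate: efficiency from a compressor map."""
        return 0.86 - 0.002 * abs(corrected_flow - 100.0) - 0.01 * abs(pressure_ratio - 20.0)

    def high_fidelity_efficiency(corrected_flow, pressure_ratio):
        """Stand-in for a 1-D row-by-row (or 2-D/3-D) compressor analysis."""
        return 0.84 - 0.0015 * abs(corrected_flow - 100.0) - 0.008 * abs(pressure_ratio - 20.0)

    def balance_cycle(eff_adder, corrected_flow=102.0, pressure_ratio=19.5):
        """Trivial 'cycle balance': returns the compressor efficiency the cycle uses."""
        return map_efficiency(corrected_flow, pressure_ratio) + eff_adder, corrected_flow, pressure_ratio

    def zoom(max_iter=10, tol=1e-4):
        adder = 0.0
        for _ in range(max_iter):
            eff_0d, wc, pr = balance_cycle(adder)
            eff_hifi = high_fidelity_efficiency(wc, pr)
            new_adder = adder + (eff_hifi - eff_0d)   # pull the 0-D map toward the zoomed result
            if abs(new_adder - adder) < tol:
                break
            adder = new_adder
        return eff_0d, eff_hifi, adder

    if __name__ == "__main__":
        print(zoom())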
Smaller Satellite Operations Near Geostationary Orbit
2007-09-01
Development of a Stirling System Dynamic Model with Enhanced Thermodynamics
NASA Astrophysics Data System (ADS)
Regan, Timothy F.; Lewandowski, Edward J.
2005-02-01
The Stirling Convertor System Dynamic Model developed at NASA Glenn Research Center is a software model developed from first principles that includes the mechanical and mounting dynamics, the thermodynamics, the linear alternator, and the controller of a free-piston Stirling power convertor, along with the end user load. As such it represents the first detailed modeling tool for fully integrated Stirling convertor-based power systems. The thermodynamics of the model were originally a form of the isothermal Stirling cycle. In some situations it may be desirable to improve the accuracy of the Stirling cycle portion of the model. An option under consideration is to enhance the SDM thermodynamics by coupling the model with Gedeon Associates' Sage simulation code. The result will be a model that gives a more accurate prediction of the performance and dynamics of the free-piston Stirling convertor. A method of integrating the Sage simulation code with the System Dynamic Model is described. Results of SDM and Sage simulation are compared to test data. Model parameter estimation and model validation are discussed.
A flexible and qualitatively stable model for cell cycle dynamics including DNA damage effects.
Jeffries, Clark D; Johnson, Charles R; Zhou, Tong; Simpson, Dennis A; Kaufmann, William K
2012-01-01
This paper includes a conceptual framework for cell cycle modeling into which the experimenter can map observed data and evaluate mechanisms of cell cycle control. The basic model exhibits qualitative stability, meaning that regardless of magnitudes of system parameters its instances are guaranteed to be stable in the sense that all feasible trajectories converge to a certain trajectory. Qualitative stability can also be described by the signs of real parts of eigenvalues of the system matrix. On the biological side, the resulting model can be tuned to approximate experimental data pertaining to human fibroblast cell lines treated with ionizing radiation, with or without disabled DNA damage checkpoints. Together these properties validate a fundamental, first order systems view of cell dynamics. Classification Codes: 15A68.
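Qualitative (sign) stability can be checked numerically: for a fixed sign pattern, draw random magnitudes and confirm that every eigenvalue of the resulting system matrix has a negative real part. The chain-structured sign pattern below is a standard textbook example of a sign-stable structure and is used purely for illustration; it is not the cell cycle model of the paper.

    import numpy as np

    def random_chain_matrix(n=4, rng=None):
        """Random-magnitude matrix with a fixed sign pattern: negative diagonal and
        opposite-signed nearest-neighbour pairs (a chain). This is a classic example
        of a sign-stable structure, used here only for illustration."""
        rng = rng or np.random.default_rng()
        a = np.zeros((n, n))
        for i in range(n):
            a[i, i] = -rng.uniform(0.1, 10.0)
            if i + 1 < n:
                a[i, i + 1] = rng.uniform(0.1, 10.0)
                a[i + 1, i] = -rng.uniform(0.1, 10.0)
        return a

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        worst = max(np.linalg.eigvals(random_chain_matrix(rng=rng)).real.max()
                    for _ in range(1000))
        print("largest real part over 1000 random draws:", worst)  # stays negative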
Solar dynamic power for the Space Station
NASA Technical Reports Server (NTRS)
Archer, J. S.; Diamant, E. S.
1986-01-01
This paper describes a computer code which provides a significant advance in the systems analysis capabilities of solar dynamic power modules. While the code can be used to advantage in the preliminary analysis of terrestrial solar dynamic modules its real value lies in the adaptions which make it particularly useful for the conceptualization of optimized power modules for space applications. In particular, as illustrated in the paper, the code can be used to establish optimum values of concentrator diameter, concentrator surface roughness, concentrator rim angle and receiver aperture corresponding to the main heat cycle options - Organic Rankine and Brayton - and for certain receiver design options. The code can also be used to establish system sizing margins to account for the loss of reflectivity in orbit or the seasonal variation of insolation. By the simulation of the interactions among the major components of a solar dynamic module and through simplified formulations of the major thermal-optic-thermodynamic interactions the code adds a powerful, efficient and economic analytical tool to the repertory of techniques available for the design of advanced space power systems.
Jones, Dean P; Sies, Helmut
2015-09-20
The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O₂ and H₂O₂ contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael; Jonlin, Duane; Nadel, Steven
Today’s building energy codes focus on prescriptive requirements for features of buildings that are directly controlled by the design and construction teams and verifiable by municipal inspectors. Although these code requirements have had a significant impact, they fail to influence a large slice of the building energy use pie – including not only miscellaneous plug loads, cooking equipment and commercial/industrial processes, but the maintenance and optimization of the code-mandated systems as well. Currently, code compliance is verified only through the end of construction, and there are no limits or consequences for the actual energy use in an occupied building. In the future, our suite of energy regulations will likely expand to include building efficiency, energy use or carbon emission budgets over their full life cycle. Intelligent building systems, extensive renewable energy, and a transition from fossil fuel to electric heating systems will likely be required to meet ultra-low-energy targets. This paper lays out the authors’ perspectives on how buildings may evolve over the course of the 21st century and the roles that codes and regulations will play in shaping those buildings of the future.
Brown, Nicholas R.; Powers, Jeffrey J.; Feng, B.; ...
2015-05-21
This paper presents analyses of possible reactor representations of a nuclear fuel cycle with continuous recycling of thorium and produced uranium (mostly U-233) with thorium-only feed. The analysis was performed in the context of a U.S. Department of Energy effort to develop a compendium of informative nuclear fuel cycle performance data. The objective of this paper is to determine whether intermediate spectrum systems, having a majority of fission events occurring with incident neutron energies between 1 eV and 10^5 eV, perform as well as fast spectrum systems in this fuel cycle. The intermediate spectrum options analyzed include tight-lattice heavy or light water-cooled reactors, continuously refueled molten salt reactors, and a sodium-cooled reactor with hydride fuel. All options were modeled in reactor physics codes to calculate their lattice physics, spectrum characteristics, and fuel compositions over time. Based on these results, detailed metrics were calculated to compare the fuel cycle performance. These metrics include waste management and resource utilization, and are binned to accommodate uncertainties. The performance of the intermediate systems for this self-sustaining thorium fuel cycle was similar to that of a representative fast spectrum system. However, the number of fission neutrons emitted per neutron absorbed limits performance in intermediate spectrum systems.
Aging, Counterfeiting Configuration Control (AC3)
2010-01-31
SARA continuously polls contributing data sources on a data-specific refresh cycle. SARA supports a continuous risk topology assessment by the program...function was demonstrated at the breadboard level based on comparison of North American Industry Classification System (NAICS) codes. Other
NASA Technical Reports Server (NTRS)
Walton, J. T.
1994-01-01
The development of a single-stage-to-orbit aerospace vehicle intended to be launched horizontally into low Earth orbit, such as the National Aero-Space Plane (NASP), has concentrated on the use of the supersonic combustion ramjet (scramjet) propulsion cycle. SRGULL, a scramjet cycle analysis code, is an engineer's tool capable of nose-to-tail, hydrogen-fueled, airframe-integrated scramjet simulation in a real gas flow with equilibrium thermodynamic properties. This program facilitates initial estimates of scramjet cycle performance by linking a two-dimensional forebody, inlet and nozzle code with a one-dimensional combustor code. Five computer codes (SCRAM, SEAGUL, INLET, Program HUD, and GASH) originally developed at NASA Langley Research Center in support of hypersonic technology are integrated in this program to analyze changing flow conditions. The one-dimensional combustor code is based on the combustor subroutine from SCRAM and the two-dimensional coding is based on an inviscid Euler program (SEAGUL). Kinetic energy efficiency input for sidewall area variation modeling can be calculated by the INLET program code. At the completion of inviscid component analysis, Program HUD, an integral boundary layer code based on the Spaulding-Chi method, is applied to determine the friction coefficient, which is then used in a modified Reynolds Analogy to calculate heat transfer. Real gas flow properties such as flow composition, enthalpy, entropy, and density are calculated by the subroutine GASH. Combustor input conditions are taken from one-dimensionalizing the two-dimensional inlet exit flow. The SEAGUL portions of this program are limited to supersonic flows, but the combustor (SCRAM) section can handle supersonic and dual-mode operation. SRGULL has been compared to scramjet engine tests with excellent results. SRGULL was written in FORTRAN 77 on an IBM PC compatible using IBM's FORTRAN/2 or Microway's NDP386 F77 compiler. The program is fully user interactive, but can also run in batch mode. It operates under the UNIX, VMS, NOS, and DOS operating systems. The source code is not directly compatible with all PC compilers (e.g., Lahey or Microsoft FORTRAN) due to block and segment size requirements. SRGULL executable code requires about 490K RAM and a math coprocessor on PC's. The SRGULL program was developed in 1989, although the component programs originated in the 1960's and 1970's. IBM, IBM PC, and DOS are registered trademarks of International Business Machines. VMS is a registered trademark of Digital Equipment Corporation. UNIX is a registered trademark of Bell Laboratories. NOS is a registered trademark of Control Data Corporation.
Spatiotemporal coding of inputs for a system of globally coupled phase oscillators
NASA Astrophysics Data System (ADS)
Wordsworth, John; Ashwin, Peter
2008-12-01
We investigate the spatiotemporal coding of low amplitude inputs to a simple system of globally coupled phase oscillators with coupling function g(ϕ) = -sin(ϕ + α) + r sin(2ϕ + β) that has robust heteroclinic cycles (slow switching between cluster states). The inputs correspond to detuning of the oscillators. It was recently noted that globally coupled phase oscillators can encode their frequencies in the form of spatiotemporal codes of a sequence of cluster states [P. Ashwin, G. Orosz, J. Wordsworth, and S. Townley, SIAM J. Appl. Dyn. Syst. 6, 728 (2007)]. Concentrating on the case of N=5 oscillators we show in detail how the spatiotemporal coding can be used to resolve all of the information that relates the individual inputs to each other, providing that a long enough time series is considered. We investigate robustness to the addition of noise and find a remarkable stability, especially of the temporal coding, to the addition of noise even for noise of a comparable magnitude to the inputs.
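A direct numerical sketch of such a system is given below: five oscillators with the stated coupling function, integrated with a simple Euler scheme, where small detunings play the role of the inputs. The parameter values (alpha, beta, r), the initial condition, and the snapshot readout are illustrative assumptions and are not tuned to the heteroclinic regime studied in the paper.

    import math

    N = 5
    ALPHA, BETA, R = 1.8, -2.0, 0.2   # illustrative values; the heteroclinic regime in
                                      # the cited work depends on the precise (alpha, beta, r)

    def g(phi):
        return -math.sin(phi + ALPHA) + R * math.sin(2.0 * phi + BETA)

    def simulate(detunings, t_end=500.0, dt=0.01):
        """Euler integration of d(theta_i)/dt = delta_i + (1/N) * sum_j g(theta_i - theta_j),
        with the common frequency absorbed; the detunings delta_i are the low-amplitude
        inputs of the abstract."""
        theta = [0.1 * i * i for i in range(N)]          # generic initial condition
        snapshots = []
        for k in range(int(t_end / dt)):
            dtheta = [detunings[i] + sum(g(theta[i] - theta[j]) for j in range(N)) / N
                      for i in range(N)]
            theta = [(theta[i] + dt * dtheta[i]) % (2.0 * math.pi) for i in range(N)]
            if k % 1000 == 0:
                # Periodic snapshots of the phases; clusters of nearly equal phases are
                # the raw material of the spatiotemporal code.
                snapshots.append([round(x, 2) for x in theta])
        return snapshots

    if __name__ == "__main__":
        for snap in simulate(detunings=[0.0, 0.002, -0.001, 0.001, -0.002])[:10]:
            print(snap)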
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stevens, E.J.; McNeilly, G.S.
The existing National Center for Atmospheric Research (NCAR) code in the Hamburg Oceanic Carbon Cycle Circulation Model and the Hamburg Large-Scale Geostrophic Ocean General Circulation Model was modernized and reduced in size while still producing an equivalent end result. A reduction in the size of the existing code from more than 50,000 lines to approximately 7,500 lines in the new code has made the new code much easier to maintain. The existing code in the Hamburg model uses legacy NCAR (including even emulated CALCOMP subroutines) graphics to display graphical output. The new code uses only current (version 3.1) NCAR subroutines.
System Mass Variation and Entropy Generation in 100-kWe Closed-Brayton-Cycle Space Power Systems
NASA Technical Reports Server (NTRS)
Barrett, Michael J.; Reid, Bryan M.
2004-01-01
State-of-the-art closed-Brayton-cycle (CBC) space power systems were modeled to study performance trends in a trade space characteristic of interplanetary orbiters. For working-fluid molar masses of 48.6, 39.9, and 11.9 kg/kmol, peak system pressures of 1.38 and 3.0 MPa and compressor pressure ratios ranging from 1.6 to 2.4, total system masses were estimated. System mass increased as peak operating pressure increased for all compressor pressure ratios and molar mass values examined. Minimum mass point comparison between 72 percent He at 1.38 MPa peak and 94 percent He at 3.0 MPa peak showed an increase in system mass of 14 percent. Converter flow loop entropy generation rates were calculated for 1.38 and 3.0 MPa peak pressure cases. Physical system behavior was approximated using a pedigreed NASA Glenn modeling code, Closed Cycle Engine Program (CCEP), which included realistic performance prediction for heat exchangers, radiators and turbomachinery.
A New Design Method of Automotive Electronic Real-time Control System
NASA Astrophysics Data System (ADS)
Zuo, Wenying; Li, Yinguo; Wang, Fengjuan; Hou, Xiaobo
The structure and functionality of automotive electronic control systems are becoming more and more complex. The traditional manual-programming development mode for realizing automotive electronic control systems cannot satisfy development needs. Therefore, in order to meet the diversity and speed required in real-time control system development, this paper combines a model-based design approach with automatic code generation technology and proposes a new design method for automotive electronic control systems based on Simulink/RTW. First, algorithms are designed and a control system model is built in Matlab/Simulink. Embedded code is then generated automatically by RTW, and the automotive real-time control system is implemented in an OSEK/VDX operating system environment. The new development mode can significantly shorten the development cycle of automotive electronic control systems; improve program portability, reusability, and scalability; and has practical value for the development of real-time control systems.
Methodology for Software Reliability Prediction. Volume 1.
1987-11-01
...software reliability. A Software Reliability Measurement Framework was established which spans the life cycle of a software system and includes the...specification, prediction, estimation, and assessment of software reliability. Data from 59 systems, representing over 5 million lines of code, were
Dry Air Cooler Modeling for Supercritical Carbon Dioxide Brayton Cycle Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moisseytsev, A.; Sienicki, J. J.; Lv, Q.
Modeling for commercially available and cost-effective dry air coolers, such as those manufactured by Harsco Industries, has been implemented in the Argonne National Laboratory Plant Dynamics Code for system-level dynamic analysis of supercritical carbon dioxide (sCO2) Brayton cycles. The modeling can now be utilized to optimize and simulate sCO2 Brayton cycles with dry air cooling, whereby heat is rejected directly to the atmospheric heat sink without the need for cooling towers that require makeup water for evaporative losses. It has sometimes been stated that a benefit of the sCO2 Brayton cycle is that it enables dry air cooling, implying that the Rankine steam cycle does not. A preliminary and simple examination of a Rankine superheated steam cycle and an air-cooled condenser indicates that dry air cooling can be utilized with both cycles provided that the cycle conditions are selected appropriately.
NASA Technical Reports Server (NTRS)
Sapyta, Joe; Reid, Hank; Walton, Lew
1993-01-01
The topics are presented in viewgraph form and include the following: particle bed reactor (PBR) core cross section; PBR bleed cycle; fuel and moderator flow paths; PBR modeling requirements; characteristics of PBR and nuclear thermal propulsion (NTP) modeling; challenges for PBR and NTP modeling; thermal hydraulic computer codes; capabilities for PBR/reactor application; thermal/hydraulic codes; limitations; physical correlations; comparison of predicted friction factor and experimental data; frit pressure drop testing; cold frit mask factor; decay heat flow rate; startup transient simulation; and philosophy of systems modeling.
NASA Astrophysics Data System (ADS)
Porter, Ian Edward
A nuclear reactor systems code has the ability to model the system response in an accident scenario based on known initial conditions at the onset of the transient. However, there has been a tendency for these codes to lack the detailed thermo-mechanical fuel rod response models needed for accurate prediction of fuel rod failure. This proposed work will couple today's most widely used steady-state (FRAPCON) and transient (FRAPTRAN) fuel rod models with the systems code TRACE for best-estimate modeling of system response in accident scenarios such as a loss of coolant accident (LOCA). In doing so, code modifications will be made to model gamma heating in LWRs during steady-state and accident conditions and to improve fuel rod thermal/mechanical analysis by allowing axial nodalization of burnup-dependent phenomena such as swelling, cladding creep and oxidation. With the ability to model both burnup-dependent parameters and transient fuel rod response, a fuel dispersal study will be conducted using a hypothetical accident scenario under both PWR and BWR conditions to determine the amount of fuel dispersed under varying conditions. Due to the fuel fragmentation size and internal rod pressure both being dependent on burnup, this analysis will be conducted at beginning, middle and end of cycle to examine the effects that cycle time can play on fuel rod failure and dispersal. Current fuel rod and system codes used by the Nuclear Regulatory Commission (NRC) are compilations of legacy codes with only commonly used light water reactor materials, Uranium Dioxide (UO2), Mixed Oxide (U/PuO2), and zirconium alloys. However, the events at Fukushima Daiichi and the Three Mile Island accident have shown the need for exploration into advanced materials possessing improved accident tolerance. This work looks to further modify the NRC codes to include silicon carbide (SiC), an advanced cladding material proposed by current DOE-funded research on accident tolerant fuels (ATF). Several additional fuels will also be analyzed, including uranium nitride (UN), uranium carbide (UC) and uranium silicide (U3Si2). Focusing on the system response in an accident scenario, an emphasis is placed on the fracture mechanics of the ceramic cladding by designing the fuel rods to eliminate pellet-cladding mechanical interaction (PCMI). The time to failure and how much of the fuel in the reactor fails with an advanced fuel design will be analyzed and compared to the current UO2/Zircaloy design using a full-scale reactor model.
An Object-Oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2009-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn Research Center (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within the NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model which provides component flow data such as airflows, temperatures, and pressures, etc., that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results as should be the case.
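The object-oriented structure described above can be illustrated with a minimal sketch in which each engine component computes its own weight and an engine object aggregates them. The class names, scaling laws, and numbers below are hypothetical stand-ins, not the actual WATE++ classes or correlations.

    # Minimal sketch of an object-oriented component-weight roll-up in the spirit
    # of WATE++. Classes and scaling laws are hypothetical, not the real code.
    from abc import ABC, abstractmethod

    class Component(ABC):
        @abstractmethod
        def weight(self) -> float:
            """Return component weight in kg."""

    class Fan(Component):
        def __init__(self, diameter_m: float):
            self.diameter_m = diameter_m
        def weight(self) -> float:
            return 135.0 * self.diameter_m ** 2   # assumed placeholder scaling

    class Combustor(Component):
        def __init__(self, airflow_kg_s: float):
            self.airflow_kg_s = airflow_kg_s
        def weight(self) -> float:
            return 2.1 * self.airflow_kg_s        # assumed placeholder scaling

    class Engine:
        def __init__(self, components):
            self.components = components
        def total_weight(self) -> float:
            return sum(c.weight() for c in self.components)

    engine = Engine([Fan(diameter_m=3.1), Combustor(airflow_kg_s=70.0)])
    print(f"Estimated engine weight: {engine.total_weight():.0f} kg")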
2012 financial outlook: physicians and podiatrists.
Schaum, Kathleen D
2012-04-01
Although the nationally unadjusted average Medicare allowable rates have not increased or decreased significantly, the new codes, the new coding regulations, the NCCI edits, and the Medicare contractors' local coverage determinations (LCDs) will greatly impact physicians' and podiatrists' revenue in 2012. Therefore, every wound care physician and podiatrist should take the time to update their charge sheets and their data entry systems with correct codes, units, and appropriate charges (that account for all the resources needed to perform each service or procedure). They should carefully read the LCDs that are pertinent to the work they perform. If the LCDs contain language that is unclear or incorrect, physicians and podiatrists should contact the Medicare contractor medical director and request a revision through the LCD Reconsideration Process. Medicare has stabilized the MPFS allowable rates for 2012-now physicians and podiatrists must do their part to implement the new coding, payment, and coverage regulations. To be sure that the entire revenue process is working properly, physicians and podiatrists should conduct quarterly, if not monthly, audits of their revenue cycle. Healthcare providers will maintain a healthy revenue cycle by conducting internal audits before outside auditors conduct audits that result in repayments that could have been prevented.
Turbopump Design and Analysis Approach for Nuclear Thermal Rockets
NASA Technical Reports Server (NTRS)
Chen, Shu-cheng S.; Veres, Joseph P.; Fittje, James E.
2006-01-01
A rocket propulsion system, whether it is a chemical rocket or a nuclear thermal rocket, is fairly complex in detail but rather simple in principle. Among all the interacting parts, three components stand out: the pumps and turbines (turbopumps) and the thrust chamber. To obtain an understanding of the overall rocket propulsion system characteristics, one starts from analyzing the interactions among these three components. It is therefore of utmost importance to be able to satisfactorily characterize the turbopump, level by level, at all phases of a vehicle design cycle. Here at NASA Glenn Research Center, as the starting phase of a rocket engine design, specifically a Nuclear Thermal Rocket Engine design, we adopted the approach of using a high-level system cycle analysis code (NESS) to obtain an initial analysis of the operational characteristics of a turbopump required in the propulsion system. A set of turbopump design codes (PumpDes and TurbDes) was then executed to obtain sizing and performance characteristics of the turbopump that were consistent with the mission requirements. A set of turbopump analysis codes (PUMPA and TURBA) was applied to obtain the full performance map for each of the turbopump components; a two-dimensional layout of the turbopump based on these mean line analyses was also generated. Adequacy of the turbopump conceptual design will later be determined by further analyses and evaluation. In this paper, descriptions and discussions of the aforementioned approach are provided and future outlooks are discussed.
NASA Astrophysics Data System (ADS)
Huang, Sheng; Ao, Xiang; Li, Yuan-yuan; Zhang, Rui
2016-09-01
In order to meet the needs of high-speed development of optical communication systems, a construction method for quasi-cyclic low-density parity-check (QC-LDPC) codes based on the multiplicative group of a finite field is proposed. The Tanner graph of the parity-check matrix of the code constructed by this method has no cycles of length 4, which ensures that the obtained code has good distance properties. Simulation results show that, at a bit error rate (BER) of 10^-6 and in the same simulation environment, the net coding gain (NCG) of the proposed QC-LDPC(3780, 3540) code with a code rate of 93.7% is improved by 2.18 dB and 1.6 dB, respectively, compared with those of the RS(255, 239) code in ITU-T G.975 and the LDPC(32640, 30592) code in ITU-T G.975.1. In addition, the NCG of the proposed QC-LDPC(3780, 3540) code is 0.2 dB and 0.4 dB higher, respectively, compared with those of the SG-QC-LDPC(3780, 3540) code based on two different subgroups of a finite field and the AS-QC-LDPC(3780, 3540) code based on two arbitrary sets of a finite field. Thus, the proposed QC-LDPC(3780, 3540) code can be well applied in optical communication systems.
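For readers unfamiliar with the quasi-cyclic structure, the sketch below expands a small exponent (base) matrix into a parity-check matrix built from circulant permutation matrices. The exponent matrix here is an arbitrary illustration of the QC structure, not the multiplicative-group construction or the (3780, 3540) code from the paper.

    # Sketch: expanding an exponent (base) matrix into a quasi-cyclic LDPC
    # parity-check matrix built from circulant permutation matrices. The small
    # exponent matrix below is an arbitrary illustration, not the multiplicative-
    # group construction used in the paper.
    import numpy as np

    def circulant_permutation(size: int, shift: int) -> np.ndarray:
        """Identity matrix with columns cyclically shifted by `shift`."""
        return np.roll(np.eye(size, dtype=int), shift, axis=1)

    def expand_base_matrix(base: np.ndarray, size: int) -> np.ndarray:
        """Replace each exponent e >= 0 by a shifted identity, and -1 by a zero block."""
        blocks = [[circulant_permutation(size, e) if e >= 0
                   else np.zeros((size, size), dtype=int)
                   for e in row] for row in base]
        return np.block(blocks)

    base = np.array([[0, 1, 3, -1],
                     [2, -1, 4, 1],
                     [-1, 5, 0, 2]])
    H = expand_base_matrix(base, size=7)
    print("Parity-check matrix shape:", H.shape)   # (21, 28)
    print("Row weights:", H.sum(axis=1))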
Jones, Dean P.
2015-01-01
Abstract Significance: The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O2 and H2O2 contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Recent Advances: Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Critical Issues: Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Future Directions: Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine. Antioxid. Redox Signal. 23, 734–746. PMID:25891126
Dittrich, Peter
2018-02-01
The organic code concept and its operationalization by molecular codes have been introduced to study the semiotic nature of living systems. This contribution develops further the idea that the semantic capacity of a physical medium can be measured by assessing its ability to implement a code as a contingent mapping. For demonstration and evaluation, the approach is applied to a formal medium: elementary cellular automata (ECA). The semantic capacity is measured by counting the number of ways codes can be implemented. Additionally, a link to information theory is established by taking multivariate mutual information for quantifying contingency. It is shown how ECAs differ in their semantic capacities, how this is related to various ECA classifications, and how this depends on how a meaning is defined. Interestingly, if the meaning should persist for a certain while, the highest semantic capacity is found in CAs with apparently simple behavior, i.e., the fixed-point and two-cycle class. Synergy as a predictor for a CA's ability to implement codes can only be used if contexts implementing codes are common. For large context spaces with sparse coding contexts, synergy is a weak predictor. Concluding, the approach presented here can distinguish CA-like systems with respect to their ability to implement contingent mappings. Applying this to physical systems appears straightforward and might lead to a novel physical property indicating how suitable a physical medium is to implement a semiotic system. Copyright © 2017 Elsevier B.V. All rights reserved.
Dong, Haifeng; Meng, Xiangdan; Dai, Wenhao; Cao, Yu; Lu, Huiting; Zhou, Shufeng; Zhang, Xueji
2015-04-21
Herein, a highly sensitive and selective microRNA (miRNA) detection strategy using DNA-bio-bar-code amplification (BCA) and Nb·BbvCI nicking enzyme-assisted strand cycle for exponential signal amplification was designed. The DNA-BCA system contains a locked nucleic acid (LNA) modified DNA probe for improving hybridization efficiency, while a signal reported molecular beacon (MB) with an endonuclease recognition site was designed for strand cycle amplification. In the presence of target miRNA, the oligonucleotides functionalized magnetic nanoprobe (MNP-DNA) and gold nanoprobe (AuNP-DNA) with numerous reported probes (RP) can hybridize with target miRNA, respectively, to form a sandwich structure. After sandwich structures were separated from the solution by the magnetic field, the RP were released under high temperature to recognize the MB and cleaved the hairpin DNA to induce the dissociation of RP. The dissociated RP then triggered the next strand cycle to produce exponential fluorescent signal amplification for miRNA detection. Under optimized conditions, the exponential signal amplification system shows a good linear range of 6 orders of magnitude (from 0.3 pM to 3 aM) with limit of detection (LOD) down to 52.5 zM, while the sandwich structure renders the system with high selectivity. Meanwhile, the feasibility of the proposed strategy for cell miRNA detection was confirmed by analyzing miRNA-21 in HeLa lysates. Given the high-performance for miRNA analysis, the strategy has a promising application in biological detection and in clinical diagnosis.
Classification of robust heteroclinic cycles for vector fields in R^3 with symmetry
NASA Astrophysics Data System (ADS)
Hawker, David; Ashwin, Peter
2005-09-01
We consider a classification of robust heteroclinic cycles in the positive octant of R^3 under the action of the symmetry group (Z_2)^3. We introduce a coding system to represent different classes up to a topological equivalence, and produce a characterization of all types of robust heteroclinic cycle that can arise in this situation. These cycles may or may not contain the origin within the cycle. We proceed to find a connection between our problem and meandric numbers. We find a direct correlation between the number of classes of robust heteroclinic cycle that do not include the origin and the 'Mercedes-Benz' sequence of integers characterizing meanders through a 'Y-shaped' configuration. We investigate upper and lower bounds for the number of classes possible for robust cycles between n equilibria, one of which may be the origin.
Jeong, Jong Seob; Chang, Jin Ho; Shung, K. Kirk
2009-01-01
For noninvasive treatment of prostate tissue using high intensity focused ultrasound (HIFU), this paper proposes a design of an integrated multi-functional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6 MHz array in the center row for imaging and two 4 MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulation and experimental results, whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy with IMCPA was verified. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performance. The measured −6 dB and −20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and the range sidelobe level was measured to be −48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded −6 dB and −20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be −40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy. PMID:19811994
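The appeal of the 13-bit Barker code for coded excitation lies in its autocorrelation: a peak of 13 with sidelobes of magnitude at most 1. The sketch below computes that autocorrelation and the resulting peak-to-sidelobe ratio; it is a signal-level illustration only and does not model the transducer, the cycles-per-bit waveform, or the notch filter.

    # Autocorrelation of the 13-bit Barker sequence used for coded excitation.
    import numpy as np

    barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
    acf = np.correlate(barker13, barker13, mode="full")

    peak = acf.max()                                          # 13 at zero lag
    sidelobe = np.abs(np.delete(acf, np.argmax(acf))).max()   # at most 1 elsewhere
    print("Autocorrelation:", acf)
    print(f"Peak-to-sidelobe ratio: {20 * np.log10(peak / sidelobe):.1f} dB")  # ~22.3 dB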
Jeong, Jong Seob; Chang, Jin Ho; Shung, K Kirk
2009-09-01
For noninvasive treatment of prostate tissue using high-intensity focused ultrasound this paper proposes a design of an integrated multifunctional confocal phased array (IMCPA) and a strategy to perform both imaging and therapy simultaneously with this array. IMCPA is composed of triple-row phased arrays: a 6-MHz array in the center row for imaging and two 4-MHz arrays in the outer rows for therapy. Different types of piezoelectric materials and stack configurations may be employed to maximize their respective functionalities, i.e., therapy and imaging. Fabrication complexity of IMCPA may be reduced by assembling already constructed arrays. In IMCPA, reflected therapeutic signals may corrupt the quality of imaging signals received by the center-row array. This problem can be overcome by implementing a coded excitation approach and/or a notch filter when B-mode images are formed during therapy. The 13-bit Barker code, which is a binary code with unique autocorrelation properties, is preferred for implementing coded excitation, although other codes may also be used. From both Field II simulation and experimental results, we verified whether these remedial approaches would make it feasible to simultaneously carry out imaging and therapy by IMCPA. The results showed that the 13-bit Barker code with 3 cycles per bit provided acceptable performances. The measured -6 dB and -20 dB range mainlobe widths were 0.52 mm and 0.91 mm, respectively, and a range sidelobe level was measured to be -48 dB regardless of whether a notch filter was used. The 13-bit Barker code with 2 cycles per bit yielded -6 dB and -20 dB range mainlobe widths of 0.39 mm and 0.67 mm. Its range sidelobe level was found to be -40 dB after notch filtering. These results indicate the feasibility of the proposed transducer design and system for real-time imaging during therapy.
Water cycle algorithm: A detailed standard code
NASA Astrophysics Data System (ADS)
Sadollah, Ali; Eskandar, Hadi; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
Inspired by the observation of the water cycle process and movements of rivers and streams toward the sea, a population-based metaheuristic algorithm, the water cycle algorithm (WCA) has recently been proposed. Lately, an increasing number of WCA applications have appeared and the WCA has been utilized in different optimization fields. This paper provides detailed open source code for the WCA, of which the performance and efficiency has been demonstrated for solving optimization problems. The WCA has an interesting and simple concept and this paper aims to use its source code to provide a step-by-step explanation of the process it follows.
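As an orientation to the algorithm's structure, the sketch below is a heavily simplified, WCA-inspired loop: the best solution plays the role of the sea, other solutions flow toward it, and an evaporation/raining step re-randomizes solutions that get too close. It is not the authors' standard code and omits the full stream/river hierarchy.

    # Heavily simplified sketch in the spirit of the water cycle algorithm (WCA):
    # the best solution acts as the "sea", other solutions ("streams") flow toward
    # it, and an evaporation/raining step re-randomizes streams that get too close.
    # This is an illustration of the idea, not the authors' standard WCA code.
    import numpy as np

    def wca_like_minimize(f, bounds, n_pop=30, n_iter=200, d_max=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = np.array(bounds).T
        pop = rng.uniform(lo, hi, size=(n_pop, len(lo)))
        for _ in range(n_iter):
            costs = np.array([f(x) for x in pop])
            sea = pop[np.argmin(costs)].copy()
            for i in range(n_pop):
                # Streams flow toward the sea with a random step length.
                step = rng.uniform(0, 2) * (sea - pop[i])
                pop[i] = np.clip(pop[i] + step, lo, hi)
                # Evaporation + raining: restart streams that converge onto the sea.
                if np.linalg.norm(pop[i] - sea) < d_max:
                    pop[i] = rng.uniform(lo, hi)
            pop[0] = sea  # keep the best solution in the population
        costs = np.array([f(x) for x in pop])
        return pop[np.argmin(costs)], costs.min()

    best_x, best_f = wca_like_minimize(lambda x: np.sum(x**2), bounds=[(-5, 5)] * 3)
    print("Best solution:", best_x, "objective:", best_f)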
Structural analysis of cylindrical thrust chambers, volume 3
NASA Technical Reports Server (NTRS)
Pearson, M. L.
1981-01-01
A system of three computer programs is described for use in conjunction with the BOPACE finite element program. The programs are demonstrated by analyzing cumulative plastic deformation in a regeneratively cooled rocket thrust chamber. The codes provide the capability to predict geometric and material nonlinear behavior of cyclically loaded structures without performing a cycle-by-cycle analysis over the life of the structure. The program set consists of a BOPACE restart tape reader routine, an extrapolation program, and a plot package.
Air breathing engine/rocket trajectory optimization
NASA Technical Reports Server (NTRS)
Smith, V. K., III
1979-01-01
This research has focused on improving the mathematical models of the air-breathing propulsion systems, which can be mated with the rocket engine model and incorporated in trajectory optimization codes. Improved engine simulations provided accurate representation of the complex cycles proposed for advanced launch vehicles, thereby increasing the confidence in propellant use and payload calculations. The versatile QNEP (Quick Navy Engine Program) was modified to allow treatment of advanced turboaccelerator cycles using hydrogen or hydrocarbon fuels and operating in the vehicle flow field.
Enhancement/upgrade of Engine Structures Technology Best Estimator (EST/BEST) Software System
NASA Technical Reports Server (NTRS)
Shah, Ashwin
2003-01-01
This report describes the work performed during the contract period and the capabilities included in the EST/BEST software system. The developed EST/BEST software system includes the integrated NESSUS, IPACS, COBSTRAN, and ALCCA computer codes required to perform the engine cycle mission and component structural analysis. Also, the interactive input generator for the NESSUS, IPACS, and COBSTRAN computer codes has been developed and integrated with the EST/BEST software system. The input generator allows the user to create input from scratch as well as edit existing input files interactively. Since it has been integrated with the EST/BEST software system, it enables the user to modify EST/BEST-generated files and perform the analysis to evaluate the benefits. Appendix A gives details of how to use the newly added features in the EST/BEST software system.
Integrated Turbine-Based Combined Cycle Dynamic Simulation Model
NASA Technical Reports Server (NTRS)
Haid, Daniel A.; Gamble, Eric J.
2011-01-01
A Turbine-Based Combined Cycle (TBCC) dynamic simulation model has been developed to demonstrate all modes of operation, including mode transition, for a turbine-based combined cycle propulsion system. The High Mach Transient Engine Cycle Code (HiTECC) is a highly integrated tool comprised of modules for modeling each of the TBCC systems whose interactions and controllability affect the TBCC propulsion system thrust and operability during its modes of operation. By structuring the simulation modeling tools around the major TBCC functional modes of operation (Dry Turbojet, Afterburning Turbojet, Transition, and Dual Mode Scramjet) the TBCC mode transition and all necessary intermediate events over its entire mission may be developed, modeled, and validated. The reported work details the use of the completed model to simulate a TBCC propulsion system as it accelerates from Mach 2.5, through mode transition, to Mach 7. The completion of this model and its subsequent use to simulate TBCC mode transition significantly extends the state-of-the-art for all TBCC modes of operation by providing a numerical simulation of the systems, interactions, and transient responses affecting the ability of the propulsion system to transition from turbine-based to ramjet/scramjet-based propulsion while maintaining constant thrust.
Maxwell: A semi-analytic 4D code for earthquake cycle modeling of transform fault systems
NASA Astrophysics Data System (ADS)
Sandwell, David; Smith-Konter, Bridget
2018-05-01
We have developed a semi-analytic approach (and computational code) for rapidly calculating 3D time-dependent deformation and stress caused by screw dislocations imbedded within an elastic layer overlying a Maxwell viscoelastic half-space. The Maxwell model is developed in the Fourier domain to exploit the computational advantages of the convolution theorem, hence substantially reducing the computational burden associated with an arbitrarily complex distribution of force couples necessary for fault modeling. The new aspect of this development is the ability to model lateral variations in shear modulus. Ten benchmark examples are provided for testing and verification of the algorithms and code. One final example simulates interseismic deformation along the San Andreas Fault System where lateral variations in shear modulus are included to simulate lateral variations in lithospheric structure.
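The computational advantage mentioned above comes from evaluating the convolution of a source distribution with a response kernel as a multiplication in the Fourier domain. The sketch below shows that generic pattern with a made-up Gaussian kernel and a toy line source; it is not the Maxwell code's viscoelastic Green's function.

    # Generic Fourier-domain convolution, the computational pattern exploited by
    # the semi-analytic approach. The Gaussian "response kernel" here is a
    # placeholder, not the actual elastic/viscoelastic Green's function.
    import numpy as np

    n, dx = 256, 1.0
    x = (np.arange(n) - n // 2) * dx
    X, Y = np.meshgrid(x, x)

    source = np.zeros((n, n))
    source[n // 2, n // 4:3 * n // 4] = 1.0           # simple line source (toy "fault")

    kernel = np.exp(-(X**2 + Y**2) / (2 * 10.0**2))   # placeholder response kernel

    # Convolution theorem: conv(source, kernel) = IFFT( FFT(source) * FFT(kernel) )
    deformation = np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(kernel)))
    print("Peak response:", deformation.max())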
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Stueber, Thomas J.
2013-01-01
A dual flow-path inlet system is being tested to evaluate methodologies for a Turbine Based Combined Cycle (TBCC) propulsion system to perform a controlled inlet mode transition. Prior to experimental testing, simulation models are used to test, debug, and validate potential control algorithms. One simulation package being used for testing is the High Mach Transient Engine Cycle Code simulation, known as HiTECC. This paper discusses the closed-loop control system, which utilizes a shock location sensor to improve inlet performance and operability. Even though the shock location feedback has a coarse resolution, the feedback allows for a reduction in steady-state error and, in some cases, better performance than with previously proposed pressure-ratio-based methods. This paper demonstrates the design and benefits of implementing a proportional-integral controller, an H-Infinity based controller, and a disturbance observer based controller.
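A minimal sketch of the proportional-integral idea applied to a coarsely quantized feedback signal is shown below; the first-order "shock position" plant, gains, and sensor resolution are illustrative assumptions, not the HiTECC inlet model or the controllers designed in the paper.

    # Minimal PI control loop on a toy first-order "shock position" plant with a
    # coarsely quantized sensor. Plant dynamics, gains, and quantization step are
    # illustrative assumptions, not the HiTECC inlet model.
    import numpy as np

    dt, tau, gain = 0.01, 0.5, 1.0   # time step, plant time constant, plant gain
    kp, ki = 2.0, 4.0                # PI gains (assumed)
    sensor_res = 0.05                # coarse shock-location sensor resolution

    setpoint, x, integ = 1.0, 0.0, 0.0
    for _ in range(1000):
        measured = np.round(x / sensor_res) * sensor_res   # quantized feedback
        error = setpoint - measured
        integ += error * dt
        u = kp * error + ki * integ                        # PI control law
        x += dt * (-x + gain * u) / tau                    # first-order plant update

    print(f"Final shock position: {x:.3f} (setpoint {setpoint})")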
Report on SNL RCBC control options
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ponciroli, R.; Vilim, R. B.
The attractive performance of the S-CO2 recompression cycle arises from the thermo-physical properties of carbon dioxide near the critical point. However, to ensure efficient operation of the cycle near the critical point, precise control of the heat removal rate by the Printed Circuit Heat Exchanger (PCHE) upstream of the main compressor is required. Accomplishing this task is not trivial because of the large variations in fluid properties with respect to temperature and pressure near the critical point. The use of a model-based approach for the design of a robust feedback regulator is being investigated to achieve acceptable control of heat removal rate at different operating conditions. A first step in this procedure is the development of a dynamic model of the heat exchanger. In this work, a one-dimensional (1-D) control-oriented model of the PCHE was developed using the General Plant Analyzer and System Simulator (GPASS) code. GPASS is a transient simulation code that supports analysis and control of power conversion cycles based on the S-CO2 Brayton cycle. This modeling capability was used this fiscal year to analyze experiment data obtained from the heat exchanger in the SNL recompression Brayton cycle. The analysis suggested that the error in the water flowrate measurement was greater than required for achieving precise control of heat removal rate. Accordingly, a new water flowmeter was installed, significantly improving the quality of the measurement. Comparison of heat exchanger measurements in subsequent experiments with code simulations yielded good agreement, establishing a reliable basis for the use of the GPASS PCHE model for future development of a model-based feedback controller.
Automated Translation of Safety Critical Application Software Specifications into PLC Ladder Logic
NASA Technical Reports Server (NTRS)
Leucht, Kurt W.; Semmel, Glenn S.
2008-01-01
The numerous benefits of automatic application code generation are widely accepted within the software engineering community. A few of these benefits include raising the abstraction level of application programming, shorter product development time, lower maintenance costs, and increased code quality and consistency. Surprisingly, code generation concepts have not yet found wide acceptance and use in the field of programmable logic controller (PLC) software development. Software engineers at the NASA Kennedy Space Center (KSC) recognized the need for PLC code generation while developing their new ground checkout and launch processing system. They developed a process and a prototype software tool that automatically translates a high-level representation or specification of safety critical application software into ladder logic that executes on a PLC. This process and tool are expected to increase the reliability of the PLC code over that which is written manually, and may even lower life-cycle costs and shorten the development schedule of the new control system at KSC. This paper examines the problem domain and discusses the process and software tool that were prototyped by the KSC software engineers.
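The flavor of such a translation can be illustrated with a toy example that turns a small boolean interlock specification into an IEC 61131-3 Instruction-List-style listing (series contacts for ANDed conditions, a normally closed contact for a negated condition, and a coil for the output). The specification format, tag names, and output below are invented for illustration and are not the KSC tool's actual input or output.

    # Toy illustration of translating a small boolean interlock specification into
    # an IL/ladder-style text listing. The spec format and tag names below are
    # invented for illustration and are not the KSC tool's actual input or output.
    spec = {
        "VENT_VALVE_OPEN": {"all_of": ["TANK_PRESS_HIGH", "SAFE_MODE"],
                            "none_of": ["MANUAL_OVERRIDE"]},
    }

    def to_rung(output, cond):
        lines = []
        for i, tag in enumerate(cond.get("all_of", [])):
            lines.append(("LD " if i == 0 else "AND ") + tag)    # series contacts
        for tag in cond.get("none_of", []):
            lines.append("ANDN " + tag)                          # normally closed contact
        lines.append("ST " + output)                             # output coil
        return lines

    for out, cond in spec.items():
        print(f"( Rung for {out} )")
        print("\n".join(to_rung(out, cond)))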
Impact of Reactor Operating Parameters on Cask Reactivity in BWR Burnup Credit
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilas, Germina; Betzler, Benjamin R; Ade, Brian J
This paper discusses the effect of reactor operating parameters used in fuel depletion calculations on spent fuel cask reactivity, with relevance for boiling-water reactor (BWR) burnup credit (BUC) applications. Assessments that used generic BWR fuel assembly and spent fuel cask configurations are presented. The considered operating parameters, which were independently varied in the depletion simulations for the assembly, included fuel temperature, bypass water density, specific power, and operating history. Different operating history scenarios were considered for the assembly depletion to determine the effect of relative power distribution during the irradiation cycles, as well as the downtime between cycles. Depletion, decay, and criticality simulations were performed using computer codes and associated nuclear data within the SCALE code system. Results quantifying the dependence of cask reactivity on the assembly depletion parameters are presented herein.
Al-Ghamdi, Sami G; Bilec, Melissa M
2015-04-07
This research investigates the relationship between energy use, geographic location, life cycle environmental impacts, and Leadership in Energy and Environmental Design (LEED). The researchers studied worldwide variations in building energy use and associated life cycle impacts in relation to the LEED rating systems. A Building Information Model (BIM) of a reference 43,000 ft² office building was developed and situated in 400 locations worldwide while making relevant changes to the energy model to meet reference codes, such as ASHRAE 90.1. Then life cycle environmental and human health impacts from the buildings' energy consumption were calculated. The results revealed considerable variations between sites in the U.S. and international locations (ranging from 394 ton CO2 equiv to 911 ton CO2 equiv). The variations indicate that location-specific results, when paired with life cycle assessment, can be an effective means to achieve a better understanding of possible adverse environmental impacts as a result of building energy consumption in the context of green building rating systems. Looking at these factors in combination and using a systems approach may allow rating systems like LEED to continue to drive market transformation toward sustainable development, while taking into consideration both energy sources and building efficiency.
Composite load spectra for select space propulsion structural components
NASA Technical Reports Server (NTRS)
Newell, J. F.; Kurth, R. E.; Ho, H.
1986-01-01
A multiyear program is being performed with the objective of developing generic load models, with multiple levels of progressive sophistication, to simulate the composite (combined) load spectra that are induced in space propulsion system components representative of Space Shuttle Main Engines (SSME), such as transfer ducts, turbine blades, and liquid oxygen (LOX) posts. Progress of the first year's effort includes completion of a sufficient portion of each task -- probabilistic models, code development, validation, and an initial operational code. This code has had from its inception an expert system philosophy that can be added to throughout the program and in the future. The initial operational code is only applicable to turbine blade type loadings. The probabilistic model included in the operational code has fitting routines for loads that utilize a modified Discrete Probabilistic Distribution termed RASCAL, a barrier crossing method, and a Monte Carlo method. An initial load model was developed by Battelle and is currently used for the slowly varying duty cycle type loading. The intent is to use the model and related codes essentially in their current form for all loads that are based on measured or calculated data that follow a slowly varying profile.
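A generic Monte Carlo combination of load components into a composite load, of the kind such probabilistic load models support, is sketched below. The distributions and design limit are made up and do not represent SSME data or the RASCAL discrete-distribution method.

    # Generic Monte Carlo combination of load components into a composite load,
    # illustrating the kind of probabilistic load synthesis described above.
    # Distributions and limits are made up, not SSME data or the RASCAL method.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    steady = rng.normal(loc=100.0, scale=5.0, size=n)       # slowly varying duty-cycle load
    dynamic = rng.lognormal(mean=2.0, sigma=0.4, size=n)    # vibratory/transient component
    thermal = rng.uniform(low=0.0, high=20.0, size=n)       # thermal load increment

    composite = steady + dynamic + thermal
    limit = 150.0                                           # assumed design limit
    p_exceed = np.mean(composite > limit)
    print(f"Mean composite load: {composite.mean():.1f}, P(load > {limit}) = {p_exceed:.4f}")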
Migration of the Gaudi and LHCb software repositories from CVS to Subversion
NASA Astrophysics Data System (ADS)
Clemencic, M.; Degaudenzi, H.; LHCb Collaboration
2011-12-01
A common code repository is of primary importance in a distributed development environment such as large HEP experiments. CVS (Concurrent Versions System) has been used in the past years at CERN for the hosting of shared software repositories, among which were the repositories for the Gaudi Framework and the LHCb software projects. Many developers around the world produced alternative systems to share code and revisions among several developers, mainly to overcome the limitations in CVS, and CERN has recently started a new service for code hosting based on the version control system Subversion. The differences between CVS and Subversion and the way the code was organized in Gaudi and LHCb CVS repositories required careful study and planning of the migration. Special care was used to define the organization of the new Subversion repository. To avoid as much as possible disruption in the development cycle, the migration has been gradual with the help of tools developed explicitly to hide the differences between the two systems. The principles guiding the migration steps, the organization of the Subversion repository and the tools developed will be presented, as well as the problems encountered both from the librarian and the user points of view.
NASA Astrophysics Data System (ADS)
Watanabe, Masakazu; Fujita, Shigeru; Tanaka, Takashi; Kubota, Yasubumi; Shinagawa, Hiroyuki; Murata, Ken T.
2018-01-01
We perform numerical modeling of the interchange cycle in the magnetosphere-ionosphere convection system for oblique northward interplanetary magnetic field (IMF). The interchange cycle results from the coupling of IMF-to-lobe reconnection and lobe-to-closed reconnection. Using a global magnetohydrodynamic simulation code, for an IMF clock angle of 20° (measured from due north), we successfully reproduced the following features of the interchange cycle. (1) In the ionosphere, for each hemisphere, there appears a reverse cell circulating exclusively in the closed field line region (the reciprocal cell). (2) The topology transition of the magnetic field along a streamline near the equatorial plane precisely represents the magnetic flux reciprocation during the interchange cycle. (3) Field-aligned electric fields on the interplanetary-open separatrix and on the open-closed separatrix are those that are consistent with IMF-to-lobe reconnection and lobe-to-closed reconnection, respectively. These three features prove the existence of the interchange cycle in the simulated magnetosphere-ionosphere system. We conclude that the interchange cycle does exist in the real solar wind-magnetosphere-ionosphere system. In addition, the simulation revealed that the reciprocal cell described above is not a direct projection of the diffusion region as predicted by the "vacuum" model in which diffusion is added a priori to the vacuum magnetic topology. Instead, the reciprocal cell is a consequence of the plasma convection system coupled to the so-called NBZ ("northward
Overview of the Turbine Based Combined Cycle Discipline
NASA Technical Reports Server (NTRS)
Thomas, Scott R.; Walker, James F.; Pittman, James L.
2009-01-01
The NASA Fundamental Aeronautics Hypersonics project is focused on technologies for combined cycle, airbreathing propulsion systems to enable reusable launch systems for access to space. Turbine Based Combined Cycle (TBCC) propulsion systems offer specific impulse (Isp) improvements over rocket-based propulsion systems in the subsonic takeoff and return mission segments and offer improved safety. The potential to realize more aircraft-like operations with expanded launch site capability and reduced system maintenance are additional benefits. The most critical TBCC enabling technologies as identified in the National Aeronautics Institute (NAI) study were: 1) mode transition from the low speed propulsion system to the high speed propulsion system, 2) high Mach turbine engine development, 3) transonic aero-propulsion performance, 4) low-Mach-number dual-mode scramjet operation, 5) innovative 3-D flowpath concepts and 6) innovative turbine based combined cycle integration. To address several of these key TBCC challenges, NASA's Hypersonics project (TBCC Discipline) initiated an experimental mode transition task that includes an analytic research endeavor to assess the state of the art of propulsion system performance and design codes. This initiative includes inlet fluid and turbine performance codes and engineering-level algorithms. This effort has been focused on the Combined Cycle Engine Large-Scale Inlet Mode Transition Experiment (CCE LIMX), which is a fully integrated TBCC propulsion system with flow path sizing consistent with previous NASA and DoD proposed hypersonic experimental flight test plans. This experiment is being tested in the NASA-GRC 10 x 10 Supersonic Wind Tunnel (SWT) Facility. The goal of this activity is to address key hypersonic combined-cycle-engine issues: (1) dual integrated inlet operability and performance issues -- unstart constraints, distortion constraints, bleed requirements, controls, and operability margins; (2) mode-transition constraints imposed by the turbine and the ramjet/scramjet flow paths (imposed variable geometry requirements); (3) turbine engine transients (and associated time scales) during transition; (4) high-altitude turbine engine re-light; and (5) the operating constraints of a Mach 3-7 combustor (specific to the TBCC). The model will be tested in several test phases to develop a unique TBCC database to assess and validate design and analysis tools and address operability, integration, and interaction issues for this class of advanced propulsion systems. The test article and all support equipment are complete and available at the facility. The test article installation and facility build-up in preparation for the inlet performance and operability characterization are near completion, and testing is planned to commence in FY11.
Studies on Vapor Adsorption Systems
NASA Technical Reports Server (NTRS)
Shamsundar, N.; Ramotowski, M.
1998-01-01
The project consisted of performing experiments on single and dual bed vapor adsorption systems, thermodynamic cycle optimization, and thermal modeling. The work was described in a technical paper that appeared in conference proceedings and a Master's thesis, which were previously submitted to NASA. The present report describes some additional thermal modeling work done subsequently, and includes listings of computer codes developed during the project. Recommendations for future work are provided.
ART/Ada design project, phase 1: Project plan
NASA Technical Reports Server (NTRS)
Allen, Bradley P.
1988-01-01
The plan and schedule for Phase 1 of the Ada based ESBT Design Research Project is described. The main platform for the project is a DEC Ada compiler on VAX mini-computers and VAXstations running the Virtual Memory System (VMS) operating system. The Ada effort and lines of code are given in tabular form. A chart is given of the entire project life cycle.
An Object-oriented Computer Code for Aircraft Engine Weight Estimation
NASA Technical Reports Server (NTRS)
Tong, Michael T.; Naylor, Bret A.
2008-01-01
Reliable engine-weight estimation at the conceptual design stage is critical to the development of new aircraft engines. It helps to identify the best engine concept amongst several candidates. At NASA Glenn (GRC), the Weight Analysis of Turbine Engines (WATE) computer code, originally developed by Boeing Aircraft, has been used to estimate the engine weight of various conceptual engine designs. The code, written in FORTRAN, was originally developed for NASA in 1979. Since then, substantial improvements have been made to the code to improve the weight calculations for most of the engine components. Most recently, to improve the maintainability and extensibility of WATE, the FORTRAN code has been converted into an object-oriented version. The conversion was done within NASA's NPSS (Numerical Propulsion System Simulation) framework. This enables WATE to interact seamlessly with the thermodynamic cycle model, which provides component flow data such as airflows, temperatures, and pressures that are required for sizing the components and weight calculations. The tighter integration between the NPSS and WATE would greatly enhance system-level analysis and optimization capabilities. It also would facilitate the enhancement of the WATE code for next-generation aircraft and space propulsion systems. In this paper, the architecture of the object-oriented WATE code (or WATE++) is described. Both the FORTRAN and object-oriented versions of the code are employed to compute the dimensions and weight of a 300-passenger aircraft engine (GE90 class). Both versions of the code produce essentially identical results, as should be the case. Keywords: NASA, aircraft engine, weight, object-oriented
NASA Astrophysics Data System (ADS)
Yuan, F.; Wang, G.; Painter, S. L.; Tang, G.; Xu, X.; Kumar, J.; Bisht, G.; Hammond, G. E.; Mills, R. T.; Thornton, P. E.; Wullschleger, S. D.
2017-12-01
In the Arctic tundra ecosystem, soil freezing-thawing is one of the dominant physical processes through which biogeochemical (e.g., carbon and nitrogen) cycles are tightly coupled. Besides hydraulic transport, freezing-thawing can cause pore water movement and aqueous species gradients, which are additional mechanisms for soil nitrogen (N) reactive transport in the tundra ecosystem. In this study, we have fully coupled the aboveground processes of the Land Model (ALM) of an in-development ESM (the Advanced Climate Model for Energy, ACME) with a state-of-the-art massively parallel 3-D subsurface thermal-hydrology and reactive transport code, PFLOTRAN. The resulting coupled ALM-PFLOTRAN model is a Land Surface Model (LSM) capable of resolving 3-D soil thermal-hydrological-biogeochemical cycles. This specific version of PFLOTRAN has incorporated the CLM-CN Converging Trophic Cascade (CTC) model and a full yet simple and robust soil N cycle. It includes absorption-desorption for soil NH4+ and gas dissolution-degassing processes as well. It also implements thermal-hydrology mode codes with three newly modified freezing-thawing algorithms which can greatly improve computing performance with regard to numerical stiffness at the freezing point. Here we tested the model in fully 3-D coupled mode at the Next Generation Ecosystem Experiment-Arctic (NGEE-Arctic) field intensive study site at the Barrow Environmental Observatory (BEO), AK. The simulations show that: (1) synchronous coupling of soil thermal-hydrology and biogeochemistry in 3-D can greatly impact ecosystem dynamics across the polygonal tundra landscape; and (2) freezing-thawing cycles can add more complexity to the system, resulting in greater mobility of soil N vertically and laterally, depending upon local micro-topography. As a preliminary experiment, the model is also implemented for the Pan-Arctic region in 1-D column mode (i.e., no lateral connection), showing significant differences compared to stand-alone ALM. The developed ALM-PFLOTRAN coupling codes embedded within the ESM will be used for Pan-Arctic regional evaluation of climate-change-caused ecosystem responses and their feedbacks to the climate system at various scales.
Validation of a program for supercritical power plant calculations
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian
2011-12-01
This article describes the validation of a supercritical steam cycle. The cycle model was created with the commercial program GateCycle and validated using an in-house code of the Institute of Power Engineering and Turbomachinery. The Institute's in-house code has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for high- and low-pressure regenerative heat exchangers, and pressure losses in heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the code developed in-house at the Institute of Power Engineering and Turbomachinery. Properties, such as thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, heat provided to the steam cycle and heat taken from the steam cycle, were compared. The last step of the analysis was calculation of relative errors of the compared values. The method used for relative error calculations is presented in the paper. The resulting relative errors are very small, generally not exceeding 0.1%. Based on our analysis, it can be concluded that using the GateCycle software for calculations of supercritical power plants is possible.
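For reference, the percentage relative error used in such code-to-code comparisons can be computed as shown below; the choice of the in-house code as the reference and the sample numbers are assumptions of this sketch.

    # Percentage relative error between two codes' predictions of the same
    # quantity, with the in-house code taken as the reference (an assumption).
    def relative_error_pct(value_gatecycle: float, value_reference: float) -> float:
        return abs(value_gatecycle - value_reference) / abs(value_reference) * 100.0

    # Example: net cycle efficiency predicted as 45.62% vs. 45.65% (made-up numbers).
    print(f"{relative_error_pct(45.62, 45.65):.3f} %")   # ~0.066 %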
THRSTER: A THRee-STream Ejector Ramjet Analysis and Design Tool
NASA Technical Reports Server (NTRS)
Chue, R. S.; Sabean, J.; Tyll, J.; Bakos, R. J.
2000-01-01
An engineering tool for analyzing ejectors in rocket based combined cycle (RBCC) engines has been developed. A key technology for multi-cycle RBCC propulsion systems is the ejector, which functions as the compression stage of the ejector ramjet cycle. The THRee STream Ejector Ramjet analysis tool was developed to analyze the complex aerothermodynamic and combustion processes that occur in this device. The formulated model consists of three quasi-one-dimensional streams, one each for the ejector primary flow, the secondary flow, and the mixed region. The model space marches through the mixer, combustor, and nozzle to evaluate the solution along the engine. In its present form, the model is intended for an analysis mode in which the diffusion rates of the primary and secondary into the mixed stream are stipulated. The model offers the ability to analyze the highly two-dimensional ejector flowfield while still benefiting from the simplicity and speed of an engineering tool. To validate the developed code, wall static pressure measurements from the Penn-State and NASA-ART RBCC experiments were used to compare with the results generated by the code. The calculated solutions were generally found to have satisfactory agreement with the pressure measurements along the engines, although further modeling effort may be required when a strong shock train is formed at the rocket exhaust. The range of parameters in which the code would generate valid results is presented and discussed.
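To give a feel for the control-volume bookkeeping behind ejector analysis, the sketch below computes a single fully mixed-out state for two streams in a constant-area duct from mass, momentum, and energy conservation, neglecting pressure forces and compressibility. It is a one-shot illustration with made-up inlet states, not the three-stream space-marching model described above.

    # Simple control-volume estimate of the fully mixed state of two streams in a
    # constant-area ejector, from mass, momentum, and energy conservation. This is
    # a one-shot mixed-out calculation with made-up inlet states, not the
    # three-stream space-marching model described above (pressure forces and
    # compressibility effects are neglected for brevity).
    def mixed_out(m1, v1, h1, m2, v2, h2):
        m = m1 + m2
        v = (m1 * v1 + m2 * v2) / m                    # momentum balance (no pressure terms)
        h0 = (m1 * (h1 + 0.5 * v1**2) + m2 * (h2 + 0.5 * v2**2)) / m
        h = h0 - 0.5 * v**2                            # static enthalpy of the mixed stream
        return m, v, h

    # Primary (rocket exhaust) and secondary (entrained air) streams, illustrative values.
    m_mix, v_mix, h_mix = mixed_out(m1=10.0, v1=2500.0, h1=2.0e6,
                                    m2=40.0, v2=300.0, h2=3.0e5)
    print(f"Mixed flow: {m_mix:.0f} kg/s at {v_mix:.0f} m/s, h = {h_mix/1e6:.2f} MJ/kg")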
Mazar, Joseph; Rosado, Amy; Shelley, John; Marchica, John; Westmoreland, Tamarah J
2017-01-01
The long non-coding RNA GAS5 has been shown to modulate cancer proliferation in numerous human cancer systems and has been correlated with successful patient outcome. Our examination of GAS5 in neuroblastoma has revealed robust expression in both MYCN-amplified and non-amplified cell lines. Knockdown of GAS5 in vitro resulted in defects in cell proliferation, apoptosis, and induced cell cycle arrest. Further analysis of GAS5 clones revealed multiple novel splice variants, two of which inversely modulated with MYCN status. Complementation studies of the variants post-knockdown of GAS5 indicated alternate phenotypes, with one variant (FL) considerably enhancing cell proliferation by rescuing cell cycle arrest and the other (C2) driving apoptosis, suggesting a unique role for each in neuroblastoma cancer physiology. Global sequencing and ELISA arrays revealed that the loss of GAS5 induced p53, BRCA1, and GADD45A, which appeared to modulate cell cycle arrest in concert. Complementation with only the FL GAS5 clone could rescue cell cycle arrest, stabilizing HDM2 and leading to the loss of p53. Together, these data offer novel therapeutic targets in the form of lncRNA splice variants for separate challenges against cancer growth and cell death. PMID:28035057
Entanglement-assisted quantum quasicyclic low-density parity-check codes
NASA Astrophysics Data System (ADS)
Hsieh, Min-Hsiu; Brun, Todd A.; Devetak, Igor
2009-03-01
We investigate the construction of quantum low-density parity-check (LDPC) codes from classical quasicyclic (QC) LDPC codes with girth greater than or equal to 6. We have shown that the classical codes in the generalized Calderbank-Shor-Steane construction do not need to satisfy the dual-containing property as long as preshared entanglement is available to both sender and receiver. We can use this to avoid the many 4-cycles which typically arise in dual-containing LDPC codes. The advantage of such quantum codes comes from the use of efficient decoding algorithms such as the sum-product algorithm (SPA). It is well known that in the SPA, cycles of length 4 make successive decoding iterations highly correlated and hence limit the decoding performance. We show the principle of constructing quantum QC-LDPC codes which require only small amounts of initial shared entanglement.
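A 4-cycle in the Tanner graph corresponds to two rows of the parity-check matrix sharing ones in two or more columns, so checking for them is straightforward; the sketch below runs that check on a small made-up matrix, not on a code from the paper.

    # Check a binary parity-check matrix for length-4 cycles in its Tanner graph:
    # a 4-cycle exists iff some pair of rows shares a 1 in two or more columns.
    # The small matrix here is a made-up example, not a code from the paper.
    import numpy as np
    from itertools import combinations

    def has_four_cycle(H: np.ndarray) -> bool:
        for r1, r2 in combinations(range(H.shape[0]), 2):
            if np.sum(H[r1] & H[r2]) >= 2:
                return True
        return False

    H = np.array([[1, 1, 0, 1, 0, 0],
                  [0, 1, 1, 0, 1, 0],
                  [1, 0, 1, 0, 0, 1]], dtype=int)
    print("Contains 4-cycles:", has_four_cycle(H))   # False for this example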
A Case Study of IV&V Cost Effectiveness
NASA Technical Reports Server (NTRS)
Neal, Ralph D.; McCaugherty, Dan; Joshi, Tulasi; Callahan, John
1997-01-01
This paper looks at the Independent Verification and Validation (IV&V) of NASA's Space Shuttle Day of Launch I-Load Update (DoLILU) project. IV&V is defined. The system's development life cycle is explained. Data collection and analysis are described. DoLILU Issue Tracking Reports (DITRs) authored by IV&V personnel are analyzed to determine the effectiveness of IV&V in finding errors before the code, testing, and integration phase of the software development life cycle. The study's findings are reported along with the limitations of the study and planned future research.
Prospective scenarios of nuclear energy evolution over the 21st century
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massara, S.; Tetart, P.; Garzenne, C.
2006-07-01
In this paper, different world scenarios of nuclear energy development over the 21st century are analyzed by means of the EDF fuel cycle simulation code for nuclear scenario studies, TIRELIRE-STRATEGIE. Three nuclear demand scenarios are considered, and the performance of different nuclear strategies in satisfying these scenarios is analyzed and discussed, focusing on natural uranium consumption and industrial requirements related to the nuclear reactors and the associated fuel cycle facilities. Both thermal-spectrum systems (Pressurized Water Reactor and High Temperature Gas-cooled Reactor) and Fast Reactors are investigated. (authors)
The Potential of Different Concepts of Fast Breeder Reactor for the French Fleet Renewal
DOE Office of Scientific and Technical Information (OSTI.GOV)
Massara, Simone; Tetart, Philippe; Lecarpentier, David
2006-07-01
The performances of different concepts of Fast Breeder Reactor (Na-cooled, He-cooled and Pb-cooled FBR) for the current French fleet renewal are analyzed in the framework of a transition scenario to a 100% FBR fleet at the end of the 21st century. Firstly, the modeling of these three FBR types by means of a semi-analytical approach in TIRELIRE-STRATEGIE, the EDF fuel cycle simulation code, is presented, together with some validation elements against ERANOS, the French reference code system for neutronic FBR analysis (CEA). Afterwards, performance comparisons are made in terms of maximum deployable power, natural uranium consumption and waste production. The results show that the FBR maximum deployable capacity, independently of the FBR technology, is highly sensitive to the fuel cycle options, like the spent nuclear fuel cooling time or the Minor Actinides management strategy. Thus, some of the key parameters defining the dynamics of FBR deployment are highlighted, to inform the orientation of R and D in the development and optimization of these systems. (authors)
Expert system validation in prolog
NASA Technical Reports Server (NTRS)
Stock, Todd; Stachowitz, Rolf; Chang, Chin-Liang; Combs, Jacqueline
1988-01-01
An overview is given of the Expert System Validation Assistant (EVA), which is being implemented in Prolog at the Lockheed AI Center. Prolog was chosen to facilitate rapid prototyping of the structure and logic checkers, and since February 1987 we have implemented code to check for irrelevance, subsumption, duplication, dead ends, unreachability, and cycles. The architecture chosen is extremely flexible and expansible, yet concise and complementary with the normal interactive style of Prolog. The foundation of the system is the connection graph representation. Rules and facts are modeled as nodes in the graph, and arcs indicate common patterns between rules. The basic activity of the validation system is then a traversal of the connection graph, searching for various patterns the system recognizes as erroneous. To aid in specifying these patterns, a metalanguage is developed, providing the user with the basic facilities required to reason about the expert system. Using the metalanguage, the user can, for example, give the Prolog inference engine the goal of finding inconsistent conclusions among the rules, and Prolog will search the graph for instantiations which match the definition of inconsistency. Examples of code for some of the checkers are provided and the algorithms explained. Technical highlights include automatic construction of a connection graph, demonstration of the use of the metalanguage, the A* algorithm modified to detect all unique cycles, general-purpose stacks in Prolog, and a general-purpose database browser with pattern completion.
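As a language-neutral illustration of one of the structural checks mentioned above, the sketch below detects cycles in a small directed rule graph with a depth-first search. The graph and rule names are invented, and this is not the EVA Prolog code or its modified A* algorithm for enumerating all unique cycles.

    # Depth-first search for cycles in a small directed "rule graph", illustrating
    # the kind of structural check (cycle detection) performed by a validation
    # tool. The graph and rule names are invented; this is not the EVA Prolog code.
    def find_cycles(graph):
        cycles, stack, on_stack, visited = [], [], set(), set()

        def dfs(node):
            stack.append(node)
            on_stack.add(node)
            for succ in graph.get(node, []):
                if succ in on_stack:
                    cycles.append(stack[stack.index(succ):] + [succ])
                elif succ not in visited:
                    dfs(succ)
            on_stack.discard(stack.pop())
            visited.add(node)

        for n in graph:
            if n not in visited:
                dfs(n)
        return cycles

    rules = {"r1": ["r2"], "r2": ["r3"], "r3": ["r1", "r4"], "r4": []}
    print(find_cycles(rules))   # [['r1', 'r2', 'r3', 'r1']]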
Conceptual design study of small long-life PWR based on thorium cycle fuel
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, M. Nurul; Su'ud, Zaki; Waris, Abdul
2014-09-30
The neutronic performance of a small long-life Pressurized Water Reactor (PWR) using thorium-cycle-based fuel has been investigated. The thorium cycle, which has a higher conversion ratio in the thermal region compared to the uranium cycle, produces a significant amount of 233U during burnup. The cell burnup calculations were performed by the PIJ module of the SRAC code using a nuclear data library based on JENDL 3.3, while the multi-energy-group diffusion calculations were optimized for the whole core in cylindrical two-dimensional R-Z geometry by SRAC-CITATION. This study introduces a thorium nitride fuel system with ZIRLO as the cladding material. The optimization of the 350 MWt small long-life PWR results in small excess reactivity and reduced power peaking during its operation.
SERIIUS-MAGEEP Visiting Scholars Program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ortega, Jesus D.
2014-08-28
Recent studies have assessed closed-loop supercritical carbon dioxide (s-CO2) Brayton cycles to be a higher energy-density system in comparison to equivalent superheated steam Rankine systems. At turbine inlet conditions of 700°C and 20 MPa, a cycle thermal efficiency of ~50% can be achieved. Achieving these high efficiencies will help concentrating solar power (CSP) technologies to become a competitive alternative to current power generation methods. To incorporate an s-CO2 Brayton power cycle in a solar power tower system, the development of a solar receiver capable of providing an outlet temperature of 700°C (at 20 MPa) is necessary. To satisfy the temperature requirements of an s-CO2 Brayton cycle with recuperation and recompression, the s-CO2 must undergo a temperature rise of ~200°C as it flows through the solar receiver. The main objective is to develop an optical-thermal-fluid and structural model to validate a tubular receiver that will receive a heat input of ~0.33 MWth from the heliostat field at the National Solar Thermal Test Facility (NSTTF), Albuquerque, NM, USA. We also commenced the development of computational models and testing of air receivers being developed by the Indian Institute of Science (IISc) and the Indian Institute of Technology in Bombay (IIT-B). The helical tubular receiver is expected to counteract the effect of thermal expansion while using a cavity to reduce the radiative and convective losses. Initially, this receiver will be tested for a temperature range of 100-300°C under 1 MPa of pressurized air. The helical air receiver will be exposed to 10 kWth to achieve a temperature rise of ~200°C. Preliminary tests to validate the modeling will be performed before the design and construction of a larger scale receiver. Lastly, I focused on the development of a new computational tool that would allow us to perform a nodal creep-fatigue analysis on the receivers and heat exchangers being developed. This tool was developed using MATLAB and is capable of processing the results obtained from ANSYS Fluent and Structural combined, which was limited when using commercial software. The main advantage of this code is that it can be modified to run in parallel, making it more affordable and faster compared to available commercial codes. The code is in the process of validation and is currently being compared to nCode Design Life.
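A nodal creep-fatigue assessment of the kind described above typically combines a fatigue damage fraction (applied cycles over allowable cycles) with a creep damage fraction (time at stress over rupture time). The sketch below shows that bookkeeping in a generic linear damage-summation form with made-up numbers; it is not the MATLAB tool or a specific design-code procedure.

    # Generic linear damage summation for a creep-fatigue screening check at one
    # node: fatigue fraction (applied cycles / allowable cycles) plus creep
    # fraction (hold time / rupture time). All inputs are made-up illustrations,
    # not output of the MATLAB tool described above or a design-code procedure.
    fatigue_blocks = [(10_000, 5.0e5), (2_000, 8.0e4)]    # (applied cycles, allowable cycles)
    creep_blocks = [(30_000.0, 2.0e5), (5_000.0, 4.0e4)]  # (hours at stress, rupture hours)

    fatigue_damage = sum(n / n_allow for n, n_allow in fatigue_blocks)
    creep_damage = sum(t / t_rupture for t, t_rupture in creep_blocks)
    total = fatigue_damage + creep_damage

    print(f"Fatigue damage: {fatigue_damage:.3f}, creep damage: {creep_damage:.3f}")
    print("Within assumed damage envelope:", total <= 1.0)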
Design study of long-life PWR using thorium cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, Moh. Nurul; Su'ud, Zaki; Waris, Abdul
2012-06-06
A design study of a long-life Pressurized Water Reactor (PWR) using the thorium cycle has been performed. The thorium cycle in general has a higher conversion ratio in the thermal spectrum domain than the uranium cycle. Cell, burn-up, and multigroup diffusion calculations were performed with the PIJ-CITATION-SRAC code using libraries based on JENDL 3.2. The neutronic analysis of the infinite cell calculation shows that {sup 231}Pa performs better than {sup 237}Np as a burnable poison in the thorium fuel system. A thorium oxide system with 8% {sup 233}U enrichment and 7.6-8% {sup 231}Pa is the most suitable fuel for a small long-life PWR core because it gives a reactivity swing of less than 1% {Delta}k/k and a longer burn-up period (more than 20 years). Using this result, a small long-life PWR core can be designed for long-term operation with excess reactivity reduced to as low as 0.53% {Delta}k/k and reduced power peaking during its operation.
Irradiation-driven Mass Transfer Cycles in Compact Binaries
NASA Astrophysics Data System (ADS)
Büning, A.; Ritter, H.
2005-08-01
We elaborate on the analytical model of Ritter, Zhang, & Kolb (2000) which describes the basic physics of irradiation-driven mass transfer cycles in semi-detached compact binary systems. In particular, we take into account a contribution to the thermal relaxation of the donor star which is unrelated to irradiation and which was neglected in previous studies. We present results of simulations of the evolution of compact binaries undergoing mass transfer cycles, in particular also of systems with a nuclear evolved donor star. These computations have been carried out with a stellar evolution code which computes mass transfer implicitly and models irradiation of the donor star in a point source approximation, thereby allowing for much more realistic simulations than were hitherto possible. We find that low-mass X-ray binaries (LMXBs) and cataclysmic variables (CVs) with orbital periods ⪉ 6hr can undergo mass transfer cycles only for low angular momentum loss rates. CVs containing a giant donor or one near the terminal age main sequence are more stable than previously thought, but can possibly also undergo mass transfer cycles.
Modeling and optimization of a hybrid solar combined cycle (HYCS)
NASA Astrophysics Data System (ADS)
Eter, Ahmad Adel
2011-12-01
The main objective of this thesis is to investigate the feasibility of integrating concentrated solar power (CSP) technology with conventional combined cycle technology for electricity generation in Saudi Arabia. The generated electricity can be used locally to meet the annually increasing demand; specifically, it can be utilized to meet the demand during the hours of 10 am-3 pm and prevent blackout hours in some industrial sectors. The proposed CSP design gives flexibility in system operation, since it works as a conventional combined cycle during the night and switches to a hybrid solar combined cycle during the day. The first objective of the thesis is to develop a thermo-economic mathematical model that can simulate the performance of a hybrid solar-fossil fuel combined cycle. The second objective is to develop a computer simulation code that can solve the thermo-economic model using available software such as E.E.S. The developed simulation code is used to analyze the thermo-economic performance of different configurations for integrating CSP with the conventional fossil fuel combined cycle, in order to identify the optimal integration configuration. This optimal configuration has been investigated further to obtain the optimal design of the solar field and the corresponding optimal solar share. Thermo-economic performance metrics available in the literature have been used in the present work to assess the investigated configurations. The economic and environmental impacts of integrating CSP with the conventional fossil fuel combined cycle are estimated and discussed. Finally, the optimal integration configuration is found to be solar integration on the steam side of the conventional combined cycle with a solar multiple of 0.38, which requires 29 hectares; the LEC of the HYCS is 63.17 $/MWh under Dhahran weather conditions.
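For context on the levelized electricity cost (LEC) figure quoted above, the following sketch shows the standard LEC formula rather than the thesis's E.E.S. model; every numeric input is a made-up placeholder, not data from the study.

```python
# Levelized electricity cost: annualized capital plus yearly O&M and fuel,
# divided by annual net generation.
def lec(capex, om_per_year, fuel_per_year, energy_mwh_per_year,
        discount_rate=0.08, lifetime_years=25):
    # Capital recovery factor converts the up-front investment to an annuity.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_years /
           ((1 + discount_rate) ** lifetime_years - 1))
    return (crf * capex + om_per_year + fuel_per_year) / energy_mwh_per_year

# Placeholder numbers only (USD and MWh/yr).
print(f"LEC = {lec(3.0e8, 6.0e6, 4.0e7, 1.5e6):.2f} $/MWh")
```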
Cooperative multi-user detection and ranging based on pseudo-random codes
NASA Astrophysics Data System (ADS)
Morhart, C.; Biebl, E. M.
2009-05-01
We present an improved approach for a round-trip time-of-flight distance measurement system. The system is intended for use in a cooperative localisation system for automotive applications and is therefore designed to address a large number of communication partners per measurement cycle. By using coded signals in a time-division multiple access scheme, we can detect a large number of pedestrian sensors with just one car sensor. We achieve this by using very short transmit bursts in combination with a real-time correlation algorithm. Furthermore, the correlation approach provides real-time time-of-arrival data that can serve as a trigger impulse for other communication systems. The distance accuracy of the correlation result was further increased by adding a Fourier interpolation filter. The system performance was checked with a prototype at 2.4 GHz. We reached a distance measurement accuracy of 12 cm at a range of up to 450 m.
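As a generic illustration of pseudo-random-code round-trip ranging (not the authors' hardware implementation), this sketch correlates a received burst against the transmitted code to estimate the round-trip delay and hence the range; the code length, sample rate, noise level, and delay are arbitrary assumptions.

```python
import numpy as np

C = 3.0e8          # speed of light, m/s
FS = 100e6         # assumed sample rate, Hz
rng = np.random.default_rng(0)

# Assumed +/-1 pseudo-random ranging code and a received echo delayed by
# a known number of samples plus additive noise.
code = rng.choice([-1.0, 1.0], size=1023)
true_delay = 300                       # samples
rx = np.zeros(4096)
rx[true_delay:true_delay + code.size] += code
rx += 0.5 * rng.standard_normal(rx.size)

# Cross-correlate and take the peak as the round-trip delay estimate.
corr = np.correlate(rx, code, mode="valid")
delay_samples = int(np.argmax(corr))
round_trip_time = delay_samples / FS
distance = C * round_trip_time / 2.0   # divide by 2: out-and-back path
print(f"estimated range: {distance:.1f} m")
```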
Kobayashi, Shintaro; Yoshii, Kentaro; Hirano, Minato; Muto, Memi; Kariwa, Hiroaki
2017-02-01
Reverse genetics systems facilitate investigation of many aspects of the life cycle and pathogenesis of viruses. However, genetic instability in Escherichia coli has hampered development of a reverse genetics system for West Nile virus (WNV). In this study, we developed a novel reverse genetics system for WNV based on homologous recombination in mammalian cells. Introduction of the DNA fragment coding for the WNV structural protein together with a DNA-based replicon resulted in the release of infectious WNV. The growth rate and plaque size of the recombinant virus were almost identical to those of the parent WNV. Furthermore, chimeric WNV was produced by introducing the DNA fragment coding for the structural protein and replicon plasmid derived from various strains. Here, we report development of a novel system that will facilitate research into WNV infection. Copyright © 2016 Elsevier B.V. All rights reserved.
Yamashita, M; Yamashita, A; Ishii, T; Naruo, Y; Nagatomo, M
1998-11-01
A portable recording system was developed for the analysis of more than three analog signals collected in field work. A stereo audio recorder, available as a consumer product, was used as the core component of the system. Of the two recording tracks, one stores a multiplexed analog signal and the other a reference code. The reference code indicates the start of each multiplexing cycle and the switching point of each channel. The multiplexed signal is played back and decoded with reference to the code to reconstruct the original signal profiles. Since commercial stereo recorders cut off the DC component, a fixed reference voltage is inserted in the multiplexing sequence. The change of voltage at switching from the reference to the data channel is measured from the played-back signal to recover the original data with its DC component. Movement of vehicles and of the human head were analyzed by the system. It was verified to be capable of recording and analyzing multi-channel signals at a sampling rate of more than 10 Hz.
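A minimal sketch of the demultiplexing idea described above, assuming an idealized reference track that marks each cycle start; the channel count, frame layout, and reference-voltage slot are invented for illustration and do not reproduce the authors' hardware scheme.

```python
import numpy as np

N_CHANNELS = 3
SAMPLES_PER_SLOT = 4   # assumed samples recorded per channel per cycle

def demultiplex(mux, cycle_starts):
    """Split a multiplexed track back into per-channel series.

    mux:          1-D array from the signal track.
    cycle_starts: indices (taken from the reference-code track) where each
                  multiplexing cycle starts; slot 0 of every cycle is
                  assumed to hold the fixed reference voltage.
    """
    channels = [[] for _ in range(N_CHANNELS)]
    for start in cycle_starts:
        ref = np.mean(mux[start:start + SAMPLES_PER_SLOT])  # reference slot
        for ch in range(N_CHANNELS):
            s = start + (ch + 1) * SAMPLES_PER_SLOT
            # Subtracting the reference restores the DC level lost by the recorder.
            channels[ch].append(np.mean(mux[s:s + SAMPLES_PER_SLOT]) - ref)
    return np.array(channels)

# Tiny synthetic example: two cycles of (reference, ch0, ch1, ch2).
frame = lambda ref, a, b, c: np.repeat([ref, a, b, c], SAMPLES_PER_SLOT)
track = np.concatenate([frame(0.0, 1.0, 2.0, 3.0), frame(0.0, 1.5, 2.5, 3.5)])
print(demultiplex(track, cycle_starts=[0, 4 * SAMPLES_PER_SLOT]))
```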
A users' guide to the trace contaminant control simulation computer program
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various trace contaminant control technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. The results obtained from the program can be useful in assessing different technology combinations, system sizing, system location with respect to other life support systems, and the overall life cycle economics of a trace contaminant control system. The user's manual is extracted in its entirety from NASA TM-108409 to provide a stand-alone reference for using any version of the program. The first publication of the manual as part of TM-108409 also included a detailed listing of version 8.0 of the program. As changes to the code were necessary, it became apparent that the user's manual should be separate from the computer code documentation and be general enough to provide guidance in using any version of the program. Provided in the guide are tips for input file preparation, general program execution, and output file manipulation. Information concerning source code listings of the latest version of the computer program may be obtained by contacting the author.
Numerical Propulsion System Simulation: A Common Tool for Aerospace Propulsion Being Developed
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Naiman, Cynthia G.
2001-01-01
The NASA Glenn Research Center is developing an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). This simulation is initially being used to support aeropropulsion in the analysis and design of aircraft engines. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the Aviation Safety Program and Advanced Space Transportation. NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes using the Common Object Request Broker Architecture (CORBA) in the NPSS Developer's Kit to facilitate collaborative engineering. The NPSS Developer's Kit will provide the tools to develop custom components and to use the CORBA capability for zooming to higher fidelity codes, coupling to multidiscipline codes, transmitting secure data, and distributing simulations across different platforms. These powerful capabilities will extend NPSS from a zero-dimensional simulation tool to a multifidelity, multidiscipline system-level simulation tool for the full life cycle of an engine.
Life Prediction for a CMC Component Using the NASALIFE Computer Code
NASA Technical Reports Server (NTRS)
Gyekenyesi, John Z.; Murthy, Pappu L. N.; Mital, Subodh K.
2005-01-01
The computer code, NASALIFE, was used to provide estimates for life of an SiC/SiC stator vane under varying thermomechanical loading conditions. The primary intention of this effort is to show how the computer code NASALIFE can be used to provide reasonable estimates of life for practical propulsion system components made of advanced ceramic matrix composites (CMC). Simple loading conditions provided readily observable and acceptable life predictions. Varying the loading conditions such that low cycle fatigue and creep were affected independently provided expected trends in the results for life due to varying loads and life due to creep. Analysis was based on idealized empirical data for the 9/99 Melt Infiltrated SiC fiber reinforced SiC.
NASA Technical Reports Server (NTRS)
Foster, Lancert E.; Saunders, John D., Jr.; Sanders, Bobby W.; Weir, Lois J.
2012-01-01
NASA is focused on technologies for combined cycle, air-breathing propulsion systems to enable reusable launch systems for access to space. Turbine Based Combined Cycle (TBCC) propulsion systems offer specific impulse (Isp) improvements over rocket-based propulsion systems in the subsonic takeoff and return mission segments along with improved safety. Among the most critical TBCC enabling technologies are: 1) mode transition from the low speed propulsion system to the high speed propulsion system, 2) high Mach turbine engine development and 3) innovative turbine based combined cycle integration. To address these challenges, NASA initiated an experimental mode transition task including analytical methods to assess the state-of-the-art of propulsion system performance and design codes. One effort has been the Combined-Cycle Engine Large Scale Inlet Mode Transition Experiment (CCE-LIMX) which is a fully integrated TBCC propulsion system with flowpath sizing consistent with previous NASA and DoD proposed Hypersonic experimental flight test plans. This experiment was tested in the NASA GRC 10 by 10-Foot Supersonic Wind Tunnel (SWT) Facility. The goal of this activity is to address key hypersonic combined-cycle engine issues including: (1) dual integrated inlet operability and performance issues-unstart constraints, distortion constraints, bleed requirements, and controls, (2) mode-transition sequence elements caused by switching between the turbine and the ramjet/scramjet flowpaths (imposed variable geometry requirements), and (3) turbine engine transients (and associated time scales) during transition. Testing of the initial inlet and dynamic characterization phases were completed and smooth mode transition was demonstrated. A database focused on a Mach 4 transition speed with limited off-design elements was developed and will serve to guide future TBCC system studies and to validate higher level analyses.
Interactive-graphic flowpath plotting for turbine engines
NASA Technical Reports Server (NTRS)
Corban, R. R.
1981-01-01
An engine cycle program capable of simulating the design and off-design performance of arbitrary turbine engines, and a computer code which, when used in conjunction with the cycle code, can predict the weight of the engines, are described. A graphics subroutine was added to the code to enable the engineer to visualize the designed engine with more clarity by producing an overall view of the designed engine for output on a graphics device using IBM-370 graphics subroutines. In addition, with the engine drawn on a graphics screen, the program allows the interactive user to make changes to the code inputs so that the engine can be redrawn and reweighed. These improvements allow better use of the code in conjunction with the engine cycle program.
The Statistical Loop Analyzer (SLA)
NASA Technical Reports Server (NTRS)
Lindsey, W. C.
1985-01-01
The statistical loop analyzer (SLA) is designed to automatically measure the acquisition, tracking and frequency stability performance characteristics of symbol synchronizers, code synchronizers, carrier tracking loops, and coherent transponders. Automated phase lock and system level tests can also be made using the SLA. Standard baseband, carrier and spread spectrum modulation techniques can be accommodated. Through the SLA's phase error jitter and cycle slip measurements, the acquisition and tracking thresholds of the unit under test are determined; any false phase and frequency lock events are statistically analyzed and reported in the SLA output in probabilistic terms. Automated signal drop-out tests can be performed in order to troubleshoot algorithms and evaluate the reacquisition statistics of the unit under test. Cycle slip rates and cycle slip probabilities can be measured using the SLA. These measurements, combined with bit error probability measurements, are all that are needed to fully characterize the acquisition and tracking performance of a digital communication system.
Incorporation of Electrical Systems Models Into an Existing Thermodynamic Cycle Code
NASA Technical Reports Server (NTRS)
Freeh, Josh
2003-01-01
Integration of the entire system includes: fuel cells, motors, propulsors, thermal/power management, compressors, etc. Use of existing, pre-developed NPSS capabilities includes: 1) optimization tools; 2) gas turbine models for hybrid systems; 3) increased interplay between subsystems; 4) off-design modeling capabilities; 5) altitude effects; and 6) existing transient modeling architecture. Other factors include: 1) easier transfer between users and groups of users; 2) general aerospace industry acceptance and familiarity; and 3) a flexible analysis tool that can also be used for ground power applications.
Theta phase precession and phase selectivity: a cognitive device description of neural coding
NASA Astrophysics Data System (ADS)
Zalay, Osbert C.; Bardakjian, Berj L.
2009-06-01
Information in neural systems is carried by way of phase and rate codes. Neuronal signals are processed through transformative biophysical mechanisms at the cellular and network levels. Neural coding transformations can be represented mathematically in a device called the cognitive rhythm generator (CRG). Incoming signals to the CRG are parsed through a bank of neuronal modes that orchestrate proportional, integrative and derivative transformations associated with neural coding. Mode outputs are then mixed through static nonlinearities to encode (spatio) temporal phase relationships. The static nonlinear outputs feed and modulate a ring device (limit cycle) encoding output dynamics. Small coupled CRG networks were created to investigate coding functionality associated with neuronal phase preference and theta precession in the hippocampus. Phase selectivity was found to be dependent on mode shape and polarity, while phase precession was a product of modal mixing (i.e. changes in the relative contribution or amplitude of mode outputs resulted in shifting phase preference). Nonlinear system identification was implemented to help validate the model and explain response characteristics associated with modal mixing; in particular, principal dynamic modes experimentally derived from a hippocampal neuron were inserted into a CRG and the neuron's dynamic response was successfully cloned. From our results, small CRG networks possessing disynaptic feedforward inhibition in combination with feedforward excitation exhibited frequency-dependent inhibitory-to-excitatory and excitatory-to-inhibitory transitions that were similar to transitions seen in a single CRG with quadratic modal mixing. This suggests nonlinear modal mixing to be a coding manifestation of the effect of network connectivity in shaping system dynamic behavior. We hypothesize that circuits containing disynaptic feedforward inhibition in the nervous system may be candidates for interpreting upstream rate codes to guide downstream processes such as phase precession, because of their demonstrated frequency-selective properties.
Double Linear Damage Rule for Fatigue Analysis
NASA Technical Reports Server (NTRS)
Halford, G.; Manson, S.
1985-01-01
Double Linear Damage Rule (DLDR) method for use by structural designers to determine fatigue-crack-initiation life when structure subjected to unsteady, variable-amplitude cyclic loadings. Method calculates in advance of service how many loading cycles imposed on structural component before macroscopic crack initiates. Approach eventually used in design of high performance systems and incorporated into design handbooks and codes.
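As a hedged illustration of the double-linear idea only (not the published Manson-Halford constants), the sketch below accumulates damage linearly first in a crack-initiation phase and then in a crack-propagation phase; the knee fraction is a user-supplied assumption, whereas the actual DLDR derives it from the lives at the load levels involved.

```python
# Two-phase (double linear) damage accumulation sketch.
# For each load level we need the applied cycles and the total life at
# that level, plus an assumed split of life into Phase I and Phase II.
def double_linear_damage(blocks, phase1_fraction=0.6):
    """blocks: list of (applied_cycles, total_life) pairs, in loading order.

    phase1_fraction is an assumed fraction of each life spent in Phase I
    (crack initiation); it stands in for the DLDR knee-point relations,
    which are not reproduced here.
    """
    d1 = d2 = 0.0
    for n, life in blocks:
        n1_cap = phase1_fraction * life
        if d1 < 1.0:                      # still in Phase I
            used = min(n, (1.0 - d1) * n1_cap)
            d1 += used / n1_cap
            n -= used
        if n > 0:                         # remaining cycles damage Phase II
            d2 += n / ((1.0 - phase1_fraction) * life)
    return d1, d2                          # failure predicted when d2 >= 1

print(double_linear_damage([(3000, 10000), (2000, 4000)]))
```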
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation
NASA Technical Reports Server (NTRS)
Holt, James B.; Ruf, Joe
1999-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for both Diffusion and Afterburning (DAB) and Simultaneous Mixing and Combustion (SMC) test conditions. Results from both the 2D and the 3D models are presented.
National Combustion Code: Parallel Implementation and Performance
NASA Technical Reports Server (NTRS)
Quealy, A.; Ryder, R.; Norris, A.; Liu, N.-S.
2000-01-01
The National Combustion Code (NCC) is being developed by an industry-government team for the design and analysis of combustion systems. CORSAIR-CCD is the current baseline reacting flow solver for NCC. This is a parallel, unstructured grid code which uses a distributed memory, message passing model for its parallel implementation. The focus of the present effort has been to improve the performance of the NCC flow solver to meet combustor designer requirements for model accuracy and analysis turnaround time. Improving the performance of this code contributes significantly to the overall reduction in time and cost of the combustor design cycle. This paper describes the parallel implementation of the NCC flow solver and summarizes its current parallel performance on an SGI Origin 2000. Earlier parallel performance results on an IBM SP-2 are also included. The performance improvements which have enabled a turnaround of less than 15 hours for a 1.3 million element fully reacting combustion simulation are described.
Lu, Ming-Chi; Hsieh, Min-Chih; Koo, Malcolm; Lai, Ning-Sheng
2016-01-01
Primary Sjögren's syndrome (pSS) is a progressive systemic autoimmune disorder with a strong female predominance. Hormonal influences are thought to play a role in the development of pSS. However, no studies have specifically evaluated the association between irregular menstrual cycles and pSS. Therefore, using a health claims database, this study investigated the risk of pSS in women with irregular menstrual cycles. We conducted a case-control study using Taiwan's National Health Insurance Research Database. A total of 360 patients diagnosed with pSS (International Classification of Diseases, ninth revision, clinical modification, ICD-9-CM code 710.2) between 2001 and 2012 were identified. Controls were frequency-matched at a rate of 5:1 to the cases by five-year age interval and index year. Both cases and controls were retrospectively traced back until 2001 for the diagnosis of irregular menstrual cycles (ICD-9-CM code 626.4). The risk of pSS was assessed using multivariate logistic regression analyses. Irregular menstrual cycles were significantly associated with pSS [adjusted odds ratio (AOR) = 1.38, p = 0.027], after adjusting for insured amount, urbanization level, and thyroid disorder. In addition, when the data were stratified by three age categories, only the patients in the age category of 45-55 years showed a significant association between irregular menstrual cycles and pSS (AOR = 1.74, p = 0.005). In this nationwide, population-based case-control study, we found a significantly increased risk of pSS in female patients with irregular menstrual cycles, particularly those in their mid-forties to mid-fifties.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, S.M.
1995-01-01
The requirements of ANSI/ANS 8.1 specify that calculational methods for away-from-reactor criticality safety analyses be validated against experimental measurements. If credit for the negative reactivity of the depleted (or spent) fuel isotopics is desired, it is necessary to benchmark computational methods against spent fuel critical configurations. This report summarizes a portion of the ongoing effort to benchmark away-from-reactor criticality analysis methods using critical configurations from commercial pressurized-water reactors. The analysis methodology selected for all the calculations reported herein is based on the codes and data provided in the SCALE-4 code system. The isotopic densities for the spent fuel assemblies in the critical configurations were calculated using the SAS2H analytical sequence of the SCALE-4 system. The sources of data and the procedures for deriving SAS2H input parameters are described in detail. The SNIKR code module was used to extract the necessary isotopic densities from the SAS2H results and provide the data in the format required by the SCALE criticality analysis modules. The CSASN analytical sequence in SCALE-4 was used to perform resonance processing of the cross sections. The KENO V.a module of SCALE-4 was used to calculate the effective multiplication factor (k{sub eff}) of each case. The SCALE-4 27-group burnup library containing ENDF/B-IV (actinides) and ENDF/B-V (fission products) data was used for all the calculations. This volume of the report documents the SCALE system analysis of three reactor critical configurations for the Sequoyah Unit 2 Cycle 3. This unit and cycle were chosen because of the relevance in spent fuel benchmark applications: (1) the unit had a significantly long downtime of 2.7 years during the middle of cycle (MOC) 3, and (2) the core consisted entirely of burned fuel at the MOC restart. The first benchmark critical calculation was the MOC restart at hot, full-power (HFP) critical conditions. The other two benchmark critical calculations were the beginning-of-cycle (BOC) startup at both hot, zero-power (HZP) and HFP critical conditions. These latter calculations were used to check for consistency in the calculated results for different burnups and downtimes. The k{sub eff} results were in the range of 1.00014 to 1.00259 with a standard deviation of less than 0.001.
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-01-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major beneficiaries of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's Law, S(f,P) = 1/((1 - f) + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's Law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
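A small sketch of the Amdahl's Law speedup bound quoted above; the parallel fraction and processor counts below are arbitrary examples, not measurements from the MCNP study.

```python
def amdahl_speedup(f, p):
    """Theoretical speedup for a task whose fraction f multiprocesses on p processors."""
    return 1.0 / ((1.0 - f) + f / p)

# Example: a 95% parallelizable Monte Carlo tracking job on various cluster sizes.
for p in (2, 8, 32, 128):
    print(f"P={p:4d}  S={amdahl_speedup(0.95, p):5.2f}")
```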
Full core analysis of IRIS reactor by using MCNPX.
Amin, E A; Bashter, I I; Hassan, Nabil M; Mustafa, S S
2016-07-01
This paper describes a neutronic analysis of the fresh-fuelled IRIS (International Reactor Innovative and Secure) reactor using the MCNPX code. The analysis included criticality calculations, radial and axial power distributions, the nuclear peaking factor, and the axial offset percent at the beginning of the fuel cycle. The effective multiplication factor obtained by MCNPX is compared with previous calculations by the HELIOS/NESTLE, CASMO/SIMULATE, modified CORD-2 nodal, and SAS2H/KENO-V code systems. It is found that the k-eff value obtained by MCNPX is closest to the CORD-2 value. The radial and axial powers are compared with other published results obtained using the SAS2H/KENO-V code. Moreover, the WIMS-D5 code is used to study the effect of enriched boron in the form of ZrB2 on the effective multiplication factor (k-eff) of the fuel pin. In this part of the calculation, k-eff is calculated at different concentrations of Boron-10 in mg/cm at different stages of burnup of the unit cell. The results of this part are compared with published results obtained with the HELIOS code. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Homma, Yuto; Moriwaki, Hiroyuki; Ohki, Shigeo; Ikeda, Kazumi
2014-06-01
This paper deals with verification of the three-dimensional triangular prismatic discrete ordinates transport calculation code ENSEMBLE-TRIZ by comparison with the multi-group Monte Carlo calculation code GMVP for a large fast breeder reactor. The reactor is a 750 MWe sodium-cooled reactor. Nuclear characteristics are calculated at the beginning of cycle of an initial core and at the beginning and end of cycle of the equilibrium core. According to the calculations, the differences between the two methodologies are smaller than 0.0002 Δk in the multiplication factor, about 1% (relative) in the control rod reactivity, and 1% in the sodium void reactivity.
C3 generic workstation: Performance metrics and applications
NASA Technical Reports Server (NTRS)
Eddy, Douglas R.
1988-01-01
The large number of integrated dependent measures available on a command, control, and communications (C3) generic workstation under development are described. In this system, embedded communications tasks will manipulate workload to assess the effects of performance-enhancing drugs (sleep aids and decongestants), work/rest cycles, biocybernetics, and decision support systems on performance. Task performance accuracy and latency will be event coded for correlation with other measures of voice stress and physiological functioning. Sessions will be videotaped to score non-verbal communications. Physiological recordings include spectral analysis of EEG, ECG, vagal tone, and EOG. Subjective measurements include SWAT, fatigue, POMS and specialized self-report scales. The system will be used primarily to evaluate the effects on performance of drugs, work/rest cycles, and biocybernetic concepts. Performance assessment algorithms will also be developed, including those used with small teams. This system provides a tool for integrating and synchronizing behavioral and psychophysiological measures in a complex decision-making environment.
High Efficiency Nuclear Power Plants Using Liquid Fluoride Thorium Reactor Technology
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.; Rarick, Richard A.; Rangarajan, Rajmohan
2009-01-01
An overall system analysis approach is used to propose potential conceptual designs of advanced terrestrial nuclear power plants based on Oak Ridge National Laboratory (ORNL) Molten Salt Reactor (MSR) experience and utilizing Closed Cycle Gas Turbine (CCGT) thermal-to-electric energy conversion technology. In particular conceptual designs for an advanced 1 GWe power plant with turbine reheat and compressor intercooling at a 950 K turbine inlet temperature (TIT), as well as near term 100 MWe demonstration plants with TITs of 950 and 1200 K are presented. Power plant performance data were obtained for TITs ranging from 650 to 1300 K by use of a Closed Brayton Cycle (CBC) systems code which considered the interaction between major sub-systems, including the Liquid Fluoride Thorium Reactor (LFTR), heat source and heat sink heat exchangers, turbo-generator machinery, and an electric power generation and transmission system. Optional off-shore submarine installation of the power plant is a major consideration.
Cyber physical systems role in manufacturing technologies
NASA Astrophysics Data System (ADS)
Al-Ali, A. R.; Gupta, Ragini; Nabulsi, Ahmad Al
2018-04-01
Empowered by recent developments in single System-on-Chip, Internet of Things, and cloud computing technologies, cyber physical systems are evolving as a major controller during and after the product manufacturing process. In addition to their real physical space, cyber products nowadays have a virtual space. A product's virtual space is a digital twin attached to it that enables manufacturers and their clients to better manufacture, monitor, maintain, and operate it throughout its life cycle, i.e., from the product manufacturing date, through operation, to the end of its lifespan. Each product is equipped with a tiny microcontroller that has a unique identification number, an access code, and WiFi connectivity so that it can be accessed anytime and anywhere during its life cycle. This paper presents the cyber physical systems architecture and its role in manufacturing. It also highlights the role of the Internet of Things and cloud computing in industrial manufacturing and factory automation.
Scientists from CCR have generated a comprehensive structural map of Kaposi sarcoma-associated herpesvirus polyadenylated nuclear (PAN) RNA, a long non-coding RNA that helps the virus evade detection by its host's immune system. The findings open new opportunities to study the life cycle of this cancer-causing virus.
Kneifel, Joshua; O'Rear, Eric; Webb, David; O'Fallon, Cheyney
2018-02-01
To conduct a more complete analysis of low-energy and net-zero energy buildings that considers both the operating and embodied energy/emissions, members of the building community look to life-cycle assessment (LCA) methods. This paper examines differences in the relative impacts of cost-optimal energy efficiency measure combinations depicting residential buildings up to and beyond net-zero energy consumption on operating and embodied flows using data from the Building Industry Reporting and Design for Sustainability (BIRDS) Low-Energy Residential Database. Results indicate that net-zero performance leads to a large increase in embodied flows (over 40%) that offsets some of the reductions in operational flows, but overall life-cycle flows are still reduced by over 60% relative to the state energy code. Overall, building designs beyond net-zero performance can partially offset embodied flows with negative operational flows by replacing traditional electricity generation with solar production, but would require an additional 8.34 kW (18.54 kW in total) of due south facing solar PV to reach net-zero total life-cycle flows. Such a system would meet over 239% of operational consumption of the most energy efficient design considered in this study and over 116% of a state code-compliant building design in its initial year of operation.
NASA Technical Reports Server (NTRS)
Manderscheid, J. M.; Kaufman, A.
1985-01-01
Turbine blades for reusable space propulsion systems are subject to severe thermomechanical loading cycles that result in large inelastic strains and very short lives. These components require the use of anisotropic high-temperature alloys to meet the safety and durability requirements of such systems. To assess the effects on blade life of material anisotropy, cyclic structural analyses are being performed for the first stage high-pressure fuel turbopump blade of the space shuttle main engine. The blade alloy is directionally solidified MAR-M 246 alloy. The analyses are based on a typical test stand engine cycle. Stress-strain histories at the airfoil critical location are computed using the MARC nonlinear finite-element computer code. The MARC solutions are compared to cyclic response predictions from a simplified structural analysis procedure developed at the NASA Lewis Research Center.
The electron transfer system of syntrophically grown Desulfovibrio vulgaris
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, C.B.; He, Z.; Yang, Z.K.
2009-05-01
Interspecies hydrogen transfer between organisms producing and consuming hydrogen promotes the decomposition of organic matter in most anoxic environments. Although syntrophic couplings between hydrogen producers and consumers are a major feature of the carbon cycle, mechanisms for energy recovery at the extremely low free energies of reactions typical of these anaerobic communities have not been established. In this study, comparative transcriptional analysis of a model sulfate-reducing microbe, Desulfovibrio vulgaris Hildenborough, suggested the use of alternative electron transfer systems dependent upon growth modality. During syntrophic growth on lactate with a hydrogenotrophic methanogen, D. vulgaris up-regulated numerous genes involved in electron transfer and energy generation when compared with sulfate-limited monocultures. In particular, genes coding for the putative membrane-bound Coo hydrogenase, two periplasmic hydrogenases (Hyd and Hyn) and the well-characterized high-molecular weight cytochrome (Hmc) were among the most highly expressed and up-regulated. Additionally, a predicted operon coding for genes involved in lactate transport and oxidation exhibited up-regulation, further suggesting an alternative pathway for electrons derived from lactate oxidation during syntrophic growth. Mutations in a subset of genes coding for Coo, Hmc, Hyd and Hyn impaired or severely limited syntrophic growth but had little effect on growth via sulfate respiration. These results demonstrate that syntrophic growth and sulfate respiration use largely independent energy generation pathways and imply that understanding of microbial processes sustaining nutrient cycling must consider lifestyles not captured in pure culture.
A CFD Study of Turbojet and Single-Throat Ramjet Ejector Interaction
NASA Technical Reports Server (NTRS)
Chang, Ing; Hunter, Louis
1996-01-01
Supersonic ejector-diffuser systems have application in driving an advanced airbreathing propulsion system consisting of turbojet engines acting as the primary and a single-throat ramjet acting as the secondary. The turbojet engines are integrated into the single-throat ramjet to minimize variable geometry and eliminate redundant propulsion components. The result is a simple, lightweight system that is operable from takeoff to high Mach numbers. At this high Mach number (approximately Mach 3.0), the turbojets are turned off and the high-speed ramjet/scramjet takes over and drives the vehicle to Mach 6.0. The turbojet-ejector-ramjet system consists of nonafterburning turbojet engines with ducting canted at 20 degrees to supply supersonic flow (downstream of the CD nozzle) to the horizontal ramjet duct at a given supply total pressure and temperature. Two conditions were modelled by a 2-D full Navier-Stokes code at Mach 2.0. The code modelled the Fabri choke case as well as the non-Fabri, non-critical case, using a computational throat to supply the back pressure. The results, which primarily predict the secondary mass flow rate and the mixed conditions at the ejector exit, were in reasonable agreement with the 1-D cycle code (TBCC).
Subsonic Performance of Ejector Systems
NASA Astrophysics Data System (ADS)
Weil, Samuel
Combined cycle engines combining scramjets with turbojets or rockets can provide efficient hypersonic flight. Ejectors have the potential to increase the thrust and efficiency of combined cycle engines near static conditions. A computer code was developed to support the design of a small-scale, turbine-based combined cycle demonstrator with an ejector, built around a commercially available turbojet engine. This code was used to analyze the performance of an ejector system built around a micro-turbojet. With the use of a simple ejector, net thrust increases as large as 20% over the base engine were predicted. Additionally, the specific fuel consumption was lowered by 10%. Increasing the secondary-to-primary area ratio of the ejector led to significant improvements in static thrust, specific fuel consumption (SFC), and propulsive efficiency. Further ejector performance improvements can be achieved by using a diffuser. Ejector performance drops off rapidly with increasing Mach number. The ejector has lower thrust and higher SFC than the turbojet core at Mach numbers above 0.2. When the nozzle chokes, a significant drop in ejector performance is seen. When a diffuser is used, higher Mach numbers lead to choking in the mixer and a shock in the nozzle, causing a significant decrease in ejector performance. Evaluation of different turbojets shows that ejector performance depends significantly on the properties of the turbojet. Static thrust and SFC improvements can be achieved with increasing ejector area for all engines, but the size of the increase and the change in performance at higher Mach numbers depend heavily on the turbojet. The use of an ejector in a turbine-based combined cycle configuration also increases performance at static conditions, with a thrust increase of 5% and an SFC decrease of 5% for the tested configuration.
A high order approach to flight software development and testing
NASA Technical Reports Server (NTRS)
Steinbacher, J.
1981-01-01
The use of a software development facility is discussed as a means of producing a reliable and maintainable ECS software system, and as a means of providing efficient use of the ECS hardware test facility. Principles applied to software design are given, including modularity, abstraction, hiding, and uniformity. The general objectives of each phase of the software life cycle are also given, including testing, maintenance, code development, and requirement specifications. Software development facility tools are summarized, and tool deficiencies recognized in the code development and testing phases are considered. Due to limited lab resources, the functional simulation capabilities may be indispensable in the testing phase.
The Navy/NASA Engine Program (NNEP89): A user's manual
NASA Technical Reports Server (NTRS)
Plencner, Robert M.; Snyder, Christopher A.
1991-01-01
An engine simulation computer code called NNEP89 was written to perform 1-D steady state thermodynamic analysis of turbine engine cycles. By using a very flexible method of input, a set of standard components are connected at execution time to simulate almost any turbine engine configuration that the user could imagine. The code was used to simulate a wide range of engine cycles from turboshafts and turboprops to air turborockets and supersonic cruise variable cycle engines. Off design performance is calculated through the use of component performance maps. A chemical equilibrium model is incorporated to adequately predict chemical dissociation as well as model virtually any fuel. NNEP89 is written in standard FORTRAN77 with clear structured programming and extensive internal documentation. The standard FORTRAN77 programming allows it to be installed onto most mainframe computers and workstations without modification. The NNEP89 code was derived from the Navy/NASA Engine program (NNEP). NNEP89 provides many improvements and enhancements to the original NNEP code and incorporates features which make it easier to use for the novice user. This is a comprehensive user's guide for the NNEP89 code.
3D graphics hardware accelerator programming methods for real-time visualization systems
NASA Astrophysics Data System (ADS)
Souetov, Andrew E.
2001-02-01
The paper deals with new approaches in software design for creating real-time applications that use modern graphics acceleration hardware. The growing complexity of this type of software compels programmers to use different kinds of CASE systems in the design and development process. The subject under discussion is the integration of such systems into a development process, their effective use, and the combination of these new methods with the necessity of producing optimal code. A method of integrating simulation and modeling tools into the real-time software development cycle is described.
Spin dynamics modeling in the AGS based on a stepwise ray-tracing method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutheil, Yann
The AGS provides a polarized proton beam to RHIC. The beam is accelerated in the AGS from Gγ = 4.5 to Gγ = 45.5, and the polarization transmission is critical to the RHIC spin program. In recent years, various systems were implemented to improve the AGS polarization transmission. These upgrades include the double partial snakes configuration and the tune jumps system. However, 100% polarization transmission through the AGS acceleration cycle is not yet reached. The current efficiency of the polarization transmission is estimated to be around 85% in typical running conditions. Understanding the sources of depolarization in the AGS is critical to improving the AGS polarized proton performance. The complexity of beam and spin dynamics, which is in part due to the specialized Siberian snake magnets, drove a strong interest in original simulation methods. To that end, the Zgoubi code, capable of direct particle and spin tracking through field maps, was used to model the AGS. A model of the AGS using the Zgoubi code was developed and interfaced with the current system through a simple command: the AgsFromSnapRampCmd. Interfacing with the machine control system allows fast modeling using actual machine parameters. These developments allowed the model to realistically reproduce the optics of the AGS along the acceleration ramp. Additional developments on the Zgoubi code, as well as on post-processing and pre-processing tools, granted long-term multiturn beam tracking capabilities: the tracking of realistic beams along the complete AGS acceleration cycle. Multiturn beam tracking simulations in the AGS, using realistic beam and machine parameters, provided a unique insight into the mechanisms behind the evolution of the beam emittance and polarization during the acceleration cycle. Post-processing software was developed to represent the relevant quantities from the Zgoubi simulation data. The Zgoubi simulations proved particularly useful in better understanding the polarization losses through horizontal intrinsic spin resonances. The Zgoubi model, as well as the tools developed, was also used for some direct applications. For instance, beam experiment simulations allowed an accurate estimation of the expected polarization gains from machine changes. In particular, simulations involving the tune jumps system provided an accurate estimation of the polarization gains and the optimum settings that would improve the performance of the AGS.
Heesch, Kristiann C; Langdon, Michael
2016-02-01
Issue addressed: A key strategy to increase active travel is the construction of bicycle infrastructure. Tools to evaluate this strategy are limited. This study assessed the usefulness of a smartphone GPS tracking system for evaluating the impact of this strategy on cycling behaviour. Methods: Cycling usage data were collected from Queenslanders who used a GPS tracking app on their smartphone from 2013-2014. 'Heat' and volume maps of the data were reviewed, and GPS bicycle counts were compared with surveillance data and bicycle counts from automatic traffic-monitoring devices. Results: Heat maps broadly indicated that changes in cycling occurred near infrastructure improvements. Volume maps provided changes in counts of cyclists due to these improvements, although errors were noted in the geographic information system (GIS) geo-coding of some GPS data. Large variations were evident in the number of cyclists using the app in different locations. These variations limited the usefulness of GPS data for assessing differences in cycling across locations. Conclusion: Smartphone GPS data are useful in evaluating the impact of improved bicycle infrastructure in one location. Using GPS data to evaluate differential changes in cycling across multiple locations is problematic when there are insufficient traffic-monitoring devices available to triangulate GPS data with bicycle traffic count data. So what? The use of smartphone GPS data with other data sources is recommended for assessing how infrastructure improvements influence cycling behaviour.
A Roadmap to Continuous Integration for ATLAS Software Development
NASA Astrophysics Data System (ADS)
Elmsheuser, J.; Krasznahorkay, A.; Obreshkov, E.; Undrus, A.; ATLAS Collaboration
2017-10-01
The ATLAS software infrastructure facilitates the efforts of more than 1000 developers working on a code base of 2200 packages with 4 million lines of C++ and 1.4 million lines of Python code. The ATLAS offline code management system is a powerful, flexible framework for processing new package version requests, probing code changes in the Nightly Build System, migrating to new platforms and compilers, deploying production releases for worldwide access, and supporting physicists with tools and interfaces for efficient software use. It maintains a multi-stream, parallel development environment with about 70 multi-platform branches of nightly releases and provides vast opportunities for testing new packages, verifying patches to existing software, and migrating to new platforms and compilers. The system evolution is currently aimed at the adoption of modern continuous integration (CI) practices focused on building nightly releases early and often, with rigorous unit and integration testing. This paper describes the CI incorporation program for the ATLAS software infrastructure. It brings modern open source tools such as Jenkins and GitLab into the ATLAS Nightly System, rationalizes hardware resource allocation and administrative operations, and provides improved feedback and means for developers to fix broken builds promptly. Once adopted, ATLAS CI practices will improve and accelerate innovation cycles and result in increased confidence in new software deployments. The paper reports the status of Jenkins integration with the ATLAS Nightly System as well as short- and long-term plans for the incorporation of CI practices.
Investigation of the Flow Physics Driving Stall-Side Flutter in Advanced Forward Swept Fan Designs
NASA Technical Reports Server (NTRS)
Sanders, Albert J.; Liu, Jong S.; Panovsky, Josef; Bakhle, Milind A.; Stefko, George; Srivastava, Rakesh
2003-01-01
Flutter-free operation of advanced transonic fan designs continues to be a challenging task for the designers of aircraft engines. In order to meet the demands of increased performance and lighter weight, these modern fan designs usually feature low-aspect ratio shroudless rotor blade designs that make the task of achieving adequate flutter margin even more challenging for the aeroelastician. This is especially true for advanced forward swept designs that encompass an entirely new design space compared to previous experience. Fortunately, advances in unsteady computational fluid dynamic (CFD) techniques over the past decade now provide an analysis capability that can be used to quantitatively assess the aeroelastic characteristics of these next generation fans during the design cycle. For aeroelastic applications, Mississippi State University and NASA Glenn Research Center have developed the CFD code TURBO-AE. This code is a time-accurate three-dimensional Euler/Navier-Stokes unsteady flow solver developed for axial-flow turbomachinery that can model multiple blade rows undergoing harmonic oscillations with arbitrary interblade phase angles, i.e., nodal diameter patterns. Details of the code can be found in Chen et al. (1993, 1994), Bakhle et al. (1997, 1998), and Srivastava et al. (1999). To assess aeroelastic stability, the work-per-cycle from TURBO-AE is converted to the critical damping ratio since this value is more physically meaningful, with both the unsteady normal pressure and viscous shear forces included in the work-per-cycle calculation. If the total damping (aerodynamic plus mechanical) is negative, then the blade is unstable since it extracts energy from the flow field over the vibration cycle. TURBO-AE is an integral part of an aeroelastic design system being developed at Honeywell Engines, Systems & Services for flutter and forced response predictions, with test cases from development rig and engine tests being used to validate its predictive capability. A recent experimental program (Sanders et al., 2002) was aimed at providing the necessary unsteady aerodynamic and vibratory response data needed to validate TURBO-AE for fan flutter predictions. A comparison of numerical TURBO-AE simulations with the benchmark flutter data is given in Sanders et al. (2003), with the data used to guide the validation of the code and define best practices for performing accurate unsteady simulations. The agreement between the analyses and the predictions was quite remarkable, demonstrating the ability of the analysis to accurately model the unsteady flow processes driving stall-side flutter.
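As a hedged sketch of the work-per-cycle-to-damping conversion mentioned above (the exact normalization used in TURBO-AE is not given here), the aerodynamic work done on a blade mode over one vibration cycle can be scaled by the modal properties to give an equivalent critical damping ratio; the modal mass, frequency, and amplitude below are placeholders, not values from the study.

```python
import math

def aero_damping_ratio(work_per_cycle, modal_mass, omega, amplitude):
    """Equivalent critical damping ratio from aerodynamic work per cycle.

    Assumes a single mode vibrating as q(t) = amplitude * sin(omega * t);
    positive work done by the flow on the blade gives a negative
    (destabilizing) damping ratio. Sign and normalization conventions
    vary between codes.
    """
    return -work_per_cycle / (2.0 * math.pi * modal_mass * omega**2 * amplitude**2)

# Placeholder modal data (SI units).
zeta = aero_damping_ratio(work_per_cycle=-0.15,          # J extracted from the blade per cycle
                          modal_mass=0.5,                # kg
                          omega=2.0 * math.pi * 800.0,   # rad/s (800 Hz mode)
                          amplitude=1.0e-3)              # m
print(f"aerodynamic damping ratio: {zeta:.2e}  (positive => stable)")
```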
Small engine technology programs
NASA Technical Reports Server (NTRS)
Niedzwiecki, Richard W.
1990-01-01
Described here is the small engine technology program being sponsored at the Lewis Research Center. Small gas turbine research is aimed at general aviation, commuter aircraft, rotorcraft, and cruise missile applications. The Rotary Engine program is aimed at supplying fuel flexible, fuel efficient technology to the general aviation industry, but also has applications to other missions. The Automotive Gas Turbine (AGT) and Heavy-Duty Diesel Transport Technology (HDTT) programs are sponsored by DOE. The Compound Cycle Engine program is sponsored by the Army. All of the programs are aimed towards highly efficient engine cycles, very efficient components, and the use of high temperature structural ceramics. This research tends to be generic in nature and has broad applications. The HDTT, rotary technology, and the compound cycle programs are all examining approaches to minimum heat rejection, or 'adiabatic' systems employing advanced materials. The AGT program is also directed towards ceramics application to gas turbine hot section components. Turbomachinery advances in the gas turbine programs will benefit advanced turbochargers and turbocompounders for the intermittent combustion systems, and the fundamental understandings and analytical codes developed in the research and technology programs will be directly applicable to the system projects.
Edwards, N
2008-10-01
The international introduction of performance-based building codes calls for a re-examination of indicators used to monitor their implementation. Indicators used in the building sector have a business orientation, target the life cycle of buildings, and guide asset management. In contrast, indicators used in the health sector focus on injury prevention, have a behavioural orientation, lack specificity with respect to features of the built environment, and do not take into account patterns of building use or building longevity. Suggestions for metrics that bridge the building and health sectors are discussed. The need for integrated surveillance systems in health and building sectors is outlined. It is time to reconsider commonly used epidemiological indicators in the field of injury prevention and determine their utility to address the accountability requirements of performance-based codes.
NASA Astrophysics Data System (ADS)
Strassmann, Kuno M.; Joos, Fortunat
2018-05-01
The Bern Simple Climate Model (BernSCM) is a free open-source re-implementation of a reduced-form carbon cycle-climate model which has been used widely in previous scientific work and IPCC assessments. BernSCM represents the carbon cycle and climate system with a small set of equations for the heat and carbon budget, the parametrization of major nonlinearities, and the substitution of complex component systems with impulse response functions (IRFs). The IRF approach allows cost-efficient yet accurate substitution of detailed parent models of climate system components with near-linear behavior. Illustrative simulations of scenarios from previous multimodel studies show that BernSCM is broadly representative of the range of the climate-carbon cycle response simulated by more complex and detailed models. Model code (in Fortran) was written from scratch with transparency and extensibility in mind, and is provided open source. BernSCM makes scientifically sound carbon cycle-climate modeling available for many applications. Supporting up to decadal time steps with high accuracy, it is suitable for studies with high computational load and for coupling with integrated assessment models (IAMs), for example. Further applications include climate risk assessment in a business, public, or educational context and the estimation of CO2 and climate benefits of emission mitigation options.
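As a minimal sketch of the IRF idea described above, the snippet below convolves an emission series with a sum-of-exponentials impulse response to obtain an atmospheric carbon perturbation; the weights and time scales are illustrative placeholders, not BernSCM's calibrated parameters.

```python
import numpy as np

# Illustrative impulse-response representation of airborne CO2 (not the
# calibrated BernSCM parameters): a fraction A[0] stays airborne indefinitely,
# the rest decays on three time scales TAU[1:].
A   = np.array([0.22, 0.28, 0.28, 0.22])      # weights (sum to 1)
TAU = np.array([np.inf, 250.0, 40.0, 5.0])    # e-folding times in years

def airborne_fraction(t):
    """IRF value r(t): fraction of a CO2 pulse still airborne after t years."""
    return np.sum(A * np.exp(-t / TAU))

def co2_perturbation(emissions, dt=1.0):
    """Convolve an annual emission series (GtC/yr) with the IRF to get the
    atmospheric carbon perturbation (GtC) at each time step."""
    n = len(emissions)
    out = np.zeros(n)
    for i in range(n):
        for j in range(i + 1):
            out[i] += emissions[j] * dt * airborne_fraction((i - j) * dt)
    return out

# Perturbation after 10 years of a constant 10 GtC/yr emission pulse:
print(co2_perturbation(np.full(10, 10.0))[-1])
```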
FDNS CFD Code Benchmark for RBCC Ejector Mode Operation: Continuing Toward Dual Rocket Effects
NASA Technical Reports Server (NTRS)
West, Jeff; Ruf, Joseph H.; Turner, James E. (Technical Monitor)
2000-01-01
Computational Fluid Dynamics (CFD) analysis results are compared with benchmark quality test data from the Propulsion Engineering Research Center's (PERC) Rocket Based Combined Cycle (RBCC) experiments to verify fluid dynamic code and application procedures. RBCC engine flowpath development will rely on CFD applications to capture the multi-dimensional fluid dynamic interactions and to quantify their effect on the RBCC system performance. Therefore, the accuracy of these CFD codes must be determined through detailed comparisons with test data. The PERC experiments build upon the well-known 1968 rocket-ejector experiments of Odegaard and Stroup by employing advanced optical and laser based diagnostics to evaluate mixing and secondary combustion. The Finite Difference Navier Stokes (FDNS) code [2] was used to model the fluid dynamics of the PERC RBCC ejector mode configuration. Analyses were performed for the Diffusion and Afterburning (DAB) test conditions at the 200-psia thruster operation point. Results with and without downstream fuel injection are presented.
10 CFR 436.42 - Evaluation of Life-Cycle Cost Effectiveness.
Code of Federal Regulations, 2011 CFR
2011-01-01
... the life-cycle cost analysis method in part 436, subpart A, of title 10 of the Code of Federal... 10 Energy 3 2011-01-01 2011-01-01 false Evaluation of Life-Cycle Cost Effectiveness. 436.42... PROGRAMS Agency Procurement of Energy Efficient Products § 436.42 Evaluation of Life-Cycle Cost...
Numerical Propulsion System Simulation
NASA Technical Reports Server (NTRS)
Naiman, Cynthia
2006-01-01
The NASA Glenn Research Center, in partnership with the aerospace industry, other government agencies, and academia, is leading the effort to develop an advanced multidisciplinary analysis environment for aerospace propulsion systems called the Numerical Propulsion System Simulation (NPSS). NPSS is a framework for performing analysis of complex systems. The initial development of NPSS focused on the analysis and design of airbreathing aircraft engines, but the resulting NPSS framework may be applied to any system, for example: aerospace, rockets, hypersonics, power and propulsion, fuel cells, ground based power, and even human system modeling. NPSS provides increased flexibility for the user, which reduces the total development time and cost. It is currently being extended to support the NASA Aeronautics Research Mission Directorate Fundamental Aeronautics Program and the Advanced Virtual Engine Test Cell (AVETeC). NPSS focuses on the integration of multiple disciplines such as aerodynamics, structure, and heat transfer with numerical zooming on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS development includes capabilities to facilitate collaborative engineering. NPSS will provide improved tools to develop custom components, along with the capability to zoom to higher-fidelity codes, couple to multidiscipline codes, transmit secure data, and distribute simulations across different platforms. These powerful capabilities extend NPSS from a zero-dimensional simulation tool to a multi-fidelity, multidiscipline system-level simulation tool for the full development life cycle.
NASA Technical Reports Server (NTRS)
Follen, Gregory J.; Naiman, Cynthia G.
1999-01-01
The NASA Lewis Research Center is developing an environment for analyzing and designing aircraft engines-the Numerical Propulsion System Simulation (NPSS). NPSS will integrate multiple disciplines, such as aerodynamics, structure, and heat transfer, and will make use of numerical "zooming" on component codes. Zooming is the coupling of analyses at various levels of detail. NPSS uses the latest computing and communication technologies to capture complex physical processes in a timely, cost-effective manner. The vision of NPSS is to create a "numerical test cell" enabling full engine simulations overnight on cost-effective computing platforms. Through the NASA/Industry Cooperative Effort agreement, NASA Lewis and industry partners are developing a new engine simulation called the National Cycle Program (NCP). NCP, which is the first step toward NPSS and is its initial framework, supports the aerothermodynamic system simulation process for the full life cycle of an engine. U.S. aircraft and airframe companies recognize NCP as the future industry standard common analysis tool for aeropropulsion system modeling. The estimated potential payoff for NCP is a $50 million/yr savings to industry through improved engineering productivity.
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-01-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors P and the fraction f of task time that multiprocesses, can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications, this theoretical limit cannot be achieved because of additional terms (e.g., multitasking overhead, memory overlap, etc.) that are not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing because the particle tracks are generally independent, and the precision of the result increases as the square root of the number of particles tracked.
Multiprocessing MCNP on an IBM RS/6000 cluster
DOE Office of Scientific and Technical Information (OSTI.GOV)
McKinney, G.W.; West, J.T.
1993-03-01
The advent of high-performance computer systems has brought to maturity programming concepts like vectorization, multiprocessing, and multitasking. While there are many schools of thought as to the most significant factor in obtaining order-of-magnitude increases in performance, such speedup can only be achieved by integrating the computer system and application code. Vectorization leads to faster manipulation of arrays by overlapping instruction CPU cycles. Discrete ordinates codes, which require the solving of large matrices, have proved to be major benefactors of vectorization. Monte Carlo transport, on the other hand, typically contains numerous logic statements and requires extensive redevelopment to benefit from vectorization. Multiprocessing and multitasking provide additional CPU cycles via multiple processors. Such systems are generally designed with either common memory access (multitasking) or distributed memory access. In both cases, theoretical speedup, as a function of the number of processors (P) and the fraction of task time that multiprocesses (f), can be formulated using Amdahl's law: S(f, P) = 1/(1 - f + f/P). However, for most applications this theoretical limit cannot be achieved, due to additional terms not included in Amdahl's law. Monte Carlo transport is a natural candidate for multiprocessing, since the particle tracks are generally independent and the precision of the result increases as the square root of the number of particles tracked.
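To make the scaling behind Amdahl's law concrete, here is a minimal sketch that evaluates S(f, P) for a few processor counts; the 95% parallel fraction is an arbitrary illustrative value.

```python
def amdahl_speedup(f, p):
    """Theoretical speedup S(f, P) = 1 / (1 - f + f/P) when a fraction f of the
    task runs on P processors and the remainder stays serial."""
    return 1.0 / ((1.0 - f) + f / p)

# Even with 95% of the work parallelized, 16 processors give well under 16x:
for p in (2, 4, 8, 16):
    print(p, round(amdahl_speedup(0.95, p), 2))
```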
A Fixed Point VHDL Component Library for a High Efficiency Reconfigurable Radio Design Methodology
NASA Technical Reports Server (NTRS)
Hoy, Scott D.; Figueiredo, Marco A.
2006-01-01
Advances in Field Programmable Gate Array (FPGA) technologies enable the implementation of reconfigurable radio systems for both ground and space applications. The development of such systems challenges the current design paradigms and requires more robust design techniques to meet the increased system complexity. Among these techniques is the development of component libraries to reduce design cycle time and to improve design verification, consequently increasing the overall efficiency of the project development process while increasing design success rates and reducing engineering costs. This paper describes the reconfigurable radio component library developed at the Software Defined Radio Applications Research Center (SARC) at Goddard Space Flight Center (GSFC) Microwave and Communications Branch (Code 567). The library is a set of fixed-point VHDL components that link the Digital Signal Processing (DSP) simulation environment with the FPGA design tools. This provides a direct synthesis path based on the latest developments of the VHDL tools, as proposed by the BEE VBDL 2004, which allows for the simulation and synthesis of fixed-point math operations while maintaining bit and cycle accuracy. The VHDL fixed-point reconfigurable radio component library does not require the use of FPGA vendor-specific automatic component generators and provides a generic path from high-level DSP simulations implemented in Mathworks Simulink to any FPGA device. Access to the components' synthesizable source code provides full design verification capability.
NASA Technical Reports Server (NTRS)
Vanfossen, G. James
1992-01-01
One possible low speed propulsion system for the National Aerospace Plane is a liquid air cycle engine (LACE). The LACE system uses the heat sink in the liquid hydrogen propellant to liquefy air in a heat exchanger, which is then pumped up to high pressure and used as the oxidizer in a hydrogen/liquid-air rocket. The inlet airstream must be dehumidified or moisture could freeze on the cryogenic heat exchangers and block them. The main objective of this research has been to develop a computer simulation of the cold tube/antifreeze-spray water alleviation system and to verify the model with experimental data. An experimental facility has been built, and humid air tests were conducted on a generic heat exchanger to obtain condensing data for code development. The paper describes the experimental setup, outlines the method of calculation used in the code, and presents comparisons of the calculations and measurements. Causes of discrepancies between the model and the data are explained.
DOE Office of Scientific and Technical Information (OSTI.GOV)
McDeavitt, Sean; Shao, Lin; Tsvetkov, Pavel
2014-04-07
Advanced fast reactor systems being developed under the DOE's Advanced Fuel Cycle Initiative are designed to destroy TRU isotopes generated in existing and future nuclear energy systems. Over the past 40 years, multiple experiments and demonstrations have been completed using U-Zr, U-Pu-Zr, U-Mo and other metal alloys. As a result, multiple empirical and semi-empirical relationships have been established to develop empirical performance modeling codes. Many mechanistic questions about fission gas mobility, bubble coalescence, and gas release have been answered through industrial experience, research, and empirical understanding. The advent of modern computational materials science, however, opens new doors of development such that physics-based multi-scale models may be developed to enable a new generation of predictive fuel performance codes that are not limited by empiricism.
Orbit determination using real tracking data from FY3C-GNOS
NASA Astrophysics Data System (ADS)
Xiong, Chao; Lu, Chuanfang; Zhu, Jun; Ding, Huoping
2017-08-01
China is currently developing the BeiDou Navigation Satellite System, also known as BDS. The nominal constellation of BDS (regional), which had been able to provide preliminary regional positioning and navigation functions, was composed of fourteen satellites, including 5 GEO, 5 IGSO and 4 MEO satellites, and was realized by the end of 2013. The Global Navigation Satellite System Occultation Sounder (GNOS) on board the Fengyun3C (FY3C) satellite, which is the first BDS/GPS compatible radio occultation (RO) sounder in the world, was launched on 23 September 2013. The GNOS instrument is capable of tracking up to 6 BeiDou satellites and more than 8 GPS satellites. We first present a quality analysis using one week of onboard BDS/GPS measurements collected by GNOS. Satellite visibility, multipath combinations and the ratio of cycle slips are analyzed. The analysis of satellite visibility shows that over one week the BDS receiver can track up to 6 healthy satellites. The analysis of multipath combinations (MPC) suggests more multipath for BDS than for GPS on the CA code (B1 MPC is 0.597 m, L1 MPC is 0.326 m), but less multipath on the P code (B2 MPC is 0.421 m, L2 MPC is 0.673 m). More cycle slips occur for the BDS than for the GPS receiver, as shown by the ratio of total satellites to cycle slips observed over a 24 h period: the maximum and average values of this ratio are 72 and 50.29 for BDS, smaller than the 368 and 278.71 obtained for GPS. Second, the results of reduced-dynamic orbit determination using BDS/GPS code and phase measurements, a standalone BDS SPP (Single Point Positioning) kinematic solution, and real-time orbit determination using BDS/GPS code measurements are presented and analyzed. Using an overlap analysis, the orbit consistency of FY3C-GNOS is about 3.80 cm. The precision of BDS-only solutions is about 22 cm. The precision of the FY3C-GNOS orbit with Helmert variance component estimation is improved slightly after the BDS observations are added for one week (October 10-16, 2013); in the three-dimensional direction, the orbit precision is improved by 0.31 cm. BDS code observations already allow standalone positioning with an RMS accuracy of at least 22 m using the BDS broadcast ephemeris, and at least 5 m using the BDS precise ephemeris. The standard deviations of the differences of real-time orbit determination with Dynamic Model Compensation using BDS/GPS, GPS, and BDS code measurements are 1.24 m, 1.27 m and 6.67 m in the three-dimensional direction, respectively. Adding BDS observations slightly improves the convergence time for real-time orbit determination, by 17 s, and the accuracy, by 0.03 m. The results obtained in this paper are already rather promising.
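The multipath combination referred to above is typically formed by differencing the code pseudorange against a dual-frequency carrier-phase combination. The sketch below shows the standard definition; the function name and sample values are illustrative, and the paper's exact processing chain is not specified in the abstract.

```python
# Standard dual-frequency code-minus-carrier multipath combination (a sketch of
# the usual definition, not the paper's processing chain).
# P1: code pseudorange on frequency f1 [m]; L1, L2: carrier phases on f1 and f2,
# already scaled to metres.  The combination removes geometry, clocks, and (to
# first order) the ionosphere, leaving multipath plus an ambiguity bias.

def multipath_combination(P1, L1, L2, f1, f2):
    alpha = (f1 / f2) ** 2
    return P1 - (1.0 + 2.0 / (alpha - 1.0)) * L1 + (2.0 / (alpha - 1.0)) * L2

# Example with GPS L1/L2 frequencies (Hz); BDS B1I/B2I would use roughly
# 1561.098 MHz and 1207.14 MHz instead.  The made-up inputs put a 1.5 m
# multipath-plus-bias signal on the code observation.
mp1 = multipath_combination(P1=2.2e7 + 1.5, L1=2.2e7, L2=2.2e7,
                            f1=1575.42e6, f2=1227.60e6)
print(mp1)  # ~1.5 m
```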
Small Changes Yield Large Results at NIST's Net-Zero Energy Residential Test Facility.
Fanney, A Hunter; Healy, William; Payne, Vance; Kneifel, Joshua; Ng, Lisa; Dougherty, Brian; Ullah, Tania; Omar, Farhad
2017-12-01
The Net-Zero Energy Residential Test Facility (NZERTF) was designed to be approximately 60 % more energy efficient than homes meeting the 2012 International Energy Conservation Code (IECC) requirements. The thermal envelope minimizes heat loss/gain through the use of advanced framing and enhanced insulation. A continuous air/moisture barrier resulted in an air exchange rate of 0.6 air changes per hour at 50 Pa. The home incorporates a vast array of extensively monitored renewable and energy efficient technologies including an air-to-air heat pump system with a dedicated dehumidification cycle; a ducted heat-recovery ventilation system; a whole house dehumidifier; a photovoltaic system; and a solar domestic hot water system. During its first year of operation the NZERTF produced an energy surplus of 1023 kWh. Based on observations during the first year, changes were made to determine if further improvements in energy performance could be obtained. The changes consisted of installing a thermostat that incorporated control logic to minimize the use of auxiliary heat, using a whole house dehumidifier in lieu of the heat pump's dedicated dehumidification cycle, and reducing the ventilation rate to a value that met but did not exceed code requirements. During the second year of operation the NZERTF produced an energy surplus of 2241 kWh. This paper describes the facility, compares the performance data for the two years, and quantifies the energy impact of the weather conditions and operational changes.
A dual-color marker system for in vivo visualization of cell cycle progression in Arabidopsis.
Yin, Ke; Ueda, Minako; Takagi, Hitomi; Kajihara, Takehiro; Sugamata Aki, Shiori; Nobusawa, Takashi; Umeda-Hara, Chikage; Umeda, Masaaki
2014-11-01
Visualization of the spatiotemporal pattern of cell division is crucial to understand how multicellular organisms develop and how they modify their growth in response to varying environmental conditions. The mitotic cell cycle consists of four phases: S (DNA replication), M (mitosis and cytokinesis), and the intervening G1 and G2 phases; however, only G2/M-specific markers are currently available in plants, making it difficult to measure cell cycle duration and to analyze changes in cell cycle progression in living tissues. Here, we developed another cell cycle marker that labels S-phase cells by manipulating Arabidopsis CDT1a, which functions in DNA replication origin licensing. Truncations of the CDT1a coding sequence revealed that its carboxy-terminal region is responsible for proteasome-mediated degradation at late G2 or in early mitosis. We therefore expressed this region as a red fluorescent protein fusion protein under the S-specific promoter of a histone 3.1-type gene, HISTONE THREE RELATED2 (HTR2), to generate an S/G2 marker. Combining this marker with the G2/M-specific CYCB1-GFP marker enabled us to visualize both S to G2 and G2 to M cell cycle stages, and thus yielded an essential tool for time-lapse imaging of cell cycle progression. The resultant dual-color marker system, Cell Cycle Tracking in Plant Cells (Cytrap), also allowed us to identify root cells in the last mitotic cell cycle before they entered the endocycle. Our results demonstrate that Cytrap is a powerful tool for in vivo monitoring of the plant cell cycle, and thus for deepening our understanding of cell cycle regulation in particular cell types during organ development. © 2014 The Authors The Plant Journal © 2014 John Wiley & Sons Ltd.
Design of a Double Anode Magnetron Injection Gun for Q-band Gyro-TWT Using Boundary Element Method
NASA Astrophysics Data System (ADS)
Li, Zhiliang; Feng, Jinjun; Liu, Bentian
2018-04-01
This paper presents a novel design code for double anode magnetron injection guns (MIGs) in gyro-devices based on the boundary element method (BEM). The physical and mathematical models were constructed, and then the code using BEM for MIG calculation was developed. Using the code, a double anode MIG for a Q-band gyrotron traveling-wave tube (gyro-TWT) amplifier operating in the circular TE01 mode at the fundamental cyclotron harmonic was designed. In order to verify the reliability of this code, the velocity spread and guiding center radius of the MIG simulated by the BEM code were compared with those from the commonly used EGUN code, showing reasonable agreement. Then, a Q-band gyro-TWT was fabricated and tested. The testing results show that the device has achieved an average power of 5 kW and a peak power ≥ 150 kW at a 3% duty cycle within a bandwidth of 2 GHz, and a maximum output peak power of 220 kW, with a corresponding saturated gain of 50.9 dB and efficiency of 39.8%. This paper demonstrates that the BEM code can be used as an effective approach for analysis of the electron optics system in gyro-devices.
Laser pulse coded signal frequency measuring device based on DSP and CPLD
NASA Astrophysics Data System (ADS)
Zhang, Hai-bo; Cao, Li-hua; Geng, Ai-hui; Li, Yan; Guo, Ru-hai; Wang, Ting-feng
2011-06-01
Laser pulse coding is an anti-jamming measure used in semi-active laser guided weapons. Because the guidance signals adopt a pulse coding mode and are weak, frequency measurement for optoelectronic countermeasures against such weapons requires complex calculations that exploit the time correlation of the laser pulse code signal. To complete an accurate frequency measurement in a short time, the device autocorrelates the series of pulse arrival times, calculates the signal repetition period, and then identifies the code type to decode the signal by determining the time values, number, and order of the pulses within a signal cycle. A laser-guided signal frequency measurement device was designed using a CPLD and a DSP as the signal processing chips, with appropriate software algorithms improving the signal processing capability. In this article, we introduce the frequency measurement principle of the device, describe its hardware components, how the system works, and its software, and analyze the impact of several system factors on measurement accuracy. The experimental results indicate that this design improves measurement accuracy while meeting the volume, real-time, anti-interference, and low-power requirements of a laser pulse frequency measuring device. The practicality and reliability of the design have been demonstrated experimentally.
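As a rough illustration of the time-correlation step described above, the sketch below bins a list of pulse arrival times into a 0/1 series, autocorrelates it, and reports the first strong lag as the repetition period; the binning, thresholds, and simulated pulse train are illustrative choices, not the device's actual DSP algorithm.

```python
import numpy as np

def estimate_pri(arrival_times, dt=1e-5, max_lag_s=5e-3):
    """Estimate the pulse repetition interval (PRI) of a pulse train by binning
    the arrival times into a 0/1 series, autocorrelating it, and taking the
    first strong lag (illustrative parameters only)."""
    t = np.asarray(arrival_times, dtype=float)
    t = t - t.min()
    series = np.zeros(int(np.ceil(t.max() / dt)) + 1)
    series[(t / dt).astype(int)] = 1.0
    max_lag = int(max_lag_s / dt)
    acf = np.array([np.sum(series[:-lag] * series[lag:]) for lag in range(1, max_lag)])
    strong = np.nonzero(acf >= 0.5 * acf.max())[0]
    return (strong[0] + 1) * dt          # first strong peak = fundamental period

# Simulated 1 kHz pulse train (PRI = 1 ms) with a little timing jitter:
rng = np.random.default_rng(0)
times = np.arange(0.0, 0.05, 1e-3) + rng.normal(0.0, 1e-6, 50)
print(estimate_pri(times))               # close to 1.0e-3 s
```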
Recent Upgrades to the NASA Ames Mars General Circulation Model: Applications to Mars' Water Cycle
NASA Astrophysics Data System (ADS)
Hollingsworth, Jeffery L.; Kahre, M. A.; Haberle, R. M.; Montmessin, F.; Wilson, R. J.; Schaeffer, J.
2008-09-01
We report on recent improvements to the NASA Ames Mars general circulation model (GCM), a robust 3D climate-modeling tool that is state-of-the-art in terms of its physics parameterizations and subgrid-scale processes, and which can be applied to investigate physical and dynamical processes of the present (and past) Mars climate system. The most recent version (gcm2.1, v.24) of the Ames Mars GCM utilizes a more generalized radiation code (based on a two-stream approximation with correlated k's); an updated transport scheme (van Leer formulation); a cloud microphysics scheme that assumes a log-normal particle size distribution whose first two moments are treated as atmospheric tracers, and which includes the nucleation, growth and sedimentation of ice crystals. Atmospheric aerosols (e.g., dust and water-ice) can either be radiatively active or inactive. We apply this version of the Ames GCM to investigate key aspects of the present water cycle on Mars. Atmospheric dust is partially interactive in our simulations; namely, the radiation code "sees" a prescribed distribution that follows the MGS thermal emission spectrometer (TES) year-one measurements with a self-consistent vertical depth scale that varies with season. The cloud microphysics code interacts with a transported dust tracer column whose surface source is adjusted to maintain the TES distribution. The model is run from an initially dry state with a better representation of the north residual cap (NRC) which accounts for both surface-ice and bare-soil components. A seasonally repeatable water cycle is obtained within five Mars years. Our sub-grid scale representation of the NRC provides for a more realistic flux of moisture to the atmosphere and a much drier water cycle consistent with recent spacecraft observations (e.g., Mars Express PFS, corrected MGS/TES) compared to models that assume a spatially uniform and homogeneous north residual polar cap.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzgrewe, F.; Hegedues, F.; Paratte, J.M.
1995-03-01
The light water reactor BOXER code was used to determine the fast azimuthal neutron fluence distribution at the inner surface of the reactor pressure vessel after the tenth cycle of a pressurized water reactor (PWR). Using a cross-section library in 45 groups, fixed-source calculations in transport theory and x-y geometry were carried out to determine the fast azimuthal neutron flux distribution at the inner surface of the pressure vessel for four different cycles. From these results, the fast azimuthal neutron fluence after the tenth cycle was estimated and compared with the results obtained from scraping test experiments. In these experiments, small samples of material were taken from the inner surface of the pressure vessel. The fast neutron fluence was then determined from the measured activity of the samples. The BOXER and scraping test results have maximal differences of 15%, which is very good, considering the factor of 10^3 neutron attenuation between the reactor core and the pressure vessel. To compare the BOXER results with an independent code, the 21st cycle of the PWR was also calculated with the TWODANT two-dimensional transport code, using the same group structure and cross-section library. Deviations in the fast azimuthal flux distribution were found to be <3%, which verifies the accuracy of the BOXER results.
Chen, Wenxi; Kitazawa, Masumi; Togawa, Tatsuo
2009-09-01
This paper proposes a method to estimate a woman's menstrual cycle based on the hidden Markov model (HMM). A tiny device was developed that attaches around the abdominal region to measure cutaneous temperature at 10-min intervals during sleep. The measured temperature data were encoded as a two-dimensional image (QR code, i.e., quick response code) and displayed in the LCD window of the device. A mobile phone captured the QR code image, decoded the information and transmitted the data to a database server. The collected data were analyzed in three steps to estimate the biphasic temperature property of a menstrual cycle. The key step was an HMM-based step between preprocessing and postprocessing. A discrete Markov model, with two hidden phases, was assumed to represent the higher- and lower-temperature phases during a menstrual cycle. The proposed method was verified with data collected from 30 female participants, aged 14 to 46, over six consecutive months. By comparing the estimated results with individual records from the participants, 71.6% of 190 menstrual cycles were correctly estimated. The sensitivity and positive predictability were 91.8% and 96.6%, respectively. This objective evaluation provides a promising approach for managing premenstrual syndrome and birth control.
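To illustrate the two-phase HMM idea, here is a minimal Viterbi sketch that labels each nightly temperature reading as belonging to the lower- or higher-temperature phase; the Gaussian emission parameters and transition probability are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def viterbi_two_phase(temps, means=(36.3, 36.8), sd=0.25, p_stay=0.95):
    """Decode lower/higher temperature phases from nightly skin-temperature
    readings with a two-state Gaussian HMM (illustrative parameters).
    State 0 = lower-temperature phase, state 1 = higher-temperature phase."""
    temps = np.asarray(temps, dtype=float)
    log_trans = np.log(np.array([[p_stay, 1 - p_stay], [1 - p_stay, p_stay]]))

    def log_emis(x):                      # Gaussian log-likelihood of each state
        return -0.5 * ((x - np.array(means)) / sd) ** 2

    n = len(temps)
    delta = np.zeros((n, 2))
    psi = np.zeros((n, 2), dtype=int)
    delta[0] = np.log([0.5, 0.5]) + log_emis(temps[0])
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_trans      # 2x2: previous -> current
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], [0, 1]] + log_emis(temps[t])
    states = np.zeros(n, dtype=int)
    states[-1] = np.argmax(delta[-1])
    for t in range(n - 2, -1, -1):        # backtrack the most likely path
        states[t] = psi[t + 1, states[t + 1]]
    return states                          # a 0 -> 1 rise marks the biphasic shift

print(viterbi_two_phase([36.2, 36.3, 36.25, 36.4, 36.7, 36.8, 36.85, 36.75]))
```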
Experimental and Analytical Performance of a Dual Brayton Power Conversion System
NASA Technical Reports Server (NTRS)
Lavelle, Thomas A.; Hervol, David S.; Briggs, Maxwell; Owen, A. Karl
2009-01-01
The interactions between two closed Brayton cycle (CBC) power conversion units (PCU) which share a common gas inventory and heat source have been studied experimentally using the Dual Brayton Power Conversion System (DBPCS) and analytically using the Closed- Cycle System Simulation (CCSS) computer code. Selected operating modes include steady-state operation at equal and unequal shaft speeds and various start-up scenarios. Equal shaft speed steady-state tests were conducted for heater exit temperatures of 840 to 950 K and speeds of 50 to 90 krpm, providing a system performance map. Unequal shaft speed steady-state testing over the same operating conditions shows that the power produced by each Brayton is sensitive to the operating conditions of the other due to redistribution of gas inventory. Startup scenarios show that starting the engines one at a time can dramatically reduce the required motoring energy. Although the DBPCS is not considered a flight-like system, these insights, as well as the operational experience gained from operating and modeling this system provide valuable information for the future development of Brayton systems.
Low Power LDPC Code Decoder Architecture Based on Intermediate Message Compression Technique
NASA Astrophysics Data System (ADS)
Shimizu, Kazunori; Togawa, Nozomu; Ikenaga, Takeshi; Goto, Satoshi
Reducing the power dissipation of LDPC code decoders is a major challenge in applying them to practical digital communication systems. In this paper, we propose a low power LDPC code decoder architecture based on an intermediate message-compression technique with the following features: (i) the intermediate message compression technique enables the decoder to reduce the required memory capacity and write power dissipation; (ii) a clock-gated, shift-register-based intermediate message memory architecture enables the decoder to decompress the compressed messages in a single clock cycle while reducing the read power dissipation. The combination of these two techniques enables the decoder to reduce power dissipation while maintaining decoding throughput. Simulation results show that the proposed architecture improves the power efficiency by up to 52% and 18% compared to decoders based on the overlapped schedule and the rapid convergence schedule, respectively, without the proposed techniques.
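One common way to compress check-to-variable messages, shown in the sketch below, exploits the min-sum update rule: all outgoing messages of a check node can be reconstructed from the two smallest input magnitudes, the position of the minimum, and the input signs. This is a generic illustration of intermediate message compression, not necessarily the exact format used in the paper.

```python
import numpy as np

def compress_check_messages(var_to_check):
    """Compress the check-to-variable messages of one check node under the
    min-sum rule: keep only the two smallest input magnitudes, the position of
    the smallest, and the input signs (assumes nonzero LLRs)."""
    mags = np.abs(var_to_check)
    order = np.argsort(mags)
    signs = np.sign(var_to_check)
    return dict(min1=mags[order[0]], min2=mags[order[1]], idx=order[0],
                total_sign=np.prod(signs), signs=signs)

def decompress_message(c, edge):
    """Recover the outgoing check-to-variable message on a given edge."""
    mag = c["min2"] if edge == c["idx"] else c["min1"]
    return c["total_sign"] * c["signs"][edge] * mag

incoming = np.array([+0.9, -0.4, +1.7, -2.2])     # variable-to-check LLRs
comp = compress_check_messages(incoming)
print([round(decompress_message(comp, e), 2) for e in range(4)])  # [0.4, -0.9, 0.4, -0.4]
```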
High Efficiency Nuclear Power Plants using Liquid Fluoride Thorium Reactor Technology
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.; Rarick, Richard A.; Rangarajan, Rajmohan
2009-01-01
An overall system analysis approach is used to propose potential conceptual designs of advanced terrestrial nuclear power plants based on Oak Ridge National Laboratory (ORNL) Molten Salt Reactor (MSR) experience and utilizing Closed Cycle Gas Turbine (CCGT) thermal-to-electric energy conversion technology. In particular, conceptual designs for an advanced 1 GWe power plant with turbine reheat and compressor intercooling at a 950 K turbine inlet temperature (TIT), as well as near term 100 MWe demonstration plants with TITs of 950 K and 1200 K, are presented. Power plant performance data were obtained for TITs ranging from 650 to 1300 K by use of a Closed Brayton Cycle (CBC) systems code which considered the interaction between major sub-systems, including the Liquid Fluoride Thorium Reactor (LFTR), heat source and heat sink heat exchangers, turbo-generator machinery, and an electric power generation and transmission system. Optional off-shore submarine installation of the power plant is a major consideration.
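For a feel of how thermal efficiency scales with turbine inlet temperature in a closed Brayton cycle, the sketch below evaluates a simple recuperated cycle with assumed component efficiencies and a monatomic working gas; it omits the reheat and intercooling mentioned above, and none of the numbers come from the CBC systems code.

```python
def brayton_efficiency(tit, t_min=300.0, pr=2.0, eta_c=0.85, eta_t=0.90,
                       eps_recup=0.92, gamma=1.667):
    """Rough thermal efficiency of a recuperated closed Brayton cycle with a
    monatomic working gas (illustrative component efficiencies).
    tit = turbine inlet temperature [K], pr = compressor pressure ratio."""
    phi = (gamma - 1.0) / gamma
    t2 = t_min * (1.0 + (pr ** phi - 1.0) / eta_c)    # compressor exit temperature
    t4 = tit * (1.0 - eta_t * (1.0 - pr ** -phi))     # turbine exit temperature
    t3 = t2 + eps_recup * (t4 - t2)                   # recuperator exit (cold side)
    w_net = (tit - t4) - (t2 - t_min)                 # per unit cp * mass flow
    q_in = tit - t3
    return w_net / q_in

for tit in (650.0, 950.0, 1200.0):
    print(tit, round(brayton_efficiency(tit), 3))     # efficiency rises with TIT
```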
MARTe: A Multiplatform Real-Time Framework
NASA Astrophysics Data System (ADS)
Neto, André C.; Sartori, Filippo; Piccolo, Fabio; Vitelli, Riccardo; De Tommasi, Gianmaria; Zabeo, Luca; Barbalace, Antonio; Fernandes, Horacio; Valcarcel, Daniel F.; Batista, Antonio J. N.
2010-04-01
Development of real-time applications is usually associated with nonportable code targeted at specific real-time operating systems. The boundary between hardware drivers, system services, and user code is commonly not well defined, making the development in the target host significantly difficult. The Multithreaded Application Real-Time executor (MARTe) is a framework built over a multiplatform library that allows the execution of the same code in different operating systems. The framework provides the high-level interfaces with hardware, external configuration programs, and user interfaces, assuring at the same time hard real-time performances. End-users of the framework are required to define and implement algorithms inside a well-defined block of software, named Generic Application Module (GAM), that is executed by the real-time scheduler. Each GAM is reconfigurable with a set of predefined configuration meta-parameters and interchanges information using a set of data pipes that are provided as inputs and required as output. Using these connections, different GAMs can be chained either in series or parallel. GAMs can be developed and debugged in a non-real-time system and, only once the robustness of the code and correctness of the algorithm are verified, deployed to the real-time system. The software also supplies a large set of utilities that greatly ease the interaction and debugging of a running system. Among the most useful are a highly efficient real-time logger, HTTP introspection of real-time objects, and HTTP remote configuration. MARTe is currently being used to successfully drive the plasma vertical stabilization controller on the largest magnetic confinement fusion device in the world, with a control loop cycle of 50 μs and a jitter under 1 μs. In this particular project, MARTe is used with the Real-Time Application Interface (RTAI)/Linux operating system exploiting the new x86 multicore processors technology.
NASA Astrophysics Data System (ADS)
Susilo, J.; Suparlina, L.; Deswandri; Sunaryo, G. R.
2018-02-01
The use of a computer program for analyzing the neutronic design parameters of a PWR-type core has been demonstrated in several previous studies. These studies included validation of the computer code against neutronic parameter values obtained from measurements and benchmark calculations. In this study, validation and analysis of the AP1000 first cycle core radial power peaking factor were performed using the CITATION module of the SRAC2006 computer code. The code had also been validated, with good results, against the criticality values of the VERA benchmark core. The AP1000 core power distribution calculation was done in two-dimensional X-Y geometry through quarter-core modeling. The purpose of this research is to determine the accuracy of the SRAC2006 code, and also the safety performance of the AP1000 core during its first operating cycle. The core calculations were carried out for several conditions: without a Rod Cluster Control Assembly (RCCA), with insertion of a single RCCA (AO, M1, M2, MA, MB, MC, MD), and with multiple RCCA insertions (MA + MB, MA + MB + MC, MA + MB + MC + MD, and MA + MB + MC + MD + M1). The maximum power factor of the fuel rods in a fuel assembly was assumed to be approximately 1.406. The analysis of the calculation results showed that the 2-dimensional CITATION module of the SRAC2006 code is accurate for the AP1000 power distribution calculation without RCCA and with MA + MB RCCA insertion. The power peaking factors for the first operating cycle of the AP1000 core without RCCA, as well as with single and multiple RCCA insertions, are still below the safety limit value (less than about 1.798). In terms of the thermal power generated by the fuel assemblies, the AP1000 core can therefore be considered safe during its first operating cycle.
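As a simple illustration of the quantity being checked against the 1.798 limit, the sketch below computes a radial power peaking factor as the maximum relative assembly power divided by the core-average value; the quarter-core power map is entirely made up and has no relation to the AP1000 results above.

```python
import numpy as np

def radial_peaking_factor(power_map):
    """Radial power peaking factor: maximum assembly (or rod) relative power
    divided by the average over all fueled positions.  Zeros mark reflector or
    empty positions (all values here are illustrative)."""
    p = np.asarray(power_map, dtype=float)
    fueled = p[p > 0.0]
    return fueled.max() / fueled.mean()

# Quarter-core sketch with made-up relative assembly powers:
quarter_core = [
    [1.25, 1.18, 1.05, 0.80],
    [1.18, 1.12, 0.98, 0.72],
    [1.05, 0.98, 0.85, 0.00],
    [0.80, 0.72, 0.00, 0.00],
]
print(round(radial_peaking_factor(quarter_core), 3))  # compare against a quoted limit such as 1.798
```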
Computational Infrastructure for Engine Structural Performance Simulation
NASA Technical Reports Server (NTRS)
Chamis, Christos C.
1997-01-01
Select computer codes developed over the years to simulate specific aspects of engine structures are described. These codes include blade impact integrated multidisciplinary analysis and optimization, progressive structural fracture, quantification of uncertainties for structural reliability and risk, benefits estimation of new technology insertion, and hierarchical simulation of engine structures made from metal matrix and ceramic matrix composites. Collectively, these codes constitute a unique infrastructure that is ready to credibly evaluate new and future engine structural concepts throughout the development cycle, from initial concept, to design and fabrication, to service performance and maintenance and repairs, to retirement for cause, and even to possible recycling. Stated differently, they provide 'virtual' concurrent engineering for the total life cycle cost of engine structures.
IPHE Regulations Codes and Standards Working Group - Type IV COPV Round Robin Testing
NASA Technical Reports Server (NTRS)
Maes, M.; Starritt, L.; Zheng, J. Y.; Ou, K.; Keller, J.
2017-01-01
This manuscript presents the results of a multi-lateral international activity intended to understand how to execute a cycle stress test as specified in a chosen standard (GTR, SAE, ISO, EIHP...). The purpose of this work was to establish a harmonized test method protocol to ensure that the same results would be achieved regardless of the testing facility. It was found that accurate temperature measurement of the working fluid is necessary to ensure the test conditions remain within the tolerances specified. Continuous operation is possible with adequate cooling of the working fluid but this becomes more demanding if the cycle frequency increases. Recommendations for future test system design and operation are presented.
RRTMGP: A High-Performance Broadband Radiation Code for the Next Decade
2014-09-30
Hardware counters were used to measure several performance metrics, including the number of double-precision (DP) floating-point operations (FLOPs) ... 0.2 DP FLOPs per CPU cycle. Experience with production science code is that it is possible to achieve execution rates in the range of 0.5 to 1.0 DP FLOPs per cycle. Looking at the ratio of vectorized DP FLOPs to total DP FLOPs, we see (Figure PROF) that for most of the execution time the ...
10 CFR 434.607 - Life cycle cost analysis criteria.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle cost...
10 CFR 434.607 - Life cycle cost analysis criteria.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle cost...
10 CFR 434.607 - Life cycle cost analysis criteria.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Life cycle cost analysis criteria. 434.607 Section 434.607 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.607 Life cycle cost...
NASA Technical Reports Server (NTRS)
Rosenberg, L. S.; Revere, W. R.; Selcuk, M. K.
1981-01-01
A computer simulation code was employed to evaluate several generic types of solar power systems (up to 10 MWe). Details of the simulation methodology and the solar plant concepts are given along with cost and performance results. The Solar Energy Simulation computer code (SESII) was used, which optimizes the size of the collector field and energy storage subsystem for given engine-generator and energy-transport characteristics. Nine plant types were examined which employed combinations of different technology options, such as: distributed or central receivers with one- or two-axis tracking or no tracking; point- or line-focusing concentrators; central or distributed power conversion; Rankine, Brayton, or Stirling thermodynamic cycles; and thermal or electrical storage. Optimal cost curves were plotted as a function of levelized busbar energy cost and annualized plant capacity. Point-focusing distributed receiver systems were found to be most efficient (17-26 percent).
Critiquing 'pore connectivity' as a basis for in situ flow in geothermal systems
NASA Astrophysics Data System (ADS)
Kenedi, C. L.; Leary, P.; Malin, P.
2013-12-01
Geothermal system in situ flow systematics derived from detailed examination of grain-scale structures, fabrics, mineral alteration, and pore connectivity may be extremely misleading if/when extrapolated to reservoir-scale flow structure. In oil/gas field clastic reservoir operations, it is standard to assume that small scale studies of flow fabric - notably the Kozeny-Carman and Archie's Law treatments at the grain scale and well-log/well-bore sampling of formations/reservoirs at the cm-m scale - are adequate to define the reservoir-scale flow properties. In the case of clastic reservoirs, however, a wide range of reservoir-scale data wholly discredits this extrapolation: Well-log data show that grain-scale fracture density fluctuation power scales inversely with spatial frequency k, S(k) ~ 1/k^β, 1.0 < β < 1.2, 1 cycle/km < k < 1 cycle/cm; the scaling is a 'universal' feature of well-logs (neutron porosity, sonic velocity, chemical abundance, mass density, resistivity), in many forms of clastic rock and instances of shale bodies, for both horizontal and vertical wells. Grain-scale fracture density correlates with in situ porosity; spatial fluctuations of porosity φ in well-core correlate with spatial fluctuations in the logarithm of well-core permeability, δφ ~ δlog(κ), with typical correlation coefficient ~ 85%; a similar relation is observed in consolidating sediments/clays, indicating a generic coupling between fluid pressure and solid deformation at pore sites. In situ macroscopic flow systems are lognormally distributed according to κ ~ κ0 exp(α(φ-φ0)), with α >> 1 an empirical parameter for the degree of in situ fracture connectivity; the lognormal distribution applies to well productivities in US oil fields and NZ geothermal fields, 'frack productivity' in oil/gas shale body reservoirs, ore grade distributions, and trace element abundances. Although presently available evidence for these properties in geothermal reservoirs is limited, there are indications that geothermal system flow essentially obeys the same 'universal' in situ flow rules as does clastic rock: Well-log data from Los Azufres, MX, show power-law scaling S(k) ~ 1/k^β, 1.2 < β < 1.4, for the spatial frequency range 2 cycles/km to 0.5 cycle/m; higher β-values are likely due to the relatively fresh nature of geothermal systems. Well-core at Bulalo (PH) and Ohaaki (NZ) shows statistically significant spatial correlation, δφ ~ δlog(κ). Well productivity at Ohaaki/Ngawha (NZ) and in geothermal systems elsewhere is lognormally distributed, and K/Th/U abundances are lognormally distributed in Los Azufres well-logs. We therefore caution that small-scale evidence for in situ flow fabric in geothermal systems that is interpreted in terms of 'pore connectivity' may in fact not reflect how small-scale chemical processes are integrated into a large-scale geothermal flow structure. Rather, such small scale studies should (perhaps) be considered in terms of the above flow rules. These flow rules are easily incorporated into standard flow simulation codes, in particular the OPM (Open Porous Media) open-source industry-standard flow code. Geochemical transport data relevant to geothermal systems can thus be expected to be well modeled by OPM or equivalent (e.g., INL/LANL) codes.
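To illustrate the flow rule quoted above, the sketch below draws normally distributed porosity fluctuations and maps them through κ = κ0 exp(α(φ - φ0)), which by construction yields a lognormal permeability distribution and the δφ ~ δlog(κ) correlation; all parameter values are illustrative, not fitted to any particular field.

```python
import numpy as np

# Sketch of the empirical poro-perm relation quoted above:
#   kappa = kappa0 * exp(alpha * (phi - phi0)),  alpha >> 1
# Normally distributed porosity fluctuations then give lognormally distributed
# permeability (and hence well productivity).  All numbers are illustrative.
rng = np.random.default_rng(1)
phi0, kappa0, alpha = 0.10, 1e-15, 30.0      # mean porosity, reference permeability [m^2]
phi = rng.normal(phi0, 0.02, 100_000)        # porosity fluctuations
kappa = kappa0 * np.exp(alpha * (phi - phi0))

log_k = np.log(kappa)
print("correlation(phi, log kappa) =", round(np.corrcoef(phi, log_k)[0, 1], 3))   # ~1 by construction
print("median / mean permeability  =", round(np.median(kappa) / kappa.mean(), 3)) # < 1: heavy upper tail
```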
Quicklook overview of model changes in Melcor 2.2: Rev 6342 to Rev 9496
DOE Office of Scientific and Technical Information (OSTI.GOV)
Humphries, Larry L.
2017-05-01
MELCOR 2.2 is a significant official release of the MELCOR code with many new models and model improvements. This report provides the code user with a quick review and characterization of the new models added, changes to existing models, the effect of code changes during this code development cycle (rev 6342 to rev 9496), and a preview of validation results with this code version. More detailed information is found in the code Subversion logs as well as the User Guide and Reference Manuals.
Development of Advanced Life Cycle Costing Methods for Technology Benefit/Cost/Risk Assessment
NASA Technical Reports Server (NTRS)
Yackovetsky, Robert (Technical Monitor)
2002-01-01
The overall objective of this three-year grant is to provide NASA Langley's System Analysis Branch with improved affordability tools and methods based on probabilistic cost assessment techniques. In order to accomplish this objective, the Aerospace Systems Design Laboratory (ASDL) needs to pursue more detailed affordability, technology impact, and risk prediction methods and to demonstrate them on a variety of advanced commercial transports. The affordability assessment, which is a cornerstone of ASDL methods, relies on the Aircraft Life Cycle Cost Analysis (ALCCA) program originally developed by NASA Ames Research Center and enhanced by ASDL. This grant proposed to improve ALCCA in support of the project objective by updating the research, design, test, and evaluation cost module, as well as the engine development cost module. Investigations into enhancements to ALCCA include improved engine development cost, process based costing, supportability cost, and system reliability with airline loss of revenue for system downtime. A probabilistic, stand-alone version of ALCCA/FLOPS will also be developed under this grant in order to capture the uncertainty involved in technology assessments. FLOPS (FLight OPtimization System) is an aircraft synthesis and sizing code developed by NASA Langley Research Center. This probabilistic version of the coupled program will be used within a Technology Impact Forecasting (TIF) method to determine what types of technologies would have to be infused in a system in order to meet customer requirements. A probabilistic analysis of the CERs (cost estimating relationships) within ALCCA will also be carried out under this contract in order to gain some insight into the most influential costs and the impact that code fidelity could have on future RDS (Robust Design Simulation) studies.
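As a minimal sketch of what a probabilistic treatment of cost estimating relationships can look like, the snippet below treats the coefficients of a toy weight-based CER as random variables and Monte Carlos a life cycle cost distribution; the CER form, coefficient ranges, and cost categories are hypothetical and are not ALCCA's actual relationships.

```python
import numpy as np

# Hypothetical probabilistic CER: cost = a * weight**b, with a and b uncertain,
# plus an uncertain operations multiplier.  Propagated by Monte Carlo.
rng = np.random.default_rng(42)
n = 50_000
weight = 75_000.0                                     # assumed empty weight, lb

a = rng.normal(2.5e3, 3e2, n)                         # uncertain CER coefficient
b = rng.triangular(0.82, 0.85, 0.90, n)               # uncertain weight exponent
dev_cost = a * weight ** b                            # development cost, $
ops_cost = rng.normal(1.8, 0.3, n).clip(min=0.0) * dev_cost  # ops as uncertain multiple
lcc = dev_cost + ops_cost                             # toy life cycle cost

print("median LCC  :", f"{np.median(lcc):.3e}")
print("80% interval:", f"{np.percentile(lcc, 10):.3e} .. {np.percentile(lcc, 90):.3e}")
```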
Direct Digital Control of HVAC (Heating, Ventilating, and Air Conditioning).
1985-01-01
controller functions such as time-of-day, economizer cycles, reset, load shedding, chiller optimization, VAV fan synchronization, and optimum start/stop ... control system such as that illustrated in Figure 4. Data on setpoints, reset schedules, and event timing, such as that presented in Figure 6, are ... program code (Figure 7). In addition to the control logic, setpoint and other data are readily available. Program logic, setpoint and schedule data, and ...
Structure Limits for a 30mm Annular Piston.
1988-05-01
block at the rear, ending the cycle. III. STRESS ANALYSIS PROCEDURE. Stress data was generated using the SAAS-II finite element computer code. Applied ...
SDTM - SYSTEM DESIGN TRADEOFF MODEL FOR SPACE STATION FREEDOM RELEASE 1.1
NASA Technical Reports Server (NTRS)
Chamberlin, R. G.
1994-01-01
Although extensive knowledge of space station design exists, the information is widely dispersed. The Space Station Freedom Program (SSFP) needs policies and procedures that ensure the use of consistent design objectives throughout its organizational hierarchy. The System Design Tradeoff Model (SDTM) produces information that can be used for this purpose. SDTM is a mathematical model of a set of possible designs for Space Station Freedom. Using the SDTM program, one can find the particular design which provides specified amounts of resources to Freedom's users at the lowest total (or life cycle) cost. One can also compare alternative design concepts by changing the set of possible designs, while holding the specified user services constant, and then comparing costs. Finally, both costs and user services can be varied simultaneously when comparing different designs. SDTM selects its solution from a set of feasible designs. Feasibility constraints include safety considerations, minimum levels of resources required for station users, budget allocation requirements, time limitations, and Congressional mandates. The total, or life cycle, cost includes all of the U.S. costs of the station: design and development, purchase of hardware and software, assembly, and operations throughout its lifetime. The SDTM development team has identified, for a variety of possible space station designs, the subsystems that produce the resources to be modeled. The team has also developed formulas for the cross consumption of resources by other resources, as functions of the amounts of resources produced. SDTM can find the values of station resources, so that subsystem designers can choose new design concepts that further reduce the station's life cycle cost. The fundamental input to SDTM is a set of formulas that describe the subsystems which make up a reference design. Most of the formulas identify how the resources required by each subsystem depend upon the size of the subsystem. Some of the formulas describe how the subsystem costs depend on size. The formulas can be complicated and nonlinear (if nonlinearity is needed to describe how designs change with size). SDTM's outputs are amounts of resources, life-cycle costs, and marginal costs. SDTM will run on IBM PC/XTs, ATs, and 100% compatibles with 640K of RAM and at least 3Mb of fixed-disk storage. A printer which can print in 132-column mode is also required, and a mathematics co-processor chip is highly recommended. This code is written in Turbo C 2.0. However, since the developers used a modified version of the proprietary Vitamin C source code library, the complete source code is not available. The executable is provided, along with all non-proprietary source code. This program was developed in 1989.
Resident challenges with daily life in Chinese long-term care facilities: A qualitative pilot study.
Song, Yuting; Scales, Kezia; Anderson, Ruth A; Wu, Bei; Corazzini, Kirsten N
As traditional family-based care in China declines, the demand for residential care increases. Knowledge of residents' experiences with long-term care (LTC) facilities is essential to improving quality of care. This pilot study aimed to describe residents' experiences in LTC facilities, particularly as it related to physical function. Semi-structured open-ended interviews were conducted in two facilities with residents stratified by three functional levels (n = 5). Directed content analysis was guided by the Adaptive Leadership Framework. A two-cycle coding approach was used with a first-cycle descriptive coding and second-cycle dramaturgical coding. Interviews provided examples of challenges faced by residents in meeting their daily care needs. Five themes emerged: staff care, care from family members, physical environment, other residents in the facility, and personal strategies. Findings demonstrate the significance of organizational context for care quality and reveal foci for future research. Copyright © 2017 Elsevier Inc. All rights reserved.
[Representation of knowledge in respiratory medicine: ontology should help the coding process].
Blanc, F-X; Baneyx, A; Charlet, J; Housset, B
2010-09-01
Access to medical knowledge is a major issue for health professionals and requires the development of terminologies. The objective of the reported work was to construct an ontology of respiratory medicine, i.e., an organized and formalized terminology composed of specific knowledge. The purpose is to help the medico-economical coding process and to represent the relevant knowledge about the patient. Our research covers the whole life cycle of an ontology, from the development of a methodology, to building the ontology from texts, to its use in an operational system. A computerized tool, based on the ontology, allows both medico-economical coding and graphical medical coding; the latter will be used to index hospital reports. Our ontology contains 1913 concepts and covers all the knowledge included in the PMSI part of the SPLF thesaurus. Our tool has been evaluated and showed a recall of 80% and an accuracy of 85% for the medico-economical coding. The work presented in this paper justifies the approach that has been used. It must be continued on a large scale to validate our coding principles and the possibility of making enquiries on patient reports for clinical research. Copyright © 2010. Published by Elsevier Masson SAS.
Quality improvement utilizing in-situ simulation for a dual-hospital pediatric code response team.
Yager, Phoebe; Collins, Corey; Blais, Carlene; O'Connor, Kathy; Donovan, Patricia; Martinez, Maureen; Cummings, Brian; Hartnick, Christopher; Noviski, Natan
2016-09-01
Given the rarity of in-hospital pediatric emergency events, identification of gaps and inefficiencies in the code response can be difficult. In-situ, simulation-based medical education programs can identify unrecognized systems-based challenges. We hypothesized that developing an in-situ, simulation-based pediatric emergency response program would identify latent inefficiencies in a complex, dual-hospital pediatric code response system and allow rapid intervention testing to improve performance before implementation at an institutional level. Pediatric leadership from two hospitals with a shared pediatric code response team employed the Institute for Healthcare Improvement's (IHI) Breakthrough Model for Collaborative Improvement to design a program consisting of Plan-Do-Study-Act cycles occurring in a simulated environment. The objectives of the program were to 1) identify inefficiencies in our pediatric code response; 2) correlate to current workflow; 3) employ an iterative process to test quality improvement interventions in a safe environment; and 4) measure performance before actual implementation at the institutional level. Twelve dual-hospital, in-situ, simulated, pediatric emergencies occurred over one year. The initial simulated event allowed identification of inefficiencies including delayed provider response, delayed initiation of cardiopulmonary resuscitation (CPR), and delayed vascular access. These gaps were linked to process issues including unreliable code pager activation, slow elevator response, and lack of responder familiarity with layout and contents of code cart. From first to last simulation with multiple simulated process improvements, code response time for secondary providers coming from the second hospital decreased from 29 to 7 min, time to CPR initiation decreased from 90 to 15 s, and vascular access obtainment decreased from 15 to 3 min. Some of these simulated process improvements were adopted into the institutional response while others continue to be trended over time for evidence that observed changes represent a true new state of control. Utilizing the IHI's Breakthrough Model, we developed a simulation-based program to 1) successfully identify gaps and inefficiencies in a complex, dual-hospital, pediatric code response system and 2) provide an environment in which to safely test quality improvement interventions before institutional dissemination. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Comparison of Analytical Predictions and Experimental Results for a Dual Brayton Power System
NASA Technical Reports Server (NTRS)
Johnson, Paul
2007-01-01
NASA Glenn Research Center (GRC) contracted Barber-Nichols, Arvada, CO to construct a dual Brayton power conversion system for use as a hardware proof of concept and to validate results from a computational code known as the Closed Cycle System Simulation (CCSS). Initial checkout tests were performed at Barber-Nichols to ready the system for delivery to GRC. This presentation describes the system hardware components and lists the types of checkout tests performed, along with a couple of issues encountered while conducting the tests. A description of the CCSS model is also presented. The checkout tests did not focus on generating data; therefore, no test data or model analyses are presented.
NASA Astrophysics Data System (ADS)
Goyal, M.; Chakravarty, A.; Atrey, M. D.
2017-02-01
Performance of modern helium refrigeration/liquefaction systems depends significantly on the effectiveness of heat exchangers. Generally, compact plate fin heat exchangers (PFHE) having very high effectiveness (>0.95) are used in such systems. Apart from basic fluid film resistances, various secondary parameters influence the sizing/rating of these heat exchangers. In the present paper, sizing calculations are performed, using in-house developed numerical models/codes, for a set of high effectiveness PFHE for a modified Claude cycle based helium liquefier/refrigerator operating in the refrigeration mode without liquid nitrogen (LN2) pre-cooling. The combined effects of secondary parameters like axial heat conduction through the heat exchanger metal matrix, parasitic heat in-leak from surroundings and variation in the fluid/metal properties are taken care of in the sizing calculation. Numerical studies are carried out to predict the off-design performance of the PFHEs in the refrigeration mode with LN2 pre-cooling. Iterative process cycle calculations are also carried out to obtain the inlet/exit state points of the heat exchangers.
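As a point of reference for the sizing calculations described in the preceding abstract, the sketch below shows the basic counterflow effectiveness-NTU relation and how it can be inverted to estimate the NTU required for a target effectiveness. It deliberately neglects the secondary effects (axial conduction, parasitic heat in-leak, property variation) that the in-house models account for, and the capacity-rate ratio is an assumed illustrative value.

```python
# Minimal counterflow effectiveness-NTU sizing sketch (illustrative only).
# It neglects axial conduction, parasitic heat in-leak and property variation,
# which the in-house PFHE models described above account for.
import math

def effectiveness_counterflow(ntu, cr):
    """Effectiveness of a counterflow heat exchanger for capacity-rate ratio cr."""
    if abs(cr - 1.0) < 1e-9:
        return ntu / (1.0 + ntu)
    return (1.0 - math.exp(-ntu * (1.0 - cr))) / (1.0 - cr * math.exp(-ntu * (1.0 - cr)))

def ntu_for_effectiveness(target_eps, cr):
    """Invert the effectiveness relation by bisection to size the exchanger."""
    lo, hi = 1e-6, 1e4
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if effectiveness_counterflow(mid, cr) < target_eps:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    cr = 0.95                      # assumed Cmin/Cmax ratio, for illustration
    for eps in (0.95, 0.97, 0.99):
        print(f"eps = {eps:.2f} -> NTU = {ntu_for_effectiveness(eps, cr):.1f}")
```

The rapid growth of the required NTU as the target effectiveness approaches 0.99 is one reason the secondary effects listed above become important for such exchangers.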
Determination of oestrous cycle of the rats by direct examination: how reliable?
Yener, T; Turkkani Tunc, A; Aslan, H; Aytan, H; Cantug Caliskan, A
2007-02-01
For determination of the oestrous cycle in rats, the classical Papanicolaou technique has long been used successfully. Instead of the many stains used in the Papanicolaou technique, staining the vaginal secretions with only methylene blue has also been described. Recently, a new technique in which vaginal samples are directly examined under the light microscope has been introduced. The aim of this study was to assess the reliability of this new technique by comparing it with the classical staining techniques. From 20 Wistar rats, 60 vaginal samples were collected with a micropipette, three from each. Briefly, the vagina was flushed two to three times and the recovered fluid was equally distributed onto three glass slides. The glass slides were coded. Two samples were stained with Papanicolaou and methylene blue, respectively, while the third was examined directly. Determination of the phases of the oestrous cycle was made by the same histologist, who was blinded to the groups and the coding system. After determination of the oestrous phase in all samples, the results were compared and found to match. In conclusion, the same results can be obtained with the direct examination technique and this technique is reliable, so there is no need to use relatively time-consuming, less practical and more expensive techniques such as Papanicolaou or methylene blue staining.
Marquardt, Torsten; Stange, Annette; Pecka, Michael; Grothe, Benedikt; McAlpine, David
2014-01-01
Recently, with the use of an amplitude-modulated binaural beat (AMBB), in which sound amplitude and interaural-phase difference (IPD) were modulated with a fixed mutual relationship (Dietz et al. 2013b), we demonstrated that the human auditory system uses interaural timing differences in the temporal fine structure of modulated sounds only during the rising portion of each modulation cycle. However, the degree to which peripheral or central mechanisms contribute to the observed strong dominance of the rising slope remains to be determined. Here, by recording responses of single neurons in the medial superior olive (MSO) of anesthetized gerbils and in the inferior colliculus (IC) of anesthetized guinea pigs to AMBBs, we report a correlation between the position within the amplitude-modulation (AM) cycle generating the maximum response rate and the position at which the instantaneous IPD dominates the total neural response. The IPD during the rising segment dominates the total response in 78% of MSO neurons and 69% of IC neurons, with responses of the remaining neurons predominantly coding the IPD around the modulation maximum. The observed diversity of dominance regions within the AM cycle, especially in the IC, and its comparison with the human behavioral data suggest that only the subpopulation of neurons with rising slope dominance codes the sound-source location in complex listening conditions. A comparison of two models to account for the data suggests that emphasis on IPDs during the rising slope of the AM cycle depends on adaptation processes occurring before binaural interaction. PMID:24554782
Interference Canceller Based on Cycle-and-Add Property for Single User Detection in DS-CDMA
NASA Astrophysics Data System (ADS)
Hettiarachchi, Ranga; Yokoyama, Mitsuo; Uehara, Hideyuki; Ohira, Takashi
In this paper, the performance of a novel interference cancellation technique for single-user detection in a direct-sequence code-division multiple access (DS-CDMA) system is investigated. This new algorithm is based on the Cycle-and-Add property of PN (Pseudorandom Noise) sequences and can be applied to both synchronous and asynchronous systems. The proposed strategy provides a simple method that can delete interference signals one by one regardless of their power levels. Therefore, it is possible to overcome the near-far problem (NFP) in a successive manner without using transmit power control (TPC) techniques. The validity of the proposed procedure is corroborated by computer simulations in additive white Gaussian noise (AWGN) and frequency-nonselective fading channels. Performance results indicate that the proposed receiver outperforms the conventional receiver and, in many cases, does so with a considerable gain.
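The cancellation scheme above rests on the cycle-and-add (shift-and-add) property of maximal-length PN sequences: chip-wise modulo-2 addition of an m-sequence with any cyclic shift of itself yields another cyclic shift of the same sequence. The sketch below demonstrates the property for an illustrative 31-chip m-sequence; the LFSR tap positions (stages 5 and 2 of a 5-stage register) are an assumed primitive configuration chosen for demonstration, not the spreading code from the paper.

```python
# Demonstration of the cycle-and-add (shift-and-add) property of m-sequences:
# XORing a maximal-length PN sequence with a cyclic shift of itself yields
# another cyclic shift of the same sequence.  The tap positions below are an
# illustrative primitive configuration, not the code used in the paper.

def m_sequence(taps, degree):
    """Generate one period of an m-sequence from a Fibonacci LFSR."""
    state = [1] * degree
    seq = []
    for _ in range(2 ** degree - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def cyclic_shift(seq, k):
    return seq[k:] + seq[:k]

if __name__ == "__main__":
    seq = m_sequence(taps=(5, 2), degree=5)   # 31-chip m-sequence
    # All 31 cyclic shifts are distinct for a maximal-length sequence.
    assert len(set(map(tuple, (cyclic_shift(seq, k) for k in range(31))))) == 31
    shifted = cyclic_shift(seq, 7)
    summed = [a ^ b for a, b in zip(seq, shifted)]
    # The chip-wise XOR is itself some cyclic shift of the original sequence.
    matches = [k for k in range(31) if summed == cyclic_shift(seq, k)]
    print("sum equals cyclic shift by", matches)
```

This is the algebraic fact that lets a delayed replica of an interferer's spreading code be removed one signal at a time, independent of its received power.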
Advances in Geologic Disposal System Modeling and Application to Crystalline Rock
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mariner, Paul E.; Stein, Emily R.; Frederick, Jennifer M.
The Used Fuel Disposition Campaign (UFDC) of the U.S. Department of Energy (DOE) Office of Nuclear Energy (NE), Office of Fuel Cycle Technology (OFCT) is conducting research and development (R&D) on geologic disposal of used nuclear fuel (UNF) and high-level nuclear waste (HLW). Two of the high priorities for UFDC disposal R&D are design concept development and disposal system modeling (DOE 2011). These priorities are directly addressed in the UFDC Generic Disposal Systems Analysis (GDSA) work package, which is charged with developing a disposal system modeling and analysis capability for evaluating disposal system performance for nuclear waste in geologic media (e.g., salt, granite, clay, and deep borehole disposal). This report describes specific GDSA activities in fiscal year 2016 (FY 2016) toward the development of the enhanced disposal system modeling and analysis capability for geologic disposal of nuclear waste. The GDSA framework employs the PFLOTRAN thermal-hydrologic-chemical multi-physics code and the Dakota uncertainty sampling and propagation code. Each code is designed for massively-parallel processing in a high-performance computing (HPC) environment. Multi-physics representations in PFLOTRAN are used to simulate various coupled processes including heat flow, fluid flow, waste dissolution, radionuclide release, radionuclide decay and ingrowth, precipitation and dissolution of secondary phases, and radionuclide transport through engineered barriers and natural geologic barriers to the biosphere. Dakota is used to generate sets of representative realizations and to analyze parameter sensitivity.
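As a rough illustration of the sampling-and-propagation workflow that Dakota provides around PFLOTRAN, the sketch below draws a Latin hypercube sample over a few hypothetical uncertain inputs and propagates it through a toy response function. It does not use Dakota's input syntax or PFLOTRAN itself; the parameter names, ranges, and response function are placeholders.

```python
# Generic sketch of uncertainty sampling and propagation of the kind Dakota
# performs when wrapped around a disposal-system simulator.  The parameter
# names, ranges, and the toy response function are hypothetical placeholders;
# a real study would launch PFLOTRAN for each realization instead.
import numpy as np
from scipy.stats import qmc

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=100)

# Hypothetical uncertain inputs: log10 permeability [m^2], porosity [-],
# waste-form fractional dissolution rate [1/yr].
lower = [-20.0, 0.05, 1e-8]
upper = [-16.0, 0.30, 1e-6]
samples = qmc.scale(unit_samples, lower, upper)

def toy_response(x):
    """Placeholder scalar response standing in for a full simulator run."""
    log_k, phi, rate = x
    return rate * 10 ** (0.5 * (log_k + 18.0)) / phi

outputs = np.array([toy_response(x) for x in samples])
print("mean of toy response:", outputs.mean())

# Crude sensitivity indication: rank correlation of each input with the output.
for name, col in zip(["log10(k)", "porosity", "diss. rate"], samples.T):
    rho = np.corrcoef(np.argsort(np.argsort(col)),
                      np.argsort(np.argsort(outputs)))[0, 1]
    print(f"rank correlation with {name}: {rho:+.2f}")
```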
Decay Heat Removal in GEN IV Gas-Cooled Fast Reactors
Cheng, Lap-Yan; Wei, Thomas Y. C.
2009-01-01
The safety goal of the current designs of advanced high-temperature thermal gas-cooled reactors (HTRs) is that no core meltdown would occur in a depressurization event with a combination of concurrent safety system failures. This study focused on the analysis of passive decay heat removal (DHR) in a GEN IV direct-cycle gas-cooled fast reactor (GFR) which is based on the technology developments of the HTRs. Given the different criteria and design characteristics of the GFR, an approach different from that taken for the HTRs for passive DHR would have to be explored. Different design options based on maintaining core flow were evaluated by performing transient analysis of a depressurization accident using the system code RELAP5-3D. The study also reviewed the conceptual design of autonomous systems for shutdown decay heat removal and recommends that future work in this area should be focused on the potential for Brayton cycle DHRs.
Embedded real-time image processing hardware for feature extraction and clustering
NASA Astrophysics Data System (ADS)
Chiu, Lihu; Chang, Grant
2003-08-01
Printronix, Inc. uses scanner-based image systems to perform print quality measurements for line-matrix printers. The size of the image samples and the image definition required make commercial scanners convenient to use. The image processing is relatively well defined, and we are able to simplify many of the calculations into hardware equations and "c" code. The process of rapidly prototyping the system using DSP-based "c" code gets the algorithms well defined early in the development cycle. Once a working system is defined, the rest of the process involves splitting the task up between the FPGA and the DSP implementation. Deciding which of the two to use, the DSP or the FPGA, is a simple matter of trial benchmarking. There are two kinds of benchmarking: one for speed, and the other for memory. The more memory-intensive algorithms should run in the DSP, and the simple real-time tasks can use the FPGA most effectively. Once the task is split, we can decide on which platform each algorithm should be executed. This involves prototyping all the code in the DSP, then timing various blocks of the algorithm. Slow routines can be optimized using the compiler tools and, if further reduction in time is needed, moved into tasks that the FPGA can perform.
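The trial benchmarking described above amounts to timing candidate implementations of each block and comparing their speed and memory demands. A minimal, hedged sketch of that idea is shown below in Python; the two candidate routines are placeholders rather than Printronix algorithms, and the real work would be timed as DSP "c" code.

```python
# Minimal illustration of "trial benchmarking" of two candidate implementations,
# analogous to timing blocks of DSP "c" code to decide where each task belongs.
# The candidate routines below are placeholders, not Printronix algorithms.
import time
import statistics

def candidate_loop(data):
    """Straightforward per-sample loop (placeholder for a compute-style routine)."""
    total = 0
    for v in data:
        total += (v * v) & 0xFFFF
    return total

def candidate_table(data, table=[(v * v) & 0xFFFF for v in range(256)]):
    """Lookup-table variant (placeholder for a memory-heavier routine)."""
    return sum(table[v] for v in data)

def benchmark(fn, data, repeats=20):
    """Return the median wall-clock time of fn(data) over several repeats."""
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn(data)
        times.append(time.perf_counter() - t0)
    return statistics.median(times)

if __name__ == "__main__":
    data = [i % 256 for i in range(100_000)]
    for fn in (candidate_loop, candidate_table):
        print(f"{fn.__name__}: {benchmark(fn, data) * 1e3:.2f} ms (median)")
```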
Multi-Megawatt Gas Turbine Power Systems for Lunar Colonies
NASA Technical Reports Server (NTRS)
Juhasz, Albert J.
2006-01-01
A concept for the development of a second-generation 10 MWe prototype lunar power plant, utilizing a gas-cooled fission reactor supplying heated helium working fluid to two parallel 5 MWe closed-cycle gas turbines, is presented. Such a power system is expected to supply the energy needs of an initial lunar colony with a crew of up to 50 persons engaged in mining and manufacturing activities. System performance and mass details were generated by an author-developed code (BRMAPS). The proposed pilot power plant can be a model for future plants of the same capacity that could be tied to an evolutionary lunar power grid.
NASA Technical Reports Server (NTRS)
Mcgaw, Michael A.; Saltsman, James F.
1993-01-01
A recently developed high-temperature fatigue life prediction computer code is presented and an example of its usage given. The code discussed is based on the Total Strain version of Strainrange Partitioning (TS-SRP). Included in this code are procedures for characterizing the creep-fatigue durability behavior of an alloy according to TS-SRP guidelines and predicting cyclic life for complex cycle types for both isothermal and thermomechanical conditions. A reasonably extensive materials properties database is included with the code.
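For orientation, the sketch below works through the classical Strainrange Partitioning interaction damage rule, 1/N_pred = sum of F_ij/N_ij, with power-law life relations for the four strainrange types. The coefficients are invented for illustration and are not values from the code's materials database, nor is this the Total Strain (TS-SRP) formulation itself.

```python
# Illustrative sketch of the Strainrange Partitioning interaction damage rule,
# 1/N_pred = sum_ij F_ij / N_ij, with life relations of power-law form
# delta_eps_in = C_ij * N_ij**c_ij.  The coefficients below are made-up
# placeholders, not values from the code's materials database.

LIFE_RELATIONS = {          # (C_ij, c_ij) for each strainrange type (assumed)
    "pp": (0.50, -0.60),
    "pc": (0.25, -0.60),
    "cp": (0.15, -0.60),
    "cc": (0.10, -0.60),
}

def life_for_type(delta_eps_in, kind):
    """Cycles to failure if the whole inelastic strainrange were of one type."""
    c_coeff, c_exp = LIFE_RELATIONS[kind]
    return (delta_eps_in / c_coeff) ** (1.0 / c_exp)

def predicted_life(delta_eps_in, fractions):
    """Interaction damage rule; the fractions F_ij must sum to 1."""
    damage_per_cycle = sum(f / life_for_type(delta_eps_in, k)
                           for k, f in fractions.items() if f > 0.0)
    return 1.0 / damage_per_cycle

if __name__ == "__main__":
    # Example cycle: 0.5% inelastic strainrange, mostly pp with some cp creep-fatigue.
    print("N_pred =", round(predicted_life(5e-3, {"pp": 0.7, "cp": 0.3, "pc": 0.0, "cc": 0.0})))
```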
Nuclear Engine System Simulation (NESS) version 2.0
NASA Technical Reports Server (NTRS)
Pelaccio, Dennis G.; Scheil, Christine M.; Petrosky, Lyman J.
1993-01-01
The topics are presented in viewgraph form and include the following: nuclear thermal propulsion (NTP) engine system analysis program development; nuclear thermal propulsion engine analysis capability requirements; team resources used to support NESS development; expanded liquid engine simulations (ELES) computer model; ELES verification examples; NESS program development evolution; past NTP ELES analysis code modifications and verifications; general NTP engine system features modeled by NESS; representative NTP expander, gas generator, and bleed engine system cycles modeled by NESS; NESS program overview; NESS program flow logic; enabler (NERVA type) nuclear thermal rocket engine; prismatic fuel elements and supports; reactor fuel and support element parameters; reactor parameters as a function of thrust level; internal shield sizing; and reactor thermal model.
Be discs in coplanar circular binaries: Phase-locked variations of emission lines
NASA Astrophysics Data System (ADS)
Panoglou, Despina; Faes, Daniel M.; Carciofi, Alex C.; Okazaki, Atsuo T.; Baade, Dietrich; Rivinius, Thomas; Borges Fernandes, Marcelo
2018-01-01
In this paper, we present the first results of radiative transfer calculations on decretion discs of binary Be stars. A smoothed particle hydrodynamics code computes the structure of Be discs in coplanar circular binary systems for a range of orbital and disc parameters. The resulting disc configuration consists of two spiral arms, and this can be given as input into a Monte Carlo code, which calculates the radiative transfer along the line of sight for various observational coordinates. Making use of the property of steady disc structure in coplanar circular binaries, observables are computed as functions of the orbital phase. Some orbital-phase series of line profiles are given for selected parameter sets under various viewing angles, to allow comparison with observations. Flat-topped profiles with and without superimposed multiple structures are reproduced, showing, for example, that triple-peaked profiles do not have to be necessarily associated with warped discs and misaligned binaries. It is demonstrated that binary tidal effects give rise to phase-locked variability of the violet-to-red (V/R) ratio of hydrogen emission lines. The V/R ratio exhibits two maxima per cycle; in certain cases those maxima are equal, leading to a clear new V/R cycle every half orbital period. This study opens a way to identifying binaries and to constraining the parameters of binary systems that exhibit phase-locked variations induced by tidal interaction with a companion star.
Electronic tagging and integrated product intelligence
NASA Astrophysics Data System (ADS)
Swerdlow, Martin; Weeks, Brian
1996-03-01
The advent of 'intelligent,' electronic data bearing tags is set to revolutionize the way industrial and retail products are identified and tracked throughout their life cycles. The dominant system for unique identification today is the bar code, which is based on printed symbology and regulated by the International Article Numbering Association. Bar codes provide users with significant operational advantages and generate considerable added value to packaging companies, product manufacturers, distributors and retailers, across supply chains in many different sectors, from retailing, to baggage handling and industrial components, e.g., for vehicles or aircraft. Electronic tags offer the potential to: (1) record and store more complex data about the product or any modifications which occur during its life cycle; (2) access (and up-date) stored data in real time in a way which does not involve contact with the product or article; (3) overcome the limitations imposed by systems which rely on line-of-sight access to stored data. Companies are now beginning to consider how electronic data tags can be used, not only to improve the efficiency of their supply chain processes, but also to revolutionize the way they do business. This paper reviews the applications and business opportunities for electronic tags and outlines CEST's strategy for achieving an 'open' standard which will ensure that tags from different vendors can co-exist on an international basis.
Biases in GNSS-Data Processing
NASA Astrophysics Data System (ADS)
Schaer, S. C.; Dach, R.; Lutz, S.; Meindl, M.; Beutler, G.
2010-12-01
Within the Global Positioning System (GPS) traditionally different types of pseudo-range measurements (P-code, C/A-code) are available on the first frequency that are tracked by the receivers with different technologies. For that reason, P1-C1 and P1-P2 Differential Code Biases (DCB) need to be considered in a GPS data processing with a mix of different receiver types. Since the Block IIR-M series of GPS satellites also provide C/A-code on the second frequency, P2-C2 DCB need to be added to the list of biases for maintenance. Potential quarter-cycle biases between different phase observables (specifically L2P and L2C) are another issue. When combining GNSS (currently GPS and GLONASS), careful consideration of inter-system biases (ISB) is indispensable, in particular when an adequate combination of individual GLONASS clock correction results from different sources (using, e.g., different software packages) is intended. Facing the GPS and GLONASS modernization programs and the upcoming GNSS, like the European Galileo and the Chinese Compass, an increasing number of types of biases is expected. The Center for Orbit Determination in Europe (CODE) is monitoring these GPS and GLONASS related biases for a long time based on RINEX files of the tracking network of the International GNSS Service (IGS) and in the frame of the data processing as one of the global analysis centers of the IGS. Within the presentation we give an overview on the stability of the biases based on the monitoring. Biases derived from different sources are compared. Finally, we give an outlook on the potential handling of such biases with the big variety of signals and systems expected in the future.
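To make the role of the code biases concrete, the sketch below applies a P1-C1 differential code bias to a C/A-code pseudorange and then forms the first-order ionosphere-free combination. The bias value and observations are made-up placeholders, and the sign convention assumes the bias is defined as P1 minus C1; operational processing would take satellite- and receiver-specific values from bias products such as those generated at CODE.

```python
# Sketch of a typical DCB correction and ionosphere-free combination for GPS L1/L2.
# The bias and observation values are placeholders; the sign convention assumes
# DCB(P1-C1) = P1 - C1.  Real processing reads satellite/receiver DCBs from
# bias products rather than hard-coding them.
F1 = 1575.42e6     # L1 frequency [Hz]
F2 = 1227.60e6     # L2 frequency [Hz]
C = 299_792_458.0  # speed of light [m/s]

def c1_to_p1(c1_metres, dcb_p1c1_ns):
    """Align a C/A-code (C1) pseudorange with P1 using a P1-C1 DCB in nanoseconds."""
    return c1_metres + dcb_p1c1_ns * 1e-9 * C

def ionosphere_free(p1, p2):
    """First-order ionosphere-free pseudorange combination (metres)."""
    return (F1**2 * p1 - F2**2 * p2) / (F1**2 - F2**2)

if __name__ == "__main__":
    c1 = 22_300_451.37                    # metres (made-up observation)
    p2 = 22_300_456.92                    # metres (made-up observation)
    p1 = c1_to_p1(c1, dcb_p1c1_ns=1.2)    # placeholder DCB of 1.2 ns
    print("ionosphere-free range [m]:", round(ionosphere_free(p1, p2), 3))
```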
Gate-to-gate Life-Cycle Inventory of Hardboard Production in North America
Richard Bergman
2014-01-01
Whole-building life-cycle assessments (LCAs) populated by life-cycle inventory (LCI) data are incorporated into environmental footprint software tools for establishing green building certification by building professionals and code. However, LCI data on some wood building products are still needed to help fill gaps in the data and thus provide a more complete picture...
Coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
Analysis of quantum error correcting (QEC) codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. We present analytic results for the logical error as a function of concatenation level and code distance for coherent errors under the repetition code. For data-only coherent errors, we find that the logical error is partially coherent and therefore non-Pauli. However, the coherent part of the error is negligible after two or more concatenation levels or at fewer than ɛ^-(d-1) error correction cycles. Here ɛ << 1 is the rotation angle error per cycle for a single physical qubit and d is the code distance. These results support the validity of modeling coherent errors using a Pauli channel under some minimum requirements for code distance and/or concatenation. We discuss extensions to imperfect syndrome extraction and implications for general QEC.
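A toy single-qubit calculation, not the paper's repetition-code analysis, helps explain why coherent rotations are treated separately from stochastic Pauli noise: the amplitudes of a systematic rotation add across cycles, so the error probability grows like sin^2(n*ɛ), roughly quadratically in the number of cycles n, whereas incoherent flips accumulate only linearly in n.

```python
# Toy illustration (not the paper's repetition-code analysis) of why coherent
# rotations differ from stochastic Pauli noise: for a single qubit, a systematic
# rotation by eps per cycle gives error probability sin^2(n*eps) after n cycles
# (amplitudes add), while incoherent flips give roughly n*sin^2(eps)
# (probabilities add), so the coherent error grows ~quadratically in n.
import math

eps = 0.01          # rotation-angle error per cycle (radians), eps << 1
for n in (1, 10, 50, 100):
    p_coherent = math.sin(n * eps) ** 2
    p_stochastic = min(1.0, n * math.sin(eps) ** 2)
    print(f"n={n:4d}  coherent={p_coherent:.2e}  stochastic={p_stochastic:.2e}")
```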
Development of a Unix/VME data acquisition system
NASA Astrophysics Data System (ADS)
Miller, M. C.; Ahern, S.; Clark, S. M.
1992-01-01
The current status of a Unix-based VME data acquisition development project is described. It is planned to use existing Fortran data collection software to drive the existing CAMAC electronics via a VME CAMAC branch driver card and associated Daresbury Unix driving software. The first usable Unix driver has been written and produces single-action CAMAC cycles from test software. The data acquisition code has been implemented in test mode under Unix with few problems and effort is now being directed toward finalizing calls to the CAMAC-driving software and ultimate evaluation of the complete system.
Applications of AN OO Methodology and Case to a Daq System
NASA Astrophysics Data System (ADS)
Bee, C. P.; Eshghi, S.; Jones, R.; Kolos, S.; Magherini, C.; Maidantchik, C.; Mapelli, L.; Mornacchi, G.; Niculescu, M.; Patel, A.; Prigent, D.; Spiwoks, R.; Soloviev, I.; Caprini, M.; Duval, P. Y.; Etienne, F.; Ferrato, D.; Le van Suu, A.; Qian, Z.; Gaponenko, I.; Merzliakov, Y.; Ambrosini, G.; Ferrari, R.; Fumagalli, G.; Polesello, G.
The RD13 project has evaluated the use of the Object Oriented Information Engineering (OOIE) method during the development of several software components connected to the DAQ system. The method is supported by a sophisticated commercial CASE tool (Object Management Workbench) and programming environment (Kappa) which covers the full life-cycle of the software including model simulation, code generation and application deployment. This paper gives an overview of the method, CASE tool, DAQ components which have been developed and we relate our experiences with the method and tool, its integration into our development environment and the spiral lifecycle it supports.
Correct coding for laboratory procedures during assisted reproductive technology cycles.
2016-04-01
This document provides updated coding information for services related to assisted reproductive technology procedures. This document replaces the 2012 ASRM document of the same name. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
Impact of thorium based molten salt reactor on the closure of the nuclear fuel cycle
NASA Astrophysics Data System (ADS)
Jaradat, Safwan Qasim Mohammad
The molten salt reactor (MSR) is one of six reactor concepts selected by the Generation IV International Forum (GIF). The liquid fluoride thorium reactor (LFTR) is an MSR concept based on the thorium fuel cycle. The LFTR uses liquid fluoride salts as nuclear fuel, with 232Th and 233U as the fertile and fissile materials, respectively. Fluoride salts of these nuclides are dissolved in a mixed carrier salt of lithium and beryllium (FLiBe). The objective of this research was to complete feasibility studies of a small commercial thermal LFTR. The focus was on neutronic calculations in order to prescribe core design parameters such as core size, fuel block pitch (p), fuel channel radius, fuel path, reflector thickness, fuel salt composition, and power. In order to achieve this objective, the applicability of the Monte Carlo N-Particle Transport Code (MCNP) to MSR modeling was first verified. Then, a conceptual small thermal LFTR was prescribed and the relevant calculations were performed using MCNP to determine the main neutronic parameters of the reactor core. The MCNP code was used to study the reactor physics characteristics of the FUJI-U3 reactor, and the results were compared with those obtained for the original FUJI-U3 using the reactor physics code SRAC95 and the burnup analysis code ORIPHY2. The results were comparable with each other. Based on these results, MCNP was found to be a reliable code for modeling a small thermal LFTR and studying all the related reactor physics characteristics. The results of this study were promising and successful in demonstrating a preliminary small commercial LFTR design. The outcome of using a small reactor core with a diameter/height of 280/260 cm that would operate for more than five years at a power level of 150 MWth was studied. The fuel system 7LiF - BeF2 - ThF4 - UF4 with (233U/232Th) = 2.01% was the candidate fuel for this reactor core.
Development of high-fidelity multiphysics system for light water reactor analysis
NASA Astrophysics Data System (ADS)
Magedanz, Jeffrey W.
There has been a tendency in recent years toward greater heterogeneity in reactor cores, due to the use of mixed-oxide (MOX) fuel, burnable absorbers, and longer cycles with consequently higher fuel burnup. The resulting asymmetry of the neutron flux and energy spectrum between regions with different compositions causes a need to account for the directional dependence of the neutron flux, instead of the traditional diffusion approximation. Furthermore, the presence of both MOX and high-burnup fuel in the core increases the complexity of the heat conduction. The heat transfer properties of the fuel pellet change with irradiation, and the thermal and mechanical expansion of the pellet and cladding strongly affect the size of the gap between them, and its consequent thermal resistance. These operational tendencies require higher fidelity multi-physics modeling capabilities, and this need is addressed by the developments performed within this PhD research. The dissertation describes the development of a High-Fidelity Multi-Physics System for Light Water Reactor Analysis. It consists of three coupled codes -- CTF for Thermal Hydraulics, TORT-TD for Neutron Kinetics, and FRAPTRAN for Fuel Performance. It is meant to address these modeling challenges in three ways: (1) by resolving the state of the system at the level of each fuel pin, rather than homogenizing entire fuel assemblies, (2) by using the multi-group Discrete Ordinates method to account for the directional dependence of the neutron flux, and (3) by using a fuel-performance code, rather than a Thermal Hydraulics code's simplified fuel model, to account for the material behavior of the fuel and its feedback to the hydraulic and neutronic behavior of the system. While the first two are improvements, the third, the use of a fuel-performance code for feedback, constitutes an innovation in this PhD project. Also important to this work is the manner in which such coupling is written. While coupling involves combining codes into a single executable, they are usually still developed and maintained separately. It should thus be a design objective to minimize the changes to those codes, and keep the changes to each code free of dependence on the details of the other codes. This will ease the incorporation of new versions of the code into the coupling, as well as re-use of parts of the coupling to couple with different codes. In order to fulfill this objective, an interface for each code was created in the form of an object-oriented abstract data type. Object-oriented programming is an effective method for enforcing a separation between different parts of a program, and clarifying the communication between them. The interfaces enable the main program to control the codes in terms of high-level functionality. This differs from the established practice of a master/slave relationship, in which the slave code is incorporated into the master code as a set of subroutines. While this PhD research continues previous work with a coupling between CTF and TORT-TD, it makes two major original contributions: (1) using a fuel-performance code, instead of a thermal-hydraulics code's simplified built-in models, to model the feedback from the fuel rods, and (2) the design of an object-oriented interface as an innovative method to interact with a coupled code in a high-level, easily-understandable manner. 
The resulting code system will serve as a tool to study the question of under what conditions, and to what extent, these higher-fidelity methods will provide benefits to reactor core analysis. (Abstract shortened by UMI.)
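A hypothetical sketch of the object-oriented interface idea described in the preceding abstract is given below: each physics code is hidden behind a small abstract data type, and a driver exchanges fields and advances the codes through high-level calls. The class and method names are illustrative assumptions and are not the actual CTF, TORT-TD, or FRAPTRAN interfaces.

```python
# Hypothetical sketch of the object-oriented coupling interface idea: each
# physics code is wrapped behind an abstract data type so the driver controls
# it through high-level calls.  Names are illustrative, not the real interfaces.
from abc import ABC, abstractmethod

class PhysicsCode(ABC):
    @abstractmethod
    def initialize(self, case: str) -> None: ...
    @abstractmethod
    def advance(self, dt: float) -> None: ...
    @abstractmethod
    def get_field(self, name: str) -> list[float]: ...
    @abstractmethod
    def set_field(self, name: str, values: list[float]) -> None: ...

class ToyThermalHydraulics(PhysicsCode):
    """Stand-in for a wrapped thermal-hydraulics code."""
    def initialize(self, case): self.coolant_temp = [550.0] * 4
    def advance(self, dt):
        power = getattr(self, "pin_power", [1.0] * 4)
        self.coolant_temp = [t + 0.01 * q * dt for t, q in zip(self.coolant_temp, power)]
    def get_field(self, name): return self.coolant_temp
    def set_field(self, name, values): self.pin_power = values

class ToyNeutronics(PhysicsCode):
    """Stand-in for a wrapped neutron-kinetics code."""
    def initialize(self, case): self.pin_power = [1.0] * 4
    def advance(self, dt):
        temps = getattr(self, "fuel_temp", [550.0] * 4)
        # crude Doppler-like feedback: power drops as temperature rises
        self.pin_power = [p * (1.0 - 1e-4 * (t - 550.0)) for p, t in zip(self.pin_power, temps)]
    def get_field(self, name): return self.pin_power
    def set_field(self, name, values): self.fuel_temp = values

if __name__ == "__main__":
    th, nk = ToyThermalHydraulics(), ToyNeutronics()
    for code in (th, nk):
        code.initialize("demo")
    for _ in range(10):                       # explicit operator-split coupling loop
        nk.set_field("fuel_temperature", th.get_field("coolant_temperature"))
        nk.advance(0.1)
        th.set_field("pin_power", nk.get_field("pin_power"))
        th.advance(0.1)
    print("coolant temperatures:", [round(t, 2) for t in th.get_field("coolant_temperature")])
```

The point of the abstraction is the one made in the abstract: the driver never sees the internals of any single code, so a new version of one code, or an entirely different code, can be substituted behind the same small interface.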
Validation of the NCC Code for Staged Transverse Injection and Computations for a RBCC Combustor
NASA Technical Reports Server (NTRS)
Ajmani, Kumud; Liu, Nan-Suey
2005-01-01
The NCC code was validated for a case involving staged transverse injection into Mach 2 flow behind a rearward-facing step, through comparisons with experimental data and with solutions from the FPVortex code. The NCC code was then used to perform computations to study fuel-air mixing for the combustor of a candidate rocket based combined cycle engine geometry. Comparisons with a one-dimensional analysis and a three-dimensional code (VULCAN) were performed to assess the qualitative and quantitative performance of the NCC solver.
Modeling Tools for Propulsion Analysis and Computational Fluid Dynamics on the Internet
NASA Technical Reports Server (NTRS)
Muss, J. A.; Johnson, C. W.; Gotchy, M. B.
2000-01-01
The existing RocketWeb(TradeMark) Internet Analysis System (http://www.johnsonrockets.com/rocketweb) provides an integrated set of advanced analysis tools that can be securely accessed over the Internet. Since these tools consist of both batch and interactive analysis codes, the system includes convenient methods for creating input files and evaluating the resulting data. The RocketWeb(TradeMark) system also contains many features that permit data sharing which, when further developed, will facilitate real-time, geographically diverse, collaborative engineering within a designated work group. Adding work group management functionality while simultaneously extending and integrating the system's set of design and analysis tools will create a system providing rigorous, controlled design development, reducing design cycle time and cost.
High Speed Civil Transport Aircraft Simulation: Reference-H Cycle 1, MATLAB Implementation
NASA Technical Reports Server (NTRS)
Sotack, Robert A.; Chowdhry, Rajiv S.; Buttrill, Carey S.
1999-01-01
The mathematical model and associated code to simulate a high speed civil transport aircraft - the Boeing Reference H configuration - are described. The simulation was constructed in support of advanced control law research. In addition to providing time histories of the dynamic response, the code includes the capabilities for calculating trim solutions and for generating linear models. The simulation relies on the nonlinear, six-degree-of-freedom equations which govern the motion of a rigid aircraft in atmospheric flight. The 1962 Standard Atmosphere Tables are used along with a turbulence model to simulate the Earth atmosphere. The aircraft model has three parts - an aerodynamic model, an engine model, and a mass model. These models use the data from the Boeing Reference H cycle 1 simulation data base. Models for the actuator dynamics, landing gear, and flight control system are not included in this aircraft model. Dynamic responses generated by the nonlinear simulation are presented and compared with results generated from alternate simulations at Boeing Commercial Aircraft Company and NASA Langley Research Center. Also, dynamic responses generated using linear models are presented and compared with dynamic responses generated using the nonlinear simulation.
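The trim-and-linearize capability mentioned above can be illustrated with a small finite-difference sketch: given a nonlinear state-derivative function evaluated about a trim point, central differences yield the A and B matrices of a linear model. The toy longitudinal dynamics below are placeholders and bear no relation to the Reference-H aerodynamic, engine, or mass models.

```python
# Sketch of extracting a linear model (x_dot ~= A*dx + B*du) from a nonlinear
# simulation by central-difference Jacobians about a trim point.  The toy
# dynamics are placeholders, not the Reference-H model.
import numpy as np

def f(x, u):
    """Toy nonlinear state derivative: x = [speed, pitch rate], u = [elevator]."""
    v, q = x
    return np.array([-0.02 * v + 0.1 * q - 9.81 * 0.01 * u[0],
                     -0.5 * q - 0.8 * np.sin(0.001 * v) + 2.0 * u[0]])

def linearize(f, x0, u0, h=1e-6):
    """Numeric Jacobians of f with respect to state and control."""
    n, m = len(x0), len(u0)
    A = np.zeros((n, n)); B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = h
        A[:, j] = (f(x0 + dx, u0) - f(x0 - dx, u0)) / (2 * h)
    for j in range(m):
        du = np.zeros(m); du[j] = h
        B[:, j] = (f(x0, u0 + du) - f(x0, u0 - du)) / (2 * h)
    return A, B

if __name__ == "__main__":
    x_trim = np.array([250.0, 0.0]); u_trim = np.array([0.0])
    A, B = linearize(f, x_trim, u_trim)
    print("A =\n", A, "\nB =\n", B)
```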
Linearized Aeroelastic Solver Applied to the Flutter Prediction of Real Configurations
NASA Technical Reports Server (NTRS)
Reddy, Tondapu S.; Bakhle, Milind A.
2004-01-01
A fast-running unsteady aerodynamics code, LINFLUX, was previously developed for predicting turbomachinery flutter. This linearized code, based on a frequency domain method, models the effects of steady blade loading through a nonlinear steady flow field. The LINFLUX code, which is 6 to 7 times faster than the corresponding nonlinear time domain code, is suitable for use in the initial design phase. Earlier, this code was verified through application to a research fan, and it was shown that the predictions of work per cycle and flutter compared well with those from a nonlinear time-marching aeroelastic code, TURBO-AE. Now, the LINFLUX code has been applied to real configurations: fans developed under the Energy Efficient Engine (E-cubed) Program and the Quiet Aircraft Technology (QAT) project. The LINFLUX code starts with a steady nonlinear aerodynamic flow field and solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the turbomachinery blades. First, a steady aerodynamic solution is computed for given operating conditions using the nonlinear unsteady aerodynamic code TURBO-AE. A blade vibration analysis is done to determine the frequencies and mode shapes of the vibrating blades, and an interface code is used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor is used to interpolate the mode shapes from the structural dynamics mesh onto the computational fluid dynamics mesh. Then, LINFLUX is used to calculate the unsteady aerodynamic pressure distribution for a given vibration mode, frequency, and interblade phase angle. Finally, a post-processor uses the unsteady pressures to calculate the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. Results of flutter calculations from the LINFLUX code are presented for (1) the E-cubed fan developed under the E-cubed program and (2) the Quiet High Speed Fan (QHSF) developed under the Quiet Aircraft Technology project. The results are compared with those obtained from the TURBO-AE code. A graph of the work done per vibration cycle for the first vibration mode of the E-cubed fan is shown. It can be seen that the LINFLUX results show a very good comparison with TURBO-AE results over the entire range of interblade phase angle. The work done per vibration cycle for the first vibration mode of the QHSF fan is shown. Once again, the LINFLUX results compare very well with the results from the TURBO-AE code.
A study of power cycles using supercritical carbon dioxide as the working fluid
NASA Astrophysics Data System (ADS)
Schroder, Andrew Urban
A real fluid heat engine power cycle analysis code has been developed for analyzing the zero dimensional performance of a general recuperated, recompression, precompression supercritical carbon dioxide power cycle with reheat and a unique shaft configuration. With the proposed shaft configuration, several smaller compressor-turbine pairs could be placed inside of a pressure vessel in order to avoid high speed, high pressure rotating seals. The small compressor-turbine pairs would share some resemblance with a turbocharger assembly. Variation in fluid properties within the heat exchangers is taken into account by discretizing zero dimensional heat exchangers. The cycle analysis code allows for multiple reheat stages, as well as an option for the main compressor to be powered by a dedicated turbine or an electrical motor. Variation in performance with respect to design heat exchanger pressure drops and minimum temperature differences, precompressor pressure ratio, main compressor pressure ratio, recompression mass fraction, main compressor inlet pressure, and low temperature recuperator mass fraction have been explored throughout a range of each design parameter. Turbomachinery isentropic efficiencies are implemented and the sensitivity of the cycle performance and the optimal design parameters is explored. Sensitivity of the cycle performance and optimal design parameters is studied with respect to the minimum heat rejection temperature and the maximum heat addition temperature. A hybrid stochastic and gradient based optimization technique has been used to optimize critical design parameters for maximum engine thermal efficiency. A parallel design exploration mode was also developed in order to rapidly conduct the parameter sweeps in this design space exploration. A cycle thermal efficiency of 49.6% is predicted with a 320K [47°C] minimum temperature and 923K [650°C] maximum temperature. The real fluid heat engine power cycle analysis code was expanded to study a theoretical recuperated Lenoir cycle using supercritical carbon dioxide as the working fluid. The real fluid cycle analysis code was also enhanced to study a combined cycle engine cascade. Two engine cascade configurations were studied. The first consisted of a traditional open loop gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 65.0% using a peak temperature of 1,890K [1,617°C]. The second configuration consisted of a hybrid natural gas powered solid oxide fuel cell and gas turbine, coupled with a series of recuperated, recompression, precompression supercritical carbon dioxide power cycles, with a predicted combined cycle thermal efficiency of 73.1%. Both configurations had a minimum temperature of 306K [33°C]. The hybrid stochastic and gradient based optimization technique was used to optimize all engine design parameters for each engine in the cascade such that the entire engine cascade achieved the maximum thermal efficiency. The parallel design exploration mode was also utilized in order to understand the impact of different design parameters on the overall engine cascade thermal efficiency. Two dimensional conjugate heat transfer (CHT) numerical simulations of a straight, equal height channel heat exchanger using supercritical carbon dioxide were conducted at various Reynolds numbers and channel lengths.
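As a greatly simplified companion to the zero-dimensional analysis described above, the sketch below evaluates a recuperated (single-recuperator, no recompression, precompression, or reheat) sCO2 Brayton cycle with real-fluid properties, assuming the CoolProp property library is available. The pressures, temperatures, and component efficiencies are illustrative choices, not the optimized design parameters reported in the work.

```python
# Much-simplified zero-dimensional sketch of a recuperated sCO2 Brayton cycle
# (single recuperator; no recompression, precompression, or reheat), using
# real-fluid properties from CoolProp (assumed to be installed).  State points
# and component efficiencies are illustrative, not the optimized values above.
from CoolProp.CoolProp import PropsSI

FLUID = "CO2"
P_LOW, P_HIGH = 7.7e6, 25.0e6        # Pa (assumed)
T1, T3 = 320.0, 923.0                # K: compressor inlet, turbine inlet
ETA_C, ETA_T, EPS_RECUP = 0.85, 0.90, 0.95

def h(T, P): return PropsSI("H", "T", T, "P", P, FLUID)
def s(T, P): return PropsSI("S", "T", T, "P", P, FLUID)
def h_ps(P, S): return PropsSI("H", "P", P, "S", S, FLUID)
def T_ph(P, H): return PropsSI("T", "P", P, "H", H, FLUID)

# Compressor 1 -> 2
h1 = h(T1, P_LOW)
h2 = h1 + (h_ps(P_HIGH, s(T1, P_LOW)) - h1) / ETA_C
# Turbine 3 -> 4
h3 = h(T3, P_HIGH)
h4 = h3 - ETA_T * (h3 - h_ps(P_LOW, s(T3, P_HIGH)))
# Recuperator: cold side 2 -> 2r, hot side 4 -> 4r (effectiveness on enthalpy basis)
T2, T4 = T_ph(P_HIGH, h2), T_ph(P_LOW, h4)
q_max = min(h(T4, P_HIGH) - h2, h4 - h(T2, P_LOW))
q_recup = EPS_RECUP * q_max
h2r = h2 + q_recup

w_net = (h3 - h4) - (h2 - h1)
q_in = h3 - h2r
print(f"thermal efficiency ~ {w_net / q_in:.3f}")
```

The full analysis in the work above additionally discretizes the heat exchangers, splits the flow for recompression, and optimizes the design parameters, which is why it reaches substantially higher efficiencies than this bare sketch.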
Studies of auroral X-ray imaging from high altitude spacecraft
NASA Technical Reports Server (NTRS)
Mckenzie, D. L.; Mizera, P. F.; Rice, C. J.
1980-01-01
Results of a study of techniques for imaging the aurora from a high altitude satellite at X-ray wavelengths are summarized. The X-ray observations allow the straightforward derivation of the primary auroral X-ray spectrum and can be made at all local times, day and night. Five candidate imaging systems are identified: X-ray telescope, multiple pinhole camera, coded aperture, rastered collimator, and imaging collimator. Examples of each are specified, subject to common weight and size limits which allow them to be intercompared. The imaging ability of each system is tested using a wide variety of sample spectra which are based on previous satellite observations. The study shows that the pinhole camera and coded aperture are both good auroral imaging systems. The two collimated detectors are significantly less sensitive. The X-ray telescope provides better image quality than the other systems in almost all cases, but a limitation to energies below about 4 keV prevents this system from providing the spectral data essential to deriving electron spectra, energy input to the atmosphere, and atmospheric densities and conductivities. The orbit selection requires a tradeoff between spatial resolution and duty cycle.
Shock Position Control for Mode Transition in a Turbine Based Combined Cycle Engine Inlet Model
NASA Technical Reports Server (NTRS)
Csank, Jeffrey T.; Stueber, Thomas J.
2013-01-01
A dual flow-path inlet for a turbine based combined cycle (TBCC) propulsion system is to be tested in order to evaluate methodologies for performing a controlled inlet mode transition. Prior to experimental testing, simulation models are used to test, debug, and validate potential control algorithms which are designed to maintain shock position during inlet disturbances. One simulation package being used for testing is the High Mach Transient Engine Cycle Code simulation, known as HiTECC. This paper discusses the development of a mode transition schedule for the HiTECC simulation that is analogous to the development of inlet performance maps. Inlet performance maps, derived through experimental means, describe the performance and operability of the inlet as the splitter closes, switching power production from the turbine engine to the Dual Mode Scram Jet. With knowledge of the operability and performance tradeoffs, a closed loop system can be designed to optimize the performance of the inlet. This paper demonstrates the design of the closed loop control system and the benefit of implementing a Proportional-Integral controller, an H-Infinity based controller, and a disturbance observer based controller, all of which avoid inlet unstart during a mode transition with a simulated disturbance that would lead to inlet unstart without closed loop control.
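To indicate what the simplest of the three controllers looks like, the sketch below regulates a toy first-order "shock position" plant with a discrete Proportional-Integral law subject to actuator saturation. The plant model, gains, and disturbance are placeholders for illustration and are unrelated to the HiTECC inlet model or the controllers designed in the paper.

```python
# Minimal discrete PI shock-position regulator on a toy first-order plant.
# Plant, gains, and disturbance are placeholders, not the HiTECC inlet model.
DT = 0.001                 # s, control update period
KP, KI = 2.0, 40.0         # proportional and integral gains (assumed)

def simulate(t_end=1.0, setpoint=0.5):
    x = 0.4                # toy shock position (normalized duct station)
    integ = 0.0
    tau, gain = 0.05, 1.0  # toy first-order plant: tau*x_dot = -x + gain*u + d
    for k in range(int(t_end / DT)):
        t = k * DT
        d = 0.2 if t > 0.5 else 0.0          # step disturbance at t = 0.5 s
        err = setpoint - x
        integ += err * DT
        u = KP * err + KI * integ
        u = max(-1.0, min(1.0, u))           # actuator saturation (no anti-windup here)
        x += DT / tau * (-x + gain * u + d)  # explicit Euler step of the plant
    return x

if __name__ == "__main__":
    print("final shock position:", round(simulate(), 3))
```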
2014-01-01
Background The genome is pervasively transcribed but most transcripts do not code for proteins, constituting non-protein-coding RNAs. Despite increasing numbers of functional reports of individual long non-coding RNAs (lncRNAs), assessing the extent of functionality among the non-coding transcriptional output of mammalian cells remains intricate. In the protein-coding world, transcripts differentially expressed in the context of processes essential for the survival of multicellular organisms have been instrumental in the discovery of functionally relevant proteins and their deregulation is frequently associated with diseases. We therefore systematically identified lncRNAs expressed differentially in response to oncologically relevant processes and cell-cycle, p53 and STAT3 pathways, using tiling arrays. Results We found that up to 80% of the pathway-triggered transcriptional responses are non-coding. Among these we identified very large macroRNAs with pathway-specific expression patterns and demonstrated that these are likely continuous transcripts. MacroRNAs contain elements conserved in mammals and sauropsids, which in part exhibit conserved RNA secondary structure. Comparing evolutionary rates of a macroRNA to adjacent protein-coding genes suggests a local action of the transcript. Finally, in different grades of astrocytoma, a tumor disease unrelated to the initially used cell lines, macroRNAs are differentially expressed. Conclusions It has been shown previously that the majority of expressed non-ribosomal transcripts are non-coding. We now conclude that differential expression triggered by signaling pathways gives rise to a similar abundance of non-coding content. It is thus unlikely that the prevalence of non-coding transcripts in the cell is a trivial consequence of leaky or random transcription events. PMID:24594072
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garzenne, Claude; Massara, Simone; Tetart, Philippe
2006-07-01
Accelerator Driven Systems (ADSs) offer the advantage, thanks to the core sub-criticality, of burning highly radioactive elements such as americium and curium in a dedicated stratum, and thus of avoiding polluting the main part of the nuclear fleet, which is optimized for electricity production, with these elements. This paper first presents the ADS model implemented in the fuel cycle simulation code TIRELIRE-STRATEGIE, which we developed at the EDF R and D Division for nuclear power scenario studies. We then show and comment on the results of TIRELIRE-STRATEGIE calculations of a transition scenario between the current French nuclear fleet and a fast reactor fleet entirely deployed towards the end of the 21st century, consistent with the EDF prospective view, with three options for minor actinide management: 1) vitrified with fission products to be sent to the final disposal; 2) extracted together with plutonium from the spent fuel to be transmuted in Generation IV fast reactors; 3) eventually extracted separately from plutonium to be incinerated in an ADS double stratum. The comparison of nuclear fuel cycle material fluxes and inventories between these options shows that ADSs are not more efficient than critical fast reactors for reducing the high level waste radio-toxicity; that minor actinide inventories and fluxes in the fuel cycle are more than twice as high in the case of a double ADS stratum as in the case of minor actinide transmutation in Generation IV FBRs; and that about fourteen 400 MWth ADSs are necessary to incinerate the minor actinides issued from a 60 GWe Generation IV fast reactor fleet, corresponding to the current French nuclear fleet installed power. (authors)
Self-organizing maps based on limit cycle attractors.
Huang, Di-Wei; Gentili, Rodolphe J; Reggia, James A
2015-03-01
Recent efforts to develop large-scale brain and neurocognitive architectures have paid relatively little attention to the use of self-organizing maps (SOMs). Part of the reason for this is that most conventional SOMs use a static encoding representation: each input pattern or sequence is effectively represented as a fixed point activation pattern in the map layer, something that is inconsistent with the rhythmic oscillatory activity observed in the brain. Here we develop and study an alternative encoding scheme that instead uses sparsely-coded limit cycles to represent external input patterns/sequences. We establish conditions under which learned limit cycle representations arise reliably and dominate the dynamics in a SOM. These limit cycles tend to be relatively unique for different inputs, robust to perturbations, and fairly insensitive to timing. In spite of the continually changing activity in the map layer when a limit cycle representation is used, map formation continues to occur reliably. In a two-SOM architecture where each SOM represents a different sensory modality, we also show that after learning, limit cycles in one SOM can correctly evoke corresponding limit cycles in the other, and thus there is the potential for multi-SOM systems using limit cycles to work effectively as hetero-associative memories. While the results presented here are only first steps, they establish the viability of SOM models based on limit cycle activity patterns, and suggest that such models merit further study. Copyright © 2014 Elsevier Ltd. All rights reserved.
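For contrast with the limit-cycle encoding studied in the preceding abstract, the sketch below implements the conventional Kohonen SOM update, in which each input settles to a static best-matching-unit representation. It is a standard textbook formulation, not the authors' limit-cycle model, and the grid size and learning schedule are arbitrary.

```python
# Minimal conventional Kohonen SOM (static fixed-point encoding), shown only to
# contrast with the limit-cycle encoding scheme studied above; this is not the
# authors' limit-cycle model.
import numpy as np

rng = np.random.default_rng(0)
GRID = 10                                   # 10 x 10 map
weights = rng.random((GRID, GRID, 2))       # 2-D inputs

def train(data, epochs=20, eta0=0.5, sigma0=3.0):
    for e in range(epochs):
        eta = eta0 * (1 - e / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 0.5     # shrinking neighborhood
        for x in data:
            # best-matching unit
            d = np.linalg.norm(weights - x, axis=2)
            bi, bj = np.unravel_index(np.argmin(d), d.shape)
            # neighborhood-weighted update toward the input
            ii, jj = np.meshgrid(np.arange(GRID), np.arange(GRID), indexing="ij")
            h = np.exp(-((ii - bi) ** 2 + (jj - bj) ** 2) / (2 * sigma ** 2))
            weights[...] += eta * h[..., None] * (x - weights)

if __name__ == "__main__":
    data = rng.random((500, 2))
    train(data)
    print("weight range after training:", weights.min().round(2), weights.max().round(2))
```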
Interfacing Computer Aided Parallelization and Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Jin, Haoqiang; Labarta, Jesus; Gimenez, Judit; Biegel, Bryan A. (Technical Monitor)
2003-01-01
When porting sequential applications to parallel computer architectures, the program developer will typically go through several cycles of source code optimization and performance analysis. We have started a project to develop an environment where the user can jointly navigate through program structure and performance data information in order to make efficient optimization decisions. In a prototype implementation we have interfaced the CAPO computer aided parallelization tool with the Paraver performance analysis tool. We describe both tools and their interface and give an example for how the interface helps within the program development cycle of a benchmark code.
[The new magnetic therapy TAMMEF in the treatment of simple shoulder pain].
Battisti, E; Bianciardi, L; Albanese, A; Piazza, E; Rigato, M; Galassi, G; Giordano, N
2007-01-01
Numerous studies have demonstrated the utility of extremely low frequency (ELF) electromagnetic fields in the treatment of pain. Moreover, the effects of these fields seem to depend on their respective codes (frequency, intensity, waveform). In our study we aimed to assess the effects of the TAMMEF (Therapeutic Application of a Musically Modulated Electromagnetic Field) system, whose field is piloted by a musical signal and whose parameters (frequency, intensity, waveform) are modified in time, varying randomly within their respective ranges, so that all possible codes can occur during a single application. Sixty subjects affected by shoulder periarthritis were enrolled in the study and randomly divided into three groups of 20 patients each: group A exposed to TAMMEF, group B exposed to ELF, and group C exposed to a simulated field. All subjects underwent a cycle of 15 daily sessions of 30 minutes each and a clinical examination upon enrollment, after 7 days of therapy, at the end of the cycle, and at a follow-up 30 days later. All the patients of groups A and B completed the therapy without the appearance of side effects: they presented a significant improvement in subjective pain and functional limitation, which remained stable at the follow-up examination. In group C, there was no improvement of the pain symptoms or articular functionality. This study suggests that the TAMMEF system is efficacious in the control of pain symptoms and in the reduction of functional limitation in patients with shoulder periarthritis. Moreover, the effects of the TAMMEF system cover those produced by the ELF field.
Hinze, Jacob F.; Nellis, Gregory F.; Anderson, Mark H.
2017-09-21
Supercritical Carbon Dioxide (sCO2) power cycles have the potential to deliver high efficiency at low cost. However, in order for an sCO2 cycle to reach high efficiency, highly effective recuperators are needed. These recuperative heat exchangers must transfer heat at a rate that is substantially larger than the heat transfer to the cycle itself and can therefore represent a significant portion of the power block costs. Regenerators are proposed as a cost saving alternative to high cost printed circuit recuperators for this application. A regenerator is an indirect heat exchanger which periodically stores and releases heat to the working fluid. The simple design of a regenerator can be made more inexpensively compared to current options. The objective of this paper is a detailed evaluation of regenerators as a competing technology for recuperators within an sCO2 Brayton cycle. The level of the analysis presented here is sufficient to identify issues with the regenerator system in order to direct future work and also to clarify the potential advantage of pursuing this technology. A reduced order model of a regenerator is implemented into a cycle model of an sCO2 Brayton cycle. An economic analysis investigates the cost savings that is possible by switching from recuperative heat exchangers to switched-bed regenerators. The cost of the regenerators was estimated using the amount of material required if the pressure vessel is sized using ASME Boiler Pressure Vessel Code (BPVC) requirements. The cost of the associated valves is found to be substantial for the regenerator system and is estimated in collaboration with an industrial valve supplier. The result of this analysis suggests that a 21.2% reduction in the contribution to the Levelized Cost of Electricity (LCoE) from the power block can be realized by switching to a regenerator-based system.
The use of Tcl and Tk to improve design and code reutilization
NASA Technical Reports Server (NTRS)
Rodriguez, Lisbet; Reinholtz, Kirk
1995-01-01
Tcl and Tk facilitate design and code reuse in the ZIPSIM series of high-performance, high-fidelity spacecraft simulators. Tcl and Tk provide a framework for the construction of the Graphical User Interfaces for the simulators. The interfaces are architected such that a large proportion of the design and code is used for several applications, which has reduced design time and life-cycle costs.
Contracting to improve your revenue cycle performance.
Welter, Terri L; Semko, George A; Miller, Tony; Lauer, Roberta
2007-09-01
The following key drivers of commercial contract variability can have a material effect on your hospital's revenue cycle: Claim form variance. Benefit design. Contract complexity. Coding variance. Medical necessity. Precertification/authorization. Claim adjudication/appeal requirements. Additional documentation requirements. Timeliness of payment. Third-party payer activity.
Wortman, Jeremy R; Goud, Asha; Raja, Ali S; Marchello, Dana; Sodickson, Aaron
2014-12-01
The purpose of this study was to measure the effects of use of a structured physician order entry system for trauma CT on the communication of clinical information and on coding practices and reimbursement efficiency. This study was conducted between April 1, 2011, and January 14, 2013, at a level I trauma center with 59,000 annual emergency department visits. On March 29, 2012, a structured order entry system was implemented for head through pelvis trauma CT, so-called pan-scan CT. This study compared the following factors before and after implementation: communication of clinical signs and symptoms and mechanism of injury, primary International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM) code category, success of reimbursement, and time required for successful reimbursement for the examination. Chi-square statistics were used to compare all categoric variables before and after the intervention, and the Wilcoxon rank sum test was used to compare billing cycle times. A total of 457 patients underwent pan-scan CT in 2734 distinct examinations. After the intervention, there was a 62% absolute increase in requisitions containing clinical signs or symptoms (from 0.4% to 63%, p<0.0001) and a 99% absolute increase in requisitions providing mechanism of injury (from 0.4% to 99%, p<0.0001). There was a 19% absolute increase in primary ICD-9-CM codes representing clinical signs or symptoms (from 2.9% to 21.8%, p<0.0001), and a 7% absolute increase in reimbursement success for examinations submitted to insurance carriers (from 83.0% to 89.7%, p<0.0001). For reimbursed studies, there was a 14.7-day reduction in mean billing cycle time (from 68.4 days to 53.7 days, p=0.008). Implementation of structured physician order entry for trauma CT was associated with significant improvement in the communication of clinical history to radiologists. The improvement was also associated with changes in coding practices, greater billing efficiency, and an increase in reimbursement success.
Arsenic Detoxification by Geobacter Species.
Dang, Yan; Walker, David J F; Vautour, Kaitlin E; Dixon, Steven; Holmes, Dawn E
2017-02-15
Insight into the mechanisms for arsenic detoxification by Geobacter species is expected to improve the understanding of global cycling of arsenic in iron-rich subsurface sedimentary environments. Analysis of 14 different Geobacter genomes showed that all of these species have genes coding for an arsenic detoxification system (ars operon), and several have genes required for arsenic respiration (arr operon) and methylation (arsM). Genes encoding four arsenic repressor-like proteins were detected in the genome of G. sulfurreducens; however, only one (ArsR1) regulated transcription of the ars operon. Elimination of arsR1 from the G. sulfurreducens chromosome resulted in enhanced transcription of genes coding for the arsenic efflux pump (Acr3) and arsenate reductase (ArsC). When the gene coding for Acr3 was deleted, cells were not able to grow in the presence of either the oxidized or reduced form of arsenic, while arsC deletion mutants could grow in the presence of arsenite but not arsenate. These studies shed light on how Geobacter influences arsenic mobility in anoxic sediments and may help us develop methods to remediate arsenic contamination in the subsurface. This study examines arsenic transformation mechanisms utilized by Geobacter, a genus of iron-reducing bacteria that are predominant in many anoxic iron-rich subsurface environments. Geobacter species play a major role in microbially mediated arsenic release from metal hydroxides in the subsurface. This release raises arsenic concentrations in drinking water to levels that are high enough to cause major health problems. Therefore, information obtained from studies of Geobacter should shed light on arsenic cycling in iron-rich subsurface sedimentary environments, which may help reduce arsenic-associated illnesses. These studies should also help in the development of biosensors that can be used to detect arsenic contaminants in anoxic subsurface environments. We examined 14 different Geobacter genomes and found that all of these species possess genes coding for an arsenic detoxification system (ars operon), and some also have genes required for arsenic respiration (arr operon) and arsenic methylation (arsM). Copyright © 2017 American Society for Microbiology.
Arsenic Detoxification by Geobacter Species
Walker, David J. F.; Vautour, Kaitlin E.; Dixon, Steven
2016-01-01
ABSTRACT Insight into the mechanisms for arsenic detoxification by Geobacter species is expected to improve the understanding of global cycling of arsenic in iron-rich subsurface sedimentary environments. Analysis of 14 different Geobacter genomes showed that all of these species have genes coding for an arsenic detoxification system (ars operon), and several have genes required for arsenic respiration (arr operon) and methylation (arsM). Genes encoding four arsenic repressor-like proteins were detected in the genome of G. sulfurreducens; however, only one (ArsR1) regulated transcription of the ars operon. Elimination of arsR1 from the G. sulfurreducens chromosome resulted in enhanced transcription of genes coding for the arsenic efflux pump (Acr3) and arsenate reductase (ArsC). When the gene coding for Acr3 was deleted, cells were not able to grow in the presence of either the oxidized or reduced form of arsenic, while arsC deletion mutants could grow in the presence of arsenite but not arsenate. These studies shed light on how Geobacter influences arsenic mobility in anoxic sediments and may help us develop methods to remediate arsenic contamination in the subsurface. IMPORTANCE This study examines arsenic transformation mechanisms utilized by Geobacter, a genus of iron-reducing bacteria that are predominant in many anoxic iron-rich subsurface environments. Geobacter species play a major role in microbially mediated arsenic release from metal hydroxides in the subsurface. This release raises arsenic concentrations in drinking water to levels that are high enough to cause major health problems. Therefore, information obtained from studies of Geobacter should shed light on arsenic cycling in iron-rich subsurface sedimentary environments, which may help reduce arsenic-associated illnesses. These studies should also help in the development of biosensors that can be used to detect arsenic contaminants in anoxic subsurface environments. We examined 14 different Geobacter genomes and found that all of these species possess genes coding for an arsenic detoxification system (ars operon), and some also have genes required for arsenic respiration (arr operon) and arsenic methylation (arsM). PMID:27940542
An automatic editing algorithm for GPS data
NASA Technical Reports Server (NTRS)
Blewitt, Geoffrey
1990-01-01
An algorithm has been developed to automatically edit Global Positioning System data such that outlier deletion, cycle slip identification, and correction are independent of clock instability, selective availability, receiver-satellite kinematics, and tropospheric conditions. This algorithm, called TurboEdit, operates on undifferenced, dual-frequency carrier phase data and requires the use of P-code pseudorange data and a smoothly varying ionospheric electron content. TurboEdit was tested on the large data set from the CASA Uno experiment, which contained over 2500 cycle slips. Analyst intervention was required on 1 percent of the station-satellite passes, almost all of these problems being due to difficulties in extrapolating variations in the ionospheric delay. The algorithm is presently being adapted for real-time data editing in the Rogue receiver for continuous monitoring applications.
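To make the screening step concrete, the sketch below is a minimal Python illustration of TurboEdit-style slip flagging, not the published algorithm: it assumes L1/L2 carrier phase and P1/P2 pseudorange arrays already expressed in meters and flags epochs where the Melbourne-Wubbena (wide-lane phase minus narrow-lane code) combination jumps relative to its running statistics; the threshold k and the restart logic are illustrative choices.

```python
import numpy as np

# GPS L1/L2 frequencies (Hz); the detection threshold below is an
# illustrative assumption, not the setting used in TurboEdit itself.
F1, F2 = 1575.42e6, 1227.60e6

def melbourne_wubbena(l1, l2, p1, p2):
    """Wide-lane phase minus narrow-lane pseudorange (all inputs in meters).
    Nominally constant over a pass except at cycle slips or gross outliers."""
    return (F1 * l1 - F2 * l2) / (F1 - F2) - (F1 * p1 + F2 * p2) / (F1 + F2)

def flag_slips(l1, l2, p1, p2, k=4.0):
    """Flag epochs where the MW combination departs from its running mean
    by more than k standard deviations (crude running statistics)."""
    mw = melbourne_wubbena(l1, l2, p1, p2)
    flags = np.zeros(mw.size, dtype=bool)
    mean, var, n = mw[0], 1.0, 1          # coarse initialisation (m, m^2)
    for i in range(1, mw.size):
        if abs(mw[i] - mean) > k * np.sqrt(var):
            flags[i] = True               # candidate cycle slip / outlier
            mean, n = mw[i], 1            # restart the statistics after the jump
        else:
            n += 1
            delta = mw[i] - mean
            mean += delta / n
            var += (delta * delta - var) / n
    return flags
```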
Single stock dynamics on high-frequency data: from a compressed coding perspective.
Fushing, Hsieh; Chen, Shu-Chun; Hwang, Chii-Ruey
2014-01-01
High-frequency return, trading volume and transaction number are digitally coded via a nonparametric computing algorithm, called hierarchical factor segmentation (HFS), and then are coupled together to reveal the dynamics of a single stock without global state-space structural assumptions. The base-8 digital coding sequence, which is capable of revealing contrasting aggregation against sparsity of extreme events, is further compressed into a shortened sequence of state transitions. This compressed digital code sequence vividly demonstrates that the aggregation of large absolute returns is the primary driving force stimulating both the aggregation of large trading volumes and transaction numbers. The state of system-wise synchrony is manifested with very frequent recurrence in the stock dynamics, and this data-driven dynamic mechanism is seen to vary correspondingly as the global market transits in and out of contraction-expansion cycles. These results not only elaborate the stock dynamics of interest to a fuller extent, but also contradict some classical theories in finance. Overall this version of stock dynamics is potentially more coherent and realistic, especially when the current financial market is increasingly powered by high-frequency trading via computer algorithms rather than by individual investors.
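As a rough illustration of how three event series can be packed into a base-8 alphabet (a stand-in for the hierarchical factor segmentation step, which is considerably more involved), the sketch below reduces each series to a 0/1 extreme-event indicator with a simple upper-quantile rule and combines the three bits into one digit between 0 and 7; the quantile level q is an arbitrary assumption.

```python
import numpy as np

def base8_code(returns, volume, n_trades, q=0.9):
    """Couple three high-frequency series into one base-8 symbol stream:
    bit 4 = extreme absolute return, bit 2 = extreme volume,
    bit 1 = extreme transaction count."""
    r = (np.abs(returns) > np.quantile(np.abs(returns), q)).astype(int)
    v = (volume > np.quantile(volume, q)).astype(int)
    t = (n_trades > np.quantile(n_trades, q)).astype(int)
    return 4 * r + 2 * v + t
```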
Prediction of thermal cycling induced cracking in polymer matrix composites
NASA Technical Reports Server (NTRS)
Mcmanus, Hugh L.
1994-01-01
The work done in the period August 1993 through February 1994 on the 'Prediction of Thermal Cycling Induced Cracking In Polymer Matrix Composites' program is summarized. Most of the work performed in this period, as well as the previous one, is described in detail in the attached Master's thesis, 'Analysis of Thermally Induced Damage in Composite Space Structures,' by Cecelia Hyun Seon Park. Work on a small thermal cycling and aging chamber was concluded in this period. The chamber was extensively tested and calibrated. Temperatures can be controlled very precisely, and are very uniform in the test chamber. Based on results obtained in the previous period of this program, further experimental progressive cracking studies were carried out. The laminates tested were selected to clarify the differences between the behaviors of thick and thin ply layers, and to explore other variables such as stacking sequence and scaling effects. Most specimens tested were made available from existing stock at Langley Research Center. One laminate type had to be constructed from available prepreg material at Langley Research Center. Specimens from this laminate were cut and prepared at MIT. Thermal conditioning was carried out at Langley Research Center, and at the newly constructed MIT facility. Specimens were examined by edge inspection and by crack configuration studies, in which specimens were sanded down in order to examine the distribution of cracks within the specimens. A method for predicting matrix cracking due to decreasing temperatures and/or thermal cycling in all plies of an arbitrary laminate was implemented as a computer code. The code also predicts changes in properties due to the cracking. Extensive correlations between test results and code predictions were carried out. The computer code was documented and is ready for distribution.
Simulation studies using multibody dynamics code DART
NASA Technical Reports Server (NTRS)
Keat, James E.
1989-01-01
DART is a multibody dynamics code developed by Photon Research Associates for the Air Force Astronautics Laboratory (AFAL). The code is intended primarily to simulate the dynamics of large space structures, particularly during the deployment phase of their missions. DART integrates nonlinear equations of motion numerically. The number of bodies in the system being simulated is arbitrary. The bodies' interconnection joints can have an arbitrary number of degrees of freedom between 0 and 6. Motions across the joints can be large. Provision is made for simulating on-board control systems. Conservation of energy and momentum, when applicable, are used to evaluate DART's performance. After a brief description of DART, studies made to test the program prior to its delivery to AFAL are described. The first is a large-angle reorientation of a flexible spacecraft consisting of a rigid central hub and four flexible booms. Reorientation was accomplished by a single-cycle sine-wave torque input. In the second study, an appendage mounted on a spacecraft was slewed through a large angle. Four closed-loop control systems provided control of this appendage and of the spacecraft's attitude. The third study simulated the deployment of the rim of a bicycle-wheel-configuration large space structure. This system contained 18 bodies. An interesting and unexpected feature of the dynamics was a pulsing phenomenon experienced by the stays whose playout was used to control the deployment. A short description of the current status of DART is given.
Engine Cycle Analysis for a Particle Bed Reactor Nuclear Rocket
1991-03-01
[Report text garbled in extraction; recoverable fragments include the report documentation page naming Lt Timothy J. Lawrence as the responsible individual, an appendix list covering cycle-code output for a bleed cycle with a 2000 MW particle bed reactor (PBR) and uncooled nozzle, a bleed cycle with a 2000 MW PBR and cooled nozzle, and an expander cycle with a 2000 MW PBR, and a truncated passage on operating at Mars with carbon dioxide, the primary component of the Martian atmosphere.]
Swept Impact Seismic Technique (SIST)
Park, C.B.; Miller, R.D.; Steeples, D.W.; Black, R.A.
1996-01-01
A coded seismic technique is developed that can result in a higher signal-to-noise ratio than a conventional single-pulse method does. The technique is cost-effective and time-efficient and therefore well suited for shallow-reflection surveys where high resolution and cost-effectiveness are critical. A low-power impact source transmits a few to several hundred high-frequency broad-band seismic pulses during several seconds of recording time according to a deterministic coding scheme. The coding scheme consists of a time-encoded impact sequence in which the rate of impact (cycles/s) changes linearly with time providing a broad range of impact rates. Impact times used during the decoding process are recorded on one channel of the seismograph. The coding concept combines the vibroseis swept-frequency and the Mini-Sosie random impact concepts. The swept-frequency concept greatly improves the suppression of correlation noise with much fewer impacts than normally used in the Mini-Sosie technique. The impact concept makes the technique simple and efficient in generating high-resolution seismic data especially in the presence of noise. The transfer function of the impact sequence simulates a low-cut filter with the cutoff frequency the same as the lowest impact rate. This property can be used to attenuate low-frequency ground-roll noise without using an analog low-cut filter or a spatial source (or receiver) array as is necessary with a conventional single-pulse method. Because of the discontinuous coding scheme, the decoding process is accomplished by a "shift-and-stacking" method that is much simpler and quicker than cross-correlation. The simplicity of the coding allows the mechanical design of the source to remain simple. Several different types of mechanical systems could be adapted to generate a linear impact sweep. In addition, the simplicity of the coding also allows the technique to be used with conventional acquisition systems, with only minor modifications.
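The two ingredients described above, the linear sweep of impact rate and the shift-and-stack decoding, can be sketched as follows; this is a simplified Python illustration with arbitrary sweep parameters, not the field implementation.

```python
import numpy as np

def impact_times(rate0, rate1, duration, dt=1e-4):
    """Impact times for a linear sweep of impact rate (impacts/s): integrate
    the instantaneous rate and emit an impact at each integer crossing of
    the cumulative count."""
    t = np.arange(0.0, duration, dt)
    rate = rate0 + (rate1 - rate0) * t / duration
    count = np.cumsum(rate) * dt
    return t[np.diff(np.floor(count), prepend=0.0) > 0]

def shift_and_stack(trace, times, fs, out_len):
    """Decode the coded record: shift the trace to each recorded impact time
    and stack, which plays the role of cross-correlation in vibroseis."""
    out = np.zeros(out_len)
    for t0 in times:
        i0 = int(round(t0 * fs))
        seg = trace[i0:i0 + out_len]
        out[:seg.size] += seg
    return out / max(len(times), 1)
```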
Energy Savings Analysis of the Proposed Revision of the Washington D.C. Non-Residential Energy Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Athalye, Rahul A.; Hart, Philip R.
This report presents the results of an assessment of savings for the proposed Washington D.C. energy code relative to ASHRAE Standard 90.1-2010. It includes annual and life cycle savings for site energy, source energy, energy cost, and carbon dioxide emissions that would result from adoption and enforcement of the proposed code for newly constructed buildings in Washington D.C. over a five year period.
NASA Technical Reports Server (NTRS)
Watts, Michael E.
1991-01-01
The Acoustic Laboratory Data Acquisition System (ALDAS) is an inexpensive, transportable means to digitize and analyze data. The system is based on the Macintosh 2 family of computers, with internal analog-to-digital boards providing four channels of simultaneous data acquisition at rates up to 50,000 samples/sec. The ALDAS software package, written for use with rotorcraft acoustics, performs automatic acoustic calibration of channels, data display, two types of cycle averaging, and spectral amplitude analysis. The program can use data obtained from internal analog-to-digital conversion, or discrete external data imported in ASCII format. All aspects of ALDAS can be improved as new hardware becomes available and new features are introduced into the code.
McLelland, Douglas; VanRullen, Rufin
2016-10-01
Several theories have been advanced to explain how cross-frequency coupling, the interaction of neuronal oscillations at different frequencies, could enable item multiplexing in neural systems. The communication-through-coherence theory proposes that phase-matching of gamma oscillations between areas enables selective processing of a single item at a time, and a later refinement of the theory includes a theta-frequency oscillation that provides a periodic reset of the system. Alternatively, the theta-gamma neural code theory proposes that a sequence of items is processed, one per gamma cycle, and that this sequence is repeated or updated across theta cycles. In short, both theories serve to segregate representations via the temporal domain, but differ on the number of objects concurrently represented. In this study, we set out to test whether each of these theories is actually physiologically plausible, by implementing them within a single model inspired by physiological data. Using a spiking network model of visual processing, we show that each of these theories is physiologically plausible and computationally useful. Both theories were implemented within a single network architecture, with two areas connected in a feedforward manner, and gamma oscillations generated by feedback inhibition within areas. Simply increasing the amplitude of global inhibition in the lower area, equivalent to an increase in the spatial scope of the gamma oscillation, yielded a switch from one mode to the other. Thus, these different processing modes may co-exist in the brain, enabling dynamic switching between exploratory and selective modes of attention.
JavaGenes: Evolving Graphs with Crossover
NASA Technical Reports Server (NTRS)
Globus, Al; Atsatt, Sean; Lawton, John; Wipke, Todd
2000-01-01
Genetic algorithms usually use string or tree representations. We have developed a novel crossover operator for a directed and undirected graph representation, and used this operator to evolve molecules and circuits. Unlike strings or trees, a single point in the representation cannot divide every possible graph into two parts, because graphs may contain cycles. Thus, the crossover operator is non-trivial. A steady-state, tournament selection genetic algorithm code (JavaGenes) was written to implement and test the graph crossover operator. All runs were executed by cycle-scavenging on networked workstations using the Condor batch processing system. The JavaGenes code has evolved pharmaceutical drug molecules and simple digital circuits. Results to date suggest that JavaGenes can evolve moderate-sized drug molecules and very small circuits in reasonable time. The algorithm has greater difficulty with somewhat larger circuits, suggesting that directed graphs (circuits) are more difficult to evolve than undirected graphs (molecules), although necessary differences in the crossover operator may also explain the results. In principle, JavaGenes should be able to evolve other graph-representable systems, such as transportation networks, metabolic pathways, and computer networks. However, large graphs evolve significantly slower than smaller graphs, presumably because the space-of-all-graphs explodes combinatorially with graph size. Since the representation strongly affects genetic algorithm performance, adding graphs to the evolutionary programmer's bag-of-tricks should be beneficial. Also, since graph evolution operates directly on the phenotype, the genotype-phenotype translation step, common in genetic algorithm work, is eliminated.
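The general idea of a graph crossover (not the specific JavaGenes operator) can be sketched as follows: cut a breadth-first fragment from each parent, merge the two fragments with relabelling to avoid node collisions, and add a few random bridge edges to restore connectivity, since no single cut point can split a graph that contains cycles. Parents are represented here as dicts of adjacency sets; the fragment sizes and bridge count are arbitrary assumptions.

```python
import random

def fragment(adj, size):
    """Breadth-first fragment of roughly `size` nodes from a random seed."""
    seed = random.choice(list(adj))
    frag, frontier = {seed}, [seed]
    while frontier and len(frag) < size:
        node = frontier.pop(0)
        for nb in adj[node]:
            if nb not in frag:
                frag.add(nb)
                frontier.append(nb)
    return frag

def graph_crossover(adj_a, adj_b, bridges=2):
    """Child = fragment of parent A + relabelled fragment of parent B,
    reconnected by a few random bridge edges."""
    fa = fragment(adj_a, len(adj_a) // 2)
    fb = fragment(adj_b, len(adj_b) // 2)
    child = {n: {m for m in adj_a[n] if m in fa} for n in fa}
    relabel = {n: ("b", n) for n in fb}
    child.update({relabel[n]: {relabel[m] for m in adj_b[n] if m in fb} for n in fb})
    for _ in range(bridges):
        u, v = random.choice(list(fa)), relabel[random.choice(list(fb))]
        child[u].add(v)
        child[v].add(u)
    return child
```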
Small engine technology programs
NASA Technical Reports Server (NTRS)
Niedzwiecki, Richard W.
1987-01-01
Small engine technology programs being conducted at the NASA Lewis Research Center are described. Small gas turbine research is aimed at general aviation, commutercraft, rotorcraft, and cruise missile applications. The Rotary Engine Program is aimed at supplying fuel-flexible, fuel-efficient technology to the general aviation industry, but also has applications to other missions. There is a strong element of synergism between the various programs in several respects. All of the programs are aimed towards highly efficient engine cycles, very efficient components, and the use of high temperature structural ceramics. This research tends to be generic in nature and has broad applications. The Heavy Duty Diesel Transport (HDTT), rotary technology, and the compound cycle programs are all examining approaches to minimum heat rejection, or adiabatic systems employing advanced materials. The Automotive Gas Turbine (AGT) program is also directed towards ceramics application to gas turbine hot section components. Turbomachinery advances in the gas turbines will benefit advanced turbochargers and turbocompounders for the intermittent combustion systems, and the fundamental understandings and analytical codes developed in the research and technology programs will be directly applicable to the system projects.
High-efficiency Gaussian key reconciliation in continuous variable quantum key distribution
NASA Astrophysics Data System (ADS)
Bai, ZengLiang; Wang, XuYang; Yang, ShenShen; Li, YongMin
2016-01-01
Efficient reconciliation is a crucial step in continuous variable quantum key distribution. The progressive-edge-growth (PEG) algorithm is an efficient method to construct relatively short block length low-density parity-check (LDPC) codes. The quasi-cyclic construction method can extend short block length codes and further eliminate the shortest cycle. In this paper, by combining the PEG algorithm and quasi-cyclic construction method, we design long block length irregular LDPC codes with high error-correcting capacity. Based on these LDPC codes, we achieve high-efficiency Gaussian key reconciliation with slice reconciliation based on multilevel coding/multistage decoding with an efficiency of 93.7%.
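The quasi-cyclic lifting step mentioned above can be sketched as follows: every entry of a small base (proto)matrix is replaced by a Z x Z block, either a zero block or a cyclically shifted identity, so that a short PEG-designed code is expanded into a long block-length parity-check matrix. The toy base matrix and lifting size Z below are illustrative, not the construction used in the paper.

```python
import numpy as np

def qc_expand(base, Z):
    """Expand a base matrix into a quasi-cyclic parity-check matrix:
    entry -1 -> Z x Z zero block; entry s >= 0 -> identity shifted by s."""
    m, n = base.shape
    H = np.zeros((m * Z, n * Z), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            s = base[i, j]
            if s >= 0:
                H[i*Z:(i+1)*Z, j*Z:(j+1)*Z] = np.roll(np.eye(Z, dtype=np.uint8), s, axis=1)
    return H

# toy protograph: rows are check-node groups, columns are variable-node groups
base = np.array([[0,  1, -1,  2],
                 [2, -1,  0,  1]])
H = qc_expand(base, Z=8)   # a 16 x 32 parity-check matrix
```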
Code System to Calculate Tornado-Induced Flow Material Transport.
DOE Office of Scientific and Technical Information (OSTI.GOV)
ANDRAE, R. W.
1999-11-18
Version: 00 TORAC models tornado-induced flows, pressures, and material transport within structures. Its use is directed toward nuclear fuel cycle facilities and their primary release pathway, the ventilation system. However, it is applicable to other structures and can model other airflow pathways within a facility. In a nuclear facility, this network system could include process cells, canyons, laboratory offices, corridors, and offgas systems. TORAC predicts flow through a network system that also includes ventilation system components such as filters, dampers, ducts, and blowers. These ventilation system components are connected to the rooms and corridors of the facility to form a complete network for moving air through the structure and, perhaps, maintaining pressure levels in certain areas. The material transport capability in TORAC is very basic and includes convection, depletion, entrainment, and filtration of material.
NASA Technical Reports Server (NTRS)
Summanen, T.; Kyroelae, E.
1995-01-01
We have developed a computer code which can be used to study 3-dimensional and time-dependent effects of the solar cycle on the interplanetary (IP) hydrogen distribution. The code is based on the inverted Monte Carlo simulation. In this work we have modelled the temporal behaviour of the solar ionisation rate. We have assumed that during most of the solar cycle there is an anisotropic latitudinal structure, but right at the solar maximum the anisotropy disappears. The effects of this behaviour will be discussed both with regard to the IP hydrogen distribution and the IP Lyman-alpha intensity.
RELAP5-3D Results for Phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW Benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2012-06-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2.
RELAP5-3D results for phase I (Exercise 2) of the OECD/NEA MHTGR-350 MW benchmark
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strydom, G.; Epiney, A. S.
2012-07-01
The coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been initiated at the Idaho National Laboratory (INL) to provide a fully coupled prismatic Very High Temperature Reactor (VHTR) system modeling capability as part of the NGNP methods development program. The PHISICS code consists of three modules: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. As part of the verification and validation activities, steady state results have been obtained for Exercise 2 of Phase I of the newly-defined OECD/NEA MHTGR-350 MW Benchmark. This exercise requires participants to calculate a steady-state solution for an End of Equilibrium Cycle 350 MW Modular High Temperature Reactor (MHTGR), using the provided geometry, material, and coolant bypass flow description. The paper provides an overview of the MHTGR Benchmark and presents typical steady state results (e.g. solid and gas temperatures, thermal conductivities) for Phase I Exercise 2. Preliminary results are also provided for the early test phase of Exercise 3 using a two-group cross-section library and the Relap5-3D model developed for Exercise 2. (authors)
Development of a Benchmark Example for Delamination Fatigue Growth Prediction
NASA Technical Reports Server (NTRS)
Krueger, Ronald
2010-01-01
The development of a benchmark example for cyclic delamination growth prediction is presented and demonstrated for a commercial code. The example is based on a finite element model of a Double Cantilever Beam (DCB) specimen, which is independent of the analysis software used and allows the assessment of the delamination growth prediction capabilities in commercial finite element codes. First, the benchmark result was created for the specimen. Second, starting from an initially straight front, the delamination was allowed to grow under cyclic loading in a finite element model of a commercial code. The number of cycles to delamination onset and the number of cycles during stable delamination growth for each growth increment were obtained from the analysis. In general, good agreement between the results obtained from the growth analysis and the benchmark results could be achieved by selecting the appropriate input parameters. Overall, the results are encouraging, but further assessment for mixed-mode delamination is required.
Theory-based model for the pedestal, edge stability and ELMs in tokamaks
NASA Astrophysics Data System (ADS)
Pankin, A. Y.; Bateman, G.; Brennan, D. P.; Schnack, D. D.; Snyder, P. B.; Voitsekhovitch, I.; Kritz, A. H.; Janeschitz, G.; Kruger, S.; Onjun, T.; Pacher, G. W.; Pacher, H. D.
2006-04-01
An improved model for triggering edge localized mode (ELM) crashes is developed for use within integrated modelling simulations of the pedestal and ELM cycles at the edge of H-mode tokamak plasmas. The new model is developed by using the BALOO, DCON and ELITE ideal MHD stability codes to derive parametric expressions for the ELM triggering threshold. The whole toroidal mode number spectrum is studied with these codes. The DCON code applies to low mode numbers, while the BALOO code applies to only high mode numbers and the ELITE code applies to intermediate and high mode numbers. The variables used in the parametric stability expressions are the normalized pressure gradient and the parallel current density, which drive ballooning and peeling modes. Two equilibria motivated by DIII-D geometry with different plasma triangularities are studied. It is found that the stable region in the high triangularity discharge covers a much larger region of parameter space than the corresponding stability region in the low triangularity discharge. The new ELM trigger model is used together with a previously developed model for pedestal formation and ELM crashes in the ASTRA integrated modelling code to follow the time evolution of the temperature profiles during ELM cycles. The ELM frequencies obtained in the simulations of low and high triangularity discharges are observed to increase with increasing heating power. There is a transition from second stability to first ballooning mode stability as the heating power is increased in the high triangularity simulations. The results from the ideal MHD stability codes are compared with results from the resistive MHD stability code NIMROD.
Nouraei, S A R; O'Hanlon, S; Butler, C R; Hadovsky, A; Donald, E; Benjamin, E; Sandhu, G S
2009-02-01
To audit the accuracy of otolaryngology clinical coding and identify ways of improving it. Prospective multidisciplinary audit, using the 'national standard clinical coding audit' methodology supplemented by 'double-reading and arbitration'. Teaching-hospital otolaryngology and clinical coding departments. Otolaryngology inpatient and day-surgery cases. Concordance between initial coding performed by a coder (first cycle) and final coding by a clinician-coder multidisciplinary team (MDT; second cycle) for primary and secondary diagnoses and procedures, and Health Resource Groupings (HRG) assignment. 1250 randomly-selected cases were studied. Coding errors occurred in 24.1% of cases (301/1250). The clinician-coder MDT reassigned 48 primary diagnoses and 186 primary procedures and identified a further 209 initially-missed secondary diagnoses and procedures. In 203 cases, the patient's initial HRG changed. Incorrect coding caused an average revenue loss of 174.90 pounds per patient (14.7%), of which 60% of the total income variance was due to miscoding of eight highly complex head and neck cancer cases. The 'HRG drift' created the appearance of disproportionate resource utilisation when treating 'simple' cases. At our institution the total cost of maintaining a clinician-coder MDT was 4.8 times lower than the income regained through the double-reading process. This large audit of otolaryngology practice identifies a large degree of error in coding on discharge. This leads to significant loss of departmental revenue, and given that the same data are used for benchmarking and for making decisions about resource allocation, it distorts the picture of clinical practice. These problems can be rectified through implementing a cost-effective clinician-coder double-reading multidisciplinary team as part of a data-assurance clinical governance framework, which we recommend should be established in hospitals.
Genetic code, hamming distance and stochastic matrices.
He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E
2004-09-01
In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube.
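Using only the bit assignment quoted above, the construction can be sketched numerically: build the Hamming-distance matrix between all length-n RNA words and check that every row and column sums to the same constant (n times 4^n), so the matrix is symmetric and, after dividing by that constant, doubly stochastic. This is an illustrative sketch, not the authors' code.

```python
from itertools import product
import numpy as np

CODE = {"C": "00", "U": "10", "G": "11", "A": "01"}   # Gray-code assignment

def hamming_matrix(n):
    """Hamming-distance matrix between the binary images of all length-n
    words over {C, U, G, A}."""
    words = ["".join(w) for w in product("CUGA", repeat=n)]
    bits = ["".join(CODE[c] for c in w) for w in words]
    D = np.array([[sum(a != b for a, b in zip(x, y)) for y in bits] for x in bits])
    return words, D

words, D = hamming_matrix(2)                       # 16 x 16 dinucleotide matrix
assert np.array_equal(D, D.T)                      # symmetric
assert np.all(D.sum(axis=0) == D.sum(axis=1))      # equal row and column sums
assert np.all(D.sum(axis=1) == 2 * 4**2)           # constant = n * 4**n
```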
Current and Future Critical Issues in Rocket Propulsion Systems
NASA Technical Reports Server (NTRS)
Navaz, Homayun K.; Dix, Jeff C.
1998-01-01
The objective of this research was to tackle several problems that are currently of great importance to NASA. In a liquid rocket engine several complex processes take place that are not thoroughly understood. Droplet evaporation, turbulence, finite rate chemistry, instability, and injection/atomization phenomena are some of the critical issues being encountered in a liquid rocket engine environment. Pulse Detonation Engines (PDE) performance, combustion chamber instability analysis, 60K motor flowfield pattern from hydrocarbon fuel combustion, and 3D flowfield analysis for the Combined Cycle engine were of special interest to NASA. During the summer of 1997, we made an attempt to generate computational results for all of the above problems and shed some light on understanding some of the complex physical phenomena. For this purpose, the Liquid Thrust Chamber Performance (LTCP) code, mainly designed for liquid rocket engine applications, was utilized. The following test cases were considered: (1) Characterization of a detonation wave in a Pulse Detonation Tube; (2) 60K Motor wall temperature studies; (3) Propagation of a pressure pulse in a combustion chamber (under single and two-phase flow conditions); (4) Transonic region flowfield analysis affected by viscous effects; (5) Exploring the viscous differences between a smooth and a corrugated wall; and (6) 3D thrust chamber flowfield analysis of the Combined Cycle engine. It was shown that the LTCP-2D and LTCP-3D codes are capable of solving complex and stiff conservation equations for gaseous and droplet phases in a very robust and efficient manner. These codes can be run on a workstation and personal computers (PC's).
Perceived Noise Analysis for Offset Jets Applied to Commercial Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Huff, Dennis L.; Henderson, Brenda S.; Berton, Jeffrey J.; Seidel, Jonathan A.
2016-01-01
A systems analysis was performed with experimental jet noise data, engine/aircraft performance codes and aircraft noise prediction codes to assess takeoff noise levels and mission range for conceptual supersonic commercial aircraft. A parametric study was done to identify viable engine cycles that meet NASA's N+2 goals for noise and performance. Model scale data from offset jets were used as input to the aircraft noise prediction code to determine the expected sound levels for the lateral certification point where jet noise dominates over all other noise sources. The noise predictions were used to determine the optimal orientation of the offset nozzles to minimize the noise at the lateral microphone location. An alternative takeoff procedure called "programmed lapse rate" was evaluated for noise reduction benefits. Results show there are two types of engines that provide acceptable mission range performance; one is a conventional mixed-flow turbofan and the other is a three-stream variable-cycle engine. Separate flow offset nozzles reduce the noise directed toward the thicker side of the outer flow stream, but have less benefit as the core nozzle pressure ratio is reduced. At the systems level for a three-engine N+2 aircraft with full throttle takeoff, there is a 1.4 EPNdB margin to Chapter 3 noise regulations predicted for the lateral certification point (assuming jet noise dominates). With a 10% reduction in thrust just after clearing the runway, the margin increases to 5.5 EPNdB. Margins to Chapter 4 and Chapter 14 levels will depend on the cumulative split between the three certification points, but it appears that low specific thrust engines with a 10% reduction in thrust (programmed lapse rate) can come close to meeting Chapter 14 noise levels. Further noise reduction is possible with engine oversizing and derated takeoff, but more detailed mission studies are needed to investigate the range impacts as well as the practical limits for safety and takeoff regulations.
Control Activity in Support of NASA Turbine Based Combined Cycle (TBCC) Research
NASA Technical Reports Server (NTRS)
Stueber, Thomas J.; Vrnak, Daniel R.; Le, Dzu K.; Ouzts, Peter J.
2010-01-01
Control research for a Turbine Based Combined Cycle (TBCC) propulsion system is the current focus of the Hypersonic Guidance, Navigation, and Control (GN&C) discipline team. The ongoing work at the NASA Glenn Research Center (GRC) supports the Hypersonic GN&C effort in developing tools to aid the design of control algorithms to manage a TBCC airbreathing propulsion system during a critical operating period. The critical operating period being addressed in this paper is the span when the propulsion system transitions from one cycle to another, referred to as mode transition. One such tool, a basic need for control system design activities, is a set of computational models (hereafter referred to as models) of the propulsion system. The models of interest for designing and testing controllers are Control Development Models (CDMs) and Control Validation Models (CVMs). CDMs and CVMs are needed for each of the following propulsion system elements: inlet, turbine engine, ram/scram dual-mode combustor, and nozzle. This paper presents an overall architecture for a TBCC propulsion system model that includes all of the propulsion system elements. Efforts are under way, focusing on one of the propulsion system elements, to develop CDMs and CVMs for a TBCC propulsion system inlet. The TBCC inlet aerodynamic design being modeled is that of the Combined-Cycle Engine (CCE) Testbed. The CCE Testbed is a large-scale model of an aerodynamic design that was verified in a small-scale screening experiment. The modeling approach includes employing existing state-of-the-art simulation codes, developing new dynamic simulations, and performing system identification experiments on the hardware in the NASA GRC 10- by 10-Foot Supersonic Wind Tunnel. The developed CDMs and CVMs will be available for control studies prior to hardware buildup. The system identification experiments on the CCE Testbed will characterize the necessary dynamics to be represented in CDMs for control design. These system identification models will also be the reference models to validate the CDM and CVM models. Validated models will give value to the tools used to develop the models.
NASA Astrophysics Data System (ADS)
Wei, Pei; Gu, Rentao; Ji, Yuefeng
2014-06-01
As an innovative and promising technology, network coding has been introduced to passive optical networks (PON) in recent years to support inter optical network unit (ONU) communication, yet the signaling process and dynamic bandwidth allocation (DBA) in PON with network coding (NC-PON) still need further study. Thus, we propose a joint signaling and DBA scheme for efficiently supporting differentiated services of inter ONU communication in NC-PON. In the proposed joint scheme, the signaling process lays the foundation to fulfill network coding in PON, and it can not only avoid the potential threat to downstream security in previous schemes but also be suitable for the proposed hybrid dynamic bandwidth allocation (HDBA) scheme. In HDBA, a DBA cycle is divided into two sub-cycles for applying different coding, scheduling and bandwidth allocation strategies to differentiated classes of services. Besides, as network traffic load varies, the entire upstream transmission window for all REPORT messages slides accordingly, leaving the transmission time of one or two sub-cycles to overlap with the bandwidth allocation calculation time at the optical line terminal (the OLT), so that the upstream idle time can be efficiently eliminated. Performance evaluation results validate that compared with the existing two DBA algorithms deployed in NC-PON, HDBA demonstrates the best quality of service (QoS) support in terms of delay for all classes of services, especially guarantees the end-to-end delay bound of high class services. Specifically, HDBA can eliminate queuing delay and scheduling delay of high class services, reduce those of lower class services by at least 20%, and reduce the average end-to-end delay of all services over 50%. Moreover, HDBA also achieves the maximum delay fairness between coded and uncoded lower class services, and medium delay fairness for high class services.
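The network-coding operation that the signaling process must support can be illustrated with the classic XOR exchange for inter-ONU traffic: the OLT broadcasts the XOR of two upstream frames and each ONU recovers its peer's frame using its own copy. This is a generic sketch of that idea, not the scheduling or HDBA scheme proposed in the paper.

```python
def olt_encode(frame_a: bytes, frame_b: bytes) -> bytes:
    """OLT XORs the two upstream inter-ONU frames (zero-padded to equal
    length) and broadcasts a single coded frame downstream."""
    n = max(len(frame_a), len(frame_b))
    a, b = frame_a.ljust(n, b"\x00"), frame_b.ljust(n, b"\x00")
    return bytes(x ^ y for x, y in zip(a, b))

def onu_decode(coded: bytes, own_frame: bytes) -> bytes:
    """Each ONU recovers its peer's frame by XORing the coded broadcast
    with the frame it sent itself."""
    own = own_frame.ljust(len(coded), b"\x00")
    return bytes(x ^ y for x, y in zip(coded, own))

# ONU1 sent b"hello", ONU2 sent b"world"; each recovers the other's frame
coded = olt_encode(b"hello", b"world")
assert onu_decode(coded, b"hello").rstrip(b"\x00") == b"world"
```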
ASME code considerations for the compact heat exchanger
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nestell, James; Sham, Sam
2015-08-31
The mission of the U.S. Department of Energy (DOE), Office of Nuclear Energy is to advance nuclear power in order to meet the nation's energy, environmental, and energy security needs. Advanced high temperature reactor systems such as sodium fast reactors and high and very high temperature gas-cooled reactors are being considered for the next generation of nuclear reactor plant designs. The coolants for these high temperature reactor systems include liquid sodium and helium gas. Supercritical carbon dioxide (sCO₂), a fluid at a temperature and pressure above the supercritical point of CO₂, is currently being investigated by DOE as a working fluid for a nuclear or fossil-heated recompression closed Brayton cycle energy conversion system that operates at 550°C (1022°F) and 200 bar (2900 psi). Higher operating temperatures are envisioned in future developments. All of these design concepts require a highly effective heat exchanger that transfers heat from the nuclear or chemical reactor to the chemical process fluid or to the power cycle. In the nuclear designs described above, heat is transferred from the primary to the secondary loop via an intermediate heat exchanger (IHX) and then from the intermediate loop to either a working process or a power cycle via a secondary heat exchanger (SHX). The IHX is a component in the primary coolant loop which will be classified as "safety related." The intermediate loop will likely be classified as "not safety related but important to safety." These safety classifications have a direct bearing on heat exchanger design approaches for the IHX and SHX. The very high temperatures being considered for the VHTR will require the use of very high temperature alloys for the IHX and SHX. Material cost considerations alone will dictate that the IHX and SHX be highly effective; that is, provide high heat transfer area in a small volume. This feature must be accompanied by low pressure drop and mechanical reliability and robustness. Classic shell and tube designs will be large and costly, and may only be appropriate in steam generator service in the SHX where boiling inside the tubes occurs. For other energy conversion systems, all of these features can be met in a compact heat exchanger design. This report will examine some of the ASME Code issues that will need to be addressed to allow use of a Code-qualified compact heat exchanger in IHX or SHX nuclear service. Most effort will focus on the IHX, since the safety-related (Class A) design rules are more extensive than those for important-to-safety (Class B) or commercial rules that are relevant to the SHX.
Algorithm Optimally Orders Forward-Chaining Inference Rules
NASA Technical Reports Server (NTRS)
James, Mark
2008-01-01
People typically develop knowledge bases in a somewhat ad hoc manner by incrementally adding rules with no specific organization. This often results in a very inefficient execution of those rules since they are so often order sensitive. This is relevant to tasks like the Deep Space Network in that it allows the knowledge base to be incrementally developed and automatically ordered for efficiency. Although data flow analysis was first developed for use in compilers for producing optimal code sequences, its usefulness is now recognized in many software systems including knowledge-based systems. However, this approach for exhaustively computing data-flow information cannot directly be applied to inference systems because of the ubiquitous execution of the rules. An algorithm is presented that efficiently performs a complete producer/consumer analysis for each antecedent and consequent clause in a knowledge base to optimally order the rules to minimize inference cycles. An algorithm was developed that optimally orders a knowledge base composed of forward-chaining inference rules such that independent inference cycle executions are minimized, thus resulting in significantly faster execution. This algorithm was integrated into the JPL tool Spacecraft Health Inference Engine (SHINE) for verification and it resulted in a significant reduction in inference cycles for what was previously considered an ordered knowledge base. For a knowledge base that is completely unordered, the improvement is much greater.
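A minimal sketch of the producer/consumer ordering idea (not the SHINE implementation): build an edge from every rule that produces a fact to every rule that consumes it, then topologically sort so that producers fire before their consumers wherever the dependency graph allows; rules caught in dependency cycles keep their original relative order. The three-rule knowledge base at the end is hypothetical.

```python
from collections import defaultdict, deque

def order_rules(rules):
    """Order forward-chaining rules so producers of a fact precede its
    consumers where possible. `rules` maps rule name -> (antecedents,
    consequents), both given as sets of fact names."""
    producers = defaultdict(set)
    for name, (_, produces) in rules.items():
        for fact in produces:
            producers[fact].add(name)
    indeg = {name: 0 for name in rules}
    edges = defaultdict(set)
    for name, (consumes, _) in rules.items():
        for fact in consumes:
            for p in producers[fact]:
                if p != name and name not in edges[p]:
                    edges[p].add(name)
                    indeg[name] += 1
    queue = deque(n for n in rules if indeg[n] == 0)
    ordered = []
    while queue:
        n = queue.popleft()
        ordered.append(n)
        for m in edges[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return ordered + [n for n in rules if n not in ordered]  # cyclic remainder

# hypothetical three-rule knowledge base
rules = {
    "raise_alarm":   ({"temp_high"}, {"alarm"}),
    "derive_temp":   ({"sensor_raw"}, {"temp_high"}),
    "page_operator": ({"alarm"}, {"page"}),
}
print(order_rules(rules))  # ['derive_temp', 'raise_alarm', 'page_operator']
```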
An Evaluation of Operational Airspace Sectorization Integrated System (OASIS) Advisory Tool
NASA Technical Reports Server (NTRS)
Lee, Paul U.; Mogford, Richard H.; Bridges, Wayne; Buckley, Nathan; Evans, Mark; Gujral, Vimmy; Lee, Hwasoo; Peknik, Daniel; Preston, William
2013-01-01
In January 2013, a human-in-the-loop evaluation of the Operational Airspace Sectorization Integrated System (OASIS) was conducted in the Airspace Operations Laboratory of the Human Systems Integration Division (Code TH) in conjunction with the Aviation Systems Division (Code AF). The development of OASIS is a major activity of the Dynamic Airspace Configuration (DAC) research focus area within the Aeronautics Research Mission Directorate (ARMD) Airspace Systems Program. OASIS is an advisory tool to assist Federal Aviation Administration (FAA) En Route Area Supervisors in their planning of sector combine/decombine operations as well as opening/closing of Data-side (D-side) control positions. These advisory solutions are tailored to the predicted traffic demand over the next few hours. During the experiment, eight retired FAA personnel served as participants for a part-task evaluation of OASIS functionality, covering the user interface as well as the underlying algorithm. Participants gave positive feedback on both the user interface and the algorithm solutions for airspace configuration, including an excellent average rating of 94 on the tool usability scales. They also suggested various enhancements to the OASIS tool, which will be incorporated into the next tool development cycle for the full-scale human-in-the-loop evaluation to be conducted later this year.
Galactic and solar radiation exposure to aircrew during a solar cycle.
Lewis, B J; Bennett, L G I; Green, A R; McCall, M J; Ellaschuk, B; Butler, A; Pierre, M
2002-01-01
An on-going investigation using a tissue-equivalent proportional counter (TEPC) has been carried out to measure the ambient dose equivalent rate of the cosmic radiation exposure of aircrew during a solar cycle. A semi-empirical model has been derived from these data to allow for the interpolation of the dose rate for any global position. The model has been extended to an altitude of up to 32 km with further measurements made on board aircraft and several balloon flights. The effects of changing solar modulation during the solar cycle are characterised by correlating the dose rate data to different solar potential models. Through integration of the dose-rate function over a great circle flight path or between given waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAIRE) has been further developed for estimation of the route dose from galactic cosmic radiation exposure. This estimate is provided in units of ambient dose equivalent as well as effective dose, based on E/H*(10) scaling functions as determined from transport code calculations with LUIN and FLUKA. This experimentally based treatment has also been compared with the CARI-6 and EPCARD codes that are derived solely from theoretical transport calculations. Using TEPC measurements taken aboard the International Space Station, ground based neutron monitoring, GOES satellite data and transport code analysis, an empirical model has been further proposed for estimation of aircrew exposure during solar particle events. This model has been compared to results obtained during recent solar flare events.
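The route-dose integration can be sketched as below, assuming a caller-supplied dose-rate function of latitude and longitude (a placeholder for the semi-empirical PCAIRE model, which also depends on altitude and solar potential), a constant-speed great-circle path, and equal time on each leg between waypoints; all numbers in the usage line are purely illustrative.

```python
import numpy as np

def to_unit(lat_deg, lon_deg):
    """Unit vector on the sphere for a latitude/longitude in degrees."""
    lat, lon = np.radians(lat_deg), np.radians(lon_deg)
    return np.array([np.cos(lat) * np.cos(lon), np.cos(lat) * np.sin(lon), np.sin(lat)])

def slerp(p0, p1, f):
    """Spherical linear interpolation between two unit vectors (0 <= f <= 1)."""
    omega = np.arccos(np.clip(np.dot(p0, p1), -1.0, 1.0))
    if omega < 1e-12:
        return p0
    return (np.sin((1 - f) * omega) * p0 + np.sin(f * omega) * p1) / np.sin(omega)

def route_dose(dose_rate, waypoints, flight_hours, steps=200):
    """Integrate dose_rate(lat, lon) in uSv/h along the great circles joining
    the waypoints [(lat, lon), ...], returning the route dose in uSv."""
    total, legs = 0.0, len(waypoints) - 1
    for a, b in zip(waypoints[:-1], waypoints[1:]):
        p0, p1 = to_unit(*a), to_unit(*b)
        for f in np.linspace(0.0, 1.0, steps, endpoint=False):
            x, y, z = slerp(p0, p1, f)
            total += dose_rate(np.degrees(np.arcsin(z)),
                               np.degrees(np.arctan2(y, x))) * flight_hours / (legs * steps)
    return total

# toy dose-rate model and route, for illustration only
print(route_dose(lambda lat, lon: 4.0 + 0.05 * abs(lat), [(51.5, -0.1), (43.7, -79.4)], 7.5))
```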
NASA Technical Reports Server (NTRS)
1991-01-01
Recommendations are made after 32 interviews, lesson identification, lesson analysis, and mission characteristics identification. The major recommendations are as follows: (1) to develop end-to-end planning and scheduling operations concepts by mission class and to ensure their consideration in system life cycle documentation; (2) to create an organizational infrastructure at the Code 500 level, supported by a Directorate level steering committee with project representation, responsible for systems engineering of end-to-end planning and scheduling systems; (3) to develop and refine mission capabilities to assess impacts of early mission design decisions on planning and scheduling; and (4) to emphasize operational flexibility in the development of the Advanced Space Network, other institutional resources, external (e.g., project) capabilities and resources, operational software and support tools.
Generating high precision ionospheric ground-truth measurements
NASA Technical Reports Server (NTRS)
Komjathy, Attila (Inventor); Sparks, Lawrence (Inventor); Mannucci, Anthony J. (Inventor)
2007-01-01
A method, apparatus and article of manufacture provide ionospheric ground-truth measurements for use in a wide-area augmentation system (WAAS). Ionospheric pseudorange/code and carrier phase data are received by a WAAS receiver as the primary observables. A polynomial fit is performed on the phase data, which is examined to identify any cycle slips; the phase data are then leveled. Satellite and receiver biases are obtained and applied to the leveled phase data to obtain unbiased phase-leveled ionospheric measurements that are used in a WAAS system. In addition, one of several measurements may be selected, and data are output that provide information on the quality of the measurements used to determine corrective messages as part of the WAAS system.
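A compressed sketch of the two processing steps named in the abstract, under simplifying assumptions (slips screened with residuals from a single global polynomial fit, arcs leveled by the mean code-minus-phase offset, satellite and receiver biases handled elsewhere); the patented method differs in detail.

```python
import numpy as np

def detect_slips(phase_iono, t, deg=3, k=5.0):
    """Flag epochs whose residual from a low-order polynomial fit to the
    phase-derived ionospheric observable exceeds k sigma."""
    resid = phase_iono - np.polyval(np.polyfit(t, phase_iono, deg), t)
    return np.abs(resid) > k * np.std(resid)

def level_phase(phase_iono, code_iono, slips):
    """Shift each continuous phase arc by the mean (code - phase) offset so
    the precise but ambiguous phase inherits the absolute level of the code."""
    leveled = phase_iono.astype(float).copy()
    start = 0
    for end in list(np.flatnonzero(slips)) + [phase_iono.size]:
        if end > start:
            leveled[start:end] += np.mean(code_iono[start:end] - phase_iono[start:end])
        start = end
    return leveled
```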
Aerothermo-Structural Analysis of Low Cost Composite Nozzle/Inlet Components
NASA Technical Reports Server (NTRS)
Shivakumar, Kuwigai; Challa, Preeli; Sree, Dave; Reddy, D.
1999-01-01
This research is a cooperative effort among the Turbomachinery and Propulsion Division of NASA Glenn, CCMR of NC A&T State University, and the Tuskegee University. The NC A&T is the lead center and Tuskegee University is the participating institution. Objectives of the research were to develop an integrated aerodynamic, thermal and structural analysis code for design of aircraft engine components, such as nozzles and inlets made of textile composites; conduct design studies on typical inlets for hypersonic transportation vehicles and set up standard test examples; and finally manufacture a scaled-down composite inlet. These objectives are accomplished through the following seven tasks: (1) identify the relevant public domain codes for all three types of analysis; (2) evaluate the codes for the accuracy of results and computational efficiency; (3) develop aero-thermal and thermal-structural mapping algorithms; (4) integrate all the codes into one single code; (5) write a graphical user interface to improve the user friendliness of the code; (6) conduct test studies for a rocket-based combined-cycle engine inlet; and finally (7) fabricate a demonstration inlet model using textile preform composites. Tasks one, two, and six are being pursued. NPARC was selected and evaluated for flow field analysis, CSTEM for in-depth thermal analysis of inlets and nozzles, and FRAC3D for stress analysis. These codes have been independently verified for accuracy and performance. In addition, a graphical user interface based on micromechanics analysis for laminated as well as textile composites was developed. Demonstration of this code will be made at the conference. A rocket-based combined-cycle engine was selected for test studies. Flow field analyses were performed for various inlet geometries. Integration of the codes is being continued. The codes developed are being applied to a candidate example of the trailblazer engine proposed for space transportation. A successful development of the code will provide a simpler, faster and user-friendly tool for conducting design studies of aircraft and spacecraft engines, applicable in high speed civil transport and space missions.
NASA Technical Reports Server (NTRS)
Cramer, J. M.; Pal, S.; Marshall, W. M.; Santoro, R. J.
2003-01-01
Contents include the following: 1. Motivation. Support NASA's 3rd generation launch vehicle technology program. RBCC is a promising candidate for a 3rd generation propulsion system. 2. Approach. Focus on ejector mode performance (Mach 0-3). Perform testing on established flowpath geometry. Use conventional propulsion measurement techniques. Use advanced optical diagnostic techniques to measure local combustion gas properties. 3. Objectives. Gain physical understanding of detailed mixing and combustion phenomena. Establish an experimental data set for CFD code development and validation.
Integration of the SSPM and STAGE with the MPACT Virtual Facility Distributed Test Bed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cipiti, Benjamin B.; Shoman, Nathan
The Material Protection Accounting and Control Technologies (MPACT) program within DOE NE is working toward a 2020 milestone to demonstrate a Virtual Facility Distributed Test Bed. The goal of the Virtual Test Bed is to link all MPACT modeling tools, technology development, and experimental work to create a Safeguards and Security by Design capability for fuel cycle facilities. The Separation and Safeguards Performance Model (SSPM) forms the core safeguards analysis tool, and the Scenario Toolkit and Generation Environment (STAGE) code forms the core physical security tool. These models are used to design and analyze safeguards and security systems and generate performance metrics. Work over the past year has focused on how these models will integrate with the other capabilities in the MPACT program and specific model changes to enable more streamlined integration in the future. This report describes the model changes and plans for how the models will be used more collaboratively. The Virtual Facility is not designed to integrate all capabilities into one master code, but rather to maintain stand-alone capabilities that communicate results between codes more effectively.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1979-06-01
In this compendium each profile of a nuclear facility is a capsule summary of pertinent facts regarding that particular installation. The facilities described include the entire fuel cycle in the broadest sense, encompassing resource recovery through waste management. Power plants and all US facilities have been excluded. To facilitate comparison the profiles have been recorded in a standard format. Because of the breadth of the undertaking some data fields do not apply to the establishment under discussion and accordingly are blank. The set of nuclear facility profiles occupies four volumes; the profiles are ordered by country name, and then by facility code. Each nuclear facility profile volume contains two complete indexes to the information. The first index aggregates the facilities alphabetically by country. It is further organized by category of facility, and then by the four-character facility code. It provides a quick summary of the nuclear energy capability or interest in each country and also an identifier, the facility code, which can be used to access the information contained in the profile.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sesonske, A.
1980-08-01
Detailed core management arrangements are developed requiring four operating cycles for the transition from the present three-batch loading to an extended-burnup four-batch plan for Zion-1. The ARMP code EPRI-NODE-P was used for core modeling. Although this work is preliminary, uranium and economic savings during the transition cycles appear to be on the order of 6 percent.
Theoretical study of a dual harmonic system and its application to the CSNS/RCS
NASA Astrophysics Data System (ADS)
Yuan, Yao-Shuo; Wang, Na; Xu, Shou-Yan; Yuan, Yue; Wang, Sheng
2015-12-01
Dual harmonic systems have been widely used in high-intensity proton synchrotrons to suppress the space charge effect as well as to reduce beam loss. To investigate the longitudinal beam dynamics in a dual rf system, the potential well, the sub-buckets in the bunch, and the multiple solutions of the phase equation are studied theoretically in this paper. Based on these theoretical studies, the bunching factor and rf voltage waveform are optimized for the dual harmonic rf system in the upgrade phase of the China Spallation Neutron Source Rapid Cycling Synchrotron (CSNS/RCS). In the optimization process, the simulation with space charge effects is done using a newly developed code, C-SCSIM. Supported by National Natural Science Foundation of China (11175193).
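A small numerical sketch of why a second harmonic flattens the potential well and raises the bunching factor; the voltage ratio, counter-phase setting, and the toy equilibrium line density below are illustrative assumptions, not the CSNS/RCS optimization itself.

```python
import numpy as np

def dual_harmonic_potential(phi, r=0.5, phi2=np.pi):
    """Potential well of a stationary bucket driven by
    V(phi) = sin(phi) + r*sin(2*phi + phi2) (arbitrary units, synchronous
    phase 0): U(phi) is the running integral of V, shifted so min(U) = 0."""
    v = np.sin(phi) + r * np.sin(2 * phi + phi2)
    u = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * np.diff(phi))))
    return u - u.min()

phi = np.linspace(-np.pi, np.pi, 2001)
u1 = dual_harmonic_potential(phi, r=0.0)        # single-harmonic well
u2 = dual_harmonic_potential(phi, r=0.5)        # flattened dual-harmonic well

# toy equilibrium line density in each well and the resulting bunching factor
for u in (u1, u2):
    lam = np.exp(-u / 0.1)
    print(round(lam.mean() / lam.max(), 3))     # larger value = longer, flatter bunch
```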
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The paper describes the computational techniques employed to determine the optimal propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements. The computer programs used to perform calculations for all the factors that enter into selecting the optimum combinations of airplanes and engines are examined. Attention is given to the description of the computer codes, including NNEP, WATE, LIFCYC, INSTAL, and POD DRG. A process is illustrated by which turbine engines can be evaluated with respect to fuel consumption, engine weight, cost, and installation effects. Examples show the benefits of variable geometry and the tradeoff between fuel burned and engine weight. Future plans for further improvements in the analytical modeling of engine systems are also described.
Penna, Ilaria; Vassallo, Irene; Nizzari, Mario; Russo, Debora; Costa, Delfina; Menichini, Paola; Poggi, Alessandro; Russo, Claudio; Dieci, Giorgio; Florio, Tullio; Cancedda, Ranieri; Pagano, Aldo
2013-06-01
FE65 proteins constitute a family of adaptors which modulates the processing of amyloid precursor protein and the consequent amyloid β production. Thus, they have been involved in the complex and partially unknown cascade of reactions at the base of Alzheimer's disease etiology. However, FE65 and FE65-like proteins may be linked to neurodegeneration through the regulation of cell cycle in post-mitotic neurons. In this work we disclose novel molecular mechanisms by which APBB2 can modulate APP processing. We show that APBB2 mRNA splicing, driven by the overexpression of a novel non-coding RNA named 45A, allows the generation of alternative protein forms endowed with differential effects on Aβ production, cell cycle control, and DNA damage response. 45A overexpression also favors cell transformation and tumorigenesis, leading to a marked increase in the malignancy of neuroblastoma cells. Therefore, our results highlight a novel regulatory pathway of considerable interest linking APP processing with cell cycle regulation and DNA-surveillance systems, which may represent a molecular mechanism to induce neurodegeneration in post-mitotic neurons. Copyright © 2013 Elsevier B.V. All rights reserved.
Greif, Gonzalo; Rodriguez, Matias; Alvarez-Valin, Fernando
2017-01-01
American trypanosomiasis is a chronic and endemic disease which affects millions of people. Trypanosoma cruzi, its causative agent, has a life cycle that involves complex morphological and functional transitions, as well as a variety of environmental conditions. This requires a tight regulation of gene expression, which is achieved mainly by post-transcriptional regulation. In this work we conducted an RNAseq analysis of the three major life cycle stages of T. cruzi: amastigotes, epimastigotes and trypomastigotes. This analysis allowed us to delineate specific transcriptomic profiling for each stage, and also to identify the biological processes of major relevance in each stage. Stage-specific expression profiling evidenced the plasticity of T. cruzi to adapt quickly to different conditions, with particular focus on membrane remodeling and metabolic shifts along the life cycle. Epimastigotes, which replicate in the gut of insect vectors, showed higher expression of genes related to energy metabolism, mainly Krebs cycle, respiratory chain and oxidative phosphorylation related genes, and anabolism-related genes associated with nucleotide and steroid biosynthesis; also, a general down-regulation of surface glycoprotein coding genes was seen at this stage. Trypomastigotes, living extracellularly in the bloodstream of mammals, express a plethora of surface proteins and signaling genes involved in invasion and evasion of immune response. Amastigotes mostly express membrane transporters and genes involved in regulation of cell cycle, and also express a specific subset of surface glycoprotein coding genes. In addition, these results allowed us to improve the annotation of the Dm28c genome, identifying new ORFs, and set the stage for the construction of co-expression networks, which can give clues about coded proteins of unknown function. PMID:28286708
OECD/NEA Ongoing activities related to the nuclear fuel cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cornet, S.M.; McCarthy, K.; Chauvin, N.
2013-07-01
As part of its role in encouraging international collaboration, the OECD Nuclear Energy Agency is coordinating a series of projects related to the Nuclear Fuel Cycle. The Nuclear Science Committee (NSC) Working Party on Scientific Issues of the Nuclear Fuel Cycle (WPFC) comprises five different expert groups covering all aspects of the fuel cycle from front to back-end. Activities related to fuels, materials, physics, separation chemistry, and fuel cycle scenarios are being undertaken. By publishing state-of-the-art reports and organizing workshops, the groups are able to disseminate recent research advancements to the international community. Current activities mainly focus on advanced nuclear systems, and experts are working on analyzing results and establishing the challenges associated with the adoption of new materials and fuels. By comparing different codes, the Expert Group on Advanced Fuel Cycle Scenarios aims to gain further understanding of the scientific issues and specific national needs associated with the implementation of advanced fuel cycles. At the back end of the fuel cycle, separation technologies (aqueous and pyrochemical processing) are being assessed. Current and future activities comprise studies on minor actinides separation and post-Fukushima studies. Regular workshops are also organized to discuss recent developments on Partitioning and Transmutation. In addition, the Nuclear Development Committee (NDC) focuses on the analysis of the economics of nuclear power across the fuel cycle in the context of changes in electricity markets, social acceptance and technological advances, and assesses the availability of the nuclear fuel and infrastructure required for the deployment of existing and future nuclear power. The Expert Group on the Economics of the Back End of the Nuclear Fuel Cycle (EBENFC), in particular, is assessing economic and financial issues related to the long term management of spent nuclear fuel. (authors)
Numerical Assessment of Four-Port Through-Flow Wave Rotor Cycles with Passage Height Variation
NASA Technical Reports Server (NTRS)
Paxson, D. E.; Lindau, Jules W.
1997-01-01
The potential for improved performance of wave rotor cycles through the use of passage height variation is examined. A quasi-one-dimensional CFD code with experimentally validated loss models is used to determine the flowfield in the wave rotor passages. Results indicate that a carefully chosen passage height profile can produce substantial performance gains. Numerical performance data are presented for a specific profile, in a four-port, through-flow cycle design which yielded a computed 4.6% increase in design point pressure ratio over a comparably sized rotor with constant passage height. In a small gas turbine topping cycle application, this increased pressure ratio would reduce specific fuel consumption to 22% below that of the un-topped engine, a significant improvement over the already impressive 18% reduction predicted for the constant passage height rotor. The simulation code is briefly described. The method used to obtain rotor passage height profiles with enhanced performance is presented. Design and off-design results are shown using two different computational techniques. The paper concludes with some recommendations for further work.
Methodology for Evaluating Cost-effectiveness of Commercial Energy Code Changes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hart, Philip R.; Liu, Bing
This document lays out the U.S. Department of Energy’s (DOE’s) method for evaluating the cost-effectiveness of energy code proposals and editions. The evaluation is applied to provisions or editions of the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) Standard 90.1 and the International Energy Conservation Code (IECC). The method follows standard life-cycle cost (LCC) economic analysis procedures. Cost-effectiveness evaluation requires three steps: 1) evaluating the energy and energy cost savings of code changes, 2) evaluating the incremental and replacement costs related to the changes, and 3) determining the cost-effectiveness of energy code changes based on those costs and savings over time.
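As a sketch of the structure of such an LCC test (with hypothetical numbers, not DOE-prescribed inputs or procedures), the three steps reduce to comparing discounted energy cost savings against the incremental first cost and future replacement costs:

def lcc_net_savings(annual_energy_cost_savings, incremental_first_cost,
                    replacement_costs, discount_rate, years):
    # Simplified life-cycle cost test: positive net savings => cost-effective.
    # replacement_costs: dict mapping year -> cost incurred in that year.
    # All values here are hypothetical placeholders.
    pv_savings = sum(annual_energy_cost_savings / (1.0 + discount_rate) ** t
                     for t in range(1, years + 1))
    pv_replacements = sum(cost / (1.0 + discount_rate) ** year
                          for year, cost in replacement_costs.items())
    return pv_savings - incremental_first_cost - pv_replacements

# Example: a code change saving $120/yr, costing $800 up front plus a $150
# component replacement in year 15, evaluated over 30 years at a 3% real discount rate.
print(lcc_net_savings(120.0, 800.0, {15: 150.0}, 0.03, 30))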
Development of code evaluation criteria for assessing predictive capability and performance
NASA Technical Reports Server (NTRS)
Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.
1993-01-01
Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.
A Subsonic Aircraft Design Optimization With Neural Network and Regression Approximators
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.; Haller, William J.
2004-01-01
The Flight-Optimization-System (FLOPS) code encountered difficulty in analyzing a subsonic aircraft. This limitation made the design optimization problematic. The deficiencies were alleviated through the use of neural network and regression approximations. The insight gained from using the approximators is discussed in this paper. The FLOPS code is reviewed. Analysis models are developed and validated for each approximator. The regression method appears to hug the data points, while the neural network approximation follows a mean path. For an analysis cycle, the approximate model required milliseconds of central processing unit (CPU) time versus seconds by the FLOPS code. Performance of the approximators was satisfactory for aircraft analysis. A design optimization capability has been created by coupling the derived analyzers to the optimization test bed CometBoards. The approximators were efficient reanalysis tools in the aircraft design optimization. Instability encountered in the FLOPS analyzer was eliminated. The convergence characteristics were improved for the design optimization. The CPU time required to calculate the optimum solution, measured in hours with the FLOPS code, was reduced to minutes with the neural network approximation and to seconds with the regression method. Generation of the approximators required the manipulation of a very large quantity of data. Design sensitivity with respect to the bounds of aircraft constraints is easily generated.
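The contrast between the two approximators can be illustrated with a generic surrogate-fitting sketch on synthetic one-dimensional data (this is not the FLOPS/CometBoards setup; the data, model sizes, and roughness metric are illustrative assumptions): a high-order polynomial regression tends to chase individual data points, while a small neural network follows a smoother mean path.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2.0 * np.pi * x) + 0.15 * rng.standard_normal(x.size)  # noisy "analysis" data

# High-order polynomial regression tends to hug individual data points ...
poly = np.polynomial.Polynomial.fit(x, y, deg=12)

# ... while a small neural network tends to follow a smoother mean path.
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=20000, random_state=0)
net.fit(x.reshape(-1, 1), y)

x_test = np.linspace(0.0, 1.0, 200)
y_poly = poly(x_test)
y_net = net.predict(x_test.reshape(-1, 1))
# Crude "roughness" measure: summed absolute second differences of each fit.
print("poly roughness:", np.abs(np.diff(y_poly, 2)).sum())
print("net  roughness:", np.abs(np.diff(y_net, 2)).sum())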
NASA Technical Reports Server (NTRS)
Blewitt, Geoffrey
1989-01-01
A technique for resolving the ambiguities in GPS carrier phase data (which are biased by an integer number of cycles) is described that can be applied to geodetic baselines up to 2000 km in length and can be used with dual-frequency P code receivers. The results of such application demonstrated that a factor of 3 improvement in baseline accuracy could be obtained, giving centimeter-level agreement with coordinates inferred by very-long-baseline interferometry in the western United States. It was found that a method using pseudorange data is more reliable than one using ionospheric constraints for baselines longer than 200 km. It is recommended that future GPS networks have a wide spectrum of baseline lengths (ranging from baselines shorter than 100 km to those longer than 1000 km) and that GPS receivers be used which can acquire dual-frequency P code data.
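The abstract does not spell out the algorithm, but one standard way pseudorange data support carrier-phase ambiguity resolution on long baselines is the Melbourne-Wubbena wide-lane combination, sketched below with hypothetical observables (the constants are the GPS L1/L2 frequencies; this is offered only as an illustration, not as the paper's exact method):

# Wide-lane ambiguity estimate from dual-frequency carrier phase (L1, L2, in meters)
# and pseudorange (P1, P2, in meters). Illustrative of how pseudorange data can
# support carrier-phase ambiguity resolution; not the paper's exact method.
C = 299792458.0                  # speed of light, m/s
F1, F2 = 1575.42e6, 1227.60e6    # GPS L1 and L2 frequencies, Hz
LAMBDA_WL = C / (F1 - F2)        # wide-lane wavelength, ~0.862 m

def widelane_ambiguity(L1, L2, P1, P2):
    # Melbourne-Wubbena combination: returns the float wide-lane ambiguity
    # (in cycles) and its nearest-integer fix.
    phase_wl = (F1 * L1 - F2 * L2) / (F1 - F2)   # wide-lane carrier phase, m
    code_nl = (F1 * P1 + F2 * P2) / (F1 + F2)    # narrow-lane pseudorange, m
    n_float = (phase_wl - code_nl) / LAMBDA_WL
    return n_float, round(n_float)

# Hypothetical observables (meters):
print(widelane_ambiguity(L1=20000000.123, L2=20000000.456,
                         P1=20000003.2, P2=20000003.9))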
Re-engineering NASA's space communications to remain viable in a constrained fiscal environment
NASA Astrophysics Data System (ADS)
Hornstein, Rhoda Shaller; Hei, Donald J., Jr.; Kelly, Angelita C.; Lightfoot, Patricia C.; Bell, Holland T.; Cureton-Snead, Izeller E.; Hurd, William J.; Scales, Charles H.
1994-11-01
Along with the Red and Blue Teams commissioned by the NASA Administrator in 1992, NASA's Associate Administrator for Space Communications commissioned a Blue Team to review the Office of Space Communications (Code O) Core Program and determine how the program could be conducted faster, better, and cheaper. Since there was no corresponding Red Team for the Code O Blue Team, the Blue Team assumed a Red Team independent attitude and challenged the status quo, including current work processes, functional distinctions, interfaces, and information flow, as well as traditional management and system development practices. The Blue Team's unconstrained, non-parochial, and imaginative look at NASA's space communications program produced a simplified representation of the space communications infrastructure that transcends organizational and functional boundaries, in addition to existing systems and facilities. Further, the Blue Team adapted the 'faster, better, cheaper' charter to be relevant to the multi-mission, continuous nature of the space communications program and to serve as a gauge for improving customer services concurrent with achieving more efficient operations and infrastructure life cycle economies. This simplified representation, together with the adapted metrics, offers a future view and process model for reengineering NASA's space communications to remain viable in a constrained fiscal environment. Code O remains firm in its commitment to improve productivity, effectiveness, and efficiency. In October 1992, the Associate Administrator reconstituted the Blue Team as the Code O Success Team (COST) to serve as a catalyst for change. In this paper, the COST presents the chronicle and significance of the simplified representation and adapted metrics, and their application during the FY 1993-1994 activities.
Modeling coherent errors in quantum error correction
NASA Astrophysics Data System (ADS)
Greenbaum, Daniel; Dutton, Zachary
2018-01-01
Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
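The qualitative point, that repeated coherent rotations accumulate error faster than the corresponding Pauli-twirled model predicts, can be illustrated with a single-qubit toy calculation (not the paper's repetition-code derivation; the rotation angle and step count are arbitrary):

import numpy as np

def coherent_vs_pauli(eps=0.01, n_max=200):
    # Compare error accumulation of a repeated coherent Z rotation by angle eps
    # with its Pauli-twirled (stochastic Z-flip) approximation, on one qubit
    # prepared in |+>. Illustrative toy model only.
    p_flip = np.sin(eps / 2.0) ** 2                       # per-step Pauli-Z probability
    n = np.arange(1, n_max + 1)
    coherent_err = np.sin(n * eps / 2.0) ** 2             # rotation angles add coherently
    pauli_err = 0.5 * (1.0 - (1.0 - 2.0 * p_flip) ** n)   # odd-number-of-flips probability
    return n, coherent_err, pauli_err

n, c, p = coherent_vs_pauli()
print("after 100 steps: coherent %.2e vs Pauli model %.2e" % (c[99], p[99]))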
Propulsion System Modeling and Simulation
NASA Technical Reports Server (NTRS)
Tai, Jimmy C. M.; McClure, Erin K.; Mavris, Dimitri N.; Burg, Cecile
2002-01-01
The Aerospace Systems Design Laboratory at the School of Aerospace Engineering in Georgia Institute of Technology has developed a core competency that enables propulsion technology managers to make technology investment decisions substantiated by propulsion and airframe technology system studies. This method assists the designer/manager in selecting appropriate technology concepts while accounting for the presence of risk and uncertainty as well as interactions between disciplines. This capability is incorporated into a single design simulation system that is described in this paper. This propulsion system design environment is created with commercially available software called iSIGHT, which is a generic computational framework, and with analysis programs for engine cycle, engine flowpath, mission, and economic analyses. iSIGHT is used to integrate these analysis tools within a single computer platform and facilitate information transfer amongst the various codes. The resulting modeling and simulation (M&S) environment in conjunction with the response surface method provides the designer/decision-maker an analytical means to examine the entire design space from either a subsystem and/or system perspective. The results of this paper will enable managers to analytically play what-if games to gain insight into the benefits (and/or degradation) of changing engine cycle design parameters. Furthermore, the propulsion design space will be explored probabilistically to show the feasibility and viability of the propulsion system integrated with a vehicle.
Electromagnetic Smart Valves for Cryogenic Applications
NASA Astrophysics Data System (ADS)
Traum, M. J.; Smith, J. L.; Brisson, J. G.; Gerstmann, J.; Hannon, C. L.
2004-06-01
Electromagnetic valves with smart control capability have been developed and demonstrated for use in the cold end of a Collins-style cryocooler. The toroidal geometry of the valves was developed utilizing a finite-element code and optimized for maximum opening force with minimum input current. Electromagnetic smart valves carry two primary benefits in cryogenic applications: 1) magnetic actuation eliminates the need for mechanical linkages and 2) valve timing can be modified during system cool down and in regular operation for cycle optimization. The smart feature of these electromagnetic valves resides in controlling the flow of current into the magnetic coil. Electronics have been designed to shape the valve actuation current, limiting the residence time of magnetic energy in the winding. This feature allows control of flow through the expander via an electrical signal while dissipating less than 0.0071 J/cycle as heat into the cold end. The electromagnetic smart valves have demonstrated reliable, controllable dynamic cycling. After 40 hours of operation, they suffered no perceptible mechanical degradation. These features enable the development of a miniaturized Collins-style cryocooler capable of removing 1 Watt of heat at 10 K.
Understanding soil health by capitalizing on long-term field studies
NASA Astrophysics Data System (ADS)
Tavakkoli, Ehsan; Wang, Zhe; VanZweieten, Lukas; Rose, Michael
2017-04-01
Microbial biodiversity in Australian agricultural soils is of paramount importance as it plays a critical role in regulating soil health, plant productivity, and the cycling of carbon, nitrogen, and other nutrients. Agricultural practices strongly affect soil microbial communities by changing the physical and chemical characteristics of the soil in which microorganisms live, thereby affecting their abundance, diversity, and activity. Despite its importance, the specific responses of various microbial groups to changing environmental conditions (e.g. increased/decreased carbon in response to land management) in agricultural soils are not well understood. This knowledge gap is largely due to previous methodological limitations that, until recently, did not allow microbial diversity and functioning to be meaningfully investigated on large numbers of samples. We sampled soils from a field trial on the effect of strategic tillage in no-till systems to examine the potential impact of tillage and stubble management on soil microbial composition. To determine the relative abundance of bacteria and fungi, we used quantitative PCR (qPCR), and to analyze the composition and diversity of the bacterial and fungal communities, we used bar-coded high-throughput sequencing. Bioinformatic analysis of the sequencing data was performed using a previously scripted and tested pipeline, and involved allocation of the relevant sequences to their samples of origin according to the bar-code. In parallel, changes in soil quality and microbial functionality were determined using a multi-enzyme activity assay and multiple substrate-induced respiration. The extracellular enzyme activities that were measured include: β-1,4-glucosidase, β-D-cellobiohydrolase, β-xylosidase, and α-1,4-glucosidase, which are all relevant to the C cycle; and β-1,4-N-acetylglucosaminidase and L-leucine aminopeptidase, which are both relevant to the N cycle and associated with protein catabolism. In this presentation, analyses of soil health and functionality in response to various agronomic practices, and the implications for C sequestration and nutrient cycling, will be discussed.
Mongoose: Creation of a Rad-Hard MIPS R3000
NASA Technical Reports Server (NTRS)
Lincoln, Dan; Smith, Brian
1993-01-01
This paper describes the development of a 32 Bit, full MIPS R3000 code-compatible Rad-Hard CPU, code named Mongoose. Mongoose progressed from contract award, through the design cycle, to operational silicon in 12 months to meet a space mission for NASA. The goal was the creation of a fully static device capable of operation to the maximum Mil-883 derated speed, worst-case post-rad exposure, with full operational integrity. This included consideration of features for functional enhancements relating to mission compatibility and removal of commercial practices not supported by Rad-Hard technology. 'Mongoose' developed from an evolution of LSI Logic's MIPS-I embedded processor, LR33000, code named Cobra, to its Rad-Hard 'equivalent', Mongoose. The term 'equivalent' is used to indicate that the core of the processor is functionally identical, allowing the same use and optimizations of the MIPS-I Instruction Set software tool suite for compilation, software program trace, etc. This activity was started in September of 1991 under a contract from NASA-Goddard Space Flight Center (GSFC)-Flight Data Systems. The approach effected a teaming of NASA-GSFC for program development, LSI Logic for system and ASIC design coupled with the Rad-Hard process technology, and Harris (GASD) for Rad-Hard microprocessor design expertise. The program culminated with the generation of Rad-Hard Mongoose prototypes one year later.
Flow of GE90 Turbofan Engine Simulated
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
1999-01-01
The objective of this task was to create and validate a three-dimensional model of the GE90 turbofan engine (General Electric) using the APNASA (average passage) flow code. This was a joint effort between GE Aircraft Engines and the NASA Lewis Research Center. The goal was to perform an aerodynamic analysis of the engine primary flow path, in under 24 hours of CPU time, on a parallel distributed workstation system. Enhancements were made to the APNASA Navier-Stokes code to make it faster and more robust and to allow for the analysis of more arbitrary geometry. The resulting simulation exploited the use of parallel computations by using two levels of parallelism, with extremely high efficiency. The primary flow path of the GE90 turbofan consists of a nacelle and inlet, 49 blade rows of turbomachinery, and an exhaust nozzle. Secondary flows entering and exiting the primary flow path, such as bleed, purge, and cooling flows, were modeled macroscopically as source terms to accurately simulate the engine. The information on these source terms came from detailed descriptions of the cooling flow and from thermodynamic cycle system simulations. These provided boundary condition data to the three-dimensional analysis. A simplified combustor was used to feed boundary conditions to the turbomachinery. Flow simulations of the fan, high-pressure compressor, and high- and low-pressure turbines were completed with the APNASA code.
Quintero, Catherine; Kariv, Ilona
2009-06-01
To meet the needs of the increasingly rapid and parallelized lead optimization process, a fully integrated local compound storage and liquid handling system was designed and implemented to automate the generation of assay-ready plates directly from newly submitted and cherry-picked compounds. A key feature of the system is the ability to create project- or assay-specific compound-handling methods, which provide flexibility for any combination of plate types, layouts, and plate bar-codes. Project-specific workflows can be created by linking methods for processing new and cherry-picked compounds and control additions to produce a complete compound set for both biological testing and local storage in one uninterrupted workflow. A flexible cherry-pick approach allows for multiple, user-defined strategies to select the most appropriate replicate of a compound for retesting. Examples of custom selection parameters include available volume, compound batch, and number of freeze/thaw cycles. This adaptable and integrated combination of software and hardware provides a basis for reducing cycle time, fully automating compound processing, and ultimately increasing the rate at which accurate, biologically relevant results can be produced for compounds of interest in the lead optimization process.
Performance outlook of the SCRAP receiver
NASA Astrophysics Data System (ADS)
Lubkoll, Matti; von Backström, Theodor W.; Harms, Thomas M.
2016-05-01
A combined cycle (CC) concentrating solar power (CSP) plant provides significant potential to achieve an efficiency increase and an electricity cost reduction compared to current single-cycle plants. A CC CSP system requires a receiver technology capable of effectively transferring heat from concentrated solar irradiation to a pressurized air stream of a gas turbine. The small number of pressurized air receivers demonstrated to date have practical limitations when operating at high temperatures and pressures. A robust, scalable and efficient system has yet to be developed and commercialized. A novel receiver system, the Spiky Central Receiver Air Pre-heater (SCRAP) concept, has been proposed to comply with these requirements. The SCRAP system is conceived as a solution for an efficient and robust pressurized air receiver that could be implemented in CC CSP concepts or standalone solar Brayton cycles without a bottoming Rankine cycle. The presented work expands on previous publications on the thermal modeling of the receiver system. Based on the analysis of a single heat transfer element (spike), predictions for its thermal performance can be made. To this end, the existing thermal model was improved with heat transfer characteristics for the jet impingement region of the spike tip as well as heat transfer models simulating the interaction with the ambient environment. While the jet impingement cooling effect was simulated employing a commercial CFD code, the ambient heat transfer model was based on simplifying assumptions in order to employ empirical and analytical equations. The thermal efficiency of a spike under design conditions (flux 1.0 MW/m2, air outlet temperature just below 800 °C) was calculated at approximately 80 %, where convective heat losses account for 16.2 % of the absorbed radiation and radiative heat losses for a lower 2.9 %. This low radiative loss is due to peak surface temperatures occurring at the root of the spikes. It can thus be concluded that the geometric receiver layout helps to limit radiative heat losses.
JavaGenes and Condor: Cycle-Scavenging Genetic Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Langhirt, Eric; Livny, Miron; Ramamurthy, Ravishankar; Soloman, Marvin; Traugott, Steve
2000-01-01
A genetic algorithm code, JavaGenes, was written in Java and used to evolve pharmaceutical drug molecules and digital circuits. JavaGenes was run under the Condor cycle-scavenging batch system managing 100-170 desktop SGI workstations. Genetic algorithms mimic biological evolution by evolving solutions to problems using crossover and mutation. While most genetic algorithms evolve strings or trees, JavaGenes evolves graphs representing (currently) molecules and circuits. Java was chosen as the implementation language because the genetic algorithm requires random splitting and recombining of graphs, a complex data structure manipulation with ample opportunities for memory leaks, loose pointers, out-of-bound indices, and other hard-to-find bugs. Java garbage-collection memory management, lack of pointer arithmetic, and array-bounds index checking prevent these bugs from occurring, substantially reducing development time. While a run-time performance penalty must be paid, the only unacceptable performance we encountered was using standard Java serialization to checkpoint and restart the code. This was fixed by a two-day implementation of custom checkpointing. JavaGenes is minimally integrated with Condor; in other words, JavaGenes must do its own checkpointing and I/O redirection. A prototype Java-aware version of Condor was developed using standard Java serialization for checkpointing. For the prototype to be useful, standard Java serialization must be significantly optimized. JavaGenes is approximately 8700 lines of code, and a few thousand JavaGenes jobs have been run. Most jobs ran for a few days. Results include proof that genetic algorithms can evolve directed and undirected graphs, development of a novel crossover operator for graphs, a paper in the journal Nanotechnology, and another paper in preparation.
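As a generic illustration of the evolutionary loop described above (JavaGenes itself is written in Java and uses a specialized graph crossover operator; the sketch below is only a toy that encodes an undirected graph as a bit vector over its upper-triangle edge slots, with uniform crossover, bit-flip mutation, and a made-up fitness target):

import random

N_NODES, POP, GENS, MUT = 8, 40, 60, 0.02
N_EDGES = N_NODES * (N_NODES - 1) // 2   # upper-triangle edge slots
TARGET_EDGES = 12                        # made-up fitness target

def fitness(g):                          # closer to the target edge count is better
    return -abs(sum(g) - TARGET_EDGES)

def crossover(a, b):                     # uniform crossover on the edge vector
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(g):                           # independent bit flips
    return [bit ^ 1 if random.random() < MUT else bit for bit in g]

random.seed(1)
pop = [[random.randint(0, 1) for _ in range(N_EDGES)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]             # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children
print("best fitness:", fitness(max(pop, key=fitness)))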
Monitoring the Global Soil Moisture Climatology Using GLDAS/LIS
NASA Astrophysics Data System (ADS)
Meng, J.; Mitchell, K.; Wei, H.; Gottschalck, J.
2006-05-01
Soil moisture plays a crucial role in the terrestrial water cycle through governing the process of partitioning precipitation among infiltration, runoff and evaporation. Accurate assessment of soil moisture and other land states, namely, soil temperature, snowpack, and vegetation, is critical in numerical environmental prediction systems because of their regulation of surface water and energy fluxes between the surface and atmosphere over a variety of spatial and temporal scales. The Global Land Data Assimilation System (GLDAS) was developed jointly by NASA Goddard Space Flight Center (GSFC) and NOAA National Centers for Environmental Prediction (NCEP) to perform high-quality global land surface simulation using state-of-the-art land surface models and to further minimize simulation errors by constraining the models with observation-based precipitation and satellite land data assimilation techniques. The GLDAS-based Land Information System (LIS) infrastructure has been installed on the NCEP supercomputer that serves the operational weather and climate prediction systems. In this experiment, the Noah land surface model is executed offline within the GLDAS/LIS infrastructure, driven by the NCEP Global Reanalysis-2 (GR2) and the CPC Merged Analysis of Precipitation (CMAP). We use the same Noah code that is coupled to the operational NCEP Global Forecast System (GFS) for weather prediction and test bed versions of the NCEP Climate Forecast System (CFS) for seasonal prediction. For assessment, it is crucial that this uncoupled GLDAS/Noah uses exactly the same Noah code (and soil and vegetation parameters therein), and executes with the same horizontal grid, landmask, terrain field, soil and vegetation types, seasonal cycle of green vegetation fraction and surface albedo as in the coupled GFS/Noah and CFS/Noah. This execution is for the 25-year period of 1980-2005, starting with a pre-execution 10-year spin-up. This 25-year GLDAS/Noah global land climatology will be used for both climate variability assessment and as a source of land initial conditions for ensemble CFS/Noah seasonal hindcast experiments. Finally, this GLDAS/Noah climatology will serve as the foundation for a global drought/flood monitoring system that includes near-realtime daily updates of the global land states.
A real-time LPC-based vocal tract area display for voice development.
Rossiter, D; Howard, D M; Downes, M
1994-12-01
This article reports the design and implementation of a graphical display that presents an approximation to vocal tract area in real time for voiced vowel articulation. The acoustic signal is digitally sampled by the system. From these data a set of reflection coefficients is derived using linear predictive coding. A matrix of area coefficients is then determined that approximates the vocal tract area of the user. From this information a graphical display is then generated. The complete cycle of analysis and display is repeated approximately 20 times per second. Synchronised audio and visual sequences can be recorded and used as dynamic targets for articulatory development. Use of the system is illustrated by diagrams of system output for spoken cardinal vowels and for vowels sung in a trained and untrained style.
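A minimal sketch of that analysis chain under common textbook conventions (the frame, model order, sampling rate, and the sign convention of the reflection coefficients are illustrative assumptions, not the article's exact implementation): autocorrelation, Levinson-Durbin reflection (PARCOR) coefficients, then the acoustic-tube area recursion.

import numpy as np

def reflection_coefficients(frame, order=12):
    # Levinson-Durbin recursion on the frame autocorrelation;
    # returns the reflection (PARCOR) coefficients k_1..k_order.
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    ks = []
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err
        ks.append(k)
        prev = a[i - 1::-1][:i].copy()      # previous predictor coefficients, reversed
        a[1:i + 1] += k * prev              # update the predictor polynomial
        err *= 1.0 - k * k
    return np.array(ks)

def area_function(ks, lip_area=1.0):
    # Map reflection coefficients to relative tube-section areas,
    # A_next = A * (1 - k) / (1 + k), working inward from the lips.
    areas = [lip_area]
    for k in ks:
        areas.append(areas[-1] * (1.0 - k) / (1.0 + k))
    return np.array(areas)

# Toy voiced frame: a decaying two-resonance signal sampled at 10 kHz (illustrative).
fs = 10000
t = np.arange(400) / fs
frame = np.exp(-40 * t) * (np.sin(2 * np.pi * 700 * t) + 0.5 * np.sin(2 * np.pi * 1200 * t))
print(area_function(reflection_coefficients(frame)))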
Development of a real time magnetic island identification system for HL-2A tokamak.
Chen, Chao; Sun, Shan; Ji, Xiaoquan; Yin, Zejie
2017-08-01
A novel real time magnetic island identification system for HL-2A is introduced. The identification method is based on the measurement of Mirnov probes and the equilibrium flux constructed by the equilibrium fit (EFIT) code. The system consists of an analog front board and a digital processing board connected by a shielded cable. Four octal-channel analog-to-digital convertors are utilized for 100 kHz simultaneous sampling of all the probes, and the use of the PCI eXtensions for Instrumentation (PXI) platform and reflective memory allows the system to receive EFIT results simultaneously. A high-performance field programmable gate array (FPGA) is used to realize the real time identification algorithm. Based on the parallel and pipeline processing of the FPGA, the magnetic island structure can be identified with a cycle time of 3 ms during experiments.
NASA Technical Reports Server (NTRS)
1989-01-01
001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.
ION EFFECTS IN THE APS PARTICLE ACCUMULATOR RING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calvey, J.; Harkay, K.; Yao, CY.
2017-06-25
Trapped ions in the APS Particle Accumulator Ring (PAR) lead to a positive coherent tune shift in both planes, which increases along the PAR cycle as more ions accumulate. This effect has been studied using an ion simulation code developed at SLAC. After modifying the code to include a realistic vacuum profile, multiple ionization, and the effect of shaking the beam to measure the tune, the simulation agrees well with our measurements. This code has also been used to evaluate the possibility of ion instabilities at the high bunch charge needed for the APS-Upgrade.
The effects of solarization on the performance of a gas turbine
NASA Astrophysics Data System (ADS)
Homann, Christiaan; van der Spuy, Johan; von Backström, Theodor
2016-05-01
Various hybrid solar gas turbine configurations exist. The Stellenbosch University Solar Power Thermodynamic (SUNSPOT) cycle consists of a heliostat field, solar receiver, primary Brayton gas turbine cycle, thermal storage and secondary Rankine steam cycle. This study investigates the effect of the solarization of a gas turbine on its performance and details the integration of a gas turbine into a solar power plant. A Rover 1S60 gas turbine was modelled in Flownex, a thermal-fluid system simulation and design code, and validated against a one-dimensional thermodynamic model at design input conditions. The performance map of a newly designed centrifugal compressor was created and implemented in Flownex. The effect of the improved compressor on the performance of the gas turbine was evident. The gas turbine cycle was expanded to incorporate different components of a CSP plant, such as a solar receiver and heliostat field. The solarized gas turbine model simulates the gas turbine performance when subjected to a typical variation in solar resource. Site conditions at the Helio100 solar field were investigated and the possibility of integrating a gas turbine within this system evaluated. Heat addition due to solar irradiation resulted in a decreased fuel consumption rate. The influence of the additional pressure drop over the solar receiver was evident, as it led to a decreased net power output. The new compressor increased the overall performance of the gas turbine and compensated for pressure losses incurred by the addition of solar components. The simulated integration of the solarized gas turbine at Helio100 showed potential, although the solar irradiation is too little to run the gas turbine on solar heat alone. The simulation evaluates the feasibility of solarizing a gas turbine and predicts plant performance for such a turbine cycle.
Numerical investigation of two- and three-dimensional heat transfer in expander cycle engines
NASA Technical Reports Server (NTRS)
Burch, Robert L.; Cheung, Fan-Bill
1993-01-01
The concept of using tube canting for enhancing the hot-side convective heat transfer in a cross-stream tubular rocket combustion chamber is evaluated using a CFD technique in this study. The heat transfer at the combustor wall is determined from the flow field generated by a modified version of the PARC Navier-Stokes Code, using the actual dimensions, fluid properties, and design parameters of a split-expander demonstrator cycle engine. The effects of artificial dissipation on convergence and solution accuracy are investigated. Heat transfer results predicted by the code are presented. The use of CFD in heat transfer calculations is critically examined to demonstrate the care needed in the use of artificial dissipation for good convergence and accurate solutions.
Performance Benefits for Wave Rotor-Topped Gas Turbine Engines
NASA Technical Reports Server (NTRS)
Jones, Scott M.; Welch, Gerard E.
1996-01-01
The benefits of wave rotor-topping in turboshaft engines, subsonic high-bypass turbofan engines, auxiliary power units, and ground power units are evaluated. The thermodynamic cycle performance is modeled using a one-dimensional steady-state code; wave rotor performance is modeled using one-dimensional design/analysis codes. Design and off-design engine performance is calculated for baseline engines and wave rotor-topped engines, where the wave rotor acts as a high pressure spool. The wave rotor-enhanced engines are shown to have benefits in specific power and specific fuel flow over the baseline engines without increasing turbine inlet temperature. The off-design steady-state behavior of a wave rotor-topped engine is shown to be similar to a conventional engine. Mission studies are performed to quantify aircraft performance benefits for various wave rotor cycle and weight parameters. Gas turbine engine cycles most likely to benefit from wave rotor-topping are identified. Issues of practical integration and the corresponding technical challenges with various engine types are discussed.
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin; Schlensinger, Adam
2011-01-01
Sinusoidal jitter is produced by simply modulating a clock frequency sinusoidally with a given frequency and amplitude. But this can be expressed as phase jitter, frequency jitter, or cycle-to-cycle jitter, rms or peak, in absolute units, or normalized to the base clock frequency. Jitter using other waveforms requires calculating and downloading these waveforms to an arbitrary waveform generator, and helping the user manage relationships among phase jitter crest factor, frequency jitter crest factor, and cycle-to-cycle jitter (CCJ) crest factor. Software was developed for managing these relationships, automatically configuring the generator, and saving test results documentation. Tighter management of clock jitter and jitter sensitivity is required by new codes that further extend the already high performance of space communication links, completely correcting symbol error rates higher than 10 percent, and therefore typically requiring demodulation and symbol synchronization hardware to operate at signal-to-noise ratios of less than one. To accomplish this, greater demands are also made on transmitter performance, and measurement techniques are needed to confirm performance. It was discovered early that sinusoidal jitter can be stepped on a grid such that one can connect points by constant phase jitter, constant frequency jitter, or constant cycle-to-cycle jitter. The tool automates adherence to a grid while also allowing adjustments off-grid. Also, the jitter can be set by the user on any dimension and the others are calculated. The calculations are all recorded, allowing the data to be rapidly plotted or re-plotted against different interpretations just by changing pointers to columns. A key advantage is taking data on a carefully controlled grid, which allowed a single data set to be post-analyzed many different ways. Another innovation was building a software tool to provide very tight coupling between the generator, the recorded data product, and the operator's worksheet. Together, these allowed the operator to sweep the jitter stimulus quickly along any of three dimensions and focus on the response of the system under test (response was jitter transfer ratio, or performance degradation to the symbol or codeword error rate). Additionally, managing multi-tone and noise waveforms automated a tedious manual process, and provided almost instantaneous decision-making control over test flow. The code was written in LabVIEW, and calls Agilent instrument drivers to write to the generator hardware.
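To make the relationships concrete, the following sketch (illustrative numbers, not the tool itself) generates the rising-edge times of a clock under sinusoidal frequency modulation and reports peak phase jitter, peak period deviation, and peak cycle-to-cycle jitter, which is essentially the bookkeeping the software automates when stepping the stimulus along a grid:

import numpy as np

def sinusoidal_jitter_metrics(f0=10e6, fm=10e3, df=1e3, n_edges=5000):
    # Generate rising-edge times of a clock whose frequency is modulated as
    # f(t) = f0 + df*sin(2*pi*fm*t), then report peak phase jitter (s),
    # peak period deviation (s), and peak cycle-to-cycle jitter (s).
    # Total phase in cycles: Phi(t) = f0*t + (df/(2*pi*fm))*(1 - cos(2*pi*fm*t)).
    t = np.linspace(0.0, (n_edges + 2) / f0, 200 * n_edges)
    phi = f0 * t + (df / (2.0 * np.pi * fm)) * (1.0 - np.cos(2.0 * np.pi * fm * t))
    edges = np.interp(np.arange(n_edges), phi, t)   # invert the monotonic phase
    ideal = np.arange(n_edges) / f0
    phase_jitter = edges - ideal                    # phase jitter vs. ideal clock, s
    periods = np.diff(edges)
    period_jitter = periods - 1.0 / f0              # per-cycle period deviation, s
    ccj = np.diff(periods)                          # cycle-to-cycle jitter, s
    return (np.max(np.abs(phase_jitter)),
            np.max(np.abs(period_jitter)),
            np.max(np.abs(ccj)))

pj, fj, cc = sinusoidal_jitter_metrics()
print(f"peak phase jitter {pj:.3e} s, period jitter {fj:.3e} s, CCJ {cc:.3e} s")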
Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating
NASA Astrophysics Data System (ADS)
Chen, Liangji; Guo, Guangsong; Li, Huiying
2017-07-01
NURBS (Non-Uniform Rational B-Spline) is widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in the CNC (Computer Numerical Control) system we are developing. First, we use two NURBS curves to represent the tool-tip and tool-axis paths respectively. According to the feedrate and a Taylor series expansion, servo-controlling signals for the 5 axes are obtained for each interpolation cycle. Then, the procedure for generating NC (Numerical Control) code with the presented method is introduced, and the method for integrating the interpolator into the CNC system is given. The servo-controlling structure of the CNC system is also introduced. The illustration indicates that the proposed method can enhance the machining accuracy and that the spline interpolator is feasible for 5-axis CNC systems.
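A minimal sketch of one interpolation step for the tool-tip curve alone (the paper's formulation also handles the tool-axis curve and uses a higher-order Taylor expansion; the control points, weights, knot vector, feedrate, and cycle time below are illustrative assumptions): the NURBS curve is evaluated through SciPy B-splines in homogeneous (weighted) coordinates, and the parameter is advanced each cycle by u_{k+1} = u_k + F*Ts/|C'(u_k)|.

import numpy as np
from scipy.interpolate import BSpline

# Cubic NURBS tool-tip path: control points, weights, clamped knot vector (illustrative).
P = np.array([[0, 0, 0], [10, 5, 2], [20, -5, 4], [30, 0, 6], [40, 10, 8]], float)
w = np.array([1.0, 1.2, 1.0, 0.8, 1.0])
deg = 3
knots = np.array([0, 0, 0, 0, 0.5, 1, 1, 1, 1], float)

num = BSpline(knots, P * w[:, None], deg)   # homogeneous numerator, sum w_i N_i P_i
den = BSpline(knots, w, deg)                # denominator, sum w_i N_i
dnum, dden = num.derivative(), den.derivative()

def curve(u):      # C(u) = A(u)/w(u)
    return num(u) / den(u)

def dcurve(u):     # C'(u) = (A'(u) - C(u) w'(u)) / w(u)
    return (dnum(u) - curve(u) * dden(u)) / den(u)

# First-order (Taylor) parameter update for a commanded feedrate F and cycle time Ts.
F, Ts = 50.0, 0.002            # mm/s and s, illustrative values
u, pts = 0.0, []
while u < 1.0:
    pts.append(curve(u))
    u += F * Ts / np.linalg.norm(dcurve(u))   # u_{k+1} = u_k + F*Ts/|C'(u_k)|
print(len(pts), "interpolated points; last =", np.round(pts[-1], 3))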
Verification of a Finite Element Model for Pyrolyzing Ablative Materials
NASA Technical Reports Server (NTRS)
Risch, Timothy K.
2017-01-01
Ablating thermal protection system (TPS) materials have been used in many reentering spacecraft and in other applications such as rocket nozzle linings, fire protection materials, and countermeasures for directed energy weapons. The introduction of the finite element method to the analysis of ablation has arguably resulted in improved computational capabilities due to the flexibility and extended applicability of the method, especially to complex geometries. Commercial finite element codes often provide enhanced capability compared to custom, specially written programs based on versatility, usability, pre- and post-processing, grid generation, total life-cycle costs, and speed.
NASA Astrophysics Data System (ADS)
Rizzo, Axel; Vaglio-Gaudard, Claire; Martin, Julie-Fiona; Noguère, Gilles; Eschbach, Romain
2017-09-01
DARWIN2.3 is the reference package used for fuel cycle applications in France. It solves the Boltzmann and Bateman equations in a coupled manner, with the European JEFF-3.1.1 nuclear data library, to compute the fuel cycle values of interest. It includes both deterministic transport codes APOLLO2 (for light water reactors) and ERANOS2 (for fast reactors), and the DARWIN/PEPIN2 depletion code, each of them being developed by CEA/DEN with the support of its industrial partners. The DARWIN2.3 package has been experimentally validated for pressurized and boiling water reactors, as well as for sodium fast reactors; this experimental validation relies on the analysis of post-irradiation experiments (PIE). The DARWIN2.3 experimental validation work points out some isotopes for which the depleted concentration calculation can be improved. Some other nuclides have no available experimental validation, and their concentration calculation uncertainty is provided by the propagation of a priori nuclear data uncertainties. This paper describes the work plan of studies initiated this year to improve the accuracy of the DARWIN2.3 depleted material balance calculation for some nuclides of interest for the fuel cycle.
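The depletion side of such a calculation can be illustrated by a toy Bateman solve for a three-nuclide chain under a constant one-group flux (the flux, cross section and decay constant below are made-up numbers, and this is not the DARWIN/PEPIN2 algorithm):

import numpy as np
from scipy.linalg import expm

# Toy depletion: nuclide 0 captures into nuclide 1, which decays to nuclide 2.
# One-group flux and constants are made-up, for illustration only.
phi = 1.0e14                 # n/cm^2/s
sigma_c0 = 5.0e-24           # capture cross section of nuclide 0, cm^2
lam1 = 1.0e-7                # decay constant of nuclide 1, 1/s

A = np.array([
    [-sigma_c0 * phi, 0.0,   0.0],   # loss of nuclide 0 by capture
    [ sigma_c0 * phi, -lam1, 0.0],   # production of 1 by capture, loss by decay
    [ 0.0,            lam1,  0.0],   # production of 2 by decay of 1
])

N0 = np.array([1.0e24, 0.0, 0.0])    # initial concentrations, at/cm^3
dt = 3.0e7                           # ~1 year burnup step, s
N = expm(A * dt) @ N0                # Bateman solution N(t) = exp(A t) N0
print("end-of-step concentrations:", N)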
The psychology of elite cycling: a systematic review.
Spindler, David J; Allen, Mark S; Vella, Stewart A; Swann, Christian
2018-09-01
This systematic review sought to synthesise what is currently known about the psychology of elite cycling. Nine electronic databases were searched in March 2017 for studies reporting an empirical test of any psychological construct in an elite cycling sample. Fourteen studies (total n = 427) met inclusion criteria. Eight studies were coded as having high risk of bias. Themes extracted included mood, anxiety, self-confidence, pain, and cognitive function. Few studies had similar objectives meaning that in many instances findings could not be synthesised in a meaningful way. Nevertheless, there was some cross-study evidence that elite cyclists have more positive mood states (relative to normative scores), pre-race anxiety impairs performance (among male cyclists), and associative strategies are perceived as helpful for pain management. Among single studies coded as having low risk of bias, evidence suggests that implicit beliefs affect decision making performance, elite cyclists are less susceptible to mental fatigue (than non-elite cyclists), and better leadership skills relates to greater social labouring. Limitations include non-standardisation of measures, lack of follow-up data, small sample sizes, and overall poor research quality. The findings of this systematic review might be used to inform research and theory development on the psychology of elite endurance cycling.
Nouraei, S A R; Hudovsky, A; Virk, J S; Chatrath, P; Sandhu, G S
2013-12-01
To audit the accuracy of clinical coding in otolaryngology, assess the effectiveness of previously implemented interventions, and determine ways in which it can be further improved. Prospective clinician-auditor multidisciplinary audit of clinical coding accuracy. Elective and emergency ENT admissions and day-case activity. Concordance between initial coding and the clinician-auditor multidisciplinary team (MDT) coding in respect of primary and secondary diagnoses and procedures, health resource groupings (HRGs) and tariffs. The audit of 3131 randomly selected otolaryngology patients between 2010 and 2012 resulted in 420 instances of change to the primary diagnosis (13%) and 417 changes to the primary procedure (13%). In 1420 cases (44%), there was at least one change to the initial coding and 514 (16%) HRGs changed. There was an income variance of £343,169 or £109.46 per patient. The highest rates of HRG change were observed in head and neck surgery, in particular skull-base surgery, laryngology, and within that tracheostomy, and emergency admissions, especially epistaxis management. A randomly selected sample of 235 patients from the audit was subjected to a second audit by a second clinician-auditor MDT. There were 12 further HRG changes (5%) and at least one further coding change occurred in 57 patients (24%). These changes were significantly lower than those observed in the pre-audit sample, but were also significantly greater than zero. Asking surgeons to 'code in theatre' and applying these codes without further quality assurance to activity resulted in an HRG error rate of 45%. The full audit sample was regrouped under HRG 3.5 and was compared with a previous audit of 1250 patients performed between 2007 and 2008. This comparison showed a reduction in the baseline rate of HRG change from 16% during the first audit cycle to 9% in the current audit cycle (P < 0.001). Otolaryngology coding is complex and susceptible to subjectivity, variability and error. Coding variability can be improved, but not eliminated, through regular education supported by an audit programme. © 2013 John Wiley & Sons Ltd.
Coupled field effects in BWR stability simulations using SIMULATE-3K
DOE Office of Scientific and Technical Information (OSTI.GOV)
Borkowski, J.; Smith, K.; Hagrman, D.
1996-12-31
The SIMULATE-3K code is the transient analysis version of the Studsvik advanced nodal reactor analysis code, SIMULATE-3. Recent developments have focused on further broadening the range of transient applications by refinement of core thermal-hydraulic models and on comparison with boiling water reactor (BWR) stability measurements performed at Ringhals unit 1, during the startups of cycles 14 through 17.
Accuracy of clinical coding for procedures in oral and maxillofacial surgery.
Khurram, S A; Warner, C; Henry, A M; Kumar, A; Mohammed-Ali, R I
2016-10-01
Clinical coding has important financial implications, and discrepancies in the assigned codes can directly affect the funding of a department and hospital. Over the last few years, numerous oversights have been noticed in the coding of oral and maxillofacial (OMF) procedures. To establish the accuracy and completeness of coding, we retrospectively analysed the records of patients during two time periods: March to May 2009 (324 patients), and January to March 2014 (200 patients). Two investigators independently collected and analysed the data to ensure accuracy and remove bias. A large proportion of operations were not assigned all the relevant codes, and only 32% - 33% were correct in both cycles. To our knowledge, this is the first reported audit of clinical coding in OMFS, and it highlights serious shortcomings that have substantial financial implications. Better input by the surgical team and improved communication between the surgical and coding departments will improve accuracy. Copyright © 2016 The British Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
An analysis of international nuclear fuel supply options
NASA Astrophysics Data System (ADS)
Taylor, J'tia Patrice
As the global demand for energy grows, many nations are considering developing or increasing nuclear capacity as a viable, long-term power source. To assess the possible expansion of nuclear power and the intricate relationships between established and aspirant nuclear generating entities, which cover the range of economics, security, and material supply and demand, requires models and system analysis tools that integrate all aspects of the nuclear enterprise. Computational tools and methods now exist across diverse research areas, such as operations research and nuclear engineering, to develop such a tool. This dissertation aims to develop methodologies, and to employ and expand on existing sources, to build a multipurpose tool to analyze international nuclear fuel supply options. The dissertation comprises two distinct components: the development of the Material, Economics, and Proliferation Assessment Tool (MEPAT), and analysis of fuel cycle scenarios using the tool. MEPAT is intended for unrestricted distribution and therefore uses publicly available and open-source codes in its development when possible. MEPAT is built using the Powersim Studio platform that is widely used in systems analysis. MEPAT is divided into three modules focusing on material movement, nonproliferation, and economics. The material movement module tracks material quantity in each process of the fuel cycle and in each nuclear program with respect to ownership, location and composition. The material movement module builds on techniques employed by fuel cycle models such as the Verifiable Fuel Cycle Simulation (VISION) code developed at the Idaho National Laboratory under the Advanced Fuel Cycle Initiative (AFCI) for the analysis of the domestic fuel cycle. Material movement parameters such as lending and reactor preference, as well as fuel cycle parameters such as process times and material factors, are user-specified through a Microsoft Excel data spreadsheet. The material movement module is the largest of the three, and the two other modules that assess nonproliferation and economics of the options are dependent on its output. Proliferation resistance measures from the literature are modified and incorporated in MEPAT. The module to assess the nonproliferation of the supply options allows the user to specify defining attributes for the fuel cycle processes, and determines significant quantities of materials as well as measures of proliferation resistance. The measure depends on user input and material information. The economics module allows the user to specify costs associated with different processes and other aspects of the fuel cycle. The simulation tool then calculates economic measures that relate the cost of the fuel cycle to electricity production. The second part of this dissertation consists of an examination of four fuel supply scenarios using MEPAT. The first is a simple scenario illustrating the modules and basic functions of MEPAT. The second scenario recreates a fuel supply study reported earlier in the literature, and compares MEPAT results with those reported earlier for validation. The third, more realistic scenario includes four nuclear programs, with one program entering the nuclear energy market. The fourth scenario assesses the reactor options available to the Hashemite Kingdom of Jordan, which is currently assessing available options to introduce nuclear power in the country.
The methodology developed and implemented in MEPAT to analyze the material, proliferation and economics of nuclear fuel supply options is expected to help simplify and assess different reactor and fuel options available to utilities, government agencies and international organizations.
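As an illustration of the kind of bookkeeping the material movement module performs, the sketch below tracks material quantities by program, fuel cycle process, and owner, and records a transfer between programs. It is a minimal Python sketch under assumed data structures; the names (FuelInventory, transfer) are hypothetical and are not taken from MEPAT.

```python
# Minimal sketch of per-program, per-process material bookkeeping,
# in the spirit of the material movement module described above.
# All class and field names are hypothetical illustrations.
from collections import defaultdict

class FuelInventory:
    def __init__(self):
        # mass[program][process] -> kg of material located in that process,
        # with a parallel record of which program owns it
        self.mass = defaultdict(lambda: defaultdict(float))
        self.owner = {}

    def load(self, program, process, kg, owner=None):
        self.mass[program][process] += kg
        self.owner[(program, process)] = owner or program

    def transfer(self, src_program, dst_program, process, kg, retain_ownership=True):
        """Move material between programs (e.g., a fuel lease); ownership may be retained."""
        if self.mass[src_program][process] < kg:
            raise ValueError("insufficient material to transfer")
        self.mass[src_program][process] -= kg
        self.mass[dst_program][process] += kg
        self.owner[(dst_program, process)] = src_program if retain_ownership else dst_program

inv = FuelInventory()
inv.load("SupplierState", "enriched_uranium", 25_000.0)
inv.transfer("SupplierState", "AspirantState", "enriched_uranium", 5_000.0)
print(inv.mass["AspirantState"]["enriched_uranium"],
      inv.owner[("AspirantState", "enriched_uranium")])
```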
A Performance Map for Ideal Air Breathing Pulse Detonation Engines
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2001-01-01
The performance of an ideal, air breathing Pulse Detonation Engine is described in a manner that is useful for application studies (e.g., as a stand-alone, propulsion system, in combined cycles, or in hybrid turbomachinery cycles). It is shown that the Pulse Detonation Engine may be characterized by an averaged total pressure ratio, which is a unique function of the inlet temperature, the fraction of the inlet flow containing a reacting mixture, and the stoichiometry of the mixture. The inlet temperature and stoichiometry (equivalence ratio) may in turn be combined to form a nondimensional heat addition parameter. For each value of this parameter, the average total enthalpy ratio and total pressure ratio across the device are functions of only the reactant fill fraction. Performance over the entire operating envelope can thus be presented on a single plot of total pressure ratio versus total enthalpy ratio for families of the heat addition parameter. Total pressure ratios are derived from thrust calculations obtained from an experimentally validated, reactive Euler code capable of computing complete Pulse Detonation Engine limit cycles. Results are presented which demonstrate the utility of the described method for assessing performance of the Pulse Detonation Engine in several potential applications. Limitations and assumptions of the analysis are discussed. Details of the particular detonative cycle used for the computations are described.
NASA Astrophysics Data System (ADS)
Melton, R.; Thomas, J.
With the rapid growth in the number of space actors, there has been a marked increase in the complexity and diversity of software systems utilized to support SSA target tracking, indication, warning, and collision avoidance. Historically, most SSA software has been constructed with "closed" proprietary code, which limits interoperability, inhibits the code transparency that some SSA customers need to develop domain expertise, and prevents the rapid injection of innovative concepts into these systems. Open-source aerospace software, a rapidly emerging alternative trend in code development, is based on open collaboration, which has the potential to bring greater transparency, interoperability, flexibility, and reduced development costs. Open-source software is easily adaptable, geared to rapidly changing mission needs, and can generally be delivered at lower costs to meet mission requirements. This paper outlines Ball's COSMOS C2 system, a fully open-source, web-enabled, command-and-control software architecture that provides several unique capabilities to move the current legacy SSA software paradigm to an open-source model that effectively enables pre- and post-launch asset command and control. Among the unique characteristics of COSMOS is the ease with which it can integrate with diverse hardware. This characteristic enables COSMOS to serve as the command-and-control platform for the full life-cycle development of SSA assets, from board test, to box test, to system integration and test, to on-orbit operations. The use of a modern scripting language, Ruby, also permits automated procedures to provide highly complex decision making for the tasking of SSA assets based on both telemetry data and data received from outside sources. Detailed logging enables quick anomaly detection and resolution. Integrated real-time and offline data graphing renders the visualization of both ground and on-orbit assets simple and straightforward.
Small Engine Technology (SET) - Task 14 Axisymmetric Engine Simulation Environment
NASA Technical Reports Server (NTRS)
Miller, Max J.
1999-01-01
As part of the NPSS (Numerical Propulsion Simulation System) project, NASA Lewis has a goal of developing a U.S. industry standard for an axisymmetric engine simulation environment. In this program, AlliedSignal Engines (AE) contributed to this goal by evaluating the ENG20 software and developing support tools. ENG20 is a NASA-developed axisymmetric engine simulation tool. The project was divided into six subtasks, which are summarized below: Evaluate the capabilities of the ENG20 code using an existing test case to see how this procedure can capture the component interactions for a full engine. Link AE's compressor and turbine axisymmetric streamline curvature codes (UD0300M and TAPS) with ENG20, which will provide the necessary boundary conditions for an ENG20 engine simulation. Evaluate GE's Global Data System (GDS) and attempt to use GDS to do the linking of codes described in Subtask 2 above. Use a turbofan engine test case to evaluate various aspects of the system, including the linkage of UD0300M and TAPS with ENG20 and the GE data storage system. Also, compare the solution results with cycle deck results, axisymmetric solutions (UD0300M and TAPS), and test data to determine the accuracy of the solution. Evaluate the order of accuracy and the convergence time for the solution. Provide a monthly status report and a final formal report documenting AE's evaluation of ENG20. Provide the developed interfaces that link UD0300M and TAPS with ENG20 to NASA. The interface that links UD0300M with ENG20 will be compatible with the industry version of UD0300M.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simunovic, Srdjan
2015-02-16
CASL's modeling and simulation technology, the Virtual Environment for Reactor Applications (VERA), incorporates coupled physics and science-based models, state-of-the-art numerical methods, modern computational science, integrated uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. The computational simulation component of VERA is the VERA Core Simulator (VERA-CS). The core simulator is the specific collection of multi-physics computer codes used to model and deplete an LWR core over multiple cycles. The core simulator has a single common input file that drives all of the different physics codes. The parser code, VERAIn, converts VERA Input into an XML file that is used as input to different VERA codes.
Genomic and Epigenomic Insights into Nutrition and Brain Disorders
Dauncey, Margaret Joy
2013-01-01
Considerable evidence links many neuropsychiatric, neurodevelopmental and neurodegenerative disorders with multiple complex interactions between genetics and environmental factors such as nutrition. Mental health problems, autism, eating disorders, Alzheimer’s disease, schizophrenia, Parkinson’s disease and brain tumours are related to individual variability in numerous protein-coding and non-coding regions of the genome. However, genotype does not necessarily determine neurological phenotype because the epigenome modulates gene expression in response to endogenous and exogenous regulators, throughout the life-cycle. Studies using both genome-wide analysis of multiple genes and comprehensive analysis of specific genes are providing new insights into genetic and epigenetic mechanisms underlying nutrition and neuroscience. This review provides a critical evaluation of the following related areas: (1) recent advances in genomic and epigenomic technologies, and their relevance to brain disorders; (2) the emerging role of non-coding RNAs as key regulators of transcription, epigenetic processes and gene silencing; (3) novel approaches to nutrition, epigenetics and neuroscience; (4) gene-environment interactions, especially in the serotonergic system, as a paradigm of the multiple signalling pathways affected in neuropsychiatric and neurological disorders. Current and future advances in these four areas should contribute significantly to the prevention, amelioration and treatment of multiple devastating brain disorders. PMID:23503168
The cell cycle of early mammalian embryos: lessons from genetic mouse models.
Artus, Jérôme; Babinet, Charles; Cohen-Tannoudji, Michel
2006-03-01
Genes coding for cell cycle components predicted to be essential for its regulation have been shown to be dispensable in mice, at the whole organism level. Such studies have highlighted the extraordinary plasticity of the embryonic cell cycle and suggest that many aspects of in vivo cell cycle regulation remain to be discovered. Here, we discuss the particularities of the mouse early embryonic cell cycle and review the mutations that result in cell cycle defects during mouse early embryogenesis, including deficiencies for genes of the cyclin family (cyclin A2 and B1), genes involved in cell cycle checkpoints (Mad2, Bub3, Chk1, Atr), genes involved in ubiquitin and ubiquitin-like pathways (Uba3, Ubc9, Cul1, Cul3, Apc2, Apc10, Csn2) as well as genes the function of which had not been previously ascribed to cell cycle regulation (Cdc2P1, E4F and Omcg1).
PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems.
Ghaffarizadeh, Ahmadreza; Heiland, Randy; Friedman, Samuel H; Mumenthaler, Shannon M; Macklin, Paul
2018-02-01
Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal "virtual laboratory" for such multicellular systems simulates both the biochemical microenvironment (the "stage") and many mechanically and biochemically interacting cells (the "players" upon the stage). PhysiCell, a physics-based multicellular simulator, is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility "out of the box." The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations of up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a "cellular cargo delivery" system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net.
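The passage above describes the agent-based core of PhysiCell: discrete cells that cycle, divide, and die while interacting with their microenvironment. The following is a deliberately minimal Python sketch of that general idea, not PhysiCell's C++ API; the class name, rates, and update rule are illustrative assumptions only.

```python
# Toy agent-based sketch of cell cycling and apoptosis, illustrating the
# general simulation pattern described above (not PhysiCell code or API).
import random

class Cell:
    def __init__(self, volume=1.0):
        self.volume = volume
        self.age_in_cycle = 0.0   # hours since last division

def step(cells, dt=0.1, cycle_time=18.0, apoptosis_rate=1e-3, growth_rate=0.05):
    """Advance every cell by dt hours: grow, possibly divide, possibly die."""
    new_cells = []
    for cell in cells:
        if random.random() < apoptosis_rate * dt:
            continue                       # cell undergoes apoptosis and is removed
        cell.volume += growth_rate * dt
        cell.age_in_cycle += dt
        if cell.age_in_cycle >= cycle_time:
            cell.age_in_cycle = 0.0
            cell.volume /= 2.0
            new_cells.append(Cell(volume=cell.volume))  # division creates a daughter
        new_cells.append(cell)
    return new_cells

population = [Cell() for _ in range(100)]
for _ in range(240):                        # simulate 24 hours in 0.1 h steps
    population = step(population)
print("cells after 24 h:", len(population))
```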
Shuttle cryogenic supply system optimization study. Volume 5A-1: Users manual for math models
NASA Technical Reports Server (NTRS)
1973-01-01
The Integrated Math Model for Cryogenic Systems is a flexible, broadly applicable systems parametric analysis tool. The program will effectively accommodate systems of considerable complexity involving large numbers of performance dependent variables such as are found in the individual and integrated cryogen systems. Basically, the program logic structure pursues an orderly progression path through any given system in much the same fashion as is employed for manual systems analysis. The system configuration schematic is converted to an alpha-numeric formatted configuration data table input starting with the cryogen consumer and identifying all components, such as lines, fittings, and valves, each in its proper order and ending with the cryogen supply source assembly. Then, for each of the constituent component assemblies, such as gas generators, turbo machinery, heat exchangers, and accumulators, the performance requirements are assembled in input data tabulations. Systems operating constraints and duty cycle definitions are further added as input data coded to the configuration operating sequence.
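To make the configuration-table idea concrete, here is a small Python sketch of how such an ordered, alphanumeric component table might be represented, starting at the cryogen consumer and ending at the supply assembly. The field names and component entries are hypothetical; they are not taken from the math model's actual input format.

```python
# Hypothetical representation of an ordered cryogen-system configuration table,
# walked from consumer to supply in the manner described above.
configuration_table = [
    {"id": "C-01", "type": "consumer",       "name": "fuel cell reactant inlet"},
    {"id": "V-03", "type": "valve",          "name": "shutoff valve"},
    {"id": "L-12", "type": "line",           "name": "transfer line", "length_m": 4.2},
    {"id": "F-07", "type": "fitting",        "name": "tee fitting"},
    {"id": "HX-1", "type": "heat_exchanger", "name": "boiloff heat exchanger"},
    {"id": "T-01", "type": "supply",         "name": "LH2 storage tank assembly"},
]

def walk(table):
    """Traverse the system in configuration order, as a manual analysis would."""
    for component in table:
        print(f'{component["id"]:>5}  {component["type"]:<15} {component["name"]}')

walk(configuration_table)
```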
Beermann, Julia; Kirste, Dominique; Iwanov, Katharina; Lu, Dongchao; Kleemiß, Felix; Kumarswamy, Regalla; Schimmel, Katharina; Bär, Christian; Thum, Thomas
2018-01-01
The mammalian cell cycle is a complex and tightly controlled event. Myriads of different control mechanisms are involved in its regulation. Long non-coding RNAs (lncRNA) have emerged as important regulators of many cellular processes including cellular proliferation. However, a more global and unbiased approach to identify lncRNAs with importance for cell proliferation is missing. Here, we present a lentiviral shRNA library-based approach for functional lncRNA profiling. We validated our library approach in NIH3T3 (3T3) fibroblasts by identifying lncRNAs critically involved in cell proliferation. Using stringent selection criteria we identified lncRNA NR_015491.1 out of 3842 different RNA targets represented in our library. We termed this transcript Ntep (non-coding transcript essential for proliferation), as a bona fide lncRNA essential for cell cycle progression. Inhibition of Ntep in 3T3 and primary fibroblasts prevented normal cell growth and expression of key fibroblast markers. Mechanistically, we discovered that Ntep is important to activate P53 concomitant with increased apoptosis and cell cycle blockade in late G2/M. Our findings suggest Ntep to serve as an important regulator of fibroblast proliferation and function. In summary, our study demonstrates the applicability of an innovative shRNA library approach to identify long non-coding RNA functions in a massive parallel approach. PMID:29099486
Trypsteen, Wim; Mohammadi, Pejman; Van Hecke, Clarissa; Mestdagh, Pieter; Lefever, Steve; Saeys, Yvan; De Bleser, Pieter; Vandesompele, Jo; Ciuffi, Angela; Vandekerckhove, Linos; De Spiegelaere, Ward
2016-10-26
Studying the effects of HIV infection on the host transcriptome has typically focused on protein-coding genes. However, recent advances in the field of RNA sequencing revealed that long non-coding RNAs (lncRNAs) add an extensive additional layer to the cell's molecular network. Here, we performed transcriptome profiling throughout a primary HIV infection in vitro to investigate lncRNA expression at the different HIV replication cycle processes (reverse transcription, integration and particle production). Subsequently, guilt-by-association, transcription factor and co-expression analysis were performed to infer biological roles for the lncRNAs identified in the HIV-host interplay. Many lncRNAs were suggested to play a role in mechanisms relying on proteasomal and ubiquitination pathways, apoptosis, DNA damage responses and cell cycle regulation. Through transcription factor binding analysis, we found that lncRNAs display a distinct transcriptional regulation profile as compared to protein coding mRNAs, suggesting that mRNAs and lncRNAs are independently modulated. In addition, we identified five differentially expressed lncRNA-mRNA pairs with mRNA involvement in HIV pathogenesis with possible cis regulatory lncRNAs that control nearby mRNA expression and function. Altogether, the present study demonstrates that lncRNAs add a new dimension to the HIV-host interplay and should be further investigated as they may represent targets for controlling HIV replication.
High-temperature solar receiver integrated with a short-term storage system
NASA Astrophysics Data System (ADS)
Giovannelli, Ambra; Bashir, Muhammad Anser; Archilei, Erika Maria
2017-06-01
Small-Scale Concentrated Solar Power Plants could have a potential market for off-grid applications in rural contexts with limited access to the electrical grid and favorable environmental characteristics. Some Small-Scale plants have already been developed, such as the 25-30 kWe Dish-Stirling engine. Others are under development, for example plants based on Parabolic Trough Collectors coupled with Organic Rankine Cycles. Furthermore, the technological progress achieved in the development of new small high-temperature solar receivers makes it possible to develop interesting systems based on Micro Gas Turbines coupled with Dish collectors. Such systems could have several advantages in terms of costs, reliability and availability if compared with Dish-Stirling plants. In addition, Dish-Micro Gas Turbine systems are expected to have higher performance than Solar Organic Rankine Cycle plants. The present work focuses on some challenging aspects related to the design of small high-temperature solar receivers for Dish-Micro Gas Turbine systems. Natural fluctuations in the solar radiation can reduce system performance and seriously damage the Micro Gas Turbine. To stabilize system operation, the solar receiver has to provide sufficient thermal inertia. Therefore, a solar receiver integrated with a short-term storage system based on high-temperature phase-change materials is proposed in this paper. Steady-state and transient analyses (for thermal storage charge and discharge phases) have been carried out using the commercial CFD code Ansys-Fluent. Results are presented and discussed.
Turbine Internal and Film Cooling Modeling For 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Garg, Vijay; Ameri, Ali
2005-01-01
The aim of this research project is to make use of NASA Glenn on-site computational facilities in order to develop, validate and apply aerodynamic, heat transfer, and turbine cooling models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes such as the Glenn-HT code. Specific areas of effort include: Application of the Glenn-HT code to specific configurations made available under the Turbine Based Combined Cycle (TBCC) and Ultra Efficient Engine Technology (UEET) projects. Validating the use of a multi-block code for the time-accurate computation of the detailed flow and heat transfer of cooled turbine airfoils. The goal of the current research is to improve the predictive ability of the Glenn-HT code. This will enable one to design more efficient turbine components for both aviation and power generation. The models will be tested against specific configurations provided by NASA Glenn.
Han, Zhifeng; Liu, Jianye; Li, Rongbing; Zeng, Qinghua; Wang, Yi
2017-07-04
BeiDou system navigation messages are modulated with a secondary NH (Neumann-Hoffman) code of 1 kbps, where frequent bit transitions limit the coherent integration time to 1 millisecond. Therefore, a bit synchronization algorithm is necessary to obtain bit edges and NH code phases. In order to realize bit synchronization for BeiDou weak signals with large frequency deviation, a bit synchronization algorithm based on differential coherent processing and maximum likelihood detection is proposed. Firstly, a differential coherent approach is used to remove the effect of frequency deviation, and the differential delay time is set to a multiple of the bit period to remove the influence of the NH code. Secondly, maximum likelihood detection is used to improve the detection probability of weak signals. Finally, Monte Carlo simulations are conducted to analyze the detection performance of the proposed algorithm compared with a traditional algorithm at C/N0 values of 20-40 dB-Hz and different frequency deviations. The results show that the proposed algorithm outperforms the traditional method with a frequency deviation of 50 Hz. This algorithm can effectively remove the effect of the BeiDou NH code and weaken the influence of frequency deviation. To confirm the feasibility of the proposed algorithm, real data tests are conducted. The proposed algorithm is suitable for BeiDou weak signal bit synchronization with large frequency deviation.
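The following Python sketch illustrates the general differential-coherent idea described above: 1 ms correlator outputs are multiplied by the conjugate of the output one data-bit period earlier (a multiple of the NH bit cycle), so the NH code and most of the residual carrier cancel, and a candidate bit edge is chosen by maximizing an accumulated metric. This is a highly simplified illustration under assumed signal parameters, not the authors' algorithm or a full maximum likelihood detector.

```python
# Simplified differential-coherent bit-edge search for a BeiDou-like signal.
# y holds complex 1 ms correlator outputs; the NH code repeats every data bit
# (20 ms), so a 20-sample differential lag cancels the NH modulation and most
# of the residual carrier. Illustrative only; parameters are assumptions.
import numpy as np

BIT_MS = 20  # data-bit period in 1 ms samples

def find_bit_edge(y):
    z = y[BIT_MS:] * np.conj(y[:-BIT_MS])        # differential products
    metrics = np.zeros(BIT_MS)
    for offset in range(BIT_MS):                 # candidate bit edges (0..19 ms)
        n_bits = (len(z) - offset) // BIT_MS
        blocks = z[offset:offset + n_bits * BIT_MS].reshape(n_bits, BIT_MS)
        metrics[offset] = np.sum(np.abs(blocks.sum(axis=1)))  # coherent within bit
    return int(np.argmax(metrics)), metrics

# Synthetic test signal: random data bits and NH chips, 50 Hz frequency error.
rng = np.random.default_rng(1)
edge, n_ms, f_err = 7, 2000, 50.0
nh = rng.choice([-1, 1], BIT_MS)
bits = rng.choice([-1, 1], n_ms // BIT_MS + 2)
mod = np.array([bits[(n - edge) // BIT_MS] * nh[(n - edge) % BIT_MS]
                for n in range(n_ms)])
t = np.arange(n_ms) * 1e-3
y = mod * np.exp(2j * np.pi * f_err * t)
y = y + 0.5 * (rng.standard_normal(n_ms) + 1j * rng.standard_normal(n_ms))

print("estimated bit edge (ms):", find_bit_edge(y)[0])
```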
Manley, Ray; Satiani, Bhagwan
2009-11-01
With the widening gap between overhead expenses and reimbursement, management of the revenue cycle is a critical part of a successful vascular surgery practice. It is important to review the data on all the components of the revenue cycle: payer contracting, appointment scheduling, preregistration, the registration process, coding and capturing charges, proper billing of patients and insurers, follow-up of accounts receivable, and finally the use of appropriate benchmarking. The industry benchmarks used should be those of peers in identical groups. Warning signs of poor performance are discussed, enabling the practice to formulate a performance improvement plan.
The GRO remote terminal system
NASA Technical Reports Server (NTRS)
Zillig, David J.; Valvano, Joe
1994-01-01
In March 1992, NASA HQ challenged GSFC/Code 531 to propose a fast, low-cost approach to close the Tracking Data Relay Satellite System (TDRSS) Zone-of-Exclusion (ZOE) over the Indian Ocean in order to provide global communications coverage for the Compton Gamma Ray Observatory (GRO) spacecraft. GRO had lost its tape recording capability which limited its valuable science data return to real-time contacts with the TDRS-E and TDRS-W synchronous data relay satellites, yielding only approximately 62 percent of the possible data obtainable. To achieve global coverage, a TDRS spacecraft would have to be moved over the Indian Ocean out of line-of-sight control of White Sands Ground Terminal (WSGT). To minimize operations life cycle costs, Headquarters also set a goal for remote control, from the WSGT, of the overseas ground station which was required for direct communications with TDRS-1. On August 27, 1992, Code 531 was given the go ahead to implement the proposed GRO Relay Terminal System (GRTS). This paper describes the Remote Ground Relay Terminal (RGRT) which went operational at the Canberra Deep Space Communications Complex (CDSCC) in Canberra, Australia in December 1993 and is currently augmenting the TDRSS constellation in returning between 80-100 percent of GRO science data under the control of a single operator at WSGT.
Kapeller, Christoph; Kamada, Kyousuke; Ogawa, Hiroshi; Prueckl, Robert; Scharinger, Josef; Guger, Christoph
2014-01-01
A brain-computer-interface (BCI) allows the user to control a device or software with brain activity. Many BCIs rely on visual stimuli with constant stimulation cycles that elicit steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG). This EEG response can be generated with an LED or a computer screen flashing at a constant frequency, and similar EEG activity can be elicited with pseudo-random stimulation sequences on a screen (code-based BCI). Using electrocorticography (ECoG) instead of EEG promises higher spatial and temporal resolution and leads to more dominant evoked potentials due to visual stimulation. This work is focused on BCIs based on visual evoked potentials (VEP) and their capability as a continuous control interface for augmentation of video applications. One 35-year-old female subject with implanted subdural grids participated in the study. The task was to select one out of four visual targets, while each was flickering with a code sequence. After a calibration run including 200 code sequences, a linear classifier was used during an evaluation run to identify the selected visual target based on the generated code-based VEPs over 20 trials. Multiple ECoG buffer lengths were tested and the subject reached a mean online classification accuracy of 99.21% for a window length of 3.15 s. Finally, the subject performed an unsupervised free run in combination with visual feedback of the current selection. Additionally, an algorithm was implemented to suppress false positive selections, which allowed the subject to start and stop the BCI at any time. The code-based BCI system attained very high online accuracy, which makes this approach very promising for control applications where a continuous control signal is needed. PMID:25147509
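A common way to decode code-based VEPs, consistent with the calibration/evaluation procedure described above, is to average calibration epochs into one template per target and then assign each new window to the target whose template it correlates with most strongly. The sketch below shows that template-correlation idea in Python with synthetic data; it is not the classifier used in the study, and all array shapes and names are illustrative assumptions.

```python
# Template-matching sketch for a code-based VEP BCI (illustrative, not the
# study's classifier). Calibration epochs are averaged per target; a test
# window is assigned to the template with the highest Pearson correlation.
import numpy as np

def build_templates(calib_epochs, labels, n_targets):
    """calib_epochs: (n_epochs, n_samples) single-channel epochs."""
    return np.array([calib_epochs[labels == k].mean(axis=0)
                     for k in range(n_targets)])

def classify(window, templates):
    corr = [np.corrcoef(window, tpl)[0, 1] for tpl in templates]
    return int(np.argmax(corr)), corr

# Synthetic demo: 4 targets, each with a distinct pseudo-random code response.
rng = np.random.default_rng(0)
n_targets, n_samples = 4, 630                   # ~3.15 s at 200 Hz (assumed)
codes = rng.choice([-1.0, 1.0], (n_targets, n_samples))
labels = np.repeat(np.arange(n_targets), 50)    # 200 calibration epochs
calib = codes[labels] + 0.8 * rng.standard_normal((len(labels), n_samples))
templates = build_templates(calib, labels, n_targets)

test = codes[2] + 0.8 * rng.standard_normal(n_samples)   # subject attends target 2
print("decoded target:", classify(test, templates)[0])
```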
Concept and performance study of turbocharged solid propellant ramjet
NASA Astrophysics Data System (ADS)
Li, Jiang; Liu, Kai; Liu, Yang; Liu, Shichang
2018-06-01
This study proposes a turbocharged solid propellant ramjet (TSPR) propulsion system that integrates a turbocharged system consisting of a solid propellant (SP) air turbo rocket (ATR) and the fuel-rich gas generator of a solid propellant ramjet (SPR). First, a suitable propellant scheme was determined for the TSPR. A solid hydrocarbon propellant is used to generate gas for driving the turbine, and a boron-based fuel-rich propellant is used to provide fuel-rich gas to the afterburner. An appropriate TSPR structure was also determined. The TSPR's thermodynamic cycle was analysed to prove its theoretical feasibility. The results showed that the TSPR's specific cycle power was larger than those of the SP-ATR and SPR, and its thermal efficiency was slightly less than that of the SP-ATR. Overall, the TSPR showed optimal performance in a wide flight envelope. The specific impulses and specific thrusts of the TSPR, SP-ATR, and SPR in the flight envelope were calculated and compared. The TSPR's flight envelope roughly overlapped that of the SP-ATR, its specific impulse was larger than that of the SP-ATR, and its specific thrust was larger than those of the SP-ATR and SPR. Attempts to improve the TSPR off-design performance prompted our proposal of a control plan for off-design operation in which both the turbocharger corrected speed and the combustor excess gas coefficient are kept constant. An off-design performance model was established by analysing the TSPR working process. By calculating the off-design performance of the TSPR under different control plans, we concluded that the TSPR with a constant corrected speed had a wider flight envelope, higher thrust, and higher specific impulse than the TSPR with a constant physical speed. The results of this study can provide a reference for further studies on TSPRs.
Hafer, Jocelyn F; Boyer, Katherine A
2017-01-01
Coordination variability (CV) quantifies the variety of movement patterns an individual uses during a task and may provide a measure of the flexibility of that individual's motor system. While segment CV is growing in popularity as a marker of motor system health or adaptability, it is not known how many strides of data are needed to reliably calculate CV. This study aimed to determine the number of strides needed to reliably calculate CV in treadmill walking and running, and to compare CV between walking and running in a healthy population. Ten healthy young adults walked and ran at preferred speeds on a treadmill and a modified vector coding technique was used to calculate CV for the following segment couples: pelvis frontal plane vs. thigh frontal plane, thigh sagittal plane vs. shank sagittal plane, thigh sagittal plane vs. shank transverse plane, and shank transverse plane vs. rearfoot frontal plane. CV for each coupling of interest was calculated for 2-15 strides for each participant and gait type. Mean CV was calculated across the entire gait cycle and, separately, for 4 phases of the gait cycle. For running and walking, 8 and 10 strides, respectively, were sufficient to obtain a reliable CV estimate. CV was significantly different between walking and running for the thigh vs. shank couple comparisons. These results suggest that 10 strides of treadmill data are needed to reliably calculate CV for walking and running. Additionally, the differences in CV between walking and running suggest that the role of knee (i.e., inter-thigh-shank) control may differ between these forms of locomotion. Copyright © 2016 Elsevier B.V. All rights reserved.
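For readers unfamiliar with the modified vector coding technique mentioned above, the sketch below shows one common formulation: the coupling angle between consecutive segment-angle increments is computed for each stride, and CV at each point of the gait cycle is the circular standard deviation of that angle across strides. This is a generic Python illustration under assumed array shapes, not the exact processing pipeline used in the study.

```python
# Vector-coding coupling-angle variability (CV) sketch. Inputs are segment
# angles resampled to a common number of points per gait cycle:
#   proximal, distal: arrays of shape (n_strides, n_points), in degrees.
# Illustrative implementation; not the study's exact pipeline.
import numpy as np

def coupling_angle(proximal, distal):
    """Coupling angle (deg) of each frame-to-frame increment, per stride."""
    dprox = np.diff(proximal, axis=1)
    ddist = np.diff(distal, axis=1)
    return np.degrees(np.arctan2(ddist, dprox)) % 360.0

def coordination_variability(gamma):
    """Circular standard deviation (deg) across strides at each cycle point."""
    rad = np.radians(gamma)
    r = np.abs(np.mean(np.exp(1j * rad), axis=0))     # mean resultant length
    return np.degrees(np.sqrt(2.0 * (1.0 - r)))

# Synthetic example: 10 strides of thigh/shank sagittal angles with noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 101)
thigh = 20 * np.sin(t) + rng.standard_normal((10, t.size))
shank = 35 * np.sin(t - 0.4) + rng.standard_normal((10, t.size))
cv = coordination_variability(coupling_angle(thigh, shank))
print("mean CV over the gait cycle: %.1f deg" % cv.mean())
```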
Coordinated design of coding and modulation systems
NASA Technical Reports Server (NTRS)
Massey, J. L.; Ancheta, T.; Johannesson, R.; Lauer, G.; Lee, L.
1976-01-01
The joint optimization of the coding and modulation systems employed in telemetry systems was investigated. Emphasis was placed on formulating inner and outer coding standards used by the Goddard Spaceflight Center. Convolutional codes were found that are nearly optimum for use with Viterbi decoding in the inner coding of concatenated coding systems. A convolutional code, the unit-memory code, was discovered and is ideal for inner system usage because of its byte-oriented structure. Simulations of sequential decoding on the deep-space channel were carried out to compare directly various convolutional codes that are proposed for use in deep-space systems.
The application of domain-driven design in NMS
NASA Astrophysics Data System (ADS)
Zhang, Jinsong; Chen, Yan; Qin, Shengjun
2011-12-01
In the traditional data-model-driven design approach, the system analysis and design phases are often separated, which prevents requirements information from being expressed explicitly. This approach also tends to lead developers toward process-oriented programming, leaving the code between modules or between layers disordered, so it is hard to meet system scalability requirements. This paper proposes a software hierarchy based on a rich domain model, following domain-driven design, named FHRDM; the Webwork + Spring + Hibernate (WSH) framework is then selected. Domain-driven design aims to construct a domain model that meets both the demands of the field in which the software operates and the needs of software development. In this way, problems in Navigational Maritime System (NMS) development, such as large business volumes, difficulty of requirements elicitation, high development costs and long development cycles, can be resolved successfully.
Using mental mapping to unpack perceived cycling risk.
Manton, Richard; Rau, Henrike; Fahy, Frances; Sheahan, Jerome; Clifford, Eoghan
2016-03-01
Cycling is the most energy-efficient mode of transport and can bring extensive environmental, social and economic benefits. Research has highlighted negative perceptions of safety as a major barrier to the growth of cycling. Understanding these perceptions through the application of novel place-sensitive methodological tools such as mental mapping could inform measures to increase cyclist numbers and consequently improve cyclist safety. Key steps to achieving this include: (a) the design of infrastructure to reduce actual risks and (b) targeted work on improving safety perceptions among current and future cyclists. This study combines mental mapping, a stated-preference survey and a transport infrastructure inventory to unpack perceptions of cycling risk and to reveal both overlaps and discrepancies between perceived and actual characteristics of the physical environment. Participants translate mentally mapped cycle routes onto hard-copy base-maps, colour-coding road sections according to risk, while a transport infrastructure inventory captures the objective cycling environment. These qualitative and quantitative data are matched using Geographic Information Systems and exported to statistical analysis software to model the individual and (infra)structural determinants of perceived cycling risk. This method was applied to cycling conditions in Galway City (Ireland). Participants' (n=104) mental maps delivered data-rich perceived safety observations (n=484) and initial comparison with locations of cycling collisions suggests some alignment between perception and reality, particularly relating to danger at roundabouts. Attributing individual and (infra)structural characteristics to each observation, a Generalised Linear Mixed Model statistical analysis identified segregated infrastructure, road width, the number of vehicles as well as gender and cycling experience as significant, and interactions were found between individual and infrastructural variables. The paper concludes that mental mapping is a highly useful tool for assessing perceptions of cycling risk with a strong visual aspect and significant potential for public participation. This distinguishes it from more traditional cycling safety assessment tools that focus solely on the technical assessment of cycling infrastructure. Further development of online mapping tools is recommended as part of bicycle suitability measures to engage cyclists and the general public and to inform 'soft' and 'hard' cycling policy responses. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Toride, N.; Matsuoka, K.
2017-12-01
In order to predict the fate and transport of nitrogen in a reduced paddy field as a result of decomposition of organic matter, we implemented within the PHREEQC program a modified coupled carbon and nitrogen cycling model based on the LEACHM code. Soil organic matter (SOM) decay processes from organic carbon (Org-C) to biomass carbon (Bio-C), humus carbon (Hum-C), and carbon dioxide (CO2) were described using first-order kinetics. Bio-C was recycled into the organic pool. When oxygen was available under aerobic conditions, O2 was used as an electron acceptor to produce CO2. When O2 availability was low, other electron acceptors such as NO3-, Mn4+, Fe3+, and SO42- were used depending on the redox potential. Decomposition of Org-N was related to the carbon cycle using the C/N ratio. Mineralization and immobilization were determined based on available NH4-N and the nitrogen demand for the formation of biomass and humus. Although nitrification was independently described with a first-order decay process, denitrification was linked with the SOM decay since NO3- was an electron acceptor for the CO2 production. Proton reactions were coupled with the nitrification from NH4+ to NO3- and the ammonium generation from NH3 to NH4+. Furthermore, cation and anion exchange reactions were included with the permanent negative charges and the pH-dependent variable charges. The carbon and nitrogen cycling model described with PHREEQC was linked with HYDRUS-1D using the HP1 code. Various nitrogen and carbon transport scenarios were demonstrated for the application of organic matter to a saturated paddy soil.
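The first-order kinetics described above can be illustrated with a small explicit-Euler sketch in which Org-C decays into Bio-C, Hum-C, and CO2 according to fixed partitioning fractions, and Bio-C is recycled back into the organic pool. The rate constants and fractions below are arbitrary illustration values, not parameters from the LEACHM-based model.

```python
# Explicit-Euler sketch of first-order carbon pool kinetics: Org-C decays into
# Bio-C, Hum-C and CO2, and Bio-C is recycled to the organic pool, mirroring
# the structure described above. Rate constants and fractions are assumptions.
def simulate(org_c=1000.0, bio_c=0.0, hum_c=0.0, co2=0.0,
             k_org=0.02, k_bio=0.01,      # first-order rates (1/day), illustrative
             f_bio=0.4, f_hum=0.3,        # partitioning of decayed Org-C
             dt=1.0, days=365):
    for _ in range(int(days / dt)):
        decayed = k_org * org_c * dt      # Org-C leaving the organic pool
        recycled = k_bio * bio_c * dt     # Bio-C turned over back to Org-C
        org_c += recycled - decayed
        bio_c += f_bio * decayed - recycled
        hum_c += f_hum * decayed
        co2 += (1.0 - f_bio - f_hum) * decayed
    return org_c, bio_c, hum_c, co2

print("Org-C, Bio-C, Hum-C, CO2 after one year:",
      ["%.1f" % x for x in simulate()])
```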
Perceived Noise Analysis for Offset Jets Applied to Commercial Supersonic Aircraft
NASA Technical Reports Server (NTRS)
Huff, Dennis L.; Henderson, Brenda S.; Berton, Jeffrey J.; Seidel, Jonathan A.
2016-01-01
A systems analysis was performed with experimental jet noise data, engine/aircraft performance codes and aircraft noise prediction codes to assess takeoff noise levels and mission range for conceptual supersonic commercial aircraft. A parametric study was done to identify viable engine cycles that meet NASA's N+2 goals for noise and performance. Model-scale data from offset jets was used as input to the aircraft noise prediction code to determine the expected sound levels for the lateral certification point where jet noise dominates over all other noise sources. The noise predictions were used to determine the optimal orientation of the offset nozzles to minimize the noise at the lateral microphone location. An alternative takeoff procedure called programmed lapse rate was evaluated for noise reduction benefits. Results show there are two types of engines that provide acceptable range performance; one is a standard mixed-flow turbofan with a single-stage fan, and the other is a three-stream variable-cycle engine with a multi-stage fan. The engine with a single-stage fan has a lower specific thrust and is 8 to 10 EPNdB quieter for takeoff. Offset nozzles reduce the noise directed toward the thicker side of the outer flow stream, but have less benefit as the core nozzle pressure ratio is reduced and the bypass-to-core area ratio increases. At the systems level, for a three-engine N+2 aircraft with full-throttle takeoff, there is a 1.4 EPNdB margin to Chapter 3 noise regulations predicted for the lateral certification point (assuming jet noise dominates). With a 10% reduction in thrust just after takeoff rotation, the margin increases to 5.5 EPNdB. Margins to Chapter 4 and Chapter 14 levels will depend on the cumulative split between the three certification points, but it appears that low specific thrust engines with a 10% reduction in thrust (programmed lapse rate) can come close to meeting Chapter 14 noise levels. Further noise reduction is possible with additional reduction in takeoff thrust using programmed lapse rate, but studies are needed to investigate the practical limits for safety and takeoff regulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gerhard Strydom
2014-04-01
The INL PHISICS code system consists of three modules providing improved core simulation capability: INSTANT (performing 3D nodal transport core calculations), MRTAU (depletion and decay heat generation) and a perturbation/mixer module. Coupling of the PHISICS code suite to the thermal hydraulics system code RELAP5-3D has recently been finalized, and as part of the code verification and validation program the exercises defined for Phase I of the OECD/NEA MHTGR 350 MW Benchmark were completed. This paper provides an overview of the MHTGR Benchmark, and presents selected results of the three steady state exercises 1-3 defined for Phase I. For Exercise 1, a stand-alone steady-state neutronics solution for an End of Equilibrium Cycle Modular High Temperature Reactor (MHTGR) was calculated with INSTANT, using the provided geometry, material descriptions, and detailed cross-section libraries. Exercise 2 required the modeling of a stand-alone thermal fluids solution. The RELAP5-3D results of four sub-cases are discussed, consisting of various combinations of coolant bypass flows and material thermophysical properties. Exercise 3 combined the first two exercises in a coupled neutronics and thermal fluids solution, and the coupled code suite PHISICS/RELAP5-3D was used to calculate the results of two sub-cases. The main focus of the paper is a comparison of the traditional RELAP5-3D “ring” model approach vs. a much more detailed model that includes kinetics feedback at the individual block level and thermal feedbacks on a triangular sub-mesh. The higher fidelity of the block model is illustrated with comparison results on the temperature, power density and flux distributions, and the typical under-predictions produced by the ring model approach are highlighted.
Verification and Validation of a Navy ESPC Hindcast with Loosely Coupled Data Assimilation
NASA Astrophysics Data System (ADS)
Metzger, E. J.; Barton, N. P.; Smedstad, O. M.; Ruston, B. C.; Wallcraft, A. J.; Whitcomb, T. R.; Ridout, J. A.; Franklin, D. S.; Zamudio, L.; Posey, P. G.; Reynolds, C. A.; Phelps, M.
2016-12-01
The US Navy is developing an Earth System Prediction Capability (ESPC) to provide global environmental information to meet Navy and Department of Defense (DoD) operations and planning needs from the upper atmosphere to under the sea. It will be a fully coupled global atmosphere/ocean/ice/wave/land prediction system providing daily deterministic forecasts out to 16 days at high horizontal and vertical resolution, and daily probabilistic forecasts out to 45 days at lower resolution. The system will run at the Navy DoD Supercomputing Resource Center with an initial operational capability scheduled for the end of FY18 and the final operational capability scheduled for FY22. The individual model and data assimilation components include: atmosphere - NAVy Global Environmental Model (NAVGEM) and Naval Research Laboratory (NRL) Atmospheric Variational Data Assimilation System - Accelerated Representer (NAVDAS-AR); ocean - HYbrid Coordinate Ocean Model (HYCOM) and Navy Coupled Ocean Data Assimilation (NCODA); ice - Community Ice CodE (CICE) and NCODA; WAVEWATCH III™ and NCODA; and land - NAVGEM Land Surface Model (LSM). Currently, NAVGEM/HYCOM/CICE are three-way coupled and each model component is cycling with its respective assimilation scheme. The assimilation systems do not communicate with each other, but future plans call for these to be coupled as well. NAVGEM runs with a 6-hour update cycle while HYCOM/CICE run with a 24-hour update cycle. The T359L50 NAVGEM/0.08° HYCOM/0.08° CICE system has been integrated in hindcast mode and verification/validation metrics have been computed against unassimilated observations and against stand-alone versions of NAVGEM and HYCOM/CICE. This presentation will focus on typical operational diagnostics for atmosphere, ocean, and ice analyses including 500 hPa atmospheric height anomalies, low-level winds, temperature/salinity ocean depth profiles, ocean acoustical proxies, sea ice edge, and sea ice drift. Overall, the global coupled ESPC system is performing with comparable skill to the stand-alone systems at the nowcast time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowman, A.W.
1990-04-01
This paper describes an approach to solving air quality problems that frequently occur during iterations of the baseline change process. From a schedule standpoint, it is desirable to perform this evaluation in as short a time as possible, while budgetary pressures limit the size of the staff available to do the work. Without a method in place to deal with baseline change proposal requests, the environmental analysts may not be able to produce the analysis results in the time frame expected. Using a concept called the Rapid Response Air Quality Analysis System (RAAS), the problems of timing and cost become tractable. The system could be adapted to assess other atmospheric pathway impacts, e.g., acoustics or visibility. The air quality analysis system used to perform the environmental assessment (EA) analysis for the Salt Repository Project (part of the Civilian Radioactive Waste Management Program), and later to evaluate the consequences of proposed baseline changes, consists of three components: emission source data files; emission rates contained in spreadsheets; and impact assessment model codes. The spreadsheets contain user-written codes (macros) that calculate emission rates from (1) emission source data (e.g., numbers and locations of sources, detailed operating schedules, and source specifications including horsepower, load factor, and duty cycle); (2) emission factors such as those published by the U.S. Environmental Protection Agency; and (3) control efficiencies.
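A minimal sketch of the kind of emission-rate calculation the spreadsheet macros perform is shown below: an hourly rate is formed from source counts and specifications, an emission factor, and a control efficiency. The structure follows the three inputs listed above, but the numbers and field names are purely hypothetical illustration values, not project data or published EPA factors.

```python
# Hypothetical emission-rate calculation in the spirit of the spreadsheet
# macros described above. All numeric values are illustrative placeholders.
def emission_rate_g_per_hr(n_sources, horsepower, load_factor, duty_cycle,
                           emission_factor_g_per_hp_hr, control_efficiency):
    """Hourly mass emission rate for a group of identical combustion sources."""
    uncontrolled = (n_sources * horsepower * load_factor * duty_cycle
                    * emission_factor_g_per_hp_hr)
    return uncontrolled * (1.0 - control_efficiency)

# Example: 4 diesel engines, 250 hp each, 60% load, operating 75% of the hour,
# a placeholder NOx factor of 8 g/hp-hr, and 30% control efficiency.
rate = emission_rate_g_per_hr(4, 250.0, 0.60, 0.75, 8.0, 0.30)
print("NOx emission rate: %.0f g/hr" % rate)
```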
User's guide for GSMP, a General System Modeling Program. [In PL/I
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cook, J. M.
1979-10-01
GSMP is designed for use by systems analysis teams. Given compiled subroutines that model the behavior of components plus instructions as to how they are to be interconnected, this program links them together to model a complete system. GSMP offers a fast response to management requests for reconfigurations of old systems and even initial configurations of new systems. Standard system-analytic services are provided: parameter sweeps, graphics, free-form input and formatted output, file storage and recovery, user-tested error diagnostics, component model and integration checkout and debugging facilities, sensitivity analysis, and a multimethod optimizer with nonlinear constraint handling capability. Steady-state or cyclic time-dependence is simulated directly, initial-value problems only indirectly. The code is written in PL/I, but interfaces well with FORTRAN component models. Over the last five years GSMP has been used to model theta-pinch, tokamak, and heavy-ion fusion power plants, open- and closed-cycle magneto-hydrodynamic power plants, and total community energy systems.
Development of an Aeroelastic Code Based on an Euler/Navier-Stokes Aerodynamic Solver
NASA Technical Reports Server (NTRS)
Bakhle, Milind A.; Srivastava, Rakesh; Keith, Theo G., Jr.; Stefko, George L.; Janus, Mark J.
1996-01-01
This paper describes the development of an aeroelastic code (TURBO-AE) based on an Euler/Navier-Stokes unsteady aerodynamic analysis. A brief review of the relevant research in the area of propulsion aeroelasticity is presented. The paper briefly describes the original Euler/Navier-Stokes code (TURBO) and then details the development of the aeroelastic extensions. The aeroelastic formulation is described. The modeling of the dynamics of the blade using a modal approach is detailed, along with the grid deformation approach used to model the elastic deformation of the blade. The work-per-cycle approach used to evaluate aeroelastic stability is described. Representative results used to verify the code are presented. The paper concludes with an evaluation of the development thus far, and some plans for further development and validation of the TURBO-AE code.
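The work-per-cycle stability criterion mentioned above can be written as the integral of the unsteady aerodynamic modal force times the modal velocity over one vibration period; in the usual sign convention, positive aerodynamic work fed into the blade motion indicates negative aerodynamic damping and hence flutter. The short sketch below evaluates that integral numerically for synthetic harmonic signals; the signals, frequency, and sign convention are assumptions for illustration and are not TURBO-AE output.

```python
# Work-per-cycle evaluation sketch: integrate modal force times modal velocity
# over one vibration period. Positive net work into the blade motion implies
# negative aerodynamic damping (flutter). Signals below are synthetic.
import numpy as np

def work_per_cycle(force, velocity, dt):
    """Trapezoidal integral of F(t)*v(t) over the sampled cycle."""
    integrand = force * velocity
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))

freq_hz = 120.0                       # assumed blade-mode frequency
period = 1.0 / freq_hz
t = np.linspace(0.0, period, 401)
dt = t[1] - t[0]

q = 1.0e-3 * np.sin(2 * np.pi * freq_hz * t)            # modal displacement
v = np.gradient(q, dt)                                   # modal velocity
# Unsteady aerodynamic modal force lagging the motion by an assumed phase:
f_aero = 50.0 * np.sin(2 * np.pi * freq_hz * t + np.deg2rad(160.0))

w = work_per_cycle(f_aero, v, dt)
print("aerodynamic work per cycle = %.3e J (positive => destabilizing)" % w)
```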
Multiscale integral analysis of a HT leakage in a fusion nuclear power plant
NASA Astrophysics Data System (ADS)
Velarde, M.; Fradera, J.; Perlado, J. M.; Zamora, I.; Martínez-Saban, E.; Colomer, C.; Briani, P.
2016-05-01
This work presents an example of the application of an integral methodology based on a multiscale analysis that covers the whole tritium cycle within a nuclear fusion power plant, from a micro scale, analyzing key components where tritium is leaked through permeation, to a macro scale, considering its atmospheric transport. A leakage from the nuclear power plant (NPP) primary side to the secondary side of a heat exchanger (HEX) is considered for the present example. Both primary and secondary loop coolants are assumed to be He. The leak is located inside the HEX, releasing tritium in elemental tritium (HT) form to the secondary loop, where it permeates through the piping structural material to the exterior. The Heating Ventilation and Air Conditioning (HVAC) system removes the leaked tritium towards the NPP exhaust. The HEX is modelled with system codes and coupled to Computational Fluid Dynamics (CFD) codes to account for tritium dispersion inside the nuclear power plant buildings and in the site environment. Finally, tritium dispersion is calculated with an atmospheric transport code and a dosimetry analysis is carried out. Results show how the implemented methodology is capable of assessing the impact of tritium from the microscale to the atmospheric scale, including the dosimetric aspect.
NASA Astrophysics Data System (ADS)
Cai, Changsheng; Gao, Yang; Pan, Lin; Dai, Wujiao
2014-09-01
With the rapid development of the COMPASS system, it is currently capable of providing regional navigation services. In order to test its data quality and performance for single point positioning (SPP), experiments have been conducted under different observing conditions including open sky, under trees, near a glass wall, near a large area of water, under high-voltage lines and under a signal transmitting tower. To assess the COMPASS data quality, the code multipath, cycle slip occurrence rate and data availability were analyzed and compared to GPS data. The datasets obtained from the experiments have also been utilized to perform combined GPS/COMPASS SPP on an epoch-by-epoch basis using unsmoothed single-frequency code observations. The investigation of regional navigation performance targets low-accuracy applications, and all tests were conducted in Changsha, China, using the “SOUTH S82-C” GPS/COMPASS receivers. The results show that adding COMPASS observations can significantly improve the positioning accuracy of single-frequency GPS-only SPP in environments with limited satellite visibility. Since the COMPASS system is still in an initial operational stage, all results are obtained based on a fairly limited amount of data.
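For reference, a combined-constellation SPP solution of the kind described above is usually formed as an epoch-wise least-squares adjustment with five unknowns: three receiver coordinates and one receiver clock bias per constellation. The Python sketch below shows that structure with synthetic geometry; it is a generic illustration, not the processing used in the study, and the satellite positions, clock values, and noise level are assumptions.

```python
# Epoch-wise least-squares single point positioning with two constellation
# clock biases (GPS and COMPASS/BDS). Generic illustration with synthetic data.
import numpy as np

def spp_epoch(sat_pos, pseudoranges, systems, x0=np.zeros(3), iters=6):
    """sat_pos: (n,3) ECEF satellite positions [m]; systems: 'G' or 'C' per row."""
    sys_arr = np.array(systems)
    x = np.concatenate([x0, [0.0, 0.0]])          # [X, Y, Z, dt_GPS, dt_BDS] (m)
    for _ in range(iters):
        rho = np.linalg.norm(sat_pos - x[:3], axis=1)
        clock = np.where(sys_arr == "G", x[3], x[4])
        resid = pseudoranges - (rho + clock)
        A = np.zeros((len(pseudoranges), 5))
        A[:, :3] = (x[:3] - sat_pos) / rho[:, None]   # unit line-of-sight terms
        A[:, 3] = (sys_arr == "G").astype(float)
        A[:, 4] = (sys_arr == "C").astype(float)
        dx, *_ = np.linalg.lstsq(A, resid, rcond=None)
        x += dx
    return x

# Synthetic epoch: fixed receiver, 4 GPS + 3 BDS satellites in good geometry.
rng = np.random.default_rng(2)
truth = np.array([-2.0e6, 4.5e6, 3.8e6])
dirs = np.array([[1, 0, 1], [0, 1, 1], [-1, 0, 1], [0, -1, 1],
                 [1, 1, 1], [-1, 1, 2], [1, -1, 2]], float)
dirs /= np.linalg.norm(dirs, axis=1)[:, None]
sats = truth + 2.55e7 * dirs
sys_ids = ["G", "G", "G", "G", "C", "C", "C"]
clk = np.where(np.array(sys_ids) == "G", 120.0, 85.0)  # metres of clock bias
pr = np.linalg.norm(sats - truth, axis=1) + clk + 2.0 * rng.standard_normal(7)

sol = spp_epoch(sats, pr, sys_ids)
print("position error (m):", np.linalg.norm(sol[:3] - truth))
```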
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tome, Carlos N; Caro, J A; Lebensohn, R A
2010-01-01
Advancing the performance of Light Water Reactors, Advanced Nuclear Fuel Cycles, and Advanced Reactors, such as the Next Generation Nuclear Power Plants, requires enhancing our fundamental understanding of fuel and materials behavior under irradiation. The capability to accurately model the nuclear fuel systems to develop predictive tools is critical. Not only are fabrication and performance models needed to understand specific aspects of the nuclear fuel, but fully coupled fuel simulation codes are also required to achieve licensing of specific nuclear fuel designs for operation. The backbone of these codes, models, and simulations is a fundamental understanding and predictive capability for simulating the phase and microstructural behavior of the nuclear fuel system materials and matrices. In this paper we review the current status of the advanced modeling and simulation of nuclear reactor cladding, with emphasis on what is available and what is to be developed in each scale of the project, how we propose to pass information from one scale to the next, and what experimental information is required for benchmarking and advancing the modeling at each scale level.
Supersonics Project - Airport Noise Tech Challenge
NASA Technical Reports Server (NTRS)
Bridges, James
2010-01-01
The Airport Noise Tech Challenge research effort under the Supersonics Project is reviewed. While the goal of "Improved supersonic jet noise models validated on innovative nozzle concepts" remains the same, the success of the research effort has caused the thrust of the research to be modified going forward. The main activities from FY06-10 focused on development and validation of jet noise prediction codes. This required innovative diagnostic techniques to be developed and deployed, extensive jet noise and flow databases to be created, and computational tools to be developed and validated. Furthermore, in FY09-10, systems studies commissioned by the Supersonics Project showed that viable supersonic aircraft were within reach using variable cycle engine architectures if exhaust nozzle technology could provide 3-5 dB of suppression. The Project then began to focus on integrating the technologies being developed in its Tech Challenge areas to bring about successful system designs. Consequently, the Airport Noise Tech Challenge area has shifted efforts from developing jet noise prediction codes to using them to develop low-noise nozzle concepts for integration into supersonic aircraft. The new plan of research is briefly presented by technology and timelines.
Development of a security system for assisted reproductive technology (ART).
Hur, Yong Soo; Ryu, Eun Kyung; Park, Sung Jin; Yoon, Jeong; Yoon, San Hyun; Yang, Gi Deok; Hur, Chang Young; Lee, Won Don; Lim, Jin Ho
2015-01-01
In the field of assisted reproductive technology (ART), medical accidents can result in serious legal and social consequences. This study was conducted to develop a security system (called IVF-guardian; IG) that could prevent mismatching or mix-ups in ART. A software program was developed in collaboration with outside computer programmers. A quick response (QR) code was used to identify the patients, gametes and embryos in a format that was printed on a label. There was a possibility that embryo development could be affected by volatile organic compounds (VOC) in the printing material and adhesive material in the label paper. Further, LED light was used as the light source to recognize the QR code. Using mouse embryos, the effects of the label paper and LED light were examined. The stability of IG was assessed when applied in clinical practice after the system was developed. A total of 104 cycles formed the study group, and 82 cycles (from patients who did not want to use IG because of safety concerns and lack of confidence in the security system) to which IG was not applied comprised the control group. Many of the label paper samples were toxic to mouse embryo development. We selected a particular label paper (P-touch label) that did not affect mouse embryo development. The LED lights were non-toxic to the development of the mouse embryos under any experimental conditions. There were no differences in the clinical pregnancy rates between the IG-applied group and the control group (40/104 = 38.5 % and 30/82 = 36.6 %, respectively). The application of IG in clinical practice did not affect human embryo development or clinical outcomes. The use of IG reduces the misspelling of patient names. The use of IG had the disadvantage that each treatment step became more complicated, but the medical staff improved and became sufficiently confident in ART to offset this disadvantage. Patients who received treatment using the IG system also went through a somewhat tedious process, but there were no complaints. These patients gained further confidence in the practitioners over the course of treatment.
NASA Technical Reports Server (NTRS)
Rajagopal, Kadambi R.; DebChaudhury, Amitabha; Orient, George
2000-01-01
This report describes a probabilistic structural analysis performed to determine the probabilistic structural response under fluctuating random pressure loads for the Space Shuttle Main Engine (SSME) turnaround vane. It uses a newly developed frequency- and distance-dependent correlation model that has features to model the decay phenomena along the flow and across the flow, with the capability to introduce a phase delay. The analytical results are compared using two computer codes, SAFER (Spectral Analysis of Finite Element Responses) and NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), and with experimentally observed strain gage data. The computer code NESSUS, with an interface to a subset of the Composite Load Spectra (CLS) code, is used for the probabilistic analysis. A fatigue code was used to calculate fatigue damage due to the random pressure excitation. The random variables modeled include engine system primitive variables that influence the operating conditions, convection velocity coefficient, stress concentration factor, structural damping, and thickness of the inner and outer vanes. The need for an appropriate correlation model in addition to the magnitude of the PSD is emphasized. The study demonstrates that correlation characteristics even under random pressure loads are capable of causing resonance-like effects for some modes. The study identifies the important variables that contribute to the structural alternating stress response and drive the fatigue damage for the new design. Since the alternating stress for the new redesign is less than the endurance limit for the material, the damage due to high cycle fatigue is negligible.
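The frequency- and distance-dependent correlation model described above (exponential decay along and across the flow plus a convective phase delay) has the same general structure as a Corcos-type cross-spectral model. The sketch below shows that generic form in Python; the functional form, decay coefficients, and convection velocity are assumptions for illustration and are not the coefficients of the SSME turnaround vane model.

```python
# Corcos-type cross-spectral density model: exponential coherence decay in the
# streamwise and spanwise directions with a convective phase delay. The form
# and the coefficients below are generic illustrations, not the paper's model.
import numpy as np

def cross_spectrum(f_hz, dx_stream, dx_span, psd,
                   uc=250.0,            # convection velocity [m/s], assumed
                   alpha_stream=0.11,   # streamwise decay coefficient, assumed
                   alpha_span=0.7):     # spanwise decay coefficient, assumed
    """Cross-PSD between two points separated by (dx_stream, dx_span) metres."""
    omega = 2.0 * np.pi * f_hz
    decay = np.exp(-(alpha_stream * np.abs(dx_stream)
                     + alpha_span * np.abs(dx_span)) * omega / uc)
    phase = np.exp(-1j * omega * dx_stream / uc)     # convective phase delay
    return psd * decay * phase

f = np.linspace(10.0, 2000.0, 5)
print(np.round(np.abs(cross_spectrum(f, dx_stream=0.05, dx_span=0.0, psd=1.0)), 3))
```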
How Clean is your Local Air? Here's an app for that
NASA Astrophysics Data System (ADS)
Maskey, M.; Yang, E.; Christopher, S. A.; Keiser, K.; Nair, U. S.; Graves, S. J.
2011-12-01
Air quality is a vital element of our environment. Accurate and localized air quality information is critical for characterizing environmental impacts at the local and regional levels. Advances in location-aware handheld devices and air quality modeling have enabled a group of UAHuntsville scientists to develop a mobile app, LocalAQI, that informs users of current air quality index conditions and of forecasts up to twenty-four hours ahead. The air quality index is based on the Community Multiscale Air Quality Modeling System (CMAQ). UAHuntsville scientists have used satellite remote sensing products as inputs to CMAQ, resulting in forecast guidance for particulate matter air quality. The CMAQ output is processed to compute a standardized air quality index. Currently, the air quality index is available for the eastern half of the United States. LocalAQI consists of two main views: an air quality index view and a map view. The air quality index view displays current air quality for the zip code of a location of interest. The air quality index value is translated into a color-coded advisory system. In addition, users are able to cycle through available hourly forecasts for a location. This location-aware app defaults to the current air quality of the user's location. The map view displays color-coded air quality information for the eastern US with the ability to animate through the available forecasts. The app is developed using a cross-platform native application development tool, Appcelerator; hence LocalAQI is available for iOS and Android-based phones and tablets.
GCS component development cycle
NASA Astrophysics Data System (ADS)
Rodríguez, Jose A.; Macias, Rosa; Molgo, Jordi; Guerra, Dailos; Pi, Marti
2012-09-01
The GTC is an optical-infrared 10-meter segmented-mirror telescope at the ORM observatory in the Canary Islands (Spain). First light was on 13/07/2007, and the telescope has been in the operation phase since then. The GTC control system (GCS) is a distributed object- and component-oriented system based on RT-CORBA, and it is responsible for the management and operation of the telescope, including its instrumentation. GCS has used the Rational Unified Process (RUP) in its development. RUP is an iterative software development process framework. After analysing (use cases) and designing (UML) any of the GCS subsystems, an initial component description of its interface is obtained, and from that information a component specification is written. In order to improve code productivity, GCS has adopted code generation to transform this component specification into the skeleton of component classes based on a software framework called the Device Component Framework. Using the GCS development tools, based on javadoc and gcc, the component is generated, compiled and deployed in only one step, ready to be tested for the first time through our GUI inspector. The main advantages of this approach are the following: it reduces the learning curve of new developers and the development error rate, allows a systematic use of design patterns in the development and software reuse, speeds up the deliverables of the software product, massively improves the timescale, design consistency and design quality, and eliminates the future refactoring process required for the code.
Francis, Brian R.
2015-01-01
Although analysis of the genetic code has allowed explanations for its evolution to be proposed, little evidence exists in biochemistry and molecular biology to offer an explanation for the origin of the genetic code. In particular, two features of biology make the origin of the genetic code difficult to understand. First, nucleic acids are highly complicated polymers requiring numerous enzymes for biosynthesis. Secondly, proteins have a simple backbone with a set of 20 different amino acid side chains synthesized by a highly complicated ribosomal process in which mRNA sequences are read in triplets. Apparently, both nucleic acid and protein syntheses have extensive evolutionary histories. Supporting these processes is a complex metabolism and at the hub of metabolism are the carboxylic acid cycles. This paper advances the hypothesis that the earliest predecessor of the nucleic acids was a β-linked polyester made from malic acid, a highly conserved metabolite in the carboxylic acid cycles. In the β-linked polyester, the side chains are carboxylic acid groups capable of forming interstrand double hydrogen bonds. Evolution of the nucleic acids involved changes to the backbone and side chain of poly(β-d-malic acid). Conversion of the side chain carboxylic acid into a carboxamide or a longer side chain bearing a carboxamide group, allowed information polymers to form amide pairs between polyester chains. Aminoacylation of the hydroxyl groups of malic acid and its derivatives with simple amino acids such as glycine and alanine allowed coupling of polyester synthesis and protein synthesis. Use of polypeptides containing glycine and l-alanine for activation of two different monomers with either glycine or l-alanine allowed simple coded autocatalytic synthesis of polyesters and polypeptides and established the first genetic code. A primitive cell capable of supporting electron transport, thioester synthesis, reduction reactions, and synthesis of polyesters and polypeptides is proposed. The cell consists of an iron-sulfide particle enclosed by tholin, a heterogeneous organic material that is produced by Miller-Urey type experiments that simulate conditions on the early Earth. As the synthesis of nucleic acids evolved from β-linked polyesters, the singlet coding system for replication evolved into a four nucleotide/four amino acid process (AMP = aspartic acid, GMP = glycine, UMP = valine, CMP = alanine) and then into the triplet ribosomal process that permitted multiple copies of protein to be synthesized independent of replication. This hypothesis reconciles the “genetics first” and “metabolism first” approaches to the origin of life and explains why there are four bases in the genetic alphabet. PMID:25679748
Consistent criticality and radiation studies of Swiss spent nuclear fuel: The CS2M approach.
Rochman, D; Vasiliev, A; Ferroukhi, H; Pecchia, M
2018-06-15
In this paper, a new method is proposed to systematically calculate at the same time canister loading curves and radiation sources, based on the inventory information from an in-core fuel management system. As a demonstration, the isotopic contents of the assemblies come from a Swiss PWR, considering more than 6000 cases from 34 reactor cycles. The CS2M approach consists of combining four codes: CASMO and SIMULATE to extract the assembly characteristics (based on validated models), the SNF code for source emission, and MCNP for criticality calculations for specific canister loadings. The considered cases cover enrichments from 1.9 to 5.0% for the UO2 assemblies and 4.8% for the MOX, with assembly burnup values from 7 to 74 MWd/kgU. Because such a study is based on the individual fuel assembly history, it opens the possibility to optimize canister loadings from the point of view of criticality, decay heat and emission sources. Copyright © 2018 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Lee, Alice T.; Gunn, Todd; Pham, Tuan; Ricaldi, Ron
1994-01-01
This handbook documents the three software analysis processes the Space Station Software Analysis team uses to assess space station software, including their backgrounds, theories, tools, and analysis procedures. Potential applications of these analysis results are also presented. The first section describes how software complexity analysis provides quantitative information on code, such as code structure and risk areas, throughout the software life cycle. Software complexity analysis allows an analyst to understand the software structure, identify critical software components, assess risk areas within a software system, identify testing deficiencies, and recommend program improvements. Performing this type of analysis during the early design phases of software development can positively affect the process, and may prevent later, much larger, difficulties. The second section describes how software reliability estimation and prediction analysis, or software reliability, provides a quantitative means to measure the probability of failure-free operation of a computer program, and describes the two tools used by JSC to determine failure rates and design tradeoffs between reliability, costs, performance, and schedule.
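To make the notion of quantitative code-structure information concrete, the sketch below approximates one widely used metric, McCabe cyclomatic complexity, for Python functions using the standard ast module. It is illustrative only and is unrelated to the Space Station analysis tools, which target flight software.

```python
# Minimal sketch: approximate McCabe cyclomatic complexity for Python
# functions by counting decision points with the standard ast module.
# Illustrative only; the handbook's tooling targets flight software.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.And, ast.Or,
                  ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> dict[str, int]:
    tree = ast.parse(source)
    results = {}
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            decisions = sum(isinstance(n, DECISION_NODES) for n in ast.walk(node))
            results[node.name] = 1 + decisions   # M = 1 + number of decision points
    return results

if __name__ == "__main__":
    sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    for _ in range(3):
        x -= 1
    return "positive"
"""
    print(cyclomatic_complexity(sample))   # e.g. {'classify': 4}
```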
Support for life-cycle product reuse in NASA's SSE
NASA Technical Reports Server (NTRS)
Shotton, Charles
1989-01-01
The Software Support Environment (SSE) is a software factory for the production of Space Station Freedom Program operational software. The SSE is to be centrally developed and maintained and used to configure software production facilities in the field. The PRC product TTCQF provides for an automated qualification process and analysis of existing code that can be used for software reuse. The interrogation subsystem permits user queries of the reusable data and components which have been identified by an analyzer and qualified with associated metrics. The concept includes reuse of non-code life-cycle components such as requirements and designs. Possible types of reusable life-cycle components include templates, generics, and as-is items. Qualification of reusable elements requires analysis (separation of candidate components into primitives), qualification (evaluation of primitives for reusability according to reusability criteria) and loading (placing qualified elements into appropriate libraries). There can be different qualifications for different installations, methodologies, applications and components. Identifying reusable software and related components is labor-intensive and is best carried out as an integrated function of an SSE.
MAVRIC Flutter Model Transonic Limit Cycle Oscillation Test
NASA Technical Reports Server (NTRS)
Edwards, John W.; Schuster, David M.; Spain, Charles V.; Keller, Donald F.; Moses, Robert W.
2001-01-01
The Models for Aeroelastic Validation Research Involving Computation semi-span wind-tunnel model (MAVRIC-I), a business jet wing-fuselage flutter model, was tested in NASA Langley's Transonic Dynamics Tunnel with the goal of obtaining experimental data suitable for Computational Aeroelasticity code validation at transonic separation onset conditions. This research model is notable for its inexpensive construction and instrumentation installation procedures. Unsteady pressures and wing responses were obtained for three wingtip configurations: clean, tipstore, and winglet. Traditional flutter boundaries were measured over the range of M = 0.6 to 0.9 and maps of Limit Cycle Oscillation (LCO) behavior were made in the range of M = 0.85 to 0.95. Effects of dynamic pressure and angle-of-attack were measured. Testing in both R134a heavy gas and air provided unique data on Reynolds number, transition effects, and the effect of speed of sound on LCO behavior. The data set provides excellent code validation test cases for the important class of flow conditions involving shock-induced transonic flow separation onset at low wing angles, including Limit Cycle Oscillation behavior.
Relationship between paternal somatic health and assisted reproductive technology outcomes.
Eisenberg, Michael L; Li, Shufeng; Wise, Lauren A; Lynch, Courtney D; Nakajima, Steven; Meyers, Stuart A; Behr, Barry; Baker, Valerie L
2016-09-01
To study the association between paternal medical comorbidities and the outcomes of assisted reproductive technology (ART). Retrospective cohort study. Academic reproductive medicine center. We analyzed fresh ART cycles using freshly ejaculated sperm from the male partner of couples undergoing ART cycles from 2004 until 2014. We recorded patient and partner demographic characteristics. The cohort was linked to hospital billing data to obtain information on selected male partners' comorbidities identified using ICD-9-CM codes. None. Fertilization, clinical pregnancy, miscarriage, implantation, and live-birth rates as well as birth weights and gestational ages. In all, we identified 2,690 men who underwent 5,037 fresh ART cycles. Twenty-seven percent of men had at least one medical diagnosis. Men with nervous system diseases had on average lower pregnancy rates (23% vs. 30%) and live-birth rates (15% vs. 23%) than men without nervous system diseases. Lower fertilization rates were also observed among men with respiratory diseases (61% vs. 64%) and musculoskeletal diseases (61% vs. 64%) relative to those without these diseases. In addition, men with diseases of the endocrine system had smaller children (2,970 vs. 3,210 g) than men without such diseases. Finally, men with mental disorders had children born at an earlier gestational age (36.5 vs. 38.0 weeks). The current report identified a possible relationship between a man's health history and IVF outcomes. As these are potentially modifiable factors, further research should determine whether treatment for men's health conditions may improve or impair IVF outcomes. Copyright © 2016 American Society for Reproductive Medicine. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Tsou, P.; Stolte, W.
1978-01-01
The paper examines the impact of module and array designs on the balance-of-plant costs for flat-plate terrestrial central station power applications. Consideration is given to the following types of arrays: horizontal, tandem, augmented, tilt adjusted, and E-W tracking. The life-cycle cost of a 20-year plant life serves as the costing criteria for making design and cost tradeoffs. A tailored code of accounts is developed for determining consistent photovoltaic power plant costs and providing credible photovoltaic system cost baselines for flat-plate module and array designs by costing several varying array design approaches.
Analysis of a hypersonic waverider research vehicle with a hydrocarbon scramjet engine
NASA Technical Reports Server (NTRS)
Molvik, Gregory A.; Bowles, Jeffrey V.; Huynh, Loc C.
1993-01-01
The results of a feasibility study of a hypersonic waverider research vehicle with a hydrocarbon scramjet engine are presented. The integrated waverider/scramjet geometry is first optimized with a vehicle synthesis code to produce a maximum product of the lift-to-drag ratio and the cycle specific impulse, hence cruise range. Computational fluid dynamics (CFD) is then employed to provide a nose-to-tail analysis of the system at the on-design conditions. Some differences are noted between the results of the two analysis techniques. A comparison of experimental, engineering analysis, and CFD results on a waverider forebody is also included for validation.
Modeling Two-Phase Flow and Vapor Cycles Using the Generalized Fluid System Simulation Program
NASA Technical Reports Server (NTRS)
Smith, Amanda D.; Majumdar, Alok K.
2017-01-01
This work presents three new applications for the general purpose fluid network solver code GFSSP developed at NASA's Marshall Space Flight Center: (1) cooling tower, (2) vapor-compression refrigeration system, and (3) vapor-expansion power generation system. These systems are widely used across engineering disciplines in a variety of energy systems, and these models expand the capabilities and the use of GFSSP to include fluids and features that are not part of its present set of provided examples. GFSSP provides pressure, temperature, and species concentrations at designated locations, or nodes, within a fluid network based on a finite volume formulation of thermodynamics and conservation laws. This paper describes the theoretical basis for the construction of the models, their implementation in the current GFSSP modeling system, and a brief evaluation of the usefulness of the model results, as well as their applicability toward a broader spectrum of analytical problems in both university teaching and engineering research.
A technique for integrating engine cycle and aircraft configuration optimization
NASA Technical Reports Server (NTRS)
Geiselhart, Karl A.
1994-01-01
A method for conceptual aircraft design that incorporates the optimization of major engine design variables for a variety of cycle types was developed. The methodology should improve the lengthy screening process currently involved in selecting an appropriate engine cycle for a given application or mission. The new capability will allow environmental concerns such as airport noise and emissions to be addressed early in the design process. The ability to rapidly perform optimization and parametric variations using both engine cycle and aircraft design variables, and to see the impact on the aircraft, should provide insight and guidance for more detailed studies. A brief description of the aircraft performance and mission analysis program and the engine cycle analysis program that were used is given. A new method of predicting propulsion system weight and dimensions using thermodynamic cycle data, preliminary design, and semi-empirical techniques is introduced. Propulsion system performance and weights data generated by the program are compared with industry data and data generated using well established codes. The ability of the optimization techniques to locate an optimum is demonstrated and some of the problems that had to be solved to accomplish this are illustrated. Results from the application of the program to the analysis of three supersonic transport concepts installed with mixed flow turbofans are presented. The results from the application to a Mach 2.4, 5000 n.mi. transport indicate that the optimum bypass ratio is near 0.45 with less than 1 percent variation in minimum gross weight for bypass ratios ranging from 0.3 to 0.6. In the final application of the program, a low sonic boom, fixed takeoff gross weight concept that would fly at Mach 2.0 overwater and at Mach 1.6 overland is compared with a baseline concept of the same takeoff gross weight that would fly Mach 2.4 overwater and subsonically overland. The results indicate that for the design mission, the low boom concept has a 5 percent total range penalty relative to the baseline. Additional cycles were optimized for various design overland distances and the effect of flying off-design overland distances is illustrated.
Evaluation and implementation of QR Code Identity Tag system for Healthcare in Turkey.
Uzun, Vassilya; Bilgin, Sami
2016-01-01
For this study, we designed a QR Code Identity Tag system to integrate into the Turkish healthcare system. This system provides QR code-based medical identification alerts and an in-hospital patient identification system. Every member of the medical system is assigned a unique QR Code Tag; to facilitate medical identification alerts, the QR Code Identity Tag can be worn as a bracelet or necklace or carried as an ID card. Patients must always possess the QR Code Identity bracelets within hospital grounds. These QR code bracelets link to the QR Code Identity website, where detailed information is stored; a smartphone or standalone QR code scanner can be used to scan the code. The design of this system allows authorized personnel (e.g., paramedics, firefighters, or police) to access more detailed patient information than the average smartphone user: emergency service professionals are authorized to access patient medical histories to improve the accuracy of medical treatment. In Istanbul, we tested the self-designed system with 174 participants. To analyze the QR Code Identity Tag system's usability, the participants completed the System Usability Scale questionnaire after using the system.
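For illustration, a tag encoding a link to a patient page could be generated as below with the third-party qrcode Python package; the package choice, URL scheme, and patient ID are assumptions, and this is not the authors' implementation.

```python
# Illustrative sketch: generate a printable QR code that links to a
# (hypothetical) patient page on a QR Code Identity website.
# Requires the third-party "qrcode" package (pip install qrcode[pil]);
# the URL scheme and patient ID below are invented for the example.
import qrcode

def make_identity_tag(patient_id: str, out_path: str) -> None:
    url = f"https://example-identity-portal.example/patient/{patient_id}"
    img = qrcode.make(url)          # returns a PIL image
    img.save(out_path)

if __name__ == "__main__":
    make_identity_tag("TR-000123", "identity_tag_TR-000123.png")
```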
Stationary Liquid Fuel Fast Reactor
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yang, Won Sik; Grandy, Andrew; Boroski, Andrew
For effective burning of hazardous transuranic (TRU) elements of used nuclear fuel, a transformational advanced reactor concept named SLFFR (Stationary Liquid Fuel Fast Reactor) was proposed based on stationary molten metallic fuel. The fuel enters the reactor vessel in a solid form, and then it is heated to molten temperature in a small melting heater. The fuel is contained within a closed, thick container with penetrating coolant channels, and thus it is not mixed with coolant nor does it flow through the primary heat transfer circuit. The makeup fuel is semi-continuously added to the system, and thus a very small excess reactivity is required. Gaseous fission products are also removed continuously, and a fraction of the fuel is periodically drawn off from the fuel container to a processing facility where non-gaseous mixed fission products and other impurities are removed and then the cleaned fuel is recycled into the fuel container. A reference core design and a preliminary plant system design of a 1000 MWt TRU-burning SLFFR concept were developed using TRU-Ce-Co fuel, Ta-10W fuel container, and sodium coolant. Conservative design approaches were adopted to stay within the current material performance database. Detailed neutronics and thermal-fluidic analyses were performed to develop a reference core design. Region-dependent 33-group cross sections were generated based on the ENDF/B-VII.0 data using the MC2-3 code. Core and fuel cycle analyses were performed in theta-r-z geometries using the DIF3D and REBUS-3 codes. Reactivity coefficients and kinetics parameters were calculated using the VARI3D perturbation theory code. Thermo-fluidic analyses were performed using the ANSYS FLUENT computational fluid dynamics (CFD) code. Figure 0.1 shows a schematic radial layout of the reference 1000 MWt SLFFR core, and Table 0.1 summarizes the main design parameters of the SLFFR-1000 loop plant. The fuel container is a 2.5 cm thick cylinder with an inner radius of 87.5 cm. The fuel container is penetrated by twelve hexagonal control assembly (CA) guide tubes, each of which has 3.0 mm thickness and 69.4 mm flat-to-flat outer distance. The distance between two neighboring CA guide tubes is selected to be 26 cm to provide an adequate space for CA driving systems. The fuel container has 18181 penetrating coolant tubes of 6.0 mm inner diameter and 2.0 mm thickness. The coolant tubes are arranged in a triangular lattice with a lattice pitch of 1.21 cm. The fuel, structure, and coolant volume fractions inside the fuel container are 0.386, 0.383, and 0.231, respectively. Separate steel reflectors and B4C shields are used outside of the fuel container. Six gas expansion modules (GEMs) of 5.0 cm thickness are introduced in the radial reflector region. Between the radial reflector and the fuel container is a 2.5 cm sodium gap. The TRU inventory at the beginning of equilibrium cycle (BOEC) is 5081 kg, whereas the TRU inventory at the beginning of life (BOL) was 3541 kg. This is because the equilibrium cycle fuel contains a significantly smaller fissile fraction than the LWR TRU feed. The fuel inventory at BOEC is composed of 34.0 a/o TRU, 41.4 a/o Ce, 23.6 a/o Co, and 1.03 a/o solid fission products. Since uranium-free fuel is used, a theoretical maximum TRU consumption rate of 1.011 kg/day is achieved. The semi-continuous fuel cycle based on the 300-batch, 1-day cycle approximation yields a burnup reactivity loss of 26 pcm/day, and requires a daily reprocessing of 32.5 kg of SLFFR fuel.
This yields a daily TRU charge rate of 17.45 kg, including a makeup TRU feed of 1.011 kg recovered from the LWR used fuel. The charged TRU-Ce-Co fuel is composed of 34.4 a/o TRU, 40.6 a/o Ce, and 25.0 a/o Co.
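The quoted theoretical maximum TRU consumption rate of about 1 kg/day for a 1000 MWt uranium-free core can be sanity-checked from roughly 200 MeV released per fission. The sketch below is only a back-of-the-envelope consistency check with rounded constants, not a reproduction of the report's neutronics calculations.

```python
# Rough consistency check of the ~1 kg/day TRU consumption rate quoted
# for a 1000 MWt uranium-free core. Rounded constants; illustrative only.
E_FISSION_J = 200e6 * 1.602e-19      # ~200 MeV per fission, in joules
POWER_W = 1000e6                     # 1000 MWt
SECONDS_PER_DAY = 86400.0
AMU_KG = 1.6605e-27
A_TRU = 240                          # representative TRU mass number

fissions_per_day = POWER_W * SECONDS_PER_DAY / E_FISSION_J
mass_per_day_kg = fissions_per_day * A_TRU * AMU_KG
print(f"{mass_per_day_kg:.2f} kg/day")   # ~1.1 kg/day, consistent with ~1.0 kg/day
```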
System Design Description for the TMAD Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Finfrock, S.H.
This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.
High-Fidelity Three-Dimensional Simulation of the GE90
NASA Technical Reports Server (NTRS)
Turner, Mark G.; Norris, Andrew; Veres, Joseph P.
2004-01-01
A full-engine simulation of the three-dimensional flow in the GE90 94B high-bypass ratio turbofan engine has been achieved. Through the exploitation of parallel processing, the simulation would take less than 11 hr of wall clock time if started from scratch. The simulation of the compressor components, the cooled high-pressure turbine, and the low-pressure turbine was performed using the APNASA turbomachinery flow code. The combustor flow and chemistry were simulated using the National Combustor Code (NCC). The engine simulation matches the engine thermodynamic cycle for a sea-level takeoff condition. The simulation is started at the inlet of the fan and progresses downstream. Comparisons with the cycle point are presented. A detailed look at the blockage in the turbomachinery is presented as one measure to assess and view the solution and the multistage interaction effects.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
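As a concrete instance of the parity-check and group codes the book covers, the sketch below encodes and syndrome-decodes a systematic Hamming(7,4) block code over GF(2). It is a standard textbook construction, not code taken from the book.

```python
# Textbook illustration of a binary group code: systematic Hamming(7,4)
# encoding and single-error syndrome decoding over GF(2).
import numpy as np

A = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])                  # parity part of G = [I4 | A]
G = np.hstack([np.eye(4, dtype=int), A])   # 4x7 generator matrix
H = np.hstack([A.T, np.eye(3, dtype=int)]) # 3x7 parity-check matrix, H G^T = 0

def encode(data4):
    return (np.array(data4) @ G) % 2

def correct(received7):
    r = np.array(received7)
    syndrome = (H @ r) % 2
    if syndrome.any():                         # locate the flipped bit, if any
        for col in range(7):
            if np.array_equal(H[:, col], syndrome):
                r[col] ^= 1
                break
    return r[:4]                               # systematic: data bits come first

if __name__ == "__main__":
    word = encode([1, 0, 1, 1])
    word[5] ^= 1                               # inject a single bit error
    print(correct(word))                       # -> [1 0 1 1]
```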
Gaupels, Frank; Sarioglu, Hakan; Beckmann, Manfred; Hause, Bettina; Spannagl, Manuel; Draper, John; Lindermayr, Christian; Durner, Jörg
2012-01-01
In cucurbits, phloem latex exudes from cut sieve tubes of the extrafascicular phloem (EFP), serving in defense against herbivores. We analyzed inducible defense mechanisms in the EFP of pumpkin (Cucurbita maxima) after leaf damage. As an early systemic response, wounding elicited transient accumulation of jasmonates and a decrease in exudation probably due to partial sieve tube occlusion by callose. The energy status of the EFP was enhanced as indicated by increased levels of ATP, phosphate, and intermediates of the citric acid cycle. Gas chromatography coupled to mass spectrometry also revealed that sucrose transport, gluconeogenesis/glycolysis, and amino acid metabolism were up-regulated after wounding. Combining ProteoMiner technology for the enrichment of low-abundance proteins with stable isotope-coded protein labeling, we identified 51 wound-regulated phloem proteins. Two Sucrose-Nonfermenting1-related protein kinases and a 32-kD 14-3-3 protein are candidate central regulators of stress metabolism in the EFP. Other proteins, such as the Silverleaf Whitefly-Induced Protein1, Mitogen Activated Protein Kinase6, and Heat Shock Protein81, have known defensive functions. Isotope-coded protein labeling and western-blot analyses indicated that Cyclophilin18 is a reliable marker for stress responses of the EFP. As a hint toward the induction of redox signaling, we have observed delayed oxidation-triggered polymerization of the major Phloem Protein1 (PP1) and PP2, which correlated with a decline in carbonylation of PP2. In sum, wounding triggered transient sieve tube occlusion, enhanced energy metabolism, and accumulation of defense-related proteins in the pumpkin EFP. The systemic wound response was mediated by jasmonate and redox signaling. PMID:23085839
NASA Astrophysics Data System (ADS)
Fable, E.; Angioni, C.; Ivanov, A. A.; Lackner, K.; Maj, O.; Medvedev, S. Yu; Pautasso, G.; Pereverzev, G. V.; Treutterer, W.; the ASDEX Upgrade Team
2013-07-01
The modelling of tokamak scenarios requires the simultaneous solution of both the time evolution of the plasma kinetic profiles and of the magnetic equilibrium. Their dynamical coupling involves additional complications, which are not present when the two physical problems are solved separately. Difficulties arise in maintaining consistency in the time evolution among quantities which appear in both the transport and the Grad-Shafranov equations, specifically the poloidal and toroidal magnetic fluxes as a function of each other and of the geometry. The required consistency can be obtained by means of iteration cycles, which are performed outside the equilibrium code and which can have different convergence properties depending on the chosen numerical scheme. When these external iterations are performed, the stability of the coupled system becomes a concern. In contrast, if these iterations are not performed, the coupled system is numerically stable, but can become physically inconsistent. By employing a novel scheme (Fable E et al 2012 Nucl. Fusion submitted), which ensures stability and physical consistency among the same quantities that appear in both the transport and magnetic equilibrium equations, a newly developed version of the ASTRA transport code (Pereverzev G V et al 1991 IPP Report 5/42), which is coupled to the SPIDER equilibrium code (Ivanov A A et al 2005 32nd EPS Conf. on Plasma Physics (Tarragona, 27 June-1 July) vol 29C (ECA) P-5.063), in both prescribed- and free-boundary modes is presented here for the first time. The ASTRA-SPIDER coupled system is then applied to the specific study of the modelling of controlled current ramp-up in ASDEX Upgrade discharges.
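The external iteration discussed above is, in essence, a fixed-point (Picard) cycle between a transport step and an equilibrium solve. The toy sketch below illustrates only that coupling pattern; transport_step() and equilibrium_solve() are invented placeholders, not ASTRA or SPIDER, and a single scalar stands in for the kinetic profiles and geometry.

```python
# Cartoon of an external iteration between a transport solve and an
# equilibrium solve. The two "solvers" are toy placeholders, not ASTRA
# or SPIDER; a single scalar stands in for profiles and geometry.

def transport_step(profile, geometry):
    # toy: transport relaxes the profile toward a geometry-dependent target
    return 0.5 * (profile + 2.0 * geometry)

def equilibrium_solve(profile):
    # toy: equilibrium geometry responds to the profile
    return 1.0 + 0.1 * profile

def coupled_cycle(profile=1.0, geometry=1.0, tol=1e-10, max_iter=100):
    """Iterate transport <-> equilibrium until mutually consistent."""
    for it in range(max_iter):
        new_profile = transport_step(profile, geometry)
        new_geometry = equilibrium_solve(new_profile)
        if abs(new_profile - profile) < tol and abs(new_geometry - geometry) < tol:
            return new_profile, new_geometry, it + 1
        profile, geometry = new_profile, new_geometry
    raise RuntimeError("coupling iteration did not converge")

if __name__ == "__main__":
    p, g, iters = coupled_cycle()
    print(f"converged in {iters} iterations: profile={p:.6f}, geometry={g:.6f}")
```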
Synoptic Scale North American Weather Tracks and the Formation of North Atlantic Windstorms
NASA Astrophysics Data System (ADS)
Baum, A. J.; Godek, M. L.
2014-12-01
Each winter, dozens of fatalities occur when intense North Atlantic windstorms impact Western Europe. Forecasting the tracks of these storms in the short term is often problematic, but long term forecasts provide an even greater challenge. Improved prediction necessitates the ability to identify these low pressure areas at formation and understand commonalities that distinguish these storms from other systems crossing the Atlantic, such as where they develop. There is some evidence that indicates the majority of intense windstorms that reach Europe have origins far west, as low pressure systems that develop over the North American continent. This project aims to identify the specific cyclogenesis regions in North America that produce a significantly greater number of dangerous storms. NOAA Ocean Prediction Center surface pressure reanalysis maps are used to examine the tracks of storms. Strong windstorms are defined as those with a central pressure of less than 965 hPa at any point in their life cycle. Tracks are recorded using a coding system based on source region, storm track and dissipation region. The codes are analyzed to determine which region shows the greatest statistical significance with respect to strong Atlantic windstorm generation. The resultant set of codes also serves as a climatology of North Atlantic extratropical cyclones. Results indicate that a number of windstorms favor cyclogenesis regions off the east coast of the United States. A large number of strong storms that encounter east coast cyclogenesis zones originate in the central mountain region, around Colorado. These storms follow a path that exits North America around New England and subsequently travel along the Canadian coast. Some of these are then primed to become "bombs" over the open Atlantic Ocean.
An evolution of technologies and applications of gamma imagers in the nuclear cycle industry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Khalil, R. A.; Carrel, F.; Menaa, N.
The tracking of radiation contamination and distribution has become a high priority in the nuclear cycle industry in order to respect the ALARA principle, which is a main challenge during decontamination and dismantling activities. To support this need, AREVA/CANBERRA and CEA LIST have been actively carrying out research and development on a gamma-radiation imager. In this paper we present the new generation of gamma camera, called GAMPIX. This system is based on the Timepix chip, hybridized with a CdTe substrate. A coded mask can be used in order to increase the sensitivity of the camera. Moreover, due to the USB connection with a standard computer, this gamma camera is immediately operational and user-friendly. The final system is a very compact gamma camera (global weight is less than 1 kg without any shielding) which could be used as a hand-held device for radioprotection purposes. In this article, we present the main characteristics of this new generation of gamma camera and we report experimental results obtained during in situ measurements. Even though we present preliminary results, the final product is in the industrialization phase to address various application specifications. (authors)
Digital signal processor and processing method for GPS receivers
NASA Technical Reports Server (NTRS)
Thomas, Jr., Jess B. (Inventor)
1989-01-01
A digital signal processor and processing method therefor for use in receivers of the NAVSTAR/GLOBAL POSITIONING SYSTEM (GPS) employs a digital carrier down-converter, digital code correlator and digital tracking processor. The digital carrier down-converter and code correlator consists of an all-digital, minimum bit implementation that utilizes digital chip and phase advancers, providing exceptional control and accuracy in feedback phase and in feedback delay. Roundoff and commensurability errors can be reduced to extremely small values (e.g., less than 100 nanochips and 100 nanocycles roundoff errors and 0.1 millichip and 1 millicycle commensurability errors). The digital tracking processor bases the fast feedback for phase and for group delay in the C/A, P1, and P2 channels on the L1 C/A carrier phase, thereby maintaining lock at lower signal-to-noise ratios, reducing errors in feedback delays, reducing the frequency of cycle slips and in some cases obviating the need for quadrature processing in the P channels. Simple and reliable methods are employed for data bit synchronization, data bit removal and cycle counting. Improved precision in averaged output delay values is provided by carrier-aided data-compression techniques. The signal processor employs purely digital operations in the sense that exactly the same carrier phase and group delay measurements are obtained, to the last decimal place, every time the same sampled data (i.e., exactly the same bits) are processed.
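A highly simplified sketch of the two front-end operations named in the abstract, carrier wipe-off via an accumulated (NCO) phase and prompt code correlation, is given below on synthetic samples. All parameters and the signal model are invented for illustration and do not reflect the patented minimum-bit implementation.

```python
# Highly simplified sketch of digital carrier wipe-off with a phase
# accumulator (NCO) followed by prompt code correlation on synthetic
# samples. All parameters and the stand-in PRN code are invented.
import numpy as np

FS = 4.0e6            # sample rate (Hz)
F_IF = 1.0e6          # intermediate carrier frequency (Hz)
CHIP_RATE = 1.023e6   # chipping rate (Hz)
N = 4000              # samples (1 ms)

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)            # stand-in PRN code

t = np.arange(N) / FS
chip_idx = (np.floor(t * CHIP_RATE) % 1023).astype(int)
signal = code[chip_idx] * np.cos(2 * np.pi * F_IF * t) + 0.5 * rng.standard_normal(N)

# Carrier wipe-off: the NCO phase is advanced by a fixed increment each sample.
phase = 2 * np.pi * F_IF * np.arange(N) / FS          # accumulated NCO phase
i_samples = signal * np.cos(phase)
q_samples = -signal * np.sin(phase)

# Prompt correlation with a locally generated replica of the code.
replica = code[chip_idx]
I = np.sum(i_samples * replica)
Q = np.sum(q_samples * replica)
print(f"prompt I={I:.1f}, Q={Q:.1f}, |corr|={np.hypot(I, Q):.1f}")
```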
Reducing False Positives in Runtime Analysis of Deadlocks
NASA Technical Reports Server (NTRS)
Bensalem, Saddek; Havelund, Klaus; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper presents an improvement of a standard algorithm for detecting deadlock potentials in multi-threaded programs, in that it reduces the number of false positives. The standard algorithm works as follows. The multi-threaded program under observation is executed, while lock and unlock events are observed. A graph of locks is built, with edges between locks symbolizing locking orders. Any cycle in the graph signifies a potential for a deadlock. The typical standard example is the group of dining philosophers sharing forks. The algorithm is interesting because it can catch deadlock potentials even though no deadlocks occur in the examined trace, and at the same time it scales very well in contrast to more formal approaches to deadlock detection. The algorithm, however, can yield false positives (as well as false negatives). The extension of the algorithm described in this paper reduces the number of false positives for three particular cases: when a gate lock protects a cycle, when a single thread introduces a cycle, and when the code segments in different threads that cause the cycle cannot actually execute in parallel. The paper formalizes a theory for dynamic deadlock detection and compares it to model checking and static analysis techniques. It furthermore describes an implementation for analyzing Java programs and its application to two case studies: a planetary rover and a spacecraft attitude control system.
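The basic algorithm summarized above can be sketched compactly: record a lock-order edge whenever a thread acquires a lock while holding another, then flag any cycle in the resulting directed graph. The sketch below implements only this basic step on the classic dining-philosophers ordering; it omits the paper's gate-lock, single-thread, and non-parallel-segment refinements, and the event-trace format is invented.

```python
# Minimal sketch of the basic lock-order graph algorithm: record edges
# "lock A held while acquiring lock B" per thread, then flag any cycle
# in the resulting directed graph as a deadlock potential.
from collections import defaultdict

def lock_order_edges(events):
    """events: list of (thread, op, lock) with op in {'lock', 'unlock'}."""
    held = defaultdict(list)          # thread -> stack of currently held locks
    edges = set()
    for thread, op, lock in events:
        if op == "lock":
            for outer in held[thread]:
                edges.add((outer, lock))
            held[thread].append(lock)
        else:
            held[thread].remove(lock)
    return edges

def has_cycle(edges):
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
    visiting, done = set(), set()
    def dfs(node):
        visiting.add(node)
        for nxt in graph[node]:
            if nxt in visiting or (nxt not in done and dfs(nxt)):
                return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(dfs(n) for n in list(graph) if n not in done)

if __name__ == "__main__":
    trace = [("T1", "lock", "fork1"), ("T1", "lock", "fork2"),
             ("T1", "unlock", "fork2"), ("T1", "unlock", "fork1"),
             ("T2", "lock", "fork2"), ("T2", "lock", "fork1"),
             ("T2", "unlock", "fork1"), ("T2", "unlock", "fork2")]
    print(has_cycle(lock_order_edges(trace)))   # True: potential caught even
                                                # though no deadlock occurred
```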
A Comparison of Single-Cycle Versus Multiple-Cycle Proof Testing Strategies
NASA Technical Reports Server (NTRS)
McClung, R. C.; Chell, G. G.; Millwater, H. R.; Russell, D. A.
1999-01-01
Single-cycle and multiple-cycle proof testing (SCPT and MCPT) strategies for reusable aerospace propulsion system components are critically evaluated and compared from a rigorous elastic-plastic fracture mechanics perspective. Earlier MCPT studies are briefly reviewed. New J-integral estimation methods for semielliptical surface cracks and cracks at notches are derived and validated. Engineering methods are developed to characterize crack growth rates during elastic-plastic fatigue crack growth (FCG) and the tear-fatigue interaction near instability. Surface crack growth experiments are conducted with Inconel 718 to characterize tearing resistance, FCG under small-scale yielding and elastic-plastic conditions, and crack growth during simulated MCPT. Fractography and acoustic emission studies provide additional insight. The relative merits of SCPT and MCPT are directly compared using a probabilistic analysis linked with an elastic-plastic crack growth computer code. The conditional probability of failure in service is computed for a population of components that have survived a previous proof test, based on an assumed distribution of initial crack depths. Parameter studies investigate the influence of proof factor, tearing resistance, crack shape, initial crack depth distribution, and notches on the MCPT versus SCPT comparison. The parameter studies provide a rational basis to formulate conclusions about the relative advantages and disadvantages of SCPT and MCPT. Practical engineering guidelines are proposed to help select the optimum proof test protocol in a given application.
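The probabilistic comparison hinges on a conditional probability: failure in service given survival of the proof test. The toy Monte Carlo sketch below illustrates that conditioning only; the lognormal initial crack-depth distribution, critical depths, and fixed crack growth are invented placeholders for the report's elastic-plastic crack growth analysis.

```python
# Toy Monte Carlo illustration of the conditional probability of service
# failure given proof-test survival. Distribution, critical depths, and
# the fixed per-life crack growth are invented placeholders.
import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000

a0 = rng.lognormal(mean=np.log(0.5), sigma=0.6, size=N)   # initial depth (mm)

A_CRIT_PROOF = 2.0      # depth at which the proof load would fail the part (mm)
A_CRIT_SERVICE = 3.0    # depth at which service loads fail the part (mm)
GROWTH_PER_LIFE = 1.5   # crack growth over one service life (mm), fixed for the toy

survived_proof = a0 < A_CRIT_PROOF
fails_service = (a0 + GROWTH_PER_LIFE) > A_CRIT_SERVICE

p_conditional = np.mean(fails_service & survived_proof) / np.mean(survived_proof)
print(f"P(service failure | proof survival) = {p_conditional:.4f}")
```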
A Comparison of Single-Cycle Versus Multiple-Cycle Proof Testing Strategies
NASA Technical Reports Server (NTRS)
McClung, R. C.; Chell, G. G.; Millwater, H. R.; Russell, D. A.; Orient, G. E.
1996-01-01
Single-cycle and multiple-cycle proof testing (SCPT and MCPT) strategies for reusable aerospace propulsion system components are critically evaluated and compared from a rigorous elastic-plastic fracture mechanics perspective. Earlier MCPT studies are briefly reviewed. New J-integral estimation methods for semi-elliptical surface cracks and cracks at notches are derived and validated. Engineering methods are developed to characterize crack growth rates during elastic-plastic fatigue crack growth (FCG) and the tear-fatigue interaction near instability. Surface crack growth experiments are conducted with Inconel 718 to characterize tearing resistance, FCG under small-scale yielding and elastic-plastic conditions, and crack growth during simulated MCPT. Fractography and acoustic emission studies provide additional insight. The relative merits of SCPT and MCPT are directly compared using a probabilistic analysis linked with an elastic-plastic crack growth computer code. The conditional probability of failure in service is computed for a population of components that have survived a previous proof test, based on an assumed distribution of initial crack depths. Parameter studies investigate the influence of proof factor, tearing resistance, crack shape, initial crack depth distribution, and notches on the MCPT vs. SCPT comparison. The parameter studies provide a rational basis to formulate conclusions about the relative advantages and disadvantages of SCPT and MCPT. Practical engineering guidelines are proposed to help select the optimum proof test protocol in a given application.
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
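The overall structure described in these two records, an outer byte-oriented Reed-Solomon code wrapped around an inner channel code with inner decoding performed first at the receiver, can be sketched as below. The inner code here is a trivial 3x bit-repetition code rather than a unit-memory convolutional code, and the third-party reedsolo package (and the return format assumed for its decoder) is an assumption, not part of the papers.

```python
# Structural sketch of a concatenated system: outer byte-oriented
# Reed-Solomon code, trivial 3x bit-repetition inner code (NOT the
# unit-memory convolutional code of the papers), inner decoding first.
# The third-party "reedsolo" package and its return format are assumptions.
import reedsolo

rs = reedsolo.RSCodec(8)                       # 8 parity bytes (outer code)

def inner_encode(data: bytes) -> list[int]:
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    return [b for bit in bits for b in (bit, bit, bit)]   # repeat each bit 3x

def inner_decode(chips: list[int]) -> bytes:
    bits = [1 if sum(chips[i:i + 3]) >= 2 else 0 for i in range(0, len(chips), 3)]
    out = bytearray()
    for i in range(0, len(bits), 8):
        out.append(sum(bit << j for j, bit in enumerate(bits[i:i + 8])))
    return bytes(out)

if __name__ == "__main__":
    message = b"concatenated coding demo"
    outer = bytes(rs.encode(message))          # outer RS encode (byte symbols)
    channel = inner_encode(outer)              # inner encode
    channel[5] ^= 1                            # flip one chip on the channel
    received = inner_decode(channel)           # inner (majority) decode first
    decoded = rs.decode(received)              # outer RS decode last
    decoded = decoded[0] if isinstance(decoded, tuple) else decoded
    print(bytes(decoded))                      # b'concatenated coding demo'
```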
The Simpsons program 6-D phase space tracking with acceleration
NASA Astrophysics Data System (ADS)
Machida, S.
1993-12-01
A particle tracking code, Simpsons, which tracks 6-D phase space including energy ramping, has been developed to model proton synchrotrons and storage rings. We take time as the independent variable to change machine parameters and diagnose beam quality in much the same way as in real machines, unlike existing tracking codes for synchrotrons, which advance a particle element by element. Arbitrary energy ramping and rf voltage curves as functions of time are read from an input file defining a machine cycle. The code is used to study beam dynamics with time-dependent parameters. Some examples from simulations of the Superconducting Super Collider (SSC) boosters are shown.
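A miniature illustration of tracking with time as the independent variable follows: the rf voltage and synchronous energy are read from ramp tables as functions of time and a single particle's longitudinal deviations are advanced turn by turn. The ramp tables, lattice numbers, and simplified difference equations are invented for the sketch and are not taken from the Simpsons code.

```python
# Miniature longitudinal tracking loop with time as the independent
# variable: rf voltage and synchronous energy are interpolated from ramp
# tables at the current time each turn. All numbers are invented.
import numpy as np

# Ramp tables: time (s) -> rf voltage (V) and synchronous energy (eV).
T_TABLE = np.array([0.0, 0.5, 1.0])
V_TABLE = np.array([200e3, 600e3, 600e3])
E_TABLE = np.array([1.0e9, 5.0e9, 10.0e9])
SLOPES = np.diff(E_TABLE) / np.diff(T_TABLE)      # dEs/dt per ramp segment (eV/s)

H_RF = 84         # harmonic number (invented)
ETA = -0.02       # slip factor (invented; sign chosen so the toy motion is stable)
T_REV = 2.0e-6    # revolution period (s), held constant for simplicity (beta ~ 1)

def track(phi0=0.1, dE0=0.0, n_turns=50_000):
    """Advance one particle's (phase, energy) deviations turn by turn."""
    phi, dE, t = phi0, dE0, 0.0
    for _ in range(n_turns):
        V = np.interp(t, T_TABLE, V_TABLE)                   # rf voltage now
        Es = np.interp(t, T_TABLE, E_TABLE)                  # synchronous energy now
        seg = min(np.searchsorted(T_TABLE, t, side="right") - 1, len(SLOPES) - 1)
        phi_s = np.arcsin(np.clip(SLOPES[seg] * T_REV / V, -1.0, 1.0))
        dE += V * (np.sin(phi + phi_s) - np.sin(phi_s))      # energy kick (eV)
        phi += 2 * np.pi * H_RF * ETA * dE / Es              # phase slip per turn
        t += T_REV                                           # time is the clock
    return phi, dE

if __name__ == "__main__":
    print(track())
```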
NASA Technical Reports Server (NTRS)
Adams, Thomas; VanBaalen, Mary
2009-01-01
The Radiation Health Office (RHO) determines each astronaut's cancer risk by using models that relate it to the radiation dose astronauts receive from spaceflight missions. The baryon transport code (BRYNTRN), the high charge (Z) and energy transport code (HZETRN), and computer risk models are used to determine the effective dose received by astronauts in Low Earth orbit (LEO). This code uses an approximation of the Boltzmann transport equation. The purpose of the project is to run this code for various International Space Station (ISS) flight parameters in order to gain a better understanding of how it responds to different scenarios. The project will determine how variations in parameters such as the point in the solar cycle and the altitude can affect the radiation exposure of astronauts during ISS missions. This project will benefit NASA by improving mission dosimetry.
Comparison Between Simulated and Experimentally Measured Performance of a Four Port Wave Rotor
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Wilson, Jack; Welch, Gerard E.
2007-01-01
Performance and operability testing has been completed on a laboratory-scale, four-port wave rotor, of the type suitable for use as a topping cycle on a gas turbine engine. Many design aspects, and performance estimates for the wave rotor were determined using a time-accurate, one-dimensional, computational fluid dynamics-based simulation code developed specifically for wave rotors. The code follows a single rotor passage as it moves past the various ports, which in this reference frame become boundary conditions. This paper compares wave rotor performance predicted with the code to that measured during laboratory testing. Both on and off-design operating conditions were examined. Overall, the match between code and rig was found to be quite good. At operating points where there were disparities, the assumption of larger than expected internal leakage rates successfully realigned code predictions and laboratory measurements. Possible mechanisms for such leakage rates are discussed.
Observation of the limit cycle in asymmetric plasma divided by a magnetic filter
NASA Astrophysics Data System (ADS)
Ohi, Kazuo; Naitou, Hiroshi; Tauchi, Yasushi; Fukumasa, Osamu
2001-01-01
An asymmetric plasma divided by a magnetic filter is numerically simulated by the one-dimensional particle-in-cell code VSIM1D [Koga et al., J. Phys. Soc. Jpn. 68, 1578 (1999)]. Depending on the asymmetry, the system behavior is static or dynamic. In the static state, the potentials of the main plasma and the subplasma are given by the sheath potentials, φM ∼ 3TMe/e and φS ∼ 3TSe/e, respectively, with e being the electron charge and TMe and TSe being the electron temperatures (TMe > TSe). In the dynamic state, while φM ∼ 3TMe/e, φS oscillates periodically between φS,min ∼ 3TSe/e and φS,max ∼ 3TMe/e. The ions accelerated by the time-varying potential gap get into the subplasma and excite laminar shock waves. The period of the limit cycle is determined by the transit time of the shock wave structure.
Noise characteristics of nanoscaled redox-cycling sensors: investigations based on random walks.
Kätelhön, Enno; Krause, Kay J; Singh, Pradyumna S; Lemay, Serge G; Wolfrum, Bernhard
2013-06-19
We investigate noise effects in nanoscaled electrochemical sensors using a three-dimensional simulation based on random walks. The presented approach allows the prediction of time-dependent signals and noise characteristics for redox cycling devices of arbitrary geometry. We demonstrate that the simulation results closely match experimental data as well as theoretical expectations with regard to measured currents and noise power spectra. We further analyze the impact of the sensor design on characteristics of the noise power spectrum. Specific transitions between independent noise sources in the frequency domain are indicative of the sensor-reservoir coupling and can be used to identify stationary design features or time-dependent blocking mechanisms. We disclose the source code of our simulation. Since our approach is highly flexible with regard to the implemented boundary conditions, it opens up the possibility for integrating a variety of surface-specific molecular reactions in arbitrary electrochemical systems. Thus, it may become a useful tool for the investigation of a wide range of noise effects in nanoelectrochemical sensors.
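The flavor of the approach can be conveyed with a drastically simplified one-dimensional walk between two electrodes, in which every electrode contact is counted as a one-electron transfer and the binned counts give a current trace whose power spectrum is computed by FFT. The geometry, step size, and binning below are invented; the authors' simulation is fully three-dimensional and is disclosed with the paper.

```python
# Simplified 1-D flavor of a random-walk redox-cycling simulation:
# molecules diffuse in a nanogap between two electrodes, every electrode
# contact is counted as one electron transferred, and the binned counts
# give a crude current trace and noise spectrum. Invented parameters.
import numpy as np

rng = np.random.default_rng(42)

GAP = 100e-9          # electrode spacing (m)
STEP = 5e-9           # rms step per time step (m)
DT = 1e-7             # time step (s)
N_MOL, N_STEPS = 200, 20_000
Q_E = 1.602e-19       # elementary charge (C)

z = rng.uniform(0.0, GAP, size=N_MOL)     # positions across the gap
electrons_per_step = np.zeros(N_STEPS)

for step in range(N_STEPS):
    z += rng.normal(0.0, STEP, size=N_MOL)
    hit_bottom = z <= 0.0
    hit_top = z >= GAP
    electrons_per_step[step] = np.count_nonzero(hit_bottom | hit_top)
    z = np.clip(z, 0.0, GAP)              # return the walker to the electrode surface

current = electrons_per_step * Q_E / DT   # crude current trace (A)
spectrum = np.abs(np.fft.rfft(current - current.mean()))**2
freqs = np.fft.rfftfreq(N_STEPS, d=DT)
print(f"mean current ~ {current.mean()*1e12:.2f} pA; "
      f"spectrum computed at {len(freqs)} frequencies")
```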
Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support
NASA Astrophysics Data System (ADS)
Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar
This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy, and branch delay model, is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using an MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.
Impacts of Model Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.
The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states whose codes are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code’s requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.
Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan
2017-11-18
Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation for tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by traditional discriminator, or Kalman filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration-which are the basis of tracking error estimation-are analyzed in detail. After that, the probability distribution of estimation noise of four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of observation noise of non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of discriminator, coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through the carefully designed experiment scenario. The pre-filter outperforms traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
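For background on the ±0.25 cycle versus ±0.5 cycle ranges mentioned above: a two-quadrant atan(Q/I) discriminator resolves carrier phase errors only within ±0.25 cycle, while a four-quadrant ATAN2 discriminator resolves ±0.5 cycle. The sketch below compares the two on noiseless synthetic correlator outputs; it is illustrative only and is not the paper's Kalman pre-filter.

```python
# Illustration of carrier phase discriminators on noiseless I/Q samples:
# atan(Q/I) resolves phase errors only in (-0.25, 0.25) cycle, while the
# four-quadrant ATAN2 discriminator resolves (-0.5, 0.5) cycle.
import numpy as np

def atan_discriminator(I, Q):
    return np.arctan(Q / I) / (2 * np.pi)        # output in cycles, +/-0.25

def atan2_discriminator(I, Q):
    return np.arctan2(Q, I) / (2 * np.pi)        # output in cycles, +/-0.5

if __name__ == "__main__":
    for true_err_cycles in (0.1, 0.2, 0.35, 0.45):
        phase = 2 * np.pi * true_err_cycles
        I, Q = np.cos(phase), np.sin(phase)       # noiseless prompt correlator outputs
        print(f"true {true_err_cycles:+.2f}  "
              f"atan {atan_discriminator(I, Q):+.3f}  "
              f"atan2 {atan2_discriminator(I, Q):+.3f}")
```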
An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers
Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan
2017-01-01
Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccurate estimation for tracking error will decrease the signal tracking ability of signal tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by traditional discriminator, or Kalman filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration—which are the basis of tracking error estimation—are analyzed in detail. After that, the probability distribution of estimation noise of four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of observation noise of non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (−0.25 cycle, 0.25 cycle) to (−0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of discriminator, coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through the carefully designed experiment scenario. The pre-filter outperforms traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with coherent pre-filter. The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz. PMID:29156581
NASA Astrophysics Data System (ADS)
Barton, N. P.; Metzger, E. J.; Smedstad, O. M.; Ruston, B. C.; Wallcraft, A. J.; Whitcomb, T.; Ridout, J. A.; Zamudio, L.; Posey, P.; Reynolds, C. A.; Richman, J. G.; Phelps, M.
2017-12-01
The Naval Research Laboratory is developing an Earth System Model (NESM) to provide global environmental information to meet Navy and Department of Defense (DoD) operations and planning needs from the upper atmosphere to under the sea. This system consists of global atmosphere, ocean, ice, wave, and land prediction models; the individual models include: atmosphere - NAVy Global Environmental Model (NAVGEM); ocean - HYbrid Coordinate Ocean Model (HYCOM); sea ice - Community Ice CodE (CICE); WAVEWATCH III™; and land - NAVGEM Land Surface Model (LSM). Data assimilation is currently loosely coupled between the atmosphere component, using a 6-hour update cycle in the Naval Research Laboratory (NRL) Atmospheric Variational Data Assimilation System - Accelerated Representer (NAVDAS-AR), and the ocean/ice components, using a 24-hour update cycle in the Navy Coupled Ocean Data Assimilation (NCODA) with 3 hours of incremental updating. This presentation will describe the US Navy's coupled forecast model and the loosely coupled data assimilation, and compare results against stand-alone atmosphere and ocean/ice models. In particular, we will focus on the unique aspects of this modeling system, which include an eddy resolving ocean model and challenges associated with different update windows and solvers for the data assimilation in the atmosphere and ocean. Results will focus on typical operational diagnostics for atmosphere, ocean, and ice analyses including 500 hPa atmospheric height anomalies, low-level winds, temperature/salinity ocean depth profiles, ocean acoustical proxies, sea ice edge, and sea ice drift. Overall, the global coupled system is performing with comparable skill to the stand-alone systems.
Boundary layer simulator improvement
NASA Technical Reports Server (NTRS)
Praharaj, S. C.; Schmitz, C.; Frost, C.; Engel, C. D.; Fuller, C. E.; Bender, R. L.; Pond, J.
1984-01-01
High chamber pressure expander cycles proposed for orbit transfer vehicles depend primarily on the heat energy transmitted from the combustion products through the thrust chamber wall. The heat transfer to the nozzle wall is affected by such variables as wall roughness, relaminarization, and the presence of particles in the flow. Motor performance loss for these nozzles with thick boundary layers is predicted inaccurately by the existing procedure coded in BLIMPJ. Modifications and innovations to the code are examined. Updated routines are listed.
Small non-coding RNAs in streptomycetes.
Heueis, Nona; Vockenhuber, Michael-Paul; Suess, Beatrix
2014-01-01
Streptomycetes are Gram-positive, GC-rich, soil-dwelling bacteria occurring ubiquitously throughout nature. They undergo extensive morphological changes from spores to filamentous mycelia and produce a plethora of secondary metabolites. Owing to their complex life cycle, streptomycetes require efficient regulatory machinery for the control of gene expression. Therefore, they possess a large diversity of regulators. Within this review we summarize the current knowledge about the importance of small non-coding RNAs for the control of gene expression in these organisms.
Simulation Enabled Safeguards Assessment Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Robert Bean; Trond Bjornard; Thomas Larson
2007-09-01
It is expected that nuclear energy will be a significant component of future energy supplies. New facilities, operating under a strengthened international nonproliferation regime, will be needed. There is good reason to believe that virtual engineering applied to the facility design, as well as to the safeguards system design, will reduce total project cost and improve efficiency in the design cycle. The Simulation Enabled Safeguards Assessment MEthodology (SESAME) has been developed as a software package to provide this capability for nuclear reprocessing facilities. The software architecture is specifically designed for distributed computing, collaborative design efforts, and modular construction to allow step improvements in functionality. Drag-and-drop wireframe construction allows the user to select the desired components from a component warehouse, render the system for 3D visualization, and, linked to a set of physics libraries and/or computational codes, conduct process evaluations of the system they have designed.
ADGS-2100 Adaptive Display and Guidance System Window Manager Analysis
NASA Technical Reports Server (NTRS)
Whalen, Mike W.; Innis, John D.; Miller, Steven P.; Wagner, Lucas G.
2006-01-01
Recent advances in modeling languages have made it feasible to formally specify and analyze the behavior of large system components. Synchronous data flow languages, such as Lustre, SCR, and RSML-e are particularly well suited to this task, and commercial versions of these tools such as SCADE and Simulink are growing in popularity among designers of safety critical systems, largely due to their ability to automatically generate code from the models. At the same time, advances in formal analysis tools have made it practical to formally verify important properties of these models to ensure that design defects are identified and corrected early in the lifecycle. This report describes how these tools have been applied to the ADGS-2100 Adaptive Display and Guidance Window Manager being developed by Rockwell Collins Inc. This work demonstrates how formal methods can be easily and cost-efficiently used to remove defects early in the design cycle.
Thermal-Acoustic Analysis of a Metallic Integrated Thermal Protection System Structure
NASA Technical Reports Server (NTRS)
Behnke, Marlana N.; Sharma, Anurag; Przekop, Adam; Rizzi, Stephen A.
2010-01-01
A study is undertaken to investigate the response of a representative integrated thermal protection system structure under combined thermal, aerodynamic pressure, and acoustic loadings. A two-step procedure is offered and consists of a heat transfer analysis followed by a nonlinear dynamic analysis under a combined loading environment. Both analyses are carried out in physical degrees-of-freedom using implicit and explicit solution techniques available in the Abaqus commercial finite-element code. The initial study is conducted on a reduced-size structure to keep the computational effort contained while validating the procedure and exploring the effects of individual loadings. An analysis of a full size integrated thermal protection system structure, which is of ultimate interest, is subsequently presented. The procedure is demonstrated to be a viable approach for analysis of spacecraft and hypersonic vehicle structures under a typical mission cycle with combined loadings characterized by largely different time-scales.
NASA Astrophysics Data System (ADS)
Attention is given to aspects of quality assurance methodologies in development life cycles, optical intercity transmission systems, multiaccess protocols, system and technology aspects of regional/domestic satellites, advances in SSB-AM radio transmission over terrestrial and satellite networks, and development environments for telecommunications systems. Other subjects studied are concerned with business communication networks for voice and data, VLSI in local network and communication protocols, product evaluation and support, an update regarding Videotex, topics in communication theory, topics in radio propagation, a status report regarding societal effects of technology in the workplace, digital image processing, and adaptive signal processing for communications. The management of the reliability function in the development process is considered, along with gigabit technologies for long-distance, large-capacity optical transmission equipment, the application of gallium arsenide analog and digital integrated circuits for high-speed fiber-optic communications, and a simple algorithm for image data coding.
Lisman, John E; Jensen, Ole
2013-03-20
Theta and gamma frequency oscillations occur in the same brain regions and interact with each other, a process called cross-frequency coupling. Here, we review evidence for the following hypothesis: that the dual oscillations form a code for representing multiple items in an ordered way. This form of coding has been most clearly demonstrated in the hippocampus, where different spatial information is represented in different gamma subcycles of a theta cycle. Other experiments have tested the functional importance of oscillations and their coupling. These involve correlation of oscillatory properties with memory states, correlation with memory performance, and effects of disrupting oscillations on memory. Recent work suggests that this coding scheme coordinates communication between brain regions and is involved in sensory as well as memory processes.
Group delay variations of GPS transmitting and receiving antennas
NASA Astrophysics Data System (ADS)
Wanninger, Lambert; Sumaya, Hael; Beer, Susanne
2017-09-01
GPS code pseudorange measurements exhibit group delay variations at the transmitting and the receiving antenna. We calibrated C1 and P2 delay variations with respect to dual-frequency carrier phase observations and obtained nadir-dependent corrections for 32 satellites of the GPS constellation in early 2015 as well as elevation-dependent corrections for 13 receiving antenna models. The combined delay variations reach up to 1.0 m (3.3 ns) in the ionosphere-free linear combination for specific pairs of satellite and receiving antennas. Applying these corrections to the code measurements improves code/carrier single-frequency precise point positioning, ambiguity fixing based on the Melbourne-Wübbena linear combination, and determination of ionospheric total electron content. It also affects fractional cycle biases and differential code biases.
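A minimal sketch of how such corrections might be applied in practice is given below: hypothetical nadir-dependent (satellite) and elevation-dependent (receiver antenna) corrections are subtracted from the C1 and P2 code pseudoranges before forming the standard ionosphere-free linear combination. The correction values and function name are illustrative assumptions, not the calibration results of the paper.

```python
import numpy as np

# GPS L1/L2 carrier frequencies (Hz); the ionosphere-free combination is standard.
F1, F2 = 1575.42e6, 1227.60e6
A1 = F1**2 / (F1**2 - F2**2)
A2 = -F2**2 / (F1**2 - F2**2)

def ionosphere_free(c1, p2, sat_corr_c1, sat_corr_p2, rcv_corr_c1, rcv_corr_p2):
    """Correct C1/P2 code pseudoranges (m) for group delay variations and
    combine them into the ionosphere-free observable."""
    c1_corrected = c1 - sat_corr_c1 - rcv_corr_c1
    p2_corrected = p2 - sat_corr_p2 - rcv_corr_p2
    return A1 * c1_corrected + A2 * p2_corrected

# Hypothetical corrections (m) interpolated from nadir-dependent (satellite) and
# elevation-dependent (receiving antenna) calibration tables.
print(ionosphere_free(22_345_678.12, 22_345_680.47, 0.08, -0.05, 0.03, 0.12))
```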
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dokhane, A.; Canepa, S.; Ferroukhi, H.
For stability analyses of the Swiss operating Boiling Water Reactors (BWRs), the methodology employed and validated so far at the Paul Scherrer Institute (PSI) was based on the RAMONA-3 code with a hybrid upstream static lattice/core analysis approach using CASMO-4 and PRESTO-2. More recently, steps were undertaken towards a new methodology based on the SIMULATE-3K (S3K) code for the dynamical analyses, combined with the CMSYS system relying on the CASMO/SIMULATE-3 suite of codes, which was established at PSI to serve as a framework for the development and validation of reference core models of all the Swiss reactors and operated cycles. This paper presents a first validation of the new methodology on the basis of a benchmark recently organised by a Swiss utility and including the participation of several international organisations with various codes/methods. In parallel, a transition from CASMO-4E (C4E) to CASMO-5M (C5M) as basis for the CMSYS core models was also recently initiated at PSI. Consequently, it was considered adequate to address the impact of this transition both for the steady-state core analyses and for the stability calculations, and thereby to achieve an integral approach for the validation of the new S3K methodology. Therefore, a comparative assessment of C4E versus C5M is also presented in this paper, with particular emphasis on the void coefficients and their impact on the downstream stability analysis results. (authors)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisohara, Naoyuki; Moribe, Takeshi; Sakai, Takaaki
2006-07-01
The sodium-heated steam generator (SG) being designed in the feasibility study on commercialized fast reactor cycle systems is a straight double-wall-tube type. The SG is large-sized to reduce its manufacturing cost through economies of scale. This paper addresses the multi-dimensional temperature and flow distributions at steady state to obtain the prospect of the SG. Large-sized heat exchanger components are prone to non-uniform flow and temperature distributions. These phenomena might lead to tube buckling or tube to tube-sheet junction failure in straight-tube SGs, owing to differences in thermal expansion between tubes. The flow adjustment devices installed in the SG are optimized to prevent these issues, and the temperature distribution properties are uncovered by analysis methods. The analysis model of the SG consists of two parts, a sodium inlet distribution plenum (the plenum) and a heat transfer tube bundle region (the bundle). The flow and temperature distributions in the plenum and the bundle are evaluated by the three-dimensional code 'FLUENT' and the two-dimensional thermal-hydraulic code 'MSG', respectively. The MSG code has been developed specifically for sodium-heated SGs at JAEA. These codes have revealed that the sodium flow is distributed uniformly by the flow adjustment devices, and that the lateral tube temperature distributions remain within the allowable temperature range for the structural integrity of the tubes and the tube to tube-sheet junctions. (authors)
A computational theory for the classification of natural biosonar targets based on a spike code.
Müller, Rolf
2003-08-01
A computational theory for the classification of natural biosonar targets is developed based on the properties of an example stimulus ensemble. An extensive set of echoes (84 800) from four different foliages was transcribed into a spike code using a parsimonious model (linear filtering, half-wave rectification, thresholding). The spike code is assumed to consist of time differences (interspike intervals) between threshold crossings. Among the elementary interspike intervals flanked by exceedances of adjacent thresholds, a few intervals triggered by disjoint half-cycles of the carrier oscillation stand out in terms of resolvability, visibility across resolution scales and a simple stochastic structure (uncorrelatedness). They are therefore argued to be a stochastic analogue to edges in vision. A three-dimensional feature vector representing these interspike intervals sustained a reliable target classification performance (0.06% classification error) in a sequential probability ratio test, which models sequential processing of echo trains by biological sonar systems. The dimensions of the representation are the first moments of duration and amplitude location of these interspike intervals as well as their number. All three quantities are readily reconciled with known principles of neural signal representation, since they correspond to the centre of gravity of excitation on a neural map and the total amount of excitation.
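The sketch below illustrates, under stated assumptions, the parsimonious transcription model described above: a generic band-pass filter stands in for the linear filtering stage, followed by half-wave rectification and extraction of interspike intervals between upward threshold crossings. The filter design, thresholds, and toy echo are placeholders, not the paper's stimulus ensemble or parameters.

```python
import numpy as np
from scipy.signal import butter, lfilter

def echo_to_interspike_intervals(echo, fs, thresholds):
    """Transcribe an echo waveform into interspike intervals: band-limit,
    half-wave rectify, and record time differences between threshold crossings."""
    # Linear filtering (a generic band-pass stands in for the actual filter).
    b, a = butter(4, [0.05, 0.45], btype="band")
    filtered = lfilter(b, a, echo)
    rectified = np.maximum(filtered, 0.0)            # half-wave rectification
    intervals = {}
    for th in thresholds:
        above = rectified >= th
        crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1   # upward crossings
        intervals[th] = np.diff(crossings) / fs      # interspike intervals in seconds
    return intervals

rng = np.random.default_rng(0)
echo = rng.normal(size=4096) * np.hanning(4096)      # toy stand-in for a foliage echo
isis = echo_to_interspike_intervals(echo, 400e3, [0.2, 0.4])
print({th: isi[:3] for th, isi in isis.items()})
```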
Rodríguez, Manuel; Magdaleno, Eduardo; Pérez, Fernando; García, Cristhian
2017-03-28
Non-equispaced Fast Fourier transform (NFFT) is a very important algorithm in several technological and scientific areas such as synthetic aperture radar, computational photography, medical imaging, telecommunications, seismic analysis, and so on. However, its computational complexity is high. In this paper, we describe an efficient NFFT implementation with a hardware coprocessor using an All-Programmable System-on-Chip (APSoC). This is a hybrid device that employs an Advanced RISC Machine (ARM) as Processing System with Programmable Logic for high-performance digital signal processing through parallelism and pipelining techniques. The algorithm has been coded in C with pragma directives to optimize the architecture of the system. We have used the novel Software-Defined System-on-Chip (SDSoC) development tool, which simplifies the interface and partitioning between hardware and software. This provides shorter development cycles and iterative improvements by exploring several architectures of the global system. The computational results show that hardware acceleration significantly outperforms the software-based implementation.
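For orientation, the following is a direct O(N·M) non-equispaced DFT in plain NumPy, i.e., the reference computation that an NFFT (and the hardware coprocessor described above) approximates and accelerates; it is not the APSoC implementation.

```python
import numpy as np

def ndft(samples, nodes, n_freqs):
    """Direct non-equispaced DFT: evaluate Fourier sums at integer frequencies
    for samples taken at non-equispaced nodes in [-1/2, 1/2)."""
    k = np.arange(-n_freqs // 2, n_freqs // 2)                  # frequency indices
    return np.exp(-2j * np.pi * np.outer(k, nodes)) @ samples   # O(N*M) reference

rng = np.random.default_rng(1)
nodes = rng.uniform(-0.5, 0.5, size=64)        # non-equispaced sample positions
samples = rng.normal(size=64) + 1j * rng.normal(size=64)
spectrum = ndft(samples, nodes, 32)
print(spectrum.shape)                           # (32,)
```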
The advanced software development workstation project
NASA Technical Reports Server (NTRS)
Fridge, Ernest M., III; Pitman, Charles L.
1991-01-01
The Advanced Software Development Workstation (ASDW) task is researching and developing the technologies required to support Computer Aided Software Engineering (CASE) with the emphasis on those advanced methods, tools, and processes that will be of benefit to support all NASA programs. Immediate goals are to provide research and prototype tools that will increase productivity, in the near term, in projects such as the Software Support Environment (SSE), the Space Station Control Center (SSCC), and the Flight Analysis and Design System (FADS) which will be used to support the Space Shuttle and Space Station Freedom. Goals also include providing technology for development, evolution, maintenance, and operations. The technologies under research and development in the ASDW project are targeted to provide productivity enhancements during the software life cycle phase of enterprise and information system modeling, requirements generation and analysis, system design and coding, and system use and maintenance. On-line user's guides will assist users in operating the developed information system with knowledge base expert assistance.
Computational Analysis for Rocket-Based Combined-Cycle Systems During Rocket-Only Operation
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.; Smith, T. D.; Yungster, S.; Keller, D. J.
2000-01-01
A series of Reynolds-averaged Navier-Stokes calculations was employed to study the performance of rocket-based combined-cycle systems operating in an all-rocket mode. This parametric series of calculations was executed within a statistical framework, commonly known as design of experiments. The parametric design space included four geometric and two flowfield variables set at three levels each, for a total of 729 possible combinations. A D-optimal design strategy was selected. It required that only 36 separate computational fluid dynamics (CFD) solutions be performed to develop a full response surface model, which quantified the linear, bilinear, and curvilinear effects of the six experimental variables. The axisymmetric, Reynolds-averaged Navier-Stokes simulations were executed with the NPARC v3.0 code. The response used in the statistical analysis was created from Isp efficiency data integrated from the 36 CFD simulations. The influence of turbulence modeling was analyzed by using both one- and two-equation models. Careful attention was also given to quantifying the influence of mesh dependence, iterative convergence, and artificial viscosity upon the resulting statistical model. Thirteen statistically significant effects were observed to influence rocket-based combined-cycle nozzle performance. It was apparent that the free-expansion process, directly downstream of the rocket nozzle, can influence the Isp efficiency. Numerical schlieren images and particle traces have been used to further understand the physical phenomena behind several of the statistically significant results.
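A minimal sketch of fitting such a response surface model, with intercept, linear, bilinear (two-factor interaction), and curvilinear (squared) terms estimated by least squares from a small set of coded runs, is shown below. The synthetic design matrix and response are illustrative only.

```python
import itertools
import numpy as np

def response_surface_fit(X, y):
    """Least-squares fit of a full quadratic response surface:
    intercept + linear + bilinear (interaction) + curvilinear (squared) terms."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                          # linear
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]  # bilinear
    cols += [X[:, i] ** 2 for i in range(k)]                                     # curvilinear
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs

# Hypothetical example: 36 runs over six variables coded at levels -1, 0, +1,
# with a synthetic Isp-efficiency-like response.
rng = np.random.default_rng(2)
X = rng.choice([-1.0, 0.0, 1.0], size=(36, 6))
y = 0.92 + 0.01 * X[:, 0] - 0.004 * X[:, 1] * X[:, 2] + 0.002 * X[:, 3] ** 2
print(response_surface_fit(X, y)[:4])
```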
Schaap, Pauline; Barrantes, Israel; Minx, Pat; Sasaki, Narie; Anderson, Roger W.; Bénard, Marianne; Biggar, Kyle K.; Buchler, Nicolas E.; Bundschuh, Ralf; Chen, Xiao; Fronick, Catrina; Fulton, Lucinda; Golderer, Georg; Jahn, Niels; Knoop, Volker; Landweber, Laura F.; Maric, Chrystelle; Miller, Dennis; Noegel, Angelika A.; Peace, Rob; Pierron, Gérard; Sasaki, Taeko; Schallenberg-Rüdinger, Mareike; Schleicher, Michael; Singh, Reema; Spaller, Thomas; Storey, Kenneth B.; Suzuki, Takamasa; Tomlinson, Chad; Tyson, John J.; Warren, Wesley C.; Werner, Ernst R.; Werner-Felmayer, Gabriele; Wilson, Richard K.; Winckler, Thomas; Gott, Jonatha M.; Glöckner, Gernot; Marwan, Wolfgang
2016-01-01
Physarum polycephalum is a well-studied microbial eukaryote with unique experimental attributes relative to other experimental model organisms. It has a sophisticated life cycle with several distinct stages including amoebal, flagellated, and plasmodial cells. It is unusual in switching between open and closed mitosis according to specific life-cycle stages. Here we present the analysis of the genome of this enigmatic and important model organism and compare it with closely related species. The genome is littered with simple and complex repeats and the coding regions are frequently interrupted by introns with a mean size of 100 bases. Complemented with extensive transcriptome data, we define approximately 31,000 gene loci, providing unexpected insights into early eukaryote evolution. We describe extensive use of histidine kinase-based two-component systems and tyrosine kinase signaling, the presence of bacterial and plant type photoreceptors (phytochromes, cryptochrome, and phototropin) and of plant-type pentatricopeptide repeat proteins, as well as metabolic pathways, and a cell cycle control system typically found in more complex eukaryotes. Our analysis characterizes P. polycephalum as a prototypical eukaryote with features attributed to the last common ancestor of Amorphea, that is, the Amoebozoa and Opisthokonts. Specifically, the presence of tyrosine kinases in Acanthamoeba and Physarum as representatives of two distantly related subdivisions of Amoebozoa argues against the later emergence of tyrosine kinase signaling in the opisthokont lineage and also against the acquisition by horizontal gene transfer. PMID:26615215
Numerical Simulation of Wall Heat Load in Combustor Flow
NASA Astrophysics Data System (ADS)
Panara, D.; Hase, M.; Krebs, W.; Noll, B.
2007-09-01
Due to the dominant mechanism of NOx generation, there is generally a temperature trade-off between improved cycle efficiency, material constraints, and low NOx emissions. The cycle efficiency is proportional to the highest cycle temperature, but unfortunately NOx production also increases with increasing combustion temperature. For this reason, modern combustion chamber design has been oriented towards lean premixed combustion systems, and more and more attention must be focused on cooling air management. The challenge is to ensure a sufficiently low temperature of the combustion liner with a very low amount of film or effusion cooling air. Correct numerical prediction of temperature fields and wall heat loads is therefore of critical interest in modern combustion chamber design. Moreover, lean combustion technology has shown the appearance of thermo-acoustic instabilities, which have to be taken into account in the simulation and, more generally, in the design of reliable combustion systems. In this framework, the present investigation addresses the capability of a commercial multiphysics code (ANSYS CFX) to correctly predict the wall heat load and the core flow temperature field in a scaled power-generation combustion chamber with a simplified ceramic liner. Comparisons are made with the experimental results from the ITS test rig at the University of Karlsruhe [1] and with a previous numerical campaign [2]. In addition, the effect of flow unsteadiness on the wall heat load is discussed, showing some limitations of the traditional steady-state flow thermal design.
L3.PHI.CTF.P10.02-rev2 Coupling of Subchannel T/H (CTF) and CRUD Chemistry (MAMBA1D)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Salko, Robert K.; Palmtag, Scott; Collins, Benjamin S.
2015-05-15
The purpose of this milestone is to create a preliminary capability for modeling light water reactor (LWR) thermal-hydraulics (T/H) and CRUD growth using the CTF subchannel code and the subgrid version of the MAMBA CRUD chemistry code, MAMBA1D. In part, this is a follow-on to Milestone L3.PHI.VCS.P9.01, which is documented in Report CASL-U-2014-0188-000, titled "Development of CTF Capability for Modeling Reactor Operating Cycles with Crud Growth". As the title suggests, the previous milestone set up a framework for modeling reactor operating cycles with CTF. The framework also facilitated coupling to a CRUD chemistry capability for modeling CRUD growth throughout the reactor operating cycle. To demonstrate the capability, a simple CRUD "surrogate" tool was developed and coupled to CTF; however, it was noted that CRUD growth predictions by the surrogate were not considered realistic. This milestone builds on L3.PHI.VCS.P9.01 by replacing this simple surrogate tool with the more advanced MAMBA1D CRUD chemistry code. Completing this task involves addressing unresolved tasks from Milestone L3.PHI.VCS.P9.01, setting up an interface to MAMBA1D, and extracting new T/H information from CTF that was not previously required by the simple surrogate tool. Specific challenges encountered during this milestone include (1) treatment of the CRUD erosion model, which requires local turbulent kinetic energy (TKE), a value that CTF does not calculate, and (2) treatment of the MAMBA1D CRUD chimney boiling model in the CTF rod heat transfer solution. To demonstrate this new T/H and CRUD modeling capability, two sets of simulations were performed: (1) an 18-month cycle simulation of a quarter-symmetry model of Watts Bar and (2) a simulation of Assemblies G69 and G70 from Seabrook Cycle 5. The Watts Bar simulation is merely a demonstration of the capability. The simulation of the Seabrook cycle, which had experienced CRUD-related fuel rod failures, had actual CRUD-scrape data to compare with results. As the results show, the initial CTF/MAMBA1D-predicted CRUD thicknesses were about half of their expected values, so further investigation will be required for this simulation.
Revenue cycle management, Part II.
Crew, Matt
2007-01-01
The proper management of your revenue cycle requires the application of "best practices" and the continual monitoring and measuring of the entire cycle. The correct technology will enable you to gain the insight and efficiencies needed in the ever-changing healthcare economy. The revenue cycle is a process that begins when you negotiate payor contracts, set fees, and schedule appointments and continues until claims are paid in full. Every single step in the cycle carries equal importance. Monitoring all phases and a commitment to continually communicating the results will allow you to achieve unparalleled success. In part I of this article, we explored the importance of contracting, scheduling, and case management as well as coding and clinical documentation. We will now take a closer look at the benefits charge capture, claim submission, payment posting, accounts receivable follow-up, and reporting can mean to your practice.
Wang, Zhongyi; Li, Jiaming; Fu, Yingying; Zhao, Zongzheng; Zhang, Chunmao; Li, Nan; Li, Jingjing; Cheng, Hongliang; Jin, Xiaojun; Lu, Bing; Guo, Zhendong; Qian, Jun; Liu, Linna
2018-05-16
MicroRNAs (miRNAs) may become efficient antiviral agents against the Ebola virus (EBOV) targeting viral genomic RNAs or transcripts. We previously conducted a genome-wide search for differentially expressed miRNAs during viral replication and transcription. In this study, we established a rapid screen for miRNAs with inhibitory effects against EBOV using a tetracistronic transcription- and replication-competent virus-like particle (trVLP) system. This system uses a minigenome comprising an EBOV leader region, luciferase reporter, VP40, GP, VP24, EBOV trailer region, and three noncoding regions from the EBOV genome and can be used to model the life cycle of EBOV under biosafety level (BSL) 2 conditions. Informatic analysis was performed to select up-regulated miRNAs targeting the coding regions of the minigenome with the highest binding energy to perform inhibitory effect screening. Among these miRNAs, miR-150-3p had the most significant inhibitory effect. Reverse transcription polymerase chain reaction (RT-PCR), Western blot, and double fluorescence reporter experiments demonstrated that miR-150-3p inhibited the reproduction of trVLPs via the regulation of GP and VP40 expression by directly targeting the coding regions of GP and VP40. This novel, rapid, and convenient screening method will efficiently facilitate the exploration of miRNAs against EBOV under BSL-2 conditions.
Unsteady Flow Interactions Between the LH2 Feed Line and SSME LPFP Inducer
NASA Technical Reports Server (NTRS)
Dorney, Dan; Griffin, Lisa; Marcu, Bogdan; Williams, Morgan
2006-01-01
An extensive computational effort has been performed in order to investigate the nature of unsteady flow in the fuel line supplying the three Space Shuttle Main Engines during flight. Evidence of high cycle fatigue (HCF) in the flow liner one diameter upstream of the Low Pressure Fuel Pump inducer has been observed in several locations. The analysis presented in this report has the objective of determining the driving mechanisms inducing HCF and the associated fluid flow phenomena. The simulations have been performed using two different computational codes, the NASA MSFC PHANTOM code and the Pratt and Whitney Rocketdyne ENIGMA code. The fuel flow through the flow liner and the pump inducer has been modeled in full three-dimensional geometry, and the results of the computations compared with test data taken during hot fire tests at NASA Stennis Space Center and cold-flow water test data obtained at NASA MSFC. The numerical results indicate that unsteady pressure fluctuations at specific frequencies develop in the duct at the flow-liner location. A detailed frequency analysis of the flow disturbances is presented. The unsteadiness is believed to be an important source of the fluctuating pressures generating high cycle fatigue.
Leach, R; McNally, Donal; Bashir, Mohamad; Sastry, Priya; Cuerden, Richard; Richens, David; Field, Mark
2012-10-01
The severity and location of injuries resulting from vehicular collisions are normally recorded in Abbreviated Injury Scale (AIS) code; we propose a system to link AIS code to a description of acute aortic syndrome (AAS), thus allowing the hypothesis that aortic injury is progressive with collision kinematics to be tested. Standard AIS codes were matched with a clinical description of AAS. A total of 199 collisions that resulted in aortic injury were extracted from a national automotive collision database and the outcomes mapped onto AAS descriptions. The severity of aortic injury (AIS severity score) and stage of AAS progression were compared with collision kinematics and occupant demographics. Post hoc power analyses were used to estimate maximum effect size. The general demographic distribution of the sample represented that of the UK population in regard to sex and age. No significant relationship was observed between estimated test speed, collision direction, occupant location or seat belt use and clinical progression of aortic injury (once initiated). Power analysis confirmed that a suitable sample size was used to observe a medium effect in most of the cases. Similarly, no association was observed between injury severity and collision kinematics. There is sufficient information on AIS severity and location codes to map onto the clinical AAS spectrum. It was not possible, with this data set, to consider the influence of collision kinematics on aortic injury initiation. However, it was demonstrated that after initiation, further progression along the AAS pathway was not influenced by collision kinematics. This might be because the injury is not progressive, because the vehicle kinematics studied do not fully represent the kinematics of the occupants, or because an unknown factor, such as stage of cardiac cycle, dominates. Epidemiologic/prognostic study, level IV.
Performance (Off-Design) Cycle Analysis for a Turbofan Engine With Interstage Turbine Burner
NASA Technical Reports Server (NTRS)
Liew, K. H.; Urip, E.; Yang, S. L.; Mattingly, J. D.; Marek, C. J.
2005-01-01
This report presents the performance of a steady-state, dual-spool, separate-exhaust turbofan engine with an interstage turbine burner (ITB) serving as a secondary combustor. The ITB, which is located in the transition duct between the high- and the low-pressure turbines, is a relatively new concept for increasing specific thrust and lowering pollutant emissions in modern jet-engine propulsion. A detailed off-design performance analysis of ITB engines is written in Microsoft® Excel (Redmond, Washington) macro code with Visual Basic for Applications to calculate engine performance over the entire operating envelope. Several design-point engine cases are pre-selected, using a parametric cycle-analysis code developed previously in Microsoft® Excel, for off-design analysis. The off-design code calculates engine performance (i.e., thrust and thrust specific fuel consumption) at various flight conditions and throttle settings.
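As a small illustration of the kind of performance quantity such an off-design code reports, the sketch below computes thrust specific fuel consumption from fuel flow and net thrust at a few hypothetical throttle settings; the numbers are placeholders, not results from the report.

```python
def tsfc_mg_per_Ns(fuel_flow_kg_s, net_thrust_N):
    """Thrust specific fuel consumption in mg/(N*s): fuel mass flow per unit net thrust."""
    return fuel_flow_kg_s / net_thrust_N * 1e6

# Hypothetical throttle settings at one flight condition (fuel flow kg/s, net thrust N).
for wf, fn in [(0.38, 38_000.0), (0.26, 26_000.0), (0.17, 15_500.0)]:
    print(f"fuel flow {wf:.2f} kg/s -> thrust {fn/1000:.1f} kN, "
          f"TSFC {tsfc_mg_per_Ns(wf, fn):.1f} mg/(N*s)")
```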
Channel coding in the space station data system network
NASA Technical Reports Server (NTRS)
Healy, T.
1982-01-01
A detailed discussion of the use of channel coding for error correction, privacy/secrecy, channel separation, and synchronization is presented. Channel coding, in one form or another, is an established and common element in data systems. No analysis and design of a major new system would fail to consider ways in which channel coding could make the system more effective. The presence of channel coding on TDRS, Shuttle, the Advanced Communication Technology Satellite Program system, the JSC-proposed Space Operations Center, and the proposed 30/20 GHz Satellite Communication System strongly support the requirement for the utilization of coding for the communications channel. The designers of the space station data system have to consider the use of channel coding.
Ground Systems Development Environment (GSDE) software configuration management
NASA Technical Reports Server (NTRS)
Church, Victor E.; Long, D.; Hartenstein, Ray; Perez-Davila, Alfredo
1992-01-01
This report presents a review of the software configuration management (CM) plans developed for the Space Station Training Facility (SSTF) and the Space Station Control Center. The scope of the CM assessed in this report is the Systems Integration and Testing Phase of the Ground Systems development life cycle. This is the period following coding and unit test and preceding delivery to operational use. This report is one of a series from a study of the interfaces among the Ground Systems Development Environment (GSDE), the development systems for the SSTF and the SSCC, and the target systems for SSCC and SSTF. This is the last report in the series. The focus of this report is on the CM plans developed by the contractors for the Mission Systems Contract (MSC) and the Training Systems Contract (TSC). CM requirements are summarized and described in terms of operational software development. The software workflows proposed in the TSC and MSC plans are reviewed in this context, and evaluated against the CM requirements defined in earlier study reports. Recommendations are made to improve the effectiveness of CM while minimizing its impact on the developers.
GPS-Like Phasing Control of the Space Solar Power System Transmission Array
NASA Technical Reports Server (NTRS)
Psiaki, Mark L.
2003-01-01
The problem of phasing of the Space Solar Power System's transmission array has been addressed by developing a GPS-like radio navigation system. The goal of this system is to provide power transmission phasing control for each node of the array that causes the power signals to add constructively at the ground reception station. The phasing control system operates in a distributed manner, which makes it practical to implement. A leader node and two radio navigation beacons are used to control the power transmission phasing of multiple follower nodes. The necessary one-way communications to the follower nodes are implemented using the RF beacon signals. The phasing control system uses differential carrier phase relative navigation/timing techniques. A special feature of the system is an integer ambiguity resolution procedure that periodically resolves carrier phase cycle count ambiguities via encoding of pseudo-random number codes on the power transmission signals. The system is capable of achieving phasing accuracies on the order of 3 mm down to 0.4 mm depending on whether the radio navigation beacons operate in the L or C bands.
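The sketch below illustrates the basic phasing idea in simplified form: each node offsets its transmit phase by the fractional-wavelength part of its extra path length to the ground station so that the power signals arrive in phase. The geometry, frequency, and function are illustrative assumptions and do not represent the differential carrier phase algorithm or the ambiguity resolution procedure described above.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s

def transmit_phase_offsets(node_positions, ground_station, freq_hz):
    """Phase each node must add so all power signals arrive in phase at the ground station."""
    wavelength = C / freq_hz
    ranges = np.linalg.norm(node_positions - ground_station, axis=1)
    # Advance each node's phase by the fractional-wavelength part of its extra path length.
    extra = ranges - ranges.min()
    return 2.0 * np.pi * (extra % wavelength) / wavelength

# Hypothetical array geometry (meters) and a hypothetical 5.8 GHz power beam.
nodes = np.array([[0.0, 0.0, 36.0e6], [12.5, 0.0, 36.0e6], [25.0, 0.0, 36.0e6]])
ground = np.array([0.0, 0.0, 0.0])
print(transmit_phase_offsets(nodes, ground, 5.8e9))
```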
Why and how Mastering an Incremental and Iterative Software Development Process
NASA Astrophysics Data System (ADS)
Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe
2004-06-01
One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are going to be presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages: - It permits systematic management and incorporation of requirements changes over the development cycle with a minimal cost. As far as possible the most dimensioning requirements are analyzed and developed in priority for validating very early the architecture concept without the details. - A software prototype is very quickly available. It improves the communication between system and software teams, as it enables to check very early and efficiently the common understanding of the system requirements. - It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. Anyhow, it improves a lot the learning curve of the software team. These advantages seem very attractive, but mastering efficiently an iterative development process is not so easy and induces a lot of difficulties such as: - How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable? - How to distinguish stable/unstable and dimensioning/standard requirements? - How to plan the development of each increment? - How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc. Several solutions envisaged or already deployed by EADS SPACE Transportation will be presented, both from a methodological and technological point of view: - How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way. - How the CMM approach can help by better formalizing Requirements Management and Planning processes. - How the Automatic Code Generation with "certified" tools (SCADE) can still dramatically shorten the development cycle. Then the presentation will conclude by showing an evaluation of the cost and planning reduction based on a pilot application by comparing figures on two similar projects: one with the classical waterfall process, the other one with an iterative and incremental approach.
Risk-Based Probabilistic Approach to Aeropropulsion System Assessment
NASA Technical Reports Server (NTRS)
Tong, Michael T.
2002-01-01
In an era of shrinking development budgets and resources, where there is also an emphasis on reducing the product development cycle, the role of system assessment, performed in the early stages of an engine development program, becomes very critical to the successful development of new aeropropulsion systems. A reliable system assessment not only helps to identify the best propulsion system concept among several candidates, it can also identify which technologies are worth pursuing. This is particularly important for advanced aeropropulsion technology development programs, which require an enormous amount of resources. In the current practice of deterministic, or point-design, approaches, the uncertainties of design variables are either unaccounted for or accounted for by safety factors. This could often result in an assessment with unknown and unquantifiable reliability. Consequently, it would fail to provide additional insight into the risks associated with the new technologies, which are often needed by decision makers to determine the feasibility and return-on-investment of a new aircraft engine. In this work, an alternative approach based on the probabilistic method was described for a comprehensive assessment of an aeropropulsion system. The statistical approach quantifies the design uncertainties inherent in a new aeropropulsion system and their influences on engine performance. Because of this, it enhances the reliability of a system assessment. A technical assessment of a wave-rotor-enhanced gas turbine engine was performed to demonstrate the methodology. The assessment used probability distributions to account for the uncertainties that occur in component efficiencies and flows and in mechanical design variables. The approach taken in this effort was to integrate the thermodynamic cycle analysis embedded in the computer code NEPP (NASA Engine Performance Program) and the engine weight analysis embedded in the computer code WATE (Weight Analysis of Turbine Engines) with the fast probability integration technique (FPI). FPI was developed by Southwest Research Institute under contract with the NASA Glenn Research Center. The results were plotted in the form of cumulative distribution functions and sensitivity analyses and were compared with results from the traditional deterministic approach. The comparison showed that the probabilistic approach provides a more realistic and systematic way to assess an aeropropulsion system. The current work addressed the application of the probabilistic approach to assess specific fuel consumption, engine thrust, and weight. Similarly, the approach can be used to assess other aspects of aeropropulsion system performance, such as cost, acoustic noise, and emissions. Additional information is included in the original extended abstract.
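A toy Monte Carlo sketch in the spirit of this approach is shown below: component-efficiency uncertainties are sampled from assumed distributions and propagated through a simple surrogate to a cycle figure of merit, from which a cumulative probability can be read. The surrogate model and distributions are illustrative placeholders, not the NEPP/WATE/FPI integration.

```python
import numpy as np

def toy_sfc_model(eta_compressor, eta_turbine):
    """Toy surrogate: specific fuel consumption worsens as component efficiencies drop."""
    return 0.55 / (eta_compressor * eta_turbine)

rng = np.random.default_rng(3)
n = 100_000
eta_c = rng.normal(0.88, 0.01, n)       # hypothetical uncertainty in compressor efficiency
eta_t = rng.normal(0.90, 0.01, n)       # hypothetical uncertainty in turbine efficiency
sfc = toy_sfc_model(eta_c, eta_t)

# Cumulative distribution: probability that SFC stays below a requirement.
requirement = 0.71
print("P(SFC <= requirement) =", np.mean(sfc <= requirement))
print("5th/50th/95th percentiles:", np.percentile(sfc, [5, 50, 95]))
```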
Tritium Mitigation/Control for Advanced Reactor System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Xiaodong; Christensen, Richard; Saving, John P.
A tritium removal facility, which is similar to the design used for tritium recovery in fusion reactors, is proposed in this study for fluoride-salt-cooled high-temperature reactors (FHRs) to result in a two-loop FHR design with the elimination of an intermediate loop. Using this approach, an economic benefit can potentially be obtained by removing the intermediate loop, while the safety concern of tritium release can be mitigated. In addition, an intermediate heat exchanger (IHX) that can yield a similar tritium permeation rate to the production rate of 1.9 Ci/day in a 1,000 MWe PWR needs to be designed to prevent the residual tritium that is not captured in the tritium removal system from escaping into the power cycle and ultimately the environment. The main focus of this study is to aid the mitigation of the tritium permeation issue from the FHR primary side in order to significantly reduce the concentration of tritium in the secondary side and the process heat application side (if applicable). The goal of the research is to propose a baseline FHR system without the intermediate loop. The specific objectives to accomplish this goal are: to estimate tritium permeation behavior in FHRs; to design a tritium removal system for FHRs; to meet the same tritium permeation level in FHRs as the tritium production rate of 1.9 Ci/day in 1,000 MWe PWRs; and to demonstrate the economic benefits of the proposed FHR system via comparison with the three-loop FHR system. The objectives were accomplished by designing tritium removal facilities, developing a tritium analysis code, and conducting an economic analysis. In the fusion reactor community, tritium extraction has been widely investigated and researched. Borrowing from the experience of the fusion reactor community, a tritium control and mitigation system was proposed. Based on mass transport theories, a tritium analysis code was developed, and tritium behaviors were analyzed using the developed code. Tritium removal facilities were designed and laboratory-scale experiments were proposed for the validation of the proposed tritium removal facilities.
PhysiCell: An open source physics-based cell simulator for 3-D multicellular systems
Ghaffarizadeh, Ahmadreza; Mumenthaler, Shannon M.
2018-01-01
Many multicellular systems problems can only be understood by studying how cells move, grow, divide, interact, and die. Tissue-scale dynamics emerge from systems of many interacting cells as they respond to and influence their microenvironment. The ideal “virtual laboratory” for such multicellular systems simulates both the biochemical microenvironment (the “stage”) and many mechanically and biochemically interacting cells (the “players” upon the stage). PhysiCell—physics-based multicellular simulator—is an open source agent-based simulator that provides both the stage and the players for studying many interacting cells in dynamic tissue microenvironments. It builds upon a multi-substrate biotransport solver to link cell phenotype to multiple diffusing substrates and signaling factors. It includes biologically-driven sub-models for cell cycling, apoptosis, necrosis, solid and fluid volume changes, mechanics, and motility “out of the box.” The C++ code has minimal dependencies, making it simple to maintain and deploy across platforms. PhysiCell has been parallelized with OpenMP, and its performance scales linearly with the number of cells. Simulations up to 10^5-10^6 cells are feasible on quad-core desktop workstations; larger simulations are attainable on single HPC compute nodes. We demonstrate PhysiCell by simulating the impact of necrotic core biomechanics, 3-D geometry, and stochasticity on the dynamics of hanging drop tumor spheroids and ductal carcinoma in situ (DCIS) of the breast. We demonstrate stochastic motility, chemical and contact-based interaction of multiple cell types, and the extensibility of PhysiCell with examples in synthetic multicellular systems (a “cellular cargo delivery” system, with application to anti-cancer treatments), cancer heterogeneity, and cancer immunology. PhysiCell is a powerful multicellular systems simulator that will be continually improved with new capabilities and performance improvements. It also represents a significant independent code base for replicating results from other simulation platforms. The PhysiCell source code, examples, documentation, and support are available under the BSD license at http://PhysiCell.MathCancer.org and http://PhysiCell.sf.net. PMID:29474446
Systems biology by the rules: hybrid intelligent systems for pathway modeling and discovery.
Bosl, William J
2007-02-15
Expert knowledge in journal articles is an important source of data for reconstructing biological pathways and creating new hypotheses. An important need for medical research is to integrate these data with high-throughput sources to build useful models that span several scales. Researchers traditionally use mental models of pathways to integrate information and develop new hypotheses. Unfortunately, the amount of information is often overwhelming, and such mental models are inadequate for predicting the dynamic response of complex pathways. Hierarchical computational models that allow exploration of semi-quantitative dynamics are useful systems biology tools for theoreticians, experimentalists, and clinicians and may provide a means for cross-communication. A novel approach for biological pathway modeling based on hybrid intelligent systems, or soft computing technologies, is presented here. Hybrid intelligent systems, which refer to several related computing methods such as fuzzy logic, neural nets, genetic algorithms, and statistical analysis, have become ubiquitous in engineering applications for complex control system modeling and design. Biological pathways may be considered complex control systems, which medicine tries to manipulate to achieve desired results. Thus, hybrid intelligent systems may provide a useful tool for modeling biological system dynamics and computational exploration of new drug targets. A new modeling approach based on these methods is presented in the context of hedgehog regulation of the cell cycle in granule cells. Code and input files can be found at the Bionet website: www.chip.ord/~wbosl/Software/Bionet. This paper presents the algorithmic methods needed for modeling complicated biochemical dynamics using rule-based models to represent expert knowledge in the context of cell cycle regulation and tumor growth. A notable feature of this modeling approach is that it allows biologists to build complex models from their knowledge base without the need to translate that knowledge into mathematical form. Dynamics on several levels, from molecular pathways to tissue growth, are seamlessly integrated. A number of common network motifs are examined and used to build a model of hedgehog regulation of the cell cycle in cerebellar neurons, which is believed to play a key role in the etiology of medulloblastoma, a devastating childhood brain cancer.
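A very small rule-based sketch of the semi-quantitative dynamics idea is given below: fuzzy memberships over a regulator's activity drive incremental updates of a target's activity. The ramp membership functions and the Hedgehog/cyclin naming are illustrative assumptions, not the paper's model.

```python
def fuzzy_high(x):
    """Simple ramp membership for 'high' on [0, 1]."""
    return max(0.0, min(1.0, (x - 0.5) / 0.5))

def fuzzy_low(x):
    """Simple ramp membership for 'low' on [0, 1]."""
    return max(0.0, min(1.0, (0.5 - x) / 0.5))

def step(regulator, target, rate=0.2):
    """Rule 1: IF regulator is high THEN increase target.
       Rule 2: IF regulator is low THEN decrease target."""
    delta = rate * (fuzzy_high(regulator) - fuzzy_low(regulator))
    return min(1.0, max(0.0, target + delta))

# Illustrative trajectory: a Hedgehog-like signal driving a cyclin-like activity.
hh, cyclin = 0.9, 0.1
for t in range(5):
    cyclin = step(hh, cyclin)
    print(f"t={t}: cyclin activity = {cyclin:.2f}")
```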
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lucas, Robert G.; Taylor, Zachary T.; Mendon, Vrushali V.
2012-07-03
The 2012 International Energy Conservation Code (IECC) yields positive benefits for Michigan homeowners. Moving to the 2012 IECC from the Michigan Uniform Energy Code is cost-effective over a 30-year life cycle. On average, Michigan homeowners will save $10,081 with the 2012 IECC. Each year, the reduction to energy bills will significantly exceed increased mortgage costs. After accounting for up-front costs and additional costs financed in the mortgage, homeowners should see net positive cash flows (i.e., cumulative savings exceeding cumulative cash outlays) in 1 year for the 2012 IECC. Average annual energy savings are $604 for the 2012 IECC.
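The sketch below shows the style of cumulative cash-flow calculation behind such statements, using the reported $604 average annual energy savings; the added mortgage cost and up-front outlay are hypothetical placeholders rather than the study's inputs.

```python
def cumulative_cash_flow(annual_energy_savings, annual_mortgage_increase,
                         upfront_cash_outlay, years):
    """Year-by-year cumulative net cash flow to the homeowner (savings minus added costs)."""
    flows = []
    total = -upfront_cash_outlay
    for _ in range(years):
        total += annual_energy_savings - annual_mortgage_increase
        flows.append(round(total, 2))
    return flows

# $604/yr average energy savings (from the abstract); the added mortgage cost and
# up-front outlay below are illustrative placeholders, not the study's inputs.
print(cumulative_cash_flow(604.0, 120.0, 250.0, 5))
```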
Mean Line Pump Flow Model in Rocket Engine System Simulation
NASA Technical Reports Server (NTRS)
Veres, Joseph P.; Lavelle, Thomas M.
2000-01-01
A mean line pump flow modeling method has been developed to provide a fast capability for modeling turbopumps of rocket engines. Based on this method, a mean line pump flow code PUMPA has been written that can predict the performance of pumps at off-design operating conditions, given the loss of the diffusion system at the design point. The pump code can model axial flow inducers, mixed-flow and centrifugal pumps. The code can model multistage pumps in series. The code features rapid input setup and computer run time, and is an effective analysis and conceptual design tool. The map generation capability of the code provides the map information needed for interfacing with a rocket engine system modeling code. The off-design and multistage modeling capabilities of the code permit parametric design space exploration of candidate pump configurations and provide pump performance data for engine system evaluation. The PUMPA code has been integrated with the Numerical Propulsion System Simulation (NPSS) code and an expander rocket engine system has been simulated. The mean line pump flow code runs as an integral part of the NPSS rocket engine system simulation and provides key pump performance information directly to the system model at all operating conditions.
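As a rough illustration of the kind of off-design map such a code supplies to an engine system model, the sketch below scales a single design point across shaft speeds using the classical pump affinity laws; this is a generic similarity-law approximation, not the PUMPA mean line formulation.

```python
def affinity_scaled_point(head_design_m, flow_design_m3s, speed_design_rpm, speed_rpm):
    """Scale a pump design point to another shaft speed using the affinity laws:
    flow ~ N, head ~ N^2 (losses and off-design incidence effects neglected)."""
    ratio = speed_rpm / speed_design_rpm
    return flow_design_m3s * ratio, head_design_m * ratio**2

# Hypothetical design point; generate a small speed map for an engine-system model.
for n in (0.7, 0.85, 1.0, 1.1):
    q, h = affinity_scaled_point(1500.0, 0.25, 30_000.0, n * 30_000.0)
    print(f"speed {n*100:.0f}%: flow {q:.3f} m^3/s, head {h:.0f} m")
```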
Automated Concurrent Blackboard System Generation in C++
NASA Technical Reports Server (NTRS)
Kaplan, J. A.; McManus, J. W.; Bynum, W. L.
1999-01-01
In his 1992 Ph.D. thesis, "Design and Analysis Techniques for Concurrent Blackboard Systems", John McManus defined several performance metrics for concurrent blackboard systems and developed a suite of tools for creating and analyzing such systems. These tools allow a user to analyze a concurrent blackboard system design and predict the performance of the system before any code is written. The design can be modified until simulated performance is satisfactory. Then, the code generator can be invoked to generate automatically all of the code required for the concurrent blackboard system except for the code implementing the functionality of each knowledge source. We have completed the port of the source code generator and a simulator for a concurrent blackboard system. The source code generator generates the necessary C++ source code to implement the concurrent blackboard system using Parallel Virtual Machine (PVM) running on a heterogeneous network of UNIX™ workstations. The concurrent blackboard simulator uses the blackboard specification file to predict the performance of the concurrent blackboard design. The only part of the source code for the concurrent blackboard system that the user must supply is the code implementing the functionality of the knowledge sources.
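A minimal sequential sketch of the blackboard pattern itself (shared blackboard, knowledge sources with preconditions, and a simple control loop) is shown below; the generated C++/PVM system is concurrent and far more capable, and the knowledge sources here are illustrative.

```python
class KnowledgeSource:
    """A knowledge source fires when its precondition holds and writes new facts."""
    def __init__(self, name, precondition, action):
        self.name, self.precondition, self.action = name, precondition, action

def run_blackboard(blackboard, knowledge_sources, max_cycles=10):
    """Very small sequential control loop: keep firing applicable knowledge sources."""
    for _ in range(max_cycles):
        fired = False
        for ks in knowledge_sources:
            if ks.precondition(blackboard):
                ks.action(blackboard)
                fired = True
        if not fired:
            break
    return blackboard

sources = [
    KnowledgeSource("segment", lambda bb: "raw" in bb and "segments" not in bb,
                    lambda bb: bb.__setitem__("segments", bb["raw"].split())),
    KnowledgeSource("count", lambda bb: "segments" in bb and "count" not in bb,
                    lambda bb: bb.__setitem__("count", len(bb["segments"]))),
]
print(run_blackboard({"raw": "sensor alpha sensor beta"}, sources))
```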
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stimpson, Shane G; Powers, Jeffrey J; Clarno, Kevin T
The Consortium for Advanced Simulation of Light Water Reactors (CASL) aims to provide high-fidelity, multiphysics simulations of light water reactors (LWRs) by coupling a variety of codes within the Virtual Environment for Reactor Analysis (VERA). One of the primary goals of CASL is to predict local cladding failure through pellet-clad interaction (PCI). This capability is currently being pursued through several different approaches, such as with Tiamat, which is a simulation tool within VERA that more tightly couples the MPACT neutron transport solver, the CTF thermal hydraulics solver, and the MOOSE-based Bison-CASL fuel performance code. However, the process in this paper focuses on running fuel performance calculations with Bison-CASL to predict PCI using the multicycle output data from coupled neutron transport/thermal hydraulics simulations. In recent work within CASL, Watts Bar Unit 1 has been simulated over 12 cycles using the VERA core simulator capability based on MPACT and CTF. Using the output from these simulations, Bison-CASL results can be obtained without rerunning all 12 cycles, while providing some insight into PCI indicators. Multi-cycle Bison-CASL results are presented and compared against results from the FRAPCON fuel performance code. There are several quantities of interest in considering PCI and subsequent fuel rod failures, such as the clad hoop stress and maximum centerline fuel temperature, particularly as a function of time. Bison-CASL performs single-rod simulations using representative power and temperature distributions, providing high-resolution results for these and a number of other quantities. This will assist in identifying fuel rods as potential failure locations for use in further analyses.
NASA Technical Reports Server (NTRS)
Bever, G. A.
1981-01-01
The flight test data requirements at the NASA Dryden Flight Research Center increased in complexity, and more advanced instrumentation became necessary to accomplish mission goals. This paper describes the way in which an airborne computer was used to perform real-time calculations on critical flight test parameters during a flight test on a winglet-equipped KC-135A aircraft. With the computer, an airborne flight test engineer can select any sensor for airborne display in several formats, including engineering units. The computer is able to not only calculate values derived from the sensor outputs but also to interact with the data acquisition system. It can change the data cycle format and data rate, and even insert the derived values into the pulse code modulation (PCM) bit stream for recording.
PCG: A prototype incremental compilation facility for the SAGA environment, appendix F
NASA Technical Reports Server (NTRS)
Kimball, Joseph John
1985-01-01
A programming environment supports the activity of developing and maintaining software. New environments provide language-oriented tools such as syntax-directed editors, whose usefulness is enhanced because they embody language-specific knowledge. When syntactic and semantic analysis occur early in the cycle of program production, that is, during editing, the use of a standard compiler is inefficient, for it must re-analyze the program before generating code. Likewise, it is inefficient to recompile an entire file, when the editor can determine that only portions of it need updating. The pcg, or Pascal code generation, facility described here generates code directly from the syntax trees produced by the SAGA syntax directed Pascal editor. By preserving the intermediate code used in the previous compilation, it can limit recompilation to the routines actually modified by editing.
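The sketch below illustrates the incremental idea in miniature: code is regenerated only for routines whose syntax subtrees changed since the last compilation. The version-number bookkeeping is a hypothetical stand-in for the editor's change tracking, not the SAGA/pcg internals.

```python
def routines_to_recompile(routine_versions, compiled_versions):
    """Return only the routines whose syntax trees changed since the last code generation."""
    return [name for name, version in routine_versions.items()
            if compiled_versions.get(name) != version]

# Hypothetical edit session: the editor bumps a routine's version when its subtree changes.
editor_state = {"ReadInput": 3, "Sort": 7, "Report": 2}
object_state = {"ReadInput": 3, "Sort": 6, "Report": 2}
print(routines_to_recompile(editor_state, object_state))   # ['Sort']
```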
Wen, Dong-Yue; Lin, Peng; Pang, Yu-Yan; Chen, Gang; He, Yun; Dang, Yi-Wu; Yang, Hong
2018-05-05
BACKGROUND Long non-coding RNAs (lncRNAs) have a role in physiological and pathological processes, including cancer. The aim of this study was to investigate the expression of the long intergenic non-protein coding RNA 665 (LINC00665) gene and the cell cycle in hepatocellular carcinoma (HCC) using database analysis including The Cancer Genome Atlas (TCGA), the Gene Expression Omnibus (GEO), and quantitative real-time polymerase chain reaction (qPCR). MATERIAL AND METHODS Expression levels of LINC00665 were compared between human tissue samples of HCC and adjacent normal liver, clinicopathological correlations were made using TCGA and the GEO, and qPCR was performed to validate the findings. Other public databases were searched for other genes associated with LINC00665 expression, including The Atlas of Noncoding RNAs in Cancer (TANRIC), the Multi Experiment Matrix (MEM), Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG) and protein-protein interaction (PPI) networks. RESULTS Overexpression of LINC00665 in patients with HCC was significantly associated with gender, tumor grade, stage, and tumor cell type. Overexpression of LINC00665 in patients with HCC was significantly associated with overall survival (OS) (HR=1.477, 95% CI: 1.046-2.086). Bioinformatics analysis identified 469 related genes and further analysis supported a hypothesis that LINC00665 regulates pathways in the cell cycle to facilitate the development and progression of HCC through ten identified core genes: CDK1, BUB1B, BUB1, PLK1, CCNB2, CCNB1, CDC20, ESPL1, MAD2L1, and CCNA2. CONCLUSIONS Overexpression of the lncRNA, LINC00665 may be involved in the regulation of cell cycle pathways in HCC through ten identified hub genes.
The Exchange Data Communication System based on Centralized Database for the Meat Industry
NASA Astrophysics Data System (ADS)
Kobayashi, Yuichi; Taniguchi, Yoji; Terada, Shuji; Komoda, Norihisa
We propose applying an EDI system that is based on a centralized database and supports conversion of code data to the meat industry. This system makes it possible to share exchange data on beef between enterprises, from producers to retailers, by using Web EDI technology. In order to convert codes efficiently, direct conversion of a sender's code to a receiver's code using a code map is used. A system implementing this function has been in operation since September 2004. Twelve enterprises, including retailers, processing traders, and wholesalers, were using the system as of June 2005. In this system, the number of code maps, which determines the introductory cost of the code conversion function, was lower than the theoretical value and close to that of the case in which a standard code mediates the exchange.
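A minimal sketch of the direct code-map conversion is shown below; the item codes and map entries are hypothetical.

```python
def convert_code(sender_code, code_map):
    """Map a sender's item code directly to the receiver's code; None if unmapped."""
    return code_map.get(sender_code)

# Hypothetical code map between one producer and one retailer for beef items.
code_map = {"PRD-0113": "RTL-88472", "PRD-0114": "RTL-88473"}
for item in ("PRD-0113", "PRD-0999"):
    print(item, "->", convert_code(item, code_map))
```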
NASA Astrophysics Data System (ADS)
Neveu, M.; Felton, R.; Domagal-Goldman, S. D.; Desch, S. J.; Arney, G. N.
2017-12-01
About 20 Earth-sized planets (0.6-1.6 Earth masses and radii) have now been discovered beyond our solar system [1]. Although such planets are prime targets in the upcoming search for atmospheric biosignatures, their composition, geology, and climate are essentially unconstrained. Yet, developing an understanding of how these factors influence planetary evolution through time and space is essential to establishing abiotic backgrounds against which any deviations can provide evidence for biological activity. To this end, we are building coupled geophysical-geochemical models of abiotic carbon cycling on such planets. Our models are controlled by atmospheric factors such as temperature and composition, and compute interior inputs to atmospheric species. They account for crustal weathering, ocean-atmosphere equilibria, and exchange with the deep interior as a function of planet composition and size (and, eventually, age). Planets in other solar systems differ from the Earth not only in their bulk physical properties, but also likely in their bulk chemical composition [2], which influences key parameters such as the vigor of mantle convection and the near-surface redox state. Therefore, simulating how variations in such parameters affect carbon cycling requires us to simulate the above processes from first principles, rather than by using arbitrary parameterizations derived from observations as is often done with models of carbon cycling on Earth [3] or extrapolations thereof [4]. As a first step, we have developed a kinetic model of crustal weathering using the PHREEQC code [5] and kinetic data from [6]. We will present the ability of such a model to replicate Earth's carbon cycle using, for the time being, parameterizations for surface-interior-atmosphere exchange processes such as volcanism (e.g., [7]). [1] exoplanet.eu, 7/28/2017. [2] Young et al. (2014) Astrobiology 14, 603-626. [3] Lerman & Wu (2008) Kinetics of Global Geochemical Cycles. In Kinetics of Water-Rock Interaction (Brantley et al., eds.), Springer, New York. [4] Edson et al. (2012) Astrobiology 12, 562-571. [5] Parkhurst & Appelo (2013) USGS Techniques and Methods 6-A43. [6] Palandri & Kharaka (2008) USGS Report 2004-1068. [7] Kite et al. (2009) ApJ 700, 1732-1749.
Optical protocols for advanced spacecraft networks
NASA Technical Reports Server (NTRS)
Bergman, Larry A.
1991-01-01
Most present day fiber optic networks are in fact extensions of copper wire networks. As a result, their speed is still limited by electronics even though optics is capable of running three orders of magnitude faster. Also, the fact that photons do not interact with one another (as electrons do) provides optical communication systems with some unique properties or new functionality that is not readily taken advantage of with conventional approaches. Some of the motivation for implementing network protocols in the optical domain, a few possible approaches including optical code-division multiple-access (CDMA), and how this class of networks can extend the technology life cycle of the Space Station Freedom (SSF) with increased performance and functionality are described.
Hostile behavior during marital conflict alters pituitary and adrenal hormones.
Malarkey, W B; Kiecolt-Glaser, J K; Pearl, D; Glaser, R
1994-01-01
We evaluated hormonal changes and problem-solving behaviors in 90 newlywed couples who were admitted to a hospital research unit for 24 hours. The subjects were selected on the basis of stringent mental and physical health criteria, and admissions were scheduled during the follicular phase of the woman's menstrual cycle. For frequent, unobtrusive endocrine sampling during the interaction tasks, a long polyethylene tube was attached to a heparin well, allowing nurses to draw blood samples at set intervals, out of subjects' sight. Five blood samples were obtained before, during, and after a 30-minute structured problem-solving or conflict task. The conflict session was recorded on videotapes that were later scored for problem-solving behaviors using the Marital Interaction Coding System (MICS). Marital conflict and MICS-coded hostile or negative behavior during conflict was closely linked to changes in serum hormonal levels across five of the six hormones we studied, in spite of the high marital satisfaction of our newlywed couples and the healthy lifestyles demanded by our exclusion criteria. Hostile behavior was associated with decreased levels of prolactin (PRL) and increases in epinephrine (EPI), norepinephrine (NEPI), ACTH, and growth hormone (GH), but not cortisol. These data suggest that the endocrine system may be an important mediator between personal relationships and health.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Shiyu; Kaeppler, Shawn M.; Vogel, Kenneth P.
Switchgrass is undergoing development as a dedicated cellulosic bioenergy crop. Fermentation of lignocellulosic biomass to ethanol in a bioenergy system or to volatile fatty acids in a livestock production system is strongly and negatively influenced by lignification of cell walls. This study detects specific loci that exhibit selection signatures across switchgrass breeding populations that differ in in vitro dry matter digestibility (IVDMD), ethanol yield, and lignin concentration. Allele frequency changes in candidate genes were used to detect loci under selection. Out of the 183 polymorphisms identified in the four candidate genes, twenty-five loci in the intron regions and four loci in coding regions were found to display a selection signature. All loci in the coding regions are synonymous substitutions. Selection in both directions was observed on polymorphisms that appeared to be under selection. Genetic diversity and linkage disequilibrium within the candidate genes were low. The recurrent divergent selection caused excessive moderate allele frequencies in the cycle 3 reduced lignin population as compared to the base population. As a result, this study provides valuable insight on genetic changes occurring in short-term selection in the polyploid populations, and discovered potential markers for breeding switchgrass with improved biomass quality.
NASA Technical Reports Server (NTRS)
Schmidt, G.; Ruster, R.; Czechowsky, P.
1983-01-01
The SOUSY-VHF-Radar operates at a frequency of 53.5 MHz in a valley in the Harz mountains, Germany, 90 km from Hanover. The radar controller, which is programmed by a 16-bit computer, holds 1024 program steps in core and controls, via 8 channels, the whole radar system: in particular the master oscillator, the transmitter, the transmit-receive switch, the receiver, the analog-to-digital converter, and the hardware adder. The high-sensitivity receiver has a dynamic range of 70 dB and a video bandwidth of 1 MHz. Phase coding schemes are applied, in particular for investigations at mesospheric heights, in order to carry out measurements with the maximum duty cycle and the maximum height resolution. The computer takes the data from the adder and stores them on magnetic tape or disc. The radar controller is programmed by the computer using simple FORTRAN IV statements. After the program has been loaded and the computer has started the radar controller, it runs automatically, stopping at the program end. If errors or failures occur during radar operation, the radar controller is shut off either by a safety circuit, a power failure circuit, or a parity check system.
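To illustrate how phase coding raises the duty cycle without sacrificing height resolution, the toy example below compresses a binary phase-coded pulse with a matched filter; a 13-element Barker code is used purely for illustration and is not necessarily the coding scheme used by SOUSY.

```python
import numpy as np

# Toy illustration of pulse compression with a binary phase code
# (13-element Barker code shown; the actual SOUSY coding scheme may differ).
barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1], dtype=float)

# Simulated received signal: the coded pulse buried in noise at a known range gate
rng = np.random.default_rng(0)
rx = np.zeros(200)
rx[80:80 + barker13.size] += barker13          # echo from one range gate
rx += 0.5 * rng.standard_normal(rx.size)       # receiver noise

# Matched filter (correlation with the code) compresses the pulse:
compressed = np.correlate(rx, barker13, mode="valid")
print("peak range gate:", np.argmax(np.abs(compressed)))   # ~80
```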
A Modified Through-Flow Wave Rotor Cycle with Combustor Bypass Ducts
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.; Nalim, M. Razi
1998-01-01
A wave rotor cycle is described which avoids the inherent problem of combustor exhaust gas recirculation (EGR) found in four-port, through-flow wave rotor cycles currently under consideration for topping gas turbine engines. The recirculated hot gas is eliminated by the judicious placement of a bypass duct which transfers gas from one end of the rotor to the other. The resulting cycle, when analyzed numerically, yields an absolute mean rotor temperature 18% below the already impressive value of the conventional four-port cycle (approximately the turbine inlet temperature). The absolute temperature of the gas leading to the combustor is also reduced from the conventional four-port design by 22%. The overall design point pressure ratio of this new bypass cycle is approximately the same as the conventional four-port cycle. This paper will describe the EGR problem and the bypass cycle solution including relevant wave diagrams. Performance estimates of design and off-design operation of a specific wave rotor will be presented. The results were obtained using a one-dimensional numerical simulation and design code.
System Analysis for Decay Heat Removal in Lead-Bismuth Cooled Natural Circulated Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Takaaki Sakai; Yasuhiro Enuma; Takashi Iwasaki
2002-07-01
Decay heat removal analyses for lead-bismuth cooled natural circulation reactors are described in this paper. A combined multi-dimensional plant dynamics code (MSG-COPD) has been developed to conduct the system analysis for the natural circulation reactors. For the preliminary study, transient analysis has been performed for a 100 MWe lead-bismuth-cooled reactor designed by Argonne National Laboratory (ANL). In addition, the decay heat removal characteristics of a 400 MWe lead-bismuth-cooled natural circulation reactor designed by the Japan Nuclear Cycle Development Institute (JNC) have been evaluated by using MSG-COPD. A PRACS (Primary Reactor Auxiliary Cooling System) is provided in the JNC concept to obtain sufficient heat removal capacity. During the first 2000 sec after the transient, the outlet temperature shows an increasing tendency up to a maximum temperature of 430 degrees Centigrade, because the buoyancy force in the primary circulation path is temporarily reduced. However, natural circulation is recovered by the PRACS system and the outlet temperature decreases successfully. (authors)
System Analysis for Decay Heat Removal in Lead-Bismuth-Cooled Natural-Circulation Reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sakai, Takaaki; Enuma, Yasuhiro; Iwasaki, Takashi
2004-03-15
Decay heat removal analyses for lead-bismuth-cooled natural-circulation reactors are described in this paper. A combined multidimensional plant dynamics code (MSG-COPD) has been developed to conduct the system analysis for the natural-circulation reactors. For the preliminary study, transient analysis has been performed for a 300-MW(thermal) lead-bismuth-cooled reactor designed by Argonne National Laboratory. In addition, the decay heat removal characteristics of a 400-MW(electric) lead-bismuth-cooled natural-circulation reactor designed by the Japan Nuclear Cycle Development Institute (JNC) have been evaluated by using MSG-COPD. The primary reactor auxiliary cooling system (PRACS) is provided in the JNC concept to obtain sufficient heat removal capacity. During the first 2000 s after the transient, the outlet temperature shows an increasing tendency up to a maximum temperature of 430 deg. C because the buoyancy force in the primary circulation path is temporarily reduced. However, natural circulation is recovered by the PRACS system, and the outlet temperature decreases successfully.
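For orientation on the magnitude of the heat load such a passive system must reject, the sketch below evaluates a classic Way-Wigner-type decay-heat approximation; this is a generic scoping formula with an assumed operating history, not the decay-heat model used in MSG-COPD.

```python
def decay_heat_fraction(t_s, t_op_s):
    """Classic Way-Wigner-type approximation for decay power as a fraction of
    full power: P/P0 = 0.066 * (t**-0.2 - (t + t_op)**-0.2), with t the time
    after shutdown and t_op the prior operating time, both in seconds.
    A scoping estimate only; a system code uses detailed decay-heat curves."""
    return 0.066 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

t_op = 300 * 24 * 3600.0          # assume ~300 days at power (illustrative)
for t in (10.0, 100.0, 2000.0, 86400.0):
    print(f"{t:8.0f} s  P/P0 = {decay_heat_fraction(t, t_op):.4f}")
```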
Integrated design and manufacturing for the high speed civil transport
NASA Technical Reports Server (NTRS)
1993-01-01
In June 1992, Georgia Tech's School of Aerospace Engineering was awarded a NASA University Space Research Association (USRA) Advanced Design Program (ADP) to address 'Integrated Design and Manufacturing for the High Speed Civil Transport (HSCT)' in its graduate aerospace systems design courses. This report summarizes the results of the five courses incorporated into the Georgia Tech's USRA ADP program. It covers AE8113: Introduction to Concurrent Engineering, AE4360: Introduction to CAE/CAD, AE4353: Design for Life Cycle Cost, AE6351: Aerospace Systems Design One, and AE6352: Aerospace Systems Design Two. AE8113: Introduction to Concurrent Engineering was an introductory course addressing the basic principles of concurrent engineering (CE) or integrated product development (IPD). The design of a total system was not the objective of this course. The goal was to understand and define the 'up-front' customer requirements, their decomposition, and determine the value objectives for a complex product, such as the high speed civil transport (HSCT). A generic CE methodology developed at Georgia Tech was used for this purpose. AE4353: Design for Life Cycle Cost addressed the basic economic issues for an HSCT using a robust design technique, Taguchi's parameter design optimization method (PDOM). An HSCT economic sensitivity assessment was conducted using a Taguchi PDOM approach to address the robustness of the basic HSCT design. AE4360: Introduction to CAE/CAD permitted students to develop and utilize CAE/CAD/CAM knowledge and skills using CATIA and CADAM as the basic geometric tools. AE6351: Aerospace Systems Design One focused on the conceptual design refinement of a baseline HSCT configuration as defined by Boeing, Douglas, and NASA in their system studies. It required the use of NASA's synthesis codes FLOPS and ACSYNT. A criterion called the productivity index (P.I.) was used to evaluate disciplinary sensitivities and provide refinements of the baseline HSCT configuration. AE6352: Aerospace Systems Design Two was a continuation of Aerospace Systems Design One in which wing concepts were researched and analyzed in more detail. FLOPS and ACSYNT were again used at the system level while other off-the-shelf computer codes were used for more detailed wing disciplinary analysis and optimization. The culmination of all efforts and submission of this report conclude the first year's efforts of Georgia Tech's NASA USRA ADP. It will hopefully provide the foundation for next year's efforts concerning continuous improvement of integrated design and manufacturing for the HSCT.
Design of ACM system based on non-greedy punctured LDPC codes
NASA Astrophysics Data System (ADS)
Lu, Zijun; Jiang, Zihong; Zhou, Lin; He, Yucheng
2017-08-01
In this paper, an adaptive coded modulation (ACM) scheme based on rate-compatible LDPC (RC-LDPC) codes is designed. The RC-LDPC codes are constructed by a non-greedy puncturing method, which shows good performance in the high code rate region. Moreover, an incremental redundancy scheme for the LDPC-based ACM system over the AWGN channel is proposed. With this scheme, code rates vary from 2/3 to 5/6 and the complexity of the ACM system is lowered. Simulations show that the proposed ACM system obtains increasingly large coding gains while achieving higher throughput.
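A minimal sketch of the rate-compatibility idea, assuming an illustrative block length: puncturing parity bits of a rate-2/3 mother code raises the rate toward 5/6. The actual non-greedy selection of which bits to puncture is not reproduced here.

```python
def punctured_rate(k, n_mother, n_punctured):
    """Code rate after removing n_punctured parity bits from an (n_mother, k)
    mother code: R = k / (n_mother - n_punctured)."""
    return k / (n_mother - n_punctured)

k = 4000        # illustrative information block length (assumed)
n = 6000        # rate-2/3 mother code length
for target in (2/3, 3/4, 4/5, 5/6):
    p = n - round(k / target)      # number of parity bits to puncture
    print(f"target R = {target:.3f}: puncture {p} bits -> R = {punctured_rate(k, n, p):.3f}")
```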
DOE Office of Scientific and Technical Information (OSTI.GOV)
Montégiani, Jean-François; Gaudin, Émilie; Després, Philippe
2014-08-15
In peptide receptor radionuclide therapy (PRRT), huge inter-patient variability in absorbed radiation doses per administered activity mandates the utilization of individualized dosimetry to evaluate therapeutic efficacy and toxicity. We created a reliable GPU-calculated dosimetry code (irtGPUMCD) and assessed {sup 177}Lu-octreotate renal dosimetry in eight patients (4 cycles of approximately 7.4 GBq). irtGPUMCD was derived from a brachytherapy dosimetry code (bGPUMCD), which was adapted to {sup 177}Lu PRRT dosimetry. Serial quantitative single-photon emission computed tomography (SPECT) images were obtained from three SPECT/CT acquisitions performed at 4, 24 and 72 hours after {sup 177}Lu-octreotate administration, and registered with non-rigid deformation of CT volumes, to obtain the {sup 177}Lu-octreotate 4D quantitative biodistribution. Local energy deposition from the β disintegrations was assumed. Using Monte Carlo gamma photon transport, irtGPUMCD computed the dose rate at each time point. Average kidney absorbed dose was obtained from 1-cm{sup 3} VOI dose rate samples on each cortex, subjected to a biexponential curve fit. Integration of the latter time-dose rate curve yielded the renal absorbed dose. The mean renal dose per administered activity was 0.48 ± 0.13 Gy/GBq (range: 0.30–0.71 Gy/GBq). Comparison to another PRRT dosimetry code (VRAK: Voxelized Registration and Kinetics) showed fair accordance with irtGPUMCD (11.4 ± 6.8%, range: 3.3–26.2%). These results suggest the possibility of using the irtGPUMCD code to personalize administered activity in PRRT. This could improve clinical outcomes by maximizing per-cycle tumor doses without exceeding the tolerable renal dose.
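The dose-integration step described above (a biexponential fit of sampled dose rates followed by analytic time integration) can be sketched as follows, using synthetic dose-rate values and SciPy's curve_fit; the numbers are illustrative and this is not the irtGPUMCD processing chain.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, lam1, a2, lam2):
    """Dose-rate model D(t) = a1*exp(-lam1*t) + a2*exp(-lam2*t), t in hours."""
    return a1 * np.exp(-lam1 * t) + a2 * np.exp(-lam2 * t)

# Synthetic dose-rate samples (Gy/h) at imaging-like time points; purely illustrative.
t = np.array([1.0, 4.0, 12.0, 24.0, 48.0, 72.0])
d = biexp(t, 0.020, 0.30, 0.010, 0.012)
d *= 1 + 0.03 * np.random.default_rng(1).standard_normal(t.size)   # add mild noise

p0 = (0.02, 0.3, 0.01, 0.01)                         # initial guesses
popt, _ = curve_fit(biexp, t, d, p0=p0, bounds=(0, np.inf))
a1, lam1, a2, lam2 = popt

# Absorbed dose = integral of the dose rate from 0 to infinity (closed form).
dose = a1 / lam1 + a2 / lam2
print(f"fitted renal absorbed dose ~ {dose:.2f} Gy")
```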
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. PMID:23966987
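For readers unfamiliar with the coding scheme invoked here, the toy below shows the three-bit repetition code and majority-vote decoding that corrects any single bit-flip error in a three-bit reply.

```python
def encode(bit: int) -> list[int]:
    """Three-bit repetition code: 0 -> 000, 1 -> 111."""
    return [bit] * 3

def decode(word: list[int]) -> int:
    """Majority vote corrects any single bit-flip error in the three-bit reply."""
    return int(sum(word) >= 2)

reply = encode(1)
reply[1] ^= 1                 # single bit flip ("noise" corrupting the reply)
assert decode(reply) == 1     # the error is corrected
print(reply, "->", decode(reply))
```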
The spectrum of ethical issues in a Learning Health Care System: a systematic qualitative review.
McLennan, Stuart; Kahrass, Hannes; Wieschowski, Susanne; Strech, Daniel; Langhof, Holger
2018-04-01
To determine systematically the spectrum of ethical issues that is raised for stakeholders in a 'Learning Health Care System' (LHCS). The systematic review was conducted in PubMed and Google Books between the years 2007 and 2015. The literature search retrieved 1258 publications. Each publication was independently screened by two reviewers for eligibility for inclusion. Ethical issues were defined as arising when a relevant normative principle is not adequately considered or two principles come into conflict. A total of 65 publications were included in the final analysis and were analysed using an adapted version of qualitative content analysis. A coding frame was developed inductively from the data, only the highest-level categories were generated deductively for a life-cycle perspective. A total of 67 distinct ethical issues could be categorized under different phases of the LHCS life-cycle. An overarching theme that was repeatedly raised was the conflict between the current regulatory system and learning health care. The implementation of a LHCS can help realize the ethical imperative to continuously improve the quality of health care. However, the implementation of a LHCS can also raise a number of important ethical issues itself. This review highlights the importance for health care leaders and policy makers to balance the need to protect and respect individual participants involved in learning health care activities with the social value of improving health care.
NASA Technical Reports Server (NTRS)
Saunders, J. D.; Stueber, T. J.; Thomas, S. R.; Suder, K. L.; Weir, L. J.; Sanders, B. W.
2012-01-01
The status of an effort to develop Turbine Based Combined Cycle (TBCC) propulsion is described. This propulsion technology can enable reliable and reusable space launch systems. TBCC propulsion offers improved performance and safety over rocket propulsion. The potential to realize aircraft-like operations and reduced maintenance are additional benefits. Among the most critical TBCC enabling technologies are: 1) mode transition from turbine to scramjet propulsion, 2) high Mach turbine engines and 3) TBCC integration. To address these TBCC challenges, the effort is centered on a propulsion mode transition experiment and includes analytical research. The test program, the Combined-Cycle Engine Large Scale Inlet Mode Transition Experiment (CCE LIMX), was conceived to integrate TBCC propulsion with proposed hypersonic vehicles. The goals address: (1) dual inlet operability and performance, (2) mode-transition sequences enabling a switch between turbine and scramjet flow paths, and (3) turbine engine transients during transition. Four test phases are planned from which a database can be used to both validate design and analysis codes and characterize operability and integration issues for TBCC propulsion. In this paper we discuss the research objectives, features of the CCE hardware and test plans, and status of the parametric inlet characterization testing which began in 2011. This effort is sponsored by the NASA Fundamental Aeronautics Hypersonics project.
Encrypted holographic data storage based on orthogonal-phase-code multiplexing.
Heanue, J F; Bashaw, M C; Hesselink, L
1995-09-10
We describe an encrypted holographic data-storage system that combines orthogonal-phase-code multiplexing with a random-phase key. The system offers the security advantages of random-phase coding but retains the low cross-talk performance and the minimum code storage requirements typical in an orthogonal-phase-code-multiplexing system.
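A toy numerical analogue of the scheme, assuming Walsh-Hadamard rows as the orthogonal phase codes and a shared random phase key: the key randomizes the reference phases while leaving the codes mutually orthogonal, which is why selectivity and low cross-talk are retained.

```python
import numpy as np
from scipy.linalg import hadamard

N = 8
H = hadamard(N).astype(float)                       # rows are mutually orthogonal +/-1 codes
rng = np.random.default_rng(0)
key = np.exp(1j * rng.uniform(0, 2 * np.pi, N))     # random phase key (shared secret)

# Reference "beams": orthogonal phase code multiplied element-wise by the key.
refs = H * key                                      # each row addresses one stored page

# Cross-talk check: correlating page i's reference with the conjugate of page j's
# reference gives ~0 for i != j and N for i == j, with or without the key,
# so orthogonal-code selectivity is preserved while the phases look random.
G = refs @ refs.conj().T
print(np.round(np.abs(G), 6))                       # ~ N * identity matrix
```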
NASA Astrophysics Data System (ADS)
Dieudonne, Cyril; Dumonteil, Eric; Malvagi, Fausto; M'Backé Diop, Cheikh
2014-06-01
In recent years, Monte Carlo burnup/depletion codes have appeared that couple Monte Carlo codes, which simulate the neutron transport, to deterministic methods, which handle the medium depletion due to the neutron flux. Solving the Boltzmann and Bateman equations in this way makes it possible to track fine 3-dimensional effects and to avoid the multi-group approximations made by deterministic solvers. The drawback is the prohibitive calculation time due to the Monte Carlo solver being called at each time step. In this paper we present a methodology that avoids these repetitive and time-expensive Monte Carlo simulations and replaces them with perturbation calculations: the different burnup steps may be seen as perturbations of the isotopic concentrations of an initial Monte Carlo simulation. We first present this method and provide details on the perturbative technique used, namely correlated sampling. We then discuss the implementation of this method in the TRIPOLI-4® code, as well as the calculation scheme able to bring an important speed-up of the depletion calculation. Finally, this technique is used to calculate the depletion of a PWR-like assembly studied at the beginning of its cycle. After validating the method against a reference calculation, we show that it can speed up standard Monte Carlo depletion codes by nearly an order of magnitude.
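A much-simplified illustration of the correlated-sampling idea, in a one-group infinite medium: flight distances sampled with the unperturbed cross section are reused for the perturbed problem by re-weighting with the free-flight kernel ratio. This is only a sketch of the principle, not the TRIPOLI-4 implementation.

```python
import math, random

def correlated_sampling_flight(sigma_t, sigma_t_pert, n_hist=100_000, seed=1):
    """Estimate the mean distance to first collision for both the unperturbed
    (sigma_t) and perturbed (sigma_t_pert) total cross sections from a single
    set of histories. Flights are sampled with sigma_t; each flight is reused
    for the perturbed tally with the free-flight kernel ratio
        w = (sigma_t_pert / sigma_t) * exp(-(sigma_t_pert - sigma_t) * d).
    One-group, infinite-medium toy; a real code applies this per collision."""
    rng = random.Random(seed)
    ref = pert = 0.0
    for _ in range(n_hist):
        d = -math.log(rng.random()) / sigma_t            # sampled flight distance
        w = (sigma_t_pert / sigma_t) * math.exp(-(sigma_t_pert - sigma_t) * d)
        ref += d
        pert += w * d
    return ref / n_hist, pert / n_hist

print(correlated_sampling_flight(1.0, 1.1))   # ~ (1.0, 0.909 = 1/1.1)
```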
ODECS -- A computer code for the optimal design of S.I. engine control strategies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arsie, I.; Pianese, C.; Rizzo, G.
1996-09-01
The computer code ODECS (Optimal Design of Engine Control Strategies) for the design of spark ignition engine control strategies is presented. This code has been developed from the authors' activity in this field, drawing on original contributions to engine stochastic optimization and dynamical models. The code has a modular structure and is composed of a user interface for the definition, execution and analysis of different computations performed with four independent modules. These modules allow the following calculations: (1) definition of the engine mathematical model from steady-state experimental data; (2) engine cycle test trajectory corresponding to a vehicle transient simulation test such as the ECE15 or FTP drive test schedule; (3) evaluation of the optimal engine control maps with a steady-state approach; (4) engine dynamic cycle simulation and optimization of static control maps and/or dynamic compensation strategies, taking into account dynamical effects due to the unsteady fluxes of air and fuel and the influence of combustion chamber wall thermal inertia on fuel consumption and emissions. Moreover, in the last two modules it is possible to account for errors generated by the non-deterministic behavior of sensors and actuators and their influence on global engine performance, and to compute robust strategies that are less sensitive to stochastic effects. In the paper the four models are described together with significant results corresponding to the simulation and the calculation of optimal control strategies for dynamic transient tests.
The Use of a Pseudo Noise Code for DIAL Lidar
NASA Technical Reports Server (NTRS)
Burris, John F.
2010-01-01
Retrievals of CO2 profiles within the planetary boundary layer (PBL) are required to understand CO2 transport over regional scales and to validate future space-borne CO2 remote sensing instruments, such as the CO2 Laser Sounder for the ASCENDS mission. We report the use of a return-to-zero (RZ) pseudo noise (PN) code modulation technique for making range resolved measurements of CO2 within the PBL using commercial, off-the-shelf components. Conventional range resolved measurements require laser pulse widths that are shorter than the desired spatial resolution and pulse spacing such that returns from only a single pulse are observed by the receiver at one time (for the PBL, pulse separations must be greater than approximately 2000 m). This imposes a serious limitation when using available fiber lasers because of the resulting low duty cycle (less than 0.001) and consequent low average laser output power. RZ PN code modulation enables a fiber laser to operate at much higher duty cycles (approaching 0.1), thereby more effectively utilizing the amplifier's output. This results in an increase in received counts by approximately two orders of magnitude. The approach involves employing two back-to-back CW fiber amplifiers seeded at the appropriate online and offline CO2 wavelengths (approximately 1572 nm) using distributed feedback diode lasers modulated by a PN code at rates significantly above 1 megahertz. An assessment of the technique, discussions of measurement precision and error sources, as well as preliminary data will be presented.
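As a small illustration of why a PN code permits range-resolved retrievals at high duty cycle, the sketch below generates a maximal-length (m-)sequence with a linear feedback shift register and shows its sharply peaked periodic autocorrelation; the code length is illustrative, not the instrument's parameters.

```python
import numpy as np

def m_sequence(n=7, length=127):
    """Maximal-length PN (m-)sequence from a Fibonacci LFSR obeying
    b[t] = b[t-1] XOR b[t-7]; the corresponding degree-7 feedback polynomial
    x^7 + x^6 + 1 is primitive, so the period is 2^7 - 1 = 127. Chips are 0/1."""
    state = [1] * n                       # any nonzero seed works
    out = []
    for _ in range(length):
        out.append(state[-1])             # output the oldest stage
        fb = state[0] ^ state[-1]         # feedback = b[t-1] XOR b[t-7]
        state = [fb] + state[:-1]
    return np.array(out)

pn = m_sequence()
chips = 1 - 2 * pn                        # map {0,1} -> {+1,-1}

# Periodic autocorrelation: peak N at zero lag, -1 at every other lag, which is
# why lagged correlation of the return with the code separates range gates.
acf = np.array([np.dot(chips, np.roll(chips, k)) for k in range(127)])
print(acf[:5], acf.max(), acf[1:].max())  # [127 -1 -1 -1 -1] 127 -1
```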
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, D.; Levine, S.L.; Luoma, J.
1992-01-01
The Three Mile Island unit 1 core reloads have been designed using fast but accurate scoping codes, PSUI-LEOPARD and ADMARC. PSUI-LEOPARD has been normalized to EPRI-CPM2 results and used to calculate the two-group constants, whereas ADMARC is a modern two-dimensional, two-group diffusion theory nodal code. Problems in accuracy were encountered for cycles 8 and higher as the core lifetime was increased beyond 500 effective full-power days. This is because the heavier loaded cores in both {sup 235}U and {sup 10}B have harder neutron spectra, which produces a change in the transport effect in the baffle reflector region, and the burnable poison (BP) simulations were not accurate enough for the cores containing the increased amount of {sup 10}B required in the BP rods. In the authors' study, a technique has been developed to take into account the change in the transport effect in the baffle region by modifying the fast neutron diffusion coefficient as a function of cycle length and core exposure or burnup. A more accurate BP simulation method is also developed, using integral transport theory and CPM2 data, to calculate the BP contribution to the equivalent fuel assembly (supercell) two-group constants. The net result is that the accuracy of the scoping codes is as good as that produced by CASMO/SIMULATE or CPM2/SIMULATE when compared with measured data.
Towards 100,000 CPU Cycle-Scavenging by Genetic Algorithms
NASA Technical Reports Server (NTRS)
Globus, Al; Biegel, Bryan A. (Technical Monitor)
2001-01-01
We examine a web-centric design using standard tools such as web servers, web browsers, PHP, and mySQL. We also consider the applicability of Information Power Grid tools such as the Globus (no relation to the author) Toolkit. We intend to implement this architecture with JavaGenes running on at least two cycle-scavengers: Condor and United Devices. JavaGenes, a genetic algorithm code written in Java, will be used to evolve multi-species reactive molecular force field parameters.
Interframe vector wavelet coding technique
NASA Astrophysics Data System (ADS)
Wus, John P.; Li, Weiping
1997-01-01
Wavelet coding is often used to divide an image into multi-resolution wavelet coefficients which are quantized and coded. By 'vectorizing' scalar wavelet coding and combining this with vector quantization (VQ), vector wavelet coding (VWC) can be implemented. Using a finite number of states, finite-state vector quantization (FSVQ) takes advantage of the similarity between frames by incorporating memory into the video coding system. Lattice VQ eliminates the potential mismatch that could occur using pre-trained VQ codebooks. It also eliminates the need for codebook storage in the VQ process, thereby creating a more robust coding system. Therefore, by using the VWC coding method in conjunction with the FSVQ system and lattice VQ, the formulation of a high-quality, very low bit rate coding system is proposed. A coding system using a simple FSVQ system where the current state is determined by the previous channel symbol only is developed. To achieve a higher degree of compression, a tree-like FSVQ system is implemented. The groupings are done in this tree-like structure from the lower subbands to the higher subbands in order to exploit the nature of subband analysis in terms of the parent-child relationship. Class A and Class B video sequences from the MPEG-IV testing evaluations are used in the evaluation of this coding method.
Ideal cycle analysis of a regenerative pulse detonation engine for power production
NASA Astrophysics Data System (ADS)
Bellini, Rafaela
Over the last few decades, considerable research has been focused on pulse detonation engines (PDEs) as a promising replacement for existing propulsion systems with potential applications in aircraft ranging from the subsonic to the lower hypersonic regimes. On the other hand, very little attention has been given to applying detonation for electric power production. One method for assessing the performance of a PDE is through thermodynamic cycle analysis. Earlier works have adopted a thermodynamic cycle for the PDE that was based on the assumption that the detonation process could be approximated by a constant volume process, called the Humphrey cycle. The Fickett-Jacob cycle, which uses the one--dimensional Chapman--Jouguet (CJ) theory of detonation, has also been used to model the PDE cycle. However, an ideal PDE cycle must include a detonation based compression and heat release processes with a finite chemical reaction rate that is accounted for in the Zeldovich -- von Neumann -- Doring model of detonation where the shock is considered a discontinuous jump and is followed by a finite exothermic reaction zone. This work presents a thermodynamic cycle analysis for an ideal PDE cycle for power production. A code has been written that takes only one input value, namely the heat of reaction of a fuel-oxidizer mixture, based on which the program computes all the points on the ZND cycle (both p--v and T--s plots), including the von Neumann spike and the CJ point along with all the non-dimensionalized state properties at each point. In addition, the program computes the points on the Humphrey and Brayton cycles for the same input value. Thus, the thermal efficiencies of the various cycles can be calculated and compared. The heat release of combustion is presented in a generic form to make the program usable with a wide variety of fuels and oxidizers and also allows for its use in a system for the real time monitoring and control of a PDE in which the heat of reaction can be obtained as a function of fuel-oxidizer ratio. The Humphrey and ZND cycles are studied in comparison with the Brayton cycle for different fuel-air mixtures such as methane, propane and hydrogen. The validity and limitations of the ZND and Humphrey cycles related to the detonation process are discussed and the criteria for the selection of the best model for the PDE cycle are explained. It is seen that the ZND cycle is a more appropriate representation of the PDE cycle. Next, the thermal and electrical power generation efficiencies for the PDE are compared with those of the deflagration based Brayton cycle. While the Brayton cycle shows an efficiency of 0 at a compressor pressure ratio of 1, the thermal efficiency for the ZND cycle starts out at 42% for hydrogen--air and then climbs to a peak of 66% at a compression ratio of 7 before falling slowly for higher compression ratios. The Brayton cycle efficiency rises above the PDEs for compression ratios above 23. This finding supports the theoretical advantage of PDEs over the gas turbines because PDEs only require a fan or only a few compressor stages, thereby eliminating the need for heavy compressor machinery, making the PDEs less complex and therefore more cost effective than other engines. Lastly, a regeneration study is presented to analyze how the use of exhaust gases can improve the performance of the system. The thermal efficiencies for the regenerative ZND cycle are compared with the efficiencies for the non--regenerative cycle. 
For a hydrogen-air mixture the thermal efficiency increases from 52% for a cycle without regeneration to 78% for the regenerative cycle. This efficiency is compared with the Carnot efficiency of 84%, which is the maximum possible theoretical efficiency of the cycle. When compared to the Brayton cycle thermal efficiencies, the regenerative cycle shows efficiencies that are always higher over the range of pressure ratios studied, 5 ≤ πc ≤ 25, where πc is the compressor pressure ratio of the cycle. This observation strengthens the case for using regeneration on PDEs.
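To make the cycle comparison concrete, the sketch below evaluates ideal Brayton and Humphrey (constant-volume) efficiencies over a range of compressor pressure ratios for an assumed heat release; the ZND cycle itself requires a Chapman-Jouguet detonation solution and is not reproduced here, and the numbers are illustrative rather than the dissertation's values.

```python
gamma = 1.4
cv = 718.0        # J/(kg K), air as an ideal gas (assumed)
T1 = 300.0        # ambient temperature, K (assumed)
q = 2.5e6         # heat release per unit mass of mixture, J/kg (illustrative)

def brayton_eta(pr):
    """Ideal Brayton: isentropic compression, constant-pressure heat addition."""
    return 1.0 - pr ** (-(gamma - 1.0) / gamma)

def humphrey_eta(pr):
    """Ideal Humphrey: isentropic compression 1->2, constant-volume heat
    addition 2->3, isentropic expansion to p1, constant-pressure rejection."""
    T2 = T1 * pr ** ((gamma - 1.0) / gamma)
    T3 = T2 + q / cv
    T4 = T1 * (T3 / T2) ** (1.0 / gamma)
    return 1.0 - gamma * (T4 - T1) / (T3 - T2)

for pr in (1, 5, 10, 15, 20, 25):
    print(f"pr={pr:2d}  Brayton={brayton_eta(pr):.3f}  Humphrey={humphrey_eta(pr):.3f}")
```

Note that at a pressure ratio of 1 the Brayton efficiency is zero while the constant-volume (detonation-like) cycle is not, which is the qualitative point made above about PDEs needing little or no compressor machinery.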
High rate concatenated coding systems using bandwidth efficient trellis inner codes
NASA Technical Reports Server (NTRS)
Deng, Robert H.; Costello, Daniel J., Jr.
1989-01-01
High-rate concatenated coding systems with bandwidth-efficient trellis inner codes and Reed-Solomon (RS) outer codes are investigated for application in high-speed satellite communication systems. Two concatenated coding schemes are proposed. In one the inner code is decoded with soft-decision Viterbi decoding, and the outer RS code performs error-correction-only decoding (decoding without side information). In the other, the inner code is decoded with a modified Viterbi algorithm, which produces reliability information along with the decoded output. In this algorithm, path metrics are used to estimate the entire information sequence, whereas branch metrics are used to provide reliability information on the decoded sequence. This information is used to erase unreliable bits in the decoded output. An errors-and-erasures RS decoder is then used for the outer code. The two schemes have been proposed for high-speed data communication on NASA satellite channels. The rates considered are at least double those used in current NASA systems, and the results indicate that high system reliability can still be achieved.
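The benefit of erasing unreliable bits before outer decoding follows from the errors-and-erasures condition for Reed-Solomon codes, sketched below with an illustrative (255, 223) outer code (not necessarily the code used in this study).

```python
def rs_ee_decodable(n: int, k: int, errors: int, erasures: int) -> bool:
    """An (n, k) Reed-Solomon code has minimum distance d = n - k + 1, and an
    errors-and-erasures decoder succeeds whenever 2*errors + erasures <= n - k."""
    return 2 * errors + erasures <= n - k

# Erasing an unreliable symbol turns a potential error (cost 2) into an
# erasure (cost 1), which is the payoff of passing reliability information
# from the inner decoder to the outer RS decoder.
print(rs_ee_decodable(255, 223, errors=16, erasures=0))   # True  (32 <= 32)
print(rs_ee_decodable(255, 223, errors=17, erasures=0))   # False
print(rs_ee_decodable(255, 223, errors=10, erasures=12))  # True  (32 <= 32)
```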
Kraft, Matthew A; Gilmour, Allison
2016-12-01
New teacher evaluation systems have expanded the role of principals as instructional leaders, but little is known about principals' ability to promote teacher development through the evaluation process. We conducted a case study of principals' perspectives on evaluation and their experiences implementing observation and feedback cycles to better understand whether principals feel as though they are able to promote teacher development as evaluators. We conducted interviews with a stratified random sample of 24 principals in an urban district that recently implemented major reforms to its teacher evaluation system. We analyzed these interviews by drafting thematic summaries, coding interview transcripts, creating data-analytic matrices, and writing analytic memos. We found that the evaluation reforms provided a common framework and language that helped facilitate principals' feedback conversations with teachers. However, we also found that tasking principals with primary responsibility for conducting evaluations resulted in a variety of unintended consequences which undercut the quality of evaluation feedback they provided. We analyze five broad solutions to these challenges: strategically targeting evaluations, reducing operational responsibilities, providing principal training, hiring instructional coaches, and developing peer evaluation systems. The quality of feedback teachers receive through the evaluation process depends critically on the time and training evaluators have to provide individualized and actionable feedback. Districts that task principals with primary responsibility for conducting observation and feedback cycles must attend to the many implementation challenges associated with this approach in order for next-generation evaluation systems to successfully promote teacher development.
Fast interrupt platform for extended DOS
NASA Technical Reports Server (NTRS)
Duryea, T. W.
1995-01-01
Extended DOS offers the unique combination of a simple operating system which allows direct access to the interrupt tables, 32 bit protected mode access to 4096 MByte address space, and the use of industry standard C compilers. The drawback is that fast interrupt handling requires both 32 bit and 16 bit versions of each real-time process interrupt handler to avoid mode switches on the interrupts. A set of tools has been developed which automates the process of transforming the output of a standard 32 bit C compiler to 16 bit interrupt code which directly handles the real mode interrupts. The entire process compiles one set of source code via a make file, which boosts productivity by making the management of the compile-link cycle very simple. The software components are in the form of classes written mostly in C. A foreground process written as a conventional application which can use the standard C libraries can communicate with the background real-time classes via a message passing mechanism. The platform thus enables the integration of high performance real-time processing into a conventional application framework.
NASA Technical Reports Server (NTRS)
Pamadi, Bandu N.; Pei, Jing; Covell, Peter F.; Favaregh, Noah M.; Gumbert, Clyde R.; Hanke, Jeremy L.
2011-01-01
NASA Langley Research Center, in partnership with NASA Marshall Space Flight Center and NASA Ames Research Center, was involved in the aerodynamic analyses, testing, and database development for the Ares I A106 crew launch vehicle in support of the Ares Design and Analysis Cycle. This paper discusses the development of lift-off/transition and ascent databases. The lift-off/transition database was developed using data from tests on a 1.75% scale model of the A106 configuration in the NASA Langley 14x22 Subsonic Wind Tunnel. The power-off ascent database was developed using test data on a 1% A106 scale model from two different facilities, the Boeing Polysonic Wind Tunnel and the NASA Langley Unitary Plan Wind Tunnel. The ascent database was adjusted for differences in wind tunnel and flight Reynolds numbers using USM3D CFD code. The aerodynamic jet interaction effects due to first stage roll control system were modeled using USM3D and OVERFLOW CFD codes.
42 CFR 405.512 - Carriers' procedural terminology and coding systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 42 Public Health 2 2013-10-01 2013-10-01 false Carriers' procedural terminology and coding systems... Determining Reasonable Charges § 405.512 Carriers' procedural terminology and coding systems. (a) General. Procedural terminology and coding systems are designed to provide physicians and third party payers with a...
A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong
2013-01-01
Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969, 3720) code suitable for optical transmission systems is constructed. A novel SCG-LDPC (6561, 6240) code with a code rate of 95.1% is then constructed by increasing the length of the SCG-LDPC (3969, 3720) code, so that the code rate of the LDPC codes can better meet the high requirements of optical transmission systems. A novel concatenated code is then constructed by concatenating the SCG-LDPC (6561, 6240) code with the BCH (127, 120) code, whose code rate is 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH (127, 120)+SCG-LDPC (6561, 6240) concatenated code is 2.28 dB and 0.48 dB higher, respectively, than those of the classic RS (255, 239) code and the SCG-LDPC (6561, 6240) code at a bit error rate (BER) of 10^-7.
Trace-shortened Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Solomon, G.
1994-01-01
Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
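The interaction protocol can be summarized by the control-flow skeleton below; the inner and outer decoders are hypothetical stubs standing in for a real turbo decoder and a reliability-based errors-and-erasures RS decoder, so only the stopping/continuation logic is meaningful.

```python
import random

rng = random.Random(0)

def inner_turbo_iteration(iteration):
    """Placeholder for one inner turbo decoding iteration: returns hard
    decisions plus per-bit reliabilities (hypothetical stub, not a decoder)."""
    bits = [rng.randint(0, 1) for _ in range(32)]
    rel = [min(1.0, 0.2 * iteration + 0.3 * rng.random()) for _ in bits]
    return bits, rel

def outer_rs_decode(bits, rel, threshold=0.5, max_erasures=4):
    """Placeholder for reliability-based outer RS decoding: bits below the
    reliability threshold are erased, and decoding 'succeeds' here once the
    erasure count is small enough (a toy criterion standing in for a real
    errors-and-erasures decoder)."""
    erasures = sum(1 for r in rel if r < threshold)
    return erasures <= max_erasures, bits

# Interaction loop: the outer decoder both terminates the inner iterations
# early on success and, on failure, asks for further inner iterations.
MAX_ITER = 8
for it in range(1, MAX_ITER + 1):
    bits, rel = inner_turbo_iteration(it)
    ok, decoded = outer_rs_decode(bits, rel)
    if ok:
        print(f"outer decoding succeeded after {it} inner iteration(s)")
        break
else:
    print("declare a frame error after", MAX_ITER, "iterations")
```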
Toward a first-principles integrated simulation of tokamak edge plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C S; Klasky, Scott A; Cummings, Julian
2008-01-01
The performance of ITER is anticipated to be highly sensitive to the edge plasma condition. The edge pedestal in ITER needs to be predicted from an integrated simulation of the necessary first-principles, multi-scale physics codes. The mission of the SciDAC Fusion Simulation Project (FSP) Prototype Center for Plasma Edge Simulation (CPES) is to deliver such a code integration framework by (1) building new kinetic codes XGC0 and XGC1, which can simulate the edge pedestal buildup; (2) using and improving the existing MHD codes ELITE, M3D-OMP, M3D-MPP and NIMROD, for study of large-scale edge instabilities called Edge Localized Modes (ELMs); and (3) integrating the codes into a framework using cutting-edge computer science technology. Collaborative effort among physics, computer science, and applied mathematics within CPES has created the first working version of the End-to-end Framework for Fusion Integrated Simulation (EFFIS), which can be used to study the pedestal-ELM cycles.
NASA Technical Reports Server (NTRS)
Jorgenson, Philip C. E.; Veres, Joseph P.; Wright, William B.; Struk, Peter M.
2013-01-01
The occurrence of ice accretion within commercial high bypass aircraft turbine engines has been reported under certain atmospheric conditions. Engine anomalies have taken place at high altitudes that were attributed to ice crystal ingestion, partially melting, and ice accretion on the compression system components. The result was one or more of the following anomalies: degraded engine performance, engine roll back, compressor surge and stall, and flameout of the combustor. The main focus of this research is the development of a computational tool that can estimate whether there is a risk of ice accretion by tracking key parameters through the compression system blade rows at all engine operating points within the flight trajectory. The tool has an engine system thermodynamic cycle code, coupled with a compressor flow analysis code, and an ice particle melt code that has the capability of determining the rate of sublimation, melting, and evaporation through the compressor blade rows. Assumptions are made to predict the complex physics involved in engine icing. Specifically, the code does not directly estimate ice accretion and does not have models for particle breakup or erosion. Two key parameters have been suggested as conditions that must be met at the same location for ice accretion to occur: the local wet-bulb temperature to be near freezing or below and the local melt ratio must be above 10%. These parameters were deduced from analyzing laboratory icing test data and are the criteria used to predict the possibility of ice accretion within an engine including the specific blade row where it could occur. Once the possibility of accretion is determined from these parameters, the degree of blockage due to ice accretion on the local stator vane can be estimated from an empirical model of ice growth rate and time spent at that operating point in the flight trajectory. The computational tool can be used to assess specific turbine engines to their susceptibility to ice accretion in an ice crystal environment.
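The two accretion-risk criteria quoted above can be expressed directly as a small predicate; the station names and local conditions in the example are made up, and the production tool obtains these quantities from the coupled cycle, compressor-flow, and particle-melt codes.

```python
def ice_accretion_risk(wet_bulb_C: float, melt_ratio: float) -> bool:
    """Flags a blade-row station as at risk of ice accretion using the two
    criteria quoted in the paper: local wet-bulb temperature near or below
    freezing and local melt ratio above 10%. The freezing threshold here is
    the nominal 0 C value; the paper allows 'near freezing'."""
    return wet_bulb_C <= 0.0 and melt_ratio > 0.10

# Illustrative sweep through compressor stations (made-up local conditions):
stations = [("IGV", 5.0, 0.02), ("R1", 1.5, 0.06), ("S2", -0.5, 0.14), ("S3", 4.0, 0.22)]
for name, twb, mr in stations:
    print(name, "at risk" if ice_accretion_risk(twb, mr) else "ok")
```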
NASA Astrophysics Data System (ADS)
Mirvis, E.; Iredell, M.
2015-12-01
The operational (OPS) NOAA National Centers for Environmental Prediction (NCEP) suite traditionally consists of a large set of multi-scale HPC models, workflows, scripts, tools and utilities, which depend heavily on a variety of additional components. Namely, this suite utilizes a unique collection of more than 20 in-house developed shared libraries (NCEPLIBS), specific versions of third-party libraries (such as netcdf, HDF, ESMF, jasper, xml, etc.), and an HPC workflow tool within a dedicated (sometimes vendor-customized) homogeneous HPC system environment. This domain and site specificity, together with NCEP's product-driven, large-scale, real-time data operations, complicates NCEP collaborative development tremendously by reducing the chances of replicating this OPS environment anywhere else. The mission of NOAA/NCEP's Environmental Modeling Center (EMC) is to develop and improve numerical weather, climate, hydrological and ocean prediction through partnership with the research community. Recognizing these difficulties, EMC has recently taken an innovative approach to improving the flexibility of the HPC environment by building the elements of, and a foundation for, an NCEP OPS functionally equivalent environment (FEE), which can also be used to ease external interface constructs. Aiming to reduce the turnaround time of community code enhancements through the Research-to-Operations (R2O) cycle, EMC has developed and deployed several project subset standards that have already paved the road to NCEP OPS implementation standards. In this presentation we discuss the EMC FEE for O2R requirements and approaches to collaborative standardization, including the NCEPLIBS FEE and model code version control paired with the models' customized HPC modules and FEE footprints. We share NCEP/EMC experience and potential in refactoring EMC development processes and legacy codes and in securing model source code quality standards by using a combination of the Eclipse IDE integrated with reverse engineering tools/APIs. We also report on collaborative efforts in restructuring the NOAA Environmental Modeling System (NEMS), the multi-model coupling framework, and on the transition of the FEE verification methodology.
Advanced Test Reactor Core Modeling Update Project Annual Report for Fiscal Year 2012
DOE Office of Scientific and Technical Information (OSTI.GOV)
David W. Nigg, Principal Investigator; Kevin A. Steuhm, Project Manager
Legacy computational reactor physics software tools and protocols currently used for support of Advanced Test Reactor (ATR) core fuel management and safety assurance, and to some extent, experiment management, are inconsistent with the state of modern nuclear engineering practice, and are difficult, if not impossible, to properly verify and validate (V&V) according to modern standards. Furthermore, the legacy staff knowledge required for application of these tools and protocols from the 1960s and 1970s is rapidly being lost due to staff turnover and retirements. In late 2009, the Idaho National Laboratory (INL) initiated a focused effort, the ATR Core Modeling Update Project, to address this situation through the introduction of modern high-fidelity computational software and protocols. This aggressive computational and experimental campaign will have a broad strategic impact on the operation of the ATR, both in terms of improved computational efficiency and accuracy for support of ongoing DOE programs as well as in terms of national and international recognition of the ATR National Scientific User Facility (NSUF). The ATR Core Modeling Update Project, targeted for full implementation in phase with the next anticipated ATR Core Internals Changeout (CIC) in the 2014-2015 time frame, began during the last quarter of Fiscal Year 2009, and has just completed its third full year. Key accomplishments so far have encompassed both computational as well as experimental work. A new suite of stochastic and deterministic transport theory based reactor physics codes and their supporting nuclear data libraries (HELIOS, KENO6/SCALE, NEWT/SCALE, ATTILA, and an extended implementation of MCNP5) has been installed at the INL under various licensing arrangements. Corresponding models of the ATR and ATRC are now operational with all five codes, demonstrating the basic feasibility of the new code packages for their intended purpose. Of particular importance, a set of as-run core depletion HELIOS calculations for all ATR cycles since August 2009, Cycle 145A through Cycle 151B, was successfully completed during 2012. This major effort supported a decision late in the year to proceed with the phased incorporation of the HELIOS methodology into the ATR Core Safety Analysis Package (CSAP) preparation process, in parallel with the established PDQ-based methodology, beginning late in Fiscal Year 2012. Acquisition of the advanced SERPENT (VTT-Finland) and MC21 (DOE-NR) Monte Carlo stochastic neutronics simulation codes was also initiated during the year and some initial applications of SERPENT to ATRC experiment analysis were demonstrated. These two new codes will offer significant additional capability, including the possibility of full-3D Monte Carlo fuel management support capabilities for the ATR at some point in the future. Finally, a capability for rigorous sensitivity analysis and uncertainty quantification based on the TSUNAMI system has been implemented and initial computational results have been obtained. This capability will have many applications as a tool for understanding the margins of uncertainty in the new models as well as for validation experiment design and interpretation.
Using a Magnetic Flux Transport Model to Predict the Solar Cycle
NASA Technical Reports Server (NTRS)
Lyatskaya, S.; Hathaway, D.; Winebarger, A.
2007-01-01
We present the results of an investigation into the use of a magnetic flux transport model to predict the amplitude of future solar cycles. Recently Dikpati, de Toma, & Gilman (2006) showed how their dynamo model could be used to accurately predict the amplitudes of the last eight solar cycles and offered a prediction for the next solar cycle - a large amplitude cycle. Cameron & Schussler (2007) found that they could reproduce this predictive skill with a simple 1-dimensional surface flux transport model - provided they used the same parameters and data as Dikpati, de Toma, & Gilman. However, when they tried incorporating the data in what they argued was a more realistic manner, they found that the predictive skill dropped dramatically. We have written our own code for examining this problem and have incorporated updated and corrected data for the source terms - the emergence of magnetic flux in active regions. We present both the model itself and our results from it - in particular our tests of its effectiveness at predicting solar cycles.
NASA Astrophysics Data System (ADS)
Flanagan, S.; Schachter, J. M.; Schissel, D. P.
2001-10-01
A Data Analysis Monitoring (DAM) system has been developed to monitor between pulse physics analysis at the DIII-D National Fusion Facility. The system allows for rapid detection of discrepancies in diagnostic measurements or the results from physics analysis codes. This enables problems to be detected and possibly fixed between pulses as opposed to after the experimental run has concluded thus increasing the efficiency of experimental time. An example of a consistency check is comparing the stored energy from integrating the measured kinetic profiles to that calculated from magnetic measurements by EFIT. This new system also tracks the progress of MDSplus dispatching of software for data analysis and the loading of analyzed data into MDSplus. DAM uses a Java Servlet to receive messages, Clips to implement expert system logic, and displays its results to multiple web clients via HTML. If an error is detected by DAM, users can view more detailed information so that steps can be taken to eliminate the error for the next pulse. A demonstration of this system including a simulated DIII-D pulse cycle will be presented.
NASA Astrophysics Data System (ADS)
Naumov, D.; Fischer, T.; Böttcher, N.; Watanabe, N.; Walther, M.; Rink, K.; Bilke, L.; Shao, H.; Kolditz, O.
2014-12-01
OpenGeoSys (OGS) is a scientific open source code for numerical simulation of thermo-hydro-mechanical-chemical processes in porous and fractured media. Its basic concept is to provide a flexible numerical framework for solving multi-field problems for applications in geoscience and hydrology, e.g. CO2 storage, geothermal power plant forecast simulation, salt water intrusion, and water resources management. Advances in computational mathematics have revolutionized the variety and nature of the problems that environmental scientists and engineers can address, and intensive code development in recent years now enables the solution of much larger numerical problems and applications. However, solving environmental processes along the water cycle at large scales, such as for complete catchments or reservoirs, remains a computationally challenging task. Therefore, we started a new OGS code development with a focus on execution speed and parallelization. In the new version, a local data structure concept improves the instruction and data cache performance by tightly bundling data with an element-wise numerical integration loop. Dedicated analysis methods enable the investigation of memory-access patterns in the local and global assembler routines, which leads to further data structure optimization for an additional performance gain. The concept is presented together with a technical code analysis of the recent development and a large case study including transient flow simulation in the unsaturated / saturated zone of the Thuringian Syncline, Germany. The analysis is performed on a high-resolution mesh (up to 50M elements) with embedded fault structures.
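As a rough illustration of the element-wise assembly idea, the toy 1-D finite-element stiffness assembly below bundles the data each element needs (node coordinates, local matrix) inside the integration loop. It is a generic FEM sketch under simple assumptions, not the OpenGeoSys data structures.

```python
# Toy 1-D FEM stiffness assembly illustrating an element-wise local loop.
import numpy as np

n_nodes = 6
nodes = np.linspace(0.0, 1.0, n_nodes)            # 1-D mesh coordinates
elements = [(i, i + 1) for i in range(n_nodes - 1)]
K = np.zeros((n_nodes, n_nodes))                  # global matrix

for (a, b) in elements:                           # element-wise assembly loop
    h = nodes[b] - nodes[a]                       # element length
    k_local = (1.0 / h) * np.array([[1.0, -1.0],
                                    [-1.0, 1.0]]) # local stiffness matrix
    for i_loc, i_glob in enumerate((a, b)):       # scatter into global matrix
        for j_loc, j_glob in enumerate((a, b)):
            K[i_glob, j_glob] += k_local[i_loc, j_loc]

print(K)
```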
Interface design of VSOP'94 computer code for safety analysis
NASA Astrophysics Data System (ADS)
Natsir, Khairina; Yazid, Putranto Ilham; Andiwijayakusuma, D.; Wahanani, Nursinta Adi
2014-09-01
Today, most software applications, including those in the nuclear field, come with a graphical user interface. VSOP'94 (Very Superior Old Program) was designed to simplify the process of performing reactor simulations. VSOP is an integrated code system for simulating the life history of a nuclear reactor and is devoted to education and research. One advantage of the VSOP program is its ability to calculate the neutron spectrum, fuel cycle, 2-D diffusion, resonance integrals, estimates of reactor fuel costs, and integrated thermal hydraulics. VSOP can also be used for comparative studies and reactor safety simulations. However, the existing VSOP is a conventional program, developed in Fortran 65, and has several usability problems: for example, it runs only on DEC Alpha mainframe platforms and provides text-based output that is difficult to use, especially for data preparation and interpretation of results. We developed GUI-VSOP, an interface program that facilitates data preparation, runs the VSOP code, and reads the results in a more user-friendly way, and is usable on a personal computer (PC). Modifications include the development of interfaces for preprocessing, processing and postprocessing. The GUI-based preprocessing interface aims to provide a convenient way of preparing data. The processing interface is intended to provide convenience in configuring input files and libraries and in compiling the VSOP code. The postprocessing interface is designed to visualize the VSOP output in table and graphic forms. GUI-VSOP is expected to simplify and speed up the process and the analysis of safety aspects.
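The preprocessing/processing/postprocessing split described above can be pictured with a small wrapper sketch: write an input deck, launch the legacy executable, and parse an output quantity for display. The file format, the executable name ("vsop94"), and the "KEFF" keyword are hypothetical placeholders, not the real VSOP'94 interface.

```python
# Hedged sketch of a GUI wrapper around a legacy solver (placeholder formats).
import subprocess
from pathlib import Path

def write_input(deck_path: Path, params: dict) -> None:
    """Preprocessing: turn GUI fields into a text input deck (assumed format)."""
    deck_path.write_text("\n".join(f"{k} {v}" for k, v in params.items()) + "\n")

def run_code(deck_path: Path, out_path: Path) -> None:
    """Processing: launch the legacy solver (hypothetical executable name)."""
    with out_path.open("w") as out:
        subprocess.run(["vsop94", str(deck_path)], stdout=out, check=True)

def read_keff_history(out_path: Path) -> list:
    """Postprocessing: extract a quantity for tables/plots (assumed keyword)."""
    return [float(line.split()[-1])
            for line in out_path.read_text().splitlines()
            if line.startswith("KEFF")]
```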
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
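To make the concatenated-coding structure concrete, the sketch below chains an outer block code, a channel, and an inner code with hard-decision decoding. For illustration the outer RS(255,223) code is replaced by a trivial even-parity block code and the inner code by 3x repetition over a binary symmetric channel; this only demonstrates the pipeline, not the candidate codes studied in the report.

```python
# Toy concatenated coding pipeline (structural illustration only).
import random

def outer_encode(bits):           # append one even-parity bit per 7-bit block
    out = []
    for i in range(0, len(bits), 7):
        block = bits[i:i + 7]
        out += block + [sum(block) % 2]
    return out

def inner_encode(bits):           # 3x repetition as a stand-in inner code
    return [b for b in bits for _ in range(3)]

def channel(bits, p=0.05):        # binary symmetric channel with error prob p
    return [b ^ (random.random() < p) for b in bits]

def inner_decode(bits):           # majority vote over each received triplet
    return [int(sum(bits[i:i + 3]) >= 2) for i in range(0, len(bits), 3)]

def outer_check(bits):            # flag 8-bit blocks whose parity fails
    return [sum(bits[i:i + 8]) % 2 == 0 for i in range(0, len(bits), 8)]

msg = [random.randint(0, 1) for _ in range(70)]
rx  = inner_decode(channel(inner_encode(outer_encode(msg))))
print("parity-clean blocks:", sum(outer_check(rx)), "of", len(outer_check(rx)))
```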
2013-01-01
Background Myelosuppressive chemotherapy can lead to dose-limiting febrile neutropenia. Prophylactic use of recombinant human G-CSF such as daily filgrastim and once-per-cycle pegfilgrastim may reduce the incidence of febrile neutropenia. This comparative study examined the effect of pegfilgrastim versus daily filgrastim on the risk of hospitalization. Methods This retrospective United States claims analysis utilized 2004–2009 data for filgrastim- and pegfilgrastim-treated patients receiving chemotherapy for non-Hodgkin’s lymphoma (NHL) or breast, lung, ovarian, or colorectal cancers. Cycles in which pegfilgrastim or filgrastim was administered within 5 days from initiation of chemotherapy (considered to represent prophylaxis) were pooled for analysis. Neutropenia-related hospitalization and other healthcare encounters were defined with a “narrow” criterion for claims with an ICD-9 code for neutropenia and with a “broad” criterion for claims with an ICD-9 code for neutropenia, fever, or infection. Odds ratios (OR) for hospitalization and 95% confidence intervals (CI) were estimated by generalized estimating equation (GEE) models and adjusted for patient, tumor, and treatment characteristics. Per-cycle healthcare utilization and costs were examined for cycles with pegfilgrastim or filgrastim prophylaxis. Results We identified 3,535 patients receiving G-CSF prophylaxis, representing 12,056 chemotherapy cycles (11,683 pegfilgrastim, 373 filgrastim). The mean duration of filgrastim prophylaxis in the sample was 4.8 days. The mean duration of pegfilgrastim prophylaxis in the sample was 1.0 day, consistent with the recommended dosage of pegfilgrastim - a single injection once per chemotherapy cycle. Cycles with prophylactic pegfilgrastim were associated with a decreased risk of neutropenia-related hospitalization (narrow definition: OR = 0.43, 95% CI: 0.16–1.13; broad definition: OR = 0.38, 95% CI: 0.24–0.59) and all-cause hospitalization (OR = 0.50, 95% CI: 0.35–0.72) versus cycles with prophylactic filgrastim. For neutropenia-related utilization by setting of care, there were more ambulatory visits and hospitalizations per cycle associated with filgrastim prophylaxis than with pegfilgrastim prophylaxis. Mean per-cycle neutropenia-related costs were also higher with prophylactic filgrastim than with prophylactic pegfilgrastim. Conclusions In this comparative effectiveness study, pegfilgrastim prophylaxis was associated with a reduced risk of neutropenia-related or all-cause hospitalization relative to filgrastim prophylaxis. PMID:23298389
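For readers unfamiliar with the odds-ratio endpoint, the sketch below computes a crude (unadjusted) odds ratio with a Woolf 95% confidence interval from hypothetical cycle counts. The study itself used GEE models with covariate adjustment, which this toy calculation does not reproduce; the counts are made up.

```python
# Crude odds ratio with Woolf 95% CI (illustrative counts, not study data).
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = events/non-events in group 1; c,d = events/non-events in group 2."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# hypothetical counts: hospitalization / no-hospitalization cycles per group
print(odds_ratio_ci(a=120, b=11563, c=9, d=364))
```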
Nuclear modules for space electric propulsion
NASA Technical Reports Server (NTRS)
Difilippo, F. C.
1998-01-01
Analysis of interplanetary cargo and piloted missions requires calculations of the performances and masses of subsystems to be integrated in a final design. In a preliminary and scoping stage the designer needs to evaluate options iteratively by using fast computer simulations. The Oak Ridge National Laboratory (ORNL) has been involved in the development of models and calculational procedures for the analysis (neutronic and thermal hydraulic) of power sources for nuclear electric propulsion. The nuclear modules will be integrated into the whole simulation of the nuclear electric propulsion system. The vehicles use either a Brayton direct-conversion cycle, using the heated helium from a NERVA-type reactor, or a potassium Rankine cycle, with the working fluid heated on the secondary side of a heat exchanger and lithium on the primary side coming from a fast reactor. Given a set of input conditions, the codes calculate the composition, dimensions, volumes, and masses of the core, reflector, control system, pressure vessel, neutron and gamma shields, as well as the thermal hydraulic conditions of the coolant, clad and fuel. Input conditions are power, core life, pressure and temperature of the coolant at the inlet of the core, either the temperature of the coolant at the outlet of the core or the coolant mass flow, and the fluences and integrated doses at the cargo area. Using state-of-the-art neutron cross sections and transport codes, a database was created for the neutronic performance of both reactor designs. The free parameters of the models are the moderator/fuel mass ratio for the NERVA reactor and the enrichment and the pitch of the lattice for the fast reactor. Reactivity and energy balance equations are simultaneously solved to find the reactor design. Thermal hydraulic conditions are calculated by solving the one-dimensional versions of the equations of conservation of mass, energy, and momentum with compressible flow.
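As a flavor of the scoping-level energy balances such codes perform, the sketch below estimates the fissile mass consumed by fission from reactor power and core life, using the standard rule of thumb of roughly 200 MeV per fission (about 1 g fissioned per MWd). The input numbers are illustrative, not from the ORNL models.

```python
# Back-of-envelope fissile consumption from power and core life (illustrative).
MEV_PER_FISSION = 200.0
J_PER_MEV = 1.602e-13
J_PER_MWD = 1.0e6 * 86400.0
AVOGADRO = 6.022e23

def fissile_mass_fissioned_g(power_MW, core_life_days, atomic_mass=235.0):
    fissions = power_MW * core_life_days * J_PER_MWD / (MEV_PER_FISSION * J_PER_MEV)
    return fissions * atomic_mass / AVOGADRO     # grams fissioned

print(f"{fissile_mass_fissioned_g(power_MW=5.0, core_life_days=3*365):.0f} g fissioned")
```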
Controlled longitudinal emittance blow-up using band-limited phase noise in CERN PSB
NASA Astrophysics Data System (ADS)
Quartullo, D.; Shaposhnikova, E.; Timko, H.
2017-07-01
Controlled longitudinal emittance blow-up (from 1 eVs to 1.4 eVs) for LHC beams in the CERN PS Booster is currently achieved using sinusoidal phase modulation of a dedicated high-harmonic RF system. In 2021, after the LHC injectors upgrade, 3 eVs should be extracted to the PS. Although the current method may satisfy the new requirements, it relies on low-power level RF improvements. In this paper another blow-up method is considered, namely the injection of band-limited phase noise in the main RF system (h=1), never tried in the PSB but already used in the CERN SPS and LHC under different conditions (longer cycles). This technique, which lowers the peak line density and therefore the impact of intensity effects in the PSB and the PS, can also be complementary to the present method. The longitudinal space charge, dominant in the PSB, causes significant synchrotron frequency shifts with intensity, and its effect should be taken into account. Another complication arises from the interaction of the phase loop with the injected noise, since both act on the RF phase. All these elements were studied in simulations of the PSB cycle with the BLonD code, and the required blow-up was achieved.
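A minimal sketch of how band-limited phase noise can be generated: filter white noise in the frequency domain so only a band around the synchrotron frequency survives, then scale to a target rms phase amplitude. The band edges, rms level, and sampling rate are assumed example values, not the PSB settings used with BLonD.

```python
# Band-limited phase-noise generation sketch (assumed example parameters).
import numpy as np

fs = 10e3                    # sampling rate of the noise sequence [Hz] (assumed)
n = 2**16
f_lo, f_hi = 400.0, 600.0    # band around the synchrotron frequency [Hz] (assumed)
rms_deg = 1.0                # target rms phase modulation [deg] (assumed)

rng = np.random.default_rng(0)
white = rng.standard_normal(n)
spec = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1.0/fs)
spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0     # keep only the chosen band
noise = np.fft.irfft(spec, n)
noise *= np.deg2rad(rms_deg) / noise.std()      # scale to target rms [rad]

print("rms phase noise [deg]:", np.rad2deg(noise.std()))
```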
Implementation of Energy Code Controls Requirements in New Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenberg, Michael I.; Hart, Philip R.; Hatten, Mike
Most state energy codes in the United States are based on one of two national model codes: ANSI/ASHRAE/IES 90.1 (Standard 90.1) or the International Code Council (ICC) International Energy Conservation Code (IECC). Since 2004, covering the last four cycles of Standard 90.1 updates, about 30% of all new requirements have been related to building controls. These requirements can be difficult to implement and verification is beyond the expertise of most building code officials, yet the assumption in studies that measure the savings from energy codes is that they are implemented and working correctly. The objective of the current research is to evaluate the degree to which high impact controls requirements included in commercial energy codes are properly designed, commissioned and implemented in new buildings. This study also evaluates the degree to which these control requirements are realizing their savings potential. This was done using a three-step process. The first step involved interviewing commissioning agents to get a better understanding of their activities as they relate to energy code required controls measures. The second involved field audits of a sample of commercial buildings to determine whether the code required control measures are being designed, commissioned and correctly implemented and functioning in new buildings. The third step includes compilation and analysis of the information gathered during the first two steps. Information gathered during these activities could be valuable to code developers, energy planners, designers, building owners, and building officials.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eslinger, Paul W.; Aaberg, Rosanne L.; Lopresti, Charles A.
2004-09-14
This document contains detailed user instructions for a suite of utility codes developed for Rev. 1 of the Systems Assessment Capability. The suite of computer codes for Rev. 1 of Systems Assessment Capability performs many functions.
Independent rate and temporal coding in hippocampal pyramidal cells.
Huxter, John; Burgess, Neil; O'Keefe, John
2003-10-23
In the brain, hippocampal pyramidal cells use temporal as well as rate coding to signal spatial aspects of the animal's environment or behaviour. The temporal code takes the form of a phase relationship to the concurrent cycle of the hippocampal electroencephalogram theta rhythm. These two codes could each represent a different variable. However, this requires the rate and phase to vary independently, in contrast to recent suggestions that they are tightly coupled, both reflecting the amplitude of the cell's input. Here we show that the time of firing and firing rate are dissociable, and can represent two independent variables: respectively the animal's location within the place field, and its speed of movement through the field. Independent encoding of location together with actions and stimuli occurring there may help to explain the dual roles of the hippocampus in spatial and episodic memory, or may indicate a more general role of the hippocampus in relational/declarative memory.
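A common way to quantify the phase relationship described here is to band-pass the LFP around theta, take the analytic signal, and read the instantaneous phase at each spike time. The sketch below does this on synthetic data; the filter band and the toy signal are illustrative assumptions, not the recordings analysed in the paper.

```python
# Spike theta-phase estimation via Hilbert transform (synthetic illustration).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                            # LFP sampling rate [Hz]
t = np.arange(0, 10, 1/fs)
lfp = np.sin(2*np.pi*8*t) + 0.3*np.random.randn(t.size)   # toy 8 Hz theta rhythm
spike_times = np.array([1.01, 2.53, 4.02, 7.77])           # seconds (toy spikes)

b, a = butter(3, [6/(fs/2), 10/(fs/2)], btype="band")      # 6-10 Hz theta band
theta = filtfilt(b, a, lfp)
phase = np.angle(hilbert(theta))                           # instantaneous phase [rad]

spike_idx = (spike_times * fs).astype(int)
spike_phase = np.rad2deg(phase[spike_idx]) % 360
print(spike_phase)    # firing phase within the concurrent theta cycle
```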
NASA Astrophysics Data System (ADS)
Tsilanizara, A.; Gilardi, N.; Huynh, T. D.; Jouanne, C.; Lahaye, S.; Martinez, J. M.; Diop, C. M.
2014-06-01
Knowledge of the decay heat and the associated uncertainties is an important issue for the safety of nuclear facilities. Many codes are available to estimate the decay heat; ORIGEN, FISPACT, and DARWIN/PEPIN2 are among them. MENDEL is a new depletion code developed at CEA, with a new software architecture, devoted to the calculation of physical quantities related to fuel cycle studies, in particular decay heat. The purpose of this paper is to present a probabilistic approach to assess decay heat uncertainty due to the decay data uncertainties from nuclear data evaluations such as JEFF-3.1.1 or ENDF/B-VII.1. This probabilistic approach is based on both the MENDEL code and the URANIE software, which is a CEA uncertainty analysis platform. As preliminary applications, single thermal fission of uranium-235 and plutonium-239 and a PWR UOx spent fuel cell are investigated.
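The probabilistic idea can be sketched with a toy Monte Carlo: sample uncertain decay data (here the mean decay energies), recompute the decay heat sum for each sample, and take the spread as the propagated uncertainty. The nuclides, half-lives, and uncertainties below are illustrative placeholders, not JEFF-3.1.1 or ENDF/B-VII.1 values, and the calculation is not the MENDEL/URANIE chain.

```python
# Monte Carlo propagation of decay-data uncertainty to decay heat (toy data).
import numpy as np

rng = np.random.default_rng(1)
MEV_TO_W = 1.602e-13

# nuclide: (atoms at t=0, half-life [s], mean decay energy [MeV], relative unc.)
inventory = {"A": (1e20, 3.0e3, 0.50, 0.05),
             "B": (5e19, 8.6e4, 1.20, 0.10)}

def decay_heat(t, energies):
    heat = 0.0
    for (n0, t_half, _, _), e in zip(inventory.values(), energies):
        lam = np.log(2) / t_half
        heat += lam * n0 * np.exp(-lam * t) * e * MEV_TO_W
    return heat

nominal_e = np.array([v[2] for v in inventory.values()])
rel_unc = np.array([v[3] for v in inventory.values()])
samples = [decay_heat(3600.0, nominal_e * (1 + rel_unc * rng.standard_normal(2)))
           for _ in range(5000)]
print(f"decay heat at 1 h: {np.mean(samples):.3e} W +/- {np.std(samples):.1e} W")
```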
Cell Cycle Regulation of Stem Cells by MicroRNAs.
Mens, Michelle M J; Ghanbari, Mohsen
2018-06-01
MicroRNAs (miRNAs) are a class of small non-coding RNA molecules involved in the regulation of gene expression. They are involved in the fine-tuning of fundamental biological processes such as proliferation, differentiation, survival and apoptosis in many cell types. Emerging evidence suggests that miRNAs regulate critical pathways involved in stem cell function. Several miRNAs have been suggested to target transcripts that directly or indirectly coordinate the cell cycle progression of stem cells. Moreover, previous studies have shown that altered expression levels of miRNAs can contribute to pathological conditions, such as cancer, due to the loss of cell cycle regulation. However, the precise mechanism underlying miRNA-mediated regulation of cell cycle in stem cells is still incompletely understood. In this review, we discuss current knowledge of miRNAs regulatory role in cell cycle progression of stem cells. We describe how specific miRNAs may control cell cycle associated molecules and checkpoints in embryonic, somatic and cancer stem cells. We further outline how these miRNAs could be regulated to influence cell cycle progression in stem cells as a potential clinical application.
Taylor-Robinson, David C; Milton, Beth; Lloyd-Williams, Ffion; O'Flaherty, Martin; Capewell, Simon
2008-01-01
Background In order to better understand factors that influence decisions for public health, we undertook a qualitative study to explore issues relating to the time horizons used in decision-making. Methods Qualitative study using semi-structured interviews. 33 individuals involved in the decision making process around coronary heart disease were purposively sampled from the UK National Health Service (national, regional and local levels), academia and voluntary organizations. Analysis was based on the framework method using N-VIVO software. Interviews were transcribed, coded and emergent themes identified. Results Many participants suggested that the timescales for public health decision-making are too short. Commissioners and some practitioners working at the national level particularly felt constrained in terms of planning for the long-term. Furthermore respondents felt that longer term planning was needed to address the wider determinants of health and to achieve societal level changes. Three prominent 'systems' issues were identified as important drivers of short term thinking: the need to demonstrate impact within the 4 year political cycle; the requirement to 'balance the books' within the annual commissioning cycle and the disruption caused by frequent re-organisations within the health service. In addition respondents suggested that the tools and evidence base for longer term planning were not well established. Conclusion Many public health decision and policy makers feel that the timescales for decision-making are too short. Substantial systemic barriers to longer-term planning exist. Policy makers need to look beyond short-term targets and budget cycles to secure investment for long-term improvement in public health. PMID:19094194
Taylor-Robinson, David C; Milton, Beth; Lloyd-Williams, Ffion; O'Flaherty, Martin; Capewell, Simon
2008-12-18
In order to better understand factors that influence decisions for public health, we undertook a qualitative study to explore issues relating to the time horizons used in decision-making. Qualitative study using semi-structured interviews. 33 individuals involved in the decision making process around coronary heart disease were purposively sampled from the UK National Health Service (national, regional and local levels), academia and voluntary organizations. Analysis was based on the framework method using N-VIVO software. Interviews were transcribed, coded and emergent themes identified. Many participants suggested that the timescales for public health decision-making are too short. Commissioners and some practitioners working at the national level particularly felt constrained in terms of planning for the long-term. Furthermore respondents felt that longer term planning was needed to address the wider determinants of health and to achieve societal level changes. Three prominent 'systems' issues were identified as important drivers of short term thinking: the need to demonstrate impact within the 4 year political cycle; the requirement to 'balance the books' within the annual commissioning cycle and the disruption caused by frequent re-organisations within the health service. In addition respondents suggested that the tools and evidence base for longer term planning were not well established. Many public health decision and policy makers feel that the timescales for decision-making are too short. Substantial systemic barriers to longer-term planning exist. Policy makers need to look beyond short-term targets and budget cycles to secure investment for long-term improvement in public health.
Transient dynamics capability at Sandia National Laboratories
NASA Technical Reports Server (NTRS)
Attaway, Steven W.; Biffle, Johnny H.; Sjaardema, G. D.; Heinstein, M. W.; Schoof, L. A.
1993-01-01
A brief overview of the transient dynamics capabilities at Sandia National Laboratories, with an emphasis on recent new developments and current research is presented. In addition, the Sandia National Laboratories (SNL) Engineering Analysis Code Access System (SEACAS), which is a collection of structural and thermal codes and utilities used by analysts at SNL, is described. The SEACAS system includes pre- and post-processing codes, analysis codes, database translation codes, support libraries, Unix shell scripts for execution, and an installation system. SEACAS is used at SNL on a daily basis as a production, research, and development system for the engineering analysts and code developers. Over the past year, approximately 190 days of CPU time were used by SEACAS codes on jobs running from a few seconds up to two and one-half days of CPU time. SEACAS is running on several different systems at SNL including Cray Unicos, Hewlett Packard PH-UX, Digital Equipment Ultrix, and Sun SunOS. An overview of SEACAS, including a short description of the codes in the system, are presented. Abstracts and references for the codes are listed at the end of the report.
Discharge properties of upper airway motor units during wakefulness and sleep.
Trinder, John; Jordan, Amy S; Nicholas, Christian L
2014-01-01
Upper airway muscle motoneurons, as assessed at the level of the motor unit, have a range of different discharge patterns, varying as to whether their activity is modulated in phase with the respiratory cycle, are predominantly inspiratory or expiratory, or are phasic as opposed to tonic. Two fundamental questions raised by this observation are: how are synaptic inputs from premotor neurons distributed over motoneurons to achieve these different discharge patterns; and how do different discharge patterns contribute to muscle function? We and others have studied the behavior of genioglossus (GG) and tensor palatini (TP) single motor units at transitions from wakefulness to sleep (sleep onset), from sleep to wakefulness (arousal from sleep), and during hypercapnia. Results indicate that decreases or increases in GG and TP muscle activity occur as a consequence of derecruitment or recruitment, respectively, of phasic and tonic inspiratory-modulated motoneurons, with only minor changes in rate coding. Further, sleep-wake state and chemical inputs to this "inspiratory system" appear to be mediated through the respiratory pattern generator. In contrast, phasic and tonic expiratory units and units with a purely tonic pattern, the "tonic system," are largely unaffected by sleep-wake state, and are only weakly influenced by chemical stimuli and the respiratory cycle. We speculate that the "inspiratory system" produces gross changes in upper airway muscle activity in response to changes in respiratory drive, while the "tonic system" fine tunes airway configuration with activity in this system being determined by local mechanical conditions. © 2014 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Clement, J. D.; Kirby, K. D.
1973-01-01
Exploratory calculations were performed for several gas core breeder reactor configurations. The computational method involved the use of the MACH-1 one dimensional diffusion theory code and the THERMOS integral transport theory code for thermal cross sections. Computations were performed to analyze thermal breeder concepts and nonbreeder concepts. Analysis of breeders was restricted to the (U-233)-Th breeding cycle, and computations were performed to examine a range of parameters. These parameters include U-233 to hydrogen atom ratio in the gaseous cavity, carbon to thorium atom ratio in the breeding blanket, cavity size, and blanket size.
Late-onset urea cycle disorder in adulthood unmasked by severe malnutrition.
Wells, Diana L; Thomas, Jillian B; Sacks, Gordon S; Zouhary, L Anna
2014-01-01
Urea cycle disorders (UCDs) most often involve inherited deficiencies in genes that code for enzymes normally used by the urea cycle to breakdown nitrogen. UCDs lead to serious metabolic complications, including severe neurologic decompensation related to hyperammonemia. Although the majority of UCDs are revealed soon after birth, stressful events in adulthood can lead to unmasking of a partial, late-onset UCDs. In this report, we describe a late-onset UCD unmasked by severe malnutrition. Early, specialized nutrition therapy is a fundamental aspect of treating hyperammonemic crises in patients with UCD. The case presented here demonstrates the importance of early recognition of UCD and appropriate interventions with nutrition support. Copyright © 2014 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qiu, Jun-jun; Department of Obstetrics and Gynecology of Shanghai Medical College, Fudan University, 138 Yixueyuan Road, Shanghai 200032; Shanghai Key Laboratory of Female Reproductive Endocrine-Related Diseases, 413 Zhaozhou Road, Shanghai 200011
HOX transcript antisense RNA (HOTAIR) is a well-known long non-coding RNA (lncRNA) whose dysregulation correlates with poor prognosis and malignant progression in many forms of cancer. Here, we investigate the expression pattern, clinical significance, and biological function of HOTAIR in serous ovarian cancer (SOC). Clinically, we found that HOTAIR levels were overexpressed in SOC tissues compared with normal controls and that HOTAIR overexpression was correlated with an advanced FIGO stage and a high histological grade. Multivariate analysis revealed that HOTAIR is an independent prognostic factor for predicting overall survival in SOC patients. We demonstrated that HOTAIR silencing inhibited A2780 and OVCA429 SOC cell proliferation in vitro and that the anti-proliferative effects of HOTAIR silencing also occurred in vivo. Further investigation into the mechanisms responsible for the growth inhibitory effects by HOTAIR silencing revealed that its knockdown resulted in the induction of cell cycle arrest and apoptosis through certain cell cycle-related and apoptosis-related proteins. Together, these results highlight a critical role of HOTAIR in SOC cell proliferation and contribute to a better understanding of the importance of dysregulated lncRNAs in SOC progression. - Highlights: • HOTAIR overexpression correlates with an aggressive tumour phenotype and a poor prognosis in SOC. • HOTAIR promotes SOC cell proliferation both in vitro and in vivo. • The proliferative role of HOTAIR is associated with regulation of the cell cycle and apoptosis.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-25
... estrous cycles to allow for fixed time artificial insemination in lactating dairy cows and beef cows.\\1... insemination in lactating dairy cows and beef cows. Administer to each cow 100 [micro]g gonadorelin by...
The application of coded excitation technology in medical ultrasonic Doppler imaging
NASA Astrophysics Data System (ADS)
Li, Weifeng; Chen, Xiaodong; Bao, Jing; Yu, Daoyin
2008-03-01
Medical ultrasonic Doppler imaging is one of the most important domains of modern medical imaging technology. The application of coded excitation technology in a medical ultrasonic Doppler imaging system has the potential for higher SNR and greater penetration depth than a conventional pulse-echo imaging system; it also improves image quality and enhances sensitivity to weak signals, and proper coded excitation is beneficial to the received spectrum of the Doppler signal. First, this paper reviews the application of coded excitation technology in medical ultrasonic Doppler imaging systems, showing the advantages and promise of coded excitation technology, and then introduces its principles and theory. Second, we compare several code sequences (including chirp and pseudo-chirp signals, Barker codes, Golay complementary sequences, M-sequences, etc.). Considering mainlobe width, range sidelobe level, signal-to-noise ratio, and the sensitivity of the Doppler signal, we chose Barker codes as the code sequence. Finally, we designed the coded excitation circuit. The results in B-mode imaging and Doppler flow measurement matched our expectations, demonstrating the advantage of applying coded excitation technology in the Digital Medical Ultrasonic Doppler Endoscope Imaging System.
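The attraction of Barker codes for pulse compression can be shown in a few lines: the autocorrelation of the 13-bit Barker sequence has a mainlobe of 13 and peak sidelobes of magnitude 1, and a matched filter recovers the echo delay in noise. The toy echo below is an illustration, not the authors' hardware or signals.

```python
# Barker-13 pulse compression sketch.
import numpy as np

barker13 = np.array([1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1])
acf = np.correlate(barker13, barker13, mode="full")
print("mainlobe:", acf.max())                                   # 13
print("peak sidelobe:", np.abs(acf[acf != acf.max()]).max())    # 1

# matched-filter compression of a toy echo delayed by 20 samples in noise
echo = np.zeros(100)
echo[20:33] = barker13
echo += 0.2 * np.random.randn(100)
compressed = np.correlate(echo, barker13, mode="valid")
print("estimated delay:", compressed.argmax())                  # ~20
```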
Accuracy and time requirements of a bar-code inventory system for medical supplies.
Hanson, L B; Weinswig, M H; De Muth, J E
1988-02-01
The effects of implementing a bar-code system for issuing medical supplies to nursing units at a university teaching hospital were evaluated. Data on the time required to issue medical supplies to three nursing units at a 480-bed, tertiary-care teaching hospital were collected (1) before the bar-code system was implemented (i.e., when the manual system was in use), (2) one month after implementation, and (3) four months after implementation. At the same times, the accuracy of the central supply perpetual inventory was monitored using 15 selected items. One-way analysis of variance tests were done to determine any significant differences between the bar-code and manual systems. Using the bar-code system took longer than using the manual system because of a significant difference in the time required for order entry into the computer. Multiple-use requirements of the central supply computer system made entering bar-code data a much slower process. There was, however, a significant improvement in the accuracy of the perpetual inventory. Using the bar-code system for issuing medical supplies to the nursing units takes longer than using the manual system. However, the accuracy of the perpetual inventory was significantly improved with the implementation of the bar-code system.
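The one-way analysis of variance used in the study can be illustrated with a short sketch; the issue times below are made-up values in minutes, not the study data, and serve only to show the form of the comparison across the three measurement periods.

```python
# One-way ANOVA across the three measurement periods (made-up example data).
from scipy.stats import f_oneway

manual      = [42, 38, 45, 40, 44]   # minutes per issue, manual system
barcode_1mo = [55, 60, 52, 58, 57]   # one month after bar-code implementation
barcode_4mo = [50, 48, 53, 49, 51]   # four months after implementation

f_stat, p_value = f_oneway(manual, barcode_1mo, barcode_4mo)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")   # small p => group means differ
```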
Code of Federal Regulations, 2010 CFR
2010-10-01
... laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding System Code...
Code of Federal Regulations, 2011 CFR
2011-10-01
... laboratory test for which a new or substantially revised Healthcare Common Procedure Coding System Code is assigned on or after January 1, 2005. Substantially Revised Healthcare Common Procedure Coding System Code...
Channel coding for underwater acoustic single-carrier CDMA communication system
NASA Astrophysics Data System (ADS)
Liu, Lanjun; Zhang, Yonglei; Zhang, Pengcheng; Zhou, Lin; Niu, Jiong
2017-01-01
CDMA is an effective multiple access protocol for underwater acoustic networks, and channel coding can effectively reduce the bit error rate (BER) of the underwater acoustic communication system. To meet the requirements of underwater acoustic mobile networks based on CDMA, an underwater acoustic single-carrier CDMA communication system (UWA/SCCDMA) based on direct-sequence spread spectrum is proposed, and its channel coding scheme is studied for convolutional, RA, Turbo and LDPC coding respectively. The implementation steps of the Viterbi algorithm for convolutional coding, the BP and min-sum algorithms for RA coding, the Log-MAP and SOVA algorithms for Turbo coding, and the sum-product algorithm for LDPC coding are given. A UWA/SCCDMA simulation system based on Matlab is designed. Simulation results show that the UWA/SCCDMA system based on RA, Turbo and LDPC coding has good performance, with a communication BER below 10^-6 in an underwater acoustic channel at low signal-to-noise ratios (SNR) from -12 dB to -10 dB, which is about two orders of magnitude lower than that of convolutional coding. The system based on Turbo coding with the Log-MAP algorithm has the best performance.
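To give a feel for why such a link can operate at negative SNR even before channel coding, the sketch below spreads BPSK symbols with a chip sequence and despreads them by correlation, showing the processing gain of direct-sequence spreading. The spreading factor, SNR, and code are illustrative, and no RA/Turbo/LDPC channel code is applied.

```python
# Toy direct-sequence spreading/despreading BER demo (no channel code applied).
import numpy as np

rng = np.random.default_rng(2)
n_bit = 2000
sf = 63                                      # spreading factor (assumed)
chips = rng.choice([-1.0, 1.0], size=sf)     # one user's spreading sequence

bits = rng.integers(0, 2, n_bit)
tx = np.repeat(2.0*bits - 1.0, sf) * np.tile(chips, n_bit)   # spread BPSK chips

snr_db = -10.0                               # chip-level SNR for unit-power chips
noise_sd = np.sqrt(10**(-snr_db / 10))
rx = tx + noise_sd * rng.standard_normal(tx.size)

# despread: correlate each symbol's chips with the sequence, then hard-decide
decisions = (rx.reshape(n_bit, sf) @ chips) > 0
print("BER:", np.mean(decisions != bits.astype(bool)))
```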
Evaluation of isotopic composition of fast reactor core in closed nuclear fuel cycle
NASA Astrophysics Data System (ADS)
Tikhomirov, Georgy; Ternovykh, Mikhail; Saldikov, Ivan; Fomichenko, Peter; Gerasimov, Alexander
2017-09-01
The strategy for the development of nuclear power in Russia provides for the use of fast power reactors in a closed nuclear fuel cycle. The PRORYV (i.e. «Breakthrough» in Russian) project is currently under development. Within the framework of this project, fast reactors BN-1200 and BREST-OD-300 should be built to demonstrate, inter alia, the feasibility of closed nuclear fuel cycle technologies with plutonium as the main source of energy. Russia has a large inventory of plutonium accumulated as a result of the reprocessing of spent fuel from thermal power reactors and the conversion of nuclear weapons. This plutonium will be used for the initial fuel assemblies of fast reactors. The closed nuclear fuel cycle concept of PRORYV assumes a self-sufficient mode of operation, with fuel regeneration by neutron capture in non-enriched uranium, which is used as the raw material. The operating modes and characteristics of the reactors should be chosen to provide this self-sufficient mode through the use of fissile isotopes while refueling with depleted uranium, and to maintain this state during the entire period of reactor operation. Thus, a topical issue is the modeling of fuel handling processes. To solve these problems, the code REPRORYV (Recycle for PRORYV) has been developed. It simulates nuclide streams in the non-reactor stages of the closed fuel cycle. At the same time, various verified codes can be used to evaluate the in-core characteristics of the reactor. Using this approach, this study considers various options for the nuclide streams and assesses the impact of different plutonium contents in the fuel, fuel processing conditions, losses during fuel processing, as well as initial uncertainties, on the neutron-physical characteristics of the reactor.
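A simplified sketch of the kind of out-of-core stream balance such a tool tracks: spent fuel is reprocessed, plutonium is recovered with some loss fraction to waste, and the fresh-fuel batch is completed with depleted uranium. All stream values and fractions below are illustrative, not PRORYV project data or the REPRORYV algorithm.

```python
# Toy out-of-core nuclide stream balance for a closed cycle (illustrative data).
def balance_fresh_fuel(spent_hm_kg, pu_fraction, loss_fraction,
                       fresh_pu_fraction, fresh_batch_kg):
    pu_recovered = spent_hm_kg * pu_fraction * (1.0 - loss_fraction)
    pu_to_waste  = spent_hm_kg * pu_fraction * loss_fraction
    pu_needed    = fresh_batch_kg * fresh_pu_fraction
    dep_u_makeup = fresh_batch_kg - pu_needed
    return {"Pu recovered (kg)": pu_recovered,
            "Pu losses to waste (kg)": pu_to_waste,
            "depleted-U makeup (kg)": dep_u_makeup,
            "self-sufficient": pu_recovered >= pu_needed}

print(balance_fresh_fuel(spent_hm_kg=1000.0, pu_fraction=0.14,
                         loss_fraction=0.001, fresh_pu_fraction=0.14,
                         fresh_batch_kg=1000.0))
```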
A Validation of Object-Oriented Design Metrics as Quality Indicators
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio
1997-01-01
This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful to predict class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development processes.
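One common way to use such design metrics as early quality indicators is a logistic regression of class fault status on Chidamber and Kemerer style metrics (e.g. WMC, DIT, CBO, RFC). The sketch below fits such a model on synthetic placeholder data; it is not the statistical procedure or data set of the paper.

```python
# Fault-proneness prediction from OO design metrics (synthetic placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 200
X = np.column_stack([rng.poisson(10, n),     # WMC: weighted methods per class
                     rng.integers(1, 6, n),  # DIT: depth of inheritance tree
                     rng.poisson(5, n),      # CBO: coupling between objects
                     rng.poisson(20, n)])    # RFC: response for a class
logit = -3.0 + 0.08*X[:, 0] + 0.25*X[:, 2]   # assumed "true" relation for the demo
y = rng.random(n) < 1/(1 + np.exp(-logit))   # fault-prone yes/no labels

model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients (WMC, DIT, CBO, RFC):", np.round(model.coef_[0], 3))
print("predicted fault-proneness of a new class:",
      round(model.predict_proba([[15, 2, 8, 25]])[0, 1], 2))
```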
A Validation of Object-Oriented Design Metrics
NASA Technical Reports Server (NTRS)
Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.
1995-01-01
This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993] where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development processes.
GEDAE-LaB: A Free Software to Calculate the Energy System Contributions during Exercise
Bertuzzi, Rômulo; Melegati, Jorge; Bueno, Salomão; Ghiarone, Thaysa; Pasqua, Leonardo A.; Gáspari, Arthur Fernandes; Lima-Silva, Adriano E.; Goldman, Alfredo
2016-01-01
Purpose The aim of the current study is to describe the functionality of free software developed for energy system contributions and energy expenditure calculation during exercise, namely GEDAE-LaB. Methods Eleven participants performed the following tests: 1) a maximal cycling incremental test to measure the ventilatory threshold and maximal oxygen uptake (V̇O2max); 2) a cycling workload constant test at moderate domain (90% ventilatory threshold); 3) a cycling workload constant test at severe domain (110% V̇O2max). Oxygen uptake and plasma lactate were measured during the tests. The contributions of the aerobic (AMET), anaerobic lactic (LAMET), and anaerobic alactic (ALMET) systems were calculated based on the oxygen uptake during exercise, the oxygen energy equivalents provided by lactate accumulation, and the fast component of excess post-exercise oxygen consumption, respectively. In order to assess the intra-investigator variation, four different investigators performed the analyses independently using GEDAE-LaB. A direct comparison with commercial software was also provided. Results All subjects completed 10 min of exercise at moderate domain, while the time to exhaustion at severe domain was 144 ± 65 s. The AMET, LAMET, and ALMET contributions during moderate domain were about 93, 2, and 5%, respectively. The AMET, LAMET, and ALMET contributions during severe domain were about 66, 21, and 13%, respectively. No statistical differences were found between the energy system contributions and energy expenditure obtained by GEDAE-LaB and commercial software for both moderate and severe domains (P > 0.05). The ICC revealed that these estimates were highly reliable among the four investigators for both moderate and severe domains (all ICC ≥ 0.94). Conclusion These findings suggest that GEDAE-LaB is a free software easily comprehended by users minimally familiarized with adopted procedures for calculations of energetic profile using oxygen uptake and lactate accumulation during exercise. By providing availability of the software and its source code we hope to facilitate future related research. PMID:26727499
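The three estimates described (aerobic from the oxygen uptake above rest, anaerobic lactic from net lactate accumulation, anaerobic alactic from the fast EPOC component) can be sketched as follows. The 3 ml O2 per kg per mmol/L lactate equivalent and the 20.9 kJ per litre of O2 caloric equivalent are standard assumptions commonly used for these calculations, and the toy input data are not outputs of GEDAE-LaB itself.

```python
# Sketch of the three energy-system estimates (standard equivalents, toy data).
import numpy as np

KJ_PER_L_O2 = 20.9               # assumed caloric equivalent of oxygen

def aerobic_kj(t_s, vo2_l_min, vo2_rest_l_min):
    """Integrate net O2 uptake above rest over exercise time."""
    net = np.clip(np.asarray(vo2_l_min) - vo2_rest_l_min, 0, None)
    return np.trapz(net, t_s) / 60.0 * KJ_PER_L_O2       # litres of O2 -> kJ

def lactic_kj(delta_lactate_mmol_l, body_mass_kg):
    """Net lactate accumulation converted via 3 ml O2 / kg / mmol/L."""
    return 3.0 * delta_lactate_mmol_l * body_mass_kg / 1000.0 * KJ_PER_L_O2

def alactic_kj(epoc_fast_l_o2):
    """Fast component of excess post-exercise O2 consumption, in litres."""
    return epoc_fast_l_o2 * KJ_PER_L_O2

t = np.arange(0, 150, 10)                    # s, toy severe-domain bout
vo2 = np.minimum(0.5 + 0.025 * t, 3.5)       # L/min, toy on-kinetics
a, l, al = aerobic_kj(t, vo2, 0.5), lactic_kj(8.0, 70.0), alactic_kj(2.0)
total = a + l + al
print([round(100 * x / total, 1) for x in (a, l, al)])   # % contributions
```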
GEDAE-LaB: A Free Software to Calculate the Energy System Contributions during Exercise.
Bertuzzi, Rômulo; Melegati, Jorge; Bueno, Salomão; Ghiarone, Thaysa; Pasqua, Leonardo A; Gáspari, Arthur Fernandes; Lima-Silva, Adriano E; Goldman, Alfredo
2016-01-01
The aim of the current study is to describe the functionality of free software developed for energy system contributions and energy expenditure calculation during exercise, namely GEDAE-LaB. Eleven participants performed the following tests: 1) a maximal cycling incremental test to measure the ventilatory threshold and maximal oxygen uptake (V̇O2max); 2) a cycling workload constant test at moderate domain (90% ventilatory threshold); 3) a cycling workload constant test at severe domain (110% V̇O2max). Oxygen uptake and plasma lactate were measured during the tests. The contributions of the aerobic (AMET), anaerobic lactic (LAMET), and anaerobic alactic (ALMET) systems were calculated based on the oxygen uptake during exercise, the oxygen energy equivalents provided by lactate accumulation, and the fast component of excess post-exercise oxygen consumption, respectively. In order to assess the intra-investigator variation, four different investigators performed the analyses independently using GEDAE-LaB. A direct comparison with commercial software was also provided. All subjects completed 10 min of exercise at moderate domain, while the time to exhaustion at severe domain was 144 ± 65 s. The AMET, LAMET, and ALMET contributions during moderate domain were about 93, 2, and 5%, respectively. The AMET, LAMET, and ALMET contributions during severe domain were about 66, 21, and 13%, respectively. No statistical differences were found between the energy system contributions and energy expenditure obtained by GEDAE-LaB and commercial software for both moderate and severe domains (P > 0.05). The ICC revealed that these estimates were highly reliable among the four investigators for both moderate and severe domains (all ICC ≥ 0.94). These findings suggest that GEDAE-LaB is a free software easily comprehended by users minimally familiarized with adopted procedures for calculations of energetic profile using oxygen uptake and lactate accumulation during exercise. By providing availability of the software and its source code we hope to facilitate future related research.
New developments and prospects on COSI, the simulation software for fuel cycle analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eschbach, R.; Meyer, M.; Coquelet-Pascal, C.
2013-07-01
COSI, software developed by the Nuclear Energy Direction of the CEA, is a code simulating a pool of nuclear power plants with its associated fuel cycle facilities. This code has been designed to study various short, medium and long term options for the introduction of various types of nuclear reactors and for the use of the associated nuclear materials. In the frame of the French Act for waste management, scenario studies are carried out with COSI to compare different options for the evolution of the French reactor fleet and options for the partitioning and transmutation of plutonium and minor actinides. Those studies aim in particular at evaluating the sustainability of Sodium-cooled Fast Reactor (SFR) deployment and the possibility to transmute minor actinides. COSI6 is a completely renewed version of the software, released in 2006. COSI6 is now coupled with the latest version of CESAR (CESAR5.3, based on JEFF3.1.1 nuclear data), allowing calculations on irradiated fuel with 200 fission products and 100 heavy nuclides. A new release is planned in 2013, including in particular the coupling with a recommended database of reactors. An exercise of validation of COSI6, carried out on the French PWR historic nuclear fleet, has been performed. During this exercise, quantities such as cumulative natural uranium consumption, cumulative depleted uranium, UOX/MOX spent fuel storage, stocks of reprocessed uranium, plutonium content in fresh MOX fuel, and the annual production of high-level waste were computed by COSI6 and compared to industrial data. The results have allowed us to validate the essential phases of the fuel cycle computation and reinforce the credibility of the results provided by the code.
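One of the validation quantities mentioned, cumulative natural uranium consumption, can be reconstructed from annual enriched-fuel loadings with the standard feed-to-product mass balance F/P = (x_p - x_t)/(x_f - x_t). The sketch below applies this balance to made-up loadings; the enrichments, tails assay, and loading history are illustrative values only, not the French fleet data or the COSI6 calculation.

```python
# Cumulative natural-U feed from annual enriched loadings (illustrative values).
def natural_u_feed(product_t, x_product, x_feed=0.00711, x_tails=0.0025):
    """Feed mass from the enrichment mass balance F/P = (x_p - x_t)/(x_f - x_t)."""
    return product_t * (x_product - x_tails) / (x_feed - x_tails)

annual_uox_loadings_t = [1000.0, 1050.0, 1100.0]   # tHM of UOX at 4.0% U-235 (toy)
cumulative = sum(natural_u_feed(p, x_product=0.040) for p in annual_uox_loadings_t)
print(f"cumulative natural U consumption: {cumulative:,.0f} t")
```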