Zhang, Xu; Jin, Weiqi; Li, Jiakun; Wang, Xia; Li, Shuo
2017-04-01
Thermal imaging technology is an effective means of detecting hazardous gas leaks. Much attention has been paid to evaluating the performance of gas leak infrared imaging detection systems because of their many potential applications. The minimum resolvable temperature difference (MRTD) and the minimum detectable temperature difference (MDTD) are commonly used as the main indicators of thermal imaging system performance. This paper establishes a minimum detectable gas concentration (MDGC) performance evaluation model based on the definition and derivation of MDTD. We propose direct and equivalent methods for calculating MDGC based on the MDTD measurement system. We built an experimental MDGC measurement system, and the results indicate that the MDGC model can describe the detection performance of a thermal imaging system for typical gases: the direct calculation, equivalent calculation, and direct measurement results are consistent. The MDGC and minimum resolvable gas concentration (MRGC) models effectively describe the "detection" and "spatial detail resolution" performance, respectively, of thermal imaging systems applied to gas leaks, and together constitute the main performance indicators of gas leak detection systems.
A model for calculating expected performance of the Apollo unified S-band (USB) communication system
NASA Technical Reports Server (NTRS)
Schroeder, N. W.
1971-01-01
A model for calculating the expected performance of the Apollo unified S-band (USB) communication system is presented. The general organization of the Apollo USB is described. The mathematical model is reviewed and the computer program for implementation of the calculations is included.
NASA Technical Reports Server (NTRS)
Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)
2003-01-01
A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.
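The residual-based detection scheme described above can be sketched for the simplest case: a scalar linear model tracked by a fixed-gain observer, with a fault declared when the output error residual exceeds a threshold. This is a minimal illustration, not the patented method; the model coefficients, observer gain, threshold, and injected sensor bias are all made-up values.

```python
# Hedged sketch: residual-based fault detection for a scalar discrete-time
# system x[k+1] = a*x[k] + b*u[k]. A fixed-gain observer tracks the plant;
# a fault is flagged when the output error residual exceeds a threshold.

def detect_fault(measurements, inputs, a=0.9, b=0.5, gain=0.6, threshold=0.2):
    """Return the first sample index where |y - y_hat| > threshold, else None."""
    x_hat = measurements[0]            # initialize estimate at first measurement
    for k, (y, u) in enumerate(zip(measurements, inputs)):
        residual = y - x_hat           # output error residual
        if abs(residual) > threshold:
            return k
        # observer update: model prediction corrected by the residual
        x_hat = a * x_hat + b * u + gain * residual
    return None

# Simulate a healthy plant, then inject a sensor bias fault at sample 30.
x, ys, us = 0.0, [], []
for k in range(60):
    u = 1.0
    x = 0.9 * x + 0.5 * u
    ys.append(x + (0.5 if k >= 30 else 0.0))   # bias fault from k = 30 on
    us.append(u)

print(detect_fault(ys, us))   # flags the first faulty sample
```

In the full method the observer gains are calculated optimally with respect to noise and model uncertainty; here the gain is simply fixed to keep the sketch short.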
An Approximate Ablative Thermal Protection System Sizing Tool for Entry System Design
NASA Technical Reports Server (NTRS)
Dec, John A.; Braun, Robert D.
2005-01-01
A computer tool to perform entry vehicle ablative thermal protection systems sizing has been developed. Two options for calculating the thermal response are incorporated into the tool. One, an industry-standard, high-fidelity ablation and thermal response program was integrated into the tool, making use of simulated trajectory data to calculate its boundary conditions at the ablating surface. Second, an approximate method that uses heat of ablation data to estimate heat shield recession during entry has been coupled to a one-dimensional finite-difference calculation that calculates the in-depth thermal response. The in-depth solution accounts for material decomposition, but does not account for pyrolysis gas energy absorption through the material. Engineering correlations are used to estimate stagnation point convective and radiative heating as a function of time. The sizing tool calculates recovery enthalpy, wall enthalpy, surface pressure, and heat transfer coefficient. Verification of this tool is performed by comparison to past thermal protection system sizings for the Mars Pathfinder and Stardust entry systems and calculations are performed for an Apollo capsule entering the atmosphere at lunar and Mars return speeds.
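The in-depth thermal response described above rests on a one-dimensional finite-difference conduction march. The sketch below shows only that kernel, under made-up material properties; decomposition, pyrolysis-gas energy, and surface recession, which the actual tool treats, are omitted.

```python
# Illustrative sketch: explicit 1-D finite-difference heat conduction of the
# kind an in-depth thermal response calculation performs. All values are
# placeholders, not TPS material properties. Node 0 is the heated surface
# (fixed temperature); the last node is a fixed-temperature back face.

def heat_conduction_1d(t_init, t_surface, alpha, dx, dt, steps):
    """March the temperature profile forward in time (temperatures in K)."""
    r = alpha * dt / dx**2
    assert r <= 0.5, "explicit scheme stability limit violated"
    temp = list(t_init)
    for _ in range(steps):
        temp[0] = t_surface                      # heated-wall boundary
        new = temp[:]
        for i in range(1, len(temp) - 1):        # interior nodes
            new[i] = temp[i] + r * (temp[i+1] - 2*temp[i] + temp[i-1])
        temp = new
    return temp

profile = heat_conduction_1d([300.0] * 11, t_surface=1500.0, alpha=1e-6,
                             dx=1e-3, dt=0.4, steps=200)
```

The stability check on `r` reflects the usual explicit-scheme limit; a production sizing tool would typically use an implicit scheme to avoid it.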
Performance estimates for the Space Station power system Brayton Cycle compressor and turbine
NASA Technical Reports Server (NTRS)
Cummings, Robert L.
1989-01-01
The methods used by the NASA Lewis Research Center for predicting Brayton cycle compressor and turbine performance for different gases and flow rates are described. These methods were developed by NASA Lewis during the early days of Brayton cycle component development, and they can now be applied to the task of predicting the performance of the Closed Brayton Cycle (CBC) Space Station Freedom power system. Computer programs are given for performing these calculations, and data from previous NASA Lewis Brayton compressor and turbine tests are used to make accurate estimates of the compressor and turbine performance for the CBC power system. Results of these calculations are also given. In general, the calculations confirm that the CBC Brayton cycle contractor has made realistic compressor and turbine performance estimates.
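As a rough illustration of the kind of estimate involved, the ideal closed Brayton cycle thermal efficiency depends only on the compressor pressure ratio and the specific-heat ratio of the working gas. The numbers below are illustrative, not the CBC Space Station design point.

```python
# Ideal (lossless) Brayton cycle efficiency: eta = 1 - r^-((gamma-1)/gamma),
# where r is the compressor pressure ratio and gamma the specific-heat ratio.

def brayton_ideal_efficiency(pressure_ratio, gamma):
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)

# Helium-xenon working gases used in CBC designs are monatomic (gamma ~ 5/3).
eta = brayton_ideal_efficiency(2.0, 5.0 / 3.0)
print(round(eta, 3))
```

Real CBC performance estimates of course fold in component efficiencies and pressure losses; this closed form only bounds them from above.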
Received optical power calculations for optical communications link performance analysis
NASA Technical Reports Server (NTRS)
Marshall, W. K.; Burk, B. D.
1986-01-01
The factors affecting optical communication link performance differ substantially from those at microwave frequencies, due to the drastically different technologies, modulation formats, and effects of quantum noise in optical communications. In addition, detailed design control table calculations for optical systems are less well developed than the corresponding microwave techniques, reflecting the relatively less mature state of optical communications development. Described below are detailed calculations of received optical signal and background power in optical communication systems, with emphasis on analytic models for accurately predicting transmitter and receiver system losses.
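A minimal sketch of such a received-power calculation, assuming diffraction-limited aperture gains and keeping only free-space loss; the pointing, atmospheric, and detector loss terms a full design control table carries are omitted, and all numbers are illustrative rather than any mission design.

```python
import math

# Received power = transmit power x efficiencies x aperture gains x
# free-space loss, the backbone of an optical link budget.

def received_power_w(p_tx, eta_tx, d_tx, eta_rx, d_rx, wavelength, range_m):
    g_tx = (math.pi * d_tx / wavelength) ** 2            # diffraction-limited gain
    g_rx = (math.pi * d_rx / wavelength) ** 2
    l_fs = (wavelength / (4.0 * math.pi * range_m)) ** 2  # free-space loss
    return p_tx * eta_tx * g_tx * l_fs * g_rx * eta_rx

# Illustrative: 1 W laser, 10 cm transmit and 1 m receive apertures,
# 1.06 um wavelength, roughly lunar range.
p_rx = received_power_w(p_tx=1.0, eta_tx=0.5, d_tx=0.1, eta_rx=0.5,
                        d_rx=1.0, wavelength=1.06e-6, range_m=4e8)
print(p_rx)   # a few nanowatts
```

The nanowatt-scale answer is exactly why quantum noise, rather than thermal noise, dominates the optical detection statistics mentioned above.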
NASA Technical Reports Server (NTRS)
Mclennan, G. A.
1986-01-01
This report describes, and is a User's Manual for, a computer code (ANL/RBC) which calculates cycle performance for Rankine bottoming cycles extracting heat from a specified source gas stream. The code calculates cycle power and efficiency and the sizes for the heat exchangers, using tabular input of the properties of the cycle working fluid. An option is provided to calculate the costs of system components from user defined input cost functions. These cost functions may be defined in equation form or by numerical tabular data. A variety of functional forms have been included for these functions and they may be combined to create very general cost functions. An optional calculation mode can be used to determine the off-design performance of a system when operated away from the design-point, using the heat exchanger areas calculated for the design-point.
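At its core, the cycle performance calculation is an energy balance between heat recovered from the source gas stream and the power the bottoming cycle extracts from it. The sketch below shows only that balance, with placeholder property values that are not ANL/RBC data.

```python
# Hedged sketch of a Rankine bottoming cycle energy balance:
# cycle power = (heat rate recovered from the gas stream) x cycle efficiency.
# All inputs are illustrative placeholders.

def bottoming_cycle_power(m_dot_gas, cp_gas, t_gas_in, t_gas_out, eta_cycle):
    """Return cycle power in W for a gas stream cooled from t_gas_in to t_gas_out."""
    q_recovered = m_dot_gas * cp_gas * (t_gas_in - t_gas_out)   # W
    return q_recovered * eta_cycle

p = bottoming_cycle_power(m_dot_gas=10.0, cp_gas=1100.0,
                          t_gas_in=700.0, t_gas_out=450.0, eta_cycle=0.2)   # ~550 kW
```

The real code additionally sizes the heat exchangers and uses tabulated working-fluid properties rather than a lumped cycle efficiency.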
Nagaoka, Tomoaki; Watanabe, Soichi
2012-01-01
Electromagnetic simulation with anatomically realistic computational human models using the finite-difference time-domain (FDTD) method has recently been performed in a number of fields in biomedical engineering. To improve the method's calculation speed and enable large-scale computing with a computational human model, we adapted a three-dimensional FDTD code to a multi-GPU cluster environment using the Compute Unified Device Architecture (CUDA) and the Message Passing Interface (MPI). Our multi-GPU cluster system consists of three nodes, with seven GPU boards (NVIDIA Tesla C2070) mounted on each node. We examined the performance of the FDTD calculation in this multi-GPU cluster environment. We confirmed that the FDTD calculation on the multi-GPU cluster is faster than on a single multi-GPU workstation, and we also found that the GPU cluster system calculates faster than a vector supercomputer. In addition, our GPU cluster system allowed us to perform very large FDTD calculations because we were able to use over 100 GB of GPU memory.
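The update scheme being accelerated can be shown in one dimension. This pure-Python, single-CPU sketch of the Yee leapfrog (normalized units, Courant number 0.5, soft Gaussian source) illustrates only the numerical scheme, not the 3-D CUDA/MPI implementation.

```python
import math

# 1-D FDTD in free space: H and E live on a staggered grid and are
# leapfrogged in time; a soft Gaussian source drives one cell.

def fdtd_1d(n_cells, n_steps, src_cell):
    ez = [0.0] * n_cells
    hy = [0.0] * n_cells
    for t in range(n_steps):
        for i in range(n_cells - 1):           # H update (Courant number 0.5)
            hy[i] += 0.5 * (ez[i + 1] - ez[i])
        for i in range(1, n_cells):            # E update
            ez[i] += 0.5 * (hy[i] - hy[i - 1])
        ez[src_cell] += math.exp(-((t - 30) / 10.0) ** 2)   # soft Gaussian source
    return ez

ez = fdtd_1d(n_cells=200, n_steps=100, src_cell=100)
```

On a GPU the two inner loops become massively parallel kernels, one thread per cell, which is what makes the method such a good fit for CUDA clusters.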
Improved Ecosystem Predictions of the California Current System via Accurate Light Calculations
2011-09-30
Mobley, Curtis D. (Sequoia Scientific, Inc., 2700 Richards Road, Suite 107, Bellevue, WA 98005)
The grout/glass performance assessment code system (GPACS) with verification and benchmarking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Piepho, M.G.; Sutherland, W.H.; Rittmann, P.D.
1994-12-01
GPACS is a computer code system for calculating water flow (unsaturated or saturated), solute transport, and human doses due to the slow release of contaminants from a waste form (in particular grout or glass) through an engineered system and through a vadose zone to an aquifer, well, and river. This dual-purpose document is intended to serve as a user's guide and verification/benchmark document for the Grout/Glass Performance Assessment Code system (GPACS). GPACS can be used for low-level-waste (LLW) glass performance assessment and many other applications, including other low-level-waste performance assessments and risk assessments. Based on all the cases presented, GPACS is adequate (verified) for calculating water flow and contaminant transport in unsaturated-zone sediments and for calculating human doses via the groundwater pathway.
Performance calculation and simulation system of high energy laser weapon
NASA Astrophysics Data System (ADS)
Wang, Pei; Liu, Min; Su, Yu; Zhang, Ke
2014-12-01
High energy laser weapons are ready for some of today's most challenging military applications. Based on an analysis of the main tactical/technical indices and the combat process of a high energy laser weapon, a performance calculation and simulation system for high energy laser weapons was established. First, the index decomposition and workflow of the high energy laser weapon were defined. The system is composed of six parts: a typical target, the laser weapon platform, the detection sensor, tracking and pointing control, laser atmosphere propagation, and a damage assessment module. Then, the index calculation modules were designed. Finally, an anti-missile interception simulation was performed. The system can provide a reference and basis for the analysis and evaluation of high energy laser weapon efficiency.
PVWatts Version 1 Technical Reference
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, A. P.
2013-10-01
The NREL PVWatts(TM) calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and makes several hidden assumptions about performance parameters. This technical reference details the individual sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimation.
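In caricature, the chain of sub-models reduces to scaling a nameplate DC rating by irradiance, applying a cell-temperature correction, and applying an overall derate to reach AC output. The sketch below uses that shape with illustrative coefficients; they are placeholders, not NREL's documented defaults.

```python
# Hedged sketch of a PVWatts-style estimate (coefficients are assumptions):
# DC power from plane-of-array irradiance and cell temperature, then an
# overall system derate to AC.

def pv_ac_power(p_dc0_w, poa_w_m2, t_cell_c, gamma_per_c=-0.005, derate=0.77):
    """AC power estimate: nameplate x irradiance ratio x temp correction x derate."""
    p_dc = p_dc0_w * (poa_w_m2 / 1000.0) * (1.0 + gamma_per_c * (t_cell_c - 25.0))
    return p_dc * derate

# Illustrative: a 4 kW array at 800 W/m^2 with 45 C cells.
p = pv_ac_power(p_dc0_w=4000.0, poa_w_m2=800.0, t_cell_c=45.0)
print(p)   # a bit over 2.2 kW
```

The actual calculator documents each sub-model (transposition, temperature, inverter) separately; the point here is only the multiplicative sequence of corrections.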
Unsteady Full Annulus Simulations of a Transonic Axial Compressor Stage
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Hathaway, Michael D.; Chen, Jen-Ping
2009-01-01
Two recent research endeavors in turbomachinery at NASA Glenn Research Center have focused on compression system stall inception and compression system aerothermodynamic performance. Physical experiments and computational research are ongoing in support of these research objectives. TURBO, an unsteady, three-dimensional, Navier-Stokes computational fluid dynamics code commissioned and developed by NASA, has been utilized, enhanced, and validated in support of these endeavors. In the research which follows, TURBO is shown to accurately capture compression system flow range, from choke to stall inception, and also to accurately calculate fundamental aerothermodynamic performance parameters. Rigorous full-annulus calculations are performed to validate TURBO's ability to simulate the unstable, unsteady, chaotic stall inception process; as part of these efforts, full-annulus calculations are also performed at a condition approaching choke to further document TURBO's capabilities to compute aerothermodynamic performance data and support a NASA code assessment effort.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1979-01-01
The computational techniques utilized to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements are described. The characteristics and use of the following computer codes are discussed: (1) NNEP - a very general cycle analysis code that can assemble an arbitrary matrix of fans, turbines, ducts, shafts, etc., into a complete gas turbine engine and compute on- and off-design thermodynamic performance; (2) WATE - a preliminary design procedure for calculating engine weight using the component characteristics determined by NNEP; (3) POD DRG - a table look-up program to calculate wave and friction drag of nacelles; (4) LIFCYC - a computer code developed to calculate life cycle costs of engines based on the output from WATE; and (5) INSTAL - a computer code developed to calculate installation effects, inlet performance, and inlet weight. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
NASA Technical Reports Server (NTRS)
1976-01-01
A candidate hodoscope uses arrays of scintillator fibers, followed by an image intensifier and imaging system such as that proposed for the X-ray shadowgraph. A literature search was performed to ascertain the experience of other workers with hodoscopes using this or similar principles. Calculations were performed to determine the feasibility of candidate systems and some laboratory experiments were performed to attempt to check these numbers.
Uniformity testing: assessment of a centralized web-based uniformity analysis system.
Klempa, Meaghan C
2011-06-01
Uniformity testing is performed daily to ensure adequate camera performance before clinical use. The aim of this study is to assess the reliability of Beth Israel Deaconess Medical Center's locally built, centralized, Web-based uniformity analysis system by examining the differences between manufacturer and Web-based National Electrical Manufacturers Association (NEMA) integral uniformity calculations measured in the useful field of view (FOV) and the central FOV. Manufacturer and Web-based integral uniformity calculations measured in the useful FOV and the central FOV were recorded over a 30-d period for 4 cameras from 3 different manufacturers. These data were then statistically analyzed. The differences between the uniformity calculations were computed, in addition to the means and the SDs of these differences for each head of each camera. There was a correlation between the manufacturer and Web-based integral uniformity calculations in the useful FOV and the central FOV over the 30-d period. The average differences between the manufacturer and Web-based useful FOV calculations ranged from -0.30 to 0.099, with SDs ranging from 0.092 to 0.32. For the central FOV calculations, the average differences ranged from -0.163 to 0.055, with SDs ranging from 0.074 to 0.24. Most of the uniformity calculations computed by this centralized Web-based uniformity analysis system are comparable to the manufacturers' calculations, suggesting that this system is reasonably reliable and effective. This finding is important because centralized Web-based uniformity analysis systems are advantageous in that they test camera performance in the same manner regardless of the manufacturer.
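The integral uniformity figure being compared is computed from the maximum and minimum pixel counts in the analyzed field of view: IU = 100 x (max - min) / (max + min). A minimal sketch follows; the real NEMA analysis first filters and smooths the flood image, which is omitted here, and the pixel counts are invented.

```python
# NEMA-style integral uniformity from a list of flood-image pixel counts.
# Preprocessing (low-count masking, 9-point smoothing) is omitted.

def integral_uniformity(pixel_counts):
    hi, lo = max(pixel_counts), min(pixel_counts)
    return 100.0 * (hi - lo) / (hi + lo)

flood = [980, 1000, 1020, 995, 1005]          # illustrative pixel counts
print(round(integral_uniformity(flood), 2))   # 2.0
```

Because both the manufacturer software and the Web-based system implement this same definition, the small residual differences reported above come mostly from preprocessing choices rather than the formula itself.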
GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT
NASA Astrophysics Data System (ADS)
Strubbe, David A.
GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Most straightforwardly, plane-wave DFT can be used as a starting point, but real-space DFT is also an attractive starting point: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is well suited physically for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.
Considerations for calculating arterial system performance measures in Virginia.
DOT National Transportation Integrated Search
2017-02-01
The Moving Ahead for Progress in the 21st Century Act (MAP-21) mandates that state departments of transportation monitor and report performance measures in several areas. System performance measures on the National Highway System (NHS) are part of th...
Upper bound on the efficiency of certain nonimaging concentrators in the physical-optics model
NASA Astrophysics Data System (ADS)
Welford, W. T.; Winston, R.
1982-09-01
Upper bounds on the performance of nonimaging concentrators are obtained within the framework of scalar-wave theory by using a simple approach to avoid complex calculations on multiple phase fronts. The approach consists in treating a theoretically perfect image-forming device and postulating that no non-image-forming concentrator can have a better performance than such an ideal image-forming system. The performance of such a system can be calculated according to wave theory, and this will provide, in accordance with the postulate, upper bounds on the performance of nonimaging systems. The method is demonstrated for a two-dimensional compound parabolic concentrator.
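The postulate above appeals to the standard thermodynamic (etendue) limit: a two-dimensional (trough) concentrator with acceptance half-angle theta cannot exceed C = n / sin(theta), and an ideal three-dimensional concentrator is bounded by the square of that. A one-line computation makes the bound concrete.

```python
import math

# Maximum geometric concentration for a 2-D (trough) concentrator with
# acceptance half-angle theta, exit medium of refractive index n.

def max_concentration_2d(theta_deg, n=1.0):
    return n / math.sin(math.radians(theta_deg))

print(round(max_concentration_2d(30.0), 3))   # 2.0
```

The ideal compound parabolic concentrator analyzed in the paper reaches this geometric limit in the ray-optics picture; the paper's contribution is bounding how closely wave optics lets a real device approach it.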
Modelling and experimental performance analysis of solar-assisted ground source heat pump system
NASA Astrophysics Data System (ADS)
Esen, Hikmet; Esen, Mehmet; Ozsolak, Onur
2017-01-01
In this study, a slinky-type ground heat exchanger (GHE) (the slinky-loop configuration is also known as a coiled loop or spiral loop of flexible plastic pipe) was established for a solar-assisted ground source heat pump system. System modelling is performed with the data obtained from the experiment. An artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS) are used in the modelling. The slinky pipes have been laid horizontally and vertically in a ditch. The system coefficient of performance (COPsys) and the heat pump coefficient of performance (COPhp) were calculated as 2.88 and 3.55, respectively, for the horizontal slinky-type GHE, while COPsys and COPhp were calculated as 2.34 and 2.91, respectively, for the vertical slinky-type GHE. The obtained results showed that ANFIS is more successful than ANN for forecasting the performance of a solar-assisted ground source heat pump system.
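The two reported coefficients follow the usual definitions: COPhp divides the heat delivered by the compressor work alone, while COPsys divides it by the total electrical input including pumps and fans. The heat and work values below are assumptions chosen only to reproduce the horizontal-GHE figures quoted above, not measurements from the paper.

```python
# COP definitions used in heat pump reporting. The inputs are illustrative
# values picked to land near the horizontal slinky-GHE results (3.55, 2.88).

def cop(q_delivered_w, w_input_w):
    return q_delivered_w / w_input_w

q, w_comp, w_aux = 7100.0, 2000.0, 466.0   # heat out, compressor work, pump/fan work
cop_hp = cop(q, w_comp)                     # compressor only -> ~3.55
cop_sys = cop(q, w_comp + w_aux)            # all electrical input -> ~2.88
```

The gap between the two numbers is thus a direct measure of the parasitic circulation-pump and fan loads.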
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
2011-01-01
Analysis of the material protection, control, and accountability (MPC&A) system is necessary to understand the limits and vulnerabilities of the system to internal threats. A self-appraisal helps the facility be prepared to respond to internal threats and reduce the risk of theft or diversion of nuclear material. The material control and accountability (MC&A) system effectiveness tool (MSET) fault tree was developed to depict the failure of the MPC&A system as a result of poor practices and random failures in the MC&A system. It can also be employed as a basis for assessing deliberate threats against a facility. MSET uses fault tree analysis, which is a top-down approach to examining system failure. The analysis starts with identifying a potential undesirable event called a 'top event' and then determining the ways it can occur (e.g., 'Fail To Maintain Nuclear Materials Under The Purview Of The MC&A System'). The analysis proceeds by determining how the top event can be caused by individual or combined lower level faults or failures. These faults, which are the causes of the top event, are 'connected' through logic gates. The MSET model uses AND-gates and OR-gates and propagates the effect of event failure using Boolean algebra. To enable the fault tree analysis calculations, the basic events in the fault tree are populated with probability risk values derived by conversion of questionnaire data to numeric values. The basic events are treated as independent variables. This assumption affects the Boolean algebraic calculations used to calculate results. All the necessary calculations are built into the fault tree codes, but it is often useful to estimate the probabilities manually as a check on code functioning. The probability of failure of a given basic event is the probability that the basic event primary question fails to meet the performance metric for that question.
The failure probability is related to how well the facility performs the task identified in that basic event over time (not just one performance or exercise). Fault tree calculations provide a failure probability for the top event in the fault tree. The basic fault tree calculations establish a baseline relative risk value for the system. This probability depicts relative risk, not absolute risk. Subsequent calculations are made to evaluate the change in relative risk that would occur if system performance is improved or degraded. During the development effort of MSET, the fault tree analysis program used was SAPHIRE. SAPHIRE is an acronym for 'Systems Analysis Programs for Hands-on Integrated Reliability Evaluations.' Version 1 of the SAPHIRE code was sponsored by the Nuclear Regulatory Commission in 1987 as an innovative way to draw, edit, and analyze graphical fault trees primarily for safe operation of nuclear power reactors. When the fault tree calculations are performed, the fault tree analysis program will produce several reports that can be used to analyze the MPC&A system. SAPHIRE produces reports showing risk importance factors for all basic events in the operational MC&A system. The risk importance information is used to examine the potential impacts when performance of certain basic events increases or decreases. The initial results produced by the SAPHIRE program are considered relative risk values. None of the results can be interpreted as absolute risk values since the basic event probability values represent estimates of risk associated with the performance of MPC&A tasks throughout the material balance area (MBA). The risk reduction ratio (RRR) for a basic event represents the decrease in total system risk that would result from improvement of that one event to a perfect performance level. Improvement of the basic event with the greatest RRR value produces a greater decrease in total system risk than improvement of any other basic event.
Basic events with the greatest potential for system risk reduction are assigned performance improvement values, and new fault tree calculations show the improvement in total system risk. The operational impact or cost-effectiveness from implementing the performance improvements can then be evaluated. The improvements being evaluated can be system performance improvements, or they can be potential, or actual, upgrades to the system. The risk increase ratio (RIR) for a basic event represents the increase in total system risk that would result from failure of that one event. Failure of the basic event with the greatest RIR value produces a greater increase in total system risk than failure of any other basic event. Basic events with the greatest potential for system risk increase are assigned failure performance values, and new fault tree calculations show the increase in total system risk. This evaluation shows the importance of preventing performance degradation of the basic events. SAPHIRE identifies combinations of basic events where concurrent failure of the events results in failure of the top event.
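The gate arithmetic described, independent basic events propagated through AND- and OR-gates by Boolean algebra, can be sketched directly. The event probabilities below are illustrative, not MSET questionnaire values, and the tiny tree stands in for a real MPC&A fault tree.

```python
# Fault tree propagation for independent basic events:
# AND-gate: product of probabilities; OR-gate: 1 - prod(1 - p).

def and_gate(*probs):
    out = 1.0
    for p in probs:
        out *= p
    return out

def or_gate(*probs):
    out = 1.0
    for p in probs:
        out *= (1.0 - p)
    return 1.0 - out

# Toy tree: top event fails if (A AND B) fail together, or C fails alone.
p_top = or_gate(and_gate(0.1, 0.2), 0.05)

# Risk reduction ratio for event C: baseline risk over the risk obtained
# if C were improved to perfect performance (p = 0).
rrr_c = p_top / or_gate(and_gate(0.1, 0.2), 0.0)
```

Making C perfect drops the top-event probability from 0.069 to 0.02, so C's RRR of about 3.45 identifies it as the most valuable event to improve, exactly the kind of ranking the SAPHIRE reports provide.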
NASA Astrophysics Data System (ADS)
Kim, Chul-Ho; Lee, Kee-Man; Lee, Sang-Heon
Power train system design is one of the key R&D areas in the development of a new automobile, because the system design yields an optimally sized engine and a power transmission adapted to the design requirements of the new vehicle. For electric vehicle design in particular, a very reliable power train design algorithm is required for energy efficiency. In this study, an analytical simulation algorithm is developed to estimate the driving performance of a designed power train system of an electric vehicle. The principle of the simulation algorithm is conservation of energy, together with several analytical and experimental inputs such as rolling resistance, aerodynamic drag, and the mechanical efficiency of the power transmission. From the analytical calculation, the running resistance of a designed vehicle is obtained as the operating conditions of the vehicle, such as the inclination angle of the road and the vehicle speed, change. The tractive performance of the model vehicle with a given power train system is also calculated at each gear ratio of the transmission. By comparing these two calculation results, running resistance and tractive performance, the driving performance of a designed electric vehicle is estimated, and it can be used to evaluate the adaptability of the designed power train system to the vehicle.
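The comparison described, running resistance versus tractive force at a given gear, can be sketched with the standard road-load terms: rolling resistance, aerodynamic drag, and grade resistance. All coefficients and vehicle parameters below are illustrative assumptions, not the paper's model vehicle.

```python
import math

# Road-load balance for a vehicle on a grade (all parameters illustrative):
# resistance = rolling + aerodynamic drag + grade; tractive force comes from
# motor torque through the gear ratio, drivetrain efficiency, and wheel radius.

def running_resistance_n(mass_kg, v_mps, grade_rad, c_rr=0.012,
                         rho=1.2, cd=0.3, area_m2=2.2):
    g = 9.81
    f_roll = c_rr * mass_kg * g * math.cos(grade_rad)        # rolling resistance
    f_aero = 0.5 * rho * cd * area_m2 * v_mps ** 2           # aerodynamic drag
    f_grade = mass_kg * g * math.sin(grade_rad)              # grade resistance
    return f_roll + f_aero + f_grade

def tractive_force_n(motor_torque_nm, gear_ratio, wheel_radius_m, eta=0.9):
    return motor_torque_nm * gear_ratio * eta / wheel_radius_m

# The vehicle holds speed when tractive force >= running resistance.
resistance = running_resistance_n(1500.0, 25.0, math.radians(3.0))
tractive = tractive_force_n(150.0, 8.0, 0.3)
print(tractive >= resistance)
```

Sweeping speed and grade in the resistance term, and gear ratio in the tractive term, reproduces the two result families the algorithm compares.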
NASA Astrophysics Data System (ADS)
Hübener, H.; Pérez-Osorio, M. A.; Ordejón, P.; Giustino, F.
2012-09-01
We present a systematic study of the performance of numerical pseudo-atomic orbital basis sets in the calculation of dielectric matrices of extended systems using the self-consistent Sternheimer approach of [F. Giustino et al., Phys. Rev. B 81, 115105 (2010)]. In order to cover a range of systems, from more insulating to more metallic character, we discuss results for the three semiconductors diamond, silicon, and germanium. Dielectric matrices of silicon and diamond calculated using our method fall within 1% of reference planewaves calculations, demonstrating that this method is promising. We find that polarization orbitals are critical for achieving good agreement with planewaves calculations, and that only a few additional ζ's are required for obtaining converged results, provided the split norm is properly optimized. Our present work establishes the validity of local orbital basis sets and the self-consistent Sternheimer approach for the calculation of dielectric matrices in extended systems, and prepares the ground for future studies of electronic excitations using these methods.
NASA Technical Reports Server (NTRS)
1973-01-01
Calculations, curves, and substantiating data which support the engine design characteristics of the RL-10 engines are presented. A description of the RL-10 ignition system is provided. The performance calculations of the RL-10 derivative engines and the performance results obtained are reported. The computer simulations used to establish the control system requirements and to define the engine transient characteristics are included.
Evaluating Performances of Solar-Energy Systems
NASA Technical Reports Server (NTRS)
Jaffe, L. D.
1987-01-01
The CONC11 computer program calculates the performance of dish-type solar thermal collectors and power systems. A solar thermal power system consists of one or more collectors, power-conversion subsystems, and power-processing subsystems. CONC11 is intended to aid the system designer in comparing the performance of various design alternatives. Written in Athena FORTRAN and Assembler.
Performance Analysis of Stirling Engine-Driven Vapor Compression Heat Pump System
NASA Astrophysics Data System (ADS)
Kagawa, Noboru
Stirling engine-driven vapor compression systems have many unique advantages, including higher thermal efficiencies, preferable exhaust gas characteristics, multi-fuel usage, and low noise and vibration, which can play an important role in alleviating environmental and energy problems. This paper introduces a design method for such systems based on reliable mathematical methods for the Stirling and Rankine cycles using reliable thermophysical information for refrigerants. The model deals with a combination of a kinematic Stirling engine and a scroll compressor. Some experimental coefficients are used to formulate the model. The obtained results show the performance behavior in detail. The measured performance of the actual system coincides with the calculated results. Furthermore, the calculated results clarify the performance when alternative refrigerants are substituted for R-22.
Powell, Sarah R; Fuchs, Lynn S; Cirino, Paul T; Fuchs, Douglas; Compton, Donald L; Changas, Paul C
2015-07-01
The focus of the present study was enhancing word-problem and calculation achievement in ways that support pre-algebraic thinking among 2nd-grade students at risk for mathematics difficulty. Intervention relied on a multi-tier support system (i.e., responsiveness-to-intervention or RTI) in which at-risk students participate in general classroom instruction and receive supplementary small-group tutoring. Participants were 265 students in 110 classrooms in 25 schools. Teachers were randomly assigned to 3 conditions: calculation RTI, word-problem RTI, and business-as-usual control. Intervention lasted 17 weeks. Multilevel modeling indicated that calculation RTI improved calculation but not word-problem outcomes; word-problem RTI enhanced proximal word-problem outcomes as well as performance on some calculation outcomes; and word-problem RTI provided a stronger route than calculation RTI to pre-algebraic knowledge.
SPREADSHEET BASED SCALING CALCULATIONS AND MEMBRANE PERFORMANCE
Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public domain software available for calculating reverse osmosis and nanofiltration system performance. The Total...
ITS data quality control and the calculation of mobility performance measures
DOT National Transportation Integrated Search
2000-09-01
This report describes the results of research on the use of intelligent transportation system (ITS) data in calculating mobility performance measures for ITS operations. The report also describes a data quality control process developed for the Trans...
NASA Technical Reports Server (NTRS)
Svehla, R. A.; Mcbride, B. J.
1973-01-01
A FORTRAN IV computer program for the calculation of the thermodynamic and transport properties of complex mixtures is described. The program has the capability of performing calculations such as: (1) chemical equilibrium for assigned thermodynamic states, (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. Condensed species, as well as gaseous species, are considered in the thermodynamic calculation; but only the gaseous species are considered in the transport calculations.
Validation and Improvement of Reliability Methods for Air Force Building Systems
focusing primarily on HVAC systems. This research used contingency analysis to assess the performance of each model for HVAC systems at six Air Force... probabilistic model produced inflated reliability calculations for HVAC systems. In light of these findings, this research employed a stochastic method, a... Nonhomogeneous Poisson Process (NHPP), in an attempt to produce accurate HVAC system reliability calculations. This effort ultimately concluded that
Neural computing thermal comfort index PMV for the indoor environment intelligent control system
NASA Astrophysics Data System (ADS)
Liu, Chang; Chen, Yifei
2013-03-01
Providing indoor thermal comfort and saving energy are the two main goals of an indoor environmental control system. This paper presents an intelligent comfort control system that combines intelligent control with a minimum-power control strategy for the indoor environment. To realize comfort control, the predicted mean vote (PMV) is used as the control goal, and with corrected PMV formulas it is optimized to improve the indoor comfort level by considering six comfort-related variables. In addition, an RBF neural network based on a genetic algorithm is designed to calculate PMV, achieving better performance and better handling the nonlinear character of the PMV calculation. Formulas are given for calculating the expected output values from the input samples, and the RBF network model is trained on the input samples and expected outputs. Simulation results show that the intelligent calculation method is valid; it offers high precision, fast dynamic response, and good system performance, and can be used in practice within the required calculation error.
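The PMV calculation is nonlinear in its six inputs, which is why an RBF network is used to approximate it. A minimal illustrative sketch of evaluating a Gaussian RBF network follows; the centers, widths, and weights below are invented toy values, not the paper's trained network (which was also tuned with a genetic algorithm):

```python
import numpy as np

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """Evaluate a Gaussian RBF network:
    y = bias + sum_j w_j * exp(-||x - c_j||^2 / (2 * s_j^2))."""
    d2 = np.sum((centers - np.asarray(x, dtype=float)) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return float(weights @ phi + bias)

# Toy two-unit network over the six PMV inputs (air temperature,
# mean radiant temperature, air speed, humidity, metabolic rate, clothing)
centers = np.array([[22.0, 22.0, 0.10, 50.0, 1.2, 0.5],
                    [28.0, 28.0, 0.20, 60.0, 1.2, 0.5]])
widths = np.array([4.0, 4.0])
weights = np.array([-0.5, 1.0])
pmv = rbf_predict([25.0, 25.0, 0.15, 55.0, 1.2, 0.5], centers, widths, weights)
```

In practice the weights would be fitted to PMV values computed from Fanger's equations over a sample grid; only the evaluation step is shown here.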
NASA Technical Reports Server (NTRS)
Jaffe, L. D.
1984-01-01
The CONC/11 computer program designed for calculating the performance of dish-type solar thermal collectors and power systems is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. The CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for this program.
Comparative PV LCOE calculator | Photovoltaic Research | NREL
Use the Comparative Photovoltaic Levelized Cost of Energy Calculator (Comparative PV LCOE Calculator) to calculate the levelized cost of energy (LCOE) for photovoltaic (PV) systems, to determine from a technology's cost effect on LCOE whether the proposed technology is cost-effective, and to perform trade-off analysis.
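LCOE itself is a discounted ratio: lifetime cost divided by lifetime energy production. A minimal sketch, assuming a single up-front capital cost, constant annual O&M, and optional annual output degradation (the input values below are illustrative, not NREL defaults):

```python
def lcoe(capital_cost, annual_om, annual_energy_kwh,
         discount_rate, years, degradation=0.0):
    """Levelized cost of energy ($/kWh): discounted lifetime cost
    divided by discounted lifetime energy production."""
    cost = float(capital_cost)
    energy = 0.0
    for t in range(1, years + 1):
        disc = (1.0 + discount_rate) ** t
        cost += annual_om / disc
        energy += annual_energy_kwh * (1.0 - degradation) ** (t - 1) / disc
    return cost / energy

# Hypothetical 5 kW system: $15,000 installed, $100/yr O&M,
# 7,500 kWh/yr initial output, 5% discount rate, 25-year life
cost_per_kwh = lcoe(15000, 100, 7500, 0.05, 25, degradation=0.005)
```

A comparative calculator evaluates this ratio for two technology variants and compares the results, which is why only cost and energy inputs are needed.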
Performance evaluation capabilities for the design of physical systems
NASA Technical Reports Server (NTRS)
Pilkey, W. D.; Wang, B. P.
1972-01-01
The results are presented of a study aimed at developing and formulating a capability for the limiting performance of large steady state systems. The accomplishments reported include: (1) development of a theory of limiting performance of large systems subject to steady state inputs; (2) application and modification of PERFORM, the computational capability for the limiting performance of systems with transient inputs; and (3) demonstration that use of an inherently smooth control force for a limiting performance calculation improves the system identification phase of the design process for physical systems subjected to transient loading.
RELAV - RELIABILITY/AVAILABILITY ANALYSIS PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
RELAV (Reliability/Availability Analysis Program) is a comprehensive analytical tool to determine the reliability or availability of any general system which can be modeled as embedded k-out-of-n groups of items (components) and/or subgroups. Both ground and flight systems at NASA's Jet Propulsion Laboratory have utilized this program. RELAV can assess current system performance during the later testing phases of a system design, as well as model candidate designs/architectures or validate and form predictions during the early phases of a design. Systems are commonly modeled as System Block Diagrams (SBDs). RELAV calculates the success probability of each group of items and/or subgroups within the system assuming k-out-of-n operating rules apply for each group. The program operates on a folding basis; i.e. it works its way towards the system level from the most embedded level by folding related groups into single components. The entire folding process involves probabilities; therefore, availability problems are performed in terms of the probability of success, and reliability problems are performed for specific mission lengths. An enhanced cumulative binomial algorithm is used for groups where all probabilities are equal, while a fast algorithm based upon "Computing k-out-of-n System Reliability", Barlow & Heidtmann, IEEE TRANSACTIONS ON RELIABILITY, October 1984, is used for groups with unequal probabilities. Inputs to the program include a description of the system and any one of the following: 1) availabilities of the items, 2) mean time between failures and mean time to repairs for the items from which availabilities are calculated, 3) mean time between failures and mission length(s) from which reliabilities are calculated, or 4) failure rates and mission length(s) from which reliabilities are calculated. The results are probabilities of success of each group and the system in the given configuration. 
RELAV assumes exponential failure distributions for reliability calculations and infinite repair resources for availability calculations. No more than 967 items or groups can be modeled by RELAV. If larger problems can be broken into subsystems of 967 items or less, the subsystem results can be used as item inputs to a system problem. The calculated availabilities are steady-state values. Group results are presented in the order in which they were calculated (from the most embedded level out to the system level). This provides a good mechanism to perform trade studies. Starting from the system result and working backwards, the granularity gets finer; therefore, system elements that contribute most to system degradation are detected quickly. RELAV is a C-language program originally developed under the UNIX operating system on a MASSCOMP MC500 computer. It has been modified, as necessary, and ported to an IBM PC compatible with a math coprocessor. The current version of the program runs in the DOS environment and requires a Turbo C vers. 2.0 compiler. RELAV has a memory requirement of 103 KB and was developed in 1989. RELAV is a copyrighted work with all copyright vested in NASA.
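For a group of identical, independent items, the k-out-of-n folding rule RELAV applies reduces to a cumulative binomial tail. A minimal sketch of that equal-probability case (RELAV's enhanced cumulative binomial algorithm and the Barlow & Heidtmann algorithm for unequal probabilities are not reproduced here):

```python
from math import comb

def k_out_of_n_reliability(k: int, n: int, p: float) -> float:
    """Probability that at least k of n identical independent items
    succeed, each with success probability p (cumulative binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Fold a 2-out-of-3 group of items with p = 0.9 into one equivalent
# component, as RELAV does when working from the most embedded level out
group_p = k_out_of_n_reliability(2, 3, 0.9)   # 3(0.81)(0.1) + 0.729 = 0.972
```

The folded value `group_p` then becomes the success probability of a single component at the next level up, which is the essence of the program's folding process.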
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dobos, A. P.
2014-09-01
The NREL PVWatts calculator is a web application developed by the National Renewable Energy Laboratory (NREL) that estimates the electricity production of a grid-connected photovoltaic system based on a few simple inputs. PVWatts combines a number of sub-models to predict overall system performance, and includes several built-in parameters that are hidden from the user. This technical reference describes the sub-models, documents assumptions and hidden parameters, and explains the sequence of calculations that yield the final system performance estimate. This reference is applicable to the significantly revised version of PVWatts released by NREL in 2014.
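The core of the calculation sequence can be caricatured in a few lines: scale the nameplate DC rating by plane-of-array irradiance, apply a linear cell-temperature coefficient, then derate by aggregate losses and inverter efficiency. A simplified sketch; the default coefficient and loss values below are common assumptions, not necessarily the exact hidden parameters the tool uses:

```python
def pvwatts_power(pdc0_kw, poa_wm2, cell_temp_c, gamma=-0.0047,
                  losses=0.14, inv_eff=0.96, t_ref=25.0):
    """Simplified PVWatts-style AC power (kW): nameplate DC rating scaled
    by plane-of-array irradiance and a linear temperature coefficient,
    then derated by aggregate system losses and inverter efficiency."""
    pdc = pdc0_kw * (poa_wm2 / 1000.0) * (1.0 + gamma * (cell_temp_c - t_ref))
    return pdc * (1.0 - losses) * inv_eff

# 4 kW array, 800 W/m^2 plane-of-array irradiance, 45 degC cell temperature
p_ac = pvwatts_power(4.0, 800.0, 45.0)
```

The real tool chains this hourly over a weather file and sums to annual energy; this sketch shows a single time step.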
Evaluation of audit-based performance measures for dental care plans.
Bader, J D; Shugars, D A; White, B A; Rindal, D B
1999-01-01
Although a set of clinical performance measures, i.e., a report card for dental plans, has been designed for use with administrative data, most plans do not have administrative data systems containing the data needed to calculate the measures. Therefore, we evaluated the use of a set of proxy clinical performance measures calculated from data obtained through chart audits. Chart audits were conducted in seven dental programs--three public health clinics, two dental health maintenance organizations (DHMO), and two preferred provider organizations (PPO). In all instances audits were completed by clinical staff who had been trained using telephone consultation and a self-instructional audit manual. The performance measures were calculated for the seven programs, audit reliability was assessed in four programs, and for one program the audit-based proxy measures were compared to the measures calculated using administrative data. The audit-based measures were sensitive to known differences in program performance. The chart audit procedures yielded reasonably reliable data. However, missing data in patient charts rendered the calculation of some measures problematic--namely, caries and periodontal disease assessment and experience. Agreement between administrative and audit-based measures was good for most, but not all, measures in one program. The audit-based proxy measures represent a complex but feasible approach to the calculation of performance measures for those programs lacking robust administrative data systems. However, until charts contain more complete diagnostic information (i.e., periodontal charting and diagnostic codes or reason-for-treatment codes), accurate determination of these aspects of clinical performance will be difficult.
Initial Performance of the Keck AO Wavefront Controller System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johansson, E M; Acton, D S; An, J R
2001-03-01
The wavefront controller for the Keck Observatory AO system consists of two separate real-time control loops: a tip-tilt control loop to remove tilt from the incoming wavefront, and a deformable mirror control loop to remove higher-order aberrations. In this paper, we describe these control loops and analyze their performance using diagnostic data acquired during the integration and testing of the AO system on the telescope. Disturbance rejection curves for the controllers are calculated from the experimental data and compared to theory. The residual wavefront errors due to control loop bandwidth are also calculated from the data, and possible improvements to the controller performance are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qi, Junjian; Wang, Jianhui; Liu, Hui
In this paper, nonlinear model reduction for power systems is performed by balancing the empirical controllability and observability covariances calculated around the operating region. Unlike existing model reduction methods, the external system does not need to be linearized but is dealt with directly as a nonlinear system. A transformation is found that balances the controllability and observability covariances in order to determine which states have the greatest contribution to the input-output behavior. The original system model is then reduced by Galerkin projection based on this transformation. The proposed method is tested and validated on a system comprised of a 16-machine 68-bus system and an IEEE 50-machine 145-bus system. The results show that the proposed model reduction greatly improves calculation efficiency; at the same time, the obtained state trajectories are close to those from directly simulating the whole system or partitioning the system without performing reduction. Compared with the balanced truncation method based on a linearized model, the proposed nonlinear model reduction method guarantees higher accuracy and similar calculation efficiency. It is shown that the proposed method is not sensitive to the choice of the matrices for calculating the empirical covariances.
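As a point of reference for the empirical-covariance balancing described above, the classical linear balanced truncation it generalizes can be sketched with Lyapunov-equation Gramians (square-root method). The 4-state diagonal system below is a toy example, not the benchmark power systems:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce dx/dt = Ax + Bu, y = Cx to order r by balancing the
    controllability and observability Gramians (square-root method)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # A Wc + Wc A^T = -B B^T
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A^T Wo + Wo A = -C^T C
    Zc = cholesky(Wc, lower=True)
    Zo = cholesky(Wo, lower=True)
    U, s, Vt = svd(Zo.T @ Zc)                 # s: Hankel singular values
    T = Zc @ Vt.T @ np.diag(s ** -0.5)        # balancing transformation
    Ti = np.diag(s ** -0.5) @ U.T @ Zo.T      # its inverse
    return (Ti @ A @ T)[:r, :r], (Ti @ B)[:r], (C @ T)[:, :r], s

# Toy stable 4-state SISO system, truncated to 2 states
A = np.diag([-1.0, -2.0, -3.0, -4.0])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)
```

The empirical-covariance approach replaces the Lyapunov-equation Gramians with covariances estimated from simulated nonlinear trajectories around the operating point, but the balancing and projection steps follow the same pattern.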
Design and performance evaluation of the imaging payload for a remote sensing satellite
NASA Astrophysics Data System (ADS)
Abolghasemi, Mojtaba; Abbasi-Moghadam, Dariush
2012-11-01
In this paper, an analysis method and corresponding analytical tools for the design of the experimental imaging payload (IMPL) of a remote sensing satellite (SINA-1) are presented. We begin with top-level customer system performance requirements and constraints, derive the critical system and component parameters, and then analyze imaging payload performance until reaching a preliminary design that meets customer requirements. We consider the system parameters and components composing the imaging payload's image chain, which includes aperture, focal length, field of view, image plane dimensions, pixel dimensions, detection quantum efficiency, and optical filter requirements. The performance analysis is accomplished by calculating the imaging payload's SNR (signal-to-noise ratio) and imaging resolution. The noise components include photon noise due to the signal scene and atmospheric background, cold shield, out-of-band optical filter leakage, and electronic noise. System resolution is simulated through cascaded modulation transfer functions (MTFs) and includes effects due to optics, image sampling, and system motion. Calculation results for the SINA-1 satellite are also presented.
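A cascaded-MTF resolution model multiplies the component MTFs at each spatial frequency. A minimal sketch with diffraction-limited optics, a square detector footprint, and linear motion smear; the functional forms are generic textbook choices and the numbers are illustrative, not SINA-1 parameters:

```python
import numpy as np

def system_mtf(f_cyc_per_mm, optics_cutoff, pixel_pitch_mm, smear_mm=0.0):
    """Cascaded system MTF = MTF_optics * MTF_detector * MTF_motion:
    diffraction-limited circular-aperture optics, a square detector
    footprint (sinc), and linear motion smear (sinc)."""
    fr = np.clip(f_cyc_per_mm / optics_cutoff, 0.0, 1.0)
    mtf_optics = (2.0 / np.pi) * (np.arccos(fr) - fr * np.sqrt(1.0 - fr**2))
    mtf_detector = np.abs(np.sinc(f_cyc_per_mm * pixel_pitch_mm))
    mtf_motion = np.abs(np.sinc(f_cyc_per_mm * smear_mm)) if smear_mm else 1.0
    return mtf_optics * mtf_detector * mtf_motion

f = np.linspace(0.0, 50.0, 6)   # spatial frequency, cycles/mm
m = system_mtf(f, optics_cutoff=100.0, pixel_pitch_mm=0.010, smear_mm=0.005)
```

Each additional degradation in the image chain multiplies in as another factor, which is why the cascaded product drops quickly with spatial frequency.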
El Shahat, Khaled; El Saeid, Aziza; Attalla, Ehab; Yassin, Adel
2014-01-01
To achieve tumor control in radiotherapy, a dose distribution is planned that has a good chance of sterilizing all cancer cells without causing unacceptable normal-tissue complications. The aim of the present study was to achieve an accurate calculation of dose for small field dimensions by evaluating the accuracy of the planning system calculation and comparing it with real dose measurements for the same small field dimensions using different detectors. Practical work was performed in two steps: (i) determination of the physical factors required for dose estimation, measured by three ionization chambers and calculated by the treatment planning system (TPS) based on the latest technical report series (IAEA TRS-398); and (ii) comparison of the calculated and measured data. Our analysis of small fields irradiated by photon beams showed that the TPS data matched the data obtained from the ionization chambers. Radiographic films were used as an additional detector and also matched the TPS calculation. It can be concluded that deviations for the studied small field dimensions averaged 6% and 4% for 6 MV and 15 MV, respectively. Radiographic film measurements showed a variation in results within ±2% of the TPS calculation.
SmartWay 2.0 Partner Assessment Tools and Data Management System
A set of calculator tools used by SmartWay partners to assess their environmental performance, including calculation of their annual emissions of CO2, NOx, and PM, and a data system to manage the information. Different tools are available for carrier partners in the four main tr...
Sub-barrier fusion and transfers in the 40Ca + 58,64Ni systems
NASA Astrophysics Data System (ADS)
Bourgin, D.; Courtin, S.; Haas, F.; Goasduff, A.; Stefanini, A. M.; Montagnoli, G.; Montanari, D.; Corradi, L.; Huiming, J.; Scarlassara, F.; Fioretto, E.; Simenel, C.; Rowley, N.; Szilner, S.; Mijatović, T.
2016-05-01
Fusion cross sections have been measured in the 40Ca + 58Ni and 40Ca + 64Ni systems at energies around and below the Coulomb barrier. The 40Ca beam was delivered by the XTU Tandem accelerator of the Laboratori Nazionali di Legnaro and evaporation residues were measured at very forward angles with the LNL electrostatic beam deflector. Coupled-channels calculations were performed which highlight possible strong effects of neutron transfers on the fusion below the barrier in the 40Ca + 64Ni system. Microscopic time-dependent Hartree-Fock calculations have also been performed for both systems. Preliminary results are shown.
Hybrid Geothermal Heat Pumps for Cooling Telecommunications Data Centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beckers, Koenraad J; Zurmuhl, David P.; Lukawski, Maciej Z.
The technical and economic performance of geothermal heat pump (GHP) systems supplying year-round cooling to representative small data centers with cooling loads less than 500 kWth was analyzed and compared to air-source heat pumps (ASHPs). A numerical model was developed in TRNSYS software to simulate the operation of air-source and geothermal heat pumps with and without supplementary air-cooled heat exchangers (dry coolers, DCs). The model was validated using data measured at an experimental geothermal system installed in Ithaca, NY, USA. The coefficient of performance (COP) and cooling capacity of the GHPs were calculated over a 20-year lifetime and compared to the performance of ASHPs. The total cost of ownership (TCO) of each of the cooling systems was calculated to assess its economic performance. Both the length of the geothermal borehole heat exchangers (BHEs) and the dry cooler temperature set point were optimized to minimize the TCO of the geothermal systems. Lastly, a preliminary analysis of the performance of geothermal heat pumps for cooling-dominated systems was performed for other locations, including Dallas, TX, Sacramento, CA, and Minneapolis, MN.
49 CFR 173.319 - Cryogenic liquids in tank cars.
Code of Federal Regulations, 2014 CFR
2014-10-01
... increase more than 25 microns during the 24-hour period; or (ii) Calculated heat transfer rate test. The insulation system must be performance tested as prescribed in § 179.400-4 of this subchapter. When the calculated heat transfer rate test is performed, the absolute pressure in the annular space of the loaded...
Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions.
Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong
2016-11-11
Although the GW approximation is recognized as one of the most accurate theories for predicting materials' excited-state properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method with results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials.
NASA Astrophysics Data System (ADS)
Sharma, R.; McCalley, J. D.
2016-12-01
Geomagnetic disturbances (GMDs) cause geomagnetically induced currents (GICs) to flow in the power transmission system, which may cause large-scale power outages and damage to power system equipment. To plan a defense against GMDs, it is necessary to accurately estimate the flow of GICs in the transmission system. The current calculation per NERC standards uses 1-D earth conductivity models that do not reflect the coupling between the geoelectric and geomagnetic field components in the same direction. For accurate estimation of GICs, it is important to have spatially granular 3-D earth conductivity tensors, an accurate DC network model of the transmission system, and precisely estimated or measured input in the form of geomagnetic or geoelectric field data. Using these models and data, pre-event, post-event, and online planning and assessment can be performed by calculating GICs, analyzing voltage stability margins, identifying protection system vulnerabilities, and estimating heating in transmission equipment. These tasks require an established GIC calculation and analysis procedure that uses improved geophysical and DC network models obtained by model parameter tuning. The issue is addressed by performing the following tasks: 1) Geomagnetic field data and improved 3-D earth conductivity tensors are used to plot the geoelectric field map of a given area. The obtained geoelectric field map then serves as an input to the PSS/E platform, where the GIC flows are calculated through DC circuit analysis. 2) The computed GIC is evaluated against GIC measurements in order to fine-tune the geophysical and DC network model parameters for any mismatch between calculated and measured GIC. 3) The GIC calculation procedure is then adapted for a one-in-100-year storm, in order to assess the impact of the worst-case GMD on the power system.
4) Using the transformer models, the voltage stability margin is analyzed for various real and synthetic geomagnetic or geoelectric field inputs by calculating the reactive power absorbed by the transformers during an event. All four steps will help electric utilities and planners apply better and more accurate estimation techniques for GIC calculation and for impact assessment of future GMD events.
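The core of the GIC calculation in step 1 is a DC circuit driven by the line-integrated geoelectric field. A heavily simplified single-line sketch follows; a real study solves the full multi-node DC network (here done in PSS/E), and the resistances below are illustrative:

```python
def line_induced_voltage(e_north_v_per_km, e_east_v_per_km,
                         d_north_km, d_east_km):
    """DC voltage induced in a line by a uniform geoelectric field:
    dot product of the field (V/km) with the line's displacement (km)."""
    return e_north_v_per_km * d_north_km + e_east_v_per_km * d_east_km

def line_gic(voltage_v, line_resistance_ohm, grounding_resistance_ohm):
    """GIC in a single line closed through grounded transformer neutrals,
    ignoring coupling to the rest of the network."""
    return voltage_v / (line_resistance_ohm + grounding_resistance_ohm)

# 1 V/km northward field; line runs 100 km north and 50 km east
v = line_induced_voltage(1.0, 0.0, 100.0, 50.0)   # 100.0 V
i = line_gic(v, 3.0, 2.0)                          # 20.0 A
```

For a network, each line contributes such a voltage source, and the node equations are solved simultaneously for the neutral-point GICs.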
System for corrosion monitoring in pipeline applying fuzzy logic mathematics
NASA Astrophysics Data System (ADS)
Kuzyakov, O. N.; Kolosova, A. L.; Andreeva, M. A.
2018-05-01
A list of factors influencing corrosion rate on the external side of underground pipeline is determined. Principles of constructing a corrosion monitoring system are described; the system performance algorithm and program are elaborated. A comparative analysis of methods for calculating corrosion rate is undertaken. Fuzzy logic mathematics is applied to reduce calculations while considering a wider range of corrosion factors.
Development of Safety Analysis Code System of Beam Transport and Core for Accelerator Driven System
NASA Astrophysics Data System (ADS)
Aizawa, Naoto; Iwasaki, Tomohiko
2014-06-01
A safety analysis code system for the beam transport and core of an accelerator-driven system (ADS) has been developed for analyses of beam transients such as changes in the shape and position of the incident beam. The code system consists of a beam transport analysis part and a core analysis part. TRACE 3-D is employed in the beam transport analysis part, and the shape and incident position of the beam at the target are calculated. In the core analysis part, the neutronics, thermal-hydraulics, and cladding failure analyses are performed using the ADS dynamic calculation code ADSE, on the basis of the external source database calculated by PHITS and the cross-section database calculated by SRAC, together with programs for thermoelastic and creep cladding failure analysis. Using the code system, beam transient analyses were performed for the ADS proposed by the Japan Atomic Energy Agency. As a result, the cladding temperature rises rapidly and plastic deformation occurs within several seconds; in addition, the cladding is evaluated to fail by creep within a hundred seconds. These results show that beam transients can cause cladding failure.
NASA Astrophysics Data System (ADS)
Ulutas, E.; Inan, A.; Annunziato, A.
2012-06-01
This study analyzes the response of the Global Disasters Alerts and Coordination System (GDACS) in relation to a case study: the Kepulauan Mentawai earthquake and related tsunami, which occurred on 25 October 2010. The GDACS, developed by the European Commission Joint Research Centre, combines existing web-based disaster information management systems with the aim of alerting the international community in case of major disasters. The tsunami simulation system is an integral part of the GDACS. In more detail, the study aims to assess the tsunami hazard on the Mentawai and Sumatra coasts: the tsunami heights and arrival times have been estimated employing three propagation models based on the long wave theory. The analysis was performed in three stages: (1) pre-calculated simulations using the tsunami scenario database for that region, used by the GDACS system to estimate the alert level; (2) near-real-time simulated tsunami forecasts, automatically performed by the GDACS system whenever a new earthquake is detected by the seismological data providers; and (3) post-event tsunami calculations using GCMT (Global Centroid Moment Tensor) fault mechanism solutions proposed by the US Geological Survey (USGS) for this event. The GDACS system estimates the alert level based on the first type of calculations and on that basis sends alert messages to its users; the second type of calculations is available within 30-40 min after the notification of the event but does not change the estimated alert level. The third type of calculations is performed to improve the initial estimations and to have a better understanding of the extent of the possible damage. The automatic alert level for the earthquake was given between Green and Orange Alert, which, in the logic of GDACS, means no or moderate need of international humanitarian assistance; however, the earthquake generated 3 to 9 m tsunami run-up along the southwestern coasts of the Pagai Islands, where 431 people died.
The post-event calculations indicated medium-high humanitarian impacts.
NASA Technical Reports Server (NTRS)
Koerner, M. A.
1986-01-01
The performance of X-band (8.5-GHz) and 32-GHz telemetry links is compared on the basis of the total data return per DSN station pass. Differences in spacecraft transmitter efficiency, transmit circuit loss, and transmitting antenna area efficiency and pointing loss are not considered in these calculations. Thus, the performance differentials calculated in this memo are those produced by a DSN 70-m station antenna gain and clear weather receiving system noise temperature and by weather. These calculations show that, assuming mechanical compensation of the DSN 70-m antenna for 32-GHz operation, a performance advantage for 32 GHz over X-band of 8.2 dB can be achieved for at least one DSN station location. Even if only Canberra and Madrid are used, a performance advantage of 7.7 dB can be obtained for at least one DSN station location. A system using a multiple beam feed (electronic compensation) should achieve similar results.
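The quoted link advantages convert from decibels to a linear data-return ratio as 10^(dB/10). A one-line sketch of that conversion:

```python
def db_to_ratio(db: float) -> float:
    """Convert a power (or data-return) advantage in dB to a linear
    ratio: ratio = 10 ** (dB / 10)."""
    return 10.0 ** (db / 10.0)

# The 8.2 dB advantage quoted above corresponds to roughly 6.6x the
# data returned per station pass at 32 GHz versus X-band
ratio_32ghz = db_to_ratio(8.2)
```

The 7.7 dB Canberra/Madrid figure converts the same way, to roughly a factor of 5.9.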
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kistler, B.L.
DELSOL3 is a revised and updated version of the DELSOL2 computer program (SAND81-8237) for calculating collector field performance and layout and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design based on energy cost. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. DELSOL3 maintains the advantages of speed and accuracy which are characteristics of DELSOL2.
Choi, Tayoung; Ganapathy, Sriram; Jung, Jaehak; Savage, David R.; Lakshmanan, Balasubramanian; Vecasey, Pamela M.
2013-04-16
A system and method for detecting a low performing cell in a fuel cell stack using measured cell voltages. The method includes determining that the fuel cell stack is running, the stack coolant temperature is above a certain temperature and the stack current density is within a relatively low power range. The method further includes calculating the average cell voltage, and determining whether the difference between the average cell voltage and the minimum cell voltage is greater than a predetermined threshold. If the difference between the average cell voltage and the minimum cell voltage is greater than the predetermined threshold and the minimum cell voltage is less than another predetermined threshold, then the method increments a low performing cell timer. A ratio of the low performing cell timer and a system run timer is calculated to identify a low performing cell.
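The voltage-comparison logic described above can be sketched in a few lines. The threshold values, timer step, and function names below are illustrative assumptions, not the patent's actual calibrations:

```python
# Sketch of the low-performing-cell detection logic: compare the
# minimum cell voltage against the stack average and a fixed floor,
# and accumulate a timer while both conditions hold.
# All thresholds here are assumed values for illustration only.

def update_low_cell_timer(cell_voltages, low_cell_timer,
                          delta_threshold=0.15, min_threshold=0.30,
                          dt=1.0):
    """Increment the low-performing-cell timer when the minimum cell
    voltage lags the stack average by more than delta_threshold and
    is itself below min_threshold."""
    avg_v = sum(cell_voltages) / len(cell_voltages)
    min_v = min(cell_voltages)
    if (avg_v - min_v) > delta_threshold and min_v < min_threshold:
        low_cell_timer += dt
    return low_cell_timer

def low_cell_ratio(low_cell_timer, run_timer):
    """Ratio of low-cell time to total run time, used to flag a
    persistently low-performing cell."""
    return low_cell_timer / run_timer if run_timer > 0 else 0.0
```

A supervisory routine would call `update_low_cell_timer` each sample period (only while the stack is running, warm, and at low power, per the abstract) and flag a cell when the ratio exceeds a limit.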
Probing Actinide Electronic Structure through Pu Cluster Calculations
Ryzhkov, Mickhail V.; Mirmelstein, Alexei; Yu, Sung-Woo; ...
2013-02-26
Calculations of the electronic structure of plutonium clusters have been performed within the framework of the relativistic discrete-variational method. These theoretical results, together with those calculated earlier for related systems, have been compared to spectroscopic data produced in experimental investigations of bulk systems, including photoelectron spectroscopy. Observation of the changes in the Pu electronic structure as a function of cluster size provides powerful insight into aspects of bulk Pu electronic structure.
GMXPBSA 2.1: A GROMACS tool to perform MM/PBSA and computational alanine scanning
NASA Astrophysics Data System (ADS)
Paissoni, C.; Spiliotopoulos, D.; Musco, G.; Spitaleri, A.
2015-01-01
GMXPBSA 2.1 is a user-friendly suite of Bash/Perl scripts for streamlining MM/PBSA calculations on structural ensembles derived from GROMACS trajectories, to automatically calculate binding free energies for protein-protein or ligand-protein complexes [R.T. Bradshaw et al., Protein Eng. Des. Sel. 24 (2011) 197-207]. GMXPBSA 2.1 is flexible, can easily be customized to specific needs, and improves on the previous GMXPBSA 2.0 [C. Paissoni et al., Comput. Phys. Commun. (2014), 185, 2920-2929]. Additionally, it performs computational alanine scanning (CAS) to study the effects of ligand and/or receptor alanine mutations on the free energy of binding. Calculations require only protein-protein or protein-ligand MD simulations as input. GMXPBSA 2.1 performs different comparative analyses, including a posteriori generation of alanine mutants of the wild-type complex, calculation of the binding free energy values of the mutant complexes, and comparison of the results with the wild-type system. Moreover, it compares the binding free energy of different complex trajectories, allowing the study of the effects of non-alanine mutations, post-translational modifications, or unnatural amino acids on the binding free energy of the system under investigation. Finally, it can calculate and rank the relative affinities of different ligands for the same receptor using MD simulations of the protein in complex with each ligand. In order to dissect the different MM/PBSA energy contributions, including the molecular mechanics term (MM), the electrostatic contribution to solvation (PB), and the nonpolar contribution to solvation (SA), the tool combines two freely available programs: the MD simulation software GROMACS [S. Pronk et al., Bioinformatics 29 (2013) 845-854] and the Poisson-Boltzmann equation solver APBS [N.A. Baker et al., Proc. Natl. Acad. Sci. U.S.A 98 (2001) 10037-10041].
All the calculations can be performed in a single or distributed automatic fashion on a cluster facility, speeding up the calculation by dividing frames across the available processors. With respect to our previously published GMXPBSA 2.0, this new version fixes some problems and allows additional kinds of calculations, such as CAS on a single protein in order to identify hot spots, more custom options for APBS calculations, faster APBS calculations (precF set to 0), and the possibility to work with multichain systems (see Summary of revisions for more details). The program is freely available under the GPL license.
Use of Neuroimaging to Clarify How Human Brains Perform Mental Calculations
ERIC Educational Resources Information Center
Ortiz, Enrique
2010-01-01
The purpose of this study was to analyze participants' levels of hemoglobin as they performed arithmetic mental calculations using Optical Topography (OT, a helmet-type brain-scanning system, also known as Functional Near-Infrared Spectroscopy or fNIRS). A central issue in cognitive neuroscience involves the study of how the human brain encodes and…
Real-time implementing wavefront reconstruction for adaptive optics
NASA Astrophysics Data System (ADS)
Wang, Caixia; Li, Mei; Wang, Chunhong; Zhou, Luchun; Jiang, Wenhan
2004-12-01
The capability of real-time wave-front reconstruction is important for an adaptive optics (AO) system. The bandwidth of the system and the real-time processing ability of the wave-front processor are mainly determined by the speed of calculation. The system requires a sufficient number of subapertures and a high sampling frequency to compensate atmospheric turbulence, which increases the number of reconstruction operations accordingly. Since the performance of an AO system improves as calculation latency decreases, it is necessary to study how to increase the speed of wavefront reconstruction. There are two ways to improve the real-time performance of the reconstruction: one is to transform the wavefront reconstruction matrix, for example by wavelet or FFT methods; the other is to enhance the performance of the processing element. Analysis shows that the former method cuts latency at the cost of reconstruction precision, so the latter method is adopted in this article. Based on the characteristics of the wavefront reconstruction algorithm, a systolic array implemented in an FPGA is designed to perform real-time wavefront reconstruction. The system delay is greatly reduced through pipelining and parallel processing; the minimum latency of reconstruction is the reconstruction calculation of one subaperture.
MTF measurements on real time for performance analysis of electro-optical systems
NASA Astrophysics Data System (ADS)
Stuchi, Jose Augusto; Signoreto Barbarini, Elisa; Vieira, Flavio Pascoal; dos Santos, Daniel, Jr.; Stefani, Mário Antonio; Yasuoka, Fatima Maria Mitsue; Castro Neto, Jarbas C.; Linhari Rodrigues, Evandro Luis
2012-06-01
The need for methods and tools that assist in determining the performance of optical systems is increasing. One of the most widely used methods for analyzing an optical system is to measure its Modulation Transfer Function (MTF), which provides a direct and quantitative verification of image quality. This paper presents software developed to calculate the MTF of electro-optical systems. The software was used to calculate the MTF of a digital fundus camera, a thermal imager, and an ophthalmologic surgery microscope. The MTF information aids the analysis of alignment, the measurement of optical quality, and the determination of the limiting resolution of optical systems. The results obtained with the fundus camera and the thermal imager were compared with theoretical values. For the microscope, the results were compared with the MTF measured for a Zeiss microscope, the quality standard for ophthalmological microscopes.
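As a rough illustration of how an MTF can be obtained from measured data, the sketch below differentiates a 1-D edge spread function into a line spread function and takes the normalized FFT magnitude. The synthetic edge and function name are assumptions; production software like that in the paper would additionally handle noise, slanted edges, and sampling effects:

```python
# Minimal MTF computation from a 1-D edge profile:
# ESF -> (derivative) -> LSF -> (FFT magnitude, normalized) -> MTF.
import numpy as np

def mtf_from_edge(edge_profile):
    """Return the MTF, normalized to 1 at zero frequency, from a
    sampled edge spread function."""
    lsf = np.diff(edge_profile)        # edge spread -> line spread
    mtf = np.abs(np.fft.rfft(lsf))     # line spread -> transfer magnitude
    return mtf / mtf[0]                # normalize to unity at DC

# An ideal step edge has a delta-function LSF, so its MTF is flat at 1;
# any real optical system rolls off below this ideal curve.
```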
Speeding up GW Calculations to Meet the Challenge of Large Scale Quasiparticle Predictions
Gao, Weiwei; Xia, Weiyi; Gao, Xiang; Zhang, Peihong
2016-01-01
Although the GW approximation is recognized as one of the most accurate theories for predicting materials' excited-state properties, scaling up conventional GW calculations for large systems remains a major challenge. We present a powerful and simple-to-implement method that can drastically accelerate fully converged GW calculations for large systems, enabling fast and accurate quasiparticle calculations for complex materials systems. We demonstrate the performance of this new method by presenting results for ZnO and MgO supercells. A speed-up factor of nearly two orders of magnitude is achieved for a system containing 256 atoms (1024 valence electrons) with a negligibly small numerical error of ±0.03 eV. Finally, we discuss the application of our method to GW calculations for 2D materials. PMID:27833140
Estimation of PV energy production based on satellite data
NASA Astrophysics Data System (ADS)
Mazurek, G.
2015-09-01
Photovoltaic (PV) technology is an attractive source of power for systems without a connection to the power grid. Because of seasonal variations in solar radiation, the design of such a power system requires careful analysis in order to provide the required reliability. In this paper we present the results of three-year measurements of an experimental PV system located in Poland and based on a polycrystalline silicon module. Irradiation values calculated from ground measurements have been compared with data from solar radiation databases derived from satellite observations. Good agreement between the two data sources has been shown, especially during summer. When satellite data from the same time period are available, yearly and monthly PV energy production can be calculated with 2% and 5% accuracy, respectively. However, monthly production during winter appears to be overestimated, especially in January. The results of this work may be helpful in forecasting the performance of similar PV systems in Central Europe and allow more precise forecasts of PV system performance than those based only on tables of long-term averaged values.
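A minimal sketch of the kind of yield estimate such irradiation comparisons feed into, using the common E = P_peak × H × PR approximation (irradiation referenced to the 1 kW/m² STC level). The performance ratio of 0.8 and the function name are illustrative assumptions, not values from the paper:

```python
# Rough monthly PV energy estimate from monthly in-plane irradiation.
# performance_ratio lumps together inverter, wiring, soiling, and
# temperature losses; 0.8 is an assumed typical value.

def pv_monthly_energy(irradiation_kwh_m2, peak_power_kw,
                      performance_ratio=0.8):
    """Monthly AC energy [kWh] from monthly irradiation [kWh/m^2]
    and the array's STC peak power [kWp]."""
    return peak_power_kw * irradiation_kwh_m2 * performance_ratio
```

With satellite-derived monthly irradiation as input, the abstract's 5% monthly accuracy figure then bounds the error of such an estimate in the summer months.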
Cooling Performance Analysis of the Primary Cooling System of the Reactor TRIGA-2000 Bandung
NASA Astrophysics Data System (ADS)
Irianto, I. D.; Dibyo, S.; Bakhri, S.; Sunaryo, G. R.
2018-02-01
The conversion of the reactor fuel type affects the heat transfer from the reactor core to the cooling system. This conversion results in changes to the cooling system performance and to the operating and design parameters of key components of the reactor coolant system, especially the primary cooling system. The operating parameters of the primary cooling system of the reactor TRIGA 2000 Bandung are calculated using the ChemCad 6.1.4 package. The calculation is based on mass and energy balances in each coolant flow path and unit component. The outputs are the temperature, pressure, and flow rate of the coolant used in the cooling process. The simulation results indicate that if the primary cooling system operates with a single pump, i.e. a coolant mass flow rate of 60 kg/s, the reactor inlet and outlet temperatures are 32.2 °C and 40.2 °C, respectively. If it operates with two pumps at 75% capacity, i.e. a coolant mass flow rate of 90 kg/s, the reactor inlet and outlet temperatures are 32.9 °C and 38.2 °C, respectively. Both configurations qualify, since the primary coolant temperature remains below the permitted limit of 49.0 °C.
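The reported temperature pairs are consistent with a simple steady-state energy balance, Q = ṁ · cp · ΔT. The sketch below checks this, with cp for water taken as 4.186 kJ/(kg·K), an assumed property value since the abstract does not state coolant properties:

```python
# Steady-state thermal power removed by the primary coolant loop.
# cp_kj_kg_k = 4.186 kJ/(kg K) is an assumed value for water near
# the quoted operating temperatures.

def core_power_mw(m_dot_kg_s, t_in_c, t_out_c, cp_kj_kg_k=4.186):
    """Thermal power [MW] from coolant mass flow rate [kg/s] and
    inlet/outlet temperatures [degrees C]."""
    return m_dot_kg_s * cp_kj_kg_k * (t_out_c - t_in_c) / 1000.0
```

Both quoted operating points (60 kg/s with an 8.0 °C rise, 90 kg/s with a 5.3 °C rise) come out to roughly 2 MW of removed heat, matching the reactor's nominal power implied by its name.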
Detailed performance and environmental monitoring of aquifer heating and cooling systems
NASA Astrophysics Data System (ADS)
Acuna, José; Ahlkrona, Malva; Zandin, Hanna; Singh, Ashutosh
2016-04-01
The project intends to quantify the performance and environmental impact of large-scale aquifer thermal energy storage, and to provide recommendations for operating future systems and estimating their environmental footprint. Field measurements and tests of innovative equipment as well as advanced modelling work and analysis will be performed. The following aspects are introduced and covered in the presentation: the thermal, chemical, and microbiological influence of aquifer thermal energy storage systems, with measurement and evaluation of real conditions and of the influence of one system in operation; follow-up of energy extraction from the aquifer compared to projected values, with recommendations for improvements; evaluation of the most widely used thermal modeling tool for design and calculation of groundwater temperatures, with calculations in MODFLOW/MT3DMS; and testing and evaluation of optical fiber cables as a way to measure temperatures in aquifer thermal energy storages.
NASA Astrophysics Data System (ADS)
Ganev, Kostadin; Todorova, Angelina; Jordanov, Georgi; Gadzhev, Georgi; Syrakov, Dimiter; Miloshev, Nikolai; Prodanova, Maria
2010-05-01
The NATO SfP N 981393 project aims at developing a unified, Balkan-region-oriented modelling system for operational response to accidental releases of harmful gases in the atmosphere, which would be able to: 1. perform highly accurate and reliable risk analysis and assessment for selected "hot spots"; 2. support fast emergency decisions with short-term, regional-scale forecasts of the propagation of harmful gases in case of accidental release; 3. perform, in an off-line mode, a more detailed and comprehensive analysis of the possible longer-term impacts on the environment and human health and make the results available to the authorities and the public. The present paper describes the set-up and testing of the system, focusing mainly on the risk analysis mode. The modeling tool used in the system is the US EPA Models-3 system: WRF, CMAQ, and (partly) SMOKE. The CB05 toxic chemical mechanism, including chlorine reactions, is employed. The emission input exploits the high-resolution TNO emission inventory. The meteorological pre-processor WRF is driven by NCAR Final Reanalysis data and performs calculations in 3 nested domains, covering respectively the regions of South-Eastern Europe, Bulgaria, and the area surrounding the particular site. The risk assessment for the region of the "Vereja Him" factory, Jambol, Bulgaria is performed on the basis of one-year-long model calculations. The calculations with the CMAQ chemical transport model are performed for the two inner domains. An amount of 25 tons of chlorine is released twice daily in the innermost domain, and separate calculations are performed for every release. The results are averaged over one year in order to evaluate the probability of exceeding a regulatory threshold value in each grid point. The completion of this task in a relatively short period of time was made possible by the newly developed Grid computational environment, which allows shared use of facilities in the research community.
Reference Manual for the System Advisor Model's Wind Power Performance Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freeman, J.; Jorgenson, J.; Gilman, P.
2014-08-01
This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
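As an illustration of two core steps such an hourly performance model performs, the sketch below extrapolates a measured wind speed to hub height with a power-law shear profile and interpolates a turbine power curve. The shear exponent, the toy power curve in the test, and the function names are assumptions, not SAM's actual algorithms:

```python
# Two building blocks of an hourly wind-energy calculation:
# (1) shear adjustment of wind speed to hub height (power law);
# (2) linear interpolation of a manufacturer-style power curve.
import bisect

def hub_height_speed(v_ref, h_ref, h_hub, alpha=0.14):
    """Power-law extrapolation of wind speed [m/s] from reference
    height h_ref [m] to hub height h_hub [m]; alpha is an assumed
    shear exponent."""
    return v_ref * (h_hub / h_ref) ** alpha

def turbine_power(v, curve):
    """Linearly interpolate a power curve given as a sorted list of
    (speed [m/s], power [kW]) points; returns zero outside the
    curve's range (below cut-in / at or above cut-out)."""
    speeds = [p[0] for p in curve]
    if v <= speeds[0] or v >= speeds[-1]:
        return 0.0
    i = bisect.bisect_right(speeds, v)
    (v0, p0), (v1, p1) = curve[i - 1], curve[i]
    return p0 + (p1 - p0) * (v - v0) / (v1 - v0)
```

Summing `turbine_power(hub_height_speed(...))` over 8760 hourly wind speeds, minus wake and electrical losses, gives the annual energy figure a financial model would then consume.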
Characterizing Density and Complexity of Imported Cargos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birrer, Nathaniel; Divin, Charles; Glenn, Steven
X-ray inspection systems are used to detect radiological and nuclear threats in imported cargo. In order to better understand the performance of these systems, their imaging capabilities and the characteristics of imported cargo need to be determined. This project involved calculation of the modulation transfer function as a metric of system imaging performance, and a study of the density and inhomogeneity of imported cargos, which have been shown to correlate with human analysts' threat detection performance.
Calculation of Gallium-metal-Arsenic phase diagrams
NASA Technical Reports Server (NTRS)
Scofield, J. D.; Davison, J. E.; Ray, A. E.; Smith, S. R.
1991-01-01
Electrical contacts and metallization on GaAs solar cells must survive at high temperatures for several minutes under specific mission scenarios. Which metallizations or alloy systems are able to withstand extreme thermal excursions with minimum degradation of solar cell performance can be predicted from properly calculated temperature-constitution phase diagrams. A method for calculating a ternary diagram and its three constituent binary phase diagrams is briefly outlined, and ternary phase diagrams for three Ga-As-X alloy systems are presented. Free energy functions of the liquid and solid phases are approximated by regular solution theory. Phase diagrams calculated using this method are presented for the Ga-As-Ge and Ga-As-Ag systems.
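The regular-solution approximation mentioned above amounts to a simple closed-form free energy of mixing, G_mix = Ω·x(1−x) + RT[x ln x + (1−x) ln(1−x)], whose minima and common tangents generate the phase boundaries. A minimal sketch for a binary system, with an arbitrary illustrative interaction parameter Ω:

```python
# Regular-solution molar Gibbs free energy of mixing for a binary
# A-B system: an enthalpic term Omega*x*(1-x) plus the ideal
# configurational entropy term. Omega values used in tests are
# illustrative, not fitted Ga-As-X parameters.
import math

R = 8.314  # gas constant, J/(mol K)

def g_mix(x, t, omega):
    """G_mix [J/mol] at mole fraction x of component B, temperature
    t [K], and interaction parameter omega [J/mol]."""
    if x <= 0.0 or x >= 1.0:
        return 0.0  # pure components: no free energy of mixing
    return omega * x * (1.0 - x) + R * t * (
        x * math.log(x) + (1.0 - x) * math.log(1.0 - x))
```

For Ω > 2RT this curve develops two minima, signalling a miscibility gap; scanning temperature and applying the common-tangent construction to such curves (per phase) is how the phase diagrams above are assembled.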
Comparison of Hansen-Roach and ENDF/B-IV cross sections for ²³³U criticality calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNeany, S. R.; Jenkins, J. D.
A comparison is made between criticality calculations performed using ENDF/B-IV cross sections and the 16-group Hansen-Roach library at ORNL. The area investigated is homogeneous systems of highly enriched ²³³U in simple geometries. Calculations are compared with experimental data for a wide range of H/²³³U ratios. Results show that calculations of k_eff made with the Hansen-Roach cross sections agree within 1.5 percent for the experiments considered. Results using ENDF/B-IV cross sections were in good agreement for well-thermalized systems, but discrepancies of up to 7 percent in k_eff were observed in fast and epithermal systems.
Synthesis of novel stable compounds in the phosphorous-nitrogen system under pressure
NASA Astrophysics Data System (ADS)
Stavrou, Elissaios; Batyrev, Iskander; Ciezak-Jenkins, Jennifer; Grivickas, Paulius; Zaug, Joseph; Greenberg, Eran; Kunz, Martin
2017-06-01
We explore the possible formation of polynitrogen compounds in the P-N system under pressure, stable or metastable at ambient conditions, using in situ X-ray diffraction and Raman spectroscopy in synergy with first-principles evolutionary structural search algorithms (USPEX). We have performed numerous synthesis experiments at pressures from near ambient up to 50 GPa using both a mixture of elemental P and N2 and relevant precursors such as P3N5. Calculation of P-N extended structures at 10, 30, and 50 GPa was done using USPEX based on density functional theory (DFT) plane-wave calculations (VASP) with ultrasoft pseudopotentials. A full convex hull was found for N-rich compositions of the P-N binary system. Variable-composition calculations were complemented by fixed-composition calculations at certain nitrogen-rich compositions. Stable structures were refined by DFT calculations using norm-conserving pseudopotentials. A comparison between our results and previous studies of the same system will also be given. Part of this work was performed under the auspices of the U.S. DoE by LLNS, LLC under Contract DE-AC52-07NA27344. We thank the Joint DoD/DOE Munitions Technology Development Program and the HE science C-II program at LLNL for supporting this study.
An urban energy performance evaluation system and its computer implementation.
Wang, Lei; Yuan, Guan; Long, Ruyin; Chen, Hong
2017-12-15
To improve the urban environment and effectively reflect and promote urban energy performance, an urban energy performance evaluation system was constructed, thereby strengthening urban environmental management capabilities. From the perspectives of internalization and externalization, a framework of evaluation indicators and key factors that determine urban energy performance and explore the reasons for differences in performance was proposed according to established theory and previous studies. Using the improved stochastic frontier analysis method, an urban energy performance evaluation and factor analysis model was built that brings performance evaluation and factor analysis into the same stage for study. According to data obtained for the Chinese provincial capitals from 2004 to 2013, the coefficients of the evaluation indicators and key factors were calculated by the urban energy performance evaluation and factor analysis model. These coefficients were then used to compile the program file. The urban energy performance evaluation system developed in this study was designed in three parts: a database, a distributed component server, and a human-machine interface. Its functions were designed as login, addition, edit, input, calculation, analysis, comparison, inquiry, and export. On the basis of these contents, an urban energy performance evaluation system was developed using Microsoft Visual Studio .NET 2015. The system can effectively reflect the status of and any changes in urban energy performance. Beijing was considered as an example to conduct an empirical study, which further verified the applicability and convenience of this evaluation system. Copyright © 2017 Elsevier Ltd. All rights reserved.
Los Alamos radiation transport code system on desktop computing platforms
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briesmeister, J.F.; Brinkley, F.W.; Clark, B.A.
The Los Alamos Radiation Transport Code System (LARTCS) consists of state-of-the-art Monte Carlo and discrete ordinates transport codes and data libraries. These codes were originally developed many years ago and have undergone continual improvement. With a large initial effort and continued vigilance, the codes are easily portable from one type of hardware to another. The performance of scientific workstations (SWS) has evolved to the point that such platforms can be used routinely to perform sophisticated radiation transport calculations. As personal computer (PC) performance approaches that of the SWS, the hardware options for desktop radiation transport calculations expand considerably. The current status of the radiation transport codes within the LARTCS is described: MCNP, SABRINA, LAHET, ONEDANT, TWODANT, TWOHEX, and ONELD. Specifically, the authors discuss hardware systems on which the codes run and present code performance comparisons for various machines.
Weighted Geometric Dilution of Precision Calculations with Matrix Multiplication
Chen, Chien-Sheng
2015-01-01
To enhance the performance of location estimation in wireless positioning systems, the geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units. Since GDOP represents the geometric effect on the relationship between measurement error and positioning error, the measurement unit subset with the smallest GDOP is usually chosen for positioning. The conventional GDOP calculation using the matrix inversion method requires many operations. Because more and more measurement units can be chosen nowadays, an efficient calculation should be designed to decrease the complexity. Since the performance of each measurement unit differs, the weighted GDOP (WGDOP), instead of GDOP, is used to select the measurement units and improve the accuracy of location. To calculate WGDOP effectively and efficiently, a closed-form solution for the WGDOP calculation is proposed for the case of four or more measurements. In this paper, an efficient WGDOP calculation method applying matrix multiplication, which is easy to implement in hardware, is proposed. Even when using the all-in-view method for positioning, the proposed method still reduces the computational overhead. The proposed WGDOP methods, requiring less computation, are compatible with the global positioning system (GPS), wireless sensor networks (WSN), and cellular communication systems. PMID:25569755
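For reference, WGDOP is conventionally defined as sqrt(trace((HᵀWH)⁻¹)) for geometry matrix H and measurement weight matrix W. The sketch below uses plain matrix inversion for clarity; that inversion is precisely the costly step the paper's closed-form matrix-multiplication method is designed to avoid:

```python
# Conventional (inversion-based) WGDOP for n measurements and m
# solved-for parameters. The matrices used in the test are toy
# geometries, not real satellite/base-station configurations.
import numpy as np

def wgdop(h, w):
    """WGDOP from geometry matrix h (n x m) and a length-n vector of
    measurement weights w (inverse error variances)."""
    hwh = h.T @ np.diag(w) @ h        # weighted normal matrix
    return float(np.sqrt(np.trace(np.linalg.inv(hwh))))
```

With all weights equal to 1 this reduces to ordinary GDOP; doubling every weight (halving every error variance) scales WGDOP down by 1/√2, as expected.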
MHD Energy Bypass Scramjet Performance with Real Gas Effects
NASA Technical Reports Server (NTRS)
Park, Chul; Mehta, Unmeel B.; Bogdanoff, David W.
2000-01-01
The theoretical performance of a scramjet propulsion system incorporating a magneto-hydro-dynamic (MHD) energy bypass scheme is calculated. The one-dimensional analysis developed earlier, in which the theoretical performance is calculated neglecting skin friction and using a sudden-freezing approximation for the nozzle flow, is modified to incorporate the method of Van Driest for turbulent skin friction and a finite-rate chemistry calculation in the nozzle. Unlike in the earlier design, in which four ramp compressions occurred in the pitch plane, in the present design the first two ramp compressions occur in the pitch plane and the next two occur in the yaw plane. The results for the simplified design of a spaceliner show that (1) the present design produces higher specific impulses than the earlier design, (2) skin friction substantially reduces thrust and specific impulse, and (3) the specific impulse of the MHD-bypass system is still better than that of the non-MHD system and of a typical rocket over a narrow region of flight speeds and design parameters. The results suggest that energy management with MHD principles offers the possibility of improving scramjet performance. The technical issues needing further study are identified.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban
Here we report that computational science is well established as the third pillar of scientific discovery and is on par with experimentation and theory. However, as we move closer toward the ability to execute exascale calculations and process the ensuing extreme-scale amounts of data produced by both experiments and computations alike, the complexity of managing the compute and data analysis tasks has grown beyond the capabilities of domain scientists. Therefore, workflow management systems are absolutely necessary to ensure current and future scientific discoveries. A key research question for these workflow management systems concerns the performance optimization of complex calculation and data analysis tasks. The central contribution of this article is a description of the PANORAMA approach for modeling and diagnosing the run-time performance of complex scientific workflows. This approach integrates extreme-scale systems testbed experimentation, structured analytical modeling, and parallel systems simulation into a comprehensive workflow framework called Pegasus for understanding and improving the overall performance of complex scientific workflows.
PANORAMA: An approach to performance modeling and diagnosis of extreme-scale workflows
Deelman, Ewa; Carothers, Christopher; Mandal, Anirban; ...
2015-07-14
Johnston, Iain G; Rickett, Benjamin C; Jones, Nick S
2014-12-02
Back-of-the-envelope or rule-of-thumb calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behavior of physical systems, for example in so-called Fermi problems in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a probabilistic calculator called CALADIS, a free web tool, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between CALADIS and the BioNumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
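A minimal Monte Carlo version of this kind of uncertainty tracking can be sketched as follows; the sampling scheme and function names are illustrative and not CALADIS's actual implementation:

```python
# Propagate input distributions (rather than point estimates)
# through an arbitrary calculation and summarize the result.
import random

def propagate(f, dists, n=100_000, seed=0):
    """Sample each input distribution (a callable taking an RNG),
    push the samples through f, and return (mean, std) of the
    resulting output distribution."""
    rng = random.Random(seed)
    samples = [f(*(d(rng) for d in dists)) for _ in range(n)]
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / (n - 1)
    return mean, var ** 0.5
```

For example, adding two independent quantities distributed as N(1, 0.1) and N(2, 0.2) yields a result near N(3, 0.224); the spread of the output, not just its center, is what lets one judge whether a back-of-the-envelope conclusion is robust.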
Brown, Nicholas R.; Powers, Jeffrey J.; Feng, B.; ...
2015-05-21
This paper presents analyses of possible reactor representations of a nuclear fuel cycle with continuous recycling of thorium and produced uranium (mostly U-233) with thorium-only feed. The analysis was performed in the context of a U.S. Department of Energy effort to develop a compendium of informative nuclear fuel cycle performance data. The objective of this paper is to determine whether intermediate-spectrum systems, in which a majority of fission events occur with incident neutron energies between 1 eV and 10^5 eV, perform as well as fast-spectrum systems in this fuel cycle. The intermediate-spectrum options analyzed include tight-lattice heavy or light water-cooled reactors, continuously refueled molten salt reactors, and a sodium-cooled reactor with hydride fuel. All options were modeled in reactor physics codes to calculate their lattice physics, spectrum characteristics, and fuel compositions over time. Based on these results, detailed metrics were calculated to compare fuel cycle performance. These metrics include waste management and resource utilization, and are binned to accommodate uncertainties. The performance of the intermediate systems for this self-sustaining thorium fuel cycle was similar to that of a representative fast-spectrum system. However, the number of fission neutrons emitted per neutron absorbed limits performance in intermediate-spectrum systems.
Methods and Systems for Analyzing the Degradation and Failure of Mechanical Systems
Jarrell, Donald B.; Sisk, Daniel R.; Hatley, Darrel D.; Kirihara, Leslie J.; Peters, Timothy J.
2005-02-08
Methods and systems for identifying, understanding, and predicting the degradation and failure of mechanical systems are disclosed. The methods include measuring and quantifying stressors that are responsible for the activation of degradation mechanisms in the machine component of interest. The intensity of the stressor may be correlated with the rate of physical degradation according to some determinable function such that a derivative relationship exists between the machine performance, degradation, and the underlying stressor. The derivative relationship may be used to make diagnostic and prognostic calculations concerning the performance and projected life of the machine. These calculations may be performed in real time to allow the machine operator to quickly adjust the operational parameters of the machinery in order to help minimize or eliminate the effects of the degradation mechanism, thereby prolonging the life of the machine. Various systems implementing the methods are also disclosed.
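The derivative stressor-degradation relationship described above can be illustrated with a deliberately simple power-law model. The functional form, constants, and failure threshold below are assumptions for illustration, not the patent's fitted functions:

```python
def degradation_rate(stressor, k=1e-4, n=2.0):
    """Assumed power-law link between stressor intensity and wear rate:
    d(wear)/dt = k * stressor**n.  k and n would be fitted per component."""
    return k * stressor ** n

def remaining_life(wear_now, wear_fail, stressor):
    """Prognostic estimate: hours until wear reaches the failure threshold,
    assuming the stressor stays at its current intensity."""
    rate = degradation_rate(stressor)
    if rate <= 0:
        return float("inf")
    return (wear_fail - wear_now) / rate

# Under n = 2, halving the stressor quarters the wear rate and so
# quadruples the projected remaining life -- the kind of operational
# adjustment the patent's real-time calculations are meant to support.
life_hi = remaining_life(wear_now=0.2, wear_fail=1.0, stressor=10.0)
life_lo = remaining_life(wear_now=0.2, wear_fail=1.0, stressor=5.0)
```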
Open-cycle OTEC system performance analysis. [Claude cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lewandowski, A.A.; Olson, D.A.; Johnson, D.H.
1980-10-01
An algorithm developed to calculate the performance of Claude-cycle ocean thermal energy conversion (OTEC) systems is described. The algorithm treats each component of the system separately and then interfaces them to form a complete system, allowing a component to be changed without changing the rest of the algorithm. Two components that are subject to change are the evaporator and condenser. For this study we developed mathematical models of a channel-flow evaporator and of both a horizontal jet and a spray direct contact condenser. The algorithm was then programmed to run on SERI's CDC 7600 computer and used to calculate the effect on performance of deaerating the warm and cold water streams before entering the evaporator and condenser, respectively. This study indicates that there is no advantage to removing air from these streams compared with removing the air from the condenser.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beshr, Mohamed; Aute, Vikrant; Abdelaziz, Omar
Commercial refrigeration systems consumed 1.21 quads of primary energy in 2010 and are known to be a major source of refrigerant charge leakage into the environment. Thus, it is important to study the environmental impact of commercial supermarket refrigeration systems and improve their design to minimize any adverse impacts. The system's Life Cycle Climate Performance (LCCP) was presented as a comprehensive metric with the aim of calculating the equivalent mass of carbon dioxide released into the atmosphere throughout its lifetime, from construction to operation and destruction. In this paper, an open source tool for the evaluation of the LCCP of different air-conditioning and refrigeration systems is presented and used to compare the environmental impact of a typical multiplex direct expansion (DX) supermarket refrigeration system based on three different refrigerants: two hydrofluorocarbon (HFC) refrigerants (R-404A and R-407F) and a low global warming potential (GWP) refrigerant (N-40). The comparison is performed in 8 US cities representing different climates. The hourly energy consumption of the refrigeration system, required for the calculation of the indirect emissions, is calculated using a widely used building energy modeling tool (EnergyPlus). A sensitivity analysis is performed to determine the impact of system charge and power plant emission factor on the LCCP results. Finally, we performed an uncertainty analysis to determine the uncertainty in total emissions for both R-404A and N-40 operated systems. We found that using low GWP refrigerants causes a considerable drop in the impact of uncertainty in the inputs related to direct emissions on the uncertainty of the total emissions of the system.
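As a rough sketch of how an LCCP figure combines direct (refrigerant leakage) and indirect (electricity) emissions; the GWP, leak-rate, and grid-factor numbers below are illustrative placeholders, not the paper's inputs, and manufacturing terms are omitted:

```python
def lccp_kg_co2e(charge_kg, annual_leak_rate, lifetime_yr, gwp,
                 eol_loss_frac, annual_energy_kwh, grid_factor_kg_per_kwh):
    """Simplified LCCP: direct emissions from annual leakage plus end-of-life
    loss, and indirect emissions from lifetime electricity consumption."""
    direct = gwp * charge_kg * (annual_leak_rate * lifetime_yr + eol_loss_frac)
    indirect = annual_energy_kwh * lifetime_yr * grid_factor_kg_per_kwh
    return direct + indirect

# Illustrative comparison: a high-GWP refrigerant vs. a low-GWP alternative
# in the same system (all parameter values are assumed, not measured).
hi_gwp = lccp_kg_co2e(500, 0.15, 15, 3922, 0.10, 400_000, 0.5)
lo_gwp = lccp_kg_co2e(500, 0.15, 15, 1387, 0.10, 400_000, 0.5)
```

With a lower GWP, the direct term shrinks, so uncertainty in leak rates matters less to the total, mirroring the paper's finding.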
Reduced-Order Modeling of 3D Rayleigh-Benard Turbulent Convection
NASA Astrophysics Data System (ADS)
Hassanzadeh, Pedram; Grover, Piyush; Nabi, Saleh
2017-11-01
Accurate Reduced-Order Models (ROMs) of turbulent geophysical flows have broad applications in science and engineering; for example, to study the climate system or to perform real-time flow control/optimization in energy systems. Here we focus on 3D Rayleigh-Benard turbulent convection at a Rayleigh number of 10⁶ as a prototype for turbulent geophysical flows, which are dominantly buoyancy driven. The purpose of the study is to evaluate and improve the performance of different model reduction techniques using this setting. One-dimensional ROMs for horizontally averaged temperature are calculated using several methods. Specifically, the Linear Response Function (LRF) of the system is calculated from a large DNS dataset using Dynamic Mode Decomposition (DMD) and the Fluctuation-Dissipation Theorem (FDT). The LRF is also calculated using the Green's function method of Hassanzadeh and Kuang (2016, J. Atmos. Sci.), which is based on using numerous forced DNS runs. The performance of these LRFs in estimating the system's response to weak external forcings or in controlling the time-mean flow is compared and contrasted. The spectral properties of the LRFs and the scaling of the accuracy with the length of the dataset (for the data-driven methods) are also discussed.
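The DMD step, fitting a linear propagator to snapshot pairs, can be sketched in a few lines. This is a generic exact-DMD least-squares fit, not the authors' specific LRF pipeline:

```python
import numpy as np

def dmd_operator(X, Xp, r=None):
    """Least-squares linear propagator A with Xp ~= A @ X, optionally
    rank-truncated via the SVD of the snapshot matrix (the core of DMD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    if r is not None:
        U, s, Vt = U[:, :r], s[:r], Vt[:r]
    return Xp @ Vt.T @ np.diag(1.0 / s) @ U.T

# Toy check: snapshots generated by a known linear map are recovered exactly.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1],
                   [0.0, 0.8]])
X = rng.standard_normal((2, 50))   # state snapshots (columns)
Xp = A_true @ X                    # time-shifted snapshots
A_est = dmd_operator(X, Xp)
```

For a stable discrete map x_{k+1} = A x_k + f, the steady response to a weak constant forcing is x* = (I - A)⁻¹ f, which is one way such a data-driven LRF can be used to estimate responses to external forcings.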
Processing Infrared Images For Fire Management Applications
NASA Astrophysics Data System (ADS)
Warren, John R.; Pratt, William K.
1981-12-01
The USDA Forest Service has used airborne infrared systems for forest fire detection and mapping for many years. The transfer of the images from plane to ground and the transposition of fire spots and perimeters to maps has been performed manually. A new system has been developed which uses digital image processing, transmission, and storage. Interactive graphics, high-resolution color display, calculations, and computer model compatibility are featured in the system. Images are acquired by an IR line scanner and converted to 1024 x 1024 x 8 bit frames for transmission to the ground at a 1.544 Mbit/s rate over a 14.7 GHz carrier. Individual frames are received and stored, then transferred to a solid state memory to refresh the display at a conventional 30 frames per second rate. Line length and area calculations, false color assignment, X-Y scaling, and image enhancement are available. Fire spread can be calculated for display and fire perimeters plotted on maps. The performance requirements, basic system, and image processing will be described.
Transmission Loss Calculation using A and B Loss Coefficients in Dynamic Economic Dispatch Problem
NASA Astrophysics Data System (ADS)
Jethmalani, C. H. Ram; Dumpa, Poornima; Simon, Sishaj P.; Sundareswaran, K.
2016-04-01
This paper analyzes the performance of A loss coefficients for evaluating transmission losses in a Dynamic Economic Dispatch (DED) problem. The performance analysis is carried out by comparing the losses computed using nominal A loss coefficients and nominal B loss coefficients against the load flow solution obtained by the standard Newton-Raphson (NR) method. A density-based clustering method that groups connected regions of sufficiently high density (DBSCAN) is employed to identify the best regions of the A and B loss coefficients. Based on the results obtained through the cluster analysis, a novel approach to improving the accuracy of network loss calculation is proposed: based on the change in per-unit load values between load intervals, the loss coefficients are updated for calculating the transmission losses. The proposed algorithm is tested and validated on the IEEE 6 bus, IEEE 14 bus, IEEE 30 bus, and IEEE 118 bus systems. All simulations are carried out using SCILAB 5.4 (www.scilab.org), which is open source software.
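The B-coefficient (Kron's) loss formula referenced above has a compact quadratic form. The 3-unit B matrix below is a made-up illustrative example, not taken from any of the IEEE test systems:

```python
import numpy as np

def transmission_loss(P, B, B0=None, B00=0.0):
    """Kron's loss formula: P_loss = P^T B P + B0^T P + B00,
    with unit generations P and coefficients in consistent units."""
    P = np.asarray(P, dtype=float)
    loss = P @ B @ P
    if B0 is not None:
        loss += np.asarray(B0, dtype=float) @ P
    return loss + B00

# Illustrative symmetric 3-unit B matrix (1/MW) -- assumed values only.
B = np.array([[0.000136,  0.0000175, 0.000184],
              [0.0000175, 0.000154,  0.000283],
              [0.000184,  0.000283,  0.000161]])
P = np.array([100.0, 150.0, 80.0])  # unit outputs in MW
loss = transmission_loss(P, B)      # total network loss in MW
```

Updating the coefficients between load intervals, as the paper proposes, amounts to recomputing B (or A) before each evaluation of this quadratic form.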
Shock Positioning Controls Designs for a Supersonic Inlet
NASA Technical Reports Server (NTRS)
Kopasakis, George; Connolly, Joseph W.
2010-01-01
Under the NASA Fundamental Aeronautics Program, the Supersonics Project is working to overcome the obstacles to supersonic commercial flight. The supersonic inlet design that is utilized to efficiently compress the incoming air and deliver it to the engine has many design challenges. Among those challenges is the shock positioning of internal compression inlets, which requires active control in order to maintain performance and to prevent inlet unstarts due to upstream (freestream) and downstream (engine) disturbances. In this paper a novel feedback control technique is presented, which emphasizes disturbance attenuation among other control performance criteria, while it ties the speed of the actuation system(s) to the design of the controller. In this design, the desired performance specifications for the overall control system are used to design the closed loop gain of the feedback controller and then, knowing the transfer function of the plant, the controller is calculated to achieve this performance. The innovation is that this design procedure is methodical and allows maximization of the performance of the designed control system with respect to actuator rates, while the stability of the calculated controller is guaranteed.
Optimal Redundancy Management in Reconfigurable Control Systems Based on Normalized Nonspecificity
NASA Technical Reports Server (NTRS)
Wu, N. Eva; Klir, George J.
1998-01-01
In this paper the notion of normalized nonspecificity is introduced. Nonspecificity measures the uncertainty of the estimated parameters that reflect impairment in a controlled system. Based on this notion, a quantity called the reconfiguration coverage is calculated. It represents the likelihood of success of a control reconfiguration action. This coverage links the overall system reliability to the achievable and required control and diagnostic performance. The coverage, when calculated on-line, is used for managing the redundancy in the system.
ERIC Educational Resources Information Center
Powell, Sarah R.; Fuchs, Lynn S.; Cirino, Paul T.; Fuchs, Douglas; Compton, Donald L.; Changas, Paul C.
2015-01-01
The focus of the present study was enhancing word problem and calculation achievement in ways that support prealgebraic thinking among second-grade students at risk for mathematics difficulty. Intervention relied on a multitier support system (i.e., responsiveness to intervention, or RTI) in which at-risk students participate in general classroom…
DOT National Transportation Integrated Search
2012-10-01
This project conducted a thorough review of the existing Pavement Management Information System (PMIS) database, : performance models, needs estimates, utility curves, and scores calculations, as well as a review of District practices : concerning th...
An all digital phase locked loop for FM demodulation.
NASA Technical Reports Server (NTRS)
Greco, J.; Garodnick, J.; Schilling, D. L.
1972-01-01
A phase-locked loop designed with all-digital circuitry, which avoids certain problems, and a digital voltage-controlled oscillator algorithm are described. The system operates synchronously and performs all required digital calculations within one sampling period, thereby acting as a real-time special-purpose computer. The output signal-to-noise ratio (SNR) is computed for frequency offsets and sinusoidal modulation, and experimental results verify the theoretical calculations.
NASA Technical Reports Server (NTRS)
Gordon, Sanford
1991-01-01
The NNEP is a general computer program for calculating aircraft engine performance. NNEP has been used extensively to calculate the design and off-design (matched) performance of a broad range of turbine engines, ranging from subsonic turboprops to variable cycle engines for supersonic transports. Recently, however, there has been increased interest in applications that NNEP is not capable of simulating, such as the use of alternate fuels, including cryogenic fuels, and the inclusion of chemical dissociation effects at high temperatures. To overcome these limitations, NNEP was extended by including a general chemical equilibrium method. This permits consideration of any propellant system and the calculation of performance with dissociation effects. The new extended program is referred to as NNEP89.
A computational imaging target specific detectivity metric
NASA Astrophysics Data System (ADS)
Preece, Bradley L.; Nehmetallah, George
2017-05-01
Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to have a major role in next-generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. Due to the diversity of CI system designs available today or proposed for the near future, there are significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we develop a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. The detectivity metric is therefore designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions of the systems to a minimum.
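The optimal linear matched-filter (Hotelling-style) SNR that the metric builds on reduces to a prewhitened inner product; a minimal sketch, with a toy white-noise covariance standing in for a real system's noise model:

```python
import numpy as np

def matched_filter_snr(signal, noise_cov):
    """Optimal linear matched-filter SNR for a known target signal s in
    noise with covariance C:  SNR = sqrt(s^T C^{-1} s)."""
    s = np.asarray(signal, dtype=float)
    C = np.asarray(noise_cov, dtype=float)
    return float(np.sqrt(s @ np.linalg.solve(C, s)))

# For white noise of variance sigma^2 this collapses to |s| / sigma.
s = np.array([1.0, 2.0, 2.0])            # target signature, |s| = 3
snr = matched_filter_snr(s, 4.0 * np.eye(3))  # sigma = 2, so SNR = 1.5
```

For a CI system, s and C would be expressed in the system's computational (measurement) space, which is where the standardization considerations enter.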
Durán-Grados, Vanesa; Mejías, Javier; Musina, Liliya; Moreno-Gutiérrez, Juan
2018-08-01
In this study we consider the problems associated with calculating ships' energy and emission inventories. Various related uncertainties are described in many similar studies published in the last decade, applying to Europe, the USA, and Canada. However, none of them has taken into account the performance of ships' propulsion systems. On the one hand, when a ship uses its propellers, there is no unanimous agreement on the equations used to calculate the main engines' load factor and, on the other, the performance of waterjet propulsion systems (for which this variable depends on the speed of the ship) has not been taken into account in any previous study. This paper proposes that the efficiency of the propulsion system should be included as a new parameter in the equation that defines the actual power delivered by a ship's main engines, as applied to calculate energy consumption and emissions in maritime transport. To highlight the influence of the propulsion system on calculated energy consumption and emissions, the bottom-up method has been applied using data from eight fast ferries operating across the Strait of Gibraltar over the course of one year. This study shows that the uncertainty about the efficiency of the propulsion system should be added as one more uncertainty in the energy and emission inventories for maritime transport as currently prepared. After comparing four methods for this calculation, the authors propose a new method for eight cases. For the calculation of the main engines' fuel oil consumption, differences of up to 22% between some methods were obtained at low loads. Copyright © 2018 Elsevier B.V. All rights reserved.
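The conventional propeller-law load-factor estimate that the paper questions for waterjet craft can be sketched as follows. The cube exponent and the SFOC value are common textbook assumptions, not the paper's fitted values:

```python
def load_factor_propeller(v, v_max, exponent=3.0):
    """Classic propeller-law estimate: main-engine load fraction scales as
    (V/V_max)**3.  Waterjet craft deviate from this, which is part of the
    uncertainty the study highlights."""
    return min(1.0, (v / v_max) ** exponent)

def fuel_per_hour(v, v_max, mcr_kw, sfoc_g_per_kwh=190.0):
    """Fuel burn (kg/h) = load factor * installed power * specific fuel
    oil consumption (SFOC value assumed for illustration)."""
    return load_factor_propeller(v, v_max) * mcr_kw * sfoc_g_per_kwh / 1000.0

# At half speed the cube law predicts one-eighth of full-power load,
# so small errors in the exponent produce large errors at low loads.
lf = load_factor_propeller(v=14.0, v_max=28.0)
```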
A system for 3D representation of burns and calculation of burnt skin area.
Prieto, María Felicidad; Acha, Begoña; Gómez-Cía, Tomás; Fondón, Irene; Serrano, Carmen
2011-11-01
In this paper a computer-based system for burnt surface area estimation (BAI) is presented. First, a 3D model of a patient, adapted to age, weight, gender, and constitution, is created. On this 3D model, physicians represent both the burns and the burn depth, allowing the burnt surface area to be calculated automatically by the system. Each patient's model, as well as photographs and burn area estimates, can be stored, so these data can be included in the patient's clinical records for further review. Validation of this system was performed. In a first experiment, artificial paper patches of known size were attached to different parts of the body in 37 volunteers. A panel of 5 experts diagnosed the extent of the patches using the Rule of Nines, while our system estimated the area of the "artificial burn". To test the null hypothesis, Student's t-test was applied to the collected data. In addition, the intraclass correlation coefficient (ICC) was calculated and a value of 0.9918 was obtained, demonstrating that the reliability of the program in calculating the area is 99%. In a second experiment, the burnt skin areas of 80 patients were calculated using the BAI system and the Rule of Nines. A comparison between these two measuring methods was performed via Student's t-test and the ICC. The hypothesis of null difference between both measures holds only for deep dermal burns, and the ICC is significantly different, indicating that area estimation by classical techniques can result in a wrong diagnosis of the burnt surface. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
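Once burns are painted onto a triangulated 3D model, the burnt-area percentage is just a ratio of summed triangle areas. A minimal sketch of that geometric step (not the BAI system's actual code):

```python
import math

def triangle_area(a, b, c):
    """Area of a 3D triangle: half the magnitude of the cross product
    of two edge vectors."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def burnt_percentage(mesh, burnt_ids):
    """TBSA-style figure: area of triangles marked burnt over total
    body-surface area of the mesh, as a percentage."""
    total = sum(triangle_area(*tri) for tri in mesh)
    burnt = sum(triangle_area(*mesh[i]) for i in burnt_ids)
    return 100.0 * burnt / total

# Two unit right triangles forming a square; marking one burnt gives 50%.
mesh = [((0, 0, 0), (1, 0, 0), (0, 1, 0)),
        ((1, 0, 0), (1, 1, 0), (0, 1, 0))]
pct = burnt_percentage(mesh, [0])
```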
Freckmann, Guido; Jendrike, Nina; Baumstark, Annette; Pleus, Stefan; Liebing, Christina; Haug, Cornelia
2018-04-01
The international standard ISO 15197:2013 requires a user performance evaluation to assess if intended users are able to obtain accurate blood glucose measurement results with a self-monitoring of blood glucose (SMBG) system. In this study, user performance was evaluated for four SMBG systems on the basis of ISO 15197:2013, and possibly related insulin dosing errors were calculated. Additionally, accuracy was assessed in the hands of study personnel. Accu-Chek® Performa Connect (A), Contour® Plus ONE (B), FreeStyle Optium Neo (C), and OneTouch Select® Plus (D) were evaluated with one test strip lot. After familiarization with the systems, subjects collected a capillary blood sample and performed an SMBG measurement. Study personnel observed the subjects' measurement technique. Then, study personnel performed SMBG measurements and comparison measurements. The number and percentage of SMBG measurements within ±15 mg/dl and ±15% of the comparison measurements at glucose concentrations < 100 and ≥ 100 mg/dl, respectively, were calculated. In addition, insulin dosing errors were modelled. In the hands of lay-users, three systems fulfilled ISO 15197:2013 accuracy criteria with the investigated test strip lot, showing 96% (A), 100% (B), and 98% (C) of results within the defined limits. All systems fulfilled minimum accuracy criteria in the hands of study personnel [99% (A), 100% (B), 99.5% (C), 96% (D)]. Measurements with all four systems were within zones of the consensus error grid and surveillance error grid associated with no or minimal risk. Regarding calculated insulin dosing errors, all 99% ranges were between dosing errors of -2.7 and +1.4 units for measurements in the hands of lay-users, and between -2.5 and +1.4 units for study personnel. Frequent lay-user errors were not checking the test strips' expiry date and applying blood incorrectly.
Data obtained in this study show that not all available SMBG systems complied with ISO 15197:2013 accuracy criteria when measurements were performed by lay-users. The study was registered at ClinicalTrials.gov (NCT02916576). Ascensia Diabetes Care Deutschland GmbH.
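A simple correction-bolus rule makes the link between glucose measurement error and insulin dosing error concrete. The target and insulin sensitivity factor below are assumed round numbers, not the study's dosing model:

```python
def bolus_units(glucose_mg_dl, target_mg_dl=120.0, isf_mg_dl_per_u=50.0):
    """Assumed correction-bolus rule: units = (glucose - target) / ISF,
    floored at zero.  Both parameters are illustrative placeholders."""
    return max(0.0, (glucose_mg_dl - target_mg_dl) / isf_mg_dl_per_u)

def dosing_error(measured, reference, **kw):
    """Insulin dosing error induced by the meter's measurement error:
    bolus from the measured value minus bolus from the true value."""
    return bolus_units(measured, **kw) - bolus_units(reference, **kw)

# A +25 mg/dl measurement bias at high glucose yields a +0.5 U overdose
# under these assumed parameters.
err = dosing_error(275.0, 250.0)
```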
Moradi, Farhad; Mahdavi, Seyed Rabi; Mostaar, Ahmad; Motamedi, Mohsen
2012-01-01
In this study the commissioning of a dose calculation algorithm in a currently used treatment planning system was performed and the calculation accuracy of two available methods in the treatment planning system i.e., collapsed cone convolution (CCC) and equivalent tissue air ratio (ETAR) was verified in tissue heterogeneities. For this purpose an inhomogeneous phantom (IMRT thorax phantom) was used and dose curves obtained by the TPS (treatment planning system) were compared with experimental measurements and Monte Carlo (MCNP code) simulation. Dose measurements were performed by using EDR2 radiographic films within the phantom. Dose difference (DD) between experimental results and two calculation methods was obtained. Results indicate maximum difference of 12% in the lung and 3% in the bone tissue of the phantom between two methods and the CCC algorithm shows more accurate depth dose curves in tissue heterogeneities. Simulation results show the accurate dose estimation by MCNP4C in soft tissue region of the phantom and also better results than ETAR method in bone and lung tissues. PMID:22973081
An experimental SMI adaptive antenna array for weak interfering signals
NASA Technical Reports Server (NTRS)
Dilsavor, R. L.; Gupta, I. J.
1989-01-01
A modified sample matrix inversion (SMI) algorithm designed to increase the suppression of weak interference is implemented on an existing experimental array system. The algorithm itself is fully described, as are a number of issues concerning its implementation and evaluation, such as sample scaling, snapshot formation, weight normalization, power calculation, and system calibration. Several experiments show that the steady-state performance (i.e., when many snapshots are used to calculate the array weights) of the experimental system compares favorably with its theoretical performance. It is demonstrated that standard SMI does not yield adequate suppression of weak interference. Modified SMI is then used to experimentally increase this suppression by as much as 13 dB.
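The core SMI step, estimating the covariance from snapshots and solving for the array weights, can be sketched as follows. The diag_load knob is only a placeholder for covariance modifications of the kind the paper studies, not its exact modified-SMI algorithm:

```python
import numpy as np

def smi_weights(snapshots, steering, diag_load=0.0):
    """Sample matrix inversion: estimate the covariance R from snapshots,
    solve R w = s, and normalize for unit response in the steering
    direction.  diag_load is a generic covariance adjustment (assumption,
    not the paper's modification)."""
    X = np.asarray(snapshots)                 # elements x snapshots
    R = X @ X.conj().T / X.shape[1]           # sample covariance
    R = R + diag_load * np.eye(R.shape[0])
    w = np.linalg.solve(R, steering)
    return w / (steering.conj() @ w)          # unit gain on steering vector

# Toy 2-element array: weak noise on element 1, strong interference on
# element 2; the weights keep the steering response and null element 2.
rng = np.random.default_rng(1)
X = np.vstack([0.1 * rng.standard_normal(200),   # element 1: receiver noise
               1.0 * rng.standard_normal(200)])  # element 2: interference
w = smi_weights(X, steering=np.array([1.0, 0.0]))
```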
Numerical Simulation of Measurements during the Reactor Physical Startup at Unit 3 of Rostov NPP
NASA Astrophysics Data System (ADS)
Tereshonok, V. A.; Kryakvin, L. V.; Pitilimov, V. A.; Karpov, S. A.; Kulikov, V. I.; Zhylmaganbetov, N. M.; Kavun, O. Yu.; Popykin, A. I.; Shevchenko, R. A.; Shevchenko, S. A.; Semenova, T. V.
2017-12-01
The results of numerical calculations and measurements of some reactor parameters during the physical startup tests at unit 3 of Rostov NPP are presented. The following parameters are considered: the critical boric acid concentration and the currents from ionization chambers (IC) during the scram system efficiency evaluation. The scram system efficiency was determined using the inverse point kinetics equation with the measured and simulated IC currents. The results of steady-state calculations of relative power distribution and efficiency of the scram system and separate groups of control rods of the control and protection system are also presented. The calculations are performed using several codes, including precision ones.
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
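The kind of Bayesian update DATMAN automates can be illustrated with the conjugate Gamma-Poisson model for a component failure rate; the prior values below are arbitrary:

```python
def update_failure_rate(prior_shape, prior_rate, failures, exposure_hours):
    """Conjugate Gamma-Poisson update for a constant failure rate:
    with a Gamma(a, b) prior and k failures in T hours of exposure,
    the posterior is Gamma(a + k, b + T)."""
    post_shape = prior_shape + failures
    post_rate = prior_rate + exposure_hours
    return post_shape, post_rate, post_shape / post_rate  # mean rate (1/h)

# Vague prior with mean 1e-4 /h, then 2 observed failures in 10,000 h:
# the posterior mean shifts toward the observed rate of 2e-4 /h.
a, b, mean_rate = update_failure_rate(1.0, 10_000.0,
                                      failures=2, exposure_hours=10_000.0)
```

Each new batch of failure data simply feeds the posterior back in as the next prior, which is the repeated bookkeeping the abstract calls tedious.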
The number processing and calculation system: evidence from cognitive neuropsychology.
Salguero-Alcañiz, M P; Alameda-Bailén, J R
2015-04-01
Cognitive neuropsychology focuses on the concepts of dissociation and double dissociation. The performance of number processing and calculation tasks by patients with acquired brain injury can be used to characterise the way in which the healthy cognitive system manipulates number symbols and quantities. The objective of this study is to determine the components of the numerical processing and calculation system. Participants consisted of 6 patients with acquired brain injuries in different cerebral localisations. We used Batería de evaluación del procesamiento numérico y el cálculo, a battery assessing number processing and calculation. Data was analysed using the difference in proportions test. Quantitative numerical knowledge is independent from number transcoding, qualitative numerical knowledge, and calculation. Recodification is independent from qualitative numerical knowledge and calculation. Quantitative numerical knowledge and calculation are also independent functions. The number processing and calculation system comprises at least 4 components that operate independently: quantitative numerical knowledge, number transcoding, qualitative numerical knowledge, and calculation. Therefore, each one may be damaged selectively without affecting the functioning of another. According to the main models of number processing and calculation, each component has different characteristics and cerebral localisations. Copyright © 2013 Sociedad Española de Neurología. Published by Elsevier Espana. All rights reserved.
Estimation and detection information trade-off for x-ray system optimization
NASA Astrophysics Data System (ADS)
Cushing, Johnathan B.; Clarkson, Eric W.; Mandava, Sagar; Bilgin, Ali
2016-05-01
X-ray Computed Tomography (CT) systems perform complex imaging tasks to detect and estimate system parameters, such as a baggage imaging system performing threat detection and generating reconstructions. This leads to a desire to optimize both the detection and estimation performance of a system, but most metrics focus on only one of these aspects. When making design choices there is a need for a concise metric which considers both detection and estimation information parameters and then provides the user with the collection of possible optimal outcomes. In this paper a graphical analysis of the Estimation and Detection Information Trade-off (EDIT) is explored. EDIT produces curves which allow a decision to be made for system optimization based on design constraints and the costs associated with estimation and detection. EDIT analyzes the system in the estimation-information and detection-information space, where the user is free to pick their own method of calculating these measures. The user of EDIT can choose any desired figure of merit for detection information and estimation information, and the EDIT curves will then provide the collection of optimal outcomes. The paper first looks at two methods of creating EDIT curves: the curves can be calculated over a wide variety of systems by maximizing a figure of merit, or EDIT can be found as an upper bound of the information from a collection of systems. These two methods allow the user to choose the method of calculation which best fits the constraints of their actual system.
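The "collection of optimal outcomes" is a Pareto frontier in the detection-information/estimation-information plane; a minimal sketch with made-up system points:

```python
def pareto_frontier(points):
    """Keep the (detection_info, estimation_info) points not dominated in
    both coordinates -- the collection of optimal outcomes an EDIT-style
    curve presents to the designer."""
    pts = sorted(points, reverse=True)   # sort by detection info, descending
    frontier, best_est = [], float("-inf")
    for det, est in pts:
        if est > best_est:               # strictly better estimation info
            frontier.append((det, est))
            best_est = est
    return frontier[::-1]                # ascending in detection info

# Hypothetical candidate systems evaluated in information space.
systems = [(1.0, 3.0), (2.0, 2.5), (2.0, 1.0), (3.0, 1.5), (0.5, 2.0)]
front = pareto_frontier(systems)
```

Design constraints and costs then select a single operating point along this frontier.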
GMXPBSA 2.0: A GROMACS tool to perform MM/PBSA and computational alanine scanning
NASA Astrophysics Data System (ADS)
Paissoni, C.; Spiliotopoulos, D.; Musco, G.; Spitaleri, A.
2014-11-01
GMXPBSA 2.0 is a user-friendly suite of Bash/Perl scripts for streamlining MM/PBSA calculations on structural ensembles derived from GROMACS trajectories, to automatically calculate binding free energies for protein-protein or ligand-protein complexes. GMXPBSA 2.0 is flexible and can easily be customized to specific needs. Additionally, it performs computational alanine scanning (CAS) to study the effects of ligand and/or receptor alanine mutations on the free energy of binding. The calculations require only protein-protein or protein-ligand MD simulations. GMXPBSA 2.0 performs different comparative analyses, including a posteriori generation of alanine mutants of the wild-type complex, calculation of the binding free energy values of the mutant complexes, and comparison of the results with the wild-type system. Moreover, it compares the binding free energies of trajectories of different complexes, allowing the study of the effects of non-alanine mutations, post-translational modifications, or unnatural amino acids on the binding free energy of the system under investigation. Finally, it can calculate and rank relative affinities to the same receptor utilizing MD simulations of proteins in complex with different ligands. In order to dissect the different MM/PBSA energy contributions, including the molecular mechanics (MM) term, the electrostatic contribution to solvation (PB), and the nonpolar contribution to solvation (SA), the tool combines two freely available programs: the MD simulation software GROMACS and the Poisson-Boltzmann equation solver APBS. All the calculations can be performed in single or distributed automatic fashion on a cluster facility, speeding up the calculation by dividing frames across the available processors. The program is freely available under the GPL license.
Parametric Studies of the Ejector Process within a Turbine-Based Combined-Cycle Propulsion System
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Walker, James F.; Trefny, Charles J.
1999-01-01
Performance characteristics of the ejector process within a turbine-based combined-cycle (TBCC) propulsion system are investigated using the NPARC Navier-Stokes code. The TBCC concept integrates a turbine engine with a ramjet into a single propulsion system that may efficiently operate from takeoff to high Mach number cruise. At the operating point considered, corresponding to a flight Mach number of 2.0, an ejector serves to mix flow from the ramjet duct with flow from the turbine engine. The combined flow then passes through a diffuser where it is mixed with hydrogen fuel and burned. Three sets of fully turbulent Navier-Stokes calculations are compared with predictions from a cycle code developed specifically for the TBCC propulsion system. A baseline ejector system is investigated first. The Navier-Stokes calculations indicate that the flow leaving the ejector is not completely mixed, which may adversely affect the overall system performance. Two additional sets of calculations are presented; one set that investigated a longer ejector region (to enhance mixing) and a second set which also utilized the longer ejector but replaced the no-slip surfaces of the ejector with slip (inviscid) walls in order to resolve discrepancies with the cycle code. The three sets of Navier-Stokes calculations and the TBCC cycle code predictions are compared to determine the validity of each of the modeling approaches.
Solar space- and water-heating system at Stanford University. Central Food Services Building
NASA Astrophysics Data System (ADS)
1980-05-01
The closed-loop drain-back system is described as offering dependability of gravity drain-back freeze protection, low maintenance, minimal costs, and simplicity. The system features an 840 square-foot collector and storage capacity of 1550 gallons. The acceptance testing and the predicted system performance data are briefly described. Solar performance calculations were performed using a computer design program (FCHART). Bidding, costs, and economics of the system are reviewed. Problems are discussed and solutions and recommendations given. An operation and maintenance manual is given.
NASA Astrophysics Data System (ADS)
Susmikanti, Mike; Dewayatna, Winter; Sulistyo, Yos
2014-09-01
One of the research activities in support of the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. An FPM target is a tube made of stainless steel which contains nuclear-grade high-enrichment uranium; the irradiated tube is intended to yield fission products. Fission products such as Mo-99 are widely used in the form of kits in the medical world. Mo isotopes have relatively long half-lives, about 3 days (66 hours), so delivery of radioisotopes to consumer centers and storage is possible, though still limited, and production of this isotope potentially carries significant economic value. The neutronics problem is solved using first-order perturbation theory derived from the four-group diffusion equation. The criticality and flux in the multigroup diffusion model were calculated for various irradiation positions and uranium contents. This model involves complex computation with a large, sparse matrix system, and several parallel algorithms have been developed for solving such sparse, large matrices. In this paper, a successive over-relaxation (SOR) algorithm, whose sweeps can be performed in parallel, was implemented for the calculation of reactivity coefficients; previous work performed the reactivity calculations serially with Gauss-Seidel iteration. The parallel method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficients. In this research a computer code was developed that exploits parallel processing to perform reactivity calculations for use in safety analysis; parallel processing on a multicore computer system allows the calculations to be performed more quickly. The code was applied to calculate the safety limits of irradiated FPM targets containing highly enriched uranium.
The neutron calculation results show that for uranium contents of 1.7676 g and 6.1866 g (×10⁶ cm⁻¹) in a tube, the delta reactivities are still within the safety limits; however, for 7.9542 g and 8.838 g (×10⁶ cm⁻¹) the limits are exceeded.
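The SOR sweep at the heart of the parallel reactivity calculation can be sketched as follows; the 3×3 system here is a toy stand-in for the large, sparse multigroup diffusion matrix, and the relaxation factor ω = 1.5 is an assumption (the abstract does not give one):

```python
import numpy as np

def sor_solve(A, b, omega=1.5, tol=1e-10, max_iter=10_000):
    """Successive over-relaxation for A x = b (A must have a nonzero diagonal)."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            # Split the row into already-updated and not-yet-updated parts
            sigma = A[i, :i] @ x[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (1 - omega) * x_old[i] + omega * (b[i] - sigma) / A[i, i]
        if np.linalg.norm(x - x_old, ord=np.inf) < tol:
            break
    return x

# Toy diagonally dominant system standing in for a discretized diffusion operator
A = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
b = np.array([1.0, 2.0, 1.0])
x = sor_solve(A, b)
```

In the parallel setting described by the abstract, independent rows (e.g., a red-black ordering of the mesh) can be updated concurrently within each sweep.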
Autonomous safety and reliability features of the K-1 avionics system
NASA Astrophysics Data System (ADS)
Mueller, George E.; Kohrs, Dick; Bailey, Richard; Lai, Gary
2004-03-01
Kistler Aerospace Corporation is developing the K-1, a fully reusable, two-stage-to-orbit launch vehicle. Both stages return to the launch site using parachutes and airbags. Initial flight operations will occur from Woomera, Australia. K-1 guidance is performed autonomously. Each stage of the K-1 employs a triplex, fault tolerant avionics architecture, including three fault tolerant computers and three radiation hardened Embedded GPS/INS units with a hardware voter. The K-1 has an Integrated Vehicle Health Management (IVHM) system on each stage residing in the three vehicle computers based on similar systems in commercial aircraft. During first-stage ascent, the IVHM system performs an Instantaneous Impact Prediction (IIP) calculation 25 times per second, initiating an abort in the event the vehicle is outside a predetermined safety corridor for at least 3 consecutive calculations. In this event, commands are issued to terminate thrust, separate the stages, dump all propellant in the first-stage, and initiate a normal landing sequence. The second-stage flight computer calculates its ability to reach orbit along its state vector, initiating an abort sequence similar to the first stage if it cannot. On a nominal mission, following separation, the second-stage also performs calculations to assure its impact point is within a safety corridor. The K-1's guidance and control design is being tested through simulation with hardware-in-the-loop at Draper Laboratory. Kistler's verification strategy assures reliable and safe operation of the K-1.
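The abort rule described above (at least three consecutive out-of-corridor IIP results at 25 calculations per second) reduces to a simple streak counter. The sketch below assumes each flag represents one 40 ms IIP evaluation; it is an illustration, not Kistler's flight code:

```python
ABORT_THRESHOLD = 3  # consecutive out-of-corridor IIP results, per the abstract

def should_abort(in_corridor_flags):
    """Return True once the IIP falls outside the safety corridor for
    ABORT_THRESHOLD consecutive 25 Hz calculations."""
    streak = 0
    for ok in in_corridor_flags:
        streak = 0 if ok else streak + 1
        if streak >= ABORT_THRESHOLD:
            return True
    return False

# Three consecutive bad results trigger an abort; isolated ones do not.
abort_a = should_abort([True, False, False, False])
abort_b = should_abort([False, False, True, False, False])
```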
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leal, L.C.; Deen, J.R.; Woodruff, W.L.
1995-02-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment for Research and Test Reactors (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the procedure to generate cross-section libraries for reactor analyses and calculations utilizing the WIMSD4M code. To do so, the results of calculations performed with group cross-section data generated with the WIMSD4M code are compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected critical spheres, the TRX critical experiments, and calculations of a modified Los Alamos highly-enriched heavy-water-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code TWODANT using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Wang, Lei; Troyer, Matthias
2014-09-12
We present a new algorithm for calculating the Renyi entanglement entropy of interacting fermions using the continuous-time quantum Monte Carlo method. The algorithm only samples the interaction correction of the entanglement entropy, which by design ensures the efficient calculation of weakly interacting systems. Combined with Monte Carlo reweighting, the algorithm also performs well for systems with strong interactions. We demonstrate the potential of this method by studying the quantum entanglement signatures of the charge-density-wave transition of interacting fermions on a square lattice.
Performance Test Data Analysis of Scintillation Cameras
NASA Astrophysics Data System (ADS)
Demirkaya, Omer; Mazrou, Refaat Al
2007-10-01
In this paper, we present a set of image analysis tools to calculate the performance parameters of gamma camera systems from test data acquired according to the National Electrical Manufacturers Association (NEMA) NU 1-2001 guidelines. The calculation methods are either completely automated or require minimal user interaction, minimizing potential human errors. The developed methods are robust with respect to the varying conditions under which these tests may be performed. The core algorithms have been validated for accuracy and have been extensively tested on images acquired by gamma cameras from different vendors. All the algorithms are incorporated into a graphical user interface that provides a convenient way to process the data and report the results. The entire application has been developed in the MATLAB programming environment and is compiled to run as a stand-alone program. The developed image analysis tools provide an automated, convenient, and accurate means to calculate the performance parameters of gamma cameras and SPECT systems. The developed application is available upon request for personal or non-commercial use. The results of this study have been partially presented at the Society of Nuclear Medicine Annual Meeting as an InfoSNM presentation.
Development of GUI Type On-Line Condition Monitoring Program for a Turboprop Engine Using Labview
NASA Astrophysics Data System (ADS)
Kong, Changduk; Kim, Keonwoo
2011-12-01
Recently, aero gas turbine health monitoring systems have been developed for precaution and maintenance against faults or performance degradation of advanced propulsion systems, which operate in severe environments such as high altitude, foreign-object-damage particles, and hot, heavy rain and snowy atmospheric conditions. To establish such a health monitoring system, an on-line condition monitoring program is required first, and the program must monitor the engine performance trend through comparison between measured engine performance data and base performance results calculated by a base engine performance model. This work aims to develop a GUI-type on-line condition monitoring program, using LabVIEW, for the PT6A-67 turboprop engine of a high-altitude, long-endurance UAV. The base engine performance of the on-line condition monitoring program is simulated using component maps inversely generated from the limited performance deck data provided by the engine manufacturer. The base engine performance simulation program is considered validated because its analysis results agree well with the performance deck data. The proposed on-line condition monitoring program can monitor the real engine performance as well as its trend through precise comparison between clean engine performance results calculated by the base performance simulation program and measured engine performance signals. In the development phase of this monitoring system, a signal generation module is proposed to evaluate the proposed on-line monitoring system. For user friendliness, all monitoring programs are coded in LabVIEW, and monitoring examples are demonstrated using the proposed GUI-type on-line condition monitoring program.
Sensitivity Analysis for Steady State Groundwater Flow Using Adjoint Operators
NASA Astrophysics Data System (ADS)
Sykes, J. F.; Wilson, J. L.; Andrews, R. W.
1985-03-01
Adjoint sensitivity theory is currently being considered as a potential method for calculating the sensitivity of nuclear waste repository performance measures to the parameters of the system. For groundwater flow systems, performance measures of interest include piezometric heads in the vicinity of a waste site, velocities or travel time in aquifers, and mass discharge to biosphere points. The parameters include recharge-discharge rates, prescribed boundary heads or fluxes, formation thicknesses, and hydraulic conductivities. The derivative of a performance measure with respect to the system parameters is usually taken as a measure of sensitivity. To calculate sensitivities, adjoint sensitivity equations are formulated from the equations describing the primary problem. The solution of the primary problem and the adjoint sensitivity problem enables the determination of all of the required derivatives and hence related sensitivity coefficients. In this study, adjoint sensitivity theory is developed for equations of two-dimensional steady state flow in a confined aquifer. Both the primary flow equation and the adjoint sensitivity equation are solved using the Galerkin finite element method. The developed computer code is used to investigate the regional flow parameters of the Leadville Formation of the Paradox Basin in Utah. The results illustrate the sensitivity of calculated local heads to the boundary conditions. Alternatively, local velocity related performance measures are more sensitive to hydraulic conductivities.
A new BP Fourier algorithm and its application in English teaching evaluation
NASA Astrophysics Data System (ADS)
Pei, Xuehui; Pei, Guixin
2017-08-01
The BP neural network algorithm has wide adaptability and accuracy when used in complicated system evaluation, but calculation defects such as slow convergence have limited its practical application. This paper tries to speed up the convergence of the BP neural network algorithm using Fourier basis functions and presents a new BP Fourier algorithm for complicated system evaluation. First, the shortcomings and working principle of the BP algorithm are analyzed to target subsequent improvements. Second, the presented BP Fourier algorithm adopts Fourier basis functions to simplify the calculation structure, designs a new transfer function between the input and output layers, and provides theoretical analysis to prove the efficiency of the presented algorithm. Finally, the presented algorithm is used to evaluate university English teaching, and the application results show that the BP Fourier algorithm has better calculation efficiency and evaluation accuracy and can be used practically for evaluating complicated systems.
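The abstract does not give the exact transfer function, but the core idea, replacing slow iterative training with a Fourier-basis calculation structure, can be illustrated by fitting a truncated Fourier expansion in closed form instead of running gradient-descent epochs. The harmonic count and the toy target curve below are assumptions for illustration only:

```python
import numpy as np

def fourier_features(x, n_harmonics=3):
    """Design matrix with basis [1, sin(kx), cos(kx)] for k = 1..n_harmonics."""
    cols = [np.ones_like(x)]
    for k in range(1, n_harmonics + 1):
        cols += [np.sin(k * x), np.cos(k * x)]
    return np.column_stack(cols)

# Toy "evaluation score" curve standing in for teaching-evaluation data
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + 0.5 * np.cos(2 * x)

# One linear least-squares solve replaces many slow BP training iterations
w, *_ = np.linalg.lstsq(fourier_features(x), y, rcond=None)
pred = fourier_features(x) @ w
```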
Global optimization method based on ray tracing to achieve optimum figure error compensation
NASA Astrophysics Data System (ADS)
Liu, Xiaolin; Guo, Xuejia; Tang, Tianjin
2017-02-01
Figure error degrades the performance of an optical system. When predicting performance and performing system assembly, compensation by clocking optical components around the optical axis is a conventional but user-dependent method, and commercial optical software cannot optimize this clocking. Meanwhile, existing automatic figure-error balancing methods can introduce approximation error, and building their optimization models is complex and time-consuming. To overcome these limitations, an accurate and automatic global optimization method for figure-error balancing is proposed. The method is based on precise ray tracing, not approximate calculation, to compute the wavefront error under a given combination of element rotation angles. The composite wavefront error root-mean-square (RMS) acts as the cost function, and a simulated annealing algorithm seeks the optimal combination of rotation angles of each optical element. This method can be applied to all rotationally symmetric optics. Optimization results show that this method is 49% better than the previous approximate analytical method.
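A minimal sketch of the annealing loop described above, with a hypothetical two-element cost standing in for the ray-traced composite wavefront RMS; the cost form, step size, and cooling schedule are all assumptions, not values from the paper:

```python
import math
import random

def composite_rms(angles):
    """Toy stand-in for the ray-traced composite wavefront RMS (assumed form):
    each element contributes a sinusoidal figure-error term vs. its clocking."""
    return (1.5 + math.cos(angles[0] - 0.7) + math.cos(angles[1] + 2.1)) ** 2

def anneal(cost, n_angles=2, t0=1.0, cooling=0.995, steps=5000, seed=1):
    rng = random.Random(seed)
    x = [rng.uniform(0, 2 * math.pi) for _ in range(n_angles)]
    best, best_cost, t = list(x), cost(x), t0
    for _ in range(steps):
        # Propose a small random clocking change for every element
        cand = [(a + rng.gauss(0, 0.3)) % (2 * math.pi) for a in x]
        d = cost(cand) - cost(x)
        if d < 0 or rng.random() < math.exp(-d / t):
            x = cand
            if cost(x) < best_cost:
                best, best_cost = list(x), cost(x)
        t *= cooling  # geometric cooling
    return best, best_cost

angles, rms = anneal(composite_rms)
```

In the paper's setting, `composite_rms` would instead invoke the precise ray trace at the candidate rotation angles.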
SU-E-J-199: A Software Tool for Quality Assurance of Online Replanning with MR-Linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, G; Ahunbay, E; Li, X
2015-06-15
Purpose: To develop a quality assurance software tool, ArtQA, capable of automatically checking radiation treatment plan parameters, verifying plan data transfer from the treatment planning system (TPS) to the record and verify (R&V) system, performing a secondary MU calculation considering the effect of the magnetic field from the MR-Linac, and verifying delivery and plan consistency, for online replanning. Methods: ArtQA was developed by creating interfaces to the TPS (e.g., Monaco, Elekta), the R&V system (Mosaiq, Elekta), and a secondary MU calculation system. The tool obtains plan parameters from the TPS via direct file reading, and retrieves plan data both transferred from the TPS and recorded during the actual delivery in the R&V system database via open database connectivity and structured query language. By comparing beam/plan datasets in the different systems, ArtQA detects and outputs discrepancies between the TPS, R&V system, secondary MU calculation system, and delivery. To consider the effect of the 1.5 T transverse magnetic field from the MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. Results: ArtQA is capable of automatically checking plan integrity and logic consistency, detecting plan data transfer errors, performing secondary MU calculations with or without a transverse magnetic field, and verifying treatment delivery. The tool is efficient and effective for pre- and post-treatment QA checks of all available treatment parameters that may be impractical with the commonly used visual inspection. Conclusion: The software tool ArtQA can be used for quick and automatic pre- and post-treatment QA checks, eliminating human error associated with visual inspection.
While this tool was developed for online replanning on the MR-Linac, where QA must be performed rapidly while the patient is lying on the table waiting for treatment, ArtQA can be used as a general QA tool in radiation oncology practice. This work is partially supported by Elekta Inc.
NASA Astrophysics Data System (ADS)
Alves Júnior, A. A.; Sokoloff, M. D.
2017-10-01
MCBooster is a header-only, C++11-compliant library that provides routines to generate and perform calculations on large samples of phase-space Monte Carlo events. To achieve superior performance, MCBooster is capable of performing most of its calculations in parallel using CUDA- and OpenMP-enabled devices. MCBooster is built on top of the Thrust library and runs on Linux systems. This contribution summarizes the main features of MCBooster. A basic description of the user interface and some examples of applications are provided, along with measurements of performance in a variety of environments.
Dosimetry study for a new in vivo X-ray fluorescence (XRF) bone lead measurement system
NASA Astrophysics Data System (ADS)
Nie, Huiling; Chettle, David; Luo, Liqiang; O'Meara, Joanne
2007-10-01
A new ¹⁰⁹Cd γ-ray induced bone lead measurement system has been developed to reduce the minimum detectable limit (MDL) of the measurement. The system consists of four 16 mm diameter detectors and requires a stronger source than the "conventional" system. A dosimetry study has been performed, using human-equivalent phantoms, to estimate the dose delivered by this system. Three sets of phantoms were made to estimate the dose delivered to three age groups: 5-year-olds, 10-year-olds, and adults. Three approaches were applied to evaluate the dose: analytical calculations, Monte Carlo (MC) simulations, and experiments; experimental results and analytical calculations were used to validate the MC simulation. The experiments were performed by placing Panasonic UD-803AS TLDs at different places in the phantoms, representing different organs. Because it is difficult to obtain the organ dose and the whole-body dose solely from experiments and traditional calculations, the equivalent dose and effective dose were calculated by MC simulations. The results showed that the doses delivered to organs other than the targeted lower leg are negligibly small. The total effective doses to the three age groups are 8.45/9.37 μSv (female/male), 4.20 μSv, and 0.26 μSv for the 5-year-old, 10-year-old, and adult groups, respectively. Approval to conduct human measurements with this system has been received from the Research Ethics Board based on this research.
Alagoz, Baris Baykant; Deniz, Furkan Nur; Keles, Cemal; Tan, Nusret
2015-03-01
This study investigates disturbance rejection capacity of closed loop control systems by means of reference to disturbance ratio (RDR). The RDR analysis calculates the ratio of reference signal energy to disturbance signal energy at the system output and provides a quantitative evaluation of disturbance rejection performance of control systems on the bases of communication channel limitations. Essentially, RDR provides a straightforward analytical method for the comparison and improvement of implicit disturbance rejection capacity of closed loop control systems. Theoretical analyses demonstrate us that RDR of the negative feedback closed loop control systems are determined by energy spectral density of controller transfer function. In this manner, authors derived design criteria for specifications of disturbance rejection performances of PID and fractional order PID (FOPID) controller structures. RDR spectra are calculated for investigation of frequency dependence of disturbance rejection capacity and spectral RDR analyses are carried out for PID and FOPID controllers. For the validation of theoretical results, simulation examples are presented. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
Simulation of car movement along circular path
NASA Astrophysics Data System (ADS)
Fedotov, A. I.; Tikhov-Tinnikov, D. A.; Ovchinnikova, N. I.; Lysenko, A. V.
2017-10-01
Under operating conditions, suspension system performance changes, which negatively affects vehicle stability and handling. The paper aims to simulate the impact of changes in suspension system performance on vehicle stability and handling. Methods: the paper reviews monitoring of suspension system performance, testing of vehicle stability and handling, and methods of monitoring suspension system performance under operating conditions. A mathematical model of car movement along a circular path was developed, with mathematical tools describing the circular movement of a vehicle along a horizontal road. Turning maneuvers were simulated, and calculation and experiment results were compared. The simulation proves the applicability of the mathematical model for assessing the impact of suspension system performance on vehicle stability and handling.
CET89 - CHEMICAL EQUILIBRIUM WITH TRANSPORT PROPERTIES, 1989
NASA Technical Reports Server (NTRS)
Mcbride, B.
1994-01-01
Scientists and engineers need chemical equilibrium composition data to calculate the theoretical thermodynamic properties of a chemical system. This information is essential in the design and analysis of equipment such as compressors, turbines, nozzles, engines, shock tubes, heat exchangers, and chemical processing equipment. The substantial amount of numerical computation required to obtain equilibrium compositions and transport properties for complex chemical systems led scientists at NASA's Lewis Research Center to develop CET89, a program designed to calculate the thermodynamic and transport properties of these systems. CET89 is a general program which will calculate chemical equilibrium compositions and mixture properties for any chemical system with available thermodynamic data. Generally, mixtures may include condensed and gaseous products. CET89 performs the following operations: it 1) obtains chemical equilibrium compositions for assigned thermodynamic states, 2) calculates dilute-gas transport properties of complex chemical mixtures, 3) obtains Chapman-Jouguet detonation properties for gaseous species, 4) calculates incident and reflected shock properties in terms of assigned velocities, and 5) calculates theoretical rocket performance for both equilibrium and frozen compositions during expansion. The rocket performance function allows the option of assuming either a finite area or an infinite area combustor. CET89 accommodates problems involving up to 24 reactants, 20 elements, and 600 products (400 of which may be condensed). The program includes a library of thermodynamic and transport properties in the form of least squares coefficients for possible reaction products. It includes thermodynamic data for over 1300 gaseous and condensed species and transport data for 151 gases. The subroutines UTHERM and UTRAN convert thermodynamic and transport data to unformatted form for faster processing. 
The program conforms to the FORTRAN 77 standard, except for some input in NAMELIST format. It requires about 423 KB memory, and is designed to be used on mainframe, workstation, and mini computers. Due to its memory requirements, this program does not readily lend itself to implementation on MS-DOS based machines.
Performance modeling for large database systems
NASA Astrophysics Data System (ADS)
Schaar, Stephen; Hum, Frank; Romano, Joe
1997-02-01
One of the unique approaches Science Applications International Corporation took to meet performance requirements was to start the modeling effort during the proposal phase of the Interstate Identification Index/Federal Bureau of Investigations (III/FBI) project. The III/FBI Performance Model uses analytical modeling techniques to represent the III/FBI system. Inputs to the model include workloads for each transaction type, record size for each record type, number of records for each file, hardware envelope characteristics, engineering margins and estimates for software instructions, memory, and I/O for each transaction type. The model uses queuing theory to calculate the average transaction queue length. The model calculates a response time and the resources needed for each transaction type. Outputs of the model include the total resources needed for the system, a hardware configuration, and projected inherent and operational availability. The III/FBI Performance Model is used to evaluate what-if scenarios and allows a rapid response to engineering change proposals and technical enhancements.
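Queuing formulas of the kind such an analytical model rests on can be illustrated with the simplest case, an M/M/1 queue; the arrival and service rates below are illustrative values, not III/FBI workload figures:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Average number in system and mean response time for an M/M/1 queue.
    Requires utilization rho = arrival_rate / service_rate < 1."""
    rho = arrival_rate / service_rate
    if rho >= 1:
        raise ValueError("system is unstable: utilization >= 1")
    avg_in_system = rho / (1 - rho)               # L = rho / (1 - rho)
    response_time = 1 / (service_rate - arrival_rate)  # T = 1 / (mu - lambda)
    return avg_in_system, response_time

# Illustrative: 8 transactions/s arriving, server handles 10/s
L, T = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
```

Little's law (L = λT) ties the two outputs together, which is the same relationship a performance model uses to turn per-transaction resource estimates into queue lengths and response times.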
SU-E-T-465: Dose Calculation Method for Dynamic Tumor Tracking Using a Gimbal-Mounted Linac
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sugimoto, S; Inoue, T; Kurokawa, C
Purpose: Dynamic tumor tracking using the gimbal-mounted linac (Vero4DRT, Mitsubishi Heavy Industries, Ltd., Japan) is available when respiratory motion is significant, and the irradiation accuracy of dynamic tumor tracking has been reported to be excellent. In addition to irradiation accuracy, a fast and accurate dose calculation algorithm is needed to validate the dose distribution in the presence of respiratory motion, because its multiple phases have to be considered. A modification of the dose calculation algorithm is necessary for the gimbal-mounted linac due to the degrees of freedom of the gimbal swing. The dose calculation algorithm for the gimbal motion was implemented using linear transformations between coordinate systems. Methods: Linear transformation matrices between the coordinate systems with and without gimbal swings were constructed using combinations of translation and rotation matrices, adopting the coordinate system in which the radiation source is at the origin and the beam axis lies along the z axis. The transformation can be divided into the translation from the radiation source to the gimbal rotation center, the two rotations around the center corresponding to the gimbal swings, and the translation from the gimbal center back to the radiation source. After applying the transformation matrix to the phantom or patient image, the dose calculation can be performed as if there were no gimbal swing. The algorithm was implemented in the treatment planning system PlanUNC (University of North Carolina, NC), using the convolution/superposition algorithm. Dose calculations with and without gimbal swings were performed for a 3 × 3 cm² field with a grid size of 5 mm. Results: The calculation time was about 3 minutes per beam, with no significant additional time due to the gimbal swing. Conclusions: The dose calculation algorithm for a finite gimbal swing was implemented, and the calculation time was moderate.
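The chain of translations and rotations described in Methods can be sketched with homogeneous 4×4 matrices; the axis assignments, choice of swing axes, and source-to-center distance here are assumptions for illustration, not the actual Vero4DRT geometry:

```python
import numpy as np

def translation(v):
    T = np.eye(4)
    T[:3, 3] = v
    return T

def rot_x(a):  # rotation about x (one gimbal swing axis, assumed)
    c, s = np.cos(a), np.sin(a)
    R = np.eye(4)
    R[1:3, 1:3] = [[c, -s], [s, c]]
    return R

def rot_y(a):  # rotation about y (the other swing axis, assumed)
    c, s = np.cos(a), np.sin(a)
    R = np.eye(4)
    R[0, 0], R[0, 2], R[2, 0], R[2, 2] = c, s, -s, c
    return R

def gimbal_transform(pan, tilt, src_to_center):
    """Source-frame transform for gimbal swings (pan, tilt) about a rotation
    center located src_to_center along +z from the source (assumed geometry):
    translate source -> center, rotate twice, translate center -> source."""
    d = np.array([0.0, 0.0, src_to_center])
    return translation(d) @ rot_x(tilt) @ rot_y(pan) @ translation(-d)

M = gimbal_transform(pan=np.radians(1.0), tilt=np.radians(0.5),
                     src_to_center=100.0)
```

With zero swing the composition collapses to the identity, and the rotation center is the fixed point of the transform, matching the decomposition described in the abstract.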
NASA Astrophysics Data System (ADS)
Pravdivtsev, Andrey V.
2012-06-01
The article presents an approach to the design of wide-angle optical systems with special illumination and instantaneous field of view (IFOV) requirements. Uneven illumination reduces the dynamic range of the system, which degrades its ability to perform its task. The resulting illumination on the detector depends, among other factors, on IFOV changes. It is also necessary to consider the IFOV in the synthesis of data-processing algorithms, as it directly affects the potential signal-to-background ratio for statistically homogeneous backgrounds. A numerical-analytical approach that simplifies the design of wide-angle optical systems with special illumination and IFOV requirements is presented; the solution can be used for optical systems whose field of view exceeds 180 degrees. Illumination calculation in optical CAD is based on computationally expensive tracing of a large number of rays. The author proposes to use analytical expressions for some of the characteristics on which illumination depends; the remaining characteristics are determined numerically using less computationally expensive operands, and the calculation is not performed at every optimization step. The results of the analytical calculation are inserted into the merit function of the optical CAD optimizer. As a result, the optimizer load is reduced, since less computationally expensive operands are used, which reduces the time and resources required to develop a system with the desired characteristics. The proposed approach simplifies the creation and understanding of requirements for optical system quality and allows creating a more efficient EOS.
Research on stellarator-mirror fission-fusion hybrid
NASA Astrophysics Data System (ADS)
Moiseenko, V. E.; Kotenko, V. G.; Chernitskiy, S. V.; Nemov, V. V.; Ågren, O.; Noack, K.; Kalyuzhnyi, V. N.; Hagnestål, A.; Källne, J.; Voitsenya, V. S.; Garkusha, I. E.
2014-09-01
The development of a stellarator-mirror fission-fusion hybrid concept is reviewed. The hybrid comprises a fusion neutron source and a powerful sub-critical fast fission reactor core; the aims are the transmutation of spent nuclear fuel and safe fission energy production. In the fusion part, neutrons are generated in a deuterium-tritium (D-T) plasma confined magnetically in a stellarator-type system with an embedded magnetic mirror. Based on kinetic calculations, the energy balance for such a system is analyzed. Neutron calculations have been performed with the MCNPX code, and the principal design of the reactor part is developed; the neutron outflux at different outer parts of the reactor is calculated. Numerical simulations have been performed on the structure of the magnetic field in a model of the stellarator-mirror device, obtained by switching off one or two toroidal-field coils of the Uragan-2M torsatron. The calculations predict the existence of closed magnetic surfaces under certain conditions, and the confinement of fast particles in such a magnetic trap is analyzed.
Richings, Gareth W; Habershon, Scott
2017-09-12
We describe a method for performing nuclear quantum dynamics calculations using standard, grid-based algorithms, including the multiconfiguration time-dependent Hartree (MCTDH) method, where the potential energy surface (PES) is calculated "on-the-fly". The method of Gaussian process regression (GPR) is used to construct a global representation of the PES using values of the energy at points distributed in molecular configuration space during the course of the wavepacket propagation. We demonstrate this direct dynamics approach for both an analytical PES function describing 3-dimensional proton transfer dynamics in malonaldehyde and for 2- and 6-dimensional quantum dynamics simulations of proton transfer in salicylaldimine. In the case of salicylaldimine we also perform calculations in which the PES is constructed using Hartree-Fock calculations through an interface to an ab initio electronic structure code. In all cases, the results of the quantum dynamics simulations are in excellent agreement with previous simulations of both systems yet do not require prior fitting of a PES at any stage. Our approach (implemented in a development version of the Quantics package) opens a route to performing accurate quantum dynamics simulations via wave function propagation of many-dimensional molecular systems in a direct and efficient manner.
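The GPR step above, predicting PES energies from previously computed configurations, can be sketched with a zero-mean GP and a squared-exponential kernel; the kernel length scale, jitter, and the 1-D double-well stand-in for a real PES are assumptions, not the paper's settings:

```python
import numpy as np

def rbf(X1, X2, length=1.0):
    """Squared-exponential kernel between two sets of configurations."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length ** 2)

def gpr_fit_predict(X_train, y_train, X_test, noise=1e-6):
    """Zero-mean GP regression: predictive mean at X_test.
    The noise term is a small jitter for numerical stability."""
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(X_test, X_train) @ alpha

# Toy 1-D "PES": energies sampled along a proton-transfer-like coordinate
X = np.linspace(-2, 2, 15)[:, None]
E = (X[:, 0] ** 2 - 1.0) ** 2            # double-well stand-in
E_pred = gpr_fit_predict(X, E, np.array([[0.5]]))
```

During a wavepacket propagation, new ab initio points would be appended to `X`/`E` wherever the predictive uncertainty is large, which is the "on-the-fly" element of the method.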
NASA Astrophysics Data System (ADS)
Maharani, Septya; Hatta, Heliza Rahmania; Anzhari, Afif Nur; Khairina, Dyna Marisa
2018-02-01
Paskibraka are the troops whose duty is to raise the duplicate heritage flag. To become a Paskibraka member, a selection among participants, who are high school students, is held. Because the number of selection participants is large, a decision support system was built to facilitate the assessment process. This system uses the Analytical Hierarchy Process (AHP) to determine the weight of each criterion, comprising the interview, health, physical, height, and marching scores, and uses the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) method to find the best alternative participants. The calculation yields the names of the 21 best male and female participants and their schools of origin. The system has also been tested by performing the calculations manually in Microsoft Excel (Ms. Excel) and comparing them against the system's AHP and TOPSIS calculations.
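The TOPSIS ranking step can be sketched as follows; the criterion scores and weights below are hypothetical illustrations (in the paper the weights come from AHP):

```python
import numpy as np

def topsis(scores, weights, benefit):
    """Rank alternatives: rows = participants, cols = criteria.
    benefit[j] is True when larger values of criterion j are better."""
    norm = scores / np.linalg.norm(scores, axis=0)   # vector normalization
    v = norm * weights                               # weighted normalized matrix
    ideal = np.where(benefit, v.max(0), v.min(0))    # positive ideal solution
    anti = np.where(benefit, v.min(0), v.max(0))     # negative ideal solution
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                   # closeness to the ideal

# Hypothetical criteria: interview, health, physical, height, marching
scores = np.array([[80, 90, 70, 170, 85],
                   [90, 85, 80, 168, 80],
                   [70, 80, 90, 172, 90]], dtype=float)
weights = np.array([0.3, 0.2, 0.2, 0.1, 0.2])        # assumed AHP-derived weights
closeness = topsis(scores, weights, benefit=np.array([True] * 5))
ranking = np.argsort(-closeness)                     # best participant first
```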
Design of magnetic system to produce intense beam of polarized molecules of H2 and D2
NASA Astrophysics Data System (ADS)
Yurchenko, A. V.; Nikolenko, D. M.; Rachek, I. A.; Shestakov, Yu V.; Toporkov, D. K.; Zorin, A. V.
2017-12-01
A magnetic-separating system is designed to produce high-density beams of polarized H2/D2 molecules. The distribution of the magnetic field inside the aperture of the multipole magnet was calculated using the Mermaid software package. The calculation showed that the characteristic value of the magnetic field is 40 kG and the field gradient is about 60 kG/cm. A numerical calculation of the trajectories of molecules with different spin projections in this magnetic system was performed. The article discusses the possibility of using the designed magnetic system to create a high-intensity source of polarized molecules. The expected intensity of this source is calculated: the expected flux of molecules focused into the receiver tube is 3.5·10^16 molecules/s for hydrogen and 2.0·10^15 molecules/s for deuterium.
Shen, L; Levine, S H; Catchen, G L
1987-07-01
This paper describes an optimization method for determining the beta dose distribution in tissue, and it describes the associated testing and verification. The method uses electron transport theory and optimization techniques to analyze the responses of a three-element thermoluminescent dosimeter (TLD) system. Specifically, the method determines the effective beta energy distribution incident on the dosimeter system, and thus the system performs as a beta spectrometer. Electron transport theory provides the mathematical model for performing the optimization calculation. In this calculation, parameters are determined that produce calculated doses for each of the chip/absorber components in the three-element TLD system. The resulting optimized parameters describe an effective incident beta distribution. This method can be used to determine the beta dose specifically at 7 mg·cm^-2 or at any depth of interest. The doses at 7 mg·cm^-2 in tissue determined by this method are compared to those experimentally determined using an extrapolation chamber. For a great variety of pure beta sources having different incident beta energy distributions, good agreement is found. The results are also compared to those produced by a commonly used empirical algorithm. Although the optimization method produces somewhat better results, the advantage of the optimization method is that its performance is not sensitive to the specific method of calibration.
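The unfolding idea (choosing an incident beta distribution whose predicted responses match the three chip/absorber readings) can be caricatured as a small linear inverse problem. The response matrix, energy bins, and dose factors below are entirely hypothetical, and the paper's actual method uses electron transport theory with optimization rather than a plain least-squares solve:

```python
import numpy as np

# Hypothetical response matrix: rows = 3 TLD chip/absorber elements,
# columns = response per unit intensity in 3 beta energy bins.
A = np.array([[0.90, 0.50, 0.20],
              [0.30, 0.70, 0.50],
              [0.05, 0.30, 0.80]])

true_spectrum = np.array([1.0, 2.0, 0.5])      # "incident" bin intensities
measured = A @ true_spectrum                   # simulated chip readings

# Unfold: solve A x = measured for the effective incident spectrum,
# clipping negatives as a crude nonnegativity constraint.
x, *_ = np.linalg.lstsq(A, measured, rcond=None)
x = np.clip(x, 0.0, None)

# Once the spectrum is known, the dose at any depth follows from
# per-bin depth-dose factors (hypothetical values here).
dose_at_depth = x @ np.array([0.4, 0.6, 0.9])
```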
A geometrical interpretation of the 2n-th central difference
NASA Technical Reports Server (NTRS)
Tapia, R. A.
1972-01-01
Many algorithms used for data smoothing, data classification and error detection require the calculation of the distance from a point to the polynomial interpolating its 2n neighbors (n on each side). This computation, if performed naively, would require the solution of a system of equations and could create numerical problems. This note shows that if the data is equally spaced, then this calculation can be performed using a simple recursion formula.
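The recursion can be sketched as follows, under the assumption that the "distance" is the vertical deviation of the center point from the polynomial interpolating its 2n neighbors; for equally spaced data that deviation equals the 2n-th central difference divided by the binomial coefficient C(2n, n):

```python
import math
import numpy as np

def deviation_from_interpolant(y, i, n):
    """|y_i - p(x_i)|, where p interpolates the 2n equally spaced
    neighbors of point i (n on each side), computed via repeated
    differencing -- no linear system is solved."""
    window = np.asarray(y[i - n : i + n + 1], dtype=float)
    for _ in range(2 * n):                  # build the 2n-th difference
        window = window[1:] - window[:-1]
    return abs(window[0]) / math.comb(2 * n, n)

# Exact cubic data: the degree-3 interpolant through the 4 neighbors of
# point 4 reproduces y[4], so the deviation vanishes.
y = np.array([x**3 for x in range(9)], dtype=float)
smooth = deviation_from_interpolant(y, 4, 2)

# Perturb the center point: the recursion reports the outlier directly,
# which is why the formula is useful for error detection.
y[4] += 5.0
outlier = deviation_from_interpolant(y, 4, 2)
```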
Rheological Predictions of Network Systems Swollen with Entangled Solvent
2014-04-01
In the network model, dots represent binary entanglements and crosses represent cross-links; both are fixed in space for Green-Kubo calculations or moved affinely for flow calculations. Two types of calculations can be performed: equilibrium (Green-Kubo) calculations, in which the rate-of-deformation tensor is set to zero and the autocorrelation function of the stress at equilibrium is followed, and flow calculations, in which a specific flow field is applied and the resulting stress is tracked.
Adaptive coded aperture imaging in the infrared: towards a practical implementation
NASA Astrophysics Data System (ADS)
Slinger, Chris W.; Gilholm, Kevin; Gordon, Neil; McNie, Mark; Payne, Doug; Ridley, Kevin; Strens, Malcolm; Todd, Mike; De Villiers, Geoff; Watson, Philip; Wilson, Rebecca; Dyer, Gavin; Eismann, Mike; Meola, Joe; Rogers, Stanley
2008-08-01
An earlier paper [1] discussed the merits of adaptive coded apertures for use as lensless imaging systems in the thermal infrared and visible. It was shown how diffractive (rather than the more conventional geometric) coding could be used, and that 2D intensity measurements from multiple mask patterns could be combined and decoded to yield enhanced imagery. Initial experimental results in the visible band were presented. Unfortunately, radiosity calculations, also presented in that paper, indicated that the signal to noise performance of systems using this approach was likely to be compromised, especially in the infrared. This paper will discuss how such limitations can be overcome, and some of the tradeoffs involved. Experimental results showing tracking and imaging performance of these modified, diffractive, adaptive coded aperture systems in the visible and infrared will be presented. The subpixel imaging and tracking performance is compared to that of conventional imaging systems and shown to be superior. System size, weight and cost calculations indicate that the coded aperture approach, employing novel photonic MOEMS micro-shutter architectures, has significant merits for a given level of performance in the MWIR when compared to more conventional imaging approaches.
Pushing configuration-interaction to the limit: Towards massively parallel MCSCF calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vogiatzis, Konstantinos D.; Ma, Dongxia; Olsen, Jeppe
A new large-scale parallel multiconfigurational self-consistent field (MCSCF) implementation in the open-source NWChem computational chemistry code is presented. The generalized active space approach is used to partition large configuration interaction (CI) vectors and generate a sufficient number of batches that can be distributed to the available cores. Massively parallel CI calculations with large active spaces can be performed. The new parallel MCSCF implementation is tested for the chromium trimer and for an active space of 20 electrons in 20 orbitals, which can now routinely be performed. Unprecedented CI calculations with an active space of 22 electrons in 22 orbitals for the pentacene systems were performed, and a single CI iteration with an active space of 24 electrons in 24 orbitals for the chromium tetramer was possible. The chromium tetramer corresponds to a CI expansion of nearly one trillion Slater determinants (914,058,513,424) and is the largest conventional CI calculation attempted to date.
Mission and system optimization of nuclear electric propulsion vehicles for lunar and Mars missions
NASA Technical Reports Server (NTRS)
Gilland, James H.
1991-01-01
The detailed mission and system optimization of low-thrust electric propulsion missions is a complex, iterative process involving interaction between orbital mechanics and system performance. Through the use of appropriate approximations, initial system optimization and analysis can be performed for a range of missions. The intent of these calculations is to provide system and mission designers with simple methods to assess system design without requiring access to, or detailed knowledge of, numerical calculus-of-variations optimization codes and methods. Approximations for the mission/system optimization of Earth orbital transfer and Mars missions have been derived. Analyses include the variation of thruster efficiency with specific impulse. Optimum specific impulse, payload fraction, and power/payload ratios are calculated. The accuracy of these methods is tested and found to be reasonable for initial scoping studies. Results of optimization for Space Exploration Initiative lunar cargo and Mars missions are presented for a range of power system and thruster options.
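The flavor of such first-order scoping calculations can be sketched as below: propellant fraction from the rocket equation, jet power from thrust time and efficiency, power-system mass from a specific mass, then a scan over specific impulse for the best payload fraction. The efficiency curve, specific mass, and mission numbers are illustrative assumptions, not values from the paper:

```python
import numpy as np

G0 = 9.81  # m/s^2

def payload_fraction(isp, dv, alpha, t_thrust, eta):
    """First-order payload fraction for a power-limited low-thrust stage,
    per unit initial mass. alpha: power-system specific mass [kg/W],
    t_thrust: thrust time [s], eta: thruster efficiency."""
    ve = G0 * isp
    m_prop = 1.0 - np.exp(-dv / ve)        # rocket-equation propellant mass
    mdot = m_prop / t_thrust               # mean mass flow rate
    power = 0.5 * mdot * ve**2 / eta       # electric power for the jet
    m_power = alpha * power                # power/propulsion system mass
    return 1.0 - m_prop - m_power

# Hypothetical scan: Mars-class dv = 8 km/s, 300-day thrust arc,
# alpha = 10 kg/kW, efficiency rising with Isp toward 0.7.
isp = np.linspace(1000.0, 10000.0, 200)
eta = 0.7 * isp**2 / (isp**2 + 2000.0**2)
f = payload_fraction(isp, 8e3, 10e-3, 300 * 86400, eta)
isp_opt = isp[np.argmax(f)]
```

The trade is visible directly: low Isp wastes mass on propellant, high Isp wastes it on the power system, so an interior optimum specific impulse appears.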
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we research and develop high-performance CMO applications and perform experimental studies. We have proposed a third-generation density-functional calculation method for solving the SCF, which is more advanced than the conventional disk-based (FILE) and direct methods. Our method is based on Cholesky decomposition for the two-electron integral calculation and on the modified grid-free method for evaluation of the pure-XC term. With the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can all be obtained by simple linear-algebraic procedures in the SCF loop. We can therefore expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. Computing the electronic structure of a large molecule requires not only overcoming the expensive computational cost but also a good initial guess for safe SCF convergence. To prepare a precise initial guess for a macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. A QCLO has the characteristics of both a localized and a canonical orbital in a certain region of the molecule. We have succeeded in CMO calculations of proteins using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.
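The key contraction (building the Coulomb term from Cholesky factors of the two-electron integral matrix by simple linear algebra) can be sketched with a random positive-definite surrogate for the integrals; the real ProteinDF implementation is far more elaborate, and the sizes here are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4                                  # toy basis size
npair = n * n

# Surrogate for the ERI matrix V[(ij),(kl)] = (ij|kl): any symmetric
# positive-semidefinite matrix over the pair index has the right form.
B = rng.standard_normal((npair, npair))
V = B @ B.T / npair

# Cholesky factorization of the ERI matrix: V = L @ L.T.
L = np.linalg.cholesky(V + 1e-10 * np.eye(npair))

D = rng.standard_normal((n, n))
D = D + D.T                            # symmetric "density matrix"

# Coulomb term via the factors:
#   J_ij = sum_Q L[(ij),Q] * ( sum_kl L[(kl),Q] * D_kl )
# i.e., two matrix-vector products instead of a 4-index contraction.
gamma = L.T @ D.reshape(npair)
J_chol = (L @ gamma).reshape(n, n)

# Reference: direct contraction with the full "ERI" matrix.
J_ref = (V @ D.reshape(npair)).reshape(n, n)
```

The point of the factorization is that the SCF-loop work reduces to dense BLAS-friendly operations, which is what makes the distributed-memory parallelization straightforward.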
Validation of the WIMSD4M cross-section generation code with benchmark results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deen, J.R.; Woodruff, W.L.; Leal, L.E.
1995-01-01
The WIMSD4 code has been adopted for cross-section generation in support of the Reduced Enrichment Research and Test Reactor (RERTR) program at Argonne National Laboratory (ANL). Subsequently, the code has undergone several updates, and significant improvements have been achieved. The capability of generating group-collapsed micro- or macroscopic cross sections from the ENDF/B-V library and the more recent evaluation, ENDF/B-VI, in the ISOTXS format makes the modified version of the WIMSD4 code, WIMSD4M, very attractive, not only for the RERTR program, but also for the reactor physics community. The intent of the present paper is to validate the WIMSD4M cross-section libraries for reactor modeling of fresh-water-moderated cores. The results of calculations performed with multigroup cross-section data generated with the WIMSD4M code will be compared against experimental results. These results correspond to calculations carried out with thermal reactor benchmarks of the Oak Ridge National Laboratory (ORNL) unreflected HEU critical spheres, the TRX LEU critical experiments, and calculations of a modified Los Alamos HEU D2O-moderated benchmark critical system. The benchmark calculations were performed with the discrete-ordinates transport code, TWODANT, using WIMSD4M cross-section data. Transport calculations using the XSDRNPM module of the SCALE code system are also included. In addition to transport calculations, diffusion calculations with the DIF3D code were also carried out, since the DIF3D code is used in the RERTR program for reactor analysis and design. For completeness, Monte Carlo results of calculations performed with the VIM and MCNP codes are also presented.
Calculated Drag of an Aerial Refueling Assembly Through Airplane Performance Analysis
NASA Technical Reports Server (NTRS)
Vachon, Jake; Ray, Ronald; Calianno, Carl
2004-01-01
This viewgraph document reviews NASA Dryden's work on aerial refueling, with specific interest in calculating the drag of the refueling system. The aerodynamic drag of an aerial refueling assembly was calculated during the Automated Aerial Refueling project at the NASA Dryden Flight Research Center. An F/A-18A airplane was specially instrumented to obtain accurate fuel-flow measurements and to determine engine thrust.
Spreadsheet Analysis of Harvesting Systems
R.B. Rummer; B.L. Lanford
1987-01-01
Harvesting systems can be modeled and analyzed on microcomputers using commercially available "spreadsheet" software. The effect of system or external variables on the production rate or system cost can be evaluated and alternative systems can be easily examined. The tedious calculations associated with such analyses are performed by the computer. For users...
Heat transfer and phase transitions of water in multi-layer cryolithozone-surface systems
NASA Astrophysics Data System (ADS)
Khabibullin, I. L.; Nigametyanova, G. A.; Nazmutdinov, F. F.
2018-01-01
A mathematical model for calculating the distribution of temperature and the dynamics of the phase transformations of water in multilayer systems on the permafrost-zone surface is proposed. The model allows one to perform calculations over the annual cycle, taking into account the distribution of temperature on the surface in warm and cold seasons. A system involving four layers, a snow or land cover, a top layer of soil, a layer of thermal-insulation material, and a mineral soil, is analyzed. The calculations with the model allow one to choose the optimal thickness and composition of the layers that would ensure the stability of structures built on the permafrost-zone surface.
NASA Astrophysics Data System (ADS)
Powers, Jeffrey J.
2011-12-01
This study focused on creating a new tristructural isotropic (TRISO) coated particle fuel performance model and demonstrating the integration of this model into an existing system of neutronics and heat transfer codes, creating a user-friendly option for including fuel performance analysis within system design optimization and system-level trade-off studies. The end product enables both a deeper understanding and better overall system performance of nuclear energy systems limited or greatly impacted by TRISO fuel performance. A thorium-fueled hybrid fusion-fission Laser Inertial Fusion Energy (LIFE) blanket design was used for illustrating the application of this new capability and demonstrated both the importance of integrating fuel performance calculations into mainstream design studies and the impact that this new integrated analysis had on system-level design decisions. A new TRISO fuel performance model named TRIUNE was developed and verified and validated during this work, with a novel methodology established for simulating the actual lifetime of a TRISO particle during repeated passes through a pebble bed. In addition, integrated self-consistent calculations were performed for neutronics depletion analysis, heat transfer calculations, and then fuel performance modeling for a full parametric study that encompassed over 80 different design options that went through all three phases of analysis. Lastly, side studies were performed that included a comparison of thorium and depleted uranium (DU) LIFE blankets as well as some uncertainty quantification work to help guide future experimental work by assessing what material properties in TRISO fuel performance modeling are most in need of improvement. A recommended thorium-fueled hybrid LIFE engine design was identified with an initial fuel load of 20 MT of thorium, 15% TRISO packing within the graphite fuel pebbles, and a 20 cm neutron multiplier layer with beryllium pebbles in flibe molten salt coolant.
It operated at a system power level of 2000 MWth, took about 3.5 years to reach full plateau power, and was capable of an End of Plateau burnup of 38.7 %FIMA if considering just the neutronic constraints in the system design; however, fuel performance constraints led to a maximum credible burnup of 12.1 %FIMA due to a combination of internal gas pressure and irradiation effects on the TRISO materials (especially PyC) leading to SiC pressure vessel failures. The optimal neutron spectrum for the thorium-fueled blanket options evaluated seemed to favor a hard spectrum (low but non-zero neutron multiplier thicknesses and high TRISO packing fractions) in terms of neutronic performance but the fuel performance constraints demonstrated that a significantly softer spectrum would be needed to decrease the rate of accumulation of fast neutron fluence in order to improve the maximum credible burnup the system could achieve.
NASA Technical Reports Server (NTRS)
Tenney, D. R.; Unnam, J.
1978-01-01
Diffusion calculations were performed to establish the conditions under which concentration dependence of the diffusion coefficient is important in single-, two-, and three-phase binary alloy systems. Finite-difference solutions were obtained for each type of system using diffusion coefficient variations typical of those observed in real alloy systems. Solutions were also obtained using average diffusion coefficients determined by taking a logarithmic average of each diffusion coefficient variation considered. The constant-diffusion-coefficient solutions were used as a reference in assessing diffusion coefficient variation effects. Calculations were performed for planar, cylindrical, and spherical geometries in order to compare the effect of diffusion coefficient variations with the effect of interface geometries. In most of the cases considered, the diffusion coefficient of the major alloy phase was the key parameter controlling the kinetics of interdiffusion.
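A minimal sketch of the kind of finite-difference calculation described, for a single-phase 1D diffusion couple with a concentration-dependent diffusion coefficient; the grid, time step, and D(c) law are illustrative assumptions:

```python
import numpy as np

def diffuse(c, D_of_c, dx, dt, steps):
    """Explicit conservative FD scheme for dc/dt = d/dx( D(c) dc/dx )
    on a 1D bar with zero-flux ends; D is evaluated at cell interfaces."""
    c = c.copy()
    for _ in range(steps):
        D = D_of_c(c)
        D_half = 0.5 * (D[1:] + D[:-1])            # interface diffusivities
        flux = -D_half * (c[1:] - c[:-1]) / dx     # Fick's first law
        c[1:-1] -= dt / dx * (flux[1:] - flux[:-1])
        c[0] -= dt / dx * flux[0]                  # zero-flux boundaries
        c[-1] += dt / dx * flux[-1]
    return c

# Diffusion couple: step concentration profile, with D increasing with
# concentration (a typical variation in real alloy systems).
x = np.linspace(0.0, 1.0, 101)
c0 = np.where(x < 0.5, 1.0, 0.0)
c = diffuse(c0, lambda c: 1e-3 * (1.0 + 4.0 * c), dx=0.01, dt=0.005, steps=400)
```

Because D varies with c, the interface smears asymmetrically, which is exactly the effect a constant or logarithmically averaged coefficient cannot capture.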
Real-time endoscopic image orientation correction system using an accelerometer and gyrosensor.
Lee, Hyung-Chul; Jung, Chul-Woo; Kim, Hee Chan
2017-01-01
The discrepancy between spatial orientations of an endoscopic image and a physician's working environment can make it difficult to interpret endoscopic images. In this study, we developed and evaluated a device that corrects the endoscopic image orientation using an accelerometer and gyrosensor. The acceleration of gravity and angular velocity were retrieved from the accelerometer and gyrosensor attached to the handle of the endoscope. The rotational angle of the endoscope handle was calculated using a Kalman filter with transmission delay compensation. Technical evaluation of the orientation correction system was performed using a camera by comparing the optical rotational angle from the captured image with the rotational angle calculated from the sensor outputs. For the clinical utility test, fifteen anesthesiology residents performed a video endoscopic examination of an airway model with and without using the orientation correction system. The participants reported numbers written on papers placed at the left main, right main, and right upper bronchi of the airway model. The correctness and the total time it took participants to report the numbers were recorded. During the technical evaluation, errors in the calculated rotational angle were less than 5 degrees. In the clinical utility test, there was a significant time reduction when using the orientation correction system compared with not using the system (median, 52 vs. 76 seconds; P = .012). In this study, we developed a real-time endoscopic image orientation correction system, which significantly improved physician performance during a video endoscopic exam.
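The sensor-fusion step can be illustrated with a complementary filter, a simpler stand-in for the Kalman filter (with transmission-delay compensation) actually used in the study; the gain, sample rate, and data below are hypothetical:

```python
import math

def complementary_roll(acc_samples, gyro_samples, dt, k=0.98):
    """Fuse accelerometer (gravity direction -> absolute roll angle)
    with gyro (angular rate -> smooth short-term change). Returns the
    estimated roll angle in degrees after each sample."""
    angle = 0.0
    out = []
    for (ax, ay, az), gx in zip(acc_samples, gyro_samples):
        acc_angle = math.degrees(math.atan2(ay, az))  # roll from gravity
        # High-pass the integrated gyro, low-pass the accelerometer.
        angle = k * (angle + gx * dt) + (1.0 - k) * acc_angle
        out.append(angle)
    return out

# Static handle rolled 30 degrees: gravity components only, zero rate.
acc = [(0.0, math.sin(math.radians(30)), math.cos(math.radians(30)))] * 500
gyro = [0.0] * 500
angles = complementary_roll(acc, gyro, dt=0.01)
```

The corrected image is then obtained by rotating each frame by the negative of the estimated angle.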
GRAPE-4: A special-purpose computer for gravitational N-body problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makino, Junichiro; Taiji, Makoto; Ebisuzaki, Toshikazu
1995-12-01
We describe GRAPE-4, a special-purpose computer for gravitational N-body simulations. In gravitational N-body simulations, almost all computing time is spent on the calculation of interactions between particles. GRAPE-4 is specialized hardware that calculates the interactions between particles. It is used with a general-purpose host computer that performs all calculations other than the force calculation. With this architecture, it is relatively easy to realize a massively parallel system. In 1991, we developed the GRAPE-3 system, with a peak speed equivalent to 14.4 Gflops. It consists of 48 custom pipelined processors. In 1992 we started the development of GRAPE-4. The GRAPE-4 system will consist of 1920 custom pipeline chips. Each chip has a speed of 600 Mflops when operated on a 30 MHz clock. A prototype system with two custom LSIs was completed in July 1994, and the full system is now being manufactured.
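The particle-particle interaction loop that GRAPE hardware pipelines is, in software form, the familiar O(N^2) direct summation; a softened-gravity sketch (G = 1, arbitrary units):

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation gravitational accelerations -- the O(N^2)
    kernel that a GRAPE board evaluates in hardware while the host
    handles time integration. eps is a Plummer-type softening length."""
    d = pos[None, :, :] - pos[:, None, :]        # d[i, j] = pos_j - pos_i
    r2 = (d ** 2).sum(-1) + eps**2
    inv_r3 = r2 ** -1.5
    np.fill_diagonal(inv_r3, 0.0)                # no self-force
    return (d * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

# Two equal masses on the x axis: each is pulled toward the other.
pos = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0]])
mass = np.array([1.0, 1.0])
acc = accelerations(pos, mass)
```

Because every pair is independent, the kernel parallelizes trivially across pipelines, which is what makes the massively parallel GRAPE architecture effective.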
Affordable and accurate large-scale hybrid-functional calculations on GPU-accelerated supercomputers
NASA Astrophysics Data System (ADS)
Ratcliff, Laura E.; Degomme, A.; Flores-Livas, José A.; Goedecker, Stefan; Genovese, Luigi
2018-03-01
Performing high accuracy hybrid functional calculations for condensed matter systems containing a large number of atoms is at present computationally very demanding or even out of reach if high quality basis sets are used. We present a highly optimized multiple graphics processing unit implementation of the exact exchange operator which allows one to perform fast hybrid functional density-functional theory (DFT) calculations with systematic basis sets without additional approximations for up to a thousand atoms. With this method hybrid DFT calculations of high quality become accessible on state-of-the-art supercomputers within a time-to-solution that is of the same order of magnitude as traditional semilocal-GGA functionals. The method is implemented in a portable open-source library.
Advanced Life Support Equivalent System Mass Guidelines Document
NASA Technical Reports Server (NTRS)
Levri, Julie; Fisher, John W.; Jones, Harry W.; Drysdale, Alan E.; Ewert, Michael K.; Hanford, Anthony J.; Hogan, John A.; Joshi, Jitendri, A.; Vaccari, David A.
2003-01-01
This document is a viewgraph presentation which provides guidelines for performing an Equivalent System Mass (ESM) evaluation for trade study purposes. The document: 1) Defines ESM; 2) Explains how to calculate ESM; 3) Discusses interpretation of ESM results. The document is designed to provide detailed instructive material for researchers who are performing ESM evaluations for the first time.
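The ESM figure of merit combines hardware mass with mass-equivalents of volume, power, cooling, and crew time. A sketch of the calculation with purely illustrative equivalency factors (the function form and all numbers here are assumptions for illustration; mission-specific values must come from the guidelines themselves):

```python
def equivalent_system_mass(mass_kg, volume_m3, power_kw, cooling_kw,
                           crewtime_hr_per_yr, duration_yr,
                           veq=66.7, peq=237.0, ceq=60.0, cteq=0.5):
    """ESM = M + V*Veq + P*Peq + C*Ceq + CT*D*CTeq, where the
    equivalency factors convert volume [kg/m^3], power and cooling
    [kg/kW], and crew time [kg/crew-hr] into launch-mass equivalents.
    All factor values here are illustrative placeholders."""
    return (mass_kg
            + volume_m3 * veq
            + power_kw * peq
            + cooling_kw * ceq
            + crewtime_hr_per_yr * duration_yr * cteq)

# Hypothetical trade study: two subsystem options for a 2-year mission.
opt_a = equivalent_system_mass(100.0, 2.0, 1.5, 1.5, 50.0, 2.0)
opt_b = equivalent_system_mass(150.0, 1.0, 0.8, 0.8, 20.0, 2.0)
```

In a trade study the option with the lower ESM wins even if it is heavier outright, because its volume, power, and crew-time burdens are smaller.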
Public Health Analysis Transport Optimization Model v. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beyeler, Walt; Finley, Patrick; Walser, Alex
PHANTOM models the logistics functions of national public health systems. The system enables public health officials to visualize and coordinate options for public health surveillance, diagnosis, response, and administration in an integrated analytical environment. Users may simulate and analyze system performance by applying scenarios that represent current conditions, future contingencies, or what-if analyses of potential systemic improvements. Public health networks are visualized as interactive maps, with graphical displays of relevant system performance metrics as calculated by the simulation modeling components.
System design and installation for RS600 programmable control system for solar heating and cooling
NASA Technical Reports Server (NTRS)
1978-01-01
Procedures for installing, operating, and maintaining a programmable control system which utilizes a F8 microprocessor to perform all timing, control, and calculation functions in order to customize system performance to meet individual requirements for solar heating, combined heating and cooling, and/or hot water systems are described. The manual discusses user configuration and options, displays, theory of operation, trouble-shooting procedures, and warranty and assistance. Wiring lists, parts lists, drawings, and diagrams are included.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fuller, L.C.
The ORCENT-II digital computer program will perform calculations at valves-wide-open design conditions, maximum guaranteed rating conditions, and an approximation of part-load conditions for steam turbine cycles supplied with throttle steam characteristic of contemporary light-water reactors. Turbine performance calculations are based on a method published by the General Electric Company. Output includes all information normally shown on a turbine-cycle heat balance diagram. The program is written in FORTRAN IV for the IBM System 360 digital computers at the Oak Ridge National Laboratory.
Theoretical study of the bonding of the first-row transition-metal positive ions to ethylene
NASA Technical Reports Server (NTRS)
Sodupe, M.; Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Partridge, Harry
1992-01-01
Ab initio calculations were performed to study the bonding of the first-row transition-metal ions with ethylene. While Sc(+) and Ti(+) insert into the pi bond of ethylene to form a three-membered ring, the ions V(+) through Cu(+) form an electrostatic complex with ethylene. The binding energies are compared with those from experiment and with those of comparable calculations performed previously for the metal-acetylene ion systems.
Passive load follow analysis of the STAR-LM and STAR-H2 systems
NASA Astrophysics Data System (ADS)
Moisseytsev, Anton
A steady-state model for the calculation of temperature and pressure distributions and the heat and work balance for the STAR-LM and STAR-H2 systems was developed. The STAR-LM system is designed for electricity production and consists of a lead-cooled reactor operating on natural circulation and a supercritical carbon dioxide Brayton cycle. The STAR-H2 system uses the same reactor coupled to a hydrogen production plant, the Brayton cycle, and a water desalination plant; the Brayton cycle produces electricity for on-site needs. Realistic models for each system component were developed. The model also performs design calculations for the turbine and compressors of the CO2 Brayton cycle. The model was used to optimize the performance of the entire system as well as every system component, and the size of each component was calculated. For a 400 MWt reactor power, STAR-LM produces 174.4 MWe (44% efficiency) and the STAR-H2 system produces 7450 kg of H2 per hour. The steady-state model was used to conduct quasi-static passive load follow analysis. A control strategy was developed for each system; no control action on the reactor is required. The peak cladding temperature is used as the main safety criterion, and it was demonstrated that this temperature remains below the safety limit during both normal operation and load follow.
NASA Astrophysics Data System (ADS)
Adamowicz, Ludwik; Stanke, Monika; Tellgren, Erik; Helgaker, Trygve
2017-08-01
Explicitly correlated all-particle Gaussian functions with shifted centers (ECGs) are implemented within the previously proposed effective variational non-Born-Oppenheimer method for calculating bound states of molecular systems in a magnetic field (Adamowicz et al., 2015). The Hamiltonian used in the calculations is obtained by subtracting the operator representing the kinetic energy of the center-of-mass motion from the total laboratory-frame Hamiltonian. Test ECG calculations are performed for the HD molecule.
Modeling the long-term evolution of space debris
Nikolaev, Sergei; De Vries, Willem H.; Henderson, John R.; Horsley, Matthew A.; Jiang, Ming; Levatin, Joanne L.; Olivier, Scot S.; Pertica, Alexander J.; Phillion, Donald W.; Springer, Harry K.
2017-03-07
A space object modeling system that models the evolution of space debris is provided. The modeling system simulates interaction of space objects at simulation times throughout a simulation period. The modeling system includes a propagator that calculates the position of each object at each simulation time based on orbital parameters. The modeling system also includes a collision detector that, for each pair of objects at each simulation time, performs a collision analysis. When the distance between objects satisfies a conjunction criterion, the modeling system calculates a local minimum distance between the pair of objects based on a curve fitting to identify a time of closest approach at the simulation times and calculating the position of the objects at the identified time. When the local minimum distance satisfies a collision criterion, the modeling system models the debris created by the collision of the pair of objects.
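The closest-approach refinement described (curve fitting around the minimum sampled distance) can be sketched with a parabola fit to the squared separation; the trajectories below are synthetic straight lines, and the conjunction/collision thresholds and orbit propagation are omitted:

```python
import numpy as np

def closest_approach_time(times, traj_a, traj_b):
    """Refine the time of closest approach between two sampled
    trajectories: locate the sample of minimum separation, then fit a
    parabola to the squared distance through that sample and its two
    neighbors and return the vertex time."""
    d2 = ((traj_a - traj_b) ** 2).sum(axis=1)
    i = int(np.argmin(d2))
    i = min(max(i, 1), len(d2) - 2)          # keep a full 3-point stencil
    a, b, _ = np.polyfit(times[i-1:i+2], d2[i-1:i+2], 2)
    return -b / (2.0 * a)                    # vertex of the parabola

# Synthetic case: one object moving along x, one stationary object
# offset 0.3 off-axis at x = 3.2, so closest approach occurs at t = 3.2.
t = np.linspace(0.0, 10.0, 21)
mover = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
sitter = np.tile([3.2, 0.3, 0.0], (len(t), 1))
t_star = closest_approach_time(t, mover, sitter)
```

A full simulator would then re-propagate both objects to t_star and test the local minimum distance against the collision criterion.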
Modification of LAMPF's magnet-mapping code for offsets of center coordinates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hurd, J.W.; Gomulka, S.; Merrill, F.
1991-01-01
One of the magnet measurements performed at LAMPF is the determination of the cylindrical harmonics of a quadrupole magnet using a rotating coil. The data are analyzed with the code HARMAL to derive the amplitudes of the harmonics. Initially, the origin of the polar coordinate system is the axis of the rotating coil. A new coordinate system is found by a simple translation of the old system such that the dipole moment in the new system is zero. The origin of this translated system is referred to as the magnetic center. Given this translation, the code calculates the coefficients of the cylindrical harmonics in the new system. The code has been modified to use an analytical calculation to determine these new coefficients. The method of calculation is described and some implications of this formulation are presented. 8 refs., 2 figs.
Clemente-Gutiérrez, Francisco; Pérez-Vara, Consuelo
2015-03-08
A pretreatment quality assurance program for volumetric techniques should include redundant calculations and measurement-based verifications. The patient-specific quality assurance process must be based on clinically relevant metrics. The aim of this study was to show the commissioning, clinical implementation, and comparison of two systems that allow a 3D redundant dose calculation to be performed. In addition, one of them is capable of reconstructing the dose on the patient anatomy from measurements taken with a 2D ion chamber array. Both systems were compared in terms of reference calibration data (absolute dose, output factors, percentage depth-dose curves, and profiles). Results were in good agreement for absolute dose values (discrepancies were below 0.5%) and output factors (mean differences were below 1%). Maximum mean discrepancies were located between 10 and 20 cm of depth for PDDs (-2.7%) and in the penumbra region for profiles (mean DTA of 1.5 mm). Validation of the systems was performed by comparing point-dose measurements with values obtained by the two systems for static and dynamic fields from the AAPM TG-119 report, and for 12 real VMAT plans for different anatomical sites (differences better than 1.2%). Comparisons between measurements taken with a 2D ion chamber array and results obtained by both systems for real VMAT plans were also performed (mean global gamma passing rates better than 87.0% and 97.9% for the 2%/2 mm and 3%/3 mm criteria). Clinical implementation of the systems was evaluated by comparing dose-volume parameters for all TG-119 tests and real VMAT plans with TPS values (mean differences were below 1%). In addition, dose distributions calculated by the TPS were compared with those extracted from the two systems for real VMAT plans (mean global gamma passing rates better than 86.0% and 93.0% for the 2%/2 mm and 3%/3 mm criteria). The clinical use of both systems was successfully evaluated.
NASA Astrophysics Data System (ADS)
Ueda, Yoshikatsu; Omura, Yoshiharu; Kojima, Hiro
Spacecraft observation is essentially a one-point measurement, while numerical simulation can reproduce a whole system of physical processes on a computer. By performing particle simulations of plasma wave instabilities and calculating the correlation of waves and particles observed at a single point, we examine how well the characteristics of the whole system can be inferred from a one-point measurement. We perform various simulation runs with different plasma parameters using a one-dimensional electromagnetic particle code (KEMPO1) and calculate 'E dot v' and other moments at a single point. We find good correlation between the single-point measurement and the macroscopic fluctuations of the total simulation region. We make use of the results of these computer experiments in the system design of a new instrument, the 'One-chip Wave Particle Interaction Analyzer (OWPIA)'.
Performance of bent-crystal x-ray microscopes for high energy density physics research
Schollmeier, Marius S.; Geissel, Matthias; Shores, Jonathon E.; ...
2015-05-29
We present calculations for the field of view (FOV), image fluence, image monochromaticity, spectral acceptance, and image aberrations for spherical crystal microscopes, which are used as self-emission imaging or backlighter systems at large-scale high energy density physics facilities. Our analytic results are benchmarked with ray-tracing calculations as well as with experimental measurements from the 6.151 keV backlighter system at Sandia National Laboratories. Furthermore, the analytic expressions can be used for x-ray source positions anywhere between the Rowland circle and object plane. We discovered that this enables quick optimization of the performance of proposed but untested, bent-crystal microscope systems to find the best compromise between FOV, image fluence, and spatial resolution for a particular application.
Calculation of background effects on the VESUVIO eV neutron spectrometer
NASA Astrophysics Data System (ADS)
Mayers, J.
2011-01-01
The VESUVIO spectrometer at the ISIS pulsed neutron source measures the momentum distribution n(p) of atoms by 'neutron Compton scattering' (NCS). Measurements of n(p) provide a unique window into the quantum behaviour of atomic nuclei in condensed matter systems. The VESUVIO 6Li-doped neutron detectors at forward scattering angles were replaced in February 2008 by yttrium aluminium perovskite (YAP)-doped γ-ray detectors. This paper compares the performance of the two detection systems. It is shown that the YAP detectors provide a much superior resolution and general performance, but suffer from a sample-dependent gamma background. This report details how this background can be calculated and data corrected. Calculation is compared with data for two different instrument geometries. Corrected and uncorrected data are also compared for the current instrument geometry. Some indications of how the gamma background can be reduced are also given.
Yi, Xingwen; Xu, Bo; Zhang, Jing; Lin, Yun; Qiu, Kun
2014-12-15
Digital coherent superposition (DCS) of optical OFDM subcarrier pairs with Hermitian symmetry can reduce the inter-carrier-interference (ICI) noise resulting from phase noise. In this paper, we show two different implementations of DCS-OFDM that have the same performance in the presence of laser phase noise. We complete the theoretical calculation of the ICI reduction using a pure Wiener phase-noise model. By Taylor expansion of the ICI, we show that the ICI power is cancelled to second order by DCS. The fourth-order term is further derived and depends only on the ratio of laser linewidth to OFDM subcarrier symbol rate, which can greatly simplify the system design. Finally, we verify our theoretical calculations in simulations and use the analytical results to predict the system performance. DCS-OFDM is expected to be beneficial to certain optical fiber transmissions.
Kumada, H; Saito, K; Nakamura, T; Sakae, T; Sakurai, H; Matsumura, A; Ono, K
2011-12-01
Treatment planning for boron neutron capture therapy generally utilizes Monte-Carlo methods for calculation of the dose distribution. The new treatment planning system JCDS-FX employs the multi-purpose Monte-Carlo code PHITS to calculate the dose distribution. JCDS-FX allows building a precise voxel model consisting of pixel-based voxel cells on the scale of 0.4×0.4×2.0 mm³ in order to perform high-accuracy dose estimation, e.g. for the purpose of calculating the dose distribution in a human body. However, the miniaturization of the voxel size increases calculation time considerably. The aim of this study is to investigate sophisticated modeling methods which can perform Monte-Carlo calculations for human geometry efficiently. Thus, we devised a new voxel modeling method, the "Multistep Lattice-Voxel method," which can configure a voxel model that combines different voxel sizes by applying the lattice function repeatedly. To verify the performance of the calculation with the modeling method, several calculations for human geometry were carried out. The results demonstrated that the Multistep Lattice-Voxel method enabled the precise voxel model to reduce calculation time substantially while keeping the high accuracy of dose estimation. Copyright © 2011 Elsevier Ltd. All rights reserved.
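A back-of-the-envelope count shows why mixing voxel sizes pays off. The body and region-of-interest extents below are invented for illustration (the paper reports the fine cell size, not the model dimensions):

```python
# Voxel-count comparison: a uniform fine grid over a 40x40x40 cm volume
# versus a two-level model that keeps the fine 0.4x0.4x2.0 mm cells only
# inside an 8x8x8 cm region of interest and uses 4 mm cells elsewhere.
fine = (0.4, 0.4, 2.0)        # mm
coarse = (4.0, 4.0, 4.0)      # mm
body = (400.0, 400.0, 400.0)  # mm (assumed)
roi = (80.0, 80.0, 80.0)      # mm (assumed)

def cells(dims, vox):
    n = 1
    for d, v in zip(dims, vox):
        n *= round(d / v)     # round() guards against float division error
    return n

uniform = cells(body, fine)
multistep = cells(roi, fine) + (cells(body, coarse) - cells(roi, coarse))
ratio = uniform / multistep   # roughly 77x fewer cells in this example
```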
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clemente, F; Perez, C
Purpose: Redundant treatment verifications in conformal and intensity-modulated radiation therapy techniques are traditionally performed with single point calculations. New solutions can replace these checks with 3D treatment plan verifications. This work describes a software tool (Mobius3D, Mobius Medical Systems) that uses a GPU-accelerated collapsed cone algorithm to perform 3D independent verifications of TPS calculations. Methods: Mobius3D comes with reference beam models for common linear accelerators. The system uses an independently developed collapsed cone algorithm updated with recent enhancements. 144 isotropically-spaced cones are used for each voxel in the calculations. These complex calculations can be sped up by using GPUs. Mobius3D calculates dose using DICOM information coming from the TPS (CT, RT Struct, RT Plan, RT Dose). DVH metrics and 3D gamma tests can be used to compare the TPS and secondary calculations. 170 patients treated with all common techniques, including 3DCFRT (with wedges), static and dynamic IMRT, and VMAT, have been successfully verified with this solution. Results: Calculation times are between 3-5 minutes for 3DCFRT treatments and 15-20 minutes for the most complex dMLC and VMAT plans. For all PTVs, mean dose and 90% coverage differences are (1.12±0.97)% and (0.68±1.19)%, respectively. The mean dose discrepancy for all OARs is (0.64±1.00)%. 3D gamma (global, 3%/3 mm) analysis shows a mean passing rate of (97.8±3.0)% for PTVs and (99.0±3.0)% for OARs. The 3D gamma passing rate for all voxels in the CT has a mean value of (98.5±1.6)%. Conclusion: Mobius3D is a powerful tool to verify all modalities of radiation therapy treatments. Dose discrepancies calculated by this system are in good agreement with the TPS. The use of reference beam data results in time savings and can be used to avoid the propagation of errors in original beam data into the QA system. GPU calculations permit enhanced collapsed cone calculations with reasonable calculation times.
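The 3D gamma comparison used above can be illustrated in one dimension. This is a generic sketch of the gamma metric, not Mobius3D's implementation, and the dose profiles are synthetic:

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, x, dd=0.03, dta=3.0):
    """1D global gamma analysis: for each reference point take the
    minimum generalized distance to the evaluated distribution, with
    dose criterion dd (fraction of the global max) and
    distance-to-agreement dta (same units as x)."""
    d_norm = dd * dose_ref.max()
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        g2 = ((x - xi) / dta) ** 2 + ((dose_eval - di) / d_norm) ** 2
        gammas[i] = np.sqrt(g2.min())
    return 100.0 * np.mean(gammas <= 1.0)

x = np.linspace(0.0, 100.0, 201)            # position, mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)     # toy dose profile
shifted = np.exp(-((x - 51.0) / 20.0) ** 2) # 1 mm offset copy
```

With a 3%/3 mm criterion, a 1 mm spatial offset still passes everywhere, which is exactly why tighter 2%/2 mm analyses are also reported.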
Multimode waveguide speckle patterns for compressive sensing.
Valley, George C; Sefler, George A; Justin Shaw, T
2016-06-01
Compressive sensing (CS) of sparse gigahertz-band RF signals using microwave photonics may achieve better performance with smaller size, weight, and power than electronic CS or conventional Nyquist-rate sampling. The critical element in a CS system is the device that produces the CS measurement matrix (MM). We show that passive speckle patterns in multimode waveguides potentially provide excellent MMs for CS. We measure and calculate the MM for a multimode fiber and perform simulations using this MM in a CS system. We show that the speckle MM exhibits the sharp phase transition and coherence properties needed for CS and that these properties are similar to those of a sub-Gaussian MM with the same mean and standard deviation. We calculate the MM for a multimode planar waveguide and find dimensions of the planar guide that give a speckle MM with performance similar to that of the multimode fiber. The CS simulations show that all measured and calculated speckle MMs exhibit robust performance with equal-amplitude signals that are sparse in time, in frequency, and in wavelets (Haar wavelet transform). The planar waveguide results indicate a path to a microwave photonic integrated circuit for measuring sparse gigahertz-band RF signals using CS.
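A minimal CS reconstruction with a random Gaussian MM standing in for the measured speckle MM. Orthogonal matching pursuit is one common sparse solver; the abstract does not state which recovery algorithm was used, and all dimensions below are illustrative:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick k columns of the
    measurement matrix A and least-squares fit y on them."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 48, 64, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for a speckle MM
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -1.0, 1.0]         # equal-amplitude sparse signal
y = A @ x_true                                 # compressive measurements
x_hat = omp(A, y, k)
```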
NASA Astrophysics Data System (ADS)
Xu, Ye; Lee, Michael C.; Boroczky, Lilla; Cann, Aaron D.; Borczuk, Alain C.; Kawut, Steven M.; Powell, Charles A.
2009-02-01
Features calculated from different dimensions of images capture quantitative information about lung nodules through one or multiple image slices. Previously published computer-aided diagnosis (CADx) systems have used either two-dimensional (2D) or three-dimensional (3D) features, though there has been little systematic analysis of the relevance of the different dimensions or of the impact of combining them. The aim of this study is to determine the importance of combining features calculated in different dimensions. We have performed CADx experiments on 125 pulmonary nodules imaged using multi-detector row CT (MDCT). The CADx system computed 192 2D, 2.5D, and 3D image features of the lesions. Leave-one-out experiments were performed using five different combinations of features from different dimensions: 2D, 3D, 2.5D, 2D+3D, and 2D+3D+2.5D. The experiments were performed ten times for each group. Accuracy, sensitivity, and specificity were used to evaluate the performance. Wilcoxon signed-rank tests were applied to compare the classification results from these five combinations of features. Our results showed that 3D image features generate the best result compared with the other combinations. This suggests one approach to potentially reducing the dimensionality of the CADx data space and the computational complexity of the system while maintaining diagnostic accuracy.
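The leave-one-out protocol can be sketched with a 1-nearest-neighbour classifier on toy features. The study's actual classifier and 192 features are not reproduced here; the numbers below are fabricated purely to show how an added feature dimension can separate classes:

```python
import numpy as np

def loo_accuracy(features, labels):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier,
    a minimal stand-in for the CADx feature-set comparison."""
    n = len(labels)
    correct = 0
    for i in range(n):
        d = np.linalg.norm(features - features[i], axis=1)
        d[i] = np.inf                      # leave sample i out
        correct += labels[int(np.argmin(d))] == labels[i]
    return correct / n

# Toy feature sets for six nodules (3 benign = 0, 3 malignant = 1).
labels = np.array([0, 0, 0, 1, 1, 1])
f2d = np.array([[0.0], [0.2], [0.4], [0.3], [0.5], [0.7]])   # overlapping
f3d = np.c_[f2d, [0.0, 0.1, 0.0, 5.0, 5.1, 5.2]]             # separable
```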
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hall, D.G.; Watkins, J.C.
This report documents an evaluation of the TRAC-PF1/MOD1 reactor safety analysis computer code during computer simulations of feedwater line break transients. The experimental data base for the evaluation included the results of three bottom feedwater line break tests performed in the Semiscale Mod-2C test facility. The tests modeled 14.3% (S-FS-7), 50% (S-FS-11), and 100% (S-FS-6B) breaks. The test facility and the TRAC-PF1/MOD1 model used in the calculations are described. Evaluations of the accuracy of the calculations are presented in the form of comparisons of measured and calculated histories of selected parameters associated with the primary and secondary systems. In addition to evaluating the accuracy of the code calculations, the computational performance of the code during the simulations was assessed. A conclusion was reached that the code is capable of making feedwater line break transient calculations efficiently, but there is room for significant improvements in the simulations that were performed. Recommendations are made for follow-on investigations to determine how to improve future feedwater line break calculations and for code improvements to make the code easier to use.
Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of different-sized disks and compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
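The Rose-model scaling that the DIs were checked against is simply SNR = C·√(N·A) for a uniform disk of area A at photon fluence N. The contrast and fluence values below are arbitrary illustrations:

```python
import math

def rose_snr(contrast, diameter_mm, photons_per_mm2):
    """Classical Rose-model signal-to-noise ratio for a uniform disk:
    SNR = C * sqrt(N * A), with A the disk area and N the photon fluence."""
    area = math.pi * (diameter_mm / 2.0) ** 2
    return contrast * math.sqrt(photons_per_mm2 * area)

snr_small = rose_snr(0.05, 1.0, 1.0e4)   # 1 mm disk, 5% contrast
snr_large = rose_snr(0.05, 2.0, 1.0e4)   # doubling diameter doubles SNR
```

System blur violates this scaling for the smallest disks, which is exactly the nonlinearity the abstract reports.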
Real-time POD-CFD Wind-Load Calculator for PV Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huayamave, Victor; Divo, Eduardo; Ceballos, Andres
The primary objective of this project is to create an accurate web-based real-time wind-load calculator. This is of paramount importance for (1) the rapid and accurate assessments of the uplift and downforce loads on a PV mounting system, (2) identifying viable solutions from available mounting systems, and therefore helping reduce the cost of mounting hardware and installation. Wind loading calculations for structures are currently performed according to the American Society of Civil Engineers/Structural Engineering Institute Standard ASCE/SEI 7; the values in this standard were calculated from simplified models that do not necessarily take into account relevant characteristics such as those from full 3D effects, end effects, turbulence generation and dissipation, as well as minor effects derived from shear forces on installation brackets and other accessories. This standard does not include provisions that address the special requirements of rooftop PV systems, and attempts to apply this standard may lead to significant design errors as wind loads are incorrectly estimated. Therefore, an accurate calculator would be of paramount importance for the preliminary assessments of the uplift and downforce loads on a PV mounting system, identifying viable solutions from available mounting systems, and therefore helping reduce the cost of the mounting system and installation. The challenge is that although a full-fledged three-dimensional computational fluid dynamics (CFD) analysis would properly and accurately capture the complete physical effects of air flow over PV systems, it would be impractical for this tool, which is intended to be a real-time web-based calculator. CFD routinely requires enormous computation times to arrive at solutions that can be deemed accurate and grid-independent even in powerful and massively parallel computer platforms.
This work is expected not only to accelerate solar deployment nationwide, but also to help reach the SunShot Initiative goals of reducing the total installed cost of solar energy systems by 75%. The largest percentage of the total installed cost of a solar energy system is associated with balance-of-system cost, with up to 40% going to “soft” costs, which include customer acquisition, financing, contracting, permitting, interconnection, inspection, installation, performance, operations, and maintenance. The calculator being developed will provide wind loads in real time for any solar system design and suggest the proper installation configuration and hardware; it is therefore anticipated to reduce system design, installation, and permitting costs.
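For reference, the velocity-pressure relation at the root of the ASCE/SEI 7 procedure mentioned above is q_z = 0.00256·Kz·Kzt·Kd·V² (psf, V in mph, per ASCE 7-10). The coefficient values below are typical but assumed; the rooftop-PV-specific pressure coefficients are precisely what the standard lacks and what the calculator must supply:

```python
def velocity_pressure_psf(v_mph, kz=0.85, kzt=1.0, kd=0.85):
    """ASCE 7-10 velocity pressure q_z = 0.00256*Kz*Kzt*Kd*V**2 (psf).
    Design pressures then follow from q_z times gust-effect and
    pressure coefficients, which for rooftop PV are the open question."""
    return 0.00256 * kz * kzt * kd * v_mph ** 2

q = velocity_pressure_psf(115.0)   # ~24.5 psf for a 115 mph basic wind speed
```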
Present Status and Extensions of the Monte Carlo Performance Benchmark
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Petrovic, Bojan; Martin, William R.
2014-06-01
The NEA Monte Carlo Performance benchmark started in 2011, aiming to monitor over the years the ability to perform a full-size Monte Carlo reactor core calculation with a detailed power production for each fuel pin with axial distribution. This paper gives an overview of the results contributed thus far. It shows that reaching a statistical accuracy of 1% for most of the small fuel zones requires about 100 billion neutron histories. The efficiency of parallel execution of Monte Carlo codes on a large number of processor cores shows clear limitations for computer clusters with common-type computer nodes. On true supercomputers, however, the speedup of parallel calculations keeps increasing up to large numbers of processor cores. More experience is needed from calculations on true supercomputers using large numbers of processors in order to predict whether the requested calculations can be done in a short time. As the specifications of the reactor geometry for this benchmark are well suited for further investigations of full-core Monte Carlo calculations, and a need is felt for testing issues other than computational performance, proposals are presented for extending the benchmark to a suite of problems: evaluating fission source convergence for a system with a high dominance ratio, coupling with thermal-hydraulics calculations to evaluate the use of different temperatures and coolant densities, and studying the correctness and effectiveness of burnup calculations. Moreover, other contemporary proposals for a full-core calculation with realistic geometry and material composition are discussed.
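The 100-billion-history figure follows directly from the 1/√N scaling of Monte Carlo statistical error. The 10%-error-at-10⁹-histories reference point below is an assumed illustration, not a benchmark result:

```python
def histories_for_accuracy(rel_sigma_target, rel_sigma_ref, n_ref):
    """Monte Carlo statistical error scales as 1/sqrt(N), so the
    required history count is N = N_ref * (sigma_ref / sigma_target)**2."""
    return n_ref * (rel_sigma_ref / rel_sigma_target) ** 2

# If 1e9 histories gave 10% relative error in a small fuel zone,
# reaching 1% would require about 100x more, i.e. ~1e11 histories.
n_needed = histories_for_accuracy(0.01, 0.10, 1.0e9)
```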
NASA Technical Reports Server (NTRS)
1973-01-01
A shuttle atmosphere revitalization subsystem (ARS)/active thermal control subsystem (ATCS) performance routine was developed. This computer program is adapted from the Shuttle EC/LSS Design Computer Program. The program was upgraded in three noteworthy areas: (1) the functional ARS/ATCS schematic was revised to accurately synthesize the shuttle baseline system definition; (2) the program logic was improved to provide a more accurate prediction of the integrated ARS/ATCS system performance, and expanded to model all components and thermal loads in the ARS/ATCS system; and (3) the program is designed to be used on the NASA JSC crew systems division's programmable calculator system. As written, the new computer routine has an average running time of five minutes. The use of desk-top calculation equipment and the rapid response of the program provide NASA with an analytical tool for trade studies to refine the system definition, and for test support of the RSECS or integrated Shuttle ARS/ATCS test programs.
Accuracy and coverage of the modernized Polish Maritime differential GPS system
NASA Astrophysics Data System (ADS)
Specht, Cezary
2011-01-01
The DGPS navigation service augments the NAVSTAR Global Positioning System by providing localized pseudorange correction factors and ancillary information which are broadcast over selected marine reference stations. The DGPS service position and integrity information satisfy the requirements of coastal navigation and hydrographic surveys. The Polish Maritime DGPS system was established in 1994 and modernized in 2009, both to meet the requirements set out in the IMO resolution for a future GNSS and to preserve backward signal compatibility of user equipment. Once installation of the new L1/L2 reference equipment was finalized, performance tests were carried out. The paper presents the results of coverage modeling and an accuracy measurement campaign based on long-term signal analyses of the DGPS reference station Rozewie, performed over 26 days in July 2009. The final results allowed verification of the coverage area of the differential signal from the reference station and calculation of the repeatable and absolute accuracy of the system after the technical modernization. The obtained field strength levels and position statistics (215,000 fixes) were compared to past measurements performed in 2002 (coverage) and 2005 (accuracy), when the previous system infrastructure was in operation. So far, no campaigns have been performed on differential Galileo. However, since the signals, signal processing, and receiver techniques are comparable to those known from DGPS, all satellite differential GNSS systems use the same transmission standard (RTCM), and maritime DGPS radiobeacons are standardized in all radio communication aspects (frequency, binary rate, modulation), the accuracy of differential Galileo can be expected to be similar to that of DGPS. Coverage of the reference station was calculated with dedicated software, which computes the signal strength level from transmitter parameters or from a field signal strength measurement campaign performed at representative points.
The software operates on a Baltic Sea vector map and ground electrical parameters, and models the atmospheric noise level in the transmission band.
A study of the influence of mean flow on the acoustic performance of Herschel-Quincke tubes
Torregrosa; Broatch; Payri
2000-04-01
In this paper, a simple flow model is used to assess the influence of mean flow and dissipation on the acoustic performance of the classical two-duct Herschel-Quincke tube. First, a transfer matrix is obtained for the system, which depends on the values of the Mach number in the two branches. These Mach numbers are then estimated separately by means of an incompressible flow calculation. Finally, both calculations are used to study the way in which mean flow affects the position and value of the characteristic attenuation and resonances of the system. The results indicate the nontrivial character of the influence observed.
NASA Astrophysics Data System (ADS)
Gruzin, A. V.; Gruzin, V. V.; Shalay, V. V.
2018-04-01
Analysis of existing technologies for preparing the foundation beds of oil and gas buildings and structures has revealed a lack of reasoned recommendations on the selection of rational technical and technological parameters of compaction. To study the dynamics of fast processes during compaction of the foundation beds of oil and gas facilities, a specialized software and hardware system was developed. The method of calculating the basic technical parameters of the equipment for recording fast processes is presented, as well as the algorithm for processing the experimental data. The preliminary studies performed confirmed the soundness of the decisions made and the calculations performed.
RHIC BPM system average orbit calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michnoff, R.; Cerniglia, P.; Degen, C.
2009-05-04
RHIC beam position monitor (BPM) system average orbit was originally calculated by averaging the positions of 10000 consecutive turns for a single selected bunch. Known perturbations in RHIC particle trajectories, with multiple frequencies around 10 Hz, contribute to the observed average orbit fluctuations. In 2006, the number of turns for average orbit calculations was made programmable; this was used to explore averaging over single periods near 10 Hz. Although this has provided an improvement in average orbit signal quality, an average over many periods would further improve the accuracy of the measured closed orbit. A new continuous average orbit calculation was developed just prior to the 2009 RHIC run and was made operational in March 2009. This paper discusses the new algorithm and its performance with beam.
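The benefit of averaging over a whole number of ~10 Hz periods rather than a fixed 10000 turns can be seen with a toy sinusoidal perturbation. The 78 kHz revolution frequency and 0.5 mm amplitude are illustrative assumptions, not measured RHIC values:

```python
import numpy as np

# A 10 Hz orbit perturbation sampled once per turn.
f_rev, f_pert = 78000.0, 10.0                  # Hz (assumed)
turns = np.arange(int(f_rev))                  # one second of turns
pos = 0.5 * np.sin(2 * np.pi * f_pert * turns / f_rev)  # position, mm

def average_orbit(samples, n_turns):
    """Average the position over a block of n_turns turns."""
    return samples[:n_turns].mean()

short_avg = average_orbit(pos, 10000)               # fixed 10k-turn average
period_avg = average_orbit(pos, int(f_rev / f_pert))  # one full 10 Hz period
```

Averaging over exactly one perturbation period nulls the sinusoid, while the fixed 10000-turn window (about 1.3 periods here) leaves a sizeable residual.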
NASA Technical Reports Server (NTRS)
Mingelgrin, U.
1972-01-01
Many properties of gaseous systems, such as electromagnetic absorption and emission and sound dispersion and absorption, may be elucidated if the nature of collisions between the particles in the system is understood. A procedure for the calculation of the classical trajectories of two interacting diatomic molecules is described. The dynamics of the collision is assumed to be that of two rigid rotors moving in a specified potential. The actual outcome of a representative sample of many trajectories at 298 K was computed, and the use of these values at any temperature for calculations of various molecular properties is described. Calculations performed for the O2 microwave spectrum are given to demonstrate the use of the procedure.
Cost Effectiveness of Hybrid Solar Powerplants
NASA Technical Reports Server (NTRS)
Wen, L. C.; Steele, H. L.
1983-01-01
The report discusses the cost effectiveness of a high-temperature thermal storage system for a representative parabolic dish solar powerplant. The economic viability of the thermal storage system is assessed, cost and performance projections are made, and the cost of electricity generated by the solar powerplant is calculated.
Li-Decorated β12-Borophene as Potential Candidates for Hydrogen Storage: A First-Principle Study.
Liu, Tingting; Chen, Yuhong; Wang, Haifeng; Zhang, Meiling; Yuan, Lihua; Zhang, Cairong
2017-12-07
The hydrogen storage properties of pristine β12-borophene and Li-decorated β12-borophene are systematically investigated by means of first-principles calculations based on density functional theory. The adsorption sites, adsorption energies, electronic structures, and hydrogen storage performance of the pristine β12-borophene/H₂ and Li-β12-borophene/H₂ systems are discussed in detail. The results show that H₂ is dissociated into two H atoms that are then chemisorbed on β12-borophene via strong covalent bonds. We then use Li atoms to improve the hydrogen storage performance and modify the hydrogen storage capacity of β12-borophene. Our numerical calculations show that the Li-β12-borophene system can adsorb up to 7 H₂ molecules, while the 2Li-β12-borophene system can adsorb up to 14 H₂ molecules, with a hydrogen storage capacity of up to 10.85 wt%.
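The gravimetric capacity quoted above is the H₂ mass fraction of the loaded system. The supercell composition below (2 Li on a 20-B-atom β12 patch) is a guess for illustration, not taken from the paper; with it the formula lands near the reported 10.85 wt%:

```python
# Gravimetric hydrogen capacity: wt% = m(H2) / (m(host) + m(H2)) * 100.
M_B, M_LI, M_H2 = 10.811, 6.941, 2.016   # molar masses, g/mol

def capacity_wt_percent(n_b, n_li, n_h2):
    m_h2 = n_h2 * M_H2
    m_host = n_b * M_B + n_li * M_LI
    return 100.0 * m_h2 / (m_host + m_h2)

# Assumed 2Li-decorated supercell with 20 B atoms holding 14 H2 molecules;
# the supercell size is a hypothetical choice to illustrate the formula.
wt = capacity_wt_percent(20, 2, 14)      # ~10.9 wt%
```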
Measurement of Interfacial Adhesion in Glass-Epoxy Systems Using the Indentation Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchins, Karen Isabel
2015-07-01
The adhesion of coatings often controls the performance of the substrate-coating system. Certain engineering applications require an epoxy coating on a brittle substrate to protect and improve the performance of the substrate. Experimental observations and measurements of interfacial adhesion in glass-epoxy systems are described in this thesis. The Oliver and Pharr method was utilized to calculate the bulk epoxy hardness and elastic modulus. Spherical indentations were used to induce delaminations at the substrate-coating interface. The delamination sizes as a function of load were used to calculate the interfacial toughness. The interfacial fracture energy of my samples is an order of magnitude higher than that reported by a previous group who studied a similar glass-epoxy system. A comparison study of how different glass treatments affect adhesion was also conducted: smooth versus rough, clean versus dirty, stressed versus non-stressed.
NASA Astrophysics Data System (ADS)
Goyal, M.; Chakravarty, A.; Atrey, M. D.
2017-02-01
Performance of modern helium refrigeration/liquefaction systems depends significantly on the effectiveness of heat exchangers. Generally, compact plate fin heat exchangers (PFHE) having very high effectiveness (>0.95) are used in such systems. Apart from basic fluid film resistances, various secondary parameters influence the sizing/rating of these heat exchangers. In the present paper, sizing calculations are performed, using in-house developed numerical models/codes, for a set of high effectiveness PFHE for a modified Claude cycle based helium liquefier/refrigerator operating in the refrigeration mode without liquid nitrogen (LN2) pre-cooling. The combined effects of secondary parameters like axial heat conduction through the heat exchanger metal matrix, parasitic heat in-leak from the surroundings, and variation in the fluid/metal properties are taken care of in the sizing calculation. Numerical studies are carried out to predict the off-design performance of the PFHEs in the refrigeration mode with LN2 pre-cooling. Iterative process cycle calculations are also carried out to obtain the inlet/exit state points of the heat exchangers.
Katouda, Michio; Naruse, Akira; Hirano, Yukihiko; Nakajima, Takahito
2016-11-15
A new parallel algorithm and its implementation for the RI-MP2 energy calculation utilizing petaflop-class many-core supercomputers are presented. Two improvements over the previous algorithm (J. Chem. Theory Comput. 2013, 9, 5373) have been made: (1) a dual-level hierarchical parallelization scheme that enables the use of more than 10,000 Message Passing Interface (MPI) processes, and (2) a new data communication scheme that reduces network communication overhead. A multi-node and multi-GPU implementation of the present algorithm is presented for calculations on a central processing unit (CPU)/graphics processing unit (GPU) hybrid supercomputer. Benchmark results of the new algorithm and its implementation using the K computer (CPU clustering system) and TSUBAME 2.5 (CPU/GPU hybrid system) demonstrate high efficiency. A peak performance of 3.1 PFLOPS is attained using 80,199 nodes of the K computer. The peak performance of the multi-node and multi-GPU implementation is 514 TFLOPS using 1349 nodes and 4047 GPUs of TSUBAME 2.5. © 2016 Wiley Periodicals, Inc.
Duct flow nonuniformities: Effect of struts in SSME HGM II(+)
NASA Technical Reports Server (NTRS)
Burke, Roger
1988-01-01
A numerical study, using the INS3D flow solver, of laminar and turbulent flow around a two dimensional strut, and three dimensional flow around a strut in an annulus is presented. A multi-block procedure was used to calculate two dimensional laminar flow around two struts in parallel, with each strut represented by one computational block. Single block calculations were performed for turbulent flow around a two dimensional strut, using a Baldwin-Lomax turbulence model to parameterize the turbulent shear stresses. A modified Baldwin-Lomax model was applied to the case of a three dimensional strut in an annulus. The results displayed the essential features of wing-body flows, including the presence of a horseshoe vortex system at the junction of the strut and the lower annulus surface. A similar system was observed at the upper annulus surface. The test geometries discussed were useful in developing the capability to perform multiblock calculations, and to simulate turbulent flow around obstructions located between curved walls. Both of these skills will be necessary to model the three dimensional flow in the strut assembly of the SSME. Work is now in progress on performing a three dimensional two block turbulent calculation of the flow in the turnaround duct (TAD) and strut/fuel bowl juncture region.
Phased models for evaluating the performability of computing systems
NASA Technical Reports Server (NTRS)
Wu, L. T.; Meyer, J. F.
1979-01-01
A phase-by-phase modelling technique is introduced to evaluate a fault tolerant system's ability to execute different sets of computational tasks during different phases of the control process. Intraphase processes are allowed to differ from phase to phase. The probabilities of interphase state transitions are specified by interphase transition matrices. Based on constraints imposed on the intraphase and interphase transition probabilities, various iterative solution methods are developed for calculating system performability.
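The phase-by-phase propagation the abstract describes can be sketched as a chain of matrix products: the state-probability vector is evolved by each intraphase matrix and reshaped by each interphase matrix, and performability is the probability mass ending in accomplishment states. The matrices below are illustrative, not from the paper.

```python
# Sketch of a phase-by-phase performability calculation, assuming a
# discrete-state Markov model (all matrices here are illustrative).
import numpy as np

def performability(p0, intraphase, interphase, accomplish_states):
    """Propagate the state-probability vector through each phase.

    p0                -- initial state distribution (1-D array)
    intraphase        -- list of per-phase transition matrices
    interphase        -- list of matrices applied between phases
                         (len = len(intraphase) - 1)
    accomplish_states -- indices counted as successful accomplishment
    """
    p = np.asarray(p0, dtype=float)
    for k, P in enumerate(intraphase):
        p = p @ P                      # evolve within phase k
        if k < len(interphase):
            p = p @ interphase[k]      # reconfigure between phases
    return p[accomplish_states].sum()  # prob. of ending in a success state

# Two-phase example: states = (working, failed); failure is absorbing.
P1 = np.array([[0.95, 0.05], [0.0, 1.0]])
P2 = np.array([[0.90, 0.10], [0.0, 1.0]])
H = np.eye(2)                          # identity interphase transition
print(performability([1.0, 0.0], [P1, P2], [H], [0]))  # ~0.855 (0.95 * 0.90)
```

The iterative solution methods in the paper exploit structure in these matrices; the brute-force product above only illustrates the bookkeeping.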
Rare-gas impurities in alkali metals: Relation to optical absorption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meltzer, D.E.; Pinski, F.J.; Stocks, G.M.
1988-04-15
An investigation of the nature of rare-gas impurity potentials in alkali metals is performed. Results of calculations based on simple models are presented, which suggest the possibility of resonance phenomena. These could lead to widely varying values for the exponents which describe the shape of the optical-absorption spectrum at threshold in the Mahan-Nozières-de Dominicis theory. Detailed numerical calculations are then performed with the Korringa-Kohn-Rostoker coherent-potential-approximation method. The results of these highly realistic calculations show no evidence for the resonance phenomena, and lead to predictions for the shape of the spectra which are in contradiction to observations. Absorption and emission spectra are calculated for two of the systems studied, and their relation to experimental data is discussed.
A side-by-side comparison of CPV module and system performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muller, Matthew; Marion, Bill; Kurtz, Sarah
A side-by-side comparison is made between concentrator photovoltaic module and system direct current aperture efficiency data with a focus on quantifying system performance losses. The individual losses measured/calculated, when combined, are in good agreement with the total loss seen between the module and the system. Results indicate that for the given test period, the largest individual loss of 3.7% relative is due to the baseline performance difference between the individual module and the average for the 200 modules in the system. A basic empirical model is derived based on module spectral performance data and the tabulated losses between the module and the system. The model predicts instantaneous system direct current aperture efficiency with a root mean square error of 2.3% relative.
Hardware for Accelerating N-Modular Redundant Systems for High-Reliability Computing
NASA Technical Reports Server (NTRS)
Dobbs, Carl, Sr.
2012-01-01
A hardware unit has been designed that reduces the cost, in terms of performance and power consumption, for implementing N-modular redundancy (NMR) in a multiprocessor device. The innovation monitors transactions to memory and calculates a form of sumcheck on-the-fly, thereby relieving the processors of calculating the sumcheck in software.
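The checking scheme can be illustrated in software: fold each processor's stream of memory-write transactions into a checksum, then majority-vote across the N redundant copies. The checksum function and data below are invented for illustration; the abstract's hardware computes such sums on-the-fly rather than in code.

```python
# Illustrative sketch of N-modular-redundancy checking via per-processor
# checksums of memory-write transactions, resolved by majority vote.
from collections import Counter

def stream_checksum(writes):
    """Fold (address, value) write transactions into a simple checksum."""
    s = 0
    for addr, value in writes:
        s = (s + addr * 31 + value) & 0xFFFFFFFF
    return s

def vote(checksums):
    """Majority vote over the N redundant checksums; returns
    (agreed checksum, indices of disagreeing processors)."""
    winner, _ = Counter(checksums).most_common(1)[0]
    return winner, [i for i, c in enumerate(checksums) if c != winner]

good = [(0x100, 7), (0x104, 9)]
bad  = [(0x100, 7), (0x104, 8)]        # one processor writes a wrong value
sums = [stream_checksum(w) for w in (good, good, bad)]
winner, faulty = vote(sums)
print(faulty)  # [2]
```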
A program for the calculation of paraboloidal-dish solar thermal power plant performance
NASA Technical Reports Server (NTRS)
Bowyer, J. M., Jr.
1985-01-01
A program capable of calculating the design-point and quasi-steady-state annual performance of a paraboloidal-concentrator solar thermal power plant without energy storage was written for a programmable calculator equipped with a suitable printer. The power plant may be located at any site for which a histogram of annual direct normal insolation is available. Inputs required by the program are the aperture area and the design and annual efficiencies of the concentrator; the intercept factor and apparent efficiency of the power conversion subsystem and a polynomial representation of its normalized part-load efficiency; the efficiency of the electrical generator or alternator; the efficiency of the electric power conditioning and transport subsystem; and the fractional parasitic losses for the plant. Losses to auxiliaries associated with each individual module are to be deducted when the power conversion subsystem efficiencies are calculated. Outputs provided by the program are the system design efficiency, the annualized receiver efficiency, the annualized power conversion subsystem efficiency, total annual direct normal insolation received per unit area of concentrator aperture, and the system annual efficiency.
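The annual-performance bookkeeping can be sketched as a chain of efficiencies applied to each insolation-histogram bin. All efficiencies and the histogram below are invented placeholders, not values from the program, and the part-load polynomial is simplified to a constant power-conversion efficiency.

```python
# Minimal sketch of an insolation-histogram annual energy calculation
# (toy numbers; the real program uses a part-load efficiency polynomial).
def annual_energy(histogram, aperture_m2, eta_conc, eta_pc, eta_gen,
                  eta_transport, parasitic_frac):
    """histogram: list of (direct-normal irradiance kW/m^2, hours/year)."""
    gross = 0.0
    for dni, hours in histogram:
        power_in = dni * aperture_m2 * eta_conc          # kW at the receiver
        gross += power_in * eta_pc * eta_gen * eta_transport * hours
    return gross * (1.0 - parasitic_frac)                # net kWh/year

hist = [(0.4, 800), (0.7, 1200), (0.95, 600)]            # hypothetical site
print(annual_energy(hist, aperture_m2=90.0, eta_conc=0.88, eta_pc=0.30,
                    eta_gen=0.92, eta_transport=0.97, parasitic_frac=0.05))
```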
Easy handling of tectonic data: the programs TectonicVB for Mac and TectonicsFP for Windows™
NASA Astrophysics Data System (ADS)
Ortner, Hugo; Reiter, Franz; Acs, Peter
2002-12-01
TectonicVB for Macintosh and TectonicsFP for Windows™ operating systems are two menu-driven computer programs which allow the shared use of data across these environments. The programs can produce stereographic plots of orientation data (great circles, poles, lineations). Frequently used statistical procedures, like calculation of eigenvalues and eigenvectors or calculation of the mean vector with concentration parameters and confidence cone, can be easily performed. Fault data can be plotted in stereographic projection (Angelier and Hoeppener plots). Sorting of datasets into homogeneous subsets and rotation of tectonic data can be performed in interactive two-diagram windows. The paleostress tensor can be calculated from fault data sets using graphical (calculation of kinematic axes and right dihedra method) or mathematical methods (direct inversion or numerical dynamical analysis). The calculations can be checked in dimensionless Mohr diagrams and fluctuation histograms.
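One of the statistics mentioned, the mean vector of orientation data with Fisher's concentration parameter, can be sketched as follows. The trend/plunge-to-Cartesian convention and the sample lineations are assumptions for illustration, not taken from the programs.

```python
# Sketch: mean vector and Fisher concentration for lineation data
# (trend/plunge in degrees; lower-hemisphere convention assumed).
import math

def to_xyz(trend, plunge):
    t, p = math.radians(trend), math.radians(plunge)
    return (math.cos(p) * math.sin(t),   # east
            math.cos(p) * math.cos(t),   # north
            -math.sin(p))                # up (negative for downward plunge)

def mean_vector(lineations):
    xs = [to_xyz(t, p) for t, p in lineations]
    sx, sy, sz = (sum(c) for c in zip(*xs))
    R = math.sqrt(sx**2 + sy**2 + sz**2)              # resultant length
    n = len(lineations)
    trend = math.degrees(math.atan2(sx, sy)) % 360.0
    plunge = math.degrees(math.asin(-sz / R))
    k = (n - 1) / (n - R) if n > R else float('inf')  # Fisher concentration
    return trend, plunge, k

data = [(112, 30), (118, 28), (115, 35), (110, 31)]   # made-up measurements
print(mean_vector(data))
```

A tight cluster of directions gives a resultant length close to n and hence a large concentration parameter.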
CyberShake: Running Seismic Hazard Workflows on Distributed HPC Resources
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Graves, R. W.; Gill, D.; Olsen, K. B.; Milner, K. R.; Yu, J.; Jordan, T. H.
2013-12-01
As part of its program of earthquake system science research, the Southern California Earthquake Center (SCEC) has developed a simulation platform, CyberShake, to perform physics-based probabilistic seismic hazard analysis (PSHA) using 3D deterministic wave propagation simulations. CyberShake performs PSHA by simulating a tensor-valued wavefield of Strain Green Tensors, and then using seismic reciprocity to calculate synthetic seismograms for about 415,000 events per site of interest. These seismograms are processed to compute ground motion intensity measures, which are then combined with probabilities from an earthquake rupture forecast to produce a site-specific hazard curve. Seismic hazard curves for hundreds of sites in a region can be used to calculate a seismic hazard map, representing the seismic hazard for a region. We present a recently completed PSHA study in which we calculated four CyberShake seismic hazard maps for the Southern California area to compare how CyberShake hazard results are affected by different SGT computational codes (AWP-ODC and AWP-RWG) and different community velocity models (Community Velocity Model - SCEC (CVM-S4) v11.11 and Community Velocity Model - Harvard (CVM-H) v11.9). We present our approach to running workflow applications on distributed HPC resources, including systems without support for remote job submission. We show how our approach extends the benefits of scientific workflows, such as job and data management, to large-scale applications on Track 1 and Leadership class open-science HPC resources. We used our distributed workflow approach to perform CyberShake Study 13.4 on two new NSF open-science HPC computing resources, Blue Waters and Stampede, executing over 470 million tasks to calculate physics-based hazard curves for 286 locations in the Southern California region.
For each location, we calculated seismic hazard curves with two different community velocity models and two different SGT codes, resulting in over 1100 hazard curves. We will report on the performance of this CyberShake study, four times larger than previous studies. Additionally, we will examine the challenges we face applying these workflow techniques to additional open-science HPC systems and discuss whether our workflow solutions continue to provide value to our large-scale PSHA calculations.
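The way a hazard curve combines simulated intensity measures with rupture rates can be sketched schematically: for each intensity level, sum over ruptures the annual rate times the fraction of that rupture's simulated events that exceed the level. The numbers below are toys; the actual pipeline uses on the order of 415,000 events per site.

```python
# Schematic hazard-curve assembly from simulated intensity measures (IMs).
def hazard_curve(ruptures, im_levels):
    """ruptures: list of (annual_rate, [IM value per simulated event]).
    Returns the annual exceedance rate at each IM level."""
    curve = []
    for x in im_levels:
        rate = 0.0
        for annual_rate, ims in ruptures:
            frac_exceed = sum(im > x for im in ims) / len(ims)
            rate += annual_rate * frac_exceed
        curve.append(rate)
    return curve

rups = [(0.01, [0.2, 0.3, 0.5]),    # frequent, moderate shaking
        (0.001, [0.6, 0.9, 1.1])]   # rare, strong shaking
print(hazard_curve(rups, [0.1, 0.4, 1.0]))
```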
Sensitivity of fenestration solar gain to source spectrum and angle of incidence
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCluney, W.R.
1996-12-31
The solar heat gain coefficient (SHGC) is the fraction of solar radiant flux incident on a fenestration system entering a building as heat gain. In general, it depends on both the angle of incidence and the spectral distribution of the incident solar radiation. In attempts to improve energy performance and user acceptance of high-performance glazing systems, manufacturers are producing glazing systems with increasing spectral selectivity. This poses potential difficulties for calculations of solar heat gain through windows based upon the use of a single solar spectral weighting function. The sensitivity of modern high-performance glazing systems to both the angle of incidence and the shape of the incident solar spectrum is examined using a glazing performance simulation program. It is found that as the spectral selectivity of the glazing system increases, the SHGC can vary as the incident spectral distribution varies. The variations can be as great as 50% when using several different representative direct-beam spectra. These include spectra having low and high air masses and a standard spectrum having an air mass of 1.5. The variations can be even greater if clear blue diffuse skylight is considered. It is recommended that the current broad-band shading coefficient method of calculating solar gain be replaced by one that is spectral based.
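The spectral point being made can be sketched with a two-band toy model: a spectrally selective glazing's weighted gain shifts when the incident spectrum shifts. The transmittances and band irradiances below are invented, and only the transmitted fraction is modeled (a full SHGC would add the inward-flowing fraction of absorbed radiation).

```python
# Toy spectrally weighted solar gain: the same glazing yields different
# values under different incident spectra (all numbers are illustrative).
def weighted_gain(transmittance, spectrum):
    """Spectrum-weighted transmitted fraction over discrete wavebands."""
    num = sum(t * s for t, s in zip(transmittance, spectrum))
    return num / sum(spectrum)

# Spectrally selective glazing: high visible, low near-IR transmittance.
tau = [0.70, 0.10]                 # [visible band, near-IR band]
am15 = [520.0, 480.0]              # band-integrated irradiance, W/m^2
low_airmass = [600.0, 400.0]       # bluer spectrum: more visible power
print(weighted_gain(tau, am15), weighted_gain(tau, low_airmass))
```

Shifting power toward the high-transmittance band raises the weighted gain, which is why a single fixed weighting function misestimates selective glazings.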
The SAMPL4 host-guest blind prediction challenge: an overview.
Muddana, Hari S; Fenley, Andrew T; Mobley, David L; Gilson, Michael K
2014-04-01
Prospective validation of methods for computing binding affinities can help assess their predictive power and thus set reasonable expectations for their performance in drug design applications. Supramolecular host-guest systems are excellent model systems for testing such affinity prediction methods, because their small size and limited conformational flexibility, relative to proteins, allow higher throughput and better numerical convergence. The SAMPL4 prediction challenge therefore included a series of host-guest systems, based on two hosts, cucurbit[7]uril and octa-acid. Binding affinities in aqueous solution were measured experimentally for a total of 23 guest molecules. Participants submitted 35 sets of computational predictions for these host-guest systems, based on methods ranging from simple docking, to extensive free energy simulations, to quantum mechanical calculations. Over half of the predictions provided better correlations with experiment than two simple null models, but most methods underperformed the null models in terms of root mean squared error and linear regression slope. Interestingly, the overall performance across all SAMPL4 submissions was similar to that for the prior SAMPL3 host-guest challenge, although the experimentalists took steps to simplify the current challenge. While some methods performed fairly consistently across both hosts, no single approach emerged as a consistent top performer, and the nonsystematic nature of the various submissions made it impossible to draw definitive conclusions regarding the best choices of energy models or sampling algorithms. Salt effects emerged as an issue in the calculation of absolute binding affinities of cucurbit[7]uril-guest systems, but were not expected to affect the relative affinities significantly.
Useful directions for future rounds of the challenge might involve encouraging participants to carry out some calculations that replicate each others' studies, and to systematically explore parameter options.
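The ranking metrics mentioned (correlation with experiment and root mean squared error) are straightforward to compute; a sketch with toy affinity values in kcal/mol, not SAMPL4 data:

```python
# Error metrics of the kind used to rank blind-challenge submissions.
import math

def rmse(pred, expt):
    return math.sqrt(sum((p - e) ** 2 for p, e in zip(pred, expt)) / len(pred))

def pearson_r(pred, expt):
    n = len(pred)
    mp, me = sum(pred) / n, sum(expt) / n
    cov = sum((p - mp) * (e - me) for p, e in zip(pred, expt))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    se = math.sqrt(sum((e - me) ** 2 for e in expt))
    return cov / (sp * se)

expt = [-5.0, -7.2, -9.1, -6.4]   # hypothetical measured affinities
pred = [-4.1, -6.8, -9.9, -5.9]   # hypothetical predictions
print(rmse(pred, expt), pearson_r(pred, expt))
```

A submission can correlate well yet still lose to a null model on RMSE, which is exactly the split result the abstract reports.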
Performance analysis and optimization of power plants with gas turbines
NASA Astrophysics Data System (ADS)
Besharati-Givi, Maryam
The gas turbine is one of the most important applications for power generation. The purpose of this research is the performance analysis and optimization of power plants using different design systems at different operating conditions. Accurate efficiency calculation and the determination of optimum efficiency values for chiller inlet cooling and blade-cooled gas turbine designs are investigated. This research shows how it is possible to find the optimum design for different operating conditions, such as ambient temperature, relative humidity, turbine inlet temperature, and compressor pressure ratio. The simulated designs include a chiller with varied COP and fogging cooling for the compressor. In addition, the overall thermal efficiency is improved by adding design features such as reheat and regenerative heating. The other goal of this research focuses on the blade-cooled gas turbine for higher turbine inlet temperature and, consequently, higher efficiency. New film cooling equations, along with varying film cooling effectiveness for the optimum cooling air requirement at the first-stage blades, and internal and trailing-edge cooling for the second stage, are introduced for optimal efficiency calculation. This research sets the groundwork for using the optimum value of efficiency calculation while using inlet cooling and blade cooling designs. In the final step, the designed gas cycles are combined with a steam cycle for performance improvement.
Chemistry of the 5g Elements: Relativistic Calculations on Hexafluorides.
Dognon, Jean-Pierre; Pyykkö, Pekka
2017-08-14
A Periodic System was proposed for the elements 1-172 by Pyykkö on the basis of atomic and ionic calculations. In it, the elements 121-138 were nominally assigned to a 5g row. We now perform molecular, relativistic four-component DFT calculations and find that the hexafluorides of the elements 125-129 indeed enjoy occupied 5g states. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, W.C.; Turner, J.C.
1992-12-01
The purpose of this report is to document reference calculations performed using the SCALE-4.0 code system to determine the critical parameters of UO2F2-H2O spheres. The calculations are an extension of those documented in ORNL/CSD/TM-284. Specifically, the data for low-enriched UO2F2-H2O spheres have been extended to highly enriched uranium. These calculations, together with those reported in ORNL/CSD/TM-284, provide a consistent set of critical parameters (k-infinity
The approach to engineering tasks composition on knowledge portals
NASA Astrophysics Data System (ADS)
Novogrudska, Rina; Globa, Larysa; Schill, Alexsander; Romaniuk, Ryszard; Wójcik, Waldemar; Karnakova, Gaini; Kalizhanova, Aliya
2017-08-01
The paper presents an approach to engineering task composition on engineering knowledge portals. The specific features of engineering tasks are highlighted, and their analysis forms the basis for partial engineering task integration. A formal algebraic system for engineering task composition is proposed, allowing one to set context-independent formal structures for describing the elements of engineering tasks. A method of engineering task composition is developed that allows partial calculation tasks to be integrated into general calculation tasks on engineering portals, performed on user request. The real-world scenario «Calculation of the strength for the power components of magnetic systems» is presented, confirming the applicability and efficiency of the proposed approach.
Model-centric distribution automation: Capacity, reliability, and efficiency
Onen, Ahmet; Jung, Jaesung; Dilek, Murat; ...
2016-02-26
A series of analyses along with field validations that evaluate efficiency, reliability, and capacity improvements of model-centric distribution automation are presented. With model-centric distribution automation, the same model is used from design to real-time control calculations. A 14-feeder system with 7 substations is considered. The analyses involve hourly time-varying loads and annual load growth factors. Phase balancing and capacitor redesign modifications are used to better prepare the system for distribution automation, where the designs are performed considering time-varying loads. Coordinated control of load tap changing transformers, line regulators, and switched capacitor banks is considered. In evaluating distribution automation versus traditional system design and operation, quasi-steady-state power flow analysis is used. In evaluating distribution automation performance for substation transformer failures, reconfiguration for restoration analysis is performed. In evaluating distribution automation for storm conditions, Monte Carlo simulations coupled with reconfiguration for restoration calculations are used. As a result, the evaluations demonstrate that model-centric distribution automation has positive effects on system efficiency, capacity, and reliability.
Performance of a laser microsatellite network with an optical preamplifier.
Arnon, Shlomi
2005-04-01
Laser satellite communication (LSC) uses free space as a propagation medium for various applications, such as intersatellite communication or satellite networking. An LSC system includes a laser transmitter and an optical receiver. For communication to occur, the lines of sight of the transmitter and the receiver must be aligned. However, mechanical vibration and electronic noise in the control system reduce alignment between the transmitter laser beam and the receiver field of view (FOV), which results in pointing errors. The outcome of pointing errors is fading of the received signal, which leads to impaired link performance. An LSC system is considered in which the optical preamplifier is incorporated into the receiver, and a bit error probability (BEP) model is derived that takes into account the statistics of the pointing error as well as the optical amplifier and communication system parameters. The model and the numerical calculation results indicate that random pointing errors of σχ²G > 0.05 penalize communication performance dramatically for all combinations of optical amplifier gains and noise figures that were calculated.
Research on quantitative relationship between NIIRS and the probabilities of discrimination
NASA Astrophysics Data System (ADS)
Bai, Honggang
2011-08-01
There are a large number of electro-optical (EO) and infrared (IR) sensors used on military platforms including ground vehicles, low-altitude air vehicles, high-altitude air vehicles, and satellite systems. Ground vehicle and low-altitude air vehicle (rotary and fixed-wing aircraft) sensors typically use the probabilities of discrimination (detection, recognition, and identification) as design requirements and system performance indicators. High-altitude air vehicles and satellite sensors have traditionally used the National Imagery Interpretation Rating Scale (NIIRS) performance measures for guidance in design and measures of system performance. Recently, there has been a large effort to make strategic sensor information available to tactical forces and to make target acquisition information usable by strategic systems. In this paper, the two techniques, the probabilities of discrimination and NIIRS, for sensor design are presented separately. For typical infrared remote sensor design parameters, the probability of recognition and the NIIRS scale are given as functions of the distance R for the Standard NATO Target and the M1 Abrams, two targets of different size, based on algorithms for predicting field performance and NIIRS. For four targets of different size (the Standard NATO Target, M1 Abrams, F-15, and B-52), the conversions from NIIRS to the probabilities of discrimination are derived and calculated, and the similarities and differences between NIIRS and the probabilities of discrimination are analyzed based on the results of the calculation. Comparisons with preliminary calculation results show that conversion between NIIRS and the probabilities of discrimination is feasible, although more validation experiments are needed.
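One standard form of the discrimination probability used in such field-performance models is the target transfer probability function (TTPF) from the Johnson-criteria tradition; the cycle counts below are hypothetical, and the abstract does not specify which variant the paper uses.

```python
# Target transfer probability function (TTPF): probability of
# discrimination given n resolvable cycles on target and the 50%
# requirement n50 for the task (detect/recognize/identify).
def p_discriminate(n, n50):
    e = 2.7 + 0.7 * (n / n50)
    x = (n / n50) ** e
    return x / (1.0 + x)

print(p_discriminate(4.0, 4.0))   # 0.5 by construction at n = n50
print(p_discriminate(8.0, 4.0))   # well above 0.5 with twice the cycles
```

As range increases, the number of resolvable cycles n falls, so the probability of discrimination falls with distance, which is the link to the range curves described above.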
Conceptual design of high speed supersonic aircraft: A brief review on SR-71 (Blackbird) aircraft
NASA Astrophysics Data System (ADS)
Xue, Hui; Khawaja, H.; Moatamedi, M.
2014-12-01
The paper presents the conceptual design of high-speed supersonic aircraft. The study focuses on the SR-71 (Blackbird) aircraft. The input to the conceptual design is a mission profile, the flight profile of the aircraft defined by the customer. This paper gives the SR-71 mission profile specified by the US Air Force. The mission profile helps in defining the attributes of the aircraft, such as the wing profile, vertical tail configuration, and propulsion system. The wing profile and vertical tail configuration have a direct impact on lift, drag, stability, performance, and maneuverability of the aircraft. The propulsion system directly influences the performance of the aircraft. By combining the wing profile and the propulsion system, two important parameters, known as wing loading and thrust-to-weight ratio, can be calculated. In this work, the conceptual design procedure given by D. P. Raymer (AIAA Education Series) is applied to calculate wing loading and thrust-to-weight ratio. The calculated values are compared against the actual values of the SR-71 aircraft. Results indicate that the values are in agreement with the trend of developments in aviation.
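The two sizing parameters named above are simple ratios; a sketch with representative SR-71-class numbers that are assumed for illustration, not the official figures:

```python
# Wing loading and thrust-to-weight ratio, the two quick sizing
# parameters of conceptual design (inputs are assumed, not official).
def wing_loading(weight_lb, wing_area_ft2):
    return weight_lb / wing_area_ft2          # lb/ft^2

def thrust_to_weight(total_thrust_lbf, weight_lb):
    return total_thrust_lbf / weight_lb       # dimensionless

W, S, T = 140000.0, 1800.0, 2 * 32500.0       # assumed gross values
print(wing_loading(W, S), thrust_to_weight(T, W))
```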
Paramedir: A Tool for Programmable Performance Analysis
NASA Technical Reports Server (NTRS)
Jost, Gabriele; Labarta, Jesus; Gimenez, Judit
2004-01-01
Performance analysis of parallel scientific applications is time consuming and requires great expertise in areas such as programming paradigms, system software, and computer hardware architectures. In this paper we describe a tool that facilitates the programmability of performance metric calculations thereby allowing the automation of the analysis and reducing the application development time. We demonstrate how the system can be used to capture knowledge and intuition acquired by advanced parallel programmers in order to be transferred to novice users.
A numerical fragment basis approach to SCF calculations.
NASA Astrophysics Data System (ADS)
Hinde, Robert J.
1997-11-01
The counterpoise method is often used to correct for basis set superposition error in calculations of the electronic structure of bimolecular systems. One drawback of this approach is the need to specify a "reference state" for the system; for reactive systems, the choice of an unambiguous reference state may be difficult. An example is the reaction F⁻ + HCl → HF + Cl⁻. Two obvious reference states for this reaction are F⁻ + HCl and HF + Cl⁻; however, different counterpoise-corrected interaction energies are obtained using these two reference states. We outline a method for performing SCF calculations which employs numerical basis functions; this method attempts to eliminate basis set superposition errors in an a priori fashion. We test the proposed method on two one-dimensional, three-center systems and discuss the possibility of extending our approach to include electron correlation effects.
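The counterpoise correction itself is just energy bookkeeping: the interaction energy is the dimer energy minus the monomer energies, with each monomer recomputed in the full dimer ("ghost") basis. The energies below are placeholder numbers in hartree, not results for any real system.

```python
# Counterpoise-corrected interaction energy:
#   E_int^CP = E_AB(AB basis) - E_A(AB basis) - E_B(AB basis)
def counterpoise_interaction(e_dimer_ab, e_monA_ab, e_monB_ab):
    return e_dimer_ab - e_monA_ab - e_monB_ab

# Hypothetical energies for an F-/HCl-like complex (hartree):
print(counterpoise_interaction(-559.9205, -99.4470, -460.4480))
```

The reference-state ambiguity in the abstract arises because a reactive system offers more than one valid (A, B) partition, and each partition yields its own corrected interaction energy.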
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powers, Jeffrey James
2011-11-30
This study focused on creating a new tristructural isotropic (TRISO) coated particle fuel performance model and demonstrating the integration of this model into an existing system of neutronics and heat transfer codes, creating a user-friendly option for including fuel performance analysis within system design optimization and system-level trade-off studies. The end product enables both a deeper understanding and better overall system performance of nuclear energy systems limited or greatly impacted by TRISO fuel performance. A thorium-fueled hybrid fusion-fission Laser Inertial Fusion Energy (LIFE) blanket design was used for illustrating the application of this new capability and demonstrated both the importance of integrating fuel performance calculations into mainstream design studies and the impact that this new integrated analysis had on system-level design decisions. A new TRISO fuel performance model named TRIUNE was developed and verified and validated during this work, with a novel methodology established for simulating the actual lifetime of a TRISO particle during repeated passes through a pebble bed. In addition, integrated self-consistent calculations were performed for neutronics depletion analysis, heat transfer calculations, and then fuel performance modeling for a full parametric study that encompassed over 80 different design options that went through all three phases of analysis. Lastly, side studies were performed that included a comparison of thorium and depleted uranium (DU) LIFE blankets as well as some uncertainty quantification work to help guide future experimental work by assessing what material properties in TRISO fuel performance modeling are most in need of improvement. A recommended thorium-fueled hybrid LIFE engine design was identified with an initial fuel load of 20 MT of thorium, 15% TRISO packing within the graphite fuel pebbles, and a 20 cm neutron multiplier layer with beryllium pebbles in flibe molten salt coolant.
It operated at a system power level of 2000 MWth, took about 3.5 years to reach full plateau power, and was capable of an End of Plateau burnup of 38.7 %FIMA if considering just the neutronic constraints in the system design; however, fuel performance constraints led to a maximum credible burnup of 12.1 %FIMA due to a combination of internal gas pressure and irradiation effects on the TRISO materials (especially PyC) leading to SiC pressure vessel failures. The optimal neutron spectrum for the thorium-fueled blanket options evaluated seemed to favor a hard spectrum (low but non-zero neutron multiplier thicknesses and high TRISO packing fractions) in terms of neutronic performance, but the fuel performance constraints demonstrated that a significantly softer spectrum would be needed to decrease the rate of accumulation of fast neutron fluence in order to improve the maximum credible burnup the system could achieve.
High-efficiency concentration/multi-solar-cell system for orbital power generation
NASA Technical Reports Server (NTRS)
Onffroy, J. R.; Stoltzmann, D. E.; Lin, R. J. H.; Knowles, G. R.
1980-01-01
An analysis was performed to determine the economic feasibility of a concentrating spectrophotovoltaic orbital electrical power generation system. In this system, dichroic beam-splitting mirrors are used to divide the solar spectrum into several wavebands. Absorption of these wavebands by solar cells with matched energy bandgaps increases the cell efficiency while decreasing the amount of heat which must be rejected. The optical concentration is performed in two stages. The first concentration stage employs a Cassegrain-type telescope, resulting in a short system length. The output from this stage is directed to compound parabolic concentrators which comprise the second stage of concentration. Ideal efficiencies for one-, two-, three-, and four-cell systems were calculated under 1000-sun, AM0 conditions, and optimum energy bands were determined. Realistic efficiencies were calculated for various combinations of Si, GaAs, Ge, and GaP. Efficiencies of 32 to 33 percent were obtained with the multicell systems. The optimum system consists of an f/3.5 optical system, a beam splitter to divide the spectrum at 0.9 microns, and two solar cell arrays, GaAs and Si.
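The waveband-splitting efficiency estimate reduces to a power-fraction-weighted sum of per-band cell efficiencies, times the optical throughput. The band fractions, cell efficiencies, and optics efficiency below are illustrative assumptions, not values from the analysis.

```python
# Sketch: overall efficiency of a beam-splitting multi-cell system as a
# waveband-weighted sum (all numbers are illustrative placeholders).
def split_system_efficiency(bands, optics_eff=1.0):
    """bands: list of (fraction of solar power in band,
    cell efficiency for that band)."""
    return optics_eff * sum(f * eta for f, eta in bands)

# Two-cell example: GaAs below the 0.9-micron split, Si above it.
bands = [(0.55, 0.38), (0.35, 0.25)]   # 10% of power unused by either cell
print(split_system_efficiency(bands, optics_eff=0.90))
```

Matching each band to a cell with a suitable bandgap raises the per-band efficiency and shrinks the rejected-heat term, which is the mechanism the abstract describes.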
Chen, Guang-Pei; Ahunbay, Ergun; Li, X Allen
2016-04-01
To develop an integrated quality assurance (QA) software tool for online replanning capable of efficiently and automatically checking radiation treatment (RT) planning parameters and gross plan quality, verifying treatment plan data transfer from the treatment planning system (TPS) to the record and verify (R&V) system, performing a secondary monitor unit (MU) calculation with or without the presence of a magnetic field from an MR-Linac, and validating the delivery record's consistency with the plan. The software tool, named ArtQA, was developed to obtain and compare plan and treatment parameters from both the TPS and the R&V system database. The TPS data are accessed via direct file reading and the R&V data are retrieved via open database connectivity and structured query language. Plan quality is evaluated with both the logical consistency of planning parameters and the achieved dose-volume histograms. Beams in between the TPS and R&V system are matched based on geometry configurations. To consider the effect of a 1.5 T transverse magnetic field from the MR-Linac in the secondary MU calculation, a method based on a modified Clarkson integration algorithm was developed and tested for a series of clinical situations. ArtQA has been used clinically and can quickly detect inconsistencies and deviations in the entire RT planning process. With the use of the ArtQA tool, the efficiency for plan checks including plan quality, data transfer, and delivery checks can be improved by at least 60%. The newly developed independent MU calculation tool for the MR-Linac reduces the difference between the plan and calculated MUs by 10%. The software tool ArtQA can be used to perform a comprehensive QA check from planning to delivery with a conventional Linac or MR-Linac and is an essential tool for online replanning where the QA check needs to be performed rapidly.
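The final step of a secondary MU check is a simple tolerance test on the percent difference between the planned and independently calculated MUs. The dose engine itself (Clarkson-style integration, magnetic-field corrections) is beyond this snippet; the values and 5% tolerance below are hypothetical.

```python
# Sketch of the tolerance test an independent MU check performs
# (the Clarkson-style dose calculation itself is not modeled here).
def mu_check(plan_mu, independent_mu, tolerance_pct=5.0):
    """Return (percent difference, True if within tolerance)."""
    diff_pct = 100.0 * (independent_mu - plan_mu) / plan_mu
    return diff_pct, abs(diff_pct) <= tolerance_pct

print(mu_check(200.0, 207.0))   # (3.5, True)
```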
NASA Astrophysics Data System (ADS)
Callaghan, S.; Maechling, P. J.; Juve, G.; Vahi, K.; Deelman, E.; Jordan, T. H.
2015-12-01
The CyberShake computational platform, developed by the Southern California Earthquake Center (SCEC), is an integrated collection of scientific software and middleware that performs 3D physics-based probabilistic seismic hazard analysis (PSHA) for Southern California. CyberShake integrates large-scale and high-throughput research codes to produce probabilistic seismic hazard curves for individual locations of interest and hazard maps for an entire region. A recent CyberShake calculation produced about 500,000 two-component seismograms for each of 336 locations, resulting in over 300 million synthetic seismograms in a Los Angeles-area probabilistic seismic hazard model. CyberShake calculations require a series of scientific software programs. Early computational stages produce data used as inputs by later stages, so we describe CyberShake calculations using a workflow definition language. Scientific workflow tools automate and manage the input and output data and enable remote job execution on large-scale HPC systems. To satisfy the requests of broad impact users of CyberShake data, such as seismologists, utility companies, and building code engineers, we successfully completed CyberShake Study 15.4 in April and May 2015, calculating a 1 Hz urban seismic hazard map for Los Angeles. We distributed the calculation between the NSF Track 1 system NCSA Blue Waters, the DOE Leadership-class system OLCF Titan, and USC's Center for High Performance Computing. This study ran for over 5 weeks, burning about 1.1 million node-hours and producing over half a petabyte of data. The CyberShake Study 15.4 results doubled the maximum simulated seismic frequency from 0.5 Hz to 1.0 Hz as compared to previous studies, representing a factor of 16 increase in computational complexity. We will describe how our workflow tools supported splitting the calculation across multiple systems. 
We will explain how we modified CyberShake software components, including GPU implementations and migrating from file-based communication to MPI messaging, to greatly reduce the I/O demands and node-hour requirements of CyberShake. We will also present performance metrics from CyberShake Study 15.4, and discuss challenges that producers of Big Data on open-science HPC resources face moving forward.
Multispectral scanner system parameter study and analysis software system description, volume 2
NASA Technical Reports Server (NTRS)
Landgrebe, D. A. (Principal Investigator); Mobasseri, B. G.; Wiersma, D. J.; Wiswell, E. R.; Mcgillem, C. D.; Anuta, P. E.
1978-01-01
The author has identified the following significant results. The integration of the available methods provided the analyst with the unified scanner analysis package (USAP), whose flexibility and versatility were superior to many previous integrated techniques. The USAP consisted of three main subsystems: (1) a spatial path, (2) a spectral path, and (3) a set of analytic classification accuracy estimators which evaluated the system performance. The spatial path consisted of satellite and/or aircraft data, a data correlation analyzer, the scanner IFOV, and a random noise model. The output of the spatial path was fed into the analytic classification and accuracy predictor. The spectral path consisted of laboratory and/or field spectral data, EXOSYS data retrieval, optimum spectral function calculation, data transformation, and statistics calculation. The output of the spectral path was fed into the stratified posterior performance estimator.
Methods and new approaches to the calculation of physiological parameters by videodensitometry
NASA Technical Reports Server (NTRS)
Kedem, D.; Londstrom, D. P.; Rhea, T. C., Jr.; Nelson, J. H.; Price, R. R.; Smith, C. W.; Graham, T. P., Jr.; Brill, A. B.; Kedem, D.
1976-01-01
A complex system featuring a video camera connected to a video disk, a cine (medical motion picture) camera, and a PDP-9 computer with various input/output facilities has been developed. This system enables quantitative analysis of various functions recorded in clinical studies. Several studies are described, such as heart chamber volume calculations, left ventricle ejection fraction, blood flow through the lungs, and the possibility of obtaining information about blood flow and constrictions in small cross-section vessels.
NASA Astrophysics Data System (ADS)
Akhmed-Ogly, K. V.; Savichev, O. G.; Tokarenko, O. G.; Pasechnik, E. Yu; Reshetko, M. V.; Nalivajko, N. G.; Vlasova, M. V.
2014-08-01
A technique for treating domestic wastewater at small residential areas and oil and gas facilities has been substantiated, using natural and man-made systems that include a settling tank for mechanical treatment and a biological pond with a peat substrate and bog vegetation for biological treatment. A technique for calculating the parameters of such natural and man-made systems has been developed. It was proven that effective wastewater treatment can be performed in Siberia all year round.
Adaptive real-time methodology for optimizing energy-efficient computing
Hsu, Chung-Hsing [Los Alamos, NM]; Feng, Wu-Chun [Blacksburg, VA]
2011-06-28
Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
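The frequency-selection idea described in the abstract can be illustrated with a toy controller. This is a generic sketch, not the patented method: the split of run time into a CPU-bound part and a frequency-insensitive memory-bound part, and the names `exec_time` and `pick_frequency`, are our own simplifications of the run-time workload and frequency-sensitivity calculations.

```python
def exec_time(w_cpu, t_mem, f):
    """Toy workload model: CPU-bound work w_cpu scales with 1/f,
    while memory-bound time t_mem is insensitive to CPU frequency."""
    return w_cpu / f + t_mem

def pick_frequency(freqs, w_cpu, t_mem, slowdown=0.05):
    """Choose the lowest frequency whose predicted run time stays
    within (1 + slowdown) of the fastest available setting."""
    budget = (1 + slowdown) * exec_time(w_cpu, t_mem, max(freqs))
    for f in sorted(freqs):  # ascending: prefer the low-power choice
        if exec_time(w_cpu, t_mem, f) <= budget:
            return f
    return max(freqs)

# A memory-bound workload tolerates a much lower clock than a CPU-bound one.
f_membound = pick_frequency([1.0, 1.5, 2.0, 2.5], w_cpu=1.0, t_mem=9.0)
f_cpubound = pick_frequency([1.0, 1.5, 2.0, 2.5], w_cpu=9.0, t_mem=1.0)
```

Because the memory-bound run spends little of its time on the CPU, the controller settles on 1.5 (in arbitrary frequency units), while the CPU-bound run needs the top setting of 2.5 to stay within the performance budget.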
NASA Technical Reports Server (NTRS)
Gordon, Sanford; Zeleznik, Frank J.; Huff, Vearl N.
1959-01-01
A general computer program for chemical equilibrium and rocket performance calculations was written for the IBM 650 computer with 2000 words of drum storage, 60 words of high-speed core storage, indexing registers, and floating point attachments. The program is capable of carrying out combustion and isentropic expansion calculations on a chemical system that may include as many as 10 different chemical elements, 30 reaction products, and 25 pressure ratios. In addition to the equilibrium composition, temperature, and pressure, the program calculates specific impulse, specific impulse in vacuum, characteristic velocity, thrust coefficient, area ratio, molecular weight, Mach number, specific heat, isentropic exponent, enthalpy, entropy, and several thermodynamic first derivatives.
NASA Technical Reports Server (NTRS)
Vicroy, D. D.; Knox, C. E.
1983-01-01
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with consideration given to gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane are described.
HRMS sky survey wideband feed system design for DSS 24 beam waveguide antenna
NASA Technical Reports Server (NTRS)
Stanton, P. H.; Lee, P. R.; Reilly, H. F.
1993-01-01
The High-Resolution Microwave Survey (HRMS) Sky Survey project will be implemented on the DSS 24 beam waveguide (BWG) antenna over the frequency range of 2.86 to 10 GHz. Two wideband, ring-loaded, corrugated feed horns were designed to cover this range. The horns match the frequency-dependent gain requirements for the DSS 24 BWG system. The performance of the feed horns and the calculated system performance of DSS 24 are presented.
System For Characterizing Three-Phase Brushless dc Motors
NASA Technical Reports Server (NTRS)
Howard, David E.; Smith, Dennis A.
1996-01-01
System of electronic hardware and software developed to automate measurements and calculations needed to characterize electromechanical performances of three-phase brushless dc motors, associated shaft-angle sensors needed for commutation, and associated brushless tachometers. System quickly takes measurements on all three phases of motor, tachometer, and shaft-angle sensor simultaneously and processes measurements into performance data. Also useful in development and testing of motors with not only three phases but also two, four, or more phases.
Initial test of MITA/DIMM with an operational CBP system
NASA Astrophysics Data System (ADS)
Baldwin, Kevin; Hanna, Randall; Brown, Andrea; Brown, David; Moyer, Steven; Hixson, Jonathan G.
2018-05-01
The MITA (Motion Imagery Task Analyzer) project was conceived by CBP OA (Customs and Border Protection - Office of Acquisition) and executed by JHU/APL (Johns Hopkins University/Applied Physics Laboratory) and CERDEC NVESD MSD (Communications and Electronics Research Development Engineering Command Night Vision and Electronic Sensors Directorate Modeling and Simulation Division). The intent was to develop an efficient methodology whereby imaging system performance could be quickly and objectively characterized in a field setting. The initial design, development, and testing spanned a period of approximately 18 months with the initial project coming to a conclusion after testing of the MITA system in June 2017 with a fielded CBP system. The NVESD contribution to MITA was thermally heated target resolution boards deployed to support a range close to the sensor and, when possible, at range with the targets of interest. JHU/APL developed a laser DIMM (Differential Image Motion Monitor) system designed to measure the optical turbulence present along the line of sight of the imaging system during the time of image collection. The imagery collected of the target board was processed to calculate the in situ system resolution. This in situ imaging system resolution and the time-correlated turbulence measured by the DIMM system were used in NV-IPM (Night Vision Integrated Performance Model) to calculate the theoretical imaging system performance. Overall, this proves the MITA concept feasible. However, MITA is still in the initial phases of development and requires further verification and validation to ensure accuracy and reliability of both the instrument and the imaging system performance predictions.
Preliminary performance analysis of an interplanetary navigation system using asteroid based beacons
NASA Technical Reports Server (NTRS)
Jee, J. Rodney; Khatib, Ahmad R.; Muellerschoen, Ronald J.; Williams, Bobby G.; Vincent, Mark A.
1988-01-01
A futuristic interplanetary navigation system using transmitters placed on selected asteroids is introduced. This network of space beacons is seen as a needed alternative to the overly burdened Deep Space Network. Covariance analyses on the potential performance of these space beacons located on a candidate constellation of eight real asteroids are initiated. Simplified analytic calculations are performed to determine limiting accuracies attainable with the network for geometric positioning. More sophisticated computer simulations are also performed to determine potential accuracies using long arcs of range and Doppler data from the beacons. The results from these computations show promise for this navigation system.
CMS endcap RPC performance analysis
NASA Astrophysics Data System (ADS)
Teng, H.; CMS Collaboration
2014-08-01
The Resistive Plate Chamber (RPC) detector system in the LHC CMS experiment is designed for triggering purposes. The endcap RPC system has been successfully operated from the commissioning period (2008) to the end of RUN1 (2013). We have developed an analysis tool for endcap RPC performance and validated the efficiency calculation algorithm, focusing on the first endcap station, which was assembled and tested by the Peking University group. We cross-checked the results obtained with those extracted with alternative methods and found good agreement in terms of performance parameters [1]. The results showed that the CMS-RPC endcap system fulfilled the performance expected in the Technical Design Report [2].
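The abstract does not spell out the validated efficiency algorithm; a minimal hit-based estimate with a binomial error, of the kind commonly used for chamber efficiency studies, might look like the following (our sketch, not the collaboration's code):

```python
import math

def hit_efficiency(hits, expected):
    """Detector efficiency as the fraction of expected track crossings
    that produced a hit, with the usual binomial standard error."""
    if expected == 0:
        raise ValueError("no expected tracks")
    eff = hits / expected
    err = math.sqrt(eff * (1.0 - eff) / expected)
    return eff, err

# e.g. 950 hits out of 1000 extrapolated tracks crossing the chamber
eff, err = hit_efficiency(950, 1000)
```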
Integrated flight/propulsion control - Subsystem specifications for performance
NASA Technical Reports Server (NTRS)
Neighbors, W. K.; Rock, Stephen M.
1993-01-01
A procedure is presented for calculating multiple subsystem specifications given a number of performance requirements on the integrated system. This procedure applies to problems where the control design must be performed in a partitioned manner. It is based on a structured singular value analysis, and generates specifications as magnitude bounds on subsystem uncertainties. The performance requirements should be provided in the form of bounds on transfer functions of the integrated system. This form allows the expression of model following, command tracking, and disturbance rejection requirements. The procedure is demonstrated on a STOVL aircraft design.
Volume accumulator design analysis computer codes
NASA Technical Reports Server (NTRS)
Whitaker, W. D.; Shimazaki, T. T.
1973-01-01
The computer codes, VANEP and VANES, were written and used to aid in the design and performance calculation of the volume accumulator units (VAU) for the 5-kwe reactor thermoelectric system. VANEP computes the VAU design which meets the primary coolant loop VAU volume and pressure performance requirements. VANES computes the performance of the VAU design, determined from the VANEP code, at the conditions of the secondary coolant loop. The codes can also compute the performance characteristics of the VAU's under conditions of possible modes of failure which still permit continued system operation.
Computer programs simplify optical system analysis
NASA Technical Reports Server (NTRS)
1965-01-01
The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.
Are university rankings useful to improve research? A systematic review.
Vernon, Marlo M; Balas, E Andrew; Momani, Shaher
2018-01-01
Concerns about reproducibility and impact of research urge improvement initiatives. Current university ranking systems evaluate and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems when examining quality and outcomes is unclear. The purpose of this study was to evaluate the usefulness of ranking systems and identify opportunities to support research quality and performance improvement. A systematic review of university ranking systems was conducted to investigate research performance and academic quality measures. Eligibility requirements included: inclusion of at least 100 doctoral-granting institutions; current production on an ongoing basis; inclusion of both global and US universities; publication of the rank calculation methodology in English; and independent calculation of ranks. Ranking systems must also include some measures of research outcomes. Indicators were abstracted and contrasted with basic quality improvement requirements. Exploration of aggregation methods, the validity of research and academic quality indicators, and suitability for quality improvement within ranking systems were also conducted. A total of 24 ranking systems were identified and 13 eligible ranking systems were evaluated. Six of the 13 rankings are 100% focused on research performance. For those reporting weighting, 76% of the total ranks are attributed to research indicators, with 24% attributed to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards. Rankings influence academic choice, yet research performance measures are the most heavily weighted indicators. There are no generally accepted academic quality indicators in ranking systems. No single ranking system provides a comprehensive evaluation of research and academic quality.
Utilizing a combined approach of the Leiden, Thomson Reuters Most Innovative Universities, and the SCImago ranking systems may provide institutions with a more effective feedback for research improvement. Rankings which extensively rely on subjective reputation and "luxury" indicators, such as award winning faculty or alumni who are high ranking executives, are not well suited for academic or research performance improvement initiatives. Future efforts should better explore measurement of the university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more of university ranking systems are used in efforts to improve academic prominence and research performance.
FLUSH: A tool for the design of slush hydrogen flow systems
NASA Technical Reports Server (NTRS)
Hardy, Terry L.
1990-01-01
As part of the National Aerospace Plane Project an analytical model was developed to perform calculations for in-line transfer of solid-liquid mixtures of hydrogen. This code, called FLUSH, calculates pressure drop and solid fraction loss for the flow of slush hydrogen through pipe systems. The model solves the steady-state, one-dimensional equation of energy to obtain slush loss estimates. A description of the code is provided as well as a guide for users of the program. Preliminary results are also presented showing the anticipated degradation of slush hydrogen solid content for various piping systems.
SU-E-J-60: Efficient Monte Carlo Dose Calculation On CPU-GPU Heterogeneous Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, K; Chen, D. Z; Hu, X. S
Purpose: It is well-known that the performance of GPU-based Monte Carlo dose calculation implementations is bounded by memory bandwidth. One major cause of this bottleneck is the random memory writing pattern in dose deposition, which leads to several memory efficiency issues on the GPU such as un-coalesced writes and atomic operations. We propose a new method to alleviate these issues on CPU-GPU heterogeneous systems, which achieves an overall performance improvement for Monte Carlo dose calculation. Methods: Dose deposition accumulates dose into the voxels of a dose volume along the trajectories of radiation rays. Our idea is to partition this procedure into the following three steps, which are fine-tuned for the CPU or the GPU: (1) each GPU thread writes dose results with location information to a buffer in GPU memory, which achieves fully coalesced and atomic-free memory transactions; (2) the dose results in the buffer are transferred to CPU memory; (3) the dose volume is constructed from the dose buffer on the CPU. We organize the processing of all radiation rays into streams. Since the steps within a stream use different hardware resources (i.e., GPU, DMA, CPU), we can overlap the execution of these steps for different streams by pipelining. Results: We evaluated our method using a Monte Carlo Convolution Superposition (MCCS) program and tested our implementation on various clinical cases on a heterogeneous system containing an Intel i7 quad-core CPU and an NVIDIA TITAN GPU. Compared with a straightforward MCCS implementation on the same system (using both the CPU and GPU for radiation ray tracing), our method gained a 2-5X speedup without losing dose calculation accuracy. Conclusion: The results show that our new method improves the effective memory bandwidth and overall performance of MCCS on CPU-GPU systems. Our proposed method can also be applied to accelerate other Monte Carlo dose calculation approaches.
This research was supported in part by NSF under Grants CCF-1217906, and also in part by a research contract from the Sandia National Laboratories.
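The three-step deposition scheme above can be mimicked entirely on the host side. The sketch below (our names and simplifications; in the real method step 1 runs per-thread on the GPU) shows the core trick: replacing contended scatter-adds into the dose volume with an append-only buffer that is reduced in one pass.

```python
def deposit_buffered(ray_hits, n_voxels):
    """(1) Record (voxel index, dose) pairs in a flat buffer instead of
        updating the shared dose volume in place -- append-only writes,
        so no atomics or write contention are needed.
    (2) Hand the buffer off (here it simply stays in host memory).
    (3) Accumulate the dose volume from the buffer in a single pass."""
    buffer = list(ray_hits)           # step 1: append-only records
    volume = [0.0] * n_voxels
    for idx, dose in buffer:          # step 3: one-pass accumulation
        volume[idx] += dose           # repeated indices sum correctly
    return volume

# Two hits land in voxel 0 and one in voxel 2.
vol = deposit_buffered([(0, 1.0), (2, 0.5), (0, 0.25)], n_voxels=4)
```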
Fakih, Mohamad G; Skierczynski, Boguslow; Bufalino, Angelo; Groves, Clariecia; Roberts, Phillip; Heavens, Michelle; Hendrich, Ann; Haydar, Ziad
2016-12-01
The standardized infection ratio (SIR) evaluates individual publicly reported health care-associated infections, but it may not assess overall performance. We piloted an infection composite score (ICS) in 82 hospitals of a single health system. The ICS is a combined score for central line-associated bloodstream infections, catheter-associated urinary tract infections, colon and abdominal hysterectomy surgical site infections, and hospital-onset methicillin-resistant Staphylococcus aureus bacteremia and Clostridium difficile infections. Individual facility ICSs were calculated by normalizing each of the 6 SIR events to the system SIR for baseline and performance periods (ICSib and ICSip, respectively). A hospital's ICSib reflected its baseline performance compared with the system baseline, whereas its ICSip provided information on its outcome changes compared with the system baseline. Both the ICSib (baseline 2013) and ICSip (performance 2014) were calculated for 63 hospitals (reporting at least 4 of the 6 event types). The ICSip improved in 36 of 63 (57.1%) hospitals in 2014 when compared with the ICSib in 2013. The ICSib 2013 median was 0.96 (range, 0.13-2.94) versus the 2014 ICSip median of 0.92 (range, 0-6.55). Variation was more evident in hospitals with ≤100 beds. The system performance score (ICSsp) in 2014 was 0.95, a 5% improvement compared with 2013. The proposed ICS may help large health systems and state hospital associations better evaluate key infectious outcomes, comparing them with the historic and concurrent performance of peers. Copyright © 2016 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Elsevier Inc. All rights reserved.
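The abstract describes normalizing each facility SIR to the corresponding system SIR, but the exact aggregation into one composite is not given; the sketch below assumes a simple mean of the normalized ratios (the function and variable names are ours, not the authors'):

```python
def composite_score(facility_sirs, system_sirs):
    """Normalize each facility event SIR by the system-wide SIR for the
    same event, then average the ratios into a single composite score.
    Events with an undefined (zero) system SIR are skipped."""
    ratios = [f / s for f, s in zip(facility_sirs, system_sirs) if s > 0]
    if not ratios:
        raise ValueError("no comparable events")
    return sum(ratios) / len(ratios)

# A facility better than the system on one event, worse on another,
# and matching it elsewhere ends up near 1.0.
score = composite_score([0.8, 1.0, 1.2, 0.9], [1.0, 1.0, 1.0, 0.9])
```

A score below 1.0 would indicate overall performance better than the system baseline, mirroring how an individual SIR below 1.0 is read.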
The Determination of the Percent of Oxygen in Air Using a Gas Pressure Sensor
ERIC Educational Resources Information Center
Gordon, James; Chancey, Katherine
2005-01-01
The experiment of determination of the percent of oxygen in air is performed in a general chemistry laboratory in which students compare the results calculated from the pressure measurements obtained with the calculator-based systems to those obtained in a water-measurement method. This experiment allows students to explore a fundamental reaction…
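The pressure-sensor calculation the students perform reduces to a one-line proportion: if all the O2 in a closed sample reacts at constant temperature and volume, the fractional pressure drop equals the oxygen mole fraction. A sketch with illustrative numbers (not taken from the article):

```python
def percent_oxygen(p_initial, p_final):
    """Percent O2 in the trapped air, assuming the reaction consumes all
    oxygen and temperature/volume stay constant (pressure tracks moles)."""
    return 100.0 * (p_initial - p_final) / p_initial

pct = percent_oxygen(101.3, 80.1)  # kPa readings before and after the reaction
```

With these illustrative readings the result lands near the accepted ~20.9% oxygen content of air.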
Access information about how CHP systems work; their efficiency, environmental, economic, and reliability benefits; the cost and performance characteristics of CHP technologies; and how to calculate CHP efficiency emissions savings.
TU-D-201-05: Validation of Treatment Planning Dose Calculations: Experience Working with MPPG 5.a
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xue, J; Park, J; Kim, L
2016-06-15
Purpose: The newly published medical physics practice guideline (MPPG 5.a.) has set the minimum requirements for commissioning and QA of treatment planning dose calculations. We present our experience in the validation of a commercial treatment planning system based on MPPG 5.a. Methods: In addition to tests traditionally performed to commission a model-based dose calculation algorithm, extensive tests were carried out at short and extended SSDs, various depths, oblique gantry angles, and off-axis conditions to verify the robustness and limitations of a dose calculation algorithm. A comparison between measured and calculated dose was performed based on the validation tests and evaluation criteria recommended by MPPG 5.a. An ion chamber was used for the measurement of dose at points of interest, and diodes were used for photon IMRT/VMAT validations. Dose profiles were measured with a three-dimensional scanning system and calculated in the TPS using a virtual water phantom. Results: Calculated and measured absolute dose profiles were compared at each specified SSD and depth for open fields. Disagreement is easily identifiable with the difference curve. Subtle discrepancies revealed the limitations of the measurement, e.g., a spike in the high-dose region and an asymmetrical penumbra observed in the tests with an oblique MLC beam. The excellent results (> 98% pass rate on a 3%/3mm gamma index) on the end-to-end tests for both IMRT and VMAT are attributed to the quality beam data and a good understanding of the modeling. The limitations of the model and the uncertainty of measurement were considered when comparing the results. Conclusion: The extensive tests recommended by the MPPG encourage us to understand the accuracy and limitations of a dose algorithm as well as the uncertainty of measurement. Our experience has shown how the suggested tests can be performed effectively to validate dose calculation models.
Calculating Reuse Distance from Source Code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Narayanan, Sri Hari Krishna; Hovland, Paul
The efficient use of a system is of paramount importance in high-performance computing. Applications need to be engineered for future systems even before the architecture of such a system is clearly known. Static performance analysis that generates performance bounds is one way to approach the task of understanding application behavior. Performance bounds provide an upper limit on the performance of an application on a given architecture. Predicting cache hierarchy behavior and accesses to main memory is a requirement for accurate performance bounds. This work presents our static reuse distance algorithm to generate reuse distance histograms. We then use these histograms to predict cache miss rates. Experimental results for the kernels studied show that the approach is accurate.
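The paper computes reuse distances statically from source code; as a concrete illustration of the underlying concept, the dynamic (trace-based) version and the resulting miss-rate prediction for a fully associative LRU cache can be sketched as follows (our simplification, not the authors' static algorithm):

```python
def reuse_histogram(trace):
    """Reuse distance of each access = number of distinct addresses touched
    since the previous access to the same address (inf for a first use)."""
    last_seen, hist = {}, {}
    for i, addr in enumerate(trace):
        if addr in last_seen:
            # distinct addresses between the two accesses to addr
            dist = len(set(trace[last_seen[addr] + 1:i]))
        else:
            dist = float("inf")   # cold (compulsory) miss
        hist[dist] = hist.get(dist, 0) + 1
        last_seen[addr] = i
    return hist

def lru_miss_rate(hist, capacity):
    """Fully associative LRU: an access misses iff its reuse distance
    is at least the number of cache lines."""
    misses = sum(n for d, n in hist.items() if d >= capacity)
    return misses / sum(hist.values())

hist = reuse_histogram(["a", "b", "a", "c", "b", "a"])
rate = lru_miss_rate(hist, capacity=3)  # only the three cold misses remain
```

This is exactly the use the abstract describes: once the histogram exists, predicting the miss rate for any cache capacity is a single pass over it.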
Favazza, Christopher P.; Fetterly, Kenneth A.; Hangiandreou, Nicholas J.; Leng, Shuai; Schueler, Beth A.
2015-01-01
Abstract. Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial domain-based channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks. PMID:26158086
DSN 100-meter X and S band microwave antenna design and performance
NASA Technical Reports Server (NTRS)
Williams, W. F.
1978-01-01
The RF performance is studied for large (100 meter) reflector antenna systems using the high-efficiency dual shaped reflector approach. An altered phase was considered so that the scattered field from a shaped surface could be used in the JPL efficiency program. A new dual-band (X-S) microwave feed horn was used in the shaping calculations. A great many shaping calculations were made for various horn sizes and locations, and the final RF efficiencies are reported. The conclusion is reached that, when using the new dual-band horn, shaping should probably be performed using the pattern of the lower frequency.
SU-E-T-423: Fast Photon Convolution Calculation with a 3D-Ideal Kernel On the GPU
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moriya, S; Sato, M; Tachibana, H
Purpose: Calculation time is a trade-off for improving the accuracy of convolution dose calculation with fine calculation spacing of the KERMA kernel. We investigated accelerating the convolution calculation using an ideal kernel on graphics processing units (GPUs). Methods: The calculation was performed on AMD Dual FirePro D700 graphics hardware, and our algorithm was implemented using Aparapi, which converts Java bytecode to OpenCL. The dose calculation process was separated into TERMA and KERMA steps. The dose deposited at the coordinate (x, y, z) was determined in the process. In the dose calculation running on the central processing unit (CPU), an Intel Xeon E5, the calculation loops were performed over all calculation points. In the GPU computation, all of the calculation processes for the points were sent to the GPU and the computation was multi-threaded. In this study, the dose calculation was performed in a water-equivalent homogeneous phantom with 150³ voxels (2 mm calculation grid), and the calculation speed on the GPU relative to that on the CPU and the accuracy of the PDD were compared. Results: The calculation times for the GPU and the CPU were 3.3 sec and 4.4 hours, respectively. The calculation speed on the GPU was 4800 times faster than that on the CPU. The PDD curve for the GPU perfectly matched that for the CPU. Conclusion: The convolution calculation with the ideal kernel on the GPU was clinically acceptable in calculation time and may be more accurate in inhomogeneous regions. Intensity modulated arc therapy needs dose calculations for different gantry angles at many control points. Thus, it would be more practical for the kernel to use a coarse-spacing technique if the calculation is faster while keeping accuracy similar to a current treatment planning system.
NASA Astrophysics Data System (ADS)
Schlichting, Johannes; Winkler, Kerstin; Koerner, Lienhard; Schletterer, Thomas; Burghardt, Berthold; Kahlert, Hans-Juergen
2000-10-01
The productive and accurate ablation of microstructures demands precise imaging of a mask pattern onto the substrate under work. The job can be done with high-performance wide-field lenses as a key component of ablation equipment. The image field has dimensions of 20 to 30 mm. Typical dimensions and accuracies of the microstructures are on the order of a few microns. On the other hand, the working depth of focus (DOF) has to be on the order of some tens of microns to successfully drill through 20 to 50 μm substrates. All these features have to be achieved under the conditions of high-power UV laser light. Several design principles for such systems are applied: an optimum number of elements, minimum tolerance sensitivity, material restrictions for the lens elements as well as the mechanical parts (mounting), restrictions on the possible power densities on lens surfaces (including ghosts), and matched quality for the manufactured system. The special applications require appropriate performance criteria for theoretical calculation and measurement, which allow conclusions about the performance of the application. The basis is wavefront calculation and measurement (using a Shack-Hartmann sensor) in the UV. Derived criteria are calculated and compared with application results.
Frau, Juan; Glossman-Mitnik, Daniel
2017-01-01
Amino acids and peptides have the potential to perform as corrosion inhibitors. The chemical reactivity descriptors that arise from Conceptual DFT for the twenty natural amino acids have been calculated by using the latest Minnesota family of density functionals. In order to verify the validity of the calculation of the descriptors directly from the HOMO and LUMO, a comparison has been performed with those obtained through ΔSCF results. Moreover, the active sites for nucleophilic and electrophilic attacks have been identified through Fukui function indices, the dual descriptor Δf(r), and the electrophilic and nucleophilic Parr functions. The results could be of interest as a starting point for the study of large peptides, where the calculation of the radical cation and anion of each system may be computationally harder and costly.
NASA Astrophysics Data System (ADS)
Shchinnikov, P. A.; Safronov, A. V.
2014-12-01
General principles of a procedure for matching the energy balances of thermal power plants (TPPs), whose use enhances the accuracy of information-measuring systems (IMSs) during calculations of performance characteristics (PCs), are stated. To do this, the values of measured and calculated variables may be varied within intervals determined by measurement errors and regulations. An example of matching the energy balances of a thermal power plant with a T-180 turbine is given. The proposed procedure allows one to reduce the divergence of the balance equations by a factor of 3-4. It is also shown that the equipment operation mode affects the profit deficiency. Dependences of the divergence of the energy balances on the deviation of the input parameters, and calculated data for the fuel economy before and after matching the energy balances, are presented.
Development report: Automatic System Test and Calibration (ASTAC) equipment
NASA Technical Reports Server (NTRS)
Thoren, R. J.
1981-01-01
A microcomputer-based automatic test system was developed for the daily performance monitoring of the wind energy system time domain (WEST) analyzer. The test system consists of a microprocessor-based controller and hybrid interface unit, which are used for inputting prescribed test signals into all WEST subsystems and for monitoring WEST responses to these signals. Performance is compared to theoretically correct performance levels calculated off-line on a large general-purpose digital computer. Results are displayed on a cathode ray tube or are available from a line printer. Excessive drift and/or lack of repeatability in the high-speed analog sections within WEST is easily detected and the malfunctioning hardware identified using this system.
DSN G/T_op and telecommunications system performance
NASA Technical Reports Server (NTRS)
Stelzried, C.; Clauss, R.; Rafferty, W.; Petty, S.
1992-01-01
Provided here is an intersystem comparison of present and evolving Deep Space Network (DSN) microwave receiving systems. Comparisons of the receiving systems are based on the widely used G/T_op figure of merit, which is defined as antenna gain divided by operating system noise temperature. In 10 years, it is expected that the DSN 32 GHz microwave receiving system will improve the G/T_op performance over the current 8.4 GHz system by 8.3 dB. To compare future telecommunications system end-to-end performance, both the receiving systems' G/T_op and spacecraft transmit parameters are used. Improving the 32 GHz spacecraft transmitter system is shown to increase the end-to-end telecommunications system performance an additional 3.2 dB, for a net improvement of 11.5 dB. These values are without a planet in the field of view (FOV). A Saturn mission is used for an example calculation to indicate the degradation in performance with a planet in the field of view.
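The G/T_op figure of merit is a simple decibel quantity, so system comparisons like the one above can be checked directly. A minimal sketch, using illustrative numbers (not actual DSN antenna parameters):

```python
import math

def g_over_t_db(gain_db: float, t_op_kelvin: float) -> float:
    """Figure of merit G/T_op in dB/K: antenna gain in dB minus
    10*log10 of the operating system noise temperature in kelvin."""
    return gain_db - 10.0 * math.log10(t_op_kelvin)

# Hypothetical gains and noise temperatures for a large ground antenna:
x_band = g_over_t_db(gain_db=74.0, t_op_kelvin=25.0)   # 8.4 GHz system
ka_band = g_over_t_db(gain_db=84.0, t_op_kelvin=40.0)  # 32 GHz system

# Relative receiving-system improvement, in dB.
improvement_db = ka_band - x_band
```

Because both terms are logarithmic, the improvement is just the difference of the two figures of merit; spacecraft transmit-side gains would add on top of this in an end-to-end budget.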
Computer simulation of surface and film processes
NASA Technical Reports Server (NTRS)
Tiller, W. A.; Halicioglu, M. T.
1984-01-01
All the investigations which were performed employed in one way or another a computer simulation technique based on atomistic-level considerations. In general, three types of simulation methods were used for modeling systems with discrete particles that interact via well-defined potential functions: molecular dynamics (a general method for solving the classical equations of motion of a model system); Monte Carlo (the use of a Markov-chain ensemble-averaging technique to model equilibrium properties of a system); and molecular statics (which provides properties of a system at T = 0 K). The effects of three-body forces on the vibrational frequencies of triatomic clusters were investigated. The multilayer relaxation phenomena for low-index planes of an fcc crystal were also analyzed as a function of the three-body interactions. Various surface properties for the Si and SiC systems were calculated. Results obtained from static simulation calculations of slip formation were presented. The more elaborate molecular dynamics calculations on the propagation of cracks in two-dimensional systems were outlined.
Natural bond orbital analysis in the ONETEP code: applications to large protein systems.
Lee, Louis P; Cole, Daniel J; Payne, Mike C; Skylaris, Chris-Kriton
2013-03-05
First principles electronic structure calculations are typically performed in terms of molecular orbitals (or bands), providing a straightforward theoretical avenue for approximations of increasing sophistication, but do not usually provide any qualitative chemical information about the system. We can derive such information via post-processing using natural bond orbital (NBO) analysis, which produces a chemical picture of bonding in terms of localized Lewis-type bond and lone pair orbitals that we can use to understand molecular structure and interactions. We present NBO analysis of large-scale calculations with the ONETEP linear-scaling density functional theory package, which we have interfaced with the NBO 5 analysis program. In ONETEP calculations involving thousands of atoms, one is typically interested in particular regions of a nanosystem whilst accounting for long-range electronic effects from the entire system. We show that by transforming the Non-orthogonal Generalized Wannier Functions of ONETEP to natural atomic orbitals, NBO analysis can be performed within a localized region in such a way that ensures the results are identical to an analysis on the full system. We demonstrate the capabilities of this approach by performing illustrative studies of large proteins--namely, investigating changes in charge transfer between the heme group of myoglobin and its ligands with increasing system size and between a protein and its explicit solvent, estimating the contribution of electronic delocalization to the stabilization of hydrogen bonds in the binding pocket of a drug-receptor complex, and observing, in situ, the n → π* hyperconjugative interactions between carbonyl groups that stabilize protein backbones. Copyright © 2012 Wiley Periodicals, Inc.
Photovoltaic performance models - A report card
NASA Technical Reports Server (NTRS)
Smith, J. H.; Reiter, L. R.
1985-01-01
Models for the analysis of photovoltaic (PV) system designs, implementation policies, and economic performance have proliferated, keeping pace with rapid changes in basic PV technology and the extensive empirical data compiled on such systems' performance. Attention is presently given to the results of a comparative assessment of ten well-documented and widely used models, which range in complexity from first-order approximations of PV system performance to in-depth, circuit-level characterizations. The comparisons were made on the basis of the performance of their subsystem elements as well as their system-level elements. The models fall into three categories according to their degree of aggregation into subsystems: (1) simplified models for first-order calculation of system performance, with easily met input requirements but limited capability to address more than a small variety of design considerations; (2) models simulating PV systems in greater detail, encompassing types primarily intended for either concentrator-incorporating or flat-plate collector PV systems; and (3) models not specifically designed for PV system performance modeling, but applicable to aspects of electrical system design. Models ignoring subsystem failure or degradation are noted to exclude operating and maintenance characteristics as well.
Real-time simulation of an automotive gas turbine using the hybrid computer
NASA Technical Reports Server (NTRS)
Costakis, W.; Merrill, W. C.
1984-01-01
A hybrid computer simulation of an Advanced Automotive Gas Turbine Powertrain System is reported. The system consists of a gas turbine engine, an automotive drivetrain with a four-speed automatic transmission, and a control system. Generally, dynamic performance is simulated on the analog portion of the hybrid computer, while most of the steady-state performance characteristics are calculated on the digital portion. The simulation runs faster than real time, which makes it a useful tool for a variety of analytical studies.
Collaborative Analysis Tool for Thermal Protection Systems for Single Stage to Orbit Launch Vehicles
NASA Technical Reports Server (NTRS)
Alexander, Reginald Andrew; Stanley, Thomas Troy
1999-01-01
Presented is a design tool and process that connects several disciplines which are needed in the complex and integrated design of high-performance reusable single-stage-to-orbit (SSTO) vehicles. Every system is linked to every other system, and in the case of SSTO vehicles with air-breathing propulsion, which is currently being studied by the National Aeronautics and Space Administration (NASA), the thermal protection system (TPS) is linked directly to almost every major system. The propulsion system pushes the vehicle to velocities on the order of 15 times the speed of sound in the atmosphere before pulling up to go to orbit, which results in high temperatures on the external surfaces of the vehicle. Thermal protection systems to maintain the structural integrity of the vehicle must be able to mitigate the heat transfer to the structure and be lightweight. Herein lies the interdependency: as the vehicle's speed increases, the TPS requirements increase, and as TPS masses increase, the effect on the propulsion system and all other systems is compounded. To adequately determine insulation masses for a vehicle such as the one described above, the aeroheating loads must be calculated and the TPS thicknesses must be calculated for the entire vehicle. To accomplish this, an ascent or reentry trajectory is obtained using the computer code Program to Optimize Simulated Trajectories (POST). The trajectory is then used to calculate the convective heat rates at several locations on the vehicle using the Miniature Version of the JA70 Aerodynamic Heating Computer Program (MINIVER). Once the heat rates are defined for each body point on the vehicle, the insulation thicknesses required to maintain the vehicle within structural limits are calculated using Systems Improved Numerical Differencing Analyzer (SINDA) models.
If the TPS masses are too heavy for the performance of the vehicle, the process may be repeated, altering the trajectory or some other input to reduce the TPS mass.
A feedback control for the advanced launch system
NASA Technical Reports Server (NTRS)
Seywald, Hans; Cliff, Eugene M.
1991-01-01
A robust feedback algorithm is presented for a near-minimum-fuel ascent of a two-stage launch vehicle operating in the equatorial plane. The development of the algorithm is based on the ideas of neighboring optimal control and can be divided into three phases. In phase 1, the formalism of optimal control is employed to calculate fuel-optimal ascent trajectories for a simple point-mass model. In phase 2, these trajectories are used to numerically calculate gain functions of time for the control(s), the total flight time, and possibly other variables of interest. In phase 3, these gains are used to determine feedback expressions for the controls associated with a more realistic model of a launch vehicle. With the Advanced Launch System in mind, all calculations are performed on a two-stage vehicle with a fixed thrust history, but this restriction is by no means important for the approach taken. The performance and robustness of the algorithm are found to be excellent.
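The three-phase scheme can be illustrated with a toy gain-scheduled feedback law: precomputed gains are interpolated in time and applied to the deviation of the state from the nominal trajectory. The gain table, times, and scalar state below are hypothetical stand-ins for the numerically computed gain functions:

```python
from bisect import bisect_left

def make_gain_schedule(times, gains):
    """Return K(t), a piecewise-linear interpolation of precomputed
    neighboring-optimal gains, clamped at the table endpoints."""
    def K(t):
        if t <= times[0]:
            return gains[0]
        if t >= times[-1]:
            return gains[-1]
        i = bisect_left(times, t)
        w = (t - times[i - 1]) / (times[i] - times[i - 1])
        return gains[i - 1] + w * (gains[i] - gains[i - 1])
    return K

def feedback_control(u_nominal, K, x, x_nominal):
    """Phase-3 style law: nominal control plus gain times state deviation."""
    return u_nominal + K * (x - x_nominal)

# Hypothetical gain table and a single scalar state deviation:
K = make_gain_schedule([0.0, 10.0, 20.0], [1.0, 0.5, 0.2])
u = feedback_control(u_nominal=0.8, K=K(5.0), x=1.02, x_nominal=1.0)
```

In the actual algorithm the state, gains, and controls are vectors and matrices obtained from the neighboring-optimal-control computation, but the interpolate-then-correct structure is the same.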
Design study of long-life PWR using thorium cycle
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subkhi, Moh. Nurul; Su'ud, Zaki; Waris, Abdul
2012-06-06
A design study of a long-life Pressurized Water Reactor (PWR) using the thorium cycle has been performed. The thorium cycle in general has a higher conversion ratio in the thermal spectrum domain than the uranium cycle. Cell, burn-up, and multigroup diffusion calculations were performed with the PIJ-CITATION-SRAC code using libraries based on JENDL 3.2. The neutronic analysis of the infinite cell calculation shows that ²³¹Pa performs better than ²³⁷Np as a burnable poison in a thorium fuel system. A thorium oxide system with 8% ²³³U enrichment and 7.6-8% ²³¹Pa is the most suitable fuel for a small long-life PWR core because it gives a reactivity swing of less than 1% Δk/k and a longer burn-up period (more than 20 years). Using this result, a small long-life PWR core can be designed for long-term operation with excess reactivity reduced to as low as 0.53% Δk/k and reduced power peaking during its operation.
Online performance evaluation of RAID 5 using CPU utilization
NASA Astrophysics Data System (ADS)
Jin, Hai; Yang, Hua; Zhang, Jiangling
1998-09-01
Redundant arrays of independent disks (RAID) technology is an efficient way to relieve the bottleneck between CPU processing ability and the I/O subsystem. From the system point of view, the most important on-line performance metric is CPU utilization. This paper first presents a way to calculate the CPU utilization of a system connected to a RAID level 5 subsystem using a statistical averaging method. The simulation results for the CPU utilization of such a system show that using multiple disks as an array to access data in parallel is an efficient way to enhance the on-line performance of a disk storage system. Using high-end disk drives to compose the disk array is the key to enhancing the on-line performance of the system.
Internal computational fluid mechanics on supercomputers for aerospace propulsion systems
NASA Technical Reports Server (NTRS)
Andersen, Bernhard H.; Benson, Thomas J.
1987-01-01
The accurate calculation of three-dimensional internal flowfields for application to aerospace propulsion systems requires computational resources available only on supercomputers. A survey is presented of three-dimensional calculations of hypersonic, transonic, and subsonic internal flowfields conducted at the Lewis Research Center. A steady-state Parabolized Navier-Stokes (PNS) solution of flow in a Mach 5.0 mixed-compression inlet, a Navier-Stokes solution of flow in the vicinity of a terminal shock, and a PNS solution of flow in a diffusing S-bend with vortex generators are presented and discussed. All of these calculations were performed on either the NAS Cray-2 or the Lewis Research Center Cray XMP.
Calculation of Electronic Structure and Field Induced Magnetic Collapse in Ferroic Materials
NASA Astrophysics Data System (ADS)
Entel, Peter; Arróyave, Raymundo; Singh, Navdeep; Sokolovskiy, Vladimir V.; Buchelnikov, Vasiliy D.
We have performed ab initio electronic structure calculations and Monte Carlo simulations of FeRh, Mn3GaC, and Heusler intermetallic alloys such as Ni-Co-Cr-Mn-(Ga, In, Sn), which are of interest for solid-state refrigeration and energy systems, an emerging technology involving such solid-solid transformations. The calculations reveal that the important magnetic phase diagrams of these alloys, which show the magnetic collapse and allow predictions of the related magnetocaloric effect (MCE) that they exhibit at finite temperatures, can be obtained by ab initio and Monte Carlo computations in qualitatively good agreement with experimental data. This is a one-step procedure from theory to alloy design of ferroic functional devices.
NASA Astrophysics Data System (ADS)
Kamaltdinov, V. G.; Markov, V. A.; Lysov, I. O.
2018-03-01
To analyze the peculiarities of the combustion process in an overloaded diesel engine with a Common Rail fuel system and single-stage injection, the indicator diagram was recorded. The parameters of the combustion process simulated by the double-Wiebe function were calculated and found to satisfactorily reconstruct the law of burning rate variation. The main parameters of the operating cycle obtained through processing of the indicator diagram and through the double-Wiebe function calculation differed insignificantly, and the calculated cylinder pressure curve differed notably only at the end of the expansion stroke. To improve the performance of the diesel engine, a two-stage fuel injection was recommended.
Defining the Ecological Coefficient of Performance for an Aircraft Propulsion System
NASA Astrophysics Data System (ADS)
Şöhret, Yasin
2018-05-01
The aircraft industry, along with other industries, is nowadays held responsible regarding environmental issues. Therefore, the performance evaluation of aircraft propulsion systems should be conducted with respect to environmental and ecological considerations. The current paper aims to present the ecological coefficient of performance calculation methodology for aircraft propulsion systems. The ecological coefficient of performance is a widely preferred performance indicator of numerous energy conversion systems. On the basis of thermodynamic laws, the methodology used to determine the ecological coefficient of performance for an aircraft propulsion system is parametrically explained and illustrated in this paper for the first time. For a better understanding, the exergy analysis of a turbojet engine is first described in detail. Following this, the outputs of the analysis are employed to define the ecological coefficient of performance for a turbojet engine. At the end of the study, the ecological coefficient of performance is evaluated parametrically and discussed depending on selected engine design parameters and performance measures. The author asserts the ecological coefficient of performance to be a beneficial indicator for researchers interested in aircraft propulsion system design and related topics.
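In the exergy literature the ecological coefficient of performance is commonly defined as the ratio of the product (useful) exergy rate to the total exergy destruction rate; the toy function below assumes that definition, and the numbers in it are illustrative only:

```python
def ecological_cop(product_exergy_rate_kw: float,
                   exergy_destruction_rate_kw: float) -> float:
    """ECOP under the assumed definition: useful (product) exergy output
    per unit of exergy destroyed. Higher values indicate a conversion
    process that wastes less work potential."""
    if exergy_destruction_rate_kw <= 0:
        raise ValueError("exergy destruction rate must be positive")
    return product_exergy_rate_kw / exergy_destruction_rate_kw

# Hypothetical turbojet figures: 3000 kW of thrust-related product exergy
# against 1500 kW destroyed across the engine components.
ecop = ecological_cop(3000.0, 1500.0)
```

A parametric study such as the one in the paper would sweep the engine design parameters, recompute both exergy rates from the cycle analysis, and plot the resulting ECOP.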
Performance evaluation of the insurance companies based on AHP
NASA Astrophysics Data System (ADS)
Lu, Manhong; Zhu, Kunping
2018-04-01
With the entry of foreign capital, China's insurance industry is under increasing competitive pressure. The performance of a company is the external manifestation of its comprehensive strength. Therefore, the establishment of a scientific evaluation system is of practical significance for insurance companies. In this paper, based on the financial and non-financial indicators of the companies, a performance evaluation system is constructed by means of the analytic hierarchy process (AHP). In this system, the weights of the indicators, which represent their impact on the performance of the companies, are calculated by the AHP. The evaluation system helps companies recognize their own strengths and weaknesses, so as to take steps to enhance their core competitiveness.
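The AHP weight computation can be sketched with the row geometric-mean approximation to the principal eigenvector of a pairwise comparison matrix. The three indicators and the judgment values below are hypothetical, chosen only to show the mechanics:

```python
import math

def ahp_weights(pairwise):
    """Approximate AHP priority weights from a reciprocal pairwise
    comparison matrix via the row geometric mean, a standard stand-in
    for the principal eigenvector method. Weights are normalized to 1."""
    n = len(pairwise)
    gm = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(gm)
    return [g / total for g in gm]

# Hypothetical 3-indicator comparison on the Saaty 1-9 scale, e.g.
# profitability vs. solvency vs. growth for an insurer:
A = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 3.0],
     [1.0 / 5.0, 1.0 / 3.0, 1.0]]
weights = ahp_weights(A)  # sums to 1; the first indicator dominates
```

A full AHP implementation would also compute a consistency ratio for the judgment matrix before accepting the weights.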
User’s Guide for the VTRPE (Variable Terrain Radio Parabolic Equation) Computer Model
1991-10-01
propagation effects and antenna characteristics in radar system performance calculations, the radar transmission equation is often employed. Following Kerr, ... electromagnetic wave equations for the complex electric and magnetic radiation fields. The model accounts for the effects of nonuniform atmospheric refractivity ... transmission equation, that is used in the performance prediction and analysis of radar and communication systems. Optimized fast Fourier transform (FFT
Yu, Jen-Shiang K; Yu, Chin-Hui
2002-01-01
One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density functional theory, and MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with the ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance of one single CPU is potentially as good as that of an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks from SpecFP2000 show trends similar to the results of the GAUSSIAN 98 package.
Mass-energy distribution of fragments within Langevin dynamics of fission induced by heavy ions
NASA Astrophysics Data System (ADS)
Anischenko, Yu. A.; Adeev, G. D.
2012-08-01
A stochastic approach based on four-dimensional Langevin fission dynamics is applied to calculating mass-energy distributions of fragments originating from the fission of excited compound nuclei. In the model under investigation, the coordinate K, representing the projection of the total angular momentum onto the symmetry axis of the nucleus, is taken into account in addition to three collective shape coordinates introduced on the basis of the {c, h, α} parametrization. The evolution of the orientation degree of freedom (K mode) is described by means of the Langevin equation in the overdamped regime. The tensor of friction is calculated under the assumption of the reduced mechanism of one-body dissipation in the wall-plus-window model. The calculations are performed for two values of the coefficient that takes into account the reduction of the contribution from the wall formula: k_s = 0.25 and k_s = 1.0. Calculations with a modified wall-plus-window formula are also performed, in which the quantity measuring the degree to which the single-particle motion of nucleons within the nuclear system being considered is chaotic is used for k_s. Fusion-fission reactions leading to the production of compound nuclei are considered for values of the parameter Z²/A in the range between 21 and 44. So wide a range is chosen in order to perform a comparative analysis not only for heavy but also for light compound nuclei in the vicinity of the Businaro-Gallone point. For all of the reactions considered in the present study, the calculations performed within four-dimensional Langevin dynamics faithfully reproduce the mass-energy and mass distributions obtained experimentally. The inclusion of the K mode in the Langevin equation leads to an increase in the variances of the mass and energy distributions in relation to what one obtains from three-dimensional Langevin calculations.
The results of the calculations where one associates k_s with the measure of chaoticity in the single-particle motion of nucleons within the nuclear system under study are in good agreement for the variances of mass distributions. The results of calculations for the correlations between the prescission neutron multiplicity and the fission-fragment mass, ⟨n_pre(M)⟩, and between this multiplicity and the kinetic energy of fission fragments, ⟨n_pre(E_k)⟩, are also presented.
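The overdamped Langevin treatment of the K mode can be illustrated schematically with Euler-Maruyama integration. The harmonic potential and all parameter values below are illustrative toys, not the potential-energy surface or transport coefficients of the four-dimensional model:

```python
import math
import random

def overdamped_langevin(x0, dV_dx, gamma, temperature, dt, steps, seed=0):
    """Euler-Maruyama integration of the overdamped Langevin equation
    gamma * dx/dt = -dV/dx + sqrt(2 * gamma * T) * xi(t),
    where xi(t) is Gaussian white noise."""
    rng = random.Random(seed)
    x = x0
    sigma = math.sqrt(2.0 * temperature / gamma * dt)  # noise step size
    for _ in range(steps):
        x += (-dV_dx(x) / gamma) * dt + sigma * rng.gauss(0.0, 1.0)
    return x

# Schematic harmonic potential V(K) = 0.5 * k * K^2, so dV/dK = k * K:
final_K = overdamped_langevin(x0=1.0, dV_dx=lambda k: 2.0 * k,
                              gamma=5.0, temperature=0.1, dt=0.01, steps=1000)
```

In the actual model the drift comes from the free-energy surface in the collective coordinates and the friction tensor from the wall-plus-window prescription, but the stochastic update has this same structure.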
Cost-effective use of liquid nitrogen in cryogenic wind tunnels, phase 2
NASA Technical Reports Server (NTRS)
Mcintosh, Glen E.; Lombard, David S.; Leonard, Kenneth R.; Morhorst, Gerald D.
1990-01-01
Cryogenic seal tests were performed and Rulon A was selected for the subject nutating positive displacement expander. A four-chamber expander was designed and fabricated. A nitrogen reliquefier flow system was also designed and constructed for testing the cold expander. Initial tests were unsatisfactory because of high internal friction attributed to nutating Rulon inlet and outlet valve plates. Replacement of the nutating valves with cam-actuated poppet valves improved performance. However, no net nitrogen reliquefaction was achieved due to high internal friction. Computer software was developed for accurate calculation of nitrogen reliquefaction from a system such as that proposed. These calculations indicated that practical reliquefaction rates of 15 to 19 percent could be obtained. Due to mechanical problems, the nutating expander did not demonstrate its feasibility nor that of the system. It was concluded that redesign and testing of a smaller nutating expander was required to prove concept feasibility.
System of end-to-end symmetric database encryption
NASA Astrophysics Data System (ADS)
Galushka, V. V.; Aydinyan, A. R.; Tsvetkova, O. L.; Fathi, V. A.; Fathi, D. V.
2018-05-01
The article is devoted to the actual problem of protecting databases from information leakage, which is performed while bypassing access control mechanisms. To solve this problem, it is proposed to use end-to-end data encryption, implemented at the end nodes of an interaction of the information system components using one of the symmetric cryptographic algorithms. For this purpose, a key management method designed for use in a multi-user system based on the distributed key representation model, part of which is stored in the database, and the other part is obtained by converting the user's password, has been developed and described. In this case, the key is calculated immediately before the cryptographic transformations and is not stored in the memory after the completion of these transformations. Algorithms for registering and authorizing a user, as well as changing his password, have been described, and the methods for calculating parts of a key when performing these operations have been provided.
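The distributed-key idea can be sketched as a two-share split: one share rests in the database, the other is recomputed from the user's password immediately before the cryptographic transformation, so the full key never persists. The key-derivation function and parameters below are a minimal sketch, not the algorithms of the described system:

```python
import hashlib
import secrets

def derive_user_part(password: str, salt: bytes, key_len: int = 32) -> bytes:
    """Password-derived key share (PBKDF2-HMAC-SHA256 from the stdlib)."""
    return hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                               salt, 200_000, dklen=key_len)

def combine(db_part: bytes, user_part: bytes) -> bytes:
    """Reassemble the symmetric key from the database-stored share and
    the password-derived share; XOR splitting means neither share alone
    reveals anything about the key."""
    return bytes(a ^ b for a, b in zip(db_part, user_part))

# Setup: pick a random key, store (key XOR user_part) in the database.
salt = secrets.token_bytes(16)
key = secrets.token_bytes(32)
db_part = combine(key, derive_user_part("correct horse", salt))

# Later: recompute the key just before encryption/decryption,
# discard it when the transformation completes.
recovered = combine(db_part, derive_user_part("correct horse", salt))
```

Changing the password then amounts to deriving the new user share and re-storing `key XOR new_user_part`, without re-encrypting the data.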
NASA Astrophysics Data System (ADS)
Markelov, V.; Shukalov, A.; Zharinov, I.; Kostishin, M.; Kniga, I.
2016-04-01
The use of a course-correction option before aircraft take-off, following inaccurate in-azimuth alignment of a platform attitude-and-heading reference inertial navigation system (INS), is considered in the paper. The course correction is based on the track angle derived from satellite navigation system (SNS) data. It includes computing the track error during ground taxiing along straight sections before take-off and entering it into the onboard digital computer as a correction for use in the current flight. The track error is calculated by statistical evaluation of the comparison between the track angle derived from SNS data and the current course measured by the INS, over a given number of measurements on the available time interval. Test results for the course correction and recommendations for its application are given in the paper. The course correction based on SNS information can improve the accuracy of aircraft path determination after rapid INS preparation with inaccurate initial azimuth alignment.
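The statistical evaluation of the track error can be sketched as an average of angle differences over the straight taxi segments; the wrap-to-(-180, 180] convention used here is an assumption, as is treating the error as a single scalar:

```python
def mean_track_error_deg(sns_track_deg, ins_course_deg):
    """Average signed difference (degrees) between the SNS-derived track
    angle and the INS-measured course over paired samples taken on
    straight taxi segments. Differences are wrapped to (-180, 180] so
    that, e.g., 359 deg vs 1 deg yields -2 deg, not +358 deg."""
    assert len(sns_track_deg) == len(ins_course_deg)
    diffs = []
    for sns, ins in zip(sns_track_deg, ins_course_deg):
        d = (sns - ins + 180.0) % 360.0 - 180.0
        diffs.append(d)
    return sum(diffs) / len(diffs)
```

The resulting mean would be entered into the onboard computer as the azimuth correction for the current flight.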
Pérez‐Vara, Consuelo
2015-01-01
A pretreatment quality assurance program for volumetric techniques should include redundant calculations and measurement-based verifications. The patient-specific quality assurance process must be based on clinically relevant metrics. The aim of this study was to show the commissioning, clinical implementation, and comparison of two systems that allow performing a 3D redundant dose calculation. In addition, one of them is capable of reconstructing the dose on the patient anatomy from measurements taken with a 2D ion chamber array. Both systems were compared in terms of reference calibration data (absolute dose, output factors, percentage depth-dose curves, and profiles). Results were in good agreement for absolute dose values (discrepancies were below 0.5%) and output factors (mean differences were below 1%). Maximum mean discrepancies were located between 10 and 20 cm of depth for PDDs (-2.7%) and in the penumbra region for profiles (mean DTA of 1.5 mm). Validation of the systems was performed by comparing point-dose measurements with values obtained by the two systems for static fields, dynamic fields from the AAPM TG-119 report, and 12 real VMAT plans for different anatomical sites (differences better than 1.2%). Comparisons between measurements taken with a 2D ion chamber array and results obtained by both systems for real VMAT plans were also performed (mean global gamma passing rates better than 87.0% and 97.9% for the 2%/2 mm and 3%/3 mm criteria). Clinical implementation of the systems was evaluated by comparing dose-volume parameters for all TG-119 tests and real VMAT plans with TPS values (mean differences were below 1%). In addition, comparisons between dose distributions calculated by the TPS and those extracted by the two systems for real VMAT plans were performed (mean global gamma passing rates better than 86.0% and 93.0% for the 2%/2 mm and 3%/3 mm criteria). The clinical use of both systems was successfully evaluated.
PACS numbers: 87.56.Fc, 87.56.‐v, 87.55.dk, 87.55.Qr, 87.55.‐x, 07.57.Kp, 85.25.Pb PMID:26103189
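The global gamma passing rates quoted above come from gamma analysis, which scores each measured point by the best combination of dose difference and distance-to-agreement. A brute-force 1D sketch (clinical systems work on 2D/3D grids with interpolation and dose thresholds, which this toy version omits):

```python
import math

def gamma_pass_rate(measured, calculated, spacing_mm,
                    dose_tol_pct, dist_tol_mm):
    """Brute-force 1D global gamma analysis. For each measured point,
    search all calculated points for the minimum gamma value; the point
    passes if that minimum is <= 1. Dose differences are normalized to
    the global maximum of the calculated profile."""
    d_max = max(calculated)  # global normalization
    passed = 0
    for i, dm in enumerate(measured):
        best = math.inf
        for j, dc in enumerate(calculated):
            dd = (dm - dc) / (d_max * dose_tol_pct / 100.0)
            dr = (i - j) * spacing_mm / dist_tol_mm
            best = min(best, math.hypot(dd, dr))
        passed += best <= 1.0
    return 100.0 * passed / len(measured)

# Identical profiles pass trivially under a 3%/3 mm criterion:
profile = [10.0, 20.0, 30.0, 20.0, 10.0]
rate = gamma_pass_rate(profile, profile, spacing_mm=1.0,
                       dose_tol_pct=3.0, dist_tol_mm=3.0)
```

Tightening the criteria from 3%/3 mm to 2%/2 mm shrinks both normalization denominators, which is why the 2%/2 mm passing rates in the study are systematically lower.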
NASA Astrophysics Data System (ADS)
Puzyrkov, Dmitry; Polyakov, Sergey; Podryga, Viktoriia; Markizov, Sergey
2018-02-01
At the present stage of computer technology development it is possible to study the properties and processes in complex systems at the molecular and even atomic levels, for example, by means of molecular dynamics methods. The most interesting are problems related to the study of complex processes under real physical conditions. Solving such problems requires the use of high-performance computing systems of various types, for example, GRID systems and HPC clusters. Given such time-consuming computational tasks, the need arises for software for automatic and unified monitoring of the computations. A complex computational task can be performed over different HPC systems, which requires output data synchronization between the storage chosen by a scientist and the HPC system used for the computations. The design of the computational domain is also quite a problem: it requires complex software tools and algorithms for proper atomistic data generation on HPC systems. The paper describes the prototype of a cloud service intended for the design of large-volume atomistic systems for further detailed molecular dynamics calculations and for the management of these calculations, and presents the part of its concept aimed at initial data generation on the HPC systems.
NASA Technical Reports Server (NTRS)
Rockfeller, W C
1939-01-01
Equations have been developed for the analysis of the performance of the ideal airplane, leading to an approximate physical interpretation of the performance problem. The basic sea-level airplane parameters have been generalized to altitude parameters, and a new parameter has been introduced and physically interpreted. The performance analysis for actual airplanes has been obtained in terms of the equivalent ideal airplane in order that the charts developed for use in practical calculations will for the most part apply to any type of engine-propeller combination and system of control, the only additional material required consisting of the actual engine and propeller curves for the propulsion unit. Finally, a more exact method for the calculation of the climb characteristics for the constant-speed controllable propeller is presented in the appendix.
Preliminary Monte Carlo calculations for the UNCOSS neutron-based explosive detector
NASA Astrophysics Data System (ADS)
Eleon, C.; Perot, B.; Carasco, C.
2010-07-01
The goal of the FP7 UNCOSS project (Underwater Coastal Sea Surveyor) is to develop a nondestructive explosive detection system based on the associated particle technique, with a view to improving the security of coastal areas and naval infrastructures where violent conflicts took place. The end product of the project will be a prototype of a complete coastal survey system, including a neutron-based sensor capable of confirming the presence of explosives on the sea bottom. A 3D analysis of prompt gamma rays induced by 14 MeV neutrons will be performed to identify elements constituting common military explosives, such as C, N, and O. This paper presents calculations performed with the MCNPX computer code to support the ongoing design studies performed by the UNCOSS collaboration. Detection efficiencies and the time and energy resolutions of the possible gamma-ray detectors are compared, showing that NaI(Tl) or LaBr₃(Ce) scintillators will be suitable for this application. The effects of neutron attenuation and scattering in the seawater, which influence the counting statistics and signal-to-noise ratio, are also studied with calculated neutron time-of-flight and gamma-ray spectra for an underwater TNT target.
NASA Astrophysics Data System (ADS)
Ji, Yanju; Wang, Hongyuan; Lin, Jun; Guan, Shanshan; Feng, Xue; Li, Suyi
2014-12-01
Performance testing and calibration of airborne transient electromagnetic (ATEM) systems are conducted to obtain the electromagnetic response of ground loops. It is necessary to accurately calculate the mutual inductance between transmitting coils, receiving coils and ground loops to compute the electromagnetic responses. Therefore, based on Neumann's formula and the measured attitudes of the coils, this study deduces the formula for the mutual inductance calculation between circular and quadrilateral coils, circular and circular coils, and quadrilateral and quadrilateral coils using a rotation matrix, and then proposes a method to calculate the mutual inductance between two coils at arbitrary attitudes (roll, pitch, and yaw). Using coil attitude simulated data of an ATEM system, we calculate the mutual inductance of transmitting coils and ground loops at different attitudes, analyze the impact of coil attitudes on mutual inductance, and compare the computational accuracy and speed of the proposed method with those of other methods using the same data. The results show that the relative error of the calculation is smaller and that the speed-up is significant compared to other methods. Moreover, the proposed method is also applicable to the mutual inductance calculation of polygonal and circular coils at arbitrary attitudes and is highly expandable.
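The attitude-dependent mutual inductance calculation can be sketched by discretizing Neumann's double line integral and orienting one coil with a roll/pitch/yaw rotation matrix. This is a generic numerical sketch, not the paper's closed-form derivation; the radii, attitudes, and segment count are invented:

```python
# Sketch: mutual inductance of two circular coils via Neumann's formula,
# M = (mu0 / 4 pi) * sum_i sum_j dl1_i . dl2_j / |x1_i - x2_j|,
# with coil 2 oriented by roll/pitch/yaw (rotation matrix). The
# discretization count n is a numerical choice, not from the paper.
import numpy as np

MU0 = 4e-7 * np.pi

def rotation(roll, pitch, yaw):
    """Z-Y-X rotation matrix from roll/pitch/yaw angles (radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def circle(radius, center, R=np.eye(3), n=400):
    """Midpoints and segment vectors of a discretized circular loop."""
    t = 2 * np.pi * (np.arange(n) + 0.5) / n
    pts = np.stack([radius * np.cos(t), radius * np.sin(t), np.zeros(n)], axis=1)
    tang = np.stack([-np.sin(t), np.cos(t), np.zeros(n)], axis=1)
    dl = tang * (2 * np.pi * radius / n)
    return pts @ R.T + center, dl @ R.T

def mutual_inductance(r1, c1, r2, c2, R2=np.eye(3), n=400):
    """Neumann double sum over all segment pairs of the two loops."""
    p1, dl1 = circle(r1, np.asarray(c1, float), n=n)
    p2, dl2 = circle(r2, np.asarray(c2, float), R2, n=n)
    dist = np.linalg.norm(p1[:, None, :] - p2[None, :, :], axis=2)
    return MU0 / (4 * np.pi) * np.sum((dl1 @ dl2.T) / dist)

# Example: a tilted receiving coil (invented attitude)
M_tilt = mutual_inductance(0.5, [0, 0, 0], 0.3, [0, 0, 0.4], rotation(0.2, 0.1, 0.0))
```

For coaxial loops the sum reproduces the classical elliptic-integral result to better than 0.1%, and the computed M is symmetric under exchanging the two coils, as it must be.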
NASA Astrophysics Data System (ADS)
Yu, Yuting; Cheng, Ming
2018-05-01
Considering the various configuration schemes of inertial measurement units for strapdown inertial navigation systems, a tetrahedral skew configuration and a coaxial orthogonal configuration, each built from nine low-cost IMUs, were selected. The performance index, reliability, and fault-diagnosis capability of each navigation system were calculated and simulated. The analysis shows that the reliability and reconfiguration capability of the skew configuration are superior to those of the orthogonal configuration, while the performance index and fault-diagnosis capability of the two systems are similar. This work provides a useful reference for the selection of configurations in engineering applications.
Airborne oceanographic lidar system
NASA Technical Reports Server (NTRS)
1975-01-01
Specifications and preliminary design of an Airborne Oceanographic Lidar (AOL) system, which is to be constructed for installation and use on a NASA Wallops Flight Center (WFC) C-54 research aircraft, are reported. The AOL system is to provide an airborne facility for use by various government agencies to demonstrate the utility and practicality of hardware of this type in the wide-area collection of oceanographic data on an operational basis. System measurement and performance requirements are presented, followed by a description of the conceptual system approach and the considerations attendant to its development. System performance calculations are addressed, and the system specifications and preliminary design are presented and discussed.
Figure of merit for direct-detection optical channels
NASA Technical Reports Server (NTRS)
Chen, C.-C.
1992-01-01
The capacity and sensitivity of a direct-detection optical channel are calculated and compared to those of a white Gaussian noise channel. Unlike Gaussian channels, in which the receiver performance can be characterized using the noise temperature, the performance of the direct-detection channel depends on both signal and background noise, as well as the ratio of peak to average signal power. Because of the signal-power dependence of the optical channel, actual performance of the channel can be evaluated only by considering both transmit and receive ends of the system. Given the background noise power and the modulation bandwidth, however, the theoretically optimum receiver sensitivity can be calculated. This optimum receiver sensitivity can be used to define the equivalent receiver noise temperature and calculate the corresponding G/T product. It should be pointed out, however, that the receiver sensitivity is a function of signal power, and care must be taken to avoid deriving erroneous projections of the direct-detection channel performance.
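For the background-free case, the photon-counting quantum limit illustrates how sensitivity follows from Poisson statistics. A textbook sketch (not the article's derivation), for on-off keying with an ideal photon counter:

```python
# Sketch: quantum-limited sensitivity of an ideal photon-counting OOK
# direct-detection receiver with no background light (standard textbook
# result, not the article's derivation). An error occurs only when a
# transmitted "one" yields zero photocounts, so Pe = 0.5 * exp(-n1),
# where n1 is the mean photon count per "one" pulse.
import math

def ook_error_prob(n1):
    """Bit error probability for mean n1 signal photons per 'one' pulse."""
    return 0.5 * math.exp(-n1)

def photons_for_ber(ber):
    """Mean photons per 'one' pulse needed to reach a target BER."""
    return math.log(0.5 / ber)

n1 = photons_for_ber(1e-9)   # ~20 photons per pulse, ~10 per bit on average
```

Reaching a BER of 1e-9 needs about 20 photons per "one" pulse, i.e. roughly 10 photons per bit on average, the quantum-limit figure usually quoted for direct detection.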
Software Applications on the Peregrine System | High-Performance Computing
Gaussian (chemistry): program for calculating molecular electronic structure. Materials science: open-source classical molecular dynamics framework designed for massively parallel systems. Q-Chem (chemistry): ab initio quantum chemistry package for predicting molecular structures.
Using steady-state equations for transient flow calculation in natural gas pipelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddox, R.N.; Zhou, P.
1984-04-02
Maddox and Zhou have extended their technique for calculating the unsteady-state behavior of straight gas pipelines to complex pipeline systems and networks. After developing the steady-state flow rate and pressure profile for each pipe in the network, analysts can perform the transient-state analysis in the real-time step-wise manner described for this technique.
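The steady-state initialization step can be sketched with the generic isothermal gas-flow relation p(x)^2 = p1^2 - (p1^2 - p2^2)·x/L. This is the textbook form, not Maddox and Zhou's specific correlation, and the pressures below are invented:

```python
# Sketch: steady-state pressure profile for a single pipe, assuming the
# common isothermal relation p(x)^2 = p1^2 - (p1^2 - p2^2) * x / L.
# This generic textbook form stands in for Maddox and Zhou's correlation;
# the 70/50 bar end pressures and 100 km length are invented.
def pressure_profile(p1, p2, length, n=11):
    """Pressures at n evenly spaced stations (same units as p1, p2)."""
    xs = [i * length / (n - 1) for i in range(n)]
    return [(p1 ** 2 - (p1 ** 2 - p2 ** 2) * x / length) ** 0.5 for x in xs]

profile = pressure_profile(70.0, 50.0, 100e3)  # e.g. bar over 100 km
```

Such a profile, computed once per pipe, gives the initial condition from which the transient step-wise analysis then proceeds.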
Code of Federal Regulations, 2014 CFR
2014-07-01
... for each data set that is collected during the initial performance test. A single composite value of... Multiple Zone Concentrations Calculations Procedure based on inlet and outlet concentrations (Column A of... composite value of Ks discussed in section III.C of this appendix. This value of Ks is calculated during the...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vicroy, D.D.; Knox, C.E.
A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel-conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel-conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance, with considerations given for gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane are described.
Research on capability of detecting ballistic missile by near space infrared system
NASA Astrophysics Data System (ADS)
Lu, Li; Sheng, Wen; Jiang, Wei; Jiang, Feng
2018-01-01
Infrared detection of ballistic missiles from near-space platforms can effectively compensate for the high cost of traditional early-warning satellites and the earth-curvature-limited range of ground-based early-warning radar. Regarding target detection capability, the conventional contrast-performance formula for detection range ignores the background emissivity and is valid only for monochromatic light; an improved contrast-based detection range formula is therefore proposed. The parameters of the near-space infrared imaging system are introduced, and the expression for the contrast-based detection range for target detection from a near-space platform is derived. The detection range of the near-space infrared system for the boost-phase ballistic missile skin, the tail nozzle, and the tail flame is calculated. The simulation results show that the near-space infrared system performs best when detecting tail-flame radiation.
Implementation of total focusing method for phased array ultrasonic imaging on FPGA
NASA Astrophysics Data System (ADS)
Guo, JianQiang; Li, Xi; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke
2015-02-01
This paper describes a multi-FPGA imaging system dedicated to real-time imaging using the Total Focusing Method (TFM) on Full Matrix Capture (FMC) data. The system was described entirely in Verilog HDL and implemented on an Altera Stratix IV GX FPGA development board. The algorithm proceeds as follows: establish an image coordinate system and divide it into grid points; for each focus point, calculate the complete acoustic path from the transmitting element to the point and back to the receiving element, and convert it into a sample index; fetch the sound pressure values from ROM at that index and superimpose them to obtain the pixel value of the focus point; and repeat for all focus points to form the final image. The imaging results show that this algorithm yields a high SNR for defect imaging, and the FPGA's parallel processing capability provides high-speed performance, so the system delivers a complete, well-performing imaging interface.
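The delay-and-sum core of TFM on FMC data can be sketched in a few lines. The array geometry, wave speed, sampling rate, and the ideal point scatterer below are invented stand-ins for the paper's hardware setup:

```python
# Sketch: delay-and-sum core of the Total Focusing Method on full matrix
# capture (FMC) data. Geometry, wave speed, sampling rate, and the ideal
# point scatterer are illustrative assumptions, not the paper's setup.
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """fmc[tx, rx, t]: A-scans for every transmit/receive element pair.
    Returns |sum over all pairs of the sample at the round-trip delay|."""
    n_el = len(elem_x)
    img = np.zeros((len(grid_z), len(grid_x)))
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            # one-way distance from each element (at z = 0) to the focus
            d = np.sqrt((elem_x - x) ** 2 + z ** 2)
            acc = 0.0
            for tx in range(n_el):
                # round-trip delay -> nearest sample index per receiver
                idx = np.rint((d[tx] + d) / c * fs).astype(int)
                idx = np.clip(idx, 0, fmc.shape[2] - 1)
                acc += fmc[tx, np.arange(n_el), idx].sum()
            img[iz, ix] = abs(acc)
    return img

# Synthetic check: an ideal point scatterer at (0 mm, 20 mm)
elem_x = np.linspace(-3.5e-3, 3.5e-3, 8)   # 8 elements, 1 mm pitch
c, fs = 6000.0, 100e6                      # wave speed (m/s), sampling (Hz)
d0 = np.sqrt(elem_x ** 2 + 0.02 ** 2)
fmc = np.zeros((8, 8, 1024))
for tx in range(8):
    for rx in range(8):
        fmc[tx, rx, int(round((d0[tx] + d0[rx]) / c * fs))] = 1.0
grid_x = np.linspace(-2e-3, 2e-3, 9)
grid_z = np.linspace(18e-3, 22e-3, 9)
img = tfm_image(fmc, elem_x, grid_x, grid_z, c, fs)
```

The image peaks at the scatterer's grid point, since all 64 transmit-receive delays line up only there; the FPGA implementation evaluates the same index-and-accumulate kernel for many focus points in parallel.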
A rocket engine design expert system
NASA Technical Reports Server (NTRS)
Davidian, Kenneth J.
1989-01-01
The overall structure and capabilities of an expert system designed to evaluate rocket engine performance are described. The expert system incorporates a JANNAF standard reference computer code to determine rocket engine performance and a state-of-the-art finite element computer code to calculate the interactions between propellant injection, energy release in the combustion chamber, and regenerative cooling heat transfer. Rule-of-thumb heuristics were incorporated for the hydrogen-oxygen coaxial injector design, including a minimum gap size constraint on the total number of injector elements. One-dimensional equilibrium chemistry was employed in the energy release analysis of the combustion chamber and three-dimensional finite-difference analysis of the regenerative cooling channels was used to calculate the pressure drop along the channels and the coolant temperature as it exits the coolant circuit. Inputting values to describe the geometry and state properties of the entire system is done directly from the computer keyboard. Graphical display of all output results from the computer code analyses is facilitated by menu selection of up to five dependent variables per plot.
Prediction of the Effective Thermal Conductivity of Powder Insulation
NASA Astrophysics Data System (ADS)
Jin, Lingxue; Park, Jiho; Lee, Cheonkyu; Jeong, Sangkwon
The powder insulation method is widely used in structural and cryogenic systems such as transportation and storage tanks for cryogenic fluids. A powder insulation layer consists of lightweight, small-particle powder and residual gas, and has high porosity. Many experiments have been carried out to test the thermal performance of various powders, including expanded perlite, glass microspheres, and expanded polystyrene (EPS). However, it is still difficult to predict the thermal performance of powder insulation by calculation because of the complicated geometries involved: various particle shapes, wide particle-diameter distributions, and various pore sizes. In this paper, the effective thermal conductivity of powder insulation is predicted with an effective-thermal-conductivity calculation model for porous packed beds. The methodology was applied to insulation systems of expanded perlite, glass microspheres, and EPS beads at cryogenic temperature and various vacuum pressures. The calculation results were compared with previous experimental data, and additional tests were carried out at cryogenic temperature in this research. Fitting equations for the deformation factor of the area-contact model are presented for the various powders. The calculations show good agreement with the experimental results.
Adaptive real-time methodology for optimizing energy-efficient computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Chung-Hsing; Feng, Wu-Chun
Dynamic voltage and frequency scaling (DVFS) is an effective way to reduce energy and power consumption in microprocessor units. Current implementations of DVFS suffer from inaccurate modeling of power requirements and usage, and from inaccurate characterization of the relationships between the applicable variables. A system and method is proposed that adjusts CPU frequency and voltage based on run-time calculations of the workload processing time, as well as a calculation of performance sensitivity with respect to CPU frequency. The system and method are processor independent, and can be applied to either an entire system as a unit, or individually to each process running on a system.
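The idea of picking an operating point from a run-time workload estimate can be sketched with a discrete frequency/voltage table. This is a minimal deadline-driven model, not the patent's actual sensitivity calculation, and the table values are invented:

```python
# Sketch: deadline-driven DVFS frequency selection. Choose the lowest
# available frequency that still finishes the predicted workload in time;
# this simple model stands in for the patent's performance-sensitivity
# calculation. The frequency/voltage table is invented.
def pick_frequency(cycles, deadline_s, freq_table):
    """freq_table: list of (freq_hz, volts). Returns the slowest setting
    meeting the deadline, or the fastest setting if none does."""
    f_min = cycles / deadline_s
    feasible = [fv for fv in freq_table if fv[0] >= f_min]
    if feasible:
        return min(feasible)    # slowest feasible setting saves energy
    return max(freq_table)      # best effort: run as fast as possible

table = [(0.8e9, 0.9), (1.2e9, 1.0), (1.6e9, 1.1), (2.0e9, 1.2)]
setting = pick_frequency(1.0e9, 1.0, table)   # needs >= 1 GHz
```

Since dynamic power grows roughly as V^2·f, running at the slowest feasible setting trades idle slack for a sizable energy saving.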
Electromagnetic scattering calculations on the Intel Touchstone Delta
NASA Technical Reports Server (NTRS)
Cwik, Tom; Patterson, Jean; Scott, David
1992-01-01
During the first year's operation of the Intel Touchstone Delta system, software which solves the electric field integral equations for fields scattered from arbitrarily shaped objects has been transferred to the Delta. To fully realize the Delta's resources, an out-of-core dense matrix solution algorithm that utilizes some or all of the 90 Gbyte of concurrent file system (CFS) has been used. The largest calculation completed to date computes the fields scattered from a perfectly conducting sphere modeled by 48,672 unknown functions, resulting in a complex-valued dense matrix requiring 37.9 Gbyte of storage. The out-of-core LU matrix factorization algorithm was executed in 8.25 h at a rate of 10.35 Gflops. The total time to complete the calculation was 19.7 h; the additional time was used to compute the 48,672 x 48,672 matrix entries, solve the system for a given excitation, and compute observable quantities. The calculation was performed in 64-bit precision.
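The quoted figures can be cross-checked from the matrix order alone: a complex double-precision matrix takes 16·n² bytes, and complex LU factorization costs about (8/3)·n³ real floating-point operations:

```python
# Cross-check of the figures quoted above: storage of the order-48,672
# complex dense matrix, and the LU factorization rate over 8.25 hours.
n = 48_672
bytes_matrix = 16 * n ** 2                 # complex double = 16 bytes
flops_lu = (8.0 / 3.0) * n ** 3            # ~real flops for complex LU
gbyte = bytes_matrix / 1e9                 # matrix storage in Gbyte
gflops = flops_lu / (8.25 * 3600) / 1e9    # sustained rate in Gflop/s
```

Both reproduce the quoted values: about 37.9 Gbyte of storage and about 10.35 Gflops sustained.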
Speed Approach for UAV Collision Avoidance
NASA Astrophysics Data System (ADS)
Berdonosov, V. D.; Zivotova, A. A.; Htet Naing, Zaw; Zhuravlev, D. O.
2018-05-01
The article presents a new approach to detecting potential collisions of two or more UAVs in a common aviation area. UAV trajectories are approximated from two or three trajectory points obtained from the ADS-B system. In the process of finding the meeting points of the trajectories, two cutoff values of the critical speed range, within which a UAV collision is possible, are calculated. As the expressions for the meeting points and the critical-speed cutoffs are given in analytical form, even an on-board computer with limited computational capacity needs far less time for the calculation than the interval between ADS-B data updates. Calculations can therefore be refreshed on each cycle of newly received data, and the trajectory approximation can be bounded by straight lines. This approach allows a compact collision avoidance algorithm to be developed even for a significant number of UAVs (more than several dozen). To verify the adequacy of the approach, modeling was performed using a software system developed specifically for this purpose.
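The geometric core, intersecting two straight-line tracks and finding the speed that makes the arrivals simultaneous, can be sketched as follows. Positions, headings, and speeds are invented, and the paper brackets a speed range rather than the single critical value computed here:

```python
# Sketch: meeting point of two straight-line UAV tracks, and the speed of
# UAV 2 that would put both aircraft at that point at the same time (one
# "critical speed"; the article derives a range around it). All numbers
# in the example are invented.
import math

def meeting_point(p1, d1, p2, d2):
    """Intersection of rays p + t*d in 2D, or None if parallel."""
    (x1, y1), (dx1, dy1) = p1, d1
    (x2, y2), (dx2, dy2) = p2, d2
    det = -dx1 * dy2 + dx2 * dy1
    if abs(det) < 1e-12:
        return None
    t = ((x2 - x1) * (-dy2) + dx2 * (y2 - y1)) / det
    return (x1 + t * dx1, y1 + t * dy1)

def critical_speed(p1, d1, s1, p2, d2):
    """Speed of UAV 2 reaching the meeting point together with UAV 1."""
    m = meeting_point(p1, d1, p2, d2)
    if m is None:
        return None
    l1 = math.hypot(m[0] - p1[0], m[1] - p1[1])
    l2 = math.hypot(m[0] - p2[0], m[1] - p2[1])
    return s1 * l2 / l1

mp = meeting_point((0.0, 0.0), (1.0, 0.0), (5.0, -5.0), (0.0, 1.0))
v_crit = critical_speed((0.0, 0.0), (1.0, 0.0), 20.0, (5.0, -5.0), (0.0, 1.0))
```

Because everything is closed-form, the evaluation cost per UAV pair is a handful of arithmetic operations, consistent with the article's point about limited on-board computing.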
NASA Astrophysics Data System (ADS)
Alaei, Parham
2000-11-01
A number of procedures in diagnostic radiology and cardiology make use of long exposures to x rays from fluoroscopy units. Adverse effects of these long exposure times on the patients' skin have been documented in recent years. These include epilation, erythema, and, in severe cases, moist desquamation and tissue necrosis. Potential biological effects from these exposures to other organs include radiation-induced cataracts and pneumonitis. Although there have been numerous studies to measure or calculate the dose to skin from these procedures, there have only been a handful of studies to determine the dose to other organs. Therefore, there is a need for accurate methods to measure the dose in tissues and organs other than the skin. This research was concentrated in devising a method to determine accurately the radiation dose to these tissues and organs. The work was performed in several stages: First, a three dimensional (3D) treatment planning system used in radiation oncology was modified and complemented to make it usable with the low energies of x rays used in diagnostic radiology. Using the system for low energies required generation of energy deposition kernels using Monte Carlo methods. These kernels were generated using the EGS4 Monte Carlo system of codes and added to the treatment planning system. Following modification, the treatment planning system was evaluated for its accuracy of calculations in low energies within homogeneous and heterogeneous media. A study of the effects of lungs and bones on the dose distribution was also performed. The next step was the calculation of dose distributions in humanoid phantoms using this modified system. The system was used to calculate organ doses in these phantoms and the results were compared to those obtained from other methods. These dose distributions can subsequently be used to create dose-volume histograms (DVHs) for internal organs irradiated by these beams. 
Using this data and the concept of normal tissue complication probability (NTCP) developed for radiation oncology, the risk of future complications in a particular organ can be estimated.
Analysis and methodology for aeronautical systems technology program planning
NASA Technical Reports Server (NTRS)
White, M. J.; Gershkoff, I.; Lamkin, S.
1983-01-01
A structured methodology was developed that allows the generation, analysis, and rank-ordering of system concepts by their benefits and costs, indicating the preferred order of implementation. The methodology is supported by a base of data on civil transport aircraft fleet growth projections and data on aircraft performance relating the contribution of each element of the aircraft to overall performance. The performance data are used to assess the benefits of proposed concepts. The methodology includes a computer program for performing the calculations needed to rank-order the concepts and compute their cumulative benefit-to-cost ratio. The use of the methodology and supporting data is illustrated through the analysis of actual system concepts from various sources.
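The rank-ordering computation can be sketched directly: sort concepts by benefit-to-cost ratio and accumulate the cumulative ratio after each addition. The concept list below is hypothetical:

```python
# Sketch: rank-ordering system concepts by benefit-to-cost ratio and
# computing the cumulative benefit/cost after each addition, as the
# methodology's computer program does. The concepts are invented.
def rank_concepts(concepts):
    """concepts: list of (name, benefit, cost). Returns, best first,
    tuples of (name, b/c ratio, cumulative benefit / cumulative cost)."""
    ordered = sorted(concepts, key=lambda c: c[1] / c[2], reverse=True)
    out, cum_b, cum_c = [], 0.0, 0.0
    for name, b, c in ordered:
        cum_b += b
        cum_c += c
        out.append((name, b / c, cum_b / cum_c))
    return out

ranked = rank_concepts([("A", 30, 10), ("B", 8, 4), ("C", 12, 3)])
```

The cumulative ratio falls monotonically down the list, which is what indicates the preferred order of implementation: concepts past the point where it drops below a threshold can be deferred.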
Fast modeling of flux trapping cascaded explosively driven magnetic flux compression generators.
Wang, Yuwei; Zhang, Jiande; Chen, Dongqun; Cao, Shengguang; Li, Da; Liu, Chebo
2013-01-01
To predict the performance of flux-trapping cascaded flux compression generators, a calculation model based on an equivalent circuit is investigated. The system circuit is analyzed according to its operating characteristics in different stages. Flux conservation coefficients are added to the driving terms of the circuit differential equations to account for intrinsic flux losses. To obtain the circuit currents by solving the circuit equations, a simple zero-dimensional model is used to calculate the time-varying inductance and dc resistance of the generator. A fast computer code was then programmed from this calculation model. As an example, a two-stage flux-trapping generator is simulated with this code, and good agreement is achieved between the simulation results and the measurements. Moreover, this fast calculation model can readily be applied, for design purposes, to predicting the performance of other flux-trapping cascaded flux compression generators with complex structures, such as conical stator or conical armature sections.
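The circuit core, flux conservation degraded by resistive losses while the inductance collapses, can be sketched as the ODE d(L·I)/dt = -R·I, with a flux conservation coefficient scaling the ideal flux. Parameters below are invented; the paper's model additionally computes L(t) and R from the generator geometry:

```python
# Sketch: zero-dimensional flux-compression circuit step, integrating
# d(L*I)/dt = -R*I for a prescribed time-varying inductance L(t). A flux
# conservation coefficient < 1 scales the ideal flux to represent
# intrinsic losses, in the spirit of the model above. All parameters in
# any example call are invented.
def run_generator(L_of_t, R, I0, t_end, dt=1e-7, flux_coeff=1.0):
    """Explicit time stepping of the flux; returns the final current."""
    t = 0.0
    phi = flux_coeff * L_of_t(0.0) * I0   # seed flux, scaled for losses
    I = phi / L_of_t(0.0)
    while t < t_end:
        phi -= R * I * dt                 # resistive flux decay
        t += dt
        I = phi / L_of_t(t)               # current gain as L collapses
    return I
```

With R = 0 the scheme conserves (scaled) flux exactly, so the current multiplies by L0/L_end, which is the standard lossless flux-compression limit.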
Optimal rotation sequences for active perception
NASA Astrophysics Data System (ADS)
Nakath, David; Rachuy, Carsten; Clemens, Joachim; Schill, Kerstin
2016-05-01
One major objective of autonomous systems navigating in dynamic environments is gathering information needed for self-localization, decision making, and path planning. To account for this, such systems are usually equipped with multiple types of sensors. As these sensors often have a limited field of view and a fixed orientation, the task of active perception reduces to the problem of calculating alignment sequences that maximize the information gain regarding expected measurements. Action sequences that rotate the system according to the calculated optimal patterns then have to be generated. In this paper we present an approach for calculating these sequences for an autonomous system equipped with multiple sensors. We use a particle filter for multi-sensor fusion and state estimation. The planning task is modeled as a Markov decision process (MDP), in which the system decides at each step what actions to perform next. The optimal control policy, which provides the best action depending on the current estimated state, maximizes the expected cumulative reward. The latter is computed from the expected information gain of all sensors over time using value iteration. The algorithm is applied to a manifold representation of the joint space of rotation and time. We show the performance of the approach in a spacecraft navigation scenario where the information gain changes over time, owing to the dynamic environment and the continuous movement of the spacecraft.
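The value-iteration step can be illustrated on a toy version of the problem: discrete sensor headings as states, rotations as actions, and an invented static information gain per heading (the paper's gain is time-varying and coupled to a particle filter):

```python
# Sketch: value iteration for a toy orientation-planning MDP. States are
# discrete sensor headings on a ring; actions rotate by -1, 0, or +1 and
# collect the (invented, static) information gain of the heading reached.
# The paper's model is richer: time-varying gain, particle-filter state.
def value_iteration(gain, gamma=0.9, eps=1e-8):
    """Optimal state values V(s) = max_a [gain(s+a) + gamma * V(s+a)]."""
    n = len(gain)
    V = [0.0] * n
    while True:
        V_new = [max(gain[(s + a) % n] + gamma * V[(s + a) % n]
                     for a in (-1, 0, 1)) for s in range(n)]
        if max(abs(a - b) for a, b in zip(V, V_new)) < eps:
            return V_new
        V = V_new

def greedy_action(V, gain, s, gamma=0.9):
    """Best rotation from heading s under the converged values."""
    n = len(V)
    return max((-1, 0, 1),
               key=lambda a: gain[(s + a) % n] + gamma * V[(s + a) % n])

gain = [0.0, 0.0, 0.0, 5.0, 0.0, 0.0, 0.0, 0.0]  # invented gain per heading
V = value_iteration(gain)
```

The greedy policy extracted from the converged values rotates the sensor toward the informative heading from either side, which is the discrete analogue of the optimal alignment sequences discussed above.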
Web-based application on employee performance assessment using exponential comparison method
NASA Astrophysics Data System (ADS)
Maryana, S.; Kurnia, E.; Ruyani, A.
2017-02-01
Employee performance assessment, also called performance review or performance evaluation, is an effort to assess staff achievements with the aim of increasing the productivity of employees and companies. This application supports employee performance assessment using five criteria: presence, quality of work, quantity of work, discipline, and teamwork. The system uses the exponential comparison method with Eckenrode weighting. Calculation results are presented as graphs showing the assessment of each employee. The system was developed with Notepad++ and a MySQL database. Testing showed that the application matches its design and runs properly; the tests conducted were structural and functional tests, validation, sensitivity analysis, and SUMI testing.
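In its common formulation, the exponential comparison method scores each alternative as TN_i = Σ_j rating_ij^weight_j. A minimal sketch with invented ratings and Eckenrode-style integer weights (the application's actual criteria and weights are not given in the abstract):

```python
# Sketch: exponential comparison method (MPE) score in its common form,
# TN_i = sum_j rating[i][j] ** weight[j]. The two criteria, ratings, and
# weights below are invented for illustration.
def mpe_scores(ratings, weights):
    """ratings[i][j]: rating of employee i on criterion j (e.g. 1-5).
    weights[j]: importance weight of criterion j. One score per employee."""
    return [sum(r ** w for r, w in zip(row, weights)) for row in ratings]

scores = mpe_scores([[4, 3], [3, 4]], [2, 1])  # employee 1: 4**2 + 3**1 = 19
```

The exponentiation makes differences on highly weighted criteria dominate the ranking, which is the method's distinguishing property compared with simple weighted sums.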
Dynamic Stability Experiment of Maglev Systems,
1995-04-01
This report summarizes the research performed on maglev vehicle dynamic stability at Argonne National Laboratory during the past few years. It also ... maglev system; it is important to consider this phenomenon in the development of all maglev systems. This report presents dynamic stability experiments on maglev systems and compares their numerical simulation with predictions calculated by a nonlinear dynamic computer code. Instabilities of an ...
Spread-spectrum multiple access using wideband noncoherent MFSK
NASA Technical Reports Server (NTRS)
Ha, Tri T.; Pratt, Timothy; Maggenti, Mark A.
1987-01-01
Two spread-spectrum multiple access systems which use wideband M-ary frequency shift keying (FSK) (MFSK) as the primary modulation are presented. A bit error rate performance analysis is presented and system throughput is calculated for sample C band and Ku band satellite systems. Sample link analyses are included to illustrate power and adjacent satellite interference considerations in practical multiple access systems.
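The BER analysis for noncoherent MFSK rests on the standard expansion for orthogonal signaling in AWGN. A sketch of the textbook expressions (the article's multiple-access interference terms are not modeled here):

```python
# Sketch: symbol and bit error probability of noncoherent M-ary FSK in
# AWGN (standard textbook expressions, not the article's full multiple-
# access analysis):
#   Ps = sum_{n=1}^{M-1} (-1)^(n+1) C(M-1, n) / (n+1) * exp(-n/(n+1) * Es/N0)
import math

def mfsk_symbol_error(m, es_n0):
    """es_n0: symbol energy to noise density ratio (linear, not dB)."""
    return sum((-1) ** (n + 1) * math.comb(m - 1, n) / (n + 1)
               * math.exp(-n / (n + 1) * es_n0)
               for n in range(1, m))

def mfsk_bit_error(m, es_n0):
    """Orthogonal signaling: Pb = (M/2) / (M-1) * Ps."""
    return m / 2 / (m - 1) * mfsk_symbol_error(m, es_n0)
```

For M = 2 the sum collapses to the familiar noncoherent binary FSK result, Pb = 0.5·exp(-Eb/2N0), which is a convenient sanity check.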
A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry
NASA Astrophysics Data System (ADS)
Stamer, Torsten; Inutsuka, Shu-ichiro
2018-06-01
We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.
Fang, Teng; Zhao, Xinbing; Zhu, Tiejun
2018-05-19
Half-Heusler (HH) compounds, with a valence electron count of 8 or 18, have gained popularity as promising high-temperature thermoelectric (TE) materials due to their excellent electrical properties, robust mechanical capabilities, and good high-temperature thermal stability. With the help of first-principles calculations, great progress has been made in half-Heusler thermoelectric materials. In this review, we summarize some representative theoretical work on band structures and transport properties of HH compounds. We introduce how basic band-structure calculations are used to investigate the atomic disorder in n-type MNiSb (M = Ti, Zr, Hf) compounds and guide the band engineering to enhance TE performance in p-type FeRSb (R = V, Nb) based systems. The calculations on electrical transport properties, especially the scattering time, and lattice thermal conductivities are also demonstrated. The outlook for future research directions of first-principles calculations on HH TE materials is also discussed.
Frau, Juan; Glossman-Mitnik, Daniel
2017-01-01
Amino acids and peptides have the potential to perform as corrosion inhibitors. The chemical reactivity descriptors that arise from Conceptual DFT for the twenty natural amino acids have been calculated by using the latest Minnesota family of density functionals. In order to verify the validity of the calculation of the descriptors directly from the HOMO and LUMO, a comparison has been performed with those obtained through ΔSCF results. Moreover, the active sites for nucleophilic and electrophilic attacks have been identified through Fukui function indices, the dual descriptor Δf(r) and the electrophilic and nucleophilic Parr functions. The results could be of interest as a starting point for the study of large peptides where the calculation of the radical cation and anion of each system may be computationally harder and costly. PMID:28361050
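The descriptor evaluation from frontier orbital energies is a short calculation. A sketch using Koopmans-like relations, with invented orbital energies in hartree; note that hardness conventions differ by a factor of two between authors:

```python
# Sketch: global Conceptual DFT descriptors from frontier orbital
# energies via Koopmans-like relations (I = -E_HOMO, A = -E_LUMO).
# Conventions vary: eta = I - A is used here (some authors use (I-A)/2).
# The orbital energies below are invented, in hartree.
def cdft_descriptors(e_homo, e_lumo):
    i, a = -e_homo, -e_lumo
    chi = (i + a) / 2              # electronegativity (= -chemical potential)
    eta = i - a                    # chemical hardness
    omega = chi ** 2 / (2 * eta)   # electrophilicity index
    return chi, eta, omega

chi, eta, omega = cdft_descriptors(-0.30, -0.05)
```

This direct HOMO/LUMO route is exactly what the abstract compares against the more expensive ΔSCF values computed from the radical cation and anion of each system.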
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuzminskii, M.B.; Bagator'yants, A.A.; Kazanskii, V.B.
1986-08-01
The authors perform ab initio calculations, by the SCF MO LCAO method, of the electronic and geometric structure of the CuCO^(n+) systems (n = 0, 1) and of the potential curves of CO as a function of the charge state of the copper, with variation of all geometric parameters. The calculations of open-shell electronic states were performed by the unrestricted SCF method in a minimal basis set (I: STO-3G for C and O, MINI-1' for Cu) and in a valence double-zeta basis set (II: MIDI-1 for C and O, MIDI'2' for Cu). The principal results from the calculation in the more flexible basis II are presented, and the agreement between the results obtained in the minimal basis I and these data is then analyzed qualitatively.
NASA Astrophysics Data System (ADS)
Petersen, Philippe; Cunha, Vanessa; Gonçalves, Marcos; Petrilli, Helena; Constantino, Vera; Instituto de Física, Departamento de Física de Materiais e Mecânica Team; Instituto de Química, Departamento de Química Fundamental Team
2013-03-01
Layered double hydroxides (LDH) can be used as nanocontainers for the immobilization of pravastatin, in order to obtain suitable drug carriers. The material's structure and spectroscopic properties were analyzed by NMR and IR/Raman, supported by theoretical calculations. Density Functional Theory (DFT) calculations were performed using the Gaussian03 package. The geometry optimizations were based on the single-crystal X-ray diffraction data of the tert-octylammonium salt of pravastatin. Tetramethylsilane (TMS), computed with the same basis set, was used as the reference for calculating the 13C chemical shifts. A scaling factor was used to compare theoretical and experimental harmonic vibrational frequencies. With these results we were able to make precise assignments of the NMR and IR/Raman spectra of sodium pravastatin. We acknowledge support from CAPES, INEO and CNPQ.
Theory and design of interferometric synthetic aperture radars
NASA Technical Reports Server (NTRS)
Rodriguez, E.; Martin, J. M.
1992-01-01
A derivation of the signal statistics, an optimal estimator of the interferometric phase, and the expression necessary to calculate the height-error budget are presented. These expressions are used to derive methods of optimizing the parameters of the interferometric synthetic aperture radar system (InSAR), and are then employed in a specific design example for a system to perform high-resolution global topographic mapping with a one-year mission lifetime, subject to current technological constraints. A Monte Carlo simulation of this InSAR system is performed to evaluate its performance for realistic topography. The results indicate that this system has the potential to satisfy the stringent accuracy and resolution requirements for geophysical use of global topographic data.
Performance, Agility and Cost of Cloud Computing Services for NASA GES DISC Giovanni Application
NASA Astrophysics Data System (ADS)
Pham, L.; Chen, A.; Wharton, S.; Winter, E. L.; Lynnes, C.
2013-12-01
The NASA Goddard Earth Science Data and Information Services Center (GES DISC) is investigating the performance, agility and cost of Cloud computing for GES DISC applications. Giovanni (Geospatial Interactive Online Visualization ANd aNalysis Infrastructure), one of the core applications at the GES DISC for online climate-related Earth science data access, subsetting, analysis, visualization, and downloading, was used to evaluate the feasibility and effort of porting an application to the Amazon Cloud Services platform. The performance and the cost of running Giovanni on the Amazon Cloud were compared to similar parameters for the GES DISC local operational system. A Giovanni Time-Series analysis of aerosol absorption optical depth (388nm) from OMI (Ozone Monitoring Instrument)/Aura was selected for these comparisons. All required data were pre-cached in both the Cloud and local system to avoid data transfer delays. The 3-, 6-, 12-, and 24-month data were used for analysis on the Cloud and local system respectively, and the processing times for the analysis were used to evaluate system performance. To investigate application agility, Giovanni was installed and tested on multiple Cloud platforms. The cost of using a Cloud computing platform mainly consists of: computing, storage, data requests, and data transfer in/out. The Cloud computing cost is calculated based on the hourly rate, and the storage cost is calculated based on the rate of Gigabytes per month. Cost for incoming data transfer is free, and for data transfer out, the cost is based on the rate in Gigabytes. The costs for a local server system consist of buying hardware/software, system maintenance/updating, and operating cost. The results showed that the Cloud platform had a 38% better performance and cost 36% less than the local system. This investigation shows the potential of cloud computing to increase system performance and lower the overall cost of system management.
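The cost decomposition described above reduces to simple arithmetic. The rates below are hypothetical placeholders, not actual Amazon pricing:

```python
# Sketch: the monthly cloud cost decomposition described above (hourly
# compute + per-GB-month storage + per-GB egress; ingress free). All
# rates and usage figures are hypothetical placeholders, not AWS pricing.
def monthly_cloud_cost(hours, rate_per_hour,
                       storage_gb, rate_gb_month,
                       egress_gb, rate_gb_out):
    return (hours * rate_per_hour
            + storage_gb * rate_gb_month
            + egress_gb * rate_gb_out)

# e.g. one instance for a 720-hour month, 500 GB stored, 200 GB served out
cost = monthly_cloud_cost(720, 0.10, 500, 0.03, 200, 0.09)
```

Putting the local system's hardware amortization, maintenance, and operations on the same monthly footing is what enables the percentage comparison quoted in the abstract.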
Performance analysis of an air drier for a liquid dehumidifier solar air conditioning system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Queiroz, A.G.; Orlando, A.F.; Saboya, F.E.M.
1988-05-01
A model was developed for calculating the operating conditions of a non-adiabatic liquid dehumidifier used in solar air conditioning systems. In the experimental facility used for obtaining the data, air and triethylene glycol circulate countercurrently outside staggered copper tubes which are the filling of an absorption tower. Water flows inside the copper tubes, thus cooling the whole system and increasing the mass transfer potential for drying air. The methodology for calculating the mass transfer coefficient is based on the Merkel integral approach, taking into account the lowering of the water vapor pressure in equilibrium with the water glycol solution.
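The Merkel integral approach mentioned above reduces the mass-transfer calculation to a numerical quadrature along the operating line. A minimal sketch follows, written for the classic cooling-tower form of the integral, KaV/L = ∫ cp dT / (h_s − h_a), rather than the paper's glycol dehumidifier; the temperature and enthalpy stations are made-up illustration values:

```python
def merkel_number(temps, h_sat, h_air, cp=4.186):
    """Trapezoidal evaluation of the Merkel-style integral
    KaV/L = ∫ cp dT / (h_s - h_a), with temperatures in °C and
    enthalpies in kJ/kg (illustrative values only)."""
    vals = [cp / (hs - ha) for hs, ha in zip(h_sat, h_air)]
    me = 0.0
    for i in range(len(temps) - 1):
        # Trapezoid rule over each temperature interval.
        me += 0.5 * (vals[i] + vals[i + 1]) * (temps[i + 1] - temps[i])
    return me

# Hypothetical operating line: three stations with saturated and bulk enthalpies.
me = merkel_number([30.0, 35.0, 40.0],
                   h_sat=[100.0, 130.0, 166.0],
                   h_air=[80.0, 90.0, 100.0])
```

The mass transfer coefficient then follows from equating this integral to KaV/L for the measured flow rates.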
SU-E-T-100: Designing a QA Tool for Enhanced Dynamic Wedges Based On Dynalog Files
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yousuf, A; Hussain, A
2014-06-01
Purpose: A robust quality assurance (QA) program for computer controlled enhanced dynamic wedges (EDW) has been designed and tested. The QA calculations are based upon the EDW dynamic log files generated during dose delivery. Methods: The Varian record and verify system generates dynamic log (dynalog) files during dynamic dose delivery. These files contain information such as date and time of treatment, energy, monitor units, wedge orientation, and type of treatment. They also contain the expected (calculated) segmented treatment tables (STT) and the actual delivered STT as a verification record, and can be used to assess the integrity and precision of treatment plan delivery. The plans were delivered with a 6 MV beam from a Varian linear accelerator. For the available EDW angles (10°, 15°, 20°, 25°, 30°, 45°, and 60°), Varian STT values were used to manually calculate monitor units for each segment; the same tables can also be used to calculate the EDW factors. Independent verification of fractional MUs per segment was performed against those generated from the dynalog files. The EDW factors used to calculate MUs in the TPS were dosimetrically verified in a solid water phantom with a semiflex chamber on the central axis. Results: EDW factors generated from the STT provided by Varian were verified against measurements, which agreed with the calculated EDW data to within about 1%. Variation between the MUs per segment obtained from dynalog files and those calculated manually was less than 2%. Conclusion: An efficient and easy tool for routine QA of EDW is suggested. The method can be easily implemented in any institution without the need for expensive QA equipment, and errors of the order of 2% or more can be readily detected.
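The per-segment MU check described above amounts to differencing the cumulative STT fractions and scaling by the total monitor units. A minimal sketch (the STT fractions and total MU below are made-up illustration values, not Varian's golden STT):

```python
# Cumulative delivered-dose fraction at each segment boundary of an EDW
# delivery (hypothetical values) and the prescribed total monitor units.
stt = [0.0, 0.12, 0.30, 0.55, 0.80, 1.00]
total_mu = 200.0

# Fractional MU for each segment is the difference of consecutive
# cumulative STT entries, scaled by the total MU.
segment_mu = [(b - a) * total_mu for a, b in zip(stt, stt[1:])]

# The segments must sum back to the prescribed total.
assert abs(sum(segment_mu) - total_mu) < 1e-9
```

Comparing `segment_mu` against the per-segment values recorded in the dynalog file gives the <2% consistency check reported in the abstract.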
A Bayesian and Physics-Based Ground Motion Parameters Map Generation System
NASA Astrophysics Data System (ADS)
Ramirez-Guzman, L.; Quiroz, A.; Sandoval, H.; Perez-Yanez, C.; Ruiz, A. L.; Delgado, R.; Macias, M. A.; Alcántara, L.
2014-12-01
We present the Ground Motion Parameters Map Generation (GMPMG) system developed by the Institute of Engineering at the National Autonomous University of Mexico (UNAM). The system delivers estimates of information associated with the social impact of earthquakes, engineering ground motion parameters (gmp), and macroseismic intensity maps. The gmp calculated are peak ground acceleration and velocity (pga and pgv) and response spectral acceleration (SA). The GMPMG relies on real-time data received from strong ground motion stations belonging to UNAM's networks throughout Mexico. Data are gathered via satellite and internet service providers, and managed with the data acquisition software Earthworm. The system is self-contained and can perform all calculations required for estimating gmp and intensity maps due to earthquakes, automatically or manually. Initial data processing is performed by baseline-correcting the records and removing those containing glitches or low signal-to-noise ratios. The system then assigns a hypocentral location using first arrivals and a simplified 3D model, followed by a moment tensor inversion, which is performed using a pre-calculated Receiver Green's Tensors (RGT) database for a realistic 3D model of Mexico. A backup system to compute epicentral location and magnitude is in place. A Bayesian Kriging is employed to combine recorded values with grids of computed gmp. The latter are obtained by using appropriate ground motion prediction equations (for pgv, pga and SA with T=0.3, 0.5, 1 and 1.5 s) and numerical simulations performed in real time, using the aforementioned RGT database (for SA with T=2, 2.5 and 3 s). Estimated intensity maps are then computed using SA(T=2 s) to Modified Mercalli Intensity correlations derived for central Mexico. The maps are made available to the institutions in charge of the disaster prevention systems.
In order to analyze the accuracy of the maps, we compare them against observations not considered in the computations, and present some examples of recent earthquakes. We conclude that the system provides information with a fair goodness-of-fit against observations. This project is partially supported by DGAPA-PAPIIT (UNAM) project TB100313-RR170313.
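The kriging step above combines recorded values with grids of computed gmp. As a hedged stand-in for the paper's Bayesian formulation (which additionally treats the simulated grid as a prior), here is a minimal 1-D ordinary-kriging interpolator with a spherical variogram; the sill and range parameters are illustrative, not the system's actual model:

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, sill=1.0, vrange=5.0):
    """Ordinary kriging of scattered 1-D observations using a spherical
    variogram (illustrative parameters, not the GMPMG model)."""
    def gamma(h):
        h = np.abs(h)
        g = sill * (1.5 * h / vrange - 0.5 * (h / vrange) ** 3)
        return np.where(h >= vrange, sill, g)

    n = len(xs)
    A = np.ones((n + 1, n + 1))              # kriging system with Lagrange row
    A[:n, :n] = gamma(xs[:, None] - xs[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(xs - x0)
    w = np.linalg.solve(A, b)                # weights plus Lagrange multiplier
    return float(w[:n] @ zs)                 # weighted sum of observations
```

A defining property, useful as a sanity check, is that the predictor honors the data: at an observation point it returns the observed value exactly.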
Creating a balanced scorecard for a hospital system.
Pink, G H; McKillop, I; Schraa, E G; Preyra, C; Montgomery, C; Baker, G R
2001-01-01
In 1999, hospitals in Ontario, Canada, collaborated with a university-based research team to develop a report on the relative performance of individual hospitals in Canada's most populated province. The researchers used the balanced-scorecard framework advocated by Kaplan and Norton. Indicators of performance were developed in four areas: clinical utilization and outcomes, patient satisfaction, system integration and change, and financial performance and condition. The process of selecting, calculating, and validating meaningful indicators of financial performance and condition is outlined. Lessons learned along the way are provided. These lessons may prove valuable to other finance researchers and practitioners who are engaged in performance measurement endeavors.
NASA Technical Reports Server (NTRS)
Dankanich, John W.; Walker, Mitchell; Swiatek, Michael W.; Yim, John T.
2013-01-01
The electric propulsion community has been implored to establish and implement a set of universally applicable test standards during the research, development, and qualification of electric propulsion systems. Variability from facility to facility and, more importantly, between ground and flight performance can result in large margins in application or aversion to mission infusion. Performance measurements and life testing under appropriate conditions can be costly and lengthy. Measurement practices must be consistent, accurate, and repeatable. Additionally, the measurements must be universally transportable across facilities throughout development, qualification, spacecraft integration, and on-orbit performance. A recommended practice for making pressure measurements and pressure diagnostics, and for calculating effective pumping speeds, is presented with justification.
Large Scale GW Calculations on the Cori System
NASA Astrophysics Data System (ADS)
Deslippe, Jack; Del Ben, Mauro; da Jornada, Felipe; Canning, Andrew; Louie, Steven
The NERSC Cori system, powered by 9000+ Intel Xeon-Phi processors, represents one of the largest HPC systems for open-science in the United States and the world. We discuss the optimization of the GW methodology for this system, including both node level and system-scale optimizations. We highlight multiple large scale (thousands of atoms) case studies and discuss both absolute application performance and comparison to calculations on more traditional HPC architectures. We find that the GW method is particularly well suited for many-core architectures due to the ability to exploit a large amount of parallelism across many layers of the system. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Materials Sciences and Engineering Division, as part of the Computational Materials Sciences Program.
Configuration study for a 30 GHz monolithic receive array, volume 2
NASA Technical Reports Server (NTRS)
Nester, W. H.; Cleaveland, B.; Edward, B.; Gotkis, S.; Hesserbacker, G.; Loh, J.; Mitchell, B.
1984-01-01
The formalism of the sidelobe suppression algorithm and the method used to calculate the system noise figure for a 30 GHz monolithic receive array are presented. Results of array element weight determination and performance studies of a Gregorian aperture image system are also given.
Performance enhancement of linear stirling cryocoolers
NASA Astrophysics Data System (ADS)
Korf, Herbert; Ruehlich, Ingo; Wiedmann, Th.
2000-12-01
Performance and reliability parameters of the AIM Stirling coolers have been presented in several previous publications. This paper focuses on recent developments at AIM to improve the COP of cryocoolers for IR-detector and systems applications. Improved cryocooler COP is key to optimized form factors, weight and reliability. In addition, some systems are critical with respect to minimum input power, and consequently minimum electromagnetic interference or magnetic stray fields, heat sinking, or minimum stress under high g-levels. Although performance parameters and loss mechanisms are well understood and can be calculated precisely, several losses were still excessive and needed to be minimized. The AIM program is based on the SADA I cryocooler, which is now optimized to carry a 4.3 W net heat load at 77 K. As this program will lead to applications on a space platform, in a next step AIM is introducing flexure bearings, and in a final step an advanced pulse tube cold head will be implemented. The performance of the SADA II cooler is also being improved, using the same tools and methods that doubled the performance of the SADA I cooler. The main features are summarized together with measured or calculated performance data.
Calculation of nuclear spin-spin coupling constants using frozen density embedding
DOE Office of Scientific and Technical Information (OSTI.GOV)
Götz, Andreas W., E-mail: agoetz@sdsc.edu; Autschbach, Jochen; Visscher, Lucas, E-mail: visscher@chem.vu.nl
2014-03-14
We present a method for a subsystem-based calculation of indirect nuclear spin-spin coupling tensors within the framework of current-spin-density-functional theory. Our approach is based on the frozen-density embedding scheme within density-functional theory and extends a previously reported subsystem-based approach for the calculation of nuclear magnetic resonance shielding tensors to magnetic fields which couple not only to orbital but also spin degrees of freedom. This leads to a formulation in which the electron density, the induced paramagnetic current, and the induced spin-magnetization density are calculated separately for the individual subsystems. This is particularly useful for the inclusion of environmental effects in the calculation of nuclear spin-spin coupling constants. Neglecting the induced paramagnetic current and spin-magnetization density in the environment due to the magnetic moments of the coupled nuclei leads to a very efficient method in which the computationally expensive response calculation has to be performed only for the subsystem of interest. We show that this approach leads to very good results for the calculation of solvent-induced shifts of nuclear spin-spin coupling constants in hydrogen-bonded systems. Also for systems with stronger interactions, frozen-density embedding performs remarkably well, given the approximate nature of currently available functionals for the non-additive kinetic energy. As an example we show results for methylmercury halides which exhibit an exceptionally large shift of the one-bond coupling constants between ¹⁹⁹Hg and ¹³C upon coordination of dimethylsulfoxide solvent molecules.
Niaksu, Olegas; Zaptorius, Jonas
2014-01-01
This paper presents a methodology suitable for creating a performance-related remuneration system in the healthcare sector that would meet requirements for efficiency and sustainable quality of healthcare services. A methodology for performance indicator selection, ranking and a posteriori evaluation is proposed and discussed. The Priority Distribution Method is applied for unbiased weighting of performance criteria, and data mining methods are proposed to monitor and evaluate the results of the motivation system. We developed an eight-step method for healthcare-specific criteria selection, and proposed and demonstrated the application of the Priority Distribution Method for weighting the selected criteria. Moreover, a set of data mining methods for evaluating the outcomes of the motivational system was proposed. The described methodology for calculating performance-related payment needs practical approbation. We plan to develop semi-automated tools for monitoring institutional and personal performance indicators. The final step would be approbation of the methodology in a healthcare facility.
Development of a multi-modal Monte-Carlo radiation treatment planning system combined with PHITS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kumada, Hiroaki; Nakamura, Takemi; Komeda, Masao
A new multi-modal Monte-Carlo radiation treatment planning system is under development at the Japan Atomic Energy Agency (JAEA). This system (development code: JCDS-FX) builds on the fundamental technologies of JCDS, which was developed by JAEA to perform treatment planning for the boron neutron capture therapy (BNCT) being conducted at JRR-4 in JAEA. JCDS has many advantages based on practical accomplishments in actual clinical trials of BNCT at JRR-4, and these advantages have been carried over to JCDS-FX. One of the features of JCDS-FX is that PHITS has been applied to the particle transport calculation. PHITS is a multipurpose particle Monte-Carlo transport code, so its application makes it possible to evaluate doses not only for BNCT but also for several other radiotherapies, such as proton therapy. To verify the calculation accuracy of JCDS-FX with PHITS for BNCT, treatment planning of an actual BNCT session conducted at JRR-4 was performed retrospectively. The verification results demonstrated that the new system is applicable to BNCT clinical trials in practical use. In the framework of R&D for laser-driven proton therapy, we have begun studying the application of JCDS-FX combined with PHITS to proton therapy in addition to BNCT. Several features and performances of the new multimodal Monte-Carlo radiotherapy planning system are presented.
Quantitative evaluation of patient-specific quality assurance using online dosimetry system
NASA Astrophysics Data System (ADS)
Jung, Jae-Yong; Shin, Young-Ju; Sohn, Seung-Chang; Min, Jung-Whan; Kim, Yon-Lae; Kim, Dong-Su; Choe, Bo-Young; Suh, Tae-Suk
2018-01-01
In this study, we investigated the clinical performance of an online dosimetry system (Mobius FX system, MFX) by 1) dosimetric plan verification using gamma passing rates and dose volume metrics and 2) evaluation of error-detection capability using deliberately introduced machine errors. Eighteen volumetric modulated arc therapy (VMAT) plans were studied. To evaluate the clinical performance of the MFX, we used gamma analysis and dose volume histogram (DVH) analysis. In addition, to evaluate the error-detection capability, we used gamma analysis and DVH analysis with three types of deliberately introduced errors (Type 1: gantry angle-independent multi-leaf collimator (MLC) error; Type 2: gantry angle-dependent MLC error; Type 3: gantry angle error). In a dosimetric verification comparison of the physical dosimetry system (Delta4PT) and the online dosimetry system (MFX), the gamma passing rates of the two dosimetry systems showed very good agreement with the treatment planning system (TPS) calculation. For the average dose difference between the TPS calculation and the MFX measurement, most of the dose metrics agreed within a tolerance of 3%. In the error-detection comparison of the Delta4PT and the MFX, the gamma passing rates of the two dosimetry systems did not meet the 90% acceptance criterion when the magnitude of error exceeded 2 mm and 1.5°, respectively, for error plans of Types 1, 2, and 3. For delivery with all error types, the average dose difference of the PTV as a function of error magnitude showed agreement between the TPS calculation and the MFX measurement within 1%. Overall, the results of the online dosimetry system showed very good agreement with those of the physical dosimetry system. Our results suggest that a log file-based online dosimetry system is a very suitable verification tool for accurate and efficient clinical routines for patient-specific quality assurance (QA).
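The gamma analysis used for both the dosimetric verification and the error-detection tests can be sketched in one dimension as follows (global 3%/3 mm criteria; a simplification of the full 3-D gamma evaluation performed by commercial systems):

```python
import numpy as np

def gamma_index_1d(ref_pos, ref_dose, eval_pos, eval_dose, dd=0.03, dta=3.0):
    """Global 1-D gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dr/DTA)^2 + (dD/(dd*Dmax))^2).
    dd is the dose criterion (fraction of max reference dose), dta in mm."""
    dmax = ref_dose.max()
    out = []
    for rp, rd in zip(ref_pos, ref_dose):
        dist_term = ((eval_pos - rp) / dta) ** 2
        dose_term = ((eval_dose - rd) / (dd * dmax)) ** 2
        out.append(np.sqrt(dist_term + dose_term).min())
    return np.array(out)

def passing_rate(gammas):
    """Percentage of points with gamma <= 1 (compare to the 90% criterion)."""
    return 100.0 * np.mean(gammas <= 1.0)
```

By construction, comparing a distribution against itself yields gamma = 0 everywhere and a 100% passing rate, which is a convenient self-test.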
Software for X-Ray Images Calculation of Hydrogen Compression Device in Megabar Pressure Range
NASA Astrophysics Data System (ADS)
Egorov, Nikolay; Bykov, Alexander; Pavlov, Valery
2007-06-01
Software for x-ray image simulation is described. The software is part of an x-ray method used to investigate the equation of state of hydrogen in the megabar pressure range. A graphical interface allows users to clearly and simply input the data for the x-ray image calculation (properties of the studied device, parameters of the x-ray radiation source, parameters of the x-ray radiation recorder, and the experiment geometry), to display the calculation results, and to transmit them efficiently to other software for processing. The calculation time is minimized, which makes it possible to perform calculations interactively. The software is written in the MATLAB system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbert, J.; van Lint, V.; Sherwood, S.
This report is a compilation of two previous sets of pretest calculations, references 1 and 2, and the grounding and shielding report, reference 3. The calculations performed in reference 1 were made for the baseline system, with the instrumentation trailers not isolated from ground, and wider ranges of ground conductivity were considered. This was used to develop the grounding and shielding plan included in the appendix. The final pretest calculations of reference 2 were performed for the modified system with isolated trailers, and with a better knowledge of the ground conductivity. The basic driving mechanism for currents in the model is the motion of Compton electrons, driven by gamma rays, in the air gaps and soil. Most of the Compton current is balanced by conduction current which returns directly along the path of the Compton electron, but a small fraction will return by circuitous paths involving current flow on conductors, including the uphole cables. The calculation of the currents is done in a two-step process: first, the voltages in the ground near the conducting metallic structures are calculated without considering the presence of the structures. These are then used as open-circuit drivers for an electrical model of the conductors which is obtained from loop integrals of Maxwell's equations. The model used is a transmission line model, similar to those which have been used to calculate EMP currents on buried and overhead cables in other situations, including previous underground tests, although on much shorter distance and time scales, and with more controlled geometries. The behavior of air gaps between the conducting structure and the walls of the drift is calculated using an air chemistry model which determines the electron and ion densities and uses them to calculate the air conductivity across the gap.
Adaptive Optics Communications Performance Analysis
NASA Technical Reports Server (NTRS)
Srinivasan, M.; Vilnrotter, V.; Troy, M.; Wilson, K.
2004-01-01
The performance improvement obtained through the use of adaptive optics for deep-space communications in the presence of atmospheric turbulence is analyzed. Using simulated focal-plane signal-intensity distributions, uncoded pulse-position modulation (PPM) bit-error probabilities are calculated assuming the use of an adaptive focal-plane detector array as well as an adaptively sized single detector. It is demonstrated that current practical adaptive optics systems can yield performance gains over an uncompensated system ranging from approximately 1 dB to 6 dB depending upon the PPM order and background radiation level.
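The uncoded PPM symbol-error probabilities above come from simulated focal-plane intensity distributions; a much-simplified stand-in is a Monte-Carlo estimate over a memoryless slot channel with additive Gaussian noise (not the paper's adaptive-optics detector model; the SNR values are illustrative):

```python
import math
import random

def ppm_symbol_error_rate(M=16, snr_db=10.0, trials=20000, seed=1):
    """Monte-Carlo estimate of uncoded M-ary PPM symbol-error rate: the
    signal pulse occupies one of M slots, each slot sees unit-variance
    Gaussian noise, and the receiver picks the largest slot."""
    rng = random.Random(seed)
    amp = math.sqrt(10 ** (snr_db / 10.0))   # signal amplitude from SNR
    errors = 0
    for _ in range(trials):
        slots = [rng.gauss(0.0, 1.0) for _ in range(M)]
        slots[0] += amp                      # signal placed in slot 0
        if max(range(M), key=lambda i: slots[i]) != 0:
            errors += 1
    return errors / trials
```

As expected, the error rate falls sharply with SNR, mirroring the 1 dB to 6 dB gains quoted for adaptive compensation.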
High performance computing in biology: multimillion atom simulations of nanoscale systems
Sanbonmatsu, K. Y.; Tung, C.-S.
2007-01-01
Computational methods have been used in biology for sequence analysis (bioinformatics), all-atom simulation (molecular dynamics and quantum calculations), and more recently for modeling biological networks (systems biology). Of these three techniques, all-atom simulation is currently the most computationally demanding, in terms of compute load, communication speed, and memory load. Breakthroughs in electrostatic force calculation and dynamic load balancing have enabled molecular dynamics simulations of large biomolecular complexes. Here, we report simulation results for the ribosome, using approximately 2.64 million atoms, the largest all-atom biomolecular simulation published to date. Several other nanoscale systems with different numbers of atoms were studied to measure the performance of the NAMD molecular dynamics simulation program on the Los Alamos National Laboratory Q Machine. We demonstrate that multimillion atom systems represent a 'sweet spot' for the NAMD code on large supercomputers. NAMD displays an unprecedented 85% parallel scaling efficiency for the ribosome system on 1024 CPUs. We also review recent targeted molecular dynamics simulations of the ribosome that prove useful for studying conformational changes of this large biomolecular complex in atomic detail. PMID:17187988
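The 85% figure quoted above corresponds to the usual definition of parallel scaling efficiency relative to a baseline run; the timings in the example below are made up for illustration, not the actual Q Machine measurements:

```python
def parallel_efficiency(t_base, n_base, t_n, n):
    """Scaling efficiency E = (t_base * n_base) / (t_n * n): the ratio of
    achieved speedup to the ideal speedup when scaling from n_base to n
    processors."""
    return (t_base * n_base) / (t_n * n)

# Hypothetical timings: 100 s/step on 64 CPUs vs 7.35 s/step on 1024 CPUs.
eff = parallel_efficiency(100.0, 64, 7.35, 1024)
```

Perfect scaling gives E = 1; values near 0.85 on a 16x processor increase indicate the communication overhead is still modest.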
Laser diode initiated detonators for space applications
NASA Technical Reports Server (NTRS)
Ewick, David W.; Graham, J. A.; Hawley, J. D.
1993-01-01
Ensign Bickford Aerospace Company (EBAC) has over ten years of experience in the design and development of laser ordnance systems. Recent efforts have focused on the development of laser diode ordnance systems for space applications. Because the laser initiated detonators contain only insensitive secondary explosives, a high degree of system safety is achieved. Typical performance characteristics of a laser diode initiated detonator are described in this paper, including all-fire level, function time, and output. A finite difference model used at EBAC to predict detonator performance is described, and calculated results are compared to experimental data. Finally, the use of statistically designed experiments to evaluate the performance of laser initiated detonators is discussed.
Prototype solar heating and hot water systems
NASA Technical Reports Server (NTRS)
1977-01-01
Alternative approaches to solar heating and hot water system configurations were studied, parametrizing the number and location of the dampers, the number and location of the fans, the interface locations with the furnace, the size and type of subsystems, and the operating modes. A two-pass air-heating collector was selected based on efficiency and ease of installation. An energy transport module was also designed to compactly contain all the mechanical and electrical control components. System performance calculations were carried out over a heating season for the tentative site location at Tunkhannock, Pa. Results illustrate the effect of collector size, storage capacity, and use of a reflector. Factors which affected system performance include the site location and the insulative quality of the house and of the system components. A preliminary system performance specification is given.
Georg, Dietmar; Stock, Markus; Kroupa, Bernhard; Olofsson, Jörgen; Nyholm, Tufve; Ahnesjö, Anders; Karlsson, Mikael
2007-08-21
Experimental methods are commonly used for patient-specific intensity-modulated radiotherapy (IMRT) verification. The purpose of this study was to investigate the accuracy and performance of independent dose calculation software (denoted as 'MUV' (monitor unit verification)) for patient-specific quality assurance (QA). 52 patients receiving step-and-shoot IMRT were considered. IMRT plans were recalculated by the treatment planning systems (TPS) in a dedicated QA phantom, in which an experimental 1D and 2D verification (0.3 cm³ ionization chamber; films) was performed. Additionally, an independent dose calculation was performed. The fluence-based algorithm of MUV accounts for collimator transmission, rounded leaf ends, tongue-and-groove effect, backscatter to the monitor chamber and scatter from the flattening filter. The dose calculation utilizes a pencil beam model based on a beam quality index. DICOM RT files from patient plans, exported from the TPS, were directly used as patient-specific input data in MUV. For composite IMRT plans, average deviations in the high dose region between ionization chamber measurements and point dose calculations performed with the TPS and MUV were 1.6 ± 1.2% and 0.5 ± 1.1% (1 S.D.). The dose deviations between MUV and TPS slightly depended on the distance from the isocentre position. For individual intensity-modulated beams (total 367), an average deviation of 1.1 ± 2.9% was determined between calculations performed with the TPS and with MUV, with maximum deviations up to 14%. However, absolute dose deviations were mostly less than 3 cGy. Based on the current results, we aim to apply a confidence limit of 3% (with respect to the prescribed dose) or 6 cGy for routine IMRT verification. For off-axis points at distances larger than 5 cm and for low dose regions, we consider 5% dose deviation or 10 cGy acceptable.
The time needed for an independent calculation compares very favourably with the net time for an experimental approach. The physical effects modelled in the dose calculation software MUV allow accurate dose calculations in individual verification points. Independent calculations may be used to replace experimental dose verification once the IMRT programme is mature.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heintz, P; Heintz, B; Sandoval, D
Purpose: Computerized radiation therapy treatment planning is performed on almost all patients today, but it is seldom used for laboratory irradiations. The first objective is to assess whether modern radiation therapy treatment planning (RTP) systems accurately predict the subject dose, by comparing in vivo and decedent dose measurements to calculated doses. The other objective is to determine the importance of using an RTP system for laboratory irradiations. Methods: 5 MOSFET radiation dosimeters were placed enterically in each subject (2 sedated Rhesus Macaques) to measure the absorbed dose at 5 levels (carina, lung, heart, liver and rectum) during whole body irradiation. The subjects were treated with large opposed lateral fields and extended distances to cover the entire subject, using a Varian 600C linac. CT simulation was performed ante-mortem (AM) and post-mortem (PM). To compare AM and PM doses, calculation points were placed at the location of each dosimeter in the treatment plan. The measured results were compared to the results from the Varian Eclipse and Prowess Panther RTP systems. Results: The Varian and Prowess treatment planning systems agreed to within ±1.5% for both subjects. However, there were significant differences between the measured and calculated doses. For both animals the calculated central axis dose was higher than prescribed by 3–5%, caused in part by inaccurate measurement of animal thickness at the time of irradiation. Compared to the RTP doses, the measured doses for one subject ranged from 4% to 7% high, and for the other subject from 7% to 14% high. Conclusions: Our results suggest that using a proper CT-based RTP system allows more accurate delivery of the prescribed dose to laboratory subjects. They also show that there is significant dose variation in such subjects when inhomogeneities are not considered in the planning process.
Array structure design handbook for stand alone photovoltaic applications
NASA Technical Reports Server (NTRS)
Didelot, R. C.
1980-01-01
This handbook will permit the user to design a low-cost structure for a variety of photovoltaic system applications under 10 kW. Any presently commercially available photovoltaic modules may be used. Design alternatives are provided for different generic structure types, structural materials, and electric interfaces. The use of a hand-held calculator is sufficient to perform the necessary calculations for the array designs.
A minimal multiconfigurational technique.
Fernández Rico, J; Paniagua, M; García de la Vega, J M; Fernández-Alonso, J I; Fantucci, P
1986-04-01
A direct minimization method previously presented by the authors is applied here to biconfigurational wave functions. A very moderate increase in time per iteration with respect to the one-determinant calculation, together with good convergence properties, has been found. Thus qualitatively correct studies of singlet systems with strong biradical character can be performed at a cost similar to that of Hartree-Fock calculations. Copyright © 1986 John Wiley & Sons, Inc.
Comparative Study of the Volumetric Methods Calculation Using GNSS Measurements
NASA Astrophysics Data System (ADS)
Şmuleac, Adrian; Nemeş, Iacob; Alina Creţan, Ioana; Sorina Nemeş, Nicoleta; Şmuleac, Laura
2017-10-01
This paper presents volumetric calculations for different mineral aggregates using different methods of analysis and compares the results. Two licensed software packages were chosen for this comparative study: TopoLT 11.2 and Surfer 13. TopoLT is dedicated to the development of topographic and cadastral plans, offering 3D terrain models, level curves, calculation of cut and fill volumes, and georeferencing of images. Surfer 13, produced by Golden Software since 1983, is used mainly in fields such as agriculture, construction, geophysics, geotechnical engineering, GIS, and water resources. It can build GRID terrain models, produce density maps using isolines, perform volumetric calculations, and generate 3D maps, and it reads various file types, including SHP, DXF and XLSX. The paper compares volumetric calculations performed with TopoLT by two methods: one in which a single 3D model is used for both the bottom and top surfaces, and one in which separate 3D terrain models are used for the bottom and top surfaces. Both variants are compared against volumetric calculations performed with Surfer 13 by generating a GRID terrain model. The topographical measurements were performed with Leica GPS 1200 Series equipment, using the Romanian position determination system ROMPOS, which ensures accurate ETRS reference positioning and coordinates through the National Network of GNSS Permanent Stations. GPS data processing was performed with the Leica Geo Office Combined program. For the volumetric calculations, the GPS points are in the Stereographic 1970 projection system, with altitudes referenced to the Black Sea 1975 system.
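On a GRID terrain model, the two-surface volume comparison described above reduces to summing signed prisms cell by cell. A minimal sketch with made-up elevation grids and a uniform cell area (real packages such as Surfer integrate over the grid cells more carefully):

```python
def cut_fill_volumes(bottom, top, cell_area):
    """Cut and fill volumes between two elevation grids sampled on the
    same regular grid; each cell contributes a prism of height dz."""
    cut = fill = 0.0
    for row_b, row_t in zip(bottom, top):
        for zb, zt in zip(row_b, row_t):
            dz = zt - zb
            if dz >= 0:
                fill += dz * cell_area   # top surface above bottom: fill
            else:
                cut -= dz * cell_area    # top surface below bottom: cut
    return cut, fill

# Stockpile example: flat base at 100 m, heaped top surface, 4 m² cells.
base = [[100.0] * 3 for _ in range(3)]
heap = [[100.0, 101.0, 100.0],
        [101.0, 103.0, 101.0],
        [100.0, 101.0, 100.0]]
cut, fill = cut_fill_volumes(base, heap, cell_area=4.0)
```

Running both surface pairings through the same routine reproduces the kind of TopoLT-versus-Surfer comparison the paper performs.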
Application of infrared uncooled cameras in surveillance systems
NASA Astrophysics Data System (ADS)
Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.
2013-10-01
The recent necessity to protect military bases, convoys and patrols has given serious impetus to the development of multisensor security systems for perimeter protection. Among the most important devices used in such systems are IR cameras. The paper discusses the technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective detection ranges depend on the class of the sensor used and on the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions, and simultaneously decreases the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, along with the technical possibilities for detecting a human being. Commercially available IR cameras capable of achieving the desired ranges were compared, and the spatial resolution required for detection, recognition and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance based on the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model ties range performance to image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is the most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification range calculations were made, and the results for devices with selected technical specifications were compared and discussed.
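A back-of-the-envelope version of the detection/recognition/identification range calculation is the Johnson-style estimate below, which counts resolvable cycles across the target at the focal-plane Nyquist limit. This is a simplification of the TTP-metric model used in the paper, and the optics and detector parameters are illustrative assumptions:

```python
def acquisition_range(target_size_m, focal_length_mm, pitch_um, cycles):
    """Range (m) at which 'cycles' resolvable cycles fit across the target,
    using the Nyquist limit of the focal-plane array: one cycle spans two
    pixels. Johnson-style cycle demands: ~1 for detection, ~4 for
    recognition, ~8 for identification."""
    ifov = (pitch_um * 1e-6) / (focal_length_mm * 1e-3)  # rad per pixel
    return target_size_m / (2.0 * cycles * ifov)

# Hypothetical uncooled camera: 17 µm microbolometer pitch, 100 mm optics,
# 1.8 m critical dimension of a standing person.
detect_m = acquisition_range(1.8, 100.0, 17.0, 1.0)
recog_m = acquisition_range(1.8, 100.0, 17.0, 4.0)
```

The estimate scales inversely with the required cycle count, so recognition range is a quarter of detection range for the same sensor.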
NASA Technical Reports Server (NTRS)
Gordon, S.; Mcbride, B. J.
1976-01-01
A detailed description of the equations and computer program for computations involving chemical equilibria in complex systems is given. A free-energy minimization technique is used. The program permits calculations such as (1) chemical equilibrium for assigned thermodynamic states (T,P), (H,P), (S,P), (T,V), (U,V), or (S,V), (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. The program considers condensed species as well as gaseous species.
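For orientation, the thermodynamic relation underlying such equilibrium computations links a reaction's standard Gibbs free-energy change to its equilibrium constant. The following is a minimal sketch of that relation only, not code from the Gordon-McBride program; the function name is illustrative.

```python
import math

R = 8.314462618  # universal gas constant, J/(mol*K)

def equilibrium_constant(delta_g: float, temperature: float) -> float:
    """Equilibrium constant K from the standard Gibbs free-energy change
    of reaction delta_g (J/mol) at the given temperature (K):
    K = exp(-delta_g / (R*T))."""
    return math.exp(-delta_g / (R * temperature))

# A reaction with delta_g = 0 is exactly balanced: K = 1.
print(equilibrium_constant(0.0, 298.15))        # 1.0
# A strongly exergonic reaction (-50 kJ/mol) lies far to the product side.
print(equilibrium_constant(-50e3, 298.15) > 1)  # True
```

A full free-energy minimization, as in the program described, generalizes this by minimizing the total Gibbs energy over all species simultaneously subject to elemental mass balance.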
Are university rankings useful to improve research? A systematic review
Momani, Shaher
2018-01-01
Introduction: Concerns about the reproducibility and impact of research urge improvement initiatives. Current university ranking systems evaluate and compare universities on measures of academic and research performance. Although often useful for marketing purposes, the value of ranking systems when examining quality and outcomes is unclear. The purpose of this study was to evaluate the usefulness of ranking systems and identify opportunities to support research quality and performance improvement. Methods: A systematic review of university ranking systems was conducted to investigate research performance and academic quality measures. Eligibility requirements included: inclusion of at least 100 doctoral-granting institutions; current production on an ongoing basis; inclusion of both global and US universities; publication of the rank calculation methodology in English; and independent calculation of ranks. Ranking systems also had to include some measures of research outcomes. Indicators were abstracted and contrasted with basic quality improvement requirements. Aggregation methods, the validity of research and academic quality indicators, and the suitability of ranking systems for quality improvement were also explored. Results: A total of 24 ranking systems were identified and 13 eligible ranking systems were evaluated. Six of the 13 rankings are 100% focused on research performance. For those reporting weighting, 76% of the total ranks are attributed to research indicators, with 24% attributed to academic or teaching quality. Seven systems rely on reputation surveys and/or faculty and alumni awards. Rankings influence academic choice, yet research performance measures are the most heavily weighted indicators. There are no generally accepted academic quality indicators in ranking systems. Discussion: No single ranking system provides a comprehensive evaluation of research and academic quality.
Utilizing a combined approach of the Leiden, Thomson Reuters Most Innovative Universities, and SCImago ranking systems may provide institutions with more effective feedback for research improvement. Rankings which rely extensively on subjective reputation and “luxury” indicators, such as award-winning faculty or alumni who are high-ranking executives, are not well suited for academic or research performance improvement initiatives. Future efforts should better explore measurement of university research performance through comprehensive and standardized indicators. This paper could serve as a general literature citation when one or more university ranking systems are used in efforts to improve academic prominence and research performance. PMID:29513762
The Use of Magnetoencephalography in Evaluating Human Performance
1991-06-01
...determines the head Cartesian coordinate system, and calculates the locations of the dipole sets in this reference frame. This system is based on an optical ...differences in brain activity are found between imagers and non-imagers, the brain areas which seem to be involved will be localized.
SPARC (SPARC Performs Automated Reasoning in Chemistry) chemical reactivity models were extended to calculate hydrolysis rate constants for carboxylic acid ester and phosphate ester compounds in aqueous and non-aqueous systems strictly from molecular structure. The energy diffe...
Turing instability in reaction-diffusion systems with nonlinear diffusion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zemskov, E. P., E-mail: zemskov@ccas.ru
2013-10-15
The Turing instability is studied in two-component reaction-diffusion systems with nonlinear diffusion terms, and the regions in parameter space where Turing patterns can form are determined. The boundaries between super- and subcritical bifurcations are found. Calculations are performed for one-dimensional Brusselator and Oregonator models.
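The linearized conditions that delimit where Turing patterns can form are straightforward to sketch. The following minimal illustration covers the classic Brusselator with ordinary linear diffusion (the paper itself treats nonlinear diffusion, which modifies these regions); function and parameter names are our own.

```python
import math

def turing_unstable(fu, fv, gu, gv, du, dv):
    """Standard linear conditions for a diffusion-driven (Turing) instability
    of a two-component reaction-diffusion system, given the reaction Jacobian
    [[fu, fv], [gu, gv]] at the homogeneous steady state and the diffusion
    coefficients du, dv."""
    trace = fu + gv
    det = fu * gv - fv * gu
    # The uniform state must be stable without diffusion...
    stable_without_diffusion = trace < 0 and det > 0
    if not stable_without_diffusion:
        return False
    # ...and diffusion must destabilize some wavenumber k, which requires
    # dv*fu + du*gv > 2*sqrt(du*dv*det).
    return dv * fu + du * gv > 2.0 * math.sqrt(du * dv * det)

# Brusselator f = a - (b+1)u + u^2 v, g = b u - u^2 v; steady state (a, b/a).
a, b = 2.0, 4.0
fu, fv, gu, gv = b - 1, a**2, -b, -a**2
print(turing_unstable(fu, fv, gu, gv, du=1.0, dv=10.0))  # True: patterns form
print(turing_unstable(fu, fv, gu, gv, du=1.0, dv=1.0))   # False: equal diffusion
```

The requirement of unequal diffusion coefficients (an inhibitor diffusing much faster than the activator) is exactly what the last two lines exercise.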
Wei, Qichao; Zhao, Weilong; Yang, Yang; Cui, Beiliang; Xu, Zhijun; Yang, Xiaoning
2018-03-19
Considerable interest in characterizing protein/peptide-surface interactions has prompted extensive computational studies on calculations of adsorption free energy. However, in many cases, each individual study has focused on the application of free energy calculations to a specific system; therefore, it is difficult to combine the results into a general picture for choosing an appropriate strategy for the system of interest. Herein, three well-established computational algorithms are systematically compared and evaluated to compute the adsorption free energy of small molecules on two representative surfaces. The results clearly demonstrate that the characteristics of the studied interfacial systems have crucial effects on the accuracy and efficiency of the adsorption free energy calculations. For the hydrophobic surface, steered molecular dynamics exhibits the highest efficiency, making it a favorable method of choice for enhanced-sampling simulations. However, for the charged surface, only the umbrella sampling method has the ability to accurately explore the adsorption free energy surface. The affinity of the water layer to the surface significantly affects the performance of free energy calculation methods, especially in the region close to the surface. Therefore, a general principle of how to discriminate between methodological and sampling issues based on the interfacial characteristics of the system under investigation is proposed. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
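Steered-MD trajectories are commonly converted into free-energy estimates via the Jarzynski equality. The sketch below is a generic illustration of that estimator, not necessarily the exact protocol of this study; the function name and kT value are illustrative.

```python
import math

def jarzynski_free_energy(work_values, kT=2.494):
    """Free-energy difference from nonequilibrium pulling work values
    (same units as kT; 2.494 kJ/mol corresponds to ~300 K) via the
    Jarzynski equality: dF = -kT * ln < exp(-W/kT) >."""
    n = len(work_values)
    # Shift by the minimum work (log-sum-exp) for numerical stability.
    m = min(work_values)
    s = sum(math.exp(-(w - m) / kT) for w in work_values)
    return m - kT * math.log(s / n)

# If every pulling realization does identical work w, dF equals w exactly.
print(round(jarzynski_free_energy([10.0, 10.0, 10.0]), 6))  # 10.0
```

In practice the exponential average is dominated by rare low-work trajectories, which is one source of the sampling issues the abstract distinguishes from methodological ones.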
Non-Born-Oppenheimer self-consistent field calculations with cubic scaling
NASA Astrophysics Data System (ADS)
Moncada, Félix; Posada, Edwin; Flores-Moreno, Roberto; Reyes, Andrés
2012-05-01
An efficient nuclear molecular orbital methodology is presented. This approach combines an auxiliary density functional theory for electrons (ADFT) with a localized Hartree product (LHP) representation for the nuclear wave function. A series of test calculations conducted on small molecules revealed that the energy and geometry errors introduced by the ADFT and LHP approximations are small and comparable to those obtained with electronic ADFT. In addition, sample calculations performed on (HF)n chains showed that the combined ADFT/LHP approach scales cubically with system size (n), as opposed to the quartic scaling of Hartree-Fock/LHP or DFT/LHP methods. Even for medium-size molecules, the improved scaling of the ADFT/LHP approach resulted in speedups of at least 5x with respect to Hartree-Fock/LHP calculations. The ADFT/LHP method opens up the possibility of studying nuclear quantum effects in large systems that would otherwise be impractical.
NASA Astrophysics Data System (ADS)
Masrour, R.; Hlil, E. K.
2016-08-01
Self-consistent ab initio calculations based on density functional theory, using both the full-potential linearized augmented plane wave and the Korringa-Kohn-Rostoker coherent potential approximation methods, are performed to investigate the electronic and magnetic properties of the Ga1-xMnxN system. Magnetic moments, considered to lie along the (001) axes, are computed. Data obtained from the ab initio calculations are used as input for high-temperature series expansion (HTSE) calculations to compute further magnetic parameters such as the magnetic phase diagram and the critical exponent. Increasing the dilution x in this system has made it possible to verify a series of HTSE predictions on the possibility of ferromagnetism in dilute magnetic insulators and to demonstrate that the interaction changes from antiferromagnetic to ferromagnetic, passing through the spin glass phase.
NASA Astrophysics Data System (ADS)
Nakamura, Atsutomo; Ukita, Masaya; Shimoda, Naofumi; Furushima, Yuho; Toyoura, Kazuaki; Matsunaga, Katsuyuki
2017-06-01
First-principles calculations were performed to understand the electronic origin of the high ductility of silver chloride (AgCl) with the rock salt structure. From calculations of generalised stacking fault energies for different slip systems, it was found that only the {1 1 0}〈1 1 0〉 slip system is favourably activated in sodium chloride (NaCl) with the same rock salt structure, whereas AgCl shows three kinds of possible slip systems along the 〈1 1 0〉 direction, on the {0 0 1}, {1 1 0}, and {1 1 1} planes, which is in excellent agreement with experiment. Detailed analyses of the electronic structures across slip planes showed that the more covalent character of the Ag-Cl bonding, compared with Na-Cl, tends to make the slip motion energetically favourable. It was also surprising to find that strong Ag-Ag covalent bonds form across the slip plane in the {0 0 1}〈1 1 0〉 slip system in AgCl, which makes it possible to activate the multiple slip systems in AgCl.
Mishima, Hiroyuki; Lidral, Andrew C; Ni, Jun
2008-01-01
Background Genetic association studies have been used to map disease-causing genes. A newly introduced statistical method, called exhaustive haplotype association study, analyzes genetic information consisting of different numbers and combinations of DNA sequence variations along a chromosome. Such studies involve a large number of statistical calculations and subsequently high computing power. It is possible to develop parallel algorithms and codes to perform the calculations on a high performance computing (HPC) system. However, most existing commonly-used statistic packages for genetic studies are non-parallel versions. Alternatively, one may use the cutting-edge technology of grid computing and its packages to conduct non-parallel genetic statistical packages on a centralized HPC system or distributed computing systems. In this paper, we report the utilization of a queuing scheduler built on the Grid Engine and run on a Rocks Linux cluster for our genetic statistical studies. Results Analysis of both consecutive and combinational window haplotypes was conducted by the FBAT (Laird et al., 2000) and Unphased (Dudbridge, 2003) programs. The dataset consisted of 26 loci from 277 extended families (1484 persons). Using the Rocks Linux cluster with 22 compute-nodes, FBAT jobs performed about 14.4–15.9 times faster, while Unphased jobs performed 1.1–18.6 times faster compared to the accumulated computation duration. Conclusion Execution of exhaustive haplotype analysis using non-parallel software packages on a Linux-based system is an effective and efficient approach in terms of cost and performance. PMID:18541045
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹).
These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
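For context, the "simple, sequential and unoptimized O(N²)" program that FDPS takes as its conceptual starting point is direct pairwise summation. The sketch below is a generic Python illustration of that baseline, not FDPS code; names are our own.

```python
def direct_sum_accelerations(positions, masses, eps=1e-3):
    """Naive O(N^2) gravitational accelerations (G = 1): the pairwise
    loop that tree algorithms like Barnes-Hut approximate in O(N log N).
    eps is a softening length that avoids the r -> 0 singularity."""
    n = len(positions)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [positions[j][k] - positions[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += masses[j] * dx[k] * inv_r3
    return acc

# Two equal masses attract each other with equal and opposite accelerations.
acc = direct_sum_accelerations([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [1.0, 1.0])
print(acc[0][0] > 0 and abs(acc[0][0] + acc[1][0]) < 1e-12)  # True
```

A framework such as FDPS supplies the domain decomposition, particle exchange, and tree traversal around exactly this kind of user-supplied pairwise interaction.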
Combination of large and small basis sets in electronic structure calculations on large systems
NASA Astrophysics Data System (ADS)
Røeggen, Inge; Gao, Bin
2018-04-01
Two basis sets—a large and a small one—are associated with each nucleus of the system. Each atom has its own separate one-electron basis comprising the large basis set of the atom in question and the small basis sets of the partner atoms in the complex. The perturbed atoms in molecules and solids model is at the core of the approach, since it allows for the definition of perturbed atoms in a system. It is argued that this basis set approach should be particularly useful for periodic systems. Test calculations are performed on one-dimensional arrays of H and Li atoms. The ground-state energy per atom in the linear H array is determined versus bond length.
Three-dimensional polarization algebra for all polarization sensitive optical systems.
Li, Yahong; Fu, Yuegang; Liu, Zhiying; Zhou, Jianhong; Bryanston-Cross, P J; Li, Yan; He, Wenjun
2018-05-28
Using the three-dimensional (3D) coherency vector (9 × 1), we develop a new 3D polarization algebra to calculate the polarization properties of all polarization-sensitive optical systems, especially when the incident optical field is partially polarized or unpolarized. The polarization properties of a high numerical aperture (NA) microscope objective (NA = 1.25, immersed in oil) are analyzed based on the proposed 3D polarization algebra. Correspondingly, a polarization simulation of this high-NA optical system is performed with the commercial software VirtualLab Fusion. Comparing the theoretical calculations with the polarization simulations yields excellent agreement, which demonstrates that this 3D polarization algebra is valid for quantifying the 3D polarization properties of all polarization-sensitive optical systems.
40 CFR 63.11224 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2014 CFR
2014-07-01
... include a daily calibration drift assessment, a quarterly performance audit, and an annual zero alignment... performance audit, or an annual zero alignment audit. (7) You must calculate and record 6-minute averages from... absolute particulate matter loadings. (5) The bag leak detection system must be equipped with a device to...
40 CFR 63.11224 - What are my monitoring, installation, operation, and maintenance requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
... include a daily calibration drift assessment, a quarterly performance audit, and an annual zero alignment... performance audit, or an annual zero alignment audit. (7) You must calculate and record 6-minute averages from... absolute particulate matter loadings. (5) The bag leak detection system must be equipped with a device to...
Airborne antenna pattern calculations
NASA Technical Reports Server (NTRS)
Knerr, T. J.; Mielke, R. R.
1981-01-01
Progress on the development of modeling software, testing of the software against calculated data from program VPAP and measured patterns, and calculation of roll plane patterns for general aviation aircraft is reported. The major objectives are the continued development of computer software for aircraft modeling and the use of this software together with program OSUVOL to calculate principal plane and volumetric radiation patterns. The determination of the proper placement of antennas on aircraft to meet the requirements of the Microwave Landing System is discussed. An overview of the work performed and an example of a roll plane model for the Piper PA-31T Cheyenne aircraft, with the resulting calculated roll plane radiation pattern, are included.
Govoni, Marco; Galli, Giulia
2015-01-12
We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green’s function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons.
BASIC Data Manipulation And Display System (BDMADS)
NASA Technical Reports Server (NTRS)
Szuch, J. R.
1983-01-01
BDMADS, a BASIC Data Manipulation and Display System, is a collection of software programs that run on an Apple II Plus personal computer. BDMADS provides a user-friendly environment for the engineer in which to perform scientific data processing. The computer programs and their use are described. Jet engine performance calculations are used to illustrate the use of BDMADS. Source listings of the BDMADS programs are provided and should permit users to customize the programs for their particular applications.
Wind farm topology-finding algorithm considering performance, costs, and environmental impacts.
Tazi, Nacef; Chatelet, Eric; Bouzidi, Youcef; Meziane, Rachid
2017-06-05
Optimal power in wind farms has become a pressing problem for investors and decision makers; onshore wind farms are subject to performance, economic, and environmental constraints. The aim of this work is to define the best installed capacity (best topology) with maximum performance and profits while also considering environmental impacts. In this article, we continue recent work on a wind farm topology-finding algorithm. The proposed resolution technique is based on finding the topology of the system that maximizes wind farm performance (availability) under the constraints of costs and capital investments. The global warming potential of the wind farm is calculated and taken into account in the results. A case study was performed using data and constraints similar to those collected from wind farm constructors, managers, and maintainers. Multi-state systems (MSS), the universal generating function (UGF), and wind and load charge functions are applied. An economic study was conducted to assess the wind farm investment: net present value (NPV) and levelized cost of energy (LCOE) were calculated for the best topologies found.
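The two economic indicators used in the study follow standard discounted-cash-flow definitions. A minimal sketch, with illustrative numbers rather than the study's data:

```python
def npv(cash_flows, rate, initial_cost):
    """Net present value: discounted yearly net cash inflows minus the
    initial capital investment. cash_flows[t] is the inflow in year t+1."""
    return sum(cf / (1.0 + rate) ** (t + 1)
               for t, cf in enumerate(cash_flows)) - initial_cost

def lcoe(costs, energies, rate):
    """Levelized cost of energy: discounted lifetime costs divided by
    discounted lifetime energy production (cost per energy unit)."""
    disc_costs = sum(c / (1.0 + rate) ** (t + 1) for t, c in enumerate(costs))
    disc_energy = sum(e / (1.0 + rate) ** (t + 1) for t, e in enumerate(energies))
    return disc_costs / disc_energy

# With a zero discount rate both reduce to simple sums/ratios.
print(npv([50.0, 50.0], 0.0, 80.0))             # 20.0
print(lcoe([100.0, 100.0], [40.0, 40.0], 0.0))  # 2.5
```

A positive NPV and an LCOE below the local electricity price are the usual go/no-go signals for a candidate topology.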
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gevorkyan, A. S., E-mail: g-ashot@sci.am; Sahakyan, V. V.
We study classical 1D Heisenberg spin glasses in the framework of the nearest-neighbor model. Based on the Hamilton equations, we obtain a system of recurrence equations which allows node-by-node calculations of a spin chain to be performed. It is shown that calculation from the first principles of classical mechanics leads to an ℕℙ-hard problem, which, however, in the limit of statistical equilibrium can be calculated by a ℙ algorithm. For the partition function of the ensemble, a new representation is offered in the form of a one-dimensional integral over the spin chain's energy distribution.
Posttest RELAP5 simulations of the Semiscale S-UT series experiments. [PWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leonard, M.T.
The RELAP5/MOD1 computer code was used to perform posttest calculations simulating six experiments run in the Semiscale Mod-2A facility that investigated the effects of upper head injection on small-break transient behavior. The results of these calculations and the corresponding test data are presented in this report. The capability of RELAP5 to calculate the thermal-hydraulic response of the Mod-2A system over a spectrum of break sizes, with and without upper head injection, is evaluated.
2014-12-01
...from the standard HSE06 hybrid functional with α = 0.25 and ω = 0.11 bohr⁻¹ and b) from HSE with α = 0.093 and ω = 0.11 bohr⁻¹... To obtain better agreement for the band gap value in future calculations, a systematic study was conducted over the (α, ω) parameter space of the HSE ...orthogonal). Future HSE calculations will be performed with the updated parameters. Fig. 7 Density of States of PEEK based on the optimized
Development of a new multi-modal Monte-Carlo radiotherapy planning system.
Kumada, H; Nakamura, T; Komeda, M; Matsumura, A
2009-07-01
A new multi-modal Monte-Carlo radiotherapy planning system (development code: JCDS-FX) is under development at the Japan Atomic Energy Agency. This system builds on fundamental technologies of JCDS applied to actual boron neutron capture therapy (BNCT) trials in JRR-4. One of the features of JCDS-FX is that PHITS has been applied to the particle transport calculation. PHITS is a multi-purpose particle Monte-Carlo transport code; hence, its application makes it possible to evaluate the total dose given to a patient by a combined-modality therapy. Moreover, JCDS-FX with PHITS can be used for the study of accelerator-based BNCT. To verify the calculation accuracy of JCDS-FX, dose evaluations for neutron irradiation of a cylindrical water phantom and for an actual clinical trial were performed, and the results were compared with calculations by JCDS with MCNP. The verification results demonstrated that JCDS-FX is applicable to practical BNCT treatment planning.
Finding trap stiffness of optical tweezers using digital filters.
Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G
2018-02-01
Obtaining the trap stiffness and calibrating the position detection system are the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection system calibration are performed separately, often requiring procedures under very different conditions, so the confidence of the calibration is not assured due to possible changes in the environment. In this work, a new method to simultaneously obtain both the detection system calibration and the trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both the trap stiffness and the photodetector calibration factor from the same dataset in situ. It also provides a direct way to avoid unwanted frequencies that could greatly affect the calibration procedure, such as electrical noise.
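PSD-based calibrations like this rest on the Lorentzian power spectrum of a trapped bead, whose corner frequency fixes the stiffness. The sketch below shows only those standard textbook relations, not the digital-filter method of the paper; names and numbers are illustrative.

```python
import math

def stokes_drag(viscosity, radius):
    """Stokes drag coefficient gamma = 6*pi*eta*r for a sphere of
    radius r (m) in a fluid of viscosity eta (Pa*s)."""
    return 6.0 * math.pi * viscosity * radius

def trap_stiffness(corner_freq, gamma):
    """Trap stiffness (N/m) from the corner frequency f_c (Hz) of the
    Lorentzian position power spectrum: k = 2*pi*gamma*f_c."""
    return 2.0 * math.pi * gamma * corner_freq

gamma = stokes_drag(viscosity=1.0e-3, radius=0.5e-6)  # ~water, 1-um bead
k1 = trap_stiffness(500.0, gamma)
k2 = trap_stiffness(1000.0, gamma)
print(k2 / k1)  # 2.0: stiffness scales linearly with corner frequency
```

In an experiment, f_c is extracted by fitting the measured PSD; the paper's contribution is obtaining that spectrum, and the detector calibration, from the same filtered dataset.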
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo
2015-10-14
We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.
Anharmonic Vibrational Spectroscopy on Transition Metal Complexes
NASA Astrophysics Data System (ADS)
Latouche, Camille; Bloino, Julien; Barone, Vincenzo
2014-06-01
Advances in hardware performance and the availability of efficient and reliable computational models have made possible the application of computational spectroscopy to ever larger molecular systems. The systematic interpretation of experimental data and the full characterization of complex molecules can then be facilitated. Focusing on vibrational spectroscopy, several approaches have been proposed to simulate spectra beyond the double harmonic approximation, so that more details become available. However, routine use of such tools requires the preliminary definition of a valid protocol with the most appropriate combination of electronic structure and nuclear calculation models. Several benchmarks of anharmonic frequency calculations have been carried out on organic molecules. Nevertheless, benchmarks on organometallic or inorganic metal complexes at this level are sorely lacking, despite the interest in these systems due to their strong emission and vibrational properties. Herein we report a benchmark study of anharmonic calculations on simple metal complexes, along with some pilot applications to systems of direct technological or biological interest.
Jha, Ashish Kumar
2015-01-01
Glomerular filtration rate (GFR) estimation by the plasma sampling method is considered the gold standard. However, this method is not widely used because of its complex technique and cumbersome calculations, coupled with the lack of user-friendly software. The routinely used serum creatinine method (SrCrM) of GFR estimation also requires online calculators, which cannot be used without internet access. We have developed user-friendly software, "GFR estimation software", which gives the option to estimate GFR by the plasma sampling method as well as by SrCrM. We used Microsoft Windows® as the operating system, Visual Basic 6.0 as the front end, and Microsoft Access® as the database tool to develop this software. Russell's formula is used for GFR calculation by the plasma sampling method. GFR calculations using serum creatinine are done using the MIRD, Cockcroft-Gault, Schwartz, and Counahan-Barratt methods. The developed software performs the mathematical calculations correctly and is user-friendly. It also enables storage and easy retrieval of the raw data, patient information, and calculated GFR for further processing and comparison. This user-friendly software calculates GFR by various plasma sampling methods and blood parameters, and also serves as a good system for storing the raw and processed data for future analysis.
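Of the serum-creatinine formulas listed, Cockcroft-Gault is simple enough to sketch directly. This is a generic implementation of the published formula, not code from the described software; the function name is our own.

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Creatinine clearance (mL/min) by the Cockcroft-Gault formula:
    CrCl = ((140 - age) * weight) / (72 * SCr), times 0.85 for females."""
    crcl = ((140.0 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# 40-year-old, 72 kg male with serum creatinine 1.0 mg/dL:
print(cockcroft_gault(40, 72.0, 1.0))  # 100.0 mL/min
```

Packaging such formulas behind a desktop front end, as the abstract describes, removes the dependence on online calculators.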
Drama in Dynamics: Boom, Splash, and Speed
DOE Office of Scientific and Technical Information (OSTI.GOV)
Netzloff, Heather Marie
2004-12-19
The full nature of chemistry and physics cannot be captured by static calculations alone. Dynamics calculations allow the simulation of time-dependent phenomena. This facilitates both comparisons with experimental data and the prediction and interpretation of details not easily obtainable from experiments. Simulations thus provide a direct link between theory and experiment, between the microscopic details of a system and its macroscopic observed properties. Many types of dynamics calculations exist. The most important distinction between the methods, and the decision of which method to use, can be described in terms of the size and type of molecule/reaction under consideration and the type and level of accuracy required in the final properties of interest. These considerations must be balanced against available computational codes and resources, as simulations that mimic "real life" may require many time steps. As indicated in the title, the theme of this thesis is dynamics. The goal is to utilize the best type of dynamics for the system under study while trying to perform dynamics in the most accurate way possible. As a quantum chemist, this involves some level of first-principles calculations by default. Very accurate calculations of small molecules and molecular systems are now possible with relatively high-level ab initio quantum chemistry. For example, a quantum chemical potential energy surface (PES) can be developed "on the fly" with dynamic reaction path (DRP) methods. In this way a classical trajectory is developed without prior knowledge of the PES. In order to treat solvation processes and the condensed phase, large numbers of molecules are required, especially in predicting bulk behavior.
The Effective Fragment Potential (EFP) method for solvation decreases the cost of a fully quantum mechanical calculation by dividing a chemical system into an ab initio region that contains the solute and an "effective fragment" region that contains the remaining solvent molecules. But despite the reduced cost relative to fully QM calculations, the EFP method, due to its complex, QM-based potential, does require more computation time than simple interaction potentials, especially when the method is used for large-scale molecular dynamics simulations. Thus, the EFP method was parallelized to facilitate these calculations within the quantum chemistry program GAMESS. The EFP method provides relative energies and structures that are in excellent agreement with the analogous fully quantum results for small water clusters. The ability of the method to predict bulk water properties with comparable accuracy is assessed by performing EFP molecular dynamics simulations. Molecular dynamics simulations can provide properties that are directly comparable with experimental results, for example radial distribution functions. The molecular PES is a fundamental starting point for chemical reaction dynamics. Many methods can be used to obtain a PES; for example, assuming a global functional form for the PES or, as mentioned above, performing "on the fly" dynamics with ab initio or semi-empirical calculations at every molecular configuration. But as the size of the system grows, using electronic structure theory to build a PES and, therefore, study reaction dynamics becomes virtually impossible. The program Grow builds a PES as an interpolation of ab initio data; the goal is to produce an accurate PES with the smallest number of ab initio calculations. The Grow-GAMESS interface was developed to obtain the ab initio data from GAMESS. Classical or quantum dynamics can be performed on the resulting surface.
The interface includes the novel capability to build multi-reference PESs; these types of calculations are applicable to problems ranging from atmospheric chemistry to photochemical reaction mechanisms in organic and inorganic chemistry to fundamental biological phenomena such as photosynthesis.
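The classical-trajectory propagation underlying DRP and molecular dynamics is conventionally a velocity-Verlet step. As a hedged sketch, the step below is applied to a 1-D harmonic oscillator, with an analytic force standing in for on-the-fly ab initio gradients; the parameters are illustrative, not from the thesis.

```python
def velocity_verlet(x, v, force, dt, mass, steps):
    """Propagate (x, v) for `steps` velocity-Verlet steps of size dt."""
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / mass) * dt * dt   # position update
        f_new = force(x)
        v += 0.5 * (f + f_new) / mass * dt          # velocity update
        f = f_new
    return x, v

k = 1.0                       # harmonic force constant (unit mass, omega = 1)
force = lambda x: -k * x
# Integrate for ~one oscillator period (2*pi time units):
x, v = velocity_verlet(1.0, 0.0, force, dt=0.01, mass=1.0, steps=628)
print(x, v)                   # near the starting point (1, 0)
```

The symplectic character of the step keeps the total energy 0.5*v**2 + 0.5*k*x**2 close to its initial value of 0.5 over the whole trajectory, which is why this integrator is the workhorse for long simulations.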
2017-01-01
Binding free energy calculations that make use of alchemical pathways are becoming increasingly feasible thanks to advances in hardware and algorithms. Although relative binding free energy (RBFE) calculations are starting to find widespread use, absolute binding free energy (ABFE) calculations are still being explored mainly in academic settings due to the high computational requirements and still uncertain predictive value. However, in some drug design scenarios, RBFE calculations are not applicable and ABFE calculations could provide an alternative. Computationally cheaper end-point calculations in implicit solvent, such as molecular mechanics Poisson-Boltzmann surface area (MMPBSA) calculations, could also be used if one is primarily interested in a relative ranking of affinities. Here, we compare MMPBSA calculations to previously performed absolute alchemical free energy calculations in their ability to correlate with experimental binding free energies for three sets of bromodomain-inhibitor pairs. Different MMPBSA approaches have been considered, including a standard single-trajectory protocol, a protocol that includes a binding entropy estimate, and protocols that take into account the ligand hydration shell. Despite the improvements observed with the latter two MMPBSA approaches, ABFE calculations were found to be overall superior in obtaining correlation with experimental affinities for the test cases considered. A difference in weighted average Pearson (r) and Spearman (ρ) correlations of 0.25 and 0.31 was observed when using a standard single-trajectory MMPBSA setup (r = 0.64 and ρ = 0.66 for ABFE; r = 0.39 and ρ = 0.35 for MMPBSA). The best performing MMPBSA protocols returned weighted average Pearson and Spearman correlations that were about 0.1 inferior to ABFE calculations: r = 0.55 and ρ = 0.56 when including an entropy estimate, and r = 0.53 and ρ = 0.55 when including explicit water molecules.
Overall, the study suggests that ABFE calculations are indeed the more accurate approach, yet MMPBSA calculations retain value given their lower compute requirements, provided agreement with experimental affinities in absolute terms is not required. Moreover, for the specific protein-ligand systems considered in this study, we find that including an explicit ligand hydration shell or a binding entropy estimate in the MMPBSA calculations resulted in significant performance improvements at negligible computational cost. PMID:28786670
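The Pearson and Spearman correlations used above to score predicted against experimental affinities can be computed in a few lines. A minimal sketch in plain Python follows; the example free-energy values are made up for illustration and are not data from the study, and the rank function ignores ties for simplicity.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman(xs, ys):
    """Spearman correlation = Pearson on the ranks (ties ignored)."""
    rank = lambda v: [sorted(v).index(x) for x in v]
    return pearson(rank(xs), rank(ys))

exp  = [-9.1, -8.4, -7.9, -7.2, -6.5]   # hypothetical experimental dG (kcal/mol)
pred = [-8.8, -8.6, -7.1, -7.4, -6.0]   # hypothetical predicted dG
print(pearson(exp, pred), spearman(exp, pred))
```

Because Spearman works on ranks, it rewards a correct ordering of affinities even when absolute values are off, which is exactly the relative-ranking use case the abstract mentions for MMPBSA.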
Buoyancy Suppression in Gases at High Temperatures
NASA Technical Reports Server (NTRS)
Kuczmarski, Maria A.; Gokoglu, Suleyman A.
2005-01-01
The computational fluid dynamics code FLUENT was used to study Rayleigh instability at large temperature differences in a sealed gas-filled enclosure with a cold top surface and a heated bottom wall (the Bénard problem). Both steady-state and transient calculations were performed. The results define the boundaries of instability in a system depending on the geometry, temperature, and pressure. It is shown that regardless of how fast the bottom-wall temperature is ramped up to minimize the time spent in the unstable region of fluid motion, the eventual stability of the system depends on the prevailing final pressure after steady state has been reached. Calculations also show that the final state of the system can differ depending on whether the result is obtained via a steady-state solution or is reached by transient calculations. Changes in the slope of the pressure-versus-time curve are found to be a very good indicator of changes in the flow patterns in the system.
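The onset of convection in a bottom-heated layer is conventionally judged by the Rayleigh number against a critical value (about 1708 for a layer between rigid walls). A hedged sketch of that criterion follows; the gas properties and geometry are illustrative, not the conditions of the FLUENT study.

```python
def rayleigh(g, beta, dT, depth, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha)."""
    return g * beta * dT * depth**3 / (nu * alpha)

RA_CRIT = 1708.0  # critical Rayleigh number, rigid-rigid horizontal layer

# Illustrative gas layer. Note that the kinematic viscosity nu and thermal
# diffusivity alpha both scale as 1/pressure, so Ra scales as pressure
# squared: the pressure dependence of stability the abstract describes.
ra = rayleigh(g=9.81, beta=1 / 300.0, dT=10.0, depth=0.02,
              nu=1.5e-5, alpha=2.1e-5)
print(ra, ra > RA_CRIT)   # unstable: convection expected
```

Halving the layer depth cuts Ra eightfold, which is one way such an enclosure can be kept below the critical value regardless of ramp rate.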
Engineered Barrier System performance requirements systems study report. Revision 02
DOE Office of Scientific and Technical Information (OSTI.GOV)
Balady, M.A.
This study evaluates the current design concept for the Engineered Barrier System (EBS), in concert with the current understanding of the geologic setting, to assess whether enhancements to the required performance of the EBS are necessary. The performance assessment calculations are performed by coupling the EBS with the geologic setting based on the models (some of which were updated for this study) and assumptions used for the 1995 Total System Performance Assessment (TSPA). The need for enhancements is determined by comparing the performance assessment results against the EBS-related performance requirements. Subsystem quantitative performance requirements related to the EBS include the requirement to allow no more than 1% of the waste packages (WPs) to fail before 1,000 years after permanent closure of the repository, as well as a requirement to control the release rate of radionuclides from the EBS. The EBS performance enhancements considered included additional engineered components, as well as additional performance available from existing design features for which no performance credit is currently being taken.
Robustness of controllers designed using Galerkin type approximations
NASA Technical Reports Server (NTRS)
Morris, K. A.
1990-01-01
One of the difficulties in designing controllers for infinite-dimensional systems arises from attempting to calculate a state for the system. It is shown that Galerkin type approximations can be used to design controllers which will perform as designed when implemented on the original infinite-dimensional system. No assumptions, other than those typically employed in numerical analysis, are made on the approximating scheme.
Software For Computer-Aided Design Of Control Systems
NASA Technical Reports Server (NTRS)
Wette, Matthew
1994-01-01
Computer Aided Engineering System (CAESY) software developed to provide means to evaluate methods for dealing with users' needs in computer-aided design of control systems. Interpreter program for performing engineering calculations. Incorporates features of both Ada and MATLAB. Designed to be flexible and powerful. Includes internally defined functions, procedures and provides for definition of functions and procedures by user. Written in C language.
VHF command system study [spectral analysis of GSFC VHF-PSK and VHF-FSK Command Systems]
NASA Technical Reports Server (NTRS)
Gee, T. H.; Geist, J. M.
1973-01-01
Solutions are provided to specific problems arising in the GSFC VHF-PSK and VHF-FSK Command Systems in support of establishment and maintenance of Data Systems Standards. Signal structures which incorporate transmission on the uplink of a clock along with the PSK or FSK data are considered. Strategies are developed for allocating power between the clock and data, and spectral analyses are performed. Bit error probability and other probabilities pertinent to correct transmission of command messages are calculated. Biphase PCM/PM and PCM/FM are considered as candidate modulation techniques on the telemetry downlink, with application to command verification. Comparative performance of PCM/PM and PSK systems is given special attention, including implementation considerations. Gain in bit error performance due to coding is also considered.
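Bit error probabilities of the kind calculated in such link analyses have closed forms for the candidate modulations. As a hedged illustration (not the GSFC link budget itself), the coherent BPSK case is Pb = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0)):

```python
import math

def bpsk_ber(ebn0_db):
    """Bit error probability of coherent BPSK at a given Eb/N0 in dB."""
    ebn0 = 10 ** (ebn0_db / 10.0)          # dB -> linear ratio
    return 0.5 * math.erfc(math.sqrt(ebn0))

for db in (0, 5, 10):
    print(db, bpsk_ber(db))
```

Power split between the clock and data, as discussed in the study, effectively lowers the Eb/N0 available to the data channel, so the same curve quantifies the cost of each allocation strategy.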
Non-symbolic halving in an Amazonian indigene group
McCrink, Koleen; Spelke, Elizabeth S.; Dehaene, Stanislas; Pica, Pierre
2014-01-01
Much research supports the existence of an Approximate Number System (ANS) that is recruited by infants, children, adults, and non-human animals to generate coarse, non-symbolic representations of number. This system supports simple arithmetic operations such as addition, subtraction, and ordering of amounts. The current study tests whether an intuition of a more complex calculation, division, exists in an indigene group in the Amazon, the Mundurucu, whose language includes no words for large numbers. Mundurucu children were presented with a video event depicting a division transformation of halving, in which pairs of objects turned into single objects, reducing the array's numerical magnitude. Then they were tested on their ability to calculate the outcome of this division transformation with other large-number arrays. The Mundurucu children effected this transformation even when non-numerical variables were controlled, performed above chance levels on the very first set of test trials, and exhibited performance similar to urban children who had access to precise number words and a surrounding symbolic culture. We conclude that a halving calculation is part of the suite of intuitive operations supported by the ANS. PMID:23587042
A method of evaluating efficiency during space-suited work in a neutral buoyancy environment
NASA Technical Reports Server (NTRS)
Greenisen, Michael C.; West, Phillip; Newton, Frederick K.; Gilbert, John H.; Squires, William G.
1991-01-01
The purpose was to investigate efficiency as related to the work transmission and the metabolic cost of various extravehicular activity (EVA) tasks during simulated microgravity (whole body water immersion) using three space suits. Two new prototype space station suits, AX-5 and MKIII, are pressurized at 57.2 kPa and were tested concurrently with the operationally used 29.6 kPa shuttle suit. Four male astronauts were asked to perform a fatigue trial on four upper extremity exercises during which metabolic rate and work output were measured and efficiency was calculated in each suit. The activities were selected to simulate actual EVA tasks. The test article was an underwater dynamometry system to which the astronauts were secured by foot restraints. All metabolic data was acquired, calculated, and stored using a computerized indirect calorimetry system connected to the suit ventilation/gas supply control console. During the efficiency testing, steady state metabolic rate could be evaluated as well as work transmitted to the dynamometer. Mechanical efficiency could then be calculated for each astronaut in each suit performing each movement.
Automatic generation of the index of productive syntax for child language transcripts.
Hassanali, Khairun-nisa; Liu, Yang; Iglesias, Aquiles; Solorio, Thamar; Dollaghan, Christine
2014-03-01
The index of productive syntax (IPSyn; Scarborough, Applied Psycholinguistics 11:1-22, 1990) is a measure of syntactic development in child language that has been used in research and clinical settings to investigate the grammatical development of various groups of children. However, IPSyn is mostly calculated manually, which is an extremely laborious process. In this article, we describe the AC-IPSyn system, which automatically calculates the IPSyn score for child language transcripts using natural language processing techniques. Our results show that the AC-IPSyn system performs at levels comparable to scores computed manually. The AC-IPSyn system can be downloaded from www.hlt.utdallas.edu/~nisa/ipsyn.html.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schollmeier, Marius S.; Geissel, Matthias; Shores, Jonathon E.
We present calculations for the field of view (FOV), image fluence, image monochromaticity, spectral acceptance, and image aberrations for spherical crystal microscopes, which are used as self-emission imaging or backlighter systems at large-scale high-energy-density physics facilities. Our analytic results are benchmarked with ray-tracing calculations as well as with experimental measurements from the 6.151 keV backlighter system at Sandia National Laboratories. Furthermore, the analytic expressions can be used for x-ray source positions anywhere between the Rowland circle and the object plane. We discovered that this enables quick optimization of the performance of proposed but untested bent-crystal microscope systems to find the best compromise between FOV, image fluence, and spatial resolution for a particular application.
Validation of Computerized Automatic Calculation of the Sequential Organ Failure Assessment Score
Harrison, Andrew M.; Pickering, Brian W.; Herasevich, Vitaly
2013-01-01
Purpose. To validate the use of a computer program for the automatic calculation of the sequential organ failure assessment (SOFA) score, as compared to the gold standard of manual chart review. Materials and Methods. Adult admissions (age > 18 years) to the medical ICU with a length of stay greater than 24 hours were studied in the setting of an academic tertiary referral center. A retrospective cross-sectional analysis was performed using a derivation cohort to compare automatic calculation of the SOFA score to the gold standard of manual chart review. After critical appraisal of sources of disagreement, another analysis was performed using an independent validation cohort. Then, a prospective observational analysis was performed using an implementation of this computer program in AWARE Dashboard, which is an existing real-time patient EMR system for use in the ICU. Results. Good agreement between the manual and automatic SOFA calculations was observed for both the derivation (N=94) and validation (N=268) cohorts: 0.02 ± 2.33 and 0.29 ± 1.75 points, respectively. These results were validated in AWARE (N=60). Conclusion. This EMR-based automatic tool accurately calculates SOFA scores and can facilitate ICU decisions without the need for manual data collection. This tool can also be employed in a real-time electronic environment. PMID:23936639
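The SOFA score sums six organ sub-scores, each graded 0 to 4 by fixed thresholds, which is precisely the kind of rule an EMR-based calculator automates. A hedged sketch of two of the sub-scores follows (renal by serum creatinine in mg/dL, coagulation by platelets in 10^3/uL); this is an illustration of the scoring rules, not the AWARE implementation.

```python
def sofa_renal(creatinine_mg_dl):
    """Renal SOFA sub-score from serum creatinine (mg/dL)."""
    for score, cutoff in ((4, 5.0), (3, 3.5), (2, 2.0), (1, 1.2)):
        if creatinine_mg_dl >= cutoff:
            return score
    return 0

def sofa_coagulation(platelets_k_per_ul):
    """Coagulation SOFA sub-score from platelet count (10^3/uL)."""
    for score, cutoff in ((4, 20), (3, 50), (2, 100), (1, 150)):
        if platelets_k_per_ul < cutoff:
            return score
    return 0

print(sofa_renal(2.5), sofa_coagulation(45))   # -> 2 3
```

An automatic calculator queries the worst qualifying value in the scoring window for each organ system and sums the six sub-scores; the manual chart review it replaces applies the same thresholds by hand.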
Technical review of SRT-CMA-930058 revalidation studies of Mark 16 experiments: J70
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, R.L.
1993-10-25
This study is a reperformance of a set of MGBS-TGAL criticality safety code validation calculations previously reported by Clark. The reperformance was needed because the records of the previous calculations could not be located in current APG files and records. As noted by the author, preliminary attempts to reproduce the Clark results by direct modeling in MGBS and TGAL were unsuccessful. Consultation with Clark indicated that the MGBS-TGAL (EXPT) option within the KOKO system should be used to set up the MGBS and TGAL input data records. The results of the study indicate that the technique used by Clark has been established and that the technique is now documented for future use. File records of the calculations have also been established in APG files. The review was performed per QAP 11--14 of 1Q34. Since the reviewer was involved in developing the procedural technique used for this study, this review cannot be considered a fully independent review; it should instead be considered a verification that the document contains adequate information to allow a new user to perform similar calculations, a verification of the procedure (several calculations were performed independently, with results identical to those reported), and a verification of the readability of the report.
NASA Astrophysics Data System (ADS)
Maskaeva, L. N.; Fedorova, E. A.; Yusupov, R. A.; Markov, V. F.
2018-05-01
The potentiometric titration of tin chloride SnCl2 is performed in the concentration range of 0.00009-1.1 mol/L with a solution of sodium hydroxide NaOH. According to potentiometric titration data based on modeling equilibria in the SnCl2-H2O-NaOH system, basic equations are generated for the main processes, and instability constants are calculated for the resulting hydroxo complexes and equilibrium constants of low-soluble tin(II) compounds. The data will be of interest for specialists in the field of theory of solutions.
Leak detection by mass balance effective for Norman Wells line
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liou, J.C.P.
Mass-balance calculations for leak detection have been shown to be as effective as a leading software system, in a comparison based on a major Canadian crude-oil pipeline. The calculations and NovaCorp's Leakstop software each detected leaks of approximately 4% or greater on Interprovincial Pipe Line (IPL) Inc.'s Norman Wells pipeline. Insufficient data exist to assess the performance of the two methods for leaks smaller than 4%. Pipeline leak detection using such software-based systems is common. Their effectiveness is measured by how small a leak can be detected and how quickly. The algorithms used and the measurement uncertainties determine leak detectability.
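The mass-balance principle itself is simple: flag a leak when metered inflow minus outflow, corrected for the change in line inventory (linepack), exceeds a threshold set by measurement uncertainty. A minimal sketch, with made-up numbers rather than Norman Wells data:

```python
def leak_rate(q_in, q_out, dm_linepack, dt):
    """Apparent loss rate: inflow - outflow - d(inventory)/dt."""
    return q_in - q_out - dm_linepack / dt

def leak_detected(q_in, q_out, dm_linepack, dt, threshold_frac=0.04):
    """Flag a leak when the apparent loss exceeds a fraction of inflow.

    The 4% default mirrors the smallest leak size both methods in the
    comparison could detect; a real system tunes this to instrument error.
    """
    return leak_rate(q_in, q_out, dm_linepack, dt) > threshold_frac * q_in

# 100 kg/s in, 94 kg/s out, linepack grew by 60 kg over 60 s (1 kg/s):
print(leak_rate(100.0, 94.0, 60.0, 60.0))       # -> 5.0 kg/s apparent loss
print(leak_detected(100.0, 94.0, 60.0, 60.0))   # -> True
```

Lowering the threshold catches smaller leaks but raises false alarms from meter noise and linepack estimation error, which is the trade-off behind the 4% detectability floor reported above.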
NASA Astrophysics Data System (ADS)
Sahariya, Jagrati; Soni, Amit; Kumar, Pancham
2018-04-01
In this paper, first-principles calculations are performed to analyze the structural, electronic, and optical behavior of the promising solar materials (Cd,Zn)Ga2Te4. The calculations use the highly accurate full-potential linearized augmented plane wave (FP-LAPW) method. The ground-state properties of these compounds are confirmed after proper examination of energy and charge convergence using the Perdew-Burke-Ernzerhof (PBEsol) exchange-correlation potential. The investigations performed, including the energy band structure, density of states (DOS), and optical parameters such as the complex dielectric function and absorption coefficient, are discussed to explain the overall response of the chosen system.
Apparatus, systems, and methods for ultrasound synthetic aperture focusing
Schuster, George J.; Crawford, Susan L.; Doctor, Steven R.; Harris, Robert V.
2005-04-12
One form of the present invention is a technique for interrogating a sample with ultrasound which includes: generating ultrasonic energy data corresponding to a volume of a sample and performing a synthetic aperture focusing technique on the ultrasonic energy data. The synthetic aperture focusing technique includes: defining a number of hyperbolic surfaces which extend through the volume at different depths and a corresponding number of multiple element accumulation vectors, performing a focused element calculation procedure for a group of vectors which are representative of the interior of a designated aperture, performing another focused element calculation procedure for vectors corresponding to the boundary of the aperture, and providing an image corresponding to features of the sample in accordance with the synthetic aperture focusing technique.
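The core of synthetic-aperture focusing is delay-and-sum: each image pixel accumulates every element's A-scan at the round-trip delay given by the hyperbolic surface t = 2*sqrt(z^2 + (x - xe)^2)/c. A hedged sketch follows; the geometry and one-sample echo waveforms are synthetic illustrations, not the patented apparatus.

```python
import math

C = 1.5        # wave speed (mm/us)
FS = 100.0     # sample rate (samples/us)
ELEMS = [float(x) for x in range(-10, 11)]   # element x-positions (mm)
SCAT = (2.0, 20.0)                           # true point scatterer (x, z), mm

def delay(xe, x, z):
    """Round-trip time (us) from element at xe to point (x, z) and back."""
    return 2.0 * math.hypot(z, x - xe) / C

# Synthesize A-scans: a single unit echo per element at the scatterer delay.
ascans = []
for xe in ELEMS:
    trace = [0.0] * 4000
    trace[int(round(delay(xe, *SCAT) * FS))] = 1.0
    ascans.append(trace)

def saft_pixel(x, z):
    """Delay-and-sum accumulation for one image pixel."""
    total = 0.0
    for xe, trace in zip(ELEMS, ascans):
        idx = int(round(delay(xe, x, z) * FS))
        if idx < len(trace):
            total += trace[idx]   # sample lying on this pixel's hyperbola
    return total

print(saft_pixel(2.0, 20.0), saft_pixel(-5.0, 12.0))  # focus vs. off-focus
```

At the true scatterer position the delays line up across the whole aperture and the contributions add coherently; elsewhere they scatter across the traces, which is what produces the focusing gain of the technique.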
NASA Astrophysics Data System (ADS)
Kang, Jae-sik; Oh, Eun-Joo; Bae, Min-Jung; Song, Doo-Sam
2017-12-01
Given that the Korean government is implementing the energy standards and labelling program for windows, window companies will be required to assign window ratings based on experimental results for their products. Because this has added to the cost and time required for laboratory tests, a simulation system for the thermal performance of windows has been prepared to reduce these burdens. In Korea, the thermal performance of a window is usually calculated with WINDOW/THERM, complying with ISO 15099. For a single window, the simulation results are similar to experimental results. A double window is also calculated using the same method, but the calculation results for this type of window are unreliable: ISO 15099 does not prescribe a method for calculating the thermal properties of the air cavity between the window sashes of a double window. This causes a difference between simulation and experimental results for the thermal performance of a double window. In this paper, the thermal properties of air cavities between window sashes in a double window are analyzed through computational fluid dynamics (CFD) simulations, with the results compared to calculation results certified by ISO 15099. The surface temperature of the air cavity analyzed by CFD is compared to the experimental temperatures. These results show that an appropriate calculation method for an air cavity between window sashes in a double window should be established to obtain reliable thermal performance results for a double window.
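In the one-dimensional view, a window's thermal transmittance is the reciprocal of the series sum of surface, layer, and cavity resistances, U = 1/(Rsi + sum(R) + Rse); the cavity term is exactly what the abstract argues needs a better model. A hedged sketch, with all resistance values illustrative assumptions rather than ISO 15099 or measured data:

```python
RSI, RSE = 0.13, 0.04   # assumed interior/exterior surface resistances (m2K/W)

def u_value(layer_resistances):
    """U-value (W/m2K) of a series stack of thermal resistances."""
    return 1.0 / (RSI + sum(layer_resistances) + RSE)

glass = 0.004 / 1.0           # 4 mm pane, conductivity ~1 W/mK
glazing_gap = 0.17            # assumed sealed-gap resistance (m2K/W)
cavity_between_sashes = 0.18  # placeholder for the disputed cavity model

single = u_value([glass, glazing_gap, glass])
double = u_value([glass, glazing_gap, glass, cavity_between_sashes,
                  glass, glazing_gap, glass])
print(single, double)   # the double window has the lower U-value
```

Because the double window's U-value depends directly on the between-sash cavity resistance, an error in that one term (the quantity the paper probes with CFD) propagates straight into the rating.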
A note on AB INITIO semiconductor band structures
NASA Astrophysics Data System (ADS)
Fiorentini, Vincenzo
1992-09-01
We point out that only the internal features of the DFT ab initio theoretical picture of a crystal should be used in a consistent ab initio calculation of the band structure. As a consequence, we show that ground-state band structure calculations should be performed for the system in equilibrium at zero pressure, i.e. at the computed equilibrium cell volume Ω_th. Examples of the consequences of this approach are considered.
Star Clusters Simulations Using GRAPE-5
NASA Astrophysics Data System (ADS)
Fukushige, Toshiyuki
We discuss simulations of star clusters, such as globular clusters, galaxies, and galaxy clusters, using GRAPE (GRAvity PipE)-5. GRAPE-5 is a new version of the special-purpose computer for many-body simulations, GRAPE. GRAPE-5 has eight custom pipeline LSIs (G5 chips) per board, and its peak performance is 38.4 Gflops. GRAPE-5 differs from its predecessor, GRAPE-3, in four respects: (a) the calculation speed per chip is 8 times faster; (b) the PCI bus is adopted as the interface between the host computer and GRAPE-5, and therefore the communication speed is an order of magnitude faster; (c) in addition to the pure 1/r potential, GRAPE-5 can calculate forces with an arbitrary cutoff function, so that it can be applied to the Ewald or P3M methods; and (d) the pairwise force calculated on GRAPE-5 is about 10 times more accurate. Using the GRAPE-5 system with the Barnes-Hut tree algorithm, we can complete the force calculations for one timestep in 10(N/10^6) seconds. This speed enables us to perform a pre-collapse globular cluster simulation with a realistic number of particles, and a galaxy simulation with more than 1 million particles, within several days. We also present some results of star cluster simulations using the GRAPE-5 system.
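What the GRAPE pipelines evaluate in hardware is the O(N^2) pairwise gravitational force sum. A hedged sketch in plain Python (G = 1 units, with a softening length eps as used in collisionless simulations; values are illustrative):

```python
def accelerations(pos, mass, eps=1e-3):
    """Direct-sum gravitational accelerations for N softened point masses."""
    n = len(pos)
    acc = [[0.0, 0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dx = [pos[j][k] - pos[i][k] for k in range(3)]
            r2 = sum(d * d for d in dx) + eps * eps   # softened distance^2
            inv_r3 = r2 ** -1.5
            for k in range(3):
                acc[i][k] += mass[j] * dx[k] * inv_r3
    return acc

# Two unit masses 1 unit apart: each is pulled toward the other, |a| ~ 1.
a = accelerations([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]], [1.0, 1.0])
print(a[0][0], a[1][0])
```

A tree code such as Barnes-Hut cuts this N^2 sum to N log N by replacing distant groups with multipoles; GRAPE-5 accelerates the remaining pairwise interactions, which is how the 10(N/10^6)-second timestep figure quoted above is reached.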
Calculation and synthesis of ZrC by CVD from ZrCl4-C3H6-H2-Ar system with high H2 percentage
NASA Astrophysics Data System (ADS)
Zhu, Yan; Cheng, Laifei; Ma, Baisheng; Gao, Shuang; Feng, Wei; Liu, Yongsheng; Zhang, Litong
2015-03-01
A thermodynamic calculation of the synthesis of ZrC from the ZrCl4-C3H6-H2-Ar system with a high percentage of H2 was performed using the FactSage thermochemical software. Guided by the calculation, a ZrC coating was synthesized on graphite substrates and carbon fibers by a low-pressure chemical vapor deposition (LPCVD) process, and the growth rate of the ZrC coating as a function of temperature was investigated. The surface diagrams of condensed phases in this system were expressed as functions of the deposition temperature, total pressure, and reactant ratios ZrCl4/(ZrCl4 + C3H6) and H2/(ZrCl4 + C3H6), and the yield of the products was determined from the diagrams. A smooth and dense ZrC coating could be synthesized under the guidance of the calculated parameters. The morphologies of the ZrC coatings were significantly affected by temperature and gas flux. The deposition temperature is much lower than that from the ZrCl4-CH4-H2-Ar system.
Modulation transfer function of a fish-eye lens based on the sixth-order wave aberration theory.
Jia, Han; Lu, Lijun; Cao, Yiqing
2018-01-10
A calculation program of the modulation transfer function (MTF) of a fish-eye lens is developed with the autocorrelation method, in which the sixth-order wave aberration theory of ultra-wide-angle optical systems is used to simulate the wave aberration distribution at the exit pupil of the optical systems. The autocorrelation integral is processed with the Gauss-Legendre integral, and the magnification chromatic aberration is discussed to calculate polychromatic MTF. The MTF calculation results of a given example are then compared with those previously obtained based on the fourth-order wave aberration theory of plane-symmetrical optical systems and with those from the Zemax program. The study shows that MTF based on the sixth-order wave aberration theory has satisfactory calculation accuracy even for a fish-eye lens with a large acceptance aperture. And the impacts of different types of aberrations on the MTF of a fish-eye lens are analyzed. Finally, we apply the self-adaptive and normalized real-coded genetic algorithm and the MTF developed in the paper to optimize the Nikon F/2.8 fish-eye lens; consequently, the optimized system shows better MTF performances than those of the original design.
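The autocorrelation method rests on the fact that, for incoherent imaging, the MTF is the normalized autocorrelation of the (generally aberrated) pupil function. As a hedged sanity check of that principle, the sketch below autocorrelates an unaberrated 1-D slit pupil numerically, for which the analytic answer is the triangle 1 - |nu|; the fish-eye calculation instead builds the pupil phase from the sixth-order wave aberration expansion and integrates with Gauss-Legendre quadrature.

```python
def slit_mtf(nu, samples=10001):
    """MTF of a 1-D slit pupil at normalized frequency nu (0 <= nu <= 1).

    Computed as the overlap of two unit slits sheared by nu, i.e. the
    pupil autocorrelation, normalized to 1 at nu = 0.
    """
    overlap = 0
    for i in range(samples):
        x = -1.0 + 2.0 * i / (samples - 1)           # sample the pupil plane
        if abs(x) <= 0.5 and abs(x - nu) <= 0.5:     # inside both pupils
            overlap += 1
    norm = sum(1 for i in range(samples)
               if abs(-1.0 + 2.0 * i / (samples - 1)) <= 0.5)
    return overlap / norm

for nu in (0.0, 0.25, 0.5, 0.75):
    print(nu, slit_mtf(nu))   # ~ 1 - nu
```

Introducing aberrations replaces the binary pupil with a complex function exp(i*k*W(x)), whose autocorrelation magnitude falls below this diffraction-limited envelope; that drop is what the program quantifies for the fish-eye lens.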
An Experimental and Theoretical Study of Nitrogen-Broadened Acetylene Lines
NASA Technical Reports Server (NTRS)
Thibault, Franck; Martinez, Raul Z.; Bermejo, Dionisio; Ivanov, Sergey V.; Buzykin, Oleg G.; Ma, Qiancheng
2014-01-01
We present experimental nitrogen-broadening coefficients derived from Voigt profiles of isotropic Raman Q-lines measured in the ν2 band of acetylene (C2H2) at 150 K and 298 K, and compare them to theoretical values obtained through calculations that were carried out specifically for this work. Namely, full classical calculations based on Gordon's approach, two kinds of semi-classical calculations based on the Robert-Bonamy method, and full quantum dynamical calculations were performed. All the computations employed exactly the same ab initio potential energy surface for the C2H2-N2 system, which is, to our knowledge, the most realistic, accurate, and up-to-date one. The resulting calculated collisional half-widths are in good agreement with the experimental ones only for the full classical and quantum dynamical methods. In addition, we have performed similar calculations for IR absorption lines and compared the results to bibliographic values. Results obtained with the full classical method are again in good agreement with the available room-temperature experimental data. The quantum dynamical close-coupling calculations are too time-consuming to provide a complete set of values and therefore have been performed only for the R(0) line of C2H2. The broadening coefficient obtained for this line at 173 K and 297 K also compares quite well with the available experimental data. The traditional Robert-Bonamy semi-classical formalism, however, strongly overestimates the half-widths for both Q- and R-lines. The refined semi-classical Robert-Bonamy method, first proposed for the calculation of pressure-broadening coefficients of isotropic Raman lines, is also used for IR lines. By using this improved model, which takes into account effects from line coupling, the calculated semi-classical widths are significantly reduced and closer to the measured ones.
CSI Flight Computer System and experimental test results
NASA Technical Reports Server (NTRS)
Sparks, Dean W., Jr.; Peri, F., Jr.; Schuler, P.
1993-01-01
This paper describes the CSI Computer System (CCS) and the experimental tests performed to validate its functionality. This system is comprised of two major components: the space flight qualified Excitation and Damping Subsystem (EDS) which performs controls calculations; and the Remote Interface Unit (RIU) which is used for data acquisition, transmission, and filtering. The flight-like RIU is the interface between the EDS and the sensors and actuators positioned on the particular structure under control. The EDS and RIU communicate over the MIL-STD-1553B, a space flight qualified bus. To test the CCS under realistic conditions, it was connected to the Phase-0 CSI Evolutionary Model (CEM) at NASA Langley Research Center. The following schematic shows how the CCS is connected to the CEM. Various tests were performed which validated the ability of the system to perform control/structures experiments.
Boguslav, Mayla; Cohen, Kevin Bretonnel
2017-01-01
Human-annotated data is a fundamental part of natural language processing system development and evaluation. The quality of that data is typically assessed by calculating the agreement between the annotators. It is widely assumed that this agreement between annotators is the upper limit on system performance in natural language processing: if humans can't agree with each other about the classification more than some percentage of the time, we don't expect a computer to do any better. We trace the logical positivist roots of the motivation for measuring inter-annotator agreement, demonstrate the prevalence of the widely-held assumption about the relationship between inter-annotator agreement and system performance, and present data that suggest that inter-annotator agreement is not, in fact, an upper bound on language processing system performance.
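The agreement figures at issue are typically chance-corrected, most commonly Cohen's kappa, (po - pe)/(1 - pe), where po is the observed agreement and pe the agreement expected by chance from each annotator's label distribution. A minimal sketch for two annotators over binary labels (the label sequences are made up for illustration):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n              # observed agreement
    labels = set(a) | set(b)
    pe = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance
    return (po - pe) / (1 - pe)

ann1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
ann2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(cohens_kappa(ann1, ann2))
```

The paper's point is about the interpretation, not the arithmetic: a kappa computed this way characterizes the annotation task and the annotators, and treating it as a ceiling on what a trained system can score is the assumption the authors challenge.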
[Financing, organization, costs and services performance of the Argentinean health sub-systems].
Yavich, Natalia; Báscolo, Ernesto Pablo; Haggerty, Jeannie
2016-01-01
To analyze the relationship between health system financing and services organization models and the costs and health services performance of each of Rosario's health sub-systems. The financing and organization models were characterized using secondary data. Costs were calculated using the WHO/SHA methodology. Healthcare quality was measured by a household survey (n=822). Public subsystem: Vertically integrated funding and primary healthcare as a leading strategy to provide services produced low costs and individual-oriented healthcare, but with weak accessibility conditions and comprehensiveness. Private subsystem: Contractual integration and weak regulatory and coordination mechanisms produced effects opposed to those of the public sub-system. Social security: Contractual integration and strong regulatory and coordination mechanisms contributed to intermediate costs and overall high performance. Each subsystem's financing and services organization model had a strong and heterogeneous influence on costs and health services performance.
A perspective on future directions in aerospace propulsion system simulation
NASA Technical Reports Server (NTRS)
Miller, Brent A.; Szuch, John R.; Gaugler, Raymond E.; Wood, Jerry R.
1989-01-01
The design and development of aircraft engines is a lengthy and costly process using today's methodology. This is due, in large measure, to the fact that present methods rely heavily on experimental testing to verify the operability, performance, and structural integrity of components and systems. The potential exists for achieving significant speedups in the propulsion development process through increased use of computational techniques for simulation, analysis, and optimization. This paper outlines the concept and technology requirements for a Numerical Propulsion Simulation System (NPSS) that would provide capabilities to do interactive, multidisciplinary simulations of complete propulsion systems. By combining high performance computing hardware and software with state-of-the-art propulsion system models, the NPSS will permit the rapid calculation, assessment, and optimization of subcomponent, component, and system performance, durability, reliability, and weight before committing to building hardware.
Development of a Solid-Oxide Fuel Cell/Gas Turbine Hybrid System Model for Aerospace Applications
NASA Technical Reports Server (NTRS)
Freeh, Joshua E.; Pratt, Joseph W.; Brouwer, Jacob
2004-01-01
Recent interest in fuel cell-gas turbine hybrid applications for the aerospace industry has led to the need for accurate computer simulation models to aid in system design and performance evaluation. To meet this requirement, solid oxide fuel cell (SOFC) and fuel processor models have been developed and incorporated into the Numerical Propulsion Systems Simulation (NPSS) software package. The SOFC and reformer models solve systems of equations governing steady-state performance using common theoretical and semi-empirical terms. An example hybrid configuration is presented that demonstrates the new capability as well as the interaction with pre-existing gas turbine and heat exchanger models. Finally, a comparison of calculated SOFC performance with experimental data is presented to demonstrate model validity. Keywords: Solid Oxide Fuel Cell, Reformer, System Model, Aerospace, Hybrid System, NPSS
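The steady-state relations such SOFC models solve can be illustrated with a minimal sketch of the Nernst cell voltage with a single ohmic loss term. The constants are standard, but the fixed standard potential, the operating conditions, and the area-specific resistance below are illustrative assumptions, not values from the NPSS model.

```python
import math

F = 96485.0   # Faraday constant, C/mol
R = 8.314     # gas constant, J/(mol*K)

def nernst_voltage(T, p_h2, p_o2, p_h2o, e0=1.0):
    """Reversible cell voltage for H2 + 1/2 O2 -> H2O (pressures in atm).

    e0 is the standard potential; a fixed value is assumed here
    purely for illustration."""
    return e0 + (R * T / (2.0 * F)) * math.log(p_h2 * math.sqrt(p_o2) / p_h2o)

def cell_voltage(T, p_h2, p_o2, p_h2o, j, r_ohmic):
    """Operating voltage with a simple ohmic loss term
    (j in A/cm^2, r_ohmic in ohm*cm^2)."""
    return nernst_voltage(T, p_h2, p_o2, p_h2o) - j * r_ohmic

# hypothetical operating point: 800 C, humidified H2 fuel, air oxidant
v = cell_voltage(1073.0, 0.97, 0.21, 0.03, j=0.5, r_ohmic=0.15)
```

A full model would add activation and concentration polarizations and couple the stack to the reformer and gas-turbine components, as the NPSS configuration described above does.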
Real-time implementation of an interactive jazz accompaniment system
NASA Astrophysics Data System (ADS)
Deshpande, Nikhil
Modern computational algorithms and digital signal processing (DSP) are able to combine with human performers without forced or predetermined structure in order to create dynamic and real-time accompaniment systems. With modern computing power and intelligent algorithm layout and design, it is possible to achieve more detailed auditory analysis of live music. Using this information, computer code can follow and predict how a human's musical performance evolves, and use this to react in a musical manner. This project builds a real-time accompaniment system to perform together with live musicians, with a focus on live jazz performance and improvisation. The system utilizes a new polyphonic pitch detector and embeds it in an Ableton Live system - combined with Max for Live - to perform elements of audio analysis, generation, and triggering. The system also relies on tension curves and information rate calculations from the Creative Artificially Intuitive and Reasoning Agent (CAIRA) system to help understand and predict human improvisation. These metrics are vital to the core system and allow for extrapolated audio analysis. The system is able to react dynamically to a human performer, and can successfully accompany the human as an entire rhythm section.
Performance Analysis of the ITER Plasma Position Reflectometry (PPR) Ex-vessel Transmission Lines
NASA Astrophysics Data System (ADS)
Martínez-Fernández, J.; Simonetto, A.; Cappa, Á.; Rincón, M. E.; Cabrera, S.; Ramos, F. J.
2018-03-01
As the design of the ITER Plasma Position Reflectometry (PPR) diagnostic progresses, some segments of the transmission line have become fully specified and estimations of their performance can already be obtained. This work presents the calculations carried out for the longest section of the PPR, which is in final state of design and will be the main contributor to the total system performance. Considering the 88.9 mm circular corrugated waveguide (CCWG) that was previously chosen, signal degradation calculations have been performed. Different degradation sources have been studied: ohmic attenuation losses for CCWG; mode conversion losses for gaps, mitre bends, waveguide sag and different types of misalignments; reflection and absorption losses due to microwave windows and coupling losses to free space Gaussian beam. Contributions from all these sources have been integrated to give a global estimation of performance in the transmission lines segments under study.
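How individual degradation sources combine into a global estimate of transmission-line performance can be sketched by summing a loss budget in decibels. The entries mirror the source list in the abstract, but the numerical values below are hypothetical placeholders, not the paper's actual figures.

```python
# Hypothetical loss budget (dB) for a corrugated-waveguide run;
# values are illustrative only.
losses_db = {
    "ohmic attenuation (CCWG)": 0.4,
    "mitre bends": 0.6,
    "gaps": 0.1,
    "waveguide sag": 0.2,
    "misalignments": 0.3,
    "microwave windows": 0.5,
    "free-space Gaussian beam coupling": 0.7,
}

total_db = sum(losses_db.values())
transmitted_fraction = 10 ** (-total_db / 10.0)
print(f"total loss: {total_db:.1f} dB -> {transmitted_fraction:.1%} transmitted")
```

Because each mechanism is expressed in dB, the contributions add directly, which is what allows the paper to integrate them into one global performance figure.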
UV Lidar Receiver Analysis for Tropospheric Sensing of Ozone
NASA Technical Reports Server (NTRS)
Pliutau, Denis; DeYoung, Russell J.
2013-01-01
A simulation of a ground-based Ultra-Violet Differential Absorption Lidar (UV-DIAL) receiver system was performed under realistic daytime conditions to understand how range and lidar performance can be improved for a given UV pulse laser energy. Calculations were also performed for an aerosol channel transmitting at 3 W. The lidar receiver simulation studies were optimized for the purpose of tropospheric ozone measurements. The transmitted lidar UV measurements were from 285 to 295 nm, and the aerosol channel was 527 nm. The calculations are based on atmospheric transmission given by the HITRAN database and the Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorological data. The aerosol attenuation is estimated using both the BACKSCAT 4.0 code and data collected during the CALIPSO mission. The lidar performance is estimated both for diffuse-irradiance-free cases corresponding to nighttime operation and for the daytime diffuse scattered radiation component based on previously reported experimental data. This analysis presents calculations of the UV-DIAL receiver ozone and aerosol measurement range as a function of sky irradiance, filter bandwidth, and laser transmitted UV and 527-nm energy.
Dosimetry audit of radiotherapy treatment planning systems.
Bulski, Wojciech; Chełmiński, Krzysztof; Rostkowska, Joanna
2015-07-01
In radiotherapy Treatment Planning Systems (TPS), various calculation algorithms are used, and the accuracy of their dose calculations has to be verified. Numerous phantom types, detectors and measurement methodologies have been proposed to verify TPS calculations against dosimetric measurements. A heterogeneous slab phantom was designed within a Coordinated Research Project (CRP) of the IAEA. The phantom consists of frame slabs made of polystyrene and exchangeable inhomogeneity slabs equivalent to bone or lung tissue. Special inserts allow thermoluminescent dosimeter (TLD) capsules to be positioned within the polystyrene slabs below the bone or lung equivalent slabs, and also within the lung equivalent material. Additionally, there are inserts for positioning films or an ionisation chamber in the phantom. Ten Polish radiotherapy centres (of 30 in total) were audited during on-site visits. Six different TPSs and five calculation algorithms were examined in the presence of inhomogeneities. Generally, most of the TLD results were within the 5% tolerance. Differences between doses calculated by TPSs and measured with TLD did not exceed 4% for bone and polystyrene equivalent materials. Under the lung equivalent material, on the beam axis, the differences were lower than 5%, whereas inside the lung equivalent material, off the beam axis, in some cases they were around 7%. The TLD results were confirmed by the ionisation chamber measurements. Comparison of the calculations and the measurements makes it possible to detect limitations of TPS calculation algorithms. Audits performed with the heterogeneous phantom and TLD appear to be an effective tool for detecting limitations in TPS performance or beam configuration errors at audited radiotherapy departments. © The Author 2015. Published by Oxford University Press. All rights reserved.
SNPSAM - Space Nuclear Power System Analysis Model
NASA Astrophysics Data System (ADS)
El-Genk, Mohamed S.; Seo, Jong T.
The current version of SNPSAM is described, and the results of the integrated thermoelectric SP-100 system performance studies using SNPSAM are reported. The electric power output, conversion efficiency, coolant temperatures, and specific pumping power of the system are calculated as functions of the reactor thermal power and the liquid metal coolant type (Li or NaK-78) during steady state operation. The transient behavior of the system is also discussed.
NASA Astrophysics Data System (ADS)
Michalski, Rafał; Zygadło, Jakub
2018-04-01
Recent calculations of the properties of TbAl2, GdAl2, and SmAl2 single crystals, performed with our new computation system ATOMIC MATTERS MFA, are presented. We applied a localized-electron approach to describe the thermal evolution of the fine electronic structure of Tb3+, Gd3+, and Sm3+ ions over a wide temperature range and to estimate the magnetocaloric effect (MCE). Thermomagnetic properties of TbAl2, GdAl2, and SmAl2 were calculated based on the fine electronic structure of the 4f8, 4f7, and 4f5 electronic configurations of the Tb3+, Gd3+, and Sm3+ ions, respectively. Our calculations yielded: the magnetic moment value and direction; single-crystalline magnetization curves in zero field and in an external magnetic field applied in various directions, m(T,Bext); the 4f-electronic components of specific heat, c4f(T,Bext); and the temperature dependence of the magnetic entropy and the isothermal entropy change with external magnetic field, -ΔS(T,Bext). The cubic CEF parameter values used for all CEF calculations were taken from the literature and recalculated as a universal cubic parameter set for the RAl2 series: A4 = +7.164 K a0^-4 and A6 = -1.038 K a0^-6. Magnetic properties were found to be anisotropic owing to the symmetry of the cubic Laves phase C15 crystal structure. These studies reveal the importance of multipolar charge interactions in describing the thermomagnetic properties of real 4f electronic systems, and the effectiveness of an applied self-consistent molecular field in calculations for magnetic phase transition simulation.
Determining the nuclear data uncertainty on MONK10 and WIMS10 criticality calculations
NASA Astrophysics Data System (ADS)
Ware, Tim; Dobson, Geoff; Hanlon, David; Hiles, Richard; Mason, Robert; Perry, Ray
2017-09-01
The ANSWERS Software Service is developing a number of techniques to better understand and quantify uncertainty on calculations of the neutron multiplication factor, k-effective, in nuclear fuel and other systems containing fissile material. The uncertainty on the calculated k-effective arises from a number of sources, including nuclear data uncertainties, manufacturing tolerances, modelling approximations and, for Monte Carlo simulation, stochastic uncertainty. For determining the uncertainties due to nuclear data, a set of application libraries have been generated for use with the MONK10 Monte Carlo and the WIMS10 deterministic criticality and reactor physics codes. This paper overviews the generation of these nuclear data libraries by Latin hypercube sampling of JEFF-3.1.2 evaluated data based upon a library of covariance data taken from JEFF, ENDF/B, JENDL and TENDL evaluations. Criticality calculations have been performed with MONK10 and WIMS10 using these sampled libraries for a number of benchmark models of fissile systems. Results are presented which show the uncertainty on k-effective for these systems arising from the uncertainty on the input nuclear data.
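Latin hypercube sampling of the kind used to build the sampled libraries can be sketched as follows. The stratified-shuffle construction is generic; mapping the unit-cube samples onto actual covariance-constrained nuclear data perturbations is omitted here.

```python
import random

def latin_hypercube(n_samples, n_dims, seed=0):
    """Generate LHS samples in [0,1)^n_dims: each dimension is divided
    into n_samples equal strata and each stratum is hit exactly once."""
    rng = random.Random(seed)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)                      # random stratum order per dimension
        for i in range(n_samples):
            # one point placed uniformly within its assigned stratum
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples

# e.g. 10 sampled libraries over 3 (hypothetical) cross-section parameters;
# each column covers all 10 strata exactly once
pts = latin_hypercube(10, 3)
```

Compared with plain random sampling, this guarantees the full range of each uncertain parameter is covered even with a modest number of sampled libraries, which is why it suits expensive criticality calculations.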
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
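As a rough illustration of a fast, approximate probability calculation on a simplified model, here is a mean-value first-order estimate of a failure probability. This generic reliability-index sketch is a stand-in for, not a reproduction of, the actual FPI algorithm in PSAM; the limit state and input statistics are invented for the example.

```python
import math

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def mean_value_fosm(g, means, stds, h=1e-6):
    """First-order, mean-value estimate of P[g(X) < 0] for independent
    normal inputs: linearize g at the mean by finite differences and
    form a reliability index beta = mean(g) / std(g)."""
    g0 = g(means)
    grad = []
    for i in range(len(means)):
        x = list(means)
        x[i] += h
        grad.append((g(x) - g0) / h)
    var = sum((gi * si) ** 2 for gi, si in zip(grad, stds))
    beta = g0 / math.sqrt(var)
    return phi(-beta)

# toy limit state: capacity minus load, g = R - S (hypothetical statistics)
g = lambda x: x[0] - x[1]
pf = mean_value_fosm(g, means=[10.0, 6.0], stds=[1.0, 1.5])
```

For this linear limit state with normal inputs the first-order answer is exact; for the nonlinear responses PSAM targets, the approximation error is the price paid for speed, as the abstract notes.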
ERIC Educational Resources Information Center
da Silveira, Pedro Rodrigo Castro
2014-01-01
This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…
Clarkson, Sean; Wheat, Jon; Heller, Ben; Choppin, Simon
2016-01-01
Use of anthropometric data to infer sporting performance is increasing in popularity, particularly within elite sport programmes. Measurement typically follows standards set by the International Society for the Advancement of Kinanthropometry (ISAK). However, such techniques are time consuming, which reduces their practicality. Schranz et al. recently suggested 3D body scanners could replace current measurement techniques; however, current systems are costly. Recent interest in natural user interaction has led to a range of low-cost depth cameras capable of producing 3D body scans, from which anthropometrics can be calculated. A scanning system comprising 4 depth cameras was used to scan 4 cylinders, representative of the body segments. Girth measurements were calculated from the 3D scans and compared to gold standard measurements. Requirements of a Level 1 ISAK practitioner were met in all 4 cylinders, and ISO standards for scan-derived girth measurements were met in the 2 larger cylinders only. A fixed measurement bias was identified that could be corrected with a simple offset factor. Further work is required to determine comparable performance across a wider range of measurements performed upon living participants. Nevertheless, findings of the study suggest such a system offers many advantages over current techniques, having a range of potential applications.
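One way a girth measurement can be derived from a scan is as the perimeter of the convex hull of one horizontal slice of the point cloud; this is a plausible sketch, not necessarily the method used in the study, and the synthetic ring below stands in for real scan data.

```python
import math

def convex_hull(points):
    """Monotone-chain convex hull of 2D points, counter-clockwise."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def cross(o, a, b):
        return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def girth(slice_points):
    """Perimeter of the hull of one horizontal slice of the scan."""
    hull = convex_hull(slice_points)
    return sum(math.dist(hull[i], hull[(i + 1) % len(hull)])
               for i in range(len(hull)))

# synthetic slice: 100 points on a ring of radius 150 mm,
# so the girth should approach 2*pi*150 ~ 942.5 mm
pts = [(150 * math.cos(2 * math.pi * t / 100),
        150 * math.sin(2 * math.pi * t / 100)) for t in range(100)]
```

A fixed offset of the kind the study identified could then be applied as a simple additive correction to the value returned here.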
Simulation of a tagged neutron inspection system prototype
NASA Astrophysics Data System (ADS)
Donzella, A.; Boghen, G.; Bonomi, G.; Fontana, A.; Formisano, P.; Pesente, S.; Sudac, D.; Valkovic, V.; Zenoni, A.
2006-05-01
The illicit trafficking of explosive materials in cargo containers has become, in recent years, a serious problem. Currently used X-ray or γ-ray based systems provide only limited information about the elemental composition of the inspected cargo items. In the last few years, a new neutron interrogation technique, named TNIS (Tagged Neutron Inspection System), has been developed, which should permit determination of the chemical composition of a suspect item by coincidence measurements between the alpha particles and photons produced. A prototype of such a system for container inspection has been built at the Institute Ruder Boskovic (IRB) in Zagreb, Croatia, for the European Union 6FP EURITRACK project. We present the results of a detailed simulation of the IRB prototype performed with the MCNP Monte Carlo program and a comparison with beam attenuation calculations performed with GEANT3/MICAP. Detector signals, rates, and signal-over-background ratios have been calculated for 100 kg of TNT explosive located inside a cargo container filled with a metallic matrix of density 0.2 g/cm3. The case of an organic filling material is also discussed.
Method and system for measuring multiphase flow using multiple pressure differentials
Fincke, James R.
2001-01-01
An improved method and system for measuring a multiphase flow in a pressure flow meter. An extended throat venturi is used and pressure of the multiphase flow is measured at three or more positions in the venturi, which define two or more pressure differentials in the flow conduit. The differential pressures are then used to calculate the mass flow of the gas phase, the total mass flow, and the liquid phase. The method for determining the mass flow of the high void fraction fluid flow and the gas flow includes certain steps. The first step is calculating a gas density for the gas flow. The next two steps are finding a normalized gas mass flow rate through the venturi and computing a gas mass flow rate. The following step is estimating the gas velocity in the venturi tube throat. The next step is calculating the pressure drop experienced by the gas-phase due to work performed by the gas phase in accelerating the liquid phase between the upstream pressure measuring point and the pressure measuring point in the venturi throat. Another step is estimating the liquid velocity in the venturi throat using the calculated pressure drop experienced by the gas-phase due to work performed by the gas phase. Then the friction is computed between the liquid phase and a wall in the venturi tube. Finally, the total mass flow rate based on measured pressure in the venturi throat is calculated, and the mass flow rate of the liquid phase is calculated from the difference of the total mass flow rate and the gas mass flow rate.
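For orientation, the classical single-differential venturi relation for a single-phase gas stream can be sketched as below. This is the textbook starting point only, not the patent's multi-differential multiphase method; the pipe geometry, discharge coefficient, and operating conditions are illustrative assumptions.

```python
import math

def gas_density(p, T, M=0.0289, R=8.314):
    """Ideal-gas density (p in Pa, T in K, M in kg/mol);
    ideal-gas behaviour is assumed for simplicity."""
    return p * M / (R * T)

def venturi_mass_flow(dp, rho, d_throat, d_pipe, cd=0.98):
    """Single-phase mass flow from one venturi differential pressure:
    m_dot = Cd * A_t * sqrt(2 * rho * dp / (1 - beta^4))."""
    beta = d_throat / d_pipe              # diameter ratio
    a_throat = math.pi * d_throat ** 2 / 4.0
    return cd * a_throat * math.sqrt(2.0 * rho * dp / (1.0 - beta ** 4))

# hypothetical conditions: ~5 bar gas, 20 kPa differential,
# 50 mm throat in a 100 mm pipe
rho = gas_density(5e5, 300.0)
m_dot = venturi_mass_flow(2.0e4, rho, d_throat=0.05, d_pipe=0.10)
```

The patent's contribution is to measure two or more such differentials along an extended throat and iterate between them, so that the gas-phase flow, the liquid-phase acceleration work, and the wall friction can each be separated out rather than lumped into one equation like this.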
Recipes for free energy calculations in biomolecular systems.
Moradi, Mahmoud; Babin, Volodymyr; Sagui, Celeste; Roland, Christopher
2013-01-01
During the last decade, several methods for sampling phase space and calculating various free energies in biomolecular systems have been devised or refined for molecular dynamics (MD) simulations. Thus, state-of-the-art methodology and the ever increasing computer power allow calculations that were forbidden a decade ago. These calculations, however, are not trivial as they require knowledge of the methods, insight into the system under study, and, quite often, an artful combination of different methodologies in order to avoid the various traps inherent in an unknown free energy landscape. In this chapter, we illustrate some of these concepts with two relatively simple systems, a sugar ring and proline oligopeptides, whose free energy landscapes still offer considerable challenges. In order to explore the configurational space of these systems, and to surmount the various free energy barriers, we combine three complementary methods: a nonequilibrium umbrella sampling method (adaptively biased MD, or ABMD), replica-exchange molecular dynamics (REMD), and steered molecular dynamics (SMD). In particular, ABMD is used to compute the free energy surface of a set of collective variables; REMD is used to improve the performance of ABMD, to carry out sampling in space complementary to the collective variables, and to sample equilibrium configurations directly; and SMD is used to study different transition mechanisms.
Site dose calculations for the INEEL/TMI-2 storage facility
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, K.B.
1997-12-01
The U.S. Department of Energy (DOE) is licensing an independent spent-fuel storage installation (ISFSI) for the Three Mile Island unit 2 (TMI-2) core debris to be constructed at the Idaho Chemical Processing Plant (ICPP) site at the Idaho National Engineering and Environmental Laboratory (INEEL) using the NUHOMS spent-fuel storage system. This paper describes the site dose calculations, performed in support of the license application, that estimate exposures both on the site and for members of the public. These calculations are unusual for dry-storage facilities in that they must account for effluents from the system in addition to skyshine from the ISFSI. The purpose of the analysis was to demonstrate compliance with the 10 CFR 20 and 10 CFR 72.104 exposure limits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jordan, W.C.; Turner, J.C.
1992-12-01
The purpose of this report is to document reference calculations performed using the SCALE-4.0 code system to determine the critical parameters of UO2F2-H2O spheres. The calculations are an extension of those documented in ORNL/CSD/TM-284. Specifically, the data for low-enriched UO2F2-H2O spheres have been extended to highly enriched uranium. These calculations, together with those reported in ORNL/CSD/TM-284, provide a consistent set of critical parameters (k-infinity, volume, mass, mass of water) for UO2F2 and water over the full range of enrichment and moderation ratio.
Dosimetric investigation of LDR brachytherapy ¹⁹²Ir wires by Monte Carlo and TPS calculations.
Bozkurt, Ahmet; Acun, Hediye; Kemikler, Gonul
2013-01-01
The aim of this study was to investigate the dose rate distribution around ¹⁹²Ir wires used as radioactive sources in low-dose-rate brachytherapy applications. Monte Carlo modeling of a 0.3-mm diameter source and its surrounding water medium was performed for five different wire lengths (1-5 cm) using the MCNP software package. The computed dose rates per unit of air kerma at distances from 0.1 up to 10 cm away from the source were first verified against literature data sets. Then, the simulation results were compared with calculations from the XiO CMS commercial treatment planning system. The results were found to be in concordance with the treatment planning system calculations, except for the shorter wires at close distances.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This paper reports on an automated metering/proving system for custody transfer of crude oil at the Phillips 66 Co. tanker unloading terminal in Freeport, Texas. It is described as one of the most sophisticated systems developed. The menu-driven, one-button automation removes the proving sequence entirely from manual control. The system also is said to be cost-effective and versatile compared to a dedicated flow computer with API calculation capabilities. Developed by Puffer-Sweiven, systems integrators, the new technology additionally is thought to be the first custody transfer system to employ a programmable logic controller (PLC). The PLC provides the automation, gathers and stores all raw data, and prints alarms. Also, the system uses a personal computer operator interface (OI) that runs on the Intel iRMX real-time operating system. The OI is loaded with Puffer-Sweiven application software that performs API meter factor and volume correction calculations as well as presents color graphics and generates reports.
Edwards, Mervyn; Nathanson, Andrew; Wisch, Marcus
2014-01-01
The objective of the current study was to estimate the benefit for Europe of fitting precrash braking systems to cars that detect pedestrians and autonomously brake the car to prevent or lower the speed of the impact with the pedestrian. The analysis was divided into 2 main parts: (1) Develop and apply methodology to estimate benefit for Great Britain and Germany; (2) scale Great Britain and German results to give an indicative estimate for Europe (EU27). The calculation methodology developed to estimate the benefit was based on 2 main steps: 1. Calculate the change in the impact speed distribution curve for pedestrian casualties hit by the fronts of cars assuming pedestrian autonomous emergency braking (AEB) system fitment. 2. From this, calculate the change in the number of fatally, seriously, and slightly injured casualties by using the relationship between risk of injury and the casualty impact speed distribution to sum the resulting risks for each individual casualty. The methodology was applied to Great Britain and German data for 3 types of pedestrian AEB systems representative of (1) currently available systems; (2) future systems with improved performance, which are expected to be available in the next 2-3 years; and (3) reference limit system, which has the best performance currently thought to be technically feasible. Nominal benefits estimated for Great Britain ranged from £119 million to £385 million annually and for Germany from €63 million to €216 million annually depending on the type of AEB system assumed fitted. Sensitivity calculations showed that the benefit estimated could vary from about half to twice the nominal estimate, depending on factors such as whether or not the system would function at night and the road friction assumed. 
Based on scaling of estimates made for Great Britain and Germany, the nominal benefit of implementing pedestrian AEB systems on all cars in Europe was estimated to range from about €1 billion per year for current generation AEB systems to about €3.5 billion for a reference limit system (i.e., best performance thought technically feasible at present). Dividing these values by the number of new passenger cars registered in Europe per year gives an indication that the cost of a system per car should be less than ∼€80 to ∼€280 for it to be cost effective. The potential benefit of fitting AEB systems to cars in Europe for pedestrian protection has been estimated and the results interpreted to indicate the upper limit of cost for a system to allow it to be cost effective.
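The two-step calculation described above can be sketched as follows: shift each casualty's impact speed by an assumed AEB speed reduction, then re-sum injury risk over the casualty sample. The speed sample, the 10 km/h reduction, and the logistic risk curve are all illustrative assumptions, not the study's actual data or curves.

```python
import math

def fatality_risk(v_kmh):
    """Illustrative logistic risk curve for a struck pedestrian
    (parameters are invented for the example)."""
    return 1.0 / (1.0 + math.exp(-(v_kmh - 60.0) / 10.0))

def expected_fatalities(speeds, reduction_kmh=0.0):
    """Step 2: sum individual risks over the (shifted) speed sample."""
    return sum(fatality_risk(max(v - reduction_kmh, 0.0)) for v in speeds)

# Step 1: hypothetical impact-speed sample (km/h) and assumed AEB reduction
casualty_speeds = [20, 30, 40, 45, 50, 55, 60, 70, 80]
baseline = expected_fatalities(casualty_speeds)
with_aeb = expected_fatalities(casualty_speeds, reduction_kmh=10.0)
saved = baseline - with_aeb
```

Monetizing `saved` with a value per prevented casualty, and repeating for serious and slight injuries, yields benefit figures of the kind reported for Great Britain and Germany; the sensitivity results follow from varying the reduction and the risk curve.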
Electric Propulsion System Modeling for the Proposed Prometheus 1 Mission
NASA Technical Reports Server (NTRS)
Fiehler, Douglas; Dougherty, Ryan; Manzella, David
2005-01-01
The proposed Prometheus 1 spacecraft would utilize nuclear electric propulsion to propel the spacecraft to its ultimate destination where it would perform its primary mission. As part of the Prometheus 1 Phase A studies, system models were developed for each of the spacecraft subsystems that were integrated into one overarching system model. The Electric Propulsion System (EPS) model was developed using data from the Prometheus 1 electric propulsion technology development efforts. This EPS model was then used to provide both performance and mass information to the Prometheus 1 system model for total system trades. Development of the EPS model is described, detailing both the performance calculations as well as its evolution over the course of Phase A through three technical baselines. Model outputs are also presented, detailing the performance of the model and its direct relationship to the Prometheus 1 technology development efforts. These EP system model outputs are also analyzed chronologically showing the response of the model development to the four technical baselines during Prometheus 1 Phase A.
NASA Astrophysics Data System (ADS)
Sakamoto, Hiroki; Yamamoto, Toshihiro
2017-09-01
This paper presents an improvement and performance evaluation of the "perturbation source method", one of the Monte Carlo perturbation techniques. The formerly proposed perturbation source method was first-order accurate, although it is known that the method can easily be extended to an exact perturbation method. A transport equation for calculating the exact flux difference caused by a perturbation is solved. A perturbation particle representing a flux difference is explicitly transported in the perturbed system, instead of in the unperturbed system. The source term of the transport equation is defined by the unperturbed flux and the cross-section (or optical parameter) changes. The unperturbed flux is provided by an "on-the-fly" technique during the course of the ordinary fixed-source calculation for the unperturbed system. A set of perturbation particles is started at the collision point in the perturbed region and tracked until death. For a perturbation in a smaller portion of the whole domain, the efficiency of the perturbation source method can be improved by using a virtual scattering coefficient or cross section in the perturbed region, forcing collisions. Performance is evaluated by comparing the proposed method to other Monte Carlo perturbation methods. Numerical tests performed for particle transport in a two-dimensional geometry reveal that the perturbation source method is less effective than the correlated sampling method for a perturbation in a larger portion of the whole domain. However, for a perturbation in a smaller portion, the perturbation source method outperforms the correlated sampling method. The efficiency depends strongly on the adjustment of the new virtual scattering coefficient or cross section.
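The correlated sampling method used as the comparison baseline can be illustrated on a toy problem: estimating the change in transmission through a purely absorbing slab when its cross section is perturbed. Reusing the same random number stream for the unperturbed and perturbed runs makes the estimated difference far less noisy than two independent runs would be. The slab model and parameters are illustrative only, not the paper's two-dimensional test problem.

```python
import math
import random

def transmitted(sigma, thickness, n, rng):
    """Fraction of particles crossing a purely absorbing slab
    (analog Monte Carlo with exponential free paths)."""
    return sum(rng.expovariate(sigma) > thickness for _ in range(n)) / n

def flux_difference(sigma, dsigma, thickness, n, seed=1, correlated=True):
    """Estimate T(sigma + dsigma) - T(sigma). With correlated=True the
    two runs share the same random number stream (correlated sampling),
    so each particle's perturbed path is a rescaling of its unperturbed
    path and most per-particle contributions cancel exactly."""
    r1 = random.Random(seed)
    r2 = random.Random(seed if correlated else seed + 1)
    return (transmitted(sigma + dsigma, thickness, n, r2)
            - transmitted(sigma, thickness, n, r1))

exact = math.exp(-1.1) - math.exp(-1.0)       # analytic difference for this slab
d_corr = flux_difference(1.0, 0.1, 1.0, 100_000)
```

The paper's point is that this cancellation degrades when the perturbation is confined to a small region, which is where transporting explicit perturbation particles from a dedicated source becomes the better strategy.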
Output feedback regulator design for jet engine control systems
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1977-01-01
A multivariable control design procedure based on the output feedback regulator formulation is described and applied to a turbofan engine model. Full-order model dynamics were incorporated in the example design. The effect of actuator dynamics on closed-loop performance was investigated. Also, the importance of turbine inlet temperature as an element of the dynamic feedback was studied. Step responses are given to indicate the improvement in system performance with this control. Calculation times for all experiments are given in CPU seconds for comparison purposes.
Design of Advanced Blading for a High-Speed HP Compressor Using an S1-S2 Flow Calculation System.
1990-11-01
...speed squared) and pressure ratio for the initial design are given in... Howell multistage compressor prediction method (7), with an arbitrary increase of... improved performance of axial compressors with leading edge normal shock waves... designs to be produced with the current S1-S2 system. However, it is... performance of the new design was extremely encouraging, with a peak... (7) Howell A R and Calvert W J, A new stage-stacking technique for axial-flow...
NASA Astrophysics Data System (ADS)
Wei, Xianggeng; Li, Jiang; He, Guoqiang
2017-04-01
The vortex valve solid variable thrust motor is a new solid motor that can achieve vehicle trajectory optimization and motor energy management. Numerical calculations were performed to investigate the influence of the vortex chamber diameter, shape, and height of the vortex valve solid variable thrust motor on modulation performance. The test results verified that the calculation results are consistent with laboratory results, with a maximum error of 9.5%. The research drew the following major conclusions: the optimal modulation performance was achieved with a cylindrical vortex chamber; increasing the vortex chamber diameter improved the modulation performance; optimal modulation performance could be achieved when the height of the vortex chamber is half of the vortex chamber outlet diameter; and the hot gas control flow could enhance modulation performance. The results can provide the basis for establishing a design method for the vortex valve solid variable thrust motor.
Thermodynamic and Mechanical Analysis of a Thermomagnetic Rotary Engine
NASA Astrophysics Data System (ADS)
Fajar, D. M.; Khotimah, S. N.; Khairurrijal
2016-08-01
A heat engine in a magnetic system has three thermodynamic coordinates: magnetic intensity ℋ, total magnetization ℳ, and temperature T, where the first two are respectively analogous to the pressure P and volume V of a gaseous system. Consequently, the Carnot cycle that constitutes the principle of a heat engine in a gaseous system is also valid for one in a magnetic system. A thermomagnetic rotary engine is one such model, designed in the form of a ferromagnetic wheel that rotates because of the magnetization change at the Curie temperature. The study aims to describe the thermodynamic and mechanical analysis of a thermomagnetic rotary engine and to calculate its efficiencies. From the thermodynamic view, the ideal processes are isothermal demagnetization, adiabatic demagnetization, isothermal magnetization, and adiabatic magnetization. The thermodynamic efficiency depends on the temperature difference between the hot and cold reservoirs. From the mechanical view, the rotational work is determined through calculation of the moment of inertia and the average angular speed. The mechanical efficiency is calculated as the ratio of the rotational work to the heat received by the system. The study also obtains the exergetic efficiency, which states the performance quality of the engine.
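The two efficiencies discussed can be sketched numerically. The formulas follow the abstract's definitions, but every numerical value is an illustrative assumption, as is the use of rotational kinetic energy (half the moment of inertia times the average angular speed squared) as a stand-in for the rotational work.

```python
def carnot_efficiency(t_hot, t_cold):
    """Thermodynamic (Carnot) efficiency between two reservoirs, in K."""
    return 1.0 - t_cold / t_hot

def mechanical_efficiency(moment_of_inertia, omega_avg, heat_in):
    """Ratio of rotational work to heat received by the system.
    Rotational kinetic energy 0.5*I*omega^2 is used here as an
    illustrative proxy for the rotational work."""
    w_rot = 0.5 * moment_of_inertia * omega_avg ** 2
    return w_rot / heat_in

# hypothetical values: hot reservoir near a Curie point, cold at room temperature
eta_c = carnot_efficiency(t_hot=630.0, t_cold=300.0)
eta_m = mechanical_efficiency(moment_of_inertia=2.0e-4, omega_avg=3.0, heat_in=5.0)
```

As expected for a real engine, the mechanical efficiency computed this way falls well below the Carnot bound set by the reservoir temperatures.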
33 CFR 157.05 - Performing calculations for this part.
Code of Federal Regulations, 2010 CFR
2010-07-01
... those formulas must be in the International System of Units; and (c) Forward and after perpendiculars are located at the forward end and at the after end of the length. The forward perpendicular coincides...
33 CFR 157.05 - Performing calculations for this part.
Code of Federal Regulations, 2011 CFR
2011-07-01
... those formulas must be in the International System of Units; and (c) Forward and after perpendiculars are located at the forward end and at the after end of the length. The forward perpendicular coincides...
DET/MPS - The GSFC Energy Balance Programs
NASA Technical Reports Server (NTRS)
Jagielski, J. M.
1994-01-01
The Direct Energy Transfer (DET) and MultiMission Spacecraft Modular Power System (MPS) computer programs perform mathematical modeling and simulation to aid in the design and analysis of DET and MPS spacecraft power system performance, in order to determine the energy balance of the subsystem. A DET spacecraft power system feeds the output of the solar photovoltaic array and nickel-cadmium batteries directly to the spacecraft bus. In the MPS system, a Standard Power Regulator Unit (SPRU) is utilized to operate the array at its peak power point. DET and MPS perform minute-by-minute simulation of power system performance. The simulation results focus mainly on the output of the solar array and the characteristics of the batteries. Although both packages are limited in terms of orbital mechanics, they have sufficient capability to calculate data on eclipses and array performance for circular or near-circular orbits. DET and MPS are written in FORTRAN-77 with some VAX FORTRAN-type extensions. Both are available in three versions: GSC-13374, for DEC VAX-series computers running VMS; GSC-13443, for UNIX-based computers; and GSC-13444, for Apple Macintosh computers.
40 CFR Appendix A to Part 58 - Quality Assurance Requirements for SLAMS, SPMs and PSD Air Monitoring
Code of Federal Regulations, 2014 CFR
2014-07-01
... monitor. 3.3.4.4Pb Performance Evaluation Program (PEP) Procedures. Each year, one performance evaluation... Information 2. Quality System Requirements 3. Measurement Quality Check Requirements 4. Calculations for Data... 10 of this appendix) and at a national level in references 1, 2, and 3 of this appendix. 1...
40 CFR Appendix A to Part 58 - Quality Assurance Requirements for SLAMS, SPMs and PSD Air Monitoring
Code of Federal Regulations, 2013 CFR
2013-07-01
... monitor. 3.3.4.4Pb Performance Evaluation Program (PEP) Procedures. Each year, one performance evaluation... Information 2. Quality System Requirements 3. Measurement Quality Check Requirements 4. Calculations for Data... 10 of this appendix) and at a national level in references 1, 2, and 3 of this appendix. 1...
NASA-USRP Summer 2013 Internship Final Report
NASA Technical Reports Server (NTRS)
Gurganus, S. Christine
2013-01-01
Three major projects were undertaken during the Summer 2013 USRP Internship: (A) assisting the cTAPS group with component and pressure vessel system analyses and documentation, (B) designing a hoisting fixture for a solid rocket motor, and (C) finding an alternative to removing the DOT-rated gaseous nitrogen tank from the roof for hydrostatic testing. Hypergolic Material Assessments (HMAs) and pressure calculations were performed on components of pressure systems. Additionally, component information was logged in the Standard Parts Database to provide a location where system designers can find information regarding components, including their specifications and compatibility with fluids. A hoisting fixture was designed to hoist a solid rocket motor and meets the specifications related to stress and size. However, there are issues with the fixture's bolt head allotment, the bolt spacing, and the complexity of the part. Finally, calculations were performed on an expiring DOT-rated gaseous nitrogen tank in an attempt to re-rate it per ASME standards. This was unsuccessful, so other options are being explored for the tank. While much progress was made on all three projects, work remains on each to achieve the desired results.
NASA Technical Reports Server (NTRS)
Fishbach, L. H.
1980-01-01
The computational techniques are described which are utilized at Lewis Research Center to determine the optimum propulsion systems for future aircraft applications and to identify system tradeoffs and technology requirements. Cycle performance and engine weight can be calculated, along with costs and installation effects, as opposed to fuel consumption alone. Almost any conceivable turbine engine cycle can be studied. These computer codes are: NNEP, WATE, LIFCYC, INSTAL, and POD DRG. Examples are given to illustrate how these computer techniques can be applied to analyze and optimize propulsion system fuel consumption, weight, and cost for representative types of aircraft and missions.
Redundancy management of electrohydraulic servoactuators by mathematical model referencing
NASA Technical Reports Server (NTRS)
Campbell, R. A.
1971-01-01
A description of a mathematical model reference system is presented which provides redundancy management for an electrohydraulic servoactuator. The mathematical model includes a compensation network that calculates reference parameter perturbations induced by external disturbance forces. This is accomplished by using the measured pressure differential data taken from the physical system. This technique was experimentally verified by tests performed using the H-1 engine thrust vector control system for Saturn IB. The results of these tests are included in this report. It was concluded that this technique improves the tracking accuracy of the model reference system to the extent that redundancy management of electrohydraulic servosystems may be performed using this method.
A RRKM study and a DFT assessment on gas-phase fragmentation of formamide-M(2+) (M = Ca, Sr).
Martín-Sómer, Ana; Gaigeot, Marie-Pierre; Yáñez, Manuel; Spezia, Riccardo
2014-07-28
A kinetic study of the unimolecular reactivity of formamide-M(2+) (M = Ca, Sr) systems was carried out by means of RRKM statistical theory using high-level DFT. The results predict M(2+), [M(NH2)](+) and [HCO](+) as the main products, together with an intermediate that could eventually evolve to produce [M(NH3)](2+) and CO, for high values of internal energy. In this framework, we also evaluated the influence of the external rotational energy on the reaction rate constants. In order to find a method to perform reliable electronic structure calculations for formamide-M(2+) (M = Ca, Sr) at a relatively low computational cost, an assessment of different methods was performed. In the first assessment, twenty-one functionals, belonging to different DFT categories, and an MP2 wave function method using a small basis set were evaluated. CCSD(T)/cc-pWCVTZ single point calculations were used as reference. A second assessment was performed on geometries and energies. We found BLYP/6-31G(d) and G96LYP/6-31+G(d,p) to be the best performing methods for formamide-Ca(2+) and formamide-Sr(2+), respectively. Furthermore, a detailed assessment was done on RRKM reactivity, and G96LYP/6-31G(d) provided results in agreement with higher level calculations. The combination of geometric, energetic, and kinetic (RRKM) criteria to evaluate DFT functionals is rather unusual and provides an original assessment procedure. Overall, we suggest using G96LYP as the best performing functional with a small basis set for both systems.
A model for plant lighting system selection.
Ciolkosz, D E; Albright, L D; Sager, J C; Langhans, R W
2002-01-01
A decision model is presented that compares lighting systems for a plant growth scenario and chooses the most appropriate system from a given set of possible choices. The model utilizes a Multiple Attribute Utility Theory approach, and incorporates expert input and performance simulations to calculate a utility value for each lighting system being considered. The system with the highest utility is deemed the most appropriate system. The model was applied to a greenhouse scenario, and analyses were conducted to test the model's output for validity. Parameter variation indicates that the model performed as expected. Analysis of model output indicates that differences in utility among the candidate lighting systems were sufficiently large to give confidence that the model's order of selection was valid.
Changes in the reflectivity of a lithium niobate crystal decorated with a graphene layer
NASA Astrophysics Data System (ADS)
Salas, O.; Garcés, E.; Castillo, F. L.; Magaña, L. F.
2017-01-01
Density functional theory and molecular dynamics were used to study the interaction of a graphene layer with the surface of lithium niobate. The simulations were performed at atmospheric pressure and 300 K. We found that the graphene layer is physisorbed with an adsorption energy of -0.8205 eV/C-atom. Subsequently, the optical absorption of the graphene-(lithium niobate) system was calculated and compared with those of graphene alone and of lithium niobate alone. The calculations were performed using the Quantum Espresso code with the GGA approximation and Vdw-DF2 (which includes long-range correlation effects such as Van der Waals interactions).
NASA Astrophysics Data System (ADS)
Kumenko, A. I.; Kostyukov, V. N.; Kuz'minykh, N. Yu.
2016-10-01
To visualize the physical processes that occur in the journal bearings of the shafting of power generating turbosets, a technique for preliminary calculation of a set of characteristics of the journal bearings in the domain of possible movements (DPM) of the rotor journals is proposed. The technique is based on interpolation of the oil film characteristics and is designed for use in real-time diagnostic system COMPACS®. According to this technique, for each journal bearing, the domain of possible movement of the shaft journal is computed, then triangulation of the area is performed, and the corresponding mesh is constructed. At each node of the mesh, all characteristics of the journal bearing required by the diagnostic system are calculated. Via shaft-position sensors, the system measures—in the online mode—the instantaneous location of the shaft journal in the bearing and determines the averaged static position of the journals (the pivoting vector). Afterwards, continuous interpolation in the triangulation domain is performed, which allows the real-time calculation of the static and dynamic forces that act on the rotor journal, the flow rate and the temperature of the lubricant, and power friction losses. Use of the proposed method on a running turboset enables diagnosing the technical condition of the shafting support system and promptly identifying the defects that determine the vibrational state and the overall reliability of the turboset. The authors report a number of examples of constructing the DPM and computing the basic static characteristics for elliptical journal bearings typical of large-scale power turbosets. To illustrate the interpolation method, the traditional approach to calculation of bearing properties is applied. This approach is based on a Reynolds two-dimensional isothermal equation that accounts for the mobility of the boundary of the oil film continuity.
NASA Technical Reports Server (NTRS)
Bebis, George (Inventor); Amayeh, Gholamreza (Inventor)
2015-01-01
Hand-based biometric analysis systems and techniques are described which provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
NASA Technical Reports Server (NTRS)
Bebis, George
2013-01-01
Hand-based biometric analysis systems and techniques provide robust hand-based identification and verification. An image of a hand is obtained, which is then segmented into a palm region and separate finger regions. Acquisition of the image is performed without requiring particular orientation or placement restrictions. Segmentation is performed without the use of reference points on the images. Each segment is analyzed by calculating a set of Zernike moment descriptors for the segment. The feature parameters thus obtained are then fused and compared to stored sets of descriptors in enrollment templates to arrive at an identity decision. By using Zernike moments, and through additional manipulation, the biometric analysis is invariant to rotation, scale, or translation of an input image. Additionally, the analysis re-uses commonly seen terms in Zernike calculations to achieve additional efficiencies over traditional Zernike moment calculation.
NASA Astrophysics Data System (ADS)
Szyczewski, A.; Hołderna-Natkaniec, K.; Natkaniec, I.
2004-05-01
Inelastic incoherent neutron scattering spectra of progesterone and testosterone measured at 20 and 290 K were compared with the IR spectra measured at 290 K. The phonon density of states spectra display well-resolved peaks of low-frequency internal vibration modes up to 1200 cm⁻¹. The quantum chemistry calculations were performed by the semiempirical PM3 method and by the density functional theory method with different basis sets for the isolated molecule, as well as for the dimer system of testosterone. The proposed assignment of internal vibrations of normal modes enables conclusions about the sequence of the onset of the torsional movements of the CH3 groups. These conclusions were correlated with the results of proton molecular dynamics studies performed by the NMR method. The GAUSSIAN program was used for the calculations.
Heat Production During Countermeasure Exercises Planned for the International Space Station
NASA Technical Reports Server (NTRS)
Rapley, Michael G.; Lee, Stuart M. C.; Guilliams, Mark E.; Greenisen, Michael C.; Schneider, Suzanne M.
2004-01-01
This investigation's purpose was to determine the amount of heat produced when performing aerobic and resistance exercises planned as part of the exercise countermeasures prescription for the ISS. These data will be used to determine thermal control requirements of the Node 1 and other modules where exercise hardware might reside. To determine heat production during resistive exercise, 6 subjects using the iRED performed 5 resistance exercises which form the core of the current ISS resistive exercise countermeasures. Each exerciser performed a warm-up set at 50% effort, then 3 sets of increasing resistance. We measured oxygen consumption and work during each exercise. Heat loss was calculated as the difference between the gross energy expenditure (minus resting metabolism) and the work performed. To determine heat production during aerobic exercise, 14 subjects performed an interval, cycle exercise protocol and 7 subjects performed a continuous, treadmill protocol. Each 30-min. exercise is similar to exercises planned for ISS. Oxygen consumption, monitored continuously during the exercises, was used to calculate the gross energy expenditure. For cycle exercise, work performed was calculated based on the ergometer's resistance setting and pedaling frequency. For treadmill, total work was estimated by assuming 25% work efficiency and subtracting the calculated heat production and resting metabolic rate from the gross energy expenditure. This heat production needs to be considered when determining the location of exercise hardware on ISS and designing environmental control systems. These values reflect only the heat produced by the human subjects; heat produced by the exercise hardware will also contribute to the heat load.
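The heat-production bookkeeping described in the abstract can be sketched as follows. A minimal illustration with hypothetical energy values in kJ; the 25% treadmill work efficiency is the assumption stated in the abstract, all other numbers are invented:

```python
def heat_loss_resistive(gross_kj, resting_kj, work_kj):
    """Heat = (gross energy expenditure - resting metabolism) - work done."""
    return (gross_kj - resting_kj) - work_kj

def treadmill_heat(gross_kj, resting_kj, efficiency=0.25):
    """Assume a fixed work efficiency; the rest of net expenditure is heat."""
    net = gross_kj - resting_kj
    return net * (1.0 - efficiency)

# Hypothetical 30-min session: 500 kJ gross, 100 kJ resting metabolism.
q_resistive = heat_loss_resistive(500.0, 100.0, 80.0)   # measured 80 kJ work
q_treadmill = treadmill_heat(500.0, 100.0)              # 25% work efficiency
```

The cycle-ergometer case follows the resistive formula, with work computed from the resistance setting and pedaling frequency instead of measured directly.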
Ensuring the validity of calculated subcritical limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, H.K.
1977-01-01
The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, ''Validation of Calculational Methods for Nuclear Criticality Safety.'' The computer codes used for criticality safety computations, which are listed and are briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.
Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital
Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud
2016-01-01
Background: Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals. The activity-based costing (ABC) method is a new and more effective cost system. Objective: This study aimed to compare the ABC with the TCS method in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. Methods: This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data on accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of cost centers were assigned to activities by using related cost factors. Then the costs of activities were divided among cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with those obtained from TCS. Results: The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with a mean occupancy rate of 67.4% during 2012. Unit cost of medical services, cost price of occupancy bed per day, and cost per outpatient service were calculated. The total unit costs by ABC and TCS were respectively 187.95 and 137.70 USD, showing 50.34 USD more unit cost by the ABC method. The ABC method represented more accurate information on the major cost components. Conclusion: By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department. PMID:26234974
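The two-phase ABC allocation described above can be sketched as follows; the cost center, activities, cost objects, and all figures are hypothetical, chosen only to illustrate the mechanics:

```python
def activity_based_cost(center_costs, cost_factors, cost_drivers):
    """Two-phase activity-based costing allocation.

    Phase 1: assign each cost center's total to activities via cost factors.
    Phase 2: assign each activity's cost to cost objects via cost drivers.
    Shares within each factor/driver mapping are assumed to sum to 1.
    """
    activity_costs = {}
    for center, total in center_costs.items():
        for activity, share in cost_factors[center].items():
            activity_costs[activity] = activity_costs.get(activity, 0.0) + total * share

    object_costs = {}
    for activity, total in activity_costs.items():
        for obj, share in cost_drivers[activity].items():
            object_costs[obj] = object_costs.get(obj, 0.0) + total * share
    return object_costs

# One cost center, two activities, two cost objects (all hypothetical).
unit_costs = activity_based_cost(
    center_costs={"nursing": 1000.0},
    cost_factors={"nursing": {"patient_care": 0.8, "admin": 0.2}},
    cost_drivers={"patient_care": {"bed_day": 1.0},
                  "admin": {"bed_day": 0.5, "outpatient": 0.5}},
)
```

The study applies this same chain (cost centers, then activities, then cost objects) with the hospital's five cost categories as inputs.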
Application of the Activity-Based Costing Method for Unit-Cost Calculation in a Hospital.
Javid, Mahdi; Hadian, Mohammad; Ghaderi, Hossein; Ghaffari, Shahram; Salehi, Masoud
2015-05-17
Choosing an appropriate accounting system for a hospital has always been a challenge for hospital managers. The traditional cost system (TCS) causes cost distortions in hospitals. The activity-based costing (ABC) method is a new and more effective cost system. This study aimed to compare the ABC with the TCS method in calculating the unit cost of medical services and to assess its applicability in Kashani Hospital, Shahrekord City, Iran. This cross-sectional study was performed on accounting data of Kashani Hospital in 2013. Data on accounting reports of 2012 and other relevant sources at the end of 2012 were included. To apply the ABC method, the hospital was divided into several cost centers and five cost categories were defined: wage, equipment, space, material, and overhead costs. Then activity centers were defined. The ABC method was performed in two phases. First, the total costs of cost centers were assigned to activities by using related cost factors. Then the costs of activities were divided among cost objects by using cost drivers. After determining the cost of objects, the cost price of medical services was calculated and compared with those obtained from TCS. The Kashani Hospital had 81 physicians, 306 nurses, and 328 beds with a mean occupancy rate of 67.4% during 2012. Unit cost of medical services, cost price of occupancy bed per day, and cost per outpatient service were calculated. The total unit costs by ABC and TCS were respectively 187.95 and 137.70 USD, showing 50.34 USD more unit cost by the ABC method. The ABC method represented more accurate information on the major cost components. By utilizing ABC, hospital managers have a valuable accounting system that provides a true insight into the organizational costs of their department.
NASA Astrophysics Data System (ADS)
Karthikeyan, N.; Kumar, R. Ramesh; Jaiganesh, G.; Sivakumar, K.
2018-01-01
The search for thermoelectric materials has increased considerably due to growing global energy demand. Hence the present work focuses on the preparation and characterization of the thermal transport phenomena of pure and Ba/Ca-substituted perovskite LaFeO3 orthoferrite systems. The conventional solid state reaction technique is utilized for the preparation of the LaFeO3 and La0.9M0.1FeO3 (M = Ca and Ba) compounds. Crystal structures of the prepared samples are analyzed using the Rietveld refinement process, which confirms the orthoferrite crystal structure of all the prepared compounds, with distortion induced in the atomic positions by the incorporation of substituent atoms. The electronic structure calculations are performed with VASP. As the LaFeO3 compound is a strongly correlated system, the density functional theory (DFT) calculations are performed by the DFT + U (Hubbard correction) method. The computed band gap values are compared with the energy gap values calculated from UV-Vis spectral analysis. Electrical conductivity and its Arrhenius behavior are analyzed from room temperature to 650 K; the increase in conductivity with temperature is due to the thermally activated mobility of charge carriers. Temperature-dependent thermopower is also examined using a homemade Seebeck coefficient measurement system. The calculated thermoelectric power factor reveals that the Ba-substituted LaFeO3 compound shows the highest power factor value, 3.73 μW/K2 cm, at higher temperature; the superior power factor values observed in the Ba-substituted compound demonstrate the material's capability in power generating devices based on the thermoelectric effect.
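The power factor quoted above is S²σ expressed in μW/K² cm. A minimal unit-conversion sketch, assuming the Seebeck coefficient S is given in μV/K and the electrical conductivity σ in S/cm (the example values are illustrative, not the paper's measurements):

```python
def power_factor_uw(seebeck_uv_per_k, sigma_s_per_cm):
    """Thermoelectric power factor S^2 * sigma in μW/(K^2 cm).

    S in μV/K and sigma in S/cm give S^2*sigma in 1e-12 W/(K^2 cm);
    converting W to μW multiplies by 1e6, for a net factor of 1e-6.
    """
    s_v_per_k = seebeck_uv_per_k * 1e-6       # μV/K -> V/K
    pf_w = s_v_per_k ** 2 * sigma_s_per_cm    # W/(K^2 cm)
    return pf_w * 1e6                         # W -> μW

# Illustrative values: S = 200 μV/K, sigma = 100 S/cm.
pf = power_factor_uw(200.0, 100.0)
```

Screening candidates by power factor (rather than the full figure of merit zT) is common when thermal conductivity data are not yet available.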
Infrared zone-scanning system.
Belousov, Aleksandr; Popov, Gennady
2006-03-20
Challenges encountered in designing an infrared viewing optical system that uses a small linear detector array based on a zone-scanning approach are discussed. Scanning is performed by a rotating refractive polygon prism with tilted facets, which, along with high-speed line scanning, makes the scanning gear as simple as possible. A method of calculation of a practical optical system to compensate for aberrations during prism rotation is described.
NASA Technical Reports Server (NTRS)
Geyser, L. C.
1978-01-01
A digital computer program, DYGABCD, was developed that generates linearized, dynamic models of simulated turbofan and turbojet engines. DYGABCD is based on an earlier computer program, DYNGEN, that is capable of calculating simulated nonlinear steady-state and transient performance of one- and two-spool turbojet engines or two- and three-spool turbofan engines. Most control design techniques require linear system descriptions. For multiple-input/multiple-output systems such as turbine engines, state space matrix descriptions of the system are often desirable. DYGABCD computes the state space matrices commonly referred to as the A, B, C, and D matrices required for a linear system description. The report discusses the analytical approach and provides a users manual, FORTRAN listings, and a sample case.
Wind Energy Conference, Boulder, Colo., April 9-11, 1980, Technical Papers
NASA Astrophysics Data System (ADS)
1980-03-01
Papers are presented concerning the technology and economics of wind energy conversion systems. Specific topics include the aerodynamic analysis of the Darrieus rotor, the numerical calculation of the flow near horizontal-axis wind turbine rotors, the calculation of dynamic wind turbine rotor loads, markets for wind energy systems, an oscillating-wing windmill, wind tunnel tests of wind rotors, wind turbine generator wakes, the application of a multi-speed electrical generator to wind turbines, the feasibility of wind-powered systems for dairy farms, and wind characteristics over uniform and complex terrain. Attention is also given to performance tests of the DOE/NASA MOD-1 2000-kW wind turbine generator, the assessment of utility-related test data, offshore wind energy conversion systems, and the optimization of wind energy utilization economics through load management.
Electron-correlated fragment-molecular-orbital calculations for biomolecular and nano systems.
Tanaka, Shigenori; Mochizuki, Yuji; Komeiji, Yuto; Okiyama, Yoshio; Fukuzawa, Kaori
2014-06-14
Recent developments in the fragment molecular orbital (FMO) method for theoretical formulation, implementation, and application to nano and biomolecular systems are reviewed. The FMO method has enabled ab initio quantum-mechanical calculations for large molecular systems such as protein-ligand complexes at a reasonable computational cost in a parallelized way. There has been a wealth of application outcomes from the FMO method in the fields of biochemistry, medicinal chemistry, and nanotechnology, in which the electron correlation effects play vital roles. With the aid of advances in high-performance computing, the FMO method promises larger, faster, and more accurate simulations of biomolecular and related systems, including descriptions of dynamical behavior in solvent environments. The current status and future prospects of the FMO scheme are addressed in these contexts.
Near real-time measurement of forces applied by an optical trap to a rigid cylindrical object
NASA Astrophysics Data System (ADS)
Glaser, Joseph; Hoeprich, David; Resnick, Andrew
2014-07-01
An automated data acquisition and processing system is established to measure the force applied by an optical trap to an object of unknown composition in real time. Optical traps have been in use for the past 40 years to manipulate microscopic particles, but the magnitude of applied force is often unknown and requires extensive instrument characterization. Measuring or calculating the force applied by an optical trap to nonspherical particles presents additional difficulties which are also overcome with our system. Extensive experiments and measurements using well-characterized objects were performed to verify the system performance.
Doppler lidar power, aperture diameter, and FFT size trade-off study
NASA Astrophysics Data System (ADS)
Chester, David B.; Budge, Scott E.
2017-05-01
In the design or selection of a Doppler lidar instrument for a spacecraft landing system, it is important to evaluate the balance between performance requirements and cost, weight, and power consumption. Leveraging the capability of LadarSIM, a trade-off study was performed to evaluate the interaction between the laser transmission power, aperture diameter, and FFT size in a Doppler lidar system. For this study the probabilities of detection and false alarm were calculated using LadarSIM to simulate FMCW lidar systems with varying power, aperture diameter, and FFT size. This paper reports the results of this trade-off study.
NASA Astrophysics Data System (ADS)
Lv, Z. H.; Li, Q.; Huang, R. W.; Liu, H. M.; Liu, D.
2016-08-01
Based on a discussion of the topology of integrated distributed photovoltaic (PV) power generation and energy storage (ES) systems, in single or mixed configurations, this paper focuses on analyzing the grid-connected performance of integrated distributed photovoltaic and energy storage (PV-ES) systems and proposes a comprehensive evaluation index system. A multi-level fuzzy comprehensive evaluation method based on grey correlation degree is then proposed, and the calculations for the weight matrix and fuzzy matrix are presented step by step. Finally, a distributed integrated PV-ES power generation system connected to a 380 V low-voltage distribution network is taken as the example, and some suggestions are made based on the evaluation results.
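The aggregation step of a fuzzy comprehensive evaluation combines a weight vector W with a fuzzy (membership) matrix R to give a grade vector B = W·R. A minimal sketch with hypothetical weights and membership degrees; the grey-correlation derivation of the weights and the multi-level hierarchy are not shown:

```python
def fuzzy_evaluate(weights, fuzzy_matrix):
    """B = W * R: weighted aggregation of membership degrees per grade.

    weights: list of index weights (assumed to sum to 1).
    fuzzy_matrix: one row per index, one column per evaluation grade,
    each entry the membership degree of that index in that grade.
    """
    n_grades = len(fuzzy_matrix[0])
    return [sum(w * row[j] for w, row in zip(weights, fuzzy_matrix))
            for j in range(n_grades)]

# Two indices, two grades ("good", "poor"); all numbers hypothetical.
grades = fuzzy_evaluate([0.6, 0.4],
                        [[0.5, 0.5],
                         [0.25, 0.75]])
```

In a multi-level scheme, the grade vector of each lower-level group becomes one row of the next level's fuzzy matrix, and the aggregation is repeated upward.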
RF optics study for DSS-43 ultracone implementation
NASA Technical Reports Server (NTRS)
Lee, P.; Veruttipong, W.
1994-01-01
The Ultracone feed system will be implemented on DSS 43 to support the S-band (2.3 GHz) Galileo contingency mission. The feed system will be installed in the host country's cone, which is normally used for radio astronomy, VLBI, and holography. The design must retain existing radio-astronomy capabilities, which could be impaired by shadowing from the large S-band feed horn. Computer calculations were completed to estimate system performance and shadowing effects for various configurations of the host country's cone feed systems. Also, the DSS-43 system performance using higher gain S-band horns was analyzed. A new S-band horn design with improved return loss and cross-polarization characteristics is presented.
NASA Technical Reports Server (NTRS)
Hu, S.; Kim, M. Y.; McClellan, G. E.; Nikjoo, H.; Cucinotta, F. A.
2007-01-01
In space exploration outside the Earth's geomagnetic field, radiation exposure from solar particle events (SPEs) presents a health concern for astronauts that could impair their performance and possibly result in failure of the mission. Acute risks are especially of concern during spacewalks on the lunar surface because of the rapid onset of SPEs and science goals that involve long distances from crew habitats. Thus assessing the potential for early radiation effects under such adverse conditions is of prime importance. Here we present a biologically based mathematical model which describes the dose- and time-dependent early human responses to ionizing radiation. We examine the possible early effects on crews behind various shielding materials from exposure to some historical large SPEs on the lunar and Mars surfaces. The doses and dose rates were calculated using the BRYNTRN code (Kim, M.Y., Hu, X., and Cucinotta, F.A., Effect of Shielding Materials from SPEs on the Lunar and Mars Surface, AIAA Space 2005, paper number AIAA-2005-6653, Long Beach, CA, August 30-September 1, 2005), and the hazard of early radiation effects and performance reduction was calculated using the RIPD code (Anno, G.H., McClellan, G.E., Dore, M.A., Protracted Radiation-Induced Performance Decrement, Volume 1: Model Development, 1996, Defense Nuclear Agency: Alexandria, VA). Based on model assumptions, we show that exposure to these historical SPEs would cause early effects in crew members and impair their performance if effective shielding and medical countermeasure tactics are not provided. The calculations show that multiple occurrences of large SPEs in a short period of time significantly increase the severity of early illness; however, early death from failure of the hematopoietic system is very unlikely because of the dose rate and dose heterogeneity of SPEs.
Results from these types of calculations will be a guide in the design of protection systems and medical response strategies for astronauts in case of exposure to high-dose irradiation during future space missions.
Ab initio quantum chemical calculation of electron transfer matrix elements for large molecules
NASA Astrophysics Data System (ADS)
Zhang, Linda Yu; Friesner, Richard A.; Murphy, Robert B.
1997-07-01
Using a diabatic state formalism and pseudospectral numerical methods, we have developed an efficient ab initio quantum chemical approach to the calculation of electron transfer matrix elements for large molecules. The theory is developed at the Hartree-Fock level and validated by comparison with results in the literature for small systems. As an example of the power of the method, we calculate the electronic coupling between two bacteriochlorophyll molecules in various intermolecular geometries. Only a single self-consistent field (SCF) calculation on each of the monomers is needed to generate coupling matrix elements for all of the molecular pairs. The largest calculations performed, utilizing 1778 basis functions, required ∼14 h on an IBM 390 workstation. This is considerably less CPU time than would be required for a supermolecule adiabatic state calculation with a conventional electronic structure code.
Pavanello, Michele; Tung, Wei-Cheng; Adamowicz, Ludwik
2009-11-14
Efficient optimization of the basis set is key to achieving very high accuracy in variational calculations of molecular systems employing basis functions that are explicitly dependent on the interelectron distances. In this work we present a method for a systematic enlargement of basis sets of explicitly correlated functions based on the iterative-complement-interaction approach developed by Nakatsuji [Phys. Rev. Lett. 93, 030403 (2004)]. We illustrate the performance of the method in variational calculations of H₃, where we use explicitly correlated Gaussian functions with shifted centers. The total variational energy (-1.674 547 421 hartree) and the binding energy (-15.74 cm⁻¹) obtained in the calculation with 1000 Gaussians are the most accurate results to date.
NASA Astrophysics Data System (ADS)
Seo, Sung-Won; Kim, Young-Hyun; Lee, Jung-Ho; Choi, Jang-Young
2018-05-01
This paper presents analytical torque calculation and experimental verification of synchronous permanent magnet couplings (SPMCs) with Halbach arrays. A Halbach array is composed of various numbers of segments per pole; we calculate and compare the magnetic torques for 2, 3, and 4 segments. Firstly, based on the magnetic vector potential, and using a 2D polar coordinate system, we obtain analytical solutions for the magnetic field. Next, through a series of processes, we perform magnetic torque calculations using the derived solutions and a Maxwell stress tensor. Finally, the analytical results are verified by comparison with the results of 2D and 3D finite element analysis and the results of an experiment.
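The final step in the paper's chain, torque from the air-gap field via a Maxwell stress tensor, can be sketched numerically. The field shapes, load angle, and dimensions below are hypothetical placeholders, not the paper's Halbach-array solutions:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (H/m)

def airgap_torque(br, btheta, radius, length, n=4000):
    """Magnetic torque from the Maxwell stress tensor, integrated over a
    circle of `radius` in the air gap:
        T = (L * r^2 / mu0) * integral_0^{2pi} B_r(theta) * B_theta(theta) dtheta
    br, btheta: callables giving radial/tangential flux density (T)."""
    dtheta = 2.0 * math.pi / n
    s = sum(br(i * dtheta) * btheta(i * dtheta) for i in range(n))
    return length * radius ** 2 / MU0 * s * dtheta

# Example: p pole pairs, sinusoidal field components with load angle delta
# (all numerical values are illustrative, not from the paper)
p, B1, B2, delta = 4, 0.8, 0.6, math.pi / 6
T = airgap_torque(lambda t: B1 * math.cos(p * t - delta),
                  lambda t: B2 * math.cos(p * t),
                  radius=0.05, length=0.04)
# Closed form for this particular field pattern: (L r^2 / mu0) * pi * B1 * B2 * cos(delta)
T_exact = 0.04 * 0.05 ** 2 / MU0 * math.pi * B1 * B2 * math.cos(delta)
```

Uniform sampling of a periodic integrand converges very quickly, so the quadrature should reproduce the closed form almost exactly for this test pattern.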
Detection of cat-eye effect echo based on unit APD
NASA Astrophysics Data System (ADS)
Wu, Dong-Sheng; Zhang, Peng; Hu, Wen-Gang; Ying, Jia-Ju; Liu, Jie
2016-10-01
The cat-eye effect echo of an optical system can be detected with a CCD, but the detection range is limited to several kilometers. To achieve long-range or even ultra-long-range detection, an APD should be selected as the detector because of its high sensitivity. A detection system for the cat-eye effect echo based on a unit APD is designed in this paper. The implementation scheme and key technology of the detection system are presented. The detection performance of the system, including detection range, detection probability, and false alarm probability, is modeled. Based on the model, the performance of the detection system is analyzed using typical parameters. The results of numerical calculation show that, within a 20 km detection range, the echo signal-to-noise ratio is greater than six, the detection probability is greater than 99.9%, and the false alarm probability is less than 0.1%. To verify the detection effect, we built an experimental platform according to the design scheme and carried out field experiments. The experimental results agree well with the numerical calculations, which proves that a detection system based on a unit APD is feasible for remote detection of the cat-eye effect echo.
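The interplay between echo SNR, detection probability, and false-alarm probability quoted above can be illustrated with a textbook threshold detector on Gaussian noise. This is a generic nonfluctuating-signal sketch, not the paper's detection model; the operating point uses the abstract's quoted numbers:

```python
import math

def qfunc(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def qfunc_inv(p, lo=-10.0, hi=10.0):
    """Invert Q by bisection (Q is monotonically decreasing)."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if qfunc(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def detection_probability(snr, pfa):
    """Pd for a fixed-threshold detector in Gaussian noise: the threshold
    is set from the false-alarm spec, then Pd = Q(threshold - SNR), with
    SNR expressed as a voltage ratio."""
    threshold = qfunc_inv(pfa)
    return qfunc(threshold - snr)

# Abstract's operating point: SNR > 6, Pfa < 0.1%
pd = detection_probability(snr=6.0, pfa=1e-3)
```

Under this simple model an SNR of 6 with a 0.1% false-alarm threshold already gives a detection probability above 99%, consistent in spirit with the figures quoted in the abstract.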
Analysis for signal-to-noise ratio of hyper-spectral imaging FTIR interferometer
NASA Astrophysics Data System (ADS)
Li, Xun-niu; Zheng, Wei-jian; Lei, Zheng-gang; Wang, Hai-yang; Fu, Yan-peng
2013-08-01
The signal-to-noise ratio of a hyper-spectral imaging FTIR interferometer system plays a decisive role in the performance of the instrument, so it must be analyzed during the development process. Based on a simplified target/background model, the energy transfer model of the LWIR hyper-spectral imaging interferometer is discussed. The noise equivalent spectral radiance (NESR) of the interferometer system and its influencing factors are analyzed, and the signal-to-noise ratio (SNR) is calculated from the NESR and the incident radiance. For a typical application environment, using the U.S. Standard Atmosphere (1976 COESA) as background, setting a reasonable target/background temperature difference, and taking a Michelson spatially modulated Fourier transform interferometer as an example, the paper calculates the NESR and SNR of an interferometer system using commercially available cooled and uncooled LWIR FPA detectors. The system noise sources of the instrument are also analyzed. The results of these analyses can be used to optimize and pre-estimate the performance of the interferometer system and to assess the conditions under which different detectors are applicable. This has important guiding significance for LWIR interferometer spectrometer design.
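The NESR-to-SNR step described above amounts to dividing the differential target/background Planck radiance by the NESR. A minimal sketch, with an NESR value that is purely hypothetical rather than taken from any real FPA datasheet:

```python
import math

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(sigma_cm, temp_k):
    """Blackbody spectral radiance at wavenumber sigma_cm (cm^-1),
    returned in W / (cm^2 sr cm^-1)."""
    sigma = sigma_cm * 100.0  # cm^-1 -> m^-1
    l_si = 2.0 * H * C ** 2 * sigma ** 3 / math.expm1(H * C * sigma / (KB * temp_k))
    return l_si * 1e-2  # W/(m^2 sr m^-1) -> W/(cm^2 sr cm^-1)

def scene_snr(sigma_cm, t_target, t_background, nesr):
    """SNR = differential target/background radiance divided by the NESR."""
    dl = planck_radiance(sigma_cm, t_target) - planck_radiance(sigma_cm, t_background)
    return dl / nesr

# Illustrative LWIR case: 1000 cm^-1 (10 um), 2 K scene contrast,
# hypothetical NESR of 1e-8 W/(cm^2 sr cm^-1)
snr = scene_snr(sigma_cm=1000.0, t_target=302.0, t_background=300.0, nesr=1e-8)
```

The same division, swept over wavenumber with a measured or modeled NESR curve, gives the instrument's spectral SNR profile.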
Zhang, Xuezhu; Stortz, Greg; Sossi, Vesna; Thompson, Christopher J; Retière, Fabrice; Kozlowski, Piotr; Thiessen, Jonathan D; Goertzen, Andrew L
2013-12-07
In this study we present a method of 3D system response calculation for analytical computer simulation and statistical image reconstruction for a magnetic resonance imaging (MRI) compatible positron emission tomography (PET) insert system that uses a dual-layer offset (DLO) crystal design. The general analytical system response functions (SRFs) for detector geometric and inter-crystal penetration of coincident crystal pairs are derived first. We implemented a 3D ray-tracing algorithm with 4π sampling for calculating the SRFs of coincident pairs of individual DLO crystals. The determination of which detector blocks are intersected by a gamma ray is made by calculating the intersection of the ray with virtual cylinders with radii just inside the inner surface and just outside the outer-edge of each crystal layer of the detector ring. For efficient ray-tracing computation, the detector block and ray to be traced are then rotated so that the crystals are aligned along the X-axis, facilitating calculation of ray/crystal boundary intersection points. This algorithm can be applied to any system geometry using either single-layer (SL) or multi-layer array design with or without offset crystals. For effective data organization, a direct lines of response (LOR)-based indexed histogram-mode method is also presented in this work. SRF calculation is performed on-the-fly in both forward and back projection procedures during each iteration of image reconstruction, with acceleration through use of eight-fold geometric symmetry and multi-threaded parallel computation. To validate the proposed methods, we performed a series of analytical and Monte Carlo computer simulations for different system geometry and detector designs. The full-width-at-half-maximum of the numerical SRFs in both radial and tangential directions are calculated and compared for various system designs. 
By inspecting the sinograms obtained for different detector geometries, it can be seen that the DLO crystal design can provide better sampling density than SL or dual-layer no-offset system designs with the same total crystal length. The results of the image reconstruction with SRFs modeling for phantom studies exhibit promising image recovery capability for crystal widths of 1.27-1.43 mm and top/bottom layer lengths of 4/6 mm. In conclusion, we have developed efficient algorithms for system response modeling of our proposed PET insert with DLO crystal arrays. This provides an effective method for both 3D computer simulation and quantitative image reconstruction, and will aid in the optimization of our PET insert system with various crystal designs.
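The block-selection step described above, deciding which detector blocks a gamma ray can intersect by testing the ray against virtual cylinders, reduces to a 2D point-line distance for a z-aligned cylinder; the subsequent rotation aligns a block with the X-axis before crystal-boundary tracing. A sketch of both steps (not the authors' code):

```python
import math

def ray_hits_cylinder(p1, p2, radius):
    """Test whether the line of response through 3D points p1 and p2
    intersects an infinite cylinder of the given radius about the z-axis.
    Only the transaxial (x, y) projection matters for a z-aligned cylinder."""
    x1, y1 = p1[0], p1[1]
    x2, y2 = p2[0], p2[1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy)
    if norm == 0.0:  # purely axial ray: constant transaxial position
        return math.hypot(x1, y1) < radius
    # perpendicular distance from the cylinder axis (the origin) to the 2D line
    dist = abs(dx * y1 - dy * x1) / norm
    return dist < radius

def rotate_to_x_axis(p, angle):
    """Rotate a point by -angle about z, so a detector block located at
    azimuthal `angle` lines up with the X-axis, as in the paper's
    ray-tracing step that simplifies crystal-boundary intersections."""
    c, s = math.cos(-angle), math.sin(-angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1], p[2])
```

In an actual SRF calculation these two helpers would be called per sampled ray, with the cylinder radii set just inside and just outside each crystal layer of the detector ring.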
MR Imaging Based Treatment Planning for Radiotherapy of Prostate Cancer
2008-02-01
Radiotherapy, MR-based treatment planning, dosimetry, Monte Carlo dose verification, prostate cancer, MRI-based DRRs ... The AcQPlan system Version 5 was used for the study, which is capable of performing dose calculation on both CT and MRI. A four-field 3D conformal planning ... prostate motion studies for 3DCRT and IMRT of prostate cancer; (2) to investigate and improve the accuracy of MRI-based treatment planning dose calculation
The polyGeVero® software for fast and easy computation of 3D radiotherapy dosimetry data
NASA Astrophysics Data System (ADS)
Kozicki, Marek; Maras, Piotr
2015-01-01
The polyGeVero® software package was developed for calculations on 3D dosimetry data such as polymer gel dosimetry. It comprises four workspaces designed for: i) calculating calibrations, ii) storing calibrations in a database, iii) calculating 3D dose distribution cubes, and iv) comparing two datasets, e.g. one measured with a 3D dosimeter against one calculated with a treatment planning system. To accomplish these calculations the software is equipped with a number of tools, such as a brachytherapy isotope database, brachytherapy dose versus distance calculation based on the line approximation approach, automatic spatial alignment of two 3D dose cubes for comparison purposes, 3D gamma index, 3D gamma angle, 3D dose difference, Pearson's coefficient, histogram calculations, isodose superimposition for two datasets, and profile calculations in any desired direction. This communication briefly presents the main functions of the software and reports on the speed of calculations performed by polyGeVero®.
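Among the listed tools, the 3D gamma index is the one most often reimplemented. A brute-force sketch under a global dose-difference criterion is below; the search radius, criteria, and data layout are illustrative choices, not polyGeVero®'s implementation:

```python
import math

def gamma_index(dose_eval, dose_ref, spacing, dta=3.0, dd=0.03, search=3):
    """Brute-force 3D gamma index with a global dose-difference criterion.
    dose_eval/dose_ref: nested [z][y][x] lists on the same grid.
    spacing: voxel size (mm); dta: distance-to-agreement (mm);
    dd: dose-difference criterion as a fraction of the reference maximum."""
    nz, ny, nx = len(dose_ref), len(dose_ref[0]), len(dose_ref[0][0])
    dmax = max(max(max(row) for row in plane) for plane in dose_ref)
    tol = dd * dmax
    out = [[[0.0] * nx for _ in range(ny)] for _ in range(nz)]
    for z in range(nz):
        for y in range(ny):
            for x in range(nx):
                best = float("inf")
                for dz in range(-search, search + 1):
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            zz, yy, xx = z + dz, y + dy, x + dx
                            if not (0 <= zz < nz and 0 <= yy < ny and 0 <= xx < nx):
                                continue
                            r2 = (dz * dz + dy * dy + dx * dx) * spacing ** 2
                            dd2 = (dose_eval[zz][yy][xx] - dose_ref[z][y][x]) ** 2
                            best = min(best, r2 / dta ** 2 + dd2 / tol ** 2)
                out[z][y][x] = math.sqrt(best)
    return out

# Sanity check: identical cubes must yield gamma = 0 everywhere
identical = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
gmap = gamma_index(identical, identical, spacing=2.0)
```

Production implementations (polyGeVero® included, presumably) use interpolation and pruned searches rather than this exhaustive neighborhood scan, but the pass/fail criterion (gamma ≤ 1) is the same.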
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Z; Vijayan, S; Rana, V
2015-06-15
Purpose: A system was developed that automatically calculates the organ and effective dose for individual fluoroscopically guided procedures using a log of the clinical exposure parameters. Methods: We have previously developed a dose tracking system (DTS) to provide a real-time color-coded 3D mapping of skin dose. This software produces a log file of all geometry and exposure parameters for every x-ray pulse during a procedure. The data in the log files are input into PCXMC, a Monte Carlo program that calculates organ and effective dose for projections and exposure parameters set by the user. We developed a MATLAB program to read data from the log files produced by the DTS and to automatically generate the definition files in the format used by PCXMC. The processing is done at the end of a procedure after all exposures are completed. Since there are thousands of exposure pulses with various parameters for fluoroscopy, DA, and DSA at various projections, the data for exposures with similar parameters are grouped prior to entry into PCXMC to reduce the number of Monte Carlo calculations that need to be performed. Results: The software automatically transfers data from the DTS log file to PCXMC and runs the program for each grouping of exposure pulses. When the doses from all exposure events have been calculated, the doses for each organ and all effective doses are summed to obtain procedure totals. For a complicated interventional procedure, the calculations can be completed on a PC without manual intervention in less than 30 minutes, depending on the level of data grouping. Conclusion: This system allows organ dose to be calculated for individual procedures for every patient without tedious calculations or data entry, so that estimates of stochastic risk can be obtained in addition to the deterministic risk estimate provided by the DTS. Partial support from NIH grant R01EB002873 and Toshiba Medical Systems Corp.
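The pulse-grouping step that keeps the number of PCXMC runs manageable can be sketched as a dictionary keyed on binned technique factors. The field names and bin width below are hypothetical, not the DTS log format:

```python
from collections import defaultdict

def group_exposures(log_rows, angle_bin_deg=5.0):
    """Group x-ray pulses with similar technique factors so that one Monte
    Carlo run per group suffices. Rows are dicts with hypothetical keys
    'kvp', 'filtration_mm', 'lao_rao_deg', 'cran_caud_deg', 'mas'.
    Gantry angles are binned; mAs within each group is summed so the
    group's dose scales the single MC result."""
    groups = defaultdict(float)
    for row in log_rows:
        key = (row["kvp"],
               row["filtration_mm"],
               round(row["lao_rao_deg"] / angle_bin_deg) * angle_bin_deg,
               round(row["cran_caud_deg"] / angle_bin_deg) * angle_bin_deg)
        groups[key] += row["mas"]
    return dict(groups)

# Three pulses, two of which share technique factors within the angle bin:
pulses = [
    {"kvp": 80, "filtration_mm": 0.2, "lao_rao_deg": 1.0, "cran_caud_deg": 0.0, "mas": 2.0},
    {"kvp": 80, "filtration_mm": 0.2, "lao_rao_deg": 2.0, "cran_caud_deg": 0.0, "mas": 3.0},
    {"kvp": 90, "filtration_mm": 0.2, "lao_rao_deg": 30.0, "cran_caud_deg": 0.0, "mas": 1.5},
]
grouped = group_exposures(pulses)  # the two 80 kVp pulses merge into one group
```

The trade-off is exactly the one the abstract notes: coarser bins mean fewer Monte Carlo runs but a slightly less faithful geometry per group.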
A survey of physics and dosimetry practice of permanent prostate brachytherapy in the United States.
Prete, J J; Prestidge, B R; Bice, W S; Friedland, J L; Stock, R G; Grimm, P D
1998-03-01
To obtain data on current physics and dosimetry practice in transperineal interstitial permanent prostate brachytherapy (TIPPB) in the U.S., a survey was conducted of the institutions performing this procedure with the greatest frequency. The seventy brachytherapists with the greatest volume of TIPPB cases in 1995 in the U.S. were surveyed. The four-page comprehensive questionnaire included questions on both clinical and physics and dosimetry practice. Individuals not responding initially were sent additional mailings and telephoned. Physics and dosimetry practice summary statistics are reported here; clinical practice data are reported separately. Thirty-five (50%) surveys were returned. Participants included 29 (83%) from the private sector and 6 (17%) from academic programs. Among responding clinicians, 125I (89%) is used with greater frequency than 103Pd (83%), and many use both (71%). Most brachytherapists perform preplans (86%), predominantly employing ultrasound imaging (85%). Commercial treatment planning systems are used more frequently (75%) than in-house systems (25%). Preplans take 2.5 h (avg.) to perform and are most commonly performed by a physicist (69%). A wide range of apparent activities (mCi) is used for both 125I (0.16-1.00, avg. 0.41) and 103Pd (0.50-1.90, avg. 1.32). Of those assaying sources (71%), the number assayed (1 to all) and the maximum accepted difference from vendor-stated activity (2-20%) vary greatly. Most respondents feel that the manufacturers' criteria for source activity are sufficiently stringent (88%); however, some report that vendors do not always meet their criteria (44%). Most postimplant dosimetry imaging occurs on day 1 (41%) and consists of conventional x-rays (83%), CT (63%), or both (46%). Postimplant dosimetry is usually performed by a physicist (72%), taking 2 h (avg.) to complete. Calculational formalisms and parameters vary substantially.
At the time of the survey, few institutions had adopted the AAPM TG-43 recommendations (21%), and only half (50%) of those not using TG-43 indicated an intent to do so in the future. Calculated doses at 1 cm from a single permanently implanted source of 1 mCi apparent activity varied significantly: for 125I, calculated doses ranged from 13.08 to 40.00 Gy, and for 103Pd, from 3.10 to 8.70 Gy. While several areas of current physics and dosimetry practice are consistent among institutions, treatment planning and dose calculation techniques vary considerably. These data demonstrate a relative lack of consensus with regard to these practices. Furthermore, the wide variety of calculational techniques and benchmark data leads to calculated doses that vary by clinically significant amounts. It is apparent that the lack of standardization in treatment planning and dose calculation practice in TIPPB must be addressed before any meaningful comparison of clinical results between institutions can be performed.
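The dose-at-1-cm comparison above is exactly what the AAPM TG-43 point-source formalism standardizes. A sketch of that formalism, with illustrative (not vendor or consensus) parameter values:

```python
def tg43_point_dose_rate(sk, Lambda, r_cm, g, phi_an, r0=1.0):
    """TG-43 point-source approximation:
        Ddot(r) = S_K * Lambda * (r0 / r)^2 * g(r) * phi_an(r)
    sk: air-kerma strength (U); Lambda: dose-rate constant (cGy/h/U);
    g: radial dose function; phi_an: anisotropy factor (both callables);
    r0: reference distance, 1 cm by convention."""
    return sk * Lambda * (r0 / r_cm) ** 2 * g(r_cm) * phi_an(r_cm)

# Illustrative numbers only: at the reference point r = 1 cm, g(1) and
# phi_an(1) are 1 by definition, so the rate reduces to S_K * Lambda.
rate = tg43_point_dose_rate(sk=0.5, Lambda=0.965, r_cm=1.0,
                            g=lambda r: 1.0, phi_an=lambda r: 1.0)
```

The survey's spread of calculated doses corresponds, in this picture, to institutions using different dose-rate constants, radial dose functions, and pre-TG-43 formalisms for the same nominal source strength.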
A control system design approach for flexible spacecraft
NASA Technical Reports Server (NTRS)
Silverberg, L. M.
1985-01-01
A control system design approach for flexible spacecraft is presented. The control system design is carried out in two steps. The first step consists of determining the ideal control system in terms of a desirable dynamic performance. The second step consists of designing a control system, using a limited number of actuators, whose dynamic performance is close to the ideal. The effect of using a limited number of actuators is that the actual closed-loop eigenvalues differ from the ideal closed-loop eigenvalues. A method is presented to approximate the actual closed-loop eigenvalues so that their direct calculation can be avoided. Depending on the application, it may also be desirable to apply the control forces as impulses. The effect of digitizing the control to produce the appropriate impulses is also examined.
LANL compact laser pumping simulation. Final task report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feldman, B.S.; White, J.
1987-09-28
Rockwell has been tasked with the objective of both qualitatively and quantitatively defining the performance of LANL Compact Laser coupling systems. The performance criteria of the system will be based upon the magnitude and uniformity of the energy distribution in the laser pumping rod. Once this is understood, it will then be possible to improve the device performance via changes in the system's component parameters. For this study, the authors have chosen to use the Los Alamos Radiometry Code (LARC), which was previously developed by Rockwell. LARC, as an analysis tool, is well suited for this problem because the code contains the needed photometric calculation capability and easily handles the three-dimensionality of the problem. Also, LARC's internal graphics can provide very informative visual displays of the optical system.
Dynamic stability experiment of Maglev systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Mulcahy, T.M.; Chen, S.S.
1995-04-01
This report summarizes the research performed on Maglev vehicle dynamic stability at Argonne National Laboratory during the past few years. It also documents magnetic-force data obtained from both measurements and calculations. Because dynamic instability is not acceptable for any commercial Maglev system, it is important to consider this phenomenon in the development of all Maglev systems. This report presents dynamic stability experiments on Maglev systems and compares their numerical simulation with predictions calculated by a nonlinear dynamic computer code. Instabilities of an electrodynamic system (EDS)-type vehicle model were obtained from both experimental observations and computer simulations for a five-degree-of-freedom Maglev vehicle moving on a guideway consisting of double L-shaped aluminum segments attached to a rotating wheel. The experimental and theoretical analyses developed in this study identify basic stability characteristics and future research needs of Maglev systems.
Dynamic stability of repulsive-force maglev suspension systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Y.; Rote, D.M.; Mulcahy, T.M.
1996-11-01
This report summarizes the research performed on maglev vehicle dynamic stability at Argonne National Laboratory during the past few years. It also documents both measured and calculated magnetic-force data. Because dynamic instability is not acceptable for any commercial maglev system, it is important to consider this phenomenon in the development of all maglev systems. This report presents dynamic stability experiments on maglev systems and compares the results with predictions calculated by a nonlinear-dynamics computer code. Instabilities of an electrodynamic-suspension system type vehicle model were obtained by experimental observation and computer simulation of a five-degree-of-freedom maglev vehicle moving on a guideway that consists of a pair of L-shaped aluminum conductors attached to a rotating wheel. The experimental and theoretical analyses developed in this study identify basic stability characteristics and future research needs of maglev systems.
Methodology update for estimating volume to service flow ratio.
DOT National Transportation Integrated Search
2015-12-01
Volume/service flow ratio (VSF) is calculated by the Highway Performance Monitoring System (HPMS) software as an indicator of peak hour congestion. It is an essential input to the Kentucky Transportation Cabinet's (KYTC) key planning applications, ...
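Conceptually, the VSF indicator divides a design-hour directional volume by the section's service flow rate. A sketch with hypothetical factors (HPMS's actual procedure handles many more cases and facility types):

```python
def volume_service_flow_ratio(aadt, k_factor, d_factor, service_flow_vph):
    """Peak-hour congestion indicator in the HPMS sense:
        V   = AADT * K * D   (design-hour volume in the peak direction, veh/h)
        VSF = V / SF
    k_factor: design-hour share of AADT; d_factor: directional split;
    service_flow_vph: section service flow rate (capacity), veh/h."""
    peak_volume = aadt * k_factor * d_factor
    return peak_volume / service_flow_vph

# Hypothetical section: 40,000 AADT, K = 0.10, D = 0.55, SF = 3,600 veh/h
vsf = volume_service_flow_ratio(40000, 0.10, 0.55, 3600)
```

Values approaching or exceeding 1.0 flag sections operating at or over capacity during the design hour.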
Nazarov, Roman; Shulenburger, Luke; Morales, Miguel A.; ...
2016-03-28
We performed diffusion Monte Carlo (DMC) calculations of the spectroscopic properties of a large set of molecules, assessing the effect of different approximations. In systems containing elements with large atomic numbers, we show that the errors associated with the use of nonlocal mean-field-based pseudopotentials in DMC calculations can be significant and may surpass the fixed-node error. In conclusion, we suggest practical guidelines for reducing these pseudopotential errors, which allow us to obtain DMC-computed spectroscopic parameters of molecules and equation of state properties of solids in excellent agreement with experiment.
A GPU-accelerated and Monte Carlo-based intensity modulated proton therapy optimization system.
Ma, Jiasen; Beltran, Chris; Seum Wan Chan Tseung, Hok; Herman, Michael G
2014-12-01
Conventional spot scanning intensity modulated proton therapy (IMPT) treatment planning systems (TPSs) optimize proton spot weights based on analytical dose calculations. These analytical dose calculations have been shown to have severe limitations in heterogeneous materials. Monte Carlo (MC) methods do not have these limitations; however, MC-based systems have been of limited clinical use due to the large number of beam spots in IMPT and the extremely long calculation time of traditional MC techniques. In this work, the authors present a clinically applicable IMPT TPS that utilizes a very fast MC calculation. An in-house graphics processing unit (GPU)-based MC dose calculation engine was employed to generate the dose influence map for each proton spot. With the MC-generated influence map, a modified least-squares optimization method was used to achieve the desired dose volume histograms (DVHs). The intrinsic CT image resolution was adopted for voxelization in simulation and optimization to preserve spatial resolution. The optimizations were computed on a multi-GPU framework to mitigate the memory limitation issues for the large dose influence maps that resulted from maintaining the intrinsic CT resolution. The effects of tail cutoff and starting condition were studied and minimized in this work. For relatively large and complex three-field head and neck cases, i.e., >100,000 spots with a target volume of ∼1000 cm³ and multiple surrounding critical structures, the optimization together with the initial MC dose influence map calculation was done in a clinically viable time frame (less than 30 min) on a GPU cluster consisting of 24 Nvidia GeForce GTX Titan cards. The in-house MC TPS plans were comparable to commercial TPS plans based on DVH comparisons. An MC-based treatment planning system was developed. The treatment planning can be performed in a clinically viable time frame on a hardware system costing around 45,000 dollars.
The fast calculation and optimization make the system easily expandable to robust and multicriteria optimization.
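The least-squares spot-weight optimization described above can be sketched, at toy scale, as nonnegative least squares on the dose influence matrix. Projected gradient descent stands in here for whatever modified solver the authors actually used:

```python
def optimize_spot_weights(influence, target, iters=2000, step=None):
    """Nonnegative least-squares sketch of spot-weight optimization:
        minimize ||D w - d||^2  subject to  w >= 0
    via projected gradient descent. influence: rows D[i][j] = dose to
    voxel i per unit weight of spot j; target: prescribed voxel doses."""
    nv, ns = len(influence), len(influence[0])
    if step is None:
        # crude step size from the squared Frobenius norm of D
        step = 1.0 / sum(v * v for row in influence for v in row)
    w = [0.0] * ns
    for _ in range(iters):
        resid = [sum(influence[i][j] * w[j] for j in range(ns)) - target[i]
                 for i in range(nv)]
        for j in range(ns):
            grad = 2.0 * sum(influence[i][j] * resid[i] for i in range(nv))
            w[j] = max(0.0, w[j] - step * grad)  # project onto w >= 0
    return w

# Toy problem: two spots, two voxels, exact nonnegative solution w = (1, 2)
D = [[1.0, 0.0], [0.0, 1.0]]
d = [1.0, 2.0]
w = optimize_spot_weights(D, d)
```

At clinical scale the influence matrix is enormous and sparse, which is why the paper distributes it across multiple GPUs; the optimization objective, however, has this same least-squares shape.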
Technical Manual for the SAM Physical Trough Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, M. J.; Gilman, P.
2011-06-01
NREL, in conjunction with Sandia National Laboratories and the U.S. Department of Energy, developed the System Advisor Model (SAM) analysis tool for renewable energy system performance and economic analysis. This paper documents the technical background and engineering formulation for one of the two parabolic trough system models in SAM. The Physical Trough model calculates performance relationships based on physical first principles where possible, allowing the modeler to predict electricity production for a wider range of component geometries than is possible in the Empirical Trough model. This document describes the major parabolic trough plant subsystems in detail, including the solar field, power block, thermal storage, piping, auxiliary heating, and control systems. The model makes use of both existing subsystem performance modeling approaches and new approaches developed specifically for SAM.
Progress on China nuclear data processing code system
NASA Astrophysics Data System (ADS)
Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu
2017-09-01
China is developing the nuclear data processing code Ruler, which can be used for producing multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the full energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved energy range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. Ruler is written in Fortran-90 and has been tested on 32-bit computers under Windows XP and Linux. Verification of Ruler has been performed by comparison with results obtained with the NJOY99 [3] processing code, and validation has been performed using the WIMSD5B code.
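The group-collapse step such a processing code performs is a flux-weighted average of pointwise cross sections over each energy group. The sketch below uses an assumed 1/E weighting spectrum as a stand-in for Ruler's actual weighting options:

```python
def group_cross_sections(energies, sigma, group_bounds):
    """Flux-weighted multigroup collapse:
        sigma_g = sum(sigma_i * phi_i) / sum(phi_i)  over points in group g,
    with an assumed 1/E weighting flux standing in for the true spectrum.
    energies: pointwise energy grid (eV), ascending;
    sigma: pointwise cross sections; group_bounds: ascending group edges."""
    ngroups = len(group_bounds) - 1
    num = [0.0] * ngroups
    den = [0.0] * ngroups
    for e, s in zip(energies, sigma):
        for g in range(ngroups):
            if group_bounds[g] <= e < group_bounds[g + 1]:
                phi = 1.0 / e  # assumed weighting spectrum
                num[g] += s * phi
                den[g] += phi
                break
    return [n / d if d > 0.0 else 0.0 for n, d in zip(num, den)]

# Sanity check: a constant pointwise cross section must collapse to the
# same constant in every group, whatever the weighting.
sig_g = group_cross_sections([1.0, 2.0, 5.0, 20.0, 50.0],
                             [3.0, 3.0, 3.0, 3.0, 3.0],
                             [1.0, 10.0, 100.0])
```

Real processing codes integrate the pointwise data rather than summing discrete samples, and offer several built-in weighting spectra, but the collapse formula has this shape.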
Software Package Completed for Alloy Design at the Atomic Level
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo H.; Noebe, Ronald D.; Abel, Phillip B.; Good, Brian S.
2001-01-01
As a result of a multidisciplinary effort involving solid-state physics, quantum mechanics, and materials and surface science, the first version of a software package dedicated to the atomistic analysis of multicomponent systems was recently completed. Based on the BFS (Bozzolo, Ferrante, and Smith) method for the calculation of alloy and surface energetics, this package includes modules devoted to the analysis of many essential features that characterize any given alloy or surface system, including (1) surface structure analysis, (2) surface segregation, (3) surface alloying, (4) bulk crystalline material properties and atomic defect structures, and (5) thermal processes that allow us to perform phase diagram calculations. All the modules of this Alloy Design Workbench 1.0 (ADW 1.0) are designed to run in PC and workstation environments, and their operation and performance are substantially linked to the needs of the user and the specific application.
Kesteloot, K; Dutreix, A; van der Schueren, E
1993-08-01
The costs of in vivo dosimetry and portal imaging in radiotherapy are estimated on the basis of a detailed overview of the activities involved in both quality assurance techniques. These activities require the availability of equipment, the use of materials, and workload. The cost calculations allow us to conclude that for most departments in vivo dosimetry with diodes will be a cheaper alternative than in vivo dosimetry with TLD meters. Whether TLD measurements can be performed more cheaply with an automatic reader (higher equipment cost but lower workload) or with a semi-automatic reader (lower equipment cost but higher workload) depends on the number of checks in the department. LSP systems (with a very high equipment cost) as well as on-line imaging systems will be cheaper portal imaging techniques than conventional port films (with high material costs) for large departments, or for smaller departments that perform frequent volume checks.
Scaling-up vaccine production: implementation aspects of a biomass growth observer and controller.
Soons, Zita I T A; van den IJssel, Jan; van der Pol, Leo A; van Straten, Gerrit; van Boxtel, Anton J B
2009-04-01
This study considers two aspects of the implementation of a biomass growth observer and specific growth rate controller in scale-up from small- to pilot-scale bioreactors towards a feasible bulk production process for whole-cell vaccine against whooping cough. The first is the calculation of the oxygen uptake rate, the starting point for online monitoring and control of biomass growth, taking into account the dynamics in the gas phase. Mixing effects and delays are caused by, among other things, the headspace and the tubing to the analyzer. These gas-phase dynamics are modelled using knowledge of the system in order to reconstruct the oxygen consumption. The second aspect is to evaluate the performance of the monitoring and control system, with the required modifications of the oxygen consumption calculation, at pilot scale. In pilot-scale fed-batch cultivation, good monitoring and control performance is obtained, enabling a doubled concentration of bulk vaccine compared to standard batch production.
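The gas-phase correction described above can be illustrated with the simplest possible case: a single first-order lag for headspace/tubing mixing, inverted by adding back the scaled derivative of the analyzer signal. The time constant and signals below are synthetic, not the study's data:

```python
def reconstruct_offgas(measured, tau, dt):
    """Invert a first-order mixing/transport lag with time constant tau:
        dy/dt = (u - y) / tau   =>   u = y + tau * dy/dt
    so the reactor-side signal u is recovered from the delayed analyzer
    signal y by adding back the scaled derivative (backward difference)."""
    u = [measured[0]]
    for k in range(1, len(measured)):
        dydt = (measured[k] - measured[k - 1]) / dt
        u.append(measured[k] + tau * dydt)
    return u

# Forward-simulate the lag for a step in off-gas composition, then invert it:
tau, dt = 30.0, 1.0          # hypothetical 30 s analyzer-path time constant
true_u = [0.0] * 10 + [1.0] * 90
y, yk = [], 0.0
for uk in true_u:
    yk += dt * (uk - yk) / tau  # explicit Euler of the gas-phase model
    y.append(yk)
rec = reconstruct_offgas(y, tau, dt)
```

Derivative inversion amplifies measurement noise, which is why observer-based reconstructions like the one in the study are preferred on real signals; the sketch only shows why the gas-phase dynamics must be compensated at all.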
Selection of a computer code for Hanford low-level waste engineered-system performance assessment
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGrail, B.P.; Mahoney, L.A.
Planned performance assessments for the proposed disposal of low-level waste (LLW) glass produced from remediation of wastes stored in underground tanks at Hanford, Washington will require calculations of radionuclide release rates from the subsurface disposal facility. These calculations will be done with the aid of computer codes. Currently available computer codes were ranked in terms of the feature sets implemented in the code that match a set of physical, chemical, numerical, and functional capabilities needed to assess release rates from the engineered system. The needed capabilities were identified from an analysis of the important physical and chemical processes expected to affect LLW glass corrosion and the mobility of radionuclides. The highest ranked computer code was found to be the ARES-CT code, developed at PNL for the US Department of Energy for evaluation of land disposal sites.
Noise studies of communication systems using the SYSTID computer aided analysis program
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Dawson, C. T.
1973-01-01
SYSTID is a simple computer-aided design program for simulating data systems and communication links. The efficiency of the method was tested by simulating a linear analog communication system to determine its noise performance and comparing the SYSTID result with the result of theoretical calculation. It is shown that the SYSTID program is readily applicable to the analysis of these types of systems.
Investigation of the photovoltaic cell/ thermoelectric element hybrid system performance
NASA Astrophysics Data System (ADS)
Cotfas, D. T.; Cotfas, P. A.; Machidon, O. M.; Ciobanu, D.
2016-06-01
The PV/TEG hybrid system, consisting of photovoltaic cells and a thermoelectric element, is presented in this paper. The dependence of the PV/TEG hybrid system parameters on illumination level and temperature is analysed. The maximum power values of the photovoltaic cell, the thermoelectric element, and the PV/TEG system are calculated, and a comparison between them is presented and analysed. An economic analysis is also presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Elander, N.; Oddershede, J.; Beebe, N.H.F.
1977-08-15
Polarization propagator calculations of spectroscopic constants and radiative lifetimes for the A ¹Π–X ¹Σ⁺ band system are presented. The spectroscopic constants agree well with experimental and other theoretical values. We have also performed an iterative Rydberg-Klein-Rees (RKR) calculation of B_e, ω_e, and ω_e·x_e for the experimental X ¹Σ⁺ state. The calculated radiative lifetime for the A ¹Π state (v'=0) is 660 ns and 598 ns with theoretical and experimental potential energy curves, respectively. This difference (about 60 ns) indicates the inaccuracy in the present calculation. Experimentally, the most recent estimate for the A ¹Π (v'=0) state is 630 ± 50 ns, and theoretically the Yoshimine et al. transition moment gives τ(A ¹Π, v'=0) = 722 ns. The radiative lifetimes calculated for CD⁺ are between 1.3% and 3.9% larger than the corresponding CH⁺ lifetimes.
Cross-Layer Design for Space-Time coded MIMO Systems over Rice Fading Channel
NASA Astrophysics Data System (ADS)
Yu, Xiangbin; Zhou, Tingting; Liu, Xiaoshuai; Yin, Xin
A cross-layer design (CLD) scheme for space-time coded MIMO systems over the Rice fading channel is presented by combining adaptive modulation and automatic repeat request, and the corresponding system performance is investigated in detail. The fading gain switching thresholds subject to a target packet error rate (PER) and a fixed power constraint are derived. From these results, and using the generalized Marcum Q-function, the calculation formulae for the average spectral efficiency (SE) and PER of the system with CLD are derived. As a result, closed-form expressions for average SE and PER are obtained. These expressions include some existing expressions for the Rayleigh channel as special cases. With these expressions, the system performance in the Rice fading channel is evaluated effectively. Numerical results verify the validity of the theoretical analysis. The results show that the system performance in the Rice channel improves as the Rice factor increases, and outperforms that in the Rayleigh channel.
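The closed-form SE/PER expressions above rest on the generalized Marcum Q-function. As a minimal numerical sketch (not the paper's derivation), the first-order Marcum Q-function can be evaluated directly from its integral definition using only the standard library; the step count and integration upper limit below are illustrative choices:

```python
import math

def bessel_i0(x):
    # Modified Bessel function of the first kind, order 0, via its power series
    total, term, k = 1.0, 1.0, 0
    while term > 1e-16 * total:
        k += 1
        term *= (x * x / 4.0) / (k * k)
        total += term
    return total

def marcum_q1(a, b, steps=4000, upper=25.0):
    # Q1(a, b) = integral from b to infinity of x*exp(-(x^2+a^2)/2)*I0(a*x) dx,
    # approximated here with the trapezoidal rule on [b, upper]
    h = (upper - b) / steps
    def f(x):
        return x * math.exp(-(x * x + a * a) / 2.0) * bessel_i0(a * x)
    s = 0.5 * (f(b) + f(upper))
    for i in range(1, steps):
        s += f(b + i * h)
    return s * h
```

For a = 0 the integral has the closed form exp(-b²/2), which gives a quick sanity check of the quadrature.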
NASA Astrophysics Data System (ADS)
Abbatiello, L. A.; Nephew, E. A.; Ballou, M. L.
1981-03-01
The efficiency and life cycle costs of the brine chiller minimal annual cycle energy system (ACES) for residential space heating, air conditioning, and water heating requirements are compared with those of three conventional systems. The conventional systems evaluated are a high performance air-to-air heat pump with an electric resistance water heater, an electric furnace with a central air conditioner and an electric resistance water heater, and a high performance air-to-air heat pump with a superheater unit for hot water production. Monthly energy requirements for a reference single family house are calculated, and the initial cost and annual energy consumption of the systems, providing identical energy services, are computed and compared. The ACES consumes one third to one half of the electrical energy required by the conventional systems and delivers the same annual loads at comparable costs.
The effects of scattering on the relative LPI performance of optical and mm-wave systems
NASA Astrophysics Data System (ADS)
Oetting, John; Hampton, Jerry
1988-01-01
Previous results comparing the LPI performance of optical and millimeter-wave satellite systems are extended to include the effects of scattering on optical LPI performance. The LPI figure of merit used to compare the two media is the circular equivalent vulnerability radius (CEVR). The CEVR is calculated for typical optical and spread spectrum millimeter-wave systems, and the LPI performance tradeoffs available with each medium are compared. Attention is given to the possibility that light will be scattered into the interceptor's FOV and thereby enable detection in geometries in which interception of the main beam is impossible. The effects of daytime vs. nighttime operation of the optical LPI system are also considered. Some illustrative results for the case of a ground-to-space uplink to a low earth orbit satellite are presented, along with some conclusions and unresolved issues for further study.
Summary of photovoltaic system performance models
NASA Technical Reports Server (NTRS)
Smith, J. H.; Reiter, L. J.
1984-01-01
A detailed overview of photovoltaics (PV) performance modeling capabilities developed for analyzing PV system and component design and policy issues is provided. A set of 10 performance models are selected which span a representative range of capabilities from generalized first order calculations to highly specialized electrical network simulations. A set of performance modeling topics and characteristics is defined and used to examine some of the major issues associated with photovoltaic performance modeling. Each of the models is described in the context of these topics and characteristics to assess its purpose, approach, and level of detail. The issues are discussed in terms of the range of model capabilities available and summarized in tabular form for quick reference. The models are grouped into categories to illustrate their purposes and perspectives.
Hagiwara, Yohsuke; Tateno, Masaru
2010-10-20
We review recent research on the functional mechanisms of biological macromolecules using theoretical methodologies coupled to ab initio quantum mechanical (QM) treatments of reaction centers in proteins and nucleic acids. Since such biological molecules are in most cases large, the computational costs of performing ab initio calculations for the entire structures are prohibitive. Instead, simulations coupled with molecular mechanics (MM) calculations are crucial to evaluate the long-range electrostatic interactions, which significantly affect the electronic structures of biological macromolecules. Thus, we focus our attention on the methodologies/schemes and applications of coupled QM/MM calculations, and discuss the critical issues to be elucidated in biological macromolecular systems. © 2010 IOP Publishing Ltd
Comparative evaluation of distributed-collector solar thermal electric power plants
NASA Technical Reports Server (NTRS)
Fujita, T.; El Gabalawi, N.; Herrera, G. G.; Caputo, R. S.
1978-01-01
Distributed-collector solar thermal-electric power plants are compared by projecting power plant economics of selected systems to the 1990-2000 timeframe. The approach taken is to evaluate the performance of the selected systems under the same weather conditions. Capital and operational costs are estimated for each system. Energy costs are calculated for different plant sizes based on the plant performance and the corresponding capital and maintenance costs. Optimum systems are then determined as those with the minimum energy costs for a given load factor. The optimum system comprises the best combination of subsystems that gives the minimum energy cost for every plant size. Sensitivity analysis is done around the optimum point for various plant parameters.
Wei, Z G; Macwan, A P; Wieringa, P A
1998-06-01
In this paper we quantitatively model degree of automation (DofA) in supervisory control as a function of the number and nature of tasks to be performed by the operator and automation. This model uses a task weighting scheme in which weighting factors are obtained from task demand load, task mental load, and task effect on system performance. The computation of DofA is demonstrated using an experimental system. Based on controlled experiments using operators, analyses of the task effect on system performance, the prediction and assessment of task demand load, and the prediction of mental load were performed. Each experiment had a different DofA. The effect of a change in DofA on system performance and mental load was investigated. It was found that system performance became less sensitive to changes in DofA at higher levels of DofA. The experimental data showed that when the operator controlled a partly automated system, perceived mental load could be predicted from the task mental load for each task component, as calculated by analyzing a situation in which all tasks were manually controlled. Actual or potential applications of this research include a methodology to balance and optimize the automation of complex industrial systems.
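The degree-of-automation computation described above can be sketched as a weighted fraction of tasks handled by automation; the task names and weights below are hypothetical, not the paper's experimental values:

```python
# Degree of automation (DofA) as a weighted fraction of tasks handled by
# automation. Each task carries a weight combining demand load, mental load,
# and effect on system performance (the weighting scheme is the paper's idea;
# the task set and numbers below are made up for illustration).
tasks = {
    "monitor_alarms":  {"weight": 0.35, "automated": True},
    "adjust_setpoint": {"weight": 0.25, "automated": False},
    "log_events":      {"weight": 0.15, "automated": True},
    "fault_diagnosis": {"weight": 0.25, "automated": False},
}

def degree_of_automation(tasks):
    total = sum(t["weight"] for t in tasks.values())
    automated = sum(t["weight"] for t in tasks.values() if t["automated"])
    return automated / total
```

With the example weights above, `degree_of_automation(tasks)` returns 0.5, i.e. half of the weighted task load is automated.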
NASA Astrophysics Data System (ADS)
Imamura, Seigo; Ono, Kenji; Yokokawa, Mitsuo
2016-07-01
Ensemble computing, which is an instance of capacity computing, is an effective computing scenario for exascale parallel supercomputers. In ensemble computing, there are multiple linear systems that share a common coefficient matrix. We improve the performance of iterative solvers for multiple right-hand-side vectors by solving them at the same time, that is, by operating on the block of vectors as a matrix. We implemented several iterative methods and compared their performance. The maximum performance on the SPARC VIIIfx was 7.6 times higher than that of a naïve implementation. Finally, to deal with the differing convergence of the individual linear systems, we introduced a control method that eliminates the calculation of already-converged vectors.
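A toy illustration of the multiple right-hand-side idea: a Jacobi solver that sweeps all vectors of A X = B together and drops columns from the active set once they converge. This is a plain-Python sketch of the control method's principle, not the authors' optimized implementation:

```python
def jacobi_multi(A, B, tol=1e-10, max_iter=500):
    """Jacobi iteration for A X = B with several right-hand sides at once.
    Columns whose update falls below tol are removed from the active set,
    so converged vectors cost nothing in later sweeps."""
    n = len(A)
    m = len(B[0])
    X = [[0.0] * m for _ in range(n)]
    active = set(range(m))
    for _ in range(max_iter):
        newX = [row[:] for row in X]
        for j in list(active):
            for i in range(n):
                s = sum(A[i][k] * X[k][j] for k in range(n) if k != i)
                newX[i][j] = (B[i][j] - s) / A[i][i]
            # drop column j once its update is below tolerance
            if max(abs(newX[i][j] - X[i][j]) for i in range(n)) < tol:
                active.discard(j)
        X = newX
        if not active:
            break
    return X
```

Jacobi converges here only for diagonally dominant matrices; the point of the sketch is the shared sweep over a block of vectors with per-column convergence control.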
Experimental Evaluation of High Performance Integrated Heat Pump
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, William A; Berry, Robert; Durfee, Neal
2016-01-01
Integrated heat pump (IHP) technology provides significant potential for energy savings and comfort improvement in residential buildings. In this study, we evaluate the performance of a high performance IHP that provides space heating, cooling, and water heating services. Experiments were conducted according to ASHRAE Standard 206-2013, in which 24 test conditions were identified in order to evaluate the IHP performance indices based on the airside performance. Empirical curve fits of the unit's compressor maps are used in conjunction with saturated condensing and evaporating refrigerant conditions to deduce the refrigerant mass flow rate, which in turn was used to evaluate the refrigerant-side performance as a check on the airside performance. Heat pump (compressor, fans, and controls) and water pump power were measured separately per the requirements of Standard 206. The system was charged per the manufacturer's specifications. System test results are presented for each operating mode. The overall IHP performance metrics are determined from the test results per the Standard 206 calculation procedures.
Comparison of Taxi Time Prediction Performance Using Different Taxi Speed Decision Trees
NASA Technical Reports Server (NTRS)
Lee, Hanbong
2017-01-01
In the STBO modeler and tactical surface scheduler for the ATD-2 project, taxi speed decision trees are used to calculate the unimpeded taxi times of flights taxiing on the airport surface. The initial taxi speed values in these decision trees did not yield good taxi time prediction accuracy. Using more recent, reliable surveillance data, new taxi speed values in the ramp area and movement area were computed. Before integrating these values into the STBO system, we performed test runs using live data from Charlotte airport with two taxi speed settings: 1) the initial taxi speed values and 2) the new ones. Taxi time prediction performance was evaluated by comparing various metrics. The results show that the new taxi speed decision trees calculate the unimpeded taxi-out times more accurately.
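A minimal sketch of how a taxi-speed lookup turns into an unimpeded taxi time; the regions, states, and speed values below are invented for illustration and are not the ATD-2 decision-tree values:

```python
# Hypothetical taxi-speed lookup table: pick a nominal speed by surface
# region and taxi state, then convert each route segment's length to time.
TAXI_SPEED_KT = {
    ("ramp", "turning"): 8.0,
    ("ramp", "straight"): 12.0,
    ("movement_area", "turning"): 10.0,
    ("movement_area", "straight"): 16.0,
}

def unimpeded_taxi_time_s(segments):
    # segments: list of (region, state, length_m) tuples along the taxi route
    total = 0.0
    for region, state, length_m in segments:
        speed_ms = TAXI_SPEED_KT[(region, state)] * 0.514444  # knots -> m/s
        total += length_m / speed_ms
    return total
```

A real decision tree would branch on more features (aircraft type, congestion, route geometry); the table stands in for its leaf values.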
TRAC-PF1/MOD1 support calculations for the MIST/OTIS program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fujita, R.K.; Knight, T.D.
1984-01-01
We are using the Transient Reactor Analysis Code (TRAC), specifically version TRAC-PF1/MOD1, to perform analyses in support of the MultiLoop Integral-System Test (MIST) and the Once-Through Integral-System (OTIS) experiment program. We have analyzed Geradrohr Dampferzeuger Anlage (GERDA) Test 1605AA to benchmark the TRAC-PF1/MOD1 code against phenomena expected to occur in a raised-loop B and W plant during a small-break loss-of-coolant accident (SBLOCA). These results show that the code can calculate both single- and two-phase natural circulation, flow interruption, boiler-condenser-mode (BCM) heat transfer, and primary-system refill in a B and W-type geometry with low-elevation auxiliary feedwater. 19 figures, 7 tables.
Traceability of Beef Production and Industry in France
NASA Astrophysics Data System (ADS)
Marguin, L.; Balvay, B.
The French cattle tracing system results from a long evolution, which began in the mid-1960s for cattle selection purposes. In addition to its main objectives, cattle tracing and sanitary uses, the system is widely used by many breeder organisations for very different purposes: parentage recording, performance recording, herd-book keeping, breeding value calculation, and animal marketing.
2014-10-27
a phase-averaged spectral wind-wave generation and transformation model and its interface in the Surface-water Modeling System (SMS). Ambrose...applications of the Boussinesq (BOUSS-2D) wave model that provides more rigorous calculations for design and performance optimization of integrated...navigation systems. Together these wave models provide reliable predictions on regional and local spatial domains and cost-effective engineering solutions
Conversion of Questionnaire Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Powell, Danny H; Elwood Jr, Robert H
During the survey, respondents are asked to provide qualitative answers (well, adequate, needs improvement) on how well material control and accountability (MC&A) functions are being performed. These responses can be used to develop failure probabilities for basic events performed during routine operation of the MC&A systems. The failure frequencies for individual events may be used to estimate total system effectiveness using a fault tree in a probabilistic risk analysis (PRA). Numeric risk values are required for the PRA fault tree calculations that are performed to evaluate system effectiveness. So, the performance ratings in the questionnaire must be converted to relative risk values for all of the basic MC&A tasks performed in the facility. If a specific material protection, control, and accountability (MPC&A) task is being performed at the 'perfect' level, the task is considered to have a near-zero risk of failure. If the task is performed at a less than perfect level, the deficiency in performance represents some risk of failure for the event. As the degree of deficiency in performance increases, the risk of failure increases. If a task that should be performed is not being performed, that task is in a state of failure. The failure probabilities of all basic events contribute to the total system risk. Conversion of questionnaire MPC&A system performance data to numeric values is a separate function from the process of completing the questionnaire. When specific questions in the questionnaire are answered, the focus is on correctly assessing and reporting, in an adjectival manner, the actual performance of the related MC&A function. Prior to conversion, consideration should not be given to the numeric value that will be assigned during the conversion process. In the conversion process, adjectival responses to questions on system performance are quantified based on a log normal scale typically used in human error analysis (see A.D. Swain and H.E.
Guttmann, 'Handbook of Human Reliability Analysis with Emphasis on Nuclear Power Plant Applications,' NUREG/CR-1278). This conversion produces the basic-event risk of failure values required for the fault tree calculations. The fault tree is a deductive logic structure that corresponds to the operational nuclear MC&A system at a nuclear facility. The conventional Delphi process is a time-honored approach commonly used in the risk assessment field to extract numerical values for the failure rates of actions or activities when statistically significant data are absent.
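One possible shape for such a conversion: adjectival ratings mapped to failure probabilities spaced evenly on a log scale. The rating labels and numeric anchors below are illustrative assumptions, not the values used in the actual MC&A questionnaire:

```python
import math

# Hypothetical mapping from adjectival performance ratings to failure
# probabilities on a log-uniform scale, in the spirit of human-reliability
# handbooks; p_min and p_max are illustrative anchors, not NUREG/CR-1278 values.
RATINGS = ["perfect", "well", "adequate", "needs improvement", "not performed"]

def failure_probability(rating, p_min=1e-4, p_max=1.0):
    # Space the ratings evenly in log10 space between p_min and p_max.
    idx = RATINGS.index(rating)
    span = math.log10(p_max) - math.log10(p_min)
    return 10 ** (math.log10(p_min) + span * idx / (len(RATINGS) - 1))
```

The resulting probabilities feed the basic events of the fault tree; each step down in adjectival rating multiplies the failure probability by a constant factor (10 with these anchors).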
Performance evaluation of the zero-multipole summation method in modern molecular dynamics software.
Sakuraba, Shun; Fukuda, Ikuo
2018-05-04
The zero-multipole summation method (ZMM) is a cutoff-based method for calculating electrostatic interactions in molecular dynamics simulations, utilizing an electrostatic neutralization principle as a physical basis. Since the accuracy of the ZMM has been shown to be sufficient in previous studies, it is highly desirable to clarify its practical performance. In this paper, the performance of the ZMM is compared with that of the smooth particle mesh Ewald method (SPME), where both methods are implemented in the molecular dynamics software package GROMACS. Extensive performance comparisons against a highly optimized, parameter-tuned SPME implementation are performed for water systems of various sizes and two protein-water systems. We analyze in detail the dependence of the performance on the potential parameters and the number of CPU cores. Even though the ZMM uses a larger cutoff distance than the SPME does, the performance of the ZMM is comparable to or better than that of the SPME. This is because the ZMM does not require a time-consuming electrostatic convolution and because the ZMM gains short neighbor-list distances due to the smooth damping feature of the pairwise potential function near the cutoff length. We found, in particular, that the ZMM with quadrupole or octupole cancellation and no damping factor is an excellent candidate for the fast calculation of electrostatic interactions. © 2018 Wiley Periodicals, Inc.
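The ZMM's pairwise potential itself is not reproduced here; as a generic illustration of a cutoff scheme whose pairwise energy goes smoothly to zero at the cutoff (the property that keeps neighbor-list distances short), here is a shifted-force Coulomb term:

```python
def shifted_force_coulomb(q1, q2, r, r_cut):
    # Generic shifted-force Coulomb (units with k_e = 1): both the potential
    # and its first derivative vanish at r_cut, so the interaction switches
    # off smoothly. This is an illustration of the cutoff idea, not the ZMM
    # pairwise function.
    if r >= r_cut:
        return 0.0
    return q1 * q2 * (1.0 / r - 1.0 / r_cut + (r - r_cut) / r_cut ** 2)
```

Checking the value just inside the cutoff confirms the smooth approach to zero; methods like the ZMM add multipole-cancellation terms on top of this basic idea.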
Frequency synchronization of a frequency-hopped MFSK communication system
NASA Technical Reports Server (NTRS)
Huth, G. K.; Polydoros, A.; Simon, M. K.
1981-01-01
This paper presents the performance of fine-frequency synchronization. The performance degradation due to imperfect frequency synchronization is found in terms of the effect on bit error probability as a function of full-band or partial-band noise jamming levels and of the number of frequency hops used in the estimator. The effect of imperfect fine-time synchronization is also included in the calculation of fine-frequency synchronization performance to obtain the overall performance degradation due to synchronization errors.
NASA Technical Reports Server (NTRS)
Abdallah, Ayman A.; Barnett, Alan R.; Widrick, Timothy W.; Manella, Richard T.; Miller, Robert P.
1994-01-01
When using all MSC/NASTRAN eigensolution methods except Lanczos, the analyst can replace the coupled system rigid-body modes calculated within DMAP module READ with mass orthogonalized and normalized rigid-body modes generated from the system stiffness. This option is invoked by defining MSC/NASTRAN r-set degrees of freedom via the SUPORT bulk data card. The newly calculated modes are required if the rigid-body modes calculated by the eigensolver are not 'clean' due to numerical roundoffs in the solution. When performing transient structural dynamic load analysis, the numerical roundoffs can result in inaccurate rigid-body accelerations which affect steady-state responses. Unfortunately, when using the Lanczos method and defining r-set degrees of freedom, the rigid-body modes calculated within DMAP module REIGL are retained. To overcome this limitation and to allow MSC/NASTRAN to handle SUPORT degrees of freedom identically for all eigensolvers, a DMAP Alter has been written which replaces Lanczos-calculated rigid-body modes with stiffness-generated rigid-body modes. The newly generated rigid-body modes are normalized with respect to the system mass and orthogonalized using the Gram-Schmidt technique. This algorithm has been implemented as an enhancement to an existing coupled loads methodology.
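The mass-orthonormalization step described above can be sketched in a few lines; this is a plain-Python illustration of Gram-Schmidt under a mass-weighted inner product, not the DMAP Alter itself:

```python
def m_dot(u, v, M):
    # Mass-weighted inner product u^T M v
    return sum(u[i] * M[i][j] * v[j] for i in range(len(u)) for j in range(len(v)))

def mass_orthonormalize(modes, M):
    # Gram-Schmidt with respect to the mass matrix M: the returned vectors
    # satisfy phi_i^T M phi_j = delta_ij.
    out = []
    for v in modes:
        w = list(v)
        for q in out:
            c = m_dot(q, w, M)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        norm = m_dot(w, w, M) ** 0.5
        out.append([wi / norm for wi in w])
    return out
```

In the NASTRAN context the input vectors would be stiffness-generated rigid-body modes and M the system mass matrix; here any small symmetric positive-definite M demonstrates the property.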
Tsunami Generation Modelling for Early Warning Systems
NASA Astrophysics Data System (ADS)
Annunziato, A.; Matias, L.; Ulutas, E.; Baptista, M. A.; Carrilho, F.
2009-04-01
In the frame of a collaboration between the European Commission Joint Research Centre and the Institute of Meteorology in Portugal, a complete analytical tool to support Early Warning Systems is being developed. The tool will be part of the Portuguese National Early Warning System and will also be used in the frame of the UNESCO North Atlantic Section of the Tsunami Early Warning System. The system, called Tsunami Analysis Tool (TAT), includes a worldwide scenario database that has been pre-calculated using the SWAN-JRC code (Annunziato, 2007). This code uses a simplified fault generation mechanism, and its hydraulic model is based on the SWAN code (Mader, 1988). In addition to the pre-defined scenarios, a system of computers is always ready to start a new calculation whenever a new earthquake is detected by the seismic networks (such as USGS or EMSC) and is judged capable of generating a tsunami. The calculation is performed using minimal parameters (the epicentre and magnitude of the earthquake): the programme calculates the rupture length and rupture width using the empirical relationship proposed by Ward (2002). The database calculations, as well as the newly generated calculations with the current conditions, are therefore available to TAT, where the real online analysis is performed. The system also allows analysis of sea-level measurements available worldwide, in order to compare them with predictions and decide whether a tsunami is really occurring. Although TAT, connected with the scenario database and the online calculation system, is at the moment the only software that can support tsunami analysis on a global scale, we are convinced that the fault generation mechanism is too simplified to give a correct tsunami prediction. Furthermore, short tsunami arrival times in particular require earthquake source parameter data on tectonic features of the faults, such as strike, dip, rake, and slip, in order to minimize the real-time uncertainty of the rupture parameters.
Indeed, the earthquake parameters available right after an earthquake are preliminary and can be inaccurate. Determining which earthquake source parameters affect the initial height and time series of tsunamis will show the sensitivity of the tsunami time series to seismic source details. Therefore, a new fault generation model will be adopted according to the seismotectonic properties of the different regions and finally included in the calculation scheme. To this end, within the collaboration framework with the Portuguese authorities, a new model is being defined, starting from the seismic sources in the North Atlantic, the Caribbean, and the Gulf of Cadiz. As earthquakes occurring in North Atlantic and Caribbean sources may affect mainland Portugal and the Azores and Madeira archipelagos, these sources will also be included in the analysis. First, we have begun examining the geometries of those sources that spawn tsunamis, to understand the effect of fault geometry and earthquake depth. References: Annunziato, A., 2007. The Tsunami Assessment Modelling System by the Joint Research Centre, Science of Tsunami Hazards, Vol. 26, pp. 70-92. Mader, C.L., 1988. Numerical Modelling of Water Waves, University of California Press, Berkeley, California. Ward, S.N., 2002. Tsunamis, Encyclopedia of Physical Science and Technology, Vol. 17, pp. 175-191, ed. Meyers, R.A., Academic Press.
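As an illustration of the kind of magnitude-to-rupture scaling the warning system applies (the system itself uses Ward, 2002, whose coefficients are not reproduced here), the widely quoted Wells & Coppersmith (1994) all-slip-type regressions can be sketched as:

```python
def rupture_dimensions(magnitude):
    # Empirical magnitude-to-rupture scaling. The coefficients are the
    # Wells & Coppersmith (1994) all-slip-type regressions for subsurface
    # rupture length and downdip width, used here only to illustrate the
    # kind of relationship applied by the tool (which uses Ward, 2002).
    length_km = 10 ** (-2.44 + 0.59 * magnitude)
    width_km = 10 ** (-1.01 + 0.32 * magnitude)
    return length_km, width_km
```

For a magnitude 7.0 event this gives a rupture roughly 50 km long and 17 km wide, the order of magnitude a pre-computed scenario database would assume before refined source parameters arrive.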
Infrared power cells for satellite power conversion
NASA Technical Reports Server (NTRS)
Summers, Christopher J.
1991-01-01
An analytical investigation is performed to assess the feasibility of long-wavelength power converters for the direct conversion of IR radiation into electrical power. Because these devices need to operate between 5 and 30 μm, the only material system possible for this application is the HgCdTe system, which is currently being developed for IR detectors. Thus solar cell and IR detector theories and technologies are combined. The following subject areas are covered: electronic and optical properties of HgCdTe alloys; optimum device geometry; junction theory; model calculations for homojunction power cell efficiency; and calculations for the HgCdTe power cell and power beaming.
Lattice dynamics calculations based on density-functional perturbation theory in real space
NASA Astrophysics Data System (ADS)
Shang, Honghui; Carbogno, Christian; Rinke, Patrick; Scheffler, Matthias
2017-06-01
A real-space formalism for density-functional perturbation theory (DFPT) is derived and applied to the computation of harmonic vibrational properties in molecules and solids. The practical implementation using numeric atom-centered orbitals as basis functions is demonstrated for the all-electron Fritz Haber Institute ab initio molecular simulations (FHI-aims) package. The convergence of the calculations with respect to numerical parameters is carefully investigated, and a systematic comparison with finite-difference approaches is performed both for finite (molecular) and extended (periodic) systems. Finally, scaling and scalability tests on massively parallel computer systems demonstrate the computational efficiency of the implementation.
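The finite-difference baseline against which DFPT results are typically compared can be sketched for a one-dimensional model: the harmonic force constant is the numerical second derivative of the potential at its minimum. The Morse parameters below are arbitrary illustrative values:

```python
import math

def second_derivative(f, x, h=1e-4):
    # Central finite-difference approximation to f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

# Model diatomic potential (Morse form, hypothetical parameters)
D, a, r0 = 4.0, 1.0, 1.5
def morse(r):
    return D * (1.0 - math.exp(-a * (r - r0))) ** 2

# Force constant at the minimum; analytically k = 2*D*a**2 for a Morse potential
k = second_derivative(morse, r0)
```

DFPT obtains the same quantity analytically from the perturbed electronic structure, avoiding the step-size balancing act (truncation vs. round-off error) inherent in the finite-difference route.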
Potential energy surfaces of the low-lying electronic states of the Li + LiCs system
NASA Astrophysics Data System (ADS)
Jasik, P.; Kilich, T.; Kozicki, J.; Sienkiewicz, J. E.
2018-03-01
Ab initio quantum chemistry calculations are performed for the mixed alkali triatomic system. Global minima of the ground and first excited doublet states of the trimer are found, and Born-Oppenheimer potential energy surfaces of the Li atom interacting with the LiCs molecule are calculated for these states. The lithium atom is placed at various distances and bond angles from the lithium-caesium dimer. Three-body nonadditive forces of the Li2Cs molecule at the global minimum are investigated. Dimer-atom interactions are found to be strongly attractive and may be important in experiments, particularly those involving cold polar alkali dimers.
Impact of multilayered compression bandages on sub-bandage interface pressure: a model.
Al Khaburi, J; Nelson, E A; Hutchinson, J; Dehghani-Sanij, A A
2011-03-01
Multi-component medical compression bandages are widely used to treat venous leg ulcers. The sub-bandage interface pressures induced by individual components of multi-component compression bandage systems are not always simply additive. Current models of compression bandage performance do not take account of the increase in leg circumference as each bandage is applied, and this may account for the difference between predicted and actual pressures. The aim was to calculate the interface pressure when a multi-component compression bandage system is applied to a leg, using thick-walled cylinder theory to estimate the sub-bandage pressure over the leg. A mathematical model was developed based on thick-walled cylinder theory to include bandage thickness in the calculation of the interface pressure in multi-component compression systems. In multi-component compression systems, the interface pressure corresponds to the sum of the pressures applied by the individual bandage layers. However, the change in limb diameter caused by additional bandage layers should be considered in the calculation. Adding the interface pressures produced by single components without considering the bandage thickness will overestimate the overall interface pressure produced by the multi-component compression system. At the ankle (circumference 25 cm) this error can be 19.2% or more in the case of four-component bandage systems. Bandage thickness should therefore be considered when calculating the pressure applied using multi-component compression systems.
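A minimal sketch of the model's key point, assuming a simple per-layer Laplace-law pressure P = T/(r·w) rather than the paper's full thick-walled cylinder treatment: letting each applied layer enlarge the effective limb radius makes the summed pressure smaller than the naive layer count times the single-layer pressure. Tension, width, and thickness values are illustrative:

```python
import math

def layered_pressure(circumference_cm, tension_n, width_m, thickness_mm, n_layers):
    # Sum of per-layer Laplace pressures (P = T / (r w), in Pa), letting each
    # applied layer increase the effective limb radius by the bandage thickness.
    r = circumference_cm / 100.0 / (2 * math.pi)   # limb radius in metres
    total = 0.0
    for _ in range(n_layers):
        total += tension_n / (r * width_m)
        r += thickness_mm / 1000.0                 # next layer sits on a larger limb
    return total

def naive_pressure(circumference_cm, tension_n, width_m, n_layers):
    # Naive estimate: n identical layers on the bare limb radius
    r = circumference_cm / 100.0 / (2 * math.pi)
    return n_layers * tension_n / (r * width_m)
```

Running both for a 25 cm ankle circumference shows the naive sum overestimating the layered result, which is the qualitative effect the paper quantifies.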
Microfluidic System Simulation Including the Electro-Viscous Effect
NASA Technical Reports Server (NTRS)
Rojas, Eileen; Chen, C. P.; Majumdar, Alok
2007-01-01
This paper describes a practical approach using a general purpose lumped-parameter computer program, GFSSP (Generalized Fluid System Simulation Program), for calculating flow distribution in a network of micro-channels, including electro-viscous effects due to the existence of an electrical double layer (EDL). In this study, an empirical formulation for calculating an effective viscosity of ionic solutions based on dimensional analysis is described to account for surface charge and bulk fluid conductivity, which give rise to the electro-viscous effect in microfluidic networks. Two-dimensional slit microflow data were used to determine the model coefficients. Geometry effects are then included through a Poiseuille number correlation in GFSSP. The bi-power model was used to calculate the flow distribution of isotropically etched straight-channel and T-junction microflows involving ionic solutions. Performance of the proposed model is assessed against experimental test data.
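GFSSP's actual network formulation is not reproduced here; a generic Hagen-Poiseuille sketch shows the mechanism: raising the effective viscosity of one branch (the electro-viscous effect) raises its hydraulic resistance and shifts flow to the other branches. Geometry and viscosity values are illustrative:

```python
import math

def poiseuille_resistance(mu_eff, length_m, diameter_m):
    # Hagen-Poiseuille hydraulic resistance of a circular channel
    return 128.0 * mu_eff * length_m / (math.pi * diameter_m ** 4)

def split_flow(total_flow, resistances):
    # Parallel channels share the same pressure drop, so each branch carries
    # a flow proportional to its conductance (inverse resistance).
    conductances = [1.0 / r for r in resistances]
    g_total = sum(conductances)
    return [total_flow * g / g_total for g in conductances]
```

Doubling the effective viscosity of one of two identical parallel channels halves its conductance, so that branch carries one third of the total flow instead of one half.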
Stabilizing canonical-ensemble calculations in the auxiliary-field Monte Carlo method
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.
2015-03-01
Quantum Monte Carlo methods are powerful techniques for studying strongly interacting Fermi systems. However, implementing these methods on computers with finite-precision arithmetic requires careful attention to numerical stability. In the auxiliary-field Monte Carlo (AFMC) method, low-temperature or large-model-space calculations require numerically stabilized matrix multiplication. When adapting methods used in the grand-canonical ensemble to the canonical ensemble of fixed particle number, the numerical stabilization increases the number of required floating-point operations for computing observables by a factor of the size of the single-particle model space, and thus can greatly limit the systems that can be studied. We describe an improved method for stabilizing canonical-ensemble calculations in AFMC that exhibits better scaling, and present numerical tests that demonstrate the accuracy and improved performance of the method.
Aldeghi, Matteo; Bodkin, Michael J; Knapp, Stefan; Biggin, Philip C
2017-09-25
Binding free energy calculations that make use of alchemical pathways are becoming increasingly feasible thanks to advances in hardware and algorithms. Although relative binding free energy (RBFE) calculations are starting to find widespread use, absolute binding free energy (ABFE) calculations are still being explored mainly in academic settings due to the high computational requirements and still uncertain predictive value. However, in some drug design scenarios, RBFE calculations are not applicable and ABFE calculations could provide an alternative. Computationally cheaper end-point calculations in implicit solvent, such as molecular mechanics Poisson-Boltzmann surface area (MMPBSA) calculations, could too be used if one is primarily interested in a relative ranking of affinities. Here, we compare MMPBSA calculations to previously performed absolute alchemical free energy calculations in their ability to correlate with experimental binding free energies for three sets of bromodomain-inhibitor pairs. Different MMPBSA approaches have been considered, including a standard single-trajectory protocol, a protocol that includes a binding entropy estimate, and protocols that take into account the ligand hydration shell. Despite the improvements observed with the latter two MMPBSA approaches, ABFE calculations were found to be overall superior in obtaining correlation with experimental affinities for the test cases considered. A difference in weighted average Pearson ([Formula: see text]) and Spearman ([Formula: see text]) correlations of 0.25 and 0.31 was observed when using a standard single-trajectory MMPBSA setup ([Formula: see text] = 0.64 and [Formula: see text] = 0.66 for ABFE; [Formula: see text] = 0.39 and [Formula: see text] = 0.35 for MMPBSA). 
The best performing MMPBSA protocols returned weighted average Pearson and Spearman correlations that were about 0.1 inferior to ABFE calculations: r_p = 0.55 and r_s = 0.56 when including an entropy estimate, and r_p = 0.53 and r_s = 0.55 when including explicit water molecules. Overall, the study suggests that ABFE calculations are indeed the more accurate approach, yet there is also value in MMPBSA calculations given their lower computational requirements, provided agreement with experimental affinities in absolute terms is not of interest. Moreover, for the specific protein-ligand systems considered in this study, we find that including an explicit ligand hydration shell or a binding entropy estimate in the MMPBSA calculations resulted in significant performance improvements at a negligible computational cost.
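The correlation statistics quoted above can be reproduced with elementary formulas; a sketch follows (tie handling in the Spearman ranks is omitted, and weighting the per-set correlations by set size is an assumption about how the averaging was done):

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks
    (simple double-argsort ranking; ties are not averaged here)."""
    rank = lambda v: np.argsort(np.argsort(v))
    return pearson(rank(x), rank(y))

def weighted_average(stats, sizes):
    """Average per-set statistics, weighting each set by its size."""
    return float(np.average(stats, weights=np.asarray(sizes, float)))
```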
Wang, Lilie; Ding, George X
2018-06-12
Therapeutic radiation delivered to cancer patients is accompanied by unintended radiation to organs outside the treatment field, and model-based dose algorithms are known to have limitations in calculating these out-of-field doses. This study evaluated the out-of-field dose calculated by the Varian Eclipse treatment planning system (v.11 with the AAA algorithm) in realistic treatment plans, with the goal of estimating the uncertainties of calculated organ doses. Photon beam phase-space files for the TrueBeam linear accelerator were provided by Varian. These were used as incident sources in EGSnrc Monte Carlo simulations of radiation transport through the downstream jaws and MLC. Dynamic movements of the MLC leaves were fully modeled based on treatment plans using IMRT or VMAT techniques. The Monte Carlo calculated out-of-field doses were then compared with those calculated by Eclipse. The dose comparisons were performed for different beam energies and treatment sites, including head-and-neck, lung, and pelvis. For 6 MV (FF/FFF), 10 MV (FF/FFF), and 15 MV (FF) beams, Eclipse underestimated out-of-field local doses by 30%-50% compared with Monte Carlo calculations when the local dose was <1% of the prescribed dose. The accuracy of out-of-field dose calculations using Eclipse improved when the collimator jaws were set at the smallest possible aperture for the MLC openings. The Eclipse system consistently underestimates out-of-field dose by a factor of 2 for all beam energies studied at local dose levels of less than 1% of the prescribed dose. These findings are useful in providing information on the uncertainties of out-of-field organ doses calculated by the Eclipse treatment planning system. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
Dash, Bibek
2018-04-26
The present work deals with a density functional theory (DFT) study of porous organic framework materials containing - groups for CO2 capture. In this study, first-principles calculations were performed for CO2 adsorption using N-containing covalent organic framework (COF) models. Ab initio and DFT-based methods were used to characterize the N-containing porous model systems based on their interaction energies upon complexing with CO2 and nitrogen gas. Binding energies (BEs) of CO2 and N2 molecules with the polymer framework were calculated with DFT methods. The hybrid B3LYP and second-order MP2 methods, combined with the Pople 6-31G(d,p) basis set and the correlation-consistent basis sets cc-pVDZ, cc-pVTZ, and aug-cc-pVDZ, were used to calculate BEs. The effect of linker groups in the designed covalent organic framework model system on the CO2 and N2 interactions was studied using quantum calculations.
Verification of ARES transport code system with TAKEDA benchmarks
NASA Astrophysics Data System (ADS)
Zhang, Liang; Zhang, Bin; Zhang, Penghe; Chen, Mengteng; Zhao, Jingchang; Zhang, Shun; Chen, Yixue
2015-10-01
Neutron transport modeling and simulation are central to many areas of nuclear technology, including reactor core analysis, radiation shielding and radiation detection. In this paper the series of TAKEDA benchmarks is modeled to verify the criticality calculation capability of ARES, a discrete ordinates neutral particle transport code system. The SALOME platform is coupled with ARES to provide geometry modeling and mesh generation functions. The Koch-Baker-Alcouffe parallel sweep algorithm is applied to accelerate the traditional transport calculation process. The results show that the eigenvalues calculated by ARES are in excellent agreement with the reference values presented in NEACRP-L-330, with differences of less than 30 pcm except for the first case of model 3. Additionally, ARES provides accurate flux distributions compared to reference values, with deviations of less than 2% for region-averaged fluxes in all cases. All of these results confirm the feasibility of the ARES-SALOME coupling and demonstrate that ARES performs well in criticality calculations.
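The 30 pcm criterion mentioned above is an eigenvalue difference expressed in units of 10^-5 ("per cent mille"). A one-line illustration of both comparison metrics (the relative form of the pcm difference is an assumption; the benchmark report may use the absolute difference):

```python
def pcm_difference(k_eff, k_ref):
    """Eigenvalue deviation from a reference in pcm (per cent mille, 1e-5)."""
    return (k_eff - k_ref) / k_ref * 1e5

def region_flux_deviation(phi, phi_ref):
    """Percent deviation of a region-averaged flux from its reference value."""
    return (phi - phi_ref) / phi_ref * 100.0
```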
Nonlinear analysis of switched semi-active controlled systems
NASA Astrophysics Data System (ADS)
Eslaminasab, Nima; Vahid A., Orang; Golnaraghi, Farid
2011-02-01
Semi-active systems improve the suspension performance of vehicles more effectively than conventional passive systems by simultaneously improving ride comfort and road handling. Because of their size, weight, price and performance advantages, they have also gained more interest than active as well as passive systems. Probably the most neglected aspect of semi-active on-off control systems and strategies is the effect of the added nonlinearities of those systems, which are introduced and analysed in this paper. To do so, numerical techniques, the analytical method of averaging and experimental analysis are deployed. In this paper, a new method to analyse, calculate and compare the performance of semi-active controlled systems is proposed; further, a new controller based on observations of actual test data is proposed to eliminate the adverse effects of the added nonlinearities. The significance of the proposed new system is the simplicity of the algorithm and its ease of implementation. In fact, this new semi-active control strategy could easily be adopted and used with most existing semi-active control systems.
User's Guide for a Modular Flutter Analysis Software System (Fast Version 1.0)
NASA Technical Reports Server (NTRS)
Desmarais, R. N.; Bennett, R. M.
1978-01-01
The use and operation of a group of computer programs to perform a flutter analysis of a single planar wing are described. This system of programs is called FAST for Flutter Analysis System, and consists of five programs. Each program performs certain portions of a flutter analysis and can be run sequentially as a job step or individually. FAST uses natural vibration modes as input data and performs a conventional V-g type of solution. The unsteady aerodynamics programs in FAST are based on the subsonic kernel function lifting-surface theory although other aerodynamic programs can be used. Application of the programs is illustrated by a sample case of a complete flutter calculation that exercises each program.
NASA Technical Reports Server (NTRS)
Gupta, Pramod; Schumann, Johann
2004-01-01
High reliability of mission- and safety-critical software systems has been identified by NASA as a high-priority technology challenge. We present an approach for the performance analysis of a neural network (NN) in an advanced adaptive control system. This problem is important in the context of safety-critical applications that require certification, such as flight software in aircraft. We have developed a tool to measure the performance of the NN during operation by calculating a confidence interval (error bar) around the NN's output. Our tool can be used during pre-deployment verification as well as for monitoring the network performance during operation. The tool has been implemented in Simulink, and simulation results on an F-15 aircraft are presented.
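An error bar around a network output can be illustrated with a simple sample-based estimate. This is only an analogy for the idea described above: the NASA tool derives its interval analytically from the trained network, whereas this sketch assumes repeated evaluations (e.g., an ensemble) and a Gaussian confidence level `z`:

```python
import math

def confidence_interval(outputs, z=1.96):
    """Confidence interval (error bar) around the mean of repeated
    network evaluations, under a Gaussian assumption. `outputs` is a
    list of scalar outputs; z = 1.96 gives a ~95% interval."""
    n = len(outputs)
    mean = sum(outputs) / n
    var = sum((o - mean) ** 2 for o in outputs) / (n - 1)  # sample variance
    half = z * math.sqrt(var / n)                          # half-width
    return mean - half, mean + half
```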
Highway User Benefit Analysis System Research Project #128
DOT National Transportation Integrated Search
2000-10-01
In this research, a methodology for estimating road user costs of various competing alternatives was developed. Also, software was developed to calculate the road user cost, perform economic analysis and update cost tables. The methodology is based o...
The impact of inertial navigation on air safety.
DOT National Transportation Integrated Search
1971-05-01
An analysis of inertial navigation system performance data was carried out to assess the probable impact of inertial navigation on the aircraft collision risk in the North Atlantic region. These data were used to calculate the collision risk between ...
Detailed Performance Calculations: Harvard Group, Appendix G
NASA Technical Reports Server (NTRS)
1984-01-01
The measurement of OH density in the troposphere has many elements in common with the system that is currently used for in situ measurements of OH in the stratosphere. Techniques proposed by Harvard University and Washington State University are described.
Advanced Analysis and Visualization of Space Weather Phenomena
NASA Astrophysics Data System (ADS)
Murphy, Joshua J.
As the world becomes more technologically reliant, society as a whole becomes more susceptible to adverse interactions with the Sun. This "space weather" can produce significant effects on modern technology, from interrupting satellite service to causing serious damage to Earth-side power grids. These concerns have, over the past several years, prompted a surge of research in an attempt to understand the processes governing, and to provide a means of forecasting, space weather events. The research presented in this thesis connects to current work aimed at understanding Coronal Mass Ejections (CMEs) and their influence on the evolution of Earth's magnetic field and the associated Van Allen radiation belts. To aid in the analysis of how these solar wind transients affect Earth's magnetic field, a system named the Geospace/Heliosphere Observation & Simulation Tool-kit (GHOSTkit), along with its Python analysis tools, GHOSTpy, has been devised to calculate the adiabatic invariants of trapped-particle motion within Earth's magnetic field. These invariants help scientists order observations of the radiation belts, providing a more natural presentation of the data, but can be computationally expensive to calculate. The GHOSTpy system, in the phase presented here, is aimed at providing invariant calculations based on LFM magnetic field simulation data. This research first examines an ideal dipole application to gain understanding of system performance. Following this, the challenges of applying the algorithms to gridded LFM MHD data are examined. Performance profiles are then presented, followed by a real-world application of the system.
Xu, Peng; Zhang, Cai-Rong; Wang, Wei; Gong, Ji-Jun; Liu, Zi-Jiang; Chen, Hong-Shan
2018-04-10
The understanding of the excited-state properties of electron donors, acceptors and their interfaces in organic optoelectronic devices is a fundamental issue for their performance optimization. In order to obtain a balanced description of the different excitation types for electron-donor-acceptor systems, including the singlet charge-transfer (CT) states, local excitations, and triplet excited states, several ab initio and density functional theory (DFT) methods for excited-state calculations were evaluated on the selected model system of benzene-tetracyanoethylene (B-TCNE) complexes. On the basis of benchmark calculations with the equation-of-motion coupled-cluster with single and double excitations (EOM-CCSD) method, the arithmetic means of the absolute errors and the standard errors of the electronic excitation energies for the different computational methods suggest that the M11 functional in DFT is superior to the other tested DFT functionals, and that time-dependent DFT (TDDFT) with the Tamm-Dancoff approximation improves the accuracy of the calculated excitation energies relative to full TDDFT. The performance of the M11 functional underlines the importance of kinetic energy density, spin-density gradient, and range separation in the development of novel DFT functionals. The CT properties of the B-TCNE complexes obtained with the different TDDFT methods were also analyzed.
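The ranking of functionals above rests on simple error statistics against the benchmark energies; a sketch follows (the mean absolute error plus its standard error, assuming that is what the "standard errors" above denote):

```python
import math

def error_statistics(calc, ref):
    """Mean absolute error of calculated excitation energies against a
    benchmark (e.g., EOM-CCSD), and the standard error of that mean."""
    errs = [abs(c - r) for c, r in zip(calc, ref)]
    n = len(errs)
    mae = sum(errs) / n
    # standard error = sample std of the absolute errors / sqrt(n)
    se = math.sqrt(sum((e - mae) ** 2 for e in errs) / (n - 1)) / math.sqrt(n)
    return mae, se
```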
Bircher, Martin P; Rothlisberger, Ursula
2018-06-12
Linear-response time-dependent density functional theory (LR-TD-DFT) has become a valuable tool in the calculation of excited states of molecules of various sizes. However, standard generalized-gradient approximation and hybrid exchange-correlation (xc) functionals often fail to correctly predict charge-transfer (CT) excitations with low orbital overlap, thus limiting the scope of the method. The Coulomb-attenuation method (CAM) in the form of the CAM-B3LYP functional has been shown to reliably remedy this problem in many CT systems, making accurate predictions possible. However, in spite of a rather consistent performance across different orbital overlap regimes, some pitfalls remain. Here, we present a fully flexible and adaptable implementation of the CAM for Γ-point calculations within the plane-wave pseudopotential molecular dynamics package CPMD and explore how customized xc functionals can improve the optical spectra of some notorious cases. We find that results obtained using plane waves agree well with those from all-electron calculations employing atom-centered bases, and that it is possible to construct a new Coulomb-attenuated xc functional based on simple considerations. We show that such a functional is able to outperform CAM-B3LYP in some cases, while retaining similar accuracy in systems where CAM-B3LYP performs well.
Outcomes of Grazing Impacts between Sub-Neptunes in Kepler Multis
NASA Astrophysics Data System (ADS)
Hwang, Jason; Chatterjee, Sourav; Lombardi, James, Jr.; Steffen, Jason H.; Rasio, Frederic
2018-01-01
Studies of high-multiplicity, tightly packed planetary systems suggest that dynamical instabilities are common and affect both the orbits and planet structures, where the compact orbits and typically low densities make physical collisions likely outcomes. Since the structure of many of these planets is such that the mass is dominated by a rocky core, but the volume is dominated by a tenuous gas envelope, the sticky-sphere approximation, used in dynamical integrators, may be a poor model for these collisions. We perform five sets of collision calculations, including detailed hydrodynamics, sampling mass ratios, and core mass fractions typical in Kepler Multis. In our primary set of calculations, we use Kepler-36 as a nominal remnant system, as the two planets have a small dynamical separation and an extreme density ratio. We use an N-body code, Mercury 6.2, to integrate initially unstable systems and study the resultant collisions in detail. We use these collisions, focusing on grazing collisions, in combination with realistic planet models created using gas profiles from Modules for Experiments in Stellar Astrophysics and core profiles using equations of state from Seager et al. to perform hydrodynamic calculations, finding scatterings, mergers, and even a potential planet–planet binary. We dynamically integrate the remnant systems, examine the stability, and estimate the final densities, finding that the remnant densities are sensitive to the core masses, and collisions result in generally more stable systems. We provide prescriptions for predicting the outcomes and modeling the changes in mass and orbits following collisions for general use in dynamical integrators.
The Influence of Glazing Systems on the Energy Performance of Low-Rise Commercial Buildings.
1985-05-01
Only fragments of this report survive: a method for calculating the solar flux through the glazing system from the overall transmittance and absorptance of each layer as a function of the angle of incidence; a treatment of glazing system characteristics (solar optical properties, heat transfer); and a discussion of building types, occupancy characteristics and internal loading assignments. Solar glazing film has been studied (Treado et al., 1983b).
van Oostveen, Catharina J; Ubbink, Dirk T; Mens, Marian A; Pompe, Edwin A; Vermeulen, Hester
2016-03-01
To investigate the reliability, validity and feasibility of the RAFAELA workforce planning system (including the Oulu patient classification system - OPCq) before deciding on implementation in Dutch hospitals. The complexity of care, budgetary restraints and the demand for high-quality patient care have ignited the need for transparent hospital workforce planning. Nurses from 12 wards of two university hospitals were trained to test the reliability of the OPCq by investigating the absolute agreement of nursing care intensity (NCI) measurements among nurses. Validity was tested by assessing whether the optimal NCI/nurse ratio, as calculated by regression analysis in RAFAELA, was realistic. System feasibility was investigated through a questionnaire among all nurses involved. Almost 67,000 NCI measurements were performed between December 2013 and June 2014. Agreement using the OPCq varied between 38% and 91%. For only 1 of the 12 wards was the calculated optimal NCI area judged valid. Although the majority of respondents were positive about its applicability and user-friendliness, RAFAELA was not accepted as a useful workforce planning system. Nurses' performance using the RAFAELA system did not warrant its implementation. Hospital managers should first focus on increasing the readiness of nurses regarding the implementation of a workforce planning system. © 2015 John Wiley & Sons Ltd.
Extending the length and time scales of Gram-Schmidt Lyapunov vector computations
NASA Astrophysics Data System (ADS)
Costa, Anthony B.; Green, Jason R.
2013-08-01
Lyapunov vectors have attracted growing interest recently due to their ability to characterize systems out of thermodynamic equilibrium. The computation of orthogonal Gram-Schmidt vectors requires multiplication and QR decomposition of large matrices, which grow as N² with the particle count. This expense has limited such calculations to relatively small systems and short time scales. Here, we detail two implementations of an algorithm for computing Gram-Schmidt vectors. The first is a distributed-memory message-passing method using ScaLAPACK. The second uses the newly released MAGMA library for GPUs. We compare the performance of both codes for Lennard-Jones fluids from N = 100 to 1300 between Intel Nehalem/InfiniBand DDR and NVIDIA C2050 architectures. To the best of our knowledge, these are the largest systems for which Gram-Schmidt Lyapunov vectors have been computed, and the first time their calculation has been GPU-accelerated. We conclude that Lyapunov vector calculations can be significantly extended in length and time by leveraging the power of GPU-accelerated linear algebra.
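The Gram-Schmidt (QR) procedure at the heart of these computations can be sketched for a generic discrete map. The paper's setting is a Lennard-Jones flow with ScaLAPACK/MAGMA linear algebra; here `step` and `jac` are placeholders for the dynamics and its tangent map:

```python
import numpy as np

def lyapunov_spectrum(step, jac, x0, n_steps):
    """Lyapunov spectrum of a discrete map via repeated Gram-Schmidt
    re-orthogonalization: propagate an orthonormal frame with the tangent
    map, QR-factorize, and accumulate the logs of the diagonal of R
    (Benettin-style scheme; generic sketch of the method, not the
    paper's code)."""
    x = np.asarray(x0, float)
    Q = np.eye(x.size)
    sums = np.zeros(x.size)
    for _ in range(n_steps):
        Q, R = np.linalg.qr(jac(x) @ Q)     # re-orthogonalize the frame
        sums += np.log(np.abs(np.diag(R)))  # local stretching rates
        x = step(x)                         # advance the state
    return sums / n_steps
```

For a linear map the exponents are exactly the logs of the singular-value growth rates, which makes a convenient sanity check.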
Comparison of ISRU Excavation System Model Blade Force Methodology and Experimental Results
NASA Technical Reports Server (NTRS)
Gallo, Christopher A.; Wilkinson, R. Allen; Mueller, Robert P.; Schuler, Jason M.; Nick, Andrew J.
2010-01-01
An Excavation System Model has been written to simulate the collection and transportation of regolith on the Moon. The calculations in this model include an estimation of the forces on the digging tool as a result of excavation into the regolith. Verification testing has been performed, and the forces recorded from this testing were compared to the calculated theoretical data. A prototype lunar vehicle built at the NASA Johnson Space Center (JSC) was tested with a bulldozer-type blade, developed at the NASA Kennedy Space Center (KSC), attached to the front. This is the initial correlation of actual field test data with the blade forces calculated by the Excavation System Model; the test data followed trends similar to the predicted values. This testing occurred in soils developed at the NASA Glenn Research Center (GRC), which are a mixture of different types of sands and whose soil properties have been well characterized. Three separate analytical models are compared to the test data.
Electrostatically Embedded Many-Body Expansion for Neutral and Charged Metalloenzyme Model Systems.
Kurbanov, Elbek K; Leverentz, Hannah R; Truhlar, Donald G; Amin, Elizabeth A
2012-01-10
The electrostatically embedded many-body (EE-MB) method has proven accurate for calculating cohesive and conformational energies in clusters, and it has recently been extended to obtain bond dissociation energies for metal-ligand bonds in positively charged inorganic coordination complexes. In the present paper, we set out four key guidelines that maximize the accuracy and efficiency of EE-MB calculations for metal centers. Then, following these guidelines, we show that the EE-MB method can also perform well for bond dissociation energies in a variety of neutral and negatively charged inorganic coordination systems representing metalloenzyme active sites, including a model of the catalytic site of the zinc-bearing anthrax toxin lethal factor, a popular target for drug development. In particular, we find that the electrostatically embedded three-body (EE-3B) method is able to reproduce conventionally calculated bond-breaking energies in a series of pentacoordinate and hexacoordinate zinc-containing systems with an average absolute error (averaged over 25 cases) of only 0.98 kcal/mol.
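The truncated many-body expansion underlying EE-3B can be sketched generically. In the sketch below, `energy(idx_tuple)` stands for a sub-cluster electronic-structure calculation; in EE-MB each such calculation is embedded in the point charges of the remaining fragments, a detail assumed to live inside `energy` here:

```python
from itertools import combinations

def many_body_energy(n, energy, order=3):
    """Many-body expansion of the total energy of n fragments, truncated
    at one-, two-, or three-body terms (sketch of the expansion, not the
    authors' code). `energy` maps a tuple of fragment indices to the
    (embedded) energy of that sub-cluster."""
    e1 = {(i,): energy((i,)) for i in range(n)}
    total = sum(e1.values())                     # one-body terms
    if order >= 2:
        e2 = {p: energy(p) for p in combinations(range(n), 2)}
        total += sum(e2[(i, j)] - e1[(i,)] - e1[(j,)]
                     for i, j in e2)             # pairwise corrections
    if order >= 3:
        for i, j, k in combinations(range(n), 3):
            total += (energy((i, j, k))          # three-body corrections
                      - e2[(i, j)] - e2[(i, k)] - e2[(j, k)]
                      + e1[(i,)] + e1[(j,)] + e1[(k,)])
    return total
```

For a strictly pairwise-additive toy energy the two-body truncation is already exact and all three-body corrections vanish, which makes a simple consistency check.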
NASA Technical Reports Server (NTRS)
Gallo, Christopher A.; Agui, Juan H.; Creager, Colin M.; Oravec, Heather A.
2012-01-01
An Excavation System Model has been written to simulate the collection and transportation of regolith on the Moon. The calculations in this model include an estimation of the forces on the digging tool as a result of excavation into the regolith. Verification testing has been performed, and the forces recorded from this testing were compared to the calculated theoretical data. The Northern Centre for Advanced Technology Inc. rovers were tested at the NASA Glenn Research Center Simulated Lunar Operations facility. This testing was in support of the In-Situ Resource Utilization program Innovative Partnership Program. Testing occurred in soils developed at the Glenn Research Center, which are a mixture of different types of sands and whose soil properties have been well characterized. This testing is part of an ongoing correlation of actual field test data with the blade forces calculated by the Excavation System Model. The results from this series of tests compared reasonably well with the values predicted by the code.
Nayor, Jennifer; Borges, Lawrence F; Goryachev, Sergey; Gainer, Vivian S; Saltzman, John R
2018-07-01
The adenoma detection rate (ADR) is a widely used colonoscopy quality indicator, but its calculation is labor-intensive and cumbersome using current electronic medical databases. Natural language processing (NLP) is a method used to extract meaning from unstructured or free-text data. Our aim was to develop and validate an accurate automated process for calculation of the ADR and the serrated polyp detection rate (SDR) on data stored in widely used electronic health record systems, specifically the Epic electronic health record system, the Provation® endoscopy reporting system, and the Sunquest PowerPath pathology reporting system. Screening colonoscopies performed between June 2010 and August 2015 were identified using the Provation® reporting tool. An NLP pipeline was developed to identify adenomas and sessile serrated polyps (SSPs) in the pathology reports corresponding to these colonoscopy reports. The pipeline was validated against a manual search, and its precision, recall, and effectiveness were calculated. ADR and SDR were then calculated. We identified 8032 screening colonoscopies that were linked to 3821 pathology reports (47.6%). The NLP pipeline had an accuracy of 100% for adenomas and 100% for SSPs. Mean total ADR was 29.3% (range 14.7-53.3%); mean male ADR was 35.7% (range 19.7-62.9%); and mean female ADR was 24.9% (range 9.1-51.0%). Mean total SDR was 4.0% (range 0-9.6%). We developed and validated an NLP pipeline that accurately and automatically calculates ADR and SDR using data stored in Epic, Provation®, and Sunquest PowerPath. This NLP pipeline can be used to evaluate colonoscopy quality parameters at both the individual and practice levels.
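Once the NLP pipeline has labeled each colonoscopy, the rates themselves are simple proportions; a sketch with illustrative record fields (the field names are hypothetical, not Provation or PowerPath schema names):

```python
def detection_rates(colonoscopies):
    """ADR and SDR from labeled screening-colonoscopy records.
    Each record is a dict like {'sex': 'M', 'adenoma': True, 'ssp': False},
    where 'adenoma'/'ssp' flag whether the linked pathology report
    contained an adenoma / sessile serrated polyp."""
    n = len(colonoscopies)
    adr = sum(c['adenoma'] for c in colonoscopies) / n
    sdr = sum(c['ssp'] for c in colonoscopies) / n
    male = [c for c in colonoscopies if c['sex'] == 'M']
    adr_male = sum(c['adenoma'] for c in male) / len(male) if male else 0.0
    return adr, sdr, adr_male
```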
Recent advances in QM/MM free energy calculations using reference potentials.
Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L
2015-05-01
Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems, allowing for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean-field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the use of simplified models still allows one to obtain cutting-edge results at substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
Thrust stand evaluation of engine performance improvement algorithms in an F-15 airplane
NASA Technical Reports Server (NTRS)
Conners, Timothy R.
1992-01-01
An investigation is underway to determine the benefits of a new propulsion system optimization algorithm in an F-15 airplane. The performance seeking control (PSC) algorithm optimizes the quasi-steady-state performance of an F100 derivative turbofan engine for several modes of operation. The PSC algorithm uses an onboard software engine model that calculates thrust, stall margin, and other unmeasured variables for use in the optimization. As part of the PSC test program, the F-15 aircraft was operated on a horizontal thrust stand. Thrust was measured with highly accurate load cells. The measured thrust was compared to onboard model estimates and to results from posttest performance programs. Thrust changes using the various PSC modes were recorded. Those results were compared to benefits using the less complex highly integrated digital electronic control (HIDEC) algorithm. The PSC maximum thrust mode increased intermediate power thrust by 10 percent. The PSC engine model did very well at estimating measured thrust and closely followed the transients during optimization. Quantitative results from the evaluation of the algorithms and performance calculation models are included with emphasis on measured thrust results. The report presents a description of the PSC system and a discussion of factors affecting the accuracy of the thrust stand load measurements.
NASA Astrophysics Data System (ADS)
Morton, Daniel R.
Modern image-guided radiation therapy involves the use of an isocentrically mounted imaging system to take radiographs of a patient's position before the start of each treatment. Image guidance helps to minimize errors associated with a patient's setup, but the radiation dose received by patients from imaging must be managed to ensure no additional risks. The Varian On-Board Imager (OBI) (Varian Medical Systems, Inc., Palo Alto, CA) does not have an automatic exposure control system and therefore requires exposure factors to be selected manually. Without patient-specific exposure factors, images may become saturated and require multiple unnecessary exposures. A software-based automatic exposure control system has been developed to predict optimal, patient-specific exposure factors. The OBI system was modelled in terms of the x-ray tube output and detector response in order to calculate the level of detector saturation for any exposure situation. Digitally reconstructed radiographs are produced via ray-tracing through the patients' volumetric datasets that are acquired for treatment planning. The ray-trace determines the attenuation of the patient and the subsequent x-ray spectra incident on the imaging detector. The resulting spectra are used in the detector-response model to determine the exposure levels required to minimize detector saturation. Images calculated for various phantoms showed good agreement with the images acquired on the OBI. Overall, regions of detector saturation were accurately predicted, and the detector response for non-saturated regions in images of an anthropomorphic phantom was generally calculated to be within 5 to 10% of the measured values. Calculations performed on patient data yielded results similar to the phantom images, with the calculated images able to determine detector saturation in close agreement with images acquired during treatment.
Overall, it was shown that the system model and calculation method could potentially be used to predict patients' exposure factors before their treatment begins, thus preventing the need for multiple exposures.
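Per energy bin, the attenuation step of such a ray-trace reduces to the Beer-Lambert law; a minimal sketch (the tube-output and detector-response models are omitted, and `mu_d_pairs`, a list of attenuation-coefficient/thickness pairs for the materials crossed along one ray, is an illustrative simplification):

```python
import math

def transmitted_fluence(incident, mu_d_pairs):
    """Beer-Lambert attenuation of one energy bin along one ray:
    I = I0 * exp(-sum_i mu_i * d_i), where mu_i is the linear
    attenuation coefficient (1/cm) of material i at this energy and
    d_i the path length (cm) through it."""
    attenuation = sum(mu * d for mu, d in mu_d_pairs)
    return incident * math.exp(-attenuation)
```

Repeating this over the energy bins of the incident spectrum yields the spectrum reaching the detector, which a detector-response model then converts into a predicted saturation level.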