A program for the calculation of paraboloidal-dish solar thermal power plant performance
NASA Technical Reports Server (NTRS)
Bowyer, J. M., Jr.
1985-01-01
A program capable of calculating the design-point and quasi-steady-state annual performance of a paraboloidal-concentrator solar thermal power plant without energy storage was written for a programmable calculator equipped with a suitable printer. The power plant may be located at any site for which a histogram of annual direct normal insolation is available. Inputs required by the program are the aperture area and the design and annual efficiencies of the concentrator; the intercept factor and apparent efficiency of the power conversion subsystem and a polynomial representation of its normalized part-load efficiency; the efficiency of the electrical generator or alternator; the efficiency of the electric power conditioning and transport subsystem; and the fractional parasitic losses for the plant. Losses to auxiliaries associated with each individual module are to be deducted when the power conversion subsystem efficiencies are calculated. Outputs provided by the program are the system design efficiency, the annualized receiver efficiency, the annualized power conversion subsystem efficiency, the total annual direct normal insolation received per unit area of concentrator aperture, and the system annual efficiency.
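The efficiency chain described above lends itself to a compact calculation. The sketch below, in Python rather than calculator keystrokes, shows one plausible reading of the annual summation over an insolation histogram; all names, the part-load treatment, and the placement of the parasitic deduction are assumptions for illustration, not the report's actual code.

```python
def annual_output_kwh(histogram, aperture_m2, eta_conc, eta_pcs_poly,
                      eta_gen, eta_cond, parasitic_frac, rated_kw):
    """Quasi-steady-state annual output from a direct-normal-insolation
    histogram (a minimal sketch of the efficiency chain in the abstract).

    histogram: list of (hours_per_year, dni_kw_per_m2) bins
    eta_pcs_poly: coefficients of the normalized part-load efficiency
                  polynomial, evaluated at the fractional load
    """
    total = 0.0
    for hours, dni in histogram:
        thermal_kw = dni * aperture_m2 * eta_conc          # power into receiver
        load_frac = min(1.0, thermal_kw / rated_kw)        # part-load point
        eta_pcs = sum(c * load_frac**i for i, c in enumerate(eta_pcs_poly))
        total += hours * thermal_kw * eta_pcs * eta_gen * eta_cond
    return total * (1.0 - parasitic_frac)                  # net of parasitics
```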
Model Energy Efficiency Program Impact Evaluation Guide
Find guidance on model approaches for calculating energy, demand, and emissions savings resulting from energy efficiency programs. The guide describes several standard approaches that can be used to evaluate these programs and improve their effectiveness.
''Do-it-yourself'' software program calculates boiler efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-03-01
An easy-to-use software package is described which runs on the IBM Personal Computer. The package calculates boiler efficiency, an important parameter affecting operating costs and equipment well-being. The program stores inputs and calculated results for 20 sets of boiler operating data, called cases. Cases can be displayed and modified on the CRT screen through multiple display pages or copied to a printer. All intermediate calculations are performed by the package, including: steam enthalpy; water enthalpy; air humidity; gas, oil, coal, and wood heat capacity; and radiation losses.
Economic efficiency and risk character of fire management programs, Northern Rocky Mountains
Thomas J. Mills; Frederick W. Bratten
1988-01-01
Economic efficiency and risk have long been considered during the selection of fire management programs and the design of fire management policies. The risk consideration was largely subjective, however, and efficiency has only recently been calculated for selected portions of the fire management program. The highly stochastic behavior of the fire system and the high...
Procedure and computer program to calculate machine contribution to sawmill recovery
Philip H. Steele; Hiram Hallock; Stanford Lunstrum
1981-01-01
The importance of considering individual machine contribution to total mill efficiency is discussed. A method for accurately calculating machine contribution is introduced, and an example is given using this method. A FORTRAN computer program to make the necessary complex calculations automatically is also presented with user instructions.
NASA Technical Reports Server (NTRS)
Galvas, M. R.
1972-01-01
A computer program for predicting design point specific speed - efficiency characteristics of centrifugal compressors is presented with instructions for its use. The method permits rapid selection of compressor geometry that yields maximum total efficiency for a particular application. A numerical example is included to demonstrate the selection procedure.
NASA Astrophysics Data System (ADS)
Pingbo, An; Li, Wang; Hongxi, Lu; Zhiguo, Yu; Lei, Liu; Xin, Xi; Lixia, Zhao; Junxi, Wang; Jinmin, Li
2016-06-01
The internal quantum efficiency (IQE) of light-emitting diodes can be calculated as the ratio of the external quantum efficiency (EQE) to the light extraction efficiency (LEE). The EQE can be measured experimentally, but the LEE is difficult to calculate due to the complicated LED structures. In this work, a model was established to calculate the LEE by combining the transfer matrix formalism and an in-plane ray tracing method. With the calculated LEE, the IQE was determined and was in good agreement with that obtained by the ABC model and the temperature-dependent photoluminescence method. The proposed method makes the determination of the IQE more practical and convenient. Project supported by the National Natural Science Foundation of China (Nos. 11574306, 61334009), the China International Science and Technology Cooperation Program (No. 2014DFG62280), and the National High Technology Program of China (No. 2015AA03A101).
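The two relations the abstract leans on are simple enough to state directly: EQE = IQE × LEE, and the ABC recombination model used for cross-checking. A minimal sketch; any coefficient values would be device-specific assumptions:

```python
def iqe_from_eqe(eqe, lee):
    """The relation used in the abstract: EQE = IQE * LEE, so IQE = EQE / LEE."""
    return eqe / lee

def iqe_abc(n, A, B, C):
    """Internal quantum efficiency from the ABC recombination model:
    radiative B*n^2 over total A*n + B*n^2 + C*n^3, for carrier density n."""
    return B * n**2 / (A * n + B * n**2 + C * n**3)
```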
Xiao, Kai; Chen, Danny Z; Hu, X Sharon; Zhou, Bo
2012-12-01
The three-dimensional digital differential analyzer (3D-DDA) algorithm is a widely used ray traversal method, which is also at the core of many convolution/superposition (C/S) dose calculation approaches. However, porting existing C/S dose calculation methods onto the graphics processing unit (GPU) has brought challenges to retaining the efficiency of this algorithm. In particular, a straightforward implementation of the original 3D-DDA algorithm introduces substantial branch divergence, which conflicts with the GPU programming model and leads to suboptimal performance. In this paper, an efficient GPU implementation of the 3D-DDA algorithm is proposed, which effectively reduces such branch divergence and improves performance of the C/S dose calculation programs running on the GPU. The main idea of the proposed method is to convert a number of conditional statements in the original 3D-DDA algorithm into a set of simple operations (e.g., arithmetic, comparison, and logic) which are better supported by the GPU architecture. To verify and demonstrate the performance improvement, this ray traversal method was integrated into a GPU-based collapsed cone convolution/superposition (CCCS) dose calculation program. The proposed method has been tested using a water phantom and various clinical cases on an NVIDIA GTX570 GPU. The CCCS dose calculation program based on the efficient 3D-DDA ray traversal implementation runs 1.42-2.67× faster than the one based on the original 3D-DDA implementation, without losing any accuracy. The results show that the proposed method can effectively reduce branch divergence in the original 3D-DDA ray traversal algorithm and improve the performance of the CCCS program running on the GPU. Considering the wide utilization of the 3D-DDA algorithm, various applications can benefit from this implementation method.
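The core trick, replacing the axis-selection branches of 3D-DDA with uniformly evaluated comparison and logic operations, can be illustrated in a few lines. The following is a scalar Python sketch of the idea, standing in for the paper's CUDA kernel; names are illustrative:

```python
def dda_step(voxel, t_max, t_delta, step):
    """One branch-reduced step of 3D-DDA voxel traversal. The stepping axis
    is chosen with comparisons and logic evaluated identically on all three
    axes, instead of nested if/else, so GPU threads stay in lockstep."""
    cx = (t_max[0] <= t_max[1]) and (t_max[0] <= t_max[2])
    cy = (not cx) and (t_max[1] <= t_max[2])
    cz = not (cx or cy)
    mask = (int(cx), int(cy), int(cz))          # exactly one entry is 1
    for a in range(3):
        voxel[a] += step[a] * mask[a]           # advance along the chosen axis
        t_max[a] += t_delta[a] * mask[a]        # update its next crossing time
    return voxel, t_max
```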
NASA Technical Reports Server (NTRS)
Jaffe, L. D.
1984-01-01
The CONC/11 computer program, designed for calculating the performance of dish-type solar thermal collectors and power systems, is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for the program.
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Pinella, David; Garrison, Peter
1999-01-01
Collection efficiency and ice accretion calculations were made for a commercial transport using the NASA Lewis LEWICE3D ice accretion code, the ICEGRID3D grid code and the CMARC panel code. All of the calculations were made on a Windows 95 based personal computer. The ice accretion calculations were made for the nose, wing, horizontal tail and vertical tail surfaces. Ice shapes typifying those of a 30 minute hold were generated. Collection efficiencies were also generated for the entire aircraft using the newly developed unstructured collection efficiency method. The calculations highlight the flexibility and cost effectiveness of the LEWICE3D, ICEGRID3D, CMARC combination.
A computer program for calculating relative-transmissivity input arrays to aid model calibration
Weiss, Emanuel
1982-01-01
A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors that are taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance could also be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
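The physical relationship the report builds on is T = (k ρ g / μ) b: permeability k converted to hydraulic conductivity with a viscosity that varies with temperature and dissolved solids, then multiplied by aquifer thickness b. A hedged sketch; the viscosity correlation below is a placeholder, not the one in the report:

```python
def transmissivity(permeability_m2, thickness_m, temp_c, tds_mg_per_l):
    """Transmissivity T = (k * rho * g / mu) * b in m^2/s (SI units).
    The viscosity correlation is a simple placeholder: viscosity falls
    with temperature and rises slightly with dissolved solids."""
    rho_g = 998.0 * 9.81                               # water density * gravity
    mu = 1.0e-3 / (1.0 + 0.02 * (temp_c - 20.0))       # Pa*s, ~1 mPa*s at 20 C
    mu *= 1.0 + 1.0e-6 * tds_mg_per_l                  # dissolved-solids effect
    hydraulic_conductivity = permeability_m2 * rho_g / mu   # m/s
    return hydraulic_conductivity * thickness_m             # m^2/s
```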
Status and Opportunities for Improving the Consistency of Technical Reference Manuals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jayaweera, Tina; Velonis, Aquila; Haeri, Hossein
Across the United States, energy-efficiency program administrators rely on Technical Reference Manuals (TRMs) as sources for calculations and deemed savings values for specific, well-defined efficiency measures. TRMs play an important part in energy efficiency program planning by providing a common and consistent source for calculation of ex ante and often ex post savings. They thus help reduce energy-efficiency resource acquisition costs by obviating the need for extensive measurement and verification, and lower performance risk for program administrators and implementation contractors. This paper considers the benefits of establishing region-wide or national TRMs and considers the challenges of such an undertaking due to the difficulties in comparing energy savings across jurisdictions. We argue that greater consistency across TRMs in the approaches used to determine deemed savings values, with more transparency about assumptions, would allow better comparisons in savings estimates across jurisdictions as well as improve confidence in reported efficiency measure savings. To support this thesis, we review approaches for the calculation of savings for select measures in TRMs currently in use in 17 jurisdictions. The review reveals differences in the savings methodologies, technical assumptions, and input variables used for estimating deemed savings values. These differences are described and their implications are summarized, using four common energy-efficiency measures as examples. Recommendations are then offered for establishing a uniform approach for determining deemed savings values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leventis, Greg; Gopal, Anand; Rue du Can, Stephane de la
Numerous countries use taxpayer funds to subsidize residential electricity for a variety of socioeconomic objectives. These subsidies lower the value of energy efficiency to the consumer while raising it for the government. Further, while it would be especially helpful to have stringent Minimum Energy Performance Standards (MEPS) for appliances and buildings in this environment, they are hard to strengthen without imposing a cost on ratepayers. In this second-best world, where the presence of subsidies limits the government's ability to strengthen standards, we find that avoided subsidies are a readily available source of financing for energy efficiency incentive programs. Here, we introduce the LBNL Energy Efficiency Revenue Analysis (LEERA) model to estimate the appliance efficiency improvements that can be achieved in Mexico by the revenue-neutral financing of incentive programs from avoided subsidy payments. LEERA uses the detailed techno-economic analysis developed by LBNL for the Super-efficient Equipment and Appliance Deployment (SEAD) Initiative to calculate the incremental costs of appliance efficiency improvements. We analyze Mexico's tariff structures and the long-run marginal cost of supply to calculate the marginal savings for the government from appliance efficiency. We find that avoided subsidy payments alone can finance incentive programs that cover the full incremental cost of refrigerators that are 27% more efficient and TVs that are 32% more efficient than baseline models. We find less substantial market transformation potential for room ACs, primarily because AC energy savings occur at less subsidized tariffs.
Saurman, Emily; Lyle, David; Kirby, Sue; Roberts, Russell
2014-07-31
The Mental Health Emergency Care-Rural Access Program (MHEC-RAP) is a telehealth solution providing specialist emergency mental health care to rural and remote communities across western NSW, Australia. This is the first time and motion (T&M) study to examine program efficiency and capacity for a telepsychiatry program. Clinical services are an integral aspect of the program accounting for 6% of all activities and 50% of the time spent conducting program activities, but half of this time is spent completing clinical paperwork. This finding emphasizes the importance of these services to program efficiency and the need to address variability of service provision to impact capacity. Currently, there is no efficiency benchmark for emergency telepsychiatry programs. Findings suggest that MHEC-RAP could increase its activity without affecting program responsiveness. T&M studies not only determine activity and time expenditure, but have a wider application assessing program efficiency by understanding, defining, and calculating capacity. T&M studies can inform future program development of MHEC-RAP and similar telehealth programs, both in Australia and overseas.
Multi-GPU hybrid programming accelerated three-dimensional phase-field model in binary alloy
NASA Astrophysics Data System (ADS)
Zhu, Changsheng; Liu, Jieqiong; Zhu, Mingfang; Feng, Li
2018-03-01
In the simulation of dendritic growth, computational efficiency and attainable problem scale strongly influence the usefulness of a three-dimensional phase-field model. Thus, seeking a high-performance calculation method that improves computational efficiency and expands the problem scale is of great significance to research on the microstructure of materials. A high-performance calculation method based on the MPI+CUDA hybrid programming model is introduced. Multiple GPUs are used to implement quantitative numerical simulations of a three-dimensional phase-field model of a binary alloy under the condition of coupled multi-physical processes. The acceleration achieved with different numbers of GPU nodes at different calculation scales is explored. Building on the multi-GPU calculation model, two optimization schemes are proposed: non-blocking communication optimization, and overlap of MPI communication with GPU computing. The results of the two optimization schemes and the basic multi-GPU model are compared. The calculation results show that the multi-GPU calculation model clearly improves the computational efficiency of the three-dimensional phase-field model, achieving a speed-up of 13 relative to a single GPU, and the problem scale was expanded to 8193. Both optimization schemes are shown to be feasible, and the overlap of MPI communication with GPU computing performs better, running 1.7 times faster than the basic multi-GPU model when 21 GPUs are used.
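The second optimization scheme, overlapping MPI communication with GPU computing, follows a standard pattern: post non-blocking halo exchanges, update interior cells while messages are in flight, then finish the boundaries. A schematic mpi4py sketch of that pattern; the compute_* routines are stand-ins for the paper's CUDA kernels:

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD

def compute_interior(phi):            # stand-in for the interior-update kernel
    pass

def compute_boundary(phi, halos):     # stand-in for the boundary-update kernel
    pass

def timestep(phi, halo_send, halo_recv, left, right):
    """One phase-field step with non-blocking halo exchange overlapped
    with interior work. Buffers are numpy arrays; left/right are ranks."""
    reqs = [comm.Isend(halo_send[0], dest=left),
            comm.Isend(halo_send[1], dest=right),
            comm.Irecv(halo_recv[0], source=left),
            comm.Irecv(halo_recv[1], source=right)]
    compute_interior(phi)             # GPU works while halos are in flight
    MPI.Request.Waitall(reqs)         # boundary data now available
    compute_boundary(phi, halo_recv)
```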
Xing, Z F; Greenberg, J M
1994-08-20
The analyticity of the complex extinction efficiency is examined numerically in the size-parameter domain for homogeneous prolate and oblate spheroids and finite cylinders. The T-matrix code, which is the most efficient program available to date, is employed to calculate the individual particle-extinction efficiencies. Because of its computational limitations in the size-parameter range, a slightly modified Hilbert-transform algorithm is required to establish the analyticity numerically. The findings concerning analyticity that we reported for spheres (Astrophys. J. 399, 164-175, 1992) apply equally to these nonspherical particles.
Energy Savings Lifetimes and Persistence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Ian M.; Schiller, Steven R.; Todd, Annika
2016-02-01
This technical brief explains the concepts of energy savings lifetimes and savings persistence and discusses how program administrators use these factors to calculate savings for efficiency measures, programs, and portfolios. Savings lifetime is the length of time that one or more energy efficiency measures or activities save energy, and savings persistence is the change in savings throughout the functional life of a given efficiency measure or activity. Savings lifetimes are essential for assessing the lifecycle benefits and cost effectiveness of efficiency activities and for forecasting loads in resource planning. The brief also provides estimates of savings lifetimes derived from a national collection of costs and savings for electric efficiency programs and portfolios.
SR-52 PROGRAMMABLE CALCULATOR PROGRAMS FOR VENTURI SCRUBBERS AND ELECTROSTATIC PRECIPITATORS
The report provides useful tools for estimating particulate removal by venturi scrubbers and electrostatic precipitators. Detailed descriptions are given for programs to predict the penetration (one minus efficiency) for each device. These programs are written specifically for th...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitehead, Camilla Dunham; McNeil, Michael; Dunham_Whitehead, Camilla
2008-02-28
The U.S. Environmental Protection Agency (EPA) influences the market for plumbing fixtures and fittings by encouraging consumers to purchase products that carry the WaterSense label, which certifies those products as performing at low flow rates compared to unlabeled fixtures and fittings. As consumers decide to purchase water-efficient products, water consumption will decline nationwide. Decreased water consumption should prolong the operating life of water and wastewater treatment facilities. This report describes the method used to calculate national water savings attributable to EPA's WaterSense program. A Microsoft Excel spreadsheet model, the National Water Savings (NWS) analysis model, accompanies this methodology report. Version 1.0 of the NWS model evaluates indoor residential water consumption. Two additional documents, a Users' Guide to the spreadsheet model and an Impacts Report, accompany the NWS model and this methodology document. Altogether, these four documents represent Phase One of this project. The Users' Guide leads policy makers through the spreadsheet options available for projecting the water savings that result from various policy scenarios. The Impacts Report shows national water savings that will result from differing degrees of market saturation of high-efficiency water-using products. This detailed methodology report describes the NWS analysis model, which examines the effects of WaterSense by tracking the shipments of products that WaterSense has designated as water-efficient. The model estimates market penetration of products that carry the WaterSense label. Market penetration is calculated for both existing and new construction. The NWS model estimates savings based on an accounting analysis of water-using products and of building stock. Estimates of future national water savings will help policy makers further direct the focus of WaterSense and calculate stakeholder impacts from the program. Calculating the total gallons of water the WaterSense program saves nationwide involves integrating two components, or modules, of the NWS model. Module 1 calculates the baseline national water consumption of typical fixtures, fittings, and appliances prior to the program (as described in Section 2.0 of this report). Module 2 develops trends in efficiency for water-using products both in the business-as-usual case and as a result of the program (Section 3.0). The NWS model combines the two modules to calculate total gallons saved by the WaterSense program (Section 4.0). Figure 1 illustrates the modules and the process involved in the NWS model analysis. The output of the NWS model provides the base case for each end use, as well as a prediction of total residential indoor water consumption during the next two decades. Based on the calculations described in Section 4.0, we can project a timeline of water savings attributable to the WaterSense program. The savings increase each year as the program results in the installation of greater numbers of efficient products, which come to compose more and more of the product stock in households throughout the United States.
DYNA3D: A computer code for crashworthiness engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallquist, J.O.; Benson, D.J.
1986-09-01
A finite element program with crashworthiness applications has been developed at LLNL. DYNA3D, an explicit, fully vectorized, finite deformation structural dynamics program, has four capabilities that are critical for the efficient and realistic modeling of crash phenomena: (1) fully optimized nonlinear solid, shell, and beam elements for representing a structure; (2) a broad range of constitutive models for simulating material behavior; (3) sophisticated contact algorithms for impact interactions; and (4) a rigid body capability to represent the bodies away from the impact region at a greatly reduced cost without sacrificing accuracy in the momentum calculations. Basic methodologies of the program are briefly presented along with several crashworthiness calculations. Efficiencies of the Hughes-Liu and Belytschko-Tsay shell formulations are considered.
Digital-computer program for design analysis of salient, wound pole alternators
NASA Technical Reports Server (NTRS)
Repas, D. S.
1973-01-01
A digital computer program for analyzing the electromagnetic design of salient, wound pole alternators is presented. The program, which is written in FORTRAN 4, calculates the open-circuit saturation curve, the field-current requirements at rated voltage for various loads and losses, efficiency, reactances, time constants, and weights. The methods used to calculate some of these items are presented or appropriate references are cited. Instructions for using the program and typical program input and output for an alternator design are given, and an alphabetical list of most FORTRAN symbols and the complete program listing with flow charts are included.
An investigation of a mathematical model for atmospheric absorption spectra
NASA Technical Reports Server (NTRS)
Niple, E. R.
1979-01-01
A computer program that calculates absorption spectra for slant paths through the atmosphere is described. The program uses an efficient convolution technique (Romberg integration) to simulate instrument resolution effects. A brief information analysis is performed on a set of calculated spectra to illustrate how such techniques may be used to explore the quality of the information in a spectrum.
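Romberg integration, the convolution technique cited above, refines trapezoid estimates by Richardson extrapolation. A self-contained generic sketch, not the program's FORTRAN routine:

```python
import numpy as np

def romberg(f, a, b, levels=6):
    """Romberg integration of f over [a, b]: trapezoid estimates halved in
    step size, then Richardson-extrapolated. f must accept numpy arrays."""
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = 0.5 * h * (f(a) + f(b))
    for i in range(1, levels):
        h /= 2.0
        xs = a + h * np.arange(1, 2**i, 2)       # only the new midpoints
        R[i, 0] = 0.5 * R[i - 1, 0] + h * f(xs).sum()
        for j in range(1, i + 1):                # extrapolate away error terms
            R[i, j] = R[i, j - 1] + (R[i, j - 1] - R[i - 1, j - 1]) / (4**j - 1)
    return R[levels - 1, levels - 1]
```

For example, romberg(np.sin, 0.0, np.pi) converges to 2.0 to near machine precision with six levels.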
Energy-efficiency program for clothes washers, clothes dryers, and dishwashers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-11-01
The objectives of this study of dishwashers, clothes washers, and clothes dryers are: to evaluate existing energy efficiency test procedures and recommend the use of specific test procedures for each appliance group, and to establish the maximum economically and technologically feasible energy-efficiency improvement goals for each appliance group. Specifically, the program requirements were to determine the energy efficiency of the 1972 models, to evaluate the feasible improvements that could be implemented by 1980 to maximize energy efficiency, and to calculate the percentage efficiency improvement based on the 1972 baseline and the recommended 1980 targets. The test program was conducted using 5 dishwashers, 4 top-loading clothes washers, one front-loading clothes washer, 4 electric clothes dryers, and 4 gas clothes dryers. (MCW)
An efficient routine for infrared radiative transfer in a cloudy atmosphere
NASA Technical Reports Server (NTRS)
Chou, M. D.; Kouvaris, L.
1981-01-01
A FORTRAN program that calculates the atmospheric cooling rate and infrared fluxes for partly cloudy atmospheres is documented. The IR fluxes in the water bands and the 9.6 and 15 micron bands are calculated at 15 levels ranging from 1.39 mb to the surface. The program is generalized to accept any arbitrary atmospheric temperature and humidity profiles and clouds as input and return the cooling rate and fluxes as output. Sample calculations for various atmospheric profiles and cloud situations are demonstrated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Letschert, Virginie E.; McNeil, Michael A.; Leiva Ibanez, Francisco Humberto
2011-06-01
Minimum Efficiency Performance Standards (MEPS) have been chosen as part of Chile's national energy efficiency action plan. As a first MEPS, the Ministry of Energy has decided to focus on a regulation for lighting that would ban the sale of inefficient bulbs, effectively phasing out the use of incandescent lamps. Following major economies such as the US (EISA, 2007), the EU (Ecodesign, 2009) and Australia (AS/NZS, 2008), which planned a phase out based on minimum efficacy requirements, the Ministry of Energy has undertaken the impact analysis of a MEPS on the residential lighting sector. Fundacion Chile (FC) and Lawrence Berkeley National Laboratory (LBNL) collaborated with the Ministry of Energy and the National Energy Efficiency Program (Programa Pais de Eficiencia Energetica, or PPEE) in order to produce a techno-economic analysis of this future policy measure. LBNL has developed for CLASP (CLASP, 2007) a spreadsheet tool called the Policy Analysis Modeling System (PAMS) that allows for evaluation of costs and benefits at the consumer level as well as a wide range of impacts at the national level, such as energy savings, net present value of savings, greenhouse gas (CO2) emission reductions, and avoided capacity generation due to a specific policy. Because historically Chile has followed European schemes in energy efficiency programs (test procedures, labelling program definitions), we take the Ecodesign commission regulation No 244/2009 as a starting point when defining our phase out program, which means a tiered phase out based on minimum efficacy per lumen category. The following data were collected in order to perform the techno-economic analysis: (1) retail prices, efficiency, and wattage category in the current market; (2) usage data (hours of lamp use per day); and (3) stock data and penetration of efficient lamps in the market. Using these data, PAMS calculates the costs and benefits of efficiency standards from two distinct but related perspectives: (1) the Life-Cycle Cost (LCC) calculation examines costs and benefits from the perspective of the individual household; and (2) the National Perspective projects the total national costs and benefits, including financial benefits, energy savings, and environmental benefits. The national perspective calculations are called the National Energy Savings (NES) and the Net Present Value (NPV) calculations. PAMS also calculates total emissions mitigation and avoided generation capacity. This paper describes the data and methodology used in PAMS and presents the results of the proposed phase out of incandescent bulbs in Chile.
NASA Technical Reports Server (NTRS)
Ricks, Wendell R.; Abbott, Kathy H.
1987-01-01
A traditional programming technique for controlling the display of optional flight information in a civil transport cockpit is compared to a rule-based technique for the same function. This application required complex decision logic and a frequently modified rule base. The techniques are evaluated for execution efficiency and ease of implementation; the criterion used to calculate execution efficiency is the total number of steps required to isolate the hypotheses that were true, and the criteria used to evaluate implementability are ease of modification and verification, and explanation capability. It is observed that the traditional program is more efficient than the rule-based program; however, the rule-based programming technique is more applicable for improving programmer productivity.
A Recursive Method for Calculating Certain Partition Functions.
ERIC Educational Resources Information Center
Woodrum, Luther; And Others
1978-01-01
Describes a simple recursive method for calculating the partition function and average energy of a system consisting of N electrons and L energy levels. Also, presents an efficient APL computer program to utilize the recursion relation. (Author/GA)
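Assuming each of the L levels holds at most one electron, the recursion has a natural reading: the last level is either empty or occupied. A Python sketch of that recursion (the ERIC program itself is in APL, and its exact degeneracy handling is not given here, so single occupancy is an assumption):

```python
from functools import lru_cache
import math

def partition_function(energies, n, beta):
    """Canonical partition function Z(n, L) of n electrons distributed over
    L single-occupancy levels, via the recursion: level L is either empty,
    Z(n, L-1), or occupied, exp(-beta*E_L) * Z(n-1, L-1)."""
    @lru_cache(maxsize=None)
    def Z(nn, l):
        if nn == 0:
            return 1.0                  # empty configuration contributes 1
        if nn > l:
            return 0.0                  # more electrons than available levels
        return Z(nn, l - 1) + math.exp(-beta * energies[l - 1]) * Z(nn - 1, l - 1)
    return Z(n, len(energies))
```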
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walters, T.; Savage, S.; Brown, J.
At the request of the U. S. Department of Agriculture (USDA) Rural Development, the National Renewable Energy Laboratory reviewed projects awarded in the Section 9006 Program: Renewable Energy Systems and Energy Efficiency Improvements Program. This report quantifies federal and private investment, outlines project status based on recent field updates, and calculates the effects on energy and emissions of energy efficiency and renewable energy projects awarded grants in FY 2003, FY 2004, and FY 2005. An overview of the program challenges and modifications in the first three years of operation is also included.
TURBINE COOLING FLOW AND THE RESULTING DECREASE IN TURBINE EFFICIENCY
NASA Technical Reports Server (NTRS)
Gauntner, J. W.
1994-01-01
This algorithm has been developed for calculating both the quantity of compressor bleed flow required to cool a turbine and the resulting decrease in efficiency due to cooling air injected into the gas stream. Because of the trend toward higher turbine inlet temperatures, it is important to accurately predict the required cooling flow. This program is intended for use with axial flow, air-breathing jet propulsion engines with a variety of airfoil cooling configurations. The algorithm results have compared extremely well with figures given by major engine manufacturers for given bulk metal temperatures and cooling configurations. The program calculates the required cooling flow and corresponding decrease in stage efficiency for each row of airfoils throughout the turbine. These values are combined with the thermodynamic efficiency of the uncooled turbine to predict the total bleed airflow required and the altered turbine efficiency. There are ten airfoil cooling configurations and the algorithm allows a different option for each row of cooled airfoils. Materials technology is incorporated and requires the date of the first year of service for the turbine stator vane and rotor blade. The user must specify pressure, temperatures, and gas flows into the turbine. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 3080 series computer with a central memory requirement of approximately 61K of 8 bit bytes. This program was developed in 1980.
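The bookkeeping the abstract describes, summing bleed flows and combining per-row efficiency decrements with the uncooled thermodynamic efficiency, can be sketched as follows; the multiplicative combination rule here is an assumption for illustration, not the program's documented method:

```python
def cooled_turbine_performance(eta_uncooled, row_bleed_fracs, row_delta_etas):
    """Total bleed fraction and altered turbine efficiency from per-row
    cooling requirements (a hedged sketch of the combination step)."""
    total_bleed = sum(row_bleed_fracs)      # fraction of compressor flow bled
    eta = eta_uncooled
    for d in row_delta_etas:
        eta *= (1.0 - d)                    # each cooled row derates efficiency
    return total_bleed, eta
```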
NASA Technical Reports Server (NTRS)
Bidwell, Colin S.; Papadakis, Michael
2005-01-01
Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.
NASA Astrophysics Data System (ADS)
Iwasawa, Masaki; Tanikawa, Ataru; Hosono, Natsuki; Nitadori, Keigo; Muranushi, Takayuki; Makino, Junichiro
2016-08-01
We present the basic idea, implementation, measured performance, and performance model of FDPS (Framework for Developing Particle Simulators). FDPS is an application-development framework which helps researchers to develop simulation programs using particle methods for large-scale distributed-memory parallel supercomputers. A particle-based simulation program for distributed-memory parallel computers needs to perform domain decomposition, exchange of particles which are not in the domain of each computing node, and gathering of the particle information in other nodes which are necessary for interaction calculation. Also, even if distributed-memory parallel computers are not used, in order to reduce the amount of computation, algorithms such as the Barnes-Hut tree algorithm or the Fast Multipole Method should be used in the case of long-range interactions. For short-range interactions, some methods to limit the calculation to neighbor particles are required. FDPS provides all of these functions which are necessary for efficient parallel execution of particle-based simulations as "templates," which are independent of the actual data structure of particles and the functional form of the particle-particle interaction. By using FDPS, researchers can write their programs with the amount of work necessary to write a simple, sequential and unoptimized program of O(N²) calculation cost, and yet the program, once compiled with FDPS, will run efficiently on large-scale parallel supercomputers. A simple gravitational N-body program can be written in around 120 lines. We report the actual performance of these programs and the performance model. The weak scaling performance is very good, and almost linear speed-up was obtained for up to the full system of the K computer. The minimum calculation time per timestep is in the range of 30 ms (N = 10⁷) to 300 ms (N = 10⁹). These are currently limited by the time for the calculation of the domain decomposition and communication necessary for the interaction calculation. We discuss how we can overcome these bottlenecks.
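The kind of user code FDPS is designed to parallelize is the simple O(N²) direct-summation kernel. A numpy sketch of such a kernel, with the gravitational constant set to 1; FDPS itself is C++, so this is only illustrative:

```python
import numpy as np

def gravity_direct(pos, mass, eps=1e-3):
    """Direct-summation gravitational acceleration, O(N^2): the naive kernel
    a user would hand to a framework, which then handles decomposition,
    particle exchange, and tree acceleration. pos: (N, 3), mass: (N,)."""
    dx = pos[None, :, :] - pos[:, None, :]          # pairwise separations x_j - x_i
    r2 = (dx**2).sum(-1) + eps**2                   # softened squared distances
    np.fill_diagonal(r2, np.inf)                    # exclude self-interaction
    inv_r3 = r2**-1.5
    return (mass[None, :, None] * dx * inv_r3[:, :, None]).sum(axis=1)
```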
NASA Astrophysics Data System (ADS)
Yu, Haoyu S.; Fiedler, Lucas J.; Alecu, I. M.; Truhlar, Donald G.
2017-01-01
We present a Python program, FREQ, for calculating optimal scale factors for harmonic vibrational frequencies, fundamental vibrational frequencies, and zero-point vibrational energies obtained from electronic structure calculations. The program utilizes a previously published scale factor optimization model (Alecu et al., 2010) to efficiently obtain all three scale factors from a set of computed harmonic vibrational frequencies. In order to obtain the three scale factors, the user only needs to provide zero-point energies of 15 or 6 selected molecules. If the user has access to the Gaussian 09 or Gaussian 03 program, we provide the option to run the program by entering the keywords for a certain method and basis set in Gaussian 09 or Gaussian 03. Four other Python programs, input.py, input6, pbs.py, and pbs6.py, are also provided for generating Gaussian 09 or Gaussian 03 input and PBS files. The program can also be used with data from any other electronic structure package. A manual describing how to use the program is included in the code package.
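The least-squares core of such a scale-factor fit has a closed form: minimizing the sum of squared differences between scaled computed values and reference values gives the ratio of inner products. A sketch; FREQ's actual objective, which works through zero-point energies, may differ in detail:

```python
import numpy as np

def optimal_scale_factor(computed, reference):
    """Closed-form least-squares scale factor: the lambda minimizing
    sum_i (lambda * w_i - ref_i)^2 is lambda = (w . ref) / (w . w)."""
    w = np.asarray(computed, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float((w @ r) / (w @ w))
```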
NASA Astrophysics Data System (ADS)
Lundberg, J.; Conrad, J.; Rolke, W.; Lopez, A.
2010-03-01
A C++ class was written for the calculation of frequentist confidence intervals using the profile likelihood method. Seven combinations of Poissonian, Gaussian and Binomial uncertainties are implemented. The package provides routines for the calculation of upper and lower limits, sensitivity and related properties. It also supports hypothesis tests which take uncertainties into account. It can be used in compiled C++ code, in Python or interactively via the ROOT analysis framework. Program summary: Program title: TRolke version 2.0. Catalogue identifier: AEFT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: MIT license. No. of lines in distributed program, including test data, etc.: 3431. No. of bytes in distributed program, including test data, etc.: 21789. Distribution format: tar.gz. Programming language: ISO C++. Computer: Unix, GNU/Linux, Mac. Operating system: Linux 2.6 (Scientific Linux 4 and 5, Ubuntu 8.10), Darwin 9.0 (Mac-OS X 10.5.8). RAM: ~20 MB. Classification: 14.13. External routines: ROOT (http://root.cern.ch/drupal/). Nature of problem: calculate a frequentist confidence interval on the parameter of a Poisson process with statistical or systematic uncertainties in signal efficiency or background. Solution method: profile likelihood method, analytical. Running time: <10 seconds per extracted limit.
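A stripped-down analogue of what such a class computes, for the simplest case of a Poisson count with exactly known background and no nuisance parameters, is easy to write down. This sketch is not TRolke's API, only the underlying likelihood-ratio scan:

```python
import math

def pl_upper_limit(n_obs, b, crossing=2.706):
    """Profile-likelihood upper limit on a Poisson signal s with known
    background b: scan s upward from the MLE until the likelihood-ratio
    statistic 2*(nll(s) - nll_min) crosses the chosen chi-squared quantile
    (2.706 is the 90% quantile for one degree of freedom)."""
    def nll(s):                         # negative log-likelihood, constants dropped
        mu = s + b
        return mu - n_obs * math.log(mu) if mu > 0 else float("inf")
    s_hat = max(0.0, n_obs - b)         # MLE constrained to s >= 0
    nll_min = nll(s_hat)
    s, step = s_hat, 0.01
    while 2.0 * (nll(s) - nll_min) < crossing:
        s += step
    return s
```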
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ashford, Mike
The report describes the prospects for energy efficiency and greenhouse gas emissions reductions in Mexico, along with renewable energy potential. A methodology for developing emissions baselines is shown, in order to prepare project emissions reductions calculations. An application to the USIJI program was also prepared through this project, for a portfolio of energy efficiency projects.
Computer supplies insulation recipe for Cookie Company Roof
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Roofing contractors no longer have to rely on complicated calculations and educated guesses to determine cost-efficient levels of roof insulation. A simple hand-held calculator with a printer offers seven different programs for quickly figuring insulation thickness based on job type, roof size, tax rates, and heating and cooling cost factors.
Scintillation detector efficiencies for neutrons in the energy region above 20 MeV
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dickens, J.K.
1991-01-01
The computer program SCINFUL (for SCINtillator FULl response) is a program designed to provide a calculated complete pulse-height response anticipated for neutrons being detected by either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator in the shape of a right circular cylinder. The point neutron source may be placed at any location with respect to the detector, even inside of it. The neutron source may be monoenergetic, or Maxwellian distributed, or distributed between chosen lower and upper bounds. The calculational method uses Monte Carlo techniques, and it is relativistically correct. Extensive comparisons with a variety of experimental data have been made. There is generally overall good agreement (less than 10% differences) of results for SCINFUL calculations with measured integral detector efficiencies for the design incident neutron energy range of 0.1 to 80 MeV. Calculations of differential detector responses, i.e., yield versus response pulse height, are generally within about 5% on the average for incident neutron energies between 16 and 50 MeV and for the upper 70% of the response pulse height. For incident neutron energies between 50 and 80 MeV, the calculated shape of the response agrees with measurements, but the calculations tend to underpredict the absolute values of the measured responses. Extension of the program to compute responses for incident neutron energies greater than 80 MeV will require new experimental data on neutron interactions with carbon. 32 refs., 6 figs., 2 tabs.
Scintillation detector efficiencies for neutrons in the energy region above 20 MeV
NASA Astrophysics Data System (ADS)
Dickens, J. K.
The computer program SCINFUL (for SCINtillator FULl response) is a program designed to provide a calculated complete pulse-height response anticipated for neutrons being detected by either an NE-213 (liquid) scintillator or an NE-110 (solid) scintillator in the shape of a right circular cylinder. The point neutron source may be placed at any location with respect to the detector, even inside of it. The neutron source may be monoenergetic, or Maxwellian distributed, or distributed between chosen lower and upper bounds. The calculational method uses Monte Carlo techniques, and it is relativistically correct. Extensive comparisons with a variety of experimental data were made. There is generally overall good agreement (less than 10 pct. differences) of results for SCINFUL calculations with measured integral detector efficiencies for the design incident neutron energy range of 0.1 to 80 MeV. Calculations of differential detector responses, i.e., yield versus response pulse height, are generally within about 5 pct. on the average for incident neutron energies between 16 and 50 MeV and for the upper 70 pct. of the response pulse height. For incident neutron energies between 50 and 80 MeV, the calculated shape of the response agrees with measurements, but the calculations tend to underpredict the absolute values of the measured responses. Extension of the program to compute responses for incident neutron energies greater than 80 MeV will require new experimental data on neutron interactions with carbon.
NASA Technical Reports Server (NTRS)
Koenig, R. W.; Fishbach, L. H.
1972-01-01
A computer program entitled GENENG employs component performance maps to perform analytical, steady-state engine cycle calculations. Through a scaling procedure, each of the component maps can be used to represent a family of maps (different design values of pressure ratio, efficiency, weight flow, etc.). Either convergent or convergent-divergent nozzles may be used. Included is a complete FORTRAN 4 listing of the program. Sample results and input explanations are shown for one-spool and two-spool turbojets and two-spool separate- and mixed-flow turbofans operating at design and off-design conditions.
DSN 100-meter X and S band microwave antenna design and performance
NASA Technical Reports Server (NTRS)
Williams, W. F.
1978-01-01
The RF performance of large reflector antenna systems (100 meters) using the high-efficiency dual shaped reflector approach is studied. An altered phase was considered so that the scattered field from a shaped surface could be used in the JPL efficiency program. A new dual band (X-S) microwave feed horn was used in the shaping calculations. A great many shaping calculations were made for various horn sizes and locations, and final RF efficiencies are reported. It is concluded that, when using the new dual band horn, shaping should probably be performed using the pattern of the lower frequency.
Spreadsheet Applications using VisiCalc and Lotus 1-2-3 Programs.
ERIC Educational Resources Information Center
Cortland-Madison Board of Cooperative Educational Services, Cortland, NY.
The VisiCalc program performs visual calculation on a computer using an electronic worksheet, which benefits the business user in numerous accounting and clerical procedures. The Lotus 1-2-3 program begins with VisiCalc and improves upon it by adding graphics and a database as well as more efficient ways to manipulate and…
NASA Technical Reports Server (NTRS)
Cebeci, T.; Carr, L. W.
1978-01-01
A computer program is described which provides solutions of the two-dimensional equations appropriate to laminar and turbulent boundary layers with an external flow that fluctuates in magnitude. The program is based on the numerical solution of the governing boundary-layer equations by an efficient two-point finite-difference method. An eddy viscosity formulation was used to model the Reynolds shear stress term. The main features of the method are briefly described, and instructions for the computer program, with a listing, are provided. Sample calculations are presented to demonstrate its usage and capabilities for laminar and turbulent unsteady boundary layers with an external flow that fluctuates in magnitude.
Assessment of the Impacts of Standards and Labeling Programs inMexico (four products).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Itha; Pulido, Henry; McNeil, Michael A.
2007-06-12
This study analyzes impacts from energy efficiency standards and labeling in Mexico from 1994 through 2005 for four major products: household refrigerators, room air conditioners, three-phase (squirrel cage) induction motors, and clothes washers. It is a retrospective analysis, seeking to assess verified impacts on product efficiency in the Mexican market in the first ten years after standards were implemented. Such an analysis allows the Mexican government to compare actual to originally forecast program benefits. In addition, it provides an extremely valuable benchmark for other countries considering standards, and to the energy policy community as a whole. The methodology for evaluation begins with historical test data taken for a large number of models of each product type between 1994 and 2005. The pre-standard efficiency of models in 1994 is taken as a baseline throughout the analysis. Model efficiency data were provided by an independent certification laboratory (ANCE), which tested products as part of the certification and enforcement mechanism defined by the standards program. Using these data, together with economic and market data provided by both government and private sector sources, the analysis considers several types of national level program impacts. These include: energy savings; environmental (emissions) impacts; and net financial impacts to consumers, manufacturers, and utilities. Energy savings impacts are calculated using the same methodology as the original projections, allowing a comparison. Other impacts are calculated using a robust and sophisticated methodology developed by the Instituto de Investigaciones Electricas (IIE) and Lawrence Berkeley National Laboratory (LBNL), in a collaboration supported by the Collaborative Labeling and Standards Program (CLASP).
Exergetic analysis of autonomous power complex for drilling rig
NASA Astrophysics Data System (ADS)
Lebedev, V. A.; Karabuta, V. S.
2017-10-01
The article considers the issue of increasing the energy efficiency of the power equipment of a drilling rig. At present, diverse types of power plants are used in power supply systems. When designing and choosing a power plant, one of the main criteria is its energy efficiency. The main indicator in this case is the effective efficiency factor calculated by the method of thermal balances. The article suggests using the exergy method to determine energy efficiency, which allows the degree of thermodynamic perfection of the system to be estimated, illustrated here with a gas turbine plant: a relative estimate (the exergetic efficiency factor) and an absolute estimate. An exergetic analysis of a gas turbine plant operating in a simple scheme was carried out using the program WaterSteamPro. Exergy losses in equipment elements are calculated.
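The exergy quantities involved are compact: specific flow exergy relative to a dead state, and exergetic efficiency as an exergy ratio. A minimal sketch; in practice the enthalpy and entropy values would come from a property library such as the WaterSteamPro program mentioned above:

```python
def flow_exergy(h, s, h0, s0, T0):
    """Specific flow exergy relative to the dead state (h0, s0, T0):
    e = (h - h0) - T0 * (s - s0), in the same units as h."""
    return (h - h0) - T0 * (s - s0)

def exergetic_efficiency(exergy_out, exergy_in):
    """Relative measure of thermodynamic perfection: useful exergy
    delivered divided by exergy supplied."""
    return exergy_out / exergy_in
```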
Efficient calculation of general Voigt profiles
NASA Astrophysics Data System (ADS)
Cope, D.; Khoury, R.; Lovett, R. J.
1988-02-01
An accurate and efficient program is presented for the computation of OIL profiles, generalizations of the Voigt profile resulting from the one-interacting-level model of Ward et al. (1974). These profiles have speed dependent shift and width functions and have asymmetric shapes. The program contains an adjustable error control parameter and includes the Voigt profile as a special case, although the general nature of this program renders it slower than a specialized Voigt profile method. Results on accuracy and computation time are presented for a broad set of test parameters, and a comparison is made with previous work on the asymptotic behavior of general Voigt profiles.
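In the special case where the speed dependence of shift and width is switched off, the OIL profile reduces to the ordinary symmetric Voigt profile, which can be computed through the Faddeeva function. A sketch using scipy, not the authors' specialized program:

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: convolution of a Gaussian (std sigma) and a Lorentzian
    (half-width gamma), via the Faddeeva function w(z) = exp(-z^2) erfc(-iz)."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))
```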
Slurry combustion. Volume 2: Appendices, Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Essenhigh, R.
1993-06-01
Volume II contains the following appendices: coal analyses and slurryability characteristics; listings of programs used to call and file experimental data, and to reduce data in enthalpy and efficiency calculations; and tabulated data sets.
31 CFR 205.6 - What is a Treasury-State agreement?
Code of Federal Regulations, 2010 CFR
2010-07-01
... EFFICIENT FEDERAL-STATE FUNDS TRANSFERS Rules Applicable to Federal Assistance Programs Included in a... documents the accepted funding techniques and methods for calculating interest agreed upon by us and a State...
NASA Technical Reports Server (NTRS)
Ruo, S. Y.
1978-01-01
A computer program was developed to account approximately for the effects of finite wing thickness in transonic potential flow over an oscillating wing of finite span. The program is based on the original sonic box computer program for a planar wing, which was extended to account for the effect of wing thickness. Computational efficiency and accuracy were improved, and swept trailing edges were accounted for. The nonuniform flow caused by finite thickness was accounted for by applying the local linearization concept with an appropriate coordinate transformation. A brief description of each computer routine and the applications of the cubic spline and spline surface data fitting techniques used in the program are given, and the method of input is shown in detail. Sample calculations as well as a complete listing of the computer program are presented.
FORTRAN program for induction motor analysis
NASA Technical Reports Server (NTRS)
Bollenbacher, G.
1976-01-01
A FORTRAN program for induction motor analysis is described. The analysis includes calculations of torque-speed characteristics, efficiency, losses, magnetic flux densities, weights, and various electrical parameters. The program is limited to three-phase Y-connected, squirrel-cage motors. Detailed instructions for using the program are given. The analysis equations are documented, and the sources of the equations are referenced. The appendixes include a FORTRAN symbol list, a complete explanation of input requirements, and a list of error messages.
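A torque-speed routine of this kind typically evaluates the per-phase equivalent circuit over a range of slip. A simplified Python sketch that neglects the magnetizing branch; the NASA program's full analysis also covers losses, flux densities, and weights not modeled here:

```python
import math

def induction_torque(V, f, poles, R1, X1, R2, X2, slip):
    """Air-gap torque of a three-phase induction motor from the per-phase
    steady-state equivalent circuit (stator R1 + jX1 in series with the
    referred rotor R2/slip + jX2; magnetizing branch neglected)."""
    ws = 4.0 * math.pi * f / poles                   # synchronous speed, rad/s
    I2 = V / math.hypot(R1 + R2 / slip, X1 + X2)     # rotor current magnitude, A
    return 3.0 * I2**2 * (R2 / slip) / ws            # torque, N*m
```

Sweeping slip from near 0 to 1 with this function traces out the torque-speed characteristic the abstract refers to.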
NASA Technical Reports Server (NTRS)
Anderson, M. S.; Warnaar, D. B.; Ling, B. J.
1986-01-01
A computer program is described which is especially suited for making vibration and buckling calculations for prestressed lattice structures that might be used for space application. Structures having repetitive geometry are treated in a very efficient manner. Detailed instructions for data input are given along with several example problems illustrating the use and capability of the program.
An analytical method to predict efficiency of aircraft gearboxes
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.; Black, J. D.
1984-01-01
A spur gear efficiency prediction method previously developed by the authors was extended to include power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This combined with the recent capability of predicting losses in spur gears of nonstandard proportions allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.
Computationally Efficient Multiconfigurational Reactive Molecular Dynamics
Yamashita, Takefumi; Peng, Yuxing; Knight, Chris; Voth, Gregory A.
2012-01-01
It is a computationally demanding task to explicitly simulate the electronic degrees of freedom in a system to observe the chemical transformations of interest, while at the same time sampling the time and length scales required to converge statistical properties and thus reduce artifacts due to initial conditions, finite-size effects, and limited sampling. One solution that significantly reduces the computational expense consists of molecular models in which effective interactions between particles govern the dynamics of the system. If the interaction potentials in these models are developed to reproduce calculated properties from electronic structure calculations and/or ab initio molecular dynamics simulations, then one can calculate accurate properties at a fraction of the computational cost. Multiconfigurational algorithms model the system as a linear combination of several chemical bonding topologies to simulate chemical reactions, also sometimes referred to as “multistate”. These algorithms typically utilize energy and force calculations already found in popular molecular dynamics software packages, thus facilitating their implementation without significant changes to the structure of the code. However, the evaluation of energies and forces for several bonding topologies per simulation step can lead to poor computational efficiency if redundancy is not efficiently removed, particularly with respect to the calculation of long-ranged Coulombic interactions. This paper presents accurate approximations (effective long-range interaction and resulting hybrid methods) and multiple-program parallelization strategies for the efficient calculation of electrostatic interactions in reactive molecular simulations. PMID:25100924
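The multiconfigurational idea in its smallest form: the system energy is the lowest eigenvalue of a matrix whose diagonal holds the energies of the individual bonding topologies and whose off-diagonal holds their coupling. A two-state numpy sketch:

```python
import numpy as np

def multistate_ground_energy(v11, v22, v12):
    """Ground-state energy of a two-configuration (e.g., empirical valence
    bond) Hamiltonian: the lowest eigenvalue of [[V11, V12], [V12, V22]],
    where V11 and V22 are the diabatic state energies and V12 the coupling."""
    h = np.array([[v11, v12], [v12, v22]])
    return float(np.linalg.eigvalsh(h)[0])   # eigvalsh returns ascending order
```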
Bellucci, Michael A; Coker, David F
2011-07-28
We describe a new method for constructing empirical valence bond potential energy surfaces using a parallel multilevel genetic program (PMLGP). Genetic programs can be used to perform an efficient search through function space and parameter space to find the best functions and sets of parameters that fit energies obtained by ab initio electronic structure calculations. Building on the traditional genetic program approach, the PMLGP utilizes a hierarchy of genetic programming on two different levels. The lower level genetic programs are used to optimize coevolving populations in parallel, while the higher level genetic program (HLGP) is used to optimize the genetic operator probabilities of the lower level genetic programs. The HLGP allows the algorithm to dynamically learn the mutation or combination of mutations that most effectively increase the fitness of the populations, causing a significant increase in the algorithm's accuracy and efficiency. The algorithm's accuracy and efficiency are tested against a standard parallel genetic program with a variety of one-dimensional test cases. Subsequently, the PMLGP is utilized to obtain an accurate empirical valence bond model for proton transfer in 3-hydroxy-gamma-pyrone in the gas phase and in protic solvent. © 2011 American Institute of Physics.
Quadratic Programming for Allocating Control Effort
NASA Technical Reports Server (NTRS)
Singh, Gurkirpal
2005-01-01
A computer program calculates an optimal allocation of control effort in a system that includes redundant control actuators. The program implements an iterative (but otherwise single-stage) algorithm of the quadratic-programming type. In general, in the quadratic-programming problem, one seeks the values of a set of variables that minimize a quadratic cost function, subject to a set of linear equality and inequality constraints. In this program, the cost function combines control effort (typically quantified in terms of energy or fuel consumed) and control residuals (differences between commanded and sensed values of variables to be controlled). In comparison with prior control-allocation software, this program offers approximately equal accuracy but much greater computational efficiency. In addition, this program offers flexibility, robustness to actuation failures, and a capability for selective enforcement of control requirements. The computational efficiency of this program makes it suitable for such complex, real-time applications as controlling redundant aircraft actuators or redundant spacecraft thrusters. The program is written in the C language for execution in a UNIX operating system.
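One common way to pose this allocation problem, minimizing a weighted sum of control residuals and control effort under actuator bounds, recasts it as bounded least squares. A sketch with scipy; the NASA program's exact cost weighting and iterative scheme are not reproduced here:

```python
import numpy as np
from scipy.optimize import lsq_linear

def allocate(B, d, lam=1e-2, u_min=-1.0, u_max=1.0):
    """Allocate effort u among redundant actuators: minimize
    ||B u - d||^2 + lam * ||u||^2 subject to box bounds, by stacking the
    effort penalty into an augmented least-squares system.
    B: (n_axes, n_actuators) effectiveness matrix; d: commanded response."""
    m = B.shape[1]
    A = np.vstack([B, np.sqrt(lam) * np.eye(m)])   # residual + effort rows
    b = np.concatenate([d, np.zeros(m)])
    return lsq_linear(A, b, bounds=(u_min, u_max)).x
```

For example, allocate(np.array([[1.0, 1.0, -1.0]]), np.array([0.5])) spreads a single-axis command across three bounded actuators.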
FreeSASA: An open source C library for solvent accessible surface area calculations.
Mitternacht, Simon
2016-01-01
Calculating solvent accessible surface areas (SASA) is a routine task in structural biology. Although there are many programs available for this calculation, there are no free-standing, open-source tools designed for easy tool-chain integration. FreeSASA is an open source C library for SASA calculations that provides both command-line and Python interfaces in addition to its C API. The library implements both Lee and Richards' and Shrake and Rupley's approximations, and is highly configurable, allowing the user to control molecular parameters, accuracy, and output granularity. It depends only on standard C libraries and should therefore be easy to compile and install on any platform. The library is well-documented, stable, and efficient. The command-line interface can easily replace closed-source legacy programs, with comparable or better accuracy and speed, and with some added functionality.
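To make the Shrake-Rupley approximation mentioned above concrete: it places test points on each atom's solvent-expanded sphere and counts the points not buried by neighboring atoms. A minimal NumPy sketch with toy coordinates and illustrative radii follows; it is not FreeSASA's optimized C implementation.

```python
import numpy as np

def fibonacci_sphere(n):
    """Roughly uniform test points on a unit sphere (golden-angle spiral)."""
    i = np.arange(n)
    phi = np.arccos(1 - 2 * (i + 0.5) / n)
    theta = np.pi * (1 + 5**0.5) * i
    return np.column_stack([np.sin(phi) * np.cos(theta),
                            np.sin(phi) * np.sin(theta),
                            np.cos(phi)])

def sasa(coords, radii, probe=1.4, n_points=100):
    """Total SASA (Angstrom^2): count unburied test points per atom."""
    pts = fibonacci_sphere(n_points)
    total = 0.0
    for i, (c, r) in enumerate(zip(coords, radii)):
        sphere = c + (r + probe) * pts          # points on the expanded sphere
        exposed = np.ones(n_points, dtype=bool)
        for j, (c2, r2) in enumerate(zip(coords, radii)):
            if j == i:
                continue
            exposed &= np.linalg.norm(sphere - c2, axis=1) > (r2 + probe)
        total += 4 * np.pi * (r + probe)**2 * exposed.mean()
    return total

# Two overlapping "atoms" with carbon-like radii:
print(sasa(np.array([[0., 0., 0.], [2., 0., 0.]]), np.array([1.7, 1.7])))
```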
Ye, Congting; Ji, Guoli; Li, Lei; Liang, Chun
2014-01-01
Inverted repeats are present in abundance in both prokaryotic and eukaryotic genomes and can form DNA secondary structures (hairpins and cruciforms) that are involved in many important biological processes. Bioinformatics tools for efficient and accurate detection of inverted repeats are desirable, because existing tools are often less accurate, time consuming, and sometimes incapable of dealing with genome-scale input data. Here, we present a MATLAB-based program called detectIR for the detection of perfect and imperfect inverted repeats that utilizes complex numbers and vector calculation and allows genome-scale data inputs. A novel algorithm is adopted in detectIR to convert the conventional sequence string comparison in inverted repeat detection into vector calculation of complex numbers, allowing non-complementary pairs (mismatches) in the pairing stem and a non-palindromic spacer (loop or gaps) in the middle of inverted repeats. Compared with existing popular tools, our program performs with significantly higher accuracy and efficiency. Using genome sequence data from HIV-1, Arabidopsis thaliana, Homo sapiens and Zea mays for comparison, detectIR finds many inverted repeats missed by existing tools, whose outputs often contain many invalid cases. detectIR is open source and its source code is freely available at: https://sourceforge.net/projects/detectir.
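The abstract does not give detectIR's actual encoding, but the idea of turning string comparison into complex arithmetic can be sketched as follows: choose an encoding in which complementary bases sum to zero, so counting mismatches between the two arms of a candidate inverted repeat reduces to one vectorized sum. The encoding below is one plausible choice, not necessarily the published one.

```python
import numpy as np

# One plausible encoding (not necessarily detectIR's): complementary bases sum to zero.
ENC = {'A': 1 + 0j, 'T': -1 + 0j, 'C': 0 + 1j, 'G': 0 - 1j}

def arm_mismatches(left, right):
    """Mismatches between an arm and the reversed partner arm.

    In a perfect inverted repeat, each position of `left` pairs with the
    complement of the mirrored position of `right`, so the encoded sums vanish.
    """
    l = np.array([ENC[b] for b in left])
    r = np.array([ENC[b] for b in right])[::-1]
    return int(np.count_nonzero(l + r))

print(arm_mismatches("GAA", "TTC"))   # 0: the two arms of GAATTC pair perfectly
print(arm_mismatches("GAA", "TGC"))   # 1: an imperfect repeat with one mismatch
```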
Noise studies of communication systems using the SYSTID computer aided analysis program
NASA Technical Reports Server (NTRS)
Tranter, W. H.; Dawson, C. T.
1973-01-01
SYSTID is a simple computer-aided design program for simulating data systems and communication links. The efficiency of the method was assessed by simulating a linear analog communication system to determine its noise performance and comparing the SYSTID result with theoretical calculations. It is shown that the SYSTID program is readily applicable to the analysis of these types of systems.
An interactive computer code for calculation of gas-phase chemical equilibrium (EQLBRM)
NASA Technical Reports Server (NTRS)
Pratt, B. S.; Pratt, D. T.
1984-01-01
A user-friendly, menu-driven, interactive computer program known as EQLBRM, which calculates the adiabatic equilibrium temperature and product composition resulting from the combustion of hydrocarbon fuels with air at specified constant pressure and enthalpy, is discussed. The program was developed primarily as an instructional tool to be run on small computers, allowing the user to economically and efficiently explore the effects of varying fuel type, air/fuel ratio, inlet air and/or fuel temperature, and operating pressure on the performance of continuous combustion devices such as gas turbine combustors, Stirling engine burners, and power generation furnaces.
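The defining condition behind such a calculation is conservation of enthalpy at constant pressure: the adiabatic equilibrium temperature T_ad satisfies H_products(T_ad) = H_reactants(T_in). The toy sketch below finds T_ad by bisection under the simplifying assumptions of fixed product composition and constant heat capacity; a real code such as EQLBRM re-equilibrates the product mixture at every trial temperature.

```python
# Toy sketch of the constant-pressure enthalpy balance such a program solves:
# find T_ad with H_products(T_ad) = H_reactants(T_IN).  The heat release and
# heat capacity below are assumed round numbers, not EQLBRM data.

Q_COMB = 2.5e6     # J released per kg of mixture (assumed)
CP_PROD = 1200.0   # J/(kg K), mean product heat capacity (assumed constant)
T_IN = 300.0       # K, reactant inlet temperature

def enthalpy_residual(T):
    # H_products(T) - H_reactants(T_IN), per kg of mixture
    return CP_PROD * (T - T_IN) - Q_COMB

lo, hi = T_IN, 4000.0
for _ in range(60):                    # bisection on the enthalpy balance
    mid = 0.5 * (lo + hi)
    if enthalpy_residual(mid) > 0.0:
        hi = mid
    else:
        lo = mid
print(f"adiabatic temperature ~ {0.5 * (lo + hi):.1f} K")   # ~2383 K here
```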
Cost of Sawing Timber (COST) Module (Version 1.0) for Windows®
A. Jefferson, Jr. Palmer; Janice K. Wiedenbeck; Robert W. Mayer
2005-01-01
The Cost of Sawing Timber (COST) Module calculates the cost of operations per minute and per thousand board feet for a hardwood sawmill. It may be used independently or as a source of cost information for use in sawmill efficiency software such as the SOLVE program. Cost figures are calculated on the basis of information entered by the user. Sawmill managers use these...
Estimating aquifer transmissivity from specific capacity using MATLAB.
McLin, Stephen G
2005-01-01
Historically, specific capacity information has been used to calculate aquifer transmissivity when pumping test data are unavailable. This paper presents a simple computer program written in the MATLAB programming language that estimates transmissivity from specific capacity data while correcting for aquifer partial penetration and well efficiency. The program graphically plots transmissivity as a function of these factors so that the user can visually estimate their relative importance in a particular application. The program is compatible with any computer operating system running MATLAB, including Windows, Macintosh OS, Linux, and Unix. Two simple examples illustrate program usage.
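The classical way to estimate transmissivity from specific capacity, and plausibly the basis of the MATLAB program (its exact formulation is not stated here), is to iterate the Cooper-Jacob relation, since T appears on both sides. A sketch without the partial-penetration and well-efficiency corrections the program applies:

```python
import math

def transmissivity(Q, s, t, r, S, T0=100.0, tol=1e-8):
    """Fixed-point iteration on the Cooper-Jacob specific-capacity relation
    s = (2.3 Q / (4 pi T)) log10(2.25 T t / (r^2 S)).

    Q: pumping rate [m^3/d], s: drawdown [m], t: pumping time [d],
    r: well radius [m], S: storativity [-].  Returns T [m^2/d].
    Partial-penetration and well-efficiency corrections are omitted here.
    """
    T = T0
    for _ in range(200):
        T_new = (2.3 * Q / (4 * math.pi * s)) * math.log10(2.25 * T * t / (r**2 * S))
        if abs(T_new - T) < tol:
            return T_new
        T = T_new
    return T

print(transmissivity(Q=500.0, s=5.0, t=1.0, r=0.15, S=1e-4))
```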
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carvill, Anna; Bushman, Kate; Ellsworth, Amy
2014-06-17
The EnergyFit Nevada (EFN) Better Buildings Neighborhood Program (BBNP, referred to in this document as the EFN program) currently encourages Nevada residents to make whole-house energy-efficient improvements by providing rebates, financing, and access to a network of qualified home improvement contractors. The BBNP funding, consisting of 34 Energy Efficiency Conservation Block Grants (EECBG) and seven State Energy Program (SEP) grants, was awarded for a three-year period to the State of Nevada in 2010 and used for initial program design and implementation. By the end of the first quarter of 2014, the program had achieved upgrades in 553 homes, with an average energy reduction of 32% per home. Other achievements included:
- Completed 893 residential energy audits and installed upgrades in 0.05% of all Nevada single-family homes(1)
- Achieved an overall conversion rate of 38.1%(2)
- 7,089,089 kWh of modeled energy savings(3)
- Total annual homeowner energy savings of approximately $525,752(3)
- Efficiency upgrades completed on 1,100,484 square feet of homes(3)
- $139,992 granted in loans to homeowners for energy-efficiency upgrades
- 29,285 hours of labor and $3,864,272 worth of work conducted by Nevada auditors and contractors(4)
- 40 contractors trained in Nevada
- 37 contractors with Building Performance Institute (BPI) certification in Nevada
- 19 contractors actively participating in the EFN program in Nevada
(1) Calculated using 2012 U.S. Census data reporting 1,182,870 homes in Nevada. (2) Conversion rate through March 31, 2014, for all Nevada Retrofit Initiative (NRI)-funded projects, calculated using the EFN tracking database. (3) OptiMiser energy modeling, based on current utility rates. (4) This is the sum of $3,596,561 in retrofit invoice value and $247,711 in audit invoice value.
Calculation of four-particle harmonic-oscillator transformation brackets
NASA Astrophysics Data System (ADS)
Germanas, D.; Kalinauskas, R. K.; Mickevičius, S.
2010-02-01
A procedure for precise calculation of the three- and four-particle harmonic-oscillator (HO) transformation brackets is presented. The analytical expressions of the four-particle HO transformation brackets are given. The computer code for the calculation of HO transformation brackets proves to be quick and efficient and produces results with small numerical uncertainties.
Program summary
Program title: HOTB
Catalogue identifier: AEFQ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEFQ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1247
No. of bytes in distributed program, including test data, etc.: 6659
Distribution format: tar.gz
Programming language: FORTRAN 90
Computer: Any computer with a FORTRAN 90 compiler
Operating system: Windows, Linux, FreeBSD, True64 Unix
RAM: 8 MB
Classification: 17.17
Nature of problem: Calculation of the three-particle and four-particle harmonic-oscillator transformation brackets.
Solution method: The method is based on the compact expressions of the three-particle harmonic-oscillator brackets presented in [1] and the expressions of the four-particle harmonic-oscillator brackets presented in this paper.
Restrictions: The three- and four-particle harmonic-oscillator transformation brackets up to e = 28.
Unusual features: Possibility of calculating the four-particle harmonic-oscillator transformation brackets.
Running time: Less than one second for a single harmonic-oscillator transformation bracket.
References: [1] G.P. Kamuntavičius, R.K. Kalinauskas, B.R. Barret, S. Mickevičius, D. Germanas, Nuclear Physics A 695 (2001) 191.
LENMODEL: A forward model for calculating length distributions and fission-track ages in apatite
NASA Astrophysics Data System (ADS)
Crowley, Kevin D.
1993-05-01
The program LENMODEL is a forward model for annealing of fission tracks in apatite. It provides estimates of the track-length distribution, fission-track age, and areal track density for any user-supplied thermal history. The program approximates the thermal history, in which temperature is represented as a continuous function of time, by a series of isothermal steps of various durations. Equations describing the production of tracks as a function of time and the annealing of tracks as a function of time and temperature are solved for each step. The step calculations are summed to obtain estimates for the entire thermal history. Computational efficiency is maximized by performing the step calculations backwards in model time. The program incorporates an intuitive and easy-to-use graphical interface. Thermal history is input to the program using a mouse. Model options are specified by selecting context-sensitive commands from a bar menu. The program allows for considerable selection of the equations and parameters used in the calculations. The program was written for PC-compatible computers running DOS 3.0 and above (and Windows 3.0 or above) with VGA or SVGA graphics and a Microsoft-compatible mouse. Single copies of a runtime version of the program are available from the author by written request, as explained in the last section of this paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, Jessica; Touzani, Samir; Taylor, Cody
Trustworthy savings calculations are critical to convincing regulators of both the cost-effectiveness of energy efficiency program investments and their ability to defer supply-side capital investments. Today's methods for measurement and verification (M&V) of energy savings constitute a significant portion of the total costs of energy efficiency programs. They also require time-consuming data acquisition. A spectrum of savings calculation approaches is used, with some relying more heavily on measured data and others relying more heavily on estimated, modeled, or stipulated data. The rising availability of "smart" meters and devices that report near-real-time data, combined with new analytical approaches to quantifying savings, offers the potential to conduct M&V more quickly and at lower cost, with comparable or improved accuracy. Commercial energy management and information systems (EMIS) technologies are beginning to offer M&V capabilities, and program administrators want to understand how they might assist programs in quickly and accurately measuring energy savings. This paper presents the results of recent testing of the ability to use automation to streamline some parts of M&V. We detail metrics to assess the performance of these new M&V approaches, and a framework to compute the metrics. We also discuss the accuracy, cost, and time trade-offs between more traditional M&V and these emerging streamlined methods that use high-resolution energy data and automated computational intelligence. Finally, we discuss the potential evolution of M&V and early results of pilots currently underway to incorporate M&V automation into ratepayer-funded programs and professional implementation and evaluation practice.
NASA Technical Reports Server (NTRS)
Rarig, P. L.
1980-01-01
A program to calculate upwelling infrared radiation was modified to operate efficiently on the STAR-100. The modified software processes specific test cases significantly faster than the initial STAR-100 code. For example, a midlatitude summer atmospheric model is executed in less than 2% of the time originally required on the STAR-100. Furthermore, the optimized program performs extra operations to save the calculated absorption coefficients. Some of the advantages and pitfalls of virtual memory and vector processing are discussed along with strategies used to avoid loss of accuracy and computing power. Results from the vectorized code, in terms of speed, cost, and relative error with respect to serial code solutions are encouraging.
Gas dynamic design of the pipe line compressor with 90% efficiency. Model test approval
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Rekstin, A.; Soldatova, K.
2015-08-01
Gas dynamic design of a 32 MW pipeline compressor was carried out for PAO SMPO (Sumy, Ukraine). The technical specification required a compressor efficiency of 90%. The customer proposed a favorable scheme: a single-stage design with a console impeller and axial inlet. The authors applied their standard optimization methodology for 2D impellers, and an original methodology of internal scroll profiling was used to minimize efficiency losses. The radically improved 5th version of the Universal Modeling Method computer programs was used for precise calculation of the expected performances. The customer performed model tests at 1:2 scale. The tests confirmed the calculated parameters at the design point (maximum efficiency of 90%) and over the whole range of flow rates. As far as the authors know, no comparable compressor has achieved such efficiency. The principles and methods of the gas-dynamic design are presented below. The data for the 32 MW compressor were presented by the customer in a report at the 16th International Compressor Conference (September 2014, Saint Petersburg) and later transferred to the authors.
Measuring Efficiency of Secondary Healthcare Providers in Slovenia
Blatnik, Patricia; Bojnec, Štefan; Tušak, Matej
2017-01-01
The chief aim of this study was to analyze secondary healthcare providers' efficiency, focusing on the efficiency analysis of Slovene general hospitals. We intended to present a complete picture of the technical, allocative, and cost (economic) efficiency of general hospitals. Methods: We researched the aspects of efficiency with two econometric methods. First, we calculated the necessary efficiency quotients with stochastic frontier analysis (SFA), realized by econometric evaluation of stochastic frontier functions; then, with data envelopment analysis (DEA), we calculated the quotients based on the linear programming method. Results: The two chosen methods produced two different conclusions: the SFA method identified Celje General Hospital as the most efficient general hospital, whereas the DEA method identified Brežice General Hospital as the most efficient. Conclusion: Our results are a useful tool that can aid managers, payers, and designers of healthcare policy to better understand how general hospitals operate. The participants can accordingly decide with less difficulty on any further business operations of general hospitals, having the best practices of general hospitals at their disposal. PMID:28730180
Lee, Hyo Taek; Roh, Hyo Lyun; Kim, Yoon Sang
2016-01-01
[Purpose] Efficient management using exercise programs with various benefits should be provided by educational institutions for children in their growth phase. We analyzed the heart rates of children during ski simulator exercise and the Harvard step test, and evaluated cardiopulmonary endurance by calculating the post-exercise recovery rate. [Subjects and Methods] The subjects (n = 77) were categorized into a normal-weight group and an overweight/obesity group by body mass index. They performed each exercise for 3 minutes. Cardiorespiratory endurance was calculated using the Physical Efficiency Index formula. [Results] The ski simulator and Harvard step test showed that there was a significant difference in the heart rates of the 2 body mass index-based groups at each minute. The normal-weight group and the ski-simulator exercise had higher Physical Efficiency Index levels. [Conclusion] This study showed that a simulator exercise can produce a cumulative load even when performed at low intensity, and can be effectively utilized as exercise equipment, since it resulted in higher Physical Efficiency Index levels than the Harvard step test. If schools can increase sport durability by stimulating students' interest, the ski simulator exercise can be used in programs designed to improve and strengthen students' physical fitness.
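The Physical Efficiency Index referenced above is conventionally computed from the exercise duration and the pulse counts taken during recovery. In the standard long-form Harvard step test scoring it reads as follows (the exact variant used in the study is not stated in the abstract):

```latex
\mathrm{PEI} = \frac{100 \times t_{\text{exercise}}\,[\mathrm{s}]}{2 \times (P_1 + P_2 + P_3)}
```

where \(P_1\), \(P_2\), \(P_3\) are the pulse counts in the 1-1.5, 2-2.5, and 3-3.5 minute post-exercise windows.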
Weiss, Julius; Elmer, Andreas; Mahíllo, Beatriz; Domínguez-Gil, Beatriz; Avsec, Danica; Costa, Alessandro Nanni; Haase-Kromwijk, Bernadette J J M; Laouabdia, Karim; Immer, Franz F
2018-04-19
The donation rate (DR) per million population is not ideal for an efficiency comparison of national deceased organ donation programs. The DR does not account for variability in the potential for deceased donation, which mainly depends on fatalities from causes leading to brain death. In this study, donation activity was put into relation to mortality from selected causes. Based on that metric, this study assesses the efficiency of different donation programs. This is a retrospective analysis of 2001-2015 deceased organ donation and mortality registry data. Included are 27 Council of Europe countries, as well as the USA. A donor conversion index (DCI) was calculated for assessing donation program efficiency over time and in international comparisons. According to the DCI, of the countries included in the study, Spain, France, and the USA had the most efficient donation programs in 2015. Even though mortality from the selected causes decreased in most countries during the study period, differences in international comparisons persist. This indicates that the potential for deceased organ donation and its conversion into actual donation is far from similar internationally. Compared with the DR, the DCI takes into account the potential for deceased organ donation and is therefore a more accurate metric of performance. National donation programs could optimize performance by identifying the areas where most potential is lost, and by implementing measures to tackle these issues.
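The abstract defines the donor conversion index only qualitatively, as donation activity relative to mortality from causes relevant to brain death. Schematically, it can be read as the following ratio (this exact form is an assumption, not quoted from the paper):

```latex
\mathrm{DCI} = \frac{\text{actual deceased organ donors}}{\text{deaths from selected causes potentially leading to brain death}}
```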
Programming PHREEQC calculations with C++ and Python a comparative study
Charlton, Scott R.; Parkhurst, David L.; Muller, Mike
2011-01-01
The new IPhreeqc module provides an application programming interface (API) to facilitate coupling of other codes with the U.S. Geological Survey geochemical model PHREEQC. Traditionally, loose coupling of PHREEQC with other applications required methods to create PHREEQC input files, start external PHREEQC processes, and process PHREEQC output files. IPhreeqc eliminates most of this effort by providing direct access to PHREEQC capabilities through a component object model (COM), a library, or a dynamically linked library (DLL). Input and calculations can be specified through internally programmed strings, and all data exchange between an application and the module can occur in computer memory. This study compares simulations programmed in C++ and Python that are tightly coupled with IPhreeqc modules to the traditional simulations that are loosely coupled to PHREEQC. The study compares performance, quantifies effort, and evaluates lines of code and the complexity of the design. The comparisons show that IPhreeqc offers a more powerful and simpler approach for incorporating PHREEQC calculations into transport models and other applications that need to perform PHREEQC calculations. The IPhreeqc module facilitates the design of coupled applications and significantly reduces run times. Even a moderate knowledge of one of the supported programming languages allows more efficient use of PHREEQC than the traditional loosely coupled approach.
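In practice, tight coupling of the kind compared in this study looks like the sketch below, which drives IPhreeqc from Python through the phreeqpy wrapper. The module and method names (IPhreeqc, load_database, run_string, get_selected_output_array) follow the phreeqpy documentation as recalled here; treat them as assumptions to verify against your installation.

```python
# Sketch of tight coupling via the IPhreeqc API using the phreeqpy wrapper.
# Module/method names are assumptions; check them against your phreeqpy install.
import phreeqpy.iphreeqc.phreeqc_dll as phreeqc_mod

ip = phreeqc_mod.IPhreeqc()
ip.load_database("phreeqc.dat")          # a standard PHREEQC database

input_string = """
SOLUTION 1
    temp  25.0
    pH    7.0
    Ca    1.0
    C(4)  2.0
SELECTED_OUTPUT
    -pH true
    -saturation_indices Calcite
END
"""
ip.run_string(input_string)              # all data exchange happens in memory
rows = ip.get_selected_output_array()    # first row is the header
print(dict(zip(rows[0], rows[1])))
```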
Additional development of the XTRAN3S computer program
NASA Technical Reports Server (NTRS)
Borland, C. J.
1989-01-01
Additional developments and enhancements to the XTRAN3S computer program, a code for calculation of steady and unsteady aerodynamics, and associated aeroelastic solutions, for 3-D wings in the transonic flow regime are described. Algorithm improvements for the XTRAN3S program were provided including an implicit finite difference scheme to enhance the allowable time step and vectorization for improved computational efficiency. The code was modified to treat configurations with a fuselage, multiple stores/nacelles/pylons, and winglets. Computer program changes (updates) for error corrections and updates for version control are provided.
NASA Astrophysics Data System (ADS)
Kler, A. M.; Zakharov, Yu. B.; Potanina, Yu. M.
2017-05-01
The objects of study are a gas turbine (GT) plant and a combined cycle power plant (CCPP) with the option of water injection between the stages of the air compressor. The objective of this paper is technical and economic optimization calculations for these classes of plants with interstage water injection. The integrated development environment "System of machine building program" was used to create the mathematical models of these classes of power plants. Optimization calculations were carried out with the criterion of minimum specific capital investment as a function of unit efficiency. For the gas-turbine plant, the economic gain from water injection exists over the entire range of power efficiency. For the combined cycle plant, the economic benefit was observed only for a certain range of the plant's power efficiency.
Computing Cooling Flows in Turbines
NASA Technical Reports Server (NTRS)
Gauntner, J.
1986-01-01
Algorithm developed for calculating both quantity of compressor bleed flow required to cool turbine and resulting decrease in efficiency due to cooling air injected into gas stream. Program intended for use with axial-flow, air-breathing, jet-propulsion engines with variety of airfoil-cooling configurations. Algorithm results compared extremely well with figures given by major engine manufacturers for given bulk-metal temperatures and cooling configurations. Program written in FORTRAN IV for batch execution.
Structural optimization with approximate sensitivities
NASA Technical Reports Server (NTRS)
Patnaik, S. N.; Hopkins, D. A.; Coroneos, R.
1994-01-01
Computational efficiency in structural optimization can be enhanced if the intensive computations associated with the calculation of the sensitivities, that is, the gradients of the behavior constraints, are reduced. An approximation to the gradients of the behavior constraints that can be generated with a small amount of numerical calculation is proposed. Structural optimization with these approximate sensitivities produced the correct optimum solution. The approximate gradients performed well for different nonlinear programming methods, such as the sequence of unconstrained minimization technique, the method of feasible directions, sequential quadratic programming, and sequential linear programming. Structural optimization with approximate gradients can reduce by one third the CPU time that would otherwise be required to solve the problem with explicit closed-form gradients. The proposed gradient approximation shows potential to reduce the intensive computation that has been associated with traditional structural optimization.
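For contrast with closed-form sensitivities, the simplest numerical gradient of a behavior constraint is a one-sided finite difference, costing one extra constraint evaluation per design variable. The generic sketch below is illustrative only and is not the specific approximation proposed in the paper:

```python
import numpy as np

def approx_grad(g, x, h=1e-6):
    """One-sided finite-difference gradient of constraint g at design x:
    n extra evaluations of g instead of deriving closed-form sensitivities.
    Generic illustration, not the paper's reduced-computation scheme."""
    g0 = g(x)
    grad = np.zeros_like(x)
    for i in range(len(x)):
        xp = x.copy()
        xp[i] += h
        grad[i] = (g(xp) - g0) / h
    return grad

# Example behavior constraint (invented): stress ratio in a two-bar truss,
# with member areas a required to keep g(a) <= 0.
g = lambda a: 1.0 / a[0] + 2.0 / a[1] - 1.0
print(approx_grad(g, np.array([3.0, 5.0])))    # ~[-1/9, -2/25]
```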
Programmable calculator software for computation of the plasma binding of ligands.
Conner, D P; Rocci, M L; Larijani, G E
1986-01-01
The computation of the extent of plasma binding of a ligand to plasma constituents using radiolabeled ligand and equilibrium dialysis is complex and tedious. A computer program for the HP-41C Handheld Computer Series (Hewlett-Packard) was developed to perform these calculations. The first segment of the program constructs a standard curve for quench correction of post-dialysis plasma and buffer samples, using either external standard ratio (ESR) or sample channels ratio (SCR) techniques. The remainder of the program uses the counts per minute, SCR or ESR, and post-dialysis volume of paired plasma and buffer samples generated from the dialysis procedure to compute the extent of binding after correction for background radiation, counting efficiency, and intradialytic shifts of fluid between plasma and buffer compartments during dialysis. This program greatly simplifies the analysis of equilibrium dialysis data and has been employed in the analysis of dexamethasone binding in normal and uremic sera.
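Conceptually, the binding computation reduces to comparing background- and efficiency-corrected count rates on the two sides of the dialysis membrane: buffer counts reflect free ligand only, while plasma counts reflect bound plus free. A simplified sketch follows, with invented numbers and the program's intradialytic fluid-shift correction omitted:

```python
def fraction_bound(cpm_plasma, cpm_buffer, background, eff_plasma, eff_buffer):
    """Fraction of ligand bound to plasma constituents at equilibrium.

    Counts are corrected for background and for counting efficiency (taken
    from the quench curve) to give disintegration rates proportional to
    concentration.  The HP-41C program's fluid-shift correction is omitted.
    """
    dpm_plasma = (cpm_plasma - background) / eff_plasma   # bound + free
    dpm_buffer = (cpm_buffer - background) / eff_buffer   # free ligand only
    return (dpm_plasma - dpm_buffer) / dpm_plasma

# Illustrative numbers only:
print(fraction_bound(cpm_plasma=9000, cpm_buffer=1200,
                     background=50, eff_plasma=0.40, eff_buffer=0.45))
```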
NASA Astrophysics Data System (ADS)
Xie, Dexuan
2014-10-01
The Poisson-Boltzmann equation (PBE) is one widely-used implicit solvent continuum model in the calculation of electrostatic potential energy for biomolecules in ionic solvent, but its numerical solution remains a challenge due to its strong singularity and nonlinearity caused by its singular distribution source terms and exponential nonlinear terms. To effectively deal with such a challenge, in this paper, new solution decomposition and minimization schemes are proposed, together with a new PBE analysis on solution existence and uniqueness. Moreover, a PBE finite element program package is developed in Python based on the FEniCS program library and GAMer, a molecular surface and volumetric mesh generation program package. Numerical tests on proteins and a nonlinear Born ball model with an analytical solution validate the new solution decomposition and minimization schemes, and demonstrate the effectiveness and efficiency of the new PBE finite element program package.
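The flavor of such a finite element PBE solve can be illustrated with legacy FEniCS (dolfin). The sketch below solves a regularized nonlinear Poisson-Boltzmann-type problem on a unit cube; the smooth source term, the screening constant, and the nondimensionalization are invented for illustration, and the package's actual solution decomposition (splitting off the singular Coulomb part first) is only hinted at in the comments.

```python
# Minimal sketch of a regularized (nonsingular) PBE-type solve with legacy
# FEniCS (dolfin).  The true problem first subtracts a singular Coulombic
# component; here f stands in for the resulting smooth source term, and the
# problem is nondimensionalized.
from dolfin import (UnitCubeMesh, FunctionSpace, Function, TestFunction,
                    DirichletBC, Constant, Expression, dot, grad, sinh,
                    dx, solve)

mesh = UnitCubeMesh(8, 8, 8)
V = FunctionSpace(mesh, "P", 1)
u = Function(V)                     # regular part of the potential
v = TestFunction(V)
kappa2 = Constant(1.0)              # screening parameter (assumed value)
f = Expression("10*exp(-50*(pow(x[0]-0.5,2)+pow(x[1]-0.5,2)+pow(x[2]-0.5,2)))",
               degree=2)            # smooth stand-in source term

# Weak form of -div(grad u) + kappa2*sinh(u) = f with u = 0 on the boundary.
F = dot(grad(u), grad(v))*dx + kappa2*sinh(u)*v*dx - f*v*dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")
solve(F == 0, u, bc)                # Newton iteration on the nonlinear form
print(u.vector().max())
```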
DOE Office of Scientific and Technical Information (OSTI.GOV)
Messenger, Mike; Bharvirkar, Ranjit; Golemboski, Bill
Public and private funding for end-use energy efficiency actions is expected to increase significantly in the United States over the next decade. For example, Barbose et al (2009) estimate that spending on ratepayer-funded energy efficiency programs in the U.S. could increase from $3.1 billion in 2008 to $7.5 billion and $12.4 billion by 2020 under their medium and high scenarios. This increase in spending could yield annual electric energy savings ranging from 0.58%-0.93% of total U.S. retail sales in 2020, up from 0.34% of retail sales in 2008. Interest in and support for energy efficiency has broadened among national and state policymakers. Prominent examples include approximately $18 billion in new funding for energy efficiency programs (e.g., State Energy Program, Weatherization, and Energy Efficiency and Conservation Block Grants) in the 2009 American Recovery and Reinvestment Act (ARRA). Increased funding for energy efficiency should result in more benefits as well as more scrutiny of these results. As energy efficiency becomes a more prominent component of the U.S. national energy strategy and policies, assessing the effectiveness and energy saving impacts of energy efficiency programs is likely to become increasingly important for policymakers and private and public funders of efficiency actions. Thus, it is critical that evaluation, measurement, and verification (EM&V) is carried out effectively and efficiently, which implies that: (1) Effective program evaluation, measurement, and verification (EM&V) methodologies and tools are available to key stakeholders (e.g., regulatory agencies, program administrators, consumers, and evaluation consultants); and (2) Capacity (people and infrastructure resources) is available to conduct EM&V activities and report results in ways that support program improvement and provide data that reliably compares achieved results against goals and similar programs in other jurisdictions (benchmarking). The National Action Plan for Energy Efficiency (2007) presented commonly used definitions for EM&V in the context of energy efficiency programs: (1) Evaluation (E) - The performance of studies and activities aimed at determining the effects and effectiveness of EE programs; (2) Measurement and Verification (M&V) - Data collection, monitoring, and analysis associated with the calculation of gross energy and demand savings from individual measures, sites or projects. M&V can be a subset of program evaluation; and (3) Evaluation, Measurement, and Verification (EM&V) - This term is frequently seen in evaluation literature. EM&V is a catchall acronym for determining both the effectiveness of program designs and estimates of load impacts at the portfolio, program, and project level. This report is a scoping study that assesses current practices and methods in the evaluation, measurement, and verification (EM&V) of ratepayer-funded energy efficiency programs, with a focus on methods and practices currently used for determining whether projected (ex-ante) energy and demand savings have been achieved (ex-post). M&V practices for privately funded energy efficiency projects (e.g., ESCO projects) or programs where the primary focus is greenhouse gas reductions were not part of the scope of this study.
We identify and discuss key purposes and uses of current evaluations of end-use energy efficiency programs, methods used to evaluate these programs, processes used to determine those methods, and key issues that need to be addressed now and in the future, based on discussions with regulatory agencies, policymakers, program administrators, and evaluation practitioners in 14 states and with national experts in the evaluation field. We also explore how EM&V may evolve in a future in which efficiency funding increases significantly, innovative mechanisms for rewarding program performance are adopted, the role of efficiency in greenhouse gas mitigation is more closely linked, and programs are increasingly funded from multiple sources, often with multiple program administrators, and intended to meet multiple purposes.
Cutting efficiency of Reciproc and waveOne reciprocating instruments.
Plotino, Gianluca; Giansiracusa Rubini, Alessio; Grande, Nicola M; Testarelli, Luca; Gambarini, Gianluca
2014-08-01
The aim of the present study was to evaluate the cutting efficiency of 2 new reciprocating instruments, Reciproc and WaveOne. Twenty-four new Reciproc R25 and 24 new WaveOne Primary files were activated by using a torque-controlled motor (Silver Reciproc) and divided into 4 groups (n = 12): group 1, Reciproc activated by the Reciproc ALL program; group 2, Reciproc activated by the WaveOne ALL program; group 3, WaveOne activated by the Reciproc ALL program; and group 4, WaveOne activated by the WaveOne ALL program. The device used for the cutting test consisted of a main frame to which a mobile plastic support for the handpiece is connected and a stainless steel block containing a Plexiglas block (inPlexiglass, Rome, Italy) against which the cutting efficiency of the instruments was tested. The length of the block cut in 1 minute was measured in a computerized program with a precision of 0.1 mm. Means and standard deviations of each group were calculated, and data were statistically analyzed with 1-way analysis of variance and the Bonferroni test (P < .05). Reciproc R25 displayed greater cutting efficiency than WaveOne Primary for both movements used (P < .05); in particular, Reciproc instruments used with their proper reciprocating motion presented statistically significantly higher cutting efficiency than WaveOne instruments used with their proper reciprocating motion (P < .05). There was no statistically significant difference between the 2 movements for either instrument (P > .05). Reciproc instruments demonstrated statistically higher cutting efficiency than WaveOne instruments. Copyright © 2014 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
Cuffney, Thomas F.
2003-01-01
The Invertebrate Data Analysis System (IDAS) software provides an accurate, consistent, and efficient mechanism for analyzing invertebrate data collected as part of the National Water-Quality Assessment Program and stored in the Biological Transactional Database (Bio-TDB). The IDAS software is a stand-alone program for personal computers that run Microsoft (MS) Windows®. It allows users to read data downloaded from Bio-TDB and stored either as MS Excel® or MS Access® files. The program consists of five modules. The Edit Data module allows the user to subset, combine, delete, and summarize community data. The Data Preparation module allows the user to select the type(s) of sample(s) to process, calculate densities, delete taxa based on laboratory processing notes, combine lifestages or keep them separate, select a lowest taxonomic level for analysis, delete rare taxa, and resolve taxonomic ambiguities. The Calculate Community Metrics module allows the user to calculate over 130 community metrics, including metrics based on organism tolerances and functional feeding groups. The Calculate Diversities and Similarities module allows the user to calculate nine diversity and eight similarity indices. The Data Export module allows the user to export data to other software packages and produce tables of community data that can be imported into spreadsheet and word-processing programs. Though the IDAS program was developed to process invertebrate data downloaded from USGS databases, it will work with other data sets that are converted to the USGS (Bio-TDB) format. Consequently, the data manipulation, analysis, and export procedures provided by the IDAS program can be used by anyone involved in using benthic macroinvertebrates in applied or basic research.
Altzitzoglou, Timotheos; Rožkov, Andrej
2016-03-01
The (129)I, (151)Sm and (166m)Ho standardisations using the CIEMAT/NIST efficiency tracing method, carried out in the frame of the European Metrology Research Program project "Metrology for Radioactive Waste Management", are described. The radionuclide beta counting efficiencies were calculated using two computer codes, CN2005 and MICELLE2. A sensitivity analysis of the code input parameters (ionization quenching factor, beta shape factor) on the calculated efficiencies was performed, and the results are discussed. The combined relative standard uncertainties of the standardisations of the (129)I, (151)Sm and (166m)Ho solutions were 0.4%, 0.5% and 0.4%, respectively. The stated precision obtained using the CIEMAT/NIST method is better than that previously reported in the literature using TDCR ((129)I), 4πγ-NaI counting ((166m)Ho) or the CIEMAT/NIST method ((151)Sm). Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.
1986-01-01
The goal of this project is the development of an optimization algorithm for use with a solar cell model. It is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junctions depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm has been developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAPID). SCAPID uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the operation of a solar cell. A major obstacle is that the numerical methods used in SCAPID require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the value associated with the maximum efficiency. This problem has been alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution. Adapting SCAPID so that it could be called iteratively by the optimization code provided another means of reducing the cpu time required to complete an optimization. Instead of calculating the entire I-V curve, as is usually done in SCAPID, only the efficiency is calculated (maximum power voltage and current) and the solution from previous calculations is used to initiate the next solution.
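The two cost-saving ideas described above, evaluating only the efficiency at the maximum-power point and warm-starting each simulator call from the previous solution, follow a generic pattern that can be sketched as below. The surrogate "simulator" and the design variables are invented stand-ins, not SCAPID itself:

```python
import numpy as np
from scipy.optimize import minimize

class CellSimulator:
    """Mock stand-in for an expensive device simulator: it returns only the
    efficiency at the maximum-power point and caches its last solution so
    the next call can start from it (the warm-start idea)."""
    def __init__(self):
        self.prev = None                    # last converged "solution"

    def efficiency(self, design):
        junction_depth, thickness = design
        # A real simulator would seed its nonlinear solve with self.prev
        # instead of starting from scratch every call.
        self.prev = np.array(design)
        # Smooth analytic surrogate with a known optimum (invented numbers):
        return 0.18 - 0.02*(junction_depth - 0.3)**2 - 0.01*(thickness - 20.0)**2

sim = CellSimulator()
# Maximize efficiency = minimize its negative over the design variables.
res = minimize(lambda d: -sim.efficiency(d), x0=np.array([0.5, 15.0]),
               method="Nelder-Mead")
print(res.x, -res.fun)    # ~[0.3, 20.0], ~0.18
```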
NASA Technical Reports Server (NTRS)
Gordon, L. H.; Phillips, B. R.; Evangelista, J.
1978-01-01
Computer program represents attempt to understand and model characteristics of electrolysis cells. It allows user to determine how cell efficiency is affected by temperature, pressure, current density, electrolyte concentration, characteristic dimensions, membrane resistance, and electrolyte circulation rate. It also calculates ratio of bubble velocity to electrolyte velocity for anode and cathode chambers.
GW/Bethe-Salpeter calculations for charged and model systems from real-space DFT
NASA Astrophysics Data System (ADS)
Strubbe, David A.
GW and Bethe-Salpeter (GW/BSE) calculations use mean-field input from density-functional theory (DFT) calculations to compute excited states of a condensed-matter system. Many parts of a GW/BSE calculation are efficiently performed in a plane-wave basis, and extensive effort has gone into optimizing and parallelizing plane-wave GW/BSE codes for large-scale computations. Plane-wave DFT is the most straightforward starting point, but real-space DFT is also attractive: it is systematically convergeable like plane waves, can take advantage of efficient domain parallelization for large systems, and is physically well suited for finite and especially charged systems. The flexibility of a real-space grid also allows convenient calculations on non-atomic model systems. I will discuss the interfacing of a real-space (TD)DFT code (Octopus, www.tddft.org/programs/octopus) with a plane-wave GW/BSE code (BerkeleyGW, www.berkeleygw.org), consider performance issues and accuracy, and present some applications to simple and paradigmatic systems that illuminate fundamental properties of these approximations in many-body perturbation theory.
Techno-Economic Analysis of Indian Draft Standard Levels for RoomAir Conditioners
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNeil, Michael A.; Iyer, Maithili
The Indian Bureau of Energy Efficiency (BEE) finalized its first set of efficiency standards and labels for room air conditioners in July of 2006. These regulations followed soon after the publication of levels for frost-free refrigerators in the same year. As in the case of refrigerators, the air conditioner program introduces Minimum Efficiency Performance Standards (MEPS) and comparative labels simultaneously, with levels for one to five stars. Also like the refrigerator program, BEE defined several successive program phases of increasing stringency. In support of BEE's refrigerator program, Lawrence Berkeley National Laboratory (LBNL) produced an analysis of national impacts of standards in collaboration with the Collaborative Labeling and Standards Program (CLASP). That analysis drew on LBNL's experience with standards programs in the United States, as well as many other countries. Subsequently, as part of the process for setting optimal levels for air conditioner regulations, CLASP commissioned LBNL to provide support to BEE in the form of a techno-economic evaluation of air conditioner efficiency technologies. This report describes the methodology and results of this techno-economic evaluation. The analysis consists of three components: (1) Cost effectiveness to consumers of efficiency technologies relative to the current baseline. (2) Impacts on the current market from efficiency regulations. (3) National energy and financial impacts. The analysis relied on detailed and up-to-date technical data made available by BEE and industry representatives. Technical parameters were used in conjunction with knowledge about air conditioner use patterns in the residential and commercial sectors, and prevailing marginal electricity prices, in order to give an estimate of per-unit financial impacts. In addition, the overall impact of the program was evaluated by combining unit savings with market forecasts in order to yield national impacts. LBNL presented preliminary results of these analyses in May 2006, at a meeting of BEE's Technical Committee for Air Conditioners. This meeting was attended by a wide array of stakeholders, including industry representatives, engineers, and consumer advocates. Comments made by stakeholders at this meeting are incorporated into the final analysis presented in this report. The current analysis begins with the Rating Plan drafted by BEE in 2006, along with an evaluation of the market baseline according to test data submitted by manufacturers. MEPS, label rating levels, and baseline efficiencies are presented in Section 2. First, we compare Indian MEPS with current standards in other countries, and assess their relative stringency. Baseline efficiencies are then used to estimate the fraction of models likely to remain on the market at each phase of the program, and the impact on market-weighted efficiency levels. Section 3 deals with cost-effectiveness of higher efficiency design options. The cost-benefit analysis is grounded in technical parameters provided by industry representatives in India. This data allows for an assessment of financial costs and benefits to consumers as a result of the standards and labeling program. A Life-Cycle Cost (LCC) calculation is used to evaluate the impacts of the program at the unit level, thus providing some insight into the appropriateness of the levels chosen, and additional opportunities for further ratcheting.
In addition to LCC, we also calculate payback periods, cost of conserved energy (CCE), and return on investment (ROI). Finally, Section 4 covers national impacts. This is an extension of the unit-level estimates in the two previous sections. Extrapolation to the national level depends on a forecast of air conditioner purchases (shipments), which we describe here. Following the cost-benefit analysis, we construct several efficiency scenarios including the BEE plan, but also considering further potential for efficiency improvement. These are combined with shipments through a stock accounting model in order to forecast air conditioner energy consumption in each scenario, and the associated electricity savings and carbon emission mitigation. Finally, financial costs and savings are scaled to the national level to evaluate net fiscal benefits.
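The unit-level metrics named above follow standard definitions: life-cycle cost is purchase price plus the discounted stream of operating costs, simple payback is incremental price over annual bill savings, and CCE annualizes the incremental price over the annual energy saved. A sketch with invented numbers (not BEE's or LBNL's data):

```python
def lcc(price, annual_kwh, tariff, discount, life):
    """Life-cycle cost: purchase price plus present value of electricity."""
    pv = sum(annual_kwh * tariff / (1 + discount)**y for y in range(1, life + 1))
    return price + pv

def metrics(dprice, dkwh, tariff, discount, life):
    """Standard cost-effectiveness metrics for an efficiency design option:
    dprice = incremental price, dkwh = annual energy saved."""
    annual_savings = dkwh * tariff
    payback = dprice / annual_savings                   # simple payback, years
    crf = discount * (1 + discount)**life / ((1 + discount)**life - 1)
    cce = dprice * crf / dkwh                           # cost of conserved energy
    roi = annual_savings / dprice                       # first-year return
    return payback, cce, roi

# Illustrative: an option costing Rs 1500 more that saves 250 kWh/yr,
# at Rs 4/kWh, 10% discount rate, 10-year life.
print(metrics(1500.0, 250.0, 4.0, 0.10, 10))   # (1.5 yr, ~0.98 Rs/kWh, ~0.67)
print(lcc(20000.0, 1500.0, 4.0, 0.10, 10))
```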
MBGD update 2013: the microbial genome database for exploring the diversity of microbial world.
Uchiyama, Ikuo; Mihara, Motohiro; Nishide, Hiroyo; Chiba, Hirokazu
2013-01-01
The microbial genome database for comparative analysis (MBGD, available at http://mbgd.genome.ad.jp/) is a platform for microbial genome comparison based on orthology analysis. As its unique feature, MBGD allows users to conduct orthology analysis among any specified set of organisms; this flexibility allows MBGD to adapt to a variety of microbial genomic studies. Reflecting the huge diversity of the microbial world, the number of microbial genome projects has now reached several thousand. To efficiently explore the diversity of the entire body of microbial genomic data, MBGD now provides summary pages for pre-calculated ortholog tables among various taxonomic groups. For some closely related taxa, MBGD also provides conserved synteny information (core genome alignments) pre-calculated using the CoreAligner program. In addition, an efficient incremental updating procedure can create an extended ortholog table by adding genomes to the default ortholog table generated from the representative set of genomes. Combined with its dynamic orthology calculation for any specified set of organisms, MBGD is an efficient and flexible tool for exploring microbial genome diversity.
Qi, Hong; Qiao, Yao-Bin; Ren, Ya-Tao; Shi, Jing-Wen; Zhang, Ze-Yu; Ruan, Li-Ming
2016-10-17
Sequential quadratic programming (SQP) is used as an optimization algorithm to reconstruct the optical parameters based on the time-domain radiative transfer equation (TD-RTE). Numerous time-resolved measurement signals are obtained using the TD-RTE as the forward model. For high computational efficiency, the gradient of the objective function is calculated using an adjoint equation technique. The SQP algorithm is employed to solve the inverse problem, and a regularization term based on the generalized Gaussian Markov random field (GGMRF) model is used to overcome the ill-posedness of the problem. Simulated results show that the proposed reconstruction scheme performs efficiently and accurately.
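The structure of such a reconstruction (a regularized least-squares misfit minimized by an SQP-type solver) can be sketched with a deliberately trivial forward model. The exponential-decay stand-in, the quadratic regularizer (instead of the paper's GGMRF prior), and the numerical gradients (instead of adjoint gradients) are all simplifications:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
true_mu = np.array([0.5, 1.2])        # "optical parameters" to recover

def forward(mu):
    """Trivial stand-in for the time-domain RTE forward model."""
    t = np.linspace(0.1, 5.0, 50)
    return np.exp(-mu[0] * t) + 0.3 * np.exp(-mu[1] * t)

measured = forward(true_mu) + 1e-3 * rng.standard_normal(50)

lam = 1e-4
def objective(mu):
    resid = forward(mu) - measured
    # Quadratic penalty as a simple stand-in for the GGMRF regularizer.
    return resid @ resid + lam * (mu @ mu)

res = minimize(objective, x0=np.array([1.0, 1.0]), method="SLSQP",
               bounds=[(0.0, 5.0)] * 2)   # SQP-type solver from SciPy
print(res.x)    # close to true_mu
```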
An object oriented Python interface for atomistic simulations
NASA Astrophysics Data System (ADS)
Hynninen, T.; Himanen, L.; Parkkinen, V.; Musso, T.; Corander, J.; Foster, A. S.
2016-01-01
Programmable simulation environments allow one to monitor and control calculations efficiently and automatically before, during, and after runtime. Environments directly accessible in a programming environment can be interfaced with powerful external analysis tools and extensions to enhance the functionality of the core program, and by incorporating a flexible object based structure, the environments make building and analysing computational setups intuitive. In this work, we present a classical atomistic force field with an interface written in Python language. The program is an extension for an existing object based atomistic simulation environment.
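The calculator pattern that such object-based environments use can be shown with ASE's built-in EMT toy potential. ASE is named here only as a representative environment; the abstract does not say which environment the authors extend, and this is not their force-field code:

```python
# The calculator pattern of object-based atomistic environments, shown with
# ASE's built-in EMT toy potential (illustrative; not the authors' extension).
from ase.build import bulk
from ase.calculators.emt import EMT

atoms = bulk("Cu", "fcc", a=3.6)        # the structure is a Python object
atoms.calc = EMT()                      # attach an interchangeable calculator
print(atoms.get_potential_energy())     # energies/forces on demand, scriptable
```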
Time-Varying Value of Energy Efficiency in Michigan
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mims, Natalie; Eckman, Tom; Schwartz, Lisa C.
Quantifying the time-varying value of energy efficiency is necessary to properly account for all of its benefits and costs and to identify and implement efficiency resources that contribute to a low-cost, reliable electric system. Historically, most quantification of the benefits of efficiency has focused largely on the economic value of annual energy reduction. Due to the lack of statistically representative metered end-use load shape data in Michigan (i.e., the hourly or seasonal timing of electricity savings), the ability to confidently characterize the time-varying value of energy efficiency savings in the state, especially for weather-sensitive measures such as central air conditioning, is limited. Still, electric utilities in Michigan can take advantage of opportunities to incorporate the time-varying value of efficiency into their planning. For example, end-use load research and hourly valuation of efficiency savings can be used for a variety of electricity planning functions, including load forecasting, demand-side management and evaluation, capacity planning, long-term resource planning, renewable energy integration, assessing potential grid modernization investments, establishing rates and pricing, and customer service (KEMA 2012). In addition, accurately calculating the time-varying value of efficiency may help energy efficiency program administrators prioritize existing offerings, set incentive or rebate levels that reflect the full value of efficiency, and design new programs.
The New Southern FIA Data Compilation System
V. Clark Baldwin; Larry Royer
2001-01-01
In general, the national Forest Inventory and Analysis annual inventory effort has emphasized database design rather than data processing and the calculation of various new attributes. Two key programming techniques required for efficient data processing are indexing and modularization. The Southern Research Station Compilation System utilizes modular and indexing...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pastore, Giovanni; Rabiti, Cristian; Pizzocri, Davide
PolyPole is a numerical algorithm for the calculation of intra-granular fission gas release. In particular, the algorithm solves the gas diffusion problem in a fuel grain in time-varying conditions. The program has been extensively tested. PolyPole combines a high accuracy with a high computational efficiency and is ideally suited for application in fuel performance codes.
Cuffney, Thomas F.; Brightbill, Robin A.
2011-01-01
The Invertebrate Data Analysis System (IDAS) software was developed to provide an accurate, consistent, and efficient mechanism for analyzing invertebrate data collected as part of the U.S. Geological Survey National Water-Quality Assessment (NAWQA) Program. The IDAS software is a stand-alone program for personal computers that run Microsoft Windows®. It allows users to read data downloaded from the NAWQA Program Biological Transactional Database (Bio-TDB) or to import data from other sources either as Microsoft Excel® or Microsoft Access® files. The program consists of five modules: Edit Data, Data Preparation, Calculate Community Metrics, Calculate Diversities and Similarities, and Data Export. The Edit Data module allows the user to subset data on the basis of taxonomy or sample type, extract a random subsample of data, combine or delete data, summarize distributions, resolve ambiguous taxa (see glossary) and conditional/provisional taxa, import non-NAWQA data, and maintain and create files of invertebrate attributes that are used in the calculation of invertebrate metrics. The Data Preparation module allows the user to select the type(s) of sample(s) to process, calculate densities, delete taxa on the basis of laboratory processing notes, delete pupae or terrestrial adults, combine lifestages or keep them separate, select a lowest taxonomic level for analysis, delete rare taxa on the basis of the number of sites where a taxon occurs and (or) the abundance of a taxon in a sample, and resolve taxonomic ambiguities by one of four methods. The Calculate Community Metrics module allows the user to calculate 184 community metrics, including metrics based on organism tolerances, functional feeding groups, and behavior. The Calculate Diversities and Similarities module allows the user to calculate nine diversity and eight similarity indices. The Data Export module allows the user to export data to other software packages (CANOCO, Primer, PC-ORD, MVSP) and produce tables of community data that can be imported into spreadsheet, database, graphics, statistics, and word-processing programs. The IDAS program facilitates the documentation of analyses by keeping a log of the data that are processed, the files that are generated, and the program settings used to process the data. Though the IDAS program was developed to process NAWQA Program invertebrate data downloaded from Bio-TDB, the Edit Data module includes tools that can be used to convert non-NAWQA data into Bio-TDB format. Consequently, the data manipulation, analysis, and export procedures provided by the IDAS program can be used to process data generated outside of the NAWQA Program.
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package, fastclime, for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as a stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
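For reference, the CLIME estimator mentioned above solves, for a tuning parameter \(\lambda\),

```latex
\hat{\Omega} = \arg\min_{\Omega} \|\Omega\|_1
\quad \text{subject to} \quad
\|\hat{\Sigma}\,\Omega - I\|_{\infty} \le \lambda ,
```

where \(\hat{\Sigma}\) is the sample covariance matrix. The constraint decomposes column by column into linear programs, which is why a parametric simplex solver can trace the entire solution path in \(\lambda\), advantage (1) above.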
A Continuous Method for Gene Flow
Palczewski, Michal; Beerli, Peter
2013-01-01
Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937
NASA Astrophysics Data System (ADS)
Stock, Joachim W.; Kitzmann, Daniel; Patzer, A. Beate C.; Sedlmayr, Erwin
2018-06-01
For the calculation of complex neutral/ionized gas phase chemical equilibria, we present a semi-analytical, versatile, and efficient computer program called FastChem. The applied method is based on the solution of a system of coupled nonlinear (and linear) algebraic equations in many variables, namely the law of mass action and the element conservation equations including charge balance. Specifically, the system of equations is decomposed into a set of coupled nonlinear equations in one variable each, which are solved analytically whenever feasible to reduce computation time. Notably, the electron density is determined by using the method of Nelder and Mead at low temperatures. The program is written in object-oriented C++, which makes it easy to couple the code with other programs, although a stand-alone version is provided. FastChem can be used in parallel or sequentially and is available under the GNU General Public License version 3 at https://github.com/exoclime/FastChem together with several sample applications. The code has been successfully validated against previous studies, and its convergence behavior has been tested even for extreme physical parameter ranges, down to 100 K and up to 1000 bar. FastChem converges stably and robustly in even the most demanding chemical situations, which have sometimes posed extreme challenges for previous algorithms.
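Schematically, the coupled system such a solver works with consists of one mass-action relation per molecule, one conservation equation per element, and charge balance. Written in terms of number densities \(n_i\) (a schematic form, not FastChem's exact nondimensionalization):

```latex
n_i = K_i(T) \prod_j n_j^{\nu_{ij}}, \qquad
\sum_i a_{ki}\, n_i = \epsilon_k\, n_{\mathrm{tot}}, \qquad
\sum_i q_i\, n_i = 0 ,
```

where \(K_i\) is the equilibrium constant of species \(i\), \(\nu_{ij}\) are stoichiometric coefficients, \(a_{ki}\) counts atoms of element \(k\) in species \(i\), \(\epsilon_k\) is the element abundance, and \(q_i\) the charge.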
NASA Astrophysics Data System (ADS)
Nahar, J.; Rusyaman, E.; Putri, S. D. V. E.
2018-03-01
This research was conducted at Perum BULOG Sub-Divre Medan, the institution implementing the Raskin program for several regencies and cities in North Sumatera. Raskin is a program for distributing rice to the poor. In order to minimize rice distribution costs, rice should be allocated optimally. The method used in this study consists of the Improved Vogel Approximation Method (IVAM) to obtain the initial feasible solution, and Modified Distribution (MODI) to test the optimality of the solution. This study aims to determine whether the IVAM method can provide savings, or cost efficiency, in rice distribution. The calculation with IVAM yielded an optimum cost of Rp945.241.715,5, lower than the company's own calculation of Rp958.073.750,40. Thus, the use of IVAM can save rice distribution costs of Rp12.832.034,9.
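The underlying model is the balanced transportation problem: with unit shipping cost \(c_{ij}\) from warehouse \(i\) (supply \(s_i\)) to destination \(j\) (demand \(d_j\)),

```latex
\min_{x_{ij} \ge 0} \sum_i \sum_j c_{ij} x_{ij}
\quad \text{s.t.} \quad
\sum_j x_{ij} = s_i \;\; \forall i, \qquad
\sum_i x_{ij} = d_j \;\; \forall j .
```

IVAM supplies a good initial basic feasible solution for this LP, and MODI then checks optimality via the dual variables.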
BIM cost analysis of transport infrastructure projects
NASA Astrophysics Data System (ADS)
Volkov, Andrey; Chelyshkov, Pavel; Grossman, Y.; Khromenkova, A.
2017-10-01
The article describes a method for analyzing the energy costs of transport infrastructure objects using BIM software. The paper considers several options for orienting a building using the SketchUp and IES VE software programs; these options allow the best orientation of the building facades to be chosen. Particular attention is given to the distribution of the temperature field in a cross-section of the wall, according to a calculation made in the ELCUT software. Issues related to the calculation of solar radiation penetration into a building and the selection of translucent structures are also considered. The article presents data on the building codes relating to the transport sector on which the calculations were based. The authors emphasize that BIM programs should be implemented and used in order to optimize the thermal behavior of a building and increase its energy efficiency using climatic data.
Brodin, N. Patrik; Guha, Chandan; Tomé, Wolfgang A.
2015-01-01
Modern pre-clinical radiation therapy (RT) research requires high precision and accurate dosimetry to facilitate the translation of research findings into clinical practice. Several systems are available that provide precise delivery and on-board imaging capabilities, highlighting the need for a quality management program (QMP) to ensure consistent and accurate radiation dose delivery. An ongoing, simple, and efficient QMP for image-guided robotic small animal irradiators used in pre-clinical RT research is described. Protocols were developed and implemented to assess the dose output constancy (based on the AAPM TG-61 protocol), cone-beam computed tomography (CBCT) image quality and object representation accuracy (using a custom-designed imaging phantom), CBCT-guided target localization accuracy, and consistency of the CBCT-based dose calculation. To facilitate an efficient read-out and limit the user dependence of the QMP data analysis, a semi-automatic image analysis and data representation program was developed using the technical computing software MATLAB. The results of the first six months' experience using the suggested QMP for a Small Animal Radiation Research Platform (SARRP) are presented, with data collected on a bi-monthly basis. The dosimetric output constancy was established to be within ±1%, the consistency of the image resolution was within ±0.2 mm, the accuracy of CBCT-guided target localization was within ±0.5 mm, and dose calculation consistency was within ±2 s (±3%) per treatment beam. Based on these results, this simple quality assurance program allows for the detection of inconsistencies in dosimetric or imaging parameters that are beyond the acceptable variability for a reliable and accurate pre-clinical RT system, on a monthly or bi-monthly basis. PMID:26425981
Application of automated measurement and verification to utility energy efficiency program data
Granderson, Jessica; Touzani, Samir; Fernandes, Samuel; ...
2017-02-17
Trustworthy savings calculations are critical to convincing regulators of both the cost-effectiveness of energy efficiency program investments and their ability to defer supply-side capital investments. Today’s methods for measurement and verification (M&V) of energy savings constitute a significant portion of the total costs of energy efficiency programs. They also require time-consuming data acquisition. A spectrum of savings calculation approaches is used, with some relying more heavily on measured data and others relying more heavily on estimated, modeled, or stipulated data. The increasing availability of “smart” meters and devices that report near-real-time data, combined with new analytical approaches to quantify savings, offers the potential to conduct M&V more quickly and at lower cost, with comparable or improved accuracy. Commercial energy management and information systems (EMIS) technologies are beginning to offer these ‘M&V 2.0’ capabilities, and program administrators want to understand how they might assist programs in quickly and accurately measuring energy savings. This paper presents the results of recent testing of the ability to use automation to streamline the M&V process. We apply an automated whole-building M&V tool to historic data sets from energy efficiency programs to begin to explore the accuracy, cost, and time trade-offs between more traditional M&V and these emerging streamlined methods that use high-resolution energy data and automated computational intelligence. For the data sets studied we evaluate the fraction of buildings that are well suited to automated baseline characterization, the uncertainty in gross savings that is due to M&V 2.0 tools’ model error, indications of labor time savings, and how the automated savings results compare to prior, traditionally determined savings results. The results show that 70% of the buildings were well suited to the automated approach. In a majority of the cases (80%), savings and uncertainties for each individual building were quantified to levels above the criteria in ASHRAE Guideline 14. In addition, the findings suggest that M&V 2.0 methods may also offer time savings relative to traditional approaches. Lastly, we discuss the implications of these findings for the potential evolution of M&V, and pilots currently being launched to test how M&V automation can be integrated into ratepayer-funded programs and professional implementation and evaluation practice.
The application of dynamic programming in production planning
NASA Astrophysics Data System (ADS)
Wu, Run
2017-05-01
Nowadays, with the ubiquity of computers, various industries and fields widely apply computer information technology, which creates huge demand for a variety of application software. In order to develop software that meets various needs at the most economical cost and with the best quality, programmers must design efficient algorithms. A superior algorithm not only solves the problem at hand but also maximizes the benefits while generating the smallest overhead. Dynamic programming is a common class of algorithms used to solve problems with certain optimality properties. When solving problems with a large number of sub-problems that require repeated calculation, the naive recursive method consumes exponential time, whereas a dynamic programming algorithm can reduce the time complexity to the polynomial level; dynamic programming is therefore very efficient compared with other approaches, reducing computational complexity while enriching the computational results. In this paper we expound the concept, basic elements, properties, core ideas, solution steps, and difficulties of the dynamic programming algorithm, and establish a dynamic programming model of the production planning problem.
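As a concrete example of the paper's subject, the following sketch (with hypothetical cost parameters, not the paper's model) solves a small lot-sizing production planning problem by dynamic programming over inventory states:

```python
def plan_production(demand, max_prod, setup, unit, hold):
    """Minimum-cost production plan via dynamic programming. The state is
    the inventory carried into a period; producing q units costs
    setup*(q > 0) + unit*q, and leftover stock costs hold per unit."""
    INF = float("inf")
    max_inv = sum(demand)
    best = {0: 0.0}          # inventory -> cheapest cost so far
    choice = []              # per period: inventory -> (prev_inv, q)
    for d in demand:
        nxt, back = {}, {}
        for inv, cost in best.items():
            for q in range(max_prod + 1):
                left = inv + q - d
                if not 0 <= left <= max_inv:
                    continue
                c = cost + (setup if q else 0.0) + unit * q + hold * left
                if c < nxt.get(left, INF):
                    nxt[left], back[left] = c, (inv, q)
        best = nxt
        choice.append(back)
    # Recover the plan that ends with empty inventory.
    plan, inv = [], 0
    for back in reversed(choice):
        inv, q = back[inv]
        plan.append(q)
    return best[0], plan[::-1]

total, plan = plan_production([3, 2, 4, 1], max_prod=5,
                              setup=10.0, unit=2.0, hold=1.0)
print(f"minimum cost = {total}, production per period = {plan}")
```

Each sub-problem (cheapest way to reach a given inventory level) is solved once and reused, which is exactly the polynomial-time saving the abstract describes.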
NASA Technical Reports Server (NTRS)
Cline, M. C.
1981-01-01
A computer program, VNAP2, for calculating turbulent (as well as laminar and inviscid), steady, and unsteady flow is presented. It solves the two dimensional, time dependent, compressible Navier-Stokes equations. The turbulence is modeled with either an algebraic mixing length model, a one equation model, or the Jones-Launder two equation model. The geometry may be a single or a dual flowing stream. The interior grid points are computed using the unsplit MacCormack scheme. Two options to speed up the calculations for high Reynolds number flows are included. The boundary grid points are computed using a reference plane characteristic scheme with the viscous terms treated as source functions. An explicit artificial viscosity is included for shock computations. The fluid is assumed to be a perfect gas. The flow boundaries may be arbitrary curved solid walls, inflow/outflow boundaries, or free jet envelopes. Typical problems that can be solved concern nozzles, inlets, jet powered afterbodies, airfoils, and free jet expansions. The accuracy and efficiency of the program are shown by calculations of several inviscid and turbulent flows. The program and its use are described completely, and six sample cases and a code listing are included.
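The flavor of the interior-point scheme can be shown on a one-dimensional model problem; the sketch below (not VNAP2 itself) applies the unsplit MacCormack predictor-corrector to the inviscid Burgers equation. In the real program an explicit artificial viscosity damps the oscillations such a scheme produces near shocks.

```python
import numpy as np

def maccormack_burgers(u0, dx, dt, steps):
    """Unsplit MacCormack scheme for u_t + (u^2/2)_x = 0: forward-difference
    predictor, backward-difference corrector, end values held fixed."""
    u = u0.astype(float).copy()
    lam = dt / dx
    for _ in range(steps):
        f = 0.5 * u**2
        up = u.copy()
        up[:-1] = u[:-1] - lam * (f[1:] - f[:-1])       # predictor
        fp = 0.5 * up**2
        u[1:-1] = 0.5 * (u[1:-1] + up[1:-1]
                         - lam * (fp[1:-1] - fp[:-2]))  # corrector + average
    return u

x = np.linspace(0.0, 1.0, 101)
u0 = np.where(x < 0.3, 1.0, 0.0)  # step that propagates as a shock at speed 1/2
u = maccormack_burgers(u0, dx=0.01, dt=0.004, steps=100)
i = np.abs(x - 0.5).argmin()      # shock should sit near x = 0.5 at t = 0.4
print(u[i - 3:i + 4].round(3))
```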
NASA Technical Reports Server (NTRS)
Spiers, Gary D.
1991-01-01
The final report for work done during the reporting period of January 25, 1990 to January 24, 1991 is presented. A literature survey was conducted to identify the required parameters for effective preionization in TEA CO2 lasers, and the methods and techniques for characterizing preionizers are reviewed. A numerical model of the LP-140 cavity was used to determine the cause of the transverse mode stability improvement obtained when the cavity was lengthened. Measurements of the voltage and current discharge pulses on the LP-140 were obtained, and their subsequent analysis resulted in an explanation for the low efficiency of the laser. An assortment of items relating to the development of high-voltage power supplies is also provided. A program for analyzing the frequency chirp data files obtained with the HP time and frequency analyzer is included. A program to calculate the theoretical LIMP chirp is also included, and a comparison between experiment and theory is made. A program for calculating the CO2 linewidth and its dependence on gas composition and pressure is presented; it also calculates the number of axial modes under the FWHM of the line for a given resonator length and plots the results graphically.
Energy efficient transport technology: Program summary and bibliography
NASA Technical Reports Server (NTRS)
Middleton, D. B.; Bartlett, D. W.; Hood, R. V.
1985-01-01
The Energy Efficient Transport (EET) Program began in 1976 as an element of the NASA Aircraft Energy Efficiency (ACEE) Program. The EET Program and the results of various applications of advanced aerodynamics and active controls technology (ACT) applicable to future subsonic transport aircraft are discussed. Advanced aerodynamics research areas included high-aspect-ratio supercritical wings, winglets, advanced high-lift devices, natural laminar flow airfoils, hybrid laminar flow control, nacelle aerodynamic and inertial loads, propulsion/airframe integration (e.g., long-duct nacelles), and wing and empennage surface coatings. In-depth analytical/trade studies, numerous wind tunnel tests, and several flight tests were conducted, and improved computational methodology was also developed. The active control functions considered were maneuver load control, gust load alleviation, flutter mode control, angle-of-attack limiting, and pitch-augmented stability. Current and advanced active control laws were synthesized, and alternative control system architectures were developed and analyzed. Integrated application and fly-by-wire implementation of the active control functions were design requirements in one major subprogram. Additional EET research included interdisciplinary technology applications, integrated energy management, handling qualities investigations, reliability calculations, and economic evaluations related to fuel savings and cost of ownership of the selected improvements.
Thermal radiation view factor: Methods, accuracy and computer-aided procedures
NASA Technical Reports Server (NTRS)
Kadaba, P. V.
1982-01-01
Computer-aided thermal analysis programs, which predict whether orbiting equipment will remain within a predetermined acceptable temperature range in various attitudes with respect to the Sun and the Earth, are examined. The complexity of the surface geometries suggests the use of numerical schemes for the determination of the view factors. Basic definitions and the standard methods that form the basis of the various digital computer and numerical methods are presented. The physical models and mathematical methods on which a number of available programs are built are summarized. The strengths and weaknesses of the methods employed, the accuracy of the calculations, and the time required for computations are evaluated. Situations where accuracy is important for energy calculations are identified, and methods to save computational time are proposed. A guide to the best use of the available programs at several centers and future choices for the efficient use of digital computers are included in the recommendations.
Applications of Java and Vector Graphics to Astrophysical Visualization
NASA Astrophysics Data System (ADS)
Edirisinghe, D.; Budiardja, R.; Chae, K.; Edirisinghe, G.; Lingerfelt, E.; Guidry, M.
2002-12-01
We describe a series of projects utilizing the portability of Java programming coupled with the compact nature of vector graphics (SVG and SWF formats) for setup and control of calculations, local and collaborative visualization, and interactive 2D and 3D animation presentations in astrophysics. Through a set of examples, we demonstrate how such an approach can allow efficient and user-friendly control of calculations in compiled languages such as Fortran 90 or C++ through portable graphical interfaces written in Java, and how the output of such calculations can be packaged in vector-based animation having interactive controls and extremely high visual quality, but very low bandwidth requirements.
NASA Astrophysics Data System (ADS)
Goedecker, Stefan; Boulet, Mireille; Deutsch, Thierry
2003-08-01
Three-dimensional Fast Fourier Transforms (FFTs) are the main computational task in plane-wave electronic structure calculations. Obtaining high performance on large numbers of processors is non-trivial on the latest generation of parallel computers, whose nodes consist of shared-memory multiprocessors. A non-dogmatic method for obtaining high performance for such 3-dimensional FFTs in a combined MPI/OpenMP programming paradigm is presented. Exploiting the peculiarities of plane-wave electronic structure calculations, speedups of up to 160 and speeds of up to 130 Gflops were obtained on 256 processors.
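The essential data movement can be sketched on a single process: each node transforms its local slabs in 2D, the all-to-all communication step is played here by a transpose, and the remaining direction is transformed in 1D. A minimal NumPy sketch of this slab decomposition (not the paper's MPI/OpenMP code):

```python
import numpy as np

def fft3_slab(a):
    """3D FFT via the slab decomposition used in parallel plane-wave codes:
    2D FFTs over local xy-planes, a z-transpose (an MPI all-to-all in the
    distributed version), then 1D FFTs along the remaining axis."""
    a = np.fft.fftn(a, axes=(0, 1))     # local 2D FFTs on each slab
    a = np.transpose(a, (2, 0, 1))      # stands in for the all-to-all step
    a = np.fft.fft(a, axis=0)           # 1D FFTs along the former z-axis
    return np.transpose(a, (1, 2, 0))   # restore the original layout

rng = np.random.default_rng(0)
a = rng.standard_normal((8, 8, 8))
print(np.allclose(fft3_slab(a), np.fft.fftn(a)))  # True
```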
Localized Plasmon resonance in metal nanoparticles using Mie theory
NASA Astrophysics Data System (ADS)
Duque, J. S.; Blandón, J. S.; Riascos, H.
2017-06-01
In this work, light scattering by colloidal metal nanoparticles of spherical shape was studied. Optical properties such as the extinction and absorption efficiencies Q_ext and Q_abs were calculated using Mie theory. We employed a MATLAB program to calculate the Mie efficiencies and the radial dependence of the electric field intensities emitted by colloidal metal nanoparticles (MNPs). By UV-Vis spectroscopy we determined the LSPR for Cu nanoparticles (CuNPs), Ni nanoparticles (NiNPs) and Co nanoparticles (CoNPs) grown by the laser ablation technique. The resonance peaks appear at 590 nm, 384 nm and 350 nm for CuNPs, NiNPs and CoNPs, respectively, suspended in water. Changing the medium to acetone or ethanol, we observed a shift of the resonance peaks; these values agreed with our simulation results.
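For particles much smaller than the wavelength, the Mie efficiencies reduce to simple closed forms; the sketch below implements this small-particle (Rayleigh) limit rather than the full Mie series used in the paper, and the refractive index is illustrative:

```python
import math

def small_particle_efficiencies(radius_nm, wavelength_nm, m_particle,
                                n_medium=1.33):
    """Absorption/scattering/extinction efficiencies in the small-particle
    (Rayleigh) limit of Mie theory, valid for size parameter x << 1.
    n_medium defaults to water, as for the colloids above; the full Mie
    series is needed for larger particles."""
    x = 2 * math.pi * n_medium * radius_nm / wavelength_nm  # size parameter
    m = m_particle / n_medium                               # relative index
    pol = (m**2 - 1) / (m**2 + 2)                           # polarizability term
    q_abs = 4 * x * pol.imag
    q_sca = (8.0 / 3.0) * x**4 * abs(pol)**2
    return q_abs, q_sca, q_abs + q_sca

# Illustrative complex refractive index for Cu near its plasmon resonance.
print(small_particle_efficiencies(10.0, 590.0, m_particle=0.47 + 2.9j))
```

The medium index entering x and m is why the resonance shifts when water is replaced by acetone or ethanol.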
Bird impact analysis package for turbine engine fan blades
NASA Technical Reports Server (NTRS)
Hirschbein, M. S.
1982-01-01
A computer program has been developed to analyze the gross structural response of turbine engine fan blades subjected to bird strikes. The program couples a NASTRAN finite element model and modal analysis of a fan blade with a multi-mode bird impact analysis computer program. The impact analysis uses the NASTRAN blade model and a fluid jet model of the bird to interactively calculate blade loading during a bird strike event. The analysis package is computationally efficient, easy to use, and provides a comprehensive history of the gross structural blade response. Example cases are presented for a representative fan blade.
An analysis of thermal response factors and how to reduce their computational time requirement
NASA Technical Reports Server (NTRS)
Wiese, M. R.
1982-01-01
The RESFAC2 version of the Thermal Response Factor Program (RESFAC) is the result of numerous modifications and additions to the original RESFAC, which have significantly reduced the program's computational time requirement. As a result of this work, the program is more efficient and its code is both readable and understandable. This report describes what a thermal response factor is; analyzes the original matrix algebra calculations and root-finding techniques; presents a new root-finding technique and streamlined matrix algebra; and supplies ten validation cases and their results.
Energy-efficient Public Procurement: Best Practice in Program Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Payne, Christopher; Weber, Andrew; Semple, Abby
2013-02-15
This document illustrates the key issues and considerations involved in implementing energy-efficient public procurement. Our primary sources of information have been our partners in the Super Efficient Equipment and Appliance Deployment (SEAD) Initiative Procurement Working Group. Where applicable, we have highlighted specific ways in which working group participants have successfully overcome barriers to delivering effective programs. The following key points emerge from this analysis of programs for energy-efficient public procurement. Lessons for both developed and developing programs are highlighted throughout the guide. 1. Policy: Policy provides the initiative to begin a transition from first cost to life-cycle cost based purchasing methods and culture. Effective policy is well-communicated, establishes accountability from top to bottom of organizations and simplifies the processes necessary to comply. Flexibility and responsiveness are essential in policy development and implementation. Mandatory and voluntary policies may complement one another. 2. Procurement Criteria: Procurement staff must be confident that energy-efficient procurement criteria offer the best long-term value for their organization’s money and represent real environmental gains. Involving multiple stakeholders at the early stages of the criteria creation process can result in greater levels of cooperation from private industry. Criteria should make comparison of products easy for purchasers and require minimal additional calculations. Criteria will need to be regularly updated to reflect market developments. 3. Training: Resources for the creation of training programs are usually very limited, but well-targeted training is necessary in order for a program to be effective. Training must emphasize a process that is efficient for purchasers and simplifies compliance. Purchaser resources and policy must be well designed for training to be effective. Training program development is an excellent opportunity for collaboration amongst public authorities. 4. Procurement Processes: Many tools and guides intended to help buyers comply with energy-efficient procurement policy are designed without detailed knowledge of the procurement process. A deeper understanding of purchasing pathways allows resources to be better directed. Current research by national and international bodies aims to analyze purchasing pathways and can assist in developing future resources.
Parametric modeling and stagger angle optimization of an axial flow fan
NASA Astrophysics Data System (ADS)
Li, M. X.; Zhang, C. H.; Liu, Y.; Y Zheng, S.
2013-12-01
Axial flow fans are widely used in every field of industrial production, and improving their efficiency is a sustained and urgent demand of domestic industry. The optimization of the stagger angle is an important method for improving fan performance. Parametric modeling and calculation-process automation are realized in this paper to improve optimization efficiency. Geometric modeling and mesh division are parameterized based on GAMBIT; parameter setting and flow field calculation are completed in the batch mode of FLUENT. A control program developed in Visual C++ manages the data exchange between these programs. It also extracts the calculation results for the optimization algorithm module (provided by Matlab), which generates optimization control parameters that are fed back to the modeling module. The center line of the blade airfoil, based on the CLARK Y profile, is constructed by the non-constant circulation and triangle discharge method. Stagger angles of six airfoil sections are optimized to reduce the influence of inlet shock loss, gas leakage in the blade tip clearance, and hub resistance at the blade root. Finally an optimal solution is obtained which meets the total pressure requirement under the given conditions and improves the total pressure efficiency by about 6%.
Computer Program for the Design and Off-Design Performance of Turbojet and Turbofan Engine Cycles
NASA Technical Reports Server (NTRS)
Morris, S. J.
1978-01-01
This fast computer program is designed to run in a stand-alone mode or within a larger program. The computation is based on a simplified one-dimensional gas turbine cycle, with each component in the engine modeled thermodynamically. The component efficiencies used in the thermodynamic modeling are scaled for off-design conditions from input design-point values using empirical trends included in the computer code. The engine cycle program is capable of producing reasonable engine performance predictions with a minimum of computer execution time; the current execution time on the IBM 360/67 for one Mach number, one altitude, and one power setting is about 0.1 seconds. The principal assumption used in the calculation is that the compressor is operated along a line of maximum adiabatic efficiency on the compressor map. The fluid properties are computed for the combustion mixture, but dissociation is not included. The procedure included in the program covers only the combustion of JP-4, methane, or hydrogen.
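A minimal sketch of the kind of one-dimensional cycle bookkeeping such a program performs, using ideal-gas relations and assumed component efficiencies (all values illustrative, not the program's own component models):

```python
def brayton_thermal_efficiency(pressure_ratio, t_inlet, t_turbine_in,
                               eta_c=0.85, eta_t=0.90, gamma=1.4):
    """Thermal efficiency of a simple gas-turbine cycle with non-ideal
    compressor and turbine; each component is modeled thermodynamically,
    as in a one-dimensional cycle code. Temperatures in kelvin."""
    phi = (gamma - 1.0) / gamma
    t2_ideal = t_inlet * pressure_ratio**phi
    t2 = t_inlet + (t2_ideal - t_inlet) / eta_c            # real compressor exit
    t4_ideal = t_turbine_in / pressure_ratio**phi
    t4 = t_turbine_in - eta_t * (t_turbine_in - t4_ideal)  # real turbine exit
    w_net = (t_turbine_in - t4) - (t2 - t_inlet)           # per unit cp*mass flow
    q_in = t_turbine_in - t2
    return w_net / q_in

# Pressure ratio 12, 288 K inlet, 1400 K turbine inlet temperature.
print(f"{brayton_thermal_efficiency(12.0, 288.0, 1400.0):.3f}")
```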
IVisTMSA: Interactive Visual Tools for Multiple Sequence Alignments.
Pervez, Muhammad Tariq; Babar, Masroor Ellahi; Nadeem, Asif; Aslam, Naeem; Naveed, Nasir; Ahmad, Sarfraz; Muhammad, Shah; Qadri, Salman; Shahid, Muhammad; Hussain, Tanveer; Javed, Maryam
2015-01-01
IVisTMSA is a software package of seven graphical tools for multiple sequence alignments. MSApad is an editing and analysis tool; it can load 409% more data than Jalview, STRAP, CINEMA, and Base-by-Base. MSA comparator allows the user to visualize consistent and inconsistent regions of reference and test alignments of more than 21-MB size in less than 12 seconds, and is 5,200% and more than 40% more efficient than the BAliBASE c program and FastSP, respectively. The MSA reconstruction tool provides graphical user interfaces for four popular aligners and allows the user to load several sequence files at a time. FASTA generator converts seven alignment formats of unlimited size into FASTA format in a few seconds. MSA ID calculator calculates the identity matrix of more than 11,000 sequences with a sequence length of 2,696 base pairs in less than 100 seconds. The tree and distance matrix calculation tools generate a phylogenetic tree and a distance matrix, respectively, using neighbor joining with percent identity and the BLOSUM62 matrix.
Parametric Design and Mechanical Analysis of Beams based on SINOVATION
NASA Astrophysics Data System (ADS)
Xu, Z. G.; Shen, W. D.; Yang, D. Y.; Liu, W. M.
2017-07-01
In engineering practice, engineers need to carry out complicated calculations when the loads on a beam are complex; these analyses take a lot of time and the results can be unreliable. Therefore, VS2005 and ADK were used to develop software for beam design in the C++ programming language, based on the 3D CAD software SINOVATION. The software can perform the mechanical analysis and parameterized design of various types of beams and output the design report in HTML format, improving the efficiency and reliability of beam design.
Jdpd: an open java simulation kernel for molecular fragment dissipative particle dynamics.
van den Broek, Karina; Kuhn, Hubert; Zielesny, Achim
2018-05-21
Jdpd is an open Java simulation kernel for Molecular Fragment Dissipative Particle Dynamics with parallelizable force calculation, efficient caching options and fast property calculations. It is characterized by an interface and factory-pattern driven design for simple code changes and may help to avoid problems of polyglot programming. Detailed input/output communication, parallelization and process control as well as internal logging capabilities for debugging purposes are supported. The new kernel may be utilized in different simulation environments ranging from flexible scripting solutions up to fully integrated "all-in-one" simulation systems.
Nagy, Peter; Szabó, Ágnes; Váradi, Tímea; Kovács, Tamás; Batta, Gyula; Szöllősi, János
2016-04-01
Fluorescence or Förster resonance energy transfer (FRET) remains one of the most widely used methods for assessing protein clustering and conformation. Although it is a method with solid physical foundations, many applications of FRET fall short of providing quantitative results due to inappropriate calibration and controls. This shortcoming is especially valid for microscopy, where currently available tools have limited or no capability to display parameter distributions or to perform gating. Since users of multiparameter flow cytometry usually apply these tools, the absence of these features in applications developed for microscopic FRET analysis is a significant limitation. Therefore, we developed a graphical user interface-controlled Matlab application for the evaluation of ratiometric, intensity-based microscopic FRET measurements. The program can calculate all the necessary overspill and spectroscopic correction factors and the FRET efficiency, and it displays the results on histograms and dot plots. Gating on plots and mask images can be used to limit the calculation to certain parts of the image. It is an important feature of the program that the calculated parameters can be determined by regression methods, maximum likelihood estimation (MLE) and from summed intensities, in addition to pixel-by-pixel evaluation. The confidence interval of calculated parameters can be estimated using parameter simulations if the approximate average number of detected photons is known. The program is not only user-friendly; it provides rich output, gives the user freedom to choose from different calculation modes, and gives insight into the reliability and distribution of the calculated parameters. © 2016 International Society for Advancement of Cytometry.
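As a minimal sketch of ratiometric, intensity-based FRET evaluation (a standard three-channel scheme, not the program's actual formulas; the spillover factors s1, s2 and spectroscopic factor alpha are assumed calibration inputs):

```python
import numpy as np

def fret_efficiency(i_dd, i_da, i_aa, s1, s2, alpha):
    """Pixel-by-pixel FRET efficiency from donor (DD), transfer (DA) and
    acceptor (AA) channel images. s1 and s2 are donor/acceptor spillover
    factors and alpha the spectroscopic correction factor, all of which
    must be calibrated as the program above does."""
    sensitized = i_da - s1 * i_dd - s2 * i_aa  # remove channel overspill
    with np.errstate(divide="ignore", invalid="ignore"):
        e = sensitized / (sensitized + alpha * i_dd)
    return np.clip(np.nan_to_num(e), 0.0, 1.0)

rng = np.random.default_rng(1)
dd = rng.uniform(100, 200, (4, 4))
aa = rng.uniform(100, 200, (4, 4))
da = 0.3 * dd + 0.1 * aa + 50.0  # synthetic transfer channel
print(fret_efficiency(dd, da, aa, s1=0.3, s2=0.1, alpha=1.0).round(2))
```

Computing E per pixel like this is what makes histogram display and gating meaningful, since each pixel contributes one point to the parameter distribution.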
Satellite Power Systems (SPS) concept definition study. Volume 6: In-depth element investigation
NASA Technical Reports Server (NTRS)
Hanley, G. M.
1980-01-01
The fabrication parameters of GaAs MESFET solid-state amplifiers were determined, considering a power-added conversion efficiency of at least 80% and power gains of at least 10 dB. The operating frequency was 2.45 GHz, although 914 MHz was also considered. The basic circuit considered was either Class C or Class E amplification. Two modeling programs were utilized, and the results of several computer calculations considering differing loads, temperatures, and efficiencies are presented, with parametric data in both tabular and plotted form.
User News. Volume 17, Number 1 -- Spring 1996
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
This is a newsletter for users of the DOE-2, PowerDOE, SPARK, and BLAST building energy simulation programs. The topics for the Spring 1996 issue include the SPARK simulation environment, DOE-2 validation, listing of free fenestration software from LBNL, Web sites for building energy efficiency, the heat balance method of calculating building heating and cooling loads.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
Computer program for design analysis of radial-inflow turbines
NASA Technical Reports Server (NTRS)
Glassman, A. J.
1976-01-01
A computer program written in FORTRAN that may be used for the design analysis of radial-inflow turbines was documented. The following information is included: loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing and list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output data includes various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.
McLaren, D G; Buchanan, D S; Williams, J E
1987-10-01
A static, deterministic computer model, programmed in Microsoft Basic for IBM PC and Apple Macintosh computers, was developed to calculate production efficiency (cost per kg of product) for nine alternative types of crossbreeding system involving four breeds of swine. The model simulates efficiencies for four purebred and 60 alternative two-, three- and four-breed rotation, rotaterminal, backcross and static cross systems. Crossbreeding systems were defined as including all purebred, crossbred and commercial matings necessary to maintain a total of 10,000 farrowings. Driving variables for the model are mean conception rate at first service and for an 8-wk breeding season, litter size born, preweaning survival rate, postweaning average daily gain, feed-to-gain ratio and carcass backfat. Predictions are computed using breed direct genetic and maternal effects for the four breeds, plus individual, maternal and paternal specific heterosis values, input by the user. Inputs required to calculate the number of females farrowing in each sub-system include the proportion of males and females replaced each breeding cycle in purebred and crossbred populations, the proportion of male and female offspring in seedstock herds that become breeding animals, and the number of females per boar. Inputs required to calculate the efficiency of terminal production (cost-to-product ratio) for each sub-system include breeding herd feed intake, gilt development costs, feed costs and labor and overhead costs. Crossbreeding system efficiency is calculated as the weighted average of sub-system cost-to-product ratio values, weighting by the number of females farrowing in each sub-system.
Conceptual study of a 250 kW planar SOFC system for CHP application
NASA Astrophysics Data System (ADS)
Fontell, E.; Kivisaari, T.; Christiansen, N.; Hansen, J.-B.; Pålsson, J.
In August 2002, Wärtsilä Corporation and Haldor Topsøe A/S entered into a co-operation agreement to start a joint development program in planar SOFC technology. The development program aims to bring to the market highly efficient, clean and cost-competitive fuel cell systems with power outputs above 200 kW for distributed power generation with CHP and for marine applications. In this study, the product concept for a 250 kW natural gas-fuelled atmospheric SOFC plant has been studied. The process has been calculated and optimised for high electrical efficiency; in the calculations, system efficiencies of more than 55% (electrical) and 85% (co-generation) have been reached. The necessary balance-of-plant (BoP) components have been identified and the concept for grid connection has been defined. The BoP includes fuel and air supply, anode re-circulation, start-up steam, purge gas, exhaust gas heat recovery, back-up power, power electronics and the control system. Based on the analysed system and component information, a conceptual design and cost breakdown structure for the product have been made. The cost breakdown shows that the stack, system control and power electronics are the major cost factors, while the remaining BoP equipment accounts for a minor share of the manufacturing cost. Finally, the feasibility of the SOFC plants has been compared to gas engines.
A semi-automated tool for treatment plan-quality evaluation and clinical trial quality assurance
NASA Astrophysics Data System (ADS)
Wang, Jiazhou; Chen, Wenzhou; Studenski, Matthew; Cui, Yunfeng; Lee, Andrew J.; Xiao, Ying
2013-07-01
The goal of this work is to develop a plan-quality evaluation program for clinical routine and multi-institutional clinical trials, so that the overall evaluation efficiency is improved. In multi-institutional clinical trials, evaluating plan quality is a time-consuming and labor-intensive process. In this note, we present a semi-automated plan-quality evaluation program which combines MIMVista, Java/MATLAB, and extensible markup language (XML). More specifically, MIMVista is used for data visualization; Java and its powerful function library are implemented for calculating dosimetry parameters; and to improve the clarity of the index definitions, XML is applied. The accuracy and efficiency of the program were evaluated by comparing its results with manually recorded results in two RTOG trials. A slight difference of about 0.2% in volume or 0.6 Gy in dose between the semi-automated program and manual recording was observed; according to the criteria of the indices, there are minimal differences between the two methods. The evaluation time is reduced from 10-20 min to 2 min by applying the semi-automated plan-quality evaluation program.
Neutron Transmission of Single-crystal Sapphire Filters
NASA Astrophysics Data System (ADS)
Adib, M.; Kilany, M.; Habib, N.; Fathallah, M.
2005-05-01
An additive formula is given that permits the calculation of the nuclear capture, thermal diffuse, and Bragg scattering cross-sections as a function of sapphire temperature and crystal parameters. We have developed a computer program that allows calculation of the thermal neutron transmission for the sapphire rhombohedral structure and its equivalent trigonal structure. The calculated total cross-section values and effective attenuation coefficients for single-crystalline sapphire at different temperatures are compared with measured values, and overall agreement between the formula and the experimental data is indicated. We discuss the use of a sapphire single crystal as a thermal neutron filter in terms of the optimum crystal thickness, mosaic spread, temperature, cutting plane and tuning for efficient transmission of thermal-reactor neutrons.
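Once the total cross-section is known, the filter transmission follows from simple exponential attenuation; a minimal sketch with placeholder numbers (not the paper's fitted sapphire parameters):

```python
import math

def transmission(sigma_barns, n_per_cm3, thickness_cm):
    """Neutron transmission of a single-crystal filter from its total
    cross-section per molecule: T = exp(-N * sigma * t)."""
    sigma_cm2 = sigma_barns * 1e-24  # 1 barn = 1e-24 cm^2
    return math.exp(-n_per_cm3 * sigma_cm2 * thickness_cm)

# Al2O3 molecular density ~2.35e22 per cm^3; an assumed 0.8 b total
# cross-section per molecule at a thermal energy; 10 cm thick filter.
print(f"T = {transmission(0.8, 2.35e22, 10.0):.2f}")
```

Because the additive cross-section rises steeply with neutron energy above thermal, the same formula yields the strong fast-neutron rejection that makes sapphire useful as a filter.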
A theoretical study on 2-amino-5-nitropyridinium trifluoroacetate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arioğlu, Çağla, E-mail: caglaarioglu@gmail.com; Tamer, Ömer, E-mail: omertamer@sakarya.edu.tr; Başoğlu, Adil, E-mail: abasoglu@sakarya.edu.tr
The geometry optimization of the 2-amino-5-nitropyridinium trifluoroacetate molecule was carried out using Becke's three-parameter exchange functional in conjunction with the Lee-Yang-Parr correlation functional (B3LYP) level of density functional theory (DFT) and the 6-311++G(d,p) basis set in the GAUSSIAN 09 program. The vibrational spectrum of the title compound was simulated to predict the presence of functional groups and their vibrational modes. The highest occupied molecular orbital (HOMO) and lowest unoccupied molecular orbital (LUMO) energies were calculated at the same level, and the small energy gap obtained shows that charge transfer occurs in the title compound. The molecular dipole moment, polarizability and hyperpolarizability parameters were determined to evaluate the nonlinear optical efficiency of the title compound. Finally, the ¹³C and ¹H Nuclear Magnetic Resonance (NMR) chemical shift values were calculated by the application of the gauge-independent atomic orbital (GIAO) method. All of the calculations were carried out using the GAUSSIAN 09 program.
Full Parallel Implementation of an All-Electron Four-Component Dirac-Kohn-Sham Program.
Rampino, Sergio; Belpassi, Leonardo; Tarantelli, Francesco; Storchi, Loriano
2014-09-09
A full distributed-memory implementation of the Dirac-Kohn-Sham (DKS) module of the program BERTHA (Belpassi et al., Phys. Chem. Chem. Phys. 2011, 13, 12368-12394) is presented, where the self-consistent field (SCF) procedure is replicated on all the parallel processes, each process working on subsets of the global matrices. The key feature of the implementation is an efficient procedure for switching between two matrix distribution schemes, one (integral-driven) optimal for the parallel computation of the matrix elements and another (block-cyclic) optimal for the parallel linear algebra operations. This approach, making both CPU-time and memory scalable with the number of processors used, virtually overcomes at once both time and memory barriers associated with DKS calculations. Performance, portability, and numerical stability of the code are illustrated on the basis of test calculations on three gold clusters of increasing size, an organometallic compound, and a perovskite model. The calculations are performed on a Beowulf and a BlueGene/Q system.
EGRET High Energy Capability and Multiwavelength Flare Studies and Solar Flare Proton Spectra
NASA Technical Reports Server (NTRS)
Chupp, Edward L.
1997-01-01
UNH was assigned the responsibility to use their accelerator neutron measurements to verify the TASC response function and to modify the TASC fitting program to include a high-energy neutron contribution. Direct accelerator-based measurements by UNH of the energy-dependent efficiencies for detecting neutrons with energies from 36 to 720 MeV in NaI were compared with Monte Carlo TASC calculations. The calculated TASC efficiencies are somewhat lower (by about 20%) than the accelerator results in the energy range 70-300 MeV. The measured energy-loss spectrum for 207 MeV neutron interactions in NaI was compared with the Monte Carlo response for 200 MeV neutrons in the TASC, indicating good agreement. Based on this agreement, the simulation was considered sufficiently accurate to generate a neutron response library to be used by UNH in modifying the TASC fitting program to include a neutron component in the flare spectrum modeling. TASC energy-loss data on the 1991 June 11 flare were transferred to UNH. An appendix is also included: Gamma-rays and neutrons as a probe of flare proton spectra: the solar flare of 11 June 1991.
NASA Astrophysics Data System (ADS)
Ha, P. T. H.
2018-04-01
The architectural design orientation chosen at the first design stage plays a key role in, and has a great impact on, the energy consumption of a building throughout its life-cycle. To provide designers with a simple and useful tool for quantitatively determining and optimizing the energy efficiency of a building at the very first stage of conceptual design, a factor, namely the building envelope energy efficiency (Khqnl), is investigated and proposed. Heat transfer through windows and other glazed areas of mezzanine floors accounts for 86% of the overall thermal transfer through the building envelope, so the Khqnl factor of high-rise buildings largely depends on shading solutions. The author has established tables and charts giving the values of the Khqnl factor for certain high-rise apartment buildings in Hanoi, calculated with a software program for various inputs: types and sizes of shading devices, building orientations, and different points in time. Architects can refer to these tables and charts in façade design to achieve a higher level of energy efficiency.
[Cost-effectiveness analysis on colorectal cancer screening program].
Huang, Q C; Ye, D; Jiang, X Y; Li, Q L; Yao, K Y; Wang, J B; Jin, M J; Chen, K
2017-01-10
Objective: To evaluate the cost-effectiveness of a colorectal cancer screening program in different age groups from the perspective of health economics. Methods: The screening compliance rates and detection rates in different age groups were calculated using data from the colorectal cancer screening program in Jiashan county, Zhejiang province. The differences in indicators among age groups were analyzed with the χ² test or trend χ² test. The ratios of cost to the number of cases detected were calculated from cost statistics. Results: The detection rates of immunochemical fecal occult blood test (iFOBT) positivity, advanced adenoma, colorectal cancer and early-stage cancer increased with age, while the early diagnosis rates were negatively associated with age. After excluding the younger counterpart, the cost per case detected for individuals aged >50 years could be reduced by 15%-30%. Conclusion: From a health economics perspective, it is beneficial to start colorectal cancer screening at the age of 50 years to improve the efficiency of the screening.
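The cost-to-cases ratio used above can be sketched in a few lines; all numbers below are illustrative, not the Jiashan data:

```python
def cost_per_case(cost_per_invitee, uptake, detection_rate):
    """Screening cost per detected case: program cost per invitee divided
    by the expected number of cases found per invitee."""
    cases_per_invitee = uptake * detection_rate
    return cost_per_invitee / cases_per_invitee

# Hypothetical: 60% compliance, 0.2% detection rate among participants.
print(f"{cost_per_case(50.0, 0.60, 0.002):,.0f} per case detected")
```

Because detection rates rise with age while per-invitee costs are roughly flat, this ratio falls for older age groups, which is the arithmetic behind the 15%-30% figure.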
Prediction of sound radiated from different practical jet engine inlets
NASA Technical Reports Server (NTRS)
Zinn, B. T.; Meyer, W. L.
1980-01-01
Existing computer codes for calculating the far field radiation patterns surrounding various practical jet engine inlet configurations under different excitation conditions were upgraded. The computer codes were refined and expanded so that they are now more efficient computationally by a factor of about three and they are now capable of producing accurate results up to nondimensional wave numbers of twenty. Computer programs were also developed to help generate accurate geometrical representations of the inlets to be investigated. This data is required as input for the computer programs which calculate the sound fields. This new geometry generating computer program considerably reduces the time required to generate the input data which was one of the most time consuming steps in the process. The results of sample runs using the NASA-Lewis QCSEE inlet are presented and comparison of run times and accuracy are made between the old and upgraded computer codes. The overall accuracy of the computations is determined by comparison of the results of the computations with simple source solutions.
BEST3D user's manual: Boundary Element Solution Technology, 3-Dimensional Version 3.0
NASA Technical Reports Server (NTRS)
1991-01-01
The theoretical basis and programming strategy utilized in the construction of the computer program BEST3D (boundary element solution technology - three dimensional) and detailed input instructions are provided for the use of the program. An extensive set of test cases and sample problems is included in the manual and is also available for distribution with the program. The BEST3D program was developed under the 3-D Inelastic Analysis Methods for Hot Section Components contract (NAS3-23697). The overall objective of this program was the development of new computer programs allowing more accurate and efficient three-dimensional thermal and stress analysis of hot section components, i.e., combustor liners, turbine blades, and turbine vanes. The BEST3D program allows both linear and nonlinear analysis of static and quasi-static elastic problems and transient dynamic analysis for elastic problems. Calculation of elastic natural frequencies and mode shapes is also provided.
NASA Astrophysics Data System (ADS)
Luo, Ye; Esler, Kenneth; Kent, Paul; Shulenburger, Luke
Quantum Monte Carlo (QMC) calculations of giant molecules and of surface and defect properties of solids have recently become feasible due to drastically expanding computational resources. However, with the most computationally efficient basis set, B-splines, these calculations are severely restricted by the memory capacity of compute nodes: the B-spline coefficients are shared on a node but not distributed among nodes, to ensure fast evaluation. A hybrid representation, which incorporates atomic orbitals near the ions and B-splines in the interstitial regions, offers a more accurate and less memory-demanding description of the orbitals, because they are naturally more atomic-like near the ions and much smoother in between, thus allowing coarser B-spline grids. We demonstrate the advantage of the hybrid representation over pure B-spline and Gaussian basis sets and show significant speed-ups, for example in computing the non-local pseudopotentials with our new scheme. Moreover, we discuss a new algorithm for atomic orbital initialization, which used to require an extra workflow step taking a few days. With this work, the highly efficient hybrid representation paves the way to simulating large, even inhomogeneous, systems using QMC. This work was supported by the U.S. Department of Energy, Office of Science, Basic Energy Sciences, Computational Materials Sciences Program.
NASA Technical Reports Server (NTRS)
Barmatz, M.
1985-01-01
There is a need for high temperature containerless processing facilities that can efficiently position and manipulate molten samples in the reduced gravity environment of space. The goal of the research is to develop sophisticated high temperature manipulation capabilities such as selection of arbitrary axes rotation and rapid sample cooling. This program will investigate new classes of acoustic levitation in rectangular, cylindrical and spherical geometries. The program tasks include calculating theoretical expressions of the acoustic forces in these geometries for the excitation of up to three acoustic modes (multimodes). These calculations are used to: (1) determine those acoustic modes that produce stable levitation, (2) isolate the levitation and rotation capabilities to produce more than one axis of rotation, and (3) develop methods to translate samples down long tube cylindrical chambers. Experimental levitators will then be constructed to verify the stable levitation and rotation predictions of the models.
Acoustic environmental accuracy requirements for response determination
NASA Technical Reports Server (NTRS)
Pettitt, M. R.
1983-01-01
A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.
Vladimirov, N V; Likhoshvaĭ, V A; Matushkin, Iu G
2007-01-01
Gene expression is known to correlate with the degree of codon bias in many unicellular organisms, but such correlation is absent in some organisms. Recently we demonstrated that inverted complementary repeats within a coding DNA sequence must be considered for proper estimation of translation efficiency, since they may form secondary structures that obstruct ribosome movement. We have developed a program for estimating the potential expression of a coding DNA sequence in a given unicellular organism using its genome sequence. The program computes an elongation efficiency index, based on an estimation of coding DNA sequence elongation efficiency that takes into account three key factors: codon bias, the average number of inverted complementary repeats, and the free energy of potential stem-loop structures formed by the repeats. The influence of these factors on translation is estimated numerically, and an optimal proportion of the factors is computed for each organism individually. Quantitative translational characteristics of 384 unicellular organisms (351 bacteria, 28 archaea, 5 eukaryota) have been computed using their annotated genomes from NCBI GenBank. Five potential evolutionary strategies of translational optimization have been identified among the studied organisms, and a considerable difference in preferred translational strategies between Bacteria and Archaea has been revealed. Significant correlations between the elongation efficiency index and gene expression levels have been shown for two organisms (S. cerevisiae and H. pylori) using available microarray data. The proposed method allows numerical estimation of coding DNA sequence translation efficiency and optimization of the nucleotide composition of heterologous genes in unicellular organisms. http://www.mgs.bionet.nsc.ru/mgs/programs/eei-calculator/.
Algorithm applying a modified BRDF function in Λ-ridge concentrator of solar radiation
NASA Astrophysics Data System (ADS)
Plachta, Kamil
2015-05-01
This paper presents an algorithm that uses a modified BRDF function and allows the calculation of the parameters of a Λ-ridge concentrator system. The concentrator directs reflected solar radiation onto the photovoltaic surface, increasing its efficiency. The efficiency of the concentrator depends on the surface characteristics of the material of which it is made, the angle of the photovoltaic panel, and the resolution of the tracking system. The paper shows a method of modeling the surface using the BRDF function and describes its basic parameters, e.g. roughness and the components of the reflected stream. A cost calculation of the chosen models, using the BRDF function modification presented in this article, has been made. The author's own simulation program allows choosing the appropriate material for the construction of a Λ-ridge concentrator, generating the micro-surface of the material, and simulating the shape and components of the reflected stream.
NASA Astrophysics Data System (ADS)
Zhang, Jilin; Sha, Chaoqun; Wu, Yusen; Wan, Jian; Zhou, Li; Ren, Yongjian; Si, Huayou; Yin, Yuyu; Jing, Ya
2017-02-01
GPUs are used not only in graphics but also in areas requiring large numbers of numerical calculations. In the energy industry, because of its low carbon emissions, high energy density, high duration and other characteristics, nuclear energy cannot easily be replaced by other energy sources. Management of core fuel is one of the major areas of concern in a nuclear power plant and is directly related to the economic benefits and cost of nuclear power. The large-scale reactor core diffusion equation is large and complicated, so its calculation is crucial in the core fuel management process. In this paper, we use CUDA programming technology on a GPU cluster to run the LU-SGS parallel iterative calculation for the reactor diffusion equation. We divide the one-dimensional and two-dimensional meshes into a number of domains, with each domain evenly distributed over the GPU blocks, and put forward a parallel collision scheme in which virtual grid boundaries are defined for exchanging information and transmitting data through repeated collisions. Compared with the serial program, the experiments show that the GPU greatly improves the efficiency of program execution and verify that GPUs are playing a much more important role in the field of numerical calculation.
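For orientation, the serial kernel that such a scheme parallelizes can be sketched as a symmetric Gauss-Seidel iteration on a model diffusion problem (a plain Python stand-in, not the paper's CUDA LU-SGS code):

```python
import numpy as np

def sgs_poisson(f, h, sweeps):
    """Symmetric Gauss-Seidel (forward then backward row sweeps) for the
    2D Poisson problem -laplace(u) = f with zero boundary values on a
    uniform grid of spacing h."""
    u = np.zeros_like(f)
    n, m = f.shape
    order = [range(1, n - 1), range(n - 2, 0, -1)]
    for _ in range(sweeps):
        for rows in order:  # forward sweep, then backward sweep
            for i in rows:
                for j in range(1, m - 1):
                    u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                      + u[i, j - 1] + u[i, j + 1]
                                      + h * h * f[i, j])
    return u

f = np.ones((32, 32))
u = sgs_poisson(f, h=1.0 / 31, sweeps=200)
print(f"max u = {u.max():.5f}")  # approaches ~0.074 for the unit square
```

The GPU version splits the grid into domains, assigns them to blocks, and exchanges boundary values between sweeps, which is where the collision scheme above comes in.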
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staples, P.A.; Egan, J.J.; Kegel, G.H.R.
1994-06-01
Prompt fission neutron spectrum measurements at the University of Massachusetts Lowell 5.5 MV Van de Graaff accelerator laboratory require that the neutron detector efficiency be well known over a neutron energy range of 100 keV to 20 MeV. The efficiency of the detector has been determined for energies greater than 5.0 MeV using the Weapons Neutron Research (WNR) white neutron source at the Los Alamos Meson Physics Facility (LAMPF) in a pulsed-beam, time-of-flight (TOF) experiment. Carbon-matched polyethylene and graphite scatterers were used to obtain a hydrogen spectrum, and the detector efficiency was determined using the well-known H(n,n) scattering cross section. Results are compared to the detector efficiency calculation program SCINFUL, available from the Radiation Shielding Information Center at Oak Ridge National Laboratory.
NASA Astrophysics Data System (ADS)
Galerkin, Y. B.; Voinov, I. B.; Drozdov, A. A.
2017-08-01
Computational Fluid Dynamics (CFD) methods are widely used for centrifugal compressor design and flow analysis. The calculation results depend on the chosen software, turbulence models and solver settings. Two of the most widely used programs are NUMECA Fine Turbo and ANSYS CFX. The objects of the study were two different stages. CFD calculations were made for a single blade channel and for full 360-degree flow paths. Stage 1, with a 3D impeller and vaneless diffuser, was tested experimentally. Its flow coefficient is 0.08 and its loading factor is 0.74. For stage 1, calculations were performed with different grid qualities, numbers of cells and turbulence models. The best results were obtained with the Spalart-Allmaras model and a mesh of 1.854 million cells. Stage 2, with a return channel, vaneless diffuser and 3D impeller, with flow coefficient 0.15 and loading factor 0.5, was designed by the well-known Universal Modeling Method. Its performance was calculated by the well-identified mathematical model. The stage 2 performance curves from the CFD calculations shift to higher flow rates in comparison with the design curves; the same result was obtained for stage 1 in comparison with the measured curves. The calculated loading factor is higher in both cases for a single blade channel. The loading factor performance calculated for the full flow path (“360 degrees”) by ANSYS CFX is in satisfactory agreement with the stage 2 design performance. Maximum efficiency is predicted accurately by the ANSYS CFX “360 degrees” calculation; the “sector” calculation is less accurate. Further research is needed to resolve the performance mismatch.
Certifying Domain-Specific Policies
NASA Technical Reports Server (NTRS)
Lowry, Michael; Pressburger, Thomas; Rosu, Grigore; Koga, Dennis (Technical Monitor)
2001-01-01
Proof-checking code for compliance to safety policies potentially enables a product-oriented approach to certain aspects of software certification. To date, previous research has focused on generic, low-level programming-language properties such as memory type safety. In this paper we consider proof-checking higher-level domain-specific properties for compliance to safety policies. The paper first describes a framework related to abstract interpretation in which compliance to a class of certification policies can be efficiently calculated. Membership equational logic is shown to provide a rich logic for carrying out such calculations, including partiality, for certification. The architecture for a domain-specific certifier is described, followed by an implemented case study. The case study considers consistency of abstract variable attributes in code that performs geometric calculations in aerospace systems.
Heliostat cost optimization study
NASA Astrophysics Data System (ADS)
von Reeken, Finn; Weinrebe, Gerhard; Keck, Thomas; Balz, Markus
2016-05-01
This paper presents a methodology for a heliostat cost optimization study. First, different variants of small, medium-sized and large heliostats are designed. Then the respective costs, tracking and optical quality are determined. For the calculation of optical quality, a structural model of the heliostat is programmed and analyzed using finite element software. The costs are determined based on inquiries and from experience with similar structures. The levelised electricity costs for a reference power tower plant are then calculated, with the heliostat field optimized before each annual simulation run. The calculated LCOEs are used to identify the most suitable option(s). Finally, the conclusions and findings of this extensive cost study are used to define the concept of a new cost-efficient heliostat called `Stellio'.
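The figure of merit being compared across heliostat variants is the levelised cost of electricity. A minimal sketch of that calculation using a capital recovery factor, with entirely hypothetical numbers (real heliostat studies add degradation, financing detail and the field optimization mentioned above):

    def lcoe(capex, om_per_year, annual_kwh, rate=0.07, years=25):
        # Levelised cost of electricity, $/kWh: annualized capital
        # plus O&M, divided by annual energy yield.
        crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)
        return (capex * crf + om_per_year) / annual_kwh

    # Hypothetical plant-level numbers for two heliostat variants:
    print(lcoe(capex=120e6, om_per_year=2.5e6, annual_kwh=210e6))  # variant A
    print(lcoe(capex=135e6, om_per_year=2.0e6, annual_kwh=230e6))  # variant B

The variant with the lower LCOE wins even if its field cost is higher, which is why the study optimizes the whole plant rather than the heliostat alone.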
Validation of a program for supercritical power plant calculations
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Łukowicz, Henryk; Bartela, Łukasz; Michalski, Sebastian
2011-12-01
This article describes the validation of a supercritical steam cycle model. The cycle model was created with the commercial program GateCycle and validated using the in-house code of the Institute of Power Engineering and Turbomachinery. The Institute's in-house code has been used extensively for industrial power plant calculations with good results. In the first step of the validation process, assumptions were made about the live steam temperature and pressure, net power, characteristic quantities for high- and low-pressure regenerative heat exchangers and pressure losses in heat exchangers. These assumptions were then used to develop a steam cycle model in GateCycle and a model based on the code developed in-house at the Institute of Power Engineering and Turbomachinery. Properties such as thermodynamic parameters at characteristic points of the steam cycle, net power values and efficiencies, heat provided to the steam cycle and heat taken from the steam cycle were compared. The last step of the analysis was the calculation of relative errors of the compared values. The method used for relative error calculations is presented in the paper. The resulting relative errors are very small, generally not exceeding 0.1%. Based on this analysis, it can be concluded that using the GateCycle software for calculations of supercritical power plants is possible.
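The comparison metric is presumably of the standard relative-error form (our notation, not necessarily the paper's exact definition):

    \[
    \delta \;=\; \frac{\lvert x_{\mathrm{GateCycle}} - x_{\mathrm{in\text{-}house}} \rvert}
                      {\lvert x_{\mathrm{in\text{-}house}} \rvert} \times 100\%
    \;\lesssim\; 0.1\%
    \]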
MPBEC, a Matlab Program for Biomolecular Electrostatic Calculations
NASA Astrophysics Data System (ADS)
Vergara-Perez, Sandra; Marucho, Marcelo
2016-01-01
One of the most used and efficient approaches to compute electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation. There are several software packages available that solve the PB equation for molecules in aqueous electrolyte solutions. Most of these software packages are useful for scientists with specialized training and expertise in computational biophysics. However, the user is usually required to manually make several important choices, depending on the complexity of the biological system, to successfully obtain the numerical solution of the PB equation. This may become an obstacle for researchers, experimentalists, and even students with no special training in computational methodologies. Aiming to overcome this limitation, in this article we present MPBEC, a free, cross-platform, open-source software package that provides non-experts in the field an easy and efficient way to perform biomolecular electrostatic calculations on single-processor computers. MPBEC is a Matlab script based on the Adaptive Poisson-Boltzmann Solver, one of the most popular approaches used to solve the PB equation. MPBEC does not require any user programming, text editing or extensive statistical skills, and comes with detailed user-guide documentation. As a unique feature, MPBEC includes a useful graphical user interface (GUI) application which helps and guides users to configure and set up the optimal parameters and approximations to successfully perform the required biomolecular electrostatic calculations. The GUI also incorporates visualization tools to facilitate users' pre- and post-analysis of structural and electrical properties of biomolecules.
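For reference, the equation being solved can be written, in one common dimensionless form (unit conventions differ between codes, so treat this as schematic rather than as MPBEC's exact formulation), as the nonlinear PB equation

    \[
    -\nabla\cdot\bigl[\varepsilon(\mathbf{r})\,\nabla u(\mathbf{r})\bigr]
    + \bar{\kappa}^{2}(\mathbf{r})\,\sinh u(\mathbf{r})
    = \frac{4\pi e_c^{2}}{k_B T}\sum_{i} z_i\,\delta(\mathbf{r}-\mathbf{r}_i),
    \]

where \(u = e_c\psi/k_BT\) is the reduced potential, \(\varepsilon(\mathbf{r})\) the position-dependent dielectric, \(\bar{\kappa}^2(\mathbf{r})\) the ion-accessibility-modified screening coefficient, and the sum runs over the solute's point charges.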
ERIC Educational Resources Information Center
Mislevy, Robert J.; Bock, R. Darrell
New legislation in 1972 shifted the emphasis of the California Assessment Program (CAP) from traditional every-pupil achievement testing to a more efficient multiple-matrix testing design, under which a broad spectrum of skills could be surveyed without undue expenditure of educational resources. Scale score reporting was introduced to the grade 6…
Hortness, J.E.
2004-01-01
The U.S. Geological Survey (USGS) measures discharge in streams using several methods. However, measurement of peak discharges is often impossible or impractical due to difficult access, the inherent danger of making measurements during flood events, and the timing of those events. Thus, many peak discharge values are calculated after the fact by use of indirect methods. The most common indirect method for estimating peak discharges in streams is the slope-area method. This, like other indirect methods, requires measuring the flood profile through detailed surveys. Processing the survey data for efficient entry into computer streamflow models can be time-consuming; SAM 2.1 is a program designed to expedite that process. The SAM 2.1 computer program is designed to be run in the field on a portable computer. The program processes digital surveying data obtained from an electronic surveying instrument during slope-area measurements. After all measurements have been completed, the program generates files to be input into the SAC (Slope-Area Computation program; Fulford, 1994) or HEC-RAS (Hydrologic Engineering Center-River Analysis System; Brunner, 2001) computer streamflow models so that an estimate of the peak discharge can be calculated.
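At its core, the slope-area method combines Manning-equation conveyances at surveyed cross sections with the friction slope inferred from the flood profile. A stripped-down sketch with hypothetical survey values (real computations handle multiple subsections and velocity-head and expansion/contraction corrections that are omitted here):

    import math

    def conveyance(n, area, hyd_radius):
        # Manning conveyance K = (1.49/n) * A * R^(2/3), US customary units.
        return (1.49 / n) * area * hyd_radius ** (2.0 / 3.0)

    # Two surveyed cross sections (hypothetical flood-profile values):
    K1 = conveyance(n=0.035, area=850.0, hyd_radius=6.2)   # upstream
    K2 = conveyance(n=0.035, area=910.0, hyd_radius=6.8)   # downstream
    fall, reach_length = 1.8, 600.0                        # ft, ft
    S = fall / reach_length                                # friction slope
    Q = math.sqrt(K1 * K2) * math.sqrt(S)                  # geometric-mean conveyance
    print(f"estimated peak discharge: {Q:.0f} ft^3/s")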
GenLocDip: A Generalized Program to Calculate and Visualize Local Electric Dipole Moments.
Groß, Lynn; Herrmann, Carmen
2016-09-30
Local dipole moments (i.e., dipole moments of atomic or molecular subsystems) are essential for understanding various phenomena in nanoscience, such as solvent effects on the conductance of single molecules in break junctions or the interaction between the tip and the adsorbate in atomic force microscopy. We introduce GenLocDip, a program for calculating and visualizing local dipole moments of molecular subsystems. GenLocDip currently uses the Atoms-In-Molecules (AIM) partitioning scheme and is interfaced to various AIM programs. This enables postprocessing of a variety of electronic structure output formats, including cube and wavefunction files and, in general, output from any other code capable of writing the electron density on a three-dimensional grid. It uses a modified version of Bader's and Laidig's approach for achieving origin-independence of local dipoles by referring to internal reference points, which can (but do not need to) be bond critical points (BCPs). Furthermore, the code allows the export of critical points and local dipole moments into a POV-Ray-readable input format. It is particularly designed for fragments of large systems, for which no BCPs have been calculated for computational efficiency reasons, because large interfragment distances prevent their identification, or because a local partitioning scheme different from AIM was used. The program requires only minimal user input and is written in the Fortran90 programming language. To demonstrate the capabilities of the program, examples are given for covalently and non-covalently bound systems, in particular molecular adsorbates. © 2016 Wiley Periodicals, Inc.
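The origin-independence trick is to refer each fragment's dipole to an internal reference point. A minimal sketch of the charge-transfer part of such a calculation (hypothetical charges and geometry; a full AIM treatment adds the intra-atomic polarization integrals that this sketch omits):

    import numpy as np

    def local_dipole(charges, positions, ref_point):
        # Fragment dipole from point charges, referred to an internal
        # reference point (e.g. a bond critical point) so that the
        # result does not depend on the global coordinate origin.
        charges = np.asarray(charges, float)
        positions = np.asarray(positions, float)
        return (charges[:, None] * (positions - ref_point)).sum(axis=0)

    # Hypothetical AIM charges/positions for a 3-atom fragment (a.u.):
    q = [-0.8, 0.4, 0.4]
    r = [[0.0, 0.0, 0.0], [0.96, 0.0, 0.0], [-0.24, 0.93, 0.0]]
    bcp = [0.5, 0.0, 0.0]            # internal reference point
    print(local_dipole(q, r, bcp))   # dipole vector in e*bohr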
MO-D-213-07: RadShield: Semi-Automated Calculation of Air Kerma Rate and Barrier Thickness
DOE Office of Scientific and Technical Information (OSTI.GOV)
DeLorenzo, M; Wu, D; Rutel, I
2015-06-15
Purpose: To develop the first Java-based semi-automated calculation program intended to aid professional radiation shielding design. Air-kerma rate and barrier thickness calculations are performed by implementing the NCRP Report 147 formalism in a graphical user interface (GUI). The ultimate aim of this newly created software package is to reduce errors and improve radiographic and fluoroscopic room designs over manual approaches. Methods: Floor plans are first imported as images into the RadShield software program. These plans serve as templates for drawing barriers, occupied regions and x-ray tube locations. We have implemented sub-GUIs that allow occupancy factors, design goals, numbers of patients, primary beam directions, source-to-patient distances and workload distributions to be specified for regions and equipment. Once the user enters the above parameters, the program automatically calculates the air-kerma rate at sampled points beyond all barriers. For each sample point, a corresponding minimum barrier thickness is calculated to meet the design goal. RadShield allows control over preshielding, sample point location and material types. Results: A functional GUI package was developed and tested. Examination of sample walls and source distributions yields a maximum percent difference of less than 0.1% between hand-calculated air-kerma rates and RadShield. Conclusion: The initial results demonstrated that RadShield calculates air-kerma rates and required barrier thicknesses with reliable accuracy and can be used to make radiation shielding design more efficient and accurate. This newly developed approach differs from conventional calculation methods in that it finds air-kerma rates and thickness requirements for many points outside the barriers, stores the information and selects the largest value needed to comply with NCRP Report 147 design goals. Floor plans, parameters, designs and reports can be saved and accessed later for modification and recalculation. We have confirmed that this software accurately calculates air-kerma rates and required barrier thicknesses for diagnostic radiography and fluoroscopic rooms.
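The NCRP 147-style workflow at each sample point amounts to computing a required transmission factor and inverting a fitted transmission curve for thickness. A sketch using the Archer transmission model (the fit parameters and workload numbers below are illustrative stand-ins, not NCRP's tabulated values):

    import math

    def archer_thickness(B, alpha, beta, gamma):
        # Invert the Archer transmission model
        #   B(x) = [(1 + beta/alpha) * exp(alpha*gamma*x) - beta/alpha]^(-1/gamma)
        # for the barrier thickness x.
        r = beta / alpha
        return math.log((B ** -gamma + r) / (1.0 + r)) / (alpha * gamma)

    P = 0.02    # design goal at the occupied point, mGy/week (assumed)
    d = 3.0     # source-to-point distance, m
    T = 1.0     # occupancy factor
    K1 = 5.0    # unshielded air kerma per week at 1 m, mGy (assumed workload)
    B = P * d * d / (T * K1)                 # required transmission
    x = archer_thickness(B, alpha=2.35, beta=15.9, gamma=0.76)
    print(f"required barrier thickness: {x:.2f} mm")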
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
An Apple IIe microcomputer is being used to collect data and to control a pyrolysis system. Pyrolysis data for bitumen and kerogen are widely used to estimate source rock maturity. For a detailed analysis of kinetic parameters, however, data must be obtained more precisely than for routine pyrolysis. The authors discuss the program, which controls the temperature ramp of the furnace that heats the sample and collects data from a thermocouple in the furnace and from the flame ionization detector measuring evolved hydrocarbons. These data are stored on disk for later use by programs that display the results of the experiment or calculate kinetic parameters. The program is written in Applesoft BASIC with subroutines in Apple assembler for speed and efficiency.
The business of pediatric hospital medicine.
Percelay, Jack M; Zipes, David G
2014-07-01
Pediatric hospital medicine (PHM) programs are mission driven, not margin driven. Very rarely do professional fee revenues exceed physician billing collections. In general, inpatient hospital care codes reimburse less than procedures, payer mix is poor, and pediatric inpatient care is inherently time-consuming. Using traditional accounting principles, almost all PHM programs will have a negative bottom line in the narrow sense of program costs and revenues generated. However, well-run PHM programs contribute positively to the bottom line of the system as a whole through the value-added services hospitalists provide and hospitalists' ability to improve overall system efficiency and productivity. This article provides an overview of the business of hospital medicine, with emphasis on the basics of designing and maintaining a program that attends carefully to physician staffing (the major cost component of a program) and physician charges (the major revenue component of the program). Outside of these traditional calculations, resource stewardship is discussed as a way to reduce hospital costs in a capitated or diagnosis-related group reimbursement model and further improve profit, or at least limit losses. Shortening length of stay creates bed capacity for a program already running at capacity. The article concludes with a discussion of how hospitalists add value to the system by making other providers and other parts of the hospital more efficient and productive. Copyright 2014, SLACK Incorporated.
A program code generator for multiphysics biological simulation using markup languages.
Amano, Akira; Kawabata, Masanari; Yamashita, Yoshiharu; Rusty Punzalan, Florencio; Shimayoshi, Takao; Kuwabara, Hiroaki; Kunieda, Yoshitoshi
2012-01-01
To cope with the complexity of biological function simulation models, model representation with a description language is becoming popular. However, the simulation software itself becomes complex in such environments, and thus it is difficult to modify the simulation conditions, target computation resources or calculation methods. Complex biological function simulation software involves 1) model equations, 2) boundary conditions and 3) calculation schemes. Use of a model description file covers the first point and partly the second; the third, however, is difficult to handle for the various calculation schemes required by simulation models constructed from two or more elementary models. We introduce a simulation software generation system which uses a description-language-based specification of the coupling calculation scheme together with the cell model description file. Using this system, we can easily generate biological simulation code with a variety of coupling calculation schemes. To show the efficiency of our system, an example of a coupling calculation scheme with three elementary models is shown.
High Quantum Efficiency OLED Lighting Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shiang, Joseph
The overall goal of the program was to apply improvements in light outcoupling technology to a practical large-area plastic luminaire, and thus enable the product vision of an extremely thin form factor, high-efficiency, large-area light source. The target substrate was plastic, and the baseline device was operating at 35 LPW at the start of the program. The target of the program was a >2x improvement in LPW efficacy, and the overall amount of light to be delivered was relatively high: 900 lumens. Despite the extremely difficult challenges associated with scaling up a wet solution process on plastic substrates, the program was able to make substantial progress. A small-molecule wet solution process was successfully implemented on plastic substrates with almost no loss in efficiency in transitioning from laboratory-scale glass to large-area plastic substrates. By transitioning to a small-molecule-based process, the LPW entitlement increased from 35 LPW to 60 LPW. A further 10% improvement in outcoupling efficiency was demonstrated via the use of a highly reflecting cathode, which reduced absorptive loss in the OLED device. The calculated potential improvement in some cases is even larger, ~30%, and thus there is considerable room for optimism in improving the net light coupling efficacy, provided absorptive loss mechanisms are eliminated. Further improvements are possible if scattering schemes such as the silver nanowire based hard coat structure are fully developed. The wet coating processes were successfully scaled to large-area plastic substrates and resulted in the construction of a 900-lumen luminaire device.
NASA Technical Reports Server (NTRS)
Tuma, Margaret L.; Beheim, Glenn
1995-01-01
The effective-index method and Marcatili's technique were utilized independently to calculate the electric field profile of a rib channel waveguide. Using the electric field profile calculated from each method, the theoretical coupling efficiency between a single-mode optical fiber and a rib waveguide was calculated using the overlap integral. Perfect alignment was assumed and the coupling efficiency calculated. The coupling efficiency calculation was then repeated for a range of transverse offsets.
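The quoted coupling efficiency is the normalized overlap integral of the two mode fields. A one-dimensional scalar sketch with Gaussian stand-ins for the fiber and rib-waveguide modes, scanning transverse offset (the real calculation is two-dimensional, and the mode radii below are assumptions):

    import numpy as np

    def coupling_efficiency(E1, E2, dx):
        # Overlap-integral efficiency between two transverse field
        # profiles sampled on the same grid (scalar approximation).
        num = abs(np.sum(E1 * np.conj(E2)) * dx) ** 2
        den = (np.sum(abs(E1) ** 2) * dx) * (np.sum(abs(E2) ** 2) * dx)
        return num / den

    x = np.linspace(-20e-6, 20e-6, 4001)
    dx = x[1] - x[0]
    w_fiber, w_wg = 5.2e-6, 3.0e-6          # assumed 1/e field radii, m
    fiber = np.exp(-(x / w_fiber) ** 2)
    for offset in (0.0, 1e-6, 2e-6):        # transverse misalignment
        wg = np.exp(-((x - offset) / w_wg) ** 2)
        eta = coupling_efficiency(fiber, wg, dx)
        print(f"offset {offset * 1e6:4.1f} um -> eta = {eta:.3f}")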
Combustion of hydrogen injected into a supersonic airstream (a guide to the HISS computer program)
NASA Technical Reports Server (NTRS)
Dyer, D. F.; Maples, G.; Spalding, D. B.
1976-01-01
A computer program based on a finite-difference, implicit numerical integration scheme is described for the prediction of hydrogen injected into a supersonic airstream at an angle ranging from normal to parallel to the airstream main flow direction. Results of calculations for flow and thermal property distributions were compared with 'cold flow data' taken by NASA/Langley and show excellent correlation. Typical results for equilibrium combustion are presented and exhibit qualitatively plausible behavior. Computer time required for a given case is approximately one minute on a CDC 7600. A discussion of the assumption of parabolic flow in the injection region is given which demonstrates that improvement in calculation in this region could be obtained by a partially-parabolic procedure which has been developed. It is concluded that the technique described provides an efficient and reliable means for analyzing hydrogen injection into supersonic airstreams and the subsequent combustion.
Energy Efficient Engine exhaust mixer model technology report addendum; phase 3 test program
NASA Technical Reports Server (NTRS)
Larkin, M. J.; Blatt, J. R.
1984-01-01
The Phase 3 exhaust mixer test program was conducted to explore the trends established during Phases 1 and 2. Combinations of mixer design parameters were tested. Phase 3 testing showed that the best performance achievable within tailpipe length and diameter constraints is 2.55 percent better than an optimized separate-flow baseline. A reduced-penetration design achieved about the same overall performance level at a substantially lower level of excess pressure loss, but with a small reduction in mixing. To improve the reliability of the data, the hot- and cold-flow thrust coefficient analysis used in Phases 1 and 2 was augmented by calculating percent mixing from traverse data. The relative change in percent mixing between configurations was determined from thrust and flow coefficient increments. The calculation procedure developed was found to be a useful tool in assessing mixer performance. Detailed flow field data were obtained to facilitate calibration of computer codes.
[Impact of the funding reform of teaching hospitals in Brazil].
Lobo, M S C; Silva, A C M; Lins, M P E; Fiszman, R
2009-06-01
To assess the impact of the funding reform on the productivity of teaching hospitals. Based on the Information System of Federal University Hospitals of Brazil, efficiency and productivity in 2003 and 2006 were measured using frontier methods with a linear programming technique, data envelopment analysis, and an input-oriented variable-returns-to-scale model. The Malmquist index was calculated to detect changes during the study period: 'technical efficiency change,' the relative variation of the efficiency of each unit, and 'technological change,' the shift of the frontier. There was a 51% mean budget increase and an improvement in the technical efficiency of the teaching hospitals (17 hospitals reached the empirical efficiency frontier, compared with 11 previously), but the same was not seen for the technology frontier. Data envelopment analysis set benchmark scores for each inefficient unit (before and after the reform), and there was a positive correlation between technical efficiency and teaching intensity and dedication. The reform promoted management improvements, but further follow-up is needed to assess the effectiveness of the funding changes.
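Input-oriented DEA solves one small linear program per hospital: shrink the unit's inputs by a factor theta while a convex combination of peer hospitals must match its outputs. A sketch under variable returns to scale with toy data (the Malmquist index then compares such scores across the two years):

    import numpy as np
    from scipy.optimize import linprog

    def dea_vrs_input(X, Y, k):
        # Input-oriented VRS efficiency of unit k.
        # X: (n_units, n_inputs), Y: (n_units, n_outputs). Returns theta <= 1.
        n, m = X.shape
        s = Y.shape[1]
        c = np.zeros(n + 1)          # variables: [theta, lambda_1..lambda_n]
        c[0] = 1.0
        A_ub, b_ub = [], []
        for i in range(m):           # sum_j lam_j * x_ij <= theta * x_ik
            A_ub.append(np.r_[-X[k, i], X[:, i]]); b_ub.append(0.0)
        for r in range(s):           # sum_j lam_j * y_rj >= y_rk
            A_ub.append(np.r_[0.0, -Y[:, r]]); b_ub.append(-Y[k, r])
        A_eq = [np.r_[0.0, np.ones(n)]]   # VRS convexity: sum lam_j = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.x[0]

    # Toy data: 4 hospitals, 2 inputs (beds, staff), 1 output (admissions)
    X = np.array([[100, 300], [120, 280], [90, 350], [150, 400]], float)
    Y = np.array([[5000], [5200], [4600], [5100]], float)
    for k in range(4):
        print(f"hospital {k}: efficiency = {dea_vrs_input(X, Y, k):.3f}")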
Development of a Peer Teaching-Assessment Program and a Peer Observation and Evaluation Tool
Trujillo, Jennifer M.; Barr, Judith; Gonyeau, Michael; Van Amburgh, Jenny A.; Matthews, S. James; Qualters, Donna
2008-01-01
Objectives To develop a formalized, comprehensive, peer-driven teaching assessment program and a valid and reliable assessment tool. Methods A volunteer taskforce was formed and a peer-assessment program was developed using a multistep, sequential approach and the Peer Observation and Evaluation Tool (POET). A pilot study was conducted to evaluate the efficiency and practicality of the process and to establish interrater reliability of the tool. Intra-class correlation coefficients (ICC) were calculated. Results ICCs for 8 separate lectures evaluated by 2-3 observers ranged from 0.66 to 0.97, indicating good interrater reliability of the tool. Conclusion Our peer assessment program for large classroom teaching, which includes a valid and reliable evaluation tool, is comprehensive, feasible, and can be adopted by other schools of pharmacy. PMID:19325963
Rapid Calculation of Spacecraft Trajectories Using Efficient Taylor Series Integration
NASA Technical Reports Server (NTRS)
Scott, James R.; Martini, Michael C.
2011-01-01
A variable-order, variable-step Taylor series integration algorithm was implemented in NASA Glenn's SNAP (Spacecraft N-body Analysis Program) code. SNAP is a high-fidelity trajectory propagation program that can propagate the trajectory of a spacecraft about virtually any body in the solar system. The Taylor series algorithm's very high order accuracy and excellent stability properties lead to large reductions in computer time relative to the code's existing 8th order Runge-Kutta scheme. Head-to-head comparison on near-Earth, lunar, Mars, and Europa missions showed that Taylor series integration is 15.8 times faster than Runge-Kutta on average, and is more accurate. These speedups were obtained for calculations involving central body, other body, thrust, and drag forces. Similar speedups have been obtained for calculations that include J2 spherical harmonic for central body gravitation. The algorithm includes a step size selection method that directly calculates the step size and never requires a repeat step. High-order Taylor series integration algorithms have been shown to provide major reductions in computer time over conventional integration methods in numerous scientific applications. The objective here was to directly implement Taylor series integration in an existing trajectory analysis code and demonstrate that large reductions in computer time (order of magnitude) could be achieved while simultaneously maintaining high accuracy. This software greatly accelerates the calculation of spacecraft trajectories. At each time level, the spacecraft position, velocity, and mass are expanded in a high-order Taylor series whose coefficients are obtained through efficient differentiation arithmetic. This makes it possible to take very large time steps at minimal cost, resulting in large savings in computer time. The Taylor series algorithm is implemented primarily through three subroutines: (1) a driver routine that automatically introduces auxiliary variables and sets up initial conditions and integrates; (2) a routine that calculates system reduced derivatives using recurrence relations for quotients and products; and (3) a routine that determines the step size and sums the series. The order of accuracy used in a trajectory calculation is arbitrary and can be set by the user. The algorithm directly calculates the motion of other planetary bodies and does not require ephemeris files (except to start the calculation). The code also runs with Taylor series and Runge-Kutta used interchangeably for different phases of a mission.
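The enabling idea is that series coefficients are generated by recurrence ("differentiation arithmetic") rather than by symbolic differentiation. A self-contained sketch for the scalar ODE y' = y^2, whose Cauchy-product recurrence (k+1) a_{k+1} = sum_{i=0}^{k} a_i a_{k-i} mirrors what a production integrator automates for each force term (the step size and order here are arbitrary choices, and the exact solution 1/(1 - t) for y(0) = 1 gives a check):

    def taylor_step(y0, h, order=20):
        # Build Taylor coefficients of the local solution of y' = y^2
        # by the product recurrence, then sum the series at t = h.
        a = [y0]
        for k in range(order):
            conv = sum(a[i] * a[k - i] for i in range(k + 1))
            a.append(conv / (k + 1))
        return sum(c * h ** k for k, c in enumerate(a))

    t, y, h = 0.0, 1.0, 0.05
    while t < 0.5 - 1e-12:
        y = taylor_step(y, h)
        t += h
    print(y, 1.0 / (1.0 - t))   # numerical result vs exact solution

Because the coefficients are cheap to generate to high order, the step size can be made far larger than a Runge-Kutta scheme would tolerate at the same accuracy, which is the source of the reported speedups.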
Modeling aluminum-lithium alloy welding characteristics
NASA Technical Reports Server (NTRS)
Bernstein, Edward L.
1996-01-01
The purpose of this project was to develop a finite element model of the heat-affected zone in the vicinity of a weld line on a plate in order to determine an accurate plastic strain history. The plastic strain increments calculated by the finite element program were then to be used to calculate the measure of damage D. The goal was to determine the effects of varying welding parameters, such as beam power, efficiency, and weld speed, and the effect of different material properties on the occurrence of microfissuring. The results were to be compared first to the previous analysis of Inconel 718, and then extended to aluminum 2195.
NASA Technical Reports Server (NTRS)
1982-01-01
Williams International's F107 fanjet engine is used in two types of cruise missiles, Navy-sponsored Tomahawk and the Air Force AGM-86B Air Launched Cruise Missile (ALCM). Engine produces about 600 pounds thrust, is one foot in diameter and weighs only 141 pounds. Design was aided by use of a COSMIC program in calculating airflows in engine's internal ducting, resulting in a more efficient engine with increased thrust and reduced fuel consumption.
A new parallel algorithm of MP2 energy calculations.
Ishimura, Kazuya; Pulay, Peter; Nagase, Shigeru
2006-03-01
A new parallel algorithm has been developed for second-order Møller-Plesset perturbation theory (MP2) energy calculations. Its main projected applications are for large molecules, for instance, for the calculation of dispersion interaction. Tests on a moderate number of processors (2-16) show that the program has high CPU and parallel efficiency. Timings are presented for two relatively large molecules, taxol (C(47)H(51)NO(14)) and luciferin (C(11)H(8)N(2)O(3)S(2)), the former with the 6-31G* and 6-311G** basis sets (1,032 and 1,484 basis functions, 164 correlated orbitals), and the latter with the aug-cc-pVDZ and aug-cc-pVTZ basis sets (530 and 1,198 basis functions, 46 correlated orbitals). An MP2 energy calculation on C(130)H(10) (1,970 basis functions, 265 correlated orbitals) completed in less than 2 h on 128 processors.
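The working equation parallelized in such codes is the closed-shell MP2 energy, E2 = sum over i,j (occupied) and a,b (virtual) of (ia|jb)[2(ia|jb) - (ib|ja)] / (e_i + e_j - e_a - e_b). A dense-tensor sketch of that formula (not the paper's distributed algorithm; the driver uses random numbers purely to exercise the index algebra):

    import numpy as np

    def mp2_energy(eri_mo, eps, n_occ):
        # Closed-shell MP2 from MO-basis integrals in chemists' notation,
        # eri_mo[p,q,r,s] = (pq|rs); assumes the AO->MO transform is done.
        o, v = slice(0, n_occ), slice(n_occ, None)
        iajb = eri_mo[o, v, o, v]                      # (ia|jb)
        e_o, e_v = eps[:n_occ], eps[n_occ:]
        denom = (e_o[:, None, None, None] - e_v[None, :, None, None]
                 + e_o[None, None, :, None] - e_v[None, None, None, :])
        t2 = iajb / denom
        return np.einsum('iajb,iajb->', t2,
                         2.0 * iajb - iajb.transpose(0, 3, 2, 1))

    # Smoke test with random numbers (not a physical system):
    rng = np.random.default_rng(0)
    eri = rng.standard_normal((6, 6, 6, 6)) * 0.01
    eps = np.array([-2.0, -1.5, -1.0, 0.5, 0.9, 1.3])
    print(mp2_energy(eri, eps, n_occ=3))

The parallelization problem is then how to distribute the integral transformation and the (ia|jb) tensor across processors, which is where the paper's contribution lies.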
OFF-DESIGN PERFORMANCE OF RADIAL INFLOW TURBINES
NASA Technical Reports Server (NTRS)
Wasserbauer, C. A.
1994-01-01
This program calculates the off-design performance of radial inflow turbines. The program uses a one-dimensional solution of flow conditions through the turbine along the main streamline. The loss model accounts for stator, rotor, incidence, and exit losses. Program features include consideration of stator and rotor trailing edge blockage and computation of performance to limiting load. Stator loss (loss in kinetic energy across the stator) is proportional to the average kinetic energy in the blade row and is represented in the program by an equation which includes a stator loss coefficient determined from design-point performance and then assumed to be constant for the off-design calculations. Minimum incidence loss does not occur at zero incidence angle with respect to the rotor blade, but at some optimum flow angle. At high pressure ratios the level of rotor inlet velocity seemed to have an excessive influence on the loss; using the component of velocity in the direction of the optimum flow angle gave better correlations with experimental results. Overall turbine geometry and design-point values of efficiency, pressure ratio, and mass flow are needed as input information. The output includes performance and velocity diagram parameters for any number of given speeds over a range of turbine pressure ratios. The program has been implemented on the IBM 7094 and operates in batch mode.
Kalendar, Ruslan; Tselykh, Timofey V; Khassenov, Bekbolat; Ramanculov, Erlan M
2017-01-01
This chapter introduces the FastPCR software as an integrated tool environment for PCR primer and probe design; it predicts oligonucleotide properties based on experimental studies of PCR efficiency. The software provides comprehensive facilities for designing primers for most PCR applications and their combinations, including standard PCR as well as multiplex, long-distance, inverse, real-time, group-specific, unique, and overlap-extension PCR for multi-fragment assembly cloning, and loop-mediated isothermal amplification (LAMP). It also contains a built-in program to design oligonucleotide sets both for long sequence assembly by ligase chain reaction and for the design of amplicons that tile across a region(s) of interest. The software calculates the melting temperature for standard and degenerate oligonucleotides, including locked nucleic acid (LNA) and other modifications. It also provides analyses for a set of primers, with prediction of oligonucleotide properties, dimer and G/C-quadruplex detection, and linguistic complexity, as well as a primer dilution and resuspension calculator. The program includes various bioinformatics tools for the analysis of sequences with GC or AT skew, CG% and GA% content, and purine-pyrimidine skew. It also analyzes linguistic sequence complexity, generates random DNA sequences, and performs restriction endonuclease analysis. The program allows the user to find or create restriction enzyme recognition sites for coding sequences and supports the clustering of sequences. It performs efficient and complete detection of various repeat types, with visual display. The FastPCR software supports batch processing of sequence files, which is essential for automation. The program is available for download at http://primerdigital.com/fastpcr.html , and its online version is located at http://primerdigital.com/tools/pcr.html .
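Melting-temperature prediction is the workhorse calculation in any primer-design tool. A sketch of two classic textbook estimates (FastPCR itself uses more elaborate nearest-neighbor thermodynamics; the salt-correction form shown here is one common variant and is an assumption on our part):

    import math

    def melting_temperature(seq, na_conc=0.05):
        # Wallace rule for short primers (< 14 nt), otherwise the
        # GC%/length formula with a simple sodium-concentration term.
        seq = seq.upper()
        n = len(seq)
        gc = seq.count('G') + seq.count('C')
        if n < 14:
            return 2 * (n - gc) + 4 * gc               # Wallace rule, deg C
        return 64.9 + 41.0 * (gc - 16.4) / n + 16.6 * math.log10(na_conc / 0.05)

    print(melting_temperature("ATGCGTACGTTAGC"))   # 14-mer, GC formula
    print(melting_temperature("ACGTACGT"))         # short primer, Wallace rule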
DOE Office of Scientific and Technical Information (OSTI.GOV)
Granderson, Jessica; Touzani, Samir; Custodio, Claudine
Trustworthy savings calculations are critical to convincing investors in energy efficiency projects of the benefit and cost-effectiveness of such investments and their ability to replace or defer supply-side capital investments. However, today’s methods for measurement and verification (M&V) of energy savings constitute a significant portion of the total costs of efficiency projects. They also require time-consuming manual data acquisition and often do not deliver results until years after the program period has ended. The rising availability of “smart” meters, combined with new analytical approaches to quantifying savings, has opened the door to conducting M&V more quickly and at lower cost, with comparable or improved accuracy. These meter- and software-based approaches, increasingly referred to as “M&V 2.0”, are the subject of surging industry interest, particularly in the context of utility energy efficiency programs. Program administrators, evaluators, and regulators are asking how M&V 2.0 compares with more traditional methods, how proprietary software can be transparently performance-tested, and how these techniques can be integrated into the next generation of whole-building focused efficiency programs. This paper expands recent analyses of public-domain whole-building M&V methods, focusing on more novel M&V 2.0 modeling approaches that are used in commercial technologies, as well as approaches that are documented in the literature and/or developed by the academic building research community. We present a testing procedure and metrics to assess the performance of whole-building M&V methods. We then illustrate the test procedure by evaluating the accuracy of ten baseline energy use models against measured data from a large dataset of 537 buildings. The results of this study show that the already available advanced interval data baseline models hold great promise for scaling the adoption of building measured savings calculations using Advanced Metering Infrastructure (AMI) data. Median coefficient of variation of the root mean squared error (CV(RMSE)) was less than 25% for every model tested when twelve months of training data were used. With even six months of training data, median CV(RMSE) for daily energy totals was under 25% for all models tested. These findings can be used to build confidence in model robustness and the readiness of these approaches for industry uptake and adoption.
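The screening metric used above is simple to reproduce. A minimal sketch of CV(RMSE) in the form commonly used for M&V baseline models (e.g. ASHRAE Guideline 14; the meter and model values are invented):

    import numpy as np

    def cv_rmse(measured, predicted, n_params=1):
        # Coefficient of variation of the RMSE, in percent.
        measured = np.asarray(measured, float)
        predicted = np.asarray(predicted, float)
        n = measured.size
        rmse = np.sqrt(((measured - predicted) ** 2).sum() / (n - n_params))
        return 100.0 * rmse / measured.mean()

    # Daily energy totals (kWh): baseline-model predictions vs meter data
    meter = np.array([820, 790, 860, 900, 875, 810, 798])
    model = np.array([805, 802, 845, 915, 860, 825, 790])
    print(f"CV(RMSE) = {cv_rmse(meter, model):.1f}%")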
NASA Technical Reports Server (NTRS)
Piszczor, M. F.; Brinker, D. J.; Flood, D. J.; Avery, J. E.; Fraas, L. M.; Fairbanks, E. S.; Yerkes, J. W.; O'Neill, M. J.
1991-01-01
A high-efficiency, lightweight space photovoltaic concentrator array is described. Previous work on the minidome Fresnel lens concentrator concept is being integrated with Boeing's 30 percent efficient tandem GaAs/GaSb concentrator cells into a high-performance photovoltaic array. Calculations indicate that, in the near term, such an array can achieve 300 W/sq m at a specific power of 100 W/kg. Emphasis of the program has now shifted to integrating the concentrator lens, tandem cell, and supporting panel structure into a space-qualifiable array. A description is presented of the current status of component and prototype panel testing and the development of a flight panel for the Photovoltaic Array Space Power Plus Diagnostics (PASP PLUS) flight experiment.
Variable Complexity Structural Optimization of Shells
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Venkataraman, Satchi
1999-01-01
Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide an easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-2110 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
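A response surface replaces expensive high-fidelity analyses with a cheap polynomial fitted to a handful of sample points, which the optimizer then queries freely. A minimal sketch on a hypothetical two-variable shell design problem (all numbers invented):

    import numpy as np

    def fit_quadratic_rs(X, y):
        # Least-squares quadratic response surface in two variables:
        # y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x1*x2 + b5*x2^2
        x1, x2 = X[:, 0], X[:, 1]
        A = np.column_stack([np.ones_like(x1), x1, x2,
                             x1 ** 2, x1 * x2, x2 ** 2])
        coeff, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coeff

    # Pretend these are buckling loads from a few expensive shell analyses,
    # sampled over (thickness, stiffener spacing):
    rng = np.random.default_rng(1)
    X = rng.uniform([1.0, 0.1], [3.0, 0.5], size=(15, 2))
    y = 4.0 + 2.5 * X[:, 0] - 3.0 * X[:, 1] + 0.3 * X[:, 0] ** 2 \
        + rng.normal(0, 0.05, 15)
    print("surrogate coefficients:", np.round(fit_quadratic_rs(X, y), 2))

The optimizer then works with the smooth surrogate, returning to the expensive model only to verify (and, if needed, refit around) the candidate optimum.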
Variable Complexity Structural Optimization of Shells
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Venkataraman, Satchi
1998-01-01
Structural designers today face both opportunities and challenges in a vast array of available analysis and optimization programs. Some programs, such as NASTRAN, are very general, permitting the designer to model any structure to any degree of accuracy, but often at a higher computational cost. Additionally, such general procedures often do not allow easy implementation of all constraints of interest to the designer. Other programs, based on algebraic expressions used by designers one generation ago, have limited applicability for general structures with modern materials. However, when applicable, they provide an easy understanding of design trade-offs. Finally, designers can also use specialized programs suitable for efficiently designing a subset of structural problems. For example, PASCO and PANDA2 are panel design codes which calculate response and estimate failure much more efficiently than general-purpose codes, but are narrowly applicable in terms of geometry and loading. Therefore, the problem of optimizing structures based on simultaneous use of several models and computer programs is a subject of considerable interest. The problem of using several levels of models in optimization has been dubbed variable complexity modeling. Work under NASA grant NAG1-1808 has been concerned with the development of variable complexity modeling strategies with special emphasis on response surface techniques. In addition, several modeling issues for the design of shells of revolution were studied.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sofronov, I.D.; Voronin, B.L.; Butnev, O.I.
1997-12-31
The aim of the work performed is to develop a 3D parallel program for the numerical calculation of gas dynamics problems with heat conduction on distributed-memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition that is reconstructed at each time cycle, and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a time cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the parallelization efficiency has been evaluated as a function of the number of processors and their parameters.
On the performance of explicit and implicit algorithms for transient thermal analysis
NASA Astrophysics Data System (ADS)
Adelman, H. M.; Haftka, R. T.
1980-09-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of explicit and implicit algorithms are discussed. A promising set of implicit algorithms, known as the GEAR package, is described. Four test problems, used for evaluating and comparing various algorithms, were selected, and finite element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the space shuttle orbiter wing. Calculations were carried out using the SPAR finite element program, the MITAS lumped-parameter program, and a special-purpose finite element program incorporating the GEAR algorithms. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff. Careful attention to modeling detail, such as avoiding thin or short high-conducting elements, can sometimes reduce the stiffness to the extent that explicit methods become advantageous.
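The stiffness trade-off is easy to see on a scalar model problem: an explicit step is cheap but its size is capped by the fastest time constant in the model, while an implicit step is unconditionally stable. A sketch (the decay rate and step size are arbitrary illustrative choices):

    import numpy as np

    # Stiff test problem y' = -lam*(y - cos t) - sin t, exact y = cos t.
    lam, h, t_end = 500.0, 0.01, 1.0   # h*lam = 5 > 2: explicit Euler unstable
    steps = int(t_end / h)

    y_exp = y_imp = 1.0
    for k in range(steps):
        t = k * h
        # Explicit Euler: stable only if h < 2/lam.
        y_exp = y_exp + h * (-lam * (y_exp - np.cos(t)) - np.sin(t))
        # Implicit (backward) Euler: solve y_new = y + h*f(t+h, y_new);
        # the problem is linear, so the solve is a one-line rearrangement.
        t1 = t + h
        y_imp = (y_imp + h * (lam * np.cos(t1) - np.sin(t1))) / (1.0 + h * lam)

    print("explicit Euler:", y_exp)          # blows up (grows ~4x per step)
    print("implicit Euler:", y_imp, "exact:", np.cos(t_end))

In a finite element thermal model the analogue of lam is set by the smallest, most conductive elements, which is why the modeling advice above about thin high-conducting elements matters.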
Analyzing and modeling gravity and magnetic anomalies using the SPHERE program and Magsat data
NASA Technical Reports Server (NTRS)
Braile, L. W.; Hinze, W. J.; Vonfrese, R. R. B. (Principal Investigator)
1981-01-01
Computer codes were completed, tested, and documented for analyzing magnetic anomaly vector components by equivalent point dipole inversion. The codes are intended for use in inverting the magnetic anomaly due to a spherical prism in a horizontal geomagnetic field and for recomputing the anomaly in a vertical geomagnetic field. Modeling of potential fields at satellite elevations that are derived from three-dimensional sources by program SPHERE was made significantly more efficient by improving the input routines. A preliminary model of the Andean subduction zone was used to compute the anomaly at satellite elevations using both actual geomagnetic parameters and vertical polarization. Program SPHERE is also being used to calculate satellite-level magnetic and gravity anomalies from the Amazon River Aulacogen.
[Stochastic model of infectious diseases transmission].
Ruiz-Ramírez, Juan; Hernández-Rodríguez, Gabriela Eréndira
2009-01-01
To propose a mathematical model that shows how population structure affects the size of infectious disease epidemics. This study was conducted during 2004 at the University of Colima. It used a generalized small-world network topology to represent contacts occurring within and between families. To that end, two MATLAB programs were written to calculate the efficiency of the network. A program in the C language was also developed to represent the stochastic susceptible-infectious-removed (SIR) model, and results were simultaneously obtained for the number of infected people. Increasing the number of families connected by meeting sites increased the size of the epidemics by roughly 400%. Population structure influences the rapid spread of infectious diseases, reaching epidemic effects.
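A compact way to reproduce this kind of experiment is a discrete-time stochastic SIR process on a Watts-Strogatz small-world graph. This sketch uses the networkx library; the transmission and recovery probabilities are arbitrary assumptions, not the study's parameters:

    import random
    import networkx as nx

    def sir_on_network(G, beta=0.1, gamma=0.05, seed_node=0, max_steps=1000):
        # Each step, every infectious node transmits to each susceptible
        # neighbour with probability beta and then recovers with
        # probability gamma. Returns the final epidemic size.
        status = {n: 'S' for n in G}
        status[seed_node] = 'I'
        for _ in range(max_steps):
            infectious = [n for n in G if status[n] == 'I']
            if not infectious:
                break
            for n in infectious:
                for nb in G[n]:
                    if status[nb] == 'S' and random.random() < beta:
                        status[nb] = 'I'   # acts from the next step on
                if random.random() < gamma:
                    status[n] = 'R'
        return sum(1 for n in G if status[n] == 'R')

    random.seed(2)
    G = nx.watts_strogatz_graph(n=1000, k=6, p=0.05)  # small-world contacts
    print("final epidemic size:", sir_on_network(G))

Raising the rewiring probability p (more "meeting sites" linking distant families) sharply increases the final size, which is the qualitative effect the study reports.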
Numerical implementation of the S-matrix algorithm for modeling of relief diffraction gratings
NASA Astrophysics Data System (ADS)
Yaremchuk, Iryna; Tamulevičius, Tomas; Fitio, Volodymyr; Gražulevičiūte, Ieva; Bobitski, Yaroslav; Tamulevičius, Sigitas
2013-11-01
A new numerical implementation is developed to calculate the diffraction efficiency of relief diffraction gratings. In the new formulation, vectors containing the expansion coefficients of electric and magnetic fields on boundaries of the grating layer are expressed by additional constants. An S-matrix algorithm has been systematically described in detail and adapted to a simple matrix form. This implementation is suitable for the study of optical characteristics of periodic structures by using modern object-oriented programming languages and different standard mathematical software. The modeling program has been developed on the basis of this numerical implementation and tested by comparison with other commercially available programs and experimental data. Numerical examples are given to show the usefulness of the new implementation.
Gerwin, Philip M; Norinsky, Rada M; Tolwani, Ravi J
2018-03-01
Laboratory animal programs and core laboratories often set service rates based on cost estimates. However, actual costs may be unknown, and service rates may not reflect the actual cost of services. Accurately evaluating the actual costs of services can be challenging and time-consuming. We used a time-driven activity-based costing (ABC) model to determine the cost of services provided by a resource laboratory at our institution. The time-driven approach is a more efficient way to calculate costs than a traditional ABC model. We calculated only 2 parameters: the time required to perform an activity and the unit cost of the activity based on employee cost. This method allowed us to rapidly and accurately calculate the actual cost of services provided, including microinjection of a DNA construct, microinjection of embryonic stem cells, embryo transfer, and in vitro fertilization. We successfully implemented a time-driven ABC model to evaluate the cost of these services and the capacity of labor used to deliver them. We determined how actual costs compared with current service rates. In addition, we determined that the labor supplied to conduct all services (10,645 min/wk) exceeded the practical labor capacity (8400 min/wk), indicating that the laboratory team was highly efficient and that additional labor capacity was needed to prevent overloading of the current team. Importantly, this time-driven ABC approach allowed us to establish a baseline model that can easily be updated to reflect operational changes or changes in labor costs. We demonstrated that a time-driven ABC model is a powerful management tool that can be applied to other core facilities as well as to entire animal programs, providing valuable information that can be used to set rates based on the actual cost of services and to improve operating efficiency.
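Time-driven ABC needs only the two estimates named above: a capacity cost rate and the minutes each service consumes. A sketch of the arithmetic (all dollar figures and service times below are illustrative, not the laboratory's data):

    # Capacity cost rate: fully loaded labor cost / practical capacity.
    weekly_labor_cost = 12000.0          # $/week, assumed
    practical_capacity_min = 8400.0      # usable labor minutes per week
    rate = weekly_labor_cost / practical_capacity_min   # $/minute

    services = {                         # minutes of labor per procedure
        "DNA microinjection": 180,
        "ES-cell microinjection": 240,
        "embryo transfer": 90,
        "in vitro fertilization": 300,
    }
    for name, minutes in services.items():
        print(f"{name}: ${minutes * rate:.2f} per procedure")

    # Comparing demanded minutes with capacity flags overload:
    weekly_volume = {"DNA microinjection": 10, "ES-cell microinjection": 8,
                     "embryo transfer": 25, "in vitro fertilization": 6}
    demand = sum(services[s] * q for s, q in weekly_volume.items())
    print("labor demanded:", demand, "min/wk vs capacity:",
          practical_capacity_min, "min/wk")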
SEISRISK II; a computer program for seismic hazard estimation
Bender, Bernice; Perkins, D.M.
1982-01-01
The computer program SEISRISK II calculates probabilistic ground motion values for use in seismic hazard mapping. SEISRISK II employs a model that allows earthquakes to occur as points within source zones and as finite-length ruptures along faults. It assumes that earthquake occurrences have a Poisson distribution, that occurrence rates remain constant during the time period considered, that ground motion resulting from an earthquake is a known function of magnitude and distance, that seismically homogeneous source zones are defined, that fault locations are known, that fault rupture lengths depend on magnitude, and that earthquake rates as a function of magnitude are specified for each source. SEISRISK II calculates, for each site on a grid of sites, the level of ground motion that has a specified probability of being exceeded during a given time period. The program was designed to process a large (essentially unlimited) number of sites and sources efficiently and has been used to produce regional and national maps of seismic hazard. It is a substantial revision of an earlier program, SEISRISK I, which was never documented. SEISRISK II runs considerably faster and gives more accurate results than the earlier program and in addition includes rupture length and acceleration variability, which were not contained in the original version. We describe the model and how it is implemented in the computer program and provide a flowchart and listing of the code.
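Under the Poisson assumption, the link between an annual exceedance rate and the mapped probability level is one line of algebra, P = 1 - exp(-vt). A sketch recovering the return period behind the familiar "10% in 50 years" criterion:

    import math

    def exceedance_prob(rate_per_year, years):
        # Poisson model: probability of at least one exceedance in t years.
        return 1.0 - math.exp(-rate_per_year * years)

    target_p, t = 0.10, 50.0
    rate = -math.log(1.0 - target_p) / t
    print(f"annual exceedance rate: {rate:.5f} (return period {1/rate:.0f} yr)")
    print(f"check: P = {exceedance_prob(rate, t):.3f}")

The hazard code's job is then to sum, over all sources and magnitudes, the rates at which each candidate ground-motion level is exceeded at a site, and invert that total rate for the mapped motion.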
NASA Astrophysics Data System (ADS)
Lin, Lin
The computational cost of standard Kohn-Sham density functional theory (KSDFT) calculations scales cubically with respect to the system size, which limits its use in large-scale applications. In recent years, we have developed an alternative procedure called the pole expansion and selected inversion (PEXSI) method. The PEXSI method solves KSDFT without computing any eigenvalues or eigenvectors, and directly evaluates physical quantities including electron density, energy, atomic force, density of states, and local density of states. The overall algorithm scales at most quadratically for all materials, including insulators, semiconductors and the difficult metallic systems. The PEXSI method can be efficiently parallelized over 10,000 - 100,000 processors on high performance machines. The PEXSI method has been integrated into a number of community electronic structure software packages such as ATK, BigDFT, CP2K, DGDFT, FHI-aims and SIESTA, and has been used in a number of applications with 2D materials beyond 10,000 atoms. The PEXSI method works for LDA, GGA and meta-GGA functionals. The mathematical structure of hybrid functional KSDFT calculations is significantly different. I will also discuss recent progress on using the adaptively compressed exchange method for accelerating hybrid functional calculations. DOE SciDAC Program, DOE CAMERA Program, LBNL LDRD, Sloan Fellowship.
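Schematically (our paraphrase of the published formalism; normalization conventions vary), the pole expansion represents the single-particle density matrix as a short sum over complex shifts,

    \[
    \Gamma \;\approx\; \operatorname{Im}\!\Bigl(\sum_{l=1}^{P} \omega_l \,(H - z_l S)^{-1}\Bigr),
    \]

so only selected entries of each shifted inverse \((H - z_l S)^{-1}\) are required, and those are exactly what the selected inversion step computes without ever diagonalizing \(H\).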
Coalescent: an open-science framework for importance sampling in coalescent theory.
Tewari, Susanta; Spouge, John L
2015-01-01
Background. In coalescent theory, computer programs often use importance sampling to calculate likelihoods and other statistical quantities. An importance sampling scheme can exploit human intuition to improve the statistical efficiency of computations, but unfortunately, in the absence of general computer frameworks for importance sampling, researchers often struggle to translate new sampling schemes computationally or to benchmark against different schemes, in a manner that is reliable and maintainable. Moreover, most studies use computer programs lacking a convenient user interface or the flexibility to meet the current demands of open science. In particular, current computer frameworks can only evaluate the efficiency of a single importance sampling scheme or compare the efficiencies of different schemes in an ad hoc manner. Results. We have designed a general framework (http://coalescent.sourceforge.net; language: Java; License: GPLv3) for importance sampling that computes likelihoods under the standard neutral coalescent model of a single, well-mixed population of constant size over time, following the infinite-sites model of mutation. The framework models the necessary core concepts, comes integrated with several data sets of varying size, implements the standard competing proposals, and integrates tightly with our previous framework for calculating exact probabilities. For a given dataset, it computes the likelihood and provides the maximum likelihood estimate of the mutation parameter. Well-known benchmarks in the coalescent literature validate the accuracy of the framework. The framework provides an intuitive user interface with minimal clutter. For performance, the framework switches automatically to modern multicore hardware, if available. It runs on three major platforms (Windows, Mac and Linux). Extensive tests and coverage make the framework reliable and maintainable. Conclusions. In coalescent theory, many studies of computational efficiency consider only effective sample size. Here, we evaluate proposals in the coalescent literature, to discover that the order of efficiency among the three importance sampling schemes changes when one considers running time as well as effective sample size. We also describe a computational technique called "just-in-time delegation" available to improve the trade-off between running time and precision by constructing improved importance sampling schemes from existing ones. Thus, our systems approach is a potential solution to the "2^8 programs problem" highlighted by Felsenstein, because it provides the flexibility to include or exclude various features of similar coalescent models or importance sampling schemes.
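The two quantities being traded off, an importance-sampling estimate and its effective sample size, can be sketched as follows (toy one-dimensional target and proposal, not the coalescent-specific schemes):

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-in target density (standard normal) and proposal q = Normal(0.5, 1.5)
target_logpdf = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
proposal_logpdf = lambda x: (-0.5 * ((x - 0.5) / 1.5)**2
                             - np.log(1.5 * np.sqrt(2 * np.pi)))

x = rng.normal(0.5, 1.5, size=10_000)          # draws from the proposal
logw = target_logpdf(x) - proposal_logpdf(x)   # log importance weights
w = np.exp(logw - logw.max())                  # stabilized weights

estimate = w.mean() * np.exp(logw.max())       # IS estimate of the normalizing constant
ess = w.sum()**2 / (w**2).sum()                # effective sample size
print(f"estimate = {estimate:.3f}, ESS = {ess:.0f} of {x.size}")
# efficiency comparisons should also charge for running time, e.g. ESS per second
```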
Kossert, K; Cassette, Ph; Carles, A Grau; Jörg, G; Gostomski, Christoph Lierse V; Nähle, O; Wolf, Ch
2014-05-01
The triple-to-double coincidence ratio (TDCR) method is frequently used to measure the activity of radionuclides decaying by pure β emission or electron capture (EC). Some radionuclides with more complex decays have also been studied, but accurate efficiency calculations for decay branches that are accompanied by many coincident γ transitions had not yet been investigated. This paper describes recent extensions of the model to make efficiency computations for more complex decay schemes possible. In particular, the MICELLE2 program, which applies a stochastic approach to the free parameter model, was extended. With the improved code, efficiencies for β(-), β(+) and EC branches with up to seven coincident γ transitions can be calculated. Moreover, a new parametrization for the computation of electron stopping powers has been implemented to compute the ionization quenching function of 10 commercial scintillation cocktails. In order to demonstrate the capabilities of the TDCR method, the following radionuclides are discussed: (166m)Ho (complex β(-)/γ), (59)Fe (complex β(-)/γ), (64)Cu (β(-), β(+), EC and EC/γ) and (229)Th in equilibrium with its progeny (decay chain with many α, β and complex β(-)/γ transitions). © 2013 Published by Elsevier Ltd.
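In the free parameter model underlying TDCR counting, a mean of m photoelectrons shared equally by three phototubes gives a single-tube detection probability p = 1 − exp(−m/3), from which the triple and double coincidence efficiencies follow. A minimal sketch, illustrative only and not MICELLE2 output:

```python
import numpy as np

def tdcr(m):
    """Triple-to-double coincidence ratio for mean photoelectron number m."""
    p = 1.0 - np.exp(-m / 3.0)      # at least one photoelectron in a given PMT
    triple = p**3                   # all three tubes fire
    double = 3 * p**2 - 2 * p**3    # at least two of the three tubes fire
    return triple / double

for m in (2.0, 5.0, 10.0):
    print(f"mean photoelectrons {m:4.1f}: TDCR = {tdcr(m):.3f}")
```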
On-board computer progress in development of A 310 flight testing program
NASA Technical Reports Server (NTRS)
Reau, P.
1981-01-01
Onboard computer progress in the development of an Airbus A 310 flight testing program is described. Minicomputers were installed onboard three A 310 airplanes in 1979 in order to: (1) assure flight safety by exercising a limit check of a given set of parameters; (2) improve the efficiency of flight tests and allow cost reduction; and (3) perform test analysis on an external basis by utilizing onboard flight tapes. The following program considerations are discussed: (1) conclusions based on simulation of an onboard computer system; (2) brief descriptions of A 310 airborne computer equipment, specifically the onboard universal calculator (CUB) consisting of a ROLM 1666 system and a visualization system using an AFIGRAF CRT; (3) the ground system and flight information inputs; and (4) specifications and execution priorities for temporary and permanent programs.
User Instructions for the Policy Analysis Modeling System (PAMS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
McNeil, Michael A.; Letschert, Virginie E.; Van Buskirk, Robert D.
PAMS uses country-specific and product-specific data to calculate estimates of impacts of a Minimum Efficiency Performance Standard (MEPS) program. The analysis tool is self-contained in a Microsoft Excel spreadsheet, and requires no links to external data, or special code additions to run. The analysis can be customized to a particular program without additional user input, through the use of the pull-down menus located on the Summary page. In addition, the spreadsheet contains many areas into which user-generated input data can be entered for increased accuracy of projection. The following is a step-by-step guide for using and customizing the tool.
University of Arizona High Energy Physics Program at the Cosmic Frontier 2014-2016
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abate, Alex; Cheu, Elliott
This is the final technical report from the University of Arizona High Energy Physics program at the Cosmic Frontier covering the period 2014-2016. The work aims to advance the understanding of dark energy using the Large Synoptic Survey Telescope (LSST). Progress on the engineering design of the power supplies for the LSST camera is discussed. A variety of contributions to photometric redshift measurement uncertainties were studied. The effect of the intergalactic medium on the photometric redshift of very distant galaxies was evaluated. Computer code was developed realizing the full chain of calculations needed to accurately and efficiently run large-scale simulations.
Computer Model Of Fragmentation Of Atomic Nuclei
NASA Technical Reports Server (NTRS)
Wilson, John W.; Townsend, Lawrence W.; Tripathi, Ram K.; Norbury, John W.; Khan, Ferdous; Badavi, Francis F.
1995-01-01
High Charge and Energy Semiempirical Nuclear Fragmentation Model (HZEFRG1) computer program developed to be computationally efficient, user-friendly, physics-based program for generating data bases on fragmentation of atomic nuclei. Data bases generated used in calculations pertaining to such radiation-transport applications as shielding against radiation in outer space, radiation dosimetry in outer space, cancer therapy in laboratories with beams of heavy ions, and simulation studies for designing detectors for experiments in nuclear physics. Provides cross sections for production of individual elements and isotopes in breakups of high-energy heavy ions by combined nuclear and Coulomb fields of interacting nuclei. Written in ANSI FORTRAN 77.
Aerodynamic penalties of heavy rain on a landing aircraft
NASA Technical Reports Server (NTRS)
Haines, P. A.; Luers, J. K.
1982-01-01
The aerodynamic penalties of very heavy rain on landing aircraft were investigated. Based on severity and frequency of occurrence, rainfall rates of 100 mm/hr, 500 mm/hr, and 2000 mm/hr were designated, respectively, as heavy, severe, and incredible. The overall and local collection efficiencies of an aircraft encountering these rains were calculated. The analysis was based on raindrop trajectories in potential flow about the aircraft. All raindrops impinging on the aircraft are assumed to take on its speed. The momentum loss from the rain impact was then used in a landing simulation program. The local collection efficiency was used in estimating the aerodynamic roughness of an aircraft in heavy rain, and the drag increase from this roughness was calculated. A number of landing simulations under a fixed-stick assumption were performed. Serious landing shortfalls were found for either momentum or drag penalties, and especially large shortfalls for the combination of both. The latter shortfalls are comparable to those found for severe wind shear conditions.
NASA Astrophysics Data System (ADS)
Kitao, Akio; Harada, Ryuhei; Nishihara, Yasutaka; Tran, Duy Phuoc
2016-12-01
Parallel Cascade Selection Molecular Dynamics (PaCS-MD) was proposed as an efficient conformational sampling method to investigate the conformational transition pathways of proteins. In PaCS-MD, cycles of (i) selection of initial structures for multiple independent MD simulations and (ii) conformational sampling by independent MD simulations are repeated until the sampling converges. The selection is conducted so that the protein conformation gradually approaches a target. The selection of snapshots is key to enhancing conformational changes by increasing the probability of rare-event occurrence. Since the procedure of PaCS-MD is simple, no modification of MD programs is required; the selection of initial structures and the restart of the next cycle of MD simulations can be handled with relatively simple scripts. Trajectories generated by PaCS-MD are further analyzed with the Markov state model (MSM), which enables calculation of the free energy landscape. The combination of PaCS-MD and MSM is reported in this work.
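The MSM step can be sketched in a few lines: count transitions at a lag time, row-normalize, take the stationary eigenvector, and convert to free energies. The state trajectory below is synthetic; real input would come from clustering PaCS-MD snapshots:

```python
import numpy as np

rng = np.random.default_rng(2)
traj = rng.integers(0, 4, size=5000)          # toy sequence of 4 discrete states
nstates, lag = 4, 10

C = np.zeros((nstates, nstates))
for i, j in zip(traj[:-lag], traj[lag:]):     # transition counts at the lag time
    C[i, j] += 1
T = C / C.sum(axis=1, keepdims=True)          # row-stochastic transition matrix

evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()                                # stationary distribution

kT = 1.0
F = -kT * np.log(pi)                          # free energy, up to an additive constant
print("free energies (kT):", np.round(F - F.min(), 2))
```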
DOE Office of Scientific and Technical Information (OSTI.GOV)
Winkler, Jon; Booten, Chuck
Residential building codes and voluntary labeling programs are continually increasing the energy efficiency requirements of residential buildings. Improving a building's thermal enclosure and installing energy-efficient appliances and lighting can result in significant reductions in sensible cooling loads, leading to smaller air conditioners and shorter cooling seasons. However, due to fresh air ventilation requirements and internal gains, latent cooling loads are not reduced by the same proportion. Thus, it is becoming more challenging for conventional cooling equipment to control indoor humidity at part-load cooling conditions, and using conventional cooling equipment in a non-conventional building poses the potential risk of high indoor humidity. The objective of this project was to investigate the impact the chosen design condition has on the calculated part-load cooling moisture load, and to compare calculated moisture loads and the required dehumidification capacity to whole-building simulations. Procedures for sizing whole-house supplemental dehumidification equipment have yet to be formalized; however, minor modifications to current Air Conditioning Contractors of America (ACCA) Manual J load calculation procedures are appropriate for calculating residential part-load cooling moisture loads. Though ASHRAE 1% dew point (DP) design conditions are commonly used to determine the dehumidification requirements for commercial buildings, an appropriate DP design condition for residential buildings has not been investigated. Two methods for sizing supplemental dehumidification equipment were developed and tested. The first method closely followed Manual J cooling load calculations, whereas the second method made more conservative assumptions impacting both sensible and latent loads.
NASA Astrophysics Data System (ADS)
Murni; Bustamam, A.; Ernastuti; Handhika, T.; Kerami, D.
2017-07-01
Calculation of matrix-vector multiplication in real-world problems often involves large matrices of arbitrary size, so parallelization is needed to speed up a calculation that usually takes a long time. The graph partitioning techniques discussed in previous studies cannot be used to parallelize matrix-vector multiplication for matrices of arbitrary size, because graph partitioning assumes a square, symmetric matrix. Hypergraph partitioning techniques overcome this shortcoming of graph partitioning. This paper addresses the efficient parallelization of matrix-vector multiplication through hypergraph partitioning techniques using CUDA GPU-based parallel computing. CUDA (compute unified device architecture) is a parallel computing platform and programming model created by NVIDIA and implemented on its GPUs (graphics processing units).
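The decomposition being optimized can be pictured with a 1D row-block stand-in: each "processor" owns a block of rows of the sparse matrix and computes its slice of y = Ax. The sketch uses scipy on the CPU in place of the paper's CUDA kernels; a hypergraph partitioner would replace the uniform block boundaries:

```python
import numpy as np
from scipy.sparse import random as sprandom

A = sprandom(1000, 600, density=0.01, format="csr", random_state=3)  # rectangular
x = np.ones(600)

nparts = 4
bounds = np.linspace(0, A.shape[0], nparts + 1, dtype=int)  # uniform row blocks
y = np.empty(A.shape[0])
for p in range(nparts):                 # each iteration = one processor's local work
    lo, hi = bounds[p], bounds[p + 1]
    y[lo:hi] = A[lo:hi] @ x             # local SpMV on the owned row block

assert np.allclose(y, A @ x)
# a hypergraph partitioner replaces the uniform 'bounds' with a partition that
# balances nonzeros per part while minimizing the communication volume for x
```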
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, S; Guerrero, M; Zhang, B
Purpose: To implement a comprehensive non-measurement-based verification program for patient-specific IMRT QA. Methods: Based on published guidelines, a robust IMRT QA program should assess the following components: 1) accuracy of dose calculation, 2) accuracy of data transfer from the treatment planning system (TPS) to the record-and-verify (RV) system, 3) treatment plan deliverability, and 4) accuracy of plan delivery. Results: We have implemented an IMRT QA program that consists of four components: 1) an independent re-calculation of the dose distribution in the patient anatomy with a commercial secondary dose calculation program, Mobius3D (Mobius Medical Systems, Houston, TX), with dose accuracy evaluated using gamma analysis, PTV mean dose, PTV coverage to 95%, and organ-at-risk mean dose; 2) an automated, in-house-developed plan comparison system that compares all relevant plan parameters, such as MU, MLC position, beam isocenter position, collimator, gantry, couch, field size settings, and bolus placement, between the plan and the RV system; 3) use of the RV system to check plan deliverability, with further confirmation using the "mode-up" function on the treatment console for plans receiving warnings; and 4) implementation of a comprehensive weekly MLC QA, in addition to routine accelerator monthly and daily QA. Among 1200 verifications, there were 9 cases of suspicious calculations, 5 cases of delivery failure, no data transfer errors, and no failures of weekly MLC QA. The 9 suspicious cases were due to the PTV extending to the skin or to heterogeneity correction effects, which would not have been caught using phantom measurement-based QA. The delivery failures were due to rounding variation of MLC positions between the planning system and the RV system. Conclusion: A very efficient, yet comprehensive, non-measurement-based patient-specific QA program has been implemented and used clinically for about 18 months with excellent results.
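The gamma analysis used in component 1 combines a dose-difference and a distance-to-agreement criterion; below is a simplified 1D global-gamma sketch on synthetic profiles, not Mobius3D's implementation:

```python
import numpy as np

def gamma_index(dose_ref, dose_eval, x, dd=0.03, dta=3.0):
    """1D global gamma: dd = dose criterion (fraction of max), dta in mm."""
    gammas = []
    d_norm = dd * dose_ref.max()                 # global dose normalization
    for xi, di in zip(x, dose_ref):
        dose_term = (dose_eval - di) / d_norm    # dose difference term
        dist_term = (x - xi) / dta               # distance-to-agreement term
        gammas.append(np.sqrt(dose_term**2 + dist_term**2).min())
    return np.array(gammas)

x = np.linspace(0, 100, 201)                     # position, mm
ref = np.exp(-((x - 50) / 20)**2)                # toy reference profile
ev = np.exp(-((x - 51) / 20)**2) * 1.01          # slightly shifted and scaled
g = gamma_index(ref, ev, x)
print(f"gamma pass rate (gamma <= 1): {(g <= 1).mean():.1%}")
```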
Single, composite, and ceramic Nd:YAG 946-nm lasers
NASA Astrophysics Data System (ADS)
Lan, Rui-Jun; Yang, Guang; Wang, Zheng-Ping
2015-06-01
Continuous-wave (CW) 946-nm Nd:YAG lasers based on single crystal, composite crystal, and ceramic gain media are demonstrated. The ceramic laser performs better than the crystal lasers. With a 5-mm-long ceramic, a CW output power of 1.46 W is generated with an optical conversion efficiency of 13.9%, while the slope efficiency is 17.9%. The optimal ceramic length for a 946-nm laser is also calculated. Project supported by the National Natural Science Foundation of China (Grant No. 61405171), the Natural Science Foundation of Shandong Province, China (Grant No. ZR2012FQ014), and the Science and Technology Program of the Shandong Higher Education Institutions of China (Grant No. J13LJ05).
Automated optimization techniques for aircraft synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1976-01-01
Application of numerical optimization techniques to automated conceptual aircraft design is examined. These methods are shown to be a general and efficient way to obtain quantitative information for evaluating alternative new vehicle projects. Fully automated design is compared with traditional point design methods and time and resource requirements for automated design are given. The NASA Ames Research Center aircraft synthesis program (ACSYNT) is described with special attention to calculation of the weight of a vehicle to fly a specified mission. The ACSYNT procedures for automatically obtaining sensitivity of the design (aircraft weight, performance and cost) to various vehicle, mission, and material technology parameters are presented. Examples are used to demonstrate the efficient application of these techniques.
Parallel computation using boundary elements in solid mechanics
NASA Technical Reports Server (NTRS)
Chien, L. S.; Sun, C. T.
1990-01-01
The inherent parallelism of the boundary element method is shown. The boundary element is formulated by assuming linear variation of displacements and tractions within a line element. Moreover, the MACSYMA symbolic program is employed to obtain analytical results for the influence coefficients. Three computational components are parallelized in this method to show the speedup and efficiency in computation. The global coefficient matrix is first formed concurrently. Then, a parallel Gaussian elimination scheme is applied to solve the resulting system of equations. Finally, and more importantly, the domain solutions of a given boundary value problem are calculated simultaneously. Near-linear speedups and high efficiencies are shown for a demonstration problem solved on the Sequent Symmetry S81 parallel computing system.
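The reported speedup and efficiency are the usual ratios S = T1/Tp and E = S/p; a minimal sketch with invented timings, not the paper's measurements:

```python
# Speedup and parallel efficiency from wall-clock timings (numbers assumed).
t_serial = 120.0                        # seconds on one processor
timings = {2: 61.0, 4: 31.5, 8: 16.8}   # wall time on p processors

for p, tp in timings.items():
    speedup = t_serial / tp             # S = T1 / Tp
    efficiency = speedup / p            # E = S / p
    print(f"p={p}: speedup = {speedup:.2f}, efficiency = {efficiency:.0%}")
```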
A theoretical study of heterojunction and graded band gap type solar cells
NASA Technical Reports Server (NTRS)
Sutherland, J. E.; Hauser, J. R.
1977-01-01
A computer program was designed for the analysis of variable composition solar cells and applied to several proposed solar cell structures using appropriate semiconductor materials. The program simulates solar cells made of a ternary alloy of two binary semiconductors with an arbitrary composition profile, and an abrupt or Gaussian doping profile of polarity n-on-p or p-on-n with arbitrary doping levels. Once the device structure is specified, the program numerically solves a complete set of differential equations and calculates electrostatic potential, quasi-Fermi levels, carrier concentrations and current densities, total current density and efficiency as functions of terminal voltage and position within the cell. These results are then recorded by computer in tabulated or plotted form for interpretation by the user.
NASA Technical Reports Server (NTRS)
Saunders, David A.
2005-01-01
Trajectory optimization program Traj_opt was developed at Ames Research Center to help assess the potential benefits of ultrahigh temperature ceramic materials applied to reusable space vehicles with sharp noses and wing leading edges. Traj_opt loosely couples the Ames three-degrees-of-freedom trajectory package Traj (see NASA-TM-2004-212847) with the SNOPT optimization package (Stanford University Technical Report SOL 98-1). Traj_opt version January 22, 2003 is covered by this user guide. The program has been applied extensively to entry and ascent abort trajectory calculations for sharp and blunt crew transfer vehicles. The main optimization variables are control points for the angle of attack and bank angle time histories. No propulsion options are provided, but numerous objective functions may be specified and the nonlinear constraints implemented include a distributed surface heating constraint capability. Aero-capture calculations are also treated with an option to minimize orbital eccentricity at apoapsis. Traj_opt runs efficiently on a single processor, using forward or central differences for the gradient calculations. Results may be displayed conveniently with Gnuplot scripts. Control files recommended for five standard reentry and ascent abort trajectories are included along with detailed descriptions of the inputs and outputs.
Numerical simulation of hydrogen fluorine overtone chemical lasers
NASA Astrophysics Data System (ADS)
Chen, Jinbao; Jiang, Zhongfu; Hua, Weihong; Liu, Zejin; Shu, Baihong
1998-08-01
A two-dimensional program was applied to simulate the chemical dynamic process, gas dynamic process, and lasing process of a combustion-driven CW HF overtone chemical laser. Some important parameters in the cavity were obtained. The calculated results included the HF molecule concentration on each vibrational energy level while lasing, averaged pressure and temperature, zero-power gain coefficient of each spectral line, laser spectrum, averaged laser intensity, output power, chemical efficiency, and the length of the lasing zone.
The NRL (Naval Research Laboratory) Phase-Locked Gyrotron Oscillator Program for SDIO/IST
1988-07-11
are neglected as are space-charge effects. The cold cavity eigenfrequency for the TE621 mode is 35.08 GHz. The calculated efficiency, output power... improved beam quality on the gyrotron operation, and to eliminate the unknown space-charge effects present in the original experiment, in which a... substantial fraction of the diode current is reflected before reaching the gyrotron cavity and may cause space-charge problems before being collected on
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Khaled Z.; Epifanovsky, Evgeny; Williams, Samuel
Coupled-cluster methods provide highly accurate models of molecular structure through explicit numerical calculation of tensors representing the correlation between electrons. These calculations are dominated by a sequence of tensor contractions, motivating the development of numerical libraries for such operations. While based on matrix–matrix multiplication, these libraries are specialized to exploit symmetries in the molecular structure and in electronic interactions, and thus reduce the size of the tensor representation and the complexity of contractions. The resulting algorithms are irregular and their parallelization has been previously achieved via the use of dynamic scheduling or specialized data decompositions. We introduce our efforts to extend the Libtensor framework to work in the distributed memory environment in a scalable and energy-efficient manner. We achieve up to 240× speedup compared with the optimized shared memory implementation of Libtensor. We attain scalability to hundreds of thousands of compute cores on three distributed-memory architectures (Cray XC30 and XC40, and IBM Blue Gene/Q), and on a heterogeneous GPU-CPU system (Cray XK7). As the bottlenecks shift from being compute-bound DGEMM's to communication-bound collectives as the size of the molecular system scales, we adopt two radically different parallelization approaches for handling load-imbalance, tasking and bulk synchronous models. Nevertheless, we preserve a unified interface to both programming models to maintain the productivity of computational quantum chemists.
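The DGEMM-centric structure is easy to see on a toy contraction: reshaping the tensors turns u_{ij}^{ab} = Σ_{cd} t_{ij}^{cd} v_{cd}^{ab} into a single matrix multiply. Dimensions below are invented; the real libraries additionally exploit permutational symmetry and block sparsity:

```python
import numpy as np

no, nv = 8, 20                                 # occupied / virtual dimensions (toy)
rng = np.random.default_rng(4)
t2 = rng.standard_normal((no, no, nv, nv))     # amplitudes t_{ij}^{cd}
v = rng.standard_normal((nv, nv, nv, nv))      # integrals v_{cd}^{ab}

# matricize: rows (i,j) x cols (c,d) times rows (c,d) x cols (a,b) -> one GEMM
u_gemm = (t2.reshape(no * no, nv * nv)
          @ v.reshape(nv * nv, nv * nv)).reshape(t2.shape)

u_ref = np.einsum("ijcd,cdab->ijab", t2, v)    # reference contraction
assert np.allclose(u_gemm, u_ref)
```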
Gupta, Parth Sarthi Sen; Banerjee, Shyamashree; Islam, Rifat Nawaz Ul; Mondal, Sudipta; Mondal, Buddhadev; Bandyopadhyay, Amal K
2014-01-01
In the genomic and proteomic era, efficient and automated analysis of the sequence properties of proteins has become an important task in bioinformatics. There are general public licensed (GPL) software tools that perform parts of the job. However, computation of the mean properties of large numbers of orthologous sequences is not possible with the above-mentioned GPL tools. Further, there is no GPL software or server that can calculate window-dependent sequence properties for a large number of sequences in a single run. To overcome these limitations, we have developed a standalone procedure, PHYSICO, which performs the various stages of computation in a single run, based on the type of input provided in either RAW-FASTA or BLOCK-FASTA format, and produces Excel output for: a) composition, class composition, mean molecular weight, isoelectric point, aliphatic index and GRAVY; b) column-based compositions, variability and difference matrix; c) 25 kinds of window-dependent sequence properties. The program is fast, efficient, error-free and user-friendly. Calculation of the mean and standard deviation of homologous sequence sets, for comparison purposes when relevant, is another attribute of the program; a property seldom seen in existing GPL software. PHYSICO is freely available to non-commercial/academic users on formal request to the corresponding author akbanerjee@biotech.buruniv.ac.in.
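Two of the whole-sequence properties PHYSICO reports can be computed in a few lines; the sketch below uses Kyte-Doolittle hydropathy values for GRAVY and Ikai's aliphatic index, with a made-up test sequence (this is not the program's actual code):

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def gravy(seq):
    """Grand average of hydropathy: mean KD value over the sequence."""
    return sum(KD[aa] for aa in seq) / len(seq)

def aliphatic_index(seq):
    """Ikai (1980): AI = X(Ala) + 2.9*X(Val) + 3.9*(X(Ile) + X(Leu))."""
    n = len(seq)
    x = {aa: 100.0 * seq.count(aa) / n for aa in "AVIL"}   # mole percents
    return x["A"] + 2.9 * x["V"] + 3.9 * (x["I"] + x["L"])

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # hypothetical test sequence
print(f"GRAVY = {gravy(seq):.3f}, aliphatic index = {aliphatic_index(seq):.1f}")
```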
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Weatherill, W. H.; Yip, E. L.
1984-01-01
A finite difference method to solve the unsteady transonic flow about harmonically oscillating wings was investigated. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The differential equation for the unsteady velocity potential is linear with spatially varying coefficients and with the time variable eliminated by assuming harmonic motion. An alternating direction implicit procedure was investigated, and a pilot program was developed for both two and three dimensional wings. This program provides a relatively efficient relaxation solution without previously encountered solution instability problems. Pressure distributions for two rectangular wings are calculated. Conjugate gradient techniques were developed for the asymmetric, indefinite problem, and the conjugate gradient procedure is evaluated for applications to the unsteady transonic problem. Difference equations for the alternating direction procedure are derived using a coordinate transformation for swept and tapered wing planforms. Pressure distributions for swept, untapered wings of vanishing thickness are correlated with linear results for sweep angles up to 45 degrees.
Dynamic programming algorithms for biological sequence comparison.
Pearson, W R; Miller, W
1992-01-01
Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
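The O(N)-space trick is that each row of the dynamic-programming matrix depends only on the previous row; the sketch below computes a global alignment score that way for the simple gap penalty g = rk (Hirschberg's divide-and-conquer is then needed to recover the alignment itself in linear space). Scoring values are illustrative:

```python
def global_score(a, b, match=1, mismatch=-1, r=2):
    """Global alignment score in O(len(b)) memory; gap penalty g = r*k."""
    prev = [-r * j for j in range(len(b) + 1)]   # row 0: leading gaps in a
    for i, ca in enumerate(a, 1):
        cur = [-r * i]                           # column 0: leading gaps in b
        for j, cb in enumerate(b, 1):
            sub = prev[j - 1] + (match if ca == cb else mismatch)
            cur.append(max(sub,                  # substitution / match
                           prev[j] - r,          # gap in b
                           cur[j - 1] - r))      # gap in a
        prev = cur                               # keep only one previous row
    return prev[-1]

print(global_score("GATTACA", "GCATGCU"))
```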
Kang, Hee-Chung; Hong, Jae-Seok
2011-08-16
With a greater emphasis on cost containment in many health care systems, it has become common to evaluate each physician's relative resource use. This study explored the major factors that influence the economic performance rankings of medical clinics in the Korea National Health Insurance (NHI) program by assessing the consistency between cost-efficiency indices constructed using different profiling criteria. Data on medical care benefit costs for outpatient care at medical clinics nationwide were collected from the NHI claims database. We calculated eight types of cost-efficiency index with different profiling criteria for each medical clinic and investigated the agreement between the decile rankings of each index pair using the weighted kappa statistic. The exclusion of pharmacy cost lowered agreement between rankings to the lowest level, and differences in case-mix classification also lowered agreement considerably. A medical clinic may be identified as either cost-efficient or cost-inefficient, even when using the same index, depending on the profiling criteria applied. Whether a country has a single insurance or a multiple-insurer system, it is very important to have standardized profiling criteria for the consolidated management of health care costs.
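Agreement between decile rankings can be checked with a weighted kappa in a few lines; the sketch below uses quadratic weights (the study's weighting scheme may differ) and random stand-in rankings, not the NHI claims data:

```python
import numpy as np

def weighted_kappa(r1, r2, k=10):
    """Quadratically weighted kappa for two rankings with categories 0..k-1."""
    O = np.zeros((k, k))
    for i, j in zip(r1, r2):                  # observed agreement matrix
        O[i, j] += 1
    O /= O.sum()
    E = np.outer(O.sum(1), O.sum(0))          # expected under independence
    W = np.array([[(i - j) ** 2 for j in range(k)]
                  for i in range(k)]) / (k - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

rng = np.random.default_rng(5)
r1 = rng.integers(0, 10, 500)                         # decile ranks, criterion A
r2 = np.clip(r1 + rng.integers(-1, 2, 500), 0, 9)     # mostly agreeing criterion B
print(f"weighted kappa = {weighted_kappa(r1, r2):.2f}")
```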
New FEDS Software Helps You Design for Maximum Energy Efficiency, Minimum Cost
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilbride, Theresa L.
2003-01-30
This article was written for the Partner Update, a newsletter put out by Potomac Communications for DOE's Rebuild America program. The article describes the FEDS (Federal Energy Decision System) software, the official analytical tool of the Rebuild America program. This software, developed by PNNL with support from DOE, FEMP and Rebuild, helps government entities and contractors make informed decisions about which energy efficiency improvements are the most cost-effective for their facilities. FEDS churns through literally thousands of calculations accounting for energy uses, costs, and interactions from different types of HVAC systems, lighting types, insulation levels, building types, and occupancy levels and times. FEDS crunches the numbers so decision makers can get fast, reliable answers on which alternatives are best for their particular building. In this article, we're touting the improvements in the latest upgrade of FEDS, which is available free to Rebuild America partners. We tell partners what FEDS does, how to order it, and even where to get tech support and training.
Development of MCAERO wing design panel method with interactive graphics module
NASA Technical Reports Server (NTRS)
Hawk, J. D.; Bristow, D. R.
1984-01-01
A reliable and efficient iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical pressure distribution. The design process is initialized by using MCAERO (MCAIR 3-D Subsonic Potential Flow Analysis Code) to analyze a baseline configuration. A second program DMCAERO is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter by applying a first-order expansion to the baseline equations in MCAERO. This matrix is calculated only once but is used in each iteration cycle to calculate the geometry perturbation and to analyze the perturbed geometry. The potential on the new geometry is calculated by linear extrapolation from the baseline solution. This extrapolated potential is converted to velocity by numerical differentiation, and velocity is converted to pressure by using Bernoulli's equation. There is an interactive graphics option which allows the user to graphically display the results of the design process and to interactively change either the geometry or the prescribed pressure distribution.
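The once-per-design derivative matrix and repeated linear extrapolation amount to a Gauss-Newton-style update; the sketch below mimics that loop with a random stand-in sensitivity matrix. All dimensions and data are invented, and the linear stand-in converges in one cycle, whereas the real method iterates because each re-analysis is only approximated by extrapolation:

```python
import numpy as np

rng = np.random.default_rng(6)
n_ctrl, n_geom = 40, 12                       # control points, geometry unknowns (toy)
J = rng.standard_normal((n_ctrl, n_geom))     # d(potential)/d(geometry), computed once
g = np.zeros(n_geom)                          # geometry perturbation parameters
phi_target = rng.standard_normal(n_ctrl)      # prescribed potential (here random)

for cycle in range(3):
    phi = J @ g                               # extrapolated potential at control points
    dg, *_ = np.linalg.lstsq(J, phi_target - phi, rcond=None)
    g += dg                                   # perturb the geometry and repeat

print("residual norm:", np.linalg.norm(phi_target - J @ g))
```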
Latent uncertainties of the precalculated track Monte Carlo method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, Marc-André; Seuntjens, Jan; Roberge, David
Purpose: While significant progress has been made in speeding up Monte Carlo (MC) dose calculation methods, they remain too time-consuming for the purpose of inverse planning. To achieve clinically usable calculation speeds, a precalculated Monte Carlo (PMC) algorithm for proton and electron transport was developed to run on graphics processing units (GPUs). The algorithm utilizes pregenerated particle track data from conventional MC codes for different materials such as water, bone, and lung to produce dose distributions in voxelized phantoms. While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from the limited number of unique tracks in the pregenerated track bank is missing from the literature. With a proper uncertainty analysis, an optimal number of tracks in the pregenerated track bank can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pregenerated for electrons and protons using EGSnrc and GEANT4 and saved in a database. The PMC algorithm for track selection, rotation, and transport was implemented on the Compute Unified Device Architecture (CUDA) 4.0 programming framework. PMC dose distributions were calculated in a variety of media and compared to benchmark dose distributions simulated from the corresponding general-purpose MC codes in the same conditions. A latent uncertainty metric was defined, and analysis was performed by varying the pregenerated track bank size and the number of simulated primary particle histories and comparing dose values to a "ground truth" benchmark dose distribution calculated to 0.04% average uncertainty in voxels with dose greater than 20% of Dmax. Efficiency metrics were calculated against benchmark MC codes on a single CPU core with no variance reduction. Results: Dose distributions generated using PMC and benchmark MC codes were compared and found to be within 2% of each other in voxels with dose values greater than 20% of the maximum dose. In proton calculations, a small (≤1 mm) distance-to-agreement error was observed at the Bragg peak. Latent uncertainty was characterized for electrons and found to follow a Poisson distribution with the number of unique tracks per energy. A track bank of 12 energies and 60,000 unique tracks per pregenerated energy in water had a size of 2.4 GB and achieved a latent uncertainty of approximately 1% at an optimal efficiency gain over DOSXYZnrc. Larger track banks produced a lower latent uncertainty at the cost of increased memory consumption. Using an NVIDIA GTX 590, efficiency analysis showed a 807× efficiency increase over DOSXYZnrc for 16 MeV electrons in water and 508× for 16 MeV electrons in bone. Conclusions: The PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty of 1% with a large efficiency gain over conventional MC codes. Before performing clinical dose calculations, models to calculate dose contributions from uncharged particles must be implemented. Following the successful implementation of these models, the PMC method will be evaluated as a candidate for inverse planning of modulated electron radiation therapy and scanned proton beams.
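The efficiency gains quoted above follow the standard MC efficiency metric ε = 1/(σ²T); a hedged sketch with invented timings, not the paper's measurements:

```python
def efficiency(rel_sigma, seconds):
    """Standard Monte Carlo efficiency: eps = 1 / (sigma^2 * T)."""
    return 1.0 / (rel_sigma**2 * seconds)

# assumed: both codes run to the same 1% uncertainty; only the time differs
s_pmc, t_pmc = 0.01, 2.0        # PMC on GPU (illustrative)
s_ref, t_ref = 0.01, 1600.0     # benchmark code on one CPU core (illustrative)

gain = efficiency(s_pmc, t_pmc) / efficiency(s_ref, t_ref)
print(f"efficiency gain = {gain:.0f}x")   # order of magnitude of the reported 807x
```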
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yuan; Ning, Chuangang, E-mail: ningcg@tsinghua.edu.cn
2015-10-14
Recently, the development of photoelectron velocity map imaging has made it much easier to obtain photoelectron angular distributions (PADs) experimentally. However, explanations of PADs are only qualitative in most cases, and very limited work has been reported on how to calculate the PAD of anions. In the present work, we report a method using density-functional-theory Kohn-Sham orbitals to calculate the photodetachment cross sections and the anisotropy parameter β. The spherical average over all random molecular orientations is calculated analytically. A program which can handle both Gaussian-type and Slater-type orbitals has been coded. Test calculations on Li⁻, C⁻, O⁻, F⁻, CH⁻, OH⁻, NH₂⁻, O₂⁻, and S₂⁻ show that our method is an efficient way to calculate the photodetachment cross section and anisotropy parameter β for anions, and is thus promising for large systems.
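The anisotropy parameter β fixes the angular shape of the PAD via I(θ) ∝ (σ/4π)[1 + βP₂(cosθ)]; a minimal sketch of the distributions it describes (not the paper's cross-section calculation itself):

```python
import numpy as np

theta = np.linspace(0.0, np.pi, 7)               # angle from laser polarization
for beta in (-1.0, 0.0, 2.0):                    # physical range: -1 <= beta <= 2
    P2 = 0.5 * (3 * np.cos(theta)**2 - 1)        # second Legendre polynomial
    intensity = 1.0 + beta * P2                  # relative PAD (sigma/4pi omitted)
    print(f"beta={beta:+.0f}:", np.round(intensity, 2))
```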
NASA Astrophysics Data System (ADS)
Sokolova, Tatiana S.; Dorogokupets, Peter I.; Dymshits, Anna M.; Danilov, Boris S.; Litasov, Konstantin D.
2016-09-01
We present Microsoft Excel spreadsheets for calculation of thermodynamic functions and P-V-T properties of MgO, diamond and 9 metals, Al, Cu, Ag, Au, Pt, Nb, Ta, Mo, and W, depending on temperature and volume or temperature and pressure. The spreadsheets include the most common pressure markers used in in situ experiments with diamond anvil cell and multianvil techniques. The calculations are based on the equation of state formalism via the Helmholtz free energy. The program was developed using Visual Basic for Applications in Microsoft Excel and is a time-efficient tool to evaluate volume, pressure and other thermodynamic functions using T-P and T-V data only as input parameters. This application is aimed to solve practical issues of high pressure experiments in geosciences and mineral physics.
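The core of the formalism is that once the Helmholtz free energy F(V, T) is parametrized, pressure follows from P = −(∂F/∂V)_T. A minimal sketch with a toy free-energy model and made-up constants, not the spreadsheet's calibrated parameters:

```python
def F(V, T, V0=11.2, K0=160.0, gamma=1.5):
    """Toy Helmholtz free energy: a Birch-Murnaghan-like cold term plus a
    crude thermal term. Constants are illustrative (loosely MgO-like),
    with V in cm^3/mol, K0 in GPa, F in GPa*cm^3/mol."""
    x = V / V0
    cold = 9 * K0 * V0 / 8 * (x**(-2.0 / 3.0) - 1) ** 2
    thermal = 3e-5 * gamma * T**2 / x
    return cold + thermal

def pressure(V, T, h=1e-6):
    """P = -(dF/dV)_T by central difference."""
    return -(F(V + h, T) - F(V - h, T)) / (2 * h)

for V in (10.0, 9.0, 8.0):                 # compressed volumes, cm^3/mol
    print(f"V = {V}: P = {pressure(V, 300.0):.1f} GPa")
```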
NASA Technical Reports Server (NTRS)
Sinharoy, Samar; Patton, Martin O.; Valko, Thomas M., Sr.; Weizer, Victor G.
2002-01-01
Theoretical calculations have shown that the highest-efficiency III-V multi-junction solar cells require alloy structures that cannot be grown on a lattice-matched substrate. Ever since the first demonstration of high-efficiency metamorphic single-junction 1.1 eV and 1.2 eV InGaAs solar cells by Essential Research Incorporated (ERI), interest has grown in the development of multi-junction cells of this type using graded buffer layer technology. ERI is currently developing a dual-junction 1.6 eV InGaP/1.1 eV InGaAs tandem cell (projected practical air-mass zero (AM0), one-sun efficiency of 28%, and 100-sun efficiency of 37.5%) under a Ballistic Missile Defense Organization (BMDO) SBIR Phase II program. A second ongoing research effort at ERI involves the development of a 2.1 eV AlGaInP/1.6 eV InGaAsP/1.2 eV InGaAs triple-junction concentrator tandem cell (projected practical AM0 efficiency of 36.5% under 100 suns) under an SBIR Phase II program funded by the Air Force. We are in the process of optimizing the dual-junction cell performance. In the case of the triple-junction cell, we have developed the bottom and middle cells, and are developing the layer structures needed for the top cell. A progress report is presented in this paper.
NASA Astrophysics Data System (ADS)
Nowak, Bernard; Życzkowski, Piotr; Łuczak, Rafał
2017-03-01
The authors of this article dealt with the issue of modeling the thermodynamic and thermokinetic properties (parameters) of refrigerants. Knowledge of these parameters is essential to design refrigeration equipment, to perform energy efficiency analyses, or to compare the efficiency of air refrigerators using different refrigerants. One of the refrigerants used in mine air compression refrigerators is R407C. For this refrigerant, 23 dependencies were developed, determining its thermodynamic and thermokinetic parameters in the states of saturated liquid, dry saturated vapor, superheated vapor, and subcooled liquid, and in the two-phase region. The resulting formulas are presented in Tables 2, 5, 8, 10 and 12, respectively. It should be noted that the scope of application of these formulas is wider than the range of changes of this refrigerant during normal operation of mine refrigeration equipment. The article ends with a statistical verification of the developed dependencies. For this purpose, correlation coefficients and coefficients of determination were calculated for each model, as well as absolute and relative deviations between the values given by the program REFPROP 7 (Lemmon et al., 2002) and the calculated ones. The results of these calculations are contained in Tables 14 and 15.
NASA Technical Reports Server (NTRS)
Hamilton, H. B.; Strangas, E.
1980-01-01
The conventional series motor model is discussed, as well as procedures for obtaining, by test, the parameters necessary for calculating performance and losses. The calculated results for operation from ripple-free DC are compared with observed test results, indicating approximately 5% or less error. Experimental data indicating the influence of brush shift and chopper frequency are also presented. Both factors have a significant effect on the speed and torque relationships. The losses and loss mechanisms present in a DC series motor are examined and an attempt is made to evaluate the added losses due to harmonic currents and fluxes. Findings with respect to these losses are summarized.
Optimized multisectioned acoustic liners
NASA Technical Reports Server (NTRS)
Baumeister, K. J.
1979-01-01
New calculations show that segmenting is most efficient at high frequencies with relatively long duct lengths where the attenuation is low for both uniform and segmented liners. Statistical considerations indicate little advantage in using optimized liners with more than two segments while the bandwidth of an optimized two-segment liner is shown to be nearly equal to that of a uniform liner. Multielement liner calculations show a large degradation in performance due to changes in assumed input modal structure. Computer programs are used to generate theoretical attenuations for a number of liner configurations for liners in a rectangular duct with no mean flow. Overall, the use of optimized multisectioned liners fails to offer sufficient advantage over a uniform liner to warrant their use except in low frequency single mode application.
About Losses in Pumping Generators of High-Power Electrodischarge Excimer Lasers
NASA Astrophysics Data System (ADS)
Ivanov, N. G.; Losev, V. F.
2015-04-01
Energy losses in the pumping systems of high-power discharge lasers are investigated. To estimate the losses, the discharge circuit operation was modeled and calculated using the program PSpice. Results of measurements and calculations demonstrate that the resistance of a rail gap with electric field distortion is several times that of a single-channel gap without field distortion. The difference in resistance is explained by different mechanisms of discharge burning: a diffusion mechanism in the first case and a spark mechanism in the second. The low efficiency of high-power excimer lasers (~1%) is explained by the high energy losses in the rail gap, which reach more than 50% of the initially stored energy.
NASA Technical Reports Server (NTRS)
1987-01-01
GWS takes plans for a new home and subjects them to intensive computerized analysis that performs 10,000 calculations relative to expected heat loss and heat gain, then provides specifications designed specifically for each structure as to heating, cooling, ventilation and insulation. As construction progresses, GWS inspects the work of the electrical, plumbing and insulation contractors and installs its own Smart House Radiant Barrier. On completion of the home, GWS technicians use a machine that creates a vacuum in the house and enables computer calculation of the air exchanged, a measure of energy efficiency. A key factor is the radiant barrier, borrowed from the Apollo program: an adaptation of a highly effective aluminized heat shield that serves as a radiation barrier holding in or keeping out heat, cold air and water vapor.
PREDICTING TURBINE STAGE PERFORMANCE
NASA Technical Reports Server (NTRS)
Boyle, R. J.
1994-01-01
This program was developed to predict turbine stage performance taking into account the effects of complex passage geometries. The method uses a quasi-3D inviscid-flow analysis iteratively coupled to calculated losses so that changes in losses result in changes in the flow distribution. In this manner the effects of both the geometry on the flow distribution and the flow distribution on losses are accounted for. The flow may be subsonic or shock-free transonic. The blade row may be fixed or rotating, and the blades may be twisted and leaned. This program has been applied to axial and radial turbines, and is helpful in the analysis of mixed flow machines. This program is a combination of the flow analysis programs MERIDL and TSONIC coupled to the boundary layer program BLAYER. The subsonic flow solution is obtained by a finite difference, stream function analysis. Transonic blade-to-blade solutions are obtained using information from the finite difference, stream function solution with a reduced flow factor. Upstream and downstream flow variables may vary from hub to shroud and provision is made to correct for loss of stagnation pressure. Boundary layer analyses are made to determine profile and end-wall friction losses. Empirical loss models are used to account for incidence, secondary flow, disc windage, and clearance losses. The total losses are then used to calculate stator, rotor, and stage efficiency. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370/3033 under TSS with a central memory requirement of approximately 4.5 Megs of 8 bit bytes. This program was developed in 1985.
An Efficient Multiblock Method for Aerodynamic Analysis and Design on Distributed Memory Systems
NASA Technical Reports Server (NTRS)
Reuther, James; Alonso, Juan Jose; Vassberg, John C.; Jameson, Antony; Martinelli, Luigi
1997-01-01
The work presented in this paper describes the application of a multiblock gridding strategy to the solution of aerodynamic design optimization problems involving complex configurations. The design process is parallelized using the MPI (Message Passing Interface) Standard such that it can be efficiently run on a variety of distributed memory systems ranging from traditional parallel computers to networks of workstations. Substantial improvements to the parallel performance of the baseline method are presented, with particular attention to their impact on the scalability of the program as a function of the mesh size. Drag minimization calculations at a fixed coefficient of lift are presented for a business jet configuration that includes the wing, body, pylon, aft-mounted nacelle, and vertical and horizontal tails. An aerodynamic design optimization is performed with both the Euler and Reynolds Averaged Navier-Stokes (RANS) equations governing the flow solution and the results are compared. These sample calculations establish the feasibility of efficient aerodynamic optimization of complete aircraft configurations using the RANS equations as the flow model. There still exists, however, the need for detailed studies of the importance of a true viscous adjoint method which holds the promise of tackling the minimization of not only the wave and induced components of drag, but also the viscous drag.
Robust design of microchannel cooler
NASA Astrophysics Data System (ADS)
He, Ye; Yang, Tao; Hu, Li; Li, Leimin
2005-12-01
The microchannel cooler offers a new method for cooling high-power diode lasers, with the advantages of small volume, high thermal dissipation efficiency, and low cost when mass-produced. To reduce the sensitivity of the design to manufacturing errors and other disturbances, the Taguchi method, a robust design technique, was chosen to optimize three parameters important to the cooling performance of a roof-like microchannel cooler. The hydromechanical and thermal model of the varying-section microchannel was calculated using the finite volume method in FLUENT. A special program was written to automate the design process and improve efficiency. The resulting design compromises between optimal cooling performance and robustness. This design method proves to be feasible.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
Broad Funding Opportunity Announcement Project: A team of researchers from more than 10 departments at Stanford University is collaborating to transform the way Americans interact with our energy-use data. The team built a web-based platform that collects historical electricity data, which it uses to perform a variety of experiments to learn what triggers people to respond. Experiments include new financial incentives, a calculator to understand the potential savings of efficient appliances, new Facebook interface designs, communication studies using Twitter, and educational programs with the Girl Scouts. Economic modeling is underway to better understand how results from the San Francisco Bay Area can be broadened to other parts of the country.
NASA Technical Reports Server (NTRS)
Lohmann, R. P.; Mador, R. J.
1979-01-01
An evaluation was conducted with a three-stage Vorbix duct burner to determine the performance and emissions characteristics of the concept and to refine the configuration to provide acceptable durability and operational characteristics for its use in the variable cycle engine (VCE) testbed program. The tests were conducted at representative takeoff, transonic climb, and supersonic cruise inlet conditions for the VSCE-502B study engine. The test stand, the emissions sampling and analysis equipment, and the supporting flow visualization rigs are described. The performance parameters, including fuel-air ratio, combustion efficiency/exit temperature, thrust efficiency, and gaseous emissions calculations, are defined. The test procedures are reviewed and the results are discussed.
NASA Astrophysics Data System (ADS)
Wu, Dongmei; Wang, Zhongcheng
2006-03-01
According to Mickens [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563], the general HB (harmonic balance) method is an approximation to the convergent Fourier series representation of the periodic solution of a nonlinear oscillator and not an approximation to an expansion in terms of a small parameter. Consequently, for a nonlinear undamped Duffing equation with a driving force Bcos(ωx), to find a periodic solution when the fundamental frequency is identical to ω, the corresponding Fourier series can be written as ỹ(x) = ∑_{n=1}^{m} a_{2n-1}cos[(2n-1)ωx]. How to calculate the coefficients of the Fourier series efficiently with a computer program is still an open problem. In the HB method, by substituting the approximation ỹ(x) into the force equation, expanding the resulting expression into a trigonometric series, and then letting the coefficients of the resulting lowest-order harmonic be zero, one can obtain approximate coefficients of the approximation ỹ(x) [R.E. Mickens, Comments on a Generalized Galerkin's method for non-linear oscillators, J. Sound Vib. 118 (1987) 563]. But for nonlinear differential equations such as the Duffing equation, it is very difficult to construct higher-order analytical approximations, because the HB method requires solving a set of algebraic equations for a large number of unknowns with very complex nonlinearities. To overcome this difficulty, forty years ago Urabe derived a computational method for the Duffing equation based on the Galerkin procedure [M. Urabe, A. Reiter, Numerical computation of nonlinear forced oscillations by Galerkin's procedure, J. Math. Anal. Appl. 14 (1966) 107-140]. Dooren obtained an approximate solution of the Duffing oscillator with a special set of parameters by using Urabe's method [R. van Dooren, Stabilization of Cowell's classic finite difference method for numerical integration, J. Comput. Phys. 16 (1974) 186-192]. In this paper, in the frame of the general HB method, we present a new iteration algorithm to calculate the coefficients of the Fourier series. With this new method, the iteration procedure starts with a_1cos(ωx)+b_1sin(ωx), and the accuracy may be improved gradually as new coefficients a_3, a_5, … are produced automatically in a one-by-one manner. At every stage of the calculation, we need only solve a cubic equation. Using this new algorithm, we have developed a Mathematica program that demonstrates the following main advantages over the previous HB method: (1) it avoids solving a set of associated nonlinear equations; (2) it is easier to implement in a computer program, and produces a highly accurate solution with an analytical expression efficiently. It is interesting to find that, generally, for a given set of parameters, a nonlinear Duffing equation can have three independent oscillation modes. For some sets of parameters, it can have two modes with complex displacement and one with real displacement; in other cases, it can have three modes, all of them having real displacement. The parameters can therefore be divided into two classes according to the solution property: those with only one mode with real displacement and those with three modes with real displacement. This program should be useful for studying the dynamically periodic behavior of a Duffing oscillator and can provide an approximate analytical solution of high accuracy for testing the error behavior of newly developed numerical methods over a wide range of parameters.
Program summary. Title of program: AnalyDuffing.nb. Catalogue identifier: ADWR_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWR_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: none. Computers: IBM PC; the program has been designed for microcomputers and tested on a microcomputer. Operating systems under which the program has been tested: Windows XP. Programming language used: Mathematica 4.2, 5.0 and 5.1. No. of lines in distributed program, including test data, etc.: 23 663. No. of bytes in distributed program, including test data, etc.: 152 321. Distribution format: tar.gz. Memory required to execute with typical data: 51 712 bytes. No. of processors used: 1. Has the code been vectorized?: no. Peripherals used: none. Program Library subprograms used: none. Nature of physical problem: to find an approximate solution with analytical expressions for the undamped nonlinear Duffing equation with periodic driving force when the fundamental frequency is identical to the driving frequency. Method of solution: within the framework of the general HB method, a new iteration algorithm calculates the coefficients of the Fourier series, yielding a highly accurate approximate analytical solution efficiently. Restrictions on the complexity of the problem: for problems with a large driving frequency, convergence may be somewhat slow because more iterations are needed. Typical running time: several seconds. Unusual features of the program: for an undamped Duffing equation, it can provide, efficiently and to the required accuracy, all the solutions or oscillation modes with real displacement for any parameters of interest. The program can be used to study the periodic dynamical behavior of a nonlinear oscillator and can provide a high-accuracy approximate analytical solution for developing high-accuracy numerical methods.
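To make the "one cubic per stage" point concrete: at the lowest order of harmonic balance for a forced undamped Duffing equation of the assumed form y'' + αy + βy³ = B cos(ωx), substituting ỹ = a₁cos(ωx) and balancing the cos(ωx) terms gives (3/4)βa₁³ + (α − ω²)a₁ − B = 0. A minimal Python sketch of that first step only (illustrative; the published program is in Mathematica and implements the full iteration):

import numpy as np

def hb_first_harmonic(alpha, beta, B, omega):
    """Lowest-order harmonic balance for y'' + alpha*y + beta*y**3 = B*cos(omega*x).
    Balancing the cos(omega*x) terms yields the cubic
    (3/4)*beta*a**3 + (alpha - omega**2)*a - B = 0; each real root is one mode."""
    roots = np.roots([0.75 * beta, 0.0, alpha - omega**2, -B])
    return [r.real for r in roots if abs(r.imag) < 1e-12]

# Example parameters for which three real-displacement modes coexist.
print(hb_first_harmonic(alpha=1.0, beta=1.0, B=0.2, omega=1.5))

The up-to-three real roots of this cubic correspond to the independent oscillation modes discussed in the abstract.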
A Cost-Effectiveness Analysis of Community Health Workers in Mozambique.
Bowser, Diana; Okunogbe, Adeyemi; Oliveras, Elizabeth; Subramanian, Laura; Morrill, Tyler
2015-10-01
Community health worker (CHW) programs are a key strategy for reducing mortality and morbidity. Despite this, there is a gap in the literature on the cost and cost-effectiveness of CHW programs, especially in developing countries. This study assessed the costs of a CHW program in Mozambique over the period 2010-2012. Incremental cost-effectiveness ratios, comparing the change in costs to the change in 3 output measures, as well as gains in efficiency, were calculated over the periods 2010-2011 and 2010-2012. The results were reported both excluding and including salaries for CHWs. The results of the study showed that total costs of the CHW program increased from US$1.34 million in 2010 to US$1.67 million in 2012. The highest incremental cost-effectiveness ratio was the cost per beneficiary covered including CHW salaries, estimated at US$47.12 for 2010-2011. The smallest incremental cost-effectiveness ratio was the cost per household visit not including CHW salaries, estimated at US$0.09 for 2010-2012. Adding CHW salaries would not only have increased total program costs by 362% in 2012 but would also have led to the largest efficiency gains in program implementation: a 56% gain in cost per output in the long run as compared with the short run after including CHW salaries. Our findings can be used to inform future CHW program policy both in Mozambique and in other countries, as well as to provide a set of incremental cost-per-output measures for benchmarking against other CHW costing analyses. © The Author(s) 2015.
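The reported figures are incremental cost-effectiveness ratios, i.e., the change in cost divided by the change in an output measure between two periods. A minimal sketch; the cost totals come from the abstract, while the beneficiary counts are invented placeholders chosen so the ratio lands near the reported US$47.12:

def icer(cost_0, cost_1, output_0, output_1):
    """Incremental cost-effectiveness ratio: incremental cost per incremental output."""
    return (cost_1 - cost_0) / (output_1 - output_0)

# Program costs (USD) from the study; beneficiary counts are hypothetical.
print(icer(1.34e6, 1.67e6, 500_000, 507_000))  # ~47.1 USD per additional beneficiary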
A computer program for two-particle generalized coefficients of fractional parentage
NASA Astrophysics Data System (ADS)
Deveikis, A.; Juodagalvis, A.
2008-10-01
We present a FORTRAN90 program GCFP for the calculation of the generalized coefficients of fractional parentage (generalized CFPs or GCFP). The approach is based on the observation that the multi-shell CFPs can be expressed in terms of single-shell CFPs, while the latter can be readily calculated employing a simple enumeration scheme of antisymmetric A-particle states and an efficient method of construction of the idempotent matrix eigenvectors. The program provides fast calculation of GCFPs for a given particle number and produces results possessing numerical uncertainties below the desired tolerance. A single j-shell is defined by four quantum numbers, (e,l,j,t). A supplemental C++ program parGCFP allows calculations to be done in batches and/or in parallel. Program summary: Program title: GCFP, parGCFP Catalogue identifier: AEBI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 17 199 No. of bytes in distributed program, including test data, etc.: 88 658 Distribution format: tar.gz Programming language: FORTRAN 77/90 (GCFP), C++ (parGCFP) Computer: Any computer with suitable compilers. The program GCFP requires a FORTRAN 77/90 compiler. The auxiliary program parGCFP requires a GNU-C++ compatible compiler, while its parallel version additionally requires MPI-1 standard libraries Operating system: Linux (Ubuntu, Scientific) (all programs), also checked on Windows XP (GCFP, serial version of parGCFP) RAM: The memory demand depends on the computation and output mode. If this mode is not 4, the program GCFP demands the following amounts of memory on a computer with a Linux operating system. It requires around 2 MB of RAM for the A=12 system at E⩽2. Computation of the A=50 particle system requires around 60 MB of RAM at E=0 and ~70 MB at E=2 (note, however, that the calculation of this system will take a very long time). If the computation and output mode is set to 4, the memory demands by GCFP are significantly larger. Calculation of GCFPs of the A=12 system at E=1 requires 145 MB. The program parGCFP requires an additional 2.5 and 4.5 MB of memory for the serial and parallel versions, respectively. Classification: 17.18 Nature of problem: The program GCFP generates a list of two-particle coefficients of fractional parentage for several j-shells with isospin. Solution method: The method is based on the observation that multishell coefficients of fractional parentage can be expressed in terms of single-shell CFPs [1]. The latter are calculated using the algorithm [2,3] for a spectral decomposition of an antisymmetrization operator matrix Y. The coefficients of fractional parentage are those eigenvectors of the antisymmetrization operator matrix Y that correspond to unit eigenvalues. A computer code for these coefficients is available [4]. The program GCFP offers computation of two-particle multishell coefficients of fractional parentage. The program parGCFP allows a batch calculation using one input file. Sets of GCFPs are independent and can be calculated in parallel. Restrictions: A<86 when E=0 (due to the memory constraints); small numbers of particles allow significantly higher excitations, though the shell with j⩾11/2 cannot get full (an implementation constraint).
Unusual features: Using the program GCFP it is possible to determine allowed particle configurations without the GCFP computation. The GCFPs can be calculated either for all particle configurations at once or for a specified particle configuration. The values of GCFPs can be printed out with a complete specification either in one file or with the parent and daughter configurations printed in separate files. The latter output mode requires additional time and RAM memory. It is possible to restrict the (J,T) values of the considered particle configurations. (Here J is the total angular momentum and T is the total isospin of the system.) The program parGCFP produces several result files, the number of which equals the number of particle configurations. To work correctly, the program GCFP needs to be compiled to read parameters from the standard input (the default setting). Running time: It depends on the size of the problem. The minimum time is required if the computation and output mode (CompMode) is not 4, but the resulting file is larger. A system with A=12 particles at E=0 (all 9411 GCFPs) took around 1 sec on a Pentium4 2.8 GHz processor with 1 MB L2 cache. The program required about 14 min to calculate all 1.3×10 GCFPs of E=1. The time for all 5.5×10 GCFPs of E=2 was about 53 hours. For this number of particles, the calculation time of both E=0 and E=1 with CompMode = 1 and 4 is nearly the same, when no other processes are running. The case of E=2 could not be calculated with CompMode = 4, because the RAM memory was insufficient. In general, the latter CompMode requires a longer computation time, although the resulting files are smaller in size. The program parGCFP adds virtually no time overhead. Its parallel version speeds up the calculation; however, the results need to be collected from several files created for each configuration. References: [1] J. Levinsonas, Works of Lithuanian SSR Academy of Sciences 4 (1957) 17. [2] A. Deveikis, A. Bončkus, R. Kalinauskas, Lithuanian Phys. J. 41 (2001) 3. [3] A. Deveikis, R.K. Kalinauskas, B.R. Barrett, Ann. Phys. 296 (2002) 287. [4] A. Deveikis, Comput. Phys. Comm. 173 (2005) 186. (CPC Catalogue ID. ADWI_v1_0)
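The solution method above comes down to extracting the eigenvectors of the antisymmetrization-operator matrix Y that belong to unit eigenvalues. A minimal numpy sketch of that final step, assuming a symmetric idempotent Y has already been built (constructing Y is the program-specific part):

import numpy as np

def unit_eigenvectors(Y, tol=1e-8):
    """Eigenvectors of a symmetric idempotent matrix Y with eigenvalue 1;
    for an antisymmetrizer these columns play the role of the CFPs."""
    vals, vecs = np.linalg.eigh(Y)       # Y is symmetric, so eigh applies
    return vecs[:, np.abs(vals - 1.0) < tol]

# Toy idempotent Y: orthogonal projector onto a random 2-dimensional subspace.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(6, 2)))
Y = Q @ Q.T                              # eigenvalues: 1, 1, 0, 0, 0, 0
print(unit_eigenvectors(Y).shape)        # -> (6, 2)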
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldman, Charles A.; Stuart, Elizabeth; Hoffman, Ian
2011-02-25
Since the spring of 2009, billions of federal dollars have been allocated to state and local governments as grants for energy efficiency and renewable energy projects and programs. The scale of this American Reinvestment and Recovery Act (ARRA) funding, focused on 'shovel-ready' projects to create and retain jobs, is unprecedented. Thousands of newly funded players - cities, counties, states, and tribes - and thousands of programs and projects are entering the existing landscape of energy efficiency programs for the first time or expanding their reach. The nation's experience base with energy efficiency is growing enormously, fed by federal dollars and driven by broader objectives than saving energy alone. State and local officials made countless choices in developing portfolios of ARRA-funded energy efficiency programs and deciding how their programs would relate to existing efficiency programs funded by utility customers. Those choices are worth examining as bellwethers of a future world where there may be multiple program administrators and funding sources in many states. What are the opportunities and challenges of this new environment? What short- and long-term impacts will this large infusion of funds have on utility customer-funded programs; for example, on infrastructure for delivering energy efficiency services or on customer willingness to invest in energy efficiency? To what extent has the attribution of energy savings been a critical issue, especially where administrators of utility customer-funded energy efficiency programs have performance or shareholder incentives? Do the new ARRA-funded energy efficiency programs provide insights on roles or activities that are particularly well-suited to state and local program administrators vs. administrators or implementers of utility customer-funded programs? The answers could have important implications for the future of U.S. energy efficiency. This report focuses on a selected set of ARRA-funded energy efficiency programs administered by state energy offices: the State Energy Program (SEP) formula grants, the portion of Energy Efficiency and Conservation Block Grant (EECBG) formula funds administered directly by states, and the State Energy Efficient Appliance Rebate Program (SEEARP). Since these ARRA programs devote significant monies to energy efficiency and serve similar markets as utility customer-funded programs, there are frequent interactions between programs. We exclude the DOE low-income weatherization program and EECBG funding awarded directly to the over 2,200 cities, counties and tribes from our study to keep its scope manageable. We summarize the energy efficiency program design and funding choices made by the 50 state energy offices, 5 territories and the District of Columbia. We then focus on the specific choices made in 12 case study states. These states were selected based on the level of utility customer program funding, diversity of program administrator models, and geographic diversity. Based on interviews with more than 80 energy efficiency actors in those 12 states, we draw observations about states' strategies for use of Recovery Act funds. We examine interactions between ARRA programs and utility customer-funded energy efficiency programs in terms of program planning, program design and implementation, policy issues, and potential long-term impacts.
We consider how the existing regulatory policy framework and energy efficiency programs in these 12 states may have impacted development of these selected ARRA programs. Finally, we summarize key trends and highlight issues that evaluators of these ARRA programs may want to examine in more depth in their process and impact evaluations.
NASA Technical Reports Server (NTRS)
Pickett, G. F.; Wells, R. A.; Love, R. A.
1977-01-01
A computer user's manual describing the operation and the essential features of the Modal Calculation Program is presented. The Modal Calculation Program calculates the amplitude and phase of modal structures from acoustic pressure measurements obtained with microphones placed at selected locations within the fan inlet duct. In addition, the program calculates the first-order errors in the modal coefficients that are due to tolerances in microphone location coordinates and inaccuracies in the acoustic pressure measurements.
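A generic way to phrase the underlying computation (a sketch under assumed conventions, not the manual's documented algorithm): if the mode shapes evaluated at the microphone locations are collected in a matrix, the complex modal coefficients follow from a linear least-squares fit, and the amplitude and phase are the modulus and argument of each coefficient; first-order error estimates can then be obtained by perturbing the inputs. All names below are hypothetical:

import numpy as np

def modal_coefficients(Phi, p):
    """Least-squares fit p ~ Phi @ c for complex modal coefficients c,
    where Phi is the (microphones x modes) mode-shape matrix and p the
    measured complex pressures; returns per-mode amplitude and phase."""
    c, *_ = np.linalg.lstsq(Phi, p, rcond=None)
    return np.abs(c), np.angle(c)

# Toy example: 8 microphones, 3 assumed duct modes, noise-free pressures.
rng = np.random.default_rng(1)
Phi = rng.normal(size=(8, 3)) + 1j * rng.normal(size=(8, 3))
c_true = np.array([1.0, 0.5 * np.exp(0.3j), 0.2j])
amps, phases = modal_coefficients(Phi, Phi @ c_true)
print(amps, phases)   # recovers |c_true| and arg(c_true)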
43 CFR Appendix A to Part 418 - Calculation of Efficiency Equation
Code of Federal Regulations, 2011 CFR
2011-10-01
43 CFR Part 418, Appendix A: Calculation of Efficiency Equation (Public Lands: Interior; Regulations Relating to Public Lands, Bureau of Reclamation). The efficiency equation appears in the CFR as graphics ER18DE97.008 and ER18DE97.009.
43 CFR Appendix A to Part 418 - Calculation of Efficiency Equation
Code of Federal Regulations, 2010 CFR
2010-10-01
43 CFR Part 418, Appendix A: Calculation of Efficiency Equation (Public Lands: Interior; Regulations Relating to Public Lands, Bureau of Reclamation). The efficiency equation appears in the CFR as graphics ER18DE97.008 and ER18DE97.009.
García-Jacas, César R; Marrero-Ponce, Yovani; Acevedo-Martínez, Liesner; Barigye, Stephen J; Valdés-Martiní, José R; Contreras-Torres, Ernesto
2014-07-05
The present report introduces the QuBiLS-MIDAS software, belonging to the ToMoCoMD-CARDD suite, for the calculation of three-dimensional molecular descriptors (MDs) based on two-linear (bilinear), three-linear, and four-linear (multilinear or N-linear) algebraic forms; it is thus unique software that computes these tensor-based indices. The descriptors establish relations among two, three, and four atoms by using several (dis-)similarity metrics or multimetrics, matrix transformations, cutoffs, local calculations, and aggregation operators. The theoretical background of these N-linear indices is also presented. The QuBiLS-MIDAS software was developed in the Java programming language and employs the Chemistry Development Kit library for the manipulation of chemical structures and the calculation of atomic properties. The software comprises a user-friendly desktop interface and an Application Programming Interface (API) library; the former simplifies the configuration of the different options of the MDs, whereas the library is designed for easy integration into other chemoinformatics software. The program provides functionality for data-cleaning tasks and for batch processing of the molecular indices. In addition, it offers parallel calculation of the MDs using all available processors on current computers. Complexity analyses of the main algorithms demonstrate that they were implemented efficiently relative to their trivial implementations. Lastly, performance tests reveal that the software scales suitably as the number of processors is increased. The QuBiLS-MIDAS software therefore constitutes a useful application for the computation of molecular indices based on N-linear algebraic maps, and it can be used freely to perform chemoinformatics studies. Copyright © 2014 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Xu, Weimin; Chen, Shi; Lu, Hongyan
2016-04-01
Integrated gravity observation is an efficient way to study the spatial and temporal characteristics of regional dynamics and tectonics. Differential measurements based on combined continuous and discrete gravity observations are highly competitive, in both efficiency and precision, with single-instrument results. The differential continuous gravity variation between nearby stations is based on observations from Scintrex g-Phone relative gravimeters at each station; it is combined with repeated mobile relative measurements or absolute results to study regional integrated gravity changes. We first preprocess the continuous records with the Tsoft software and calculate theoretical earth tides and ocean tides with the MT80TW program, using high-precision tidal parameters from WPARICET. Atmospheric loading effects and complex instrument drift are treated rigorously in the procedure. These steps yield the continuous gravity at each station, from which we calculate the continuous gravity variation between nearby stations, called the differential continuous gravity change. The differential results between related stations are then calculated from the repeated gravity measurements, which are carried out once or twice per year around the gravity stations, giving discrete gravity results between nearby stations. Finally, the continuous and discrete results for the same station pairs are combined, including absolute gravity results where necessary, to obtain the regional integrated gravity changes. These differential gravity results are more accurate and effective for dynamical monitoring, regional hydrologic studies, tectonic activity, and other geodynamical research. The time-frequency characteristics of the continuous gravity results are examined to ensure the accuracy and efficiency of the procedure.
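The final differencing step is simply a time-aligned subtraction of the two corrected station series. A minimal sketch with hypothetical data, assuming the tide, atmospheric-loading, and drift corrections described above have already been applied:

import pandas as pd

def differential_gravity(g_a, g_b):
    """Differential continuous gravity between two nearby stations:
    subtract corrected series on their common timestamps."""
    a, b = g_a.align(g_b, join="inner")
    return a - b

# Hypothetical corrected hourly series (microGal) for two stations.
t = pd.date_range("2016-01-01", periods=4, freq="h")
g_a = pd.Series([10.0, 11.0, 12.0, 13.0], index=t)
g_b = pd.Series([9.5, 10.2, 11.1, 12.3], index=t)
print(differential_gravity(g_a, g_b))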
Texture functions in image analysis: A computationally efficient solution
NASA Technical Reports Server (NTRS)
Cox, S. C.; Rose, J. F.
1983-01-01
A computationally efficient means for calculating texture measurements from digital images by use of the co-occurrence technique is presented. The calculation of the statistical descriptors of image texture and a solution that circumvents the need for calculating and storing a co-occurrence matrix are discussed. The results show that existing efficient algorithms for calculating sums, sums of squares, and cross products can be used to compute complex co-occurrence relationships directly from the digital image input.
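The key idea is that pair statistics can be accumulated directly from co-occurring pixel values, so the co-occurrence matrix itself is never formed. A sketch of that approach for two common descriptors (contrast, and correlation from sums, squares, and cross products); the descriptor set in the paper may differ:

import numpy as np

def cooccurrence_stats(img, dx=1, dy=0):
    """Texture statistics at displacement (dx, dy) computed directly from
    co-occurring pixel pairs, without building a co-occurrence matrix:
    contrast = E[(i - j)^2]; correlation from sums/squares/cross products."""
    h, w = img.shape
    a = img[0:h - dy, 0:w - dx].astype(float)   # first pixel of each pair
    b = img[dy:h, dx:w].astype(float)           # displaced partner
    contrast = np.mean((a - b) ** 2)
    cov = np.mean(a * b) - a.mean() * b.mean()  # cross-product term
    return contrast, cov / (a.std() * b.std())

img = np.array([[0, 1, 2], [1, 2, 3], [2, 3, 4]])
print(cooccurrence_stats(img, dx=1, dy=0))      # (1.0, 1.0) for this ramp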
NWChem: A comprehensive and scalable open-source solution for large scale molecular simulations
NASA Astrophysics Data System (ADS)
Valiev, M.; Bylaska, E. J.; Govind, N.; Kowalski, K.; Straatsma, T. P.; Van Dam, H. J. J.; Wang, D.; Nieplocha, J.; Apra, E.; Windus, T. L.; de Jong, W. A.
2010-09-01
The latest release of NWChem delivers an open-source computational chemistry package with extensive capabilities for large-scale simulations of chemical and biological systems. Utilizing a common computational framework, diverse theoretical descriptions can be used to provide the best solution for a given scientific problem. Scalable parallel implementations and modular software design enable efficient utilization of current computational architectures. This paper provides an overview of NWChem focusing primarily on the core theoretical modules provided by the code and their parallel performance. Program summary: Program title: NWChem Catalogue identifier: AEGI_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Open Source Educational Community License No. of lines in distributed program, including test data, etc.: 11 709 543 No. of bytes in distributed program, including test data, etc.: 680 696 106 Distribution format: tar.gz Programming language: Fortran 77, C Computer: all Linux-based workstations and parallel supercomputers, Windows and Apple machines Operating system: Linux, OS X, Windows Has the code been vectorised or parallelized?: Code is parallelized Classification: 2.1, 2.2, 3, 7.3, 7.7, 16.1, 16.2, 16.3, 16.10, 16.13 Nature of problem: Large-scale atomistic simulations of chemical and biological systems require efficient and reliable methods for ground and excited solutions of the many-electron Hamiltonian, analysis of the potential energy surface, and dynamics. Solution method: Ground and excited solutions of the many-electron Hamiltonian are obtained utilizing density-functional theory, the many-body perturbation approach, and coupled cluster expansion. These solutions, or a combination thereof with classical descriptions, are then used to analyze the potential energy surface and perform dynamical simulations. Additional comments: Full documentation is provided in the distribution file. This includes an INSTALL file giving details of how to build the package. A set of test runs is provided in the examples directory. The distribution file for this program is over 90 Mbytes and therefore is not delivered directly when a download or Email is requested. Instead an HTML file giving details of how the program can be obtained is sent. Running time: Running time depends on the size of the chemical system, the complexity of the method, the number of CPUs, and the computational task. It ranges from several seconds for serial DFT energy calculations on a few atoms to several hours for parallel coupled cluster energy calculations on tens of atoms or ab initio molecular dynamics simulation on hundreds of atoms.
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
Excerpt: reports must identify each rolling period when the overall control efficiency of the control system is calculated to be less than 81%, together with the initial material balance calculation.
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2011 CFR
2011-07-01
Excerpt: reports must identify each rolling period when the overall control efficiency of the control system is calculated to be less than 81%, together with the initial material balance calculation.
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
Excerpt: reports must identify each rolling period when the overall control efficiency of the control system is calculated to be less than 81%, together with the initial material balance calculation.
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
Excerpt: reports must identify each rolling period when the overall control efficiency of the control system is calculated to be less than 81%, together with the initial material balance calculation.
40 CFR 63.753 - Reporting requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
Excerpt: reports must identify each rolling period when the overall control efficiency of the control system is calculated to be less than 81%, together with the initial material balance calculation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Han, S; Ji, Y; Kim, K
Purpose: A diagnostic multileaf collimator (MLC) was designed for dose reduction in diagnostic radiography. Monte Carlo simulation was used to evaluate the efficiency of the shielding material used to produce the MLC leaves. Material & Methods: A general radiography unit (Rex-650R, Listem, Korea) was modeled with Monte Carlo simulation (MCNPX, LANL, USA), and the SRS-78 program was used to calculate the energy spectrum at each tube voltage (80, 100, 120 kVp). The shielding material was SKD 11 alloy tool steel, composed of 1.6% carbon (C), 0.4% silicon (Si), 0.6% manganese (Mn), 5% chromium (Cr), 1% molybdenum (Mo), and vanadium (V); its density was 7.89 g/cm3. We simulated the leaves of the diagnostic MLC made of SKD 11 together with the general radiography unit, and we calculated the efficiency of the diagnostic MLC as a function of energy using the tally 6 card of MCNPX. Results: The diagnostic MLC consisted of 25 individual metal shielding leaves on each side, with dimensions of 10 × 0.5 × 0.5 cm3. The leaves of the MLC were controlled by motors positioned on both sides of the MLC. Depending on the energy (tube voltage), the shielding efficiency of the MLC in the Monte Carlo simulation was 99% (80 kVp), 96% (100 kVp), and 93% (120 kVp). Conclusion: We verified the efficiency of a diagnostic MLC fabricated from SKD 11 alloy tool steel, and the diagnostic MLC was designed on the basis of these results. We will build the diagnostic MLC for dose reduction in diagnostic radiography.
Code of Federal Regulations, 2010 CFR
2010-10-01
§ 420.410: Establishment of a program to collect suggestions for improving Medicare program efficiency and to reward suggesters for monetary savings (subpart on Programs to Collect Suggestions for Improving Medicare Program Efficiency and to Reward Suggesters for Monetary Savings).
Code of Federal Regulations, 2011 CFR
2011-10-01
§ 420.410: Establishment of a program to collect suggestions for improving Medicare program efficiency and to reward suggesters for monetary savings (subpart on Programs to Collect Suggestions for Improving Medicare Program Efficiency and to Reward Suggesters for Monetary Savings).
Hill, Mary C.; Banta, E.R.; Harbaugh, A.W.; Anderman, E.R.
2000-01-01
This report documents the Observation, Sensitivity, and Parameter-Estimation Processes of the ground-water modeling computer program MODFLOW-2000. The Observation Process generates model-calculated values for comparison with measured, or observed, quantities. A variety of statistics is calculated to quantify this comparison, including a weighted least-squares objective function. In addition, a number of files are produced that can be used to compare the values graphically. The Sensitivity Process calculates the sensitivity of hydraulic heads throughout the model with respect to specified parameters using the accurate sensitivity-equation method. These are called grid sensitivities. If the Observation Process is active, it uses the grid sensitivities to calculate sensitivities for the simulated values associated with the observations. These are called observation sensitivities. Observation sensitivities are used to calculate a number of statistics that can be used (1) to diagnose inadequate data, (2) to identify parameters that probably cannot be estimated by regression using the available observations, and (3) to evaluate the utility of proposed new data. The Parameter-Estimation Process uses a modified Gauss-Newton method to adjust values of user-selected input parameters in an iterative procedure to minimize the value of the weighted least-squares objective function. Statistics produced by the Parameter-Estimation Process can be used to evaluate estimated parameter values; statistics produced by the Observation Process and post-processing program RESAN-2000 can be used to evaluate how accurately the model represents the actual processes; statistics produced by post-processing program YCINT-2000 can be used to quantify the uncertainty of model simulated values. Parameters are defined in the Ground-Water Flow Process input files and can be used to calculate most model inputs, such as: for explicitly defined model layers, horizontal hydraulic conductivity, horizontal anisotropy, vertical hydraulic conductivity or vertical anisotropy, specific storage, and specific yield; and, for implicitly represented layers, vertical hydraulic conductivity. In addition, parameters can be defined to calculate the hydraulic conductance of the River, General-Head Boundary, and Drain Packages; areal recharge rates of the Recharge Package; maximum evapotranspiration of the Evapotranspiration Package; pumpage or the rate of flow at defined-flux boundaries of the Well Package; and the hydraulic head at constant-head boundaries. The spatial variation of model inputs produced using defined parameters is very flexible, including interpolated distributions that require the summation of contributions from different parameters. Observations can include measured hydraulic heads or temporal changes in hydraulic heads, measured gains and losses along head-dependent boundaries (such as streams), flows through constant-head boundaries, and advective transport through the system, which generally would be inferred from measured concentrations. MODFLOW-2000 is intended for use on any computer operating system. The program consists of algorithms programmed in Fortran 90, which efficiently performs numerical calculations and is fully compatible with the newer Fortran 95. The code is easily modified to be compatible with FORTRAN 77. Coordination for multiple processors is accommodated using Message Passing Interface (MPI) commands. 
The program is designed in a modular fashion that is intended to support inclusion of new capabilities.
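At its core, the Parameter-Estimation Process is Gauss-Newton iteration on the weighted least-squares objective S(b) = Σᵢ wᵢ(yᵢ − y'ᵢ(b))², with the observation sensitivities serving as the Jacobian. A bare-bones sketch of that idea, omitting MODFLOW-2000's modifications (damping, scaling, convergence logic):

import numpy as np

def gauss_newton(simulate, jacobian, y_obs, w, b0, iters=20, tol=1e-10):
    """Minimize S(b) = sum_i w_i*(y_obs_i - simulate(b)_i)**2 by
    Gauss-Newton; jacobian(b) returns the sensitivity matrix d(sim)/db."""
    b = np.asarray(b0, dtype=float)
    W = np.diag(w)
    for _ in range(iters):
        r = y_obs - simulate(b)                       # residuals
        X = jacobian(b)                               # observation sensitivities
        db = np.linalg.solve(X.T @ W @ X, X.T @ W @ r)
        b += db
        if np.max(np.abs(db)) < tol:
            break
    return b

# Toy linear "model": y = b0 + b1*x, so one iteration recovers the truth.
x = np.linspace(0.0, 1.0, 5)
simulate = lambda b: b[0] + b[1] * x
jacobian = lambda b: np.column_stack([np.ones_like(x), x])
print(gauss_newton(simulate, jacobian, simulate(np.array([2.0, -3.0])),
                   np.ones_like(x), [0.0, 0.0]))      # -> [ 2. -3.]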
NASA Astrophysics Data System (ADS)
Sanna, N.; Morelli, G.
2004-09-01
In this paper we present the new version of the SCELib program (CPC Catalogue identifier ADMG), a full numerical implementation of the Single Center Expansion (SCE) method. The physics involved is that of producing the SCE description of molecular electronic densities, of molecular electrostatic potentials, and of molecular perturbed potentials due to a negative or positive point charge. This new revision of the program has been optimized to run in serial as well as parallel execution mode, to support a larger set of molecular symmetries, and to permit the restart of long-lasting calculations. To measure the performance of this new release, a comparative study has been carried out on the most powerful computing architectures in serial and parallel runs. The calculations reported in this paper refer to realistic medium-to-large molecular systems, and they are reported in full detail to best benchmark the parallel architectures on which the new SCELib code will run. Program summary: Title of program: SCELib2 Catalogue identifier: ADGU Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADGU Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Reference to previous versions: Comput. Phys. Commun. 128 (2) (2000) 139 (CPC catalogue identifier: ADMG) Does the new version supersede the original program?: Yes Computer for which the program is designed and others on which it has been tested: HP ES45 and rx2600, SUN ES4500, IBM SP and any single-CPU workstation based on Alpha, SPARC, POWER, Itanium2 and X86 processors Installations: CASPUR, local Operating systems under which the program has been tested: HP Tru64 V5.X, SUNOS V5.8, IBM AIX V5.X, Linux RedHat V8.0 Programming language used: C Memory required to execute with typical data: 10 Mwords. Up to 2000 Mwords depending on the molecular system and runtime parameters No. of bits in a word: 64 No. of processors used: 1 to 32 Has the code been vectorized or parallelized?: Yes No. of bytes in distributed program, including test data, etc.: 3 798 507 No. of lines in distributed program, including test data, etc.: 187 226 Distribution format: tar.gz Nature of physical problem: In this set of codes an efficient procedure is implemented to describe the wavefunction and related molecular properties of a polyatomic molecular system within the Single Center of Expansion (SCE) approximation. The resulting SCE wavefunction, electron density, electrostatic and exchange/correlation potentials can then be used, via a proper Application Programming Interface (API), to describe the target molecular system in electron-molecule scattering calculations. The molecular properties expanded over a single center turn out to be of more general application, and some possible uses in quantum chemistry, biomodelling and drug design are also outlined. Method of solution: The polycentre Hartree-Fock solution for a molecule of arbitrary geometry, based on a linear combination of Gaussian-Type Orbitals (GTO), is expanded over a single center, typically the Center Of Mass (C.O.M.), by means of a Gauss-Legendre/Chebyshev quadrature over the θ, φ angular coordinates. The resulting SCE numerical wavefunction is then used to calculate the one-particle electron density, the electrostatic potential, and two different models for the correlation/polarization potentials induced by the impinging electron, which have the correct asymptotic behaviour for the leading dipole molecular polarizabilities.
Restrictions on the complexity of the problem: Depending on the molecular system under study and on the operating conditions, the program may or may not fit into the available RAM. In the latter case, a feature of the program is to memory-map a disk file in order to access the data efficiently through the disk device. Typical running time: The execution time strongly depends on the molecular target description and on the hardware/OS chosen; it is directly proportional to the (r, θ, φ) grid size and to the number of angular basis functions used. Thus, from the program printout of the main arrays' memory occupancy, the user can approximately derive the expected computer time needed for a given calculation executed in serial mode. For parallel executions the overall efficiency must also be taken into account, which depends on the number of processors used as well as on the parallel architecture chosen, so no simple general law can at present be given. Unusual features of the program: The code has been engineered to use dynamic, runtime-determined global parameters with the aim of fitting all the data in RAM. Some unusual circumstances, e.g., large values of those parameters, may cause the program to run with unexpected performance reductions due to runtime bottlenecks such as memory swap operations, which depend strongly on the hardware used. In such cases, a parallel execution of the code is generally sufficient to fix the problem, since the data set is partitioned over the available processors. When a suitable parallel system is not available for execution, a memory-mapped file mechanism can be used; with this option on, all the available memory is used as a buffer for a disk file containing the whole data set, giving better throughput than the traditional swapping/paging of the Unix OS.
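The quadrature at the heart of the method computes expansion coefficients f_lm = ∫ f(θ,φ) Y*_lm(θ,φ) dΩ with Gauss-Legendre nodes in cosθ and a uniform grid in φ. A small numpy/scipy sketch of that step alone (not SCELib2's internals, which expand GTO-based wavefunctions and run in parallel):

import numpy as np
from scipy.special import sph_harm

def sce_coefficient(f, l, m, n_theta=64, n_phi=128):
    """Single-center expansion coefficient f_lm = Int f(theta, phi) *
    conj(Y_lm) dOmega, via Gauss-Legendre quadrature in cos(theta) and
    the trapezoid rule on a uniform periodic phi grid."""
    x, w = np.polynomial.legendre.leggauss(n_theta)   # nodes/weights in cos(theta)
    theta = np.arccos(x)
    phi = 2.0 * np.pi * np.arange(n_phi) / n_phi
    T, P = np.meshgrid(theta, phi, indexing="ij")
    Y = sph_harm(m, l, P, T)            # scipy order: (m, l, azimuthal, polar)
    return np.sum(w[:, None] * (2.0 * np.pi / n_phi) * f(T, P) * np.conj(Y))

# Orthonormality check: expanding Y_10 against itself gives ~1.
f = lambda th, ph: sph_harm(0, 1, ph, th)
print(sce_coefficient(f, l=1, m=0))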
Chen, Peichen; Liu, Shih-Chia; Liu, Hung-I; Chen, Tse-Wei
2011-01-01
For quarantine sampling, it is of fundamental importance to determine the probability of finding an infestation when a specified number of units are inspected. In general, current sampling procedures assume a 100% (perfect) probability of detecting a pest if it is present within a unit. Ideally, a nematode extraction method should remove all stages of all species with 100% efficiency regardless of season, temperature, or other environmental conditions; in practice, however, no method approaches these criteria. In this study we determined the probability of detecting nematode infestations in quarantine sampling with imperfect extraction efficacy. The required sample size and the risk involved in detecting nematode infestations with imperfect extraction efficacy are also presented. Moreover, we developed a computer program to calculate confidence levels for different scenarios with varying proportions of infestation and efficacies of detection. In addition, a case study presenting the extraction efficacy of the modified Baermann's funnel method on Aphelenchoides besseyi is used to exemplify the use of our program to calculate the probability of detecting nematode infestations in quarantine sampling with imperfect extraction efficacy. The result has important implications for quarantine programs and highlights the need for a very large number of samples if perfect extraction efficacy is not achieved in such programs. We believe that the results of the study will be useful for the determination of realistic goals in the implementation of quarantine sampling. PMID:22791911
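A standard formulation of this problem (a sketch of the usual binomial model, which may differ in detail from the authors' program) treats each inspected unit as yielding a detection with probability p·e, where p is the infestation proportion and e the extraction efficacy; the confidence of at least one detection among n units is then 1 − (1 − p·e)^n:

import math

def detection_probability(p, e, n):
    """P(at least one detection) for n units, infestation proportion p,
    extraction efficacy e (perfect extraction: e = 1)."""
    return 1.0 - (1.0 - p * e) ** n

def required_sample(p, e, confidence=0.95):
    """Smallest n giving the requested confidence of detection."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p * e))

# 1% infestation: halving extraction efficacy roughly doubles the sample.
print(required_sample(0.01, 1.0), required_sample(0.01, 0.5))   # 299, 598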
DOE Office of Scientific and Technical Information (OSTI.GOV)
Henry, C.P.; Marsh, E.J.
1997-06-01
In 1990, the Governor of New York State issued Executive Order No. 132, directing all state agencies to reduce energy consumption by 20% from the 1988/89 base year by the year 2000. To assist in meeting this goal, the New York State Office of Mental Health (OMH) established the Lighting Revitalization Program in 1992. State facilities are divided into five regions, each served by existing Environmental Revitalization Teams, which OMH supplemented with lighting technicians under the new program. The program's goal was to rehabilitate outdated, inefficient lighting systems throughout 28 OMH facilities totaling 28 million square feet in area. OMH requested the former Facility Development Corporation (FDC), now the Dormitory Authority of the State of New York (DASNY), to contract with Novus Engineering to evaluate the relative efficiency of T8 and T12 ballasts. Novus contracted an independent laboratory, Eastern Testing Laboratories (ETL), for performance testing. ETL tested four ballast/lamp configurations for light output and input power, and Novus analyzed the results for relative efficiency and also calculated 25-year life cycle costs. The test results indicated that the efficiencies of the T12/34W and T8/32W ballast/lamp technologies were nearly identical. The input power and light output of these systems were similar. The lumens-per-watt ratings of the two systems were nearly equal, with the T8 technology being only about two percent more efficient, generating more light with similar input power. The life cycle costs for the two systems were nearly identical, with the T12 system providing a slightly lower life cycle cost. Given the above considerations, the agency has been installing T12 electronic ballasts and 34W lamps in buildings where fluorescent fixtures warranted upgrading. This type of retrofit goes against current trends, but the use of the T8 system could not be justified in buildings undergoing minor retrofitting.
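Both reported comparisons reduce to simple arithmetic: luminous efficacy is lumens per watt, and a life cycle cost sums capital with discounted energy costs over the study period. A sketch with invented fixture numbers, assuming a simple discounted-energy model (the report's actual test data and costing method are not reproduced here):

def efficacy(lumens, watts):
    """Luminous efficacy in lumens per watt."""
    return lumens / watts

def life_cycle_cost(capital, annual_kwh, price_per_kwh, years=25, rate=0.05):
    """Capital cost plus present value of energy costs over the period."""
    pv_energy = sum(annual_kwh * price_per_kwh / (1.0 + rate) ** t
                    for t in range(1, years + 1))
    return capital + pv_energy

# Hypothetical T12/34W vs T8/32W fixtures at similar light output.
print(efficacy(2800.0, 68.0), efficacy(2850.0, 60.0))   # lumens per watt
print(life_cycle_cost(25.0, 150.0, 0.10),               # T12-like fixture
      life_cycle_cost(40.0, 132.0, 0.10))               # T8-like fixture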
Rowan Gorilla I rigged up, heads for eastern Canada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-03-01
Designed to operate in very hostile offshore environments, the first of the Rowan Gorilla class of self-elevating drilling rigs has been towed to its drilling assignment offshore Nova Scotia. About 40% larger than other jackups, these rigs can operate in 300 ft of water, drilling holes as deep as 30,000 ft. They also feature unique high-pressure and solids-control systems that are expected to improve drilling procedures and efficiencies. A quantitative formation pressure evaluation program for the Hewlett-Packard HP-41 handheld calculator computes formation pressures by three independent methods - the corrected d-exponent, Bourgoyne and Young, and normalized penetration rate techniques - for abnormal pressure detection and computation. Based on empirically derived drilling rate equations, each of the methods can be calculated separately, without being dependent on or influenced by the results or stored data from the other two subprograms. The quantitative interpretation procedure involves establishing a normal drilling rate trend and calculating the pore pressure from the magnitude of the drilling rate trend or plotting-parameter increases above the trend line. Mobil's quick, accurate program could aid drilling operators in selecting the casing point, minimizing differential sticking, maintaining proper mud weights to avoid kicks and lost circulation, and maximizing penetration rates.
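Of the three methods named, the corrected d-exponent has a compact textbook form (after Jorden and Shirley): d = log10[R/(60N)] / log10[12W/(10^6 D)], with R the penetration rate (ft/hr), N rotary speed (rpm), W weight on bit (lb), and D bit diameter (in), corrected for mud weight as dc = d(ρ_normal/ρ_actual). A sketch of that published formulation, which may differ from the details of Mobil's HP-41 program:

import math

def d_exponent(rop_ft_hr, rpm, wob_lb, bit_in):
    """Classic d-exponent: log10(R/60N) / log10(12W/1e6*D)."""
    return (math.log10(rop_ft_hr / (60.0 * rpm)) /
            math.log10(12.0 * wob_lb / (1.0e6 * bit_in)))

def corrected_d(d, normal_mud_ppg, actual_mud_ppg):
    """Mud-weight-corrected d-exponent: dc = d * (normal / actual)."""
    return d * normal_mud_ppg / actual_mud_ppg

# Example: 50 ft/hr, 100 rpm, 40,000 lb on an 8.5-in bit, 9.0 vs 10.5 ppg mud.
d = d_exponent(50.0, 100.0, 40_000.0, 8.5)
print(d, corrected_d(d, 9.0, 10.5))   # ~1.67 and ~1.43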
OpenStudio: A Platform for Ex Ante Incentive Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, Amir; Brackney, Larry; Parker, Andrew
Many utilities operate programs that provide ex ante (up front) incentives for building energy conservation measures (ECMs). A typical incentive program covers two kinds of ECMs. ECMs that deliver similar savings in different contexts are associated with pre-calculated 'deemed' savings values. ECMs that deliver different savings in different contexts are evaluated on a 'custom' per-project basis. Incentive programs often operate at less than peak efficiency because both deemed ECMs and custom projects have lengthy and effort-intensive review processes--deemed ECMs to gain confidence that they are sufficiently context insensitive, custom projects to ensure that savings are claimed appropriately. DOE's OpenStudio platform can be used to automate ex ante processes and help utilities operate programs more efficiently, consistently, and transparently, resulting in greater project throughput and energy savings. A key concept of the platform is the OpenStudio Measure, a script that queries and transforms building energy models. Measures can be simple or surgical, e.g., applying different transformations based on space type, orientation, etc. Measures represent ECMs explicitly and are easier to review than ECMs represented implicitly as the difference between with-ECM and without-ECM models. Measures can be applied automatically to large numbers of prototype models--and instantiated from uncertainty distributions--facilitating the large-scale analysis required to develop deemed savings values. For custom projects, Measures can also be used to calibrate existing building models, to automatically create code-baseline models, and to perform quality assurance screening.
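Actual OpenStudio Measures are Ruby scripts written against the OpenStudio model API; the toy Python sketch below conveys only the query-and-transform pattern on a hypothetical model structure (every name here is invented for illustration, none of it is OpenStudio's API):

def lighting_power_measure(model, reduction_by_space_type):
    """Toy 'measure': query each space and transform its lighting power
    density, applying a different reduction per space type (the kind of
    surgical, context-dependent transformation described above)."""
    for space in model["spaces"]:
        r = reduction_by_space_type.get(space["type"], 0.0)
        space["lpd_w_per_m2"] *= (1.0 - r)
    return model

model = {"spaces": [{"type": "office",   "lpd_w_per_m2": 10.0},
                    {"type": "corridor", "lpd_w_per_m2": 6.0}]}
print(lighting_power_measure(model, {"office": 0.25}))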
NASA Technical Reports Server (NTRS)
1987-01-01
The United States and other countries face the problem of waste disposal in an economical, environmentally safe manner. A widely applied solution adopted by Americans is "waste to energy": incinerating the refuse and using the steam produced by trash burning to drive an electricity-producing generator. NASA's computer program PRESTO II (Performance of Regenerative Superheated Steam Turbine Cycles) provides power engineering companies, including Blount Energy Resources Corporation of Alabama, with the ability to model such features as process steam extraction, induction and feedwater heating by external sources, peaking, and high back pressure. Expansion line efficiency, exhaust loss, leakage, mechanical losses, and generator losses are used to calculate the cycle heat rate. The generator output program is sufficiently precise that it can be used to verify performance quoted in turbine generator suppliers' proposals.
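Cycle heat rate is heat input per unit of net electrical output (Btu/kWh); for a cycle of efficiency η it equals 3412.14/η. A minimal sketch with simplified loss bookkeeping (not PRESTO II's actual loss model):

def cycle_heat_rate(heat_input_btu_hr, gross_kw, mech_loss_kw=0.0, gen_loss_kw=0.0):
    """Heat rate in Btu/kWh: heat input divided by net output after
    deducting mechanical and generator losses."""
    net_kw = gross_kw - mech_loss_kw - gen_loss_kw
    return heat_input_btu_hr / net_kw

# A 40%-efficient cycle: 3412.14 / 0.40 is about 8530 Btu/kWh.
print(cycle_heat_rate(8.53e9, 1.0e6))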
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-12
DEPARTMENT OF ENERGY [Docket No. EESEP0216]. State Energy Program and Energy Efficiency and Conservation Block Grant (EECBG) Program; Request for Information. AGENCY: Office of Energy Efficiency and... . The notice requests information on the State Energy Program (SEP) and Energy Efficiency and Conservation Block Grant (EECBG) program, in support of energy...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goldberg, Amelie; Hedman, Bruce; Taylor, Robert P.
Many states have implemented ratepayer-funded programs to acquire energy efficiency as a predictable and reliable resource for meeting existing and future energy demand. These programs have become a fixture in many U.S. electricity and natural gas markets as they help postpone or eliminate the need for expensive generation and transmission investments. Industrial energy efficiency (IEE) is an energy efficiency resource that is not only a low-cost option for many of these efficiency programs, but also offers productivity and competitive benefits to manufacturers as it reduces their energy costs. However, some industrial customers are less enthusiastic about participating in these programs. IEE ratepayer programs suffer low participation by industry across many states today despite a continual increase in energy efficiency program spending across all types of customers, and significant energy efficiency funds can often go unused for industrial customers. This paper provides four detailed case studies of companies that benefited from participation in their utility's energy efficiency program offerings and highlights the business value brought to them by participation in these programs. The paper is designed both for ratepayer efficiency program administrators interested in improving the attractiveness and effectiveness of industrial efficiency programs for their industrial customers and for industrial customers interested in maximizing the value of participating in efficiency programs.
Stålring, Jonna C; Carlsson, Lars A; Almeida, Pedro; Boyer, Scott
2011-07-28
Machine learning has a vast range of applications. In particular, advanced machine learning methods are routinely and increasingly used in quantitative structure activity relationship (QSAR) modeling. QSAR data sets often encompass tens of thousands of compounds and the size of proprietary, as well as public data sets, is rapidly growing. Hence, there is a demand for computationally efficient machine learning algorithms, easily available to researchers without extensive machine learning knowledge. As they embody the scientific principles of transparency and reproducibility, Open Source solutions are increasingly acknowledged by regulatory authorities. Thus, an Open Source state-of-the-art high performance machine learning platform, interfacing multiple, customized machine learning algorithms for both graphical programming and scripting, to be used for large scale development of QSAR models of regulatory quality, is of great value to the QSAR community. This paper describes the implementation of the Open Source machine learning package AZOrange. AZOrange is specially developed to support batch generation of QSAR models in providing the full work flow of QSAR modeling, from descriptor calculation to automated model building, validation and selection. The automated work flow relies upon the customization of the machine learning algorithms and a generalized, automated model hyper-parameter selection process. Several high performance machine learning algorithms are interfaced for efficient data set specific selection of the statistical method, promoting model accuracy. Using the high performance machine learning algorithms of AZOrange does not require programming knowledge as flexible applications can be created, not only at a scripting level, but also in a graphical programming environment. AZOrange is a step towards meeting the needs for an Open Source high performance machine learning platform, supporting the efficient development of highly accurate QSAR models fulfilling regulatory requirements.
Airplane stability calculations with a card programmable pocket calculator
NASA Technical Reports Server (NTRS)
Sherman, W. L.
1978-01-01
Programs are presented for calculating airplane stability characteristics with a card programmable pocket calculator. These calculations include eigenvalues of the characteristic equations of lateral and longitudinal motion as well as stability parameters such as the time to damp to one-half amplitude or the damping ratio. The effects of wind shear are included. Background information and the equations programmed are given. The programs are written for the International System of Units, the dimensional form of the stability derivatives, and stability axes. In addition to programs for stability calculations, an unusual and short program is included for the Euler transformation of coordinates used in airplane motions. The programs have been written for a Hewlett Packard HP-67 calculator. However, the use of this calculator does not constitute an endorsement of the product by the National Aeronautics and Space Administration.
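The stability parameters named above follow directly from the eigenvalues λ of the characteristic equation: time to damp to one-half amplitude t½ = ln 2/|Re λ| (time to double, if unstable) and damping ratio ζ = −Re λ/|λ| for an oscillatory pair. A short sketch of that post-processing step (the HP-67 programs themselves are not reproduced):

import numpy as np

def stability_parameters(char_poly):
    """Roots of the characteristic polynomial plus, for each root,
    time to halve (or double, if unstable) amplitude and damping ratio."""
    out = []
    for lam in np.roots(char_poly):
        t_half = np.log(2.0) / abs(lam.real) if lam.real != 0 else np.inf
        zeta = -lam.real / abs(lam)
        out.append((lam, t_half, zeta))
    return out

# Toy short-period-like mode: lambda**2 + 2*lambda + 5 = 0 -> -1 +/- 2j.
for lam, t_half, zeta in stability_parameters([1.0, 2.0, 5.0]):
    print(lam, t_half, zeta)   # t_half ~ 0.693, zeta ~ 0.447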
The Programmable Calculator in the Classroom.
ERIC Educational Resources Information Center
Stolarz, Theodore J.
The uses of programmable calculators in the mathematics classroom are presented. A discussion of the "microelectronics revolution" that has brought programmable calculators into our society is also included. It is pointed out that the logical or mental processes used to program the programmable calculator are identical to those used to program…
Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables
NASA Astrophysics Data System (ADS)
Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.
2018-02-01
In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solution set is sought. The dual of this problem, the unconstrained maximization of a piecewise-quadratic function, is solved by Newton's method. The unconstrained optimization problem dual to the regularized problem of finding the projection onto the solution set of the system is also considered. A connection between duality theory, Newton's method, and some known algorithms for projecting onto a standard simplex is shown. Using the specific constraint structure of the transport linear programming problem as an example, it is demonstrated how the efficiency of calculating the generalized Hessian matrix can be increased. Some examples of numerical calculations using MATLAB are presented.
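One of the known simplex-projection algorithms alluded to above is the classic sort-and-threshold rule (an O(n log n) method; variants are attributed to Held et al. and Michelot). A sketch of that standard method, which is not necessarily the exact algorithm the authors connect to their duality framework:

import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto {x : x >= 0, sum(x) = 1} by the
    sort-and-threshold rule: shift by theta, then clip at zero."""
    u = np.sort(v)[::-1]                       # descending sort
    css = np.cumsum(u)
    j = np.arange(1, v.size + 1)
    rho = np.nonzero(u + (1.0 - css) / j > 0.0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1.0)
    return np.maximum(v + theta, 0.0)

print(project_to_simplex(np.array([0.5, 1.2, -0.3])))   # [0.15 0.85 0.  ]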
Quantum simulation of an ultrathin body field-effect transistor with channel imperfections
NASA Astrophysics Data System (ADS)
Vyurkov, V.; Semenikhin, I.; Filippov, S.; Orlikovsky, A.
2012-04-01
An efficient program for the all-quantum simulation of nanometer field-effect transistors has been developed. The model is based on the Landauer-Buttiker approach. Our calculation of transmission coefficients employs a transfer-matrix technique with arbitrary-precision (multiprecision) arithmetic to cope with evanescent modes. Modified in this way, the transfer-matrix technique turns out to be much faster in practical simulations than the scattering-matrix technique. Results of the simulation demonstrate the impact of realistic channel imperfections (random charged centers and wall roughness) on transistor characteristics. The Landauer-Buttiker approach is extended to incorporate calculation of the noise at an arbitrary temperature. We also validate the ballistic Landauer-Buttiker approach for the usual situation in which heavily doped contacts must be included in the simulation region.
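The reason for multiprecision arithmetic: evanescent modes carry factors exp(±κL) that overflow and cancel catastrophically in double precision when barriers are wide or modes are strongly decaying. A one-dimensional toy version of the transfer-matrix idea using mpmath (the paper's code handles 3D mode spaces; this sketch only shows where the huge exponentials enter):

from mpmath import mp, mpc, sqrt, exp

mp.dps = 50   # 50 significant digits keeps exp(+kappa*a) terms exact

def barrier_transmission(E, V, a):
    """Transmission through a 1D rectangular barrier (hbar = m = 1) via
    2x2 transfer matrices; for E < V the barrier wavevector is imaginary,
    so the propagation matrix holds growing/decaying exponentials."""
    k1 = sqrt(mpc(2.0 * E))                 # wavevector outside the barrier
    k2 = sqrt(mpc(2.0 * (E - V)))           # imaginary inside (evanescent)
    def interface(ka, kb):                  # match psi and psi' at a step
        r = ka / kb
        return [[(1 + r) / 2, (1 - r) / 2], [(1 - r) / 2, (1 + r) / 2]]
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]
    prop = [[exp(1j * k2 * a), 0], [0, exp(-1j * k2 * a)]]
    M = matmul(interface(k2, k1), matmul(prop, interface(k1, k2)))
    return abs(1 / M[1][1]) ** 2            # det(M) = 1 for equal outer media

print(barrier_transmission(E=1.0, V=2.0, a=5.0))   # tiny tunneling probability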
NASA Technical Reports Server (NTRS)
Hess, J. L.; Mack, D. P.; Stockman, N. O.
1979-01-01
A panel method is used to calculate incompressible flow about arbitrary three-dimensional inlets, with or without centerbodies, for four fundamental flow conditions: unit onset flows parallel to each of the coordinate axes plus static operation. The computing time is scarcely longer than for a single solution. A linear superposition of these solutions quite rigorously gives incompressible flow about the inlet for any angle of attack, angle of yaw, and mass flow rate. Compressibility is accounted for by applying a well-proven correction to the incompressible flow. Since the computing times for the combination and the compressibility correction are small, flows at a large number of inlet operating conditions are obtained rather cheaply. Geometric input is aided by an automatic generating program. A number of graphical output features are provided to aid the user, including surface streamline tracing and automatic generation of curves of constant pressure, Mach number, and flow inclination at selected inlet cross sections. The inlet method and use of the program are described. Illustrative results are presented.
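The superposition step is a weighted sum of the four stored fundamental solutions at every surface point. A sketch under an assumed weighting convention (freestream components from angle of attack and yaw, plus a mass-flow scaling of the static solution); the program's exact conventions may differ:

import numpy as np

def combine_solutions(v_x, v_y, v_z, v_static, v_inf, alpha, beta, mdot_scale):
    """Superpose unit-onset-flow solutions along the three axes with the
    static-operation solution; alpha = angle of attack, beta = yaw (rad)."""
    u = v_inf * np.cos(alpha) * np.cos(beta)    # axial onset component
    v = v_inf * np.sin(beta)                    # side (yaw) component
    w = v_inf * np.sin(alpha) * np.cos(beta)    # vertical component
    return u * v_x + v * v_y + w * v_z + mdot_scale * v_static

# Toy: surface velocities at 3 points from precomputed unit solutions.
pts = np.ones(3)
print(combine_solutions(pts, 0.2 * pts, 0.1 * pts, 2.0 * pts,
                        v_inf=1.0, alpha=np.radians(5.0), beta=0.0,
                        mdot_scale=0.5))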
A Computer Program for the Calculation of Three-Dimensional Transonic Nacelle/Inlet Flowfields
NASA Technical Reports Server (NTRS)
Vadyak, J.; Atta, E. H.
1983-01-01
A highly efficient computer analysis was developed for predicting transonic nacelle/inlet flowfields. This algorithm can compute the three dimensional transonic flowfield about axisymmetric (or asymmetric) nacelle/inlet configurations at zero or nonzero incidence. The flowfield is determined by solving the full-potential equation in conservative form on a body-fitted curvilinear computational mesh. The difference equations are solved using the AF2 approximate factorization scheme. This report presents a discussion of the computational methods used to both generate the body-fitted curvilinear mesh and to obtain the inviscid flow solution. Computed results and correlations with existing methods and experiment are presented. Also presented are discussions on the organization of the grid generation (NGRIDA) computer program and the flow solution (NACELLE) computer program, descriptions of the respective subroutines, definitions of the required input parameters for both algorithms, a brief discussion on interpretation of the output, and sample cases to illustrate application of the analysis.
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Grose, G. G.
1978-01-01
The Douglas Neumann method for low-speed potential flow on arbitrary three-dimensional lifting bodies was modified by substituting the combined source and doublet surface paneling based on Green's identity for the original source panels. Numerical studies show improved accuracy and stability for thin lifting surfaces, permitting reduced panel number for high-lift devices and supercritical airfoil sections. The accuracy of flow in concave corners is improved. A method of airfoil section design for a given pressure distribution, based on Green's identity, was demonstrated. The program uses panels on the body surface with constant source strength and parabolic distribution of doublet strength, and a doublet sheet on the wake. The program is written for the CDC CYBER 175 computer. Results of calculations are presented for isolated bodies, wings, wing-body combinations, and internal flow.
Development of theoretical models of integrated millimeter wave antennas
NASA Technical Reports Server (NTRS)
Yngvesson, K. Sigfrid; Schaubert, Daniel H.
1991-01-01
Extensive radiation patterns for Linear Tapered Slot Antenna (LTSA) single elements are presented. The directivity of LTSA elements is predicted correctly by taking the cross-polarized pattern into account. A moment-method program predicts radiation patterns for air LTSAs in excellent agreement with experimental data. A moment-method program was also developed for the LTSA array modeling task. Computations performed with this program are in excellent agreement with published results for dipole and monopole arrays, and with waveguide simulator experiments for more complicated structures. Empirical modeling of LTSA arrays demonstrated that the maximum theoretical element gain can be obtained. Formulations were also developed for calculating the aperture efficiency of LTSA arrays used in reflector systems. It was shown that LTSA arrays used in multibeam systems have a considerable advantage in terms of higher packing density, compared with waveguide feeds. A conversion loss of 10 dB was demonstrated at 35 GHz.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
The report is an overview of electric energy efficiency programs. It takes a concise look at what states are doing to encourage energy efficiency and how it impacts electric utilities. Energy efficiency programs began to be offered by utilities as a response to the energy crises of the 1970s. These regulatory-driven programs peaked in the early 1990s and then tapered off as deregulation took hold. Today, rising electricity prices, environmental concerns, and national security issues have renewed interest in increasing energy efficiency as an alternative to additional supply. In response, new methods for administering, managing, and delivering energy efficiency programs are being implemented. Topics covered in the report include: analysis of the benefits of energy efficiency and key methods for achieving it; evaluation of the business drivers spurring increased energy efficiency; discussion of the major barriers to expanding energy efficiency programs; evaluation of the economic impacts of energy efficiency; discussion of the history of electric utility energy efficiency efforts; analysis of the impact of energy efficiency on utility profits and methods for protecting profitability; discussion of non-utility management of energy efficiency programs; evaluation of major methods to spur energy efficiency, including systems benefit charges, resource planning, and resource standards; and analysis of the alternatives for encouraging customer participation in energy efficiency programs.
A computer program for two-particle intrinsic coefficients of fractional parentage
NASA Astrophysics Data System (ADS)
Deveikis, A.
2012-06-01
A Fortran 90 program CESOS for the calculation of the two-particle intrinsic coefficients of fractional parentage for several j-shells with isospin and an arbitrary number of oscillator quanta (CESOs) is presented. The implemented procedure for CESOs calculation consistently follows the principles of antisymmetry and translational invariance. The approach is based on a simple enumeration scheme for antisymmetric many-particle states, efficient algorithms for calculation of the coefficients of fractional parentage for j-shells with isospin, and construction of the subspace of the center-of-mass Hamiltonian eigenvectors corresponding to the minimal eigenvalue equal to 3/2 (in ℏω). The program provides fast calculation of CESOs for a given particle number and produces results possessing small numerical uncertainties. The introduced CESOs may be used for calculation of expectation values of two-particle nuclear shell-model operators within the isospin formalism. Program summary: Program title: CESOS. Catalogue identifier: AELT_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELT_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 10 932. No. of bytes in distributed program, including test data, etc.: 61 023. Distribution format: tar.gz. Programming language: Fortran 90. Computer: Any computer with a Fortran 90 compiler. Operating system: Windows XP, Linux. RAM: The memory demand depends on the number of particles A and the excitation energy of the system E. Computation of the A=6 particle system with the total angular momentum J=0 and the total isospin T=1 requires around 4 kB of RAM at E=0, ~3 MB at E=3, and ~172 MB at E=5. Classification: 17.18. Nature of problem: The code CESOS generates a list of two-particle intrinsic coefficients of fractional parentage for several j-shells with isospin. Solution method: The method is based on the observation that CESOs may be obtained by diagonalizing the center-of-mass Hamiltonian in the basis set of antisymmetric A-particle oscillator functions with singled-out dependence on the Jacobi coordinates of the two last particles, and choosing the subspace of its eigenvectors corresponding to the minimal eigenvalue equal to 3/2. Restrictions: One run of the code CESOS generates CESOs for one specified set of (A,E,J,T) values only. The restrictions on the (A,E,J,T) values are completely determined by the restrictions on the computation of the single-shell CFPs and two-particle multishell CFPs (GCFPs) [1]. The full sets of single-shell CFPs may be calculated up to the j=9/2 shell (for any particular shell of the configuration); the shell with j⩾11/2 cannot get full (an implementation constraint). The calculation of GCFPs is limited to A<86 when E=0 (due to memory constraints); small numbers of particles allow significantly higher excitations. Any allowed values of J and T may be chosen for the specified values of A and E. The complete list of allowed values of J and T for the chosen values of A and E may be generated by the GCFP program (CPC Program Library, Catalogue Id. AEBI_v1_0). The actual scale of the CESOs computation problem depends strongly on the magnitude of the A and E values.
Although there are no limitations on the A and E values (within the limits of the single-shell and multishell CFP calculations), generation of the corresponding list of CESOs is subject to the available computing resources. For example, computing the CESOs for A=6, JT=10 at E=5 took around 14 hours. The system with A=11, JT=1/23/2 at E=2 requires around 15 hours. These computations were performed on a Pentium 3 GHz PC with 1 GB RAM [2]. Unusual features: It is possible to test the computed CESOs without saving them to a file. This allows the user to learn their number and approximate computation time and to evaluate the accuracy of the calculations. Additional comments: The program CESOS uses the code from the GCFP program for calculation of the two-particle multishell coefficients of fractional parentage. Running time: It depends on the size of the problem. The A=6 particle system with JT=01 took around 31 seconds on a Pentium 3 GHz PC with 1 GB RAM at E=3 and about 2.6 hours at E=5.
Expansion of Tabulated Scattering Matrices in Generalized Spherical Functions
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Geogdzhayev, Igor V.; Yang, Ping
2016-01-01
An efficient way to solve the vector radiative transfer equation for plane-parallel turbid media is to Fourier-decompose it in azimuth. This methodology is typically based on the analytical computation of the Fourier components of the phase matrix and is predicated on the knowledge of the coefficients appearing in the expansion of the normalized scattering matrix in generalized spherical functions. Quite often the expansion coefficients have to be determined from tabulated values of the scattering matrix obtained from measurements or calculated by solving the Maxwell equations. In such cases one needs an efficient and accurate computer procedure converting a tabulated scattering matrix into the corresponding set of expansion coefficients. This short communication summarizes the theoretical basis of this procedure and serves as the user guide to a simple public-domain FORTRAN program.
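For the (1,1) element alone the procedure reduces to an ordinary Legendre expansion, which makes a compact illustration; the full matrix requires the generalized spherical functions described in the paper. A sketch using Gauss-Legendre quadrature, checked against the Henyey-Greenstein phase function, whose coefficients are known analytically:

```python
# Expand a tabulated (here: analytic) phase function in Legendre polynomials:
# alpha_l = (2l+1)/2 * integral_{-1}^{1} p(mu) P_l(mu) dmu
import numpy as np
from numpy.polynomial.legendre import leggauss

def legendre_coeffs(phase_func, lmax, nquad=200):
    mu, w = leggauss(nquad)                          # nodes/weights on [-1, 1]
    p = phase_func(mu)
    P = np.polynomial.legendre.legvander(mu, lmax)   # P_l(mu), shape (nquad, lmax+1)
    ls = np.arange(lmax + 1)
    return (2 * ls + 1) / 2 * (P * (w * p)[:, None]).sum(axis=0)

# Henyey-Greenstein test: exact coefficients are alpha_l = (2l+1) g**l.
g = 0.7
hg = lambda mu: (1 - g**2) / (1 + g**2 - 2 * g * mu)**1.5
alpha = legendre_coeffs(hg, lmax=10)
print(alpha[:4])                         # ~ [1.0, 2.1, 2.45, 2.401]
print([(2 * l + 1) * g**l for l in range(4)])
```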
NASA Astrophysics Data System (ADS)
Ziegler, Benjamin; Rauhut, Guntram
2016-03-01
The transformation of multi-dimensional potential energy surfaces (PESs) from a grid-based multimode representation to an analytical one is a standard procedure in quantum chemical programs. Within the framework of linear least squares fitting, a simple and highly efficient algorithm is presented, which relies on a direct product representation of the PES and a repeated use of Kronecker products. It shows the same scalings in computational cost and memory requirements as the potfit approach. In comparison to customary linear least squares fitting algorithms, this corresponds to a speed-up and memory saving by several orders of magnitude. Different fitting bases are tested, namely, polynomials, B-splines, and distributed Gaussians. Benchmark calculations are provided for the PESs of a set of small molecules.
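The savings come from never forming the full direct-product design matrix. A minimal two-mode sketch (an assumed simplification of the n-mode algorithm) exploits the identity pinv(B2 kron B1) = pinv(B2) kron pinv(B1) to fit grid data one dimension at a time:

```python
# Least-squares fit of grid data in a separable (direct-product) basis,
# solved dimension-wise instead of via the (n1*n2) x (m1*m2) Kronecker matrix.
import numpy as np

n1, n2, m1, m2 = 40, 40, 6, 6                 # grid points and basis sizes per mode
x1, x2 = np.linspace(-1, 1, n1), np.linspace(-1, 1, n2)
B1 = np.polynomial.polynomial.polyvander(x1, m1 - 1)   # (n1, m1) polynomial basis
B2 = np.polynomial.polynomial.polyvander(x2, m2 - 1)

F = np.exp(-(x1[:, None]**2 + x2[None, :]**2))         # sample "PES" on the grid

# Minimizer of ||B1 C B2^T - F||_F, computed without the Kronecker product:
C = np.linalg.pinv(B1) @ F @ np.linalg.pinv(B2).T
fit = B1 @ C @ B2.T
print(np.max(np.abs(fit - F)))                # residual of the direct-product fit
```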
NASA Technical Reports Server (NTRS)
Cavalleri, R. J.; Agnone, A. M.
1972-01-01
A computer program for calculating internal supersonic flow fields with chemical reactions and shock waves typical of supersonic combustion chambers with either wall or mid-stream injectors is described. The usefulness and limitations of the program are indicated. The program manual and listing are presented along with a sample calculation.
A general higher-order remap algorithm for ALE calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chiravalle, Vincent P
2011-01-05
A numerical technique for solving the equations of fluid dynamics with arbitrary mesh motion is presented. The three phases of the Arbitrary Lagrangian Eulerian (ALE) methodology are outlined: the Lagrangian phase, grid relaxation phase and remap phase. The Lagrangian phase follows a well known approach from the HEMP code; in addition the strain rate and flow divergence are calculated in a consistent manner according to Margolin. A donor cell method from the SALE code forms the basis of the remap step, but unlike SALE a higher order correction based on monotone gradients is also added to the remap. Four test problems were explored to evaluate the fidelity of these numerical techniques, as implemented in a simple test code, written in the C programming language, called Cercion. Novel cell-centered data structures are used in Cercion to reduce the complexity of the programming and maximize the efficiency of memory usage. The locations of the shock and contact discontinuity in the Riemann shock tube problem are well captured. Cercion demonstrates a high degree of symmetry when calculating the Sedov blast wave solution, with a peak density at the shock front that is similar to the value determined by the RAGE code. For a flyer plate test problem both Cercion and FLAG give virtually the same velocity temporal profile at the target-vacuum interface. When calculating a cylindrical implosion of a steel shell, Cercion and FLAG agree well and the Cercion results are insensitive to the use of ALE.
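A one-dimensional caricature of the remap phase conveys the idea: donor-cell fluxes plus a limited gradient correction keep the remap conservative and monotone. This sketch is illustrative only, not the Cercion implementation:

```python
# 1D donor-cell remap with a monotone (minmod-limited) gradient correction,
# for a uniform periodic grid displaced by a fraction c of a cell per remap.
import numpy as np

def remap(u, c):
    """Remap cell averages u to a grid shifted by c*dx (0 <= c < 1)."""
    # Minmod of one-sided differences keeps the reconstruction monotone:
    du_l = u - np.roll(u, 1)
    du_r = np.roll(u, -1) - u
    grad = np.where(du_l * du_r > 0,
                    np.sign(du_l) * np.minimum(np.abs(du_l), np.abs(du_r)), 0.0)
    # Mass fluxed through each right face: donor value plus the higher-order
    # correction evaluated at the centroid of the swept region.
    flux = c * (u + 0.5 * (1 - c) * grad)
    return u - flux + np.roll(flux, 1)     # conservative, periodic update

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # square wave
for _ in range(200):                             # translate one full period
    u = remap(u, 0.5)
print(u.min(), u.max())   # stays within [0, 1]: the remap is monotone
```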
Numerical simulation of hypersonic inlet flows with equilibrium or finite rate chemistry
NASA Technical Reports Server (NTRS)
Yu, Sheng-Tao; Hsieh, Kwang-Chung; Shuen, Jian-Shun; Mcbride, Bonnie J.
1988-01-01
An efficient numerical program incorporating comprehensive high-temperature gas property models has been developed to simulate hypersonic inlet flows. The computer program employs an implicit lower-upper time marching scheme to solve the two-dimensional Navier-Stokes equations with variable thermodynamic and transport properties. Both finite-rate and local-equilibrium approaches are adopted in the chemical reaction model for dissociation and ionization of the inlet air. In the finite-rate approach, eleven species equations coupled with the fluid dynamic equations are solved simultaneously. In the local-equilibrium approach, instead of solving species equations, an efficient chemical equilibrium package has been developed and incorporated into the flow code to obtain chemical compositions directly. Gas properties for the reaction product species are calculated by methods of statistical mechanics and fitted to a polynomial form for Cp. In the present study, since the chemical reaction time is comparable to the flow residence time, the local-equilibrium model underpredicts the temperature in the shock layer. Significant differences in the predicted shock-layer chemical compositions between the finite-rate and local-equilibrium approaches have been observed.
Boyd, O.S.
2006-01-01
We have created a second-order finite-difference solution to the anisotropic elastic wave equation in three dimensions and implemented the solution as an efficient Matlab script. This program allows the user to generate synthetic seismograms for three-dimensional anisotropic earth structure. The code was written for teleseismic wave propagation in the 1-0.1 Hz frequency range but is of general utility and can be used at all scales of space and time. This program was created to help distinguish among various types of lithospheric structure given the uneven distribution of sources and receivers commonly utilized in passive source seismology. Several successful implementations have resulted in a better appreciation for subduction zone structure, the fate of a transform fault with depth, lithospheric delamination, and the effects of wavefield focusing and defocusing on attenuation. Companion scripts are provided which help the user prepare input to the finite-difference solution. Boundary conditions including specification of the initial wavefield, absorption and two types of reflection are available. © 2005 Elsevier Ltd. All rights reserved.
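The core update of such a solver is compact. A one-dimensional, constant-velocity sketch of the second-order leapfrog scheme (Boyd's script is 3D, anisotropic and elastic, so this is only the skeleton):

```python
# Second-order finite differences in space and time for the 1D scalar wave
# equation u_tt = c^2 u_xx, periodic boundaries, stable for c*dt/dx < 1.
import numpy as np

nx, nt, dx, dt, c = 400, 800, 10.0, 1.0e-3, 4000.0   # m, s, m/s
r2 = (c * dt / dx)**2                                # squared CFL number (0.16)
u_prev, u = np.zeros(nx), np.zeros(nx)
u[nx // 2] = 1.0                                     # initial displacement spike

for _ in range(nt):
    lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)     # second spatial difference
    u_next = 2 * u - u_prev + r2 * lap               # leapfrog time update
    u_prev, u = u, u_next

print(np.abs(u).max())   # wavefield remains bounded for a stable CFL number
```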
The methodology of the gas turbine efficiency calculation
NASA Astrophysics Data System (ADS)
Kotowicz, Janusz; Job, Marcin; Brzęczek, Mateusz; Nawrat, Krzysztof; Mędrych, Janusz
2016-12-01
In this paper, a methodology for calculating the isentropic efficiency of the compressor and turbine in a gas turbine installation on the basis of polytropic efficiency characteristics is presented. A gas turbine model is implemented in power plant simulation software. Calculation algorithms based on an iterative model are shown for the isentropic efficiency of the compressor and for the isentropic efficiency of the turbine, the latter based on the turbine inlet temperature. The isentropic efficiency characteristics of the compressor and the turbine are developed by means of the above-mentioned algorithms. The development of gas turbines with high compression ratios was the main driving force for this analysis. The obtained gas turbine electric efficiency characteristics show that an increase of the pressure ratio above 50 is not justified, because the slight gain in efficiency comes with a significant increase of the combustor outlet (turbine inlet) temperature.
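When the working fluid is treated as an ideal gas with constant gamma, the polytropic-to-isentropic conversion has a closed form, which is a convenient stand-in for the paper's iterative variable-property algorithm:

```python
# Textbook constant-gamma relations between polytropic (eta_p) and isentropic
# efficiency as functions of pressure ratio pi; a simplified stand-in for the
# paper's iterative, temperature-dependent-property calculation.
def compressor_isentropic_eff(pi, eta_p, gamma=1.4):
    e = (gamma - 1.0) / gamma
    return (pi**e - 1.0) / (pi**(e / eta_p) - 1.0)

def turbine_isentropic_eff(pi, eta_p, gamma=1.33):
    e = (gamma - 1.0) / gamma
    return (1.0 - pi**(-e * eta_p)) / (1.0 - pi**(-e))

for pi in (10, 30, 50, 70):
    print(pi, compressor_isentropic_eff(pi, 0.90), turbine_isentropic_eff(pi, 0.90))
# Compressor isentropic efficiency falls as the pressure ratio grows, one
# reason very high pressure ratios yield diminishing returns.
```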
Nishizawa, Hiroaki; Nishimura, Yoshifumi; Kobayashi, Masato; Irle, Stephan; Nakai, Hiromi
2016-08-05
The linear-scaling divide-and-conquer (DC) quantum chemical methodology is applied to density-functional tight-binding (DFTB) theory to develop a massively parallel program that achieves on-the-fly molecular reaction dynamics simulations of huge systems from scratch. The functions to perform large-scale geometry optimization and molecular dynamics on the DC-DFTB potential energy surface are implemented in the program, called DC-DFTB-K. A novel interpolation-based algorithm is developed for parallelizing the determination of the Fermi level in the DC method. The performance of the DC-DFTB-K program is assessed using a laboratory computer and the K computer. Numerical tests show the high efficiency of the DC-DFTB-K program: a single-point energy gradient calculation of a one-million-atom system is completed within 60 s using 7290 nodes of the K computer. © 2016 Wiley Periodicals, Inc.
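The Fermi-level search in a DC-type scheme must conserve the total electron count across subsystems. A sketch using plain bisection (the paper's contribution is a faster, interpolation-based parallel variant of this step):

```python
# Find the Fermi level mu such that the summed Fermi-Dirac occupations equal
# the electron count. Bisection shown for clarity; DC-DFTB-K uses a faster
# interpolation-based parallel scheme.
import numpy as np
from scipy.special import expit  # numerically stable logistic function

def fermi_level(eps, n_elec, beta=1000.0, tol=1e-12):
    occ = lambda mu: 2.0 * expit(-beta * (eps - mu)).sum()  # factor 2 for spin
    lo, hi = eps.min() - 1.0, eps.max() + 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if occ(mid) < n_elec else (lo, mid)
    return 0.5 * (lo + hi)

eps = np.sort(np.random.default_rng(1).normal(size=200))  # mock orbital energies
print(fermi_level(eps, n_elec=100))  # mu lands in the gap after 50 doubly filled levels
```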
Power flows and Mechanical Intensities in structural finite element analysis
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.
1989-01-01
The identification of power flow paths in dynamically loaded structures is an important, but currently unavailable, capability for the finite element analyst. For this reason, methods for calculating power flows and mechanical intensities in finite element models are developed here. Formulations for calculating input and output powers, power flows, mechanical intensities, and power dissipations for beam, plate, and solid element types are derived. NASTRAN is used to calculate the required velocity, force, and stress results of an analysis, which a post-processor then uses to calculate power flow quantities. The SDRC I-deas Supertab module is used to view the final results. Test models include a simple truss and a beam-stiffened cantilever plate. Both test cases showed reasonable power flow fields over low to medium frequencies, with accurate power balances. Future work will include testing with more complex models, developing an interactive graphics program to view easily and efficiently the analysis results, applying shape optimization methods to the problem with power flow variables as design constraints, and adding the power flow capability to NASTRAN.
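The basic post-processed quantity is the time-averaged power computed from complex force and velocity amplitudes; a one-line illustration with made-up phasors:

```python
# Time-averaged input power from steady-state complex amplitudes,
# P = (1/2) Re(F conj(v)): the kind of quantity the post-processor forms
# from NASTRAN force and velocity output. Numbers are illustrative only.
import numpy as np

F = 10.0 * np.exp(1j * 0.3)            # force phasor at the drive point [N]
v = 0.02 * np.exp(1j * 1.1)            # velocity phasor at the same DOF [m/s]
P_in = 0.5 * np.real(F * np.conj(v))   # [W]; positive means power flows in
print(P_in)
```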
New vibration-rotation code for tetraatomic molecules exhibiting wide-amplitude motion: WAVR4
NASA Astrophysics Data System (ADS)
Kozin, Igor N.; Law, Mark M.; Tennyson, Jonathan; Hutson, Jeremy M.
2004-11-01
A general computational method for the accurate calculation of rotationally and vibrationally excited states of tetraatomic molecules is developed. The resulting program is particularly appropriate for molecules executing wide-amplitude motions and isomerizations. The program offers a choice of coordinate systems based on Radau, Jacobi, diatom-diatom and orthogonal satellite vectors. The method includes all six vibrational dimensions plus three rotational dimensions. Vibration-rotation calculations with reduced dimensionality in the radial degrees of freedom are easily tackled via constraints imposed on the radial coordinates via the input file. Program summary: Title of program: WAVR4. Catalogue number: ADUN. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADUN. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Licensing provisions: Persons requesting the program must sign the standard CPC nonprofit use license. Computer: Developed under Tru64 UNIX, ported to Microsoft Windows and Sun Unix. Operating systems under which the program has been tested: Tru64 Unix, Microsoft Windows, Sun Unix. Programming language used: Fortran 90. Memory required to execute with typical data: case dependent. No. of lines in distributed program, including test data, etc.: 11 937. No. of bytes in distributed program, including test data, etc.: 84 770. Distribution format: tar.gz. Nature of physical problem: WAVR4 calculates the bound ro-vibrational levels and wavefunctions of a tetraatomic system using body-fixed coordinates based on generalised orthogonal vectors. Method of solution: The angular coordinates are treated using a finite basis representation (FBR) based on products of spherical harmonics. A discrete variable representation (DVR) [1] based on either Morse-oscillator-like or spherical-oscillator functions [2] is used for the radial coordinates. Matrix elements are computed using an efficient Gaussian quadrature in the angular coordinates and the DVR approximation in the radial coordinates. The solution of the secular problem is carried through a series of intermediate diagonalisations and truncations. Restrictions on the complexity of the problem: (1) the size of the final Hamiltonian matrix that can be practically diagonalised; (2) the DVR approximation for a radial coordinate fails for values of the coordinate near zero; this is remedied only for one radial coordinate by using analytical integration. Typical running time: problem-dependent. Unusual features of the program: A user-supplied subroutine to evaluate the potential energy is a program requirement. External routines: BLAS and LAPACK are required. References: [1] J.C. Light, I.P. Hamilton, J.V. Lill, J. Chem. Phys. 92 (1985) 1400. [2] J.R. Henderson, C.R. Le Sueur, J. Tennyson, Comp. Phys. Comm. 75 (1993) 379.
Comparison of Building Energy Modeling Programs: Building Loads
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, Dandan; Hong, Tianzhen; Yan, Da
This technical report presents the methodologies, processes, and results of comparing three Building Energy Modeling Programs (BEMPs) for load calculations: EnergyPlus, DeST and DOE-2.1E. This joint effort, between Lawrence Berkeley National Laboratory, USA and Tsinghua University, China, was part of research projects under the US-China Clean Energy Research Center on Building Energy Efficiency (CERC-BEE). Energy Foundation, an industrial partner of CERC-BEE, was the co-sponsor of this study. It is widely known that large discrepancies in simulation results can exist between different BEMPs. The result is a lack of confidence in building simulation amongst many users and stakeholders. In the fields of building energy code development and energy labeling programs, where building simulation plays a key role, there are also confusing and misleading claims that some BEMPs are better than others. In order to address these problems, it is essential to identify and understand differences between widely-used BEMPs, and the impact of these differences on load simulation results, by detailed comparisons of these BEMPs from source code to results. The primary goal of this work was to research methods and processes that would allow a thorough scientific comparison of the BEMPs. The secondary goal was to provide a list of strengths and weaknesses for each BEMP, based on in-depth understandings of their modeling capabilities, mathematical algorithms, advantages and limitations. This is to guide the use of BEMPs in the design and retrofit of buildings, especially to support China’s building energy standard development and energy labeling program. The research findings could also serve as a good reference to improve the modeling capabilities and applications of the three BEMPs. The methodologies, processes, and analyses employed in the comparison work could also be used to compare other programs. The load calculation method of each program was analyzed and compared to identify the differences in solution algorithms, modeling assumptions and simplifications. Identifying the inputs of each program and their default values or algorithms for load simulation was a critical step. These tend to be overlooked by users, but can lead to large discrepancies in simulation results. As weather data was an important input, the weather file formats and weather variables used by each program were summarized. Some common mistakes in the weather data conversion process were discussed. ASHRAE Standard 140-2007 tests were carried out to test the fundamental modeling capabilities of the load calculations of the three BEMPs, where inputs for each test case were strictly defined and specified. The tests indicated that the cooling and heating load results of the three BEMPs fell mostly within the range of spread of results from other programs. Based on the ASHRAE 140-2007 test results, the finer differences between DeST and EnergyPlus were further analyzed by designing and conducting additional tests. Potential key influencing factors (such as internal gains, air infiltration, convection coefficients of windows and opaque surfaces) were added one at a time to a simple base case with an analytical solution, to compare their relative impacts on load calculation results. Finally, special tests were designed and conducted aiming to ascertain the potential limitations of each program to perform accurate load calculations. The heat balance module was tested for both single and double zone cases.
Furthermore, cooling and heating load calculations were compared between the three programs by varying the heat transfer between adjacent zones, the occupancy of the building, and the air-conditioning schedule.
New generation of universal modeling for centrifugal compressors calculation
NASA Astrophysics Data System (ADS)
Galerkin, Y.; Drozdov, A.
2015-08-01
The Universal Modeling method has been in constant use since the mid-1990s. The newest, sixth version of the method is presented below. The flow path configuration of 3D impellers is presented in detail. It is possible to optimize the meridian configuration, including hub/shroud curvatures, axial length, leading edge position, etc. The new model of the vaned diffuser includes a flow non-uniformity coefficient based on CFD calculations. The loss model was built from the results of 37 experiments with compressor stages of different flow rates and loading factors. One common set of empirical coefficients in the loss model guarantees the efficiency definition within an accuracy of 0.86% at the design point and 1.22% along the performance curve. For model verification, the performances of four multistage compressors with vaned and vaneless diffusers were calculated. Two of these compressors have quite unusual flow paths; the modeling results were quite satisfactory in spite of these peculiarities. One sample of the verification calculations is presented in the text. This sixth version of the developed computer program is already being applied successfully in design practice.
GVVPT2 energy gradient using a Lagrangian formulation.
Theis, Daniel; Khait, Yuriy G; Hoffmann, Mark R
2011-07-28
A Lagrangian based approach was used to obtain analytic formulas for GVVPT2 energy nuclear gradients. The formalism can use either complete or incomplete model (or reference) spaces, and is limited, in this regard, only by the capabilities of the MCSCF program. An efficient means of evaluating the gradient equations is described. Demonstrative calculations were performed and compared with finite difference calculations on several molecules and show that the GVVPT2 gradients are accurate. Of particular interest, the suggested formalism can straightforwardly use state-averaged MCSCF descriptions of the reference space in which the states have arbitrary weights. This capability is demonstrated by some calculations on the ground and first excited singlet states of LiH, including calculations near an avoided crossing. The accuracy and usefulness of the GVVPT2 method and its gradient are highlighted by comparing the geometry of the near-C(2v) minimum on the conical intersection seam between the 1 (1)A(1) and 2 (1)A(1) surfaces of O(3) with values that were calculated at the multireference configuration interaction, including single and double excitations (MRCISD), level of theory. © 2011 American Institute of Physics
NASA Astrophysics Data System (ADS)
Work, Paul R.
1991-12-01
This thesis investigates the parallelization of existing serial programs in computational electromagnetics for use in a parallel environment. Existing algorithms for calculating the radar cross section of an object are covered, and a ray-tracing code is chosen for implementation on a parallel machine. Current parallel architectures are introduced and a suitable parallel machine is selected for the implementation of the chosen ray-tracing algorithm. The standard techniques for the parallelization of serial codes are discussed, including load balancing and decomposition considerations, and appropriate methods for the parallelization effort are selected. A load balancing algorithm is modified to increase the efficiency of the application, and a high level design of the structure of the serial program is presented. A detailed design of the modifications for the parallel implementation is also included, with both the high level and the detailed design specified in a high level design language called UNITY. The correctness of the design is proven using UNITY and standard logic operations. The theoretical and empirical results show that it is possible to achieve an efficient parallel application for a serial computational electromagnetic program where the characteristics of the algorithm and the target architecture critically influence the development of such an implementation.
Calculation of Cosmic Ray Induced Single Event Upsets: Program CRUP, Cosmic Ray Upset Program
1983-09-14
This report documents PROGRAM CRUP, the Cosmic Ray Upset Program, a computer program that calculates cosmic-ray-induced single event upsets.
A new device to test cutting efficiency of mechanical endodontic instruments.
Giansiracusa Rubini, Alessio; Plotino, Gianluca; Al-Sudani, Dina; Grande, Nicola M; Sonnino, Gianpaolo; Putorti, Ermanno; Cotti, Elisabetta; Testarelli, Luca; Gambarini, Gianluca
2014-03-06
The purpose of the present study was to introduce a new device specifically designed to evaluate the cutting efficiency of mechanically driven endodontic instruments. Twenty new Reciproc R25 (VDW, Munich, Germany) files were investigated in the new device developed to test the cutting ability of endodontic instruments. The device consists of a main frame to which a mobile plastic support for the hand-piece is connected and a stainless-steel block containing a Plexiglas block against which the cutting efficiency of the instruments was tested. The length of the block cut in 1 minute was measured in a computerized program with a precision of 0.1 mm. The instruments were activated by using a torque-controlled motor (Silver Reciproc; VDW, Munich, Germany) in a reciprocating movement by the "Reciproc ALL" program (Group 1) and in counter-clockwise rotation at 300 rpm (Group 2). Means and standard deviations of each group were calculated and the data were statistically analyzed with a one-way ANOVA test (P<0.05). The mean cut in the Plexiglas block for Reciproc in reciprocation (Group 1) was 8.6 mm (SD=0.6 mm), while the mean cut for Reciproc in rotation (Group 2) was 8.9 mm (SD=0.7 mm). There was no statistically significant difference between the 2 groups investigated (P>0.05). The cutting testing device evaluated in the present study was reliable and easy to use and may be effectively used to test the cutting efficiency of both rotary and reciprocating mechanical endodontic instruments.
Time and financial costs of programs for live trapping feral cats.
Nutter, Felicia B; Stoskopf, Michael K; Levine, Jay F
2004-11-01
To determine the time and financial costs of programs for live trapping feral cats and determine whether allowing cats to become acclimated to the traps improved trapping effectiveness. Prospective cohort study. 107 feral cats in 9 colonies. 15 traps were set at each colony for 5 consecutive nights, and 5 traps were then set per night until trapping was complete. In 4 colonies, traps were immediately baited and set; in the remaining 5 colonies, traps were left open and cats were fed in the traps for 3 days prior to the initiation of trapping. Costs for bait and labor were calculated, and trapping effort and efficiency were assessed. Mean +/- SD overall trapping effort (ie, number of trap-nights until at least 90% of the cats in the colony had been captured or until no more than 1 cat remained untrapped) was 8.9 +/- 3.9 trap-nights per cat captured. Mean overall trapping efficiency (ie, percentage of cats captured per colony) was 98.0 +/- 4.0%. There were no significant differences in trapping effort or efficiency between colonies that were provided an acclimation period and colonies that were not. Overall trapping costs were significantly higher for colonies provided an acclimation period. Results suggest that these live-trapping protocols were effective. Feeding cats their regular diets in the traps for 3 days prior to the initiation of trapping did not have a significant effect on trapping effort or efficiency in the present study but was associated with significant increases in trapping costs.
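The two summary statistics are simple ratios; for a hypothetical colony record:

```python
# Trapping effort and efficiency as defined in the study, computed for a
# made-up colony record (not data from the paper).
trap_nights = 80      # total trap-nights expended at the colony
cats_caught = 9
colony_size = 10

effort = trap_nights / cats_caught               # trap-nights per cat captured
efficiency = 100.0 * cats_caught / colony_size   # percent of colony captured
print(round(effort, 1), efficiency)              # 8.9 trap-nights/cat, 90.0%
```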
Computation of Reacting Flows in Combustion Processes
NASA Technical Reports Server (NTRS)
Keith, Theo G., Jr.; Chen, Kuo-Huey
1997-01-01
The main objective of this research was to develop an efficient three-dimensional computer code for chemically reacting flows. The main computer code developed is ALLSPD-3D. The ALLSPD-3D computer program is developed for the calculation of three-dimensional, chemically reacting flows with sprays. The ALLSPD code employs a coupled, strongly implicit solution procedure for turbulent spray combustion flows. A stochastic droplet model and an efficient method for treatment of the spray source terms in the gas-phase equations are used to calculate the evaporating liquid sprays. The chemistry treatment in the code is general enough that an arbitrary number of reactions and species can be defined by the user. Also, it is written in generalized curvilinear coordinates with both multi-block and flexible internal blockage capabilities to handle complex geometries. In addition, for general industrial combustion applications, the code provides both dilution and transpiration cooling capabilities. The ALLSPD algorithm, which employs preconditioning and eigenvalue rescaling techniques, is capable of providing efficient solutions for flows with a wide range of Mach numbers. Although written for three-dimensional flows in general, the code can be used for two-dimensional and axisymmetric flow computations as well. The code is written in such a way that it can be run on various computer platforms (supercomputers, workstations and parallel processors), and the GUI (Graphical User Interface) should provide a user-friendly tool for setting up and running the code.
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
SURE - SEMI-MARKOV UNRELIABILITY RANGE EVALUATOR (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
The Semi-Markov Unreliability Range Evaluator, SURE, is an analysis tool for reconfigurable, fault-tolerant systems. Traditional reliability analyses are based on aggregates of fault-handling and fault-occurrence models. SURE provides an efficient means for calculating accurate upper and lower bounds for the death state probabilities for a large class of semi-Markov models, not just those which can be reduced to critical-pair architectures. The calculated bounds are close enough (usually within 5 percent of each other) for use in reliability studies of ultra-reliable computer systems. The SURE bounding theorems have algebraic solutions and are consequently computationally efficient even for large and complex systems. SURE can optionally regard a specified parameter as a variable over a range of values, enabling an automatic sensitivity analysis. Highly reliable systems employ redundancy and reconfiguration as methods of ensuring operation. When such systems are modeled stochastically, some state transitions are orders of magnitude faster than others; that is, fault recovery is usually faster than fault arrival. SURE takes these time differences into account. Slow transitions are described by exponential functions and fast transitions are modeled by either the White or Lee theorems based on means, variances, and percentiles. The user must assign identifiers to every state in the system and define all transitions in the semi-Markov model. SURE input statements are composed of variables and constants related by FORTRAN-like operators such as =, +, *, SIN, EXP, etc. There are a dozen major commands such as READ, READO, SAVE, SHOW, PRUNE, TRUNCate, CALCulator, and RUN. Once the state transitions have been defined, SURE calculates the upper and lower probability bounds for entering specified death states within a specified mission time. SURE output is tabular. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST specification interface program (LAR-14193, LAR-14923), PAWS/STEM reliability analysis programs (LAR-14165, LAR-14920); and the FTC fault tree tool (LAR-14586, LAR-14922). FTC is used to calculate the top-event probability for a fault tree. PAWS/STEM and SURE are programs which interpret the same SURE language, but utilize different solution methods. ASSIST is a preprocessor that generates SURE language from a more abstract definition. SURE, ASSIST, and PAWS/STEM are also offered as a bundle. Please see the abstract for COS-10039/COS-10041, SARA - SURE/ASSIST Reliability Analysis Workstation, for pricing details. SURE was originally developed for DEC VAX series computers running VMS and was later ported for use on Sun computers running SunOS. The VMS version (LAR13789) is written in PASCAL, C-language, and FORTRAN 77. The standard distribution medium for the VMS version of SURE is a 9-track 1600 BPI magnetic tape in VMSINSTAL format. It is also available on a TK50 tape cartridge in VMSINSTAL format. Executables are included. The Sun UNIX version (LAR14921) is written in ANSI C-language and PASCAL. 
An ANSI compliant C compiler is required in order to compile the C portion of this package. The standard distribution medium for the Sun version of SURE is a .25 inch streaming magnetic tape cartridge in UNIX tar format. Both Sun3 and Sun4 executables are included. SURE was developed in 1988 and last updated in 1992. DEC, VAX, VMS, and TK50 are trademarks of Digital Equipment Corporation. TEMPLATE is a registered trademark of Template Graphics Software, Inc. UNIX is a registered trademark of AT&T Bell Laboratories. Sun3 and Sun4 are trademarks of Sun Microsystems, Inc.
NASA Astrophysics Data System (ADS)
Xu, Jiuping; Ma, Ning; Lv, Chengwei
2016-08-01
Efficient water transfer and allocation are critical for disaster mitigation in drought emergencies. This is especially important when the different interests of the multiple decision makers and the fluctuating water resource supply and demand simultaneously cause space and time conflicts. To achieve more effective and efficient water transfers and allocations, this paper proposes a novel optimization method with an integrated bi-level structure and a dynamic strategy, in which the bi-level structure works to deal with space dimension conflicts in drought emergencies, and the dynamic strategy is used to deal with time dimension conflicts. Combining these two optimization methods, however, makes calculation complex, so an integrated interactive fuzzy program and a PSO-POA are combined to develop a hybrid-heuristic algorithm. The successful application of the proposed model in a real world case region demonstrates its practicality and efficiency. Dynamic cooperation between multiple reservoirs under the coordination of a global regulator reflects the model's efficiency and effectiveness in drought emergency water transfer and allocation, especially in a fluctuating environment. On this basis, some corresponding management recommendations are proposed to improve practical operations.
NASA Technical Reports Server (NTRS)
Katsanis, T.
1994-01-01
This computer program was developed for calculating the subsonic or transonic flow on the hub-shroud mid-channel stream surface of a single blade row of a turbomachine. The design and analysis of blades for compressors and turbines ideally requires methods for analyzing unsteady, three-dimensional, turbulent viscous flow through a turbomachine. Since an exact solution is impossible at present, solutions on two-dimensional surfaces are calculated to obtain a quasi-three-dimensional solution. When three-dimensional effects are important, significant information can be obtained from a solution on a cross-sectional surface of the passage normal to the flow. With this program, a solution to the equations of flow on the meridional surface can be carried out. This solution is chosen when the turbomachine under consideration has significant variation in flow properties in the hub-shroud direction, especially when input is needed for use in blade-to-blade calculations. The program can also perform flow calculations for annular ducts without blades. This program should prove very useful in the design and analysis of any turbomachine. This program calculates a solution for two-dimensional, adiabatic, shock-free flow. The flow must be essentially subsonic, but there may be local areas of supersonic flow. To obtain the solution, this program uses both the finite difference and the quasi-orthogonal (velocity gradient) methods combined in a way that takes maximum advantage of both. The finite-difference method solves a finite-difference equation along the meridional stream surface in a very efficient manner but is limited to subsonic velocities. This approach must be used in cases where the blade aspect ratios are above one, cases where the passage is curved, and cases with low hub-tip-ratio blades. The quasi-orthogonal method solves the velocity gradient equation on the meridional surface and is used if it is necessary to extend the range of solutions into the transonic regime. In general the blade row may be fixed or rotating and the blades may be twisted and leaned. The flow may be axial, radial, or mixed. The upstream and downstream flow conditions can vary from hub to shroud with provisions made for an approximate correction for loss of stagnation pressure. Also, viscous forces are neglected along solution mesh lines running from hub to tip. The capabilities of this program include handling of nonaxial flows without restriction, annular ducts without blades, and specified streamwise loss distributions. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 360 computer with a central memory requirement of approximately 700K of 8 bit bytes. This core requirement can be reduced depending on the size of the problem and the desired solution accuracy. This program was developed in 1977.
Optimization design of wind turbine drive train based on Matlab genetic algorithm toolbox
NASA Astrophysics Data System (ADS)
Li, R. N.; Liu, X.; Liu, S. J.
2013-12-01
In order to ensure the high efficiency of the whole flexible drive train of the front-end speed-adjusting wind turbine, the working principle of the main part of the drive train is analyzed. As critical parameters, the rotating speed ratios of the three planetary gear trains are selected as the research subject. The mathematical model of the torque converter speed ratio is established based on these three critical variables, and the effect of the key parameters on the efficiency of the hydraulic mechanical transmission is analyzed. Based on the torque balance and the energy balance, and with reference to the hydraulic mechanical transmission characteristics, the transmission efficiency expression of the whole drive train is established. The fitness function and constraint functions are established based on the drive train transmission efficiency and the torque converter rotating speed ratio range, respectively, and the optimization calculation is carried out using the MATLAB genetic algorithm toolbox. The optimization method and results provide an optimization program for the exact matching of the wind turbine rotor, gearbox, hydraulic mechanical transmission, hydraulic torque converter and synchronous generator, ensure that the drive train works with high efficiency, and give a reference for the selection of the torque converter and hydraulic mechanical transmission.
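A compact genetic-algorithm loop in the spirit of the toolbox call might look as follows; the fitness function here is a made-up stand-in for the paper's transmission-efficiency model, and the three genes are the planetary-train speed ratios:

```python
# Minimal GA sketch (selection, blend crossover, Gaussian mutation, elitism).
# The efficiency function and ratio bounds are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)
lo, hi = np.array([1.0, 1.0, 1.0]), np.array([5.0, 5.0, 5.0])  # assumed bounds

def efficiency(x):                      # hypothetical fitness, peak at (2, 3, 4)
    return np.exp(-np.sum((x - np.array([2.0, 3.0, 4.0]))**2, axis=-1))

pop = rng.uniform(lo, hi, size=(60, 3))
for gen in range(100):
    fit = efficiency(pop)
    parents = pop[rng.choice(60, size=60, p=fit / fit.sum())]  # proportional selection
    children = 0.5 * (parents + parents[rng.permutation(60)])   # blend crossover
    children += rng.normal(scale=0.1, size=children.shape)      # Gaussian mutation
    pop = np.clip(children, lo, hi)
    pop[0] = parents[np.argmax(efficiency(parents))]            # elitism

print(pop[np.argmax(efficiency(pop))])  # best speed-ratio triple found
```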
Li, Pei-Nan; Li, Hong; Wu, Mo-Li; Wang, Shou-Yu; Kong, Qing-You; Zhang, Zhen; Sun, Yuan; Liu, Jia; Lv, De-Cheng
2012-01-01
Wound measurement is an objective and direct way to trace the course of wound healing and to evaluate therapeutic efficacy. Nevertheless, the accuracy and efficiency of the current measurement methods need to be improved. Taking advantage of the reliability of transparency tracing and the accuracy of computer-aided digital imaging, a transparency-based digital imaging approach is established, by which data from 340 wound tracings were collected from 6 experimental groups (8 rats/group) at 8 experimental time points (Days 1, 3, 5, 7, 10, 12, 14 and 16) and orderly archived onto a transparency model sheet. This sheet was scanned and its image was saved in JPG form. Since a set of standard area units from 1 mm2 to 1 cm2 was integrated into the sheet, the tracing areas in the JPG image were measured directly, using the “Magnetic lasso tool” in the Adobe Photoshop program. The pixel values (PVs) of individual outlined regions were obtained and recorded at an average speed of 27 seconds per region. All PV data were saved in an Excel form and their corresponding areas were calculated simultaneously by the formula Y (PV of the outlined region)/X (PV of standard area unit) × Z (area of standard unit). It took a researcher less than 3 hours to finish the area calculation of 340 regions. In contrast, over 3 hours were expended by three skillful researchers to accomplish the above work with the traditional transparency-based method. Moreover, unlike the results obtained traditionally, little variation was found among the data calculated by different persons and with standard area units of different sizes and shapes. Given its accurate, reproducible and efficient properties, this transparency-based digital imaging approach would be of significant value in basic wound healing research and clinical practice. PMID:22666449
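The area conversion is the formula quoted above, Y/X × Z; with illustrative pixel values:

```python
# Area of one traced region via the Y/X * Z formula from the text.
# The pixel values below are illustrative, not data from the study.
pv_region = 5412.0       # Y: pixel value of the outlined wound tracing
pv_unit = 621.0          # X: pixel value of the 1 cm^2 standard area unit
unit_area_cm2 = 1.0      # Z: area of that standard unit

area_cm2 = pv_region / pv_unit * unit_area_cm2
print(round(area_cm2, 2))   # about 8.71 cm^2
```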
Efficient grid-based techniques for density functional theory
NASA Astrophysics Data System (ADS)
Rodriguez-Hernandez, Juan Ignacio
Understanding the chemical and physical properties of molecules and materials at a fundamental level often requires quantum-mechanical models of these substances' electronic structure. This type of many-body quantum mechanics calculation is computationally demanding, hindering its application to substances with more than a few hundred atoms. A central goal of much research in quantum chemistry, and the topic of this dissertation, is to develop more efficient computational algorithms for electronic structure calculations. In particular, this dissertation develops two new numerical integration techniques for computing molecular and atomic properties within conventional Kohn-Sham Density Functional Theory (KS-DFT) of molecular electronic structure. The first of these grid-based techniques is based on the transformed sparse grid construction. In this construction, a sparse grid is generated in the unit cube and then mapped to real space according to the pro-molecular density using the conditional distribution transformation. The transformed sparse grid was implemented in the program deMon2k, where it is used as the numerical integrator for the exchange-correlation energy and potential in the KS-DFT procedure. We tested our grid by computing ground state energies, equilibrium geometries, and atomization energies. The accuracy of these test calculations shows that our grid is more efficient than some previous integration methods: our grids use fewer points to obtain the same accuracy. The transformed sparse grids were also tested for integrating, interpolating and differentiating in different dimensions (n = 1, 2, 3, 6). The second technique is a grid-based method for computing atomic properties within QTAIM. It was also implemented in deMon2k. The performance of the method was tested by computing QTAIM atomic energies, charges, dipole moments, and quadrupole moments. For medium accuracy, our method is the fastest one we know of.
On the use of bismuth as a neutron filter
NASA Astrophysics Data System (ADS)
Adib, M.; Kilany, M.
2003-02-01
A formula is given which, for neutron energies in the range 10^-4 < E < 10 eV, permits calculation of the nuclear capture, thermal diffuse and Bragg scattering cross-sections as a function of bismuth temperature and crystalline form. Computer programs have been developed which allow calculations for the Bi rhombohedral structure in its polycrystalline form and its equivalent hexagonal close-packed structure. The calculated total neutron cross-sections for polycrystalline Bi at different temperatures were compared with the measured values. Overall agreement is indicated between the formula fits and the experimental data. Agreement was also obtained for Bi single crystals at room and liquid-nitrogen temperatures. A feasibility study for the use of Bi in powdered form as a cold neutron filter is detailed in terms of the optimum Bi single-crystal thickness, mosaic spread, temperature and cutting plane for efficient transmission of thermal-reactor neutrons, and also for rejection of the accompanying fast neutrons and gamma rays.
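The filtering principle rests on the Bragg cutoff: no coherent Bragg scattering is possible for wavelengths longer than twice the largest d-spacing, so neutrons below the corresponding energy pass with only capture and thermal-diffuse attenuation. The arithmetic, with an illustrative (not paper-fitted) d_max:

```python
# Bragg-cutoff energy for a polycrystalline filter: reflections vanish for
# lambda > 2*d_max. Uses E[meV] = 81.81 / lambda[A]^2 for thermal neutrons.
def bragg_cutoff_energy_meV(d_max_angstrom):
    lam_max = 2.0 * d_max_angstrom     # longest Bragg-active wavelength [Angstrom]
    return 81.81 / lam_max**2          # corresponding neutron energy [meV]

# d_max ~ 3.95 Angstrom is an illustrative value for a large Bi d-spacing:
print(bragg_cutoff_energy_meV(3.95))   # ~1.3 meV; colder neutrons are transmitted
```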
NASA Astrophysics Data System (ADS)
Mabu, Shingo; Hirasawa, Kotaro; Furuzuki, Takayuki
Genetic Network Programming (GNP) is an evolutionary computation method which represents its solutions using graph structures. Since GNP can create quite compact programs and has an implicit memory function, it has been shown to work well especially in dynamic environments. In addition, a study on creating trading rules for stock markets using GNP with an Importance Index (GNP-IMX) has been carried out. IMX is a new element which serves as a criterion for decision making. In this paper, we combine GNP-IMX with Actor-Critic (GNP-IMX&AC) to create trading rules for stock markets. Evolution-based methods can update their programs only after a sufficient period of time, because fitness values must be calculated over that period; reinforcement learning, by contrast, can change programs during the period, so trading rules can be created more efficiently. In the simulation, the proposed method is trained using the stock prices of 10 brands in 2002 and 2003. Then the generalization ability is tested using the stock prices in 2004. The simulation results show that the proposed method can obtain larger profits than GNP-IMX without AC and Buy&Hold.
Petrenko, Taras; Kossmann, Simone; Neese, Frank
2011-02-07
In this paper, we present the implementation of efficient approximations to time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation (TDA) for hybrid density functionals. For the calculation of the TDDFT/TDA excitation energies and analytical gradients, we combine the resolution of identity (RI-J) algorithm for the computation of the Coulomb terms and the recently introduced "chain of spheres exchange" (COSX) algorithm for the calculation of the exchange terms. It is shown that for extended basis sets, the RIJCOSX approximation leads to speedups of up to 2 orders of magnitude compared to traditional methods, as demonstrated for hydrocarbon chains. The accuracy of the adiabatic transition energies, excited state structures, and vibrational frequencies is assessed on a set of 27 excited states for 25 molecules with the configuration interaction singles and hybrid TDDFT/TDA methods using various basis sets. Compared to the canonical values, the typical error in transition energies is of the order of 0.01 eV. Similar to the ground-state results, excited state equilibrium geometries differ by less than 0.3 pm in the bond distances and 0.5° in the bond angles from the canonical values. The typical error in the calculated excited state normal coordinate displacements is of the order of 0.01, and relative error in the calculated excited state vibrational frequencies is less than 1%. The errors introduced by the RIJCOSX approximation are, thus, insignificant compared to the errors related to the approximate nature of the TDDFT methods and basis set truncation. For TDDFT/TDA energy and gradient calculations on Ag-TB2-helicate (156 atoms, 2732 basis functions), it is demonstrated that the COSX algorithm parallelizes almost perfectly (speedup ~26-29 for 30 processors). The exchange-correlation terms also parallelize well (speedup ~27-29 for 30 processors). The solution of the Z-vector equations shows a speedup of ~24 on 30 processors. The parallelization efficiency for the Coulomb terms can be somewhat smaller (speedup ~15-25 for 30 processors), but their contribution to the total calculation time is small. Thus, the parallel program completes a Becke3-Lee-Yang-Parr energy and gradient calculation on the Ag-TB2-helicate in less than 4 h on 30 processors. We also present the necessary extension of the Lagrangian formalism, which enables the calculation of the TDDFT excited state properties in the frozen-core approximation. The algorithms described in this work are implemented into the ORCA electronic structure system.
Evaluation of high-perimeter electrode designs for deep brain stimulation
NASA Astrophysics Data System (ADS)
Howell, Bryan; Grill, Warren M.
2014-08-01
Objective. Deep brain stimulation (DBS) is an effective treatment for movement disorders and a promising therapy for treating epilepsy and psychiatric disorders. Despite its clinical success, complications including infections and mis-programming following surgical replacement of the battery-powered implantable pulse generator adversely impact the safety profile of this therapy. We sought to decrease power consumption and extend battery life by modifying the electrode geometry to increase stimulation efficiency. The specific goal of this study was to determine whether electrode contact perimeter or area had a greater effect on increasing stimulation efficiency. Approach. Finite-element method (FEM) models of eight prototype electrode designs were used to calculate the electrode access resistance, and the FEM models were coupled with cable models of passing axons to quantify stimulation efficiency. We also measured in vitro the electrical properties of the prototype electrode designs and measured in vivo the stimulation efficiency following acute implantation in anesthetized cats. Main results. Area had a greater effect than perimeter on altering the electrode access resistance; electrode (access or dynamic) resistance alone did not predict stimulation efficiency because efficiency was dependent on the shape of the potential distribution in the tissue; and quantitative assessment of stimulation efficiency required consideration of the effects of the electrode-tissue interface impedance. Significance. These results advance understanding of the features of electrode geometry that are important for designing the next generation of efficient DBS electrodes.
Efficient and scalable graph similarity joins in MapReduce.
Chen, Yifan; Zhao, Xiang; Xiao, Chuan; Zhang, Weiming; Tang, Jiuyang
2014-01-01
Along with the emergence of massive graph-modeled data, it is of great importance to investigate graph similarity joins due to their wide applications for multiple purposes, including data cleaning and near-duplicate detection. This paper considers graph similarity joins with edit distance constraints, which return pairs of graphs whose graph edit distance (GED) is no larger than a given threshold. Leveraging the MapReduce programming model, we propose MGSJoin, a scalable algorithm following the filtering-verification framework for efficient graph similarity joins. It relies on counting overlapping graph signatures to filter out nonpromising candidates. To address the potential issue of too many key-value pairs in the filtering phase, spectral Bloom filters are introduced to reduce their number. Furthermore, we integrate a multiway join strategy to speed up verification, for which a MapReduce-based method for GED calculation is proposed. The superior efficiency and scalability of the proposed algorithms are demonstrated by extensive experimental results.
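The filtering phase of such a framework prunes pairs whose signature overlap is too small to be within the edit-distance threshold, so expensive GED verification runs only on survivors. A minimal single-machine Python sketch of count-filtering; the star-signature choice and the per-edit loss bound are simplified assumptions, not MGSJoin's exact scheme:

```python
from collections import Counter

def signatures(graph):
    """Toy star signatures: (node label, sorted multiset of neighbor labels).
    graph: {node_id: (label, [neighbor labels])}"""
    sigs = Counter()
    for label, nbr_labels in graph.values():
        sigs[(label, tuple(sorted(nbr_labels)))] += 1
    return sigs

def passes_filter(g1, g2, tau, per_edit_loss=3):
    """Count filter: assume one edit operation destroys at most
    `per_edit_loss` star signatures, so pairs within GED tau must
    retain a minimum signature overlap."""
    s1, s2 = signatures(g1), signatures(g2)
    overlap = sum((s1 & s2).values())
    needed = max(sum(s1.values()), sum(s2.values())) - tau * per_edit_loss
    return overlap >= needed

g1 = {0: ("C", ["O", "N"]), 1: ("O", ["C"]), 2: ("N", ["C"])}
g2 = {0: ("C", ["O", "N"]), 1: ("O", ["C"]), 2: ("N", ["C"])}
print(passes_filter(g1, g2, tau=1))  # True: identical graphs survive the filter
```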
Using RFID Positioning Technology to Construct an Automatic Rehabilitation Scheduling Mechanism.
Wang, Ching-Sheng; Hung, Lun-Ping; Yen, Neil Y
2016-01-01
Accurately and efficiently identifying the location of patients during the course of rehabilitation is an important issue, and wireless transmission technology can achieve this goal. Tracking technologies such as RFID (radio frequency identification) can support process improvement and increase the efficiency of rehabilitation. Few published models or methods address the positioning problem and apply this technology in rehabilitation centers. We propose a mechanism that enhances the accuracy of the positioning technology, provides information about turns and obstacles on the path, and offers user-centered, location-aware services to enhance the quality of care in the rehabilitation environment. This paper outlines the requirements and the role of RFID in assisting the rehabilitation environment. A prototype RFID hospital support tool was established, designed to provide assistance in monitoring rehabilitation patients. It can simultaneously calculate the rehabilitant's location and the duration of treatment, and automatically record the rehabilitation course of the rehabilitant, so as to improve the management efficiency of the rehabilitation program.
Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G
2014-09-05
A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization.
Structural Optimization Methodology for Rotating Disks of Aircraft Engines
NASA Technical Reports Server (NTRS)
Armand, Sasan C.
1995-01-01
In support of the preliminary evaluation of various engine technologies, a methodology has been developed for structurally designing the rotating disks of an aircraft engine. The structural design methodology, along with a previously derived methodology for predicting low-cycle fatigue life, was implemented in a computer program. An interface computer program was also developed that gathers the required data from a flowpath analysis program (WATE) being used at NASA Lewis. The computer program developed for this study requires minimal interaction with the user, thus allowing engineers with varying backgrounds in aeropropulsion to execute it successfully. The stress analysis portion of the methodology and the computer program were verified by employing the finite element analysis method. The 10th-stage high-pressure-compressor disk of the Energy Efficient Engine Program (E3) engine was used to verify the stress analysis; the differences between the stresses and displacements obtained from the computer program developed for this study and from the finite element analysis were all below 3 percent for the problem solved. The computer program developed for this study was employed to structurally optimize the rotating disks of the E3 high-pressure compressor. The rotating disks designed by the computer program in this study were approximately 26 percent lighter than those calculated from the E3 drawings. The methodology is presented herein.
Hybrid thermocouple development program
NASA Technical Reports Server (NTRS)
Garvey, L. P.; Krebs, T. R.; Lee, E.
1971-01-01
The design and development of a hybrid thermocouple, having a segmented SiGe-PbTe n-leg encapsulated within a hollow cylindrical p-SiGe leg, is described. Hybrid couple efficiency is calculated to be 10% to 15% better than that of an all-SiGe couple. A preliminary design of a planar RTG, employing hybrid couples and a water heat pipe radiator, is described as an example of a possible system application. Hybrid couples fabricated initially were characterized by higher than predicted resistance and, in some cases, bond separations. Couples made later in the program, using improved fabrication techniques, exhibited normal resistances, both as-fabricated and after 700 hours of testing. Two flat-plate sections of the reference design thermoelectric converter were fabricated and delivered to NASA Lewis for testing and evaluation.
Developing Matlab scripts for image analysis and quality assessment
NASA Astrophysics Data System (ADS)
Vaiopoulos, A. D.
2011-11-01
Image processing is a very helpful tool in many fields of modern science that involve digital imaging examination and interpretation. Processed images, however, often need to be correlated with the original image in order to ensure that the resulting image fulfills its purpose. Aside from visual examination, which is mandatory, image quality indices (such as the correlation coefficient, entropy, and others) are very useful when deciding which processed image is the most satisfactory. For this reason, a single program (script) was written in the Matlab language which automatically calculates eight indices by utilizing eight respective functions (independent function scripts). The program was tested on both fused hyperspectral (Hyperion-ALI) and multispectral (ALI, Landsat) imagery and proved to be efficient. The indices were found to be in agreement with visual examination and statistical observations.
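Two of the indices named, the correlation coefficient between original and processed image and the entropy of the processed image, can be computed in a few lines. A sketch in Python/NumPy rather than the author's Matlab; the function names are ours:

```python
import numpy as np

def correlation_coefficient(original, processed):
    """Pearson correlation between two images of equal size."""
    return np.corrcoef(original.ravel(), processed.ravel())[0, 1]

def shannon_entropy(image, levels=256):
    """Entropy in bits/pixel from the grey-level histogram."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins so log2 is defined
    return -np.sum(p * np.log2(p))
```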
Hazard calculations of diffuse reflected laser radiation for the SELENE program
NASA Technical Reports Server (NTRS)
Miner, Gilda A.; Babb, Phillip D.
1993-01-01
The hazards from diffuse laser light reflections off water clouds, ice clouds, and fog, and from possible specular reflections off ice clouds, were assessed with the American National Standard (ANSI Z136.1-1986) for the free-electron-laser parameters under consideration for the Segmented Efficient Laser Emission for Non-Nuclear Electricity (SELENE) Program. Diffuse laser reflection hazards exist for water cloud surfaces below 722 m in altitude and ice cloud surfaces below 850 m in altitude. Specular reflections from ice crystals in cirrus clouds are not probable; however, any specular reflection is a hazard to ground observers. The hazard to the laser operators and any ground observers during heavy fog conditions is of such significant magnitude that the laser should not be operated in fog.
Factorizing the motion sensitivity function into equivalent input noise and calculation efficiency.
Allard, Rémy; Arleo, Angelo
2017-01-01
The photopic motion sensitivity function of the energy-based motion system is band-pass, peaking around 8 Hz. Using an external noise paradigm to factorize sensitivity into equivalent input noise and calculation efficiency, the present study investigated whether the variation in photopic motion sensitivity as a function of temporal frequency is due to a variation of equivalent input noise (e.g., early temporal filtering) or of calculation efficiency (the ability to select and integrate motion). For various temporal frequencies, contrast thresholds for a direction discrimination task were measured in the presence and absence of noise. Up to 15 Hz, the sensitivity variation was mainly due to a variation of equivalent input noise, and little variation in calculation efficiency was observed. The sensitivity fall-off at very high temporal frequencies (from 15 to 30 Hz) was due to a combination of a drop in calculation efficiency and a rise in equivalent input noise. A control experiment, in which an artificial temporal integration was applied to the stimulus, showed that an early temporal filter (generally assumed to affect equivalent input noise, not calculation efficiency) could impair both the calculation efficiency and the equivalent input noise at very high temporal frequencies. We conclude that at the photopic luminance intensity tested, the variation of motion sensitivity as a function of temporal frequency was mainly due to early temporal filtering, not to the ability to select and integrate motion. More specifically, we conclude that photopic motion sensitivity at high temporal frequencies is limited by internal noise occurring after the transduction process (i.e., neural noise), not by quantal noise resulting from the probabilistic absorption of photons by the photoreceptors, as previously suggested.
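Under the standard linear-amplifier assumption behind this paradigm, squared contrast thresholds grow linearly with external noise power, c²(N) = k(N + N_eq), so a straight-line fit yields the equivalent input noise from the intercept/slope ratio while the slope is inversely related to calculation efficiency. A minimal sketch with illustrative data (not the study's measurements):

```python
import numpy as np

# external noise power N and measured squared contrast thresholds c^2
N = np.array([0.0, 1e-6, 4e-6, 1.6e-5])   # illustrative values
c2 = np.array([2e-6, 3e-6, 6e-6, 1.8e-5])

slope, intercept = np.polyfit(N, c2, 1)    # c2 = slope*N + slope*N_eq
N_eq = intercept / slope                   # equivalent input noise
efficiency = 1.0 / slope                   # up to a task-dependent constant
print(f"N_eq = {N_eq:.2e}, efficiency proportional to {efficiency:.2e}")
```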
DASS: efficient discovery and p-value calculation of substructures in unordered data.
Hollunder, Jens; Friedel, Maik; Beyer, Andreas; Workman, Christopher T; Wilhelm, Thomas
2007-01-01
Pattern identification in biological sequence data is one of the main objectives of bioinformatics research. However, few methods are available for detecting patterns (substructures) in unordered datasets. Data mining algorithms developed mainly outside the realm of bioinformatics have been adapted for that purpose, but typically do not determine the statistical significance of the identified patterns. Moreover, these algorithms do not exploit the often modular structure of biological data. We present the algorithm DASS (Discovery of All Significant Substructures), which first identifies all substructures in unordered data (DASS(Sub)) in a manner that is especially efficient for modular data. In addition, DASS calculates the statistical significance of the identified substructures, either for sets with at most one element of each type (DASS(P(set))) or for sets with multiple occurrences of elements (DASS(P(mset))). The power and versatility of DASS is demonstrated by four examples: combinations of protein domains in multi-domain proteins, combinations of proteins in protein complexes (protein subcomplexes), combinations of transcription factor target sites in promoter regions, and evolutionarily conserved protein interaction subnetworks. The program code and additional data are available at http://www.fli-leibniz.de/tsb/DASS
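The significance side of such an analysis can be illustrated with a hypergeometric co-occurrence test: given that one element occurs in K of M sets and another in n of them, the chance of seeing them together in at least k sets follows the hypergeometric distribution. A sketch of this simpler stand-in (not the exact DASS statistic):

```python
from scipy.stats import hypergeom

def cooccurrence_pvalue(M, K, n, k):
    """P(X >= k) for co-occurrence of two elements across M sets,
    where one occurs K times, the other n times, together k times."""
    return hypergeom.sf(k - 1, M, K, n)

# 1000 proteins; domain A in 50, domain B in 40, together in 10
print(cooccurrence_pvalue(1000, 50, 40, 10))  # small value => significant
```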
Yamagata, Yoshitaka; Terada, Yuko; Suzuki, Atsushi; Mimura, Osamu
2010-01-01
The visual efficiency scale currently adopted to determine the legal grade of visual disability associated with visual field loss in Japan is not appropriate for evaluating disability in activities of daily living. We investigated whether the Esterman disability score (EDS) is suitable for assessing mobility difficulty in patients with visual field loss. The correlation between the EDS calculated from Goldmann kinetic visual fields and the degree of subjective mobility difficulty determined by a questionnaire was investigated in 164 patients with visual field loss. The correlation between the EDS determined using a program built into the Humphrey field analyzer and that calculated from Goldmann kinetic visual fields was also investigated. The EDS based on the kinetic visual field correlated well with the degree of subjective mobility difficulty, and the EDS measured using the Humphrey field analyzer could be estimated from the kinetic visual field-based EDS. Instead of the currently adopted visual efficiency scale, the EDS should be employed for the assessment of mobility difficulty in patients with visual field loss, and also to establish new judgment criteria concerning the visual field.
Automated symbolic calculations in nonequilibrium thermodynamics
NASA Astrophysics Data System (ADS)
Kröger, Martin; Hütter, Markus
2010-12-01
We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitely and efficiently and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica™ notebook which allows one to perform this task conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics.
Program summary
Program title: Poissonbracket.nb
Catalogue identifier: AEGW_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 227 952
No. of bytes in distributed program, including test data, etc.: 268 918
Distribution format: tar.gz
Programming language: Mathematica™ 7.0
Computer: Any computer running Mathematica™ 6.0 and later versions
Operating system: Linux, MacOS, Windows
RAM: 100 Mb
Classification: 4.2, 5, 23
Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica™ notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form.
Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica™.
Running time: For the test cases and most typical cases in the literature, the running time is of the order of seconds or minutes, respectively.
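For finite-dimensional brackets the same consistency test can be scripted in any symbolic system. A toy analogue in Python/SymPy checking the Jacobi identity for the canonical bracket in one degree of freedom (the notebook itself handles the harder field-theoretic case via variational derivatives):

```python
import sympy as sp

q, p = sp.symbols('q p')

def pb(f, g):
    """Canonical Poisson bracket {f, g} in one degree of freedom."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

f, g, h = q**2 * p, sp.sin(q) + p**2, q * p**3
jacobi = pb(f, pb(g, h)) + pb(g, pb(h, f)) + pb(h, pb(f, g))
print(sp.simplify(jacobi))  # 0: the canonical bracket satisfies Jacobi
```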
Cutting efficiency of nickel-titanium rotary and reciprocating instruments after prolonged use.
Gambarini, Gianluca; Giansiracusa Rubini, Alessio; Sannino, Giampaolo; Di Giorgio, Gianni; Di Giorgio, Fabrizio; Piasecki, Lucila; Al-Sudani, Dina; Plotino, Gianluca; Testarelli, Luca
2016-01-01
The aim of the present study was to compare the cutting efficiency of Twisted File instruments used in continuous rotation or TF Adaptive motion, and to evaluate whether prolonged use significantly affected their cutting ability. Twenty new NiTi instruments (TF tip size 35, 0.06 taper; Sybron-Endo, Orange, CA, USA) were used, divided into 2 subgroups of 10 instruments each, depending on which movement was selected on the endodontic motor. Group 1: TF instruments were activated using the TF continuous rotation program at 500 rpm with torque set at 2 N·cm; Group 2: TF instruments were activated using the reciprocating TF Adaptive motion. Cutting efficiency was tested in a device developed to test the cutting ability of endodontic instruments. Each instrument cut 10 plastic blocks (10 uses), and the length of the surface cut in a plastic block after 1 min was measured in a computerized program with a precision of 0.1 mm. Maximum penetration depth was calculated after 1 use and after 10 uses, and the mean and standard deviation (SD) of each group were calculated. Data were statistically analyzed with a one-way ANOVA test (P < 0.05). TF instruments used in continuous rotation (Group 1) cut to a mean depth of 10.4 mm (SD = 0.6 mm) after the first use and 10.1 mm (SD = 1.1 mm) after 10 uses, while TF instruments used with the Adaptive motion cut to a mean depth of 9.9 mm (SD = 0.7 mm) after the first use and 9.6 mm (SD = 0.9 mm) after 10 uses. There was no statistically significant difference between the two groups investigated (P > 0.05), nor between instruments after 1 and 10 uses. In conclusion, the TF Adaptive motion showed a lateral cutting ability similar to continuous rotation, and all tested instruments retained the same cutting ability after prolonged use.
Tian, Ye; Schwieters, Charles D; Opella, Stanley J; Marassi, Francesca M
2017-01-01
Structure determination of proteins by NMR is unique in its ability to measure restraints, very accurately, in environments and under conditions that closely mimic those encountered in vivo. For example, advances in solid-state NMR methods enable structure determination of membrane proteins in detergent-free lipid bilayers, and of large soluble proteins prepared by sedimentation, while parallel advances in solution NMR methods and optimization of detergent-free lipid nanodiscs are rapidly pushing the envelope of the size limit for both soluble and membrane proteins. These experimental advantages, however, are partially squandered during structure calculation, because the commonly used force fields are purely repulsive and neglect solvation, van der Waals forces and electrostatic energy. Here we describe a new force field, and updated energy functions, for protein structure calculations with EEFx implicit solvation, electrostatics, and van der Waals Lennard-Jones forces, in the widely used program Xplor-NIH. The new force field is based primarily on CHARMM22, facilitating calculations with a wider range of biomolecules. The new EEFx energy function has been rewritten to enable OpenMP parallelism, and optimized to enhance computation efficiency. It implements solvation, electrostatics, and van der Waals energy terms together, thus ensuring more consistent and efficient computation of the complete nonbonded energy lists. Updates in the related Python module allow detailed analysis of the interaction energies and associated parameters. The new force field and energy function work with both soluble proteins and membrane proteins, including those with cofactors or engineered tags, and are very effective in situations where there are sparse experimental restraints. Results obtained for NMR-restrained calculations with a set of five soluble proteins and five membrane proteins show that structures calculated with EEFx have significant improvements in accuracy, precision, and conformation, and that structure refinement can be obtained by short relaxation with EEFx to obtain improvements in these key metrics. These developments broaden the range of biomolecular structures that can be calculated with high fidelity from NMR restraints.
Second derivative in the model of classical binary system
NASA Astrophysics Data System (ADS)
Abubekerov, M. K.; Gostev, N. Yu.
2016-06-01
We have obtained analytical expressions for the second derivatives of the light curve with respect to the geometric parameters in the model of eclipsing classical binary systems. These expressions constitute an efficient algorithm for calculating the numerical values of these second derivatives for all physical values of the geometric parameters. Knowledge of the values of the second derivatives of the light curve at some point provides additional information about the asymptotic behavior of the function near this point and can significantly improve the search for the best-fitting light curve through the use of second-order optimization methods. We write the expressions for the second derivatives in a form which is most compact and uniform for all values of the geometric parameters, making it easy to write a computer program to calculate the values of these derivatives.
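The practical payoff of analytical second derivatives is that a Newton-type fit can use an exact Hessian of the chi-square rather than a finite-difference one. A generic sketch of the resulting second-order step; the toy exponential model stands in for the light-curve model and its analytical derivatives:

```python
import numpy as np

def newton_step(params, model, grad, hess, t, y, sigma):
    """One Newton step for chi^2 = sum(((y - model)/sigma)^2), using
    analytical first and second derivatives of the model."""
    r = (y - model(t, params)) / sigma
    J = grad(t, params) / sigma[:, None]           # d model / d params
    H2 = hess(t, params) / sigma[:, None, None]    # second derivatives
    g = -2.0 * J.T @ r                             # chi^2 gradient
    H = 2.0 * (J.T @ J - np.einsum('i,ijk->jk', r, H2))
    return params - np.linalg.solve(H, g)

# toy model y = a * exp(-b * t) with its exact derivatives
model = lambda t, p: p[0] * np.exp(-p[1] * t)
grad = lambda t, p: np.stack([np.exp(-p[1] * t),
                              -p[0] * t * np.exp(-p[1] * t)], axis=1)
def hess(t, p):
    e = np.exp(-p[1] * t)
    H = np.zeros((t.size, 2, 2))
    H[:, 0, 1] = H[:, 1, 0] = -t * e
    H[:, 1, 1] = p[0] * t**2 * e
    return H

t = np.linspace(0.0, 2.0, 20)
y = model(t, np.array([2.0, 1.5]))
sigma = np.full_like(t, 0.01)
p = np.array([1.8, 1.2])
for _ in range(5):
    p = newton_step(p, model, grad, hess, t, y, sigma)
print(p)  # converges toward [2.0, 1.5]
```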
Modeling and analysis of the solar concentrator in photovoltaic systems
NASA Astrophysics Data System (ADS)
Mroczka, Janusz; Plachta, Kamil
2015-06-01
The paper presents Λ-ridge and V-trough concentrator systems with a low concentration ratio. Calculations and simulations were made in a program created by the authors. The simulation results allow selection of the best parameters of the photovoltaic system: the opening angle between the surface of the photovoltaic module and the mirrors, the resolution of the tracking system, and the material for construction of the concentrator mirrors. The research shows the effect of each of these parameters on the efficiency of the photovoltaic system, and a method of surface modeling using the BRDF function. The parameters of the concentrator surface (e.g., surface roughness) were calculated using a new algorithm based on the BRDF function; the algorithm uses a combination of the Torrance-Sparrow and HTSG models. The simulation shows the change in voltage, current, and output power depending on the system parameters.
Prediction of energy balance and utilization for solar electric cars
NASA Astrophysics Data System (ADS)
Cheng, K.; Guo, L. M.; Wang, Y. K.; Zafar, M. T.
2017-11-01
Solar irradiation and ambient temperature vary by region, season, and time of day, which directly affects the performance of a solar-energy-based car system. In this paper, the solar electric car model is based in Xi'an. Firstly, the meteorological data are modelled to simulate the change of solar irradiation and ambient temperature, and the temperature change of the solar cell is then calculated using a thermal equilibrium relation. This work builds on driving-resistance and solar-cell power-generation models, which are simulated under the radiation conditions varying over a day. The daily power generation and the cruise mileage of the solar electric car can be predicted by calculating the solar cell efficiency and power. The theoretical approach and results can be used for solar electric car program design and optimization in future developments.
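The core of such a prediction is that cell efficiency falls roughly linearly with cell temperature, so daily generation follows from integrating irradiance-dependent power over time. A minimal sketch using the common NOCT cell-temperature estimate and a linear efficiency derating; all parameter values are illustrative assumptions, not the paper's:

```python
def cell_power(G, T_amb, area=6.0, eta_ref=0.22,
               beta=0.0045, T_ref=25.0, NOCT=45.0):
    """Panel power [W] from irradiance G [W/m^2] and ambient T [degC].
    Cell temperature via the standard NOCT estimate; efficiency
    derated linearly with temperature."""
    T_cell = T_amb + (NOCT - 20.0) / 800.0 * G
    eta = eta_ref * (1.0 - beta * (T_cell - T_ref))
    return eta * area * G

# daily energy [Wh] from hourly (irradiance, temperature) records
hours = [(0, 15), (300, 18), (700, 24), (900, 28), (600, 26), (100, 20)]
print(sum(cell_power(G, T) for G, T in hours))  # 1-hour steps
```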
NASA Astrophysics Data System (ADS)
Owens, Alec; Yachmenev, Andrey
2018-03-01
In this paper, a general variational approach for computing the rovibrational dynamics of polyatomic molecules in the presence of external electric fields is presented. Highly accurate, full-dimensional variational calculations provide a basis of field-free rovibrational states for evaluating the rovibrational matrix elements of high-rank Cartesian tensor operators and for solving the time-dependent Schrödinger equation. The effect of the external electric field is treated as a multipole moment expansion truncated at the second hyperpolarizability interaction term. Our fully numerical and computationally efficient method has been implemented in a new program, RichMol, which can simulate the effects of multiple external fields of arbitrary strength, polarization, pulse shape, and duration. Illustrative calculations of two-color orientation and rotational excitation with an optical centrifuge of NH3 are discussed.
Computational methods for yeast prion curing curves.
Ridout, Martin S
2008-10-01
If the chemical guanidine hydrochloride is added to a dividing culture of yeast cells in which some of the protein Sup35p is in its prion form, the proportion of cells that carry replicating units of the prion, termed propagons, decreases gradually over time. Stochastic models to describe this process of 'curing' have been developed in earlier work. The present paper investigates the use of numerical methods of Laplace transform inversion to calculate curing curves and contrasts this with an alternative, more direct, approach that involves numerical integration. Transform inversion is found to provide a much more efficient computational approach that allows different models to be investigated with minimal programming effort. The method is used to investigate the robustness of the curing curve to changes in the assumed distribution of cell generation times. Matlab code is available for carrying out the calculations.
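The paper does not specify which inversion scheme it uses, but Gaver-Stehfest is one widely used numerical Laplace inversion method that fits this setting. A Python sketch, checked against a known transform pair:

```python
import math

def stehfest_weights(N):
    """Gaver-Stehfest coefficients V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j) /
                  (math.factorial(N // 2 - j) * math.factorial(j) *
                   math.factorial(j - 1) * math.factorial(k - j) *
                   math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert(F, t, N=12):
    """Numerically invert the Laplace transform F(s) at time t > 0."""
    ln2_t = math.log(2.0) / t
    V = stehfest_weights(N)
    return ln2_t * sum(V[k - 1] * F(k * ln2_t) for k in range(1, N + 1))

# check against a known pair: F(s) = 1/(s+1)  <->  f(t) = exp(-t)
print(invert(lambda s: 1.0 / (s + 1.0), 1.0), math.exp(-1.0))
```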
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huthmacher, Klaus; Molberg, Andreas K.; Rethfeld, Bärbel
2016-10-01
A split-step numerical method for calculating ultrafast free-electron dynamics in dielectrics is introduced. The two split steps, independently programmed in C++11 and FORTRAN 2003, are interfaced via the presented open source wrapper. The first step solves a deterministic extended multi-rate equation for the ionization, electron–phonon collisions, and single photon absorption by free-carriers. The second step is stochastic and models electron–electron collisions using Monte-Carlo techniques. This combination of deterministic and stochastic approaches is a unique and efficient method of calculating the nonlinear dynamics of 3D materials exposed to high intensity ultrashort pulses. Results from simulations solving the proposed model demonstrate how electron–electron scattering relaxes the non-equilibrium electron distribution on the femtosecond time scale.
A versatile program for the calculation of linear accelerator room shielding.
Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M
2018-03-22
This work aims at designing a computer program to calculate the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload their choice of treatment room design and to measure the distances required for shielding calculations. The second interface enables the user to calculate the primary barrier thickness in the case of three-dimensional conformal radiotherapy (3D-CRT), intensity modulated radiotherapy (IMRT) and total body irradiation (TBI). The third interface calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth interfaces provide a means to calculate the photon dose equivalent in door and maze areas for low and high energy radiation, respectively. The sixth interface enables the user to calculate the skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner. Moreover, it easily performed the same calculations for a test design that was also calculated manually, and produced the same results. The program includes a new and important feature: the ability to calculate the required treatment room thickness for IMRT and TBI. It is characterised by simplicity, precision, and data saving, printing and retrieval, in addition to providing a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast and accurate room shielding calculations in radiotherapy.
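The NCRP report no. 151 primary-barrier calculation reduces to a transmission factor and a count of tenth-value layers. A sketch of that calculation; the 6 MV concrete TVLs and all input values below are illustrative assumptions, not the program's data:

```python
import math

def primary_barrier_thickness(P, d, W, U, T, TVL1, TVLe):
    """NCRP 151 primary barrier: transmission factor B and required
    thickness from the first and equilibrium tenth-value layers.
    P: shielding design goal [Sv/week], d: distance [m],
    W: workload [Gy/week], U: use factor, T: occupancy factor."""
    B = P * d**2 / (W * U * T)
    n = math.log10(1.0 / B)            # number of tenth-value layers
    return TVL1 + (n - 1.0) * TVLe

# illustrative: 6 MV beam, concrete TVLs of 0.37 / 0.33 m
t = primary_barrier_thickness(P=1e-4, d=6.0, W=450.0, U=0.25, T=1.0,
                              TVL1=0.37, TVLe=0.33)
print(f"required concrete thickness = {t:.2f} m")
```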
A new algorithm to reduce noise in microscopy images implemented with a simple program in python.
Papini, Alessio
2012-03-01
All microscope images contain noise, which increases as the instrument (e.g., a transmission electron microscope or light microscope) approaches its resolution limit. Many methods are available to reduce noise; one of the most commonly used is image averaging. We propose here to use the mode of pixel values instead. Simple Python programs process a given number of images, recorded consecutively from the same subject, and calculate the mode of the pixel values at a given position (a, b). The result is a new image containing at (a, b) the mode of those values; the final pixel value therefore corresponds to one observed in at least two of the pixels at position (a, b). Application of the program to a set of images degraded with salt-and-pepper noise and GIMP hurl noise with 10-90% standard deviation showed that the mode performs better than averaging with three to eight images. The data suggest that the mode is more efficient (in the sense of a lower number of recorded images to process to reduce noise below a given limit) for a lower number of total noisy pixels and high standard deviation (as with impulse and salt-and-pepper noise), while averaging is more efficient when the number of varying pixels is high and the standard deviation is low, as in many cases of images affected by Gaussian noise. The two methods may be used serially.
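The abstract describes the program's logic completely enough to reconstruct its core. A NumPy sketch of the pixel-wise mode over a stack of frames (our reconstruction, not the author's original code):

```python
import numpy as np

def mode_image(frames):
    """Pixel-wise mode of a stack of 8-bit images recorded
    consecutively from the same subject."""
    stack = np.stack(frames).astype(np.uint8)   # shape (n, rows, cols)
    def pixel_mode(values):
        # most frequent grey level among the n observations of one pixel
        return np.bincount(values, minlength=256).argmax()
    return np.apply_along_axis(pixel_mode, 0, stack).astype(np.uint8)

# three noisy frames of the same subject -> one denoised image
frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8)
          for _ in range(3)]
clean = mode_image(frames)
```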
Electron tunneling in proteins program.
Hagras, Muhammad A; Stuchebrukhov, Alexei A
2016-06-05
We developed a unique integrated software package (called the Electron Tunneling in Proteins Program, or ETP) which provides an environment with different capabilities, such as tunneling current calculation, semi-empirical quantum mechanical calculation, and molecular modeling simulation, for the calculation and analysis of electron transfer reactions in proteins. The ETP program is developed as a cross-platform client-server program in which all calculations are conducted on the server side while the client terminal displays the resulting outputs in the different supported representations. The ETP program is integrated with a set of well-known computational software packages including Gaussian, BALLVIEW, Dowser, pKip, and APBS. In addition, the ETP program supports various visualization methods for the tunneling calculation results that assist in a more comprehensive understanding of the tunneling process.
Turbulent Radiation Effects in HSCT Combustor Rich Zone
NASA Technical Reports Server (NTRS)
Hall, Robert J.; Vranos, Alexander; Yu, Weiduo
1998-01-01
A joint UTRC-University of Connecticut theoretical program was based on describing coupled soot formation and radiation in turbulent flows using stretched flamelet theory. This effort involved using the model jet fuel kinetics mechanism to predict soot growth in flamelets at elevated pressure, incorporating an efficient model for turbulent thermal radiation into a discrete transfer radiation code, and coupling the soot growth, flowfield, and radiation algorithms. The soot calculations used a recently developed opposed jet code which couples the dynamical equations of size-class dependent particle growth with complex chemistry. Several of the tasks represent technical firsts; among these are the prediction of soot from a detailed jet fuel kinetics mechanism, the inclusion of pressure effects in the soot particle growth equations, and the inclusion of the efficient turbulent radiation algorithm in a combustor code.
Canseco Grellet, M A; Castagnaro, A; Dantur, K I; De Boeck, G; Ahmed, P M; Cárdenas, G J; Welin, B; Ruiz, R M
2016-10-01
To calculate fermentation efficiency in a continuous ethanol production process, we aimed to develop a robust mathematical method based on the analysis of metabolic by-product formation. This method is in contrast to the traditional way of calculating ethanol fermentation efficiency, where the ratio between the ethanol produced and the sugar consumed is expressed as a percentage of the theoretical conversion yield. Comparison between the two methods, at industrial scale and in sensitivity studies, showed that the indirect method was more robust and gave slightly higher fermentation efficiency values, although the fermentation efficiency of the industrial process was found to be low (~75%). The traditional calculation method is simpler than the indirect method, as it only requires a few chemical determinations in the collected samples; however, a minor error in any measured parameter has an important impact on the calculated efficiency. In contrast, the indirect method requires a greater number of determinations but is much more robust, since an error in any parameter has only a minor effect on the fermentation efficiency value. Application of the indirect calculation methodology is recommended in order to evaluate the real situation of the process and to reach an optimum fermentation yield for industrial-scale ethanol production. Once a high fermentation yield has been reached, the traditional method should be used to maintain control of the process. Upon detection of lower yields in an optimized process, the indirect method should be employed, as it permits a more accurate diagnosis of the causes of yield losses so that the problem can be corrected rapidly. The low fermentation efficiency obtained in this study shows an urgent need for industrial process optimization, where the indirect calculation methodology will be an important tool to determine process losses.
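The contrast between the two calculations can be sketched as follows; the stoichiometric factor 0.511 g ethanol per g glucose is standard, but the by-product yield factors below are illustrative placeholders, not the paper's values:

```python
GLUCOSE_TO_ETOH = 0.511   # g ethanol per g glucose, stoichiometric

def direct_efficiency(etoh, sugar_consumed):
    """Traditional method: ratio to the theoretical conversion yield."""
    return etoh / (GLUCOSE_TO_ETOH * sugar_consumed)

def indirect_efficiency(etoh, glycerol, biomass, acids,
                        y_gly=1.0, y_bio=2.0, y_acid=1.0):
    """By-product method (illustrative sugar-equivalent factors):
    sugar consumed is reconstructed from ethanol plus the sugar
    equivalents of each by-product, so an error in any single
    measurement has only a minor effect on the result."""
    sugar = (etoh / GLUCOSE_TO_ETOH + y_gly * glycerol
             + y_bio * biomass + y_acid * acids)
    return (etoh / GLUCOSE_TO_ETOH) / sugar
```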
NASA Technical Reports Server (NTRS)
Scherb, Megan Kay
1993-01-01
Since 1988 an interactive computer model of the human body during exercise has been under development by a number of undergraduate students in the Department of Chemical Engineering at Iowa State University. The program, written under the direction of Dr. Richard C. Seagrave, uses physical characteristics of the user, environmental conditions, and activity information to predict the onset of hypothermia, hyperthermia, dehydration, or exhaustion for various levels and durations of a specified exercise. The program, however, was severely limited in predicting the onset of dehydration due to the lack of sophistication with which it predicts sweat rate and its relationship to sensible water loss, degree of acclimatization, and level of physical training. Additionally, it was not known whether sweat rate also depends on age and gender. For these reasons, the goal of this creative component was to modify the program in the above-mentioned areas by applying known information and empirical relationships from the literature. A secondary goal was to improve the consistency with which the program was written by modifying user input statements and improving the efficiency and logic of the program calculations.
Impact of Alloy Fluctuations on Radiative and Auger Recombination in InGaN Quantum Wells
NASA Astrophysics Data System (ADS)
Jones, Christina; Teng, Chu-Hsiang; Yan, Qimin; Ku, Pei-Cheng; Kioupakis, Emmanouil
Light-emitting diodes (LEDs) based on indium gallium nitride (InGaN) are important for efficient solid-state lighting (2014 Nobel Prize in Physics). Despite its many successes, InGaN suffers from issues that reduce the efficiency of devices at high power, such as the green gap and efficiency droop. The origin of the droop has been attributed to Auger recombination, mediated by carrier scattering due to phonons and alloy disorder. Additionally, InGaN exhibits atomic-scale composition fluctuations that localize carriers and may affect the efficiency. In this work, we study the effect of local composition fluctuations on the radiative recombination rate, Auger recombination rate, and efficiency of InGaN/GaN quantum wells. We apply k·p calculations to simulate band edges and wave functions of quantum wells with fluctuating alloy distributions based on atom probe tomography data, and we evaluate double and triple overlaps of electron and hole wave functions. We compare results for quantum wells with fluctuating alloy distributions to those with uniform alloy compositions and to published work. Our results demonstrate that alloy-composition fluctuations aggravate the efficiency-droop and green-gap problems and further reduce LED efficiency at high power. We acknowledge the NSF CAREER award DMR-1254314, the NSF Graduate Research Fellowship Program DGE-1256260, and the DOE NERSC facility (DE-AC02-05CH11231).
[The effects of instruction about strategies for efficient calculation].
Suzuki, Masayuki; Ichikawa, Shin'ichi
2016-06-01
Calculation problems such as "12×7÷3" can be solved rapidly and easily by using certain techniques (for example, computing 12÷3 = 4 first reduces the problem to 4×7 = 28); we call these problems "efficient calculation problems." However, it has been pointed out that many students do not always solve them efficiently. In the present study, we examined the effects of an intervention on 35 seventh-grade students (23 males, 12 females). The students were instructed to use an overview strategy that stated, "Think carefully about the whole expression", and were then taught three sub-strategies. The results showed that students solved similar problems efficiently after the intervention, and the effects persisted for five months.
Duan, Lili; Liu, Xiao; Zhang, John Z H
2016-05-04
Efficient and reliable calculation of protein-ligand binding free energy is a grand challenge in computational biology and is of critical importance in drug design and many other molecular recognition problems. The main challenge lies in the calculation of entropic contribution to protein-ligand binding or interaction systems. In this report, we present a new interaction entropy method which is theoretically rigorous, computationally efficient, and numerically reliable for calculating entropic contribution to free energy in protein-ligand binding and other interaction processes. Drastically different from the widely employed but extremely expensive normal mode method for calculating entropy change in protein-ligand binding, the new method calculates the entropic component (interaction entropy or -TΔS) of the binding free energy directly from molecular dynamics simulation without any extra computational cost. Extensive study of over a dozen randomly selected protein-ligand binding systems demonstrated that this interaction entropy method is both computationally efficient and numerically reliable and is vastly superior to the standard normal mode approach. This interaction entropy paradigm introduces a novel and intuitive conceptual understanding of the entropic effect in protein-ligand binding and other general interaction systems as well as a practical method for highly efficient calculation of this effect.
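The interaction entropy follows directly from the fluctuations of the interaction energy sampled along the MD trajectory, via -TΔS = kT ln⟨exp(βΔE_int)⟩ with ΔE_int the deviation from the mean. A sketch of that calculation (units and the illustrative input are our assumptions):

```python
import numpy as np

KB = 0.0019872041  # Boltzmann constant in kcal/(mol*K)

def interaction_entropy(E_int, T=300.0):
    """-T*dS [kcal/mol] from a time series of protein-ligand
    interaction energies E_int [kcal/mol] sampled along an MD run:
    -T*dS = kT * ln < exp(beta * (E_int - <E_int>)) >."""
    beta = 1.0 / (KB * T)
    dE = E_int - E_int.mean()
    return KB * T * np.log(np.mean(np.exp(beta * dE)))

# illustrative: fluctuating interaction energies from a simulation
E = np.random.normal(-45.0, 2.0, 50000)
print(interaction_entropy(E))  # entropic penalty, positive
```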
2012-01-01
Background The economic downturn exacerbates the inadequacy of resources for combating the worldwide HIV/AIDS pandemic and amplifies the need to improve the efficiency of HIV/AIDS programs. Methods We used data envelopment analysis (DEA) to evaluate efficiency of national HIV/AIDS programs in transforming funding into services and implemented a Tobit model to identify determinants of the efficiency in 68 low- and middle-income countries. We considered the change from the lowest quartile to the average value of a variable a "notable" increase. Results Overall, the average efficiency in implementing HIV/AIDS programs was moderate (49.8%). Program efficiency varied enormously among countries with means by quartile of efficiency of 13.0%, 36.4%, 54.4% and 96.5%. A country's governance, financing mechanisms, and economic and demographic characteristics influence the program efficiency. For example, if countries achieved a notable increase in "voice and accountability" (e.g., greater participation of civil society in policy making), the efficiency of their HIV/AIDS programs would increase by 40.8%. For countries in the lowest quartile of per capita gross national income (GNI), a notable increase in per capita GNI would increase the efficiency of AIDS programs by 45.0%. Conclusions There may be substantial opportunity for improving the efficiency of AIDS services, by providing more services with existing resources. Actions beyond the health sector could be important factors affecting HIV/AIDS service delivery.
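DEA scores each program against a frontier formed by its peers; the classic input-oriented CCR model is a small linear program per unit. A sketch with SciPy on illustrative data (not the study's dataset or exact DEA specification):

```python
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of unit o.
    X: (m inputs, n units), Y: (s outputs, n units)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]          # minimize theta
    A_in = np.c_[-X[:, o], X]            # sum_j lam_j*x_ij <= theta*x_io
    A_out = np.c_[np.zeros(s), -Y]       # sum_j lam_j*y_rj >= y_ro
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# 4 programs: input = funding; outputs = people tested, people treated
X = np.array([[100.0, 80.0, 120.0, 90.0]])
Y = np.array([[50.0, 45.0, 40.0, 30.0],
              [20.0, 25.0, 15.0, 10.0]])
print([round(dea_efficiency(X, Y, o), 3) for o in range(4)])
```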
WE-A-BRE-01: Debate: To Measure or Not to Measure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moran, J; Miften, M; Mihailidis, D
2014-06-15
Recent studies have highlighted some of the limitations of patient-specific pre-treatment IMRT QA measurements with respect to assessing plan deliverability. Pre-treatment QA measurements are frequently performed with detectors in phantoms that do not involve any patient heterogeneities or with an EPID without a phantom. Other techniques have been developed where measurement results are used to recalculate the patient-specific dose volume histograms. Measurements continue to play a fundamental role in understanding the initial and continued performance of treatment planning and delivery systems. Less attention has been focused on the role of computational techniques in a QA program such as calculation with independent dose calculation algorithms or recalculation of the delivery with machine log files or EPID measurements. This session will explore the role of pre-treatment measurements compared to other methods such as computational and transit dosimetry techniques. Efficiency and practicality of the two approaches will also be presented and debated. The speakers will present a history of IMRT quality assurance and debate each other regarding which types of techniques are needed today and for future quality assurance. Examples will be shared of situations where overall quality needed to be assessed with calculation techniques in addition to measurements. Elements where measurements continue to be crucial such as for a thorough end-to-end test involving measurement will be discussed. Operational details that can reduce the gamma tool effectiveness and accuracy for patient-specific pre-treatment IMRT/VMAT QA will be described. Finally, a vision for the future of IMRT and VMAT plan QA will be discussed from a safety perspective. Learning Objectives: Understand the advantages and limitations of measurement and calculation approaches for pre-treatment measurements for IMRT and VMAT planning Learn about the elements of a balanced quality assurance program involving modulated techniques Learn how to use tools and techniques such as an end-to-end test to enhance your IMRT and VMAT QA program.
NASA Technical Reports Server (NTRS)
Degroh, H.
1994-01-01
The Metallurgical Programs include three simple programs which calculate solutions to problems common to metallurgical engineers and persons making metal castings. The first program calculates the mass of a binary ideal mixture (alloy) given the weight fractions and densities of the pure components and the total volume. The second program calculates the density of a binary ideal mixture. The third program converts the atomic percentages of a binary mixture to weight percentages. The programs use simple equations to assist the materials staff with routine calculations. The Metallurgical Programs are written in Microsoft QuickBASIC for interactive execution and have been implemented on an IBM PC-XT/AT operating MS-DOS 2.1 or higher with 256K bytes of memory. All instructions needed by the user appear as prompts as the software is used. Data is input using the keyboard only and output is via the monitor. The Metallurgical Programs were written in 1987.
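The third conversion is a one-line formula weighting atomic fractions by molar mass. A sketch in Python rather than the original QuickBASIC:

```python
def atomic_to_weight_percent(at_pct_a, molar_mass_a, molar_mass_b):
    """Convert atomic percent of component A in a binary mixture
    to weight percent, using the pure-component molar masses."""
    x_a = at_pct_a / 100.0
    w_a = x_a * molar_mass_a / (x_a * molar_mass_a + (1 - x_a) * molar_mass_b)
    return 100.0 * w_a

# 50 at.% Cu (63.55 g/mol) in Ni (58.69 g/mol) -> about 52 wt.% Cu
print(atomic_to_weight_percent(50.0, 63.55, 58.69))
```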
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
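The calculation the claims describe is a three-term sum per time period. A trivial sketch of the described embodiment (names are ours):

```python
def future_facility_conditions(maintenance_cost, modernization_factor,
                               backlog_factor):
    """Future facility conditions for one time period: the sum of the
    period-specific maintenance cost, modernization factor, and
    backlog factor, as described in the embodiments."""
    return maintenance_cost + modernization_factor + backlog_factor

print(future_facility_conditions(1.2e6, 0.4e6, 0.9e6))  # illustrative values
```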
Transportation Energy Efficiency Program (TEEP) Report Abstracts
DOT National Transportation Integrated Search
1977-04-15
This bibliography summarizes the published research accomplished for the Department of Transportation's Transportation Energy Efficiency Program and its predecessor, the Automotive Energy Efficiency Program. The reports are indexed by corporate autho...
NASA Astrophysics Data System (ADS)
Dimitrakopoulos, Panagiotis
2018-03-01
The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units, like compressor impellers, stages and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need emerged for a new, rigorous, robust, accurate and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. This high-accuracy method is therefore proposed for efficiency calculations related to turbocompressors and their compression units, especially when they are operating at high power levels, for example in jet engines and high-power plants.
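In the ideal-gas limit, the rigorous definition reduces to a closed form in the measured end-point states. A sketch of that special case (the paper's real-gas method additionally requires an equation of state; kappa and the inputs below are illustrative):

```python
import math

def polytropic_efficiency_ideal(p1, T1, p2, T2, kappa=1.4):
    """Ideal-gas limit of the polytropic compression efficiency from
    suction/discharge pressures [Pa] and temperatures [K]:
    eta_p = ((kappa - 1)/kappa) * ln(p2/p1) / ln(T2/T1)."""
    return (kappa - 1.0) / kappa * math.log(p2 / p1) / math.log(T2 / T1)

# air compressed from 1 bar / 293 K to 4 bar / 475 K
print(polytropic_efficiency_ideal(1e5, 293.0, 4e5, 475.0))  # ~0.82
```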
Studies of the use of high-temperature nuclear heat from an HTGR for hydrogen production
NASA Technical Reports Server (NTRS)
Peterman, D. D.; Fontaine, R. W.; Quade, R. N.; Halvers, L. J.; Jahromi, A. M.
1975-01-01
The results of a study which surveyed various methods of hydrogen production using nuclear and fossil energy are presented. A description of these methods is provided, and efficiencies are calculated for each case. The process designs of systems that utilize the heat from a General Atomic high-temperature gas-cooled reactor with a steam methane reformer and feed the reformer with substitute natural gas manufactured from coal, using reforming temperatures, are presented. The capital costs for these systems and the resultant hydrogen production price for these cases are discussed, along with a research and development program.
Procedures for shape optimization of gas turbine disks
NASA Technical Reports Server (NTRS)
Cheu, Tsu-Chien
1989-01-01
Two procedures, the feasible direction method and sequential linear programming, for shape optimization of gas turbine disks are presented. The objective of these procedures is to obtain optimal designs of turbine disks with geometric and stress constraints. The coordinates of selected points on the disk contours are used as the design variables. Structural weight, stress, and their derivatives with respect to the design variables are calculated by an efficient finite element method for design sensitivity analysis. Numerical examples of the optimal designs of a disk subjected to thermo-mechanical loadings are presented to illustrate and compare the effectiveness of these two procedures.
A new device to test cutting efficiency of mechanical endodontic instruments
Rubini, Alessio Giansiracusa; Plotino, Gianluca; Al-Sudani, Dina; Grande, Nicola M.; Putorti, Ermanno; Sonnino, GianPaolo; Cotti, Elisabetta; Testarelli, Luca; Gambarini, Gianluca
2014-01-01
Background: The purpose of the present study was to introduce a new device specifically designed to evaluate the cutting efficiency of mechanically driven endodontic instruments. Material/Methods: Twenty new Reciproc R25 (VDW, Munich, Germany) files were investigated in the new device developed to test the cutting ability of endodontic instruments. The device consists of a main frame to which a mobile plastic support for the hand-piece is connected, and a stainless-steel block containing a Plexiglas block against which the cutting efficiency of the instruments was tested. The length of the block cut in 1 minute was measured in a computerized program with a precision of 0.1 mm. The instruments were activated by a torque-controlled motor (Silver Reciproc; VDW, Munich, Germany), in a reciprocating movement using the "Reciproc ALL" program (Group 1) and in counter-clockwise rotation at 300 rpm (Group 2). The mean and standard deviation of each group were calculated, and data were statistically analyzed with a one-way ANOVA test (P<0.05). Results: Reciproc in reciprocation (Group 1) cut a mean of 8.6 mm (SD=0.6 mm) into the Plexiglas block, while Reciproc in rotation cut a mean of 8.9 mm (SD=0.7 mm). There was no statistically significant difference between the 2 groups investigated (P>0.05). Conclusions: The cutting testing device evaluated in the present study was reliable and easy to use, and may be effectively used to test the cutting efficiency of both rotary and reciprocating mechanical endodontic instruments.
Parallel Calculation of Sensitivity Derivatives for Aircraft Design using Automatic Differentiation
NASA Technical Reports Server (NTRS)
Bischof, c. H.; Green, L. L.; Haigler, K. J.; Knauff, T. L., Jr.
1994-01-01
Sensitivity derivative (SD) calculation via automatic differentiation (AD) typical of that required for the aerodynamic design of a transport-type aircraft is considered. Two ways of computing SD via code generated by the ADIFOR automatic differentiation tool are compared for efficiency and applicability to problems involving large numbers of design variables. A vector implementation on a Cray Y-MP computer is compared with a coarse-grained parallel implementation on an IBM SP1 computer, employing a Fortran M wrapper. The SD are computed for a swept transport wing in turbulent, transonic flow; the number of geometric design variables varies from 1 to 60 with coupling between a wing grid generation program and a state-of-the-art, 3-D computational fluid dynamics program, both augmented for derivative computation via AD. For a small number of design variables, the Cray Y-MP implementation is much faster. As the number of design variables grows, however, the IBM SP1 becomes an attractive alternative in terms of compute speed, job turnaround time, and total memory available for solutions with large numbers of design variables. The coarse-grained parallel implementation also can be moved easily to a network of workstations.
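ADIFOR works by source transformation of the Fortran codes, but the forward-mode differentiation it implements can be illustrated with dual numbers in a few lines. A Python sketch; the cost function is a stand-in, not the wing analysis:

```python
from dataclasses import dataclass

@dataclass
class Dual:
    """Forward-mode AD value: carries f and df/dx together, so one pass
    through the analysis code yields one directional derivative."""
    val: float
    der: float = 0.0

    def __add__(self, o): return Dual(self.val + o.val, self.der + o.der)
    def __mul__(self, o): return Dual(self.val * o.val,
                                      self.der * o.val + self.val * o.der)

def cost(x):
    # stand-in for an analysis code: cost = x*x + 3x (3 as a constant)
    return x * x + Dual(3.0) * x

seed = Dual(2.0, 1.0)     # seed the derivative w.r.t. one design variable
out = cost(seed)
print(out.val, out.der)   # 10.0 and d/dx(x^2 + 3x) at x=2, i.e. 7.0
```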
Thin Cloud Detection Method by Linear Combination Model of Cloud Image
NASA Astrophysics Data System (ADS)
Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.
2018-04-01
Existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify pixels as cloud or non-cloud. When the cloud is thin and small, however, these methods are inaccurate. In this paper, a linear combination model of cloud images is proposed; using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. The automatic cloud detection program first uses the linear combination model to separate the cloud information from the surface information in transparent cloud images, and then uses different image features to recognize the cloud parts. For computational efficiency, an AdaBoost classifier is introduced to combine the different features into a cloud classifier. AdaBoost can select the most effective features from many ordinary features, so the calculation time is largely reduced. Finally, we compared the proposed method with a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
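The feature-combination step maps naturally onto an off-the-shelf boosted classifier. A scikit-learn sketch; the features and labels below are synthetic stand-ins for the paper's per-region cloud features:

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

# illustrative per-region features after removing surface information
# (e.g., brightness, saturation, texture statistics); label 1 = cloud
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

clf = AdaBoostClassifier(n_estimators=100)  # weak learners pick out the
clf.fit(X[:400], y[:400])                   # most discriminative features
print(clf.score(X[400:], y[400:]))
print(clf.feature_importances_)             # effective-feature selection
```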
Simulation of the wastewater temperature in sewers with TEMPEST.
Dürrenmatt, David J; Wanner, Oskar
2008-01-01
TEMPEST is a new interactive simulation program for the estimation of the wastewater temperature in sewers. Intuitive graphical user interfaces assist the user in managing data, performing calculations and plotting results. The program calculates the dynamics and longitudinal spatial profiles of the wastewater temperature in sewer lines. Interactions between wastewater, sewer air and surrounding soil are modeled in TEMPEST by mass balance equations, rate expressions found in the literature and a new empirical model of the airflow in the sewer. TEMPEST was developed as a tool which can be applied in practice, i.e., it requires as few input data as possible. These data include the upstream wastewater discharge and temperature, geometric and hydraulic parameters of the sewer, material properties of the sewer pipe and surrounding soil, ambient conditions, and estimates of the capacity of openings for air exchange between sewer and environment. Based on a case study it is shown how TEMPEST can be applied to estimate the decrease of the downstream wastewater temperature caused by heat recovery from the sewer. Because the efficiency of nitrification strongly depends on the wastewater temperature, this application is of practical relevance for situations in which the sewer ends at a nitrifying wastewater treatment plant.
PBEQ-Solver for online visualization of electrostatic potential of biomolecules.
Jo, Sunhwan; Vargyas, Miklos; Vasko-Szedlar, Judit; Roux, Benoît; Im, Wonpil
2008-07-01
PBEQ-Solver provides a web-based graphical user interface to read biomolecular structures, solve the Poisson-Boltzmann (PB) equations and interactively visualize the electrostatic potential. PBEQ-Solver calculates (i) electrostatic potential and solvation free energy, (ii) protein-protein (DNA or RNA) electrostatic interaction energy and (iii) pKa of a selected titratable residue. All the calculations can be performed in both aqueous solvent and membrane environments (with a cylindrical pore in the case of membrane). PBEQ-Solver uses the PBEQ module in the biomolecular simulation program CHARMM to solve the finite-difference PB equation of molecules specified by users. Users can interactively inspect the calculated electrostatic potential on the solvent-accessible surface as well as iso-electrostatic potential contours using a novel online visualization tool based on MarvinSpace molecular visualization software, a Java applet integrated within CHARMM-GUI (http://www.charmm-gui.org). To reduce the computational time on the server, and to increase the efficiency in visualization, all the PB calculations are performed with coarse grid spacing (1.5 Å before and 1 Å after focusing). PBEQ-Solver suggests various physical parameters for PB calculations and users can modify them if necessary. PBEQ-Solver is available at http://www.charmm-gui.org/input/pbeqsolver.
NASA Astrophysics Data System (ADS)
Adams, Mike; Smalian, Silva
2017-09-01
For nuclear waste packages, the expected dose rates and nuclide inventory are calculated in advance. Depending on the packaging of the nuclear waste, deterministic programs like MicroShield® provide a range of results for each type of packaging. Stochastic programs like the "Monte-Carlo N-Particle Transport Code System" (MCNP®), on the other hand, provide reliable results for complex geometries; however, this type of program requires a fully trained operator, and calculations are time consuming. The problem is therefore to choose an appropriate program for a specific geometry. We compared the results of deterministic programs like MicroShield® and stochastic programs like MCNP®. These comparisons enable us to make a statement about the applicability of the various programs for chosen types of containers. We found that for thin-walled geometries, deterministic programs like MicroShield® are well suited to calculate the dose rate. For cylindrical containers with inner shielding, however, deterministic programs hit their limits. Furthermore, we investigate the effect of an inhomogeneous material and activity distribution on the results. The calculations are still ongoing. Results will be presented in the final abstract.
Computer program for calculation of oxygen uptake
NASA Technical Reports Server (NTRS)
Castle, B. L.; Castle, G.; Greenleaf, J. E.
1979-01-01
A description and operational procedures are presented for a computer program, written in Super Basic, that calculates oxygen uptake, carbon dioxide production, and related ventilation parameters. Program features include: (1) the option of entering slope and intercept values of calibration curves for the O2 and CO2 analyzers; (2) calculation of expired water vapor pressure; and (3) the option of entering inspired O2 and CO2 concentrations. The program is easily adaptable for programmable laboratory calculators.
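The core arithmetic of such a program is presumably the standard open-circuit gas-exchange equations with the Haldane transformation; the sketch below is a reconstruction under that assumption, not a transcription of the Super Basic source, and the example expired fractions are invented.

    # Open-circuit VO2/VCO2 with the Haldane transformation (assumed form).
    FIO2, FICO2 = 0.2093, 0.0004            # inspired dry-air fractions
    FIN2 = 1.0 - FIO2 - FICO2

    def gas_exchange(ve_stpd, feo2, feco2):
        """ve_stpd: expired ventilation [L/min STPD]; fe*: expired fractions."""
        fen2 = 1.0 - feo2 - feco2
        vi = ve_stpd * fen2 / FIN2          # Haldane: N2 is not exchanged
        vo2 = vi * FIO2 - ve_stpd * feo2    # O2 uptake [L/min]
        vco2 = ve_stpd * feco2 - vi * FICO2 # CO2 production [L/min]
        return vo2, vco2, vco2 / vo2        # plus respiratory exchange ratio

    # invented example: VE = 60 L/min, FEO2 = 16.5%, FECO2 = 4.5%
    print(gas_exchange(60.0, 0.165, 0.045))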
Households with young children and use of freely distributed bednets in rural Madagascar.
Krezanoski, Paul J; Comfort, Alison B; Tsai, Alexander C; Bangsberg, David R
2014-03-01
Malaria infections are the leading cause of death for children in Madagascar. Insecticide-treated bednets offer effective prevention, but it is unclear how well free bednet distribution programs reach young children. We conducted a secondary analysis of a free bednet distribution program in Madagascar from 2007-2008. Interviews were performed at baseline and 6 months. Principal components analysis was used to construct a wealth and malaria knowledge index. Coverage efficiency was calculated as coverage of children per bednet owned. Univariable and multivariable regressions were used to determine predictors of bednet use. Bednet use among the 560 households in the study increased from 6% to 91% after 6 months. Coverage efficiency increased from 1.29 to 1.56 children covered per bednet owned. In multivariable analysis, having a child under 5 years of age was the only variable associated with bednet use (OR 9.10; p=0.001), yielding a 99% likelihood of using a bednet (95% CI 96.4 to 99.9%) versus 82% (95% CI 72.2 to 88.4%) in households without young children. This free bednet distribution program achieved high levels of adherence after 6 months. Household presence of young children was associated with bednet use, but household income and education were not, suggesting that distribution to priority groups may help overcome traditional barriers to adoption in some settings.
13 CFR 101.500 - Small Business Energy Efficiency Program.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 13 Business Credit and Assistance 1 2013-01-01 2013-01-01 false Small Business Energy Efficiency... ADMINISTRATION Small Business Energy Efficiency § 101.500 Small Business Energy Efficiency Program. (a) The.../energy, building on the Energy Star for Small Business Program, to assist small business concerns in...
Improving the Efficiency of Free Energy Calculations in the Amber Molecular Dynamics Package.
Kaus, Joseph W; Pierce, Levi T; Walker, Ross C; McCammon, J Andrew
2013-09-10
Alchemical transformations are widely used methods to calculate free energies. Amber has traditionally included support for alchemical transformations as part of the sander molecular dynamics (MD) engine. Here we describe the implementation of a more efficient approach to alchemical transformations in the Amber MD package. Specifically, we have implemented this new approach within the more computationally efficient and scalable pmemd MD engine that is included with the Amber MD package. The majority of the gain in efficiency comes from the improved design of the calculation, which includes better parallel scaling and a reduction in the calculation of redundant terms. This new implementation is able to reproduce results from equivalent simulations run with the existing functionality, but at 2.5 times greater computational efficiency. This new implementation is also able to run softcore simulations at the λ end states, making direct calculation of free energies more accurate compared to the extrapolation required in the existing implementation. The updated alchemical transformation functionality will be included in the next major release of Amber (scheduled for release in Q1 2014) and will be available at http://ambermd.org, under the Amber license.
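For orientation, one standard way such alchemical results are turned into a free energy is thermodynamic integration over λ; the sketch below applies trapezoidal quadrature to synthetic <dU/dλ> values, which are stand-ins for simulation output, not Amber data. Being able to sample the λ end states directly, as the new implementation does, means the end-point values enter the quadrature instead of being extrapolated.

    # Thermodynamic integration over lambda (trapezoid rule; synthetic data).
    import numpy as np

    lam = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])      # lambda windows
    dudl = np.array([12.1, 9.8, 6.2, 3.0, 0.4, -1.9, -2.8])  # <dU/dl>, kcal/mol

    # trapezoid rule written out explicitly
    dG = float(np.sum(0.5 * (dudl[1:] + dudl[:-1]) * np.diff(lam)))
    print(f"Delta G = {dG:.2f} kcal/mol")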
Tai, David; Fang, Jianwen
2012-08-27
The large sizes of today's chemical databases require efficient algorithms to perform similarity searches, and comparing two large chemical databases can be very time consuming. This paper builds upon existing research efforts by describing a novel strategy for accelerating existing search algorithms for comparing large chemical collections. The quest for efficiency has focused on developing better indexing algorithms, creating heuristics for searching an individual chemical against a chemical library by detecting and eliminating needless similarity calculations. For comparing two chemical collections, these algorithms simply execute searches for each chemical in the query set sequentially. The strategy presented in this paper achieves a speedup over these algorithms by indexing the set of all query chemicals, so redundant calculations that arise in the case of sequential searches are eliminated. We implement this novel algorithm in a similarity search program called Symmetric inDexing, or SymDex. SymDex shows a maximum speedup of over 232% compared to the state-of-the-art single-query search algorithm on real data for various fingerprint lengths. Considerable speedup is seen even for batch searches, where query set sizes are relatively small compared to typical database sizes. To the best of our knowledge, SymDex is the first search algorithm designed specifically for comparing chemical libraries. It can be adapted to most, if not all, existing indexing algorithms and shows potential for accelerating future similarity search algorithms for comparing chemical databases.
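To ground the discussion, the sketch below shows the usual Tanimoto similarity kernel on binary fingerprints together with the popcount bound commonly used to prune hopeless comparisons; the full symmetric index structure of SymDex is not reproduced here, and the fingerprints are toy values.

    # Tanimoto similarity on integer bit-vector fingerprints (Python 3.10+
    # for int.bit_count), plus the popcount pruning bound:
    # T(a, b) <= min(|a|, |b|) / max(|a|, |b|), since the intersection is at
    # most the smaller popcount and the union at least the larger one.
    def tanimoto(a: int, b: int) -> float:
        inter = (a & b).bit_count()
        union = a.bit_count() + b.bit_count() - inter
        return inter / union if union else 1.0

    def may_reach(a: int, b: int, threshold: float) -> bool:
        ca, cb = a.bit_count(), b.bit_count()
        return min(ca, cb) >= threshold * max(ca, cb)

    fp_query, fp_db, t = 0b1011_0110, 0b1011_0010, 0.7   # toy fingerprints
    if may_reach(fp_query, fp_db, t):
        print(tanimoto(fp_query, fp_db))                 # prints 0.8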
A GPU-accelerated implicit meshless method for compressible flows
NASA Astrophysics Data System (ADS)
Zhang, Jia-Le; Ma, Zhi-Hua; Chen, Hong-Quan; Cao, Cheng
2018-05-01
This paper develops a recently proposed GPU-based two-dimensional explicit meshless method (Ma et al., 2014) by devising and implementing an efficient parallel LU-SGS implicit algorithm to further improve the computational efficiency. The capability of the original 2D meshless code is extended to deal with 3D complex compressible flow problems. To resolve the inherent data dependency of the standard LU-SGS method, which causes thread-racing conditions that destabilize the numerical computation, a generic rainbow coloring method is presented and applied to organize the computational points into different groups by painting neighboring points with different colors. The original LU-SGS method is modified and parallelized accordingly to perform calculations in a color-by-color manner. The CUDA Fortran programming model is employed to develop the key kernel functions to apply boundary conditions, calculate time steps, evaluate residuals, and advance and update the solution in the temporal space. A series of two- and three-dimensional test cases including compressible flows over single- and multi-element airfoils and an M6 wing are carried out to verify the developed code. The obtained solutions agree well with experimental data and other computational results reported in the literature. Detailed analysis of the performance of the developed code reveals that the developed CPU-based implicit meshless method is at least four to eight times faster than its explicit counterpart. The computational efficiency of the implicit method could be further improved by ten to fifteen times on the GPU.
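The coloring step can be illustrated with a standard greedy graph-coloring routine, a plausible reading of the "rainbow coloring" described above (the paper's exact procedure is not reproduced): points sharing an edge never share a color, so each color group can be updated concurrently in the LU-SGS sweeps without write conflicts.

    # Greedy coloring: each point gets the smallest color not used by any
    # already-colored neighbor; points of one color form an independent set.
    def color_points(neighbors):
        """neighbors: dict mapping point id -> iterable of adjacent point ids."""
        color = {}
        for p in neighbors:
            used = {color[q] for q in neighbors[p] if q in color}
            c = 0
            while c in used:
                c += 1
            color[p] = c
        return color

    # toy 4-point chain 0-1-2-3: two colors suffice
    print(color_points({0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}))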
Test bed ion engine development
NASA Technical Reports Server (NTRS)
Aston, G.; Deininger, W. D.
1984-01-01
A test bed ion (TBI) engine was developed to serve as a tool in exploring the limits of electrostatic ion thruster performance. Descriptions are given of three key ion engine components whose designs and operating philosophies differ markedly from conventional thruster technology: the decoupled extraction and amplified current (DE-AC) accelerator system, the field enhanced refractory metal (FERM) hollow cathode, and the divergent line cusp (DLC) discharge chamber. Significant program achievements were: (1) high current density DE-AC accelerator system operation at low electric field stress, with indicated feasibility of a 60 mA/sq cm argon ion beam; (2) reliable FERM cathode startup times of 1 to 2 seconds and demonstrated 35 ampere emission levels; (3) DLC discharge chamber plasma potentials negative of anode potential; and (4) identification of an efficient high plasma density engine operating mode. Using the performance projections of this program and reasonable estimates of other parameter values, a 1.0 Newton thrust ion engine is identified as a realizable technology goal. Calculations show that such an engine, comparable in beam area to a J series 30 cm thruster, could, operating on Xe or Hg, have thruster efficiencies as high as 0.76 and 0.78 respectively, with a 100 eV/ion discharge loss.
Water Management Planning: A Case Study at Blue Grass Army Depot
DOE Office of Scientific and Technical Information (OSTI.GOV)
Solana, Amy E.; Mcmordie, Katherine
2006-04-03
Executive Order 13123, Greening the Government Through Efficient Energy Management, mandates an aggressive policy for reducing potable water consumption at federal facilities. Implementation guidance from the U.S. Department of Energy (DOE) set a requirement for each federal agency to "reduce potable water usage by implementing life cycle, cost-effective water efficiency programs that include a water management plan, and not less than four Federal Energy Management Program (FEMP) Best Management Practices (BMPs)." The objective of this plan is to gain full compliance with Executive Order 13123 and associated DOE implementation guidance on behalf of Blue Grass Army Depot (BGAD), Richmond, Kentucky. In accordance with this plan, BGAD must:
• Incorporate the plan as a component of the Installation energy conservation plan
• Investigate the water savings potential and life-cycle cost effectiveness of the Operations and Maintenance (O&M) and retrofit/replacement options associated with the ten FEMP BMPs
• Put into practice all applicable O&M options
• Identify retrofit/replacement options appropriate for implementation (based upon calculation of the simple payback periods)
• Establish a schedule for implementation of applicable and cost-effective retrofit/replacement options.
NASA Astrophysics Data System (ADS)
Zavaletta, Vanessa A.; Bartholmai, Brian J.; Robb, Richard A.
2007-03-01
Diffuse lung diseases, such as idiopathic pulmonary fibrosis (IPF), can be characterized and quantified by analysis of volumetric high resolution CT scans of the lungs. These data sets typically have dimensions of 512 x 512 x 400. It is too subjective and labor intensive for a radiologist to analyze each slice and quantify regional abnormalities manually. Thus, computer aided techniques are necessary, particularly texture analysis techniques which classify various lung tissue types. Second and higher order statistics which relate the spatial variation of the intensity values are good discriminatory features for various textures. The intensity values in lung CT scans range between [-1024, 1024]. Calculation of second order statistics on this range is too computationally intensive, so the data are typically binned into 16 or 32 gray levels. There are more effective ways of binning the gray level range to improve classification. An optimal and very efficient way to nonlinearly bin the histogram is to use a dynamic programming algorithm. The objective of this paper is to show that nonlinear binning using dynamic programming is computationally efficient and improves the discriminatory power of the second and higher order statistics for more accurate quantification of diffuse lung disease.
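One concrete realization of such a scheme is optimal 1-D binning by dynamic programming, minimizing the within-bin sum of squared errors; the sketch below is an O(n²k) illustration on mock intensity values, and the paper's exact cost function may differ.

    # Optimal 1-D binning by dynamic programming: choose k bins over sorted
    # values minimizing the total within-bin sum of squared errors (SSE).
    import numpy as np

    def optimal_bins(values, k):
        v = np.sort(np.asarray(values, dtype=float))
        n = len(v)
        p1 = np.concatenate([[0.0], np.cumsum(v)])       # prefix sums
        p2 = np.concatenate([[0.0], np.cumsum(v * v)])   # prefix squared sums

        def sse(i, j):                                   # SSE of v[i:j]
            s = p1[j] - p1[i]
            return p2[j] - p2[i] - s * s / (j - i)

        cost = np.full((k + 1, n + 1), np.inf)
        cut = np.zeros((k + 1, n + 1), dtype=int)
        cost[0, 0] = 0.0
        for b in range(1, k + 1):
            for j in range(b, n + 1):
                for i in range(b - 1, j):
                    c = cost[b - 1, i] + sse(i, j)
                    if c < cost[b, j]:
                        cost[b, j], cut[b, j] = c, i
        bounds, j = [], n                                # backtrack boundaries
        for b in range(k, 0, -1):
            bounds.append(j)
            j = cut[b, j]
        return v, sorted(bounds)

    mock_hu = np.random.default_rng(1).normal(0, 300, 200)  # mock CT intensities
    print(optimal_bins(mock_hu, 16)[1])                     # 16 bin end indices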
Development testing of large volume water sprays for warm fog dispersal
NASA Technical Reports Server (NTRS)
Keller, V. W.; Anderson, B. J.; Burns, R. A.; Lala, G. G.; Meyer, M. B.; Beard, K. V.
1986-01-01
A new brute-force method of warm fog dispersal is described. The method uses large volume recycled water sprays to create curtains of falling drops through which the fog is processed by the ambient wind and spray induced air flow. Fog droplets are removed by coalescence/rainout. The efficiency of the technique depends upon the drop size spectra in the spray, the height to which the spray can be projected, the efficiency with which fog laden air is processed through the curtain of spray, and the rate at which new fog may be formed due to temperature differences between the air and spray water. Results of a field test program, implemented to develop the data base necessary to assess the proposed method, are presented. Analytical calculations based upon the field test results indicate that this proposed method of warm fog dispersal is feasible. Even more convincingly, the technique was successfully demonstrated in the one natural fog event which occurred during the test program. Energy requirements for this technique are an order of magnitude less than those to operate a thermokinetic system. An important side benefit is the considerable emergency fire extinguishing capability it provides along the runway.
New technology in turbine aerodynamics
NASA Technical Reports Server (NTRS)
Glassman, A. J.; Moffitt, T. P.
1972-01-01
A cursory review is presented of some of the recent work that has been done in turbine aerodynamic research at NASA-Lewis Research Center. Topics discussed include the aerodynamic effect of turbine coolant, high work-factor (ratio of stage work to square of blade speed) turbines, and computer methods for turbine design and performance prediction. An extensive bibliography is included. Experimental cooled-turbine aerodynamics programs using two-dimensional cascades, full annular cascades, and cold rotating turbine stage tests are discussed with some typical results presented. Analytically predicted results for cooled blade performance are compared to experimental results. The problems and some of the current programs associated with the use of very high work factors for fan-drive turbines of high-bypass-ratio engines are discussed. Turbines currently being investigated make use of advanced blading concepts designed to maintain high efficiency under conditions of high aerodynamic loading. Computer programs have been developed for turbine design-point performance, off-design performance, supersonic blade profile design, and the calculation of channel velocities for subsonic and transonic flow fields. The use of these programs for the design and analysis of axial and radial turbines is discussed.
AutoBayes Program Synthesis System System Internals
NASA Technical Reports Server (NTRS)
Schumann, Johann Martin
2011-01-01
This lecture combines the theoretical background of schema-based program synthesis with the hands-on study of a powerful, open-source program synthesis system (AutoBayes). Schema-based program synthesis is a popular approach toward program synthesis. The lecture will provide an introduction to this topic and discuss how this technology can be used to generate customized algorithms. The synthesis of advanced numerical algorithms requires the availability of a powerful symbolic (algebra) system. Its task is to symbolically solve equations, simplify expressions, or symbolically calculate derivatives (among others) such that the synthesized algorithms become as efficient as possible. We will discuss the use and importance of the symbolic system for synthesis. Any synthesis system is a large and complex piece of code. In this lecture, we will study AutoBayes in detail. AutoBayes has been developed at NASA Ames and has been made open source. It takes a compact statistical specification and generates a customized data analysis algorithm (in C/C++) from it. AutoBayes is written in SWI Prolog and draws on many concepts from rewriting, logic, functional, and symbolic programming. We will discuss the system architecture, the schema library, and the extensive support infrastructure. Practical hands-on experiments and exercises will enable students to gain insight into a realistic program synthesis system and provide the knowledge needed to use, modify, and extend AutoBayes.
Converting to DEA/MDEA mix ups sweetening capacity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spears, M.L.; Hagan, K.M.; Bullin, J.A.
1996-08-12
Mixing amines can be the best method for increasing capacity or improving efficiency in an amine sweetening unit. In many cases, it may be possible simply to add a second amine to the existing solution on the fly, or as the unit is running. Union Pacific Resources' Bryan, Tex., gas plant provides one example. The plant was converted from diethanolamine (DEA) to a DEA/MDEA (methyl DEA) mixture after analysis by TSWEET, a process-simulation program. After conversion, CO2 levels in the sales gas fell to less than pipeline specifications. Data were taken for the absorber at a constant amine circulation of 120 gpm. A comparison of the performance data to the values calculated by the program proved the accuracy of TSWEET. The conversion and performance of the plant are described.
Calculation of cosmic ray induced single event upsets: Program CRUP (Cosmic Ray Upset Program)
NASA Astrophysics Data System (ADS)
Shapiro, P.
1983-09-01
This report documents PROGRAM CRUP, COSMIC RAY UPSET PROGRAM. The computer program calculates cosmic ray induced single-event error rates in microelectronic circuits exposed to several representative cosmic-ray environments.
Effect of particle size distribution on the separation efficiency in liquid chromatography.
Horváth, Krisztián; Lukács, Diána; Sepsey, Annamária; Felinger, Attila
2014-09-26
In this work, the influence of the width of the particle size distribution (PSD) on chromatographic efficiency is studied. The PSD is described by a lognormal distribution. A theoretical framework is developed in order to calculate the height equivalent to a theoretical plate for different PSDs. Our calculations demonstrate and verify that wide particle size distributions have a significant effect on the separation efficiency of molecules. The differences between fully porous and core-shell phases regarding the influence of PSD width are presented and discussed. The efficiencies of bimodal phases were also calculated; the results showed that these packings do not have any advantage over unimodal phases.
Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System
NASA Astrophysics Data System (ADS)
Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu
2017-05-01
This paper points out that it is wrong to take the outlet flue gas temperature of the low temperature economizer as the exhaust gas temperature in boiler efficiency calculations based on GB10184-1988. It then proposes a new correction method, which decomposes the low temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air heater correction, and provides a useful reference for handling this kind of problem correctly.
MEKS: A program for computation of inclusive jet cross sections at hadron colliders
NASA Astrophysics Data System (ADS)
Gao, Jun; Liang, Zhihua; Soper, Davison E.; Lai, Hung-Liang; Nadolsky, Pavel M.; Yuan, C.-P.
2013-06-01
EKS is a numerical program that predicts differential cross sections for production of single-inclusive hadronic jets and jet pairs at next-to-leading order (NLO) accuracy in a perturbative QCD calculation. We describe MEKS 1.0, an upgraded EKS program with increased numerical precision, suitable for comparisons to the latest experimental data from the Large Hadron Collider and Tevatron. The program integrates the regularized parton-level matrix elements over the kinematical phase space for production of two and three partons using the VEGAS algorithm. It stores the generated weighted events in finely binned two-dimensional histograms for fast offline analysis. A user interface allows one to customize computation of inclusive jet observables. Results of a benchmark comparison of the MEKS program and the commonly used FastNLO program are also documented.
Program summary
Program title: MEKS 1.0
Catalogue identifier: AEOX_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOX_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 9234
No. of bytes in distributed program, including test data, etc.: 51997
Distribution format: tar.gz
Programming language: Fortran (main program), C (CUBA library and analysis program)
Computer: All
Operating system: Any UNIX-like system
RAM: ~300 MB
Classification: 11.1
External routines: LHAPDF (https://lhapdf.hepforge.org/)
Nature of problem: Computation of differential cross sections for inclusive production of single hadronic jets and jet pairs at next-to-leading order accuracy in perturbative quantum chromodynamics.
Solution method: Upon subtraction of infrared singularities, the hard-scattering matrix elements are integrated over available phase space using an optimized VEGAS algorithm. Weighted events are generated and filled into a finely binned two-dimensional histogram, from which the final cross sections with typical experimental binning and cuts are computed by an independent analysis program. Monte Carlo sampling of event weights is tuned automatically to get better efficiency.
Running time: Depends on details of the calculation and sought numerical accuracy. See benchmark performance in Section 4. The tests provided take approximately 27 min for the jetbin run and a few seconds for jetana.
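The event-storage idea in the solution method can be illustrated in a few lines: weighted events are filled into a finely binned 2-D histogram, and coarser experimental binnings are then assembled offline without re-running the integration. The sketch below uses mock kinematics and weights, not MEKS output.

    # Weighted events binned finely once, re-binned coarsely offline.
    import numpy as np

    rng = np.random.default_rng(0)
    pt = rng.exponential(80.0, 100_000) + 30.0   # mock jet pT [GeV]
    y = rng.uniform(-2.5, 2.5, 100_000)          # mock rapidity
    w = rng.normal(1.0, 0.2, 100_000)            # mock event weights

    H, pt_edges, y_edges = np.histogram2d(
        pt, y, bins=[400, 50], range=[[30, 830], [-2.5, 2.5]], weights=w)

    # offline analysis: sum fine bins into one experimental bin
    sel = (pt_edges[:-1] >= 100) & (pt_edges[1:] <= 200)
    print("weighted events with 100 < pT < 200, all y:", H[sel, :].sum())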
Combustion of hydrogen injected into a supersonic airstream (the SHIP computer program)
NASA Technical Reports Server (NTRS)
Markatos, N. C.; Spalding, D. B.; Tatchell, D. G.
1977-01-01
The mathematical and physical basis of the SHIP computer program which embodies a finite-difference, implicit numerical procedure for the computation of hydrogen injected into a supersonic airstream at an angle ranging from normal to parallel to the airstream main flow direction is described. The physical hypotheses built into the program include: a two-equation turbulence model, and a chemical equilibrium model for the hydrogen-oxygen reaction. Typical results for equilibrium combustion are presented and exhibit qualitatively plausible behavior. The computer time required for a given case is approximately 1 minute on a CDC 7600 machine. A discussion of the assumption of parabolic flow in the injection region is given which suggests that improvement in calculation in this region could be obtained by use of the partially parabolic procedure of Pratap and Spalding. It is concluded that the technique described herein provides the basis for an efficient and reliable means for predicting the effects of hydrogen injection into supersonic airstreams and of its subsequent combustion.
QSAR Study for Carcinogenic Potency of Aromatic Amines Based on GEP and MLPs
Song, Fucheng; Zhang, Anling; Liang, Hui; Cui, Lianhua; Li, Wenlian; Si, Hongzong; Duan, Yunbo; Zhai, Honglin
2016-01-01
A new analysis strategy was used to classify the carcinogenicity of aromatic amines. The physical-chemical parameters are closely related to the carcinogenicity of compounds. Quantitative structure activity relationship (QSAR) is a method of predicting the carcinogenicity of aromatic amines that can reveal the relationship between carcinogenicity and physical-chemical parameters. This study used gene expression programming (via APS software) and multilayer perceptrons (via Weka software) to predict the carcinogenicity of aromatic amines. Both methods relied on molecular descriptors calculated by CODESSA software, and eight molecular descriptors were selected to build function equations. As a remarkable result, the accuracies of gene expression programming in the training and test sets are 0.92 and 0.82, while the accuracies of the multilayer perceptrons in the training and test sets are 0.84 and 0.74, respectively. The precision of gene expression programming is clearly superior to that of the multilayer perceptrons in both the training set and the test set. QSAR is thus an efficient method for the identification of carcinogenic compounds. PMID:27854309
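As a hedged illustration of the multilayer-perceptron branch of this workflow, the sketch below fits scikit-learn's MLPClassifier to a random stand-in for the eight-descriptor matrix; descriptor values and labels are invented, so only the shape of the procedure carries over.

    # MLP on an 8-descriptor matrix (random stand-in data; illustration only).
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))                    # 8 molecular descriptors
    y = (X @ rng.normal(size=8) > 0).astype(int)     # mock activity labels

    Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
    clf.fit(Xtr, ytr)
    print("training accuracy:", clf.score(Xtr, ytr))
    print("test accuracy:", clf.score(Xte, yte))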
Characterization of Lateral Structure of the p-i-n Diode for Thin-Film Silicon Solar Cell.
Kiaee, Zohreh; Joo, Seung Ki
2018-03-01
The lateral structure of the p-i-n diode was characterized for thin-film silicon solar cell application. The structure can benefit from a wide intrinsic layer, which can improve efficiency without increasing cell thickness. Compared with conventional thin-film p-i-n cells, the p-i-n diode lateral structure exploited direct light irradiation on the absorber layer, one-side contact, and bifacial irradiation. Considering the effect of different carrier lifetimes and recombinations, we calculated efficiency parameters by using a commercially available simulation program as a function of intrinsic layer width, as well as the distance between p/i or n/i junctions to contacts. We then obtained excellent parameter values of 706.52 mV open-circuit voltage, 24.16 mA/cm2 short-circuit current, 82.66% fill factor, and 14.11% efficiency from a lateral cell (thickness = 3 μm; intrinsic layer width = 53 μm) in monofacial irradiation mode (i.e., only sunlight from the front side was considered). Simulation results of the cell without using rear-side reflector in bifacial irradiation mode showed 11.26% front and 9.72% rear efficiencies. Our findings confirmed that the laterally structured p-i-n cell can be a potentially powerful means for producing highly efficient, thin-film silicon solar cells.
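The headline number is internally consistent with the standard relation η = Voc × Jsc × FF / Pin, assuming the conventional 100 mW/cm2 (AM1.5) input power, which the abstract does not state explicitly:

    # Reproducing the reported efficiency from Voc, Jsc, and FF.
    voc = 0.70652      # open-circuit voltage [V]
    jsc = 24.16        # short-circuit current density [mA/cm^2]
    ff = 0.8266        # fill factor
    p_in = 100.0       # incident power density [mW/cm^2] (AM1.5, assumed)

    eta = voc * jsc * ff / p_in
    print(f"efficiency = {100 * eta:.2f} %")   # ~14.11 %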
Method for Evaluating Energy Use of Dishwashers, Clothes Washers, and Clothes Dryers: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eastment, M.; Hendron, R.
Building America teams are researching opportunities to improve energy efficiency for some of the more challenging end-uses, such as lighting (both fixed and occupant-provided), appliances (clothes washer, dishwasher, clothes dryer, refrigerator, and range), and miscellaneous electric loads, which are all heavily dependent on occupant behavior and product choices. These end-uses have grown to be a much more significant fraction of total household energy use (as much as 50% for very efficient homes) as energy efficient homes have become more commonplace through programs such as ENERGY STAR and Building America. As modern appliances become more sophisticated, the residential energy analyst is faced with a daunting task in trying to calculate the energy savings of high efficiency appliances. Unfortunately, most whole-building simulation tools do not allow the input of detailed appliance specifications. Using DOE test procedures, the method outlined in this paper presents a reasonable way to generate inputs for whole-building energy-simulation tools. The information necessary to generate these inputs is available on EnergyGuide labels, the ENERGY STAR website, the California Energy Commission's Appliance website, and manufacturers' literature. Building America has developed a standard method for analyzing the effect of high efficiency appliances on whole-building energy consumption when compared to Building America's Research Benchmark building.
Cost analysis of the treatment of severe acute malnutrition in West Africa.
Isanaka, Sheila; Menzies, Nicolas A; Sayyad, Jessica; Ayoola, Mudasiru; Grais, Rebecca F; Doyon, Stéphane
2017-10-01
We present an updated cost analysis to provide new estimates of the cost of providing community-based treatment for severe acute malnutrition, including expenditure shares for major cost categories. We calculated total and per-child costs from a provider perspective. We categorized costs into three main activities (outpatient treatment, inpatient treatment, and management/administration) and four cost categories within each activity (personnel; therapeutic food; medical supplies; and infrastructure and logistical support). For each category, total costs were calculated by multiplying input quantities expended in the Médecins Sans Frontières nutrition program in Niger during a 12-month study period by 2015 input prices. All children received outpatient treatment, with 43% also receiving inpatient treatment. In this large, well-established program, the average cost per child treated was €148.86, with outpatient and inpatient treatment costs of €75.50 and €134.57 per child, respectively. Therapeutic food (44%, €32.98 per child) and personnel (35%, €26.70 per child) dominated outpatient costs, while personnel (56%, €75.47 per child) dominated the cost of inpatient care. Sensitivity analyses suggested that lowering the prices of medical treatments and therapeutic food had limited effect on total costs per child, while increasing program size and decreasing use of expatriate staff support reduced total costs per child substantially. Updated estimates of severe acute malnutrition treatment cost are substantially lower than previously published values, and important cost savings may be possible with increases in coverage/program size and integration into national health programs. These updated estimates can be used to suggest approaches to improve efficiency and inform national-level resource allocation.
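The cost arithmetic can be made explicit with the reported figures; the management/administration share below is inferred as the residual needed to reach the reported total, which is an assumption rather than a reported value.

    # Per-child cost decomposition from the reported figures.
    cost_outpatient = 75.50     # EUR per child treated (all children)
    cost_inpatient = 134.57     # EUR per admitted child
    frac_inpatient = 0.43       # share of children also treated as inpatients
    total_per_child = 148.86    # reported overall cost per child treated

    direct = cost_outpatient + frac_inpatient * cost_inpatient
    print(f"direct treatment: {direct:.2f} EUR/child")
    # residual attributed to management/administration (inferred, not reported)
    print(f"implied management/administration: {total_per_child - direct:.2f}")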
Computer programs for calculating potential flow in propulsion system inlets
NASA Technical Reports Server (NTRS)
Stockman, N. O.; Button, S. L.
1973-01-01
In the course of designing inlets, particularly for VTOL and STOL propulsion systems, a calculational procedure utilizing three computer programs evolved. The chief program is the Douglas axisymmetric potential flow program called EOD, which calculates the incompressible potential flow about arbitrary axisymmetric bodies. The other two programs, original with Lewis, are called SCIRCL and COMBYN. Program SCIRCL generates input for EOD from various specified analytic shapes for the inlet components. Program COMBYN takes basic solutions output by EOD and combines them into solutions of interest, and applies a compressibility correction.
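Because the incompressible potential-flow equation is linear, basic solutions superpose, which is presumably what COMBYN exploits; the sketch below combines two mock unit surface-velocity distributions with illustrative weights and is not the actual program logic.

    # Superposition of mock unit solutions: surface velocities add linearly,
    # so a combined operating condition is a weighted sum of basic solutions.
    import numpy as np

    s = np.linspace(0.0, 1.0, 6)        # surface stations (arc length, mock)
    v_suction = 1.0 - s                 # mock unit inlet-suction solution
    v_stream = 0.2 + 0.8 * s            # mock unit free-stream solution

    w_suction, w_stream = 1.4, 1.0      # weights for the desired condition
    print(w_suction * v_suction + w_stream * v_stream)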
Efficient GW calculations using eigenvalue-eigenvector decomposition of the dielectric matrix
NASA Astrophysics Data System (ADS)
Nguyen, Huy-Viet; Pham, T. Anh; Rocca, Dario; Galli, Giulia
2011-03-01
During the past 25 years, the GW method has been successfully used to compute electronic quasi-particle excitation spectra of a variety of materials. It is however a computationally intensive technique, as it involves summations over occupied and empty electronic states, to evaluate both the Green function (G) and the dielectric matrix (DM) entering the expression of the screened Coulomb interaction (W). Recent developments have shown that eigenpotentials of DMs can be efficiently calculated without any explicit evaluation of empty states. In this work, we will present a computationally efficient approach to the calculations of GW spectra by combining a representation of DMs in terms of their eigenpotentials and a recently developed iterative algorithm. As a demonstration of the efficiency of the method, we will present calculations of the vertical ionization potentials of several systems. Work was funded by SciDAC-e DE-FC02-06ER25777.
StreamThermal: A software package for calculating thermal metrics from stream temperature data
Tsang, Yin-Phan; Infante, Dana M.; Stewart, Jana S.; Wang, Lizhu; Tingly, Ralph; Thornbrugh, Darren; Cooper, Arthur; Wesley, Daniel
2016-01-01
Improved quality and availability of continuous stream temperature data allow natural resource managers, particularly in fisheries, to understand associations between different characteristics of stream thermal regimes and stream fishes. However, there has been no convenient tool to efficiently characterize multiple metrics reflecting stream thermal regimes from the increasing amount of data. This article describes a software program packaged as a library in R to facilitate this process. With this freely-available package, users will be able to quickly summarize metrics that describe five categories of stream thermal regimes: magnitude, variability, frequency, timing, and rate of change. The installation and usage instructions of this package, the definitions of the calculated thermal metrics, and the output format from the package are described, along with an application showing the utility of multiple metrics. We believe this package can be widely utilized by interested stakeholders and greatly assist more studies in fisheries.
Correlation Energies from the Two-Component Random Phase Approximation.
Kühn, Michael
2014-02-11
The correlation energy within the two-component random phase approximation accounting for spin-orbit effects is derived. The resulting plasmon equation is rewritten, analogously to the scalar relativistic case, in terms of the trace of two Hermitian matrices for (Kramers-restricted) closed-shell systems, and then represented as an integral over imaginary frequency using the resolution of the identity approximation. The final expression is implemented in the TURBOMOLE program suite. The code is applied to the computation of equilibrium distances and vibrational frequencies of heavy diatomic molecules. The efficiency is demonstrated by calculation of the relative energies of the Oh-, D4h-, and C5v-symmetric isomers of Pb6. Results within the random phase approximation are obtained based on two-component Kohn-Sham reference-state calculations, using effective-core potentials. These values are finally compared to other two-component and scalar relativistic methods, as well as experimental data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matthews, Devin A., E-mail: dmatthews@utexas.edu; Stanton, John F.
2015-02-14
The theory of non-orthogonal spin-adaptation for closed-shell molecular systems is applied to coupled cluster methods with quadruple excitations (CCSDTQ). Calculations at this level of detail are of critical importance in describing the properties of molecular systems to an accuracy which can meet or exceed modern experimental techniques. Such calculations are of significant (and growing) importance in such fields as thermodynamics, kinetics, and atomic and molecular spectroscopies. With respect to the implementation of CCSDTQ and related methods, we show that there are significant advantages to non-orthogonal spin-adaptation with respect to simplification and factorization of the working equations and to creating an efficient implementation. The resulting algorithm is implemented in the CFOUR program suite for CCSDT, CCSDTQ, and various approximate methods (CCSD(T), CC3, CCSDT-n, and CCSDT(Q)).
NASA Astrophysics Data System (ADS)
Prasad, Bishwajit
Scope and methods of study. Complementing breeding effort by deploying alternative methods of identifying higher yielding genotypes in a wheat breeding program is important for obtaining greater genetic gains. Spectral reflectance indices (SRI) are one of the many indirect selection tools that have been reported to be associated with different physiological processes of wheat. A total of five experiments (a set of 25 released cultivars from winter wheat breeding programs of the U.S. Great Plains and four populations of randomly derived recombinant inbred lines having 25 entries in each population) were conducted over two years under rainfed Great Plains winter wheat environments at Oklahoma State University research farms. Grain yield was measured in each experiment, and biomass was measured in three experiments at three growth stages (booting, heading, and grainfilling). Canopy spectral reflectance was measured at three growth stages, and eleven SRI were calculated. Correlations (phenotypic and genetic) between grain yield and SRI and between biomass and SRI, broad-sense heritability of the SRI and yield, response to selection and correlated response, relative selection efficiency of the SRI, and the efficiency of the SRI in selecting higher yielding genotypes were assessed. Findings and conclusions. The genetic correlation coefficients revealed that the water based near infrared indices (WI and NWI) were strongly associated with grain yield and biomass production. The regression analysis detected a linear relationship between the water based indices and both grain yield and biomass. The two newly developed indices (NWI-3 and NWI-4) gave higher broad sense heritability than grain yield, higher direct response to selection compared to grain yield, correlated response equal to or higher than direct response for grain yield, relative selection efficiency greater than one, and higher efficiency in selecting higher yielding genotypes. Based on the overall genetic analysis required to establish any trait as an efficient indirect selection tool, the water based SRI (especially NWI-3 and NWI-4) have the potential to complement the classical breeding effort for selecting genotypes with higher yield potential in a winter wheat breeding program.
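For reference, the water-based indices are simple normalized band ratios; the band choices below follow the published NWI definitions as commonly cited (an assumption here, not quoted from this abstract), and the reflectance values are illustrative.

    # Normalized water indices from canopy reflectance (assumed definitions):
    # NWI-3 = (R970 - R880) / (R970 + R880), NWI-4 = (R970 - R920) / (R970 + R920)
    def nwi(r970, r_ref):
        return (r970 - r_ref) / (r970 + r_ref)

    r970, r880, r920 = 0.52, 0.61, 0.60   # illustrative canopy reflectances
    print("NWI-3 =", round(nwi(r970, r880), 4))
    print("NWI-4 =", round(nwi(r970, r920), 4))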
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Ian M.; Goldman, Charles A.; Murphy, Sean
The average cost to utilities to save a kilowatt-hour (kWh) in the United States is 2.5 cents, according to the most comprehensive assessment to date of the cost performance of energy efficiency programs funded by electricity customers. These costs are similar to those documented earlier. Cost-effective efficiency programs help ensure electricity system reliability at the most affordable cost as part of utility planning and implementation activities for resource adequacy. Building on prior studies, Berkeley Lab analyzed the cost performance of 8,790 electricity efficiency programs between 2009 and 2015 for 116 investor-owned utilities and other program administrators in 41 states. The Berkeley Lab database includes programs representing about three-quarters of total spending on electricity efficiency programs in the United States.
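A common way such cents-per-kWh figures are produced is a levelized cost of saved energy: annualize program spending with a capital recovery factor and divide by first-year savings. The sketch below uses an invented discount rate, measure lifetime, and spending and savings totals purely for illustration; it is not Berkeley Lab's exact methodology.

    # Levelized cost of saved energy (illustrative parameter values).
    def crf(rate, years):      # capital recovery factor
        return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

    spending = 4.0e8           # program spending in a year [$] (invented)
    annual_savings = 2.0e9     # first-year savings [kWh/yr] (invented)
    lifetime, rate = 12, 0.06  # average measure life, discount rate (invented)

    cse = spending * crf(rate, lifetime) / annual_savings
    print(f"levelized cost of saved energy: {100 * cse:.2f} cents/kWh")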
NASA Technical Reports Server (NTRS)
Tuma, Margaret L.; Weisshaar, Andreas; Li, Jian; Beheim, Glenn
1995-01-01
To determine the feasibility of coupling the output of a single-mode optical fiber into a single-mode rib waveguide in a temperature-varying environment, a theoretical calculation of the coupling efficiency between the two was carried out. Due to the complex geometry of the rib guide, there is no analytical solution to the wave equation for the guided modes; thus, approximation and/or numerical techniques must be utilized to determine the field patterns of the guide. In this study, three solution methods were used for both the fiber and guide fields: the effective-index method (EIM), Marcatili's approximation, and a Fourier method. These methods were utilized independently to calculate the electric field profile of each component at two temperatures, 20 C and 300 C, representing a nominal and a high temperature. Using the electric field profile calculated from each method, the theoretical coupling efficiency between an elliptical-core optical fiber and a rib waveguide was calculated using the overlap integral, and the results were compared. It was determined that a high coupling efficiency can be achieved when the two components are aligned. The coupling efficiency was more sensitive to alignment offsets in the y direction than in the x, due to the elliptical modal field profile of both components. Changes in the coupling efficiency over temperature were found to be minimal.
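The overlap-integral calculation itself is compact; the sketch below evaluates it numerically for two elliptical Gaussian profiles standing in for the computed fiber and rib-waveguide modes (grid cell areas cancel in the ratio). The widths and the y offset are invented values.

    # Coupling efficiency from the overlap integral of two mode profiles.
    import numpy as np

    x = np.linspace(-15e-6, 15e-6, 601)
    y = np.linspace(-15e-6, 15e-6, 601)
    X, Y = np.meshgrid(x, y)

    def mode(wx, wy, x0=0.0, y0=0.0):      # elliptical Gaussian field
        return np.exp(-((X - x0) / wx) ** 2 - ((Y - y0) / wy) ** 2)

    E_fiber = mode(4.0e-6, 2.0e-6)               # elliptical-core fiber mode
    E_guide = mode(3.5e-6, 1.8e-6, y0=0.5e-6)    # rib mode, offset in y

    eta = (np.abs(np.sum(E_fiber * E_guide)) ** 2
           / (np.sum(E_fiber ** 2) * np.sum(E_guide ** 2)))
    print(f"coupling efficiency = {eta:.3f}")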
NASA Astrophysics Data System (ADS)
Iskin, Ibrahim
Energy efficiency stands out with its potential to address a number of challenges that today's electric utilities face, including increasing and changing electricity demand, shrinking operating capacity, and decreasing system reliability and flexibility. Because energy efficiency is the least-cost and least-risky alternative, the share of energy efficiency programs in utilities' energy portfolios has been on the rise since the 1980s, and their importance is expected to continue growing. Despite this promise, the ability to determine and invest in only the most promising program alternatives plays a key role in the successful use of energy efficiency as a utility-wide resource. This issue becomes even more significant considering the availability of a vast number of potential energy efficiency programs, the rapidly changing business environment, and the existence of multiple stakeholders. This dissertation introduces hierarchical decision modeling as the framework for energy efficiency program planning in electric utilities. The model focuses on the assessment of emerging energy efficiency programs and proposes to bridge the gap between technology screening and cost/benefit evaluation practices. This approach is expected to identify emerging technology alternatives which have the highest potential to pass cost/benefit ratio testing procedures and contribute to the effectiveness of decision practices in energy efficiency program planning. The model also incorporates rank order analysis and sensitivity analysis for testing the robustness of results from different stakeholder perspectives and future uncertainties in an attempt to enable more informed decision-making practices. The model was applied to the case of 13 high priority emerging energy efficiency program alternatives identified in the Pacific Northwest, U.S.A. The results of this study reveal that energy savings potential is the most important program management consideration in selecting emerging energy efficiency programs. Market dissemination potential and program development and implementation potential are the second and third most important, whereas ancillary benefits potential is the least important program management consideration. The results imply that program value considerations (energy savings potential and ancillary benefits potential) and program feasibility considerations (program development and implementation potential and market dissemination potential) have almost equal impacts on the assessment of emerging energy efficiency programs. Considering the overwhelming number of value-focused studies and the few feasibility-focused studies in the literature, this finding clearly shows that feasibility-focused studies are greatly understudied. The hierarchical decision model developed in this dissertation is generalizable. Thus, other utilities or power systems can adopt the research steps employed in this study as guidelines and conduct similar assessment studies on emerging energy efficiency programs of their interest.
A Global Review of Incentive Programs to Accelerate Energy-Efficient Appliances and Equipment
DOE Office of Scientific and Technical Information (OSTI.GOV)
de la Rue du Can, Stephane; Phadke, Amol; Leventis, Greg
Incentive programs are an essential policy tool to move the market toward energy-efficient products. They offer a favorable complement to mandatory standards and labeling policies by accelerating the market penetration of energy-efficient products above equipment standard requirements and by preparing the market for increased future mandatory requirements. They sway purchase decisions and in some cases production decisions and retail stocking decisions toward energy-efficient products. Incentive programs are structured according to their regulatory environment, the way they are financed, by how the incentive is targeted, and by who administers them. This report categorizes the main elements of incentive programs, using case studies from the Major Economies Forum to illustrate their characteristics. To inform future policy and program design, it seeks to recognize design advantages and disadvantages through a qualitative overview of the variety of programs in use around the globe. Examples range from rebate programs administered by utilities under an Energy-Efficiency Resource Standards (EERS) regulatory framework (California, USA) to the distribution of Eco-Points that reward customers for buying efficient appliances under a government recovery program (Japan). We found that evaluations have demonstrated that financial incentives programs have greater impact when they target highly efficient technologies that have a small market share. We also found that the benefits and drawbacks of different program design aspects depend on the market barriers addressed, the target equipment, and the local market context, and that no program design surpasses the others. The key to successful program design and implementation is a thorough understanding of the market and effective identification of the most important local factors hindering the penetration of energy-efficient technologies.
Fourth COS FUV Lifetime Position: Cross-Dispersion Profiles, Flux, and Flat-Field Calibration
NASA Astrophysics Data System (ADS)
Rafelski, Marc
2016-10-01
Obtain observations of spectrophotometric white dwarf standard stars at all cenwaves (excepting G130M/1055 and G130M/1096) and FP-POS to determine flux calibrations to S/N>30 and, concurrently, the 1-D L- and P-flat templates and 2-D cross-dispersion profiles required for improved extraction at LP4. This program ties the spectroscopic sensitivity monitoring at LP4 with that at LP3, in case rapid evolution of gain at LP4 is discovered in coordination with program 14854. The main requirement for this program is S/N 50/resel, which is driven by two needs: (1) high S/N 2-D spectral profiles, which are calculated by scaling Program 12806 profiles and requiring that profile contours can be located such that flux errors are less than 1-2%, and (2) flat fielding of pixel-to-pixel variations (p-flats). WD 0308-565 is the primary target for this program due to its status as a flux standard and TDS target. GD 71 is used to more efficiently calibrate Segment A in the G160M modes.
L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing
NASA Astrophysics Data System (ADS)
Demetriou, I. C.
2006-04-01
Fortran 77 software is given for least squares smoothing to data values contaminated by random errors subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is also an unknown of the optimization problem. A highly useful description of the constraints is that they follow from the assumption of initially increasing and subsequently decreasing rates of change, or vice versa, of the process considered. The underlying algorithm partitions the data into two disjoint sets of adjacent data and calculates the required fit by solving a strictly convex quadratic programming problem for each set. The piecewise linear interpolant to the fit is convex on the first set and concave on the other one. The partition into suitable sets is achieved by a finite iterative algorithm, which is made quite efficient because of the interactions of the quadratic programming problems on consecutive data. The algorithm obtains the solution by employing no more quadratic programming calculations over subranges of data than twice the number of the divided differences constraints. The quadratic programming technique makes use of active sets and takes advantage of a B-spline representation of the smoothed values that allows some efficient updating procedures. The entire code required to implement the method is 2920 Fortran lines. The package has been tested on a variety of data sets and it has performed very efficiently, terminating in an overall number of active set changes over subranges of data that is only proportional to the number of data. The results suggest that the package can be used for very large numbers of data values. Some examples with output are provided to help new users and exhibit certain features of the software. Important applications of the smoothing technique may be found in calculating a sigmoid approximation, which is a common topic in various contexts in applications in disciplines like physics, economics, biology and engineering. Distribution material that includes single and double precision versions of the code, driver programs, technical details of the implementation of the software package and test examples that demonstrate the use of the software is available in an accompanying ASCII file.
Program summary
Title of program: L2CXCV
Catalogue identifier: ADXM_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXM_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC Intel Pentium, Sun Sparc Ultra 5, Hewlett-Packard HP UX 11.0
Operating system: WINDOWS 98, 2000, Unix/Solaris 7, Unix/HP UX 11.0
Programming language used: FORTRAN 77
Memory required to execute with typical data: O(n), where n is the number of data
No. of bits in a byte: 8
No. of lines in distributed program, including test data, etc.: 29 349
No. of bytes in distributed program, including test data, etc.: 1 276 663
No. of processors used: 1
Has the code been vectorized or parallelized?: no
Distribution format: default tar.gz
Separate documentation available: Yes
Nature of physical problem: Analysis of processes that show initially increasing and then decreasing rates of change (sigmoid shape), as, for example, in heat curves, reactor stability conditions, evolution curves, photoemission yields, growth models, utility functions, etc. Identifying an unknown convex/concave (sigmoid) function from some measurements of its values that contain random errors, and also identifying the inflection point of this sigmoid function.
Method of solution: Univariate data smoothing by minimizing the sum of the squares of the residuals (least squares approximation) subject to the condition that the second order divided differences of the smoothed values change sign at most once. Ideally, this is the number of sign changes in the second derivative of the underlying function. The remarkable property of the smoothed values is that they consist of one separate section of optimal components that give nonnegative second divided differences (convexity) and one separate section of optimal components that give nonpositive second divided differences (concavity). The solution process finds the joint (that is, the inflection point estimate of the underlying function) of the sections automatically. The underlying method is iterative, each iteration solving a structured strictly convex quadratic programming problem in order to obtain a convex or a concave section over a subrange of data.
Restrictions on the complexity of the problem: Number of data, n, is not limited in the software package, but is limited to 2000 in the main driver. The total work of the method requires 2n-2 structured quadratic programming calculations over subranges of data, which in practice does not exceed the amount of O(n) computer operations.
Typical running times: CPU time on a PC with an Intel 733 MHz processor operating in Windows 98: about 2 s to smooth n=1000 noisy measurements that follow the shape of the sine function over one period.
Summary: L2CXCV is a package of Fortran 77 subroutines for least squares smoothing to n univariate data values contaminated by random errors subject to one sign change in the second divided differences of the smoothed values, where the location of the sign change is unknown. The piecewise linear interpolant to the smoothed values gives a convex/concave fit to the data. The underlying algorithm is based on the property that in this best convex/concave fit, the convex and the concave sections are both optimal and separate. The algorithm is iterative, each iteration solving a strictly convex quadratic programming problem for the best convex fit to the first k data, starting from the best convex fit to the first k-1 data. By reversing the order and sign of the data, the algorithm obtains the best concave fit to the last n-k data. Then it chooses that k as the optimal position of the required sign change (which defines the inflection point of the fit), if the convex and the concave components to the first k and the last n-k data, respectively, form a convex/concave vector that gives the least sum of squares of residuals. In effect the algorithm requires at most 2n-2 quadratic programming calculations over subranges of data. The package employs a technique for quadratic programming which takes advantage of a B-spline representation of the smoothed values and makes use of some efficient O(k) updating procedures, where k is the number of data of a subrange. The package has been tested on a variety of data sets and it has performed very efficiently, terminating in an overall number of active set changes that is about n, thus exhibiting quadratic performance in n. The Fortran codes have been designed to minimize the use of computing resources. Attention has been given to computer rounding error details, which are essential to the robustness of the software package. Numerical examples with output are provided to help the use of the software and exhibit certain features of the method.
Distribution material that includes driver programs, technical details of the installation of the package and test examples that demonstrate the use of the software is available in an ASCII file that accompanies this work.
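To make the optimization concrete, the following sketch (an illustration assuming unit-spaced abscissae, written in Python with numpy/scipy rather than the package's Fortran 77) solves the same best convex/concave approximation problem by brute force: one bound-constrained linear least squares problem per candidate inflection position. L2CXCV reaches the same optimum far more efficiently with its warm-started active-set quadratic programming.

```python
import numpy as np
from scipy.optimize import lsq_linear

def convex_concave_fit(y):
    """Least squares fit whose second differences change sign at most once."""
    n = len(y)
    m = n - 2                         # number of second divided differences
    # Parametrize the fit s by theta = [a, b, d_0, ..., d_{m-1}], where
    # a = s_0, b = s_1 - s_0 and d_i = s_{i+2} - 2 s_{i+1} + s_i, so that
    # s_j = a + j*b + sum_{i <= j-2} (j - 1 - i) * d_i is linear in theta.
    A = np.zeros((n, m + 2))
    A[:, 0] = 1.0
    A[:, 1] = np.arange(n)
    for i in range(m):
        j = np.arange(i + 2, n)
        A[j, i + 2] = j - 1 - i
    best, best_k = None, 0
    for k in range(m + 1):            # first k differences convex (d_i >= 0)
        lb = np.r_[-np.inf, -np.inf, np.zeros(k), np.full(m - k, -np.inf)]
        ub = np.r_[np.inf, np.inf, np.full(k, np.inf), np.zeros(m - k)]
        res = lsq_linear(A, y, bounds=(lb, ub))
        if best is None or res.cost < best.cost:
            best, best_k = res, k
    return A @ best.x, best_k         # smoothed values, inflection index

rng = np.random.default_rng(0)
x = np.linspace(-5.0, 5.0, 60)
y = 1.0 / (1.0 + np.exp(-x)) + 0.05 * rng.standard_normal(x.size)
s, k = convex_concave_fit(y)          # sigmoid data: convex then concave
```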
NASA Astrophysics Data System (ADS)
Zhao, Zhen-tao; Huang, Wei; Li, Shi-Bin; Zhang, Tian-Tian; Yan, Li
2018-06-01
In the current study, a variable Mach number waverider design approach has been proposed based on the osculating cone theory. A program for calculating the volumetric efficiencies of waveriders was written to determine the design Mach number of the osculating cone constant Mach number waverider that has the same volumetric efficiency as the osculating cone variable Mach number waverider. The CFD approach has been utilized to verify the effectiveness of the proposed approach. At the same time, the performance advantage of the osculating cone variable Mach number waverider is studied through a comparative analysis of the aerodynamic performance. The obtained results show that the osculating cone variable Mach number waverider achieves a higher lift-to-drag ratio throughout the flight profile than the osculating cone constant Mach number waverider, and it has superior low-speed aerodynamic performance while maintaining nearly the same high-speed aerodynamic performance.
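For illustration, a minimal sketch of a volumetric-efficiency calculation of the kind such a program performs, assuming the common definition eta = V^(2/3)/S_plan and surfaces sampled on a regular planform grid (both assumptions of this sketch, not details taken from the paper):

```python
import numpy as np

def volumetric_efficiency(z_upper, z_lower, dx, dy):
    """eta = V**(2/3) / S_plan from surfaces sampled on a regular x-y grid."""
    thickness = z_upper - z_lower          # local body thickness
    inside = ~np.isnan(thickness)          # cells lying on the planform
    volume = thickness[inside].sum() * dx * dy
    s_plan = inside.sum() * dx * dy        # planform reference area
    return volume ** (2.0 / 3.0) / s_plan
```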
Laser-launched flyers with organic working fluids
NASA Astrophysics Data System (ADS)
Mulford, Roberta; Swift, Damian
2003-10-01
The TRIDENT laser has been used to launch flyers by depositing IR energy in a thin layer of material - the working fluid - sandwiched between the flyer and a transparent substrate. We have investigated the use of working fluids based on organics, chosen as they are quite efficient absorbers of IR energy and should also convert heat to mechanical work more efficiently than materials such as carbon. A thermodynamically complete equation of state was developed for one of the fluids investigated experimentally - a carbohydrate solution - by chemical equilibrium calculations using the CHEETAH program. Continuum mechanics simulations were made of the flyer launch process, modeling the effect of the laser as energy deposition in the working fluid, and taking into account the compression and recoil of the substrate. We compare the simulations with a range of experiments and demonstrate the optimization of substrate and fluid thickness for a given flyer thickness and speed.
Implementation of Two-Component Time-Dependent Density Functional Theory in TURBOMOLE.
Kühn, Michael; Weigend, Florian
2013-12-10
We report the efficient implementation of a two-component time-dependent density functional theory proposed by Wang et al. (Wang, F.; Ziegler, T.; van Lenthe, E.; van Gisbergen, S.; Baerends, E. J. J. Chem. Phys. 2005, 122, 204103) that accounts for spin-orbit effects on excitations of closed-shell systems by employing a noncollinear exchange-correlation kernel. In contrast to the aforementioned implementation, our method is based on two-component effective core potentials as well as Gaussian-type basis functions. It is implemented in the TURBOMOLE program suite for functionals of the local density approximation and the generalized gradient approximation. Accuracy is assessed by comparison of two-component vertical excitation energies of heavy atoms and ions (Cd, Hg, Au(+)) and small molecules (I2, TlH) to other two- and four-component approaches. Efficiency is demonstrated by calculating the electronic spectrum of Au20.
Novel strategy to implement active-space coupled-cluster methods
NASA Astrophysics Data System (ADS)
Rolik, Zoltán; Kállay, Mihály
2018-03-01
A new approach is presented for the efficient implementation of coupled-cluster (CC) methods including higher excitations, based on a molecular orbital space partitioned into active and inactive orbitals. In the new framework, the string representation of amplitudes and intermediates is used as long as it is beneficial, but the contractions are evaluated as matrix products. Using a new diagrammatic technique together with the string notation introduced, the CC equations are represented in a compact form. As an application of these ideas, a new automated implementation of the single-reference-based multi-reference CC equations is presented for arbitrary excitation levels. The new program can be considered an improvement over previous implementations in many respects; e.g., diagram contributions are evaluated by efficient vectorized subroutines. Timings for test calculations for various complete active-space problems are presented. As an application of the new code, the weak interactions in the Be dimer were studied.
Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán
2018-04-05
Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects like Ring Polymer Molecular Dynamics (RPMD) are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field that replicates quantum properties of the original force field. In this work, we propose an efficient method of computing the FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.
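As a rough illustration of a force-field functor, the sketch below applies the lowest-order quadratic Wigner-Kirkwood (Feynman-Hibbs) correction to a Lennard-Jones pair potential for neon. The correction form and the LJ parameters are textbook assumptions of this sketch, not the paper's actual FFF construction.

```python
import numpy as np

HBAR = 1.054571817e-34                 # J s
KB = 1.380649e-23                      # J/K
EPS = 36.8 * KB                        # LJ well depth for Ne (J), approximate
SIG = 2.79e-10                         # LJ diameter for Ne (m), approximate
MU = 0.5 * 20.18 * 1.66053906660e-27   # reduced mass of a Ne pair (kg)

def v_lj(r):
    s6 = (SIG / r) ** 6
    return 4.0 * EPS * (s6 * s6 - s6)

def v_effective(r, T, h=1e-13):
    """V_eff = V + (hbar^2 beta / 24 mu) * (V'' + 2 V'/r), derivatives by
    central finite differences; valid for a radial pair potential."""
    beta = 1.0 / (KB * T)
    v1 = (v_lj(r + h) - v_lj(r - h)) / (2.0 * h)
    v2 = (v_lj(r + h) - 2.0 * v_lj(r) + v_lj(r - h)) / h**2
    return v_lj(r) + (HBAR**2 * beta / (24.0 * MU)) * (v2 + 2.0 * v1 / r)

r = np.linspace(0.9 * SIG, 3.0 * SIG, 200)
v_eff = v_effective(r, T=30.0)         # effective potential near liquid Ne
```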
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm of camera calibration are presented, with particular attention to the influence of camera lens radial distortion and decentering distortion. Then, the camera calibration procedure is designed to compute the camera parameters and calculate calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
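A minimal sketch of such a calibration pipeline with OpenCV's standard chessboard API; the 8x6 inner-corner pattern (48 corners, matching the abstract) and the image file names are assumptions for illustration.

```python
import glob
import numpy as np
import cv2

pattern = (8, 6)                                  # inner corners: 8*6 = 48
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_images/*.png"):     # hypothetical image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(                   # refine to subpixel accuracy
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)

# Returns RMS reprojection error, intrinsic matrix, distortion coefficients
# (radial k1, k2, k3 and tangential/decentering p1, p2), and per-view poses.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
```

For a stereo head, the same per-camera results can then feed cv2.stereoCalibrate to recover the relative pose of the two cameras.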
Status of the Neutron Capture Measurement on 237Np with the DANCE Array at LANSCE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Esch, E.-I.; Bond, E.M.; Bredeweg, T. A.
2005-05-24
Neptunium-237 is a major constituent of spent nuclear fuel. Estimates place the amount of 237Np bound for the Yucca Mountain high-level waste repository at 40 metric tons. The Department of Energy's Advanced Fuel Cycle Initiative program is evaluating methods for transmuting the actinide waste that will be generated by future operation of commercial nuclear power plants. The critical parameter that defines the transmutation efficiency of actinide isotopes is the neutron fission-to-capture ratio for the particular isotope in a given neutron spectrum. The calculation of transmutation efficiency therefore requires accurate fission and capture cross sections. Current 237Np evaluations available for transmuter system studies show significant discrepancies in both the fission and capture cross sections in the energy regions of interest. Herein we report on 237Np(n,γ) measurements using the recently commissioned DANCE array.
Propulsive efficiency of frog swimming with different feet and swimming patterns
Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu
2017-01-01
Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and hydrodynamic forces on the foot were computed according to computational fluid dynamics calculations. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, the two foot shapes and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsion, while the terrestrial frog efficiency (29.58%) fell within the range of drag-based propulsion. The results illustrate that the swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669
JADA: a graphical user interface for comprehensive internal dose assessment in nuclear medicine.
Grimes, Joshua; Uribe, Carlos; Celler, Anna
2013-07-01
The main objective of this work was to design a comprehensive dosimetry package that would keep all aspects of internal dose calculation within the framework of a single software environment and that would be applicable to a variety of dose calculation approaches. Our MATLAB-based graphical user interface (GUI) can be used for processing data obtained using pure planar, pure SPECT, or hybrid planar/SPECT imaging. Time-activity data for source regions are obtained using a set of tools that allow the user to reconstruct SPECT images, load images, coregister a series of planar images, and perform two-dimensional and three-dimensional image segmentation. Curve fits are applied to the acquired time-activity data to construct time-activity curves, which are then integrated to obtain time-integrated activity coefficients. Subsequently, dose estimates are made using one of three methods. The organ level dose calculation subGUI calculates mean organ doses that are equivalent to dose assessment performed by OLINDA/EXM. Voxelized dose calculation options, which include the voxel S value approach and Monte Carlo simulation using the EGSnrc user code DOSXYZnrc, are available within the process 3D image data subGUI. The developed internal dosimetry software package provides an assortment of tools for every step in the dose calculation process, eliminating the need for manual data transfer between programs. This saves time and minimizes user errors, while offering a versatility that can be used to efficiently perform patient-specific internal dose calculations in a variety of clinical situations.
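One step such a package automates can be sketched compactly: fit a model to the time-activity samples and integrate it analytically to obtain a time-integrated activity coefficient. The mono-exponential washout model and the sample values below are illustrative assumptions, not the GUI's only options.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1.0, 4.0, 24.0, 48.0, 72.0])    # hours after administration
a = np.array([35.0, 30.0, 16.0, 7.5, 3.6])    # % of injected activity

def mono_exp(t, a0, lam):
    return a0 * np.exp(-lam * t)

(a0, lam), _ = curve_fit(mono_exp, t, a, p0=(40.0, 0.03))

# Time-integrated activity coefficient: the analytic integral of
# a0*exp(-lam*t) from 0 to infinity is a0/lam; dividing the %IA amplitude
# by 100 expresses the result in hours per unit administered activity.
tiac = (a0 / lam) / 100.0
```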
Radiation heat transfer in multitube, alkaline-metal thermal-to-electric converter
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tournier, J.M.P.; El-Genk, M.S.
Vapor anode, multitube Alkali-Metal Thermal-to-Electric Converters (AMTECs) are being considered for a number of space missions, such as the NASA Pluto/Express (PX) and Europa missions, scheduled for the years 2004 and 2005, respectively. These static converters can achieve a high fraction of Carnot efficiency at relatively low operating temperatures. An optimized cell can potentially provide a conversion efficiency between 20 and 30 percent, when operated at a hot-side temperature of 1000--1200 K and a cold-side temperature of 550--650 K. A comprehensive modeling and testing program of vapor anode, multitube AMTEC cells has been underway for more than three years at the Air Force Research Laboratory's Power and Thermal Group (AFRL/VSDVP), jointly with the University of New Mexico's Institute for Space and Nuclear Power Studies. The objective of this program is to demonstrate the readiness of AMTECs for flight on future US Air Force space missions. A fast, integrated AMTEC Performance and Evaluation Analysis Model (APEAM) has been developed to support ongoing vacuum tests at AFRL and perform analyses and investigate potential design changes to improve the PX-cell performance. This model consists of three major components (Tournier and El-Genk 1998a, b): (a) a sodium vapor pressure loss model, which describes continuum, transition and free-molecule flow regimes in the low-pressure cavity of the cell; (b) an electrochemical and electrical circuit model; and (c) a radiation/conduction heat transfer model, for calculating parasitic heat losses. This Technical Note describes the methodology used to calculate the radiation view factors within the enclosure of the PX-cells, and the numerical procedure developed in this work to determine the radiation heat transport and temperatures within the cell cavity.
NASA Technical Reports Server (NTRS)
Walton, J. T.
1994-01-01
The development of a single-stage-to-orbit aerospace vehicle intended to be launched horizontally into low Earth orbit, such as the National Aero-Space Plane (NASP), has concentrated on the use of the supersonic combustion ramjet (scramjet) propulsion cycle. SRGULL, a scramjet cycle analysis code, is an engineer's tool capable of nose-to-tail, hydrogen-fueled, airframe-integrated scramjet simulation in a real gas flow with equilibrium thermodynamic properties. This program facilitates initial estimates of scramjet cycle performance by linking a two-dimensional forebody, inlet and nozzle code with a one-dimensional combustor code. Five computer codes (SCRAM, SEAGUL, INLET, Program HUD, and GASH) originally developed at NASA Langley Research Center in support of hypersonic technology are integrated in this program to analyze changing flow conditions. The one-dimensional combustor code is based on the combustor subroutine from SCRAM and the two-dimensional coding is based on an inviscid Euler program (SEAGUL). Kinetic energy efficiency input for sidewall area variation modeling can be calculated by the INLET program code. At the completion of inviscid component analysis, Program HUD, an integral boundary layer code based on the Spalding-Chi method, is applied to determine the friction coefficient, which is then used in a modified Reynolds analogy to calculate heat transfer. Real gas flow properties such as flow composition, enthalpy, entropy, and density are calculated by the subroutine GASH. Combustor input conditions are taken from one-dimensionalizing the two-dimensional inlet exit flow. The SEAGUL portions of this program are limited to supersonic flows, but the combustor (SCRAM) section can handle supersonic and dual-mode operation. SRGULL has been compared to scramjet engine tests with excellent results. SRGULL was written in FORTRAN 77 on an IBM PC compatible using IBM's FORTRAN/2 or Microway's NDP386 F77 compiler. The program is fully user interactive, but can also run in batch mode. It operates under the UNIX, VMS, NOS, and DOS operating systems. The source code is not directly compatible with all PC compilers (e.g., Lahey or Microsoft FORTRAN) due to block and segment size requirements. SRGULL executable code requires about 490K RAM and a math coprocessor on PCs. The SRGULL program was developed in 1989, although the component programs originated in the 1960s and 1970s. IBM, IBM PC, and DOS are registered trademarks of International Business Machines. VMS is a registered trademark of Digital Equipment Corporation. UNIX is a registered trademark of Bell Laboratories. NOS is a registered trademark of Control Data Corporation.
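The friction-to-heating step can be illustrated with a short sketch; the Colburn form of the Reynolds analogy is used here as a stand-in, since the abstract does not spell out SRGULL's modified Reynolds analogy, and all inputs are illustrative.

```python
def wall_heat_flux(cf, rho, u_e, cp, pr, t_aw, t_wall):
    """Heat flux (W/m^2) from a friction coefficient via a Colburn analogy."""
    stanton = 0.5 * cf * pr ** (-2.0 / 3.0)   # St = (Cf/2) * Pr^(-2/3)
    h = stanton * rho * u_e * cp              # film coefficient, W/(m^2 K)
    return h * (t_aw - t_wall)                # driving potential: Taw - Tw

# Illustrative hypersonic-duct numbers, not SRGULL outputs:
q = wall_heat_flux(cf=2.0e-3, rho=0.05, u_e=3000.0, cp=1005.0,
                   pr=0.72, t_aw=2600.0, t_wall=900.0)
```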
Development of full wave code for modeling RF fields in hot non-uniform plasmas
NASA Astrophysics Data System (ADS)
Zhao, Liangji; Svidzinski, Vladimir; Spencer, Andrew; Kim, Jin-Soo
2016-10-01
FAR-TECH, Inc. is developing a full wave RF modeling code to model RF fields in fusion devices and in general plasma applications. As an important component of the code, an adaptive meshless technique is introduced to solve the wave equations, which allows resolving plasma resonances efficiently and adapting to the complexity of antenna geometry and device boundary. The computational points are generated using either a point elimination method or a force balancing method based on the monitor function, which is calculated by solving the cold plasma dispersion equation locally. Another part of the code is the conductivity kernel calculation, used for modeling the nonlocal hot plasma dielectric response. The conductivity kernel is calculated on a coarse grid of test points and then interpolated linearly onto the computational points. All the components of the code are parallelized using MPI and OpenMP libraries to optimize the execution speed and memory. The algorithm and the results of our numerical approach to solving 2-D wave equations in a tokamak geometry will be presented. Work is supported by the U.S. DOE SBIR program.
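As an illustration of the monitor-function ingredient, the sketch below evaluates the local cold plasma dispersion relation in Stix notation; the single deuterium ion species and the numerical parameters are assumptions of this sketch, not values from the project.

```python
import numpy as np

E = 1.602176634e-19; EPS0 = 8.8541878128e-12
ME = 9.1093837015e-31; MD = 3.3435837768e-27

def cold_plasma_n2(f, B, ne, theta):
    """Two roots n^2 of A*n^4 - B*n^2 + C = 0 (cold plasma, Stix notation)."""
    w = 2.0 * np.pi * f
    S, D, P = 1.0, 0.0, 1.0
    for m, q in ((ME, -E), (MD, E)):       # electrons, deuterons
        wp2 = ne * E**2 / (EPS0 * m)       # plasma frequency squared
        wc = q * B / m                     # signed cyclotron frequency
        S -= wp2 / (w**2 - wc**2)
        D += (wc / w) * wp2 / (w**2 - wc**2)
        P -= wp2 / w**2
    R, L = S + D, S - D
    sin2, cos2 = np.sin(theta)**2, np.cos(theta)**2
    A = S * sin2 + P * cos2
    B2 = R * L * sin2 + P * S * (1.0 + cos2)
    C = P * R * L
    disc = np.sqrt(B2**2 - 4.0 * A * C + 0j)
    return (B2 + disc) / (2.0 * A), (B2 - disc) / (2.0 * A)

# Local refractive indices set the wavelength scale a mesh monitor can use:
n2_fast, n2_slow = cold_plasma_n2(f=50e6, B=2.0, ne=2e19, theta=np.pi / 2)
```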
Full Wave Parallel Code for Modeling RF Fields in Hot Plasmas
NASA Astrophysics Data System (ADS)
Spencer, Joseph; Svidzinski, Vladimir; Evstatiev, Evstati; Galkin, Sergei; Kim, Jin-Soo
2015-11-01
FAR-TECH, Inc. is developing a suite of full wave codes for modeling RF fields in hot plasmas. It is based on a formulation in configuration space with grid adaptation capability. The conductivity kernel (which includes a nonlocal dielectric response) is calculated by integrating the linearized Vlasov equation along unperturbed test particle orbits. For tokamak applications a 2-D version of the code is being developed. Progress of this work will be reported. This suite of codes has the following advantages over existing spectral codes: 1) It utilizes the localized nature of the plasma dielectric response to the RF field and calculates this response numerically without approximations. 2) It uses an adaptive grid to better resolve resonances in the plasma and antenna structures. 3) It uses an efficient sparse matrix solver to solve the formulated linear equations. The linear wave equation is formulated using two approaches: for cold plasmas the local cold plasma dielectric tensor is used (resolving resonances by particle collisions), while for hot plasmas the conductivity kernel is calculated. Work is supported by the U.S. DOE SBIR program.
Dynamic Characteristics of a Simple Brayton Cryocycle
NASA Astrophysics Data System (ADS)
Kutzschbach, A.; Kauschke, M.; Haberstroh, Ch.; Quack, H.
2006-04-01
The goal of the overall program is to develop a dynamic numerical model of helium refrigerators and the associated cooling systems based on commercial simulation software. The aim is to give system designers a tool to search for optimum control strategies during the construction phase of the refrigerator with the help of a plant "simulator". In a first step, a simple Brayton refrigerator has been investigated, which consists of a compressor, an after-cooler, a counter-current heat exchanger, a turboexpander and a heat source. Operating modes are "refrigeration" and "liquefaction". Whereas for the steady state design only component efficiencies are needed and mass and energy balances have to be calculated, the dynamic calculation also needs the thermal masses and the helium inventory. Transient mass and energy balances have to be formulated for many small elements and then solved simultaneously for all elements. The starting point of the simulation of the Brayton cycle is steady state operation at design conditions. The response of the system to step and cyclic changes of the refrigeration or liquefaction rate is calculated and characterized.
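The steady-state design-point calculation the abstract contrasts with the dynamic model can be sketched from component efficiencies and energy balances alone; ideal-gas helium and every numerical input below are illustrative assumptions, not values from the study.

```python
CP = 5193.0      # helium cp, J/(kg K), ideal-gas assumption
EX = 2.0 / 5.0   # (gamma - 1)/gamma for monatomic helium

def brayton_design_point(mdot, t_warm, t_load, pr, eta_c, eta_t, hx_eff):
    # Counter-current HX precools the high-pressure stream toward the
    # cold return at t_load (effectiveness defined on the HP side).
    t3 = t_warm - hx_eff * (t_warm - t_load)
    # Turboexpander with isentropic efficiency eta_t.
    t4 = t3 * (1.0 - eta_t * (1.0 - pr ** (-EX)))
    q_refrig = mdot * CP * (t_load - t4)           # heat lifted at the source
    # HX energy balance gives the low-pressure outlet = compressor inlet.
    t1 = t_load + (t_warm - t3)
    w_comp = mdot * CP * t1 * (pr ** EX - 1.0) / eta_c
    w_turb = mdot * CP * (t3 - t4)                 # recovered expander work
    return q_refrig, q_refrig / (w_comp - w_turb)  # capacity (W) and COP

q, cop = brayton_design_point(mdot=0.05, t_warm=300.0, t_load=80.0,
                              pr=3.0, eta_c=0.65, eta_t=0.7, hx_eff=0.97)
```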
Dissociative photoionization of isoprene: experiments and calculations.
Liu, Xianyun; Zhang, Weijun; Wang, Zhenya; Huang, Mingqiang; Yang, Xibin; Tao, Ling; Sun, Yue; Xu, Yuntao; Shan, Xiaobin; Liu, Fuyi; Sheng, Liusi
2009-03-01
Vacuum ultraviolet (VUV) dissociative photoionization of isoprene in the energy region 8.5-18 eV was investigated with photoionization mass spectroscopy (PIMS) using synchrotron radiation (SR). The ionization energy (IE) of isoprene as well as the appearance energies (AEs) of its fragment ions C5H7+, C5H5+, C4H5+, C3H6+, C3H5+, C3H4+, C3H3+ and C2H3+ were determined with photoionization efficiency (PIE) curves. The dissociation energies of some possible dissociation channels to produce those fragment ions were also determined experimentally. The total energies of C5H8 and its main fragments were calculated using the Gaussian 03 program and the Gaussian-2 method. The IE of C5H8, the AEs for its fragment ions, and the dissociation energies to produce them were predicted using the high-accuracy energy model. According to our results, the experimental dissociation energies were in reasonable agreement with the calculated values of the proposed photodissociation channels of C5H8.
Transfer matrix calculation for ion optical elements using real fields
NASA Astrophysics Data System (ADS)
Mishra, P. M.; Blaum, K.; George, S.; Grieser, M.; Wolf, A.
2018-03-01
With the increasing importance of ion storage rings and traps in low energy physics experiments, an efficient transport of ion species from the ion source area to the experimental setup becomes essential. Some available, powerful software packages rely on transfer matrix calculations in order to compute the ion trajectory through ion-optical beamline systems of high complexity. With analytical approaches, so far the transfer matrices are documented only for a few ideal ion optical elements. Here we describe an approach (using beam tracking calculations) to determine the transfer matrix for any individual electrostatic or magnetostatic ion optical element. We verify the procedure by considering the well-known cases and then apply it to derive the transfer matrix of a 90-degree electrostatic quadrupole deflector including its realistic geometry and fringe fields. A transfer line consisting of a quadrupole deflector and a quadrupole doublet is considered, where the results from a standard first-order transfer-matrix-based ion optical simulation program, implementing the derived transfer matrix, are compared with the real-field beam tracking simulations.
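The idea of extracting a transfer matrix from tracking can be sketched briefly: track rays with slightly perturbed entrance coordinates and centrally difference the exit coordinates. The hard-edge quadrupole used below stands in for the realistic field maps the paper employs.

```python
import numpy as np
from scipy.integrate import solve_ivp

K = 4.0     # quadrupole strength (1/m^2), illustrative
L = 0.3     # element length (m), illustrative

def track(u0):
    """Propagate (x, x') through a hard-edge focusing quadrupole."""
    rhs = lambda s, u: [u[1], -K * u[0]]      # paraxial equation x'' = -K x
    sol = solve_ivp(rhs, (0.0, L), u0, rtol=1e-10, atol=1e-12)
    return sol.y[:, -1]

def transfer_matrix(eps=1e-6):
    """First-order map by central differences of the tracked exit coordinates."""
    M = np.zeros((2, 2))
    for j in range(2):
        up, um = np.zeros(2), np.zeros(2)
        up[j], um[j] = eps, -eps
        M[:, j] = (track(up) - track(um)) / (2.0 * eps)
    return M

M = transfer_matrix()
# Analytic check for a hard-edge quad, w = sqrt(K):
# M ~ [[cos(wL), sin(wL)/w], [-w*sin(wL), cos(wL)]]
```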
BASIC Programming In Water And Wastewater Analysis
NASA Technical Reports Server (NTRS)
Dreschel, Thomas
1988-01-01
Collection of computer programs assembled for use in water-analysis laboratories. First program calculates quality-control parameters used in routine water analysis. Second calculates line of best fit for standard concentrations and absorbances entered. Third calculates specific conductance from conductivity measurement and temperature at which measurement taken. Fourth calculates any one of four types of residue measured in water. Fifth, sixth, and seventh calculate results of titrations commonly performed on water samples. Eighth converts measurements to actual dissolved-oxygen concentration using oxygen-saturation values for fresh and salt water. Ninth and tenth perform calculations of two other common titrimetric analyses. Eleventh calculates oil and grease residue from water sample. Last two use spectrophotometric measurements of absorbance at different wavelengths and residue measurements. Programs in collection written for Hewlett-Packard 2647F in H-P BASIC.
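The third program's calculation can be sketched as follows, using the linear temperature compensation commonly applied in water-quality work (about 1.91% per degree Celsius); the coefficient is an assumption of this sketch, not read from the original BASIC listing.

```python
def specific_conductance(cond_uS_cm, temp_c, alpha=0.0191):
    """Convert a raw conductivity reading to specific conductance at 25 C."""
    return cond_uS_cm / (1.0 + alpha * (temp_c - 25.0))

sc = specific_conductance(412.0, 18.5)   # ~470 uS/cm referenced to 25 C
```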
Womack, James C; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton
2016-11-28
Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
An automated program for reinforcement requirements for openings in cylindrical pressure vessels
NASA Technical Reports Server (NTRS)
Wilson, J. F.; Taylor, J. T.
1975-01-01
An automated interactive program for calculating the reinforcement requirements for openings in cylindrical pressure vessels subjected to internal pressure is described. The program is written for an electronic desk top calculator. The program calculates the required area of reinforcement for a given opening and compares this value with the area of reinforcement provided by a proposed design. All program steps, operating instructions, and example problems with input and sample output are documented.
Computer program developed for flowsheet calculations and process data reduction
NASA Technical Reports Server (NTRS)
Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.
1969-01-01
Computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.
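The architecture is easy to mimic: each unit operation is a routine, and the flowsheet is computed by calling the routines in order on a shared stream state. The toy units and numbers below are invented for illustration (PACER-65 itself is a Fortran program).

```python
def heater(stream, duty_kw):
    """Raise stream temperature by an energy balance over the unit."""
    stream["T"] += duty_kw / (stream["flow"] * stream["cp"])
    return stream

def splitter(stream, frac):
    """Split one stream into two with the same intensive properties."""
    a = dict(stream, flow=stream["flow"] * frac)
    b = dict(stream, flow=stream["flow"] * (1.0 - frac))
    return a, b

feed = {"flow": 2.0, "cp": 4.18, "T": 20.0}     # kg/s, kJ/(kg K), deg C
hot = heater(dict(feed), duty_kw=150.0)         # unit 1 in calling order
product, recycle = splitter(hot, frac=0.8)      # unit 2 in calling order
```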
Tao, Guohua; Miller, William H
2011-07-14
An efficient time-dependent importance sampling method is developed for the Monte Carlo calculation of time correlation functions via the initial value representation (IVR) of semiclassical (SC) theory. A prefactor-free time-dependent sampling function weights the importance of a trajectory based on the magnitude of its contribution to the time correlation function, and global trial moves are used to facilitate efficient sampling of the phase space of initial conditions. The method can be applied generally to sample rare events efficiently while avoiding trapping in a local region of the phase space. Results presented in the paper for two system-bath models demonstrate the efficiency of this new importance sampling method for full SC-IVR calculations.
Complex wet-environments in electronic-structure calculations
NASA Astrophysics Data System (ADS)
Fisicaro, Giuseppe; Genovese, Luigi; Andreussi, Oliviero; Marzari, Nicola; Goedecker, Stefan
The computational study of chemical reactions in complex, wet environments is critical for applications in many fields. It is often essential to study chemical reactions in the presence of applied electrochemical potentials, including the complex electrostatic screening coming from the solvent. In the present work we present a solver that handles both the Generalized Poisson and the Poisson-Boltzmann equation. A preconditioned conjugate gradient (PCG) method has been implemented for the Generalized Poisson equation and the linear regime of the Poisson-Boltzmann equation, allowing the minimization problem to be solved iteratively in some ten iterations. A self-consistent procedure enables us to solve the full Poisson-Boltzmann problem. The algorithms take advantage of a preconditioning procedure based on the BigDFT Poisson solver for the standard Poisson equation. They exhibit very high accuracy and parallel efficiency, and allow different boundary conditions, including surfaces. The solver has been integrated into the BigDFT and Quantum-ESPRESSO electronic-structure packages and will be released as an independent program, suitable for integration in other codes. We present test calculations on large proteins to demonstrate efficiency and performance. This work was done within the PASC and NCCR MARVEL projects. Computer resources were provided by the Swiss National Supercomputing Centre (CSCS) under Project ID s499. LG acknowledges also support from the EXTMOS EU project.
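For reference, a minimal preconditioned conjugate gradient iteration of the kind at the heart of such solvers, here with a simple Jacobi preconditioner on a 1-D Poisson problem; the BigDFT-based preconditioning described above is far more effective than this sketch.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=500):
    """Conjugate gradient for SPD A; M_inv applies the preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian
b = np.ones(n)
x = pcg(A, b, M_inv=lambda r: r / 2.0)  # Jacobi: divide by the diagonal
```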
Software and hardware complex for research and management of the separation process
NASA Astrophysics Data System (ADS)
Borisov, A. P.
2018-01-01
The article is devoted to the development of a program for studying the operation of an asynchronous electric drive with vector-algorithmic switching of the windings, as well as a hardware-software complex that monitors parameters and controls the rotation speed of the drive in order to investigate the operation of a cyclone. To study the drive, a method was used that finds the average value of the flux linkage, and a method was developed for the vector-algorithmic calculation of the power and electromagnetic torque of an asynchronous drive fed from a single-phase network with vector-algorithmic commutation, together with software for calculating the parameters. The software part of the complex regulates the motor speed by vector-algorithmic switching of the transistors or, using pulse-width modulation (PWM), sets any desired speed. Sensors are also connected to the hardware-software complex at the inlet and outlet of the cyclone. The developed cyclone with the integrated complex achieves high product-separation efficiency over a range of inlet speeds. The cyclone's maximum efficiency is reached at an inlet air speed of 18 m/s; to provide this, the asynchronous electric drive must run at a frequency of 45 Hz.
Campbell, Marie L; Rankin, Janet M
2017-03-01
Institutional ethnography (IE) is used to examine transformations in a professional nurse's work associated with her engagement with a hospital's electronic health record (EHR), which is being updated to integrate professional caregiving and produce more efficient and effective health care. We review the technical and scholarly literature on the practices and promises of information technology, especially its applications in health care, finding the more critical and analytic perspectives useful. Among the latter, scholarship on the activities of economising is important to our inquiry into the actual activities that transform 'things' (in our case, nursing knowledge and action) into calculable information for objective and financially relevant decision-making. Beginning with an excerpt of observational data, we explicate observed nurse-patient interactions, discovering in them traces of the institutional ruling relations that the nurse's activation of the EHR carries into the nursing setting. The EHR, we argue, materialises and generalises the ruling relations across institutionally located caregivers; its authorised information stabilises their knowing and acting, shaping health care towards a calculated effective and efficient form. Participating in the EHR's ruling practices, nurses adopt its ruling standpoint, a transformation that we conclude needs more careful analysis and debate.
NASA Astrophysics Data System (ADS)
Aliberti, P.; Feng, Y.; Takeda, Y.; Shrestha, S. K.; Green, M. A.; Conibeer, G.
2010-11-01
Theoretical efficiencies of a hot carrier solar cell considering indium nitride as the absorber material have been calculated in this work. In a hot carrier solar cell highly energetic carriers are extracted from the device before thermalisation, allowing higher efficiencies in comparison to conventional solar cells. Previous reports on efficiency calculations approached the problem using two different theoretical frameworks, the particle conservation (PC) model or the impact ionization model, which are only valid in particular extreme conditions. In addition an ideal absorber material with the approximation of parabolic bands has always been considered in the past. Such assumptions give an overestimation of the efficiency limits and results can only be considered indicative. In this report the real properties of wurtzite bulk InN absorber have been taken into account for the calculation, including the actual dispersion relation and absorbance. A new hybrid model that considers particle balance and energy balance at the same time has been implemented. Effects of actual impact ionization (II) and Auger recombination (AR) lifetimes have been included in the calculations for the first time, considering the real InN band structure and thermalisation rates. It has been observed that II-AR mechanisms are useful for cell operation in particular conditions, allowing energy redistribution of hot carriers. A maximum efficiency of 43.6% has been found for 1000 suns, assuming thermalisation constants of 100 ps and ideal blackbody absorption. This value of efficiency is considerably lower than values previously calculated adopting PC or II-AR models.
NASA Technical Reports Server (NTRS)
Vanderplaats, Garrett; Townsend, James C. (Technical Monitor)
2002-01-01
The purpose of this research under the NASA Small Business Innovative Research program was to develop algorithms and associated software to solve very large nonlinear, constrained optimization tasks. Key issues included efficiency, reliability, memory, and gradient calculation requirements. This report describes the general optimization problem, ten candidate methods, and detailed evaluations of four candidates. The algorithm chosen for final development is a modern recreation of a 1960s external penalty function method that uses very limited computer memory and computational time. Although of lower efficiency, the new method can solve problems orders of magnitude larger than current methods. The resulting BIGDOT software has been demonstrated on problems with 50,000 variables and about 50,000 active constraints. For unconstrained optimization, it has solved a problem in excess of 135,000 variables. The method includes a technique for solving discrete variable problems that finds a "good" design, although a theoretical optimum cannot be guaranteed. It is very scalable in that the number of function and gradient evaluations does not change significantly with increased problem size. Test cases are provided to demonstrate the efficiency and reliability of the methods and software.
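The exterior penalty idea that BIGDOT modernizes can be sketched in a few lines: the constrained problem is replaced by a sequence of unconstrained minimizations with a growing penalty on constraint violations. The toy objective, constraint, and parameters below are illustrative only, not the BIGDOT algorithm itself.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                        # objective to minimize
    return (x[0] - 2.0) ** 2 + (x[1] - 1.0) ** 2

def g(x):                        # inequality constraints, g(x) <= 0
    return np.array([x[0] + x[1] - 2.0])

def penalty_solve(x0, r0=1.0, growth=10.0, outer=6):
    """Exterior quadratic penalty: min f(x) + r * sum(max(0, g)^2)."""
    x = np.asarray(x0, dtype=float)
    r = r0
    for _ in range(outer):       # grow the penalty, warm-start each solve
        phi = lambda x: f(x) + r * np.sum(np.maximum(0.0, g(x)) ** 2)
        x = minimize(phi, x, method="BFGS").x
        r *= growth
    return x

x_star = penalty_solve([0.0, 0.0])   # approaches the optimum (1.5, 0.5)
```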
Optimization methods and silicon solar cell numerical models
NASA Technical Reports Server (NTRS)
Girardini, K.; Jacobsen, S. E.
1986-01-01
An optimization algorithm for use with numerical silicon solar cell models was developed. By coupling an optimization algorithm with a solar cell model, it is possible to simultaneously vary design variables such as impurity concentrations, front junction depth, back junction depth, and cell thickness to maximize the predicted cell efficiency. An optimization algorithm was developed and interfaced with the Solar Cell Analysis Program in 1 Dimension (SCAP1D). SCAP1D uses finite difference methods to solve the differential equations which, along with several relations from the physics of semiconductors, describe mathematically the performance of a solar cell. A major obstacle is that the numerical methods used in SCAP1D require a significant amount of computer time, and during an optimization the model is called iteratively until the design variables converge to the values associated with the maximum efficiency. This problem was alleviated by designing an optimization code specifically for use with numerically intensive simulations, to reduce the number of times the efficiency has to be calculated to achieve convergence to the optimal solution.
EUV near normal incidence collector development at SAGEM
NASA Astrophysics Data System (ADS)
Mercier Ythier, R.; Bozec, X.; Geyl, R.; Rinchet, A.; Hecquet, Christophe; Ravet-Krill, Marie-Françoise; Delmotte, Franck; Sassolas, Benoît; Flaminio, Raffaele; Mackowski, Jean-Marie; Michel, Christophe; Montorio, Jean-Luc; Morgado, Nazario; Pinard, Laurent; Roméo, Elodie
2008-03-01
Through its participation in European programs, SAGEM has worked on the design and manufacturing of normal incidence collectors for EUV sources. In contrast to grazing incidence collectors, normal incidence collectors are expected to collect more light with a simpler and cheaper design. Designs are presented for the two current types of existing sources: Discharge Produced Plasma (DPP) and Laser Produced Plasma (LPP). Collection efficiency is calculated in both cases. It is shown that these collectors can achieve about 10% efficiency for DPP sources and 40% for LPP sources. SAGEM's work on collector manufacturability is also presented, including polishing, coating and cooling. The feasibility of polishing has been demonstrated, with a roughness better than 2 angstroms obtained on several materials (glass, silicon, silicon carbide, metals...). SAGEM is currently working with the Institut d'Optique and the Laboratoire des Materiaux Avancés on the design and the process of EUV coatings for large mirrors. Lastly, SAGEM has studied the design and feasibility of an efficient thermal control, based on liquid cooling through slim channels machined close to the optical surface.
Potential gains from hospital mergers in Denmark.
Kristensen, Troels; Bogetoft, Peter; Pedersen, Kjeld Moeller
2010-12-01
The Danish hospital sector faces a major rebuilding program to centralize activity in fewer and larger hospitals. We aim to conduct an efficiency analysis of hospitals and to estimate the potential cost savings from the planned hospital mergers. We use Data Envelopment Analysis (DEA) to estimate a cost frontier. Based on this analysis, we calculate an efficiency score for each hospital and estimate the potential gains from the proposed mergers by comparing individual efficiencies with the efficiency of the combined hospitals. Furthermore, we apply a decomposition algorithm to split merger gains into technical efficiency, size (scale) and harmony (mix) gains. The motivation for this decomposition is that some of the apparent merger gains may actually be available with less than a full-scale merger, e.g., by sharing best practices and reallocating certain resources and tasks. Our results suggest that many hospitals are technically inefficient, and the expected "best practice" hospitals are quite efficient. Also, some mergers do not seem to lower costs. This finding indicates that some merged hospitals become too large and therefore experience diseconomies of scale. Other mergers lead to considerable cost reductions; we find potential gains resulting from learning better practices and the exploitation of economies of scope. To ensure robustness, we conduct a sensitivity analysis using two alternative returns-to-scale assumptions and two alternative estimation approaches. We consistently find potential gains from improving the technical efficiency and the exploitation of economies of scope from mergers.
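The DEA building block behind such an analysis can be sketched as one linear program per hospital (an input-oriented, constant-returns-to-scale score); the data layout and numbers below are illustrative, and the study layers merger and decomposition analysis on top of scores like these.

```python
import numpy as np
from scipy.optimize import linprog

def dea_score(X, Y, o):
    """Input-oriented CRS efficiency of DMU o. X: inputs (m x n), Y: outputs (s x n)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]              # minimize theta; vars = [theta, lambdas]
    A_in = np.c_[-X[:, [o]], X]              # sum_j lam_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros((s, 1)), -Y]      # sum_j lam_j y_rj >= y_ro
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, o]],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]                          # theta in (0, 1]; 1 = efficient

X = np.array([[120.0, 80.0, 200.0, 95.0]])   # e.g. operating cost (one input)
Y = np.array([[100.0, 70.0, 150.0, 90.0]])   # e.g. case-mix-weighted output
scores = [dea_score(X, Y, o) for o in range(X.shape[1])]
```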
Newly emerging resource efficiency manager programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, S.; Howell, C.
1997-12-31
Many facilities in the northwest such as K--12 schools, community colleges, and military installations are implementing resource-efficiency awareness programs. These programs are generally referred to as resource efficiency manager (REM) or resource conservation manager (RCM) programs. Resource efficiency management is a systems approach to managing a facility's energy, water, and solid waste. Its aim is to reduce utility budgets by focusing on behavioral changes, maintenance and operation procedures, resource accounting, education and training, and a comprehensive awareness campaign that involves everyone in the organization.
An Efficiency Comparison of MBA Programs: Top 10 versus Non-Top 10
ERIC Educational Resources Information Center
Hsu, Maxwell K.; James, Marcia L.; Chao, Gary H.
2009-01-01
The authors compared the cohort group of the top-10 MBA programs in the United States with their lower-ranking counterparts on their value-added efficiency. The findings reveal that the top-10 MBA programs in the United States are associated with statistically higher average "technical and scale efficiency" and "scale efficiency", but not with a…
HyPEP FY06 Report: Models and Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
DOE report
2006-09-01
The Department of Energy envisions the next generation very high-temperature gas-cooled reactor (VHTR) as a single-purpose or dual-purpose facility that produces hydrogen and electricity. The Ministry of Science and Technology (MOST) of the Republic of Korea also selected VHTR for the Nuclear Hydrogen Development and Demonstration (NHDD) Project. This research project aims at developing a user-friendly program for evaluating and optimizing cycle efficiencies of producing hydrogen and electricity in a Very-High-Temperature Reactor (VHTR). Systems for producing electricity and hydrogen are complex and the calculations associated with optimizing these systems are intensive, involving a large number of operating parameter variations and many different system configurations. This research project will produce the HyPEP computer model, which is specifically designed to be an easy-to-use and fast running tool for evaluating nuclear hydrogen and electricity production facilities. The model accommodates flexible system layouts and its cost models will enable HyPEP to be well-suited for system optimization. Specific activities of this research are designed to develop the HyPEP model into a working tool, including (a) identifying major systems and components for modeling, (b) establishing system operating parameters and calculation scope, (c) establishing the overall calculation scheme, (d) developing component models, (e) developing cost and optimization models, and (f) verifying and validating the program. Once the HyPEP model is fully developed and validated, it will be used to execute calculations on candidate system configurations. The FY-06 report includes a description of reference designs, methods used in this study, and models and computational strategies developed for the first year effort. Results from computer codes such as HYSYS and GASS/PASS-H, used by Idaho National Laboratory and Argonne National Laboratory, respectively, will be benchmarked against HyPEP results in the following years.
The importance of geospatial data to calculate the optimal distribution of renewable energies
NASA Astrophysics Data System (ADS)
Díaz, Paula; Masó, Joan
2013-04-01
Especially during the last three years, renewable energies have been revolutionizing international trade while geographically diversifying markets. Renewables are experiencing rapid growth in power generation. According to REN21 (2012), during the last six years the total installed renewables capacity grew at record rates. In 2011, the EU raised its share of global new renewables capacity to 44%. The BRICS nations (Brazil, Russia, India and China) accounted for about 26% of the global total. Moreover, almost twenty countries in the Middle East, North Africa, and sub-Saharan Africa currently have active markets in renewables. Energy return ratios are commonly used to calculate the efficiency of traditional energy sources. The Energy Return On Investment (EROI) compares the energy returned by a certain source with the energy used to get it (explore, find, develop, produce, extract, transform, harvest, grow, process, etc.). These energy return ratios have demonstrated a general decrease in the efficiency of fossil fuels and gas. When considering the limitations on the quantity of energy produced by some sources, the energy invested to obtain them, and the difficulties of finding optimal locations for the establishment of renewables farms (e.g. due to an ever increasing scarcity of appropriate land), the EROI becomes relevant for renewables. A spatialized EROI, which uses variables with spatial distribution, enables finding the optimal position in terms of both energy production and associated costs. It is important to note that the spatialized EROI can be mathematically formalized and calculated the same way for different locations in a reproducible manner. This means that, having established a concrete EROI methodology, it is possible to generate a continuous map that highlights the best productive zones for renewable energies in terms of maximum energy return at minimum cost. Relevant variables to calculate the real energy invested are the grid connections between production and consumption, transport losses, and the efficiency of the grid. If appropriate, the spatialized EROI analysis could include any indirect costs that the source of energy might produce, such as visual impacts, food market impacts and land price. Such a spatialized study requires GIS tools to compute operations using both spatial relations like distances and frictions, and topological relations like connectivity, which are not easy to consider in the way that EROI is currently calculated. In a broader perspective, by applying the EROI to various energy sources, a comparative analysis of the efficiency of obtaining different sources can be done in a quantitative way. The increase in energy investment is also accompanied by an increase in manufacturing and policies. Further efforts will be necessary in the coming years to provide energy access through smart grids and to determine the efficient areas in terms of cost of production and energy returned on investment. The authors present the EROI as a reliable solution to address the input-output energy relationship and increase the efficiency of energy investment considering the appropriate geospatial variables. The spatialized EROI can be a useful tool for decision makers when designing energy policies and programming energy funds, because it is an objective demonstration of which energy sources are more convenient in terms of costs and efficiency.
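A toy version of such a spatialized EROI can be computed on a raster in a few lines; every coefficient below (embodied energy, line losses, lifetime) is invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
resource = rng.uniform(150.0, 450.0, (50, 50))   # producible kWh/m^2/yr per cell
dist_km = rng.uniform(0.0, 80.0, (50, 50))       # distance to grid connection

LIFETIME_YR = 25.0
E_BUILD = 900.0            # embodied energy of the farm, kWh/m^2
E_LINE_PER_KM = 30.0       # embodied energy of the connection, kWh/m^2 per km
LINE_LOSS_PER_KM = 4e-4    # fractional transmission loss per km

# Energy returned shrinks with transport losses; energy invested grows
# with connection distance (a simple friction proxy).
returned = resource * LIFETIME_YR * (1.0 - LINE_LOSS_PER_KM * dist_km)
invested = E_BUILD + E_LINE_PER_KM * dist_km
eroi = returned / invested
best = np.unravel_index(np.argmax(eroi), eroi.shape)  # most efficient cell
```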
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meinking, Rick; Adamson, Joy M
2013-12-20
Energy efficiency is vitally important in Maine. Nearly 70% of Maine households rely on fuel oil as their primary energy source for home heating, a higher share than in any other state. Coupled with the state's long, cold winters, Maine's dependence on oil renders homeowners particularly vulnerable to fluctuating fuel costs. With $4.5 million in seed funding from the Energy Department's Better Buildings Neighborhood Program, the Governor's Energy Office (GEO), through Efficiency Maine Trust (the Trust), is spurring Maine landlords to lower their monthly energy bills and improve comfort for their tenants during the state's cold winter months and increasingly warmer summers. Maine's aging multifamily housing stock can be expensive to heat and costly to maintain. It is not unusual to find buildings with little or no insulation, drafty windows, and significant air leaks, making them ideal candidates for energy efficiency upgrades. Maine modeled its Multifamily Efficiency Program (MEP) after the state's highly successful Home Energy Savings Program (HESP) for single-family homes. HESP provided cash incentives and financing opportunities to owners of one- to four-unit structures, which resulted in thousands of energy assessments and whole-house energy upgrades in 225 communities. Maine's new MEP multifamily energy efficiency upgrade and weatherization initiative focuses on small to medium-sized (i.e., five to 20 units) apartment buildings. The program's energy efficiency upgrades will provide at least 20% energy savings for each upgraded multifamily unit. The Trust's MEP relies on a network of approved program partners who help move projects through the pipeline from assessment to upgrade. MEP has two components: benchmarking and development of an Energy Reduction Plan (ERP). Using the ENERGY STAR® Portfolio Manager benchmarking tool, MEP provides an assessment of current energy usage in the building, establishes a baseline for future energy efficiency improvements, and enables tracking and monitoring of future energy usage at the building—all at no cost to the building owner. The ERP is developed by a program partner using either the Trust's approved modeling or prescriptive tools; it provides detailed information about the current energy-related conditions in the building and recommends energy efficiency, health, and safety improvements. The Trust's delivery contractor provides quality assurance and controls throughout the process. Through this effort, MEP's goal is to establish a self-sustaining, market-driven program, demonstrating the value of energy efficiency to other building owners. The increasing value of properties across the state will help incentivize these owners to continue upgrades after the grant period has ended. Targeting urban areas in Maine with dense clusters of multifamily units—such as Portland, Lewiston-Auburn, Bangor, and Augusta—MEP engaged a variety of stakeholder groups early on to design its multifamily program. Through direct emails and its website, program officials invited lending institutions, building professionals, engineering firms, equipment distributors, and local property owners associations to attend open meetings around the state to learn about the goals of the multifamily program and to help define its parameters.
These meetings helped program administrators understand the diversity of the customer base: some owners are individuals with a single building, while other owners are groups of people or management companies with an entire portfolio of multifamily buildings. The diversity of the customer base notwithstanding, owners see MEP as an opportunity to make gains in their respective properties. Consistently high turnouts at stakeholder meetings fueled greater customer interest as awareness of the program spread through word of mouth. The program also gained traction by utilizing the program partner networks and building on the legacy of the Trust's successful HESP for single-family residences. MEP offers significant incentives for building owners to participate in the upgrade program. Whole-building benchmarking services are available to most multifamily housing buildings free of charge. The service provides the building owner with an assessment of the building's current energy efficiency as compared to other multifamily buildings on a national scale, establishes a baseline to measure future improvements, and enables owners to track monthly energy consumption using the ENERGY STAR Portfolio Manager. Once the benchmarking process is complete, the program links building owners with approved program partners (e.g., energy professionals, home performance contractors) to identify and implement specific energy-saving opportunities in the building. Program partners can also provide project quotes with estimated financing incentives and payback period calculations that enable building owners to make informed decisions. What's more, the Trust provides two financial incentives for successful completion of program milestones. The first is a per-unit incentive for completion of an approved ERP (i.e., $100 per unit if a prescriptive path is followed, and $200 per unit for a modeled ERP). Upon final inspection of the installed project scope of work, an incentive of $1,400 per unit or 50% of installed cost—whichever is less—is paid. The Trust originally established a $1 million loan-loss reserve fund (LLRF) to further enhance financing opportunities for qualified multifamily building owners. This funding mechanism was designed to connect building owners with lenders that retain the mortgages for their properties and encourages the lenders to offer financing for energy efficiency improvements. However, there has been no interest in the LLRF and therefore the LLRF has been reduced. Ultimately, MEP plans to build an online tool for building owners to assess opportunities to make upgrades in their multifamily units. The tool will include a performance rating system to provide a way for building owners to more easily understand energy use in their building, and how it could be improved with energy efficiency upgrades. Prospective tenants will also be able to use the rating system to make informed decisions about where to rent. Furthermore, the rating can be incorporated into real estate listings as a way for prospective home buyers and the real estate financial community to evaluate a home's operating costs. The Trust's MEP has identified the state's most experienced energy professionals, vendors, suppliers, and contractors that install energy efficiency equipment in the multifamily sector to be qualified program partners.
To be eligible for partnership, energy assessment professionals and contractors are required to have demonstrated experience in the multifamily sector and hold associated professional certifications, such as Building Operator Certification (BOC), Certified Energy Manager (CEM), Professional Engineer (PE), or Building Performance Institute (BPI) Multifamily Building Analyst. Widespread program interest has enabled the Trust to redirect funds that might otherwise be needed for program promotion to building capacity through contractor training. In addition to boosting professional training and certification opportunities, MEP teaches its partners how to market the multifamily program to prospective multifamily homeowners.
7 CFR 1416.304 - Payment calculations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 10 2010-01-01 2010-01-01 false Payment calculations. 1416.304 Section 1416.304 Agriculture Regulations of the Department of Agriculture (Continued) COMMODITY CREDIT CORPORATION, DEPARTMENT... PROGRAMS Citrus Disaster Program § 1416.304 Payment calculations. (a) Payments will be calculated by...
HELAC-PHEGAS: A generator for all parton level processes
NASA Astrophysics Data System (ADS)
Cafarella, Alessandro; Papadopoulos, Costas G.; Worek, Malgorzata
2009-10-01
The updated version of the HELAC-PHEGAS event generator is presented. The matrix elements are calculated through Dyson-Schwinger recursive equations using the color connection representation. Phase-space generation is based on a multichannel approach, including optimization. HELAC-PHEGAS generates parton level events with all necessary information, in the most recent Les Houches Accord format, for the study of any process within the Standard Model in hadron and lepton colliders. New version program summary. Program title: HELAC-PHEGAS Catalogue identifier: ADMS_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADMS_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 35 986 No. of bytes in distributed program, including test data, etc.: 380 214 Distribution format: tar.gz Programming language: Fortran Computer: All Operating system: Linux Classification: 11.1, 11.2 External routines: Optionally Les Houches Accord (LHA) PDF Interface library (http://projects.hepforge.org/lhapdf/) Catalogue identifier of previous version: ADMS_v1_0 Journal reference of previous version: Comput. Phys. Comm. 132 (2000) 306 Does the new version supersede the previous version?: Yes, partly. Nature of problem: One of the most striking features of final states in current and future colliders is the large number of events with several jets. Being able to predict their features is essential. To achieve this, the calculations need to describe as accurately as possible the full matrix elements for the underlying hard processes. Even at leading order, perturbation theory based on Feynman graphs runs into computational problems, since the number of graphs contributing to the amplitude grows as n!. Solution method: Recursive algorithms based on Dyson-Schwinger equations have been developed recently in order to overcome the computational obstacles. The calculation of the amplitude, using Dyson-Schwinger recursive equations, results in a computational cost growing asymptotically as 3^n, where n is the number of particles involved in the process. Off-shell subamplitudes are introduced, for which a recursion relation has been obtained, allowing one to express an n-particle amplitude in terms of subamplitudes with 1, 2, ..., up to (n-1) particles. The color connection representation is used in order to treat amplitudes involving colored particles. In the present version HELAC-PHEGAS can be used to efficiently obtain helicity amplitudes, total cross sections, and parton-level event samples in LHA format, for arbitrary multiparticle processes in the Standard Model in leptonic, pp̄ and pp collisions. Reasons for new version: Substantial improvements, major functionality upgrade. Summary of revisions: Color connection representation, efficient integration over PDFs via the PARNI algorithm, interface to LHAPDF, parton level events generated in the most recent LHA format, k_T reweighting for parton shower matching, numerical predictions for amplitudes for arbitrary processes for phase-space points provided by the user, a new user interface, and the possibility to run over computer clusters. Running time: Depending on the process studied; usually from seconds to hours. References: A. Kanaki, C.G. Papadopoulos, Comput. Phys. Comm. 132 (2000) 306. C.G. Papadopoulos, Comput. Phys. Comm. 137 (2001) 247. URL: http://www.cern.ch/helac-phegas.
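To see why the recursion matters, it helps to compare the two growth laws quoted in the summary. A minimal sketch, using only the counting rules stated above:

```python
import math

# Leading-order cost of evaluating an n-particle amplitude:
# Feynman-graph counting grows roughly as n!, while the Dyson-Schwinger
# recursion used by HELAC grows asymptotically as 3^n.
for n in range(4, 13, 2):
    print(f"n = {n:2d}   n! = {math.factorial(n):>12,}   3^n = {3 ** n:>8,}")
```

Already at n = 10 the factorial count exceeds the exponential one by four orders of magnitude, which is the practical motivation for the recursive algorithm.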
Programmable Calculators: Implications for the Mathematics Curriculum.
ERIC Educational Resources Information Center
Spikell, Mark A., Ed.
This document is a collection of reports presented at a programmable calculator symposium held in Seattle, Washington, in April 1980, as part of the annual meeting of the National Council of Teachers of Mathematics (NCTM). The session was designed to review whether the programmable calculator has a place in the school mathematics program, in light…
NASA Technical Reports Server (NTRS)
Svehla, R. A.; Mcbride, B. J.
1973-01-01
A FORTRAN IV computer program for the calculation of the thermodynamic and transport properties of complex mixtures is described. The program has the capability of performing calculations such as: (1) chemical equilibrium for assigned thermodynamic states, (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. Condensed species, as well as gaseous species, are considered in the thermodynamic calculations; but only the gaseous species are considered in the transport calculations.
An investigation of the feasibility of active boundary layer thickening for aircraft drag reduction
NASA Technical Reports Server (NTRS)
Ash, R. L.; Koodalattupuram, C.
1986-01-01
The feasibility of using a forward mounted windmilling propeller to extract momentum from the flow around an axisymmetric body to reduce total drag has been studied. Numerical calculations indicate that a net drag reduction is possible when the energy extracted is returned to an aft mounted pusher propeller. However, net drag reduction requires very high device efficiencies. Results of an experimental program to study the coupling between a propeller wake and a turbulent boundary layer are also reported. The experiments showed that a complex coupling exists and that simple models of the flow field are not sufficiently accurate to predict total drag.
Simulation of cooperating robot manipulators on a mobile platform
NASA Technical Reports Server (NTRS)
Murphy, Steve H.; Wen, John T.; Saridis, George N.
1990-01-01
The dynamic equations of motion for two manipulators holding a common object on a freely moving mobile platform are developed. The full dynamic interactions from arms to platform and arm-tip to arm-tip are included in the formulation. The development of the closed chain dynamics allows for the use of any solution for the open topological tree of base and manipulator links. In particular, because the system has 18 degrees of freedom, recursive solutions for the dynamic simulation become more promising for efficient calculations of the motion. Simulation of the system is accomplished through a MATLAB program, and the response is visualized graphically using the SILMA Cimstation.
Electrodynamic tether system study
NASA Technical Reports Server (NTRS)
1987-01-01
The purpose of this program is to define an Electrodynamic Tether System (ETS) that could be erected from the space station and/or platforms to function as an energy storage device. A schematic representation of the ETS concept mounted on the space station is presented. In addition to the hardware design and configuration efforts, studies are also documented involving simulations of the Earth's magnetic field and the effects it has on overall system efficiency calculations. Also discussed are some preliminary computer simulations of orbit perturbations caused by the cyclic day/night operations of the ETS. System cost estimates, an outline of future development testing for the ETS system, and conclusions and recommendations are also provided.
Vortex methods for separated flows
NASA Technical Reports Server (NTRS)
Spalart, Philippe R.
1988-01-01
The numerical solution of the Euler or Navier-Stokes equations by Lagrangian vortex methods is discussed. The mathematical background is presented and includes the relationship with traditional point-vortex studies, convergence to smooth solutions of the Euler equations, and the essential differences between the two- and three-dimensional cases. The difficulties in extending the method to viscous or compressible flows are explained. Two-dimensional flows around bluff bodies are emphasized. Robustness of the method and the assessment of accuracy, vortex-core profiles, time-marching schemes, numerical dissipation, and efficient programming are treated. Operation counts for unbounded and periodic flows are given, and two algorithms designed to speed up the calculations are described.
77 FR 54839 - Energy Efficiency and Conservation Loan Program
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-06
... CFR Parts 1710, 1717, 1721, 1724, and 1730 RIN 0572-AC19 Energy Efficiency and Conservation Loan..., proposing policies and procedures for loan and guarantee financial assistance in support of energy efficiency programs (EE Programs) sponsored and implemented by electric utilities for the benefit of rural...
Using GIS to evaluate a fire safety program in North Carolina.
Dudley, Thomas; Creppage, Kathleen; Shanahan, Meghan; Proescholdbell, Scott
2013-10-01
Evaluating program impact is a critical aspect of public health. Utilizing Geographic Information Systems (GIS) is a novel way to evaluate programs which try to reduce residential fire injuries and deaths. The purpose of this study is to demonstrate the application of GIS within the evaluation of a smoke alarm installation program in North Carolina. This approach incorporates national fire incident data which, when linked with program data, provides a clear depiction of the 10-year impact of the Get Alarmed, NC! program and estimates the number of potential lives saved. We overlapped Get Alarmed, NC! program installation data with national information on fires using GIS to identify homes that experienced a fire after an alarm was installed and calculated potential lives saved based on program documentation and average housing occupancy. We found that using GIS was an efficient and quick way to match addresses from two distinct sources. From this approach we estimated that between 221 and 384 residents were potentially saved due to alarms installed in their homes by Get Alarmed, NC!. Compared with other program evaluations that require intensive and costly participant telephone surveys and/or in-person interviews, the GIS approach is inexpensive, quick, and can easily analyze large disparate datasets. In addition, it can be used to help target the areas most at risk from the outset. These benefits suggest that by incorporating previously unutilized data, the GIS approach has the potential for broader applications within public health program evaluation.
Computer programs for eddy-current defect studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pate, J. R.; Dodd, C. V.
Several computer programs to aid in the design of eddy-current tests and probes have been written. The programs, written in Fortran, deal in various ways with the response to defects exhibited by four types of probes: the pancake probe, the reflection probe, the circumferential boreside probe, and the circumferential encircling probe. Programs are included which calculate the impedance or voltage change in a coil due to a defect, which calculate and plot the defect sensitivity factor of a coil, and which invert calculated or experimental readings to obtain the size of a defect. The theory upon which the programs are based is the Burrows point defect theory, and thus the calculations of the programs will be more accurate for small defects. 6 refs., 21 figs.
NASA Technical Reports Server (NTRS)
Miller, R. D.; Anderson, L. R.
1979-01-01
The LOADS program L218, a digital computer program that calculates dynamic load coefficient matrices utilizing the force summation method, is described. The load equations are derived for a flight vehicle in straight and level flight and excited by gusts and/or control motions. In addition, sensor equations are calculated for use with an active control system. The load coefficient matrices are calculated for the following types of loads: translational and rotational accelerations, velocities, and displacements; panel aerodynamic forces; net panel forces; shears and moments. Program usage and a brief description of the analysis used are presented. A description of the design and structure of the program to aid those who will maintain and/or modify the program in the future is included.
Code of Federal Regulations, 2010 CFR
2010-04-01
... mortgage insurance premiums for Program mortgages. 4001.203 Section 4001.203 Housing and Urban Development... HOMEOWNERS PROGRAM HOPE FOR HOMEOWNERS PROGRAM Rights and Obligations Under the Contract of Insurance § 4001.203 Calculation of upfront and annual mortgage insurance premiums for Program mortgages. (a...
Code of Federal Regulations, 2011 CFR
2011-04-01
... mortgage insurance premiums for Program mortgages. 4001.203 Section 4001.203 Housing and Urban Development... HOMEOWNERS PROGRAM HOPE FOR HOMEOWNERS PROGRAM Rights and Obligations Under the Contract of Insurance § 4001.203 Calculation of upfront and annual mortgage insurance premiums for Program mortgages. (a...
Embedded system based on PWM control of hydrogen generator with SEPIC converter
NASA Astrophysics Data System (ADS)
Fall, Cheikh; Setiawan, Eko; Habibi, Muhammad Afnan; Hodaka, Ichijo
2017-09-01
The objective of this paper is to design and produce a fuel-cell-based micro electrical power plant as teaching material for embedded systems in a technical vocational training center. With it, students can experience generating hydrogen with fuel cells, controlling the rate of hydrogen generation through the duty ratio of a single-ended primary-inductor converter (SEPIC), plotting the hydrogen generation rate against duty ratio, generating electrical power using the hydrogen, and calculating the fuel cell efficiency when it is used as an electrical energy generator. The project requires students to acquire several skills in order to realize it, such as DC-DC conversion and the scientific concepts behind the converter, the regulation of systems with proportional-integral controllers, the installation of photovoltaic cells, the use of high-tech sensors, microcontroller programming, and object-oriented programming, as well as mastery of the fuel cell system.
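The efficiency calculation the students perform can be illustrated with a minimal sketch. The lower heating value of hydrogen is a standard constant; all measured quantities and function names here are placeholders, not values from the paper.

```python
LHV_H2 = 119.96e6  # lower heating value of hydrogen, J/kg (standard constant)

def fuel_cell_efficiency(voltage_V, current_A, duration_s, h2_mass_kg):
    """Electrical energy delivered divided by the chemical energy of the
    hydrogen consumed (LHV basis)."""
    electrical_J = voltage_V * current_A * duration_s
    chemical_J = h2_mass_kg * LHV_H2
    return electrical_J / chemical_J

# Example: 10 W delivered for one hour from 1 g of hydrogen -> about 30%.
print(fuel_cell_efficiency(5.0, 2.0, 3600.0, 0.001))
```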
Estimating Arrhenius parameters using temperature programmed molecular dynamics.
Imandi, Venkataramana; Chatterjee, Abhijit
2016-07-21
Kinetic rates at different temperatures and the associated Arrhenius parameters, whenever Arrhenius law is obeyed, are efficiently estimated by applying maximum likelihood analysis to waiting times collected using the temperature programmed molecular dynamics method. When transitions involving many activated pathways are available in the dataset, their rates may be calculated using the same collection of waiting times. Arrhenius behaviour is ascertained by comparing rates at the sampled temperatures with ones from the Arrhenius expression. Three prototype systems with corrugated energy landscapes, namely, solvated alanine dipeptide, diffusion at the metal-solvent interphase, and lithium diffusion in silicon, are studied to highlight various aspects of the method. The method becomes particularly appealing when the Arrhenius parameters can be used to find rates at low temperatures where transitions are rare. Systematic coarse-graining of states can further extend the time scales accessible to the method. Good estimates for the rate parameters are obtained with 500-1000 waiting times.
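A minimal sketch of the two steps the abstract describes: a maximum-likelihood rate from waiting times (assuming an exponential waiting-time model) followed by a least-squares Arrhenius fit across the sampled temperatures. The synthetic data and names are illustrative, not the paper's systems.

```python
import numpy as np

kB = 8.617333e-5  # Boltzmann constant, eV/K

def mle_rate(waiting_times):
    """ML estimate of the escape rate for exponentially distributed waits."""
    w = np.asarray(waiting_times, dtype=float)
    return len(w) / w.sum()

def arrhenius_fit(temps_K, rates):
    """Fit ln k = ln A - Ea/(kB*T); returns (prefactor A, barrier Ea in eV)."""
    slope, intercept = np.polyfit(1.0 / np.asarray(temps_K), np.log(rates), 1)
    return np.exp(intercept), -slope * kB

# Synthetic check: A = 1e13 /s, Ea = 0.5 eV, sampled at high temperatures.
rng = np.random.default_rng(0)
temps = np.array([700.0, 800.0, 900.0, 1000.0])
rates = [mle_rate(rng.exponential(1.0 / (1e13 * np.exp(-0.5 / (kB * T))), 800))
         for T in temps]
print(arrhenius_fit(temps, rates))  # should recover roughly (1e13, 0.5)
```

Comparing the fitted rates against the sampled-temperature rates, as the paper does, is then a direct check of Arrhenius behaviour.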
NASA Astrophysics Data System (ADS)
Dikmen, Erkan; Ayaz, Mahir; Gül, Doğan; Şahin, Arzu Şencan
2017-07-01
The determination of the drying behavior of herbal plants is a complex process. In this study, a gene expression programming (GEP) model was used to determine the drying behavior of herbal plants, namely fresh sweet basil, parsley and dill leaves. Time and drying temperature are the input parameters for the estimation of the moisture ratio of the herbal plants. The results of the GEP model are compared with experimental drying data. Statistical measures, namely the mean absolute percentage error, root-mean-squared error and R-squared, are used to quantify the difference between the values predicted by the GEP model and the values actually observed in the experimental study. It was found that the results of the GEP model and the experimental study are in moderately good agreement. The results have shown that the GEP model can be considered an efficient modelling technique for the prediction of the moisture ratio of herbal plants.
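The three agreement measures named above have standard definitions; a minimal sketch (array names are placeholders):

```python
import numpy as np

def fit_metrics(y_obs, y_pred):
    """MAPE (%), RMSE, and R-squared between observed and predicted values."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mape = 100.0 * np.mean(np.abs((y_obs - y_pred) / y_obs))
    rmse = np.sqrt(np.mean((y_obs - y_pred) ** 2))
    ss_res = np.sum((y_obs - y_pred) ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    return mape, rmse, 1.0 - ss_res / ss_tot
```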
A method to calculate the gamma ray detection efficiency of a cylindrical NaI (Tl) crystal
NASA Astrophysics Data System (ADS)
Ahmadi, S.; Ashrafi, S.; Yazdansetad, F.
2018-05-01
Given the wide range of applications of the NaI(Tl) detector in the industrial and medical sectors, computation of the detection efficiency at different distances from a radioactive source, especially for calibration purposes, is a common subject of radiation detection studies. In this work, a cylindrical NaI(Tl) scintillator, 2 in. in both radius and height, was used, and by changing the radial, axial, and diagonal positions of an isotropic 137Cs point source relative to the detector, the solid angles and the interaction probabilities of gamma photons with the detector's sensitive area were calculated. The calculations give the geometric and intrinsic efficiency as functions of the detector's dimensions and the position of the source. The calculation model is in good agreement with both experiment and MCNPX simulation.
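For the simplest special case, an on-axis point source facing the flat end of the crystal, the solid-angle part of this kind of calculation has a closed form. A sketch under that assumption only (the paper treats general radial and diagonal positions; the attenuation coefficient below is an illustrative input, not a value from the paper):

```python
import math

def geometric_efficiency(d, R):
    """Fraction of 4*pi subtended by the flat face of a cylindrical detector
    of radius R, for an on-axis point source at distance d from the face."""
    omega = 2.0 * math.pi * (1.0 - d / math.hypot(d, R))
    return omega / (4.0 * math.pi)

def total_efficiency(d, R, mu, L):
    """Geometric efficiency times an interaction probability 1 - exp(-mu*L),
    with mu the linear attenuation coefficient and L the path length."""
    return geometric_efficiency(d, R) * (1.0 - math.exp(-mu * L))

# Example: source 10 cm from a crystal of 5.08 cm (2 in.) radius; mu and L
# are placeholder values for illustration.
print(total_efficiency(10.0, 5.08, 0.3, 5.08))
```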
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anisimova, N. P.; Tropina, N. E., E-mail: Mazina_ne@mail.ru; Tropin, A. N.
2010-12-15
The opportunity to increase the output emission efficiency of PbSe-based photoluminescence structures by depositing an antireflection layer is analyzed. A model of a three-layer thin film, in which the central layer is formed of a composite medium, is proposed to calculate the reflectance spectra of the system. The effective permittivity of the composite layer is calculated in Bruggeman's approximation of the effective medium theory. The proposed model is used to calculate the thickness of the arsenic chalcogenide (AsS4) antireflection layer. The optimal AsS4 layer thickness determined experimentally is close to the calculated result, and the corresponding gain in the output photoluminescence efficiency is as high as 60%.
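Bruggeman's symmetric effective-medium condition for a two-phase composite reduces to a quadratic in the effective permittivity. A minimal sketch of that step; the quarter-wave thickness rule at the end is the generic antireflection estimate, not the paper's full three-layer model, and the numbers are illustrative only:

```python
import numpy as np

def bruggeman(eps1, eps2, f1):
    """Effective permittivity from Bruggeman's EMA for a two-phase mixture:
    f1*(e1-ee)/(e1+2ee) + (1-f1)*(e2-ee)/(e2+2ee) = 0 (physical root)."""
    b = f1 * (2.0 * eps1 - eps2) + (1.0 - f1) * (2.0 * eps2 - eps1)
    return (b + np.sqrt(b * b + 8.0 * eps1 * eps2 + 0j)) / 4.0

def quarter_wave_thickness(eps_eff, wavelength):
    """Quarter-wave antireflection thickness d = lambda / (4 n)."""
    n = np.sqrt(eps_eff).real
    return wavelength / (4.0 * n)

# Placeholder permittivities and design wavelength:
ee = bruggeman(6.0, 2.1, 0.5)
print(ee, quarter_wave_thickness(ee, 4.5e-6))
```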
Computational efficiency for the surface renewal method
NASA Astrophysics Data System (ADS)
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes with the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and these were tested for sensitivity to the length of the flux averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. The algorithms utilize signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
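A computational kernel in SR calibration is evaluating structure functions of the high-frequency scalar signal at many lags, and vectorized slicing is one simple way to make that cheap. A minimal sketch of the idea, not the authors' algorithms; the synthetic signal and names are placeholders:

```python
import numpy as np

def structure_function(x, lag, order):
    """n-th order structure function S_n(r) = <(x(t) - x(t - r))^n> at a
    given sample lag, computed with a single vectorized slice."""
    d = x[lag:] - x[:-lag]
    return np.mean(d ** order)

# Ramp-model SR analysis typically uses several orders at several lags;
# computing them this way avoids explicit Python loops over a 10 Hz+ record.
x = np.cumsum(np.random.default_rng(1).normal(size=36000))  # synthetic series
print([structure_function(x, 10, n) for n in (2, 3, 5)])
```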
A nonproprietary, nonsecret program for calculating Stirling cryocoolers
NASA Technical Reports Server (NTRS)
Martini, W. R.
1985-01-01
A design program for an integrated Stirling cycle cryocooler was written for an IBM-PC computer. The program is easy to use, shows the trends, and itemizes the losses. The calculated results were compared with some measured performance values. The program predicts somewhat optimistic performance and needs to be calibrated further against experimental measurements. Adding a multiplier to the friction factor can bring the calculated results in line with the limited test results available so far. The program is offered as a good framework on which to build a truly useful design program for all types of cryocoolers.
Energy efficiency in nonprofit agencies: Creating effective program models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, M.A.; Prindle, B.; Scherr, M.I.
Nonprofit agencies are a critical component of the health and human services system in the US. Programs that offer energy efficiency services to nonprofits have clearly demonstrated that, with minimal investment, nonprofits can reduce their energy consumption by ten to thirty percent. This energy conservation potential motivated the Department of Energy and Oak Ridge National Laboratory to conceive a project to help states develop energy efficiency programs for nonprofits. The purpose of the project was two-fold: (1) to analyze existing programs to determine which design and delivery mechanisms are particularly effective, and (2) to create model programs for states to follow in tailoring their own plans for helping nonprofits with energy efficiency programs. Twelve existing programs were reviewed, and three model programs were devised and put into operation. The model programs provide various forms of financial assistance to nonprofits and serve as a source of information on energy efficiency as well. After examining the results from the model programs (which are still ongoing) and from the existing programs, several "replicability factors" were developed for use in the implementation of programs by other states. These factors -- some concrete and practical, others more generalized -- serve as guidelines for states devising programs based on their own particular needs and resources.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Renaud, M; Seuntjens, J; Roberge, D
Purpose: Assessing the performance and uncertainty of a pre-calculated Monte Carlo (PMC) algorithm for proton and electron transport running on graphics processing units (GPU). While PMC methods have been described in the past, an explicit quantification of the latent uncertainty arising from recycling a limited number of tracks in the pre-generated track bank is missing from the literature. With a proper uncertainty analysis, an optimal pre-generated track bank size can be selected for a desired dose calculation uncertainty. Methods: Particle tracks were pre-generated for electrons and protons using EGSnrc and GEANT4, respectively. The PMC algorithm for track transport was implemented on the CUDA programming framework. GPU-PMC dose distributions were compared to benchmark dose distributions simulated using general-purpose MC codes in the same conditions. A latent uncertainty analysis was performed by comparing GPU-PMC dose values to a “ground truth” benchmark while varying the track bank size and primary particle histories. Results: GPU-PMC dose distributions and benchmark doses were within 1% of each other in voxels with dose greater than 50% of Dmax. In proton calculations, a submillimeter distance-to-agreement error was observed at the Bragg Peak. Latent uncertainty followed a Poisson distribution with the number of tracks per energy (TPE), and a track bank of 20,000 TPE produced a latent uncertainty of approximately 1%. Efficiency analysis showed a 937× and 508× gain over a single processor core running DOSXYZnrc for 16 MeV electrons in water and bone, respectively. Conclusion: The GPU-PMC method can calculate dose distributions for electrons and protons to a statistical uncertainty below 1%. The track bank size necessary to achieve an optimal efficiency can be tuned based on the desired uncertainty. Coupled with a model to calculate dose contributions from uncharged particles, GPU-PMC is a candidate for inverse planning of modulated electron radiotherapy and scanned proton beams. This work was supported in part by FRSQ-MSSS (Grant No. 22090), NSERC RG (Grant No. 432290) and CIHR MOP (Grant No. MOP-211360).
Federal Register 2010, 2011, 2012, 2013, 2014
2013-06-06
... Approved Information Collection for the Energy Efficiency and Conservation Block Grant Program Status... guidance concerning the Energy Efficiency and Conservation Block Grant (EECBG) Program is available for... Conservation Block Grant (EECBG) Program Status Report''; (3) Type of Review: Revision of currently approved...
Organizational determinants of efficiency and effectiveness in mental health partial care programs.
Schinnar, A P; Kamis-Gould, E; Delucia, N; Rothbard, A B
1990-01-01
The use of partial care as a treatment modality for mentally ill patients, particularly the chronically mentally ill, has greatly increased. However, research into what constitutes a "good" program has been scant. This article reports on an evaluation study of staff productivity, cost efficiency, and service effectiveness of adult partial care programs carried out in New Jersey in fiscal year 1984/1985. Five program performance indexes are developed based on comparisons of multiple measures of resources, service activities, and client outcomes. These are used to test various hypotheses regarding the effect of organizational and fiscal variables on partial care program efficiency and effectiveness. The four issues explored are: auspices, organizational complexity, service mix, and fiscal control by the state. These were found to explain about half of the variance in program performance. In addition, partial care programs demonstrating midlevel performance with regard to productivity and efficiency were observed to be the most effective, implying a possible optimal level of efficiency at which effectiveness is maximized. PMID:2113046
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in figure modeling and CAGD (Computer Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and the spline calculates the required number of points which, when displayed on a computer screen, form a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose complex structure does not allow them to be defined by a regular rectangular grid. Surface subdivision methods, on the other hand, which are derived from splines, generate surfaces defined by an arbitrary topology of control points. This is the reason that, during the last fifteen years, subdivision methods have taken the lead over regular spline methods in all areas of modeling, in both industry and research. The cost of software developed to read control points and calculate the surface lies in its run time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs related to the implementation of subdivision surfaces have been developed; however, not many algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. The Catmull-Clark method, the most popular of the subdivision methods, has been employed to illustrate the algorithm.
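The Catmull-Clark rules themselves are compact; as the abstract notes, the run-time cost lies in the connectivity bookkeeping. A minimal sketch of one subdivision step for a closed quad mesh under the standard rules (face point = face centroid; edge point = average of the edge endpoints and the two adjacent face points; moved vertex = (F + 2R + (n-3)P)/n). This is a generic illustration, not the paper's algorithm:

```python
import numpy as np
from collections import defaultdict

def catmull_clark(verts, faces):
    """One subdivision step for a closed quad mesh.
    verts: (V, 3) positions; faces: list of tuples of vertex indices."""
    verts = np.asarray(verts, dtype=float)
    # Face points: centroid of each face.
    fpts = np.array([verts[list(f)].mean(axis=0) for f in faces])
    # Map each undirected edge to its two adjacent faces (closed mesh).
    edge_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for a, b in zip(f, f[1:] + f[:1]):
            edge_faces[frozenset((a, b))].append(fi)
    # Edge points: average of the edge endpoints and the two face points.
    epts = {e: (verts[min(e)] + verts[max(e)] + fpts[fl[0]] + fpts[fl[1]]) / 4.0
            for e, fl in edge_faces.items()}
    # Move original vertices: (F + 2R + (n - 3)P) / n, n = valence.
    vfaces, vedges = defaultdict(list), defaultdict(set)
    for fi, f in enumerate(faces):
        for v in f:
            vfaces[v].append(fi)
    for e in edge_faces:
        for v in e:
            vedges[v].add(e)
    newv = np.empty_like(verts)
    for v in range(len(verts)):
        n = len(vfaces[v])
        F = fpts[vfaces[v]].mean(axis=0)
        R = np.mean([(verts[min(e)] + verts[max(e)]) / 2.0
                     for e in vedges[v]], axis=0)
        newv[v] = (F + 2.0 * R + (n - 3.0) * verts[v]) / n
    # Assemble: each k-gon splits into k quads around its face point.
    foff = len(verts)
    eidx = {e: foff + len(fpts) + i for i, e in enumerate(epts)}
    out_faces = []
    for fi, f in enumerate(faces):
        k = len(f)
        for i, v in enumerate(f):
            out_faces.append((v, eidx[frozenset((v, f[(i + 1) % k]))],
                              foff + fi, eidx[frozenset((f[i - 1], v))]))
    return np.vstack([newv, fpts, [epts[e] for e in epts]]), out_faces

# Example: one step on a cube gives 8 + 6 + 12 = 26 vertices and 24 quads.
verts = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
faces = [(0, 1, 3, 2), (4, 6, 7, 5), (0, 4, 5, 1),
         (2, 3, 7, 6), (0, 2, 6, 4), (1, 5, 7, 3)]
v2, f2 = catmull_clark(verts, faces)
print(len(v2), len(f2))
```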
Jain, Vivek; Chang, Wei; Byonanebye, Dathan M.; Owaraganise, Asiphas; Twinomuhwezi, Ellon; Amanyire, Gideon; Black, Douglas; Marseille, Elliot; Kamya, Moses R.; Havlir, Diane V.; Kahn, James G.
2015-01-01
Background: Evidence favoring earlier HIV ART initiation at high CD4+ T-cell counts (CD4>350/uL) has grown, and guidelines now recommend earlier HIV treatment. However, the cost of providing ART to individuals with CD4>350 in Sub-Saharan Africa has not been well estimated. This remains a major barrier to optimal global cost projections for accelerating the scale-up of ART. Our objective was to compute costs of ART delivery to high CD4+ count individuals in a typical rural Ugandan health center-based HIV clinic, and use these data to construct scenarios of efficient ART scale-up. Methods: Within a clinical study evaluating streamlined ART delivery to 197 individuals with CD4+ cell counts >350 cells/uL (EARLI Study: NCT01479634) in Mbarara, Uganda, we performed a micro-costing analysis of administrative records, ART prices, and a time-and-motion analysis of staff work patterns. We computed observed per-person-per-year (ppy) costs, and constructed models estimating costs under several increasingly efficient ART scale-up scenarios using local salaries, lowest drug prices, optimized patient loads, and inclusion of viral load (VL) testing. Findings: Among 197 individuals enrolled in the EARLI Study, the median pre-ART CD4+ cell count was 569/uL (IQR 451–716). The observed ART delivery cost was $628 ppy at steady state. Models using local salaries and only core laboratory tests estimated costs of $529/$445 ppy (with/without VL testing, respectively). Models with lower salaries, lowest ART prices, and optimized healthcare worker schedules reduced costs by $100–200 ppy. Costs in a maximally efficient scale-up model were $320/$236 ppy (with/without VL testing). This included $39 for personnel, $106 for ART, $130/$46 for laboratory tests, and $46 for administrative/other costs. A key limitation of this study is its derivation and extrapolation of costs from one large rural treatment program of high CD4+ count individuals. Conclusions: In a Ugandan HIV clinic, ART delivery costs, including VL testing, for individuals with CD4>350 were similar to estimates from high-efficiency programs. In higher efficiency scale-up models, costs were substantially lower. These favorable costs may be achieved because high CD4+ count patients are often asymptomatic, facilitating more efficient streamlined ART delivery. Our work provides a framework for calculating costs of efficient ART scale-up models using accessible data from specific programs and regions. PMID:26632823
Efficiency of whole-body counter for various body size calculated by MCNP5 software.
Krstic, D; Nikezic, D
2012-11-01
The efficiency of a whole-body counter for (137)Cs and (40)K was calculated using the MCNP5 code. The ORNL phantoms of a human body of different body sizes were applied in a sitting position in front of a detector. The aim was to investigate the dependence of efficiency on the body size (age) and the detector position with respect to the body and to estimate the accuracy of real measurements. The calculation work presented here is related to the NaI detector, which is available in the Serbian Whole-body Counter facility in Vinca Institute.
A knowledge-based design framework for airplane conceptual and preliminary design
NASA Astrophysics Data System (ADS)
Anemaat, Wilhelmus A. J.
The goal of the work described herein is to develop the second generation of Advanced Aircraft Analysis (AAA) into an object-oriented structure which can be used in different environments. One such environment is the third generation of AAA with its own user interface; the other environment, with the same AAA methods (i.e. the knowledge), is the AAA-AML program. AAA-AML automates the initial airplane design process using current AAA methods in combination with AMRaven methodologies for dependency tracking and knowledge management, using the TechnoSoft Adaptive Modeling Language (AML). This will lead to the following benefits: (1) Reduced design time: computer aided design methods can reduce design and development time and replace tedious hand calculations. (2) Better product through improved design: more alternative designs can be evaluated in the same time span, which can lead to improved quality. (3) Reduced design cost: less training and fewer calculation errors yield substantial savings in design time and related cost. (4) Improved efficiency: the design engineer can avoid technically correct but irrelevant calculations on incomplete or out-of-sync information, particularly if the process enables robust geometry earlier. Although numerous advancements in knowledge-based design have been developed for detailed design, currently no such integrated knowledge-based conceptual and preliminary airplane design system exists. The third generation AAA methods have been tested over a ten year period on many different airplane designs. Using AAA methods will demonstrate significant time savings. The AAA-AML system will be exercised and tested using 27 existing airplanes ranging from single engine propeller aircraft, business jets, and airliners to UAVs and fighters. Data for the various sizing methods will be compared with AAA results to validate these methods. One new design, a Light Sport Aircraft (LSA), will be developed as an exercise to use the tool for designing a new airplane. Using these tools will show an improvement in efficiency over using separate programs, due to the automatic recalculation with any change of input data. The direct visual feedback of 3D geometry in the AAA-AML will lead to quicker resolution of problems as opposed to conventional methods.
Noise produced by turbulent flow into a rotor: Users manual for noise calculation
NASA Technical Reports Server (NTRS)
Amiet, R. K.; Egolf, C. G.; Simonich, J. C.
1989-01-01
A users manual for a computer program for the calculation of noise produced by turbulent flow into a helicopter rotor is presented. The inputs to the program are obtained from the atmospheric turbulence model and mean flow distortion calculation described in another volume of this set of reports. Descriptions of the various program modules and subroutines, their function, programming structure, and the required input and output variables are included. This routine is incorporated as one module of NASA's ROTONET helicopter noise prediction program.
A modular radiative transfer program for gas filter correlation radiometry
NASA Technical Reports Server (NTRS)
Casas, J. C.; Campbell, S. A.
1977-01-01
The fundamentals of a computer program, simulated monochromatic atmospheric radiative transfer (SMART), which calculates atmospheric path transmission, solar radiation, and thermal radiation in the 4.6 micrometer spectral region, are described. A brief outline of atmospheric absorption properties and line by line transmission calculations is explained in conjunction with an outline of the SMART computational procedures. Program flexibility is demonstrated by simulating the response of a gas filter correlation radiometer as one example of an atmospheric infrared sensor. Program limitations, input data requirements, program listing, and comparison of SMART transmission calculations are presented.
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Mielke, Steven L.; Clarkson, Kenneth L.; Truhlar, Donald G.
2012-08-01
We present a Fortran program package, MSTor, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsional motions by the recently proposed MS-T method. This method interpolates between the local harmonic approximation in the low-temperature limit, and the limit of free internal rotation of all torsions at high temperature. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes six utility codes that can be used as stand-alone programs to calculate reduced moment of inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. Catalogue identifier: AEMF_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEMF_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 77 434 No. of bytes in distributed program, including test data, etc.: 3 264 737 Distribution format: tar.gz Programming language: Fortran 90, C, and Perl Computer: Itasca (HP Linux cluster, each node has two-socket, quad-core 2.8 GHz Intel Xeon X5560 “Nehalem EP” processors), Calhoun (SGI Altix XE 1300 cluster, each node containing two quad-core 2.66 GHz Intel Xeon “Clovertown”-class processors sharing 16 GB of main memory), Koronis (Altix UV 1000 server with 190 6-core Intel Xeon X7542 “Westmere” processors at 2.66 GHz), Elmo (Sun Fire X4600 Linux cluster with AMD Opteron cores), and Mac Pro (two 2.8 GHz Quad-core Intel Xeon processors) Operating system: Linux/Unix/Mac OS RAM: 2 Mbytes Classification: 16.3, 16.12, 23 Nature of problem: Calculation of the partition functions and thermodynamic functions (standard-state energy, enthalpy, entropy, and free energy as functions of temperatures) of complex molecules involving multiple torsional motions. Solution method: The multi-structural approximation with torsional anharmonicity (MS-T). The program also provides results for the multi-structural local harmonic approximation [1]. Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multi-torsional problems for which one can afford to calculate all the conformations and their frequencies. Unusual features: The method can be applied to transition states as well as stable molecules. The program package also includes the hull program for the calculation of Voronoi volumes and six utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomain defined by Voronoi tessellation of the conformational subspace, to generate template input files, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method. 
Additional comments: The program package includes a manual, installation script, and input and output files for a test suite. Running time: There are 24 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 seconds. J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, D.G. Truhlar, Practical methods for including torsional anharmonicity in thermochemical calculations of complex molecules: The internal-coordinate multi-structural approximation, Phys. Chem. Chem. Phys. 13 (2011) 10885-10907.
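Of the utility codes listed, the one-dimensional torsional eigenvalue summation is the easiest to sketch: diagonalize a hindered-rotor Hamiltonian in a free-rotor (plane-wave) basis and Boltzmann-sum the eigenvalues. This is a generic illustration of the technique, not MSTor's implementation; symmetry-number handling is omitted and the example values are placeholders.

```python
import numpy as np

HBAR = 1.054571817e-34  # J s
KB = 1.380649e-23       # J/K

def torsional_q(I, V0, m, T, kmax=60):
    """Partition function of a 1D hindered rotor with
    H = -hbar^2/(2I) d^2/dphi^2 + (V0/2)(1 - cos(m*phi)),
    by eigenvalue summation in the basis exp(i*k*phi)."""
    ks = np.arange(-kmax, kmax + 1)
    H = np.diag(HBAR ** 2 * ks ** 2 / (2.0 * I) + V0 / 2.0)
    # cos(m*phi) couples k and k +/- m with matrix element -V0/4.
    for i, k in enumerate(ks):
        for j, kp in enumerate(ks):
            if abs(k - kp) == m:
                H[i, j] -= V0 / 4.0
    E = np.linalg.eigvalsh(H)
    # Energies referenced to the torsional ground state.
    return float(np.sum(np.exp(-(E - E[0]) / (KB * T))))

# Illustrative values: a methyl-like rotor (I ~ 5e-47 kg m^2, threefold
# barrier of ~10 kJ/mol per molecule) at 300 K.
print(torsional_q(5.0e-47, 10e3 / 6.022e23, 3, 300.0))
```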
NASA Technical Reports Server (NTRS)
Tuma, Margaret L.
1995-01-01
To determine the feasibility of coupling the output of an optical fiber to a rib waveguide in a temperature environment ranging from 20 C to 300 C, a theoretical calculation of the coupling efficiency between the two was investigated. This is a significant problem which needs to be addressed to determine whether an integrated optic device can function in a harsh temperature environment. Because the behavior of the integrated-optic device is polarization sensitive, a polarization-preserving optical fiber, via its elliptical core, was used to couple light with a known polarization into the device. To couple light energy efficiently from an optical fiber into a channel waveguide, the design of both components should provide well-matched electric field profiles. The rib waveguide analyzed was the light input channel of an integrated-optic pressure sensor. Due to the complex geometry of the rib waveguide, there is no analytical solution to the wave equation for the guided modes. Approximation or numerical techniques must be utilized to determine the propagation constants and field patterns of the guide. In this study, three solution methods were used to determine the field profiles of both the fiber and the guide: the effective-index method (EIM), Marcatili's approximation, and a Fourier method. These methods were utilized independently to calculate the electric field profile of a rib channel waveguide and an elliptical fiber at two temperatures, 20 C and 300 C, chosen to represent a nominal and a high temperature that the device would experience. Using the electric field profile calculated from each method, the theoretical coupling efficiency between the single-mode optical fiber and the rib waveguide was calculated using the overlap integral, and the results of the techniques were compared. Initially, perfect alignment was assumed and the coupling efficiency calculated. The coupling efficiency calculation was then repeated for a range of transverse offsets at both temperatures. Results of the calculation indicate that a high coupling efficiency can be achieved when the two components are properly aligned. The coupling efficiency was more sensitive to alignment offsets in the y direction than in the x, due to the elliptical modal profiles of both components. Changes in the coupling efficiency over temperature were found to be minimal.
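The overlap-integral step is generic and easy to sketch for two sampled transverse mode profiles. The elliptical Gaussians below are stand-ins for illustration, not the paper's computed EIM/Marcatili/Fourier fields, and all dimensions are placeholders:

```python
import numpy as np

def coupling_efficiency(E1, E2, dA):
    """Power coupling from the normalized overlap integral:
    eta = |sum(E1 * conj(E2)) dA|^2 / (sum|E1|^2 dA * sum|E2|^2 dA)."""
    num = np.abs(np.sum(E1 * np.conj(E2)) * dA) ** 2
    den = np.sum(np.abs(E1) ** 2) * dA * np.sum(np.abs(E2) ** 2) * dA
    return num / den

# Elliptical Gaussian stand-ins on a common grid; sweep a transverse
# offset in y to mimic the misalignment study.
x = np.linspace(-10e-6, 10e-6, 401)
X, Y = np.meshgrid(x, x)
dA = (x[1] - x[0]) ** 2
fiber = np.exp(-(X / 3e-6) ** 2 - (Y / 1.5e-6) ** 2)
for dy in (0.0, 0.5e-6, 1.0e-6):
    guide = np.exp(-(X / 3.5e-6) ** 2 - ((Y - dy) / 1.2e-6) ** 2)
    print(dy, coupling_efficiency(fiber, guide, dA))
```

Because both profiles are narrower in y, the computed efficiency falls off faster for y offsets, in line with the sensitivity the abstract reports.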
Code of Federal Regulations, 2013 CFR
2013-01-01
... least three significant figures shall be reported. 4.3Off mode. 4.3.1Pool heaters with a seasonal off... significant figures shall be reported. 5.Calculations. 5.1Thermal efficiency. Calculate the thermal efficiency...
JPL-IDEAS - ITERATIVE DESIGN OF ANTENNA STRUCTURES
NASA Technical Reports Server (NTRS)
Levy, R.
1994-01-01
The Iterative DEsign of Antenna Structures (IDEAS) program is a finite element analysis and design optimization program with special features for the analysis and design of microwave antennas and associated sub-structures. As the principal structure analysis and design tool for the Jet Propulsion Laboratory's Ground Antenna and Facilities Engineering section of NASA's Deep Space Network, IDEAS combines flexibility with easy use. The relatively small bending stiffness of the components of large, steerable reflector antennas allows IDEAS to use pinjointed (three translational degrees of freedom per joint) models for modeling the gross behavior of these antennas when subjected to static and dynamic loading. This facilitates the formulation of the redesign algorithm which has only one design variable per structural element. Input data deck preparation has been simplified by the use of NAMELIST inputs to promote clarity of data input for problem defining parameters, user selection of execution and design options and output requests, and by the use of many attractive and familiar features of the NASTRAN program (in many cases, NASTRAN and IDEAS formatted bulk data cards are interchangeable). Features such as simulation of a full symmetric structure based on analyses of only half the structure make IDEAS a handy and efficient analysis tool, with many features unavailable in any other finite element analysis program. IDEAS can choose design variables such as areas of rods and thicknesses of plates to minimize total structure weight, constrain the structure weight to a specified value while maximizing a natural frequency or minimizing compliance measures, and can use a stress ratio algorithm to size each structural member so that it is at maximum or minimum stress level for at least one of the applied loads. Calculations of total structure weight can be broken down according to material. Center of gravity weight balance, static first and second moments about the center of mass and optionally about a user-specified gridpoint, and lumped structure weight at grid points can also be calculated. Other analysis outputs include calculation of reactions, displacements, and element stresses due to specified gravity, thermal, and external applied loads; calculations of linear combinations of specific node displacements (e.g. to represent motions of rigid attachments not included in the structure model), natural frequency eigenvalues and eigenvectors, structure reactions and element stresses, and coordinates of effective modal masses. Cassegrain antenna boresight error analysis of a best fitting paraboloid and Cassegrain microwave antenna root mean square half-pathlength error analysis of a best fitting paraboloid are also performed. The IDEAS program is written in ATHENA FORTRAN and ASSEMBLER for an EXEC 8 operating system and was implemented on a UNIVAC 1100 series computer. The minimum memory requirement for the program is approximately 42,000 36-bit words. This program is available on a 9-track 1600 BPI magnetic tape in UNIVAC FURPUR format only; since JPL-IDEAS will not run on other platforms, COSMIC will not reformat the code to be readable on other platforms. The program was developed in 1988.
NASA Technical Reports Server (NTRS)
Bennett, R. M.; Bland, S. R.; Redd, L. T.
1973-01-01
Computer programs for calculating the stability characteristics of a balloon tethered in a steady wind are presented. Equilibrium conditions, characteristic roots, and modal ratios are calculated for a range of discrete values of velocity for a fixed tether-line length. Separate programs are used: (1) to calculate longitudinal stability characteristics, (2) to calculate lateral stability characteristics, (3) to plot the characteristic roots versus velocity, (4) to plot the characteristic roots in root-locus form, (5) to plot the longitudinal modes of motion, and (6) to plot the lateral modes for motion. The basic equations, program listings, and the input and output data for sample cases are presented, with a brief discussion of the overall operation and limitations. The programs are based on a linearized, stability-derivative type of analysis, including balloon aerodynamics, apparent mass, buoyancy effects, and static forces which result from the tether line.
Processing Device for High-Speed Execution of an Xrisc Computer Program
NASA Technical Reports Server (NTRS)
Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)
2016-01-01
A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and controls execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provides the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values are loaded into the register and the set of output values are unloaded from the register in parallel with processing of the current calculation set.
Evaluation of jamming efficiency for the protection of a single ground object
NASA Astrophysics Data System (ADS)
Matuszewski, Jan
2018-04-01
Electronic countermeasures (ECM) include methods to completely prevent or restrict the effective use of the electromagnetic spectrum by an opponent. The most widespread means of disrupting the operation of electronic devices is to create active and passive radio-electronic jamming. The paper presents a method of calculating jamming efficiency for protecting ground objects against radars mounted on airborne platforms. The basic mathematical formulas for calculating the efficiency of active radar jamming are presented. The numerical calculations for ground object protection are made for two different electronic warfare scenarios: with the jammer placed very close to the protected object, and with the jammer at a specified distance from it. The results of these calculations are presented in figures showing the minimum distance of effective jamming. The realization of effective radar jamming in electronic warfare systems depends mainly on precise knowledge of the radar's and the jammer's technical parameters, the distance between them, the assumed value of the degradation coefficient, the conditions of electromagnetic energy propagation, and the applied jamming method. The conclusions from these calculations facilitate deciding how jamming should be conducted to achieve high efficiency during electronic warfare training.
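In textbook form, the efficiency condition compares the one-way jammer path against the two-way radar echo. A minimal sketch under the usual simplifications (matched polarization and bandwidth factors omitted; the symbols are the standard radar-equation quantities, not the paper's notation):

```python
import math

def j_to_s(Pj, Gj, Grj, Pt, Gt, sigma, Rt, Rj):
    """Jamming-to-signal power ratio at the victim radar: one-way jammer
    path (range Rj, radar gain Grj toward the jammer) over the two-way
    echo from a target of RCS sigma at range Rt. Wavelength cancels."""
    return (Pj * Gj * Grj * 4.0 * math.pi * Rt ** 4) / (Pt * Gt ** 2 * sigma * Rj ** 2)

def burn_through_range(k, Pj, Gj, Grj, Pt, Gt, sigma, Rj):
    """Target range at which J/S falls to the degradation coefficient k;
    inside this range the echo overcomes the jamming."""
    return (k * Pt * Gt ** 2 * sigma * Rj ** 2 /
            (4.0 * math.pi * Pj * Gj * Grj)) ** 0.25
```

Sweeping the jammer stand-off distance Rj in such a model is one way to reproduce the minimum-effective-jamming-distance curves the paper presents.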
Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; Hilgart, Mark C.; Stepanov, Sergey; Sanishvili, Ruslan; Becker, Michael; Winter, Graeme; Sauter, Nicholas K.; Smith, Janet L.; Fischetti, Robert F.
2014-01-01
The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce. PMID:25484844
Stochastic optimal operation of reservoirs based on copula functions
NASA Astrophysics Data System (ADS)
Lei, Xiao-hui; Tan, Qiao-feng; Wang, Xu; Wang, Hao; Wen, Xin; Wang, Chao; Zhang, Jing-wen
2018-02-01
Stochastic dynamic programming (SDP) has been widely used to derive operating policies for reservoirs considering streamflow uncertainties. In SDP, the transition probability matrix needs to be calculated more accurately and efficiently in order to improve the economic benefit of reservoir operation. In this study, we proposed a stochastic optimization model for hydropower generation reservoirs, in which (1) the transition probability matrix was calculated based on copula functions; and (2) the value function of the last period was calculated by stepwise iteration. First, the marginal distribution of stochastic inflow in each period was built and the joint distributions of adjacent periods were obtained using three members of the Archimedean copula family, from which the conditional probability formula was derived. Then, the value in the last period was calculated by a simple recursive equation with the proposed stepwise iteration method, and the value function was fitted with a linear regression model. These improvements were incorporated into classic SDP and applied to a case study of the Ertan reservoir, China. The results show that the transition probability matrix can be obtained more easily and accurately by the proposed copula-based method than by conventional methods based on observed or synthetic streamflow series, and the reservoir operation benefit can also be increased.
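As an illustration of the first ingredient, transition probabilities between inflow classes follow directly from a fitted copula evaluated at the class boundaries. The sketch below uses a Clayton copula (one member of the Archimedean family) and assumes the marginal CDF values at the bin edges are already computed; it is a generic illustration, not the paper's model:

```python
import numpy as np

def clayton(u, v, theta):
    """Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)."""
    if u <= 0.0 or v <= 0.0:
        return 0.0
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def transition_matrix(edges, theta):
    """P[i, j] = P(flow_{t+1} in class j | flow_t in class i), with classes
    given as marginal-CDF values of the bin edges (edges span [0, 1])."""
    n = len(edges) - 1
    P = np.empty((n, n))
    for i in range(n):
        a, b = edges[i], edges[i + 1]
        for j in range(n):
            c, d = edges[j], edges[j + 1]
            joint = (clayton(b, d, theta) - clayton(a, d, theta)
                     - clayton(b, c, theta) + clayton(a, c, theta))
            P[i, j] = joint / (b - a)
    return P

P = transition_matrix(np.linspace(0.0, 1.0, 6), theta=2.0)
print(P.sum(axis=1))  # each row sums to 1 by construction
```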
A Gaussian quadrature method for total energy analysis in electronic state calculations
NASA Astrophysics Data System (ADS)
Fukushima, Kimichika
This article reports studies by Fukushima and coworkers since 1980 concerning their highly accurate numerical integration method using Gaussian quadratures to evaluate the total energy in electronic state calculations. Gauss-Legendre and Gauss-Laguerre quadratures were used for integrals over finite and infinite regions, respectively. Our previous article showed that, for diatomic molecules such as CO and FeO, elliptic coordinates efficiently achieved high numerical integration accuracy even with a numerical basis set including transition metal atomic orbitals. This article generalizes the approach to multiatomic systems, with direct integrals in each decomposed elliptic coordinate determined from the nuclear positions of picked-up atom pairs. Sample calculations were performed for the molecules O3 and H2O. This article also presents, in another coordinate system, a numerical integration that partially uses Becke's decomposition published in 1988, but without Becke's fuzzy cell generated by polynomials of the internuclear distance between the paired atoms. Instead, simple nuclear weights comprising exponential functions around the nuclei are used. The one-center integral is performed with a Gaussian quadrature pack in a spherical coordinate system, included in the author's original program from around 1980. For this decomposition into one-center integrals, sample calculations are carried out for Li2.
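The two quadrature families mentioned handle the two integration domains. A minimal sketch on a radial integrand with a known value (the integrand is illustrative, not one of the paper's matrix elements):

```python
import numpy as np

f = lambda r: r ** 2 * np.exp(-2.0 * r)   # integral over [0, inf) equals 1/4

# Finite interval [0, 2] with Gauss-Legendre (nodes/weights given on [-1, 1]).
x, w = np.polynomial.legendre.leggauss(32)
a, b = 0.0, 2.0
t = 0.5 * (b - a) * x + 0.5 * (b + a)
I_finite = 0.5 * (b - a) * np.sum(w * f(t))

# Semi-infinite domain [0, inf) with Gauss-Laguerre (weight e^-x built in):
# integral of f equals integral of e^-x * g(x) with g(x) = x^2 e^-x.
x, w = np.polynomial.laguerre.laggauss(32)
I_infinite = np.sum(w * x ** 2 * np.exp(-x))

print(I_finite, I_infinite)   # I_infinite should be very close to 0.25
```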
Pothineni, Sudhir Babu; Venugopalan, Nagarajan; Ogata, Craig M.; ...
2014-11-18
The calculation of single- and multi-crystal data collection strategies and a data processing pipeline have been tightly integrated into the macromolecular crystallographic data acquisition and beamline control software JBluIce. Both tasks employ wrapper scripts around existing crystallographic software. JBluIce executes scripts through a distributed resource management system to make efficient use of all available computing resources through parallel processing. The JBluIce single-crystal data collection strategy feature uses a choice of strategy programs to help users rank sample crystals and collect data. The strategy results can be conveniently exported to a data collection run. The JBluIce multi-crystal strategy feature calculates a collection strategy to optimize coverage of reciprocal space in cases where incomplete data are available from previous samples. The JBluIce data processing runs simultaneously with data collection using a choice of data reduction wrappers for integration and scaling of newly collected data, with an option for merging with pre-existing data. Data are processed separately if collected from multiple sites on a crystal or from multiple crystals, then scaled and merged. Results from all strategy and processing calculations are displayed in relevant tabs of JBluIce.
A note on calculation of efficiency and emissions from wood and wood pellet stoves
NASA Astrophysics Data System (ADS)
Petrocelli, D.; Lezzi, A. M.
2015-11-01
In recent years, national laws and international regulations have introduced strict limits on efficiency and emissions from woody biomass appliances to promote the diffusion of models characterized by low emissions and high efficiency. The evaluation of efficiency and emissions is made during the certification process, which consists of standardized tests. Standards prescribe the procedures to be followed during tests and the relations to be used to determine the mean values of efficiency and emissions. In practice, these values are calculated using flue gas temperature and composition averaged over the whole test period, lasting from 1 to 6 hours. Typically, in wood appliances the fuel burning rate is not constant, and this leads to considerable variation over time in the composition and flow rate of the flue gas. In this paper we show that this may cause significant differences between emission values calculated according to the standards and those obtained by integrating the instantaneous mass and energy balances over the test period. In addition, we propose approximate relations and a method for wood stoves that supply more accurate results than those calculated according to the standards. These relations can easily be implemented in computer-controlled data acquisition systems.
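A schematic numpy comparison of the two accounting methods discussed above (the waveforms and magnitudes are invented, units arbitrary): when flue-gas concentration and flow rate vary in phase with the burn rate, the product of test-period averages differs from the time integral of the instantaneous product.

import numpy as np

dt = 1.0
t = np.arange(0.0, 3600.0, dt)                        # one-hour test, 1 s steps
conc = 800.0 + 600.0 * np.sin(2*np.pi*t/1200.0)       # pollutant concentration (arbitrary units)
flow = 0.010 + 0.004 * np.sin(2*np.pi*t/1200.0)       # flue gas mass flow, in phase with burn rate

standard_like = conc.mean() * flow.mean() * len(t) * dt   # product of test-period averages
integrated = np.sum(conc * flow) * dt                     # integral of the instantaneous product

print(standard_like, integrated, integrated/standard_like - 1.0)   # ~ +15% here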
NASA Astrophysics Data System (ADS)
Ma, Wei; Meng, Sheng
2014-03-01
We present a set of algorithms, based solely on first-principles calculations, to accurately calculate key properties of a DSC device including sunlight harvesting, electron injection, electron-hole recombination, and open circuit voltage. Two series of D-π-A dyes are adopted as sample dyes. The short circuit current can be predicted by calculating the dyes' photoabsorption, and the electron injection and recombination lifetimes, using real-time time-dependent density functional theory (TDDFT) simulations. The open circuit voltage can be reproduced by calculating the energy difference between the quasi-Fermi level of electrons in the semiconductor and the electrolyte redox potential, considering the influence of electron recombination. Based on timescales obtained from real-time TDDFT dynamics for excited states, the estimated power conversion efficiency of the DSC fits the experiment nicely, with deviation below 1-2%. The light harvesting efficiency, incident photon-to-electron conversion efficiency, and current-voltage characteristics can also be well reproduced. The predicted efficiency can serve as either an ideal limit for optimizing the photovoltaic performance of a given dye or a virtual device that closely mimics the performance of a real device under different experimental settings.
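For reference, the power conversion efficiency that the predicted quantities feed into is the standard eta = Jsc * Voc * FF / Pin; a one-line sketch with hypothetical dye values (not the paper's numbers):

def dsc_efficiency(jsc_mA_cm2, voc_V, ff, pin_mW_cm2=100.0):
    # eta = Jsc * Voc * FF / Pin under AM1.5G (100 mW/cm^2) illumination
    return jsc_mA_cm2 * voc_V * ff / pin_mW_cm2

print(dsc_efficiency(17.0, 0.72, 0.73))   # ~0.089, i.e. ~8.9% (hypothetical values)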
Energy Efficiency Finance Programs: Use Case Analysis to Define Data Needs and Guidelines
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thompson, Peter; Larsen, Peter; Kramer, Chris
There are over 200 energy efficiency loan programs across 49 U.S. states, administered by utilities, state/local government agencies, or private lenders. This distributed model has led to significant variation in program design and implementation practices including how data is collected and used. The challenge of consolidating and aggregating data across independently administered programs has been illustrated by a recent pilot of an open source database for energy efficiency financing program data. This project was led by the Environmental Defense Fund (EDF), the Investor Confidence Project, the Clean Energy Finance Center (CEFC), and the University of Chicago. This partnership discussed data collection practices with a number of existing energy efficiency loan programs and identified four programs that were suitable and willing to participate in the pilot database (Diamond 2014). The partnership collected information related to ~12,000 loans with an aggregate value of ~$100M across the four programs. Of the 95 data fields collected across the four programs, 30 fields were common between two or more programs and only seven data fields were common across all programs. The results of that pilot study illustrate the inconsistencies in current data definition and collection practices among energy efficiency finance programs and may contribute to certain barriers.
NASA Technical Reports Server (NTRS)
Hall, Edward J.; Delaney, Robert A.; Bettner, James L.
1991-01-01
The primary objective of this study was the development of a time-dependent three-dimensional Euler/Navier-Stokes aerodynamic analysis to predict unsteady compressible transonic flows about ducted and unducted propfan propulsion systems at angle of attack. The computer codes resulting from this study are referred to as Advanced Ducted Propfan Analysis Codes (ADPAC). This report is intended to serve as a computer program user's manual for the ADPAC developed under Task 2 of NASA Contract NAS3-25270, Unsteady Ducted Propfan Analysis. Aerodynamic calculations were based on a four-stage Runge-Kutta time-marching finite volume solution technique with added numerical dissipation. A time-accurate implicit residual smoothing operator was utilized for unsteady flow predictions. For unducted propfans, a single H-type grid was used to discretize each blade passage of the complete propeller. For ducted propfans, a coupled system of five grid blocks utilizing an embedded C-grid about the cowl leading edge was used to discretize each blade passage. Grid systems were generated by a combined algebraic/elliptic algorithm developed specifically for ducted propfans. Numerical calculations were compared with experimental data for both ducted and unducted propfan flows. The solution scheme demonstrated efficiency and accuracy comparable with other schemes of this class.
Theoretical research program to study chemical reactions in AOTV bow shock tubes
NASA Technical Reports Server (NTRS)
Taylor, Peter R.
1993-01-01
The main focus was the development, implementation, and calibration of methods for performing molecular electronic structure calculations to high accuracy. These methods were then applied to a number of chemical reactions and species of interest to NASA, notably in the area of combustion chemistry. Among the development work undertaken was a collaborative effort to develop a program to efficiently predict molecular structures and vibrational frequencies using energy derivatives. Another major development effort involved the design of new atomic basis sets for use in chemical studies; these sets were considerably more accurate than those previously in use. Much effort was also devoted to calibrating methods for computing accurate molecular wave functions, including the first reliable calibrations for realistic molecules using full CI results. A wide variety of application calculations were undertaken. One area of interest was the spectroscopy and thermochemistry of small molecules, including establishing small-molecule binding energies to an accuracy rivaling, and on occasion surpassing, experiment. Such binding energies are essential input for modeling chemical reaction processes, such as combustion. Studies of large molecules and processes important in both hydrogen and hydrocarbon combustion chemistry were also carried out. Finally, some effort was devoted to the structure and spectroscopy of small metal clusters, with applications to materials science problems.
NASA Astrophysics Data System (ADS)
Zheng, Jingjing; Meana-Pañeda, Rubén; Truhlar, Donald G.
2013-08-01
We present an improved version of the MSTor program package, which calculates partition functions and thermodynamic functions of complex molecules involving multiple torsions; the method is based on either a coupled torsional potential or an uncoupled torsional potential. The program can also carry out calculations in the multiple-structure local harmonic approximation. The program package also includes seven utility codes that can be used as stand-alone programs to calculate reduced moment-of-inertia matrices by the method of Kilpatrick and Pitzer, to generate conformational structures, to calculate, either analytically or by Monte Carlo sampling, volumes for torsional subdomains defined by Voronoi tessellation of the conformational subspace, to generate template input files for the MSTor calculation and Voronoi calculation, and to calculate one-dimensional torsional partition functions using the torsional eigenvalue summation method.
Restrictions: There is no limit on the number of torsions that can be included in either the Voronoi calculation or the full MS-T calculation. In practice, the range of problems that can be addressed with the present method consists of all multitorsional problems for which one can afford to calculate all the conformational structures and their frequencies.
Unusual features: The method can be applied to transition states as well as stable molecules. The program package also includes the hull program for the calculation of Voronoi volumes and the symmetry program for determining the point-group symmetry of a molecule.
Additional comments: The program package includes a manual, installation script, and input and output files for a test suite.
Running time: There are 26 test runs. The running time of the test runs on a single processor of the Itasca computer is less than 2 s.
References: [1] MS-T(C) method: Quantum Thermochemistry: Multi-Structural Method with Torsional Anharmonicity Based on a Coupled Torsional Potential, J. Zheng and D.G. Truhlar, Journal of Chemical Theory and Computation 9 (2013) 1356-1367, DOI: http://dx.doi.org/10.1021/ct3010722. [2] MS-T(U) method: Practical Methods for Including Torsional Anharmonicity in Thermochemical Calculations of Complex Molecules: The Internal-Coordinate Multi-Structural Approximation, J. Zheng, T. Yu, E. Papajak, I.M. Alecu, S.L. Mielke, and D.G. Truhlar, Physical Chemistry Chemical Physics 13 (2011) 10885-10907.
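A minimal sketch of the one-dimensional torsional eigenvalue summation mentioned above, assuming a cosine hindrance potential V(phi) = (V0/2)(1 - cos(n*phi)) and a plane-wave basis; the parameter values are illustrative and not from MSTor:

import numpy as np

def torsional_partition_function(V0, B, n, T, mmax=60):
    # q(T) for a hindered rotor with V(phi) = V0/2 (1 - cos(n phi)).
    # V0 (barrier) and B (rotational constant) in cm^-1; T in K.
    # Basis: e^{i m phi}, |m| <= mmax; cos couples m with m +/- n.
    kB = 0.6950348                      # Boltzmann constant, cm^-1 per K
    m = np.arange(-mmax, mmax + 1)
    H = np.diag(B * m.astype(float)**2 + 0.5 * V0)
    for i, mi in enumerate(m):
        for j, mj in enumerate(m):
            if abs(mi - mj) == n:
                H[i, j] -= 0.25 * V0
    E = np.linalg.eigvalsh(H)
    E -= E.min()                        # energies from the torsional ground state
    return np.exp(-E / (kB * T)).sum()

# e.g. a methyl-like rotor: V0 ~ 400 cm^-1, B ~ 5.3 cm^-1, threefold
print(torsional_partition_function(400.0, 5.3, 3, 298.15))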
Increasing the volumetric efficiency of Diesel engines by intake pipes
NASA Technical Reports Server (NTRS)
List, Hans
1933-01-01
Development of a method for calculating the volumetric efficiency of piston engines with intake pipes. Application of this method to the scavenging pumps of two-stroke-cycle engines with crankcase scavenging and to four-stroke-cycle engines. The utility of the method is demonstrated by volumetric-efficiency tests of the two-stroke-cycle engines with crankcase scavenging. Its practical application to the calculation of intake pipes is illustrated by an example.
Bayram, Tuncay; Sönmez, Bircan
2012-04-01
In this study, we aimed to develop a computer program that calculates the approximate radiation dose received by the embryo/fetus in nuclear medicine applications. Radiation dose values per MBq of administered activity received by the embryo/fetus in nuclear medicine applications were gathered from the literature for various stages of pregnancy. These values were embedded in the computer code, written in the Fortran 90 programming language. The computer program, called nmfdose, covers almost all radiopharmaceuticals used in nuclear medicine applications. The approximate radiation dose received by the embryo/fetus can be calculated easily in a few steps using this program. Although there are some constraints on using the program in certain special cases, nmfdose is useful and provides a practical solution for calculating the approximate dose to the embryo/fetus in nuclear medicine applications.
Computer Programs for Calculating the Isentropic Flow Properties for Mixtures of R-134a and Air
NASA Technical Reports Server (NTRS)
Kvaternik, Raymond G.
2000-01-01
Three computer programs for calculating the isentropic flow properties of R-134a/air mixtures, developed in support of the heavy gas conversion of the Langley Transonic Dynamics Tunnel (TDT) from dichlorodifluoromethane (R-12) to 1,1,1,2-tetrafluoroethane (R-134a), are described. The first program calculates the Mach number and the corresponding flow properties when the total temperature, total pressure, static pressure, and mole fraction of R-134a in the mixture are given. The second program calculates tables of isentropic flow properties for a specified set of free-stream Mach numbers given the total pressure, total temperature, and mole fraction of R-134a. Real-gas effects are accounted for in these programs by treating the gases comprising the mixture as both thermally and calorically imperfect. The third program is a specialized version of the first program in which the gases are thermally perfect. It was written to provide a simpler computational alternative to the first program in those cases where real-gas effects are not important. The theory and computational procedures underlying the programs are summarized, the equations used to compute the flow quantities of interest are given, and sample calculated results that encompass the operating conditions of the TDT are shown.
A C-band 55% PAE high gain two-stage power amplifier based on AlGaN/GaN HEMT
NASA Astrophysics Data System (ADS)
Zheng, Jia-Xin; Ma, Xiao-Hua; Lu, Yang; Zhao, Bo-Chao; Zhang, Hong-He; Zhang, Meng; Cao, Meng-Yi; Hao, Yue
2015-10-01
A C-band high-efficiency, high-gain two-stage power amplifier based on an AlGaN/GaN high electron mobility transistor (HEMT) is designed and measured in this paper. The input and output impedances for the optimum power-added efficiency (PAE) are determined at the fundamental and 2nd harmonic frequencies (f0 and 2f0). Harmonic manipulation networks are designed in both the driver stage and the power stage, which suppress the second harmonic to a very low level within the operating frequency band. The inter-stage matching network and the output power combining network are then calculated to achieve a low insertion loss, so the PAE and the power gain are greatly improved. Over an operating frequency range of 5.4 GHz-5.8 GHz in CW mode, the amplifier delivers a maximum output power of 18.62 W, with a PAE of 55.15% and an associated power gain of 28.7 dB, an outstanding performance. Project supported by the National Key Basic Research Program of China (Grant No. 2011CBA00606), Program for New Century Excellent Talents in University, China (Grant No. NCET-12-0915), and the National Natural Science Foundation of China (Grant No. 61334002).
Chiarotti, Ugo; Moroli, Valerio; Menchetti, Fernando; Piancaldini, Roberto; Bianco, Loris; Viotto, Alberto; Baracchini, Giulia; Gaspardo, Daniele; Nazzi, Fabio; Curti, Maurizio; Gabriele, Massimiliano
2017-03-01
A 39-W thermoelectric generator prototype has been realized and installed in an industrial plant for on-line trials. The prototype was developed as an energy-harvesting demonstrator using low-temperature cooling-water waste heat as the energy source. The objective of the research program is to measure the actual performance of this kind of device working with industrial water below 90 °C as the hot source and fresh water at about 15 °C as the cold sink. The article presents the first results of the research program. Under the tested operating conditions, it was verified that the produced electric power exceeds the energy required to pump the water from the hot source and cold sink to the thermoelectric generator unit if they are located at a distance not exceeding 50 m, where the electric energy conversion efficiency is 0.33%. It was calculated that if the distance from the hot source and cold sink to the thermoelectric generator unit is increased to 100 m, the produced electric energy equals the energy required for water pumping, while with that distance reduced to zero the unit achieves an electric energy conversion efficiency of 0.61%.
Standardization of Tc-99 by two methods and participation at the CCRI(II)-K2. Tc-99 comparison.
Sahagia, M; Antohe, A; Ioan, R; Luca, A; Ivan, C
2014-05-01
The work accomplished within the participation in the 2012 key comparison of Tc-99 is presented. The solution was standardized for the first time at IFIN-HH by two methods: LSC-TDCR and 4π(PC)β-γ efficiency tracer. The methods are described and the results are compared. For the LSC-TDCR method, the program TDCR07c, written and provided by P. Cassette, was used for processing the measurement data. The results are 2.1% higher than when applying the TDCR06b program; the higher value, calculated with the software TDCR07c, was used for reporting the final result in the comparison. The tracer used for the 4π(PC)β-γ efficiency tracer method was a standard (60)Co solution. The sources were prepared from the mixed (60)Co+(99)Tc solution and a general extrapolation curve of the form N_β(Tc-99)/M(Tc-99) = f[1 - ε(Co-60)] was drawn. This value was not used for the final result of the comparison. The difference between the values of activity concentration obtained by the two methods was within the limit of the combined standard uncertainty of the difference of these two results. © 2013 Published by Elsevier Ltd.
Algorithm of composing the schedule of construction and installation works
NASA Astrophysics Data System (ADS)
Nehaj, Rustam; Molotkov, Georgij; Rudchenko, Ivan; Grinev, Anatolij; Sekisov, Aleksandr
2017-10-01
An algorithm for scheduling works is developed in which the priority of a work item equals the total weight of its subordinate works, the vertices of the graph, and it is proved that for tree-type graphs the algorithm is optimal. An algorithm is synthesized that reduces the search for solutions when drawing up schedules of construction and installation works by isolating a minimal-power subset containing the optimal solution of the problem, determined by the structure of the initial data and their numerical values. An algorithm for scheduling construction and installation work is developed that takes into account the movement schedule of brigades; it can efficiently compute minimum work durations with respect to the parameters of organizational and technological reliability through use of the branch-and-bound method. The computational algorithm was implemented in MATLAB 2008. The initial data matrices were filled with random numbers uniformly distributed in the range from 1 to 100. Solving the problem took 0.5, 2.5, 7.5, and 27 minutes for the successive test cases. Thus, the proposed method for estimating the lower bound of the solution is sufficiently accurate and allows efficient solution of the minimax task of scheduling construction and installation works.
Program calculates Z-factor for natural gas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Coker, A.K.
The Fortran program called Physic presented in this article calculates the gas deviation or compressibility factor, Z, of natural gas. The author has used the program for determining discharge-piping pressure drop. The calculated Z is within 5% accuracy for natural hydrocarbon gas with a specific gravity between 0.5 and 0.8, and at a pressure below 5,000 psia.
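The article does not give Physic's correlation; as a stand-in of similar scope, the sketch below combines Standing's pseudo-critical correlations with Papay's explicit Z formula, both published correlations, which likewise give few-percent accuracy for sweet natural gas in the 0.5-0.8 specific-gravity range (this is an illustrative substitute, not the program's method):

def z_factor(p_psia, T_rankine, sg):
    # pseudo-critical properties from Standing's correlation (field units)
    Tpc = 168.0 + 325.0 * sg - 12.5 * sg**2     # pseudo-critical temperature, R
    Ppc = 677.0 + 15.0 * sg - 37.5 * sg**2      # pseudo-critical pressure, psia
    Tpr, Ppr = T_rankine / Tpc, p_psia / Ppc
    # Papay's explicit approximation for the deviation factor Z
    return 1.0 - 3.53 * Ppr / 10**(0.9813 * Tpr) + 0.274 * Ppr**2 / 10**(0.8157 * Tpr)

print(z_factor(1000.0, 560.0, 0.65))   # ~0.86 at ~100 F (illustrative inputs)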
NASA Technical Reports Server (NTRS)
Gokoglu, S. A.; Chen, B. K.; Rosner, D. E.
1984-01-01
The computer program based on multicomponent chemically frozen boundary layer (CFBL) theory for calculating vapor and/or small-particle deposition rates is documented. A specific application to perimeter-averaged Na2SO4 deposition rate calculations on a cylindrical collector is demonstrated. The manual includes a typical program input and output for users.
Soil bulk density and soil moisture calculated with a FORTRAN 77 program.
G.L. Starr; J.M. Geist
1988-01-01
This paper presents an improved version of BDEN, an interactive computer program written in FORTRAN 77 that will calculate soil bulk density and moisture percentage by weight and volume. Calculations allow for deducting coarse fragment weight and volume. The program will also summarize the resulting data by giving the mean, standard deviation, and 95-percent confidence...
Analysis and calculation of lightning-induced voltages in aircraft electrical circuits
NASA Technical Reports Server (NTRS)
Plumer, J. A.
1974-01-01
Techniques to calculate the transfer functions relating lightning-induced voltages in aircraft electrical circuits to aircraft physical characteristics and lightning current parameters are discussed. The analytical work was carried out concurrently with an experimental program of measurements of lightning-induced voltages in the electrical circuits of an F89-J aircraft. A computer program, ETCAL, developed earlier to calculate resistive and inductive transfer functions is refined to account for skin effect, providing results more valid over a wider range of lightning waveshapes than formerly possible. A computer program, WING, is derived to calculate the resistive and inductive transfer functions between a basic aircraft wing and a circuit conductor inside it. Good agreement is obtained between transfer inductances calculated by WING and those reduced from measured data by ETCAL. This computer program shows promise of expansion to permit eventual calculation of potential lightning-induced voltages in electrical circuits of complete aircraft in the design stage.
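For a single internal conductor, the resistive and inductive transfer functions described above reduce to v(t) = R_t i(t) + M di/dt applied to the stroke current. An illustrative numpy sketch with a double-exponential lightning waveform and assumed transfer values (not data from the F89-J measurements):

import numpy as np

# double-exponential stroke current, i(t) = I0 (e^{-a t} - e^{-b t})
I0, a, b = 30e3, 1.4e4, 6.0e6          # ~30 kA class waveform (assumed)
Rt, Mt = 2.5e-3, 40e-9                 # assumed resistive (ohm) and inductive (H) transfer terms

t = np.linspace(0.0, 200e-6, 20001)
i = I0 * (np.exp(-a * t) - np.exp(-b * t))
didt = I0 * (-a * np.exp(-a * t) + b * np.exp(-b * t))

v = Rt * i + Mt * didt                 # induced voltage in the internal circuit
print(v.max(), v.min())                # inductive term dominates the early peak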
Trace contaminant control simulation computer program, version 8.1
NASA Technical Reports Server (NTRS)
Perry, J. L.
1994-01-01
The Trace Contaminant Control Simulation computer program is a tool for assessing the performance of various process technologies for removing trace chemical contamination from a spacecraft cabin atmosphere. Included in the simulation are chemical and physical adsorption by activated charcoal, chemical adsorption by lithium hydroxide, absorption by humidity condensate, and low- and high-temperature catalytic oxidation. Means are provided for simulating regenerable as well as nonregenerable systems. The program provides an overall mass balance of chemical contaminants in a spacecraft cabin given specified generation rates. Removal rates are based on device flow rates specified by the user and calculated removal efficiencies based on cabin concentration and removal technology experimental data. Versions 1.0 through 8.0 are documented in NASA TM-108409. TM-108409 also contains a source file listing for version 8.0. Changes to version 8.0 are documented in this technical memorandum and a source file listing for the modified version, version 8.1, is provided. Detailed descriptions for the computer program subprograms are extracted from TM-108409 and modified as necessary to reflect version 8.1. Version 8.1 supersedes version 8.0. Information on a separate user's guide is available from the author.
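The single-contaminant core of such a cabin mass balance is dC/dt = G/V - (Q*eta/V)*C, generation against device removal; a toy explicit-Euler sketch with invented numbers (the real program tracks many contaminants and several device types):

import numpy as np

V = 100.0                 # cabin free volume, m^3 (assumed)
G = 0.5e-3                # contaminant generation rate, mg/s (assumed)
Q, eta = 0.02, 0.90       # device flow, m^3/s, and single-pass removal efficiency (assumed)

dt = 1.0
t = np.arange(0.0, 50000.0, dt)
C = np.zeros_like(t)      # cabin concentration, mg/m^3
for k in range(1, len(t)):
    dCdt = G / V - (Q * eta / V) * C[k - 1]   # generation minus device removal
    C[k] = C[k - 1] + dt * dCdt               # explicit Euler step

print(C[-1], G / (Q * eta))   # approaches the steady state G/(Q*eta) ~ 0.028 mg/m^3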
New developments in water efficiency
NASA Astrophysics Data System (ADS)
Gregg, Tony T.; Dewees, Amanda; Gross, Drema; Hoffman, Bill; Strub, Dan; Watson, Matt
2006-10-01
An overview of significant new developments in water efficiency is presented in this paper. The areas covered will be legislative, regulatory, new programs or program wrinkles, new products, and new studies on the effectiveness of conservation programs. Examples include state and local level efficiency regulations in Texas; the final results of the national submetering study for apartments in the US; the US effort to adopt the IWA protocols for leak detection; new water efficient commercial products such as ET irrigation controllers, new models of efficient clothes washers, and innovative toilet designs.
The NASA Aircraft Energy Efficiency Program
NASA Technical Reports Server (NTRS)
Klineberg, J. M.
1978-01-01
The objective of the NASA Aircraft Energy Efficiency Program is to accelerate the development of advanced technology for more energy-efficient subsonic transport aircraft. This program will have application to current transport derivatives in the early 1980s and to all-new aircraft of the late 1980s and early 1990s. Six major technology projects were defined that could result in fuel savings in commercial aircraft: (1) Engine Component Improvement, (2) Energy Efficient Engine, (3) Advanced Turboprops, (4) Energy Efficiency Transport (aerodynamically speaking), (5) Laminar Flow Control, and (6) Composite Primary Structures.
Hydration Free Energy from Orthogonal Space Random Walk and Polarizable Force Field.
Abella, Jayvee R; Cheng, Sara Y; Wang, Qiantao; Yang, Wei; Ren, Pengyu
2014-07-08
The orthogonal space random walk (OSRW) method has shown enhanced sampling efficiency in free energy calculations in previous studies. In this study, the implementation of OSRW with the polarizable AMOEBA force field in the TINKER molecular modeling software package is discussed and subsequently applied to the hydration free energy calculation of 20 small organic molecules, among which 15 are positively charged and five are neutral. The calculated hydration free energies of these molecules are compared with the results obtained from the Bennett acceptance ratio (BAR) method using the same force field, and overall an excellent agreement is obtained. The convergence and the efficiency of OSRW are also discussed and compared with those of BAR. Combining enhanced sampling techniques such as OSRW with polarizable force fields is very promising for achieving both accuracy and efficiency in general free energy calculations.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-05-30
...-AC46 Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating Methods: Public Meeting AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy... regulations authorizing the use of alternative methods of determining energy efficiency or energy consumption...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-07
... DEPARTMENT OF ENERGY Office of Energy Efficiency and Renewable Energy [Docket Number EERE-BT-PET-0024] Energy Efficiency Program for Consumer Products: Commonwealth of Massachusetts Petition for Exemption From Federal Preemption of Massachusetts' Energy Efficiency Standard for Residential Non...
Project W-320, 241-C-106 sluicing HVAC calculations, Volume 1
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, J.W.
1998-08-07
This supporting document has been prepared to make the FDNW calculations for Project W-320, readily retrievable. The report contains the following calculations: Exhaust airflow sizing for Tank 241-C-106; Equipment sizing and selection recirculation fan; Sizing high efficiency mist eliminator; Sizing electric heating coil; Equipment sizing and selection of recirculation condenser; Chiller skid system sizing and selection; High efficiency metal filter shielding input and flushing frequency; and Exhaust skid stack sizing and fan sizing.
MCPB.py: A Python Based Metal Center Parameter Builder.
Li, Pengfei; Merz, Kenneth M
2016-04-25
MCPB.py, a Python-based metal center parameter builder, has been developed to build force fields for the simulation of metal complexes employing the bonded model approach. It has an optimized code structure, with far fewer required steps than the previously developed MCPB program. It supports various AMBER force fields and more than 80 metal ions. A series of parametrization schemes to derive force constants and charge parameters are available within the program. We give two examples (one metalloprotein and one organometallic compound), indicating the program's ability to build reliable force fields for different metal-ion-containing complexes. The original version was released with AmberTools15. It is provided under the GNU General Public License v3.0 (GNU_GPL_v3) and is free to download and distribute. MCPB.py provides a bridge between quantum mechanical calculations and molecular dynamics simulation software packages, thereby enabling the modeling of metal ion centers. It offers an entry into simulating metal ions in a number of situations by providing an efficient way for researchers to handle the vagaries and difficulties associated with metal ion modeling.
Optimal Energy Consumption Analysis of Natural Gas Pipeline
Liu, Enbin; Li, Changjun; Yang, Yi
2014-01-01
There are many compressor stations along long-distance natural gas pipelines. Natural gas can be transported using different boot programs and import pressures, combined with temperature control parameters, and different transport methods have correspondingly different energy consumptions. At present, the operating parameters of many pipelines are determined empirically by dispatchers, resulting in high energy consumption, a practice at odds with energy-reduction policies. Therefore, based on a full understanding of the actual needs of pipeline companies, we introduce production unit consumption indicators to establish an objective function for achieving the goal of lowering energy consumption. By using a dynamic programming method to solve the model and preparing calculation software, we ensure that the solution process is quick and efficient. Using the established optimization methods, we analyzed the energy savings for the XQ gas pipeline. By optimizing the boot program, the import station pressure, and the temperature parameters, we achieved the optimal energy consumption. By comparison with the measured energy consumption, the pipeline has the potential to reduce energy consumption by 11 to 16 percent. PMID:24955410
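A toy backward-recursion sketch of the dynamic programming idea used above; all cost and friction models here are stand-ins, while the paper's model optimizes boot programs, import pressures and temperatures against a production unit consumption objective:

import numpy as np

P = np.linspace(5.0, 9.0, 17)          # discrete suction/discharge pressure grid, MPa (assumed)
N = 4                                  # number of compressor stations (assumed)

def seg_drop(p_out):                   # friction loss to the next station (stand-in model)
    return 0.8 + 0.08 * (p_out - 5.0)

def comp_energy(p_in, p_out):          # compression energy (stand-in convex cost)
    return (p_out - p_in) ** 1.8

J = np.zeros(len(P))                   # cost-to-go beyond the last station
for s in range(N - 1, -1, -1):         # backward over stations
    Jn = np.full(len(P), np.inf)
    for i, p_in in enumerate(P):
        for p_out in P[P >= p_in]:     # decisions: feasible discharge pressures
            nxt = int(np.argmin(np.abs(P - (p_out - seg_drop(p_out)))))  # snap to grid
            c = comp_energy(p_in, p_out) + J[nxt]
            if c < Jn[i]:
                Jn[i] = c
    J = Jn
print(J.min())                         # minimum total energy over inlet states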
Coupling Network Computing Applications in Air-cooled Turbine Blades Optimization
NASA Astrophysics Data System (ADS)
Shi, Liang; Yan, Peigang; Xie, Ming; Han, Wanjin
2018-05-01
By establishing control parameters from the blade exterior to the interior, a parametric design of air-cooled turbine blades based on the airfoil has been implemented. On the basis of rapidly updated structural features and solid model generation, a complex cooling system has been created. The different flow units are modeled as a complex network topology with parallel and serial connections. Applying one-dimensional flow theory, programs were written to obtain the pipeline network physical quantities along each flow path, including flow rate, pressure, temperature and other parameters. These internal unit parameters are set, by interpolation, as inner boundary conditions for the external flow field calculation program HIT-3D, thus achieving a full-field thermally coupled simulation. Studies in the literature were used to verify the effectiveness of the pipeline network program and the coupling algorithm. Subsequently, starting from a modified design and with the help of iSIGHT-FD, an optimization platform was established. Through the MIGA mechanism, the goal of enhancing cooling efficiency was reached, and the thermal stress was effectively reduced. The research work in this paper is significant for the rapid deployment of cooling structure designs.
NASA Technical Reports Server (NTRS)
1979-01-01
One of the most comprehensive and most effective programs is NECAP, an acronym for NASA Energy Cost Analysis Program. Developed by Langley Research Center, NECAP operates according to heating/cooling calculation procedures formulated by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE). The program enables examination of a multitude of influences on heat flow into and out of buildings. For example, NECAP considers traditional weather patterns for a given locale and predicts the effects on a particular building design of sun, rain, wind, even shadows from other buildings. It takes into account the mass of structural materials, insulating values, the type of equipment the building will house, equipment operating schedules, heat generated by people and machinery, heat loss or gain through windows and other openings, and a variety of additional details. NECAP ascertains how much energy the building should ideally require, aids selection of the most economical and most efficient energy systems, and suggests design and operational measures for reducing the building's energy needs. Most importantly, NECAP determines cost effectiveness: whether an energy-saving measure will pay back its installation cost through monetary savings in energy bills.
Vlaisavljevich, Bess; Shiozaki, Toru
2016-08-09
We report the development of the theory and computer program for analytical nuclear energy gradients for (extended) multistate complete active space perturbation theory (CASPT2) with full internal contraction. The vertical shifts are also considered in this work. This is an extension of the fully internally contracted CASPT2 nuclear gradient program recently developed for a state-specific variant by us [MacLeod and Shiozaki, J. Chem. Phys. 2015, 142, 051103]; in this extension, the so-called λ equation is solved to account for the variation of the multistate CASPT2 energies with respect to the change in the amplitudes obtained in the preceding state-specific CASPT2 calculations, and the Z vector equations are modified accordingly. The program is parallelized using the MPI3 remote memory access protocol that allows us to perform efficient one-sided communication. The optimized geometries of the ground and excited states of a copper corrole and benzophenone are presented as numerical examples. The code is publicly available under the GNU General Public License.
IRFK2D: a computer program for simulating intrinsic random functions of order k
NASA Astrophysics Data System (ADS)
Pardo-Igúzquiza, Eulogio; Dowd, Peter A.
2003-07-01
IRFK2D is an ANSI Fortran-77 program that generates realizations of an intrinsic random function of order k (with k equal to 0, 1 or 2) with a permissible polynomial generalized covariance model. The realizations may be non-conditional or conditioned to the experimental data. The turning bands method is used to generate realizations in 2D and 3D from simulations of an intrinsic random function of order k along lines that span the 2D or 3D space. The program generates two output files, the first containing the simulated values and the second containing the theoretical generalized variogram for different directions together with the theoretical model. The experimental variogram is calculated from the simulated values while the theoretical variogram is the specified generalized covariance model. The generalized variogram is used to assess the quality of the simulation as measured by the extent to which the generalized covariance is reproduced by the simulation. The examples given in this paper indicate that IRFK2D is an efficient implementation of the methodology.
Kim, Myoung Soo; Park, Jung Ha; Park, Kyung Yeon
2012-10-01
This study was done to develop and evaluate a drug dosage calculation training program based on a smartphone application and cognitive load theory. Calculation ability, dosage-calculation-related self-efficacy, and anxiety were measured. A nonequivalent control group design was used. A smartphone application and a handout for self-study were developed and administered to the experimental group, while only the handout was provided to the control group. The intervention period was 4 weeks. Data were analyzed using descriptive analysis, χ²-test, t-test, and ANCOVA with SPSS 18.0. The experimental group showed greater 'self-efficacy for drug dosage calculation' than the control group (t=3.82, p<.001). Experimental group students had higher ability to perform drug dosage calculations than control group students (t=3.98, p<.001), with regard to 'metric conversion' (t=2.25, p=.027), 'table dosage calculation' (t=2.20, p=.031) and 'drop rate calculation' (t=4.60, p<.001). There was no difference in improvement in 'anxiety for drug dosage calculation'. The mean satisfaction score for the program was 86.1. These results indicate that this drug dosage calculation training program using a smartphone application is effective in improving dosage-calculation-related self-efficacy and calculation ability. Further study should be done to develop additional interventions for reducing anxiety.
Numerical Simulation of Measurements during the Reactor Physical Startup at Unit 3 of Rostov NPP
NASA Astrophysics Data System (ADS)
Tereshonok, V. A.; Kryakvin, L. V.; Pitilimov, V. A.; Karpov, S. A.; Kulikov, V. I.; Zhylmaganbetov, N. M.; Kavun, O. Yu.; Popykin, A. I.; Shevchenko, R. A.; Shevchenko, S. A.; Semenova, T. V.
2017-12-01
The results of numerical calculations and measurements of some reactor parameters during the physical startup tests at unit 3 of Rostov NPP are presented. The following parameters are considered: the critical boric acid concentration and the currents from ionization chambers (IC) during the scram system efficiency evaluation. The scram system efficiency was determined using the inverse point kinetics equation with the measured and simulated IC currents. The results of steady-state calculations of relative power distribution and efficiency of the scram system and separate groups of control rods of the control and protection system are also presented. The calculations are performed using several codes, including precision ones.
Calculating the Flow Field in a Radial Turbine Scroll
NASA Technical Reports Server (NTRS)
Baskharone, E.; Abdallah, S.; Hamed, A.; Tabaoff, W.
1983-01-01
Set of two computer programs calculates flow field in radial turbine scroll. Programs represent improvement in analyzing flow in radial turbine scrolls and provide designer with tools for designing better scrolls. Programs written in FORTRAN IV.
CROSSER - CUMULATIVE BINOMIAL PROGRAMS
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative binomial program, CROSSER, is one of a set of three programs which calculate cumulative binomial probability distributions for arbitrary inputs. The three programs, CROSSER, CUMBIN (NPO-17555), and NEWTONP (NPO-17556), can be used independently of one another. CROSSER can be used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. The program has been used for reliability/availability calculations. CROSSER calculates the point at which the reliability of a k-out-of-n system equals the common reliability of the n components. It is designed to work well with all integer values 0 < k <= n. To run the program, the user simply runs the executable version and inputs the information requested by the program. The program is not designed to weed out incorrect inputs, so the user must take care to make sure the inputs are correct. Once all input has been entered, the program calculates and lists the result. It also lists the number of iterations of Newton's method required to calculate the answer within the given error. The CROSSER program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly with most C compilers. The program format is interactive. It has been implemented under DOS 3.2 and has a memory requirement of 26K. CROSSER was developed in 1988.
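The quantity CROSSER computes is the fixed point where the k-out-of-n system reliability R(p) = sum over i = k..n of C(n,i) p^i (1-p)^(n-i) crosses the common component reliability p. CROSSER itself uses Newton's method; the sketch below, a hedged illustration rather than the program's algorithm, uses bisection for simplicity:

from math import comb

def system_rel(p, k, n):
    # reliability of a k-out-of-n system of components with reliability p
    return sum(comb(n, i) * p**i * (1.0 - p)**(n - i) for i in range(k, n + 1))

def crossover(k, n, tol=1e-12):
    # p in (0, 1) where system reliability equals component reliability;
    # an interior crossing exists for 1 < k < n
    lo, hi = 1e-9, 1.0 - 1e-9
    f = lambda p: system_rel(p, k, n) - p
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(crossover(2, 3))   # 2-out-of-3: R(p) = 3p^2 - 2p^3 = p at p = 0.5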
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prevatte, Scott A.
2006-03-01
In the fall of 2004, as one part of a Basin-Wide Monitoring Program developed by the Upper Columbia Regional Technical Team and Upper Columbia Salmon Recovery Board, the Yakama Nation Fisheries Resource Management program began monitoring downstream migration of ESA listed Upper Columbia River spring chinook salmon and Upper Columbia River steelhead in Nason Creek, a tributary to the Wenatchee River. This report summarizes juvenile spring chinook salmon and steelhead trout migration data collected in Nason Creek during 2005 and also incorporates data from 2004. We used species enumeration at the trap and efficiency trials to describe emigration timing and to estimate population size. Data collection was divided into spring/early summer and fall periods with a break during the summer months occurring due to low stream flow. Trapping began on March 1st and was suspended on July 29th when stream flow dropped below the minimum (30 cfs) required to rotate the trap cone. The fall period began on September 28th with increased stream flow and ended on November 23rd when snow and ice began to accumulate on the trap. During the spring and early summer we collected 311 yearling (2003 brood) spring chinook salmon, 86 wild steelhead smolts and 453 steelhead parr. Spring chinook (2004 brood) outgrew the fry stage of fork length < 60 mm during June and July; 224 were collected at the trap. Mark-recapture trap efficiency trials were performed over a range of stream discharge stages whenever ample numbers of fish were being collected. A total of 247 spring chinook yearlings, 54 steelhead smolts, and 178 steelhead parr were used during efficiency trials. A statistically significant relationship between stream discharge and trap efficiency has not been identified in Nason Creek, therefore a pooled trap efficiency was used to estimate the population size of both spring chinook (14.98%) and steelhead smolts (12.96%). We estimate that 2,076 (± 119 95%CI) yearling spring chinook and 688 (± 140 95%CI) steelhead smolts emigrated past the trap during the spring/early summer sample period along with 10,721 (± 1,220 95%CI) steelhead parr. During the fall we collected 924 subyearling (2004 brood) spring chinook salmon and 1,008 steelhead parr of various size and age classes. A total of 732 spring chinook subyearlings and 602 steelhead parr were used during 13 mark-recapture trap efficiency trials. A pooled trap efficiency of 24.59% was used to calculate the emigration of spring chinook and 17.11% was used for steelhead parr during the period from September 28th through November 23rd. We estimate that 3,758 (± 92 95%CI) subyearling spring chinook and 5,666 (± 414 95%CI) steelhead parr migrated downstream past the trap along with 516 (± 42 95%CI) larger steelhead pre-smolts during the 2005 fall sample period.
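The population expansion used in such reports is N = catch / e, with pooled trap efficiency e = recaptures / marked releases. A sketch reproducing the spring yearling chinook point estimate above; the recapture count is inferred from the reported 14.98% over 247 marked fish, and the interval shown is a generic delta-method approximation, not the report's CI procedure:

import numpy as np

def pooled_trap_estimate(marked, recaptured, catch, z=1.96):
    # e = pooled trap efficiency; N = catch / e (simple expansion)
    e = recaptured / marked
    N = catch / e
    se_e = np.sqrt(e * (1.0 - e) / marked)   # binomial SE of the efficiency
    half = z * catch * se_e / e**2           # delta-method half-width on N
    return N, half

# 311 yearlings caught; ~37 recaptures of 247 marked gives e ~ 14.98%
N, half = pooled_trap_estimate(marked=247, recaptured=37, catch=311)
print(N, half)   # N ~ 2076, matching the report's point estimate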
Relationship between efficiency and predictability in stock price change
NASA Astrophysics Data System (ADS)
Eom, Cheoljun; Oh, Gabjin; Jung, Woo-Sung
2008-09-01
In this study, we evaluate the relationship between efficiency and predictability in the stock market. The efficiency, which is the issue addressed by the weak-form efficient market hypothesis, is calculated using the Hurst exponent and the approximate entropy (ApEn). The predictability corresponds to the hit-rate; this is the rate of consistency between the direction of the actual price change and that of the predicted price change, as calculated via the nearest neighbor prediction method. We determine that the Hurst exponent and the ApEn value are negatively correlated. However, predictability is positively correlated with the Hurst exponent.
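A compact rescaled-range (R/S) estimator of the Hurst exponent, the first of the efficiency measures named above (an illustrative sketch, not the authors' code; values near 0.5 indicate an uncorrelated, weak-form-efficient series, values near 1 indicate persistence):

import numpy as np

def hurst_rs(x, min_chunk=8):
    # slope of log(R/S) vs log(window size) over dyadic window sizes
    x = np.asarray(x, dtype=float)
    sizes, rs = [], []
    n = min_chunk
    while n <= len(x) // 2:
        vals = []
        for c in range(len(x) // n):
            seg = x[c*n:(c+1)*n]
            dev = np.cumsum(seg - seg.mean())    # cumulative deviation from the mean
            r, s = dev.max() - dev.min(), seg.std()
            if s > 0:
                vals.append(r / s)
        sizes.append(n); rs.append(np.mean(vals))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(0)
print(hurst_rs(rng.standard_normal(4096)))   # roughly 0.5 for an uncorrelated series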
A computer program for the calculation of laminar and turbulent boundary layer flows
NASA Technical Reports Server (NTRS)
Dwyer, H. A.; Doss, E. D.; Goldman, A. L.
1972-01-01
The results are presented of a study to produce a computer program to calculate laminar and turbulent boundary layer flows. The program is capable of calculating the following types of flow: (1) incompressible or compressible, (2) two dimensional or axisymmetric, and (3) flows with significant transverse curvature. Also, the program can handle a large variety of boundary conditions, such as blowing or suction, arbitrary temperature distributions and arbitrary wall heat fluxes. The program has been specialized to the calculation of equilibrium air flows and all of the thermodynamic and transport properties used are for air. For the turbulent transport properties, the eddy viscosity approach has been used. Although the eddy viscosity models are semi-empirical, the model employed in the program has corrections for pressure gradients, suction and blowing, and compressibility. The basic method of approach is to put the equations of motion into a finite difference form and then solve them by use of a digital computer. The program is written in FORTRAN IV and requires small amounts of computer time on most scientific machines. For example, most laminar flows can be calculated in less than one minute of machine time, while turbulent flows usually require three or four minutes.
Pinior, Beate; Firth, Clair L; Richter, Veronika; Lebl, Karin; Trauffler, Martine; Dzieciol, Monika; Hutter, Sabine E; Burgstaller, Johann; Obritzhauser, Walter; Winter, Petra; Käsbohrer, Annemarie
2017-02-01
Infection with bovine viral diarrhea virus (BVDV) results in major economic losses either directly through decreased productive performance in cattle herds or indirectly, such as through expenses for control programs. The aim of this systematic review was to review financial and/or economic assessment studies of prevention and/or mitigation activities of BVDV at national, regional and farm level worldwide. Once all predefined criteria had been met, 35 articles were included for this systematic review. Studies were analyzed with particular focus on the type of financially and/or economically-assessed prevention and/or mitigation activities. Due to the wide range of possible prevention and/or mitigation activities, these activities were grouped into five categories: i) control and/or eradication programs, ii) monitoring or surveillance, iii) prevention, iv) vaccination and v) individual culling, control and testing strategies. Additionally, the studies were analyzed according to economically-related variables such as efficiency, costs or benefits of prevention and/or mitigation activities, the applied financial and/or economic and statistical methods, the payers of prevention and/or mitigation activities, the assessed production systems, and the countries for which such evaluations are available. Financial and/or economic assessments performed in Europe were dominated by those from the United Kingdom, which assessed mostly vaccination strategies, and Norway which primarily carried out assessments in the area of control and eradication programs; whereas among non-European countries the United States carried out the majority of financial and/or economic assessments in the area of individual culling, control and testing. More than half of all studies provided an efficiency calculation of prevention and/or mitigation activities and demonstrated whether the inherent costs of implemented activities were or were not justified. The dairy sector was three times more likely to be assessed by the countries than beef production systems. In addition, the dairy sector was approximately eight times more likely to be assessed economically with respect to prevention and/or mitigation activities than calf and youngstock production systems. Furthermore, the private sector was identified as the primary payer of prevention and/or mitigation activities. This systematic review demonstrated a lack of studies relating to efficiency calculations, in particular at national and regional level, and the specific production systems. Thus, we confirmed the need for more well-designed studies in animal health economics in order to demonstrate that the implementation and inherent costs of BVDV prevention and/or mitigation activities are justified. Copyright © 2016 The Author(s). Published by Elsevier B.V. All rights reserved.
Analytical scheme calculations of angular momentum coupling and recoupling coefficients
NASA Astrophysics Data System (ADS)
Deveikis, A.; Kuznecovas, A.
2007-03-01
We investigate the capabilities of the Scheme programming language for analytically calculating the Clebsch-Gordan coefficients, Wigner 6j and 9j symbols, and general recoupling coefficients used in the quantum theory of angular momentum. The considered coefficients are calculated by direct evaluation of the sum formulas. The calculation results for large values of quantum angular momenta were compared with analogous calculations in the FORTRAN and Java programming languages.
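The "direct evaluation of the sum formulas" can be mirrored in Python with exact rational arithmetic; below is a sketch of Racah's sum formula for Clebsch-Gordan coefficients, restricted to integer angular momenta for brevity (the Scheme code discussed above covers the general case symbolically):

from fractions import Fraction
from math import factorial, sqrt

def clebsch_gordan(j1, m1, j2, m2, J, M):
    # <j1 m1; j2 m2 | J M> via Racah's sum formula, integer momenta only
    if m1 + m2 != M or not (abs(j1 - j2) <= J <= j1 + j2):
        return 0.0
    f = factorial
    pref = Fraction((2*J + 1) * f(j1 + j2 - J) * f(j1 - j2 + J) * f(-j1 + j2 + J),
                    f(j1 + j2 + J + 1))
    pref *= f(J + M) * f(J - M) * f(j1 - m1) * f(j1 + m1) * f(j2 - m2) * f(j2 + m2)
    total, k = Fraction(0), 0
    while True:
        args = (k, j1 + j2 - J - k, j1 - m1 - k, j2 + m2 - k,
                J - j2 + m1 + k, J - j1 - m2 + k)
        if min(args[1], args[2], args[3]) < 0:   # decreasing arguments exhausted
            break
        if min(args) >= 0:                       # skip k with a negative factorial argument
            term = Fraction((-1)**k)
            for a in args:
                term /= f(a)
            total += term
        k += 1
    return float(total) * sqrt(float(pref))

print(clebsch_gordan(1, 0, 1, 0, 2, 0))   # sqrt(2/3) ~ 0.816497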
LATTICE ... a beam transport program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Staples, J.
1987-06-01
LATTICE is a computer program that calculates the first order characteristics of synchrotrons and beam transport systems. The program uses matrix algebra to calculate the propagation of the betatron (Twiss) parameters along a beam line. The program draws on ideas from several older programs, notably Transport and Synch, adds many new ones and incorporates them into an interactive, user-friendly program. LATTICE will calculate the matched functions of a synchrotron lattice and display them in a number of ways, including a high resolution Tektronix graphics display. An optimizer is included to adjust selected element parameters so the beam meets a set of constraints. LATTICE is a first order program, but the effect of sextupoles on the chromaticity of a synchrotron lattice is included, and the optimizer will set the sextupole strengths for zero chromaticity. The program will also calculate the characteristics of beam transport systems. In this mode, the beam parameters, defined at the start of the transport line, are propagated through to the end. LATTICE has two distinct modes: the lattice mode which finds the matched functions of a synchrotron, and the transport mode which propagates a predefined beam through a beam line. However, each mode can be used for either type of problem: the transport mode may be used to calculate an insertion for a synchrotron lattice, and the lattice mode may be used to calculate the characteristics of a long periodic beam transport system.
Shwiff, Stephanie A; Kirkpatrick, Katy N; Sterner, Ray T
2008-12-01
To conduct a benefit-cost analysis of the results of the domestic dog and coyote (DDC) oral rabies vaccine (ORV) program in Texas from 1995 through 2006 by use of fiscal records and relevant public health data. Retrospective benefit-cost analysis. Procedures: Pertinent economic data were collected in 20 counties of south Texas affected by a DDC-variant rabies epizootic. The costs and benefits afforded by a DDC ORV program were then calculated. Costs were the total expenditures of the ORV program. Benefits were the savings associated with the number of potentially prevented human postexposure prophylaxis (PEP) treatments and animal rabies tests for the DDC-variant rabies virus in the epizootic area and an area of potential disease expansion. Total estimated benefits of the program ranged from approximately $89 million to $346 million, with total program costs of $26,358,221 for the study period. The estimated savings (ie, damages avoided) from extrapolated numbers of PEP treatments and animal rabies tests yielded benefit-cost ratios that ranged from 3.38 to 13.12 for various frequencies of PEP and animal testing. In Texas, the use of ORV stopped the northward spread and led to the progressive elimination of the DDC variant of rabies in coyotes (Canis latrans). The decision to implement an ORV program was cost-efficient, although many unknowns were involved in the original decision, and key economic variables were identified for consideration in future planning of ORV programs.
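The reported ratio range follows directly from the program totals above; the small difference at the upper end comes from the rounded benefit figures:

costs = 26_358_221.0                   # total ORV program costs, 1995-2006
for benefits in (89e6, 346e6):         # reported range, 'approximately $89M to $346M'
    print(round(benefits / costs, 2))  # -> 3.38 and 13.13 (report: 3.38 to 13.12)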
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-04
... Efficiency and Conservation Block Grant Program Status Report AGENCY: U.S. Department of Energy. ACTION... . Additional information and reporting guidance concerning the Energy Efficiency and Conservation Block Grant... Title: ``Energy Efficiency and Conservation Block Grant (EECBG) Program Status Report''; (3) Type of...
CalcHEP 3.4 for collider physics within and beyond the Standard Model
NASA Astrophysics Data System (ADS)
Belyaev, Alexander; Christensen, Neil D.; Pukhov, Alexander
2013-07-01
We present version 3.4 of the CalcHEP software package which is designed for effective evaluation and simulation of high energy physics collider processes at parton level. The main features of CalcHEP are the computation of Feynman diagrams, integration over multi-particle phase space and event simulation at parton level. The principal attractive key-points along these lines are that it has: (a) an easy startup and usage even for those who are not familiar with CalcHEP and programming; (b) a friendly and convenient graphical user interface (GUI); (c) the option for the user to easily modify a model or introduce a new model by either using the graphical interface or by using an external package with the possibility of cross checking the results in different gauges; (d) a batch interface which allows one to perform very complicated and tedious calculations connecting production and decay modes for processes with many particles in the final state. With this feature set, CalcHEP can efficiently perform calculations with a high level of automation from a theory in the form of a Lagrangian down to phenomenology in the form of cross sections, parton level event simulation and various kinematical distributions. In this paper we report on the new features of CalcHEP 3.4 which improve the power of our package as an effective tool for the study of modern collider phenomenology.
Program summary
Program title: CalcHEP
Catalogue identifier: AEOV_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEOV_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 78535
No. of bytes in distributed program, including test data, etc.: 818061
Distribution format: tar.gz
Programming language: C
Computer: PC, MAC, Unix workstations
Operating system: Unix
RAM: Depends on process under study
Classification: 4.4, 5
External routines: X11
Nature of problem: Implement new models of particle interactions. Generate Feynman diagrams for a physical process in any implemented theoretical model. Integrate phase space for Feynman diagrams to obtain cross sections or particle widths taking into account kinematical cuts. Simulate collisions at modern colliders and generate respective unweighted events. Mix events for different subprocesses and connect them with the decays of unstable particles.
Solution method: Symbolic calculations. Squared Feynman diagram approach. Vegas Monte Carlo algorithm.
Restrictions: Up to 2→4 production (1→5 decay) processes are realistic on typical computers. Higher multiplicities are sometimes possible for specific 2→5 and 2→6 processes.
Unusual features: Graphical user interface, symbolic algebra calculation of the squared matrix element, parallelization on a PBS cluster.
Running time: Depends strongly on the process. For a typical 2→2 process it takes seconds. For 2→3 processes the typical running time is of the order of minutes. For higher multiplicities it could take much longer.
wannier90: A tool for obtaining maximally-localised Wannier functions
NASA Astrophysics Data System (ADS)
Mostofi, Arash A.; Yates, Jonathan R.; Lee, Young-Su; Souza, Ivo; Vanderbilt, David; Marzari, Nicola
2008-05-01
We present wannier90, a program for calculating maximally-localised Wannier functions (MLWF) from a set of Bloch energy bands that may or may not be attached to or mixed with other bands. The formalism works by minimising the total spread of the MLWF in real space. This is done in the space of unitary matrices that describe rotations of the Bloch bands at each k-point. As a result, wannier90 is independent of the basis set used in the underlying calculation to obtain the Bloch states, and may therefore be interfaced straightforwardly to any electronic structure code. The locality of MLWF can be exploited to compute band structures, densities of states and Fermi surfaces at modest computational cost. Furthermore, wannier90 is able to output MLWF for visualisation and other post-processing purposes. Wannier functions are already used in a wide variety of applications, including the analysis of chemical bonding in real space; the calculation of dielectric properties via the modern theory of polarisation; and as an accurate and minimal basis set in the construction of model Hamiltonians for large-scale systems, in linear-scaling quantum Monte Carlo calculations, and for the efficient computation of material properties such as the anomalous Hall coefficient. wannier90 is freely available under the GNU General Public License from http://www.wannier.org/.
Program summary
Program title: wannier90
Catalogue identifier: AEAK_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAK_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 556 495
No. of bytes in distributed program, including test data, etc.: 5 709 419
Distribution format: tar.gz
Programming language: Fortran 90, perl
Computer: any architecture with a Fortran 90 compiler
Operating system: Linux, Windows, Solaris, AIX, Tru64 Unix, OSX
RAM: 10 MB
Word size: 32 or 64
Classification: 7.3
External routines: BLAS (http://www.netlib.org/blas) and LAPACK (http://www.netlib.org/lapack); both available under open-source licenses.
Nature of problem: Obtaining maximally-localised Wannier functions from a set of Bloch energy bands that may or may not be entangled.
Solution method: In the case of entangled bands, the optimally-connected subspace of interest is determined by minimising a functional which measures the subspace dispersion across the Brillouin zone. The maximally-localised Wannier functions within this subspace are obtained by subsequent minimisation of a functional that represents the total spread of the Wannier functions in real space. For the case of isolated energy bands, only the second step of the procedure is required.
Unusual features: Simple and user-friendly input system. Wannier functions and interpolated band structure output in a variety of file formats for visualisation.
Running time: Test cases take 1 minute.
References:
N. Marzari, D. Vanderbilt, Maximally localized generalized Wannier functions for composite energy bands, Phys. Rev. B 56 (1997) 12847.
I. Souza, N. Marzari, D. Vanderbilt, Maximally localized Wannier functions for entangled energy bands, Phys. Rev. B 65 (2001) 035109.
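The "modest computational cost" the abstract mentions rests on Wannier (Fourier) interpolation: in a localised Wannier basis the Hamiltonian H(R) is short-ranged in the lattice vector R, so bands sampled on a coarse k-grid can be rebuilt at any k-point. The toy below is a generic illustration of that idea, not wannier90's internals; the 1D two-band tight-binding model is an invented example.

```python
import numpy as np

# Wannier-style Fourier interpolation for a toy 1D two-band model:
#   H(R)  = (1/N) * sum_k exp(-i k R) H(k)    (short-ranged in R)
#   H(k') =         sum_R exp(+i k' R) H(R)   (evaluated on any fine grid)

def h_k(k):
    """Toy 2x2 Bloch Hamiltonian with nearest-neighbour hoppings (assumed)."""
    return np.array([[-2.0 * np.cos(k),       0.5 * np.exp(1j * k)],
                     [0.5 * np.exp(-1j * k),  2.0 * np.cos(k) + 1.0]])

n = 8                                          # coarse k-grid size
ks = 2.0 * np.pi * np.arange(n) / n
h_coarse = np.array([h_k(k) for k in ks])      # shape (n, 2, 2)

# Real-space matrix elements H(R); the minimum-image mapping puts hoppings on
# small |R| (grid labels 5, 6, 7 become -3, -2, -1) so off-grid k are exact.
Rs = np.where(np.arange(n) > n // 2, np.arange(n) - n, np.arange(n))
h_real = np.einsum('rk,kij->rij',
                   np.exp(-1j * np.outer(np.arange(n), ks)), h_coarse) / n

# Interpolate onto a much finer grid and diagonalise.
k_fine = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
h_fine = np.einsum('kr,rij->kij', np.exp(1j * np.outer(k_fine, Rs)), h_real)
bands = np.linalg.eigvalsh(h_fine)             # (200, 2) interpolated bands

# Sanity check: interpolation reproduces direct diagonalisation off the grid.
assert np.allclose(bands[37], np.linalg.eigvalsh(h_k(k_fine[37])), atol=1e-10)
print("bands at k =", round(float(k_fine[37]), 3), "->", bands[37])
```

Because the model's hoppings all live within |R| <= 1, an 8-point grid already interpolates exactly; real materials need a grid dense enough that H(R) has decayed by the cell boundary.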
NASA Technical Reports Server (NTRS)
Davidson, J.; Ottey, H. R.; Sawitz, P.; Zusman, F. S.
1985-01-01
The underlying engineering and mathematical models, as well as the computational methods used by the Spectrum Orbit Utilization Program 5 (SOUP5) analysis programs, are described, including the algorithms used to calculate the technical parameters and references to the technical literature. The organization, capabilities, processing sequences, and processing and data options of the SOUP5 system are described, and the details of the geometric calculations are given. Also discussed are the various antenna gain algorithms; rain attenuation and depolarization calculations; calculation of transmitter power and received power flux density; channelization options, interference categories, and protection ratio calculation; generation of aggregate interference and margins; equivalent gain calculations; and how to enter a protection ratio template.
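One of the quantities the abstract lists, received power flux density, follows from a standard link-budget relation rather than anything SOUP5-specific: the flux density at distance d from a transmitter with EIRP = Pt * Gt is EIRP / (4 pi d^2). The sketch below is not SOUP5 code; all parameter values are hypothetical, chosen to resemble a geostationary Ku-band downlink.

```python
import math

# Standard link-budget relations (illustrative values, not SOUP5 code):
#   EIRP = P_t * G_t
#   PFD  = EIRP / (4 * pi * d^2)                power flux density, W/m^2
#   P_r  = PFD * A_eff,   A_eff = G_r * lambda^2 / (4 * pi)

def db(x):
    return 10.0 * math.log10(x)

p_tx_w = 100.0                 # transmitter power, W (assumed)
g_tx = 10.0**(30.0 / 10.0)     # 30 dBi transmit antenna gain (assumed)
g_rx = 10.0**(40.0 / 10.0)     # 40 dBi receive antenna gain (assumed)
freq_hz = 12.0e9               # Ku-band downlink frequency (assumed)
d_m = 35_786e3                 # geostationary altitude, m

lam = 299_792_458.0 / freq_hz              # wavelength, m
eirp = p_tx_w * g_tx
pfd = eirp / (4.0 * math.pi * d_m**2)      # flux density at the receiver
a_eff = g_rx * lam**2 / (4.0 * math.pi)    # receive aperture effective area
p_rx = pfd * a_eff

print(f"EIRP: {db(eirp):.1f} dBW")
print(f"PFD:  {db(pfd):.1f} dBW/m^2")
print(f"Prx:  {db(p_rx):.1f} dBW ({db(p_rx) + 30:.1f} dBm)")
```

Interference analyses of the kind SOUP5 performs compare such wanted-signal powers against the aggregate of the same calculation repeated over every interfering transmitter.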
TLD efficiency calculations for heavy ions: an analytical approach
Boscolo, Daria; Scifoni, Emanuele; Carlino, Antonio; ...
2015-12-18
The use of thermoluminescent dosimeters (TLDs) in heavy-charged-particle dosimetry is limited by their non-linear dose response curve and by the dependence of their response on radiation quality. Thus, in order to use TLDs with particle beams, a model that can reproduce the behavior of these detectors under different conditions is needed. Here a new, simple and completely analytical algorithm for the calculation of the relative TL efficiency as a function of the ion charge Z and energy E is presented. In addition, the detector response is evaluated starting from the single-ion case, where the computed effectiveness values have been compared with experimental data as well as with predictions from a different method. The main advantage of this approach is that, being fully analytical, it is computationally fast and can be efficiently integrated into treatment planning verification tools. Finally, the calculated efficiency values have been implemented in the treatment planning code TRiP98, and dose calculations on a macroscopic target irradiated with an extended carbon ion field have been performed and verified against experimental data.
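The abstract does not reproduce the authors' specific algorithm, but the general shape of a track-structure efficiency calculation can be sketched. In the toy model below (a generic Katz-style illustration under assumed parameters, not the paper's method), the local dose around an ion's path falls off as 1/r^2, the local TL response saturates exponentially with dose, and the relative efficiency is the ratio of the track-averaged response to the linear response a gamma field would produce for the same deposited dose.

```python
import numpy as np

# Toy Katz-style track-structure sketch (all parameters assumed; this is NOT
# the authors' algorithm):
#   radial dose around the ion path:  D(r) = k / r^2,  r_min <= r <= r_max
#   local TL response:                S(D) = S0 * (1 - exp(-D / D0))
#   gamma reference (low dose):       S(D) ~ (S0 / D0) * D
# Relative efficiency = track-integrated response / equivalent gamma response.

S0 = 1.0      # saturation TL signal, arbitrary units (assumed)
D0 = 500.0    # characteristic saturation dose in Gy (assumed)

def trapz(y, x):
    """Plain trapezoidal rule (kept explicit for NumPy-version portability)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def relative_efficiency(k, r_min=1e-9, r_max=1e-6, n=20_000):
    """k is the radial-dose constant in Gy*m^2; it grows with Z^2/beta^2."""
    r = np.geomspace(r_min, r_max, n)     # log grid: D spans many decades
    dose = k / r**2
    tl_ion = trapz(S0 * (1.0 - np.exp(-dose / D0)) * 2 * np.pi * r, r)
    tl_gamma = trapz((S0 / D0) * dose * 2 * np.pi * r, r)
    return tl_ion / tl_gamma

for k in (1e-16, 1e-14, 1e-12):           # increasing ionisation density
    print(f"k = {k:.0e} Gy*m^2 -> relative efficiency = "
          f"{relative_efficiency(k):.3f}")
```

The qualitative trend the model reproduces, efficiency dropping as the local dose density (and hence Z^2/beta^2) grows, is exactly the radiation-quality dependence the abstract identifies as the obstacle to using TLDs with ion beams.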
NASA Astrophysics Data System (ADS)
Meigo, S.
1997-02-01
The response functions and detection efficiencies of an NE213 liquid scintillator were measured for neutrons of 25, 30 and 65 MeV. Quasi-monoenergetic neutrons produced by the 7Li(p,n0,1) reaction were employed for the measurement, and the absolute flux of the incident neutrons was determined to within 4% accuracy using a proton recoil telescope. Response functions and detection efficiencies calculated with the Monte Carlo codes CECIL and SCINFUL were compared with the measured data. It was found that the response functions calculated with SCINFUL agreed with the experimental ones better than those calculated with CECIL; however, the deuteron light output used in SCINFUL was too low. The response functions calculated with a revised SCINFUL agreed with the experimental ones quite well, even for the deuteron bump and the peak due to the C(n,d0) reaction. It was confirmed that the detection efficiencies calculated with both the original and the revised SCINFUL agreed with the experimental data within the experimental error, while those calculated with CECIL were about 20% higher in the energy region above 30 MeV.
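As a rough illustration of what codes like CECIL and SCINFUL compute, a scintillator's neutron detection efficiency can be estimated, to first order, from the probability that a neutron scatters on hydrogen and produces a proton recoil above the detection threshold. The Monte Carlo sketch below is a deliberately crude model with assumed cross section, geometry and threshold; it ignores carbon reactions, multiple scattering and the light-output function, all of which the real codes treat. It uses the fact that n-p elastic scattering, isotropic in the centre of mass, gives a recoil-energy distribution uniform between 0 and the neutron energy.

```python
import numpy as np

# Crude Monte Carlo estimate of a scintillator's neutron detection efficiency.
# Model assumptions (illustrative only):
#   * only single n-p elastic scattering counts,
#   * n-p scattering isotropic in the CM frame => proton recoil energy
#     uniformly distributed on [0, E_n],
#   * a recoil is detected if it exceeds a fixed energy threshold.

rng = np.random.default_rng(seed=7)

n_h = 4.8e22        # hydrogen atoms per cm^3, typical for NE213 (assumed)
length_cm = 12.7    # detector thickness along the beam (assumed)
sigma_np_b = 0.07   # n-p elastic cross section in barns near 65 MeV (rough)
e_n = 65.0          # incident neutron energy, MeV
threshold = 2.0     # proton recoil detection threshold, MeV (assumed)

mfp_cm = 1.0 / (n_h * sigma_np_b * 1e-24)  # mean free path for n-p scattering

n_mc = 1_000_000
depth = rng.exponential(mfp_cm, n_mc)      # depth of first n-p interaction
interacted = depth < length_cm             # neutron scatters inside the cell
recoil = rng.uniform(0.0, e_n, n_mc)       # uniform recoil spectrum [0, E_n]
detected = interacted & (recoil > threshold)

eff_mc = detected.mean()
eff_analytic = (1.0 - np.exp(-length_cm / mfp_cm)) * (1.0 - threshold / e_n)
print(f"MC efficiency:       {eff_mc:.4f}")
print(f"Analytic efficiency: {eff_analytic:.4f}")
```

Efficiencies of a few percent at tens of MeV, as this toy model yields, are why the carbon channels and multiple scattering handled by SCINFUL and CECIL matter: they contribute corrections comparable to the single-scatter signal itself.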
The Super Efficient Refrigerator Program: Case study of a Golden Carrot program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eckert, J B
1995-07-01
The work in this report was conducted by the Analytic Studies Division (ASD) of the National Renewable Energy Laboratory (NREL) for the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy, Office of Building Technologies. This case study describes the development and implementation of the Super Efficient Refrigerator Program (SERP), which awarded $30 million to the refrigerator manufacturer that developed and commercialized a refrigerator that exceeded 1993 federal efficiency standards by at least 25%. The program was funded by 24 public and private utilities. As the first Golden Carrot program to be implemented in the United States, SERP was studied as an example for future `market-pull` efforts.
The NASA Aircraft Energy Efficiency program
NASA Technical Reports Server (NTRS)
Klineberg, J. M.
1979-01-01
A review is provided of the goals, objectives, and recent progress in each of six aircraft energy efficiency programs aimed at improved propulsive, aerodynamic and structural efficiency for future transport aircraft. Attention is given to engine component improvement, an energy efficient turbofan engine, advanced turboprops, revolutionary gains in aerodynamic efficiency for aircraft of the late 1990s, laminar flow control, and composite primary aircraft structures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zimring, Mark
2011-03-18
Since its launch in 2006, Austin Energy's Home Performance with Energy Star (HPwES) program has completed over 8,700 residential energy upgrades. The program's lending partner, Velocity Credit Union (VCU), has originated almost 1,800 loans, totaling approximately $12.5 million (roughly $7,000 per loan on average). Residential energy efficiency loans are typically small, and expensive to originate and service relative to larger financing products. National lenders have been hesitant to deliver attractive loan products to this small but growing residential market. In response, energy efficiency programs have found ways to partner with local and regional banks, credit unions, community development finance institutions (CDFIs) and co-ops to deliver energy efficiency financing to homeowners. VCU's experience with the Austin Energy HPwES program highlights the potential benefits of energy efficiency programs to a lending partner.