Development of an efficient procedure for calculating the aerodynamic effects of planform variation
NASA Technical Reports Server (NTRS)
Mercer, J. E.; Geller, E. W.
1981-01-01
Numerical procedures for computing gradients in aerodynamic loading due to planform shape changes using panel-method codes were studied. Two procedures were investigated: one computed the aerodynamic perturbation directly; the other computed the aerodynamic loading on the perturbed planform and on the base planform and then differenced these values to obtain the perturbation in loading. It was found that computing the perturbed values directly cannot be done satisfactorily without a proper aerodynamic representation of the pressure singularity at the leading edge of a thin wing. For the alternative procedure, a technique was developed which saves most of the time-consuming computations from the panel-method calculation for the base planform. Using this procedure, the perturbed loading can be calculated in about one-tenth of the time required for the base solution.
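One way such reuse can work, shown below as a minimal numpy/scipy sketch (our illustration, not the authors' scheme): if a planform change modifies only a rank-k part of the aerodynamic influence-coefficient matrix, the Woodbury identity lets the base LU factorization be reused, so the perturbed solve costs a small fraction of the base solve. All sizes and values are placeholders.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# Hypothetical setup: a planform change alters the N x N aerodynamic
# influence-coefficient (AIC) matrix by a rank-k term U @ V.T. The Woodbury
# identity then reuses the LU factorization of the base matrix, so only
# k extra back-substitutions and a small k x k solve are needed.
rng = np.random.default_rng(0)
N, k = 400, 8
A = np.eye(N) + 0.01 * rng.standard_normal((N, N))  # stand-in base AIC matrix
b = rng.standard_normal(N)                          # stand-in boundary conditions

lu, piv = lu_factor(A)              # expensive: done once, for the base planform
x_base = lu_solve((lu, piv), b)     # base loading

U = 0.01 * rng.standard_normal((N, k))   # stand-in planform perturbation
V = 0.01 * rng.standard_normal((N, k))

# (A + U V^T)^-1 b = x_base - A^-1 U (I + V^T A^-1 U)^-1 V^T x_base
AinvU = lu_solve((lu, piv), U)                   # k cheap back-substitutions
S = np.eye(k) + V.T @ AinvU                      # small capacitance matrix
x_pert = x_base - AinvU @ np.linalg.solve(S, V.T @ x_base)

delta_loading = x_pert - x_base                  # perturbation by differencing
print(np.allclose((A + U @ V.T) @ x_pert, b))    # True: same answer, far cheaper
```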
Series expansion of the modified Einstein Procedure
Seema Chandrakant Shah-Fairbank
2009-01-01
This study examines the calculation of total sediment discharge based on the Modified Einstein Procedure (MEP). A new procedure based on the Series Expansion of the Modified Einstein Procedure (SEMEP) has been developed. This procedure contains four main modifications to MEP. First, SEMEP solves the Einstein integrals quickly and accurately based on a series expansion. Next,...
40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and molar-based exhaust emission calculations. (a) Calculate your total mass of emissions over a test cycle as...
40 CFR 1066.610 - Mass-based and molar-based exhaust emission calculations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Mass-based and molar-based exhaust... (CONTINUED) AIR POLLUTION CONTROLS VEHICLE-TESTING PROCEDURES Calculations § 1066.610 Mass-based and molar-based exhaust emission calculations. (a) Calculate your total mass of emissions over a test cycle as...
The purpose of this SOP is to describe the procedures undertaken to calculate the ingestion exposure using composite food chemical residue values from the day of direct measurements. The calculation is based on the probabilistic approach. This SOP uses data that have been proper...
ERIC Educational Resources Information Center
Cepriá, Gemma; Salvatella, Luis
2014-01-01
All pH calculations for simple acid-base systems used in introductory courses on general or analytical chemistry can be carried out by using a general procedure requiring the use of predominance diagrams. In particular, the pH is calculated as the sum of an independent term equaling the average pKa values of the acids involved in the…
NASA Technical Reports Server (NTRS)
Nieten, Joseph L.; Seraphine, Kathleen M.
1991-01-01
Procedural modeling systems, rule-based modeling systems, and a method for converting a procedural model to a rule-based model are described. Simulation models are used to represent real-time engineering systems. A real-time system can be represented by a set of equations or functions connected so that they perform in the same manner as the actual system. Most modeling-system languages are based on FORTRAN or some other procedural language and must therefore be enhanced with a reaction capability. Rule-based systems are reactive by definition. Once the engineering system has been decomposed into a set of calculations using only basic algebraic unary operations, a knowledge network of calculations and functions can be constructed, as sketched below. The knowledge network required by a rule-based system can be generated by a knowledge acquisition tool or a source-level compiler. The compiler would take an existing model source file, a syntax template, and a symbol table and generate the knowledge network. Thus, existing procedural models can be translated and executed by a rule-based system. Neural models can provide the high-capacity data manipulation required by the most complex real-time models.
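A minimal sketch of such a knowledge network, assuming a demand-driven rule engine; the node names and rules are hypothetical:

```python
import math

# node -> (function over the state, list of dependency nodes)
rules = {
    "area":   (lambda s: math.pi * s["radius"] ** 2, ["radius"]),
    "volume": (lambda s: s["area"] * s["height"],    ["area", "height"]),
}
state = {"radius": 2.0, "height": 5.0}   # model inputs

def evaluate(node):
    """Demand-driven evaluation: compute dependencies first, then the node."""
    if node not in state:
        fn, deps = rules[node]
        for d in deps:
            evaluate(d)
        state[node] = fn(state)
    return state[node]

print(evaluate("volume"))                # 62.83...

# The network reacts to a changed input: invalidate downstream nodes, re-fire.
state.update(radius=3.0)
for downstream in ("area", "volume"):
    state.pop(downstream, None)
print(evaluate("volume"))                # 141.37...
```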
NASA Astrophysics Data System (ADS)
Gallup, G. A.; Gerratt, J.
1985-09-01
The van der Waals energy between the two parts of a system is a very small fraction of the total electronic energy. In such cases, calculations have been based on perturbation theory. However, such an approach involves certain difficulties. For this reason, van der Waals energies have also been calculated directly from total energies. But such a method has definite limitations on the size of the systems that can be treated, and recently ab initio calculations have been combined with damped semiempirical long-range dispersion potentials to treat larger systems. In this procedure, large basis-set superposition errors occur, which must be removed by the counterpoise method. The present investigation is concerned with an approach which is intermediate between the previously considered procedures. The first step in the new approach involves a variational calculation based upon valence bond functions. The procedure also includes the optimization of excited orbitals and an approximation of atomic integrals and Hamiltonian matrix elements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... for each data set that is collected during the initial performance test. A single composite value of... Multiple Zone Concentrations Calculations Procedure based on inlet and outlet concentrations (Column A of... composite value of Ks discussed in section III.C of this appendix. This value of Ks is calculated during the...
The Application of Six Sigma Techniques in the Evaluation of Enzyme Measurement Procedures in China.
Zhang, Chuanbao; Zhao, Haijian; Wang, Jing; Zeng, Jie; Wang, Zhiguo
2015-01-01
Recently, Six Sigma techniques have been adopted by clinical laboratories to evaluate laboratory performance. Measurement procedures in laboratories can be categorized as "excellent", "good", and "improvement needed" based on sigma (σ) metrics of σ ≥ 6, 3 ≤ σ < 6, and σ < 3, respectively. The quality goal index (QGI) was further investigated for measurement procedures with σ < 3. Improvements of the procedures were recommended based on QGI: QGI < 0.8 indicates that the precision of the procedure needs to be improved; QGI > 1.2 indicates that the trueness of the procedure needs to be improved; 0.8 ≤ QGI ≤ 1.2 indicates that both the precision and trueness of the procedure need to be improved. Fresh frozen sera containing seven enzymes (ALT, ALP, AMY, AST, CK, GGT, and LDH) were sent to 78 clinical laboratories in China. The biases for measurement procedures in each laboratory (Bias) were calculated based on the target values assigned by 18 laboratories performing IFCC (International Federation of Clinical Chemistry and Laboratory Medicine) recommended reference methods. The imprecision of each measurement procedure was represented by coefficients of variation (CV) calculated based on internal quality control (IQC) data. The σ and QGI values were calculated as follows: σ = (TEa − Bias)/CV; QGI = Bias/(1.5 × CV). TEa is the allowable total error for each enzyme derived from biological variation. Our study indicated that 7.9% (6/76, ALP) to 31.0% (18/58, AMY) of the participating laboratories were scored as "excellent" (σ ≥ 6), 21.1% (16/76, ALP) to 41.3% (31/75, CK) of the laboratories were scored as "good" (3 ≤ σ < 6), and 31.0% (18/58, AMY) to 71.1% (54/76, ALP) of the laboratories need to improve their enzyme measurement procedures (σ < 3). For those with σ < 3, QGIs were further calculated. Based on QGI values, 8.6% (5/58, AMY) to 35.9% (28/78, LDH) of the laboratories (QGI < 0.8) need to improve the precision of the procedures, 8.0% (6/75, CK) to 52.6% (40/76, ALP) of the laboratories (QGI > 1.2) need to improve the trueness of the procedures; and 2.7% (2/75, AST) to 16.3% (8/49, GGT) of the laboratories (0.8 ≤ QGI ≤ 1.2) need to improve both the precision and trueness of the procedures. Even though rapid progress has been made to standardize serum enzyme measurements in China in recent years, our study using Six Sigma techniques still suggested that approximately 31.0% to 71.1% of the laboratories need to improve their enzyme measurement procedures, either in terms of precision, trueness, or both.
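The classification logic described above reduces to a few lines; the sketch below encodes the paper's cutoffs, with an illustrative (not actual) TEa value:

```python
# Minimal sketch of the sigma-metric and QGI classification described above.
# TEa, bias, and cv are all in percent; cutoffs follow the paper.

def classify(TEa, bias, cv):
    sigma = (TEa - bias) / cv
    if sigma >= 6:
        return sigma, None, "excellent"
    if sigma >= 3:
        return sigma, None, "good"
    qgi = bias / (1.5 * cv)                  # only computed when sigma < 3
    if qgi < 0.8:
        action = "improve precision"
    elif qgi > 1.2:
        action = "improve trueness"
    else:
        action = "improve precision and trueness"
    return sigma, qgi, action

# Hypothetical ALT example: TEa = 16.1%, bias = 8.0%, CV = 4.0%
print(classify(16.1, 8.0, 4.0))   # sigma ~ 2.0, QGI ~ 1.33 -> improve trueness
```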
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miliordos, Evangelos; Xantheas, Sotiris S.
We propose a general procedure for the numerical calculation of the harmonic vibrational frequencies that is based on internal coordinates and Wilson's GF methodology via double differentiation of the energy. The internal coordinates are defined as the geometrical parameters of a Z-matrix structure, thus avoiding issues related to their redundancy. Linear arrangements of atoms are described using a dummy atom of infinite mass. The procedure has been automated in FORTRAN90, and its main advantage lies in the nontrivial reduction of the number of single-point energy calculations needed for the construction of the Hessian matrix when compared to the corresponding number using double differentiation in Cartesian coordinates. For molecules of C1 symmetry the computational savings in the energy calculations amount to 36N - 30, where N is the number of atoms, with additional savings when symmetry is present. Typical applications for small and medium size molecules in their minimum and transition state geometries as well as hydrogen bonded clusters (water dimer and trimer) are presented. Finally, in all cases the frequencies based on internal coordinates differ on average by <1 cm^-1 from those obtained from Cartesian coordinates.
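The quoted savings can be reproduced with simple bookkeeping if one assumes a double-differencing stencil needing M^2 + M + 1 single-point energies for M coordinates (one base point, 2M displaced points for the diagonal, and M^2 - M points for the off-diagonal elements); the stencil is our assumption, not spelled out in the abstract:

```python
# Reproduce the quoted savings of 36N - 30 energy evaluations (C1 symmetry),
# assuming the Hessian needs M^2 + M + 1 single-point energies for M coords.

def n_energies(M):
    return M * M + M + 1

def savings(N_atoms):
    cart = n_energies(3 * N_atoms)           # Cartesian: M = 3N
    internal = n_energies(3 * N_atoms - 6)   # internal:  M = 3N - 6
    return cart - internal

for N in (3, 5, 10):
    print(N, savings(N), 36 * N - 30)        # the two columns agree
```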
Lin, Blossom Yen-Ju; Chao, Te-Hsin; Yao, Yuh; Tu, Shu-Min; Wu, Chun-Ching; Chern, Jin-Yuan; Chao, Shiu-Hsiung; Shaw, Keh-Yuong
2007-04-01
Previous studies have shown the advantages of using activity-based costing (ABC) methodology in the health care industry. The potential value of ABC methodology in health care derives from more accurate cost calculation compared with traditional step-down costing, and from the potential to evaluate the quality or effectiveness of health care based on health care activities. This project used ABC methodology to profile the cost structure of inpatients undergoing surgical procedures at the Department of Colorectal Surgery in a public teaching hospital, and to identify missing or inappropriate clinical procedures. We found that ABC methodology was able to calculate costs accurately and to identify several missing pre- and post-surgical nursing education activities in the course of treatment.
DOT National Transportation Integrated Search
2009-01-01
The Virginia Department of Transportation's (VDOT's) current pavement design procedure is based on the 1993 AASHTO Guide for Design of Pavement Structures. In this procedure, a required structural capacity is calculated as a function of the anticipat...
Calculation of conductivities and currents in the ionosphere
NASA Technical Reports Server (NTRS)
Kirchhoff, V. W. J. H.; Carpenter, L. A.
1975-01-01
Formulas and procedures to calculate ionospheric conductivities are summarized. Ionospheric currents are calculated using a semidiurnal E-region neutral wind model and electric fields from measurements at Millstone Hill. The results agree well with ground based magnetogram records for magnetic quiet days.
40 CFR 98.455 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... § 98.455 Procedures for estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations is required. Replace missing data, if needed, based on data from...
40 CFR 98.305 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... Use § 98.305 Procedures for estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations is required. Replace missing data, if needed, based on data from...
40 CFR 98.305 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... Use § 98.305 Procedures for estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations is required. Replace missing data, if needed, based on data from...
40 CFR 98.455 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... § 98.455 Procedures for estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations is required. Replace missing data, if needed, based on data from...
40 CFR 98.455 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... § 98.455 Procedures for estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations is required. Replace missing data, if needed, based on data from...
40 CFR 98.305 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... Use § 98.305 Procedures for estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations is required. Replace missing data, if needed, based on data from...
40 CFR 98.305 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... Use § 98.305 Procedures for estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations is required. Replace missing data, if needed, based on data from...
40 CFR 98.455 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... § 98.455 Procedures for estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations is required. Replace missing data, if needed, based on data from...
Calculation of gas release from DC and AC arc furnaces in a foundry
NASA Astrophysics Data System (ADS)
Krutyanskii, M. M.; Nekhamin, S. M.; Rebikov, E. M.
2016-12-01
A procedure for the calculation of gas release from arc furnaces is presented. The procedure is based on the stoichiometric ratios of the oxidation of carbon in liquid iron during the oxidation period of the heat, and of the oxidation of iron from the steel charge by oxygen during solid-charge melting, accounting for the gas exchange between the furnace cavity and the external atmosphere.
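As a hedged illustration of the kind of stoichiometry involved (our simplification, considering decarburization to CO only):

```python
# Illustrative stoichiometry: decarburization C + 1/2 O2 -> CO, so each
# kilogram of carbon oxidized releases 22.414/12.011 ~ 1.87 Nm^3 of CO.

M_C = 12.011          # g/mol
V_MOLAR = 22.414      # L/mol at normal conditions

def co_release_nm3(carbon_oxidized_kg):
    mol = carbon_oxidized_kg * 1000.0 / M_C
    return mol * V_MOLAR / 1000.0            # Nm^3 of CO

# e.g. a 0.30 wt% carbon drop in a 50 t heat oxidizes 150 kg of carbon:
print(co_release_nm3(150.0))                 # ~280 Nm^3 of CO
```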
Experimental Verification of Buffet Calculation Procedure Using Unsteady PSP
NASA Technical Reports Server (NTRS)
Panda, Jayanta
2016-01-01
Typically a limited number of dynamic pressure sensors are employed to determine the unsteady aerodynamic forces on large, slender aerospace structures. The estimated forces are known to be very sensitive to the number of dynamic pressure sensors and the details of the integration scheme. This report describes a robust calculation procedure, based on frequency-specific correlation lengths, that is found to produce good estimates of the fluctuating forces from a few dynamic pressure sensors. The validation test was conducted on a flat panel, placed on the floor of a wind tunnel, which was subjected to vortex shedding from a rectangular bluff body. The panel was coated with fast-response Pressure Sensitive Paint (PSP), which allowed time-resolved measurements of unsteady pressure fluctuations on a dense grid of spatial points. The first part of the report describes the detailed procedure used to analyze the high-speed PSP camera images. The procedure includes steps to reduce contamination by electronic shot noise, corrections for spatial non-uniformities and lamp brightness variation, and finally conversion of fluctuating light intensity to fluctuating pressure. The latter involved applying calibration constants from a few dynamic pressure sensors placed at selected points on the plate. Excellent agreement in the spectra, coherence, and phase calculated via PSP and via the dynamic pressure sensors validated the PSP processing steps. The second part of the report describes the buffet validation process, for which the first step was to use pressure histories from all PSP points to determine the "true" force fluctuations. In the next step only a selected number of pixels were chosen as "virtual sensors" and a correlation-length based buffet calculation procedure was applied to determine "modeled" force fluctuations. By progressively decreasing the number of virtual sensors it was observed that the present calculation procedure was able to make a close estimate of the "true" unsteady forces from only four sensors. It is believed that the present work provides the first validation of the buffet calculation procedure which has been used for the development of many space vehicles.
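A minimal sketch of a correlation-length based force estimate, as we read the approach, with an assumed exponential coherence model and placeholder spectra:

```python
import numpy as np

# At each frequency f, build the mean-square force from the sensor pressure
# PSDs S_i(f), their patch areas A_i, and a coherence that decays with
# separation over a frequency-specific length L(f). The coherence model and
# all inputs below are assumptions for illustration.

def force_psd(freqs, S, x, A, L_of_f):
    """S: (n_sensors, n_freqs) pressure PSDs; x: sensor positions; A: areas."""
    sep = np.abs(x[:, None] - x[None, :])        # sensor separations
    G = np.empty_like(freqs)
    for k, f in enumerate(freqs):
        coh = np.exp(-sep / L_of_f(f))           # assumed coherence model
        amp = np.sqrt(S[:, k]) * A               # rms pressure times area
        G[k] = amp @ coh @ amp                   # double sum over sensor pairs
    return G

freqs = np.linspace(1.0, 100.0, 50)
x = np.linspace(0.0, 1.0, 4)                     # four "virtual sensors"
A = np.full(4, 0.25)
S = np.ones((4, freqs.size))                     # flat spectra, placeholder
print(force_psd(freqs, S, x, A, lambda f: 10.0 / f)[:3])
```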
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-04
... (NMAC) revisions to fee calculation procedures and requirements for fugitive dust control permits (subsections (A) and (B)), including a fee schedule based on acreage and an updated methodology used to calculate non-programmatic dust permit fees. 9/7/2004 ...
Reconciling quality and cost: A case study in interventional radiology.
Zhang, Li; Domröse, Sascha; Mahnken, Andreas
2015-10-01
To provide a method to calculate delay cost and to examine the relationship between quality and total cost. The total cost, including capacity, supply, and delay cost, for running an interventional radiology suite was calculated. The capacity cost, consisting of labour, lease, and overhead costs, was derived from expenses per unit time. The supply cost was calculated according to actual procedural material use. The delay cost and marginal delay cost, derived from queueing models, were calculated based on the waiting times of inpatients for their procedures. Quality improvement increased patient safety and maintained the outcome. The average daily delay cost was reduced from 1275 € to 294 €, and the marginal delay cost from approximately 2000 € to 500 €. The one-time annual cost saving from the transfer of surgical to radiological procedures was approximately 130,500 €. The yearly delay cost saved was approximately 150,000 €. With increased revenue of 10,000 € in project phase 2, the yearly total cost saved was approximately 290,000 €. An optimal daily capacity of 4.2 procedures was determined. An approach for calculating delay cost toward optimal capacity allocation was presented. An overall quality improvement was achieved at reduced cost. • Improving quality in terms of safety, outcome, efficiency and timeliness reduces cost. • Mismatch of demand and capacity is detrimental to quality and cost. • Full system utilization with random demand results in long waiting periods and increased cost.
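A sketch of how a queueing model can turn waiting into a delay cost, assuming an M/M/1 queue (the paper's exact model is not specified, and the numbers are hypothetical):

```python
# lam = arrivals/day, mu = procedures/day of capacity,
# c_w = delay cost per patient-day of waiting.

def daily_delay_cost(lam, mu, c_w):
    rho = lam / mu
    if rho >= 1.0:
        return float("inf")                 # unstable: queue grows without bound
    Lq = rho ** 2 / (1.0 - rho)             # mean number waiting (M/M/1)
    return c_w * Lq                         # cost rate = c_w * patients waiting

# Marginal delay cost of one more arrival per day, by finite difference:
def marginal_delay_cost(lam, mu, c_w, d=0.01):
    return (daily_delay_cost(lam + d, mu, c_w)
            - daily_delay_cost(lam, mu, c_w)) / d

print(daily_delay_cost(3.5, 4.2, 300.0))     # hypothetical numbers
print(marginal_delay_cost(3.5, 4.2, 300.0))  # rises steeply as rho -> 1
```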
Idaho AASHTOWare pavement ME design user's guide, version 1.1.
DOT National Transportation Integrated Search
2014-03-01
The AASHTOWare Pavement ME Design procedure is based on mechanistic-empirical (M-E) design concepts. This means that the design procedure calculates pavement responses such as stresses, strains, and deflections under axle loads and climatic condition...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Townsend, D.W.; Linnhoff, B.
In Part I, criteria for heat engine and heat pump placement in chemical process networks were derived, based on the ''temperature interval'' (T.I.) analysis of the heat exchanger network problem. Using these criteria, this paper gives a method for identifying the best outline design for any combined system of chemical process, heat engines, and heat pumps. The method eliminates inferior alternatives early and leads positively to the most appropriate solution. A graphical procedure based on the T.I. analysis forms the heart of the approach, and the calculations involved are simple enough to be carried out on, say, a programmable calculator. Application to a case study is demonstrated. Optimization methods based on this procedure are currently under research.
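The T.I. (problem table) cascade itself is short enough to sketch; the streams and dTmin below are illustrative, not from the case study:

```python
# Temperature-interval cascade: shift stream temperatures by +/- dTmin/2,
# compute each interval's heat surplus, cascade, and read off minimum
# hot/cold utility. Stream data are placeholders.

dTmin = 10.0
# (kind, supply T, target T, CP [kW/K])
streams = [("hot", 180.0, 60.0, 2.0), ("hot", 150.0, 30.0, 4.0),
           ("cold", 20.0, 135.0, 3.0), ("cold", 80.0, 140.0, 8.0)]

shifted = [(k, ts - dTmin / 2, tt - dTmin / 2, cp) if k == "hot"
           else (k, ts + dTmin / 2, tt + dTmin / 2, cp)
           for k, ts, tt, cp in streams]
bounds = sorted({t for _, ts, tt, _ in shifted for t in (ts, tt)}, reverse=True)

surplus = []
for hi, lo in zip(bounds, bounds[1:]):
    net_cp = sum(cp if k == "hot" else -cp
                 for k, ts, tt, cp in shifted
                 if min(ts, tt) <= lo and max(ts, tt) >= hi)
    surplus.append(net_cp * (hi - lo))       # heat surplus of this interval

run, worst = 0.0, 0.0
for q in surplus:                            # cascade the surpluses down
    run += q
    worst = min(worst, run)
Q_hot_min = -worst                           # most negative deficit sets Q_hot
Q_cold_min = run + Q_hot_min
print(Q_hot_min, Q_cold_min)                 # 225.0 120.0 for these streams
```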
Accurate wavelengths for X-ray spectroscopy and the NIST hydrogen-like ion database
NASA Astrophysics Data System (ADS)
Kotochigova, S. A.; Kirby, K. P.; Brickhouse, N. S.; Mohr, P. J.; Tupitsyn, I. I.
2005-06-01
We have developed an ab initio multi-configuration Dirac-Fock-Sturm method for the precise calculation of X-ray emission spectra, including energies, transition wavelengths and transition probabilities. The calculations are based on non-orthogonal basis sets, generated by solving the Dirac-Fock and Dirac-Fock-Sturm equations. Inclusion of Sturm functions into the basis set provides an efficient description of correlation effects in highly charged ions and fast convergence of the configuration interaction procedure. A second part of our study is devoted to developing a theoretical procedure and creating an interactive database to generate energies and transition frequencies for hydrogen-like ions. This procedure is highly accurate and based on current knowledge of the relevant theory, which includes relativistic, quantum electrodynamic, recoil, and nuclear size effects.
Numerical modeling and optimization of the Iguassu gas centrifuge
NASA Astrophysics Data System (ADS)
Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.
2017-07-01
The full procedure for the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is discussed. The procedure consists of a few steps. In the first step, the problem of the hydrodynamic flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the problem of diffusion of the binary mixture of isotopes is solved, after which the separative power of the gas centrifuge is calculated. In the last step, the time-consuming procedure of optimizing the GC is performed, yielding the maximum of the separative power. The optimization is based on the BOBYQA method, exploiting the results of numerical simulations of the hydrodynamics and of the diffusion of the isotope mixture. Fast convergence of the calculations is achieved owing to the use of a direct solver in the solution of the hydrodynamic and diffusion parts of the problem. The optimized separative power and optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
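A sketch of the optimization step, assuming the Py-BOBYQA package as the BOBYQA implementation (an assumption; the authors' implementation is not named) and a placeholder objective standing in for the hydrodynamics-plus-diffusion chain:

```python
import numpy as np
import pybobyqa   # assumed implementation of Powell's BOBYQA

def separative_power(params):
    # Stand-in objective: in reality each evaluation would run the direct
    # hydrodynamic solver followed by the isotope-diffusion solve.
    return -np.sum((params - np.array([0.3, 0.7, 1.2])) ** 2)

x0 = np.array([0.5, 0.5, 1.0])           # initial internal GC parameters
lower = np.array([0.1, 0.1, 0.5])
upper = np.array([1.0, 1.0, 2.0])

# BOBYQA minimizes, so negate the separative power to maximize it:
soln = pybobyqa.solve(lambda p: -separative_power(p), x0,
                      bounds=(lower, upper), maxfun=200)
print(soln.x, -soln.f)                   # optimal parameters, peak power
```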
Advanced Geometric Optics on a Programmable Pocket Calculator.
ERIC Educational Resources Information Center
Nussbaum, Allen
1979-01-01
Presents a ray-tracing procedure based on some ideas of Herzberger and the matrix approach to geometrical optics. This method, which can be implemented on a programmable pocket calculator, applies to any conic surface, including paraboloids, spheres, and planes. (Author/GA)
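A paraxial sketch of the matrix machinery (the paper's method handles full conic surfaces; only the 2x2 ray-transfer-matrix idea is shown, with illustrative values):

```python
import numpy as np

# Rays are (height, angle) vectors; each surface or gap is a 2x2 matrix.

def refraction(n1, n2, R):        # paraxial spherical interface, radius R
    return np.array([[1.0, 0.0], [-(n2 - n1) / (n2 * R), n1 / n2]])

def translation(d):               # propagation over distance d
    return np.array([[1.0, d], [0.0, 1.0]])

# Single glass surface (n = 1.5, R = 50 mm) followed by 100 mm of glass:
system = translation(100.0) @ refraction(1.0, 1.5, 50.0)
ray_in = np.array([5.0, 0.0])     # height 5 mm, parallel to the axis
print(system @ ray_in)            # [1.667, -0.0333]: ray bends toward the axis
```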
How accurately can the peak skin dose in fluoroscopy be determined using indirect dose metrics?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org; Ensor, Joe E.; Pasciak, Alexander S.
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. There is no consensus as to whether or not indirect skin dosimetry is sufficiently accurate for fluoroscopically-guided interventions. However, measuring PSD with film is difficult and the decision to do so must be made a priori. The purpose of this study was to assess the accuracy of different types of indirect dose estimates and to determine if PSD can be calculated within ±50% using indirect dose metrics for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures at two sites. Indirect dose metrics from the procedures were collected, including reference air kerma. Four different estimates of PSD were calculated from the indirect dose metrics and compared along with reference air kerma to the measured PSD for each case. The four indirect estimates included a standard calculation method, the use of detailed information from the radiation dose structured report, and two simplified calculation methods based on the standard method. Indirect dosimetry results were compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the different indirect estimates were examined. Results: When using the standard calculation method, calculated PSD were within ±35% for all 41 procedures studied. Calculated PSD were within ±50% for a simplified method using a single source-to-patient distance for all calculations. Reference air kerma was within ±50% for all but one procedure. Cases for which reference air kerma or calculated PSD exhibited large (±35%) differences from the measured PSD were analyzed, and two main causative factors were identified: unusually small or large source-to-patient distances and large contributions to reference air kerma from cone beam computed tomography or acquisition runs acquired at large primary gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±35% for embolization procedures. Reference air kerma can be used without modification to set notification limits and substantial radiation dose levels, provided the displayed reference air kerma is accurate. These results can reasonably be extended to similar procedures, including vascular and interventional oncology. Considering these results, film dosimetry is likely an unnecessary effort for these types of procedures when indirect dose metrics are available.
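A hedged sketch of a standard-type PSD estimate as we understand it: an inverse-square correction of reference air kerma to the skin plane, with backscatter and table-transmission factors (the study's exact factors and distances are assumptions here):

```python
# K_ref is the displayed reference air kerma (mGy) at the reference point
# d_ref_cm from the focal spot; d_skin_cm is the actual focal-spot-to-skin
# distance; bsf and t_table are assumed correction factors.

def estimate_psd(K_ref_mGy, d_ref_cm, d_skin_cm, bsf=1.3, t_table=0.8):
    inverse_square = (d_ref_cm / d_skin_cm) ** 2
    return K_ref_mGy * inverse_square * bsf * t_table

# e.g. 2500 mGy reference air kerma, reference distance 60 cm, skin at 65 cm:
print(estimate_psd(2500.0, 60.0, 65.0))      # ~2200 mGy
```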
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brickstad, B.; Bergman, M.
A computerized procedure has been developed that predicts the growth of an initial circumferential surface crack through a pipe and further on to failure. The crack growth mechanism can be either fatigue or stress corrosion. Complex crack shapes are considered, and for through-wall cracks, crack opening areas and leak rates are also calculated. The procedure is based on a large number of three-dimensional finite element calculations of cracked pipes. The results from these calculations are stored in a database from which the PC program, denoted LBBPIPE, reads all necessary information. In this paper, a sensitivity analysis is presented for cracked pipes subjected to both stress corrosion and vibration fatigue.
The Individual Virtual Eye: a Computer Model for Advanced Intraocular Lens Calculation
Einighammer, Jens; Oltrup, Theo; Bende, Thomas; Jean, Benedikt
2010-01-01
Purpose To describe the individual virtual eye, a computer model of a human eye with respect to its optical properties. It is based on measurements of an individual person, and one of its major applications is the calculation of intraocular lenses (IOLs) for cataract surgery. Methods The model is constructed from the eye's geometry, including the axial length and topographic measurements of the anterior corneal surface. All optical components of a pseudophakic eye are modeled with computer-scientific methods. A spline-based interpolation method efficiently includes data from corneal topographic measurements. The geometrical-optical properties, such as the wavefront aberration, are simulated with real ray tracing using Snell's law. Optical components can be calculated using numerical optimization procedures. The geometry of customized aspheric IOLs was calculated for 32 eyes and the resulting wavefront aberration was investigated. Results The more complex the calculated IOL, the lower the residual wavefront error. Spherical IOLs are only able to correct the defocus, while toric IOLs also eliminate astigmatism. Spherical aberration is additionally reduced by aspheric and toric aspheric IOLs. The efficient implementation of time-critical numerical ray tracing and optimization procedures allows for short calculation times, which may lead to a practicable method integrated into some device. Conclusions The individual virtual eye allows for simulations and calculations regarding geometrical optics for individual persons. This leads to clinical applications like IOL calculation, with the potential to overcome the limitations of current calculation methods based on paraxial optics, as exemplified by calculating customized aspheric IOLs.
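The ray-tracing core rests on Snell's law in vector form; a minimal sketch (indices and directions are illustrative):

```python
import numpy as np

# Vector Snell refraction of a ray with unit direction d at a surface with
# unit normal n (oriented against the incoming ray), from index n1 to n2.

def refract(d, n, n1, n2):
    eta = n1 / n2
    cos_i = -np.dot(d, n)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

d = np.array([0.0, 0.3, 0.954])           # incoming ray direction
d /= np.linalg.norm(d)                    # normalize
normal = np.array([0.0, 0.0, -1.0])       # surface normal facing the ray
print(refract(d, normal, 1.0, 1.336))     # refraction into aqueous humour
```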
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Standard calculation procedure. 434.510 Section 434.510... HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard calculation procedure. 510.1The Standard Calculation Procedure consists of methods and assumptions for...
NASA Technical Reports Server (NTRS)
Trosset, Michael W.
1999-01-01
Comprehensive computational experiments to assess the performance of algorithms for numerical optimization require (among other things) a practical procedure for generating pseudorandom nonlinear objective functions. We propose a procedure that is based on the convenient fiction that objective functions are realizations of stochastic processes. This report details the calculations necessary to implement our procedure for the case of certain stationary Gaussian processes and presents a specific implementation in the statistical programming language S-PLUS.
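A minimal numpy sketch of the idea, drawing one sample path of a stationary Gaussian process with squared-exponential covariance (the report's S-PLUS implementation is not reproduced here):

```python
import numpy as np

# One pseudorandom objective = one realization of a stationary GP on a grid.

rng = np.random.default_rng(42)
x = np.linspace(0.0, 10.0, 200)

ell = 1.0                                            # correlation length
K = np.exp(-0.5 * ((x[:, None] - x[None, :]) / ell) ** 2)
L = np.linalg.cholesky(K + 1e-8 * np.eye(x.size))    # jitter for stability
f = L @ rng.standard_normal(x.size)                  # sample path = objective

print(x[np.argmin(f)], f.min())          # its global minimizer on the grid
```

Changing the seed yields a fresh objective with statistically identical smoothness, which is exactly what a benchmarking experiment needs.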
Unit Method of Accounting for Investments.
ERIC Educational Resources Information Center
Jones, Leigh A.
1971-01-01
The unit method of accounting for investments, also called the market-value method, is defined as a procedure for accurately allocating income and investment gains and losses, both realized and unrealized, between component funds of an investment pool. This procedure provides a data base for the calculation of investment performance. Advantages of…
40 CFR 98.294 - Monitoring and QA/QC requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... scales or methods used for accounting purposes. (3) Document the procedures used to ensure the accuracy of the monthly measurements of trona consumed. (b) If you calculate CO2 process emissions based on... your facility, or methods used for accounting purposes. (3) Document the procedures used to ensure the...
40 CFR 98.85 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... to determine combined process and combustion CO2 emissions, the missing data procedures in § 98.35 apply. (b) For CO2 process emissions from cement manufacturing facilities calculated according to § 98... best available estimate of the monthly clinker production based on information used for accounting...
Element-by-element Solution Procedures for Nonlinear Structural Analysis
NASA Technical Reports Server (NTRS)
Hughes, T. J. R.; Winget, J. M.; Levit, I.
1984-01-01
Element-by-element approximate factorization procedures are proposed for solving the large finite element equation systems which arise in nonlinear structural mechanics. Architectural and data base advantages of the present algorithms over traditional direct elimination schemes are noted. Results of calculations suggest considerable potential for the methods described.
Calculation of the solvus temperature of metastable phases in the Al-Mg-Si alloys
NASA Astrophysics Data System (ADS)
Vasilyev, A. A.; Gruzdev, A. S.; Kuz'min, N. L.
2011-09-01
A procedure has been proposed for the self-consistent calculation of the solvus temperatures of metastable phase precipitates in Al-Mg-Si alloys and the specific energy of their interface with the aluminum matrix. The procedure is based on the results of experimental studies on the kinetics of formation of these precipitates during decomposition of supersaturated solid solutions of quenched Al-Mg-Si alloys, which were carried out by measuring the Young's modulus and electrical resistivity. On the basis of the obtained set of solvus temperatures of the β″-phase, an empirical formula has been proposed for calculating this temperature as a function of the chemical composition of the initial solid solution.
Gas electron multiplier (GEM) foil test, repair and effective gain calculation
NASA Astrophysics Data System (ADS)
Tahir, Muhammad; Zubair, Muhammad; Khan, Tufail A.; Khan, Ashfaq; Malook, Asad
2018-06-01
This research focuses on gas electron multiplier (GEM) foil testing, repair, and the calculation of the effective gain of GEM detectors. Procedures are defined for testing GEM foils for short circuits, for detecting short circuits in a foil, and for studying different ways to remove them. Foil-testing procedures are established both in open air and under nitrogen gas, measuring the leakage current of the foil while applying voltages with a specified step size. Quality control (QC) tests are defined for the different components of GEM detectors before assembly, and the effective gain of the GEM detectors is calculated using 109Cd and 55Fe radioactive sources.
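A sketch of a textbook effective-gain estimate from the anode current (our assumed relation and gas parameters, not necessarily this procedure's):

```python
# G_eff = I_anode / (e * R * n_p), with R the photon rate and n_p the
# primary ionization per photon; W_EV is an assumed mean energy per ion pair.

E_PHOTON_EV = 5900.0        # 55Fe Mn K-alpha
W_EV = 26.0                 # assumed value for an Ar-based gas mixture
E_CHARGE = 1.602e-19        # C

def effective_gain(i_anode_amp, rate_hz):
    n_primary = E_PHOTON_EV / W_EV            # ~227 primary electrons
    return i_anode_amp / (E_CHARGE * rate_hz * n_primary)

print(effective_gain(5.0e-9, 1.0e4))          # ~1.4e4 for 5 nA at 10 kHz
```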
Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases
NASA Astrophysics Data System (ADS)
Morifuji, Masato
2018-01-01
We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic-state calculations using plane-wave basis functions, a large number of plane waves is often required to obtain precise results. Even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large, and the computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure for deriving a reduced Hamiltonian constructed from a small number of low-energy bases by renormalizing the high-energy bases. We demonstrate numerically that a significant speedup in the evaluation of eigenstates is achieved without losing accuracy.
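One standard way to renormalize high-energy bases into a low-energy block is Löwdin downfolding; whether this matches the paper's reduction scheme exactly is our assumption:

```python
import numpy as np

# Effective low-energy Hamiltonian: H_eff(E) = H_ll + H_lh (E - H_hh)^-1 H_hl.

def downfold(H, n_low, E):
    Hll = H[:n_low, :n_low]
    Hlh = H[:n_low, n_low:]
    Hhl = H[n_low:, :n_low]
    Hhh = H[n_low:, n_low:]
    return Hll + Hlh @ np.linalg.solve(E * np.eye(Hhh.shape[0]) - Hhh, Hhl)

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 50))
H = (A + A.T) / 2 + np.diag(np.arange(50.0))   # high-index bases lie higher
H_eff = downfold(H, n_low=10, E=0.0)

# Low eigenvalues of the reduced 10x10 matrix approximate those of full H:
print(np.linalg.eigvalsh(H_eff)[:3])
print(np.linalg.eigvalsh(H)[:3])
```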
Coarse mesh and one-cell block inversion based diffusion synthetic acceleration
NASA Astrophysics Data System (ADS)
Kim, Kang-Seog
DSA (Diffusion Synthetic Acceleration) has been developed to accelerate the SN transport iteration. We have developed solution techniques for the diffusion equations of FLBLD (Fully Lumped Bilinear Discontinuous), SCB (Simple Comer Balance) and UCB (Upstream Corner Balance) modified 4-step DSA in x-y geometry. Our first multi-level method includes a block Gauss-Seidel iteration for the discontinuous diffusion equation, uses the continuous diffusion equation derived from the asymptotic analysis, and avoids void cell calculation. We implemented this multi-level procedure and performed model problem calculations. The results showed that the FLBLD, SCB and UCB modified 4-step DSA schemes with this multi-level technique are unconditionally stable and rapidly convergent. We suggested a simplified multi-level technique for FLBLD, SCB and UCB modified 4-step DSA. This new procedure does not include iterations on the diffusion calculation or the residual calculation. Fourier analysis results showed that this new procedure was as rapidly convergent as conventional modified 4-step DSA. We developed new DSA procedures coupled with 1-CI (Cell Block Inversion) transport which can be easily parallelized. We showed that 1-CI based DSA schemes preceded by SI (Source Iteration) are efficient and rapidly convergent for LD (Linear Discontinuous) and LLD (Lumped Linear Discontinuous) in slab geometry and for BLD (Bilinear Discontinuous) and FLBLD in x-y geometry. For 1-CI based DSA without SI in slab geometry, the results showed that this procedure is very efficient and effective for all cases. We also showed that 1-CI based DSA in x-y geometry was not effective for thin mesh spacings, but is effective and rapidly convergent for intermediate and thick mesh spacings. We demonstrated that the diffusion equation discretized on a coarse mesh could be employed to accelerate the transport equation. Our results showed that coarse mesh DSA is unconditionally stable and is as rapidly convergent as fine mesh DSA in slab geometry. For x-y geometry our coarse mesh DSA is very effective for thin and intermediate mesh spacings independent of the scattering ratio, but is not effective for purely scattering problems and high aspect ratio zoning. However, if the scattering ratio is less than about 0.95, this procedure is very effective for all mesh spacing.
Gonçalves, Cristina P; Mohallem, José R
2004-11-15
We report the development of a simple algorithm to modify quantum chemistry codes based on the LCAO procedure, to account for the isotope problem in electronic structure calculations. No extra computations are required compared to standard Born-Oppenheimer calculations. An upgrade of the Gamess package called ISOTOPE is presented, and its applicability is demonstrated in some examples.
NASA Astrophysics Data System (ADS)
Kehlenbeck, Matthias; Breitner, Michael H.
Business users define calculated facts based on the dimensions and facts contained in a data warehouse. These business calculation definitions contain necessary knowledge regarding quantitative relations for deep analyses and for the production of meaningful reports. The business calculation definitions are implementation and widely organization independent. But no automated procedures facilitating their exchange across organization and implementation boundaries exist. Separately each organization currently has to map its own business calculations to analysis and reporting tools. This paper presents an innovative approach based on standard Semantic Web technologies. This approach facilitates the exchange of business calculation definitions and allows for their automatic linking to specific data warehouses through semantic reasoning. A novel standard proxy server which enables the immediate application of exchanged definitions is introduced. Benefits of the approach are shown in a comprehensive case study.
Homogeneity tests of clustered diagnostic markers with applications to the BioCycle Study
Tang, Liansheng Larry; Liu, Aiyi; Schisterman, Enrique F.; Zhou, Xiao-Hua; Liu, Catherine Chun-ling
2014-01-01
Diagnostic trials often require the use of a homogeneity test among several markers. Such a test may be necessary to determine the power both during the design phase and in the initial analysis stage. However, no formal method is available for the power and sample size calculation when the number of markers is greater than two and marker measurements are clustered in subjects. This article presents two procedures for testing the accuracy among clustered diagnostic markers. The first procedure is a test of homogeneity among continuous markers based on a global null hypothesis of the same accuracy. The result under the alternative provides the explicit distribution for the power and sample size calculation. The second procedure is a simultaneous pairwise comparison test based on weighted areas under the receiver operating characteristic curves. This test is particularly useful if a global difference among markers is found by the homogeneity test. We apply our procedures to the BioCycle Study designed to assess and compare the accuracy of hormone and oxidative stress markers in distinguishing women with ovulatory menstrual cycles from those without. PMID:22733707
Zhang, Zhihong; Tendulkar, Amod; Sun, Kay; Saloner, David A; Wallace, Arthur W; Ge, Liang; Guccione, Julius M; Ratcliffe, Mark B
2011-01-01
Both the Young-Laplace law and finite element (FE) based methods have been used to calculate left ventricular wall stress. We tested the hypothesis that the Young-Laplace law is able to reproduce results obtained with the FE method. Magnetic resonance imaging scans with noninvasive tags were used to calculate three-dimensional myocardial strain in 5 sheep 16 weeks after anteroapical myocardial infarction, and in 1 of those sheep 6 weeks after a Dor procedure. Animal-specific FE models were created from the remaining 5 animals using magnetic resonance images obtained at early diastolic filling. The FE-based stress in the fiber, cross-fiber, and circumferential directions was calculated and compared to stress calculated with the assumption that wall thickness is very much less than the radius of curvature (Young-Laplace law), and without that assumption (modified Laplace). First, circumferential stress calculated with the modified Laplace law is closer to results obtained with the FE method than stress calculated with the Young-Laplace law. However, there are pronounced regional differences, with the largest difference between modified Laplace and FE occurring in the inner and outer layers of the infarct borderzone. Also, stress calculated with the modified Laplace is very different from stress in the fiber and cross-fiber direction calculated with FE. As a consequence, the modified Laplace law is inaccurate when used to calculate the effect of the Dor procedure on regional ventricular stress. The FE method is necessary to determine stress in the left ventricle with postinfarct and surgical ventricular remodeling. Copyright © 2011 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.
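An illustrative spherical-shell comparison (not the ventricular geometry) of why the thin-wall assumption matters; the paper's modified Laplace law may differ in detail:

```python
# Compare Young-Laplace sigma = P*r/(2*h) with a variant that keeps h/r
# terms by evaluating at the midwall radius. Numbers are illustrative only.

def laplace_thin(P, r, h):
    return P * r / (2.0 * h)

def laplace_midwall(P, r_inner, h):
    r_mid = r_inner + h / 2.0
    return P * r_mid / (2.0 * h)

P, r, h = 13.3, 25.0, 12.0          # kPa, mm, mm: end-diastole-like values
print(laplace_thin(P, r, h))        # ~13.9 kPa
print(laplace_midwall(P, r, h))     # ~17.2 kPa: ~25% higher for thick walls
```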
Is Earth-based scaling a valid procedure for calculating heat flows for Mars?
NASA Astrophysics Data System (ADS)
Ruiz, Javier; Williams, Jean-Pierre; Dohm, James M.; Fernández, Carlos; López, Valle
2013-09-01
Heat flow is a very important parameter for constraining the thermal evolution of a planetary body. Several procedures for calculating heat flows for Mars from geophysical or geological proxies have been used, which are valid for the time when the structures used as indicators were formed. The more common procedures are based on estimates of lithospheric strength (the effective elastic thickness of the lithosphere or the depth to the brittle-ductile transition). On the other hand, several works by Kargel and co-workers have estimated martian heat flows by scaling the present-day terrestrial heat flow to Mars, but the values so obtained are much higher than those deduced from lithospheric strength. In order to explain the discrepancy, a recent paper by Rodriguez et al. (Rodriguez, J.A.P., Kargel, J.S., Tanaka, K.L., Crown, D.A., Berman, D.C., Fairén, A.G., Baker, V.R., Furfaro, R., Candelaria, P., Sasaki, S. [2011]. Icarus 213, 150-194) criticized the heat flow calculations for ancient Mars presented by Ruiz et al. (Ruiz, J., Williams, J.-P., Dohm, J.M., Fernández, C., López, V. [2009]. Icarus 207, 631-637) and other studies calculating ancient martian heat flows from lithospheric strength estimates, and cast doubt on the validity of the results obtained by these works. Here, however, we demonstrate that the discrepancy is due to computational and conceptual errors made by Kargel and co-workers, and we conclude that scaling from terrestrial heat flow values is not a valid procedure for estimating reliable heat flows for Mars.
Computational study of Ca, Sr and Ba under pressure
NASA Astrophysics Data System (ADS)
Jona, F.; Marcus, P. M.
2006-05-01
A first-principles procedure for the calculation of equilibrium properties of crystals under hydrostatic pressure is applied to Ca, Sr and Ba. The procedure is based on minimizing the Gibbs free energy G (at zero temperature) with respect to the structure at a given pressure p, and hence does not require the equation of state to fix the pressure. The calculated lattice constants of Ca, Sr and Ba are shown to be generally closer to measured values than previous calculations using other procedures. In particular for Ba, where careful and extensive pressure data are available, the calculated lattice parameters fit measurements to about 1% in three different phases, both cubic and hexagonal. Rigid-lattice transition pressures between phases which come directly from the crossing of G(p) curves are not close to measured transition pressures. One reason is the need to include zero-point energy (ZPE) of vibration in G. The ZPE of cubic phases is calculated with a generalized Debye approximation and applied to Ca and Sr, where it produces significant shifts in transition pressures. An extensive tabulation is given of structural parameters and elastic constants from the literature, including both theoretical and experimental results.
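A toy sketch of the central idea, minimizing G = E(V) + pV at fixed p (zero temperature) with a made-up quadratic E(V); units are illustrative and the real procedure minimizes over all structural parameters:

```python
from scipy.optimize import minimize_scalar

V0, B0 = 40.0, 100.0                            # equilibrium volume, bulk modulus
E = lambda V: 0.5 * (B0 / V0) * (V - V0) ** 2   # toy energy-volume curve

def equilibrium_volume(p):
    G = lambda V: E(V) + p * V                  # Gibbs free energy at pressure p
    return minimize_scalar(G, bounds=(0.5 * V0, 1.5 * V0), method="bounded").x

for p in (0.0, 5.0, 10.0):
    print(p, equilibrium_volume(p))             # volume shrinks as p rises
```

Because p enters G directly, no equation of state is needed to fix the pressure, which is the point made above.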
Adamska, K; Bellinghausen, R; Voelkel, A
2008-06-27
The Hansen solubility parameter (HSP) is a useful tool for the thermodynamic characterization of different materials. Unfortunately, estimating HSP values can cause some problems. In this work, different procedures based on inverse gas chromatography are presented for calculating the solubility parameters of pharmaceutical excipients. The newly proposed procedure, based on the methodology of Lindvig et al., in which experimental values of the Flory-Huggins interaction parameter are used, can be a reasonable alternative for estimating HSP values. The advantage of this method is that the values of the Flory-Huggins interaction parameter chi for all test solutes are used in the calculation, so that diverse interactions between the test solutes and the material are taken into consideration.
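A sketch of a Lindvig-type fit, with the correction factor alpha ~ 0.6 from Lindvig et al. and placeholder probe data (the measured chi values here are invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

# chi_pred = alpha * V/(R T) * [(dd1-dd2)^2 + 0.25(dp1-dp2)^2 + 0.25(dh1-dh2)^2]
R, T, alpha = 8.314, 313.15, 0.6

# probe rows: V (cm^3/mol), dd, dp, dh (MPa^0.5), chi measured by IGC (made up)
probes = np.array([
    [106.8, 18.0,  1.4,  2.0, 0.52],   # toluene
    [ 81.7, 16.8,  5.7,  8.0, 0.01],   # tetrahydrofuran
    [ 58.5, 15.8,  8.8, 19.4, 0.40],   # ethanol
])

def chi_pred(hsp, probe):
    dd, dp, dh = hsp
    V, dd1, dp1, dh1 = probe[:4]
    dist2 = (dd1 - dd) ** 2 + 0.25 * (dp1 - dp) ** 2 + 0.25 * (dh1 - dh) ** 2
    return alpha * V / (R * T) * dist2

res = least_squares(
    lambda hsp: np.array([chi_pred(hsp, p) - p[4] for p in probes]),
    x0=np.array([17.0, 6.0, 8.0]))
print(res.x)        # fitted (dd, dp, dh) of the excipient
```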
[Evidence based medicine and cost-effectiveness analysis in ophthalmology].
Nováková, D; Rozsíval, P
2004-09-01
To make the reader familiar with the term evidence-based medicine (EBM), to explain the principle of cost-effectiveness analysis (cost versus benefit), and to show its usefulness in comparing the effectiveness of different medical procedures. Based on a few examples, the relevance and calculation of the important parameters of cost-effectiveness (CE) analysis, such as the utility value (UV) and quality-adjusted life years (QALY), are explained. In addition, the calculation of UV and QALY for cataract surgery, including its complications, is provided. According to this method, laser photocoagulation and cryocoagulation of the early stages of retinopathy of prematurity, treatment of amblyopia, cataract surgery of one or both eyes, and, among the vitreoretinal procedures, early vitrectomy for hemophthalmus in proliferative diabetic retinopathy and grid laser photocoagulation for diabetic macular edema or for visual loss due to branch retinal vein occlusion belong to the highly effective procedures. On the other hand, the procedures with low cost-effectiveness include treating central retinal artery occlusion with anterior chamber paracentesis or with CO2 inhalation, and photodynamic therapy for choroidal neovascularization in age-related macular degeneration with visual acuity of the better eye of 20/200. Cost-effectiveness analysis is a promising new method for evaluating the success of a medical procedure by comparing the final effect with the financial costs. In evaluating the effectiveness of individual procedures, three main aspects are considered: the subjective impact of the disease on the patient's life, the objective results of clinical examination, and the financial costs of the procedure. According to this method, cataract surgery as well as procedures in pediatric ophthalmology belong to the most effective surgical methods.
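The cost-utility arithmetic is simple; a sketch with hypothetical numbers:

```python
# QALYs gained = (UV_after - UV_before) * years of benefit;
# cost-effectiveness is then cost per QALY. All numbers are hypothetical.

def cost_per_qaly(cost, uv_before, uv_after, years):
    qalys_gained = (uv_after - uv_before) * years
    return cost / qalys_gained

# Hypothetical cataract surgery: UV 0.68 -> 0.89 over 12 remaining years
print(cost_per_qaly(1200.0, 0.68, 0.89, 12.0))   # ~476 per QALY: very favorable
```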
NASA Astrophysics Data System (ADS)
Shchinnikov, P. A.; Safronov, A. V.
2014-12-01
General principles are stated for a procedure for matching the energy balances of thermal power plants (TPPs), the use of which enhances the accuracy of information-measuring systems (IMSs) in calculations of performance characteristics (PCs). To do this, the values of measured and calculated variables may be varied within intervals determined by measurement errors and regulations. An example of matching the energy balances of a thermal power plant with a T-180 turbine is given. The proposed procedure reduces the divergence of the balance equations by a factor of 3-4. It is also shown that the equipment operation mode affects the profit deficiency. Dependences of the divergence of the energy balances on the deviation of the input parameters, and calculated data for the fuel economy before and after matching the energy balances, are presented.
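One common way to "match" balances is weighted least-squares data reconciliation under a linear balance constraint; whether the plant procedure takes exactly this form is our assumption:

```python
import numpy as np

# min (x - x_m)^T W (x - x_m)  s.t.  A x = 0, with W = diag(1/sigma^2):
# adjust measured values within their error bounds so the balance closes.

def reconcile(x_m, sigma, A):
    V = np.diag(sigma ** 2)
    # Lagrange solution: x = x_m - V A^T (A V A^T)^-1 A x_m
    lam = np.linalg.solve(A @ V @ A.T, A @ x_m)
    return x_m - V @ A.T @ lam

# Toy heat balance: fuel heat in = steam heat out + losses (x1 - x2 - x3 = 0)
x_m = np.array([100.0, 92.0, 5.0])          # measured, imbalance of 3 units
sigma = np.array([2.0, 1.5, 0.5])           # measurement uncertainties
A = np.array([[1.0, -1.0, -1.0]])
print(reconcile(x_m, sigma, A))             # balance now closes exactly
```

Note how the least-reliable measurement (largest sigma) absorbs most of the adjustment, which is the intended behavior of balance matching.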
NASA Astrophysics Data System (ADS)
Zaripov, D. I.; Renfu, Li
2018-05-01
The implementation of high-efficiency digital image correlation methods based on a zero-normalized cross-correlation (ZNCC) procedure for high-speed, time-resolved measurements using a high-resolution digital camera involves big-data processing and is often time-consuming. In order to speed up the ZNCC computation, a high-speed technique based on a parallel projection correlation procedure is proposed. The proposed technique uses projections of the interrogation window instead of its two-dimensional field of luminous intensity. This simplification accelerates the ZNCC computation by up to 28.8 times compared with ZNCC calculated directly, depending on the size of the interrogation window and the region of interest. The results of three synthetic test cases, a one-dimensional uniform flow, a linear shear flow, and a turbulent boundary-layer flow, are discussed in terms of accuracy. In the latter case, the proposed technique is implemented together with an iterative window-deformation technique. On the basis of the results of the present work, the proposed technique is recommended for initial velocity field calculation, with further correction using more accurate techniques.
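A simplified illustration of the projection idea (our reading of the mechanism, not the paper's code): correlate row and column sums instead of full 2D windows, reducing O(N^2) data per comparison to O(N):

```python
import numpy as np

def zncc_1d(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def projection_correlation(win_a, win_b):
    # Average the ZNCC of the x- and y-projections of the two windows.
    cx = zncc_1d(win_a.sum(axis=0), win_b.sum(axis=0))
    cy = zncc_1d(win_a.sum(axis=1), win_b.sum(axis=1))
    return 0.5 * (cx + cy)

rng = np.random.default_rng(3)
img = rng.standard_normal((64, 64))
shifted = np.roll(img, shift=(0, 2), axis=(0, 1))   # 2-pixel displacement
print(projection_correlation(img[16:48, 16:48], shifted[16:48, 16:48]))  # lower
print(projection_correlation(img[16:48, 16:48], shifted[16:48, 18:50]))  # ~1.0
```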
47 CFR 1.1623 - Probability calculation.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Probability calculation. 1.1623 Section 1.1623 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1623 Probability calculation. (a) All calculations shall be...
Prediction of Combustion Gas Deposit Compositions
NASA Technical Reports Server (NTRS)
Kohl, F. J.; Mcbride, B. J.; Zeleznik, F. J.; Gordon, S.
1985-01-01
A demonstrated procedure is used to accurately predict the chemical compositions of complicated deposit mixtures. NASA Lewis Research Center's Computer Program for Calculation of Complex Chemical Equilibrium Compositions (CEC) is used in conjunction with the Computer Program for Calculation of Ideal Gas Thermodynamic Data (PAC) and the resulting Thermodynamic Data Base (THDATA) to predict deposit compositions from metal- or mineral-seeded combustion processes.
Evaluation of atomic pressure in the multiple time-step integration algorithm.
Andoh, Yoshimichi; Yoshii, Noriyuki; Yamada, Atsushi; Okazaki, Susumu
2017-04-15
In molecular dynamics (MD) calculations, reduction of the calculation time per MD loop is essential. A multiple time-step (MTS) integration algorithm, RESPA (Tuckerman and Berne, J. Chem. Phys. 1992, 97, 1990-2001), reduces calculation time by decreasing the frequency of time-consuming long-range interaction calculations. However, the RESPA MTS algorithm involves uncertainties in evaluating the atomic interaction-based pressure (i.e., atomic pressure) of systems with and without holonomic constraints: it is not clear which intermediate forces and constraint forces in the MTS integration procedure should be used to calculate the atomic pressure. In this article, we propose a series of equations to evaluate the atomic pressure in the RESPA MTS integration procedure on the basis of its equivalence to the Velocity-Verlet integration procedure with a single time step (STS). The equations guarantee time-reversibility even for systems with holonomic constraints. Furthermore, we generalize the equations to both (i) an arbitrary number of inner time steps and (ii) an arbitrary number of force components (RESPA levels). The atomic pressure calculated by our equations with the MTS integration shows excellent agreement with the reference STS value, whereas pressures calculated using the conventional ad hoc equations deviate from it. Our equations can be extended straightforwardly to the MTS integration algorithm for the isothermal NVT and isothermal-isobaric NPT ensembles. © 2017 Wiley Periodicals, Inc.
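For orientation, a minimal one-particle, two-level RESPA step (the paper's pressure bookkeeping sits on top of an integrator of this shape; forces and parameters below are placeholders):

```python
import numpy as np

# Velocity Verlet outside (slow force), n_inner sub-steps for the fast force.

def respa_step(x, v, m, f_fast, f_slow, dt, n_inner):
    v += 0.5 * dt * f_slow(x) / m          # slow kick, half outer step
    dt_in = dt / n_inner
    for _ in range(n_inner):               # fast sub-integration
        v += 0.5 * dt_in * f_fast(x) / m
        x += dt_in * v
        v += 0.5 * dt_in * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m          # closing slow kick
    return x, v

# Harmonic "fast" bond force plus a weak constant "slow" field:
f_fast = lambda x: -100.0 * x
f_slow = lambda x: np.full_like(x, 0.5)
x, v = np.array([0.1]), np.array([0.0])
for _ in range(1000):
    x, v = respa_step(x, v, 1.0, f_fast, f_slow, dt=0.05, n_inner=10)
print(x, v)                                # bounded oscillation: stable
```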
NASA Astrophysics Data System (ADS)
He, Jiangang; Franchini, Cesare
2017-11-01
In this paper we assess the predictive power of the self-consistent hybrid functional scPBE0 in calculating the band gap of oxide semiconductors. The computational procedure is based on the self-consistent evaluation of the mixing parameter α by means of an iterative calculation of the static dielectric constant using the perturbation expansion after discretization (PEAD) method and making use of the relation α = 1/ε∞.
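The self-consistency has a simple fixed-point structure; in the sketch below the dielectric model is a made-up stand-in for the actual PEAD evaluation of ε∞:

```python
# Iterate alpha -> 1/eps_inf(alpha) to a fixed point.

def eps_inf(alpha):
    # Hypothetical monotone model: more exact exchange -> larger gap ->
    # smaller dielectric constant. A real run recomputes this ab initio.
    return 6.0 - 3.0 * alpha

alpha = 0.25                     # PBE0 starting guess
for it in range(50):
    alpha_new = 1.0 / eps_inf(alpha)
    if abs(alpha_new - alpha) < 1e-6:
        break
    alpha = alpha_new
print(it, alpha_new)             # converged mixing parameter (~0.18 here)
```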
Graphing within-subjects confidence intervals using SPSS and S-Plus.
Wright, Daniel B
2007-02-01
Within-subjects confidence intervals are often appropriate to report and to display. Loftus and Masson (1994) have reported methods to calculate these, and their use is becoming common. In the present article, procedures for calculating within-subjects confidence intervals in SPSS and S-Plus are presented (an R version is on the accompanying Web site). The procedure in S-Plus allows the user to report the bias corrected and adjusted bootstrap confidence intervals as well as the standard confidence intervals based on traditional methods. The presented code can be easily altered to fit the individual user's needs.
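For readers outside SPSS/S-Plus, a Python sketch of the normalization route (the Cousineau-Morey variant; Loftus and Masson's MS-error-based intervals, which the article implements, differ in detail):

```python
import numpy as np
from scipy import stats

def within_subject_ci(data, conf=0.95):
    """data: (n_subjects, k_conditions) array; returns per-condition half-widths."""
    n, k = data.shape
    # Remove each subject's mean, add back the grand mean:
    norm = data - data.mean(axis=1, keepdims=True) + data.mean()
    sem = norm.std(axis=0, ddof=1) / np.sqrt(n)
    sem *= np.sqrt(k / (k - 1.0))                  # Morey (2008) correction
    t = stats.t.ppf(0.5 + conf / 2.0, n - 1)
    return t * sem

rng = np.random.default_rng(7)
subject_effect = rng.normal(0, 5, size=(20, 1))    # large between-subject noise
data = subject_effect + rng.normal([10.0, 11.0, 12.5], 1.0, size=(20, 3))
print(within_subject_ci(data))   # small: subject variance has been removed
```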
St. Lawrence River Freeze-Up Forecast Procedure.
ERIC Educational Resources Information Center
Assel, R. A.
A standard operating procedure (SOP) is presented for calculating the date of freeze-up on the St. Lawrence River at Massena, N.Y. The SOP is based on two empirical temperature decline equations developed for Kingston, Ontario, and Massena, N.Y., respectively. Input data needed to forecast freeze-up consist of the forecast December flow rate and…
Young, David W
2015-11-01
Historically, hospital departments have computed the costs of individual tests or procedures using the ratio of cost to charges (RCC) method, which can produce inaccurate results. To determine a more accurate cost of a test or procedure, the activity-based costing (ABC) method must be used. Accurate cost calculations will ensure reliable information about the profitability of a hospital's DRGs.
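A toy contrast of the two methods (all numbers hypothetical): RCC spreads cost in proportion to charges, while ABC builds cost from the activities a procedure actually consumes.

```python
def rcc_cost(charge, dept_total_cost, dept_total_charges):
    # Ratio of cost to charges applied to this procedure's charge.
    return charge * dept_total_cost / dept_total_charges

def abc_cost(activity_usage, activity_rates):
    # Sum of (cost-driver rate * quantity consumed) over activities.
    return sum(activity_rates[a] * q for a, q in activity_usage.items())

# Department with $2.0M cost and $5.0M charges -> RCC of 0.40:
print(rcc_cost(900.0, 2.0e6, 5.0e6))                  # $360 by RCC

rates = {"tech_hour": 48.0, "machine_hour": 120.0, "supplies_unit": 35.0}
usage = {"tech_hour": 1.5, "machine_hour": 0.75, "supplies_unit": 4.0}
print(abc_cost(usage, rates))                         # $302 by ABC
```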
Improvement of a 2D numerical model of lava flows
NASA Astrophysics Data System (ADS)
Ishimine, Y.
2013-12-01
I propose an improved procedure that reduces an improper dependence of lava flow directions on the orientation of the Digital Elevation Model (DEM) grid in two-dimensional simulations based on Ishihara et al. (in Lava Flows and Domes, Fink, JH eds., 1990). The numerical model for lava flow simulations proposed by Ishihara et al. (1990) is based on a two-dimensional shallow-water model combined with a constitutive equation for a Bingham fluid. It is simple but useful because it properly reproduces the distributions of actual lava flows. It has thus been regarded as pioneering work in the numerical simulation of lava flows and is still widely used in practical hazard prediction maps for civil defense officials in Japan. However, the model includes an improper dependence of lava flow directions on the orientation of the DEM because it separately applies the condition for the lava flow to stop due to yield stress along each of the two orthogonal axes of the rectangular calculation grid based on the DEM. This procedure produces a diamond-shaped distribution, as shown in Fig. 1, when calculating a lava flow supplied from a point source on a virtual flat plane, although the distribution should be circular. To remedy this drawback, I propose a modified procedure that uses the absolute value of the stress condition derived from both orthogonal components of the slope steepness to assign the condition for lava flows to stop. This yields a better result, as shown in Fig. 2. Fig. 1. Contour plots calculated with the original model of Ishihara et al. (1990). Fig. 2. Contour plots calculated with the proposed model.
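The difference between the two stop conditions fits in a few lines (schematic; variable names are ours):

```python
import numpy as np

# tau_drive_x, tau_drive_y: gravitational driving stresses along the grid
# axes; tau_yield: Bingham yield stress of the lava.

def stops_per_axis(tau_drive_x, tau_drive_y, tau_yield):
    # Original treatment: each axis tested separately -> grid-aligned bias.
    return abs(tau_drive_x) <= tau_yield and abs(tau_drive_y) <= tau_yield

def stops_magnitude(tau_drive_x, tau_drive_y, tau_yield):
    # Proposed treatment: compare the magnitude of the driving-stress vector.
    return np.hypot(tau_drive_x, tau_drive_y) <= tau_yield

# A diagonal slope that the per-axis test lets "stop" but the magnitude
# test does not (tau_yield = 1.0):
print(stops_per_axis(0.8, 0.8, 1.0))    # True  -> flow halts, diamond shape
print(stops_magnitude(0.8, 0.8, 1.0))   # False -> flow continues, circular
```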
Automated reconstruction of rainfall events responsible for shallow landslides
NASA Astrophysics Data System (ADS)
Vessia, G.; Parise, M.; Brunetti, M. T.; Peruccacci, S.; Rossi, M.; Vennari, C.; Guzzetti, F.
2014-04-01
Over the last 40 years, many contributions have been devoted to identifying empirical rainfall thresholds (e.g. intensity vs. duration, ID; cumulated rainfall vs. duration, ED; cumulated rainfall vs. intensity, EI) for the initiation of shallow landslides, based on local as well as worldwide inventories. Although different methods to trace the threshold curves have been proposed and discussed in the literature, a systematic study to develop an automated procedure for selecting the rainfall event responsible for a landslide occurrence has rarely been addressed. Nonetheless, objective criteria for estimating the rainfall responsible for the landslide occurrence (effective rainfall) play a prominent role in determining the threshold values. In this paper, two criteria for the identification of effective rainfall events are presented: (1) the first is based on the analysis of the time series of rainfall mean intensity values over one month preceding the landslide occurrence, and (2) the second on the analysis of the trend in time of the cumulated mean intensity series calculated from the rainfall records measured by rain gauges. The two criteria have been implemented in an automated procedure written in the R language. A sample of 100 shallow landslides collected in Italy by the CNR-IRPI research group from 2002 to 2012 has been used to calibrate the proposed procedure. The cumulated rainfall E and duration D of the rainfall events that triggered the documented landslides are calculated with the new procedure and fitted with a power law in the (D,E) diagram. The results are discussed by comparing the (D,E) pairs calculated by the automated procedure with those obtained by the expert method.
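A minimal sketch of the power-law fitting step, with hypothetical (D,E) pairs standing in for the landslide-triggering events: a power law E = a·D^b is linear in log-log space, so an ordinary least-squares fit suffices.

```python
import numpy as np

D = np.array([6.0, 12.0, 24.0, 48.0, 96.0])    # rainfall duration (h), placeholder
E = np.array([18.0, 26.0, 35.0, 52.0, 80.0])   # cumulated rainfall (mm), placeholder

# log E = log a + b log D, so fit a line in log-log coordinates.
b, log_a = np.polyfit(np.log(D), np.log(E), 1)
a = np.exp(log_a)
print(f"E = {a:.2f} * D^{b:.2f}")
```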
Calculation of nanodrop profile from fluid density distribution.
Berim, Gersh O; Ruckenstein, Eli
2016-05-01
Two approaches are examined, which can be used to determine the drop profile from the fluid density distributions (FDDs) obtained on the basis of microscopic theories. For simplicity, only two-dimensional (cylindrical, or axisymmetrical) distributions are examined, and it is assumed that the fluid is either in contact with a smooth solid or separated from the smooth solid by a lubricating liquid film. The first approach is based on the sharp-kink interface approximation, in which the density of the liquid inside and the density of the vapor outside the drop are constant, with the exception of the surface layer of the drop where the density differs from the above ones. In this case, the drop profile was calculated by minimizing the total potential energy of the system. The second approach is based on a nonuniform FDD obtained either by density functional theory or molecular dynamics simulations. To determine the drop profile from such an FDD, which does not contain sharp interfaces, three procedures can be used. In the first two procedures, P1 and P2, the one-dimensional FDDs along straight lines parallel to the surface of the solid are extracted from the two-dimensional FDD. Each of these one-dimensional FDDs has a vapor-liquid interface at which the fluid density changes from vapor-like to liquid-like values. Procedure P1 uses the locations of the equimolar dividing surfaces for the one-dimensional FDDs as points of the drop profile. Procedure P2 is based on the assumption that the fluid density is constant on the surface of the drop, that density being selected either arbitrarily or as the fluid density at the location of the equimolar dividing surface for one of the one-dimensional FDDs employed in procedure P1. In the third procedure, P3, which is suggested for the first time in this paper, the one-dimensional FDDs are taken along straight lines passing through a selected point inside the drop (radial lines). Then, the drop profile is calculated as in procedure P1. It is shown that procedure P3 provides a drop profile which is more reasonable than the other ones. The relationship of the discussed procedures to those used in image analysis is briefly discussed. Copyright © 2016 Elsevier B.V. All rights reserved.
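A minimal sketch of the equimolar-dividing-surface step used by procedure P1, under the usual equal-area (Gibbs) construction; the tanh density profile and its parameters are illustrative assumptions, not data from the paper.

```python
import numpy as np
from scipy.integrate import trapezoid

def equimolar_position(x, rho, rho_liq, rho_vap):
    """Equal-area (Gibbs) dividing surface of a 1-D density profile:
    the step profile at x_e carries the same integrated excess density."""
    excess = trapezoid(rho - rho_vap, x)
    return x[0] + excess / (rho_liq - rho_vap)

# Toy liquid-to-vapor profile with its interface centered at x = 8.
x = np.linspace(0.0, 20.0, 401)
rho = 0.8 - 0.79 * 0.5 * (1.0 + np.tanh((x - 8.0) / 1.5))
print(equimolar_position(x, rho, rho_liq=0.8, rho_vap=0.01))   # ~8.0
```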
NASA Technical Reports Server (NTRS)
Cancro, George J.; Tolson, Robert H.; Keating, Gerald M.
1998-01-01
The success of aerobraking by the Mars Global Surveyor (MGS) spacecraft was partly due to the analysis of MGS accelerometer data. Accelerometer data was used to determine the effect of the atmosphere on each orbit, to characterize the nature of the atmosphere, and to predict the atmosphere for future orbits. To interpret the accelerometer data, a data reduction procedure was developed to produce density estimations utilizing inputs from the spacecraft, the Navigation Team, and pre-mission aerothermodynamic studies. This data reduction procedure was based on the calculation of aerodynamic forces from the accelerometer data by considering acceleration due to gravity gradient, solar pressure, angular motion of the MGS, instrument bias, thruster activity, and a vibration component due to the motion of the damaged solar array. Methods were developed to calculate all of the acceleration components including a 4 degree of freedom dynamics model used to gain a greater understanding of the damaged solar array. The total error inherent to the data reduction procedure was calculated as a function of altitude and density considering contributions from ephemeris errors, errors in force coefficient, and instrument errors due to bias and digitization. Comparing the results from this procedure to the data of other MGS Teams has demonstrated that this procedure can quickly and accurately describe the density and vertical structure of the Martian upper atmosphere.
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work the performance of two neutron spectrum unfolding codes, based on iterative procedures and on artificial neural networks respectively, is evaluated. The first code, based on traditional iterative procedures and called Neutron Spectrometry and Dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter responses. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron Spectrometry and Dosimetry with Artificial Neural Networks (NSDann), is designed using neural-net technology. The artificial intelligence approach of a neural net does not solve mathematical equations: by using the knowledge stored in the synaptic weights of a properly trained neural net, the code is able to unfold the neutron spectrum and simultaneously calculate 15 dosimetric quantities, needing as input only the count rates measured with a Bonner sphere system. The similarities of the NSDUAZ and NSDann codes are that they follow the same easy and intuitive user philosophy and were designed with a graphical interface in the LabVIEW programming environment. Both codes unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities, and generate a full report in HTML format. The differences between the codes are as follows: NSDUAZ was designed using classical iterative approaches and needs an initial guess spectrum to start the iterative procedure, and it includes a routine that calculates 7 IAEA survey meter responses using fluence-to-dose conversion coefficients; NSDann uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained network. Contrary to iterative procedures, the neural-net approach makes it possible to reduce the number of count rates used to unfold the spectrum. To evaluate these codes, a computer package called the Neutron Spectrometry and Dosimetry computer tool was designed; the results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
NASA Astrophysics Data System (ADS)
Bournia-Petrou, Ethel A.
The main goal of this investigation was to study how student rank in class, student gender, and skill sequence affect high school students' performance on the lab skills involved in a laboratory-based inquiry task in physics. The focus of the investigation was the effect of skill sequence as determined by the particular task. The skills considered were: Hypothesis, Procedure, Planning, Data, Graph, Calculations, and Conclusion. Three physics lab tasks based on the simple pendulum concept were administered to 282 Regents physics high school students. The reliability of the designed tasks was high. Student performance was evaluated from individual written responses using a scoring rubric. The tasks had high discrimination power and were of moderate difficulty (65%). It was found that student performance was weak on Conclusion (42%), Hypothesis (48%), and Procedure (51%), where the numbers in parentheses represent the mean as a percentage of the maximum possible score. Student performance was strong on Calculations (91%), Data (82%), Graph (74%), and Plan (68%). Of all seven skills, Procedure had the strongest correlation (.73) with overall task performance. Correlation analysis revealed some strong relationships among the seven skills, which were grouped into two distinct clusters: Hypothesis, Procedure, and Plan belong to one, and Data, Graph, Calculations, and Conclusion belong to the other. This distinction may indicate different mental processes at play within each skill cluster. The effect of student rank was not statistically significant according to the MANOVA results, due to the large variation of rank levels among the participating schools. The effect of gender was significant on the entire test because of performance differences on Calculations and Graph, where male students performed better than female students. Skill sequence had a significant effect on the skills of Procedure, Plan, Data, and Conclusion. Students are rather weak in proposing a sensible, detailed procedure for an inquiry task that involves a "novel" concept. However, they perform better on Procedure and Plan if the "novel" task is not preceded by another that explicitly offers step-by-step procedure instructions. It was concluded that the format of detailed, structured instructions often adopted by many commercial and school-developed lab books and conventional lab practices fails to prepare students to propose a successful, detailed procedure when faced with a slightly "novel", lab-based inquiry task. Student performance on Data collection was higher in the tasks that involved the more familiar experimental arrangement than in the tasks using the slightly "novel" equipment. Student performance on Conclusion was better in tasks where they had to collect the Data themselves than in tasks where all relevant Data information was given to them.
Naseri, H; Homaeinezhad, M R; Pourkhajeh, H
2013-09-01
The major aim of this study is to describe a unified procedure for detecting noisy segments and spikes in transduced signals with a cyclic but non-stationary periodic nature. According to this procedure, the cycles of the signal (onset and offset locations) are detected. Then, the cycles are clustered into a finite number of groups based on appropriate geometrical and frequency-based time series. Next, the median template of each time series of each cluster is calculated. Afterwards, a correlation-based technique is devised for making a comparison between a test cycle feature and the associated time series of each cluster. Finally, by applying a suitably chosen threshold to the calculated correlation values, a segment is classified as either clean or noisy. As a key merit of this research, the procedure can provide decision support for choosing between orthogonal-expansion-based filtering and removal of noisy segments. In this paper, the application of the proposed method is comprehensively described by applying it to phonocardiogram (PCG) signals to find noisy cycles. The database consists of 126 records from several patients of a domestic research station, acquired with a 3M Littmann® 3200 electronic stethoscope at a 4 kHz sampling frequency. By applying the noisy-segment detection algorithm to this database, a sensitivity of Se = 91.41% and a positive predictive value of PPV = 92.86% were obtained based on physicians' assessments. Copyright © 2013 Elsevier Ltd. All rights reserved.
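A minimal sketch of the correlation-against-median-template decision described above; the function name, threshold value, and toy signals are illustrative assumptions (cycles are assumed resampled to a common length), not the authors' implementation.

```python
import numpy as np

def is_clean(test_feature, cluster_features, threshold=0.9):
    """True if the test cycle correlates well with its cluster's
    median template, False (noisy) otherwise."""
    template = np.median(np.asarray(cluster_features), axis=0)
    r = np.corrcoef(test_feature, template)[0, 1]
    return r >= threshold

# Toy usage: three clean-ish cycles define the template; a heavily
# distorted cycle fails the correlation test.
t = np.linspace(0, 1, 200)
clean = [np.sin(2 * np.pi * t) + 0.05 * np.random.randn(200) for _ in range(3)]
noisy = np.sin(2 * np.pi * t) + 1.5 * np.random.randn(200)
print(is_clean(clean[0], clean), is_clean(noisy, clean))   # True False
```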
Soil Studies: Applying Acid-Base Chemistry to Environmental Analysis.
ERIC Educational Resources Information Center
West, Donna M.; Sterling, Donna R.
2001-01-01
Laboratory activities for chemistry students focus attention on the use of acid-base chemistry to examine environmental conditions. After using standard laboratory procedures to analyze soil and rainwater samples, students use web-based resources to interpret their findings. Uses CBL probes and graphing calculators to gather and analyze data and…
77 FR 61604 - Exposure Modeling Public Meeting; Notice of Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-10
..., birds, reptiles, and amphibians: Model Parameterization and Knowledge base Development. 4. Standard Operating Procedure for calculating degradation kinetics. 5. Aquatic exposure modeling using field studies...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buhl, T.E.; Hansen, W.R.
1984-05-01
Estimators for calculating the risk of cancer and genetic disorders induced by exposure to ionizing radiation have been recommended by the US National Academy of Sciences Committee on the Biological Effects of Ionizing Radiations, the UN Scientific Committee on the Effects of Atomic Radiation, and the International Committee on Radiological Protection. These groups have also considered the risks of somatic effects other than cancer. The US National Council on Radiation Protection and Measurements has discussed risk estimate procedures for radiation-induced health effects. The recommendations of these national and international advisory committees are summarized and compared in this report. Based on this review, two procedures for risk estimation are presented for use in radiological assessments performed by the US Department of Energy under the National Environmental Policy Act of 1969 (NEPA). In the first procedure, age- and sex-averaged risk estimators calculated with US average demographic statistics would be used with estimates of radiation dose to calculate the projected risk of cancer and genetic disorders that would result from the operation being reviewed under NEPA. If more site-specific risk estimators are needed, and the demographic information is available, a second procedure is described that would involve direct calculation of the risk estimators using recommended risk-rate factors. The computer program REPCAL has been written to perform this calculation and is described in this report. 25 references, 16 tables.
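A worked toy version of the first procedure, with entirely hypothetical numbers (neither the report's estimators nor REPCAL's logic): the projected effect count is the collective dose multiplied by an averaged risk estimator.

```python
# Projected excess cases = collective dose x averaged risk estimator.
collective_dose_person_sv = 120.0    # person-Sv over the exposed population (placeholder)
risk_per_person_sv = 5.0e-2          # hypothetical lifetime risk estimator (per Sv)

projected_cases = collective_dose_person_sv * risk_per_person_sv
print(f"Projected excess cases: {projected_cases:.1f}")   # 6.0
```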
Crystal structure optimisation using an auxiliary equation of state
NASA Astrophysics Data System (ADS)
Jackson, Adam J.; Skelton, Jonathan M.; Hendon, Christopher H.; Butler, Keith T.; Walsh, Aron
2015-11-01
Standard procedures for local crystal-structure optimisation involve numerous energy and force calculations. It is common to calculate an energy-volume curve, fitting an equation of state around the equilibrium cell volume. This is a computationally intensive process, in particular, for low-symmetry crystal structures where each isochoric optimisation involves energy minimisation over many degrees of freedom. Such procedures can be prohibitive for non-local exchange-correlation functionals or other "beyond" density functional theory electronic structure techniques, particularly where analytical gradients are not available. We present a simple approach for efficient optimisation of crystal structures based on a known equation of state. The equilibrium volume can be predicted from one single-point calculation and refined with successive calculations if required. The approach is validated for PbS, PbTe, ZnS, and ZnTe using nine density functionals and applied to the quaternary semiconductor Cu2ZnSnS4 and the magnetic metal-organic framework HKUST-1.
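As a sketch of the equation-of-state machinery involved (with hypothetical volumes and energies, not data from the paper), one can fit a third-order Birch-Murnaghan E(V) form to a handful of single-point energies and read off the equilibrium volume:

```python
import numpy as np
from scipy.optimize import curve_fit

def birch_murnaghan(V, E0, V0, B0, Bp):
    """Third-order Birch-Murnaghan energy-volume relation."""
    eta = (V0 / V) ** (2.0 / 3.0)
    return E0 + 9.0 * V0 * B0 / 16.0 * (
        (eta - 1.0) ** 3 * Bp + (eta - 1.0) ** 2 * (6.0 - 4.0 * eta)
    )

V = np.array([36.0, 38.0, 40.0, 42.0, 44.0])        # cell volumes (A^3), placeholder
E = np.array([-8.35, -8.42, -8.45, -8.44, -8.40])   # energies (eV), placeholder

p0 = (E.min(), V[np.argmin(E)], 1.0, 4.0)           # rough initial guess
(E0, V0, B0, Bp), _ = curve_fit(birch_murnaghan, V, E, p0=p0)
print(f"Equilibrium volume: {V0:.2f} A^3")
```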
Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio
2016-10-01
We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two 13C atoms (13C2-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of 13C2-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural-abundance RGGGLK peptide and 10 or 20% 13C2-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
ERIC Educational Resources Information Center
van Iterson, Loretta; Augustijn, Paul B.; de Jong, Peter F.; van der Leij, Aryan
2013-01-01
The goal of this study was to investigate reliable cognitive change in epilepsy by developing computational procedures to determine reliable change index scores (RCIs) for the Dutch Wechsler Intelligence Scales for Children. First, RCIs were calculated based on stability coefficients from a reference sample. Then, these RCIs were applied to a…
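The study's exact computational procedures are not reproduced here; as a minimal sketch, the standard Jacobson-Truax form of a reliable change index built from a stability (test-retest) coefficient looks like this, with placeholder values:

```python
import math

def rci(score_1, score_2, sd_baseline, r_stability):
    """Change score divided by the standard error of the difference,
    where SEM = SD * sqrt(1 - r) and SE_diff = sqrt(2) * SEM."""
    sem = sd_baseline * math.sqrt(1.0 - r_stability)
    se_diff = math.sqrt(2.0) * sem
    return (score_2 - score_1) / se_diff

print(rci(95, 104, 15.0, 0.90))   # |RCI| > 1.96 suggests reliable change
```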
Nonglobal correlations in collider physics
Moult, Ian; Larkoski, Andrew J.
2016-01-13
Despite their importance for precision QCD calculations, correlations between in- and out-of-jet regions of phase space have never been directly observed. These so-called non-global effects are present generically whenever a collider physics measurement does not depend explicitly on radiation throughout the entire phase space. In this paper, we introduce a novel procedure based on mutual information, which allows us to isolate these non-global correlations between measurements made in different regions of phase space. We study this procedure both analytically and in Monte Carlo simulations in the context of observables measured on hadronic final states produced in e+e- collisions, though it is more widely applicable. The procedure exploits the sensitivity of soft radiation at large angles to non-global correlations, and we calculate these correlations through next-to-leading logarithmic accuracy. The bulk of these non-global correlations is found to be described by Monte Carlo simulation. They are increased by the inclusion of non-perturbative effects, which we show can be incorporated in our calculation through the use of a model shape function. As a result, this procedure illuminates the source of non-global correlations and has connections more broadly to fundamental quantities in quantum field theory.
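A minimal sketch of measuring mutual information from Monte Carlo samples via a two-dimensional histogram; the correlated Gaussian toys below merely stand in for event-level in- and out-of-jet observables and are not the paper's setup.

```python
import numpy as np

def mutual_information(x, y, bins=30):
    """Histogram estimate of I(X;Y) in nats from paired samples."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 0.3 * x + rng.normal(size=100_000)   # correlated toy observables
print(mutual_information(x, y))          # > 0 signals correlation
```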
Boundary condition computational procedures for inviscid, supersonic steady flow field calculations
NASA Technical Reports Server (NTRS)
Abbett, M. J.
1971-01-01
Results are given of a comparative study of numerical procedures for computing solid-wall boundary points in supersonic inviscid flow calculations. Twenty-five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with complications for three-dimensional and time-dependent flows.
NASA Technical Reports Server (NTRS)
Vadyak, J.; Hoffman, J. D.; Bishop, A. R.
1978-01-01
The calculation procedure is based on the method of characteristics for steady three-dimensional flow. The bow shock wave and the internal shock wave system were computed using a discrete shock wave fitting procedure. The general structure of the computer program is discussed, and a brief description of each subroutine is given. All program input parameters are defined, and a brief discussion on interpretation of the output is provided. A number of sample cases, complete with data deck listings, are presented.
Garcia, F; Arruda-Neto, J D; Manso, M V; Helene, O M; Vanin, V R; Rodriguez, O; Mesa, J; Likhachev, V P; Filho, J W; Deppman, A; Perez, G; Guzman, F; de Camargo, S P
1999-10-01
A new and simple statistical procedure (STATFLUX) for the calculation of transfer coefficients of radionuclide transport to animals and plants is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. By using experimentally available curves of radionuclide concentrations versus time, for each animal compartment (organs), flow parameters were estimated by employing a least-squares procedure, whose consistency is tested. Some numerical results are presented in order to compare the STATFLUX transfer coefficients with those from other works and experimental data.
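A minimal sketch of the least-squares step for a linear compartment model; the single source-to-organ transfer below, its rate constants, and the synthetic concentration curves are all illustrative assumptions, not the STATFLUX formulation itself.

```python
import numpy as np

# For dC_organ/dt = k_in * C_source - k_out * C_organ, the derivative is
# linear in (k_in, k_out), so ordinary least squares recovers the rates
# from sampled concentration curves.
t = np.linspace(0.0, 10.0, 21)
c_source = np.exp(-0.4 * t)                           # measured source curve
c_organ = 0.5 * (np.exp(-0.1 * t) - np.exp(-0.4 * t)) # measured organ curve
# (true values for this synthetic data: k_in = 0.15, k_out = 0.10)

dc = np.gradient(c_organ, t)
A = np.column_stack([c_source, -c_organ])
(k_in, k_out), *_ = np.linalg.lstsq(A, dc, rcond=None)
print(k_in, k_out)
```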
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hueda, A.U.; Perez, B.L.; Jodra, L.G.
1960-01-01
A presentation is made of calculation methods for ion-exchange installations based on kinetic considerations and similarity with other unit operations. Factors to be obtained experimentally, as well as difficulties that may occur in their determination, are also given. The calculation procedures most commonly used in industry are included and explained through the numerical solution of a water demineralization problem. (auth)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, A.E.; Tschanz, J.; Monarch, M.
1996-05-01
The Air Quality Utility Information System (AQUIS) is a database management system that operates under dBASE IV. It runs on an IBM-compatible personal computer (PC) with MS DOS 5.0 or later, 4 megabytes of memory, and 30 megabytes of disk space. AQUIS calculates emissions for both traditional and toxic pollutants and reports emissions in user-defined formats. The system was originally designed for use at 7 facilities of the Air Force Materiel Command, and now more than 50 facilities use it. Within the last two years, the system has been used in support of Title V permit applications at Department of Defense facilities. Growth in the user community, changes and additions to reference emission factor data, and changing regulatory requirements have demanded additions and enhancements to the system. These changes have ranged from adding or updating an emission factor to restructuring databases and adding new capabilities. Quality assurance (QA) procedures have been developed to ensure that emission calculations are correct even when databases are reconfigured and major changes in calculation procedures are implemented. This paper describes these QA and updating procedures. Some user facilities include light industrial operations associated with aircraft maintenance. These facilities have operations such as fiberglass and composite layup and plating operations for which standard emission factors are not available or are inadequate. In addition, generally applied procedures such as material balances may need special treatment to work in an automated environment, for example, in the use of oils and greases and when materials such as polyurethane paints react chemically during application. Some techniques used in these situations are highlighted here. To provide a framework for the main discussions, this paper begins with a description of AQUIS.
New standards for reducing gravity data: The North American gravity database
Hinze, W. J.; Aiken, C.; Brozena, J.; Coakley, B.; Dater, D.; Flanagan, G.; Forsberg, R.; Hildenbrand, T.; Keller, Gordon R.; Kellogg, J.; Kucks, R.; Li, X.; Mainville, A.; Morin, R.; Pilkington, M.; Plouff, D.; Ravat, D.; Roman, D.; Urrutia-Fucugauchi, J.; Veronneau, M.; Webring, M.; Winester, D.
2005-01-01
The North American gravity database as well as databases from Canada, Mexico, and the United States are being revised to improve their coverage, versatility, and accuracy. An important part of this effort is revising procedures for calculating gravity anomalies, taking into account our enhanced computational power, improved terrain databases and datums, and increased interest in more accurately defining long-wavelength anomaly components. Users of the databases may note minor differences between previous and revised database values as a result of these procedures. Generally, the differences do not impact the interpretation of local anomalies but do improve regional anomaly studies. The most striking revision is the use of the internationally accepted terrestrial ellipsoid for the height datum of gravity stations rather than the conventionally used geoid or sea level. Principal facts of gravity observations and anomalies based on both revised and previous procedures together with germane metadata will be available on an interactive Web-based data system as well as from national agencies and data centers. The use of the revised procedures is encouraged for gravity data reduction because of the widespread use of the global positioning system in gravity fieldwork and the need for increased accuracy and precision of anomalies and consistency with North American and national databases. Anomalies based on the revised standards should be preceded by the adjective "ellipsoidal" to differentiate anomalies calculated using heights with respect to the ellipsoid from those based on conventional elevations referenced to the geoid. © 2005 Society of Exploration Geophysicists. All rights reserved.
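A minimal sketch of one step in this kind of anomaly calculation: normal gravity from the GRS80 Somigliana closed form, then a free-air anomaly using ellipsoidal height. The GRS80 constants are the published values; the observed gravity, latitude, and height are placeholders, and this is not the database's full reduction chain.

```python
import math

GAMMA_E = 9.7803267715        # GRS80 normal gravity at the equator (m/s^2)
K = 0.001931851353            # Somigliana's constant
E2 = 0.00669438002290         # first eccentricity squared

def normal_gravity(lat_deg):
    """Somigliana closed-form normal gravity on the GRS80 ellipsoid."""
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return GAMMA_E * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)

def free_air_anomaly_mgal(g_obs_mgal, lat_deg, h_ellipsoidal_m):
    gamma_mgal = normal_gravity(lat_deg) * 1.0e5   # m/s^2 -> mGal
    return g_obs_mgal - gamma_mgal + 0.3086 * h_ellipsoidal_m

print(free_air_anomaly_mgal(978_500.0, 35.0, 250.0))
```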
[Computer diagnosis of traumatic impact by hepatic lesion].
Kimbar, V I; Sevankeev, V V
2007-01-01
A method of computer-assisted diagnosis of traumatic impact from liver damage (the HEPAR-test program) is described. The program is based on diagnostic coefficients calculated by Bayes' probability method with Wald's recognition procedure.
NASA Astrophysics Data System (ADS)
Ma, J.; Liu, Q.
2018-02-01
This paper presents an improved short-circuit calculation method, based on pre-computed surfaces, to determine the short-circuit current of a distribution system with multiple doubly fed induction generators (DFIGs). The short-circuit current injected into the power grid by a DFIG is determined by its low-voltage ride-through (LVRT) control and protection under a grid fault. However, the complexity of existing methods makes them difficult to apply to DFIG short-circuit calculation in engineering practice. A calculation method based on pre-computed surfaces was therefore proposed, by developing the surface of the short-circuit current as a function of the calculating impedance and the open-circuit voltage. The short-circuit currents were derived by taking into account the rotor excitation and the crowbar activation time. Finally, pre-computed surfaces of the short-circuit current at different times were established, and the procedure for DFIG short-circuit calculation considering LVRT was designed. The correctness of the proposed method was verified by simulation.
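A rough illustration of evaluating a pre-computed surface (not the paper's DFIG model): tabulate the current on a grid of calculating impedance and open-circuit voltage, then interpolate at the operating point. The table values below are placeholders.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

z = np.linspace(0.1, 1.0, 10)          # calculating impedance (p.u.)
u = np.linspace(0.2, 1.0, 9)           # open-circuit voltage (p.u.)
Z, U = np.meshgrid(z, u, indexing="ij")
i_sc = U / Z                           # placeholder surface I(Z, U)

surface = RegularGridInterpolator((z, u), i_sc)
print(surface([[0.35, 0.9]]))          # current at one operating point
```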
NASA Astrophysics Data System (ADS)
Gololobova, E. G.; Gorichev, I. G.; Lainer, Yu. A.; Skvortsova, I. V.
2011-05-01
A procedure was proposed for the calculation of the acid-base equilibrium constants at an alumina/electrolyte interface from experimental data on the adsorption of singly charged ions (Na+, Cl-) at various pH values. The calculated constants (pK1° = 4.1, pK2° = 11.9, pK3° = 8.3, and pK4° = 7.7) are shown to agree with the values obtained from an experimental pH dependence of the electrokinetic potential and the results of potentiometric titration of Al2O3 suspensions.
NASA Astrophysics Data System (ADS)
Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing
2015-10-01
Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is based on the Newton-Raphson iteration method. The deformed configuration of the catenary system, as well as the initial length of each wire, can be calculated. The accuracy and validity of the initial-equilibrium computation are verified by comparison with the separate model method, the absolute nodal coordinate formulation, and other methods in the previous literature. The proposed model is then combined with a lumped pantograph model and a dynamic simulation procedure is proposed, with accuracy guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software, and a SIEMENS simulation report. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.
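A generic sketch of the Newton-Raphson iteration used for such equilibrium solves: linearize the residual, solve, and repeat. The two-equation residual below is a toy stand-in; the catenary's cable/truss equations are far larger in practice.

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Solve R(x) = 0 by repeated linearization x <- x - J^{-1} R(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            return x
        x = x - np.linalg.solve(jacobian(x), r)
    raise RuntimeError("Newton-Raphson did not converge")

# Toy 2-DOF system with solution (1, 2).
res = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
jac = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
print(newton_raphson(res, jac, [1.0, 1.0]))
```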
Improved telescope focus using only two focus images
NASA Astrophysics Data System (ADS)
Barrick, Gregory; Vermeulen, Tom; Thomas, James
2008-07-01
In an effort to reduce the amount of time spent focusing the telescope and to improve the quality of the focus, a new procedure has been investigated and implemented at the Canada-France-Hawaii Telescope (CFHT). The new procedure is based on a paper by Tokovinin and Heathcote and requires only two out-of-focus images to determine the best focus for the telescope. Using only two images provides a great time savings over the five or more images required for a standard through-focus sequence. In addition, it has been found that this method is significantly less sensitive to seeing variations than the traditional through-focus procedure, so the quality of the resulting focus is better. Finally, the new procedure relies on a second moment calculation and so is computationally easier and more robust than methods using a FWHM calculation. The new method has been implemented for WIRCam for the past 18 months, for MegaPrime for the past year, and has recently been implemented for ESPaDOnS.
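A minimal sketch of the second-moment width measurement that underlies such a focus procedure (the Tokovinin and Heathcote analysis itself is not reproduced): the intensity-weighted second moment of a spot is cheap to compute and robust. The Gaussian test image is a placeholder for an out-of-focus stellar image.

```python
import numpy as np

def second_moment_width(img):
    """RMS radius of an image about its intensity centroid."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    y, x = np.indices(img.shape)
    cx, cy = (x * img).sum() / total, (y * img).sum() / total
    r2 = ((x - cx) ** 2 + (y - cy) ** 2) * img
    return np.sqrt(r2.sum() / total)

yy, xx = np.mgrid[0:64, 0:64]
spot = np.exp(-((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 4.0 ** 2))
print(second_moment_width(spot))   # ~ sqrt(2) * 4 for a 2-D Gaussian
```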
NASA Technical Reports Server (NTRS)
Ehlers, F. E.; Weatherill, W. H.; Yip, E. L.
1984-01-01
A finite difference method to solve the unsteady transonic flow about harmonically oscillating wings was investigated. The procedure is based on separating the velocity potential into steady and unsteady parts and linearizing the resulting unsteady differential equation for small disturbances. The differential equation for the unsteady velocity potential is linear with spatially varying coefficients, with the time variable eliminated by assuming harmonic motion. An alternating-direction implicit procedure was investigated, and a pilot program was developed for both two- and three-dimensional wings. This program provides a relatively efficient relaxation solution without previously encountered solution instability problems. Pressure distributions for two rectangular wings are calculated. Conjugate gradient techniques were developed for the asymmetric, indefinite problem, and the conjugate gradient procedure is evaluated for application to the unsteady transonic problem. Different equations for the alternating-direction procedure are derived using a coordinate transformation for swept and tapered wing planforms. Pressure distributions for swept, untapered wings of vanishing thickness are correlated with linear results for sweep angles up to 45 degrees.
Transport of Space Environment Electrons: A Simplified Rapid-Analysis Computational Procedure
NASA Technical Reports Server (NTRS)
Nealy, John E.; Anderson, Brooke M.; Cucinotta, Francis A.; Wilson, John W.; Katz, Robert; Chang, C. K.
2002-01-01
A computational procedure for describing transport of electrons in condensed media has been formulated for application to effects and exposures from spectral distributions typical of electrons trapped in planetary magnetic fields. The procedure is based on earlier parameterizations established from numerous electron beam experiments. New parameterizations have been derived that logically extend the domain of application to low molecular weight (high hydrogen content) materials and higher energies (approximately 50 MeV). The production and transport of high energy photons (bremsstrahlung) generated in the electron transport processes have also been modeled using tabulated values of photon production cross sections. A primary purpose for developing the procedure has been to provide a means for rapidly performing numerous repetitive calculations essential for electron radiation exposure assessments for complex space structures. Several favorable comparisons have been made with previous calculations for typical space environment spectra, which have indicated that accuracy has not been substantially compromised at the expense of computational speed.
IJS procedure for RELAP5 to TRACE input model conversion using SNAP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Prosek, A.; Berar, O. A.
2012-07-01
The TRAC/RELAP Advanced Computational Engine (TRACE), an advanced, best-estimate reactor systems code developed by the U.S. Nuclear Regulatory Commission, comes with a graphical user interface called the Symbolic Nuclear Analysis Package (SNAP). Much effort has been devoted in the past to developing RELAP5 input decks. The purpose of this study is to demonstrate the Institut 'Josef Stefan' (IJS) procedure for converting a RELAP5 input model of the BETHSY facility to TRACE. The IJS conversion procedure consists of eleven steps and is based on the use of SNAP. For calculations of the selected BETHSY 6.2TC test, RELAP5/MOD3.3 Patch 4 and TRACE V5.0 Patch 1 were used. The selected BETHSY 6.2TC test was a 15.24 cm equivalent-diameter horizontal cold leg break in the reference pressurized water reactor without high-pressure and low-pressure safety injection. The application of the IJS procedure to the conversion of the BETHSY input model showed that it is important to perform the steps in the proper sequence. The overall results calculated with TRACE using the converted RELAP5 model were close to the experimental data and comparable to the RELAP5/MOD3.3 calculations. It can therefore be concluded that the proposed IJS conversion procedure was successfully demonstrated on the BETHSY integral test facility input model. (authors)
A numerical procedure for solving the inverse scattering problem for stratified dielectric media
NASA Astrophysics Data System (ADS)
Vogelzang, E.; Yevick, D.; Ferwerda, H. A.
1983-05-01
In this paper the refractive index profile of a dielectric stratified medium, terminated by a perfect conductor, is calculated from the complex reflection coefficient for monochromatic plane waves, incident from different directions. The advantage of this approach is that the dispersion of the refractive index does not enter the calculations. The calculation is based on the Marchenko and Gelfand-Levitan equations taking into account the bound modes of the layer. Some illustrative numerical examples are presented.
NASA Technical Reports Server (NTRS)
Sanger, Eugen
1932-01-01
A method is presented for approximate static calculation, which is based on the customary assumption of rigid ribs, while taking into account the systematic errors in the calculation results due to this arbitrary assumption. The procedure is given in greater detail for semicantilever and cantilever wings with polygonal spar plan form and for wings under direct loading only. The last example illustrates the advantages of the use of influence lines for such wing structures and their practical interpretation.
40 CFR 600.002-93 - Definitions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... traveled by an automobile or group of automobiles per volume of fuel consumed as computed in § 600.113 or § 600.207; or (ii) The equivalent petroleum-based fuel economy for an electrically powered automobile as... means the equivalent petroleum-based fuel economy value as determined by the calculation procedure...
49 CFR 1141.1 - Procedures to calculate interest rates.
Code of Federal Regulations, 2010 CFR
2010-10-01
... the portion of the year covered by the interest rate. A simple multiplication of the nominal rate by... 49 Transportation 8 2010-10-01 2010-10-01 false Procedures to calculate interest rates. 1141.1... TRANSPORTATION BOARD, DEPARTMENT OF TRANSPORTATION RULES OF PRACTICE PROCEDURES TO CALCULATE INTEREST RATES...
40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?
Code of Federal Regulations, 2013 CFR
2013-07-01
... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...
40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?
Code of Federal Regulations, 2012 CFR
2012-07-01
... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...
40 CFR 63.11980 - What are the test methods and calculation procedures for process wastewater?
Code of Federal Regulations, 2014 CFR
2014-07-01
... calculation procedures for process wastewater? 63.11980 Section 63.11980 Protection of Environment... § 63.11980 What are the test methods and calculation procedures for process wastewater? (a) Performance... performance tests during worst-case operating conditions for the PVCPU when the process wastewater treatment...
Fostering Formal Commutativity Knowledge with Approximate Arithmetic
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Several studies have found that nonsymbolic estimation can foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated whether the approximate calculation of symbolic commutative quantities can also alter access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not yet been instructed about commutativity in school. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
A Z-number-based decision making procedure with ranking fuzzy numbers method
NASA Astrophysics Data System (ADS)
Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah
2014-12-01
The theory of fuzzy sets has been applied widely in decision-making problems due to its usefulness in portraying human perception and subjectivity. Generally, the evaluation in the decision-making process is represented in the form of linguistic terms, and the calculation is performed using fuzzy numbers. In 2011, Zadeh extended this concept by presenting the idea of the Z-number, a 2-tuple of fuzzy numbers that describes both the restriction and the reliability of an evaluation. The element of reliability in the evaluation is essential, as it affects the final result. Since this concept is still relatively new, methods that incorporate reliability when solving decision-making problems remain scarce. In this paper, a decision-making procedure based on Z-numbers is proposed. Owing to the limitations of their basic properties, Z-numbers are first transformed into fuzzy numbers for simpler calculation. A fuzzy-number ranking method is then used to prioritize the alternatives. A risk analysis problem is presented to illustrate the effectiveness of the proposed procedure.
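The paper's specific ranking method is not reproduced here; as a minimal sketch of the general ranking step, triangular fuzzy numbers can be ordered by centroid, with hypothetical alternatives as input:

```python
def centroid(tfn):
    """Centroid (x-coordinate) of a triangular fuzzy number (a, b, c)."""
    a, b, c = tfn
    return (a + b + c) / 3.0

# Hypothetical alternatives scored as triangular fuzzy numbers.
alternatives = {
    "A1": (0.2, 0.40, 0.6),
    "A2": (0.3, 0.50, 0.9),
    "A3": (0.1, 0.45, 0.7),
}
ranked = sorted(alternatives, key=lambda k: centroid(alternatives[k]),
                reverse=True)
print(ranked)   # best-to-worst by centroid
```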
ERIC Educational Resources Information Center
Hepburn, Larry; Shin, Masako
This document, one of eight in a multi-cultural competency-based vocational/technical curricula series, is on clerical occupations. This program is designed to run 36 weeks and cover 10 instructional areas: beginning typing, typing I, typing II, duplicating, receptionist activities, general office procedures, operation of electronic calculator,…
NASA Technical Reports Server (NTRS)
Jansson, S.; Leckie, F. A.; Onat, E. T.; Ranaweera, M. P.
1990-01-01
The combination of thermal and mechanical loading expected in practice means that constitutive equations of metal matrix composites must be developed which deal with time-independent and time-dependent irreversible deformation. Also, the internal state of composites is extremely complicated, which underlines the need to formulate macroscopic constitutive equations with a limited number of state variables that represent the internal state at the micro level. One available method for calculating the macro properties of composites in terms of the distribution and properties of the constituent materials is the method of homogenization, whose formulation is based on the periodicity of the substructure of the composite. A homogenization procedure was developed which lends itself to the use of the finite element procedure. The efficiency of these procedures in determining the macroscopic properties of a composite system from its constituent properties was demonstrated using an aluminum plate perforated by directionally oriented slits. The selection of this problem is based on the facts that extensive experimental results exist, that the macroscopic response is highly anisotropic, and that the slits provide very high stress gradients which severely test the effectiveness of the computational procedures. Furthermore, both elastic and plastic properties were investigated, so that the application to practical systems with inelastic deformation should be able to proceed without difficulty. The effectiveness of the procedures was rigorously checked against experimental results and against the predictions of approximate calculations. Using the computational results, it is illustrated how macroscopic constitutive equations can be expressed in terms of the elastic and limit-load behavior.
French, Katy E; Guzman, Alexis B; Rubio, Augustin C; Frenzel, John C; Feeley, Thomas W
2016-09-01
With the movement towards bundled payments, stakeholders should know the true cost of the care they deliver. Time-driven activity-based costing (TDABC) can be used to estimate costs for each episode of care. In this analysis, TDABC is used to both estimate the costs of anesthesia care and identify the primary drivers of those costs of 11 common oncologic outpatient surgical procedures. Personnel cost were calculated by determining the hourly cost of each provider and the associated process time of the 11 surgical procedures. Using the anesthesia record, drugs, supplies and equipment costs were identified and calculated. The current staffing model was used to determine baseline personnel costs for each procedure. Using the costs identified through TDABC analysis, the effect of different staffing ratios on anesthesia costs could be predicted. Costs for each of the procedures were determined. Process time and costs are linearly related. Personnel represented 79% of overall cost while drugs, supplies and equipment represented the remaining 21%. Changing staffing ratios shows potential savings between 13% and 28% across the 11 procedures. TDABC can be used to estimate the costs of anesthesia care. This costing information is critical to assessing the anesthesiology component in a bundled payment. It can also be used to identify areas of cost savings and model costs of anesthesia care. CRNA to anesthesiologist staffing ratios profoundly influence the cost of care. This methodology could be applied to other medical specialties to help determine costs in the setting of bundled payments. Copyright © 2015 Elsevier Inc. All rights reserved.
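A toy sketch of the TDABC personnel-cost step described above: procedure cost is the sum over providers of hourly cost times process time, plus the drugs/supplies/equipment taken from the record. All rates, times, and the staffing split are hypothetical placeholders.

```python
providers = {                      # hourly cost (USD/h), placeholder
    "anesthesiologist": 250.0,
    "CRNA": 110.0,
}
process_hours = {                  # time attributed to one procedure (h)
    "anesthesiologist": 0.5,       # e.g. supervising at an assumed 1:3 ratio
    "CRNA": 2.0,
}
supplies_and_drugs = 85.0          # from the anesthesia record (USD)

personnel = sum(providers[p] * process_hours[p] for p in providers)
total = personnel + supplies_and_drugs
print(f"Personnel: ${personnel:.2f}  Total: ${total:.2f}")
```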
Kimura, Koji; Sawa, Akihiro; Akagi, Shinji; Kihira, Kenji
2007-06-01
We have developed an original system to conduct surgical site infection (SSI) surveillance. This system accumulates SSI surveillance information based on the National Nosocomial Infections Surveillance (NNIS) System and the Japanese Nosocomial Infections Surveillance (JNIS) System. Its features are as follows: easy data input, high generality, and data accuracy; the SSI rate by operative procedure and risk index category (RIC) can be promptly calculated and compared with the current NNIS SSI rate; and the SSI rates and accumulated data can be exported electronically. Using this system, we monitored 798 patients in 24 operative procedure categories in the Digestive Organs Surgery Department of Mazda Hospital, Mazda Motor Corporation, from January 2004 through December 2005. The total number and rate of SSIs were 47 and 5.89%, respectively. The SSI rates of 777 patients were calculated based on 15 operative procedure categories and risk index categories (RIC). The highest SSI rate was observed in rectum surgery of RIC 1 (30%), followed by colon surgery of RIC 3 (28.57%). About 30% of the isolated infecting bacteria were Enterococcus faecalis, Staphylococcus aureus, Klebsiella pneumoniae, Pseudomonas aeruginosa, and Escherichia coli. Using quantification theory type II, the American Society of Anesthesiologists score (4.531), volume of hemorrhage during operation (3.075), wound classification (1.76), operation time (1.352), and history of diabetes (0.989) ranked, in that order, as the leading factors for SSI. We therefore consider this system a useful safety-control tool for operative procedures.
Integrated flight/propulsion control - Subsystem specifications for performance
NASA Technical Reports Server (NTRS)
Neighbors, W. K.; Rock, Stephen M.
1993-01-01
A procedure is presented for calculating multiple subsystem specifications given a number of performance requirements on the integrated system. This procedure applies to problems where the control design must be performed in a partitioned manner. It is based on a structured singular value analysis, and generates specifications as magnitude bounds on subsystem uncertainties. The performance requirements should be provided in the form of bounds on transfer functions of the integrated system. This form allows the expression of model following, command tracking, and disturbance rejection requirements. The procedure is demonstrated on a STOVL aircraft design.
Development of QC Procedures for Ocean Data Obtained by National Research Projects of Korea
NASA Astrophysics Data System (ADS)
Kim, S. D.; Park, H. M.
2017-12-01
To establish a data management system for ocean data obtained by national research projects of the Ministry of Oceans and Fisheries of Korea, KIOST conducted standardization and developed QC procedures. After reviewing and analyzing the existing international and domestic ocean-data standards and QC procedures, draft versions of the standards and QC procedures were prepared. The proposed standards and QC procedures were reviewed and revised several times by experts in the field of oceanography and by academic societies. A technical report was prepared on the standards for 25 data items and on 12 QC procedures for physical, chemical, biological and geological data items. The QC procedure for temperature and salinity data was set up by referring to the manuals published by GTSPP, ARGO and IOOS QARTOD. It consists of 16 QC tests applicable to vertical profile data and time-series data obtained in real-time and delayed modes. Three regional range tests inspecting annual, seasonal and monthly variations were included in the procedure. Three programs were developed to calculate and provide the upper and lower limits of temperature and salinity at depths from 0 to 1550 m. TS data from the World Ocean Database, ARGO, GTSPP and in-house KIOST data were analysed statistically to calculate the regional limits of the Northwest Pacific area. Based on this statistical analysis, the programs calculate regional ranges using the mean and standard deviation on three grid systems (3°, 1° and 0.5°) and provide recommendations. The QC procedures for the 12 data items were set up during the first phase of the national program for data management (2012-2015) and are being applied to national research projects in the second phase (2016-2019). The QC procedures will be revised after reviewing the results of their application when the second phase of the data management program is completed.
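A minimal sketch of a regional range test of the kind described: flag an observation that falls outside mean plus or minus n standard deviations for its grid cell. The flag scheme, threshold, and cell statistics are illustrative assumptions, not the KIOST products.

```python
def regional_range_flag(value, cell_mean, cell_std, n_sigma=3.0):
    """Return 1 (good) or 4 (bad), in the spirit of common ocean QC flags."""
    lower = cell_mean - n_sigma * cell_std
    upper = cell_mean + n_sigma * cell_std
    return 1 if lower <= value <= upper else 4

# A 28.4 C observation in a cell with climatology 22.0 +/- 1.5 C is flagged.
print(regional_range_flag(28.4, cell_mean=22.0, cell_std=1.5))   # -> 4
```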
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.
2017-08-01
An inverse thermal analysis of Alloy 690 laser and hybrid laser-GMA welds is presented that uses numerical-analytical basis functions and boundary constraints based on measured solidification cross sections. In particular, the inverse analysis procedure uses three-dimensional constraint conditions such that two-dimensional projections of calculated solidification boundaries are constrained to map within experimentally measured solidification cross sections. Temperature histories calculated by this analysis are input data for computational procedures that predict solid-state phase transformations and mechanical response. These temperature histories can be used for inverse thermal analysis of welds corresponding to other welding processes whose process conditions are within similar regimes.
Optimization for minimum sensitivity to uncertain parameters
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.; Sobieszczanski-Sobieski, Jaroslaw
1994-01-01
A procedure to design a structure for minimum sensitivity to uncertainties in problem parameters is described. The approach is to directly minimize the sensitivity derivatives of the optimum design with respect to fixed design parameters using a nested optimization procedure. The procedure is demonstrated for the design of a bimetallic beam for minimum weight with insensitivity to uncertainties in structural properties. The beam is modeled with finite elements based on two-dimensional beam analysis. A sequential quadratic programming procedure used as the optimizer supplies the Lagrange multipliers that are used to calculate the optimum sensitivity derivatives. The method was judged successful from comparisons of the optimization results with parametric studies.
Numerical procedure to determine geometric view factors for surfaces occluded by cylinders
NASA Technical Reports Server (NTRS)
Sawyer, P. L.
1978-01-01
A numerical procedure was developed to determine geometric view factors between connected infinite strips occluded by any number of infinite circular cylinders. The procedure requires a two-dimensional cross-sectional model of the configuration of interest. The two-dimensional model consists of a convex polygon enclosing any number of circles. Each side of the polygon represents one strip, and each circle represents a circular cylinder. A description and listing of a computer program based on this procedure are included in this report. The program calculates geometric view factors between individual strips and between individual strips and the collection of occluding cylinders.
UDOT calibration of AASHTO's new prestress loss design equations.
DOT National Transportation Integrated Search
2009-07-01
In the next edition of the AASHTO LRFD Bridge Design Specifications the procedure to calculate prestress losses will change dramatically. The new equations are empirically based on high performance concrete from four states (Nebraska, New Hampshire, ...
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Klenke, D.; Trudinger, B. C.; Spreiter, J. R.
1980-01-01
Computational procedures are developed and applied to the prediction of solar wind interaction with nonmagnetic terrestrial planet atmospheres, with particular emphasis on Venus. The theoretical method is based on a single fluid, steady, dissipationless, magnetohydrodynamic continuum model, and is appropriate for the calculation of axisymmetric, supersonic, super-Alfvenic solar wind flow past terrestrial planets. The procedures, which consist of finite difference codes to determine the gasdynamic properties and a variety of special purpose codes to determine the frozen magnetic field, streamlines, contours, plots, etc. of the flow, are organized into one computational program. Theoretical results based upon these procedures are reported for a wide variety of solar wind conditions and ionopause obstacle shapes. Comparisons of plasma and magnetic fields in the ionosheath with actual spacecraft data obtained by the Pioneer Venus Orbiter are also provided.
Three-dimensional turbopump flowfield analysis
NASA Technical Reports Server (NTRS)
Sharma, O. P.; Belford, K. A.; Ni, R. H.
1992-01-01
A program was conducted to develop a flow prediction method applicable to rocket turbopumps. The complex nature of the flowfield in turbopumps is described, and examples of flowfields are discussed to illustrate that physics-based models and analytical calculation procedures based on computational fluid dynamics (CFD) are needed to develop reliable design procedures for turbopumps. A CFD code developed at NASA ARC was used as the base code. The turbulence model and boundary conditions in the base code were modified, respectively, to: (1) compute transitional flows and account for extra rates of strain, e.g., rotation; and (2) compute surface heat transfer coefficients and allow computation through multistage turbomachines. Benchmark-quality data from two- and three-dimensional cascades were used to verify the code. The predictive capabilities of the code were demonstrated by computing the flow through a radial impeller and a multistage axial-flow turbine. Results of the program indicate that the code, operated in a two-dimensional mode, is a cost-effective alternative to full three-dimensional calculations, and that it permits realistic predictions of unsteady loadings and losses for multistage machines.
Laitano, R F; Toni, M P; Pimpinella, M; Bovi, M
2002-07-21
The factor Kwall to correct for photon attenuation and scatter in the wall of ionization chambers for 60Co air-kerma measurement has been traditionally determined by a procedure based on a linear extrapolation of the chamber current to zero wall thickness. Monte Carlo calculations by Rogers and Bielajew (1990 Phys. Med. Biol. 35 1065-78) provided evidence, mostly for chambers of cylindrical and spherical geometry, of appreciable deviations between the calculated values of Kwall and those obtained by the traditional extrapolation procedure. In the present work an experimental method other than the traditional extrapolation procedure was used to determine the Kwall factor. In this method the dependence of the ionization current in a cylindrical chamber was analysed as a function of an effective wall thickness in place of the physical (radial) wall thickness traditionally considered in this type of measurement. To this end the chamber wall was ideally divided into distinct regions and for each region an effective thickness to which the chamber current correlates was determined. A Monte Carlo calculation of attenuation and scatter effects in the different regions of the chamber wall was also made to compare calculation to measurement results. The Kwall values experimentally determined in this work agree within 0.2% with the Monte Carlo calculation. The agreement between these independent methods and the appreciable deviation (up to about 1%) between the results of both these methods and those obtained by the traditional extrapolation procedure support the conclusion that the two independent methods providing comparable results are correct and the traditional extrapolation procedure is likely to be wrong. The numerical results of the present study refer to a cylindrical cavity chamber like that adopted as the Italian national air-kerma standard at INMRI-ENEA (Italy). The method used in this study applies, however, to any other chamber of the same type.
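The traditional procedure questioned in this work is a straight-line fit of the chamber current against wall thickness, extrapolated to zero thickness; Kwall is then the ratio of the extrapolated current to the current at the working thickness. A minimal sketch, using made-up current readings purely for illustration:

```python
import numpy as np

# Illustrative (not measured) chamber currents at several wall thicknesses
wall_mm = np.array([0.5, 1.0, 1.5, 2.0, 2.5])        # wall thickness (mm)
current = np.array([100.3, 99.1, 97.8, 96.7, 95.5])  # chamber current (pA)

# Linear fit I(t) = a*t + I0 and extrapolation to zero wall thickness
a, i0 = np.polyfit(wall_mm, current, 1)
working = 1.5                                        # working thickness (mm)
k_wall = i0 / np.interp(working, wall_mm, current)
print(f"I(0) = {i0:.2f} pA, Kwall = {k_wall:.4f}")
```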
Finding trap stiffness of optical tweezers using digital filters.
Almendarez-Rangel, Pedro; Morales-Cruzado, Beatriz; Sarmiento-Gómez, Erick; Pérez-Gutiérrez, Francisco G
2018-02-01
Obtaining trap stiffness and calibration of the position detection system is the basis of a force measurement using optical tweezers. Both calibration quantities can be calculated using several experimental methods available in the literature. In most cases, stiffness determination and detection-system calibration are performed separately, often requiring procedures under very different conditions, so the confidence of the calibration methods is not assured in the presence of possible changes in the environment. In this work, a new method to simultaneously obtain both the detection-system calibration and the trap stiffness is presented. The method is based on the calculation of the power spectral density of positions through digital filters to obtain the harmonic contributions of the position signal. This method has the advantage of calculating both the trap stiffness and the photodetector calibration factor from the same dataset, in situ. It also provides a direct way to avoid unwanted frequencies that could greatly affect the calibration procedure, such as electrical noise.
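A common power-spectrum calibration, which the digital-filter method refines, fits a Lorentzian S(f) = D / (2π²(f_c² + f²)) to the position spectrum of the trapped bead; the stiffness then follows from the corner frequency as κ = 2πγf_c, with γ the Stokes drag. The sketch below simulates an overdamped bead and recovers f_c with a linearized fit (1/S is linear in f²); it illustrates the underlying physics, not the authors' filter-bank implementation, and all numbers are illustrative.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
fs, n = 10_000, 2 ** 16                  # sample rate (Hz), sample count
gamma = 6 * np.pi * 1e-3 * 0.5e-6        # Stokes drag of a 1 um bead (N s/m)
kappa_true = 1e-5                        # trap stiffness (N/m)
kT = 4.11e-21                            # thermal energy at ~298 K (J)

# Euler-Maruyama simulation of the overdamped bead position
dt = 1.0 / fs
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = (x[i - 1] - (kappa_true / gamma) * x[i - 1] * dt
            + np.sqrt(2 * kT / gamma * dt) * rng.standard_normal())

# Power spectral density and linearized Lorentzian fit: 1/S ~ fc^2 + f^2
f, pxx = welch(x, fs=fs, nperseg=4096)
mask = (f > 10) & (f < 2000)
slope, intercept = np.polyfit(f[mask] ** 2, 1.0 / pxx[mask], 1)
fc = np.sqrt(intercept / slope)          # corner frequency (Hz)
print(f"fc = {fc:.0f} Hz -> kappa = {2 * np.pi * gamma * fc:.2e} N/m")
```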
Code of Federal Regulations, 2012 CFR
2012-07-01
... economy and CO2 emission values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the 5-cycle city and highway fuel economy and CO2 emission values from the tests performed using alcohol or natural gas test fuel, if 5-cycle testing has been performed. Otherwise, the procedure in § 600...
Code of Federal Regulations, 2014 CFR
2014-07-01
... economy and CO2 emission values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the 5-cycle city and highway fuel economy and CO2 emission values from the tests performed using alcohol or natural gas test fuel, if 5-cycle testing has been performed. Otherwise, the procedure in § 600...
Code of Federal Regulations, 2013 CFR
2013-07-01
... economy and CO2 emission values from the tests performed using gasoline or diesel test fuel. (ii) Calculate the 5-cycle city and highway fuel economy and CO2 emission values from the tests performed using alcohol or natural gas test fuel, if 5-cycle testing has been performed. Otherwise, the procedure in § 600...
Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model
NASA Technical Reports Server (NTRS)
White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.
1989-01-01
A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
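Generically, the fit described here inverts G(τ) = ∫ K(τ,ω) A(ω) dω for a nonnegative spectral function A(ω). The sketch below enforces positivity with nonnegative least squares and smoothness with a second-difference penalty; the fermionic kernel, the grids, the noise level and the penalty weight are illustrative stand-ins for the authors' full procedure.

```python
import numpy as np
from scipy.optimize import nnls

# Discretized kernel: G(tau) = sum_w K(tau, w) * A(w) * dw
beta, n_tau, n_w = 10.0, 32, 200
tau = np.linspace(0.0, beta, n_tau)
w = np.linspace(-8.0, 8.0, n_w)
dw = w[1] - w[0]
K = np.exp(-np.outer(tau, w)) / (1.0 + np.exp(-beta * w))[None, :] * dw

# Synthetic "imaginary-time data" from a two-peak spectrum, plus noise
a_true = np.exp(-(w - 2.0) ** 2) + 0.5 * np.exp(-((w + 3.0) ** 2) / 0.5)
g = K @ a_true + 1e-4 * np.random.default_rng(1).standard_normal(n_tau)

# Positivity via NNLS; smoothness via an appended second-difference block
lam = 1e-3
d2 = np.diff(np.eye(n_w), n=2, axis=0)
a_fit, _ = nnls(np.vstack([K, lam * d2]),
                np.concatenate([g, np.zeros(n_w - 2)]))
print(f"recovered spectral weight: {a_fit.sum() * dw:.3f}")
```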
NASA Astrophysics Data System (ADS)
Suproniuk, M.; Pawłowski, M.; Wierzbowski, M.; Majda-Zdancewicz, E.; Pawłowski, Ma.
2018-04-01
The procedure for determining trap parameters by photo-induced transient spectroscopy is based on the Arrhenius plot, which illustrates the thermal dependence of the emission rate. In this paper, we show that the Arrhenius plot obtained by the correlation method is shifted toward lower temperatures compared to the one obtained with the inverse Laplace transformation. This shift is caused by the model adequacy error of the correlation method and introduces errors into the calculation procedure for defect-center parameters. The effect is exemplified by comparing the results of trap-parameter determination with both methods, based on photocurrent transients for defect centers observed in tin-doped neutron-irradiated silicon crystals and in gallium arsenide grown with the Vertical Gradient Freeze method.
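Concretely, the deep-level emission rate follows e(T) = A T² exp(−E_a / k_B T), so the Arrhenius plot of ln(e/T²) against 1/T is a straight line whose slope yields the activation energy and whose intercept yields the pre-exponential factor. A short sketch with illustrative data:

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant (eV/K)

# Illustrative emission rates e (1/s) at temperatures T (K)
T = np.array([180.0, 190.0, 200.0, 210.0, 220.0])
e = np.array([8.1, 42.0, 180.0, 660.0, 2100.0])

# Arrhenius plot: ln(e / T^2) = ln(A) - Ea / (kB * T)
slope, intercept = np.polyfit(1.0 / T, np.log(e / T ** 2), 1)
ea = -slope * K_B              # activation energy (eV)
a0 = np.exp(intercept)         # pre-exponential factor (s^-1 K^-2)
print(f"Ea = {ea:.3f} eV, A = {a0:.3e}")
```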
Operational Control Procedures for the Activated Sludge Process, Part III-A: Calculation Procedures.
ERIC Educational Resources Information Center
West, Alfred W.
This is the second in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals exclusively with the calculation procedures, including simplified mixing formulas, aeration tank…
Development of Multiobjective Optimization Techniques for Sonic Boom Minimization
NASA Technical Reports Server (NTRS)
Chattopadhyay, Aditi; Rajadas, John Narayan; Pagaldipti, Naryanan S.
1996-01-01
A discrete, semi-analytical sensitivity analysis procedure has been developed for calculating aerodynamic design sensitivities. The sensitivities of the flow variables and the grid coordinates are numerically calculated using direct differentiation of the respective discretized governing equations. The sensitivity analysis techniques are adapted within a parabolized Navier-Stokes equations solver. Aerodynamic design sensitivities for high-speed wing-body configurations are calculated using the semi-analytical sensitivity analysis procedures. Representative results compare well with those obtained using the finite difference approach and establish the computational efficiency and accuracy of the semi-analytical procedures. Multidisciplinary design optimization procedures have been developed for aerospace applications, namely gas turbine blades and high-speed wing-body configurations. In complex applications, the coupled optimization problems are decomposed into sublevels using multilevel decomposition techniques. In cases with multiple objective functions, formal multiobjective formulations such as the Kreisselmeier-Steinhauser function approach and the modified global criteria approach have been used. Nonlinear programming techniques for continuous design variables and a hybrid optimization technique, based on a simulated annealing algorithm, for discrete design variables have been used to solve the optimization problems. The optimization procedure for gas turbine blades improves the aerodynamic and heat transfer characteristics of the blades. The two-dimensional, blade-to-blade aerodynamic analysis is performed using a panel code. The blade heat transfer analysis is performed using an in-house finite element procedure. The optimization procedure yields blade shapes with significantly improved velocity and temperature distributions. The multidisciplinary design optimization procedures for high-speed wing-body configurations simultaneously improve the aerodynamic, sonic boom and structural characteristics of the aircraft. The flow solution is obtained using a comprehensive parabolized Navier-Stokes solver. Sonic boom analysis is performed using an extrapolation procedure. The aircraft wing load-carrying member is modeled as either an isotropic or a composite box beam. The isotropic box beam is analyzed using thin-wall theory. The composite box beam is analyzed using a finite element procedure. The developed optimization procedures yield significant improvements in all the performance criteria and provide interesting design trade-offs. The semi-analytical sensitivity analysis techniques offer significant computational savings and allow the use of comprehensive analysis procedures within design optimization studies.
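The Kreisselmeier-Steinhauser function mentioned above folds several objectives or constraints g_i(x) into one smooth envelope, KS(x) = g_max + (1/ρ) ln Σ_i exp(ρ(g_i(x) − g_max)), which approaches max_i g_i(x) as the draw-down factor ρ grows. A minimal sketch; the placeholder objectives stand in for the drag, boom and structural responses and are not the paper's analyses:

```python
import numpy as np
from scipy.optimize import minimize

def ks(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of objective values g."""
    gmax = np.max(g)  # shift for numerical stability
    return gmax + np.log(np.sum(np.exp(rho * (g - gmax)))) / rho

def objectives(x):
    """Placeholder competing objectives (e.g., drag, boom, weight)."""
    return np.array([(x[0] - 1.0) ** 2,
                     (x[1] + 0.5) ** 2,
                     x[0] * x[1] + 1.0])

res = minimize(lambda x: ks(objectives(x)), x0=[0.0, 0.0], method="BFGS")
print("compromise design:", res.x, "KS value:", round(res.fun, 4))
```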
DOE Office of Scientific and Technical Information (OSTI.GOV)
Swartjes, Frank A., E-mail: frank.swartjes@rivm.nl; Versluijs, Kees W.; Otte, Piet F.
Consumption of vegetables that are grown in urban areas takes place worldwide. In developing countries, vegetables are traditionally grown in urban areas for cheap food supply. In developing and developed countries, urban gardening is gaining momentum. A problem that arises with urban gardening is the presence of contaminants in soil, which can be taken up by vegetables. In this study, a scientifically-based and practical procedure has been developed for assessing the human health risks from the consumption of vegetables from cadmium-contaminated land. Starting from a contaminated site, the procedure follows a tiered approach which is laid out as follows. In Tier 0, the plausibility of growing vegetables is investigated. In Tier 1, soil concentrations are compared with the human health-based Critical soil concentration. Tier 2 offers the possibility for a detailed site-specific human health risk assessment in which calculated exposure is compared to the toxicological reference dose. In Tier 3, vegetable concentrations are measured and tested following a standardized measurement protocol. To underpin the derivation of the Critical soil concentrations and to develop a tool for site-specific assessment, the determination of the representative concentration in vegetables has been evaluated for a range of vegetables. The core of the procedure is based on Freundlich-type plant–soil relations, with the total soil concentration and the soil properties as variables. When a significant plant–soil relation is lacking for a specific vegetable, a geometric mean of BioConcentrationFactors (BCF) is used, which is normalized according to soil properties. Subsequently, a 'conservative' vegetable-group-consumption-rate-weighted BioConcentrationFactor is calculated as the basis for the Critical soil concentration (Tier 1). The tool to perform site-specific human health risk assessment (Tier 2) includes the calculation of a 'realistic worst case' site-specific vegetable-group-consumption-rate-weighted BioConcentrationFactor. -- Highlights: • A scientifically-based and practical procedure has been developed for assessing the human health risks from the consumption of vegetables. • Uptake characteristics of cadmium in a series of vegetables are represented by a vegetable-group-consumption-rate-weighted BioConcentrationFactor. • Calculation and measurement steps are combined.
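The central quantity, the vegetable-group-consumption-rate-weighted BCF, is a consumption-weighted average of per-vegetable bioconcentration factors, each predicted from a Freundlich-type plant-soil relation. The sketch below shows the arithmetic with invented coefficients and consumption shares; the real procedure uses fitted, soil-property-normalized relations per vegetable.

```python
import numpy as np

# Illustrative Freundlich-type plant-soil relations per vegetable:
# log10(C_plant) = a + b * log10(C_soil) + c * pH
VEG = {
    # name: (a, b, c, consumption share of the vegetable group)
    "lettuce": (0.5, 0.8, -0.25, 0.20),
    "potato": (-0.3, 0.7, -0.20, 0.50),
    "carrot": (0.1, 0.9, -0.30, 0.30),
}

def weighted_bcf(c_soil, ph):
    """Consumption-rate-weighted BCF = sum_i w_i * C_plant_i / C_soil."""
    total = 0.0
    for a, b, c, w in VEG.values():
        c_plant = 10.0 ** (a + b * np.log10(c_soil) + c * ph)
        total += w * c_plant / c_soil
    return total

# Weighted BCF for 1.0 mg/kg cadmium in soil at pH 6
print(f"weighted BCF: {weighted_bcf(1.0, 6.0):.3f}")
```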
Computerized tomography-assisted calculation of sinus augmentation volume.
Krennmair, Gerald; Krainhöfner, Martin; Maier, Harald; Weinländer, Michael; Piehslinger, Eva
2006-01-01
This study was intended to calculate the augmentation volume for a sinus lift procedure based on cross-sectional computerized tomography (CT) scans for 2 different augmentation heights. Based on area calculations of cross-sectional CT scans, the volume of additional bone needed was calculated for 44 sinus lift procedures. The amount of bone volume needed to raise the sinus floor to heights of both 12 and 17 mm was calculated. To achieve a sinus floor height of 12 mm, it was necessary to increase the height by a mean of 7.2+/-2.1 mm (range, 3.0 to 10.5 mm), depending on the residual ridge height; to achieve a height of 17 mm, a mean of 12.4+/-2.0 mm (range, 8.5 to 15.5 mm) was required (P < .01). The calculated augmentation volume for an augmentation height of 12 mm was 1.7+/-0.9 cm3; for an augmentation height of 17 mm, the volume required was 3.6+/-1.5 cm3. Increasing the height of the sinus lift by 5 mm, i.e., from 12 mm to 17 mm augmentation height, increased the augmentation volume by 100%. A significant correlation was found between augmentation height and the calculated sinus lift augmentation volume (r = 0.78, P < .01). Detailed preoperative knowledge of the sinus lift augmentation volume is helpful as a predictive value in deciding on a donor site for harvesting autogenous bone and on the ratio of bone to bone substitute to use. Calculation of the augmentation size can help determine the surgical approach and thus the perioperative treatment and the costs of the surgery for both patients and clinicians.
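Volume estimates of this kind are typically obtained by integrating the measured cross-sectional areas over the slice spacing, V ≈ Σ A_i Δz (here in trapezoidal form). A short sketch with made-up slice areas:

```python
import numpy as np

# Illustrative graft cross-sectional areas (cm^2) on consecutive CT
# slices, and the slice spacing (cm)
areas = np.array([0.0, 0.42, 0.88, 1.10, 0.95, 0.51, 0.0])
dz = 0.3

# Trapezoidal integration of area over slice position
volume = float(((areas[:-1] + areas[1:]) / 2.0).sum() * dz)
print(f"augmentation volume = {volume:.2f} cm^3")
```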
Calculative techniques for transonic flows about certain classes of wing body combinations
NASA Technical Reports Server (NTRS)
Stahara, S. S.; Spreiter, J. R.
1972-01-01
Procedures based on the method of local linearization and transonic equivalence rule were developed for predicting properties of transonic flows about certain classes of wing-body combinations. The procedures are applicable to transonic flows with free stream Mach number in the ranges near one, below the lower critical and above the upper critical. Theoretical results are presented for surface and flow field pressure distributions for both lifting and nonlifting situations.
NASA Astrophysics Data System (ADS)
Brencic, M.; Hictaler, J.
2012-04-01
During recent years, substantial efforts have been directed toward the reconstruction of past meteorological data sets of precipitation, air temperature, air pressure and sunshine. The Alpine region of Europe has a long tradition of meteorological monitoring, starting with the first modern measurements in the late 18th century. However, older data were obtained under very different conditions, standards and quality, so direct comparison between data sets from different observation points is not possible. Several methods, known as data homogenisation procedures, have been developed to enable comparison of data from different observation points and sources. Although homogenisation procedures are scientifically agreed upon, the final homogenised data series depends on the ability and approach of the interpreters. A well-known data set from the Greater Alpine region based on a common homogenisation procedure is the HISTALP data series. However, HISTALP is not the only available homogenised data set in the region. Local agencies responsible for meteorological observations (e.g., in Slovenia, the Environmental Agency of Slovenia - ARSO) perform their own homogenisation procedures. Because more detailed information about measuring procedures and locations at particular stations is available to them, differences between homogenised data sets can be expected. Longer meteorological data sets can be used to detect past drought events of various magnitudes and to discern past droughts and their characteristics. A very frequently used meteorological drought index is the standardized precipitation index (SPI), which is designed to detect events of low frequency; with its help, periods of extremely low precipitation can be identified. It is usually based on monthly precipitation amounts, where the cumulative precipitation for a particular time period is calculated. Given a time series of monthly precipitation data for a location, the SPI can be calculated for any month in the record over the previous i months, where i = 1, 2, 3, ..., 12, ..., 24, ..., 48, ..., depending on the time scale of interest. A 3-month SPI is usually used as a short-term or seasonal drought index, a 12-month SPI as an intermediate-term drought index, and a 48-month SPI as a long-term drought index. In this paper, results of SPI calculations are presented for precipitation stations in the region of the Southern Alps for the last 200 years, comparing results from differently homogenised data sets for the same observation points. We compared homogenised data sets from the HISTALP and ARSO data bases. For the period after World War II, when reliable precipitation measurements are available, the comparison was also performed between raw and homogenised data series. Differences between calculated short-term SPI values (1 to 6 months) are small and do not influence the interpretation of short-term drought occurrence. As the SPI time scale lengthens, differences between calculated values grow and influence the detection of longer-term droughts. It can also be shown that differences among the parameters of the model distribution (gamma distribution) are larger for longer SPI time scales than for shorter ones. It can be concluded empirically that the homogenisation procedure applied to precipitation data sets can substantially influence SPI values and has an impact on conclusions about long-term drought occurrence.
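Operationally, the SPI for time scale i is computed by summing precipitation over rolling i-month windows, fitting a gamma distribution to those sums, and mapping the fitted cumulative probabilities through the inverse standard normal. A compact sketch with synthetic data; the handling of zero-precipitation months and of the seasonal cycle is deliberately simplified here.

```python
import numpy as np
from scipy import stats

def spi(monthly_precip, scale=3):
    """Standardized Precipitation Index over rolling `scale`-month sums."""
    p = np.asarray(monthly_precip, dtype=float)
    sums = np.convolve(p, np.ones(scale), mode="valid")   # rolling totals
    shape, loc, sc = stats.gamma.fit(sums, floc=0)        # gamma fit, loc = 0
    cdf = stats.gamma.cdf(sums, shape, loc=loc, scale=sc)
    return stats.norm.ppf(cdf)                            # map to N(0, 1)

rng = np.random.default_rng(3)
precip = rng.gamma(2.0, 40.0, size=240)   # 20 years of synthetic monthly data
print("3-month SPI, last 5 values:", np.round(spi(precip, 3)[-5:], 2))
```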
A new procedure for calculating contact stresses in gear teeth
NASA Technical Reports Server (NTRS)
Somprakit, Paisan; Huston, Ronald L.
1991-01-01
A numerical procedure for evaluating and monitoring contact stresses in meshing gear teeth is discussed. The procedure is intended to extend the range of applicability and to improve the accuracy of gear contact stress analysis. The procedure is based upon fundamental solution from the theory of elasticity. It is an iterative numerical procedure. The method is believed to have distinct advantages over the classical Hertz method, the finite-element method, and over existing approaches with the boundary element method. Unlike many classical contact stress analyses, friction effects and sliding are included. Slipping and sticking in the contact region are studied. Several examples are discussed. The results are in agreement with classical results. Applications are presented for spur gears.
Calculation of multiphoton ionization processes
NASA Technical Reports Server (NTRS)
Chang, T. N.; Poe, R. T.
1976-01-01
We propose an accurate and efficient procedure for the calculation of multiphoton ionization processes. In addition to its computational advantages, this procedure enables us to study the relative contributions of the resonant and nonresonant intermediate states.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A; Pasciak, A
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. The purpose of this study was to assess the accuracy of different indirect dose estimates and to determine if PSD can be calculated within ±50% for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures. Indirect dose metrics from procedures were collected, including reference air kerma (RAK). Four different estimates of PSD were calculated and compared along with RAK to the measured PSD. The indirect estimates included a standard method, use of detailed information from the RDSR, and two simplified calculation methods. Indirect dosimetry was compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the indirect estimates were examined. Results: PSD calculated with the standard calculation method were within ±50% for all 41 procedures. This was also true for a simplified method using a single source-to-patient distance (SPD) for all calculations. RAK was within ±50% for all but one procedure. Cases for which RAK or calculated PSD exhibited large differences from the measured PSD were analyzed, and two causative factors were identified: 'extreme' SPD and large contributions to RAK from rotational angiography or runs acquired at large gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±50% for embolization procedures, and usually to within ±35%. RAK can be used without modification to set notification limits and substantial radiation dose levels. These results can be extended to similar procedures, including vascular and interventional oncology. Film dosimetry is likely an unnecessary effort for these types of procedures.
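An indirect PSD estimate of the kind evaluated here scales the reference air kerma from the interventional reference point to the actual source-to-patient distance by the inverse-square law and applies multiplicative corrections (table attenuation, backscatter, air-kerma-to-dose conversion). The sketch below uses typical-magnitude placeholder factors, not the study's values:

```python
def estimate_psd(rak_mgy, spd_cm, ref_dist_cm=60.0,
                 table_factor=0.8, backscatter=1.35, f_factor=1.06):
    """Indirect peak-skin-dose estimate from reference air kerma (RAK).

    rak_mgy     -- cumulative reference air kerma (mGy)
    spd_cm      -- source-to-patient distance (cm)
    ref_dist_cm -- source to interventional reference point distance (cm)
    The correction factors are illustrative magnitudes, not measured values.
    """
    inverse_square = (ref_dist_cm / spd_cm) ** 2
    return rak_mgy * inverse_square * table_factor * backscatter * f_factor

# Example: 3 Gy of RAK delivered with the skin 70 cm from the source
print(f"estimated PSD ~ {estimate_psd(3000.0, 70.0):.0f} mGy")
```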
Modeling reliability measurement of interface on information system: Towards the forensic of rules
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan
2018-02-01
Today almost all machines depend on software, and a software and hardware system in turn depends on rules, that is, the procedures for its use. If the procedure or program can be reliably characterized using the concepts of graphs, logic, and probability, then the strength of the rules can also be measured accordingly. Therefore, this paper introduces an enumeration model to measure the reliability of interfaces, based on the case of information systems governed by rules of use issued by the relevant agencies. The enumeration model is obtained based on software reliability calculation.
A mathematical procedure to predict optical performance of CPCs
NASA Astrophysics Data System (ADS)
Yu, Y. M.; Yu, M. J.; Tang, R. S.
2016-08-01
To evaluate the optical performance of a CPC-based concentrating photovoltaic system, it is essential to find the angular dependence of the optical efficiency of a compound parabolic concentrator (CPC-θe) in which the incident angle of solar rays on the solar cells is restricted to within θe for radiation over its acceptance angle. In this work, a mathematical procedure was developed to calculate the optical efficiency of a CPC-θe for radiation incident at any angle, based on the radiation transfer within the CPC-θe. Calculations show that, given the acceptance half-angle (θa), the annual radiation collected by a full CPC-θe increases with θe, and the CPC without restriction of exit angle (CPC-90) annually collects the most radiation due to its large geometric concentration (Ct); whereas for truncated CPCs with identical θa and Ct, the annual radiation collected by the CPC-θe is almost identical to that of the CPC-90, and even slightly higher. Calculations also indicate that the annual radiation arriving on the absorber of a CPC-θe at angles larger than θe decreases as θe increases but is always less than that of the CPC-90; this implies that a CPC-θe-based PV system is more efficient than a CPC-90-based PV system, because radiation incident on the solar cells at large angles is poorly converted into electricity.
Al-Fouzan, Afnan F; Al-Mejrad, Lamya A; Albarrag, Ahmed M
2017-10-01
The goal of this study was to compare the adhesion of Candida albicans to the surfaces of CAD/CAM and conventionally fabricated complete denture bases. Twenty discs of acrylic resin poly (methyl methacrylate) were fabricated with CAD/CAM and conventional procedures (heat-polymerized acrylic resin). The specimens were divided into two groups: 10 discs were fabricated using the CAD/CAM procedure (Wieland Digital Denture Ivoclar Vivadent), and 10 discs were fabricated using a conventional flasking and pressure-pack technique. Candida colonization was performed on all the specimens using four Candida albicans isolates. The difference in Candida albicans adhesion on the discs was evaluated. The number of adherent yeast cells was calculated by the colony-forming units (CFU) and by Fluorescence microscopy. There was a significant difference in the adhesion of Candida albicans to the complete denture bases created with CAD/CAM and the adhesion to those created with the conventional procedure. The CAD/CAM denture bases exhibited less adhesion of Candida albicans than did the denture bases created with the conventional procedure ( P <.05). The CAD/CAM procedure for fabricating complete dentures showed promising potential for reducing the adherence of Candida to the denture base surface. Clinical Implications. Complete dentures made with the CAD/CAM procedure might decrease the incidence of denture stomatitis compared with conventional dentures.
Air-Gapped Structures as Magnetic Elements for Use in Power Processing Systems. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Ohri, A. K.
1977-01-01
Methodical approaches to the design of inductors for use in LC filters and dc-to-dc converters using air gapped magnetic structures are presented. Methods for the analysis and design of full wave rectifier LC filter circuits operating with the inductor current in both the continuous conduction and the discontinuous conduction modes are also described. In the continuous conduction mode, linear circuit analysis techniques are employed, while in the case of the discontinuous mode, the method of analysis requires computer solutions of the piecewise linear differential equations which describe the filter in the time domain. Procedures for designing filter inductors using air gapped cores are presented. The first procedure requires digital computation to yield a design which is optimized in the sense of minimum core volume and minimum number of turns. The second procedure does not yield an optimized design as defined above, but the design can be obtained by hand calculations or with a small calculator. The third procedure is based on the use of specially prepared magnetic core data and provides an easy way to quickly reach a workable design.
Cian, Francesco; Villiers, Elisabeth; Archer, Joy; Pitorri, Francesca; Freeman, Kathleen
2014-06-01
Quality control (QC) validation is an essential tool in total quality management of a veterinary clinical pathology laboratory. Cost-analysis can be a valuable technique to help identify an appropriate QC procedure for the laboratory, although this has never been reported in veterinary medicine. The aim of this study was to determine the applicability of the Six Sigma Quality Cost Worksheets in the evaluation of possible candidate QC rules identified by QC validation. Three months of internal QC records were analyzed. EZ Rules 3 software was used to evaluate candidate QC procedures, and the costs associated with the application of different QC rules were calculated using the Six Sigma Quality Cost Worksheets. The costs associated with the current and the candidate QC rules were compared, and the amount of cost savings was calculated. There was a significant saving when the candidate 1-2.5s, n = 3 rule was applied instead of the currently utilized 1-2s, n = 3 rule. The savings were 75% per year (£ 8232.5) based on re-evaluating all of the patient samples in addition to the controls, and 72% per year (£ 822.4) based on re-analyzing only the control materials. The savings were also shown to change accordingly with the number of samples analyzed and with the number of daily QC procedures performed. These calculations demonstrated the importance of the selection of an appropriate QC procedure, and the usefulness of the Six Sigma Costs Worksheet in determining the most cost-effective rule(s) when several candidate rules are identified by QC validation. © 2014 American Society for Veterinary Clinical Pathology and European Society for Veterinary Clinical Pathology.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roos, E.; Maier, V.; Nagel, G.
The break preclusion concept is based on "KTA rules", "RSK guidelines" and the "Rahmenspezifikation Basissicherheit". These fundamental rules, containing for example requirements on materials, design, calculation, manufacturing and testing procedures, are explained, and their technical realisation is shown by means of examples. The quality of these piping systems can be proven by means of fracture mechanics calculations, by showing that in every case the leakage monitoring system detects cracks which are clearly smaller than the critical crack. Thus the leak-before-break behavior and the break preclusion concept are implicitly affirmed. In order to further reduce conservatisms in the fracture mechanics procedures, specific research projects are being executed, which are explained in this contribution.
NASA Technical Reports Server (NTRS)
Peters, C.; Kampe, F. (Principal Investigator)
1980-01-01
The mathematical description and implementation of the statistical estimation procedure known as the Houston integrated spatial/spectral estimator (HISSE) is discussed. HISSE is based on a normal mixture model and is designed to take advantage of spectral and spatial information of LANDSAT data pixels, utilizing the initial classification and clustering information provided by the AMOEBA algorithm. The HISSE calculates parametric estimates of class proportions which reduce the error inherent in estimates derived from typical classify and count procedures common to nonparametric clustering algorithms. It also singles out spatial groupings of pixels which are most suitable for labeling classes. These calculations are designed to aid the analyst/interpreter in labeling patches with a crop class label. Finally, HISSE's initial performance on an actual LANDSAT agricultural ground truth data set is reported.
ERIC Educational Resources Information Center
Tholkes, Ben F.
1998-01-01
Defines camping risks and lists types and examples: (1) objective risk beyond control; (2) calculated risk based on personal choice; (3) perceived risk; and (4) reckless risk. Describes campers to watch ("immortals" and abdicators), and several "treatments" of risk: avoidance, safety procedures and well-trained staff, adequate…
Calculation of Drug Solubilities by Pharmacy Students.
ERIC Educational Resources Information Center
Cates, Lindley A.
1981-01-01
A method of estimating the solubilities of drugs in water is reported that is based on a principle applied in quantitative structure-activity relationships. This procedure involves correlation of partition coefficient values using the octanol/water system and aqueous solubility. (Author/MLW)
NASA Astrophysics Data System (ADS)
Culpitt, Tanner; Brorsen, Kurt R.; Hammes-Schiffer, Sharon
2017-06-01
Density functional theory (DFT) embedding approaches have generated considerable interest in the field of computational chemistry because they enable calculations on larger systems by treating subsystems at different levels of theory. To circumvent the calculation of the non-additive kinetic potential, various projector methods have been developed to ensure the orthogonality of molecular orbitals between subsystems. Herein the orthogonality constrained basis set expansion (OCBSE) procedure is implemented to enforce this subsystem orbital orthogonality without requiring a level shifting parameter. This scheme is a simple alternative to existing parameter-free projector-based schemes, such as the Huzinaga equation. The main advantage of the OCBSE procedure is that excellent convergence behavior is attained for DFT-in-DFT embedding without freezing any of the subsystem densities. For the three chemical systems studied, the level of accuracy is comparable to or higher than that obtained with the Huzinaga scheme with frozen subsystem densities. Allowing both the high-level and low-level DFT densities to respond to each other during DFT-in-DFT embedding calculations provides more flexibility and renders this approach more generally applicable to chemical systems. It could also be useful for future extensions to embedding approaches combining wavefunction theories and DFT.
Effects of enviromentally imposed roughness on airfoil performance
NASA Technical Reports Server (NTRS)
Cebeci, Tuncer
1987-01-01
The experimental evidence for the effects of rain, insects, and ice on airfoil performance is examined. The extent to which the available information can be incorporated in a calculation method, in terms of change of shape and surface roughness, is discussed. The methods described are based on the interactive boundary-layer procedure of Cebeci or on the thin-layer Navier-Stokes procedure developed at NASA. The cases presented show that extensive flow separation occurs on the rough surfaces.
Lui, Kung-Jong
2012-05-01
When a new test with fewer invasions or less expenses to administer than the traditional test is developed, we may be interested in testing whether the former is non-inferior to the latter with respect to test accuracy. We define non-inferiority via both the odds ratio (OR) of correctly identifying a case and the OR of correctly identifying a non-case between two tests under comparison. We focus our discussion on testing the non-inferiority of a new screening test to a traditional screening test when a confirmatory procedure is performed only on patients with screen positives. On the basis of well-established methods for paired-sample data, we derive an asymptotic test procedure and an exact test procedure with respect to the two ORs defined here. Using Monte Carlo simulation, we evaluate the performance of these test procedures in a variety of situations. We note that the test procedures proposed here can also be applicable if we are interested in testing non-inferiority with respect to the ratio of sensitivities and the ratio of specificities. We discuss interval estimation of these ORs and sample size calculation based on the asymptotic test procedure considered here. We use the data taken from a study of the prostate-specific-antigen (PSA) test and the digital rectal examination (DRE) test to illustrate the practical use of these test procedures, interval estimators and sample size calculation formula. Copyright © 2012 Elsevier Inc. All rights reserved.
40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...
40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 16 2011-07-01 2011-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...
40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 17 2012-07-01 2012-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...
40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 17 2013-07-01 2013-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...
40 CFR 75.75 - Additional ozone season calculation procedures for special circumstances.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 17 2014-07-01 2014-07-01 false Additional ozone season calculation... § 75.75 Additional ozone season calculation procedures for special circumstances. (a) The owner or operator of a unit that is required to calculate ozone season heat input for purposes of providing data...
Response surface method in geotechnical/structural analysis, phase 1
NASA Astrophysics Data System (ADS)
Wong, F. S.
1981-02-01
In the response surface approach, an approximating function is fit to a long-running computer code based on a limited number of code calculations. The approximating function, called the response surface, is then used to replace the code in subsequent repetitive computations required in a statistical analysis. The procedure of response surface development and the feasibility of the method are shown using a sample problem in slope stability, which is based on data from centrifuge experiments on model soil slopes and involves five random soil parameters. It is shown that a response surface can be constructed from as few as four code calculations and that the response surface is computationally extremely efficient compared to the code calculation. Potential applications of this research include probabilistic analysis of dynamic, complex, nonlinear soil/structure systems such as slope stability, liquefaction, and nuclear reactor safety.
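A response surface of this kind is typically a low-order polynomial fitted to a handful of code runs, after which the polynomial replaces the code in the repetitive statistical computations. The sketch below fits a quadratic surface in two variables; the placeholder function stands in for the long-running slope-stability code, and the five-parameter case works the same way with a larger basis.

```python
import numpy as np

def expensive_code(c, phi):
    """Placeholder for the long-running slope-stability code."""
    return 1.2 + 0.8 * c - 0.5 * phi + 0.3 * c * phi

# A small design of code runs on a 3 x 3 grid of the two parameters
pts = np.array([(c, p) for c in (0.0, 0.5, 1.0) for p in (0.0, 0.5, 1.0)])
y = np.array([expensive_code(c, p) for c, p in pts])

# Quadratic basis: [1, c, phi, c*phi, c^2, phi^2]
basis = lambda c, p: np.array([1.0, c, p, c * p, c ** 2, p ** 2])
A = np.array([basis(c, p) for c, p in pts])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

# The fitted surface now replaces the code in repeated statistical runs
surface = lambda c, p: float(basis(c, p) @ coef)
print(f"code: {expensive_code(0.3, 0.6):.4f}, surface: {surface(0.3, 0.6):.4f}")
```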
Determination of Reaction Stoichiometries by Flow Injection Analysis.
ERIC Educational Resources Information Center
Rios, Angel; And Others
1986-01-01
Describes a method of flow injection analysis intended for calculation of complex-formation and redox reaction stoichiometries based on a closed-loop configuration. The technique is suitable for use in undergraduate laboratories. Information is provided for equipment, materials, procedures, and sample results. (JM)
Harańczyk, Maciej; Gutowski, Maciej
2007-01-01
We describe a procedure for finding low-energy tautomers of a molecule. The procedure consists of (i) combinatorial generation of a library of tautomers, (ii) screening based on the results of geometry optimization of initial structures performed at the density functional level of theory, and (iii) final refinement of geometry for the top hits at the second-order Møller-Plesset level of theory, followed by single-point energy calculations at the coupled cluster level of theory with single, double, and perturbative triple excitations. The library of initial structures of various tautomers is generated with TauTGen, a tautomer generator program. The procedure proved successful for those molecular systems for which common chemical knowledge had not been sufficient to predict the most stable structures.
Plastic Surgery Statistics in the US: Evidence and Implications.
Heidekrueger, Paul I; Juran, Sabrina; Patel, Anup; Tanna, Neil; Broer, P Niclas
2016-04-01
The American Society of Plastic Surgeons publishes yearly procedural statistics, collected through questionnaires and online via Tracking Operations and Outcomes for Plastic Surgeons (TOPS). The statistics, disaggregated by U.S. region, leave two important factors unaccounted for: (1) the underlying base population and (2) the number of surgeons performing the procedures. The presented analysis puts the regional distribution of surgeries into perspective and contributes to fulfilling the TOPS legislation objectives. ASPS statistics from 2005 to 2013 were analyzed by geographic region in the U.S. Using population estimates from the 2010 U.S. Census Bureau, procedures were calculated per 100,000 population. Then, based on the ASPS member roster, the rate of surgeries per surgeon by region was calculated and the interaction of these two variables was related to each other. In 2013, 1,668,420 esthetic surgeries were performed in the U.S., resulting in the following ASPS ranking: 1st Mountain/Pacific (Region 5; 502,094 procedures, 30 % share), 2nd New England/Middle Atlantic (Region 1; 319,515, 19 %), 3rd South Atlantic (Region 3; 310,441, 19 %), 4th East/West South Central (Region 4; 274,282, 16 %), and 5th East/West North Central (Region 2; 262,088, 16 %). However, considering underlying populations, the distribution and ranking appear different, displaying a smaller variance in surgical demand. Further, the number of surgeons and the rate of procedures show great regional variation. Demand for plastic surgery is influenced by patients' geographic background and varies among U.S. regions. While ASPS data provide important information, additional insight regarding the demand for surgical procedures can be gained by taking certain demographic factors into consideration. This journal requires that the authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul
2017-12-01
The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from the use of multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014€) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). The single multi-step procedure took 270 min compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.
10 CFR 766.102 - Calculation methodology.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 4 2011-01-01 2011-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...
10 CFR 766.102 - Calculation methodology.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 4 2014-01-01 2014-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...
10 CFR 766.102 - Calculation methodology.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 4 2012-01-01 2012-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...
10 CFR 766.102 - Calculation methodology.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 4 2010-01-01 2010-01-01 false Calculation methodology. 766.102 Section 766.102 Energy DEPARTMENT OF ENERGY URANIUM ENRICHMENT DECONTAMINATION AND DECOMMISSIONING FUND; PROCEDURES FOR SPECIAL ASSESSMENT OF DOMESTIC UTILITIES Procedures for Special Assessment § 766.102 Calculation methodology. (a...
Separation behavior of boundary layers on three-dimensional wings
NASA Technical Reports Server (NTRS)
Stock, H. W.
1981-01-01
An inverse boundary-layer procedure for calculating separated, turbulent boundary layers on an infinitely long yawed wing was developed. The procedure, originally developed for calculating three-dimensional, incompressible turbulent boundary layers, was extended to adiabatic, compressible flows. Example calculations were made for transonic wings, including viscous effects. An approximate calculation method is described for regions of separated, turbulent boundary layers, permitting calculation of the displacement thickness. The laminar boundary-layer development was calculated for inclined ellipsoids.
Alchemical Free Energy Calculations for Nucleotide Mutations in Protein-DNA Complexes.
Gapsys, Vytautas; de Groot, Bert L
2017-12-12
Nucleotide-sequence-dependent interactions between proteins and DNA are responsible for a wide range of gene regulatory functions. Accurate and generalizable methods to evaluate the strength of protein-DNA binding have long been sought. While numerous computational approaches have been developed, most of them require fitting parameters to experimental data to a certain degree, e.g., machine learning algorithms or knowledge-based statistical potentials. Molecular-dynamics-based free energy calculations offer a robust, system-independent, first-principles-based method to calculate free energy differences upon nucleotide mutation. We present an automated procedure to set up alchemical MD-based calculations to evaluate free energy changes occurring as the result of a nucleotide mutation in DNA. We used these methods to perform a large-scale mutation scan comprising 397 nucleotide mutation cases in 16 protein-DNA complexes. The obtained prediction accuracy reaches 5.6 kJ/mol average unsigned deviation from experiment with a correlation coefficient of 0.57 with respect to the experimentally measured free energies. Overall, the first-principles-based approach performed on par with the molecular modeling approaches Rosetta and FoldX. Subsequently, we utilized the MD-based free energy calculations to construct protein-DNA binding profiles for the zinc finger protein Zif268. The calculation results compare remarkably well with the experimentally determined binding profiles. The software automating the structure and topology setup for alchemical calculations is a part of the pmx package; the utilities have also been made available online at http://pmx.mpibpc.mpg.de/dna_webserver.html .
NASA Technical Reports Server (NTRS)
Neuhauser, Daniel; Baer, Michael; Judson, Richard S.; Kouri, Donald J.
1989-01-01
The first successful application of the three-dimensional quantum body frame wave packet approach to reactive scattering is reported for the H + H2 exchange reaction on the LSTH potential surface. The method used is based on a procedure for calculating total reaction probabilities from wave packets. It is found that converged, vibrationally resolved reactive probabilities can be calculated with a grid that is not much larger than required for the pure inelastic calculation. Tabular results are presented for several energies.
Degassing procedure for ultrahigh vacuum
NASA Technical Reports Server (NTRS)
Moore, B. C.
1979-01-01
Calculations based on diffusion coefficients and degassing rates for stainless-steel vacuum chambers indicate that baking at lower temperatures for longer periods give lower ultimate pressures than rapid baking at high temperatures. Process could reduce pressures in chambers for particle accelerators, fusion reactors, material research, and other applications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...
Code of Federal Regulations, 2013 CFR
2013-07-01
... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...
Code of Federal Regulations, 2014 CFR
2014-07-01
... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...
Code of Federal Regulations, 2012 CFR
2012-07-01
... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...
Canovas, Carmen; van der Mooren, Marrie; Rosén, Robert; Piers, Patricia A; Wang, Li; Koch, Douglas D; Artal, Pablo
2015-05-01
To determine the impact of the equivalent refractive index (ERI) on intraocular lens (IOL) power prediction for eyes with previous myopic laser in situ keratomileusis (LASIK) using custom ray tracing. AMO B.V., Groningen, the Netherlands, and the Department of Ophthalmology, Baylor College of Medicine, Houston, Texas, USA. Retrospective data analysis. The ERI was calculated individually from the post-LASIK total corneal power. Two methods to account for the posterior corneal surface were tested; that is, calculation from pre-LASIK data or from post-LASIK data only. Four IOL power predictions were generated using a computer-based ray-tracing technique, including individual ERI results from both calculation methods, a mean ERI over the whole population, and the ERI for normal patients. For each patient, IOL power results calculated from the four predictions as well as those obtained with the Haigis-L were compared with the optimum IOL power calculated after cataract surgery. The study evaluated 25 patients. The mean and range of ERI values determined using post-LASIK data were similar to those determined from pre-LASIK data. Introducing individual or an average ERI in the ray-tracing IOL power calculation procedure resulted in mean IOL power errors that were not significantly different from zero. The ray-tracing procedure that includes an average ERI gave a greater percentage of eyes with an IOL power prediction error within ±0.5 diopter than the Haigis-L (84% versus 52%). For IOL power determination in post-LASIK patients, custom ray tracing including a modified ERI was an accurate procedure that exceeded the current standards for normal eyes. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
40 CFR 86.164-08 - Supplemental Federal Test Procedure calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... provisions provide the procedures for calculating mass emission results of each regulated exhaust pollutant... this section. These provisions provide the procedures for determining the weighted mass emissions for... reported test results for the SFTP composite (NMHC+NOX) and optional composite CO standards shall be...
40 CFR 86.164-08 - Supplemental Federal Test Procedure calculations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... provisions provide the procedures for calculating mass emission results of each regulated exhaust pollutant... this section. These provisions provide the procedures for determining the weighted mass emissions for... reported test results for the SFTP composite (NMHC+NOX) and optional composite CO standards shall be...
The anatomy of floating shock fitting. [shock waves computation for flow fields]
NASA Technical Reports Server (NTRS)
Salas, M. D.
1975-01-01
The floating shock fitting technique is examined. Second-order difference formulas are developed for the computation of discontinuities. A procedure is developed to compute mesh points that are crossed by discontinuities. The technique is applied to the calculation of internal two-dimensional flows with arbitrary number of shock waves and contact surfaces. A new procedure, based on the coalescence of characteristics, is developed to detect the formation of shock waves. Results are presented to validate and demonstrate the versatility of the technique.
Steady inviscid transonic flows over planar airfoils: A search for a simplified procedure
NASA Technical Reports Server (NTRS)
Magnus, R.; Yoshihara, H.
1973-01-01
A finite difference procedure based upon a system of unsteady equations in proper conservation form with either exact or small disturbance steady terms is used to calculate the steady flows over several classes of airfoils. The airfoil condition is fulfilled on a slab whose upstream extremity is a semi-circle overlaying the airfoil leading edge circle. The limitations of the small disturbance equations are demonstrated in an extreme example of a blunt-nosed, aft-cambered airfoil. The necessity of using the equations in proper conservation form to capture the shock properly is stressed. Ability of the steady relaxation procedures to capture the shock is briefly examined.
Transmission eigenchannels for coherent phonon transport
NASA Astrophysics Data System (ADS)
Klöckner, J. C.; Cuevas, J. C.; Pauly, F.
2018-04-01
We present a procedure to determine transmission eigenchannels for coherent phonon transport in nanoscale devices using the framework of nonequilibrium Green's functions. We illustrate our procedure by analyzing a one-dimensional chain, where all steps can be carried out analytically. More importantly, we show how the procedure can be combined with ab initio calculations to provide a better understanding of phonon heat transport in realistic atomic-scale junctions. In particular, we study the phonon eigenchannels in a gold metallic atomic-size contact and different single-molecule junctions based on molecules such as an alkane chain, a brominated benzene-diamine, where destructive phonon interference effects take place, and a C60 junction.
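As a rough illustration of the channel decomposition described above (a sketch with made-up matrices, not the authors' code), the transmission eigenchannels at a given frequency are the eigenvalues of Gamma_L^(1/2) G Gamma_R G-dagger Gamma_L^(1/2), built from the device Green's function and the lead broadening matrices:

```python
# Hedged sketch: channel transmissions from the NEGF transmission matrix.
import numpy as np
from scipy.linalg import sqrtm

def eigenchannel_transmissions(G, gamma_L, gamma_R):
    """Eigenvalues of Gamma_L^(1/2) G Gamma_R G^dag Gamma_L^(1/2) at one frequency."""
    gL_half = sqrtm(gamma_L)
    M = gL_half @ G @ gamma_R @ G.conj().T @ gL_half
    tau = np.linalg.eigvalsh(M)      # real; bounded by 1 for physically consistent inputs
    return np.sort(tau)[::-1]

# Toy 2x2 example with hypothetical lead self-energies and dynamical matrix:
Sigma_L = np.diag([-0.10j, -0.05j])
Sigma_R = np.diag([-0.05j, -0.10j])
gamma_L = 1j * (Sigma_L - Sigma_L.conj().T)
gamma_R = 1j * (Sigma_R - Sigma_R.conj().T)
omega, K = 1.0, np.array([[2.0, -1.0], [-1.0, 2.0]])
G = np.linalg.inv((omega**2 + 1e-9j) * np.eye(2) - K - Sigma_L - Sigma_R)
print(eigenchannel_transmissions(G, gamma_L, gamma_R))
```

The trace of that matrix recovers the total Caroli-type transmission, so the eigenvalues partition the phonon heat flow among individual channels.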
Pizzoli, Giuliano; Lobello, Maria Grazia; Carlotti, Benedetta; Elisei, Fausto; Nazeeruddin, Mohammad K; Vitillaro, Giuseppe; De Angelis, Filippo
2012-10-14
We report a combined spectro-photometric and computational investigation of the acid-base equilibria of the N3 solar cell sensitizer [Ru(dcbpyH2)2(NCS)2] (dcbpyH2 = 4,4'-dicarboxy-2,2'-bipyridine) in aqueous/ethanol solutions. The absorption spectra of N3 recorded at various pH values were analyzed by Singular Value Decomposition techniques, followed by Global Fitting procedures, allowing us to identify four separate acid-base equilibria and their corresponding ground state pKa values. DFT/TDDFT calculations were performed for the N3 dye in solution, investigating the possible relevant species obtained by sequential deprotonation of the four dye carboxylic groups. TDDFT excited state calculations provided UV-vis absorption spectra which nicely agree with the experimental spectral shapes at various pH values. The calculated pKa values are also in good agreement with experimental data, within <1 pKa unit. Based on the calculated energy differences, a tentative assignment of the N3 deprotonation pathway is reported.
40 CFR 86.164-00 - Supplemental Federal Test Procedure calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... provide the procedures for calculating mass emission results of each regulated exhaust pollutant for the... section. These provisions provide the procedures for determining the weighted mass emissions for the FTP... test results for the SFTP composite (NMHC+NOX) and optional composite CO standards shall be computed by...
40 CFR 86.164-00 - Supplemental Federal Test Procedure calculations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... provide the procedures for calculating mass emission results of each regulated exhaust pollutant for the... section. These provisions provide the procedures for determining the weighted mass emissions for the FTP... test results for the SFTP composite (NMHC+NOX) and optional composite CO standards shall be computed by...
Hovgaard, Lisette Hvid; Andersen, Steven Arild Wuyts; Konge, Lars; Dalsgaard, Torur; Larsen, Christian Rifbjerg
2018-03-30
The use of robotic surgery for minimally invasive procedures has increased considerably over the last decade. Robotic surgery has potential advantages compared to laparoscopic surgery but also requires new skills. Using virtual reality (VR) simulation to facilitate the acquisition of these new skills could potentially benefit training of robotic surgical skills and also be a crucial step in developing a robotic surgical training curriculum. The study's objective was to establish validity evidence for a simulation-based test for procedural competency for the vaginal cuff closure procedure that can be used in a future simulation-based, mastery learning training curriculum. Eleven novice gynaecological surgeons without prior robotic experience and 11 experienced gynaecological robotic surgeons (> 30 robotic procedures) were recruited. After familiarization with the VR simulator, participants completed the module 'Guided Vaginal Cuff Closure' six times. Validity evidence was investigated for 18 preselected simulator metrics. The internal consistency was assessed using Cronbach's alpha and a composite score was calculated based on metrics with significant discriminative ability between the two groups. Finally, a pass/fail standard was established using the contrasting groups' method. The experienced surgeons significantly outperformed the novice surgeons on 6 of the 18 metrics. The internal consistency was 0.58 (Cronbach's alpha). The experienced surgeons' mean composite score for all six repetitions were significantly better than the novice surgeons' (76.1 vs. 63.0, respectively, p < 0.001). A pass/fail standard of 75/100 was established. Four novice surgeons passed this standard (false positives) and three experienced surgeons failed (false negatives). Our study has gathered validity evidence for a simulation-based test for procedural robotic surgical competency in the vaginal cuff closure procedure and established a credible pass/fail standard for future proficiency-based training.
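The composite-score cutoff can be illustrated with the contrasting-groups method under a normal approximation; a hedged sketch, with hypothetical scores chosen only to match the reported group means of 63.0 and 76.1:

```python
# Hedged sketch (assumed normal approximation, not the study's exact code):
# the contrasting-groups method places the pass/fail score where the novice
# and experienced-group score distributions intersect.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

novice_scores = np.array([55, 60, 58, 63, 70, 66, 61, 59, 68, 72, 61.])  # hypothetical
expert_scores = np.array([70, 74, 78, 81, 77, 72, 79, 75, 76, 80, 75.])  # hypothetical

mu_n, sd_n = novice_scores.mean(), novice_scores.std(ddof=1)
mu_e, sd_e = expert_scores.mean(), expert_scores.std(ddof=1)

# Intersection of the two fitted normal densities, searched between the means.
f = lambda x: norm.pdf(x, mu_n, sd_n) - norm.pdf(x, mu_e, sd_e)
cutoff = brentq(f, mu_n, mu_e)
print(f"pass/fail standard ~ {cutoff:.1f}")
```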
ERIC Educational Resources Information Center
West, Alfred W.
This is the third in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals with the calculation procedures associated with a step-feed process. Illustrations and examples are included to…
Yang, Jianhong; Li, Xiaomeng; Xu, Jinwu; Ma, Xianghong
2018-01-01
The quantitative analysis accuracy of calibration-free laser-induced breakdown spectroscopy (CF-LIBS) is severely affected by the self-absorption effect and estimation of plasma temperature. Herein, a CF-LIBS quantitative analysis method based on the auto-selection of internal reference line and the optimized estimation of plasma temperature is proposed. The internal reference line of each species is automatically selected from analytical lines by a programmable procedure through easily accessible parameters. Furthermore, the self-absorption effect of the internal reference line is considered during the correction procedure. To improve the analysis accuracy of CF-LIBS, the particle swarm optimization (PSO) algorithm is introduced to estimate the plasma temperature based on the calculation results from the Boltzmann plot. Thereafter, the species concentrations of a sample can be calculated according to the classical CF-LIBS method. A total of 15 certified alloy steel standard samples of known compositions and elemental weight percentages were used in the experiment. Using the proposed method, the average relative errors of Cr, Ni, and Fe calculated concentrations were 4.40%, 6.81%, and 2.29%, respectively. The quantitative results demonstrated an improvement compared with the classical CF-LIBS method and the promising potential of in situ and real-time application.
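For orientation, a minimal Boltzmann-plot estimate of plasma temperature, the quantity the paper then refines with PSO; the line data below are invented for illustration, not the paper's measurements:

```python
# Hedged sketch: a Boltzmann plot estimates plasma temperature from line
# intensities I, degeneracy-weighted transition probabilities g*A, and
# upper-level energies E via  ln(I*lambda/(g*A)) = -E/(k_B*T) + const.
import numpy as np

k_B = 8.617e-5                                     # eV/K
lam = np.array([371.99, 373.71, 374.95, 382.04])   # nm (hypothetical Fe I lines)
I   = np.array([1200.0, 980.0, 500.0, 195.0])      # arbitrary intensity units
gA  = np.array([1.62e8, 1.41e8, 7.6e7, 6.7e7])     # s^-1
E   = np.array([3.33, 3.37, 3.42, 4.10])           # eV

y = np.log(I * lam / gA)
slope, intercept = np.polyfit(E, y, 1)             # slope = -1/(k_B * T)
T = -1.0 / (k_B * slope)
print(f"estimated plasma temperature ~ {T:.0f} K")  # ~1e4 K for these made-up numbers
```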
Iterative pass optimization of sequence data
NASA Technical Reports Server (NTRS)
Wheeler, Ward C.
2003-01-01
The problem of determining the minimum-cost hypothetical ancestral sequences for a given cladogram is known to be NP-complete. This "tree alignment" problem has motivated the considerable effort placed in multiple sequence alignment procedures. Wheeler in 1996 proposed a heuristic method, direct optimization, to calculate cladogram costs without the intervention of multiple sequence alignment. This method, though more efficient in time and more effective in cladogram length than many alignment-based procedures, greedily optimizes nodes based on descendent information only. In their proposal of an exact multiple alignment solution, Sankoff et al. in 1976 described a heuristic procedure--the iterative improvement method--to create alignments at internal nodes by solving a series of median problems. The combination of a three-sequence direct optimization with iterative improvement and a branch-length-based cladogram cost procedure, provides an algorithm that frequently results in superior (i.e., lower) cladogram costs. This iterative pass optimization is both computation and memory intensive, but economies can be made to reduce this burden. An example in arthropod systematics is discussed. ©2003 The Willi Hennig Society. Published by Elsevier Science (USA). All rights reserved.
Nakagawa, Yoshiaki; Takemura, Tadamasa; Yoshihara, Hiroyuki; Nakagawa, Yoshinobu
2011-04-01
A hospital director must estimate the revenues and expenses not only for the hospital as a whole but also for each clinical division to determine the proper management strategy. A new prospective payment system based on the Diagnosis Procedure Combination (DPC/PPS) introduced in 2003 has made the attribution of revenues and expenses to each clinical department very complicated because of the intricate involvement between the overall or blanket component and fee-for-service (FFS) payments. Few reports have so far presented a programmatic method for the calculation of medical costs and financial balance. A simple method, based on personnel cost, has been devised for calculating medical costs and financial balance. Using this method, one individual was able to complete the calculations for a hospital with 535 beds and 16 clinics, without using the central hospital computer system.
Unraveling Higher Education's Costs.
ERIC Educational Resources Information Center
Gordon, Gus; Charles, Maria
1998-01-01
The activity-based costing (ABC) method of analyzing institutional costs in higher education involves four procedures: determining the various discrete activities of the organization; calculating the cost of each; determining the cost drivers; tracing cost to the cost objective or consumer of each activity. Few American institutions have used the…
40 CFR 1065.610 - Duty cycle generation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Duty cycle generation. 1065.610... CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.610 Duty cycle generation. This section describes how to generate duty cycles that are specific to your engine, based on the...
Steinbach, Sarah M L; Sturgess, Christopher P; Dunning, Mark D; Neiger, Reto
2015-06-01
Assessment of renal function by means of plasma clearance of a suitable marker has become standard procedure for estimation of glomerular filtration rate (GFR). Sinistrin, a polyfructan solely cleared by the kidney, is often used for this purpose. Pharmacokinetic modeling using adequate software is necessary to calculate the disappearance rate and half-life of sinistrin. The purpose of this study was to describe the use of a Microsoft Excel-based add-in program to calculate plasma sinistrin clearance, as well as additional pharmacokinetic parameters such as transfer rates (k), half-life (t1/2) and volume of distribution (Vss), for sinistrin in dogs with varying degrees of renal function. Copyright © 2015 Elsevier Ltd. All rights reserved.
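A generic sketch of the underlying pharmacokinetic calculation (a two-compartment, bi-exponential fit; not the cited Excel add-in, and all sample values are hypothetical):

```python
# Hedged sketch: plasma clearance from CL = Dose / AUC, with AUC taken from a
# bi-exponential fit C(t) = A*exp(-a*t) + B*exp(-b*t) to post-bolus samples.
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, a, B, b):
    return A * np.exp(-a * t) + B * np.exp(-b * t)

# Hypothetical plasma concentrations after a 500 mg IV bolus in a dog:
t = np.array([5, 10, 20, 40, 60, 90, 120, 180], float)          # min
C = np.array([58.2, 48.8, 35.4, 21.3, 14.9, 10.3, 7.8, 4.7])    # mg/L

(A, a, B, b), _ = curve_fit(biexp, t, C, p0=(40, 0.1, 20, 0.01), maxfev=10000)
auc = A / a + B / b                 # mg*min/L, extrapolated to infinity
dose = 500.0                        # mg
CL = dose / auc                     # L/min
t_half = np.log(2) / min(a, b)      # terminal half-life, min
print(f"CL ~ {CL*1000:.0f} mL/min, terminal t1/2 ~ {t_half:.0f} min")
```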
Measuring the self-similarity exponent in Lévy stable processes of financial time series
NASA Astrophysics Data System (ADS)
Fernández-Martínez, M.; Sánchez-Granero, M. A.; Trinidad Segovia, J. E.
2013-11-01
Geometric method-based procedures, which will be called GM algorithms herein, were introduced in [M.A. Sánchez Granero, J.E. Trinidad Segovia, J. García Pérez, Some comments on Hurst exponent and the long memory processes on capital markets, Phys. A 387 (2008) 5543-5551], to efficiently calculate the self-similarity exponent of a time series. In that paper, the authors showed empirically that these algorithms, based on a geometrical approach, are more accurate than the classical algorithms, especially with short length time series. The authors checked that GM algorithms are good when working with (fractional) Brownian motions. Moreover, in [J.E. Trinidad Segovia, M. Fernández-Martínez, M.A. Sánchez-Granero, A note on geometric method-based procedures to calculate the Hurst exponent, Phys. A 391 (2012) 2209-2214], a mathematical background for the validity of such procedures to estimate the self-similarity index of any random process with stationary and self-affine increments was provided. In particular, they proved theoretically that GM algorithms are also valid to explore long-memory in (fractional) Lévy stable motions. In this paper, we prove empirically by Monte Carlo simulation that GM algorithms are able to accurately calculate the self-similarity index in Lévy stable motions and find empirical evidence that they are more precise than the absolute value exponent (denoted by AVE onwards) and the multifractal detrended fluctuation analysis (MF-DFA) algorithms, especially with short length time series. We also compare them with the generalized Hurst exponent (GHE) algorithm and conclude that both the GM2 and GHE algorithms are the most accurate for studying financial series. In addition, we provide empirical evidence, based on the accuracy of GM algorithms in estimating the self-similarity index in Lévy motions, that the evolution of the stocks of some international market indices, such as the U.S. Small Cap and Nasdaq100, cannot be modeled by means of a Brownian motion.
Density functional theory calculation of refractive indices of liquid-forming silicon oil compounds
NASA Astrophysics Data System (ADS)
Lee, Sanghun; Park, Sung Soo; Hagelberg, Frank
2012-02-01
A combination of quantum chemical calculation and molecular dynamics simulation is applied to compute refractive indices of liquid-forming silicon oils. The densities of these species are obtained from molecular dynamics simulations based on the NPT ensemble while the molecular polarizabilities are evaluated by density functional theory. This procedure is shown to yield results well compatible with available experimental data, suggesting that it represents a robust and economic route for determining the refractive indices of liquid-forming organic complexes containing silicon.
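The abstract does not spell out the mixing rule; assuming the standard Lorentz-Lorenz (Clausius-Mossotti) relation connects the MD density to the DFT polarizability, a minimal sketch looks like this (all numbers are illustrative, not from the paper):

```python
# Hedged sketch: refractive index n from density and molecular polarizability via
#   (n^2 - 1)/(n^2 + 2) = (4*pi/3) * N * alpha   (CGS units).
import math

M     = 236.5     # g/mol, hypothetical siloxane oligomer
rho   = 0.95      # g/cm^3, e.g. from an NPT MD simulation
alpha = 28.0e-24  # cm^3, isotropic polarizability from DFT

N_A = 6.02214e23
N = rho / M * N_A                    # molecules per cm^3
x = 4.0 * math.pi / 3.0 * N * alpha
n = math.sqrt((1.0 + 2.0 * x) / (1.0 - x))
print(f"n ~ {n:.3f}")                # ~1.48 for these made-up inputs
```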
NASA Technical Reports Server (NTRS)
Sarracino, Marcello
1941-01-01
The present article deals with what is considered to be a simpler and more accurate method of determining, from the results of bench tests under approved rating conditions, the power at altitude of a supercharged aircraft engine, without application of correction formulas. The method of calculating the altitude characteristics of supercharged engines based on air consumption is a more satisfactory and accurate procedure, especially at low boost pressures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Z; Vijayan, S; Rana, V
2015-06-15
Purpose: A system was developed that automatically calculates the organ and effective dose for individual fluoroscopically-guided procedures using a log of the clinical exposure parameters. Methods: We have previously developed a dose tracking system (DTS) to provide a real-time color-coded 3D mapping of skin dose. This software produces a log file of all geometry and exposure parameters for every x-ray pulse during a procedure. The data in the log files is input into PCXMC, a Monte Carlo program that calculates organ and effective dose for projections and exposure parameters set by the user. We developed a MATLAB program to read data from the log files produced by the DTS and to automatically generate the definition files in the format used by PCXMC. The processing is done at the end of a procedure after all exposures are completed. Since there are thousands of exposure pulses with various parameters for fluoroscopy, DA and DSA and at various projections, the data for exposures with similar parameters is grouped prior to entry into PCXMC to reduce the number of Monte Carlo calculations that need to be performed. Results: The software developed automatically transfers data from the DTS log file to PCXMC and runs the program for each grouping of exposure pulses. When the dose from all exposure events is calculated, the doses for each organ and all effective doses are summed to obtain procedure totals. For a complicated interventional procedure, the calculations can be completed on a PC without manual intervention in less than 30 minutes depending on the level of data grouping. Conclusion: This system allows organ dose to be calculated for individual procedures for every patient without tedious calculations or data entry so that estimates of stochastic risk can be obtained in addition to the deterministic risk estimate provided by the DTS. Partial support from NIH grant R01EB002873 and Toshiba Medical Systems Corp.
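A hedged sketch of the grouping step (field names and thresholds are hypothetical): pulses with near-identical technique and geometry share a bin, so one Monte Carlo run per bin, weighted by the summed exposure, replaces per-pulse runs:

```python
# Hedged sketch: quantize per-pulse parameters so similar exposures share a key,
# then accumulate a weight (here mAs) per group for a single Monte Carlo run each.
from collections import defaultdict

# a few hypothetical log rows (the real DTS log has one row per x-ray pulse)
log = [
    {"mode": "fluoro", "kvp": 78.4, "lao_rao": -29.0, "cran_caud": 2.1, "mas": 0.5},
    {"mode": "fluoro", "kvp": 79.1, "lao_rao": -28.2, "cran_caud": 1.8, "mas": 0.5},
    {"mode": "DSA",    "kvp": 88.0, "lao_rao":   0.0, "cran_caud": 0.0, "mas": 8.0},
]

def bin_key(row, kvp_step=2.0, angle_step=5.0):
    q = lambda v, s: round(v / s) * s
    return (row["mode"], q(row["kvp"], kvp_step),
            q(row["lao_rao"], angle_step), q(row["cran_caud"], angle_step))

groups = defaultdict(float)
for row in log:
    groups[bin_key(row)] += row["mas"]       # accumulate exposure weight

for key, total_mas in groups.items():        # one definition file per group here
    print(key, total_mas)
```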
Lens of the eye dose calculation for neuro-interventional procedures and CBCT scans of the head
NASA Astrophysics Data System (ADS)
Xiong, Zhenyu; Vijayan, Sarath; Rana, Vijay; Jain, Amit; Rudin, Stephen; Bednarek, Daniel R.
2016-03-01
The aim of this work is to develop a method to calculate lens dose for fluoroscopically-guided neuro-interventional procedures and for CBCT scans of the head. EGSnrc Monte Carlo software is used to determine the dose to the lens of the eye for the projection geometry and exposure parameters used in these procedures. This information is provided by a digital CAN bus on the Toshiba Infinix C-Arm system and is saved in a log file by the real-time skin-dose tracking system (DTS) we previously developed. The x-ray beam spectra on this machine were simulated using BEAMnrc. These spectra were compared to those determined by SpekCalc and validated through measured percent-depth-dose (PDD) curves and half-value-layer (HVL) measurements. We simulated CBCT procedures in DOSXYZnrc for a CTDI head phantom and compared the surface dose distribution with that measured with Gafchromic film, and also for an SK150 head phantom and compared the lens dose with that measured with an ionization chamber. Both methods demonstrated good agreement. Organ dose calculated for a simulated neuro-interventional procedure using DOSXYZnrc with the Zubal CT voxel phantom agreed within 10% with that calculated by the PCXMC code for most organs. To calculate the lens dose in a neuro-interventional procedure, we developed a library of normalized lens dose values for different projection angles and kVp's. The total lens dose is then calculated by summing the values over all beam projections and can be included on the DTS report at the end of the procedure.
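A minimal sketch of the lookup-table summation described above, with a hypothetical table and projection list (not the authors' data):

```python
# Hedged sketch: total lens dose as the sum over projections of a normalized
# lens dose (interpolated in gantry angle and kVp) times per-projection output.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

angles = np.array([-90, -45, 0, 45, 90], float)      # LAO/RAO, degrees
kvps   = np.array([70, 80, 90, 100], float)
# normalized lens dose per unit reference air kerma (hypothetical 5x4 table)
table  = np.random.default_rng(0).uniform(0.01, 0.2, (5, 4))
lens_per_kerma = RegularGridInterpolator((angles, kvps), table)

# (angle, kVp, reference air kerma in mGy) for each projection in a procedure
projections = [(-30.0, 80.0, 12.0), (0.0, 88.0, 30.0), (55.0, 96.0, 8.0)]
dose = sum(k * lens_per_kerma((a, kv)).item() for a, kv, k in projections)
print(f"lens dose ~ {dose:.2f} mGy")
```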
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Z; Vijayan, S; Oines, A
Purpose: To compare PCXMC and EGSnrc calculated organ and effective radiation doses from cone-beam computed tomography (CBCT) and interventional fluoroscopically-guided procedures using automatic exposure-event grouping. Methods: For CBCT, we used PCXMC20Rotation.exe to automatically calculate the doses and compared the results to those calculated using EGSnrc with the Zubal patient phantom. For interventional procedures, we use the dose tracking system (DTS) which we previously developed to produce a log file of all geometry and exposure parameters for every x-ray pulse during a procedure, and the data in the log file is input into PCXMC and EGSnrc for dose calculation. A MATLAB program reads data from the log files and groups similar exposures to reduce calculation time. The definition files are then automatically generated in the format used by PCXMC and EGSnrc. Processing is done at the end of the procedure after all exposures are completed. Results: For the Toshiba Infinix CBCT LCI-Middle-Abdominal protocol, most organ doses calculated with PCXMC20Rotation closely matched those calculated with EGSnrc. The effective doses were 33.77 mSv with PCXMC20Rotation and 32.46 mSv with EGSnrc. For a simulated interventional cardiac procedure, similar close agreement in organ dose was obtained between the two codes; the effective doses were 12.02 mSv with PCXMC and 11.35 mSv with EGSnrc. The calculations can be completed on a PC without manual intervention in less than 15 minutes with PCXMC and in about 10 hours with EGSnrc, depending on the level of data grouping and accuracy desired. Conclusion: Effective dose and most organ doses in CBCT and interventional radiology calculated by PCXMC closely match those calculated by EGSnrc. Data grouping, which can be done automatically, makes the calculation time with PCXMC on a standard PC acceptable. This capability expands the dose information that can be provided by the DTS. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
Zhang, M; Westerly, D C; Mackie, T R
2011-08-07
With on-line image guidance (IG), prostate shifts relative to the bony anatomy can be corrected by realigning the patient with respect to the treatment fields. In image guided intensity modulated proton therapy (IG-IMPT), because the proton range is more sensitive to the material it travels through, the realignment may introduce large dose variations. This effect is studied in this work and an on-line adaptive procedure is proposed to restore the planned dose to the target. A 2D anthropomorphic phantom was constructed from a real prostate patient's CT image. Two-field laterally opposing spot 3D-modulation and 24-field full arc distal edge tracking (DET) plans were generated with a prescription of 70 Gy to the planning target volume. For the simulated delivery, we considered two types of procedures: the non-adaptive procedure and the on-line adaptive procedure. In the non-adaptive procedure, only patient realignment to match the prostate location in the planning CT was performed. In the on-line adaptive procedure, on top of the patient realignment, the kinetic energy for each individual proton pencil beam was re-determined from the on-line CT image acquired after the realignment and subsequently used for delivery. Dose distributions were re-calculated for individual fractions for different plans and different delivery procedures. The results show, without adaptive, that both the 3D-modulation and the DET plans experienced delivered dose degradation by having large cold or hot spots in the prostate. The DET plan had worse dose degradation than the 3D-modulation plan. The adaptive procedure effectively restored the planned dose distribution in the DET plan, with delivered prostate D(98%), D(50%) and D(2%) values less than 1% from the prescription. In the 3D-modulation plan, in certain cases the adaptive procedure was not effective to reduce the delivered dose degradation and yield similar results as the non-adaptive procedure. In conclusion, based on this 2D phantom study, by updating the proton pencil beam energy from the on-line image after realignment, this on-line adaptive procedure is necessary and effective for the DET-based IG-IMPT. Without dose re-calculation and re-optimization, it could be easily incorporated into the clinical workflow.
The Aristotle method: a new concept to evaluate quality of care based on complexity.
Lacour-Gayet, François; Clarke, David R
2005-06-01
Evaluation of quality of care is a duty of the modern medical practice. A reliable method of quality evaluation able to compare fairly institutions and inform a patient and his family of the potential risk of a procedure is clearly needed. It is now well recognized that any method that purports to evaluate quality of care should include a case mix/risk stratification method. No valuable method was available until recently in pediatric cardiac surgery. The Aristotle method is a new concept of evaluation of quality of care in congenital heart surgery based on the complexity of the surgical procedures. Involving a panel of expert surgeons, the project started in 1999 and included 50 pediatric surgeons from 23 countries. The basic score adjusts the complexity of a given procedure and is calculated as the sum of potential for mortality, potential for morbidity and anticipated technical difficulty. The Comprehensive Score further adjusts the complexity according to the specific patient characteristics (anatomy, associated procedures, co-morbidity, etc.). The Aristotle method is original as it introduces several new concepts: the calculated complexity is a constant for a given patient all over the world; complexity is an independent value and risk is a variable depending on the performance; and Performance = Complexity x Outcome. The Aristotle score is a good vector of communication between patients, doctors and insurance companies and may stimulate the quality and the organization of health care in our field and in others.
A Procedure Using Calculators to Express Answers in Fractional Form.
ERIC Educational Resources Information Center
Carlisle, Earnest
A procedure is described that enables students to perform operations on fractions with a calculator, expressing the answer as a fraction. Patterns using paper-and-pencil procedures for each operation with fractions are presented. A microcomputer software program illustrates how the answer can be found using integer values of the numerators and…
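As a modern analogue of expressing calculator answers in fractional form, Python's fractions module performs the same exact arithmetic; a brief illustration (not the article's keystroke procedure):

```python
# Hedged illustration: exact fractional arithmetic instead of decimal display.
from fractions import Fraction

a, b = Fraction(3, 4), Fraction(5, 6)
print(a + b, a * b, a / b)    # 19/12  5/8  9/10
```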
The purpose of this SOP is to describe the procedures undertaken to calculate sampling weights. The sampling weights are needed to obtain weighted statistics of the NHEXAS data. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by t...
The purpose of this SOP is to describe the procedures undertaken to calculate the dermal exposure to chlorpyrifos and diazinon. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University of Arizona NHEXAS and Battelle Labora...
The purpose of this SOP is to describe the procedures undertaken to calculate the time activity pattern of the NHEXAS samples. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University of Arizona NHEXAS and Battelle Laborat...
Longo, Mariaconcetta; Marchioni, Chiara; Insero, Teresa; Donnarumma, Raffaella; D'Adamo, Alessandro; Lucatelli, Pierleone; Fanelli, Fabrizio; Salvatori, Filippo Maria; Cannavale, Alessandro; Di Castro, Elisabetta
2016-03-01
This study evaluates X-ray exposure in patients undergoing abdominal extra-vascular interventional procedures by means of Digital Imaging and COmmunications in Medicine (DICOM) image headers and Monte Carlo simulation. The main aim was to assess the effective and equivalent doses, under the hypothesis of their correlation with the dose area product (DAP) measured during each examination. This allows dosimetric information to be collected for each patient and the associated risks to be evaluated without resorting to in vivo dosimetry. The dose calculation was performed in 79 procedures through the Monte Carlo simulator PCXMC (A PC-based Monte Carlo program for calculating patient doses in medical X-ray examinations), by using the real geometrical and dosimetric irradiation conditions, automatically extracted from DICOM headers. The DAP measurements were also validated by using thermoluminescent dosemeters on an anthropomorphic phantom. The expected linear correlation between effective doses and DAP was confirmed with an R^2 of 0.974. Moreover, in order to easily calculate patient doses, conversion coefficients that relate equivalent doses to measurable quantities, such as DAP, were obtained. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
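A minimal sketch of deriving such a conversion coefficient (illustrative numbers, not the study's data): fit E = c * DAP through the origin and reuse c to estimate dose for future procedures:

```python
# Hedged sketch: least-squares conversion coefficient between effective dose
# (from Monte Carlo) and measured DAP.
import numpy as np

dap = np.array([20.0, 45.0, 80.0, 150.0, 220.0])   # Gy*cm^2 (hypothetical)
E   = np.array([4.1, 9.4, 15.8, 31.0, 44.5])       # mSv, e.g. from PCXMC

c = float(np.sum(dap * E) / np.sum(dap * dap))     # slope through the origin
r2 = 1 - np.sum((E - c * dap)**2) / np.sum((E - E.mean())**2)
print(f"E ~ {c:.3f} mSv per Gy*cm^2 (R^2 = {r2:.3f})")
```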
Maceneaney, P M; Malone, D E
2000-12-01
To design a spreadsheet program to rapidly analyse interventional radiology (IR) data produced in local research or reported in the literature using 'evidence-based medicine' (EBM) parameters of treatment benefit and harm. Microsoft Excel(TM) was used. The spreadsheet consists of three worksheets. The first shows the 'Levels of Evidence and Grades of Recommendations' that can be assigned to therapeutic studies as defined by the Oxford Centre for EBM. The second and third worksheets facilitate the EBM assessment of therapeutic benefit and harm. Validity criteria are described. These include the assessment of the adequacy of sample size in the detection of possible procedural complications. A contingency (2 x 2) table for raw data on comparative outcomes in treated patients and controls has been incorporated. Formulae for EBM calculations are related to these numerators and denominators in the spreadsheet. The parameters calculated are, for benefit: relative risk reduction, absolute risk reduction, and number needed to treat (NNT); for harm: relative risk, relative odds, and number needed to harm (NNH). Ninety-five per cent confidence intervals are calculated for all these indices. The results change automatically when the data in the therapeutic outcome cells are changed. A final section allows the user to correct the NNT or NNH in their application to individual patients. This spreadsheet can be used on desktop and palmtop computers. The MS Excel(TM) version can be downloaded via the Internet from the URL ftp://radiography.com/pub/TxHarm00.xls. A spreadsheet is useful for the rapid analysis of the clinical benefit and harm from IR procedures.
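The spreadsheet's core quantities follow the standard EBM definitions; a hedged sketch with illustrative 2 x 2 counts:

```python
# Hedged sketch of the standard EBM benefit/harm formulas (counts are invented).
import math

a, b = 12, 88    # treated:  events, non-events
c, d = 30, 70    # control:  events, non-events

cer, eer = c / (c + d), a / (a + b)   # control / experimental event rates
arr = cer - eer                       # absolute risk reduction
rrr = arr / cer                       # relative risk reduction
nnt = 1.0 / arr                       # number needed to treat
rr  = eer / cer                       # relative risk
odds_ratio = (a * d) / (b * c)
# NNH is computed analogously from the absolute risk increase of a harm outcome.

# 95% CI for the risk difference (and hence NNT) via its standard error:
se = math.sqrt(eer * (1 - eer) / (a + b) + cer * (1 - cer) / (c + d))
lo, hi = arr - 1.96 * se, arr + 1.96 * se
print(f"ARR={arr:.3f} (95% CI {lo:.3f}-{hi:.3f}), RRR={rrr:.2f}, NNT={nnt:.1f}, "
      f"RR={rr:.2f}, OR={odds_ratio:.2f}")
```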
Activity-based differentiation of pathologists' workload in surgical pathology.
Meijer, G A; Oudejans, J J; Koevoets, J J M; Meijer, C J L M
2009-06-01
Adequate budget control in pathology practice requires accurate allocation of resources. Any changes in types and numbers of specimens handled or protocols used will directly affect the pathologists' workload and consequently the allocation of resources. The aim of the present study was to develop a model for measuring the pathologists' workload that can take into account the changes mentioned above. The diagnostic process was analyzed and broken up into separate activities. The time needed to perform these activities was measured. Based on linear regression analysis, for each activity, the time needed was calculated as a function of the number of slides or blocks involved. The total pathologists' time required for a range of specimens was calculated based on standard protocols and validated by comparing to actually measured workload. Cutting up, microscopic procedures and dictating turned out to be highly correlated to number of blocks and/or slides per specimen. Calculated workload per type of specimen was significantly correlated to the actually measured workload. Modeling pathologists' workload based on formulas that calculate workload per type of specimen as a function of the number of blocks and slides provides a basis for a comprehensive, yet flexible, activity-based costing system for pathology.
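A minimal sketch of the regression idea (synthetic data, hypothetical coefficients): fit per-specimen pathologist time against block and slide counts, then predict workload from a protocol's counts:

```python
# Hedged sketch: linear model  time = b0 + b_block*n_blocks + b_slide*n_slides.
import numpy as np

# observed (n_blocks, n_slides) -> minutes of pathologist time (hypothetical)
X = np.array([[1, 2], [2, 3], [3, 8], [5, 10], [8, 12]], float)
y = np.array([7.2, 10.3, 19.8, 27.5, 35.9])

A = np.hstack([X, np.ones((len(X), 1))])        # add intercept column
(b_block, b_slide, b0), *_ = (np.linalg.lstsq(A, y, rcond=None)[0],)

def workload(n_blocks, n_slides):
    return b0 + b_block * n_blocks + b_slide * n_slides

print(f"predicted time for a 4-block/8-slide specimen: {workload(4, 8):.1f} min")
```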
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-16
... combustible, organic material calculated as carbon, or (3) ammonium nitrate-based fertilizers containing... that passes the insensitivity test prescribed in the definition of ammonium nitrate fertilizer issued by the Fertilizer Institute" in its "Definition and Test Procedures for Ammonium Nitrate Fertilizer...
EuroFIR Guideline on calculation of nutrient content of foods for food business operators.
Machackova, Marie; Giertlova, Anna; Porubska, Janka; Roe, Mark; Ramos, Carlos; Finglas, Paul
2018-01-01
This paper presents a Guideline for calculating nutrient content of foods by calculation methods for food business operators and presents data on compliance between calculated values and analytically determined values. In the EU, calculation methods are legally valid to determine the nutrient values of foods for nutrition labelling (Regulation (EU) No 1169/2011). However, neither a specific calculation method nor rules for use of retention factors are defined. EuroFIR AISBL (European Food Information Resource) has introduced a Recipe Calculation Guideline based on the EuroFIR harmonized procedure for recipe calculation. The aim is to provide food businesses with a step-by-step tool for calculating nutrient content of foods for the purpose of nutrition declaration. The development of this Guideline and use in the Czech Republic is described and future application to other Member States is discussed. Limitations of calculation methods and the importance of high quality food composition data are discussed. Copyright © 2017. Published by Elsevier Ltd.
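A simplified sketch in the spirit of the harmonized recipe calculation (retention and yield factors below are illustrative, not EuroFIR values):

```python
# Hedged sketch: nutrient per 100 g of dish =
#   sum_i(raw_amount_i * content_i * retention_i) / cooked_weight * 100.

ingredients = [
    # (name, raw grams, vitamin C mg per 100 g raw, retention factor on cooking)
    ("potato", 500.0, 12.0, 0.75),
    ("carrot", 200.0,  6.0, 0.80),
    ("butter",  30.0,  0.0, 1.00),
]
raw_weight = sum(g for _, g, _, _ in ingredients)
cooked_weight = raw_weight * 0.85        # assumed yield factor for water loss

vit_c = sum(g * c / 100.0 * r for _, g, c, r in ingredients)
print(f"vitamin C: {vit_c / cooked_weight * 100:.1f} mg per 100 g of cooked dish")
```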
DOE Office of Scientific and Technical Information (OSTI.GOV)
Volkov, S. A., E-mail: volkoff-sergey@mail.ru
2016-06-15
A new subtractive procedure for canceling ultraviolet and infrared divergences in the Feynman integrals described here is developed for calculating QED corrections to the electron anomalous magnetic moment. The procedure formulated in the form of a forest expression with linear operators applied to Feynman amplitudes of UV-diverging subgraphs makes it possible to represent the contribution of each Feynman graph containing only electron and photon propagators in the form of a converging integral with respect to Feynman parameters. The application of the developed method for numerical calculation of two- and threeloop contributions is described.
Sensitivity calculations for iteratively solved problems
NASA Technical Reports Server (NTRS)
Haftka, R. T.
1985-01-01
The calculation of sensitivity derivatives of solutions of iteratively solved systems of algebraic equations is investigated. A modified finite difference procedure is presented which improves the accuracy of the calculated derivatives. The procedure is demonstrated for a simple algebraic example as well as an element-by-element preconditioned conjugate gradient iterative solution technique applied to truss examples.
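One way to realize the idea, sketched here under stated assumptions rather than as Haftka's exact formulation: solve the baseline and perturbed systems from the same initial guess with the same iteration count, so the correlated iteration errors largely cancel in the finite difference:

```python
# Hedged sketch: finite-difference sensitivity of an iteratively solved system.
import numpy as np

def solve_jacobi(A, b, u0, iters):
    """Plain Jacobi iteration for A u = b (deliberately not fully converged)."""
    D = np.diag(A); R = A - np.diag(D)
    u = u0.copy()
    for _ in range(iters):
        u = (b - R @ u) / D
    return u

def stiffness(x):                        # toy parameterized system
    return np.array([[4.0 + x, -1.0], [-1.0, 3.0]])

b = np.array([1.0, 2.0]); x, h = 1.0, 1e-6
# Same start and same iteration count: the iteration errors nearly cancel
# in the difference, so the derivative stays accurate despite incomplete
# convergence of each individual solve.
u_base = solve_jacobi(stiffness(x),     b, np.zeros(2), iters=12)
u_pert = solve_jacobi(stiffness(x + h), b, np.zeros(2), iters=12)
print("du/dx (FD)  ", (u_pert - u_base) / h)

# Exact derivative for comparison: du/dx = -A^{-1} (dA/dx) u
u_exact = np.linalg.solve(stiffness(x), b)
dA = np.array([[1.0, 0.0], [0.0, 0.0]])
print("du/dx (exact)", -np.linalg.solve(stiffness(x), dA @ u_exact))
```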
A full potential inverse method based on a density linearization scheme for wing design
NASA Technical Reports Server (NTRS)
Shankar, V.
1982-01-01
A mixed analysis inverse procedure based on the full potential equation in conservation form was developed to recontour a given base wing to produce a prescribed pressure distribution. The method uses a density linearization scheme in applying the pressure boundary condition in terms of the velocity potential. The FL030 finite volume analysis code was modified to include the inverse option. The new surface shape information, associated with the modified pressure boundary condition, is calculated at a constant span station based on a mass flux integration. The inverse method is shown to recover the original shape when the analysis pressure is not altered. Inverse calculations for weakening of a strong shock system and for a laminar flow control (LFC) pressure distribution are presented. Two methods for a trailing edge closure model are proposed for further study.
Fast-Running Aeroelastic Code Based on Unsteady Linearized Aerodynamic Solver Developed
NASA Technical Reports Server (NTRS)
Reddy, T. S. R.; Bakhle, Milind A.; Keith, T., Jr.
2003-01-01
The NASA Glenn Research Center has been developing aeroelastic analyses for turbomachines for use by NASA and industry. An aeroelastic analysis consists of a structural dynamic model, an unsteady aerodynamic model, and a procedure to couple the two models. The structural models are well developed. Hence, most of the development for the aeroelastic analysis of turbomachines has involved adapting and using unsteady aerodynamic models. Two methods are used in developing unsteady aerodynamic analysis procedures for the flutter and forced response of turbomachines: (1) the time domain method and (2) the frequency domain method. Codes based on time domain methods require considerable computational time and, hence, cannot be used during the design process. Frequency domain methods eliminate the time dependence by assuming harmonic motion and, hence, require less computational time. Early frequency domain analysis methods neglected the important physics of steady loading on the analyses for simplicity. A fast-running unsteady aerodynamic code, LINFLUX, which includes steady loading and is based on the frequency domain method, has been modified for flutter and response calculations. LINFLUX solves the unsteady linearized Euler equations to calculate the unsteady aerodynamic forces on the blades, starting from a steady nonlinear aerodynamic solution. First, we obtained a steady aerodynamic solution for a given flow condition using the nonlinear unsteady aerodynamic code TURBO. A blade vibration analysis was done to determine the frequencies and mode shapes of the vibrating blades, and an interface code was used to convert the steady aerodynamic solution to a form required by LINFLUX. A preprocessor was used to interpolate the mode shapes from the structural dynamic mesh onto the computational dynamics mesh. Then, we used LINFLUX to calculate the unsteady aerodynamic forces for a given mode, frequency, and phase angle. A postprocessor read these unsteady pressures and calculated the generalized aerodynamic forces, eigenvalues, and response amplitudes. The eigenvalues determine the flutter frequency and damping. As a test case, the flutter of a helical fan was calculated with LINFLUX and compared with calculations from TURBO-AE, a nonlinear time domain code, and from ASTROP2, a code based on linear unsteady aerodynamics.
Zhu, Jinhan; Chen, Lixin; Chen, Along; Luo, Guangwen; Deng, Xiaowu; Liu, Xiaowei
2015-04-11
To use a graphic processing unit (GPU) calculation engine to implement a fast 3D pre-treatment dosimetric verification procedure based on an electronic portal imaging device (EPID). The GPU algorithm includes the deconvolution and convolution method for the fluence-map calculations, the collapsed-cone convolution/superposition (CCCS) algorithm for the 3D dose calculations and the 3D gamma evaluation calculations. The results of the GPU-based CCCS algorithm were compared to those of Monte Carlo simulations. The planned and EPID-based reconstructed dose distributions in overridden-to-water phantoms and the original patients were compared for 6 MV and 10 MV photon beams in intensity-modulated radiation therapy (IMRT) treatment plans based on dose differences and gamma analysis. The total single-field dose computation time was less than 8 s, and the gamma evaluation for a 0.1-cm grid resolution was completed in approximately 1 s. The results of the GPU-based CCCS algorithm exhibited good agreement with those of the Monte Carlo simulations. The gamma analysis indicated good agreement between the planned and reconstructed dose distributions for the treatment plans. For the target volume, the differences in the mean dose were less than 1.8%, and the differences in the maximum dose were less than 2.5%. For the critical organs, minor differences were observed between the reconstructed and planned doses. The GPU calculation engine was used to boost the speed of 3D dose and gamma evaluation calculations, thus offering the possibility of true real-time 3D dosimetric verification.
NASA Astrophysics Data System (ADS)
Kawakami, Takashi; Sano, Shinsuke; Saito, Toru; Sharma, Sandeep; Shoji, Mitsuo; Yamada, Satoru; Takano, Yu; Yamanaka, Shusuke; Okumura, Mitsutaka; Nakajima, Takahito; Yamaguchi, Kizashi
2017-09-01
Theoretical examinations of the ferromagnetic coupling in the m-phenylene-bis-methylene molecule and its oligomer were carried out. These systems are good candidates for exchange-coupled systems to investigate strong electronic correlations. We studied the effective exchange integrals (J), which indicate the magnetic coupling between interacting spins in these species. First, theoretical calculations based on a broken-symmetry single-reference (SR) procedure, i.e. the UHF, UMP2, UMP4, UCCSD(T) and UB3LYP methods, were carried out with a GAUSSIAN program code. From these results, the J value by the UHF method was largely positive because of the strong ferromagnetic spin polarisation effect. The J values by the UCCSD(T) and UB3LYP methods corrected this overestimation by accounting for the dynamical electronic correlation. Next, magnetic coupling among these spins was studied using the CAS-based methods of the symmetry-adapted multireference procedure. Thus, the UNO DMRG CASCI (UNO, unrestricted natural orbital; DMRG, density matrix renormalised group; CASCI, complete active space configuration interaction) method was mainly employed with a combination of ORCA and BLOCK program codes. DMRG CASCI calculations in valence electron counting, which included all orbitals up to full valence CI, provided the most reliable results and support the UB3LYP method for extended systems.
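Broken-symmetry J values of this kind are conventionally extracted with Yamaguchi's approximate spin projection; a minimal sketch with illustrative energies (not the paper's numbers):

```python
# Hedged sketch: Yamaguchi's approximate spin projection,
#   J = (E_BS - E_HS) / (<S^2>_HS - <S^2>_BS),
# for the Heisenberg form H = -2J S1.S2 (J > 0 means ferromagnetic coupling).
HARTREE_TO_CM1 = 219474.63

E_HS, S2_HS = -841.503211, 2.01   # high-spin (triplet) solution, illustrative
E_BS, S2_BS = -841.502944, 1.02   # broken-symmetry (low-spin) solution

J = (E_BS - E_HS) / (S2_HS - S2_BS)         # hartree
print(f"J ~ {J * HARTREE_TO_CM1:.0f} cm^-1")  # positive: ferromagnetic here
```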
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cristina, S.; Feliziani, M.
1995-11-01
This paper describes a new procedure for the numerical computation of the electric field and current density distributions in a dc electrostatic precipitator in the presence of dust, taking into account the particle-size distribution. Poisson's and continuity equations are numerically solved by supposing that the coronating conductors satisfy Kaptzov's assumption on the emitter surfaces. Two iterative numerical procedures, both based on the finite element method (FEM), are implemented for evaluating, respectively, the unknown ionic charge density and the particle charge density distributions. The V-I characteristic and the precipitation efficiencies for the individual particle-size classes, calculated with reference to the pilot precipitator installed by ENEL (Italian Electricity Board) at its Marghera (Venice) coal-fired power station, are found to be very close to those measured experimentally.
Noise Certification Considerations for Helicopters Based on Laboratory Investigations
1976-07-01
Calculations were made, using 1/3-octave, 1/2-second spectral analyses, of dBA, dBAT, dBAD, EdBA, PNdB, PNdBD, EPNdB, dBD, dBE, and dBA corrected... networks were examined, dBD and dBE, and the "crest" factor, which is defined as 20 log (peak SPL / RMS SPL), was applied to uncorrected dBA. The ten engineering... calculation procedures and weighting networks investigated are: dBA, PNdB, dBAT, PNdBT, dBAD, EPNdB, EdBA, dBD (calculated at peak PNdB), dBA (with "crest...
Depolarization Lidar Determination Of Cloud-Base Microphysical Properties
NASA Astrophysics Data System (ADS)
Donovan, D. P.; Klein Baltink, H.; Henzing, J. S.; de Roode, S.; Siebesma, A. P.
2016-06-01
The links between multiple-scattering induced depolarization and cloud microphysical properties (e.g. cloud particle number density, effective radius, water content) have long been recognised. Previous efforts to use depolarization information in a quantitative manner to retrieve cloud microphysical properties have also been undertaken but with limited scope and, arguably, success. In this work we present a retrieval procedure applicable to liquid stratus clouds with (quasi-)linear LWC profiles and (quasi-)constant number density profiles in the cloud-base region. This set of assumptions allows us to employ a fast and robust inversion procedure based on a lookup-table approach applied to extensive lidar Monte-Carlo multiple-scattering calculations. An example validation case is presented where the results of the inversion procedure are compared with simultaneous cloud radar observations. In non-drizzling conditions it was found, in general, that the lidar-only inversion results can be used to predict the radar reflectivity within the radar calibration uncertainty (2-3 dBZ). Results of a comparison between ground-based aerosol number concentrations and lidar-derived cloud-base number concentrations are also presented. The observed relationship between the two quantities is seen to be consistent with the results of previous studies based on aircraft-based in situ measurements.
The purpose of this SOP is to describe the procedures undertaken to calculate sampling weights. The sampling weights are needed to obtain weighted statistics of the study data. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by th...
The purpose of this SOP is to describe the procedures undertaken to calculate the dermal exposure using a probabilistic approach. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University of Arizona NHEXAS and Battelle Labo...
Lui, Kung-Jong; Chang, Kuang-Chao
2015-01-01
In studies of screening accuracy, we commonly encounter data in which a confirmatory procedure is administered to only those subjects with screen positives for ethical concerns. We focus our discussion on simultaneously testing equality of sensitivity and specificity between two binary screening tests when only subjects with screen positives receive the confirmatory procedure. We develop four asymptotic test procedures and one exact test procedure. We derive a sample size calculation formula for a desired power of detecting a difference at a given nominal α-level. We employ Monte Carlo simulation to evaluate the performance of these test procedures and the accuracy of the sample size calculation formula developed here in a variety of situations. Finally, we use the data obtained from a study of the prostate-specific-antigen test and digital rectal examination test on 949 Black men to illustrate the practical use of these test procedures and the sample size calculation formula.
NASA Technical Reports Server (NTRS)
Rolfes, R.; Noor, A. K.; Sparr, H.
1998-01-01
A postprocessing procedure is presented for the evaluation of the transverse thermal stresses in laminated plates. The analytical formulation is based on the first-order shear deformation theory and the plate is discretized by using a single-field displacement finite element model. The procedure is based on neglecting the derivatives of the in-plane forces and the twisting moments, as well as the mixed derivatives of the bending moments, with respect to the in-plane coordinates. The calculated transverse shear stiffnesses reflect the actual stacking sequence of the composite plate. The distributions of the transverse stresses through-the-thickness are evaluated by using only the transverse shear forces and the thermal effects resulting from the finite element analysis. The procedure is implemented into a postprocessing routine which can be easily incorporated into existing commercial finite element codes. Numerical results are presented for four- and ten-layer cross-ply laminates subjected to mechanical and thermal loads.
78 FR 7939 - Energy Conservation Program: Test Procedures for Microwave Ovens (Active Mode)
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-04
...The U.S. Department of Energy (DOE) proposes to revise its test procedures for microwave ovens established under the Energy Policy and Conservation Act. The proposed amendments would add provisions for measuring the active mode energy use for microwave ovens, including both microwave-only ovens and convection microwave ovens. Specifically, DOE is proposing provisions for measuring the energy use of the microwave-only cooking mode for both microwave-only ovens and convection microwave ovens based on the testing methods in the latest draft version of the International Electrotechnical Commission Standard 60705, ``Household microwave ovens--Methods for measuring performance.'' DOE is proposing provisions for measuring the energy use of the convection-only cooking mode for convection microwave ovens based on the DOE test procedure for conventional ovens in our regulations. DOE is also proposing to calculate the energy use of the convection-microwave cooking mode for convection microwave ovens by apportioning the microwave-only mode and convection-only mode energy consumption measurements based on typical consumer use.
Outcomes and Safety of the Combined Abdominoplasty-Hysterectomy: A Preliminary Study.
Massenburg, Benjamin B; Sanati-Mehrizy, Paymon; Ingargiola, Michael J; Rosa, Jonatan Hernandez; Taub, Peter J
2015-10-01
Abdominoplasty (ABP) at the time of hysterectomy (HYS) has been described in the literature since 1986 and is being increasingly requested by patients. However, outcomes of the combined procedure have not been thoroughly explored. The authors reviewed the American College of Surgeons National Surgical Quality Improvement Program database and identified each ABP, HYS, and combined ABP-HYS performed between 2005 and 2012. The incidence of complications in each of the three procedures was calculated, and a multiplicative-risk model was used to calculate the probability of a complication for a patient undergoing distinct HYS and ABP on different dates. One-sample binomial hypothesis tests were performed to determine statistical significance. There were 1325 ABP cases, 12,173 HYS cases, and 143 ABP-HYS cases identified. Surgical complications occurred in 7.7 % of patients undergoing an ABP-HYS, while the calculated risk of a surgical complication was 12.5 % (p = 0.0407) for patients undergoing separate ABP and HYS procedures. The mean operative time was significantly lower for an ABP-HYS at 238 vs. 270 min for separate ABP and HYS procedures (p < 0.0001), and the mean time under anesthesia was significantly lower at 295 vs. 364 min (p < 0.0001). A combined ABP-HYS has a lower incidence of surgical complications than separate ABP and HYS procedures performed on different dates. These data should not encourage all patients to elect a combined ABP-HYS, if only undergoing a HYS, as the combined procedure is associated with increased risks when compared to either isolated individual procedure. However, in patients who are planning on undergoing both procedures on separate dates, a combined ABP-HYS is a safe option that will result in fewer surgical complications, less operative time, less time under anesthesia, and a trend towards fewer days in the hospital. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266 .
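The multiplicative-risk model named above is simply the independence formula for at least one complication across two procedures; a sketch with hypothetical per-procedure rates chosen to reproduce the reported 12.5%:

```python
# Hedged sketch: probability of at least one complication over two independent
# procedures is 1 - (1 - p1)(1 - p2). Individual rates below are illustrative.
p_abp, p_hys = 0.065, 0.064

p_either = 1 - (1 - p_abp) * (1 - p_hys)
print(f"calculated risk for staged procedures: {p_either:.1%}")  # ~12.5%
print(f"observed single-stage ABP-HYS rate:    {0.077:.1%}")
```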
Electrostatic potential map modelling with COSY Infinity
NASA Astrophysics Data System (ADS)
Maloney, J. A.; Baartman, R.; Planche, T.; Saminathan, S.
2016-06-01
COSY Infinity (Makino and Berz, 2005) is a differential-algebra based simulation code which allows accurate calculation of transfer maps to arbitrary order. COSY's existing internal procedures were modified to allow electrostatic elements to be specified using an array of field potential data from the midplane. Additionally, a new procedure was created allowing electrostatic elements and their fringe fields to be specified by an analytic function. This allows greater flexibility in accurately modelling electrostatic elements and their fringe fields. Applied examples of these new procedures are presented including the modelling of a shunted electrostatic multipole designed with OPERA, a spherical electrostatic bender, and the effects of different shaped apertures in an electrostatic beam line.
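A common analytic form for fringe-field falloff in beam optics is the Enge function; the sketch below shows that generic form under stated assumptions, and is not necessarily the functional form implemented in the modified COSY procedures.

```python
import math

def enge(z: float, aperture: float, coeffs=(0.0, 2.5)) -> float:
    """Fractional field at distance z from the effective field boundary,
    scaled by the element aperture D (generic Enge-type form; the
    coefficients here are placeholders)."""
    s = sum(a * (z / aperture) ** i for i, a in enumerate(coeffs))
    return 1.0 / (1.0 + math.exp(s))

# Field fraction 2 cm outside an element with a 5 cm aperture:
print(enge(0.02, 0.05))
```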
Engineering calculations for communications satellite systems planning
NASA Technical Reports Server (NTRS)
Martin, C. H.; Gonsalvez, D. J.; Levis, C. A.; Wang, C. W.
1983-01-01
Progress is reported on a computer code to improve the efficiency of spectrum and orbit utilization for the Broadcasting Satellite Service in the 12 GHz band for Region 2. It implements a constrained gradient search procedure using an exponential objective function based on aggregate signal to noise ratio and an extended line search in the gradient direction. The procedure is tested against a manually generated initial scenario and appears to work satisfactorily. In this test it was assumed that alternate channels use orthogonal polarizations at any one satellite location.
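The following is a generic sketch of a constrained-free gradient search with an extended line search in the gradient direction, of the kind described above; the quadratic-free objective `f` is a stand-in, since the actual code used an exponential objective based on aggregate signal-to-noise ratio.

```python
import numpy as np

def maximize(f, x0, step=1.0, tol=1e-6, max_iter=200):
    """Gradient ascent with a doubling ("extended") line search."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        # Finite-difference gradient (the real code may use analytic forms).
        h = 1e-6
        g = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h)
                      for e in np.eye(len(x))])
        if np.linalg.norm(g) < tol:
            break
        # Extend the step along the gradient while the objective improves.
        t = step
        while f(x + 2 * t * g) > f(x + t * g):
            t *= 2
        x = x + t * g
    return x

# Example with a simple concave stand-in objective:
print(maximize(lambda x: -np.sum((x - 1.0) ** 2), [0.0, 0.0]))
```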
Vo, Elaine; Davila, Jessica A; Hou, Jason; Hodge, Krystle; Li, Linda T; Suliburk, James W; Kao, Lillian S; Berger, David H; Liang, Mike K
2013-08-01
Large databases provide a wealth of information for researchers, but identifying patient cohorts often relies on the use of current procedural terminology (CPT) codes. In particular, studies of stoma surgery have been limited by the accuracy of CPT codes in identifying and differentiating ileostomy procedures from colostomy procedures. It is important to make this distinction because the prevalence of complications associated with stoma formation and reversal differs dramatically between types of stoma. Natural language processing (NLP) is a process that allows text-based searching. The Automated Retrieval Console is NLP-based software that allows investigators to design and perform NLP-assisted document classification. In this study, we evaluated the role of CPT codes and NLP in differentiating ileostomy from colostomy procedures. Using CPT codes, we conducted a retrospective study that identified all patients undergoing a stoma-related procedure at a single institution between January 2005 and December 2011. All operative reports during this time were reviewed manually to abstract the following variables: formation or reversal and ileostomy or colostomy. Sensitivity and specificity for validation of the CPT codes against the master surgery schedule were calculated. Operative reports were evaluated by use of NLP to differentiate ileostomy- from colostomy-related procedures. Sensitivity and specificity for identifying patients with ileostomy or colostomy procedures were calculated for CPT codes and NLP for the entire cohort. CPT codes performed well in identifying stoma procedures (sensitivity 87.4%, specificity 97.5%). A total of 664 stoma procedures were identified by CPT codes between 2005 and 2011. The CPT codes were adequate in identifying stoma formation (sensitivity 97.7%, specificity 72.4%) and stoma reversal (sensitivity 74.1%, specificity 98.7%), but they were inadequate in identifying ileostomy (sensitivity 35.0%, specificity 88.1%) and colostomy (sensitivity 75.2%, specificity 80.9%). NLP performed with greater sensitivity, specificity, and accuracy than CPT codes in identifying stoma procedures and stoma types. The largest differences were in identifying ileostomy (specificity 95.8%, sensitivity 88.3%, and accuracy 91.5%) and colostomy (97.6%, 90.5%, and 92.8%, respectively). CPT codes can effectively identify patients who have had stoma procedures and are adequate in distinguishing between formation and reversal; however, CPT codes cannot differentiate ileostomy from colostomy. NLP can be used to differentiate between ileostomy- and colostomy-related procedures. The role of NLP in conjunction with electronic medical records in data retrieval warrants further investigation. Published by Mosby, Inc.
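For reference, the sensitivity/specificity computation used when validating coded data against a manually abstracted gold standard is a simple confusion-matrix calculation; the counts below are hypothetical, not the study's data.

```python
def sens_spec(tp: int, fp: int, tn: int, fn: int):
    sensitivity = tp / (tp + fn)  # positives correctly flagged
    specificity = tn / (tn + fp)  # negatives correctly excluded
    return sensitivity, specificity

# Hypothetical counts for illustration:
sens, spec = sens_spec(tp=83, fp=28, tn=208, fn=12)
print(f"sensitivity={sens:.1%}, specificity={spec:.1%}")
```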
The (Un)Certainty of Selectivity in Liquid Chromatography Tandem Mass Spectrometry
NASA Astrophysics Data System (ADS)
Berendsen, Bjorn J. A.; Stolker, Linda A. M.; Nielen, Michel W. F.
2013-01-01
We developed a procedure to determine the "identification power" of an LC-MS/MS method operated in the MRM acquisition mode, which is related to its selectivity. The probability of any compound showing the same precursor ion, product ions, and retention time as the compound of interest is used as a measure of selectivity. This is calculated based upon empirical models constructed from three very large compound databases. Based upon the final probability estimation, additional measures to assure unambiguous identification can be taken, like the selection of different or additional product ions. The reported procedure in combination with criteria for relative ion abundances results in a powerful technique to determine the (un)certainty of the selectivity of any LC-MS/MS analysis and thus the risk of false positive results. Furthermore, the procedure is very useful as a tool to validate method selectivity.
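A minimal sketch of the selectivity estimate described above: if precursor mass, product-ion masses, and retention time act as (approximately) independent filters, the probability that an unrelated compound passes all of them is the product of the individual probabilities. The empirical models from the paper's compound databases are replaced here by assumed per-filter probabilities.

```python
# All probabilities below are assumed placeholders, not database-derived.
p_precursor = 0.02   # P(random compound shares the precursor ion)
p_product1 = 0.05    # P(shares product ion 1 | precursor)
p_product2 = 0.08    # P(shares product ion 2 | precursor)
p_rt = 0.10          # P(co-elutes within the retention-time window)

p_false_candidate = p_precursor * p_product1 * p_product2 * p_rt
print(f"P(interfering compound passes all filters) ~ {p_false_candidate:.2e}")
```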
[Definition and stabilization of processes II. Clinical Processes in a Urology Department].
Pascual, Carlos; Luján, Marcos; Mora, José Ramón; Diz, Manuel Ramón; Martín, Carlos; López, María Carmen
2015-01-01
New models in clinical management seek a clinical practice based on quality, efficacy, and efficiency, avoiding variability and improvisation. In this paper we develop one of the most frequent clinical processes in our speciality, the process based on DRG 311 (transurethral procedures without complications), and describe its components: the stabilization form, the clinical trajectory, the cost calculation, and finally the process flowchart.
Comparison of Sample Size by Bootstrap and by Formulas Based on Normal Distribution Assumption.
Wang, Zuozhen
2018-01-01
The bootstrapping technique is distribution-independent, which provides an indirect way to estimate the sample size for a clinical trial based on a relatively small sample. In this paper, bootstrap sample size estimates for comparing two parallel-design arms with continuous data are presented for various test types (inequality, non-inferiority, superiority, and equivalence). For comparison, sample sizes for the same data were also calculated with mathematical formulas that assume a normal distribution. The power difference between the two calculation methods is acceptably small for all the test types, showing that the bootstrap procedure is a credible technique for sample size estimation. We then compared the powers determined by the two methods on data that violate the normal distribution assumption. To accommodate this feature of the data, the nonparametric Wilcoxon test was applied to compare the two groups during the bootstrap power estimation. As a result, the power estimated by the normal distribution-based formula is far larger than that estimated by bootstrap for each specific sample size per group. Hence, for this type of data, it is preferable to apply the bootstrap method for sample size calculation from the outset, and to employ the same statistical method in each bootstrap sample as will be used in the subsequent statistical analysis, provided historical data are available that are representative of the population to which the proposed trial will extrapolate.
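A minimal sketch of bootstrap power estimation with a Wilcoxon rank-sum (Mann-Whitney) test, along the lines described above; `pilot_a` and `pilot_b` are simulated stand-ins for the historical data the procedure requires, and the search grid is arbitrary.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
pilot_a = rng.lognormal(0.0, 1.0, 60)   # assumed non-normal historical data
pilot_b = rng.lognormal(0.4, 1.0, 60)

def bootstrap_power(a, b, n_per_arm, alpha=0.05, n_boot=2000):
    """Fraction of bootstrap trials in which the test rejects H0."""
    hits = 0
    for _ in range(n_boot):
        xa = rng.choice(a, n_per_arm, replace=True)  # resample each arm
        xb = rng.choice(b, n_per_arm, replace=True)
        if mannwhitneyu(xa, xb, alternative="two-sided").pvalue < alpha:
            hits += 1
    return hits / n_boot

# Coarse search for the smallest n per arm reaching 80% power:
for n in range(20, 201, 20):
    if bootstrap_power(pilot_a, pilot_b, n) >= 0.80:
        print(f"approximately n = {n} per arm")
        break
```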
Computation of the dipole moments of proteins.
Antosiewicz, J
1995-10-01
A simple and computationally feasible procedure for the calculation of net charges and dipole moments of proteins at arbitrary pH and salt conditions is described. The method is intended to provide data that may be compared to the results of transient electric dichroism experiments on protein solutions. The procedure consists of three major steps: (i) calculation of self energies and interaction energies for ionizable groups in the protein by using the finite-difference Poisson-Boltzmann method, (ii) determination of the position of the center of diffusion (to which the calculated dipole moment refers) and the extinction coefficient tensor for the protein, and (iii) generation of the equilibrium distribution of protonation states of the protein by a Monte Carlo procedure, from which mean and root-mean-square dipole moments and optical anisotropies are calculated. The procedure is applied to 12 proteins. It is shown that it gives hydrodynamic and electrical parameters for proteins in good agreement with experimental data.
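A toy Metropolis Monte Carlo over protonation states, sketching step (iii) of the procedure under stated assumptions: site energies and pairwise interactions are random placeholders, whereas the paper derives them from finite-difference Poisson-Boltzmann calculations.

```python
import math
import random

random.seed(1)
n_sites = 10
# Placeholder energetics in kT units (PB-derived in the actual procedure):
self_e = [random.uniform(-2, 2) for _ in range(n_sites)]
pair_e = [[random.uniform(-0.2, 0.2) for _ in range(n_sites)]
          for _ in range(n_sites)]

def energy(state):
    """Total energy of a protonation microstate (True = protonated)."""
    e = sum(self_e[i] for i, s in enumerate(state) if s)
    e += sum(pair_e[i][j] for i in range(n_sites) if state[i]
             for j in range(i + 1, n_sites) if state[j])
    return e

state = [random.random() < 0.5 for _ in range(n_sites)]
charges = []
for _ in range(20000):
    i = random.randrange(n_sites)
    trial = state[:]
    trial[i] = not trial[i]
    # Metropolis acceptance: always accept downhill, else with exp(-dE).
    if random.random() < math.exp(min(0.0, energy(state) - energy(trial))):
        state = trial
    charges.append(sum(state))
print("mean number of protonated sites:", sum(charges) / len(charges))
```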
The purpose of this SOP is to describe the procedures undertaken for calculating ingestion exposure using the indirect method of exposure estimation. This SOP uses data that have been properly coded and certified with appropriate QA/QC procedures by the University ...
Global aesthetic surgery statistics: a closer look.
Heidekrueger, Paul I; Juran, S; Ehrl, D; Aung, T; Tanna, N; Broer, P Niclas
2017-08-01
Obtaining quality global statistics about surgical procedures remains an important yet challenging task. The International Society of Aesthetic Plastic Surgery (ISAPS) reports the total number of surgical and non-surgical procedures performed worldwide on a yearly basis. While providing valuable insight, ISAPS' statistics leave two important factors unaccounted for: (1) the underlying base population, and (2) the number of surgeons performing the procedures. Statistics of the published ISAPS' 'International Survey on Aesthetic/Cosmetic Surgery' were analysed by country, taking into account the underlying national base population according to the official United Nations population estimates. Further, the number of surgeons per country was used to calculate the number of surgeries performed per surgeon. In 2014, based on ISAPS statistics, national surgical procedures ranked in the following order: 1st USA, 2nd Brazil, 3rd South Korea, 4th Mexico, 5th Japan, 6th Germany, 7th Colombia, and 8th France. When considering the size of the underlying national populations, the demand for surgical procedures per 100,000 people changes the overall ranking substantially. It was also found that the rate of surgical procedures per surgeon shows great variation between the responding countries. While the US and Brazil are often quoted as the countries with the highest demand for plastic surgery, according to the presented analysis, other countries surpass these countries in surgical procedures per capita. While data acquisition and quality should be improved in the future, valuable insight regarding the demand for surgical procedures can be gained by taking specific demographic and geographic factors into consideration.
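The per-capita and per-surgeon normalization described above is a simple rescaling; the sketch below uses invented country figures rather than ISAPS/UN data.

```python
# country: (surgical procedures, population, plastic surgeons) -- all assumed
data = {
    "A": (1_500_000, 320_000_000, 6_900),
    "B": (1_200_000, 200_000_000, 5_500),
    "C": (400_000, 50_000_000, 2_300),
}
for country, (procs, pop, surgeons) in data.items():
    per_100k = procs / pop * 100_000
    per_surgeon = procs / surgeons
    print(f"{country}: {per_100k:.0f} procedures per 100k people, "
          f"{per_surgeon:.0f} per surgeon")
```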
Performance of DIMTEST-and NOHARM-Based Statistics for Testing Unidimensionality
ERIC Educational Resources Information Center
Finch, Holmes; Habing, Brian
2007-01-01
This Monte Carlo study compares the ability of the parametric bootstrap version of DIMTEST with three goodness-of-fit tests calculated from a fitted NOHARM model to detect violations of the assumption of unidimensionality in testing data. The effectiveness of the procedures was evaluated for different numbers of items, numbers of examinees,…
The Field Production of Water for Injection
1985-12-01
Water-requirement figures recoverable from this excerpt: Bedridden Patient, 0.75 L/day; Average/All Diseased Patients, 0.50 L/day. (There is no feasible methodology to forecast the number of procedures per...) An estimate of the liters/day needed may be calculated based on a forecasted patient stream, including
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brow, R.K.; Kovacic, L.; Chambers, R.S.
1996-04-01
Hermetic glass sealing technologies developed for weapons component applications can be utilized for the design and manufacture of fuel cells. Design and processing of a seal are optimized through an integrated approach based on glass composition research, finite element analysis, and sealing process definition. Glass sealing procedures are selected to accommodate the limits imposed by glass composition and by predictive calculations.
Closure to new results for an approximate method for calculating two-dimensional furrow infiltration
USDA-ARS?s Scientific Manuscript database
In a discussion paper, Ebrahimian and Noury (2015) raised several concerns about an approximate solution to the two-dimensional Richards equation presented by Bautista et al (2014). The solution is based on a procedure originally proposed by Warrick et al. (2007). Such a solution is of practical i...
NASA Astrophysics Data System (ADS)
Maciejewska, Beata; Błasiak, Sławomir; Piasecka, Magdalena
This work discusses a mathematical model for laminar-flow heat transfer in a minichannel. The boundary conditions, in the form of temperature distributions on the outer sides of the channel walls, were determined from experimental data. The data were collected from an experimental stand whose essential part is a vertical minichannel 1.7 mm deep, 16 mm wide and 180 mm long, asymmetrically heated by a Haynes-230 alloy plate. Infrared thermography allowed determining temperature changes on the outer side of the minichannel walls. The problem was analysed numerically through either ANSYS CFX software or special calculation procedures based on the Finite Element Method and Trefftz functions in the thermal boundary layer. The Trefftz functions were used to construct the basis functions. Solutions to the governing differential equations were approximated with a linear combination of Trefftz-type basis functions, whose unknown coefficients were calculated by minimising a functional. The results of the comparative analysis are presented in graphical form and discussed.
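The coefficient fit at the heart of such a Trefftz approach reduces to a least-squares problem; the sketch below uses a few harmonic polynomials as placeholder Trefftz functions and invented boundary temperatures, not the paper's basis or data.

```python
import numpy as np

def basis(x, y):
    # Harmonic polynomials (exact solutions of Laplace's equation),
    # standing in for problem-specific Trefftz functions.
    return np.array([1.0, x, y, x * y, x**2 - y**2])

# Hypothetical boundary points and measured wall temperatures (K):
pts = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0), (0.5, 0.0)]
temps = np.array([300.0, 310.0, 325.0, 315.0, 305.0])

# The functional minimization becomes a linear least-squares fit for the
# coefficients of the linear combination of basis functions:
A = np.array([basis(x, y) for x, y in pts])
coeffs, *_ = np.linalg.lstsq(A, temps, rcond=None)
print("fitted coefficients:", coeffs)
```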
Design of tubesheet for U-tube heat exchangers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Paliwal, D.N.; Saxena, R.M.
1993-02-01
Thorough analysis of the two-side integral tubesheet of a U-tube heat exchanger is carried out, using Panc's component theory of plates. Effects of the solid annular rim and the interaction between tubesheet and shell/channel are considered. A design procedure based on the foregoing analysis is proposed. Fictive elastic constants due to Osweiller, as well as effective elastic constants due to Slot and O'Donnell, are employed. Deformations, internal forces, and primary stress intensities are evaluated in both pitch and diagonal directions. The stress category concept of ASME Sect. VIII Div. 2 is used. The design thickness obtained by this method is compared with the thicknesses calculated using ASME Sect. VIII Div. 1, TEMA, and BS-5500. This method also enables calculation of stresses in the shell and channel in the junction region. The present analysis and design procedure thoroughly investigates the tubesheet behavior and leads to a thinner tubesheet. It is concluded that although all the codes based on Gardner's work provide safe and efficient design rules and lie on firm footing, there is further scope for reducing the design thickness of the tubesheet by about ten percent.
The neural bases of the multiplication problem-size effect across countries
Prado, Jérôme; Lu, Jiayan; Liu, Li; Dong, Qi; Zhou, Xinlin; Booth, James R.
2013-01-01
Multiplication problems involving large numbers (e.g., 9 × 8) are more difficult to solve than problems involving small numbers (e.g., 2 × 3). Behavioral research indicates that this problem-size effect might be due to different factors across countries and educational systems. However, there is no neuroimaging evidence supporting this hypothesis. Here, we compared the neural correlates of the multiplication problem-size effect in adults educated in China and the United States. We found a greater neural problem-size effect in Chinese than American participants in bilateral superior temporal regions associated with phonological processing. However, we found a greater neural problem-size effect in American than Chinese participants in right intra-parietal sulcus (IPS) associated with calculation procedures. Therefore, while the multiplication problem-size effect might be a verbal retrieval effect in Chinese as compared to American participants, it may instead stem from the use of calculation procedures in American as compared to Chinese participants. Our results indicate that differences in educational practices might affect the neural bases of symbolic arithmetic. PMID:23717274
Rana, Vijay; Rudin, Stephen; Bednarek, Daniel R.
2012-01-01
We have developed a dose-tracking system (DTS) that calculates the radiation dose to the patient’s skin in real-time by acquiring exposure parameters and imaging-system-geometry from the digital bus on a Toshiba Infinix C-arm unit. The cumulative dose values are then displayed as a color map on an OpenGL-based 3D graphic of the patient for immediate feedback to the interventionalist. Determination of those elements on the surface of the patient 3D-graphic that intersect the beam and calculation of the dose for these elements in real time demands fast computation. Reducing the size of the elements results in more computation load on the computer processor and therefore a tradeoff occurs between the resolution of the patient graphic and the real-time performance of the DTS. The speed of the DTS for calculating dose to the skin is limited by the central processing unit (CPU) and can be improved by using the parallel processing power of a graphics processing unit (GPU). Here, we compare the performance speed of GPU-based DTS software to that of the current CPU-based software as a function of the resolution of the patient graphics. Results show a tremendous improvement in speed using the GPU. While an increase in the spatial resolution of the patient graphics resulted in slowing down the computational speed of the DTS on the CPU, the speed of the GPU-based DTS was hardly affected. This GPU-based DTS can be a powerful tool for providing accurate, real-time feedback about patient skin-dose to physicians while performing interventional procedures. PMID:24027616
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selle, J.E.
A modification was made to the Kaufman method of calculating binary phase diagrams to permit calculation of intra-rare earth diagrams. Atomic volumes for all phases, real or hypothetical, are necessary to determine interaction parameters for calculation of complete diagrams. The procedures used to determine unknown atomic volumes are described, as are procedures for determining lattice stability parameters for unknown transformations. Results are presented on the calculation of intra-rare earth diagrams between both trivalent and divalent rare earths. 13 refs., 36 figs., 11 tabs.
40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations
Code of Federal Regulations, 2014 CFR
2014-07-01
... Federal Emission Test Procedure and the following results were calculated: HC=.139 grams/mile CO=1.59 grams/mile CO2=317 grams/mile According to the procedure in § 600.113-78, the city fuel economy or MPGc, for the vehicle may be calculated by substituting the HC, CO, and CO2 grams/mile values into the...
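The regulatory equation is truncated in this excerpt, but the carbon-balance idea behind it can be sketched with the commonly used approximate carbon mass fractions for gasoline exhaust; the exact § 600.113-78 equation also involves fuel specific gravity and heating value, omitted here.

```python
# Sample test results quoted above (grams/mile):
HC, CO, CO2 = 0.139, 1.59, 317.0

# Approximate carbon mass fractions: 0.866 (HC), 0.429 (CO), 0.273 (CO2);
# ~2421 grams of carbon per gallon of gasoline (approximation, not the
# exact regulatory constants).
carbon_g_per_mile = 0.866 * HC + 0.429 * CO + 0.273 * CO2
mpg_city = 2421.0 / carbon_g_per_mile
print(f"MPGc = {mpg_city:.1f}")  # roughly 27.7 mpg for these values
```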
40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations
Code of Federal Regulations, 2012 CFR
2012-07-01
... Federal Emission Test Procedure and the following results were calculated: HC=.139 grams/mile CO=1.59 grams/mile CO2=317 grams/mile According to the procedure in § 600.113-78, the city fuel economy or MPGc, for the vehicle may be calculated by substituting the HC, CO, and CO2 grams/mile values into the...
40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations
Code of Federal Regulations, 2011 CFR
2011-07-01
... Federal Emission Test Procedure and the following results were calculated: HC=.139 grams/mile CO=1.59 grams/mile CO2=317 grams/mile According to the procedure in § 600.113-78, the city fuel economy or MPGc, for the vehicle may be calculated by substituting the HC, CO, and CO2 grams/mile values into the...
40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations
Code of Federal Regulations, 2013 CFR
2013-07-01
... Federal Emission Test Procedure and the following results were calculated: HC=.139 grams/mile CO=1.59 grams/mile CO2=317 grams/mile According to the procedure in § 600.113-78, the city fuel economy or MPGc, for the vehicle may be calculated by substituting the HC, CO, and CO2 grams/mile values into the...
Computer method for design of acoustic liners for turbofan engines
NASA Technical Reports Server (NTRS)
Minner, G. L.; Rice, E. J.
1976-01-01
A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables. The method does not predict multiple pure tones. A target attenuation spectrum was calculated as the difference between the estimated generation spectrum and a flat annoyance-weighted goal spectrum. The target spectrum was combined with a knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method at present is limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (which is a result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation was carried through to demonstrate the design procedure, and the experimental results presented show good agreement with the calculated results of the method.
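The target-attenuation step is a band-by-band subtraction; the sketch below uses invented spectrum levels and a flat goal, purely to illustrate the arithmetic.

```python
import numpy as np

# Assumed one-third-octave band levels (dB), not from the report:
bands_hz = np.array([500, 1000, 2000, 4000, 8000])
generated_db = np.array([118.0, 122.0, 125.0, 121.0, 115.0])
goal_db = 112.0  # flat annoyance-weighted goal (assumed)

# Required attenuation is the positive part of the difference:
target_attenuation_db = np.clip(generated_db - goal_db, 0.0, None)
for f, a in zip(bands_hz, target_attenuation_db):
    print(f"{f:5d} Hz: attenuate {a:4.1f} dB")
```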
Ghogawala, Zoher; Whitmore, Robert G; Watters, William C; Sharan, Alok; Mummaneni, Praveen V; Dailey, Andrew T; Choudhri, Tanvir F; Eck, Jason C; Groff, Michael W; Wang, Jeffrey C; Resnick, Daniel K; Dhall, Sanjay S; Kaiser, Michael G
2014-07-01
A comprehensive economic analysis generally involves the calculation of indirect and direct health costs from a societal perspective as opposed to simply reporting costs from a hospital or payer perspective. Hospital charges for a surgical procedure must be converted to cost data when performing a cost-effectiveness analysis. Once cost data has been calculated, quality-adjusted life year data from a surgical treatment are calculated by using a preference-based health-related quality-of-life instrument such as the EQ-5D. A recent cost-utility analysis from a single study has demonstrated the long-term (over an 8-year time period) benefits of circumferential fusions over stand-alone posterolateral fusions. In addition, economic analysis from a single study has found that lumbar fusion for selected patients with low-back pain can be recommended from an economic perspective. Recent economic analysis, from a single study, finds that femoral ring allograft might be more cost-effective compared with a specific titanium cage when performing an anterior lumbar interbody fusion plus posterolateral fusion.
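A cost-utility comparison of the kind summarized above reduces to an incremental cost-effectiveness ratio (ICER); all inputs in this sketch are hypothetical.

```python
# Hypothetical costs (USD) and quality-adjusted life years per strategy:
cost_a, qaly_a = 42_000.0, 5.6   # e.g., circumferential fusion (assumed)
cost_b, qaly_b = 33_000.0, 5.1   # e.g., posterolateral fusion (assumed)

icer = (cost_a - cost_b) / (qaly_a - qaly_b)  # cost per QALY gained
print(f"ICER = ${icer:,.0f} per QALY gained")
```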
Analysis of Direct Costs of Outpatient Arthroscopic Rotator Cuff Repair.
Narvy, Steven J; Ahluwalia, Avtar; Vangsness, C Thomas
2016-01-01
Arthroscopic rotator cuff surgery is one of the most commonly performed orthopedic surgical procedures. We conducted a study to calculate the direct cost of arthroscopic repair of rotator cuff tears confirmed by magnetic resonance imaging. Twenty-eight shoulders in 26 patients (mean age, 54.5 years) underwent primary rotator cuff repair by a single fellowship-trained arthroscopic surgeon in the outpatient surgery center of a major academic medical center. All patients had interscalene blocks placed while in the preoperative holding area. Direct costs of this cycle of care were calculated using the time-driven activity-based costing algorithm. Mean time in operating room was 148 minutes; mean time in recovery was 105 minutes. Calculated surgical cost for this process cycle was $5904.21. Among material costs, suture anchor costs were the main cost driver. Preoperative bloodwork was obtained in 23 cases, adding a mean cost of $111.04. Our findings provide important preliminary information regarding the direct economic costs of rotator cuff surgery and may be useful to hospitals and surgery centers negotiating procedural reimbursement for the increased cost of repairing complex tears.
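Time-driven activity-based costing in miniature: each resource is billed at a capacity cost rate for the minutes it is used, plus materials. The rates and material prices below are invented for illustration; only the room times echo the durations reported above.

```python
resources = [
    # (name, cost rate $/min (assumed), minutes used)
    ("operating room", 30.0, 148),
    ("recovery room", 8.0, 105),
    ("surgeon", 12.0, 148),
    ("anesthesiologist", 9.0, 160),
]
materials = {"suture anchors": 900.0, "disposables": 250.0}  # assumed

direct_cost = (sum(rate * minutes for _, rate, minutes in resources)
               + sum(materials.values()))
print(f"direct cost of the care cycle: ${direct_cost:,.2f}")
```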
NASA Technical Reports Server (NTRS)
Wang, Qinglin; Gogineni, S. P.
1991-01-01
A numerical procedure is presented for estimating the true scattering coefficient, σ0, from measurements made using wide-beam antennas. The use of wide-beam antennas results in an inaccurate estimate of σ0 if the narrow-beam approximation is used in the retrieval process. To reduce this error, a correction procedure was proposed that estimates the error resulting from the narrow-beam approximation and uses it to obtain a more accurate estimate of σ0. An exponential model was assumed to account for the variation of σ0 with incidence angle, and the model parameters are estimated from measured data. Based on the model and knowledge of the antenna pattern, the procedure calculates the error due to the narrow-beam approximation. The procedure is shown to provide a significant improvement in the estimation of σ0 obtained with wide-beam antennas, and to be insensitive to the assumed σ0 model.
NASA Astrophysics Data System (ADS)
Zhou, Xue; Cui, Xinglei; Chen, Mo; Zhai, Guofu
2016-05-01
Species compositions of Ag-N2, Ag-H2 and Ag-He plasmas in the temperature range of 3,000-20,000 K at atmospheric pressure were calculated using minimization of the Gibbs free energy. Thermodynamic properties and transport coefficients of nitrogen, hydrogen and helium plasmas mixed with varying amounts of silver vapor were then calculated based on the equilibrium compositions and collision integral data. The calculation procedure was verified by comparing the results obtained in this paper with published transport coefficients for the case of pure nitrogen plasma. The influences of the silver vapor concentration on compositions, thermodynamic properties and transport coefficients were finally analyzed and summarized for all three types of plasmas. These physical properties are important for theoretical study and numerical calculation of arc plasmas generated by silver-based electrodes in these gases in sealed electromagnetic relays and contacts. Supported by the National Natural Science Foundation of China (Nos. 51277038 and 51307030).
Thanh, Tran Thien; Vuong, Le Quang; Ho, Phan Long; Chuong, Huynh Dinh; Nguyen, Vo Hoang; Tao, Chau Van
2018-04-01
In this work, an advanced analytical procedure was applied to calculate radioactivity in spiked water samples in a close geometry gamma spectroscopy. It incorporates the MCNP-CP code to calculate the coincidence summing correction factor (CSF). The CSF results were validated against a deterministic method using the ETNA code for both p-type HPGe detectors, and the two codes showed good agreement. Finally, the validity of the developed procedure was confirmed by a proficiency test to calculate the activities of various radionuclides. The radioactivity measurements with both detectors using the advanced analytical procedure received "Accepted" status in the proficiency test. Copyright © 2018 Elsevier Ltd. All rights reserved.
Ballabio, Davide; Consonni, Viviana; Mauri, Andrea; Todeschini, Roberto
2010-01-11
In multivariate regression and classification problems, variable selection is an important procedure used to select an optimal subset of variables with the aim of producing more parsimonious and potentially more predictive models. Variable selection is often necessary when dealing with methodologies that produce thousands of variables, such as Quantitative Structure-Activity Relationships (QSARs) and highly dimensional analytical procedures. In this paper a novel method for variable selection for classification purposes is introduced. This method exploits the recently proposed Canonical Measure of Correlation between two sets of variables (CMC index). The CMC index is in this case calculated for two specific sets of variables, the former comprised of the independent variables and the latter of the unfolded class matrix. The CMC values, calculated by considering one variable at a time, can be sorted to produce a ranking of the variables on the basis of their class discrimination capabilities. Alternatively, the CMC index can be calculated for all possible combinations of variables and the variable subset with the maximal CMC selected, but this procedure is computationally more demanding and the classification performance of the selected subset is not always the best. The effectiveness of the CMC index in selecting variables with discriminative ability was compared with that of other well-known strategies for variable selection, such as Wilks' Lambda, the VIP index based on Partial Least Squares-Discriminant Analysis, and the selection provided by classification trees. A variable Forward Selection based on the CMC index was finally used in conjunction with Linear Discriminant Analysis. This approach was tested on several chemical data sets, with encouraging results.
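The one-variable-at-a-time ranking can be sketched by scoring each variable through its multiple correlation with the unfolded (one-hot) class matrix; this stands in for the CMC index, whose exact definition is given in the paper, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))        # 60 samples, 5 candidate variables
y = np.repeat([0, 1, 2], 20)        # three classes
X[:, 2] += y                        # make variable 2 informative
C = np.eye(3)[y]                    # unfolded class matrix (one-hot)

def score(x, C):
    """Multiple correlation of one variable with the class indicators
    (a stand-in for the CMC index)."""
    design = np.column_stack([np.ones(len(x)), C[:, :-1]])
    beta, *_ = np.linalg.lstsq(design, x, rcond=None)
    fitted = design @ beta
    return np.corrcoef(fitted, x)[0, 1]

ranking = sorted(range(X.shape[1]), key=lambda j: -score(X[:, j], C))
print("variables ranked by class discrimination:", ranking)
```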
Numerical Simulation of the Ground Response to the Tire Load Using Finite Element Method
NASA Astrophysics Data System (ADS)
Valaskova, Veronika; Vlcek, Jozef
2017-10-01
The response of the pavement to the excitation caused by a moving vehicle is one of the current problems of civil engineering practice. The load from the vehicle is transferred to the pavement structure through the contact area of the tires. Experimental studies show a nonuniform distribution of the pressure in this area, caused by the flexible nature and the shape of the tire and influenced by the tire inflation. Several tire load patterns, including uniform distribution and point load, were involved in the numerical modelling using the finite element method. Applied tire loads were based on the tire contact forces of the lorry Tatra 815. Two procedures were selected for the calculations. The first was based on simplifying the vehicle to a half-part model; the characteristics of the vehicle model, and its behaviour during the ride, were verified by experiment and by a numerical model in the software ADINA. The second step involved applying the calculated contact forces for the front axle as the load on a multi-layered half space representing the pavement structure. This procedure was realized in the software Plaxis with an axisymmetric model and considered various stress patterns for the load. The response of the ground to the vehicle load was then analyzed. The paper presents the results of the investigation of the contact pressure distribution and the corresponding reaction of the pavement to various load distribution patterns. The results show differences in some calculated quantities for different load patterns, which need to be verified experimentally, with the ground response also observed.
Economic Valuation of the Global Burden of Cleft Disease Averted by a Large Cleft Charity.
Poenaru, Dan; Lin, Dan; Corlew, Scott
2016-05-01
This study attempts to quantify the burden of disease averted through the global surgical work of a large cleft charity, and to estimate the economic impact of this effort over a 10-year period. Anonymized data of all primary cleft lip and cleft palate procedures in the Smile Train database were analyzed, and disability-adjusted life years (DALYs) were calculated using country-specific life expectancy tables, established disability weights, and estimated success of surgery and residual disability probabilities; multiple age-weighting and discounting permutations were included. Averted DALYs were calculated, and gross national income (GNI) per capita was then multiplied by averted DALYs to estimate economic gains. 548,147 primary cleft procedures were performed in 83 countries between 2001 and 2011; 547,769 records contained complete data available for the study, of which 58 % were cleft lip and 42 % cleft palate. Averted DALYs ranged between 1.46 and 4.95 M. The mean economic impact ranged between USD 5510 and 50,634 per person. This corresponded to a global economic impact of between USD 3.0B and USD 27.7B, depending on the DALY and GNI values used. The estimated cost of providing these procedures based on an average reimbursement rate was USD 197M (0.7-6.6 % of the estimated impact). The immense economic gain realized through procedures focused on a small proportion of the surgical burden of disease highlights the importance and cost-effectiveness of surgical treatment globally. This methodology can be applied to evaluate interventions for other conditions, and for evidence-based health care resource allocation.
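The core valuation step multiplies averted DALYs by GNI per capita; the sketch below shows the undiscounted, non-age-weighted case with hypothetical inputs, not Smile Train figures.

```python
# All inputs are assumed for illustration:
dw_averted = 0.15        # disability weight averted by a successful repair
years_lived = 55.0       # remaining life expectancy at the time of repair
gni_per_capita = 3_000.0 # USD, country-specific

dalys_averted = dw_averted * years_lived   # no discounting or age weighting
economic_gain = dalys_averted * gni_per_capita
print(f"{dalys_averted:.1f} DALYs averted -> USD {economic_gain:,.0f}")
```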
Swartjes, Frank A; Versluijs, Kees W; Otte, Piet F
2013-10-01
Consumption of vegetables that are grown in urban areas takes place worldwide. In developing countries, vegetables are traditionally grown in urban areas for cheap food supply. In developing and developed countries, urban gardening is gaining momentum. A problem that arises with urban gardening is the presence of contaminants in soil, which can be taken up by vegetables. In this study, a scientifically-based and practical procedure has been developed for assessing the human health risks from the consumption of vegetables from cadmium-contaminated land. Starting from a contaminated site, the procedure follows a tiered approach which is laid out as follows. In Tier 0, the plausibility of growing vegetables is investigated. In Tier 1 soil concentrations are compared with the human health-based Critical soil concentration. Tier 2 offers the possibility for a detailed site-specific human health risk assessment in which calculated exposure is compared to the toxicological reference dose. In Tier 3, vegetable concentrations are measured and tested following a standardized measurement protocol. To underpin the derivation of the Critical soil concentrations and to develop a tool for site-specific assessment the determination of the representative concentration in vegetables has been evaluated for a range of vegetables. The core of the procedure is based on Freundlich-type plant-soil relations, with the total soil concentration and the soil properties as variables. When a significant plant-soil relation is lacking for a specific vegetable a geometric mean of BioConcentrationFactors (BCF) is used, which is normalized according to soil properties. Subsequently, a 'conservative' vegetable-group-consumption-rate-weighted BioConcentrationFactor is calculated as basis for the Critical soil concentration (Tier 1). The tool to perform site-specific human health risk assessment (Tier 2) includes the calculation of a 'realistic worst case' site-specific vegetable-group-consumption-rate-weighted BioConcentrationFactor. © 2013 Elsevier Inc. All rights reserved.
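A Freundlich-type plant-soil relation of the kind the procedure uses is log-linear in the soil concentration with soil-property corrections; the coefficients below are placeholders, not the fitted values underlying the Critical soil concentrations.

```python
import math

def cd_in_vegetable(cd_soil, ph, om_pct, a=-0.5, b=0.7, c=-0.3, d=-0.2):
    """Cd in vegetable (mg/kg dw) from total soil Cd (mg/kg), pH and
    organic matter (%); log10-linear Freundlich-type form with assumed
    coefficients a..d."""
    log_plant = (a + b * math.log10(cd_soil)
                 + c * (ph - 6.0) + d * math.log10(om_pct))
    return 10 ** log_plant

print(f"{cd_in_vegetable(1.2, ph=5.5, om_pct=4.0):.3f} mg/kg dw")
```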
Solar wind flow past Venus - Theory and comparisons
NASA Technical Reports Server (NTRS)
Spreiter, J. R.; Stahara, S. S.
1980-01-01
Advanced computational procedures are applied to an improved model of solar wind flow past Venus to calculate the locations of the ionopause and bow wave and the properties of the flowing ionosheath plasma in the intervening region. The theoretical method is based on a single-fluid, steady, dissipationless, magneto-hydrodynamic continuum model and is appropriate for the calculation of axisymmetric supersonic, super-Alfvenic solar wind flow past a nonmagnetic planet possessing a sufficiently dense ionosphere to stand off the flowing plasma above the subsolar point and elsewhere. Determination of time histories of plasma and magnetic field properties along an arbitrary spacecraft trajectory and provision for an arbitrary oncoming direction of the interplanetary solar wind have been incorporated in the model. An outline is provided of the underlying theory and computational procedures, and sample comparisons of the results are presented with observations from the Pioneer Venus orbiter.
Liquid discharges from patients undergoing 131I treatments.
Barquero, R; Basurto, F; Nuñez, C; Esteban, R
2008-10-01
This work discusses the production and management of liquid radioactive wastes, as excreta from patients undergoing therapy procedures with 131I radiopharmaceuticals in Spain. The activity in the sewage has been estimated with and without radioactive waste decay tanks. Two common therapy procedures have been considered: thyroid cancer (4.14 GBq administered per treatment) and hyperthyroidism (414 MBq administered per treatment). The calculations were based on measurements of external exposure around 244 hyperthyroidism patients and 23 thyroid cancer patients. The estimated direct activity discharged to the sewage for two thyroid carcinomas and three hyperthyroidisms was 14.57 GBq and 1.27 GBq per week, respectively; the annual doses received by the most exposed individual (a sewage worker) were 164 microSv and 13 microSv, respectively. General equations to calculate the activity as a function of the number of patients treated each week were also obtained.
Scheraga, H A; Paine, G H
1986-01-01
We are using a variety of theoretical and computational techniques to study protein structure, protein folding, and higher-order structures. Our earlier work involved treatments of liquid water and aqueous solutions of nonpolar and polar solutes, computations of the stabilities of the fundamental structures of proteins and their packing arrangements, conformations of small cyclic and open-chain peptides, structures of fibrous proteins (collagen), structures of homologous globular proteins, introduction of special procedures as constraints during energy minimization of globular proteins, and structures of enzyme-substrate complexes. Recently, we presented a new methodology for predicting polypeptide structure (described here); the method is based on the calculation of the probable and average conformation of a polypeptide chain by the application of equilibrium statistical mechanics in conjunction with an adaptive, importance sampling Monte Carlo algorithm. As a test, it was applied to Met-enkephalin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baryshev, Sergey V.; Thimsen, Elijah
2015-04-14
Herein, we report an analytical procedure to calculate the enthalpy of formation for thin-film multinary compounds from sputtering rates measured during ion bombardment. The method is based on Sigmund's sputtering theory and the Born-Haber cycle. Using this procedure, an enthalpy of formation for a CZTS film of the composition Cu1.9Zn1.5Sn0.8S4 was measured as -930 +/- 98 kJ/mol. This value is much more negative than the sum of the enthalpies of formation of the constituent binary compounds, meaning the multinary formation reaction is predicted to be exothermic. The measured enthalpy of formation was used to estimate the temperature dependence of the Gibbs free energy of reaction, which appears consistent with many experimental reports in the CZTS processing literature.
NASA Technical Reports Server (NTRS)
Bennett, Floyd V.; Yntema, Robert T.
1959-01-01
Several approximate procedures for calculating the bending-moment response of flexible airplanes to continuous isotropic turbulence are presented and evaluated. The modal methods (the mode-displacement and force-summation methods) and a matrix method (segmented-wing method) are considered. These approximate procedures are applied to a simplified airplane for which an exact solution to the equation of motion can be obtained. The simplified airplane consists of a uniform beam with a concentrated fuselage mass at the center. Airplane motions are limited to vertical rigid-body translation and symmetrical wing bending deflections. Output power spectra of wing bending moments based on the exact transfer-function solutions are used as a basis for the evaluation of the approximate methods. It is shown that the force-summation and the matrix methods give satisfactory accuracy and that the mode-displacement method gives unsatisfactory accuracy.
NASA Astrophysics Data System (ADS)
Miyake, Shugo; Matsui, Genzou; Ohta, Hiromichi; Hatori, Kimihito; Taguchi, Kohei; Yamamoto, Suguru
2017-07-01
Thermal microscopes are a useful technology to investigate the spatial distribution of the thermal transport properties of various materials. However, for high thermal effusivity materials, the estimated values of thermophysical parameters based on the conventional 1D heat flow model are known to be higher than the values of materials in the literature. Here, we present a new procedure to solve the problem which calculates the theoretical temperature response with the 3D heat flow and measures reference materials which involve known values of thermal effusivity and heat capacity. In general, a complicated numerical iterative method and many thermophysical parameters are required for the calculation in the 3D heat flow model. Here, we devised a simple procedure by using a molybdenum (Mo) thin film with low thermal conductivity on the sample surface, enabling us to measure over a wide thermal effusivity range for various materials.
An estimation of the cost per visit of nursing home care services.
Ryu, Ho-Sihn
2009-01-01
Procedures used for analyzing the cost of providing home care nursing services through hospital-based home care agencies (HCAs) were the focus of this study. A cross-sectional descriptive study design was used to analyze the workload and caseload of 36 home care nurses from ten HCAs. In addition, information obtained from a national health insurance database, including 54,639 home care claim cases from a total of 185 HCAs during a 6-month period, was analyzed. The findings provide a foundation for improving the alternative home care billing and reimbursement system by using the actual amount of time invested in providing home care when calculating the cost of providing home care nursing services. Further, this study provides a procedure for calculating nursing service costs by analyzing actual data. The results have great potential for use in nursing service cost analysis methodology, which is an essential step in developing a policy for providing home care.
Solubility of gases and liquids in glassy polymers.
De Angelis, Maria Grazia; Sarti, Giulio C
2011-01-01
This review discusses a macroscopic thermodynamic procedure to calculate the solubility of gases, vapors, and liquids in glassy polymers that is based on the general procedure provided by the nonequilibrium thermodynamics for glassy polymers (NET-GP) method. Several examples are presented using various nonequilibrium (NE) models including lattice fluid (NELF), statistical associating fluid theory (NE-SAFT), and perturbed hard sphere chain (NE-PHSC). Particular applications illustrate the calculation of infinite-dilution solubility coefficients in different glassy polymers and the prediction of solubility isotherms for different gases and vapors in pure polymers as well as in polymer blends. The determination of model parameters is discussed, and the predictive abilities of the models are illustrated. Attention is also given to the solubility of gas mixtures and solubility isotherms in nanocomposite mixed matrices. The fractional free volume determined from solubility data can be used to correlate solute diffusivities in mixed matrices.
Automation of PCXMC and ImPACT for NASA Astronaut Medical Imaging Dose and Risk Tracking
NASA Technical Reports Server (NTRS)
Bahadori, Amir; Picco, Charles; Flores-McLaughlin, John; Shavers, Mark; Semones, Edward
2011-01-01
Objective: To automate astronaut organ and effective dose calculations from occupational X-ray and computed tomography (CT) examinations incorporating PCXMC and ImPACT tools and to estimate the associated lifetime cancer risk per the National Council on Radiation Protection & Measurements (NCRP) using MATLAB(R). Methods: NASA follows guidance from the NCRP on its operational radiation safety program for astronauts. NCRP Report 142 recommends that astronauts be informed of the cancer risks from reported exposures to ionizing radiation from medical imaging. MATLAB(R) code was written to retrieve exam parameters for medical imaging procedures from a NASA database, calculate associated dose and risk, and return results to the database, using the Microsoft .NET Framework. This code interfaces with the PCXMC executable and emulates the ImPACT Excel spreadsheet to calculate organ doses from X-rays and CTs, respectively, eliminating the need to utilize the PCXMC graphical user interface (except for a few special cases) and the ImPACT spreadsheet. Results: Using MATLAB(R) code to interface with PCXMC and replicate ImPACT dose calculation allowed for rapid evaluation of multiple medical imaging exams. The user inputs the exam parameter data into the database and runs the code. Based on the imaging modality and input parameters, the organ doses are calculated. Output files are created for record, and organ doses, effective dose, and cancer risks associated with each exam are written to the database. Annual and post-flight exposure reports, which are used by the flight surgeon to brief the astronaut, are generated from the database. Conclusions: Automating PCXMC and ImPACT for evaluation of NASA astronaut medical imaging radiation procedures allowed for a traceable and rapid method for tracking projected cancer risks associated with over 12,000 exposures. This code will be used to evaluate future medical radiation exposures, and can easily be modified to accommodate changes to the risk calculation procedure.
NASA Technical Reports Server (NTRS)
Mingelgrin, U.
1972-01-01
Many properties of gaseous systems such as electromagnetic absorption and emission, sound dispersion and absorption, may be elucidated if the nature of collisions between the particles in the system is understood. A procedure for the calculation of the classical trajectories of two interacting diatomic molecules is described. The dynamics of the collision will be assumed to be that of two rigid rotors moving in a specified potential. The actual outcome of a representative sample of many trajectories at 298K was computed, and the use of these values at any temperature for calculations of various molecular properties will be described. Calculations performed for the O2 microwave spectrum are given to demonstrate the use of the procedure described.
Madsen, Kristoffer H; Ewald, Lars; Siebner, Hartwig R; Thielscher, Axel
2015-01-01
Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. To develop an approach to reconstruct the magnetic vector potential based on automated measurements. We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well with that calculated using a model reconstructed from x-ray images. The setup can supply validated models for existing and newly appearing TMS coils. Copyright © 2015 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
López, J.; Hernández, J.; Gómez, P.; Faura, F.
2018-02-01
The VOFTools library includes efficient analytical and geometrical routines for (1) area/volume computation, (2) truncation operations that typically arise in VOF (volume of fluid) methods, (3) area/volume conservation enforcement (VCE) in PLIC (piecewise linear interface calculation) reconstruction, and (4) computation of the distance from a given point to the reconstructed interface. The computation of a polyhedron volume uses an efficient formula based on a quadrilateral decomposition and a 2D projection of each polyhedron face. The analytical VCE method is based on coupling an interpolation procedure to bracket the solution with an improved final calculation step based on the above volume computation formula. Although the library was originally created to help develop highly accurate advection and reconstruction schemes in the context of VOF methods, it may have more general applications. To assess the performance of the supplied routines, different tests, which are provided in FORTRAN and C, were implemented for several 2D and 3D geometries.
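As a 2D analogue of the area/volume routines described above, the shoelace formula computes a polygon's area; the 3D volume formula works with 2D projections of each polyhedron face, and this sketch shows only the planar case.

```python
def polygon_area(vertices):
    """Signed area of a simple polygon (positive for counter-clockwise
    vertex order), via the shoelace formula."""
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        s += x0 * y1 - x1 * y0
    return 0.5 * s

print(polygon_area([(0, 0), (1, 0), (1, 1), (0, 1)]))  # unit square -> 1.0
```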
Zou, Cheng; Sun, Zhenguo; Cai, Dong; Muhammad, Salman; Zhang, Wenzeng; Chen, Qiang
2016-01-01
A method is developed to accurately and efficiently determine the spatial impulse response at specifically discretized observation points in the radiated field of 1-D linear ultrasonic phased array transducers. In contrast, previously adopted solutions only optimize the calculation procedure for a single rectangular transducer and require approximations or nonlinear calculation. In this research, an algorithm that follows an alternative approach to expedite the calculation of the spatial impulse response of a rectangular linear array is presented. The key assumption for this algorithm is that the transducer apertures are identical and distributed with a uniform pitch on an infinite rigid baffle. Two points in the observation field that have the same position relative to two transducer apertures share the same spatial impulse response contributed by the corresponding transducer. The observation field is discretized specifically to satisfy this relationship. The analytical expressions of the proposed algorithm, based on the specific selection of the observation points, are derived to remove redundant calculations. To evaluate the proposed methodology, the simulation results obtained from the proposed method and the classical summation method are compared. The outcomes demonstrate that the proposed strategy speeds up the calculation procedure, with a speed-up ratio that depends on the number of discrete points and the number of array transducers. This development will be valuable in the development of advanced and faster linear ultrasonic phased array systems. PMID:27834799
Rathmayer, Markus; Heinlein, Wolfgang; Reiß, Claudia; Albert, Jörg G; Akoglu, Bora; Braun, Martin; Brechmann, Thorsten; Gölder, Stefan K; Lankisch, Tim; Messmann, Helmut; Schneider, Arne; Wagner, Martin; Dollhopf, Markus; Gundling, Felix; Röhling, Michael; Haag, Cornelie; Dohle, Ines; Werner, Sven; Lammert, Frank; Fleßa, Steffen; Wilke, Michael H; Schepp, Wolfgang; Lerch, Markus M
2017-10-01
Background In the German hospital reimbursement system (G-DRG) endoscopic procedures are listed in cost center 8. For reimbursement between hospital departments and external providers, outdated or incomplete catalogues (e. g. DKG-NT, GOÄ) have remained in use. We have assessed the cost of endoscopic procedures in the G-DRG system. Methods To assess the cost of endoscopic procedures, 74 hospitals, annual providers of cost data to the Institute for the Hospital Remuneration System (InEK), made their data (2011 - 2015; § 21 KHEntgG) available to the German Society of Gastroenterology (DGVS) in anonymized form (4,873,809 case data sets). Using cases with exactly one endoscopic procedure (n = 274,186), average costs over 5 years were calculated for 46 endoscopic procedure tiers. Results Robust mean endoscopy costs ranged from 230.56 € for gastroscopy (144,666 cases) and 276.23 € (n = 32,294) for a simple colonoscopy to 844.07 € (n = 10,150) for ERCP with papillotomy and plastic stent insertion and 1602.37 € (n = 967) for ERCP with a self-expanding metal stent. Higher costs, specifically for complex procedures, were identified for university hospitals. Discussion For the first time this catalogue of endoscopic procedure tiers, based on § 21 KHEntgG data sets from 74 InEK-calculating hospitals, permits a realistic assessment of endoscopy costs in German hospitals. The higher costs in university hospitals are likely due to referral bias for complex cases and emergency interventions. For the 46 endoscopic procedure tiers an objective cost allocation within the G-DRG system is now possible. By international comparison the costs of endoscopic procedures in Germany are low, due to either greater efficiency, lower personnel allocation, or incomplete documentation of the real expenses. © Georg Thieme Verlag KG Stuttgart · New York.
NASA Technical Reports Server (NTRS)
Eberle, W. R.
1981-01-01
A computer program to calculate the wake downwind of a wind turbine was developed. Turbine wake characteristics are useful for determining optimum arrays for wind turbine farms. The analytical model is based on the characteristics of a turbulent coflowing jet with modification for the effects of atmospheric turbulence. The program calculates overall wake characteristics, wind profiles, and power recovery for a wind turbine directly in the wake of another turbine, as functions of distance downwind of the turbine. The calculation procedure is described in detail, and sample results are presented to illustrate the general behavior of the wake and the effects of principal input parameters.
NASA Technical Reports Server (NTRS)
Gloersen, P.; Campbell, W. J.
1984-01-01
Data acquired with the Scanning Multichannel Microwave Radiometer (SMMR) on board the Nimbus-7 Satellite for a six-week period in Fram Strait were analyzed with a procedure for calculating sea ice concentration, multiyear fraction, and ice temperature. Calculations were compared with independent observations made on the surface and from aircraft to check the validity of the calculations based on SMMR data. The calculation of multiyear fraction, which was known to be invalid near the melting point of sea ice, is discussed. The indication of multiyear ice is found to disappear a number of times, presumably corresponding to freeze/thaw cycles which occurred in this time period.
Allison, Stuart A; Xin, Yao
2005-08-15
A boundary element (BE) procedure is developed to numerically calculate the electrophoretic mobility of highly charged, rigid model macroions in the thin double layer regime based on the continuum primitive model. The procedure is based on that of O'Brien (R.W. O'Brien, J. Colloid Interface Sci. 92 (1983) 204). The advantage of the present procedure over existing BE methodologies that are applicable to rigid model macroions in general (S. Allison, Macromolecules 29 (1996) 7391) is that computationally time-consuming integrations over a large number of volume elements that surround the model particle are completely avoided. The procedure is tested by comparing the mobilities derived from it with independent theory of the mobility of spheres of radius a in a salt solution with Debye-Hückel screening parameter κ. The procedure is shown to yield accurate mobilities provided κa exceeds approximately 50. The methodology is most relevant to model macroions of mean linear dimension L with 1000 > κL > 100 and reduced absolute zeta potential (q|ζ|/kBT) greater than 1.0. The procedure is then applied to the compact form of high molecular weight, duplex DNA that is formed in the presence of the trivalent counterion, spermidine, under low salt conditions. For T4 DNA (166,000 base pairs), the compact form is modeled as a sphere (diameter = 600 nm) and as a toroid (largest linear dimension = 600 nm). In order to reconcile experimental and model mobilities, approximately 95% of the DNA phosphates must be neutralized by bound counterions. This interpretation, based on electrokinetics, is consistent with independent studies.
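As a rough sanity check on the thin-double-layer regime quoted above (κa > 50), the sphere mobility approaches the classical Smoluchowski value, a standard benchmark for such BE calculations. The sketch below is an illustration only, not the paper's BE code; the water properties are assumed values.

# Minimal sketch (not the authors' BE procedure): Smoluchowski limit
# mu = eps_r * eps_0 * zeta / eta, valid for kappa*a >> 1.
eps_0 = 8.854e-12      # vacuum permittivity, F/m
eps_r = 78.5           # relative permittivity of water at 25 C (assumed)
eta = 0.89e-3          # viscosity of water, Pa*s (assumed)
k_B, T, e = 1.381e-23, 298.15, 1.602e-19

def smoluchowski_mobility(reduced_zeta):
    """Mobility in m^2/(V*s) for a reduced zeta potential q|zeta|/(k_B*T)."""
    zeta = reduced_zeta * k_B * T / e   # zeta potential in volts
    return eps_r * eps_0 * zeta / eta

# Example: reduced zeta of 1.0, the regime boundary quoted in the abstract
print(f"{smoluchowski_mobility(1.0):.3e} m^2/(V*s)")   # ~2e-8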
NASA Astrophysics Data System (ADS)
Liu, Tianhui; Chen, Jun; Zhang, Zhaojun; Shen, Xiangjian; Fu, Bina; Zhang, Dong H.
2018-04-01
We constructed a nine-dimensional (9D) potential energy surface (PES) for the dissociative chemisorption of H2O on a rigid Ni(100) surface using the neural network method based on roughly 110 000 energies obtained from extensive density functional theory (DFT) calculations. The resulting PES is accurate and smooth, based on the small fitting errors and the good agreement between the fitted PES and the direct DFT calculations. Time dependent wave packet calculations also showed that the PES is very well converged with respect to the fitting procedure. The dissociation probabilities of H2O initially in the ground rovibrational state from 9D quantum dynamics calculations are quite different from the site-specific results from the seven-dimensional (7D) calculations, indicating the importance of full-dimensional quantum dynamics to quantitatively characterize this gas-surface reaction. It is found that the validity of the site-averaging approximation with exact potential holds well, where the site-averaging dissociation probability over 15 fixed impact sites obtained from 7D quantum dynamics calculations can accurately approximate the 9D dissociation probability for H2O in the ground rovibrational state.
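The site-averaging approximation tested above has a simple arithmetic core: the full-dimensional dissociation probability is approximated by an average of fixed-site reduced-dimensional probabilities. The sketch below only illustrates that averaging step; the uniform weights and the made-up probabilities are assumptions, not the paper's data.

import numpy as np

# Hedged sketch of the site-averaging step: average 7D fixed-site
# probabilities over impact sites (15 sites in the abstract). Uniform
# weights are assumed here; symmetry-based weights may be used in practice.
def site_averaged_probability(p7d_sites, weights=None):
    p7d_sites = np.asarray(p7d_sites, dtype=float)
    if weights is None:
        weights = np.full(p7d_sites.shape, 1.0 / p7d_sites.size)
    return float(np.sum(weights * p7d_sites))

# Example with invented probabilities for 15 impact sites at one energy:
rng = np.random.default_rng(0)
p_sites = rng.uniform(0.0, 0.2, size=15)
print(site_averaged_probability(p_sites))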
NASA Astrophysics Data System (ADS)
Osten, W.; Pedrini, G.; Weidmann, P.; Gadow, R.
2015-08-01
A minimally invasive but high resolution method for residual stress analysis of ceramic coatings made by thermal spray coating, using a pulsed laser for flexible hole drilling, is described. The residual stresses are retrieved by applying the measured surface data to a model-based reconstruction procedure. While the 3D deformations and the profile of the machined area are measured with digital holography, the residual stresses are calculated by FE analysis. To improve the sensitivity of the method, a spatial light modulator (SLM) is applied to control the distribution and the shape of the holes. The paper presents the complete measurement and reconstruction procedure and discusses the advantages and challenges of the new technology.
Code of Federal Regulations, 2010 CFR
2010-04-01
... administration or application, of mind altering substances or other procedures calculated to disrupt profoundly... application of mind altering substances or other procedures calculated to disrupt profoundly the senses or... custody or physical control. (5) The term “acquiescence” as used in this definition requires that the...
NASA Astrophysics Data System (ADS)
Vattré, A.; Devincre, B.; Feyel, F.; Gatti, R.; Groh, S.; Jamond, O.; Roos, A.
2014-02-01
A unified model coupling 3D dislocation dynamics (DD) simulations with the finite element (FE) method is revisited. The so-called Discrete-Continuous Model (DCM) aims to predict plastic flow at the (sub-)micron length scale of materials with complex boundary conditions. The evolution of the dislocation microstructure and the short-range dislocation-dislocation interactions are calculated with a DD code. The long-range mechanical fields due to the dislocations are calculated by a FE code, taking into account the boundary conditions. The coupling procedure is based on eigenstrain theory, and the precise manner in which the plastic slip, i.e. the dislocation glide as calculated by the DD code, is transferred to the integration points of the FE mesh is described in full detail. Several test cases are presented, and the DCM is applied to plastic flow in a single-crystal Nickel-based superalloy.
Constales, Denis; Yablonsky, Gregory S.; Wang, Lucun; ...
2017-04-25
This paper presents a straightforward and user-friendly procedure for extracting a reactivity characterization of catalytic reactions on solid materials under non-steady-state conditions, particularly in temporal analysis of products (TAP) experiments. The kinetic parameters derived by this procedure can help with the development of detailed mechanistic understanding. The procedure consists of the following two major steps: 1) Three "Laplace reactivities" are first determined based on the moments of the exit flow pulse response data; 2) Depending on the selected kinetic model, kinetic constants of elementary reaction steps can then be expressed as a function of reactivities and determined accordingly. In particular, we distinguish two calculation methods based on the availability and reliability of reactant and product data. The theoretical results are illustrated using a reverse example with given parameters as well as an experimental example of CO oxidation over a supported Au/SiO2 catalyst. The procedure presented here provides an efficient tool for kinetic characterization of many complex chemical reactions.
Polynomial probability distribution estimation using the method of moments.
Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper
2017-01-01
We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is setup algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
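To make the moment-matching idea concrete, the following minimal sketch fits a polynomial density on a finite interval so that its first N raw moments equal the sample moments; matching moment k against p(x) = Σ_j c_j x^j gives the linear system Σ_j c_j (b^(k+j+1) − a^(k+j+1))/(k+j+1) = m_k. This is my illustration of the general method of moments, not the authors' algorithm, and the interval and degree are assumptions.

import numpy as np

# Hedged method-of-moments sketch: solve a linear system so that the
# polynomial PDF on [a, b] reproduces sample moments m_0..m_N
# (m_0 = 1 enforces normalization).
def polynomial_pdf_coeffs(samples, degree, a, b):
    m = np.array([np.mean(np.asarray(samples) ** k) for k in range(degree + 1)])
    m[0] = 1.0  # normalization condition
    M = np.array([[(b**(k + j + 1) - a**(k + j + 1)) / (k + j + 1)
                   for j in range(degree + 1)] for k in range(degree + 1)])
    return np.linalg.solve(M, m)

# Example: degree-4 approximation to a standard normal on [-5, 5].
x = np.random.default_rng(1).normal(size=100_000)
c = polynomial_pdf_coeffs(x, degree=4, a=-5.0, b=5.0)
print(sum(cj * 0.0**j for j, cj in enumerate(c)))  # rough density at the
# mode; a low degree underestimates the true peak (~0.399), higher N tightens it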
Tracing contacts of TB patients in Malaysia: costs and practicality.
Atif, Muhammad; Sulaiman, Syed Azhar Syed; Shafie, Asrul Akmal; Ali, Irfhan; Asif, Muhammad
2012-01-01
Tuberculin skin testing (TST) and chest X-ray are the conventional methods used for tracing suspected tuberculosis (TB) patients. The purpose of the study was to calculate the cost incurred by Penang General Hospital in performing one contact tracing procedure using an activity-based costing approach. Contact tracing records (including the demographic profile of contacts and the outcome of the contact tracing procedure) from March 2010 until February 2011 were retrospectively obtained from the TB contact tracing record book. The human resource cost was calculated by multiplying the mean time spent (in minutes) by employees on a specific activity by their per-minute salaries. The costs of consumables, Purified Protein Derivative vials and clinical equipment were obtained from the procurement section of the Pharmacy and Radiology Departments. The cost of the building was calculated by multiplying the area of space used by the facility by the unit cost of the public building department. Straight-line depreciation with a discount rate of 3% was assumed for the calculation of equivalent annual costs for the building and machines. Out of 1024 contact tracing procedures, TST was positive (≥10 mm) in 38 suspects. However, chemoprophylaxis was started in none. The yield of contact tracing (active tuberculosis) was as low as 0.5%. The total unit cost of chest X-ray and TST was MYR 9.23 (USD 2.90) and MYR 11.80 (USD 3.70), respectively. The total cost incurred for a single contact tracing procedure was MYR 21.03 (USD 6.60). Our findings suggest that the yield of contact tracing was very low, which may be attributed to an inappropriate prioritization process. TST may be replaced with more accurate and specific methods (interferon gamma release assay, IGRA) in highly prioritized contacts; or TST-positive contacts should be administered 6H therapy (provided that chest radiography excludes TB) in accordance with standard protocols. The unit cost of contact tracing can be significantly reduced if radiological examination is done only in TST- or IGRA-positive contacts.
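The cost arithmetic described above (per-minute salaries times activity time, plus consumables, plus annuitized capital at 3%) can be sketched as follows. All names and numbers in this sketch are placeholders, not the study's data; the annuity form of the equivalent annual cost is one common reading of "straight-line depreciation with a discount rate".

# Hedged sketch of the activity-based costing arithmetic (placeholder data).
def equivalent_annual_cost(price, years, rate=0.03):
    """Annuitize a capital cost over its useful life at the given discount rate."""
    annuity_factor = (1 - (1 + rate) ** -years) / rate
    return price / annuity_factor

def unit_cost(minutes_by_staff, salary_per_min, consumables,
              annual_capital, procedures_per_year):
    labour = sum(minutes_by_staff[s] * salary_per_min[s] for s in minutes_by_staff)
    return labour + consumables + annual_capital / procedures_per_year

# Hypothetical example: nurse 20 min, doctor 10 min, one annuitized machine.
annual_capital = equivalent_annual_cost(price=150_000.0, years=10)
cost = unit_cost({"nurse": 20, "doctor": 10},
                 {"nurse": 0.25, "doctor": 0.60},   # MYR per minute, invented
                 consumables=2.0, annual_capital=annual_capital,
                 procedures_per_year=20_000)
print(f"MYR {cost:.2f} per procedure")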
Determination of stress intensity factors for interface cracks under mixed-mode loading
NASA Technical Reports Server (NTRS)
Naik, Rajiv A.; Crews, John H., Jr.
1992-01-01
A simple technique was developed using conventional finite element analysis to determine stress intensity factors, K1 and K2, for interface cracks under mixed-mode loading. This technique involves the calculation of crack tip stresses using non-singular finite elements. These stresses are then combined and used in a linear regression procedure to calculate K1 and K2. The technique was demonstrated by calculating the K's for three different bimaterial combinations. For the normal loading case, the K's were within 2.6 percent of an exact solution. The normalized K's under shear loading were shown to be related to the normalized K's under normal loading. Based on these relations, a simple equation was derived for calculating K1 and K2 under mixed-mode loading from knowledge of the K's under normal loading. The equation was verified by computing the K's for a mixed-mode case with equal normal and shear loading. The correlation between exact and finite element solutions is within 3.7 percent. This study provides a simple procedure to compute the K2/K1 ratio, which has been used to characterize the stress state at the crack tip for various combinations of materials and loadings. Tests conducted over a range of K2/K1 ratios could be used to fully characterize interface fracture toughness.
Sławuta, P; Glińska-Suchocka, K; Cekiera, A
2015-01-01
Apart from the HH equation, the acid-base balance (ABB) of an organism is also described by the Stewart model, which assumes that the proper insight into the ABB of the organism is given by an analysis of: pCO2; the difference of the concentrations of strong cations and anions in the blood serum (SID); and the total concentration of nonvolatile weak acids (Atot). The notion of an anion gap (AG), or the apparent lack of ions, is closely related to the acid-base balance described according to the HH equation. Its value mainly consists of negatively charged proteins, phosphates, and sulphates in blood. In human medicine, a modified anion gap is used which, by including the concentration of the protein buffer of blood, is in fact a combination of the apparent lack of ions derived from the classic model and the Stewart model. In brachycephalic dogs, respiratory acidosis often occurs, caused by an overgrowth of the soft palate that obstructs free air flow and causes an increase in pCO2 (carbon dioxide). The aim of the present paper was an attempt to answer the question whether, in the case of systemic respiratory acidosis, changes in the concentration of buffering ions can also be seen. The study was carried out on 60 adult Boxer dogs in which, on the basis of the results of endoscopic examination, a strong overgrowth of the soft palate requiring surgical correction was found. For each dog, the value of the anion gap before and after the palate correction procedure was calculated according to the following equation: AG = ([Na+ mmol/l] + [K+ mmol/l]) − ([Cl− mmol/l] + [HCO3− mmol/l]), as well as the value of the modified AG, according to the following equation: AGm = calculated AG + 2.5 × (albumins(r) − albumins(d)). The values of AG calculated for the dogs before and after the procedure fell within the limits of the reference values and did not differ significantly, whereas the values of AGm calculated for the dogs before and after the procedure differed from each other significantly. 1) On the basis of the values of AGm obtained, it should be stated that in spite of finding respiratory acidosis in the examined dogs, changes in ion concentration can also be seen which, according to the Stewart theory, compensate for metabolic ABB disorders. 2) In spite of the fact that all the values used for the calculation of AGm were within the limits of the reference values, the values of AGm in dogs before and after the soft palate correction procedure differed from each other significantly, which proves the high sensitivity and usefulness of the AGm calculation as a diagnostic method.
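The two equations above are simple to evaluate; the sketch below does so in Python. Reading albumins(r) and albumins(d) as the reference and determined albumin values is my interpretation of the notation, and the example concentrations are invented.

# Hedged sketch of the AG and AGm formulas from the abstract.
def anion_gap(na, k, cl, hco3):
    """Classic AG in mmol/l: (Na+ + K+) - (Cl- + HCO3-)."""
    return (na + k) - (cl + hco3)

def modified_anion_gap(ag, albumin_ref, albumin_det):
    """AGm = AG + 2.5 x (reference albumin - determined albumin), assumed reading."""
    return ag + 2.5 * (albumin_ref - albumin_det)

ag = anion_gap(na=145.0, k=4.5, cl=110.0, hco3=21.0)   # invented values
print(ag, modified_anion_gap(ag, albumin_ref=3.5, albumin_det=2.8))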
Robustness of methods for blinded sample size re-estimation with overdispersed count data.
Schneider, Simon; Schmidli, Heinz; Friede, Tim
2013-09-20
Counts of events are increasingly common as primary endpoints in randomized clinical trials. With between-patient heterogeneity leading to variances in excess of the mean (referred to as overdispersion), statistical models reflecting this heterogeneity by mixtures of Poisson distributions are frequently employed. Sample size calculation in the planning of such trials requires knowledge of the nuisance parameters, that is, the control (or overall) event rate and the overdispersion parameter. Usually, there is only little prior knowledge regarding these parameters in the design phase, resulting in considerable uncertainty regarding the sample size. In this situation internal pilot studies have been found very useful, and very recently several blinded procedures for sample size re-estimation have been proposed for overdispersed count data, one of which is based on an EM algorithm. In this paper we investigate the EM-algorithm-based procedure with respect to aspects of its implementation by studying the algorithm's dependence on the choice of convergence criterion, and find that the procedure is sensitive to the choice of the stopping criterion in scenarios relevant to clinical practice. We also compare the EM-based procedure to other competing procedures regarding their operating characteristics such as sample size distribution and power. Furthermore, the robustness of these procedures to deviations from the model assumptions is explored. We find that some of the procedures are robust to at least moderate deviations. The results are illustrated using data from the US National Heart, Lung and Blood Institute sponsored Asymptomatic Cardiac Ischemia Pilot study. Copyright © 2013 John Wiley & Sons, Ltd.
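For orientation, a commonly used textbook-style approximation for the per-group sample size when comparing two overdispersed (negative binomial) event rates is sketched below; it shows how the nuisance parameters (control rate and overdispersion) enter the calculation. This is a generic formula for illustration, not the blinded re-estimation procedure studied in the paper, and the example inputs are invented.

from math import log
from statistics import NormalDist

# Hedged sketch: patients per group for comparing two negative binomial
# rates with variance mu + k*mu^2 (dispersion k), via the log rate ratio.
def nb_sample_size(rate0, rate1, dispersion, t=1.0, alpha=0.05, power=0.9):
    mu0, mu1 = rate0 * t, rate1 * t   # expected counts per patient
    var_log_rr = (1.0 / mu0 + dispersion) + (1.0 / mu1 + dispersion)
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return z**2 * var_log_rr / log(rate1 / rate0)**2

# Example: control rate 2/yr, 30% reduction, dispersion 0.5, 1-year follow-up
print(nb_sample_size(2.0, 1.4, 0.5))  # roughly 183 patients per arm here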
Word-Problem-Solving Strategy for Minority Students at Risk for Math Difficulties
ERIC Educational Resources Information Center
Kong, Jennifer E.; Orosco, Michael J.
2016-01-01
Minority students at risk for math difficulties (MD) struggle with word problems for various reasons beyond procedural or calculation challenges. As a result, these students require support in reading and language development in addition to math. The purpose of this study was to assess the effectiveness of a math comprehension strategy based on a…
Environmental Survey Plans, Fort Sheridan, Sampling and Analysis Plan, Fort Sheridan, Illinois
1990-07-01
technique. The flameless AA procedure is based on the absorption of radiation at 253.7 nm by Hg vapor. The Hg is reduced to the elemental state and aerated...desiccator, cupric oxide is added, and the sample is combusted in an induction furnace. The organic carbon content is determined through a calculation in
40 CFR 63.8000 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2011 CFR
2011-07-01
... atoms, and you use a combustion-based control device (excluding a flare) to meet an organic HAP emission... calculating the concentration of each organic compound that contains halogen atoms using the procedures specified in § 63.115(d)(2)(v), multiplying each concentration by the number of halogen atoms in the organic...
40 CFR 63.8000 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2010 CFR
2010-07-01
... atoms, and you use a combustion-based control device (excluding a flare) to meet an organic HAP emission... calculating the concentration of each organic compound that contains halogen atoms using the procedures specified in § 63.115(d)(2)(v), multiplying each concentration by the number of halogen atoms in the organic...
40 CFR 63.8000 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2014 CFR
2014-07-01
... atoms, and you use a combustion-based control device (excluding a flare) to meet an organic HAP emission... calculating the concentration of each organic compound that contains halogen atoms using the procedures specified in § 63.115(d)(2)(v), multiplying each concentration by the number of halogen atoms in the organic...
40 CFR 63.8000 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2013 CFR
2013-07-01
... atoms, and you use a combustion-based control device (excluding a flare) to meet an organic HAP emission... calculating the concentration of each organic compound that contains halogen atoms using the procedures specified in § 63.115(d)(2)(v), multiplying each concentration by the number of halogen atoms in the organic...
40 CFR 63.8000 - What are my general requirements for complying with this subpart?
Code of Federal Regulations, 2012 CFR
2012-07-01
... atoms, and you use a combustion-based control device (excluding a flare) to meet an organic HAP emission... calculating the concentration of each organic compound that contains halogen atoms using the procedures specified in § 63.115(d)(2)(v), multiplying each concentration by the number of halogen atoms in the organic...
40 CFR 60.50Da - Compliance determination procedures and methods.
Code of Federal Regulations, 2010 CFR
2010-07-01
... paragraphs (g)(1) and (2) of this section to calculate emission rates based on electrical output to the grid... of appendix A of this part shall be used to compute the emission rate of PM. (2) For the particular... reduction from fuel pretreatment, percent; and %Rg = Percent reduction by SO2 control system, percent. (2...
A Method of Measuring the Costs and Benefits of Applied Research.
ERIC Educational Resources Information Center
Sprague, John W.
The Bureau of Mines studied the application of the concepts and methods of cost-benefit analysis to the problem of ranking alternative applied research projects. Procedures for measuring the different classes of project costs and benefits, both private and public, are outlined, and cost-benefit calculations are presented, based on the criteria of…
Robert Zahner; Albert R. Stage
1966-01-01
A method is described for computing daily values of moisture stress on forest vegetation, or water deficits, based on the differences between Thornthwaite's potential evapotranspiration and computed soil-moisture depletion. More realistic functions are used for soil-moisture depletion on specific soil types than have been customary. These functions relate daily...
Relative Proximity Theory: Measuring the Gap between Actual and Ideal Online Course Delivery
ERIC Educational Resources Information Center
Swart, William; MacLeod, Kenneth; Paul, Ravi; Zhang, Aixiu; Gagulic, Mario
2014-01-01
Based on the Theory of Transactional Distance and Needs Assessment, this article reports a procedure for quantitatively measuring how close the actual delivery of a course was to ideal, as perceived by students. It extends Zhang's instrument and prescribes the computational steps to calculate relative proximity at the element and construct…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-03
... part of the office-based and ancillary radiology payment methodology. This notice updates the CY 2010... covered ancillary radiology services to the lesser of the ASC rate or the amount calculated by multiplying... procedures and covered ancillary radiology services are determined using the amounts in the MPFS final rule...
An Integrated Analysis-Test Approach
NASA Technical Reports Server (NTRS)
Kaufman, Daniel
2003-01-01
This viewgraph presentation provides an overview of a project to develop a computer program which integrates data analysis and test procedures. The software application aims to offer a new perspective on traditional mechanical analysis and test procedures and to integrate pre-test and test analysis calculation methods. The program should also be usable on portable devices and allow 'quasi-real time' analysis of data sent by electronic means. Test methods reviewed during this presentation include: shaker swept sine and random tests, shaker shock mode tests, shaker base-driven model survey tests, and acoustic tests.
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Standard calculation procedure. 434.510 Section 434.510 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard...
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Standard calculation procedure. 434.510 Section 434.510 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard...
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Standard calculation procedure. 434.510 Section 434.510 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard...
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Standard calculation procedure. 434.510 Section 434.510 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard...
Comparison of Polar Cap (PC) index calculations.
NASA Astrophysics Data System (ADS)
Stauning, P.
2012-04-01
The Polar Cap (PC) index introduced by Troshichev and Andrezen (1985) is derived from polar magnetic variations and is mainly a measure of the intensity of the transpolar ionospheric currents. These currents relate to the polar cap antisunward ionospheric plasma convection driven by the dawn-dusk electric field, which in turn is generated by the interaction of the solar wind with the Earth's magnetosphere. Coefficients to calculate PCN and PCS index values from polar magnetic variations recorded at Thule and Vostok, respectively, have been derived by several different procedures in the past. The first published set of coefficients for Thule was derived by Vennerstrøm (1991) and is still in use for calculations of PCN index values by DTU Space. Errors in the program used to calculate index values were corrected in 1999 and again in 2001. In 2005 DMI adopted a unified procedure proposed by Troshichev for calculations of the PCN index. Thus there exist four different series of PCN index values. Similarly, at AARI three different sets of coefficients have been used to calculate PCS indices in the past. The presentation discusses the principal differences between the various PC index procedures and provides comparisons between index values derived from the same magnetic data sets using the different procedures. Examples from published papers are examined to illustrate the differences.
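The normalization that all of these procedures share, in broad outline, is a projection of the horizontal magnetic disturbance onto an "optimal direction" followed by scaling with regression coefficients derived against the solar-wind merging electric field. The sketch below illustrates only that generic structure; the sign convention and all numerical values are assumptions, and the compared procedures differ precisely in how such coefficients are derived.

import numpy as np

# Hedged, generic sketch of a PC-index style normalization: project the
# disturbance (dH, dD, quiet level removed, in nT) onto the optimal
# direction (angle gamma), then scale with regression slope alpha and
# intercept beta so the index is dimensioned like the merging electric
# field (mV/m). All parameter values below are placeholders.
def pc_index(dH, dD, gamma, alpha, beta):
    dF_proj = dH * np.sin(gamma) + dD * np.cos(gamma)  # one sign convention
    return (dF_proj - beta) / alpha

print(pc_index(dH=120.0, dD=-40.0, gamma=np.radians(30.0),
               alpha=30.0, beta=10.0))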
Code OK3 - An upgraded version of OK2 with beam wobbling function
NASA Astrophysics Data System (ADS)
Ogoyski, A. I.; Kawata, S.; Popov, P. H.
2010-07-01
For computer simulations on heavy ion beam (HIB) irradiation onto a target with an arbitrary shape and structure in heavy ion fusion (HIF), the code OK2 was developed and presented in Computer Physics Communications 161 (2004). Code OK3 is an upgrade of OK2 including an important capability of wobbling beam illumination. The wobbling beam introduces a unique possibility for a smooth mechanism of inertial fusion target implosion, so that sufficient fusion energy is released to construct a fusion reactor in the future.
New version program summary
Program title: OK3
Catalogue identifier: ADST_v3_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADST_v3_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 221 517
No. of bytes in distributed program, including test data, etc.: 2 471 015
Distribution format: tar.gz
Programming language: C++
Computer: PC (Pentium 4, 1 GHz or more recommended)
Operating system: Windows or UNIX
RAM: 2048 MBytes
Classification: 19.7
Catalogue identifier of previous version: ADST_v2_0
Journal reference of previous version: Comput. Phys. Comm. 161 (2004) 143
Does the new version supersede the previous version?: Yes
Nature of problem: In heavy ion fusion (HIF), ion cancer therapy, material processing, etc., a precise beam energy deposition is essentially important [1]. Codes OK1 and OK2 have been developed to simulate the heavy ion beam energy deposition in three-dimensional arbitrarily shaped targets [2,3]. Wobbling beam illumination is important to smooth the beam energy deposition nonuniformity in HIF, so that a uniform target implosion is realized and a sufficient fusion output energy is released.
Solution method: OK3 works on the basis of OK1 and OK2 [2,3]. The code simulates a multi-beam illumination on a target with arbitrary shape and structure, including the beam wobbling function.
Reasons for new version: The code OK3 is based on OK2 [3] and uses the same algorithm with some improvements, the most important one being the beam wobbling function.
Summary of revisions: In the code OK3, beams are subdivided into many bunches, and the displacement of each bunch center from the initial beam direction is calculated. Code OK3 allows the beamlet number to vary from bunch to bunch, which reduces the calculation error, especially in the case of a very complicated mesh structure with big internal holes. The target temperature rises during the time of energy deposition. Some procedures are improved to perform faster. The energy conservation is checked at each step of the calculation process and corrected if necessary.
New procedures included in OK3: Procedure BeamCenterRot( ) rotates the beam axis around the impinging direction of each beam. Procedure BeamletRot( ) rotates the beamlet axes that belong to each beam. Procedure Rotation( ) sets the coordinates of rotated beams and beamlets in the chamber and pellet systems. Procedure BeamletOut( ) calculates the lost energy of ions that have not impinged on the target. Procedure TargetT( ) sets the temperature of the target layer of energy deposition during the irradiation process. Procedure ECL( ) checks the energy conservation law at each step of the energy deposition process. Procedure ECLt( ) performs the final check of the energy conservation law at the end of the deposition process.
Modified procedures in OK3: Procedure InitBeam( ) initializes the beam radius and the coefficients A1, A2, A3, A4 and A5 for Gauss-distributed beams [2]; it is enlarged in OK3 and can set beams with radii from 1 to 20 mm. Procedure kBunch( ) is modified to allow beamlet number variation from bunch to bunch during the deposition. Procedure ijkSp( ) and procedure Hole( ) are modified to perform faster. Procedure Espl( ) and procedure ChechE( ) are modified to increase the calculation accuracy. Procedure SD( ) calculates the total relative root-mean-square (RMS) deviation and the total relative peak-to-valley (PTV) deviation in energy deposition non-uniformity; this procedure is not included in code OK2 because of its limited applications (for spherical targets only), and is taken from code OK1 and modified to perform with code OK3.
Running time: The execution time depends on the pellet mesh number and the number of beams in the simulated illumination, as well as on the beam characteristics (beam radius on the pellet surface, beam subdivision, projectile particle energy and so on). In almost all of the practical running tests performed, the typical running time for one beam deposition is about 30 s on a PC with a CPU of Pentium 4, 2.4 GHz.
References: [1] A.I. Ogoyski, et al., Heavy ion beam irradiation non-uniformity in inertial fusion, Phys. Lett. A 315 (2003) 372-377. [2] A.I. Ogoyski, et al., Code OK1 - Simulation of multi-beam irradiation on a spherical target in heavy ion fusion, Comput. Phys. Comm. 157 (2004) 160-172. [3] A.I. Ogoyski, et al., Code OK2 - A simulation code of ion-beam illumination on an arbitrary shape and structure target, Comput. Phys. Comm. 161 (2004) 143-150.
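To illustrate the geometric idea behind the wobbling function (bunch centers rotated around the nominal beam axis, as BeamCenterRot( ) does conceptually), here is a small Python sketch. The circular trajectory, bunch count and radius are assumptions for illustration only, not OK3's actual C++ implementation.

import numpy as np

# Hedged illustration of wobbling-beam bunch displacements: each bunch
# center is offset on a circle around the nominal beam axis.
def bunch_centers(n_bunches, wobble_radius, revolutions=1.0):
    """Transverse (x, y) offsets of bunch centers from the beam axis."""
    phase = 2.0 * np.pi * revolutions * np.arange(n_bunches) / n_bunches
    return wobble_radius * np.cos(phase), wobble_radius * np.sin(phase)

x, y = bunch_centers(n_bunches=16, wobble_radius=1.0)  # radius in mm, assumed
print(np.column_stack([x, y]).round(3))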
Giessner-Prettre, C; Ribas Prado, F; Pullman, B; Kan, L; Kast, J R; Ts'o, P O
1981-01-01
A FORTRAN computer program called SHIFTS is described. Through SHIFTS, one can calculate the NMR chemical shifts of the proton resonances of single- and double-stranded nucleic acids of known sequences and of predetermined conformations. The program can handle RNA and DNA for an arbitrary sequence of a set of 4 out of the 6 base types A, U, G, C, I and T. Data files for the geometrical parameters are available for A-, A'-, B-, D- and S-conformations. The positions of all the atoms are calculated using a modified version of the SEQ program [1]. Then, based on this defined geometry, three chemical shift effects exerted by the atoms of the neighboring nucleotides on the protons of each monomeric unit are calculated separately: the ring current shielding effect; the local atomic magnetic susceptibility effect (including both diamagnetic and paramagnetic terms); and the polarization or electric field effect. Results of the program are compared with experimental results for an (ApApGpCpUpU)2 helical duplex and with calculated results on this same helix based on model building of the A'-form and B-form and on a graphical procedure for evaluating the ring current effects.
Evaluation of audit-based performance measures for dental care plans.
Bader, J D; Shugars, D A; White, B A; Rindal, D B
1999-01-01
Although a set of clinical performance measures, i.e., a report card for dental plans, has been designed for use with administrative data, most plans do not have administrative data systems containing the data needed to calculate the measures. Therefore, we evaluated the use of a set of proxy clinical performance measures calculated from data obtained through chart audits. Chart audits were conducted in seven dental programs--three public health clinics, two dental health maintenance organizations (DHMO), and two preferred provider organizations (PPO). In all instances audits were completed by clinical staff who had been trained using telephone consultation and a self-instructional audit manual. The performance measures were calculated for the seven programs, audit reliability was assessed in four programs, and for one program the audit-based proxy measures were compared to the measures calculated using administrative data. The audit-based measures were sensitive to known differences in program performance. The chart audit procedures yielded reasonably reliable data. However, missing data in patient charts rendered the calculation of some measures problematic--namely, caries and periodontal disease assessment and experience. Agreement between administrative and audit-based measures was good for most, but not all, measures in one program. The audit-based proxy measures represent a complex but feasible approach to the calculation of performance measures for those programs lacking robust administrative data systems. However, until charts contain more complete diagnostic information (i.e., periodontal charting and diagnostic codes or reason-for-treatment codes), accurate determination of these aspects of clinical performance will be difficult.
Optical absorption spectra and g factor of MgO: Mn2+explored by ab initio and semi empirical methods
NASA Astrophysics Data System (ADS)
Andreici Eftimie, E.-L.; Avram, C. N.; Brik, M. G.; Avram, N. M.
2018-02-01
In this paper we present a methodology for calculating the optical absorption spectra, ligand field parameters and g factor of Mn2+ (3d5) ions doped in a MgO host crystal. The proposed technique combines two methods: ab initio multireference (MR) calculations and the semi-empirical ligand field (LF) approach in the framework of the exchange charge model (ECM). Both methods of calculation are applied to the [MnO6]10− cluster embedded in an extended point charge field of host matrix ligands based on the Gellé-Lepetit procedure. The first step of the investigation was the full optimization of the cubic structure of the perfect MgO crystal, followed by the structural optimization of the doped MgO:Mn2+ system, using periodic density functional theory (DFT). The ab initio MR wave function approaches, such as complete active space self-consistent field (CASSCF), N-electron valence second order perturbation theory (NEVPT2) and spectroscopy oriented configuration interaction (SORCI), are used for the calculations. The scalar relativistic effects have also been taken into account through the second order Douglas-Kroll-Hess (DKH2) procedure. Ab initio ligand field theory (AILFT) allows all LF parameters and the spin-orbit coupling constant to be extracted from such calculations. In addition, the ECM of ligand field theory (LFT) has been used for modelling the optical absorption spectra. Perturbation theory (PT) was employed for the g factor calculation in the semi-empirical LFT. The results of each of the aforementioned types of calculations are discussed, and the comparisons between the results obtained and the experimental results show reasonable agreement, which justifies this new methodology based on the simultaneous use of both methods. This study establishes fundamental principles for the further modelling of larger embedded cluster models of doped metal oxides.
NASA Technical Reports Server (NTRS)
Anderson, O. L.
1974-01-01
A finite-difference procedure for computing the turbulent, swirling, compressible flow in axisymmetric ducts is described. Arbitrary distributions of heat and mass transfer at the boundaries can be treated, and the effects of struts, inlet guide vanes, and flow straightening vanes can be calculated. The calculation procedure is programmed in FORTRAN 4 and has operated successfully on the UNIVAC 1108, IBM 360, and CDC 6600 computers. The analysis which forms the basis of the procedure, a detailed description of the computer program, and the input/output formats are presented. The results of sample calculations performed with the computer program are compared with experimental data.
Design of Energy Storage Reactors for Dc-To-Dc Converters. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Chen, D. Y.
1975-01-01
Two methodical approaches to the design of energy-storage reactors for a group of widely used dc-to-dc converters are presented. One of these approaches is based on a steady-state time-domain analysis of piecewise-linearized circuit models of the converters, while the other approach is based on an analysis of the same circuit models, but from an energy point of view. The design procedure developed from the first approach includes a search through a stored data file of magnetic core characteristics and results in a list of usable reactor designs which meet a particular converter's requirements. Because of the complexity of this procedure, a digital computer usually is used to implement the design algorithm. The second approach, based on a study of the storage and transfer of energy in the magnetic reactors, leads to a straightforward design procedure which can be implemented with hand calculations. An equation to determine the lower-bound volume of workable cores for given converter design specifications is derived. Using this computed lower-bound volume, a comparative evaluation of various converter configurations is presented.
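To give a feel for what an energy-based lower bound on core volume looks like, the sketch below applies the generic magnetic energy-density argument: the reactor must store E = L·I²/2, and the material can hold at most B_max²/(2μ) per unit volume. This is a simple illustrative bound under assumed parameters; the thesis derives its own expression, which may differ.

from math import pi

MU0 = 4e-7 * pi  # vacuum permeability, H/m

# Hedged sketch: lower-bound core volume V >= 2*mu*E / B_max^2 from the
# stored energy E = 0.5*L*I^2 and the peak magnetic energy density.
def min_core_volume(L, I_peak, B_max, mu_rel_eff):
    energy = 0.5 * L * I_peak**2                       # joules
    density = B_max**2 / (2.0 * MU0 * mu_rel_eff)      # J/m^3 at peak flux
    return energy / density                            # m^3

# Invented example: 100 uH at 10 A peak, 0.3 T, effective permeability 60
print(f"{min_core_volume(100e-6, 10.0, 0.3, 60.0) * 1e6:.2f} cm^3")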
Design procedures for fiber composite box beams
NASA Technical Reports Server (NTRS)
Chamis, Cristos C.; Murthy, Pappu L. N.
1989-01-01
Step-by-step procedures are described which can be used for the preliminary design of fiber composite box beams subjected to combined loadings. These procedures include a collection of approximate closed-form equations so that all the required calculations can be performed using pocket calculators. Included is an illustrative example of a tapered cantilever box beam subjected to combined loads. The box beam is designed to satisfy strength, displacement, buckling, and frequency requirements.
Design Procedures for Fiber Composite Box Beams
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Murthy, Pappu L. N.
1988-01-01
Step-by-step procedures are described which can be used for the preliminary design of fiber composite box beams subjected to combined loadings. These procedures include a collection of approximate closed-form equations so that all the required calculations can be performed using pocket calculators. Included is an illustrated example of a tapered cantilever box beam subjected to combined loads. The box beam is designed to satisfy strength, displacement, buckling, and frequency requirements.
[Cost analysis of intraoperative neurophysiological monitoring (IOM)].
Kombos, T; Suess, O; Brock, M
2002-01-01
A number of studies demonstrate that a significant reduction of postoperative neurological deficits can be achieved by applying intraoperative neurophysiological monitoring (IOM) methods. A cost analysis of IOM is imperative considering the strained financial situation in the public health services. The calculation model presented here comprises two cost components: material and personnel. The material costs comprise consumer goods and depreciation of capital goods. The computation base was 200 IOM cases per year. Consumer goods were calculated separately for each IOM procedure. The following constellation served as a basis for calculating personnel costs: (a) a medical technician (salary level BAT Vc) for one hour per case; (b) a resident (BAT IIa) for the entire duration of the measurement, and (c) a senior resident (BAT Ia) only for supervision. An IOM device consisting of an 8-channel preamplifier, an electrical and acoustic stimulator and special software costs 66,467 euros on average. With an annual depreciation of 20%, the costs are 13,293 euros per year, which amounts to 66.46 euros per case for the capital goods. For reusable materials a sum of 0.75 euros per case was calculated; disposable materials were calculated separately for each procedure. Total costs of 228.02 euros per case were calculated for surgery on the peripheral nervous system. They amount to 196.40 euros per case for spinal interventions and to 347.63 euros per case for more complex spinal operations. Operations in the cerebellopontine angle and brain stem cost 376.63 euros and 397.33 euros per case, respectively. IOM costs amount to 328.03 euros per case for the surgical management of an intracranial aneurysm and to 537.15 euros per case for functional interventions. Expenses run up to 833.63 euros per case for operations near the motor cortex and to 117.65 euros per case for intraoperative speech monitoring. Costs for inpatient medical rehabilitation have increased considerably in recent years. In view of the financial situation, it is necessary to reduce postoperative morbidity and the costs it involves. IOM leads to a reduction of morbidity. The costs for IOM calculated here justify its routine application in view of the legal and socioeconomic consequences of surgery-related neurological deficits.
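The capital-cost figure quoted above follows directly from the stated assumptions, as this quick arithmetic check shows (figures taken from the abstract itself):

# Check of the capital-cost arithmetic: 20% annual depreciation of the
# 66,467-euro device spread over 200 cases per year.
device_price = 66_467.0                     # euros, from the abstract
annual_depreciation = 0.20 * device_price
cases_per_year = 200
print(annual_depreciation)                   # 13,293.4 euros per year
print(annual_depreciation / cases_per_year)  # ~66.47 euros per case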
NASA Astrophysics Data System (ADS)
Ma, Kevin; Moin, Paymann; Zhang, Aifeng; Liu, Brent
2010-03-01
Bone Age Assessment (BAA) of children is a clinical procedure frequently performed in pediatric radiology to evaluate the stage of skeletal maturation based on the left hand x-ray radiograph. The current BAA standard in the US is the Greulich & Pyle (G&P) Hand Atlas, which was developed fifty years ago and was based only on a Caucasian population from the Midwest US. To bring the BAA procedure up to date with today's population, a Digital Hand Atlas (DHA) consisting of 1400 hand images of normal children of different ethnicities, ages, and genders was developed. Based on the DHA, and to resolve inter- and intra-observer reading discrepancies, an automatic computer-aided bone age assessment system has been developed and tested in clinical environments. The algorithm utilizes features extracted from three regions of interest: phalanges, carpal bones, and radius. The features are aggregated into a fuzzy logic system, which outputs the calculated bone age. The previous BAA system only used features from the phalanges and carpal bones, so the BAA result for children over the age of 15 was less accurate. In this project, the new radius features are incorporated into the overall BAA system. The bone age results, calculated from the new fuzzy logic system, are compared against radiologists' readings based on the G&P atlas, and exhibit an improvement in reading accuracy for older children.
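As a minimal stand-in for the aggregation step, the sketch below fuses region-specific age estimates with confidence weights. The real system uses fuzzy membership functions rather than this plain weighted average, and all numbers are invented; it only illustrates why a radius-based estimate helps once carpal features saturate in older children.

import numpy as np

# Hedged stand-in for the fuzzy aggregation: weighted fusion of per-region
# age estimates (phalanges, carpal, radius), weights acting as confidences.
def fuse_bone_age(estimates, weights):
    w = np.asarray(weights, dtype=float)
    return float(np.dot(estimates, w / w.sum()))

# Hypothetical 16-year-old: carpal features saturate (low weight), radius helps.
print(fuse_bone_age([15.2, 12.0, 16.1], [0.5, 0.1, 0.9]))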
NASA Astrophysics Data System (ADS)
Xu, Weimin; Chen, Shi; Lu, Hongyan
2016-04-01
Integrated gravity is an efficient way of studying the spatial and temporal characteristics of dynamics and tectonics. Differential measurements based on continuous and discrete gravity observations are highly competitive, in terms of both efficiency and precision, with single-instrument results. The differential continuous gravity variation between nearby stations is based on observations with Scintrex g-Phone relative gravimeters at every single station. It is combined with repeated mobile relative measurements or absolute results to study the regional integrated gravity changes. First, we preprocess the continuous records with the Tsoft software and calculate the theoretical earth tides and ocean tides with the "MT80TW" program, using high precision tidal parameters from "WPARICET". The atmospheric loading effects and complex drift are strictly accounted for in the procedure. Through the above steps we obtain the continuous gravity at every station and can calculate the continuous gravity variation between nearby stations, which is called the differential continuous gravity change. Then the differential result between related stations is calculated based on the repeated gravity measurements, which are carried out once or twice every year around the gravity stations. Hence we obtain the discrete gravity results between nearby stations. Finally, the continuous and discrete gravity results are combined at the same related stations, including the absolute gravity results if necessary, to obtain the regional integrated gravity changes. This differential gravity result is more accurate and effective for dynamical monitoring, regional hydrologic effect studies, tectonic activity and other geodynamical research. The time-frequency characteristics of the continuous gravity results are discussed to ensure accuracy and efficiency in the procedure.
Resnick, Cory M; Daniels, Kimberly M; Flath-Sporn, Susan J; Doyle, Michael; Heald, Ronald; Padwa, Bonnie L
2016-11-01
To determine the effects on time, cost, and complication rates of integrating physician assistants (PAs) into the procedural components of an outpatient oral and maxillofacial surgery practice. This is a prospective cohort study of patients from the Department of Plastic and Oral Surgery at Boston Children's Hospital who underwent removal of 4 impacted third molars with intravenous sedation in our outpatient facility. Patients were separated into a "no PA" group and a PA group. Process maps were created to capture all activities from room preparation to patient discharge, and all activities were timed for each case. A time-driven activity-based costing method was used to calculate the average times and costs from the provider's perspective for each group. Complication rates were calculated during the study periods for both groups. Descriptive statistics were calculated, and significance was set at P < .05. The total process time did not differ significantly between groups, but the average total procedure cost decreased by $75.08 after the introduction of PAs (P < .001). The time that the oral and maxillofacial surgeon was directly involved in the procedure decreased by an average of 19.2 minutes after the introduction of PAs (P < .001). No significant differences in postoperative complications were found. The addition of PAs to the procedural components of an outpatient oral and maxillofacial surgery practice resulted in decreased costs, whereas complication rates remained constant. The increased availability of the oral and maxillofacial surgeon after the incorporation of PAs allows more patients to be seen during a clinic session, which has the potential to further increase efficiency and revenue. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
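The time-driven activity-based costing (TDABC) method used here has a simple core: each activity on the process map is timed and multiplied by the capacity cost rate of the resource performing it. The sketch below illustrates that arithmetic with an invented surgeon/PA task split; the steps, minutes and rates are placeholders, not the study's data.

# Hedged TDABC sketch: cost per case = sum of (activity minutes x cost
# rate per minute of the resource performing it). All figures invented.
def tdabc_cost(activities):
    """activities: list of (minutes, cost_rate_per_minute) tuples."""
    return sum(minutes * rate for minutes, rate in activities)

without_pa = [(10, 0.8), (45, 6.0), (15, 0.8)]              # prep, surgeon, discharge
with_pa    = [(10, 0.8), (26, 6.0), (19, 1.5), (15, 0.8)]   # PA takes over part
print(tdabc_cost(without_pa) - tdabc_cost(with_pa))          # saving per case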
Round-off errors in cutting plane algorithms based on the revised simplex procedure
NASA Technical Reports Server (NTRS)
Moore, J. E.
1973-01-01
This report statistically analyzes computational round-off errors associated with the cutting plane approach to solving linear integer programming problems. Cutting plane methods require that the inverse of a sequence of matrices be computed. The problem basically reduces to one of minimizing round-off errors in the sequence of inverses. Two procedures for minimizing this problem are presented, and their influence on error accumulation is statistically analyzed. One procedure employs a very small tolerance factor to round computed values to zero. The other procedure is a numerical analysis technique for reinverting or improving the approximate inverse of a matrix. The results indicated that round-off accumulation can be effectively minimized by employing a tolerance factor which reflects the number of significant digits carried for each calculation and by applying the reinversion procedure once to each computed inverse. If 18 significant digits plus an exponent are carried for each variable during computations, then a tolerance value of 0.1 × 10^-12 is reasonable.
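The two error-control devices described can be sketched in a few lines. The report does not name its reinversion technique; the Newton-Schulz (Hotelling) step below is one standard way of improving an approximate inverse, used here purely as an illustration, and the test matrix is invented.

import numpy as np

# Hedged sketch: round tiny values to zero with the report's tolerance,
# and apply one Newton-Schulz step X <- X(2I - AX), which squares the
# residual I - AX of an approximate inverse.
TOL = 0.1e-12  # tolerance reflecting ~18 significant digits, per the report

def clean(M, tol=TOL):
    M = M.copy()
    M[np.abs(M) < tol] = 0.0
    return M

def refine_inverse(A, X):
    """One Newton-Schulz refinement step on an approximate inverse X of A."""
    return X @ (2.0 * np.eye(A.shape[0]) - A @ X)

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X = np.linalg.inv(A) + 1e-8 * np.ones((2, 2))   # deliberately perturbed inverse
X1 = clean(refine_inverse(A, X))
print(np.max(np.abs(A @ X1 - np.eye(2))))        # residual shrinks quadratically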
Sauter, Thomas C; Hautz, Wolf E; Hostettler, Simone; Brodmann-Maeder, Monika; Martinolli, Luca; Lehmann, Beat; Exadaktylos, Aristomenis K; Haider, Dominik G
2016-08-02
Sedation is a procedure required for many interventions in the Emergency Department (ED), such as reductions, surgical procedures or cardioversions. However, especially under emergency conditions with high risk patients and rapidly changing interdisciplinary and interprofessional teams, the procedure carries important risks. It is thus vital but difficult to implement a standard operating procedure for sedation procedures in any ED. Reports on both implementation strategies and their success are currently lacking. This study describes the development, implementation and clinical evaluation of an interprofessional and interdisciplinary simulation-based sedation training concept. All physicians and nurses with specialised training in emergency medicine at the Berne University Department of Emergency Medicine participated in a mandatory interdisciplinary and interprofessional simulation-based sedation training. The curriculum consisted of an individual self-learning module, an airway skill training course, three simulation-based team training cases, and a final practical learning course in the operating theatre. Before and after each training session, self-efficacy, awareness of emergency procedures, knowledge of sedation medication and crisis resource management were assessed with a questionnaire. Changes in these measures were compared via paired tests, separately for groups formed based on experience and profession. To assess the clinical effect of training, we collected patient and team satisfaction as well as duration and complications for all sedations in the ED within the year after implementation. We further compared time to beginning of procedure, duration of procedure and time until discharge after implementation with the one-year period before implementation. Cohen's d was calculated as the effect size for all statistically significant tests. Fifty staff members (26 nurses and 24 physicians) participated in the training. In all subgroups, there is a significant increase in self-efficacy and knowledge, with a high effect size (d_z = 1.8). The learning is independent of profession and experience level. In the clinical evaluation after implementation, we found no major complications among the sedations performed. Time to procedure improved significantly after the introduction of the training (d = 0.88). Learning is independent of previous working experience and equally effective in raising self-efficacy and knowledge in all professional groups. Clinical outcome evaluation confirms the concept's safety and feasibility. An interprofessional and interdisciplinary simulation-based sedation training is an efficient way to implement a conscious sedation concept in an ED.
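The paired effect size reported above, Cohen's d_z, is the mean of the pre/post differences divided by their standard deviation. The sketch below computes it on invented questionnaire scores, purely to show the calculation; it is not the study's data.

import numpy as np

# Hedged sketch of the paired effect size: d_z = mean(diff) / sd(diff).
def cohens_dz(pre, post):
    diff = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return diff.mean() / diff.std(ddof=1)

pre  = [3.1, 2.8, 3.5, 2.9, 3.0, 3.3]   # e.g., self-efficacy before training
post = [4.2, 3.0, 4.6, 3.8, 3.5, 4.5]   # invented post-training scores
print(cohens_dz(pre, post))              # ~2.1 for these invented data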
10 CFR 434.507 - Calculation procedure and simulation tool.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Calculation procedure and simulation tool. 434.507 Section 434.507 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.507...
40 CFR 98.175 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.55 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in paragraphs...
40 CFR 98.265 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.175 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.115 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.175 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.55 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in paragraphs...
40 CFR 98.65 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations, according to the following...
40 CFR 98.65 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations, according to the following...
40 CFR 98.115 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.115 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.115 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.225 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in paragraphs...
40 CFR 98.175 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.115 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.225 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in paragraphs...
40 CFR 98.65 - Procedures for estimating missing data.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations, according to the following...
40 CFR 98.55 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in paragraphs...
40 CFR 98.55 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in paragraphs...
40 CFR 98.65 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations, according to the following...
40 CFR 98.265 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter must be used in the calculations as specified in paragraphs...
40 CFR 98.175 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.225 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in paragraphs...
40 CFR 98.65 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations, according to the following...
40 CFR 98.225 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... estimating missing data. A complete record of all measured parameters used in the GHG emissions calculations... substitute data value for the missing parameter shall be used in the calculations as specified in paragraphs...
Wenke, A; Gaber, A; Hertle, L; Roeder, N; Pühse, G
2012-07-01
Precise and complete coding of diagnoses and procedures is of value for optimizing revenues within the German diagnosis-related groups (G-DRG) system. The implementation of effective structures for coding is cost-intensive. The aim of this study was to determine whether these higher costs can be refunded by complete acquisition of comorbidities and complications. Calculations were based on DRG data of the Department of Urology, University Hospital of Münster, Germany, covering all patients treated in 2009. The data were regrouped and subjected to a process of simulation (increase and decrease of patient clinical complexity levels, PCCL) with the help of recently developed software. In urology, the quantity and quality of secondary-diagnosis coding were found to strongly affect PCCL and, in turn, profits. Departmental budgetary procedures can be optimized when coding is effective. The new simulation tool can be a valuable aid to improve profits available for distribution. Nevertheless, calculations of the time and financial resources required by this procedure are subject to specific departmental terms and conditions. Completeness of coding of (secondary) diagnoses must be the ultimate administrative goal of patient case documentation in urology.
Design and Analysis of Hydrostatic Transmission System
NASA Astrophysics Data System (ADS)
Mistry, Kayzad A.; Patel, Bhaumikkumar A.; Patel, Dhruvin J.; Parsana, Parth M.; Patel, Jitendra P.
2018-02-01
This study develops a hydraulic circuit to drive a conveying system dealing with heavy and delicate loads. Various safety circuits have been added in order to ensure stable operation at high pressure and precise control. The calculation procedure is shown for an arbitrarily selected load. The circuit design and the calculations for the various components used are also presented, together with a simulation of the system. The results show that the circuit functions stably and is efficient enough to transmit heavy loads. With this information, one can design hydrostatic circuits for various heavy-loading conditions.
Sjöström, Susanne; Kopp Kallner, Helena; Simeonova, Emilia; Madestam, Andreas; Gemzell-Danielsson, Kristina
2016-01-01
The objective of the present study is to calculate the cost-effectiveness of early medical abortion performed by nurse-midwives in comparison to physicians in a high-resource setting where ultrasound dating is part of the protocol. Non-physician health care professionals have previously been shown to provide medical abortion as effectively and safely as physicians, but the cost-effectiveness of such task shifting remains to be established. A cost-effectiveness analysis was conducted based on data from a previously published randomized controlled equivalence study including 1180 healthy women randomized to the standard procedure, early medical abortion provided by physicians, or the intervention, provision by nurse-midwives. A 1.6% risk difference for efficacy, defined as complete abortion without surgical intervention, in favor of midwife provision was established, which means that for every 100 procedures the intervention resulted in 1.6 fewer incomplete abortions needing surgical intervention than the standard treatment. The average direct and indirect costs and the incremental cost-effectiveness ratio (ICER) were calculated. The study was conducted at a university hospital in Stockholm, Sweden. The average direct costs per procedure were EUR 45 for the intervention compared to EUR 58.3 for the standard procedure. Both the cost and the efficacy of the intervention were superior to the standard treatment, resulting in a negative ICER of EUR -831 based on direct costs and EUR -1769 considering total costs per surgical intervention avoided. Early medical abortion provided by nurse-midwives is more cost-effective than provision by physicians. This evidence provides clinicians and decision makers with an important tool that may influence policy and clinical practice and eventually increase the number of abortion providers and reduce one barrier to women's access to safe abortion.
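The ICER arithmetic reported above can be reproduced directly. Below is a minimal Python sketch; the function and variable names are ours, and the inputs are the figures quoted in the abstract:

    # ICER = incremental cost / incremental effect. Values from the abstract;
    # names are illustrative, not from the original study.
    def icer(cost_intervention, cost_standard, effect_difference):
        return (cost_intervention - cost_standard) / effect_difference

    # 1.6% risk difference = 1.6 fewer surgical interventions per 100 procedures,
    # i.e. 0.016 surgical interventions avoided per procedure.
    print(icer(45.0, 58.3, 0.016))   # -831.25 EUR per surgical intervention avoided

The negative ratio simply reflects that the intervention is both cheaper and more effective, so it dominates the standard procedure.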
Microstructure based procedure for process parameter control in rolling of aluminum thin foils
NASA Astrophysics Data System (ADS)
Kronsteiner, Johannes; Kabliman, Evgeniya; Klimek, Philipp-Christoph
2018-05-01
In the present work, a microstructure-based procedure is used for numerical prediction of the strength properties of Al-Mg-Sc thin foils during a hot rolling process. For this purpose, the following techniques were developed and implemented. First, a toolkit for numerical analysis of experimental stress-strain curves obtained during hot compression testing with a deformation dilatometer was developed. The implemented techniques allow for the correction of the temperature increase in samples due to adiabatic heating and for the determination of the yield strength needed to separate the elastic and plastic deformation regimes during numerical simulation of multi-pass hot rolling. In the next step, an asymmetric Hot Rolling Simulator (adjustable table inlet/outlet height as well as separate roll infeed) was developed in order to match the exact processing conditions of a semi-industrial rolling procedure. At each element of a finite element mesh the total strength is calculated by an in-house Flow Stress Model based on the evolution of mean dislocation density. The strength values obtained by numerical modelling were found to be in reasonable agreement with the results of tensile tests on thin Al-Mg-Sc foils. Thus, the proposed simulation procedure may allow the processing parameters to be optimized with respect to microstructure development.
MIRACAL: A mission radiation calculation program for analysis of lunar and interplanetary missions
NASA Technical Reports Server (NTRS)
Nealy, John E.; Striepe, Scott A.; Simonsen, Lisa C.
1992-01-01
A computational procedure and data base are developed for manned space exploration missions, for which estimates are made of the energetic particle fluences encountered and the resulting dose equivalent incurred. The data base includes the following options: a statistical or continuum model for ordinary solar proton events, selection of up to six large proton flare spectra, and galactic cosmic ray fluxes for elemental nuclei of charge numbers 1 through 92. The program requires input trajectory definition information and specification of optional parameters, which include desired spectral data and nominal shield thickness. The procedure may be implemented as an independent program or as a subroutine in trajectory codes. This code should be most useful in mission optimization and selection studies for which radiation exposure is of special importance.
NASA Astrophysics Data System (ADS)
Nielsen, Jens C. O.; Li, Xin
2018-01-01
An iterative procedure for numerical prediction of the long-term degradation of railway track geometry (longitudinal level) due to accumulated differential settlement of ballast/subgrade is presented. The procedure is based on a time-domain model of dynamic vehicle-track interaction to calculate the contact loads between sleepers and ballast in the short term, which are then used in an empirical model to determine the settlement of ballast/subgrade below each sleeper in the long term. The number of load cycles (wheel passages) accounted for in each iteration step is determined by an adaptive step length given by a maximum settlement increment. To reduce the computational effort of the simulations of dynamic vehicle-track interaction, complex-valued modal synthesis with a truncated modal set is applied for the linear subset of the discretely supported track model with non-proportional spatial distribution of viscous damping. Gravity loads and state-dependent vehicle, track and wheel-rail contact conditions are accounted for as external loads on the modal model, including situations involving loss of (and recovered) wheel-rail contact, impact between a hanging sleeper and ballast, and/or a prescribed variation of non-linear track support stiffness properties along the track model. The procedure is demonstrated by calculating the degradation of longitudinal level over time as initiated by a prescribed initial local rail irregularity (a dipped welded rail joint).
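The bookkeeping of the iteration can be sketched in a few lines of Python. The settlement law below (a power law in sleeper load and cycle count) and every constant are placeholders, not the paper's empirical model; only the adaptive-step structure is the point:

    import numpy as np

    def sleeper_loads(profile):
        """Stand-in for the short-term dynamic vehicle-track simulation (kN)."""
        return 50.0 + 200.0 * np.abs(np.gradient(profile))

    def settlement_rate(P, N, k=1e-6, alpha=2.0, beta=-0.7):
        """Assumed incremental settlement per load cycle, ds/dN (mm/cycle)."""
        return k * P**alpha * (N + 1.0)**beta

    level = 0.1 * np.exp(-np.linspace(-3, 3, 81)**2)    # dipped-joint irregularity (mm)
    settlement = np.zeros_like(level)
    N, N_total, ds_max = 0, 2_000_000, 0.5              # cap each increment at 0.5 mm

    while N < N_total:
        rate = settlement_rate(sleeper_loads(level + settlement), N)
        dN = max(1, min(int(ds_max / rate.max()), N_total - N))  # adaptive step length
        settlement += rate * dN                         # accumulate differential settlement
        N += dN
    print(f"after {N} cycles: max settlement {settlement.max():.2f} mm")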
Radiation exposure and safety practices during pediatric central line placement
Saeman, Melody R.; Burkhalter, Lorrie S.; Blackburn, Timothy J.; Murphy, Joseph T.
2015-01-01
Purpose Pediatric surgeons routinely use fluoroscopy for central venous line (CVL) placement. We examined radiation safety practices and patient/surgeon exposure during fluoroscopic CVL. Methods Fluoroscopic CVL procedures performed by 11 pediatric surgeons in 2012 were reviewed. Fluoroscopic time (FT), patient exposure (mGy), and procedural data were collected. Anthropomorphic phantom simulations were used to calculate scatter and dose (mSv). Surgeons were surveyed regarding safety practices. Results 386 procedures were reviewed. Median FT was 12.8 seconds. Median patient estimated effective dose was 0.13 mSv. Median annual FT per surgeon was 15.4 minutes. Simulations showed no significant difference (p = 0.14) between reported exposures (median 3.5 mGy/min) and the modeled regression exposures from the C-arm default mode (median 3.4 mGy/min). Median calculated surgeon exposure was 1.5 mGy/year. Eight of 11 surgeons responded to the survey. Only three reported 100% lead protection and frequent dosimeter use. Conclusion We found non-standard radiation training, safety practices, and dose monitoring for the 11 surgeons. Based on simulations, the C-arm default setting was typically used instead of low dose. While most CVL procedures have low patient/surgeon doses, every effort should be used to minimize patient and occupational exposure, suggesting the need for formal hands-on training for non-radiologist providers using fluoroscopy. PMID:25837269
Influence of time of concentration on variation of runoff from a small urbanized watershed
Devendra Amatya; Agnieszka Cupak; Andrzej Walega
2015-01-01
The main objective of the paper is to estimate the influence of time of concentration (TC) on maximum flow in an urbanized watershed. The calculations of maximum flow have been carried out using the Rational method, Technical Release 55 (TR55) procedure based on NRCS (National Resources Conservation Services) guidelines, and NRCS-UH rainfall-runoff model. Similarly,...
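The Rational method referenced above reduces to a one-line formula in which TC enters through the choice of design rainfall intensity. A minimal Python sketch in SI units with illustrative values:

    # Rational method peak flow in SI form: Q = C * i * A / 360, with Q in
    # m^3/s, runoff coefficient C (-), rainfall intensity i in mm/h for a
    # duration equal to the time of concentration TC, and area A in hectares.
    def rational_peak_flow(C, i_mm_per_h, area_ha):
        return C * i_mm_per_h * area_ha / 360.0

    # A shorter TC gives a higher design intensity for the same return period.
    print(rational_peak_flow(0.7, 60.0, 25.0))   # ~2.92 m^3/s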
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jackson, R.J.
1976-11-01
The FFTF fuel pin design analysis is shown to be conservative through comparison with pin irradiation experience in EBR-II. This comparison shows that the actual lifetimes of EBR-II fuel pins are either greater than 80,000 MWd/MTM or greater than the calculated allowable lifetimes based on thermal creep strain.
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
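For reference, a posterior predictive p-value is obtained by simulating replicated data from the posterior and comparing a discrepancy measure on replicated versus observed data. A minimal Python sketch, using a toy normal model with known variance and a flat prior (our choice, not the paper's Hubble-expansion test problem):

    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.normal(0.0, 1.0, size=50)                  # "observed" data
    sigma, n, ybar = 1.0, y.size, y.mean()

    def discrepancy(data, mu):
        return np.sum((data - mu)**2) / sigma**2       # chi-squared-like T(y, theta)

    draws, exceed = 5000, 0
    for _ in range(draws):
        mu = rng.normal(ybar, sigma / np.sqrt(n))      # draw theta from the posterior
        y_rep = rng.normal(mu, sigma, size=n)          # replicate data under theta
        exceed += discrepancy(y_rep, mu) >= discrepancy(y, mu)
    print("posterior predictive p-value:", exceed / draws)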
Automation photometer of Hitachi U–2000 spectrophotometer with RS–232C–based computer
Kumar, K. Senthil; Lakshmi, B. S.; Pennathur, Gautam
1998-01-01
The interfacing of a commonly used spectrophotometer, the Hitachi U2000, through its RS–232C port to a IBM compatible computer is described. The hardware for data acquisation was designed by suitably modifying readily available materials, and the software was written using the C programming language. The various steps involved in these procedures are elucidated in detail. The efficacy of the procedure was tested experimentally by running the visible spectrum of a cyanine dye. The spectrum was plotted through a printer hooked to the computer. The spectrum was also plotted by transforming the abscissa to the wavenumber scale. This was carried out by using another module written in C. The efficiency of the whole set-up has been calculated using standard procedures. PMID:18924834
A loudness calculation procedure applied to shaped sonic booms
NASA Technical Reports Server (NTRS)
Shepherd, Kevin P.; Sullivan, Brenda M.
1991-01-01
Described here is a procedure that can be used to calculate the loudness of sonic booms. The procedure is applied to a wide range of sonic booms, both classical N-waves and a variety of other shapes of booms. The loudness of N-waves is controlled by overpressure and the associated rise time. The loudness of shaped booms is highly dependent on the characteristics of the initial shock. A comparison of the calculated loudness values indicates that shaped booms may have significantly reduced loudness relative to N-waves having the same peak overpressure. This result implies that a supersonic transport designed to yield minimized sonic booms may be substantially more acceptable than an unconstrained design.
Mode calculations in unstable resonators with flowing saturable gain. 1: Hermite-Gaussian expansion.
Siegman, A E; Sziklas, E A
1974-12-01
We present a procedure for calculating the three-dimensional mode pattern, the output beam characteristics, and the power output of an oscillating high-power laser, taking into account a nonuniform, transversely flowing, saturable gain medium; index inhomogeneities inside the laser resonator; and arbitrary mirror distortion and misalignment. The laser is divided into a number of axial segments. The saturated gain-and-index variation across each short segment is lumped into a complex gain profile across the midplane of that segment. The circulating optical wave within the resonator is propagated from midplane to midplane in free-space fashion and is multiplied by the lumped complex gain profile upon passing through each midplane. After each complete round trip of the optical wave inside the resonator, the saturated gain profiles are recalculated based upon the circulating fields in the cavity. The procedure, when applied to typical unstable-resonator flowing-gain lasers, shows convergence to a single distorted steady-state mode of oscillation. Typical near-field and far-field results are presented. Several empirical rules of thumb for finite truncated Hermite-Gaussian expansions, including an approximate sampling theorem, have been developed as part of the calculations.
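The round-trip iteration lends itself to a compact numerical sketch. The Python fragment below is heavily simplified relative to the paper: a 1D plane-mirror cavity with a single lumped saturable-gain sheet stands in for the multi-segment unstable resonator, and all parameter values are illustrative:

    import numpy as np

    def propagate(E, dx, dist, lam):
        """Paraxial free-space propagation via the angular spectrum."""
        fx = np.fft.fftfreq(E.size, dx)
        return np.fft.ifft(np.fft.fft(E) * np.exp(-1j * np.pi * lam * dist * fx**2))

    n, width, lam, Lc = 1024, 0.02, 10.6e-6, 1.0        # grid (m), 10.6 um, 1 m cavity
    x = (np.arange(n) - n / 2) * (width / n)
    mirror = (np.abs(x) < 0.004).astype(float)          # hard-edged mirrors
    g0L, I_sat = 0.6, 1.0                               # small-signal gain-length, saturation

    E = mirror * (1 + 0.01 * np.random.default_rng(0).standard_normal(n))
    for round_trip in range(400):
        gain = g0L / (1 + np.abs(E)**2 / I_sat)         # recalculated once per round trip
        for _ in range(2):                              # two passes per round trip
            E = propagate(E, width / n, Lc / 2, lam)
            E = E * np.exp(gain / 2)                    # lumped gain sheet at midplane
            E = propagate(E, width / n, Lc / 2, lam)
            E = E * mirror                              # aperture loss at the mirror
    print("circulating power after 400 round trips:", float(np.sum(np.abs(E)**2)))

As in the paper's procedure, the saturable gain bounds the growth of the circulating field, so the iteration settles toward a steady-state profile.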
Numerical investigations of low-density nozzle flow by solving the Boltzmann equation
NASA Technical Reports Server (NTRS)
Deng, Zheng-Tao; Liaw, Goang-Shin; Chou, Lynn Chen
1995-01-01
A two-dimensional finite-difference code to solve the BGK-Boltzmann equation has been developed. The solution procedure consists of three steps: (1) transforming the BGK-Boltzmann equation into two simultaneous partial differential equations by taking moments of the distribution function with respect to the molecular velocity u(sub z), with weighting factors 1 and u(sub z)(sup 2); (2) solving the transformed equations in physical space, for a given discrete ordinate, based on a time-marching technique with four-stage Runge-Kutta time integration, where Roe's second-order upwind difference scheme is used to discretize the convective terms and the collision terms are treated as source terms; and (3) using the newly calculated distribution functions at each point in physical space to calculate the macroscopic flow parameters by the modified Gaussian quadrature formula. Repeating steps 2 and 3, the time-marching procedure stops when the convergence criterion is reached. A low-density nozzle flow field has been calculated with this newly developed code. The BGK-Boltzmann solution and experimental data show excellent agreement, demonstrating that numerical solutions of the BGK-Boltzmann equation are ready to be validated experimentally.
Application of ATC/DDD methodology to evaluate perioperative antimicrobial prophylaxis.
Akalin, Serife; Kutlu, Selda Sayin; Cirak, Bayram; Eskiçorapcı, Saadettin Yilmaz; Bagdatli, Dilek; Akkaya, Semih
2012-02-01
To evaluate the quality of perioperative antibiotic prophylaxis (PAP) and to calculate the cost per procedure in a Turkish university hospital. A 352-bed teaching hospital in Denizli, Turkey. A prospective audit was performed between July and October 2010. All clean, clean-contaminated and contaminated elective surgical procedures in ten surgical wards were recorded. Antimicrobial use was calculated per procedure using the ATC/DDD system. The appropriateness of antibiotic use for each procedure was evaluated according to international guidelines on PAP. In addition, the cost per procedure was calculated. Overall, in 577 of the 625 (92.3%) studied procedures, PAP was used. PAP was indicated in 12.5% of the group where it was not used, and not indicated in 7.1% of the group where it was used. Unnecessarily prolonged antimicrobial prophylaxis was observed in 56.9% of the procedures; the mean duration was 2.6 ± 2.7 days. The most frequently used antimicrobials were cefazolin (117.9 DDD/100 operations) and sulbactam/ampicillin (102.2 DDD/100 operations). The timing of the starting dose was appropriate in 545 procedures (94.5%). In the group that received PAP, only 80 (13.7%) of the procedures were found to be fully appropriate and correct. The density of antimicrobial use per operation was 2.8 DDD. The mean cost of the use of prophylactic antimicrobials
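The DDD-based density figures above follow from simple arithmetic. A Python sketch, assuming the WHO defined daily dose of 3 g for parenteral cefazolin and a hypothetical total consumption chosen to reproduce the reported 117.9 DDD/100 operations (check the current ATC/DDD index before reuse):

    # Antimicrobial use density: DDDs consumed per 100 procedures.
    def ddd_per_100_procedures(total_grams, who_ddd_grams, n_procedures):
        return total_grams / who_ddd_grams / n_procedures * 100.0

    # Hypothetical: ~2,210.6 g of cefazolin across the 625 studied procedures.
    print(round(ddd_per_100_procedures(2210.6, 3.0, 625), 1))   # 117.9 DDD/100 operations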
NASA Astrophysics Data System (ADS)
De Lucas, Javier; Segovia, José Juan
2018-05-01
Blackbody cavities are the standard radiation sources widely used in the fields of radiometry and radiation thermometry. Their effective emissivity and its uncertainty depend to a large extent on the temperature gradient. An experimental procedure based on the radiometric method for measuring the gradient is followed. Results are applied to particular blackbody configurations where gradients can be estimated thermometrically by contact thermometers and where the relationship between the two basic methods can be established. The proposed procedure may be applied to commercial blackbodies if they are modified to allow a secondary contact temperature measurement. In addition, the established systematic procedure may be incorporated as part of the actions for quality assurance in routine calibrations of radiation thermometers, by using the secondary contact temperature measurement to detect departures from the real, radiometrically obtained gradient and the effect on the uncertainty. On the other hand, a theoretical model is proposed to evaluate the effect of temperature variations on effective emissivity and the associated uncertainty. This model is based on a gradient sample chosen following plausible criteria. The model is consistent with the Monte Carlo method for calculating the uncertainty of effective emissivity and complements others published in the literature where uncertainty is calculated taking into account only geometrical variables and intrinsic emissivity. The mathematical model and experimental procedure are applied and validated using a commercial three-zone furnace, with a blackbody cavity modified to enable a secondary contact temperature measurement, in the range between 400 °C and 1000 °C.
Fire resistance of structural composite lumber products
Robert H. White
2006-01-01
Use of structural composite lumber products is increasing. In applications requiring a fire resistance rating, calculation procedures are used to obtain the fire resistance rating of exposed structural wood products. A critical factor in the calculation procedures is char rate for ASTM E 119 fire exposure. In this study, we tested 14 structural composite lumber...
40 CFR 98.85 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations. The owner or operator must...
40 CFR 98.85 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations. The owner or operator must...
40 CFR 98.295 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... estimating missing data. For the emission calculation methodologies in § 98.293(b)(2) and (b)(3), a complete... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.85 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations. The owner or operator must...
40 CFR 98.185 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations as specified in the...
40 CFR 98.85 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations. The owner or operator must...
40 CFR 98.185 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations as specified in the...
40 CFR 98.295 - Procedures for estimating missing data.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 22 2012-07-01 2012-07-01 false Procedures for estimating missing data... estimating missing data. For the emission calculation methodologies in § 98.293(b)(2) and (b)(3), a complete... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.185 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations as specified in the...
40 CFR 98.295 - Procedures for estimating missing data.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 21 2011-07-01 2011-07-01 false Procedures for estimating missing data... estimating missing data. For the emission calculation methodologies in § 98.293(b)(2) and (b)(3), a complete... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
40 CFR 98.185 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 22 2013-07-01 2013-07-01 false Procedures for estimating missing data... missing data. A complete record of all measured parameters used in the GHG emissions calculations in § 98... substitute data value for the missing parameter shall be used in the calculations as specified in the...
40 CFR 98.295 - Procedures for estimating missing data.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 21 2014-07-01 2014-07-01 false Procedures for estimating missing data... estimating missing data. For the emission calculation methodologies in § 98.293(b)(2) and (b)(3), a complete... unavailable, a substitute data value for the missing parameter shall be used in the calculations as specified...
Management of leg and pressure ulcer in hospitalized patients: direct costs are lower than expected.
Assadian, Ojan; Oswald, Joseph S; Leisten, Rainer; Hinz, Peter; Daeschlein, Georg; Kramer, Axel
2011-01-01
In Germany, cost calculations on the financial burden of wound treatment are scarce. Studies of attributable costs in hospitalized patients estimate additional costs of € 6,135.50 per patient for pressure ulcers, based on the assumption that pressure ulcers lead to prolonged hospitalization averaging 2 months. The scant data available in this field prompted us to conduct a prospective economic study assessing the direct costs of treatment of chronic ulcers in hospitalized patients. The study was designed and conducted as an observational, prospective, multi-centre economic study over a period of 8 months in three community hospitals in Germany. Direct treatment costs for leg ulcers (n=77) and pressure ulcers (n=35) were determined by observing 67 patients (average age: 75±12 years). 109 treatments representing 111 in-ward admissions and 62 outpatient visits were observed. During a total of 3,331 hospitalized and 867 outpatient wound therapies, 4,198 wound dressing changes were documented. Costs of material were calculated on a per-item basis. Direct costs of care and treatment, including materials used, surgical interventions, and personnel costs, were determined. An average of € 1,342 per patient (€ 48/d) was spent on treatment of leg ulcers (staff costs € 581, consumables € 458, surgical procedures € 189, and diagnostic procedures € 114). On average, each wound dressing change caused additional costs of € 15. For pressure ulcers, € 991 per patient (€ 52/d) was spent on average (staff costs € 313, consumables € 618, and surgical procedures € 60). Each wound dressing change resulted in additional costs of € 20 on average. When direct costs of chronic wounds are calculated on a prospective case-by-case basis for a treatment period of 3 months, these costs are lower than estimated to date. While reduction in the prevalence of chronic wounds along with optimised patient care will result in substantial cost savings, these savings might be lower than expected. Our results, however, do not provide a basis for conclusions on cost-benefit, either for the affected individual or for society.
NASA Astrophysics Data System (ADS)
Novoselov, V. B.; Shekhter, M. V.
2012-12-01
A refined procedure is presented for estimating the effect that the flashing of condensate in a steam turbine's regenerative and delivery-water heaters has on the increase in rotor rotation frequency during rejection of electric load. The results of calculations carried out according to the proposed procedure, as applied to the delivery-water and regenerative heaters of a T-110/120-12.8 turbine, are given.
Assessment of the MPACT Resonance Data Generation Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Kang Seog; Williams, Mark L.
Currently, heterogeneous models are used to generate resonance self-shielded cross-section tables as a function of background cross section for important nuclides such as 235U and 238U, by performing the CENTRM (Continuous Energy Transport Model) slowing-down calculation with the MOC (Method of Characteristics) spatial discretization, together with ESSM (Embedded Self-Shielding Method) calculations to obtain background cross sections. The resonance self-shielded cross-section tables are then converted into subgroup data, which are used in estimating problem-dependent self-shielded cross sections in MPACT (Michigan Parallel Characteristics Transport Code). Although this procedure has been developed, and the resulting resonance data have been generated and validated by benchmark calculations, no assessment has been performed to review whether the resonance data are properly generated by the procedure and properly utilized in MPACT. This study focuses on assessing the procedure and its proper use in MPACT.
The use of optimization techniques to design controlled diffusion compressor blading
NASA Technical Reports Server (NTRS)
Sanger, N. L.
1982-01-01
A method for automating compressor blade design using numerical optimization is presented and applied to the design of a controlled-diffusion stator blade row. A general-purpose optimization procedure is employed, based on conjugate directions for locally unconstrained problems and on feasible directions for locally constrained problems. Coupled to the optimizer is an analysis package consisting of three analysis programs which calculate blade geometry, inviscid flow, and blade surface boundary layers. The optimizing concepts and the selection of design objective and constraints are described. The procedure for automating the design of a two-dimensional blade section is discussed, and design results are presented.
Veeravagu, Anand; Li, Amy; Swinney, Christian; Tian, Lu; Moraff, Adrienne; Azad, Tej D; Cheng, Ivan; Alamin, Todd; Hu, Serena S; Anderson, Robert L; Shuer, Lawrence; Desai, Atman; Park, Jon; Olshen, Richard A; Ratliff, John K
2017-07-01
OBJECTIVE The ability to assess the risk of adverse events based on known patient factors and comorbidities would provide more effective preoperative risk stratification. Present risk assessment in spine surgery is limited. An adverse event prediction tool was developed to predict the risk of complications after spine surgery and tested on a prospective patient cohort. METHODS The spinal Risk Assessment Tool (RAT), a novel instrument for the assessment of risk for patients undergoing spine surgery that was developed based on an administrative claims database, was prospectively applied to 246 patients undergoing 257 spinal procedures over a 3-month period. Prospectively collected data were used to compare the RAT to the Charlson Comorbidity Index (CCI) and the American College of Surgeons National Surgery Quality Improvement Program (ACS NSQIP) Surgical Risk Calculator. Study end point was occurrence and type of complication after spine surgery. RESULTS The authors identified 69 patients (73 procedures) who experienced a complication over the prospective study period. Cardiac complications were most common (10.2%). Receiver operating characteristic (ROC) curves were calculated to compare complication outcomes using the different assessment tools. Area under the curve (AUC) analysis showed comparable predictive accuracy between the RAT and the ACS NSQIP calculator (0.670 [95% CI 0.60-0.74] in RAT, 0.669 [95% CI 0.60-0.74] in NSQIP). The CCI was not accurate in predicting complication occurrence (0.55 [95% CI 0.48-0.62]). The RAT produced mean probabilities of 34.6% for patients who had a complication and 24% for patients who did not (p = 0.0003). The generated predicted values were stratified into low, medium, and high rates. For the RAT, the predicted complication rate was 10.1% in the low-risk group (observed rate 12.8%), 21.9% in the medium-risk group (observed 31.8%), and 49.7% in the high-risk group (observed 41.2%). The ACS NSQIP calculator consistently produced complication predictions that underestimated complication occurrence: 3.4% in the low-risk group (observed 12.6%), 5.9% in the medium-risk group (observed 34.5%), and 12.5% in the high-risk group (observed 38.8%). The RAT was more accurate than the ACS NSQIP calculator (p = 0.0018). CONCLUSIONS While the RAT and ACS NSQIP calculator were both able to identify patients more likely to experience complications following spine surgery, both have substantial room for improvement. Risk stratification is feasible in spine surgery procedures; currently used measures have low accuracy.
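The AUC comparison reported above is a standard ROC computation. A minimal Python sketch using scikit-learn with synthetic stand-in scores (not study data), shaped to echo the reported mean predicted probabilities for patients with and without complications:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    complication = rng.integers(0, 2, size=257)        # observed outcome per procedure
    rat = np.clip(0.24 + 0.11 * complication + rng.normal(0, 0.15, 257), 0, 1)
    nsqip = np.clip(0.05 + 0.04 * complication + rng.normal(0, 0.08, 257), 0, 1)
    print("RAT AUC:", roc_auc_score(complication, rat))
    print("ACS NSQIP AUC:", roc_auc_score(complication, nsqip))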
Liu, Yun-Feng; Fan, Ying-Ying; Dong, Hui-Yue; Zhang, Jian-Xing
2017-12-01
The method used in biomechanical modeling for finite element method (FEM) analysis needs to deliver accurate results. There are currently two solutions used in FEM modeling of human bone from computerized tomography (CT) images: one is based on a triangular mesh and the other on a parametric surface model; the latter is more popular in practice. The outline and modeling procedures for the two solutions are compared and analyzed. Using a mandibular bone as an example, several key modeling steps are then discussed in detail, and the FEM calculation was conducted. Numerical results based on the models derived from the two methods, including stress, strain, and displacement, are compared and evaluated with respect to accuracy and validity. Moreover, a comprehensive comparison of the two solutions is presented. The parametric-surface-based method is more helpful when using powerful design tools in computer-aided design (CAD) software, but the triangular-mesh-based method is more robust and efficient.
NASA Technical Reports Server (NTRS)
West, Harry; Papadopoulos, Evangelos; Dubowsky, Steven; Cheah, Hanson
1989-01-01
Emulating on earth the weightlessness of a manipulator floating in space requires knowledge of the manipulator's mass properties. A method for calculating these properties by measuring the reaction forces and moments at the base of the manipulator is described. A manipulator is mounted on a 6-DOF sensor, and the reaction forces and moments at its base are measured for different positions of the links as well as for different orientations of its base. A procedure is developed to calculate from these measurements some combinations of the mass properties. The mass properties identified are not sufficiently complete for computed torque and other dynamic control techniques, but do allow compensation for the gravitational load on the links, and for simulation of weightless conditions on a space emulator. The algorithm has been experimentally demonstrated on a PUMA 260 and used to measure the independent combinations of the 16 mass parameters of the base and three proximal links.
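The core of such a procedure can be illustrated with a static special case: for a rigid arm held still, the base force is F = m g and the base moment is M = (m c) × g, with g the gravity vector expressed in the sensor frame for each base orientation. A Python sketch of the resulting least-squares identification follows; the paper's full 16-parameter, link-by-link procedure is not reproduced here:

    import numpy as np

    def skew(v):
        return np.array([[0, -v[2], v[1]], [v[2], 0, -v[0]], [-v[1], v[0], 0]])

    def identify(gs, Fs, Ms):
        g, F, M = np.asarray(gs), np.asarray(Fs), np.asarray(Ms)
        mass = np.mean(np.linalg.norm(F, axis=1) / np.linalg.norm(g, axis=1))
        A = np.vstack([-skew(gi) for gi in g])          # (m c) x g = -skew(g) (m c)
        mc, *_ = np.linalg.lstsq(A, M.ravel(), rcond=None)
        return mass, mc / mass                          # mass, centre-of-mass position

    # Synthetic check: m = 4 kg, c = (0.1, 0.0, 0.3) m, three base orientations.
    m0, c0 = 4.0, np.array([0.1, 0.0, 0.3])
    gs = [np.array([0.0, 0, -9.81]), np.array([-9.81, 0, 0]), np.array([0, -9.81, 0])]
    print(identify(gs, [m0 * g for g in gs], [np.cross(m0 * c0, g) for g in gs]))

Tilting the base supplies the different gravity directions that make the least-squares system well conditioned, which is why measurements over several base orientations are required.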
Temperature Histories in Ceramic-Insulated Heat-Sink Nozzle
NASA Technical Reports Server (NTRS)
Ciepluch, Carl C.
1960-01-01
Temperature histories were calculated for a composite nozzle wall by a simplified numerical integration calculation procedure. These calculations indicated that there is a unique ratio of insulation and metal heat-sink thickness that will minimize total wall thickness for a given operating condition and required running time. The optimum insulation and metal thickness will vary throughout the nozzle as a result of the variation in heat-transfer rate. The use of low chamber pressure results in a significant increase in the maximum running time of a given weight nozzle. Experimentally measured wall temperatures were lower than those calculated. This was due in part to the assumption of one-dimensional or slab heat flow in the calculation procedure.
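The kind of simplified one-dimensional (slab) integration described above can be sketched as an explicit finite-difference march through a two-layer wall. All material properties, layer thicknesses, and the hot-gas heat flux in this Python fragment are illustrative assumptions:

    import numpy as np

    # Ceramic insulation layer over a metal heat-sink wall, heated on the
    # hot-gas side, insulated on the back face.
    L_ins, L_met, n = 0.003, 0.010, 130
    x = np.linspace(0.0, L_ins + L_met, n)
    dx = x[1] - x[0]
    ins = x < L_ins
    k = np.where(ins, 1.5, 40.0)                            # conductivity, W/m-K
    rho_cp = np.where(ins, 2600.0 * 900.0, 7800.0 * 500.0)  # heat capacity, J/m^3-K
    q_hot = 2.0e6                                           # hot-gas heat flux, W/m^2
    T = np.full(n, 300.0)

    dt = 0.4 * dx**2 * (rho_cp / k).min() / 2.0             # explicit stability margin
    for _ in range(int(3.0 / dt)):                          # 3 s of running time
        k_face = 2 * k[1:] * k[:-1] / (k[1:] + k[:-1])      # harmonic mean at faces
        q = -k_face * (T[1:] - T[:-1]) / dx                 # interior heat fluxes
        dT = np.zeros(n)
        dT[0] = (q_hot - q[0]) * dt / (rho_cp[0] * dx)      # imposed-flux hot face
        dT[1:-1] = (q[:-1] - q[1:]) * dt / (rho_cp[1:-1] * dx)
        dT[-1] = q[-1] * dt / (rho_cp[-1] * dx)             # insulated back face
        T += dT
    print(f"hot face {T[0]:.0f} K, interface ~{T[ins][-1]:.0f} K, back {T[-1]:.0f} K")

Sweeping the insulation and metal thicknesses with such a model is one way to reproduce the trade-off the abstract describes, where a unique thickness ratio minimizes total wall thickness for a given running time.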
Analytical one-dimensional model for laser-induced ultrasound in planar optically absorbing layer.
Svanström, Erika; Linder, Tomas; Löfqvist, Torbjörn
2014-03-01
Ultrasound generated by means of laser-based photoacoustic principles is in common use today, and applications can be found in biomedical diagnostics, non-destructive testing and materials characterisation. For certain measurement applications it could be beneficial to shape the generated ultrasound with regard to spectral properties and temporal profile. To address this, we studied the generation and propagation of laser-induced ultrasound in a planar, layered structure. We derived an analytical expression for the induced pressure wave, including different physical and optical properties of each layer. A Laplace transform approach was employed in analytically solving the resulting set of photoacoustic wave equations. The results correspond to simulations and were compared to experimental results. To enable the comparison between recorded voltage from the experiments and the calculated pressure, we employed a system identification procedure based on physical properties of the ultrasonic transducer to convert the calculated acoustic pressure to voltages. We found reasonable agreement between experimentally obtained voltages and the voltages determined from the calculated acoustic pressure for the samples studied. The system identification procedure was found to be unstable, however, possibly owing to violations of material isotropy assumptions by film adhesives and coatings in the experiment. The presented analytical model can serve as a basis for addressing the inverse problem of shaping an acoustic pulse from absorption of a laser pulse in a planar layered structure of elastic materials.
40 CFR 1065.640 - Flow meter calibration calculations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Flow meter calibration calculations... POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Calculations and Data Requirements § 1065.640 Flow meter calibration calculations. This section describes the calculations for calibrating various flow meters. After...
Winkler, Peter; Zurl, Brigitte; Guss, Helmuth; Kindl, Peter; Stuecklschweiger, Georg
2005-02-21
A system for dosimetric verification of intensity-modulated radiotherapy (IMRT) treatment plans using absolutely calibrated radiographic films is presented. At our institution this verification procedure is performed for all IMRT treatment plans prior to patient irradiation. Clinical treatment plans are therefore transferred to a phantom and recalculated. Composite treatment plans are irradiated to a single film. Film density to absolute dose conversion is performed automatically based on a single calibration film. A software application encompassing film calibration, 2D registration of measured and calculated distributions, image fusion, and a number of visual and quantitative evaluation utilities was developed. The main topic of this paper is a performance analysis of this quality assurance procedure, with regard to the specification of tolerance levels for quantitative evaluations. Spatial and dosimetric precision and accuracy were determined for the entire procedure, comprising all possible sources of error. The overall dosimetric and spatial measurement uncertainties obtained thereby were 1.9% and 0.8 mm, respectively. Based on these results, we specified 5% dose difference and 3 mm distance-to-agreement as our tolerance levels for patient-specific quality assurance for IMRT treatments.
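A point-by-point check against such tolerance levels can be sketched as follows: a point passes if its dose difference is within 5%, or if some calculated point within 3 mm matches its dose (distance-to-agreement). The Python fragment below uses synthetic grids, a global normalisation, and a square search window as a stand-in for the 3 mm radius; it is not the authors' software:

    import numpy as np

    def pass_map(measured, calculated, spacing_mm, dd=0.05, dta_mm=3.0):
        ny, nx = measured.shape
        r = int(np.ceil(dta_mm / spacing_mm))
        tol = dd * measured.max()                       # global normalisation (assumption)
        ok = np.abs(calculated - measured) <= tol       # dose-difference criterion
        for j, i in zip(*np.nonzero(~ok)):              # try to rescue failures by DTA
            win = calculated[max(0, j - r):j + r + 1, max(0, i - r):i + r + 1]
            if np.any(np.abs(win - measured[j, i]) <= tol):
                ok[j, i] = True
        return ok

    rng = np.random.default_rng(3)
    calc = np.clip(rng.normal(1.0, 0.05, (60, 60)), 0.0, None)
    meas = calc + rng.normal(0.0, 0.02, calc.shape)     # film-noise stand-in
    print("pass rate:", pass_map(meas, calc, spacing_mm=1.0).mean())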
Verma, Mahendra K.
2012-01-01
The Energy Independence and Security Act of 2007 (Public Law 110-140) authorized the U.S. Geological Survey (USGS) to conduct a national assessment of geologic storage resources for carbon dioxide (CO2), requiring estimation of hydrocarbon-in-place volumes and formation volume factors for all the oil, gas, and gas-condensate reservoirs within the U.S. sedimentary basins. The procedures to calculate in-place volumes for oil and gas reservoirs have already been presented by Verma and Bird (2005) to help with the USGS assessment of the undiscovered resources in the National Petroleum Reserve, Alaska, but there is no straightforward procedure available for calculating in-place volumes for gas-condensate reservoirs for the carbon sequestration project. The objective of the present study is to propose a simple procedure for calculating the hydrocarbon-in-place volume of a condensate reservoir to help estimate the hydrocarbon pore volume for potential CO2 sequestration.
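One common way to fold condensate into an in-place gas volume, sketched below in Python, is to convert stock-tank condensate to its gas equivalent using the ideal-gas law on a lb-mol basis together with an empirical molecular-weight correlation (Cragoe's). These are standard petroleum-engineering relations and not necessarily the specific procedure proposed in the study; the API gravity is an assumed example value:

    def condensate_molecular_weight(api):
        return 6084.0 / (api - 5.9)            # Cragoe's empirical correlation

    def gas_equivalent_scf_per_stb(gamma_o, m_o):
        return 133_316.0 * gamma_o / m_o       # scf of gas per STB of condensate

    api = 55.0                                 # assumed stock-tank condensate gravity
    gamma_o = 141.5 / (131.5 + api)            # specific gravity from API gravity
    ge = gas_equivalent_scf_per_stb(gamma_o, condensate_molecular_weight(api))
    print(f"{ge:.0f} scf gas equivalent per STB of condensate")   # ~816 scf/STB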
Imprecise (fuzzy) information in geostatistics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardossy, A.; Bogardi, I.; Kelly, W.E.
1988-05-01
A methodology based on fuzzy set theory for the utilization of imprecise data in geostatistics is presented. A common problem preventing a broader use of geostatistics has been the insufficient amount of accurate measurement data. In certain cases, additional but uncertain (soft) information is available and can be encoded as subjective probabilities, and then the soft kriging method can be applied (Journel, 1986). In other cases, a fuzzy encoding of soft information may be more realistic and may simplify the numerical calculations. Imprecise (fuzzy) spatial information on the possible variogram is integrated into a single variogram, which is used in a fuzzy kriging procedure. The overall uncertainty of prediction is represented by the estimation variance and the calculated membership function for each kriged point. The methodology is applied to the permeability prediction of a soil liner for hazardous waste containment. The available number of hard measurement data (20) was not enough for a classical geostatistical analysis. An additional 20 soft data made it possible to prepare kriged contour maps using the fuzzy geostatistical procedure.
NASA Astrophysics Data System (ADS)
Merkisz, J.; Lijewski, P.; Fuc, P.; Siedlecki, M.; Ziolkowski, A.
2016-09-01
The paper analyzes the exhaust emissions from farm vehicles based on research performed under field conditions (RDE testing) according to the NTE procedure. This analysis has shown that it is hard to meet the NTE requirements under field conditions (engine operation in the NTE zone for at least 30 seconds). Due to the very high variability of the engine operating conditions, the share of valid NTE windows is small throughout the entire field test. For this reason, a modification of the measurement and exhaust emissions calculation methodology has been proposed for farm vehicles of the NRMM group. A test composed of the following phases has been developed: trip to the operation site (paved roads) and field operations (including u-turns and maneuvering). The range of the operation time share in the individual test phases has been determined. A change in the method of calculating the real exhaust emissions has also been implemented relative to the NTE procedure.
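The 30-second validity rule can be checked mechanically on a 1 Hz engine signal. A Python sketch with a synthetic in-zone flag; with uncorrelated second-to-second variability, 30 s runs are vanishingly rare, which mirrors the paper's observation that valid windows are scarce in field operation:

    import numpy as np

    def valid_nte_windows(in_zone, min_s=30):
        """Return (start, length) of each in-zone run lasting at least min_s."""
        windows, start = [], None
        for t, flag in enumerate(list(in_zone) + [False]):   # sentinel closes last run
            if flag and start is None:
                start = t
            elif not flag and start is not None:
                if t - start >= min_s:
                    windows.append((start, t - start))
                start = None
        return windows

    in_zone = np.random.default_rng(7).random(3600) < 0.5    # highly variable operation
    print(len(valid_nte_windows(in_zone)), "valid NTE events in a 1 h test")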
NASA Astrophysics Data System (ADS)
Braiek, A.; Adili, A.; Albouchi, F.; Karkri, M.; Ben Nasrallah, S.
2016-06-01
The aim of this work is to simultaneously identify the conductive and radiative parameters of a semitransparent sample using a photothermal method associated with an inverse problem. The identification of the conductive and radiative properties is performed by minimizing an objective function that represents the errors between the calculated temperature and the measured signal. The calculated temperature is obtained from a theoretical model built with the thermal quadrupole formalism. The measurement is obtained at the rear face of the sample, whose front face is excited by a square pulse (crenel) of heat flux. For the identification procedure, a genetic algorithm is developed and used. The genetic algorithm is a useful tool in the simultaneous estimation of correlated or nearly correlated parameters, which can be a limiting factor for gradient-based methods. The results of the identification procedure show the efficiency and stability of the genetic algorithm in simultaneously estimating the conductive and radiative properties of clear glass.
Sun, Pengzhan; Wang, Yanlei; Liu, He; Wang, Kunlin; Wu, Dehai; Xu, Zhiping; Zhu, Hongwei
2014-01-01
A mild annealing procedure was recently proposed for the scalable enhancement of graphene oxide (GO) properties with the oxygen content preserved, which was demonstrated to be attributable to thermally driven phase separation. In this work, the structure evolution of GO under mild annealing is closely investigated. It reveals that in addition to phase separation, transformation of the oxygen functionalities also occurs, which leads to slight reduction of the GO membranes and further enhances the GO properties. These results are supported by density functional theory based calculations. The results also show that the amount of chemically bonded oxygen atoms on graphene decreases gradually, and we propose that the strongly physisorbed oxygen species constrained in the holes and vacancies of the GO lattice might be responsible for the preserved oxygen content during the mild annealing procedure. The present experimental results and calculations indicate that both the diffusion and the transformation of oxygen functional groups might play important roles in the scalable enhancement of GO properties. PMID:25372142
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as the criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, a minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as this calculation is itself an NP-hard problem. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop problem. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
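The inner evaluation that such heuristics perform repeatedly is the standard makespan recurrence for a permutation flow-shop with unlimited buffers: C(j, m) = max(C(j-1, m), C(j, m-1)) + p(j, m). A Python sketch, with interval times entering through a chosen scenario; the example data are ours:

    def makespan(p):
        """p[j][m]: processing time of job j (in sequence order) on machine m."""
        C = [[0.0] * len(p[0]) for _ in p]
        for j in range(len(p)):
            for m in range(len(p[0])):
                prev_job = C[j - 1][m] if j > 0 else 0.0
                prev_mach = C[j][m - 1] if m > 0 else 0.0
                C[j][m] = max(prev_job, prev_mach) + p[j][m]
        return C[-1][-1]

    lower = [[3, 2], [1, 4], [2, 2]]           # all processing times at interval lower bounds
    upper = [[4, 3], [2, 6], [3, 2]]           # all at interval upper bounds
    print(makespan(lower), makespan(upper))    # 11 15: makespans under the two extreme scenarios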
NASA Astrophysics Data System (ADS)
Pisano, Luca; Vessia, Giovanna; Vennari, Carmela; Parise, Mario
2015-04-01
Empirical rainfall thresholds are a well-established method for drawing information about the Duration (D) and Cumulated rainfall (E) values likely to initiate shallow landslides. To this end, rain-gauge records of rainfall heights are commonly used. Several procedures can be applied to calculate the Duration, the Cumulated height and, eventually, the Intensity values related to the rainfall events responsible for shallow landslide onset. A large number of procedures are tailored to particular geological settings and climate conditions and rely on expert identification of the rainfall event. A few researchers recently devised automated procedures to reconstruct the rainfall events responsible for landslide onset. In this study, 300 D, E pairs related to shallow landslides that occurred over the span 2002-2012 on the Italian territory have been drawn by means of two procedures: the expert method (Brunetti et al., 2010) and the automated method (Vessia et al., 2014). The two procedures start from the same sources of information on shallow landslides that occurred during or soon after a rainfall. Although they share the method for selecting the date (up to the hour of the landslide occurrence), the site of the landslide, and the representative rain gauge, they differ in calculating the Duration and Cumulated height of the rainfall event. Moreover, the expert procedure identifies only one D, E pair for each landslide, whereas the automated procedure draws 6 possible D, E pairs for the same landslide event. The automated procedure reproduces about 80% of the E values and about 60% of the D values calculated by the expert procedure. Unfortunately, no standard methods are available for checking the forecasting ability of either the expert or the automated reconstruction of the true D, E pairs that result in shallow landslides. Nonetheless, a statistical analysis of the marginal distributions of the seven samples of 300 D and E values is performed in this study. The main objective of this statistical analysis is to highlight similarities and differences in the two sets of Duration and Cumulated values collected by the two procedures. At first, the sample distributions have been investigated: the seven E samples are lognormally distributed, whereas the D samples all follow Weibull-like distributions. On the E samples, thanks to their lognormal distribution, statistical tests can be applied to check two null hypotheses: equal mean values through Student's test, and equal standard deviations through Fisher's test. These two hypotheses are accepted for the seven E samples, meaning that they come from the same population, at a confidence level of 95%. Conversely, the preceding tests cannot be applied to the seven D samples, which are Weibull distributed with shape parameters k ranging between 0.9 and 1.2. Nonetheless, the two procedures calculate the rainfall event through the selection of the E values, after which D is drawn. Thus, the results of this statistical analysis preliminarily confirm the similarity of the two sets of D, E pairs drawn from the two procedures. References: Brunetti, M.T., Peruccacci, S., Rossi, M., Luciani, S., Valigi, D., and Guzzetti, F.: Rainfall thresholds for the possible occurrence of landslides in Italy, Nat. Hazards Earth Syst. Sci., 10, 447-458, doi:10.5194/nhess-10-447-2010, 2010.
Vessia G., Parise M., Brunetti M.T., Peruccacci S., Rossi M., Vennari C., and Guzzetti F.: Automated reconstruction of rainfall events responsible for shallow landslides, Nat. Hazards Earth Syst. Sci., 14, 2399-2408, doi: 10.5194/nhess-14-2399-2014, 2014.
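As a rough illustration of the tests just described, the sketch below applies a Student test on means and a Fisher test on variances to log-transformed E samples, and fits a Weibull distribution to a D sample. All data and parameters are synthetic stand-ins, not the study's samples.

```python
# Sketch of the statistical checks described above, assuming lognormal E
# samples and Weibull-like D samples (all data here are hypothetical).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
E_expert = rng.lognormal(mean=3.5, sigma=0.8, size=300)   # cumulated rainfall E (mm)
E_auto = rng.lognormal(mean=3.4, sigma=0.85, size=300)    # one of the six automated samples
D_auto = rng.weibull(1.1, size=300) * 30.0                # duration D (h), shape k ~ 0.9-1.2

# Lognormal E: test equal means (Student) and equal variances (Fisher) on log(E).
logE1, logE2 = np.log(E_expert), np.log(E_auto)
t_stat, t_p = stats.ttest_ind(logE1, logE2)               # Student test, H0: equal means
F = np.var(logE1, ddof=1) / np.var(logE2, ddof=1)         # Fisher test, H0: equal variances
f_p = 2 * min(stats.f.cdf(F, 299, 299), stats.f.sf(F, 299, 299))
print(f"t-test p={t_p:.3f}, F-test p={f_p:.3f} (accept H0 at 95% if p > 0.05)")

# Weibull D: only a distribution fit is reported, since the parametric tests
# above do not apply to the Weibull-distributed samples.
k, loc, scale = stats.weibull_min.fit(D_auto, floc=0.0)
print(f"Weibull shape k={k:.2f}")
```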
Strategic flexibility in computational estimation for Chinese- and Canadian-educated adults.
Xu, Chang; Wells, Emma; LeFevre, Jo-Anne; Imbo, Ineke
2014-09-01
The purpose of the present study was to examine factors that influence strategic flexibility in computational estimation for Chinese- and Canadian-educated adults. Strategic flexibility was operationalized as the percentage of trials on which participants chose the problem-based procedure that best balanced proximity to the correct answer with simplification of the required calculation. For example, on 42 × 57, the optimal problem-based solution is 40 × 60 because 2,400 is closer to the exact answer 2,394 than is 40 × 50 or 50 × 60. In Experiment 1 (n = 50), where participants had free choice of estimation procedures, Chinese-educated participants were more likely to choose the optimal problem-based procedure (80% of trials) than Canadian-educated participants (50%). In Experiment 2 (n = 48), participants had to choose 1 of 3 solution procedures. They showed moderate strategic flexibility that was equal across groups (60%). In Experiment 3 (n = 50), participants were given the same 3 procedure choices as in Experiment 2 but different instructions and explicit feedback. When instructed to respond quickly, both groups showed moderate strategic flexibility as in Experiment 2 (60%). When instructed to respond as accurately as possible or to balance speed and accuracy, they showed very high strategic flexibility (greater than 90%). These findings suggest that solvers will show very different levels of strategic flexibility in response to instructions, feedback, and problem characteristics and that these factors interact with individual differences (e.g., arithmetic skills, nationality) to produce variable response patterns.
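For concreteness, here is a minimal sketch of the scoring behind the "optimal problem-based procedure": enumerate candidate roundings of a two-digit product and keep the one whose estimate is closest to the exact answer. The candidate set of four tens-roundings is an assumption chosen to match the 42 × 57 example above.

```python
# Pick the rounding of a x b to tens that lands closest to the exact product.
def optimal_estimate(a: int, b: int) -> tuple[int, int]:
    def down(x): return (x // 10) * 10
    def up(x): return ((x + 9) // 10) * 10
    candidates = [(down(a), down(b)), (down(a), up(b)),
                  (up(a), down(b)), (up(a), up(b))]
    exact = a * b
    return min(candidates, key=lambda p: abs(p[0] * p[1] - exact))

print(optimal_estimate(42, 57))  # -> (40, 60): 2400 is closest to 2394
```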
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.
2012-03-01
We demonstrate theoretically and experimentally that the phase retardance and relative optic-axis orientation of a sample can be calculated without prior knowledge of the actual value of the phase modulation amplitude when using a polarization-sensitive optical coherence tomography system based on continuous polarization modulation (CPM-PS-OCT). We also demonstrate that the sample Jones matrix can be calculated at any value of the phase modulation amplitude within a reasonable range that depends on the system's effective signal-to-noise ratio. This has fundamental importance for the development of clinical systems because it simplifies the polarization modulator drive instrumentation and eliminates its calibration procedure. This was validated on measurements of a three-quarter waveplate and an equine tendon sample by a fiber-based swept-source CPM-PS-OCT system.
Substructure program for analysis of helicopter vibrations
NASA Technical Reports Server (NTRS)
Sopher, R.
1981-01-01
A substructure vibration analysis which was developed as a design tool for predicting helicopter vibrations is described. The substructure assembly method and the composition of the transformation matrix are analyzed. The procedure for obtaining solutions to the equations of motion is illustrated for the steady-state forced response solution mode, and rotor hub load excitation and impedance are analyzed. Calculation of the mass, damping, and stiffness matrices, as well as the forcing function vectors of physical components resident in the base program code, is discussed in detail. Refinement of the model is achieved by exercising modules which interface with the external program to represent rotor-induced variable inflow and fuselage-induced variable inflow at the rotor. The calculation of various flow fields is discussed, and base program applications are detailed.
Bratsas, Charalampos; Koutkias, Vassilis; Kaimakamis, Evangelos; Bamidis, Panagiotis; Maglaveras, Nicos
2007-01-01
Medical Computational Problem (MCP) solving is related to medical problems and their computerized algorithmic solutions. In this paper, an extension of an ontology-based model to fuzzy logic is presented, as a means to enhance the information retrieval (IR) procedure in semantic management of MCPs. We present herein the methodology followed for the fuzzy expansion of the ontology model, the fuzzy query expansion procedure, as well as an appropriate ontology-based Vector Space Model (VSM) that was constructed for efficient mapping of user-defined MCP search criteria and MCP acquired knowledge. The relevant fuzzy thesaurus is constructed by calculating the simultaneous occurrences of terms and the term-to-term similarities derived from the ontology that utilizes UMLS (Unified Medical Language System) concepts by using Concept Unique Identifiers (CUI), synonyms, semantic types, and broader-narrower relationships for fuzzy query expansion. The current approach constitutes a sophisticated advance for effective, semantics-based MCP-related IR.
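A toy sketch of the co-occurrence computation on which such a fuzzy thesaurus rests is given below: simultaneous occurrences of terms across documents are counted and turned into a Jaccard-style term-to-term similarity. The miniature corpus and the normalization are illustrative assumptions, not the paper's UMLS-based pipeline.

```python
# Build a fuzzy term-to-term similarity matrix from term co-occurrence counts.
import numpy as np

docs = [{"fever", "infection", "dose"},
        {"dose", "calculation", "infusion"},
        {"infection", "dose", "infusion"}]
terms = sorted(set().union(*docs))
idx = {t: i for i, t in enumerate(terms)}

# Binary term-document incidence matrix.
X = np.zeros((len(terms), len(docs)))
for j, d in enumerate(docs):
    for t in d:
        X[idx[t], j] = 1.0

co = X @ X.T                                  # simultaneous occurrence counts
n = np.diag(co)
fuzzy = co / (n[:, None] + n[None, :] - co)   # Jaccard-style similarity in [0, 1]
print(terms)
print(np.round(fuzzy, 2))
```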
NASA Technical Reports Server (NTRS)
El-Hady, N. M.
1981-01-01
A computer program HADY-I for calculating the linear incompressible or compressible stability characteristics of the laminar boundary layer on swept and tapered wings is described. The eigenvalue problem and its adjoint, arising from the linearized disturbance equations with the appropriate boundary conditions, are solved numerically using a combination of a Newton-Raphson iterative scheme and a variable-step-size integrator based on the Runge-Kutta-Fehlberg fifth-order formulas. The integrator is used in conjunction with a modified Gram-Schmidt orthonormalization procedure. The computer program HADY-I calculates the growth rates of crossflow or streamwise Tollmien-Schlichting instabilities. It also calculates the group velocities of these disturbances. It is restricted to parallel stability calculations, where the boundary layer (meanflow) is assumed to be parallel. The meanflow solution is an input to the program.
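The eigenvalue strategy (Newton-Raphson wrapped around a Runge-Kutta integration) can be illustrated on a toy problem. The sketch below shoots u'' = -λu with u(0) = u(1) = 0 and Newton-iterates on the boundary residual; this is a schematic analogue, not the HADY-I code, and omits the adjoint solve and orthonormalization.

```python
# Shooting method: find lambda such that the boundary residual u(1) vanishes.
# Exact eigenvalues of this toy problem are (n*pi)^2.
import numpy as np
from scipy.integrate import solve_ivp

def residual(lam: float) -> float:
    # Shoot from u(0)=0, u'(0)=1; a root of u(1) means lam is an eigenvalue.
    sol = solve_ivp(lambda x, y: [y[1], -lam * y[0]], (0.0, 1.0), [0.0, 1.0],
                    method="RK45", rtol=1e-10, atol=1e-12)
    return sol.y[0, -1]

lam = 9.0                       # initial guess near pi^2 ~ 9.87
for _ in range(20):             # Newton iterations with a finite-difference slope
    r, h = residual(lam), 1e-6
    dr = (residual(lam + h) - r) / h
    step = r / dr
    lam -= step
    if abs(step) < 1e-10:
        break
print(lam, np.pi**2)            # converges to the first eigenvalue
```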
Normal theory procedures for calculating upper confidence limits (UCL) on the risk function for continuous responses work well when the data come from a normal distribution. However, if the data come from an alternative distribution, the application of the normal theory procedure...
Adherence to infection control guidelines in surgery on MRSA-positive patients: A cost analysis.
Saegeman, V; Schuermans, A
2016-09-01
In surgical units, as in other healthcare departments, guidelines are used to curb transmission of methicillin-resistant Staphylococcus aureus (MRSA). The aim of this study was to calculate the extra material costs and extra working hours required for compliance with MRSA infection control guidelines in the operating rooms of a University Hospital. The study was based on observations of surgeries on MRSA-positive patients. The average cost per surgery was calculated using local information on unit costs. Robustness of the calculations was evaluated with a sensitivity analysis. The total extra costs of adherence to MRSA infection control guidelines averaged €340.46 per surgical procedure (range €207.76-€473.15). A sensitivity analysis based on a standardized operating room hourly rate reached a cost of €366.22. The extra costs of adherence to infection control guidelines are considerable. To reduce costs, the logistical planning of surgeries could be improved by, for instance, using a dedicated room.
Taboo Search: An Approach to the Multiple Minima Problem
NASA Astrophysics Data System (ADS)
Cvijovic, Djurdje; Klinowski, Jacek
1995-02-01
Described here is a method, based on Glover's taboo search for discrete functions, of solving the multiple minima problem for continuous functions. As demonstrated by model calculations, the algorithm avoids entrapment in local minima and continues the search to give a near-optimal final solution. Unlike other methods of global optimization, this procedure is generally applicable, easy to implement, derivative-free, and conceptually simple.
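A minimal sketch of a tabu-style search for a continuous function follows: candidate moves near the current point are generated, moves close to recently visited points are forbidden, and the best admissible move is taken even if it is worse, which is what allows escape from local minima. The neighbourhood size, tabu radius, and list length are illustrative assumptions, not Glover's original settings.

```python
# Tabu-style search on a multimodal continuous function.
import numpy as np

def f(x):  # test function with many local minima
    return x[0]**2 + x[1]**2 + 10 * np.sin(x[0]) * np.sin(x[1])

rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, size=2)
best_x, best_f = x.copy(), f(x)
tabu = []                                     # recently visited points

for _ in range(500):
    # Generate neighbours; reject any too close to a tabu point.
    moves = x + rng.normal(scale=0.5, size=(20, 2))
    admissible = [m for m in moves
                  if all(np.linalg.norm(m - t) > 0.3 for t in tabu)]
    if not admissible:
        continue
    x = min(admissible, key=f)                # best admissible move, even if worse
    tabu.append(x.copy())
    if len(tabu) > 15:                        # fixed-length tabu list
        tabu.pop(0)
    if f(x) < best_f:
        best_x, best_f = x.copy(), f(x)

print(best_x, best_f)
```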
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dowdy, M.W.; Couch, M.D.
A vehicle comparison methodology based on the Otto-Engine Equivalent (OEE) vehicle concept is described. As an illustration of this methodology, the concept is used to make projections of the fuel economy potential of passenger cars using various alternative power systems. Sensitivities of OEE vehicle results to assumptions made in the calculational procedure are discussed. Factors considered include engine torque boundary, rear axle ratio, performance criteria, engine transient response, and transmission shift logic.
NASA Astrophysics Data System (ADS)
Guo, L.; Yin, Y.; Deng, M.; Guo, L.; Yan, J.
2017-12-01
At present, most magnetotelluric (MT) forward modelling and inversion codes are based on the finite difference method, but its structured mesh gridding is poorly suited to conditions with arbitrary topography or complex tectonic structures. By contrast, the finite element method is more accurate in calculating complex and irregular 3-D regions and imposes weaker requirements on function smoothness. However, the complexity of mesh gridding and the limitations of computer capacity have restricted its application. COMSOL Multiphysics is a cross-platform finite element analysis, solver and multiphysics full-coupling simulation software package. It achieves highly accurate numerical simulations with high computational performance and outstanding multi-field bi-directional coupling analysis capability. In addition, its AC/DC and RF modules can be used to easily calculate the electromagnetic responses of complex geological structures. Using an adaptive unstructured grid, the calculation is much faster. To improve the discretization of the computational domain, we combine Matlab and COMSOL Multiphysics to establish a general procedure for calculating the MT responses of arbitrary resistivity models. The calculated responses include the surface electric and magnetic field components, impedance components, magnetic transfer functions and phase tensors. The reliability of this procedure is then verified by 1-D, 2-D, 3-D and anisotropic forward-modelling tests. Finally, we establish a 3-D lithospheric resistivity model for the Proterozoic Wutai-Hengshan Mts. within the North China Craton by fitting the real MT data collected there. The reliability of the model is also verified by induction vectors and phase tensors. Our model shows more details and better resolution compared with the previously published 3-D model based on the finite difference method. In conclusion, the COMSOL Multiphysics package is suitable for modeling 3-D lithospheric resistivity structures under complex tectonic deformation backgrounds and could be a good complement to existing finite-difference inversion algorithms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1977-04-01
The design calculations for the Waste Isolation Pilot Plant (WIPP) are presented. The following categories are discussed: general nuclear calculations; radwaste calculations; structural calculations; mechanical calculations; civil calculations; electrical calculations; TRU waste surface facility time and motion analysis; shaft sinking procedures; hoist time and motion studies; mining system analysis; mine ventilation calculations; mine structural analysis; and miscellaneous underground calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sahu, Nityananda; Gadre, Shridhar R., E-mail: gadre@iitk.ac.in
The present work reports the calculation of vibrational infrared (IR) and Raman spectra of large molecular systems employing the molecular tailoring approach (MTA). Further, it extends the grafting procedure for the accurate evaluation of IR and Raman spectra of large molecular systems, employing a new methodology termed Fragments-in-Fragments (FIF), within MTA. Unlike previous MTA-based studies, the accurate estimation of the requisite molecular properties is achieved without performing any full calculations (FC). The basic idea of the grafting procedure is implemented by invoking the nearly basis-set-independent nature of the MTA-based error vis-à-vis the respective FCs. FIF has been tested out for the estimation of the above molecular properties for three isomers, viz., the β-strand, 3₁₀-helix and α-helix of acetyl(alanine)ₙNH₂ (n = 10, 15) polypeptides, three conformers of the doubly protonated gramicidin S decapeptide, and the trpzip2 protein (PDB id: 1LE1), respectively, employing the BP86/TZVP, M06/6-311G**, and M05-2X/6-31G** levels of theory. For most of the cases, a maximum difference of 3 cm⁻¹ is achieved between the grafted-MTA frequencies and the corresponding FC values. Further, a comparison of the BP86/TZVP-level IR and Raman spectra of α-helical (alanine)₂₀ and its N-deuterated derivative shows an excellent agreement with the existing experimental spectra. In view of the requirement of only MTA-based calculations and the ability of FIF to work at any level of theory, the current methodology provides a cost-effective solution for obtaining accurate spectra of large molecular systems.
Yamamoto, Takeshi
2008-12-28
Conventional quantum chemical solvation theories are based on the mean-field embedding approximation; that is, the electronic wavefunction is calculated in the presence of the mean field of the environment. In this paper a direct quantum mechanical/molecular mechanical (QM/MM) analog of such a mean-field theory is formulated based on variational and perturbative frameworks. In the variational framework, an appropriate QM/MM free energy functional is defined and is minimized in terms of the trial wavefunction that best approximates the true QM wavefunction in a statistically averaged sense. An analytical free energy gradient is obtained, which takes the form of the gradient of the effective QM energy calculated in the averaged MM potential. In the perturbative framework, the above variational procedure is shown to be equivalent to the first-order expansion of the QM energy (in the exact free energy expression) about the self-consistent reference field. This helps clarify the relation between the variational procedure and the exact QM/MM free energy, as well as existing QM/MM theories. Based on this, several ways are discussed for evaluating non-mean-field effects (i.e., statistical fluctuations of the QM wavefunction) that are neglected in the mean-field calculation. As an illustration, the method is applied to an SN2 Menshutkin reaction in water, NH₃ + CH₃Cl → NH₃CH₃⁺ + Cl⁻, for which free energy profiles are obtained at the Hartree-Fock, MP2, B3LYP, and BHHLYP levels by integrating the free energy gradient. Non-mean-field effects are evaluated to be <0.5 kcal/mol using a Gaussian fluctuation model for the environment, which suggests that those effects are rather small for the present reaction in water.
Calculating Nozzle Side Loads using Acceleration Measurements of Test-Based Models
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ruf, Joe
2007-01-01
As part of a NASA/MSFC research program to evaluate the effect of different nozzle contours on the well-known but poorly characterized "side load" phenomenon, we attempt to back out the net force on a sub-scale nozzle during cold-flow testing using acceleration measurements. Because modeling the test facility dynamics is problematic, new techniques for creating a "pseudo-model" of the facility and nozzle directly from modal test results are applied. Extensive verification procedures were undertaken, resulting in a loading scale factor necessary for agreement between test- and model-based frequency response functions. Side loads are then obtained by applying a wide-band random load onto the system model, obtaining nozzle response PSDs, and iterating both the amplitude and frequency of the input until a good comparison of the response with the measured response PSD for a specific time point is obtained. The final calculated loading can be used to compare different nozzle profiles for assessment during rocket engine nozzle development and as a basis for accurate design of the nozzle and engine structure to withstand these loads. The techniques applied within this procedure have extensive applicability to timely and accurate characterization of all test fixtures used for modal testing. A viewgraph presentation on a model-test based pseudo-model used to calculate side loads on rocket engine nozzles is included. The topics include: 1) Side Loads in Rocket Nozzles; 2) Present Side Loads Research at NASA/MSFC; 3) Structural Dynamic Model Generation; 4) Pseudo-Model Generation; 5) Implementation; 6) Calibration of Pseudo-Model Response; 7) Pseudo-Model Response Verification; 8) Inverse Force Determination; 9) Results; and 10) Recent Work.
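The inverse-force iteration can be caricatured with a single-degree-of-freedom model: push a candidate input PSD through |H(f)|² and rescale its amplitude until the computed response PSD matches the "measured" one. The FRF and levels below are assumptions for illustration only, not the paper's pseudo-model.

```python
# Scale a candidate input PSD until the computed response PSD matches the target.
import numpy as np

f = np.linspace(1.0, 200.0, 400)                   # frequency axis (Hz)
fn, zeta = 60.0, 0.02                              # modal frequency and damping
H2 = 1.0 / ((1 - (f / fn)**2)**2 + (2 * zeta * f / fn)**2)  # |H(f)|^2 (normalized)

S_meas = 4.0 * H2                                  # "measured" response PSD
amp = 1.0                                          # candidate flat input PSD level
for _ in range(50):                                # fixed-point iteration on amplitude
    S_resp = amp * H2
    amp *= S_meas.max() / S_resp.max()             # match the response peaks
print(amp)                                         # converges to the true level 4.0
```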
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Calculations. 1065.850 Section 1065.850 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Testing With Oxygenated Fuels § 1065.850 Calculations. Use the calculations...
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Calculations. 1065.850 Section 1065.850 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Testing With Oxygenated Fuels § 1065.850 Calculations. Use the calculations...
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Calculations. 1065.850 Section 1065.850 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Testing With Oxygenated Fuels § 1065.850 Calculations. Use the calculations...
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Calculations. 1065.850 Section 1065.850 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Testing With Oxygenated Fuels § 1065.850 Calculations. Use the calculations...
40 CFR 1065.850 - Calculations.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Calculations. 1065.850 Section 1065.850 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING PROCEDURES Testing With Oxygenated Fuels § 1065.850 Calculations. Use the calculations...
Beibei, Zhou; Quanjiu, Wang; Shuai, Tan
2014-01-01
A theory based on the Manning roughness equation, the Philip equation and the water balance equation was developed which employs only the advance distance to calculate the infiltration parameters and irrigation coefficients in both border irrigation and surge irrigation. The improved procedure was validated with both border irrigation and surge irrigation experiments. The main results are as follows. Compared with the experimental data, the infiltration parameters of the Philip equation could be calculated accurately using only the water advance distance during the irrigation process. With the calculated parameters and the water balance equation, the irrigation coefficients were also estimated. In the experimental corn fields, the water advance velocity should be measured about 0.5 m to 1.0 m behind the advancing water front. PMID:25061664
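As a sketch of the parameter estimation, the snippet below fits the Philip equation I(t) = S·t^(1/2) + A·t by linear least squares. The synthetic (t, I) pairs stand in for values derived from the measured advance distances, which is an assumption made for illustration.

```python
# Fit Philip's two-term infiltration equation by linear least squares.
import numpy as np

t = np.linspace(5, 120, 24)                       # time (min)
I = 0.8 * np.sqrt(t) + 0.05 * t                   # synthetic cumulative infiltration (cm)
I += np.random.default_rng(2).normal(scale=0.05, size=t.size)

# Design matrix for the two Philip terms; solve for sorptivity S and rate A.
Xd = np.column_stack([np.sqrt(t), t])
(S, A), *_ = np.linalg.lstsq(Xd, I, rcond=None)
print(f"S={S:.3f} cm/min^0.5, A={A:.4f} cm/min")
```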
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lampley, C.M.
1979-01-01
An updated version of the SKYSHINE Monte Carlo procedure has been developed. The new computer code, SKYSHINE-II, provides a substantial increase in versatility in that the program possesses the ability to address three types of point-isotropic radiation sources: (1) primary gamma rays, (2) neutrons, and (3) secondary gamma rays. In addition, the emitted radiation may now be characterized by an energy emission spectrum, using a new energy-dependent atmospheric transmission data base developed by Radiation Research Associates, Inc. for each of the three source types described above. Most of the computational options present in the original program have been retained in the new version. Hence, the SKYSHINE-II computer code provides a versatile and viable tool for the analysis of the radiation environment in the vicinity of a building structure containing radiation sources, situated within the confines of a nuclear power plant. This report describes many of the calculational methods employed within the SKYSHINE-II program. A brief description of the new data base is included. Utilization instructions are provided for operation of the SKYSHINE-II code on the Brookhaven National Laboratory Central Scientific Computing Facility. A listing of the source decks, block data routines, and the new atmospheric transmission data base are provided in the appendices of the report.
Gupta, Parth Sarthi Sen; Banerjee, Shyamashree; Islam, Rifat Nawaz Ul; Mondal, Sudipta; Mondal, Buddhadev; Bandyopadhyay, Amal K
2014-01-01
In the genomic and proteomic era, efficient and automated analysis of protein sequence properties has become an important task in bioinformatics. There are general-public-licensed (GPL) software tools that perform part of the job. However, computation of the mean properties of large numbers of orthologous sequences is not possible with the above-mentioned GPL sets. Further, there is no GPL software or server which can calculate window-dependent sequence properties for a large number of sequences in a single run. With a view to overcoming these limitations, we have developed a standalone procedure, PHYSICO, which performs the various stages of computation in a single run, based on the type of input provided (either RAW-FASTA or BLOCK-FASTA format), and produces Excel output for: a) composition, class composition, mean molecular weight, isoelectric point, aliphatic index and GRAVY; b) column-based compositions, variability and difference matrix; c) 25 kinds of window-dependent sequence properties. The program is fast, efficient, error-free and user-friendly. Calculation of the mean and standard deviation of homologous sequence sets, for comparison purposes when relevant, is another attribute of the program; a property seldom seen in existing GPL software. PHYSICO is freely available to non-commercial/academic users on formal request to the corresponding author (akbanerjee@biotech.buruniv.ac.in). PMID:24616564
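To make "window-dependent sequence properties" concrete, here is a sketch of one such property: GRAVY over a sliding window using the standard Kyte-Doolittle hydropathy scale. PHYSICO's actual window properties and defaults may differ; the window length and test sequence below are illustrative.

```python
# Sliding-window GRAVY using the Kyte-Doolittle hydropathy scale.
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def windowed_gravy(seq: str, window: int = 9) -> list[float]:
    # Mean hydropathy of each length-`window` slice along the sequence.
    return [sum(KD[a] for a in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

print(windowed_gravy("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))
```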
NASA Technical Reports Server (NTRS)
Hafez, M.; Ahmad, J.; Kuruvila, G.; Salas, M. D.
1987-01-01
In this paper, steady, axisymmetric inviscid, and viscous (laminar) swirling flows representing vortex breakdown phenomena are simulated using a stream function-vorticity-circulation formulation and two numerical methods. The first is based on an inverse iteration, where a norm of the solution is prescribed and the swirling parameter is calculated as a part of the output. The second is based on direct Newton iterations, where the linearized equations, for all the unknowns, are solved simultaneously by an efficient banded Gaussian elimination procedure. Several numerical solutions for inviscid and viscous flows are demonstrated, followed by a discussion of the results. Some improvements on previous work have been achieved: first order upwind differences are replaced by second order schemes, line relaxation procedure (with linear convergence rate) is replaced by Newton's iterations (which converge quadratically), and Reynolds numbers are extended from 200 up to 1000.
Non-gaussian signatures of general inflationary trajectories
NASA Astrophysics Data System (ADS)
Horner, Jonathan S.; Contaldi, Carlo R.
2014-09-01
We carry out a numerical calculation of the bispectrum for generalised trajectories of canonical, single-field inflation. The trajectories are generated in the Hamilton-Jacobi (HJ) formalism based on Hubble Slow Roll (HSR) parameters. The calculation allows for generally shape- and scale-dependent bispectra, or dimensionless fNL, in the out-of-slow-roll regime. The distributions of fNL for various shapes and HSR proposals are shown as an example of how this procedure can be used within the context of Monte Carlo exploration of inflationary trajectories. We also show how allowing out-of-slow-roll behaviour can lead to a bispectrum that is relatively large for equilateral shapes.
Mechanistic interpretation of nondestructive pavement testing deflections
NASA Astrophysics Data System (ADS)
Hoffman, M. S.; Thompson, M. R.
1981-06-01
A method for the back-calculation of material properties in flexible pavements based on the interpretation of surface deflection measurements is proposed. ILLI-PAVE, a stress-dependent finite element pavement model, was used to generate data for developing algorithms and nomographs for deflection basin interpretation. Twenty-four different flexible pavement sections throughout the State of Illinois were studied. Deflections were measured and loading mode effects on pavement response were investigated. The factors controlling the pavement response to different loading modes are identified and explained. Correlations between different devices are developed. The back-calculated parameters derived from the proposed evaluation procedure can be used as inputs for asphalt concrete overlay design.
The Hartree-Fock calculation of the magnetic properties of molecular solutes
NASA Astrophysics Data System (ADS)
Cammi, R.
1998-08-01
In this paper we set the formal bases for the calculation of the magnetic susceptibility and of the nuclear magnetic shielding tensors for molecular solutes described within the framework of the polarizable continuum model (PCM). The theory has been developed at the self-consistent field (SCF) level and adapted for use with some of the most widely used computational procedures, i.e., the gauge invariant atomic orbital (GIAO) method and the continuous set of gauge transformations (CSGT) method. Numerical results for the magnetizabilities and chemical shieldings of acetonitrile and nitromethane in various solvents, computed with the PCM-CSGT method, are also presented.
Automatic P-S phase picking procedure based on Kurtosis: Vanuatu region case study
NASA Astrophysics Data System (ADS)
Baillard, C.; Crawford, W. C.; Ballu, V.; Hibert, C.
2012-12-01
Automatic P and S phase picking is indispensable for large seismological data sets. Robust algorithms, based on comparison of short-term and long-term average ratios (Allen, 1982), are commonly used for event detection, but further improvements can be made in phase identification and picking. We present a picking scheme using consecutively Kurtosis-derived Characteristic Functions (CF) and Eigenvalue decompositions on 3-component seismic data to independently pick P and S arrivals. When computed over a sliding window of the signal, a sudden increase in the CF reveals a transition from a Gaussian to a non-Gaussian distribution, characterizing the phase onset (Saragiotis, 2002). One advantage of the method is that it requires far fewer adjustable parameters than competing methods. We modified the Kurtosis CF to improve pick precision, by computing the CF over several frequency bandwidths, window sizes and smoothing parameters. Once phases were picked, we determined the onset type (P or S) using polarization parameters (rectilinearity, azimuth and dip) calculated using Eigenvalue decompositions of the covariance matrix (Cichowicz, 1993). Finally, we removed bad picks using a clustering procedure and the signal-to-noise ratio (SNR). The pick quality index was also assigned based on the SNR value. Amplitude calculation is integrated into the procedure to enable automatic magnitude calculation. We applied this procedure to data from a network of 30 wideband seismometers (including 10 ocean-bottom seismometers) in Vanuatu that ran for 10 months from May 2008 to February 2009. We manually picked the first 172 events of June, whose local magnitudes range from 0.7 to 3.7. We made a total of 1601 picks, 1094 P and 507 S. We then applied our automatic picking to the same dataset. 70% of the manually picked onsets were picked automatically. For P-picks, the difference between manual and automatic picks is 0.01 ± 0.08 s overall; for the best quality picks (quality index 0: 64% of the P-picks) the difference is -0.01 ± 0.07 s. For S-picks, the difference is -0.09 ± 0.26 s overall and -0.06 ± 0.14 s for good quality picks (index 1: 26% of the S-picks). Residuals showed no dependence on the event magnitudes. The method independently picks S and P waves with good precision and only a few parameters to adjust, for relatively small earthquakes (mostly ≤ 2 Ml). The automatic procedure was then applied to the whole dataset. Earthquake locations obtained by inverting onset arrivals revealed clustering and lineations that helped us to constrain the subduction plane. These key parameters will be integrated into 3-D finite-difference modeling and compared with GPS data in order to better understand the complex geodynamic behavior of the Vanuatu region.
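A minimal sketch of the kurtosis-derived characteristic function is given below: sliding-window kurtosis stays near zero on Gaussian noise and jumps when the window reaches the impulsive arrival, so the steepest increase marks the onset. The window length and synthetic trace are assumptions, not the paper's tuning.

```python
# Sliding-window kurtosis as a phase-onset characteristic function (CF).
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(3)
n, onset = 2000, 1200
trace = rng.normal(size=n)                          # Gaussian noise
trace[onset:] += 3.0 * rng.laplace(size=n - onset)  # impulsive "arrival"

win = 200
cf = np.array([kurtosis(trace[i - win:i]) for i in range(win, n)])
pick = win + int(np.argmax(np.diff(cf)))            # steepest CF increase ~ onset
print(pick, "(true onset:", onset, ")")
```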
A review of the calculation procedure for critical acid loads for terrestrial ecosystems.
van der Salm, C; de Vries, W
2001-04-23
Target loads for acid deposition in the Netherlands, as formulated in the Dutch environmental policy plan, are based on critical load calculations made at the end of the 1980s. Since then, knowledge of the effects of acid deposition on terrestrial ecosystems has substantially increased. In the early 1990s a simple mass balance model was developed to calculate critical loads. This model was evaluated and the methods were adapted to reflect current knowledge. The main changes in the model are the use of actual empirical relationships between Al and H concentrations in the soil solution, the addition of a constant base saturation as a second criterion for soil quality, and the use of tree-species-dependent critical Al/base cation (BC) ratios for Dutch circumstances. The changes in the model parameterisation and in the Al/BC criteria led to considerably (50%) higher critical loads for root damage. The addition of a second criterion in the critical load calculations for soil quality caused a decrease in the critical loads for soils with a medium to high base saturation, such as loess and clay soils. The adaptation hardly affected the median critical load for soil quality in the Netherlands, since only 15% of Dutch forests occur on these soils. On a regional scale, however, critical loads were (much) lower in areas where those soils are located.
NASA Technical Reports Server (NTRS)
Miles, Jeffrey Hilton
2015-01-01
A cross-power spectrum phase based adaptive technique is discussed which iteratively determines the time delay between two digitized signals that are coherent. The adaptive delay algorithm belongs to a class of algorithms that identifies a minimum of a pattern matching function. The algorithm uses a gradient technique to find the value of the adaptive delay that minimizes a cost function based in part on the slope of a linear function that fits the measured cross-power spectrum phase and in part on the standard error of the curve fit. This procedure is applied to data from a Honeywell TECH977 static-engine test. Data were obtained using a combustor probe, two turbine exit probes, and far-field microphones. Signals from this instrumentation are used to estimate the post-combustion residence time in the combustor. Comparison with previous studies of the post-combustion residence time validates this approach. In addition, the procedure removes the bias due to misalignment of signals in the calculation of coherence, which is a first step in applying array processing methods to the magnitude-squared coherence data. The procedure also provides an estimate of the cross-spectrum phase offset.
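The underlying phase-slope relation can be sketched simply: for two coherent signals offset by a delay τ, the cross-power spectrum phase is linear in frequency with slope proportional to 2πτ, so a line fit over a coherent band recovers the delay. The snippet below shows only that relation; the paper's adaptive gradient minimization of the cost function is not reproduced.

```python
# Recover a time delay from the slope of the cross-power spectrum phase.
import numpy as np
from scipy.signal import csd

fs, tau = 1000.0, 0.012                  # sample rate (Hz), true delay (s)
rng = np.random.default_rng(4)
x = rng.normal(size=20000)
y = np.roll(x, int(tau * fs)) + 0.1 * rng.normal(size=x.size)

f, Pxy = csd(x, y, fs=fs, nperseg=1024)
band = (f > 5) & (f < 100)               # fit where coherence is high
phase = np.unwrap(np.angle(Pxy[band]))
slope = np.polyfit(f[band], phase, 1)[0]
print(slope / (2 * np.pi))               # ~ +tau or -tau, depending on convention
```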
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-04
... Conservation Program: Test Procedures for Residential Clothes Washers; Correction AGENCY: Office of Energy.... Department of Energy (DOE) is correcting a final rule establishing revised test procedures for residential... factor calculation section of the currently applicable test procedure. DATES: Effective: April 6, 2012...
40 CFR 600.208-77 - Sample calculation.
Code of Federal Regulations, 2010 CFR
2010-07-01
... ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.208-77 Sample calculation...
Müller, Matthias; Gras, Florian; Marintschev, Ivan; Mückley, Thomas; Hofmann, Gunter O
2009-01-01
A novel, radiation-free and reference-base-free procedure for placement of navigated instruments and implants was developed, and its practicability and precision in retrograde drillings were evaluated in an experimental setting. Two different guidance techniques were used: one experimental group was operated on using the radiation- and reference-base-free navigation technique (Fluoro Free), and the control group was operated on using standard fluoroscopy for guidance. For each group, 12 core decompressions were simulated by retrograde drillings in different artificial femurs following arthroscopic determination of the osteochondral lesions. The final guide-wire position was evaluated by postoperative CT analysis using vector calculation. High precision was achieved in both groups, but operating time was significantly reduced in the navigated group compared with the control group. This was due to a 100% first-pass drilling accuracy in the navigated group; in the control group a mean of 2.5 correction maneuvers per drilling was necessary. Additionally, the procedure was free of radiation in the navigated group, whereas 17.2 seconds of radiation exposure time were measured in the fluoroscopy-guided group. The developed Fluoro Free procedure is a promising and simplified approach to navigating different instruments as well as implants in relation to visually or tactilely placed pointers or objects, without the need for radiation exposure or invasive fixation of a dynamic reference base in the bone.
Evaluation of Flight Deck-Based Interval Management Crew Procedure Feasibility
NASA Technical Reports Server (NTRS)
Wilson, Sara R.; Murdoch, Jennifer L.; Hubbs, Clay E.; Swieringa, Kurt A.
2013-01-01
Air traffic demand is predicted to increase over the next 20 years, creating a need for new technologies and procedures to support this growth in a safe and efficient manner. The National Aeronautics and Space Administration's (NASA) Air Traffic Management Technology Demonstration - 1 (ATD-1) will operationally demonstrate the feasibility of efficient arrival operations combining ground-based and airborne NASA technologies. The integration of these technologies will increase throughput, reduce delay, conserve fuel, and minimize environmental impacts. The ground-based tools include Traffic Management Advisor with Terminal Metering for precise time-based scheduling and Controller Managed Spacing decision support tools for better managing aircraft delay with speed control. The core airborne technology in ATD-1 is Flight deck-based Interval Management (FIM). FIM tools provide pilots with speed commands calculated using information from Automatic Dependent Surveillance - Broadcast. The precise merging and spacing enabled by FIM avionics and flight crew procedures will reduce excess spacing buffers and result in higher terminal throughput. This paper describes a human-in-the-loop experiment designed to assess the acceptability and feasibility of the ATD-1 procedures used in a voice communications environment. This experiment utilized the ATD-1 integrated system of ground-based and airborne technologies. Pilot participants flew a high-fidelity fixed-base simulator equipped with an airborne spacing algorithm and a FIM crew interface. Experiment scenarios involved multiple air traffic flows into the Dallas-Fort Worth Terminal Radar Control airspace. Results indicate that the proposed procedures were feasible for use by flight crews in a voice communications environment. The delivery accuracy at the achieve-by point was within +/- five seconds and the delivery precision was less than five seconds. Furthermore, FIM speed commands occurred at a rate of less than one per minute, and pilots found the frequency of the speed commands to be acceptable at all times throughout the experiment scenarios.
A procedure for predicting internal and external noise fields of blowdown wind tunnels
NASA Technical Reports Server (NTRS)
Hosier, R. N.; Mayes, W. H.
1972-01-01
The noise generated during the operation of large blowdown wind tunnels is considered. Noise calculation procedures are given to predict the test-section overall and spectrum level noise caused by both the tunnel burner and turbulent boundary layer. External tunnel noise levels due to the tunnel burner and circular jet exhaust flow are also calculated along with their respective cut-off frequency and spectrum peaks. The predicted values are compared with measured data, and the ability of the prediction procedure to estimate blowdown-wind-tunnel noise levels is shown.
Blade selection for a modern axial-flow compressor
NASA Technical Reports Server (NTRS)
Wright, L. C.
1974-01-01
The procedures leading to successful design of an axial-flow compressor are discussed. The three related approaches to cascade selection are: (1) an experimental approach which relies on the use of experimental results from identical cascades to satisfy the calculated velocity diagrams, (2) a purely analytical procedure whereby blade shapes are calculated from the theoretical cascade and viscous flow equations, and (3) a semiempirical procedure which uses experimental data together with theoretically derived functional relations to relate the cascade parameters. Diagrams of typical transonic blade sections with uncambered leading edges are presented.
40 CFR 86.244-94 - Calculations; exhaust emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... New Medium-Duty Passenger Vehicles; Cold Temperature Test Procedures § 86.244-94 Calculations; exhaust.... Should NOX measurements be calculated, note that the humidity correction factor is not valid at colder...
40 CFR 86.244-94 - Calculations; exhaust emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... New Medium-Duty Passenger Vehicles; Cold Temperature Test Procedures § 86.244-94 Calculations; exhaust.... Should NOX measurements be calculated, note that the humidity correction factor is not valid at colder...
40 CFR 86.244-94 - Calculations; exhaust emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... New Medium-Duty Passenger Vehicles; Cold Temperature Test Procedures § 86.244-94 Calculations; exhaust.... Should NOX measurements be calculated, note that the humidity correction factor is not valid at colder...
"Hyperstat": an educational and working tool in epidemiology.
Nicolosi, A
1995-01-01
The work of a researcher in epidemiology is based on studying literature, planning studies, gathering data, analyzing data and writing results. The researcher therefore needs to perform more or less simple calculations, to consult or quote literature, to consult textbooks about certain issues or procedures, and to look up specific formulas. There are no programs conceived as a workstation to assist the different aspects of the researcher's work in an integrated fashion. A hypertextual system was developed which supports the different stages of the epidemiologist's work. It combines database management, statistical analysis and planning, and literature searches. The software was developed on the Apple Macintosh using Hypercard 2.1 as a database and HyperTalk as a programming language. The program is structured in 7 "stacks" or files: Procedures; Statistical Tables; Graphs; References; Text; Formulas; Help. Each stack has its own management system with an automated Table of Contents. Stacks contain "cards" which make up the databases and carry executable programs. The programs are of four kinds: association; statistical procedure; formatting (input/output); and database management. The system performs general statistical procedures, procedures applicable to epidemiological studies only (follow-up and case-control), and procedures for clinical trials. All commands are given by clicking the mouse on self-explanatory "buttons". In order to perform calculations, the user only needs to enter the data into the appropriate cells and then click on the selected procedure's button. The system has a hypertextual structure. The user can go from one procedure to others following the preferred order of succession and according to built-in associations. The user can access different levels of knowledge or information from any stack he is consulting or operating. From every card, the user can go to a selected procedure to perform statistical calculations, to the reference database management system, to the textbook in which all procedures and issues are discussed in detail, to the database of statistical formulas with automated table of contents, to statistical tables with automated table of contents, or to the help module. The program has a very user-friendly interface and leaves the user free to use the same format he would use on paper. The interface does not require special skills. It reflects the Macintosh philosophy of using windows, buttons and the mouse. This allows the user to perform complicated calculations without losing the "feel" of the data, to weigh alternatives, and to run simulations. This program shares many features in common with hypertexts. It has an underlying network database where the nodes consist of text, graphics, executable procedures, and combinations of these; the nodes in the database correspond to windows on the screen; the links between the nodes in the database are visible as "active" text or icons in the windows; the text is read by following links and opening new windows. The program is especially useful as an educational tool, directed to medical and epidemiology students. The combination of computing capabilities with a textbook and databases of formulas and literature references makes the program versatile and attractive as a learning tool.
The program is also helpful in the work done at the desk, where the researcher examines results, consults literature, explores different analytic approaches, plans new studies, or writes grant proposals or scientific articles.
NASA Technical Reports Server (NTRS)
Miller, C. G., III; Wilder, S. E.
1972-01-01
Data-reduction procedures for determining free stream and post-normal shock kinetic and thermodynamic quantities are derived. These procedures are applicable to imperfect real air flows in thermochemical equilibrium for temperatures to 15 000 K and a range of pressures from 0.25 N/sq m to 1 GN/sq m. Although derived primarily to meet the immediate needs of the 6-inch expansion tube, these procedures are applicable to any supersonic or hypersonic test facility where combinations of three of the following flow parameters are measured in the test section: (1) Stagnation pressure behind normal shock; (2) freestream static pressure; (3) stagnation point heat transfer rate; (4) free stream velocity; (5) stagnation density behind normal shock; and (6) free stream density. Limitations of the nine procedures and uncertainties in calculated flow quantities corresponding to uncertainties in measured input data are discussed. A listing of the computer program is presented, along with a description of the inputs required and a sample of the data printout.
Verification of the ODOT overlay design procedure : final report, June 1996.
DOT National Transportation Integrated Search
1996-06-01
The current ODOT overlay design procedure sometimes indicates additional pavement thickness is needed right after the overlay construction. Evaluation of the current procedure reveals that using spreadability to back-calculate existing pavement modulu...
Integral flange design program. [procedure for computing stresses
NASA Technical Reports Server (NTRS)
Wilson, J. F.
1974-01-01
An automated interactive flange design program utilizing an electronic desktop calculator is presented. The program calculates the operating and seating stresses for circular flanges of the integral or optional type subjected to internal pressure. The required input information is documented. The program provides an automated procedure for computing stresses in selected flange geometries for comparison with the allowable code values.
The purpose of this SOP is to describe the procedures undertaken for calculating ingestion exposure from Day 4 composite measurements from duplicate diet using the direct method of exposure estimation. This SOP uses data that have been properly coded and certified with appropria...
Finnveden, Svante; Hörlin, Nils-Erik; Barbagallo, Mathias
2014-04-01
Viscoelastic properties of porous materials, typical of those used in vehicles for noise insulation and absorption, are estimated from measurements and inverse finite element procedures. The measurements are taken in a near vacuum and cover a broad frequency range: 20 Hz to 1 kHz. The almost cubic test samples were made of 25 mm foam covered by a "heavy layer" of rubber. They were mounted in a vacuum chamber on an aluminum table, which was excited in the vertical and horizontal directions with a shaker. Three kinds of response are measured, allowing complete estimates of the viscoelastic moduli for isotropic materials and also providing some information on the degree of material anisotropy. First, frequency-independent properties are estimated, where dissipation is described by constant loss factors. Then, fractional derivative models that capture the variation of stiffness and damping with frequency are adopted. The measurement setup is essentially two-dimensional, and calculations are three-dimensional and for a state of plane strain. The good agreement between measured and calculated response provides some confidence in the presented procedures. If, however, the material model cannot fit the measurements well, the inverse procedure yields a certain degree of arbitrariness in the parameter estimation.
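As a sketch of what a fractional derivative material model buys, the snippet below evaluates a fractional Kelvin-Voigt complex modulus E*(ω) = E0 + E1·(iω)^α, whose real part (stiffness) and loss factor vary smoothly with frequency. The model form and parameter values are assumptions for illustration; the paper's specific model may differ.

```python
# Fractional Kelvin-Voigt complex modulus over the 20 Hz - 1 kHz band.
import numpy as np

E0, E1, alpha = 2.0e5, 1.5e3, 0.5                 # Pa, Pa*s**alpha, fractional order
w = 2 * np.pi * np.logspace(np.log10(20), 3, 5)   # angular frequencies, 20 Hz to 1 kHz

E_star = E0 + E1 * (1j * w) ** alpha
loss_factor = E_star.imag / E_star.real           # frequency-dependent damping
for wi, e, lf in zip(w, E_star, loss_factor):
    print(f"f={wi/(2*np.pi):7.1f} Hz  |E*|={abs(e):.3e} Pa  eta={lf:.3f}")
```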
A model to determine payments associated with radiology procedures.
Mabotuwana, Thusitha; Hall, Christopher S; Thomas, Shiby; Wald, Christoph
2017-12-01
Across the United States, there is a growing number of patients in Accountable Care Organizations and under risk contracts with commercial insurance, due to the proliferation of new value-based payment models and care delivery reform efforts. In this context, the business model of radiology within a hospital or health system is shifting from a primary profit-center to a cost-center with a goal of cost savings. Radiology departments increasingly need to understand how the transactional nature of the business relates to financial rewards. The main challenge with current reporting systems is that the information is presented only at an aggregated level, and often not broken down further, for instance, by type of exam. As such, the primary objective of this research is to provide better visibility into payments associated with individual radiology procedures in order to better calibrate the expense/capital structure of the imaging enterprise to the actual revenue or value it adds to the organization it belongs to. We propose a methodology that can be used to determine technical payments at a procedure level. We use a proportion-based model to allocate payments to individual radiology procedures based on total charges (which also include non-radiology related charges). Using a production dataset containing 424,250 radiology exams, we calculated the overall average technical charge for Radiology to be $873.08 per procedure and the corresponding average payment to be $326.43 (range: $48.27 for XR and $2750.11 for PET/CT), resulting in an average payment percentage of 37.39% across all exams. We describe how the charges associated with a procedure can be used to approximate technical payments at a more granular level, with a focus on Radiology. The methodology is generalizable to approximate payment for other services as well. Understanding the payments associated with each procedure can be useful during strategic practice planning. The charge-to-total-charge ratio can be used to approximate radiology payments at a procedure level.
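The proportion-based allocation reduces to a one-line rule: each line item receives the claim payment in proportion to its share of total charges. A minimal sketch, with hypothetical items and amounts:

```python
# Split a claim-level payment across line items in proportion to their charges.
def allocate_payment(total_payment: float, charges: dict[str, float]) -> dict[str, float]:
    total_charges = sum(charges.values())
    return {item: total_payment * c / total_charges for item, c in charges.items()}

claim = {"CT abdomen (technical)": 1200.0, "lab panel": 300.0, "pharmacy": 500.0}
print(allocate_payment(700.0, claim))
# CT gets 700 * 1200/2000 = 420.0; payment percentage = 420/1200 = 35%
```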
A procedure to evaluate environmental rehabilitation in limestone quarries.
Neri, Ana Claudia; Sánchez, Luis Enrique
2010-11-01
A procedure to evaluate mine rehabilitation practices during the operational phase was developed and validated. It is based on a comparison of actually observed or documented practices with internationally recommended best practices (BP). A set of 150 BP statements was derived from international guides in order to establish the benchmark. The statements are arranged in six rehabilitation programs under three categories: (1) planning, (2) operational, and (3) management, corresponding to the adoption of the plan-do-check-act management systems model for mine rehabilitation. The procedure consists of (i) performing technical inspections guided by a series of field forms containing BP statements; (ii) classifying evidence into five categories; and (iii) calculating conformity indexes and levels. For testing and calibration purposes, the procedure was applied to nine limestone quarries and conformity indexes were calculated for the rehabilitation programs in each quarry. Most quarries featured poor planning practices, operational practices reached high conformity levels in 50% of the cases, and management practices scored moderate conformity. Despite all quarries being ISO 14001 certified, their management systems pay little attention to issues pertaining to land rehabilitation and biodiversity. The best results were achieved by a quarry whose expansion was recently submitted to the environmental impact assessment process, suggesting that public scrutiny may play a positive role in enhancing rehabilitation practices. Conformity indexes and levels can be used to chart the evolution of rehabilitation practices at regular intervals, to establish corporate goals and to communicate with stakeholders.
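The conformity index computation can be sketched as follows: each best-practice statement's evidence is placed in one of five categories, mapped to a score, and averaged per rehabilitation program. The category names and the category-to-score mapping below are illustrative assumptions, not the paper's exact scale.

```python
# Average a per-statement evidence classification into a conformity index.
SCORES = {"full": 1.0, "substantial": 0.75, "partial": 0.5,
          "weak": 0.25, "none": 0.0}

def conformity_index(evidence: list[str]) -> float:
    return sum(SCORES[e] for e in evidence) / len(evidence)

planning = ["none", "partial", "weak", "full", "none"]
operational = ["full", "full", "substantial", "partial"]
print(f"planning: {conformity_index(planning):.2f}, "
      f"operational: {conformity_index(operational):.2f}")
```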
Farace, P; Pontalti, R; Cristoforetti, L; Antolini, R; Scarpa, M
1997-11-01
This paper presents an automatic method to obtain tissue complex permittivity values to be used as input data in the computer modelling for hyperthermia treatment planning. Magnetic resonance (MR) images were acquired and the tissue water content was calculated from the signal intensity of the image pixels. The tissue water content was converted into complex permittivity values by monotonic functions based on mixture theory. To obtain a water content map by MR imaging a gradient-echo pulse sequence was used and an experimental procedure was set up to correct for relaxation and radiofrequency field inhomogeneity effects on signal intensity. Two approaches were followed to assign the permittivity values to fat-rich tissues: (i) fat-rich tissue localization by a segmentation procedure followed by assignment of tabulated permittivity values; (ii) water content evaluation by chemical shift imaging followed by permittivity calculation. Tests were performed on phantoms of known water content to establish the reliability of the proposed method. MRI data were acquired and processed pixel-by-pixel according to the outlined procedure. The signal intensity in the phantom images correlated well with water content. Experiments were performed on volunteers' healthy tissue. In particular two anatomical structures were chosen to calculate permittivity maps: the head and the thigh. The water content and electric permittivity values were obtained from the MRI data and compared to others in the literature. A good agreement was found for muscle, cerebrospinal fluid (CSF) and white and grey matter. The advantages of the reported method are discussed in the light of possible application in hyperthermia treatment planning.
Efficiency of personal dosimetry methods in vascular interventional radiology.
Bacchim Neto, Fernando Antonio; Alves, Allan Felipe Fattori; Mascarenhas, Yvone Maria; Giacomini, Guilherme; Maués, Nadine Helena Pelegrino Bastos; Nicolucci, Patrícia; de Freitas, Carlos Clayton Macedo; Alvarez, Matheus; Pina, Diana Rodrigues de
2017-05-01
The aim of the present study was to determine the efficiency of six methods for calculating the effective dose (E) received by health professionals during vascular interventional procedures. We evaluated the efficiency of six methods that are currently used to estimate professionals' E, based on national and international recommendations for interventional radiology. Equivalent doses to the head, neck, chest, abdomen, feet, and hands of seven professionals were monitored during 50 vascular interventional radiology procedures. Professionals' E was calculated for each procedure according to six methods that are commonly employed internationally. To determine the best method, a more exact E calculation was used to establish the reference value (reference E) for comparison. The highest equivalent doses were found for the hands (0.34±0.93 mSv). The two methods described by Brazilian regulations overestimated E by approximately 100% and 200%. The most efficient method was the one recommended by the United States National Council on Radiation Protection and Measurements (NCRP). The mean and median differences of this method relative to reference E were close to 0%, and its standard deviation was the lowest among the six methods. The present study showed that the most precise method was the one recommended by the NCRP, which uses two dosimeters (one over and one under the protective apron). Methods that employ at least two dosimeters are more efficient and provide better information for estimating E and the doses to shielded and unshielded regions.
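A sketch of the two-dosimeter estimate follows. It assumes the commonly cited NCRP Report No. 122 weighting, E = 0.5·H_under + 0.025·H_over, combining the under-apron and over-apron badge readings; treat the exact coefficients as an assumption to verify against the report rather than taking them from this sketch.

```python
# Two-dosimeter effective dose estimate (assumed NCRP Report No. 122 weights).
def effective_dose_ncrp(h_under_mSv: float, h_over_mSv: float) -> float:
    return 0.5 * h_under_mSv + 0.025 * h_over_mSv

print(effective_dose_ncrp(h_under_mSv=0.02, h_over_mSv=0.34))  # mSv per procedure
```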
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown, and conservatively accurate solutions may be computed. Computable error estimates offer the possibility of minimizing computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user-specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high-lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
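The adjoint-weighted-residual idea can be shown exactly on a linear toy problem: for A u = f and output J = g·u, solving the adjoint A^T ψ = g makes ψ·(f - A u_h) the error in J of any approximate u_h. A minimal sketch, with dense linear algebra standing in for the CFD discretization:

```python
# Adjoint-based estimate of the error in an output functional J = g @ u.
import numpy as np

rng = np.random.default_rng(5)
A = rng.normal(size=(50, 50)) + 50 * np.eye(50)   # well-conditioned system
f = rng.normal(size=50)
g = rng.normal(size=50)                           # defines the output functional

u = np.linalg.solve(A, f)                         # "true" discrete solution
u_h = u + 1e-3 * rng.normal(size=50)              # perturbed approximate solution

psi = np.linalg.solve(A.T, g)                     # adjoint solution
residual = f - A @ u_h
est = psi @ residual                              # adjoint-weighted residual
print(est, g @ (u - u_h))                         # estimate vs. actual error in J
```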
New Criterion and Tool for Caltrans Seismic Hazard Characterization
NASA Astrophysics Data System (ADS)
Shantz, T.; Merriam, M.; Turner, L.; Chiou, B.; Liu, X.
2008-12-01
Caltrans recently adopted new procedures for the development of response spectra for structure design. These procedures incorporate both deterministic and probabilistic criteria. The Next Generation Attenuation (NGA) models (2008) are used for deterministic assessment (using a revised late-Quaternary age fault database), and the USGS 2008 5% in 50-year hazard maps are used for probabilistic assessment. A minimum deterministic spectrum based on an M6.5 earthquake at 12 km is also included. These spectra are enveloped and the largest values used. A new publicly available web-based design tool for calculating the design spectrum will be used for calculations. The tool is built on a Windows-Apache-MySQL-PHP (WAMP) platform and integrates GoogleMaps for increased flexibility in the tool's use. Links to Caltrans data such as pre-construction logs of test borings assist in the estimation of Vs30 values used in the new procedures. Basin effects based on new models developed for the CFM, for the San Francisco Bay area by the USGS, and by Thurber (2008) are also incorporated. It is anticipated that additional layers such as CGS Seismic Hazard Zone maps will be added in the future. Application of the new criterion will result in expected higher levels of ground motion at many bridges west of the Coast Ranges. In eastern California, use of the NGA relationships for strike-slip faulting (the dominant sense of motion in California) will often result in slightly lower expected values for bridges. The expected result is a more realistic prediction of ground motions at bridges, in keeping with those motions developed for other large-scale and important structures. The tool is based on a simplified fault map of California, so it will not be used for more detailed evaluations such as surface rupture determination. Announcements regarding tool availability (expected to be in early 2009) are at http://www.dot.ca.gov/research/index.htm
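The enveloping step reduces to a period-by-period maximum over the candidate spectra. A sketch with hypothetical spectral accelerations (the deterministic, probabilistic, and minimum-deterministic values below are invented for illustration):

```python
import numpy as np

# Design spectrum = period-by-period envelope of the three candidate spectra.
periods = np.array([0.1, 0.2, 0.5, 1.0, 2.0])        # s
sa_det  = np.array([0.80, 0.95, 0.70, 0.40, 0.20])   # hypothetical NGA-based values, g
sa_prob = np.array([0.75, 1.00, 0.65, 0.45, 0.18])   # hypothetical USGS 2008 values, g
sa_min  = np.array([0.60, 0.70, 0.50, 0.30, 0.15])   # hypothetical M6.5 @ 12 km floor, g

sa_design = np.maximum.reduce([sa_det, sa_prob, sa_min])
```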
Utilization of group theory in studies of molecular clusters
NASA Astrophysics Data System (ADS)
Ocak, Mahir E.
The structure of the molecular symmetry group of molecular clusters was analyzed and it is shown that the molecular symmetry group of a molecular cluster can be written as direct products and semidirect products of its subgroups. Symmetry adaptation of basis functions in direct product groups and semidirect product groups was considered in general, and the sequential symmetry adaptation procedure already known for direct product groups was extended to the case of semidirect product groups. By using the sequential symmetry adaptation procedure, a new method for calculating the VRT spectra of molecular clusters, named the Monomer Basis Representation (MBR) method, was developed. In the MBR method, the calculation starts with a single monomer with the purpose of obtaining an optimized basis for that monomer as a linear combination of some primitive basis functions. Then, an optimized basis for each identical monomer is generated from the optimized basis of this monomer. By using the optimized bases of the monomers, a basis is generated for the solution of the full problem, and the VRT spectra of the cluster are obtained by using this basis. Since an optimized basis is used for each monomer, which has a much smaller size than the primitive basis from which the optimized bases are generated, the MBR method leads to an exponential reduction in the size of the basis that is required for the calculations. Application of the MBR method has been illustrated by calculating the VRT spectra of the water dimer using the SAPT-5st potential surface of Groenenboom et al. The results of the calculations are in good agreement with both the original calculations of Groenenboom et al. and the experimental results. Comparing the size of the optimized basis with the size of the primitive basis, it can be said that the method works efficiently. Because of its efficiency, the MBR method can be used for studies of clusters bigger than dimers. Thus, the MBR method can be used for studying many-body terms and for deriving accurate potential surfaces.
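The basis-contraction idea behind MBR can be sketched in a few lines: diagonalize a monomer Hamiltonian in its primitive basis, keep only the lowest eigenvectors as the optimized monomer basis, and build the cluster basis from products of these. The Hamiltonian and dimensions below are synthetic stand-ins:

```python
import numpy as np

def optimized_monomer_basis(h_monomer, n_keep):
    """Contract a primitive monomer basis to its lowest n_keep eigenvectors."""
    evals, evecs = np.linalg.eigh(h_monomer)
    return evecs[:, :n_keep]          # columns = optimized monomer functions

# Hypothetical primitive monomer Hamiltonian (200 primitive functions)
rng = np.random.default_rng(1)
h1 = rng.standard_normal((200, 200))
h1 = 0.5 * (h1 + h1.T)
c1 = optimized_monomer_basis(h1, 10)  # 200 -> 10 functions per monomer

# A dimer product basis is then 10 x 10 = 100 functions instead of
# 200 x 200 = 40,000 primitive products: the exponential saving in basis size.
```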
NASA Astrophysics Data System (ADS)
Rossi, Francesca; Zingoni, Tiziano; Di Cicco, Emiliano; Manetti, Leonardo; Pini, Roberto; Fortuna, Damiano
2011-07-01
Laser light is nowadays routinely used in aesthetic treatments of facial skin, such as laser rejuvenation and scar removal. The induced thermal damage may be varied by setting different laser parameters in order to obtain a particular aesthetic result. In this work, a theoretical study of the thermal damage induced in deep tissue is proposed, considering different laser pulse durations. The study is based on the Finite Element Method (FEM): a two-dimensional model of the facial skin is depicted in axial symmetry, considering the different skin structures and their different optical and thermal parameters; the conversion of laser light into thermal energy is modeled by the bio-heat equation. The light source is a CO2 laser with different pulse durations. The model enabled the study of the thermal damage induced in the skin by calculating the Arrhenius integral. The post-processing results enabled study, in space and time, of the temperature dynamics induced in the facial skin, of the possible cumulative effects of subsequent laser pulses, and of how to optimize the procedure for applications in dermatological surgery. The calculated data were then validated in an experimental measurement session performed in a sheep animal model. Histological analyses were performed on the treated tissues, revealing the spatial distribution and the extent of the thermal damage in the collagenous tissue. Modeling and experimental results were in good agreement, and they were used to design a new optimized laser-based skin resurfacing procedure.
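The Arrhenius damage integral used above is standard: Omega(t) = integral of A*exp(-Ea/(R*T(t))) dt, with damage usually taken as Omega >= 1. A minimal numerical sketch; the coefficients are the values often quoted for skin (Henriques-type) and are only illustrative, as is the temperature history:

```python
import numpy as np

def arrhenius_damage(t, temperature_k, a=3.1e98, ea=6.28e5, r_gas=8.314):
    """Arrhenius thermal-damage integral Omega = int A*exp(-Ea/(R*T)) dt.

    a (1/s) and ea (J/mol) are frequency factor and activation energy;
    the defaults are commonly quoted skin values, used here only as an
    illustration. Tissue-specific coefficients should be substituted.
    """
    integrand = a * np.exp(-ea / (r_gas * temperature_k))
    return np.trapz(integrand, t)

t = np.linspace(0.0, 0.05, 501)                            # 50 ms window, s
temp = 310.0 + 45.0 * np.exp(-((t - 0.01) / 0.01) ** 2)    # hypothetical T(t), K
print(arrhenius_damage(t, temp))                           # Omega >= 1 read as damage
```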
Frank, Steven M; Rothschild, James A; Masear, Courtney G; Rivers, Richard J; Merritt, William T; Savage, Will J; Ness, Paul M
2013-06-01
The maximum surgical blood order schedule (MSBOS) is used to determine preoperative blood orders for specific surgical procedures. Because the list was developed in the late 1970s, many new surgical procedures have been introduced and others improved upon, making the original MSBOS obsolete. The authors describe methods to create an updated, institution-specific MSBOS to guide preoperative blood ordering. Blood utilization data for 53,526 patients undergoing 1,632 different surgical procedures were gathered from an anesthesia information management system. A novel algorithm based on previously defined criteria was used to create an MSBOS for each surgical specialty. The economic implications were calculated based on the number of blood orders placed, but not indicated, according to the MSBOS. Among 27,825 surgical cases that did not require preoperative blood orders as determined by the MSBOS, 9,099 (32.7%) had a type and screen, and 2,643 (9.5%) had a crossmatch ordered. Of 4,644 cases determined to require only a type and screen, 1,509 (32.5%) had a type and crossmatch ordered. By using the MSBOS to eliminate unnecessary blood orders, the authors calculated a potential reduction in hospital charges and actual costs of $211,448 and $43,135 per year, respectively, or $8.89 and $1.81 per surgical patient, respectively. An institution-specific MSBOS can be created, using blood utilization data extracted from an anesthesia information management system along with our proposed algorithm. Using these methods to optimize the process of preoperative blood ordering can potentially improve operating room efficiency, increase patient safety, and decrease costs.
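The categorization logic of such an MSBOS algorithm can be sketched as a simple decision rule driven by per-procedure utilization statistics. The thresholds below are hypothetical placeholders, not the authors' previously defined criteria:

```python
def msbos_category(n_cases, frac_transfused, median_units):
    """Assign a blood-order category to a surgical procedure.

    Hypothetical thresholds for illustration only; the authors' algorithm
    applies previously defined criteria to utilization data extracted from
    an anesthesia information management system.
    """
    if n_cases < 30:
        return "insufficient data"
    if frac_transfused < 0.05:
        return "no preoperative order"
    if median_units < 1:
        return "type and screen"
    return f"type and crossmatch {max(1, round(median_units))} units"

print(msbos_category(412, 0.02, 0))   # -> "no preoperative order"
```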
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper will focus primarily on the theoretical aspects of the retrieval algorithm, which includes a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as the detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.
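The consistency requirement amounts to selecting, from a set of candidate hydrometeor profiles, the one whose calculated brightness temperatures best match the observations, with the residual retained as a uniqueness diagnostic. A sketch with invented numbers (four channels, three candidate profiles):

```python
import numpy as np

tb_observed = np.array([230.0, 245.0, 260.0, 210.0])       # K, 4 channels
tb_database = np.array([[228.0, 244.0, 262.0, 212.0],      # profile 0
                        [240.0, 250.0, 255.0, 220.0],      # profile 1
                        [225.0, 241.0, 259.0, 205.0]])     # profile 2
rain_rates = np.array([4.0, 9.0, 2.5])                     # mm/h per profile

residuals = np.sqrt(((tb_database - tb_observed) ** 2).mean(axis=1))
best = residuals.argmin()
print(rain_rates[best], residuals[best])   # retrieved rate and residual error
```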
An approximate method for calculating three-dimensional inviscid hypersonic flow fields
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Dejarnette, Fred R.
1990-01-01
An approximate solution technique was developed for 3-D inviscid, hypersonic flows. The method employs Maslen's explicit pressure equation in addition to the assumption of approximate stream surfaces in the shock layer. This approximation represents a simplification to Maslen's asymmetric method. The present method presents a tractable procedure for computing the inviscid flow over 3-D surfaces at angle of attack. The solution procedure involves iteratively changing the shock shape in the subsonic-transonic region until the correct body shape is obtained. Beyond this region, the shock surface is determined using a marching procedure. Results are presented for a spherically blunted cone, paraboloid, and elliptic cone at angle of attack. The calculated surface pressures are compared with experimental data and finite difference solutions of the Euler equations. Shock shapes and profiles of pressure are also examined. Comparisons indicate the method adequately predicts shock layer properties on blunt bodies in hypersonic flow. The speed of the calculations makes the procedure attractive for engineering design applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ndong, Mamadou; Lauvergnat, David; Nauts, André
2013-11-28
We present new techniques for an automatic computation of the kinetic energy operator in analytical form. These techniques are based on the use of the polyspherical approach and are extended to take into account Cartesian coordinates as well. An automatic procedure is developed where analytical expressions are obtained by symbolic calculations. This procedure is a full generalization of the one presented in Ndong et al., [J. Chem. Phys. 136, 034107 (2012)]. The correctness of the new implementation is analyzed by comparison with results obtained from the TNUM program. We give several illustrations that could be useful for users of the code. In particular, we discuss some cyclic compounds which are important in photochemistry. Among others, we show that choosing a well-adapted parameterization and decomposition into subsystems can allow one to avoid singularities in the kinetic energy operator. We also discuss a relation between polyspherical and Z-matrix coordinates: this comparison could be helpful for building an interface between the new code and a quantum chemistry package.
Citak, Demirhan; Tuzen, Mustafa; Soylak, Mustafa
2010-01-15
A speciation procedure based on the coprecipitation of manganese(II) with zirconium(IV) hydroxide has been developed for the investigation of levels of manganese species. The determination of manganese levels was performed by flame atomic absorption spectrometry (FAAS). Total manganese was determined after the reduction of Mn(VII) to Mn(II) by ascorbic acid. The analytical parameters, including pH, amount of zirconium(IV), sample volume, etc., were investigated for the quantitative recoveries of manganese(II). The effects of matrix ions were also examined. The recoveries for manganese(II) were in the range of 95-98%. The preconcentration factor was calculated as 50. The detection limit for the analyte ions, based on 3 sigma (n=21), was 0.75 microg L(-1) for Mn(II). The relative standard deviation was found to be lower than 7%. The validation of the presented procedure was performed by analysis of certified reference materials with different matrices, NIST SRM 1515 (Apple Leaves) and NIST SRM 1568a (Rice Flour). The procedure was successfully applied to natural waters and food samples.
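The 3-sigma detection limit and the preconcentration factor are computed in the usual way; a sketch with invented blank signals and calibration slope (the paper used 21 blank replicates):

```python
import numpy as np

blank_signals = np.array([0.011, 0.013, 0.010, 0.012, 0.011])  # hypothetical replicates
slope = 0.045            # hypothetical calibration slope, signal per (microg/L)

lod = 3 * blank_signals.std(ddof=1) / slope      # 3-sigma detection limit, microg/L
preconc_factor = 500.0 / 10.0                    # initial volume / final volume = 50
print(lod, preconc_factor)
```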
A united event grand canonical Monte Carlo study of partially doped polyaniline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byshkin, M. S., E-mail: mbyshkin@unisa.it, E-mail: gmilano@unisa.it; Correa, A.; Buonocore, F.
2013-12-28
A Grand Canonical Monte Carlo scheme, based on united events combining protonation/deprotonation and insertion/deletion of HCl molecules, is proposed for the generation of polyaniline structures at intermediate doping levels between 0% (PANI EB) and 100% (PANI ES). A procedure based on this scheme and subsequent structure relaxations using molecular dynamics is described and validated. Using the proposed scheme and the corresponding procedure, atomistic models of amorphous PANI-HCl structures were generated and studied at different doping levels. Density, structure factors, and solubility parameters were calculated. Their values agree well with available experimental data. The interactions of HCl with PANI have been studied and the distribution of their energies has been analyzed. The procedure has also been extended to the generation of PANI models including adsorbed water, and the effect of inclusion of water molecules on PANI properties has also been modeled and discussed. The protocol described here is general and the proposed United Event Grand Canonical Monte Carlo scheme can be easily extended to similar polymeric materials used in gas sensing and to other systems involving adsorption and chemical reaction steps.
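A grand canonical insertion move is accepted with the standard Metropolis probability; in the united-event scheme the energy change covers the combined insertion-plus-protonation step. A sketch with loose (dimensionless) units and invented numbers, not the paper's implementation:

```python
import numpy as np

def accept_insertion(delta_u, mu, n, volume, beta, lambda3):
    """Metropolis acceptance for a GCMC insertion move.

    P = min(1, V / (lambda^3 * (N+1)) * exp(beta * (mu - delta_u))).
    In a united-event scheme, delta_u is the energy change of the whole
    combined event (HCl insertion plus protonation). Units are loose and
    the numbers below are purely illustrative.
    """
    p = volume / (lambda3 * (n + 1)) * np.exp(beta * (mu - delta_u))
    return np.random.random() < min(1.0, p)

print(accept_insertion(delta_u=-0.2, mu=-0.5, n=120,
                       volume=1.0e5, beta=1.0 / 0.593, lambda3=1.0))
```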
Two-dimensional imaging of two types of radicals by the CW-EPR method
NASA Astrophysics Data System (ADS)
Czechowski, Tomasz; Krzyminiewski, Ryszard; Jurga, Jan; Chlewicki, Wojciech
2008-01-01
The CW-EPR method of image reconstruction is based on sample rotation in a magnetic field with a constant gradient (50 G/cm). In order to obtain a projection (radical density distribution) along a given direction, the EPR spectra are recorded with and without the gradient. Deconvolution then gives the distribution of the spin density. Projections at 36 different angles give the information that is necessary for reconstruction of the radical distribution. The problem becomes more complex when there are at least two types of radicals in the sample, because the deconvolution procedure does not give satisfactory results. We propose a method to calculate the projections for each radical, based on iterative procedures. The images of density distribution for each radical obtained by our procedure have proved that the method of deconvolution, in combination with iterative fitting, provides correct results. The test was performed on a sample of polymer PPS Br 111 (p-phenylene sulphide) with glass fibres and minerals. The results indicated a heterogeneous distribution of radicals in the sample volume. The images obtained were in agreement with the known shape of the sample.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahn, T.; Rolfes, R.; Jonkman, J.
A significant number of wind turbines installed today have reached their designed service life of 20 years, and the number will rise continuously. Most of these turbines promise a more economical performance if they operate for more than 20 years. To assess continued operation, we have to analyze the load-bearing capacity of the support structure with respect to site-specific conditions. Such an analysis requires the comparison of the loads used for the design of the support structure with the actual loads experienced. This publication presents the application of a so-called inverse load calculation to a 5-MW wind turbine support structure. The inverse load calculation determines external loads derived from a mechanical description of the support structure and from measured structural responses. Using numerical simulations with the software FAST, we investigated the influence of wind-turbine-specific effects, such as the wind turbine control or the dynamic interaction between the loads and the support structure, on the presented inverse load calculation procedure. FAST is used to study the inverse calculation of simultaneously acting wind and wave loads, which has not been carried out until now. Furthermore, the application of the inverse load calculation procedure to a real 5-MW wind turbine support structure is demonstrated. In terms of this practical application, setting up the mechanical system for the support structure using measurement data is discussed. The paper presents results for defined load cases and assesses the accuracy of the inversely derived dynamic loads for both the simulations and the practical application.
NASA Technical Reports Server (NTRS)
Reginato, R.; Idso, S.; Vedder, J.; Jackson, R.; Blanchard, M.; Goettelman, R.
1975-01-01
A procedure is presented for calculating 24-hour totals of evaporation from wet and drying soils. Its application requires a knowledge of the daily solar radiation, the maximum and minimum air temperatures, the moist-surface albedo, and the maximum and minimum surface temperatures. Tests of the technique on a bare field of Avondale loam at Phoenix, Arizona showed it to be independent of season.
The Otto-engine-equivalent vehicle concept
NASA Technical Reports Server (NTRS)
Dowdy, M. W.; Couch, M. D.
1978-01-01
A vehicle comparison methodology based on the Otto-Engine Equivalent (OEE) vehicle concept is described. As an illustration of this methodology, the concept is used to make projections of the fuel economy potential of passenger cars using various alternative power systems. Sensitivities of OEE vehicle results to assumptions made in the calculational procedure are discussed. Factors considered include engine torque boundary, rear axle ratio, performance criteria, engine transient response, and transmission shift logic.
Use of Suction Piles for Mooring of Mobile Offshore Bases.
1998-06-11
This procedure did not, however, take into account the passive suction developed by the pile. Investigation of soil interaction with suction piles ... resulting σ′v distribution, which accounts for friction, is also shown in Fig. 5. The effective vertical stress profile within the clay just before the ... accounting for active/passive soil pressures and skirt friction components. The principles used by Bye and his colleagues in the stability calculation
Interpretation of the results of statistical measurements. [search for basic probability model
NASA Technical Reports Server (NTRS)
Olshevskiy, V. V.
1973-01-01
For random processes, the calculated probability characteristic and the measured statistical estimate are used in a quality functional which defines the difference between the two functions. Based on the assumption that the statistical measurement procedure is organized so that the parameters for a selected model are optimized, it is shown that the interpretation of experimental research is a search for a basic probability model.
Transformer miniaturization for transcutaneous current/voltage pulse applications.
Kolen, P T
1999-05-01
A general procedure for the design of a miniaturized step-up transformer to be used in the context of surface-electrode-based current/voltage pulse generation is presented. It has been shown that the optimum secondary current pulse width is 4.5 tau, where tau is the time constant of the pulse-forming network formed by the transformer/electrode interaction. This criterion has been shown to produce the highest peak-to-average current ratio for the secondary current pulse. The design procedure allows for the calculation of the optimum turns ratio, primary turns, and secondary turns for given electrode load/tissue and magnetic core parameters. Two design examples for transformer optimization are presented.
Groen, Reinou S; Kamara, Thaim B; Dixon-Cole, Richmond; Kwon, Steven; Kingham, T Peter; Kushner, Adam L
2012-08-01
A first step toward improving surgical care in many low- and middle-income countries is to document the need. To facilitate the collection and analysis of surgical capacity data and measure changes over time, Surgeons OverSeas (SOS) developed a tool and index based on personnel, infrastructure, procedures, equipment, and supplies (PIPES). A follow-up assessment of 10 government hospitals in Sierra Leone was completed 42 months after an initial survey in 2008 using the PIPES tool. A PIPES index based on the number of operating rooms, personnel, infrastructure, procedures, equipment, and supplies was calculated; an index was also calculated from the 2008 data for comparison. Most hospitals demonstrated an increased index, which correlated with site visits that verified improved conditions. Connaught Hospital in Sierra Leone had the highest score (9.2), consistent with its being the best equipped and staffed Ministry of Health and Sanitation facility. Makeni District Hospital had the greatest increase, from 3.8 to 7.5, consistent with a newly constructed facility. The PIPES tool was easily administered at hospitals in Sierra Leone and the index was found useful. Surgical capacity in Sierra Leone improved between 2008 and 2011, as demonstrated by an increase in the overall PIPES indices.
NASA Astrophysics Data System (ADS)
Lu, Zenghai; Kasaragod, Deepa K.; Matcher, Stephen J.
2012-01-01
We demonstrate theoretically and experimentally that the phase retardance and relative optic-axis orientation of a sample can be calculated without prior knowledge of the actual value of the phase modulation amplitude when using a polarization-sensitive optical coherence tomography system based on continuous polarization modulation (CPM-PS-OCT). We also demonstrate that the sample Jones matrix can be calculated at any values of the phase modulation amplitude in a reasonable range depending on the system effective signal-to-noise ratio. This has fundamental importance for the development of clinical systems by simplifying the polarization modulator drive instrumentation and eliminating its calibration procedure. This was validated on measurements of a three-quarter waveplate and an equine tendon sample by a fiber-based swept-source CPM-PS-OCT system.
A New Quantum Watermarking Based on Quantum Wavelet Transforms
NASA Astrophysics Data System (ADS)
Heidari, Shahrokh; Naseri, Mosayeb; Gheibi, Reza; Baghfalaki, Masoud; Rasoul Pourarian, Mohammad; Farouk, Ahmed
2017-06-01
Quantum watermarking is a technique to embed specific information, usually the owner's identification, into quantum cover data for purposes such as copyright protection. In this paper, a new scheme for quantum watermarking based on quantum wavelet transforms is proposed which includes scrambling, embedding and extracting procedures. The invisibility and robustness performance of the proposed watermarking method is confirmed by simulation. The invisibility of the scheme is examined by the peak-signal-to-noise ratio (PSNR) and the histogram calculation. Furthermore, the robustness of the scheme is analyzed by the bit error rate (BER) and the two-dimensional correlation (Corr 2-D) calculation. The simulation results indicate that the proposed watermarking scheme shows not only acceptable visual quality but also good resistance against different types of attack. Supported by Kermanshah Branch, Islamic Azad University, Kermanshah, Iran
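PSNR and BER are standard figures of merit; their definitions can be sketched directly (the images and bit strings below are synthetic, and this is classical post-processing of the simulation output, not the quantum scheme itself):

```python
import numpy as np

def psnr(cover, watermarked, max_val=255.0):
    """Peak signal-to-noise ratio between cover and watermarked images (dB)."""
    mse = np.mean((cover.astype(float) - watermarked.astype(float)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def bit_error_rate(bits_in, bits_out):
    """Fraction of watermark bits flipped after an attack."""
    bits_in, bits_out = np.asarray(bits_in), np.asarray(bits_out)
    return np.mean(bits_in != bits_out)

img = np.random.randint(0, 256, (64, 64))
marked = np.clip(img + np.random.randint(-2, 3, img.shape), 0, 255)
print(psnr(img, marked), bit_error_rate([1, 0, 1, 1], [1, 0, 0, 1]))
```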
Solar radiation for Mars power systems
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph; Landis, Geoffrey A.
1991-01-01
Detailed information about the solar radiation characteristics on Mars is necessary for effective design of future planned solar energy systems operating on the surface of Mars. A procedure and solar-radiation-related data are presented from which the diurnal and daily variations of the global, direct (or beam), and diffuse insolation on Mars are calculated. The radiation data are based on the measured optical depth of the Martian atmosphere, derived from images taken of the Sun with a special diode on the Viking Lander cameras, and on computation based on multiple wavelength and multiple scattering of the solar radiation.
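The direct (beam) component at the surface follows Beer's law attenuation through the measured optical depth; the diffuse component in the paper comes from the multiple-scattering computation. A sketch of the beam term only, with illustrative numbers:

```python
import numpy as np

def direct_beam_insolation(s_flux, tau, cos_z):
    """Direct-beam insolation on a horizontal Martian surface, W/m^2.

    Beer's law: G_b = S * cos(z) * exp(-tau / cos(z)), with s_flux the
    top-of-atmosphere flux, tau the optical depth, and z the solar zenith
    angle. The diffuse part requires the multiple-scattering computation
    and is not sketched here.
    """
    return s_flux * cos_z * np.exp(-tau / cos_z)

s_mars = 590.0   # approximate mean solar constant at Mars, W/m^2
print(direct_beam_insolation(s_mars, tau=0.5, cos_z=np.cos(np.radians(30))))
```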
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph; Flood, Dennis J.
1989-01-01
Detailed information on solar radiation characteristics on Mars is necessary for effective design of future planned solar energy systems operating on the surface of Mars. Presented here are a procedure and solar-radiation-related data from which the diurnal, hourly and daily variations of the global, direct beam and diffuse insolation on Mars are calculated. The radiation data are based on the measured optical depth of the Martian atmosphere, derived from images taken of the Sun with a special diode on the Viking cameras, and on computation based on multiple wavelength and multiple scattering of the solar radiation.
NASA Technical Reports Server (NTRS)
Appelbaum, Joseph; Flood, Dennis J.
1990-01-01
Detailed information on solar radiation characteristics on Mars is necessary for effective design of future planned solar energy systems operating on the surface of Mars. Presented here are a procedure and solar-radiation-related data from which the diurnal, hourly and daily variations of the global, direct beam and diffuse insolation on Mars are calculated. The radiation data are based on the measured optical depth of the Martian atmosphere, derived from images taken of the Sun with a special diode on the Viking cameras, and on computation based on multiple wavelength and multiple scattering of the solar radiation.
Improvement of the Earth's gravity field from terrestrial and satellite data
NASA Technical Reports Server (NTRS)
1987-01-01
The terrestrial gravity data base was updated. Studies related to the Geopotential Research Mission (GRM) have primarily considered the local recovery of gravity anomalies on the surface of the Earth based on satellite-to-satellite tracking or gradiometer data. A simulation study was used to estimate the accuracy of 1 degree-mean anomalies which could be recovered from the GRM data. Numerous procedures were developed to perform computations at the laser stations in the SL6 system to improve geoid undulation calculations.
Quantifying faculty teaching time in a department of obstetrics and gynecology.
Emmons, S
1998-10-01
The goal of this project was to develop a reproducible system that measures quantity and quality of teaching in unduplicated hours, such that comparisons of teaching activities could be drawn within and across departments. Such a system could be used for allocating teaching monies and for assessing teaching as part of the promotion and tenure process. Various teaching activities, including time spent in clinic, rounds, and doing procedures, were enumerated. The faculty were surveyed about their opinions on the proportion of clinical time spent in teaching. The literature also was reviewed. Based on analysis of the faculty survey and the literature, a series of calculations were developed to divide clinical time among resident teaching, medical student teaching, and patient care. The only input needed was total time spent in the various clinical activities, time spent in didactic activities, and the resident procedure database. This article describes a simple and fair database system to calculate time spent teaching from activities such as clinic, ward rounds, labor and delivery, and surgery. The teaching portfolio database calculates teaching as a proportion of the faculty member's total activities. The end product is a report that provides a reproducible yearly summary of faculty teaching time per activity and per type of learner.
Landry, Nicholas W.; Knezevic, Marko
2015-01-01
Property closures are envelopes representing the complete set of theoretically feasible macroscopic property combinations for a given material system. In this paper, we present a computational procedure based on fast Fourier transforms (FFTs) for delineation of elastic property closures for hexagonal close packed (HCP) metals. The procedure consists of building a database of non-zero Fourier transforms for each component of the elastic stiffness tensor, calculating the Fourier transforms of orientation distribution functions (ODFs), and calculating the ODF-to-elastic property bounds in the Fourier space. In earlier studies, HCP closures were computed using the generalized spherical harmonics (GSH) representation and an assumption of orthotropic sample symmetry; here, the FFT approach allowed us to successfully calculate the closures for a range of HCP metals without invoking any sample symmetry assumption. The methodology presented here facilitates for the first time computation of property closures involving normal-shear coupling stiffness coefficients. We found that the representation of these property linkages using FFTs needs more terms compared to GSH representations. However, the use of FFT representations reduces the computational time involved in producing the property closures due to the use of fast FFT algorithms. Moreover, FFT algorithms are readily available as opposed to GSH codes. PMID:28793566
A break-even price calculation for the use of sirolimus-eluting stents in angioplasty.
Galanaud, Jean-Philippe; Delavennat, Juliette; Durand-Zaleski, Isabelle
2003-03-01
One of the major complications of angioplasty is the early occurrence of restenosis requiring a repeat procedure. When bare-metal stents are used, clinical restenosis results in a repeat procedure in 10% to 15% of cases. Based on the results of an international, randomized clinical trial, the use of sirolimus-eluting stents reduces this risk. The aims of this study were to calculate the theoretical break-even price for sirolimus-eluting stents in France, the Netherlands, and the United States, and to determine the additional health care cost per patient. The break-even price was calculated by adding the savings resulting from a 15% decrease in the rate of clinical restenosis to the price of bare-metal stents. Costs were computed from the viewpoint of the health care system, exclusive of other societal costs. The break-even prices were 1291 Euro to 1489 Euro in France, 2028 Euro in the Netherlands, and 2708 Euro in the United States (1.00 Euro = 1.00 US dollar in purchasing power parity). These results indicate that the commercial price of sirolimus-eluting stents will increase hospital spending for patients undergoing angioplasty by 17% to 55% per patient. This additional cost to the health care system should be discussed in view of possible productivity savings and improved quality of life for patients.
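The break-even arithmetic described above is a one-liner: add the expected savings from avoided repeat procedures to the bare-metal stent price. The input amounts below are hypothetical, not the study's cost data:

```python
def break_even_price(bare_stent_price, repeat_procedure_cost, restenosis_reduction=0.15):
    """Break-even price of a drug-eluting stent.

    Following the study's stated method: bare-metal stent price plus the
    savings from a 15% reduction in clinical restenosis, i.e. the avoided
    repeat-procedure cost. Euro amounts below are illustrative only.
    """
    return bare_stent_price + restenosis_reduction * repeat_procedure_cost

print(break_even_price(bare_stent_price=600.0, repeat_procedure_cost=5000.0))  # 1350.0
```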
Rowan Gorilla I rigged up, heads for eastern Canada
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
1984-03-01
Designed to operate in very hostile offshore environments, the first of the Rowan Gorilla class of self-elevating drilling rigs has been towed to its drilling assignment offshore Nova Scotia. About 40% larger than other jackups, these rigs can operate in 300 ft of water, drilling holes as deep as 30,000 ft. They also feature unique high-pressure and solids control systems that are expected to improve drilling procedures and efficiencies. A quantitative formation pressure evaluation program for the Hewlett-Packard HP-41 handheld calculator computes formation pressures by three independent methods - the corrected d-exponent, Bourgoyne and Young, and normalized penetration rate techniques for abnormal pressure detection and computation. Based on empirically derived drilling rate equations, each of the methods can be calculated separately, without being dependent on or influenced by the results or stored data from the other two subprograms. The quantitative interpretation procedure involves establishing a normal drilling rate trend and calculating the pore pressure from the magnitude of the drilling rate trend or plotting parameter increases above the trend line. Mobil's quick, accurate program could aid drilling operators in selecting the casing point, minimizing differential sticking, maintaining the proper mud weights to avoid kicks and lost circulation, and maximizing penetration rates.
Calculation of power spectrums from digital time series with missing data points
NASA Technical Reports Server (NTRS)
Murray, C. W., Jr.
1980-01-01
Two algorithms are developed for calculating power spectrums from the autocorrelation function when there are missing data points in the time series. Both methods use an average sampling interval to compute lagged products. One method, the correlation function power spectrum, takes the discrete Fourier transform of the lagged products directly to obtain the spectrum, while the other, the modified Blackman-Tukey power spectrum, takes the Fourier transform of the mean lagged products. Both techniques require fewer calculations than other procedures since only 50% to 80% of the maximum lags need be calculated. The algorithms are compared with the Fourier transform power spectrum and two least squares procedures (all for an arbitrary data spacing). Examples are given showing recovery of frequency components from simulated periodic data where portions of the time series are missing and random noise has been added to both the time points and to values of the function. In addition the methods are compared using real data. All procedures performed equally well in detecting periodicities in the data.
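A minimal sketch of the first algorithm's idea: average the lagged products over whichever pairs are actually present (gaps marked as NaN), then Fourier-transform the mean lagged products, using only a fraction of the maximum lags. The signal and gap pattern are synthetic:

```python
import numpy as np

def power_spectrum_missing(x, max_lag_frac=0.5):
    """Correlation-function power spectrum for a series with gaps (NaNs).

    Lagged products are averaged over the pairs actually present and the
    spectrum is the DFT of the mean lagged products; only max_lag_frac of
    the maximum lags are used, echoing the paper's 50%-80% range.
    """
    n = len(x)
    max_lag = int(max_lag_frac * n)
    acf = np.empty(max_lag)
    for k in range(max_lag):
        prod = x[:n - k] * x[k:]           # NaN wherever either point is missing
        acf[k] = np.nanmean(prod)          # average over available pairs only
    return np.abs(np.fft.rfft(acf))

t = np.arange(400, dtype=float)
x = np.sin(2 * np.pi * 0.05 * t) + 0.2 * np.random.standard_normal(400)
x[np.random.random(400) < 0.15] = np.nan   # knock out 15% of the points
spectrum = power_spectrum_missing(x)        # peak near frequency bin ~0.05 * len(acf)
```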
NASA Astrophysics Data System (ADS)
Kanisch, G.
2017-05-01
The concepts of ISO 11929 (2010) are applied to evaluation of radionuclide activities from more complex multi-nuclide gamma-ray spectra. From net peak areas estimated by peak fitting, activities and their standard uncertainties are calculated by a weighted linear least-squares method with an additional step, where uncertainties of the design matrix elements are taken into account. A numerical treatment of the standard's uncertainty function, based on ISO 11929 Annex C.5, leads to a procedure for deriving decision threshold and detection limit values. The methods shown allow resolving interferences between radionuclide activities also in the case of calculating detection limits, where they can improve the latter by including more than one gamma line per radionuclide. The common single-nuclide weighted mean is extended to an interference-corrected (generalized) weighted mean, which, combined with the least-squares method, allows faster detection limit calculations. In addition, a new grouped uncertainty budget was inferred, which for each radionuclide gives uncertainty budgets from seven main variables, such as net count rates, peak efficiencies, gamma emission intensities and others; grouping refers to summation over lists of peaks per radionuclide.
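The single-nuclide weighted mean that the paper generalizes is the standard inverse-variance combination of per-peak activity estimates; a sketch with invented values:

```python
import numpy as np

def weighted_mean(activities, sigmas):
    """Inverse-variance weighted mean of per-peak activity estimates.

    This is the common single-nuclide weighted mean; the paper extends it
    to an interference-corrected (generalized) form when several nuclides
    share peaks, which is not reproduced here.
    """
    activities = np.asarray(activities, float)
    w = 1.0 / np.asarray(sigmas, float) ** 2
    mean = np.sum(w * activities) / np.sum(w)
    return mean, np.sqrt(1.0 / np.sum(w))

# Two gamma lines of the same radionuclide (hypothetical Bq values)
print(weighted_mean([12.1, 11.4], [0.9, 1.3]))
```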
García-Roa, Roberto; Sáiz, Jorge; Gómara, Belén; López, Pilar; Martín, José
2018-02-01
Knowledge about chemical communication in some vertebrates is still relatively limited. Squamates are a glaring example of this, even though recent evidence indicates that scents are involved in social and sexual interactions. In lizards, where our understanding of chemical communication has progressed considerably in the last few years, many questions about chemical interactions remain unanswered. A potential reason for this is the inherent complexity and the technical limitations of some of the methodologies used to analyze the compounds that convey information. We provide here a straightforward procedure to analyze lizard chemical secretions based on gas chromatography coupled to mass spectrometry that uses an internal standard for the semiquantification of compounds. We compare the results of this method with those obtained by the traditional procedure of calculating relative proportions of compounds. For such purpose, we designed two experiments to investigate whether these procedures allowed revealing changes in chemical secretions 1) when lizards had previously received a vitamin dietary supplementation or 2) when the chemical secretions were exposed to high temperatures. Our results show that the procedure based on relative proportions is useful to describe the overall chemical profile, or changes in it, at population or species levels. On the other hand, the procedure based on semiquantitative determination can be applied when the target of study is the variation in one or more particular compounds of the sample, as it proved more accurate in detecting quantitative variations in the secretions. This method would reveal new aspects produced by, for example, the effects of different physiological and climatic factors that the traditional method does not show.
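Internal-standard semiquantification in its simplest form scales the analyte peak area by the known amount of internal standard; the contrast with relative proportions is that the result is an absolute estimate per compound rather than a fraction of the whole profile. A sketch with hypothetical peak areas and a response factor assumed to be ~1:

```python
def semiquantify(area_analyte, area_is, amount_is):
    """Internal-standard semiquantification (response factor assumed ~1).

    amount_is is the known quantity of internal standard added to the
    sample; peak areas below are hypothetical chromatogram values.
    """
    return area_analyte / area_is * amount_is

amount = semiquantify(area_analyte=5.2e6, area_is=8.0e6, amount_is=10.0)  # e.g. ng
print(amount)   # absolute estimate for this compound, independent of the others
```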
RECONSTRUCTION OF INDIVIDUAL DOSES DUE TO MEDICAL EXPOSURES FOR MEMBERS OF THE TECHA RIVER COHORT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shagina, N. B.; Golikov, V.; Degteva, M. O.
Purpose: To describe a methodology for reconstruction of doses due to medical exposures for members of the Techa River Cohort (TRC) who received diagnostic radiation at the clinic of the Urals Research Center for Radiation Medicine (URCRM) in 1952–2005. To calculate doses of medical exposure for the TRC members and compare with the doses that resulted from radioactive contamination of the Techa River. Material and Methods: Reconstruction of individual medical doses is based on data on x-ray diagnostic procedures available for each person examined at the URCRM clinics and values of absorbed dose in 12 organs per typical x-ray procedure calculated with the use of a mathematical phantom. Personal data on x-ray diagnostic examinations have been compiled in the computerized “Registry of x-ray diagnostic procedures.” Sources of information are archival registry books from the URCRM x-ray room (available since 1956) and records on x-ray diagnostic procedures in patient-case histories (since 1952). The absorbed doses for 12 organs of interest have been evaluated per unit typical x-ray procedure with account taken of the x-ray examination parameters characteristic for the diagnostic machines used at the URCRM clinics. These parameters have been evaluated from published data on technical characteristics of the x-ray diagnostic machines used at the URCRM clinics in 1952–1988 and taken from the x-ray room for machines used at the URCRM in 1989–2005. Absorbed doses in the 12 organs per unit typical x-ray procedure have been calculated with use of a special computer code, EDEREX, developed at the Saint-Petersburg Research Institute of Radiation Hygiene named after Professor P.V. Ramzaev. Individual accumulated doses of medical exposure have been calculated with a computer code, MEDS (Medical Exposure Dosimetry System), specifically developed at the URCRM. Results: At present, the “Registry of x-ray diagnostic procedures” contains information on individual x-ray examinations for over 9,500 persons including 6,415 TRC members. Statistical analysis of the Registry data showed that the most frequent types of examinations were fluoroscopy and radiography of the chest and fluoroscopy of the stomach and the esophagus. Average absorbed doses accumulated by year 2005 calculated for the 12 organs varied from 4 mGy for testes to 40 mGy for bone surfaces. Maximum individual medical doses could reach 500–650 mGy and in some cases exceeded doses from exposure at the Techa River. Conclusions: For the first time the doses of medical exposure were calculated and analyzed for members of the Techa River Cohort who received diagnostic radiation at the URCRM clinics. These results are being used in radiation-risk analysis to adjust for this source of confounding exposure in the TRC.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dutton, Spencer M.; Chan, Wanyu R.; Mendell, Mark J.
California's building efficiency standards (Title 24) mandate minimum prescribed ventilation rates (VRs) for commercial buildings. Title 24 standards currently include a prescriptive procedure similar to ASHRAE's prescriptive "ventilation rate procedure", but do not include an alternative procedure akin to ASHRAE's non-prescriptive "indoor air quality procedure" (IAQP). The IAQP determines minimum VRs based on objectively and subjectively evaluated indoor air quality (IAQ). The first primary goal of this study was to determine, in a set of California retail stores, the adequacy of Title 24 VRs and observed current measured VRs in providing the level of IAQ specified through an IAQP process. The second primary goal was to evaluate whether several VRs implemented experimentally in a big box store would achieve adequate IAQ, assessed objectively and subjectively. For the first goal, a list of contaminants of concern (CoCs) and reference exposure levels (RELs) were selected for evaluating IAQ. Ventilation rates and indoor and outdoor CoC concentrations were measured in 13 stores, including one "big box" store. Mass balance models were employed to calculate indoor contaminant source strengths for CoCs in each store. Using these source strengths and typical outdoor air contaminant concentrations, mass balance models were again used to calculate for each store the "IAQP" VR that would maintain indoor CoC concentrations below selected RELs. These IAQP VRs were compared to the observed VRs and to the Title 24-prescribed VRs. For the second goal, a VR intervention study was performed in the big box store to determine how objectively assessed indoor contaminant levels and subjectively assessed IAQ varied with VR. The three intervention study VRs included an approximation of the store's current VR [0.24 air changes per hour (ACH)], the Title 24-prescribed VR [0.69 ACH], and the calculated IAQP-based VR [1.51 ACH]. Calculations of IAQP-based VRs showed that for the big box store and 11 of the 12 other stores, neither current measured VRs nor the Title 24-prescribed VRs would be sufficient to maintain indoor concentrations of all CoCs below RELs. In the intervention study, with the IAQP-based VR applied in the big box store, all CoCs were controlled below RELs (within margins of error). Also, at all three VRs in this store, the percentage of subjects reporting acceptable air quality exceeded an 80% criterion of acceptability. The IAQP allows consideration of outdoor air ventilation as just one of several possible tools for achieving adequate IAQ. In two of the 13 surveyed buildings, applying the IAQP to allow lower VRs could have saved energy whilst still maintaining acceptable indoor air quality. In the remaining 11 buildings, saving energy through lower VRs would require combination with other strategies, either reducing indoor sources of CoCs such as formaldehyde, or use of gas-phase air cleaning technologies. Based on the findings from applying the IAQP calculations to retail stores and the IAQP-based intervention study, recommendations are made regarding the potential introduction of a comparable procedure in Title 24.
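The IAQP ventilation rate calculation reduces, at steady state, to a mass balance solved for the outdoor air flow that keeps the indoor concentration at or below the REL. A sketch with hypothetical formaldehyde numbers (the study fitted per-store source strengths, which are not reproduced here):

```python
def iaqp_ventilation_rate(source_strength, rel, c_outdoor):
    """Steady-state mass balance solved for the minimum outdoor air flow.

    C_in = C_out + S/Q  =>  Q = S / (REL - C_out)
    source_strength in micrograms/h, concentrations in micrograms/m^3,
    result in m^3/h. Values in the example are illustrative only.
    """
    if rel <= c_outdoor:
        raise ValueError("ventilation alone cannot meet this REL")
    return source_strength / (rel - c_outdoor)

# Hypothetical case: S = 9e4 ug/h, REL = 33 ug/m3, outdoor = 3 ug/m3
print(iaqp_ventilation_rate(9e4, 33.0, 3.0))   # -> 3000 m^3/h
```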
Formago, Margaret; Schrauder, Michael G.; Rauh, Claudia; Hack, Carolin C.; Jud, Sebastian M.; Hildebrandt, Thomas; Schulz-Wendtland, Rüdiger; Frentz, S.; Graubert, S.; Beckmann, Matthias W.; Lux, Michael P.
2017-01-01
Introduction The care of patients with breast cancer is extremely complex and requires interdisciplinary care in certified facilities. These specialized facilities provide numerous services without being correspondingly remunerated. The question whether breast cancer surgery should be performed in an outpatient setting to reduce costs is increasingly being debated. This study compares inpatient surgical treatment with a model of the same surgery performed on an outpatient basis to examine the potential financial impact. Material and Methods A theoretical model was developed and the DRG fees for surgical interventions to treat primary breast cancer were calculated. A theoretical 1-day DRG was then calculated to permit comparisons with outpatient procedures. The costs of outpatient surgery were calculated based on the remuneration rates of the AOP (Outpatient Surgery) Contract and the EBM (Uniform Assessment Scale) and compared to the costs of the 1-day DRG. Results The DRG fee for both breast-conserving surgery and mastectomy is higher than the fee paid in the context of the EBM system, although the same procedures were carried out in both systems. If a hospital were to carry out breast-conserving surgery as an outpatient procedure, the fee would be € 1313.81; depending on the type of surgery, the hospital would therefore only receive between 39.20% and 52.82% of the DRG fee. This was the case even for a 1-day treatment. Compared to the real DRG fees the difference would be even more striking. Conclusion Carrying out breast cancer surgery as an outpatient procedure would result in a significant shortfall of revenues. Additional services from certified centers, such as the interdisciplinary planning of treatment, psycho-oncological and social-medical care with the involvement of relatives, detailed documentation, etc., which are currently provided without surcharge or adequate remuneration, could no longer be maintained. The quality of processes and excellent results which have been achieved and ultimately the care given by certified facilities would be significantly at risk. PMID:28845052
Formago, Margaret; Schrauder, Michael G; Rauh, Claudia; Hack, Carolin C; Jud, Sebastian M; Hildebrandt, Thomas; Schulz-Wendtland, Rüdiger; Frentz, S; Graubert, S; Beckmann, Matthias W; Lux, Michael P
2017-08-01
The care of patients with breast cancer is extremely complex and requires interdisciplinary care in certified facilities. These specialized facilities provide numerous services without being correspondingly remunerated. The question whether breast cancer surgery should be performed in an outpatient setting to reduce costs is increasingly being debated. This study compares inpatient surgical treatment with a model of the same surgery performed on an outpatient basis to examine the potential financial impact. A theoretical model was developed and the DRG fees for surgical interventions to treat primary breast cancer were calculated. A theoretical 1-day DRG was then calculated to permit comparisons with outpatient procedures. The costs of outpatient surgery were calculated based on the remuneration rates of the AOP (Outpatient Surgery) Contract and the EBM (Uniform Assessment Scale) and compared to the costs of the 1-day DRG. The DRG fee for both breast-conserving surgery and mastectomy is higher than the fee paid in the context of the EBM system, although the same procedures were carried out in both systems. If a hospital were to carry out breast-conserving surgery as an outpatient procedure, the fee would be € 1313.81; depending on the type of surgery, the hospital would therefore only receive between 39.20% and 52.82% of the DRG fee. This was the case even for a 1-day treatment. Compared to the real DRG fees the difference would be even more striking. Carrying out breast cancer surgery as an outpatient procedure would result in a significant shortfall of revenues. Additional services from certified centers, such as the interdisciplinary planning of treatment, psycho-oncological and social-medical care with the involvement of relatives, detailed documentation, etc., which are currently provided without surcharge or adequate remuneration, could no longer be maintained. The quality of processes and excellent results which have been achieved and ultimately the care given by certified facilities would be significantly at risk.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, James, E-mail: 9jhb3@queensu.ca; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca
In this paper we show that it is possible to use an iterative eigensolver in conjunction with Halverson and Poirier’s symmetrized Gaussian (SG) basis [T. Halverson and B. Poirier, J. Chem. Phys. 137, 224101 (2012)] to compute accurate vibrational energy levels of molecules with as many as five atoms. This is done, without storing and manipulating large matrices, by solving a regular eigenvalue problem that makes it possible to exploit direct-product structure. These ideas are combined with a new procedure for selecting which basis functions to use. The SG basis we work with is orders of magnitude smaller than the basis made by using a classical energy criterion. We find significant convergence errors in previous calculations with SG bases. For sum-of-product Hamiltonians, SG bases large enough to compute accurate levels are orders of magnitude larger than even simple pruned bases composed of products of harmonic oscillator functions.
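The matrix-free idea can be illustrated with a toy direct-product Hamiltonian H = H1 (x) I + I (x) H2: the iterative eigensolver only needs matrix-vector products, which are done mode by mode without ever forming the full matrix. Real vibrational Hamiltonians add coupling terms, and the matrices below are synthetic:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n1, n2 = 300, 300
rng = np.random.default_rng(2)
H1 = rng.standard_normal((n1, n1)); H1 = 0.5 * (H1 + H1.T)
H2 = rng.standard_normal((n2, n2)); H2 = 0.5 * (H2 + H2.T)

def matvec(v):
    # (H1 (x) I) v + (I (x) H2) v, done mode by mode on the reshaped vector
    V = v.reshape(n1, n2)
    return (H1 @ V + V @ H2.T).ravel()

H = LinearOperator((n1 * n2, n1 * n2), matvec=matvec)
# Lowest five eigenvalues of the 90,000-dimensional problem, matrix never stored
evals = eigsh(H, k=5, which='SA', return_eigenvectors=False)
```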
Trends in Medicare Reimbursement for Orthopedic Procedures: 2000 to 2016.
Eltorai, Adam E M; Durand, Wesley M; Haglin, Jack M; Rubin, Lee E; Weiss, Arnold-Peter C; Daniels, Alan H
2018-03-01
Understanding trends in reimbursement is critical to the financial sustainability of orthopedic practices. Little research has examined physician fee trends over time for orthopedic procedures. This study evaluated trends in Medicare reimbursements for orthopedic surgical procedures. The Medicare Physician Fee Schedule was examined for Current Procedural Terminology code values for the most common orthopedic and nonorthopedic procedures between 2000 and 2016. Prices were adjusted for inflation to 2016-dollar values. To assess mean growth rate for each procedure and subspecialty, compound annual growth rates were calculated. Year-to-year dollar amount changes were calculated for each procedure and subspecialty. Reimbursement trends for individual procedures and across subspecialties were compared. Between 2000 and 2016, annual reimbursements decreased for all orthopedic procedures examined except removal of orthopedic implant. The orthopedic procedures with the greatest mean annual decreases in reimbursement were shoulder arthroscopy/decompression, total knee replacement, and total hip replacement. The orthopedic procedures with the least annual reimbursement decreases were carpal tunnel release and repair of ankle fracture. Rate of Medicare procedure reimbursement change varied between subspecialties. Trauma had the smallest decrease in annual change compared with spine, sports, and hand. Annual reimbursement decreased at a significantly greater rate for adult reconstruction procedures than for any of the other subspecialties. These findings indicate that reimbursement for procedures has steadily decreased, with the most rapid decrease seen in adult reconstruction. [Orthopedics. 2018; 41(2):95-102.]. Copyright 2018, SLACK Incorporated.
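The compound annual growth rate used in the study is the standard formula; a sketch with hypothetical inflation-adjusted fees for a single CPT code:

```python
def cagr(value_start, value_end, years):
    """Compound annual growth rate; negative for declining reimbursement."""
    return (value_end / value_start) ** (1.0 / years) - 1.0

# Hypothetical 2016-dollar fees for one procedure code, 2000 vs 2016
print(f"{cagr(1600.0, 1150.0, 16):+.2%}")   # roughly -2% per year
```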
Chan, Bun; Gilbert, Andrew T B; Gill, Peter M W; Radom, Leo
2014-09-09
We have examined the performance of a variety of density functional theory procedures for the calculation of complexation energies and proton-exchange barriers, with a focus on the Minnesota-class of functionals, which are generally robust and show good accuracy. A curious observation is that M05-type and M06-type methods show an atypical decrease in calculated barriers with increasing proportion of Hartree-Fock exchange. To obtain a clearer picture of the performance of the underlying components of M05-type and M06-type functionals, we have investigated the combination of MPW-type and PBE-type exchange and B95-type and PBE-type correlation procedures. We find that, for the extensive E3 test set, the general performance of the various hybrid-DFT procedures improves in the following order: PBE1-B95 → PBE1-PBE → MPW1-PBE → PW6-B95. As M05-type and M06-type procedures are related to PBE1-B95, it would be of interest to formulate and examine the general performance of an alternative Minnesota DFT method related to PW6-B95.
Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F
2015-10-01
Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method, based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock (GWHF) equations. The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed, in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations; the maximum error found when compared to numerical values is only 0.788 mHartree, for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree of the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good agreement with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.
Longitudinal aerodynamic characteristics of light, twin-engine, propeller-driven airplanes
NASA Technical Reports Server (NTRS)
Wolowicz, C. H.; Yancey, R. B.
1972-01-01
Representative state-of-the-art analytical procedures and design data for predicting the longitudinal static and dynamic stability and control characteristics of light, propeller-driven airplanes are presented. Procedures for predicting drag characteristics are also included. The procedures are applied to a twin-engine, propeller-driven airplane in the clean configuration from zero lift to stall conditions. The calculated characteristics are compared with wind-tunnel and flight data. Included in the comparisons are level-flight trim characteristics, period and damping of the short-period oscillatory mode, and windup-turn characteristics. All calculations are documented.
Articulated Arm Coordinate Measuring Machine Calibration by Laser Tracker Multilateration
Majarena, Ana C.; Brau, Agustín; Velázquez, Jesús
2014-01-01
A new procedure for the calibration of an articulated arm coordinate measuring machine (AACMM) is presented in this paper. First, a self-calibration algorithm of four laser trackers (LTs) is developed. The spatial localization of a retroreflector target, placed in different positions within the workspace, is determined by means of a geometric multilateration system constructed from the four LTs. Next, a nonlinear optimization algorithm for the identification procedure of the AACMM is explained. An objective function based on Euclidean distances and standard deviations is developed. This function is obtained from the captured nominal data (given by the LTs used as a gauge instrument) and the data obtained by the AACMM, and compares the measured and calculated coordinates of the target to obtain the identified model parameters that minimize this difference. Finally, results show that the procedure presented, using the measurements of the LTs as a gauge instrument, is very effective in improving the AACMM precision. PMID:24688418
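The geometric multilateration step amounts to locating a point from its distances to four known tracker positions, solvable as a small nonlinear least-squares problem. A sketch of that step only (not the trackers' self-calibration), with invented coordinates:

```python
import numpy as np
from scipy.optimize import least_squares

def multilaterate(anchors, ranges, x0=None):
    """Locate a retroreflector from distances to four laser trackers.

    anchors: (4, 3) tracker positions; ranges: (4,) measured distances.
    A geometric sketch of the multilateration step, not the paper's full
    self-calibration procedure.
    """
    anchors = np.asarray(anchors, float)
    residuals = lambda p: np.linalg.norm(anchors - p, axis=1) - ranges
    x0 = anchors.mean(axis=0) if x0 is None else x0
    return least_squares(residuals, x0).x

anchors = [[0, 0, 0], [2, 0, 0], [0, 2, 0], [0, 0, 2]]
target = np.array([0.7, 0.9, 0.4])
ranges = np.linalg.norm(np.asarray(anchors) - target, axis=1)  # synthetic measurements
print(multilaterate(anchors, ranges))   # recovers ~[0.7, 0.9, 0.4]
```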
Analytic methods for design of wave cycles for wave rotor core engines
NASA Technical Reports Server (NTRS)
Resler, Edwin L., Jr.; Mocsari, Jeffrey C.; Nalim, M. R.
1993-01-01
A procedure to design a preliminary wave rotor cycle for any application is presented. To complete a cycle with heat addition there are two separate but related design steps that must be followed. The 'wave' boundary conditions determine the allowable amount of heat added in any case and the ensuing wave pattern requires certain pressure discharge conditions to allow the process to be made cyclic. This procedure, when applied, gives a first estimate of the cycle performance and the necessary information for the next step in the design process, namely the application of a characteristic based or other appropriate detailed one dimensional wave calculation that locates the proper porting around the periphery of the wave rotor. Four examples of the design procedure are given to demonstrate its utility and generality. These examples also illustrate the large gains in performance that could be realized with the use of wave rotor enhanced propulsion cycles.
A new method for automatic discontinuity traces sampling on rock mass 3D model
NASA Astrophysics Data System (ADS)
Umili, G.; Ferrero, A.; Einstein, H. H.
2013-02-01
A new automatic method for discontinuity trace mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model (DSM): traces are detected directly as surface breaklines, by means of the maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, which usually characterize trace mapping on images, are eliminated. Trace sampling procedures based on circular windows and circular scanlines have also been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter, and intensity of rock discontinuities. The method is tested on a case study: results obtained by applying the automatic procedure to the DSM of a rock face are compared to those obtained by performing a manual sampling on the orthophotograph of the same rock face.
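For the circular-window sampling mentioned above, one widely used estimator of mean trace length is that of Zhang and Einstein (1998), based on a census of trace endpoints within the window. The sketch below assumes that estimator; the counts are illustrative and may not match the formulas actually implemented in the paper.

```python
# A minimal sketch of circular-window trace statistics, assuming the
# Zhang & Einstein (1998) endpoint-census estimator of mean trace length.
import math

def mean_trace_length(n_total, n_transecting, n_contained, radius):
    """n_total: traces intersecting the window; n_transecting: both ends
    outside the window (censored); n_contained: both ends inside;
    radius: window radius."""
    return (math.pi * radius / 2.0) * (
        (n_total + n_transecting - n_contained)
        / (n_total - n_transecting + n_contained))

print(mean_trace_length(n_total=50, n_transecting=8, n_contained=12, radius=2.0))
```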
Lange, Jeffrey; Karellas, Andrew; Street, John; Eck, Jason C; Lapinsky, Anthony; Connolly, Patrick J; Dipaola, Christian P
2013-03-01
Observational. To estimate the radiation dose imparted to patients during typical thoracolumbar spinal surgical scenarios. Minimally invasive techniques continue to become more common in spine surgery. Computer-assisted navigation systems coupled with intraoperative cone-beam computed tomography (CT) represent one such method used to aid in instrumented spinal procedures. Some studies indicate that cone-beam CT technology delivers a relatively low dose of radiation to patients compared with other x-ray-based imaging modalities. The goal of this study was to estimate the radiation exposure to the patient imparted during typical posterior thoracolumbar instrumented spinal procedures using intraoperative cone-beam CT, and to place these values in the context of standard CT doses. Cone-beam CT scans were obtained using the Medtronic O-arm (Medtronic, Minneapolis, MN). Thermoluminescence dosimeters were placed in a linear array on a foam-plastic thoracolumbar spine model centered above the radiation source, for O-arm presets of lumbar scans for small or large patients. In-air dosimeter measurements were converted to skin surface measurements using published conversion factors. Dose-length product was calculated from these values. Effective dose was estimated using published effective-dose-to-dose-length-product conversion factors. Calculated dosages for many full-length procedures using the small-patient setting fell within the range of published effective doses of abdominal CT scans (1-31 mSv). Calculated dosages for many full-length procedures using the large-patient setting fell within the range of published effective doses of abdominal CT scans when the number of scans did not exceed 3. We have demonstrated that single cone-beam CT scans and most full-length posterior instrumented spinal procedures using the O-arm in standard mode would likely impart a radiation dose within the range of those imparted by a single standard CT scan of the abdomen. Radiation dose increases with patient size, and the radiation dose received by larger patients as a result of more than 3 O-arm scans in standard mode may exceed the dose received during standard CT of the abdomen. Understanding the radiation imparted to patients by cone-beam CT is important for assessing the risks and benefits of this technology, especially when spinal surgical procedures require multiple intraoperative scans.
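The dose arithmetic described (dose-length product, then a published conversion factor to effective dose) is simple enough to show directly. The sketch below is illustrative only: the k-factor of 0.015 mSv/(mGy*cm) is a commonly published adult-abdomen value assumed here, not a number taken from the study.

```python
# Illustrative arithmetic: dose-length product (DLP) to effective dose.
# The k-factor below is an assumed typical adult-abdomen value, not the
# study's; substitute the published factor for the region actually scanned.
def effective_dose_mSv(ctdi_vol_mGy, scan_length_cm, k=0.015):
    dlp = ctdi_vol_mGy * scan_length_cm          # DLP in mGy*cm
    return dlp * k                               # effective dose in mSv

# e.g. three scans at 6 mGy over a 16 cm field:
print(3 * effective_dose_mSv(6.0, 16.0))         # ~4.3 mSv, within the 1-31 mSv range
```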
Population delineation of polar bears using satellite collar data
Bethke, R.; Taylor, Mitchell K.; Amstrup, Steven C.; Messier, François
1996-01-01
To produce reliable estimates of the size or vital rates of a given population, it is important that the boundaries of the population under study are clearly defined. This is particularly critical for large, migratory animals where levels of sustainable harvest are based on these estimates, and where small errors may have serious long-term consequences for the population. Once populations are delineated, rates of exchange between adjacent populations can be determined and accounted/corrected for when calculating abundance (e.g., based on mark-recapture data). Using satellite radio-collar locations for polar bears in the western Canadian Arctic, we illustrate one approach to delineating wildlife populations that integrates cluster analysis methods for determining group membership with home range plotting procedures to define spatial utilization. This approach is flexible with respect to the specific procedures used and provides an objective and quantitative basis for defining population boundaries.
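The two-stage idea, cluster analysis for group membership followed by home-range plotting for spatial utilization, can be sketched as follows. The particular choices below (k-means with two groups, a convex hull as the range outline, synthetic collar fixes) are illustrative assumptions, not the specific procedures used in the study.

```python
# Sketch: cluster collar fixes into populations, then outline each
# cluster's spatial utilization with a convex hull.
import numpy as np
from scipy.cluster.vq import kmeans2
from scipy.spatial import ConvexHull

rng = np.random.default_rng(0)
fixes = np.vstack([rng.normal(loc, 1.0, size=(200, 2))
                   for loc in ([0.0, 0.0], [10.0, 2.0])])   # synthetic fixes

centroids, labels = kmeans2(fixes, 2, minit="points")
for g in range(2):
    hull = ConvexHull(fixes[labels == g])
    print(f"population {g}: {np.sum(labels == g)} fixes, "
          f"hull area ~ {hull.volume:.1f} (xy units^2)")    # .volume is area in 2D
```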
NASA Astrophysics Data System (ADS)
Baumgart, M.; Druml, N.; Consani, M.
2018-05-01
This paper presents a simulation approach for Time-of-Flight (ToF) cameras to estimate sensor performance and accuracy, as well as to help explain experimentally discovered effects. The main scope is the detailed simulation of the optical signals. We use a raytracing-based approach with the optical path length as the master parameter for depth calculations. The procedure is described in detail with references to our implementation in Zemax OpticStudio and Python. Our simulation approach supports multiple and extended light sources and allows accounting for all effects within the geometrical optics model. In particular, multi-object reflection/scattering ray paths, translucent objects, and aberration effects (e.g. distortion caused by the ToF lens) are supported. The optical path length approach also enables the implementation of different ToF sensor types and transient imaging evaluations. The main features are demonstrated on a simple 3D test scene.
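The role of the optical path length (OPL) as the master parameter can be illustrated with a toy calculation: each traced ray accumulates its refractive index times its geometric segment length, and depth follows from the round trip. The function below is our sketch, not code from the paper's Zemax/Python implementation, and it assumes a single direct reflection.

```python
# Minimal OPL bookkeeping: sum per-segment optical path lengths and
# convert a source -> object -> sensor round trip into a depth estimate.
def depth_from_opl(segment_lengths_m, refractive_indices=None):
    if refractive_indices is None:
        refractive_indices = [1.0] * len(segment_lengths_m)
    opl = sum(n * L for n, L in zip(refractive_indices, segment_lengths_m))
    return opl / 2.0                     # half the round trip = object depth

# Direct reflection: 1.5 m out, 1.5 m back, in air (n = 1)
print(depth_from_opl([1.5, 1.5]))        # -> 1.5
```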
Stochastic DG Placement for Conservation Voltage Reduction Based on Multiple Replications Procedure
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Zhaoyu; Chen, Bokan; Wang, Jianhui
2015-06-01
Conservation voltage reduction (CVR) and distributed-generation (DG) integration are popular strategies implemented by utilities to improve energy efficiency. This paper investigates the interactions between CVR and DG placement to minimize load consumption in distribution networks, while keeping the lowest voltage level within the predefined range. The optimal placement of DG units is formulated as a stochastic optimization problem considering the uncertainty of DG outputs and load consumptions. A sample average approximation algorithm-based technique is developed to solve the formulated problem effectively. A multiple replications procedure is developed to test the stability of the solution and calculate the confidence interval of the gap between the candidate solution and the optimal solution. The proposed method has been applied to the IEEE 37-bus distribution test system with different scenarios. The numerical results indicate that the implementations of CVR and DG, if combined, can achieve significant energy savings.
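The end-step of a multiple replications procedure can be sketched compactly: repeat the sample average approximation several times, estimate the optimality gap in each replication, and form a Student-t confidence bound on the gap. The numbers below are illustrative, not results from the paper.

```python
# Sketch of the MRP confidence-bound calculation on per-replication
# optimality-gap estimates from sample average approximation.
import numpy as np
from scipy import stats

def mrp_gap_bound(gap_estimates, alpha=0.05):
    g = np.asarray(gap_estimates, dtype=float)
    n = g.size
    half = stats.t.ppf(1 - alpha, df=n - 1) * g.std(ddof=1) / np.sqrt(n)
    return g.mean(), g.mean() + half     # point estimate, one-sided upper bound

gaps = [0.8, 1.1, 0.6, 0.9, 1.3, 0.7, 1.0]   # e.g. % gap per replication
print(mrp_gap_bound(gaps))
```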
A study of stiffness, residual strength and fatigue life relationships for composite laminates
NASA Technical Reports Server (NTRS)
Ryder, J. T.; Crossman, F. W.
1983-01-01
This work is a qualitative and quantitative exploration of the relationship between stiffness, strength, fatigue life, residual strength, and damage of unnotched graphite/epoxy laminates subjected to tension loading. Clarification of the mechanics of tension loading is intended to explain previous contradictory observations and hypotheses, to develop a simple procedure for anticipating strength, fatigue life, and stiffness changes, and to provide a basis for the study of the more complex cases of compression, notches, and spectrum fatigue loading. Mathematical models are developed from analysis of the damage states, based on laminate analysis, free-body modeling, or a strain energy release rate. Enough understanding of the tension-loaded case is developed to allow a proposed, simple procedure for calculating strain to failure, stiffness, strength, data scatter, and the shape of the stress-life curve for unnotched laminates subjected to tension load.
NASA Astrophysics Data System (ADS)
Sandoval-Santana, J. C.; Ibarra-Sierra, V. G.; Azaizia, S.; Carrère, H.; Bakaleinikov, L. A.; Kalevich, V. K.; Ivchenko, E. L.; Marie, X.; Amand, T.; Balocchi, A.; Kunold, A.
2018-03-01
We propose an experimental procedure to track the evolution of electronic and nuclear spins in Ga2+ centers in GaAsN dilute semiconductors. The method is based on a pump-probe scheme that enables monitoring of the time evolution of the three components of the electronic and nuclear spin variables. In contrast to other characterization methods, such as nuclear magnetic resonance, this one needs only moderate magnetic fields (B ≈ 10 mT) and does not require microwave irradiation. Specifically, we carry out a series of tests for different experimental conditions in order to optimize the procedure for maximum sensitivity in the measurement of the circular degree of polarization. Based on previous experimental results and the theoretical calculations presented here, we estimate that the method could yield a time resolution of about 10 ps.
Elangovan, Cheran; Singh, Supriya Palwinder; Gardner, Paul; Snyderman, Carl; Tyler-Kabara, Elizabeth C; Habeych, Miguel; Crammond, Donald; Balzer, Jeffrey; Thirumala, Parthasarathy D
2016-02-01
OBJECT The aim of this study was to evaluate the value of intraoperative neurophysiological monitoring (IONM) using electromyography (EMG), brainstem auditory evoked potentials (BAEPs), and somatosensory evoked potentials (SSEPs) to predict and/or prevent postoperative neurological deficits in pediatric patients undergoing endoscopic endonasal surgery (EES) for skull base tumors. METHODS All consecutive pediatric patients with skull base tumors who underwent EES with at least 1 modality of IONM (BAEP, SSEP, and/or EMG) at our institution between 1999 and 2013 were retrospectively reviewed. Staged procedures and repeat procedures were identified and analyzed separately. To evaluate the diagnostic accuracy of significant free-run EMG activity, the prevalence of cranial nerve (CN) deficits and the sensitivity, specificity, and positive and negative predictive values were calculated. RESULTS A total of 129 patients underwent 159 procedures; 6 patients had a total of 9 CN deficits. The incidences of CN deficits based on the total number of nerves monitored in the groups with and without significant free-run EMG activity were 9% and 1.5%, respectively. The incidences of CN deficits in the groups with 1 staged EES and with more than 1 staged EES were 1.5% and 29%, respectively. The sensitivity, specificity, and negative predictive values (with 95% confidence intervals) of significant EMG activity for detecting CN deficits in repeat procedures were 0.55 (0.22-0.84), 0.86 (0.79-0.9), and 0.97 (0.92-0.99), respectively. Two patients had significant changes in their BAEPs that were reversible with an increase in mean arterial pressure. CONCLUSIONS IONM can be applied effectively and reliably during EES in children. EMG monitoring is specific for detecting CN deficits and can be an effective guide during dissection in these procedures. Triggered EMG should be elicited intraoperatively to check the integrity of the CNs during and after tumor resection. Given the anatomical complexity of pediatric EES and the unique challenges encountered, multimodal IONM can be a valuable adjunct to these procedures.
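The reported operating characteristics follow from a standard 2x2 table. The counts below are made up to reproduce the flavor of the calculation (they give values close to the reported 0.55/0.86/0.97) and are not the study's data.

```python
# Standard diagnostic-accuracy calculations from a 2x2 confusion table.
def diagnostics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

print(diagnostics(tp=5, fp=40, fn=4, tn=250))   # illustrative counts only
```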
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help in understanding chemical properties and reactions in proteins. However, it is difficult to perform CMO calculations of proteins because of the self-consistent field (SCF) convergence problem and the expensive computational cost. To reliably obtain the CMOs of proteins, we are engaged in research and development of high-performance CMO applications and perform experimental studies. We have proposed the third-generation density-functional calculation method for the SCF, which is more advanced than the conventional FILE and direct methods. Our method is based on Cholesky decomposition for the two-electron integrals and the modified grid-free method for the pure-XC term evaluation. By using the third-generation density-functional calculation method, the Coulomb, Fock-exchange, and pure-XC terms can all be evaluated by simple linear-algebraic procedures in the SCF loop. Therefore, we can expect good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of large molecules, we must not only overcome the expensive computational cost but also provide a good initial guess for safe SCF convergence. In order to prepare a precise initial guess for the macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. A QCLO has the characteristics of both a localized and a canonical orbital in a certain region of the molecule. We have succeeded in CMO calculations of proteins by using the QCLO method. For simplified and semi-automated application of the QCLO method, we have also developed a Python-based program, QCLObot.
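The linear-algebraic structure referred to above can be sketched with numpy. With Cholesky vectors L[P, mu, nu] approximating the two-electron integrals, (mu nu|lambda sigma) ≈ sum_P L[P,mu,nu] L[P,lambda,sigma], the Coulomb and Fock-exchange terms reduce to tensor contractions; the shapes and names below are illustrative, not ProteinDF internals.

```python
# Sketch: Coulomb (J) and Fock-exchange (K) builds from Cholesky vectors.
import numpy as np

def coulomb_exchange(L, D):
    """L: (naux, nbf, nbf) Cholesky vectors; D: (nbf, nbf) density matrix."""
    J = np.einsum("Pmn,ls,Pls->mn", L, D, L)   # J_mn = sum_P L^P_mn (L^P : D)
    K = np.einsum("Pml,ls,Psn->mn", L, D, L)   # K_mn = sum_Pls L^P_ml D_ls L^P_sn
    return J, K

nbf, naux = 10, 30
rng = np.random.default_rng(1)
L = rng.normal(size=(naux, nbf, nbf)); L = (L + L.transpose(0, 2, 1)) / 2
D = rng.normal(size=(nbf, nbf)); D = (D + D.T) / 2
J, K = coulomb_exchange(L, D)                  # both (nbf, nbf), symmetric
```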
Aerodynamic Analysis Over Double Wedge Airfoil
NASA Astrophysics Data System (ADS)
Prasad, U. S.; Ajay, V. S.; Rajat, R. H.; Samanyu, S.
2017-05-01
Aeronautical studies are increasingly focused on supersonic flight and on methods to attain better, safer flight with the highest possible performance. Aerodynamic analysis is part of the overall procedure, which includes focusing on airfoil shapes that will permit sustained flight of aircraft at these speeds. Airfoil shapes differ based on the application; hence, the airfoil shapes considered for supersonic speeds are different from the ones considered for subsonic speeds. The present work examines the effects of changes in physical parameters for the double wedge airfoil. The Mach number range covers transonic and supersonic speeds. The physical parameter considered for the double wedge case is the wedge angle, ranging from 5 degrees to 15 degrees. Available computational tools are utilized for the analysis. The double wedge airfoil is analysed at different angles of attack (AOA) based on the wedge angle. The analysis is carried out using Fluent at standard conditions with the specific heat ratio taken as 1.4. Oblique shock properties are calculated manually with the help of Microsoft Excel. MATLAB is used to write a code for obtaining the shock angle from the Mach number and wedge angle at the given parameters. Results obtained from the manual calculations and the Fluent analysis are cross-checked.
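The shock-angle calculation referred to above (done in MATLAB in the study) is the classical theta-beta-M relation; a Python equivalent is sketched below. It finds the weak-shock solution and assumes an attached shock, i.e. a deflection angle below the detachment limit.

```python
# Sketch of the theta-beta-M relation: weak oblique-shock angle beta for a
# given Mach number M and wedge (deflection) angle theta, gamma = 1.4.
import math
from scipy.optimize import brentq

def shock_angle_deg(M, theta_deg, gamma=1.4):
    th = math.radians(theta_deg)
    def f(beta):
        return (math.tan(th)
                - 2.0 / math.tan(beta)
                * (M**2 * math.sin(beta)**2 - 1.0)
                / (M**2 * (gamma + math.cos(2.0 * beta)) + 2.0))
    beta_min = math.asin(1.0 / M) + 1e-6       # just above the Mach angle
    return math.degrees(brentq(f, beta_min, math.radians(65.0)))

print(shock_angle_deg(M=2.0, theta_deg=10.0))  # ~39.3 degrees
```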
Code of Federal Regulations, 2014 CFR
2014-07-01
..., inertial separators, afterburners, thermal or catalytic incinerators, adsorption devices (such as carbon... and calculation procedures (e.g., mass balance or stoichiometric calculations). (4) Maintenance and...
SU-G-206-05: A Comparison of Head Phantoms Used for Dose Determination in Imaging Procedures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiong, Z; Vijayan, S; Kilian-Meneghin, J
Purpose: To determine similarities and differences between various head phantoms that might be used for dose measurements in diagnostic imaging procedures. Methods: We chose four frequently used anthropomorphic head phantoms (SK-150, PBU-50, RS-240T and Alderson Rando), a computational patient phantom (Zubal) and the CTDI head phantom for comparison in our study. We did a CT scan of the head phantoms using the same protocol and compared their dimensions and CT numbers. The scan data was used to calculate dose values for each of the phantoms using EGSnrc Monte Carlo software. An .egsphant file was constructed to describe these phantoms using a Visual C++ program for DOSXYZnrc/EGSnrc simulation. The lens dose was calculated for a simulated CBCT scan using DOSXYZnrc/EGSnrc and the calculated doses were validated with measurements using Gafchromic film and an ionization chamber. Similar calculations and measurements were made for PA radiography to investigate the attenuation and backscatter differences between these phantoms. We used the Zubal phantom as the standard for comparison since it was developed based on a CT scan of a patient. Results: The lens dose for the Alderson Rando phantom is around 9% different than the Zubal phantom, while the lens dose for the PBU-50 phantom was about 50% higher, possibly because its skull thickness and the density of bone and soft tissue are lower than anthropometric values. The lens dose for the CTDI phantom is about 500% higher because of its totally different structure. The entrance dose profiles are similar for the five anthropomorphic phantoms, while that for the CTDI phantom was distinctly different. Conclusion: The CTDI and PBU-50 head phantoms have substantially larger lens dose estimates in CBCT. The other four head phantoms have similar entrance dose with backscatter and hence should be preferred for dose measurement in imaging procedures of the head. Partial support from NIH Grant R01-EB002873 and Toshiba Medical Systems Corp.
Dexter, F; Macario, A; Cerone, S M
1998-09-01
To evaluate whether a hospital's profitability for a surgeon's common procedures predicts the surgeon's overall profitability for the hospital. Observational study. Community and university-affiliated tertiary hospital with 21,903 surgical procedures performed per year. 7,520 patients having surgery performed by one of 46 surgeons. None. Financial data were obtained for all patients cared for by all the surgeons who performed at least ten cases of one of the hospital's six most common procedures. A surgeon's overall profitability for the hospital was measured using his or her contribution margin ratio (i.e., total revenue for all of the surgeon's patients divided by total variable cost for the patients). Contribution margin was calculated twice: once with all of a surgeon's patients, and second, limiting consideration to those patients who underwent one of the six common procedures. The common procedures accounted for 22 +/- 15% of the 46 surgeons' overall caseload, 29 +/- 10% of their patients' hospital costs, and 30 +/- 12% of the hospital revenue generated by the surgeons. Hospital contribution margin ratios ranged from 1.4 to 4.2. Contribution margin ratios for common procedures and contribution margin ratios for all patients were correlated (tau = 0.58, n = 46, p < 0.0001). Even though most surgical cases were for uncommon procedures, a surgeon's hospital profitability on common procedures predicted the surgeon's overall financial performance. Perioperative incentive programs based on common surgical procedures (clinical pathways) are likely to accurately reflect a surgeon's financial performance on their other surgeries.
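The profitability measure itself is a one-line calculation, and the reported association is a Kendall rank correlation; both are sketched below on made-up figures, purely to show the form of the computation.

```python
# Illustrative only: contribution margin ratio and rank correlation.
import numpy as np
from scipy.stats import kendalltau

def cm_ratio(revenue, variable_cost):
    # total revenue for the surgeon's patients / total variable cost
    return np.sum(revenue) / np.sum(variable_cost)

print(cm_ratio([1.2e6, 0.8e6], [0.5e6, 0.3e6]))      # -> 2.5

rng = np.random.default_rng(2)
ratio_common = rng.uniform(1.4, 4.2, size=46)        # common-procedure ratios
ratio_all = ratio_common + rng.normal(0.0, 0.4, 46)  # all-patient ratios
tau, p = kendalltau(ratio_common, ratio_all)
print(f"tau = {tau:.2f}, p = {p:.2g}")
```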
Inferring Aquifer Transmissivity from River Flow Data
NASA Astrophysics Data System (ADS)
Trichakis, Ioannis; Pistocchi, Alberto
2016-04-01
Daily streamflow data is the measurable result of many different hydrological processes within a basin; therefore, it includes information about all these processes. In this work, recession analysis applied to a pan-European dataset of measured streamflow was used to estimate hydrogeological parameters of the aquifers that contribute to the stream flow. Under the assumption that base-flow in times of no precipitation is mainly due to groundwater, we estimated parameters of European shallow aquifers connected with the stream network, and identified on the basis of the 1:1,500,000 scale Hydrogeological map of Europe. To this end, Master recession curves (MRCs) were constructed based on the RECESS model of the USGS for 1601 stream gauge stations across Europe. The process consists of three stages. Firstly, the model analyses the stream flow time-series. Then, it uses regression to calculate the recession index. Finally, it infers characteristics of the aquifer from the recession index. During time-series analysis, the model identifies those segments, where the number of successive recession days is above a certain threshold. The reason for this pre-processing lies in the necessity for an adequate number of points when performing regression at a later stage. The recession index derives from the semi-logarithmic plot of stream flow over time, and the post processing involves the calculation of geometrical parameters of the watershed through a GIS platform. The program scans the full stream flow dataset of all the stations. For each station, it identifies the segments with continuous recession that exceed a predefined number of days. When the algorithm finds all the segments of a certain station, it analyses them and calculates the best linear fit between time and the logarithm of flow. The algorithm repeats this procedure for the full number of segments, thus it calculates many different values of recession index for each station. After the program has found all the recession segments, it performs calculations to determine the expression for the MRC. Further processing of the MRCs can yield estimates of transmissivity or response time representative of the aquifers upstream of the station. These estimates can be useful for large scale (e.g. continental) groundwater modelling. The above procedure allowed calculating values of transmissivity for a large share of European aquifers, ranging from Tmin = 4.13E-04 m²/d to Tmax = 8.12E+03 m²/d, with an average value Taverage = 9.65E+01 m²/d. These results are in line with the literature, indicating that the procedure may provide realistic results for large-scale groundwater modelling. In this contribution we present the results in the perspective of their application for the parameterization of a pan-European bi-dimensional shallow groundwater flow model.
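The segment-then-regress step described above can be sketched directly: find runs of continuously declining daily flow of at least a threshold length, then regress log10(Q) on time within each run to get a recession index in days per log cycle. Thresholds and names below are illustrative choices, not RECESS internals.

```python
# Sketch: extract recession segments from a daily flow series and fit
# log10(Q) vs. time; returns one recession index per segment.
import numpy as np

def recession_indices(q, min_days=10):
    q = np.asarray(q, dtype=float)
    falling = np.append(np.diff(q) < 0, False)   # day-to-day decline flags
    indices, start = [], None
    for i, f in enumerate(falling):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start + 1 >= min_days and np.all(q[start:i + 1] > 0):
                t = np.arange(i - start + 1)
                slope = np.polyfit(t, np.log10(q[start:i + 1]), 1)[0]
                indices.append(-1.0 / slope)     # days per log cycle
            start = None
    return indices

q = 100.0 * 0.9 ** np.arange(30)       # synthetic exponential recession
print(recession_indices(q))            # one segment, ~21.9 days per log cycle
```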
Phillips, Steven P.; Belitz, Kenneth
1991-01-01
The occurrence of selenium in agricultural drain water from the western San Joaquin Valley, California, has focused concern on the semiconfined ground-water flow system, which is underlain by the Corcoran Clay Member of the Tulare Formation. A two-step procedure is used to calibrate a preliminary model of the system for the purpose of determining the steady-state hydraulic properties. Horizontal and vertical hydraulic conductivities are modeled as functions of the percentage of coarse sediment, hydraulic conductivities of coarse-textured (Kcoarse) and fine-textured (Kfine) end members, and averaging methods used to calculate equivalent hydraulic conductivities. The vertical conductivity of the Corcoran (Kcorc) is an additional parameter to be evaluated. In the first step of the calibration procedure, the model is run by systematically varying the following variables: (1) Kcoarse/Kfine, (2) Kcoarse/Kcorc, and (3) choice of averaging methods in the horizontal and vertical directions. Root mean square error and bias values calculated from the model results are functions of these variables. These measures of error provide a means for evaluating model sensitivity and for selecting values of Kcoarse, Kfine, and Kcorc for use in the second step of the calibration procedure. In the second step, recharge rates are evaluated as functions of Kcoarse, Kcorc, and a combination of averaging methods. The associated Kfine values are selected so that the root mean square error is minimized on the basis of the results from the first step. The results of the two-step procedure indicate that the spatial distribution of hydraulic conductivity that best produces the measured hydraulic head distribution is created through the use of arithmetic averaging in the horizontal direction and either geometric or harmonic averaging in the vertical direction. The equivalent hydraulic conductivities resulting from either combination of averaging methods compare favorably to field- and laboratory-based values.
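The averaging methods compared in the first calibration step can be made concrete. The sketch below assumes a cellwise arithmetic mix of Kcoarse and Kfine by coarse fraction, then combines cells with arithmetic, geometric, or harmonic averaging; all numbers are illustrative, not values from the model.

```python
# Sketch: equivalent hydraulic conductivity of a stack of cells under the
# three averaging methods discussed in the calibration.
import numpy as np

def equivalent_k(frac_coarse, k_coarse, k_fine, method):
    k = frac_coarse * k_coarse + (1.0 - frac_coarse) * k_fine  # cellwise (assumed)
    if method == "arithmetic":          # layers in parallel (horizontal flow)
        return k.mean()
    if method == "geometric":
        return float(np.exp(np.log(k).mean()))
    if method == "harmonic":            # layers in series (vertical flow)
        return 1.0 / (1.0 / k).mean()
    raise ValueError(method)

fc = np.array([0.8, 0.3, 0.5, 0.1])    # coarse fraction per cell
for m in ("arithmetic", "geometric", "harmonic"):
    print(m, equivalent_k(fc, k_coarse=10.0, k_fine=0.01, method=m))
```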
An improved procedure for detection and enumeration of walrus signatures in airborne thermal imagery
Burn, Douglas M.; Udevitz, Mark S.; Speckman, Suzann G.; Benter, R. Bradley
2009-01-01
In recent years, application of remote sensing to marine mammal surveys has been a promising area of investigation for wildlife managers and researchers. In April 2006, the United States and Russia conducted an aerial survey of Pacific walrus (Odobenus rosmarus divergens) using thermal infrared sensors to detect groups of animals resting on pack ice in the Bering Sea. The goal of this survey was to estimate the size of the Pacific walrus population. An initial analysis of the U.S. data using previously established methods resulted in lower detectability of walrus groups in the imagery and higher variability in calibration models than was expected based on pilot studies. This paper describes an improved procedure for detection and enumeration of walrus groups in airborne thermal imagery. Thermal images were first subdivided into smaller 200 x 200 pixel "tiles." We calculated three statistics to represent characteristics of walrus signatures from the temperature histogram for each tile. Tiles that exhibited one or more of these characteristics were examined further to determine if walrus signatures were present. We used cluster analysis on tiles that contained walrus signatures to determine which pixels belonged to each group. We then calculated a thermal index value for each walrus group in the imagery and used generalized linear models to estimate detection functions (the probability of a group having a positive index value) and calibration functions (the size of a group as a function of its index value) based on counts from matched digital aerial photographs. The new method described here improved our ability to detect walrus groups at both 2 m and 4 m spatial resolution. In addition, the resulting calibration models have lower variance than the original method. We anticipate that the use of this new procedure will greatly improve the quality of the population estimate derived from these data. This procedure may also have broader applicability to thermal infrared surveys of other wildlife species. Published by Elsevier B.V.
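The tile-and-screen stage can be sketched as below. A single warm-tail statistic and threshold stand in for the paper's three histogram statistics, so this is a simplified illustration of the structure, not the published criteria.

```python
# Sketch: cut a thermal image into 200 x 200 pixel tiles and flag tiles
# whose warmest pixel sits far above the tile mean (possible signatures).
import numpy as np

def flag_tiles(image, tile=200, z_thresh=6.0):
    flagged = []
    for r in range(0, image.shape[0] - tile + 1, tile):
        for c in range(0, image.shape[1] - tile + 1, tile):
            t = image[r:r + tile, c:c + tile]
            mu, sigma = t.mean(), t.std()
            if sigma > 0 and (t.max() - mu) / sigma > z_thresh:
                flagged.append((r, c))
    return flagged

rng = np.random.default_rng(3)
img = rng.normal(270.0, 0.5, size=(1000, 1200))  # synthetic sea-ice scene (K)
img[410:414, 620:630] += 8.0                     # implanted warm "group"
print(flag_tiles(img))                           # -> [(400, 600)]
```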
40 CFR 98.333 - Calculating GHG emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... Stationary Fuel Combustion Sources). (b) Calculate and report under this subpart the process CO2 emissions by... calculate and report the annual process CO2 emissions using the procedures specified in either paragraph (a... and combustion CO2 emissions by operating and maintaining a CEMS according to the Tier 4 Calculation...
40 CFR 98.333 - Calculating GHG emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... Stationary Fuel Combustion Sources). (b) Calculate and report under this subpart the process CO2 emissions by... calculate and report the annual process CO2 emissions using the procedures specified in either paragraph (a... and combustion CO2 emissions by operating and maintaining a CEMS according to the Tier 4 Calculation...
40 CFR 98.333 - Calculating GHG emissions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... Stationary Fuel Combustion Sources). (b) Calculate and report under this subpart the process CO2 emissions by... calculate and report the annual process CO2 emissions using the procedures specified in either paragraph (a... and combustion CO2 emissions by operating and maintaining a CEMS according to the Tier 4 Calculation...
Comments on the variational modified-hypernetted-chain theory for simple fluids
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1986-02-01
The variational modified-hypernetted-chain (VMHNC) theory, based on the approximation of universality of the bridge functions, is reformulated. The new formulation includes recent calculations by Lado and by Lado, Foiles, and Ashcroft as two stages in a systematic approach, which is analyzed. A variational iterative procedure for solving the exact (diagrammatic) equations for the fluid structure, formally identical to the VMHNC, is described, casting the theory of simple classical fluids as a one-iteration theory. An accurate method for calculating the pair structure for a given potential, and for inverting structure-factor data in order to obtain the potential and the thermodynamic functions, follows from our analysis.
NASA Technical Reports Server (NTRS)
Yates, Leslie A.
1992-01-01
Software for an automated film-reading system that uses personal computers and digitized shadowgraphs is described. The software identifies pixels associated with fiducial-line and model images, and least-squares procedures are used to calculate the positions and orientations of the images. Automated position and orientation readings for sphere and cone models are compared to those obtained using a manual film reader. When facility calibration errors are removed from these readings, the accuracy of the automated readings is better than the pixel resolution and is equal to or better than that of the manual readings. The effects of film-reading and facility-calibration errors on calculated aerodynamic coefficients are discussed.
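The least-squares image-reading step is essentially a line fit to the pixels identified for each fiducial. The sketch below uses an orthogonal (total) least-squares fit via the SVD, which behaves well for near-vertical lines; it is our illustration, not the system's actual code.

```python
# Sketch: fit a fiducial line to identified pixel coordinates, returning
# its position (centroid) and orientation (angle in degrees).
import numpy as np

def fit_fiducial(px, py):
    pts = np.column_stack([px, py]).astype(float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)     # principal direction = vt[0]
    angle = np.degrees(np.arctan2(vt[0, 1], vt[0, 0]))
    return centroid, angle

x = np.linspace(0.0, 100.0, 50)
y = 0.25 * x + 3.0 + np.random.default_rng(4).normal(0.0, 0.2, 50)
print(fit_fiducial(x, y))    # centroid near (50, 15.5), angle ~14 degrees
```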
Convective and morphological instabilities during crystal growth: Effect of gravity modulation
NASA Technical Reports Server (NTRS)
Coriell, S. R.; Murray, B. T.; McFadden, G. B.; Wheeler, A. A.; Saunders, B. V.
1992-01-01
During directional solidification of a binary alloy at constant velocity in the vertical direction, morphological and convective instabilities may occur due to the temperature and solute gradients associated with the solidification process. The effect of time-periodic modulation (vibration) is studied by considering a vertical gravitational acceleration which is sinusoidal in time. The conditions for the onset of solutal convection are calculated numerically, employing two distinct computational procedures based on Floquet theory. In general, a stable state can be destabilized by modulation and an unstable state can be stabilized. In the limit of high frequency modulation, the method of averaging and multiple-scale asymptotic analysis can be used to simplify the calculations.
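The Floquet machinery used for the modulated problem can be illustrated on the simplest related model, a damped Mathieu oscillator: integrate the linearized system over one modulation period to build the monodromy matrix, and read stability off the Floquet multipliers. This is a toy of the same computational structure, not the coupled solidification equations.

```python
# Sketch: Floquet stability of x'' + c x' + (delta + eps cos(omega t)) x = 0
# via the monodromy matrix over one modulation period.
import numpy as np
from scipy.integrate import solve_ivp

def floquet_multipliers(delta, eps, damping=0.1, omega=1.0):
    T = 2.0 * np.pi / omega
    def rhs(t, y):
        x, v = y
        return [v, -damping * v - (delta + eps * np.cos(omega * t)) * x]
    cols = []
    for y0 in ([1.0, 0.0], [0.0, 1.0]):   # propagate a basis over one period
        sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
        cols.append(sol.y[:, -1])
    return np.linalg.eigvals(np.column_stack(cols))

mults = floquet_multipliers(delta=0.3, eps=0.4)
print(mults, "stable:", bool(np.all(np.abs(mults) < 1.0)))
```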
NASA Technical Reports Server (NTRS)
Weilmuenster, K. J.; Gnoffo, Peter A.
1992-01-01
A procedure which reduces the memory requirements for computing the viscous flow over a modified Orbiter geometry at a hypersonic flight condition is presented. The Langley Aerothermodynamic Upwind Relaxation Algorithm (LAURA) code, which incorporates a thermochemical nonequilibrium chemistry model, a finite-rate catalytic wall boundary condition, and a wall temperature distribution based on radiation equilibrium, is used in this study. In addition, the effects of the choice of 'min mod' function, eigenvalue limiter, and grid density on surface heating are investigated. The surface heating from a flowfield calculation at Mach number 22, altitude of 230,000 ft, and 40 deg angle of attack is compared with flight data from three Orbiter flights.
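For reference, the basic two-argument 'min mod' function that such upwind schemes vary is shown below; LAURA's exact variant may differ, so treat this as the textbook form.

```python
# Textbook minmod limiter: zero when slopes disagree in sign, otherwise
# the argument of smaller magnitude.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

print(minmod(np.array([1.0, -2.0, 0.5]), np.array([0.5, 1.0, 2.0])))
# -> [0.5, 0.0, 0.5]
```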