Simplified method for calculating shear deflections of beams.
I. Orosz
1970-01-01
When one designs with wood, shear deflections can become substantial compared to deflections due to moments, because the modulus of elasticity in bending differs from that in shear by a large amount. This report presents a simplified energy method to calculate shear deflections in bending members. This simplified approach should help designers decide whether or not...
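The abstract stops before giving the working expression; as an illustration only (not necessarily the report's own formulation), the following sketch applies the unit-load energy method to the simplest case of a simply supported beam with a central point load, assuming a rectangular-section shear shape factor of 6/5:

```python
def shear_deflection_midspan(P, L, G, A, k=6.0 / 5.0):
    """Midspan shear deflection of a simply supported beam under a central
    point load, from the unit-load energy integral
        delta = integral of k * V(x) * v(x) / (G * A) dx = k * P * L / (4 * G * A).
    P: load (N), L: span (m), G: shear modulus (Pa), A: cross-section area (m^2),
    k: shear shape factor (6/5 for a rectangular section)."""
    return k * P * L / (4.0 * G * A)


# Example: 5 m wood beam, 10 kN central load, G = 0.5 GPa, A = 0.03 m^2
print(shear_deflection_midspan(10e3, 5.0, 0.5e9, 0.03))  # deflection in metres
```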
NASA Astrophysics Data System (ADS)
Staszczuk, Anna
2017-03-01
The paper provides comparative results of calculations of heat exchange between the ground and typical residential buildings using simplified (quasi-stationary) and more accurate (transient, three-dimensional) methods. Characteristics such as the building's geometry, the basement recess, and the construction of ground-contacting assemblies were considered, including intermittent and reduced heating modes. The calculations with the simplified methods were conducted in accordance with the currently valid standard PN-EN ISO 13370:2008, Thermal performance of buildings - Heat transfer via the ground - Calculation methods. Comparative estimates of the transient, 3-D heat flow were performed with the computer software WUFI®plus. The analysis quantifies the differences in heat exchange obtained with the more exact and the simplified methods.
NASA Technical Reports Server (NTRS)
Baer-Riedhart, J. L.
1982-01-01
A simplified gross thrust calculation method was evaluated on its ability to predict the gross thrust of a modified J85-21 engine. The method used tailpipe pressure data and ambient pressure data to predict the gross thrust. The method's algorithm is based on a one-dimensional analysis of the flow in the afterburner and nozzle. The test results showed that the method was notably accurate over the engine operating envelope using the altitude facility measured thrust for comparison. A summary of these results, the simplified gross thrust method and requirements, and the test techniques used are discussed in this paper.
Study on Collision of Ship Side Structure by Simplified Plastic Analysis Method
NASA Astrophysics Data System (ADS)
Sun, C. J.; Zhou, J. H.; Wu, W.
2017-10-01
During its lifetime, a ship may encounter collision or grounding and sustain permanent damage from these types of accidents. Crashworthiness assessment has been based on two main kinds of methods: simplified plastic analysis and numerical simulation. A simplified plastic analysis method is presented in this paper. Numerical simulations using the non-linear finite-element software LS-DYNA are conducted to validate the method. The results show that the simplified plastic analysis is in good agreement with the finite-element simulation, which indicates that the simplified plastic analysis method can quickly and accurately estimate the crashworthiness of the side structure during the collision process and can be used as a reliable risk assessment method.
Simplified procedure for computing the absorption of sound by the atmosphere
DOT National Transportation Integrated Search
2007-10-31
This paper describes a study that resulted in the development of a simplified method for calculating attenuation by atmospheric absorption for wide-band sounds analyzed by one-third octave-band filters. The new method [referred to herein as the...
Weather data for simplified energy calculation methods. Volume IV. United States: WYEC data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, A.R.; Moreno, S.; Deringer, J.
The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 23 cities using Weather Year for Energy Calculations (WYEC) source weather data. Considerable overlap is present in cities (21) covered by both the TRY and WYEC data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.
Simplified Model and Response Analysis for Crankshaft of Air Compressor
NASA Astrophysics Data System (ADS)
Chao-bo, Li; Jing-jun, Lou; Zhen-hai, Zhang
2017-11-01
The original crankshaft model is simplified to an appropriate degree to balance calculation precision and speed, and the finite element method is then used to analyse the vibration response of the structure. In order to study the simplification and stress concentration of the air compressor crankshaft, this paper compares the calculated and experimental modal frequencies of the crankshaft before and after simplification, calculates the vibration response at a reference point under the constraint conditions using the simplified model, and calculates the stress distribution of the original model. The results show that the error between calculated and experimental modal frequencies is kept below 7%, that the constraints change the modal density of the system, and that stress concentration appears at the junction between the crank arm and the shaft, so this part of the crankshaft should be treated carefully during manufacture.
Simplified methods for calculating photodissociation rates
NASA Technical Reports Server (NTRS)
Shimazaki, T.; Ogawa, T.; Farrell, B. C.
1977-01-01
Simplified methods for calculating the transmission of solar UV radiation and the dissociation coefficients of various molecules are compared. A significant difference sometimes appears in calculations of the individual band, but the total transmission and the total dissociation coefficients integrated over the entire SR (solar radiation) band region agree well between the methods. The ambiguities in the solar flux data affect the calculated dissociation coefficients more strongly than does the method. A simpler method is developed for the purpose of reducing the computation time and computer memory size necessary for storing coefficients of the equations. The new method can reduce the computation time by a factor of more than 3 and the memory size by a factor of more than 50 compared with the Hudson-Mahle method, and yet the result agrees within 10 percent (in most cases much less) with the original Hudson-Mahle results, except for H2O and CO2. A revised method is necessary for these two molecules, whose absorption cross sections change very rapidly over the SR band spectral range.
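For orientation, a dissociation coefficient of the kind discussed above is a cross-section-weighted integral of the attenuated solar flux. The sketch below is a generic band-summation form, not the Hudson-Mahle parameterization or the authors' reduced scheme; all variable names are illustrative.

```python
import numpy as np

def dissociation_coefficient(sigma, quantum_yield, top_flux, columns, band_sigmas):
    """Band-summed photodissociation coefficient J (s^-1):
        J = sum over bands of sigma * phi * F_top * exp(-tau),
    where tau sums absorber cross-section times overhead (slant) column.
    'columns' and 'band_sigmas' are dicts keyed by absorber name, with one
    cross-section array per band for each absorber."""
    tau = sum(band_sigmas[species] * columns[species] for species in columns)
    transmission = np.exp(-tau)
    return float(np.sum(sigma * quantum_yield * top_flux * transmission))
```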
Simplified Calculation Model and Experimental Study of Latticed Concrete-Gypsum Composite Panels
Jiang, Nan; Ma, Shaochun
2015-01-01
To address the complex behaviour of the constituent materials of (dense-column) latticed concrete-gypsum composite panels and the difficulty of determining their elastic constants, this paper presented a detailed structural analysis of the panel and proposed a feasible simplified-calculation approach. Following mechanical principles, a typical panel element was selected and divided into two homogeneous composite sub-elements and a secondary homogeneous element, each solved separately, thereby establishing an equivalence between the composite panel and a simple homogeneous panel and yielding effective formulas for the various elastic constants. Finally, the calculated and experimental results were compared, showing that the calculation method is correct and reliable, can meet the needs of practical engineering, and provides a theoretical basis for simplified calculations of composite panel elements and structures, as well as a reference for calculations of other panels. PMID:28793631
Weather data for simplified energy calculation methods. Volume II. Middle United States: TRY data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Olsen, A.R.; Moreno, S.; Deringer, J.
1984-08-01
The objective of this report is to provide a source of weather data for direct use with a number of simplified energy calculation methods available today. Complete weather data for a number of cities in the United States are provided for use in the following methods: degree hour, modified degree hour, bin, modified bin, and variable degree day. This report contains sets of weather data for 22 cities in the continental United States using Test Reference Year (TRY) source weather data. The weather data at each city has been summarized in a number of ways to provide differing levels of detail necessary for alternative simplified energy calculation methods. Weather variables summarized include dry bulb and wet bulb temperature, percent relative humidity, humidity ratio, wind speed, percent possible sunshine, percent diffuse solar radiation, total solar radiation on horizontal and vertical surfaces, and solar heat gain through standard DSA glass. Monthly and annual summaries, in some cases by time of day, are available. These summaries are produced in a series of nine computer generated tables.
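As context for how such summaries feed a simplified method, here is a minimal sketch of the bin method mentioned above: seasonal heating energy as a UA-weighted sum over temperature bins. The UA value, balance temperature, and bin hours are hypothetical inputs, and this is only one of the listed methods.

```python
def bin_method_heating_energy(UA, balance_temp, bin_temps, bin_hours):
    """Seasonal heating energy by the bin method:
        Q = UA * sum over bins of (T_balance - T_bin) * hours_in_bin,
    counting only bins colder than the balance temperature.
    UA in W/K, temperatures in degC, hours in h; result in Wh."""
    return UA * sum(max(balance_temp - t, 0.0) * h
                    for t, h in zip(bin_temps, bin_hours))


# Hypothetical 5-degC-wide bins with annual occurrence hours
print(bin_method_heating_energy(250.0, 18.0,
                                [-5, 0, 5, 10, 15, 20],
                                [120, 400, 700, 900, 800, 600]))
```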
NASA Astrophysics Data System (ADS)
Zhang, Hua-qing; Sun, Xi-ping; Wang, Yuan-zhan; Yin, Ji-long; Wang, Chao-yang
2015-10-01
There has been a growing trend in the development of offshore deep-water ports in China. For such deep sea projects, all-vertical-piled wharves are suitable structures and generally located in open waters, greatly affected by wave action. Currently, no systematic studies or simplified numerical methods are available for deriving the dynamic characteristics and dynamic responses of all-vertical-piled wharves under wave cyclic loads. In this article, we compare the dynamic characteristics of an all-vertical-piled wharf with those of a traditional inshore high-piled wharf through numerical analysis; our research reveals that the vibration period of an all-vertical-piled wharf under cyclic loading is longer than that of an inshore high-piled wharf and is much closer to the period of the loading wave. Therefore, dynamic calculation and analysis should be conducted when designing and calculating the characteristics of an all-vertical-piled wharf. We establish a dynamic finite element model to examine the dynamic response of an all-vertical-piled wharf under wave cyclic loads and compare the results with those under wave equivalent static load; the comparison indicates that dynamic amplification of the structure is evident when the wave dynamic load effect is taken into account. Furthermore, a simplified dynamic numerical method for calculating the dynamic response of an all-vertical-piled wharf is established based on the P-Y curve. Compared with finite element analysis, the simplified method is more convenient to use and applicable to large structural deformation while considering the soil non-linearity. We confirmed that the simplified method has acceptable accuracy and can be used in engineering applications.
NASA Technical Reports Server (NTRS)
Jones, Robert T
1937-01-01
A simplified treatment of the application of Heaviside's operational methods to problems of airplane dynamics is given. Certain graphical methods and logarithmic formulas that lessen the amount of computation involved are explained. The problem representing a gust disturbance or control manipulation is taken up and it is pointed out that in certain cases arbitrary control manipulations may be dealt with as though they imposed specific constraints on the airplane, thus avoiding the necessity of any integration. The application of the calculations described in the text is illustrated by several examples chosen to show the use of the methods and the practicability of the graphical and logarithmic computations described.
Holmes, Robert R.; Dunn, Chad J.
1996-01-01
A simplified method to estimate total-streambed scour was developed for application to bridges in the State of Illinois. Scour envelope curves, developed as empirical relations between calculated total scour and bridge-site characteristics for 213 State highway bridges in Illinois, are used in the method to estimate the 500-year flood scour. These 213 bridges, geographically distributed throughout Illinois, had been previously evaluated for streambed scour with the application of conventional hydraulic and scour-analysis methods recommended by the Federal Highway Administration. The bridge characteristics necessary for application of the simplified bridge scour-analysis method can be obtained from an office review of bridge plans, examination of topographic maps, and reconnaissance-level site inspection. The estimates computed with the simplified method generally resulted in a larger value of 500-year flood total-streambed scour than with the more detailed conventional method. The simplified method was successfully verified with a separate data set of 106 State highway bridges, which are geographically distributed throughout Illinois, and 15 county highway bridges.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miller, P.J.
1996-07-01
A simplified method for determining the reactive rate parameters for the ignition and growth model is presented. This simplified ignition and growth (SIG) method consists of only two adjustable parameters, the ignition (I) and growth (G) rate constants. The parameters are determined by iterating these variables in DYNA2D hydrocode simulations of the failure diameter and the gap test sensitivity until the experimental values are reproduced. Examples of four widely different explosives were evaluated using the SIG model. The observed embedded gauge stress-time profiles for these explosives are compared to those calculated by the SIG equation and the results are described.
NASA Technical Reports Server (NTRS)
Shertzer, Janine; Temkin, Aaron
2004-01-01
The development of a practical method of accurately calculating the full scattering amplitude, without making a partial wave decomposition, is continued. The method is developed in the context of electron-hydrogen scattering; here, exchange is dealt with by considering e-H scattering in the static exchange approximation. The Schroedinger equation in this approximation can be simplified to a set of coupled integro-differential equations. The equations are solved numerically for the full scattering wave function. The scattering amplitude can most accurately be calculated from an integral expression for the amplitude; that integral can be formally simplified and then evaluated using the numerically determined wave function. The results are essentially identical to converged partial-wave results.
Research on simplified parametric finite element model of automobile frontal crash
NASA Astrophysics Data System (ADS)
Wu, Linan; Zhang, Xin; Yang, Changhai
2018-05-01
The modeling method and key technologies of a simplified parametric finite element model for automobile frontal crash are studied in this paper. By establishing the auto body topological structure, extracting and parameterizing the stiffness properties of the substructures, and choosing appropriate material models for the substructures, a simplified parametric FE model of the M6 car is built. The comparison of results indicates that the simplified parametric FE model can accurately calculate the automobile crash responses and the deformation of the key substructures, and the simulation time is reduced from 6 hours to 2 minutes.
Efficient calculation of the polarizability: a simplified effective-energy technique
NASA Astrophysics Data System (ADS)
Berger, J. A.; Reining, L.; Sottile, F.
2012-09-01
In a recent publication [J.A. Berger, L. Reining, F. Sottile, Phys. Rev. B 82, 041103(R) (2010)] we introduced the effective-energy technique to calculate in an accurate and numerically efficient manner the GW self-energy as well as the polarizability, which is required to evaluate the screened Coulomb interaction W. In this work we show that the effective-energy technique can be used to further simplify the expression for the polarizability without a significant loss of accuracy. In contrast to standard sum-over-state methods where huge summations over empty states are required, our approach only requires summations over occupied states. The three simplest approximations we obtain for the polarizability are explicit functionals of an independent- or quasi-particle one-body reduced density matrix. We provide evidence of the numerical accuracy of this simplified effective-energy technique as well as an analysis of our method.
1980-03-31
Comparison with the method of Papper and Moler (1974). The method of calculation described in Chapter 3 and applied in this chapter was ... digitization of the profiles. Using their method, Papper and Moler (private communication) have kindly performed calculations corresponding to those presented ...
NASA Technical Reports Server (NTRS)
Bennett, Floyd V.; Yntema, Robert T.
1959-01-01
Several approximate procedures for calculating the bending-moment response of flexible airplanes to continuous isotropic turbulence are presented and evaluated. The modal methods (the mode-displacement and force-summation methods) and a matrix method (segmented-wing method) are considered. These approximate procedures are applied to a simplified airplane for which an exact solution to the equation of motion can be obtained. The simplified airplane consists of a uniform beam with a concentrated fuselage mass at the center. Airplane motions are limited to vertical rigid-body translation and symmetrical wing bending deflections. Output power spectra of wing bending moments based on the exact transfer-function solutions are used as a basis for the evaluation of the approximate methods. It is shown that the force-summation and the matrix methods give satisfactory accuracy and that the mode-displacement method gives unsatisfactory accuracy.
NASA Astrophysics Data System (ADS)
Buchholz, Max; Grossmann, Frank; Ceotto, Michele
2018-03-01
We present and test an approximate method for the semiclassical calculation of vibrational spectra. The approach is based on the mixed time-averaging semiclassical initial value representation method, which is simplified to a form that contains a filter to remove contributions from approximately harmonic environmental degrees of freedom. This filter comes at no additional numerical cost, and it has no negative effect on the accuracy of peaks from the anharmonic system of interest. The method is successfully tested for a model Hamiltonian and then applied to the study of the frequency shift of iodine in a krypton matrix. Using a hierarchic model with up to 108 normal modes included in the calculation, we show how the dynamical interaction between iodine and krypton yields results for the lowest excited iodine peaks that reproduce experimental findings to a high degree of accuracy.
Analytic method for calculating properties of random walks on networks
NASA Technical Reports Server (NTRS)
Goldhirsch, I.; Gefen, Y.
1986-01-01
A method for calculating the properties of discrete random walks on networks is presented. The method divides complex networks into simpler units whose contribution to the mean first-passage time is calculated. The simplified network is then further iterated. The method is demonstrated by calculating mean first-passage times on a segment, a segment with a single dangling bond, a segment with many dangling bonds, and a looplike structure. The results are analyzed and related to the applicability of the Einstein relation between conductance and diffusion.
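The abstract's segment example has a simple closed form; a quick way to check such results numerically (not the paper's iterative network-reduction technique) is to solve the standard linear system for mean first-passage times, sketched below.

```python
import numpy as np

def mean_first_passage_times(P, target):
    """Mean first-passage times to 'target' for a discrete-time random walk
    with transition matrix P: solve (I - Q) t = 1 over the non-target states,
    where Q is P restricted to those states."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]
    t = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return dict(zip(keep, t))


# Symmetric walk on a 5-site segment, reflecting at site 0, target at site 4
P = np.zeros((5, 5))
P[0, 1] = 1.0
for i in range(1, 4):
    P[i, i - 1] = P[i, i + 1] = 0.5
P[4, 4] = 1.0
print(mean_first_passage_times(P, target=4))  # site 0 -> 16 steps (N**2 for N = 4)
```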
Unimolecular decomposition reactions at low-pressure: A comparison of competitive methods
NASA Technical Reports Server (NTRS)
Adams, G. F.
1980-01-01
The lack of a simple rate coefficient expression to describe the pressure and temperature dependence hampers chemical modeling of flame systems. Recently developed simplified models to describe unimolecular processes include the calculation of rate constants for thermal unimolecular reactions and recombinations at the low pressure limit, at the high pressure limit and in the intermediate fall-off region. Comparison between two different applications of Troe's simplified model and a comparison between the simplified model and the classic RRKM theory are described.
Accuracy of a simplified method for shielded gamma-ray skyshine sources
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bassett, M.S.; Shultis, J.K.
1989-11-01
Rigorous transport or Monte Carlo methods for estimating far-field gamma-ray skyshine doses generally are computationally intensive. Consequently, several simplified techniques such as point-kernel methods and methods based on beam response functions have been proposed. For unshielded skyshine sources, these simplified methods have been shown to be quite accurate from comparisons to benchmark problems and to benchmark experimental results. For shielded sources, the simplified methods typically use exponential attenuation and photon buildup factors to describe the effect of the shield. However, the energy and directional redistribution of photons scattered in the shield is usually ignored, i.e., scattered photons are assumed to emerge from the shield with the same energy and direction as the uncollided photons. The accuracy of this shield treatment is largely unknown due to the paucity of benchmark results for shielded sources. In this paper, the validity of such a shield treatment is assessed by comparison to a composite method, which accurately calculates the energy and angular distribution of photons penetrating the shield.
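The shield treatment being assessed above amounts to a point-kernel factor of buildup times exponential attenuation. A minimal sketch of that treatment is given below; the source strength, attenuation coefficient, buildup factor, and flux-to-dose factor in the example are hypothetical.

```python
import math

def shielded_point_kernel_dose(S, mu, t, r, buildup, flux_to_dose):
    """Simplified shielded-source treatment of the kind described above:
    exponential attenuation through a slab of thickness t plus a buildup
    factor, with scattered photons assumed to keep the uncollided photons'
    energy and direction.
    S: photons/s, mu: 1/cm, t and r: cm; returns dose rate in the units
    implied by flux_to_dose."""
    flux = S * buildup * math.exp(-mu * t) / (4.0 * math.pi * r**2)
    return flux * flux_to_dose


# Hypothetical example: source behind 10 cm of shield, detector 100 m away
print(shielded_point_kernel_dose(1e12, 0.15, 10.0, 1.0e4,
                                 buildup=3.0, flux_to_dose=1.8e-6))
```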
A simplified method for calculating temperature time histories in cryogenic wind tunnels
NASA Technical Reports Server (NTRS)
Stallings, R. L., Jr.; Lamb, M.
1976-01-01
A method for calculating average temperature time histories of the test medium and tunnel walls of cryogenic wind tunnels has been developed. Results are in general agreement with limited preliminary experimental measurements obtained in a 13.5-inch pilot cryogenic wind tunnel.
A simplified digital lock-in amplifier for the scanning grating spectrometer.
Wang, Jingru; Wang, Zhihong; Ji, Xufei; Liu, Jie; Liu, Guangda
2017-02-01
For the common measurement and control system of a scanning grating spectrometer, the use of an analog lock-in amplifier requires complex circuitry and sophisticated debugging, whereas the use of a digital lock-in amplifier places a high demand on calculation capability and storage space. In this paper, a simplified digital lock-in amplifier based on averaging the absolute values within a complete period is presented and applied to a scanning grating spectrometer. The simplified digital lock-in amplifier was implemented on a low-cost microcontroller without multipliers, and dispenses with the reference signal and any specific configuration of the sampling frequency. Two positive zero-crossing detections were used to lock the phase of the measured signal. Measurement errors were, however, introduced by the following factors: frequency fluctuation, the sampling interval, and the integer restriction on the number of samples. The theoretical and experimental signal-to-noise ratios of the proposed measurement method were 2055 and 2403, respectively.
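To make the averaging idea concrete, a rough sketch of such an absolute-value lock-in is given below; it is a generic reconstruction, not the authors' microcontroller implementation, and assumes the signal crosses zero cleanly enough for crossing detection.

```python
import numpy as np

def simplified_lockin_amplitude(x):
    """Estimate the amplitude of a (noisy) sinusoid by averaging absolute
    values over an integer number of periods delimited by positive
    zero crossings.  For a pure sine of amplitude A, mean(|x|) = 2*A/pi,
    so A = (pi/2) * mean(|x|)."""
    x = np.asarray(x, dtype=float)
    rising = np.where((x[:-1] < 0) & (x[1:] >= 0))[0]  # positive zero crossings
    if len(rising) < 2:
        raise ValueError("need at least one full period of data")
    whole_periods = x[rising[0]:rising[-1]]
    return (np.pi / 2.0) * np.mean(np.abs(whole_periods))


# Example: 50 Hz sine of amplitude 2, arbitrary (unsynchronized) sampling rate
t = np.arange(0.0, 0.2, 1e-4)
print(simplified_lockin_amplitude(2.0 * np.sin(2 * np.pi * 50 * t)))  # ~2.0
```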
NASA Technical Reports Server (NTRS)
Carlson, H. W.
1978-01-01
Sonic boom overpressures and signature duration may be predicted for the entire affected ground area for a wide variety of supersonic airplane configurations and spacecraft operating at altitudes up to 76 km in level flight or in moderate climbing or descending flight paths. The outlined procedure relies to a great extent on the use of charts to provide generation and propagation factors for use in relatively simple expressions for signature calculation. Computational requirements can be met by hand-held scientific calculators, or even by slide rules. A variety of correlations of predicted and measured sonic-boom data for airplanes and spacecraft serve to demonstrate the applicability of the simplified method.
NASA Technical Reports Server (NTRS)
Martina, Albert P
1953-01-01
The methods of NACA Reports 865 and 1090 have been applied to the calculation of the rolling- and yawing-moment coefficients due to rolling for unswept wings with or without flaps or ailerons. The methods allow the use of nonlinear section lift data together with lifting-line theory. Two calculated examples are presented in simplified computing forms in order to illustrate the procedures involved.
Development of a model for on-line control of crystal growth by the AHP method
NASA Astrophysics Data System (ADS)
Gonik, M. A.; Lomokhova, A. V.; Gonik, M. M.; Kuliev, A. T.; Smirnov, A. D.
2007-05-01
The possibility to apply a simplified 2D model for heat transfer calculations in crystal growth by the axial heat close to phase interface (AHP) method is discussed in this paper. A comparison with global heat transfer calculations with the CGSim software was performed to confirm the accuracy of this model. The simplified model was shown to provide adequate results for the shape of the melt-crystal interface and temperature field in an opaque (Ge) and a transparent crystal (CsI:Tl). The model proposed is used for identification of the growth setup as a control object, for synthesis of a digital controller (PID controller at the present stage) and, finally, in on-line simulations of crystal growth control.
NASA Technical Reports Server (NTRS)
Kubota, H.
1976-01-01
A simplified analytical method for calculation of thermal response within a transpiration-cooled porous heat shield material in an intense radiative-convective heating environment is presented. The essential assumptions of the radiative and convective transfer processes in the heat shield matrix are the two-temperature approximation and the specified radiative-convective heatings of the front surface. Sample calculations for porous silica with CO2 injection are presented for some typical parameters of mass injection rate, porosity, and material thickness. The effect of these parameters on the cooling system is discussed.
Failure mode and effects analysis: a comparison of two common risk prioritisation methods.
McElroy, Lisa M; Khorzad, Rebeca; Nannicelli, Anna P; Brown, Alexandra R; Ladner, Daniela P; Holl, Jane L
2016-05-01
Failure mode and effects analysis (FMEA) is a method of risk assessment increasingly used in healthcare over the past decade. The traditional method, however, can require substantial time and training resources. The goal of this study is to compare a simplified scoring method with the traditional scoring method to determine the degree of congruence in identifying high-risk failures. An FMEA of the operating room (OR) to intensive care unit (ICU) handoff was conducted. Failures were scored and ranked using both the traditional risk priority number (RPN) and criticality-based method, and a simplified method, which designates failures as 'high', 'medium' or 'low' risk. The degree of congruence was determined by first identifying those failures determined to be critical by the traditional method (RPN≥300), and then calculating the per cent congruence with those failures designated critical by the simplified methods (high risk). In total, 79 process failures among 37 individual steps in the OR to ICU handoff process were identified. The traditional method yielded Criticality Indices (CIs) ranging from 18 to 72 and RPNs ranging from 80 to 504. The simplified method ranked 11 failures as 'low risk', 30 as medium risk and 22 as high risk. The traditional method yielded 24 failures with an RPN ≥300, of which 22 were identified as high risk by the simplified method (92% agreement). The top 20% of CI (≥60) included 12 failures, of which six were designated as high risk by the simplified method (50% agreement). These results suggest that the simplified method of scoring and ranking failures identified by an FMEA can be a useful tool for healthcare organisations with limited access to FMEA expertise. However, the simplified method does not result in the same degree of discrimination in the ranking of failures offered by the traditional method. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
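For readers unfamiliar with the scoring, the comparison above can be reproduced in a few lines; the failure modes and scores below are hypothetical, and the simplified high/medium/low assignment is taken as given rather than re-derived.

```python
def traditional_rpn(severity, occurrence, detection):
    """Traditional FMEA risk priority number (each factor scored 1-10)."""
    return severity * occurrence * detection


def percent_congruence(failures, simplified_labels, rpn_cutoff=300):
    """Per-cent agreement between failures flagged critical by the traditional
    method (RPN >= cutoff) and those labelled 'high' by the simplified method.
    'failures' maps a failure id to an (S, O, D) tuple; 'simplified_labels'
    maps the same ids to 'high'/'medium'/'low'."""
    critical = {f for f, sod in failures.items()
                if traditional_rpn(*sod) >= rpn_cutoff}
    if not critical:
        return 100.0
    agree = sum(simplified_labels[f] == 'high' for f in critical)
    return 100.0 * agree / len(critical)


# Hypothetical OR-to-ICU handoff failures
failures = {'wrong_drug_list': (9, 5, 8), 'late_report': (6, 7, 4)}
labels = {'wrong_drug_list': 'high', 'late_report': 'medium'}
print(percent_congruence(failures, labels))  # 100.0: only the first is critical
```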
Simplified refracting technique in keratoconus.
Gasset, A R
1975-01-01
A simple but effective technique for refracting keratoconus patients is presented. The theoretical objections to these methods are discussed. In addition, a formula to calculate lenticular astigmatism is presented.
A simplified method for elastic-plastic-creep structural analysis
NASA Technical Reports Server (NTRS)
Kaufman, A.
1984-01-01
A simplified inelastic analysis computer program (ANSYPM) was developed for predicting the stress-strain history at the critical location of a thermomechanically cycled structure from an elastic solution. The program uses an iterative and incremental procedure to estimate the plastic strains from the material stress-strain properties and a plasticity hardening model. Creep effects are calculated on the basis of stress relaxation at constant strain, creep at constant stress or a combination of stress relaxation and creep accumulation. The simplified method was exercised on a number of problems involving uniaxial and multiaxial loading, isothermal and nonisothermal conditions, dwell times at various points in the cycles, different materials and kinematic hardening. Good agreement was found between these analytical results and nonlinear finite element solutions for these problems. The simplified analysis program used less than 1 percent of the CPU time required for a nonlinear finite element analysis.
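The abstract does not spell out the iteration; as an illustration of the general idea of recovering local inelastic response from an elastic solution (this is Neuber's rule with a Ramberg-Osgood curve, not necessarily the ANSYPM procedure, and the material constants are hypothetical):

```python
def neuber_local_stress(sigma_elastic, E, K, n, tol=1e-6):
    """One common simplified route from an elastic stress to the local
    inelastic response: Neuber's rule  sigma * eps = sigma_e**2 / E
    combined with a Ramberg-Osgood curve  eps = sigma/E + (sigma/K)**(1/n),
    solved for the local stress by bisection."""
    target = sigma_elastic**2 / E
    lo, hi = 0.0, sigma_elastic
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        eps = mid / E + (mid / K)**(1.0 / n)
        if mid * eps < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)


# Hypothetical notch: 600 MPa elastic stress, E = 200 GPa, K = 1000 MPa, n = 0.1
print(neuber_local_stress(600.0, 200e3, 1000.0, 0.1))  # local stress in MPa
```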
Approximate method for calculating free vibrations of a large-wind-turbine tower structure
NASA Technical Reports Server (NTRS)
Das, S. C.; Linscott, B. S.
1977-01-01
A set of ordinary differential equations was derived for a simplified structural dynamic lumped-mass model of a typical large-wind-turbine tower structure. Dunkerley's equation was used to arrive at a solution for the fundamental natural frequencies of the tower in bending and torsion. The ERDA-NASA 100-kW wind turbine tower structure was modeled, and the fundamental frequencies were determined by the simplified method described. The approximate fundamental natural frequencies for the tower agree within 18 percent with test data and previously analyzed predictions.
Simplified solution for point contact deformation between two elastic solids
NASA Technical Reports Server (NTRS)
Brewe, D. E.; Hamrock, B. J.
1976-01-01
A linear regression by the method of least squares is performed on the geometric variables that occur in the equation for point-contact deformation. The ellipticity and the complete elliptic integrals of the first and second kind are expressed as functions of the x,y-plane principal radii. The ellipticity was varied from 1 (circular contact) to 10 (a configuration approaching line contact). These simplified equations enable one to calculate the point-contact deformation easily to within 3 percent without resorting to charts or numerical methods.
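The curve-fit forms usually quoted from this line of work express the ellipticity and both elliptic integrals directly in terms of the radius ratio, which makes the deflection a one-line calculation. The constants below are the commonly cited simplified fits and should be checked against the report itself; the load, radii, and effective modulus in the example are hypothetical.

```python
import math

def point_contact_deflection(W, Rx, Ry, E_prime):
    """Approximate elliptical point-contact deflection using curve-fit forms
    of the kind derived in the report (constants quoted from the commonly
    cited simplified fits, to be verified against the original):
        alpha = Ry/Rx (>= 1),   k ~ alpha**(2/pi),
        E_int ~ 1 + q/alpha,    F_int ~ pi/2 + q*ln(alpha),   q = pi/2 - 1,
        delta = F_int * ((9 / (2*E_int*R)) * (W / (pi*k*E'))**2)**(1/3),
    with 1/R = 1/Rx + 1/Ry.  W in N, radii in m, E' in Pa; returns metres."""
    alpha = Ry / Rx
    q = math.pi / 2.0 - 1.0
    k = alpha**(2.0 / math.pi)                    # ellipticity
    E_int = 1.0 + q / alpha                       # complete elliptic integral, 2nd kind
    F_int = math.pi / 2.0 + q * math.log(alpha)   # complete elliptic integral, 1st kind
    R = 1.0 / (1.0 / Rx + 1.0 / Ry)
    return F_int * ((9.0 / (2.0 * E_int * R)) *
                    (W / (math.pi * k * E_prime))**2)**(1.0 / 3.0)


# Hypothetical example: 10 mm ball on a flat, 100 N load, E' = 2.2e11 Pa
print(point_contact_deflection(100.0, 0.005, 0.005, 2.2e11))  # a few micrometres
```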
Analysis of temperature distribution in liquid-cooled turbine blades
NASA Technical Reports Server (NTRS)
Livingood, John N B; Brown, W Byron
1952-01-01
The temperature distribution in liquid-cooled turbine blades determines the amount of cooling required to reduce the blade temperature to permissible values at specified locations. This report presents analytical methods for computing temperature distributions in liquid-cooled turbine blades, or in simplified shapes used to approximate sections of the blade. The individual analyses are first presented in terms of their mathematical development. By means of numerical examples, comparisons are made between simplified and more complete solutions and the effects of several variables are examined. Nondimensional charts to simplify some temperature-distribution calculations are also given.
Ohisa, Noriko; Ogawa, Hiromasa; Murayama, Nobuki; Yoshida, Katsumi
2010-02-01
Polysomnography (PSG) is the gold standard for the diagnosis of sleep apnea hypopnea syndrome (SAHS), but analyzing a PSG takes time, and PSG cannot be performed repeatedly because of the effort and cost involved. Therefore, simplified sleep respiratory disorder indices that reflect the PSG results are needed. The Memcalc method, a combination of the maximum entropy method for spectral analysis and the non-linear least-squares method for fitting analysis (Makin2, Suwa Trust, Tokyo, Japan), has recently been developed. Spectral entropy derived by the Memcalc method might be useful for expressing the trend of time-series behavior. Spectral entropy of the ECG calculated with the Memcalc method was evaluated by comparison with the PSG results. ECGs of obstructive SAHS patients (n = 79) and control volunteers (n = 7) were recorded using MemCalc-Makin2 (GMS) together with PSG recording using Alice IV (Respironics) from 20:00 to 6:00. Spectral entropy of the ECG, calculated every 2 seconds using the Memcalc method, was compared to sleep stages analyzed manually from the PSG recordings. Spectral entropy values (-0.473 vs. -0.418, p < 0.05) were significantly increased in the OSAHS group compared with the controls. For an entropy cutoff level of -0.423, sensitivity and specificity for OSAHS were 86.1% and 71.4%, respectively, giving a receiver operating characteristic curve with an area under the curve of 0.837. The absolute value of entropy was inversely correlated with stage 3 sleep. Spectral entropy calculated with the Memcalc method may thus be a possible index for evaluating the quality of sleep.
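The Memcalc/Makin2 entropy definition is proprietary and is not reproduced in the abstract; purely for orientation, a standard normalized spectral entropy of a signal segment can be computed as sketched below (an assumption, not the index used in the study).

```python
import numpy as np

def spectral_entropy(segment):
    """Generic normalized spectral entropy of a signal segment: normalize the
    power spectrum to a probability distribution p_k and return
    -sum(p_k * log(p_k)) / log(N), which lies between 0 and 1."""
    psd = np.abs(np.fft.rfft(segment - np.mean(segment)))**2
    p = psd / np.sum(psd)
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)) / np.log(len(psd)))


# Example on a short synthetic heart-rate-like series
rng = np.random.default_rng(0)
t = np.arange(0, 60, 0.5)
series = np.sin(2 * np.pi * 0.1 * t) + 0.2 * rng.standard_normal(len(t))
print(spectral_entropy(series))
```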
Kadji, Caroline; De Groof, Maxime; Camus, Margaux F; De Angelis, Riccardo; Fellas, Stéphanie; Klass, Magdalena; Cecotti, Vera; Dütemeyer, Vivien; Barakat, Elie; Cannie, Mieke M; Jani, Jacques C
2017-01-01
The aim of this study was to apply a semi-automated calculation method of fetal body volume and, thus, of magnetic resonance-estimated fetal weight (MR-EFW) prior to planned delivery and to evaluate whether the technique of measurement could be simplified while remaining accurate. MR-EFW was calculated using a semi-automated method at 38.6 weeks of gestation in 36 patients and compared to the picture archiving and communication system (PACS). Per patient, 8 sequences were acquired with a slice thickness of 4-8 mm and an intersection gap of 0, 4, 8, 12, 16, or 20 mm. The median absolute relative errors for MR-EFW and the time of planimetric measurements were calculated for all 8 sequences and for each method (assisted vs. PACS), and the difference between the methods was calculated. The median delivery weight was 3,280 g. The overall median relative error for all 288 MR-EFW calculations was 2.4% using the semi-automated method and 2.2% for the PACS method. Measurements did not differ between the 8 sequences using the assisted method (p = 0.313) or the PACS (p = 0.118), while the time of planimetric measurement decreased significantly with a larger gap (p < 0.001) and in the assisted method compared to the PACS method (p < 0.01). Our simplified MR-EFW measurement showed a dramatic decrease in time of planimetric measurement without a decrease in the accuracy of weight estimates. © 2017 S. Karger AG, Basel.
NASA Technical Reports Server (NTRS)
Barth, Timothy; Saini, Subhash (Technical Monitor)
1999-01-01
This talk considers simplified finite element discretization techniques for first-order systems of conservation laws equipped with a convex (entropy) extension. Using newly developed techniques in entropy symmetrization theory, simplified forms of the Galerkin least-squares (GLS) and the discontinuous Galerkin (DG) finite element methods have been developed and analyzed. The use of symmetrization variables yields numerical schemes which inherit the global entropy stability properties of the PDE system. Central to the development of the simplified GLS and DG methods is the Eigenvalue Scaling Theorem, which characterizes right symmetrizers of an arbitrary first-order hyperbolic system in terms of scaled eigenvectors of the corresponding flux Jacobian matrices. A constructive proof is provided for the Eigenvalue Scaling Theorem, with detailed consideration given to the Euler, Navier-Stokes, and magnetohydrodynamic (MHD) equations. Linear and nonlinear energy stability is proven for the simplified GLS and DG methods. Spatial convergence properties of the simplified GLS and DG methods are numerically evaluated via the computation of Ringleb flow on a sequence of successively refined triangulations. Finally, we consider a posteriori error estimates for the GLS and DG discretizations, assuming error functionals related to the integrated lift and drag of a body. Sample calculations in 2D are shown to validate the theory and implementation.
Approximate methods in gamma-ray skyshine calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faw, R.E.; Roseberry, M.L.; Shultis, J.K.
1985-11-01
Gamma-ray skyshine, an important component of the radiation field in the environment of a nuclear power plant, has recently been studied in relation to storage of spent fuel and nuclear waste. This paper reviews benchmark skyshine experiments and transport calculations against which computational procedures may be tested. The paper also addresses the applicability of simplified computational methods involving single-scattering approximations. One such method, suitable for microcomputer implementation, is described and results are compared with other work.
NASA Astrophysics Data System (ADS)
Şahin, Rıdvan; Liu, Peide
2017-07-01
Simplified neutrosophic sets (SNSs) are an appropriate tool for expressing the incompleteness, indeterminacy and uncertainty of the evaluation objects in a decision-making process. In this study, we define the concept of a possibility SNS, which carries two types of information: the neutrosophic performance provided by the evaluation objects and its possibility degree, expressed as a value ranging from zero to one. Since existing aggregation models for neutrosophic information cannot effectively fuse these two kinds of information, we propose two novel neutrosophic aggregation operators that take possibility into account, named the possibility-induced simplified neutrosophic weighted arithmetic averaging operator and the possibility-induced simplified neutrosophic weighted geometric averaging operator, and discuss their properties. Moreover, we develop a method based on the proposed aggregation operators for solving multi-criteria group decision-making problems with possibility simplified neutrosophic information, in which the weights of decision-makers and decision criteria are calculated with an entropy measure. Finally, a practical example is used to show the practicality and effectiveness of the proposed method.
Kohno, R; Hotta, K; Nishioka, S; Matsubara, K; Tansho, R; Suzuki, T
2011-11-21
We implemented the simplified Monte Carlo (SMC) method on graphics processing unit (GPU) architecture under the computer-unified device architecture platform developed by NVIDIA. The GPU-based SMC was clinically applied for four patients with head and neck, lung, or prostate cancer. The results were compared to those obtained by a traditional CPU-based SMC with respect to the computation time and discrepancy. In the CPU- and GPU-based SMC calculations, the estimated mean statistical errors of the calculated doses in the planning target volume region were within 0.5% rms. The dose distributions calculated by the GPU- and CPU-based SMCs were similar, within statistical errors. The GPU-based SMC showed 12.30-16.00 times faster performance than the CPU-based SMC. The computation time per beam arrangement using the GPU-based SMC for the clinical cases ranged 9-67 s. The results demonstrate the successful application of the GPU-based SMC to a clinical proton treatment planning.
Computation of wheel-rail contact force for non-mapping wheel-rail profile of Translohr tram
NASA Astrophysics Data System (ADS)
Ji, Yuanjin; Ren, Lihui; Zhou, Jinsong
2017-09-01
The Translohr tram has steel wheels, in V-like arrangements, as guide wheels; these run on guide rails in inverted-V arrangements. However, the horizontal and vertical coordinates of the guide wheels and guide rails are not always mapped one-to-one. In this study, a simplified elastic method is proposed to calculate the contact points between the wheels and the rails. By transforming the coordinates, the non-mapping geometric relationship between wheel and rail is converted into a mapping relationship. To account for the Translohr tram's multi-point contact between guide wheel and guide rail, the elastic-contact hypothesis takes into account the existence of contact patches between the bodies, and the locations of the contact points are calculated using the simplified elastic method. In order to speed up the calculation, a multi-dimensional contact table is generated, enabling simulation of the Translohr tram running on curves of different radii.
Study on a pattern classification method of soil quality based on simplified learning sample dataset
Zhang, Jiahua; Liu, S.; Hu, Y.; Tian, Y.
2011-01-01
Given the massive amount of soil information involved in current soil quality grade evaluation, this paper constructs an intelligent classification approach for soil quality grade based on classical sampling techniques and a nominal (disordered) multi-class logistic regression model. As a case study, the learning sample capacity was determined under a given confidence level and estimation accuracy, and the c-means algorithm was used to automatically extract a simplified learning sample dataset from the cultivated-soil quality grade evaluation database for the study area, Longchuan county in Guangdong province; a disordered logistic classifier model was then built and the calculation and analysis steps of intelligent soil quality grade classification were given. The results indicate that soil quality grade can be effectively learned and predicted from the extracted simplified dataset with this method, which changes the traditional approach to soil quality grade evaluation. © 2011 IEEE.
Research on carrying capacity of hydrostatic slideway on heavy-duty gantry CNC machine
NASA Astrophysics Data System (ADS)
Cui, Chao; Guo, Tieneng; Wang, Yijie; Dai, Qin
2017-05-01
The hydrostatic slideway is a key part of a heavy-duty gantry CNC machine: it supports the total weight of the gantry and moves smoothly along the table. The oil film between the sliding rails therefore plays an important role in the carrying capacity and precision of the machine. In this paper, the frictionless oil film is simulated with three-dimensional CFD. The carrying capacity of the heavy-duty hydrostatic slideway and the pressure and velocity characteristics of the flow field are analyzed. The simulation results are verified by comparison with experimental data obtained from the heavy-duty gantry machine. For engineering use, the oil-film carrying capacity is also analyzed with a simplified theoretical method; the precision of the simplified method is evaluated and its effectiveness is verified with the experimental data. The simplified calculation method is provided for designing oil pads for the hydrostatic slideways of heavy-duty gantry CNC machines.
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-10-01
Aims: We derive a simplified model for estimating atomic data on inelastic processes in low-energy collisions of heavy-particles with hydrogen, in particular for the inelastic processes with high and moderate rate coefficients. It is known that these processes are important for non-LTE modeling of cool stellar atmospheres. Methods: Rate coefficients are evaluated using a derived method, which is a simplified version of a recently proposed approach based on the asymptotic method for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: The rate coefficients are found to be expressed via statistical probabilities and reduced rate coefficients. It turns out that the reduced rate coefficients for mutual neutralization and ion-pair formation processes depend on single electronic bound energies of an atom, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to potassium-hydrogen collisions. For the first time, rate coefficients are evaluated for inelastic processes in K+H and K++H- collisions for all transitions from ground states up to and including ionic states. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/606/A147
Fast calculation of the line-spread-function by transversal directions decoupling
NASA Astrophysics Data System (ADS)
Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra
2016-07-01
We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with line-shaped illumination (the 'line-spread function'). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier-optics formalism. This requires simpler calculations than the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods are in very good mutual agreement.
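The decoupling means a single 1-D Fourier transform of the pupil suffices; the sketch below shows that reduction for a uniform slit pupil under the usual Fraunhofer scaling. It is an illustration of the general idea, not the authors' implementation, and the pupil width, wavelength, and focal length are hypothetical.

```python
import numpy as np

def line_spread_function(pupil_1d, dx, wavelength, focal_length):
    """Line-spread function from a single 1-D cut of the pupil: the LSF is the
    squared modulus of the 1-D Fourier transform of the pupil function, with
    image-plane position x = spatial_frequency * wavelength * focal_length.
    Returns image coordinates and the peak-normalized LSF."""
    field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil_1d)))
    lsf = np.abs(field)**2
    freqs = np.fft.fftshift(np.fft.fftfreq(len(pupil_1d), d=dx))
    x_image = freqs * wavelength * focal_length
    return x_image, lsf / lsf.max()


# Uniform 5 mm slit pupil, 550 nm light, 100 mm focal length
u = np.linspace(-5e-3, 5e-3, 4096)
pupil = (np.abs(u) <= 2.5e-3).astype(float)
x, lsf = line_spread_function(pupil, u[1] - u[0], 550e-9, 0.1)  # ~sinc**2 profile
```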
The role of interest and inflation rates in life-cycle cost analysis
NASA Technical Reports Server (NTRS)
Eisenberger, I.; Remer, D. S.; Lorden, G.
1978-01-01
The effect of projected interest and inflation rates on life cycle cost calculations is discussed and a method is proposed for making such calculations which replaces these rates by a single parameter. Besides simplifying the analysis, the method clarifies the roles of these rates. An analysis of historical interest and inflation rates from 1950 to 1976 shows that the proposed method can be expected to yield very good projections of life cycle cost even if the rates themselves fluctuate considerably.
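A minimal sketch of how the two rates collapse into one parameter, assuming (as an interpretation, not a statement of the paper's exact formulation) that the single parameter is the real, inflation-adjusted discount rate:

```python
def life_cycle_present_value(annual_cost_today, years, interest, inflation):
    """Present value of a recurring cost that escalates with inflation f and is
    discounted at the market interest rate i.  The two rates combine into a
    single real rate d = (i - f) / (1 + f), since
        C0*(1+f)**t / (1+i)**t = C0 / (1+d)**t."""
    d = (interest - inflation) / (1.0 + inflation)
    return sum(annual_cost_today / (1.0 + d)**t for t in range(1, years + 1))


# Hypothetical 20-year life cycle: $10,000/yr in today's dollars, 7% interest, 5% inflation
print(round(life_cycle_present_value(10_000, 20, 0.07, 0.05)))
```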
DOT National Transportation Integrated Search
2012-11-30
This report presents the results of the study to extend the useful attenuation range of the Approximate Method outlined in the American National Standard, Method for Calculation of the Absorption of Sound by the Atmosphere (ANSI S1.26-1995), an...
Schemel, Laurence E.
2001-01-01
This article presents a simplified conversion to salinity units for use with specific conductance data from monitoring stations that have been normalized to a standard temperature of 25 °C and an equation for the reverse calculation. Although these previously undocumented methods have been shared with many IEP agencies over the last two decades, the sources of the equations and data are identified here so that the original literature can be accessed.
Lajolo, Carlo; Giuliani, Michele; Cordaro, Massimo; Marigo, Luca; Marcelli, Antonio; Fiorillo, Fabio; Pascali, Vincenzo L; Oliva, Antonio
2013-10-01
Chronological age (CA) plays a fundamental role in forensic dentistry (i.e. personal identification and evaluation of imputability). Even though several studies have outlined the association between biological and chronological age, there is still great variability in the estimates. The aim of this study was to determine the possible correlation between biological age and CA through the use of two new radiographic indexes (Oro-Cervical Radiographic Simplified Score - OCRSS and Oro-Cervical Radiographic Simplified Score Without Wisdom Teeth - OCRSSWWT) that are based on the oro-cervical area. Sixty Italian Caucasian individuals were divided into 3 groups according to their CA: Group 1: CAG 1 = 8-14 yr; Group 2: CAG 2 = 14-18 yr; Group 3: CAG 3 = 18-25 yr. Panorexes and standardised cephalograms were evaluated according to Demirjian's Method for dental age calculation (DM), the Cervical Vertebral Maturation method for skeletal age calculation (CVMS), and Third Molar Development for age estimation (TMD). The stages of each method were simplified in order to generate OCRSS, which summarized the simplified scores of the three methods, and OCRSSWWT, which summarized the simplified DM and CVMS scores. There was a significant correlation between OCRSS and CAGs (Slope = 0.954, p < 0.001, R-squared = 0.79) and between OCRSSWWT and CAGs (Slope = 0.863, p < 0.001, R-squared = 0.776). Even though the indexes, especially OCRSS, appear to be highly reliable, growth variability among individuals can deeply influence the anatomical changes from childhood to adulthood. A multi-disciplinary approach that considers many different biomarkers could help make radiological age determination more reliable when it is used to predict CA. Copyright © 2013 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.
NASA Astrophysics Data System (ADS)
Rugun, Y.; Zhaoyan, Q.
1986-05-01
In this paper, the concepts and methods for the design of high-Mach-number airfoils for axial-flow compressors are described. Correlation equations for the main parameters, such as airfoil and cascade geometry, stream parameters, and compressor wake characteristics, are provided. To obtain the total-pressure-loss coefficients of the cascade with the simplified calculation method, several curves and charts are provided by the authors. The test results and calculated values are compared and found to be in good agreement.
A Fast and Accurate Method of Radiation Hydrodynamics Calculation in Spherical Symmetry
NASA Astrophysics Data System (ADS)
Stamer, Torsten; Inutsuka, Shu-ichiro
2018-06-01
We develop a new numerical scheme for solving the radiative transfer equation in a spherically symmetric system. This scheme does not rely on any kind of diffusion approximation, and it is accurate for optically thin, thick, and intermediate systems. In the limit of a homogeneously distributed extinction coefficient, our method is very accurate and exceptionally fast. We combine this fast method with a slower but more generally applicable method to describe realistic problems. We perform various test calculations, including a simplified protostellar collapse simulation. We also discuss possible future improvements.
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weights of the state quantities to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed weight modification sequential method is demonstrated and validated on a modified IEEE 14-bus system.
NASA Technical Reports Server (NTRS)
Murthy, A. V.
1987-01-01
A simplified four-wall interference assessment method has been described, and a computer program developed to facilitate correction of airfoil data obtained in the Langley 0.3-m Transonic Cryogenic Tunnel (TCT). The procedure adopted is to first apply a blockage correction due to sidewall boundary-layer effects by various methods. The sidewall boundary-layer corrected data are then used to calculate the top and bottom wall interference effects by the method of Capallier, Chevallier and Bouinol, using the measured wall pressure distribution and the model force coefficients. The interference corrections obtained by the present method have been compared with other methods and found to give good agreement for the experimental data obtained in the TCT with slotted top and bottom walls.
High fidelity simulations of infrared imagery with animated characters
NASA Astrophysics Data System (ADS)
Näsström, F.; Persson, A.; Bergström, D.; Berggren, J.; Hedström, J.; Allvar, J.; Karlsson, M.
2012-06-01
High fidelity simulations of IR signatures and imagery tend to be slow and do not have effective support for animation of characters. Simplified rendering methods based on computer graphics methods can be used to overcome these limitations. This paper presents a method to combine these tools and produce simulated high fidelity thermal IR data of animated people in terrain. Infrared signatures for human characters have been calculated using RadThermIR. To handle multiple character models, these calculations use a simplified material model for the anatomy and clothing. Weather and temperature conditions match the IR-texture used in the terrain model. The calculated signatures are applied to the animated 3D characters that, together with the terrain model, are used to produce high fidelity IR imagery of people or crowds. For high level animation control and crowd simulations, HLAS (High Level Animation System) has been developed. There are tools available to create and visualize skeleton based animations, but tools that allow control of the animated characters on a higher level, e.g. for crowd simulation, are usually expensive and closed source. We need the flexibility of HLAS to add animation into an HLA enabled sensor system simulation framework.
Determining Planck's Constant Using a Light-emitting Diode.
ERIC Educational Resources Information Center
Sievers, Dennis; Wilson, Alan
1989-01-01
Describes a method for making a simple, inexpensive apparatus which can be used to determine Planck's constant. Provides illustrations of a circuit diagram using one or more light-emitting diodes and a BASIC computer program for simplifying calculations. (RT)
Three-dimensional unsteady lifting surface theory in the subsonic range
NASA Technical Reports Server (NTRS)
Kuessner, H. G.
1985-01-01
The methods of unsteady lifting surface theory are surveyed. The linearized Euler equations are simplified by means of a Galileo-Lorentz transformation and a Laplace transformation, so that time and the compressibility of the fluid are reduced to two constants. The solutions to this simplified problem are represented as integrals with a differential kernel; this leads to tolerance conditions that any exact solution must satisfy. It is shown that none of the existing three-dimensional lifting surface theories in the subsonic range satisfies these conditions. An oscillating elliptic lifting surface which satisfies the tolerance conditions is calculated through the use of Lame's functions. Numerical examples are calculated for the limiting cases of infinitely elongated elliptic lifting surfaces and of circular lifting surfaces. From the harmonic solutions, arbitrary temporal changes of the downwash are calculated through the use of an inverse Laplace transformation.
A simplified method for assessing particle deposition rate in aircraft cabins
NASA Astrophysics Data System (ADS)
You, Ruoyu; Zhao, Bin
2013-03-01
Particle deposition in aircraft cabins is important for the exposure of passengers to particulate matter, as well as to airborne infectious diseases. In this study, a simplified method is proposed for initial and quick assessment of particle deposition rate in aircraft cabins. The method includes: collecting the inclined angle, area, characteristic length, and freestream air velocity for each surface in a cabin; estimating the friction velocity based on the characteristic length and freestream air velocity; modeling the particle deposition velocity using the empirical equation we developed previously; and then calculating the particle deposition rate. The particle deposition rates for the fully-occupied, half-occupied, 1/4-occupied and empty first-class cabin of the MD-82 commercial airliner were estimated. The results show that the occupancy did not significantly influence the particle deposition rate of the cabin. Furthermore, the simplified human model can be used in the assessment with acceptable accuracy. Finally, the comparison results show that the particle deposition rates of aircraft cabins and indoor environments are quite similar.
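As a rough illustration of the surface-by-surface bookkeeping described in this abstract, the following sketch estimates a cabin deposition rate from per-surface data. The friction-velocity and deposition-velocity correlations used here are generic placeholders, not the empirical equation of You and Zhao, and the cabin data are invented for the example.

```python
import math

def friction_velocity(u_free, length, nu=1.5e-5):
    """Rough flat-plate estimate: u* ~ u_free*sqrt(Cf/2), with Cf from Re_L."""
    re = u_free * length / nu
    cf = 0.074 / re ** 0.2 if re > 5e5 else 1.328 / math.sqrt(re)
    return u_free * math.sqrt(cf / 2.0)

def deposition_velocity(u_star, angle_deg, d_p):
    """Placeholder correlation, NOT the empirical equation of the study."""
    gravity_term = max(math.cos(math.radians(angle_deg)), 0.0) * 1e-4 * d_p
    diffusion_term = 1e-3 * u_star
    return gravity_term + diffusion_term          # m/s (illustrative only)

def deposition_rate(surfaces, volume):
    """k = sum(v_d,i * A_i) / V  [1/s]; often reported in 1/h."""
    return sum(deposition_velocity(friction_velocity(s["u"], s["L"]),
                                   s["angle"], s["dp"]) * s["A"]
               for s in surfaces) / volume

# Illustrative surfaces: inclined angle (deg), area (m2), length (m), u (m/s), particle size (um).
cabin = [{"angle": 0, "A": 12.0, "L": 3.0, "u": 0.3, "dp": 1.0},    # floor
         {"angle": 90, "A": 20.0, "L": 2.0, "u": 0.2, "dp": 1.0}]   # walls
print(deposition_rate(cabin, volume=28.0) * 3600, "1/h")
```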
Calibration method and apparatus for measuring the concentration of components in a fluid
Durham, M.D.; Sagan, F.J.; Burkhardt, M.R.
1993-12-21
A calibration method and apparatus for use in measuring the concentrations of components of a fluid is provided. The measurements are determined from the intensity of radiation over a selected range of radiation wavelengths using peak-to-trough calculations. The peak-to-trough calculations are simplified by compensating for radiation absorption by the apparatus. The invention also allows absorption characteristics of an interfering fluid component to be accurately determined and negated thereby facilitating analysis of the fluid. 7 figures.
Calibration method and apparatus for measuring the concentration of components in a fluid
Durham, Michael D.; Sagan, Francis J.; Burkhardt, Mark R.
1993-01-01
A calibration method and apparatus for use in measuring the concentrations of components of a fluid is provided. The measurements are determined from the intensity of radiation over a selected range of radiation wavelengths using peak-to-trough calculations. The peak-to-trough calculations are simplified by compensating for radiation absorption by the apparatus. The invention also allows absorption characteristics of an interfering fluid component to be accurately determined and negated thereby facilitating analysis of the fluid.
Trial densities for the extended Thomas-Fermi model
NASA Astrophysics Data System (ADS)
Yu, An; Jimin, Hu
1996-02-01
A new and simplified form of nuclear densities is proposed for the extended Thomas-Fermi method (ETF) and applied to calculate the ground-state properties of several spherical nuclei, with results comparable or even better than other conventional density profiles. With the expectation value method (EVM) for microscopic corrections we checked our new densities for spherical nuclei. The binding energies of ground states almost reproduce the Hartree-Fock (HF) calculations exactly. Further applications to nuclei far away from the β-stability line are discussed.
A practical method of predicting the loudness of complex electrical stimuli
NASA Astrophysics Data System (ADS)
McKay, Colette M.; Henshall, Katherine R.; Farrell, Rebecca J.; McDermott, Hugh J.
2003-04-01
The output of speech processors for multiple-electrode cochlear implants consists of current waveforms with complex temporal and spatial patterns. The majority of existing processors output sequential biphasic current pulses. This paper describes a practical method of calculating loudness estimates for such stimuli, in addition to the relative loudness contributions from different cochlear regions. The method can be used either to manipulate the loudness or levels in existing processing strategies, or to control intensity cues in novel sound processing strategies. The method is based on a loudness model described by McKay et al. [J. Acoust. Soc. Am. 110, 1514-1524 (2001)] with the addition of the simplifying approximation that current pulses falling within a temporal integration window of several milliseconds' duration contribute independently to the overall loudness of the stimulus. Three experiments were carried out with six implantees who use the CI24M device manufactured by Cochlear Ltd. The first experiment validated the simplifying assumption, and allowed loudness growth functions to be calculated for use in the loudness prediction method. The following experiments confirmed the accuracy of the method using multiple-electrode stimuli with various patterns of electrode locations and current levels.
COMPUTING SI AND CCPP USING SPREADSHEET PROGRAMS
Lotus 1-2-3 worksheets for calculating the calcite saturation index (SI) and calcium carbonate precipitation potential of a water sample are described. A simplified worksheet illustrates the principles of the method, and a more complex worksheet suitable for modeling most potabl...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A; Pasciak, A
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. The purpose of this study was to assess the accuracy of different indirect dose estimates and to determine if PSD can be calculated within ±50% for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures. Indirect dose metrics from procedures were collected, including reference air kerma (RAK). Four different estimates of PSD were calculated and compared along with RAK to the measured PSD. The indirect estimates included a standard method, use of detailed information from the RDSR, and two simplified calculation methods. Indirect dosimetry was compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the indirect estimates were examined. Results: PSD calculated with the standard calculation method were within ±50% for all 41 procedures. This was also true for a simplified method using a single source-to-patient distance (SPD) for all calculations. RAK was within ±50% for all but one procedure. Cases for which RAK or calculated PSD exhibited large differences from the measured PSD were analyzed, and two causative factors were identified: ‘extreme’ SPD and large contributions to RAK from rotational angiography or runs acquired at large gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±50% for embolization procedures, and usually to within ±35%. RAK can be used without modification to set notification limits and substantial radiation dose levels. These results can be extended to similar procedures, including vascular and interventional oncology. Film dosimetry is likely an unnecessary effort for these types of procedures.
Interpretation of searches for supersymmetry with simplified models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chatrchyan, S.; Khachatryan, V.; Sirunyan, A. M.
The results of searches for supersymmetry by the CMS experiment are interpreted in the framework of simplified models. The results are based on data corresponding to an integrated luminosity of 4.73 to 4.98 inverse femtobarns. The data were collected at the LHC in proton-proton collisions at a center-of-mass energy of 7 TeV. This paper describes the method of interpretation and provides upper limits on the product of the production cross section and branching fraction as a function of new particle masses for a number of simplified models. These limits and the corresponding experimental acceptance calculations can be used to constrain other theoretical models and to compare different supersymmetry-inspired analyses.
NASA Astrophysics Data System (ADS)
Khondok, Piyoros; Sakulkalavek, Aparporn; Suwansukho, Kajpanya
2018-03-01
A simplified and powerful image processing procedure to separate the paddy of KHAW DOK MALI 105, or Thai jasmine rice, from the paddy of the sticky rice variety RD6 is proposed. The procedure consists of image thresholding, image chain coding, and curve fitting using a polynomial function. From the fitting, three parameters of each variety, perimeter, area, and eccentricity, were calculated. Finally, the overall parameters were determined using principal component analysis. The results show that these procedures can significantly separate the two varieties.
Electron scattering intensities and Patterson functions of Skyrmions
NASA Astrophysics Data System (ADS)
Karliner, M.; King, C.; Manton, N. S.
2016-06-01
The scattering of electrons off nuclei is one of the best methods of probing nuclear structure. In this paper we focus on electron scattering off nuclei with spin and isospin zero within the Skyrme model. We consider two distinct methods and simplify our calculations by use of the Born approximation. The first method is to calculate the form factor of the spherically averaged Skyrmion charge density; the second uses the Patterson function to calculate the scattering intensity off randomly oriented Skyrmions, and spherically averages at the end. We compare our findings with experimental scattering data. We also find approximate analytical formulae for the first zero and first stationary point of a form factor.
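A minimal numerical sketch of the first method mentioned above, the Born-approximation form factor of a spherically averaged charge density, is given below. The density profile is an arbitrary illustrative function, not a Skyrmion solution.

```python
import numpy as np

def form_factor(r, rho, q):
    """Born-approximation form factor of a spherically symmetric density:
    F(q) = 4*pi * integral r^2 rho(r) sin(q r)/(q r) dr."""
    qr = np.outer(q, r)
    sinc = np.where(qr > 1e-12, np.sin(qr) / np.maximum(qr, 1e-12), 1.0)
    integrand = 4.0 * np.pi * r ** 2 * rho * sinc
    return np.trapz(integrand, r, axis=1)

# Illustrative density (not a Skyrmion profile): a smooth, finite-range blob.
r = np.linspace(1e-4, 10.0, 2000)          # fm
rho = np.exp(-r ** 2 / 2.0)                # arbitrary units
q = np.linspace(0.01, 5.0, 500)            # fm^-1
F = form_factor(r, rho, q)
F_normalized = F / F[0]                    # F(0) equals the total "charge"
# The first zero and first stationary point of F(q) are the features compared with data.
```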
Methods for determining the internal thrust of scramjet engine modules from experimental data
NASA Technical Reports Server (NTRS)
Voland, Randall T.
1990-01-01
Methods for calculating zero-fuel internal drag of scramjet engine modules from experimental measurements are presented. These methods include two control-volume approaches, and a pressure and skin-friction integration. The three calculation techniques are applied to experimental data taken during tests of a version of the NASA parametric scramjet. The methods agree to within seven percent of the mean value of zero-fuel internal drag even though several simplifying assumptions are made in the analysis. The mean zero-fuel internal drag coefficient for this particular engine is calculated to be 0.150. The zero-fuel internal drag coefficient when combined with the change in engine axial force with and without fuel defines the internal thrust of an engine.
The integral line-beam method for gamma skyshine analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shultis, J.K.; Faw, R.E.; Bassett, M.S.
1991-03-01
This paper presents a refinement of a simplified method, based on line-beam response functions, for performing skyshine calculations for shielded and collimated gamma-ray sources. New coefficients for an empirical fit to the line-beam response function are provided and a prescription for making the response function continuous in energy and emission direction is introduced. For a shielded source, exponential attenuation and a buildup factor correction for scattered photons in the shield are used. Results of the new integral line-beam method of calculation are compared to a variety of benchmark experimental data and calculations and are found to give generally excellent agreement at a small fraction of the computational expense required by other skyshine methods.
Simplified method to solve sound transmission through structures lined with elastic porous material.
Lee, J H; Kim, J
2001-11-01
An approximate analysis method is developed to calculate sound transmission through structures lined with porous material. Because the porous material has both a solid phase and a fluid phase, three wave components exist in the material, which makes the related analysis very complicated. The main idea in developing the approximate method is very simple: modeling the porous material using only the strongest of the three waves, which in effect idealizes the material as an equivalent fluid. The analysis procedure has to be conducted in two steps. In the first step, sound transmission through a flat double panel with a porous liner of infinite extent, which has the same cross sectional construction as the actual structure, is solved based on the full theory and the strongest wave component is identified. In the second step, sound transmission through the actual structure is solved modeling the porous material as an equivalent fluid while using the actual geometry of the structure. The development and validation of the method are discussed in detail. As an application example, the transmission loss through double walled cylindrical shells with a porous core is calculated utilizing the simplified method.
Nonlinear optimization simplified by hypersurface deformation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stillinger, F.H.; Weber, T.A.
1988-09-01
A general strategy is advanced for simplifying nonlinear optimization problems, the ant-lion method. This approach exploits shape modifications of the cost-function hypersurface which distend basins surrounding low-lying minima (including global minima). By intertwining hypersurface deformations with steepest-descent displacements, the search is concentrated on a small relevant subset of all minima. Specific calculations demonstrating the value of this method are reported for the partitioning of two classes of irregular but nonrandom graphs, the prime-factor graphs and the pi graphs. We also indicate how this approach can be applied to the traveling salesman problem and to design layout optimization, and that it may be useful in combination with simulated annealing strategies.
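The following toy illustrates the general idea of intertwining hypersurface deformation with steepest descent, but uses a graduated Gaussian smoothing of a one-dimensional cost function rather than the ant-lion deformation itself; the cost function, schedule, and step sizes are invented for the example.

```python
import numpy as np

def smoothed(f_vals, x, sigma):
    """Gaussian-smoothed copy of the cost function (the 'deformed' surface)."""
    if sigma == 0:
        return f_vals
    kernel = np.exp(-0.5 * ((x[:, None] - x[None, :]) / sigma) ** 2)
    kernel /= kernel.sum(axis=1, keepdims=True)
    return kernel @ f_vals

x = np.linspace(-4, 4, 801)
f = 0.1 * x ** 4 - x ** 2 + 0.8 * np.sin(5 * x)       # rugged double-well cost
xi = 3.5                                              # poor starting point
for sigma in (1.0, 0.5, 0.2, 0.0):                    # deformation schedule
    fs = smoothed(f, x, sigma)
    grad = np.gradient(fs, x)
    for _ in range(500):                              # steepest descent on this surface
        xi = np.clip(xi - 0.01 * np.interp(xi, x, grad), x[0], x[-1])
print("final x:", round(float(xi), 3), "f(x):", round(float(np.interp(xi, x, f)), 3))
```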
NASA Technical Reports Server (NTRS)
Rogallo, Vernon L; Yaggy, Paul F; Mccloud, John L , III
1956-01-01
A simplified procedure is shown for calculating the once-per-revolution oscillating aerodynamic thrust loads on propellers of tractor airplanes at zero yaw. The only flow field information required for the application of the procedure is a knowledge of the upflow angles at the horizontal center line of the propeller disk. Methods are presented whereby these angles may be computed without recourse to experimental survey of the flow field. The loads computed by the simplified procedure are compared with those computed by a more rigorous method and the procedure is applied to several airplane configurations which are believed typical of current designs. The results are generally satisfactory.
Quantum chemical calculation of the equilibrium structures of small metal atom clusters
NASA Technical Reports Server (NTRS)
Kahn, L. R.
1982-01-01
Metal atom clusters are studied based on the application of ab initio quantum mechanical approaches. Because these large 'molecular' systems pose special practical computational problems in the application of the quantum mechanical methods, there is a special need to find simplifying techniques that do not compromise the reliability of the calculations. Research is therefore directed towards various aspects of the implementation of the effective core potential technique for the removal of the metal atom core electrons from the calculations.
Advancements in dynamic kill calculations for blowout wells
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kouba, G.E.; MacDougall, G.R.; Schumacher, B.W.
1993-09-01
This paper addresses the development, interpretation, and use of dynamic kill equations. To this end, three simple calculation techniques are developed for determining the minimum dynamic kill rate. Two techniques contain only single-phase calculations and are independent of reservoir inflow performance. Despite these limitations, these two methods are useful for bracketing the minimum flow rates necessary to kill a blowing well. For the third technique, a simplified mechanistic multiphase-flow model is used to determine a most-probable minimum kill rate.
A simplified competition data analysis for radioligand specific activity determination.
Venturino, A; Rivera, E S; Bergoc, R M; Caro, R A
1990-01-01
Non-linear regression and two-step linear fit methods were developed to determine the actual specific activity of 125I-ovine prolactin by radioreceptor self-displacement analysis. The experimental results obtained by the different methods are superposable. The non-linear regression method is considered to be the most adequate procedure to calculate the specific activity, but if its software is not available, the other described methods are also suitable.
NASA Astrophysics Data System (ADS)
Iwaki, Sunao; Ueno, Shoogo
1998-06-01
The weighted minimum-norm estimation (wMNE) is a popular method to obtain the source distribution in the human brain from magneto- and electroencephalographic measurements when detailed information about the generator profile is not available. We propose a method to reconstruct current distributions in the human brain based on the wMNE technique with the weighting factors defined by a simplified multiple signal classification (MUSIC) prescanning. In this method, in addition to the conventional depth normalization technique, weighting factors of the wMNE were determined by the cost values previously calculated by a simplified MUSIC scanning which contains the temporal information of the measured data. We performed computer simulations of this method and compared it with the conventional wMNE method. The results show that the proposed method is effective for the reconstruction of the current distributions from noisy data.
Reinforcing mechanism of anchors in slopes: a numerical comparison of results of LEM and FEM
NASA Astrophysics Data System (ADS)
Cai, Fei; Ugai, Keizo
2003-06-01
This paper reports the limitation of the conventional Bishop's simplified method to calculate the safety factor of slopes stabilized with anchors, and proposes a new approach to considering the reinforcing effect of anchors on the safety factor. The reinforcing effect of anchors can be explained using an additional shearing resistance on the slip surface. A three-dimensional shear strength reduction finite element method (SSRFEM), where soil-anchor interactions were simulated by three-dimensional zero-thickness elasto-plastic interface elements, was used to calculate the safety factor of slopes stabilized with anchors to verify the reinforcing mechanism of anchors. The results of SSRFEM were compared with those of the conventional and proposed approaches for Bishop's simplified method for various orientations, positions, and spacings of anchors, and shear strengths of soil-grouted body interfaces. For the safety factor, the proposed approach compared better with SSRFEM than the conventional approach. The additional shearing resistance can explain the influence of the orientation, position, and spacing of anchors, and the shear strength of soil-grouted body interfaces on the safety factor of slopes stabilized with anchors.
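A minimal sketch of the fixed-point iteration behind Bishop's simplified method is shown below, with a hypothetical lumped term standing in for the anchors' additional shearing resistance on the slip surface; the paper's exact formulation of that term is not reproduced here, and the slice data are illustrative.

```python
import math

def bishop_fs(slices, anchor_resistance=0.0, tol=1e-6, max_iter=100):
    """Bishop's simplified factor of safety, iterated because m_alpha depends on FS."""
    fs = 1.0                                              # initial guess
    driving = sum(s["W"] * math.sin(s["alpha"]) for s in slices)
    for _ in range(max_iter):
        resisting = 0.0
        for s in slices:
            num = s["c"] * s["b"] + (s["W"] - s["u"] * s["b"]) * math.tan(s["phi"])
            m_alpha = math.cos(s["alpha"]) * (1.0 + math.tan(s["alpha"]) * math.tan(s["phi"]) / fs)
            resisting += num / m_alpha
        fs_new = (resisting + anchor_resistance) / driving   # hypothetical anchor term
        if abs(fs_new - fs) < tol:
            return fs_new
        fs = fs_new
    return fs

# Illustrative slices: W (kN/m), alpha (rad), b (m), c (kPa), phi (rad), u (kPa).
slices = [{"W": 120.0, "alpha": math.radians(a), "b": 2.0,
           "c": 10.0, "phi": math.radians(30), "u": 20.0} for a in (35, 25, 15, 5)]
print(bishop_fs(slices), bishop_fs(slices, anchor_resistance=60.0))
```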
FDTD modeling of thin impedance sheets
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Kunz, Karl S.
1991-01-01
Thin sheets of resistive or dielectric material are commonly encountered in radar cross section calculations. Analysis of such sheets is simplified by using sheet impedances. In this paper it is shown that sheet impedances can be modeled easily and accurately using Finite Difference Time Domain (FDTD) methods.
Simplified Calculation Of Solar Fluxes In Solar Receivers
NASA Technical Reports Server (NTRS)
Bhandari, Pradeep
1990-01-01
Simplified Calculation of Solar Flux Distribution on Side Wall of Cylindrical Cavity Solar Receivers computer program employs simple solar-flux-calculation algorithm for cylindrical-cavity-type solar receiver. Results compare favorably with those of more complicated programs. Applications include study of solar energy and transfer of heat, and space power/solar-dynamics engineering. Written in FORTRAN 77.
Multiscale methods for gore curvature calculations from FSI modeling of spacecraft parachutes
NASA Astrophysics Data System (ADS)
Takizawa, Kenji; Tezduyar, Tayfun E.; Kolesar, Ryan; Boswell, Cody; Kanai, Taro; Montel, Kenneth
2014-12-01
There are now some sophisticated and powerful methods for computer modeling of parachutes. These methods are capable of addressing some of the most formidable computational challenges encountered in parachute modeling, including fluid-structure interaction (FSI) between the parachute and air flow, design complexities such as those seen in spacecraft parachutes, and operational complexities such as use in clusters and disreefing. One should be able to extract from a reliable full-scale parachute modeling any data or analysis needed. In some cases, however, the parachute engineers may want to perform quickly an extended or repetitive analysis with methods based on simplified models. Some of the data needed by a simplified model can very effectively be extracted from a full-scale computer modeling that serves as a pilot. A good example of such data is the circumferential curvature of a parachute gore, where a gore is the slice of the parachute canopy between two radial reinforcement cables running from the parachute vent to the skirt. We present the multiscale methods we devised for gore curvature calculation from FSI modeling of spacecraft parachutes. The methods include those based on the multiscale sequentially-coupled FSI technique and using NURBS meshes. We show how the methods work for the fully-open and two reefed stages of the Orion spacecraft main and drogue parachutes.
Shielding analyses of an AB-BNCT facility using Monte Carlo simulations and simplified methods
NASA Astrophysics Data System (ADS)
Lai, Bo-Lun; Sheu, Rong-Jiun
2017-09-01
Accurate Monte Carlo simulations and simplified methods were used to investigate the shielding requirements of a hypothetical accelerator-based boron neutron capture therapy (AB-BNCT) facility that included an accelerator room and a patient treatment room. The epithermal neutron beam for BNCT purpose was generated by coupling a neutron production target with a specially designed beam shaping assembly (BSA), which was embedded in the partition wall between the two rooms. Neutrons were produced from a beryllium target bombarded by 1-mA 30-MeV protons. The MCNP6-generated surface sources around all the exterior surfaces of the BSA were established to facilitate repeated Monte Carlo shielding calculations. In addition, three simplified models based on a point-source line-of-sight approximation were developed and their predictions were compared with the reference Monte Carlo results. The comparison determined which model resulted in better dose estimation, forming the basis of future design activities for the first AB-BNCT facility in Taiwan.
NASA Astrophysics Data System (ADS)
Iliev, I.; Trivedi, C.; Dahlhaug, O. G.
2018-06-01
The paper presents a simplified one-dimensional calculation of the efficiency hill-chart for Francis turbines, based on the velocity triangles at the inlet and outlet of the runner’s blade. Calculation is done for one streamline, namely the shroud streamline in the meridional section, where an efficiency model is established and iteratively approximated in order to satisfy the Euler equation for turbomachines at a wide operating range around the best efficiency point (BEP). Using the presented method, hill charts are calculated for one splitter-bladed Francis turbine runner and one Reversible Pump-Turbine (RPT) runner operated in the turbine mode. Both turbines have similar and relatively low specific speeds of nsQ = 23.3 and nsQ = 27, equal inlet and outlet diameters and are designed to fit in the same turbine rig for laboratory measurements (i.e. spiral casing and draft tube are the same). Calculated hill charts are compared against performance data obtained experimentally from model tests according to IEC standards for both turbines. Good agreement between theoretical and experimental results is observed when comparing the shapes of the efficiency contours in the hill-charts. The simplified analysis identifies the design parameters that define the general shape and inclination of the turbine’s hill charts and, with some additional improvements in the loss models used, it can be used for quick assessment of the performance at off-design conditions during the design process of hydraulic turbines.
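A one-streamline sketch of the Euler-equation evaluation described above is given below; it assumes a swirl-free runner outlet, omits all loss models, and uses an illustrative geometry and operating point rather than the runners tested in the paper.

```python
import math

def hydraulic_efficiency(q, n_rpm, head, r1, b1, alpha1_deg, g=9.81):
    """Euler head from a simple inlet velocity triangle divided by the net head."""
    omega = 2.0 * math.pi * n_rpm / 60.0
    u1 = omega * r1                                      # blade speed at runner inlet
    cm1 = q / (2.0 * math.pi * r1 * b1)                  # meridional velocity at inlet
    cu1 = cm1 / math.tan(math.radians(alpha1_deg))       # swirl set by the guide vanes
    euler_head = u1 * cu1 / g                            # swirl-free outlet assumed (cu2 = 0)
    return euler_head / head

# Illustrative operating point: Q = 0.2 m3/s, n = 375 rpm, H = 11 m, r1 = 0.35 m, b1 = 0.06 m.
print(round(hydraulic_efficiency(q=0.2, n_rpm=375, head=11.0,
                                 r1=0.35, b1=0.06, alpha1_deg=12.0), 3))
```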
Safieddine, Doha; Chkeir, Aly; Herlem, Cyrille; Bera, Delphine; Collart, Michèle; Novella, Jean-Luc; Dramé, Moustapha; Hewson, David J; Duchêne, Jacques
2017-11-01
Falls are a major cause of death in older people. One method used to predict falls is analysis of Centre of Pressure (CoP) displacement, which provides a measure of balance quality. The Balance Quality Tester (BQT) is a device based on a commercial bathroom scale that calculates instantaneous values of vertical ground reaction force (Fz) as well as the CoP in both anteroposterior (AP) and mediolateral (ML) directions. The entire testing process needs to take no longer than 12 s to ensure subject compliance, making it vital that calculations related to balance are only calculated for the period when the subject is static. In the present study, a method is presented to detect the stabilization period after a subject has stepped onto the BQT. Four different phases of the test are identified (stepping-on, stabilization, balancing, stepping-off), ensuring that subjects are static when parameters from the balancing phase are calculated. The method, based on a simplified cumulative sum (CUSUM) algorithm, could detect the change between unstable and stable stance. The time taken to stabilize significantly affected the static balance variables of surface area and trajectory velocity, and was also related to Timed-up-and-Go performance. Such a finding suggests that the time to stabilize could be a worthwhile parameter to explore as a potential indicator of balance problems and fall risk in older people. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
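For illustration, a textbook two-sided CUSUM change detector is sketched below on a synthetic vertical-force trace; the BQT's actual statistic, thresholds, and phase segmentation are not specified in the abstract, so all values here are assumptions.

```python
import numpy as np

def cusum_change_point(x, drift=0.5, threshold=8.0, baseline=50):
    """Two-sided textbook CUSUM for a shift in mean. The signal is standardized
    against the first `baseline` samples; returns the first alarm index or None."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x[:baseline].mean(), x[:baseline].std() + 1e-12
    z = (x - mu) / sigma
    g_pos = g_neg = 0.0
    for i, zi in enumerate(z):
        g_pos = max(0.0, g_pos + zi - drift)
        g_neg = max(0.0, g_neg - zi - drift)
        if g_pos > threshold or g_neg > threshold:
            return i
    return None

# Illustrative Fz trace (N): quiet zero load, stepping-on transient, then stable stance.
rng = np.random.default_rng(1)
fz = np.concatenate([0 + 2 * rng.standard_normal(100),
                     np.linspace(0, 720, 60) + 30 * rng.standard_normal(60),
                     720 + 4 * rng.standard_normal(400)])
print("change detected at sample", cusum_change_point(fz))
```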
Ensuring the validity of calculated subcritical limits
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, H.K.
1977-01-01
The care taken at the Savannah River Laboratory and Plant to ensure the validity of calculated subcritical limits is described. Close attention is given to ANSI N16.1-1975, ''Validation of Calculational Methods for Nuclear Criticality Safety.'' The computer codes used for criticality safety computations, which are listed and are briefly described, have been placed in the SRL JOSHUA system to facilitate calculation and to reduce input errors. A driver module, KOKO, simplifies and standardizes input and links the codes together in various ways. For any criticality safety evaluation, correlations of the calculational methods are made with experiment to establish bias. Occasionally subcritical experiments are performed expressly to provide benchmarks. Calculated subcritical limits contain an adequate but not excessive margin to allow for uncertainty in the bias. The final step in any criticality safety evaluation is the writing of a report describing the calculations and justifying the margin.
Wang, Yufang; Wu, Yanzhao; Feng, Min; Wang, Hui; Jin, Qinghua; Ding, Datong; Cao, Xuewei
2008-12-01
Using a simple method, the reduced matrix method, we simplified the calculation of the phonon vibrational frequencies according to the SWNT structure and its phonon symmetry properties, and obtained the dispersion properties at the Gamma point of the Brillouin zone for all SWNTs whose diameters lie between 0.6 and 2.5 nm. The computation time is reduced by about 2-4 orders of magnitude. A series of relationships between the diameters of SWNTs and the frequencies of the Raman- and IR-active modes is given. Several fine structures, including "glazed tile" structures, are found in the omega versus d plots, which might indicate a macroscopic quantum phenomenon of the phonons in SWNTs.
Simplified Computation for Nonparametric Windows Method of Probability Density Function Estimation.
Joshi, Niranjan; Kadir, Timor; Brady, Michael
2011-08-01
Recently, Kadir and Brady proposed a method for estimating probability density functions (PDFs) for digital signals which they call the Nonparametric (NP) Windows method. The method involves constructing a continuous space representation of the discrete space and sampled signal by using a suitable interpolation method. NP Windows requires only a small number of observed signal samples to estimate the PDF and is completely data driven. In this short paper, we first develop analytical formulae to obtain the NP Windows PDF estimates for 1D, 2D, and 3D signals, for different interpolation methods. We then show that the original procedure to calculate the PDF estimate can be significantly simplified and made computationally more efficient by a judicious choice of the frame of reference. We have also outlined specific algorithmic details of the procedures enabling quick implementation. Our reformulation of the original concept has directly demonstrated a close link between the NP Windows method and the Kernel Density Estimator.
NASA Astrophysics Data System (ADS)
Popov, Igor; Sukov, Sergey
2018-02-01
A modification of the adaptive artificial viscosity (AAV) method is considered. This modification is based on one stage time approximation and is adopted to calculation of gasdynamics problems on unstructured grids with an arbitrary type of grid elements. The proposed numerical method has simplified logic, better performance and parallel efficiency compared to the implementation of the original AAV method. Computer experiments evidence the robustness and convergence of the method to difference solution.
NASA Astrophysics Data System (ADS)
Ribeiro Fontoura, Jessica; Allasia, Daniel; Herbstrith Froemming, Gabriel; Freitas Ferreira, Pedro; Tassi, Rutineia
2016-04-01
Evapotranspiration is a key process of the hydrological cycle and the sole term that links the land surface water balance and the land surface energy balance. Due to the higher data requirements of the Penman-Monteith method and the existing data uncertainty, simplified empirical methods for calculating potential and actual evapotranspiration are widely used in hydrological models. This is especially important in Brazil, where the monitoring of meteorological data is precarious. In this study, different methods for estimating evapotranspiration were compared for Rio Grande do Sul, the southernmost state of Brazil, with the aim of suggesting alternatives to the recommended method (Penman-Monteith FAO-56) for estimating daily reference evapotranspiration (ETo) when meteorological data are missing or unavailable. The input dataset included daily and hourly observed data from conventional and automatic weather stations, respectively, maintained by the National Weather Institute of Brazil (INMET) for the period 1 January 2007 to 31 January 2010. The dataset included maximum temperature (Tmax, °C), minimum temperature (Tmin, °C), mean relative humidity (%), wind speed at 2 m height (u2, m s-1), daily solar radiation (Rs, MJ m-2) and atmospheric pressure (kPa), which were aggregated to a daily time step. The Food and Agriculture Organization of the United Nations (FAO) Penman-Monteith method (PM) in its full form was tested against PM with several variables not normally available in Brazil assumed missing, in order to calculate daily reference ETo. Missing variables were estimated as suggested in the FAO-56 publication or from climatological means. Furthermore, PM was also compared against the following simplified empirical methods: Hargreaves-Samani, Priestley-Taylor, McCloud, McGuinness-Bordne, Romanenko, Radiation-Temperature, and Tanner-Pelton. The statistical analysis indicates that, even if only Tmin and Tmax are available, it is better to use PM with the missing variables estimated from synthetic data than the simplified empirical methods evaluated, except for Tanner-Pelton and Priestley-Taylor.
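As an example of one of the simplified empirical methods listed above, the commonly published form of the Hargreaves-Samani equation is sketched below; it requires only air temperature and extraterrestrial radiation, and the sample inputs are illustrative, not station data from the study.

```python
def eto_hargreaves_samani(t_max, t_min, ra_mj):
    """Daily reference evapotranspiration (mm/day), Hargreaves-Samani (1985).
    t_max, t_min in deg C; ra_mj = extraterrestrial radiation in MJ m-2 day-1."""
    t_mean = 0.5 * (t_max + t_min)
    ra_mm = 0.408 * ra_mj                      # convert MJ m-2 day-1 to mm/day equivalent
    return 0.0023 * ra_mm * (t_mean + 17.8) * (t_max - t_min) ** 0.5

print(eto_hargreaves_samani(t_max=29.0, t_min=18.0, ra_mj=38.0))   # roughly 4-5 mm/day
```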
Nonlinear optimization method of ship floating condition calculation in wave based on vector
NASA Astrophysics Data System (ADS)
Ding, Ning; Yu, Jian-xing
2014-08-01
Ship floating condition in regular waves is calculated. New equations controlling any ship's floating condition are proposed by use of the vector operation. This form is a nonlinear optimization problem which can be solved using the penalty function method with constant coefficients. And the solving process is accelerated by dichotomy. During the solving process, the ship's displacement and buoyant centre have been calculated by the integration of the ship surface according to the waterline. The ship surface is described using an accumulative chord length theory in order to determine the displacement, the buoyancy center and the waterline. The draught forming the waterline at each station can be found out by calculating the intersection of the ship surface and the wave surface. The results of an example indicate that this method is exact and efficient. It can calculate the ship floating condition in regular waves as well as simplify the calculation and improve the computational efficiency and the precision of results.
A Continuous Method for Gene Flow
Palczewski, Michal; Beerli, Peter
2013-01-01
Most modern population genetics inference methods are based on the coalescence framework. Methods that allow estimating parameters of structured populations commonly insert migration events into the genealogies. For these methods the calculation of the coalescence probability density of a genealogy requires a product over all time periods between events. Data sets that contain populations with high rates of gene flow among them require an enormous number of calculations. A new method, transition probability-structured coalescence (TPSC), replaces the discrete migration events with probability statements. Because the speed of calculation is independent of the amount of gene flow, this method allows calculating the coalescence densities efficiently. The current implementation of TPSC uses an approximation simplifying the interaction among lineages. Simulations and coverage comparisons of TPSC vs. MIGRATE show that TPSC allows estimation of high migration rates more precisely, but because of the approximation the estimation of low migration rates is biased. The implementation of TPSC into programs that calculate quantities on phylogenetic tree structures is straightforward, so the TPSC approach will facilitate more general inferences in many computer programs. PMID:23666937
Quantitative accuracy of the simplified strong ion equation to predict serum pH in dogs.
Cave, N J; Koo, S T
2015-01-01
An electrochemical approach to the assessment of acid-base states should provide a better mechanistic explanation of the metabolic component than methods that consider only pH and carbon dioxide. The hypothesis was that the simplified strong ion equation (SSIE), using published dog-specific values, would predict the measured serum pH of diseased dogs. Ten dogs, hospitalized for various reasons, were studied in a prospective study of a convenience sample of a consecutive series of dogs admitted to the Massey University Veterinary Teaching Hospital (MUVTH), in which serum biochemistry and blood gas analyses were performed at the same time. Serum pH was calculated (Hcal+) using the SSIE and published values for the concentration and dissociation constant of the nonvolatile weak acids (Atot and Ka), and subsequently Hcal+ was compared with the dog's actual pH (Hmeasured+). To determine the source of discordance between Hcal+ and Hmeasured+, the calculations were repeated using a series of substituted values for Atot and Ka. Hcal+ did not approximate Hmeasured+ for any dog (P = 0.499, r2 = 0.068) and was consistently more basic. Substituted values of Atot and Ka did not significantly improve the accuracy (r2 = 0.169 to <0.001). Substituting the effective SID (Atot-[HCO3-]) produced a strong association between Hcal+ and Hmeasured+ (r2 = 0.977). Using the simplified strong ion equation and the published values for Atot and Ka does not appear to provide a quantitative explanation for the acid-base status of dogs. The efficacy of substituting the effective SID in the simplified strong ion equation suggests the error lies in calculating the SID. Copyright © 2015 The Authors. Journal of Veterinary Internal Medicine published by Wiley Periodicals, Inc. on behalf of the American College of Veterinary Internal Medicine.
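For illustration, one commonly published form of the simplified strong ion model can be solved for pH by bisection, as sketched below; the charge balance, constants, and example inputs are generic assumptions and are not the dog-specific values evaluated in the study.

```python
def ssie_ph(sid, pco2, a_tot, p_ka, s=0.0307, pk1=6.12, lo=6.5, hi=8.0):
    """Solve SID = [HCO3-] + [A-] for pH, with [HCO3-] = S*pCO2*10**(pH - pK1')
    and [A-] = Atot / (1 + 10**(pKa - pH)); constants are generic plasma values."""
    def residual(ph):
        hco3 = s * pco2 * 10 ** (ph - pk1)             # mmol/L
        a_minus = a_tot / (1 + 10 ** (p_ka - ph))      # mmol/L
        return sid - hco3 - a_minus                    # positive if the pH guess is too low
    for _ in range(60):                                # bisection
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Example (illustrative): SID 38 mmol/L, pCO2 40 mmHg, Atot 17 mmol/L, pKa 7.1.
print(round(ssie_ph(sid=38.0, pco2=40.0, a_tot=17.0, p_ka=7.1), 3))
```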
Comparison of methods for developing the dynamics of rigid-body systems
NASA Technical Reports Server (NTRS)
Ju, M. S.; Mansour, J. M.
1989-01-01
Several approaches for developing the equations of motion for a three-degree-of-freedom PUMA robot were compared on the basis of computational efficiency (i.e., the number of additions, subtractions, multiplications, and divisions). Of particular interest was the investigation of the use of computer algebra as a tool for developing the equations of motion. Three approaches were implemented algebraically: Lagrange's method, Kane's method, and Wittenburg's method. Each formulation was developed in absolute and relative coordinates. These six cases were compared to each other and to a recursive numerical formulation. The results showed that all of the formulations implemented algebraically required fewer calculations than the recursive numerical algorithm. The algebraic formulations required fewer calculations in absolute coordinates than in relative coordinates. Each of the algebraic formulations could be simplified, using patterns from Kane's method, to yield the same number of calculations in a given coordinate system.
Simplified paraboloid phase model-based phase tracker for demodulation of a single complex fringe.
He, A; Deepan, B; Quan, C
2017-09-01
A regularized phase tracker (RPT) is an effective method for demodulation of single closed-fringe patterns. However, lengthy calculation time, specially designed scanning strategy, and sign-ambiguity problems caused by noise and saddle points reduce its effectiveness, especially for demodulating large and complex fringe patterns. In this paper, a simplified paraboloid phase model-based regularized phase tracker (SPRPT) is proposed. In SPRPT, first and second phase derivatives are pre-determined by the density-direction-combined method and discrete higher-order demodulation algorithm, respectively. Hence, cost function is effectively simplified to reduce the computation time significantly. Moreover, pre-determined phase derivatives improve the robustness of the demodulation of closed, complex fringe patterns. Thus, no specifically designed scanning strategy is needed; nevertheless, it is robust against the sign-ambiguity problem. The paraboloid phase model also assures better accuracy and robustness against noise. Both the simulated and experimental fringe patterns (obtained using electronic speckle pattern interferometry) are used to validate the proposed method, and a comparison of the proposed method with existing RPT methods is carried out. The simulation results show that the proposed method has achieved the highest accuracy with less computational time. The experimental result proves the robustness and the accuracy of the proposed method for demodulation of noisy fringe patterns and its feasibility for static and dynamic applications.
Erosion estimation of guide vane end clearance in hydraulic turbines with sediment water flow
NASA Astrophysics Data System (ADS)
Han, Wei; Kang, Jingbo; Wang, Jie; Peng, Guoyi; Li, Lianyuan; Su, Min
2018-04-01
The end surfaces of the guide vanes and head cover are among the parts most seriously affected by sediment erosion in high-head hydraulic turbines. In order to investigate the relationship between the erosion depth of a wall surface and the characteristic parameter of erosion, an estimation method comprising a simplified flow model and a modified erosion calculation function is proposed in this paper. The flow between the end surfaces of the guide vane and head cover is simplified as a clearance flow around a circular cylinder with a backward-facing step. The erosion characteristic parameter csws3 is calculated with the mixture model for multiphase flow and the renormalization group (RNG) k-ε turbulence model under the actual working conditions, based on which the erosion depths of the guide vane and head cover end surfaces are estimated with a modification of the erosion coefficient K. The estimation results agree well with the actual situation. It is shown that the estimation method is reasonable for erosion prediction of guide vanes and can provide a useful reference for determining the optimal maintenance cycle of hydraulic turbines.
Using the surface panel method to predict the steady performance of ducted propellers
NASA Astrophysics Data System (ADS)
Cai, Hao-Peng; Su, Yu-Min; Li, Xin; Shen, Hai-Long
2009-12-01
A new numerical method was developed for predicting the steady hydrodynamic performance of ducted propellers. A potential based surface panel method was applied both to the duct and the propeller, and the interaction between them was solved by an induced velocity potential iterative method. Compared with the induced velocity iterative method, the method presented can save programming and calculating time. Numerical results for a JD simplified ducted propeller series showed that the method presented is effective for predicting the steady hydrodynamic performance of ducted propellers.
BRST quantization of cosmological perturbations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Armendariz-Picon, Cristian; Şengör, Gizem
2016-11-08
BRST quantization is an elegant and powerful method to quantize theories with local symmetries. In this article we study the Hamiltonian BRST quantization of cosmological perturbations in a universe dominated by a scalar field, along with the closely related quantization method of Dirac. We describe how both formalisms apply to perturbations in a time-dependent background, and how expectation values of gauge-invariant operators can be calculated in the in-in formalism. Our analysis focuses mostly on the free theory. By appropriate canonical transformations we simplify and diagonalize the free Hamiltonian. BRST quantization in derivative gauges allows us to dramatically simplify the structure of the propagators, whereas Dirac quantization, which amounts to quantization in synchronous gauge, dispenses with the need to introduce ghosts and preserves the locality of the gauge-fixed action.
L'her, Erwan; Martin-Babau, Jérôme; Lellouche, François
2016-12-01
Knowledge of patients' height is essential for daily practice in the intensive care unit. However, actual height measurements are unavailable on a daily routine in the ICU and measured height in the supine position and/or visual estimates may lack consistency. Clinicians do need simple and rapid methods to estimate the patients' height, especially in short height and/or obese patients. The objectives of the study were to evaluate several anthropometric formulas for height estimation on healthy volunteers and to test whether several of these estimates will help tidal volume setting in ICU patients. This was a prospective, observational study in a medical intensive care unit of a university hospital. During the first phase of the study, eight limb measurements were performed on 60 healthy volunteers and 18 height estimation formulas were tested. During the second phase, four height estimates were performed on 60 consecutive ICU patients under mechanical ventilation. In the 60 healthy volunteers, actual height was well correlated with the gold standard, measured height in the erect position. Correlation was low between actual and calculated height, using the hand's length and width, the index, or the foot equations. The Chumlea method and its simplified version, performed in the supine position, provided adequate estimates. In the 60 ICU patients, calculated height using the simplified Chumlea method was well correlated with measured height (r = 0.78; ∂ < 1 %). Ulna and tibia estimates also provided valuable estimates. All these height estimates allowed calculating IBW or PBW that were significantly different from the patients' actual weight on admission. In most cases, tidal volume set according to these estimates was lower than what would have been set using the actual weight. When actual height is unavailable in ICU patients undergoing mechanical ventilation, alternative anthropometric methods to obtain patient's height based on lower leg and on forearm measurements could be useful to facilitate the application of protective mechanical ventilation in a Caucasian ICU population. The simplified Chumlea method is easy to achieve in a bed-ridden patient and provides accurate height estimates, with a low bias.
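A minimal sketch of the downstream use described above, turning an estimated height into a predicted body weight (PBW) and a protective tidal-volume setting, is given below. The PBW equations are the widely used ARDSNet formulas; the knee-height coefficients in the placeholder function are illustrative only and are not the simplified Chumlea equations evaluated in the study.

```python
def chumlea_height_placeholder(knee_height_cm, age_yr, male=True):
    """Knee-height-based height estimate; coefficients are illustrative placeholders."""
    a, b, c = (78.3, 1.94, 0.14) if male else (82.2, 1.85, 0.21)
    return a + b * knee_height_cm - c * age_yr                      # cm

def predicted_body_weight(height_cm, male=True):
    """ARDSNet predicted body weight (kg)."""
    base = 50.0 if male else 45.5
    return base + 0.91 * (height_cm - 152.4)

def protective_tidal_volume(height_cm, male=True, ml_per_kg=6.0):
    """Protective tidal volume (mL) at the chosen mL/kg of PBW."""
    return ml_per_kg * predicted_body_weight(height_cm, male)

h = chumlea_height_placeholder(knee_height_cm=52.0, age_yr=70, male=True)
print(round(h), "cm,", round(predicted_body_weight(h), 1), "kg PBW,",
      round(protective_tidal_volume(h)), "mL tidal volume")
```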
Li, Wei; Yi, Huangjian; Zhang, Qitan; Chen, Duofang; Liang, Jimin
2012-01-01
An extended finite element method (XFEM) for the forward model of 3D optical molecular imaging is developed with simplified spherical harmonics approximation (SPN). In XFEM scheme of SPN equations, the signed distance function is employed to accurately represent the internal tissue boundary, and then it is used to construct the enriched basis function of the finite element scheme. Therefore, the finite element calculation can be carried out without the time-consuming internal boundary mesh generation. Moreover, the required overly fine mesh conforming to the complex tissue boundary which leads to excess time cost can be avoided. XFEM conveniences its application to tissues with complex internal structure and improves the computational efficiency. Phantom and digital mouse experiments were carried out to validate the efficiency of the proposed method. Compared with standard finite element method and classical Monte Carlo (MC) method, the validation results show the merits and potential of the XFEM for optical imaging. PMID:23227108
Modal ring method for the scattering of sound
NASA Technical Reports Server (NTRS)
Baumeister, Kenneth J.; Kreider, Kevin L.
1993-01-01
The modal element method for acoustic scattering can be simplified when the scattering body is rigid. In this simplified method, called the modal ring method, the scattering body is represented by a ring of triangular finite elements forming the outer surface. The acoustic pressure is calculated at the element nodes. The pressure in the infinite computational region surrounding the body is represented analytically by an eigenfunction expansion. The two solution forms are coupled by the continuity of pressure and velocity on the body surface. The modal ring method effectively reduces the two-dimensional scattering problem to a one-dimensional problem capable of handling very high frequency scattering. In contrast to the boundary element method or the method of moments, which perform a similar reduction in problem dimension, the modal ring method has the added advantage of having a highly banded solution matrix requiring considerably less computer storage. The method shows excellent agreement with analytic results for scattering from rigid circular cylinders over a wide frequency range (1 ≤ ka ≤ 100) in the near and far fields.
Propulsive efficiency of frog swimming with different feet and swimming patterns
Jizhuang, Fan; Wei, Zhang; Bowen, Yuan; Gangfeng, Liu
2017-01-01
Aquatic and terrestrial animals have different swimming performances and mechanical efficiencies based on their different swimming methods. To explore propulsion in swimming frogs, this study calculated mechanical efficiencies based on data describing aquatic and terrestrial webbed-foot shapes and swimming patterns. First, a simplified frog model and dynamic equation were established, and hydrodynamic forces on the foot were computed using computational fluid dynamics calculations. Then, a two-link mechanism was used to stand in for the diverse and complicated hind legs found in different frog species, in order to simplify the input work calculation. Joint torques were derived based on the virtual work principle to compute the efficiency of foot propulsion. Finally, the two foot shapes and swimming patterns were combined to compute propulsive efficiency. The aquatic frog demonstrated a propulsive efficiency (43.11%) between those of drag-based and lift-based propulsions, while the terrestrial frog efficiency (29.58%) fell within the range of drag-based propulsion. The results illustrate that the swimming pattern is the main factor determining swimming performance and efficiency. PMID:28302669
Do, Thanh Nhut; Gelin, Maxim F; Tan, Howe-Siang
2017-10-14
We derive general expressions that incorporate finite pulse envelope effects into a coherent two-dimensional optical spectroscopy (2DOS) technique. These expressions are simpler and less computationally intensive than the conventional triple integral calculations needed to simulate 2DOS spectra. The simplified expressions involving multiplications of arbitrary pulse spectra with 2D spectral response function are shown to be exactly equal to the conventional triple integral calculations of 2DOS spectra if the 2D spectral response functions do not vary with population time. With minor modifications, they are also accurate for 2D spectral response functions with quantum beats and exponential decay during population time. These conditions cover a broad range of experimental 2DOS spectra. For certain analytically defined pulse spectra, we also derived expressions of 2D spectra for arbitrary population time dependent 2DOS spectral response functions. Having simpler and more efficient methods to calculate experimentally relevant 2DOS spectra with finite pulse effect considered will be important in the simulation and understanding of the complex systems routinely being studied by using 2DOS.
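Schematically, the simplified expression amounts to an elementwise multiplication of pulse spectra with the 2D spectral response function on the excitation and detection axes, as sketched below; the exact field conjugations and orderings depend on the phase-matching condition and are not reproduced here, and the Lorentzian response and Gaussian pulses are illustrative.

```python
import numpy as np

w1 = np.linspace(-300, 300, 256)          # excitation axis, cm^-1 relative to carrier
w3 = np.linspace(-300, 300, 256)          # detection axis
W1, W3 = np.meshgrid(w1, w3, indexing="ij")

def gaussian_spectrum(w, fwhm):
    return np.exp(-4 * np.log(2) * (w / fwhm) ** 2)

# Toy, population-time-independent 2D spectral response function R(w1, w3).
response = 1.0 / ((W1 - 50) ** 2 + 40 ** 2) / ((W3 + 50) ** 2 + 40 ** 2)

finite_pulse_2d = (gaussian_spectrum(w1, 120)[:, None]     # pump spectra on the w1 axis
                   * gaussian_spectrum(w3, 120)[None, :]   # probe/local-oscillator spectra on w3
                   * response)
# Relative to the impulsive limit, the spectrum is apodized by the finite pulse bandwidths.
```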
NASA Technical Reports Server (NTRS)
Sivells, James C; Westrick, Gertrude C
1952-01-01
A method is presented which allows the use of nonlinear section lift data in the calculation of the spanwise lift distribution of unswept wings with flaps or ailerons. This method is based upon lifting line theory and is an extension to the method described in NACA rep. 865. The mathematical treatment of the discontinuity in absolute angle of attack at the end of the flap or aileron involves the use of a correction factor which accounts for the inability of a limited trigonometric series to represent adequately the spanwise lift distribution. A treatment of the apparent discontinuity in maximum section lift coefficient is also described. Simplified computing forms containing detailed examples are given for both symmetrical and asymmetrical lift distributions. A few comparisons of calculated characteristics with those obtained experimentally are also presented.
NASA Astrophysics Data System (ADS)
Raimondi, Valentina; Palombi, Lorenzo; Lognoli, David; Masini, Andrea; Simeone, Emilio
2017-09-01
This paper presents experimental tests and radiometric calculations for the feasibility of an ultra-compact fluorescence LIDAR from an Unmanned Air Vehicle (UAV) for the characterisation of oil spills in natural waters. The first step of this study was to define the experimental conditions for a LIDAR and its budget constraints on the basis of the specifications of small UAVs already available on the market. The second step consisted of a set of fluorescence LIDAR measurements on oil spills in the laboratory in order to propose a simplified discrimination method and to calculate the oil fluorescence conversion efficiency. Lastly, the main technical specifications of the payload were defined and radiometric calculations carried out to evaluate the performances of both the payload and the proposed discrimination method.
Approximate relations and charts for low-speed stability derivatives of swept wings
NASA Technical Reports Server (NTRS)
Toll, Thomas A; Queijo, M J
1948-01-01
Contains derivations, based on a simplified theory, of approximate relations for low-speed stability derivatives of swept wings. The method accounts for the effects of sweep and, in most cases, taper ratio. Charts, based on the derived relations, are presented for the stability derivatives of untapered swept wings. Calculated values of the derivatives are compared with experimental results.
The analysis of Stability reliability of Qian Tang River seawall
NASA Astrophysics Data System (ADS)
Wu, Xue-Xiong
2017-11-01
Because the body of the Qiantang River seawall is soaked at high water levels and its foreshore is lowered by scour, the seawall is prone to overall slope instability during low tide. Considering the random variation of the beach scour in front of the seawall, and using the simplified Bishop method combined with the variability of the soil mechanics parameters, the overall stability of the Xiasha segments of the Qiantang River seawall is calculated and analyzed.
How accurately can the peak skin dose in fluoroscopy be determined using indirect dose metrics?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, A. Kyle, E-mail: kyle.jones@mdanderson.org; Ensor, Joe E.; Pasciak, Alexander S.
Purpose: Skin dosimetry is important for fluoroscopically-guided interventions, as peak skin doses (PSD) that result in skin reactions can be reached during these procedures. There is no consensus as to whether or not indirect skin dosimetry is sufficiently accurate for fluoroscopically-guided interventions. However, measuring PSD with film is difficult and the decision to do so must be made a priori. The purpose of this study was to assess the accuracy of different types of indirect dose estimates and to determine if PSD can be calculated within ±50% using indirect dose metrics for embolization procedures. Methods: PSD were measured directly using radiochromic film for 41 consecutive embolization procedures at two sites. Indirect dose metrics from the procedures were collected, including reference air kerma. Four different estimates of PSD were calculated from the indirect dose metrics and compared along with reference air kerma to the measured PSD for each case. The four indirect estimates included a standard calculation method, the use of detailed information from the radiation dose structured report, and two simplified calculation methods based on the standard method. Indirect dosimetry results were compared with direct measurements, including an analysis of uncertainty associated with film dosimetry. Factors affecting the accuracy of the different indirect estimates were examined. Results: When using the standard calculation method, calculated PSD were within ±35% for all 41 procedures studied. Calculated PSD were within ±50% for a simplified method using a single source-to-patient distance for all calculations. Reference air kerma was within ±50% for all but one procedure. Cases for which reference air kerma or calculated PSD exhibited large (±35%) differences from the measured PSD were analyzed, and two main causative factors were identified: unusually small or large source-to-patient distances and large contributions to reference air kerma from cone beam computed tomography or acquisition runs acquired at large primary gantry angles. When calculated uncertainty limits [−12.8%, 10%] were applied to directly measured PSD, most indirect PSD estimates remained within ±50% of the measured PSD. Conclusions: Using indirect dose metrics, PSD can be determined within ±35% for embolization procedures. Reference air kerma can be used without modification to set notification limits and substantial radiation dose levels, provided the displayed reference air kerma is accurate. These results can reasonably be extended to similar procedures, including vascular and interventional oncology. Considering these results, film dosimetry is likely an unnecessary effort for these types of procedures when indirect dose metrics are available.
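For orientation, the kind of "standard" indirect estimate referred to above scales the reference air kerma to the skin plane by the inverse-square law and applies correction factors, as sketched below; the factor values and geometry are illustrative assumptions, not those used in the study.

```python
def estimate_psd(ref_air_kerma_gy,
                 source_to_reference_cm,     # distance to the air-kerma reference point
                 source_to_patient_cm,       # actual source-to-skin distance
                 backscatter=1.3,            # typical backscatter factor (assumed)
                 f_factor=1.06,              # air-kerma-to-tissue-dose conversion (assumed)
                 table_transmission=0.8):    # table/pad attenuation (assumed)
    """Schematic indirect peak-skin-dose estimate from reference air kerma."""
    inverse_square = (source_to_reference_cm / source_to_patient_cm) ** 2
    return (ref_air_kerma_gy * inverse_square
            * backscatter * f_factor * table_transmission)

# Example: 3.0 Gy reference air kerma, reference point at 60 cm, skin plane at 70 cm.
print(round(estimate_psd(3.0, 60.0, 70.0), 2), "Gy")
```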
On simplified application of multidimensional Savitzky-Golay filters and differentiators
NASA Astrophysics Data System (ADS)
Shekhar, Chandra
2016-02-01
I propose a simplified approach for multidimensional Savitzky-Golay filtering, to enable its fast and easy implementation in scientific and engineering applications. The proposed method, which is derived from a generalized framework laid out by Thornley (D. J. Thornley, "Novel anisotropic multidimensional convolution filters for derivative estimation and reconstruction" in Proceedings of International Conference on Signal Processing and Communications, November 2007), first transforms any given multidimensional problem into a unique one by transforming coordinates of the sampled data nodes to unity-spaced, uniform data nodes, and then performs filtering and calculates partial derivatives on the unity-spaced nodes. This is followed by transporting the calculated derivatives back onto the original data nodes by using the chain rule of differentiation. The burden of performing the most cumbersome task, which is to carry out the filtering and to obtain derivatives on the unity-spaced nodes, is almost eliminated by providing convolution coefficients for a number of convolution kernel sizes and polynomial orders, up to four spatial dimensions. With the availability of the convolution coefficients, the task of filtering at a data node reduces merely to multiplication of two known matrices. Simplified strategies to adequately address near-boundary data nodes and to calculate partial derivatives there are also proposed. Finally, the proposed methodologies are applied to a three-dimensional experimentally obtained data set, which shows that multidimensional Savitzky-Golay filters and differentiators perform well in both the internal and the near-boundary regions of the domain.
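For a one-dimensional flavor of the unity-spaced-node idea (the full method is multidimensional and supplies precomputed convolution coefficients), the sketch below treats the sample index as the working coordinate, filters and differentiates there with a standard Savitzky-Golay routine, and maps the derivative back with the chain rule. It is an illustration, not the author's implementation.

```python
import numpy as np
from scipy.signal import savgol_filter

def sg_filter_nonuniform_1d(x, y, window=7, polyorder=3):
    """Filter/differentiate on unity-spaced index nodes, then map back to x."""
    y_smooth = savgol_filter(y, window, polyorder)         # smoothing on index coordinate u
    dy_du = savgol_filter(y, window, polyorder, deriv=1)   # derivative w.r.t. index u
    dx_du = np.gradient(x)                                  # local node spacing dx/du
    dy_dx = dy_du / dx_du                                   # chain rule back to physical x
    return y_smooth, dy_dx

# usage with synthetic, mildly non-uniform nodes
x = np.sort(np.random.uniform(0.0, 2.0 * np.pi, 200))
y = np.sin(x) + 0.05 * np.random.randn(x.size)
y_s, dydx = sg_filter_nonuniform_1d(x, y)
```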
NASA Technical Reports Server (NTRS)
Ungar, Eugene K.; Richards, W. Lance
2015-01-01
The aircraft-based Stratospheric Observatory for Infrared Astronomy (SOFIA) is a platform for multiple infrared astronomical observation experiments. These experiments carry sensors cooled to liquid helium temperatures. The liquid helium supply is contained in large (i.e., 10 liters or more) vacuum-insulated dewars. Should the dewar vacuum insulation fail, the inrushing air will condense and freeze on the dewar wall, resulting in a large heat flux on the dewar's contents. The heat flux results in a rise in pressure and the actuation of the dewar pressure relief system. A previous NASA Engineering and Safety Center (NESC) assessment provided recommendations for the wall heat flux that would be expected from a loss of vacuum and detailed an appropriate method to use in calculating the maximum pressure that would occur in a loss of vacuum event. This method involved building a detailed supercritical helium compressible flow thermal/fluid model of the vent stack and exercising the model over the appropriate range of parameters. The experimenters designing science instruments for SOFIA are not experts in compressible supercritical flows and do not generally have access to the thermal/fluid modeling packages that are required to build detailed models of the vent stacks. Therefore, the SOFIA Program engaged the NESC to develop a simplified methodology to estimate the maximum pressure in a liquid helium dewar after the loss of vacuum insulation. The method would allow the university-based science instrument development teams to conservatively determine the cryostat's vent neck sizing during preliminary design of new SOFIA Science Instruments. This report details the development of the simplified method, the method itself, and the limits of its applicability. The simplified methodology provides an estimate of the dewar pressure after a loss of vacuum insulation that can be used for the initial design of the liquid helium dewar vent stacks. However, since it is not an exact tool, final verification of the dewar pressure vessel design requires a complete, detailed real fluid compressible flow model of the vent stack. The wall heat flux resulting from a loss of vacuum insulation increases the dewar pressure, which actuates the pressure relief mechanism and results in high-speed flow through the dewar vent stack. At high pressures, the flow can be choked at the vent stack inlet, at the exit, or at an intermediate transition or restriction. During previous SOFIA analyses, it was observed that there was generally a readily identifiable section of the vent stack that would limit the flow – e.g., a small diameter entrance or an orifice. It was also found that when the supercritical helium was approximated as an ideal gas at the dewar condition, the calculated mass flow rate based on choking at the limiting entrance or transition was less than the mass flow rate calculated using the detailed real fluid model. Using this lower mass flow rate would yield a conservative prediction of the dewar's wall heat flux capability. The simplified method of the current work was developed by building on this observation.
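The conservative ideal-gas estimate mentioned above reduces, at the limiting restriction, to the standard choked-flow relation. A sketch follows; the helium gas properties are standard values, while the flow area, discharge coefficient, and dewar conditions in the example are placeholders.

```python
import math

def choked_mass_flow(p0_Pa, T0_K, area_m2, gamma=5.0/3.0, R=2077.0, Cd=1.0):
    """Ideal-gas choked (sonic) mass flow through the limiting restriction.

    p0_Pa, T0_K : stagnation pressure and temperature at the dewar condition
    area_m2     : flow area of the limiting entrance or transition
    gamma, R    : ratio of specific heats and specific gas constant (helium)
    Cd          : discharge coefficient (1.0 here; a real vent needs a value)
    """
    term = (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    return Cd * area_m2 * p0_Pa * math.sqrt(gamma / (R * T0_K)) * term

# e.g. a 10 mm diameter orifice at 2 bar and 10 K:
# choked_mass_flow(2.0e5, 10.0, math.pi * 0.005**2)
```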
A new method to identify the foot of continental slope based on an integrated profile analysis
NASA Astrophysics Data System (ADS)
Wu, Ziyin; Li, Jiabiao; Li, Shoujun; Shang, Jihong; Jin, Xiaobin
2017-06-01
A new method is proposed to identify automatically the foot of the continental slope (FOS) based on the integrated analysis of topographic profiles. Based on the extremum points of the second derivative and the Douglas-Peucker algorithm, it simplifies the topographic profiles, then calculates the second derivative of the original profiles and the D-P profiles. Seven steps are proposed to simplify the original profiles. Meanwhile, multiple identification methods are proposed to determine the FOS points, including the gradient, water depth and second derivative values of data points, as well as the concavity and convexity, continuity and segmentation of the topographic profiles. This method can comprehensively and intelligently analyze the topographic profiles and their derived slopes, second derivatives and D-P profiles, based on which it is capable of analyzing the essential properties of every single data point in the profile. Furthermore, it is proposed to remove the concave points of the curve and, in addition, to implement six FOS judgment criteria.
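Since the method leans on Douglas-Peucker simplification of the depth profiles, a minimal recursive sketch of that step is shown below. The tolerance and point format are illustrative; the paper's seven-step simplification and FOS judgment criteria are not reproduced.

```python
import numpy as np

def douglas_peucker(points, tol):
    """Recursive Douglas-Peucker simplification of a depth profile.

    points: (N, 2) array of (distance, depth) samples along the profile
    tol:    maximum allowed perpendicular deviation from the simplified line
    Returns the retained points (the two endpoints are always kept).
    """
    points = np.asarray(points, dtype=float)
    start, end = points[0], points[-1]
    chord = end - start
    chord_len = np.hypot(chord[0], chord[1])
    if chord_len == 0.0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # perpendicular distance of every point to the start-end chord
        dists = np.abs(chord[0] * (points[:, 1] - start[1])
                       - chord[1] * (points[:, 0] - start[0])) / chord_len
    idx = int(np.argmax(dists))
    if dists[idx] > tol:
        left = douglas_peucker(points[:idx + 1], tol)
        right = douglas_peucker(points[idx:], tol)
        return np.vstack([left[:-1], right])      # drop the duplicated split point
    return np.vstack([start, end])
```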
A simplified analysis of propulsion installation losses for computerized aircraft design
NASA Technical Reports Server (NTRS)
Morris, S. J., Jr.; Nelms, W. P., Jr.; Bailey, R. O.
1976-01-01
A simplified method is presented for computing the installation losses of aircraft gas turbine propulsion systems. The method has been programmed for use in computer aided conceptual aircraft design studies that cover a broad range of Mach numbers and altitudes. The items computed are: inlet size, pressure recovery, additive drag, subsonic spillage drag, bleed and bypass drags, auxiliary air systems drag, boundary-layer diverter drag, nozzle boattail drag, and the interference drag on the region adjacent to multiple nozzle installations. The methods for computing each of these installation effects are described and computer codes for the calculation of these effects are furnished. The results of these methods are compared with selected data for the F-5A and other aircraft. The computer program can be used with uninstalled engine performance information which is currently supplied by a cycle analysis program. The program, including comments, is about 600 FORTRAN statements long, and uses both theoretical and empirical techniques.
A simplified method for correcting contaminant concentrations in eggs for moisture loss.
Heinz, Gary H.; Stebbins, Katherine R.; Klimstra, Jon D.; Hoffman, David J.
2009-01-01
We developed a simplified and highly accurate method for correcting contaminant concentrations in eggs for the moisture that is lost from an egg during incubation. To make the correction, one injects water into the air cell of the egg until overflowing. The amount of water injected corrects almost perfectly for the amount of water lost during incubation or when an egg is left in the nest and dehydrates and deteriorates over time. To validate the new method we weighed freshly laid chicken (Gallus gallus) eggs and then incubated sets of fertile and dead eggs for either 12 or 19 d. We then injected water into the air cells of these eggs and verified that the weights after water injection were almost identical to the weights of the eggs when they were fresh. The advantages of the new method are its speed, accuracy, and simplicity: It does not require the calculation of a correction factor that has to be applied to each contaminant residue.
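One plausible way to turn the weights described above into a moisture-corrected concentration is sketched below. The variable names and the assumption that the post-injection weight approximates the fresh egg weight follow the abstract, but the exact bookkeeping is illustrative rather than the authors' published procedure.

```python
def moisture_corrected_concentration(residue_ug, content_wet_wt_g,
                                     egg_wt_as_found_g, egg_wt_after_injection_g):
    """Correct a contaminant concentration in an egg for incubation moisture loss.

    residue_ug               : contaminant mass measured in the egg contents
    content_wet_wt_g         : wet weight of the contents as analyzed
    egg_wt_as_found_g        : whole-egg weight when collected
    egg_wt_after_injection_g : whole-egg weight after filling the air cell with
                               water (approximates the fresh egg weight)
    Returns (uncorrected, corrected) concentrations in ug/g wet weight.
    """
    uncorrected = residue_ug / content_wet_wt_g
    water_lost_g = egg_wt_after_injection_g - egg_wt_as_found_g   # water injected
    fresh_equivalent_wt_g = content_wet_wt_g + water_lost_g
    corrected = residue_ug / fresh_equivalent_wt_g
    return uncorrected, corrected
```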
Simplified Dynamic Analysis of Grinders Spindle Node
NASA Astrophysics Data System (ADS)
Demec, Peter
2014-12-01
The contribution deals with the simplified dynamic analysis of a surface grinding machine spindle node. The dynamic analysis is based on the transfer matrix method, which is essentially a matrix form of the method of initial parameters. The advantage of the described method, despite the seemingly complex mathematical apparatus, is primarily that solving the problem does not require costly commercial finite element software. All calculations can be made, for example, in MS Excel, which is advantageous especially in the initial stages of designing the spindle node, for a rapid assessment of the suitability of its design. After the entire structure of the spindle node is detailed, it is then also necessary to perform a refined dynamic analysis in an FEM environment, which requires the necessary skills and experience and is therefore economically demanding. This work was developed within grant project KEGA No. 023TUKE-4/2012 Creation of a comprehensive educational - teaching material for the article Production technique using a combination of traditional and modern information technology and e-learning.
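To make the transfer matrix idea concrete, the sketch below chains a field matrix for a massless elastic segment with a point matrix for a lumped tip mass (a crude stand-in for a grinding wheel) and scans for frequencies where the fixed-free boundary determinant vanishes. The stiffness, length, mass values and the sign convention are illustrative assumptions; a real spindle node would chain several segments, discs, and bearing supports.

```python
import numpy as np

EI = 2.0e5      # bending stiffness of the shaft segment, N*m^2 (illustrative)
L_SEG = 0.3     # segment length, m
M_TIP = 15.0    # lumped mass at the free end, kg

def field_matrix(l, ei):
    """Massless elastic beam segment, state vector [w, phi, M, Q]."""
    return np.array([
        [1.0, l,   l**2 / (2*ei), l**3 / (6*ei)],
        [0.0, 1.0, l / ei,        l**2 / (2*ei)],
        [0.0, 0.0, 1.0,           l],
        [0.0, 0.0, 0.0,           1.0],
    ])

def point_mass_matrix(m, omega):
    """Lumped mass: shear jumps by m*omega^2*w (one common sign convention)."""
    p = np.eye(4)
    p[3, 0] = m * omega**2
    return p

def cantilever_residual(omega):
    """Boundary determinant for a fixed-free shaft with a tip mass."""
    U = point_mass_matrix(M_TIP, omega) @ field_matrix(L_SEG, EI)
    # left end clamped: w = phi = 0; right end free: M = Q = 0
    return np.linalg.det(U[2:, 2:])

# crude frequency sweep: report sign changes of the residual
omegas = np.linspace(1.0, 5000.0, 20000)
res = np.array([cantilever_residual(w) for w in omegas])
idx = np.where(np.sign(res[:-1]) != np.sign(res[1:]))[0]
print("natural frequencies [Hz]:", omegas[idx] / (2.0 * np.pi))
# analytical check for this single-segment case: sqrt(3*EI/(M_TIP*L_SEG**3))/(2*pi)
```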
Applications of Laplace transform methods to airfoil motion and stability calculations
NASA Technical Reports Server (NTRS)
Edwards, J. W.
1979-01-01
This paper reviews the development of generalized unsteady aerodynamic theory and presents a derivation of the generalized Possio integral equation. Numerical calculations resolve questions concerning subsonic indicial lift functions and demonstrate the generation of Kutta waves at high values of reduced frequency, subsonic Mach number, or both. The use of rational function approximations of unsteady aerodynamic loads in aeroelastic stability calculations is reviewed, and a reformulation of the matrix Pade approximation technique is given. Numerical examples of flutter boundary calculations for a wing which is to be flight tested are given. Finally, a simplified aerodynamic model of transonic flow is used to study the stability of an airfoil exposed to supersonic and subsonic flow regions.
Sorbe, A; Chazel, M; Gay, E; Haenni, M; Madec, J-Y; Hendrikx, P
2011-06-01
Developing and calculating performance indicators allows the operation of an epidemiological surveillance network to be followed continuously. This is an internal evaluation method, implemented by the coordinators in collaboration with all the actors of the network. Its purpose is to detect weak points in order to optimize management. A method for the development of performance indicators of epidemiological surveillance networks was developed in 2004 and was applied to several networks. Its implementation requires a thorough description of the network environment and all its activities to define priority indicators. Since this method is considered to be complex, our objective was to develop a simplified approach and apply it to an epidemiological surveillance network. We applied the initial method to a theoretical network model to obtain a list of generic indicators that can be adapted to any surveillance network. We obtained a list of 25 generic performance indicators, intended to be reformulated and described according to the specificities of each network. It was used to develop performance indicators for RESAPATH, an epidemiological surveillance network of antimicrobial resistance in pathogenic bacteria of animal origin in France. This application allowed us to validate the simplified method, its value in terms of practical implementation, and its level of user acceptance. Its ease of use and speed of application compared to the initial method argue in favor of its use on a broader scale. Copyright © 2011 Elsevier Masson SAS. All rights reserved.
Estimating surface temperature in forced convection nucleate boiling - A simplified method
NASA Technical Reports Server (NTRS)
Hendricks, R. C.; Papell, S. S.
1977-01-01
A simplified expression to estimate surface temperatures in forced convection boiling was developed using a liquid nitrogen data base. Using the principle of corresponding states and the Kutateladze relation for maximum pool boiling heat flux, the expression was normalized for use with other fluids. The expression was also applied to neon and water. For the neon data base, the agreement was acceptable with the exclusion of one set suspected to be in the transition boiling regime. For the water data base at reduced pressures greater than 0.05, the agreement is generally good. At lower reduced pressures, the water data scatter and the calculated temperature becomes a function of flow rate.
Readout signals calculated for near-field optical pickups with land and groove recording.
Saito, K; Kishima, K; Ichimura, I
2000-08-10
Optical disk readout signals with a solid immersion lens (SIL) and the land-groove recording technique are calculated by use of a simplified vector-diffraction theory. In this method the full vector-diffraction theory is applied to calculate the diffracted light from the initial state of the disk, and the light scattered from the recorded marks is regarded as a perturbation. Using this method, we confirmed that the land-groove recording technique is effective as a means of cross-talk reduction even when the numerical aperture is more than 1. However, the top surface of the disk under the SIL must be flat, or the readout signal from marks recorded on a groove decays when the optical depth of the groove is greater than lambda/8.
76 FR 33994 - Alternative Simplified Credit Under Section 41(c)(5)
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-10
... Alternative Simplified Credit Under Section 41(c)(5) AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... regulations relating to the election and calculation of the alternative simplified credit under section 41(c... 41(c)(5). The ASC was added by the Tax Relief and Health Care Act of 2006 (Public Law 109-432, 120...
Sanders, Sharon; Flaws, Dylan; Than, Martin; Pickering, John W; Doust, Jenny; Glasziou, Paul
2016-01-01
Scoring systems are developed to assist clinicians in making a diagnosis. However, their uptake is often limited because they are cumbersome to use, requiring information on many predictors, or complicated calculations. We examined whether, and how, simplifications affected the performance of a validated score for identifying adults with chest pain in an emergency department who have low risk of major adverse cardiac events. We simplified the Emergency Department Assessment of Chest pain Score (EDACS) by three methods: (1) giving equal weight to each predictor included in the score, (2) reducing the number of predictors, and (3) using both methods--giving equal weight to a reduced number of predictors. The diagnostic accuracy of the simplified scores was compared with the original score in the derivation (n = 1,974) and validation (n = 909) data sets. There was no difference in the overall accuracy of the simplified versions of the score compared with the original EDACS as measured by the area under the receiver operating characteristic curve (0.74 to 0.75 for simplified versions vs. 0.75 for the original score in the validation cohort). With score cut-offs set to maintain the sensitivity of the combination of score and tests (electrocardiogram and cardiac troponin) at a level acceptable to clinicians (99%), simplification reduced the proportion of patients classified as low risk from 50% with the original score to between 22% and 42%. Simplification of a clinical score resulted in similar overall accuracy but reduced the proportion classified as low risk and therefore eligible for early discharge compared with the original score. Whether the trade-off is acceptable, will depend on the context in which the score is to be used. Developers of clinical scores should consider simplification as a method to increase uptake, but further studies are needed to determine the best methods of deriving and evaluating simplified scores. Copyright © 2016 Elsevier Inc. All rights reserved.
Practical modeling approaches for geological storage of carbon dioxide.
Celia, Michael A; Nordbotten, Jan M
2009-01-01
The relentless increase of anthropogenic carbon dioxide emissions and the associated concerns about climate change have motivated new ideas about carbon-constrained energy production. One technological approach to control carbon dioxide emissions is carbon capture and storage, or CCS. The underlying idea of CCS is to capture the carbon before it is emitted to the atmosphere and store it somewhere other than the atmosphere. Currently, the most attractive option for large-scale storage is in deep geological formations, including deep saline aquifers. Many physical and chemical processes can affect the fate of the injected CO2, with the overall mathematical description of the complete system becoming very complex. Our approach to the problem has been to reduce complexity as much as possible, so that we can focus on the few truly important questions about the injected CO2, most of which involve leakage out of the injection formation. Toward this end, we have established a set of simplifying assumptions that allow us to derive simplified models, which can be solved numerically or, for the most simplified cases, analytically. These simplified models allow calculation of solutions to large-scale injection and leakage problems in ways that traditional multicomponent multiphase simulators cannot. Such simplified models provide important tools for system analysis, screening calculations, and overall risk-assessment calculations. We believe this is a practical and important approach to model geological storage of carbon dioxide. It also serves as an example of how complex systems can be simplified while retaining the essential physics of the problem.
75 FR 48743 - Mandatory Reporting of Greenhouse Gases
Federal Register 2010, 2011, 2012, 2013, 2014
2010-08-11
...EPA is proposing to amend specific provisions in the GHG reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These proposed changes include providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources in a facility, amending data reporting requirements to provide additional clarity on when different types of GHG emissions need to be calculated and reported, clarifying terms and definitions in certain equations, and technical corrections.
75 FR 79091 - Mandatory Reporting of Greenhouse Gases
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-17
...EPA is amending specific provisions in the greenhouse gas reporting rule to clarify certain provisions, to correct technical and editorial errors, and to address certain questions and issues that have arisen since promulgation. These final changes include generally providing additional information and clarity on existing requirements, allowing greater flexibility or simplified calculation methods for certain sources, amending data reporting requirements to provide additional clarity on when different types of greenhouse gas emissions need to be calculated and reported, clarifying terms and definitions in certain equations and other technical corrections and amendments.
77 FR 54482 - Allocation of Costs Under the Simplified Methods
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-05
... Allocation of Costs Under the Simplified Methods AGENCY: Internal Revenue Service (IRS), Treasury. ACTION... certain costs to the property and that allocate costs under the simplified production method or the simplified resale method. The proposed regulations provide rules for the treatment of negative additional...
Method for measuring anterior chamber volume by image analysis
NASA Astrophysics Data System (ADS)
Zhai, Gaoshou; Zhang, Junhong; Wang, Ruichang; Wang, Bingsong; Wang, Ningli
2007-12-01
Anterior chamber volume (ACV) is very important for an oculist in making a rational pathological diagnosis for patients who have optic diseases such as glaucoma, yet it is always difficult to measure accurately. In this paper, a method is devised to measure anterior chamber volumes based on JPEG-formatted image files that have been transformed from medical images using the anterior-chamber optical coherence tomographer (AC-OCT) and corresponding image-processing software. The corresponding algorithms for image analysis and ACV calculation are implemented in VC++, and a series of anterior chamber images of typical patients are analyzed; the calculated anterior chamber volumes are verified to be in accord with clinical observation. This shows that the measurement method is effective and feasible and has the potential to improve the accuracy of ACV calculation. Meanwhile, some measures should be taken to simplify the manual preprocessing of the images.
Calculation of turbulence-driven secondary motion in ducts with arbitrary cross section
NASA Technical Reports Server (NTRS)
Demuren, A. O.
1989-01-01
Calculation methods for turbulent duct flows are generalized for ducts with arbitrary cross-sections. The irregular physical geometry is transformed into a regular one in computational space, and the flow equations are solved with a finite-volume numerical procedure. The turbulent stresses are calculated with an algebraic stress model derived by simplifying model transport equations for the individual Reynolds stresses. Two variants of such a model are considered. These procedures enable the prediction of both the turbulence-driven secondary flow and the anisotropy of the Reynolds stresses, in contrast to some of the earlier calculation methods. Model predictions are compared to experimental data for developed flow in a triangular duct, a trapezoidal duct and a rod-bundle geometry. The correct trends are predicted, and the quantitative agreement is mostly fair. The simpler variant of the algebraic stress model produced better agreement with the measured data.
Geometrical optics approach in liquid crystal films with three-dimensional director variations.
Panasyuk, G; Kelly, J; Gartland, E C; Allender, D W
2003-04-01
A formal geometrical optics approach (GOA) to the optics of nematic liquid crystals whose optic axis (director) varies in more than one dimension is described. The GOA is applied to the propagation of light through liquid crystal films whose director varies in three spatial dimensions. As an example, the GOA is applied to the calculation of light transmittance for the case of a liquid crystal cell which exhibits the homeotropic to multidomainlike transition (HMD cell). Properties of the GOA solution are explored, and comparison with the Jones calculus solution is also made. For variations on a smaller scale, where the Jones calculus breaks down, the GOA provides a fast, accurate method for calculating light transmittance. The results of light transmittance calculations for the HMD cell based on the director patterns provided by two methods, direct computer calculation and a previously developed simplified model, are in good agreement.
Scivetti, Iván; Persson, Mats
2017-09-06
We present calculations of vertical electron and hole attachment energies to the frontier orbitals of a pentacene molecule absorbed on multi-layer sodium chloride films supported by a copper substrate using a simplified density functional theory (DFT) method. The adsorbate and the film are treated fully within DFT, whereas the metal is treated implicitly by a perfect conductor model. We find that the computed energy gap between the highest and lowest unoccupied molecular orbitals-HOMO and LUMO -from the vertical attachment energies increases with the thickness of the insulating film, in agreement with experiments. This increase of the gap can be rationalised in a simple dielectric model with parameters determined from DFT calculations and is found to be dominated by the image interaction with the metal. We find, however, that this simplified model overestimates the downward shift of the energy gap in the limit of an infinitely thick film.
Theory and computation of optimal low- and medium-thrust transfers
NASA Technical Reports Server (NTRS)
Chuang, C.-H.
1994-01-01
This report describes the current state of development of methods for calculating optimal orbital transfers with large numbers of burns. Reported on first is the homotopy-motivated, so-called direction correction method. So far this method has been partially tested with one solver; the final step has yet to be implemented. Second is the patched transfer method. This method is rooted in some simplifying approximations made to the original optimal control problem. The transfer is broken up into single-burn segments; each single burn is solved as a predictor step, and the whole problem is then solved with a corrector step.
NASA Astrophysics Data System (ADS)
You, Xu; Zhi-jian, Zong; Qun, Gao
2018-07-01
This paper describes a methodology for the position uncertainty distribution of an articulated arm coordinate measuring machine (AACMM). First, a model of the structural parameter uncertainties was established by a statistical method. Second, the position uncertainty space volume of the AACMM in a certain configuration was expressed using a simplified definite integration method based on the structural parameter uncertainties; it was then used to evaluate the position accuracy of the AACMM in a certain configuration. Third, the configurations of a certain working point were calculated by an inverse solution, and the position uncertainty distribution of a certain working point was determined; working point uncertainty can be evaluated by the weighting method. Lastly, the position uncertainty distribution in the workspace of the AACMM was described by a map. A single-point contrast test of a 6-joint AACMM was carried out to verify the effectiveness of the proposed method, and it was shown that the method can describe the position uncertainty of the AACMM and that it can be used to guide the calibration of the AACMM and the choice of the AACMM's accuracy area.
An algorithm of Saxena-Easo on fuzzy time series forecasting
NASA Astrophysics Data System (ADS)
Ramadhani, L. C.; Anggraeni, D.; Kamsyakawuni, A.; Hadi, A. F.
2018-04-01
This paper presents a Saxena-Easo fuzzy time series forecast model to study the prediction of the Indonesian inflation rate in 1970-2016. We use MATLAB software to compute this method. The Saxena-Easo fuzzy time series algorithm does not need stationarity, unlike conventional forecasting methods; it is capable of dealing with linguistic time series values and has the advantage of reducing and simplifying the calculation process. Generally it focuses on percentage change as the universe of discourse, interval partitioning and defuzzification. The results indicate that the actual data and the forecast data are close, with a Root Mean Square Error (RMSE) of 1.5289.
Hypotheses of calculation of the water flow rate evaporated in a wet cooling tower
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bourillot, C.
1983-08-01
The method developed by Poppe at the University of Hannover to calculate the thermal performance of a wet cooling tower fill is presented. The formulation of Poppe is then validated using full-scale test data from a wet cooling tower at the power station at Neurath, Federal Republic of Germany. It is shown that the Poppe method predicts the evaporated water flow rate almost perfectly and the condensate content of the warm air with good accuracy over a wide range of ambient conditions. The simplifying assumptions of the Merkel theory are discussed, and the errors linked to these assumptions are systematically described, then illustrated with the test data.
Simplified method for the calculation of irregular waves in the coastal zone
NASA Astrophysics Data System (ADS)
Leont'ev, I. O.
2011-04-01
A method for estimating the wave parameters along a given bottom profile is suggested. It takes into account the principal processes influencing waves in the coastal zone: transformation, refraction, bottom friction, and breaking. A constant mean value of the friction coefficient can be used under sandy-shore conditions. Wave breaking is interpreted through the concept of the limiting wave height at a given depth. The mean and root-mean-square wave heights are determined by the height distribution function, which transforms under the effect of breaking. Verification of the method against field data shows that the calculation results reproduce the observed variations of the wave heights over a wide range of conditions, including profiles with underwater bars. The deviations from the calculated values mostly do not exceed 25%, and the mean square error is 11%. The method does not require preliminary calibration and can be implemented as a relatively simple calculator accessible even to an inexperienced user.
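As an illustration of the shoaling and depth-limited breaking ingredients mentioned above (refraction, friction, and the height-distribution transformation used in the published method are omitted), a sketch follows; the breaker index and the example wave conditions are conventional placeholder values.

```python
import numpy as np

G = 9.81

def wavenumber(T, h, tol=1e-10, max_iter=50):
    """Solve the linear dispersion relation omega^2 = g*k*tanh(k*h) with Newton's method."""
    omega = 2.0 * np.pi / T
    k = omega**2 / G                          # deep-water first guess
    for _ in range(max_iter):
        th = np.tanh(k * h)
        f = omega**2 - G * k * th
        df = -G * (th + k * h * (1.0 - th**2))
        dk = f / df
        k -= dk
        if abs(dk) < tol:
            break
    return k

def wave_height_profile(H0, T, depths, gamma_br=0.78):
    """Wave height along a shore-normal depth profile: shoaling plus a breaking cap H <= gamma_br*h."""
    cg0 = G * T / (4.0 * np.pi)               # deep-water group velocity
    heights = []
    for h in depths:
        k = wavenumber(T, h)
        c = 2.0 * np.pi / (T * k)
        n = 0.5 * (1.0 + 2.0 * k * h / np.sinh(2.0 * k * h))
        H = H0 * np.sqrt(cg0 / (n * c))        # shoaling
        heights.append(min(H, gamma_br * h))   # depth-limited breaking
    return np.array(heights)

# e.g. wave_height_profile(H0=1.5, T=7.0, depths=np.linspace(15.0, 0.5, 50))
```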
NECAP 4.1: NASA's energy-cost analysis program user's manual
NASA Technical Reports Server (NTRS)
Jensen, R. N.; Henninger, R. H.; Miner, D. L.
1983-01-01
The Energy Cost Analysis Program (NECAP) is a powerful computerized method to determine and to minimize building energy consumption. The program calculates hourly heat gains or losses, taking into account the building thermal resistance and mass, using hourly weather data and a "response factor" method. Internal temperatures are allowed to vary in accordance with thermostat settings and equipment capacity. A simplified input procedure and numerous other technical improvements are presented. This Users Manual describes the program and provides examples.
Dynamic stall: An example of strong interaction between viscous and inviscid flows
NASA Technical Reports Server (NTRS)
Philippe, J. J.
1978-01-01
A study was made of the phenomena concerning airfoil profiles in dynamic stall configurations, especially those related to pitch oscillations. The most characteristic experimental results on flow separations with a vortex character, and their repercussions on local pressures and total forces, were analyzed. Some aspects of the methods for predicting flows with or without boundary layer separation are examined, as well as the main simplified methods available to date for the calculation of total forces in such configurations.
Selection theory of free dendritic growth in a potential flow.
von Kurnatowski, Martin; Grillenbeck, Thomas; Kassner, Klaus
2013-04-01
The Kruskal-Segur approach to selection theory in diffusion-limited or Laplacian growth is extended via combination with the Zauderer decomposition scheme. This way nonlinear bulk equations become tractable. To demonstrate the method, we apply it to two-dimensional crystal growth in a potential flow. We omit the simplifying approximations used in a preliminary calculation for the same system [Fischaleck, Kassner, Europhys. Lett. 81, 54004 (2008)], thus exhibiting the capability of the method to extend mathematical rigor to more complex problems than hitherto accessible.
An approach for delineating drinking water wellhead protection areas at the Nile Delta, Egypt.
Fadlelmawla, Amr A; Dawoud, Mohamed A
2006-04-01
In Egypt, production has a high priority. To this end, protecting the quality of the groundwater, specifically when used for drinking water, and delineating protection areas around the drinking water wellheads for strict land-use restrictions is essential. The delineation methods are numerous; nonetheless, the uniqueness of the hydrogeological, institutional as well as social conditions in the Nile Delta region dictates a customized approach. The analysis of the hydrological conditions and land ownership at the Nile Delta indicates the need for an accurate methodology. On the other hand, attempting to calculate the wellhead protection areas around each of the drinking wells (more than 1500) requires data, human resources, and time that exceed the capabilities of the groundwater management agency. Accordingly, a combination of two methods (simplified variable shapes and numerical modeling) was adopted. Sensitivity analyses carried out using hypothetical modeling conditions have identified the pumping rate, clay thickness, hydraulic gradient, vertical conductivity of the clay, and the hydraulic conductivity as the most significant parameters in determining the dimensions of the wellhead protection areas (WHPAs). Tables of sets of WHPA dimensions were calculated using synthetic modeling conditions representing the most common ranges of the significant parameters. Specific WHPA dimensions can be calculated by interpolation, utilizing the produced tables along with the operational and hydrogeological conditions for the well under consideration. In order to simplify the interpolation of the appropriate dimensions of the WHPAs from the calculated tables, an interactive computer program was written. The program accepts the real-time data of the significant parameters as its input, and gives the appropriate WHPA dimensions as its output.
On equivalent parameter learning in simplified feature space based on Bayesian asymptotic analysis.
Yamazaki, Keisuke
2012-07-01
Parametric models for sequential data, such as hidden Markov models, stochastic context-free grammars, and linear dynamical systems, are widely used in time-series analysis and structural data analysis. Computation of the likelihood function is one of the primary considerations in many learning methods. Iterative calculation of the likelihood, as in model selection, is still time-consuming even though there are effective algorithms based on dynamic programming. The present paper studies parameter learning in a simplified feature space to reduce the computational cost. Simplifying data is a common technique in feature selection and dimension reduction, though an oversimplified space causes adverse learning results. Therefore, we mathematically investigate a condition on the feature map for it to have an asymptotically equivalent convergence point of the estimated parameters, referred to as the vicarious map. As a demonstration of finding vicarious maps, we consider a feature space that limits the length of the data and derive a necessary length for parameter learning in hidden Markov models. Copyright © 2012 Elsevier Ltd. All rights reserved.
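The dynamic-programming likelihood computation referred to above is, for hidden Markov models, the forward algorithm; a minimal scaled version is sketched below as background (it is not the paper's contribution, which concerns learning in a simplified feature space).

```python
import numpy as np

def hmm_log_likelihood(obs, pi, A, B):
    """Forward algorithm (with scaling) for a discrete hidden Markov model.

    obs : sequence of observation symbol indices
    pi  : (K,) initial state distribution
    A   : (K, K) transition matrix, A[i, j] = P(state j | state i)
    B   : (K, M) emission matrix,  B[i, o] = P(symbol o | state i)
    Returns log P(obs | model).
    """
    alpha = pi * B[:, obs[0]]
    scale = alpha.sum()
    loglik = np.log(scale)
    alpha /= scale
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # predict then weight by emission
        scale = alpha.sum()
        loglik += np.log(scale)
        alpha /= scale
    return loglik
```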
Calculation of Thermally-Induced Displacements in Spherically Domed Ion Engine Grids
NASA Technical Reports Server (NTRS)
Soulas, George C.
2006-01-01
An analytical method for predicting the thermally-induced normal and tangential displacements of spherically domed ion optics grids under an axisymmetric thermal loading is presented. A fixed edge support that could be thermally expanded is used for this analysis. Equations for the displacements both normal and tangential to the surface of the spherical shell are derived. A simplified equation for the displacement at the center of the spherical dome is also derived. The effects of plate perforation on displacements and stresses are determined by modeling the perforated plate as an equivalent solid plate with modified, or effective, material properties. Analytical model results are compared to the results from a finite element model. For the solid shell, comparisons showed that the analytical model produces results that closely match the finite element model results. The simplified equation for the normal displacement of the spherical dome center is also found to accurately predict this displacement. For the perforated shells, the analytical solution and simplified equation produce accurate results for materials with low thermal expansion coefficients.
NASA Astrophysics Data System (ADS)
Chen, Bai-Qiao; Guedes Soares, C.
2018-03-01
The present work investigates the compressive axial ultimate strength of fillet-welded steel-plated ship structures subjected to uniaxial compression, in which the residual stresses in the welded plates are calculated by a thermo-elasto-plastic finite element analysis that is used to fit an idealized model of residual stress distribution. The numerical results of ultimate strength based on the simplified model of residual stress show good agreement with those of various methods including the International Association of Classification Societies (IACS) Common Structural Rules (CSR), leading to the conclusion that the simplified model can be effectively used to represent the distribution of residual stresses in steel-plated structures in a wide range of engineering applications. It is concluded that the widths of the tension zones in the welded plates have a quasi-linear behavior with respect to the plate slenderness. The effect of residual stress on the axial strength of the stiffened plate is analyzed and discussed.
Simplified planar model of a car steering system with rack and pinion and McPherson suspension
NASA Astrophysics Data System (ADS)
Knapczyk, J.; Kucybała, P.
2016-09-01
The paper presents the analysis and optimization of a steering system with rack and pinion and McPherson suspension using a spatial model and an equivalent simplified planar model. The dimensions of the steering linkage that give the minimum steering error can be estimated using the planar model. The steering error is defined as the difference between the actual angle made by the outer front wheel during steering manoeuvers and the calculated angle for the same wheel based on the Ackerman principle. For a given linear rack displacement, specified angular displacements of the steering arms are determined while simultaneously ensuring the best transmission angle characteristics, (i) without and (ii) with a linear correlation imposed between input and output. Numerical examples are used to illustrate the proposed method.
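For reference, the Ackerman-based steering error defined above can be computed from the ideal outer-wheel angle, sketched below. The wheelbase and track values in the example are placeholders, not the paper's linkage dimensions.

```python
import math

def ackermann_outer_angle(delta_inner_deg, wheelbase_m, track_m):
    """Ideal outer-wheel steer angle from the Ackerman condition:
    cot(delta_outer) = cot(delta_inner) + track / wheelbase."""
    di = math.radians(delta_inner_deg)
    cot_outer = 1.0 / math.tan(di) + track_m / wheelbase_m
    return math.degrees(math.atan(1.0 / cot_outer))

def steering_error(delta_inner_deg, delta_outer_actual_deg, wheelbase_m=2.6, track_m=1.5):
    """Difference between the outer-wheel angle produced by the linkage and the
    Ackerman ideal value (the quantity the linkage dimensions are optimized to minimize)."""
    ideal = ackermann_outer_angle(delta_inner_deg, wheelbase_m, track_m)
    return delta_outer_actual_deg - ideal

# e.g. steering_error(20.0, 17.5) -> error in degrees for the assumed geometry
```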
A new theoretical basis for numerical simulations of nonlinear acoustic fields
NASA Astrophysics Data System (ADS)
Wójcik, Janusz
2000-07-01
Nonlinear acoustic equations can be considerably simplified. The presented model retains the accuracy of a more complex description of nonlinearity and a uniform description of near and far fields (in contrast to the KZK equation). A method has been presented for obtaining solutions of Kuznetsov's equation from the solutions of the model under consideration. Results of numerical calculations, including comparative ones, are presented.
ERIC Educational Resources Information Center
Energy Research and Development Administration, Washington, DC. Div. of Solar Energy.
This pamphlet offers a preview of information services available from Solcost, a research and development project. The first section explains that Solcost calculates system costs and performance for solar heated and cooled new and retrofit constructions, such as residential buildings and single-zone commercial buildings. For a typical analysis,…
Study of high-performance canonical molecular orbitals calculation for proteins
NASA Astrophysics Data System (ADS)
Hirano, Toshiyuki; Sato, Fumitoshi
2017-11-01
The canonical molecular orbital (CMO) calculation can help to understand chemical properties and reactions in proteins. However, it is difficult to perform the CMO calculation of proteins because of its self-consistent field (SCF) convergence problem and expensive computational cost. To reliably obtain the CMOs of proteins, we are engaged in the research and development of high-performance CMO applications and perform experimental studies. We have proposed the third-generation density-functional calculation method for the SCF, which is more advanced than the FILE and direct methods. Our method is based on Cholesky decomposition for the calculation of the two-electron integrals and the modified grid-free method for the pure-XC term evaluation. By using the third-generation density-functional calculation method, the Coulomb, the Fock-exchange, and the pure-XC terms can be given by a simple linear algebraic procedure in the SCF loop. Therefore, we can expect to obtain good parallel performance in solving the SCF problem by using a well-optimized linear algebra library such as BLAS on distributed-memory parallel computers. The third-generation density-functional calculation method is implemented in our program, ProteinDF. To compute the electronic structure of a large molecule, not only must the expensive computational cost be overcome, but a good initial guess is also required for safe SCF convergence. In order to prepare a precise initial guess for the macromolecular system, we have developed the quasi-canonical localized orbital (QCLO) method. The QCLO has the characteristics of both a localized and a canonical orbital in a certain region of the molecule. We have succeeded in the CMO calculations of proteins by using the QCLO method. For simplified and semi-automated calculation with the QCLO method, we have also developed a Python-based program, QCLObot.
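As a generic illustration of why Cholesky-decomposed two-electron integrals reduce the SCF build to dense linear algebra (this is not ProteinDF's actual code, and the grid-free pure-XC part is not shown), the Coulomb and exchange matrices can be assembled from the Cholesky vectors and the density matrix as sketched below.

```python
import numpy as np

def coulomb_exchange_from_cholesky(L, D):
    """Build Coulomb (J) and exchange (K) matrices from Cholesky vectors.

    L : (naux, nbf, nbf) Cholesky vectors approximating the two-electron
        integrals, (mu nu|lambda sigma) ~ sum_P L[P,mu,nu] * L[P,lambda,sigma]
    D : (nbf, nbf) density matrix
    """
    # J_{mu nu} = sum_P L[P,mu,nu] * (sum_{ls} L[P,l,s] * D[l,s])
    J = np.einsum('pmn,pls,ls->mn', L, L, D, optimize=True)
    # K_{mu nu} = sum_P sum_{ls} L[P,mu,l] * D[l,s] * L[P,nu,s]
    K = np.einsum('pml,ls,pns->mn', L, D, L, optimize=True)
    return J, K

# toy usage with random symmetric placeholders
nbf, naux = 10, 40
rng = np.random.default_rng(0)
L = rng.normal(size=(naux, nbf, nbf)); L = 0.5 * (L + L.transpose(0, 2, 1))
D = rng.normal(size=(nbf, nbf));       D = 0.5 * (D + D.T)
J, K = coulomb_exchange_from_cholesky(L, D)
```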
Study on the Influence of Elevation of Tailing Dam on Stability
NASA Astrophysics Data System (ADS)
Wan, Shuai; Wang, Kun; Kong, Songtao; Zhao, Runan; Lan, Ying; Zhang, Run
2017-12-01
This paper takes a tailings dam in Yunnan as its object and uses theoretical analysis and numerical calculation to study the effect of dam elevation, under seismic load, on the stability of the tailings dam, analysing stability from two standpoints: the safety factor and the liquefaction area. The simplified Bishop method is adopted to calculate the dynamic safety factor, and the liquefaction area is analysed by the shear-stress comparison method, from which the influence of elevation on the stability of the tailings dam is obtained. Under earthquake loading, as the elevation increases, the safety factor of the dam body decreases and shallow tailings are susceptible to liquefaction. The liquefaction area is mainly concentrated in the bank below the water surface. These results provide a scientific basis for the design and safety management of the tailings dam.
NASA Astrophysics Data System (ADS)
Mihn, Byeong-Hee; Choi, Goeun; Lee, Yong Sam
2017-03-01
This study examines the scales of unique instruments used for astronomical observation during the Joseon dynasty. The Small Simplified Armillary Sphere (小簡儀, So-ganui) and the Sun-and-Stars Time-Determining Instrument (日星定時儀, Ilseong-jeongsi-ui) are miniaturized astronomical instruments, which can be characterized, respectively, as an observational instrument and a clock, and were influenced by the Simplified Armilla (簡儀, Jianyi) of the Yuan dynasty. These two instruments were equipped with several rings, and the rings of one were similar both in size and in scale to those of the other. Using the classic method of drawing the scale on the circumference of a ring, we analyze the scales of the Small Simplified Armillary Sphere and the Sun-and-Stars Time-Determining Instrument. Like the scale feature of the Simplified Armilla, we find that these two instruments used a specific circumference on which two kinds of scales can be drawn. If this dual scale drawing on one circumference was applied to Joseon's astronomical instruments, we suggest that 3.14, rather than 3 as in China, was used as the ratio of the circumference of a circle to its diameter when the ring sizes were calculated at that time. From the size of the Hundred-interval disk of the extant Simplified Sundial in Korea, we conclude that the diameters of the three rings of the Sun-and-Stars Time-Determining Instrument described in the Sejong Sillok (世宗實錄, Veritable Records of King Sejong) refer to the middle circle of each ring, not the outer circle. From an analysis of the degrees of the 28 lunar lodges (lunar mansions) on the equator recorded in the Chiljeongsan-naepyeon (七政算內篇, the Inner Volume of Calculation of the Motions of the Seven Celestial Determinants), we also obtain the result that the scale of the Celestial-circumference-degree on the Small Simplified Armillary Sphere was made with a scale error of about 0.1 du in root mean square (RMS).
Gotanda, Tatsuhiro; Katsuda, Toshizo; Gotanda, Rumi; Kuwano, Tadao; Akagawa, Takuya; Tanki, Nobuyoshi; Tabuchi, Akihiko; Shimono, Tetsunori; Kawaji, Yasuyuki
2016-01-01
Radiochromic film dosimeters have a disadvantage in comparison with an ionization chamber in that the dosimetry process is time-consuming for creating a density-absorbed dose calibration curve. The purpose of this study was the development of a simplified method of creating a density-absorbed dose calibration curve from radiochromic film within a short time. This simplified method was performed using Gafchromic EBT3 film with a low energy dependence and step-shaped Al filter. The simplified method was compared with the standard method. The density-absorbed dose calibration curves created using the simplified and standard methods exhibited approximately similar straight lines, and the gradients of the density-absorbed dose calibration curves were -32.336 and -33.746, respectively. The simplified method can obtain calibration curves within a much shorter time compared to the standard method. It is considered that the simplified method for EBT3 film offers a more time-efficient means of determining the density-absorbed dose calibration curve within a low absorbed dose range such as the diagnostic range.
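A minimal sketch of how a density-absorbed dose calibration line could be fitted from a single step-filter exposure is given below; the transmission values, dose, and densities in the commented example are illustrative only and are not the study's measurements (the sign and magnitude of the fitted gradient depend on the density metric used).

```python
import numpy as np

def calibration_from_step_filter(entrance_dose_mGy, step_transmissions, net_densities):
    """Fit a linear density-vs-dose calibration from one step-filter exposure.

    entrance_dose_mGy  : dose delivered to the unattenuated part of the film
    step_transmissions : fraction of that dose reaching the film under each step
    net_densities      : net optical density measured under each step
    Returns (gradient, intercept) of the straight-line fit.
    """
    doses = entrance_dose_mGy * np.asarray(step_transmissions)
    gradient, intercept = np.polyfit(doses, np.asarray(net_densities), 1)
    return gradient, intercept

# illustrative numbers only:
# calibration_from_step_filter(10.0, [1.0, 0.8, 0.6, 0.45, 0.3],
#                              [0.30, 0.25, 0.19, 0.14, 0.10])
```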
NASA Astrophysics Data System (ADS)
Song, Jinling; Qu, Yonghua; Wang, Jindi; Wan, Huawei; Liu, Xiaoqing
2007-06-01
The radiosity method is based on computer simulation of the real 3D structures of vegetation, such as leaves, branches and stems, which are composed of many facets. Using this method we can simulate the canopy reflectance and its bidirectional distribution for a vegetation canopy in the visible and NIR regions. But as vegetation becomes more complex, more facets are needed to compose it, so large memory and a long time to calculate the view factors are required; these are the bottlenecks in using the radiosity method to calculate the canopy BRF of larger-scale vegetation scenes. We derived a new method to solve this problem; the main idea is to abstract the vegetation crown shapes and to simplify their structures, which reduces the number of facets. The facets are given optical properties according to the reflectance, transmission and absorption of the real-structure canopy. Based on the above work, we can simulate the canopy BRF of large-scale mixed scenes with different vegetation species. In this study, taking broadleaf trees as an example and based on their structural characteristics, we abstracted their crowns as ellipsoid shells and simulated the canopy BRF in the visible and NIR regions for a large-scale scene with ellipsoids of different crown shapes and heights. From this study we conclude that LAI, LAD, the gap probability, and the sunlit and shaded surfaces are the more important parameters for simulating the simplified vegetation canopy BRF, and that the radiosity method can provide canopy BRF data under any conditions for our research.
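For reference, the core linear system the radiosity method solves once the view factors are available is sketched below; the facet inputs are placeholders, and the expensive view-factor computation that the crown simplification is meant to shrink is assumed to be given.

```python
import numpy as np

def solve_radiosity(emission, reflectance, view_factors):
    """Solve the discrete radiosity system B = E + diag(rho) F B.

    emission     : (N,) unscattered flux leaving each facet (e.g. intercepted direct light)
    reflectance  : (N,) facet scattering coefficient
    view_factors : (N, N) view-factor matrix F[i, j] between facets
    Returns the facet radiosities B, from which a directional reflectance (BRF)
    can be accumulated for a given view direction.
    """
    N = len(emission)
    A = np.eye(N) - np.diag(reflectance) @ view_factors
    return np.linalg.solve(A, emission)
```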
Three-dimensional calculations of rotor-airframe interaction in forward flight
NASA Technical Reports Server (NTRS)
Zori, Laith A. J.; Mathur, Sanjay R.; Rajagopalan, R. G.
1992-01-01
A method for analyzing the mutual aerodynamic interaction between a rotor and an airframe model has been developed. This technique models the rotor implicitly through the source terms of the momentum equations. A three-dimensional, incompressible, laminar, Navier-Stokes solver in cylindrical coordinates was developed for analyzing the rotor/airframe problem. The calculations are performed on a simplified model at an advance ratio of 0.1. The airframe surface pressure predictions are found to be in good agreement with wind tunnel test data. Results are presented for velocity and pressure field distributions in the wake of the rotor.
Králík, M; Krása, J; Velyhan, A; Scholz, M; Ivanova-Stanik, I M; Bienkowska, B; Miklaszewski, R; Schmidt, H; Řezáč, K; Klír, D; Kravárik, J; Kubeš, P
2010-11-01
The spectra of neutrons outside the plasma focus device PF-1000, with an upper energy limit of ≈1 MJ, were measured using a Bonner spheres spectrometer in which the active detector of thermal neutrons was replaced by nine thermoluminescent chips. As an a priori spectrum for the unfolding procedure, the spectrum calculated by means of the Monte Carlo method with a simplified model of the discharge chamber was selected. Differences between unfolded and calculated spectra are discussed with respect to properties of the discharge vessel and the laboratory layout.
Tight-binding model for borophene and borophane
NASA Astrophysics Data System (ADS)
Nakhaee, M.; Ketabi, S. A.; Peeters, F. M.
2018-03-01
Starting from the simplified linear combination of atomic orbitals method in combination with first-principles calculations, we construct a tight-binding (TB) model in the two-centre approximation for borophene and hydrogenated borophene (borophane). The Slater and Koster approach is applied to calculate the TB Hamiltonian of these systems. We obtain expressions for the Hamiltonian and overlap matrix elements between different orbitals for the different atoms and present the SK coefficients in a nonorthogonal basis set. An anisotropic Dirac cone is found in the band structure of borophane. We derive a Dirac low-energy Hamiltonian and compare the Fermi velocities with that of graphene.
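As a generic illustration of the Slater-Koster two-centre construction mentioned above, the hopping block between p orbitals on two atoms is built from the direction cosines and the Vppσ, Vppπ parameters, sketched below. The numerical parameters in the usage comment are placeholders; the paper additionally includes s orbitals, overlap matrices for the nonorthogonal basis, and coefficients fitted to first-principles calculations, none of which appear here.

```python
import numpy as np

def sk_pp_hopping(direction, v_pp_sigma, v_pp_pi):
    """Slater-Koster two-centre hopping block between p_x, p_y, p_z orbitals.

    direction : vector from atom A to atom B (normalized internally)
    Returns the 3x3 matrix t[a, b] = <p_a(A)|H|p_b(B)>, i.e.
    t_ab = l_a*l_b*(Vpps - Vppp) + delta_ab*Vppp.
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    return np.outer(d, d) * (v_pp_sigma - v_pp_pi) + np.eye(3) * v_pp_pi

# e.g. a bond along x: only <px|H|px> equals Vpps, the others equal Vppp
# sk_pp_hopping([1.0, 0.0, 0.0], v_pp_sigma=-2.0, v_pp_pi=0.5)
```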
A method for assessing the accuracy of surgical technique in the correction of astigmatism.
Kaye, S B; Campbell, S H; Davey, K; Patterson, A
1992-12-01
Surgical results can be assessed as a function of what was aimed for, what was done, and what was achieved. One of the aims of refractive surgery is to reduce astigmatism; the smaller the postoperative astigmatism, the better the result. What was done--that is, the surgical effect--can be calculated from the preoperative and postoperative astigmatism. A simplified formulation is described which facilitates the calculation (magnitude and direction) of this surgical effect. In addition, an expression for surgical accuracy is described, as a function of what was aimed for and what was achieved.
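The paper's own simplified formulation is not given in the abstract; as generic background, a common way to compute the magnitude and direction of a surgically induced change in astigmatism uses doubled-angle vectors, sketched below.

```python
import math

def surgical_effect(pre_mag, pre_axis_deg, post_mag, post_axis_deg):
    """Magnitude (diopters) and axis (degrees) of the surgically induced change
    in astigmatism, using the standard doubled-angle vector representation."""
    def to_xy(mag, axis_deg):
        a = math.radians(2.0 * axis_deg)
        return mag * math.cos(a), mag * math.sin(a)
    px, py = to_xy(post_mag, post_axis_deg)
    qx, qy = to_xy(pre_mag, pre_axis_deg)
    x, y = px - qx, py - qy
    magnitude = math.hypot(x, y)
    axis = (math.degrees(math.atan2(y, x)) / 2.0) % 180.0
    return magnitude, axis

# e.g. 3.0 D at 90 deg reduced to 1.0 D at 85 deg:
# surgical_effect(3.0, 90.0, 1.0, 85.0)
```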
Caracappa, Peter F.; Chao, T. C. Ephraim; Xu, X. George
2010-01-01
Red bone marrow is among the tissues of the human body that are most sensitive to ionizing radiation, but red bone marrow cannot be distinguished from yellow bone marrow by normal radiographic means. When using a computational model of the body constructed from computed tomography (CT) images for radiation dose, assumptions must be applied to calculate the dose to the red bone marrow. This paper presents an analysis of two methods of calculating red bone marrow distribution: 1) a homogeneous mixture of red and yellow bone marrow throughout the skeleton, and 2) International Commission on Radiological Protection cellularity factors applied to each bone segment. A computational dose model was constructed from the CT image set of the Visible Human Project and compared to the VIP-Man model, which was derived from color photographs of the same individual. These two data sets for the same individual provide the unique opportunity to compare the methods applied to the CT-based model against the observed distribution of red bone marrow for that individual. The mass of red bone marrow in each bone segment was calculated using both methods. The effect of the different red bone marrow distributions was analyzed by calculating the red bone marrow dose using the EGS4 Monte Carlo code for parallel beams of monoenergetic photons over an energy range of 30 keV to 6 MeV, cylindrical (simplified CT) sources centered about the head and abdomen over an energy range of 30 keV to 1 MeV, and a whole-body electron irradiation treatment protocol for 3.9 MeV electrons. Applying the method with cellularity factors improves the average difference in the estimation of mass in each bone segment as compared to the mass in VIP-Man by 45% over the homogenous mixture method. Red bone marrow doses calculated by the two methods are similar for parallel photon beams at high energy (above about 200 keV), but differ by as much as 40% at lower energies. The calculated red bone marrow doses differ significantly for simplified CT and electron beam irradiation, since the computed red bone marrow dose is a strong function of the cellularity factor applied to bone segments within the primary radiation beam. These results demonstrate the importance of properly applying realistic cellularity factors to computation dose models of the human body. PMID:19430219
NASA Astrophysics Data System (ADS)
Belyaev, Andrey K.; Yakovleva, Svetlana A.
2017-12-01
Aims: A simplified model is derived for estimating rate coefficients for inelastic processes in low-energy collisions of heavy particles with hydrogen, in particular, the rate coefficients with high and moderate values. Such processes are important for non-local thermodynamic equilibrium modeling of cool stellar atmospheres. Methods: The derived method is based on the asymptotic approach for electronic structure calculations and the Landau-Zener model for nonadiabatic transition probability determination. Results: It is found that the rate coefficients are expressed via statistical probabilities and reduced rate coefficients. It is shown that the reduced rate coefficients for neutralization and ion-pair formation processes depend on single electronic bound energies of an atomic particle, while the reduced rate coefficients for excitation and de-excitation processes depend on two electronic bound energies. The reduced rate coefficients are calculated and tabulated as functions of electronic bound energies. The derived model is applied to barium-hydrogen ionic collisions. For the first time, rate coefficients are evaluated for inelastic processes in Ba+ + H and Ba2+ + H- collisions for all transitions between the states from the ground and up to and including the ionic state. Tables with calculated data are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/608/A33
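One building block of the model described above is the Landau-Zener estimate of the nonadiabatic transition probability at an avoided crossing. The snippet below is a hedged sketch of that single ingredient only, with purely illustrative coupling, velocity, and slope-difference values; it does not reproduce the asymptotic electronic-structure part or the tabulated reduced rate coefficients.

```python
import math

HBAR = 1.0545718e-34  # J*s

def landau_zener_probability(h12, v, dF):
    """Single-passage nonadiabatic transition probability at an avoided crossing.
    h12: half of the adiabatic splitting at the crossing (J)
    v:   radial velocity at the crossing point (m/s)
    dF:  |difference of the slopes of the diabatic potentials| (J/m)
    """
    return math.exp(-2.0 * math.pi * h12**2 / (HBAR * v * dF))

# illustrative numbers only
p = landau_zener_probability(h12=1e-21, v=5e3, dF=1e-9)
print(f"single-passage transition probability: {p:.3e}")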
Umari, P; Fabris, S
2012-05-07
The quasi-particle energy levels of the Zn-Phthalocyanine (ZnPc) molecule calculated with the GW approximation are shown to depend sensitively on the explicit description of the metal-center semicore states. We find that the calculated GW energy levels are in good agreement with the measured experimental photoemission spectra only when explicitly including the Zn 3s and 3p semicore states in the valence. The main origin of this effect is traced back to the exchange term in the self-energy GW approximation. Based on this finding, we propose a simplified approach for correcting GW calculations of metal phthalocyanine molecules that avoids the time-consuming explicit treatment of the metal semicore states. Our method allows for speeding up the calculations without compromising the accuracy of the computed spectra.
Phonon-defect scattering and thermal transport in semiconductors: developing guiding principles
NASA Astrophysics Data System (ADS)
Polanco, Carlos; Lindsay, Lucas
First principles calculations of thermal conductivity have shown remarkable agreement with measurements for high-quality crystals. Nevertheless, most materials contain defects that provide significant extrinsic resistance and lower the conductivity from that of a perfect sample. This effect is usually accounted for with simplified analytical models that neglect the atomistic details of the defect and the exact dynamical properties of the system, which limits prediction capabilities. Recently, a method based on Green's functions was developed to calculate the phonon-defect scattering rates from first principles. This method has shown the important role of point defects in determining thermal transport in diamond and boron arsenide, two competitors for the highest bulk thermal conductivity. Here, we study the role of point defects in other relatively high thermal conductivity semiconductors, e.g., BN, BeSe, SiC, GaN and Si. We compare their first principles defect-phonon scattering rates and effects on transport properties with those from simplified models and explore common principles that determine them. Efforts will focus on basic vibrational properties that vary from system to system, such as density of states, interatomic force constants and defect deformation. Research supported by the U.S. Department of Energy, Basic Energy Sciences, Materials Sciences and Engineering Division.
Calculating the nutrient composition of recipes with computers.
Powers, P M; Hoover, L W
1989-02-01
The objective of this research project was to compare the nutrient values computed by four commonly used computerized recipe calculation methods. The four methods compared were the yield factor, retention factor, summing, and simplified retention factor methods. Two versions of the summing method were modeled. Four pork entrée recipes were selected for analysis: roast pork, pork and noodle casserole, pan-broiled pork chops, and pork chops with vegetables. Assumptions were made about changes expected to occur in the ingredients during preparation and cooking. Models were designed to simulate the algorithms of the calculation methods using a microcomputer spreadsheet software package. Identical results were generated in the yield factor, retention factor, and summing-cooked models for roast pork. The retention factor and summing-cooked models also produced identical results for the recipe for pan-broiled pork chops. The summing-raw model gave the highest value for water in all four recipes and the lowest values for most of the other nutrients. A superior method or methods was not identified. However, on the basis of the capabilities provided with the yield factor and retention factor methods, more serious consideration of these two methods is recommended.
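Two of the recipe-calculation approaches compared in the study above can be illustrated with a short sketch. The ingredient nutrient values and the retention fractions below are invented for demonstration; they are not USDA data, and the yield-factor variant (which also rescales for cooking weight loss) is omitted for brevity.

```python
# Sketch of two recipe nutrient-calculation approaches: summing raw-ingredient
# nutrients vs. applying post-cooking retention factors. Placeholder data only.

raw_ingredients = [  # nutrient content per ingredient as used in the recipe
    {"name": "pork loin", "vitamin_b1_mg": 1.2, "water_g": 180.0},
    {"name": "noodles",   "vitamin_b1_mg": 0.3, "water_g": 20.0},
]
retention = {"vitamin_b1_mg": 0.75, "water_g": 0.60}  # fraction retained after cooking

def summing_raw(ingredients):
    totals = {}
    for ing in ingredients:
        for nutrient, amount in ing.items():
            if nutrient == "name":
                continue
            totals[nutrient] = totals.get(nutrient, 0.0) + amount
    return totals

def retention_factor(ingredients, retention):
    return {n: v * retention.get(n, 1.0) for n, v in summing_raw(ingredients).items()}

print(summing_raw(raw_ingredients))        # "summing-raw" style totals
print(retention_factor(raw_ingredients, retention))
```

As the abstract notes, the summing-raw style overstates water and understates most other nutrients relative to methods that account for cooking changes.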
Recent advances in QM/MM free energy calculations using reference potentials.
Duarte, Fernanda; Amrein, Beat A; Blaha-Nelson, David; Kamerlin, Shina C L
2015-05-01
Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics. Copyright © 2014. Published by Elsevier B.V.
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A probabilistic structural analysis method (PSAM) is described which makes an approximate calculation of the structural response of a system, including the associated probabilistic distributions, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The method employs the fast probability integration (FPI) algorithm of Wu and Wirsching. Typical solution strategies are illustrated by formulations for a representative critical component chosen from the Space Shuttle Main Engine (SSME) as part of a major NASA-sponsored program on PSAM. Typical results are presented to demonstrate the role of the methodology in engineering design and analysis.
SEE rate estimation based on diffusion approximation of charge collection
NASA Astrophysics Data System (ADS)
Sogoyan, Armen V.; Chumakov, Alexander I.; Smolin, Anatoly A.
2018-03-01
The integral rectangular parallelepiped (IRPP) method remains the main approach to single event rate (SER) prediction for aerospace systems, despite the growing number of issues impairing the method's validity when applied to scaled technology nodes. One such issue is uncertainty in parameter extraction in the IRPP method, which can lead to a spread of several orders of magnitude in the subsequently calculated SER. The paper presents an alternative approach to SER estimation based on diffusion approximation of the charge collection by an IC element and geometrical interpretation of SEE cross-section. In contrast to the IRPP method, the proposed model includes only two parameters, which are uniquely determined from the experimental data for normal incidence irradiation at an ion accelerator. This approach eliminates the necessity of arbitrary decisions during parameter extraction and, thus, greatly simplifies the calculation procedure and increases the robustness of the forecast.
Rational reduction of periodic propagators for off-period observations.
Blanton, Wyndham B; Logan, John W; Pines, Alexander
2004-02-01
Many common solid-state nuclear magnetic resonance problems take advantage of the periodicity of the underlying Hamiltonian to simplify the computation of an observation. Most of the time-domain methods used, however, require the time step between observations to be some integer or reciprocal-integer multiple of the period, thereby restricting the observation bandwidth. Calculations of off-period observations are usually reduced to brute force direct methods resulting in many demanding matrix multiplications. For large spin systems, the matrix multiplication becomes the limiting step. A simple method that can dramatically reduce the number of matrix multiplications required to calculate the time evolution when the observation time step is some rational fraction of the period of the Hamiltonian is presented. The algorithm implements two different optimization routines. One uses pattern matching and additional memory storage, while the other recursively generates the propagators via time shifting. The net result is a significant speed improvement for some types of time-domain calculations.
Senftle, F.E.; Moxham, R.M.; Tanner, A.B.
1972-01-01
The recent availability of borehole logging sondes employing a source of neutrons and a Ge(Li) detector opens up the possibility of analyzing either decay or capture gamma rays. The most efficient method for a given element can be predicted by calculating the decay-to-capture count ratio for the most prominent peaks in the respective spectra. From a practical point of view such a calculation must be slanted toward short irradiation and count times at each station in a borehole. A simplified method of computation is shown, and the decay-to-capture count ratio has been calculated and tabulated for the optimum value in the decay mode irrespective of the irradiation time, and also for a ten-minute irradiation time. Based on analysis of a single peak in each spectrum, the results indicate the preferred technique and the best decay or capture peak to observe for those elements of economic interest.
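The kind of ratio discussed above can be illustrated with a toy calculation. This is a hedged sketch under simple assumptions (prompt capture gammas counted during irradiation, decay gammas counted afterwards, no self-shielding or detector energy dependence); the cross sections, half-life, and yields are invented and do not correspond to any specific element or to the paper's tabulated values.

```python
import math

def decay_to_capture_ratio(sigma_act_b, sigma_cap_b, half_life_s,
                           t_irr_s, t_count_s, yield_decay, yield_cap):
    """Toy decay-to-capture count ratio per target atom and unit neutron flux.
    Prompt capture gammas are assumed counted during the irradiation itself;
    decay gammas are counted for t_count_s after the irradiation ends."""
    lam = math.log(2.0) / half_life_s
    buildup = sigma_act_b * (1.0 - math.exp(-lam * t_irr_s))            # activation saturation
    decay_counts = buildup * (1.0 - math.exp(-lam * t_count_s)) / lam * yield_decay
    capture_counts = sigma_cap_b * t_irr_s * yield_cap
    return decay_counts / capture_counts

# illustrative nuclear data only, not for any specific element
print(decay_to_capture_ratio(sigma_act_b=0.2, sigma_cap_b=0.5, half_life_s=150.0,
                             t_irr_s=600.0, t_count_s=300.0,
                             yield_decay=0.9, yield_cap=0.3))
```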
Liquid-filled simplified hollow-core photonic crystal fiber
NASA Astrophysics Data System (ADS)
Liu, Shengnan; Gao, Wei; Li, Hongwei; Dong, Yongkang; Zhang, Hongying
2014-12-01
We report on a novel type of liquid-filled simplified hollow-core photonic crystal fiber (HC-PCF) and investigate its transmission properties with various filling liquids, including water, ethanol and FC-40. The loss and dispersion characteristics are calculated for different fiber parameters including strut thickness and core diameter. The results show that low-loss windows still exist for liquid-filled simplified HC-PCFs, and the low-loss windows and dispersions can be easily tailored by filling different liquids. Such liquid-filled simplified HC-PCFs open up many possibilities for nonlinear fiber optics and optical, biochemical, and medical sensing.
A Simplified Mesh Deformation Method Using Commercial Structural Analysis Software
NASA Technical Reports Server (NTRS)
Hsu, Su-Yuen; Chang, Chau-Lyan; Samareh, Jamshid
2004-01-01
Mesh deformation in response to redefined or moving aerodynamic surface geometries is a frequently encountered task in many applications. Most existing methods are either mathematically too complex or computationally too expensive for usage in practical design and optimization. We propose a simplified mesh deformation method based on linear elastic finite element analyses that can be easily implemented by using commercially available structural analysis software. Using a prescribed displacement at the mesh boundaries, a simple structural analysis is constructed based on a spatially varying Young's modulus to move the entire mesh in accordance with the surface geometry redefinitions. A variety of surface movements, such as translation, rotation, or incremental surface reshaping that often takes place in an optimization procedure, may be handled by the present method. We describe the numerical formulation and implementation using the NASTRAN software in this paper. The use of commercial software bypasses tedious reimplementation and takes advantage of the computational efficiency offered by the vendor. A two-dimensional airfoil mesh and a three-dimensional aircraft mesh were used as test cases to demonstrate the effectiveness of the proposed method. Euler and Navier-Stokes calculations were performed for the deformed two-dimensional meshes.
Measurement of blood flow from an assist ventricle by computation of pneumatic driving parameters.
Qian, K X
1992-03-01
The measurement of blood flow from an assist ventricle is important but sometimes difficult in artificial heart experiments. Along with the development of a pneumatic cylinder-piston driver coupled with a ventricular assist device, a simplified method for measuring pump flow was established. From driving parameters such as the piston (or cylinder) displacement and air pressure, the pump flow could be calculated by the use of the equation of state for an ideal gas. The results of this method are broadly in agreement with electromagnetic and Doppler measurements.
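The abstract only outlines the pneumatic approach, so the following is a hedged sketch of one way an ideal-gas (Boyle's law, isothermal) correction could relate piston travel and air pressure to ejected volume; the geometry, pressures, and the isothermal assumption are illustrative and not taken from the paper.

```python
# Sketch: assist-ventricle stroke volume from pneumatic driver data.
# Illustrative numbers and an isothermal ideal-gas assumption only.

def ejected_volume_ml(piston_area_cm2, piston_travel_cm,
                      p_start_kpa, p_end_kpa, air_volume_start_ml):
    """Blood volume displaced per stroke = piston swept volume minus the volume
    lost to compression of the driving air column (Boyle's law, isothermal)."""
    swept = piston_area_cm2 * piston_travel_cm                 # cm^3 == ml
    air_volume_end = air_volume_start_ml * p_start_kpa / p_end_kpa
    air_compression = air_volume_start_ml - air_volume_end
    return swept - air_compression

def mean_flow_l_per_min(stroke_volume_ml, beat_rate_per_min):
    return stroke_volume_ml * beat_rate_per_min / 1000.0

sv = ejected_volume_ml(piston_area_cm2=20.0, piston_travel_cm=4.0,
                       p_start_kpa=110.0, p_end_kpa=135.0, air_volume_start_ml=150.0)
print(sv, mean_flow_l_per_min(sv, 80))
```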
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method, and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and weight is back calculated. In design optimization, the weight of a structure becomes the merit function with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and the solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest. Some variation was noticed in designs calculated by the methods. The variation may be attributed to structural indeterminacy. It is prudent to develop designs by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure. Weight can be reduced to a small value for the most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
Accelerating wavefunction in density-functional-theory embedding by truncating the active basis set
NASA Astrophysics Data System (ADS)
Bennie, Simon J.; Stella, Martina; Miller, Thomas F.; Manby, Frederick R.
2015-07-01
Methods where an accurate wavefunction is embedded in a density-functional description of the surrounding environment have recently been simplified through the use of a projection operator to ensure orthogonality of orbital subspaces. Projector embedding already offers significant performance gains over conventional post-Hartree-Fock methods by reducing the number of correlated occupied orbitals. However, in our first applications of the method, we used the atomic-orbital basis for the full system, even for the correlated wavefunction calculation in a small, active subsystem. Here, we further develop our method for truncating the atomic-orbital basis to include only functions within or close to the active subsystem. The number of atomic orbitals in a calculation on a fixed active subsystem becomes asymptotically independent of the size of the environment, producing the required O(N^0) scaling of cost of the calculation in the active subsystem, and accuracy is controlled by a single parameter. The applicability of this approach is demonstrated for the embedded many-body expansion of binding energies of water hexamers and calculation of reaction barriers of SN2 substitution of fluorine by chlorine in α-fluoroalkanes.
A simplified computer solution for the flexibility matrix of contacting teeth for spiral bevel gears
NASA Technical Reports Server (NTRS)
Hsu, C. Y.; Cheng, H. S.
1987-01-01
A computer code, FLEXM, was developed to calculate the flexibility matrices of contacting teeth for spiral bevel gears using a simplified analysis based on the elementary beam theory for the deformation of gear and shaft. The simplified theory requires computer time at least one order of magnitude less than that needed for the complete finite element method analysis reported earlier by H. Chao, and it is much easier to apply for different gear and shaft geometries. Results were obtained for a set of spiral bevel gears. The teeth deflections due to torsion, bending moment, shearing strain and axial force were found to be of the order of 10^-5, 10^-6, 10^-7, and 10^-8, respectively. Thus, the torsional deformation was the most predominant factor. In the analysis of dynamic load, response frequencies were found to be larger when the mass or moment of inertia was smaller or the stiffness was larger. The change in damping coefficient had little influence on the resonance frequency, but had a marked influence on the dynamic load at the resonant frequencies.
Liu, Anlin; Li, Xingmin; He, Yanbo; Deng, Fengdong
2004-02-01
Based on the principle of energy balance, the method for calculating latent evaporation was simplified, and hence the construction of the drought remote-sensing monitoring model based on the crop water shortage index was also simplified. Since the modified model involved fewer parameters and reduced computing time, it was more suitable for operational use in routine services. After collecting the relevant meteorological elements and the NOAA/AVHRR image data, the new model was applied to monitor the spring drought in Guanzhong, Shaanxi Province. The results showed that the monitoring results from the new model, which also took more consideration of the effects of ground coverage conditions and meteorological elements such as wind speed and water vapor pressure, were much better than the results from the vegetation water supply index model. In terms of computing time, operational performance, and monitoring results, the simplified crop water shortage index model was more suitable for practical use. In addition, the reasons for the abnormal results of CWSI > 1 in some regions in the case studies were also discussed in this paper.
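A commonly used form of a crop water shortage index pairs an energy-balance estimate of actual evapotranspiration with a potential value, CWSI = 1 - ET/PET. The sketch below shows only that generic form; the specific simplification and coefficients used in the paper are not reproduced, and all flux values are illustrative.

```python
# Hedged sketch of a generic crop water shortage index, CWSI = 1 - ET/PET,
# with actual ET taken as the energy-balance residual LE = Rn - G - H.
# All flux values (W/m^2) are illustrative only.

def latent_heat_flux(net_radiation, soil_heat_flux, sensible_heat_flux):
    """Energy-balance residual: LE = Rn - G - H."""
    return net_radiation - soil_heat_flux - sensible_heat_flux

def crop_water_shortage_index(le_actual, le_potential):
    le_potential = max(le_potential, 1e-6)   # guard against division by zero
    return 1.0 - le_actual / le_potential

le = latent_heat_flux(net_radiation=450.0, soil_heat_flux=60.0, sensible_heat_flux=220.0)
print(crop_water_shortage_index(le, le_potential=390.0))
```

Values above 1 (the anomaly discussed at the end of the abstract) arise whenever the estimated actual evapotranspiration goes negative or the potential value is underestimated.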
Wojtas-Niziurski, Wojciech; Meng, Yilin; Roux, Benoit; Bernèche, Simon
2013-01-01
The potential of mean force describing conformational changes of biomolecules is a central quantity that determines the function of biomolecular systems. Calculating an energy landscape of a process that depends on three or more reaction coordinates might require a lot of computational power, making some multidimensional calculations practically impossible. Here, we present an efficient automatized umbrella sampling strategy for calculating multidimensional potentials of mean force. The method progressively learns by itself, through a feedback mechanism, which regions of a multidimensional space are worth exploring and automatically generates a set of umbrella sampling windows that is adapted to the system. The self-learning adaptive umbrella sampling method is first explained with illustrative examples based on simplified reduced model systems, and then applied to two non-trivial situations: the conformational equilibrium of the pentapeptide Met-enkephalin in solution and ion permeation in the KcsA potassium channel. With this method, it is demonstrated that a significantly smaller number of umbrella windows needs to be employed to characterize the free energy landscape over the most relevant regions without any loss in accuracy.
NASA Astrophysics Data System (ADS)
Goldsworthy, M. J.
2012-10-01
One of the most useful tools for modelling rarefied hypersonic flows is the Direct Simulation Monte Carlo (DSMC) method. Simulator particle movement and collision calculations are combined with statistical procedures to model thermal non-equilibrium flow-fields described by the Boltzmann equation. The Macroscopic Chemistry Method for DSMC simulations was developed to simplify the inclusion of complex thermal non-equilibrium chemistry. The macroscopic approach uses statistical information which is calculated during the DSMC solution process in the modelling procedures. Here it is shown how inclusion of macroscopic information in models of chemical kinetics, electronic excitation, ionization, and radiation can enhance the capabilities of DSMC to model flow-fields where a range of physical processes occur. The approach is applied to the modelling of a 6.4 km/s nitrogen shock wave and results are compared with those from existing shock-tube experiments and continuum calculations. Reasonable agreement between the methods is obtained. The quality of the comparison is highly dependent on the set of vibrational relaxation and chemical kinetic parameters employed.
Improved Model Fitting for the Empirical Green's Function Approach Using Hierarchical Models
NASA Astrophysics Data System (ADS)
Van Houtte, Chris; Denolle, Marine
2018-04-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the use of simplified model-fitting techniques in most source spectral studies. This study examines a variety of model-fitting methods and shows that the choice of method can explain some of the discrepancy. The preferred method is Bayesian hierarchical modeling, which can reduce bias, better quantify uncertainties, and allow additional effects to be resolved. Two case study earthquakes are examined, the 2016 MW7.1 Kumamoto, Japan earthquake and a MW5.3 aftershock of the 2016 MW7.8 Kaikōura earthquake. By using hierarchical models, the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be retrieved without overfitting the data. Other methods commonly used to calculate corner frequencies may give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using an ω-square model, the obtained fc could be twice as large as a realistic value.
Simplified Relativistic Force Transformation Equation.
ERIC Educational Resources Information Center
Stewart, Benjamin U.
1979-01-01
A simplified relativistic force transformation equation is derived and then used to obtain the equation for the electromagnetic forces on a charged particle, calculate the electromagnetic fields due to a point charge with constant velocity, transform electromagnetic fields in general, derive the Biot-Savart law, and relate it to Coulomb's law.…
Modeling inelastic phonon scattering in atomic- and molecular-wire junctions
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Frederiksen, Thomas; Brandbyge, Mads
2005-11-01
Computationally inexpensive approximations describing electron-phonon scattering in molecular-scale conductors are derived from the nonequilibrium Green’s function method. The accuracy is demonstrated with a first-principles calculation on an atomic gold wire. Quantitative agreement between the full nonequilibrium Green’s function calculation and the newly derived expressions is obtained while simplifying the computational burden by several orders of magnitude. In addition, analytical models provide intuitive understanding of the conductance including nonequilibrium heating and provide a convenient way of parameterizing the physics. This is exemplified by fitting the expressions to the experimentally observed conductances through both an atomic gold wire and a hydrogen molecule.
Jarrín, E; Jarrín, I; Arnalich-Montiel, F
2015-08-01
We describe a simplified method to detect anterior lenticonus. Three eyes of 2 patients with anterior lenticonus, plus 16 eyes from 16 healthy controls underwent Scheimpflug imaging of their anterior segment with Pentacam. The anterior capsule apex angle was manually identified and automatically measured by AutoCAD. The mean angle was 173.06° (SD: 1.91) in healthy subjects, and 158.33° (SD: 3.05) in anterior lenticonus eyes. The angle obtained from patients was more than 3 SD steeper than those from healthy subjects. The apical angle calculation method seems to discriminate well between normal eyes and eyes suspected of having anterior lenticonus. Copyright © 2013 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.
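The apical-angle criterion described above is, geometrically, the angle at a vertex formed by two points picked on either side of the anterior capsule apex. The sketch below computes that angle from three digitised points; the coordinates are illustrative, and the actual study measured the angle in AutoCAD from Pentacam Scheimpflug images rather than with this code.

```python
import math

def apex_angle_deg(left, apex, right):
    """Angle (degrees) at the capsule apex formed by two flanking points."""
    v1 = (left[0] - apex[0], left[1] - apex[1])
    v2 = (right[0] - apex[0], right[1] - apex[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

# illustrative coordinates (mm) digitised from a Scheimpflug section
print(apex_angle_deg((-3.0, 0.30), (0.0, 0.0), (3.0, 0.30)))   # flatter apex, ~169 deg
print(apex_angle_deg((-3.0, 0.60), (0.0, 0.0), (3.0, 0.60)))   # steeper apex, ~157 deg
```

The two example outputs mirror the pattern reported above: angles near 173° for healthy eyes and markedly steeper (smaller) angles for anterior lenticonus.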
NASA Technical Reports Server (NTRS)
Ray, R. J.; Hicks, J. W.; Alexander, R. I.
1988-01-01
The X-29A advanced technology demonstrator has shown the practicality and advantages of the capability to compute and display, in real time, aeroperformance flight results. This capability includes the calculation of the in-flight measured drag polar, lift curve, and aircraft specific excess power. From these elements many other types of aeroperformance measurements can be computed and analyzed. The technique can be used to give an immediate postmaneuver assessment of data quality and maneuver technique, thus increasing the productivity of a flight program. A key element of this new method was the concurrent development of a real-time in-flight net thrust algorithm, based on the simplified gross thrust method. This net thrust algorithm allows for the direct calculation of total aircraft drag.
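The bookkeeping behind such real-time aeroperformance displays can be illustrated with the standard flight-performance relations: excess thrust gives drag along the flight path, and specific excess power follows from it. The function and signal names below are illustrative, and the numbers are not X-29A data; the actual system used the simplified gross thrust method to supply the net thrust input.

```python
# Sketch of real-time aeroperformance quantities from a net-thrust estimate.
# Illustrative values only; not flight data.

G = 9.80665  # m/s^2

def drag_from_excess_thrust(net_thrust_n, weight_n, flightpath_accel_mps2):
    """D = T - (W/g) * a, with a the acceleration along the flight path."""
    return net_thrust_n - (weight_n / G) * flightpath_accel_mps2

def specific_excess_power(net_thrust_n, drag_n, weight_n, true_airspeed_mps):
    """Ps = V * (T - D) / W, in m/s."""
    return true_airspeed_mps * (net_thrust_n - drag_n) / weight_n

T, W, a, V = 28000.0, 80000.0, 1.5, 180.0   # N, N, m/s^2, m/s (illustrative)
D = drag_from_excess_thrust(T, W, a)
print(D, specific_excess_power(T, D, W, V))
```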
NASA Technical Reports Server (NTRS)
Bland, S. R.
1982-01-01
Finite difference methods for unsteady transonic flow frequently use simplified equations in which certain of the time-dependent terms are omitted from the governing equations. Kernel functions are derived for two-dimensional subsonic flow, and provide accurate solutions of the linearized potential equation with the same time-dependent terms omitted. These solutions make possible a direct evaluation of the finite difference codes for the linear problem. Calculations with two of these low frequency kernel functions verify the accuracy of the LTRAN2 and HYTRAN2 finite difference codes. Comparisons of the low frequency kernel function results with the Possio kernel function solution of the complete linear equations indicate the adequacy of the HYTRAN approximation for frequencies in the range of interest for flutter calculations.
Temperature distribution of a simplified rotor due to a uniform heat source
NASA Astrophysics Data System (ADS)
Welzenbach, Sarah; Fischer, Tim; Meier, Felix; Werner, Ewald; kyzy, Sonun Ulan; Munz, Oliver
2018-03-01
In gas turbines, high combustion efficiency as well as operational safety are required. Thus, labyrinth seal systems with honeycomb liners are commonly used. In the case of rubbing events in the seal system, the components can be damaged due to cyclic thermal and mechanical loads. Temperature differences occurring at labyrinth seal fins during rubbing events can be determined by considering a single heat source acting periodically on the surface of a rotating cylinder. Existing literature analysing the temperature distribution on rotating cylindrical bodies due to a stationary heat source is reviewed. The temperature distribution on the circumference of a simplified labyrinth seal fin is calculated using an available and easy to implement analytical approach. A finite element model of the simplified labyrinth seal fin is created and the numerical results are compared to the analytical results. The temperature distributions calculated by the analytical and the numerical approaches coincide for low sliding velocities, while there are discrepancies of the calculated maximum temperatures for higher sliding velocities. The use of the analytical approach allows the conservative estimation of the maximum temperatures arising in labyrinth seal fins during rubbing events. At the same time, high calculation costs can be avoided.
NASA Technical Reports Server (NTRS)
Bhandari, P.; Wu, Y. C.; Roschke, E. J.
1989-01-01
A simple solar flux calculation algorithm for a cylindrical cavity type solar receiver has been developed and implemented on an IBM PC-AT. Using cone optics, the contour error method is utilized to handle the slope error of a paraboloidal concentrator. The flux distribution on the side wall is calculated by integration of the energy incident from cones emanating from all the differential elements on the concentrator. The calculations are done for any set of dimensions and properties of the receiver and the concentrator, and account for any spillover on the aperture plate. The results of this algorithm compared excellently with those predicted by more complicated programs. Because of the utilization of axial symmetry and overall simplification, it is extremely fast. It can be easily extended to other axisymmetric receiver geometries.
Capiau, Sara; Wilk, Leah S; De Kesel, Pieter M M; Aalders, Maurice C G; Stove, Christophe P
2018-02-06
The hematocrit (Hct) effect is one of the most important hurdles currently preventing more widespread implementation of quantitative dried blood spot (DBS) analysis in a routine context. Indeed, the Hct may affect both the accuracy of DBS methods as well as the interpretation of DBS-based results. We previously developed a method to determine the Hct of a DBS based on its hemoglobin content using noncontact diffuse reflectance spectroscopy. Despite the ease with which the analysis can be performed (i.e., mere scanning of the DBS) and the good results that were obtained, the method did require a complicated algorithm to derive the total hemoglobin content from the DBS's reflectance spectrum. As the total hemoglobin was calculated as the sum of oxyhemoglobin, methemoglobin, and hemichrome, the three main hemoglobin derivatives formed in DBS upon aging, the reflectance spectrum needed to be unmixed to determine the quantity of each of these derivatives. We now simplified the method by only using the reflectance at a single wavelength, located at a quasi-isosbestic point in the reflectance curve. At this wavelength, assuming 1-to-1 stoichiometry of the aging reaction, the reflectance is insensitive to the hemoglobin degradation and only scales with the total amount of hemoglobin and, hence, the Hct. This simplified method was successfully validated. At each quality control level as well as at the limits of quantitation (i.e., 0.20 and 0.67) bias, intra- and interday imprecision were within 10%. Method reproducibility was excellent based on incurred sample reanalysis and surpassed the reproducibility of the original method. Furthermore, the influence of the volume spotted, the measurement location within the spot, as well as storage time and temperature were evaluated, showing no relevant impact of these parameters. Application to 233 patient samples revealed a good correlation between the Hct determined on whole blood and the predicted Hct determined on venous DBS. The bias obtained with Bland and Altman analysis was -0.015 and the limits of agreement were -0.061 and 0.031, indicating that the simplified, noncontact Hct prediction method even outperforms the original method. In addition, using caffeine as a model compound, it was demonstrated that this simplified Hct prediction method can effectively be used to implement a Hct-dependent correction factor to DBS-based results to alleviate the Hct bias.
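The core idea above is that, at a quasi-isosbestic wavelength, the DBS reflectance tracks total hemoglobin (and hence Hct) regardless of the oxyhemoglobin/methemoglobin/hemichrome split, so a simple calibration curve against reference hematocrits suffices. The sketch below shows that inverse-calibration step only; the reflectance values and the linear fit are illustrative assumptions, not the published calibration.

```python
import numpy as np

# Hedged sketch of single-wavelength Hct prediction: calibrate reflectance at the
# quasi-isosbestic wavelength against reference hematocrits, then invert.
# Calibration numbers are made up for illustration.

calib_hct = np.array([0.20, 0.30, 0.40, 0.50, 0.67])    # reference hematocrits
calib_refl = np.array([0.52, 0.45, 0.39, 0.34, 0.27])    # reflectance at the chosen wavelength

slope, intercept = np.polyfit(calib_refl, calib_hct, 1)  # straight-line calibration

def predict_hct(reflectance):
    return slope * reflectance + intercept

print(round(predict_hct(0.41), 3))
```

A Bland-Altman comparison of such predictions against whole-blood Hct, as described above, then quantifies bias and limits of agreement.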
Evaluation of various thrust calculation techniques on an F404 engine
NASA Technical Reports Server (NTRS)
Ray, Ronald J.
1990-01-01
In support of performance testing of the X-29A aircraft at the NASA-Ames, various thrust calculation techniques were developed and evaluated for use on the F404-GE-400 engine. The engine was thrust calibrated at NASA-Lewis. Results from these tests were used to correct the manufacturer's in-flight thrust program to more accurately calculate thrust for the specific test engine. Data from these tests were also used to develop an independent, simplified thrust calculation technique for real-time thrust calculation. Comparisons were also made to thrust values predicted by the engine specification model. Results indicate uninstalled gross thrust accuracies on the order of 1 to 4 percent for the various in-flight thrust methods. The various thrust calculations are described and their usage, uncertainty, and measured accuracies are explained. In addition, the advantages of a real-time thrust algorithm for flight test use and the importance of an accurate thrust calculation to the aircraft performance analysis are described. Finally, actual data obtained from flight test are presented.
SIMPLIFIED CALCULATION OF SOLAR FLUX ON THE SIDE WALL OF CYLINDRICAL CAVITY SOLAR RECEIVERS
NASA Technical Reports Server (NTRS)
Bhandari, P.
1994-01-01
The Simplified Calculation of Solar Flux Distribution on the Side Wall of Cylindrical Cavity Solar Receivers program employs a simple solar flux calculation algorithm for a cylindrical cavity type solar receiver. Applications of this program include the study of solar energy, heat transfer, and space power-solar dynamics engineering. The aperture plate of the receiver is assumed to be located in the focal plane of a paraboloidal concentrator, and the geometry is assumed to be axisymmetric. The concentrator slope error is assumed to be the only surface error; it is assumed that there are no pointing or misalignment errors. Using cone optics, the contour error method is utilized to handle the slope error of the concentrator. The flux distribution on the side wall is calculated by integration of the energy incident from cones emanating from all the differential elements on the concentrator. The calculations are done for any set of dimensions and properties of the receiver and the concentrator, and account for any spillover on the aperture plate. The results of this algorithm compared excellently with those predicted by more complicated programs. Because of the utilization of axial symmetry and overall simplification, it is extremely fast. It can be easily extended to other axi-symmetric receiver geometries. The program was written in Fortran 77, compiled using a Ryan McFarland compiler, and run on an IBM PC-AT with a math coprocessor. It requires 60K of memory and has been implemented under MS-DOS 3.2.1. The program was developed in 1988.
A novel implementation of homodyne time interval analysis method for primary vibration calibration
NASA Astrophysics Data System (ADS)
Sun, Qiao; Zhou, Ling; Cai, Chenguang; Hu, Hongbo
2011-12-01
In this paper, the shortcomings of the conventional homodyne time interval analysis (TIA) method and their causes are described with respect to its software algorithm and hardware implementation, based on which a simplified TIA method is proposed with the help of virtual instrument technology. Equipped with an ordinary Michelson interferometer and a dual-channel synchronous data acquisition card, the primary vibration calibration system using the simplified method can perform measurements of the complex sensitivity of accelerometers accurately, meeting the uncertainty requirements laid down in the pertaining ISO standard. The validity and accuracy of the simplified TIA method are verified by simulation and comparison experiments, with its performance analyzed. This simplified method is recommended for national metrology institutes of developing countries and industrial primary vibration calibration labs because of its simplified algorithm and low hardware requirements.
Inclusion of Structural Flexibility in Design Load Analysis for Wave Energy Converters: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Yi; Yu, Yi-Hsiang; van Rij, Jennifer A
2017-08-14
Hydroelastic interactions, caused by ocean wave loading on wave energy devices with deformable structures, are studied in the time domain. A midfidelity, hybrid modeling approach of rigid-body and flexible-body dynamics is developed and implemented in an open-source simulation tool for wave energy converters (WEC-Sim) to simulate the dynamic responses of wave energy converter component structural deformations under wave loading. A generalized coordinate system, including degrees of freedom associated with rigid bodies, structural modes, and constraints connecting multiple bodies, is utilized. A simplified method of calculating stress loads and sectional bending moments is implemented, with the purpose of sizing and designing wave energy converters. Results calculated using the method presented are verified with those of high-fidelity fluid-structure interaction simulations, as well as low-fidelity, frequency-domain, boundary element method analysis.
Recent advances in QM/MM free energy calculations using reference potentials☆
Duarte, Fernanda; Amrein, Beat A.; Blaha-Nelson, David; Kamerlin, Shina C.L.
2015-01-01
Background: Recent years have seen enormous progress in the development of methods for modeling (bio)molecular systems. This has allowed for the simulation of ever larger and more complex systems. However, as such complexity increases, the requirements needed for these models to be accurate and physically meaningful become more and more difficult to fulfill. The use of simplified models to describe complex biological systems has long been shown to be an effective way to overcome some of the limitations associated with this computational cost in a rational way. Scope of review: Hybrid QM/MM approaches have rapidly become one of the most popular computational tools for studying chemical reactivity in biomolecular systems. However, the high cost involved in performing high-level QM calculations has limited the applicability of these approaches when calculating free energies of chemical processes. In this review, we present some of the advances in using reference potentials and mean field approximations to accelerate high-level QM/MM calculations. We present illustrative applications of these approaches and discuss challenges and future perspectives for the field. Major conclusions: The use of physically-based simplifications has been shown to effectively reduce the cost of high-level QM/MM calculations. In particular, lower-level reference potentials enable one to reduce the cost of expensive free energy calculations, thus expanding the scope of problems that can be addressed. General significance: As was already demonstrated 40 years ago, the usage of simplified models still allows one to obtain cutting edge results with substantially reduced computational cost. This article is part of a Special Issue entitled Recent developments of molecular dynamics.
NASA Technical Reports Server (NTRS)
Gracey, William
1948-01-01
A simplified compound-pendulum method for the experimental determination of the moments of inertia of airplanes about the x and y axes is described. The method is developed as a modification of the standard pendulum method reported previously in NACA report, NACA-467. A brief review of the older method is included to form a basis for discussion of the simplified method. (author)
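The compound-pendulum relation behind such measurements is standard: the swing period gives the moment of inertia about the pivot, and the parallel-axis theorem transfers it to the center of gravity. The short sketch below shows that relation with invented numbers; it is not taken from the NACA report and omits the corrections (air and suspension effects) a real test requires.

```python
import math

G = 9.80665  # m/s^2

def moment_of_inertia_about_cg(mass_kg, period_s, pivot_to_cg_m):
    """Compound pendulum: I_pivot = m*g*d*T^2 / (4*pi^2); then subtract m*d^2
    (parallel-axis theorem) to refer the result to the centre of gravity."""
    i_pivot = mass_kg * G * pivot_to_cg_m * period_s**2 / (4.0 * math.pi**2)
    return i_pivot - mass_kg * pivot_to_cg_m**2

# illustrative airplane swing: 1200 kg, 4.2 s period, c.g. 3 m below the knife edges
print(moment_of_inertia_about_cg(mass_kg=1200.0, period_s=4.2, pivot_to_cg_m=3.0))
```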
Coniferous canopy BRF simulation based on 3-D realistic scene.
Wang, Xin-Yun; Guo, Zhi-Feng; Qin, Wen-Han; Sun, Guo-Qing
2011-09-01
It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more practical for remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems is applied to render 3-D coniferous forest scenarios, and the RGM model is used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. The results of this study show that in most cases both agree well; at both the tree and forest level, the results are also good.
Coniferous Canopy BRF Simulation Based on 3-D Realistic Scene
NASA Technical Reports Server (NTRS)
Wang, Xin-yun; Guo, Zhi-feng; Qin, Wen-han; Sun, Guo-qing
2011-01-01
It is difficult for computer simulation methods to study the radiation regime at large scales. A simplified coniferous model was investigated in the present study. It makes computer simulation methods such as L-systems and the radiosity-graphics combined method (RGM) more practical for remote sensing of heterogeneous coniferous forests over a large-scale region. L-systems is applied to render 3-D coniferous forest scenarios, and the RGM model is used to calculate the BRF (bidirectional reflectance factor) in the visible and near-infrared regions. The results of this study show that in most cases both agree well; at both the tree and forest level, the results are also good.
NECAP 4.1: NASA's Energy-Cost Analysis Program fast input manual and example
NASA Technical Reports Server (NTRS)
Jensen, R. N.; Miner, D. L.
1982-01-01
NASA's Energy-Cost Analysis Program (NECAP) is a powerful computerized method to determine and to minimize building energy consumption. The program calculates hourly heat gain or losses taking into account the building thermal resistance and mass, using hourly weather and a response factor method. Internal temperatures are allowed to vary in accordance with thermostat settings and equipment capacity. NECAP 4.1 has a simplified input procedure and numerous other technical improvements. A very short input method is provided. It is limited to a single zone building. The user must still describe the building's outside geometry and select the type of system to be used.
Ice Cores Dating With a New Inverse Method Taking Account of the Flow Modeling Errors
NASA Astrophysics Data System (ADS)
Lemieux-Dudon, B.; Parrenin, F.; Blayo, E.
2007-12-01
Deep ice cores extracted from Antarctica or Greenland record a wide range of past climatic events. In order to contribute to the understanding of the Quaternary climate system, the calculation of an accurate depth-age relationship is crucial. Up to now, ice chronologies for deep ice cores estimated with inverse approaches have been based on quite simplified ice-flow models that fail to reproduce flow irregularities and consequently to respect all available sets of age markers. We describe in this paper a new inverse method that takes into account the model uncertainty in order to circumvent the restrictions linked to the use of simplified flow models. This method uses first guesses of two physical flow quantities, the ice thinning function and the accumulation rate, and then identifies correction functions for both. We highlight two major benefits brought by this new method: first, the ability to respect a large set of observations and, as a consequence, the feasibility of estimating a synchronized common ice chronology for several cores at the same time. This inverse approach relies on a Bayesian framework. To respect the positive constraint on the searched correction functions, we assume lognormal probability distributions for the background errors as well as for one particular set of the observation errors. We test this new inversion method on three cores simultaneously (the two EPICA cores, DC and DML, and the Vostok core) and assimilate more than 150 observations (e.g., age markers, stratigraphic links, ...). We analyze the sensitivity of the solution with respect to the background information, especially the prior error covariance matrix. The confidence intervals, based on the posterior covariance matrix calculation, are estimated for the correction functions and, for the first time, for the overall output chronologies.
Frenning, Göran
2015-01-01
When the discrete element method (DEM) is used to simulate confined compression of granular materials, the need arises to estimate the void space surrounding each particle with Voronoi polyhedra. This entails recurring Voronoi tessellation with small changes in the geometry, resulting in a considerable computational overhead. To overcome this limitation, we propose a method with the following features:
• A local determination of the polyhedron volume is used, which considerably simplifies implementation of the method.
• A linear approximation of the polyhedron volume is utilised, with intermittent exact volume calculations when needed.
• The method allows highly accurate volume estimates to be obtained at a considerably reduced computational cost.
NASA Technical Reports Server (NTRS)
Armstrong, G. P.; Carlier, S. G.; Fukamachi, K.; Thomas, J. D.; Marwick, T. H.
1999-01-01
OBJECTIVES: To validate a simplified estimate of peak power (SPP) against true (invasively measured) peak instantaneous power (TPP), to assess the feasibility of measuring SPP during exercise and to correlate this with functional capacity. DESIGN: Development of a simplified method of measurement and observational study. SETTING: Tertiary referral centre for cardiothoracic disease. SUBJECTS: For validation of SPP with TPP, seven normal dogs and four dogs with dilated cardiomyopathy were studied. To assess feasibility and clinical significance in humans, 40 subjects were studied (26 patients; 14 normal controls). METHODS: In the animal validation study, TPP was derived from ascending aortic pressure and flow probe, and from Doppler measurements of flow. SPP, calculated using the different flow measures, was compared with peak instantaneous power under different loading conditions. For the assessment in humans, SPP was measured at rest and during maximum exercise. Peak aortic flow was measured with transthoracic continuous wave Doppler, and systolic and diastolic blood pressures were derived from brachial sphygmomanometry. The difference between exercise and rest simplified peak power (Delta SPP) was compared with maximum oxygen uptake (VO(2)max), measured from expired gas analysis. RESULTS: SPP estimates using peak flow measures correlated well with true peak instantaneous power (r = 0.89 to 0.97), despite marked changes in systemic pressure and flow induced by manipulation of loading conditions. In the human study, VO(2)max correlated with Delta SPP (r = 0.78) better than Delta ejection fraction (r = 0.18) and Delta rate-pressure product (r = 0.59). CONCLUSIONS: The simple product of mean arterial pressure and peak aortic flow (simplified peak power, SPP) correlates with peak instantaneous power over a range of loading conditions in dogs. In humans, it can be estimated during exercise echocardiography, and correlates with maximum oxygen uptake better than ejection fraction or rate-pressure product.
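The simplified peak power index described above is just the product of mean arterial pressure and peak aortic flow, with Delta SPP the exercise-minus-rest difference. The sketch below shows that arithmetic with the usual one-third pulse-pressure estimate of mean arterial pressure and standard unit conversions; the blood pressure and flow values are illustrative, not data from the study.

```python
# Sketch of simplified peak power (SPP): mean arterial pressure x peak aortic flow.
# Illustrative clinical numbers only.

def mean_arterial_pressure(systolic_mmhg, diastolic_mmhg):
    """Standard estimate: MAP = diastolic + (systolic - diastolic) / 3."""
    return diastolic_mmhg + (systolic_mmhg - diastolic_mmhg) / 3.0

def simplified_peak_power_w(systolic_mmhg, diastolic_mmhg, peak_aortic_flow_l_min):
    map_pa = mean_arterial_pressure(systolic_mmhg, diastolic_mmhg) * 133.322  # mmHg -> Pa
    flow_m3_s = peak_aortic_flow_l_min / 60.0 / 1000.0                        # L/min -> m^3/s
    return map_pa * flow_m3_s                                                 # watts

rest = simplified_peak_power_w(120, 80, 18.0)
exercise = simplified_peak_power_w(170, 85, 35.0)
print(rest, exercise, exercise - rest)   # the last value is Delta SPP
```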
NASA Astrophysics Data System (ADS)
Zolfaghari, M. R.; Ajamy, A.; Asgarian, B.
2015-12-01
The primary goal of seismic reassessment procedures in oil platform codes is to determine the reliability of a platform under extreme earthquake loading. Therefore, in this paper, a simplified method is proposed to assess the seismic performance of existing jacket-type offshore platforms (JTOP) in the regime ranging from near-elastic response to global collapse. The simplified method exploits the good agreement between the static pushover (SPO) curve and the entire summarized interaction incremental dynamic analysis (CI-IDA) curve of the platform. Although the CI-IDA method offers better understanding and better modelling of the phenomenon, it is a time-consuming and challenging task. To overcome the challenges, the simplified procedure, a fast and accurate approach, is introduced based on SPO analysis. Then, an existing JTOP in the Persian Gulf is presented to illustrate the procedure, and finally a comparison is made between the simplified method and the CI-IDA results. The simplified method is very informative and practical for current engineering purposes. It is able to predict seismic performance from elasticity to global dynamic instability with reasonable accuracy and little computational effort.
The Sternheimer-GW method and the spectral signatures of plasmonic polarons
NASA Astrophysics Data System (ADS)
Giustino, Feliciano
During the past three decades the GW method has emerged among the most promising electronic structure techniques for predictive calculations of quasiparticle band structures. In order to simplify the GW work-flow while at the same time improving the calculation accuracy, we developed the Sternheimer-GW method. In Sternheimer-GW both the screened Coulomb interaction and the electron Green's function are evaluated by using exclusively occupied Kohn-Sham states, as in density-functional perturbation theory. In this talk I will review the basics of Sternheimer-GW, and I will discuss two recent applications to semiconductors and superconductors. In the case of semiconductors we calculated complete energy- and momentum-resolved spectral functions by combining Sternheimer-GW with the cumulant expansion approach. This study revealed the existence of band structure replicas which arise from electron-plasmon interactions. In the case of superconductors we calculated the Coulomb pseudo-potential from first principles, and combined this approach with the Eliashberg theory of the superconducting critical temperature. This work was supported by the Leverhulme Trust (RL-2012-001), the European Research Council (EU FP7/ERC 239578), the UK Engineering and Physical Sciences Research Council (EP/J009857/1), and the Graphene Flagship (EU FP7/604391).
A feedback control strategy for the airfoil system under non-Gaussian colored noise excitation.
Huang, Yong; Tao, Gang
2014-09-01
The stability of a binary airfoil with feedback control under stochastic disturbances, a non-Gaussian colored noise, is studied in this paper. First, based on some approximation theories and methods, the non-Gaussian colored noise is simplified to an Ornstein-Uhlenbeck process. Furthermore, via the stochastic averaging method and the logarithmic polar transformation, a one-dimensional diffusion process can be obtained. Finally, by applying the boundary conditions, the largest Lyapunov exponent, which can determine the almost-sure stability of the system, and the effective region of the control parameters are calculated.
A feedback control strategy for the airfoil system under non-Gaussian colored noise excitation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Yong; Tao, Gang
2014-09-01
The stability of a binary airfoil with feedback control under stochastic disturbances, a non-Gaussian colored noise, is studied in this paper. First, based on some approximation theories and methods, the non-Gaussian colored noise is simplified to an Ornstein-Uhlenbeck process. Furthermore, via the stochastic averaging method and the logarithmic polar transformation, a one-dimensional diffusion process can be obtained. Finally, by applying the boundary conditions, the largest Lyapunov exponent, which can determine the almost-sure stability of the system, and the effective region of the control parameters are calculated.
Analytical research on impacting load of aircraft crashing upon moveable concrete target
NASA Astrophysics Data System (ADS)
Zhu, Tong; Ou, Zhuocheng; Duan, Zhuoping; Huang, Fenglei
2018-03-01
The impact load of an aircraft crashing upon a moveable concrete target was analyzed in this paper by both theoretical and numerical methods. The aircraft was simplified as a one-dimensional rod, and stress-wave theory was used to deduce a new formula. Furthermore, to compare with previous experimental data, a numerical calculation based on the new formula was carried out, which showed good agreement with the experimental data. The approach, a new formula combined with a particular numerical method, can predict not only the impact load but also the deviation between moveable and static concrete targets.
A control-volume method for analysis of unsteady thrust augmenting ejector flows
NASA Technical Reports Server (NTRS)
Drummond, Colin K.
1988-01-01
A method for predicting transient thrust augmenting ejector characteristics is presented. The analysis blends classic self-similar turbulent jet descriptions with a control volume mixing region discretization to solicit transient effects in a new way. Division of the ejector into an inlet, diffuser, and mixing region corresponds with the assumption of viscous-dominated phenomenon in the latter. Inlet and diffuser analyses are simplified by a quasi-steady analysis, justified by the assumptions that pressure is the forcing function in those regions. Details of the theoretical foundation, the solution algorithm, and sample calculations are given.
Czakó, Gábor; Szalay, Viktor; Császár, Attila G
2006-01-07
The currently most efficient finite basis representation (FBR) method [Corey et al., in Numerical Grid Methods and Their Applications to Schrodinger Equation, NATO ASI Series C, edited by C. Cerjan (Kluwer Academic, New York, 1993), Vol. 412, p. 1; Bramley et al., J. Chem. Phys. 100, 6175 (1994)] designed specifically to deal with nondirect product bases of structures phi_n^(l)(s) f^(l)(u), chi_m^(l)(t) phi_n^(l)(s) f^(l)(u), etc., employs very special l-independent grids and results in a symmetric FBR. While highly efficient, this method is not general enough. For instance, it cannot deal with nondirect product bases of the above structure efficiently if the functions phi_n^(l)(s) [and/or chi_m^(l)(t)] are discrete variable representation (DVR) functions of the infinite type. The optimal-generalized FBR(DVR) method [V. Szalay, J. Chem. Phys. 105, 6940 (1996)] is designed to deal with general, i.e., direct and/or nondirect product, bases and grids. This robust method, however, is too general, and its direct application can result in inefficient computer codes [Czako et al., J. Chem. Phys. 122, 024101 (2005)]. It is shown here how the optimal-generalized FBR method can be simplified in the case of nondirect product bases of structures phi_n^(l)(s) f^(l)(u), chi_m^(l)(t) phi_n^(l)(s) f^(l)(u), etc. As a result the commonly used symmetric FBR is recovered and simplified nonsymmetric FBRs utilizing very special l-dependent grids are obtained. The nonsymmetric FBRs are more general than the symmetric FBR in that they can be employed efficiently even when the functions phi_n^(l)(s) [and/or chi_m^(l)(t)] are DVR functions of the infinite type. Arithmetic operation counts and a simple numerical example presented show unambiguously that setting up the Hamiltonian matrix requires significantly less computer time when using one of the proposed nonsymmetric FBRs than that in the symmetric FBR. Therefore, application of this nonsymmetric FBR is more efficient than that of the symmetric FBR when one wants to diagonalize the Hamiltonian matrix either by a direct or via a basis-set contraction method. Enormous decrease of computer time can be achieved, with respect to a direct application of the optimal-generalized FBR, by employing one of the simplified nonsymmetric FBRs as is demonstrated in noniterative calculations of the low-lying vibrational energy levels of the H3+ molecular ion. The arithmetic operation counts of the Hamiltonian matrix vector products and the properties of a recently developed diagonalization method [Andreozzi et al., J. Phys. A Math. Gen. 35, L61 (2002)] suggest that the nonsymmetric FBR applied along with this particular diagonalization method is suitable to large scale iterative calculations. Whether or not the nonsymmetric FBR is competitive with the symmetric FBR in large-scale iterative calculations still has to be investigated numerically.
NASA Astrophysics Data System (ADS)
Kartashov, Dmitry; Shurshakov, Vyacheslav
2018-03-01
A ray-tracing method to calculate radiation exposure levels of astronauts at different spacecraft shielding configurations has been developed. The method uses simplified shielding geometry models of the spacecraft compartments together with depth-dose curves. The depth-dose curves can be obtained with different space radiation environment models and radiation transport codes. The spacecraft shielding configurations are described by a set of geometry objects. To calculate the shielding probability functions for each object its surface is composed from a set of the disjoint adjacent triangles that fully cover the surface. Such description can be applied for any complex shape objects. The method is applied to the space experiment MATROSHKA-R modeling conditions. The experiment has been carried out onboard the ISS from 2004 to 2016. Dose measurements were realized in the ISS compartments with anthropomorphic and spherical phantoms, and the protective curtain facility that provides an additional shielding on the crew cabin wall. The space ionizing radiation dose distributions in tissue-equivalent spherical and anthropomorphic phantoms and for an additional shielding installed in the compartment are calculated. There is agreement within accuracy of about 15% between the data obtained in the experiment and calculated ones. Thus the calculation method used has been successfully verified with the MATROSHKA-R experiment data. The ray-tracing radiation dose calculation method can be recommended for estimation of dose distribution in astronaut body in different space station compartments and for estimation of the additional shielding efficiency, especially when exact compartment shielding geometry and the radiation environment for the planned mission are not known.
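A minimal sketch of the ray-tracing idea just described, under simplifying assumptions (isotropic ray sampling, a user-supplied routine that returns the shielding thickness traversed along a ray, and a tabulated depth-dose curve); the function and argument names are illustrative, not from the paper:

```python
import numpy as np

def dose_at_point(point, shield_thickness_along, depth_dose, n_rays=2000, rng=None):
    """Average a depth-dose curve over isotropically sampled ray directions.
    depth_dose is an (N, 2) array of (shield depth, dose), with depth increasing."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(-1.0, 1.0, n_rays)              # cosine of the polar angle
    phi = rng.uniform(0.0, 2.0 * np.pi, n_rays)
    s = np.sqrt(1.0 - u ** 2)
    directions = np.column_stack((s * np.cos(phi), s * np.sin(phi), u))
    doses = []
    for d in directions:
        depth = shield_thickness_along(point, d)    # e.g. g/cm^2 found by tracing the shielding geometry
        doses.append(np.interp(depth, depth_dose[:, 0], depth_dose[:, 1]))
    return float(np.mean(doses))
```

Here `shield_thickness_along` stands in for the triangle-intersection ray tracer over the compartment geometry described in the abstract.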
NASA Astrophysics Data System (ADS)
Watrous, Mitchell James
1997-12-01
A new approach to the Green's-function method for the calculation of equilibrium densities within the finite temperature, Kohn-Sham formulation of density functional theory is presented, which extends the method to all temperatures. The contour of integration in the complex energy plane is chosen such that the density is given by a sum of Green's function differences evaluated at the Matsubara frequencies, rather than by the calculation and summation of Kohn-Sham single-particle wave functions. The Green's functions are written in terms of their spectral representation and are calculated as the solutions of their defining differential equations. These differential equations are boundary value problems as opposed to the standard eigenvalue problems. For large values of the complex energy, the differential equations are further simplified from second to first order by writing the Green's functions in terms of logarithmic derivatives. An asymptotic expression for the Green's functions is derived, which allows the sum over Matsubara poles to be approximated. The method is applied to the screening of nuclei by electrons in finite temperature plasmas. To demonstrate the method's utility, and to illustrate its advantages, the results of previous wave function type calculations for protons and neon nuclei are reproduced. The method is also used to formulate a new screening model for fusion reactions in the solar core, and the predicted reaction rate enhancement factors are compared with existing models.
SF-FDTD analysis of a predictive physical model for parallel aligned liquid crystal devices
NASA Astrophysics Data System (ADS)
Márquez, Andrés.; Francés, Jorge; Martínez, Francisco J.; Gallego, Sergi; Alvarez, Mariela L.; Calzado, Eva M.; Pascual, Inmaculada; Beléndez, Augusto
2017-08-01
Recently we demonstrated a novel and simplified model that enables calculation of the voltage-dependent retardance provided by parallel aligned liquid crystal devices (PA-LCoS) for a very wide range of incidence angles and any wavelength in the visible. To our knowledge it represents the most simplified approach still showing predictive capability. Deeper insight into the physics behind the simplified model is necessary to understand whether the parameters in the model are physically meaningful. Since the PA-LCoS is a black box where we do not have information about the physical parameters of the device, we cannot perform this kind of analysis using the experimental retardance measurements. In this work we develop realistic simulations for the non-linear tilt of the liquid crystal director across the thickness of the liquid crystal layer in the PA devices. We consider these profiles to have a sine-like shape, which is a good approximation for typical ranges of applied voltage in commercial PA-LCoS microdisplays. For these simulations we develop a rigorous method based on the split-field finite difference time domain (SF-FDTD) technique which provides realistic retardance values. These values are used as the experimental measurements to which the simplified model is fitted. From this analysis we learn that the simplified model is very robust, providing unambiguous solutions when fitting its parameters. We also learn that two of the parameters in the model are physically meaningful, providing a useful reverse-engineering approach, with predictive capability, to probe into internal characteristics of the PA-LCoS device.
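As an illustration of how a sine-like tilt profile translates into a retardance value, the sketch below integrates the effective extraordinary index over the cell thickness for normal incidence; the birefringence, thickness, and maximum tilt are placeholder values, not parameters of the device studied in the paper:

```python
import numpy as np

def retardance_opd_nm(theta_max_deg, d_nm=2000.0, n_o=1.50, n_e=1.70, n_z=400):
    """Optical path difference (nm) for a parallel-aligned cell whose director tilt
    follows theta(z) = theta_max * sin(pi * z / d) across the thickness d."""
    z = np.linspace(0.0, d_nm, n_z)
    theta = np.radians(theta_max_deg) * np.sin(np.pi * z / d_nm)
    # Effective index seen by the extraordinary wave at normal incidence,
    # with theta measured from the cell plane.
    n_eff = 1.0 / np.sqrt(np.sin(theta) ** 2 / n_o ** 2 + np.cos(theta) ** 2 / n_e ** 2)
    return float(np.sum(n_eff - n_o) * (z[1] - z[0]))

print(retardance_opd_nm(0.0), retardance_opd_nm(45.0))   # retardance drops as the maximum tilt grows
```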
Spin-density functional theory treatment of He+-He collisions
NASA Astrophysics Data System (ADS)
Baxter, Matthew; Kirchner, Tom; Engel, Eberhard
2016-09-01
The He+-He collision system presents an interesting challenge to theory. On one hand, a full treatment of the three-electron dynamics constitutes a massive computational problem that has not been attempted yet; on the other hand, simplified independent-particle-model based descriptions may only provide partial information on either the transitions of the initial target electrons or on the transitions of the projectile electron, depending on the choice of atomic model potentials. We address the He+-He system within the spin-density functional theory framework on the exchange-only level. The Krieger-Li-Iafrate (KLI) approximation is used to calculate the exchange potentials for the spin-up and spin-down electrons, which ensures the correct asymptotic behavior of the effective (Kohn-Sham) potential consisting of exchange, Hartree and nuclear Coulomb potentials. The orbitals are propagated with the two-center basis generator method. In each time step, simplified versions of them are fed into the KLI equations to calculate the Kohn-Sham potential, which, in turn, is used to generate the orbitals in the next time step. First results for the transitions of all electrons and the resulting charge-changing total cross sections will be presented at the conference. Work supported by NSERC, Canada.
De Rosario, Helios; Page, Alvaro; Mata, Vicente
2014-05-07
This paper proposes a variation of the instantaneous helical pivot technique for locating centers of rotation. The point of optimal kinematic error (POKE), which minimizes the velocity at the center of rotation, may be obtained by just adding a weighting factor equal to the square of angular velocity in Woltring's equation of the pivot of instantaneous helical axes (PIHA). Calculations are simplified with respect to the original method, since it is not necessary to make explicit calculations of the helical axis, and the effect of accidental errors is reduced. The improved performance of this method was validated by simulations based on a functional calibration task for the gleno-humeral joint center. Noisy data caused a systematic dislocation of the calculated center of rotation towards the center of the arm marker cluster. This error in PIHA could even exceed the effect of soft tissue artifacts associated with small and medium deformations, but it was successfully reduced by the POKE estimation. Copyright © 2014 Elsevier Ltd. All rights reserved.
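A generic least-squares reading of the POKE criterion (choose the fixed point whose predicted velocity is smallest over the trial), written without the explicit helical-axis computation; this is an illustrative formulation, not the exact expression derived in the paper:

```python
import numpy as np

def skew(w):
    """Cross-product matrix so that skew(w) @ x == np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def poke_center(p, v, omega):
    """Center of rotation c minimizing sum_t |v(t) + omega(t) x (c - p(t))|^2.
    p, v, omega: (T, 3) arrays of marker position, marker velocity and angular velocity."""
    A = np.vstack([skew(w) for w in omega])
    b = np.concatenate([np.cross(w, q) - u for w, q, u in zip(omega, p, v)])
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return c
```

Because each row of the system scales with the angular velocity, frames with larger angular velocity automatically carry more weight in the squared residual, mirroring the weighting by the square of angular velocity described above.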
Numerical approach for unstructured quantum key distribution
Coles, Patrick J.; Metodiev, Eric M.; Lütkenhaus, Norbert
2016-01-01
Quantum key distribution (QKD) allows for communication with security guaranteed by quantum theory. The main theoretical problem in QKD is to calculate the secret key rate for a given protocol. Analytical formulas are known for protocols with symmetries, since symmetry simplifies the analysis. However, experimental imperfections break symmetries, hence the effect of imperfections on key rates is difficult to estimate. Furthermore, it is an interesting question whether (intentionally) asymmetric protocols could outperform symmetric ones. Here we develop a robust numerical approach for calculating the key rate for arbitrary discrete-variable QKD protocols. Ultimately this will allow researchers to study ‘unstructured’ protocols, that is, those that lack symmetry. Our approach relies on transforming the key rate calculation to the dual optimization problem, which markedly reduces the number of parameters and hence the calculation time. We illustrate our method by investigating some unstructured protocols for which the key rate was previously unknown. PMID:27198739
An approximate methods approach to probabilistic structural analysis
NASA Technical Reports Server (NTRS)
Mcclung, R. C.; Millwater, H. R.; Wu, Y.-T.; Thacker, B. H.; Burnside, O. H.
1989-01-01
A major research and technology program in Probabilistic Structural Analysis Methods (PSAM) is currently being sponsored by the NASA Lewis Research Center with Southwest Research Institute as the prime contractor. This program is motivated by the need to accurately predict structural response in an environment where the loadings, the material properties, and even the structure may be considered random. The heart of PSAM is a software package which combines advanced structural analysis codes with a fast probability integration (FPI) algorithm for the efficient calculation of stochastic structural response. The basic idea of PSAM is simple: make an approximate calculation of system response, including calculation of the associated probabilities, with minimal computation time and cost, based on a simplified representation of the geometry, loads, and material. The resulting deterministic solution should give a reasonable and realistic description of performance-limiting system responses, although some error will be inevitable. If the simple model has correctly captured the basic mechanics of the system, however, including the proper functional dependence of stress, frequency, etc. on design parameters, then the response sensitivities calculated may be of significantly higher accuracy.
A new technique for calculations of binary stellar evolution, with application to magnetic braking
NASA Technical Reports Server (NTRS)
Rappaport, S.; Joss, P. C.; Verbunt, F.
1983-01-01
The development of appropriate computer programs has made it possible to conduct studies of stellar evolution which are more detailed and accurate than was previously feasible. However, the use of such programs can also entail some serious drawbacks which are related to the time and expense required for the work. One approach for overcoming these drawbacks involves the employment of simplified stellar evolution codes which incorporate the essential physics of the problem of interest without attempting either great generality or maximal accuracy. Rappaport et al. (1982) have developed a simplified code to study the evolution of close binary stellar systems composed of a collapsed object and a low-mass secondary. The present investigation is concerned with a more general, but still simplified, technique for calculating the evolution of close binary systems containing collapsed objects and mass-losing secondaries.
Measurement of toroidal vessel eddy current during plasma disruption on J-TEXT.
Liu, L J; Yu, K X; Zhang, M; Zhuang, G; Li, X; Yuan, T; Rao, B; Zhao, Q
2016-01-01
In this paper, we have employed a thin, printed circuit board eddy current array in order to determine the radial distribution of the azimuthal component of the eddy current density at the surface of a steel plate. The eddy current in the steel plate can be calculated by analytical methods under the simplifying assumptions that the steel plate is infinitely large and the exciting current is of uniform distribution. The measurement on the steel plate shows that this method has high spatial resolution. Then, we extended this methodology to a toroidal geometry with the objective of determining the poloidal distribution of the toroidal component of the eddy current density associated with plasma disruption in a fusion reactor called J-TEXT. The preliminary measured result is consistent with the analysis and calculation results on the J-TEXT vacuum vessel.
Calculation of the Full Scattering Amplitude without Partial Wave Decomposition II
NASA Technical Reports Server (NTRS)
Shertzer, J.; Temkin, A.
2003-01-01
As is well known, the full scattering amplitude can be expressed as an integral involving the complete scattering wave function. We have shown that the integral can be simplified and used in a practical way. Initial application to electron-hydrogen scattering without exchange was highly successful. The Schrodinger equation (SE) can be reduced to a 2d partial differential equation (pde), and was solved using the finite element method. We have now included exchange by solving the resultant SE, in the static exchange approximation. The resultant equation can be reduced to a pair of coupled pde's, to which the finite element method can still be applied. The resultant scattering amplitudes, both singlet and triplet, as a function of angle can be calculated for various energies. The results are in excellent agreement with converged partial wave results.
Camera System MTF: combining optic with detector
NASA Astrophysics Data System (ADS)
Andersen, Torben B.; Granger, Zachary A.
2017-08-01
MTF is one of the most common metrics used to quantify the resolving power of an optical component. Extensive literature is dedicated to describing methods to calculate the Modulation Transfer Function (MTF) for stand-alone optical components such as a camera lens or telescope, and some literature addresses approaches to determine an MTF for the combination of an optic with a detector. The formulations for a combined electro-optical system MTF are mostly based on theory and on the assumption that the detector MTF is described only by the pixel pitch, which does not account for wavelength dependencies. When working with real hardware, detectors are often characterized by testing MTF at discrete wavelengths. This paper presents a method to simplify the calculation of a polychromatic system MTF when it is permissible to consider the detector MTF to be independent of wavelength.
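A compact sketch of the simplification described: when the detector MTF can be treated as wavelength independent, the polychromatic system MTF is the spectrally weighted optical MTF multiplied once by the detector MTF. The diffraction and pixel-aperture models and all numbers below are generic placeholders, not the hardware characterized in the paper:

```python
import numpy as np

def diffraction_mtf(f_cyc_mm, wavelength_um, f_number):
    """Diffraction-limited incoherent MTF of a circular aperture."""
    cutoff = 1.0 / (wavelength_um * 1e-3 * f_number)          # cycles/mm
    x = np.clip(np.abs(f_cyc_mm) / cutoff, 0.0, 1.0)
    return (2.0 / np.pi) * (np.arccos(x) - x * np.sqrt(1.0 - x ** 2))

def pixel_mtf(f_cyc_mm, pitch_um):
    """Pixel-aperture MTF (np.sinc is sin(pi x)/(pi x))."""
    return np.abs(np.sinc(f_cyc_mm * pitch_um * 1e-3))

def system_mtf(f_cyc_mm, wavelengths_um, weights, f_number, pitch_um):
    w = np.asarray(weights, float)
    w = w / w.sum()
    poly_optics = sum(wi * diffraction_mtf(f_cyc_mm, lam, f_number)
                      for wi, lam in zip(w, wavelengths_um))
    return poly_optics * pixel_mtf(f_cyc_mm, pitch_um)        # detector applied once, outside the spectral sum

f = np.linspace(0.0, 100.0, 201)                              # spatial frequency, cycles/mm
mtf = system_mtf(f, wavelengths_um=[0.55, 0.65, 0.75], weights=[0.2, 0.5, 0.3],
                 f_number=4.0, pitch_um=5.0)
```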
Quantum Monte Carlo studies of solvated systems
NASA Astrophysics Data System (ADS)
Schwarz, Kathleen; Letchworth Weaver, Kendra; Arias, T. A.; Hennig, Richard G.
2011-03-01
Solvation qualitatively alters the energetics of diverse processes from protein folding to reactions on catalytic surfaces. An explicit description of the solvent in quantum-mechanical calculations requires both a large number of electrons and exploration of a large number of configurations in the phase space of the solvent. These problems can be circumvented by including the effects of solvent through a rigorous classical density-functional description of the liquid environment, thereby yielding free energies and thermodynamic averages directly, while eliminating the need for explicit consideration of the solvent electrons. We have implemented and tested this approach within the CASINO Quantum Monte Carlo code. Our method is suitable for calculations in any basis within CASINO, including b-spline and plane wave trial wavefunctions, and is equally applicable to molecules, surfaces, and crystals. For our preliminary test calculations, we use a simplified description of the solvent in terms of an isodensity continuum dielectric solvation approach, though the method is fully compatible with more reliable descriptions of the solvent we shall employ in the future.
Neuzil, C.E.; Cooley, C.; Silliman, Stephen E.; Bredehoeft, J.D.; Hsieh, P.A.
1981-01-01
In Part I a general analytical solution for the transient pulse test was presented. Part II presents a graphical method for analyzing data from a test to obtain the hydraulic properties of the sample. The general solution depends on both hydraulic conductivity and specific storage and, in theory, analysis of the data can provide values for both of these hydraulic properties. However, in practice, one of two limiting cases may apply in which case it is possible to calculate only hydraulic conductivity or the product of hydraulic conductivity times specific storage. In this paper we examine the conditions when both hydraulic parameters can be calculated. The analyses of data from two tests are presented. In Appendix I the general solution presented in Part I is compared with an earlier analysis, in which compressive storage in the sample is assumed negligible, and the error in calculated hydraulic conductivity due to this simplifying assumption is examined. © 1981.
Climate Leadership webinar on Greenhouse Gas Management Resources for Small Businesses
Small businesses can calculate their carbon footprint and construct a greenhouse gas inventory to help track progress towards reaching emissions reduction goals. One strategy for this is EPA's Simplified GHG Emissions Calculator.
NASA Astrophysics Data System (ADS)
Zhang, J.; Gao, Q.; Tan, S. J.; Zhong, W. X.
2012-10-01
A new method is proposed as a solution for the large-scale coupled vehicle-track dynamic model with nonlinear wheel-rail contact. The vehicle is simplified as a multi-rigid-body model, and the track is treated as a three-layer beam model. In the track model, the rail is assumed to be an Euler-Bernoulli beam supported by discrete sleepers. The vehicle model and the track model are coupled using Hertzian nonlinear contact theory, and the contact forces of the vehicle subsystem and the track subsystem are approximated by the Lagrange interpolation polynomial. The response of the large-scale coupled vehicle-track model is calculated using the precise integration method. A more efficient algorithm based on the periodic property of the track is applied to calculate the exponential matrix and certain matrices related to the solution of the track subsystem. Numerical examples demonstrate the computational accuracy and efficiency of the proposed method.
A new parametric method to smooth time-series data of metabolites in metabolic networks.
Miyawaki, Atsuko; Sriyudthsak, Kansuporn; Hirai, Masami Yokota; Shiraishi, Fumihide
2016-12-01
Mathematical modeling of large-scale metabolic networks usually requires smoothing of metabolite time-series data to account for measurement or biological errors. Accordingly, the accuracy of smoothing curves strongly affects the subsequent estimation of model parameters. Here, an efficient parametric method is proposed for smoothing metabolite time-series data, and its performance is evaluated. To simplify parameter estimation, the method uses S-system-type equations with simple power law-type efflux terms. Iterative calculation using this method was found to readily converge, because parameters are estimated stepwise. Importantly, smoothing curves are determined so that metabolite concentrations satisfy mass balances. Furthermore, the slopes of smoothing curves are useful in estimating parameters, because they are probably close to their true behaviors regardless of errors that may be present in the actual data. Finally, calculations for each differential equation were found to converge in much less than one second if initial parameters are set at appropriate (guessed) values. Copyright © 2016 Elsevier Inc. All rights reserved.
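A small sketch of the idea of smoothing one metabolite trace by fitting an S-system-type equation with a power-law efflux term; the mass-balance coupling between metabolites and the stepwise estimation scheme of the paper are not reproduced, and the constant influx is an assumption made here only for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import curve_fit

def smooth_metabolite(t, x_noisy, influx=1.0):
    """Fit dx/dt = influx - alpha * x**g to noisy data and return the fitted
    (smoothed) trajectory together with the estimated (x0, alpha, g)."""
    def model(t_eval, x0, alpha, g):
        sol = solve_ivp(lambda _, x: influx - alpha * np.abs(x) ** g,
                        (t_eval[0], t_eval[-1]), [x0], t_eval=t_eval, rtol=1e-8)
        return sol.y[0]
    p0 = [max(float(x_noisy[0]), 1e-6), 1.0, 1.0]
    popt, _ = curve_fit(model, t, x_noisy, p0=p0, bounds=(0.0, np.inf))
    return model(t, *popt), popt
```

The slope of the fitted curve, influx - alpha*x**g, is then available analytically, which is the kind of slope information the abstract notes is useful for subsequent parameter estimation.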
NASA Astrophysics Data System (ADS)
Wang, Wei; Shen, Jianqi
2018-06-01
The use of a shaped beam in applications relying on light scattering depends strongly on the ability to evaluate the beam shape coefficients (BSCs) effectively. Numerical techniques for evaluating the BSCs of a shaped beam, such as the quadrature, localized approximation (LA), and integral localized approximation (ILA) methods, have been developed within the framework of generalized Lorenz-Mie theory (GLMT). The quadrature methods usually employ two- or three-dimensional integrations. In this work, the expressions of the BSCs for an elliptical Gaussian beam (EGB) are simplified into a one-dimensional integral so as to speed up the numerical computation. Numerical results of the BSCs are used to reconstruct the beam field, and the fidelity of the reconstructed field to the given beam field is estimated. It is demonstrated that the proposed method is much faster than the two-dimensional integrations and acquires more accurate results than the LA method. Limitations of the quadrature method and of the LA method in the numerical calculation are analyzed in detail.
Uniform semiclassical sudden approximation for rotationally inelastic scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Korsch, H.J.; Schinke, R.
1980-08-01
The infinite-order-sudden (IOS) approximation is investigated in the semiclassical limit. A simplified IOS formula for rotationally inelastic differential cross sections is derived involving a uniform stationary phase approximation for two-dimensional oscillatory integrals with two stationary points. The semiclassical analysis provides a quantitative description of the rotational rainbow structure in the differential cross section. The numerical calculation of semiclassical IOS cross sections is extremely fast compared to numerically exact IOS methods, especially if high Δj transitions are involved. Rigid rotor results for He-Na₂ collisions with Δj ≲ 26 and for K-CO collisions with Δj ≲ 70 show satisfactory agreement with quantal IOS calculations.
A scattering model for rain depolarization
NASA Technical Reports Server (NTRS)
Wiley, P. H.; Stutzman, W. L.; Bostian, C. W.
1973-01-01
A method is presented for calculating the amount of depolarization caused by precipitation for a propagation path. In the model the effects of each scatterer and their interactions are accounted for by using a series of simplifying steps. It is necessary only to know the forward scattering properties of a single scatterer. For the case of rain, the model's results for attenuation, differential phase shift, and cross polarization agree very well with those of the only other available model, which provides differential attenuation and differential phase shift. Calculations presented here show that horizontal polarization is more sensitive to depolarization than is vertical polarization for small rain drop canting angle changes. This effect increases with increasing path length.
Convective and morphological instabilities during crystal growth: Effect of gravity modulation
NASA Technical Reports Server (NTRS)
Coreill, S. R.; Murray, B. T.; Mcfadden, G. B.; Wheeler, A. A.; Saunders, B. V.
1992-01-01
During directional solidification of a binary alloy at constant velocity in the vertical direction, morphological and convective instabilities may occur due to the temperature and solute gradients associated with the solidification process. The effect of time-periodic modulation (vibration) is studied by considering a vertical gravitational acceleration which is sinusoidal in time. The conditions for the onset of solutal convection are calculated numerically, employing two distinct computational procedures based on Floquet theory. In general, a stable state can be destabilized by modulation and an unstable state can be stabilized. In the limit of high frequency modulation, the method of averaging and multiple-scale asymptotic analysis can be used to simplify the calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, F; Park, J; Barraclough, B
2016-06-15
Purpose: To develop an efficient and accurate independent dose calculation algorithm with a simplified analytical source model for the quality assurance and safe delivery of Flattening Filter Free (FFF)-IMRT on an Elekta Versa HD. Methods: The source model consisted of a point source and a 2D bivariate Gaussian source, respectively modeling the primary photons and the combined effect of head scatter, monitor chamber backscatter and collimator exchange effect. The in-air fluence was firstly calculated by back-projecting the edges of beam defining devices onto the source plane and integrating the visible source distribution. The effect of the rounded MLC leaf end, tongue-and-groove and interleaf transmission was taken into account in the back-projection. The in-air fluence was then modified with a fourth degree polynomial modeling the cone-shaped dose distribution of FFF beams. Planar dose distribution was obtained by convolving the in-air fluence with a dose deposition kernel (DDK) consisting of the sum of three 2D Gaussian functions. The parameters of the source model and the DDK were commissioned using measured in-air output factors (Sc) and cross beam profiles, respectively. A novel method was used to eliminate the volume averaging effect of ion chambers in determining the DDK. Planar dose distributions of five head-and-neck FFF-IMRT plans were calculated and compared against measurements performed with a 2D diode array (MapCHECK™) to validate the accuracy of the algorithm. Results: The proposed source model predicted Sc for both 6MV and 10MV with an accuracy better than 0.1%. With a stringent gamma criterion (2%/2mm/local difference), the passing rate of the FFF-IMRT dose calculation was 97.2±2.6%. Conclusion: The removal of the flattening filter represents a simplification of the head structure which allows the use of a simpler source model for very accurate dose calculation. The proposed algorithm offers an effective way to ensure the safe delivery of FFF-IMRT.
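The structure of the dose engine described above can be sketched in a few lines: build an in-air fluence map, soften it with a radial polynomial to mimic the FFF cone shape, and convolve with a sum-of-Gaussians dose deposition kernel. Every number below (field size, cone coefficient, kernel weights and widths) is a placeholder rather than commissioned beam data, and the back-projection of the MLC geometry is not modeled:

```python
import numpy as np
from scipy.signal import fftconvolve

def planar_dose(fluence, x_mm, y_mm, kernel_params):
    """Convolve an in-air fluence map with a DDK built as a sum of 2-D Gaussians
    given as (weight, sigma_mm) pairs."""
    dx = x_mm[1] - x_mm[0]
    gx, gy = np.meshgrid(x_mm - x_mm.mean(), y_mm - y_mm.mean())
    r2 = gx ** 2 + gy ** 2
    kernel = sum(w * np.exp(-r2 / (2.0 * s ** 2)) / (2.0 * np.pi * s ** 2)
                 for w, s in kernel_params)
    return fftconvolve(fluence, kernel * dx * dx, mode="same")

x = y = np.arange(-100.0, 100.0, 1.0)                              # mm
X, Y = np.meshgrid(x, y)
aperture = ((np.abs(X) < 50) & (np.abs(Y) < 50)).astype(float)     # idealized 10 x 10 cm opening
cone = 1.0 - 2.0e-5 * (X ** 2 + Y ** 2)                            # crude stand-in for the 4th-degree radial polynomial
dose = planar_dose(aperture * cone, x, y, [(0.80, 2.0), (0.15, 8.0), (0.05, 30.0)])
```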
Analysis of the Characteristics of a Rotary Stepper Micromotor
NASA Astrophysics Data System (ADS)
Sone, Junji; Mizuma, Toshinari; Masunaga, Masakazu; Mochizuki, Shunsuke; Sarajic, Edin; Yamahata, Christophe; Fujita, Hiroyuki
A 3-phase electrostatic stepper micromotor was developed. To improve its performance for actual use, we conducted numerical simulations to optimize the design; an improved simulation method is needed to calculate the many design cases efficiently. To conduct circuit simulation of this micromotor, its structure is simplified and a function for computing the force excited by the electrostatic field is added to the circuit simulator. We achieved a reasonably accurate simulation. We also considered an optimal drive waveform to achieve low-voltage operation.
Simplification of an MCNP model designed for dose rate estimation
NASA Astrophysics Data System (ADS)
Laptev, Alexander; Perry, Robert
2017-09-01
A study was made to investigate the methods of building a simplified MCNP model for radiological dose estimation. The research was done using an example of a complicated glovebox with extra shielding. The paper presents several different calculations for neutron and photon dose evaluations where glovebox elements were consecutively excluded from the MCNP model. The analysis indicated that to obtain a fast and reasonable estimation of dose, the model should be realistic in details that are close to the tally. Other details may be omitted.
Pybel: a Python wrapper for the OpenBabel cheminformatics toolkit
O'Boyle, Noel M; Morley, Chris; Hutchison, Geoffrey R
2008-01-01
Background Scripting languages such as Python are ideally suited to common programming tasks in cheminformatics such as data analysis and parsing information from files. However, for reasons of efficiency, cheminformatics toolkits such as the OpenBabel toolkit are often implemented in compiled languages such as C++. We describe Pybel, a Python module that provides access to the OpenBabel toolkit. Results Pybel wraps the direct toolkit bindings to simplify common tasks such as reading and writing molecular files and calculating fingerprints. Extensive use is made of Python iterators to simplify loops such as that over all the molecules in a file. A Pybel Molecule can be easily interconverted to an OpenBabel OBMol to access those methods or attributes not wrapped by Pybel. Conclusion Pybel allows cheminformaticians to rapidly develop Python scripts that manipulate chemical information. It is open source, available cross-platform, and offers the power of the OpenBabel toolkit to Python programmers. PMID:18328109
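A minimal usage sketch of the tasks mentioned in the abstract (reading molecules, iterating over a file, and calculating a fingerprint). With OpenBabel 3.x the module is imported as shown; older 2.x releases used `import pybel` instead, and `molecules.sdf` is a hypothetical file name:

```python
from openbabel import pybel   # OpenBabel 2.x: import pybel

mol = pybel.readstring("smi", "CCO")        # build a molecule from a SMILES string
fp = mol.calcfp()                           # default path-based (FP2) fingerprint
print(mol.molwt, fp.bits)

for m in pybel.readfile("sdf", "molecules.sdf"):   # iterate lazily over a multi-molecule file
    print(m.title, m.molwt)
```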
NASA Astrophysics Data System (ADS)
Vereecken, Luc; Peeters, Jozef
2003-09-01
The rigorous implementation of transition state theory (TST) for a reaction system with multiple reactant rotamers and multiple transition state conformers is discussed by way of a statistical rate analysis of the 1,5-H-shift in 1-butoxy radicals, a prototype reaction for the important class of H-shift reactions in atmospheric chemistry. Several approaches for deriving a multirotamer TST expression are treated: oscillator versus (hindered) internal rotor models; distinguishable versus indistinguishable atoms; and direct count methods versus degeneracy factors calculated by (simplified) direct count methods or from symmetry numbers and number of enantiomers, where applicable. It is shown that the various treatments are fully consistent, even if the TST expressions themselves appear different. The 1-butoxy H-shift reaction is characterized quantum chemically using B3LYP-DFT; the performance of this level of theory is compared to other methods. Rigorous application of the multirotamer TST methodology in an harmonic oscillator approximation based on this data yields a rate coefficient of k(298 K, 1 atm) = 1.4×10^5 s^-1, and an Arrhenius expression k(T, 1 atm) = 1.43×10^11 exp(-8.17 kcal mol^-1/RT) s^-1, which both closely match the experimental recommendations in the literature. The T-dependence is substantially influenced by the multirotamer treatment, as well as by the tunneling and fall-off corrections. The present results are compared to those of simplified TST calculations based solely on the properties of the lowest energy 1-butoxy rotamer.
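The quoted Arrhenius expression can be checked with a one-line evaluation (assuming R = 1.987e-3 kcal mol^-1 K^-1); it reproduces a room-temperature rate of order 10^5 s^-1, consistent with the k(298 K) value reported above:

```python
import math

A = 1.43e11      # s^-1, pre-exponential factor of the fitted expression
Ea = 8.17        # kcal/mol, fitted activation energy
R = 1.987e-3     # kcal/(mol K)

k = lambda T: A * math.exp(-Ea / (R * T))
print(f"k(298 K) ~ {k(298.0):.1e} s^-1")   # ~1.5e5 s^-1, in line with the quoted 1.4e5 s^-1
```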
Review of Thawing Time Prediction Models Depending on Process Conditions and Product Characteristics
Kluza, Franciszek; Spiess, Walter E. L.; Kozłowicz, Katarzyna
2016-01-01
Determining thawing times of frozen foods is a challenging problem as the thermophysical properties of the product change during thawing. A number of calculation models and solutions have been developed. The proposed solutions range from relatively simple analytical equations based on a number of assumptions to a group of empirical approaches that sometimes require complex calculations. In this paper analytical, empirical and graphical models are presented and critically reviewed. The conditions of solution, limitations and possible applications of the models are discussed. The graphical and semi-graphical models are derived from numerical methods. Using numerical methods is not always practical, since running the calculations takes time and the specialized software and equipment can be expensive. For these reasons, the application of analytical-empirical models is more useful for engineering. It is demonstrated that there is no simple, accurate and feasible analytical method for thawing time prediction. Consequently, simplified methods are needed for thawing time estimation of agricultural and food products. The review reveals the need for further improvement of the existing solutions or development of new ones that will enable accurate determination of thawing time within a wide range of practical conditions of heat transfer during processing. PMID:27904387
Simplified Numerical Description of SPT Operations
NASA Technical Reports Server (NTRS)
Manzella, David H.
1995-01-01
A simplified numerical model of the plasma discharge within the SPT-100 stationary plasma thruster was developed to aid in understanding thruster operation. A one dimensional description was used. Non-axial velocities were neglected except for the azimuthal electron velocity. A nominal operating condition of 4.5 mg/s of xenon anode flow was considered with 4.5 Amperes of discharge current, and a peak radial magnetic field strength of 130 Gauss. For these conditions, the calculated results indicated ionization fractions of 0.99 near the thruster exit with a potential drop across the discharge of approximately 250 Volts. Peak calculated electron temperatures were found to be sensitive to the choice of total ionization cross section for ionization of atomic xenon by electron bombardment and ranged from 51 eV to 60 eV. The calculated ionization fraction, potential drop, and electron number density agree favorably with previous experiments. Calculated electron temperatures are higher than previously measured.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time step kinetic scheme. The first, time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (> 1×10^-20 moles/cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R^2 are obtained.
NASA Technical Reports Server (NTRS)
Marek, C. John; Molnar, Melissa
2005-01-01
A simplified single rate expression for hydrogen combustion and nitrogen oxide production was developed. Detailed kinetics are predicted for the chemical kinetic times using the complete chemical mechanism over the entire operating space. These times are then correlated to the reactor conditions using an exponential fit. Simple first order reaction expressions are then used to find the conversion in the reactor. The method uses a two-time step kinetic scheme. The first, time-averaged step is used at the initial times with smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel-air ratio, temperature, and pressure. The second, instantaneous step is used at higher water concentrations (greater than 1×10^-20 moles/cc) in the mixture, which gives the chemical kinetic time as a function of the instantaneous fuel and water mole concentrations, pressure and temperature (T4). The simple correlations are then compared to the turbulent mixing times to determine the limiting properties of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. This time is regressed over the complete initial conditions using the Excel regression routine. Chemical kinetic time equations for H2 and NOx are obtained for H2/air fuel and for H2/O2. A similar correlation is also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium temperature (T4) as a function of overall fuel/air ratio, pressure and initial temperature (T3). High values of the regression coefficient R^2 are obtained.
NASA Astrophysics Data System (ADS)
Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.
2018-05-01
In this study, we construct a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. The time variable is discretized with the Crank-Nicolson method, and for the space variable a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation is applied. This leads to solving the equation in a series of time steps; at each time step the problem reduces to a system of algebraic equations, which greatly simplifies the computation. The proposed method is simple and accurate. One of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the difficulty arising in their calculation is overcome, and the generalized Lagrange basis and matrices need not be computed explicitly, since they have the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results amply demonstrate that the presented method is valid, effective, and reliable, and does not require any restrictive assumptions for the nonlinear terms.
Simplified Discontinuous Galerkin Methods for Systems of Conservation Laws with Convex Extension
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
1999-01-01
Simplified forms of the space-time discontinuous Galerkin (DG) and discontinuous Galerkin least-squares (DGLS) finite element method are developed and analyzed. The new formulations exploit simplifying properties of entropy endowed conservation law systems while retaining the favorable energy properties associated with symmetric variable formulations.
ON SOME MATHEMATICAL PROBLEMS SUGGESTED BY BIOLOGICAL SCHEMES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Luehr, C.
1958-08-01
A simplified model of a population which reproduces asexually and is subject to random mutations implying improvement in chances of survival and procreation is treated by a numerical calculation. The behavior of such a system is then summarized by an analytical formula. The paper is intended as the first of a series devoted to mathematical studies of simplified genetic situations. (auth)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xibing; Dong, Longjun, E-mail: csudlj@163.com; Australian Centre for Geomechanics, The University of Western Australia, Crawley, 6009
This paper presents an efficient closed-form solution (ECS) for acoustic emission (AE) source location in three-dimensional structures using time difference of arrival (TDOA) measurements from N receivers, N ≥ 6. The nonlinear TDOA location equations are simplified to linear equations. The unique analytical solution for AE sources in an unknown-velocity system is obtained by solving the linear equations. The proposed ECS method successfully addresses location errors resulting from measurement deviations of the velocity, as well as the existence and multiplicity of solutions induced by the square-root calculations in existing closed-form methods.
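For context, the standard way such TDOA equations are linearized (with a known wave speed) is sketched below; the paper's ECS additionally treats the velocity as unknown, which this generic textbook form does not reproduce:

```python
import numpy as np

def locate_source(receivers, tdoa, v):
    """Linearized TDOA localization: receivers is (N, 3), tdoa holds the N-1
    arrival-time differences relative to receiver 0, v is the assumed wave speed.
    Squaring the range relations d_i = d_0 + v*tau_i gives equations linear in
    the source position x and the reference range d_0."""
    receivers = np.asarray(receivers, float)
    r0 = receivers[0]
    A, b = [], []
    for ri, tau in zip(receivers[1:], tdoa):
        A.append(np.concatenate([2.0 * (ri - r0), [2.0 * v * tau]]))
        b.append(ri @ ri - r0 @ r0 - (v * tau) ** 2)
    sol, *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return sol[:3]          # sol[3] is the estimated range d_0 to receiver 0
```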
Numerical simulation of water evaporation inside vertical circular tubes
NASA Astrophysics Data System (ADS)
Ocłoń, Paweł; Nowak, Marzena; Majewski, Karol
2013-10-01
In this paper the results of simplified numerical analysis of water evaporation in vertical circular tubes are presented. The heat transfer in fluid domain (water or wet steam) and solid domain (tube wall) is analyzed. For the fluid domain the temperature field is calculated solving energy equation using the Control Volume Method and for the solid domain using the Finite Element Method. The heat transfer between fluid and solid domains is conjugated using the value of heat transfer coefficient from evaporating liquid to the tube wall. It is determined using the analytical Steiner-Taborek correlation. The pressure changes in fluid are computed using Friedel model.
Review of Integrated Noise Model (INM) Equations and Processes
NASA Technical Reports Server (NTRS)
Shepherd, Kevin P. (Technical Monitor); Forsyth, David W.; Gulding, John; DiPardo, Joseph
2003-01-01
The FAA's Integrated Noise Model (INM) relies on the methods of the SAE AIR-1845 'Procedure for the Calculation of Airplane Noise in the Vicinity of Airports' issued in 1986. Simplifying assumptions for aerodynamics and noise calculation were made in the SAE standard and the INM based on the limited computing power commonly available then. The key objectives of this study are 1) to test some of those assumptions against Boeing source data, and 2) to automate the manufacturer's methods of data development to enable the maintenance of a consistent INM database over time. These new automated tools were used to generate INM database submissions for six airplane types: 737-700 (CFM56-7 24K), 767-400ER (CF6-80C2BF), 777-300 (Trent 892), 717-200 (BR715), 757-300 (RR535E4B), and the 737-800 (CFM56-7 26K).
A method for real-time implementation of HOG feature extraction
NASA Astrophysics Data System (ADS)
Luo, Hai-bo; Yu, Xin-rong; Liu, Hong-mei; Ding, Qing-hai
2011-08-01
Histogram of oriented gradients (HOG) is an efficient feature extraction scheme, and HOG descriptors are widely used in computer vision and image processing for biometrics, target tracking, automatic target detection (ATD), automatic target recognition (ATR), etc. However, the computation involved in HOG feature extraction is unsuitable for direct hardware implementation since it includes complicated operations. In this paper, an optimal design method and theoretical framework for real-time HOG feature extraction based on FPGA are proposed. The main principle is as follows: first, a parallel gradient computing unit circuit based on a parallel pipeline structure was designed. Second, the calculation of the arctangent and square root operations was simplified. Finally, a histogram generator based on a parallel pipeline structure was designed to calculate the histogram of each sub-region. Experimental results showed that the HOG extraction can be implemented in one pixel period by these computing units.
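As a reference for the operations being mapped to hardware, a plain NumPy version of the per-cell histogram step is sketched below; a hardware implementation would replace the arctangent and square root with the simplified comparisons the paper describes, and the cell size and bin count are common defaults rather than values taken from the paper:

```python
import numpy as np

def hog_cell_histograms(img, cell=8, n_bins=9):
    """Unsigned-gradient HOG cell histograms (software reference, hard binning)."""
    img = img.astype(float)
    gx = np.zeros_like(img); gy = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]            # centered horizontal differences
    gy[1:-1, :] = img[2:, :] - img[:-2, :]            # centered vertical differences
    mag = np.hypot(gx, gy)                            # gradient magnitude (square root)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned orientation in [0, 180)
    bins = np.minimum((ang / (180.0 / n_bins)).astype(int), n_bins - 1)
    rows, cols = (img.shape[0] // cell) * cell, (img.shape[1] // cell) * cell
    hist = np.zeros((rows // cell, cols // cell, n_bins))
    for i in range(rows):
        for j in range(cols):
            hist[i // cell, j // cell, bins[i, j]] += mag[i, j]
    return hist
```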
Structural Code Considerations for Solar Rooftop Installations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dwyer, Stephen F.; Dwyer, Brian P.; Sanchez, Alfred
2014-12-01
Residential rooftop solar panel installations are limited in part by the high cost of structural related code requirements for field installation. Permitting solar installations is difficult because there is a belief among residential permitting authorities that typical residential rooftops may be structurally inadequate to support the additional load associated with a photovoltaic (PV) solar installation. Typical engineering methods utilized to calculate stresses on a roof structure involve simplifying assumptions that render a complex non-linear structure to a basic determinate beam. This method of analysis neglects the composite action of the entire roof structure, yielding a conservative analysis based on a rafter or top chord of a truss. Consequently, the analysis can result in an overly conservative structural analysis. A literature review was conducted to gain a better understanding of the conservative nature of the regulations and codes governing residential construction and the associated structural system calculations.
NASA Astrophysics Data System (ADS)
Yeom, Jong-Min; Han, Kyung-Soo; Kim, Jae-Jin
2012-05-01
Solar surface insolation (SSI) represents how much solar radiance reaches the Earth's surface in a specified area and is an important parameter in various fields such as surface energy research, meteorology, and climate change. This study calculates insolation using Multi-functional Transport Satellite (MTSAT-1R) data with a simplified cloud factor over Northeast Asia. For SSI retrieval from the geostationary satellite data, the physical model of Kawamura is modified to improve insolation estimation by considering various atmospheric constituents, such as Rayleigh scattering, water vapor, ozone, aerosols, and clouds. For more accurate atmospheric parameterization, satellite-based atmospheric constituents are used instead of constant values when estimating insolation. Cloud effects are a key problem in insolation estimation because of their complicated optical characteristics and high temporal and spatial variation. The accuracy of insolation data from satellites depends on how well cloud attenuation as a function of geostationary channels and angle can be inferred. This study uses a simplified cloud factor that depends on the reflectance and solar zenith angle. Empirical criteria to select reference data for fitting to the ground station data are applied to suggest simplified cloud factor methods. Insolation estimated using the cloud factor is compared with results of the unmodified physical model and with observations by ground-based pyranometers located in the Korean peninsula. The modified model results show far better agreement with ground truth data compared to estimates using the conventional method under overcast conditions.
Three-Dimensional Surface Parameters and Multi-Fractal Spectrum of Corroded Steel
Shanhua, Xu; Songbo, Ren; Youde, Wang
2015-01-01
To study multi-fractal behavior of corroded steel surface, a range of fractal surfaces of corroded surfaces of Q235 steel were constructed by using the Weierstrass-Mandelbrot method under a high total accuracy. The multi-fractal spectrum of fractal surface of corroded steel was calculated to study the multi-fractal characteristics of the W-M corroded surface. Based on the shape feature of the multi-fractal spectrum of corroded steel surface, the least squares method was applied to the quadratic fitting of the multi-fractal spectrum of corroded surface. The fitting function was quantitatively analyzed to simplify the calculation of multi-fractal characteristics of corroded surface. The results showed that the multi-fractal spectrum of corroded surface was fitted well with the method using quadratic curve fitting, and the evolution rules and trends were forecasted accurately. The findings can be applied to research on the mechanisms of corroded surface formation of steel and provide a new approach for the establishment of corrosion damage constitutive models of steel. PMID:26121468
Pumping‐induced leakage in a bounded aquifer: An example of a scale‐invariant phenomenon
Butler, James J.; Tsou, Ming‐shu
2003-01-01
A new approach is presented for calculation of the volume of pumping‐induced leakage entering an aquifer as a function of time. This approach simplifies the total leakage calculation by extending analytical‐based methods developed for infinite systems to bounded aquifers of any size. The simplification is possible because of the relationship between drawdown and leakage in aquifers laterally bounded by impermeable formations. This relationship produces a scale‐invariant total leakage; i.e., the volume of leakage as a function of time does not change with the size of the aquifer or with the location of the pumping well. Two examples and image well theory are used to demonstrate and prove, respectively, the generality of this interesting phenomenon.
NASA Technical Reports Server (NTRS)
Shertzer, Janine; Temkin, A.
2003-01-01
As is well known, the full scattering amplitude can be expressed as an integral involving the complete scattering wave function. We have shown that the integral can be simplified and used in a practical way. Initial application to electron-hydrogen scattering without exchange was highly successful. The Schrodinger equation (SE), which can be reduced to a 2d partial differential equation (pde), was solved using the finite element method. We have now included exchange by solving the resultant SE, in the static exchange approximation, which is reducible to a pair of coupled pde's. The resultant scattering amplitudes, both singlet and triplet, calculated as a function of energy are in excellent agreement with converged partial wave results.
NASA Astrophysics Data System (ADS)
Evstatiev, Evstati; Svidzinski, Vladimir; Spencer, Andy; Galkin, Sergei
2014-10-01
Full wave 3-D modeling of RF fields in hot magnetized nonuniform plasma requires calculation of nonlocal conductivity kernel describing the dielectric response of such plasma to the RF field. In many cases, the conductivity kernel is a localized function near the test point which significantly simplifies numerical solution of the full wave 3-D problem. Preliminary results of feasibility analysis of numerical calculation of the conductivity kernel in a 3-D hot nonuniform magnetized plasma in the electron cyclotron frequency range will be reported. This case is relevant to modeling of ECRH in ITER. The kernel is calculated by integrating the linearized Vlasov equation along the unperturbed particle's orbits. Particle's orbits in the nonuniform equilibrium magnetic field are calculated numerically by one of the Runge-Kutta methods. RF electric field is interpolated on a specified grid on which the conductivity kernel is discretized. The resulting integrals in the particle's initial velocity and time are then calculated numerically. Different optimization approaches of the integration are tested in this feasibility analysis. Work is supported by the U.S. DOE SBIR program.
Total Ambient Dose Equivalent Buildup Factor Determination for Nbs04 Concrete.
Duckic, Paulina; Hayes, Robert B
2018-06-01
Buildup factors are dimensionless multiplicative factors required by the point kernel method to account for scattered radiation through a shielding material. The accuracy of the point kernel method is strongly affected by the correspondence of analyzed parameters to experimental configurations, which is attempted to be simplified here. The point kernel method has not been found to have widespread practical use for neutron shielding calculations due to the complex neutron transport behavior through shielding materials (i.e. the variety of interaction mechanisms that neutrons may undergo while traversing the shield) as well as non-linear neutron total cross section energy dependence. In this work, total ambient dose buildup factors for NBS04 concrete are calculated in terms of neutron and secondary gamma ray transmission factors. The neutron and secondary gamma ray transmission factors are calculated using MCNP6™ code with updated cross sections. Both transmission factors and buildup factors are given in a tabulated form. Practical use of neutron transmission and buildup factors warrants rigorously calculated results with all associated uncertainties. In this work, sensitivity analysis of neutron transmission factors and total buildup factors with varying water content has been conducted. The analysis showed significant impact of varying water content in concrete on both neutron transmission factors and total buildup factors. Finally, support vector regression, a machine learning technique, has been engaged to make a model based on the calculated data for calculation of the buildup factors. The developed model can predict most of the data with 20% relative error.
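For orientation, the role a buildup factor plays in a point-kernel estimate can be shown with a gamma-ray example; the attenuation coefficient and the Taylor-form buildup coefficients below are invented placeholders, not the NBS04 concrete data tabulated in the paper:

```python
import math

def point_kernel_flux(S, mu, r, buildup):
    """Uncollided point-source flux scaled by a buildup factor:
    phi = B(mu*r) * S * exp(-mu*r) / (4*pi*r^2)."""
    return buildup(mu * r) * S * math.exp(-mu * r) / (4.0 * math.pi * r ** 2)

# Taylor-form buildup B(x) = A*exp(-a1*x) + (1 - A)*exp(-a2*x) with made-up coefficients.
def taylor_buildup(x, A=10.0, a1=-0.10, a2=0.05):
    return A * math.exp(-a1 * x) + (1.0 - A) * math.exp(-a2 * x)

phi = point_kernel_flux(S=1.0e9, mu=0.15, r=30.0, buildup=taylor_buildup)   # mu in 1/cm, r in cm
```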
Calculating shock arrival in expansion tubes and shock tunnels using Bayesian changepoint analysis
NASA Astrophysics Data System (ADS)
James, Christopher M.; Bourke, Emily J.; Gildfind, David E.
2018-06-01
To understand the flow conditions generated in expansion tubes and shock tunnels, shock speeds are generally calculated based on shock arrival times at high-frequency wall-mounted pressure transducers. These calculations require that the shock arrival times are obtained accurately. This can be non-trivial for expansion tubes especially, because pressure rises may be small and shock speeds high. Inaccurate shock arrival times can be a significant source of uncertainty. To help address this problem, this paper investigates two separate but complementary techniques. Principally, it proposes using a Bayesian changepoint detection method to automatically calculate shock arrival, potentially reducing error and simplifying the shock arrival finding process. To complement this, a technique for filtering the raw data without losing the shock arrival time is also presented and investigated. To test the validity of the proposed techniques, tests are performed using both a theoretical step change with different levels of noise and real experimental data. It was found that with conditions added to ensure that a real shock arrival time was found, the Bayesian changepoint analysis method was able to automatically find the shock arrival time, even for noisy signals.
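A stripped-down version of the changepoint idea is easy to state: score every candidate split of the pressure trace with a two-segment Gaussian likelihood and take the best one. The sketch below uses a uniform prior over the changepoint location and per-segment means with a pooled variance, which is a simplification of, not a substitute for, the Bayesian formulation in the paper:

```python
import numpy as np

def shock_arrival_index(signal):
    """Return the sample index of the most likely single change in mean."""
    x = np.asarray(signal, float)
    n = len(x)
    score = np.full(n, -np.inf)
    for k in range(2, n - 2):
        a, b = x[:k], x[k:]
        rss = np.sum((a - a.mean()) ** 2) + np.sum((b - b.mean()) ** 2)
        score[k] = -0.5 * n * np.log(max(rss / n, 1e-30))   # profiled Gaussian log-likelihood
    return int(np.argmax(score))
```

Dividing the returned index by the sample rate gives the arrival time; the detector can equally be applied to a filtered copy of the trace, in the spirit of the complementary filtering technique mentioned above.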
Diagonalizing the Hamiltonian of λϕ4 theory in 2 space-time dimensions
NASA Astrophysics Data System (ADS)
Christensen, Neil
2018-01-01
We propose a new non-perturbative technique for calculating the scattering amplitudes of a field theory directly from the eigenstates of the Hamiltonian. Our method involves a discretized momentum space and a momentum cutoff, thereby truncating the Hilbert space and making numerical diagonalization of the Hamiltonian achievable. We show how to do this in the context of a simplified λϕ4 theory in two space-time dimensions. We present the results of our diagonalization, its dependence on time, its dependence on the parameters of the theory, and its renormalization.
Calculation of load distribution in stiffened cylindrical shells
NASA Technical Reports Server (NTRS)
Ebner, H; Koller, H
1938-01-01
Thin-walled shells with strong longitudinal and transverse stiffening (for example, stressed-skin fuselages and wings) may, under certain simplifying assumptions, be treated as static systems with finite redundancies. In this report the underlying basis for this method of treatment of the problem is presented and a computation procedure for stiffened cylindrical shells with curved sheet panels is indicated. A detailed discussion of the force distribution due to applied concentrated forces is given, and the discussion is illustrated by numerical examples which refer to a circular cylindrical shell investigated experimentally.
The Hubbard Model and Piezoresistivity
NASA Astrophysics Data System (ADS)
Celebonovic, V.; Nikolic, M. G.
2018-02-01
Piezoresistivity was discovered in the nineteenth century. Numerous applications of this phenomenon exist nowadays. The aim of the present paper is to explore the possibility of applying the Hubbard model to theoretical work on piezoresistivity. Results are encouraging, in the sense that numerical values of the strain gauge factor obtained by using the Hubbard model agree with results obtained by other methods. The calculation is simplified by the fact that it uses results for the electrical conductivity of 1D systems previously obtained within the Hubbard model by one of the present authors.
Computational tools for multi-linked flexible structures
NASA Technical Reports Server (NTRS)
Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.
1990-01-01
A software module which designs and tests controllers and filters in Kalman Estimator form, based on a polynomial state-space model is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.
FDTD modeling of thin impedance sheets
NASA Technical Reports Server (NTRS)
Luebbers, Raymond; Kunz, Karl
1991-01-01
Thin sheets of resistive or dielectric material are commonly encountered in radar cross section calculations. Analysis of such sheets is simplified by using sheet impedances. It is shown that sheet impedances can be modeled easily and accurately using Finite Difference Time Domain (FDTD) methods. These sheets are characterized by a discontinuity in the tangential magnetic field on either side of the sheet but no discontinuity in tangential electric field. This continuity, or single valued behavior of the electric field, allows the sheet current to be expressed in terms of an impedance multiplying this electric field.
Optimum Chemical Regeneration of the Gases Burnt in Solid Oxide Fuel Cells
NASA Astrophysics Data System (ADS)
Baskakov, A. P.; Volkova, Yu. V.; Plotnikov, N. S.
2014-07-01
A simplified method of calculating the concentrations of the components of a thermodynamically equilibrium mixture (a synthesis gas) supplied to the anode channel of a battery of solid oxide fuel cells and the change in these concentrations along the indicated channel is proposed and results of corresponding calculations are presented. The variants of reforming of a natural gas (methane) by air and steam as well as by a part of the exhaust combustion products for obtaining a synthesis gas are considered. The amount of the anode gases that should be returned for the complete chemical regeneration of the gases burnt in the fuel cells was determined. The dependence of the electromotive force of an ideal oxide fuel element (the electric circuit of which is open) on the degree of absorption of oxygen in a thermodynamically equilibrium fuel mixture was calculated.
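A short illustration of the open-circuit EMF dependence discussed above, using the standard Nernst relation for oxygen transfer across a solid oxide fuel cell; this is a generic textbook relation, and the oxygen partial pressures below are assumptions rather than the paper's equilibrium results.

```python
# Sketch: open-circuit EMF of an ideal solid oxide fuel cell from the Nernst equation.
# The example oxygen partial pressures are illustrative assumptions, not the paper's
# calculated equilibrium composition.
import math

R = 8.314      # gas constant [J/(mol*K)]
F = 96485.0    # Faraday constant [C/mol]

def nernst_emf(T_kelvin, p_o2_cathode, p_o2_anode):
    """EMF [V] for O2 transfer, 4 electrons per O2 molecule."""
    return (R * T_kelvin) / (4.0 * F) * math.log(p_o2_cathode / p_o2_anode)

# Example: air-side oxygen vs. a strongly reducing equilibrium anode mixture at 1123 K.
print(nernst_emf(1123.0, 0.21, 1.0e-18))
```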
Optimization of radial-type superconducting magnetic bearing using the Taguchi method
NASA Astrophysics Data System (ADS)
Ai, Liwang; Zhang, Guomin; Li, Wanjie; Liu, Guole; Liu, Qi
2018-07-01
It is important and complicated to model and optimize the levitation behavior of a superconducting magnetic bearing (SMB). This is due to the nonlinear constitutive relationships of the superconductor and ferromagnetic materials, the relative movement between the superconducting stator and the PM rotor, and the many parameters (e.g., air gap, critical current density, and remanent flux density) affecting the levitation behavior. In this paper, we present a theoretical calculation and optimization method for the levitation behavior of a radial-type SMB. A simplified model of levitation force calculation is established using a 2D finite element method with the H-formulation. In the model, the boundary condition of the superconducting stator is imposed by harmonic series expressions to describe the traveling magnetic field generated by the moving PM rotor. Experimental measurements of the levitation force are performed and used to validate the model. A statistical method called the Taguchi method is adopted to carry out an optimization of load capacity for the SMB. The effects of six optimization parameters on the target characteristic are then discussed, and the optimum parameter combination is determined. The results show that the levitation behavior of the SMB is greatly improved and that the Taguchi method is suitable for optimizing the SMB.
Two-port network analysis and modeling of a balanced armature receiver.
Kim, Noori; Allen, Jont B
2013-07-01
Models for acoustic transducers, such as loudspeakers, mastoid bone-drivers, hearing-aid receivers, etc., are critical elements in many acoustic applications. Acoustic transducers employ two-port models to convert between acoustic and electromagnetic signals. This study analyzes the ED series, a widely used commercial hearing-aid receiver manufactured by Knowles Electronics, Inc. Electromagnetic transducer modeling must consider two key elements: a semi-inductor and a gyrator. The semi-inductor accounts for electromagnetic eddy currents, the 'skin effect' of a conductor (Vanderkooy, 1989), while the gyrator (McMillan, 1946; Tellegen, 1948) accounts for the anti-reciprocity characteristic [Lenz's law (Hunt, 1954, p. 113)]. Aside from Hunt (1954), no publications we know of have included the gyrator element in their electromagnetic transducer models. The most prevalent method of transducer modeling invokes the mobility method, using an ideal transformer instead of a gyrator, followed by the dual of the mechanical circuit (Beranek, 1954). The mobility approach greatly complicates the analysis. The present study proposes a novel, simplified and rigorous receiver model. Hunt's two-port parameters, the electrical impedance Ze(s), acoustic impedance Za(s) and electro-acoustic transduction coefficient Ta(s), are calculated using ABCD and impedance matrix methods (Van Valkenburg, 1964). The results from electrical input impedance measurements Zin(s), which vary with given acoustical loads, are used in the calculation (Weece and Allen, 2010). The hearing-aid receiver transducer model is designed based on the energy transformation flow [electric → mechanical → acoustic]. The model has been verified with electrical input impedance, diaphragm velocity in vacuo, and output pressure measurements. This receiver model is suitable for designing most electromagnetic transducers and it can ultimately improve the design of hearing-aid devices by providing a simplified yet accurate, physically motivated analysis. This article is part of a special issue entitled "MEMRO 2012". Published by Elsevier B.V.
NASA Technical Reports Server (NTRS)
Molnar, Melissa; Marek, C. John
2005-01-01
A simplified kinetic scheme for Jet-A and methane fuels with water injection was developed to be used in numerical combustion codes, such as the National Combustor Code (NCC) or even simple FORTRAN codes. The two time step method uses either an initial time-averaged value (step one) or an instantaneous value (step two). The switch is based on a water concentration of 1x10^-20 moles/cc. The results presented here yield a correlation that gives the chemical kinetic time as two separate functions. This two time step method is used, as opposed to a one-step time-averaged method previously developed, to determine the chemical kinetic time with increased accuracy. The first time-averaged step is used at the initial times for smaller water concentrations. This gives the average chemical kinetic time as a function of initial overall fuel air ratio, initial water to fuel mass ratio, temperature, and pressure. The second instantaneous step, to be used with higher water concentrations, gives the chemical kinetic time as a function of instantaneous fuel and water mole concentration, pressure and temperature (T4). The simple correlations would then be compared to the turbulent mixing times to determine the limiting rates of the reaction. The NASA Glenn GLSENS kinetics code calculates the reaction rates and rate constants for each species in a kinetic scheme for finite kinetic rates. These reaction rates are used to calculate the necessary chemical kinetic times. Chemical kinetic time equations for fuel, carbon monoxide and NOx are obtained for Jet-A fuel and methane with and without water injection to water mass loadings of 2/1 water to fuel. A similar correlation was also developed using data from NASA's Chemical Equilibrium Applications (CEA) code to determine the equilibrium concentrations of carbon monoxide and nitrogen oxide as functions of overall equivalence ratio, water to fuel mass ratio, pressure and temperature (T3). The temperature of the gas entering the turbine (T4) was also correlated as a function of the initial combustor temperature (T3), equivalence ratio, water to fuel mass ratio, and pressure.
Comprehensive Numerical Simulation of Filling and Solidification of Steel Ingots
Pola, Annalisa; Gelfi, Marcello; La Vecchia, Giovina Marina
2016-01-01
In this paper, a complete three-dimensional numerical model of mold filling and solidification of steel ingots is presented. The risk of powder entrapment and defect formation during filling is analyzed in detail, demonstrating the importance of using a comprehensive geometry, with trumpet and runner, compared to conventional simplified models. Using a case study, it was shown that the simplified model significantly underestimates the defect sources, reducing the utility of simulations in supporting mold and process design. An experimental test was also performed on an instrumented mold and the measurements were compared to the calculation results. The good agreement between calculation and experiment validated the simulation. PMID:28773890
Simplified ultrasound protocol for the exclusion of clinically significant carotid artery stenosis.
Högberg, Dominika; Dellagrammaticas, Demosthenes; Kragsterman, Björn; Björck, Martin; Wanhainen, Anders
2016-08-01
To evaluate a simplified ultrasound protocol for the exclusion of clinically significant carotid artery stenosis for screening purposes. A total of 9,493 carotid arteries in 4,748 persons underwent carotid ultrasound examination. Most subjects were 65-year-old men attending screening for abdominal aortic aneurysm. The presence of a stenosis on B-mode and/or a mosaic pattern in post-stenotic areas on colour Doppler and the maximum peak systolic velocity (PSV) in the internal carotid artery (ICA) were recorded. A carotid stenosis was defined as >20% according to the North American Symptomatic Carotid Endarterectomy Trial (NASCET) criteria, and a significant stenosis as NASCET >50%. The kappa (κ) statistic was used to assess agreement between methods. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for the greyscale/mosaic method compared to conventional assessment by means of PSV measurement. An ICA stenosis was found in 121 (1.3%) arteries; 82 (0.9%) were graded 20%-49%, 16 (0.2%) were 50%-69%, and 23 (0.2%) were 70%-99%. Eighteen (0.2%) arteries were occluded. Overall, the greyscale/mosaic protocol showed a moderate agreement with ICA PSV measurements for the detection of carotid artery stenosis, κ = 0.455. The sensitivity, specificity, PPV, and NPV for detection of >20% ICA stenosis were 91% (95% CI 0.84-0.95), 97% (0.97-0.98), 31% (0.26-0.36), and 97% (0.97-0.97), respectively. The corresponding figures for >50% stenosis were 90% (0.83-0.95), 97% (0.97-0.98), 11% (0.08-0.15), and 100% (0.99-1.00). Compared with PSV measurements, the simplified greyscale/mosaic protocol had a high negative predictive value for detection of >50% carotid stenosis, suggesting that it may be suitable as a screening method to exclude significant disease.
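For reference, the diagnostic statistics quoted above follow from the standard 2x2 screening-table formulas; the sketch below uses placeholder counts chosen only to illustrate the calculation, not the study's raw data.

```python
# Sketch: standard 2x2 screening statistics (sensitivity, specificity, PPV, NPV).
# The counts below are illustrative placeholders, not the study's data.
def screening_stats(tp, fp, fn, tn):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return sensitivity, specificity, ppv, npv

# Hypothetical counts for a rare finding (illustrative only).
sens, spec, ppv, npv = screening_stats(tp=110, fp=250, fn=11, tn=9122)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```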
A method for coupling a parameterization of the planetary boundary layer with a hydrologic model
NASA Technical Reports Server (NTRS)
Lin, J. D.; Sun, Shu Fen
1986-01-01
Deardorff's parameterization of the planetary boundary layer is adapted to drive a hydrologic model. The method converts the atmospheric conditions measured at the anemometer height at one site to the mean values in the planetary boundary layer; it then uses the planetary boundary layer parameterization and the hydrologic variables to calculate the fluxes of momentum, heat and moisture at the atmosphere-land interface for a different site. A simplified hydrologic model is used for a simulation study of soil moisture and ground temperature on three different land surface covers. The results indicate that this method can be used to drive a spatially distributed hydrologic model by using observed data available at a meteorological station located on or nearby the site.
Mashouf, Shahram; Lechtman, Eli; Beaulieu, Luc; Verhaegen, Frank; Keller, Brian M; Ravi, Ananth; Pignol, Jean-Philippe
2013-09-21
The American Association of Physicists in Medicine Task Group No. 43 (AAPM TG-43) formalism is the standard for seeds brachytherapy dose calculation. But for breast seed implants, Monte Carlo simulations reveal large errors due to tissue heterogeneity. Since TG-43 includes several factors to account for source geometry, anisotropy and strength, we propose an additional correction factor, called the inhomogeneity correction factor (ICF), accounting for tissue heterogeneity for Pd-103 brachytherapy. This correction factor is calculated as a function of the media linear attenuation coefficient and mass energy absorption coefficient, and it is independent of the source internal structure. Ultimately the dose in heterogeneous media can be calculated as a product of dose in water as calculated by TG-43 protocol times the ICF. To validate the ICF methodology, dose absorbed in spherical phantoms with large tissue heterogeneities was compared using the TG-43 formalism corrected for heterogeneity versus Monte Carlo simulations. The agreement between Monte Carlo simulations and the ICF method remained within 5% in soft tissues up to several centimeters from a Pd-103 source. Compared to Monte Carlo, the ICF methods can easily be integrated into a clinical treatment planning system and it does not require the detailed internal structure of the source or the photon phase-space.
NASA Astrophysics Data System (ADS)
Rabemananajara, Tanjona R.; Horowitz, W. A.
2017-09-01
To make predictions for particle physics processes, one has to compute the cross section of the specific process, as this is what can be measured in a modern collider experiment such as the Large Hadron Collider (LHC) at CERN. In practice it is extremely difficult to compute scattering amplitudes using conventional Feynman methods. Calculations with Feynman diagrams are realizations of a perturbative expansion, and one has to set up all topologically distinct diagrams for a given process up to a given order in the coupling of the theory; this quickly makes the calculation of scattering amplitudes unwieldy. Fortunately, the calculations can be simplified by considering the Maximally Helicity Violating (MHV) helicity amplitudes. This can be extended to the formalism of on-shell recursion, which derives, in a much simpler way, the expression for a higher-order scattering amplitude from lower-order ones.
NASA Astrophysics Data System (ADS)
Izmaylov, Artur F.; Staroverov, Viktor N.; Scuseria, Gustavo E.; Davidson, Ernest R.; Stoltz, Gabriel; Cancès, Eric
2007-02-01
We have recently formulated a new approach, named the effective local potential (ELP) method, for calculating local exchange-correlation potentials for orbital-dependent functionals based on minimizing the variance of the difference between a given nonlocal potential and its desired local counterpart [V. N. Staroverov et al., J. Chem. Phys. 125, 081104 (2006)]. Here we show that under a mildly simplifying assumption of frozen molecular orbitals, the equation defining the ELP has a unique analytic solution which is identical with the expression arising in the localized Hartree-Fock (LHF) and common energy denominator approximations (CEDA) to the optimized effective potential. The ELP procedure differs from the CEDA and LHF in that it yields the target potential as an expansion in auxiliary basis functions. We report extensive calculations of atomic and molecular properties using the frozen-orbital ELP method and its iterative generalization to prove that ELP results agree with the corresponding LHF and CEDA values, as they should. Finally, we make the case for extending the iterative frozen-orbital ELP method to full orbital relaxation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoon, J; Heins, D; Zhang, R
Purpose: To model the magnetic port in temporary breast tissue expanders and to improve the accuracy of dose calculation in Pinnacle, a commercial treatment planning system (TPS). Methods: The magnetic port in the tissue expander was modeled on the basis of radiological measurements; the dimensions and the density of the model were determined from film images and ion chamber measurements under the magnetic port, respectively. The model was then evaluated for various field sizes and photon energies by comparing depth dose values calculated by the TPS (using our new model) with ion chamber measurements in a water tank. The model was further evaluated using a simplified anthropomorphic phantom with realistic geometry by placing thermoluminescent dosimeters (TLDs) around the magnetic port. Dose perturbations in a real patient's treatment plan from the new model and from a current clinical model, which is based on subjective contouring by the dosimetrist, were also compared. Results: Dose calculations based on our model showed less than 1% difference from ion chamber measurements for various field sizes and energies under the magnetic port when the magnetic port was placed parallel to the phantom surface. When it was placed perpendicular to the phantom surface, the maximum difference was 3.5%, while average differences were less than 3.1% for all cases. For the simplified anthropomorphic phantom, the calculated point doses agreed with TLD measurements within 5.2%. Comparison with the model currently used clinically in the TPS showed that the current clinical model overestimates the effect of the magnetic port. Conclusion: Our new model showed good agreement with measurement for all cases. It could potentially improve the accuracy of dose delivery to breast cancer patients.
NASA Astrophysics Data System (ADS)
Kalb, Wolfgang L.; Batlogg, Bertram
2010-01-01
The spectral density of localized states in the band gap of pentacene (trap DOS) was determined with a pentacene-based thin-film transistor from measurements of the temperature dependence and gate-voltage dependence of the contact-corrected field-effect conductivity. Several analytical methods to calculate the trap DOS from the measured data were used to clarify whether the different methods lead to comparable results. We also used computer simulations to further test the results from the analytical methods. Most methods predict a trap DOS close to the valence-band edge that can be very well approximated by a single exponential function with a slope in the range of 50-60 meV and a trap density at the valence-band edge of ≈2×10^21 eV^-1 cm^-3. Interestingly, the trap DOS is always slightly steeper than exponential. An important finding is that the choice of the method to calculate the trap DOS from the measured data can have a considerable effect on the final result. We identify two specific simplifying assumptions that lead to significant errors in the trap DOS. The temperature dependence of the band mobility should generally not be neglected. Moreover, the assumption of a constant effective accumulation-layer thickness leads to a significant underestimation of the slope of the trap DOS.
The induced electric field due to a current transient
NASA Astrophysics Data System (ADS)
Beck, Y.; Braunstein, A.; Frankental, S.
2007-05-01
Calculations and measurements of the electric fields induced by a lightning strike are important for understanding the phenomenon and developing effective protection systems. In this paper, a novel approach to the calculation of the electric fields due to lightning strikes, using a relativistic approach, is presented. This approach is based on a known current wave-pair model representing the lightning current wave. The model presented describes the lightning current wave either at the first stage of the descending charge wave from the cloud or at the later stage of the return stroke. The electric fields computed are cylindrically symmetric. A simplified method for the calculation of the electric field is achieved by using special relativity theory and relativistic considerations. The proposed approach, described in this paper, is based on simple expressions (applying Coulomb's law) compared with much more complicated partial differential equations based on Maxwell's equations. A straightforward method of calculating the electric field due to a lightning strike, modelled as a negative-positive (NP) wave-pair, is obtained by using special relativity theory to calculate the 'velocity field' and relativistic concepts to calculate the 'acceleration field'. These fields are the basic elements required for calculating the total field resulting from the current wave-pair model. Moreover, a modified, simpler method using sub-models is presented. The sub-models are filaments of either static charges or charges at constant velocity only. Combining these simple sub-models yields the total wave-pair model. The results fully agree with those obtained by solving Maxwell's equations for the discussed problem.
Study of motion of optimal bodies in the soil of grid method
NASA Astrophysics Data System (ADS)
Kotov, V. L.; Linnik, E. Yu
2016-11-01
The paper presents a method for calculating optimum body shapes in an axisymmetric setting, using a numerical method based on the Godunov scheme and Grigoryan's model of an elastoplastic soil medium. Two problems are solved: determining the generatrix of a body of revolution of given length and base radius that has minimum penetration resistance, and one that has maximum penetration depth. Numerical calculations are carried out by a modified method of local variations, which significantly reduces the number of operations for different representations of the generatrix. The use of a quadratic local-interaction model for preliminary assessments significantly simplifies the search for the optimal body. A qualitative similarity is noted between the convergence of the numerical optimization based on the local-interaction model and that based on continuum mechanics. The optimal bodies are compared with absolutely optimal bodies possessing the minimum penetration resistance, below which it is impossible to go under the given geometric constraints. It is shown that a conical striker with a variable vertex angle, equal at each penetration velocity to the angle of the absolutely optimal minimum-resistance body, has a final penetration depth only 12% greater than that of the body that is absolutely optimal for maximum penetration depth.
The U.S. Environmental Protection Agency National Stormwater Calculator (NSWC) simplifies the task of estimating runoff through a straightforward simulation process based on the EPA Stormwater Management Model. The NSWC accesses localized climate and soil hydrology data, and opti...
The refractive index in electron microscopy and the errors of its approximations.
Lentzen, M
2017-05-01
In numerical calculations for electron diffraction, a simplified form of the electron-optical refractive index, linear in the electric potential, is often used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation and use a second simplified form, now for the square of the refractive index, which is linear in the electric potential. The second- and higher-order corrections thus determined have, however, a large error compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Li, Gang; Yu, Yue; Zhang, Cui; Lin, Ling
2017-09-01
Oxygen saturation is one of the important parameters for evaluating human health. This paper presents an efficient optimization method that can improve the accuracy of oxygen saturation measurement; it employs an optical frequency-division triangular wave signal as the excitation signal to obtain the dynamic spectrum and calculate oxygen saturation. Compared with the traditional method, whose measured RMSE (root mean square error) of SpO2 is 0.1705, the proposed method reduces the RMSE to 0.0965, a significant improvement in measurement accuracy. The method can also simplify the circuit and reduce the demands on components. Furthermore, it has considerable reference value for improving the signal-to-noise ratio of other physiological signals.
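For context, oxygen saturation is conventionally estimated from a two-wavelength "ratio of ratios"; the sketch below shows that generic calculation, not the dynamic-spectrum scheme of the paper, and the calibration coefficients and synthetic signals are assumptions.

```python
# Sketch: conventional two-wavelength "ratio of ratios" SpO2 estimate. The calibration
# coefficients (110, 25) are generic textbook-style assumptions, not values from the paper.
import numpy as np

def spo2_ratio_of_ratios(red, infrared, a=110.0, b=25.0):
    """Estimate SpO2 [%] from pulsatile (AC) and baseline (DC) signal components."""
    def ac(x):  # peak-to-peak pulsatile component
        return np.ptp(x)
    def dc(x):  # baseline component
        return np.mean(x)
    r = (ac(red) / dc(red)) / (ac(infrared) / dc(infrared))
    return a - b * r

t = np.linspace(0.0, 5.0, 500)
red = 1.00 + 0.010 * np.sin(2 * np.pi * 1.2 * t)   # synthetic photoplethysmograms
ir = 1.00 + 0.020 * np.sin(2 * np.pi * 1.2 * t)
print(f"SpO2 ~ {spo2_ratio_of_ratios(red, ir):.1f} %")
```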
A case study by life cycle assessment
NASA Astrophysics Data System (ADS)
Li, Shuyun
2017-05-01
This article aims to assess the potential environmental impact of an electrical grinder during its life cycle. The Life Cycle Inventory Analysis was conducted based on the Simplified Life Cycle Assessment (SLCA) drivers calculated from the Valuation of Social Cost and Simplified Life Cycle Assessment Model (VSSM). The detailed results for the LCI can be found under Appendix II. The Life Cycle Impact Assessment was performed based on the Eco-indicator 99 method. The analysis identified a dominant contributor to the environmental impact, accounting for over 60% of the overall SLCA output; within it, 60% of the emissions resulted from the logistics required for maintenance activities. This was determined by hotspot analysis. Sensitivity analysis showed that changing the fuel type results in a significant decrease in the environmental footprint. The environmental benefit can also be seen from the negative output values of the recycling activities. By conducting the Life Cycle Assessment analysis, the potential environmental impact of the electrical grinder was investigated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kastner, S.O.; Bhatia, A.K.
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t_ij, related to the ''taboo'' probabilities of Markov chain theory. The t_ij are here evaluated for a real atomic system, and are therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
Zero cylinder coordinate system approach to image reconstruction in fan beam ICT
NASA Astrophysics Data System (ADS)
Yan, Yan-Chun; Xian, Wu; Hall, Ernest L.
1992-11-01
State-of-the-art transform algorithms produce excellent and efficient reconstructed images in most applications, especially in medical and industrial CT. Based on the Zero Cylinder Coordinate system (ZCC) presented in this paper, a new transform algorithm for image reconstruction in fan beam industrial CT is suggested. It greatly reduces the amount of computation in the backprojection, which requires only two INC instructions to calculate the weighting factor and the subcoordinate. A new backprojector is designed, whose pipeline mechanism is simplified based on the ZCC method. Finally, simulation results obtained on a microcomputer are given, which show that the method is effective and practical.
A simplified design of the staggered herringbone micromixer for practical applications
Du, Yan; Zhang, Zhiyi; Yim, ChaeHo; Lin, Min; Cao, Xudong
2010-01-01
We demonstrated a simple method for the device design of a staggered herringbone micromixer (SHM) using numerical simulation. By correlating the simulated concentrations with channel length, we obtained a series of concentration versus channel length profiles, and used mixing completion length Lm as the only parameter to evaluate the performance of device structure on mixing. Fluorescence quenching experiments were subsequently conducted to verify the optimized SHM structure for a specific application. Good agreement was found between the optimization and the experimental data. Since Lm is straightforward, easily defined and calculated parameter for characterization of mixing performance, this method for designing micromixers is simple and effective for practical applications. PMID:20697584
Modern methods and systems for precise control of the quality of agricultural and food production
NASA Astrophysics Data System (ADS)
Bednarjevsky, Sergey S.; Veryasov, Yuri V.; Akinina, Evgeniya V.; Smirnov, Gennady I.
1999-01-01
The results on the modeling of the non-linear dynamics of strong continuous and impulse radiation in the laser nephelometry of polydisperse biological systems, important from the viewpoint of applications in biotechnologies, are presented. The processes of nonlinear self-action of the laser radiation under multiple scattering in disperse biological agro-media are considered. Simplified algorithms for calculating the parameters of the biological media under investigation are indicated, and estimates of the errors of the laser-nephelometric measurements are given. Universal, highly informative optical analyzers and standard reference specimens of agro-objects form the technological foundation of the considered methods and systems.
Improved heat transfer modeling of the eye for electromagnetic wave exposures.
Hirata, Akimasa
2007-05-01
This study proposed an improved heat transfer model of the eye for exposure to electromagnetic (EM) waves. Particular attention was paid to the difference from the simplified heat transfer model commonly used in this field. From our computational results, the temperature elevation in the eye calculated with the simplified heat transfer model was largely influenced by the EM absorption outside the eyeball, but not when we used our improved model.
Behn, Andrew; Zimmerman, Paul M; Bell, Alexis T; Head-Gordon, Martin
2011-12-13
The growing string method is a powerful tool in the systematic study of chemical reactions with theoretical methods which allows for the rapid identification of transition states connecting known reactant and product structures. However, the efficiency of this method is heavily influenced by the choice of interpolation scheme when adding new nodes to the string during optimization. In particular, the use of Cartesian coordinates with cubic spline interpolation often produces guess structures which are far from the final reaction path and require many optimization steps (and thus many energy and gradient calculations) to yield a reasonable final structure. In this paper, we present a new method for interpolating and reparameterizing nodes within the growing string method using the linear synchronous transit method of Halgren and Lipscomb. When applied to the alanine dipeptide rearrangement and a simplified cationic alkyl ring condensation reaction, a significant speedup in terms of computational cost is achieved (30-50%).
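A minimal sketch of the core idea behind linear synchronous transit (LST) interpolation referenced above: target interatomic distances are interpolated linearly between two endpoint geometries, and the intermediate Cartesian coordinates are found by a weighted least-squares fit to those distances. The weighting and optimizer settings are simplifications, not the authors' growing-string implementation.

```python
# Sketch of LST-style interpolation between two molecular geometries.
# Target pair distances are blended linearly; Cartesian coordinates are then fitted
# to them with an LST-style 1/d^4 weighting. Illustrative, not the paper's code.
import numpy as np
from itertools import combinations
from scipy.optimize import minimize

def lst_node(xa, xb, f):
    """Interpolated geometry a fraction f of the way from xa to xb (n_atoms x 3 arrays)."""
    pairs = list(combinations(range(len(xa)), 2))
    d_target = {(i, j): (1 - f) * np.linalg.norm(xa[i] - xa[j])
                        + f * np.linalg.norm(xb[i] - xb[j]) for i, j in pairs}

    def objective(flat):
        x = flat.reshape(-1, 3)
        return sum((np.linalg.norm(x[i] - x[j]) - d)**2 / d**4
                   for (i, j), d in d_target.items())

    guess = ((1 - f) * xa + f * xb).ravel()   # start from a linear Cartesian blend
    return minimize(objective, guess, method="BFGS").x.reshape(-1, 3)

# Toy example: interpolate halfway between two 3-atom geometries.
xa = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
xb = np.array([[0.0, 0.0, 0.0], [1.4, 0.0, 0.0], [0.0, 1.4, 0.0]])
print(lst_node(xa, xb, 0.5))
```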
FY16 Status Report on Development of Integrated EPP and SMT Design Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jetter, R. I.; Sham, T. -L.; Wang, Y.
2016-08-01
The goal of the Elastic-Perfectly Plastic (EPP) combined integrated creep-fatigue damage evaluation approach is to incorporate a Simplified Model Test (SMT) data based approach for creep-fatigue damage evaluation into the EPP methodology to avoid the separate evaluation of creep and fatigue damage and to eliminate the requirement for stress classification in current methods, thus greatly simplifying the evaluation of elevated temperature cyclic service. The EPP methodology is based on the idea that creep damage and strain accumulation can be bounded by a properly chosen "pseudo" yield strength used in an elastic-perfectly plastic analysis, thus avoiding the need for stress classification. The original SMT approach is based on the use of elastic analysis. The experimental data, cycles to failure, are correlated using the elastically calculated strain range in the test specimen, and the corresponding component strain is also calculated elastically. The advantage of this approach is that it is no longer necessary to use the damage interaction, or D-diagram, because the damage due to the combined effects of creep and fatigue is accounted for in the test data by means of a specimen that is designed to replicate or bound the stress and strain redistribution that occurs in actual components when loaded in the creep regime. The reference approach to combining the two methodologies and the corresponding uncertainties and validation plans are presented. Results from recent key feature tests are discussed to illustrate the applicability of the EPP methodology and the behavior of materials at elevated temperature when undergoing stress and strain redistribution due to plasticity and creep.
Probabilistic Analysis for Comparing Fatigue Data Based on Johnson-Weibull Parameters
NASA Technical Reports Server (NTRS)
Vlcek, Brian L.; Hendricks, Robert C.; Zaretsky, Erwin V.
2013-01-01
Leonard Johnson published a methodology for establishing the confidence that two populations of data are different. Johnson's methodology is dependent on limited combinations of test parameters (Weibull slope, mean life ratio, and degrees of freedom) and a set of complex mathematical equations. In this report, a simplified algebraic equation for confidence numbers is derived based on the original work of Johnson. The confidence numbers calculated with this equation are compared to those obtained graphically by Johnson. Using the ratios of mean life, the resultant values of confidence numbers at the 99 percent level deviate less than 1 percent from those of Johnson. At a 90 percent confidence level, the calculated values differ between +2 and 4 percent. The simplified equation is used to rank the experimental lives of three aluminum alloys (AL 2024, AL 6061, and AL 7075), each tested at three stress levels in rotating beam fatigue, analyzed using the Johnson-Weibull method, and compared to the ASTM Standard (E739-91) method of comparison. The ASTM Standard did not statistically distinguish between AL 6061 and AL 7075. However, it is possible to rank the fatigue lives of different materials with a reasonable degree of statistical certainty based on combined confidence numbers using the Johnson-Weibull analysis. AL 2024 was found to have the longest fatigue life, followed by AL 7075, and then AL 6061. The ASTM Standard and the Johnson-Weibull analysis result in the same stress-life exponent p for each of the three aluminum alloys at the median, or L50, lives.
Two- and three-dimensional natural and mixed convection simulation using modular zonal models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wurtz, E.; Nataf, J.M.; Winkelmann, F.
We demonstrate the use of the zonal model approach, which is a simplified method for calculating natural and mixed convection in rooms. Zonal models use a coarse grid and use balance equations, state equations, hydrostatic pressure drop equations, and power-law flow equations of the form m = C(ΔP)^n. The advantages of the zonal approach and its modular implementation are discussed. The zonal model resolution of nonlinear equation systems is demonstrated for three cases: a 2-D room, a 3-D room and a pair of 3-D rooms separated by a partition with an opening. A sensitivity analysis with respect to physical parameters and grid coarseness is presented. Results are compared to computational fluid dynamics (CFD) calculations and experimental data.
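A minimal sketch of the zonal-model balance described above, reduced to a single interior zone whose pressure is found by enforcing mass conservation with power-law flows; the geometry, flow coefficients, and exponent are illustrative assumptions.

```python
# Sketch: zonal-model mass balance with power-law flows m = C * dP**n.
# The coefficients, exponent and boundary pressures are illustrative assumptions.
from scipy.optimize import brentq

C_in, C_out, n = 0.8, 0.6, 0.5       # flow coefficients [kg/(s*Pa^n)] and exponent
P_supply, P_exhaust = 5.0, 0.0       # boundary pressures [Pa]

def flow(c, dp):
    """Signed power-law flow through an opening."""
    return c * abs(dp)**n * (1 if dp >= 0 else -1)

def mass_balance(p_zone):
    return flow(C_in, P_supply - p_zone) - flow(C_out, p_zone - P_exhaust)

p_zone = brentq(mass_balance, P_exhaust, P_supply)   # interior zone pressure [Pa]
print(f"zone pressure = {p_zone:.3f} Pa, flow = {flow(C_in, P_supply - p_zone):.3f} kg/s")
```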
Schwinger-Keldysh diagrammatics for primordial perturbations
NASA Astrophysics Data System (ADS)
Chen, Xingang; Wang, Yi; Xianyu, Zhong-Zhi
2017-12-01
We present a systematic introduction to the diagrammatic method for practical calculations in inflationary cosmology, based on Schwinger-Keldysh path integral formalism. We show in particular that the diagrammatic rules can be derived directly from a classical Lagrangian even in the presence of derivative couplings. Furthermore, we use a quasi-single-field inflation model as an example to show how this formalism, combined with the trick of mixed propagator, can significantly simplify the calculation of some in-in correlation functions. The resulting bispectrum includes the lighter scalar case (m<3H/2) that has been previously studied, and the heavier scalar case (m>3H/2) that has not been explicitly computed for this model. The latter provides a concrete example of quantum primordial standard clocks, in which the clock signals can be observably large.
Operational Control Procedures for the Activated Sludge Process, Part III-A: Calculation Procedures.
ERIC Educational Resources Information Center
West, Alfred W.
This is the second in a series of documents developed by the National Training and Operational Technology Center describing operational control procedures for the activated sludge process used in wastewater treatment. This document deals exclusively with the calculation procedures, including simplified mixing formulas, aeration tank…
Benefits of Applying Hierarchical Models to the Empirical Green's Function Approach
NASA Astrophysics Data System (ADS)
Denolle, M.; Van Houtte, C.
2017-12-01
Stress drops calculated from source spectral studies currently show larger variability than what is implied by empirical ground motion models. One of the potential origins of the inflated variability is the simplified model-fitting techniques used in most source spectral studies. This study improves upon these existing methods, and shows that the fitting method may explain some of the discrepancy. In particular, Bayesian hierarchical modelling is shown to be a method that can reduce bias, better quantify uncertainties and allow additional effects to be resolved. The method is applied to the Mw7.1 Kumamoto, Japan earthquake, and other global, moderate-magnitude, strike-slip earthquakes between Mw5 and Mw7.5. It is shown that the variation of the corner frequency, fc, and the falloff rate, n, across the focal sphere can be reliably retrieved without overfitting the data. Additionally, it is shown that methods commonly used to calculate corner frequencies can give substantial biases. In particular, if fc were calculated for the Kumamoto earthquake using a model with a falloff rate fixed at 2 instead of the best fit 1.6, the obtained fc would be as large as twice its realistic value. The reliable retrieval of the falloff rate allows deeper examination of this parameter for a suite of global, strike-slip earthquakes, and its scaling with magnitude. The earthquake sequences considered in this study are from Japan, New Zealand, Haiti and California.
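For orientation, the single-spectrum model underlying the discussion above can be written as Omega(f) = Omega0 / (1 + (f/fc)^n); the sketch below fits it by ordinary least squares to a synthetic spectrum. The paper's point is that a hierarchical Bayesian fit of fc and n is more robust than this kind of simple fit, and the synthetic data here are purely illustrative.

```python
# Sketch: least-squares fit of a generic source-spectrum model Omega0/(1+(f/fc)**n).
# Synthetic data and starting values are assumptions, not the study's records.
import numpy as np
from scipy.optimize import curve_fit

def source_spectrum(f, omega0, fc, n):
    return omega0 / (1.0 + (f / fc)**n)

f = np.logspace(-1, 1.5, 60)                      # frequency band [Hz]
rng = np.random.default_rng(1)
obs = source_spectrum(f, 1.0, 0.8, 1.6) * rng.lognormal(0.0, 0.1, f.size)

popt, _ = curve_fit(source_spectrum, f, obs, p0=[1.0, 1.0, 2.0])
print("omega0, fc, n =", popt)
```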
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana, Scott; Van Dam, Jeroen J; Damiani, Rick R
As part of an ongoing effort to improve the modeling and prediction of small wind turbine dynamics, the National Renewable Energy Laboratory (NREL) tested a small horizontal-axis wind turbine in the field at the National Wind Technology Center. The test turbine was a 2.1-kW downwind machine mounted on an 18-m multi-section fiberglass composite tower. The tower was instrumented and monitored for approximately 6 months. The collected data were analyzed to assess the turbine and tower loads and to further validate the simplified loads equations from the International Electrotechnical Commission (IEC) 61400-2 design standards. Field-measured loads were also compared to the output of an aeroelastic model of the turbine. In particular, we compared fatigue loads as measured in the field, predicted by the aeroelastic model, and calculated using the simplified design equations. Ultimate loads at the tower base were assessed using both the simplified design equations and the aeroelastic model output. The simplified design equations in IEC 61400-2 do not accurately model fatigue loads, and the limitations of these equations are discussed.
NASA Astrophysics Data System (ADS)
Eskandari, M. R.; Gheisari, R.; Kashian, S.
2006-02-01
This paper provides a theoretical complement to the experimental measurement of the population of excited dμ(2s) and dμ(1s) atoms in deuterium. The population of these atoms plays an important role in the muon catalyzed fusion cycle. Symmetric and non-symmetric muonic molecular ions have been predicted to form in excited states in collisions between excited muonic atoms and hydrogen molecules. One example is ddμ*, a muonic deuterium-deuterium symmetric ion in an excited state, initially produced in the interaction of dμ(2s) atoms with deuterium nuclei. Our calculations interpret the experimental findings in terms of the so-called side-path model. This model essentially deals with the interaction mentioned above, in which the ddμ* ion undergoes Coulomb de-excitation and the excitation energy is shared between a dμ(1s) atom and one deuterium nucleus. The structure of ddμ* is studied here using the numerical variational method and the given wavefunctions. A few resonance energies for ddμ* molecular states are calculated below the 2s threshold. For a more precise assessment of the reliability of the given wavefunctions, the nuclear sizes and Coulomb decay rates for the zeroth, first and second vibrational metastable states of the ion are also calculated. The obtained results are close to those previously reported. The advantage of the given method over previous methods is that the wavefunction used has only two terms, which simplifies the calculations while giving the same results as the complicated coupled rearrangement channel method with a Gaussian basis set. These energies are the base data required for size, formation and decay rate calculations of the ddμ* ion.
Marginal Loss Calculations for the DCOPF
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldridge, Brent; O'Neill, Richard P.; Castillo, Andrea R.
2016-12-05
The purpose of this paper is to explain some aspects of including a marginal line loss approximation in the DCOPF. The DCOPF optimizes electric generator dispatch using simplified power flow physics. Since the standard assumptions in the DCOPF include a lossless network, a number of modifications have to be added to the model. Calculating marginal losses allows the DCOPF to optimize the location of power generation, so that generators that are closer to demand centers are relatively cheaper than remote generation. The problem formulations discussed in this paper simplify many aspects of practical electric dispatch implementations in use today, but include sufficient detail to demonstrate a few points with regard to the handling of losses.
Elaborate SMART MCNP Modelling Using ANSYS and Its Applications
NASA Astrophysics Data System (ADS)
Song, Jaehoon; Surh, Han-bum; Kim, Seung-jin; Koo, Bonsueng
2017-09-01
An MCNP 3-dimensional model can be widely used to evaluate various design parameters such as a core design or shielding design. Conventionally, a simplified 3-dimensional MCNP model is applied to calculate these parameters because of the cumbersomeness of modelling by hand. ANSYS has a function for converting the CAD 'stp' format into the geometry part of an MCNP input. Using ANSYS and a 3-dimensional CAD file, a very detailed and sophisticated MCNP 3-dimensional model can be generated. The MCNP model is applied to evaluate the assembly weighting factor at the ex-core detector of SMART, and the result is compared with a simplified MCNP SMART model and with the assembly weighting factor calculated by DORT, a deterministic Sn code.
Morishita, Y
2001-05-01
Issues concerning the effective use of so-called simplified analytical systems are discussed from the perspective of a laboratory technician. 1. Data from simplified analytical systems should agree with those of designated reference methods so that discrepancies do not arise between laboratories. 2. The accuracy of results measured with simplified analytical systems is difficult to scrutinize thoroughly and correctly using quality control surveillance procedures based on stored pooled serum or partly processed blood. 3. Guidelines on the content of evaluations are needed to guarantee the quality of simplified analytical systems. 4. Maintenance and manual operation of simplified analytical systems should be standardized jointly by laboratory technicians and vendor technicians. 5. Attention is also drawn to the fact that simplified analytical systems are much more expensive than routine methods using liquid reagents. 6. It is also hoped that various substances in human serum, such as cytokines, hormones, tumor markers, and vitamins, can be measured by simplified analytical systems.
Revisiting the direct detection of dark matter in simplified models
NASA Astrophysics Data System (ADS)
Li, Tong
2018-07-01
In this work we numerically re-examine the loop-induced WIMP-nucleon scattering cross section for the simplified dark matter models and the constraint set by the latest direct detection experiment. We consider a fermion, scalar or vector dark matter component from five simplified models with leptophobic spin-0 mediators coupled only to Standard Model quarks and dark matter particles. The tree-level WIMP-nucleon cross sections in these models are all momentum-suppressed. We calculate the non-suppressed spin-independent WIMP-nucleon cross sections from loop diagrams and investigate the constrained space of dark matter mass and mediator mass by Xenon1T. The constraints from indirect detection and collider search are also discussed.
Notification: Methods for Procuring Supplies and Services Under Simplified Acquisition Procedures
Project #OA-FY15-0193, June 18, 2015. The EPA OIG plans to begin the preliminary research phase of auditing the methods used in procuring supplies and services under simplified acquisition procedures.
Tire-rim interface pressure of a commercial vehicle wheel under radial loads: theory and experiment
NASA Astrophysics Data System (ADS)
Wan, Xiaofei; Shan, Yingchun; Liu, Xiandong; He, Tian; Wang, Jiegong
2017-11-01
The simulation of the radial fatigue test of a wheel has been a necessary tool to improve the design of the wheel and calculate its fatigue life. The simulation model, including the strong nonlinearity of the tire structure and material, may produce accurate results, but often leads to a divergence in calculation. Thus, a simplified simulation model in which the complicated tire model is replaced with a tire-wheel contact pressure model is used extensively in the industry. In this paper, a simplified tire-rim interface pressure model of a wheel under a radial load is established, and the pressure of the wheel under different radial loads is tested. The tire-rim contact behavior affected by the radial load is studied and analyzed according to the test result, and the tire-rim interface pressure extracted from the test result is used to evaluate the simplified pressure model and the traditional cosine function model. The results show that the proposed model may provide a more accurate prediction of the wheel radial fatigue life than the traditional cosine function model.
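For illustration, the traditional cosine pressure-distribution model mentioned above assumes the bead-seat pressure decays as a cosine over a loaded arc, with its amplitude scaled so the vertical resultant balances the applied radial load; the rim dimensions and loaded angle in the sketch below are assumptions, not the paper's test values.

```python
# Sketch: traditional cosine tire-rim pressure distribution scaled to a radial load.
# Geometry (bead radius, seat width) and the loaded angle are illustrative assumptions.
import numpy as np

def cosine_pressure(theta, p_max, theta0):
    """Pressure at circumferential angle theta (rad) over the loaded arc |theta| <= theta0."""
    p = p_max * np.cos(np.pi * theta / (2.0 * theta0))
    return np.where(np.abs(theta) <= theta0, p, 0.0)

# Scale p_max so the integrated vertical component balances a 10 kN radial load.
W, rb, b, theta0 = 10e3, 0.19, 0.04, np.deg2rad(40)   # load [N], bead radius [m], seat width [m]
theta = np.linspace(-theta0, theta0, 2001)
vertical_per_pmax = np.trapz(np.cos(np.pi * theta / (2 * theta0)) * np.cos(theta), theta) * rb * b
p_max = W / vertical_per_pmax
print(f"p_max = {p_max/1e6:.2f} MPa")
print(cosine_pressure(np.deg2rad(20), p_max, theta0) / 1e6, "MPa at 20 deg")
```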
Barbour, P S; Stone, M H; Fisher, J
1999-01-01
In some designs of hip joint simulator the cost of building a highly complex machine has been offset by the requirement for a large number of test stations. The applicability of the wear results generated by these machines depends on their ability to reproduce physiological wear rates and processes. In this study a hip joint simulator has been shown to reproduce physiological wear using only one load vector and two degrees of motion with simplified input cycles. The actual paths of points on the femoral head relative to the acetabular cup were calculated and compared for physiological and simplified input cycles. The in vitro wear rates were found to be highly dependent on the shape of these paths, and similarities could be drawn between the shape of the physiological paths and the simplified elliptical paths.
Kitayama, Tomoya; Kinoshita, Ayako; Sugimoto, Masahiro; Nakayama, Yoichi; Tomita, Masaru
2006-07-17
In order to improve understanding of metabolic systems there have been attempts to construct S-system models from time courses. Conventionally, non-linear curve-fitting algorithms have been used for modelling, because of the non-linear properties of parameter estimation from time series. However, the huge iterative calculations required have hindered the development of large-scale metabolic pathway models. To solve this problem we propose a novel method involving power-law modelling of metabolic pathways from the Jacobian of the targeted system and the steady-state flux profiles by linearization of S-systems. The results of two case studies modelling a straight and a branched pathway, respectively, showed that our method reduced the number of unknown parameters needing to be estimated. The time-courses simulated by conventional kinetic models and those described by our method behaved similarly under a wide range of perturbations of metabolite concentrations. The proposed method reduces calculation complexity and facilitates the construction of large-scale S-system models of metabolic pathways, realizing a practical application of reverse engineering of dynamic simulation models from the Jacobian of the targeted system and steady-state flux profiles.
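A minimal sketch of the power-law (S-system style) linearization described above: kinetic orders follow from the flux Jacobian and steady-state values as g_ij = (dv_i/dX_j)(X_j/v_i), and rate constants then follow from the steady-state fluxes. The example Jacobian and steady state below are made up for illustration.

```python
# Sketch: recover power-law kinetic orders and rate constants from a flux Jacobian
# and steady-state values. The numbers are illustrative assumptions.
import numpy as np

def power_law_from_jacobian(dv_dx, x_ss, v_ss):
    g = dv_dx * x_ss[np.newaxis, :] / v_ss[:, np.newaxis]       # kinetic orders g_ij
    alpha = v_ss / np.prod(x_ss[np.newaxis, :]**g, axis=1)      # rate constants alpha_i
    return alpha, g

dv_dx = np.array([[0.5, -0.2],      # d v_i / d X_j at the steady state (assumed)
                  [0.1,  0.4]])
x_ss = np.array([2.0, 1.5])         # steady-state metabolite concentrations
v_ss = np.array([1.0, 0.8])         # steady-state fluxes

alpha, g = power_law_from_jacobian(dv_dx, x_ss, v_ss)
print("kinetic orders:\n", g)
print("rate constants:", alpha)
```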
Chen, Zhaoxue; Chen, Hao
2014-01-01
A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed with the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted to calculating the weighted coefficients of the two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation time, graphics processing unit multithreading or an increased spacing of control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by this method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
Sousa, Marcelo R; Jones, Jon P; Frind, Emil O; Rudolph, David L
2013-01-01
In contaminant travel from ground surface to groundwater receptors, the time taken in travelling through the unsaturated zone is known as the unsaturated zone time lag. Depending on the situation, this time lag may or may not be significant within the context of the overall problem. A method is presented for assessing the importance of the unsaturated zone in the travel time from source to receptor in terms of estimates of both the absolute and the relative advective times. A choice of different techniques for both unsaturated and saturated travel time estimation is provided. This method may be useful for practitioners to decide whether to incorporate unsaturated processes in conceptual and numerical models and can also be used to roughly estimate the total travel time between points near ground surface and a groundwater receptor. This method was applied to a field site located in a glacial aquifer system in Ontario, Canada. Advective travel times were estimated using techniques with different levels of sophistication. The application of the proposed method indicates that the time lag in the unsaturated zone is significant at this field site and should be taken into account. For this case, sophisticated and simplified techniques lead to similar assessments when the same knowledge of the hydraulic conductivity field is assumed. When there is significant uncertainty regarding the hydraulic conductivity, simplified calculations did not lead to a conclusive decision. Copyright © 2012 Elsevier B.V. All rights reserved.
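A minimal sketch of the kind of simplified advective travel-time estimates discussed above: a piston-flow approximation for the unsaturated-zone lag and a Darcy-based estimate for the saturated leg; all parameter values are illustrative, not the field-site values.

```python
# Sketch: simplified advective travel-time estimates from source to receptor.
# Parameter values are illustrative assumptions.
def unsat_travel_time(depth_m, theta, recharge_m_per_yr):
    """Piston-flow travel time [yr] through the unsaturated zone."""
    return depth_m * theta / recharge_m_per_yr

def sat_travel_time(length_m, k_m_per_day, gradient, n_eff):
    """Advective travel time [yr] along a saturated flow path (Darcy's law)."""
    darcy_velocity = k_m_per_day * gradient          # m/day
    seepage_velocity = darcy_velocity / n_eff        # m/day
    return length_m / seepage_velocity / 365.25

t_unsat = unsat_travel_time(depth_m=8.0, theta=0.15, recharge_m_per_yr=0.3)
t_sat = sat_travel_time(length_m=500.0, k_m_per_day=5.0, gradient=0.005, n_eff=0.3)
print(f"unsaturated lag ~ {t_unsat:.1f} yr, saturated path ~ {t_sat:.1f} yr")
```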
Sun, Huaiwei; Tong, Juxiu; Luo, Wenbing; Wang, Xiugui; Yang, Jinzhong
2016-08-01
Accurate modeling of soil water content is required for a reasonable prediction of crop yield and of agrochemical leaching in the field. However, complex mathematical models suffer from difficult-to-calibrate parameters and a knowledge gap between developers and users. In this study, a deterministic model is presented and used to investigate the effects of controlled drainage on soil moisture dynamics in a shallow groundwater area. This simplified one-dimensional model is formulated to simulate soil moisture in the field on a daily basis and takes into account only the vertical hydrological processes. A linear assumption is proposed and used to calculate the capillary rise from the groundwater. The pipe drainage volume is calculated using a steady-state approximation method, and the leakage rate is calculated as a function of soil moisture. The model is successfully calibrated using field experiment data from four different pipe drainage treatments with several field observations. The model was validated by comparing the simulations with observed soil water content during the experimental seasons. The comparison results demonstrated the robustness and effectiveness of the model in the prediction of average soil moisture values. The input data required to run the model are widely available and can be measured easily in the field. It is observed that controlled drainage results in a lower groundwater contribution to the root zone and a lower depth of percolation to the groundwater, thus helping to maintain a low level of soil salinity in the root zone.
TAIR: A transonic airfoil analysis computer code
NASA Technical Reports Server (NTRS)
Dougherty, F. C.; Holst, T. L.; Grundy, K. L.; Thomas, S. D.
1981-01-01
The operation of the TAIR (Transonic AIRfoil) computer code, which uses a fast, fully implicit algorithm to solve the conservative full-potential equation for transonic flow fields about arbitrary airfoils, is described on two levels of sophistication: simplified operation and detailed operation. The program organization and theory are elaborated to simplify modification of TAIR for new applications. Examples with input and output are given for a wide range of cases, including incompressible, subcritical compressible, and transonic calculations.
NASA Astrophysics Data System (ADS)
Soulis, K. X.; Valiantzas, J. D.
2012-03-01
The Soil Conservation Service Curve Number (SCS-CN) approach is widely used as a simple method for predicting direct runoff volume for a given rainfall event. The CN parameter values corresponding to various soil, land cover, and land management conditions can be selected from tables, but it is preferable to estimate the CN value from measured rainfall-runoff data when these are available. However, previous researchers indicated that the CN values calculated from measured rainfall-runoff data vary systematically with rainfall depth, and hence suggested determining a single asymptotic CN value, observed for very high rainfall depths, to characterize a watershed's runoff response. In this paper, the hypothesis that the observed correlation between the calculated CN value and the rainfall depth in a watershed reflects the effect of soil and land cover spatial variability on its hydrologic response is tested. Based on this hypothesis, the simplified concept of a two-CN heterogeneous system is introduced to model the observed CN-rainfall variation by reducing the CN spatial variability to two classes. The behaviour of the CN-rainfall function produced by the simplified two-CN system is examined theoretically and analysed systematically, and it is found to be similar to the variation observed in natural watersheds. Synthetic data tests, natural watershed examples, and a detailed study of two natural experimental watersheds with known spatial heterogeneity characteristics were used to evaluate the method. The results indicate that determining CN values from rainfall-runoff data using the proposed two-CN system approach provides reasonable accuracy and outperforms the previous methods based on a single asymptotic CN value. Although the suggested method increases the number of unknown parameters to three (instead of one), a clear physical reasoning for them is presented.
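To make the CN-rainfall behaviour discussed above concrete, here is an illustrative Python sketch (hypothetical CN values and area split, not the paper's watersheds) that applies the standard SCS-CN runoff equation to two CN classes and back-calculates the apparent single CN, which drifts with rainfall depth:

def scs_runoff(p_mm, cn, lam=0.2):
    # Standard SCS-CN direct runoff (mm) for rainfall p_mm and curve number cn.
    s = 25400.0 / cn - 254.0            # potential retention, mm
    ia = lam * s                        # initial abstraction
    return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)

def apparent_cn(p_mm, q_mm):
    # CN back-calculated from a rainfall-runoff pair (lambda = 0.2 inversion).
    s = 5.0 * (p_mm + 2.0 * q_mm - (4.0 * q_mm ** 2 + 5.0 * p_mm * q_mm) ** 0.5)
    return 25400.0 / (s + 254.0)

# A two-CN watershed: 30% of the area at CN = 90, 70% at CN = 60 (illustrative values).
for p in (25.0, 50.0, 100.0, 200.0):
    q = 0.3 * scs_runoff(p, 90) + 0.7 * scs_runoff(p, 60)
    print(p, round(q, 1), round(apparent_cn(p, q), 1))  # apparent CN drifts with rainfall depth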
Genetic Algorithm for Initial Orbit Determination with Too Short Arc (Continued)
NASA Astrophysics Data System (ADS)
Li, Xin-ran; Wang, Xin
2017-04-01
When the genetic algorithm is used to solve the too-short-arc (TSA) orbit determination problem, the original method for outlier deletion is no longer applicable because the computing process of the genetic algorithm differs from that of the classical method. In the genetic algorithm, robust estimation is realized by introducing different loss functions into the fitness function, which solves the outlier problem of TSA orbit determination. Compared with the classical method, the genetic algorithm is greatly simplified by the introduction of these loss functions. A comparison of calculations with multiple loss functions shows that the least median square (LMS) and least trimmed square (LTS) estimators can greatly improve the robustness of TSA orbit determination and have a high breakdown point.
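As an illustration of the two robust losses named above (a generic sketch, not the authors' code; the residuals here are synthetic placeholders rather than orbit-determination residuals):

import numpy as np

def lms_loss(residuals):
    # Least median of squares: median of the squared residuals.
    return np.median(residuals ** 2)

def lts_loss(residuals, trim_fraction=0.5):
    # Least trimmed squares: sum of the smallest h squared residuals.
    r2 = np.sort(residuals ** 2)
    h = max(1, int(np.ceil((1.0 - trim_fraction) * r2.size)))
    return r2[:h].sum()

# Hypothetical use inside a GA fitness evaluation: residuals between observations
# and those predicted from a candidate orbit (values are synthetic placeholders).
rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.0, 50)
residuals[:5] += 20.0                              # a few gross outliers
print(lms_loss(residuals), lts_loss(residuals))    # both stay small despite the outliers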
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 1 2011-10-01 2011-10-01 false [Reserved] 13.304 Section 13.304 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.304 [Reserved] ...
Advanced quantitative magnetic nondestructive evaluation methods - Theory and experiment
NASA Technical Reports Server (NTRS)
Barton, J. R.; Kusenberger, F. N.; Beissner, R. E.; Matzkanin, G. A.
1979-01-01
The paper reviews the scale of fatigue crack phenomena in relation to the size detection capabilities of nondestructive evaluation methods. An assessment of several features of fatigue in relation to the inspection of ball and roller bearings suggested the use of magnetic methods; magnetic domain phenomena, including the interaction of domains and inclusions and the influence of stress and magnetic field on domains, are discussed. Experimental results indicate that simplified calculations can be used to predict many features of these results, although the predictions of analytic models based on finite-element computer analysis do not agree with the data in certain respects. Experimental analyses of rod-type fatigue specimens, which relate magnetic measurements to crack opening displacement, crack volume, and crack depth, should provide methods for improved crack characterization in terms of fracture mechanics and life prediction.
Lagrangian methods in the analysis of nonlinear wave interactions in plasma
NASA Technical Reports Server (NTRS)
Galloway, J. J.
1972-01-01
An averaged-Lagrangian method is developed for obtaining the equations which describe the nonlinear interactions of the wave (oscillatory) and background (nonoscillatory) components which comprise a continuous medium. The method applies to monochromatic waves in any continuous medium that can be described by a Lagrangian density, but is demonstrated in the context of plasma physics. The theory is presented in a more general and unified form by way of a new averaged-Lagrangian formalism which simplifies the perturbation ordering procedure. Earlier theory is extended to deal with a medium distributed in velocity space and to account for the interaction of the background with the waves. The analytic steps are systematized, so as to maximize calculational efficiency. An assessment of the applicability and limitations of the method shows that it has some definite advantages over other approaches in efficiency and versatility.
Research into Influence of Gaussian Beam on Terahertz Radar Cross Section of a Semicircular Boss
NASA Astrophysics Data System (ADS)
Li, Hui-Yu; Li, Qi; She, Jian-Yu; Zhao, Yong-Peng; Chen, De-Ying; Wang, Qi
2013-08-01
In radar cross section (RCS) calculations for a rough surface, the model can be simplified to the scattering of geometrically idealized bosses on a surface; thus the RCS calculation of a rough surface is reduced to the RCS calculation of a semicircular boss. RCS measurement of a scale model can save time and money. Terahertz radiation is attractive for RCS work because of its special properties: its wavelength keeps the model size within a suitable range for scale-model measurements and yields more detailed data in measurements of the real object. However, the incident beam of a terahertz source is usually a Gaussian beam, whereas theoretical RCS estimation usually assumes a plane-wave incident beam for the sake of simplicity, which may introduce a discrepancy between measured and calculated results. In this paper, the method of images is used to calculate the RCS of a semicircular boss at 2.52 THz, and the results are compared with those calculated for a plane-wave incident beam.
Methods and Applications of the Audibility Index in Hearing Aid Selection and Fitting
Amlani, Amyn M.; Punch, Jerry L.; Ching, Teresa Y. C.
2002-01-01
During the first half of the 20th century, communications engineers at Bell Telephone Laboratories developed the articulation model for predicting speech intelligibility transmitted through different telecommunication devices under varying electroacoustic conditions. The profession of audiology adopted this model and its quantitative aspects, known as the Articulation Index and Speech Intelligibility Index, and applied these indices to the prediction of unaided and aided speech intelligibility in hearing-impaired listeners. Over time, the calculation methods of these indices—referred to collectively in this paper as the Audibility Index—have been continually refined and simplified for clinical use. This article provides (1) an overview of the basic principles and the calculation methods of the Audibility Index, the Speech Transmission Index and related indices, as well as the Speech Recognition Sensitivity Model, (2) a review of the literature on using the Audibility Index to predict speech intelligibility of hearing-impaired listeners, (3) a review of the literature on the applicability of the Audibility Index to the selection and fitting of hearing aids, and (4) a discussion of future scientific needs and clinical applications of the Audibility Index. PMID:25425917
The frozen nucleon approximation in two-particle two-hole response functions
Ruiz Simo, I.; Amaro, J. E.; Barbaro, M. B.; ...
2017-07-10
Here, we present a fast and efficient method to compute the inclusive two-particle two-hole (2p–2h) electroweak responses in the neutrino and electron quasielastic inclusive cross sections. The method is based on two approximations. The first neglects the motion of the two initial nucleons below the Fermi momentum, which are considered to be at rest. This approximation, which is reasonable for high values of the momentum transfer, turns out also to be quite good for moderate values of the momentum transfer q ≳ kF. The second approximation involves using in the “frozen” meson-exchange currents (MEC) an effective Δ-propagator averaged over the Fermi sea. Within the resulting “frozen nucleon approximation”, the inclusive 2p–2h responses are accurately calculated with only a one-dimensional integral over the emission angle of one of the final nucleons, thus drastically simplifying the calculation and reducing the computational time. The latter makes this method especially well-suited for implementation in Monte Carlo neutrino event generators.
New approaches for calculating Moran's index of spatial autocorrelation.
Chen, Yanguang
2013-01-01
Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran's index is complicated, and several basic problems remain to be solved. Therefore, I reconstruct its mathematical framework through derivations based on linear algebra and present four simple approaches to calculating Moran's index. Moran's scatterplot is improved, and new test methods are proposed. The relationship between the global Moran's index and Geary's coefficient is discussed from two different vantage points: spatial population and spatial sample. The sphere of application for both Moran's index and Geary's coefficient is clarified and defined. One of the theoretical findings is that Moran's index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 is employed to validate the innovative models and methods. This work is a methodological study that simplifies the process of autocorrelation analysis. The results of this study lay the foundation for the scaling analysis of spatial autocorrelation.
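For readers unfamiliar with the index, a minimal Python sketch of the conventional global Moran's I (the weight matrix and values below are made-up placeholders; this is the textbook formula, not the paper's reformulated framework):

import numpy as np

def morans_i(x, w):
    # Global Moran's I for values x and spatial weight matrix w (n x n).
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    num = z @ w @ z                     # sum_ij w_ij (x_i - xbar)(x_j - xbar)
    den = (z ** 2).sum()
    return (x.size / w.sum()) * num / den

# Hypothetical 4-location example with a symmetric contiguity matrix.
x = [10.0, 12.0, 20.0, 22.0]
w = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_i(x, w))   # positive value indicates positive spatial autocorrelation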
26 CFR 1.199-4 - Costs allocable to domestic production gross receipts.
Code of Federal Regulations, 2010 CFR
2010-04-01
... using the simplified deduction method. Paragraph (f) of this section provides a small business... taxpayer for internal management or other business purposes; whether the method is used for other Federal... than a taxpayer that uses the small business simplified overall method of paragraph (f) of this section...
Calculation of ground vibration spectra from heavy military vehicles
NASA Astrophysics Data System (ADS)
Krylov, V. V.; Pickup, S.; McNuff, J.
2010-07-01
The demand for reliable autonomous systems capable of detecting and identifying heavy military vehicles has become an important issue for UN peacekeeping forces in the current delicate political climate. A promising method of detection and identification uses information extracted from the ground vibration spectra generated by heavy military vehicles, often termed their seismic signatures. This paper presents the results of a theoretical investigation of ground vibration spectra generated by heavy military vehicles, such as tanks and armoured personnel carriers. A simple quarter-car model is considered to identify the resulting dynamic forces applied by a vehicle to the ground. The obtained analytical expressions for the vehicle dynamic forces are then used to calculate the generated ground vibrations, predominantly Rayleigh surface waves, using the Green's function method. A comparison of the obtained theoretical results with published experimental data shows that analytical techniques based on the simplified quarter-car vehicle model are capable of producing ground vibration spectra of heavy military vehicles that reproduce the basic properties of experimental spectra.
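For orientation, a quarter-car model of the kind mentioned above reduces the vehicle to a sprung and an unsprung mass connected by a suspension, with the ground contact modelled as a stiff spring; a hypothetical Python/SciPy sketch (illustrative parameters, not the paper's vehicle data) that integrates the model over a half-sine bump and extracts the dynamic contact force:

import numpy as np
from scipy.integrate import solve_ivp

# Illustrative quarter-car parameters (placeholders, not values from the paper).
ms, mu = 4000.0, 500.0                 # sprung / unsprung mass, kg
ks, cs = 2.0e5, 2.0e4                  # suspension stiffness N/m and damping N s/m
kt = 1.5e6                             # ground-contact (track/tyre) stiffness, N/m
v, bump_len, bump_h = 5.0, 1.0, 0.05   # speed m/s, bump length m, bump height m

def road(t):
    # Half-sine bump traversed at speed v.
    s = v * t
    return bump_h * np.sin(np.pi * s / bump_len) if 0.0 <= s <= bump_len else 0.0

def rhs(t, y):
    zs, vs, zu, vu = y
    f_susp = ks * (zs - zu) + cs * (vs - vu)
    f_contact = kt * (zu - road(t))
    return [vs, -f_susp / ms, vu, (f_susp - f_contact) / mu]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0, 0.0, 0.0], max_step=1e-3)
dynamic_force = kt * (sol.y[2] - np.array([road(t) for t in sol.t]))
print(dynamic_force.min(), dynamic_force.max())   # dynamic component of the load on the ground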
Calculation of wall effects of flow on a perforated wall with a code of surface singularities
NASA Astrophysics Data System (ADS)
Piat, J. F.
1994-07-01
Simplifying assumptions are inherent in the analytic method previously used to determine wall interference on a model in a wind tunnel. To eliminate these assumptions, a new code based on the vortex lattice method was developed. It is suitable for test sections of any shape with limited areas of porous wall, whose characteristic can be nonlinear. Calculations of wall effects in the S3MA wind tunnel, whose rectangular test section measures 0.78 m x 0.56 m and is fitted with two or four perforated walls, have been performed. Wall porosity factors were adjusted to obtain the best fit between measured and computed pressure distributions on the test section walls. The code was checked by measuring nearly equal drag coefficients for a model tested in the S3MA wind tunnel (after wall corrections) and in the S2MA wind tunnel, whose test section is seven times larger (negligible wall corrections).
MRI contrast agent concentration and tumor interstitial fluid pressure.
Liu, L J; Schlesinger, M
2016-10-07
The present work describes the relationship between tumor interstitial fluid pressure (TIFP) and the concentration of contrast agent for dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). We predict the spatial distribution of TIFP based on that of contrast agent concentration. We also discuss the cases for estimating tumor interstitial volume fraction (void fraction or porosity of porous medium), ve, and contrast volume transfer constant, K(trans), by measuring the ratio of contrast agent concentration in tissue to that in plasma. A linear fluid velocity distribution may reflect a quadratic function of TIFP distribution and lead to a practical method for TIFP estimation. To calculate TIFP, the parameters or variables should preferably be measured along the direction of the linear fluid velocity (this is in the same direction as the gray value distribution of the image, which is also linear). This method may simplify the calculation for estimating TIFP. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
Radiative Heating Methodology for the Huygens Probe
NASA Technical Reports Server (NTRS)
Johnston, Christopher O.; Hollis, Brian R.; Sutton, Kenneth
2007-01-01
The radiative heating environment for the Huygens probe near peak heating conditions for Titan entry is investigated in this paper. The task of calculating the radiation-coupled flowfield, accounting for non-Boltzmann and non-optically thin radiation, is simplified to a rapid yet accurate calculation. This is achieved by using the viscous-shock layer (VSL) technique for the stagnation-line flowfield calculation and a modified smeared rotational band (SRB) model for the radiation calculation. These two methods provide a computationally efficient alternative to a Navier-Stokes flowfield and line-by-line radiation calculation. The results of the VSL technique are shown to provide an excellent comparison with the Navier-Stokes results of previous studies. It is shown that a conventional SRB approach is inadequate for the partially optically-thick conditions present in the Huygens shock-layer around the peak heating trajectory points. A simple modification is proposed to the SRB model that improves its accuracy in these partially optically-thick conditions. This modified approach, labeled herein as SRBC, is compared throughout this study with a detailed line-by-line (LBL) calculation and is shown to compare within 5% in all cases. The SRBC method requires many orders-of-magnitude less computational time than the LBL method, which makes it ideal for coupling to the flowfield. The application of a collisional-radiative (CR) model for determining the population of the CN electronic states, which govern the radiation for Huygens entry, is discussed and applied. The non-local absorption term in the CR model is formulated in terms of an escape factor, which is then curve-fit with temperature. Although the curve-fit is an approximation, it is shown to compare well with the exact escape factor calculation, which requires a computationally intensive iteration procedure.
Dispersion relations for electromagnetic wave propagation in chiral plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao, M. X.; Guo, B., E-mail: binguo@whut.edu.cn; Peng, L.
2014-11-15
The dispersion relations for electromagnetic wave propagation in chiral plasmas are derived using a simplified method and investigated in detail. With the help of the dispersion relations for each eigenwave, we explore how chiral plasmas exhibit negative refraction and investigate the frequency region for negative refraction. The results show that chirality can induce negative refraction in plasmas. Moreover, both the degree of chirality and the external magnetic field have a significant effect on the critical frequency and the bandwidth of the frequency region for negative refraction in chiral plasmas. The parameter dependence of these effects is calculated and discussed.
Principal component regression analysis with SPSS.
Liu, R X; Kuang, J; Gong, Q; Hou, X L
2003-06-01
The paper introduces the indices used for multicollinearity diagnosis, the basic principle of principal component regression, and the method for determining the 'best' equation. An example is used to describe how to perform principal component regression analysis with SPSS 10.0, covering the full calculation process of the principal component regression as well as the linear regression, factor analysis, descriptives, compute variable, and bivariate correlations procedures in SPSS 10.0. Principal component regression analysis can be used to overcome the disturbance caused by multicollinearity, and performing it with SPSS makes the statistical work simpler, faster, and accurate.
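The abstract describes an SPSS workflow; as a language-neutral illustration of the same idea (principal component scores used as regressors to sidestep multicollinearity), a hypothetical Python/scikit-learn sketch with synthetic collinear data:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

# Synthetic collinear predictors (x2 is nearly a copy of x1) and a response.
rng = np.random.default_rng(1)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.01, size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])
y = 2.0 * x1 + 0.5 * x3 + rng.normal(scale=0.1, size=200)

Xs = StandardScaler().fit_transform(X)        # standardize before extracting components
pca = PCA(n_components=2).fit(Xs)             # drop the redundant collinear direction
scores = pca.transform(Xs)
model = LinearRegression().fit(scores, y)     # regress on component scores instead of raw X
print(model.score(scores, y))                 # fit quality of the principal component regression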
NASA Astrophysics Data System (ADS)
Andreev, Vladimir
2018-03-01
The paper deals with the problem of determining the stress state of a pressure vessel (PV) while taking into account the temperature-induced inhomogeneity of the concrete. Such structures are widely used in heat power engineering, for example in nuclear power engineering. The structures of such buildings are quite complex, and a comprehensive analysis of their stress state can be carried out either by numerical or by experimental methods. However, a number of fundamental questions can be resolved on the basis of simplified models, in particular the effect on the stress state of the inhomogeneity caused by the temperature field.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dana, S.; Damiani, R.; vanDam, J.
As part of an ongoing effort to improve the modeling and prediction of small wind turbine dynamics, NREL tested a small horizontal axis wind turbine in the field at the National Wind Technology Center (NWTC). The test turbine was a 2.1-kW downwind machine mounted on an 18-meter multi-section fiberglass composite tower. The tower was instrumented and monitored for approximately 6 months. The collected data were analyzed to assess the turbine and tower loads and further validate the simplified loads equations from the International Electrotechnical Commission (IEC) 61400-2 design standards. Field-measured loads were also compared to the output of an aeroelastic model of the turbine. Ultimate loads at the tower base were assessed using both the simplified design equations and the aeroelastic model output. The simplified design equations in IEC 61400-2 do not accurately model fatigue loads. In this project, we compared fatigue loads as measured in the field, as predicted by the aeroelastic model, and as calculated using the simplified design equations.
2014-01-01
Background: The measurement of mechanosensitivity is a key method for the study of pain in animal models. This is often accomplished with the use of von Frey filaments in an up-down testing paradigm. The up-down method described by Chaplan et al. (J Neurosci Methods 53:55–63, 1994) for mechanosensitivity testing in rodents remains one of the most widely used methods for measuring pain in animals. However, this method results in animals receiving a varying number of stimuli, which may lead to animals in different groups receiving different testing experiences that influence their later responses. To standardize the measurement of mechanosensitivity we developed a simplified up-down method (SUDO) for estimating paw withdrawal threshold (PWT) with von Frey filaments that uses a constant number of five stimuli per test. We further refined the PWT calculation to allow the estimation of PWT directly from the behavioral response to the fifth stimulus, omitting the need for look-up tables. Results: The PWT estimates derived using SUDO strongly correlated (r > 0.96) with the PWT estimates determined with the conventional up-down method of Chaplan et al., and this correlation remained very strong across different levels of tester experience, different experimental conditions, and in tests from both mice and rats. The two testing methods also produced similar PWT estimates in prospective behavioral tests of mice at baseline and after induction of hyperalgesia by intraplantar capsaicin or complete Freund’s adjuvant. Conclusion: SUDO thus offers an accurate, fast and user-friendly replacement for the widely used up-down method of Chaplan et al. PMID:24739328
48 CFR 13.302 - Purchase orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Purchase orders. 13.302 Section 13.302 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 13.302 Purchase...
Kohno, Ryosuke; Hotta, Kenji; Matsubara, Kana; Nishioka, Shie; Matsuura, Taeko; Kawashima, Mitsuhiko
2012-03-08
When in vivo proton dosimetry is performed with a metal-oxide semiconductor field-effect transistor (MOSFET) detector, the response of the detector depends strongly on the linear energy transfer. The present study reports a practical method to correct the MOSFET response for linear energy transfer dependence by using a simplified Monte Carlo dose calculation method (SMC). A depth-output curve for a mono-energetic proton beam in polyethylene was measured with the MOSFET detector. This curve was used to calculate MOSFET output distributions with the SMC (SMC(MOSFET)). The SMC(MOSFET) output value at an arbitrary point was compared with the value obtained by the conventional SMC(PPIC), which calculates proton dose distributions by using the depth-dose curve determined by a parallel-plate ionization chamber (PPIC). The ratio of the two values was used to calculate the correction factor of the MOSFET response at an arbitrary point. The dose obtained by the MOSFET detector was determined from the product of the correction factor and the MOSFET raw dose. When in vivo proton dosimetry was performed with the MOSFET detector in an anthropomorphic phantom, the corrected MOSFET doses agreed with the SMC(PPIC) results within the measurement error. To our knowledge, this is the first report of successful in vivo proton dosimetry with a MOSFET detector.
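The correction scheme described above can be summarized in a few lines; the sketch below is illustrative only, and the ratio direction and numerical values are assumptions rather than data from the paper:

def corrected_mosfet_dose(raw_mosfet_dose, smc_ppic_dose, smc_mosfet_output):
    # Correct a measured MOSFET dose for LET dependence, in the spirit of the
    # abstract: scale the raw reading by the ratio of the SMC(PPIC)-calculated
    # dose to the SMC(MOSFET)-calculated output at the same point.
    # (The direction of the ratio is an assumption made for this sketch.)
    correction = smc_ppic_dose / smc_mosfet_output
    return correction * raw_mosfet_dose

# Hypothetical values at one measurement point in a phantom.
print(corrected_mosfet_dose(raw_mosfet_dose=1.95, smc_ppic_dose=2.00, smc_mosfet_output=1.90))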
NASA Astrophysics Data System (ADS)
Fakhari, Abbas; Mitchell, Travis; Leonardi, Christopher; Bolster, Diogo
2017-11-01
Based on phase-field theory, we introduce a robust lattice-Boltzmann equation for modeling immiscible multiphase flows at large density and viscosity contrasts. Our approach is built by modifying the method proposed by Zu and He [Phys. Rev. E 87, 043301 (2013), 10.1103/PhysRevE.87.043301] in such a way as to improve efficiency and numerical stability. In particular, we employ a different interface-tracking equation based on the so-called conservative phase-field model, a simplified equilibrium distribution that decouples pressure and velocity calculations, and a local scheme based on the hydrodynamic distribution functions for calculation of the stress tensor. In addition to two distribution functions for interface tracking and recovery of hydrodynamic properties, the only nonlocal variable in the proposed model is the phase field. Moreover, within our framework there is no need to use biased or mixed difference stencils for numerical stability and accuracy at high density ratios. This not only simplifies the implementation and efficiency of the model, but also leads to a model that is better suited to parallel implementation on distributed-memory machines. Several benchmark cases are considered to assess the efficacy of the proposed model, including the layered Poiseuille flow in a rectangular channel, Rayleigh-Taylor instability, and the rise of a Taylor bubble in a duct. The numerical results are in good agreement with available numerical and experimental data.
NASA Technical Reports Server (NTRS)
Cheatwood, F. Mcneil; Dejarnette, Fred R.
1991-01-01
An approximate axisymmetric method was developed which can reliably calculate fully viscous hypersonic flows over blunt-nosed bodies. By substituting Maslen's second-order pressure expression for the normal momentum equation, a simplified form of the viscous shock layer (VSL) equations is obtained. This approach can solve both the subsonic and supersonic regions of the shock layer without a starting solution for the shock shape, and it is applicable to perfect gas, equilibrium, and nonequilibrium flowfields. Since the method is fully viscous, the problems associated with coupling a boundary-layer solution to an inviscid-layer solution are avoided. The procedure is significantly faster than parabolized Navier-Stokes (PNS) or VSL solvers and would be useful in a preliminary design environment. Problems associated with a previously developed approximate VSL technique are addressed before the method is extended to nonequilibrium calculations. Perfect gas (laminar and turbulent), equilibrium, and nonequilibrium solutions were generated for airflows over several analytic body shapes. Surface heat transfer, skin friction, and pressure predictions are comparable to VSL results, and computed heating rates are in good agreement with experimental data. The present technique generates its own shock shape as part of its solution and could therefore be used to provide more accurate initial shock shapes for higher-order procedures that require starting solutions.
Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core
NASA Astrophysics Data System (ADS)
Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.
2017-01-01
The impact of the covariances in current nuclear data libraries such as ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL on relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power and isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities, such as those arising from the self-shielding treatment, are investigated. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. One: although this study is not expected to lead to similar results between the involved calculation schemes, it provides insight into what can happen when calculating uncertainties and allows some perspective to be given on the range of validity of these uncertainties. Two: it allows a picture to be drawn of the current state of knowledge, using existing nuclear data library covariances and current methods.
Kimmel, Lara A; Holland, Anne E; Simpson, Pam M; Edwards, Elton R; Gabbe, Belinda J
2014-07-01
Early, accurate prediction of discharge destination from the acute hospital assists individual patients and the wider hospital system. The Trauma Rehabilitation and Prediction Tool (TRaPT), developed using registry data, determines the probability of inpatient rehabilitation discharge for patients with isolated lower limb fractures. The aims of this study were: (1) to prospectively validate the TRaPT, (2) to assess whether its performance could be improved by adding demographic data, and (3) to simplify it for use as a bedside tool. This was a cohort, measurement-focused study. Patients with isolated lower limb fractures (N=114) who were admitted to a major trauma center in Melbourne, Australia, were included. The participants' TRaPT scores were calculated from admission data. Performance of the TRaPT score alone, and in combination with frailty, weight-bearing status, and home supports, was assessed using measures of discrimination and calibration. A simplified TRaPT was developed by rounding the coefficients of variables in the original model and grouping age into 8 categories. Simplified TRaPT performance measures, including specificity, sensitivity, and positive and negative predictive values, were evaluated. Prospective validation of the TRaPT showed excellent discrimination (C-statistic=0.90 [95% confidence interval=0.82, 0.97]), a sensitivity of 80%, and a specificity of 94%. All participants able to weight bear were discharged directly home. Simplified TRaPT scores had a sensitivity of 80% and a specificity of 88%. Generalizability may be limited given the compensation system that exists in Australia, but the methods used will assist in designing a similar tool in any population. The TRaPT accurately predicted discharge destination for 80% of patients and may form a useful aid for discharge decision making, with the simplified version facilitating its use as a bedside tool. © 2014 American Physical Therapy Association.
Probabilistic finite elements for transient analysis in nonlinear continua
NASA Technical Reports Server (NTRS)
Liu, W. K.; Belytschko, T.; Mani, A.
1985-01-01
The probabilistic finite element method (PFEM), which is a combination of finite element methods and second-moment analysis, is formulated for linear and nonlinear continua with inhomogeneous random fields. Analogous to the discretization of the displacement field in finite element methods, the random field is also discretized. The formulation is simplified by transforming the correlated variables to a set of uncorrelated variables through an eigenvalue orthogonalization. Furthermore, it is shown that a reduced set of the uncorrelated variables is sufficient for the second-moment analysis. Based on the linear formulation of the PFEM, the method is then extended to transient analysis in nonlinear continua. The accuracy and efficiency of the method is demonstrated by application to a one-dimensional, elastic/plastic wave propagation problem. The moments calculated compare favorably with those obtained by Monte Carlo simulation. Also, the procedure is amenable to implementation in deterministic FEM based computer programs.
Liu, Yong-Kuo; Chao, Nan; Xia, Hong; Peng, Min-Jun; Ayodeji, Abiodun
2018-05-17
This paper presents an improved and efficient virtual reality-based adaptive dose assessment method (VRBAM) applicable to the cutting and dismantling tasks in nuclear facility decommissioning. The method combines the modeling strength of virtual reality with the flexibility of adaptive technology. The initial geometry is designed with the three-dimensional computer-aided design tools, and a hybrid model composed of cuboids and a point-cloud is generated automatically according to the virtual model of the object. In order to improve the efficiency of dose calculation while retaining accuracy, the hybrid model is converted to a weighted point-cloud model, and the point kernels are generated by adaptively simplifying the weighted point-cloud model according to the detector position, an approach that is suitable for arbitrary geometries. The dose rates are calculated with the Point-Kernel method. To account for radiation scattering effects, buildup factors are calculated with the Geometric-Progression formula in the fitting function. The geometric modeling capability of VRBAM was verified by simulating basic geometries, which included a convex surface, a concave surface, a flat surface and their combination. The simulation results show that the VRBAM is more flexible and superior to other approaches in modeling complex geometries. In this paper, the computation time and dose rate results obtained from the proposed method were also compared with those obtained using the MCNP code and an earlier virtual reality-based method (VRBM) developed by the same authors. © 2018 IOP Publishing Ltd.
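As background for the dose engine named above, the Point-Kernel method sums attenuated point-source contributions with a buildup correction; a schematic Python sketch (hypothetical kernel positions, source strengths and attenuation coefficient, and a crude placeholder buildup term rather than the Geometric-Progression formula used in the paper):

import numpy as np

def point_kernel_dose_rate(kernels, detector, mu):
    # Sum point-kernel contributions S * B(mu*r) * exp(-mu*r) / (4*pi*r^2).
    # kernels: iterable of (x, y, z, source_strength); detector: (x, y, z).
    det = np.asarray(detector, dtype=float)
    total = 0.0
    for x, y, z, s in kernels:
        r = np.linalg.norm(det - np.array([x, y, z]))
        mur = mu * r
        buildup = 1.0 + mur            # crude placeholder, not the GP fitting formula
        total += s * buildup * np.exp(-mur) / (4.0 * np.pi * r ** 2)
    return total

# Two hypothetical point kernels (positions in cm, arbitrary source strengths).
print(point_kernel_dose_rate([(0, 0, 0, 1.0), (10, 0, 0, 0.5)], (50, 0, 0), mu=0.05))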
Verkest, K R; Fleeman, L M; Rand, J S; Morton, J M
2010-10-01
There is need for simple, inexpensive measures of glucose tolerance, insulin sensitivity, and insulin secretion in dogs. The aim of this study was to estimate the closeness of correlation between fasting and dynamic measures of insulin sensitivity and insulin secretion, the precision of fasting measures, and the agreement between results of standard and simplified glucose tolerance tests in dogs. A retrospective descriptive study using 6 naturally occurring obese and 6 lean dogs was conducted. Data from frequently sampled intravenous glucose tolerance tests (FSIGTTs) in 6 obese and 6 lean client-owned dogs were used to calculate HOMA, QUICKI, fasting glucose and insulin concentrations. Fasting measures of insulin sensitivity and secretion were compared with MINMOD analysis of FSIGTTs using Pearson correlation coefficients, and they were evaluated for precision by the discriminant ratio. Simplified sampling protocols were compared with standard FSIGTTs using Lin's concordance correlation coefficients, limits of agreement, and Pearson correlation coefficients. All fasting measures except fasting plasma glucose concentration were moderately correlated with MINMOD-estimated insulin sensitivity (|r| = 0.62-0.80; P < 0.03), and those that combined fasting insulin and glucose were moderately closely correlated with MINMOD-estimated insulin secretion (r = 0.60-0.79; P < 0.04). HOMA calculated using the nonlinear formulae had the closest estimated correlation (r = 0.77 and 0.74) and the best discrimination for insulin sensitivity and insulin secretion (discriminant ratio 4.4 and 3.4, respectively). Simplified sampling protocols with half as many samples collected over 3 h had close agreement with the full sampling protocol. Fasting measures and simplified intravenous glucose tolerance tests reflect insulin sensitivity and insulin secretion derived from frequently sampled glucose tolerance tests with MINMOD analysis in dogs. Copyright 2010 Elsevier Inc. All rights reserved.
Effects of shock on hypersonic boundary layer stability
NASA Astrophysics Data System (ADS)
Pinna, F.; Rambaud, P.
2013-06-01
The design of hypersonic vehicles requires the estimate of the laminar to turbulent transition location for an accurate sizing of the thermal protection system. Linear stability theory is a fast scientific way to study the problem. Recent improvements in computational capabilities allow computing the flow around a full vehicle instead of using only simplified boundary layer equations. In this paper, the effect of the shock is studied on a mean flow provided by steady Computational Fluid Dynamics (CFD) computations and simplified boundary layer calculations.
Longitudinal stability in relation to the use of an automatic pilot
NASA Technical Reports Server (NTRS)
Klemin, Alexander; Pepper, Perry A; Wittner, Howard A
1938-01-01
The effect of restraint in pitching introduced by an automatic pilot upon the longitudinal stability of an airplane has been studied. Customary simplifying assumptions have been made in setting down the equations of motion, and the results of computations based on the simplified equations are presented to show the effect of an automatic pilot installed in an airplane of known dimensions and characteristics. The equations developed have been applied by making calculations for a Clark biplane and a Fairchild 22 monoplane.
48 CFR 1313.302 - Purchase orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Purchase orders. 1313.302 Section 1313.302 Federal Acquisition Regulations System DEPARTMENT OF COMMERCE CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisitions Methods 1313.302 Purchase orders. ...
48 CFR 813.302 - Purchase orders.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Purchase orders. 813.302 Section 813.302 Federal Acquisition Regulations System DEPARTMENT OF VETERANS AFFAIRS CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 813.302 Purchase...
48 CFR 1413.305 - Imprest fund.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 48 Federal Acquisition Regulations System 5 2011-10-01 2011-10-01 false Imprest fund. 1413.305 Section 1413.305 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 1413.305 Imprest fund. ...
48 CFR 1413.305 - Imprest fund.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 48 Federal Acquisition Regulations System 5 2010-10-01 2010-10-01 false Imprest fund. 1413.305 Section 1413.305 Federal Acquisition Regulations System DEPARTMENT OF THE INTERIOR CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES Simplified Acquisition Methods 1413.305 Imprest fund. ...
Simple design of slanted grating with simplified modal method.
Li, Shubin; Zhou, Changhe; Cao, Hongchao; Wu, Jun
2014-02-15
A simplified modal method (SMM) is presented that offers a clear physical image for subwavelength slanted grating. The diffraction characteristic of the slanted grating under Littrow configuration is revealed by the SMM as an equivalent rectangular grating, which is in good agreement with rigorous coupled-wave analysis. Based on the equivalence, we obtained an effective analytic solution for simplifying the design and optimization of a slanted grating. It offers a new approach for design of the slanted grating, e.g., a 1×2 beam splitter can be easily designed. This method should be helpful for designing various new slanted grating devices.
NASA Technical Reports Server (NTRS)
Kurtenbach, F. J.
1979-01-01
The technique, which relies on afterburner duct pressure measurements and empirical corrections to an ideal one-dimensional flow analysis to determine thrust, is presented, and a comparison of the calculated and facility-measured thrust values is reported. The simplified model is compared with the engine manufacturer's gas generator model. The evaluation was conducted over a range of Mach numbers from 0.80 to 2.00 and at altitudes from 4020 meters to 15,240 meters. The effects of variations in inlet total temperature from standard-day conditions were explored, and engine conditions were varied from those normally scheduled for flight. The technique was found to be accurate to within a two-standard-deviation value of 2.89 percent, with accuracy a strong function of the afterburner duct pressure difference.
Study of the Radiative Properties of Inhomogeneous Stratocumulus Clouds
NASA Technical Reports Server (NTRS)
Batey, Michael
1996-01-01
Clouds play an important role in the radiation budget of the atmosphere. A good understanding of how clouds interact with solar radiation is necessary when considering their effects in both general circulation models and climate models. This study examined the radiative properties of clouds in both an inhomogeneous cloud system and a simplified cloud system through the use of a Monte Carlo model. The purpose was to become more familiar with the radiative properties of clouds, especially absorption, and to investigate the excess absorption of solar radiation seen in observations over that calculated from theory. The first cloud system indicated that the absorptance actually decreased as the cloud's inhomogeneity increased, and that the cloud forcing does not indicate any changes. The simplified cloud system examined two different cases of absorption of solar radiation in the cloud. The absorptances calculated from the Monte Carlo model are compared with a correction method for calculating absorptances, and the correction method is found to over- or underestimate absorptances at cloud edges. The cloud edge effects due to solar radiation also point to a possible overestimate of the retrieved optical depth at the edge and indicate a possible way to correct for it. The effective cloud fraction (Ne) has long been calculated from a cloud's reflectance, and it has been observed that Ne for most cloud geometries is greater than the actual cloud fraction (Nc), making a cloud appear wider than it is optically. Recent studies we have performed used a Monte Carlo model to calculate the Ne of a cloud using not only the reflectance but also the absorptance; the Ne values derived from the absorptance in some of the Monte Carlo runs did not match those derived from the reflectance. This study also examined the inhomogeneity of clouds to find a relationship between larger and smaller scales, or wavelengths, of the cloud. Both Fourier transforms and wavelet transforms were used to analyze the liquid water content of marine stratocumulus clouds measured during the ASTEX project. The analysis showed that the energy in the cloud is not uniformly distributed but is greater at the larger scales than at the smaller scales, as determined by examining the slope of the power spectrum and by comparing the variability at two scales from a wavelet analysis.
NASA Technical Reports Server (NTRS)
Kastner, S. O.; Bhatia, A. K.
1980-01-01
A generalized method for obtaining individual level population ratios is used to obtain relative intensities of extreme ultraviolet Fe XV emission lines in the range 284-500 A, which are density dependent for electron densities in the tokamak regime or higher. Four lines in particular are found to attain quite high intensities in the high-density limit. The same calculation provides inelastic contributions to linewidths. The method connects level populations and level widths through total probabilities t(ij), related to 'taboo' probabilities of Markov chain theory. The t(ij) are here evaluated for a real atomic system, being therefore of potential interest to random-walk theorists who have been limited to idealized systems characterized by simplified transition schemes.
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be given to the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed.
NASA Astrophysics Data System (ADS)
Ma, Zhichao; Zhao, Hongwei; Ren, Luquan
2016-06-01
Most miniature in situ tensile devices compatible with scanning/transmission electron microscopes or optical microscopes adopt a horizontal layout. In order to analyze and calculate the measurement error of the tensile Young’s modulus, the effects of gravity and temperature changes, which would respectively lead to and intensify the bending deformation of thin specimens, are considered as influencing factors. On the basis of a decomposition method of static indeterminacy, equations of simplified deflection curves are obtained and, accordingly, the actual gage length is confirmed. By comparing the effects of uniaxial tensile load on the change of the deflection curve with gravity, the relation between the actual and directly measured tensile Young’s modulus is obtained. Furthermore, the quantitative effects of the ideal gage length l_o, temperature change ΔT and the density ρ of the specimen on the modulus difference and modulus ratio are calculated. Specimens with larger l_o and ρ present more obvious measurement errors for Young’s modulus, but the effect of ΔT is not significant. The calculation method of Young’s modulus is particularly suitable for thin specimens.
Simplified power control method for cellular mobile communication
NASA Astrophysics Data System (ADS)
Leung, Y. W.
1994-04-01
The centralized power control (CPC) method measures the gain of the communication links between every mobile and every base station in the cochannel cells and determines optimal transmitter power to maximize the minimum carrier-to-interference ratio. The authors propose a simplified power control method which has nearly the same performance as the CPC method but which involves much smaller measurement overhead.
Tao, Guohua; Miller, William H
2012-09-28
An efficient time-dependent (TD) Monte Carlo (MC) importance sampling method has recently been developed [G. Tao and W. H. Miller, J. Chem. Phys. 135, 024104 (2011)] for the evaluation of time correlation functions using the semiclassical (SC) initial value representation (IVR) methodology. In this TD-SC-IVR method, the MC sampling uses information from both time-evolved phase points as well as their initial values, and only the "important" trajectories are sampled frequently. Even though the TD-SC-IVR was shown in some benchmark examples to be much more efficient than the traditional time-independent sampling method (which uses only initial conditions), the calculation of the SC prefactor-which is computationally expensive, especially for large systems-is still required for accepted trajectories. In the present work, we present an approximate implementation of the TD-SC-IVR method that is completely prefactor-free; it gives the time correlation function as a classical-like magnitude function multiplied by a phase function. Application of this approach to flux-flux correlation functions (which yield reaction rate constants) for the benchmark H + H(2) system shows very good agreement with exact quantum results. Limitations of the approximate approach are also discussed.
Analysis of a new phase and height algorithm in phase measurement profilometry
NASA Astrophysics Data System (ADS)
Bian, Xintian; Zuo, Fen; Cheng, Ju
2018-04-01
Traditional phase measurement profilometry adopts divergent illumination to obtain the height distribution of a measured object accurately. However, the mapping relation between reference plane coordinates and phase distribution must be calculated before measurement, and the data are then stored in a computer in the form of a data sheet for later use. This study improved the distribution of the projected fringes and derived the phase-height mapping algorithm for the case in which the two pupils of the projection and imaging systems are at unequal heights and the projection and imaging axes lie on different planes. With this algorithm, calculating the mapping relation between reference plane coordinates and phase distribution prior to measurement is unnecessary; thus the measurement process is simplified, and the construction of an experimental system is made easier. Computer simulation and experimental results confirm the effectiveness of the method.
Crystal growth and furnace analysis
NASA Technical Reports Server (NTRS)
Dakhoul, Youssef M.
1986-01-01
A thermal analysis of Hg/Cd/Te solidification in a Bridgman cell is made using Continuum's VAST code. The energy equation is solved in an axisymmetric, quasi-steady domain for both the molten and solid alloy regions. Alloy composition is calculated by a simplified one-dimensional model to estimate its effect on melt thermal conductivity and, consequently, on the temperature field within the cell. Solidification is assumed to occur at a fixed temperature of 979 K. Simplified boundary conditions are included to model both the radiant and conductive heat exchange between the furnace walls and the alloy. Calculations are performed to show how the steady-state isotherms are affected by: the hot and cold furnace temperatures, boundary condition parameters, and the growth rate which affects the calculated alloy's composition. The Advanced Automatic Directional Solidification Furnace (AADSF), developed by NASA, is also thermally analyzed using the CINDA code. The objective is to determine the performance and the overall power requirements for different furnace designs.
Risk-Screening Environmental Indicators (RSEI)
EPA's Risk-Screening Environmental Indicators (RSEI) is a geographically-based model that helps policy makers and communities explore data on releases of toxic substances from industrial facilities reporting to EPA's Toxics Release Inventory (TRI). By analyzing TRI information together with simplified risk factors, such as the amount of chemical released, its fate and transport through the environment, each chemical's relative toxicity, and the number of people potentially exposed, RSEI calculates a numeric score, which is designed to only be compared to other scores calculated by RSEI. Because it is designed as a screening-level model, RSEI uses worst-case assumptions about toxicity and potential exposure where data are lacking, and also uses simplifying assumptions to reduce the complexity of the calculations. A more refined assessment is required before any conclusions about health impacts can be drawn. RSEI is used to establish priorities for further investigation and to look at changes in potential impacts over time. Users can save resources by conducting preliminary analyses with RSEI.
NASA Astrophysics Data System (ADS)
Haqiqi, M. T.; Yuliansyah; Suwinarti, W.; Amirta, R.
2018-04-01
The Short Rotation Coppice (SRC) system is an option for providing renewable and sustainable feedstock for generating electricity in rural areas. In this study, we focused on the application of Response Surface Methodology (RSM) to simplify the calculation protocols for estimating wood chip production and energy potential from several tropical SRC species, identified as Bauhinia purpurea, Bridelia tomentosa, Calliandra calothyrsus, Fagraea racemosa, Gliricidia sepium, Melastoma malabathricum, Piper aduncum, Vernonia amygdalina, Vernonia arborea and Vitex pinnata. The results showed that the highest calorific value was obtained from V. pinnata wood (19.97 MJ kg-1) owing to its high lignin content (29.84 %, w/w). Our findings also indicated that the RSM estimate of the energy-to-electricity output of SRC wood had a significant quadratic-model term (R2 = 0.953), whereas the solid-to-chip ratio prediction was accurate (R2 = 1.000). In the near future, this simple formula promises an easy way to calculate energy production from woody biomass, especially from SRC species.
Generalized stacking fault energies of alloys.
Li, Wei; Lu, Song; Hu, Qing-Miao; Kwon, Se Kyun; Johansson, Börje; Vitos, Levente
2014-07-02
The generalized stacking fault energy (γ surface) provides fundamental physics for understanding plastic deformation mechanisms. Using the ab initio exact muffin-tin orbitals method in combination with the coherent potential approximation, we calculate the γ surface for the disordered Cu-Al, Cu-Zn, Cu-Ga, Cu-Ni, Pd-Ag and Pd-Au alloys. Studying the effect of segregation of the solute to the stacking fault planes shows that only the local chemical composition affects the γ surface. The calculated alloying trends are discussed using the electronic band structure of the base and distorted alloys. Based on our γ surface results, we demonstrate that the previously revealed 'universal scaling law' between the intrinsic energy barriers (IEBs) is well obeyed in random solid solutions. This greatly simplifies the calculation of the twinning measure parameters or the critical twinning stress. Adopting two twinnability measure parameters derived from the IEBs, we find that in binary Cu alloys, Al, Zn and Ga increase the twinnability, while Ni decreases it. Aluminum and gallium yield similar effects on the twinnability.
Madsen, Kristoffer H; Ewald, Lars; Siebner, Hartwig R; Thielscher, Axel
2015-01-01
Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. The aim of this work was to develop an approach for reconstructing the magnetic vector potential from automated measurements. We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach that determines the magnetic vector potential via volume integration of the measured field. The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well with that calculated using a model reconstructed from x-ray images. The setup can supply validated models for existing and newly appearing TMS coils. Copyright © 2015 Elsevier Inc. All rights reserved.
Iterative Addition of Kinetic Effects to Cold Plasma RF Wave Solvers
NASA Astrophysics Data System (ADS)
Green, David; Berry, Lee; RF-SciDAC Collaboration
2017-10-01
The hot nature of fusion plasmas requires a wave vector dependent conductivity tensor for accurate calculation of wave heating and current drive. Traditional methods for calculating the linear, kinetic full-wave plasma response rely on a spectral method such that the wave vector dependent conductivity fits naturally within the numerical method. These methods have seen much success for application to the well-confined core plasma of tokamaks. However, quantitative prediction of high power RF antenna designs for fusion applications has meant a requirement of resolving the geometric details of the antenna and other plasma facing surfaces, for which the Fourier spectral method is ill-suited. An approach to enabling the addition of kinetic effects to the more versatile finite-difference and finite-element cold-plasma full-wave solvers was presented in earlier work, where an operator-split iterative method was outlined. Here we expand on this approach, examine convergence, and present a simplified kinetic current estimator for rapidly updating the right-hand side of the wave equation with kinetic corrections. This research used resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC05-00OR22725.
Lomond, Jasmine S; Tong, Anthony Z
2011-01-01
Analysis of dissolved methane, ethylene, acetylene, and ethane in water is crucial in evaluating anaerobic activity and investigating the sources of hydrocarbon contamination in aquatic environments. A rapid chromatographic method based on phase equilibrium between water and its headspace is developed for these analytes. The new method requires minimal sample preparation and no special apparatus except those associated with gas chromatography. Instead of Henry's Law used in similar previous studies, partition coefficients are used for the first time to calculate concentrations of dissolved hydrocarbon gases, which considerably simplifies the calculation involved. Partition coefficients are determined to be 128, 27.9, 1.28, and 96.3 at 30°C for methane, ethylene, acetylene, and ethane, respectively. It was discovered that the volume ratio of gas-to-liquid phase is critical to the accuracy of the measurements. The method performance can be readily improved by reducing the volume ratio of the two phases. Method validation shows less than 6% variation in accuracy and precision except at low levels of methane where interferences occur in ambient air. Method detection limits are determined to be in the low ng/L range for all analytes. The performance of the method is further tested using environmental samples collected from various sites in Nova Scotia.
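The headspace calculation described above can be illustrated with a simple mass balance. The sketch below is not the authors' implementation; it assumes the partition coefficient K is the gas/liquid concentration ratio at equilibrium (the paper does not state its convention), and all numeric inputs are made up for illustration.

```python
# Hedged illustration: back-calculating the original dissolved-gas concentration
# from a single headspace measurement using an equilibrium partition coefficient.
# Assumption: K = C_gas / C_liquid at equilibrium (convention not given in the abstract).

def dissolved_concentration(c_gas_eq, K, v_gas, v_liquid):
    """Original dissolved concentration from a headspace measurement.

    Mass balance: C_L0 * V_L = C_L,eq * V_L + C_G,eq * V_G, with C_L,eq = C_G,eq / K.
    """
    return c_gas_eq * (1.0 / K + v_gas / v_liquid)

# Example with made-up numbers: measured headspace concentration of 2.5 ug/L,
# a 5 mL headspace over 20 mL of water, and an assumed K of 27.9 (ethylene at 30 C).
print(dissolved_concentration(c_gas_eq=2.5, K=27.9, v_gas=5.0, v_liquid=20.0))
```

Note how the gas-to-liquid volume ratio enters the result directly, which is consistent with the abstract's observation that this ratio is critical to measurement accuracy.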
NASA Astrophysics Data System (ADS)
Miquel, Benjamin
The dynamic or seismic behavior of hydraulic structures is, as for conventional structures, essential to assuring the protection of human lives. These types of analyses also aim at limiting structural damage caused by an earthquake to prevent rupture or collapse of the structure. The particularity of hydraulic structures is that the internal displacements are caused not only by the earthquake but also by the hydrodynamic loads resulting from fluid-structure interaction. This thesis reviews the existing complex and simplified methods to perform such dynamic analyses for hydraulic structures. For the complex existing methods, attention is placed on the difficulties arising from their use. In particular, this work examines the use of transmitting boundary conditions to simulate the semi-infinite extent of reservoirs. A procedure has been developed to estimate the error that these boundary conditions can introduce in finite element dynamic analysis. We show that, depending on their formulation and location, they can considerably affect the response of such fluid-structure systems. For practical engineering applications, simplified procedures are still needed to evaluate the dynamic behavior of structures in contact with water. A review of the existing simplified procedures showed that these methods are based on numerous simplifications that can affect the prediction of the dynamic behavior of such systems. One of the main objectives of this thesis has been to develop new simplified methods that are more accurate than existing ones. First, a new spectral analysis method has been proposed. Expressions for the fundamental frequency of fluid-structure systems, a key parameter of spectral analysis, have been developed. We show that this new technique can easily be implemented in a spreadsheet or program and that its calculation time is nearly instantaneous. When compared to more complex analytical or numerical methods, this new procedure yields excellent predictions of the dynamic behavior of fluid-structure systems. Spectral analyses ignore the transient and oscillatory nature of vibrations. When such analyses show that some areas of the studied structure undergo excessive stresses, time history analyses allow a better estimate of the extent of these zones as well as of the timing of these excessive stresses. Furthermore, the existing spectral analysis methods for fluid-structure systems account only for the static effect of higher modes. Though this is generally sufficient for dams, for flexible structures the dynamic effect of these modes should be accounted for. New methods have been developed for fluid-structure systems to account for these observations as well as for the flexibility of foundations. A first method was developed to study structures in contact with one or two finite or infinite water domains. This new technique includes the flexibility of structures and foundations as well as the dynamic effect of higher vibration modes and variations of the levels of the water domains. This method was then extended to beam structures in contact with fluids. These developments have also made it possible to extend existing analytical formulations of the dynamic properties of a dry beam to a new formulation that includes the effect of fluid-structure interaction. The method yields a very good estimate of the dynamic behavior of beam-fluid systems or beam-like structures in contact with fluid.
Finally, a Modified Accelerogram Method (MAM) has been developed to modify the design earthquake into a new accelerogram that directly accounts for the effect of fluid-structure interaction. This new accelerogram can therefore be applied directly to the dry structure (i.e., without water) in order to calculate the dynamic response of the fluid-structure system. This original technique can include numerous parameters that influence the dynamic response of such systems and makes it possible to treat the fluid-structure interaction analytically while keeping the advantages of finite element modeling.
Microcomputer software for calculating the western Oregon elk habitat effectiveness index.
Alan Ager; Mark Hitchcock
1992-01-01
This paper describes the operation of the microcomputer program HEIWEST, which was developed to automate calculation of the western Oregon elk habitat effectiveness index (HEI). HEIWEST requires little or no training to operate and vastly simplifies the task of measuring HEI for either site-specific project analysis or long-term monitoring of elk habitat. It is...
Wagner, Pablo; Standard, Shawn C; Herzenberg, John E
The multiplier method (MM) is frequently used to predict limb-length discrepancy and timing of epiphysiodesis. The traditional MM uses complex formulae and requires a calculator. A mobile application was developed in an attempt to simplify and streamline these calculations. We compared the accuracy and speed of using the traditional pencil and paper technique with that using the Multiplier App (MA). After attending a training lecture and a hands-on workshop on the MM and MA, 30 resident surgeons were asked to apply the traditional MM and the MA at different weeks of their rotations. They were randomized as to the method they applied first. Subjects performed calculations for 5 clinical exercises that involved congenital and developmental limb-length discrepancies and timing of epiphysiodesis. The amount of time required to complete the exercises and the accuracy of the answers were evaluated for each subject. The test subjects answered 60% of the questions correctly using the traditional MM and 80% of the questions correctly using the MA (P=0.001). The average amount of time to complete the 5 exercises with the MM and MA was 22 and 8 minutes, respectively (P<0.0001). Several reports state that the traditional MM is quick and easy to use. Nevertheless, even in the most experienced hands, performing the calculations in clinical practice can be time-consuming. Errors may result from choosing the wrong formulae and from performing the calculations by hand. Our data show that the MA is simpler, more accurate, and faster than the traditional MM from a practical standpoint. Level II.
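For readers unfamiliar with the multiplier idea referenced above, the sketch below shows the basic arithmetic: for a congenital discrepancy, the projected discrepancy at skeletal maturity is commonly taken as the current discrepancy times an age- and sex-dependent multiplier. The multiplier values here are placeholders for illustration only, not the published tables, and this is not the Multiplier App's implementation.

```python
# Hedged sketch of the multiplier calculation: projected discrepancy at maturity
# = current discrepancy x multiplier. Multiplier values below are hypothetical.

EXAMPLE_MULTIPLIERS = {  # placeholder {age_years: multiplier} entries, not clinical data
    5: 1.8,
    8: 1.5,
    10: 1.3,
    12: 1.15,
}

def discrepancy_at_maturity(current_discrepancy_cm, age_years):
    """Projected limb-length discrepancy (cm) at skeletal maturity."""
    m = EXAMPLE_MULTIPLIERS[age_years]
    return current_discrepancy_cm * m

# e.g. a 3 cm congenital discrepancy at age 8 projects to 4.5 cm with the placeholder multiplier
print(discrepancy_at_maturity(3.0, 8))
```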
Nonlinear transient analysis of multi-mass flexible rotors - theory and applications
NASA Technical Reports Server (NTRS)
Kirk, R. G.; Gunter, E. J.
1973-01-01
The equations of motion necessary to compute the transient response of multi-mass flexible rotors are formulated to include unbalance, rotor acceleration, and flexible damped nonlinear bearing stations. A method of calculating the unbalance response of flexible rotors from a modified Myklestad-Prohl technique is discussed in connection with the method of solution for the transient response. Several special cases of simplified rotor-bearing systems are presented and analyzed for steady-state response, stability, and transient behavior. These simplified rotor models produce extensive design information necessary to ensure stable performance of elastically mounted rotor-bearing systems under varying levels and forms of excitation. The nonlinear journal bearing force expressions derived from the short bearing approximation are utilized in the study of the stability and transient response of the floating bush squeeze damper support system. Both rigid and flexible rotor models are studied, and results indicate that the stability of flexible rotors supported by journal bearings can be greatly improved by the use of squeeze damper supports. Results from linearized stability studies of flexible rotors indicate that a tuned support system can greatly improve the performance of the units from the standpoint of unbalanced response and impact loading. Extensive stability and design charts may be readily produced for given rotor specifications by the computer codes presented in this analysis.
A Simplified Diagnostic Method for Elastomer Bond Durability
NASA Technical Reports Server (NTRS)
White, Paul
2009-01-01
A simplified method has been developed for determining bond durability under exposure to water or high-humidity conditions. It uses a small number of test specimens with relatively short times of water exposure at elevated temperature. The method is also gravimetric; the only equipment required is an oven, specimen jars, and a conventional laboratory balance.
A Manual of Simplified Laboratory Methods for Operators of Wastewater Treatment Facilities.
ERIC Educational Resources Information Center
Westerhold, Arnold F., Ed.; Bennett, Ernest C., Ed.
This manual is designed to provide the small wastewater treatment plant operator, as well as the new or inexperienced operator, with simplified methods for laboratory analysis of water and wastewater. It is emphasized that this manual is not a replacement for standard methods but a guide for plants with insufficient equipment to perform analyses…
NASA Technical Reports Server (NTRS)
Richards, P. G.; Torr, D. G.
1981-01-01
A simplified method for the evaluation of theoretical photoelectron fluxes in the upper atmosphere resulting from the solar radiation at 304 A is presented. The calculation is based on considerations of primary and cascade (secondary) photoelectron production in the two-stream model, where photoelectron transport is described by two electron streams, one moving up and one moving down, and of loss rates due to collisions with neutral gases and thermal electrons. The calculation is illustrated for the case of photoelectrons at an energy of 24.5 eV, and it is noted that the 24.5-eV photoelectron flux may be used to monitor variations in the solar 304 A flux. Theoretical calculations based on various ionization and excitation cross sections of Banks et al. (1974) are shown to be in generally good agreement with AE-E measurements taken between 200 and 235 km; however, the use of more recent, larger cross sections leads to photoelectron values a factor of two smaller than observations but in agreement with previous calculations. It is concluded that a final resolution of the photoelectron problem may depend on a reevaluation of the inelastic electron collision cross sections.
A Fast Method for Embattling Optimization of Ground-Based Radar Surveillance Network
NASA Astrophysics Data System (ADS)
Jiang, H.; Cheng, H.; Zhang, Y.; Liu, J.
A growing number of space activities have created an orbital debris environment that poses increasing impact risks to existing space systems and human space flight. For the safety of in-orbit spacecraft, many observation facilities are needed to catalog space objects, especially in low Earth orbit. Surveillance of low-Earth-orbit objects relies mainly on ground-based radar; because of the limited capability of existing radar facilities, a large number of ground-based radars will need to be built in the next few years to meet current space surveillance demands. How to optimize the embattling (siting) of a ground-based radar surveillance network is therefore a problem that needs to be solved. The traditional method for optimizing the embattling of such a network is to run detection simulations of all possible stations against cataloged data, make a comprehensive comparative analysis of the various simulation results with a combinatorial method, and then select an optimal result as the station layout scheme. This method is time consuming for a single simulation and has high computational complexity for the combinatorial analysis; as the number of stations increases, the complexity of the optimization problem grows exponentially and cannot be solved with the traditional method. No better way to solve this problem has been available until now. In this paper, the target detection procedure is simplified. First, the space coverage of a ground-based radar is simplified, and a model of the coverage projected onto shells at different orbit altitudes is built; then a simplified model of objects crossing the radar coverage is established according to the characteristics of space-object orbital motion. After these two simplification steps, the computational complexity of target detection is greatly reduced, and simulation results confirm the correctness of the simplified model. In addition, the detection areas of a ground-based radar network can easily be computed with the simplified model, and the embattling of the network can then be optimized with an artificial intelligence algorithm, which greatly reduces the computational complexity. Compared with the traditional method, the proposed method greatly improves computational efficiency.
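A minimal geometric sketch of a "coverage projection" of the kind described above is given below: the footprint of a ground radar projected onto a spherical shell at a given orbit altitude. It assumes a spherical Earth, a minimum-elevation mask and a maximum detection range; none of these parameter values, nor the formula's use here, comes from the paper.

```python
# Hedged geometric sketch: Earth-central half-angle of a radar's coverage on a
# shell at a given orbit altitude, assuming a spherical Earth, an elevation mask
# and a maximum detection range (all illustrative assumptions).
import math

RE_KM = 6371.0  # mean Earth radius

def coverage_half_angle(alt_km, min_elev_deg, max_range_km):
    """Earth-central half-angle (deg) of the coverage footprint at altitude alt_km."""
    el = math.radians(min_elev_deg)
    # Slant range to a target at altitude alt_km seen exactly at the elevation mask
    slant = math.sqrt((RE_KM + alt_km)**2 - (RE_KM * math.cos(el))**2) - RE_KM * math.sin(el)
    slant = min(slant, max_range_km)            # the radar cannot see beyond its range
    # Earth-central angle from the law of cosines in the Earth-center/radar/target triangle
    cos_lam = ((RE_KM + alt_km)**2 + RE_KM**2 - slant**2) / (2.0 * RE_KM * (RE_KM + alt_km))
    return math.degrees(math.acos(cos_lam))

# Example: footprint half-angle for an 800 km shell, 5 deg elevation mask, 3000 km range.
print(coverage_half_angle(800.0, 5.0, 3000.0))
```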
DOE Office of Scientific and Technical Information (OSTI.GOV)
MacKinnon, Robert J.; Kuhlman, Kristopher L
2016-05-01
We present a method of control variates for calculating improved estimates for mean performance quantities of interest, E(PQI), computed from Monte Carlo probabilistic simulations. An example of a PQI is the concentration of a contaminant at a particular location in a problem domain computed from simulations of transport in porous media. To simplify the presentation, the method is described in the setting of a one-dimensional elliptical model problem involving a single uncertain parameter represented by a probability distribution. The approach can be easily implemented for more complex problems involving multiple uncertain parameters and in particular for application to probabilistic performance assessment of deep geologic nuclear waste repository systems. Numerical results indicate the method can produce estimates of E(PQI) having superior accuracy on coarser meshes and reduce the required number of simulations needed to achieve an acceptable estimate.
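The control-variate idea referenced above can be shown in a few lines. The sketch below uses a generic integrand and control variate as stand-ins (they are not the porous-media model from the report): the estimator is corrected with an auxiliary quantity whose mean is known exactly, which reduces variance without introducing bias.

```python
# Minimal control-variate sketch with an illustrative integrand:
# estimate E[e^U] for U ~ Uniform(0,1), using U itself (known mean 0.5) as control variate.
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(10_000)

y = np.exp(u)          # quantity of interest (true mean is e - 1)
x = u                  # control variate with known mean
mu_x = 0.5

c = -np.cov(y, x)[0, 1] / np.var(x, ddof=1)     # near-optimal control-variate coefficient
estimate_plain = y.mean()
estimate_cv = y.mean() + c * (x.mean() - mu_x)  # corrected, lower-variance estimate

print(estimate_plain, estimate_cv)
```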
A Micromechanics-Based Method for Multiscale Fatigue Prediction
NASA Astrophysics Data System (ADS)
Moore, John Allan
An estimated 80% of all structural failures are due to mechanical fatigue, often resulting in catastrophic, dangerous and costly failure events. However, an accurate model to predict fatigue remains an elusive goal. One of the major challenges is that fatigue is intrinsically a multiscale process, which depends on a structure's geometric design as well as its material's microscale morphology. The following work begins with a microscale study of fatigue nucleation around non-metallic inclusions. Based on this analysis, a novel multiscale method for fatigue prediction is developed. This method simulates macroscale geometries explicitly while concurrently calculating the simplified response of microscale inclusions, thus providing adequate detail on multiple scales for accurate fatigue life predictions. The methods herein provide insight into the multiscale nature of fatigue while also developing a tool to aid in geometric design and material optimization for fatigue-critical devices such as biomedical stents and artificial heart valves.
Performance advantages of CPML over UPML absorbing boundary conditions in FDTD algorithm
NASA Astrophysics Data System (ADS)
Gvozdic, Branko D.; Djurdjevic, Dusan Z.
2017-01-01
Implementation of absorbing boundary conditions (ABCs) plays a very important role in simulation performance and accuracy in the finite-difference time-domain (FDTD) method. The perfectly matched layer (PML) is the most efficient type of ABC. The aim of this paper is to give detailed insight into, and discussion of, boundary conditions and hence to simplify the choice of PML used for termination of the computational domain in the FDTD method. In particular, we demonstrate that using the convolutional PML (CPML) has significant advantages over the uniaxial PML (UPML) in terms of implementation in the FDTD method and reduced computer resources. An extensive number of numerical experiments have been performed, and the results show that CPML is more efficient at absorbing electromagnetic waves. Numerical code is prepared, several problems are analyzed, and the relative error is calculated and presented.
Scaled effective on-site Coulomb interaction in the DFT+U method for correlated materials
NASA Astrophysics Data System (ADS)
Nawa, Kenji; Akiyama, Toru; Ito, Tomonori; Nakamura, Kohji; Oguchi, Tamio; Weinert, M.
2018-01-01
The first-principles calculation of correlated materials within density functional theory remains challenging, but the inclusion of a Hubbard-type effective on-site Coulomb term (Ueff) often provides a computationally tractable and physically reasonable approach. However, the reported values of Ueff vary widely, even for the same ionic state and the same material. Since the final physical results can depend critically on the choice of parameter and the computational details, there is a need to have a consistent procedure to choose an appropriate one. We revisit this issue from constraint density functional theory, using the full-potential linearized augmented plane wave method. The calculated Ueff parameters for the prototypical transition-metal monoxides—MnO, FeO, CoO, and NiO—are found to depend significantly on the muffin-tin radius RMT, with variations of more than 2-3 eV as RMT changes from 2.0 to 2.7 aB. Despite this large variation in Ueff, the calculated valence bands differ only slightly. Moreover, we find an approximately linear relationship between Ueff(RMT) and the number of occupied localized electrons within the sphere, and give a simple scaling argument for Ueff; these results provide a rationalization for the large variation in reported values. Although our results imply that Ueff values are not directly transferable among different calculation methods (or even the same one with different input parameters such as RMT), use of this scaling relationship should help simplify the choice of Ueff.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, C; Lin, H; Chuang, K
2016-06-15
Purpose: To monitor the activity distribution and needle position during and after implantation in operating rooms. Methods: Simulation studies were conducted to assess the feasibility of measuring the activity distribution and localizing seeds using the DuPECT system. The system consists of a LaBr3-based probe and planar detection heads, a collimation system, and a coincidence circuit. The two heads can be manipulated independently. Simplified Yb-169 brachytherapy seeds were used. A water-filled cylindrical phantom with a 40-mm diameter and 40-mm length was used to model a simplified prostate of an Asian man. Two simplified seeds were placed at a radial distance of 10 mm and a tangential distance of 10 mm from the center of the phantom. The probe head was arranged perpendicular to the planar head. Results for various imaging durations were analyzed, and the accuracy of the seed localization was assessed by calculating the centroid of the seed. Results: The reconstructed images indicate that the DuPECT can measure the activity distribution and locate seeds dwelling in different positions intraoperatively. The calculated centroid on average turned out to be accurate within the pixel size of 0.5 mm. The two sources were identified when the duration was longer than 15 s. The sensitivity measured in water was merely 0.07 cps/MBq. Conclusion: Preliminary results show that measurement of the activity distribution and seed localization are feasible using the DuPECT system intraoperatively. This indicates that the DuPECT system has potential as an approach to dose-distribution validation. The efficacy of activity distribution measurement and source localization using the DuPECT system will be evaluated in more realistic phantom studies (e.g., various attenuation materials and greater numbers of seeds) in future investigations.
Implementation of the SPH Procedure Within the MOOSE Finite Element Framework
NASA Astrophysics Data System (ADS)
Laurier, Alexandre
The goal of this thesis was to implement the SPH homogenization procedure within the MOOSE finite element framework at INL. Before this project, INL relied on DRAGON to do their SPH homogenization, which was not flexible enough for their needs. As such, the SPH procedure was implemented for the neutron diffusion equation with the traditional, Selengut and true Selengut normalizations. Another aspect of this research was to derive the SPH-corrected neutron transport equations and implement them in the same framework. Following in the footsteps of other articles, this feature was implemented and tested successfully with both the PN and SN transport calculation schemes. Although the results obtained for the power distribution in PWR assemblies show no advantages over the use of the SPH diffusion equation, we believe the inclusion of this transport correction will allow for better results in cases where either PN or SN is required. An additional aspect of this research was the implementation of a novel way of solving the non-linear SPH problem. Traditionally, this was done through a Picard, fixed-point iterative process, whereas the new implementation relies on MOOSE's Preconditioned Jacobian-Free Newton Krylov (PJFNK) method to allow for a direct solution to the non-linear problem. This novel implementation showed a decrease in calculation time by a factor of up to 50 and generated SPH factors that correspond to those obtained through a fixed-point iterative process with a very tight convergence criterion: epsilon < 10^-8. The use of the PJFNK SPH procedure also allows convergence to be reached in problems containing important reflector regions and void boundary conditions, something that the traditional SPH method has never been able to achieve. At times when the PJFNK method cannot reach convergence for the SPH problem, a hybrid method is used whereby the traditional SPH iteration forces the initial condition to be within the radius of convergence of the Newton method. This new method was tested with great success on a simplified model of INL's TREAT reactor, a problem that includes very important graphite reflector regions as well as vacuum boundary conditions. To demonstrate the power of PJFNK SPH on a more common case, the correction was applied to a simplified PWR reactor core from the BEAVRS benchmark, including 15 assemblies and the water reflector, with very good results. This opens up the possibility of applying the SPH correction to full reactor cores in order to reduce homogenization errors for use in transient or multi-physics calculations.
Simplified path integral for supersymmetric quantum mechanics and type-A trace anomalies
NASA Astrophysics Data System (ADS)
Bastianelli, Fiorenzo; Corradini, Olindo; Iacconi, Laura
2018-05-01
Particles in a curved space are classically described by a nonlinear sigma model action that can be quantized through path integrals. The latter require a precise regularization to deal with the derivative interactions arising from the nonlinear kinetic term. Recently, for maximally symmetric spaces, simplified path integrals have been developed: they allow one to trade the nonlinear kinetic term for a purely quadratic kinetic term (linear sigma model). This happens at the expense of introducing a suitable effective scalar potential, which contains the information on the curvature of the space. The simplified path integral provides an appreciable gain in the efficiency of perturbative calculations. Here we extend the construction to models with N = 1 supersymmetry on the worldline, which are applicable to the first-quantized description of a Dirac fermion. As an application, we use the simplified worldline path integral to compute the type-A trace anomaly of a Dirac fermion in d dimensions up to d = 16.
[Influence of trabecular microstructure modeling on finite element analysis of dental implant].
Shen, M J; Wang, G G; Zhu, X H; Ding, X
2016-09-01
To analyze the influence of trabecular microstructure modeling on the biomechanical distribution at the implant-bone interface with a three-dimensional finite element mandible model with trabecular structure. Dental implants were embedded in the mandible of a beagle dog. Three months after implant installation, the mandibles with dental implants were harvested and scanned by micro-CT and cone-beam CT. Two three-dimensional finite element mandible models, trabecular microstructure (precise model) and macrostructure (simplified model), were built. The values of stress and strain at the implant-bone interface were calculated using the software Ansys 14.0. Compared with the simplified model, the precise models' average values of implant-bone interface stress increased markedly, while the maximum values did not change greatly. The maximum values of equivalent stress of the precise models were 80% and 110% of the simplified model, and the average values were 170% and 290% of the simplified model. The maximum and average values of equivalent strain of the precise models were markedly decreased; the maximum values of equivalent strain were 17% and 26% of the simplified model, and the average values were 21% and 16% of the simplified model, respectively. Stress and strain concentrations at the implant-bone interface were obvious in the simplified model, whereas the distributions of stress and strain were uniform in the precise model. Trabecular microstructure modeling therefore has a significant effect on the calculated distribution of stress and strain at the implant-bone interface.
Computational study of single-expansion-ramp nozzles with external burning
NASA Astrophysics Data System (ADS)
Yungster, Shaye; Trefny, Charles J.
1992-04-01
A computational investigation of the effects of external burning on the performance of single expansion ramp nozzles (SERN) operating at transonic speeds is presented. The study focuses on the effects of external heat addition and introduces a simplified injection and mixing model based on a control volume analysis. This simplified model permits parametric and scaling studies that would have been impossible to conduct with a detailed CFD analysis. The CFD model is validated by comparing the computed pressure distribution and thrust forces, for several nozzle configurations, with experimental data. Specific impulse calculations are also presented which indicate that external burning performance can be superior to other methods of thrust augmentation at transonic speeds. The effects of injection fuel pressure and nozzle pressure ratio on the performance of SERN nozzles with external burning are described. The results show trends similar to those reported in the experimental study, and provide additional information that complements the experimental data, improving our understanding of external burning flowfields. A study of the effect of scale is also presented. The results indicate that combustion kinetics do not make the flowfield sensitive to scale.
TH-C-12A-04: Dosimetric Evaluation of a Modulated Arc Technique for Total Body Irradiation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tsiamas, P; Czerminska, M; Makrigiorgos, G
2014-06-15
Purpose: A simplified Total Body Irradiation (TBI) technique was developed to work with minimal requirements in a compact linac room without a custom motorized TBI couch. Results were compared to our existing fixed-gantry double 4 MV linac TBI system with the patient prone and simultaneous AP/PA irradiation. Methods: A modulated arc irradiates the patient positioned prone/supine along the craniocaudal axis. A simplified inverse planning method was developed to optimize dose rate as a function of gantry angle for various patient sizes without the need for a graphical 3D treatment planning system. This method can be easily adapted and used with minimal resources. A fixed maximum field size (40×40 cm2) is used to decrease radiation delivery time. The dose rate as a function of gantry angle is optimized to give uniform dose inside rectangular phantoms of various sizes, and custom VMAT DICOM plans were generated using a DICOM editor tool. Monte Carlo simulations, film and ionization chamber dosimetry for various setups were used to derive and test an extended-SSD beam model based on PDD/OAR profiles for Varian 6EX/TX. Measurements were obtained using solid water phantoms. The dose rate modulation function was determined for various patient sizes (100 cm − 200 cm). Depending on the size of the patient, the arc range varied from 100° to 120°. Results: A PDD/OAR-based beam model for modulated arc TBI therapy was developed. The lateral dose profiles produced were similar to profiles of our existing TBI facility. Calculated delivery time and full arc depended on the size of the patient (∼8 min/100° − 10 min/120°, 100 cGy). Dose heterogeneity varied by about ±5% − ±10% depending on the patient size and distance to the surface (buildup region). Conclusion: TBI using a simplified modulated arc along the craniocaudal axis of different-size patients positioned on the floor can be achieved without graphical/inverse 3D planning.
NASA Astrophysics Data System (ADS)
Crevoisier, David; Chanzy, André; Voltz, Marc
2009-06-01
Ross [Ross PJ. Modeling soil water and solute transport - fast, simplified numerical solutions. Agron J 2003;95:1352-61] developed a fast, simplified method for solving Richards' equation. This non-iterative 1D approach, using Brooks and Corey [Brooks RH, Corey AT. Hydraulic properties of porous media. Hydrol. papers, Colorado St. Univ., Fort Collins; 1964] hydraulic functions, allows a significant reduction in computing time while maintaining the accuracy of the results. The first aim of this work is to confirm these results in a more extensive set of problems, including those that would lead to serious numerical difficulties for the standard numerical method. The second aim is to validate a generalisation of the Ross method to other mathematical representations of hydraulic functions. The Ross method is compared with the standard finite element model, Hydrus-1D [Simunek J, Sejna M, Van Genuchten MTh. The HYDRUS-1D and HYDRUS-2D codes for estimating unsaturated soil hydraulic and solutes transport parameters. Agron Abstr 357; 1999]. Computing time, accuracy of results and robustness of numerical schemes are monitored in 1D simulations involving different types of homogeneous soils, grids and hydrological conditions. The Ross method associated with modified Van Genuchten hydraulic functions [Vogel T, Cislerova M. On the reliability of unsaturated hydraulic conductivity calculated from the moisture retention curve. Transport Porous Media 1988;3:1-15] proves in every tested scenario to be more robust numerically, and the computing-time/accuracy trade-off is particularly improved on coarse grids. The Ross method ran from 1.25 to 14 times faster than Hydrus-1D.
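For reference, the Brooks-Corey hydraulic functions mentioned above relate suction head to water content and conductivity. The sketch below is a generic illustration with made-up parameter values; it assumes the Burdine-type conductivity exponent (3 + 2/λ), which is one common convention and is not taken from the paper.

```python
# Hedged sketch of Brooks-Corey hydraulic functions (illustrative parameters).

def brooks_corey(psi, psi_b=0.2, lam=0.3, theta_r=0.05, theta_s=0.45, k_s=1.0e-5):
    """Return (theta, K) for a suction head psi [m], psi >= 0."""
    if psi <= psi_b:
        se = 1.0                      # saturated below the air-entry suction
    else:
        se = (psi_b / psi) ** lam     # effective saturation
    theta = theta_r + se * (theta_s - theta_r)
    k = k_s * se ** (3.0 + 2.0 / lam)  # assumed Burdine-type relative conductivity
    return theta, k

print(brooks_corey(1.0))   # water content and conductivity at 1 m suction
```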
NASA Astrophysics Data System (ADS)
Koval, Viacheslav
The seismic design provisions of the CSA-S6 Canadian Highway Bridge Design Code and the AASHTO LRFD Seismic Bridge Design Specifications have been developed primarily based on historical earthquake events that have occurred along the west coast of North America. For the design of seismic isolation systems, these codes include simplified analysis and design methods. The appropriateness and range of application of these methods are investigated through extensive parametric nonlinear time history analyses in this thesis. It was found that there is a need to adjust existing design guidelines to better capture the expected nonlinear response of isolated bridges. For isolated bridges located in eastern North America, new damping coefficients are proposed. The applicability limits of the code-based simplified methods have been redefined to ensure that the modified method will lead to conservative results and that a wider range of seismically isolated bridges can be covered by this method. The possibility of further improving current simplified code methods was also examined. By transforming the quantity of allocated energy into a displacement contribution, an idealized analytical solution is proposed as a new simplified design method. This method realistically reflects the effects of ground-motion and system design parameters, including the effects of a drifted oscillation center. The proposed method is therefore more appropriate than existing simplified methods and is applicable to isolation systems exhibiting a wider range of properties. A multi-level-hazard performance matrix has been adopted by different seismic provisions worldwide and will be incorporated into the new edition of the Canadian CSA-S6-14 Bridge Design code. However, the combined effect and optimal use of isolation and supplemental damping devices in bridges have not yet been fully exploited to achieve enhanced performance under different levels of seismic hazard. A novel Dual-Level Seismic Protection (DLSP) concept is proposed and developed in this thesis which permits optimum seismic performance to be achieved with combined isolation and supplemental damping devices in bridges. This concept is shown to represent an attractive design approach for both the upgrade of existing seismically deficient bridges and the design of new isolated bridges.
Fast and accurate grid representations for atom-based docking with partner flexibility.
de Vries, Sjoerd J; Zacharias, Martin
2017-06-30
Macromolecular docking methods can broadly be divided into geometric and atom-based methods. Geometric methods use fast algorithms that operate on simplified, grid-like molecular representations, while atom-based methods are more realistic and flexible, but far less efficient. Here, a hybrid approach of grid-based and atom-based docking is presented, combining precalculated grid potentials with neighbor lists for fast and accurate calculation of atom-based intermolecular energies and forces. The grid representation is compatible with simultaneous multibody docking and can tolerate considerable protein flexibility. When implemented in our docking method ATTRACT, grid-based docking was found to be ∼35x faster. With the OPLSX forcefield instead of the ATTRACT coarse-grained forcefield, the average speed improvement was >100x. Grid-based representations may allow atom-based docking methods to explore large conformational spaces with many degrees of freedom, such as multiple macromolecules including flexibility. This increases the domain of biological problems to which docking methods can be applied. © 2017 Wiley Periodicals, Inc.
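The precalculated-grid idea above can be illustrated with a generic trilinear interpolation of a receptor potential: the energy of a ligand atom is read off a regular grid rather than recomputed atom-by-atom. This is a minimal, self-contained sketch, not ATTRACT's implementation, and the grid values are toy data.

```python
# Hedged sketch: interpolate a precomputed scalar potential grid at an atom position.
# Assumes the position lies strictly inside the grid; grid values are illustrative.
import numpy as np

def trilinear_energy(grid, origin, spacing, pos):
    """Trilinearly interpolate 'grid' (nx, ny, nz) at Cartesian position 'pos'."""
    f = (np.asarray(pos, dtype=float) - origin) / spacing  # fractional grid coordinates
    i0 = np.floor(f).astype(int)
    t = f - i0                                              # interpolation weights in [0, 1)
    e = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((1 - t[0]) if dx == 0 else t[0]) * \
                    ((1 - t[1]) if dy == 0 else t[1]) * \
                    ((1 - t[2]) if dz == 0 else t[2])
                e += w * grid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
    return e

grid = np.zeros((10, 10, 10))      # toy potential grid
grid[5, 5, 5] = -1.0
print(trilinear_energy(grid, origin=np.zeros(3), spacing=1.0, pos=[4.5, 5.0, 5.2]))
```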
Accelerating separable footprint (SF) forward and back projection on GPU
NASA Astrophysics Data System (ADS)
Xie, Xiaobin; McGaffin, Madison G.; Long, Yong; Fessler, Jeffrey A.; Wen, Minhua; Lin, James
2017-03-01
Statistical image reconstruction (SIR) methods for X-ray CT can improve image quality and reduce radiation dosages over conventional reconstruction methods, such as filtered back projection (FBP). However, SIR methods require much longer computation time. The separable footprint (SF) forward and back projection technique simplifies the calculation of intersecting volumes of image voxels and finite-size beams in a way that is both accurate and efficient for parallel implementation. We propose a new method to accelerate the SF forward and back projection on GPU with NVIDIA's CUDA environment. For the forward projection, we parallelize over all detector cells. For the back projection, we parallelize over all 3D image voxels. The simulation results show that the proposed method is faster than the acceleration method of the SF projectors proposed by Wu and Fessler [13]. We further accelerate the proposed method using multiple GPUs. The results show that the computation time is reduced approximately proportional to the number of GPUs.
A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE.
Al-Dweri, Feras M O; Lallena, Antonio M; Vilches, Manuel
2004-06-21
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3 degrees with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between rho = (x^2 + y^2)^(1/2) and their polar angle theta, on one side, and between tan^(-1)(y/x) and their azimuthal angle phi, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3% for the 18 and 14 mm helmets, and 10% for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor of 15) in the computational time.
NASA Technical Reports Server (NTRS)
Xue, W.-M.; Atluri, S. N.
1985-01-01
In this paper, all possible forms of mixed-hybrid finite element methods that are based on multi-field variational principles are examined as to the conditions for existence, stability, and uniqueness of their solutions. The reasons as to why certain 'simplified hybrid-mixed methods' in general, and the so-called 'simplified hybrid-displacement method' in particular (based on the so-called simplified variational principles), become unstable, are discussed. A comprehensive discussion of the 'discrete' BB-conditions, and the rank conditions, of the matrices arising in mixed-hybrid methods, is given. Some recent studies aimed at the assurance of such rank conditions, and the related problem of the avoidance of spurious kinematic modes, are presented.
Surrogates for numerical simulations; optimization of eddy-promoter heat exchangers
NASA Technical Reports Server (NTRS)
Patera, Anthony T.
1993-01-01
Although the advent of fast and inexpensive parallel computers has rendered numerous previously intractable calculations feasible, many numerical simulations remain too resource-intensive to be directly inserted in engineering optimization efforts. An attractive alternative to direct insertion considers models for computational systems: the expensive simulation is invoked only to construct and validate a simplified input-output model; this simplified input-output model then serves as a simulation surrogate in subsequent engineering optimization studies. A simple 'Bayesian-validated' statistical framework for the construction, validation, and purposive application of static computer simulation surrogates is presented. As an example, dissipation-transport optimization of laminar-flow eddy-promoter heat exchangers is considered: parallel spectral element Navier-Stokes calculations serve to construct and validate surrogates for the flowrate and Nusselt number; these surrogates then represent the originating Navier-Stokes equations in the ensuing design process.
McGuigan, John A S; Kay, James W; Elder, Hugh Y
2014-01-01
In Ca(2+)/Mg(2+) buffers the calculated ionised concentrations ([X(2+)]) can vary by up to a factor of seven. Since there are no defined standards it is impossible to check calculated [X(2+)], making measurement essential. The ligand optimisation method (LOM) is an accurate method to measure [X(2+)] in Ca(2+)/Mg(2+) buffers; independent estimation of ligand purity extends the method to pK(/) < 4. To simplify calculation, Excel programs ALE and AEC were compiled for the LOM and its extension. This paper demonstrates that the slope of the electrode in the pX range 2.000-3.301 deviates from Nernstian behaviour as it depends on the value of the lumped interference, Σ. ALE was modified to include this effect; this modified program SALE, and the programs ALE and AEC, were used on simulated data for Ca(2+)-EGTA and Mg(2+)-ATP buffers to calculate electrode and buffer characteristics as a function of Σ. Ca(2+)-electrodes have a Σ < 10(-6) mol/l and there was no difference amongst the three methods. The Σ for Mg(2+)-electrodes lies between 10(-5) and 1.5 × 10(-5) mol/l, and the [Mg(2+)] calculated with ALE were around 3% less than the true value. SALE and AEC correctly predicted [Mg(2+)]. SALE was used to recalculate K(/) and pK(/) on measured data for Ca(2+)-EGTA and Mg(2+)-EDTA buffers. These results demonstrated that it is pK(/) that is normally distributed. Until defined standards are available, [X(2+)] in Ca(2+)/Mg(2+) buffers have to be measured. The most appropriate method is to use Ca(2+)/Mg(2+) electrodes combined with the Excel programs SALE or AEC. Copyright © 2014 Elsevier Ltd. All rights reserved.
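To see how a lumped interference term can flatten the apparent electrode slope over the stated pX range, the sketch below models the electrode response as E = E0 + S·log10([X] + Σ). This Nikolsky-type form, the Nernstian slope of 29.58 mV/decade for a divalent ion, and the calibration points are assumptions made for illustration; they are not necessarily the exact model used by the authors.

```python
# Hedged sketch: apparent slope of an ion-selective electrode between two
# calibration points when a lumped interference Sigma is present.
import math

def apparent_slope(sigma, s_true=29.58, x_hi=1e-2, x_lo=10**-3.301):
    """Apparent slope (mV/decade) measured between pX = 2.000 and pX = 3.301."""
    e_hi = s_true * math.log10(x_hi + sigma)
    e_lo = s_true * math.log10(x_lo + sigma)
    decades = math.log10(x_hi) - math.log10(x_lo)
    return (e_hi - e_lo) / decades

for sigma in (0.0, 1e-6, 1e-5, 1.5e-5):   # values spanning the ranges quoted above
    print(sigma, round(apparent_slope(sigma), 3))
```

With Σ around 10^-5 mol/l the apparent slope drops by a few percent, which is consistent in magnitude with the ~3% bias quoted for uncorrected Mg(2+) calculations.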
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haverkamp, B.; Krone, J.; Shybetskyi, I.
2013-07-01
The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, the surface repository for solid low- and intermediate-level waste (LILW) is still being operated, but its maximum capacity is nearly reached. Long-existing plans for increasing the capacity of the facility shall be implemented in the framework of the European Commission INSC Programme (Instrument for Nuclear Safety Co-operation). Within the first phase of this project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) for a future extended facility based on the planned enlargement. In addition to a detailed mathematical model, simplified models have also been developed to verify the results of the former and enhance confidence in the results. Comparison of the results shows that - depending on the boundary conditions - simplifications such as modeling the multi-trench repository as one generic trench may have very limited influence on the overall results compared with the general uncertainties associated with the respective long-term calculations. In addition to their value for verification of more complex models, which is important to increase confidence in the overall results, such simplified models also offer the possibility of carrying out time-consuming calculations, such as probabilistic calculations or detailed sensitivity analyses, in an economic manner. (authors)
77 FR 73965 - Allocation of Costs Under the Simplified Methods; Hearing
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-12
... DEPARTMENT OF THE TREASURY Internal Revenue Service 26 CFR Part 1 [REG-126770-06] RIN 1545-BG07 Allocation of Costs Under the Simplified Methods; Hearing AGENCY: Internal Revenue Service (IRS), Treasury. ACTION: Notice of public hearing on notice of proposed rulemaking. SUMMARY: This document provides notice of...
Midtvedt, Daniel; Croy, Alexander
2016-06-10
We compare the simplified valence-force model for single-layer black phosphorus with the original model and recent ab initio results. Using an analytic approach and numerical calculations, we find that the simplified model yields Young's moduli that are smaller than those of the original model and almost a factor of two smaller than ab initio results. Moreover, the Poisson ratios are an order of magnitude smaller than values found in the literature.
A simplified model for tritium permeation transient predictions when trapping is active
NASA Astrophysics Data System (ADS)
Longhurst, G. R.
1994-09-01
This report describes a simplified one-dimensional tritium permeation and retention model. The model makes use of the same physical mechanisms as more sophisticated, time-transient codes such as implantation, recombination, diffusion, trapping and thermal gradient effects. It takes advantage of a number of simplifications and approximations to solve the steady-state problem and then provides interpolating functions to make estimates of intermediate states based on the steady-state solution. Comparison calculations with the verified and validated TMAP4 transient code show good agreement.
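The "steady state plus interpolating function" idea mentioned above can be illustrated with the classical diffusion-limited permeation transient through a slab, written as the steady-state flux times a time-dependent factor. The sketch below is a generic textbook solution, not TMAP4 or the report's model: it ignores trapping, recombination and thermal-gradient effects, and D, L and c0 are made-up values.

```python
# Hedged illustration: diffusion-limited permeation transient through a slab
# (no trapping), J(t) = J_ss * [1 + 2 * sum_n (-1)^n exp(-n^2 pi^2 D t / L^2)].
import math

def permeation_flux(t, D=1e-9, L=1e-3, c0=1.0, n_terms=50):
    """Flux J(t) through a slab of thickness L with fixed upstream concentration c0."""
    j_ss = D * c0 / L                                    # steady-state flux
    factor = 1.0 + 2.0 * sum((-1)**n * math.exp(-n**2 * math.pi**2 * D * t / L**2)
                             for n in range(1, n_terms + 1))
    return j_ss * max(factor, 0.0)                       # guard against truncation error at small t

for t in (10.0, 100.0, 1000.0, 10000.0):                 # seconds
    print(t, permeation_flux(t))
```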
Multimodal far-field acoustic radiation pattern: An approximate equation
NASA Technical Reports Server (NTRS)
Rice, E. J.
1977-01-01
The far-field sound radiation theory for a circular duct was studied for both single mode and multimodal inputs. The investigation was intended to develop a method to determine the acoustic power produced by turbofans as a function of mode cut-off ratio. With reasonable simplifying assumptions the single mode radiation pattern was shown to be reducible to a function of mode cut-off ratio only. With modal cut-off ratio as the dominant variable, multimodal radiation patterns can be reduced to a simple explicit expression. This approximate expression provides excellent agreement with an exact calculation of the sound radiation pattern using equal acoustic power per mode.
77 FR 15969 - Waybill Data Released in Three-Benchmark Rail Rate Proceedings
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-19
... confidentiality of the contract rates, as required by 49 U.S.C. 11904. Background In Simplified Standards for Rail Rate Cases (Simplified Standards), EP 646 (Sub-No. 1) (STB served Sept. 5, 2007), aff'd sub nom. CSX...\\ Under the Three-Benchmark method as revised in Simplified Standards, each party creates and proffers to...
Code of Federal Regulations, 2014 CFR
2014-10-01
... 48 Federal Acquisition Regulations System 1 2014-10-01 2014-10-01 false List of laws inapplicable to contracts and subcontracts at or below the simplified acquisition threshold. 13.005 Section 13.005 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION CONTRACTING METHODS AND CONTRACT TYPES SIMPLIFIED ACQUISITION PROCEDURES...
Analytical calculation of vibrations of electromagnetic origin in electrical machines
NASA Astrophysics Data System (ADS)
McCloskey, Alex; Arrasate, Xabier; Hernández, Xabier; Gómez, Iratxo; Almandoz, Gaizka
2018-01-01
Electrical motors are widely used and are often required to satisfy comfort specifications. Thus, vibration response estimates are necessary to reach optimum machine designs. This work presents an improved analytical model to calculate the vibration response of an electrical machine. The stator and windings are modelled as a double circular cylindrical shell. As the stator is a laminated structure, orthotropic properties are applied to it. The values of those material properties are calculated according to the characteristics of the motor and the known material properties taken from previous works. The proposed model takes the axial direction into account, so that length is considered, as well as the contribution of the windings, which differs from one machine to another. These aspects make the model valuable for a wide range of electrical motor types. In order to validate the analytical calculation, natural frequencies are calculated and compared to those obtained by the Finite Element Method (FEM), giving relative errors below 10% for several combinations of circumferential and axial mode orders. The analytical vibration calculation is also validated against acceleration measurements on a real machine. The comparison shows good agreement for the proposed model, with the most important frequency components being of the same order of magnitude. A simplified two-dimensional model is also applied, and the results obtained are not as satisfactory.
Emission of dimers from a free surface of heated water
NASA Astrophysics Data System (ADS)
Bochkarev, A. A.; Polyakova, V. I.
2014-09-01
The emission rate of water dimers from a free surface and a wetted solid surface in various cases was calculated by a simplified Monte Carlo method with the use of the binding energy of water molecules. The binding energy of water molecules obtained numerically assuming equilibrium between the free surface of water and vapor in the temperature range of 298-438 K corresponds to the coordination number for liquid water equal to 4.956 and is close to the reference value. The calculation results show that as the water temperature increases, the free surface of water and the wetted solid surface become sources of free water dimers. At a temperature of 438 K, the proportion of dimers in the total flow of water molecules on its surface reaches 1%. It is found that in the film boiling mode, the emission rate of dimers decreases with decreasing saturation vapor. Two mechanisms of the emission are described.
Evolution of basic equations for nearshore wave field
ISOBE, Masahiko
2013-01-01
In this paper, a systematic, overall view of theories for periodic waves of permanent form, such as Stokes and cnoidal waves, is described first with their validity ranges. To deal with random waves, a method for estimating directional spectra is given. Then, various wave equations are introduced according to the assumptions included in their derivations. The mild-slope equation is derived for combined refraction and diffraction of linear periodic waves. Various parabolic approximations and time-dependent forms are proposed to include randomness and nonlinearity of waves as well as to simplify numerical calculation. Boussinesq equations are the equations developed for calculating nonlinear wave transformations in shallow water. Nonlinear mild-slope equations are derived as a set of wave equations to predict transformation of nonlinear random waves in the nearshore region. Finally, wave equations are classified systematically for a clear theoretical understanding and appropriate selection for specific applications. PMID:23318680
Development of the ICD-10 simplified version and field test.
Paoin, Wansa; Yuenyongsuwan, Maliwan; Yokobori, Yukiko; Endo, Hiroyoshi; Kim, Sukil
2018-05-01
The International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10) has been used in various Asia-Pacific countries for more than 20 years. Although ICD-10 is a powerful tool, clinical coding processes are complex; therefore, many developing countries have not been able to implement ICD-10-based health statistics (WHO-FIC APN, 2007). This study aimed to simplify ICD-10 clinical coding processes, to modify index terms to facilitate computer searching and to provide a simplified version of ICD-10 for use in developing countries. The World Health Organization Family of International Classifications Asia-Pacific Network (APN) developed a simplified version of the ICD-10 and conducted field testing in Cambodia during February and March 2016. Ten hospitals were selected to participate. Each hospital sent a team to join a training workshop before using the ICD-10 simplified version to code 100 cases. All hospitals subsequently sent their coded records to the researchers. Overall, there were 1038 coded records with a total of 1099 ICD clinical codes assigned. The average accuracy rate was calculated as 80.71% (66.67-93.41%). Three types of clinical coding errors were found: errors relating to the coder (14.56%), errors resulting from the physician documentation (1.27%), and system errors (3.46%). The field trial results demonstrated that the APN ICD-10 simplified version is feasible for implementation as an effective tool to implement ICD-10 clinical coding for hospitals. Developing countries may consider adopting the APN ICD-10 simplified version for ICD-10 code assignment in hospitals and health care centres. The simplified version can be viewed as an introductory tool which leads to the implementation of the full ICD-10 and may support subsequent ICD-11 adoption.
Hotta, Kenji; Matsubara, Kana; Nishioka, Shie; Matsuura, Taeko; Kawashima, Mitsuhiko
2012-01-01
When in vivo proton dosimetry is performed with a metal-oxide semiconductor field-effect transistor (MOSFET) detector, the response of the detector depends strongly on the linear energy transfer. The present study reports a practical method to correct the MOSFET response for linear energy transfer dependence by using a simplified Monte Carlo dose calculation method (SMC). A depth-output curve for a mono-energetic proton beam in polyethylene was measured with the MOSFET detector. This curve was used to calculate MOSFET output distributions with the SMC (SMC_MOSFET). The SMC_MOSFET output value at an arbitrary point was compared with the value obtained by the conventional SMC_PPIC, which calculates proton dose distributions by using the depth-dose curve determined by a parallel-plate ionization chamber (PPIC). The ratio of the two values was used to calculate the correction factor of the MOSFET response at an arbitrary point. The dose obtained by the MOSFET detector was determined from the product of the correction factor and the MOSFET raw dose. When in vivo proton dosimetry was performed with the MOSFET detector in an anthropomorphic phantom, the corrected MOSFET doses agreed with the SMC_PPIC results within the measurement error. To our knowledge, this is the first report of successful in vivo proton dosimetry with a MOSFET detector. PACS number: 87.56.-v PMID:22402385
An, Zhao; Wen-Xin, Zhang; Zhong, Yao; Yu-Kuan, Ma; Qing, Liu; Hou-Lang, Duan; Yi-di, Shang
2016-06-29
To optimize and simplify the survey method for Oncomelania hupensis snails in marshland endemic regions of schistosomiasis, and to increase the precision, efficiency and economy of the snail survey, a 50 m × 50 m quadrat experimental field was selected in the Chayegang marshland near Henghu farm in the Poyang Lake region, and a whole-coverage method was adopted to survey the snails. Simple random sampling, systematic sampling and stratified random sampling were applied to calculate the minimum sample size, the relative sampling error and the absolute sampling error. The minimum sample sizes of the simple random, systematic and stratified random sampling methods were 300, 300 and 225, respectively. The relative sampling errors of the three methods were all less than 15%. The absolute sampling errors were 0.2217, 0.3024 and 0.0478, respectively. Spatial stratified sampling with altitude as the stratum variable is an efficient approach, offering lower cost and higher precision for the snail survey.
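The kind of sample-size comparison described above can be illustrated with standard survey-sampling formulas. The sketch below is generic, not the authors' exact procedure: it assumes a mean-density estimation target, a 95% confidence level, and hypothetical stratum variances and area weights.

```python
import numpy as np

def srs_sample_size(sigma, d, z=1.96):
    """Minimum sample size under simple random sampling when the target
    quantity (e.g. mean snail density) must be estimated within an
    absolute error d at ~95% confidence (z = 1.96)."""
    return int(np.ceil((z * sigma / d) ** 2))

def stratified_sample_size(strata_sigmas, strata_weights, d, z=1.96):
    """Minimum total sample size under proportional stratified sampling;
    strata_weights are the (hypothetical) area fractions of each altitude stratum."""
    pooled_var = np.sum(np.asarray(strata_weights) * np.asarray(strata_sigmas) ** 2)
    return int(np.ceil(z ** 2 * pooled_var / d ** 2))

# Hypothetical numbers: stratifying by altitude reduces within-stratum
# variability, so the stratified design needs fewer quadrats.
print(srs_sample_size(sigma=1.2, d=0.15))
print(stratified_sample_size([0.6, 0.9, 1.1], [0.3, 0.4, 0.3], d=0.15))
```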
2013-01-01
The monitoring of the cardiac output (CO) and other hemodynamic parameters, traditionally performed with the thermodilution method via a pulmonary artery catheter (PAC), is now increasingly done with the aid of less invasive and much easier to use devices. When used within the context of a hemodynamic optimization protocol, they can positively influence the outcome in both surgical and non-surgical patient populations. While these monitoring tools have simplified the hemodynamic calculations, they are subject to limitations and can lead to erroneous results if not used properly. In this article we will review the commercially available minimally invasive CO monitoring devices, explore their technical characteristics and describe the limitations that should be taken into consideration when clinical decisions are made. PMID:24472443
A model for explaining fusion suppression using classical trajectory method
NASA Astrophysics Data System (ADS)
Phookan, C. K.; Kalita, K.
2015-01-01
We adopt a semi-classical approach for explanation of projectile breakup and above barrier fusion suppression for the reactions 6Li+152Sm and 6Li+144Sm. The cut-off impact parameter for fusion is determined by employing quantum mechanical ideas. Within this cut-off impact parameter for fusion, the fraction of projectiles undergoing breakup is determined using the method of classical trajectory in two-dimensions. For obtaining the initial conditions of the equations of motion, a simplified model of the 6Li nucleus has been proposed. We introduce a simple formula for explanation of fusion suppression. We find excellent agreement between the experimental and calculated fusion cross section. A slight modification of the above formula for fusion suppression is also proposed for a three-dimensional model.
Influence of the electromagnetic parameters on the surface wave attenuation in thin absorbing layers
NASA Astrophysics Data System (ADS)
Li, Yinrui; Li, Dongmeng; Wang, Xian; Nie, Yan; Gong, Rongzhou
2018-05-01
This paper describes the relationships between the surface wave attenuation properties and the electromagnetic parameters of radar absorbing materials (RAMs). In order to conveniently obtain the attenuation constant of TM surface waves over a wide frequency range, the simplified dispersion equations in thin absorbing materials were firstly deduced. The validity of the proposed method was proved by comparing with the classical dispersion equations. Subsequently, the attenuation constants were calculated separately for the absorbing layers with hypothetical relative permittivity and permeability. It is found that the surface wave attenuation properties can be strongly tuned by the permeability of RAM. Meanwhile, the permittivity should be appropriate so as to maintain high cutoff frequency. The present work provides specific methods and designs to improve the attenuation performances of radar absorbing materials.
Espino, Daniel M; Shepherd, Duncan E T; Hukins, David W L
2014-01-01
A transient multi-physics model of the mitral heart valve has been developed, which allows simultaneous calculation of fluid flow and structural deformation. A recently developed contact method has been applied to enable simulation of systole (the stage when blood pressure is elevated within the heart to pump blood to the body). The geometry was simplified to represent the mitral valve within the heart walls in two dimensions. Only the mitral valve undergoes deformation. A moving arbitrary Lagrange-Euler mesh is used to allow true fluid-structure interaction (FSI). The FSI model requires blood flow to induce valve closure by inducing strains in the region of 10-20%. Model predictions were found to be consistent with existing literature and will undergo further development.
Marom, Gil; Bluestein, Danny
2016-01-01
This paper evaluated the influence of various numerical implementation assumptions on predicting blood damage in cardiovascular devices using Lagrangian methods with Eulerian computational fluid dynamics. The implementation assumptions that were tested included various seeding patterns, a stochastic walk model, and simplified trajectory calculations with pathlines. Post-processing implementation options that were evaluated included single-passage and repeated-passage stress accumulation and time averaging. This study demonstrated that the implementation assumptions can significantly affect the resulting stress accumulation, i.e., the blood damage model predictions. Careful consideration should be taken in the use of Lagrangian models. Ultimately, the appropriate assumptions should be chosen based on the physics of the specific case, and sensitivity analyses, similar to the ones presented here, should be employed. PMID:26679833
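For readers unfamiliar with Lagrangian stress accumulation, a common linear formulation sums the scalar stress times the residence time along each pathline, and repeated passages multiply the single-passage value. The sketch below illustrates that idea only; the stress history, the linear accumulation rule, and the repeated-passage multiplication are illustrative assumptions, not the specific damage model used in the paper.

```python
import numpy as np

def stress_accumulation(tau, dt):
    """Linear stress accumulation along one pathline: sum of the
    instantaneous scalar stress times the local residence time."""
    return float(np.sum(np.asarray(tau) * np.asarray(dt)))

def repeated_passages(sa_single, n_passes):
    """Crude repeated-passage estimate: accumulate the single-passage
    value over n_passes device transits."""
    return n_passes * sa_single

# Hypothetical stress history sampled along one seeded trajectory (Pa, s)
tau = [1.5, 3.2, 8.0, 2.1]
dt = [0.01, 0.02, 0.005, 0.015]
sa = stress_accumulation(tau, dt)
print(sa, repeated_passages(sa, n_passes=100))
```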
A transmission line model for propagation in elliptical core optical fibers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Georgantzos, E.; Boucouvalas, A. C.; Papageorgiou, C.
The calculation of mode propagation constants of elliptical core fibers has been the subject of extended research leading to many notable methods, with the classic step index solution based on Mathieu functions. This paper seeks to derive a new innovative method for the determination of mode propagation constants in single mode fibers with elliptic core by modeling the elliptical fiber as a series of connected coupled transmission line elements. We develop a matrix formulation of the transmission line, and the resonance of the circuits is used to calculate the mode propagation constants. The technique, used with success in the case of cylindrical fibers, is now being extended to the case of fibers with elliptical cross section. The advantage of this approach is that it is very well suited to calculating the mode dispersion of arbitrary refractive index profile elliptical waveguides. The analysis begins with the deployment of Maxwell's equations adjusted for elliptical coordinates. Further algebraic analysis leads to a set of equations where we are faced with the appearance of harmonics. Taking into consideration a predefined, fixed number of harmonics simplifies the problem and enables the use of the resonant circuits approach. For each case, programs have been created in Matlab, providing a series of results (mode propagation constants) that are further compared with corresponding results from the well-known Mathieu function method.
Chung, Jung Wha; Shin, Eun; Kim, Haeryoung; Han, Ho-Seong; Cho, Jai Young; Choi, Young Rok; Hong, Sukho; Jang, Eun Sun; Kim, Jin-Wook; Jeong, Sook-Hyang
2018-05-01
Hepatic iron overload is associated with liver injury and hepatocarcinogenesis; however, it has not been evaluated in patients with hepatocellular carcinoma (HCC) in Asia. The aim of this study was to clarify the degree and distribution of intrahepatic iron deposition, and their effects on the survival of HCC patients. Intrahepatic iron deposition was examined using non-tumorous liver tissues from 204 HCC patients after curative resection, and they were scored by 2 semi-quantitative methods: simplified Scheuer's and modified Deugnier's methods. For the Scheuer's method, iron deposition in hepatocytes and Kupffer cells was separately evaluated, while for the modified Deugnier's method, hepatocyte iron score (HIS), sinusoidal iron score (SIS) and portal iron score (PIS) were systematically evaluated, and the corrected total iron score (cTIS) was calculated by multiplying the sum (TIS) of the HIS, SIS, and PIS by the coefficient. The overall prevalence of hepatic iron was 40.7% with the simplified Scheuer's method and 45.1% with the modified Deugnier's method with a mean cTIS score of 2.46. During a median follow-up of 67 months, the cTIS was not associated with overall survival. However, a positive PIS was significantly associated with a lower 5-year overall survival rate (50.0%) compared with a negative PIS (73.7%, P = .006). In the multivariate analysis, a positive PIS was an independent factor for overall mortality (hazard ratio, 2.310; 95% confidence interval, 1.181-4.517). Intrahepatic iron deposition was common, and iron overload in the portal tract indicated poor survival in curatively resected HCC patients. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
New Approaches for Calculating Moran’s Index of Spatial Autocorrelation
Chen, Yanguang
2013-01-01
Spatial autocorrelation plays an important role in geographical analysis; however, there is still room for improvement of this method. The formula for Moran’s index is complicated, and several basic problems remain to be solved. Therefore, I will reconstruct its mathematical framework using mathematical derivation based on linear algebra and present four simple approaches to calculating Moran’s index. Moran’s scatterplot will be ameliorated, and new test methods will be proposed. The relationship between the global Moran’s index and Geary’s coefficient will be discussed from two different vantage points: spatial population and spatial sample. The sphere of applications for both Moran’s index and Geary’s coefficient will be clarified and defined. One of theoretical findings is that Moran’s index is a characteristic parameter of spatial weight matrices, so the selection of weight functions is very significant for autocorrelation analysis of geographical systems. A case study of 29 Chinese cities in 2000 will be employed to validate the innovatory models and methods. This work is a methodological study, which will simplify the process of autocorrelation analysis. The results of this study will lay the foundation for the scaling analysis of spatial autocorrelation. PMID:23874592
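The matrix formulation referenced in this abstract can be made concrete with the standard global Moran's I expression, I = (n / S0) · z'Wz / z'z, where z are deviations from the mean and S0 is the sum of all weights. The sketch below is a minimal generic implementation with a hypothetical four-unit contiguity matrix, not the specific reformulations or test statistics proposed in the paper.

```python
import numpy as np

def morans_index(x, W):
    """Global Moran's I in matrix form: I = (n / S0) * z'Wz / z'z,
    where z are deviations from the mean and S0 is the sum of weights."""
    x = np.asarray(x, dtype=float)
    W = np.asarray(W, dtype=float)
    z = x - x.mean()
    s0 = W.sum()
    n = x.size
    return (n / s0) * (z @ W @ z) / (z @ z)

# Hypothetical 4-unit system with a symmetric contiguity weight matrix
x = [10.0, 12.0, 30.0, 35.0]
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(morans_index(x, W))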
Use of the Budyko Framework to Estimate the Virtual Water Content in Shijiazhuang Plain, North China
NASA Astrophysics Data System (ADS)
Zhang, E.; Yin, X.
2017-12-01
One of the most challenging steps in implementing analysis of virtual water content (VWC) of agricultural crops is how to properly assess the volume of consumptive water use (CWU) for crop production. In practice, CWU is considered equivalent to the crop evapotranspiration (ETc). Following the crop coefficient method, ETc can be calculated under standard or non-standard conditions by multiplying the reference evapotranspiration (ET0) by one or a few coefficients. However, when current crop growing conditions deviate from standard conditions, accurately determining the coefficients under non-standard conditions remains to be a complicated process and requires lots of field experimental data. Based on regional surface water-energy balance, this research integrates the Budyko framework into the traditional crop coefficient approach to simplify the coefficients determination. This new method enables us to assess the volume of agricultural VWC only based on some hydrometeorological data and agricultural statistic data in regional scale. To demonstrate the new method, we apply it to the Shijiazhuang Plain, which is an agricultural irrigation area in the North China Plain. The VWC of winter wheat and summer maize is calculated and we further subdivide VWC into blue and green water components. Compared with previous studies in this study area, VWC calculated by the Budyko-based crop coefficient approach uses less data and agrees well with some of the previous research. It shows that this new method may serve as a more convenient tool for assessing VWC.
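The bookkeeping behind the crop coefficient step and the virtual water content calculation is simple and can be sketched as below. This is a generic illustration under standard conditions (ETc = Kc · ET0, VWC = CWU / yield, with 1 mm over 1 ha equal to 10 m³); the Budyko-based adjustment of the coefficients developed in the study is not reproduced here, and all numbers are hypothetical.

```python
def crop_evapotranspiration(et0_mm, kc):
    """Standard-condition crop evapotranspiration (mm) from reference ET0
    and a crop coefficient Kc."""
    return kc * et0_mm

def virtual_water_content(etc_mm, yield_t_per_ha):
    """VWC in m^3 per tonne: 1 mm of water over 1 ha equals 10 m^3."""
    cwu_m3_per_ha = 10.0 * etc_mm      # consumptive water use per hectare
    return cwu_m3_per_ha / yield_t_per_ha

# Hypothetical seasonal values for winter wheat
etc = crop_evapotranspiration(et0_mm=450.0, kc=0.9)    # growing-season ETc
print(virtual_water_content(etc, yield_t_per_ha=6.0))  # m^3 per tonne
```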
NASA Astrophysics Data System (ADS)
Alvarez, Jose; Massey, Steven; Kalitsov, Alan; Velev, Julian
Nanopore sequencing via transverse current has emerged as a competitive candidate for mapping DNA methylation without the need for bisulfite treatment, fluorescent tags, or PCR amplification. By eliminating the error-producing amplification step, long read lengths become feasible, which greatly simplifies the assembly process and reduces the time and cost inherent in current technologies. However, due to the large error rates of nanopore sequencing, single-base resolution has not been reached. A very important source of noise is the intrinsic structural noise in the electric signature of the nucleotide arising from the influence of neighboring nucleotides. In this work we perform calculations of the tunneling current through DNA molecules in nanopores using the non-equilibrium electron transport method within an effective multi-orbital tight-binding model derived from first-principles calculations. We develop a base-calling algorithm accounting for the correlations of the current through neighboring bases, which in principle can reduce the error rate below any desired precision. Using this method we show that we can clearly distinguish DNA methylation and other base modifications based on the reading of the tunneling current.
Two-port connecting-layer-based sandwiched grating by a polarization-independent design.
Li, Hongtao; Wang, Bo
2017-05-02
In this paper, a two-port connecting-layer-based sandwiched beam splitter grating with polarization-independent properties is reported and designed. Such a grating can separate the transmitted polarized light into two diffraction orders with equal energies, realizing a nearly 50/50 output with good uniformity. For the given wavelength of 800 nm and period of 780 nm, a simplified modal method can yield an optimal duty cycle, and an estimate of the grating depth can be calculated from it. In order to obtain precise grating parameters, a rigorous coupled-wave analysis can be employed to optimize them by determining the precise grating depth and the thickness of the connecting layer. Based on the optimized design, a high-efficiency two-port output grating with wideband performance can be obtained. More importantly, the diffraction efficiencies are calculated using two analytical methods, which are shown to coincide well with each other. Therefore, the grating is significant as a practical optical photonic element in engineering.
GEN-IV Benchmarking of Triso Fuel Performance Models under accident conditions modeling input data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise Paul
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: • The modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release. • The modeling of the AGR-1 and HFR-EU1bis safety testing experiments. • The comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary. 09/2016: Tables 6 and 8 updated. AGR-2 input data added.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Collin, Blaise P.
2014-09-01
This document presents the benchmark plan for the calculation of particle fuel performance on safety testing experiments that are representative of operational accidental transients. The benchmark is dedicated to the modeling of fission product release under accident conditions by fuel performance codes from around the world, and the subsequent comparison to post-irradiation experiment (PIE) data from the modeled heating tests. The accident condition benchmark is divided into three parts: the modeling of a simplified benchmark problem to assess potential numerical calculation issues at low fission product release; the modeling of the AGR-1 and HFR-EU1bis safety testing experiments; and the comparison of the AGR-1 and HFR-EU1bis modeling results with PIE data. The simplified benchmark case, thereafter named NCC (Numerical Calculation Case), is derived from “Case 5” of the International Atomic Energy Agency (IAEA) Coordinated Research Program (CRP) on coated particle fuel technology [IAEA 2012]. It is included so participants can evaluate their codes at low fission product release. “Case 5” of the IAEA CRP-6 showed large code-to-code discrepancies in the release of fission products, which were attributed to “effects of the numerical calculation method rather than the physical model” [IAEA 2012]. The NCC is therefore intended to check if these numerical effects subsist. The first two steps imply the involvement of the benchmark participants with a modeling effort following the guidelines and recommendations provided by this document. The third step involves the collection of the modeling results by Idaho National Laboratory (INL) and the comparison of these results with the available PIE data. The objective of this document is to provide all necessary input data to model the benchmark cases, and to give some methodology guidelines and recommendations in order to make all results suitable for comparison with each other. The participants should read this document thoroughly to make sure all the data needed for their calculations is provided in the document. Missing data will be added to a revision of the document if necessary.
Image segmentation algorithm based on improved PCNN
NASA Astrophysics Data System (ADS)
Chen, Hong; Wu, Chengdong; Yu, Xiaosheng; Wu, Jiahui
2017-11-01
A modified simplified Pulse Coupled Neural Network (PCNN) model based on the simplified PCNN is proposed in this article. Some work has been done to enrich this model, such as imposing restriction terms on the inputs and improving the linking inputs and internal activity of the PCNN. A self-adaptive setting method for the linking coefficient and the threshold decay time constant is also proposed. Finally, an image segmentation algorithm based on the proposed simplified PCNN model and particle swarm optimization (PSO) is applied to five test images. Experimental results demonstrate that this image segmentation algorithm performs much better than the SPCNN and Otsu methods.
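A simplified PCNN iteration of the general kind referred to here can be sketched as follows. This is a minimal, textbook-style SPCNN loop with assumed parameter values and an assumed 3×3 linking kernel; it is not the modified model or the self-adaptive parameter setting proposed in the article.

```python
import numpy as np
from scipy.ndimage import convolve

def spcnn_segment(img, beta=0.3, v_e=20.0, alpha_e=0.7, n_iter=10):
    """Simplified PCNN: feeding input F = image, linking L from the 3x3
    neighbourhood of previous firings, internal activity U = F*(1 + beta*L),
    a neuron fires when U exceeds the dynamic threshold E, and E decays
    exponentially while being recharged by firing."""
    img = img.astype(float) / img.max()
    kernel = np.array([[0.5, 1.0, 0.5],
                       [1.0, 0.0, 1.0],
                       [0.5, 1.0, 0.5]])
    Y = np.zeros_like(img)          # firing output
    E = np.ones_like(img)           # dynamic threshold
    fired = np.zeros_like(img)      # accumulated firing map
    for _ in range(n_iter):
        L = convolve(Y, kernel, mode='constant')
        U = img * (1.0 + beta * L)
        Y = (U > E).astype(float)
        E = np.exp(-alpha_e) * E + v_e * Y
        fired += Y
    return fired                    # firing map used as a segmentation result

# Hypothetical use on a random test image
print(spcnn_segment(np.random.rand(64, 64)).shape)
```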
NASA Astrophysics Data System (ADS)
Lee, Sheng-Jui; Chen, Hung-Cheng; You, Zhi-Qiang; Liu, Kuan-Lin; Chow, Tahsin J.; Chen, I.-Chia; Hsu, Chao-Ping
2010-10-01
We calculate the electron transfer (ET) rates for a series of heptacyclo[6.6.0.02,6.03,13.014,11.05,9.010,14]-tetradecane (HCTD) linked donor-acceptor molecules. The electronic coupling factor was calculated by the fragment charge difference (FCD) [19] and the generalized Mulliken-Hush (GMH) schemes [20]. We found that the FCD is less prone to problems commonly seen in the GMH scheme, especially when the coupling values are small. For a 3-state case where the charge transfer (CT) state is coupled with two different locally excited (LE) states, we tested with the 3-state approach for the GMH scheme [30], and found that it works well with the FCD scheme. A simplified direct diagonalization based on Rust's 3-state scheme was also proposed and tested. This simplified scheme does not require a manual assignment of the states, and it yields coupling values that are largely similar to those from the full Rust's approach. The overall electron transfer (ET) coupling rates were also calculated.
Oxidant K edge x-ray emission spectroscopy of UF 4 and UO 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tobin, J. G.; Yu, S. -W.; Qiao, R.
The K-Edge (1s) x-ray emission spectroscopy of uranium tetrafluoride and uranium dioxide were compared to each other and to the results of a pair of earlier cluster calculations. Here, using a very simplified approach, it is possible to qualitatively reconstruct the main features of the x-ray emission spectra from the cluster calculation state energies and 2p percentages.
Oxidant K edge x-ray emission spectroscopy of UF 4 and UO 2
Tobin, J. G.; Yu, S. -W.; Qiao, R.; ...
2018-01-31
The K-Edge (1s) x-ray emission spectroscopy of uranium tetrafluoride and uranium dioxide were compared to each other and to the results of a pair of earlier cluster calculations. Here, using a very simplified approach, it is possible to qualitatively reconstruct the main features of the x-ray emission spectra from the cluster calculation state energies and 2p percentages.
Immersed boundary-simplified lattice Boltzmann method for incompressible viscous flows
NASA Astrophysics Data System (ADS)
Chen, Z.; Shu, C.; Tan, D.
2018-05-01
An immersed boundary-simplified lattice Boltzmann method is developed in this paper for simulations of two-dimensional incompressible viscous flows with immersed objects. Assisted by the fractional step technique, the problem is resolved in a predictor-corrector scheme. The predictor step solves the flow field without considering immersed objects, and the corrector step imposes the effect of immersed boundaries on the velocity field. Different from the previous immersed boundary-lattice Boltzmann method which adopts the standard lattice Boltzmann method (LBM) as the flow solver in the predictor step, a recently developed simplified lattice Boltzmann method (SLBM) is applied in the present method to evaluate intermediate flow variables. Compared to the standard LBM, SLBM requires lower virtual memories, facilitates the implementation of physical boundary conditions, and shows better numerical stability. The boundary condition-enforced immersed boundary method, which accurately ensures no-slip boundary conditions, is implemented as the boundary solver in the corrector step. Four typical numerical examples are presented to demonstrate the stability, the flexibility, and the accuracy of the present method.
An improved loopless mounting method for cryocrystallography
NASA Astrophysics Data System (ADS)
Qi, Jian-Xun; Jiang, Fan
2010-01-01
Based on a recent loopless mounting method, a simplified loopless and bufferless crystal mounting method is developed for macromolecular crystallography. This simplified crystal mounting system is composed of the following components: a home-made glass capillary, a brass seat for holding the glass capillary, a flow regulator, and a vacuum pump for evacuation. Compared with the currently prevalent loop mounting method, this simplified method has almost the same mounting procedure and thus is compatible with the current automated crystal mounting system. The advantages of this method include a higher signal-to-noise ratio, more accurate measurement, more rapid flash cooling, less x-ray absorption and thus less radiation damage to the crystal. This method can be extended to the flash-freezing of a crystal with or without soaking it in a lower concentration of cryoprotectant, so it may be the best option for data collection in the absence of a suitable cryoprotectant. Therefore, it is suggested that this mounting method should be further improved and extensively applied to cryocrystallographic experiments.
Passive Acoustic Leak Detection for Sodium Cooled Fast Reactors Using Hidden Markov Models
NASA Astrophysics Data System (ADS)
Marklund, A. Riber; Kishore, S.; Prakash, V.; Rajan, K. K.; Michel, F.
2016-06-01
Acoustic leak detection for steam generators of sodium fast reactors have been an active research topic since the early 1970s and several methods have been tested over the years. Inspired by its success in the field of automatic speech recognition, we here apply hidden Markov models (HMM) in combination with Gaussian mixture models (GMM) to the problem. To achieve this, we propose a new feature calculation scheme, based on the temporal evolution of the power spectral density (PSD) of the signal. Using acoustic signals recorded during steam/water injection experiments done at the Indira Gandhi Centre for Atomic Research (IGCAR), the proposed method is tested. We perform parametric studies on the HMM+GMM model size and demonstrate that the proposed method a) performs well without a priori knowledge of injection noise, b) can incorporate several noise models and c) has an output distribution that simplifies false alarm rate control.
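The feature scheme and classifier combination described above can be prototyped with off-the-shelf tools. The sketch below is only an illustration of the idea: it derives a feature sequence from the temporal evolution of the PSD (collapsed into a few frequency bands) and fits one GMM-HMM per acoustic class with hmmlearn; the sampling rate, band count, model sizes, and the random "signals" are all assumptions, not the paper's data or configuration.

```python
import numpy as np
from scipy.signal import spectrogram
from hmmlearn.hmm import GMMHMM

def psd_features(x, fs, nperseg=1024, n_bands=16):
    """Feature sequence from the temporal evolution of the PSD:
    one log band-energy vector per spectrogram frame."""
    f, t, Sxx = spectrogram(x, fs=fs, nperseg=nperseg)
    bands = np.array_split(Sxx, n_bands, axis=0)
    feats = np.stack([b.sum(axis=0) for b in bands], axis=1)
    return np.log(feats + 1e-12)           # shape (n_frames, n_bands)

# Hypothetical training data: fit one GMM-HMM on background noise and one
# on recorded injection (leak) noise, then classify by log-likelihood.
fs = 50_000
background = psd_features(np.random.randn(fs * 2), fs)
leak = psd_features(np.random.randn(fs * 2) * 2.0, fs)

model_bg = GMMHMM(n_components=3, n_mix=2, n_iter=20).fit(background)
model_leak = GMMHMM(n_components=3, n_mix=2, n_iter=20).fit(leak)

test = psd_features(np.random.randn(fs), fs)
print("leak detected" if model_leak.score(test) > model_bg.score(test) else "no leak")
```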
On importance assessment of aging multi-state system
NASA Astrophysics Data System (ADS)
Frenkel, Ilia; Khvatskin, Lev; Lisnianski, Anatoly
2017-01-01
Modern high-tech equipment requires precise temperature control and effective cooling below the ambient temperature. Greater cooling efficiencies allow equipment to be operated for longer periods without overheating, providing a greater return on investment and increased availability of the equipment. This paper presents an application of the Lz-transform method to the importance assessment of an aging multi-state water-cooling system used in an Israeli hospital. The water-cooling system consists of three principal sub-systems: chillers, a heat exchanger and pumps. The performance of the system and the sub-systems is measured by their produced cooling capacity. The heat exchanger is an aging component. A straightforward Markov method applied to this problem would require building a system model with a very large number of states and solving the corresponding system of differential equations. The Lz-transform method, used here to calculate the importance of the system elements, drastically simplifies the solution. A numerical example is presented to illustrate the described approach.
Aeroacoustic Analysis of a Simplified Landing Gear
NASA Technical Reports Server (NTRS)
Lockard, David P.; Khorrami, Mehdi, R.; Li, Fei
2004-01-01
A hybrid approach is used to investigate the noise generated by a simplified landing gear without small scale parts such as hydraulic lines and fasteners. The Ffowcs Williams and Hawkings equation is used to predict the noise at far-field observer locations from flow data provided by an unsteady computational fluid dynamics calculation. A simulation with 13 million grid points has been completed, and comparisons are made between calculations with different turbulence models. Results indicate that the turbulence model has a profound effect on the levels and character of the unsteadiness. Flow data on solid surfaces and a set of permeable surfaces surrounding the gear have been collected. Noise predictions using the porous surfaces appear to be contaminated by errors caused by large wake fluctuations passing through the surfaces. However, comparisons between predictions using the solid surfaces with the near-field CFD solution are in good agreement giving confidence in the far-field results.
A simplified parsimonious higher order multivariate Markov chain model
NASA Astrophysics Data System (ADS)
Wang, Chao; Yang, Chuan-sheng
2017-09-01
In this paper, a simplified parsimonious higher-order multivariate Markov chain model (SPHOMMCM) is presented. Moreover, a parameter estimation method for the model is given. Numerical experiments show the effectiveness of the proposed model.
Self-consistent modelling of line-driven hot-star winds with Monte Carlo radiation hydrodynamics
NASA Astrophysics Data System (ADS)
Noebauer, U. M.; Sim, S. A.
2015-11-01
Radiative pressure exerted by line interactions is a prominent driver of outflows in astrophysical systems, being at work in the outflows emerging from hot stars or from the accretion discs of cataclysmic variables, massive young stars and active galactic nuclei. In this work, a new radiation hydrodynamical approach to model line-driven hot-star winds is presented. By coupling a Monte Carlo radiative transfer scheme with a finite volume fluid dynamical method, line-driven mass outflows may be modelled self-consistently, benefiting from the advantages of Monte Carlo techniques in treating multiline effects, such as multiple scatterings, and in dealing with arbitrary multidimensional configurations. In this work, we introduce our approach in detail by highlighting the key numerical techniques and verifying their operation in a number of simplified applications, specifically in a series of self-consistent, one-dimensional, Sobolev-type, hot-star wind calculations. The utility and accuracy of our approach are demonstrated by comparing the obtained results with the predictions of various formulations of the so-called CAK theory and by confronting the calculations with modern sophisticated techniques of predicting the wind structure. Using these calculations, we also point out some useful diagnostic capabilities our approach provides. Finally, we discuss some of the current limitations of our method, some possible extensions and potential future applications.
A model for the rapid assessment of the impact of aviation noise near airports.
Torija, Antonio J; Self, Rod H; Flindell, Ian H
2017-02-01
This paper introduces a simplified model [Rapid Aviation Noise Evaluator (RANE)] for the calculation of aviation noise within the context of multi-disciplinary strategic environmental assessment where input data are both limited and constrained by compatibility requirements against other disciplines. RANE relies upon the concept of noise cylinders around defined flight-tracks with the Noise Radius determined from publicly available Noise-Power-Distance curves rather than the computationally intensive multiple point-to-point grid calculation with subsequent ISO-contour interpolation methods adopted in the FAA's Integrated Noise Model (INM) and similar models. Preliminary results indicate that for simple single runway scenarios, changes in airport noise contour areas can be estimated with minimal uncertainty compared against grid-point calculation methods such as INM. In situations where such outputs are all that is required for preliminary strategic environmental assessment, there are considerable benefits in reduced input data and computation requirements. Further development of the noise-cylinder-based model (such as the incorporation of lateral attenuation, engine-installation-effects or horizontal track dispersion via the assumption of more complex noise surfaces formed around the flight-track) will allow for more complex assessment to be carried out. RANE is intended to be incorporated into technology evaluators for the noise impact assessment of novel aircraft concepts.
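The noise-cylinder idea can be illustrated with a small calculation: interpolate a Noise-Power-Distance curve to find the slant distance at which a target level is reached, treat that distance as the cylinder radius, and derive a contour footprint around a straight track. The sketch below is a schematic illustration only, with made-up NPD values and a simple rectangle-plus-end-caps footprint; it is not the RANE implementation.

```python
import numpy as np

# Hypothetical Noise-Power-Distance data: SEL (dB) at reference slant
# distances (m) for a single engine power setting.
npd_distance_m = np.array([200, 400, 800, 1600, 3200, 6400])
npd_sel_db = np.array([94.0, 88.5, 82.0, 75.0, 67.5, 59.0])

def noise_radius(target_sel_db):
    """Slant distance at which the NPD curve drops to the target level,
    found by interpolating level against log10(distance)."""
    return 10 ** np.interp(target_sel_db,
                           npd_sel_db[::-1],               # xp must be increasing
                           np.log10(npd_distance_m)[::-1])

def contour_area_km2(radius_m, track_length_m):
    """Footprint of a noise cylinder of this radius around a straight
    ground track (rectangle plus two half-discs), in km^2."""
    area_m2 = 2 * radius_m * track_length_m + np.pi * radius_m ** 2
    return area_m2 / 1e6

r = noise_radius(70.0)
print(r, contour_area_km2(r, track_length_m=20_000))
```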
Interactive Rapid Dose Assessment Model (IRDAM): reactor-accident assessment methods. Vol. 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Poeton, R.W.; Moeller, M.P.; Laughlin, G.J.
1983-05-01
As part of the continuing emphasis on emergency preparedness, the US Nuclear Regulatory Commission (NRC) sponsored the development of a rapid dose assessment system by Pacific Northwest Laboratory (PNL). This system, the Interactive Rapid Dose Assessment Model (IRDAM), is a micro-computer-based program for rapidly assessing the radiological impact of accidents at nuclear power plants. This document describes the technical bases for IRDAM including methods, models and assumptions used in calculations. IRDAM calculates whole body (5-cm depth) and infant thyroid doses at six fixed downwind distances between 500 and 20,000 meters. Radionuclides considered primarily consist of noble gases and radioiodines. In order to provide a rapid assessment capability consistent with the capacity of the Osborne-1 computer, certain simplifying approximations and assumptions are made. These are described, along with default values (assumptions used in the absence of specific input), in the text of this document. Two companion volumes to this one provide additional information on IRDAM. The User's Guide (NUREG/CR-3012, Volume 1) describes the setup and operation of equipment necessary to run IRDAM. Scenarios for Comparing Dose Assessment Models (NUREG/CR-3012, Volume 3) provides the results of calculations made by IRDAM and other models for specific accident scenarios.
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
NASA Astrophysics Data System (ADS)
Pei, C.; Bieber, J. W.; Burger, R. A.; Clem, J.
2010-12-01
We present a detailed description of our newly developed stochastic approach for solving Parker's transport equation, which we believe is the first attempt to solve it with time dependence in 3-D, evolving from our 3-D steady state stochastic approach. Our formulation of this method is general and is valid for any type of heliospheric magnetic field, although we choose the standard Parker field as an example to illustrate the steps to calculate the transport of galactic cosmic rays. Our 3-D stochastic method is different from other stochastic approaches in the literature in several ways. For example, we employ spherical coordinates to integrate directly, which makes the code much more efficient by reducing coordinate transformations. What is more, the equivalence between our stochastic differential equations and Parker's transport equation is guaranteed by Ito's theorem in contrast to some other approaches. We generalize the technique for calculating particle flux based on the pseudoparticle trajectories for steady state solutions and for time-dependent solutions in 3-D. To validate our code, first we show that good agreement exists between solutions obtained by our steady state stochastic method and a traditional finite difference method. Then we show that good agreement also exists for our time-dependent method for an idealized and simplified heliosphere which has a Parker magnetic field and a simple initial condition for two different inner boundary conditions.
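The pseudoparticle idea behind such stochastic solvers can be illustrated with a one-dimensional Euler-Maruyama integration of a toy advection-diffusion problem. The sketch below is purely illustrative: the drift, diffusion coefficient, units, and boundary handling are placeholder assumptions, not the 3-D heliospheric formulation or the Parker-field drift terms used in the paper.

```python
import numpy as np

def euler_maruyama_paths(r0, kappa, v_sw, dt, n_steps, n_particles, rng=None):
    """Integrate dr = (advection) dt + sqrt(2*kappa) dW for a toy 1-D radial
    transport problem; each pseudoparticle follows one realisation of the SDE."""
    if rng is None:
        rng = np.random.default_rng(0)
    r = np.full(n_particles, r0, dtype=float)
    for _ in range(n_steps):
        drift = v_sw                                      # toy constant advection
        diffusion = np.sqrt(2.0 * kappa * dt) * rng.standard_normal(n_particles)
        r = r + drift * dt + diffusion
        r = np.clip(r, 0.05, None)                        # crude inner boundary
    return r

# Hypothetical parameters in AU and AU/day
final_r = euler_maruyama_paths(r0=1.0, kappa=0.02, v_sw=0.23,
                               dt=0.01, n_steps=1000, n_particles=5000)
print(final_r.mean(), final_r.std())
```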
Zhang, Heng; Pan, Zhongming; Zhang, Wenna
2018-06-07
An acoustic-seismic mixed feature extraction method based on the wavelet coefficient energy ratio (WCER) of the target signal is proposed in this study for classifying vehicle targets in wireless sensor networks. The signal was decomposed into a set of wavelet coefficients using the à trous algorithm, which is a concise method used to implement the wavelet transform of a discrete signal sequence. After the wavelet coefficients of the target acoustic and seismic signals were obtained, the energy ratio of each layer coefficient was calculated as the feature vector of the target signals. Subsequently, the acoustic and seismic features were merged into an acoustic-seismic mixed feature to improve the target classification accuracy after the acoustic and seismic WCER features of the target signal were simplified using the hierarchical clustering method. We selected the support vector machine method for classification and utilized the data acquired from a real-world experiment to validate the proposed method. The calculated results show that the WCER feature extraction method can effectively extract the target features from target signals. Feature simplification can reduce the time consumption of feature extraction and classification, with no effect on the target classification accuracy. The use of acoustic-seismic mixed features effectively improved target classification accuracy by approximately 12% compared with either acoustic signal or seismic signal alone.
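A wavelet coefficient energy ratio of this general kind can be sketched with an undecimated (stationary) wavelet transform, which is closely related to the à trous algorithm. The example below is an illustration under assumed choices (a 'db4' wavelet, four levels, random frames in place of real acoustic/seismic data) and does not reproduce the paper's decomposition depth, clustering step, or classifier.

```python
import numpy as np
import pywt

def wcer_features(signal, wavelet='db4', level=4):
    """Wavelet coefficient energy ratio: the fraction of total energy
    carried by the detail coefficients of each decomposition layer of a
    stationary wavelet transform (an a-trous-style decomposition)."""
    coeffs = pywt.swt(np.asarray(signal, dtype=float), wavelet, level=level)
    energies = np.array([np.sum(cd ** 2) for _, cd in coeffs])
    return energies / energies.sum()

# Hypothetical acoustic and seismic frames of equal, power-of-two length
rng = np.random.default_rng(1)
acoustic = rng.standard_normal(1024)
seismic = rng.standard_normal(1024)

# Mixed feature: concatenation of the per-layer energy ratios of both signals
mixed_feature = np.concatenate([wcer_features(acoustic), wcer_features(seismic)])
print(mixed_feature)
```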
Chemistry by Way of Density Functional Theory
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Ricca, Alessandra; Partridge, Harry; Langohff, Stephen R.; Arnold, James O. (Technical Monitor)
1996-01-01
In this work we demonstrate that density functional theory (DFT) methods make an important contribution to understanding chemical systems and are an important additional method for the computational chemist. We report calibration calculations obtained with different functionals for the 55 G2 molecules to justify our selection of the B3LYP functional. We show that accurate geometries and vibrational frequencies obtained at the B3LYP level can be combined with traditional methods to simplify the calculation of accurate heats of formation. We illustrate the application of the B3LYP approach to a variety of chemical problems from the vibrational frequencies of polycyclic aromatic hydrocarbons to transition metal systems. We show that the B3LYP method typically performs better than the MP2 method at a significantly lower computational cost. Thus the B3LYP method allows us to extend our studies to much larger systems while maintaining a high degree of accuracy. We show that for transition metal systems, the B3LYP bond energies are typically of sufficient accuracy that they can be used to explain experimental trends and even differentiate between different experimental values. We show that for boron clusters the B3LYP energetics are not as good as for many of the other systems presented, but even in this case the B3LYP approach is able to help understand the experimental trends.
Simplified half-life methods for the analysis of kinetic data
NASA Technical Reports Server (NTRS)
Eberhart, J. G.; Levin, E.
1988-01-01
The analysis of reaction rate data has as its goal the determination of the order and rate constant which characterize the data. Chemical reactions with one reactant are considered, and simplified methods for accomplishing this goal are presented. The approaches presented involve the use of half-lives or other fractional lives. These methods are particularly useful for the more elementary discussions of kinetics found in general and physical chemistry courses.
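A standard half-life relation illustrates the idea: for an nth-order reaction in a single reactant, the half-life scales as [A]0^(1-n), so a log-log fit of half-life against initial concentration gives the order from the slope. The sketch below is a generic worked example with fabricated second-order data, not material from the report itself.

```python
import numpy as np

def order_from_half_lives(conc0, t_half):
    """For A -> products, t_half is proportional to [A]0**(1-n), so a
    log-log fit of half-life against initial concentration has
    slope = 1 - n; return the estimated reaction order n."""
    slope, _ = np.polyfit(np.log(conc0), np.log(t_half), 1)
    return 1.0 - slope

# Hypothetical data consistent with a second-order reaction,
# t_half = 1 / (k * [A]0) with k = 0.05 L mol^-1 s^-1
conc0 = np.array([0.10, 0.20, 0.40, 0.80])     # mol/L
t_half = 1.0 / (0.05 * conc0)                   # s
print(order_from_half_lives(conc0, t_half))     # ~2.0
```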
A method for modeling finite-core vortices in wake-flow calculations
NASA Technical Reports Server (NTRS)
Stremel, P. M.
1984-01-01
A numerical method for computing nonplanar vortex wakes represented by finite-core vortices is presented. The approach solves for the velocity on an Eulerian grid, using standard finite-difference techniques; the vortex wake is tracked by Lagrangian methods. In this method, the distribution of continuous vorticity in the wake is replaced by a group of discrete vortices. An axially symmetric distribution of vorticity about the center of each discrete vortex is used to represent the finite-core model. Two distributions of vorticity, or core models, are investigated: a finite distribution of vorticity represented by a third-order polynomial, and a continuous distribution of vorticity throughout the wake. The method provides for a vortex-core model that is insensitive to the mesh spacing. Results for a simplified case are presented. Computed results for the roll-up of a vortex wake generated by wings with different spanwise load distributions are presented; contour plots of the flow-field velocities are included; and comparisons are made of the computed flow-field velocities with experimentally measured velocities.
Imaging shear wave propagation for elastic measurement using OCT Doppler variance method
NASA Astrophysics Data System (ADS)
Zhu, Jiang; Miao, Yusi; Qu, Yueqiao; Ma, Teng; Li, Rui; Du, Yongzhao; Huang, Shenghai; Shung, K. Kirk; Zhou, Qifa; Chen, Zhongping
2016-03-01
In this study, we have developed an acoustic radiation force orthogonal excitation optical coherence elastography (ARFOE-OCE) method for the visualization of the shear wave and the calculation of the shear modulus based on the OCT Doppler variance method. The vibration perpendicular to the OCT detection direction is induced by the remote acoustic radiation force (ARF) and the shear wave propagating along the OCT beam is visualized by the OCT M-scan. The homogeneous agar phantom and two-layer agar phantom are measured using the ARFOE-OCE system. The results show that the ARFOE-OCE system has the ability to measure the shear modulus beyond the OCT imaging depth. The OCT Doppler variance method, instead of the OCT Doppler phase method, is used for vibration detection without the need of high phase stability and phase wrapping correction. An M-scan instead of the B-scan for the visualization of the shear wave also simplifies the data processing.
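Once the shear wave front is visualized in the M-scan, the elastic estimate reduces to a speed measurement and the relation G = ρ·c². The sketch below shows only that post-processing step, with hypothetical arrival times and an assumed tissue-like density; it is not the ARFOE-OCE acquisition or Doppler-variance processing itself.

```python
import numpy as np

def shear_modulus_from_mscan(depths_m, arrival_times_s, density=1000.0):
    """Shear wave speed from the slope of depth versus arrival time along
    the OCT beam, then shear modulus G = rho * c^2 (Pa)."""
    speed, _ = np.polyfit(arrival_times_s, depths_m, 1)   # m/s
    return density * speed ** 2, speed

# Hypothetical arrival times of the wave front at four depths in a phantom
depths = np.array([0.5e-3, 1.0e-3, 1.5e-3, 2.0e-3])       # m
times = np.array([0.25e-3, 0.50e-3, 0.75e-3, 1.00e-3])    # s -> c = 2 m/s
print(shear_modulus_from_mscan(depths, times))             # (~4 kPa, 2 m/s)
```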
Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E
2013-10-21
NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
NASA Astrophysics Data System (ADS)
Bahadori, Amir A.; Sato, Tatsuhiko; Slaba, Tony C.; Shavers, Mark R.; Semones, Edward J.; Van Baalen, Mary; Bolch, Wesley E.
2013-10-01
NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
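One of the two risk measures named above, the effective dose, is simply a tissue-weighted sum of organ dose equivalents, E = Σ_T w_T·H_T. The sketch below illustrates that bookkeeping with a deliberately reduced, illustrative subset of tissue weighting factors and hypothetical organ dose equivalents; a real calculation would use the full ICRP weight set and the organ values produced by the transport codes.

```python
# Illustrative (incomplete) tissue weighting factors; a real calculation
# sums over the full ICRP-specified set of organs and tissues.
TISSUE_WEIGHTS = {
    'lung': 0.12,
    'stomach': 0.12,
    'colon': 0.12,
    'red_marrow': 0.12,
    'bladder': 0.04,
    'liver': 0.04,
    'thyroid': 0.04,
}

def effective_dose(organ_dose_equivalents_sv):
    """E = sum_T w_T * H_T over the organs available in the input (Sv)."""
    return sum(TISSUE_WEIGHTS[t] * h
               for t, h in organ_dose_equivalents_sv.items()
               if t in TISSUE_WEIGHTS)

# Hypothetical organ dose equivalents (Sv) from a transport calculation
print(effective_dose({'lung': 0.020, 'stomach': 0.018, 'colon': 0.019,
                      'red_marrow': 0.021, 'liver': 0.017}))
```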
Simplified neutrosophic sets and their applications in multi-criteria group decision-making problems
NASA Astrophysics Data System (ADS)
Peng, Juan-juan; Wang, Jian-qiang; Wang, Jing; Zhang, Hong-yu; Chen, Xiao-hong
2016-07-01
As a variation of fuzzy sets and intuitionistic fuzzy sets, neutrosophic sets have been developed to represent uncertain, imprecise, incomplete and inconsistent information that exists in the real world. Simplified neutrosophic sets (SNSs) have been proposed for the main purpose of addressing issues with a set of specific numbers. However, there are certain problems regarding the existing operations of SNSs, as well as their aggregation operators and the comparison methods. Therefore, this paper defines the novel operations of simplified neutrosophic numbers (SNNs) and develops a comparison method based on the related research of intuitionistic fuzzy numbers. On the basis of these operations and the comparison method, some SNN aggregation operators are proposed. Additionally, an approach for multi-criteria group decision-making (MCGDM) problems is explored by applying these aggregation operators. Finally, an example to illustrate the applicability of the proposed method is provided and a comparison with some other methods is made.
Cylindrical optical resonators: fundamental properties and bio-sensing characteristics
NASA Astrophysics Data System (ADS)
Khozeymeh, Foroogh; Razaghi, Mohammad
2018-04-01
In this paper, a detailed theoretical analysis of cylindrical resonators is demonstrated. As illustrated, these kinds of resonators can be used as optical bio-sensing devices. The proposed structure is analyzed using an analytical method based on Lam's approximation. This method is systematic and has simplified the tedious process of whispering-gallery mode (WGM) wavelength analysis in optical cylindrical biosensors. By this method, analysis of higher radial orders of high angular momentum WGMs has been possible. Using closed-form analytical equations, resonance wavelengths of higher radial and angular order WGMs of TE and TM polarization waves are calculated. It is shown that high angular momentum WGMs are more appropriate for bio-sensing applications. Some of the calculations are done using a numerical non-linear Newton method. A perfect match of 99.84% between the analytical and the numerical methods has been achieved. In order to verify the validity of the calculations, Meep simulations based on the finite difference time domain (FDTD) method are performed. In this case, a match of 96.70% between the analytical and FDTD results has been obtained. The analytical predictions are in good agreement with other experimental work (99.99% match). These results validate the proposed analytical modelling for the fast design of optical cylindrical biosensors. It is shown that by extending the proposed two-layer resonator analysis scheme, it is possible to study a three-layer cylindrical resonator structure as well. Moreover, by this method, fast sensitivity optimization in cylindrical resonator-based biosensors has been possible. Sensitivity of the WGM resonances is analyzed as a function of the structural parameters of the cylindrical resonators. Based on the results, fourth radial order WGMs, with a resonator radius of 50 μm, display the highest bulk refractive index sensitivity, 41.50 nm/RIU.
Formative Research on the Simplifying Conditions Method (SCM) for Task Analysis and Sequencing.
ERIC Educational Resources Information Center
Kim, YoungHwan; Reigluth, Charles M.
The Simplifying Conditions Method (SCM) is a set of guidelines for task analysis and sequencing of instructional content under the Elaboration Theory (ET). This article introduces the fundamentals of SCM and presents the findings from a formative research study on SCM. It was conducted in two distinct phases: design and instruction. In the first…
A Simplified Method for Tissue Engineering Skeletal Muscle Organoids in Vitro
NASA Technical Reports Server (NTRS)
Shansky, Janet; DelTatto, Michael; Chromiak, Joseph; Vandenburgh, Herman
1996-01-01
Tissue-engineered three dimensional skeletal muscle organ-like structures have been formed in vitro from primary myoblasts by several different techniques. This report describes a simplified method for generating large numbers of muscle organoids from either primary embryonic avian or neonatal rodent myoblasts, which avoids the requirements for stretching and other mechanical stimulation.
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
NASA Astrophysics Data System (ADS)
Fiorentini, Raffaele; Kremer, Kurt; Potestio, Raffaello; Fogarty, Aoife C.
2017-06-01
The calculation of free energy differences is a crucial step in the characterization and understanding of the physical properties of biological molecules. In the development of efficient methods to compute these quantities, a promising strategy is that of employing a dual-resolution representation of the solvent, specifically using an accurate model in the proximity of a molecule of interest and a simplified description elsewhere. One such concurrent multi-resolution simulation method is the Adaptive Resolution Scheme (AdResS), in which particles smoothly change their resolution on-the-fly as they move between different subregions. Before using this approach in the context of free energy calculations, however, it is necessary to make sure that the dual-resolution treatment of the solvent does not cause undesired effects on the computed quantities. Here, we show how AdResS can be used to calculate solvation free energies of small polar solutes using Thermodynamic Integration (TI). We discuss how the potential-energy-based TI approach combines with the force-based AdResS methodology, in which no global Hamiltonian is defined. The AdResS free energy values agree with those calculated from fully atomistic simulations to within a fraction of kBT. This is true even for small atomistic regions whose size is on the order of the correlation length, or when the properties of the coarse-grained region are extremely different from those of the atomistic region. These accurate free energy calculations are possible because AdResS allows the sampling of solvation shell configurations which are equivalent to those of fully atomistic simulations. The results of the present work thus demonstrate the viability of the use of adaptive resolution simulation methods to perform free energy calculations and pave the way for large-scale applications where a substantial computational gain can be attained.
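The thermodynamic integration step referred to above reduces, in post-processing, to a quadrature over the coupling parameter, ΔF = ∫₀¹ ⟨∂U/∂λ⟩_λ dλ. The sketch below shows that quadrature only, with fabricated ⟨dU/dλ⟩ values at six λ windows; it says nothing about the AdResS sampling itself.

```python
import numpy as np

def ti_free_energy(lambdas, mean_dudl):
    """Thermodynamic integration: integrate the ensemble average of
    dU/dlambda over the coupling parameter with the trapezoidal rule."""
    return np.trapz(mean_dudl, lambdas)

# Hypothetical <dU/dlambda> values (kJ/mol) from simulations at 6 windows
lambdas = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
mean_dudl = np.array([-62.0, -48.5, -35.0, -21.0, -9.5, -1.0])
print(ti_free_energy(lambdas, mean_dudl))    # solvation free energy estimate
```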
Code of Federal Regulations, 2010 CFR
2010-07-01
... employee pensions-IRS Form 5305-SEP. 2520.104-48 Section 2520.104-48 Labor Regulations Relating to Labor... compliance for model simplified employee pensions—IRS Form 5305-SEP. Under the authority of section 110 of... Security Act of 1974 in the case of a simplified employee pension (SEP) described in section 408(k) of the...
Chi, Wei-Jie; Li, Quan-Song; Li, Ze-Sheng
2016-03-21
Perovskite solar cells (PSCs) with organic small molecules as hole transport materials (HTMs) have attracted considerable attention due to their power conversion efficiencies as high as 20%. In the present work, three new spiro-type hole transport materials with spiro-cores, i.e. Spiro-F1, Spiro-F2 and Spiro-F3, are investigated by using density functional theory combined with the Marcus theory and Einstein relation. Based on the calculated and experimental highest occupied molecular orbital (HOMO) levels of 30 reference molecules, an empirical equation, which can predict the HOMO levels of hole transport materials accurately, is proposed. Moreover, a simplified method, in which the hole transport pathways are simplified to be one-dimensional, is presented and adopted to qualitatively compare the molecular hole mobilities. The calculated results show that the perovskite solar cells with the new hole transport materials can have higher open-circuit voltages due to the lower HOMO levels of Spiro-F1 (-5.31 eV), Spiro-F2 (-5.42 eV) and Spiro-F3 (-5.10 eV) compared with that of Spiro-OMeTAD (-5.09 eV). Furthermore, the hole mobilities of Spiro-F1 (1.75 × 10(-2) cm(2) V(-1) s(-1)) and Spiro-F3 (7.59 × 10(-3) cm(2) V(-1) s(-1)) are 3.1 and 1.4 times that of Spiro-OMeTAD (5.65 × 10(-3) cm(2) V(-1) s(-1)) respectively, due to small reorganization energies and large transfer integrals. Interestingly, the stability properties of Spiro-F1 and Spiro-F2 are shown to be comparable to that of Spiro-OMeTAD, and the dimers of Spiro-F2 and Spiro-F3 possess better stability than that of Spiro-OMeTAD. Taking into consideration the appropriate HOMO level, improved hole mobility and enhanced stability, Spiro-F1 and Spiro-F3 may become the most promising alternatives to Spiro-OMeTAD. The present work offers a new design strategy and reliable calculation methods towards the development of excellent organic small molecules as HTMs for highly efficient and stable PSCs.
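The combination of Marcus theory and the Einstein relation used in this kind of screening can be illustrated as below. The coupling, reorganization energy, and hop distance are placeholder values, and the one-dimensional hopping assumption mirrors only the simplified pathway idea mentioned in the abstract, not the authors' actual transfer integrals or geometries.

```python
import numpy as np

HBAR = 1.054571817e-34      # J s
KB = 1.380649e-23           # J/K
E_CHARGE = 1.602176634e-19  # C

def marcus_rate(coupling_eV, reorg_eV, temperature=300.0):
    """Marcus hopping rate (1/s) for a self-exchange step (dG = 0)."""
    V = coupling_eV * E_CHARGE
    lam = reorg_eV * E_CHARGE
    kt = KB * temperature
    return (2 * np.pi / HBAR) * V ** 2 / np.sqrt(4 * np.pi * lam * kt) \
        * np.exp(-lam / (4 * kt))

def hole_mobility_1d(rate, hop_distance_m, temperature=300.0):
    """One-dimensional hopping: D = r^2 k / 2, then the Einstein relation
    mu = e D / (kB T), returned in cm^2 V^-1 s^-1."""
    D = 0.5 * hop_distance_m ** 2 * rate          # m^2/s
    mu_si = E_CHARGE * D / (KB * temperature)     # m^2 V^-1 s^-1
    return mu_si * 1e4

# Placeholder values: 10 meV coupling, 0.4 eV reorganization, 1 nm hop
k = marcus_rate(coupling_eV=0.010, reorg_eV=0.40)
print(k, hole_mobility_1d(k, hop_distance_m=1.0e-9))
```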
Traino, A C; Marcatili, S; Avigo, C; Sollini, M; Erba, P A; Mariani, G
2013-04-01
Nonuniform activity within the target lesions and the critical organs constitutes an important limitation for dosimetric estimates in patients treated with tumor-seeking radiopharmaceuticals. The tumor control probability and the normal tissue complication probability are affected by the distribution of the radionuclide in the treated organ/tissue. In this paper, a straightforward method for calculating the absorbed dose at the voxel level is described. This new method takes into account a nonuniform activity distribution in the target/organ. The new method is based on the macroscopic S-values (i.e., the S-values calculated for the various organs, as defined in the MIRD approach), on the definition of the number of voxels, and on the raw-count 3D array, corrected for attenuation, scatter, and collimator resolution, in the lesion/organ considered. Starting from these parameters, the only mathematical operation required is to multiply the 3D array by a scalar value, thus avoiding all the complex operations involving the 3D arrays. A comparison with the MIRD approach, fully described in the MIRD Pamphlet No. 17, using S-values at the voxel level, showed a good agreement between the two methods for ¹³¹I and for ⁹⁰Y. Voxel dosimetry is becoming more and more important when performing therapy with tumor-seeking radiopharmaceuticals. The method presented here does not require calculating the S-values at the voxel level, and thus bypasses the mathematical problems linked to the convolution of 3D arrays and to the voxel size. In the paper, the results obtained with this new simplified method as well as the possibility of using it for other radionuclides commonly employed in therapy are discussed. The possibility of using the correct density value of the tissue/organs involved is also discussed.
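A minimal sketch of the scalar-rescaling idea: the organ-level MIRD dose is redistributed over the voxels in proportion to the corrected counts, so the whole 3D operation is one multiplication by a scalar. The normalization below is an assumption modeled on the description; the paper's exact factor may differ.

```python
import numpy as np

def voxel_dose_map(counts_3d, organ_S_value, cumulated_activity):
    """Redistribute the organ-level MIRD dose over voxels in proportion to counts.

    counts_3d          : corrected raw-count 3D array (attenuation, scatter, resolution)
    organ_S_value      : macroscopic MIRD S-value of the organ (Gy per Bq*s)
    cumulated_activity : cumulated activity in the organ (Bq*s)
    """
    mean_dose = cumulated_activity * organ_S_value   # organ-level MIRD mean dose
    scale = mean_dose / counts_3d.mean()             # the single scalar factor
    return scale * counts_3d                         # voxel dose map (Gy)

# Toy example with a random 16 x 16 x 16 count array and invented parameters
rng = np.random.default_rng(0)
counts = rng.poisson(100, size=(16, 16, 16)).astype(float)
dose = voxel_dose_map(counts, organ_S_value=2.0e-13, cumulated_activity=5.0e12)
print(f"mean dose {dose.mean():.3f} Gy, max dose {dose.max():.3f} Gy")
```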
Greffier, J; Van Ngoc Ty, C; Bonniaud, G; Moliner, G; Ledermann, B; Schmutz, L; Cornillet, L; Cayla, G; Beregi, J P; Pereira, F
2017-06-01
To compare the use of a dose mapping software to Gafchromic film measurement for a simplified peak skin dose (PSD) estimation in interventional cardiology procedures. The study was conducted on a total of 40 cardiac procedures (20 complex coronary angioplasties of chronic total occlusion (CTO) and 20 coronary angiography and coronary angioplasty (CA-PTCA) procedures) performed between January 2014 and December 2015. The PSD measurement (PSD_Film) was obtained by placing XR-RV3 Gafchromic film under the patient's back for each procedure. The PSD (PSD_em.dose) was computed with the software em.dose©. The calculation was performed on the dose metrics collected from the private dose report of each procedure. Two calculation methods (method A: fluoroscopic kerma spread equally over the cine acquisitions; method B: fluoroscopic kerma added to the one cine air-kerma acquisition that contributes to the PSD) were used to calculate the fluoroscopic dose contribution, as fluoroscopic data were not recorded in our interventional room. Statistical analyses were carried out to compare PSD_Film and PSD_em.dose. The PSD_Film median (1st quartile; 3rd quartile) was 0.251 (0.190; 0.336) Gy for CA-PTCA and 1.453 (0.767; 2.011) Gy for CTO. For method A, PSD_em.dose was 0.248 (0.182; 0.369) Gy for CA-PTCA and 1.601 (0.892; 2.178) Gy for CTO; for method B it was 0.267 (0.223; 0.446) Gy and 1.75 (0.912; 2.584) Gy, respectively. For the two methods, the correlation between PSD_Film and PSD_em.dose was strong. For all cardiology procedures investigated, the mean deviation between PSD_Film and PSD_em.dose was 3.4 ± 21.1% for method A and 17.3 ± 23.9% for method B. The dose mapping software is convenient for calculating peak skin dose in interventional cardiology. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
A simplified model of the source channel of the Leksell GammaKnife® tested with PENELOPE
NASA Astrophysics Data System (ADS)
Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel
2004-06-01
Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the output helmet collimators at (x, y, z = 236 mm) show strong correlations between ρ = (x² + y²)^(1/2) and their polar angle θ, on one side, and between tan⁻¹(y/x) and their azimuthal angle φ, on the other. This enables us to propose a simplified model which treats the full source channel as a mathematical collimator. This simplified model produces doses in good agreement with those found for the full geometry. In the region of maximal dose, the relative differences between both calculations are within 3%, for the 18 and 14 mm helmets, and 10%, for the 8 and 4 mm ones. Besides, the simplified model permits a strong reduction (larger than a factor 15) in the computational time.
Influence of mass transfer resistance on overall nitrate removal rate in upflow sludge bed reactors.
Ting, Wen-Huei; Huang, Ju-Sheng
2006-09-01
A kinetic model with intrinsic reaction kinetics and a simplified model with apparent reaction kinetics for denitrification in upflow sludge bed (USB) reactors were proposed. USB-reactor performance data with and without sludge wasting were also obtained for model verification. An independent batch study showed that the apparent kinetic constant k' did not differ from the intrinsic k, but the apparent Ks' was significantly larger than the intrinsic Ks, suggesting that the intra-granule mass transfer resistance can be modeled by changes in Ks. Calculations of the overall effectiveness factor, Thiele modulus, and Biot number combined with parametric sensitivity analysis showed that the influence of internal mass transfer resistance on the overall nitrate removal rate in USB reactors is more significant than that of the external mass transfer resistance. The simulated residual nitrate concentrations using the simplified model were in good agreement with the experimental data; the simulated results using the simplified model were also close to those using the kinetic model. Accordingly, the simplified model adequately described the overall nitrate removal rate and can be used for process design.
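For readers who want to reproduce the dimensionless-group argument, the snippet below evaluates the Thiele modulus, internal effectiveness factor, and Biot number for a spherical granule, assuming first-order kinetics as a simplification of the intrinsic kinetics; the numerical values are illustrative only and are not the paper's data.

```python
import numpy as np

def thiele_modulus_sphere(R, k1, De):
    """Generalized Thiele modulus for a first-order reaction in a spherical granule."""
    return (R / 3.0) * np.sqrt(k1 / De)

def effectiveness_factor_sphere(phi):
    """Internal effectiveness factor for a first-order reaction in a sphere."""
    return (1.0 / phi) * (1.0 / np.tanh(3.0 * phi) - 1.0 / (3.0 * phi))

def biot_number(kL, R, De):
    """Mass-transfer Biot number: external film transfer vs. intra-granule diffusion."""
    return kL * R / De

# Illustrative values: 1 mm granule (R = 0.5 mm), k1 = 1e-3 1/s, De = 5e-10 m^2/s
R, k1, De, kL = 0.5e-3, 1.0e-3, 5.0e-10, 1.0e-5
phi = thiele_modulus_sphere(R, k1, De)
print(f"phi = {phi:.2f}, eta = {effectiveness_factor_sphere(phi):.2f}, "
      f"Bi = {biot_number(kL, R, De):.1f}")
```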
SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps
NASA Astrophysics Data System (ADS)
Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang
2018-06-01
SModelS is an automatized tool for the interpretation of simplified model results from the LHC. It allows one to decompose models of new physics obeying a Z2 symmetry into simplified model components, and to compare these against a large database of experimental results. The first release of SModelS, v1.0, used only cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps allow one to combine contributions to the same signal region from different simplified models. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on the topology coverage, an extended database of experimental results as well as major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are dealt with in parallel. Detailed instructions for code usage are also provided.
NASA Astrophysics Data System (ADS)
Yao, Weiguang; Merchant, Thomas E.; Farr, Jonathan B.
2016-10-01
The lateral homogeneity assumption is used in most analytical algorithms for proton dose, such as the pencil-beam algorithms and our simplified analytical random walk model. To improve the dose calculation in the distal fall-off region in heterogeneous media, we analyzed primary proton fluence near heterogeneous media and propose to calculate the lateral fluence with voxel-specific Gaussian distributions. The lateral fluence from a beamlet is no longer expressed by a single Gaussian for all the lateral voxels, but by a specific Gaussian for each lateral voxel. The voxel-specific Gaussian for the beamlet of interest is calculated by re-initializing the fluence deviation on an effective surface where the proton energies of the beamlet of interest and the beamlet passing the voxel are the same. The dose improvement from the correction scheme was demonstrated by the dose distributions in two sets of heterogeneous phantoms consisting of cortical bone, lung, and water and by evaluating distributions in example patients with a head-and-neck tumor and metal spinal implants. The dose distributions from Monte Carlo simulations were used as the reference. The correction scheme effectively improved the dose calculation accuracy in the distal fall-off region and increased the gamma test pass rate. The extra computation for the correction was about 20% of that for the original algorithm but is dependent upon patient geometry.
Møller, Pål; Clark, Neal; Mæhle, Lovise
2011-05-01
A method for SImplified rapid Segregation Analysis (SISA) to assess penetrance and expression of genetic variants in pedigrees of any complexity is presented. For this purpose, the probability of recombination between the variant and the gene is taken to be zero. An assumption is that the variant of undetermined significance (VUS) is introduced into the family once only. If so, all family members between two members demonstrated to carry the VUS are obligate carriers. Probabilities for cosegregation of disease and VUS by chance, penetrance, and expression may be calculated. SISA return values do not include person identifiers and need no explicit informed consent. There will be no ethical complications in submitting SISA return values to central databases. Values for several families may be combined. Values for a family may be updated by the contributor. SISA is used to consider penetrance whenever sequencing demonstrates a VUS in the known cancer-predisposing genes. Any family structure at hand in a genetic clinic may be used. One may include an extended lineage in a family through demonstrating the same VUS in a distant relative, and thereby identifying all obligate carriers in between. Such extension is a way to escape selection bias by expanding the families outside the clusters used to select them. © 2011 Wiley-Liss, Inc.
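Under the stated assumptions (no recombination, single introduction of the variant), the chance-cosegregation probability reduces to a power of one half over the informative meioses. The one-liner below is a simplified reading of that argument, not the published SISA formula.

```python
def cosegregation_probability_by_chance(informative_meioses):
    """Probability that the variant and the disease cosegregate purely by chance,
    assuming each informative meiosis transmits the variant with probability 1/2
    (a simplified reading; the published SISA method may count meioses differently)."""
    return 0.5 ** informative_meioses

# Example: variant confirmed in 5 affected relatives beyond the index case
print(cosegregation_probability_by_chance(5))  # 0.03125
```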
Stephan, Peter; Schmid, Christina; Freckmann, Guido; Pleus, Stefan; Haug, Cornelia; Müller, Peter
2015-10-09
The measurement accuracy of systems for self-monitoring of blood glucose (SMBG) is usually analyzed by a method comparison in which the analysis results are displayed using difference plots or similar graphs. However, such plots become difficult to comprehend as the number of data points displayed increases. This article introduces a new approach, the rectangle target plot (RTP), which aims to provide a simplified and comprehensible visualization of accuracy data. The RTP is based on ISO 15197 accuracy evaluations of SMBG systems. Two-sided tolerance intervals for normally distributed data are calculated for absolute and relative differences at glucose concentrations <100 mg/dL and ≥100 mg/dL. These tolerance intervals provide an estimator of where a 90% proportion of results is found with a confidence level of 95%. Plotting these tolerance intervals generates a rectangle whose center indicates the systematic measurement difference of the investigated system relative to the comparison method. The size of the rectangle depends on the measurement variability. The RTP provides a means of displaying measurement accuracy data in a simple and comprehensible manner. The visualization is simplified by reducing the displayed information from typically 200 data points to just 1 rectangle. Furthermore, this allows data for several systems or several lots from 1 system to be displayed clearly and concisely in a single graph. © 2015 Diabetes Technology Society.
Simplified aerodynamic analysis of the cyclogiro rotating wing system
NASA Technical Reports Server (NTRS)
Wheatley, John B
1930-01-01
A simplified aerodynamic theory of the cyclogiro rotating wing is presented herein. In addition, examples have been calculated showing the effect on the rotor characteristics of varying the design parameters of the rotor. A performance prediction, on the basis of the theory here developed, is appended, showing the performance to be expected of a machine employing this system of sustentation. The aerodynamic principles of the cyclogiro are sound; hovering flight, vertical climb, and a reasonable forward speed may be obtained with a normal expenditure of power. Autorotation in a gliding descent is available in the event of a power-plant failure.
Brudnik, Katarzyna; Twarda, Maria; Sarzyński, Dariusz; Jodkowski, Jerzy T
2013-10-01
Ab initio calculations at the G3 level were used in a theoretical description of the kinetics and mechanism of the chlorine abstraction reactions from mono-, di-, tri- and tetra-chloromethane by chlorine atoms. The calculated profiles of the potential energy surface of the reaction systems show that the mechanism of the studied reactions is complex and the Cl-abstraction proceeds via the formation of intermediate complexes. The multi-step reaction mechanism consists of two elementary steps in the case of CCl₄ + Cl, and three for the other reactions. Rate constants were calculated using the theoretical method based on the RRKM theory and the simplified version of the statistical adiabatic channel model. The temperature dependencies of the calculated rate constants can be expressed, in the temperature range of 200-3,000 K, as [Formula: see text]. The rate constants for the reverse reactions CH₃/CH₂Cl/CHCl₂/CCl₃ + Cl₂ were calculated via the equilibrium constants derived theoretically. The kinetic equations [Formula: see text] allow a very good description of the reaction kinetics. The derived expressions are a substantial supplement to the kinetic data necessary to describe and model the complex gas-phase reactions of importance in combustion and atmospheric chemistry.
DATMAN: A reliability data analysis program using Bayesian updating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Becker, M.; Feltus, M.A.
1996-12-31
Preventive maintenance (PM) techniques focus on the prevention of failures, in particular, system components that are important to plant functions. Reliability-centered maintenance (RCM) improves on the PM techniques by introducing a set of guidelines by which to evaluate the system functions. It also minimizes intrusive maintenance, labor, and equipment downtime without sacrificing system performance when its function is essential for plant safety. Both the PM and RCM approaches require that system reliability data be updated as more component failures and operation time are acquired. Systems reliability and the likelihood of component failures can be calculated by Bayesian statistical methods, which can update these data. The DATMAN computer code has been developed at Penn State to simplify the Bayesian analysis by performing tedious calculations needed for RCM reliability analysis. DATMAN reads data for updating, fits a distribution that best fits the data, and calculates component reliability. DATMAN provides a user-friendly interface menu that allows the user to choose from several common prior and posterior distributions, insert new failure data, and visually select the distribution that matches the data most accurately.
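The kind of Bayesian updating DATMAN automates can be illustrated with a conjugate gamma-Poisson model for a component failure rate; the prior parameters and data below are invented, and the program's actual distributions and interface are not reproduced here.

```python
from scipy import stats

def update_failure_rate(prior_alpha, prior_beta, n_failures, exposure_time):
    """Conjugate Bayesian update of a component failure rate (failures per hour).

    Prior:     lambda ~ Gamma(alpha, beta)   (beta in hours)
    Data:      n_failures observed over exposure_time hours (Poisson likelihood)
    Posterior: Gamma(alpha + n_failures, beta + exposure_time)
    """
    post_alpha = prior_alpha + n_failures
    post_beta = prior_beta + exposure_time
    mean_rate = post_alpha / post_beta
    lower, upper = stats.gamma.ppf([0.05, 0.95], post_alpha, scale=1.0 / post_beta)
    return mean_rate, (lower, upper)

# Example: vague prior (alpha = 0.5, beta = 100 h), 2 failures in 5000 h of operation
print(update_failure_rate(0.5, 100.0, 2, 5000.0))
```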
Achieving accuracy in first-principles calculations at extreme temperature and pressure
NASA Astrophysics Data System (ADS)
Mattsson, Ann; Wills, John
2013-06-01
First-principles calculations are increasingly used to provide EOS data at pressures and temperatures where experimental data is difficult or impossible to obtain. The lack of experimental data, however, also precludes validation of the calculations in those regimes. Factors influencing the accuracy of first-principles data include theoretical approximations, and computational approximations used in implementing and solving the underlying equations. The first category includes approximate exchange-correlation functionals and wave equations simplifying the Dirac equation. In the second category are, e.g., basis completeness and pseudo-potentials. While the first category is extremely hard to assess without experimental data, inaccuracies of the second type should be well controlled. We are using two rather different electronic structure methods (VASP and RSPt) to make explicit the requirements for accuracy of the second type. We will discuss the VASP Projector Augmented Wave potentials, with examples for Li and Mo. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Amplitudes of doping striations: comparison of numerical calculations and analytical approaches
NASA Astrophysics Data System (ADS)
Jung, T.; Müller, G.
1997-02-01
Transient, axisymmetric numerical calculations of the heat and species transport including convection were performed for a simplified vertical gradient freeze (Bridgman) process with bottom seeding for GaAs. Periodical oscillations were superimposed onto the transient heater temperature profile. The amplitudes of the resulting oscillations of the growth rate and the dopant concentration (striations) in the growing crystals are compared with the predictions of analytical models.
Intensity and absorbed-power distribution in a cylindrical solar-pumped dye laser
NASA Technical Reports Server (NTRS)
Williams, M. D.
1984-01-01
The internal intensity and absorbed-power distribution of a simplified hypothetical dye laser of cylindrical geometry is calculated. Total absorbed power is also calculated and compared with laboratory measurements of lasing-threshold energy deposition in a dye cell to determine the suitability of solar radiation as a pump source or, alternatively, what modifications, if any, are necessary to the hypothetical system for solar pumping.
Doing the math: A simple approach to topical timolol dosing for infantile hemangiomas.
Dalla Costa, Renata; Prindaville, Brea; Wiss, Karen
2018-03-01
Topical timolol maleate has recently gained popularity as a treatment for superficial infantile hemangiomas, but calculating a safe dose of timolol can be time consuming, which may limit the medication's use in fast-paced clinical environments. This report offers a simplified calculation of the maximum daily safe dosage as 1 drop of medication per kilogram of body weight. © 2018 Wiley Periodicals, Inc.
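The rule reduces to trivial arithmetic, sketched here for completeness; rounding down to whole drops is an added assumption, not part of the report.

```python
def max_daily_timolol_drops(weight_kg):
    """Simplified rule from the report: at most one drop of topical timolol
    per kilogram of body weight per day (rounding down is an added assumption)."""
    return int(weight_kg)

print(max_daily_timolol_drops(6.4))  # e.g. a 6.4 kg infant -> at most 6 drops per day
```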
Modeling of Continuum Manipulators Using Pythagorean Hodograph Curves.
Singh, Inderjeet; Amara, Yacine; Melingui, Achille; Mani Pathak, Pushparaj; Merzouki, Rochdi
2018-05-10
Research on continuum manipulators is developing rapidly in the context of bionic robotics because of their many advantages over conventional rigid manipulators. Due to their soft structure, they have inherent flexibility, which makes controlling them with high performance a considerable challenge. Before elaborating a control strategy for such robots, it is essential first to reconstruct the behavior of the robot by developing an approximate behavioral model. This model can be kinematic or dynamic, depending on the operating conditions of the robot itself. Kinematically, two types of modeling methods exist to describe the robot behavior: quantitative methods, which are model-based, and qualitative methods, which are learning-based. In kinematic modeling of continuum manipulators, the assumption of constant curvature is often adopted to simplify the model formulation. In this work, a quantitative modeling method is proposed, based on Pythagorean hodograph (PH) curves. The aim is to obtain a three-dimensional reconstruction of the shape of the continuum manipulator with variable curvature, allowing the calculation of its inverse kinematic model (IKM). The PH-based kinematic modeling of continuum manipulators performs considerably better than other kinematic modeling methods regarding position accuracy, shape reconstruction, and time/cost of the model calculation, for two cases: free-load manipulation and variable-load manipulation. This modeling method is applied to the compact bionic handling assistant (CBHA) manipulator for validation. The results are compared with other IKMs developed for the CBHA manipulator.
NASA Technical Reports Server (NTRS)
Morris, C. E. K., Jr.
1981-01-01
Each cycle of the flight profile consists of climb while the vehicle is tracked and powered by a microwave beam, followed by gliding flight back to a minimum altitude. Parameter variations were used to define the effects of changes in the characteristics of the airplane aerodynamics, the power transmission systems, the propulsion system, and winds. Results show that wind effects limit the reduction of wing loading and the increase of lift coefficient, two effective ways to obtain longer range and endurance for each flight cycle. Calculated climb performance showed strong sensitivity to some power and propulsion parameters. A simplified method of computing gliding endurance was developed.
Kahnert, Michael; Nousiainen, Timo; Lindqvist, Hannakaisa; Ebert, Martin
2012-04-23
Light scattering by light absorbing carbon (LAC) aggregates encapsulated into sulfate shells is computed by use of the discrete dipole method. Computations are performed for a UV, visible, and IR wavelength, different particle sizes, and volume fractions. Reference computations are compared to three classes of simplified model particles that have been proposed for climate modeling purposes. Neither model matches the reference results sufficiently well. Remarkably, more realistic core-shell geometries fall behind homogeneous mixture models. An extended model based on a core-shell-shell geometry is proposed and tested. Good agreement is found for total optical cross sections and the asymmetry parameter. © 2012 Optical Society of America
NASA Technical Reports Server (NTRS)
Rao, B. M.; Jones, W. P.
1974-01-01
A general method of predicting airloads is applied to helicopter rotor blades on a full three-dimensional basis using the general theory developed for a rotor blade at the psi = pi/2 position where flutter is most likely to occur. Calculations of aerodynamic coefficients for use in flutter analysis are made for forward and hovering flight with low inflow. The results are compared with values given by two-dimensional strip theory for a rigid rotor hinged at its root. The comparisons indicate the inadequacies of strip theory for airload prediction. One important conclusion drawn from this study is that the curved wake has a substantial effect on the chordwise load distribution.
Measuring the density of a molecular cluster injector via visible emission from an electron beam.
Lundberg, D P; Kaita, R; Majeski, R; Stotler, D P
2010-10-01
A method to measure the density distribution of a dense hydrogen gas jet is presented. A Mach 5.5 nozzle is cooled to 80 K to form a flow capable of molecular cluster formation. A 250 V, 10 mA electron beam collides with the jet and produces Hα emission that is viewed by a fast camera. The high density of the jet, several 10¹⁶ cm⁻³, results in substantial electron depletion, which attenuates the Hα emission. The attenuated emission measurement, combined with a simplified electron-molecule collision model, allows us to determine the molecular density profile via a simple iterative calculation.
Compliance and stress sensitivity of spur gear teeth
NASA Technical Reports Server (NTRS)
Cornell, R. W.
1983-01-01
The magnitude and variation of tooth pair compliance with load position affect the dynamics and loading significantly, and the tooth root stressing per load varies significantly with load position. Therefore, the recently developed time history, interactive, closed form solution for the dynamic tooth loads for both low and high contact ratio spur gears was expanded to include improved and simplified methods for calculating the compliance and stress sensitivity for three involute tooth forms as a function of load position. The compliance analysis incorporates an improved fillet/foundation treatment. The stress sensitivity analysis is a modified version of the Heywood method but with an improvement in the magnitude and location of the peak stress in the fillet. These improved compliance and stress sensitivity analyses are presented along with their evaluation using test, finite element, and analytic transformation results, which showed good agreement.
Computation and analysis of backward ray-tracing in aero-optics flow fields.
Xu, Liang; Xue, Deting; Lv, Xiaoyi
2018-01-08
A backward ray-tracing method is proposed for aero-optics simulation. Different from forward tracing, the backward tracing direction is from the internal sensor to the distant target. Along this direction, the tracing in turn goes through the internal gas region, the aero-optics flow field, and the freestream. The coordinate value, the density, and the refractive index are calculated at each tracing step. A stopping criterion is developed to ensure the tracing stops at the outer edge of the aero-optics flow field. As a demonstration, the analysis is carried out for a typical blunt nosed vehicle. The backward tracing method and stopping criterion greatly simplify the ray-tracing computations in the aero-optics flow field, and they can be extended to our active laser illumination aero-optics study because of the reciprocity principle.
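A minimal stand-in for the stepping-and-stopping logic: march the ray from the sensor outward, sample the density, convert it to refractive index with the Gladstone-Dale relation (an assumption; the paper does not name its conversion), and stop when the local density is within a tolerance of the freestream value. Refraction along the path is neglected in this sketch.

```python
import numpy as np

GLADSTONE_DALE = 2.27e-4  # m^3/kg, approximate value for air (assumption)

def refractive_index(density):
    """Gladstone-Dale relation linking gas density to refractive index."""
    return 1.0 + GLADSTONE_DALE * density

def backward_trace(start, direction, density_field, freestream_density,
                   step=1e-3, tol=0.01, max_steps=100000):
    """March from the internal sensor outward, sampling density and refractive
    index at each step; stop once the local density is within `tol` of the
    freestream value (a simple stand-in for the paper's stopping criterion)."""
    pos = np.asarray(start, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    path = [pos.copy()]
    for _ in range(max_steps):
        pos = pos + step * d
        rho = density_field(pos)
        n = refractive_index(rho)
        path.append(pos.copy())
        if abs(rho - freestream_density) / freestream_density < tol:
            break
    return np.array(path), n

# Toy density field: exponential boundary layer above a wall at z = 0
density_field = lambda p: 1.2 * (1.0 - 0.3 * np.exp(-p[2] / 0.05))
path, n_exit = backward_trace([0.0, 0.0, 0.0], [0.0, 0.3, 1.0], density_field, 1.2)
print(len(path), n_exit)
```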
NASA Astrophysics Data System (ADS)
Thompson, James H.; Apel, Thomas R.
1990-07-01
A technique for modeling microstrip discontinuities is presented which is derived from the transmission line matrix method of solving three-dimensional electromagnetic problems. In this technique the microstrip patch under investigation is divided into an integer number of square and half-square (triangle) subsections. An equivalent lumped-element model is calculated for each subsection. These individual models are then interconnected as dictated by the geometry of the patch. The matrix of lumped elements is then solved using either of two microwave CAD software interfaces with each port properly defined. Closed-form expressions for the lumped-element representation of the individual subsections are presented and experimentally verified through the X-band frequency range. A model demonstrating the use of symmetry and block construction of a circuit element is discussed, along with computer program development and the CAD software interface.
Numeric calculation of unsteady forces over thin pointed wings in sonic flow
NASA Technical Reports Server (NTRS)
Kimble, K. R.; Wu, J. M.
1975-01-01
A fast and reasonably accurate numerical procedure is proposed for the solution of a simplified unsteady transonic equation. The approach described takes into account many of the effects of the steady flow field. The resulting accuracy is within a few per cent and can be carried out on a computer in less than one minute per case (one frequency and one mode of oscillation). The problem concerns a rigid pointed wing which performs harmonic pitching oscillations of small amplitude in a steady uniform transonic flow. Wake influence is ignored and shocks must be weak. It is shown that the method is more flexible than the transonic box method proposed by Rodemich and Andrew (1965) in that it can easily account for variable local Mach number and rather arbitrary planform so long as the basic assumptions are fulfilled.
Interference method for obtaining the potential flow past an arbitrary cascade of airfoils
NASA Technical Reports Server (NTRS)
Katzoff, S; Finn, Robert S; Laurence, James C
1947-01-01
A procedure is presented for obtaining the pressure distribution on an arbitrary airfoil section in cascade in a two-dimensional, incompressible, and nonviscous flow. The method considers directly the influence on a given airfoil of the rest of the cascade and evaluates this interference by an iterative process, which appeared to converge rapidly in the cases tried (about unit solidity, stagger angles of 0 and 45 degrees). Two variations of the basic interference calculations are described. One, which is accurate enough for most purposes, involves the substitution of sources, sinks, and vortices for the interfering airfoils; the other, which may be desirable for the final approximation, involves a contour integration. The computations are simplified by the use of a chart presented by Betz in a related paper. Illustrative examples are included.
NASA Technical Reports Server (NTRS)
Emrich, Bill
2006-01-01
A simple method of estimating vehicle parameters appropriate for interplanetary travel can provide a useful tool for evaluating the suitability of particular propulsion systems to various space missions. Although detailed mission analyses for interplanetary travel can be quite complex, it is possible to derive fairly simple correlations which will provide reasonable trip time estimates to the planets. In the present work, it is assumed that a constant thrust propulsion system propels a spacecraft on a round trip mission having equidistant outbound and inbound legs in which the spacecraft accelerates during the first portion of each leg of the journey and decelerates during the last portion of each leg of the journey. Comparisons are made with numerical calculations from low thrust trajectory codes to estimate the range of applicability of the simplified correlations.
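One common back-of-the-envelope form of such a correlation, shown here only to illustrate the accelerate-then-decelerate profile, treats the vehicle mass as constant and splits each leg into equal acceleration and deceleration halves; the report's actual correlations may differ.

```python
import math

def round_trip_time_days(distance_m, thrust_N, mass_kg):
    """Constant-thrust, constant-mass estimate for a round trip with two equal legs.

    Each leg: accelerate over the first half of the distance, decelerate over the
    second half, so the one-way time is t_leg = 2*sqrt(D/a) with a = F/m.
    """
    a = thrust_N / mass_kg
    t_leg = 2.0 * math.sqrt(distance_m / a)
    return 2.0 * t_leg / 86400.0

# Illustrative example: 0.52 AU one-way distance, 100 kN thrust, 500 t vehicle
AU = 1.496e11
print(f"{round_trip_time_days(0.52 * AU, 1.0e5, 5.0e5):.0f} days")
```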
Fault Diagnostics for Turbo-Shaft Engine Sensors Based on a Simplified On-Board Model
Lu, Feng; Huang, Jinquan; Xing, Yaodong
2012-01-01
Combining a simplified on-board turbo-shaft model with sensor fault diagnostic logic, a model-based sensor fault diagnosis method is proposed. The existing fault diagnosis method for turbo-shaft engine key sensors is mainly based on a dual-redundancy technique, which cannot resolve some situations for lack of a deciding judgment. The simplified on-board model provides the analytical third channel against which the dual-channel measurements are compared, whereas additional hardware redundancy would increase structural complexity and weight. The simplified turbo-shaft model contains the gas generator model and the power turbine model with loads; it is built up via the dynamic parameters method. Sensor fault detection and diagnosis (FDD) logic is designed, and two types of sensor failures, step faults and drift faults, are simulated. When the discrepancy among the triplex channels exceeds a tolerance level, the fault diagnosis logic determines the cause of the difference. Through this approach, the sensor fault diagnosis system achieves the objectives of anomaly detection, sensor fault diagnosis, and redundancy recovery. Finally, experiments on this method are carried out on a turbo-shaft engine, and two types of faults under different channel combinations are presented. The experimental results show that the proposed method for sensor fault diagnostics is efficient. PMID:23112645
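The voting idea behind the analytical third channel can be sketched as below; thresholds, filtering, and the recovery logic in the real engine control are of course more involved, and the tolerance value is illustrative.

```python
def diagnose_sensor(channel_a, channel_b, model_estimate, tolerance):
    """Triplex-style fault isolation: the on-board model supplies the analytical
    third channel against which the two hardware channels are compared."""
    dev_a = abs(channel_a - model_estimate)
    dev_b = abs(channel_b - model_estimate)
    dev_ab = abs(channel_a - channel_b)

    if dev_ab <= tolerance:
        return "no fault", 0.5 * (channel_a + channel_b)
    if dev_a <= tolerance < dev_b:
        return "channel B faulty", channel_a          # redundancy recovery with A
    if dev_b <= tolerance < dev_a:
        return "channel A faulty", channel_b          # redundancy recovery with B
    return "model/sensor mismatch", model_estimate    # ambiguous case

print(diagnose_sensor(101.0, 108.5, 100.8, tolerance=2.0))
```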
Andrés, Axel; Rosés, Martí; Bosch, Elisabeth
2014-11-28
In previous work, a two-parameter model to predict chromatographic retention of ionizable analytes in gradient mode was proposed. However, the procedure required preliminary experimental work to obtain a suitable description of the pKa change with the mobile phase composition. In the present study this preliminary experimental work has been simplified. The analyte pKa values have been calculated through equations whose coefficients vary depending on the functional group. This new approach also required further simplifications regarding the retention of the fully neutral and fully ionized species. After the simplifications were applied, new prediction values were obtained and compared with the previously acquired experimental data. The simplified model gave good predictions while saving a significant amount of time and resources. Copyright © 2014 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Ravichandran, K.; Philominathan, P.
2009-03-01
Highly crystalline and transparent cadmium sulphide films were fabricated at relatively low temperature by employing an inexpensive, simplified spray technique using a perfume atomizer (generally used for cosmetics). The structural, surface morphological and optical properties of the films were studied and compared with those of films prepared by conventional spray pyrolysis (using air as carrier gas) and by chemical bath deposition. The films deposited by the simplified spray have a preferred orientation along the (1 0 1) plane. The lattice parameters were calculated as a = 4.138 Å and c = 6.718 Å, which agree well with those obtained from the other two techniques and also with the standard data. The optical transmittance in the visible range and the optical band gap were found to be 85% and 2.43 eV, respectively. The structural and optical properties of the films fabricated by the simplified spray are found to be desirable for opto-electronic applications.
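The reported lattice parameters follow from the standard hexagonal d-spacing relation; the snippet below checks the (1 0 1) spacing implied by a = 4.138 Å and c = 6.718 Å and shows one possible way to invert two reflections for a and c (the paper does not state which reflections were actually used).

```python
import math

def d_spacing_hexagonal(h, k, l, a, c):
    """Interplanar spacing for a hexagonal lattice:
    1/d^2 = (4/3)*(h^2 + h*k + k^2)/a^2 + l^2/c^2."""
    inv_d2 = (4.0 / 3.0) * (h * h + h * k + k * k) / a**2 + l * l / c**2
    return 1.0 / math.sqrt(inv_d2)

def lattice_from_two_reflections(d_100, d_002):
    """Solve a and c from the (1 0 0) and (0 0 2) d-spacings (one possible choice)."""
    a = d_100 * math.sqrt(4.0 / 3.0)
    c = 2.0 * d_002
    return a, c

# Consistency check against the reported CdS values a = 4.138 A, c = 6.718 A
a, c = 4.138, 6.718
print(f"d(101) = {d_spacing_hexagonal(1, 0, 1, a, c):.3f} A")
print(lattice_from_two_reflections(d_spacing_hexagonal(1, 0, 0, a, c),
                                   d_spacing_hexagonal(0, 0, 2, a, c)))
```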
NASA Astrophysics Data System (ADS)
Zhou, Shiyuan; Sun, Haoyu; Xu, Chunguang; Cao, Xiandong; Cui, Liming; Xiao, Dingguo
2015-03-01
The echo signal energy is directly affected by the eccentricity or angle of the incident sound beam in the detection of inner longitudinal cracks in thick-walled pipes. A method for analyzing the relationship between echo signal energy and the incident eccentricity is put forward, which can be used to estimate echo signal energy when testing inner-wall longitudinal cracks of a pipe, using shear waves mode-converted from the incident compression wave with the water-immersion method, by making a two-dimensional integration of the "energy coefficient" in both the circumferential and axial directions. The calculation model is established for the cylindrical sound beam case, in which the refraction and reflection energy coefficients of different rays in the whole sound beam are considered to be different. The echo signal energy is calculated for a particular cylindrical sound beam testing different pipes: a beam with a diameter of 0.5 inch (12.7 mm) testing a φ279.4 mm pipe and a φ79.4 mm one. As a comparison, both the results of the two-dimensional integration and of the one-dimensional (circumferential direction) integration are listed, and only the former agrees well with experimental results. The estimation method proves to be valid and shows that the usual approach of simplifying the sound beam to a single ray for estimating echo signal energy and choosing the optimal incident eccentricity is not entirely appropriate.
Parareal in time 3D numerical solver for the LWR Benchmark neutron diffusion transient model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baudron, Anne-Marie, E-mail: anne-marie.baudron@cea.fr; CEA-DRN/DMT/SERMA, CEN-Saclay, 91191 Gif sur Yvette Cedex; Lautard, Jean-Jacques, E-mail: jean-jacques.lautard@cea.fr
2014-12-15
In this paper we present a time-parallel algorithm for the 3D neutron calculation of a transient model in a nuclear reactor core. The neutron calculation consists of numerically solving the time-dependent diffusion approximation equation, which is a simplified transport equation. The numerical resolution is done with a finite element method based on a tetrahedral meshing of the computational domain, representing the reactor core, and time discretization is achieved using a θ-scheme. The transient model features moving control rods during the time of the reaction. Therefore, cross-sections (piecewise constant) are taken into account by interpolations with respect to the velocity of the control rods. Parallelism across time is achieved by applying the parareal-in-time algorithm to the problem at hand. This parallel method is a predictor-corrector scheme that iteratively combines the use of two kinds of numerical propagators, one coarse and one fine. Our method is made efficient by means of a coarse solver defined with a large time step and a fixed-control-rod-position model, while the fine propagator is assumed to be a high-order numerical approximation of the full model. The parallel implementation of our method provides good scalability of the algorithm. Numerical results show the efficiency of the parareal method on a large light water reactor transient model corresponding to the Langenbuch–Maurer–Werner benchmark.
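A generic sketch of the parareal predictor-corrector iteration on a toy scalar ODE, with a one-step Euler coarse propagator and a many-step Euler fine propagator; the reactor-specific solvers and the finite-element spatial discretization are not represented.

```python
import numpy as np

def parareal(y0, t0, t1, n_windows, coarse, fine, n_iter):
    """Generic parareal iteration.  coarse(y, ta, tb) and fine(y, ta, tb) propagate
    the state from ta to tb; in a real implementation the fine propagations inside
    each iteration run in parallel over the time windows."""
    times = np.linspace(t0, t1, n_windows + 1)
    U = [np.array(y0, dtype=float)]
    for n in range(n_windows):                        # initial coarse prediction
        U.append(coarse(U[n], times[n], times[n + 1]))

    for _ in range(n_iter):
        fine_jumps = [fine(U[n], times[n], times[n + 1]) for n in range(n_windows)]
        coarse_old = [coarse(U[n], times[n], times[n + 1]) for n in range(n_windows)]
        U_new = [U[0]]
        for n in range(n_windows):
            # predictor-corrector update: new coarse + (fine - old coarse)
            U_new.append(coarse(U_new[n], times[n], times[n + 1])
                         + fine_jumps[n] - coarse_old[n])
        U = U_new
    return times, U

# Toy problem: dy/dt = -2y; coarse = one Euler step, fine = many Euler steps
def euler(y, ta, tb, steps):
    dt = (tb - ta) / steps
    for _ in range(steps):
        y = y + dt * (-2.0 * y)
    return y

times, U = parareal(1.0, 0.0, 2.0, n_windows=8,
                    coarse=lambda y, a, b: euler(y, a, b, 1),
                    fine=lambda y, a, b: euler(y, a, b, 200),
                    n_iter=3)
print(U[-1], np.exp(-4.0))  # iterates converge toward the fine/exact solution
```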
Computer programs simplify optical system analysis
NASA Technical Reports Server (NTRS)
1965-01-01
The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.
Impact-parameter dependence of the energy loss of fast molecular clusters in hydrogen
NASA Astrophysics Data System (ADS)
Fadanelli, R. C.; Grande, P. L.; Schiwietz, G.
2008-03-01
The electronic energy loss of molecular clusters as a function of impact parameter is far less understood than atomic energy losses. For instance, there are no analytical expressions for the energy loss as a function of impact parameter for cluster ions. In this work, we describe two procedures to evaluate the combined energy loss of molecules: Ab initio calculations within the semiclassical approximation and the coupled-channels method using atomic orbitals; and simplified models for the electronic cluster energy loss as a function of the impact parameter, namely the molecular perturbative convolution approximation (MPCA, an extension of the corresponding atomic model PCA) and the molecular unitary convolution approximation (MUCA, a molecular extension of the previous unitary convolution approximation UCA). In this work, an improved ansatz for MPCA is proposed, extending its validity for very compact clusters. For the simplified models, the physical inputs are the oscillators strengths of the target atoms and the target-electron density. The results from these models applied to an atomic hydrogen target yield remarkable agreement with their corresponding ab initio counterparts for different angles between cluster axis and velocity direction at specific energies of 150 and 300 keV/u.
Na, Hyuntae; Song, Guang
2015-07-01
In a recent work we developed a method for deriving accurate simplified models that capture the essentials of conventional all-atom NMA and identified two best simplified models: ssNMA and eANM, both of which have a significantly higher correlation with NMA in mean square fluctuation calculations than existing elastic network models such as ANM and ANMr2, a variant of ANM that uses the inverse of the squared separation distances as spring constants. Here, we examine closely how the performance of these elastic network models depends on various factors, namely, the presence of hydrogen atoms in the model, the quality of input structures, and the effect of crystal packing. The study reveals the strengths and limitations of these models. Our results indicate that ssNMA and eANM are the best fine-grained elastic network models but their performance is sensitive to the quality of input structures. When the quality of input structures is poor, ANMr2 is a good alternative for computing mean-square fluctuations while ANM model is a good alternative for obtaining normal modes. © 2015 Wiley Periodicals, Inc.
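For orientation, the sketch below builds a conventional ANM Hessian on Cα-like coordinates and extracts per-residue mean-square fluctuations from its pseudo-inverse; setting power=2 gives the ANMr2-style 1/r² spring weighting mentioned above. The ssNMA and eANM models themselves are not reproduced here, and the toy coordinates are invented.

```python
import numpy as np

def anm_msf(coords, cutoff=15.0, gamma=1.0, power=0):
    """Mean-square fluctuations from an elastic network model on an N x 3 array
    of coordinates.  power=0: uniform springs (ANM); power=2: 1/r^2 weighting
    (the ANMr2 variant)."""
    n = len(coords)
    hess = np.zeros((3 * n, 3 * n))
    for i in range(n):
        for j in range(i + 1, n):
            rij = coords[j] - coords[i]
            d = np.linalg.norm(rij)
            if d > cutoff:
                continue
            k = gamma / d**power
            block = -k * np.outer(rij, rij) / d**2        # off-diagonal super-element
            hess[3*i:3*i+3, 3*j:3*j+3] = block
            hess[3*j:3*j+3, 3*i:3*i+3] = block
            hess[3*i:3*i+3, 3*i:3*i+3] -= block           # diagonal super-elements
            hess[3*j:3*j+3, 3*j:3*j+3] -= block
    hinv = np.linalg.pinv(hess)                           # drops the six rigid-body modes
    diag = np.diag(hinv).reshape(n, 3)
    return diag.sum(axis=1)                               # per-residue MSF (up to kT/gamma)

# Toy structure: 20 pseudo-residues on a helix
t = np.linspace(0, 4 * np.pi, 20)
coords = np.column_stack([4 * np.cos(t), 4 * np.sin(t), 1.5 * t])
print(anm_msf(coords).round(3))
```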
The statistical average of optical properties for alumina particle cluster in aircraft plume
NASA Astrophysics Data System (ADS)
Li, Jingying; Bai, Lu; Wu, Zhensen; Guo, Lixin
2018-04-01
We establish a model in which the monomer radius and the monomer number of alumina particle clusters in the plume follow lognormal distributions. Based on the Multi-Sphere T-Matrix (MSTM) theory, we provide a method for finding the statistical average of the optical properties of alumina particle clusters in the plume, analyze the effect of different distributions and different detection wavelengths on this statistical average, and compare the statistically averaged optical properties under the alumina particle cluster model established in this study with those under three simplified alumina particle models. The calculation results show that the monomer number of an alumina particle cluster and its size distribution have a considerable effect on its statistically averaged optical properties. The statistically averaged optical properties of alumina particle clusters at common detection wavelengths exhibit obvious differences, and these differences have a great effect on modeling the IR and UV radiation properties of the plume. Compared with the three simplified models, the alumina particle cluster model presented herein features both higher extinction and scattering efficiencies. Therefore, an accurate description of the scattering properties of alumina particles in the aircraft plume is of great significance for the study of plume radiation properties.
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified, and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction in computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
NASA Astrophysics Data System (ADS)
Rubin, M. B.; Cardiff, P.
2017-11-01
Simo (Comput Methods Appl Mech Eng 66:199-219, 1988) proposed an evolution equation for elastic deformation together with a constitutive equation for inelastic deformation rate in plasticity. The numerical algorithm (Simo in Comput Methods Appl Mech Eng 68:1-31, 1988) for determining elastic distortional deformation was simple. However, the proposed inelastic deformation rate caused plastic compaction. The corrected formulation (Simo in Comput Methods Appl Mech Eng 99:61-112, 1992) preserves isochoric plasticity but the numerical integration algorithm is complicated and needs special methods for calculation of the exponential map of a tensor. Alternatively, an evolution equation for elastic distortional deformation can be proposed directly with a simplified constitutive equation for inelastic distortional deformation rate. This has the advantage that the physics of inelastic distortional deformation is separated from that of dilatation. The example of finite deformation J2 plasticity with linear isotropic hardening is used to demonstrate the simplicity of the numerical algorithm.
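As a point of reference for the isochoric-plasticity discussion, the snippet below implements the classical small-strain radial-return update for J2 plasticity with linear isotropic hardening; it is a textbook scheme shown for illustration, not Simo's finite-deformation algorithm or the simplified evolution equation proposed here.

```python
import numpy as np

def radial_return_j2(eps, eps_p_old, alpha_old, mu, kappa, sigma_y, H):
    """One step of the small-strain radial-return algorithm for J2 plasticity
    with linear isotropic hardening.

    eps       : total strain tensor (3x3, symmetric)
    eps_p_old : plastic strain tensor from the previous step
    alpha_old : accumulated plastic strain
    mu, kappa : shear and bulk moduli; sigma_y : yield stress; H : hardening modulus
    """
    I = np.eye(3)
    eps_e = eps - eps_p_old
    vol = np.trace(eps_e)
    dev_e = eps_e - vol / 3.0 * I
    s_trial = 2.0 * mu * dev_e                          # deviatoric trial stress
    f_trial = np.linalg.norm(s_trial) - np.sqrt(2.0/3.0) * (sigma_y + H * alpha_old)

    if f_trial <= 0.0:                                   # elastic step
        return s_trial + kappa * vol * I, eps_p_old, alpha_old

    dgamma = f_trial / (2.0 * mu + 2.0/3.0 * H)          # plastic multiplier
    n = s_trial / np.linalg.norm(s_trial)                # return direction
    s = s_trial - 2.0 * mu * dgamma * n
    eps_p = eps_p_old + dgamma * n
    alpha = alpha_old + np.sqrt(2.0/3.0) * dgamma
    return s + kappa * vol * I, eps_p, alpha

# Illustrative step (moduli in MPa): plastic flow is triggered and remains isochoric
eps = np.diag([0.01, -0.003, -0.003])
stress, eps_p, alpha = radial_return_j2(eps, np.zeros((3, 3)), 0.0,
                                        mu=80e3, kappa=160e3, sigma_y=250.0, H=1e3)
print(np.round(stress, 1), round(alpha, 5))
```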
Simplified realistic human head model for simulating Tumor Treating Fields (TTFields).
Wenger, Cornelia; Bomzon, Ze'ev; Salvador, Ricardo; Basser, Peter J; Miranda, Pedro C
2016-08-01
Tumor Treating Fields (TTFields) are alternating electric fields in the intermediate frequency range (100-300 kHz) and of low intensity (1-3 V/cm). TTFields are an anti-mitotic treatment against solid tumors, which is approved for Glioblastoma Multiforme (GBM) patients. These electric fields are induced non-invasively by transducer arrays placed directly on the patient's scalp. Cell culture experiments showed that treatment efficacy is dependent on the induced field intensity. In clinical practice, software called NovoTal™ uses head measurements to estimate the optimal array placement to maximize the electric field delivery to the tumor. Computational studies predict an increase in the tumor's electric field strength when adapting transducer arrays to its location. Ideally, a personalized head model could be created for each patient, to calculate the electric field distribution for the specific situation. Thus, the optimal transducer layout could be inferred from field calculation rather than distance measurements. Nonetheless, creating realistic head models of patients is time-consuming and often needs user interaction, because automated image segmentation is prone to failure. This study presents a first approach to creating simplified head models consisting of convex hulls of the tissue layers. The model is able to account for anisotropic conductivity in the cortical tissues by using a tensor representation estimated from Diffusion Tensor Imaging. The induced electric field distribution is compared in the simplified and realistic head models. The average field intensities in the brain and tumor are generally slightly higher in the realistic head model, with a maximal ratio of 114% for a simplified model with reasonable layer thicknesses. Thus, the present pipeline is a fast and efficient means towards personalized head models, with less complexity involved in characterizing tissue interfaces, while enabling accurate predictions of the electric field distribution.
The Use of the Nelder-Mead Method in Determining Projection Parameters for Globe Photographs
NASA Astrophysics Data System (ADS)
Gede, M.
2009-04-01
A photo of a terrestrial or celestial globe can be handled as a map. The only hard issue is its projection: the so-called Tilted Perspective Projection which, if the optical axis of the photo intersects the globe's centre, is simplified to the Vertical Near-Side Perspective Projection. When georeferencing such a photo, the exact parameters of the projection are also needed. These parameters depend on the position of the viewpoint of the camera. Several hundred globe photos had to be georeferenced during the Virtual Globes Museum project, which made it necessary to automate the calculation of the projection parameters. The author developed a program for this task which uses the Nelder-Mead method to find the optimum parameters when a set of control points is given as input. The Nelder-Mead method is a numerical algorithm for minimizing a function in a many-dimensional space. The function in the present application is the average error of the control points calculated from the actual values of the parameters. The parameters are the geographical coordinates of the projection centre, the image coordinates of the same point, the rotation of the projection, the height of the perspective point and the scale of the photo (calculated in pixels/km). The program reads Global Mapper's Ground Control Point (.GCP) file format as input and creates projection description files (.PRJ) for the same software. The initial values of the geographical coordinates of the projection centre are calculated as the average of the control points, while the other parameters are set to experimental values which represent the most common circumstances of taking a globe photograph. The algorithm runs until the change of the parameters falls below a pre-defined limit. The minimum search can be refined by using the previous result parameter set as new initial values. This paper introduces the calculation mechanism and examples of its usage. Other possible uses of the method are also discussed.
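The parameter search itself is generic and can be sketched with scipy's Nelder-Mead, including the restart from the previous optimum described above; the projection function, control points, and initial guesses below are placeholders, and the Vertical Near-Side Perspective formulas are not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

def mean_control_point_error(params, control_points, project):
    """Objective used by the search: average reprojection error (pixels) of the
    ground control points for a candidate parameter set."""
    err = [np.hypot(*(project(lon, lat, params) - np.array([px, py])))
           for lon, lat, px, py in control_points]
    return float(np.mean(err))

def fit_projection(control_points, project, initial_params):
    """Nelder-Mead search for the projection parameters, restarted once from the
    previous optimum to refine the result.  `project` is a user-supplied function
    mapping (lon, lat, params) to image coordinates."""
    res = minimize(mean_control_point_error, initial_params,
                   args=(control_points, project), method="Nelder-Mead",
                   options={"xatol": 1e-6, "fatol": 1e-6, "maxiter": 20000})
    res = minimize(mean_control_point_error, res.x,
                   args=(control_points, project), method="Nelder-Mead")
    return res.x, res.fun

# Toy usage with a trivial linear 'projection' standing in for the real one
toy_project = lambda lon, lat, p: np.array([p[0] + p[2] * lon, p[1] - p[2] * lat])
cps = [(10.0, 20.0, 310.0, 160.0), (-30.0, 5.0, 230.0, 190.0), (50.0, -40.0, 390.0, 280.0)]
params, err = fit_projection(cps, toy_project, initial_params=[300.0, 200.0, 1.0])
print(params.round(3), round(err, 4))
```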
Wang, Ning; Chen, Jiajun; Zhang, Kun; Chen, Mingming; Jia, Hongzhi
2017-11-21
As thermoelectric coolers (TECs) have become highly integrated in high-heat-flux chips and high-power devices, the parasitic effect between component layers has become increasingly obvious. In this paper, a cyclic correction method for the TEC model is proposed using the equivalent parameters of the proposed simplified model, which were refined from the intrinsic parameters and the parasitic thermal conductance. The results show that the simplified model agrees well with the data of a commercial TEC under different heat loads. Furthermore, the temperature difference of the simplified model is closer to the experimental data than that of the conventional model and the model containing parasitic thermal conductance at large heat loads. The average errors in the temperature difference between the proposed simplified model and the experimental data are no more than 1.6 K, and the error is only 0.13 K when the absorbed heat power Q_c is equal to 80% of the maximum achievable absorbed heat power Q_max. The proposed method and model provide a more accurate solution for integrated TECs that are small in size.
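The lumped equations behind such models are the conventional thermoelectric relations; the sketch below evaluates the cold-side heat balance, with the understanding that in the simplified model the intrinsic Seebeck coefficient, resistance, and conductance are replaced by equivalent parameters folding in the parasitic conductance. All numbers are illustrative.

```python
def tec_cold_side_heat(I, T_c, T_h, seebeck, resistance, conductance):
    """Conventional lumped TEC model: absorbed heat at the cold side
    Q_c = alpha*I*T_c - 0.5*I^2*R - K*(T_h - T_c)."""
    return seebeck * I * T_c - 0.5 * I**2 * resistance - conductance * (T_h - T_c)

# Illustrative module-level (equivalent) parameters
alpha, R, K = 0.053, 2.0, 0.7   # V/K, ohm, W/K
print(f"Q_c = {tec_cold_side_heat(3.0, 285.0, 310.0, alpha, R, K):.1f} W")
```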
Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed the derivative for several chaotic ODEs and PDEs. The development in this paper aims to simplify the Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.
Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cary, Robert E.
2015-12-08
Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.
Highly simplified lateral flow-based nucleic acid sample preparation and passive fluid flow control
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cary, Robert B.
Highly simplified lateral flow chromatographic nucleic acid sample preparation methods, devices, and integrated systems are provided for the efficient concentration of trace samples and the removal of nucleic acid amplification inhibitors. Methods for capturing and reducing inhibitors of nucleic acid amplification reactions, such as humic acid, using polyvinylpyrrolidone treated elements of the lateral flow device are also provided. Further provided are passive fluid control methods and systems for use in lateral flow assays.
Calculating the Responses of Self-Powered Radiation Detectors.
NASA Astrophysics Data System (ADS)
Thornton, D. A.
Available from UMI in association with The British Library. The aim of this research is to review and develop the theoretical understanding of the responses of Self -Powered Radiation Detectors (SPDs) in Pressurized Water Reactors (PWRs). Two very different models are considered. A simple analytic model of the responses of SPDs to neutrons and gamma radiation is presented. It is a development of the work of several previous authors and has been incorporated into a computer program (called GENSPD), the predictions of which have been compared with experimental and theoretical results reported in the literature. Generally, the comparisons show reasonable consistency; where there is poor agreement explanations have been sought and presented. Two major limitations of analytic models have been identified; neglect of current generation in insulators and over-simplified electron transport treatments. Both of these are developed in the current work. A second model based on the Explicit Representation of Radiation Sources and Transport (ERRST) is presented and evaluated for several SPDs in a PWR at beginning of life. The model incorporates simulation of the production and subsequent transport of neutrons, gamma rays and electrons, both internal and external to the detector. Neutron fluxes and fuel power ratings have been evaluated with core physics calculations. Neutron interaction rates in assembly and detector materials have been evaluated in lattice calculations employing deterministic transport and diffusion methods. The transport of the reactor gamma radiation has been calculated with Monte Carlo, adjusted diffusion and point-kernel methods. The electron flux associated with the reactor gamma field as well as the internal charge deposition effects of the transport of photons and electrons have been calculated with coupled Monte Carlo calculations of photon and electron transport. The predicted response of a SPD is evaluated as the sum of contributions from individual response mechanisms.
Photographic and drafting techniques simplify method of producing engineering drawings
NASA Technical Reports Server (NTRS)
Provisor, H.
1968-01-01
Combination of photographic and drafting techniques has been developed to simplify the preparation of three dimensional and dimetric engineering drawings. Conventional photographs can be converted to line drawings by making copy negatives on high contrast film.
Simplified method for numerical modeling of fiber lasers.
Shtyrina, O V; Yarutkina, I A; Fedoruk, M P
2014-12-29
A simplified numerical approach to the modeling of dissipative dispersion-managed fiber lasers is examined. We present a new numerical iteration algorithm for finding the periodic solutions of the system of nonlinear ordinary differential equations describing the intra-cavity dynamics of the dissipative soliton characteristics in dispersion-managed fiber lasers. We demonstrate that results obtained using the simplified model are in good agreement with full numerical modeling based on the corresponding partial differential equations.
Development of a global aerosol model using a two-dimensional sectional method: 1. Model design
NASA Astrophysics Data System (ADS)
Matsui, H.
2017-08-01
This study develops an aerosol module, the Aerosol Two-dimensional bin module for foRmation and Aging Simulation version 2 (ATRAS2), and implements the module into a global climate model, Community Atmosphere Model. The ATRAS2 module uses a two-dimensional (2-D) sectional representation with 12 size bins for particles from 1 nm to 10 μm in dry diameter and 8 black carbon (BC) mixing state bins. The module can explicitly calculate the enhancement of absorption and cloud condensation nuclei activity of BC-containing particles by aging processes. The ATRAS2 module is an extension of a 2-D sectional aerosol module ATRAS used in our previous studies within a framework of a regional three-dimensional model. Compared with ATRAS, the computational cost of the aerosol module is reduced by more than a factor of 10 by simplifying the treatment of aerosol processes and 2-D sectional representation, while maintaining good accuracy of aerosol parameters in the simulations. Aerosol processes are simplified for condensation of sulfate, ammonium, and nitrate, organic aerosol formation, coagulation, and new particle formation processes, and box model simulations show that these simplifications do not substantially change the predicted aerosol number and mass concentrations and their mixing states. The 2-D sectional representation is simplified (the number of advected species is reduced) primarily by the treatment of chemical compositions using two interactive bin representations. The simplifications do not change the accuracy of global aerosol simulations. In part 2, comparisons with measurements and the results focused on aerosol processes such as BC aging processes are shown.
Temperature Histories in Ceramic-Insulated Heat-Sink Nozzle
NASA Technical Reports Server (NTRS)
Ciepluch, Carl C.
1960-01-01
Temperature histories were calculated for a composite nozzle wall by a simplified numerical integration calculation procedure. These calculations indicated that there is a unique ratio of insulation and metal heat-sink thickness that will minimize total wall thickness for a given operating condition and required running time. The optimum insulation and metal thickness will vary throughout the nozzle as a result of the variation in heat-transfer rate. The use of low chamber pressure results in a significant increase in the maximum running time of a given weight nozzle. Experimentally measured wall temperatures were lower than those calculated. This was due in part to the assumption of one-dimensional or slab heat flow in the calculation procedure.
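A minimal sketch of this type of heat-sink sizing calculation is given below. It lumps the metal wall into a single node behind a quasi-steady insulation layer, which is a further simplification of the one-dimensional slab integration described above; all property and operating values are illustrative assumptions, not taken from the report.

```python
# Lumped heat-sink temperature history behind a ceramic insulation layer.
# Simplification assumed here: insulation heat capacity neglected, metal treated
# as one node; gas-side film and insulation act as series thermal resistances.

def metal_temperature_history(t_end_s, dt=0.01,
                              t_gas=3000.0,            # gas temperature, K (illustrative)
                              h_gas=2000.0,            # gas-side film coefficient, W/m^2-K
                              k_ins=1.5, L_ins=0.003,  # insulation conductivity (W/m-K), thickness (m)
                              rho_m=7800.0, c_m=500.0, L_m=0.01,  # metal density, cp, thickness
                              t0=300.0, t_limit=1100.0):
    """Integrate dT_m/dt = q / (rho_m * c_m * L_m) until t_end_s or the metal limit."""
    resistance = 1.0 / h_gas + L_ins / k_ins   # series thermal resistance, m^2-K/W
    t_metal, history = t0, []
    for step in range(int(t_end_s / dt)):
        q = (t_gas - t_metal) / resistance     # heat flux into the wall, W/m^2
        t_metal += q * dt / (rho_m * c_m * L_m)
        history.append((step * dt, t_metal))
        if t_metal >= t_limit:                 # "maximum running time" criterion
            break
    return history

hist = metal_temperature_history(t_end_s=60.0)
print(f"time = {hist[-1][0]:.1f} s, metal temperature = {hist[-1][1]:.0f} K")
```

Thicker insulation lowers the heat flux but adds wall thickness, while a thicker metal layer adds heat capacity; sweeping L_ins and L_m at fixed total thickness reproduces, in miniature, the trade-off behind the unique optimum ratio noted in the abstract.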
ERIC Educational Resources Information Center
School Science Review, 1980
1980-01-01
Describes equipment, experiments, and activities useful in middle school science instruction, including demonstrating how strong paper can be, the inclined plane illusion, a simplified diet calculation, a magnetic levitator, science with soap bubbles, a model motor and dynamo, and a pocketed sorter for safety glasses. (SK)
Bertilson, Bo C; Brosjö, Eva; Billing, Hans; Strender, Lars-Erik
2010-09-10
Detection of nerve involvement originating in the spine is a primary concern in the assessment of spine symptoms. Magnetic resonance imaging (MRI) has become the diagnostic method of choice for this detection. However, the agreement between MRI and other diagnostic methods for detecting nerve involvement has not been fully evaluated. The aim of this diagnostic study was to evaluate the agreement between nerve involvement visible in MRI and findings of nerve involvement detected in a structured physical examination and a simplified pain drawing. Sixty-one consecutive patients referred for MRI of the lumbar spine were - without knowledge of MRI findings - assessed for nerve involvement with a simplified pain drawing and a structured physical examination. Agreement between findings was calculated as overall agreement, the p value for McNemar's exact test, specificity, sensitivity, and positive and negative predictive values. MRI-visible nerve involvement was significantly less common than, and showed weak agreement with, physical examination and pain drawing findings of nerve involvement in corresponding body segments. In spine segment L4-5, where most findings of nerve involvement were detected, the mean sensitivity of MRI-visible nerve involvement to a positive neurological test in the physical examination ranged from 16-37%. The mean specificity of MRI-visible nerve involvement in the same segment ranged from 61-77%. Positive and negative predictive values of MRI-visible nerve involvement in segment L4-5 ranged from 22-78% and 28-56% respectively. In patients with long-standing nerve root symptoms referred for lumbar MRI, MRI-visible nerve involvement significantly underestimates the presence of nerve involvement detected by a physical examination and a pain drawing. A structured physical examination and a simplified pain drawing may reveal that many patients with "MRI-invisible" lumbar symptoms need treatment aimed at nerve involvement. Factors other than present MRI-visible nerve involvement may be responsible for findings of nerve involvement in the physical examination and the pain drawing.
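The agreement measures reported in this abstract all follow from a 2x2 cross-tabulation of MRI findings against the reference finding (here, the physical examination). A minimal sketch with hypothetical counts, not the study's data; McNemar's exact test is computed as a binomial test on the discordant pairs via scipy:

```python
from scipy.stats import binomtest

# Hypothetical 2x2 counts (MRI vs. physical examination), not the study's data.
tp, fp, fn, tn = 12, 9, 25, 15   # MRI+/exam+, MRI+/exam-, MRI-/exam+, MRI-/exam-

total = tp + fp + fn + tn
overall_agreement = (tp + tn) / total     # proportion of concordant assessments
sensitivity = tp / (tp + fn)              # MRI-visible involvement given a positive exam
specificity = tn / (tn + fp)              # MRI-negative given a negative exam
ppv = tp / (tp + fp)                      # positive predictive value of MRI
npv = tn / (tn + fn)                      # negative predictive value of MRI

# McNemar's exact test: two-sided binomial test on the discordant pairs (fp vs. fn).
p_mcnemar = binomtest(fp, fp + fn, 0.5).pvalue

print(f"agreement={overall_agreement:.2f} sens={sensitivity:.2f} spec={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f} McNemar p={p_mcnemar:.4f}")
```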
Agrafiotis, Michalis; Mpliamplias, Dimitrios; Papathanassiou, Maria; Ampatzidou, Fotini; Drossos, Georgios
2018-05-03
To suggest a simplified method for strong ion gap ([SIG]) calculation. To simplify [SIG] calculation, we used the following assumptions: (1) the major determinants of the apparent strong ion difference ([SIDa]) are [Na+], [K+] and [Cl-]; (2) [Ca2+] and [Mg2+] do not contribute significantly to [SIDa] variation and can be replaced by their reference concentrations; (3) physiologically relevant pH variation is of the order of 10^-2, and therefore we can assume a standard value of 7.4. In the new model, [SIDa] is replaced by its adjusted form, i.e. [SIDa,adj] = [Na+] + [K+] - [Cl-] + 6.5, and [SIG] is replaced by the "bicarbonate gap", i.e. [BICgap] = [SIDa,adj] - (0.25·[Albumin]) - (2·[Phosphate]) - [HCO3-]. The model was tested in 224 postoperative cardiac surgical patients. Strong correlations were observed between [SIDa,adj] and [SIDa] (r = 0.93, p < 0.0001) and between [BICgap] and [SIG] (r = 0.95, p < 0.0001). The mean bias (limits of agreement) of [SIDa,adj] - [SIDa] and of [BICgap] - [SIG] was -0.6 meq/l (-2.7 to 1.5) and 0.2 meq/l (-2 to 2.4), respectively. The intraclass correlation coefficients between [SIDa,adj] and [SIDa] and between [BICgap] and [SIG] were 0.90 and 0.95, respectively. The sensitivities and specificities for the prediction of a [lactate-] > 4 meq/l were 73.4 and 82.3% for a [BICgap] > 12.2 meq/l and 74.5 and 83.1% for a [SIG] > 12 meq/l, respectively. The [BICgap] model shows very good agreement with the [SIG] model while being simpler and easier to apply at the bedside. [BICgap] could be used as an alternative tool for the diagnosis of unmeasured ion acidosis.
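The two working formulas above map directly to a bedside calculation. A minimal sketch with illustrative input values (not study data); the abstract does not state the albumin and phosphate units implied by the 0.25 and 2 coefficients, so the units given in the comments are assumptions:

```python
def sid_a_adj(na, k, cl):
    """Adjusted apparent strong ion difference, meq/l: [Na+] + [K+] - [Cl-] + 6.5."""
    return na + k - cl + 6.5

def bic_gap(na, k, cl, albumin, phosphate, hco3):
    """Bicarbonate gap: [SIDa,adj] - 0.25*[Albumin] - 2*[Phosphate] - [HCO3-]."""
    return sid_a_adj(na, k, cl) - 0.25 * albumin - 2.0 * phosphate - hco3

# Illustrative values only (electrolytes and HCO3- in meq/l; albumin assumed g/l,
# phosphate assumed mmol/l). The abstract's cutoff for suspecting unmeasured ion
# acidosis was [BICgap] > 12.2 meq/l.
gap = bic_gap(na=140, k=4.0, cl=105, albumin=30, phosphate=1.2, hco3=22)
print(f"BIC gap = {gap:.1f} meq/l")
```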
Mathematical Modeling of Electrodynamics Near the Surface of Earth and Planetary Water Worlds
NASA Technical Reports Server (NTRS)
Tyler, Robert H.
2017-01-01
An interesting feature of planetary bodies with hydrospheres is the presence of an electrically conducting shell near the global surface. This conducting shell may typically lie between relatively insulating rock, ice, or atmosphere, creating a strong constraint on the flow of large-scale electric currents. All or parts of the shell may be in fluid motion relative to main components of the rotating planetary magnetic field (as well as the magnetic fields due to external bodies), creating motionally-induced electric currents that would not otherwise be present. As such, one may expect distinguishing features in the types of electrodynamic processes that occur, as well as an opportunity for imposing specialized mathematical methods that efficiently address this class of application. The purpose of this paper is to present and discuss such specialized methods. Specifically, thin-shell approximations for both the electrodynamics and fluid dynamics are combined to derive simplified mathematical formulations describing the behavior of these electric currents as well as their associated electric and magnetic fields. These simplified formulae allow analytical solutions featuring distinct aspects of the thin-shell electrodynamics in idealized cases. A highly efficient numerical method is also presented that is useful for calculations under inhomogeneous parameter distributions. Finally, the advantages as well as limitations in using this mathematical approach are evaluated. This evaluation is presented primarily for the generic case of bodies with water worlds or other thin spherical conducting shells. More specific discussion is given for the case of Earth, but also Europa and other satellites with suspected oceans.
The impact evaluation of soil liquefaction on low-rise building in the Meinong earthquake
NASA Astrophysics Data System (ADS)
Lu, Chih-Chieh; Hwang, Jin-Hung; Hsu, Shang-Yi
2017-08-01
This paper presents major preliminary observations on the liquefaction-induced damage in the Meinong earthquake (ML = 6.4). The severe damage to buildings was centered on Huian and Sanmin Streets in Tainan City, where the sites were fish or farm ponds reclaimed with poor construction quality many decades ago. To better understand the effect of soil liquefaction at these sites, the information provided by 13 in situ Standard Penetration Test boreholes and 5 Cone Penetration Test soundings, together with the PGAs derived from nearby seismographs, was used to conduct a soil liquefaction evaluation by the Seed method (Seed et al. in J Geotech Eng ASCE 111(12):1425-1445, 1985) for the Meinong earthquake. The liquefaction potential index (LPI) was then evaluated accordingly. From the results, it was found that the estimated damage severity was not consistent with the field conditions if the local site effect was not taken into account. To better reflect the site response at such sites, the sites' PGAs in the PGA contour map were multiplied by 1.5 to quantify the amplification effect of the soft geological conditions. In addition, the PGAs obtained from other simple approaches were evaluated for comparison. The effects of fines content and magnitude scaling factor are also discussed in this paper. Finally, several common simplified methods were used to calculate the LPI for the Meinong earthquake in order to evaluate the applicability of these simplified methods.
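For readers unfamiliar with the index, the LPI is conventionally obtained by depth-weighting the factor of safety against liquefaction over the upper 20 m of the profile. The sketch below assumes the common Iwasaki-type formulation and a hypothetical factor-of-safety profile; it is not the paper's SPT/CPT dataset or its exact procedure.

```python
def lpi(depths_m, fs_values, dz=2.0):
    """Liquefaction potential index from a factor-of-safety profile.

    Assumed Iwasaki-type form: LPI = sum of F(z) * w(z) * dz over 0-20 m depth,
    with F = 1 - FS where FS < 1 (else 0) and depth weight w(z) = 10 - 0.5*z.
    depths_m are layer midpoint depths; dz is the (uniform) layer thickness.
    """
    index = 0.0
    for z, fs in zip(depths_m, fs_values):
        if z > 20.0:
            continue
        severity = max(0.0, 1.0 - fs)
        weight = 10.0 - 0.5 * z
        index += severity * weight * dz
    return index

# Hypothetical 2 m layers down to 20 m (not the Tainan borehole data).
depths = [1, 3, 5, 7, 9, 11, 13, 15, 17, 19]
fs = [1.3, 0.8, 0.6, 0.7, 0.9, 1.1, 0.95, 1.2, 1.4, 1.5]
print(f"LPI = {lpi(depths, fs):.1f}")   # higher LPI indicates higher damage potential
```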
One lens optical correlation: application to face recognition.
Jridi, Maher; Napoléon, Thibault; Alfalou, Ayman
2018-03-20
Despite its extensive use, the traditional 4f Vander Lugt correlator optical setup can be further simplified. We propose a lightweight correlation scheme in which the decision is taken in the Fourier plane. For this purpose, the Fourier plane is adapted and used as a decision plane. The offline phase and the decision metric are then re-examined in order to keep a reasonable recognition rate. The benefits of the proposed approach are numerous: (1) it overcomes the constraints related to the use of a second lens; (2) the optical correlation setup is simplified; (3) the multiplication with the correlation filter can be done digitally, which offers higher adaptability according to the application. Moreover, the digital counterpart of the correlation scheme is lightened, since with the proposed scheme we dispense with the inverse Fourier transform (IFT) calculation (i.e., the decision is made directly in the Fourier domain without resorting to the IFT). To assess the performance of the proposed approach, an insight into digital hardware resource savings is provided. The proposed method involves nearly 100 times fewer arithmetic operators. Moreover, from experimental results in the context of face-verification-based correlation, we demonstrate that the proposed scheme provides comparable or better accuracy than the traditional method. One interesting feature of the proposed scheme is that it could greatly outperform the traditional scheme for face identification applications in terms of sensitivity to face orientation. The proposed method is found to be friendly to digital/optical implementation, which facilitates its integration in a very broad range of scenarios.
Jo, Ayami; Kanazawa, Manabu; Sato, Yusuke; Iwaki, Maiko; Akiba, Norihisa; Minakuchi, Shunsuke
2015-08-01
To compare the effect of conventional complete dentures (CDs) fabricated using two different impression methods on patient-reported outcomes in a randomized controlled trial (RCT). A cross-over RCT was performed with edentulous patients who required maxillomandibular CDs. Mandibular CDs were fabricated using two different methods. The conventional method used a custom tray border-moulded with impression compound and a silicone impression material. The simplified method used a stock tray and an alginate impression material. Participants were randomly divided into two groups. The C-S group used the conventional method first, followed by the simplified method; the S-C group followed the reverse order. Adjustment was performed four times. A wash-out period of 1 month was set. The primary outcome was general patient satisfaction, measured using visual analogue scales, and the secondary outcome was oral health-related quality of life, measured using the Japanese version of the Oral Health Impact Profile for edentulous patients (OHIP-EDENT-J) questionnaire. Twenty-four participants completed the trial. With regard to general patient satisfaction, the conventional method was rated significantly more acceptable than the simplified method. No significant differences were observed between the two methods in the OHIP-EDENT-J scores. This study showed that CDs fabricated with the conventional method were rated significantly higher for general patient satisfaction than those fabricated with the simplified method. CDs fabricated with the conventional method, which included a preliminary impression made using alginate in a stock tray and a subsequent final impression made using silicone in a border-moulded custom tray, resulted in higher general patient satisfaction. UMIN000009875. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Ding, Liang; Gao, Haibo; Liu, Zhen; Deng, Zongquan; Liu, Guangjun
2015-12-01
Identifying the mechanical property parameters of planetary soil based on terramechanics models, using in-situ data obtained from autonomous planetary exploration rovers, is both an important scientific goal and essential for control strategy optimization and high-fidelity simulation of rovers. However, identifying all the terrain parameters is a challenging task because of the nonlinear and coupled nature of the functions involved. Three parameter identification methods are presented in this paper to serve different purposes, based on an improved terramechanics model that takes into account the effects of slip, wheel lugs, etc. Parameter sensitivity and coupling of the equations are analyzed, and the parameters are grouped according to their sensitivity to the normal force, resistance moment and drawbar pull. An iterative identification method using the original integral model is developed first. In order to realize real-time identification, the model is then simplified by linearizing the normal and shearing stresses to derive decoupled closed-form analytical equations. Each equation contains one or two groups of soil parameters, making step-by-step identification of all the unknowns feasible. Experiments were performed using six different types of single wheels as well as a four-wheeled rover moving on planetary soil simulant. All the unknown model parameters were identified using the measured data and compared with the values obtained by conventional experiments. It is verified that the proposed iterative identification method provides improved accuracy, making it suitable for scientific studies of soil properties, whereas the step-by-step identification methods based on simplified models require less calculation time, making them more suitable for real-time applications. The models show less than a 10% margin of error compared with the measured results when predicting the interaction forces and moments using the corresponding identified parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grimme, Stefan, E-mail: grimme@thch.uni-bonn.de; Bannwarth, Christoph
2016-08-07
The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well-established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H–Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first benchmarked for vertical excitation energies of open- and closed-shell systems in comparison to other semi-empirical methods and applied to exemplary problems in electronic spectroscopy. As side products of the development, a robust and efficient valence electron TB method for the accurate determination of atomic charges as well as a more accurate calculation scheme of dipole rotatory strengths within the Tamm-Dancoff approximation is proposed.
Statistical Issues for Calculating Reentry Hazards
NASA Technical Reports Server (NTRS)
Matney, Mark; Bacon, John
2016-01-01
A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. This information, combined with information on the expected ground path of the reentry, is used to compute the probability that one or more of the surviving debris fragments might hit a person on the ground and cause one or more casualties. The statistical portion of this analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and about how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper builds on previous IAASS work to re-examine many of these theoretical assumptions, including the mathematical basis for the hazard calculations, and to outline the conditions under which the simplifying assumptions hold. This study also employs empirical and theoretical information to test these assumptions, and makes recommendations on how to improve the accuracy of these calculations in the future.
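As a concrete illustration of the kind of calculation being examined, a commonly used simplification (assumed here; not necessarily the authors' exact formulation) takes the expected number of casualties as the sum, over surviving fragments, of a casualty area times the average population density under the ground track, and converts that expectation into a probability of one or more casualties with a Poisson assumption:

```python
import math

PERSON_AREA_M2 = 0.36   # commonly assumed projected area of a standing person

def casualty_expectation(fragment_areas_m2, pop_density_per_km2):
    """E[casualties] = sum over fragments of casualty area * population density.

    Casualty area per fragment is taken as (sqrt(A_person) + sqrt(A_fragment))^2,
    a standard simplification; pop_density_per_km2 is the average density under
    the predicted ground track.
    """
    rho = pop_density_per_km2 / 1.0e6   # people per square metre
    return sum((math.sqrt(PERSON_AREA_M2) + math.sqrt(a)) ** 2 * rho
               for a in fragment_areas_m2)

def prob_one_or_more(expectation):
    """Poisson-model probability of at least one casualty."""
    return 1.0 - math.exp(-expectation)

# Hypothetical surviving-debris areas (m^2) and an average of 40 people per km^2.
e_c = casualty_expectation([0.5, 0.2, 1.1, 0.05], 40.0)
print(f"E[casualties] = {e_c:.2e}, P(one or more) = {prob_one_or_more(e_c):.2e}")
```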
Motion Planning of Two Stacker Cranes in a Large-Scale Automated Storage/Retrieval System
NASA Astrophysics Data System (ADS)
Kung, Yiheng; Kobayashi, Yoshimasa; Higashi, Toshimitsu; Ota, Jun
We propose a method for reducing the computational time of motion planning for stacker cranes. Most automated storage/retrieval systems (AS/RSs) are equipped with only one stacker crane. However, this is logistically challenging, and greater work efficiency in warehouses, such as those using two stacker cranes, is required. In this paper, a warehouse with two stacker cranes working simultaneously is considered. Unlike in warehouses with only one crane, trajectory planning with two cranes is very difficult: since the two cranes work together, a proper trajectory must be planned to avoid collision. However, verifying collisions is complicated and requires a considerable amount of computational time. As transport tasks in AS/RSs arrive randomly, motion planning cannot be conducted in advance, and planning an appropriate trajectory within a restricted duration is a difficult task. We thereby address the problem of motion planning requiring extensive calculation time. As a solution, we propose a "free-step" to simplify the procedure of collision verification and reduce the computational time. We also propose a method to reschedule the order of collision verification in order to find an appropriate trajectory in less time. With the proposed methods, we reduce the calculation time to less than 1/300 of that achieved in previous research.
Analog Signal Correlating Using an Analog-Based Signal Conditioning Front End
NASA Technical Reports Server (NTRS)
Prokop, Norman; Krasowski, Michael
2013-01-01
This innovation correlates two analog signals by using an analog-based signal-conditioning front end to hard-limit the analog signals, through adaptive thresholding, into binary bit streams, and then performing the correlation using a Hamming "similarity" calculator function embedded in a one-bit digital correlator (OBDC). By converting the analog signal into a bit stream, the calculation of the correlation function is simplified, and fewer hardware resources are needed. This binary representation allows the hardware to move from a DSP, where instructions are performed serially, into digital logic, where calculations can be performed in parallel, greatly speeding up calculations.
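A software sketch of the approach is given below, assuming a moving-average adaptive threshold (the abstract does not specify the front-end circuit): each analog record is hard-limited to a bit stream, and the correlation reduces to a Hamming-style similarity count per lag, which in hardware maps onto parallel XNOR/popcount logic rather than serial DSP multiplies.

```python
import numpy as np

def hard_limit(signal, window=32):
    """One-bit quantization against a moving-average adaptive threshold (assumed scheme)."""
    threshold = np.convolve(signal, np.ones(window) / window, mode="same")
    return (signal > threshold).astype(np.uint8)

def hamming_similarity(bits_a, bits_b):
    """Fraction of matching bits: 1.0 for identical streams, ~0.5 for uncorrelated ones."""
    return float(np.mean(bits_a == bits_b))

def one_bit_correlate(sig_a, sig_b, max_lag=64):
    """Similarity of sig_a against lagged sig_b, emulating the one-bit correlator in software."""
    a, b = hard_limit(sig_a), hard_limit(sig_b)
    n = len(a) - max_lag
    return np.array([hamming_similarity(a[:n], b[lag:lag + n]) for lag in range(max_lag)])

# Synthetic demonstration: the second record is the first delayed by 10 samples plus noise.
rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, 40 * np.pi, 2048)) + 0.3 * rng.standard_normal(2048)
y = np.roll(x, 10) + 0.3 * rng.standard_normal(2048)
scores = one_bit_correlate(x, y)               # peak similarity expected near lag 10
print("best lag:", int(np.argmax(scores)), "similarity:", round(float(scores.max()), 3))
```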
Simplified parent-child formalism for spin-0 and spin-1/2 parents
NASA Astrophysics Data System (ADS)
Butcher, J. B.; Jones, H. F.; Milani, P.
1980-06-01
We develop further the parent-child relation, that is the calculation of the cross-sections and correlations of observed particles, typically charged leptons, arising from the decay of long-lived primarily produced “parent” particles. In the high-momentum regime, when the momenta of parent and child are closely aligned, we show how, for spinless parents, the relation can be simplified by the introduction of “fragmentation” functions derived from the invariant inclusive decay distributions. We extend the formalism to the case of spin-1/2 parents and advocate its application to charm production and decay at the quark level.