Calculating Time-Integral Quantities in Depletion Calculations
Isotalo, Aarno
2016-06-02
A method referred to as tally nuclides is presented for accurately and efficiently calculating the time-step averages and integrals of any quantities that are weighted sums of atomic densities with constant weights during the step. The method allows all such quantities to be calculated simultaneously as a part of a single depletion solution with existing depletion algorithms. Some examples of the results that can be extracted include step-average atomic densities and macroscopic reaction rates, the total number of fissions during the step, and the amount of energy released during the step. Furthermore, the method should be applicable with several depletion algorithms, and the integrals or averages should be calculated with an accuracy comparable to that reached by the selected algorithm for end-of-step atomic densities. The accuracy of the method is demonstrated in depletion calculations using the Chebyshev rational approximation method. Here, we demonstrate how the ability to calculate energy release in depletion calculations can be used to determine the accuracy of the normalization in a constant-power burnup calculation during the calculation without a need for a reference solution.
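A minimal sketch of the tally-nuclide idea, with an invented two-nuclide decay chain and SciPy's generic expm standing in for the paper's CRAM solver: appending one pseudo-nuclide whose production rate is the weighted sum w·N makes the depletion solve return the time integral of that sum alongside the end-of-step densities.

```python
# Hypothetical illustration (not the paper's implementation): augment the
# burnup matrix with a "tally nuclide" that accumulates the integral of w.N.
import numpy as np
from scipy.linalg import expm  # stand-in for a CRAM matrix-exponential solver

# Simple decay chain: N1 -> N2 (lam1), N2 -> sink (lam2); rates are invented.
lam1, lam2 = 1.0e-2, 3.0e-3          # 1/s
A = np.array([[-lam1, 0.0],
              [ lam1, -lam2]])       # dN/dt = A N
w = np.array([lam1, lam2])           # weights: total decay rate (events/s)

# Augmented system: the tally row integrates w . N alongside the solution
A_aug = np.zeros((3, 3))
A_aug[:2, :2] = A
A_aug[2, :2] = w

N0 = np.array([1.0e10, 0.0])
dt = 600.0                           # step length, s
y = expm(A_aug * dt) @ np.append(N0, 0.0)

N_end, integral = y[:2], y[2]        # integral = total decays during the step
print("end-of-step densities:", N_end)
print("decays during step   :", integral)
print("step-average rate    :", integral / dt)
```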
VizieR Online Data Catalog: Thermodynamic quantities of molecular hydrogen (Popovas+, 2016)
NASA Astrophysics Data System (ADS)
Popovas, A.; Jorgensen, U. G.
2016-07-01
New partition functions for equilibrium, normal, and ortho and para hydrogen are calculated and thermodynamic quantities are reported for the temperature range 1-20 000 K. Our results are compared to previous estimates in the literature. The calculations are not limited to the ground electronic state, but include all bound and quasi-bound levels of excited electronic states. Dunham coefficients of these states of H2 are also reported. Reported internal partition functions and thermodynamic quantities in the present work are shown to be more accurate than previously available data. (4 data files).
Piskulich, Zeke A; Mesele, Oluwaseun O; Thompson, Ward H
2017-10-07
General approaches for directly calculating the temperature dependence of dynamical quantities from simulations at a single temperature are presented. The method is demonstrated for self-diffusion and OH reorientation in liquid water. For quantities which possess an activation energy, e.g., the diffusion coefficient and the reorientation time, the results from the direct calculation are in excellent agreement with those obtained from an Arrhenius plot. However, additional information is obtained, including the decomposition of the contributions to the activation energy. These results are discussed along with prospects for additional applications of the direct approach.
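For reference, the conventional Arrhenius route that the direct method is benchmarked against amounts to a linear fit of ln D versus 1/T; a minimal sketch with invented diffusion data (the direct, single-temperature estimator itself is not reproduced here).

```python
# Conventional Arrhenius estimate: slope of ln D vs 1/T gives -Ea/kB.
# All data values are illustrative, not from the paper.
import numpy as np

kB = 8.617333262e-5            # eV/K
T = np.array([280.0, 300.0, 320.0, 340.0])      # K (illustrative)
D = np.array([1.1e-5, 2.3e-5, 4.3e-5, 7.4e-5])  # cm^2/s (illustrative)

slope, _ = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * kB
print(f"activation energy ~ {Ea:.2f} eV")
```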
PQcalc, an Online Calculator for Science Learners
ERIC Educational Resources Information Center
Theis, Karsten
2015-01-01
PQcalc is an online calculator designed to support students in college-level science classes. Unlike a pocket calculator, PQcalc allows students to set up problems within the calculator just as one would on paper. This includes using proper units and naming quantities strategically in a way that helps finding the solution. Results of calculations…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beck, C.A.
1985-04-01
The high-resolution ultraviolet and visible spectra of typical test nuclear detonations up to and including Operation Ivy were analyzed and compared. Topics studied include the types of atomic and molecular material observed (with calculations, in some cases, of the relative quantities involved), the ultraviolet cutoff, and rotational temperatures. Variation of these quantities with the radiochemical yield of the bomb is indicated.
MODTRAN3: Suitability as a flux-divergence code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, G.P.; Chetwynd, J.H.; Wang, J.
1995-04-01
The Moderate Resolution Atmospheric Radiance and Transmittance Model (MODTRAN3) is the developmental version of MODTRAN and MODTRAN2. The Geophysics Directorate, Phillips Laboratory, released a beta version of this model in October 1994. It encompasses all the capabilities of LOWTRAN7, the historic 20 cm⁻¹ resolution (full width at half maximum, FWHM) radiance code, but incorporates a much more sensitive molecular band model with 2 cm⁻¹ resolution. The band model is based directly upon the HITRAN spectral parameters, including both temperature and pressure (line shape) dependencies. Validation against full Voigt line-by-line calculations (e.g., FASCODE) has shown excellent agreement. In addition, simple timing runs demonstrate potential improvement of more than a factor of 100 for a typical 500 cm⁻¹ spectral interval and comparable vertical layering. Not only is MODTRAN an excellent band model for "full path" calculations (that is, radiance and/or transmittance from point A to point B), but it replicates layer-specific quantities to a very high degree of accuracy. Such layer quantities, derived from ratios and differences of longer path MODTRAN calculations from point A to adjacent layer boundaries, can be used to provide inversion algorithm weighting functions or similarly formulated quantities. One of the most exciting new applications is the rapid calculation of reliable IR cooling rates, including species, altitude, and spectral distinctions, as well as the standard spectrally integrated quantities. Comparisons with prior line-by-line cooling rate calculations are excellent, and the techniques can be extended to incorporate global climatologies of both standard and trace atmospheric species.
Grimbergen, T W M; Wiegman, M M
2007-01-01
In order to arrive at recommendations for guidelines on maximum allowable quantities of radioactive material in laboratories, a proposed mathematical model was used for the calculation of transfer fractions for the air pathway. A set of incident scenarios was defined, including spilling, leakage and failure of the fume hood. For these 'common incidents', dose constraints of 1 mSv and 0.1 mSv are proposed in case the operations are being performed in a controlled area and supervised area, respectively. In addition, a dose constraint of 1 microSv is proposed for each operation under regular working conditions. Combining these dose constraints and the transfer fractions calculated with the proposed model, maximum allowable quantities were calculated for different laboratory operations and situations. Provided that the calculated transfer fractions can be experimentally validated and the dose constraints are acceptable, it can be concluded from the results that the dose constraint for incidents is the most restrictive one. For non-volatile materials this approach leads to quantities much larger than commonly accepted. In those cases, the results of the calculations in this study suggest that limitation of the quantity of radioactive material, which can be handled safely, should be based on other considerations than the inhalation risks. Examples of such considerations might be the level of external exposure, uncontrolled spread of radioactive material by surface contamination, emissions in the environment and severe accidents like fire.
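The logic of such maximum-quantity calculations can be sketched as dose = activity handled × transfer fraction × inhalation dose coefficient, inverted for the activity; all numbers below are illustrative assumptions, not the paper's validated parameters.

```python
# Minimal sketch of the dose-constraint logic (not the paper's model):
# if a fraction f of the handled activity Q ends up inhaled, the committed
# dose is  D = Q * f * e_inh,  so  Q_max = D_constraint / (f * e_inh).
# All numbers are illustrative assumptions, not recommended values.
D_constraint = 1.0e-3    # Sv, incident constraint in a controlled area (1 mSv)
f_inhaled = 1.0e-6       # handled-to-inhaled transfer fraction (assumed)
e_inh = 1.1e-8           # Sv/Bq, inhalation dose coefficient (assumed)

Q_max = D_constraint / (f_inhaled * e_inh)
print(f"maximum allowable quantity ~ {Q_max:.2e} Bq")
```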
System for characterizing semiconductor materials and photovoltaic devices through calibration
Sopori, Bhushan L.; Allen, Larry C.; Marshall, Craig; Murphy, Robert C.; Marshall, Todd
1998-01-01
A method and apparatus for measuring characteristics of a piece of material, typically semiconductor materials including photovoltaic devices. The characteristics may include dislocation defect density, grain boundaries, reflectance, external LBIC, internal LBIC, and minority carrier diffusion length. The apparatus includes a light source, an integrating sphere, and a detector communicating with a computer. The measurement or calculation of the characteristics is calibrated to provide accurate, absolute values. The calibration is performed by substituting a standard sample for the piece of material, the sample having a known quantity of one or more of the relevant characteristics. The quantity measured by the system of the relevant characteristic is compared to the known quantity and a calibration constant is created thereby.
System for characterizing semiconductor materials and photovoltaic devices through calibration
Sopori, B.L.; Allen, L.C.; Marshall, C.; Murphy, R.C.; Marshall, T.
1998-05-26
A method and apparatus are disclosed for measuring characteristics of a piece of material, typically semiconductor materials including photovoltaic devices. The characteristics may include dislocation defect density, grain boundaries, reflectance, external LBIC, internal LBIC, and minority carrier diffusion length. The apparatus includes a light source, an integrating sphere, and a detector communicating with a computer. The measurement or calculation of the characteristics is calibrated to provide accurate, absolute values. The calibration is performed by substituting a standard sample for the piece of material, the sample having a known quantity of one or more of the relevant characteristics. The quantity measured by the system of the relevant characteristic is compared to the known quantity and a calibration constant is created thereby. 44 figs.
MODTRAN2: Evolution and applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, G.P.; Chetwynd, J.H.; Kneizys, F.X.
1994-12-31
MODTRAN2 is the most recent version of the Moderate Resolution Atmospheric Radiance and Transmittance Model. It encompasses all the capabilities of LOWTRAN 7, the historic 20 cm⁻¹ resolution (full width at half maximum, FWHM) radiance code, but incorporates a much more sensitive molecular band model with 2 cm⁻¹ resolution. The band model is based directly upon the HITRAN spectral parameters, including both temperature and pressure (line shape) dependencies. Because the band model parameters and their applications to transmittance calculations have been independently developed using equivalent width binning procedures, validation against full Voigt line-by-line calculations is important. Extensive spectral comparisons have shown excellent agreement. In addition, simple timing runs of MODTRAN vs. FASCOD3P show an improvement of more than a factor of 100 for a typical 500 cm⁻¹ spectral interval and comparable vertical layering. It has been previously established that not only is MODTRAN an excellent band model for full path calculations, but it replicates layer-specific quantities to a very high degree of accuracy. Such layer quantities, derived from ratios and differences of longer path MODTRAN calculations from point A to adjacent layer boundaries, can be used to provide inversion algorithm weighting functions or similarly formulated quantities. One of the most exciting new applications is the rapid calculation of reliable IR cooling rates, including species, altitude, and spectral distinctions, as well as the standard integrated quantities. Comparisons with prior line-by-line cooling rate calculations are excellent, and the techniques can be extended to incorporate global climatologies. Enhancements expected to appear in MODTRAN3 relate directly to climate change studies. The addition of SO₂ and NO₂ absorption in the ultraviolet, along with upgraded ozone Chappuis bands in the visible, will also be part of MODTRAN3.
6 CFR 27.203 - Calculating the screening threshold quantity by security issue.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 6 Domestic Security 1 2012-01-01 2012-01-01 false Calculating the screening threshold quantity by security issue. 27.203 Section 27.203 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE... the screening threshold quantity by security issue. (a) General. In calculating whether a facility...
NASA Astrophysics Data System (ADS)
Freitas, Rodrigo; Frolov, Timofey; Asta, Mark
2017-04-01
A theory for the thermodynamic properties of steps on faceted crystalline surfaces is presented. The formalism leads to the definition of step excess quantities, including an excess step stress that is the step analogy of surface stress. The approach is used to develop a relationship between the temperature dependence of the step free energy (γst) and step excess quantities for energy and stress that can be readily calculated by atomistic simulations. We demonstrate the application of this formalism in thermodynamic-integration (TI) calculations of the step free energy, based on molecular-dynamics simulations, considering <110> steps on the {111} surface of a classical potential model for elemental Cu. In this application we employ the Frenkel-Ladd approach to compute the reference value of γst for the TI calculations. Calculated results for excess energy and stress show relatively weak temperature dependencies up to a homologous temperature of approximately 0.6, above which these quantities increase strongly and the step stress becomes more isotropic. From the calculated excess quantities we compute γst over the temperature range from zero up to the melting point (Tm). We find that γst remains finite up to Tm, indicating the absence of a roughening temperature for this {111} surface facet, but decreases by roughly fifty percent from the zero-temperature value. The strongest temperature dependence occurs above homologous temperatures of approximately 0.6, where the step becomes configurationally disordered due to the formation of point defects and appreciable capillary fluctuations.
6 CFR 27.203 - Calculating the screening threshold quantity by security issue.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 6 Domestic Security 1 2014-01-01 2014-01-01 false Calculating the screening threshold quantity by security issue. 27.203 Section 27.203 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical Facility Security Program § 27.203 Calculating the screening threshold quantity by...
6 CFR 27.203 - Calculating the screening threshold quantity by security issue.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 6 Domestic Security 1 2011-01-01 2011-01-01 false Calculating the screening threshold quantity by security issue. 27.203 Section 27.203 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical Facility Security Program § 27.203 Calculating the screening threshold quantity by...
6 CFR 27.203 - Calculating the screening threshold quantity by security issue.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 6 Domestic Security 1 2013-01-01 2013-01-01 false Calculating the screening threshold quantity by security issue. 27.203 Section 27.203 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical Facility Security Program § 27.203 Calculating the screening threshold quantity by...
6 CFR 27.203 - Calculating the screening threshold quantity by security issue.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 6 Domestic Security 1 2010-01-01 2010-01-01 false Calculating the screening threshold quantity by security issue. 27.203 Section 27.203 Domestic Security DEPARTMENT OF HOMELAND SECURITY, OFFICE OF THE SECRETARY CHEMICAL FACILITY ANTI-TERRORISM STANDARDS Chemical Facility Security Program § 27.203 Calculating the screening threshold quantity by...
Characterization of Navy Solid Waste and Collection and Disposal Practices.
1980-01-01
A-7 Calculation of Design Capacity for Sample Cases; A-8 Incineration Plant Capacities Considered for Economic Analysis (table: approximate quantity of refuse generated, plant design capacity, quantity of refuse burned, number of shifts operated, tons/day) ...including a site visit to the 50-ton/day plant in Yokohama, Japan. (2) A preliminary technoeconomic evaluation of a fluidized bed combustor (preceded
Code of Federal Regulations, 2012 CFR
2012-10-01
.... Contracting office includes any contracting office that the acquisition is transferred to, such as another... projected learning or changes in quantity during the sharing period. It is calculated at the time the VECP...
40 CFR 98.184 - Monitoring and QA/QC requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... calendar year. The monthly mass may be determined using plant instruments used for accounting purposes, including either direct measurement of the quantity of the material placed in the unit or by calculations...
40 CFR 98.114 - Monitoring and QA/QC requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... calendar year. The monthly mass may be determined using plant instruments used for accounting purposes, including either direct measurement of the quantity of the material placed in the unit or by calculations...
Recent advances in quantum scattering calculations on polyatomic bimolecular reactions.
Fu, Bina; Shan, Xiao; Zhang, Dong H; Clary, David C
2017-12-11
This review surveys quantum scattering calculations on chemical reactions of polyatomic molecules in the gas phase published in the last ten years. These calculations are useful because they provide highly accurate information on the dynamics of chemical reactions which can be compared in detail with experimental results. They also serve as quantum mechanical benchmarks for testing approximate theories which can more readily be applied to more complicated reactions. This review includes theories for calculating quantities such as rate constants which have many important scientific applications.
Interacting hadron resonance gas model in the K-matrix formalism
NASA Astrophysics Data System (ADS)
Dash, Ashutosh; Samanta, Subhasis; Mohanty, Bedangadas
2018-05-01
An extension of the hadron resonance gas (HRG) model is constructed to include interactions using a relativistic virial expansion of the partition function. The noninteracting part of the expansion contains all the stable baryons and mesons, and the interacting part contains all the higher-mass resonances which decay into two stable hadrons. The virial coefficients are related to the phase shifts, which are calculated using the K-matrix formalism in the present work. We have calculated various thermodynamic quantities like the pressure, energy density, and entropy density of the system. A comparison of thermodynamic quantities with the noninteracting HRG model, calculated using the same number of hadrons, shows that the results of the above formalism are larger. Good agreement between the equation of state calculated in the K-matrix formalism and lattice QCD simulations is observed. Specifically, the lattice QCD calculated interaction measure is well described in our formalism. We have also calculated second-order fluctuations and correlations of conserved charges in the K-matrix formalism. We observe good agreement of second-order fluctuations and the baryon-strangeness correlation with lattice data below the crossover temperature.
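The link the abstract relies on, phase shifts entering the virial coefficients, is commonly written in the nonrelativistic Beth-Uhlenbeck form shown below as an illustration only; the paper uses a relativistic generalization with K-matrix phase shifts, and normalization conventions vary between references.

```latex
% Schematic Beth--Uhlenbeck form (nonrelativistic; conventions vary):
% bound-state plus scattering contributions to the second virial coefficient.
\Delta b_2(T) \;\propto\; \sum_{B} e^{-E_B/T}
\;+\; \frac{1}{\pi}\sum_{\ell} g_\ell \int_{0}^{\infty}
\mathrm{d}E\,\frac{\mathrm{d}\delta_{\ell}(E)}{\mathrm{d}E}\,e^{-E/T}
```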
LLNL Mercury Project Trinity Open Science Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dawson, Shawn A.
The Mercury Monte Carlo particle transport code is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. In the proposed Trinity Open Science calculations, I will investigate computer science aspects of the code which are relevant to convergence of the simulation quantities with increasing Monte Carlo particle counts.
NASA Technical Reports Server (NTRS)
Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.
1988-01-01
A physically based ground hydrology model is presented that includes the processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff. Data from the Goddard Institute for Space Studies GCM were used as inputs for off-line tests of the model in four 8 x 10 deg regions, including Brazil, Sahel, Sahara, and India. Soil and vegetation input parameters were calculated as area-weighted means over the 8 x 10 deg gridbox; the resulting hydrological quantities were compared to ground hydrology model calculations performed on the 1 x 1 deg cells which comprise the 8 x 10 deg gridbox. Results show that the compositing procedure worked well except in the Sahel, where low soil water levels and a heterogeneous land surface produce high variability in hydrological quantities; for that region, a resolution better than 8 x 10 deg is needed.
Effects of Chemistry on Blunt-Body Wake Structure
NASA Technical Reports Server (NTRS)
Dogra, Virendra K.; Moss, James N.; Wilmoth, Richard G.; Taylor, Jeff C.; Hassan, H. A.
1995-01-01
Results of a numerical study are presented for hypersonic low-density flow about a 70-deg blunt cone using direct simulation Monte Carlo (DSMC) and Navier-Stokes calculations. Particular emphasis is given to the effects of chemistry on the near-wake structure and on the surface quantities and the comparison of the DSMC results with the Navier-Stokes calculations. The flow conditions simulated are those experienced by a space vehicle at an altitude of 85 km and a velocity of 7 km/s during Earth entry. A steady vortex forms in the near wake for these freestream conditions for both chemically reactive and nonreactive air gas models. The size (axial length) of the vortex for the reactive air calculations is 25% larger than that of the nonreactive air calculations. The forebody surface quantities are less sensitive to the chemistry than the base surface quantities. The presence of the afterbody has no effect on the forebody flow structure or the surface quantities. The comparisons of DSMC and Navier-Stokes calculations show good agreement for the wake structure and the forebody surface quantities.
Unleashing Empirical Equations with "Nonlinear Fitting" and "GUM Tree Calculator"
NASA Astrophysics Data System (ADS)
Lovell-Smith, J. W.; Saunders, P.; Feistel, R.
2017-10-01
Empirical equations having large numbers of fitted parameters, such as the international standard reference equations published by the International Association for the Properties of Water and Steam (IAPWS), which form the basis of the "Thermodynamic Equation of Seawater—2010" (TEOS-10), provide the means to calculate many quantities very accurately. The parameters of these equations are found by least-squares fitting to large bodies of measurement data. However, the usefulness of these equations is limited since uncertainties are not readily available for most of the quantities able to be calculated, the covariance of the measurement data is not considered, and further propagation of the uncertainty in the calculated result is restricted since the covariance of calculated quantities is unknown. In this paper, we present two tools developed at MSL that are particularly useful in unleashing the full power of such empirical equations. "Nonlinear Fitting" enables propagation of the covariance of the measurement data into the parameters using generalized least-squares methods. The parameter covariance then may be published along with the equations. Then, when using these large, complex equations, "GUM Tree Calculator" enables the simultaneous calculation of any derived quantity and its uncertainty, by automatic propagation of the parameter covariance into the calculated quantity. We demonstrate these tools in exploratory work to determine and propagate uncertainties associated with the IAPWS-95 parameters.
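A minimal sketch of the propagation step such a tool automates: push an assumed parameter covariance V through y = f(p) with a numerical Jacobian, u(y)² = J V Jᵀ. The quadratic "empirical equation" and its covariance are invented for illustration.

```python
# Sketch of GUM-style propagation of parameter covariance (illustrative only).
import numpy as np

def f(p, x=2.0):
    """Toy 'empirical equation': y = p0 + p1*x + p2*x**2."""
    return p[0] + p[1] * x + p[2] * x**2

p = np.array([1.0, 0.5, -0.1])            # fitted parameters (illustrative)
V = np.array([[4e-4, 1e-4,  0.0],         # parameter covariance from the
              [1e-4, 9e-4, -2e-5],        # least-squares fit (illustrative)
              [0.0, -2e-5,  1e-5]])

# Forward-difference Jacobian of f with respect to the parameters
eps = 1e-7
J = np.array([(f(p + eps * np.eye(3)[i]) - f(p)) / eps for i in range(3)])

y = f(p)
u_y = np.sqrt(J @ V @ J)                  # u(y)^2 = J V J^T
print(f"y = {y:.4f} +/- {u_y:.4f}")
```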
NASA Technical Reports Server (NTRS)
Szatmary, Steven A.; Gyekenyesi, John P.; Nemeth, Noel N.
1990-01-01
This manual describes the operation and theory of the PC-CARES (Personal Computer-Ceramic Analysis and Reliability Evaluation of Structures) computer program for the IBM PC and compatibles running PC-DOS/MS-DOS or IBM/MS-OS/2 (version 1.1 or higher) operating systems. The primary purpose of this code is to estimate Weibull material strength parameters, the Batdorf crack density coefficient, and other related statistical quantities. Included in the manual is the description of the calculation of shape and scale parameters of the two-parameter Weibull distribution using the least-squares analysis and maximum likelihood methods for volume- and surface-flaw-induced fracture in ceramics with complete and censored samples. The methods for detecting outliers and for calculating the Kolmogorov-Smirnov and the Anderson-Darling goodness-of-fit statistics and 90 percent confidence bands about the Weibull line, as well as the techniques for calculating the Batdorf flaw-density constants are also described.
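A sketch of the two estimation routes described, with SciPy standing in for PC-CARES and invented strength data; the least-squares route fits the linearized Weibull CDF with median ranks.

```python
# Two-parameter Weibull estimation two ways (illustrative data, not ceramics
# test results): maximum likelihood and linearized least squares.
import numpy as np
from scipy.stats import weibull_min

strengths = np.array([312., 355., 378., 401., 422., 445., 467., 502.])  # MPa

# Maximum likelihood (location fixed at zero for the 2-parameter form)
m_mle, _, s_mle = weibull_min.fit(strengths, floc=0.0)

# Least squares on ln(-ln(1-F)) = m*ln(sigma) - m*ln(sigma0), median ranks
x = np.sort(strengths)
F = (np.arange(1, x.size + 1) - 0.3) / (x.size + 0.4)   # median-rank estimate
m_ls, c = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
s_ls = np.exp(-c / m_ls)

print(f"MLE: m = {m_mle:.2f}, sigma0 = {s_mle:.1f} MPa")
print(f"LSQ: m = {m_ls:.2f}, sigma0 = {s_ls:.1f} MPa")
```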
Deterministic photon bias in speckle imaging
NASA Technical Reports Server (NTRS)
Beletic, James W.
1989-01-01
A method for determining photon bias terms in speckle imaging is presented, and photon bias is shown to be a deterministic quantity that can be calculated without the use of the expectation operator. The quantities obtained are found to be identical to previous results. The present results extend photon bias calculations to the important case of the bispectrum where photon events are assigned different weights, in which regime the bias is a frequency-dependent complex quantity that must be calculated for each frame.
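The simplest instance of the bias being deterministic: for a frame of N unit-weight photon events, the raw power spectrum carries an additive bias of exactly N, which can be subtracted frame by frame with no expectation operator. A toy demonstration with a flat object and invented sizes; the weighted-event and bispectrum cases treated in the paper need the frequency-dependent generalization.

```python
# Photon-bias subtraction in the power spectrum (unit weights, flat object).
import numpy as np

rng = np.random.default_rng(1)
npix, nphot, nframes = 64, 200, 2000

acc = np.zeros(npix)
for _ in range(nframes):
    # one photon-counting frame: 200 events scattered over 64 pixels
    counts = np.bincount(rng.integers(0, npix, nphot), minlength=npix)
    acc += np.abs(np.fft.fft(counts))**2 - nphot   # subtract the N bias

power = acc / nframes
print("mean residual power off DC:", power[1:].mean())  # ~0 for a flat object
```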
Quark–hadron phase structure, thermodynamics, and magnetization of QCD matter
NASA Astrophysics Data System (ADS)
Nasser Tawfik, Abdel; Magied Diab, Abdel; Hussein, M. T.
2018-05-01
The SU(3) Polyakov linear-sigma model (PLSM) is systematically implemented to characterize the quark-hadron phase structure and to determine various thermodynamic quantities and the magnetization of quantum chromodynamic (QCD) matter. Using the mean-field approximation, the dependence of the chiral order parameter on a finite magnetic field is also calculated. Over a wide range of temperatures and magnetic field strengths, various thermodynamic quantities including the trace anomaly, speed of sound squared, entropy density, and specific heat are presented, and some magnetic properties are described as well. Where available, these results are compared to recent lattice QCD calculations. The temperature dependence of these quantities confirms our previous finding that the transition temperature is reduced with increasing magnetic field strength, i.e. QCD matter is characterized by an inverse magnetic catalysis. Furthermore, the temperature dependence of the magnetization, showing that QCD matter has paramagnetic properties slightly below and far above the pseudo-critical temperature, is confirmed as well. The excellent agreement with recent lattice calculations indicates that our QCD-like approach (PLSM) possesses the correct degrees of freedom in both the hadronic and partonic phases and describes well the dynamics driving confined hadrons toward the deconfined quark-gluon plasma.
Application of sensitivity-analysis techniques to the calculation of topological quantities
NASA Astrophysics Data System (ADS)
Gilchrist, Stuart
2017-08-01
Magnetic reconnection in the corona occurs preferentially at sites where the magnetic connectivity is either discontinuous or has a large spatial gradient. Hence there is a general interest in computing quantities (like the squashing factor) that characterize the gradient of the field-line mapping function. Here we present an algorithm for calculating certain (quasi)topological quantities using mathematical techniques from the field of "sensitivity analysis". The method is based on the calculation of a three-dimensional field-line mapping Jacobian from which all the topological quantities of interest can be derived. We will present the algorithm and the details of a publicly available set of libraries that implement it.
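For context, the squashing factor mentioned above is conventionally defined from the 2x2 footpoint-mapping Jacobian; this standard formula is not specific to the abstract's algorithm, and the example matrix is invented (metric factors for non-orthonormal footpoint coordinates are omitted).

```python
# Standard squashing-factor definition from the field-line mapping Jacobian.
import numpy as np

def squashing_factor(J):
    """Q = ||J||_F^2 / |det J| for the 2x2 footpoint-mapping Jacobian J."""
    return (J**2).sum() / abs(np.linalg.det(J))

J = np.array([[2.0, 0.1],
              [0.0, 0.5]])                 # illustrative mapping gradient
print(f"Q = {squashing_factor(J):.2f}")    # Q = 2 for an identity mapping
```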
Application of thermodynamics to silicate crystalline solutions
NASA Technical Reports Server (NTRS)
Saxena, S. K.
1972-01-01
A review of thermodynamic relations is presented, describing Guggenheim's regular solution models, the simple mixture, the zeroth approximation, and the quasi-chemical model. The possibilities of retrieving useful thermodynamic quantities from phase equilibrium studies are discussed. Such quantities include the activity-composition relations and the free energy of mixing in crystalline solutions. Theory and results of the study of partitioning of elements in coexisting minerals are briefly reviewed. A thermodynamic study of the intercrystalline and intracrystalline ion exchange relations gives useful information on the thermodynamic behavior of the crystalline solutions involved. Such information is necessary for the solution of most petrogenic problems and for geothermometry. Thermodynamic quantities for tungstates (CaWO4-SrWO4) are calculated.
NASA Astrophysics Data System (ADS)
Haddag, B.; Kagnaya, T.; Nouari, M.; Cutard, T.
2013-01-01
Modelling machining operations allows estimating cutting parameters that are difficult to obtain experimentally, in particular quantities characterizing the tool-workpiece interface. Temperature is one such quantity; because it affects tool wear, its estimation is important. This study deals with a new modelling strategy, based on two calculation steps, for analysing heat transfer into the cutting tool. Unlike classical methods, which consider only the cutting tool and apply an approximate heat flux at the cutting face estimated from experimental data (e.g. measured cutting force or cutting power), the proposed approach consists of two successive 3D finite element calculations that are fully independent of experimental measurements; only the definition of the behaviour of the tool-workpiece couple is necessary. The first is a 3D thermomechanical model of the chip formation process, which yields cutting forces, chip morphology and chip flow direction. The second is a 3D thermal model of heat diffusion into the cutting tool, using an adequate thermal loading (an applied uniform or non-uniform heat flux). This loading is estimated from quantities obtained in the first step, such as the contact pressure and sliding velocity distributions and the contact area. Comparisons between experimental data and the first calculation on the one hand, and between temperatures measured with embedded thermocouples and the second calculation on the other, show good agreement in terms of cutting forces, chip morphology and cutting temperature.
NASA Astrophysics Data System (ADS)
Hampel, B.; Liu, B.; Nording, F.; Ostermann, J.; Struszewski, P.; Langfahl-Klabes, J.; Bieler, M.; Bosse, H.; Güttler, B.; Lemmens, P.; Schilling, M.; Tutsch, R.
2018-03-01
In many cases, the determination of the measurement uncertainty of complex nanosystems provides unexpected challenges. This is in particular true for complex systems with many degrees of freedom, i.e. nanosystems with multiparametric dependencies and multivariate output quantities. The aim of this paper is to address specific questions arising during the uncertainty calculation of such systems. This includes the division of the measurement system into subsystems and the distinction between systematic and statistical influences. We demonstrate that, even if the physical systems under investigation are very different, the corresponding uncertainty calculation can always be realized in a similar manner. This is demonstrated in detail for two experiments, namely magnetic nanosensors and ultrafast electro-optical sampling of complex time-domain signals. For these examples the approach for uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) is explained, in which correlations between multivariate output quantities are captured. To illustrate the versatility of the proposed approach, its application to other experiments, namely nanometrological instruments for terahertz microscopy, dimensional scanning probe microscopy, and measurement of concentration of molecules using surface enhanced Raman scattering, is briefly discussed in the appendix. We believe that the proposed approach provides a simple but comprehensive orientation for uncertainty calculation in the discussed measurement scenarios and can also be applied to similar or related situations.
Constant-Pressure Combustion Charts Including Effects of Diluent Addition
NASA Technical Reports Server (NTRS)
Turner, L Richard; Bogart, Donald
1949-01-01
Charts are presented for the calculation of (a) the final temperatures and the temperature changes involved in constant-pressure combustion processes of air and in products of combustion of air and hydrocarbon fuels, and (b) the quantity of hydrocarbon fuels required in order to attain a specified combustion temperature when water, alcohol, water-alcohol mixtures, liquid ammonia, liquid carbon dioxide, liquid nitrogen, liquid oxygen, or their mixtures are added to air as diluents or refrigerants. The ideal combustion process and combustion with incomplete heat release from the primary fuel and from combustible diluents are considered. The effect of preheating the mixture of air and diluents and the effect of an initial water-vapor content in the combustion air on the required fuel quantity are also included. The charts are applicable only to processes in which the final mixture is leaner than stoichiometric and at temperatures where dissociation is unimportant. A chart is also included to permit the calculation of the stoichiometric ratio of hydrocarbon fuel to air with diluent addition. The use of the charts is illustrated by numerical examples.
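The stoichiometric fuel-air ratio chart corresponds to a short calculation for a generic hydrocarbon CxHy; a sketch under the usual assumptions (pure hydrocarbon fuel, air taken as 4.76 mol per mol of O2).

```python
# Stoichiometric fuel-air ratio for CxHy:
#   CxHy + (x + y/4) O2 -> x CO2 + (y/2) H2O,  air ~ 4.76 mol per mol O2.
M_C, M_H, M_air = 12.011, 1.008, 28.9647    # g/mol

def stoich_fuel_air_ratio(x, y):
    fuel_mass = x * M_C + y * M_H
    air_mass = (x + y / 4.0) * 4.76 * M_air
    return fuel_mass / air_mass

print(f"octane C8H18: f/a = {stoich_fuel_air_ratio(8, 18):.4f}")  # ~0.066
```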
Thermophysical properties of gas phase uranium tetrafluoride
NASA Technical Reports Server (NTRS)
Watanabe, Yoichi; Anghaie, Samim
1993-01-01
Thermophysical data of gaseous uranium tetrafluoride (UF4) are theoretically obtained by taking into account dissociation of molecules at high temperatures (2000-6000 K). Determined quantities include specific heat, optical opacity, diffusion coefficient, viscosity, and thermal conductivity. A computer program is developed for the calculation.
Code of Federal Regulations, 2011 CFR
2011-07-01
... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...
Code of Federal Regulations, 2013 CFR
2013-07-01
... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...
Code of Federal Regulations, 2014 CFR
2014-07-01
... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...
Code of Federal Regulations, 2012 CFR
2012-07-01
... current process operating conditions. (iii) Design analysis based on accepted chemical engineering... quantity are production records, measurement of stream characteristics, and engineering calculations. (5...-end process operations using engineering assessment. Engineering assessment includes, but is not...
Calculation Methods and Conversions for Pesticide Application.
ERIC Educational Resources Information Center
Cole, Herbert, Jr.
This agriculture extension service publication from Pennsylvania State University consists of conversion tables and formulas for determining concentration and rate of application of pesticides. Contents include: (1) Area and volume conversions; (2) Important conversion formulae; (3) Conversions for rates of application; (4) Quantities of pesticide…
Computer-Delivered Interventions to Reduce College Student Drinking: A Meta-Analysis
Carey, Kate B.; Scott-Sheldon, Lori A. J.; Elliott, Jennifer C.; Bolles, Jamie R.; Carey, Michael P.
2009-01-01
Aims This meta-analysis evaluates the efficacy and moderators of computer-delivered interventions (CDIs) to reduce alcohol use among college students. Methods We included 35 manuscripts with 43 separate interventions, and calculated both between-group and within-group effect sizes for alcohol consumption and alcohol-related problems. Effect sizes were calculated for short-term (≤ 5 weeks) and longer-term (≥ 6 weeks) intervals. All studies were coded for study descriptors, participant characteristics, and intervention components. Results The effects of CDIs depended on the nature of the comparison condition: CDIs reduced quantity and frequency measures relative to assessment-only controls, but rarely differed from comparison conditions that included alcohol content. Small-to-medium within-group effect sizes can be expected for CDIs at short- and longer-term follow-ups; these changes are less than or equivalent to the within-group effect sizes observed for more intensive interventions. Conclusions CDIs reduce the quantity and frequency of drinking among college students. CDIs are generally equivalent to alternative alcohol-related comparison interventions. PMID:19744139
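A sketch of the two effect-size types being pooled, using the standard standardized-mean-difference forms with invented drinks-per-week data; the meta-analysis's exact estimator conventions may differ.

```python
# Between-group and within-group effect sizes (Cohen's d style; data invented).
import numpy as np

def between_group_d(m_int, sd_int, n_int, m_ctl, sd_ctl, n_ctl):
    """Standardized difference; positive = intervention group drank less."""
    sp = np.sqrt(((n_int - 1) * sd_int**2 + (n_ctl - 1) * sd_ctl**2)
                 / (n_int + n_ctl - 2))
    return (m_ctl - m_int) / sp

def within_group_d(m_pre, m_post, sd_pre):
    """Pre-to-post change within one arm, standardized by baseline SD."""
    return (m_pre - m_post) / sd_pre

print(between_group_d(9.1, 6.0, 120, 11.4, 6.5, 118))  # CDI vs control
print(within_group_d(12.3, 9.1, 6.0))                  # CDI arm over time
```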
Calculating far-field radiated sound pressure levels from NASTRAN output
NASA Technical Reports Server (NTRS)
Lipman, R. R.
1986-01-01
FAFRAP is a computer program which calculates far field radiated sound pressure levels from quantities computed by a NASTRAN direct frequency response analysis of an arbitrarily shaped structure. Fluid loading on the structure can be computed directly by NASTRAN or an added-mass approximation to fluid loading on the structure can be used. Output from FAFRAP includes tables of radiated sound pressure levels and several types of graphic output. FAFRAP results for monopole and dipole sources compare closely with an explicit calculation of the radiated sound pressure level for those sources.
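A check case of the kind such a code is compared against: the far-field pressure amplitude of a harmonic monopole, |p| = ρckQ/(4πr), converted to a sound pressure level. Medium properties, source strength, and the 1 µPa underwater reference are illustrative assumptions, not FAFRAP inputs.

```python
# Far-field SPL of a harmonic monopole of volume-velocity amplitude Q.
import numpy as np

rho, c = 1026.0, 1500.0        # seawater density (kg/m^3), sound speed (m/s)
f, Q, r = 100.0, 1.0e-3, 100.0 # Hz, m^3/s, m (all illustrative)
k = 2 * np.pi * f / c

p_amp = rho * c * k * Q / (4 * np.pi * r)
spl = 20 * np.log10((p_amp / np.sqrt(2)) / 1.0e-6)   # dB re 1 uPa (rms)
print(f"SPL at {r:.0f} m: {spl:.1f} dB re 1 uPa")
```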
1986-01-01
by sensors in the test cell and sampled, digitized, averaged, and calibrated by the facility computer system. The data included flowrates calculated ...before the next test could be started. This required about 2 minutes. 6.4 Combat Damage Testing Appendix C contains calculations and analysis...were comparable (Figure 7-5). Agent quantities required per MIL-E-22285 were again calculated using the equations noted in paragraph 7.1.1. The
Experimental and Theoretical Understanding of Neutron Capture on Uranium Isotopes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ullmann, John Leonard
2017-09-21
Neutron capture cross sections on uranium isotopes are important quantities needed to model nuclear explosion performance, nuclear reactor design, nuclear test diagnostics, and nuclear forensics. It has been difficult to calculate capture accurately, and factors of 2 or more between calculation and measurements are not uncommon, although normalization to measurements of the average capture width and nuclear level density can improve the result. The calculations of capture for 233,235,237,239U are further complicated by the need to accurately include the fission channel.
Propellant Mass Fraction Calculation Methodology for Launch Vehicles
NASA Technical Reports Server (NTRS)
Holt, James B.; Monk, Timothy S.
2009-01-01
Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between competing launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of a generic launch vehicle. This includes fundamental methods of pmf calculation which consider only the loaded propellant and the inert mass of the vehicle, more involved methods which consider the residuals and any other unusable propellant remaining in the vehicle, and other calculations which exclude large mass quantities such as the installed engine mass. Finally, a historic comparison is made between launch vehicles on the basis of the differing calculation methodologies.
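The spread among conventions is easy to make concrete; three pmf variants evaluated on one set of invented stage masses.

```python
# Three pmf conventions contrasted in the paper, on made-up stage masses.
m_prop_loaded = 130_000.0   # kg, loaded propellant
m_residuals = 1_500.0       # kg, unusable propellant remaining
m_inert = 12_000.0          # kg, structure, avionics, etc. (engines included)
m_engines = 3_500.0         # kg

# (1) fundamental: loaded propellant vs. loaded propellant + inert mass
pmf_basic = m_prop_loaded / (m_prop_loaded + m_inert)

# (2) usable propellant only: residuals counted against the vehicle
m_usable = m_prop_loaded - m_residuals
pmf_usable = m_usable / (m_prop_loaded + m_inert)

# (3) installed engine mass excluded from the inert side
pmf_no_engine = m_prop_loaded / (m_prop_loaded + m_inert - m_engines)

print(f"{pmf_basic:.4f} {pmf_usable:.4f} {pmf_no_engine:.4f}")
```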
2011-03-04
direct relationships between calculated quantities obtained by DFT and the “conveniently measurable” quantities α and rn...VCH Verlag, Weinheim, 2004). [11] A. D. Becke, “Density-functional Thermochemistry. III. The Role of Exact Exchange”, J. Chem. Phys. 98, 5648-5652
40 CFR 61.348 - Standards: Treatment processes.
Code of Federal Regulations, 2013 CFR
2013-07-01
... enhanced biodegradation unit shall not be included in the calculation of the total annual benzene quantity, if the enhanced biodegradation unit is the first exempt unit in which the waste is managed or treated. A unit shall be considered enhanced biodegradation if it is a suspended-growth process that...
40 CFR 61.348 - Standards: Treatment processes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... enhanced biodegradation unit shall not be included in the calculation of the total annual benzene quantity, if the enhanced biodegradation unit is the first exempt unit in which the waste is managed or treated. A unit shall be considered enhanced biodegradation if it is a suspended-growth process that...
40 CFR 61.348 - Standards: Treatment processes.
Code of Federal Regulations, 2012 CFR
2012-07-01
... enhanced biodegradation unit shall not be included in the calculation of the total annual benzene quantity, if the enhanced biodegradation unit is the first exempt unit in which the waste is managed or treated. A unit shall be considered enhanced biodegradation if it is a suspended-growth process that...
40 CFR 61.348 - Standards: Treatment processes.
Code of Federal Regulations, 2014 CFR
2014-07-01
... enhanced biodegradation unit shall not be included in the calculation of the total annual benzene quantity, if the enhanced biodegradation unit is the first exempt unit in which the waste is managed or treated. A unit shall be considered enhanced biodegradation if it is a suspended-growth process that...
NASA Astrophysics Data System (ADS)
Popovas, A.; Jørgensen, U. G.
2016-11-01
Context. Hydrogen is the most abundant molecule in the Universe. Its thermodynamic quantities dominate the physical conditions in molecular clouds, protoplanetary disks, etc. It is also of high interest in plasma physics. Therefore thermodynamic data for molecular hydrogen have to be as accurate as possible in a wide temperature range. Aims: We here rigorously show the shortcomings of various simplifications that are used to calculate the total internal partition function. These shortcomings can lead to errors of up to 40 percent or more in the estimated partition function. These errors carry on to calculations of thermodynamic quantities. Therefore a more complicated approach has to be taken. Methods: Seven possible simplifications of various complexity are described, together with advantages and disadvantages of direct summation of experimental values. These were compared to what we consider the most accurate and most complete treatment (case 8). Dunham coefficients were determined from experimental and theoretical energy levels of a number of electronically excited states of H2. Both equilibrium and normal hydrogen was taken into consideration. Results: Various shortcomings in existing calculations are demonstrated, and the reasons for them are explained. New partition functions for equilibrium, normal, and ortho and para hydrogen are calculated and thermodynamic quantities are reported for the temperature range 1-20 000 K. Our results are compared to previous estimates in the literature. The calculations are not limited to the ground electronic state, but include all bound and quasi-bound levels of excited electronic states. Dunham coefficients of these states of H2 are also reported. Conclusions: For most of the relevant astrophysical cases it is strongly advised to avoid using simplifications, such as a harmonic oscillator and rigid rotor or ad hoc summation limits of the eigenstates to estimate accurate partition functions and to be particularly careful when using polynomial fits to the computed values. Reported internal partition functions and thermodynamic quantities in the present work are shown to be more accurate than previously available data. The full datasets in 1 K temperature steps are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/595/A130
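The direct-summation structure of such a partition-function calculation, Z(T) = Σᵢ gᵢ exp(-Eᵢ/kT), can be sketched with crude rigid-rotor/harmonic-oscillator H2-like levels and ortho/para nuclear-spin weights; the paper instead sums accurate rovibrational levels of all bound and quasi-bound electronic states, which is precisely why the simplification below is inadequate, especially at high temperature.

```python
# Direct-summation skeleton of an internal partition function (illustrative
# rigid-rotor/harmonic-oscillator levels; NOT the paper's accurate level set).
import numpy as np

k_cm = 0.6950348            # Boltzmann constant in cm^-1 per K
we, B = 4401.2, 60.853      # H2 vibrational/rotational constants, cm^-1

def Z_internal(T, vmax=20, jmax=40):
    z = 0.0
    for v in range(vmax):
        for j in range(jmax):
            E = we * v + B * j * (j + 1)      # energy above the lowest level
            g_ns = 3 if j % 2 else 1          # ortho/para nuclear-spin weights
            z += g_ns * (2 * j + 1) * np.exp(-E / (k_cm * T))
    return z

for T in (300.0, 3000.0):
    print(T, Z_internal(T))
```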
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The methodology used to implement structural sensitivity calculations into a major, general-purpose finite-element analysis system (SPAR) is described. This implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of SPAR are also discussed.
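The advantage of analytic derivatives over finite differencing in such a code is visible already in the static case: for K(b)u = f, the semi-analytic derivative du/db = -K⁻¹(dK/db)u reuses the factorized stiffness matrix. A two-spring sketch with invented numbers, not SPAR's element formulation.

```python
# Semi-analytic design derivative vs. finite difference for K(b) u = f.
import numpy as np

def K(b):
    """Stiffness of two springs in series; design variable b scales spring 1."""
    k1, k2 = 100.0 * b, 50.0
    return np.array([[k1 + k2, -k2],
                     [-k2,      k2]])

dK_db = np.array([[100.0, 0.0],      # exact derivative of K w.r.t. b
                  [  0.0, 0.0]])

f = np.array([0.0, 1.0])
b = 1.0
u = np.linalg.solve(K(b), f)

du_analytic = np.linalg.solve(K(b), -dK_db @ u)   # reuses the same K

h = 1e-5                                          # finite-difference check
du_fd = (np.linalg.solve(K(b + h), f) - u) / h
print(du_analytic)   # [-0.01, -0.01]
print(du_fd)         # agrees to O(h)
```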
Comparison of satellite-derived dynamical quantities for the stratosphere of the Southern Hemisphere
NASA Technical Reports Server (NTRS)
Miles, Thomas (Editor); Oneill, Alan (Editor)
1989-01-01
As part of the international Middle Atmosphere Program (MAP), a project was instituted to study the dynamics of the Middle Atmosphere in the Southern Hemisphere (MASH). A pre-MASH workshop was held with two aims: comparison of Southern Hemisphere dynamical quantities derived from various archives of satellite data; and assessing the impact of different base-level height information on such derived quantities. The dynamical quantities examined included geopotential height, zonal wind, potential vorticity, eddy heat and momentum fluxes, and Eliassen-Palm fluxes. It was found that while there was usually qualitative agreement between the different sets of fields, substantial quantitative differences were evident, particularly in high latitudes. The fidelity of the base-level analysis was found to be of prime importance in calculating derived quantities - especially the Eliassen-Palm flux divergence and potential vorticity. Improvements in base-level analyses are recommended. In particular, quality controls should be introduced to remove spurious localized features from analyses, and information from all Antarctic radiosondes should be utilized where possible. Caution in drawing quantitative inferences from satellite data for the middle atmosphere of the Southern Hemisphere is advised.
NASA Technical Reports Server (NTRS)
Thompkins, W. T., Jr.
1982-01-01
A FORTRAN-IV computer program was developed for the calculation of the inviscid transonic/supersonic flow field in a fully three dimensional blade passage of an axial compressor rotor or stator. Rotors may have dampers (part span shrouds). MacCormack's explicit time marching method is used to solve the unsteady Euler equations on a finite difference mesh. This technique captures shocks and smears them over several grid points. Input quantities are blade row geometry, operating conditions and thermodynamic quantities. Output quantities are three velocity components, density and internal energy at each mesh point. Other flow quantities are calculated from these variables. A short graphics package is included with the code, and may be used to display the finite difference grid, blade geometry and static pressure contour plots on blade to blade calculation surfaces or blade suction and pressure surfaces. The flow in a low aspect ratio transonic compressor was analyzed and compared with high response total pressure probe measurements and gas fluorescence static density measurements made in the MIT blowdown wind tunnel. These comparisons show that the computed flow fields accurately model the measured shock wave locations and overall aerodynamic performance.
Uncertainty Calculations in the First Introductory Physics Laboratory
NASA Astrophysics Data System (ADS)
Rahman, Shafiqur
2005-03-01
Uncertainty in a measured quantity is an integral part of reporting any experimental data. Consequently, Introductory Physics laboratories at many institutions require that students report the values of the quantities being measured as well as their uncertainties. Unfortunately, given that there are three main ways of calculating uncertainty, each suitable for particular situations (which is usually not explained in the lab manual), this is also an area that students feel highly confused about. It frequently generates a large number of complaints in the end-of-the-semester course evaluations. Students at some institutions are not asked to calculate uncertainty at all, which gives them a false sense of the nature of experimental data. Taking advantage of the increased sophistication in the use of computers and spreadsheets that students are coming to college with, we have completely restructured our first Introductory Physics Lab to address this problem. Always in the context of a typical lab, we now systematically and sequentially introduce the various ways of calculating uncertainty, including a theoretical understanding as opposed to a cookbook approach, all within the context of six three-hour labs. Complaints about the lab in student evaluations have dropped by 80%. * supported by a grant from A. V. Davis Foundation
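One sequence such a lab can build up to, shown as a sketch with invented pendulum data: a Type-A standard uncertainty of the mean, followed by propagation in quadrature through a formula.

```python
# Uncertainty of the mean, then propagation through g = 4*pi^2*L/T^2.
import numpy as np

T = np.array([2.007, 2.013, 1.998, 2.004, 2.009])   # s, repeated timings
L, u_L = 1.000, 0.002                               # m, length and uncertainty

T_mean = T.mean()
u_T = T.std(ddof=1) / np.sqrt(T.size)               # uncertainty of the mean

g = 4 * np.pi**2 * L / T_mean**2
# propagation in quadrature: (u_g/g)^2 = (u_L/L)^2 + (2 u_T/T)^2
u_g = g * np.sqrt((u_L / L)**2 + (2 * u_T / T_mean)**2)
print(f"g = {g:.3f} +/- {u_g:.3f} m/s^2")
```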
NASA Technical Reports Server (NTRS)
Giles, G. L.; Rogers, J. L., Jr.
1982-01-01
The implementation includes a generalized method for specifying element cross-sectional dimensions as design variables that can be used in analytically calculating derivatives of output quantities from static stress, vibration, and buckling analyses for both membrane and bending elements. Limited sample results for static displacements and stresses are presented to indicate the advantages of analytically calculating response derivatives compared to finite difference methods. Continuing developments to implement these procedures into an enhanced version of the system are also discussed.
Johnston, Iain G; Rickett, Benjamin C; Jones, Nick S
2014-12-02
Back-of-the-envelope or rule-of-thumb calculations involving rough estimates of quantities play a central scientific role in developing intuition about the structure and behavior of physical systems, for example in so-called Fermi problems in the physical sciences. Such calculations can be used to powerfully and quantitatively reason about biological systems, particularly at the interface between physics and biology. However, substantial uncertainties are often associated with values in cell biology, and performing calculations without taking this uncertainty into account may limit the extent to which results can be interpreted for a given problem. We present a means to facilitate such calculations where uncertainties are explicitly tracked through the line of reasoning, and introduce a probabilistic calculator called CALADIS, a free web tool, designed to perform this tracking. This approach allows users to perform more statistically robust calculations in cell biology despite having uncertain values, and to identify which quantities need to be measured more precisely to make confident statements, facilitating efficient experimental design. We illustrate the use of our tool for tracking uncertainty in several example biological calculations, showing that the results yield powerful and interpretable statistics on the quantities of interest. We also demonstrate that the outcomes of calculations may differ from point estimates when uncertainty is accurately tracked. An integral link between CALADIS and the BioNumbers repository of biological quantities further facilitates the straightforward location, selection, and use of a wealth of experimental data in cell biological calculations. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
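A bare-bones version of the calculation style described, with invented numbers for a diffusion-time estimate t ~ L²/D (CALADIS itself additionally links quantities to BioNumbers entries): carry each uncertain quantity as samples, combine, and read off an interval.

```python
# Monte Carlo tracking of uncertainty through a back-of-the-envelope estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

L = rng.normal(10e-6, 1e-6, n)                 # m, cell size: 10 +/- 1 um
D = rng.lognormal(np.log(1e-11), 0.5, n)       # m^2/s, uncertain diffusivity

t = L**2 / D                                   # samples of the derived quantity
lo, med, hi = np.percentile(t, [2.5, 50, 97.5])
print(f"t ~ {med:.2g} s (95% interval {lo:.2g} - {hi:.2g} s)")
```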
NASA Technical Reports Server (NTRS)
Kummerow, Christian; Giglio, Louis
1994-01-01
This paper describes a multichannel physical approach for retrieving rainfall and vertical structure information from satellite-based passive microwave observations. The algorithm makes use of statistical inversion techniques based upon theoretically calculated relations between rainfall rates and brightness temperatures. Potential errors introduced into the theoretical calculations by the unknown vertical distribution of hydrometeors are overcome by explicitly accounting for diverse hydrometeor profiles. This is accomplished by allowing for a number of different vertical distributions in the theoretical brightness temperature calculations and requiring consistency between the observed and calculated brightness temperatures. This paper will focus primarily on the theoretical aspects of the retrieval algorithm, which includes a procedure used to account for inhomogeneities of the rainfall within the satellite field of view as well as a detailed description of the algorithm as it is applied over both ocean and land surfaces. The residual error between observed and calculated brightness temperatures is found to be an important quantity in assessing the uniqueness of the solution. It is further found that the residual error is a meaningful quantity that can be used to derive expected accuracies from this retrieval technique. Examples comparing the retrieved results as well as the detailed analysis of the algorithm performance under various circumstances are the subject of a companion paper.
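The consistency requirement between observed and calculated brightness temperatures can be sketched as Gaussian weighting of candidate profiles by their channel residuals; the shapes, numbers, and simple chi-square weighting below are placeholders, not the operational algorithm.

```python
# Schematic profile-weighted retrieval from brightness-temperature residuals.
import numpy as np

Tb_obs = np.array([245.0, 230.0, 210.0])        # K, observed channels
Tb_db = np.array([[250.0, 235.0, 215.0],        # K, per candidate profile
                  [240.0, 228.0, 205.0],
                  [260.0, 250.0, 235.0]])
rain_db = np.array([4.0, 7.0, 1.0])             # mm/h for each profile
sigma = 5.0                                     # K, assumed channel uncertainty

resid2 = ((Tb_db - Tb_obs)**2 / sigma**2).sum(axis=1)
w = np.exp(-0.5 * resid2)
rain = (w * rain_db).sum() / w.sum()
residual = resid2.min()     # small residual -> solution is well constrained
print(f"retrieved rain rate ~ {rain:.1f} mm/h (min chi^2 = {residual:.1f})")
```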
Food Buying Guide for Type A School Lunches.
ERIC Educational Resources Information Center
Moss, Mary Ann; And Others
This guide provides information for planning and calculating quantities of food to be purchased and used by schools serving Type A lunches in the National School Lunch Program. This edition includes changes resulting from new developments in food production and processing as well as changes in marketing procedures, packages, and quality of foods…
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 28 2011-07-01 2011-07-01 false How do I calculate the quantity of an extremely hazardous substance present in mixtures? 355.13 Section 355.13 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) SUPERFUND, EMERGENCY PLANNING, AND COMMUNITY RIGHT-TO-KNOW PROGRAMS...
ERIC Educational Resources Information Center
Beddard, Godfrey S.
2011-01-01
Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
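A minimal sketch of the approach for the harmonic-oscillator illustration mentioned: Metropolis sampling of the level index n (taking Eₙ = n in units of the level spacing), compared with the closed-form average energy.

```python
# Metropolis estimate of <E> for evenly spaced levels E_n = n (n = 0, 1, ...),
# versus the exact result <E> = 1 / (exp(1/T) - 1). A minimal sketch.
import numpy as np

rng = np.random.default_rng(2)
T = 2.0                            # k_B*T in units of the level spacing
n = 0
E_sum, count = 0.0, 0
for step in range(200_000):
    trial = n + rng.choice((-1, 1))            # symmetric +/-1 proposal
    if trial >= 0 and rng.random() < np.exp(-(trial - n) / T):
        n = trial                              # accept; reject keeps n
    if step >= 20_000:                         # discard burn-in
        E_sum += n
        count += 1

print("Metropolis <E>:", E_sum / count)
print("exact      <E>:", 1.0 / (np.exp(1.0 / T) - 1.0))
```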
Pressure algorithm for elliptic flow calculations with the PDF method
NASA Technical Reports Server (NTRS)
Anand, M. S.; Pope, S. B.; Mongia, H. C.
1991-01-01
An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is one of the most promising approaches for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were performed in conjunction with conventional finite-volume-based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities, including triple correlations.
NASA Technical Reports Server (NTRS)
Deloach, R.; Morris, A. L.; Mcbeth, R. B.
1976-01-01
A portable boundary-layer meteorological data-acquisition and analysis system is described which employs a small tethered balloon and a programmable calculator. The system is capable of measuring pressure, wet- and dry-bulb temperature, wind speed, and temperature fluctuations as a function of height and time. Other quantities, which can be calculated in terms of these, can also be made available in real time. All quantities, measured and calculated, can be printed, plotted, and stored on magnetic tape in the field during the data-acquisition phase of an experiment.
NASA Technical Reports Server (NTRS)
Huang, K.-N.; Aoyagi, M.; Mark, H.; Chen, M. H.; Crasemann, B.
1976-01-01
Electron binding energies in neutral atoms have been calculated relativistically, with the requirement of complete relaxation. Hartree-Fock-Slater wave functions served as zeroth-order eigenfunctions to compute the expectation of the total Hamiltonian. A first-order correction to the local approximation was thus included. Quantum-electrodynamic corrections were made. For all elements with atomic numbers ranging from 2 to 106, the following quantities are listed: total energies, electron kinetic energies, electron-nucleus potential energies, electron-electron potential energies consisting of electrostatic and Breit interaction (magnetic and retardation) terms, and vacuum polarization energies. Binding energies including relaxation are listed for all electrons in all atoms over the indicated range of atomic numbers. A self-energy correction is included for the 1s, 2s, and 2p(1/2) levels. Results for selected atoms are compared with energies calculated by other methods and with experimental values.
An analysis of the equivalent dose calculation for the remainder tissues
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zankl, M.; Drexler, G.
1995-09-01
In the 1990 Recommendations of the International Commission on Radiological Protection, the risk-weighted quantity "effective dose equivalent" was replaced by a similar quantity, "effective dose." Among other alterations, the selection of the organs and tissues contributing to the risk-weighted quantity and their respective weighting factors were changed, including a modified definition of the so-called "remainder." Close consideration of this latter definition shows that it causes certain ambiguities and unexpected effects, which are dealt with in the following. For several geometries of external photon irradiation, the numerical differences between two possible methods of evaluating the remainder dose from the doses to ten single organs, namely as an arithmetic mean or as a mass-weighted average, are assessed. It is shown that deviations from these averaging procedures, as prescribed for those cases where a remainder organ receives a higher dose than an organ with a specified weighting factor, cause discontinuities in the energy dependence of the remainder dose and, consequently, non-additivity of this quantity. These problems are discussed, and it is shown that, although the numerical consequences for the calculation of the effective dose are small, this unsatisfactory situation needs clarification. One approach might be to abolish some of the ICRP guidance relating to the appropriate tissue weighting factors for the remainder tissues and organs and to make other guidance more precise. 14 refs., 12 figs., 2 tabs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garza, Jorge; Nichols, Jeffrey A.; Dixon, David A.
2000-01-15
The Hartree product is analyzed in the context of Kohn-Sham theory. The differential equations that emerge from this theory are solved with the optimized effective potential using the Krieger, Li, and Iafrate approximation, in order to get a local potential as required by the ordinary Kohn-Sham procedure. Because the diagonal terms of the exact exchange energy are included in Hartree theory, it is self-interaction free and the exchange potential has the proper asymptotic behavior. We have examined the impact of this correct asymptotic behavior on local and global properties using this simple model to approximate the exchange energy. Local quantities, such as the exchange potential and the average local electrostatic potential, are used to examine whether the shell structure in an atom is revealed by this theory. Global quantities, such as the highest occupied orbital energy (related to the ionization potential) and the exchange energy, are also calculated. These quantities are contrasted with those obtained from calculations with the local density approximation, the generalized gradient approximation, and the self-interaction correction approach proposed by Perdew and Zunger. We conclude that the main characteristics in an atomic system are preserved with the Hartree theory. In particular, the behavior of the exchange potential obtained in this theory is similar to those obtained within other Kohn-Sham approximations. (c) 2000 American Institute of Physics.
NASA Astrophysics Data System (ADS)
Dasenbrock-Gammon, Nathan; Zacate, Matthew O.
2017-05-01
Baker et al. derived time-dependent expressions for calculating the average number of jumps per encounter and displacement probabilities for vacancy diffusion in crystal lattice systems with infinitesimal vacancy concentrations. As shown in this work, their formulation is readily extended to include finite vacancy concentrations, which allows the calculation of concentration-dependent, time-averaged quantities. This is useful because it provides a computationally efficient method to express lineshapes of nuclear spectroscopic techniques through the use of stochastic fluctuation models.
Variation in Prescription Drug Coverage for Triptans: Analysis of Insurance Formularies.
Minen, Mia T; Lindberg, Kate; Langford, Aisha; Loder, Elizabeth
2017-09-01
To analyze triptan coverage by insurers to examine (1) possible disparities in coverage for different formulations (oral, intranasal, etc.) and (2) quantity limits and stepped care requirements to obtain triptans. Triptans are FDA-approved migraine abortive medications. Patients frequently state that they have difficulty accessing triptans prescribed to them. We searched the 2015 drug formularies of commercial and government health insurers providing coverage in NY State. We created a spreadsheet with all of the commercially available triptans and included information about covered formulations, tier numbers, and quantity limits for each drug. We then calculated the number of listed plans that cover or do not cover each triptan or triptan formulation, the total number of medications not covered by an insurance provider across all of its plans, as well as the percentage of plans offered by individual companies and across all companies that covered each drug. We also calculated the number and proportion of plans that imposed quantity limits or step therapy for each drug. Of the 100 formularies searched, generic sumatriptan (all formulations), naratriptan, and zolmitriptan tablets were covered by all plans, and rizatriptan tablets and ODTs were covered by 98% of plans. Brand triptans were less likely to be covered: only 4 of 36 Medicaid plans covered brand triptans. Commercial insurers were more likely to cover brand triptans. All plans imposed quantity limits on one or more triptan formulations, with more than 80% imposing quantity limits on 14 of the 19 formulations studied. Almost all plans used tiers for cost allocation for different medications. Generic triptans were almost always in Tier 1. Brand triptans were most commonly in Tier 3. Approximately 40% of brand triptans required step therapy, compared with 11% of generic triptans. There are substantial variations in coverage and quantity limits and a high degree of complexity in triptan coverage for both government and commercial plans. © 2017 American Headache Society.
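The plan-counting arithmetic described in this study reduces to simple grouped aggregations. A toy sketch with pandas follows; the three plans and two formulations are invented, not data from the paper.

```python
import pandas as pd

# Hypothetical formulary extract: one row per (plan, drug formulation).
df = pd.DataFrame({
    "plan": ["A", "A", "B", "B", "C", "C"],
    "drug": ["sumatriptan tabs", "rizatriptan ODT"] * 3,
    "covered": [True, True, True, False, True, True],
    "quantity_limit": [True, True, True, False, False, True],
})

# Percentage of plans covering each formulation.
coverage = df.groupby("drug")["covered"].mean() * 100
print(coverage)

# Among covering plans, the share that imposes quantity limits.
limits = df[df["covered"]].groupby("drug")["quantity_limit"].mean() * 100
print(limits)
```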
Collisional-radiative modeling of tungsten at temperatures of 1200–2400 eV
Colgan, James; Fontes, Christopher; Zhang, Honglin; ...
2015-04-30
We discuss new collisional-radiative modeling calculations of tungsten at moderate temperatures of 1200 to 2400 eV. Such plasma conditions are relevant to ongoing experimental work at ASDEX Upgrade and are expected to be relevant for ITER. Our calculations are made using the Los Alamos National Laboratory (LANL) collisional-radiative modeling code ATOMIC. These calculations formed part of a submission to the recent NLTE-8 workshop held in November 2013. This series of workshops provides a forum for detailed comparison of plasma and spectral quantities from NLTE collisional-radiative modeling codes. We focus on the LANL ATOMIC calculations for tungsten that were submitted to the NLTE-8 workshop and discuss the different models that were constructed to predict the tungsten emission. In particular, we discuss comparisons between semi-relativistic configuration-average and fully relativistic configuration-average calculations. We also present semi-relativistic calculations that include fine-structure detail, and discuss the difficult problem of ensuring completeness with respect to the number of configurations included in a CR calculation.
30 CFR 36.45 - Quantity of ventilating air.
Code of Federal Regulations, 2014 CFR
2014-07-01
30 Mineral Resources 1 (2014-07-01). Quantity of ventilating air. 36.45 Section 36... TRANSPORTATION EQUIPMENT Test Requirements § 36.45 Quantity of ventilating air. (a) Results of the engine tests shall be used to calculate ventilation (cubic feet of air per minute) that shall be supplied by positive...
30 CFR 36.45 - Quantity of ventilating air.
Code of Federal Regulations, 2012 CFR
2012-07-01
30 Mineral Resources 1 (2012-07-01). Quantity of ventilating air. 36.45 Section 36... TRANSPORTATION EQUIPMENT Test Requirements § 36.45 Quantity of ventilating air. (a) Results of the engine tests shall be used to calculate ventilation (cubic feet of air per minute) that shall be supplied by positive...
30 CFR 36.45 - Quantity of ventilating air.
Code of Federal Regulations, 2013 CFR
2013-07-01
30 Mineral Resources 1 (2013-07-01). Quantity of ventilating air. 36.45 Section 36... TRANSPORTATION EQUIPMENT Test Requirements § 36.45 Quantity of ventilating air. (a) Results of the engine tests shall be used to calculate ventilation (cubic feet of air per minute) that shall be supplied by positive...
Measurements of Reynolds stress profiles in unstratified tidal flow
Stacey, M.T.; Monismith, Stephen G.; Burau, J.R.
1999-01-01
In this paper we present a method for measuring profiles of turbulence quantities using a broadband acoustic Doppler current profiler (ADCP). The method follows previous work on the continental shelf and extends the analysis to develop estimates of the errors associated with the estimation methods. ADCP data were collected in an unstratified channel, and the results of the analysis are compared to theory. This comparison shows that the method provides an estimate of the Reynolds stresses which is unbiased by Doppler noise, and an estimate of the turbulent kinetic energy (TKE) which is biased by an amount proportional to the Doppler noise. The noise in each of these quantities, as well as the bias in the TKE, matches well with the theoretical values produced by the error analysis. The quantification of profiles of Reynolds stresses simultaneous with the measurement of mean velocity profiles allows for extensive analysis of the turbulence of the flow. In this paper, we examine the relation between the turbulence and the mean flow through the calculation of u*, the friction velocity, and Cd, the coefficient of drag. Finally, we calculate quantities of particular interest in turbulence modeling and analysis, the characteristic lengthscales, including a lengthscale which represents the stream-wise scale of the eddies that dominate the Reynolds stresses. Copyright 1999 by the American Geophysical Union.
NASA Technical Reports Server (NTRS)
Holt, James B.; Monk, Timothy S.
2009-01-01
Propellant Mass Fraction (pmf) calculation methods vary throughout the aerospace industry. While typically used as a means of comparison between candidate launch vehicle designs, the actual pmf calculation method varies slightly from one entity to another. It is the purpose of this paper to present various methods used to calculate the pmf of launch vehicles. This includes fundamental methods of pmf calculation that consider only the total propellant mass and the dry mass of the vehicle; more involved methods that consider the residuals, reserves and any other unusable propellant remaining in the vehicle; and calculations excluding large mass quantities such as the installed engine mass. Finally, a historical comparison is made between launch vehicles on the basis of the differing calculation methodologies, while the unique mission and design requirements of the Ares V Earth Departure Stage (EDS) are examined in terms of impact to pmf.
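The competing pmf conventions the paper surveys can be made concrete with a few small functions. These are generic illustrations of the three families of methods named above (fundamental, residual-aware, engine-excluding); the exact definitions used by any given organization, and the numbers below, are assumptions.

```python
def pmf_basic(m_prop, m_dry):
    """Fundamental definition: total propellant over total stage mass."""
    return m_prop / (m_prop + m_dry)

def pmf_with_residuals(m_prop, m_dry, m_residual):
    """Variant counting residuals/reserves as unusable: only usable
    propellant appears in the numerator."""
    usable = m_prop - m_residual
    return usable / (m_prop + m_dry)

def pmf_excluding_engines(m_prop, m_dry, m_engines):
    """Variant excluding installed engine mass from the dry mass."""
    return m_prop / (m_prop + m_dry - m_engines)

# Illustrative numbers (not Ares V data): 100 t propellant, 12 t dry,
# 1 t residuals, 3 t installed engines.
print(round(pmf_basic(100.0, 12.0), 4))
print(round(pmf_with_residuals(100.0, 12.0, 1.0), 4))
print(round(pmf_excluding_engines(100.0, 12.0, 3.0), 4))
```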
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Prado, K
Purpose: Building up a TG-71 based electron monitor-unit (MU) calculation protocol usually involves massive measurements. This work investigates a minimum data set of measurements and its calculation accuracy and measurement time. Methods: For the 6, 9, 12, 16, and 20 MeV beams of our Varian Clinac-series linear accelerators, complete measurements were performed at different depths using 5 square applicators (6, 10, 15, 20, and 25 cm) with different cutouts (2, 3, 4, 6, 10, 15, and 20 cm up to applicator size) for 5 different SSDs. For each energy, there were 8 PDD scans and 150 point measurements for applicator factors, cutout factors, and effective SSDs that were then converted to air-gap factors for SSD 99-110 cm. The dependence of each dosimetric quantity on field size and SSD was examined to determine the minimum data set of measurements as a subset of the complete measurements. The "missing" data excluded from the minimum data set were approximated by linear or polynomial fitting functions based on the included data. The total measurement time and the calculated electron MU using the minimum and the complete data sets were compared. Results: The minimum data set includes 4 or 5 PDDs and 51 to 66 point measurements for each electron energy; more PDDs and fewer point measurements are generally needed as energy increases. Using less than 50% of the complete measurement time, the minimum data set generates acceptable MU calculation results compared to those with the complete data set. The PDD difference is within 1 mm and the calculated MU difference is less than 1.5%. Conclusion: Data set measurement for TG-71 electron MU calculations can be minimized based on the knowledge of how each dosimetric quantity depends on various setup parameters. The suggested minimum data set allows acceptable MU calculation accuracy and shortens measurement time by a few hours.
Altimeter waveform software design
NASA Technical Reports Server (NTRS)
Hayne, G. S.; Miller, L. S.; Brown, G. S.
1977-01-01
Techniques are described for preprocessing raw return waveform data from the GEOS-3 radar altimeter. Topics discussed include: (1) general altimeter data preprocessing to be done at the GEOS-3 Data Processing Center to correct altimeter waveform data for temperature calibrations, to convert between engineering and final data units and to convert telemetered parameter quantities to more appropriate final data distribution values: (2) time "tagging" of altimeter return waveform data quantities to compensate for various delays, misalignments and calculational intervals; (3) data processing procedures for use in estimating spacecraft attitude from altimeter waveform sampling gates; and (4) feasibility of use of a ground-based reflector or transponder to obtain in-flight calibration information on GEOS-3 altimeter performance.
Staggered chiral perturbation theory at next-to-leading order
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharpe, Stephen R.; Van de Water, Ruth S.
2005-06-01
We study taste and Euclidean rotational symmetry violation for staggered fermions at nonzero lattice spacing using staggered chiral perturbation theory. We extend the staggered chiral Lagrangian to O(a²p²), O(a⁴), and O(a²m), the orders necessary for a full next-to-leading order calculation of pseudo-Goldstone boson masses and decay constants including analytic terms. We then calculate a number of SO(4) taste-breaking quantities, which involve only a small subset of these next-to-leading order operators. We predict relationships between SO(4) taste-breaking splittings in masses, pseudoscalar decay constants, and dispersion relations. We also find predictions for a few quantities that are not SO(4) breaking. All these results hold also for theories in which the fourth root of the fermionic determinant is taken to reduce the number of quark tastes; testing them will therefore provide evidence for or against the validity of this trick.
Linear actuation using milligram quantities of CL-20 and TAGDNAT.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Snedigar, Shane; Salton, Jonathan Robert; Tappan, Alexander Smith
2009-07-01
There are numerous applications for small-scale actuation utilizing pyrotechnics and explosives. In certain applications, especially when multiple actuation strokes are needed or actuator reuse is required, it is desirable to have all gaseous combustion products with no condensed residue in the actuator cylinder. Toward this goal, we have performed experiments on utilizing milligram quantities of high explosives to drive a millimeter-diameter actuator with a stroke of 30 mm. Calculations were performed to select proper material quantities to provide 0.5 J of actuation energy. This was performed utilizing the thermochemical code Cheetah to calculate the impetus for numerous propellants and to select quantities based on estimated efficiencies of these propellants at small scales. Milligram quantities of propellants were loaded into a small-scale actuator and ignited with an ignition increment and hot-wire ignition. Actuator combustion chamber pressure was monitored with a pressure transducer and actuator stroke was monitored using a laser displacement meter. Total actuation energy was determined by calculating the kinetic energy of reaction mass motion against gravity. Of the materials utilized, the best performance was obtained with a mixture of 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (CL-20) and bis-triaminoguanidinium(3,3′-dinitroazotriazolate) (TAGDNAT).
Pattern of mathematic representation ability in magnetic electricity problem
NASA Astrophysics Data System (ADS)
Hau, R. R. H.; Marwoto, P.; Putra, N. M. D.
2018-03-01
The mathematical representation ability used in solving magnetic electricity problems gives information about the way students understand magnetic electricity. Students show varied patterns of mathematical representation ability when solving such problems. This study aims to determine the patterns of students' mathematical representation ability in solving magnetic electricity problems. The research method used is qualitative. The subjects of this study are fourth-semester students of the UNNES Physics Education Study Program. The data were collected by giving a written test of mathematical representation ability and interviews on the topics of field lines and Gauss's law. The students' mathematical representation ability in solving magnetic electricity problems is categorized as high, medium, or low. Students in the high category tend to use the full pattern of writing known and asked symbols, writing equations, using physical quantities, substituting quantities into equations, performing calculations, and stating the final answer. Students in the medium category tend to use several of these patterns: writing the known symbols, writing equations, using physical quantities, substituting quantities into equations, performing calculations, and stating the final answer. Students in the low category tend to use only a few of the patterns: writing known symbols, writing equations, substituting quantities into equations, performing calculations, and stating the final answer.
Fluctuations of thermodynamic quantities calculated from the fundamental equation of thermodynamics
NASA Astrophysics Data System (ADS)
Yan, Zijun; Chen, Jincan
1992-02-01
On the basis of the probability distribution of fluctuation values and the fundamental equation of thermodynamics of a given system, a simple and useful method of calculating fluctuations is presented. With this method, the fluctuations of thermodynamic quantities can be determined directly from the fundamental equation of thermodynamics. Finally, some examples are given to illustrate the use of the method.
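A standard textbook instance of the kind of result such a method reproduces is the energy fluctuation in the canonical ensemble, obtained directly from derivatives of the partition function (this is the well-known relation, not the paper's specific derivation):

```latex
% Energy fluctuations from Z(\beta) = \sum_i e^{-\beta E_i}:
\langle E \rangle = -\frac{\partial \ln Z}{\partial \beta}, \qquad
\langle (\Delta E)^2 \rangle
  = \langle E^2 \rangle - \langle E \rangle^2
  = \frac{\partial^2 \ln Z}{\partial \beta^2}
  = -\frac{\partial \langle E \rangle}{\partial \beta}
  = k_B T^2 C_V .
```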
Staggered heavy baryon chiral perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, Jon A.
2008-03-01
Although taste violations significantly affect the results of staggered calculations of pseudoscalar and heavy-light mesonic quantities, those entering staggered calculations of baryonic quantities have not been quantified. Here I develop staggered chiral perturbation theory in the light-quark baryon sector by mapping the Symanzik action into heavy baryon chiral perturbation theory. For 2+1 dynamical quark flavors, the masses of flavor-symmetric nucleons are calculated to third order in partially quenched and fully dynamical staggered chiral perturbation theory. To this order the expansion includes the leading chiral logarithms, which come from loops with virtual decuplet-like states, as well as terms of O(m{sub {pi}}{supmore » 3}), which come from loops with virtual octet-like states. Taste violations enter through the meson propagators in loops and tree-level terms of O(a{sup 2}). The pattern of taste symmetry breaking and the resulting degeneracies and mixings are discussed in detail. The resulting chiral forms are appropriate to lattice results obtained with operators already in use and could be used to study the restoration of taste symmetry in the continuum limit. I assume that the fourth root of the fermion determinant can be incorporated in staggered chiral perturbation theory using the replica method.« less
Development and Application of Collaborative Optimization Software for Plate-fin Heat Exchanger
NASA Astrophysics Data System (ADS)
Chunzhen, Qiao; Ze, Zhang; Jiangfeng, Guo; Jian, Zhang
2017-12-01
This paper introduces the design ideas behind calculation software for plate-fin heat exchangers and presents application examples. Because designing and optimizing heat exchangers involves a large amount of computation, we used Visual Basic 6.0 as the development platform to build a basic calculation software package that reduces this computational burden. The design case is a plate-fin heat exchanger designed for boiler tail flue gas, and the software is based on the traditional design method for plate-fin heat exchangers. Using the software for the design and calculation of plate-fin heat exchangers effectively reduces the amount of computation while giving results similar to traditional methods, and thus has high practical value.
Age-dependent biochemical quantities: an approach for calculating reference intervals.
Bjerner, J
2007-01-01
A parametric method is often preferred when calculating reference intervals for biochemical quantities, as non-parametric methods are less efficient and require more observations/study subjects. Parametric methods are complicated, however, by three commonly encountered features. First, biochemical quantities seldom display a Gaussian distribution; either a transformation procedure must be applied to obtain such a distribution or a more complex distribution must be used. Second, biochemical quantities are often dependent on a continuous covariate, exemplified by rising serum concentrations of MUC1 (episialin, CA15.3) with increasing age. Third, outliers often exert substantial influence on parametric estimations and therefore need to be excluded before calculations are made. The International Federation of Clinical Chemistry (IFCC) currently recommends that confidence intervals be calculated for the reference centiles obtained. However, common statistical packages allowing for the adjustment of a continuous covariate do not make this calculation. In the method described in the current study, Tukey's fence is used to eliminate outliers, and two-stage transformations (modulus-exponential-normal) are applied in order to render Gaussian distributions. Fractional polynomials are employed to model the mean and standard deviation as functions of a covariate, and the model is selected by maximum likelihood. Confidence intervals are calculated for the fitted centiles by combining parameter estimation and sampling uncertainties. Finally, the elimination of outliers was made dependent on covariates by reiteration. Though a good knowledge of statistical theory is needed to perform the analysis, the current method is rewarding because the results are of practical use in patient care.
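A stripped-down sketch of the outlier-elimination step is given below: Tukey's fence on a single biochemical quantity, followed by a naive central-95% interval. The real method models covariate-dependent means and SDs with fractional polynomials and adds confidence intervals; the data and the fence multiplier k = 1.5 here are illustrative.

```python
import numpy as np

def tukey_fence(values, k=1.5):
    """Keep observations inside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    values = np.asarray(values)
    return values[(values >= lo) & (values <= hi)]

# Illustrative reference-interval step: clean outliers, then take the
# central 95% of the (here, assumed already Gaussian) distribution.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(30.0, 5.0, 500),
                       [95.0, 110.0]])  # two gross outliers
clean = tukey_fence(data)
lo, hi = np.percentile(clean, [2.5, 97.5])
print(f"reference interval ~ [{lo:.1f}, {hi:.1f}] (n = {clean.size})")
```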
Calculations of condensation and chemistry in an aircraft contrail
NASA Technical Reports Server (NTRS)
Miake-Lye, Richard C.; Brown, R. C.; Anderson, M. R.; Kolb, C. E.
1994-01-01
The flow field, chemistry, and condensation nucleation behind a transport airplane are calculated in two regimes using two separate reacting flow codes: first the axisymmetric plume, then the three-dimensional vortex wake. The included chemical kinetics equations follow the evolution of the NO(y) and SO(x) chemical families. In the plume regime, the chemistry is coupled with the binary homogeneous formation of sulfate condensation nuclei, where the calculated nucleation rates predict that copious quantities of H2SO4/H2O nuclei are produced at subnanometer sizes. These sulfate aerosols could play a major role in the subsequent condensation of water vapor and the formation of contrails under favorable atmospheric conditions.
NASA Technical Reports Server (NTRS)
Miller, C. G., III
1972-01-01
A computer program written in FORTRAN 4 language is presented which determines expansion-tube flow quantities for real test gases CO2, N2, O2, Ar, He, and H2, or mixtures of these gases, in thermochemical equilibrium. The effects of dissociation and first and second ionization are included. Flow quantities behind the incident shock into the quiescent test gas are determined from the pressure and temperature of the quiescent test gas in conjunction with: (1) incident-shock velocity, (2) static pressure immediately behind the incident shock, or (3) pressure and temperature of the driver gas (imperfect hydrogen or helium). The effect of the possible existence of a shock reflection at the secondary diaphragm of the expansion tube is included. Expansion-tube test-section flow conditions are obtained by performing an isentropic unsteady expansion from the conditions behind the incident shock or reflected shock to either the test-region velocity or the static pressure. Both a thermochemical-equilibrium expansion and a frozen expansion are included. Flow conditions immediately behind the bow shock of a model positioned at the test section are also determined. Results from the program are compared with preliminary experimental data obtained in the Langley 6-inch expansion tube.
Matthews, M E; Waldvogel, C F; Mahaffey, M J; Zemel, P C
1978-06-01
Preparation procedures of standardized quantity formulas were analyzed for similarities and differences in production activities, and three entrée classifications were developed, based on these activities. Two formulas from each classification were selected, preparation procedures were divided into elements of production, and the MSD Quantity Food Production Code was applied. Macro elements not included in the existing Code were simulated, coded, assigned associated Time Measurement Units, and added to the MSD Quantity Food Production Code. Repeated occurrence of similar elements within production methods indicated that macro elements could be synthesized for use within one or more entrée classifications. Basic elements were grouped, simulated, and macro elements were derived. Macro elements were applied in the simulated production of 100 portions of each entrée formula. Total production time for each formula and average production time for each entrée classification were calculated. Application of macro elements indicated that this method of predetermining production time was feasible and could be adapted by quantity foodservice managers as a decision technique used to evaluate menu mix, production personnel schedules, and allocation of equipment usage. These macro elements could serve as a basis for further development and refinement of other macro elements which could be applied to a variety of menu item formulas.
Monte Carlo calculation of the radiation field at aircraft altitudes.
Roesler, S; Heinrich, W; Schraube, H
2002-01-01
Energy spectra of secondary cosmic rays are calculated for aircraft altitudes and a discrete set of solar modulation parameters and rigidity cut-off values covering all possible conditions. The calculations are based on the Monte Carlo code FLUKA and on the most recent information on the interstellar cosmic ray flux including a detailed model of solar modulation. Results are compared to a large variety of experimental data obtained on the ground and aboard aircraft and balloons, such as neutron, proton, and muon spectra and yields of charged particles. Furthermore, particle fluence is converted into ambient dose equivalent and effective dose and the dependence of these quantities on height above sea level, solar modulation, and geographical location is studied. Finally, calculated dose equivalent is compared to results of comprehensive measurements performed aboard aircraft.
An experimental investigation of a three dimensional wall jet. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Catalano, G. D.
1977-01-01
One and two point statistical properties are measured in the flow fields of a coflowing turbulent jet. Two different confining surfaces (one flat, one with large curvature) are placed adjacent to the lip of the circular nozzle; and the resultant effects on the flow field are determined. The one point quantities measured include mean velocities, turbulent intensities, velocity and concentration autocorrelations and power spectral densities, and intermittencies. From the autocorrelation curves, the Taylor microscale and the integral length scale are calculated. Two point quantities measured include velocity and concentration space-time correlations and pressure velocity correlations. From the velocity space-time correlations, iso-correlation contours are constructed along with the lines of maximum maximorum. These lines allow a picture of the flow pattern to be determined. The pressures monitored in the pressure velocity correlations are measured both in the flow field and at the surface of the confining wall(s).
Stratification calculations in a heated cryogenic oxygen storage tank at zero gravity
NASA Technical Reports Server (NTRS)
Shuttles, J. T.; Smith, G. L.
1971-01-01
A cylindrical one-dimensional model of the Apollo cryogenic oxygen storage tank has been developed to study the effect of stratification in the tank. Zero gravity was assumed, and only the thermally induced motions were considered. The governing equations were derived from conservation laws and solved on a digital computer. Realistic thermodynamic and transport properties were used. Calculations were made for a wide range of conditions. The results show the fluid behavior to be dependent on the quantity in the tank or equivalently the bulk fluid temperature. For high quantities (low temperatures) the tank pressure rose rapidly with heat addition, the heater temperature remained low, and significant pressure drop potentials accrued. For low quantities the tank pressure rose more slowly with heat addition and the heater temperature became high. A high degree of stratification resulted for all conditions; however, the stratified region extended appreciably into the tank only for the lowest tank quantity.
Atomic electron energies including relativistic effects and quantum electrodynamic corrections
NASA Technical Reports Server (NTRS)
Aoyagi, M.; Chen, M. H.; Crasemann, B.; Huang, K. N.; Mark, H.
1977-01-01
Atomic electron energies have been calculated relativistically. Hartree-Fock-Slater wave functions served as zeroth-order eigenfunctions to compute the expectation of the total Hamiltonian. A first order correction to the local approximation was thus included. Quantum-electrodynamic corrections were made. For all orbitals in all atoms with 2 ≤ Z ≤ 106, the following quantities are listed: total energies, electron kinetic energies, electron-nucleus potential energies, electron-electron potential energies consisting of electrostatic and Breit interaction (magnetic and retardation) terms, and vacuum polarization energies. These results will serve for detailed comparison of calculations based on other approaches. The magnitude of quantum electrodynamic corrections is exhibited quantitatively for each state.
Light element opacities of astrophysical interest from ATOMIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colgan, J.; Kilcrease, D. P.; Magee, N. H. Jr.
We present new calculations of local-thermodynamic-equilibrium (LTE) light element opacities from the Los Alamos ATOMIC code for systems of astrophysical interest. ATOMIC is a multi-purpose code that can generate LTE or non-LTE quantities of interest at various levels of approximation. Our calculations, which include fine-structure detail, represent a systematic improvement over previous Los Alamos opacity calculations using the LEDCOP legacy code. The ATOMIC code uses ab initio atomic structure data computed from the CATS code, which is based on Cowan's atomic structure codes, and photoionization cross section data computed from the Los Alamos ionization code GIPPER. ATOMIC also incorporates a new equation-of-state (EOS) model based on the chemical picture. ATOMIC incorporates some physics packages from LEDCOP and also includes additional physical processes, such as improved free-free cross sections and additional scattering mechanisms. Our new calculations are made for elements of astrophysical interest and for a wide range of temperatures and densities.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carmichael, G.R.; Potra, F.
1998-10-06
A major goal of this research was to quantify the interactions between UVR, ozone, and aerosols. One method of quantification was to calculate sensitivity coefficients. A novel aspect of this work was the use of Automatic Differentiation software to calculate the sensitivities. The authors demonstrated the use of ADIFOR for the first time in a dimensional framework. Automatic Differentiation was used to calculate such quantities as: sensitivities of UV-B fluxes to changes in ozone and aerosols in the stratosphere and the troposphere; changes in ozone production/destruction rates to changes in UV-B flux; and aerosol properties including loading, scattering properties (including relative humidity effects), and composition (mineral dust, soot, sulfate aerosol, etc.). The combined radiation/chemistry model offers an important test of the utility of Automatic Differentiation as a tool in atmospheric modeling.
Critical short-time dynamics in a system with interacting static and diffusive populations
NASA Astrophysics Data System (ADS)
Argolo, C.; Quintino, Yan; Gleria, Iram; Lyra, M. L.
2012-01-01
We study the critical short-time dynamical behavior of a one-dimensional model where diffusive individuals can infect a static population upon contact. The model presents an absorbing phase transition from an active to an inactive state. Previous calculations of the critical exponents based on quasistationary quantities have indicated an unusual crossover from the directed percolation to the diffusive contact process universality classes. Here we show that the critical exponents governing the slow short-time dynamic evolution of several relevant quantities, including the order parameter, its relative fluctuations, and correlation function, reinforce the lack of universality in this model. Accurate estimates show that the critical exponents are distinct in the regimes of low and high recovery rates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brinkman, J.J.; Griffioen, P.S.; Groot, S.
1987-03-01
The Netherlands have a rather complex water-management system consisting of a number of major rivers, canals, lakes, and ditches. Water-quantity management on a regional scale is necessary for an effective water-quality policy. To support water management, a computer model was developed that includes both water quality and water quantity, based on three submodels: ABOPOL for the water movement, DELWAQ for the calculation of water quality variables, and BLOOM-II for the phytoplankton growth. The northern province of Friesland was chosen as a test case for the integrated model to be developed, where water quality is highly related to the water distribution and the main trade-off is between minimizing the intake of (eutrophicated) alien water in order to minimize external nutrient load and maximizing the intake in order to flush channels and lakes. The results of the application of these models to this and to a number of hypothetical future situations are described.
Dissimilarities of reduced density matrices and eigenstate thermalization hypothesis
NASA Astrophysics Data System (ADS)
He, Song; Lin, Feng-Li; Zhang, Jia-ju
2017-12-01
We calculate various quantities that characterize the dissimilarity of reduced density matrices for a short interval of length ℓ in a two-dimensional (2D) large central charge conformal field theory (CFT). These quantities include the Rényi entropy, entanglement entropy, relative entropy, Jensen-Shannon divergence, as well as the Schatten 2-norm and 4-norm. We adopt the method of operator product expansion of twist operators, and calculate the short interval expansion of these quantities up to order ℓ⁹ for the contributions from the vacuum conformal family. The formal forms of these dissimilarity measures and the derived Fisher information metric from contributions of general operators are also given. As an application of the results, we use these dissimilarity measures to compare the excited and thermal states, and examine the eigenstate thermalization hypothesis (ETH) by showing how they behave in the high-temperature limit. This would help to understand how ETH in 2D CFT can be defined more precisely. We discuss the possibility that all the dissimilarity measures considered here vanish when comparing the reduced density matrices of an excited state and a generalized Gibbs ensemble thermal state. We also discuss ETH for a microcanonical ensemble thermal state in a 2D large central charge CFT, and find that it is approximately satisfied for a small subsystem and violated for a large subsystem.
Atmospheric State, Cloud Microphysics and Radiative Flux
Mace, Gerald
2008-01-15
Atmospheric thermodynamics, cloud properties, radiative fluxes and radiative heating rates for the ARM Southern Great Plains (SGP) site. The data represent a characterization of the physical state of the atmospheric column compiled on a five-minute temporal and 90m vertical grid. Sources for this information include raw measurements, cloud property and radiative retrievals, retrievals and derived variables from other third-party sources, and radiative calculations using the derived quantities.
Electron Interactions with Non-Linear Polyatomic Molecules and Their Radicals
1993-12-01
developed which generates SCE quantities from molecular wave functions. This progress was realized in terms of some actual calculations on some molecules... Section 4.A describes the basics of the Partial Differential Equation Theory; section 4.B describes the generalization to a finite element...
Application of the GA-BP Neural Network in Earthwork Calculation
NASA Astrophysics Data System (ADS)
Fang, Peng; Cai, Zhixiong; Zhang, Ping
2018-01-01
The calculation of earthwork quantity is a key factor in determining the project cost estimate and optimizing the construction scheme, and it is of great significance in earth and rock excavation works. Using the optimization principles of the GA-BP hybrid algorithm and a database of earthwork quantity and cost information, we design a GA-BP neural network intelligent computing model. After network training and learning, the accuracy of the results meets the requirements of actual engineering construction, providing a new approach to quantity calculation for other projects, with good potential for wider application.
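A schematic of the GA-BP idea — a genetic algorithm searches the weight space globally, then backpropagation-style gradient descent refines the best candidate — is sketched below on synthetic data. The network size, GA settings, learning rate, and the toy dataset are all arbitrary assumptions, not the model from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for an earthwork-quantity dataset: inputs could be site
# features, target the measured quantity. All values here are synthetic.
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

H = 6  # hidden units; a parameter vector packs W1 (2xH), b1, w2, b2
DIM = 2 * H + H + H + 1

def unpack(p):
    i = 0
    W1 = p[i:i + 2 * H].reshape(2, H); i += 2 * H
    b1 = p[i:i + H]; i += H
    w2 = p[i:i + H]; i += H
    return W1, b1, w2, p[i]

def predict(p, X):
    W1, b1, w2, b2 = unpack(p)
    return np.tanh(X @ W1 + b1) @ w2 + b2

def mse(p):
    return np.mean((predict(p, X) - y) ** 2)

# --- GA stage: evolve candidate weight vectors globally ---
pop = rng.normal(0, 0.5, size=(40, DIM))
for gen in range(60):
    fitness = np.array([mse(p) for p in pop])
    parents = pop[fitness.argsort()[:20]]          # truncation selection
    children = []
    for _ in range(20):
        a, b = parents[rng.integers(20, size=2)]
        child = np.where(rng.random(DIM) < 0.5, a, b)   # uniform crossover
        child = child + rng.normal(0, 0.1, DIM) * (rng.random(DIM) < 0.1)
        children.append(child)                          # sparse mutation
    pop = np.vstack([parents, children])

best = pop[np.array([mse(p) for p in pop]).argmin()]

# --- BP stage: refine the GA winner with gradient descent ---
lr, p = 0.05, best.copy()
for _ in range(500):
    W1, b1, w2, b2 = unpack(p)
    h = np.tanh(X @ W1 + b1)
    err = h @ w2 + b2 - y                  # gradient up to a factor of 2
    g_w2 = h.T @ err / len(X)
    g_b2 = err.mean()
    dh = np.outer(err, w2) * (1 - h ** 2)  # backprop through tanh
    g_W1 = X.T @ dh / len(X)
    g_b1 = dh.mean(axis=0)
    p -= lr * np.concatenate([g_W1.ravel(), g_b1, g_w2, [g_b2]])

print(f"GA best MSE = {mse(best):.4f}, after BP refinement = {mse(p):.4f}")
```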
NASA Technical Reports Server (NTRS)
Mccarty, R. D.; Weber, L. A.
1972-01-01
The tables include entropy, enthalpy, internal energy, density, volume, speed of sound, specific heat, thermal conductivity, viscosity, thermal diffusivity, Prandtl number, and the dielectric constant for 65 isobars. Quantities of special utility in heat transfer and thermodynamic calculations are also included in the isobaric tables. In addition to the isobaric tables, tables for the saturated vapor and liquid are given, which include all of the above properties, plus the surface tension. Tables for the P-T of the freezing liquid, index of refraction, and the derived Joule-Thomson inversion curve are also presented.
46 CFR 108.439 - Quantity of CO2 for protection of spaces.
Code of Federal Regulations, 2012 CFR
2012-10-01
46 Shipping 4 (2012-10-01). Quantity of CO2 for protection of spaces. 108.439... Systems § 108.439 Quantity of CO2 for protection of spaces. (a) The number of pounds of CO2 required to... calculated using the reductions allowed in 46 CFR 95.10-5(e). (c) If fuel can drain from a space to an...
46 CFR 108.439 - Quantity of CO2 for protection of spaces.
Code of Federal Regulations, 2010 CFR
2010-10-01
46 Shipping 4 (2010-10-01). Quantity of CO2 for protection of spaces. 108.439... Systems § 108.439 Quantity of CO2 for protection of spaces. (a) The number of pounds of CO2 required to... calculated using the reductions allowed in 46 CFR 95.10-5(e). (c) If fuel can drain from a space to an...
46 CFR 108.439 - Quantity of CO2 for protection of spaces.
Code of Federal Regulations, 2014 CFR
2014-10-01
46 Shipping 4 (2014-10-01). Quantity of CO2 for protection of spaces. 108.439... Systems § 108.439 Quantity of CO2 for protection of spaces. (a) The number of pounds of CO2 required to... calculated using the reductions allowed in 46 CFR 95.10-5(e). (c) If fuel can drain from a space to an...
46 CFR 108.439 - Quantity of CO2 for protection of spaces.
Code of Federal Regulations, 2011 CFR
2011-10-01
46 Shipping 4 (2011-10-01). Quantity of CO2 for protection of spaces. 108.439... Systems § 108.439 Quantity of CO2 for protection of spaces. (a) The number of pounds of CO2 required to... calculated using the reductions allowed in 46 CFR 95.10-5(e). (c) If fuel can drain from a space to an...
46 CFR 108.439 - Quantity of CO2 for protection of spaces.
Code of Federal Regulations, 2013 CFR
2013-10-01
46 Shipping 4 (2013-10-01). Quantity of CO2 for protection of spaces. 108.439... Systems § 108.439 Quantity of CO2 for protection of spaces. (a) The number of pounds of CO2 required to... calculated using the reductions allowed in 46 CFR 95.10-5(e). (c) If fuel can drain from a space to an...
Zhou, Yu-Ping; Jiang, Jin-Wu
2017-01-01
While most existing theoretical studies of borophene are based on first-principles calculations, the present work presents molecular dynamics simulations of the lattice dynamical and mechanical properties of borophene. The obtained mechanical quantities are in good agreement with previous first-principles calculations. The key ingredients for these molecular dynamics simulations are two efficient empirical potentials developed in the present work for the interactions in borophene with the low-energy triangular structure. The first is the valence force field model, which is developed with the assistance of the phonon dispersion of borophene. The valence force field model is a linear potential, so it is rather efficient for the calculation of linear quantities in borophene. The second is the Stillinger-Weber potential, whose parameters are derived from the valence force field model. The Stillinger-Weber potential is applicable in molecular dynamics simulations of nonlinear physical or mechanical quantities in borophene. PMID:28349983
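For reference, the generic Stillinger-Weber two-body term has the form φ₂(r) = Aε[B(σ/r)^p − (σ/r)^q]·exp(σ/(r − aσ)), vanishing smoothly at the cutoff r = aσ. The sketch below uses the classic silicon parameters as placeholders; the borophene parameters fitted in this paper are not reproduced here.

```python
import numpy as np

def sw_two_body(r, A=7.0496, B=0.6022, p=4, q=0,
                sigma=2.0951, a=1.80, eps=2.1683):
    """Generic Stillinger-Weber two-body energy (eV) vs. distance (A).
    Defaults are the original silicon parameters, used as placeholders."""
    r = np.asarray(r, dtype=float)
    e = np.zeros_like(r)
    inside = r < a * sigma        # the term vanishes smoothly at r = a*sigma
    x = r[inside] / sigma
    e[inside] = A * eps * (B * x**-p - x**-q) * np.exp(1.0 / (x - a))
    return e

print(sw_two_body([2.0, 2.5, 3.5, 4.0]))  # last point is beyond the cutoff
```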
NASA Astrophysics Data System (ADS)
Kwak, G.; Kim, K.; Park, Y.
2014-02-01
As maritime boundary delimitation is important for securing marine resources, in addition to its role in maritime security, interest in maritime boundary delimitation in support of national interests is increasing worldwide. In Korea, the practical importance of maritime boundary delimitation with neighbouring countries is also increasing. The quantity of marine resources obtainable under a given maritime boundary acts as an important factor in delimitation. Accordingly, a study is required to calculate the quantity of obtainable marine resources under different maritime boundaries. This study calculates obtainable marine resources under various maritime boundary scenarios advanced by the countries involved. It mainly aims at developing a GIS-based automation system to be utilized for decision making in maritime boundary delimitation. To this end, we designed a module using spatial analysis techniques to automatically calculate the waters gained and lost by each country under a given boundary, and another module to estimate the economic profits and losses for each country using the calculated areas and pricing information for the marine resources. By linking the two modules, we implemented an automatic economic profit-and-loss calculation system for GIS-based maritime boundary delimitation. The system automatically calculates the quantity of marine resources obtainable by a country for maritime boundaries to be added in the future, and is thus expected to support decision making by maritime boundary negotiators.
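The area bookkeeping at the heart of such a system can be sketched with standard GIS geometry operations, for example with the shapely library. The polygons and the resource price below are hypothetical stand-ins for real EEZ claims and marine-resource valuations.

```python
from shapely.geometry import Polygon

# Hypothetical claimed zones of two countries under one boundary scenario
# (coordinates are illustrative, in projected km, not real EEZs).
zone_country_a = Polygon([(0, 0), (100, 0), (100, 80), (0, 80)])
zone_country_b = Polygon([(60, 0), (160, 0), (160, 80), (60, 80)])

overlap = zone_country_a.intersection(zone_country_b)
print(f"disputed area = {overlap.area:.0f} km^2")

# A per-cell resource value (e.g., $/km^2 from fishery or mineral data)
# converts gained/lost area directly into economic profit or loss.
value_per_km2 = 1.2e6  # hypothetical price level
print(f"value at stake = ${overlap.area * value_per_km2:,.0f}")
```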
A simple spectral model of the dynamics of the Venus ionosphere
NASA Technical Reports Server (NTRS)
Singhal, R. P.; Whitten, R. C.
1987-01-01
A two-dimensional model of the ionosphere of Venus has been constructed by expanding pertinent quantities in Legendre polynomials. The model is simplified by including only a single ion species, O(+). Horizontal plasma flow velocity and plasma density have been calculated as a coupled system. The calculated plasma flow velocity is found to be in good agreement with observations and the results of earlier studies. Solar zenith angle dependence of plasma density, particularly on the nightside, shows some features which differ from results of earlier studies and observed values. Effects of raising or lowering the ionopause height and changing the nightside neutral atmosphere have been discussed.
Massive Photons: An Infrared Regularization Scheme for Lattice QCD+QED.
Endres, Michael G; Shindler, Andrea; Tiburzi, Brian C; Walker-Loud, André
2016-08-12
Standard methods for including electromagnetic interactions in lattice quantum chromodynamics calculations result in power-law finite-volume corrections to physical quantities. Removing these by extrapolation requires costly computations at multiple volumes. We introduce a photon mass to alternatively regulate the infrared, and rely on effective field theory to remove its unphysical effects. Electromagnetic modifications to the hadron spectrum are reliably estimated with a precision and cost comparable to conventional approaches that utilize multiple larger volumes. A significant overall cost advantage emerges when accounting for ensemble generation. The proposed method may benefit lattice calculations involving multiple charged hadrons, as well as quantum many-body computations with long-range Coulomb interactions.
Investigation of dynamic noise affecting geodynamics information in a tethered subsatellite
NASA Technical Reports Server (NTRS)
Gullahorn, G. E.
1984-01-01
The effects of a tethered satellite system's internal dynamics on the subsatellite were calculated including both overall motions (libration and attitude oscillations) and internal tether oscillations. The SKYHOOK tether simulation program was modified to operate with atmospheric density variations and to output quantities of interest. Techniques and software for analyzing the results were developed including noise spectral analysis. A program was begun for computing a stable configuration of a tether system subject to air drag. These configurations will be of use as initial conditions for SKYHOOK and, through linearized analysis, directly for stability and dynamical studies. A case study in which the subsatellite traverses an atmospheric density enhancement confirmed some theoretical calculations, and pointed out some aspects of the interaction with the tether system dynamics.
Quantum entanglement of local operators in conformal field theories.
Nozaki, Masahiro; Numasawa, Tokiro; Takayanagi, Tadashi
2014-03-21
We introduce a series of quantities which characterize a given local operator in any conformal field theory from the viewpoint of quantum entanglement. They are defined by the increased amount of (Rényi) entanglement entropy at late time for an excited state obtained by acting with the local operator on the vacuum. We consider a conformal field theory on an infinite space and take the subsystem in the definition of the entanglement entropy to be its half. We calculate these quantities for a free massless scalar field theory in two, four, and six dimensions. We find that the results are interpreted in terms of quantum entanglement of a finite number of states, including Einstein-Podolsky-Rosen states. They agree with a heuristic picture of propagation of entangled particles.
NASA Astrophysics Data System (ADS)
Kimura, Masaaki; Inoue, Haruo; Kusaka, Masahiro; Kaizu, Koichi; Fuji, Akiyoshi
This paper describes an analysis method for the friction torque and weld interface temperature during the friction process in steel friction welding. A joining mechanism model of friction welding for the wear and seizure stages was constructed from the actual joining phenomena observed in experiments. A non-steady two-dimensional heat transfer analysis of the friction process was carried out with the FEM code ANSYS. The contact pressure, heat generation quantity, and friction torque during the wear stage were calculated using the coefficient of friction, which was treated as a constant value; the thermal stress was included in the contact pressure. During the seizure stage, those values were calculated by introducing a coefficient of seizure, which depended on the seizure temperature. The relationship between the seizure temperature and the relative speed at the weld interface in the seizure stage was determined using the experimental results. In addition, the contact pressure and heat generation quantity, which depended on the relative speed of the weld interface, were solved by taking the friction pressure, the relative speed, and the yield strength of the base material as computational conditions. The calculated friction torque and weld interface temperatures of a low carbon steel joint matched the experimental results when friction pressures were 30 and 90 MPa, friction speed was 27.5 s⁻¹, and weld interface diameter was 12 mm. The calculated initial peak torque and the elapsed time to initial peak torque also matched the experimental results under the same conditions, as well as at various friction pressures.
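For a uniform contact pressure p and constant friction coefficient μ over a circular interface of radius R (the wear-stage assumption above), integrating the shear traction over the face gives the familiar torque T = ∫₀ᴿ μp(2πr)r dr = (2/3)πμpR³. The sketch below evaluates this with μ = 0.3 assumed (not a value from the paper) and the paper's 12 mm interface diameter and 30 MPa pressure, interpreting 27.5 s⁻¹ as revolutions per second.

```python
import numpy as np

def friction_torque(mu, p, R):
    """Torque from uniform pressure p (Pa) over a circular weld interface
    of radius R (m) with friction coefficient mu: T = (2/3)*pi*mu*p*R^3."""
    return (2.0 / 3.0) * np.pi * mu * p * R**3

mu, p, R = 0.3, 30e6, 6e-3          # mu assumed; p, R from the paper
T = friction_torque(mu, p, R)
omega = 2.0 * np.pi * 27.5          # 27.5 rev/s -> rad/s
print(f"torque = {T:.2f} N*m, heat input = {T * omega:.0f} W")
```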
Disasters and Impact of Sleep Quality and Quantity on National Guard Medical Personnel
2018-04-30
[Report documentation form residue; recoverable details: approved for public release, distribution unlimited; College of Nursing, 4/11/2018; study measures included critical skills questions, medication calculations (licensed personnel), and Basic Life Support (BLS).]
NASA Astrophysics Data System (ADS)
Zhou, Chi-Chun; Dai, Wu-Sheng
2018-02-01
In statistical mechanics, for a system with a fixed number of particles, e.g. a finite-size system, the thermodynamic quantities strictly need to be calculated in the canonical ensemble. Nevertheless, the calculation of the canonical partition function is difficult. In this paper, based on the mathematical theory of symmetric functions, we suggest a method for the calculation of the canonical partition function of ideal quantum gases, including ideal Bose, Fermi, and Gentile gases. Moreover, we express the canonical partition functions of interacting classical and quantum gases, given by the classical and quantum cluster expansion methods, in terms of the Bell polynomial in mathematics. The virial coefficients of ideal Bose, Fermi, and Gentile gases are calculated from the exact canonical partition function. The virial coefficients of interacting classical and quantum gases are calculated from the canonical partition function by using the expansion of the Bell polynomial, rather than from the grand canonical potential.
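For comparison, a widely used alternative to the symmetric-function formulation is the standard recursion for the canonical partition function of ideal quantum gases. The sketch below implements that textbook recursion (not the Bell-polynomial method of this paper) for bosons and fermions, with an assumed 1D harmonic spectrum.

```python
import numpy as np

def canonical_Z(levels, beta, N, sign=+1):
    """Canonical partition function of N ideal bosons (sign=+1) or
    fermions (sign=-1) via the standard recursion
        Z_N = (1/N) * sum_{k=1..N} sign^(k+1) * z(k*beta) * Z_{N-k},
    where z(beta) = sum_i exp(-beta*eps_i) is the one-particle sum."""
    z = lambda b: np.exp(-b * np.asarray(levels)).sum()
    Z = [1.0]  # Z_0 = 1
    for n in range(1, N + 1):
        Z.append(sum(sign**(k + 1) * z(k * beta) * Z[n - k]
                     for k in range(1, n + 1)) / n)
    return Z[N]

# Example: 5 particles in a truncated 1D harmonic trap (eps_n = n, n <= 50).
levels = np.arange(51)
print("bosons  :", canonical_Z(levels, beta=1.0, N=5, sign=+1))
print("fermions:", canonical_Z(levels, beta=1.0, N=5, sign=-1))
```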
Beyond-Standard-Model Tensor Interaction and Hadron Phenomenology.
Courtoy, Aurore; Baeßler, Stefan; González-Alonso, Martín; Liuti, Simonetta
2015-10-16
We evaluate the impact of recent developments in hadron phenomenology on extracting possible fundamental tensor interactions beyond the standard model. We show that a novel class of observables, including the chiral-odd generalized parton distributions, and the transversity parton distribution function can contribute to the constraints on this quantity. Experimental extractions of the tensor hadronic matrix elements, if sufficiently precise, will provide a, so far, absent testing ground for lattice QCD calculations.
ERIC Educational Resources Information Center
Wendling, Wayne
This report is divided into four sections. Section 1 is a short discussion of the economic theory underlying the construction of the cost of education index and an example of how the index is calculated. Also presented are descriptions of the factors included in the statistical analysis to control for quality, quantity, and cost differences and…
Exploring Flavor Physics with Lattice QCD
NASA Astrophysics Data System (ADS)
Du, Daping; Fermilab/MILC Collaborations Collaboration
2016-03-01
The Standard Model has been a very good description of subatomic particle physics. In the search for physics beyond the Standard Model in the context of flavor physics, it is important to sharpen our probes using gold-plated processes (such as rare B decays), which requires knowing the input parameters, such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and other nonperturbative quantities, with sufficient precision. Lattice QCD is so far the only first-principles method that can compute these quantities with competitive and systematically improvable precision using state-of-the-art simulation techniques. I will discuss the recent progress of lattice QCD calculations of some of these nonperturbative quantities and their applications in flavor physics. I will also discuss the implications and future perspectives of these calculations in flavor physics.
NASA Astrophysics Data System (ADS)
Lesiuk, Michał; Moszynski, Robert
2014-12-01
In this paper we consider the calculation of two-center exchange integrals over Slater-type orbitals (STOs). We apply the Neumann expansion of the Coulomb interaction potential and consider the calculation of all basic quantities that appear in the resulting expression. Analytical closed-form equations for all auxiliary quantities are already known, but they suffer from severe digital erosion (loss of significant digits) when some of the parameters are large or small. We derive two differential equations obeyed by the most difficult basic integrals. Taking them as a starting point, useful series expansions for small parameter values and asymptotic expansions for large parameter values are systematically derived. The resulting expansions replace the corresponding analytical expressions when the latter introduce significant cancellations. Additionally, we reconsider the numerical integration of some necessary quantities and present a new way to calculate the integrand with controlled precision. All proposed methods are combined into a general, stable algorithm. We perform extensive numerical tests of the introduced expressions to verify their validity and usefulness. The advances reported here provide the methodology to compute two-electron exchange integrals over STOs for a broad range of the nonlinear parameters and large angular momenta.
NASA Astrophysics Data System (ADS)
Cave, Robert J.; Newton, Marshall D.
1996-01-01
A new method for the calculation of the electronic coupling matrix element for electron transfer processes is introduced and results for several systems are presented. The method can be applied to ground and excited state systems and can be used in cases where several states interact strongly. Within the set of states chosen it is a non-perturbative treatment, and can be implemented using quantities obtained solely in terms of the adiabatic states. Several applications based on quantum chemical calculations are briefly presented. Finally, since quantities for adiabatic states are the only input to the method, it can also be used with purely experimental data to estimate electron transfer matrix elements.
40 CFR 98.463 - Calculating GHG emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... (CONTINUED) MANDATORY GREENHOUSE GAS REPORTING Industrial Waste Landfills § 98.463 Calculating GHG emissions. (a) For each industrial waste landfill subject to the reporting requirements of this subpart... which emissions are calculated. Wx = Quantity of waste disposed in the industrial waste landfill in year...
Significant Figures in Measurements with Uncertainty: A Working Criterion
ERIC Educational Resources Information Center
Vilchis, Abraham
2017-01-01
Generally speaking, students have difficulty reporting measurements and estimates of quantities used in the laboratory, and handling the significant figures associated with them. When required to make calculations involving quantities with different numbers of significant figures, they have difficulty in assigning the corresponding digits…
The Momentum Distribution of Liquid ⁴He
Prisk, T. R.; Bryan, M. S.; Sokol, P. E.; ...
2017-07-24
We report a high-resolution neutron Compton scattering study of liquid ⁴He under milli-Kelvin temperature control. To interpret the scattering data, we performed Quantum Monte Carlo calculations of the atomic momentum distribution and final state effects for the conditions of temperature and density considered in the experiment. There is excellent agreement between the observed scattering and ab initio calculations of its lineshape at all temperatures. We also used model fit functions to obtain from the scattering data empirical estimates of the average atomic kinetic energy and Bose condensate fraction. These quantities are also in excellent agreement with ab initio calculations. We conclude that contemporary Quantum Monte Carlo methods can furnish accurate predictions for the properties of Bose liquids, including the condensate fraction, close to the superfluid transition temperature.
mrpy: Renormalized generalized gamma distribution for HMF and galaxy ensemble properties comparisons
NASA Astrophysics Data System (ADS)
Murray, Steven G.; Robotham, Aaron S. G.; Power, Chris
2018-02-01
mrpy calculates the MRP parameterization of the halo mass function (HMF). It computes basic statistics of the truncated generalized gamma distribution (TGGD) with the TGGD class, including mean, mode, variance, skewness, pdf, and cdf. It generates MRP quantities with the MRP class, such as differential and cumulative number counts, and offers various methods for generating normalizations. It can generate the MRP-based halo mass function as a function of physical parameters via the mrp_b13 function, and fit MRP parameters to data in the form of arbitrary curves or a sample of variates with the SimFit class. mrpy also calculates analytic Hessians and Jacobians at any point, and allows the user to switch between alternative parameterizations of the same form via the reparameterize module.
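As a rough illustration of the distribution underlying the package (a self-contained sketch of the MRP/TGGD functional form with illustrative parameter values, not mrpy's own API):

```python
# dn/dm proportional to (m/Hs)**alpha * exp(-(m/Hs)**beta), truncated at mmin:
# the MRP form of the halo mass function; parameter values are illustrative.
import numpy as np

def mrp_shape(m, hs, alpha, beta):
    """Unnormalized MRP/TGGD shape as a function of halo mass m."""
    return (m / hs) ** alpha * np.exp(-((m / hs) ** beta))

m = np.logspace(10, 15, 2048)            # mass grid (solar masses), mmin = 1e10
shape = mrp_shape(m, hs=10**14.5, alpha=-1.9, beta=0.75)
pdf = shape / np.trapz(shape, m)         # numerically normalized TGGD pdf
mean_mass = np.trapz(m * pdf, m)         # e.g., mean of the truncated law
print(mean_mass)
```

The package itself additionally provides the analytic (incomplete-gamma) normalizations, Jacobians, and Hessians that this numerical sketch sidesteps.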
Rotating black holes in the teleparallel equivalent of general relativity
NASA Astrophysics Data System (ADS)
Nashed, Gamal G. L.
2016-05-01
We derive a set of solutions with flat transverse sections in the framework of the teleparallel equivalent of general relativity that describes rotating black holes. The singularities indicated by the invariants of torsion and curvature are explained; more singularities appear in the torsion scalars than in the curvature ones. The conserved quantities are discussed using Einstein-Cartan geometry, and the physics of the constants of integration is explained through the calculation of conserved quantities. These calculations show that there is a unique solution that may describe a true physical black hole.
Power flows and Mechanical Intensities in structural finite element analysis
NASA Technical Reports Server (NTRS)
Hambric, Stephen A.
1989-01-01
The identification of power flow paths in dynamically loaded structures is an important, but currently unavailable, capability for the finite element analyst. For this reason, methods for calculating power flows and mechanical intensities in finite element models are developed here. Formulations for calculating input and output powers, power flows, mechanical intensities, and power dissipations are derived for beam, plate, and solid element types. NASTRAN is used to calculate the required velocity, force, and stress results of an analysis, which a post-processor then uses to calculate power flow quantities; the SDRC I-deas Supertab module is used to view the final results. Test models include a simple truss and a beam-stiffened cantilever plate. Both test cases showed reasonable power flow fields over low to medium frequencies, with accurate power balances. Future work will include testing with more complex models, developing an interactive graphics program to view the analysis results easily and efficiently, applying shape optimization methods with power flow variables as design constraints, and adding the power flow capability to NASTRAN.
A general low frequency acoustic radiation capability for NASTRAN
NASA Technical Reports Server (NTRS)
Everstine, G. C.; Henderson, F. M.; Schroeder, E. A.; Lipman, R. R.
1986-01-01
A new capability called NASHUA is described for calculating the radiated acoustic sound pressure field exterior to a harmonically-excited arbitrary submerged 3-D elastic structure. The surface fluid pressures and velocities are first calculated by coupling a NASTRAN finite element model of the structure with a discretized form of the Helmholtz surface integral equation for the exterior fluid. After the fluid impedance is calculated, most of the required matrix operations are performed using the general matrix manipulation package (DMAP) available in NASTRAN. Far field radiated pressures are then calculated from the surface solution using the Helmholtz exterior integral equation. Other output quantities include the maximum sound pressure levels in each of the three coordinate planes, the rms and average surface pressures and normal velocities, the total radiated power and the radiation efficiency. The overall approach is illustrated and validated using known analytic solutions for submerged spherical shells subjected to both uniform and nonuniform applied loads.
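For reference, the far-field step described above uses the Helmholtz exterior integral equation; in a standard form (sign conventions vary with the assumed time dependence), the pressure at an exterior field point is obtained from the surface pressure and its normal derivative as

$$
p(\mathbf{r}) \;=\; \int_S \left[\, p(\mathbf{r}_s)\,\frac{\partial G}{\partial n_s} \;-\; G(\mathbf{r},\mathbf{r}_s)\,\frac{\partial p}{\partial n_s} \,\right] dS,
\qquad
G(\mathbf{r},\mathbf{r}_s) = \frac{e^{ikR}}{4\pi R},\quad R = |\mathbf{r}-\mathbf{r}_s|,
$$

with $\partial p/\partial n_s$ tied to the surface normal velocity through the momentum equation ($\partial p/\partial n_s = i\omega\rho\, v_n$ for $e^{-i\omega t}$ time dependence).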
NASA Astrophysics Data System (ADS)
Raburn, Daniel Louis
We have developed a preconditioned, globalized Jacobian-free Newton-Krylov (JFNK) solver for calculating equilibria with magnetic islands. The solver has been developed in conjunction with the Princeton Iterative Equilibrium Solver (PIES) and includes two notable enhancements over a traditional JFNK scheme: (1) globalization of the algorithm by a sophisticated backtracking scheme, which optimizes between the Newton and steepest-descent directions; and (2) adaptive preconditioning, wherein information regarding the system Jacobian is reused between Newton iterations to form a preconditioner for our GMRES-like linear solver. We have developed a formulation for calculating saturated neoclassical tearing modes (NTMs) which accounts for the incomplete loss of bootstrap current due to gradients of multiple physical quantities. We have applied the coupled PIES-JFNK solver to calculate saturated island widths for several shots from the Tokamak Fusion Test Reactor (TFTR) and have found reasonable agreement with experimental measurements.
Superfluid density and condensate fraction in the BCS-BEC crossover regime at finite temperatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fukushima, N.; Ohashi, Y. (Faculty of Science and Technology, Keio University, Hiyoshi, Yokohama 223)
2007-03-15
The superfluid density is a fundamental quantity describing the response to a rotation as well as in two-fluid collisional hydrodynamics. We present extensive calculations of the superfluid density ρ_s in the BCS-BEC crossover regime of a uniform superfluid Fermi gas at finite temperatures. We include strong-coupling or fluctuation effects on these quantities within a Gaussian approximation. We also incorporate the same fluctuation effects into the BCS single-particle excitations described by the superfluid order parameter Δ and Fermi chemical potential μ, using the Nozières-Schmitt-Rink approximation. This treatment is shown to be necessary for a consistent treatment of ρ_s over the entire BCS-BEC crossover. We also calculate the condensate fraction N_c as a function of the temperature, a quantity which is quite different from the superfluid density ρ_s. We show that the mean-field expression for the condensate fraction N_c is a good approximation even in the strong-coupling BEC regime. Our numerical results show how ρ_s and N_c depend on temperature, from the weak-coupling BCS region to the BEC region of tightly bound Cooper pair molecules. In a companion paper [Phys. Rev. A 74, 063626 (2006)], we derive an equivalent expression for ρ_s from the thermodynamic potential, which exhibits the role of the pairing fluctuations in a more explicit manner.
NASA Astrophysics Data System (ADS)
Arce, Julio Cesar
1992-01-01
This work focuses on time-dependent quantum theory and methods for the study of the spectra and dynamics of atomic and molecular systems. Specifically, we have addressed the following two problems: (i) development of a time-dependent spectral method for the construction of spectra of simple quantum systems. This includes the calculation of eigenenergies, the construction of bound and continuum eigenfunctions, and the calculation of photo cross-sections. Computational applications include the quadrupole photoabsorption spectra and dissociation cross-sections of molecular hydrogen from various vibrational states in its ground electronic potential-energy curve. This method is seen to provide an advantageous alternative, from both the computational and conceptual points of view, to existing standard methods. (ii) Explicit time-dependent formulation of photoabsorption processes. Analytical solutions of the time-dependent Schrödinger equation are constructed and employed for the calculation of probability densities, momentum distributions, fluxes, transition rates, expectation values, and correlation functions. These quantities are seen to establish the link between the dynamics and the calculated, or measured, spectra and cross-sections, and to clarify the dynamical nature of the excitation, transition, and ejection processes. Numerical calculations on atomic and molecular hydrogen corroborate and complement the previous results, allowing the identification of different regimes during the photoabsorption process.
Energy Weighted Angular Correlations Between Hadrons Produced in Electron-Positron Annihilation.
NASA Astrophysics Data System (ADS)
Strharsky, Roger Joseph
Electron-positron annihilation at large center of mass energy produces many hadronic particles. Experimentalists then measure the energies of these particles in calorimeters. This study investigated correlations between the angular locations of one or two such calorimeters and the angular orientation of the electron beam in the laboratory frame of reference. The calculation of these correlations includes weighting by the fraction of the total center of mass energy which the calorimeter measures. Starting with the assumption that the reaction proceeds through the intermediate production of a single quark/anti-quark pair, a simple statistical model was developed to provide a phenomenological description of the distribution of final state hadrons. The model distributions were then used to calculate the one- and two-calorimeter correlation functions. Results of these calculations were compared with available data, and several predictions were made for those quantities which had not yet been measured. Failure of the model to reproduce all of the data was discussed in terms of quantum chromodynamics, a fundamental theory which includes quark interactions.
Calculation of Radiation Protection Quantities and Analysis of Astronaut Orientation Dependence
NASA Technical Reports Server (NTRS)
Clowdsley, Martha S.; Nealy, John E.; Atwell, William; Anderson, Brooke M.; Luetke, Nathan J.; Wilson, John W.
2006-01-01
Health risk to astronauts due to exposure to ionizing radiation is a primary concern for exploration missions and may become the limiting factor for long duration missions. Methodologies for evaluating this risk in terms of radiation protection quantities such as dose, dose equivalent, gray equivalent, and effective dose are described. Environment models (galactic cosmic ray and solar particle event), vehicle/habitat geometry models, human geometry models, and transport codes are discussed and sample calculations for possible lunar and Mars missions are used as demonstrations. The dependence of astronaut health risk, in terms of dosimetric quantities, on astronaut orientation within a habitat is also examined. Previous work using a space station type module exposed to a proton spectrum modeling the October 1989 solar particle event showed that reorienting the astronaut within the module could change the calculated dose equivalent by a factor of two or more. Here the dose equivalent to various body tissues and the whole body effective dose due to both galactic cosmic rays and a solar particle event are calculated for a male astronaut in two different orientations, vertical and horizontal, in a representative lunar habitat. These calculations also show that the dose equivalent at some body locations resulting from a solar particle event can vary by a factor of two or more, but that the dose equivalent due to galactic cosmic rays has a much smaller (<15%) dependence on astronaut orientation.
Computing Q-D Relationships for Storage of Rocket Fuels
NASA Technical Reports Server (NTRS)
Jester, Keith
2005-01-01
The Quantity Distance Measurement Tool is a GIS-based computer program that aids safety engineers by calculating quantity-distance (Q-D) relationships for vessels that contain explosive chemicals used in testing rocket engines. (Q-D relationships are standard relationships between specified quantities of specified explosive materials and minimum distances by which they must be separated from persons, objects, and other explosives to obtain specified types and degrees of protection.) The program uses customized geographic-information-system (GIS) software and calculates Q-D relationships in accordance with NASA's Safety Standard for Explosives, Propellants, and Pyrotechnics. Displays generated by the program enable the identification of hazards, showing the relationships of propellant-storage-vessel safety buffers to inhabited facilities and public roads. Current Q-D information is calculated and maintained in graphical form for all vessels that contain propellants or other chemicals, the explosiveness of which is expressed in TNT equivalents [amounts of trinitrotoluene (TNT) having equivalent explosive effects]. The program is useful in the acquisition, siting, construction, and/or modification of storage vessels and other facilities in the development of an improved test-facility safety program.
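The arithmetic at the heart of such Q-D calculations is cube-root scaling of separation distance with TNT-equivalent weight; a minimal sketch (the factor k is illustrative, with governing values taken from the applicable explosives safety standard):

```python
def qd_distance(w_tnt_lb: float, k: float = 40.0) -> float:
    """Quantity-distance via cube-root scaling, D = k * W**(1/3).

    w_tnt_lb -- net explosive weight in TNT-equivalent pounds
    k        -- scaled-distance factor in ft/lb**(1/3); the value 40 is
                only illustrative of an inhabited-building-type distance
    """
    return k * w_tnt_lb ** (1.0 / 3.0)

print(qd_distance(5000.0))  # about 684 ft for 5,000 lb TNT equivalent
```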
Phonology and arithmetic in the language-calculation network.
Andin, Josefine; Fransson, Peter; Rönnberg, Jerker; Rudner, Mary
2015-04-01
Arithmetic and language processing involve similar neural networks, but the relative engagement remains unclear. In the present study we used fMRI to compare activation for phonological, multiplication and subtraction tasks, keeping the stimulus material constant, within a predefined language-calculation network including left inferior frontal gyrus and angular gyrus (AG) as well as superior parietal lobule and the intraparietal sulcus bilaterally. Results revealed a generally left lateralized activation pattern within the language-calculation network for phonology and a bilateral activation pattern for arithmetic, and suggested regional differences between tasks. In particular, we found a more prominent role for phonology than arithmetic in pars opercularis of the left inferior frontal gyrus but domain generality in pars triangularis. Parietal activation patterns demonstrated greater engagement of the visual and quantity systems for calculation than language. This set of findings supports the notion of a common, but regionally differentiated, language-calculation network. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Theoretical L-shell Coster-Kronig energies, 11 ≤ Z ≤ 103
NASA Technical Reports Server (NTRS)
Chen, M. H.; Crasemann, B.; Huang, K. N.; Aoyagi, M.; Mark, H.
1976-01-01
Relativistic relaxed-orbital calculations of L-shell Coster-Kronig transition energies have been performed for all possible transitions in atoms with atomic numbers 11 ≤ Z ≤ 103. Hartree-Fock-Slater wave functions served as zeroth-order eigenfunctions to compute the expectation of the total Hamiltonian. A first-order correction to the local approximation was thus included. Quantum-electrodynamic corrections were made. Each transition energy was computed as the difference between results of separate self-consistent-field calculations for the initial, singly ionized state and the final two-hole state. The following quantities are listed: total transition energy, 'electric' (Dirac-Hartree-Fock-Slater) contribution, magnetic and retardation contributions, and contributions due to vacuum polarization and self-energy.
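In equation form, the ΔSCF procedure described in the abstract computes each Coster-Kronig energy as (notation ours):

$$
E_{\mathrm{CK}}\big(L_i \rightarrow L_j X\big) \;=\; E_{\mathrm{SCF}}\big(L_i^{-1}\big) \;-\; E_{\mathrm{SCF}}\big(L_j^{-1}X^{-1}\big),
$$

where $L_i^{-1}$ denotes the singly ionized initial state with a hole in the $L_i$ subshell and $L_j^{-1}X^{-1}$ the final two-hole state.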
Neutron H*(10) estimation and measurements around an 18 MV linac.
Cerón Ramírez, Pablo Víctor; Díaz Góngora, José Antonio Irán; Paredes Gutiérrez, Lydia Concepción; Rivera Montalvo, Teodoro; Vega Carrillo, Héctor René
2016-11-01
Thermoluminescent dosimetry, analytical techniques, and Monte Carlo calculations were used to estimate the neutron radiation dose in a treatment room with an 18 MV linear electron accelerator. Measurements were carried out with neutron ambient dose monitors consisting of pairs of thermoluminescent dosimeters, TLD-600 (6LiF:Mg,Ti) and TLD-700 (7LiF:Mg,Ti), placed inside paraffin spheres. The measurements allowed the use of the NCRP 151 equations, which are useful for finding the relevant dosimetric quantities. In addition, photoneutrons produced by the linac head were calculated with the MCNPX code, taking into account the geometry and composition of the principal parts of the linac head. Copyright © 2016 Elsevier Ltd. All rights reserved.
Covariant spectator theory of np scattering: Deuteron quadrupole moment
Gross, Franz
2015-01-26
The deuteron quadrupole moment is calculated using two CST model wave functions obtained from the 2007 high-precision fits to np scattering data. Included in the calculation is a new class of isoscalar np interaction currents automatically generated by the nuclear force model used in these fits. The prediction for model WJC-1, with larger relativistic P-state components, is 2.5% smaller than the experimental result, in common with the inability of models prior to 2014 to predict this important quantity. However, model WJC-2, with very small P-state components, gives agreement to better than 1%, similar to the results obtained recently from χEFT predictions at order N³LO.
Significant Figures in Measurements with Uncertainty: A Working Criterion
NASA Astrophysics Data System (ADS)
Vilchis, Abraham
2017-03-01
Generally speaking, students have difficulty reporting measurements and estimates of quantities used in the laboratory, and handling the significant figures associated with them. When required to make calculations involving quantities with different numbers of significant figures, they have difficulty in assigning the corresponding digits to the final result. When, in addition, the quantities have uncertainty, the operations entailed pose an even greater challenge. The article advocates some working rules for students (and teachers) in an effort to combat this problem.
NASA Astrophysics Data System (ADS)
Alley, K. E.; Scambos, T.; Anderson, R. S.; Rajaram, H.; Pope, A.; Haran, T.
2017-12-01
Strain rates are fundamental measures of ice flow used in a wide variety of glaciological applications including investigations of bed properties, calculations of basal mass balance on ice shelves, application to Glen's flow law, and many other studies. However, despite their extensive application, strain rates are calculated using widely varying methods and length scales, and the calculation details are often not specified. In this study, we compare the results of nominal and logarithmic strain-rate calculations based on a satellite-derived velocity field of the Antarctic ice sheet generated from Landsat 8 satellite data. Our comparison highlights the differences between the two commonly used approaches in the glaciological literature. We evaluate the errors introduced by each code and their impacts on the results. We also demonstrate the importance of choosing and specifying a length scale over which strain-rate calculations are made, which can have large local impacts on other derived quantities such as basal mass balance on ice shelves. We present strain-rate data products calculated using an approximate viscous length-scale with satellite observations of ice velocity for the Antarctic continent. Finally, we explore the applications of comprehensive strain-rate maps to future ice shelf studies, including investigations of ice fracture, calving patterns, and stability analyses.
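A minimal sketch of the nominal (small-strain) calculation compared in the study, assuming a regularly gridded velocity field; the choice of differencing length scale enters through the grid spacing dx:

```python
# Nominal strain rates from gridded velocity components vx, vy (m/yr) on a
# regular grid with spacing dx (m); logarithmic rates additionally rotate
# these into a flow-following frame (omitted here).
import numpy as np

def nominal_strain_rates(vx, vy, dx):
    dvx_dy, dvx_dx = np.gradient(vx, dx)   # gradients along rows (y) and cols (x)
    dvy_dy, dvy_dx = np.gradient(vy, dx)
    exx = dvx_dx                           # along-x longitudinal strain rate (1/yr)
    eyy = dvy_dy                           # along-y longitudinal strain rate (1/yr)
    exy = 0.5 * (dvx_dy + dvy_dx)          # shear strain rate (1/yr)
    return exx, eyy, exy
```

Differencing over a longer baseline than one grid cell, as the abstract's discussion of length scales implies, amounts to computing the same gradients over a coarser effective dx.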
NASA Astrophysics Data System (ADS)
Abramopoulos, F.; Rosenzweig, C.; Choudhury, B.
1988-09-01
A physically based ground hydrology model is developed to improve the land-surface sensible and latent heat calculations in global climate models (GCMs). The processes of transpiration, evaporation from intercepted precipitation and dew, evaporation from bare soil, infiltration, soil water flow, and runoff are explicitly included in the model. The amount of detail in the hydrologic calculations is restricted to a level appropriate for use in a GCM, but each of the aforementioned processes is modeled on the basis of the underlying physical principles. Data from the Goddard Institute for Space Studies (GISS) GCM are used as inputs for off-line tests of the ground hydrology model in four 8° × 10° regions (Brazil, Sahel, Sahara, and India). Soil and vegetation input parameters are calculated as area-weighted means over the 8° × 10° gridbox. This compositing procedure is tested by comparing the resulting hydrological quantities to ground hydrology model calculations performed on the 1° × 1° cells which comprise the 8° × 10° gridbox. Results show that the compositing procedure works well except in the Sahel, where lower soil water levels and a heterogeneous land surface produce more variability in hydrological quantities, indicating that a resolution better than 8° × 10° is needed for that region. Modeled annual and diurnal hydrological cycles compare well with observations for Brazil, where real-world data are available. The sensitivity of the ground hydrology model to several of its input parameters was tested; it was found to be most sensitive to the fraction of land covered by vegetation and least sensitive to the soil hydraulic conductivity and matric potential.
LLNL Mercury Project Trinity Open Science Final Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brantley, Patrick; Dawson, Shawn; McKinley, Scott
2016-04-20
The Mercury Monte Carlo particle transport code developed at Lawrence Livermore National Laboratory (LLNL) is used to simulate the transport of radiation through urban environments. These challenging calculations include complicated geometries and require significant computational resources to complete. As a result, a question arises as to the level of convergence of the calculations with Monte Carlo simulation particle count. In the Trinity Open Science calculations, one main focus was to investigate convergence of the relevant simulation quantities with Monte Carlo particle count to assess the current simulation methodology. Both for this application space and of more general applicability, we also investigated the impact of code algorithms on parallel scaling on the Trinity machine as well as the utilization of the Trinity DataWarp burst buffer technology in Mercury via the LLNL Scalable Checkpoint/Restart (SCR) library.
NASA Astrophysics Data System (ADS)
Rakhno, I. L.; Hylen, J.; Kasper, P.; Mokhov, N. V.; Quinn, M.; Striganov, S. I.; Vaziri, K.
2018-01-01
Measurements and calculations of the air activation at a high-energy proton accelerator are described. The quantity of radionuclides released outdoors depends on operation scenarios including details of the air exchange inside the facility. To improve the prediction of the air activation levels, the MARS15 Monte Carlo code radionuclide production model was modified to be used for these studies. Measurements were done to benchmark the new model and verify its use in optimization studies for the new DUNE experiment at the Long Baseline Neutrino Facility (LBNF) at Fermilab. The measured production rates for the most important radionuclides - 11C, 13N, 15O and 41Ar - are in a good agreement with those calculated with the improved MARS15 code.
Mutual Information Rate and Bounds for It
Baptista, Murilo S.; Rubinger, Rero M.; Viana, Emilson R.; Sartorelli, José C.; Parlitz, Ulrich; Grebogi, Celso
2012-01-01
The amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks or between two time series (not fully deterministic), and to calculate its upper and lower bounds without having to calculate probabilities, but rather in terms of well-known and well-defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators. PMID:23112809
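For orientation, the quantity being bounded is (in a standard formulation, not the authors' exact notation) the asymptotic growth rate of mutual information between the two trajectories with observation time $T$:

$$
\mathrm{MIR} = \lim_{T\to\infty}\frac{I_{XY}(T)}{T},
\qquad
I_{XY} = \sum_{x,y} p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)},
$$

and the contribution of the paper is to bound MIR by well-defined dynamical quantities (e.g., expansion rates of the coupled system) so that the probabilities $p(x,y)$ never have to be estimated.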
Maiello, M L; Harley, N H
1989-07-01
The rate of 218Po and 214Pb atoms collected electrostatically inside an environmental gamma-ray and 222Rn detector (EGARD) was measured. These measurements were used to directly infer the charged fraction of 218Po and to calculate the charged fraction of 214Pb. Thirty-two percent of the 218Po was collected electrostatically using approximately -1500 V on a 2.54 cm diameter Mylar-covered disc inside a vented Al EGARD of 1 L volume. About 91% of the 214Pb was collected electrostatically under the same conditions. The measurements were performed in a calibrated 222Rn test chamber at the Environmental Measurements Laboratory (EML) using the Thomas alpha-counting method with 222Rn concentrations averaging about 4300 Bq m-3. The atomic collection rates were used with other measured quantities to calculate the thermoluminescent dosimeter (TLD) signal acquired from EGARD for exposure to 1 Bq m-3 of 222Rn. The calculations account for 222Rn progeny collection using a Teflon electret and for alpha and beta detection using TLDs inside EGARD. The measured quantities include the energies of 218Po and 214Po alpha-particles degraded by passage through the 25-micron-thick electret. The TLD responses to these alpha-particles and to beta-particles with an average energy approaching that obtained from the combined spectra of 214Pb and 214Bi were also measured. The calculated calibration factor is within 30% of the value obtained by exposing EGARD to a known concentration of 222Rn. This result supports our charged fraction estimates for 218Po and 214Pb.
Calculating Coronal Mass Ejection Magnetic Field at 1 AU Using Solar Observables
NASA Astrophysics Data System (ADS)
Chen, J.; Kunkel, V.
2013-12-01
It is well-established that most major nonrecurrent geomagnetic storms are caused by solar wind structures with long durations of strong southward (Bz < 0) interplanetary magnetic field (IMF). Such geoeffective IMF structures are associated with CME events at the Sun. Unfortunately, neither the duration nor the internal magnetic field vector of the ejecta--the key determinants of geoeffectiveness--is measurable until the observer (e.g., Earth) passes through the ejecta. In this paper, we discuss the quantitative relationships between the ejecta magnetic field at 1 AU and remotely observable solar quantities associated with the eruption of a given CME. In particular, we show that observed CME trajectories (position-time data) within, say, 1/3 AU of the Sun, contain sufficient information to allow the calculation of the ejecta magnetic field (magnitude and components) at 1 AU using the Erupting Flux Rope (EFR) model of CMEs. Furthermore, in order to accurately determine the size and arrival time of the ejecta as seen by a fixed observer at 1 AU (e.g., ACE), it is essential to accurately calculate the three-dimensional geometry of the underlying magnetic structure. Accordingly, we have extended the physics-based EFR model to include a self-consistent calculation of the transverse expansion taking into account the non-symmetric drag coupling between an expanding CME flux rope and the ambient solar wind. The dependence of the minor radius of the flux rope at 1 AU that determines the perceived size of the ejecta on solar quantities is discussed. Work supported by the NRL Base Program.
Spherically symmetric charged black holes in f(R) gravitational theories
NASA Astrophysics Data System (ADS)
Nashed, G. G. L.
2018-01-01
In this study, we have derived electric and magnetic spherically symmetric black holes for the class f(R) = R + ζR² without assuming any restrictions on the Ricci scalar. These black holes asymptotically behave as de Sitter spacetime under certain constraints. We have shown that the magnetic charge contributes to the spacetime metric similarly to the electric charge. The most interesting feature of some of these black holes is the fact that the Cauchy horizon is not identical to the event horizon. We have calculated the Ricci and Kretschmann invariants to investigate the nature of the singularities of such black holes. We have also calculated the conserved quantities to match the constants of integration with physical quantities. Finally, the thermodynamical quantities, such as the Hawking temperature and entropy, have been evaluated, and the validity of the first law of thermodynamics has been verified.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, C
Purpose: To implement a novel, automatic, institutionally customizable DVH quantities evaluation and PDF report tool on the Philips Pinnacle treatment planning system (TPS). Methods: We developed an add-on program (P3DVHStats) to enable automatic evaluation of DVH quantities (including both volume- and dose-based quantities, such as V98, V100, and D2) and automatic generation of PDF reports for EMR convenience. The implementation is based on a combination of the Philips Pinnacle scripting tool and the Java language pre-installed on each Pinnacle Sun Solaris workstation. A single Pinnacle script provides the user convenient access to the program when needed. The activated script first exports DVH data for the user-selected ROIs from the current Pinnacle plan trial; a Java program then provides a simple GUI, uses the data to compute any user-requested DVH quantities, and compares them with preset institutional DVH planning goals; if accepted by the user, the program also generates a PDF report of the results and exports it from Pinnacle to the EMR import folder via FTP. Results: The program was tested thoroughly and has been released for clinical use at our institution (Pinnacle Enterprise server with both thin-client and P3PC access) for all dosimetry and physics staff, with excellent feedback. It used to take a few minutes to calculate these DVH quantities for IMRT/VMAT plans with an MS-Excel worksheet and to manually save them as a PDF report; with the new program, it takes a few mouse clicks and less than 30 seconds to complete the same tasks. Conclusion: A Pinnacle scripting and Java based program was successfully implemented and customized to our institutional needs. It is shown to dramatically reduce the time and effort needed for DVH quantity computation and EMR reporting.
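Although the clinical tool itself is Pinnacle scripting plus Java, the DVH arithmetic it automates is language-agnostic; here is a hedged Python sketch of how V100 and D2 can be read off an exported cumulative DVH (array conventions are our assumptions, not Pinnacle's internals):

```python
import numpy as np

def dvh_metrics(dose_gy, cum_vol_pct, rx_gy):
    """dose_gy: increasing dose axis; cum_vol_pct: % volume receiving >= dose."""
    v100 = np.interp(rx_gy, dose_gy, cum_vol_pct)        # % volume at 100% of Rx
    # D2 = minimum dose received by the hottest 2% of the volume; invert the
    # (monotonically decreasing) cumulative DVH by reversing both arrays.
    d2 = np.interp(2.0, cum_vol_pct[::-1], dose_gy[::-1])
    return v100, d2

dose = np.linspace(0.0, 80.0, 801)                       # Gy
vol = 100.0 / (1.0 + np.exp((dose - 60.0) / 2.0))        # toy sigmoid DVH
print(dvh_metrics(dose, vol, rx_gy=60.0))                # (~50.0, ~67.8)
```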
NASA Technical Reports Server (NTRS)
Nisenson, P.; Papaliolios, C.
1983-01-01
An analysis of the effects of photon noise on astronomical speckle image reconstruction using the Knox-Thompson algorithm is presented. It is shown that the quantities resulting from the speckle average are biased, but that the biases are easily estimated and compensated. Calculations are also made of the convergence rate of the speckle average as a function of source brightness. An illustration of the effects of photon noise on the image recovery process is included.
[From microdosimetry to nanodosimetry--the link between radiobiology and radiation physics].
Fu, Yuchuan; Li, Ping
2014-06-01
The link between micro- and macro-parameters for radiation interactions that take place in living biological systems is described in this paper. Recent progress and developments in microdosimetry and nanodosimetry are introduced, including methods to measure and calculate the relevant micro- and nano-parameters. The relationship between radiobiology and physical quantities in microdosimetry and nanodosimetry is presented. Current problems in their application to radiation protection and radiotherapy, as well as future directions of development, are discussed.
Quantities for assessing high photon doses to the body: a calculational approach.
Eakins, Jonathan S; Ainsbury, Elizabeth A
2018-06-01
Tissue reactions are the most clinically significant consequences of high-dose exposures to ionising radiation. However, there is currently no universally recognized dose quantity that can be used to assess and report generalised risks to individuals following whole-body exposures in the high-dose range. In this work, a number of potential dose quantities are presented and discussed, with mathematical modelling techniques employed to compare them and explore when their differences are most or least manifest. The results are interpreted to propose the average (D_GRB) of the absorbed doses to the stomach, small intestine, red bone marrow, and brain as the optimum quantity for informing assessments of risk. A second, maximally conservative dose quantity (D_Max) is also suggested, which places limits on any under-estimates resulting from the adoption of D_GRB. The primary aim of this work is to spark debate, with further work required to refine the final choice of quantity or quantities most appropriate for the full range of different potential exposure scenarios.
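In symbols, the proposed quantity is just the unweighted mean of four organ absorbed doses:

$$
D_{\mathrm{GRB}} \;=\; \tfrac{1}{4}\big(D_{\mathrm{stomach}} + D_{\mathrm{small\;intestine}} + D_{\mathrm{red\;marrow}} + D_{\mathrm{brain}}\big),
$$

with $D_{\mathrm{Max}}$ playing the role of a conservative bound (the abstract does not spell out its construction; a maximum over the considered organ doses is one natural reading).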
Carbon Footprint Calculations: An Application of Chemical Principles
ERIC Educational Resources Information Center
Treptow, Richard S.
2010-01-01
Topics commonly taught in a general chemistry course can be used to calculate the quantity of carbon dioxide emitted into the atmosphere by various human activities. Each calculation begins with the balanced chemical equation for the reaction that produces the CO2 gas. Stoichiometry, thermochemistry, the ideal gas law, and dimensional…
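A worked example of the kind the article describes (standard textbook numbers, not the article's own data): the CO2 from burning one U.S. gallon of gasoline, approximated as octane,

$$
2\,\mathrm{C_8H_{18}} + 25\,\mathrm{O_2} \;\rightarrow\; 16\,\mathrm{CO_2} + 18\,\mathrm{H_2O}.
$$

One gallon is roughly $0.70\,\mathrm{kg/L} \times 3.785\,\mathrm{L} \approx 2.7\,\mathrm{kg}$ of fuel, i.e. $2700/114.2 \approx 23$ mol of octane, which yields $8 \times 23 \approx 186$ mol of CO2, or about $186 \times 44\,\mathrm{g/mol} \approx 8\,\mathrm{kg}$ of CO2 per gallon.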
Ellis, Margaret S.; Gunther, Gregory L.; Flores, Romeo M.; Stricker, Gary D.; Ochs, Allan M.; Schuenemeyer, John H.
1998-01-01
The National Coal Resource Assessment of the Wyodak-Anderson coal zone includes reports on the geology, stratigraphy, quality, and quantity of coal. The calculation of resources is only one aspect of the assessment. Without thorough documentation of the coal resource study and the methods used, the results of our study could be misinterpreted. The task of calculating coal resources included many steps, the use of several commercial software programs, and the incorporation of custom programs. The methods used for calculating coal resources for the Wyodak-Anderson coal zone vary slightly from the methods used in other study areas, and by other workers in the National Coal Resource Assessment. The Wyodak-Anderson coal zone includes up to 10 coal beds in any given location. The net coal thickness of the zone at each data point location was calculated by summing the thickness of all of the coal beds that were greater than 2.5 ft thick. The amount of interburden is not addressed or reported in this coal resource assessment. The amount of overburden reported is the amount of rock above the stratigraphically highest coal bed in the zone. The resource numbers reported do not include coal within mine or lease areas, in areas containing mapped Wyodak-Anderson clinker, or in areas where the coal is extrapolated to be less than 2.5 ft thick. The resources of the Wyodak-Anderson coal zone are reported in Ellis and others (1998). A general description of how the resources were calculated is included in that report. The purpose of this report is to document in more detail some of the parameters and methods used, define our spatial data, compare resources calculated using different grid options and calculation methods, and explain the application of confidence limits to the resource calculation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Couture, A.
2013-06-07
Nuclear facilities sometimes use hand-held plastic scintillator detectors to detect attempts to divert special nuclear material in situations where portal monitors are impractical. MCNP calculations have been performed to determine the neutron and gamma radiation field arising from a Category I quantity of weapons-grade plutonium in various shielding configurations. The shields considered were composed of combinations of lead and high-density polyethylene such that the mass of the plutonium plus shield was 22.7 kilograms. Monte-Carlo techniques were also used to determine the detector response to each of the shielding configurations. The detector response calculations were verified using field measurements of high-, medium-, and low-energy gamma-ray sources as well as a Cf-252 neutron source.
A post-processor for the PEST code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Priesche, S.; Manickam, J.; Johnson, J.L.
1992-01-01
A new post-processor has been developed for use with output from the PEST tokamak stability code. It allows us to use quantities calculated by PEST and to take better advantage of the physical picture of the plasma instability that they provide. This will improve comparisons with experimentally measured quantities as well as facilitate the understanding of theoretical studies.
NASA Astrophysics Data System (ADS)
Shim, J. S.; Rastätter, L.; Kuznetsova, M.; Bilitza, D.; Codrescu, M.; Coster, A. J.; Emery, B. A.; Fedrizzi, M.; Förster, M.; Fuller-Rowell, T. J.; Gardner, L. C.; Goncharenko, L.; Huba, J.; McDonald, S. E.; Mannucci, A. J.; Namgaladze, A. A.; Pi, X.; Prokhorov, B. E.; Ridley, A. J.; Scherliess, L.; Schunk, R. W.; Sojka, J. J.; Zhu, L.
2017-10-01
In order to assess current modeling capability of reproducing storm impacts on total electron content (TEC), we considered quantities such as TEC, TEC changes compared to quiet-time values, and the maximum values of the TEC and TEC changes during a storm. We compared the quantities obtained from ionospheric models against ground-based GPS TEC measurements during the 2006 AGU storm event (14-15 December 2006) in eight selected longitude sectors. We used 15 simulations obtained from eight ionospheric models, including empirical, physics-based, coupled ionosphere-thermosphere, and data assimilation models. To quantitatively evaluate the performance of the models in TEC prediction during the storm, we calculated skill scores such as RMS error (RMSE), normalized RMS error (NRMSE), the ratio of the modeled to observed maximum increase (Yield), and the difference between the modeled and observed peak times. Furthermore, to investigate the latitudinal dependence of model performance, the skill scores were calculated for five latitude regions. Our study shows that the RMSE of TEC and TEC changes of the model simulations ranges from about 3 TECU (total electron content unit, 1 TECU = 10^16 el m^-2) in high latitudes to about 13 TECU in low latitudes, which is larger than the latitudinal-average GPS TEC error of about 2 TECU. Most model simulations predict TEC better than TEC changes in terms of NRMSE and the difference in peak time, while the opposite holds true in terms of Yield. Model performance strongly depends on the quantities considered, the type of metrics used, and the latitude considered.
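A sketch of the scoring arithmetic described above, written for a single station time series; the normalization used for NRMSE is an assumption, since conventions vary:

```python
import numpy as np

def tec_skill_scores(model, obs, t_hours):
    """model, obs: TEC (or TEC-change) series on common times t_hours."""
    err = model - obs
    rmse = np.sqrt(np.mean(err ** 2))
    nrmse = rmse / np.sqrt(np.mean(obs ** 2))       # normalization assumed
    yield_ratio = model.max() / obs.max()           # modeled / observed maximum
    dt_peak = t_hours[model.argmax()] - t_hours[obs.argmax()]
    return rmse, nrmse, yield_ratio, dt_peak
```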
Parkhurst, David L.; Appelo, C.A.J.
1999-01-01
PHREEQC version 2 is a computer program written in the C programming language that is designed to perform a wide variety of low-temperature aqueous geochemical calculations. PHREEQC is based on an ion-association aqueous model and has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations involving reversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and irreversible reactions, which include specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters, within specified compositional uncertainty limits.New features in PHREEQC version 2 relative to version 1 include capabilities to simulate dispersion (or diffusion) and stagnant zones in 1D-transport calculations, to model kinetic reactions with user-defined rate expressions, to model the formation or dissolution of ideal, multicomponent or nonideal, binary solid solutions, to model fixed-volume gas phases in addition to fixed-pressure gas phases, to allow the number of surface or exchange sites to vary with the dissolution or precipitation of minerals or kinetic reactants, to include isotope mole balances in inverse modeling calculations, to automatically use multiple sets of convergence parameters, to print user-defined quantities to the primary output file and (or) to a file suitable for importation into a spreadsheet, and to define solution compositions in a format more compatible with spreadsheet programs. This report presents the equations that are the basis for chemical equilibrium, kinetic, transport, and inverse-modeling calculations in PHREEQC; describes the input for the program; and presents examples that demonstrate most of the program's capabilities.
NASA Astrophysics Data System (ADS)
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-11-01
Biomedical cyclotrons for the production of Positron Emission Tomography (PET) radionuclides and for radiotherapy with hadrons or ions are widely diffused and established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculating both shielding and materials activation in approximate or idealized geometry setups. The availability of Monte Carlo codes with accurate and up-to-date libraries for the transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes the systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluations of the source terms, shielding requirements, and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotrons for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron, including its energy selection system. The simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton, and the activation of target materials, the cyclotron structure, the energy degrader, the vault walls, and the soil. The model was validated against experimental measurements and compared with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurements and simulations of 0.99±0.07 was found. The saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended value: a simulation-to-IAEA ratio of 1.01±0.10 was found.
The assessment of Urban Storm Inundation
NASA Astrophysics Data System (ADS)
Setyandito, Oki; Wijayanti, Yureana; Alwan, Muhammad; Chayati, Cholilul; Meilani
2017-12-01
A sustainable and integrated plan to solve the urban storm inundation problem is an urgent issue in Indonesia. A reliable and complete dataset of urban storm inundation areas in Indonesia should be its basis, giving a clear description of inundation areas for formulating the best solution. In this study, Statistics Indonesia data for thirty-three provinces were assessed for 2000 through 2012, providing data series of urban flood area, flood frequency, and land cover changes. The condition of drainage systems in big cities must be well understood to assess infrastructure condition and performance; where inundation occurs, a drainage system problem can be inferred. Inundation data are also important for future drainage system design. This study estimates urban storm inundation areas from Statistics Indonesia data, preceded by analyzing and reviewing the capacity of existing drainage channels using the case study of Mataram, West Nusa Tenggara. Rainfall data were obtained from three rainfall stations around Mataram City. The storm water quantity was calculated using three different approaches: 1) the Rational Method (sketched below); 2) summation of existing inundation and surface runoff discharge; and 3) discharge calculated from existing channel dimensions. The results of these approaches were then compared, and the gap between the storm water quantities was taken as the quantity of inundation. The results show that 36% of the drainage channels in the Brenyok Kanan River sub-system could not accommodate the storm water runoff in this area, causing inundation. A redesign of the drainage channels using the design discharge from the Rational Method should be performed. In the area with the lowest topography, construction of a detention or storage pond is essential to prevent inundation. Furthermore, the benefits and drawbacks of the statistics database are discussed. Recommendations include utilizing more refined urban land-use typologies that can better represent the physical alteration of hydrological pathways.
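A minimal sketch of approach 1), the Rational Method, in the metric form commonly used for such drainage checks (values below are illustrative, not the Mataram study data):

```python
def rational_peak_discharge(c: float, i_mm_per_h: float, a_km2: float) -> float:
    """Peak runoff Q (m^3/s) = 0.278 * C * i * A, where C is the runoff
    coefficient, i the design rainfall intensity (mm/h), and A the catchment
    area (km^2); 0.278 converts mm/h * km^2 into m^3/s."""
    return 0.278 * c * i_mm_per_h * a_km2

print(rational_peak_discharge(c=0.7, i_mm_per_h=80.0, a_km2=2.5))  # ~38.9 m^3/s
```

Comparing this design discharge against the capacity computed from existing channel dimensions (approach 3) is what identifies the under-capacity reaches.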
Testing of concrete by laser ablation
Flesher, Dann J.; Becker, David L.; Beem, William L.; Berry, Tommy C.; Cannon, N. Scott
1997-01-01
A method of testing concrete in a structure in situ, by: directing a succession of pulses of laser radiation at a point on the structure so that each pulse effects removal of a quantity of concrete and transfers energy to the concrete; detecting a characteristic of energy which has been transferred to the concrete; determining, separately from the detecting step, the total quantity of concrete removed by the succession of pulses; and calculating a property of the concrete on the basis of the detected energy characteristic and the determined total quantity of concrete removed.
Some calculable contributions to entanglement entropy.
Hertzberg, Mark P; Wilczek, Frank
2011-02-04
Entanglement entropy appears as a central property of quantum systems in broad areas of physics. However, its precise value is often sensitive to unknown microphysics, rendering it incalculable. By considering parametric dependence on correlation length, we extract finite, calculable contributions to the entanglement entropy for a scalar field between the interior and exterior of a spatial domain of arbitrary shape. The leading term is proportional to the area of the dividing boundary; we also extract finite subleading contributions for a field defined in the bulk interior of a waveguide in 3+1 dimensions, including terms proportional to the waveguide's cross-sectional geometry: its area, perimeter length, and integrated curvature. We also consider related quantities at criticality and suggest a class of systems for which these contributions might be measurable.
NASA Langley developments in response calculations needed for failure and life prediction
NASA Technical Reports Server (NTRS)
Housner, Jerrold M.
1993-01-01
NASA Langley developments in response calculations needed for failure and life prediction are discussed. Topics covered include: structural failure analysis in concurrent engineering; accuracy of independent regional modeling demonstrated on a classical example; a functional interface method that accurately joins incompatible finite element models; extension of the interface method for insertion of local detail modeling to a curved pressurized fuselage window panel; an interface concept for joining structural regions; motivation for coupled 2D-3D analysis; a compression panel with a discontinuous stiffener, including a coupled 2D-3D model and axial surface strains at the middle of the hat stiffener; use of adaptive refinement with multiple methods; adaptive mesh refinement; and studies quantifying the effect of bow-type initial imperfections on the reliability of stiffened panels.
Estimating post-marketing exposure to pharmaceutical products using ex-factory distribution data.
Telfair, Tamara; Mohan, Aparna K; Shahani, Shalini; Klincewicz, Stephen; Atsma, Willem Jan; Thomas, Adrian; Fife, Daniel
2006-10-01
The pharmaceutical industry has an obligation to identify adverse reactions to drug products during all phases of drug development, including the post-marketing period. Estimates of population exposure to pharmaceutical products are important to the post-marketing surveillance of drugs, and provide a context for assessing the various risks and benefits, including drug safety, associated with drug treatment. This paper describes a systematic approach to estimating post-marketing drug exposure using ex-factory shipment data to estimate the quantity of medication available, and dosage information (stratified by indication or other factors as appropriate) to convert the quantity of medication to person time of exposure. Unlike the non-standardized methods often used to estimate exposure, this approach provides estimates whose calculations are explicit, documented, and consistent across products and over time. The methods can readily be carried out by an individual or small group specializing in this function, and lend themselves to automation. The present estimation approach is practical and relatively uncomplicated to implement. We believe it is a useful innovation. Copyright 2006 John Wiley & Sons, Ltd.
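The conversion at the core of the approach is simple arithmetic; a sketch with illustrative numbers (no real product implied):

```python
def person_years_exposed(units_shipped: float, mg_per_unit: float,
                         mg_per_day: float) -> float:
    """Ex-factory quantity -> person-time: total mg shipped divided by an
    assumed daily dose gives person-days, then converted to person-years.
    In practice this is stratified by indication/dose and summed."""
    person_days = units_shipped * mg_per_unit / mg_per_day
    return person_days / 365.25

# e.g., 1.2 million 50 mg tablets at a typical dose of 100 mg/day:
print(person_years_exposed(1.2e6, 50.0, 100.0))  # ~1643 person-years
```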
A New Eddy Dissipation Rate Formulation for the Terminal Area PBL Prediction System(TAPPS)
NASA Technical Reports Server (NTRS)
Charney, Joseph J.; Kaplan, Michael L.; Lin, Yuh-Lang; Pfeiffer, Karl D.
2000-01-01
The TAPPS employs the MASS model to produce mesoscale atmospheric simulations in support of the Wake Vortex project at Dallas/Fort Worth International Airport (DFW). A post-processing scheme uses the simulated three-dimensional atmospheric characteristics in the planetary boundary layer (PBL) to calculate the turbulence quantities most important to the dissipation of vortices: turbulent kinetic energy and eddy dissipation rate. TAPPS will ultimately be employed to enhance terminal area productivity by providing weather forecasts for the Aircraft Vortex Spacing System (AVOSS). The post-processing scheme utilizes experimental data and similarity theory to determine the turbulence quantities from the simulated horizontal wind field and stability characteristics of the atmosphere. Characteristic PBL quantities important to these calculations are determined using formulations from the Blackadar PBL parameterization, which is regularly employed in the MASS model to account for PBL processes in mesoscale simulations. The TAPPS forecasts are verified against high-resolution observations of the horizontal winds at DFW. Statistical assessments of the error in the wind forecasts suggest that TAPPS captures the essential features of the horizontal winds with considerable skill. Additionally, the turbulence quantities produced by the post-processor are shown to compare favorably with corresponding tower observations.
NASA Astrophysics Data System (ADS)
Wilson, H. F.
2013-12-01
First-principles atomistic simulation is a vital tool for understanding the properties of materials at the high-pressure high-temperature conditions prevalent in giant planet interiors, but properties such as solubility and phase boundaries are dependent on entropy, a quantity not directly accessible in simulation. Determining entropic properties from atomistic simulations is a difficult problem typically requiring a time-consuming integration over molecular dynamics trajectories. Here I will describe recent advances in first-principles thermodynamic calculations which substantially increase the simplicity and efficiency of thermodynamic integration and make entropic properties more readily accessible. I will also describe the use of first-principles thermodynamic calculations for understanding problems including core solubility in gas giants and superionic phase changes in ice giants, as well as future prospects for combining first-principles thermodynamics with planetary-scale models to help us understand the origin and consequences of compositional inhomogeneity in giant planet interiors.
Fission properties of superheavy nuclei for r-process calculations
NASA Astrophysics Data System (ADS)
Giuliani, Samuel A.; Martínez-Pinedo, Gabriel; Robledo, Luis M.
2018-03-01
We computed a new set of static fission properties suited for r-process calculations. The potential energy surfaces and collective inertias of 3640 nuclei in the superheavy region are obtained from self-consistent mean-field calculations using the Barcelona-Catania-Paris-Madrid energy density functional. The fission path is computed as a function of the quadrupole moment by minimizing the potential energy and exploring octupole and hexadecapole deformations. The spontaneous fission lifetimes are evaluated employing different schemes for the collective inertias and vibrational energy corrections. This allows us to explore the sensitivity of the lifetimes to those quantities together with the collective ground-state energy along the superheavy landscape. We computed neutron-induced stellar reaction rates relevant for r-process nucleosynthesis using the Hauser-Feshbach statistical approach and studied the impact of collective inertias. The competition between different reaction channels including neutron-induced rates, spontaneous fission, and α decay is discussed for typical r-process conditions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rakhno, I. L.; Hylen, J.; Kasper, P.
2017-10-04
Measurements and calculations of the air activation at a high-energy proton accelerator are described. The quantity of radionuclides released outdoors depends on operation scenarios, including details of the air exchange inside the facility. To improve the prediction of the air activation levels, the MARS15 Monte Carlo code radionuclide production model was modified to be used for these studies. Measurements were done to benchmark the new model and verify its use in optimization studies for the new DUNE experiment at the Long Baseline Neutrino Facility (LBNF) at Fermilab. The measured production rates for the most important radionuclides – 11C, 13N, 15O, and 41Ar – are in good agreement with those calculated with the improved MARS15 code.
Implementation of the reduced charge state method of calculating impurity transport
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crume, E.C. Jr.; Arnurius, D.E.
1982-07-01
A recent review article by Hirshman and Sigmar includes expressions needed to calculate the parallel friction coefficients, the essential ingredients of the plateau-Pfirsch-Schluter transport coefficients, using the method of reduced charge states. These expressions have been collected and an expanded notation introduced in some cases to facilitate differentiation between reduced charge state and full charge state quantities. A form of the Coulomb logarithm relevant to the method of reduced charge states is introduced. This method of calculating the f_ij^ab has been implemented in the impurity transport simulation code IMPTAR and has resulted in an overall reduction in computation time of approximately 25% for a typical simulation of impurity transport in the Impurity Study Experiment (ISX-B). Results obtained using this treatment are almost identical to those obtained using an earlier approximate theory of Hirshman.
Modules for estimating solid waste from fossil-fuel technologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crowther, M.A.; Thode, H.C. Jr.; Morris, S.C.
1980-10-01
Solid waste has become a subject of increasing concern to energy industries for several reasons. Increasingly stringent air and water pollution regulations result in a larger fraction of residuals in the form of solid wastes. Control technologies, particularly flue gas desulfurization, can multiply the amount of waste. With the renewed emphasis on coal utilization and the likelihood of oil shale development, increased amounts of solid waste will be produced. In the past, solid waste residuals used for environmental assessment have tended only to include total quantities generated. To look at environmental impacts, however, data on the composition of the solid wastes are required. Computer modules for calculating the quantities and composition of solid waste from major fossil fuel technologies were therefore developed and are described in this report. Six modules have been produced covering physical coal cleaning, conventional coal combustion with flue gas desulfurization, atmospheric fluidized-bed combustion, coal gasification using the Lurgi process, coal liquefaction using the SRC-II process, and oil shale retorting. Total quantities of each solid waste stream are computed together with the major components and a number of trace elements and radionuclides.
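As a rough illustration of the kind of mass balance such a module performs, here is a hedged sketch for the flue gas desulfurization (FGD) case; the stoichiometric factors are textbook values for limestone scrubbing to gypsum, not numbers from the report:

```python
# Hedged FGD waste mass balance; inputs and capture efficiency are invented.

M_S = 32.06        # g/mol, sulfur
M_GYPSUM = 172.17  # g/mol, CaSO4.2H2O

def fgd_solid_waste(coal_tonnes, sulfur_frac, ash_frac, so2_capture=0.90):
    """Return (ash, gypsum) tonnes produced from burning coal_tonnes of coal."""
    ash = coal_tonnes * ash_frac
    sulfur_captured = coal_tonnes * sulfur_frac * so2_capture
    gypsum = sulfur_captured * (M_GYPSUM / M_S)  # each kg S captured -> ~5.4 kg gypsum
    return ash, gypsum

ash, gypsum = fgd_solid_waste(coal_tonnes=1e6, sulfur_frac=0.03, ash_frac=0.10)
print(f"Ash: {ash:,.0f} t, FGD gypsum: {gypsum:,.0f} t")
```

The molar-mass ratio is what makes the abstract's point concrete: desulfurization turns each tonne of captured sulfur into several tonnes of solid waste.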
NASA Astrophysics Data System (ADS)
Jones, Emlyn M.; Baird, Mark E.; Mongin, Mathieu; Parslow, John; Skerratt, Jenny; Lovell, Jenny; Margvelashvili, Nugzar; Matear, Richard J.; Wild-Allen, Karen; Robson, Barbara; Rizwi, Farhan; Oke, Peter; King, Edward; Schroeder, Thomas; Steven, Andy; Taylor, John
2016-12-01
Skillful marine biogeochemical (BGC) models are required to understand a range of coastal and global phenomena such as changes in nitrogen and carbon cycles. The refinement of BGC models through the assimilation of variables calculated from observed in-water inherent optical properties (IOPs), such as phytoplankton absorption, is problematic. Empirically derived relationships between IOPs and variables such as chlorophyll-a concentration (Chl a), total suspended solids (TSS) and coloured dissolved organic matter (CDOM) have been shown to have errors that can exceed 100 % of the observed quantity. These errors are greatest in shallow coastal regions, such as the Great Barrier Reef (GBR), due to the additional signal from bottom reflectance. Rather than assimilate quantities calculated using IOP algorithms, this study demonstrates the advantages of assimilating quantities calculated directly from the less error-prone satellite remote-sensing reflectance (RSR). To assimilate the observed RSR, we use an in-water optical model to produce an equivalent simulated RSR and calculate the mismatch between the observed and simulated quantities to constrain the BGC model with a deterministic ensemble Kalman filter (DEnKF). The traditional assumption that simulated surface Chl a is equivalent to the remotely sensed OC3M estimate of Chl a resulted in a forecast error of approximately 75 %. We show this error can be halved by instead using simulated RSR to constrain the model via the assimilation system. When the analysis and forecast fields from the RSR-based assimilation system are compared with the non-assimilating model, a comparison against independent in situ observations of Chl a, TSS and dissolved inorganic nutrients (NO3, NH4 and DIP) showed that errors are reduced by up to 90 %. In all cases, the assimilation system improves the simulation compared to the non-assimilating model. Our approach allows for the incorporation of vast quantities of remote-sensing observations that have in the past been discarded due to shallow water and/or artefacts introduced by terrestrially derived TSS and CDOM or the lack of a calibrated regional IOP algorithm.
Hart, George W.; Kern, Jr., Edward C.
1987-06-09
An apparatus and method is provided for monitoring a plurality of analog ac circuits by sampling the voltage and current waveform in each circuit at predetermined intervals, converting the analog current and voltage samples to digital format, storing the digitized current and voltage samples, and using the stored digitized current and voltage samples to calculate a variety of electrical parameters, some of which are derived from the stored samples. The non-derived quantities are repeatedly calculated and stored over many separate cycles and then averaged. The derived quantities are then calculated at the end of an averaging period. This produces a more accurate reading, especially when averaging over a period in which the power varies over a wide dynamic range. Frequency is measured by timing three cycles of the voltage waveform using the upward zero crossover point as a starting point for a digital timer.
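A minimal sketch of the averaging scheme the patent describes: non-derived quantities (RMS voltage, RMS current, real power) are computed per cycle and averaged, while derived quantities (apparent power, power factor) are computed only at the end of the averaging period. The synthetic waveforms, sample counts, and 30-degree phase lag are invented:

```python
import math

def cycle_quantities(v, i):
    """Per-cycle 'non-derived' quantities from one cycle of samples."""
    n = len(v)
    vrms = math.sqrt(sum(x * x for x in v) / n)
    irms = math.sqrt(sum(x * x for x in i) / n)
    p = sum(a * b for a, b in zip(v, i)) / n  # real power
    return vrms, irms, p

def monitor(cycles):
    """Average non-derived quantities over many cycles, then derive the rest."""
    acc = [0.0, 0.0, 0.0]
    for v, i in cycles:
        for k, q in enumerate(cycle_quantities(v, i)):
            acc[k] += q
    vrms, irms, p = (a / len(cycles) for a in acc)
    s = vrms * irms            # apparent power, derived at end of period
    pf = p / s if s else 0.0   # power factor, derived at end of period
    return {"Vrms": vrms, "Irms": irms, "P": p, "S": s, "PF": pf}

# Synthetic test: 60 cycles, 128 samples/cycle, current lagging 30 degrees.
N = 128
cycles = [
    ([170 * math.sin(2 * math.pi * k / N) for k in range(N)],
     [10 * math.sin(2 * math.pi * k / N - math.pi / 6) for k in range(N)])
    for _ in range(60)
]
print(monitor(cycles))
```

Deriving the power factor from the averaged accumulators, rather than averaging per-cycle power factors, is what keeps the reading accurate when power swings over a wide dynamic range.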
Calculation of Hazard Category 2/3 Threshold Quantities Using Contemporary Dosimetric Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, William C.
The purpose of this report is to describe the methodology and selection of input data utilized to calculate updated Hazard Category 2 and Hazard Category 3 Threshold Quantities (TQs) using contemporary dosimetric information. The calculation of the updated TQs will be considered for use in the revision to the Department of Energy (DOE) Technical Standard (STD-) 1027-92 Change Notice (CN)-1, “Hazard Categorization and Accident Analysis Techniques for Compliance with DOE Order 5480.23, Nuclear Safety Analysis Reports.” The updated TQs documented in this report complement an effort previously undertaken by the National Nuclear Security Administration (NNSA), which in 2014 issued revised Supplemental Guidance documenting the calculation of updated TQs for approximately 100 radionuclides listed in DOE-STD-1027-92, CN-1. The calculations documented in this report complement the NNSA effort by expanding the set of radionuclides to more than 1,250 radionuclides with a published TQ. The development of this report was sponsored by the Department of Energy’s Office of Nuclear Safety (AU-30) within the Associate Under Secretary for Environment, Health, Safety, and Security organization.
Non-symbolic arithmetic in adults and young children.
Barth, Hilary; La Mont, Kristen; Lipton, Jennifer; Dehaene, Stanislas; Kanwisher, Nancy; Spelke, Elizabeth
2006-01-01
Five experiments investigated whether adults and preschool children can perform simple arithmetic calculations on non-symbolic numerosities. Previous research has demonstrated that human adults, human infants, and non-human animals can process numerical quantities through approximate representations of their magnitudes. Here we consider whether these non-symbolic numerical representations might serve as a building block of uniquely human, learned mathematics. Both adults and children with no training in arithmetic successfully performed approximate arithmetic on large sets of elements. Success at these tasks did not depend on non-numerical continuous quantities, modality-specific quantity information, the adoption of alternative non-arithmetic strategies, or learned symbolic arithmetic knowledge. Abstract numerical quantity representations therefore are computationally functional and may provide a foundation for formal mathematics.
Factors affecting calculation of L
NASA Astrophysics Data System (ADS)
Ciotola, Mark P.
2001-08-01
A detectable extraterrestrial civilization can be modeled as a series of successive regimes over time, each of which is detectable for a certain proportion of its lifecycle. This methodology can be utilized to produce an estimate for L. Potential components of L include quantity of fossil fuel reserves, solar energy potential, quantity of regimes over time, lifecycle patterns of regimes, proportion of its lifecycle a regime is actually detectable, and downtime between regimes. Relationships between these components provide a means of calculating the lifetime of communicative species in a detectable state, L. An example of how these factors interact is provided, utilizing values that are reasonable given known astronomical data for components such as solar energy potential, while existing knowledge about the terrestrial case is used as a baseline for other components, including fossil fuel reserves, quantity of regimes over time, lifecycle patterns of regimes, proportion of its lifecycle a regime is actually detectable, and gaps of time between regimes due to recovery from catastrophic war or resource exhaustion. A range of values is calculated for L when parameters are established for each component so as to determine the lowest and highest values of L.
A roadmap for SETI research at the SETI Institute for the next few decades identified three different approaches. 1) Continue the radio search: build an affordable array incorporating consumer market technologies, expand the search frequency, and increase the target list to 100,000 stars. This array will also serve as a technology demonstration and enable the international radio astronomy community to realize an array that is a hundred times larger and capable (among other things) of searching a million stars. 2) Begin searches for very fast optical pulses from a million stars. 3) As Moore's Law delivers increased computational capacity, build an omni-directional sky survey array capable of detecting strong, transient, radio signals from billions of stars. SETI could succeed tomorrow, or it may be an endeavor for multiple generations. We are a very young technology in a very old galaxy. While our own leakage radiation continues to outshine the Sun at many frequencies, we remain detectable to others. When our use of the spectrum becomes more efficient, it will be time to consider deliberate transmissions and the really tough questions: Who will speak for Earth? What will they say?
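A toy sketch of how the L components listed in the first paragraph above (regime count, regime lifetime, detectable proportion) might combine into lowest and highest values of L; every parameter range below is invented for illustration:

```python
from itertools import product

def detectable_lifetime(n_regimes, regime_years, detectable_frac):
    """Total time (years) a civilization of successive regimes is detectable."""
    return n_regimes * regime_years * detectable_frac

# Invented low/high bounds for each component.
ranges = {
    "n_regimes": (1, 10),
    "regime_years": (50, 500),
    "detectable_frac": (0.1, 0.8),
}
# Evaluate L at every corner of the parameter box to bracket it.
values = [detectable_lifetime(*combo) for combo in product(*ranges.values())]
print(f"L between {min(values):,.0f} and {max(values):,.0f} years")
```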
Molecular Dynamics in Physiological Solutions: Force Fields, Alkali Metal Ions, and Ionic Strength.
Zhang, Chao; Raugei, Simone; Eisenberg, Bob; Carloni, Paolo
2010-07-13
The monovalent ions Na(+), K(+), and Cl(-) are present in any living organism. The fundamental thermodynamic properties of solutions containing such ions are given by the excess (electro-)chemical potential differences of single ions at finite ionic strength. This quantity is key for many biological processes, including ion permeation in membrane ion channels and DNA-protein interaction. It is given by a chemical contribution, related to the ion activity, and an electric contribution, related to the Galvani potential of the water/air interface. Here we investigate molecular dynamics based predictions of these quantities by using a variety of ion/water force fields commonly used in biological simulation, namely the newly developed AMBER, CHARMM, OPLS, and Dang95 force fields with TIP3P and SPC/E water. Comparison with experiment is made with the corresponding values for salts, for which data are available. The calculations based on the newly developed AMBER force field with TIP3P water agree well with experiment for both KCl and NaCl electrolytes in water solutions, as previously reported. The simulations based on the CHARMM-TIP3P and Dang95-SPC/E force fields agree well for the KCl and NaCl solutions, respectively. The other models are not as accurate. Single-cation excess (electro-)chemical potential differences turn out to be similar for all the force fields considered here. In the case of KCl, the calculated electric contribution is consistent with higher level calculations. Instead, such agreement is not found with NaCl. Finally, we found that the calculated activities for single Cl(-) ions turn out to depend clearly on the type of counterion used, with all the force fields investigated. The implications of these findings for biomolecular systems are discussed.
Influence of wind speed averaging on estimates of dimethylsulfide emission fluxes
Chapman, E. G.; Shaw, W. J.; Easter, R. C.; ...
2002-12-03
The effect of various wind-speed-averaging periods on calculated DMS emission fluxes is quantitatively assessed. Here, a global climate model and an emission flux module were run in stand-alone mode for a full year. Twenty-minute instantaneous surface wind speeds and related variables generated by the climate model were archived, and corresponding 1-hour-, 6-hour-, daily-, and monthly-averaged quantities calculated. These various time-averaged, model-derived quantities were used as inputs in the emission flux module, and DMS emissions were calculated using two expressions for the mass transfer velocity commonly used in atmospheric models. Results indicate that the time period selected for averaging wind speeds can affect the magnitude of calculated DMS emission fluxes. A number of individual marine cells within the global grid show DMS emission fluxes that are 10-60% higher when emissions are calculated using 20-minute instantaneous model time step winds rather than monthly-averaged wind speeds, and at some locations the differences exceed 200%. Many of these cells are located in the southern hemisphere where anthropogenic sulfur emissions are low and changes in oceanic DMS emissions may significantly affect calculated aerosol concentrations and aerosol radiative forcing.
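The effect follows from the nonlinearity of the mass transfer velocity: for a quadratic k(u), the mean of k over fluctuating winds exceeds k of the mean wind. A sketch assuming a Wanninkhof (1992)-style quadratic expression (the abstract does not name the two expressions it used) and synthetic winds:

```python
import random
random.seed(1)

def transfer_velocity(u10):
    """Quadratic mass transfer velocity (cm/h), Wanninkhof (1992) style;
    the Schmidt-number factor is omitted for simplicity."""
    return 0.31 * u10**2

# Synthetic 20-minute winds for one month (invented mean and variability).
winds = [max(0.0, random.gauss(7.0, 3.0)) for _ in range(30 * 72)]

k_instant = sum(transfer_velocity(u) for u in winds) / len(winds)
k_monthly = transfer_velocity(sum(winds) / len(winds))
print(f"mean of k(u): {k_instant:.1f} cm/h")
print(f"k(mean u):    {k_monthly:.1f} cm/h")
print(f"underestimate from monthly averaging: {1 - k_monthly / k_instant:.0%}")
```

Because E[u^2] > (E[u])^2 whenever the wind varies, monthly-averaged winds systematically underestimate the flux, consistent with the sign of the differences reported above.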
Testing of concrete by laser ablation
Flesher, D.J.; Becker, D.L.; Beem, W.L.; Berry, T.C.; Cannon, N.S.
1997-01-07
A method is disclosed for testing concrete in a structure in situ, by: directing a succession of pulses of laser radiation at a point on the structure so that each pulse effects removal of a quantity of concrete and transfers energy to the concrete; detecting a characteristic of energy which has been transferred to the concrete; determining, separately from the detecting step, the total quantity of concrete removed by the succession of pulses; and calculating a property of the concrete on the basis of the detected energy characteristic and the determined total quantity of concrete removed. 1 fig.
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2014-01-01
This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, sparse tensorization methods [2] utilizing node-nested hierarchies, and sampling methods [4] for high-dimensional random variable spaces.
NASA Astrophysics Data System (ADS)
Shadrack Jabes, B.; Krekeler, C.; Klein, R.; Delle Site, L.
2018-05-01
We employ the Grand Canonical Adaptive Resolution Simulation (GC-AdResS) molecular dynamics technique to test the spatial locality of the 1-ethyl 3-methyl imidazolium chloride liquid. In GC-AdResS, atomistic details are kept only in an open sub-region of the system while the environment is treated at a coarse-grained level; thus, if spatial quantities calculated in such a sub-region agree with the equivalent quantities calculated in a full atomistic simulation, then the atomistic degrees of freedom outside the sub-region play a negligible role. The size of the sub-region fixes the degree of spatial locality of a certain quantity. We show that even for sub-regions whose radius corresponds to the size of a few molecules, spatial properties are reasonably reproduced, thus suggesting a high degree of spatial locality, a hypothesis put forward also by other researchers that seems to play an important role in the characterization of fundamental properties of a large class of ionic liquids.
Hermann, Gunter; Pohl, Vincent; Tremblay, Jean Christophe; Paulus, Beate; Hege, Hans-Christian; Schild, Axel
2016-06-15
ORBKIT is a toolbox for postprocessing electronic structure calculations based on a highly modular and portable Python architecture. The program allows computing a multitude of electronic properties of molecular systems on arbitrary spatial grids from the basis set representation of its electronic wavefunction, as well as several grid-independent properties. The required data can be extracted directly from the standard output of a large number of quantum chemistry programs. ORBKIT can be used as a standalone program to determine standard quantities, for example, the electron density, molecular orbitals, and derivatives thereof. The cornerstone of ORBKIT is its modular structure. The existing basic functions can be arranged in an individual way and can be easily extended by user-written modules to determine any other derived quantity. ORBKIT offers multiple output formats that can be processed by common visualization tools (VMD, Molden, etc.). Additionally, ORBKIT possesses routines to order molecular orbitals computed at different nuclear configurations according to their electronic character and to interpolate the wavefunction between these configurations. The program is open-source under GNU-LGPLv3 license and freely available at https://github.com/orbkit/orbkit/. This article provides an overview of ORBKIT with particular focus on its capabilities and applicability, and includes several example calculations. © 2016 Wiley Periodicals, Inc.
Neutron Star Models in Alternative Theories of Gravity
NASA Astrophysics Data System (ADS)
Manolidis, Dimitrios
We study the structure of neutron stars in a broad class of alternative theories of gravity. In particular, we focus on Scalar-Tensor theories and f(R) theories of gravity. We construct static and slowly rotating numerical star models for a set of equations of state, including a polytropic model and more realistic equations of state motivated by nuclear physics. Observable quantities such as masses, radii, etc. are calculated for a set of parameters of the theories. Specifically for Scalar-Tensor theories, we also calculate the sensitivities of the mass and moment of inertia of the models to variations in the asymptotic value of the scalar field at infinity. These quantities enter post-Newtonian equations of motion and gravitational waveforms of two body systems that are used for gravitational-wave parameter estimation, in order to test these theories against observations. The construction of numerical models of neutron stars in f(R) theories of gravity has been difficult in the past. Using a new formalism by Jaime, Patino and Salgado we were able to construct models with high interior pressure, namely p_c > ρ_c/3, both for constant density models and models with a polytropic equation of state. Thus, we have shown that earlier objections to f(R) theories on the basis of the inability to construct viable neutron star models are unfounded.
NASA Technical Reports Server (NTRS)
Sonnabend, David
1995-01-01
In a paper here last year, an idea was put forward that much greater performance could be obtained from an observer, relative to a Kalman filter, if more general performance indices were adopted and the full power spectra of all the noises were employed. The considerable progress since then is reported here. Included are an extension of the theory to regulators, direct calculation of the theory's fundamental quantities - the noise effect integrals - for several theoretical spectra, and direct derivations of the Riccati equations of LQG (Linear-Quadratic-Gaussian) and Kalman theory, yielding new insights.
Spacecraft mass estimation, relationships and engine data: Task 1.1 of the lunar base systems study
NASA Technical Reports Server (NTRS)
1988-01-01
A collection of scaling equations, weight statements, scaling factors, etc., useful for doing conceptual designs of spacecraft are given. Rules of thumb and methods of calculating quantities of interest are provided. Basic relationships for conventional, and several non-conventional, propulsion systems (nuclear, solar electric and solar thermal) are included. The equations and other data were taken from a number of sources and are not at all consistent with each other in level of detail or method, but provide useful references for early estimation purposes.
Spectroscopic diagnostics of solar flares
NASA Astrophysics Data System (ADS)
Bely-Dubau, F.; Dubau, J.; Faucher, P.; Loulergue, M.; Steenman-Clarke, L.
Observations made with the X-ray polychromator (XRP) on board the Solar Maximum Mission satellite were analyzed. Data from the bent crystal spectrometer portion of the XRP experiment, in the spectral domain 1 to 3 A, with high spectral and temporal resolution, were used. Results for the spectrum analysis of iron are given. The possibility of polarization effects is considered. Although it is demonstrated that hyperfine analyses of a given spectrum are obtainable, provided calculations include large quantities of high precision atomic data, the interpretation is limited by the hypothesis of homogeneity of the emitting plasma.
40 CFR 98.397 - Records that must be retained.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., natural gas liquids, and biomass, as well as crude oil quantities measured on site at a refinery... gas liquids, biomass, and feedstocks reported under this subpart. (d) Reporters shall maintain... petroleum product or natural gas liquid for which CO2 emissions were calculated using Calculation...
40 CFR 98.397 - Records that must be retained.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., natural gas liquids, and biomass, as well as crude oil quantities measured on site at a refinery... gas liquids, biomass, and feedstocks reported under this subpart. (d) Reporters shall maintain... petroleum product or natural gas liquid for which CO2 emissions were calculated using Calculation...
40 CFR 98.397 - Records that must be retained.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., natural gas liquids, and biomass, as well as crude oil quantities measured on site at a refinery... gas liquids, biomass, and feedstocks reported under this subpart. (d) Reporters shall maintain... petroleum product or natural gas liquid for which CO2 emissions were calculated using Calculation...
S parameter and pseudo Nambu-Goldstone boson mass from lattice QCD.
Shintani, E; Aoki, S; Fukaya, H; Hashimoto, S; Kaneko, T; Matsufuru, H; Onogi, T; Yamada, N
2008-12-12
We present a lattice calculation of L10, one of the low-energy constants in chiral perturbation theory, and the charged-neutral pion squared-mass splitting, using dynamical overlap fermions. The exact chiral symmetry of the overlap fermion allows us to reliably extract these quantities from the difference of the vacuum polarization functions for vector and axial-vector currents. In the context of technicolor models, these two quantities are read as the S parameter and the pseudo Nambu-Goldstone boson mass, respectively, and play an important role in discriminating the models from others. This calculation can serve as a feasibility study of the lattice techniques for more general technicolor gauge theories.
Deriving proper measurement uncertainty from Internal Quality Control data: An impossible mission?
Ceriotti, Ferruccio
2018-03-30
Measurement uncertainty (MU) is a "non-negative parameter characterizing the dispersion of the quantity values being attributed to a measurand, based on the information used". In the clinical laboratory, the most convenient way to calculate MU is the "top-down" approach based on the use of Internal Quality Control data. As indicated in the definition, MU depends on the information used for its calculation, and so different estimates of MU can be obtained. The most problematic aspect is how to deal with bias. In fact, bias is difficult to detect and quantify, and it should be corrected, including only the uncertainty derived from this correction. Several approaches to calculate MU starting from Internal Quality Control data are presented. The minimum requirement is to use only the intermediate precision data, provided that they include 6 months of results obtained with a commutable quality control material at a concentration close to the clinical decision limit. This minimal approach is convenient for all those measurands that are especially used for monitoring, or where a reference measurement system does not exist and so a reference for calculating the bias is lacking. Other formulas are presented and commented: including the uncertainty of the value of the calibrator; including the bias from a commutable certified reference material or from a material specifically prepared for trueness verification; and including the bias derived from External Quality Assessment schemes or from the historical mean of the laboratory. MU is an important parameter, but a single, agreed-upon way to calculate it in a clinical laboratory is not yet available. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
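A minimal sketch of the minimum-requirement calculation described above (intermediate precision from IQC data, optionally combined with the calibrator uncertainty and a bias-correction uncertainty); the IQC values are invented:

```python
import statistics

def measurement_uncertainty(iqc_results, u_cal=0.0, u_bias=0.0, k=2.0):
    """Top-down expanded MU from >= 6 months of IQC results at one level.

    iqc_results : IQC values for a commutable control material
    u_cal       : standard uncertainty of the calibrator value (optional)
    u_bias      : standard uncertainty of the bias correction (optional)
    k           : coverage factor (k = 2 for ~95 % coverage)
    """
    u_rw = statistics.stdev(iqc_results)  # intermediate precision
    u_combined = (u_rw**2 + u_cal**2 + u_bias**2) ** 0.5
    return k * u_combined

# Invented glucose IQC data (mmol/L) near a clinical decision limit.
iqc = [6.9, 7.1, 7.0, 6.8, 7.2, 7.0, 6.9, 7.1, 7.3, 6.8, 7.0, 7.1]
print(f"U = {measurement_uncertainty(iqc, u_cal=0.05):.2f} mmol/L")
```

Setting u_cal and u_bias to zero reproduces the minimal intermediate-precision estimate; the other formulas discussed above correspond to different choices for those terms.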
NASA Technical Reports Server (NTRS)
Barth, Timothy J.
2016-01-01
This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bounds formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.
Correlated uncertainties in Monte Carlo reaction rate calculations
NASA Astrophysics Data System (ADS)
Longland, Richard
2017-07-01
Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter those calculations. This is not always an appropriate assumption. Astrophysically important reactions are often dominated by resonances, whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected. Uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
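A sketch of how normalization to a shared reference induces the correlations discussed above. The distributions and fractional uncertainties are invented, and the real framework propagates such samples through rate integrals; this fragment stops at demonstrating the induced correlation between two resonance strengths:

```python
import random
random.seed(0)

def sample_strengths(ratios, f_ref=0.10, f_ratio=0.05, n=10_000):
    """Sample resonance strengths normalized to a common reference.

    Each strength_i = reference * ratio_i; the shared lognormal reference
    factor (fractional uncertainty f_ref) correlates all resonances, on top
    of their independent ratio uncertainties (f_ratio).
    """
    samples = []
    for _ in range(n):
        ref = random.lognormvariate(0.0, f_ref)
        samples.append([ref * r * random.lognormvariate(0.0, f_ratio)
                        for r in ratios])
    return samples

strengths = sample_strengths([1.0, 2.5, 0.4])
# Correlation between resonances 0 and 1 induced by the shared reference:
n = len(strengths)
m0 = sum(s[0] for s in strengths) / n
m1 = sum(s[1] for s in strengths) / n
cov = sum((s[0] - m0) * (s[1] - m1) for s in strengths) / n
v0 = sum((s[0] - m0) ** 2 for s in strengths) / n
v1 = sum((s[1] - m1) ** 2 for s in strengths) / n
print(f"correlation = {cov / (v0 * v1) ** 0.5:.2f}")  # roughly 0.8, not 0
```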
Using Machine Learning to Predict MCNP Bias
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grechanuk, Pavel Aleksandrovi
For many real-world applications in radiation transport where simulations are compared to experimental measurements, like in nuclear criticality safety, the bias (simulated - experimental k_eff) in the calculation is an extremely important quantity used for code validation. The objective of this project is to accurately predict the bias of MCNP6 [1] criticality calculations using machine learning (ML) algorithms, with the intention of creating a tool that can complement the current nuclear criticality safety methods. In the latest release of MCNP6, the Whisper tool is available for criticality safety analysts and includes a large catalogue of experimental benchmarks, sensitivity profiles, and nuclear data covariance matrices. This data, coming from 1100+ benchmark cases, is used in this study of ML algorithms for criticality safety bias predictions.
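A toy sketch of the idea, not the project's actual pipeline: learn bias as a function of per-benchmark features (which might, for instance, summarize Whisper-style sensitivity profiles). The data and the k-nearest-neighbour stand-in model below are invented:

```python
# Hypothetical bias-prediction sketch; features, data, and model are invented.

def knn_predict(train_x, train_y, x, k=3):
    """Predict bias for feature vector x as the mean over k nearest benchmarks."""
    dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    nearest = sorted(zip(train_x, train_y), key=lambda t: dist(t[0], x))[:k]
    return sum(y for _, y in nearest) / k

# (feature vector, observed bias in pcm) for invented benchmark cases.
benchmarks = [
    ([0.82, 0.10, 0.05], -120.0),
    ([0.75, 0.20, 0.03], -95.0),
    ([0.40, 0.55, 0.02], 30.0),
    ([0.35, 0.60, 0.04], 45.0),
    ([0.60, 0.30, 0.08], -20.0),
]
xs, ys = zip(*benchmarks)
new_case = [0.7, 0.25, 0.04]
print(f"predicted bias: {knn_predict(list(xs), list(ys), new_case):.0f} pcm")
```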
40 CFR 86.1837-01 - Rounding of emission measurements.
Code of Federal Regulations, 2014 CFR
2014-07-01
... additional significant figure, in accordance with 40 CFR 1065.20. (b) Fleet average NOX value calculations... calculating credits generated or needed as follows: manufacturers must round to the same number of significant figures that are contained in the quantity of vehicles in the denominator of the equation used to compute...
NASA Technical Reports Server (NTRS)
Armstrong, T. W.
1972-01-01
Several Monte Carlo radiation transport computer codes are used to predict quantities of interest in the fields of radiotherapy and radiobiology. The calculational methods are described and comparisons of calculated and experimental results are presented for dose distributions produced by protons, neutrons, and negatively charged pions. Comparisons of calculated and experimental cell survival probabilities are also presented.
Wheeler, J; Mariani, E; Piazolo, S; Prior, D J; Trimby, P; Drury, M R
2009-03-01
The Weighted Burgers Vector (WBV) is defined here as the sum, over all types of dislocations, of [(density of intersections of dislocation lines with a map) x (Burgers vector)]. Here we show that it can be calculated, for any crystal system, solely from orientation gradients in a map view, unlike the full dislocation density tensor, which requires gradients in the third dimension. No assumption is made about gradients in the third dimension and they may be non-zero. The only assumption involved is that elastic strains are small so the lattice distortion is entirely due to dislocations. Orientation gradients can be estimated from gridded orientation measurements obtained by EBSD mapping, so the WBV can be calculated as a vector field on an EBSD map. The magnitude of the WBV gives a lower bound on the magnitude of the dislocation density tensor when that magnitude is defined in a coordinate invariant way. The direction of the WBV can constrain the types of Burgers vectors of geometrically necessary dislocations present in the microstructure, most clearly when it is broken down in terms of lattice vectors. The WBV has three advantages over other measures of local lattice distortion: it is a vector and hence carries more information than a scalar quantity, it has an explicit mathematical link to the individual Burgers vectors of dislocations and, since it is derived via tensor calculus, it is not dependent on the map coordinate system. If a sub-grain wall is included in the WBV calculation, the magnitude of the WBV becomes dependent on the step size but its direction still carries information on the Burgers vectors in the wall. The net Burgers vector content of dislocations intersecting an area of a map can be simply calculated by an integration round the edge of that area, a method which is fast and complements point-by-point WBV calculations.
NASA Astrophysics Data System (ADS)
Lin, Lin
The computational cost of standard Kohn-Sham density functional theory (KSDFT) calculations scales cubically with respect to the system size, which limits its use in large scale applications. In recent years, we have developed an alternative procedure called the pole expansion and selected inversion (PEXSI) method. The PEXSI method solves KSDFT without solving for any eigenvalues or eigenvectors, and directly evaluates physical quantities including electron density, energy, atomic force, density of states, and local density of states. The overall algorithm scales at most quadratically for all materials, including insulators, semiconductors, and the difficult metallic systems. The PEXSI method can be efficiently parallelized over 10,000 - 100,000 processors on high performance machines. The PEXSI method has been integrated into a number of community electronic structure software packages such as ATK, BigDFT, CP2K, DGDFT, FHI-aims and SIESTA, and has been used in a number of applications with 2D materials beyond 10,000 atoms. The PEXSI method works for LDA, GGA and meta-GGA functionals. The mathematical structure for hybrid functional KSDFT calculations is significantly different. I will also discuss recent progress on using the adaptively compressed exchange method for accelerating hybrid functional calculations. DOE SciDAC Program, DOE CAMERA Program, LBNL LDRD, Sloan Fellowship.
The Boeing plastic analysis capability for engines
NASA Technical Reports Server (NTRS)
Vos, R. G.
1976-01-01
The current BOPACE program is described as a nonlinear stress analysis program, which is based on a family of isoparametric finite elements. The theoretical, user, programmer, and preprocessing aspects are discussed, and example problems are included. New features in the current program version include substructuring, an out-of-core Gauss wavefront equation solver, multipoint constraints, combined material and geometric nonlinearities, automatic calculation of inertia effects, provision for distributed as well as concentrated mechanical loads, follower forces, singular crack-tip elements, the SAIL automatic generation capability, and expanded user control over input quantity definition, output selection, and program execution. BOPACE is written in FORTRAN 4 and is currently available for both the IBM 360/370 and the UNIVAC 1108 machines.
Dose conversion coefficients for photon exposure of the human eye lens.
Behrens, R; Dietze, G
2011-01-21
In recent years, several papers dealing with the eye lens dose have been published, because epidemiological studies implied that the induction of cataracts occurs even at eye lens doses of less than 500 mGy. Different questions were addressed: Which personal dose equivalent quantity is appropriate for monitoring the dose to the eye lens? Is a new definition of the dose quantity Hp(3) based on a cylinder phantom to represent the human head necessary? Are current conversion coefficients from fluence to equivalent dose to the lens sufficiently accurate? To investigate the latter question, a realistic model of the eye including the inner structure of the lens was developed. Using this eye model, conversion coefficients for electrons have already been presented. In this paper, the same eye model (with the addition of the whole body) was used to calculate conversion coefficients from fluence (and air kerma) to equivalent dose to the lens for photon radiation from 5 keV to 10 MeV. Compared to the values adopted in 1996 by the International Commission on Radiological Protection (ICRP), the new values are similar between 40 keV and 1 MeV and lower by up to a factor of 5 and 7 for photon energies at about 10 keV and 10 MeV, respectively. Above 1 MeV, the new values (calculated without kerma approximation) should be applied in pure photon radiation fields, while the values adopted by the ICRP in 1996 (calculated with kerma approximation) should be applied in case a significant contribution from secondary electrons originating outside the body is present.
Program Predicts Performance of Optical Parametric Oscillators
NASA Technical Reports Server (NTRS)
Cross, Patricia L.; Bowers, Mark
2006-01-01
A computer program predicts the performances of solid-state lasers that operate at wavelengths from ultraviolet through mid-infrared and that comprise various combinations of stable and unstable resonators, optical parametric oscillators (OPOs), and sum-frequency generators (SFGs), including second-harmonic generators (SHGs). The input to the program describes the signal, idler, and pump beams; the SFG and OPO crystals; and the laser geometry. The program calculates the electric fields of the idler, pump, and output beams at three locations (inside the laser resonator, just outside the input mirror, and just outside the output mirror) as functions of time for the duration of the pump beam. For each beam, the electric field is used to calculate the fluence at the output mirror, plus summary parameters that include the centroid location, the radius of curvature of the wavefront leaving through the output mirror, the location and size of the beam waist, and a quantity known, variously, as a propagation constant or beam-quality factor. The program provides a typical Windows interface for entering data and selecting files. The program can include as many as six plot windows, each containing four graphs.
Huckle, Taisia; Huakau, John; Sweetsur, Paul; Huisman, Otto; Casswell, Sally
2008-10-01
This study examines the relationship between physical, socio-economic and social environments and alcohol consumption and drunkenness among a general population sample of drinkers aged 12-17 years. DESIGN, SETTING, PARTICIPANTS AND MEASURES: The study was conducted in Auckland, New Zealand. The design comprised two components: (i) environmental measures, including alcohol outlet density, a locality-based measure of willingness to sell alcohol (derived from purchase surveys of outlets) and a locality-based neighbourhood deprivation measure calculated routinely in New Zealand (known as NZDEP); and (ii) a random telephone survey to collect individual-level information from respondents aged 12-17 years, including ethnicity, frequency of alcohol supplied socially (by parents, friends and others), the young person's income, frequency of exposure to alcohol advertising, recall of brands of alcohol and self-reported purchase from alcohol outlets. A multi-level model was fitted to predict typical-occasion quantity, frequency of drinking and drunkenness in drinkers aged 12-17 years. Typical-occasion quantity was predicted by frequency of social supply (by parents, friends and others), ethnicity and outlet density; self-reported purchasing approached significance. NZDEP was correlated highly with outlet density so could not be analysed in the same model. In a separate model, NZDEP was associated with quantity consumed on a typical drinking occasion. Annual frequency was predicted by frequency of social supply of alcohol, self-reported purchasing from alcohol outlets and ethnicity. Feeling drunk was predicted by frequency of social supply of alcohol, self-reported purchasing from alcohol outlets and ethnicity; outlet density approached significance. Age and gender also had effects in the models, but retailers' willingness to sell to underage patrons had no effect on consumption, nor did the advertising measures. The young person's income was influential on typical-occasion quantity once deprivation was taken into account. Alcohol outlet density was associated with quantities consumed among teenage drinkers in this study, as was neighbourhood deprivation. Supply by family, friends and others also predicted quantities consumed among underage drinkers, and both social supply and self-reported purchase were associated with frequency of drinking and drunkenness. The ethnic status of young people also had an effect on consumption.
User's Manual: Routines for Radiative Heat Transfer and Thermometry
NASA Technical Reports Server (NTRS)
Risch, Timothy K.
2016-01-01
Determining the intensity and spectral distribution of radiation emanating from a heated surface has applications in many areas of science and engineering. Areas of research in which the quantification of spectral radiation is used routinely include thermal radiation heat transfer, infrared signature analysis, and radiation thermometry. In the analysis of radiation, it is helpful to be able to predict the radiative intensity and the spectral distribution of the emitted energy. Presented in this report is a set of routines written in Microsoft Visual Basic for Applications (VBA) (Microsoft Corporation, Redmond, Washington) and incorporating functions specific to Microsoft Excel (Microsoft Corporation, Redmond, Washington) that are useful for predicting the radiative behavior of heated surfaces. These routines include functions for calculating quantities of primary importance to engineers and scientists. In addition, the routines also provide the capability to use such information to determine surface temperatures from spectral intensities and for calculating the sensitivity of the surface temperature measurements to unknowns in the input parameters.
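The report's routines are written in VBA; below is a Python sketch of the two core operations such a toolbox provides, graybody spectral radiance from the Planck function and its inversion to recover a surface temperature from a measured spectral intensity. Constants are standard CODATA values; the wavelength, temperature, and emissivity in the example are invented:

```python
import math

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_radiance(wavelength_m, temperature_k, emissivity=1.0):
    """Spectral radiance (W m^-2 sr^-1 m^-1) of a graybody surface."""
    a = 2.0 * H * C**2 / wavelength_m**5
    b = H * C / (wavelength_m * KB * temperature_k)
    return emissivity * a / math.expm1(b)

def brightness_temperature(wavelength_m, radiance, emissivity=1.0):
    """Invert the Planck function: single-wavelength radiance -> temperature."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return H * C / (wavelength_m * KB * math.log1p(emissivity * a / radiance))

# Round trip at 0.9 um (a typical near-infrared pyrometry band), 1500 K, e = 0.85.
wl = 0.9e-6
L_meas = planck_radiance(wl, 1500.0, emissivity=0.85)
print(f"recovered T = {brightness_temperature(wl, L_meas, 0.85):.1f} K")
```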
Decay of homogeneous turbulence from a specified state
NASA Technical Reports Server (NTRS)
Deissler, R. G.
1972-01-01
The homogeneous turbulence problem is formulated by first specifying the multipoint velocity correlations or their spectral equivalents at an initial time. Those quantities, together with the correlation or spectral equations, are then used to calculate initial time derivatives of correlations or spectra. The derivatives in turn are used in time series to calculate the evolution of turbulence quantities with time. When the problem is treated in this way, the correlation equations are closed by the initial specification of the turbulence and no closure assumption is necessary. An exponential series which is an iterative solution of the Navier-Stokes equations gave much better results than a Taylor power series when used with the limited available initial data. In general, the agreement between theory and experiment was good.
3D RISM theory with fast reciprocal-space electrostatics.
Heil, Jochen; Kast, Stefan M
2015-03-21
The calculation of electrostatic solute-solvent interactions in 3D RISM ("three-dimensional reference interaction site model") integral equation theory is recast in a form that allows for a computational treatment analogous to the "particle-mesh Ewald" formalism as used for molecular simulations. In addition, relations that connect 3D RISM correlation functions and interaction potentials with thermodynamic quantities such as the chemical potential and average solute-solvent interaction energy are reformulated in a way that calculations of expensive real-space electrostatic terms on the 3D grid are completely avoided. These methodical enhancements allow for both, a significant speedup particularly for large solute systems and a smoother convergence of predicted thermodynamic quantities with respect to box size, as illustrated for several benchmark systems.
Integrability of generalised type II defects in affine Toda field theory
NASA Astrophysics Data System (ADS)
Bristow, Rebecca
2017-11-01
The Liouville integrability of the generalised type II defects is investigated. Full integrability is not considered, only the existence of an infinite number of conserved quantities associated with a system containing a defect. For defects in affine Toda field theories (ATFTs) it is shown that momentum conservation is very likely to be a necessary condition for integrability. The defect Lax matrices which guarantee zero curvature, and so an infinite number of conserved quantities, are calculated for the momentum conserving Tzitzéica defect and the momentum conserving D4 ATFT defect. Some additional calculations pertaining to the D4 defect are also carried out to find a more complete set of defect potentials than has appeared previously.
Computers: Yesterday, Today & Tomorrow.
1986-04-07
... these repetitive calculations, he progressed through several scientific stages. THE ABACUS: Invented more than 4,000 years ago, the abacus is considered by many to have been the world's first digital calculator. It uses beads and positional values to represent quantities. The abacus served as man's ... Pascal's mathematical digital calculator, designed around the concept of serially connected decimal counting gears. These gears were interconnected in a 10
NASA Astrophysics Data System (ADS)
Belov, A. V.; Kurkov, Andrei S.; Chikolini, A. V.
1989-02-01
A method was developed for calculating the effective cutoff length, the size of a mode spot, and the chromatic dispersion over the profile of the refractive index (measured in the preform stage) of single-mode fiber waveguides with a depressed cladding. The results of such calculations are shown to agree with the results of measurements of these quantities.
NASA Technical Reports Server (NTRS)
Aboudi, Jacob; Pindera, Marek-Jerzy; Arnold, Steven M.
1993-01-01
A new micromechanical theory is presented for the response of heterogeneous metal matrix composites subjected to thermal gradients. In contrast to existing micromechanical theories that utilize classical homogenization schemes in the course of calculating microscopic and macroscopic field quantities, in the present approach the actual microstructural details are explicitly coupled with the macrostructure of the composite. Examples are offered that illustrate limitations of the classical homogenization approach in predicting the response of thin-walled metal matrix composites with large-diameter fibers when subjected to thermal gradients. These examples include composites with a finite number of fibers in the thickness direction that may be uniformly or nonuniformly spaced, thus admitting so-called functionally gradient composites. The results illustrate that the classical approach of decoupling micromechanical and macromechanical analyses in the presence of a finite number of large-diameter fibers, finite dimensions of the composite, and temperature gradient may produce excessively conservative estimates for macroscopic field quantities, while both underestimating and overestimating the local fluctuations of the microscopic quantities in different regions of the composite. Also demonstrated is the usefulness of the present approach in generating favorable stress distributions in the presence of thermal gradients by appropriately tailoring the internal microstructure details of the composite.
Hall, B; Tozer, S; Safford, B; Coroama, M; Steiling, W; Leneveu-Duchemin, M C; McNamara, C; Gibney, M
2007-11-01
Access to reliable exposure data is essential to evaluate the toxicological safety of ingredients in cosmetic products. This study was carried out by European cosmetic manufacturers acting within the trade association Colipa, with the aim of constructing a probabilistic European population model of exposure. The study updates, in distribution form, the current exposure data on daily quantities of six cosmetic products. Data were collected using a combination of market information databases and a controlled product use study. In total, 44,100 households and 18,057 individual consumers in five European countries provided data using their own products. All product use occasions were recorded, including those outside of home. The raw data were analysed using Monte Carlo simulation, and a European Statistical Population Model of exposure was constructed. A significant finding was an inverse correlation between frequency of product use and quantity used per application for body lotion, facial moisturiser, toothpaste and shampoo. Thus it is not appropriate to calculate daily exposure to these products by multiplying the maximum frequency value by the maximum quantity per event value. The results largely confirm the exposure parameters currently used by the cosmetic industry. The design of this study could serve as a model for future assessments of population exposure to chemicals in products other than cosmetics.
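A sketch of the kind of probabilistic population model described, with the reported inverse frequency/quantity correlation built in; all distribution parameters are invented, so only the structure (and the reason max-frequency times max-quantity overstates exposure) carries over:

```python
import random
random.seed(42)

def simulate_daily_exposure(n=100_000):
    """Daily product exposure (g/day) = frequency * quantity per application,
    with higher frequency implying a smaller amount per application
    (invented lognormal parameters throughout)."""
    out = []
    for _ in range(n):
        freq = random.lognormvariate(0.0, 0.4)           # applications/day
        qty = random.lognormvariate(0.7, 0.3) / freq**0.5  # g per application
        out.append(freq * qty)
    return out

exposure = sorted(simulate_daily_exposure())
p50 = exposure[len(exposure) // 2]
p95 = exposure[int(0.95 * len(exposure))]
print(f"median {p50:.2f} g/day, 95th percentile {p95:.2f} g/day")
```

Because frequency and quantity are negatively correlated, the simulated 95th percentile sits well below the product of the two marginal maxima.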
Spreadsheet Modeling of (Q,R) Inventory Policies
ERIC Educational Resources Information Center
Cobb, Barry R.
2013-01-01
This teaching brief describes a method for finding an approximately optimal combination of order quantity and reorder point in a continuous review inventory model using a discrete expected shortage calculation. The technique is an alternative to a model where expected shortage is calculated by integration, and can allow students who have not had a…
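A hedged sketch of a discrete expected-shortage (Q, R) iteration of the sort the brief describes, assuming Poisson lead-time demand and invented cost data; the article's spreadsheet model may differ in detail:

```python
import math

def poisson_pmf(lam, k):
    # Log-space evaluation avoids overflow for large k.
    return math.exp(k * math.log(lam) - lam - math.lgamma(k + 1))

def expected_shortage(lam, r, kmax=400):
    """Discrete expected units short per cycle: sum_{d > r} (d - r) * P(D = d)."""
    return sum((d - r) * poisson_pmf(lam, d) for d in range(r + 1, kmax))

def qr_policy(demand, lead_time, order_cost, hold_cost, shortage_cost):
    """Iterative heuristic for (Q, R) with Poisson lead-time demand."""
    lam = demand * lead_time
    q = math.sqrt(2 * demand * order_cost / hold_cost)  # EOQ starting point
    r = 0
    for _ in range(25):
        # Smallest R with P(D > R) <= Q*h / (p*D).
        target = q * hold_cost / (shortage_cost * demand)
        r, tail = -1, 1.0            # tail tracks P(D > r)
        while tail > target:
            r += 1
            tail -= poisson_pmf(lam, r)
        n_r = expected_shortage(lam, r)
        q = math.sqrt(2 * demand * (order_cost + shortage_cost * n_r) / hold_cost)
    return round(q), r

print(qr_policy(demand=1000, lead_time=0.05, order_cost=50,
                hold_cost=4, shortage_cost=25))
```

The shortage sum is the discrete counterpart of the loss integral; iterating between the Q and R updates converges in a few passes for parameters like these.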
2010-02-01
…calculated the target strength of the most intense partial wave, a quantity termed the "effective target strength" by Kaduchak and Loeffler (1998). …
NASA Astrophysics Data System (ADS)
Briceño, Raúl A.; Hansen, Maxwell T.; Monahan, Christopher J.
2017-07-01
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Finally, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
molgw 1: Many-body perturbation theory software for atoms, molecules, and clusters
Bruneval, Fabien; Rangel, Tonatiuh; Hamed, Samia M.; ...
2016-07-12
Here, we summarize the MOLGW code that implements density-functional theory and many-body perturbation theory in a Gaussian basis set. The code is dedicated to the calculation of the many-body self-energy within the GW approximation and the solution of the Bethe–Salpeter equation. These two types of calculations allow the user to evaluate physical quantities that can be compared to spectroscopic experiments. Quasiparticle energies, obtained through the calculation of the GW self-energy, can be compared to photoemission or transport experiments, and neutral excitation energies and oscillator strengths, obtained via solution of the Bethe–Salpeter equation, are measurable by optical absorption. The implementation choices outlined here have aimed at the accuracy and robustness of calculated quantities with respect to measurements. Furthermore, the algorithms implemented in MOLGW allow users to consider molecules or clusters containing up to 100 atoms with rather accurate basis sets, and to choose whether or not to apply the resolution-of-the-identity approximation. Finally, we demonstrate the parallelization efficacy of the MOLGW code over several hundreds of processors.
Coefficients of productivity for Yellowstone's grizzly bear habitat
Mattson, David John; Barber, Kim; Maw, Ralene; Renkin, Roy
2004-01-01
This report describes methods for calculating coefficients used to depict habitat productivity for grizzly bears in the Yellowstone ecosystem. Calculations based on these coefficients are used in the Yellowstone Grizzly Bear Cumulative Effects Model to map the distribution of habitat productivity and account for the impacts of human facilities. The coefficients of habitat productivity incorporate detailed information that was collected over a 20-year period (1977-96) on the foraging behavior of Yellowstone's bears and include records of what bears were feeding on, when and where they fed, the extent of that feeding activity, and relative measures of the quantity consumed. The coefficients also incorporate information, collected primarily from 1986 to 1992, on the nutrient content of foods that were consumed, their digestibility, characteristic bite sizes, and the energy required to extract and handle each food. Coefficients were calculated for different time periods and different habitat types, specific to different parts of the Yellowstone ecosystem. Stratifications included four seasons of bear activity (spring, estrus, early hyperphagia, late hyperphagia), years when ungulate carrion and whitebark pine seed crops were abundant versus not, areas adjacent to (< 100 m) or far away from forest/nonforest edges, and areas inside or outside of ungulate winter ranges. Densities of bear activity in each region, habitat type, and time period were incorporated into calculations, controlling for the effects of proximity to human facilities. The coefficients described in this report and associated estimates of grizzly bear habitat productivity are unique among many efforts to model the conditions of bear habitat because calculations include information on energetics derived from the observed behavior of radio-marked bears.
Demonstrated reserve base for coal in New Mexico. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, G.K.
1995-02-01
The new demonstrated reserve base estimate of coal for the San Juan Basin, New Mexico, is 11.28 billion short tons. This compares with 4.429 billion short tons in the Energy Information Administration's demonstrated reserve base of coal as of January 1, 1992 for all of New Mexico and 2.806 billion short tons for the San Juan Basin. The new estimate includes revised resource calculations in the San Juan Basin, in San Juan, McKinley, Sandoval, Rio Arriba, Bernalillo and Cibola counties, but does not include the Raton Basin and smaller fields in New Mexico. These estimated "remaining" coal resource quantities, however, include significant adjustments for depletion due to past mining, and adjustments for accessibility and recoverability.
TauFactor: An open-source application for calculating tortuosity factors from tomographic data
NASA Astrophysics Data System (ADS)
Cooper, S. J.; Bertei, A.; Shearing, P. R.; Kilner, J. A.; Brandon, N. P.
TauFactor is a MATLAB application for efficiently calculating the tortuosity factor, as well as volume fractions, surface areas and triple phase boundary densities, from image-based microstructural data. The tortuosity factor quantifies the apparent decrease in diffusive transport resulting from convolutions of the flow paths through porous media. TauFactor was originally developed to improve the understanding of electrode microstructures for batteries and fuel cells; however, the tortuosity factor has been of interest to a wide range of disciplines for over a century, including geoscience, biology and optics. It is still common practice to use correlations, such as that developed by Bruggeman, to approximate the tortuosity factor, but in recent years the increasing availability of 3D imaging techniques has spurred interest in calculating this quantity more directly. This tool provides a fast and accurate computational platform applicable to the big datasets (>10^8 voxels) typical of modern tomography, without requiring high computational power.
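For orientation, the tortuosity factor τ links the intrinsic diffusivity D and phase volume fraction ε to the effective diffusivity via D_eff = D ε / τ. The short Python sketch below contrasts this definition with the Bruggeman-type correlation mentioned in the abstract; the exponent used (spherical particles) is an assumption for illustration.

    # Tortuosity factor tau relates bulk and effective diffusivity:
    # D_eff = D * eps / tau. The Bruggeman correlation for spheres gives
    # D_eff/D = eps**1.5, i.e. tau = eps**-0.5; TauFactor replaces such
    # correlations with a direct, image-based calculation.
    def tau_bruggeman(eps):
        return eps ** -0.5

    def d_eff(d_bulk, eps, tau):
        return d_bulk * eps / tau

    eps = 0.3
    print(tau_bruggeman(eps))                      # ~1.83
    print(d_eff(1.0e-9, eps, tau_bruggeman(eps)))  # effective diffusivity, m^2/s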
NASA Astrophysics Data System (ADS)
Kudo, K.; Hasegawa, H.; Nakatsugawa, M.
2017-12-01
This study evaluates water quality change in a brackish lake based on estimates of hydrological quantities from long-term hydrologic processes accompanying climate change. For brackish lakes such as Lake Abashiri in eastern Hokkaido, there are concerns about water quality deterioration due to increases in water temperature and salinity. To estimate hydrological quantities in the Abashiri River basin, including Lake Abashiri, we propose the following methods: 1) MRI-NHRCM20, a regional climate model based on the Representative Concentration Pathways adopted by IPCC AR5; 2) a generalized extreme value distribution for bias correction; 3) kriging with a variogram for downscaling; and 4) a Long-term Hydrologic Assessment model considering Snow processes (LoHAS). In addition, we calculate the discharge from the Abashiri River into Lake Abashiri by using the estimated hydrological quantities and a tank model, and simulate the impacts of climate change on the water quality of Lake Abashiri by setting the necessary conditions, including the initial water temperature and water quality, the pollution load from the inflow rivers, the duration of ice cover and the salt pale boundary. The water quality simulation indicates that climate change is expected to raise the surface water temperature of the lake by approximately 4°C and increase surface salinity by approximately 4 psu; furthermore, if the salt pale boundary in the lake rises by approximately 2 m, the concentrations of COD, T-N and T-P at the bottom of the lake might increase. The processes leading to these results are likely to be as follows: increased river inflow travels along the salt pale boundary in the lake, causing dynamic flow of surface water; saline bottom water is entrained upward, where it mixes with surface water; and the shear force acting at the salt pale boundary helps to increase the supply of salts from the bottom saline water to the surface water. In the future, we will conduct similar simulations for a larger area that includes the mouth of the Abashiri River. The accuracy of the flow field simulation for Lake Abashiri will increase when the calculations incorporate the effects of climate change on tide level, water temperature and salinity at the river mouth.
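As a point of reference for the tank-model step mentioned above, a minimal single-tank runoff sketch follows. The study's actual tank structure, outlet coefficients, and forcing series are not given in the abstract, so every number here is an illustrative assumption.

    # Minimal single-tank runoff model: storage s gains precipitation minus
    # evapotranspiration and drains through an upper (fast) and lower (slow)
    # linear outlet. Coefficients and inputs are assumptions.
    def tank_model(precip, et, k_fast=0.3, k_slow=0.05, threshold=20.0, s0=10.0):
        s, discharge = s0, []
        for p, e in zip(precip, et):
            s = max(s + p - e, 0.0)
            q = k_fast * max(s - threshold, 0.0) + k_slow * s
            s -= q
            discharge.append(q)
        return discharge

    print(tank_model(precip=[5, 30, 0, 0, 12], et=[1, 1, 2, 2, 1]))  # mm/day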
Initialization and assimilation of cloud and rainwater in a regional model
NASA Technical Reports Server (NTRS)
Raymond, William H.; Olson, William S.
1990-01-01
The initialization and assimilation of cloud and rainwater quantities in a mesoscale regional model were examined. Forecasts of explicit cloud and rainwater are made using conservation equations. The physical processes include condensation, evaporation, autoconversion, accretion, and the removal of rainwater by fallout. These physical processes, some of which are parameterized, represent source and sink terms in the conservation equations. The questions of how to initialize the explicit liquid water calculations in numerical models and how to retain information about precipitation processes during the 4-D assimilation cycle are important issues that are addressed.
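Schematically, conservation equations of the kind described above take the form (a generic reconstruction, not the paper's exact parameterization):

    \frac{\partial q_c}{\partial t} + \nabla \cdot (q_c \mathbf{v}) = C - A - K,
    \qquad
    \frac{\partial q_r}{\partial t} + \nabla \cdot (q_r \mathbf{v}) = A + K - E + F_r,

where q_c and q_r are the cloud and rain water mixing ratios, C is condensation, E evaporation, A autoconversion (for example the Kessler form A = k (q_c - q_{c0}) for q_c > q_{c0}), K accretion, and F_r the tendency due to rainwater fallout.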
NASA Astrophysics Data System (ADS)
Belov, G. V.; Dyachkov, S. A.; Levashov, P. R.; Lomonosov, I. V.; Minakov, D. V.; Morozov, I. V.; Sineva, M. A.; Smirnov, V. N.
2018-01-01
The database structure, main features and user interface of the IVTANTHERMO-Online system are reviewed. This system continues the series of IVTANTHERMO packages developed in JIHT RAS. It includes a database of thermodynamic properties of individual substances and related software for the analysis of experimental results, data fitting, and the calculation and estimation of thermodynamic functions and thermochemical quantities. In contrast to the previous IVTANTHERMO versions, it has a new extensible database design, a client-server architecture, and a user-friendly web interface with a number of new features for online and offline data processing.
NASA Astrophysics Data System (ADS)
Marshall, Jason P.; Hudson, Troy L.; Andrade, José E.
2017-10-01
The InSight mission launches in 2018 to characterize several geophysical quantities on Mars, including the heat flow from the planetary interior. This quantity will be calculated from measurements of the thermal conductivity and the thermal gradient down to 5 meters below the Martian surface. One of the components of InSight is the Mole, which hammers into the Martian regolith to facilitate these thermal property measurements. In this paper, we experimentally investigated the effect of the Mole's penetrating action on regolith compaction and mechanical properties. Quasi-static and dynamic experiments were run with a 2D model of the 3D cylindrical Mole. Force resistance data were captured with load cells. Deformation information was captured in images and analyzed using Digital Image Correlation (DIC). Additionally, we used existing approximations of Martian regolith thermal conductivity to estimate the change in the surrounding granular material's thermal conductivity due to the Mole's penetration. We found that the Mole has the potential to cause a high degree of densification, especially if the initial granular material is relatively loose. The effect of this densification on the thermal conductivity was found to be relatively small in first-order calculations, though more complete thermal models incorporating this densification should be a subject of further investigation. The results obtained provide an initial estimate of the Mole's impact on Martian regolith thermal properties.
NASA Astrophysics Data System (ADS)
Winnewisser, Manfred; Winnewisser, Brenda P.; Medvedev, Ivan R.; De Lucia, Frank C.; Ross, Stephen C.; Koput, Jacek
2010-06-01
Quantum Monodromy has a strong impact on the ro-vibrational energy levels of chain molecules whose bending potential energy function has the form of the bottom of a champagne bottle (i.e. with a hump or punt) around the linear configuration. NCNCS is a particularly good example of such a molecule and clearly exhibits a distinctive monodromy-induced dislocation of the energy level pattern at the top of the potential energy hump. The generalized semi-rigid bender (GSRB) wave functions are used to show that the expectation values of any physical quantity which varies with the large amplitude bending coordinate will also have monodromy-induced dislocations. This includes the electric dipole moment components. High level ab initio calculations not only provided the molecular equilibrium structure of NCNCS, but also the electric dipole moment components μa and μb as functions of the large-amplitude bending coordinate. The calculated expectation values of these quantities indicate large ro-vibrational transition moments that will be discussed in pursuit of possible far-infrared bands. To our knowledge there is no NCNCS infrared spectrum reported in the literature. B. P. Winnewisser, M. Winnewisser, I. R. Medvedev, F. C. De Lucia, S. C. Ross and J. Koput, Phys. Chem. Chem. Phys., 2010, DOI:10.1039/B922023B.
Pergola, M; D'Amico, M; Celano, G; Palese, A M; Scuderi, A; Di Vita, G; Pappalardo, G; Inglese, P
2013-10-15
The island of Sicily has a long-standing tradition in citrus growing. We evaluated the sustainability of orange and lemon orchards, under organic and conventional farming, using an energy, environmental and economic analysis of the whole production cycle by using a life cycle assessment approach. These orchard systems differ only in terms of a few of the inputs used and the duration of the various agricultural operations. The quantity of energy consumption in the production cycle was calculated by multiplying the quantity of inputs used by the energy conversion factors drawn from the literature. The production costs were calculated considering all internal costs, including equipment, materials, wages, and costs of working capital. The performance of the two systems (organic and conventional) was compared over a period of fifty years. The results, based on unit surface area (ha) production, prove the stronger sustainability of the organic over the conventional system, both in terms of energy consumption and environmental impact, especially for lemons. The sustainability of organic systems is mainly due to the use of environmentally friendly crop inputs (fertilizers, no use of synthetic products, etc.). In terms of production costs, the conventional management systems were more expensive, and both systems were heavily influenced by wages. In terms of kg of final product, the organic production system showed better environmental and energy performances.
Wirelessly Interrogated Wear or Temperature Sensors
NASA Technical Reports Server (NTRS)
Woodard, Stanley E.; Taylor, Bryant D.
2010-01-01
Sensors for monitoring surface wear and/or temperature without the need for wire connections have been developed. Excitation and interrogation of these sensors are accomplished by means of a magnetic-field-response recorder. In a sensor of the present type, as in the previously reported ones, the capacitance, and thus the resonance frequency, varies as a known function of the quantity of interest that one seeks to determine. Hence, the resonance frequency is measured and used to calculate the quantity of interest.
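For an inductor-capacitor sensor, the "known function" rests on the standard resonance relation

    f = \frac{1}{2\pi\sqrt{LC}}
    \quad\Longrightarrow\quad
    C = \frac{1}{L (2\pi f)^2},

so a measured resonance frequency f yields the capacitance C, and a calibration curve C(X) then yields the wear depth or temperature X. The resonance formula is standard; the specific calibration function belongs to the sensor design and is not given here.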
Radiation Protection Quantities for Near Earth Environments
NASA Technical Reports Server (NTRS)
Clowdsley, Martha S.; Wilson, John W.; Kim, Myung-Hee; Anderson, Brooke M.; Nealy, John E.
2004-01-01
As humans travel beyond the protection of the Earth's magnetic field and mission durations grow, risk due to radiation exposure will increase and may become the limiting factor for such missions. Here, the dosimetric quantities recommended by the National Council on Radiation Protection and Measurements (NCRP) for the evaluation of health risk due to radiation exposure, effective dose and gray-equivalent to eyes, skin, and blood forming organs (BFO), are calculated for several near Earth environments. These radiation protection quantities are evaluated behind two different shielding materials, aluminum and polyethylene. Since exposure limits for missions beyond low Earth orbit (LEO) have not yet been defined, results are compared to limits recommended by the NCRP for LEO operations.
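For reference, effective dose is assembled from organ dose quantities in the standard ICRP/NCRP way,

    H_T = \sum_R w_R D_{T,R},
    \qquad
    E = \sum_T w_T H_T,

where D_{T,R} is the mean absorbed dose to tissue T from radiation type R, w_R is the radiation weighting factor, and w_T the tissue weighting factor; the specific weighting factors used in the paper are not given in the abstract.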
Miniature high temperature plug-type heat flux gauges
NASA Technical Reports Server (NTRS)
Liebert, Curt H.
1992-01-01
The objective is to describe continuing efforts to develop methods for measuring surface heat flux, gauge active surface temperature, and heat transfer coefficient quantities. The methodology involves developing a procedure for fabricating improved plug-type heat flux gauges and formulating inverse heat conduction models and calculation procedures. These models and procedures are required for making indirect measurements of these quantities from direct temperature measurements at gauge interior locations. Measurements of these quantities were made in a turbine blade thermal cycling tester (TBT) located at MSFC. The TBT partially simulates the turbopump turbine environment in the Space Shuttle Main Engine. After the TBT test, experiments were performed in an arc lamp to analyze gauge quality.
NASA Astrophysics Data System (ADS)
Welden, Alicia Rae; Rusakov, Alexander A.; Zgid, Dominika
2016-11-01
Including finite-temperature effects from the electronic degrees of freedom in electronic structure calculations of semiconductors and metals is desired; however, in practice it remains exceedingly difficult when using zero-temperature methods, since these methods require an explicit evaluation of multiple excited states in order to account for any finite-temperature effects. Using a Matsubara Green's function formalism remains a viable alternative, since in this formalism it is easier to include thermal effects and to connect the dynamic quantities such as the self-energy with static thermodynamic quantities such as the Helmholtz energy, entropy, and internal energy. However, despite the promising properties of this formalism, little is known about the multiple solutions of the non-linear equations present in the self-consistent Matsubara formalism and only a few cases involving a full Coulomb Hamiltonian were investigated in the past. Here, to shed some light onto the iterative nature of the Green's function solutions, we self-consistently evaluate the thermodynamic quantities for a one-dimensional (1D) hydrogen solid at various interatomic separations and temperatures using the self-energy approximated to second-order (GF2). At many points in the phase diagram of this system, multiple phases such as a metal and an insulator exist, and we are able to determine the most stable phase from the analysis of Helmholtz energies. Additionally, we show the evolution of the spectrum of 1D boron nitride to demonstrate that GF2 is capable of qualitatively describing the temperature effects influencing the size of the band gap.
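For orientation, the Matsubara formalism referred to above evaluates the Green's function on the imaginary frequency axis at the fermionic frequencies

    \omega_n = (2n+1)\pi/\beta, \qquad \beta = 1/(k_B T),

so temperature enters directly through the frequency grid, and thermodynamic quantities follow from sums over G(i\omega_n) and \Sigma(i\omega_n) (for example via the Galitskii-Migdal expression for the internal energy). Only this standard definition is quoted here; the paper's specific GF2 working equations are not reproduced in the abstract.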
Transmission eigenchannels from nonequilibrium Green's functions
NASA Astrophysics Data System (ADS)
Paulsson, Magnus; Brandbyge, Mads
2007-09-01
The concept of transmission eigenchannels is described in a tight-binding nonequilibrium Green’s function (NEGF) framework. A simple procedure for calculating the eigenchannels is derived using only the properties of the device subspace and quantities normally available in a NEGF calculation. The method is exemplified by visualization in real space of the eigenchannels for three different molecular and atomic wires.
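A minimal numerical sketch of one common eigenchannel construction from NEGF quantities follows: the matrix \Gamma_L^{1/2} G \Gamma_R G^\dagger \Gamma_L^{1/2} is Hermitian, and its eigenvalues are the channel transmissions, which sum to the total transmission T(E) = Tr[\Gamma_L G \Gamma_R G^\dagger]. This is the generic textbook route; the paper's own derivation works within the device subspace and may differ in detail.

    # Channel transmissions from the retarded device Green's function G and
    # the electrode broadening matrices gamma_L, gamma_R (all Hermitian).
    import numpy as np

    def eigenchannel_transmissions(G, gamma_L, gamma_R):
        w, v = np.linalg.eigh(gamma_L)                 # Gamma_L = V w V^dagger
        sqrt_gL = v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.conj().T
        t_mat = sqrt_gL @ G @ gamma_R @ G.conj().T @ sqrt_gL
        return np.sort(np.linalg.eigvalsh(t_mat))[::-1]

    # sanity check with random Hermitian stand-ins: eigenvalues sum to the trace
    rng = np.random.default_rng(1)
    n = 4
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    gL, gR = A @ A.conj().T, B @ B.conj().T
    G = np.linalg.inv(rng.normal(size=(n, n)) + 1j * np.eye(n))
    tn = eigenchannel_transmissions(G, gL, gR)
    print(tn.sum(), np.trace(gL @ G @ gR @ G.conj().T).real)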
ERIC Educational Resources Information Center
Vargas, Francisco M.
2014-01-01
The temperature dependence of the Gibbs energy and important quantities such as Henry's law constants, activity coefficients, and chemical equilibrium constants is usually calculated by using the Gibbs-Helmholtz equation. Although this is a well-known approach and traditionally covered as part of any physical chemistry course, the required…
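The equation in question, with a typical application, is

    \left(\frac{\partial (G/T)}{\partial T}\right)_P = -\frac{H}{T^2},

which, applied to a reaction with approximately constant \Delta H^\circ, yields the familiar integrated van 't Hoff form

    \ln\frac{K(T_2)}{K(T_1)} = -\frac{\Delta H^\circ}{R}\left(\frac{1}{T_2} - \frac{1}{T_1}\right).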
Non-linear vacuum polarization in strong fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gyulassy, M.
1981-07-01
The Wichmann-Kroll formalism for calculating the vacuum polarization density to first order in α but to all orders in Zα is derived. The most essential quantity in these calculations is shown to be the electron Green's function. The method of constructing that Green's function in the field of finite-radius nuclei is then presented.
Energetics of Meloidogyne incognita on Resistant and Susceptible Alyceclover Genotypes
Powers, L. E.; McSorley, R.
1993-01-01
To determine the energy cost of a population of Meloidogyne incognita on the roots of alyceclover, nematode biomass was estimated and equations in the literature were used to calculate energy budgets. Amounts of energy consumed, respired, or used in production of nematode biomass were calculated. Results suggested that severe infestations of root-knot nematodes can remove significant quantities of energy from their hosts. Over a 36-day period, a population of 2.6 females of M. incognita per root system removed less than 0.4 calories of energy from a resistant alyceclover plant but over 11 calories were removed by 28 females from a susceptible alyceclover. The calculations indicate that on the resistant alyceclover line, 53% of the energy assimilated by the root-knot population was allocated to respiration, with only 47% allocated to production, whereas on the susceptible line, 65% of the assimilated energy was allocated to production. Such energy demands by the parasite could result in significant reductions in yield quantity and quality at a field production level. PMID:19279766
The harmonic force field of benzene. A local density functional study
NASA Astrophysics Data System (ADS)
Bérces, Attila; Ziegler, Tom
1993-03-01
The harmonic force field of benzene has been calculated by a method based on local density functional theory (LDF). The calculations were carried out employing a triple zeta basis set with triple polarization on hydrogen and double polarization on carbon. The LDF force field was compared to the empirical field due to Ozkabak, Goodman, and Thakur [A. G. Ozkabak, L. Goodman, and S. N. Thakur, J. Phys. Chem. 95, 9044 (1991)], which has served as a benchmark for theoretical calculations, as well as to the theoretical field based on scaled Hartree-Fock ab initio calculations due to Pulay, Fogarasi, and Boggs [P. Pulay, G. Fogarasi, and J. E. Boggs, J. Chem. Phys. 74, 3999 (1981)]. The calculated LDF force field is in excellent qualitative and very good quantitative agreement with the theoretical field proposed by Pulay, Fogarasi, and Boggs as well as the empirical field due to Ozkabak, Goodman, and Thakur. The LDF field is closest to the values of Pulay and co-workers in those cases where the force constants due to Pulay, Fogarasi, and Boggs and to Ozkabak, Goodman, and Thakur differ in sign or magnitude. The accuracy of the LDF force field was investigated by evaluating a number of eigenvalue- and eigenfunction-dependent quantities from the LDF force constants. The quantities under investigation include vibrational frequencies of seven isotopomers, isotopic shifts, as well as absorption intensities. The calculations were performed at both theoretical optimized and approximate equilibrium reference geometries. The predicted frequencies are usually within 1%-2% of the empirical harmonic frequencies. The least accurate frequency deviates by 5% from the experimental value. The average deviations from the empirical harmonic frequencies of C6H6 and C6D6 are 16.7 cm-1 (1.5%) and 15.2 cm-1 (1.7%), respectively, not including CH stretching frequencies, in the case where a theoretical reference geometry was used. The accuracy of the out-of-plane force field is especially remarkable; the average deviations for the C6H6 and C6D6 frequencies, based on the LDF force field, are 9.4 cm-1 (1.2%) and 7.3 cm-1 (1.2%), respectively. The absorption intensities were not predicted as accurately as expected based on the size of the basis set applied. An analysis is provided to ensure that the force constants are not significantly affected by numerical errors due to the numerical integration scheme employed.
NASA Astrophysics Data System (ADS)
Mokhtari, Ali; Alidoosti, Mohammad
2014-11-01
In the present work, we have performed first-principles calculations to study the structural and electronic properties of the MgFBrxCl1-x quaternary alloys using the pseudo-potential plane wave approach within the framework of density functional theory. Using the optimized initial parameters, we have obtained physical quantities such as the equilibrium lattice constants a and c, the cohesive energy and the band gap, and have fitted the results with a quadratic expression for all x compositions. The results for the bulk modulus exhibit nearly linear concentration dependence (LCD), but the other quantities show nonlinear dependence. Finally, we have calculated the total and angular-momentum-decomposed (partial) density of states and determined the contributions of the different orbitals of each atom.
Estimation of πd-Interactions in Organic Conductors Including Magnetic Anions
NASA Astrophysics Data System (ADS)
Mori, Takehiko; Katsuhara, Mao
2002-03-01
Magnetic interactions in organic conductors including magnetic anions, such as λ-(BETS)2FeCl4 and κ-(BETS)2FeX4 [X = Cl and Br], are estimated from intermolecular overlap integrals; the overlaps between anions afford Jdd, and those between anions and donors give Jπd. From these, the most stable spin alignments are determined, and quantities such as the Néel and Weiss temperatures, as well as the magnitude of the spin polarization on the π-molecules, are evaluated on the basis of the mean-field theory of πd-systems. The calculation is extended to several other πd-conductors, which are classified depending on the relative magnitudes of the direct dd- and indirect πd-interactions.
Management of Water Quantity and Quality Based on Copula for a Tributary to Miyun Reservoir, Beijing
NASA Astrophysics Data System (ADS)
Zang, N.; Wang, X.; Liang, P.
2017-12-01
Due to the complex mutual influence between the water quantity and water quality of a river, it is difficult to capture the actual character of the tributaries flowing into a reservoir. In this study, acceptable marginal probability distributions for the water quantity and quality of reservoir inflow were determined. A bivariate Archimedean copula was further applied to establish their joint distribution function. Multiple combination scenarios of water quantity and water quality were then designed to analyze their coexistence relationship and the corresponding reservoir management strategies. The Bai river, an important tributary of the Miyun Reservoir, was taken as the study case. The results showed that the Frank copula function is suitable for describing the joint distribution of water quality and water quantity for the Bai river. Furthermore, the monitoring of TP concentration needs to be strengthened in the Bai river. This methodology can be extended to higher dimensions and is transferable to other reservoirs via the establishment of models with relevant data for a particular area. Our findings help in analyzing the coexistence relationship and degree of influence of the water quantity and quality of a tributary to a reservoir for the purpose of water resources protection.
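For reference, the Frank copula found suitable here has the standard form

    C_\theta(u, v) = -\frac{1}{\theta}\ln\left[1 + \frac{(e^{-\theta u} - 1)(e^{-\theta v} - 1)}{e^{-\theta} - 1}\right], \qquad \theta \neq 0,

where u = F_Q(q) and v = F_C(c) are the fitted marginal distribution functions of water quantity and water quality, and the dependence parameter \theta is estimated from the paired data; the study's fitted marginals and \theta value are not given in the abstract.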
A Note on the Teaching of Arc Elasticity.
ERIC Educational Resources Information Center
Seldon, James R.
1986-01-01
Maintains that the Abba P. Lerner alternative for calculating arc elasticity is superior to the commonly used mean prices and quantities method typically employed in intermediate microeconomics courses. (JDH)
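For context, the "mean prices and quantities" method the note argues against is the familiar midpoint formula,

    E_{arc} = \frac{Q_2 - Q_1}{P_2 - P_1} \cdot \frac{(P_1 + P_2)/2}{(Q_1 + Q_2)/2},

quoted here in its standard textbook form; the abstract does not reproduce Lerner's alternative.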
Thermodynamic properties by Equation of state of liquid sodium under pressure
NASA Astrophysics Data System (ADS)
Li, Huaming; Sun, Yongli; Zhang, Xiaoxiao; Li, Mo
The isothermal bulk modulus, molar volume and speed of sound of molten sodium are calculated from an equation of state of power-law form, with good precision compared with the experimental data. The calculated internal energy shows a minimum along the isothermal lines, as in previous results, but with slightly larger values. The calculated isobaric heat capacity shows an unexpected minimum under isothermal compression. The temperature and pressure derivatives of various thermodynamic quantities of liquid sodium are derived, and the contribution of entropy to the temperature and pressure derivatives of the isothermal bulk modulus is discussed. Expressions for the acoustical parameter and the nonlinearity parameter are obtained from thermodynamic relations based on the equation of state. Both parameters for liquid sodium are calculated under high pressure along the isothermal lines by using the available thermodynamic data and numerical derivatives. By comparison with results from experimental measurements and quasi-thermodynamic theory, the calculated values are found to be very close at the melting point at ambient conditions. Furthermore, several other thermodynamic quantities are also presented. Scientific Research Starting Foundation of Taiyuan University of Technology, Shanxi Provincial Government ("100-talents program"), China Scholarship Council and National Natural Science Foundation of China (NSFC) under Grant No. 11204200.
Li, Chuan; Petukh, Marharyta; Li, Lin; Alexov, Emil
2013-08-15
Due to the enormous importance of electrostatics in molecular biology, calculating the electrostatic potential and corresponding energies has become a standard computational approach for the study of biomolecules and nano-objects immersed in water and salt phase or other media. However, the electrostatics of large macromolecules and macromolecular complexes, including nano-objects, may not be obtainable via explicit methods and even the standard continuum electrostatics methods may not be applicable due to high computational time and memory requirements. Here, we report further development of the parallelization scheme reported in our previous work (Li, et al., J. Comput. Chem. 2012, 33, 1960) to include parallelization of the molecular surface and energy calculations components of the algorithm. The parallelization scheme utilizes different approaches such as space domain parallelization, algorithmic parallelization, multithreading, and task scheduling, depending on the quantity being calculated. This allows for efficient use of the computing resources of the corresponding computer cluster. The parallelization scheme is implemented in the popular software DelPhi and results in a severalfold speedup. As a demonstration of the efficiency and capability of this methodology, the electrostatic potential and electric field distributions are calculated for the bovine mitochondrial supercomplex, illustrating their complex topology, which cannot be obtained by modeling the supercomplex components alone.
NASA Astrophysics Data System (ADS)
Gomez, Thomas; Nagayama, Taisuke; Kilcrease, David; Hansen, Stephanie; Montgomery, Mike; Winget, Don
2018-01-01
The Rosseland mean opacity (RMO) is an important quantity in determining radiation transport through stars. The solar convection-zone boundary predicted by the standard solar model disagrees with helioseismology measurements by many sigma; a 14% increase in the RMO would resolve this discrepancy. Experiments at Sandia National Laboratories are now measuring iron opacity at solar-interior conditions, and significant discrepancies are already observed. Highly ionized oxygen is one of the dominant contributors to the RMO. The strongest line, Lyman alpha, is at the peak of the Rosseland weighting function. The accuracy of line-broadening calculations has been called into question by various experimental results and comparisons between theories. We have developed an ab initio calculation to explore different physical effects; our current focus is treating penetrating collisions explicitly. The equation of motion used to calculate line shapes within the relaxation and unified theories includes a projection operator, which performs an average over plasma electron states; this operator has been neglected in past calculations owing to their approximate treatment of penetrating collisions. We now include this projection term explicitly, which results in a significant broadening of spectral lines from highly charged ions (low-Z elements are not much affected). The additional broadening raises the O Lyman-alpha wing opacity by a factor of 5; we examine the consequences of this additional broadening for the Rosseland mean.
Budget of Turbulent Kinetic Energy in a Shock Wave Boundary-Layer Interaction
NASA Technical Reports Server (NTRS)
Vyas, Manan; Waindim, Mbu; Gaitonde, Datta
2016-01-01
Implicit large-eddy simulation (ILES) of a shock wave boundary-layer interaction (SBLI) was performed. Quantities present in the exact transport equation of the turbulent kinetic energy (TKE) were accumulated. These quantities will be used to calculate TKE budget terms such as production, dissipation, transport, and dilatation. Correlations of these terms will be presented to study the growth of and interaction between the various terms. A comparison with its RANS (Reynolds-Averaged Navier-Stokes) counterpart will also be presented.
Foundations of an effective-one-body model for coalescing binaries on eccentric orbits
NASA Astrophysics Data System (ADS)
Hinderer, Tanja; Babak, Stanislav
2017-11-01
We develop the foundations of an effective-one-body (EOB) model for eccentric binary coalescences that includes the conservative dynamics, radiation reaction, and gravitational waveform modes from the inspiral and the merger-ringdown signals. Our approach uses the strategy that is commonly employed in black-hole perturbation theory: we introduce an efficient, relativistic parameterization of the dynamics that is defined by the orbital geometry and consists of a set of phase variables and quantities that evolve only due to gravitational radiation reaction. Specializing to nonspinning binaries, we derive the EOB equations of motion for the new variables and make use of the fundamental frequencies of the motion to compute the binary's radiative multipole moments that determine the gravitational waves. Our treatment has several advantages over the quasi-Keplerian approach that is often used in post-Newtonian (PN) calculations: a smaller set of variables, parameters that reflect the features of strong-field dynamics, and a greater transparency of the calculations when using the fundamental frequencies that leads to simplifications and an unambiguous orbit-averaging operation. While our description of the conservative dynamics is fully relativistic, we limit explicit derivations in the radiative sector to 1.5PN order for simplicity. This already enables us to establish methods for computing both instantaneous and hereditary contributions to the gravitational radiation in EOB coordinates that have straightforward extensions to higher PN order. The weak-field, small eccentricity limit of our results for the orbit-averaged fluxes agrees with known PN results when expressed in terms of gauge-invariant quantities. We further address considerations for the numerical implementation of the model and the completion of the waveforms to include the merger and ringdown signals, and provide illustrative results.
Wang, Bruce C M; Hsu, Ping-Ning; Furnback, Wesley; Ney, John; Yang, Ya-Wen; Fang, Chi-Hui; Tang, Chao-Hsiun
2016-03-01
Rheumatoid arthritis (RA) is a chronic autoimmune disease characterized by inflammation and destruction of the joints. This research aims to estimate the economic burden of RA in Taiwan. The National Health Insurance Research Database (NHIRD), a claims-based dataset encompassing 99 % of Taiwan's population, was applied. We used a micro-costing approach for direct healthcare costs and indirect social costs by estimating the quantities and prices of cost categories. Direct costs included surgeries, hospitalizations, medical devices and materials, laboratory tests, and drugs. The costs and quantities of the direct economic burden were calculated based on 2011 data of NHIRD. We identified RA patients and a control cohort matched 1:4 on demographic and clinical covariates to calculate the incremental cost related to RA. Indirect costs were evaluated by missed work (absenteeism) and worker productivity (presenteeism). For the indirect burden, we estimated the rate of absenteeism and presenteeism from a patient survey. Costs were presented in US dollars (US$1 = 30 TWD). A total of 41,269 RA patients were included in the database with incremental total direct cost of US$86,413,971 and indirect cost of US$138,492,987. This resulted in an average incremental direct cost of US$2050 per RA patient. Within direct costs, the largest burdens were associated with drugs (US$73,028,944), laboratory tests (US$6,132,395), and hospitalizations (US$3,208,559). For indirect costs, absenteeism costs and presenteeism costs were US$16,059,681 and US$114,291,687, respectively. The economic burden of RA in Taiwan is driven by indirect healthcare costs, most notably presenteeism.
Using sensitivity analysis in model calibration efforts
Tiedeman, Claire; Hill, Mary C.
2003-01-01
In models of natural and engineered systems, sensitivity analysis can be used to assess relations among system state observations, model parameters, and model predictions. The model itself links these three entities, and model sensitivities can be used to quantify the links. Sensitivities are defined as the derivatives of simulated quantities (such as simulated equivalents of observations, or model predictions) with respect to model parameters. We present four measures calculated from model sensitivities that quantify the observation-parameter-prediction links and that are especially useful during the calibration and prediction phases of modeling. These four measures are composite scaled sensitivities (CSS), prediction scaled sensitivities (PSS), the value of improved information (VOII) statistic, and the observation prediction (OPR) statistic. These measures can be used to help guide initial calibration of models, collection of field data beneficial to model predictions, and recalibration of models updated with new field information. Once model sensitivities have been calculated, each of the four measures requires minimal computational effort. We apply the four measures to a three-layer MODFLOW-2000 (Harbaugh et al., 2000; Hill et al., 2000) model of the Death Valley regional ground-water flow system (DVRFS), located in southern Nevada and California. D’Agnese et al. (1997, 1999) developed and calibrated the model using nonlinear regression methods. Figure 1 shows some of the observations, parameters, and predictions for the DVRFS model. Observed quantities include hydraulic heads and spring flows. The 23 defined model parameters include hydraulic conductivities, vertical anisotropies, recharge rates, evapotranspiration rates, and pumpage. Predictions of interest for this regional-scale model are advective transport paths from potential contamination sites underlying the Nevada Test Site and Yucca Mountain.
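For reference, the composite scaled sensitivity for parameter b_j is commonly written as (a standard form from this literature, not quoted from the text above)

    css_j = \left[\frac{1}{ND}\sum_{i=1}^{ND}\left(\frac{\partial y_i'}{\partial b_j} b_j \omega_i^{1/2}\right)^2\right]^{1/2},

where y_i' is the simulated equivalent of observation i, \omega_i its weight, and ND the number of observations; a large css_j flags a parameter the observations can support during calibration.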
Strong Helioseismic Constraints on Weakly-Coupled Plasmas
NASA Astrophysics Data System (ADS)
Nayfonov, Alan
The extraordinary accuracy of helioseismic data allows detailed theoretical studies of solar plasmas. The necessity of producing solar models matching the experimental results in accuracy imposes strong constraints on the equations of state of solar plasmas. Several discrepancies between the experimental data and models have been successfully identified as the signatures of various non-ideal phenomena. Of particular interest are the questions of the positions of the energy levels and the continuum edge and of the effect of the excited states in the solar plasma. Calculations of energy level and continuum shifts, based on the Green function formalism, have appeared recently in the literature. These results have been used to examine the effects of the shifts on the thermodynamic quantities. A comparison with helioseismic data has shown that calculations based on lower-level approximations, such as static screening in the effective two-particle wave equation, agree very well with the experimental data. However, the case of full dynamic screening produces thermodynamic quantities inconsistent with observations. The study of the effect of different internal partition functions on a complete set of thermodynamic quantities has revealed the signature of the excited states in the MHD (Mihalas, Hummer, Dappen) equation of state. The presence of excited states causes a characteristic 'wiggle' in the thermodynamic quantities due to the density-dependent occupation probabilities. This effect is absent if the ACTEX (ACTivity EXpansion) equation of state is used. The wiggle has been found to be most prominent in the quantities sensitive to density. The size of this excited-states effect is well within the observational power of helioseismology, and very recent inversion analyses of helioseismic data seem to indicate the presence of the wiggle in the sun. This has potential importance for the helioseismic determination of the helium abundance of the sun.
Correlation consistent basis sets for lanthanides: The atoms La–Lu
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Qing; Peterson, Kirk A., E-mail: kipeters@wsu.edu
Using the 3rd-order Douglas-Kroll-Hess (DKH3) Hamiltonian, all-electron correlation consistent basis sets of double-, triple-, and quadruple-zeta quality have been developed for the lanthanide elements La through Lu. Basis sets designed for the recovery of valence correlation (defined here as 4f5s5p5d6s), cc-pVnZ-DK3, and outer-core correlation (valence + 4s4p4d), cc-pwCVnZ-DK3, are reported (n = D, T, and Q). Systematic convergence of both Hartree-Fock and correlation energies towards their respective complete basis set (CBS) limits is observed. Benchmark calculations of the first three ionization potentials (IPs) of La through Lu are reported at the DKH3 coupled cluster singles and doubles with perturbative triples, CCSD(T), level of theory, including effects of correlation down through the 4s electrons. Spin-orbit coupling is treated at the 2-component HF level. After extrapolation to the CBS limit, the average errors with respect to experiment were just 0.52, 1.14, and 4.24 kcal/mol for the 1st, 2nd, and 3rd IPs, respectively, compared to the average experimental uncertainties of 0.03, 1.78, and 2.65 kcal/mol, respectively. The new basis sets are also used in CCSD(T) benchmark calculations of the equilibrium geometries, atomization energies, and heats of formation for Gd2, GdF, and GdF3. Except for the equilibrium geometry and harmonic frequency of GdF, which are accurately known from experiment, all other calculated quantities represent significant improvements compared to the existing experimental quantities. With estimated uncertainties of about ±3 kcal/mol, the 0 K atomization energies (298 K heats of formation) are calculated to be (all in kcal/mol): 33.2 (160.1) for Gd2, 151.7 (-36.6) for GdF, and 447.1 (-295.2) for GdF3.
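As background on the CBS step, a common two-point extrapolation assumes the correlation energy converges as E_{corr}(n) = E_{corr}^{CBS} + A n^{-3} in the cardinal number n, giving

    E_{corr}^{CBS} = \frac{n^3 E_{corr}(n) - (n-1)^3 E_{corr}(n-1)}{n^3 - (n-1)^3};

this formula is quoted as a standard scheme, since the abstract does not state which extrapolation the authors used.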
27 CFR 24.304 - Chaptalization (Brix adjustment) and amelioration record.
Code of Federal Regulations, 2013 CFR
2013-04-01
... or wine is ameliorated, the quantity of pure dry sugar added to juice will be included as..., the quantity of pure dry sugar added for chaptalization is not considered ameliorating material... quantities will be recorded in wine gallons, and, where sugar is used, the quantity will be determined either...
27 CFR 24.304 - Chaptalization (Brix adjustment) and amelioration record.
Code of Federal Regulations, 2014 CFR
2014-04-01
... or wine is ameliorated, the quantity of pure dry sugar added to juice will be included as..., the quantity of pure dry sugar added for chaptalization is not considered ameliorating material... quantities will be recorded in wine gallons, and, where sugar is used, the quantity will be determined either...
27 CFR 24.304 - Chaptalization (Brix adjustment) and amelioration record.
Code of Federal Regulations, 2012 CFR
2012-04-01
... or wine is ameliorated, the quantity of pure dry sugar added to juice will be included as..., the quantity of pure dry sugar added for chaptalization is not considered ameliorating material... quantities will be recorded in wine gallons, and, where sugar is used, the quantity will be determined either...
27 CFR 24.304 - Chaptalization (Brix adjustment) and amelioration record.
Code of Federal Regulations, 2010 CFR
2010-04-01
... or wine is ameliorated, the quantity of pure dry sugar added to juice will be included as..., the quantity of pure dry sugar added for chaptalization is not considered ameliorating material... quantities will be recorded in wine gallons, and, where sugar is used, the quantity will be determined either...
27 CFR 24.304 - Chaptalization (Brix adjustment) and amelioration record.
Code of Federal Regulations, 2011 CFR
2011-04-01
... or wine is ameliorated, the quantity of pure dry sugar added to juice will be included as..., the quantity of pure dry sugar added for chaptalization is not considered ameliorating material... quantities will be recorded in wine gallons, and, where sugar is used, the quantity will be determined either...
Bahadori, Amir A; Sato, Tatsuhiko; Slaba, Tony C; Shavers, Mark R; Semones, Edward J; Van Baalen, Mary; Bolch, Wesley E
2013-10-21
NASA currently uses one-dimensional deterministic transport to generate values of the organ dose equivalent needed to calculate stochastic radiation risk following crew space exposures. In this study, organ absorbed doses and dose equivalents are calculated for 50th percentile male and female astronaut phantoms using both the NASA High Charge and Energy Transport Code to perform one-dimensional deterministic transport and the Particle and Heavy Ion Transport Code System to perform three-dimensional Monte Carlo transport. Two measures of radiation risk, effective dose and risk of exposure-induced death (REID) are calculated using the organ dose equivalents resulting from the two methods of radiation transport. For the space radiation environments and simplified shielding configurations considered, small differences (<8%) in the effective dose and REID are found. However, for the galactic cosmic ray (GCR) boundary condition, compensating errors are observed, indicating that comparisons between the integral measurements of complex radiation environments and code calculations can be misleading. Code-to-code benchmarks allow for the comparison of differential quantities, such as secondary particle differential fluence, to provide insight into differences observed in integral quantities for particular components of the GCR spectrum.
Skripnikov, L V; Titov, A V
2015-01-14
Recently, improved limits on the electron electric dipole moment and on the dimensionless constant kT,P, characterizing the strength of the T,P-odd pseudoscalar-scalar electron-nucleus neutral current interaction, in the H(3)Δ1 state of the ThO molecule were obtained by the ACME collaboration [J. Baron et al., Science 343, 269 (2014)]. The interpretation of the experiment in terms of these fundamental quantities is based on the results of a theoretical study of the appropriate ThO characteristics, the effective electric field acting on the electron, Eeff, and a parameter of the T,P-odd pseudoscalar-scalar interaction, WT,P, given in Skripnikov et al. [J. Chem. Phys. 139, 221103 (2013)] by the St. Petersburg group. To reduce the uncertainties of the given limits, we report improved calculations of the molecular state-specific quantities Eeff, 81.5 GV/cm, and WT,P, 112 kHz, with an uncertainty within 7% of the magnitudes. Thus, the values recommended for use as the upper limits of these quantities are 75.8 GV/cm and 104 kHz, respectively. The hyperfine structure constant, the molecule-frame dipole moment of the H(3)Δ1 state, and the H(3)Δ1 → X(1)Σ(+) transition energy, which, in general, can serve as measures of the reliability of the obtained Eeff and WT,P values, are also calculated. In addition, we report the first calculation of the g-factor for the H(3)Δ1 state of ThO. The results are compared to earlier experimental and theoretical studies, and a detailed analysis of the uncertainties of the calculations is given.
Magnetic field line random walk in two-dimensional dynamical turbulence
NASA Astrophysics Data System (ADS)
Wang, J. F.; Qin, G.; Ma, Q. M.; Song, T.; Yuan, S. B.
2017-08-01
The field line random walk (FLRW) of magnetic turbulence is one of the important topics in plasma physics and astrophysics. In this article, using the field line tracing method, the mean square displacement (MSD) of the FLRW is calculated on all possible length scales for pure two-dimensional turbulence with the damping dynamical model. We demonstrate that, in order to describe the FLRW with the damping dynamical model, a new dimensionless quantity R needs to be introduced. On different length scales, the dimensionless MSD shows different relationships with the dimensionless quantity R. Although the temporal effect affects the MSD of the FLRW and even changes the regimes of the FLRW, it does not affect the relationship between the dimensionless MSD and the dimensionless quantity R on any of the possible length scales.
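Concretely, the field line tracing method integrates the field line equations

    \frac{dx}{dz} = \frac{b_x(x, y, z)}{B_0}, \qquad \frac{dy}{dz} = \frac{b_y(x, y, z)}{B_0},

for an ensemble of lines through realizations of the turbulent field b, and the MSD is the ensemble average \langle (\Delta x)^2 \rangle as a function of distance z; in a diffusive regime \langle (\Delta x)^2 \rangle = 2 D_{FL} z defines the field line diffusion coefficient. These are the standard definitions; the damping dynamical model's specific spectrum is not reproduced here.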
Progress on China nuclear data processing code system
NASA Astrophysics Data System (ADS)
Liu, Ping; Wu, Xiaofei; Ge, Zhigang; Li, Songyang; Wu, Haicheng; Wen, Lili; Wang, Wenming; Zhang, Huanyu
2017-09-01
China is developing the nuclear data processing code Ruler, which can be used to produce multi-group cross sections and related quantities from evaluated nuclear data in the ENDF format [1]. Ruler includes modules for reconstructing cross sections over the entire energy range, generating Doppler-broadened cross sections for a given temperature, producing effective self-shielded cross sections in the unresolved resonance range, calculating scattering cross sections in the thermal energy range, generating group cross sections and matrices, and preparing WIMS-D format data files for the reactor physics code WIMS-D [2]. Ruler is written in Fortran-90 and has been tested on 32-bit computers under the Windows-XP and Linux operating systems. Verification of Ruler has been performed by comparison with results obtained with the NJOY99 [3] processing code, and validation has been performed using the WIMSD5B code.
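For illustration, here is a minimal sketch of the flux-weighted group collapse that underlies the "generating group cross sections" step; the 1/v cross section, 1/E weighting spectrum and 3-group boundaries are invented stand-ins, not Ruler's actual data or algorithms:

```python
import numpy as np

def collapse_to_groups(energy, sigma, flux, group_bounds):
    """Flux-weighted group-averaged cross sections:
    sigma_g = int(sigma * phi dE) / int(phi dE) over each group."""
    sigma_g = []
    for lo, hi in zip(group_bounds[:-1], group_bounds[1:]):
        m = (energy >= lo) & (energy < hi)
        num = np.trapz(sigma[m] * flux[m], energy[m])
        den = np.trapz(flux[m], energy[m])
        sigma_g.append(num / den)
    return np.array(sigma_g)

# Toy pointwise data: a 1/v-like cross section with a 1/E weighting flux.
E = np.logspace(-5, 7, 20000)               # eV
sigma = 10.0 / np.sqrt(E)                   # barns
phi = 1.0 / E                               # weighting spectrum
bounds = np.array([1e-5, 0.625, 1e3, 1e7])  # a crude 3-group structure
print(collapse_to_groups(E, sigma, phi, bounds))
```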
The gluon structure of hadrons and nuclei from lattice QCD
NASA Astrophysics Data System (ADS)
Shanahan, Phiala
2018-03-01
I discuss recent lattice QCD studies of the gluon structure of hadrons and light nuclei. After very briefly highlighting new determinations of the gluon contributions to the nucleon's momentum and spin, presented by several collaborations over the last year, I describe first calculations of gluon generalised form factors. The generalised transversity gluon distributions are of particular interest since they are purely gluonic; they do not mix with quark distributions at leading twist. In light nuclei they moreover provide a clean signature of non-nucleonic gluon degrees of freedom, and I present the first evidence for such effects, based on lattice QCD calculations. The planned Electron-Ion Collider, designed to access gluon structure quantities, will have the capability to test this prediction, and measure a range of gluon observables including generalised gluon distributions and transverse momentum dependent gluon distributions, within the next decade.
NASA Technical Reports Server (NTRS)
Chin, Mian; Ginoux, Paul; Dubovik, Oleg; Holben, Brent; Kaufman, Yoram; Chu, Allen; Anderson, Tad; Quinn, Patricia
2003-01-01
Aerosol climate forcing is one of the largest uncertainties in assessing the anthropogenic impact on the global climate system. This uncertainty arises from the poorly quantified aerosol sources, especially black carbon emissions, our limited knowledge of aerosol mixing state and optical properties, and the consequences of intercontinental transport of aerosols and their precursors. Here we use a global model GOCART to simulate atmospheric aerosols, including sulfate, black carbon, organic carbon, dust, and sea salt, from anthropogenic, biomass burning, and natural sources. We compare the model calculated aerosol extinction and absorption with those quantities from the ground-based sun photometer measurements from AERONET at several different wavelengths and the field observations from ACE-Asia, and model calculated total aerosol optical depth and fine mode fractions with the MODIS satellite retrieval. We will also estimate the intercontinental transport of pollution and dust aerosols from their source regions to other areas in different seasons.
The lagRST Model: A Turbulence Model for Non-Equilibrium Flows
NASA Technical Reports Server (NTRS)
Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.
2011-01-01
This study presents a new class of turbulence model designed for wall-bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer, and a transonic bump flow. In addition, a hypersonic shock wave/turbulent boundary layer interaction (SWTBLI) with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed: separation predictions are not as good as the baseline models, but the overprediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.
Seethaler, Pamela M; Fuchs, Lynn S; Fuchs, Douglas; Compton, Donald L
2012-02-01
The purpose of this study was to assess the value of dynamic assessment (DA; degree of scaffolding required to learn unfamiliar mathematics content) for predicting 1st-grade calculations (CA) and word problems (WP) development, while controlling for the role of traditional assessments. Among 184 1st graders, predictors (DA, Quantity Discrimination, Test of Mathematics Ability, language, and reasoning) were assessed near the start of 1st grade. CA and WP were assessed near the end of 1st grade. Planned regression and commonality analyses indicated that for forecasting CA development, Quantity Discrimination, which accounted for 8.84% of explained variance, was the single most powerful predictor, followed by Test of Mathematics Ability and DA; language and reasoning were not uniquely predictive. By contrast, for predicting WP development, DA was the single most powerful predictor, accounting for 12.01% of explained variance, with Test of Mathematics Ability, Quantity Discrimination, and language also uniquely predictive. Results suggest that different constellations of cognitive resources are required for CA versus WP development and that DA may be useful in predicting 1st-grade mathematics development, especially WP.
40 CFR 98.445 - Procedures for estimating missing data.
Code of Federal Regulations, 2013 CFR
2013-07-01
... quantities calculations is required. Whenever the monitoring procedures cannot be followed, you must use the...) A quarterly mass or volume of contents in containers received that is missing must be estimated as...
Low-energy electron-impact single ionization of helium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colgan, J.; Pindzola, M. S.; Childers, G.
2006-04-15
A study is made of low-energy electron-impact single ionization of ground-state helium. The time-dependent close-coupling method is used to calculate total integral, single differential, double differential, and triple differential ionization cross sections for impact electron energies ranging from 32 to 45 eV. For all quantities, the calculated cross sections are found to be in very good agreement with experiment, and for the triple differential cross sections, good agreement is also found with calculations made using the convergent close-coupling technique.
Key comparison CCPR-K1.a as an interlaboratory comparison of correlated color temperature
NASA Astrophysics Data System (ADS)
Kärhä, P.; Vaskuri, A.; Pulli, T.; Ikonen, E.
2018-02-01
We analyze the results of the spectral irradiance key comparison CCPR-K1.a for correlated color temperature (CCT). For 4 of the 13 participants, the uncertainties of CCT calculated using traditional methods, which do not account for correlations, would be too small. The reason for the failure of the traditional uncertainty calculation is spectral correlations, which produce systematic deviations of the same sign over certain wavelength regions. The results highlight the importance of accounting for such correlations when calculating uncertainties of spectrally integrated quantities.
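A minimal sketch of the effect described, assuming a simple linear model y = Σᵢ wᵢ xᵢ for the spectrally integrated quantity (the weights, uncertainties and correlation structure below are invented): with fully correlated spectral errors the propagated uncertainty approaches Σᵢ wᵢ σᵢ rather than the much smaller root-sum-square value.

```python
import numpy as np

def integrated_uncertainty(weights, sigma, corr):
    """Standard uncertainty of y = sum_i w_i x_i given per-wavelength
    standard uncertainties sigma_i and a correlation matrix corr."""
    cov = np.outer(sigma, sigma) * corr
    return np.sqrt(weights @ cov @ weights)

n = 50
w = np.full(n, 1.0 / n)          # stand-in for a spectral weighting function
s = np.full(n, 0.01)             # 1 % standard uncertainty per wavelength

uncorrelated = integrated_uncertainty(w, s, np.eye(n))
fully_corr = integrated_uncertainty(w, s, np.ones((n, n)))
print(uncorrelated, fully_corr)  # ~0.0014 vs 0.01: correlations dominate
```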
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pan, Guangsheng; Tan, Zhenyu, E-mail: tzy@sdu.edu.cn; Pan, Jie
In this work, a comparative study of the frequency effects on the electrical characteristics of pulsed dielectric barrier discharges in He/O2 and in Ar/O2 at atmospheric pressure has been performed by means of numerical simulation based on a 1-D fluid model at frequencies below 100 kHz. The frequency dependences of the characteristic quantities of the discharges in the two gases have been systematically calculated and analyzed for oxygen concentrations below 2%. The characteristic quantities include the discharge current density, the averaged electron density, the electric field, and the averaged electron temperature. The frequency effects on the averaged particle densities of the reactive species have also been calculated. This work gives the following significant results. For both gases, there are two bipolar discharges in one period of the applied voltage pulse over the considered frequency range and oxygen concentrations, as occurs in the pure noble gases. The frequency affects both discharges in He/O2, but in Ar/O2 it induces a strong effect only on the first discharge. For the first discharge in each gas, there is a characteristic frequency at which the characteristic quantities reach their respective minima, and this frequency appears earlier for Ar/O2. For the second discharge in Ar/O2, the averaged electron density presents only a slight variation with frequency. In addition, the discharge in Ar/O2 is stronger and the averaged electron temperature lower than in He/O2. The total averaged particle density of the reactive species in Ar/O2 is larger than that in He/O2 by about one order of magnitude.
Dose conversion coefficients for electron exposure of the human eye lens
NASA Astrophysics Data System (ADS)
Behrens, R.; Dietze, G.; Zankl, M.
2009-07-01
Recent epidemiological studies suggest a rather low dose threshold (below 0.5 Gy) for the induction of a cataract of the eye lens. Some other studies even assume that there is no threshold at all. Therefore, protection measures have to be optimized and current dose limits for the eye lens may be reduced in the future. Two questions arise from this situation: first, which dose quantity is related to the risk of developing a cataract, and second, which personal dose equivalent quantity is appropriate for monitoring this dose quantity. While the dose equivalent quantity Hp(0.07) has often been seen as being sufficiently accurate for monitoring the dose to the lens of the eye, this would be questionable in the case when the dose limits were reduced and, thus, it may be necessary to generally use the dose equivalent quantity Hp(3) for this purpose. The basis for a decision, however, must be the knowledge of accurate conversion coefficients from fluence to equivalent dose to the lens. This is especially important for low-penetrating radiation, for example, electrons. Formerly published values of conversion coefficients are based on quite simple models of the eye. In this paper, quite a sophisticated model of the eye including the inner structure of the lens was used for the calculations and precise conversion coefficients for electrons with energies between 0.2 MeV and 12 MeV, and for angles of radiation incidence between 0° and 45° are presented. Compared to the values adopted in 1996 by the International Commission on Radiological Protection (ICRP), the new values are up to 1000 times smaller for electron energies below 1 MeV, nearly equal at 1 MeV and above 4 MeV, and by a factor of 1.5 larger at about 1.5 MeV electron energy.
Chandon, Pierre; Ordabayeva, Nailya
2017-02-01
Five studies show that people, including experts such as professional chefs, estimate quantity decreases more accurately than quantity increases. We argue that this asymmetry occurs because physical quantities cannot be negative. Consequently, there is a natural lower bound (zero) when estimating decreasing quantities but no upper bound when estimating increasing quantities, which can theoretically grow to infinity. As a result, the "accuracy of less" disappears (a) when a numerical or a natural upper bound is present when estimating quantity increases, or (b) when people are asked to estimate the (unbounded) ratio of change from 1 size to another for both increasing and decreasing quantities. Ruling out explanations related to loss aversion, symbolic number mapping, and the visual arrangement of the stimuli, we show that the "accuracy of less" influences choice and demonstrate its robustness in a meta-analysis that includes previously published results. Finally, we discuss how the "accuracy of less" may explain asymmetric reactions to the supersizing and downsizing of food portions, some instances of the endowment effect, and asymmetries in the perception of increases and decreases in physical and psychological distance.
Abdominal auscultation does not provide clear clinical diagnoses.
Durup-Dickenson, Maja; Christensen, Marie Kirk; Gade, John
2013-05-01
Abdominal auscultation is a part of the clinical examination of patients, but the determining factors in bowel sound evaluation are poorly described. The aim of this study was to assess inter- and intra-observer agreement in physicians' evaluation of pitch, intensity and quantity in abdominal auscultation. A total of 100 physicians were presented with 20 bowel sound recordings in a blinded set-up. Recordings had been made in a mix of healthy volunteers and emergency patients. The physicians evaluated pitch, intensity and quantity of bowel sounds in a questionnaire with three, three and four categories of answers, respectively. Fleiss' multi-rater kappa (κ) coefficients were calculated for inter-observer agreement; for intra-observer agreement, the probability of agreement was calculated. Inter-observer agreement regarding pitch, intensity and quantity yielded κ-values of 0.19 (p < 0.0001), 0.30 (p < 0.0001) and 0.24 (p < 0.0001), respectively, corresponding to slight, fair and fair agreement. Regarding intra-observer agreement, the probability of agreement was 0.55 (95% confidence interval (CI): 0.51-0.59), 0.45 (95% CI: 0.42-0.49) and 0.41 (95% CI: 0.38-0.45) for pitch, intensity and quantity, respectively. Although relatively poor, observer agreement was slight to fair and thus better than expected by chance. Since the diagnostic value of auscultation increases with the addition of history and clinical findings, and may be further improved by systematic training, it should still be used in the examination of patients with acute abdominal pain.
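For reference, here is a minimal sketch of the Fleiss' kappa computation used for the inter-observer analysis; the rating counts below are randomly generated stand-ins, not the study's data:

```python
import numpy as np

def fleiss_kappa(ratings):
    """Fleiss' multi-rater kappa.

    ratings: (n_subjects, n_categories) array; ratings[i, j] is the number
    of raters assigning subject i to category j (equal rater counts assumed).
    """
    n = ratings.sum(axis=1)[0]                    # raters per subject
    p_j = ratings.sum(axis=0) / ratings.sum()     # overall category proportions
    P_i = ((ratings**2).sum(axis=1) - n) / (n * (n - 1))
    P_bar, P_e = P_i.mean(), (p_j**2).sum()
    return (P_bar - P_e) / (1 - P_e)

# Toy example: 100 raters, 20 recordings, 3 pitch categories.
rng = np.random.default_rng(1)
counts = rng.multinomial(100, [0.5, 0.3, 0.2], size=20)
print(fleiss_kappa(counts))  # near 0 for ratings with no real agreement
```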
Use of computer code for dose distribution studies in A 60CO industrial irradiator
NASA Astrophysics Data System (ADS)
Piña-Villalpando, G.; Sloan, D. P.
1995-09-01
This paper presents a benchmark comparison between calculated and experimental absorbed dose values for a typical product in a 60Co industrial irradiator located at ININ, México. The irradiator is a two-level, two-layer system with an overlapping product configuration and an activity of around 300 kCi. Experimental values were obtained from routine dosimetry using red acrylic pellets. The typical product was packages of Petri dishes with an apparent density of 0.13 g/cm3; this product was chosen because of its uniform size, large quantity and low density. The minimum dose was fixed at 15 kGy. Calculated values were obtained from the QAD-CGGP code. This code uses a point-kernel technique, build-up factors are fitted by geometric progression, and combinatorial geometry is used for the system description. The main modifications to the code concerned the source simulation, using point sources instead of pencils, and the inclusion of an energy spectrum and an anisotropic emission spectrum. For the maximum dose, the calculated value (18.2 kGy) was 8% higher than the experimental average value (16.8 kGy); for the minimum dose, the calculated value (13.8 kGy) was about 3% lower than the experimental average value (14.3 kGy).
ORBIT: A Code for Collective Beam Dynamics in High-Intensity Rings
NASA Astrophysics Data System (ADS)
Holmes, J. A.; Danilov, V.; Galambos, J.; Shishlo, A.; Cousineau, S.; Chou, W.; Michelotti, L.; Ostiguy, J.-F.; Wei, J.
2002-12-01
We are developing a computer code, ORBIT, specifically for beam dynamics calculations in high-intensity rings. Our approach allows detailed simulation of realistic accelerator problems. ORBIT is a particle-in-cell tracking code that transports bunches of interacting particles through a series of nodes representing elements, effects, or diagnostics that occur in the accelerator lattice. At present, ORBIT contains detailed models for strip-foil injection, including painting and foil scattering; rf focusing and acceleration; transport through various magnetic elements; longitudinal and transverse impedances; longitudinal, transverse, and three-dimensional space charge forces; collimation and limiting apertures; and the calculation of many useful diagnostic quantities. ORBIT is an object-oriented code, written in C++ and utilizing a scripting interface for the convenience of the user. Ongoing improvements include the addition of a library of accelerator maps, BEAMLINE/MXYZPTLK; the introduction of a treatment of magnet errors and fringe fields; the conversion of the scripting interface to the standard scripting language, Python; and the parallelization of the computations using MPI. The ORBIT code is an open source, powerful, and convenient tool for studying beam dynamics in high-intensity rings.
Poisson mixture model for measurements using counting.
Miller, Guthrie; Justus, Alan; Vostrotin, Vadim; Dry, Donald; Bertelli, Luiz
2010-03-01
Starting with the basic Poisson statistical model of a counting measurement process, 'extra-Poisson' variance or 'overdispersion' is included by assuming that the Poisson parameter representing the mean number of counts itself comes from another distribution. The Poisson parameter is assumed to be given by the quantity of interest in the inference process multiplied by a lognormally distributed normalising coefficient, plus an additional lognormal background that might be correlated with the normalising coefficient (shared uncertainty). The example of lognormal environmental background in uranium urine data is discussed. An additional uncorrelated background is also included, estimated from a background count measurement using Bayesian arguments. The rather complex formulas are validated using Monte Carlo. An analytical expression is obtained for the probability distribution of gross counts coming from the uncorrelated background, which allows straightforward calculation of a classical decision level in the form of a gross-count alarm point with a desired false-positive rate. The main purpose of this paper is to derive formulas for exact likelihood calculations in the case of various kinds of backgrounds.
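A minimal Monte Carlo sketch of the model described (Poisson counting on a lognormally distributed background rate) and of reading off a gross-count alarm point with a chosen false-positive rate; the paper derives exact analytical formulas, and the parameter values below are invented for illustration:

```python
import numpy as np

def decision_level(bkg_med, bkg_gsd, t, alpha=0.05, n=200_000, seed=2):
    """Gross-count alarm point with false-positive rate alpha when the
    background rate is lognormal (median bkg_med, geometric std bkg_gsd)
    and the counting process on top of it is Poisson."""
    rng = np.random.default_rng(seed)
    rate = bkg_med * bkg_gsd**rng.standard_normal(n)  # lognormal background
    gross = rng.poisson(rate * t)                     # Poisson counting
    return np.quantile(gross, 1.0 - alpha)

# Alarm point for a 60-s count with a lognormal background (GSD = 1.5):
print(decision_level(bkg_med=0.1, bkg_gsd=1.5, t=60.0))
```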
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g., forecast error, scaled error) of each metric are also provided. To compare models the package provides a generic skill score and percent better. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
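To make a few of the listed metrics concrete, here are plain-numpy versions of bias, RMSE, median symmetric accuracy and a generic skill score; these are illustrative re-implementations of the standard definitions, not PyForecastTools' actual API:

```python
import numpy as np

def bias(pred, obs):                 # mean error
    return np.mean(pred - obs)

def rmse(pred, obs):
    return np.sqrt(np.mean((pred - obs)**2))

def median_symmetric_accuracy(pred, obs):
    """100*(exp(median(|ln(pred/obs)|)) - 1), in percent; a robust accuracy
    measure for strictly positive data spanning orders of magnitude."""
    return 100.0 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1.0)

def skill_score(pred, ref, obs):
    """Generic skill score: 1 - MSE(model)/MSE(reference baseline)."""
    return 1.0 - np.mean((pred - obs)**2) / np.mean((ref - obs)**2)

obs = np.array([1.0, 2.0, 4.0, 8.0])
pred = obs * 1.2                       # a model that is 20 % high everywhere
ref = np.full_like(obs, obs.mean())    # climatology-style baseline
print(bias(pred, obs), rmse(pred, obs),
      median_symmetric_accuracy(pred, obs), skill_score(pred, ref, obs))
```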
Strategy for modeling putative multilevel ecosystems on Europa.
Irwin, Louis N; Schulze-Makuch, Dirk
2003-01-01
A general strategy for modeling ecosystems on other worlds is described. Two alternative biospheres beneath the ice surface of Europa are modeled, based on analogous ecosystems on Earth in potentially comparable habitats, with reallocation of biomass quantities consistent with different sources of energy and chemical constituents. The first ecosystem models a benthic biosphere supported by chemoautotrophic producers. The second models two concentrations of biota at the top and bottom of the subsurface water column supported by energy harvested from transmembrane ionic gradients. Calculations indicate the plausibility of both ecosystems, including small macroorganisms at the highest trophic levels, with ionotrophy supporting a larger biomass than chemoautotrophy.
Personalized Vehicle Energy Efficiency & Range Predictor/MyGreenCar
DOE Office of Scientific and Technical Information (OSTI.GOV)
SAXENA, SAMVEG
MyGreenCar provides users with the ability to predict the range capabilities, fuel economy, and operating costs of any vehicle for their individual driving patterns. Users launch the MyGreenCar mobile app on their smartphones to collect their driving patterns over any duration (e.g., several days, weeks, or months) using the phone's location capabilities. Using vehicle powertrain models for any user-specified vehicle type, MyGreenCar calculates the component-level energy and power interactions for the chosen vehicle to predict several important quantities, including: (1) for EVs, alleviating range anxiety; and (2) comparing fuel economy, operating costs, and payback time across vehicle models and types.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shi, Lei
Magnetic confinement fusion is one of the most promising approaches to achieving fusion energy. With the rapid increase of computational power over the past decades, numerical simulations have become an important tool for studying fusion plasmas. Eventually, numerical models will be used to predict the performance of future devices, such as the International Thermonuclear Experimental Reactor (ITER) or DEMO. However, the reliability of these models needs to be carefully validated against experiments before the results can be trusted. The validation between simulations and measurements is hard particularly because the quantities directly available from both sides are different. While the simulations have the information of the plasma quantities calculated explicitly, the measurements are usually in the form of diagnostic signals. The traditional way of making the comparison relies on the diagnosticians to interpret the measured signals as plasma quantities. The interpretation is in general very complicated and sometimes not even unique. In contrast, given the plasma quantities from the plasma simulations, we can unambiguously calculate the generation and propagation of the diagnostic signals. These calculations are called synthetic diagnostics, and they enable an alternative way to compare simulation results with measurements. In this dissertation, we present a platform for developing and applying synthetic diagnostic codes. Three diagnostics on the platform are introduced. The reflectometry and beam emission spectroscopy diagnostics measure the electron density, and the electron cyclotron emission diagnostic measures the electron temperature. The theoretical derivation and numerical implementation of a new two-dimensional Electron Cyclotron Emission Imaging code are discussed in detail. This new code has shown the potential to address many challenging aspects of present ECE measurements, such as runaway electron effects, and detection of the cross phase between the electron temperature and density fluctuations.
VizieR Online Data Catalog: Brussels nuclear reaction rate library (Aikawa+, 2005)
NASA Astrophysics Data System (ADS)
Aikawa, M.; Arnould, M.; Goriely, S.; Jorissen, A.; Takahashi, K.
2005-07-01
The present data is part of the Brussels nuclear reaction rate library (BRUSLIB) for astrophysics applications and concerns nuclear reaction rate predictions calculated within the statistical Hauser-Feshbach approximation and making use of global and coherent microscopic nuclear models for the quantities (nuclear masses, nuclear structure properties, nuclear level densities, gamma-ray strength functions, optical potentials) entering the rate calculations. (4 data files).
RMP Guidance for Offsite Consequence Analysis
Offsite consequence analysis (OCA) consists of a worst-case release scenario and alternative release scenarios. OCA is required from facilities with chemicals above threshold quantities. RMP*Comp software can be used to perform calculations described here.
Oyetayo, Folake Lucy; Ibitoye, Muyiwa Femi
2012-07-01
The fruit of the cherry tomato (Lycopersicon esculentum (Solanaceae)) was analysed for mineral and antinutrient composition. Phosphorus (33.04 ± 0.21 mg/100 g) was the most abundant mineral in the fruit, followed by calcium (32.04 ± 0.06 mg/100 g); potassium (11.9 ± 0.1 mg/100 g) and manganese (9.55 ± 0.28 mg/100 g) were also present in appreciable quantities. Antinutrients, including phytate, glycoside, saponin and tannin, were screened and quantified. Phytate (112.82 ± 0.1 mg/100 g), glycoside (2.33 ± 0.00 mg/100 g), saponin (1.31 ± 0.00 mg/100 g) and tannin (0.21 ± 0.00 mg/100 g) were present in the fruit, but phlobatannin and glycosides with steroidal rings were not found. The calculated calcium:phytate ratio of the fruits was below the critical value, and the calculated [calcium][phytate]:[zinc] molar ratio was less than the critical value. The calcium:phosphorus ratio (0.97) shows the fruit to be a good source of these nutrients (a Ca/P ratio below 0.5 indicates a deficiency of these minerals), while the sodium:potassium ratio was less than 1 (a Na/K ratio above 1 is detrimental because of excessive sodium levels). The results of the study generally revealed the fruit to be rich in minerals but containing insufficient quantities of antinutrients to result in poor mineral bioavailability.
Updated and revised neutron reaction data for 236,238Np
NASA Astrophysics Data System (ADS)
Chen, Guochang; Wang, Jimin; Cao, Wentian; Tang, Guoyou; Yu, Baosheng
2017-09-01
Nuclear data with high accuracy for minor actinides play an important role in nuclear technology applications, including reactor design and operation, the fuel cycle, estimation of the amount of minor actinides in high burn-up reactors, and minor actinide transmutation. Based on a new set of neutron optical model parameters and the reaction cross section systematics of fissile isotopes, a full set of 236,238Np neutron reaction data from 10^-5 eV to 20 MeV has been updated and improved through theoretical calculation. The mainly revised quantities include the total, elastic, inelastic, fission, (n, 2n) and (n, γ) reaction cross sections as well as angular distributions. Promising results are obtained, and the updated evaluated data will replace the corresponding data in the CENDL-3.1 database.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nakashima, Hiroyuki; Nakatsuji, Hiroshi
2008-12-12
The local energy, defined by Hψ/ψ, must be equal to the exact energy E at any coordinate of an atom or molecule, as long as the ψ under consideration is exact. The discrepancy of this quantity from E is a stringent test of the accuracy of the calculated wave function. The H-square error for a normalized ψ, defined by σ² ≡ ⟨ψ|(H−E)²|ψ⟩, is also a severe test of the accuracy. Using these quantities, we have examined the accuracy of our wave function for the helium atom calculated using the free complement method that was developed to solve the Schroedinger equation. Together with the variational upper bound, the lower bound of the exact energy calculated using a modified Temple's formula ensured the definitely correct value of the helium fixed-nucleus ground-state energy to be -2.903 724 377 034 119 598 311 159 245 194 4 a.u., which is correct to 32 digits.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Briceno, Raul A.; Hansen, Maxwell T.; Monahan, Christopher J.
Lattice quantum chromodynamics (QCD) provides the only known systematic, nonperturbative method for first-principles calculations of nucleon structure. However, for quantities such as light-front parton distribution functions (PDFs) and generalized parton distributions (GPDs), the restriction to Euclidean time prevents direct calculation of the desired observable. Recently, progress has been made in relating these quantities to matrix elements of spatially nonlocal, zero-time operators, referred to as quasidistributions. Still, even for these time-independent matrix elements, potential subtleties have been identified in the role of the Euclidean signature. In this work, we investigate the analytic behavior of spatially nonlocal correlation functions and demonstrate that the matrix elements obtained from Euclidean lattice QCD are identical to those obtained using the Lehmann-Symanzik-Zimmermann reduction formula in Minkowski space. After arguing the equivalence on general grounds, we also show that it holds in a perturbative calculation, where special care is needed to identify the lattice prediction. Lastly, we present a proof of the uniqueness of the matrix elements obtained from Minkowski and Euclidean correlation functions to all orders in perturbation theory.
Comparing fully general relativistic and Newtonian calculations of structure formation
NASA Astrophysics Data System (ADS)
East, William E.; Wojtak, Radosław; Abel, Tom
2018-02-01
In the standard approach to studying cosmological structure formation, the overall expansion of the Universe is assumed to be homogeneous, with the gravitational effect of inhomogeneities encoded entirely in a Newtonian potential. A topic of ongoing debate is to what degree this fully captures the dynamics dictated by general relativity, especially in the era of precision cosmology. To quantitatively assess this, we directly compare standard N-body Newtonian calculations to full numerical solutions of the Einstein equations, for cold matter with various magnitude initial inhomogeneities on scales comparable to the Hubble horizon. We analyze the differences in the evolution of density, luminosity distance, and other quantities defined with respect to fiducial observers. This is carried out by reconstructing the effective spacetime and matter fields dictated by the Newtonian quantities, and by taking care to distinguish effects of numerical resolution. We find that the fully general relativistic and Newtonian calculations show excellent agreement, even well into the nonlinear regime. They only notably differ in regions where the weak gravity assumption breaks down, which arise when considering extreme cases with perturbations exceeding standard values.
Thrust augmentation nozzle (TAN) concept for rocket engine booster applications
NASA Astrophysics Data System (ADS)
Forde, Scott; Bulman, Mel; Neill, Todd
2006-07-01
Aerojet used the patented thrust augmented nozzle (TAN) concept to validate a unique means of increasing sea-level thrust in a liquid rocket booster engine. We have used knowledge gained from hypersonic Scramjet research to inject propellants into the supersonic region of the rocket engine nozzle to significantly increase sea-level thrust without significantly impacting specific impulse. The TAN concept overcomes conventional engine limitations by injecting propellants and combusting in an annular region in the divergent section of the nozzle. This injection of propellants at moderate pressures allows for obtaining high thrust at takeoff without overexpansion thrust losses. The main chamber is operated at a constant pressure while maintaining a constant head rise and flow rate of the main propellant pumps. Recent hot-fire tests have validated the design approach and thrust augmentation ratios. Calculations of nozzle performance and wall pressures were made using computational fluid dynamics analyses with and without thrust augmentation flow, resulting in good agreement between calculated and measured quantities including augmentation thrust. This paper describes the TAN concept, the test setup, test results, and calculation results.
NASA Astrophysics Data System (ADS)
Dragoni, Daniele; Daff, Thomas D.; Csányi, Gábor; Marzari, Nicola
2018-01-01
We show that the Gaussian Approximation Potential (GAP) machine-learning framework can describe complex magnetic potential energy surfaces, taking ferromagnetic iron as a paradigmatic challenging case. The training database includes total energies, forces, and stresses obtained from density-functional theory in the generalized-gradient approximation, and comprises approximately 150,000 local atomic environments, ranging from pristine and defected bulk configurations to surfaces and generalized stacking faults with different crystallographic orientations. We find the structural, vibrational, and thermodynamic properties of the GAP model to be in excellent agreement with those obtained directly from first-principles electronic-structure calculations. There is good transferability to quantities, such as Peierls energy barriers, which are determined to a large extent by atomic configurations that were not part of the training set. We observe the benefit and the need of using highly converged electronic-structure calculations to sample a target potential energy surface. The end result is a systematically improvable potential that can achieve the same accuracy of density-functional theory calculations, but at a fraction of the computational cost.
Exact calculation of distributions on integers, with application to sequence alignment.
Newberg, Lee A; Lawrence, Charles E
2009-01-01
Computational biology is replete with high-dimensional discrete prediction and inference problems. Dynamic programming recursions can be applied to several of the most important of these, including sequence alignment, RNA secondary-structure prediction, phylogenetic inference, and motif finding. In these problems, attention is frequently focused on some scalar quantity of interest, a score, such as an alignment score or the free energy of an RNA secondary structure. In many cases, score is naturally defined on integers, such as a count of the number of pairing differences between two sequence alignments, or else an integer score has been adopted for computational reasons, such as in the test of significance of motif scores. The probability distribution of the score under an appropriate probabilistic model is of interest, such as in tests of significance of motif scores, or in calculation of Bayesian confidence limits around an alignment. Here we present three algorithms for calculating the exact distribution of a score of this type; then, in the context of pairwise local sequence alignments, we apply the approach so as to find the alignment score distribution and Bayesian confidence limits.
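A minimal sketch of the core idea for the special case where the integer score is a sum of independent per-position contributions: the exact distribution follows from repeated convolution of the per-position pmfs (the paper's algorithms handle the dependencies that arise in full dynamic programming recursions). The two-outcome scoring model below is hypothetical:

```python
import numpy as np

def exact_score_distribution(step_pmfs):
    """Exact pmf of an integer score that is a sum of independent integer
    contributions, built by repeated convolution (a simple dynamic program)."""
    dist = np.array([1.0])               # pmf of the empty sum: score 0
    for pmf in step_pmfs:
        dist = np.convolve(dist, pmf)    # add one position's contribution
    return dist

# Toy model: at each of 10 alignment positions the score contribution is
# 0 (mismatch, p = 0.6) or 1 (match, p = 0.4).
pmfs = [np.array([0.6, 0.4])] * 10
dist = exact_score_distribution(pmfs)
print(dist.sum(), dist.argmax())   # pmf sums to 1; most probable total score
```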
Spectral parameters and Hamaker constants of silicon hydride compounds and organic solvents.
Masuda, Takashi; Matsuki, Yasuo; Shimoda, Tatsuya
2009-12-15
Cyclopentasilane (CPS) and polydihydrosilane, which consist of hydrogen and silicon only, are unique materials that can be used to produce intrinsic silicon film in a liquid process, such as spin coating or an ink-jet method. The wettability and solubility of general organic solvents, including the above, can be estimated from Hamaker constants, which are calculated according to the Lifshitz theory. In order to calculate a Hamaker constant by the simple spectral method (SSM), it is necessary to obtain the absorption frequency and the oscillator strength function in the ultraviolet region. In this report, these physical quantities were obtained by means of an optical method. Examination of the relation between molecular structures and the ultraviolet absorption frequencies obtained from various liquid materials led to the conclusion that ultraviolet absorption frequencies become smaller as electrons are delocalized. In particular, the absorption frequencies were found to be very small for CPS and polydihydrosilane due to the σ-conjugation of their electrons. The Hamaker constants of CPS and polydihydrosilane were successfully calculated based on the obtained absorption frequency and oscillator strength function.
The measurement of energy exchange in man: an analysis.
Webb, P
1980-06-01
This report analyzes two kinds of studies of human energy balance; direct and indirect calorimetry for 24-hr periods, and complete measurements of food intake, waste, and tissue storage for 3 weeks and longer. Equations of energy balance are written to show that the daily quantity of metabolic energy, QM, is coupled with an unidentified quantity of unmeasured energy, QX, in order to make the equation balance. The equations challenge the assumed equivalence of direct and indirect calorimetry. The analysis takes the form of employing experimental data to calculate values for the arguable quantity, QX. Studies employing 24-hr direct calorimetry, 202 complete days, show that when food intake nearly matches QM, values for QX are small and probably insignificant, but when there is a large food deficit, large positive values for QX appear. Calculations are also made from studies of nutrient balance during prolonged overeating and undereating, and in nearly all cases there were large negative values for QX. In 52 sets of data from studies lasting 3 weeks or longer, where all the terms in the balance equation except QX were either directly measured or could be readily estimated, the average value for QX amounts to 705 kcal/day, or 27% of QM. A discussion of the nature of QX considers error and the noninclusion of small quantities like the energy of combustible gases, which are not thought to be sufficient to explain QX. It might represent the cost of mobilizing stored fuel, or of storing excess fuel, or it might represent a change in internal energy other than fuel stores, but none of these is thought to be likely. Finally, it is emphasized that entropy exchange in man as an open thermodynamic system is not presently included in the equations of energy balance, and perhaps it must be, even though it is not directly measurable. The significance of unmeasured energy is considered in light of the poor control of obesity, of the inability to predict weight change during prolonged diet restriction or intentional overeating, and of the energetics of tissue gain in growth and loss in cachexia. It is not even well established how much food man requires to maintain constant weight. New studies as they are undertaken should try to account completely for all the possible terms of energy exchange.
Calabi-Yau Volumes and Reflexive Polytopes
NASA Astrophysics Data System (ADS)
He, Yang-Hui; Seong, Rak-Kyeong; Yau, Shing-Tung
2018-04-01
We study various geometrical quantities for Calabi-Yau varieties realized as cones over Gorenstein Fano varieties, obtained as toric varieties from reflexive polytopes in various dimensions. Focus is placed on reflexive polytopes up to dimension 4, and the minimized volumes of the Sasaki-Einstein base of the corresponding Calabi-Yau cone are calculated. In doing so, we conjecture new bounds for the Sasaki-Einstein volume with respect to various topological quantities of the corresponding toric varieties. We give interpretations of these volume bounds in the context of associated field theories via the AdS/CFT correspondence.
Nuclear Data Uncertainties for Typical LWR Fuel Assemblies and a Simple Reactor Core
NASA Astrophysics Data System (ADS)
Rochman, D.; Leray, O.; Hursin, M.; Ferroukhi, H.; Vasiliev, A.; Aures, A.; Bostelmann, F.; Zwermann, W.; Cabellos, O.; Diez, C. J.; Dyrda, J.; Garcia-Herranz, N.; Castro, E.; van der Marck, S.; Sjöstrand, H.; Hernandez, A.; Fleming, M.; Sublet, J.-Ch.; Fiorito, L.
2017-01-01
The impact of the covariances in current nuclear data libraries such as ENDF/B-VII.1, JEFF-3.2, JENDL-4.0, SCALE and TENDL on relevant current reactors is presented in this work. The uncertainties due to nuclear data are calculated for existing PWR and BWR fuel assemblies (with burn-up up to 40 GWd/tHM, followed by 10 years of cooling time) and for a simplified PWR full-core model (without burn-up) for quantities such as k∞, macroscopic cross sections, pin power and isotope inventory. In this work, the method of propagation of uncertainties is based on random sampling of nuclear data, either from covariance files or directly from basic parameters. Additionally, possible biases on calculated quantities, such as from the self-shielding treatment, are investigated. Different calculation schemes are used, based on CASMO, SCALE, DRAGON, MCNP or FISPACT-II, thus simulating real-life assignments for technical-support organizations. The outcome of such a study is a comparison of uncertainties with two consequences. First, although this study is not expected to lead to identical results across the calculation schemes involved, it provides insight into what can happen when calculating uncertainties and gives some perspective on their range of validity. Second, it allows us to draw a picture of the state of knowledge as of today, using existing nuclear data library covariances and current methods.
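A minimal sketch of the random-sampling propagation approach, with a toy two-parameter "model" standing in for a real lattice or transport code; the mean vector and covariance matrix below are invented for illustration:

```python
import numpy as np

def propagate(model, mean, cov, n_samples=500, seed=3):
    """Propagate nuclear-data covariances into an output quantity by random
    sampling: draw correlated input sets, run the model, report mean/std."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(mean, cov, size=n_samples)
    out = np.array([model(x) for x in samples])
    return out.mean(), out.std(ddof=1)

# Stand-in "model": k-infinity as the ratio of production to absorption built
# from two perturbed macroscopic cross sections (illustrative only).
def k_inf(x):
    nu_sigma_f, sigma_a = x
    return nu_sigma_f / sigma_a

mean = np.array([0.035, 0.030])                      # macroscopic XS, 1/cm
cov = np.array([[1.2e-7, 4.0e-8], [4.0e-8, 9.0e-8]]) # with cross-correlation
print(propagate(k_inf, mean, cov))
```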
Improvements in sub-grid, microphysics averages using quadrature based approaches
NASA Astrophysics Data System (ADS)
Chowdhary, K.; Debusschere, B.; Larson, V. E.
2013-12-01
Sub-grid variability in microphysical processes plays a critical role in atmospheric climate models. In order to account for this sub-grid variability, Larson and Schanen (2013) propose placing a probability density function on the sub-grid cloud microphysics quantities, e.g. autoconversion rate, essentially interpreting the cloud microphysics quantities as a random variable in each grid box. Random sampling techniques, e.g. Monte Carlo and Latin Hypercube, can be used to calculate statistics, e.g. averages, on the microphysics quantities, which then feed back into the model dynamics on the coarse scale. We propose an alternate approach using numerical quadrature methods based on deterministic sampling points to compute the statistical moments of microphysics quantities in each grid box. We have performed a preliminary test on the Kessler autoconversion formula, and, upon comparison with Latin Hypercube sampling, our approach shows an increased level of accuracy with a reduction in sample size by almost two orders of magnitude. Application to other microphysics processes is the subject of ongoing research.
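A minimal sketch of the proposed idea, assuming a Kessler-type autoconversion rate and a normal sub-grid pdf for cloud water qc (the actual scheme may use other pdfs and microphysics): a 16-node Gauss-Hermite rule reproduces the grid-box average that Monte Carlo needs roughly 10^5 random samples to pin down.

```python
import numpy as np

def kessler_autoconversion(qc, k=1e-3, qc0=5e-4):
    """Kessler-type rate: proportional to the cloud water above a threshold."""
    return k * np.maximum(qc - qc0, 0.0)

def gh_average(mu, sigma, n_nodes=16):
    """E[f(qc)] for qc ~ N(mu, sigma^2) by Gauss-Hermite quadrature."""
    x, w = np.polynomial.hermite.hermgauss(n_nodes)
    vals = kessler_autoconversion(mu + np.sqrt(2.0) * sigma * x)
    return np.sum(w * vals) / np.sqrt(np.pi)

mu, sigma = 6e-4, 2e-4                       # sub-grid cloud water pdf (kg/kg)
rng = np.random.default_rng(4)
mc = kessler_autoconversion(rng.normal(mu, sigma, 100_000)).mean()
print(gh_average(mu, sigma), mc)             # 16 nodes vs 1e5 random samples
```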
A history of slide rules for blackbody radiation computations
NASA Astrophysics Data System (ADS)
Johnson, R. Barry; Stewart, Sean M.
2012-10-01
During the Second World War the importance of utilizing detection devices capable of operating in the infrared portion of the electromagnetic spectrum was firmly established. Up until that time, laboriously constructed tables for blackbody radiation needed to be used in calculations involving the amount of radiation radiated within a given spectral region or for other related radiometric quantities. To rapidly achieve reasonably accurate calculations of such radiometric quantities, a blackbody radiation calculator was devised in slide rule form first in Germany in 1944 and soon after in England and the United States. In the immediate decades after its introduction, the radiation slide rule was widely adopted and recognized as a useful and important tool for engineers and scientists working in the infrared field. It reached its pinnacle in the United States in 1970 in a rule introduced by Electro Optical Industries, Inc. With the onset in the latter half of the 1970s of affordable, hand-held electronic calculators, the impending demise of the radiation slide rule was evident. No longer the calculational device of choice, the radiation slide rule all but disappeared within a few short years. Although today blackbody radiation calculations can be readily accomplished using anything from a programmable pocket calculator upwards, with each device making use of a wide variety of numerical approximations to the integral of Planck's function, radiation slide rules were in the early decades of infrared technology the definitive "workhorse" for those involved in infrared systems design and engineering. This paper presents a historical development of radiation slide rules with many versions being discussed.
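The computation these slide rules mechanized, in-band integration of the Planck function, takes only a few lines today; a sketch (constants in SI units, wavelengths in meters, quadrature grid and band edges chosen for illustration):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck(lam, T):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return 2 * H * C**2 / lam**5 / np.expm1(H * C / (lam * KB * T))

def in_band_radiance(lam1, lam2, T, n=20_000):
    """Radiance emitted between lam1 and lam2 by simple quadrature --
    the quantity radiation slide rules were built to approximate."""
    lam = np.linspace(lam1, lam2, n)
    return np.trapz(planck(lam, T), lam)

# Fraction of a 300 K blackbody's radiance in the 8-14 micron window:
total = in_band_radiance(1e-6, 1e-3, 300.0)   # effectively the full spectrum
band = in_band_radiance(8e-6, 14e-6, 300.0)
print(band / total)                           # roughly 0.38
```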
Siggaard-Andersen, O; Siggaard-Andersen, M
1990-01-01
Input parameters for the program are the arterial pH, pCO2, and pO2 (measured by a blood gas analyzer) and the oxygen saturation, carboxy-, met-, and total hemoglobin (measured by a multi-wavelength spectrometer), supplemented by patient age, sex, temperature, inspired oxygen fraction, fraction of fetal hemoglobin, and ambient pressure. Output parameters are the inspired and alveolar oxygen partial pressures; pH, pCO2 and pO2 referred to the actual patient temperature; estimated shunt fraction; half-saturation tension; estimated 2,3-diphosphoglycerate concentration; oxygen content and oxygen capacity; extracellular base excess; and plasma bicarbonate concentration. Three parameters related to blood oxygen availability are calculated: the oxygen extraction tension, the concentration of extractable oxygen, and the oxygen compensation factor. Calculations of the 'reverse' type may also be performed, so that the effect of therapeutic measures on the oxygen status or the acid-base status can be predicted. The user may choose among several different units of measurement and two different conventions for symbols. The results are presented in a data display screen comprising all quantities together with age-, sex-, and temperature-adjusted reference values. The program generates a 'laboratory diagnosis' of the oxygen status and the acid-base status and three graphs illustrating the oxygen status and the acid-base status of the patient: the oxygen graph, the acid-base chart and the blood gas map. A one-page A4 printed summary including a graphical display can be produced with an Epson or HP Laser compatible printer. The program is primarily intended for routine laboratories with a blood gas analyzer combined with a multi-wavelength spectrometer. Calculating the derived quantities may enhance the usefulness of the analyzers and improve patient care. The program may also be used as a teaching aid in acid-base and respiratory physiology. The program requires an IBM PC, XT, AT or similar compatible computer running under DOS version 2.11 or later. A VGA color monitor is preferred, but the program also supports EGA, CGA, and Hercules monitors. The program will be freely available at the cost of a diskette and mailing expenses by courtesy of Radiometer Medical A/S, Emdrupvej 72, DK-2400 Copenhagen NV, Denmark (valid through 1991). A simplified algorithm for a programmable pocket calculator avoiding iterative calculations is given as an appendix.
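As one concrete example of a derived quantity such a program reports, the plasma bicarbonate concentration follows from pH and pCO2 via the Henderson-Hasselbalch equation; this is a textbook sketch, not the program's actual algorithm:

```python
def plasma_bicarbonate(ph, pco2_kpa):
    """Plasma bicarbonate (mmol/L) from pH and pCO2 via the
    Henderson-Hasselbalch relation, with pCO2 in kPa
    (CO2 solubility ~0.23 mmol/L/kPa, pK = 6.1)."""
    return 0.23 * pco2_kpa * 10.0**(ph - 6.1)

print(plasma_bicarbonate(7.40, 5.33))   # ~24 mmol/L for normal arterial blood
```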
Du, Bing; Liu Aimin; Huang, Yeru
2014-09-01
Polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in soil samples were analyzed by the isotope dilution method with high-resolution gas chromatography and high-resolution mass spectrometry (ID-HRGC/HRMS), and the toxic equivalent quantity (TEQ) was calculated. The impacts of the major sources of measurement uncertainty are discussed, and the combined relative standard uncertainties were calculated for each 2,3,7,8-substituted congener. Furthermore, the concentration, combined uncertainty and expanded uncertainty for the TEQ of PCDD/Fs in a soil sample under the I-TEF, WHO-1998-TEF and WHO-2005-TEF schemes are provided as an example. I-TEF, WHO-1998-TEF and WHO-2005-TEF are toxic equivalency factor (TEF) evaluation schemes, all currently used to describe the relative potencies of the 2,3,7,8-substituted congeners.
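The TEQ itself is just a TEF-weighted sum over the 2,3,7,8-substituted congeners; a minimal sketch with hypothetical concentrations (the TEF values shown are the WHO-2005 factors for three example congeners):

```python
# Minimal sketch: TEQ is the TEF-weighted sum of congener concentrations.
who2005_tef = {
    "2,3,7,8-TCDD": 1.0,
    "1,2,3,7,8-PeCDD": 1.0,
    "2,3,7,8-TCDF": 0.1,
}
concentrations = {            # hypothetical values, pg/g dry weight
    "2,3,7,8-TCDD": 0.8,
    "1,2,3,7,8-PeCDD": 1.5,
    "2,3,7,8-TCDF": 4.2,
}
teq = sum(concentrations[c] * who2005_tef[c] for c in concentrations)
print(f"TEQ = {teq:.2f} pg TEQ/g")   # 0.8 + 1.5 + 0.42 = 2.72
```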
NASA Astrophysics Data System (ADS)
Hendi, S. H.; Panahiyan, S.
2014-12-01
Motivated by string corrections on the gravity and electrodynamics sides, we consider a quadratic Maxwell invariant term as a correction to the Maxwell Lagrangian to obtain exact solutions of higher dimensional topological black holes in Gauss-Bonnet gravity. We first investigate the asymptotically flat solutions and obtain conserved and thermodynamic quantities which satisfy the first law of thermodynamics. We also analyze the thermodynamic stability of the solutions by calculating the heat capacity and the Hessian matrix. Then, we focus on horizon-flat solutions with an anti-de Sitter (AdS) asymptote and produce a rotating spacetime with a suitable transformation. In addition, we calculate the conserved and thermodynamic quantities for asymptotically AdS black branes, which satisfy the first law of thermodynamics. Finally, we apply thermodynamic stability criteria to investigate the effects of the nonlinear electrodynamics in the canonical and grand canonical ensembles.
Electro-quasistatic analysis of an electrostatic induction micromotor using the cell method.
Monzón-Verona, José Miguel; Santana-Martín, Francisco Jorge; García-Alonso, Santiago; Montiel-Nelson, Juan Antonio
2010-01-01
An electro-quasistatic analysis of an induction micromotor has been carried out using the Cell Method. We employed the direct Finite Formulation (FF) of the electromagnetic laws, hence avoiding a further discretization. The Cell Method (CM) is used to solve the field equations over the entire domain (2D space) of the micromotor. We have reformulated the field laws in a direct FF and analyzed the physical quantities to make explicit the relationship between magnitudes and laws. We applied a primal-dual barycentric discretization of the 2D space. The electric potential has been calculated on each node of the primal mesh using the CM. For verification purposes, an analytical electric potential equation is introduced as a reference. In the frequency domain, the results demonstrate that the error in the calculated potential is negligible (<3‰). In the time domain, the potential in the transient state tends to the steady-state value.
Characteristics and Generation of Household Hazardous Waste (HHW) in Semarang City Indonesia
NASA Astrophysics Data System (ADS)
Fikri, Elanda; Purwanto; Sunoko, Henna Rya
2018-02-01
Most Household Hazardous Waste (HHW) is currently mixed with domestic waste, where it can harm human health and environmental quality. One important aspect of a management strategy is to determine the quantity generated and the characteristics of HHW. The method used to determine the characteristics of HHW follows SNI 19-2454-2002, while HHW generation follows SNI 19-3694-1994, calculated by weight and volume. Research was conducted in four districts of Semarang. The sample comprised 400 families, calculated with the Slovin formula. The HHW in Semarang City is mainly infectious (79%), followed by poisonous (13%), combustible (6%), and corrosive (2%) materials. The quantity of HHW generated is 0.01 kg/person/day, equivalent to 5.1% of the municipal solid waste (MSW) in Semarang (linear equations: y = 1.278x + 82.00 (volume), y = 0.216x + 13.89 (weight)).
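A sketch of the Slovin formula used for the sample size, n = N / (1 + N e^2); the household population figure below is hypothetical:

    def slovin(population, margin_of_error=0.05):
        """Slovin's formula for the required sample size."""
        return population / (1.0 + population * margin_of_error ** 2)

    # For any large N the result saturates near 1/e^2 = 400 families,
    # consistent with the 400 sampled households reported above.
    print(round(slovin(500_000)))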
NASA Technical Reports Server (NTRS)
Henderson, Robert A.; Schrag, Robert L.
1987-01-01
A method of modelling a system consisting of a cylindrical coil with its axis perpendicular to a metal plate of finite thickness, and a simple electrical circuit for producing a transient current in the coil, is discussed in the context of using such a system for de-icing aircraft surfaces. A transmission line model of the coil and metal plate is developed as the heart of the system model. It is shown that this transmission line model is central to the calculation of the coil impedance, the coil current, the magnetic fields established on the surfaces of the metal plate, and the resultant total force between the coil and the plate. FORTRAN algorithms were developed for numerical calculation of each of these quantities, and the algorithms were applied to an experimental prototype system in which these quantities had been measured. Good agreement is seen between the predicted and measured results.
Exploration and Trapping of Mortal Random Walkers
NASA Astrophysics Data System (ADS)
Yuste, S. B.; Abad, E.; Lindenberg, Katja
2013-05-01
Exploration and trapping properties of random walkers that may evanesce at any time as they walk have seen very little treatment in the literature, and yet a finite lifetime is a frequent occurrence, and its effects on a number of random walk properties may be profound. For instance, whereas the average number of distinct sites visited by an immortal walker grows with time without bound, that of a mortal walker may, depending on dimensionality and rate of evanescence, remain finite or keep growing with the passage of time. This number can in turn be used to calculate other classic quantities such as the survival probability of a target surrounded by diffusing traps. If the traps are immortal, the survival probability will vanish with increasing time. However, if the traps are evanescent, the target may be spared a certain death. We analytically calculate a number of basic and broadly used quantities for evanescent random walkers.
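A Monte Carlo sketch of the central quantity, the mean number of distinct sites visited by a one-dimensional walker that evanesces with probability p per step (all parameters illustrative):

    import random

    def mean_distinct_sites(p_death, max_steps=10_000, n_walkers=2000, seed=1):
        """Average number of distinct lattice sites visited by mortal 1D walkers."""
        rng = random.Random(seed)
        total = 0
        for _ in range(n_walkers):
            x, visited = 0, {0}
            for _ in range(max_steps):
                if rng.random() < p_death:      # the walker evanesces
                    break
                x += rng.choice((-1, 1))
                visited.add(x)
            total += len(visited)
        return total / n_walkers

    # A finite lifetime makes the mean saturate instead of growing without bound.
    print(mean_distinct_sites(p_death=0.01))
    print(mean_distinct_sites(p_death=0.001))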
NASA Technical Reports Server (NTRS)
Bonavito, N. L.; Nagai, O.; Tanaka, T.
1975-01-01
Previous spin wave theories of the antiferromagnet hematite were extended. The behavior of thermodynamic quantities around the Morin transition temperature was studied, and the latent heat of the Morin transition was calculated. The temperature dependence of the antiferromagnetic resonance frequency and the parallel and perpendicular critical spin-flop magnetic fields were calculated. It was found that the theory agrees well with experiment.
Monte Carlo calculation of dynamical properties of the two-dimensional Hubbard model
NASA Technical Reports Server (NTRS)
White, S. R.; Scalapino, D. J.; Sugar, R. L.; Bickers, N. E.
1989-01-01
A new method is introduced for analytically continuing imaginary-time data from quantum Monte Carlo calculations to the real-frequency axis. The method is based on a least-squares-fitting procedure with constraints of positivity and smoothness on the real-frequency quantities. Results are shown for the single-particle spectral-weight function and density of states for the half-filled, two-dimensional Hubbard model.
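A sketch of the flavor of such a constrained fit: invert the imaginary-time kernel by least squares with positivity imposed as a bound and smoothness as a ridge penalty (grids, kernel, and penalty weight are illustrative choices, not the authors'):

    import numpy as np
    from scipy.optimize import lsq_linear

    beta = 10.0
    tau = np.linspace(0.0, beta, 40)        # imaginary-time grid
    omega = np.linspace(-5.0, 5.0, 80)      # real-frequency grid
    dw = omega[1] - omega[0]

    # Fermionic kernel: G(tau) = int dw K(tau, w) A(w).
    K = dw * np.exp(-np.outer(tau, omega)) / (1.0 + np.exp(-beta * omega))

    A_true = np.exp(-(omega - 1.0) ** 2) / np.sqrt(np.pi)   # mock spectral function
    G = K @ A_true + 1e-4 * np.random.default_rng(0).normal(size=tau.size)

    # Smoothness: penalize second differences; positivity: lower bound of zero.
    D = np.diff(np.eye(omega.size), n=2, axis=0)
    lam = 1e-3
    A_fit = lsq_linear(np.vstack([K, np.sqrt(lam) * D]),
                       np.concatenate([G, np.zeros(D.shape[0])]),
                       bounds=(0.0, np.inf)).x
    print("recovered spectral weight:", A_fit.sum() * dw)   # ~1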
Direct numerical simulations and modeling of a spatially-evolving turbulent wake
NASA Technical Reports Server (NTRS)
Cimbala, John M.
1994-01-01
Understanding of turbulent free shear flows (wakes, jets, and mixing layers) is important, not only for scientific interest, but also because of their appearance in numerous practical applications. Turbulent wakes, in particular, have recently received increased attention by researchers at NASA Langley. The turbulent wake generated by a two-dimensional airfoil has been selected as the test case for detailed high-resolution particle image velocimetry (PIV) experiments. This same wake has also been chosen to enhance NASA's turbulence modeling efforts. Over the past year, the author has completed several wake computations, while visiting NASA through the 1993 and 1994 ASEE summer programs, and also while on sabbatical leave during the 1993-94 academic year. These calculations have included two-equation (K-omega and K-epsilon) models, algebraic stress models (ASM), full Reynolds stress closure models, and direct numerical simulations (DNS). Recently, there has been mutually beneficial collaboration of the experimental and computational efforts. In fact, these projects have been chosen for joint presentation at the NASA Turbulence Peer Review, scheduled for September 1994. DNS calculations are presently underway for a turbulent wake at Re_theta = 1000 and at a Mach number of 0.20. (Theta is the momentum thickness, which remains constant in the wake of a two-dimensional body.) These calculations utilize a compressible DNS code written by M. M. Rai of NASA Ames, and modified for the wake by J. Cimbala. The code employs fifth-order accurate upwind-biased finite differencing for the convective terms, fourth-order accurate central differencing for the viscous terms, and an iterative-implicit time-integration scheme. The computational domain for these calculations starts at x/theta = 10, and extends to x/theta = 610. Fully developed turbulent wake profiles, obtained from experimental data from several wake generators, are supplied at the computational inlet, along with appropriate noise. After some adjustment period, the flow downstream of the inlet develops into a fully three-dimensional turbulent wake. Of particular interest in the present study are the far-wake spreading rate and the self-similar mean and turbulence profiles. At the time of this writing, grid resolution studies are underway, and a code is being written to calculate turbulence statistics from these wake calculations; the statistics will be compared to those from the ongoing PIV wake measurements, those of previous experiments, and those predicted by the various turbulence models. These calculations will lead to significant long-term benefits for the turbulence modeling effort. In particular, quantities such as the pressure-strain correlation and the dissipation rate tensor can be easily calculated from the DNS results, whereas these quantities are nearly impossible to measure experimentally. Improvements to existing turbulence models (and development of new models) require knowledge about flow quantities such as these. Present turbulence models do a very good job at predicting the shape of the mean velocity and Reynolds stress profiles in a turbulent wake, but significantly underpredict the magnitude of the stresses and the spreading rate of the wake. Thus, the turbulent wake is an ideal flow for turbulence modeling research. By careful comparison and analysis of each term in the modeled Reynolds stress equations, the DNS data can show where deficiencies in the models exist; improvements to the models can then be attempted.
Sherman, Michael A; Seth, Ajay; Delp, Scott L
2013-08-01
Biomechanics researchers often use multibody models to represent biological systems. However, the mapping from biology to mechanics and back can be problematic. OpenSim is a popular open source tool used for this purpose, mapping between biological specifications and an underlying generalized coordinate multibody system called Simbody. One quantity of interest to biomechanical researchers and clinicians is "muscle moment arm," a measure of the effectiveness of a muscle at contributing to a particular motion over a range of configurations. OpenSim can automatically calculate these quantities for any muscle once a model has been built. For simple cases, this calculation is the same as the conventional moment arm calculation in mechanical engineering. But a muscle may span several joints (e.g., wrist, neck, back) and may follow a convoluted path over various curved surfaces. A biological joint may require several bodies or even a mechanism to accurately represent in the multibody model (e.g., knee, shoulder). In these situations we need a careful definition of muscle moment arm that is analogous to the mechanical engineering concept, yet generalized to be of use to biomedical researchers. Here we present some biomechanical modeling challenges and how they are resolved in OpenSim and Simbody to yield biologically meaningful muscle moment arms.
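A sketch of the generalized definition at issue, r(q) = -dL/dq (the change in muscle-tendon length per change in generalized coordinate), evaluated by central differences for a toy planar one-joint path; the geometry is hypothetical, not an OpenSim model:

    import numpy as np

    def muscle_length(q):
        """Toy muscle-tendon length: straight line from a fixed origin to an
        insertion point that rotates with joint angle q (radians)."""
        origin = np.array([0.0, 0.10])       # m, on the proximal segment
        r_ins = 0.05                         # m, insertion radius on the distal segment
        insertion = np.array([r_ins * np.sin(q), -r_ins * np.cos(q)])
        return float(np.linalg.norm(insertion - origin))

    def moment_arm(q, h=1e-6):
        """Generalized moment arm r(q) = -dL/dq via central differences."""
        return -(muscle_length(q + h) - muscle_length(q - h)) / (2.0 * h)

    for q_deg in (0, 30, 60, 90):
        q = np.radians(q_deg)
        print(f"q = {q_deg:3d} deg, moment arm = {moment_arm(q) * 100:.2f} cm")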
NASA Astrophysics Data System (ADS)
Qiu, Haixia; Kim, Michele M.; Penjweini, Rozhin; Zhu, Timothy C.
2016-08-01
Although photodynamic therapy (PDT) is an established modality for cancer treatment, current dosimetric quantities, such as light fluence and PDT dose, do not account for the differences in PDT oxygen consumption for different fluence rates (φ). A macroscopic model was adopted to evaluate the use of the calculated reacted singlet oxygen concentration ([1O2]rx) to predict Photofrin-PDT outcome in mice bearing radiation-induced fibrosarcoma tumors, as singlet oxygen is the primary cytotoxic species responsible for cell death in type II PDT. Using a combination of fluences (50, 135, 200, and 250 J/cm2) and φ (50, 75, and 150 mW/cm2), the tumor regrowth rate, k, was determined for each condition. A tumor cure index, CI = 1 - k/kctrl, was calculated from the k of each PDT-treated group and that of the control, kctrl. The measured Photofrin concentration and light dose for each mouse were used to calculate the PDT dose and [1O2]rx, while mean optical properties (μa = 0.9 cm-1, μs' = 8.4 cm-1) were used to calculate φ for all mice. CI was correlated to the fluence, PDT dose, and [1O2]rx with R2 = 0.35, 0.79, and 0.93, respectively. These results suggest that [1O2]rx serves as a better dosimetric quantity for predicting PDT outcome.
Guan, Fada; Peeler, Christopher; Bronk, Lawrence; Geng, Changran; Taleei, Reza; Randeniya, Sharmalee; Ge, Shuaiping; Mirkovic, Dragan; Grosshans, David; Mohan, Radhe; Titt, Uwe
2015-01-01
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the Geant4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation for selecting an appropriate LET quantity from Geant4 simulations to correlate with the biological effectiveness of therapeutic protons. Methods: The authors developed a particle-tracking-step-based strategy to calculate the average LET quantities (track-averaged LET, LETt, and dose-averaged LET, LETd) using Geant4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LETt and LETd of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LETt but significant for LETd. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in Geant4 can result in incorrect LETd calculation results in the dose plateau region for small step limits. The erroneous LETd results can be attributed to the algorithm used to determine fluctuations in energy deposition along the tracking step in Geant4. The incorrect LETd values lead to substantial differences in the calculated RBE. Conclusions: When the Geant4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LETt in the dose plateau region and LETd around the Bragg peak. For a large step limit, i.e., 500 μm, LETd is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LETd and LETt becomes positive. PMID:26520716
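The two averages at issue can be written directly from per-step quantities: with step lengths l_i and deposits dE_i, LETt = sum(dE_i) / sum(l_i) (step-length weighted), while LETd = sum(dE_i^2 / l_i) / sum(dE_i) (energy-deposit weighted). A sketch with illustrative step data:

    import numpy as np

    rng = np.random.default_rng(0)
    step = rng.uniform(0.5, 5.0, size=10_000)                        # step lengths (um)
    let_local = rng.lognormal(mean=0.0, sigma=0.5, size=step.size)   # dE/dx (keV/um)
    de = let_local * step                                            # energy per step (keV)

    let_t = de.sum() / step.sum()              # track-averaged LET
    let_d = (de * de / step).sum() / de.sum()  # dose-averaged LET

    print(f"LET_t = {let_t:.2f} keV/um, LET_d = {let_d:.2f} keV/um")
    # LET_d >= LET_t always; the gap grows with the spread of the
    # energy-deposition-per-step spectrum, which is why LET_d is the
    # quantity sensitive to the tracking step limit.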
Bounds on stochastic chemical kinetic systems at steady state
NASA Astrophysics Data System (ADS)
Dowdy, Garrett R.; Barton, Paul I.
2018-02-01
The method of moments has been proposed as a potential means to reduce the dimensionality of the chemical master equation (CME) appearing in stochastic chemical kinetics. However, attempts to apply the method of moments to the CME usually result in the so-called closure problem. Several authors have proposed moment closure schemes, which allow them to obtain approximations of quantities of interest, such as the mean molecular count for each species. However, these approximations have the dissatisfying feature that they come with no error bounds. This paper presents a fundamentally different approach to the closure problem in stochastic chemical kinetics. Instead of making an approximation to compute a single number for the quantity of interest, we calculate mathematically rigorous bounds on this quantity by solving semidefinite programs. These bounds provide a check on the validity of the moment closure approximations and are in some cases so tight that they effectively provide the desired quantity. In this paper, the bounded quantities of interest are the mean molecular count for each species, the variance in this count, and the probability that the count lies in an arbitrary interval. At present, we consider only steady-state probability distributions, intending to discuss the dynamic problem in a future publication.
Adequate sleep moderates the prospective association between alcohol use and consequences.
Miller, Mary Beth; DiBello, Angelo M; Lust, Sarah A; Carey, Michael P; Carey, Kate B
2016-12-01
Inadequate sleep and heavy alcohol use have been associated with negative outcomes among college students; however, few studies have examined the interactive effects of sleep and drinking quantity in predicting alcohol-related consequences. This study aimed to determine if adequate sleep moderates the prospective association between weekly drinking quantity and consequences. College students (N=568) who were mandated to an alcohol prevention intervention reported drinks consumed per week, typical sleep quantity (calculated from sleep/wake times), and perceptions of sleep adequacy as part of a larger research trial. Assessments were completed at baseline and one-, three-, and five-month follow-ups. Higher baseline quantities of weekly drinking and inadequate sleep predicted alcohol-related consequences at baseline and one-month follow-up. Significant interactions emerged between baseline weekly drinking quantity and adequate sleep in the prediction of alcohol-related consequences at baseline, one-, three-, and five-month assessments. Simple slopes analyses revealed that weekly drinking quantity was positively associated with alcohol-related consequences for those reporting both adequate and inadequate sleep, but this association was consistently stronger among those who reported inadequate sleep. Subjective evaluation of sleep adequacy moderates both the concurrent and prospective associations between weekly drinking quantity and consequences, such that heavy-drinking college students reporting inadequate sleep experience more consequences as a result of drinking. Research needs to examine the mechanism(s) by which inadequate sleep affects alcohol risk among young adults. Copyright © 2016 Elsevier Ltd. All rights reserved.
Critical phenomena and chemical potential of a charged AdS black hole
NASA Astrophysics Data System (ADS)
Wei, Shao-Wen; Liang, Bin; Liu, Yu-Xiao
2017-12-01
Inspired by the interpretation of the cosmological constant from the boundary gauge theory, we treat it as the number of colors N and its conjugate quantity as the associated chemical potential μ on the black hole side. Then the thermodynamics and the chemical potential for a five-dimensional charged AdS black hole are studied. It is found that there exists a small-large black hole phase transition of the van der Waals type. The critical phenomena are investigated in the N2-μ chart. The result implies that the phase transition can occur for a large number of colors N, while it is forbidden for a small number. This to some extent implies that the interaction of the system increases with the number of colors. In particular, in the reduced parameter space, all the thermodynamic quantities can be rescaled with the black hole charge such that these reduced quantities are charge-independent. We then obtain the coexistence curve and the phase diagram. The latent heat is also numerically calculated. Moreover, the heat capacity and the thermodynamic scalar are studied. The result indicates that the information of the first-order black hole phase transition is encoded in the heat capacity and the scalar. However, the phase transition point cannot be directly calculated from them. Nevertheless, the critical point linked to a second-order phase transition can be determined by either the heat capacity or the scalar. In addition, we calculate the critical exponents of the heat capacity and the scalar for the saturated small and large black holes near the critical point.
NASA Astrophysics Data System (ADS)
Russo, G.; Attili, A.; Battistoni, G.; Bertrand, D.; Bourhaleb, F.; Cappucci, F.; Ciocca, M.; Mairani, A.; Milian, F. M.; Molinelli, S.; Morone, M. C.; Muraro, S.; Orts, T.; Patera, V.; Sala, P.; Schmitt, E.; Vivaldo, G.; Marchetto, F.
2016-01-01
The calculation algorithm of a modern treatment planning system for ion-beam radiotherapy should ideally be able to deal with different ion species (e.g. protons and carbon ions), to provide relative biological effectiveness (RBE) evaluations and to describe different beam lines. In this work we propose a new approach for computing ion irradiation outcomes, the beamlet superposition (BS) model, which satisfies these requirements. This model applies and extends the concepts of previous fluence-weighted pencil-beam algorithms to quantities of radiobiological interest other than dose, i.e. RBE- and LET-related quantities. It describes an ion beam through a beam-line-specific, weighted superposition of universal beamlets. The universal physical and radiobiological irradiation effect of the beamlets on a representative set of water-like tissues is evaluated once, coupling the per-track information derived from FLUKA Monte Carlo simulations with the radiobiological effectiveness provided by the microdosimetric kinetic model and the local effect model. Thanks to an extension of the superposition concept, the beamlet irradiation action superposition is applicable for the evaluation of dose, RBE and LET distributions. The weight function for the beamlet superposition is derived from the beam phase-space density at the patient entrance. A general beam model commissioning procedure is proposed, which has successfully been tested on the CNAO beam line. The BS model provides the evaluation of different irradiation quantities for different ions, the adaptability permitted by weight functions, and the evaluation speed of analytical approaches. Benchmarking plans in simple geometries and clinical plans are shown to demonstrate the model capabilities.
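A sketch of the superposition step itself: a lateral dose profile assembled as a weighted sum of precomputed beamlets, with weights playing the role of the entrance phase-space density (Gaussian beamlets and a flat weight map are illustrative stand-ins, not the BS model's actual kernels):

    import numpy as np

    x = np.linspace(-30.0, 30.0, 601)      # lateral position (mm)
    centers = np.arange(-20.0, 21.0, 2.0)  # beamlet spot positions (mm)
    sigma = 3.0                            # beamlet lateral spread (mm)

    def beamlet(x, x0):
        """Universal lateral beamlet profile (normalized Gaussian)."""
        return np.exp(-0.5 * ((x - x0) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    w = np.ones_like(centers)              # weights from the entrance phase space
    dose = sum(wi * beamlet(x, c) for wi, c in zip(w, centers))
    print(f"central dose (arb. units): {dose[x.size // 2]:.4f}")
    # Replacing dose with per-beamlet LET- or RBE-related quantities in the
    # same superposition yields the corresponding distributions.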
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ghosh, Soumya; Soudackov, Alexander V.; Hammes-Schiffer, Sharon
Electron transfer and proton coupled electron transfer (PCET) reactions at electrochemical interfaces play an essential role in a broad range of energy conversion processes. The reorganization energy, which is a measure of the free energy change associated with solute and solvent rearrangements, is a key quantity for calculating rate constants for these reactions. We present a computational method for including the effects of the double layer and ionic environment of the diffuse layer in calculations of electrochemical solvent reorganization energies. This approach incorporates an accurate electronic charge distribution of the solute within a molecular-shaped cavity in conjunction with a dielectric continuum treatment of the solvent, ions, and electrode using the integral equations formalism polarizable continuum model. The molecule-solvent boundary is treated explicitly, but the effects of the electrode-double layer and double layer-diffuse layer boundaries, as well as the effects of the ionic strength of the solvent, are included through an external Green's function. The calculated total reorganization energies agree well with experimentally measured values for a series of electrochemical systems, and the effects of including both the double layer and ionic environment are found to be very small. This general approach was also extended to electrochemical PCET and produced total reorganization energies in close agreement with experimental values for two experimentally studied PCET systems. This research was supported as part of the Center for Molecular Electrocatalysis, an Energy Frontier Research Center, funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences.
library. Fission yields. Pre-calculated integral quantities. Improved zooming. New in version 3.0: ENDF/B-VI.8 libraries. Neutron cross section distributions (MF=3). Experimental data in EXFOR
Testing Photoionization Calculations Using Chandra X-ray Spectra
NASA Technical Reports Server (NTRS)
Kallman, Tim
2008-01-01
A great deal of work has been devoted to the accumulation of accurate quantities describing atomic processes for use in the analysis of astrophysical spectra. But in many situations of interest the interpretation of an observed quantity, such as a line flux, depends on the results of a modeling or spectrum synthesis code. The results of such a code depend in turn on many atomic rates or cross sections, and the sensitivity of the observable quantity to the various rates and cross sections may be non-linear, in which case it cannot easily be derived analytically. In such cases the most practical approach to understanding the sensitivity of observables to atomic cross sections is to perform numerical experiments, by calculating models with various rates perturbed by random (but known) factors. In addition, it is useful to compare the results of such experiments with some sample observations, in order to focus attention on the rates which are of the greatest relevance to real observations. In this paper I will present some attempts to carry out this program, focusing on two sample datasets taken with the Chandra HETG. I will discuss the sensitivity of synthetic spectra to atomic data affecting ionization balance, temperature, and line opacity or emissivity, and discuss the implications for the ultimate goal of inferring astrophysical parameters.
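A sketch of the numerical experiment described, with a two-level ionization balance standing in for the spectrum synthesis code: each rate is perturbed by a random but known lognormal factor, and the induced spread in the observable measures its sensitivity (all rates and spreads are illustrative):

    import numpy as np

    rng = np.random.default_rng(42)
    ion_rate0, rec_rate0 = 3.0e-12, 5.0e-12   # illustrative rate coefficients (cm^3/s)

    def observable(ion_rate, rec_rate):
        """Ion fraction from a two-level ionization/recombination balance."""
        ratio = ion_rate / rec_rate
        return ratio / (1.0 + ratio)

    n_trials, spread = 1000, 0.3              # 30% lognormal perturbation per rate
    obs = np.array([
        observable(ion_rate0 * rng.lognormal(0.0, spread),
                   rec_rate0 * rng.lognormal(0.0, spread))
        for _ in range(n_trials)
    ])
    print(f"observable: {obs.mean():.4f} +/- {obs.std():.4f}")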
Sleep Disruption Medical Intervention Forecasting (SDMIF) Module for the Integrated Medical Model
NASA Technical Reports Server (NTRS)
Lewandowski, Beth; Brooker, John; Mallis, Melissa; Hursh, Steve; Caldwell, Lynn; Myers, Jerry
2011-01-01
The NASA Integrated Medical Model (IMM) assesses the risk, including likelihood and impact of occurrence, of all credible in-flight medical conditions. Fatigue due to sleep disruption is a condition that could lead to operational errors, potentially resulting in loss of mission or crew. Pharmacological consumables are mitigation strategies used to manage the risks associated with sleep deficits. The likelihood of medical intervention due to sleep disruption was estimated with a well validated sleep model and a Monte Carlo computer simulation in an effort to optimize the quantity of consumables. METHODS: The key components of the model are the mission parameter program, the calculation of sleep intensity and the diagnosis and decision module. The mission parameter program was used to create simulated daily sleep/wake schedules for an ISS increment. The hypothetical schedules included critical events such as dockings and extravehicular activities and included actual sleep time and sleep quality. The schedules were used as inputs to the Sleep, Activity, Fatigue and Task Effectiveness (SAFTE) Model (IBR Inc., Baltimore MD), which calculated sleep intensity. Sleep data from an ISS study was used to relate calculated sleep intensity to the probability of sleep medication use, using a generalized linear model for binomial regression. A human yes/no decision process using a binomial random number was also factored into sleep medication use probability. RESULTS: These probability calculations were repeated 5000 times resulting in an estimate of the most likely amount of sleep aids used during an ISS mission and a 95% confidence interval. CONCLUSIONS: These results were transferred to the parent IMM for further weighting and integration with other medical conditions, to help inform operational decisions. This model is a potential planning tool for ensuring adequate sleep during sleep disrupted periods of a mission.
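A sketch of the final two steps, assuming a hypothetical logistic link from nightly sleep intensity to medication-use probability and a yes/no draw per night (none of the numbers below come from the flight data or the SAFTE model):

    import numpy as np

    rng = np.random.default_rng(7)
    n_missions, n_nights = 5000, 180

    def p_use(sleep_intensity):
        """Hypothetical binomial-regression link: lower intensity, higher use."""
        return 1.0 / (1.0 + np.exp(10.0 * (sleep_intensity - 0.5)))

    # Simulated nightly sleep intensities on a 0-1 scale.
    intensity = rng.beta(6.0, 2.0, size=(n_missions, n_nights))
    doses = (rng.random((n_missions, n_nights)) < p_use(intensity)).sum(axis=1)

    print(f"most likely count: {np.median(doses):.0f} doses per increment")
    print(f"95% interval: {np.percentile(doses, [2.5, 97.5])}")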
Binary collision rates of relativistic thermal plasmas. I Theoretical framework
NASA Technical Reports Server (NTRS)
Dermer, C. D.
1985-01-01
Binary collision rates for arbitrary scattering cross sections are derived in the case of a beam of particles interacting with a Maxwell-Boltzmann (MB) plasma, or in the case of two MB plasmas interacting at generally different temperatures. The expressions are valid for all beam energies and plasma temperatures, from the nonrelativistic to the extreme relativistic limits. The calculated quantities include the reaction rate, the energy exchange rate, and the average rate of change of the squared transverse momentum component of a monoenergetic particle beam as a result of scatterings with particles of a MB plasma. Results are specialized to elastic scattering processes, two-temperature reaction rates, or the cold plasma limit, reproducing previous work.
Anomaly in the band centre of the one-dimensional Anderson model
NASA Astrophysics Data System (ADS)
Kappus, M.; Wegner, F.
1981-03-01
We calculate the density of states and various characteristic lengths of the one-dimensional Anderson model in the limit of weak disorder. All these quantities show anomalous fluctuations near the band centre. This has already been observed for the density of states in a different model by Gorkov and Dorokhov, and is in close agreement with a Monte Carlo calculation of the localization length by Czycholl, Kramer, and MacKinnon.
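A transfer-matrix sketch of one such characteristic length: the localization length xi(E) from the Lyapunov exponent of the 1D Anderson recursion psi_{n+1} = (E - eps_n) psi_n - psi_{n-1}, with eps_n uniform on [-W/2, W/2] (disorder strength and step count are illustrative):

    import numpy as np

    def localization_length(E, W=1.0, n_steps=200_000, seed=0):
        """Inverse Lyapunov exponent of the 1D Anderson model."""
        rng = np.random.default_rng(seed)
        psi_prev, psi, log_norm = 1.0, 1.0, 0.0
        for eps in rng.uniform(-W / 2.0, W / 2.0, n_steps):
            psi_prev, psi = psi, (E - eps) * psi - psi_prev
            norm = abs(psi) + abs(psi_prev)
            psi, psi_prev = psi / norm, psi_prev / norm   # renormalize
            log_norm += np.log(norm)
        return n_steps / log_norm

    # Weak-disorder perturbation theory gives xi ~ 96/W^2 at the band centre;
    # the anomaly discussed above shifts the true value (to roughly 105/W^2).
    print(localization_length(E=0.0))
    print(localization_length(E=0.5))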
Unified Kinetic Approach for Simulation of Gas Flows in Rarefied and Continuum Regimes
2007-06-01
potential, iii) the Lennard-Jones potential, iv) the Coulomb potential, and v) the BGK model. For 2D simulations, the BGK model was implemented in a...were performed for the Lennard-Jones interaction potential. The agreement of experimental and calculated profiles indicates the high accuracy of the...calculations by two potentials (Hard Spheres and Lennard-Jones) demonstrated similar behavior of the main quantities. The flow field structures are quite
A model for multiple-drop-impact erosion of brittle solids
NASA Technical Reports Server (NTRS)
Engel, O. G.
1971-01-01
A statistical model for the multiple-drop-impact erosion of brittle solids was developed. An equation for calculating the rate of erosion is given. The development is not complete since two quantities that are needed to calculate the rate of erosion with use of the equation must be assessed from experimental data. A partial test of the equation shows that it gives results that are in good agreement with experimental observation.
Optimal Information Processing in Biochemical Networks
NASA Astrophysics Data System (ADS)
Wiggins, Chris
2012-02-01
A variety of experimental results over the past decades provide examples of near-optimal information processing in biological networks, including in biochemical and transcriptional regulatory networks. Computing information-theoretic quantities requires first choosing or computing the joint probability distribution describing multiple nodes in such a network --- for example, representing the probability distribution of finding an integer copy number of each of two interacting reactants or gene products while respecting the 'intrinsic' small copy number noise constraining information transmission at the scale of the cell. I'll give an overview of some recent analytic and numerical work facilitating the calculation of such joint distributions and the associated information, which in turn makes possible the numerical optimization of information flow in models of noisy regulatory and biochemical networks. Illustrative cases include quantification of form-function relations, ideal design of regulatory cascades, and response to oscillatory driving.
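The basic computation underlying all of this is the mutual information of a joint distribution; a sketch for two nodes with a small hypothetical joint copy-number table:

    import numpy as np

    # Hypothetical joint distribution p(x, y) over copy numbers of two species.
    p_xy = np.array([[0.30, 0.05, 0.02],
                     [0.05, 0.25, 0.05],
                     [0.02, 0.05, 0.21]])
    assert abs(p_xy.sum() - 1.0) < 1e-12

    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)

    # I(X;Y) = sum over x,y of p(x,y) * log2[ p(x,y) / (p(x) p(y)) ]
    mask = p_xy > 0
    mi = (p_xy[mask] * np.log2(p_xy[mask] / (p_x @ p_y)[mask])).sum()
    print(f"I(X;Y) = {mi:.3f} bits")

Optimizing information flow then amounts to maximizing this quantity over the network's adjustable parameters, subject to the copy-number noise constraints.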
Calculation of continuum damping of Alfvén eigenmodes in tokamak and stellarator equilibria
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bowden, G. W.; Hole, M. J.; Könies, A.
2015-09-15
In an ideal magnetohydrodynamic (MHD) plasma, shear Alfvén eigenmodes may experience dissipationless damping due to resonant interaction with the shear Alfvén continuum. This continuum damping can make a significant contribution to the overall growth/decay rate of shear Alfvén eigenmodes, with consequent implications for fast ion transport. One method for calculating continuum damping is to solve the MHD eigenvalue problem over a suitable contour in the complex plane, thereby satisfying the causality condition. Such an approach can be implemented in three-dimensional ideal MHD codes which use the Galerkin method. Analytic functions can be fitted to numerical data for equilibrium quantities in order to determine the value of these quantities along the complex contour. This approach requires less resolution than the established technique of calculating damping as resistivity vanishes and is thus more computationally efficient. The complex contour method has been applied to the three-dimensional finite element ideal MHD Code for Kinetic Alfvén waves. In this paper, we discuss the application of the complex contour technique to calculate the continuum damping of global modes in tokamak as well as torsatron, W7-X, and H-1NF stellarator cases. To the authors' knowledge, these stellarator calculations represent the first calculation of continuum damping for eigenmodes in fully three-dimensional equilibria. The continuum damping of global modes in the W7-X and H-1NF stellarator configurations investigated is found to depend sensitively on coupling to numerous poloidal and toroidal harmonics.
DiffPy-CMI-Python libraries for Complex Modeling Initiative
DOE Office of Scientific and Technical Information (OSTI.GOV)
Billinge, Simon; Juhas, Pavol; Farrow, Christopher
2014-02-01
Software to manipulate and describe crystal and molecular structures and to set up structural refinements from multiple experimental inputs. Calculation and simulation of structure-derived physical quantities. A library for creating customized refinements of atomic structures from available experimental and theoretical inputs.
40 CFR 372.10 - Recordkeeping.
Code of Federal Regulations, 2011 CFR
2011-07-01
... COMMUNITY RIGHT-TO-KNOW PROGRAMS TOXIC CHEMICAL RELEASE REPORTING: COMMUNITY RIGHT-TO-KNOW General....25 applies for each toxic chemical. (iii) Documentation supporting the calculations of the quantity of each toxic chemical released to the environment or transferred to an off-site location. (iv...
40 CFR 372.10 - Recordkeeping.
Code of Federal Regulations, 2010 CFR
2010-07-01
... COMMUNITY RIGHT-TO-KNOW PROGRAMS TOXIC CHEMICAL RELEASE REPORTING: COMMUNITY RIGHT-TO-KNOW General....25 applies for each toxic chemical. (iii) Documentation supporting the calculations of the quantity of each toxic chemical released to the environment or transferred to an off-site location. (iv...
40 CFR 61.356 - Recordkeeping requirements.
Code of Federal Regulations, 2012 CFR
2012-07-01
..., annual average flow-weighted benzene concentration, and annual benzene quantity. (2) For each waste... measurements, calculations, and other documentation used to determine that the continuous flow of process... benzene concentrations in the waste, the annual average flow-weighted benzene concentration of the waste...
40 CFR 61.356 - Recordkeeping requirements.
Code of Federal Regulations, 2014 CFR
2014-07-01
..., annual average flow-weighted benzene concentration, and annual benzene quantity. (2) For each waste... measurements, calculations, and other documentation used to determine that the continuous flow of process... benzene concentrations in the waste, the annual average flow-weighted benzene concentration of the waste...
40 CFR 61.356 - Recordkeeping requirements.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., annual average flow-weighted benzene concentration, and annual benzene quantity. (2) For each waste... measurements, calculations, and other documentation used to determine that the continuous flow of process... benzene concentrations in the waste, the annual average flow-weighted benzene concentration of the waste...
Entanglement entropy and complexity for one-dimensional holographic superconductors
NASA Astrophysics Data System (ADS)
Kord Zangeneh, Mahdi; Ong, Yen Chin; Wang, Bin
2017-08-01
The holographic superconductor is an important arena for holography, as it allows concrete calculations that further our understanding of the dictionary between bulk physics and boundary physics. An important quantity of recent interest is the holographic complexity. Conflicting claims have been made in the literature concerning the behavior of holographic complexity during phase transitions. We clarify this issue by performing a numerical study on a one-dimensional holographic superconductor. Our investigation shows that holographic complexity does not behave in the same way as holographic entanglement entropy. Nevertheless, the universal terms of both quantities are finite and reflect the phase transition at the same critical temperature.
Boundary conditions for developing a safety concept for an exothermal reaction.
Hauptmanns, Ulrich
2007-09-05
Kinetic calculations for an example exothermal chemical process, the production of TCB, are carried out. They address both parameter uncertainties and random failures of the cooling system. In this way, they enable one to establish comprehensive boundary conditions for a safety system in terms of unavailability, the quantities of the undesired by-product (TCDD) produced and the times available before a required intervention, if a pre-determined quantity of TCDD is tolerated. It is shown that accounting for stochastic effects and uncertainties derived from insufficient knowledge provides a broader and more realistic knowledge base for devising a viable safety concept.
NASA Astrophysics Data System (ADS)
Kärhä, Petri; Vaskuri, Anna; Mäntynen, Henrik; Mikkonen, Nikke; Ikonen, Erkki
2017-08-01
Spectral irradiance data are often used to calculate colorimetric properties, such as color coordinates and color temperatures of light sources by integration. The spectral data may contain unknown correlations that should be accounted for in the uncertainty estimation. We propose a new method for estimating uncertainties in such cases. The method goes through all possible scenarios of deviations using Monte Carlo analysis. Varying spectral error functions are produced by combining spectral base functions, and the distorted spectra are used to calculate the colorimetric quantities. Standard deviations of the colorimetric quantities at different scenarios give uncertainties assuming no correlations, uncertainties assuming full correlation, and uncertainties for an unfavorable case of unknown correlations, which turn out to be a significant source of uncertainty. With 1% standard uncertainty in spectral irradiance, the expanded uncertainty of the correlated color temperature of a source corresponding to the CIE Standard Illuminant A may reach as high as 37.2 K in unfavorable conditions, when calculations assuming full correlation give zero uncertainty, and calculations assuming no correlations yield the expanded uncertainties of 5.6 K and 12.1 K, with wavelength steps of 1 nm and 5 nm used in spectral integrations, respectively. We also show that there is an absolute limit of 60.2 K in the error of the correlated color temperature for Standard Illuminant A when assuming 1% standard uncertainty in the spectral irradiance. A comparison of our uncorrelated uncertainties with those obtained using analytical methods by other research groups shows good agreement. We re-estimated the uncertainties for the colorimetric properties of our 1 kW photometric standard lamps using the new method. The revised uncertainty of color temperature is a factor of 2.5 higher than the uncertainty assuming no correlations.
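A sketch of the two limiting correlation scenarios on a toy integral quantity (a weighted spectral integral standing in for the colorimetric computation), assuming a 1% standard uncertainty at each wavelength point; the spectra and weighting function are illustrative:

    import numpy as np

    rng = np.random.default_rng(3)
    wl = np.arange(380.0, 781.0, 5.0)                # wavelength grid (nm)
    E = (wl / 560.0) ** 4                            # toy spectral irradiance
    V = np.exp(-0.5 * ((wl - 555.0) / 50.0) ** 2)    # toy weighting function

    def quantity(spectrum):
        """Toy integral quantity (rectangle rule on the uniform 5 nm grid)."""
        return float(np.sum(spectrum * V) * 5.0)

    q0, u_rel, n = quantity(E), 0.01, 20_000
    q_unc = [quantity(E * (1 + u_rel * rng.normal(size=wl.size))) for _ in range(n)]
    q_cor = [quantity(E * (1 + u_rel * rng.normal())) for _ in range(n)]

    print(f"no correlation:   {np.std(q_unc) / q0:.3%}")
    print(f"full correlation: {np.std(q_cor) / q0:.3%}")

For a normalized quantity such as a chromaticity coordinate or a correlated color temperature, the fully correlated scale factor cancels, which is why that scenario can give zero uncertainty while structured unknown correlations, built from smooth spectral base functions as above, can dominate the budget.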
CheMentor Software System by H. A. Peoples
NASA Astrophysics Data System (ADS)
Reid, Brian P.
1997-09-01
CheMentor Software System H. A. Peoples. Computerized Learning Enhancements: http://www.ecis.com/~clehap; email: clehap@ecis.com; 1996 - 1997. CheMentor is a series of software packages for introductory-level chemistry, which includes Practice Items (I), Stoichiometry (I), Calculating Chemical Formulae, and the CheMentor Toolkit. The first three packages provide practice problems for students and various types of help to solve them; the Toolkit includes "calculators" for determining chemical quantities as well as the Practice Items (I) set of problems. The set of software packages is designed so that each individual product acts as a module of a common CheMentor program. As the name CheMentor implies, the software is designed as a "mentor" for students learning introductory chemistry concepts and problems. The typical use of the software would be by individual students (or perhaps small groups) as an adjunct to lectures. CheMentor is a HyperCard application and the modules are HyperCard stacks. The requirements to run the packages include a Macintosh computer with at least 1 MB of RAM, a hard drive with several MB of available space depending upon the packages selected (10 MB were required for all the packages reviewed here), and the Mac operating system 6.0.5 or later.
NASA Astrophysics Data System (ADS)
Abe, M.; Prasannaa, V. S.; Das, B. P.
2018-03-01
Heavy polar diatomic molecules are currently among the most promising probes of fundamental physics. Constraining the electric dipole moment of the electron (eEDM), in order to explore physics beyond the standard model, requires a synergy of molecular experiment and theory. Recent experimental advances in this field have motivated us to implement a finite-field coupled-cluster (FFCC) approach. This work has distinct advantages over the theoretical methods that we had used earlier in the analysis of eEDM searches. We used relativistic FFCC to calculate molecular properties of interest to eEDM experiments, that is, the effective electric field (Eeff) and the permanent electric dipole moment (PDM). We theoretically determine these quantities for the alkaline-earth monofluorides (AEMs), the mercury monohalides (HgX), and PbF. The latter two systems, as well as BaF from the AEMs, are of interest to eEDM searches. We also report the calculation of the properties using a relativistic finite-field coupled-cluster approach with single, double, and partial triple excitations, which is considered to be the gold standard of electronic structure calculations. We also present a detailed error estimate, including errors that stem from our choice of basis sets and from higher-order correlation effects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barnard, James C.; Flynn, Donna M.
2002-10-08
The ability of the SBDART radiative transfer model to predict clear-sky diffuse and direct normal broadband shortwave irradiances is investigated. Model calculations of these quantities are compared with data from the Atmospheric Radiation Measurement (ARM) program's Southern Great Plains (SGP) and North Slope of Alaska (NSA) sites. The model tends to consistently underestimate the direct normal irradiances at both sites by about 1%. In regard to clear-sky diffuse irradiance, the model overestimates this quantity at the SGP site in a manner similar to what has been observed in other studies (Halthore and Schwartz, 2000). The difference between the diffuse SBDART calculations and Halthore and Schwartz's MODTRAN calculations is very small, thus demonstrating that SBDART performs similarly to MODTRAN. SBDART is then applied to the NSA site, and here it is found that the discrepancy between the model calculations and corrected diffuse measurements (corrected for daytime offsets, Dutton et al., 2001) is 0.4 W/m2 when averaged over the 12 cases considered here. Two cases of diffuse measurements from a shaded "black and white" pyranometer are also compared with the calculations and the discrepancy is again minimal. Thus, it appears as if the "diffuse discrepancy" that exists at the SGP site does not exist at the NSA sites. We cannot yet explain why the model predicts diffuse radiation well at one site but not at the other.
VizieR Online Data Catalog: Calibration of RAVE distances with Hipparcos (Francis, 2013)
NASA Astrophysics Data System (ADS)
Francis, C.
2013-09-01
A magnitude-limited population of 18808 Hipparcos stars is used to calibrate distances for 52794 RAVE stars, including dwarfs, giants, and pre-main-sequence stars. I give treatments for a number of types of bias affecting the calculation, including bias from the non-linear relationship between the quantity of interest (e.g., distance or distance modulus) and the measured quantity (parallax or visual magnitude), the Lutz-Kelker bias, and bias due to variation in the density of the stellar population. The use of a magnitude bound minimises the Malmquist and Lutz-Kelker biases, and avoids a measurement bias because Hipparcos parallaxes are more accurate for brighter stars. The calibration is applicable to stars in 2MASS when there is some way to determine the stellar class with reasonable confidence. For RAVE this is possible for hot dwarfs and using log g. The accuracy of the calibration is tested against Hipparcos stars with better than 2% parallax errors, and by comparison of the RAVE velocity distribution with that of Hipparcos, and is found to improve upon previous estimates of luminosity distance. An estimate of the LSR from RAVE data, (U0, V0, W0) = (14.9 ± 1.7, 15.3 ± 0.4, 6.9 ± 0.1) km/s, shows excellent agreement with the current best estimate from XHIP. The RAVE velocity distribution confirms the alignment of stellar motions with spiral structure. (2 data files).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ullmann, John Leonard; Couture, Aaron Joseph; Koehler, Paul E.
An accurate knowledge of the neutron capture cross section is important for many applications. Experimental measurements are important since theoretical calculations of capture have been notoriously difficult, with the ratio of measured to calculated cross sections often a factor of 2 or more in the 10 keV to 1 MeV region. However, a direct measurement of capture cannot be made on many interesting radioactive nuclides because of their short half-life or backgrounds caused by their nuclear decay. On the other hand, neutron transmission measurements of the total cross section are feasible for a wide range of radioactive nuclides since the detectors are far from the sample and often are less sensitive to decay radiation. The parameters extracted from a total cross section measurement, which include the average resonance spacing, the neutron strength function, and the average total radiation width, ⟨Γγ⟩, provide tight constraints on the calculation of the capture cross section, and when applied produce much more accurate results. These measurements can be made using the intense epithermal neutron flux at the Lujan Center on relatively small quantities of target material. It was the purpose of this project to investigate and develop the capability to make these measurements. A great deal of progress was made towards establishing this capability during 2016, including setting up the flight path and obtaining preliminary results, but more work remains to be done.
Dalyander, Patricia (Soupy); Mickey, Rangley C.; Long, Joseph W.; Flocks, James G.
2015-05-02
As part of a plan to preserve bird habitat on Breton Island, the southernmost extent of the Chandeleur Islands and part of the Breton National Wildlife Refuge in Louisiana, the U.S. Fish and Wildlife Service plans to increase island elevation with sand supplied from offshore resources. Proposed sand extraction sites include areas offshore where the seafloor morphology suggests suitable quantities of sediment may be found. Two proposed locations east and south of the island, between 5.5–9 kilometers from the island in 3–6 meters of water, have been identified. Borrow pits are perturbations to shallow-water bathymetry and thus can affect the wave field in a variety of ways, including alterations in sediment transport and new erosional or accretional patterns along the beach. A scenario-based numerical modeling strategy was used to assess the effects of the proposed offshore borrow pits on the nearshore wave field. Effects were assessed over a range of wave conditions and were gaged by changes in significant wave height and wave direction inshore of the borrow sites, as well as by changes in the calculated longshore sediment transport rate. The change in magnitude of the calculated sediment transport rate with the addition of the two borrow pits was an order of magnitude less than the calculated baseline transport rate.
48 CFR 52.228-15 - Performance and Payment Bonds-Construction.
Code of Federal Regulations, 2010 CFR
2010-10-01
...) Definitions. As used in this clause— Original contract price means the award price of the contract; or, for requirements contracts, the price payable for the estimated total quantity; or, for indefinite-quantity contracts, the price payable for the specified minimum quantity. Original contract price does not include...
Electronic Structure Calculation of Permanent Magnets using the KKR Green's Function Method
NASA Astrophysics Data System (ADS)
Doi, Shotaro; Akai, Hisazumi
2014-03-01
Electronic structure and magnetic properties of permanent magnet materials, especially Nd2Fe14B, are investigated theoretically using the KKR Green's function method. Important physical quantities in magnetism, such as the magnetic moment, Curie temperature, and anisotropy constant, obtained from electronic structure calculations in both the atomic-sphere-approximation and full-potential treatments, are compared with past band structure calculations and experiments. The site preference of heavy rare-earth impurities is also evaluated through the calculation of formation energies with the use of the coherent potential approximation. Further, the development of an electronic structure calculation code using the screened KKR method for large supercells, aimed at studying the electronic structure of realistic microstructures (e.g., the grain boundary phase), is introduced with some test calculations.
Effect of Embolization Material in the Calculation of Dose Deposition in Arteriovenous Malformations
DOE Office of Scientific and Technical Information (OSTI.GOV)
De la Cruz, O. O. Galvan; Moreno-Jimenez, S.; Larraga-Gutierrez, J. M.
2010-12-07
In this work, the impact of incorporating high-Z (embolization) materials on the dose calculation for stereotactic radiosurgery treatment of arteriovenous malformations is studied. A statistical analysis is performed to establish the variables that may affect the dose calculation. To perform the comparison, pencil beam (PB) and Monte Carlo (MC) calculation algorithms were used. The comparison between both dose calculations shows that PB overestimates the deposited dose. The statistical analysis, for the number of patients in the study (20), shows that the variable that may affect the dose calculation is the volume of the high-Z material in the arteriovenous malformation. Further studies have to be done to establish the clinical impact on the radiosurgery outcome.
A Kirchhoff approach to seismic modeling and prestack depth migration
NASA Astrophysics Data System (ADS)
Liu, Zhen-Yue
1993-05-01
The Kirchhoff integral provides a robust method for implementing seismic modeling and prestack depth migration that can handle lateral velocity variation and turning waves. For a little extra computational cost, Kirchhoff-type migration can produce multiple outputs that have the same phase but different amplitudes, compared with those of other migration methods. The ratio of these amplitudes is helpful in computing quantities such as the reflection angle. I develop a seismic modeling and prestack depth migration method based on the Kirchhoff integral that handles both laterally varying velocity and dips beyond 90 degrees. The method uses a finite-difference algorithm to calculate travel times and WKBJ amplitudes for the Kirchhoff integral. Compared to ray-tracing algorithms, the finite-difference algorithm gives an efficient implementation and single-valued quantities (first arrivals) on output. In my finite-difference algorithm, an upwind scheme is used to calculate travel times, and a Crank-Nicolson scheme is used to calculate amplitudes. Moreover, interpolation is applied to save computational cost. The modeling and migration algorithms require a smooth velocity function. I develop a velocity-smoothing technique based on damped least squares to aid in obtaining a successful migration.
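A minimal sketch of the Kirchhoff summation at the heart of such a migration, for a constant-velocity medium where travel times are analytic rather than computed by finite differences (geometry, velocity, and sampling are illustrative):

    import numpy as np

    v = 2000.0                               # m/s, constant velocity
    dt, nt = 0.002, 751                      # time sampling
    xs = np.arange(0.0, 2000.0, 20.0)        # coincident source/receiver positions (m)
    data = np.zeros((xs.size, nt))           # zero-offset section

    # Synthesize the response of a point diffractor at (x, z) = (1000 m, 600 m).
    px, pz = 1000.0, 600.0
    for i, x in enumerate(xs):
        it = int(round(2.0 * np.hypot(x - px, pz) / v / dt))
        if it < nt:
            data[i, it] = 1.0

    # Kirchhoff migration: for each image point, sum the data along the
    # diffraction travel-time curve.
    xi = np.arange(0.0, 2000.0, 20.0)
    zi = np.arange(100.0, 1200.0, 20.0)
    image = np.zeros((xi.size, zi.size))
    for i, x in enumerate(xi):
        for k, z in enumerate(zi):
            it = np.rint(2.0 * np.hypot(xs - x, z) / v / dt).astype(int)
            ok = it < nt
            image[i, k] = data[np.arange(xs.size)[ok], it[ok]].sum()

    imax = np.unravel_index(image.argmax(), image.shape)
    print(f"image maximum at x = {xi[imax[0]]:.0f} m, z = {zi[imax[1]]:.0f} m")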
Bayesian data analysis tools for atomic physics
NASA Astrophysics Data System (ADS)
Trassinelli, Martino
2017-10-01
We present an introduction to some concepts of Bayesian data analysis in the context of atomic physics. Starting from the basic rules of probability, we present Bayes' theorem and its applications. In particular, we discuss how to calculate simple and joint probability distributions and the Bayesian evidence, a model-dependent quantity that allows one to assign probabilities to different hypotheses from the analysis of the same data set. To give some practical examples, these methods are applied to two concrete cases. In the first example, the presence or absence of a satellite line in an atomic spectrum is investigated. In the second example, we determine the most probable model among a set of possible profiles from the analysis of a statistically poor spectrum. We also show how to calculate the probability distribution of the main spectral component without having to uniquely determine the spectral model. For these two studies, we use the program Nested_fit to calculate the different probability distributions and other related quantities. Nested_fit is a Fortran90/Python code developed during the last years for the analysis of atomic spectra. As indicated by its name, it is based on the nested sampling algorithm, which is presented in detail together with the program itself.
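A sketch of the evidence computation used to weigh hypotheses, for the simplest case where Z = int L(theta) pi(theta) dtheta can be evaluated on a grid rather than by nested sampling; the data and the two models (flat background versus background plus one line of known position) are hypothetical, not the cases studied with Nested_fit:

    import numpy as np

    rng = np.random.default_rng(5)
    x = np.linspace(0.0, 10.0, 60)
    line = np.exp(-0.5 * ((x - 4.0) / 0.3) ** 2)
    counts = rng.poisson(5.0 + 12.0 * line)          # synthetic spectrum

    def log_like(model):
        """Poisson log-likelihood, up to the model-independent factorial term."""
        return float(np.sum(counts * np.log(model) - model))

    def log_evidence(model_fn, grid):
        """log Z with a flat prior on 'grid': Z is the prior-averaged likelihood."""
        logL = np.array([log_like(model_fn(t)) for t in grid])
        s = logL.max()
        return s + np.log(np.exp(logL - s).mean())

    logZ0 = log_evidence(lambda b: np.full_like(x, b), np.linspace(1.0, 20.0, 300))
    logZ1 = log_evidence(lambda a: 5.0 + a * line, np.linspace(0.1, 30.0, 300))
    print(f"log Bayes factor (line vs. no line): {logZ1 - logZ0:.1f}")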
Comparing fully general relativistic and Newtonian calculations of structure formation
East, William E.; Wojtak, Radosław; Abel, Tom
2018-02-13
In the standard approach to studying cosmological structure formation, the overall expansion of the Universe is assumed to be homogeneous, with the gravitational effect of inhomogeneities encoded entirely in a Newtonian potential. A topic of ongoing debate is to what degree this fully captures the dynamics dictated by general relativity, especially in the era of precision cosmology. To quantitatively assess this, in this paper we directly compare standard N-body Newtonian calculations to full numerical solutions of the Einstein equations, for cold matter with various magnitude initial inhomogeneities on scales comparable to the Hubble horizon. We analyze the differences in the evolution of density, luminosity distance, and other quantities defined with respect to fiducial observers. This is carried out by reconstructing the effective spacetime and matter fields dictated by the Newtonian quantities, and by taking care to distinguish effects of numerical resolution. We find that the fully general relativistic and Newtonian calculations show excellent agreement, even well into the nonlinear regime. They differ notably only in regions where the weak gravity assumption breaks down, which arise when considering extreme cases with perturbations exceeding standard values.
Code of Federal Regulations, 2013 CFR
2013-07-01
..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...
Code of Federal Regulations, 2014 CFR
2014-07-01
..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...
Code of Federal Regulations, 2012 CFR
2012-07-01
..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...
Code of Federal Regulations, 2010 CFR
2010-07-01
..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...
Code of Federal Regulations, 2011 CFR
2011-07-01
..., and the rationale for selection; assumptions shall include use of any administrative controls and any... include the anticipated effect of the controls and mitigation on the release quantity and rate. (b) For... administrative controls and any mitigation that were assumed to limit the quantity that could be released...
Numerical investigation of finite-volume effects for the HVP
NASA Astrophysics Data System (ADS)
Boyle, Peter; Gülpers, Vera; Harrison, James; Jüttner, Andreas; Portelli, Antonin; Sachrajda, Christopher
2018-03-01
It is important to correct for finite-volume (FV) effects in the presence of QED, since these effects are typically large due to the long range of the electromagnetic interaction. We recently made the first lattice calculation of electromagnetic corrections to the hadronic vacuum polarisation (HVP). For the HVP, an analytical derivation of FV corrections involves a two-loop calculation which has not yet been carried out. We instead calculate the universal FV corrections numerically, using lattice scalar QED as an effective theory. We show that this method gives agreement with known analytical results for scalar mass FV effects, before applying it to calculate FV corrections for the HVP. This method for numerical calculation of FV effects is also widely applicable to quantities beyond the HVP.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pritychenko, B.; Mughabghab, S.F.
We present calculations of neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates for 843 ENDF materials using data from the major evaluated nuclear libraries and European activation file. Extensive analysis of newly-evaluated neutron reaction cross sections, neutron covariances, and improvements in data processing techniques motivated us to calculate nuclear industry and neutron physics quantities, produce s-process Maxwellian-averaged cross sections and astrophysical reaction rates, systematically calculate uncertainties, and provide additional insights on currently available neutron-induced reaction data. Nuclear reaction calculations are discussed and new results are presented. Due to space limitations, the present paper contains only calculated Maxwellian-averaged cross sections and their uncertainties. The complete data sets for all results are published in the Brookhaven National Laboratory report.
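The Maxwellian averaging involved reduces to a one-line quadrature over a pointwise cross-section table. A minimal sketch, using the usual astrophysical convention MACS = (2/sqrt(pi)) * int sigma(E) E exp(-E/kT) dE / (kT)^2; the grid, constant test cross section, and kT value are illustrative only.

```python
import numpy as np

def trapz(y, x):
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def macs(energy_ev, sigma_barn, kT_ev):
    """Maxwellian-averaged cross section over a pointwise (E, sigma) table."""
    w = sigma_barn * energy_ev * np.exp(-energy_ev / kT_ev)
    return (2.0 / np.sqrt(np.pi)) * trapz(w, energy_ev) / kT_ev**2

# check: a constant cross section gives MACS = (2/sqrt(pi)) * sigma ~ 1.13 * sigma
E = np.linspace(1.0, 300e3, 20001)                # eV, hypothetical grid
print(macs(E, np.full_like(E, 0.5), kT_ev=30e3))  # ~0.56 barn at kT = 30 keV
```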
Methodology comparison for gamma-heating calculations in material-testing reactors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lemaire, M.; Vaglio-Gaudard, C.; Lyoussi, A.
2015-07-01
The Jules Horowitz Reactor (JHR) is a Material-Testing Reactor (MTR) under construction in the south of France at CEA Cadarache (French Alternative Energies and Atomic Energy Commission). It will typically host about 20 simultaneous irradiation experiments in the core and in the beryllium reflector. These experiments will help us better understand the complex phenomena occurring during the accelerated ageing of materials and the irradiation of nuclear fuels. Gamma heating, i.e. photon energy deposition, is mainly responsible for the temperature rise in non-fuelled zones of nuclear reactors, including JHR internal structures and irradiation devices. As temperature is a key parameter for physical models describing the behavior of materials, accurate control of temperature, and hence gamma heating, is required in irradiation devices and samples in order to perform an advanced, suitable analysis of future experimental results. From a broader point of view, JHR's global attractiveness as an MTR depends on its ability to monitor experimental parameters with high accuracy, including gamma heating. Strict control of temperature levels is also necessary in terms of safety. As JHR structures are warmed up by gamma heating, they must be appropriately cooled down to prevent creep deformation or melting. Cooling-power sizing is based on calculated levels of gamma heating in the JHR. Due to these safety concerns, accurate calculation of gamma heating, with well-controlled bias and an associated uncertainty as low as possible, is all the more important. There are two main kinds of calculation bias: bias coming from nuclear data on the one hand, and bias coming from the physical approximations assumed by computer codes and by the general calculation route on the other hand. The former must be determined by comparison between calculation and experimental data; the latter by calculation comparisons between codes and between methodologies. In this presentation, we focus on the latter kind of bias. Nuclear heating is represented by the physical quantity called absorbed dose (energy deposition induced by particle-matter interactions, divided by mass). Its calculation with Monte Carlo codes is possible but computationally expensive, as it requires transport simulation of charged particles along with neutrons and photons. For that reason, the calculation of another physical quantity, called KERMA, is often preferred, as KERMA calculation with Monte Carlo codes only requires transport of neutral particles. However, KERMA is only an estimator of the absorbed dose, and many conditions must be fulfilled for KERMA to equal absorbed dose, including the so-called condition of electronic equilibrium. Monte Carlo computations of absorbed dose also still involve some physical approximations, even though there are only a limited number of them. Some of these approximations are linked to the way Monte Carlo codes handle the transport simulation of charged particles and the productive and destructive interactions between photons, electrons and positrons. A wide variety of electromagnetic shower models tackle this topic, and differences in the implementation of these models can lead to discrepancies in calculated values of absorbed dose between different Monte Carlo codes. The order of magnitude of such potential discrepancies should be quantified for JHR gamma-heating calculations. We consequently present a two-pronged plan.
In a first phase, we intend to perform comparative absorbed-dose/KERMA Monte Carlo calculations in the JHR. In this way, we will study the presence or absence of electronic equilibrium in the different JHR structures and experimental devices, and we will give recommendations for the choice of KERMA or absorbed dose when calculating gamma heating in the JHR. In a second phase, we intend to perform comparative TRIPOLI4/MCNP absorbed-dose calculations in a simplified JHR-representative geometry. For this comparison, we will use the same nuclear data libraries for both codes (the European library JEFF3.1.1 and the photon library EPDL97) so as to isolate the effects of electromagnetic shower models on absorbed-dose calculation. In this way, we hope to get insightful feedback on these models and their implementation in Monte Carlo codes. (authors)
Design oriented structural analysis
NASA Technical Reports Server (NTRS)
Giles, Gary L.
1994-01-01
Desirable characteristics and benefits of design oriented analysis methods are described and illustrated by presenting a synoptic description of the development and uses of the Equivalent Laminated Plate Solution (ELAPS) computer code. ELAPS is a design oriented structural analysis method which is intended for use in the early design of aircraft wing structures. Model preparation is minimized by using a few large plate segments to model the wing box structure. Computational efficiency is achieved by using a limited number of global displacement functions that encompass all segments over the wing planform. Coupling with other codes is facilitated since the output quantities such as deflections and stresses are calculated as continuous functions over the plate segments. Various aspects of the ELAPS development are discussed including the analytical formulation, verification of results by comparison with finite element analysis results, coupling with other codes, and calculation of sensitivity derivatives. The effectiveness of ELAPS for multidisciplinary design application is illustrated by describing its use in design studies of high speed civil transport wing structures.
Charged hadrons in local finite-volume QED+QCD with C⋆ boundary conditions
NASA Astrophysics Data System (ADS)
Lucini, B.; Patella, A.; Ramos, A.; Tantalo, N.
2016-02-01
In order to calculate QED corrections to hadronic physical quantities by means of lattice simulations, a coherent description of electrically-charged states in finite volume is needed. In the usual periodic setup, Gauss's law and large gauge transformations forbid the propagation of electrically-charged states. A possible solution to this problem, which does not violate the axioms of local quantum field theory, has been proposed by Wiese and Polley, and is based on the use of C⋆ boundary conditions. We present a thorough analysis of the properties and symmetries of QED in isolation and QED coupled to QCD, with C⋆ boundary conditions. In particular we learn that a certain class of electrically-charged states can be constructed in a fully consistent fashion without relying on gauge fixing and without peculiar complications. This class includes single particle states of most stable hadrons. We also calculate finite-volume corrections to the mass of stable charged particles and show that these are much smaller than in non-local formulations of QED.
CosmoSIS: A system for MC parameter estimation
Bridle, S.; Dodelson, S.; Jennings, E.; ...
2015-12-23
CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
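At the heart of any such framework is a sampler. The sketch below is a minimal random-walk Metropolis sampler of the general kind CosmoSIS wraps, not CosmoSIS's own API; the step size, toy Gaussian posterior, and all names are illustrative.

```python
import numpy as np

def metropolis(log_post, theta0, n_steps=10000, step=0.5, seed=0):
    """Minimal random-walk Metropolis sampler over a log-posterior."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    lp = log_post(theta)
    chain = []
    for _ in range(n_steps):
        prop = theta + step * rng.standard_normal(theta.shape)
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with prob min(1, ratio)
            theta, lp = prop, lp_prop
        chain.append(theta.copy())
    return np.array(chain)

# toy use: sample a 2D standard-normal "posterior" starting away from the peak
samples = metropolis(lambda t: -0.5 * np.sum(t**2), theta0=[3.0, -2.0])
print(samples.mean(axis=0))  # should approach (0, 0) after burn-in
```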
Bond Length Equalization with molecular aromaticity-A new measurement of aromaticity.
Shen, Chen-Fei; Liu, Zi-Zhong; Liu, Hong-Xia; Zhang, Hui-Qing
2018-05-08
A new method to measure the amount of aromaticity is presented through the process of Bond Length Equalization (BLE). Degree of Aromaticity (DOA), a two-dimensional intensive quantity including geometric and energetic factors, is proposed as a new measurement of aromaticity. The unique characteristics of DOA and how it is formed are presented. Geometry optimizations and the DOA, Nucleus Independent Chemical Shifts (NICS), and Ring Stretching Vibration Raman Spectroscopy Frequency (RSVRSF) values for the aromatic ring molecules GnHn^m (G = C, Si, Ge; n = 3, 5-8; m = +1, -1, 0, +1, +2) were calculated using Density Functional Theory (DFT). The correlation between radius angle and molecular energy is strictly quadratic in the process of BLE. As the number of ring atoms increases, the value of DOA decreases gradually and the aromaticity decreases gradually, the same conclusion as reached with NICS and RSVRSF. Copyright © 2018 Elsevier B.V. All rights reserved.
Framework for cascade size calculations on random networks
NASA Astrophysics Data System (ADS)
Burkholz, Rebekka; Schweitzer, Frank
2018-04-01
We present a framework to calculate the cascade size evolution for a large class of cascade models on random network ensembles in the limit of infinite network size. Our method is exact and applies to network ensembles with almost arbitrary degree distribution, degree-degree correlations, and, in the case of threshold models, arbitrary threshold distribution. With our approach, we shift the perspective from the known branching process approximations to the iterative update of suitable probability distributions. Such distributions are key to capturing cascade dynamics that involve possibly continuous quantities and that depend on the cascade history, e.g., if load is accumulated over time. As a proof of concept, we provide two examples: (a) constant load models that cover many of the analytically tractable cascade models, and, as a highlight, (b) a fiber bundle model that was not previously tractable by branching process approximations. Our derivations cover the whole cascade dynamics, not only their steady state. This allows us to include interventions in time or further model complexity in the analysis.
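For orientation, the familiar branching-process baseline that this framework generalizes fits in a few lines. The sketch below iterates the standard fixed point for a Watts-type threshold cascade on a z-regular random graph; the degree, threshold, and seed fraction are illustrative, and this is the approximation the paper moves beyond, not the authors' distribution-update method.

```python
import numpy as np
from scipy.stats import binom

def final_cascade_fraction(z, phi, seed_frac, iters=200):
    """Fixed point for a threshold cascade on a z-regular random graph.
    u = P(an edge points to a failed node); a node fails once at least
    ceil(phi * z) of its z neighbours have failed."""
    need = int(np.ceil(phi * z))
    u = seed_frac
    for _ in range(iters):
        # sf(need - 1, n, u) = P(Binomial(n, u) >= need); incoming edge excluded
        u = seed_frac + (1 - seed_frac) * binom.sf(need - 1, z - 1, u)
    # final fraction of failed nodes uses all z neighbours
    return seed_frac + (1 - seed_frac) * binom.sf(need - 1, z, u)

print(final_cascade_fraction(z=4, phi=0.25, seed_frac=0.01))  # near-global cascade
```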
Njimou, Jacques Romain; Măicăneanu, Andrada; Indolean, Cerasella; Nanseu-Njiki, Charles Péguy; Ngameni, Emmanuel
2016-01-01
The biosorption characteristics of Cd (II) ions from synthetic wastewater using raw Ayous wood sawdust (Triplochiton scleroxylon), r-AS, immobilized by sodium alginate were investigated with respect to pH, biomass quantity, contact time, initial concentration of heavy metal, temperature and stirring rate. The experimental data fitted well with the Langmuir isotherm, suggesting monolayer adsorption of the cadmium ions onto the alginate-Ayous sawdust composite (a-ASC). The obtained monolayer adsorption capacity of a-ASC for Cd (II) was 6.21 mg/g. From the Dubinin-Radushkevich isotherm model, a value of 5.39 kJ/mol for the mean free energy was calculated, indicating that Cd (II) biosorption could include an important physisorption stage. Thermodynamic calculations showed that the Cd (II) biosorption process was feasible, endothermic and spontaneous in nature under the examined conditions. The results indicated that a-ASC could be an alternative material replacing more costly adsorbents used for the removal of heavy metals.
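Both isotherm results quoted above are easy to reproduce numerically. A minimal sketch, with hypothetical equilibrium data standing in for the Cd(II) measurements; the D-R constant beta below is back-calculated so that E = 1/sqrt(2*beta) matches the reported 5.39 kJ/mol, and is not a value from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, qmax, K):
    """Langmuir isotherm: monolayer uptake q vs. equilibrium concentration C."""
    return qmax * K * C / (1 + K * C)

# hypothetical equilibrium data (mg/L, mg/g)
C = np.array([5.0, 10, 20, 40, 80, 160])
q = np.array([1.9, 3.0, 4.2, 5.1, 5.7, 6.0])
(qmax, K), _ = curve_fit(langmuir, C, q, p0=(6.0, 0.05))

# Dubinin-Radushkevich mean free energy; E < 8 kJ/mol is conventionally
# read as an indication of physisorption
beta = 0.0172                  # mol^2/kJ^2, hypothetical fitted D-R constant
E = 1.0 / np.sqrt(2 * beta)    # ~5.39 kJ/mol
print(qmax, K, E)
```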
Time-Dependent Density Functional Theory for Extreme Environments
NASA Astrophysics Data System (ADS)
Baczewski, Andrew; Magyar, Rudolph; Shulenburger, Luke
2013-10-01
In recent years, DFT-MD has been shown to be a powerful tool for calculating the equation of state and constitutive properties of warm dense matter (WDM). These studies are validated through a number of experiments, including recently developed X-Ray Thomson Scattering (XRTS) techniques. Here, electronic temperatures and densities of WDM are accessible through x-ray scattering data, which is related to the system's dynamic structure factor (DSF), a quantity that is accessible through DFT-MD calculations. Previous studies predict the DSF within the Born-Oppenheimer approximation, with the electronic state computed using Mermin DFT. A capability for including more general coupled electron-ion dynamics is desirable, to study both the effect on XRTS observables and the broader problem of electron-ion energy transfer in extreme WDM conditions. Progress towards such a capability will be presented, in the form of an Ehrenfest MD framework using TDDFT. Computational challenges and open theoretical questions will be discussed. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
NASA Technical Reports Server (NTRS)
Toon, Owen B.; Mckay, C. P.; Ackerman, T. P.; Santhanam, K.
1989-01-01
The solution of the generalized two-stream approximation for radiative transfer in homogeneous multiple scattering atmospheres is extended to vertically inhomogeneous atmospheres in a manner which is numerically stable and computationally efficient. It is shown that solar energy deposition rates, photolysis rates, and infrared cooling rates all may be calculated with the simple modifications of a single algorithm. The accuracy of the algorithm is generally better than 10 percent, so that other uncertainties, such as in absorption coefficients, may often dominate the error in calculation of the quantities of interest to atmospheric studies.
NASA Technical Reports Server (NTRS)
Curtiss, L. A.; Langhoff, S. R.; Carney, G. D.
1979-01-01
The constant and linear terms in a Taylor series expansion of the dipole moment function of the ground state of ozone are calculated with Cartesian Gaussian basis sets ranging in quality from minimal to double zeta plus polarization. Results are presented at both the self-consistent field and configuration-interaction levels. Although the algebraic signs of the linear dipole moment derivatives are all established to be positive, the absolute magnitudes of these quantities, as well as the infrared intensities calculated from them, vary considerably with the level of theory.
Derenzini, M.; Pession, A.; Farabegoli, F.; Trerè, D.; Badiali, M.; Dehan, P.
1989-01-01
The relationship between the quantity of silver-stained interphasic nucleolar organizer regions (NORs) and nuclear synthetic activity, karyotype, and growth rate was studied in two established neuroblastoma cell lines (CHP 212 and HTB 10). Statistical analysis of silver-stained NORs revealed four times as many in CHP 212 cells as in HTB 10 cells. No difference was observed in ribosomal RNA synthesis between the two cell lines. The karyotype index was 1.2 for CHP 212 and 1.0 for HTB 10 cells. The number of chromosomes carrying NORs and the quantity of ribosomal genes were found to be the same for the two cell lines. The doubling time of CHP 212 cells was 20 hours, compared with 54 hours for HTB 10 cells. In CHP 212 cells, hindering of cell duplication by serum deprivation induced a progressive lowering (calculated at 48, 72, and 96 hours) of the quantity of silver-stained interphasic NORs. Recovery of duplication by new serum addition induced, after 24 hours, an increase of the quantity of silver-stained interphasic NORs up to control levels. In the light of available data, these results indicate that the quantity of interphasic NORs is strictly correlated only with the growth rate of the cell. PMID:2705511
Eigentime identities for weighted polymer networks
NASA Astrophysics Data System (ADS)
Dai, Meifeng; Tang, Hualong; Zou, Jiahui; He, Di; Sun, Yu; Su, Weiyi
2018-01-01
In this paper, we first analytically calculate the eigenvalues of the transition matrix of a structure with a very complex architecture, and their multiplicities. We call this structure a polymer network. Based on the eigenvalues obtained in this iterative manner, we then calculate the eigentime identity. We highlight two scaling behaviors (logarithmic and linear) for this quantity, strongly depending on the value of the weight factor. Finally, by making use of the obtained eigenvalues, we determine the weighted counting of spanning trees.
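As a quick illustration of the quantity itself: once the transition-matrix eigenvalues are in hand, the eigentime identity is the sum of 1/(1 − λ) over the non-unit eigenvalues. The sketch below computes it numerically for a small hypothetical weighted walk, rather than analytically in the paper's iterative fashion.

```python
import numpy as np

def eigentime_identity(P):
    """Eigentime identity H = sum over non-unit eigenvalues of 1/(1 - lambda_i)
    for the transition matrix P of an ergodic random walk."""
    lam = np.linalg.eigvals(P)
    lam = lam[np.argsort(-lam.real)]          # eigenvalue 1 comes first
    return np.sum(1.0 / (1.0 - lam[1:])).real # drop it, sum over the rest

# toy 3-node walk with weight-biased (row-stochastic) transition probabilities
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [0.3, 0.7, 0.0]])
print(eigentime_identity(P))
```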
Structure and Randomness of Continuous-Time, Discrete-Event Processes
NASA Astrophysics Data System (ADS)
Marzen, Sarah E.; Crutchfield, James P.
2017-10-01
Loosely speaking, the Shannon entropy rate is used to gauge a stochastic process' intrinsic randomness; the statistical complexity gives the cost of predicting the process. We calculate, for the first time, the entropy rate and statistical complexity of stochastic processes generated by finite unifilar hidden semi-Markov models—memoryful, state-dependent versions of renewal processes. Calculating these quantities requires introducing novel mathematical objects (ε-machines of hidden semi-Markov processes) and new information-theoretic methods to stochastic processes.
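For a flavor of the much simpler discrete-time analogue: the entropy rate of an ordinary stationary Markov chain is h = −Σ_i π_i Σ_j P_ij log P_ij. The sketch below computes it for a toy two-state chain; the continuous-time, semi-Markov machinery of the paper is substantially more involved.

```python
import numpy as np

def entropy_rate(P):
    """Shannon entropy rate (nats/step) of a stationary Markov chain with
    row-stochastic transition matrix P."""
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()                              # stationary distribution
    with np.errstate(divide="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)  # 0 * log 0 treated as 0
    return -np.sum(pi[:, None] * P * logP)

P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
print(entropy_rate(P))
```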
Fluid Motion in a Spinning, Coning Cylinder via Spatial Eigenfunction Expansion.
1987-08-01
PRESSURE AND MOMENT COEFFICIENTS: The velocity, pressure, and moment exerted by the liquid on the container are quantities of physical interest calculated ... with collocation; these are denoted by LS and COL, respectively. Of the physical parameters, the calculation is more sensitive to Re and f than A ...
ERIC Educational Resources Information Center
Teo, Boon K.; Li, Wai-Kee
2011-01-01
This article is divided into two parts. In the first part, the atomic unit (au) system is introduced and the scales of time, space (length), and speed, as well as those of mass and energy, in the atomic world are discussed. In the second part, the utility of atomic units in quantum mechanical and spectroscopic calculations is illustrated with…
Overview of Nuclear Physics Data: Databases, Web Applications and Teaching Tools
NASA Astrophysics Data System (ADS)
McCutchan, Elizabeth
2017-01-01
The mission of the United States Nuclear Data Program (USNDP) is to provide current, accurate, and authoritative data for use in pure and applied areas of nuclear science and engineering. This is accomplished by compiling, evaluating, and disseminating extensive datasets. Our main products include the Evaluated Nuclear Structure File (ENSDF) containing information on nuclear structure and decay properties and the Evaluated Nuclear Data File (ENDF) containing information on neutron-induced reactions. The National Nuclear Data Center (NNDC), through the website www.nndc.bnl.gov, provides web-based retrieval systems for these and many other databases. In addition, the NNDC hosts several on-line physics tools, useful for calculating various quantities relating to basic nuclear physics. In this talk, I will first introduce the quantities which are evaluated and recommended in our databases. I will then outline the searching capabilities which allow one to quickly and efficiently retrieve data. Finally, I will demonstrate how the database searches and web applications can provide effective teaching tools concerning the structure of nuclei and how they interact. Work supported by the Office of Nuclear Physics, Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-98CH10886.
Effect of air-entry angle on performance of a 2-stroke-cycle compression-ignition engine
NASA Technical Reports Server (NTRS)
Earle, Sherod L; Dutee, Francis J
1937-01-01
An investigation was made to determine the effect of variations in the horizontal and vertical air-entry angles on the performance characteristics of a single-cylinder 2-stroke-cycle compression-ignition test engine. Performance data were obtained over a wide range of engine speed, scavenging pressure, fuel quantity, and injection advance angle with the optimum guide vanes. Friction and blower-power curves are included for calculating the indicated and net performances. The optimum horizontal air-entry angle was found to be 60 degrees from the radial and the optimum vertical angle to be zero, under which conditions a maximum power output of 77 gross brake horsepower for a specific fuel consumption of 0.52 pound per brake horsepower-hour was obtained at 1,800 r.p.m. and 16-1/2 inches of Hg scavenging pressure. The corresponding specific output was 0.65 gross brake horsepower per cubic inch of piston displacement. Tests revealed that the optimum scavenging pressure increased linearly with engine speed. The brake mean effective pressure increased uniformly with air quantity per cycle for any given vane angle and was independent of engine speed and scavenging pressure.
Spotorno, Nicola; McMillan, Corey T.; Powers, John P.; Clark, Robin; Grossman, Murray
2014-01-01
A growing amount of empirical data is showing that the ability to manipulate quantities in a precise and efficient fashion is rooted in cognitive mechanisms devoted to specific aspects of numbers processing. The Analog number system (ANS) has a reasonable representation of quantities up to about 4, and represents larger quantities on the basis of a numerical ratio between quantities. In order to represent the precise cardinality of a number, the ANS may be supported by external algorithms such as language, leading to a “Precise Number System”. In the setting of limited language, other number-related systems can appear. For example the Parallel Individuation system (PIS) supports a “chunking mechanism” that clusters units of larger numerosities into smaller subsets. In the present study we investigated number processing in non-aphasic patients with Corticobasal Syndrome (CBS) and Posterior Cortical Atrophy (PCA), two neurodegenerative conditions that are associated with progressive parietal atrophy. The present study investigated these number systems in CBS and PCA by assessing the property of the ANS associated with smaller and larger numerosities, and the chunking property of the PIS. The results revealed that CBS/PCA patients are impaired in simple calculations (e.g., addition and subtraction) and that their performance strongly correlates with the size of the numbers involved in these calculations, revealing a clear magnitude effect. This magnitude effect correlated with gray matter atrophy in parietal regions. Moreover, a numeral-dots transcoding task showed that CBS/PCA patients are able to take advantage of clustering in the spatial distribution of the dots of the array. The relative advantage associated with chunking compared to a random spatial distribution correlated with both parietal and prefrontal regions. These results shed light on the properties of systems for representing number knowledge in non-aphasic patients with CBS and PCA. PMID:25278132
Heinicke, Grant; Matthews, Frank; Schwartz, Joseph B
2005-01-01
Drug-layering experiments were performed in a fluid bed fitted with a rotor granulator insert using diltiazem as a model drug. The drug was applied in various quantities to sugar spheres of different mesh sizes to give a series of drug-layered sugar spheres (cores) of different potency, size, and weight per particle. The drug presence lowered the bulk density of the cores in proportion to the quantity of added drug. Polymer coating of each core lot was performed in a fluid bed fitted with a Wurster insert. A series of polymer-coated cores (pellets) was removed from each coating experiment. The mean diameter of each core and each pellet sample was determined by image analysis. The rate of change of diameter on polymer addition was determined for each starting size of core and compared to calculated values. The core diameter was displaced from the line of best fit through the pellet diameter data. Cores of different potency with the same size distribution were made by layering increasing quantities of drug onto sugar spheres of decreasing mesh size. Equal quantities of polymer were applied to the same-sized core lots and coat thickness was measured. Weight/weight calculations predict equal coat thickness under these conditions, but measurable differences were found. Simple corrections to the core charge weight in the Wurster insert were successfully used to manufacture pellets having the same coat thickness. The sensitivity of the image analysis technique in measuring particle size distributions (PSDs) was demonstrated by measuring a displacement in PSD after addition of 0.5% w/w talc to a pellet sample.
Stable isotope deltas: Tiny, yet robust signatures in nature
Brand, Willi A.; Coplen, Tyler B.
2012-01-01
Although most of them are relatively small, stable isotope deltas of naturally occurring substances are robust and enable workers in anthropology, atmospheric sciences, biology, chemistry, environmental sciences, food and drug authentication, forensic science, geochemistry, geology, oceanography, and paleoclimatology to study a variety of topics. Two fundamental processes explain the stable isotope deltas measured in most terrestrial systems: isotopic fractionation and isotope mixing. Isotopic fractionation is the result of equilibrium or kinetic physicochemical processes that fractionate isotopes because of small differences in physical or chemical properties of molecular species having different isotopes. It is shown that the mixing of radioactive and stable isotope end members can be modelled to provide information on many natural processes, including 14C abundances in the modern atmosphere and the stable hydrogen and oxygen isotopic compositions of the oceans during glacial and interglacial times. The calculation of mixing fractions using isotope balance equations with isotope deltas can be substantially in error when substances with high concentrations of heavy isotopes (e.g. 13C, 2H, and 18O) are mixed. In such cases, calculations using mole fractions are preferred as they produce accurate mixing fractions. Isotope deltas are dimensionless quantities. In the International System of Units (SI), these quantities have the unit 1 and the usual list of prefixes is not applicable. To overcome traditional limitations with expressing orders of magnitude differences in isotope deltas, we propose the term urey (symbol Ur), after Harold C. Urey, for the unit 1. In such a manner, an isotope delta value expressed traditionally as −25 per mil can be written as −25 mUr (or −2.5 cUr or −0.25 dUr; the use of any SI prefix is possible). Likewise, very small isotopic differences often expressed in per meg ‘units’ are easily included (e.g. either +0.015 ‰ or +15 per meg can be written as +15 μUr).
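The warning above about delta-balance mixing is easy to demonstrate numerically. A minimal sketch for carbon isotopes, assuming equal molar amounts of total carbon in the two end members and an approximate VPDB 13C/12C ratio; the end-member delta values are illustrative.

```python
import numpy as np

R_std = 0.011180  # approximate VPDB 13C/12C reference ratio

def delta_to_x(delta):
    """Isotope delta (per mil) -> mole fraction of 13C."""
    R = R_std * (1 + delta / 1000.0)
    return R / (1 + R)

def x_to_delta(x):
    """Mole fraction of 13C -> isotope delta (per mil)."""
    R = x / (1 - x)
    return (R / R_std - 1) * 1000.0

d1, d2, f = -25.0, 800.0, 0.5   # hypothetical end members, 50:50 molar mix
naive = f * d1 + (1 - f) * d2                                    # delta balance
exact = x_to_delta(f * delta_to_x(d1) + (1 - f) * delta_to_x(d2))  # mole-fraction balance
print(naive, exact)  # the two diverge as heavy-isotope enrichment grows
```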
Aust, Shelly K; Ahrendsen, Dakota L; Kellar, P Roxanne
2015-01-01
Conservation of the evolutionary diversity among organisms should be included in the selection of priority regions for preservation of Earth's biodiversity. Traditionally, biodiversity has been determined from an assessment of species richness (S), abundance, evenness, rarity, etc. of organisms but not from variation in species' evolutionary histories. Phylogenetic diversity (PD) measures evolutionary differences between taxa in a community and is gaining acceptance as a biodiversity assessment tool. However, with the increase in the number of ways to calculate PD, end-users and decision-makers are left wondering how metrics compare and what data are needed to calculate various metrics. In this study, we used massively parallel sequencing to generate over 65,000 DNA characters from three cellular compartments for over 60 species in the asterid clade of flowering plants. We estimated asterid phylogenies from character datasets of varying nucleotide quantities, and then assessed the effect of varying character datasets on resulting PD metric values. We also compared multiple PD metrics with traditional diversity indices (including S) among two endangered grassland prairies in Nebraska (U.S.A.). Our results revealed that PD metrics varied based on the quantity of genes used to infer the phylogenies; therefore, when comparing PD metrics between sites, it is vital to use comparable datasets. Additionally, various PD metrics and traditional diversity indices characterize biodiversity differently and should be chosen depending on the research question. Our study provides empirical results that reveal the value of measuring PD when considering sites for conservation, and it highlights the usefulness of using PD metrics in combination with other diversity indices when studying community assembly and ecosystem functioning. Ours is just one example of the types of investigations that need to be conducted across the tree of life and across varying ecosystems in order to build a database of phylogenetic diversity assessments that lead to a pool of results upon which a guide through the plethora of PD metrics may be prepared for use by ecologists and conservation planners.
Waste Heat Recovery from High Temperature Off-Gases from Electric Arc Furnace
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nimbalkar, Sachin U; Thekdi, Arvind; Keiser, James R
2014-01-01
This article presents a study and review of available waste heat in high temperature Electric Arc Furnace (EAF) off gases and heat recovery techniques/methods from these gases. It gives details of the quality and quantity of the sensible and chemical waste heat in typical EAF off gases, energy savings potential by recovering part of this heat, a comprehensive review of currently used waste heat recovery methods and potential for use of advanced designs to achieve a much higher level of heat recovery including scrap preheating, steam production and electric power generation. Based on our preliminary analysis, currently, for all electric arc furnaces used in the US steel industry, the energy savings potential is equivalent to approximately 31 trillion Btu per year or 32.7 petajoules per year (approximately $182 million US dollars/year). This article describes the EAF off-gas enthalpy model developed at Oak Ridge National Laboratory (ORNL) to calculate available and recoverable heat energy for a given stream of exhaust gases coming out of one or multiple EAF furnaces. This Excel-based model calculates sensible and chemical enthalpy of the EAF off-gases during tap-to-tap time, accounting for variation in quantity and quality of off gases. The model can be used to estimate energy saved through scrap preheating and other possible uses such as steam generation and electric power generation using off gas waste heat. This article includes a review of the historical development of existing waste heat recovery methods, their operations, and advantages/limitations of these methods. This paper also describes a program to develop and test advanced concepts for scrap preheating, steam production and electricity generation through use of waste heat recovery from the chemical and sensible heat contained in the EAF off gases with addition of a minimum amount of dilution or cooling air upstream of pollution control equipment such as bag houses.
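The core of any such enthalpy model is a per-stream sensible-heat balance. The sketch below is a one-function illustration in that spirit, not the ORNL Excel model itself; the flow rate, mean specific heat, and temperatures are hypothetical, and the temperature dependence of cp and all chemical (e.g. CO combustion) enthalpy are ignored.

```python
def sensible_heat_kw(mass_flow_kg_s, cp_kj_per_kg_k, t_gas_c, t_ref_c=25.0):
    """Q = m_dot * cp * (T_gas - T_ref), with cp taken as a constant mean value."""
    return mass_flow_kg_s * cp_kj_per_kg_k * (t_gas_c - t_ref_c)

# hypothetical stream: 10 kg/s of off-gas at 1200 C, mean cp ~ 1.2 kJ/(kg K)
print(sensible_heat_kw(10.0, 1.2, 1200.0))  # ~14,100 kW of recoverable sensible heat
```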
Optimum performance of explosives in a quasistatic detonation cycle
NASA Astrophysics Data System (ADS)
Baker, Ernest L.; Stiel, Leonard I.
2017-01-01
Analyses were conducted on the behavior of explosives in a quasistatic detonation cycle. This type of cycle has been proposed for the determination of the maximum work that can be performed by the explosive. The Jaguar thermochemical equilibrium program enabled the direct analysis of explosive performance at the various steps in the detonation cycle. In all cases the explosive is initially detonated to a point on the Hugoniot curve for the reaction products. The maximum useful work that can be obtained from the explosive is equal to the P-V work on the isentrope for expansion after detonation to atmospheric pressure, minus one-half the square of the particle velocity at the detonation point. This quantity is calculated from the internal energy of the explosive at the initial and final atmospheric temperatures. Cycle efficiencies (net work/heat added) are also calculated with these procedures. For several explosives including TNT, RDX, and aluminized compositions, maximum work effects were established through the Jaguar calculations for Hugoniot points corresponding to C-J, overdriven, underdriven and constant volume detonations. Detonation to the C-J point is found to result in the maximum net work in all cases.
Baxa, Michael C.; Haddadian, Esmael J.; Jumper, John M.; Freed, Karl F.; Sosnick, Tobin R.
2014-01-01
The loss of conformational entropy is a major contribution in the thermodynamics of protein folding. However, accurate determination of the quantity has proven challenging. We calculate this loss using molecular dynamics simulations of both the native protein and a realistic denatured state ensemble. For ubiquitin, the total change in entropy is TΔS_Total = 1.4 kcal·mol⁻¹ per residue at 300 K, with only 20% from the loss of side-chain entropy. Our analysis exhibits mixed agreement with prior studies because of the use of more accurate ensembles and contributions from correlated motions. Buried side chains lose only a factor of 1.4 in the number of conformations available per rotamer upon folding (Ω_U/Ω_N). The entropy loss for helical and sheet residues differs due to the smaller motions of helical residues (TΔS_helix−sheet = 0.5 kcal·mol⁻¹), a property not fully reflected in the amide N-H and carbonyl C=O bond NMR order parameters. The results have implications for the thermodynamics of folding and binding, including estimates of solvent ordering and microscopic entropies obtained from NMR. PMID:25313044
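The quoted factor of 1.4 in conformations per rotamer maps directly onto an entropy via Boltzmann's relation, ΔS = R ln(Ω_U/Ω_N). A minimal worked check at 300 K:

```python
import numpy as np

R = 1.987e-3                       # gas constant, kcal mol^-1 K^-1
T = 300.0                          # K
omega_ratio = 1.4                  # Omega_U / Omega_N per buried rotamer (from above)
TdS = T * R * np.log(omega_ratio)  # Boltzmann: Delta_S = R ln(Omega_U / Omega_N)
print(round(TdS, 2))               # ~0.20 kcal/mol lost per rotamer upon folding
```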
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sin, M.; Capote, R.; Herman, M. W.
Comprehensive calculations of cross sections for neutron-induced reactions on 232–237U targets are performed in this paper in the 10 keV–30 MeV incident energy range with the code EMPIRE–3.2 Malta. The advanced modelling and consistent calculation scheme are aimed at improving our knowledge of the neutron scattering and emission cross sections, and at assessing the consistency of available evaluated libraries for light uranium isotopes. The reaction model considers a dispersive optical potential (RIPL 2408) that couples from five (even targets) to nine (odd targets) levels of the ground-state rotational band, and a triple-humped fission barrier with absorption in the wells described within the optical model for fission. A modified Lorentzian model (MLO) of the radiative strength function and Enhanced Generalized Superfluid Model nuclear level densities are used in Hauser-Feshbach calculations of the compound-nuclear decay that include width fluctuation corrections. The starting values for the model parameters are retrieved from RIPL. Excellent agreement with available experimental data for neutron emission and fission is achieved, giving confidence that the quantities for which there is no experimental information are also accurately predicted. Finally, deficiencies in existing evaluated libraries are highlighted.
North Slope, Alaska: Source rock distribution, richness, thermal maturity, and petroleum charge
Peters, K.E.; Magoon, L.B.; Bird, K.J.; Valin, Z.C.; Keller, M.A.
2006-01-01
Four key marine petroleum source rock units were identified, characterized, and mapped in the subsurface to better understand the origin and distribution of petroleum on the North Slope of Alaska. These marine source rocks, from oldest to youngest, include four intervals: (1) Middle-Upper Triassic Shublik Formation, (2) basal condensed section in the Jurassic-Lower Cretaceous Kingak Shale, (3) Cretaceous pebble shale unit, and (4) Cretaceous Hue Shale. Well logs for more than 60 wells and total organic carbon (TOC) and Rock-Eval pyrolysis analyses for 1183 samples in 125 well penetrations of the source rocks were used to map the present-day thickness of each source rock and the quantity (TOC), quality (hydrogen index), and thermal maturity (Tmax) of the organic matter. Based on assumptions related to carbon mass balance and regional distributions of TOC, the present-day source rock quantity and quality maps were used to determine the extent of fractional conversion of the kerogen to petroleum and to map the original TOC (TOCo) and the original hydrogen index (HIo) prior to thermal maturation. The quantity and quality of oil-prone organic matter in Shublik Formation source rock generally exceeded that of the other units prior to thermal maturation (commonly TOCo > 4 wt.% and HIo > 600 mg hydrocarbon/g TOC), although all are likely sources for at least some petroleum on the North Slope. We used Rock-Eval and hydrous pyrolysis methods to calculate expulsion factors and petroleum charge for each of the four source rocks in the study area. Without attempting to identify the correct methods, we conclude that calculations based on Rock-Eval pyrolysis overestimate expulsion factors and petroleum charge because low pressure and rapid removal of thermally cracked products by the carrier gas retards cross-linking and pyrobitumen formation that is otherwise favored by natural burial maturation. Expulsion factors and petroleum charge based on hydrous pyrolysis may also be high compared to nature for a similar reason. Copyright © 2006. The American Association of Petroleum Geologists. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nathan, S.; Loftin, B.; Abramczyk, G.
The Small Gram Quantity (SGQ) concept is based on the understanding that small amounts of hazardous materials, in this case radioactive materials (RAM), are significantly less hazardous than large amounts of the same materials. This paper describes a methodology designed to estimate an SGQ for several neutron- and gamma-emitting isotopes that can be shipped in a package compliant with the external radiation level limits of 10 CFR Part 71. These regulations require packaging for the shipment of radioactive materials, under both normal and accident conditions, to perform the essential functions of material containment, subcriticality, and maintaining external radiation levels within the specified limits. By placing the contents in a helium leak-tight containment vessel, and limiting the mass to ensure subcriticality, the first two essential functions are readily met. Some isotopes emit sufficiently strong photon radiation that small amounts of material can yield a large dose rate outside the package. Quantifying the dose rate for a proposed content is a challenging issue for the SGQ approach. It is essential to quantify external radiation levels from several common gamma and neutron sources that can be safely placed in a specific packaging, to ensure compliance with federal regulations. The Packaging Certification Program (PCP) Methodology for Determining Dose Rate for Small Gram Quantities in Shipping Packagings provides bounding shielding calculations that define mass limits compliant with 10 CFR 71.47 for a set of proposed SGQ isotopes. The approach is based on energy superposition, with dose response calculated for a set of spectral groups for a baseline physical packaging configuration. The methodology includes using the MCNP radiation transport code to evaluate a family of neutron and photon spectral groups, using the 9977 shipping package and its associated shielded containers as the base case. This results in a set of multipliers for 'dose per particle' for each spectral group. For a given isotope, the source spectrum is folded with the response for each group. The summed contribution from all isotopes determines the total dose from the RAM in the container.
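The superposition step described above amounts to a dot product between a group-binned source spectrum and the precomputed per-group multipliers. A minimal sketch, with hypothetical group structure, emission rates, and multipliers; none of the numbers are taken from the PCP methodology.

```python
import numpy as np

def external_dose_rate(src_per_group, dose_per_particle):
    """Energy-superposition estimate: fold an isotope's emission spectrum, binned
    into spectral groups, with each group's 'dose per particle' multiplier."""
    return float(np.dot(src_per_group, dose_per_particle))

# hypothetical 4-group photon source (particles/s) and multipliers (mrem/h per p/s)
src = np.array([1.0e7, 5.0e6, 2.0e6, 1.0e5])
mult = np.array([2.0e-8, 8.0e-8, 3.0e-7, 1.2e-6])
print(external_dose_rate(src, mult))  # compare against the 10 CFR 71.47 limits
```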
Using Order of Magnitude Calculations to Extend Student Comprehension of Laboratory Data
ERIC Educational Resources Information Center
Dean, Rob L.
2015-01-01
Author Rob Dean previously published an Illuminations article concerning "challenge" questions that encourage students to think imaginatively with approximate quantities, reasonable assumptions, and uncertain information. This article has promoted some interesting discussion, which has prompted him to present further examples. Examples…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
Complete texts of 123 communications to the Congress (in the original language; the majority in English, some in Russian and French), on the following topics: radiation perspective in the U.S., radiation and man, non-ionising radiation, radiation effects on animals, radiation quantities, radioecology, reactor experience, late radiation effects, dose calculations, and radiation accidents.
Airborne remote sensing to detect greenbug stress to wheat
USDA-ARS?s Scientific Manuscript database
Vegetation indices calculated from the quantity of reflected electromagnetic radiation have been used to quantify levels of stress to plants. Greenbugs cause stress to wheat plants and therefore multi-spectral remote sensing may be useful for detecting greenbug infested wheat fields. The objective...
BOREHOLE FLOWMETERS: FIELD APPLICATION AND DATA ANALYSIS
This paper reviews application of borehole flowmeters in granular and fractured rocks. Basic data obtained in the field are the ambient flow log and the pumping-induced flow log. These basic logs may then be used to calculate other quantities of interest. The paper describes the ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ezato, K.; Shehata, A.M.; Kunugi, T.
1999-08-01
In order to treat strongly heated, forced gas flows at low Reynolds numbers in vertical circular tubes, the κ-ε turbulence model of Abe, Kondoh, and Nagano (1994), developed for forced turbulent flow between parallel plates with the constant property idealization, has been successfully applied. For thermal energy transport, the turbulent Prandtl number model of Kays and Crawford (1993) was adopted. The capability to handle these flows was assessed via calculations at the conditions of experiments by Shehata (1984), ranging from essentially turbulent to laminarizing due to the heating. Predictions forecast the development of turbulent transport quantities, Reynolds stress, and turbulent heat flux, as well as turbulent viscosity and turbulent kinetic energy. Overall agreement between the calculations and the measured velocity and temperature distributions is good, establishing confidence in the values of the forecast turbulence quantities, and in the model which produced them. Most importantly, the model yields predictions which compare well with the measured wall heat transfer parameters and the pressure drop.
Patel, N S; Chiu-Tsao, S T; Tsao, H S; Harrison, L B
2001-01-01
Intravascular brachytherapy (IVBT) is an emerging modality for the treatment of atherosclerotic lesions in the artery. As part of the refinement in this rapidly evolving modality of treatment, the current simplistic dosimetry approach based on a fixed-point prescription must be challenged by a rigorous dosimetry method employing image-based three-dimensional (3D) treatment planning. The goals of 3D IVBT treatment planning calculations include (1) achieving high accuracy in a slim cylindrical region of interest, (2) accounting for the edge effect around the source ends, and (3) supporting multiple dwell positions. The formalism recommended by Task Group 60 (TG-60) of the American Association of Physicists in Medicine (AAPM) is applicable for gamma sources, as well as short beta sources with lengths less than twice the beta particle range. However, for elongated beta sources and/or seed trains with lengths greater than twice the beta range, a new formalism is required to handle their distinctly different dose characteristics. Specifically, these characteristics consist of (a) flat isodose curves in the central region, (b) a steep dose gradient at the source ends, and (c) exponential dose fall-off in the radial direction. In this paper, we present a novel formalism that evolved from TG-60 in maintaining the dose rate as a product of four key quantities. We propose to employ cylindrical coordinates (R, Z, phi), which are more natural and suitable to the slim cylindrical shape of the volume of interest, as opposed to the spherical coordinate system (r, theta, phi) used in the TG-60 formalism. The four quantities used in this formalism include (1) the distribution factor, H(R, Z), (2) the modulation function, M(R, Z), (3) the transverse dose function, h(R), and (4) the reference dose rate at 2 mm along the perpendicular bisector, D(R0=2 mm, Z0=0). The first three are counterparts of the geometry factor, the anisotropy function, and the radial dose function in the TG-60 formalism, respectively. The reference dose rate is identical to that recommended by TG-60. The distribution factor is intended to resemble the dose profile due to the spatial distribution of activity in the elongated beta source, and it is a modified Fermi-Dirac function in mathematical form. The utility of this formalism also includes the slowly varying nature of the modulation function, allowing for more accurate treatment planning calculations based on interpolation. The transverse dose function describes the exponential fall-off of the dose in the radial direction and can be fit by an exponential or a polynomial. Simultaneously, the decoupling nature of these dose-related quantities facilitates image-based 3D treatment planning calculations for long beta sources used in IVBT. The new formalism also supports the dosimetry involving multiple dwell positions required for lesions longer than the source length. An example of the utilization of this formalism is illustrated for a 90Y coil source in a carbon dioxide-filled balloon. The pertinent dosimetric parameters were generated and tabulated for future use.
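To make the product form concrete, the sketch below assembles a dose rate as D_ref · H(R,Z) · M(R,Z) · h(R), with a Fermi-Dirac-style distribution factor and an exponential transverse function as described above. All functional parameters (source half-length, end fall-off width, radial attenuation constant) are illustrative placeholders, not fitted values from the paper.

```python
import numpy as np

def dose_rate(R, Z, D_ref=1.0, L=30.0, a=0.5, mu=2.0, R0=2.0):
    """Product-form dose rate D(R, Z) = D_ref * H(R, Z) * M(R, Z) * h(R).
    H: Fermi-Dirac-like distribution factor (flat centrally, steep at the ends);
    M: modulation function, set to unity in this sketch;
    h: exponential radial fall-off normalized to R0 = 2 mm.
    L (source length, mm), a (end fall-off width, mm), mu (mm^-1) are placeholders."""
    H = 1.0 / (1.0 + np.exp((np.abs(Z) - L / 2.0) / a))
    M = 1.0
    h = np.exp(-mu * (R - R0))
    return D_ref * H * M * h

print(dose_rate(R=2.0, Z=0.0))   # ~D_ref on the perpendicular bisector at 2 mm
print(dose_rate(R=2.0, Z=15.0))  # ~half of that: the edge effect at a source end
```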
Simulations of Ground and Space-Based Oxygen Atom Experiments
NASA Technical Reports Server (NTRS)
Minton, T. K.; Cline, J. A.; Braunstein, M.
2002-01-01
Fast, pulsed atomic oxygen sources are a key tool in ground-based investigations of spacecraft contamination and surface erosion effects. These technically challenging ground-based studies provide a before and after picture of materials under low-earth-orbit (LEO) conditions. It would be of great interest to track in real time the pulsed flux from the source to the surface sample target and beyond in order to characterize the population of atoms and molecules that actually impact the surface and those that make it downstream to any coincident detectors. We have performed simulations in order to provide such detailed descriptions of these ground-based measurements and to provide an assessment of their correspondence to the actual LEO environment. Where possible we also make comparisons to measured fluxes and erosion yields. To perform the calculations we use a detailed description of a measurement beam and surface geometry based on the pulsed apparatus at Montana State University. In this system, a short pulse (on the order of 10 microseconds) of an O/O2 beam impacts a flat sample about 40 cm downstream and slightly displaced from the beam's central axis. Past this target, at the end of the beam axis, is a quadrupole mass spectrometer that measures the relative in situ flux of O/O2 to give an overall normalized erosion yield. In our simulations we use the Direct Simulation Monte Carlo (DSMC) method, and track individual atoms within the atomic oxygen pulse. DSMC techniques are typically used to model rarefied (few-collision) gas flows, which occur at altitudes above approximately 110 kilometers. These techniques are well suited for the conditions here, and multi-collision effects that can only be treated by this or a similar technique are included. This simulation includes collisions with the surface and among gas atoms that have scattered from the surface. The simulation also includes descriptions of the velocity spread and spatial profiles of the O/O2 beam obtained from separate measurements. These computations use basic engineering models for the gas-gas and gas-surface scattering and focus on the influence of multi-collision effects. These simulations characterize many important quantities of interest including the actual flux of atoms that reach the surface, the energy distribution of this flux, as well as the direction of the velocity of the flux that strikes the surface. These quantities are important in characterizing the conditions which give rise to measured surface erosion. The calculations also yield time-snapshots of the pulse as it impacts and flows around the surface. These snapshots reveal the local environment of gas near the surface for the duration of the pulse. We are also able to compute the flux of molecules that travel downstream and reach the spectrometer, and we characterize their velocity distribution. The number of atoms that reach the spectrometer can in fact be influenced by the presence of the surface due to gas-gas collisions from atoms scattered from the surface, and it will generally be less than that with the surface absent. This amounts to an overall normalization factor in computing erosion yields. We discuss these quantities and their relationship to the gas-surface interaction parameters. We have also performed similar calculations corresponding to conditions (number densities, temperatures, and velocities) of low-earth orbit.
The steady-state nature and lower overall flux of the actual space environment give rise to differences in the nature of the gas impacts on the surface from those of the ground-based measurements using a pulsed source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guan, Fada; Peeler, Christopher; Taleei, Reza
Purpose: The motivation of this study was to find and eliminate the cause of errors in dose-averaged linear energy transfer (LET) calculations from therapeutic protons in small targets, such as biological cell layers, calculated using the GEANT 4 Monte Carlo code. Furthermore, the purpose was also to provide a recommendation to select an appropriate LET quantity from GEANT 4 simulations to correlate with biological effectiveness of therapeutic protons. Methods: The authors developed a strategy based on particle tracking steps to calculate the average LET quantities (track-averaged LET, LET_t, and dose-averaged LET, LET_d) using GEANT 4 for different tracking step size limits. A step size limit refers to the maximally allowable tracking step length. The authors investigated how the tracking step size limit influenced the calculated LET_t and LET_d of protons with six different step limits ranging from 1 to 500 μm in a water phantom irradiated by a 79.7-MeV clinical proton beam. In addition, the authors analyzed the detailed stochastic energy deposition information, including fluence spectra and dose spectra of the energy-deposition-per-step of protons. As a reference, the authors also calculated the averaged LET and analyzed the LET spectra by combining the Monte Carlo method and the deterministic method. Relative biological effectiveness (RBE) calculations were performed to illustrate the impact of different LET calculation methods on the RBE-weighted dose. Results: Simulation results showed that the step limit effect was small for LET_t but significant for LET_d. This resulted from differences in the energy-deposition-per-step between the fluence spectra and dose spectra at different depths in the phantom. Using the Monte Carlo particle tracking method in GEANT 4 can result in incorrect LET_d calculation results in the dose plateau region for small step limits. The erroneous LET_d results can be attributed to the algorithm to determine fluctuations in energy deposition along the tracking step in GEANT 4. The incorrect LET_d values lead to substantial differences in the calculated RBE. Conclusions: When the GEANT 4 particle tracking method is used to calculate the average LET values within targets with a small step limit, such as smaller than 500 μm, the authors recommend the use of LET_t in the dose plateau region and LET_d around the Bragg peak. For a large step limit, i.e., 500 μm, LET_d is recommended along the whole Bragg curve. The transition point depends on beam parameters and can be found by determining the location where the gradient of the ratio of LET_d and LET_t becomes positive.
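The distinction between the two averages is easy to state in terms of per-step tallies: the track average weights each step by its length, while the dose average weights each step by its energy deposit. A minimal sketch under one common scoring convention; the toy numbers are not GEANT 4 output.

```python
import numpy as np

def let_track_and_dose(dE, dx):
    """Track- and dose-averaged LET from per-step energy deposits dE and
    step lengths dx:
        LET_t = sum(dE) / sum(dx)
        LET_d = sum(dE * (dE/dx)) / sum(dE)   # each step weighted by its dose"""
    dE, dx = np.asarray(dE, float), np.asarray(dx, float)
    let_t = dE.sum() / dx.sum()
    let_d = np.sum(dE * (dE / dx)) / dE.sum()
    return let_t, let_d

# toy steps: LET_d is pulled toward the high-LET step, LET_t much less so
print(let_track_and_dose(dE=[1.0, 1.0, 5.0], dx=[1.0, 1.0, 0.5]))  # (2.8, ~7.4)
```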
INTERNATIONAL REPORTS: New International Standards for Quantities and Units
NASA Astrophysics Data System (ADS)
Thor, A. J.
1994-01-01
Each coherent system of units is based on a system of quantities in such a way that the equations between the numerical values expressed in coherent units have exactly the same form, including numerical factors, as the corresponding equations between the quantities. The highest international body responsible for the International System of Units (SI) is the Conférence Générale des Poids et Mesures (CGPM). However, the CGPM is not concerned with quantities or systems of quantities. That question lies within the scope of Technical Committee number twelve of the International Organization for Standardization (ISO/TC 12), whose scope covers quantities, units, symbols, and conversion factors. To fulfil its responsibility, ISO/TC 12 has prepared the International Standard ISO 31, Quantities and Units, which consists of fourteen parts. The new editions of the different parts of the International Standard are briefly presented here.
Estimating usable resources from historical industry data
Cargill, S.M.; Root, D.H.; Bailey, E.H.
1981-01-01
Historical production statistics are used to predict the quantity of remaining usable resources. The commodities considered are mercury, copper and its byproducts gold and silver, and petroleum; the production and discovery data are for the United States. The results of the study indicate that the cumulative return per unit of effort, herein measured as grade of metal ores and discovery rate of recoverable petroleum, is proportional to a negative power of total effort expended, herein measured as total ore mined and total exploratory wells or footage drilled. This power relationship can be extended to some limiting point (a lower ore grade or a maximum number of exploratory wells or footage), and the apparent quantity of available remaining resource at that limit can be calculated. For mercury ore of grades at and above 0.1 percent, the remaining usable resource in the United States is calculated to be 54 million kg (1,567,000 flasks). For copper ore of grades at and above 0.2 percent, the remaining usable copper resource is calculated to be 270 million metric tons (298 million short tons); remaining resources of its by-products gold and silver are calculated to be 3,656 metric tons (118 million troy ounces) and 64,676 metric tons (2,079 million troy ounces), respectively. The undiscovered recoverable crude oil resource in the conterminous United States, at 3 billion feet of additional exploratory drilling, is calculated to be nearly 37.6 billion barrels; the undiscovered recoverable petroleum resource in the Permian basin of western Texas and southeastern New Mexico, at 300 million feet of additional exploratory drilling or 50,000 additional exploratory wells, is calculated to be about 6.2 billion BOE (barrels of oil equivalent).
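The power-law relationship between return per unit of effort and total effort described above can be illustrated with a small log-log regression; the numbers below are hypothetical, not the paper's data.

```python
import numpy as np

# Hypothetical series: cumulative ore mined (Mt) vs. average grade mined (%)
effort = np.array([10.0, 30.0, 100.0, 300.0, 1000.0])
grade  = np.array([1.80, 1.20, 0.75, 0.50, 0.31])

# grade ~ a * effort**b  =>  linear fit on log-log axes
b, log_a = np.polyfit(np.log(effort), np.log(grade), 1)
print(f"grade ~ {np.exp(log_a):.2f} * effort^{b:.2f}")

# Extend to a limiting grade (e.g., 0.2 %) to bound the remaining resource
limit_effort = (0.2 / np.exp(log_a)) ** (1.0 / b)
print(f"total effort at the 0.2% grade limit: {limit_effort:.0f} Mt")
```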
Henderson, Timothy M.; Wuttke, Gilbert H.
1977-01-01
A variable leak gas source and a method for obtaining the same which includes filling a quantity of hollow glass micro-spheres with a gas, storing said quantity in a confined chamber having a controllable outlet, heating said chamber above room temperature, and controlling the temperature of said chamber to control the quantity of gas passing out of said controllable outlet. Individual gas filled spheres may be utilized for calibration purposes by breaking a sphere having a known quantity of a known gas to calibrate a gas detection apparatus.
Analysing uncertainties of supply and demand in the future use of hydrogen as an energy vector
NASA Astrophysics Data System (ADS)
Lenel, U. R.; Davies, D. G. S.; Moore, M. A.
An analytical technique (Analysis with Uncertain Quantities), developed at Fulmer, is being used to examine the sensitivity of the outcome to uncertainties in input quantities in order to highlight which input quantities critically affect the potential role of hydrogen. The work presented here includes an outline of the model and the analysis technique, along with basic considerations of the input quantities to the model (demand, supply, and constraints). Some examples are given of probabilistic estimates of input quantities.
Test and evaluation of the heat recovery incinerator system at Naval Station, Mayport, Florida
NASA Astrophysics Data System (ADS)
1981-05-01
This report describes test and evaluation of the two-ton/hr heat recovery incinerator (HRI) facility located at Mayport Naval Station, Fla., carried out during November and December 1980. The tests included: (1) Solid Waste: characterization, heating value, and ultimate analysis, (2) Ash: moisture, combustibles, and heating values of both bottom and cyclone ashes; Extraction Procedure toxicity tests on leachates from both bottom and cyclone ashes; trace metals in cyclone particulates, (3) Stack Emissions: particulates (quantity and size distribution), chlorides, oxygen, carbon dioxide, carbon monoxide, and trace elements, and (4) Heat and Mass Balance: all measurements required to carry out complete heat and mass balance calculations over the test period. The overall thermal efficiency of the HRI facility while operating at approximately 1.0 ton/hr was found to be 49% when the primary Btu equivalent of the electrical energy consumed during the test program was included.
Wave vector modification of the infinite order sudden approximation
NASA Astrophysics Data System (ADS)
Sachs, Judith Grobe; Bowman, Joel M.
1980-10-01
A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities P_{ni→nf} for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn = |nf − ni| is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.
Optimal Redundancy Management in Reconfigurable Control Systems Based on Normalized Nonspecificity
NASA Technical Reports Server (NTRS)
Wu, N.Eva; Klir, George J.
1998-01-01
In this paper the notion of normalized nonspecificity is introduced. The nonspecificity measures the uncertainty of the estimated parameters that reflect impairment in a controlled system. Based on this notion, a quantity called the reconfiguration coverage is calculated. It represents the likelihood of success of a control reconfiguration action. This coverage links the overall system reliability to the achievable and required control performance, as well as diagnostic performance. The coverage, when calculated on-line, is used for managing the redundancy in the system.
Index cost estimate based BIM method - Computational example for sports fields
NASA Astrophysics Data System (ADS)
Zima, Krzysztof
2017-07-01
The paper presents an example of cost estimation in the early phase of a project. A fragment of a relational database containing solutions, descriptions, the geometry of construction objects, and unit costs of sports facilities was shown. The Index Cost Estimate Based BIM method calculations with the use of Case Based Reasoning were presented, too. The article presents local and global similarity measurement and an example of a BIM-based quantity takeoff process. The outcome of cost calculations based on the CBR method was presented as the final result.
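The local/global similarity measurement mentioned above is commonly implemented as a weighted aggregation of per-attribute similarities. A minimal sketch with hypothetical attributes, weights, and value ranges (not the paper's actual database schema):

```python
def local_sim(x, y, value_range):
    """Distance-based local similarity on one attribute, scaled to [0, 1]."""
    return 1.0 - abs(x - y) / value_range

def global_sim(case, query, weights, ranges):
    """Weighted aggregation of local similarities (a common CBR scheme)."""
    total = sum(w * local_sim(case[k], query[k], ranges[k])
                for k, w in weights.items())
    return total / sum(weights.values())

# Hypothetical sports-field attributes
case    = {"area_m2": 1200.0, "seats": 500.0}
query   = {"area_m2": 1000.0, "seats": 450.0}
weights = {"area_m2": 0.7, "seats": 0.3}
ranges  = {"area_m2": 5000.0, "seats": 2000.0}
print(f"global similarity = {global_sim(case, query, weights, ranges):.3f}")
```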
The endothelial sample size analysis in corneal specular microscopy clinical examinations.
Abib, Fernando C; Holzchuh, Ricardo; Schaefer, Artur; Schaefer, Tania; Godois, Ronialci
2012-05-01
To evaluate endothelial cell sample size and statistical error in corneal specular microscopy (CSM) examinations. One hundred twenty examinations were conducted with 4 types of corneal specular microscopes: 30 each with the Bio-Optics, CSO, Konan, and Topcon instruments. All endothelial image data were analyzed by the respective instrument software and also by the Cells Analyzer software with a method developed in our lab. A reliability degree (RD) of 95% and a relative error (RE) of 0.05 were used as cut-off values to analyze images of the counted endothelial cells, called samples. The sample size mean was the number of cells evaluated on the images obtained with each device. Only examinations with RE < 0.05 were considered statistically correct and suitable for comparisons with future examinations. The Cells Analyzer software was used to calculate the RE and customized sample size for all examinations. Bio-Optics: sample size, 97 ± 22 cells; RE, 6.52 ± 0.86; only 10% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 162 ± 34 cells. CSO: sample size, 110 ± 20 cells; RE, 5.98 ± 0.98; only 16.6% of the examinations had sufficient endothelial cell quantity (RE < 0.05); customized sample size, 157 ± 45 cells. Konan: sample size, 80 ± 27 cells; RE, 10.6 ± 3.67; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 336 ± 131 cells. Topcon: sample size, 87 ± 17 cells; RE, 10.1 ± 2.52; none of the examinations had sufficient endothelial cell quantity (RE > 0.05); customized sample size, 382 ± 159 cells. A very high number of CSM examinations had sample errors based on the Cells Analyzer software. The endothelial sample size (examinations) needs to include more cells to be reliable and reproducible. The Cells Analyzer tutorial routine will be useful for CSM examination reliability and reproducibility.
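A customized sample size of the kind computed above can be approximated with the usual relation between relative error, coefficient of variation, and count; the formula below is a textbook assumption, not the Cells Analyzer's documented routine.

```python
import math

def required_cells(cv, rel_error=0.05, z=1.96):
    """Cells needed so the mean density estimate meets the relative-error
    target at ~95% reliability, assuming RE = z * CV / sqrt(n)."""
    return math.ceil((z * cv / rel_error) ** 2)

# Hypothetical coefficient of variation of the counted cell population
print(required_cells(cv=0.25))   # -> 97 cells
```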
Santman-Berends, Inge; Luttikholt, Saskia; den Brom, René Van; Schaik, Gerdien Van; Gonggrijp, Maaike; Hage, Han; Vellema, Piet
2014-01-01
The aim of this study was to estimate the quantity of antibiotics and classes of antibiotics used in the small ruminant industry in the Netherlands in 2011 and 2012. Twelve large veterinary practices, located throughout the Netherlands were selected for this study. All small ruminant farms associated with these practices that had complete records on the quantity of antibiotics prescribed were included. The veterinary practices provided data on all antibiotics prescribed, and the estimated animal used daily dose of antibiotics per year (AUDD/Y) was calculated for each farm. The median AUDD/Y in small ruminant farms was zero in both years (mean 0.60 in 2011, and 0.62 in 2012). The largest quantity of antibiotic use was observed in the professional goat industry (herds of ≥32 goats) with a median AUDD/Y of 1.22 in 2011 and 0.73 in 2012. In the professional sheep industry (flocks of ≥32 sheep), the median AUDD/Y was 0 in 2011 and 0.10 in 2012. In the small scale industry (flocks or herds of <32 sheep or goats), the median AUDD/Y never exceeded 0. The most frequently prescribed antibiotics in the small scale industry and professional sheep farms belonged to the penicillin class. In professional goat farms, antibiotics of the aminoglycoside class were most frequently prescribed. This study provides the first assessment on the quantity of antibiotic use in the small ruminant industry. Given a comparable attitude towards antibiotic use, these results might be valid for small ruminant populations in other north-western European countries as well. The antibiotic use in the small ruminant industry appeared to be low, and is expected to play a minor role in the development of antibiotic resistance. Nevertheless, several major zoonotic bacterial pathogens are associated with the small ruminant industry, and it remains important that antibiotics are used in a prudent way. PMID:25115998
BOREHOLE FLOWMETERS: FIELD APPLICATION AND DATA ANALYSIS
This paper reviews application of borehole flowmeters in granular and fractured rocks. Basic data obtained in the field are the ambient flow log and the pumping-induced flow log. These basic logs may then be used to calculate other quantities of interest. The paper describes the app...
A laboratory method for precisely determining the micro-volume-magnitudes of liquid efflux
NASA Technical Reports Server (NTRS)
Cloutier, R. L.
1969-01-01
Micro-volumetric quantities of ejected liquid are made to produce equal volumetric displacements of a more dense material. Weight measurements are obtained on the displaced heavier liquid and used to calculate volumes based upon the known density of the heavy medium.
Code of Federal Regulations, 2011 CFR
2011-04-01
...) The term mixed oxides means the sum of the quantities of aluminum, iron, calcium, and magnesium (in whatever combination they may exist in a coal-tar color) calculated as aluminum trioxide, ferric oxide, calcium oxide, and magnesium oxide. (k)-(m) [Reserved] (n) The term externally applied drugs and cosmetics...
Code of Federal Regulations, 2010 CFR
2010-04-01
...) The term mixed oxides means the sum of the quantities of aluminum, iron, calcium, and magnesium (in whatever combination they may exist in a coal-tar color) calculated as aluminum trioxide, ferric oxide, calcium oxide, and magnesium oxide. (k)-(m) [Reserved] (n) The term externally applied drugs and cosmetics...
NASA Technical Reports Server (NTRS)
Havill, Clinton H
1928-01-01
These tables are intended to provide a standard method and to facilitate the calculation of the quantity of "Standard Helium" in high pressure containers. The research data and the formulas used in the preparation of the tables were furnished by the Research Laboratory of Physical Chemistry, of the Massachusetts Institute of Technology.
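Although the 1928 tables predate modern computing, the underlying calculation is a real-gas variant of the gas law. A minimal sketch, with the compressibility factor Z as an assumed tabulated input and 22.4 L/mol standing in for the historical "Standard Helium" definition:

```python
def standard_helium_m3(p_pa, v_m3, t_k, z=1.0):
    """Moles from the real-gas law n = PV/(ZRT), converted to an equivalent
    free-gas volume near standard conditions. Z would come from
    compressibility data of the kind the tables were built on."""
    R = 8.314  # J/(mol K)
    n_mol = p_pa * v_m3 / (z * R * t_k)
    return n_mol * 0.0224  # m^3 at ~0 degC, 1 atm (simplified)

# Hypothetical cylinder: 15 MPa, 50 L, 20 degC, Z ~ 1.05 for helium
print(round(standard_helium_m3(15e6, 0.050, 293.15, z=1.05), 2))
```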
DeWolf, Melissa; Bassok, Miriam; Holyoak, Keith J
2015-02-01
The standard number system includes several distinct types of notations, which differ conceptually and afford different procedures. Among notations for rational numbers, the bipartite format of fractions (a/b) enables them to represent 2-dimensional relations between sets of discrete (i.e., countable) elements (e.g., red marbles/all marbles). In contrast, the format of decimals is inherently 1-dimensional, expressing a continuous-valued magnitude (i.e., proportion) but not a 2-dimensional relation between sets of countable elements. Experiment 1 showed that college students indeed view these 2-number notations as conceptually distinct. In a task that did not involve mathematical calculations, participants showed a strong preference to represent partitioned displays of discrete objects with fractions and partitioned displays of continuous masses with decimals. Experiment 2 provided evidence that people are better able to identify and evaluate ratio relationships using fractions than decimals, especially for discrete (or discretized) quantities. Experiments 3 and 4 found a similar pattern of performance for a more complex analogical reasoning task. When solving relational reasoning problems based on discrete or discretized quantities, fractions yielded greater accuracy than decimals; in contrast, when quantities were continuous, accuracy was lower for both symbolic notations. Whereas previous research has established that decimals are more effective than fractions in supporting magnitude comparisons, the present study reveals that fractions are relatively advantageous in supporting relational reasoning with discrete (or discretized) concepts. These findings provide an explanation for the effectiveness of natural frequency formats in supporting some types of reasoning, and have implications for teaching of rational numbers.
Wan, You-peng; Yin, Kui-hao; Peng, Sheng-hua
2015-06-01
Taking a pumped storage reservoir located in southern China as the research object, the paper established a three-dimensional hydrodynamic and eutrophication model of the reservoir employing EFDC (environmental fluid dynamics code) model, calibrated and verified the model using long-term hydraulic and water quality data. Based on the model results, the effects of nitrogen and phosphorus concentrations on the algae growth were analyzed, and the response of algae to nitrogen and phosphorus concentration and quantity of pumping water was also calculated. The results showed that the nitrogen and phosphorus concentrations had little limit on algae growth rate in the reservoir. In the nutrients reduction scenarios, reducing phosphorus would gain greater algae biomass reduction than reducing nitrogen. When reducing 60 percent of nitrogen, the algae biomass did not decrease, while 12.4 percent of algae biomass reduction could be gained with the same reduction ratio of phosphorus. When the reduction ratio went to 90 percent, the algae biomass decreased by 17.9 percent and 35.1 percent for nitrogen and phosphorus reduction, respectively. In the pumping water quantity regulation scenarios, the algae biomass decreased with the increasing pumping water quantity when the pumping water quantity was greater than 20 percent of the current value; when it was less than 20 percent, the algae biomass increased with the increasing pumping water quantity. The algae biomass decreased by 25.7 percent when the pumping water quantity was doubled, and increased by 38.8 percent when it decreased to 20 percent. The study could play an important role in supporting eutrophication controlling in water source area.
Poeter, Eileen E.; Hill, Mary C.; Banta, Edward R.; Mehl, Steffen; Christensen, Steen
2006-01-01
This report documents the computer codes UCODE_2005 and six post-processors. Together the codes can be used with existing process models to perform sensitivity analysis, data needs assessment, calibration, prediction, and uncertainty analysis. Any process model or set of models can be used; the only requirements are that models have numerical (ASCII or text only) input and output files, that the numbers in these files have sufficient significant digits, that all required models can be run from a single batch file or script, and that simulated values are continuous functions of the parameter values. Process models can include pre-processors and post-processors as well as one or more models related to the processes of interest (physical, chemical, and so on), making UCODE_2005 extremely powerful. An estimated parameter can be a quantity that appears in the input files of the process model(s), or a quantity used in an equation that produces a value that appears in the input files. In the latter situation, the equation is user-defined. UCODE_2005 can compare observations and simulated equivalents. The simulated equivalents can be any simulated value written in the process-model output files or can be calculated from simulated values with user-defined equations. The quantities can be model results, or dependent variables. For example, for ground-water models they can be heads, flows, concentrations, and so on. Prior, or direct, information on estimated parameters also can be considered. Statistics are calculated to quantify the comparison of observations and simulated equivalents, including a weighted least-squares objective function. In addition, data-exchange files are produced that facilitate graphical analysis. UCODE_2005 can be used fruitfully in model calibration through its sensitivity analysis capabilities and its ability to estimate parameter values that result in the best possible fit to the observations. Parameters are estimated using nonlinear regression: a weighted least-squares objective function is minimized with respect to the parameter values using a modified Gauss-Newton method or a double-dogleg technique. Sensitivities needed for the method can be read from files produced by process models that can calculate sensitivities, such as MODFLOW-2000, or can be calculated by UCODE_2005 using a more general, but less accurate, forward- or central-difference perturbation technique. Problems resulting from inaccurate sensitivities and solutions related to the perturbation techniques are discussed in the report. Statistics are calculated and printed for use in (1) diagnosing inadequate data and identifying parameters that probably cannot be estimated; (2) evaluating estimated parameter values; and (3) evaluating how well the model represents the simulated processes. Results from UCODE_2005 and codes RESIDUAL_ANALYSIS and RESIDUAL_ANALYSIS_ADV can be used to evaluate how accurately the model represents the processes it simulates. Results from LINEAR_UNCERTAINTY can be used to quantify the uncertainty of model simulated values if the model is sufficiently linear. Results from MODEL_LINEARITY and MODEL_LINEARITY_ADV can be used to evaluate model linearity and, thereby, the accuracy of the LINEAR_UNCERTAINTY results. UCODE_2005 can also be used to calculate nonlinear confidence and predictions intervals, which quantify the uncertainty of model simulated values when the model is not linear. 
CORFAC_PLUS can be used to produce factors that allow intervals to account for model intrinsic nonlinearity and small-scale variations in system characteristics that are not explicitly accounted for in the model or the observation weighting. The six post-processing programs are independent of UCODE_2005 and can use the results of other programs that produce the required data-exchange files. UCODE_2005 and the other six codes are intended for use on any computer operating system. The programs con
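The modified Gauss-Newton regression with perturbation sensitivities described above can be sketched compactly. This is a simplified illustration, not UCODE_2005's actual implementation (no damping, no Marquardt parameter, no convergence tests):

```python
import numpy as np

def gauss_newton(sim, p0, obs, weights, iters=20, h=1e-6):
    """Weighted least-squares parameter estimation with forward-difference
    (perturbation) sensitivities. Minimizes sum(w * (obs - sim(p))**2)."""
    p = np.asarray(p0, dtype=float)
    W = np.diag(weights)
    for _ in range(iters):
        r = obs - sim(p)                       # residual vector
        J = np.empty((obs.size, p.size))       # sensitivity matrix d sim/dp
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = h * max(1.0, abs(p[j]))
            J[:, j] = (sim(p + dp) - sim(p)) / dp[j]
        p = p + np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
    return p

# Toy "process model": y = a * exp(b * x), fit to synthetic observations
x = np.linspace(0.0, 1.0, 8)
obs = 2.0 * np.exp(-1.5 * x)
model = lambda p: p[0] * np.exp(p[1] * x)
print(gauss_newton(model, [1.0, -1.0], obs, np.ones_like(x)))
```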
Comment on "Continuum Lowering and Fermi-Surface Rising in Stromgly Coupled and Degenerate Plasmas"
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iglesias, C. A.; Sterne, P. A.
In a recent Letter, Hu [1] reported photon absorption cross sections in strongly coupled, degenerate plasmas from quantum molecular dynamics (QMD). The Letter claims that the K-edge shift as a function of plasma density computed with simple ionization potential depression (IPD) models are in violent disagreement with the QMD results. The QMD calculations displayed an increase in Kedge shift with increasing density while the simpler models yielded a decrease. Here, this Comment shows that the claimed large errors reported by Hu for the widely used Stewart- Pyatt (SP) model [2] stem from an invalid comparison of disparate physical quantities andmore » is largely resolved by including well-known corrections for degenerate systems.« less
Ionization-potential depression and dynamical structure factor in dense plasmas
NASA Astrophysics Data System (ADS)
Lin, Chengliang; Röpke, Gerd; Kraeft, Wolf-Dietrich; Reinholz, Heidi
2017-07-01
The properties of a bound electron system immersed in a plasma environment are strongly modified by the surrounding plasma. The modification of an essential quantity, the ionization energy, is described by the electronic and ionic self-energies, including dynamical screening within the framework of the quantum statistical theory. Introducing the ionic dynamical structure factor as the indicator for the ionic microfield, we demonstrate that ionic correlations and fluctuations play a critical role in determining the ionization potential depression. This is, in particular, true for mixtures of different ions with large mass and charge asymmetry. The ionization potential depression is calculated for dense aluminum plasmas as well as for a CH plasma and compared to the experimental data and more phenomenological approaches used so far.
NASA Technical Reports Server (NTRS)
1994-01-01
The ChemScan UV-6100 is a spectrometry system originally developed by Biotronics Technologies, Inc. under a Small Business Innovation Research (SBIR) contract. It is marketed to the water and wastewater treatment industries, replacing "grab sampling" with on-line data collection. It analyzes the light absorbance characteristics of a water sample, simultaneously detects hundreds of individual wavelengths absorbed by chemical substances in a process solution, and quantifies the information. Spectral data is then processed by the ChemScan analyzer and compared with calibration files in the system's memory in order to calculate concentrations of chemical substances that cause UV light absorbance in specific patterns. Monitored substances can be analyzed for quality and quantity. Applications include detection of a variety of substances, and the information provided enables an operator to control a process more efficiently.
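Concentration recovery from a multi-wavelength absorbance spectrum of the kind described above is, in its simplest form, a linear least-squares problem via the Beer-Lambert law; the absorptivity matrix below is hypothetical, not a ChemScan calibration file.

```python
import numpy as np

# Absorbance model A(lambda) = sum_j eps_j(lambda) * c_j * path; solve for c.
E = np.array([[0.9, 0.1],     # hypothetical absorptivities
              [0.4, 0.6],     # rows: wavelengths, cols: species
              [0.1, 1.1]])
path_cm = 1.0
A = np.array([0.23, 0.28, 0.35])          # measured absorbance spectrum

c, *_ = np.linalg.lstsq(E * path_cm, A, rcond=None)
print({"species_1": round(c[0], 3), "species_2": round(c[1], 3)})
```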
Comment on "Continuum Lowering and Fermi-Surface Rising in Stromgly Coupled and Degenerate Plasmas"
Iglesias, C. A.; Sterne, P. A.
2018-03-16
In a recent Letter, Hu [1] reported photon absorption cross sections in strongly coupled, degenerate plasmas from quantum molecular dynamics (QMD). The Letter claims that the K-edge shift as a function of plasma density computed with simple ionization potential depression (IPD) models is in violent disagreement with the QMD results. The QMD calculations displayed an increase in K-edge shift with increasing density while the simpler models yielded a decrease. Here, this Comment shows that the claimed large errors reported by Hu for the widely used Stewart-Pyatt (SP) model [2] stem from an invalid comparison of disparate physical quantities and are largely resolved by including well-known corrections for degenerate systems.
Adhikari, Birendra; Jones, Michael G.; Orme, Christopher J.; Wendt, Daniel S.; Wilson, Aaron D.
2015-10-01
The switchable polarity solvent forward osmosis (SPS FO) desalination process requires use of a polishing filtration step to remove trace quantities of draw solution from the product water stream. Selected nanofiltration (NF) and reverse osmosis (RO) membranes were tested for their ability to recover water from 1-cyclohexylpiperidenium bicarbonate solutions in this application. This submission includes the experimental data used to calculate NF and RO membrane flux-normalized net driving pressure (FNNDP) and flux-normalized rejection (FNR) performance in recovering water from 1-cyclohexylpiperidenium bicarbonate solutions. This data is further described and visualized in the manuscript entitled "Compatibility study of nanofiltration and reverse osmosis membranes with 1 cyclohexylpiperidenium bicarbonate solutions" (see attached Compatibility Study Manuscript).
NASA Technical Reports Server (NTRS)
Gomberg, R. I.; Stewart, R. B.
1976-01-01
As part of a continuing study of the environmental effects of solid rocket motor (SRM) operations in the troposphere, a numerical model was used to simulate the afterburning processes occurring in solid rocket motor plumes and to predict the quantities of potentially harmful chemical species which are created. The calculations include the effects of finite-rate chemistry and turbulent mixing. It is found that the amount of NO produced is much less than the amount of HCl present in the plume, that chlorine will appear predominantly in the form of HCl although some molecular chlorine is present, and that combustion is complete as is evident from the predominance of carbon dioxide over carbon monoxide.
Narrow Angle Wide Spectral Range Radiometer Design FEANICS/REEFS Radiometer Design Report
NASA Technical Reports Server (NTRS)
Camperchioli, William
2005-01-01
A critical measurement for the Radiative Enhancement Effects on Flame Spread (REEFS) microgravity combustion experiment is the net radiative flux emitted from the gases and from the solid fuel bed. These quantities are measured using a set of narrow angle, wide spectral range radiometers. The radiometers are required to have an angular field of view of 1.2 degrees and measure over the spectral range of 0.6 to 30 microns, which presents a challenging design effort. This report details the design of this radiometer system including field of view, radiometer response, radiometric calculations, temperature effects, error sources, baffling and amplifiers. This report presents some radiometer specific data but does not present any REEFS experiment data.
Soliman, George; Yevick, David; Jessop, Paul
2014-09-01
This paper demonstrates that numerous calculations involving polarization transformations can be condensed by employing suitable geometric algebra formalism. For example, to describe polarization mode dispersion and polarization-dependent loss, both the material birefringence and differential loss enter as bivectors and can be combined into a single symmetric quantity. Their frequency and distance evolution, as well as that of the Stokes vector through an optical system, can then each be expressed as a single compact expression, in contrast to the corresponding Mueller matrix formulations. The intrinsic advantage of the geometric algebra framework is further demonstrated by presenting a simplified derivation of generalized Stokes parameters that include the electric field phase. This procedure simultaneously establishes the tensor transformation properties of these parameters.
Precise Measurement of Parity Nonconserving Optical Rotation in Atomic Thallium
DOE Office of Scientific and Technical Information (OSTI.GOV)
Edwards, N.H.; Phipp, S.J.; Baird, P.E.G.
1995-04-03
We report a new measurement of parity nonconserving (PNC) optical rotation on the 6p_{1/2}-6p_{3/2} transition in atomic thallium near 1283 nm. The result, expressed in terms of the quantity R = Im(E1^{PNC}/M1), is -(15.68 ± 0.45) × 10^{-8}, and is consistent with current calculations based on the standard model. In addition, limits have been set on the much smaller nuclear spin-dependent rotation amplitude at R_S = (0.04 ± 0.20) × 10^{-8}; this is consistent with theoretical estimates which include a nuclear anapole contribution.
Carpenter, Corey M G; Todorov, Dimitar; Driscoll, Charles T; Montesdeoca, Mario
2016-11-01
Syracuse, New York is working under a court-ordered agreement to limit combined sewer overflows (CSO) to local surface waters. Green infrastructure technologies, including green roofs, are being implemented as part of a CSO abatement strategy and to develop co-benefits of diminished stormwater runoff, including decreased loading of contaminants to the wastewater system and surface waters. The objective of this study was to examine the quantity and quality of discharge associated with precipitation events over an annual cycle from a green roof in Syracuse, NY and to compare measurements from this monitoring program with results from a roof irrigation experiment. Wet deposition, roof drainage, and water quality were measured for 87 storm events during an approximately 12-month period over 2011-2012. Water and nutrient (total phosphorus, total nitrogen, and dissolved organic carbon) mass balances were conducted on an event basis to evaluate retention annually and during the growing and non-growing seasons. These results are compared with a hydrological manipulation experiment, which comprised artificially watering the roof. Loadings of nutrients were calculated for experimental and actual storms using the concentration of nutrients and the flow data of water discharging from the roof. The green roof was effective in retaining precipitation quantity from storm events (mean percent retention 96.8%, SD = 2.7%, n = 87), although the relative fraction of water retained decreased with increases in the size of the event. There was no difference in water retention of the green roof for the growing and non-growing seasons. Drainage waters exhibited high concentrations of nutrients during the warm-temperature growing season, particularly total nitrogen and dissolved organic carbon. Overall, nutrient losses were low because of the strong retention of water. However, there was marked variation in the retention of nutrients by season due to variations in concentrations in roof runoff.
40 CFR 98.157 - Records that must be retained.
Code of Federal Regulations, 2010 CFR
2010-07-01
... density measurements, and flowmeters used to measure the quantities reported under this rule, including..., volumetric and density measurements, and flowmeters used to measure the quantities reported under this...
40 CFR 98.157 - Records that must be retained.
Code of Federal Regulations, 2011 CFR
2011-07-01
... density measurements, and flowmeters used to measure the quantities reported under this rule, including..., volumetric and density measurements, and flowmeters used to measure the quantities reported under this...
40 CFR 98.157 - Records that must be retained.
Code of Federal Regulations, 2014 CFR
2014-07-01
... density measurements, and flowmeters used to measure the quantities reported under this rule, including..., volumetric and density measurements, and flowmeters used to measure the quantities reported under this...
40 CFR 98.157 - Records that must be retained.
Code of Federal Regulations, 2013 CFR
2013-07-01
... density measurements, and flowmeters used to measure the quantities reported under this rule, including..., volumetric and density measurements, and flowmeters used to measure the quantities reported under this...
40 CFR 98.157 - Records that must be retained.
Code of Federal Regulations, 2012 CFR
2012-07-01
... density measurements, and flowmeters used to measure the quantities reported under this rule, including..., volumetric and density measurements, and flowmeters used to measure the quantities reported under this...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Y; UT Southwestern Medical Center, Dallas, TX; Tian, Z
2015-06-15
Purpose: Intensity-modulated proton therapy (IMPT) is increasingly used in proton therapy. For IMPT optimization, Monte Carlo (MC) is desired for spot dose calculations because of its high accuracy, especially in cases with a high level of heterogeneity. It is also preferred in biological optimization problems due to the capability of computing quantities related to biological effects. However, MC simulation is typically too slow to be used for this purpose. Although GPU-based MC engines have become available, the achieved efficiency is still not ideal. The purpose of this work is to develop a new optimization scheme to include GPU-based MC into IMPT. Methods: A conventional approach using MC in IMPT simply calls the MC dose engine repeatedly for each spot's dose calculation. However, this is not the optimal approach, because of the unnecessary computations on some spots that turn out to have very small weights after solving the optimization problem. GPU-memory writing conflicts occurring at a small beam size also reduce computational efficiency. To solve these problems, we developed a new framework that iteratively performs MC dose calculations and plan optimizations. At each dose calculation step, the particles were sampled from different spots altogether with the Metropolis algorithm, such that the particle number is proportional to the latest optimized spot intensity. Simultaneously transporting particles from multiple spots also mitigated the memory writing conflict problem. Results: We have validated the proposed MC-based optimization schemes in one prostate case. The total computation time of our method was ∼5–6 min on one NVIDIA GPU card, including both spot dose calculation and plan optimization, whereas a conventional method naively using the same GPU-based MC engine was ∼3 times slower. Conclusion: A fast GPU-based MC dose calculation method along with a novel optimization workflow is developed. The high efficiency makes it attractive for clinical use.
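The core idea of allocating MC histories in proportion to the latest spot weights can be sketched simply. The snippet below uses a plain multinomial draw rather than the Metropolis sampler described in the abstract, and the spot intensities are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def allocate_histories(spot_weights, n_particles):
    """Allocate MC histories across spots in proportion to the latest
    optimized intensities, so near-zero-weight spots are not over-simulated."""
    p = np.asarray(spot_weights, float)
    p = p / p.sum()
    return rng.multinomial(n_particles, p)

spot_weights = [5.0, 0.1, 2.4, 0.0, 1.5]   # hypothetical spot intensities
print(allocate_histories(spot_weights, 10_000))
```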
Relationships between alcohol intake and atherogenic indices in women.
Wakabayashi, Ichiro
2013-01-01
Light-to-moderate alcohol consumption is known to reduce the risk of coronary artery disease. The purpose of this study was to investigate relationships of alcohol intake with atherogenic indices, such as the ratio of low-density lipoprotein cholesterol to high-density lipoprotein cholesterol (LDL-C/HDL-C ratio) and the ratio of triglycerides to high-density lipoprotein cholesterol (TG/HDL-C ratio), in women. Subjects (14,067 women, 20-45 years) were divided by alcohol intake into three groups of nondrinkers, occasional drinkers, and regular drinkers, and each drinker group was further divided into lower- (<22 g ethanol/drinking day) and greater- (≥22 g ethanol/drinking day) quantity drinkers. Atherogenic indices were compared among the alcohol groups. Odds ratio (OR) for high LDL-C/HDL-C ratio or high TG/HDL-C ratio calculated after adjustment for age, body mass index, smoking, and habitual exercise was significantly lower (P < .05) than a reference level of 1.00 in regular or occasional lower- and higher-quantity drinkers vs. nondrinkers (OR for high LDL-C/HDL-C ratio, 0.28 (95% confidence interval [95% CI], 0.18-0.44) in regular lower-quantity drinkers, 0.18 (95% CI, 0.12-0.28) in regular higher-quantity drinkers, 0.71 (95% CI, 0.61-0.83) in occasional lower-quantity drinkers, and 0.53 (95% CI, 0.44-0.64) in occasional higher-quantity drinkers; OR for high TG/HDL-C ratio, 0.52 (95% CI, 0.32-0.85) in regular lower-quantity drinkers, 0.67 (95% CI, 0.47-0.96) in regular higher-quantity drinkers, 0.61 (95% CI, 0.50-0.76) in occasional lower-quantity drinkers, and 0.63 (95% CI, 0.50-0.79) in occasional higher-quantity drinkers). Both LDL-C/HDL-C ratio and log-transformed TG/HDL-C ratio were significantly greater in smokers than in nonsmokers. Both in smokers and nonsmokers, LDL-C/HDL-C ratio and log-transformed TG/HDL-C ratio were significantly lower in regular lower- and higher-quantity drinkers than in nondrinkers. In nonsmokers, LDL-C/HDL-C ratio and log-transformed TG/HDL-C ratio tended to be lower and greater, respectively, in regular greater-quantity drinkers than in regular lower-quantity drinkers. In women, alcohol drinking is inversely associated with atherogenic indices irrespective of smoking status, and the inverse association of alcohol drinking with LDL-C/HDL-C ratio is stronger than that with TG/HDL-C ratio.
NASA Technical Reports Server (NTRS)
Murthy, A. V.
1987-01-01
Correction of airfoil data for sidewall boundary-layer effects requires a knowledge of the boundary-layer displacement thickness and the shape factor with the tunnel empty. To facilitate calculation of these quantities under various test conditions for the Langley 0.3 m Transonic Cryogenic Tunnel, a computer program was written. This program reads the various tunnel parameters and the boundary-layer rake total head pressure measurements directly from the Engineering Unit tapes to calculate the required sidewall boundary-layer parameters. Details of the method along with the results for a sample case are presented.
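The sidewall boundary-layer quantities named above follow from standard integral definitions applied to the rake profile. A minimal sketch with a hypothetical 1/7-power-law profile in place of actual tunnel data:

```python
import numpy as np

def displacement_and_shape(y, u, u_edge):
    """Displacement thickness (trapezoidal integral of 1 - u/Ue) and shape
    factor H = delta*/theta from a rake velocity profile."""
    r = u / u_edge
    delta_star = np.trapz(1.0 - r, y)
    theta = np.trapz(r * (1.0 - r), y)
    return delta_star, delta_star / theta

y = np.linspace(0.0, 0.02, 41)             # m, hypothetical rake heights
u = 250.0 * (y / 0.02) ** (1.0 / 7.0)      # 1/7-power-law stand-in profile
print(displacement_and_shape(y, u, 250.0)) # H ~ 1.29 for this profile
```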
Effects of Kerr space-time on spectral features from X-ray illuminated accretion discs
NASA Astrophysics Data System (ADS)
Martocchia, A.; Karas, V.; Matt, G.
2000-03-01
We performed detailed calculations of the relativistic effects acting on both the reflection continuum and the iron line from accretion discs around rotating black holes. Fully relativistic transfer of both illuminating and reprocessed photons has been considered in Kerr space-time. We calculated overall spectra, line profiles and integral quantities, and present their dependences on the black hole angular momentum. We show that the observed EW of the lines is substantially enlarged when the black hole rotates rapidly and/or the source of illumination is near above the hole. Therefore, such calculations provide a way to distinguish between different models of the central source.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xu, H; Guerrero, M; Chen, S
Purpose: The TG-71 report was published in 2014 to present standardized methodologies for MU calculations and determination of dosimetric quantities. This work explores the clinical implementation of a TG71-based electron MU calculation algorithm and compares it with a recently released commercial secondary calculation program–Mobius3D (Mobius Medical System, LP). Methods: TG-71 electron dosimetry data were acquired, and MU calculations were performed based on the recently published TG-71 report. The formalism in the report for extended SSD using air-gap corrections was used. The dosimetric quantities, such as PDD, output factors, and f-air factors, were incorporated into an organized databook that facilitates data access and subsequent computation. The Mobius3D program utilizes a pencil beam redefinition algorithm. To verify the accuracy of calculations, five customized rectangular cutouts of different sizes–6×12, 4×12, 6×8, 4×8, 3×6 cm²–were made. Calculations were compared to each other and to point dose measurements for electron beams of energy 6, 9, 12, 16, 20 MeV. Each calculation / measurement point was at the depth of maximum dose for each cutout in a 10×10 cm² or 15×15 cm² applicator with SSDs of 100 cm and 110 cm. Validation measurements were made with a CC04 ion chamber in a solid water phantom for electron beams of energy 9 and 16 MeV. Results: Differences between the TG-71 and the commercial system relative to measurements were within 3% for most combinations of electron energy, cutout size, and SSD. A 5.6% difference between the two calculation methods was found only for the 6-MeV electron beam with the 3×6 cm² cutout in the 10×10 cm² applicator at 110 cm SSD. Both the TG-71 and the commercial calculations show good consistency with chamber measurements: for 5 cutouts, <1% difference for 100 cm SSD, and 0.5–2.7% for 110 cm SSD. Conclusions: Based on comparisons with measurements, a TG71-based computation method and a Mobius3D program produce reasonably accurate MU calculations for electron-beam therapy.
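A TG-71-style electron MU calculation is, at its core, a product of dosimetric factors. A minimal sketch with hypothetical factor values (the report's full formalism, including air-gap factor interpolation, is not reproduced):

```python
def electron_mu(dose_cgy, ref_output=1.0, output_factor=0.97,
                pdd=100.0, f_air=0.98):
    """TG-71-style electron MU: dose divided by the product of the reference
    output (cGy/MU), applicator/cutout output factor, PDD/100, and the
    extended-SSD air-gap factor. All factor values here are hypothetical."""
    return dose_cgy / (ref_output * output_factor * (pdd / 100.0) * f_air)

# 200 cGy at dmax through a custom cutout at extended SSD
print(round(electron_mu(200.0), 1))   # -> 210.4 MU
```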
Determination of parameters of a nuclear reactor through noise measurements
Cohn, C.E.
1975-07-15
A method of measuring parameters of a nuclear reactor by noise measurements is described. Noise signals are developed by the detectors placed in the reactor core. The polarity coincidence between the noise signals is used to develop quantities from which various parameters of the reactor can be calculated.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jevicki, Antal; Suzuki, Kenta
We continue the study of the Sachdev-Ye-Kitaev model in the large-N limit. Following our formulation in terms of bi-local collective fields with dynamical reparametrization symmetry, we perform perturbative calculations around the conformal IR point. These calculations are based on an ε expansion, which allows for analytical evaluation of correlators and finite-temperature quantities.
Sato, Tatsuhiko; Kase, Yuki; Watanabe, Ritsuko; Niita, Koji; Sihver, Lembit
2009-01-01
Microdosimetric quantities such as lineal energy, y, are better indexes for expressing the RBE of HZE particles in comparison to LET. However, the use of microdosimetric quantities in computational dosimetry is severely limited because of the difficulty in calculating their probability densities in macroscopic matter. We therefore improved the particle transport simulation code PHITS, providing it with the capability of estimating the microdosimetric probability densities in a macroscopic framework by incorporating a mathematical function that can instantaneously calculate the probability densities around the trajectory of HZE particles with a precision equivalent to that of a microscopic track-structure simulation. A new method for estimating biological dose, the product of physical dose and RBE, from charged-particle therapy was established using the improved PHITS coupled with a microdosimetric kinetic model. The accuracy of the biological dose estimated by this method was tested by comparing the calculated physical doses and RBE values with the corresponding data measured in a slab phantom irradiated with several kinds of HZE particles. The simulation technique established in this study will help to optimize the treatment planning of charged-particle therapy, thereby maximizing the therapeutic effect on tumors while minimizing unintended harmful effects on surrounding normal tissues.
Sato, Tatsuhiko; Watanabe, Ritsuko; Sihver, Lembit; Niita, Koji
2012-01-01
Microdosimetric quantities such as lineal energy are generally considered to be better indices than linear energy transfer (LET) for expressing the relative biological effectiveness (RBE) of high charge and energy particles. To calculate their probability densities (PD) in macroscopic matter, it is necessary to integrate microdosimetric tools such as track-structure simulation codes with macroscopic particle transport simulation codes. As an integration approach, the mathematical model for calculating the PD of microdosimetric quantities developed based on track-structure simulations was incorporated into the macroscopic particle transport simulation code PHITS (Particle and Heavy Ion Transport code System). The improved PHITS enables the PD in macroscopic matter to be calculated within a reasonable computation time, while taking their stochastic nature into account. The microdosimetric function of PHITS was applied to biological dose estimation for charged-particle therapy and risk estimation for astronauts. The former application was performed in combination with the microdosimetric kinetic model, while the latter employed the radiation quality factor expressed as a function of lineal energy. Owing to the unique features of the microdosimetric function, the improved PHITS has the potential to establish more sophisticated systems for radiological protection in space as well as for the treatment planning of charged-particle therapy.
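The lineal energy quantities referred to in both abstracts above follow from the event energy deposit divided by the mean chord length of the site. A minimal sketch for a spherical site with a hypothetical event-size spectrum (not PHITS's analytical-function approach):

```python
import numpy as np

def lineal_energy_means(eps_keV, site_diam_um=1.0):
    """Frequency- and dose-mean lineal energy for a spherical site:
    y = eps / lbar with mean chord length lbar = 2d/3; y_D = <y^2>/<y>."""
    y = np.asarray(eps_keV, float) / (2.0 * site_diam_um / 3.0)
    return y.mean(), (y ** 2).mean() / y.mean()

rng = np.random.default_rng(2)
events = rng.lognormal(0.0, 1.0, 10_000)   # hypothetical event energies (keV)
y_f, y_d = lineal_energy_means(events)
print(f"y_F = {y_f:.2f}, y_D = {y_d:.2f} keV/um")
```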
Edwards, D M
2016-03-02
Damping of magnetization dynamics in a ferromagnetic metal, arising from spin-orbit coupling, is usually characterised by the Gilbert parameter α. Recent calculations of this quantity, using a formula due to Kambersky, find that it is infinite for a perfect crystal owing to an intraband scattering term which is of third order in the spin-orbit parameter ξ. This surprising result conflicts with recent work by Costa and Muniz who study damping numerically by direct calculation of the dynamical transverse susceptibility in the presence of spin-orbit coupling. We resolve this inconsistency by following the approach of Costa and Muniz for a slightly simplified model where it is possible to calculate α analytically. We show that to second order in ξ one retrieves the Kambersky result for α, but to higher order one does not obtain any divergent intraband terms. The present work goes beyond that of Costa and Muniz by pointing out the necessity of including the effect of long-range Coulomb interaction in calculating damping for large ξ. A direct derivation of the Kambersky formula is given which shows clearly the restriction of its validity to second order in ξ so that no intraband scattering terms appear. This restriction has an important effect on the damping over a substantial range of impurity content and temperature. The experimental situation is discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hramov, Alexander E.; Saratov State Technical University, Politechnicheskaja str., 77, Saratov 410054; Koronovskii, Alexey A.
2012-08-15
The spectrum of Lyapunov exponents is a powerful tool for the analysis of complex system dynamics. In the general framework of nonlinear dynamics, a number of numerical techniques have been developed to obtain the spectrum of Lyapunov exponents for the complex temporal behavior of systems with a few degrees of freedom. Unfortunately, these methods cannot be applied directly to the analysis of complex spatio-temporal dynamics of plasma devices, which are characterized by an infinite phase space, since they are spatially extended active media. In the present paper, we propose a method for the calculation of the spectrum of the spatial Lyapunov exponents (SLEs) for spatially extended beam-plasma systems. The calculation technique is applied to the analysis of chaotic spatio-temporal oscillations in three different beam-plasma models: (1) a simple Pierce diode, (2) coupled Pierce diodes, and (3) an electron-wave system with a backward electromagnetic wave. We find excellent agreement between the system dynamics and the behavior of the spectrum of the spatial Lyapunov exponents. Along with the proposed method, possible problems of SLE calculation are also discussed. It is shown that for a wide class of spatially extended systems, the set of quantities included in the system state for SLE calculation can be reduced using appropriate features of the plasma systems.
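For the temporal case that the SLE method generalizes, the standard Lyapunov-spectrum algorithm evolves the tangent flow with repeated QR re-orthonormalization. A minimal sketch using the Lorenz system as a stand-in (forward Euler for brevity; not the paper's plasma models):

```python
import numpy as np

def lyapunov_spectrum(f, jac, x0, n_steps=20_000, dt=0.01):
    """Temporal Lyapunov spectrum via Euler integration of the tangent flow
    with repeated QR re-orthonormalization (Benettin-style)."""
    x = np.array(x0, dtype=float)
    Q = np.eye(x.size)
    sums = np.zeros(x.size)
    for _ in range(n_steps):
        x = x + dt * f(x)                  # advance the trajectory
        Q = Q + dt * jac(x) @ Q            # advance the tangent vectors
        Q, R = np.linalg.qr(Q)             # re-orthonormalize
        sums += np.log(np.abs(np.diag(R))) # accumulate local growth rates
    return sums / (n_steps * dt)

# Lorenz system as a test case (expected spectrum roughly [0.9, 0, -14.6])
s, r, b = 10.0, 28.0, 8.0 / 3.0
f = lambda v: np.array([s*(v[1]-v[0]), v[0]*(r-v[2])-v[1], v[0]*v[1]-b*v[2]])
jac = lambda v: np.array([[-s, s, 0.0],
                          [r - v[2], -1.0, -v[0]],
                          [v[1], v[0], -b]])
print(lyapunov_spectrum(f, jac, [1.0, 1.0, 1.0]).round(2))
```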
Calibration of RAVE distances to a large sample of Hipparcos stars
NASA Astrophysics Data System (ADS)
Francis, Charles
2013-12-01
A magnitude-limited population of 18 808 Hipparcos stars is used to calibrate distances for 52 794 RAdial Velocity Experiment (RAVE) stars, including dwarfs, giants and pre-main-sequence stars. I give treatments for a number of types of bias affecting calculation, including bias from the non-linear relationship between the quantity of interest (e.g., distance or distance modulus) and the measured quantity (parallax or visual magnitude), the Lutz-Kelker bias and bias due to variation in density of the stellar population. The use of a magnitude bound minimizes the Malmquist and the Lutz-Kelker bias, and avoids measurement bias resulting from the greater accuracy of Hipparcos parallaxes for brighter stars. The calibration is applicable to stars in 2MASS when there is some way to determine stellar class with reasonable confidence. For RAVE this is possible for hot dwarfs and using log g. The accuracy of the calibration is tested against Hipparcos stars with better than 2 per cent parallax errors, and by comparison of the RAVE velocity distribution with that of Hipparcos, and is found to improve upon previous estimates of luminosity distance. An estimate of the local standard of rest from RAVE data, (U0, V0, W0) = (14.9 ± 1.7, 15.3 ± 0.4, 6.9 ± 0.1) km s-1, shows excellent agreement with the current best estimate from extended Hipparcos compilation. The RAVE velocity distribution confirms the alignment of stellar motions with spiral structure.
NASA Astrophysics Data System (ADS)
Demir, I.; Villanueva, P.; Sermet, M. Y.
2016-12-01
Accurately measuring the surface level of a river is a vital component of environmental monitoring and modeling efforts. Reliable data points are required for calibrating the statistical models that are used for, among other things, flood prediction and model validation. While current embedded monitoring systems provide accurate measurements, the cost to replicate this current system on a large scale is prohibitively expensive, limiting the quantity of data available. In this project, we describe a new method to accurately measure river levels using smartphone sensors. We take three pictures of the same point on the river's surface and perform calculations based on the GPS location and spatial orientation of the smartphone for each picture using projected geometry. Augmented reality is used to improve the accuracy of smartphone sensor readings. This proposed implementation is significantly cheaper than existing water measuring systems while offering similar accuracy. Additionally, since the measurements are taken by sensors that are commonly found in smartphones, crowdsourcing the collection of river measurements to citizen-scientists is possible. Thus, our proposed method leads to a much higher quantity of reliable data points than currently possible at a fraction of the cost. Sample runs and an analysis of the results are included. The presentation concludes with a discussion of future work, including applications to other fields and plans to implement a fully automated system using this method in tandem with image recognition and machine learning.
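The projected-geometry step can be illustrated with the simplest single-shot case: a depression angle and a horizontal distance give the drop to the water surface. The three-photo solution in the abstract refines this; the numbers below are hypothetical.

```python
import math

def drop_to_water(pitch_deg, horiz_dist_m):
    """Vertical drop from the camera to the sighted water point, from the
    phone's depression angle and a known horizontal distance."""
    return horiz_dist_m * math.tan(math.radians(pitch_deg))

cam_elev_m = 105.30                      # hypothetical surveyed camera elevation
stage = cam_elev_m - drop_to_water(12.5, 8.0)
print(f"river stage ~ {stage:.2f} m")    # 8 m to the water point, 12.5 deg down
```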
Almansa, Julio F; Guerrero, Rafael; Torres, Javier; Lallena, Antonio M
60Co sources have been commercialized as an alternative to 192Ir sources for high-dose-rate (HDR) brachytherapy. One of them is the Flexisource Co-60 HDR source manufactured by Elekta. The only available dosimetric characterization of this source is that of Vijande et al. [J Contemp Brachytherapy 2012; 4:34-44], whose results were not included in the AAPM/ESTRO consensus document. In that work, the dosimetric quantities were calculated as averages of the results obtained with the Geant4 and PENELOPE Monte Carlo (MC) codes, though for other sources, significant differences have been quoted between the values obtained with these two codes. The aim of this work is to perform the dosimetric characterization of the Flexisource Co-60 HDR source using PENELOPE. The MC simulation code PENELOPE (v. 2014) has been used. Following the recommendations of the AAPM/ESTRO report, the radial dose function, the anisotropy function, the air-kerma strength, the dose rate constant, and the absorbed dose rate in water have been calculated. The results we have obtained exceed those of Vijande et al. In particular, the absorbed dose rate constant is ∼0.85% larger. A similar difference is also found in the other dosimetric quantities. The effect of the electrons emitted in the decay of 60Co, usually neglected in this kind of simulation, is significant up to distances of 0.25 cm from the source. The systematic and significant differences we have found between PENELOPE results and the average values found by Vijande et al. point out that the dosimetric characterizations carried out with the various MC codes should be provided independently.
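The AAPM/ESTRO quantities listed above combine in the TG-43 line-source dose-rate equation. A minimal sketch with placeholder radial dose and anisotropy functions (published table interpolations would be used in practice; parameter values are hypothetical):

```python
import math

def tg43_dose_rate(sk, lam, r_cm, theta_deg, g, F, L_cm=0.35):
    """TG-43 line-source dose rate: Ddot(r,theta) = S_K * Lambda *
    [G_L(r,theta)/G_L(1,90)] * g_L(r) * F(r,theta). Valid away from the
    source long axis (sin(theta) != 0)."""
    def G(r, th_deg):
        th = math.radians(th_deg)
        x, z = r * math.sin(th), r * math.cos(th)
        # Angle subtended at the point by the active source length
        beta = math.atan2(x, z - L_cm / 2) - math.atan2(x, z + L_cm / 2)
        return beta / (L_cm * r * math.sin(th))
    return sk * lam * G(r_cm, theta_deg) / G(1.0, 90.0) * g(r_cm) * F(r_cm, theta_deg)

# Placeholder radial dose and anisotropy functions (tables in practice)
g = lambda r: 1.0
F = lambda r, th: 1.0
# Hypothetical air-kerma strength (U) and dose-rate constant (cGy/(h*U))
print(round(tg43_dose_rate(40000.0, 1.09, r_cm=2.0, theta_deg=90.0, g=g, F=F), 1))
```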
Strong, Mark; South, Gail; Carlisle, Robin
2009-01-01
Background Accurate spirometry is important in the management of COPD. The UK Quality and Outcomes Framework pay-for-performance scheme for general practitioners includes spirometry related indicators within its COPD domain. It is not known whether high achievement against QOF spirometry indicators is associated with spirometry to BTS standards. Methods Data were obtained from the records of 3,217 patients randomly sampled from 5,649 patients with COPD in 38 general practices in Rotherham, UK. Severity of airflow obstruction was categorised by FEV1 (% predicted) according to NICE guidelines. This was compared with clinician recorded COPD severity. The proportion of patients whose spirometry met BTS standards was calculated in each practice using a random sub-sample of 761 patients. The Spearman rank correlation between practice level QOF spirometry achievement and performance against BTS spirometry standards was calculated. Results Spirometry as assessed by clinical records was to BTS standards in 31% of cases (range at practice level 0% to 74%). The categorisation of airflow obstruction according to the most recent spirometry results did not agree well with the clinical categorisation of COPD recorded in the notes (Cohen's kappa = 0.34, 0.30 – 0.38). 12% of patients on COPD registers had FEV1 (% predicted) results recorded that did not support the diagnosis of COPD. There was no association between quality, as measured by adherence to BTS spirometry standards, and either QOF COPD9 achievement (Spearman's rho = -0.11), or QOF COPD10 achievement (rho = 0.01). Conclusion The UK Quality and Outcomes Framework currently assesses the quantity, but not the quality of spirometry. PMID:19558719
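For readers unfamiliar with the two statistics reported above, a minimal sketch (with simulated stand-in data, not the study's records) of how Cohen's kappa and Spearman's rho are computed:

    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(1)

    # Agreement between spirometry-based and clinician-recorded severity
    # (0 = mild, 1 = moderate, 2 = severe); agreement is deliberately imperfect.
    spirometry_grade = rng.integers(0, 3, size=200)
    clinician_grade = np.where(rng.random(200) < 0.5,
                               spirometry_grade, rng.integers(0, 3, size=200))
    print("kappa:", cohen_kappa_score(spirometry_grade, clinician_grade))

    # Practice-level association between QOF achievement and BTS-standard rate.
    qof_achievement = rng.random(38)      # one value per practice
    bts_standard_rate = rng.random(38)
    rho, p = spearmanr(qof_achievement, bts_standard_rate)
    print("Spearman rho:", rho, "p:", p)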
A Web-Based System for Bayesian Benchmark Dose Estimation.
Shao, Kan; Shapiro, Andrew J
2018-01-11
Benchmark dose (BMD) modeling is an important step in human health risk assessment and is used as the default approach to identify the point of departure for risk assessment. A probabilistic framework for dose-response assessment has been proposed and advocated by various institutions and organizations; therefore, a reliable tool is needed to provide distributional estimates for BMD and other important quantities in dose-response assessment. We developed an online system for Bayesian BMD (BBMD) estimation and compared results from this software with U.S. Environmental Protection Agency's (EPA's) Benchmark Dose Software (BMDS). The system is built on a Bayesian framework featuring the application of Markov chain Monte Carlo (MCMC) sampling for model parameter estimation and BMD calculation, which makes the BBMD system fundamentally different from the currently prevailing BMD software packages. In addition to estimating the traditional BMDs for dichotomous and continuous data, the developed system is also capable of computing model-averaged BMD estimates. A total of 518 dichotomous and 108 continuous data sets extracted from the U.S. EPA's Integrated Risk Information System (IRIS) database (and similar databases) were used as testing data to compare the estimates from the BBMD and BMDS programs. The results suggest that the BBMD system may outperform the BMDS program in a number of aspects, including fewer failed BMD and BMDL calculations and estimates. The BBMD system is a useful alternative tool for estimating BMD with additional functionalities for BMD analysis based on most recent research. Most importantly, the BBMD has the potential to incorporate prior information to make dose-response modeling more reliable and can provide distributional estimates for important quantities in dose-response assessment, which greatly facilitates the current trend for probabilistic risk assessment. https://doi.org/10.1289/EHP1289.
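A minimal sketch of the core idea, not the BBMD system itself: a hand-rolled Metropolis sampler on a dichotomous log-logistic dose-response model with flat priors and invented data, from which a posterior for the BMD (10% extra risk) and a BMDL follow.

    import numpy as np

    doses = np.array([0.0, 10.0, 50.0, 100.0, 400.0])   # illustrative study
    n = np.array([50, 50, 50, 50, 50])
    events = np.array([2, 4, 11, 15, 30])

    def log_post(theta):
        g, a, b = theta                      # background, intercept, slope
        if not (0.0 < g < 1.0 and b > 0.0):
            return -np.inf
        p = g + (1 - g) / (1 + np.exp(-(a + b * np.log(np.where(doses > 0, doses, 1.0)))))
        p = np.where(doses > 0, p, g)        # dose 0 responds at background rate
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return np.sum(events * np.log(p) + (n - events) * np.log1p(-p))

    rng = np.random.default_rng(2)
    theta, lp, chain = np.array([0.05, -5.0, 1.0]), -np.inf, []
    for _ in range(20_000):
        prop = theta + rng.normal(0.0, [0.01, 0.2, 0.05])
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta.copy())

    a, b = np.array(chain)[10_000:, 1], np.array(chain)[10_000:, 2]
    bmd = np.exp((np.log(0.1 / 0.9) - a) / b)   # invert the extra-risk equation
    print("BMD median:", np.median(bmd), "BMDL (5th percentile):", np.percentile(bmd, 5))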
Normalized vertical ice mass flux profiles from vertically pointing 8-mm-wavelength Doppler radar
NASA Technical Reports Server (NTRS)
Orr, Brad W.; Kropfli, Robert A.
1993-01-01
During the FIRE 2 (First International Satellite Cloud Climatology Project Regional Experiment) project, NOAA's Wave Propagation Laboratory (WPL) operated its 8-mm wavelength Doppler radar extensively in the vertically pointing mode. This allowed for the calculation of a number of important cirrus cloud parameters, including cloud boundary statistics, cloud particle characteristic sizes and concentrations, and ice mass content (imc). The flux of imc, or, alternatively, ice mass flux (imf), is also an important parameter of a cirrus cloud system. Ice mass flux is important in the vertical redistribution of water substance and thus, in part, determines the cloud evolution. It is important for the development of cloud parameterizations to be able to define the essential physical characteristics of large populations of clouds in the simplest possible way. One method would be to normalize profiles of observed cloud properties, such as those mentioned above, in ways similar to those used in the convective boundary layer. The height then scales from 0.0 at cloud base to 1.0 at cloud top, and the measured cloud parameter scales by its maximum value so that all normalized profiles have 1.0 as their maximum value. The goal is that there will be a 'universal' shape to profiles of the normalized data. This idea was applied to estimates of imf calculated from data obtained by the WPL cloud radar during FIRE II. Other quantities such as median particle diameter, concentration, and ice mass content can also be estimated with this radar, and we expect to also examine normalized profiles of these quantities in time for the 1993 FIRE II meeting.
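The normalization described is simple to state in code; a minimal sketch with made-up profile values:

    # Height rescaled to run from 0 at cloud base to 1 at cloud top, and the
    # profile rescaled by its maximum so every normalized profile peaks at 1.
    import numpy as np

    def normalize_profile(heights_km, values, cloud_base_km, cloud_top_km):
        z_norm = (np.asarray(heights_km) - cloud_base_km) / (cloud_top_km - cloud_base_km)
        v = np.asarray(values, dtype=float)
        return z_norm, v / v.max()

    # Example: an ice-mass-flux profile between a 7 km base and an 11 km top.
    z, f = normalize_profile([7.0, 8.0, 9.0, 10.0, 11.0],
                             [0.1, 0.6, 1.2, 0.8, 0.2], 7.0, 11.0)
    print(z, f)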
a Protocol for High-Accuracy Theoretical Thermochemistry
NASA Astrophysics Data System (ADS)
Welch, Bradley; Dawes, Richard
2017-06-01
Theoretical studies of spectroscopy and reaction dynamics including the necessary development of potential energy surfaces rely on accurate thermochemical information. The Active Thermochemical Tables (ATcT) approach by Ruscic^{1} incorporates data for a large number of chemical species from a variety of sources (both experimental and theoretical) and derives a self-consistent network capable of making extremely accurate estimates of quantities such as temperature dependent enthalpies of formation. The network provides rigorous uncertainties, and since the values don't rely on a single measurement or calculation, the provenance of each quantity is also obtained. To expand and improve the network it is desirable to have a reliable protocol such as the HEAT approach^{2} for calculating accurate theoretical data. Here we present and benchmark an approach based on explicitly-correlated coupled-cluster theory and vibrational perturbation theory (VPT2). Methyldioxy and Methyl Hydroperoxide are important and well-characterized species in combustion processes and begin the family of (ethyl-, propyl-based, etc) similar compounds (much less is known about the larger members). Accurate anharmonic frequencies are essential to accurately describe even the 0 K enthalpies of formation, but are especially important for finite temperature studies. Here we benchmark the spectroscopic and thermochemical accuracy of the approach, comparing with available data for the smallest systems, and comment on the outlook for larger systems that are less well-known and characterized. ^{1}B. Ruscic, Active Thermochemical Tables (ATcT) values based on ver. 1.118 of the Thermochemical Network (2015); available at ATcT.anl.gov ^{2}A. Tajti, P. G. Szalay, A. G. Császár, M. Kállay, J. Gauss, E. F. Valeev, B. A. Flowers, J. Vázquez, and J. F. Stanton. JCP 121, (2004): 11599.
Optimization of the Multi-Spectral Euclidean Distance Calculation for FPGA-based Spaceborne Systems
NASA Technical Reports Server (NTRS)
Cristo, Alejandro; Fisher, Kevin; Perez, Rosa M.; Martinez, Pablo; Gualtieri, Anthony J.
2012-01-01
Due to the high quantity of operations that spaceborne processing systems must carry out in space, new methodologies and techniques are being presented as good alternatives in order to free the main processor from work and improve the overall performance. These include the development of ancillary dedicated hardware circuits that carry out the more redundant and computationally expensive operations in a faster way, leaving the main processor free to carry out other tasks while waiting for the result. One of these devices is SpaceCube, a FPGA-based system designed by NASA. The opportunity to use FPGA reconfigurable architectures in space allows not only the optimization of the mission operations with hardware-level solutions, but also the ability to create new and improved versions of the circuits, including error corrections, once the satellite is already in orbit. In this work, we propose the optimization of a common operation in remote sensing: the Multi-Spectral Euclidean Distance calculation. For that, two different hardware architectures have been designed and implemented in a Xilinx Virtex-5 FPGA, the same model of FPGAs used by SpaceCube. Previous results have shown that the communications between the embedded processor and the circuit create a bottleneck that affects the overall performance in a negative way. In order to avoid this, advanced methods including memory sharing, Native Port Interface (NPI) connections and Data Burst Transfers have been used.
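For reference, the operation being moved into hardware, sketched in software with a hypothetical band count:

    # Multi-Spectral Euclidean Distance between each pixel's spectral vector
    # and a reference spectrum, over every pixel of a cube.
    import numpy as np

    def msed(cube: np.ndarray, reference: np.ndarray) -> np.ndarray:
        """cube: (rows, cols, bands); reference: (bands,). Returns (rows, cols)."""
        diff = cube - reference            # broadcast over spatial dimensions
        return np.sqrt(np.sum(diff * diff, axis=-1))

    cube = np.random.rand(4, 4, 224)       # hypothetical 224-band image
    ref = np.random.rand(224)
    print(msed(cube, ref).shape)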
Nucleon form factors with 2+1 flavor dynamical domain-wall fermions
NASA Astrophysics Data System (ADS)
Yamazaki, Takeshi; Aoki, Yasumichi; Blum, Tom; Lin, Huey-Wen; Ohta, Shigemi; Sasaki, Shoichi; Tweedie, Robert; Zanotti, James
2009-06-01
We report our numerical lattice QCD calculations of the isovector nucleon form factors for the vector and axial-vector currents: the vector, induced tensor, axial-vector, and induced pseudoscalar form factors. The calculation is carried out with the gauge configurations generated with Nf=2+1 dynamical domain-wall fermions and the Iwasaki gauge action at β=2.13, corresponding to a cutoff a-1=1.73GeV, and a spatial volume of (2.7fm)3. The up- and down-quark masses are varied so the pion mass lies between 0.33 and 0.67 GeV, while the strange quark mass is about 12% heavier than the physical one. We calculate the form factors in the range of momentum transfers, 0.2
Beck, P; Latocha, M; Dorman, L; Pelliccioni, M; Rollet, S
2007-01-01
As required by the European Directive 96/29/Euratom, radiation exposure due to natural ionizing radiation has to be taken into account at workplaces if the effective dose could become more than 1 mSv per year. An example of workers concerned by this directive is aircraft crew, due to cosmic radiation exposure in the atmosphere. Extensive measurement campaigns on board aircraft have been carried out to assess ambient dose equivalent. A consortium of European dosimetry institutes within EURADOS WG5 summarized experimental data and results of calculations, together with detailed descriptions of the methods for measurements and calculations. The radiation protection quantity of interest is the effective dose, E (ISO). The comparison of results by measurements and calculations is done in terms of the operational quantity ambient dose equivalent, H*(10). This paper gives an overview of the EURADOS Aircraft Crew In-Flight Database and presents a new empirical model describing fitting functions for these data. Furthermore, it describes numerical simulations performed with the Monte Carlo code FLUKA-2005 using an updated version of the cosmic radiation primary spectra. The ratio between ambient dose equivalent and effective dose at commercial flight altitudes, calculated with FLUKA-2005, is discussed. Finally, it presents the aviation dosimetry model AVIDOS, based on FLUKA-2005 simulations, for routine dose assessment. The code has been developed by Austrian Research Centers (ARC) for public use (http://avidos.healthphysics.at).
Microfluidic devices, systems, and methods for quantifying particles using centrifugal force
Schaff, Ulrich Y.; Sommer, Gregory J.; Singh, Anup K.
2015-11-17
Embodiments of the present invention are directed toward microfluidic systems, apparatus, and methods for measuring a quantity of cells in a fluid. Examples include a differential white blood cell measurement using a centrifugal microfluidic system. A method may include introducing a fluid sample containing a quantity of cells into a microfluidic channel defined in part by a substrate. The quantity of cells may be transported toward a detection region defined in part by the substrate, wherein the detection region contains a density media, and wherein the density media has a density lower than a density of the cells and higher than a density of the fluid sample. The substrate may be spun such that at least a portion of the quantity of cells are transported through the density media. Signals may be detected from label moieties affixed to the cells.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, John C; Peplow, Douglas E.; Mosher, Scott W
2014-01-01
This paper presents a new hybrid (Monte Carlo/deterministic) method for increasing the efficiency of Monte Carlo calculations of distributions, such as flux or dose rate distributions (e.g., mesh tallies), as well as responses at multiple localized detectors and spectra. This method, referred to as Forward-Weighted CADIS (FW-CADIS), is an extension of the Consistent Adjoint Driven Importance Sampling (CADIS) method, which has been used for more than a decade to very effectively improve the efficiency of Monte Carlo calculations of localized quantities, e.g., flux, dose, or reaction rate at a specific location. The basis of this method is the development of an importance function that represents the importance of particles to the objective of uniform Monte Carlo particle density in the desired tally regions. Implementation of this method utilizes the results from a forward deterministic calculation to develop a forward-weighted source for a deterministic adjoint calculation. The resulting adjoint function is then used to generate consistent space- and energy-dependent source biasing parameters and weight windows that are used in a forward Monte Carlo calculation to obtain more uniform statistical uncertainties in the desired tally regions. The FW-CADIS method has been implemented and demonstrated within the MAVRIC sequence of SCALE and the ADVANTG/MCNP framework. Application of the method to representative, real-world problems, including calculation of dose rate and energy dependent flux throughout the problem space, dose rates in specific areas, and energy spectra at multiple detectors, is presented and discussed. Results of the FW-CADIS method and other recently developed global variance reduction approaches are also compared, and the FW-CADIS method outperformed the other methods in all cases considered.
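In schematic form (our notation, hedged from the FW-CADIS literature rather than quoted from this paper), the forward deterministic flux phi weights the adjoint source so that the adjoint importance targets uniform relative uncertainty of a response with response function sigma_d:

    % Schematic FW-CADIS adjoint source; sigma_d is the response function
    % and phi the forward deterministic flux.
    q^{\dagger}(\vec{r},E) =
        \frac{\sigma_d(\vec{r},E)}{\int \sigma_d(\vec{r},E')\,\phi(\vec{r},E')\,dE'}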
NASA Astrophysics Data System (ADS)
Fiorentini, Raffaele; Kremer, Kurt; Potestio, Raffaello; Fogarty, Aoife C.
2017-06-01
The calculation of free energy differences is a crucial step in the characterization and understanding of the physical properties of biological molecules. In the development of efficient methods to compute these quantities, a promising strategy is that of employing a dual-resolution representation of the solvent, specifically using an accurate model in the proximity of a molecule of interest and a simplified description elsewhere. One such concurrent multi-resolution simulation method is the Adaptive Resolution Scheme (AdResS), in which particles smoothly change their resolution on-the-fly as they move between different subregions. Before using this approach in the context of free energy calculations, however, it is necessary to make sure that the dual-resolution treatment of the solvent does not cause undesired effects on the computed quantities. Here, we show how AdResS can be used to calculate solvation free energies of small polar solutes using Thermodynamic Integration (TI). We discuss how the potential-energy-based TI approach combines with the force-based AdResS methodology, in which no global Hamiltonian is defined. The AdResS free energy values agree with those calculated from fully atomistic simulations to within a fraction of kBT. This is true even for small atomistic regions whose size is on the order of the correlation length, or when the properties of the coarse-grained region are extremely different from those of the atomistic region. These accurate free energy calculations are possible because AdResS allows the sampling of solvation shell configurations which are equivalent to those of fully atomistic simulations. The results of the present work thus demonstrate the viability of the use of adaptive resolution simulation methods to perform free energy calculations and pave the way for large-scale applications where a substantial computational gain can be attained.
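A minimal sketch of the TI quadrature itself (generic, with invented window averages; the AdResS-specific force machinery is not reproduced here):

    # Free energy difference as the integral over lambda of the
    # ensemble-averaged dU/dlambda, one window per simulation.
    import numpy as np

    lambdas = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
    dudl_means = np.array([-45.1, -30.2, -18.7, -9.8, -3.4])  # made-up values

    delta_F = np.trapz(dudl_means, lambdas)
    print("Solvation free energy estimate:", delta_F)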
NASA Astrophysics Data System (ADS)
Ortiz-Rodríguez, J. M.; Reyes Alfaro, A.; Reyes Haro, A.; Solís Sánches, L. O.; Miranda, R. Castañeda; Cervantes Viramontes, J. M.; Vega-Carrillo, H. R.
2013-07-01
In this work the performance of two neutron spectrum unfolding codes, one based on iterative procedures and one on artificial neural networks, is evaluated. The first code, based on traditional iterative procedures and called Neutron spectrometry and dosimetry from the Universidad Autonoma de Zacatecas (NSDUAZ), uses the SPUNIT iterative algorithm and was designed to unfold the neutron spectrum and calculate 15 dosimetric quantities and 7 IAEA survey meter responses. The main feature of this code is the automated selection of the initial guess spectrum through a compendium of neutron spectra compiled by the IAEA. The second code, known as Neutron spectrometry and dosimetry with artificial neural networks (NSDann), is designed using neural network technology. The artificial intelligence approach of the neural network does not solve mathematical equations: using the knowledge stored in the synaptic weights of a properly trained neural network, the code can unfold the neutron spectrum and simultaneously calculate 15 dosimetric quantities, needing as input only the count rates measured with a Bonner sphere system. The two codes are similar in that they follow the same easy and intuitive user philosophy and were designed in a graphical interface under the LabVIEW programming environment; both unfold the neutron spectrum expressed in 60 energy bins, calculate 15 dosimetric quantities and generate a full report in HTML format. They differ in that NSDUAZ was designed using classical iterative approaches, needs an initial guess spectrum to initiate the iterative procedure, and includes a routine that calculates the responses of 7 IAEA instrument survey meters using fluence-to-dose conversion coefficients, whereas NSDann uses artificial neural networks to solve the ill-conditioned equation system of the neutron spectrometry problem through the synaptic weights of a properly trained network. Contrary to iterative procedures, the neural network approach makes it possible to reduce the count rates used to unfold the neutron spectrum. To evaluate these codes, a computer tool called Neutron Spectrometry and dosimetry computer tool was designed; the results obtained with this package are shown. The codes mentioned here are freely available upon request to the authors.
Takahasi Nearest-Neighbour Gas Revisited II: Morse Gases
NASA Astrophysics Data System (ADS)
Matsumoto, Akira
2011-12-01
Some thermodynamic quantities for the Morse potential are analytically evaluated for an isobaric process. The parameters of Morse gases for 21 substances are obtained from second virial coefficient data and the spectroscopic data of diatomic molecules. Some thermodynamic quantities for water are also calculated numerically and drawn graphically. The inflexion point of the length L, which depends on temperature T and pressure P, corresponds physically to a boiling point: L indicates the liquid phase from lower temperature to the inflexion point and the gaseous phase from the inflexion point to higher temperature. The calculated boiling temperatures are reasonable compared with experimental data. The behaviour of L suggests the possibility of a first-order phase transition in one dimension.
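For reference, the textbook Morse potential underlying these "Morse gases" (a standard definition, not taken from the paper):

    % Morse potential: D_e is the well depth, r_e the equilibrium
    % separation, and a the inverse-width parameter.
    V(r) = D_e \left(1 - e^{-a (r - r_e)}\right)^{2}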
A Dynamic Bayesian Network Model for the Production and Inventory Control
NASA Astrophysics Data System (ADS)
Shin, Ji-Sun; Takazaki, Noriyuki; Lee, Tae-Hong; Kim, Jin-Il; Lee, Hee-Hyol
In general, the production quantities and delivered goods change randomly, and then the total stock also changes randomly. This paper deals with production and inventory control using a Dynamic Bayesian Network. A Bayesian Network is a probabilistic model which represents the qualitative dependence between two or more random variables by the graph structure, and indicates the quantitative relations between individual variables by conditional probabilities. The probabilistic distribution of the total stock is calculated through the propagation of the probability on the network. Moreover, a rule for adjusting the production quantities so as to hold the probabilities of the total stock meeting a lower limit and a ceiling at certain values is shown.
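A minimal sketch of the underlying bookkeeping (ours, not the paper's network): with independent discrete distributions for production and deliveries, the next-period stock distribution is a signed convolution with the current stock distribution.

    # Each argument is a dict {quantity: probability}; the result is the
    # distribution of stock + production - delivery. Values are invented.
    def next_stock_dist(stock_p, prod_p, deliv_p):
        out = {}
        for s, ps in stock_p.items():
            for q, pq in prod_p.items():
                for d, pd in deliv_p.items():
                    out[s + q - d] = out.get(s + q - d, 0.0) + ps * pq * pd
        return out

    stock = {10: 1.0}
    production = {4: 0.3, 5: 0.4, 6: 0.3}
    delivery = {3: 0.5, 7: 0.5}
    print(next_stock_dist(stock, production, delivery))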
Lattice QCD and heavy ion collisions: a review of recent progress.
Ratti, Claudia
2018-04-04
In the last few years, numerical simulations of QCD on the lattice have reached a new level of accuracy. A wide range of thermodynamic quantities is now available in the continuum limit and for physical quark masses. This allows a comparison with measurements from heavy ion collisions for the first time. Furthermore, calculations of dynamical quantities are also becoming available. The combined effort from first principles and experiment allows us to gain an unprecedented understanding of the properties of quark-gluon plasma. I will review the state-of-the-art results from lattice simulations and connect them to the experimental information from RHIC and the LHC. © 2018 IOP Publishing Ltd.
Weak correlations between local density and dynamics near the glass transition.
Conrad, J C; Starr, F W; Weitz, D A
2005-11-17
We perform experiments on two different dense colloidal suspensions with confocal microscopy to probe the relationship between local structure and dynamics near the glass transition. We calculate the Voronoi volume for our particles and show that this quantity is not a universal probe of glassy structure for all colloidal suspensions. We correlate the Voronoi volume to displacement and find that these quantities are only weakly correlated. We observe qualitatively similar results in a simulation of a polymer melt. These results suggest that the Voronoi volume does not predict dynamical behavior in experimental colloidal suspensions; a purely structural approach based on local single particle volume likely cannot describe the colloidal glass transition.
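A minimal sketch of the local-volume measure used above, for the bounded cells of a random configuration (confocal data would additionally need the boundary handling the abstract does not detail):

    import numpy as np
    from scipy.spatial import Voronoi, ConvexHull

    points = np.random.rand(500, 3)          # illustrative particle positions
    vor = Voronoi(points)

    volumes = {}
    for i, region_index in enumerate(vor.point_region):
        region = vor.regions[region_index]
        if -1 in region or len(region) == 0:  # skip unbounded cells at the edges
            continue
        volumes[i] = ConvexHull(vor.vertices[region]).volume
    print(len(volumes), "bounded cells; mean volume:", np.mean(list(volumes.values())))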
Flow analysis system and method
NASA Technical Reports Server (NTRS)
Hill, Wayne S. (Inventor); Barck, Bruce N. (Inventor)
1998-01-01
A non-invasive flow analysis system and method wherein a sensor, such as an acoustic sensor, is coupled to a conduit for transmitting a signal which varies depending on the characteristics of the flow in the conduit. The signal is amplified and there is a filter, responsive to the sensor signal, and tuned to pass a narrow band of frequencies proximate the resonant frequency of the sensor. A demodulator generates an amplitude envelope of the filtered signal and a number of flow indicator quantities are calculated based on variations in amplitude of the amplitude envelope. A neural network, or its equivalent, is then used to determine the flow rate of the flow in the conduit based on the flow indicator quantities.
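A minimal software sketch of the signal chain described (the patent covers hardware; the resonant frequency, band, and sample rate here are invented):

    # Band-pass filter near the sensor's resonance, then demodulate an
    # amplitude envelope and compute simple flow indicator quantities.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 50_000.0                      # sample rate, Hz (assumed)
    f0 = 5_000.0                       # sensor resonant frequency, Hz (assumed)

    t = np.arange(0, 1.0, 1 / fs)
    raw = (np.sin(2 * np.pi * f0 * t) * (1 + 0.3 * np.sin(2 * np.pi * 7 * t))
           + 0.5 * np.random.randn(t.size))   # resonance modulated by flow + noise

    b, a = butter(4, [f0 * 0.95, f0 * 1.05], btype="bandpass", fs=fs)
    narrow = filtfilt(b, a, raw)
    envelope = np.abs(hilbert(narrow))  # demodulated amplitude envelope

    # Example "flow indicator quantities" from variations in the envelope:
    print("mean:", envelope.mean(), "std:", envelope.std())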
NASA Astrophysics Data System (ADS)
Lider, M. C.; Yurtseven, H.
2018-05-01
The resonant frequency shifts are related to the thermodynamic quantities (compressibility, order parameter and susceptibility) for the α-β transition in quartz. The experimental data for the resonant frequencies and the bulk modulus from the literature are used for those correlations. By calculating the order parameter from the mean field theory, correlation between the resonant frequencies of various modes and the order parameter is examined according to the quasi-harmonic phonon theory for the α-β transition in quartz. Also, correlation between the bulk modulus in relation to the resonant frequency shifts and the order parameter susceptibility is constructed for the α-β transition in this crystalline system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
L.Y. Dodin and N.J. Fisch
2012-06-18
By restating geometrical optics within the field-theoretical approach, the classical concept of a photon in arbitrary dispersive medium is introduced, and photon properties are calculated unambiguously. In particular, the canonical and kinetic momenta carried by a photon, as well as the two corresponding energy-momentum tensors of a wave, are derived straightforwardly from first principles of Lagrangian mechanics. The Abraham-Minkowski controversy pertaining to the definitions of these quantities is thereby resolved for linear waves of arbitrary nature, and corrections to the traditional formulas for the photon kinetic quantities are found. An application of axiomatic geometrical optics to electromagnetic waves is also presented as an example.
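For context, the textbook momenta at issue in the Abraham-Minkowski controversy, for a photon of frequency omega in a nondispersive medium of refractive index n (standard forms; the paper derives the corrections that dispersion introduces):

    p_{\mathrm{Minkowski}} = \frac{n \hbar \omega}{c},
    \qquad
    p_{\mathrm{Abraham}} = \frac{\hbar \omega}{n c}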
Bracken, Robert E.; Brown, Philip J.
2006-01-01
On March 12, 2003, data were gathered at Yuma Proving Grounds, in Arizona, using a Tensor Magnetic Gradiometer System (TMGS). This report shows how these data were processed and explains concepts required for successful TMGS data reduction. Important concepts discussed include extreme attitudinal sensitivity of vector measurements, low attitudinal sensitivity of gradient measurements, leakage of the common-mode field into gradient measurements, consequences of thermal drift, and effects of field curvature. Spatial-data collection procedures and a spin-calibration method are addressed. Discussions of data-reduction procedures include tracking of axial data by mathematically matching transfer functions among the axes, derivation and application of calibration coefficients, calculation of sensor-pair gradients, thermal-drift corrections, and gradient collocation. For presentation, the magnetic tensor at each data station is converted to a scalar quantity, the I2 tensor invariant, which is easily found by calculating the determinant of the tensor. At important processing junctures, the determinants for all stations in the mapped area are shown in shaded relief map-view. Final processed results are compared to a mathematical model to show the validity of the assumptions made during processing and the reasonableness of the ultimate answer obtained.
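The presentation step is compact in code; a minimal sketch with an invented gradient tensor:

    # The I2 invariant is taken here, as in the report, as the determinant
    # of the magnetic gradient tensor at each station.
    import numpy as np

    def i2_invariant(gradient_tensor: np.ndarray) -> float:
        """gradient_tensor: 3x3 array of field gradients."""
        return float(np.linalg.det(gradient_tensor))

    G = np.array([[ 1.2,  0.3, -0.1],
                  [ 0.3, -0.7,  0.4],
                  [-0.1,  0.4, -0.5]])   # illustrative: symmetric, zero trace
    print(i2_invariant(G))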
Massive photons: An infrared regularization scheme for lattice QCD + QED
Endres, Michael G.; Shindler, Andrea; Tiburzi, Brian C.; ...
2016-08-10
The commonly adopted approach for including electromagnetic interactions in lattice QCD simulations relies on using finite volume as the infrared regularization for QED. The long-range nature of the electromagnetic interaction, however, implies that physical quantities are susceptible to power-law finite volume corrections, which must be removed by performing costly simulations at multiple lattice volumes, followed by an extrapolation to the infinite volume limit. In this work, we introduce a photon mass as an alternative means for gaining control over infrared effects associated with electromagnetic interactions. We present findings for hadron mass shifts due to electromagnetic interactions (i.e., for the proton, neutron, charged and neutral kaon) and corresponding mass splittings, and compare the results with those obtained from conventional QCD+QED calculations. Results are reported for numerical studies of three flavor electroquenched QCD using ensembles corresponding to 800 MeV pions, ensuring that the only appreciable volume corrections arise from QED effects. The calculations are performed with three lattice volumes with spatial extents ranging from 3.4 - 6.7 fm. As a result, we find that for equal computing time (not including the generation of the lattice configurations), the electromagnetic mass shifts can be extracted from computations on a single (our smallest) lattice volume with comparable or better precision than the conventional approach.
Impact of High Mathematics Education on the Number Sense
Castronovo, Julie; Göbel, Silke M.
2012-01-01
In adult number processing two mechanisms are commonly used: approximate estimation of quantity and exact calculation. While the former relies on the approximate number sense (ANS), which we share with animals and preverbal infants, the latter has been proposed to rely on an exact number system (ENS), which develops later in life following the acquisition of symbolic number knowledge. The current study investigated the influence of high-level math education on the ANS and the ENS. Our results showed that the precision of non-symbolic quantity representation was not significantly altered by high-level math education. However, performance in a symbolic number comparison task, as well as the ability to map accurately between symbolic and non-symbolic quantities, was significantly better with higher mathematics achievement. Our findings suggest that high-level math education in adults shows little influence on their ANS, but it seems to be associated with a better anchored ENS and better mapping abilities between ENS and ANS. PMID:22558077
Magneto-acupuncture stimuli effects on ultraweak photon emission from hands of healthy persons.
Park, Sang-Hyun; Kim, Jungdae; Koo, Tae-Hoi
2009-03-01
We investigated ultraweak photon emission from the hands of 45 healthy persons before and after magneto-acupuncture stimuli. Photon emissions were measured using two photomultiplier tubes in the UV and visible spectral range. Several statistical quantities, such as the average intensity, the standard deviation, the delta-value, and the degree of asymmetry, were calculated from the measurements of photon emission before and after the magneto-acupuncture stimuli. The distributions of these quantities for the magneto-acupuncture group were more differentiable than those of the groups without any stimuli and with sham magnets. We also analyzed the effects of magneto-acupuncture stimuli on photon emission through a year-long measurement for two subjects; compared with the group study above, the individuality of these subjects increased the differences in photon emission before and after magnetic stimuli. The changes in the ultraweak photon emission rates of the hand for the magnet group were detected conclusively in the averages and standard deviations.
Hydrocarbon-enhanced particulate filter regeneration via microwave ignition
Gonze, Eugene V.; Brown, David B.
2010-02-02
A regeneration method for a particulate filter includes estimating a quantity of particulate matter trapped within the particulate filter, comparing the quantity of particulate matter to a predetermined quantity, heating at least a portion of the particulate filter to a combustion temperature of the particulate matter, and introducing hydrocarbon fuel to the particulate filter. The hydrocarbon fuel facilitates combustion of the particulate matter to regenerate the particulate filter.
NASA Astrophysics Data System (ADS)
Butler, Jason E.; Shaqfeh, Eric S. G.
2005-01-01
Using methods adapted from the simulation of suspension dynamics, we have developed a Brownian dynamics algorithm with multibody hydrodynamic interactions for simulating the dynamics of polymer molecules. The polymer molecule is modeled as a chain composed of a series of inextensible, rigid rods with constraints at each joint to ensure continuity of the chain. The linear and rotational velocities of each segment of the polymer chain are described by the slender-body theory of Batchelor [J. Fluid Mech. 44, 419 (1970)]. To include hydrodynamic interactions between the segments of the chain, the line distribution of forces on each segment is approximated by making a Legendre polynomial expansion of the disturbance velocity on the segment, where the first two terms of the expansion are retained in the calculation. Thus, the resulting linear force distribution is specified by a center of mass force, couple, and stresslet on each segment. This method for calculating the hydrodynamic interactions has been successfully used to simulate the dynamics of noncolloidal suspensions of rigid fibers [O. G. Harlen, R. R. Sundararajakumar, and D. L. Koch, J. Fluid Mech. 388, 355 (1999); J. E. Butler and E. S. G. Shaqfeh, J. Fluid Mech. 468, 204 (2002)]. The longest relaxation time and center of mass diffusivity are among the quantities calculated with the simulation technique. Comparisons are made for different levels of approximation of the hydrodynamic interactions, including multibody interactions, two-body interactions, and the "freely draining" case with no interactions. For the short polymer chains studied in this paper, the results indicate a difference in the apparent scaling of diffusivity with polymer length for the multibody versus two-body level of approximation for the hydrodynamic interactions.
Ashraf, Chowdhury; Jain, Abhishek; Xuan, Yuan; van Duin, Adri C T
2017-02-15
In this paper, we present the first atomistic-scale based method for calculating ignition front propagation speed and hypothesize that this quantity is related to laminar flame speed. This method is based on atomistic-level molecular dynamics (MD) simulations with the ReaxFF reactive force field. Results reported in this study are for supercritical (P = 55 MPa and Tu = 1800 K) combustion of hydrocarbons, as elevated pressure and temperature are required to accelerate the dynamics for reactive MD simulations. These simulations are performed for different types of hydrocarbons, including alkyne, alkane, and aromatic, and are able to successfully reproduce the experimental trend of reactivity of these hydrocarbons. Moreover, our results indicate that the ignition front propagation speed under supercritical conditions has a strong dependence on equivalence ratio, similar to experimentally measured flame speeds at lower temperatures and pressures, which supports our hypothesis that ignition front speed is a quantity related to laminar flame speed. In addition, comparisons between results obtained from ReaxFF simulation and continuum simulations performed under similar conditions show good qualitative, and reasonable quantitative, agreement. This demonstrates that ReaxFF based MD-simulations are a promising tool to study flame speed/ignition front speed in supercritical hydrocarbon combustion.
Analysis of the cylinder’s movement characteristics after entering water based on CFD
NASA Astrophysics Data System (ADS)
Liu, Xianlong
2017-10-01
The cylinder undergoes variable-speed motion after vertical water entry. Dynamic-mesh approaches mostly rely on unstructured grids, and their calculation results are not ideal while consuming huge computing resources. Here, a CFD method is used to calculate the resistance of the cylinder at a set of fixed velocities, cubic spline interpolation is used to obtain the resistance at intermediate speeds, and the finite difference method is used to solve the motion equation, yielding the acceleration, velocity, displacement and other physical quantities after the cylinder enters the water.
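A minimal sketch of the described procedure, with invented drag values standing in for the CFD results and an assumed net body force:

    # Spline-interpolate drag sampled at fixed speeds, then step the motion
    # equation m dv/dt = m g_eff - F(v) with an explicit finite difference.
    import numpy as np
    from scipy.interpolate import CubicSpline

    speeds = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])        # m/s, CFD sample points
    drag = np.array([0.0, 8.0, 30.0, 65.0, 115.0, 180.0])    # N, illustrative
    F = CubicSpline(speeds, drag)

    m = 10.0                       # kg (assumed)
    g_eff = 9.81 * 0.4             # gravity minus buoyancy, per unit mass (assumed)
    dt, v, z = 1e-3, 6.0, 0.0      # entry speed 6 m/s downward
    for step in range(2000):
        a = g_eff - F(min(v, 5.0)) / m    # clamp to the sampled speed range
        v += a * dt
        z += v * dt
    print("velocity:", v, "m/s, displacement:", z, "m after", 2000 * dt, "s")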
Pellis, Lorenzo; Ball, Frank; Trapman, Pieter
2012-01-01
The basic reproduction number R0 is one of the most important quantities in epidemiology. However, for epidemic models with explicit social structure involving small mixing units such as households, its definition is not straightforward and a wealth of other threshold parameters has appeared in the literature. In this paper, we use branching processes to define R0, we apply this definition to models with households or other more complex social structures and we provide methods for calculating it. PMID:22085761
Binary data corruption due to a Brownian agent
NASA Astrophysics Data System (ADS)
Newman, T. J.; Triampo, Wannapong
1999-05-01
We introduce a model of binary data corruption induced by a Brownian agent (active random walker) on a d-dimensional lattice. A continuum formulation allows the exact calculation of several quantities related to the density of corrupted bits ρ, for example, the mean of ρ and the density-density correlation function. Excellent agreement is found with the results from numerical simulations. We also calculate the probability distribution of ρ in d=1, which is found to be log normal, indicating that the system is governed by extreme fluctuations.
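A minimal simulation sketch of one variant of the model in d = 1, a walker that flips each bit it visits (the paper's results are exact continuum calculations, not simulations like this):

    import numpy as np

    rng = np.random.default_rng(3)
    L, steps = 1001, 20_000
    bits = np.zeros(L, dtype=int)
    x = L // 2
    for _ in range(steps):
        bits[x] ^= 1                      # walker corrupts (toggles) the bit
        x = (x + rng.choice((-1, 1))) % L # unbiased step on a periodic lattice
    print("density of corrupted bits:", bits.mean())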
A weight modification sequential method for VSC-MTDC power system state estimation
NASA Astrophysics Data System (ADS)
Yang, Xiaonan; Zhang, Hao; Li, Qiang; Guo, Ziming; Zhao, Kun; Li, Xinpeng; Han, Feng
2017-06-01
This paper presents an effective sequential approach based on weight modification for VSC-MTDC power system state estimation, called the weight modification sequential method. The proposed approach simplifies the AC/DC system state estimation algorithm by modifying the weights of the state quantities so as to keep the matrix dimension constant. The weight modification sequential method also makes the VSC-MTDC system state estimation results more accurate and increases the speed of calculation. The effectiveness of the proposed method is demonstrated and validated on a modified IEEE 14 bus system.
A discussion on leading renormalon in the pole mass
NASA Astrophysics Data System (ADS)
Komijani, J.
2017-08-01
Perturbative series of some quantities in quantum field theories, such as the pole mass of a quark, suffer from a kind of divergence called renormalon divergence. In this paper, the leading renormalon in the pole mass is investigated, and a map is introduced to suppress this renormalon. The inverse of the map is then used to generate the leading renormalon and obtain an expression to calculate its overall normalization. Finally, the overall normalization of the leading renormalon of the pole mass is calculated for several values of quark flavors.
Lattice field theory applications in high energy physics
NASA Astrophysics Data System (ADS)
Gottlieb, Steven
2016-10-01
Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.
ERIC Educational Resources Information Center
Muehlberg, Jessica Marie
2013-01-01
Adelman (2006) observed that a large quantity of research on retention is "institution-specific or use institutional characteristics as independent variables" (p. 81). However, he observed that over 60% of the students he studied attended multiple institutions making the calculation of institutional effects highly problematic. He argued…
A School Experiment in Kinematics: Shooting from a Ballistic Cart
ERIC Educational Resources Information Center
Kranjc, T.; Razpet, N.
2011-01-01
Many physics textbooks start with kinematics. In the lab, students observe the motions, describe and make predictions, and get acquainted with basic kinematics quantities and their meaning. Then they can perform calculations and compare the results with experimental findings. In this paper we describe an experiment that is not often done, but is…
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-03
... zero or the lowest Minimum Trading Increment or (ii) the Expanded Quote Range has been calculated as zero. The proposal codifies existing functionality during the Exchange's Opening Process. Specifically... either zero or the lowest Minimum Trading Increment and market order sell interest has a quantity greater...
7 CFR Exhibit D to Subpart A of... - Thermal Performance Construction Standards
Code of Federal Regulations, 2013 CFR
2013-01-01
... floor insulation, the total heat loss attributed to the floor from the heated area shall not exceed the heat loss calculated for floors with required insulation. AInsulation may be omitted from floors over.... Definitions A. British thermal unit (Btu) means the quantity of heat required to raise the temperature of one...
7 CFR Exhibit D to Subpart A of... - Thermal Performance Construction Standards
Code of Federal Regulations, 2014 CFR
2014-01-01
... floor insulation, the total heat loss attributed to the floor from the heated area shall not exceed the heat loss calculated for floors with required insulation. AInsulation may be omitted from floors over.... Definitions A. British thermal unit (Btu) means the quantity of heat required to raise the temperature of one...
7 CFR Exhibit D to Subpart A of... - Thermal Performance Construction Standards
Code of Federal Regulations, 2012 CFR
2012-01-01
... floor insulation, the total heat loss attributed to the floor from the heated area shall not exceed the heat loss calculated for floors with required insulation. AInsulation may be omitted from floors over.... Definitions A. British thermal unit (Btu) means the quantity of heat required to raise the temperature of one...
7 CFR Exhibit D to Subpart A of... - Thermal Performance Construction Standards
Code of Federal Regulations, 2011 CFR
2011-01-01
... floor insulation, the total heat loss attributed to the floor from the heated area shall not exceed the heat loss calculated for floors with required insulation. AInsulation may be omitted from floors over.... Definitions A. British thermal unit (Btu) means the quantity of heat required to raise the temperature of one...
7 CFR Exhibit D to Subpart A of... - Thermal Performance Construction Standards
Code of Federal Regulations, 2010 CFR
2010-01-01
... floor insulation, the total heat loss attributed to the floor from the heated area shall not exceed the heat loss calculated for floors with required insulation. AInsulation may be omitted from floors over.... Definitions A. British thermal unit (Btu) means the quantity of heat required to raise the temperature of one...
40 CFR 98.343 - Calculating GHG emissions.
Code of Federal Regulations, 2010 CFR
2010-07-01
... potential (metric tons CH4/metric ton waste) = MCF × DOC × DOCF × F × 16/12. MCF = Methane correction factor... = Methane emissions from the landfill in the reporting year (metric tons CH4). GCH4 = Modeled methane...). Emissions = Methane emissions from the landfill in the reporting year (metric tons CH4). R = Quantity of...
40 CFR 98.343 - Calculating GHG emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... landfill using paragraph (a)(3)(i) of this section for all containers and for all vehicles used to haul... determine the annual quantity of waste disposed of must be documented in the monitoring plan. (i) Use direct... methods: (A) Weigh using mass scales each vehicle or container used to haul waste as it enters the...
40 CFR 98.343 - Calculating GHG emissions.
Code of Federal Regulations, 2011 CFR
2011-07-01
... landfill using paragraph (a)(3)(i) of this section for all containers and for all vehicles used to haul... determine the annual quantity of waste disposed of must be documented in the monitoring plan. (i) Use direct... methods: (A) Weigh using mass scales each vehicle or container used to haul waste as it enters the...
40 CFR 98.343 - Calculating GHG emissions.
Code of Federal Regulations, 2013 CFR
2013-07-01
... landfill using paragraph (a)(3)(i) of this section for all containers and for all vehicles used to haul... determine the annual quantity of waste disposed of must be documented in the monitoring plan. (i) Use direct... methods: (A) Weigh using mass scales each vehicle or container used to haul waste as it enters the...
Fostering Formal Commutativity Knowledge with Approximate Arithmetic
Hansen, Sonja Maria; Haider, Hilde; Eichler, Alexandra; Godau, Claudia; Frensch, Peter A.; Gaschler, Robert
2015-01-01
How can we enhance the understanding of abstract mathematical principles in elementary school? Different studies found out that nonsymbolic estimation could foster subsequent exact number processing and simple arithmetic. Taking the commutativity principle as a test case, we investigated if the approximate calculation of symbolic commutative quantities can also alter the access to procedural and conceptual knowledge of a more abstract arithmetic principle. Experiment 1 tested first graders who had not been instructed about commutativity in school yet. Approximate calculation with symbolic quantities positively influenced the use of commutativity-based shortcuts in formal arithmetic. We replicated this finding with older first graders (Experiment 2) and third graders (Experiment 3). Despite the positive effect of approximation on the spontaneous application of commutativity-based shortcuts in arithmetic problems, we found no comparable impact on the application of conceptual knowledge of the commutativity principle. Overall, our results show that the usage of a specific arithmetic principle can benefit from approximation. However, the findings also suggest that the correct use of certain procedures does not always imply conceptual understanding. Rather, the conceptual understanding of commutativity seems to lag behind procedural proficiency during elementary school. PMID:26560311
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cave, Robert J., E-mail: Robert-Cave@hmc.edu; Stanton, John F., E-mail: JFStanton@gmail.com
We present a simple quasi-diabatization scheme applicable to spectroscopic studies that can be applied using any wavefunction for which one-electron properties and transition properties can be calculated. The method is based on rotation of a pair (or set) of adiabatic states to minimize the difference between the given transition property at a reference geometry of high symmetry (where the quasi-diabatic states and adiabatic states coincide) and points of lower symmetry where quasi-diabatic quantities are desired. Compared to other quasi-diabatization techniques, the method requires no special coding, facilitates direct comparison between quasi-diabatic quantities calculated using different types of wavefunctions, and is free of any selection of configurations in the definition of the quasi-diabatic states. On the other hand, the method appears to be sensitive to multi-state issues, unlike recent methods we have developed that use a configurational definition of quasi-diabatic states. Results are presented and compared with two other recently developed quasi-diabatization techniques.
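A minimal two-state sketch of the rotation (all matrix elements are invented placeholders for values a quantum chemistry code would supply):

    # Rotate a pair of adiabatic states by an angle chosen so a one-electron
    # transition property matches its value at the high-symmetry reference.
    import numpy as np
    from scipy.optimize import minimize_scalar

    m11, m22, m12 = 0.40, -0.55, 1.10   # property in the adiabatic basis (a.u.)
    reference_m12 = 1.15                 # transition property at the reference geometry

    def transition_property(theta):
        c, s = np.cos(theta), np.sin(theta)
        return c * s * (m22 - m11) + (c**2 - s**2) * m12

    res = minimize_scalar(lambda th: (transition_property(th) - reference_m12) ** 2,
                          bounds=(-np.pi / 4, np.pi / 4), method="bounded")
    print("mixing angle (rad):", res.x, "-> property:", transition_property(res.x))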
Taub-NUT Spacetime in the (A)dS/CFT and M-Theory [electronic resource
NASA Astrophysics Data System (ADS)
Clarkson, Richard
In the following thesis, I will conduct a thermodynamic analysis of the Taub-NUT spacetime in various dimensions, as well as show uses for Taub-NUT and other hyper-Kähler spacetimes. Thermodynamic analysis (by which I mean the calculation of the entropy and other thermodynamic quantities, and the analysis of these quantities) has in the past been done by use of background subtraction. The recent derivation of the (A)dS/CFT correspondences from string theory has allowed for easier and quicker analysis. I will use Taub-NUT space as a template to test these correspondences against the standard thermodynamic calculations (via the Noether method), with (in the Taub-NUT-dS case especially) some very interesting results. There is also interest in obtaining metrics in eleven dimensions that can be reduced down to ten-dimensional string theory metrics. Taub-NUT and other hyper-Kähler metrics already possess the form to easily facilitate the Kaluza-Klein reduction, and embedding such metrics into eleven-dimensional metrics containing M2 or M5 branes produces metrics with interesting Dp-brane results.
NASA Astrophysics Data System (ADS)
Seeberger, Pia; Vidal, Julien
2017-08-01
Formation entropy of point defects is one of the last crucial elements required to fully describe the temperature dependence of point defect formation. However, while many attempts have been made to compute it for very complicated systems, few works have assessed the effects of finite size and numerical precision on this quantity. Large discrepancies can be found in the literature even for a system as simple as the silicon vacancy. In this work, we propose a systematic study of the formation entropy of the silicon vacancy in its three stable charge states (neutral, +2, and -2) for supercells containing at least 432 atoms. A rationalization of the formation entropy is presented, highlighting the importance of finite-size errors and the difficulty of computing such quantities given the high numerical requirements. It is proposed that the direct calculation of the formation entropy of V_Si using first-principles methods will be plagued by a very high computational workload (or large numerical errors) and finite-size-dependent results.
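The harmonic-phonon bookkeeping underlying such a calculation can be sketched compactly. The Python fragment below evaluates the standard harmonic vibrational entropy of a mode set and forms a vacancy formation entropy by subtracting a rescaled bulk reference; the random mode frequencies, the 64-atom cell, and the particular rescaling convention are illustrative assumptions, not values from this work.

    import numpy as np

    KB_EV = 8.617333e-5       # Boltzmann constant, eV/K
    HBAR_EVS = 6.582120e-16   # hbar, eV*s

    def harmonic_entropy(freqs_thz, temperature):
        """Vibrational entropy (in units of kB) of a set of harmonic modes.
        freqs_thz holds mode frequencies nu in THz (not angular omega)."""
        nu = np.asarray(freqs_thz) * 1e12                       # Hz
        x = 2 * np.pi * nu * HBAR_EVS / (KB_EV * temperature)   # hbar*omega/kB*T
        return np.sum(x / np.expm1(x) - np.log1p(-np.exp(-x)))

    # Hypothetical mode sets: a perfect N-atom supercell and the same cell
    # with one vacancy (3N-3 vs 3(N-1)-3 modes after removing translations).
    rng = np.random.default_rng(0)
    n_atoms = 64
    bulk_modes = rng.uniform(2.0, 15.0, 3 * n_atoms - 3)          # THz
    defect_modes = rng.uniform(1.8, 15.0, 3 * (n_atoms - 1) - 3)  # THz

    T = 300.0
    s_bulk = harmonic_entropy(bulk_modes, T)
    s_def = harmonic_entropy(defect_modes, T)
    # Formation entropy: defect cell minus the bulk reference rescaled to
    # the same atom count (one common convention among several).
    s_form = s_def - (n_atoms - 1) / n_atoms * s_bulk
    print(f"formation entropy ~ {s_form:.2f} kB at {T:.0f} K")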
User's Manual for FOMOCO Utilities-Force and Moment Computation Tools for Overset Grids
NASA Technical Reports Server (NTRS)
Chan, William M.; Buning, Pieter G.
1996-01-01
In numerical computations of flows around complex configurations, accurate calculation of force and moment coefficients on aerodynamic surfaces is required. When overset grid methods are used, the surfaces on which force and moment coefficients are sought typically consist of a collection of overlapping surface grids. Direct integration of flow quantities on the overlapping grids would count the overlapped regions more than once. FOMOCO Utilities is a software package for computing flow coefficients (force, moment, and mass flow rate) on a collection of overset surfaces with accurate accounting of the overlapped zones. It can be used in stand-alone mode or in conjunction with the Chimera overset-grid compressible Navier-Stokes flow solver OVERFLOW. The package consists of two modules corresponding to a two-step procedure: (1) hybrid surface grid generation (MIXSUR module) and (2) integration of flow quantities (OVERINT module). Instructions for using the software are given in this user's manual, and the equations used in the flow coefficient calculations are given in Appendix A.
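The overlap accounting can be illustrated with a toy integration (this is not the actual MIXSUR/OVERINT procedure, which constructs a hybrid composite of the surface grids): each panel carries a weight so that regions covered by more than one grid contribute exactly once. All panel data below are hypothetical.

    import numpy as np

    def force_coefficient(panels, ref_area):
        """Sum weighted panel contributions to a nondimensional force vector.
        Each panel carries an overlap weight in [0, 1]: panels covered by
        more than one surface grid share their contribution so the
        overlapped region is counted exactly once in the union."""
        force = np.zeros(3)
        for p in panels:
            force += p["weight"] * p["cp"] * p["area"] * np.asarray(p["normal"], float)
        return force / ref_area

    # Hypothetical strip covered by two overlapping surface grids; the two
    # overlapped panels each get weight 0.5 so their shared area counts once.
    panels = [
        {"cp": -0.8, "area": 0.02, "normal": [0.0, 0.0, 1.0], "weight": 1.0},
        {"cp": -0.7, "area": 0.02, "normal": [0.0, 0.0, 1.0], "weight": 0.5},  # grid A
        {"cp": -0.7, "area": 0.02, "normal": [0.0, 0.0, 1.0], "weight": 0.5},  # grid B
    ]
    print("force coefficient vector:", force_coefficient(panels, ref_area=0.04))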
Gyrokinetic statistical absolute equilibrium and turbulence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu Jianzhou; Hammett, Gregory W.
2010-12-15
A paradigm based on the absolute equilibrium of Galerkin-truncated inviscid systems to aid in understanding turbulence [T.-D. Lee, Q. Appl. Math. 10, 69 (1952)] is taken to study gyrokinetic plasma turbulence: a finite set of Fourier modes of the collisionless gyrokinetic equations are kept and the statistical equilibria are calculated; possible implications for plasma turbulence in various situations are discussed. For the case of two spatial and one velocity dimension, in the calculation with discretization also of velocity v with N grid points (where N+1 quantities are conserved, corresponding to an energy invariant and N entropy-related invariants), the negative temperature states, corresponding to the condensation of the generalized energy into the lowest modes, are found. This indicates a generic feature of inverse energy cascade. Comparisons are made with some classical results, such as those of Charney-Hasegawa-Mima in the cold-ion limit. There is a universal shape for statistical equilibrium of gyrokinetics in three spatial and two velocity dimensions with just one conserved quantity. Possible physical relevance to turbulence, such as ITG zonal flows, and to a critical balance hypothesis are also discussed.
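The generic structure of such Gibbs-ensemble results can be shown with the familiar two-invariant schematic form <|phi_k|^2> ~ 1/(alpha + beta k^2). This toy form is not the gyrokinetic calculation itself, but it illustrates how the spectrum condenses into the gravest mode as the "temperature" parameter alpha approaches the negative realizability boundary.

    import numpy as np

    def equilibrium_spectrum(k, alpha, beta):
        """Schematic Gibbs-ensemble modal intensity 1/(alpha + beta*k^2)
        for a truncated system with two quadratic invariants."""
        denom = alpha + beta * k**2
        if np.any(denom <= 0):
            raise ValueError("alpha + beta*k^2 must stay positive (realizability)")
        return 1.0 / denom

    k = np.arange(1, 33, dtype=float)   # truncated wavenumber set, k_min = 1
    beta = 1.0
    # As alpha approaches -beta*k_min^2 from above, the corresponding
    # 'temperature' is negative and the spectrum condenses into k_min.
    for alpha in (5.0, 0.5, -0.99):
        spec = equilibrium_spectrum(k, alpha, beta)
        print(f"alpha = {alpha:+.2f}: fraction in lowest mode = "
              f"{spec[0] / spec.sum():.3f}")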
Efficient variable time-stepping scheme for intense field-atom interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cerjan, C.; Kosloff, R.
1993-03-01
The recently developed Residuum method [Tal-Ezer, Kosloff, and Cerjan, J. Comput. Phys. 100, 179 (1992)], a Krylov subspace technique with variable time-step integration for the solution of the time-dependent Schroedinger equation, is applied to the frequently used soft Coulomb potential in an intense laser field. This one-dimensional potential has asymptotic Coulomb dependence with a "softened" singularity at the origin; thus it models more realistic phenomena. Two of the more important quantities usually calculated in this idealized system are the photoelectron and harmonic photon generation spectra. These quantities are shown to be sensitive to the choice of numerical integration scheme: some spectral features are incorrectly calculated or missing altogether. Furthermore, the Residuum method allows much larger grid spacings for equivalent or higher accuracy, in addition to the advantages of variable time stepping. Finally, it is demonstrated that enhanced high-order harmonic generation accompanies intense field stabilization and that preparation of the atom in an intermediate Rydberg state leads to stabilization at much lower laser intensity.
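For readers who want to reproduce the qualitative physics, a generic fixed-step split-operator propagation of the one-dimensional soft-Coulomb atom is sketched below. This is deliberately not the Residuum method (no Krylov subspace, no variable time stepping), and absorbing boundaries are omitted for brevity; all grid and pulse parameters are illustrative.

    import numpy as np

    # One-dimensional soft-Coulomb atom in a laser pulse, atomic units.
    n = 2048
    x = np.linspace(-200.0, 200.0, n, endpoint=False)
    dx = x[1] - x[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    v0 = -1.0 / np.sqrt(x**2 + 2.0)          # "softened" Coulomb potential

    # Ground state via imaginary-time propagation (Strang splitting).
    dt = 0.05
    psi = np.exp(-x**2)
    expk = np.exp(-0.5 * k**2 * dt)
    for _ in range(2000):
        psi = np.exp(-0.5 * v0 * dt) * psi
        psi = np.fft.ifft(expk * np.fft.fft(psi))
        psi = np.exp(-0.5 * v0 * dt) * psi
        psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)

    # Real-time propagation in a sin^2 pulse, length-gauge coupling x*E(t).
    e0, w, cycles = 0.05, 0.057, 8           # ~800 nm, modest intensity
    tmax = cycles * 2 * np.pi / w
    steps = 4000
    dt = tmax / steps
    expk = np.exp(-0.5j * k**2 * dt)
    dipole = np.empty(steps)
    for i in range(steps):
        t = i * dt
        et = e0 * np.sin(np.pi * t / tmax)**2 * np.sin(w * t)
        half = np.exp(-0.5j * (v0 + x * et) * dt)
        psi = half * np.fft.ifft(expk * np.fft.fft(half * psi))
        dipole[i] = np.real(np.sum(np.conj(psi) * x * psi) * dx)

    # Harmonic spectrum of the windowed dipole, in units of the laser frequency.
    spec = np.abs(np.fft.rfft(dipole * np.hanning(steps)))**2
    order = np.fft.rfftfreq(steps, d=dt) * 2 * np.pi / w
    mask = order > 3
    print("strongest harmonic above order 3: "
          f"{order[mask][np.argmax(spec[mask])]:.1f}")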
NASA Technical Reports Server (NTRS)
Posey, Joe W.; Dunn, M. H.; Farassat, F.
2004-01-01
This paper addresses two aspects of duct propagation and radiation that can contribute to more efficient fan noise predictions. First, we assess the effectiveness of Rayleigh's formula as a ducted fan noise prediction tool. This classical result, which predicts the sound produced by a piston in a flanged duct, is extended here to include uniform axial inflow. Radiation patterns computed from Rayleigh's formula with single radial mode input are compared to those obtained from the more precise ducted fan noise prediction code TBIEM3D. Agreement between the two methods is excellent in the peak noise regions, both forward and aft. Next, we use TBIEM3D to calculate generalized radiation impedances and power transmission coefficients. These quantities are computed for a wide range of operating parameters, including higher Mach numbers, frequencies, and circumferential mode orders than have previously been published. Viewed as functions of frequency, the calculated trends in lower-order inlet impedances and power transmission coefficients agree with known results; the relationships are more oscillatory for higher-order modes and higher Mach numbers.
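In the no-flow limit, the far-field pattern implied by Rayleigh's formula for a baffled circular piston reduces to the classical directivity |2 J1(ka sin theta)/(ka sin theta)|. The short sketch below (illustrative only, unrelated to the TBIEM3D code) evaluates this pattern and shows how the main lobe narrows as ka grows.

    import numpy as np
    from scipy.special import j1

    def piston_directivity(theta, ka):
        """Far-field pattern |2 J1(ka sin t) / (ka sin t)| of a baffled
        circular piston; the on-axis limit (arg -> 0) equals 1."""
        arg = ka * np.sin(theta)
        out = np.ones_like(arg)
        nz = np.abs(arg) > 1e-12
        out[nz] = np.abs(2 * j1(arg[nz]) / arg[nz])
        return out

    theta = np.linspace(0.0, np.pi / 2, 7)       # polar angles from the axis
    for ka in (1.0, 5.0, 10.0):                  # ka = (2*pi/wavelength)*radius
        d = piston_directivity(theta, ka)
        print(f"ka = {ka:4.1f}:", np.array2string(d, precision=3))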
Bradshaw, Richard T; Essex, Jonathan W
2016-08-09
Hydration free energy (HFE) calculations are often used to assess the performance of biomolecular force fields and the quality of assigned parameters. The AMOEBA polarizable force field moves beyond traditional pairwise additive models of electrostatics and may be expected to improve upon predictions of thermodynamic quantities such as HFEs over and above fixed-point-charge models. The recent SAMPL4 challenge evaluated the AMOEBA polarizable force field in this regard but showed substantially worse results than those using the fixed-point-charge GAFF model. Starting with a set of automatically generated AMOEBA parameters for the SAMPL4 data set, we evaluate the cumulative effects of a series of incremental improvements in parametrization protocol, including both solute and solvent model changes. Ultimately, the optimized AMOEBA parameters give a set of results that are not statistically significantly different from those of GAFF in terms of signed and unsigned error metrics. This allows us to propose a number of guidelines for new molecule parameter derivation with AMOEBA, which we expect to have benefits for a range of biomolecular simulation applications such as protein-ligand binding studies.
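The kind of signed/unsigned error comparison described can be sketched as follows; the hydration free energies below are invented numbers, and the paired bootstrap is one common way (among several) to decide whether a difference in mean unsigned error is statistically significant.

    import numpy as np

    def error_metrics(calc, expt):
        """Mean signed error (MSE) and mean unsigned error (MUE)."""
        d = np.asarray(calc) - np.asarray(expt)
        return d.mean(), np.abs(d).mean()

    def paired_bootstrap(calc_a, calc_b, expt, n_boot=10000, seed=1):
        """95% CI on the MUE difference between two models on one data set;
        if the interval straddles zero, the difference is not significant."""
        rng = np.random.default_rng(seed)
        ea = np.abs(np.asarray(calc_a) - expt)
        eb = np.abs(np.asarray(calc_b) - expt)
        n = len(expt)
        diffs = np.empty(n_boot)
        for i in range(n_boot):
            idx = rng.integers(0, n, n)
            diffs[i] = ea[idx].mean() - eb[idx].mean()
        return tuple(np.percentile(diffs, [2.5, 97.5]))

    # Hypothetical hydration free energies (kcal/mol) for a small test set.
    expt = np.array([-3.2, -5.1, -0.9, -7.4, -2.2, -4.8])
    model_a = expt + np.array([0.4, -0.6, 0.9, -0.3, 0.5, -0.2])  # e.g. polarizable
    model_b = expt + np.array([0.6, -0.4, 0.7, -0.5, 0.3, -0.4])  # e.g. fixed-charge
    print("model A (MSE, MUE):", error_metrics(model_a, expt))
    print("model B (MSE, MUE):", error_metrics(model_b, expt))
    print("95% CI on MUE difference:", paired_bootstrap(model_a, model_b, expt))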
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sprecher, Daniel; Merkt, Frédéric, E-mail: frederic.merkt@phys.chem.ethz.ch; Jungen, Christian
2014-03-14
Multichannel quantum-defect theory (MQDT) is used to calculate the electron binding energies of np Rydberg states of H{sub 2}, HD, and D{sub 2} around n = 60 at an accuracy of better than 0.5 MHz. The theory includes the effects of rovibronic channel interactions and the hyperfine structure, and has been extended to the calculation of the asymmetric hyperfine structure of Rydberg states of a heteronuclear diatomic molecule (HD). Starting values for the eigenquantum-defect parameters of MQDT were extracted from ab initio potential-energy functions for the low-lying p Rydberg states of molecular hydrogen and subsequently refined in a global weighted fit to available experimental data on the singlet and triplet Rydberg states of H{sub 2} and D{sub 2}. The electron binding energies of high-np Rydberg states derived in this work represent important quantities for future determinations of the adiabatic ionization energies of H{sub 2}, HD, and D{sub 2} at sub-MHz accuracy.
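For a sense of scale (and only that), a one-channel Rydberg formula, which ignores precisely the channel interactions and hyperfine structure that MQDT treats, already gives the magnitude of the binding energies near n = 60. The quantum defect and reduced-mass correction below are rough illustrative values, not fitted parameters from this work.

    # Binding energy from the one-channel formula E_b = h * c*R_M / (n - mu)^2,
    # a crude single-channel stand-in for MQDT.
    R_INF_HZ = 3.2898419603e15   # Rydberg frequency c*R_inf, Hz

    def binding_energy_hz(n, mu, mass_ratio):
        """mass_ratio = reduced electron mass / free electron mass (< 1)."""
        return R_INF_HZ * mass_ratio / (n - mu) ** 2

    mu = 0.08                             # hypothetical np-series quantum defect
    mass_ratio = 1 - 1 / (2 * 1836.15)    # rough reduced-mass correction for H2
    for n in (58, 60, 62):
        eb = binding_energy_hz(n, mu, mass_ratio)
        print(f"n = {n}: binding energy ~ {eb / 1e9:.1f} GHz")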
Monte Carlo simulations in radiotherapy dosimetry.
Andreo, Pedro
2018-06-27
The use of the Monte Carlo (MC) method in radiotherapy dosimetry has increased almost exponentially in recent decades. Its widespread use in the field has turned this computer simulation technique into a common tool for reference and treatment-planning dosimetry calculations. This work reviews the MC calculations made of dosimetric quantities, such as stopping-power ratios and the perturbation correction factors required for reference ionization-chamber dosimetry, as well as the fully realistic MC simulations currently available for clinical accelerators, detectors, and patient treatment planning. Issues raised include the need for consistency in the data throughout the entire reference dosimetry chain and the breakdown of Bragg-Gray theory for small photon fields. Both aspects are less critical for MC treatment-planning applications, but there are important constraints, such as tissue characterization and its patient-to-patient variability, which, together with the conversion between dose-to-water and dose-to-tissue, are analysed in detail. Although these constraints are common to all methods and algorithms used in different types of treatment planning systems, they mean that the uncertainties involved in MC treatment planning still remain "uncertain".
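As a caricature of what an MC dose engine does, the toy one-dimensional sketch below attenuates photons exponentially and scores energy deposition per depth bin. Real dosimetry codes transport secondary particles, use measured cross sections, and handle heterogeneous geometries, none of which is attempted here; all parameters are invented.

    import numpy as np

    def toy_depth_dose(n_photons=200_000, mu=0.05, depth=30.0, nbins=30, seed=7):
        """Toy 1D Monte Carlo: each photon travels an exponentially
        distributed free path (attenuation coefficient mu, 1/cm) and
        deposits all its energy at the first interaction point."""
        rng = np.random.default_rng(seed)
        path = rng.exponential(1.0 / mu, n_photons)
        inside = path < depth
        dose, edges = np.histogram(path[inside], bins=nbins, range=(0.0, depth))
        return edges[:-1], dose / n_photons

    z, frac = toy_depth_dose()
    for zi, fi in zip(z[:5], frac[:5]):
        print(f"depth {zi:4.1f} cm: fraction of energy deposited = {fi:.4f}")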
Life Cycle Cost Analysis of Ready Mix Concrete Plant
NASA Astrophysics Data System (ADS)
Topkar, V. M.; Duggar, A. R.; Kumar, A.; Bonde, P. P.; Girwalkar, R. S.; Gade, S. B.
2013-11-01
India, being a developing nation, is experiencing major growth in its infrastructure sector. Concrete is the major component in construction, and the requirement for large quantities of good-quality concrete can be met by ready mix concrete batching and mixing plants. This paper presents a technique for applying life cycle cost analysis, a value engineering tool, to a ready mix concrete plant, which can help an investor or an organization make investment decisions regarding a ready mix concrete facility. No economic alternatives are compared in this study. A cost breakdown structure is prepared for the ready mix concrete plant, and a market survey has been conducted to collect realistic costs for the facility. The study establishes the cash flow for the facility, which is helpful in decisions related to investment and capital generation. Transit mixers form an important component of the facility and are included in the calculations; a fleet size for the transit mixers has been assumed for this purpose. The life cycle cost has been calculated for the combined system of the ready mix concrete plant and the transit mixers.
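The discounting arithmetic behind such a life cycle cost figure is straightforward and can be sketched as below; the capital costs, operating costs, fleet size, lifetime, and discount rate are all hypothetical placeholders, not the paper's survey data.

    def life_cycle_cost(initial, annual_om, salvage, years, rate):
        """Present-value life cycle cost: capital + discounted annual
        operating and maintenance costs - discounted salvage value."""
        pv_om = sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))
        pv_salvage = salvage / (1 + rate) ** years
        return initial + pv_om - pv_salvage

    # Hypothetical figures (generic currency units) for a plant plus a
    # six-vehicle transit-mixer fleet over a 15-year horizon at 10%.
    plant = life_cycle_cost(initial=25e6, annual_om=4.5e6, salvage=2e6,
                            years=15, rate=0.10)
    mixers = life_cycle_cost(initial=6 * 2.5e6, annual_om=6 * 0.4e6,
                             salvage=6 * 0.3e6, years=15, rate=0.10)
    print(f"plant LCC  : {plant / 1e6:7.1f} M")
    print(f"mixer LCC  : {mixers / 1e6:7.1f} M")
    print(f"system LCC : {(plant + mixers) / 1e6:7.1f} M")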
Quantifying faculty teaching time in a department of obstetrics and gynecology.
Emmons, S
1998-10-01
The goal of this project was to develop a reproducible system that measures the quantity and quality of teaching in unduplicated hours, such that comparisons of teaching activities can be drawn within and across departments. Such a system could be used for allocating teaching monies and for assessing teaching as part of the promotion and tenure process. Various teaching activities, including time spent in clinic, on rounds, and doing procedures, were enumerated. The faculty were surveyed about the proportion of clinical time spent teaching, and the literature was reviewed. Based on analysis of the faculty survey and the literature, a series of calculations was developed to divide clinical time among resident teaching, medical student teaching, and patient care. The only inputs needed were the total time spent in the various clinical activities, the time spent in didactic activities, and the resident procedure database. This article describes a simple and fair database system for calculating time spent teaching in activities such as clinic, ward rounds, labor and delivery, and surgery. The teaching portfolio database calculates teaching as a proportion of the faculty member's total activities; the end product is a report that provides a reproducible yearly summary of faculty teaching time per activity and per type of learner.
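A minimal sketch of this kind of allocation might look like the following; the hours and per-activity teaching proportions are entirely hypothetical, since the article's actual weights come from its faculty survey and literature review.

    # Split each clinical activity's annual hours between teaching and pure
    # patient care using assumed (hypothetical) teaching proportions.
    ACTIVITY_HOURS = {"clinic": 400, "ward rounds": 150,
                      "labor & delivery": 200, "surgery": 250, "didactics": 60}
    TEACHING_FRACTION = {"clinic": 0.30, "ward rounds": 0.50,
                         "labor & delivery": 0.40, "surgery": 0.35,
                         "didactics": 1.0}

    teaching = {a: h * TEACHING_FRACTION[a] for a, h in ACTIVITY_HOURS.items()}
    total_hours = sum(ACTIVITY_HOURS.values())
    total_teaching = sum(teaching.values())
    for activity, hours in teaching.items():
        print(f"{activity:18s}: {hours:6.1f} unduplicated teaching hours")
    print(f"teaching share of total activity: {total_teaching / total_hours:.0%}")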