SU-E-T-226: Correction of a Standard Model-Based Dose Calculator Using Measurement Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, M; Jiang, S; Lu, W
Purpose: To propose a hybrid method that combines the advantages of model-based and measurement-based methods for independent dose calculation. Model-based dose calculation, such as collapsed-cone-convolution/superposition (CCCS) or the Monte Carlo method, models dose deposition in the patient body accurately; however, due to a lack of detailed knowledge about the linear accelerator (LINAC) head, commissioning for an arbitrary machine is tedious and challenging in case of hardware changes. On the contrary, the measurement-based method characterizes the beam properties accurately but lacks the capability of dose deposition modeling in heterogeneous media. Methods: We used a standard CCCS calculator, commissioned with published data, as the standard model calculator. For a given machine, water phantom measurements were acquired. A set of dose distributions was also calculated using the CCCS for the same setup. The differences between the measurements and the CCCS results were tabulated and used as the commissioning data for a measurement-based calculator, here a direct-ray-tracing calculator (ΔDRT). The proposed independent dose calculation consists of the following steps: 1. calculate D-model using CCCS; 2. calculate D-ΔDRT using ΔDRT; 3. combine: D = D-model + D-ΔDRT. Results: The hybrid dose calculation was tested on digital phantoms and patient CT data for standard fields and IMRT plans. The results were compared to dose calculated by the treatment planning system (TPS). The agreement between the hybrid method and the TPS was within 3%/3 mm for over 98% of the volume for phantom studies and lung patients. Conclusion: The proposed hybrid method uses the same commissioning data as the measurement-based method and can be easily extended to any non-standard LINAC. The results met the accuracy, independence, and simple-commissioning criteria for an independent dose calculator.
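For illustration, a minimal sketch of the final combination step (D = D-model + D-ΔDRT), assuming both terms have already been computed on a common voxel grid; all array names and values are hypothetical stand-ins, not the authors' code.

```python
import numpy as np

def hybrid_dose(d_model: np.ndarray, delta_drt: np.ndarray) -> np.ndarray:
    """Combine a model-based dose (e.g., CCCS) with a measurement-driven
    correction computed by direct ray tracing on the tabulated
    measurement-minus-model differences: D = D_model + D_dDRT."""
    if d_model.shape != delta_drt.shape:
        raise ValueError("dose grids must match")
    return d_model + delta_drt

# Hypothetical 3D dose grids (Gy) on the same voxel lattice
d_cccs = np.random.rand(64, 64, 32)                 # stands in for a CCCS result
d_correction = 0.02 * np.random.randn(64, 64, 32)   # stands in for the dDRT term
d_hybrid = hybrid_dose(d_cccs, d_correction)
```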
Ballistics Modeling for Non-Axisymmetric Hypervelocity Smart Bullets
2014-06-03
can in principle come from experiments or computational fluid dynamics (CFD) calculations. CFD calculations are carried out for a standard bullet (0.308" 168 grain) ...
Updates to In-Line Calculation of Photolysis Rates
How photolysis rates are calculated affects ozone and aerosol concentrations predicted by the CMAQ model and the model's run-time. The standard configuration of CMAQ uses the inline option that calculates photolysis rates by solving the radiative transfer equation for the needed ...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-23
... parts of the risk adjustment process--the risk adjustment model, the calculation of plan average... risk adjustment process. The risk adjustment model calculates individual risk scores. The calculation...'' to mean all data that are used in a risk adjustment model, the calculation of plan average actuarial...
Progress in the improved lattice calculation of direct CP-violation in the Standard Model
NASA Astrophysics Data System (ADS)
Kelly, Christopher
2018-03-01
We discuss the ongoing effort by the RBC & UKQCD collaborations to improve our lattice calculation of the measure of Standard Model direct CP violation, ɛ', with physical kinematics. We present our progress in decreasing the (dominant) statistical error and discuss other related activities aimed at reducing the systematic errors.
Gamma-ray dose from an overhead plume
McNaughton, Michael W.; Gillis, Jessica McDonnel; Ruedig, Elizabeth; ...
2017-05-01
Standard plume models can underestimate the gamma-ray dose when most of the radioactive material is above the heads of the receptors. Typically, a model is used to calculate the air concentration at the height of the receptor, and the dose is calculated by multiplying the air concentration by a concentration-to-dose conversion factor. Models indicate that if the plume is emitted from a stack during stable atmospheric conditions, the lower edges of the plume may not reach the ground, in which case both the ground-level concentration and the dose are usually reported as zero. However, in such cases, the dose from overhead gamma-emitting radionuclides may be substantial. Such underestimates could impact decision making in emergency situations. The Monte Carlo N-Particle code, MCNP, was used to calculate the overhead shine dose and to compare with standard plume models. At long distances and during unstable atmospheric conditions, the MCNP results agree with the standard models; at short distances, where many models calculate zero, the true dose (as modeled by MCNP) can be estimated with simple equations.
On the Higgs-like boson in the minimal supersymmetric 3-3-1 model
NASA Astrophysics Data System (ADS)
Ferreira, J. G.; Pires, C. A. de S.; da Silva, P. S. Rodrigues; Siqueira, Clarissa
2018-03-01
It is imperative that any proposal of new physics beyond the standard model possess a Higgs-like boson with a mass of 125 GeV and couplings to the standard particles that recover the branching ratios and signal strengths measured by CMS and ATLAS. We address this issue within the supersymmetric version of the minimal 3-3-1 model. For this we develop the Higgs potential with a focus on the lightest Higgs provided by the model. Our aim is to verify whether it recovers the properties of the Standard Model Higgs. With respect to its mass, we calculate it up to one-loop level, taking into account all contributions provided by the model. In regard to its couplings, we restrict our investigation to couplings of the Higgs-like boson with the standard particles only. We then calculate the dominant branching ratios and the respective signal strengths and confront our results with the recent measurements of CMS and ATLAS. As distinctive aspects, we remark that our Higgs-like boson mediates flavor-changing neutral processes and has the decay t → h + c as a signature. We calculate its branching ratio and compare it with current bounds. We also show that the Higgs potential of the model is stable in the region of parameter space employed in our calculations.
An ecological compensation standard based on emergy theory for the Xiao Honghe River Basin.
Guan, Xinjian; Chen, Moyu; Hu, Caihong
2015-01-01
The calculation of an ecological compensation standard is an important, but also difficult, aspect of current ecological compensation research. In this paper, the factors affecting the ecological-economic system in the Xiao Honghe River Basin, China, including the flow of energy, materials, and money, were calculated using the emergy analysis method. A consideration of the relationships between the ecological-economic value of water resources and ecological compensation allowed the ecological-economic value to be calculated. On this basis, the amount of water needed for dilution was used to develop a calculation model for the ecological compensation standard of the basin. Using the Xiao Honghe River Basin as an example, the value of water resources and the ecological compensation standard were calculated using this model according to the emission levels of the main pollutant in the basin, chemical oxygen demand. The compensation standards calculated for the research areas in Xipin, Shangcai, Pingyu, and Xincai were 34.91 yuan/m³, 32.97 yuan/m³, 35.99 yuan/m³, and 34.70 yuan/m³, respectively, and such research output would help to generate and support new approaches to the long-term ecological protection of the basin and improvement of the ecological compensation system.
Wei, Guocui; Zhan, Tingting; Zhan, Xiancheng; Yu, Lan; Wang, Xiaolan; Tan, Xiaoying; Li, Chengrong
2016-09-01
The osmotic pressure of glucose solution at a wide concentration range was calculated using ASOG model and experimentally determined by our newly reported air humidity osmometry. The measurements from air humidity osmometry were compared with the well-established freezing point osmometry and ASOG model calculations at low concentrations and with only ASOG model calculations at high concentrations where no standard experimental method could serve as a reference for comparison. Results indicate that air humidity osmometry measurements are comparable to ASOG model calculations at a wide concentration range, while at low concentrations freezing point osmometry measurements provide better comparability with ASOG model calculations.
Galactic chemical evolution and nucleocosmochronology - Standard model with terminated infall
NASA Technical Reports Server (NTRS)
Clayton, D. D.
1984-01-01
Some exactly soluble families of models for the chemical evolution of the Galaxy are presented. The parameters considered include gas mass, the age-metallicity relation, the star mass vs. metallicity, the age distribution, and the mean age of dwarfs. A short BASIC program for calculating these parameters is given. The calculation of metallicity gradients, nuclear cosmochronology, and extinct radioactivities is addressed. An especially simple, mathematically linear model is recommended as a standard model of galaxies with truncated infall due to its internal consistency and compact display of the physical effects of the parameters.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czarnecki, A.; Krause, B.
1997-06-01
We present a complete three-loop calculation of the electric dipole moment of the u and d quarks in the standard model. For the d quark, more relevant for the experimentally important neutron electric dipole moment, we find cancellations which lead to an order of magnitude suppression compared with previous estimates.
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of § 86.1801-12(j), CO2 fleet average exhaust emission standards apply to: (i) 2012 and later model... businesses meeting certain criteria may be exempted from the greenhouse gas emission standards in § 86.1818... standards applicable in a given model year are calculated separately for passenger automobiles and light...
Evaluation of standard radiation atmosphere aerosol models for a coastal environment
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Suttles, J. T.; Sebacher, D. I.; Fuller, W. H.; Lecroy, S. R.
1986-01-01
Calculations are compared with data from an experiment to evaluate the utility of standard radiation atmosphere (SRA) models for defining aerosol properties in atmospheric radiation computations. Initial calculations with only SRA aerosols in a four-layer atmospheric column simulation allowed a sensitivity study and the detection of spectral trends in optical depth, which differed from measurements. Subsequently, a more detailed analysis provided a revision in the stratospheric layer, which brought calculations in line with both optical depth and skylight radiance data. The simulation procedure allows determination of which atmospheric layers influence both downwelling and upwelling radiation spectra.
A log-normal distribution model for the molecular weight of aquatic fulvic acids
Cabaniss, S.E.; Zhou, Q.; Maurice, P.A.; Chin, Y.-P.; Aiken, G.R.
2000-01-01
The molecular weight of humic substances influences their proton and metal binding, organic pollutant partitioning, adsorption onto minerals and activated carbon, and behavior during water treatment. We propose a log-normal model for the molecular weight distribution in aquatic fulvic acids to provide a conceptual framework for studying these size effects. The normal curve mean and standard deviation are readily calculated from measured Mn and Mw and vary from 2.7 to 3 for the means and from 0.28 to 0.37 for the standard deviations for typical aquatic fulvic acids. The model is consistent with several types of molecular weight data, including the shapes of high-pressure size-exclusion chromatography (HP-SEC) peaks. Applications of the model to electrostatic interactions, pollutant solubilization, and adsorption are explored in illustrative calculations.
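As a worked illustration of how the normal-curve parameters follow from the measured averages: for a log-normally distributed M, Mn = exp(μ + σ²/2) and Mw = Mn·exp(σ²), so μ and σ are recovered in closed form. The sketch below uses hypothetical Mn and Mw values chosen only to land in the paper's quoted ranges.

```python
import math

def lognormal_params(Mn: float, Mw: float):
    """Recover the normal-curve parameters of ln(M) from the number- and
    weight-average molecular weights of a log-normally distributed M:
        Mn = exp(mu + sigma^2/2),   Mw = Mn * exp(sigma^2)."""
    sigma2 = math.log(Mw / Mn)
    mu = math.log(Mn) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

# Hypothetical fulvic-acid averages (Da)
mu, sigma = lognormal_params(Mn=950.0, Mw=1700.0)
print(f"mean of log10 M = {mu / math.log(10):.2f}")    # ~2.85
print(f"sd   of log10 M = {sigma / math.log(10):.2f}") # ~0.33
```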
runDM: Running couplings of Dark Matter to the Standard Model
NASA Astrophysics Data System (ADS)
D'Eramo, Francesco; Kavanagh, Bradley J.; Panci, Paolo
2018-02-01
runDM calculates the running of the couplings of Dark Matter (DM) to the Standard Model (SM) in simplified models with vector mediators. By specifying the mass of the mediator and the couplings of the mediator to SM fields at high energy, the code can calculate the couplings at low energy, taking into account the mixing of all dimension-6 operators. runDM can also extract the operator coefficients relevant for direct detection, namely low energy couplings to up, down and strange quarks and to protons and neutrons.
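runDM's own interface is not reproduced here; the sketch below only illustrates the generic technique the abstract describes, leading-log evolution of a coefficient vector with operator mixing via a matrix exponential of an anomalous-dimension matrix. The matrix entries and scales are toy numbers.

```python
import numpy as np
from scipy.linalg import expm

def run_couplings(c_high: np.ndarray, gamma: np.ndarray,
                  mu_high: float, mu_low: float) -> np.ndarray:
    """Evolve Wilson coefficients with dc/dln(mu) = gamma^T c, solved by a
    matrix exponential (one-loop anomalous-dimension matrix held constant
    between the two scales)."""
    t = np.log(mu_low / mu_high)
    return expm(gamma.T * t) @ c_high

# Toy 2-operator system: off-diagonal entries mix the operators as they run
gamma = np.array([[0.0, 0.01],
                  [0.02, 0.0]])
c_low = run_couplings(np.array([1.0, 0.0]), gamma, mu_high=1e3, mu_low=2.0)
print(c_low)
```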
Computation of confined coflow jets with three turbulence models
NASA Technical Reports Server (NTRS)
Zhu, J.; Shih, T. H.
1993-01-01
A numerical study of confined jets in a cylindrical duct is carried out to examine the performance of two recently proposed turbulence models: an RNG-based K-epsilon model and a realizable Reynolds stress algebraic equation model. The former is of the same form as the standard K-epsilon model but has different model coefficients. The latter uses an explicit quadratic stress-strain relationship to model the turbulent stresses and is capable of ensuring the positivity of each turbulent normal stress. The flow considered involves recirculation with unfixed separation and reattachment points and severe adverse pressure gradients, thereby providing a valuable test of the predictive capability of the models for complex flows. Calculations are performed with a finite-volume procedure. Numerical credibility of the solutions is ensured by using second-order accurate differencing schemes and sufficiently fine grids. Calculations with the standard K-epsilon model are also made for comparison. Detailed comparisons with experiments show that the realizable Reynolds stress algebraic equation model consistently works better than does the standard K-epsilon model in capturing the essential flow features, while the RNG-based K-epsilon model does not seem to give improvements over the standard K-epsilon model under the flow conditions considered.
Higgs boson mass in the standard model at two-loop order and beyond
Martin, Stephen P.; Robertson, David G.
2014-10-01
We calculate the mass of the Higgs boson in the standard model in terms of the underlying Lagrangian parameters at complete 2-loop order with leading 3-loop corrections. A computer program implementing the results is provided. The program also computes and minimizes the standard model effective potential in Landau gauge at 2-loop order with leading 3-loop corrections.
An assessment of RELAP5-3D using the Edwards-O'Brien Blowdown problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tomlinson, E.T.; Aumiller, D.L.
1999-07-01
The RELAP5-3D (version bt) computer code was used to assess the United States Nuclear Regulatory Commission's Standard Problem 1 (Edwards-O'Brien Blowdown Test). The RELAP5-3D standard installation problem based on the Edwards-O'Brien Blowdown Test was modified to model the appropriate initial conditions and to represent the proper location of the instruments present in the experiment. The results obtained using the modified model are significantly different from the original calculation indicating the need to model accurately the experimental conditions if an accurate assessment of the calculational model is to be obtained.
Standard Model of Particle Physics--a health physics perspective.
Bevelacqua, J J
2010-11-01
The Standard Model of Particle Physics is reviewed with an emphasis on its relationship to the physics supporting the health physics profession. Concepts important to health physics are emphasized and specific applications are presented. The capability of the Standard Model to provide health physics relevant information is illustrated with application of conservation laws to neutron and muon decay and in the calculation of the neutron mean lifetime.
Insertion device calculations with mathematica
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carr, R.; Lidia, S.
1995-02-01
The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high level languages like Fortran. In the present era, there are higher level programming environments like IDL®, MatLab®, and Mathematica® in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators, and general radiation calculations for undulators.
The computation of standard solar models
NASA Technical Reports Server (NTRS)
Ulrich, Roger K.; Cox, Arthur N.
1991-01-01
Procedures for calculating standard solar models with the usual simplifying approximations of spherical symmetry, no mixing except in the surface convection zone, no mass loss or gain during the solar lifetime, and no separation of elements by diffusion are described. The standard network of nuclear reactions among the light elements is discussed including rates, energy production and abundance changes. Several of the equation of state and opacity formulations required for the basic equations of mass, momentum and energy conservation are presented. The usual mixing-length convection theory is used for these results. Numerical procedures for calculating the solar evolution, and current evolution and oscillation frequency results for the present sun by some recent authors are given.
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
...) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust emissions at the end of the model year for passenger... for sale, and certifying model types to standards as defined in § 86.1818-12. The model type carbon...
Lattice field theory applications in high energy physics
NASA Astrophysics Data System (ADS)
Gottlieb, Steven
2016-10-01
Lattice gauge theory was formulated by Kenneth Wilson in 1974. In the ensuing decades, improvements in actions, algorithms, and computers have enabled tremendous progress in QCD, to the point where lattice calculations can yield sub-percent level precision for some quantities. Beyond QCD, lattice methods are being used to explore possible beyond the standard model (BSM) theories of dynamical symmetry breaking and supersymmetry. We survey progress in extracting information about the parameters of the standard model by confronting lattice calculations with experimental results and searching for evidence of BSM effects.
NASA Astrophysics Data System (ADS)
Allanach, B. C.; Cridge, T.
2017-11-01
We describe a major extension of the SOFTSUSY spectrum calculator to include the calculation of the decays, branching ratios and lifetimes of sparticles into lighter sparticles, covering the next-to-minimal supersymmetric standard model (NMSSM) as well as the minimal supersymmetric standard model (MSSM). This document acts as a manual for the new version of SOFTSUSY, which includes the calculation of sparticle decays. We present a comprehensive collection of explicit expressions used by the program for the various partial widths of the different decay modes in the appendix. Program Files doi: http://dx.doi.org/10.17632/5hhwwmp43g.1. Licensing provisions: GPLv3. Programming language: C++, Fortran. Nature of problem: calculating supersymmetric particle partial decay widths in the MSSM or the NMSSM, given the parameters and spectrum which have already been calculated by SOFTSUSY. Solution method: analytic expressions for tree-level 2-body decays and loop-level decays, and one-dimensional numerical integration for 3-body decays. Restrictions: decays are calculated in the real R-parity-conserving MSSM or the real R-parity-conserving NMSSM only. No additional charge-parity violation (CPV) relative to the Standard Model (SM). Sfermion mixing has only been accounted for in the third generation of sfermions in the decay calculation. Decays in the MSSM are 2-body and 3-body, whereas decays in the NMSSM are 2-body only. Does the new version supersede the previous version?: Yes. Reasons for the new version: significantly extended functionality. The decay rates and branching ratios of sparticles are particularly useful for collider searches. Decays calculated in the NMSSM will be a particularly useful check of the other programs in the literature, of which there are few. Summary of revisions: addition of the calculation of sparticle and Higgs decays. All 2-body and important 3-body tree-level decays, including phenomenologically important loop-level decays (notably, Higgs decays to gg, γγ and Zγ). Next-to-leading order corrections are added to neutral Higgs decays to qq̄ for quarks q of any flavour and to the neutral Higgs decays to gg. Additional comments: program obtainable from http://softsusy.hepforge.org/
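As a hedged illustration of the simplest ingredient named above, a tree-level 2-body partial width, the sketch below evaluates the generic formula Γ = p·|M|²/(8π m²), with the Källén function supplying the daughter momentum p in the parent rest frame. This is not SOFTSUSY code, and the masses and squared matrix element are arbitrary placeholders.

```python
import math

def kallen(a: float, b: float, c: float) -> float:
    """Kallen triangle function lambda(a, b, c)."""
    return a*a + b*b + c*c - 2.0*(a*b + a*c + b*c)

def width_two_body(m: float, m1: float, m2: float, msq: float) -> float:
    """Generic 1 -> 2 partial width Gamma = p * |M|^2 / (8 pi m^2),
    with p the daughter momentum in the parent rest frame."""
    lam = kallen(m*m, m1*m1, m2*m2)
    if lam <= 0.0:
        return 0.0          # decay kinematically closed
    p = math.sqrt(lam) / (2.0 * m)
    return p * msq / (8.0 * math.pi * m * m)

# Hypothetical numbers (GeV, dimensionless |M|^2) just to exercise the formula
print(width_two_body(m=500.0, m1=100.0, m2=50.0, msq=1.0))
```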
750 GeV diphoton resonance and inflation
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Noumi, Toshifumi; Shiu, Gary; Sun, Sichun
2016-06-01
We study the possibility of a heavy scalar or pseudoscalar in TeV-scale beyond-the-Standard-Model scenarios being the inflaton of the early universe in light of the recent O(750) GeV diphoton excess at the LHC. We consider a scenario in which the new scalar or pseudoscalar couples to the Standard Model gauge bosons at the loop level through new massive Standard Model charged vectorlike fermions with or without dark fermions. We calculate the renormalization group running of both the Standard Model and the new scalar couplings, and present two different models that are perturbative, with a stabilized vacuum up to near the Planck scale. Thus, the Standard Model Higgs and this possible new resonance may still preserve the minimalist features of Higgs inflation.
NASA Technical Reports Server (NTRS)
Guenther, D. B.
1994-01-01
The nonadiabatic frequencies of a standard solar model and a solar model that includes helium diffusion are discussed. The nonadiabatic pulsation calculation includes physics that describes the losses and gains due to radiation. Radiative gains and losses are modeled in both the diffusion approximation, which is only valid in optically thick regions, and the Eddington approximation, which is valid in both optically thin and thick regions. The calculated pulsation frequencies for modes with l less than or equal to 1320 are compared to the observed spectrum of the Sun. Compared to a strictly adiabatic calculation, the nonadiabatic calculation of p-mode frequencies improves the agreement between model and observation. When helium diffusion is included in the model, the frequencies of the modes that are sensitive to regions near the base of the convection zone are improved (i.e., brought into closer agreement with observation), but the agreement is made worse for other modes. Cyclic variations in the frequency spacings of the Sun as a function of frequency are presented as evidence for a discontinuity in the structure of the Sun, possibly located near the base of the convection zone.
Tonkin, Matthew J.; Tiedeman, Claire; Ely, D. Matthew; Hill, Mary C.
2007-01-01
The OPR-PPR program calculates the Observation-Prediction (OPR) and Parameter-Prediction (PPR) statistics that can be used to evaluate the relative importance of various kinds of data to simulated predictions. The data considered fall into three categories: (1) existing observations, (2) potential observations, and (3) potential information about parameters. The first two are addressed by the OPR statistic; the third is addressed by the PPR statistic. The statistics are based on linear theory and measure the leverage of the data, which depends on the location, the type, and possibly the time of the data being considered. For example, in a ground-water system the type of data might be a head measurement at a particular location and time. As a measure of leverage, the statistics do not take into account the value of the measurement. As linear measures, the OPR and PPR statistics require minimal computational effort once sensitivities have been calculated. Sensitivities need to be calculated for only one set of parameter values; commonly these are the values estimated through model calibration. OPR-PPR can calculate the OPR and PPR statistics for any mathematical model that produces the necessary OPR-PPR input files. In this report, OPR-PPR capabilities are presented in the context of using the ground-water model MODFLOW-2000 and the universal inverse program UCODE_2005. The method used to calculate the OPR and PPR statistics is based on the linear equation for prediction standard deviation. Using sensitivities and other information, OPR-PPR calculates (a) the percent increase in the prediction standard deviation that results when one or more existing observations are omitted from the calibration data set; (b) the percent decrease in the prediction standard deviation that results when one or more potential observations are added to the calibration data set; or (c) the percent decrease in the prediction standard deviation that results when potential information on one or more parameters is added.
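A compact sketch of the linear-theory quantity behind the OPR statistic, under the stated assumption that sensitivities are already available: the prediction standard deviation sqrt(dzᵀ(JᵀWJ)⁻¹dz) is recomputed with one observation omitted. The matrix and variable names are hypothetical and do not reflect OPR-PPR's file formats.

```python
import numpy as np

def prediction_sd(J: np.ndarray, w: np.ndarray, dz: np.ndarray) -> float:
    """Linear-theory prediction standard deviation
    s = sqrt(dz^T (J^T W J)^{-1} dz), with J the observation sensitivity
    matrix, w the observation weights and dz the prediction sensitivities."""
    XtWX = J.T @ (w[:, None] * J)
    return float(np.sqrt(dz @ np.linalg.solve(XtWX, dz)))

def opr_omit(J, w, dz, i):
    """Percent increase in prediction SD when observation i is omitted."""
    keep = np.arange(len(w)) != i
    s_all = prediction_sd(J, w, dz)
    s_omit = prediction_sd(J[keep], w[keep], dz)
    return 100.0 * (s_omit - s_all) / s_all

rng = np.random.default_rng(0)
J = rng.normal(size=(20, 3))      # 20 observations, 3 parameters
w = np.ones(20)
dz = np.array([1.0, 0.5, -0.2])   # prediction sensitivities
print(opr_omit(J, w, dz, i=7))
```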
77 FR 61604 - Exposure Modeling Public Meeting; Notice of Public Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-10
..., birds, reptiles, and amphibians: Model Parameterization and Knowledge base Development. 4. Standard Operating Procedure for calculating degradation kinetics. 5. Aquatic exposure modeling using field studies...
Direct Final Rule for Exhaust Emission Standards for 2012 and Later Model Year Snowmobiles
This action removes the NOx component from the Phase 3 emission standard calculation and defers action on the 2012 CO and HC emission standards portion of the court's remand to a separate rulemaking action.
Tibiofemoral wear in standard and non-standard squat: implication for total knee arthroplasty.
Fekete, Gusztáv; Sun, Dong; Gu, Yaodong; Neis, Patric Daniel; Ferreira, Ney Francisco; Innocenti, Bernardo; Csizmadia, Béla M
2017-01-01
Due to more resilient biomaterials, problems related to wear in total knee replacements (TKRs) have decreased but not disappeared. Among the design-related factors, wear is still the second most important mechanical factor limiting the lifetime of TKRs, and it is also highly influenced by the local kinematics of the knee. During wear experiments, a constant load and slide-roll ratio are frequently applied in tribo-tests, besides other important parameters. Nevertheless, numerous studies have demonstrated that a constant slide-roll ratio is not an accurate approach when TKR wear is modelled, and that instead of a constant load, a flexion-angle-dependent tibiofemoral force should be included in the wear model to obtain realistic results. A new analytical wear model, based upon Archard's law, is introduced, which can determine the effect of the tibiofemoral force and the varying slide-roll ratio on wear in the tibiofemoral contact under standard and non-standard squat movements. The calculated total wear with constant slide-roll during standard squat was 5.5 times higher than the reference value, while if total wear includes varying slide-roll during standard squat, the calculated wear was approximately 6.25 times higher. With regard to non-standard squat, total wear with constant slide-roll was 4.16 times higher than the reference value; if total wear included varying slide-roll, the calculated wear was approximately 4.75 times higher. It was demonstrated that the augmented force parameter alone caused a 65% higher wear volume, while the slide-roll ratio itself increased wear volume by 15% compared to the reference value. These results indicate that the force component has the major effect on wear propagation, and that non-standard squat should be proposed for TKR patients as a rehabilitation exercise.
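A rough sketch of the kind of Archard-law accumulation described above, with a flexion-dependent force and a varying slide-roll ratio; the functional forms and constants below are hypothetical stand-ins, not the paper's fitted model.

```python
import numpy as np

def archard_wear(k: float, H: float, theta: np.ndarray,
                 force: np.ndarray, slide_roll: np.ndarray,
                 travel_per_rad: np.ndarray) -> float:
    """Accumulated wear volume from Archard's law, V = (k/H) * integral of
    F(theta) * sliding increment, where the sliding increment is the
    slide-roll ratio times the contact travel per unit flexion angle."""
    integrand = force * slide_roll * travel_per_rad
    # trapezoidal rule over the flexion range
    return k / H * float(np.sum(0.5 * (integrand[1:] + integrand[:-1])
                                * np.diff(theta)))

theta = np.linspace(0.0, np.radians(120.0), 200)      # squat flexion (rad)
force = 1500.0 + 2000.0 * np.sin(theta)               # hypothetical TF force (N)
slide_roll = 0.2 + 0.6 * theta / theta[-1]            # hypothetical varying ratio
travel = np.full_like(theta, 0.03)                    # contact travel (m/rad)
print(archard_wear(k=1e-9, H=30e6, theta=theta, force=force,
                   slide_roll=slide_roll, travel_per_rad=travel))
```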
State-of-the-Art Calculation of the Decay Rate of Electroweak Vacuum in the Standard Model.
Chigusa, So; Moroi, Takeo; Shoji, Yutaro
2017-11-24
The decay rate of the electroweak (EW) vacuum is calculated in the framework of the standard model (SM) of particle physics, using the recent progress in the understanding of the decay rate of metastable vacuum in gauge theories. We give a manifestly gauge-invariant expression of the decay rate. We also perform a detailed numerical calculation of the decay rate. With the best-fit values of the SM parameters, we find that the decay rate of the EW vacuum per unit volume is about 10^{-554} Gyr^{-1} Gpc^{-3}; with the uncertainty in the top mass, the decay rate is estimated as 10^{-284}-10^{-1371} Gyr^{-1} Gpc^{-3}.
Standard model predictions for B → Kℓ⁺ℓ⁻ with form factors from lattice QCD.
Bouchard, Chris; Lepage, G Peter; Monahan, Christopher; Na, Heechang; Shigemitsu, Junko
2013-10-18
We calculate, for the first time using unquenched lattice QCD form factors, the standard model differential branching fractions dB/dq²(B → Kℓ⁺ℓ⁻) for ℓ = e, μ, τ and compare with experimental measurements by Belle, BABAR, CDF, and LHCb. We report on B(B → Kℓ⁺ℓ⁻) in q² bins used by experiment and predict B(B → Kτ⁺τ⁻) = (1.41±0.15)×10⁻⁷. We also calculate the ratio of branching fractions R_e^μ = 1.00029(69) and predict R_ℓ^τ = 1.176(40), for ℓ = e, μ. Finally, we calculate the "flat term" in the angular distribution of the differential decay rate, F_H^(e,μ,τ), in experimentally motivated q² bins.
D → Kℓν semileptonic decay using lattice QCD with HISQ at physical pion masses
NASA Astrophysics Data System (ADS)
Chakraborty, Bipasha; Davies, Christine; Koponen, Jonna; Lepage, G. Peter
2018-03-01
The quark flavor sector of the Standard Model is a fertile ground to look for new physics effects through a unitarity test of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. We present a lattice QCD calculation of the scalar and the vector form factors (over a large q² region including q² = 0) associated with the D → Kℓν semileptonic decay. This calculation will then allow us to determine the central CKM matrix element, Vcs, in the Standard Model by comparing the lattice QCD results for the form factors and the experimental decay rate. This form factor calculation has been performed on the Nf = 2+1+1 MILC HISQ ensembles with physical light quark masses.
Standards for Community College Library Facilities.
ERIC Educational Resources Information Center
California State Postsecondary Education Commission, Sacramento.
This report contains proposed standards for community college library facilities developed by the California Postsecondary Education Commission. Formulae for calculating stack space, staff space, reader station space, and total space are included in the report. Three alternative models for revising the present library standards were considered:…
Perceptual video quality assessment in H.264 video coding standard using objective modeling.
Karthikeyan, Ramasamy; Sainarayanan, Gopalakrishnan; Deepa, Subramaniam Nachimuthu
2014-01-01
Since usage of digital video is widespread nowadays, quality considerations have become essential, and industry demand for video quality measurement is rising. This proposal provides a method of perceptual quality assessment for the H.264 standard encoder using objective modeling. For this purpose, quality impairments are calculated and a model is developed to compute the perceptual video quality metric based on a no-reference method. Because of the subtle differences between the original video and the encoded video, the quality of the encoded picture is degraded; this quality difference is introduced by encoding processes such as intra- and inter-prediction. The proposed model takes into account the artifacts introduced by these spatial and temporal activities in hybrid block-based coding methods, and an objective modeling of these artifacts into a subjective quality estimate is proposed. The proposed model calculates the objective quality metric using subjective impairments (blockiness, blur, and jerkiness), in contrast to the existing bitrate-only calculation defined in the ITU G.1070 model. The accuracy of the proposed perceptual video quality metrics is compared against popular full-reference objective methods as defined by VQEG.
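A deliberately simplified sketch of pooling the three impairments into a no-reference score; the weights and the mapping to a 1-5 scale are hypothetical placeholders for coefficients that such a model would fit to subjective data.

```python
def perceptual_quality(blockiness: float, blur: float, jerkiness: float,
                       weights=(0.4, 0.35, 0.25)) -> float:
    """Hypothetical no-reference quality score: impairments are assumed
    normalized to [0, 1] and pooled with weights fit to subjective scores;
    the result is mapped to a 1-5 mean-opinion-score scale."""
    w_block, w_blur, w_jerk = weights
    impairment = w_block * blockiness + w_blur * blur + w_jerk * jerkiness
    return 5.0 - 4.0 * min(max(impairment, 0.0), 1.0)

print(perceptual_quality(blockiness=0.2, blur=0.1, jerkiness=0.05))
```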
Liang, Xue; Ji, Hai-yan; Wang, Peng-xin; Rao, Zhen-hong; Shen, Bing-hui
2010-01-01
The multiplicative scatter correction (MSC) preprocessing method was used to effectively reject noise in the original spectra produced by environmental physical factors. The principal components of the near-infrared spectra were then calculated by nonlinear iterative partial least squares (NIPALS) before building the back-propagation artificial neural network (BP-ANN) model, with the number of principal components chosen by cross validation. The calculated principal components were used as the inputs of the artificial neural network model, which was used to relate the chlorophyll content of winter wheat to the reflectance spectrum and thereby predict it. The correlation coefficient (r) of the calibration set was 0.9604, while the standard deviation (SD) and relative standard deviation (RSD) were 0.187 and 5.18%, respectively. The correlation coefficient (r) of the prediction set was 0.9600, and the standard deviation (SD) and relative standard deviation (RSD) were 0.145 and 4.21%, respectively. This means that the MSC-ANN algorithm can effectively reject noise in the original spectra produced by environmental physical factors and establish an accurate model for predicting the chlorophyll content of living leaves, suitable for replacing the classical method and meeting the needs of fast analysis of agricultural products.
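A minimal implementation of the MSC step itself (the standard regress-each-spectrum-on-the-mean correction); the synthetic spectra below are placeholders, and the NIPALS/BP-ANN stages are not reproduced.

```python
import numpy as np

def msc(spectra: np.ndarray) -> np.ndarray:
    """Multiplicative scatter correction: each spectrum is regressed on the
    mean spectrum, x_i ~ a_i + b_i * m, then corrected as (x_i - a_i) / b_i."""
    m = spectra.mean(axis=0)
    corrected = np.empty_like(spectra, dtype=float)
    for i, x in enumerate(spectra):
        b, a = np.polyfit(m, x, deg=1)     # slope, intercept
        corrected[i] = (x - a) / b
    return corrected

rng = np.random.default_rng(1)
base = 1.0 + np.sin(np.linspace(0.0, 3.0, 256))     # stand-in mean spectrum
scale = rng.uniform(0.7, 1.3, size=(10, 1))         # multiplicative scatter
offset = rng.uniform(-0.1, 0.1, size=(10, 1))       # additive baseline
raw = scale * base + offset + 0.01 * rng.normal(size=(10, 256))
print(msc(raw).std(axis=0).mean())                  # scatter largely removed
```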
A comparison of three algebraic stress closures for combustor flow calculations
NASA Technical Reports Server (NTRS)
Nikjooy, M.; So, R. M. C.; Hwang, B. C.
1985-01-01
A comparison is made of the performance of two locally nonequilibrium and one equilibrium algebraic stress closures in calculating combustor flows. Effects of four different pressure-strain models on these closure models are also analyzed. The results show that the pressure-strain models have a much greater influence on the calculated mean velocity and turbulence field than the algebraic stress closures, and that the best mean strain model for the pressure-strain terms is that proposed by Launder, Reece and Rodi (1975). However, the equilibrium algebraic stress closure with the Rotta return-to-isotropy model (1951) for the pressure-strain terms gives as good a correlation with measurements as when the Launder et al. mean strain model is included in the pressure-strain model. Finally, comparison of the calculations with the standard k-epsilon closure results show that the algebraic stress closures are better suited for simple turbulent flow calculations.
49 CFR 531.5 - Fuel economy standards.
Code of Federal Regulations, 2011 CFR
2011-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the average fuel economy standards in Table I, expressed in miles per gallon, in... passenger automobile fleet shall comply with the fuel economy level calculated for that model year according...
49 CFR 531.5 - Fuel economy standards.
Code of Federal Regulations, 2012 CFR
2012-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the average fuel economy standards in Table I, expressed in miles per gallon, in... passenger automobile fleet shall comply with the fuel economy level calculated for that model year according...
49 CFR 531.5 - Fuel economy standards.
Code of Federal Regulations, 2014 CFR
2014-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the fleet average fuel economy standards in Table I, expressed in miles per... passenger automobile fleet shall comply with the fleet average fuel economy level calculated for that model...
49 CFR 531.5 - Fuel economy standards.
Code of Federal Regulations, 2013 CFR
2013-10-01
... ADMINISTRATION, DEPARTMENT OF TRANSPORTATION PASSENGER AUTOMOBILE AVERAGE FUEL ECONOMY STANDARDS § 531.5 Fuel... automobiles shall comply with the fleet average fuel economy standards in Table I, expressed in miles per... passenger automobile fleet shall comply with the fleet average fuel economy level calculated for that model...
AGARD standard aeroelastic configurations for dynamic response. 1: Wing 445.6
NASA Technical Reports Server (NTRS)
Yates, E. Carson, Jr.
1988-01-01
This report contains experimental flutter data for the AGARD 3-D swept tapered standard configuration Wing 445.6, along with related descriptive data of the model properties required for comparative flutter calculations. As part of a cooperative AGARD-SMP program, guided by the Sub-Committee on Aeroelasticity, this standard configuration may serve as a common basis for comparison of calculated and measured aeroelastic behavior. These comparisons will promote a better understanding of the assumptions, approximations and limitations underlying the various aerodynamic methods applied, thus pointing the way to further improvements.
Hong, Cheng William; Mamidipalli, Adrija; Hooker, Jonathan C.; Hamilton, Gavin; Wolfson, Tanya; Chen, Dennis H.; Dehkordy, Soudabeh Fazeli; Middleton, Michael S.; Reeder, Scott B.; Loomba, Rohit; Sirlin, Claude B.
2017-01-01
Background: Proton density fat fraction (PDFF) estimation requires spectral modeling of the hepatic triglyceride (TG) signal. Deviations in the TG spectrum may occur, leading to bias in PDFF quantification. Purpose: To investigate the effects of varying six-peak TG spectral models on PDFF estimation bias. Study Type: Retrospective secondary analysis of prospectively acquired clinical research data. Population: Forty-four adults with biopsy-confirmed nonalcoholic steatohepatitis. Field Strength/Sequence: Confounder-corrected chemical-shift-encoded 3T MRI (using a 2D multiecho gradient-recalled echo technique with magnitude reconstruction) and MR spectroscopy. Assessment: In each patient, 61 pairs of colocalized MRI-PDFF and MRS-PDFF values were estimated: one pair used the standard six-peak spectral model; the other 60 were six-peak variants calculated by adjusting spectral model parameters over their biologically plausible ranges. MRI-PDFF values calculated using each variant model and the standard model were compared, and the agreement between MRI-PDFF and MRS-PDFF was assessed. Statistical Tests: MRS-PDFF and MRI-PDFF were summarized descriptively. Bland–Altman (BA) analyses were performed between PDFF values calculated using each variant model and the standard model. Linear regressions were performed between BA biases and mean PDFF values for each variant model, and between MRI-PDFF and MRS-PDFF. Results: Using the standard model, the mean MRS-PDFF of the study population was 17.9 ± 8.0% (range: 4.1–34.3%). The difference between the highest and lowest mean variant MRI-PDFF values was 1.5%. Relative to the standard model, the model with the greatest absolute BA bias overestimated PDFF by 1.2%. Bias increased with increasing PDFF (P < 0.0001 for 59 of the 60 variant models). MRI-PDFF and MRS-PDFF agreed closely for all variant models (R² = 0.980, P < 0.0001). Data Conclusion: Over a wide range of hepatic fat content, PDFF estimation is robust across the biologically plausible range of TG spectra. Although absolute estimation bias increased with higher PDFF, its magnitude was small and unlikely to be clinically meaningful. Level of Evidence: 3. Technical Efficacy: Stage 2. PMID:28851124
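A short sketch of the Bland–Altman computation used in the analysis, assuming paired PDFF series from a variant and the standard model; the synthetic data are hypothetical.

```python
import numpy as np

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bland-Altman bias and 95% limits of agreement between two paired
    measurement series (e.g., variant-model vs. standard-model PDFF)."""
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

rng = np.random.default_rng(2)
pdff_standard = rng.uniform(4.0, 35.0, size=44)        # % PDFF, 44 patients
pdff_variant = pdff_standard + 0.5 + rng.normal(0.0, 0.3, size=44)
print(bland_altman(pdff_variant, pdff_standard))
```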
Calculations of turbulent separated flows
NASA Technical Reports Server (NTRS)
Zhu, J.; Shih, T. H.
1993-01-01
A numerical study of incompressible turbulent separated flows is carried out by using two-equation turbulence models of the K-epsilon type. On the basis of realizability analysis, a new formulation of the eddy-viscosity is proposed which ensures the positiveness of turbulent normal stresses - a realizability condition that most existing two-equation turbulence models are unable to satisfy. The present model is applied to calculate two backward-facing step flows. Calculations with the standard K-epsilon model and a recently developed RNG-based K-epsilon model are also made for comparison. The calculations are performed with a finite-volume method. A second-order accurate differencing scheme and sufficiently fine grids are used to ensure the numerical accuracy of solutions. The calculated results are compared with the experimental data for both mean and turbulent quantities. The comparison shows that the present model performs quite well for separated flows.
Higgs decays to Z Z and Z γ in the standard model effective field theory: An NLO analysis
NASA Astrophysics Data System (ADS)
Dawson, S.; Giardino, P. P.
2018-05-01
We calculate the complete one-loop electroweak corrections to the inclusive H → ZZ and H → Zγ decays in the dimension-6 extension of the Standard Model Effective Field Theory (SMEFT). The corrections to H → ZZ are computed for on-shell Z bosons and are a precursor to the physical H → Zff̄ calculation. We present compact numerical formulas for our results and demonstrate that the logarithmic contributions that result from the renormalization group evolution of the SMEFT coefficients are larger than the finite next-to-leading-order contributions to the decay widths. As a byproduct of our calculation, we obtain the first complete result for the finite corrections to Gμ in the SMEFT.
Big bang nucleosynthesis - The standard model and alternatives
NASA Technical Reports Server (NTRS)
Schramm, David N.
1991-01-01
The standard homogeneous-isotropic calculation of the big bang cosmological model is reviewed, and alternate models are discussed. The standard model is shown to agree with the light element abundances for He-4, H-2, He-3, and Li-7 that are available. Improved observational data from recent LEP collider and SLC results are discussed. The data agree with the standard model in terms of the number of neutrinos, and provide improved information regarding neutron lifetimes. Alternate models are reviewed which describe different scenarios for decaying matter or quark-hadron induced inhomogeneities. The baryonic density relative to the critical density in the alternate models is similar to that of the standard model when they are made to fit the abundances. This reinforces the conclusion that the baryonic density relative to critical density is about 0.06, and also reinforces the need for both nonbaryonic dark matter and dark baryonic matter.
Systematic void fraction studies with RELAP5, FRANCESCA and HECHAN
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stosic, Z.; Preusche, G.
1996-08-01
In enhancing the scope of standard thermal-hydraulic code applications beyond their capabilities, i.e., coupling with a one- and/or three-dimensional kinetics core model, the void fraction transferred from thermal-hydraulics to the core model plays a determining role in the normal operating range and at high core flow, as the generated heat and axial power profiles are direct functions of the void distribution in the core. Hence, it is very important to know whether the void-quality models in the programs to be coupled are compatible, so as to allow the interactive exchange of data based on these constitutive void-quality relations. The presented void fraction study is performed in order to give the basis for the conclusion whether a transient core simulation using the RELAP5 void fractions can calculate the axial power shapes adequately. For this purpose, the void fractions calculated with RELAP5 are compared with those calculated by the BWR safety code for licensing, FRANCESCA, and by the best-estimate model for pre- and post-dryout calculation in a BWR heated channel, HECHAN. In addition, a comparison with standard experimental void-quality benchmark tube data is performed for the HECHAN code.
Ernren, A.T.; Arthur, R.; Glynn, P.D.; McMurry, J.
1999-01-01
Four researchers were asked to provide independent modeled estimates of the solubility of a radionuclide solid phase, specifically Pu(OH)4, under five specified sets of conditions. The objectives of the study were to assess the variability in the results obtained and to determine the primary causes for this variability. In the exercise, modelers were supplied with the composition, pH and redox properties of the water and with a description of the mineralogy of the surrounding fracture system. A standard thermodynamic data base was provided to all modelers. Each modeler was encouraged to use other data bases in addition to the standard data base and to try different approaches to solving the problem. In all, about fifty approaches were used, some of which included a large number of solubility calculations. For each of the five test cases, the calculated solubilities from different approaches covered several orders of magnitude. The variability resulting from the use of different thermodynamic data bases was, in most cases, far smaller than that resulting from the use of different approaches to solving the problem.
Precision determination of weak charge of ¹³³Cs from atomic parity violation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Porsev, S. G.; School of Physics, University of New South Wales, Sydney, New South Wales 2052; Petersburg Nuclear Physics Institute, Gatchina, Leningrad District 188300
2010-08-01
We discuss results of the most accurate to-date test of the low-energy electroweak sector of the standard model of elementary particles. Combining previous measurements with our high-precision calculations we extracted the weak charge of the ¹³³Cs nucleus, Q_W = −73.16(29)_exp(20)_th [S. G. Porsev, K. Beloy, and A. Derevianko, Phys. Rev. Lett. 102, 181601 (2009)]. The result is in perfect agreement with Q_W^SM predicted by the standard model, Q_W^SM = −73.16(3), and confirms the energy dependence (or running) of the electroweak interaction and places constraints on a variety of new physics scenarios beyond the standard model. In particular, we increase the lower limit on the masses of extra Z-bosons predicted by models of grand unification and string theories. This paper provides additional details to the earlier paper. We discuss large-scale calculations in the framework of the coupled-cluster method, including full treatment of single, double, and valence triple excitations. To determine the accuracy of the calculations we computed energies, electric-dipole amplitudes, and hyperfine-structure constants. An extensive comparison with high-accuracy experimental data was carried out.
Realized Volatility Analysis in A Spin Model of Financial Markets
NASA Astrophysics Data System (ADS)
Takaishi, Tetsuya
We calculate the realized volatility of returns in the spin model of financial markets and examine the returns standardized by the realized volatility. We find that moments of the standardized returns agree with the theoretical values of standard normal variables. This is the first evidence that the return distributions of the spin financial markets are consistent with a finite-variance of mixture of normal distributions that is also observed empirically in real financial markets.
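A minimal sketch of the standardization being tested: realized volatility as the square root of summed squared intraday returns, with daily returns then divided by it. Gaussian toy returns stand in for the spin-model output; the printed moments should be near the standard-normal values 0, 1, and 3.

```python
import numpy as np

def realized_volatility(intraday_returns: np.ndarray) -> np.ndarray:
    """Daily realized volatility: sqrt of the sum of squared intraday
    returns (rows = days, columns = intraday intervals)."""
    return np.sqrt((intraday_returns ** 2).sum(axis=1))

rng = np.random.default_rng(3)
r = rng.normal(0.0, 0.001, size=(250, 78))   # hypothetical 5-min returns
rv = realized_volatility(r)
daily = r.sum(axis=1)
z = daily / rv                               # RV-standardized daily returns
kurtosis = ((z - z.mean()) ** 4).mean() / z.var() ** 2
print(z.mean(), z.var(), kurtosis)           # ~0, ~1, ~3
```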
Sensitivity of atmospheric muon flux calculation to low energy hadronic interaction models
NASA Astrophysics Data System (ADS)
Djemil, T.; Attallah, R.; Capdevielle, J. N.
2007-10-01
We investigate in this paper the impact of some up-to-date hadronic interaction models on the calculation of the atmospheric muon flux. Calculations are carried out with the air shower simulation code CORSIKA in combination with the hadronic interaction models FLUKA and UrQMD below 80 GeV/nucleon and NEXUS elsewhere. We also examine the atmospheric effects using two different parametrizations of the US standard atmosphere. The cosmic ray spectra of protons and α particles, the only primary particles considered here, are taken according to the force field model which describes properly solar modulation. Numerical results are compared with the BESS-2001 experimental data.
Accuracy of contacts calculated from 3D images of occlusal surfaces.
DeLong, R; Knorr, S; Anderson, G C; Hodges, J; Pintado, M R
2007-06-01
Compare occlusal contacts calculated from 3D virtual models created from clinical records to contacts identified clinically using shimstock and transillumination. Upper and lower full-arch alginate impressions and vinyl polysiloxane centric interocclusal records were made of 12 subjects. Stone casts made from the alginate impressions and the interocclusal records were optically scanned. Three-dimensional virtual models of the dental arches and interocclusal records were constructed using the Virtual Dental Patient software. Contacts calculated from the virtual interocclusal records and from the aligned upper and lower virtual arch models were compared to those identified clinically using 0.01 mm shimstock and transillumination of the interocclusal record. Virtual contacts and transillumination contacts were compared by anatomical region and by contacting tooth pairs to shimstock contacts. Because there is no accepted standard for identifying occlusal contacts, methods were compared in pairs with one labeled "standard" and the second labeled "test". Accuracy was defined as the number of contacts and non-contacts of the "test" that were in agreement with the "standard" divided by the total number of contacts and non-contacts of the "standard". Accuracy of occlusal contacts calculated from virtual interocclusal records and aligned virtual casts compared to transillumination was 0.87 ± 0.05 and 0.84 ± 0.06 by region and 0.95 ± 0.07 and 0.95 ± 0.05 by tooth, respectively. Comparisons with shimstock were 0.85 ± 0.15 (record), 0.84 ± 0.14 (casts), and 0.81 ± 0.17 (transillumination). The virtual record, aligned virtual arches, and transillumination methods of identifying contacts are equivalent, and show better agreement with each other than with the shimstock method.
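The accuracy definition above translates directly into code; a small sketch with hypothetical binary contact vectors (1 = contact, 0 = non-contact at each site).

```python
import numpy as np

def contact_accuracy(test: np.ndarray, standard: np.ndarray) -> float:
    """Accuracy as defined above: agreements on contacts plus agreements on
    non-contacts, divided by the number of sites in the standard."""
    return float((test.astype(bool) == standard.astype(bool)).mean())

standard = np.array([1, 1, 0, 0, 1, 0, 1, 0])  # e.g., shimstock contacts
test = np.array([1, 0, 0, 0, 1, 0, 1, 1])      # e.g., virtual-model contacts
print(contact_accuracy(test, standard))         # 0.75
```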
The early universe as a probe of new physics
NASA Astrophysics Data System (ADS)
Bird, Christopher Shane
The Standard Model of Particle Physics has been verified to unprecedented precision in the last few decades. However there are still phenomena in nature which cannot be explained, and as such new theories will be required. Since terrestrial experiments are limited in both the energy and precision that can be probed, new methods are required to search for signs of physics beyond the Standard Model. In this dissertation, I demonstrate how these theories can be probed by searching for remnants of their effects in the early Universe. In particular I focus on three possible extensions of the Standard Model: the addition of massive neutral particles as dark matter, the addition of charged massive particles, and the existence of higher dimensions. For each new model, I review the existing experimental bounds and the potential for discovering new physics in the next generation of experiments. For dark matter, I introduce six simple models which I have developed, and which involve a minimum amount of new physics, as well as reviewing one existing model of dark matter. For each model I calculate the latest constraints from astrophysics experiments, nuclear recoil experiments, and collider experiments. I also provide motivations for studying sub-GeV mass dark matter, and propose the possibility of searching for light WIMPs in the decay of B-mesons and other heavy particles. For charged massive relics, I introduce and review the recently proposed model of catalyzed Big Bang nucleosynthesis. In particular I review the production of 6Li by this mechanism, and calculate the abundance of 7Li after destruction of 7Be by charged relics. The result is that for certain natural relics CBBN is capable of removing tensions between the predicted and observed 6Li and 7Li abundances which are present in the standard model of BBN. For extra dimensions, I review the constraints on the ADD model from both astrophysics and collider experiments. I then calculate the constraints on this model from Big Bang nucleosynthesis in the early Universe. I also calculate the bounds on this model from Kaluza-Klein gravitons trapped in the galaxy which decay to electron-positron pairs, using the measured 511 keV gamma-ray flux. For each example of new physics, I find that remnants of the early Universe provide constraints on the models which are complementary to the existing constraints from colliders and other terrestrial experiments.
Timber harvest calculations on a National Forest: A case study
NASA Astrophysics Data System (ADS)
Carey, Henry H.
1983-03-01
Harvest calculations determine sawtimber flows from public lands and are closely scrutinized by a wide spectrum of forest users. This study examines the reliability of harvest calculations on a single national forest in New Mexico. Forest Service determinations of an array of variables were reviewed and evaluated. The study revealed a lack of precision in Forest Service adherence to self-imposed procedural standards governing the calculation process. Timber sales have taken place on lands where such standards prohibit harvesting, and these lands have been included in annual harvest calculations. Assumptions required by a mathematical model used by the Forest Service in calculating the harvest were not followed in the subsequent implementation of the harvest level. These factors suggest that the Forest Service could have significantly overstated the annual harvest rate for the first decade. Opportunities exist to improve the calculation, and the benefits realized may greatly exceed the additional costs of implementation.
Theoretical uncertainties in the calculation of supersymmetric dark matter observables
NASA Astrophysics Data System (ADS)
Bergeron, Paul; Sandick, Pearl; Sinha, Kuver
2018-05-01
We estimate the current theoretical uncertainty in supersymmetric dark matter predictions by comparing several state-of-the-art calculations within the minimal supersymmetric standard model (MSSM). We consider standard neutralino dark matter scenarios — coannihilation, well-tempering, pseudoscalar resonance — and benchmark models both in the pMSSM framework and in frameworks with Grand Unified Theory (GUT)-scale unification of supersymmetric mass parameters. The pipelines we consider are constructed from the publicly available software packages SOFTSUSY, SPheno, FeynHiggs, SusyHD, micrOMEGAs, and DarkSUSY. We find that the theoretical uncertainty in the relic density as calculated by different pipelines, in general, far exceeds the statistical errors reported by the Planck collaboration. In GUT models, in particular, the relative discrepancies in the results reported by different pipelines can be as much as a few orders of magnitude. We find that these discrepancies are especially pronounced for cases where the dark matter physics relies critically on calculations related to electroweak symmetry breaking, which we investigate in detail, and for coannihilation models, where there is heightened sensitivity to the sparticle spectrum. The dark matter annihilation cross section today and the scattering cross section with nuclei also suffer appreciable theoretical uncertainties, which, as experiments reach the relevant sensitivities, could lead to uncertainty in conclusions regarding the viability or exclusion of particular models.
Presents the results of the degradation kinetics project and describes a general approach for calculating and selecting representative half-life values from soil and aquatic transformation studies for risk assessment and exposure modeling purposes.
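As a sketch of what such a half-life calculation looks like under the simplest assumption, single first-order kinetics, which is only one of the kinetic models such guidance covers; the residue data are synthetic:

```python
import numpy as np

# Synthetic residue data following C(t) = C0 * exp(-k t)
t = np.array([0.0, 3, 7, 14, 30, 60])       # days after application
C = np.array([100, 85, 70, 48, 24, 6.0])    # percent of applied

slope, _ = np.polyfit(t, np.log(C), 1)      # fit ln C = ln C0 - k t
k = -slope                                  # first-order rate constant, 1/day
print(f"DT50 = {np.log(2) / k:.1f} days")   # representative half-life
```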
Model Energy Efficiency Program Impact Evaluation Guide
Find guidance on model approaches for calculating energy, demand, and emissions savings resulting from energy efficiency programs. The guide describes several standard approaches that can be used to make these programs more efficient.
NASA Astrophysics Data System (ADS)
Seibt, Joachim; Sláma, Vladislav; Mančal, Tomáš
2016-12-01
Standard application of the Frenkel exciton model neglects resonance coupling between collective molecular aggregate states with different numbers of excitations. These inter-band coupling terms are, however, of the same magnitude as the intra-band coupling between singly excited states. We systematically derive the Frenkel exciton model from quantum chemical considerations, and identify it as a variant of the configuration interaction method. We discuss all non-negligible couplings between collective aggregate states, and provide compact formulae for their calculation. We calculate absorption spectra of a molecular aggregate of carotenoids and identify significant band shifts as a result of inter-band coupling. The presence of inter-band coupling terms requires renormalization of the system-bath coupling with respect to the standard formulation, but the renormalization effects are found to be weak. We present a detailed discussion of a molecular dimer and calculate its time-resolved two-dimensional Fourier-transformed spectra to find weak but noticeable effects of peak amplitude redistribution due to inter-band coupling.
Fast calculation of the line-spread-function by transversal directions decoupling
NASA Astrophysics Data System (ADS)
Parravicini, Jacopo; Tartara, Luca; Hasani, Elton; Tomaselli, Alessandra
2016-07-01
We propose a simplified method to calculate the optical spread function of a paradigmatic system constituted by a pupil-lens with a line-shaped illumination ('line-spread-function'). Our approach is based on decoupling the two transversal directions of the beam and treating the propagation by means of the Fourier optics formalism. This requires simpler calculations than the more usual Bessel-function-based method. The model is discussed and compared with standard calculation methods by carrying out computer simulations. The proposed approach is found to be much faster than the Bessel-function-based one (CPU time ≲ 5% of the standard method), while the results of the two methods present a very good mutual agreement.
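A rough sketch of the decoupling idea under simple assumptions (a uniform, slit-like 1D pupil in the Fraunhofer regime): the line illumination is invariant along its own axis, so a single 1D Fourier transform of the pupil, squared, yields the line-spread function. The sampling and pupil shape are illustrative, not the paper's configuration:

```python
import numpy as np

N, half_width = 4096, 0.2                        # samples; pupil half-width (arb. units)
x = np.linspace(-1.0, 1.0, N)
pupil = (np.abs(x) <= half_width).astype(float)  # 1D pupil along the transverse axis

# One 1D Fraunhofer step: the line-spread function is |FT(pupil)|^2
field = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(pupil)))
lsf = np.abs(field) ** 2
lsf /= lsf.max()                                 # normalized LSF (a sinc^2 profile here)
```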
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Myhre, G.; Penner, J. E.; Randles, C.; Samset, B.; Schulz, M.; Yu, H.; Zhou, C.
2012-09-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations, and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in nine participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.51 W m-2 and the inter-model standard deviation is 0.70 W m-2, corresponding to a relative standard deviation of 15%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.26 W m-2, and the standard deviation increases to 1.21 W m-2, corresponding to a significant relative standard deviation of 96%. However, the top-of-atmosphere forcing variability owing to absorption is low, with relative standard deviations of 9% clear-sky and 12% all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about half of the overall sulfate forcing diversity of 0.13 W m-2 in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that requires further attention.
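The diversity figures quoted above are plain inter-model statistics; a minimal sketch with invented forcing values for nine models:

```python
import numpy as np

# Hypothetical global-mean all-sky TOA forcings (W m-2) from nine models
F = np.array([-4.1, -4.8, -4.3, -5.2, -4.6, -3.9, -4.7, -4.5, -4.5])

mean, std = F.mean(), F.std(ddof=1)   # inter-model mean and standard deviation
print(f"mean = {mean:.2f} W m-2, std = {std:.2f} W m-2 "
      f"({100 * std / abs(mean):.0f}% relative)")
```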
Les Houches 2015: Physics at TeV Colliders Standard Model Working Group Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, J.R.; et al.
This Report summarizes the proceedings of the 2015 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) the new PDF4LHC parton distributions, (III) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (IV) a host of phenomenological studies essential for comparing LHC data from Run I with theoretical predictions and projections for future measurements in Run II, and (V) new developments in Monte Carlo event generators.
Les Houches 2017: Physics at TeV Colliders Standard Model Working Group Report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andersen, J.R.; et al.
This Report summarizes the proceedings of the 2017 Les Houches workshop on Physics at TeV Colliders. Session 1 dealt with (I) new developments relevant for high precision Standard Model calculations, (II) theoretical uncertainties and dataset dependence of parton distribution functions, (III) new developments in jet substructure techniques, (IV) issues in the theoretical description of the production of Standard Model Higgs bosons and how to relate experimental measurements, (V) phenomenological studies essential for comparing LHC data from Run II with theoretical predictions and projections for future measurements, and (VI) new developments in Monte Carlo event generators.
Hidden sector dark matter and the Galactic Center gamma-ray excess: a closer look
Escudero, Miguel; Witte, Samuel J.; Hooper, Dan
2017-11-24
Stringent constraints from direct detection experiments and the Large Hadron Collider motivate us to consider models in which the dark matter does not directly couple to the Standard Model, but that instead annihilates into hidden sector particles which ultimately decay through small couplings to the Standard Model. We calculate the gamma-ray emission generated within the context of several such hidden sector models, including those in which the hidden sector couples to the Standard Model through the vector portal (kinetic mixing with Standard Model hypercharge), through the Higgs portal (mixing with the Standard Model Higgs boson), or both. In each case, we identify broad regions of parameter space in which the observed spectrum and intensity of the Galactic Center gamma-ray excess can easily be accommodated, while providing an acceptable thermal relic abundance and remaining consistent with all current constraints. Here, we also point out that cosmic-ray antiproton measurements could potentially discriminate some hidden sector models from more conventional dark matter scenarios.
Hidden Sector Dark Matter and the Galactic Center Gamma-Ray Excess: A Closer Look
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escudero, Miguel; Witte, Samuel J.; Hooper, Dan
2017-09-20
Stringent constraints from direct detection experiments and the Large Hadron Collider motivate us to consider models in which the dark matter does not directly couple to the Standard Model, but that instead annihilates into hidden sector particles which ultimately decay through small couplings to the Standard Model. We calculate the gamma-ray emission generated within the context of several such hidden sector models, including those in which the hidden sector couples to the Standard Model through the vector portal (kinetic mixing with Standard Model hypercharge), through the Higgs portal (mixing with the Standard Model Higgs boson), or both. In each case, we identify broad regions of parameter space in which the observed spectrum and intensity of the Galactic Center gamma-ray excess can easily be accommodated, while providing an acceptable thermal relic abundance and remaining consistent with all current constraints. We also point out that cosmic-ray antiproton measurements could potentially discriminate some hidden sector models from more conventional dark matter scenarios.
Hidden sector dark matter and the Galactic Center gamma-ray excess: a closer look
NASA Astrophysics Data System (ADS)
Escudero, Miguel; Witte, Samuel J.; Hooper, Dan
2017-11-01
Stringent constraints from direct detection experiments and the Large Hadron Collider motivate us to consider models in which the dark matter does not directly couple to the Standard Model, but that instead annihilates into hidden sector particles which ultimately decay through small couplings to the Standard Model. We calculate the gamma-ray emission generated within the context of several such hidden sector models, including those in which the hidden sector couples to the Standard Model through the vector portal (kinetic mixing with Standard Model hypercharge), through the Higgs portal (mixing with the Standard Model Higgs boson), or both. In each case, we identify broad regions of parameter space in which the observed spectrum and intensity of the Galactic Center gamma-ray excess can easily be accommodated, while providing an acceptable thermal relic abundance and remaining consistent with all current constraints. We also point out that cosmic-ray antiproton measurements could potentially discriminate some hidden sector models from more conventional dark matter scenarios.
Sakurai Prize: The Future of Higgs Physics
NASA Astrophysics Data System (ADS)
Dawson, Sally
2017-01-01
The discovery of the Higgs boson relied critically on precision calculations. The quantum contributions from the Higgs boson to the W and top quark masses suggested long before the Higgs discovery that a Standard Model Higgs boson should have a mass in the 100-200 GeV range. The experimental extraction of Higgs properties requires normalization to the predicted Higgs production and decay rates, for which higher order corrections are also essential. As Higgs physics becomes a mature subject, more and more precise calculations will be required. If there is new physics at high scales, it will contribute to the predictions, and precision Higgs physics will be a window onto physics beyond the Standard Model.
Can a pseudo-Nambu-Goldstone Higgs lead to symmetry non-restoration?
NASA Astrophysics Data System (ADS)
Kilic, Can; Swaminathan, Sivaramakrishnan
2016-01-01
The calculation of finite temperature contributions to the scalar potential in a quantum field theory is similar to the calculation of loop corrections at zero temperature. In natural extensions of the Standard Model where loop corrections to the Higgs potential cancel between Standard Model degrees of freedom and their symmetry partners, it is interesting to contemplate whether finite temperature corrections also cancel, raising the question of whether a broken phase of electroweak symmetry may persist at high temperature. It is well known that this does not happen in supersymmetric theories because the thermal contributions of bosons and fermions do not cancel each other. However, for theories with same spin partners, the answer is less obvious. Using the Twin Higgs model as a benchmark, we show that although thermal corrections do cancel at the level of quadratic divergences, subleading corrections still drive the system to a restored phase. We further argue that our conclusions generalize to other well-known extensions of the Standard Model where the Higgs is rendered natural by being the pseudo-Nambu-Goldstone mode of an approximate global symmetry.
Burst strength of tubing and casing based on twin shear unified strength theory.
Lin, Yuanhua; Deng, Kuanhai; Sun, Yongxing; Zeng, Dezhi; Liu, Wanying; Kong, Xiangwei; Singh, Ambrish
2014-01-01
The internal pressure strength of tubing and casing often cannot satisfy the design requirements in high-pressure, high-temperature, and high-H2S gas wells. Moreover, the practical safety coefficient of some wells is lower than the design standard according to the current API 5C3 standard, which complicates the design. ISO 10400:2007 provides a model that calculates the burst strength of tubing and casing better than the API 5C3 standard, but its accuracy is not satisfactory, because about 50 percent of the predicted values are markedly higher than the real burst values. Therefore, to improve the strength design of tubing and casing, this paper derives the plastic limit pressure of tubing and casing under internal pressure by applying the twin shear unified strength theory. Based on a study of how the yield-to-tensile strength ratio and mechanical properties influence the burst strength of tubing and casing, a more precise calculation model of tubing and casing burst strength has been established that accounts for material hardening and the intermediate principal stress. Numerical and experimental comparisons show that the new burst strength model is much closer to the real burst values than other models. The research results provide an important reference for optimizing the tubing and casing design of deep and ultra-deep wells.
Application of linear regression analysis in accuracy assessment of rolling force calculations
NASA Astrophysics Data System (ADS)
Poliak, E. I.; Shim, M. K.; Kim, G. S.; Choo, W. Y.
1998-10-01
Efficient operation of the computational models employed in process control systems requires periodic assessment of the accuracy of their predictions. Linear regression is proposed as a tool that allows systematic and random prediction errors to be separated from those related to measurements. A quantitative characteristic of the model's predictive ability is introduced in addition to standard statistical tests for model adequacy. Rolling force calculations are considered as an example application. However, the outlined approach can be used to assess the performance of any computational model.
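A sketch of the idea with synthetic data: regressing measured values on model predictions separates systematic error (slope away from 1, nonzero intercept) from random scatter (residual spread). The force values and error magnitudes are invented:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
predicted = rng.uniform(8, 20, 50)   # model-predicted rolling forces, MN
# Synthetic measurements with a 5% gain error, a -0.3 MN offset, and noise
measured = 1.05 * predicted - 0.3 + rng.normal(0, 0.4, 50)

fit = stats.linregress(predicted, measured)
print(f"slope = {fit.slope:.3f} (1 means no gain error), "
      f"intercept = {fit.intercept:.3f} (0 means no offset), "
      f"r = {fit.rvalue:.3f}")
```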
SU-E-T-276: Dose Calculation Accuracy with a Standard Beam Model for Extended SSD Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kisling, K; Court, L; Kirsner, S
2015-06-15
Purpose: While most photon treatments are delivered near 100 cm SSD or less, a subset of patients may benefit from treatment at SSDs greater than 100 cm. A proposed rotating chair for upright treatments would enable isocentric treatments at extended SSDs. The purpose of this study was to assess the accuracy of the Pinnacle³ treatment planning system dose calculation for standard beam geometries delivered at extended SSDs with a beam model commissioned at 100 cm SSD. Methods: Dose to a water phantom at 100, 110, and 120 cm SSD was calculated with the Pinnacle³ CC convolve algorithm for 6x beams for 5×5, 10×10, 20×20, and 30×30 cm² field sizes (defined at the water surface for each SSD). PDDs and profiles (depths of 1.5, 12.5, and 22 cm) were compared to measurements in water with an ionization chamber. Point-by-point agreement was analyzed, as well as agreement in field size defined by the 50% isodose. Results: The deviations of the calculated PDDs from measurement, analyzed from the depth of maximum dose to 23 cm, were all within 1.3% for all beam geometries. In particular, the calculated PDDs at 10 cm depth were all within 0.7% of measurement. For profiles, the deviations within the central 80% of the field were within 2.2% for all geometries. The field sizes all agreed within 2 mm. Conclusion: The agreement of the PDDs and profiles calculated by Pinnacle³ for extended SSD geometries was within the acceptability criteria defined by Van Dyk (±2% for PDDs and ±3% for profiles). The accuracy of the calculation of more complex beam geometries at extended SSDs will be investigated to further assess the feasibility of using a standard beam model commissioned at 100 cm SSD in Pinnacle³ for extended SSD treatments.
NASA Technical Reports Server (NTRS)
Hubeny, I.; Lanz, T.
1995-01-01
A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
Analytical probabilistic proton dose calculation and range uncertainties
NASA Astrophysics Data System (ADS)
Bangert, M.; Hennig, P.; Oelfke, U.
2014-03-01
We introduce the concept of analytical probabilistic modeling (APM) to calculate the mean and the standard deviation of intensity-modulated proton dose distributions under the influence of range uncertainties in closed form. For APM, range uncertainties are modeled with a multivariate Normal distribution p(z) over the radiological depths z. A pencil beam algorithm that parameterizes the proton depth dose d(z) with a weighted superposition of ten Gaussians is used. Hence, the integrals ∫ dz p(z) d(z) and ∫ dz p(z) d(z)² required for the calculation of the expected value and standard deviation of the dose remain analytically tractable and can be efficiently evaluated. The means μk, widths δk, and weights ωk of the Gaussian components parameterizing the depth dose curves are found with least squares fits for all available proton ranges. We observe less than 0.3% average deviation of the Gaussian parameterizations from the original proton depth dose curves. Consequently, APM yields high accuracy estimates for the expected value and standard deviation of intensity-modulated proton dose distributions for two dimensional test cases. APM can accommodate arbitrary correlation models and account for the different nature of random and systematic errors in fractionated radiation therapy. Beneficial applications of APM in robust planning are feasible.
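The closed form rests on a standard Gaussian identity, ∫ N(z; a, A) N(z; b, B) dz = N(a; b, √(A² + B²)). A sketch of the expected dose for a single pencil beam under a Gaussian range error; all parameter values are invented:

```python
import numpy as np

def norm(x, mu, sigma):
    """Normal probability density N(x; mu, sigma)."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Depth dose parameterized as a weighted sum of Gaussians (illustrative values)
w  = np.array([0.2, 0.5, 0.3])      # weights omega_k
mu = np.array([60.0, 80.0, 90.0])   # means mu_k, mm
de = np.array([15.0, 8.0, 3.0])     # widths delta_k, mm

z0, sz = 85.0, 2.5                  # nominal radiological depth and range sigma, mm

# E[d] = sum_k w_k * Integral N(z; z0, sz) N(z; mu_k, de_k) dz, done in closed form
expected = np.sum(w * norm(z0, mu, np.sqrt(sz**2 + de**2)))
print(f"expected dose at nominal depth: {expected:.4f} (arbitrary units)")
```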
Pinevol: a user's guide to a volume calculator for southern pines
Daniel J. Leduc
2006-01-01
Taper functions describe a model of the actual geometric shape of a tree. When this shape is assumed to be known, volume by any log rule and to any merchantability standard can be calculated. PINEVOL is a computer program for calculating the volume of the major southern pines using species-specific bole taper functions. It can use the Doyle, Scribner, or International...
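The mechanics behind such a volume calculator can be sketched as integrating cross-sectional area, obtained from a taper function giving diameter at height, over the merchantable bole. The taper function below is a hypothetical placeholder, not one of PINEVOL's species-specific equations:

```python
import numpy as np
from scipy.integrate import quad

def diameter(h, dbh=30.0, total_ht=25.0):
    """Hypothetical taper: diameter (cm) at stem height h (m). NOT a PINEVOL equation."""
    return dbh * ((total_ht - h) / (total_ht - 1.37)) ** 0.8

def volume(h0, h1):
    """Cubic volume (m^3) between stem heights h0 and h1 (m)."""
    area = lambda h: np.pi * (diameter(h) / 200.0) ** 2  # diameter in cm -> radius in m
    return quad(area, h0, h1)[0]

print(f"stump-to-top volume: {volume(0.3, 22.0):.3f} m^3")
```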
Extension of the general thermal field equation for nanosized emitters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kyritsakis, A., E-mail: akyritsos1@gmail.com; Xanthakis, J. P.
2016-01-28
During the previous decade, Jensen et al. developed a general analytical model that successfully describes electron emission from metals in both the field and thermionic regimes, as well as in the transition region. In that development, the standard image-corrected triangular potential barrier was used. This barrier model is valid only for planar surfaces and therefore cannot be used in general for modern nanometric emitters. In a recent publication, the authors showed that the standard Fowler-Nordheim theory can be generalized for highly curved emitters if a quadratic term is included in the potential model. In this paper, we extend this generalization to high temperatures and include both the thermal and intermediate regimes. This is achieved by applying the general method developed by Jensen to the quadratic barrier model of our previous publication. We obtain results that are in good agreement with fully numerical calculations for radii R > 4 nm, while our calculated current density differs by a factor of up to 27 from the one predicted by Jensen's standard General-Thermal-Field (GTF) equation. Our extended GTF equation has application to modern sharp electron sources, beam simulation models, and vacuum breakdown theory.
Notional Scoring for Technical Review Weighting As Applied to Simulation Credibility Assessment
NASA Technical Reports Server (NTRS)
Hale, Joseph Peter; Hartway, Bobby; Thomas, Danny
2008-01-01
NASA's Modeling and Simulation Standard requires a credibility assessment for critical engineering data produced by models and simulations. Credibility assessment is thus a "qualifying factor" in reporting results from simulation-based analysis. The degree to which assessors should be independent of the simulation developers, users, and decision makers is a recurring question. This paper provides alternative "weighting algorithms" for calculating the value added by independence of the levels of technical review defined for the NASA Modeling and Simulation Standard.
NASA Astrophysics Data System (ADS)
Morozov, A. N.
2017-11-01
The article reviews the possibility of describing physical time as a random Poisson process. An equation allowing the intensity of physical time fluctuations to be calculated, depending on the entropy production density within irreversible natural processes, is proposed. Based on the standard solar model, the work calculates the entropy production density inside the Sun and the dependence of the intensity of physical time fluctuations on the distance to the centre of the Sun. A free model parameter has been established, and a method for its evaluation has been suggested. The calculations of the entropy production density inside the Sun showed that it differs by 2-3 orders of magnitude in different parts of the Sun. The intensity of physical time fluctuations on the Earth's surface, depending on the entropy production density during the conversion of sunlight to the Earth's thermal radiation, has been theoretically predicted. A method for evaluating the Kullback measure of voltage fluctuations in small amounts of electrolyte has been proposed. Using a simple model of the Earth's surface heat transfer to the upper atmosphere, the effective temperature of the Earth's thermal radiation has been determined. A comparison between the theoretical values of the Kullback measure derived from the fluctuating physical time model and the experimentally measured values of this measure for two independent electrolytic cells showed good qualitative and quantitative agreement between the theoretical model and the experimental data.
40 CFR 91.509 - Calculation and reporting of test results.
Code of Federal Regulations, 2011 CFR
2011-07-01
... applicable emission standard expressed to one additional significant figure. (ASTM E29-93a has been... contained in the applicable standard expressed to one additional significant figure. (c) The final... expressed to one additional significant figure. (d) If, at any time during the model year, the CumSum...
40 CFR 91.509 - Calculation and reporting of test results.
Code of Federal Regulations, 2014 CFR
2014-07-01
... applicable emission standard expressed to one additional significant figure. (ASTM E29-93a has been... contained in the applicable standard expressed to one additional significant figure. (c) The final... expressed to one additional significant figure. (d) If, at any time during the model year, the CumSum...
40 CFR 1037.150 - Interim provisions.
Code of Federal Regulations, 2014 CFR
2014-07-01
... earlier model years for electric vehicles) to the greenhouse gas standards of this part. (1) This... for any vehicles other than electric vehicles, you must certify your entire U.S.-directed production... electric vehicles, you must certify your entire U.S.-directed fleet to these standards. If you calculate a...
Solar g-modes? Comparison of detected asymptotic g-mode frequencies with solar model predictions
NASA Astrophysics Data System (ADS)
Wood, Suzannah Rebecca; Guzik, Joyce Ann; Mussack, Katie; Bradley, Paul A.
2018-06-01
After many years of searching for solar gravity modes, Fossat et al. (2017) reported detection of the nearly equally spaced periods of high-order g-modes using a 15-year time series of GOLF data from the SOHO spacecraft. Here we report progress towards, and challenges associated with, calculating and comparing g-mode period predictions for several previously published standard solar models using various abundance mixtures and opacities, as well as the predictions of some non-standard models incorporating early mass loss, and compare with the periods reported by Fossat et al. (2017). Additionally, we present a side-by-side comparison of the results of different stellar pulsation codes for calculating g-mode predictions. These comparisons allow testing of nonstandard input physics that affects the core, including an early more massive Sun and dynamic electron screening.
Consistent use of the standard model effective potential.
Andreassen, Anders; Frost, William; Schwartz, Matthew D
2014-12-12
The stability of the standard model is determined by the true minimum of the effective Higgs potential. We show that the potential at its minimum, when computed by the traditional method, is strongly dependent on the gauge parameter. It moreover depends on the scale at which the potential is calculated. We provide a consistent method for determining absolute stability, independent of both gauge and calculation scale, order by order in perturbation theory. This leads to revised stability bounds m_h(pole) > (129.4 ± 2.3) GeV and m_t(pole) < (171.2 ± 0.3) GeV. We also show how to evaluate the effect of new physics on the stability bound without resorting to unphysical field values.
An Improved Radiative Transfer Model for Climate Calculations
NASA Technical Reports Server (NTRS)
Bergstrom, Robert W.; Mlawer, Eli J.; Sokolik, Irina N.; Clough, Shepard A.; Toon, Owen B.
1998-01-01
This paper presents a radiative transfer model that has been developed to accurately predict the atmospheric radiant flux in both the infrared and the solar spectrum with a minimum of computational effort. The model is designed to be included in numerical climate models. To assess the accuracy of the model, the results are compared to other, more detailed models for several standard cases in the solar and thermal spectrum. As the thermal spectrum has been treated in other publications, we focus here on the solar part of the spectrum. We perform several example calculations focusing on the question of absorption of solar radiation by gases and aerosols.
A note on calculation of efficiency and emissions from wood and wood pellet stoves
NASA Astrophysics Data System (ADS)
Petrocelli, D.; Lezzi, A. M.
2015-11-01
In recent years, national laws and international regulations have introduced strict limits on efficiency and emissions from woody biomass appliances to promote the diffusion of models characterized by low emissions and high efficiency. The evaluation of efficiency and emissions is made during the certification process, which consists of standardized tests. Standards prescribe the procedures to be followed during tests and the relations to be used to determine the mean values of efficiency and emissions. In practice, these values are calculated using flue gas temperature and composition averaged over the whole test period, lasting from 1 to 6 hours. Typically, in wood appliances the fuel burning rate is not constant, and this leads to a considerable variation in time of the composition and flow rate of the flue gas. In this paper we show that this fact may cause significant differences between emission values calculated according to standards and those obtained by integrating the instantaneous mass and energy balances over the test period. In addition, we propose some approximate relations and a method for wood stoves that supply more accurate results than those calculated according to standards. These relations can be easily implemented in computer-controlled data acquisition systems.
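A sketch of the discrepancy with synthetic data: the standard-style value multiplies test-averaged concentration and flow after averaging, while the mass-consistent value integrates their instantaneous product. With concentration and flow varying in opposition, as in a batch burn, the two differ:

```python
import numpy as np

t = np.linspace(0.0, 3600.0, 361)                 # one-hour test, s
co = 2000 - 1500 * np.sin(np.pi * t / 3600)       # CO concentration, mg/m^3 (synthetic)
flow = 0.010 + 0.008 * np.sin(np.pi * t / 3600)   # flue gas flow, m^3/s (synthetic)

standard_like = co.mean() * flow.mean() * 3600.0  # averages multiplied afterwards
rate = co * flow                                  # instantaneous CO mass flow, mg/s
integrated = np.sum(0.5 * (rate[1:] + rate[:-1]) * np.diff(t))  # trapezoid rule

print(f"standard-style: {standard_like / 1000:.1f} g, "
      f"integrated: {integrated / 1000:.1f} g")
```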
Finite elements for the calculation of turbulent flows in three-dimensional complex geometries
NASA Astrophysics Data System (ADS)
Ruprecht, A.
A finite element program for the calculation of incompressible turbulent flows is presented. In order to reduce the required storage, an iterative algorithm is used which solves the necessary equations sequentially. The state of turbulence is defined by the k-epsilon model. In addition to the standard k-epsilon model, the modification of Bardina et al., which takes into account the rotation of the mean flow, is investigated. With this program, the flow in the draft tube of a Kaplan turbine is examined. Calculations are carried out for swirling and nonswirling entrance flow. The results are compared with measurements.
NASA Astrophysics Data System (ADS)
Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Dai, Lianrong; Draayer, Jerry P.
2018-05-01
An extended pairing Hamiltonian that describes multi-pair interactions among isospin T = 1 and angular momentum J = 0 neutron-neutron, proton-proton, and neutron-proton pairs in a spherical mean field, such as the spherical shell model, is proposed based on the standard T = 1 pairing formalism. The advantage of the model lies in the fact that numerical solutions within the seniority-zero symmetric subspace can be obtained more easily and with less computational time than those calculated from the mean-field plus standard T = 1 pairing model. Thus, large-scale calculations within the seniority-zero symmetric subspace of the model are feasible. As an example of the application, the average neutron-proton interaction in even-even N ∼ Z nuclei that can be suitably described in the f5/2pg9/2 shell is estimated in the present model, with a focus on the role of np-pairing correlations.
NASA Astrophysics Data System (ADS)
Dzuba, V. A.; Flambaum, V. V.; Stadnik, Y. V.
2017-12-01
In the presence of P-violating interactions, the exchange of vector bosons between electrons and nucleons induces parity-nonconserving (PNC) effects in atoms and molecules, while the exchange of vector bosons between nucleons induces anapole moments of nuclei. We perform calculations of such vector-mediated PNC effects in Cs, Ba+, Yb, Tl, Fr, and Ra+ using the same relativistic many-body approaches as in earlier calculations of standard-model PNC effects, but with the long-range operator of the weak interaction. We calculate nuclear anapole moments due to vector-boson exchange using a simple nuclear model. From measured and predicted (within the standard model) values for the PNC amplitudes in Cs, Yb, and Tl, as well as the nuclear anapole moment of 133Cs, we constrain the P-violating vector-pseudovector nucleon-electron and nucleon-proton interactions mediated by a generic vector boson of arbitrary mass. Our limits improve on existing bounds from other experiments by many orders of magnitude over a very large range of vector-boson masses.
Beyond-Standard-Model Tensor Interaction and Hadron Phenomenology.
Courtoy, Aurore; Baeßler, Stefan; González-Alonso, Martín; Liuti, Simonetta
2015-10-16
We evaluate the impact of recent developments in hadron phenomenology on extracting possible fundamental tensor interactions beyond the standard model. We show that a novel class of observables, including the chiral-odd generalized parton distributions and the transversity parton distribution function, can contribute to the constraints on this quantity. Experimental extractions of the tensor hadronic matrix elements, if sufficiently precise, will provide a thus-far absent testing ground for lattice QCD calculations.
Modeling and calculation of turbulent lifted diffusion flames
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanders, J.P.H.; Lamers, A.P.G.G.
1994-01-01
Liftoff heights of turbulent diffusion flames have been modeled using the laminar diffusion flamelet concept of Peters and Williams. The strain rate of the smallest eddies is used as the stretch-describing parameter, instead of the more common scalar dissipation rate. The h(U) curve, which is the mean liftoff height as a function of fuel exit velocity, can be accurately predicted, while this was impossible with the scalar dissipation rate. Liftoff calculations performed in the flames as well as in the equivalent isothermal jets, using a standard k-epsilon turbulence model, yield approximately the same correct slope for the h(U) curve, while the offset has to be reproduced by choosing an appropriate coefficient in the strain rate model. For the flame calculations, a model for the pdf of the fluctuating flame base is proposed. The results are insensitive to its width. The temperature field is qualitatively different from the field calculated by Bradley et al., who used a premixed flamelet model for diffusion flames.
SuperLFV: An SLHA tool for lepton flavor violating observables in supersymmetric models
NASA Astrophysics Data System (ADS)
Murakami, Brandon
2014-02-01
We introduce SuperLFV, a numerical tool for calculating low-energy observables that exhibit charged lepton flavor violation (LFV) in the context of the minimal supersymmetric standard model (MSSM). As the Large Hadron Collider and MEG, a dedicated μ+→e+γ experiment, are presently acquiring data, there is a need for tools that provide rapid discrimination of models that exhibit LFV. SuperLFV accepts a spectrum file compliant with the SUSY Les Houches Accord (SLHA), containing the MSSM couplings and masses with complex phases at the supersymmetry breaking scale. In this manner, SuperLFV is compatible with but divorced from existing SLHA spectrum calculators that provide the low energy spectrum. Hence, input spectra are not confined to the LFV sources provided by established SLHA spectrum calculators. Input spectra may be generated by personal code or by hand, allowing for arbitrary models not supported by existing spectrum calculators.
ADAPTION OF NONSTANDARD PIPING COMPONENTS INTO PRESENT DAY SEISMIC CODES
DOE Office of Scientific and Technical Information (OSTI.GOV)
D. T. Clark; M. J. Russell; R. E. Spears
2009-07-01
With spiraling energy demand and flat energy supply, there is a need to extend the life of older nuclear reactors. This sometimes requires that existing systems be evaluated to present-day seismic codes. Older reactors built in the 1960s and early 1970s often used fabricated piping components that were code compliant during their initial construction time period, but are outside the standard parameters of present-day piping codes. There are several approaches available to the analyst in evaluating these non-standard components to modern codes. The simplest approach is to use the flexibility factors and stress indices for similar standard components with the assumption that the non-standard component's flexibility factors and stress indices will be very similar. This approach can require significant engineering judgment. A more rational approach available in Section III of the ASME Boiler and Pressure Vessel Code, which is the subject of this paper, involves calculation of flexibility factors using finite element analysis of the non-standard component. Such analysis allows modeling of geometric and material nonlinearities. Flexibility factors based on these analyses are sensitive to the load magnitudes used in their calculation, load magnitudes that need to be consistent with those produced by the linear system analyses where the flexibility factors are applied. This can lead to iteration, since the magnitude of the loads produced by the linear system analysis depends on the magnitude of the flexibility factors. After the loading applied to the nonstandard component finite element model has been matched to loads produced by the associated linear system model, the component finite element model can then be used to evaluate the performance of the component under the loads with the nonlinear analysis provisions of the Code, should the load levels lead to calculated stresses in excess of allowable stresses. This paper details the application of component-level finite element modeling to account for geometric and material nonlinear component behavior in a linear elastic piping system model. Note that this technique can be applied to the analysis of B31 piping systems.
2015-06-30
…calculated with a high degree of accuracy, leading to intensive computational calculations and long computational times when dealing with range-depth fields …be obtained using similitude analysis; it allows the comparison of differing explosive weights and provides the means to scale the pressure and energy…
Darkflation: One scalar to rule them all?
NASA Astrophysics Data System (ADS)
Lalak, Zygmunt; Nakonieczny, Łukasz
2017-03-01
The problem of explaining both inflationary and dark matter physics in the framework of a minimal extension of the Standard Model was investigated. To this end, the Standard Model completed by a real scalar singlet playing the role of the dark matter candidate was considered. We assumed both the dark matter field and the Higgs doublet to be nonminimally coupled to gravity. Using quantum field theory in curved spacetime, we derived an effective action for the inflationary period and analyzed its consequences. In this approach, after integrating out both the dark matter and Standard Model sectors, we obtained an effective action expressed purely in terms of the gravitational field. We paid special attention to determining, by explicit calculation, the form of the coefficients controlling the higher-order-in-curvature gravitational terms. Their connection to the Standard Model coupling constants has been discussed.
Lattice Gauge Theories Within and Beyond the Standard Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gelzer, Zechariah John
The Standard Model of particle physics has been very successful in describing fundamental interactions up to the highest energies currently probed in particle accelerator experiments. However, the Standard Model is incomplete and currently exhibits tension with experimental data for interactions involving B mesons. Consequently, B-meson physics is of great interest to both experimentalists and theorists. Experimentalists worldwide are studying the decay and mixing processes of B mesons in particle accelerators. Theorists are working to understand the data by employing lattice gauge theories within and beyond the Standard Model. This work addresses the theoretical effort and is divided into two main parts. In the first part, I present a lattice-QCD calculation of form factors for exclusive semileptonic decays of B mesons that are mediated by both charged currents (B → π ℓ ν)…
NASA Astrophysics Data System (ADS)
Stier, P.; Schutgens, N. A. J.; Bellouin, N.; Bian, H.; Boucher, O.; Chin, M.; Ghan, S.; Huneeus, N.; Kinne, S.; Lin, G.; Ma, X.; Myhre, G.; Penner, J. E.; Randles, C. A.; Samset, B.; Schulz, M.; Takemura, T.; Yu, F.; Yu, H.; Zhou, C.
2013-03-01
Simulated multi-model "diversity" in aerosol direct radiative forcing estimates is often perceived as a measure of aerosol uncertainty. However, current models used for aerosol radiative forcing calculations vary considerably in model components relevant for forcing calculations, and the associated "host-model uncertainties" are generally convoluted with the actual aerosol uncertainty. In this AeroCom Prescribed intercomparison study we systematically isolate and quantify host model uncertainties on aerosol forcing experiments through prescription of identical aerosol radiative properties in twelve participating models. Even with prescribed aerosol radiative properties, simulated clear-sky and all-sky aerosol radiative forcings show significant diversity. For a purely scattering case with globally constant optical depth of 0.2, the global-mean all-sky top-of-atmosphere radiative forcing is -4.47 Wm-2 and the inter-model standard deviation is 0.55 Wm-2, corresponding to a relative standard deviation of 12%. For a case with partially absorbing aerosol with an aerosol optical depth of 0.2 and single scattering albedo of 0.8, the forcing changes to 1.04 Wm-2, and the standard deviation increases to 1.01 Wm-2, corresponding to a significant relative standard deviation of 97%. However, the top-of-atmosphere forcing variability owing to absorption (subtracting the scattering case from the case with scattering and absorption) is low, with absolute (relative) standard deviations of 0.45 Wm-2 (8%) clear-sky and 0.62 Wm-2 (11%) all-sky. Scaling the forcing standard deviation for a purely scattering case to match the sulfate radiative forcing in the AeroCom Direct Effect experiment demonstrates that host model uncertainties could explain about 36% of the overall sulfate forcing diversity of 0.11 Wm-2 in the AeroCom Direct Radiative Effect experiment. Host model errors in aerosol radiative forcing are largest in regions of uncertain host model components, such as stratocumulus cloud decks or areas with poorly constrained surface albedos, such as sea ice. Our results demonstrate that host model uncertainties are an important component of aerosol forcing uncertainty that requires further attention.
Chatterjee, Abhijit; Bhattacharya, Swati
2015-09-21
Several studies in the past have generated Markov State Models (MSMs), i.e., kinetic models, of biomolecular systems by post-analyzing long standard molecular dynamics (MD) calculations at the temperature of interest and focusing on the maximally ergodic subset of states. Questions related to the goodness of these models, namely, the importance of the missing states and kinetic pathways, and the time for which the kinetic model is valid, are generally left unanswered. We show that similar questions arise when we generate a room-temperature MSM (denoted MSM-A) for solvated alanine dipeptide using state-constrained MD calculations at higher temperatures and the Arrhenius relation; the main advantage of such a procedure is a speed-up of several thousand times over standard MD-based MSM building procedures. Bounds for rate constants calculated using probability theory from state-constrained MD at room temperature help validate MSM-A. However, bounds for pathways possibly missing in MSM-A show that alternate kinetic models exist that produce the same dynamical behaviour at short time scales as MSM-A but diverge later. Even in the worst-case scenario, MSM-A is found to be valid longer than the time required to generate it. Concepts introduced here can be straightforwardly extended to other MSM building techniques.
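A sketch of the temperature-extrapolation step with invented rate data: fit ln k against 1/T at the elevated temperatures, then evaluate the fit at room temperature to populate the MSM rate matrix:

```python
import numpy as np

kB = 0.0019872041                      # Boltzmann constant, kcal/(mol K)
T = np.array([400.0, 450.0, 500.0])    # state-constrained MD temperatures, K
k = np.array([2.1e6, 1.4e7, 6.5e7])    # observed escape rates, 1/s (synthetic)

# Arrhenius: ln k = ln A - Ea / (kB T), linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * kB                       # apparent activation energy, kcal/mol
k300 = np.exp(intercept + slope / 300.0)
print(f"Ea = {Ea:.1f} kcal/mol, extrapolated k(300 K) = {k300:.2e} 1/s")
```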
A Snowflake Project: Calculating, Analyzing, and Optimizing with the Koch Snowflake.
ERIC Educational Resources Information Center
Bolte, Linda A.
2002-01-01
Presents a project that addresses several components of the Algebra and Communication Standards for Grades 9-12 presented in Principles and Standards for School Mathematics (NCTM, 2000). Describes doing mathematical modeling and using the language of mathematics to express a recursive relationship in the perimeter and area of the Koch snowflake.…
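The recursion behind the project is easy to tabulate: each iteration multiplies the perimeter by 4/3, while stage n adds 3·4^(n-1) new triangles, each with 1/9^n the original area, so the area converges to 8/5 of the starting triangle while the perimeter diverges. A short sketch:

```python
import math

# Koch snowflake starting from a unit equilateral triangle (side 1)
perimeter, area = 3.0, math.sqrt(3) / 4   # stage 0
new_tris, tri_area = 3, math.sqrt(3) / 4

for n in range(1, 8):
    tri_area /= 9                  # each new triangle has 1/9 the previous area
    area += new_tris * tri_area    # 3 * 4^(n-1) triangles added at stage n
    new_tris *= 4
    perimeter *= 4 / 3             # perimeter grows without bound
    print(n, round(perimeter, 3), round(area, 5))

print("area limit:", 8 / 5 * math.sqrt(3) / 4)
```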
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-11
... Requirement R3.1 of MOD-001-1. C. Benchmarking 14. In the Final Rule, the Commission directed the ERO to develop benchmarking and updating requirements for the MOD Reliability Standards to measure modeled... requirements should specify the frequency for benchmarking and updating the available transfer and flowgate...
Numerical determination of Paris law constants for carbon steel using a two-scale model
NASA Astrophysics Data System (ADS)
Mlikota, M.; Staib, S.; Schmauder, S.; Božić, Ž.
2017-05-01
For most engineering alloys, long fatigue crack growth under a given stress level can be described by the Paris law. The law provides a correlation between the fatigue crack growth rate (FCGR, or da/dN), the range of the stress intensity factor (ΔK), and the material constants C and m. A well-established test procedure is typically used to determine the Paris law constants C and m, using standard specimens, notched and pre-cracked. Definitions of all the details necessary to obtain feasible and comparable Paris law constants are covered by standards. However, these expensive tests can be replaced by appropriate numerical calculations. In this respect, this paper deals with the numerical determination of the Paris law constants for carbon steel using a two-scale model. A micro-model containing the microstructure of the material is generated using the Finite Element Method (FEM) to calculate the fatigue crack growth rate at a crack tip. The model is based on the Tanaka-Mura equation. A macro-model, in turn, serves for the calculation of the stress intensity factor. The analysis yields a relationship between the crack growth rates and the stress intensity factors for defined crack lengths, which is then used to determine the Paris law constants.
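For reference, the constants enter through da/dN = C·(ΔK)^m, so given crack-growth data, whether measured or produced by a two-scale simulation, C and m follow from a linear fit in log-log coordinates; the data below are synthetic:

```python
import numpy as np

# Synthetic long-crack data: da/dN in m/cycle, dK in MPa*sqrt(m)
dK = np.array([12.0, 15, 20, 25, 30, 40])
dadN = 4e-12 * dK ** 3.1 * np.exp(np.random.default_rng(1).normal(0, 0.05, 6))

# log(da/dN) = log C + m * log(dK): slope gives m, intercept gives log C
m, logC = np.polyfit(np.log(dK), np.log(dadN), 1)
C = np.exp(logC)
print(f"m = {m:.2f}, C = {C:.2e}   (da/dN = C * dK^m)")
```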
Code of Federal Regulations, 2011 CFR
2011-10-01
... manufactured by a manufacturer in that compliance category in a particular model year have greater average fuel.... 32905) than that manufacturer's fuel economy standard for that compliance category and model year... year have lower average fuel economy (calculated in a manner that reflects the incentives for...
Code of Federal Regulations, 2010 CFR
2010-10-01
... manufactured by a manufacturer in that compliance category in a particular model year have greater average fuel.... 32905) than that manufacturer's fuel economy standard for that compliance category and model year... year have lower average fuel economy (calculated in a manner that reflects the incentives for...
NASA Technical Reports Server (NTRS)
Timofeyev, Y. M.
1979-01-01
In order to test the error introduced by the assumed values of the transmission function for Soviet and American radiometers sounding the atmosphere thermally from orbiting satellites, the assumptions of the transmission calculation are varied with respect to atmospheric CO2 content, transmission frequency, and atmospheric absorption. The error arising from variations of the assumptions from the standard basic model is calculated.
Photons coming from an opaque obstacle as a manifestation of heavy neutrino decays
NASA Astrophysics Data System (ADS)
Reynoso, Matías M.; Romero, Ismael; Sampayo, Oscar A.
2018-05-01
Within the framework of physics beyond the standard model, we study the possibility that mesons produced in the atmosphere by the cosmic-ray flux decay to heavy Majorana neutrinos and that the latter, in turn, decay mostly to photons in the low-mass region. We study the photon flux produced by sterile Majorana neutrinos (N) decaying after passing through a massive and opaque object such as a mountain. To model the production of N's in the atmosphere and their decay to photons, we consider the interaction between the Majorana neutrinos and standard matter as modeled by an effective theory. We then calculate the heavy neutrino flux originating from the decay of mesons in the atmosphere. The surviving photon flux, originating from N decays, is calculated using transport equations that include the effects of Majorana neutrino production and decay.
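A purely kinematic sketch of the geometry, not the paper's effective-theory calculation: a heavy neutrino of energy E, mass m, and proper lifetime τ has lab-frame decay length γβcτ, and photons appear only if N survives the rock and decays behind it. All parameter values are invented:

```python
import math

c = 3.0e8  # speed of light, m/s

def decay_length(E_GeV, m_GeV, tau_s):
    """Lab-frame decay length gamma*beta*c*tau in meters."""
    gamma = E_GeV / m_GeV
    beta = math.sqrt(max(1.0 - 1.0 / gamma**2, 0.0))
    return gamma * beta * c * tau_s

E, m, tau = 100.0, 0.2, 1e-8      # hypothetical N energy, mass, proper lifetime
L_rock, L_obs = 2000.0, 1000.0    # mountain thickness and observation depth, m

Ld = decay_length(E, m, tau)
# P(no decay inside the rock) * P(decay within L_obs past the rock)
p = math.exp(-L_rock / Ld) * (1.0 - math.exp(-L_obs / Ld))
print(f"decay length = {Ld:.0f} m, P(photons past the rock) = {p:.3f}")
```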
NASA Technical Reports Server (NTRS)
Witteborn, Fred C.; Cohen, Martin; Bregman, Jesse D.; Wooden, Diane H.; Heere, Karen; Shirley, Eric L.
1999-01-01
Infrared spectra of two celestial objects frequently used as flux standards are calibrated against an absolute laboratory flux standard at a spectral resolving power of 100 to 200. The spectrum of the K1.5 III star alpha Boo is measured from 3 to 30 microns, and that of the C-type asteroid 1 Ceres from 5 to 30 microns. While these "standard" spectra do not have the apparent precision of those based on calculated models, they do not require the assumptions involved in theoretical models of stars and asteroids. Specifically, they provide a model-independent means of calibrating celestial flux in the spectral range from 12 to 30 microns, where accurate absolute photometry is not available. The agreement found between the spectral shapes of alpha Boo and Ceres based on laboratory standards and those based on observed ratios to alpha CMa (Sirius) and alpha Lyr (Vega), flux-calibrated by theoretical modeling of these hot stars, strengthens our confidence in the applicability of the stellar models as primary irradiance standards.
NASA Technical Reports Server (NTRS)
Witteborn, Fred C.; Cohen, Martin; Bregman, Jess D.; Wooden, Diane; Heere, Karen; Shirley, Eric L.
1998-01-01
Infrared spectra of two celestial objects frequently used as flux standards are calibrated against an absolute laboratory flux standard at a spectral resolving power of 100 to 200. The spectrum of the K1.5III star, alpha Boo, is measured from 3 microns to 30 microns and that of the C-type asteroid, 1 Ceres, from 5 microns to 30 microns. While these 'standard' spectra do not have the apparent precision of those based on calculated models, they do not require the assumptions involved in theoretical models of stars and asteroids. Specifically they provide a model-independent means of calibrating celestial flux in the spectral range from 12 microns to 30 microns where accurate absolute photometry is not available. The agreement found between the spectral shapes of alpha Boo and Ceres based on laboratory standards, and those based on observed ratios to alpha CMa (Sirius) and alpha Lyr (Vega), flux calibrated by theoretical modeling of these hot stars strengthens our confidence in the applicability of the stellar models as primary irradiance standards.
NASA Astrophysics Data System (ADS)
Toyokuni, G.; Takenaka, H.
2007-12-01
We propose a method to obtain effective grid parameters for the finite-difference (FD) method with standard Earth models analytically. In spite of the broad use of the heterogeneous FD formulation for seismic waveform modeling, accurate treatment of material discontinuities inside the grid cells has been a serious problem for many years. One possible way to solve this problem is to introduce effective grid elastic moduli and densities (effective parameters) calculated by volume harmonic averaging of the elastic moduli and volume arithmetic averaging of the density in grid cells. This scheme enables us to put a material discontinuity at an arbitrary position in the spatial grid. Most methods used today for synthetic seismogram calculation rely on standard Earth models, such as PREM, IASP91, SP6, and AK135, represented as functions of normalized radius. For FD computation of seismic waveforms with such models, we first need accurate treatment of material discontinuities in radius. This study provides a numerical scheme for analytical calculation of the effective parameters on arbitrary spatial grids in the radial direction for these four major standard Earth models, making the best use of their functional features. The scheme obtains the integral volume averages analytically through partial fraction decompositions (PFDs) and integral formulae. We have developed a FORTRAN subroutine to perform the computations, which can be used in a large variety of FD schemes ranging from 1-D to 3-D, with conventional and staggered grids. In the presentation, we show some numerical examples displaying the accuracy of the FD synthetics simulated with the analytical effective parameters.
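As a concrete illustration of the averaging rule described in the abstract above, the sketch below computes effective parameters for a single grid cell that straddles a material discontinuity. It is a minimal sketch, not the authors' FORTRAN subroutine; the function name and material values are hypothetical.

```python
# Effective-parameter averaging for one finite-difference cell that
# straddles a material discontinuity (illustrative; not the authors' code).

def effective_cell_parameters(mu_a, rho_a, mu_b, rho_b, f_a):
    """Return effective (modulus, density) for a cell containing a volume
    fraction f_a of material A and 1 - f_a of material B."""
    f_b = 1.0 - f_a
    # Volume harmonic average of the elastic modulus...
    mu_eff = 1.0 / (f_a / mu_a + f_b / mu_b)
    # ...and volume arithmetic average of the density.
    rho_eff = f_a * rho_a + f_b * rho_b
    return mu_eff, rho_eff

# Example: a cell that is 30% crust (mu = 30 GPa, rho = 2700 kg/m^3)
# and 70% mantle (mu = 70 GPa, rho = 3300 kg/m^3).
mu_eff, rho_eff = effective_cell_parameters(30e9, 2700.0, 70e9, 3300.0, 0.3)
print(f"mu_eff = {mu_eff:.3e} Pa, rho_eff = {rho_eff:.1f} kg/m^3")
```

In the paper the cell averages are evaluated analytically from the functional form of the standard Earth models; the numerical average above conveys the same idea for a two-material cell.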
Economic baselines for current underground coal mining technology
NASA Technical Reports Server (NTRS)
Mabe, W. B.
1979-01-01
The cost of mining coal using a room-and-pillar mining method with a continuous miner and a longwall mining system was calculated. Costs were calculated for the 1975 and 2000 time periods and are to be used as economic standards against which advanced mining concepts and systems will be compared. Some assumptions were changed, and some data stored internally in the model were altered from the original calculation procedure, to obtain results that more closely represented what was considered a standard mine. Coal seam thicknesses were varied from one and one-half feet to eight feet to obtain the cost of mining coal over a wide range. Geologic conditions were selected that had a minimum impact on mining productivity.
Electroweak baryogenesis in the exceptional supersymmetric standard model
Chao, Wei
2015-08-28
Here, we study electroweak baryogenesis in the E6-inspired exceptional supersymmetric standard model (E6SSM). The relaxation coefficients driven by singlinos and the new gaugino, as well as the transport equation of the Higgs supermultiplet number density in the E6SSM, are calculated. Our numerical simulation shows that the CP-violating source terms from the singlinos and the new gaugino can each, on their own, give rise to the correct baryon asymmetry of the Universe via the electroweak baryogenesis mechanism.
Musil, Karel; Florianova, Veronika; Bucek, Pavel; Dohnal, Vlastimil; Kuca, Kamil; Musilek, Kamil
2016-01-05
Acetylcholinesterase reactivators (oximes) are compounds used for antidotal treatment in cases of organophosphorus poisoning. The dissociation constants (pK(a1)) of ten standard or promising acetylcholinesterase reactivators were determined by ultraviolet absorption spectrometry. Two methods of spectrum measurement (UV-vis spectrometry, FIA/UV-vis) were applied and compared. Soft and hard models for calculation of the pK(a1) values were used. The recommended pK(a1) values lie in the range 7.00-8.35, where at least 10% of the oximate anion is available for organophosphate reactivation. All tested oximes were found to have pK(a1) in this range. The FIA/UV-vis method provided rapid sample throughput, low sample consumption, and high sensitivity and precision compared to the standard UV-vis method. The hard calculation model was proposed as the more accurate for pK(a1) calculation. Copyright © 2015 Elsevier B.V. All rights reserved.
Assessment of the Impacts of Standards and Labeling Programs in Mexico (four products).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Itha; Pulido, Henry; McNeil, Michael A.
2007-06-12
This study analyzes impacts from energy efficiency standards and labeling in Mexico from 1994 through 2005 for four major products: household refrigerators, room air conditioners, three-phase (squirrel cage) induction motors, and clothes washers. It is a retrospective analysis, seeking to assess verified impacts on product efficiency in the Mexican market in the first ten years after standards were implemented. Such an analysis allows the Mexican government to compare actual to originally forecast program benefits. In addition, it provides an extremely valuable benchmark for other countries considering standards, and to the energy policy community as a whole. The methodology for evaluation begins with historical test data taken for a large number of models of each product type between 1994 and 2005. The pre-standard efficiency of models in 1994 is taken as a baseline throughout the analysis. Model efficiency data were provided by an independent certification laboratory (ANCE), which tested products as part of the certification and enforcement mechanism defined by the standards program. Using these data, together with economic and market data provided by both government and private sector sources, the analysis considers several types of national-level program impacts, including energy savings; environmental (emissions) impacts; and net financial impacts to consumers, manufacturers, and utilities. Energy savings impacts are calculated using the same methodology as the original projections, allowing a comparison. Other impacts are calculated using a robust and sophisticated methodology developed by the Instituto de Investigaciones Electricas (IIE) and Lawrence Berkeley National Laboratory (LBNL), in a collaboration supported by the Collaborative Labeling and Standards Program (CLASP).
MODTRAN3: Suitability as a flux-divergence code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, G.P.; Chetwynd, J.H.; Wang, J.
1995-04-01
The Moderate Resolution Atmospheric Radiance and Transmittance Model (MODTRAN3) is the developmental version of MODTRAN and MODTRAN2. The Geophysics Directorate, Phillips Laboratory, released a beta version of this model in October 1994. It encompasses all the capabilities of LOWTRAN7, the historic 20 cm⁻¹ resolution (full width at half maximum, FWHM) radiance code, but incorporates a much more sensitive molecular band model with 2 cm⁻¹ resolution. The band model is based directly upon the HITRAN spectral parameters, including both temperature and pressure (line shape) dependencies. Validation against full Voigt line-by-line calculations (e.g., FASCODE) has shown excellent agreement. In addition, simple timing runs demonstrate potential improvement of more than a factor of 100 for a typical 500 cm⁻¹ spectral interval and comparable vertical layering. Not only is MODTRAN an excellent band model for "full path" calculations (that is, radiance and/or transmittance from point A to point B), but it replicates layer-specific quantities to a very high degree of accuracy. Such layer quantities, derived from ratios and differences of longer path MODTRAN calculations from point A to adjacent layer boundaries, can be used to provide inversion algorithm weighting functions or similarly formulated quantities. One of the most exciting new applications is the rapid calculation of reliable IR cooling rates, including species, altitude, and spectral distinctions, as well as the standard spectrally integrated quantities. Comparisons with prior line-by-line cooling rate calculations are excellent, and the techniques can be extended to incorporate global climatologies of both standard and trace atmospheric species.
RICE bounds on cosmogenic neutrino fluxes and interactions
NASA Astrophysics Data System (ADS)
Hussain, Shahid
2005-04-01
Assuming standard model interactions, we calculate shower rates induced by cosmogenic neutrinos in ice and bound the cosmogenic neutrino fluxes using RICE 2000-2004 results. Next we assume new interactions due to extra-dimensional, low-scale gravity (i.e., black hole production and decay; graviton-mediated deep inelastic scattering) and calculate the enhanced shower rates induced by cosmogenic neutrinos in ice. With the help of the RICE 2000-2004 results, we survey bounds on low-scale gravity parameters for a range of cosmogenic neutrino flux models.
Shurshakov, V A; Kartashov, D A; Kolomenskiĭ, A V; Petrov, V M; Red'ko, V I; Abramov, I P; Letkova, L I; Tikhomirov, E P
2006-01-01
Sample irradiation of the "Orlan-M" spacesuit allowed construction of a simulation model of the spacesuit's shielding function for critical body organs. Self-shielding of the critical organs is modeled with a standard Russian anthropomorphic phantom. The radiation-protective quality of the spacesuit was assessed by calculating dose attenuation rates for several critical body organs of an ISS crewmember performing EVA. These calculations are intended for more accurate assessment of the radiation risk to ISS crews wearing the "Orlan-M" in near-Earth orbits.
NASA Technical Reports Server (NTRS)
Kurtenbach, F. J.
1979-01-01
A technique that relies on afterburner duct pressure measurements and empirical corrections to an ideal one-dimensional flow analysis to determine thrust is presented, along with a comparison of the calculated and facility-measured thrust values. The simplified model is compared with the engine manufacturer's gas generator model. The evaluation was conducted over a range of Mach numbers from 0.80 to 2.00 and at altitudes from 4,020 meters to 15,240 meters. The effects of variations in inlet total temperature from standard-day conditions were explored, and engine conditions were varied from those normally scheduled for flight. The technique was found to be accurate to a two-standard-deviation uncertainty of 2.89 percent, with accuracy a strong function of afterburner duct pressure difference.
Exploring Flavor Physics with Lattice QCD
NASA Astrophysics Data System (ADS)
Du, Daping; Fermilab/MILC Collaborations
2016-03-01
The Standard Model has been a very good description of subatomic particle physics. In the search for physics beyond the Standard Model in the context of flavor physics, it is important to sharpen our probes using gold-plated processes (such as rare B decays), which requires knowledge of the input parameters, such as the Cabibbo-Kobayashi-Maskawa (CKM) matrix elements and other nonperturbative quantities, with sufficient precision. Lattice QCD is so far the only first-principles method that can compute these quantities with competitive and systematically improvable precision using state-of-the-art simulation techniques. I will discuss the recent progress of lattice QCD calculations of some of these nonperturbative quantities and their applications in flavor physics. I will also discuss the implications and future perspectives of these calculations in flavor physics.
SU-E-T-120: Analytic Dose Verification for Patient-Specific Proton Pencil Beam Scanning Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C; Mah, D
2015-06-15
Purpose: To independently verify the QA dose of proton pencil beam scanning (PBS) plans using an analytic dose calculation model. Methods: An independent proton dose calculation engine was created using the same commissioning measurements as those employed to build our commercially available treatment planning system (TPS). Each proton PBS plan is exported from the TPS in DICOM format and calculated by this independent dose engine in a standard 40 x 40 x 40 cm water tank. This three-dimensional dose grid is then compared with the QA dose calculated by the commercial TPS, using a standard Gamma criterion. A total of 18 measured pristine Bragg peaks, ranging from 100 to 226 MeV, are used in the model; intermediate proton energies are interpolated. Similarly, optical properties of the spots are measured in air over 15 cm upstream and downstream, and fitted to a second-order polynomial. Multiple Coulomb scattering in water is approximated analytically using the Preston and Kohler formula for faster calculation. The effect of range shifters on spot size is modeled with the generalized Highland formula. Note that the above formulation approximates multiple Coulomb scattering in water; we therefore chose not to use the full Moliere/Hanson form. Results: Initial examination of 3 patient-specific prostate PBS plans shows agreement between the 3D dose distributions calculated by the TPS and the independent proton PBS dose calculation engine. Both calculated dose distributions are compared with actual measurements at three different depths per beam, and good agreement is again observed. Conclusion: Results here showed that 3D dose distributions calculated by this independent proton PBS dose engine are in good agreement with both TPS calculations and actual measurements. This tool can potentially be used to reduce the number of different measurement depths required for patient-specific proton PBS QA.
Improvements to Wire Bundle Thermal Modeling for Ampacity Determination
NASA Technical Reports Server (NTRS)
Rickman, Steve L.; Iannello, Christopher J.; Shariff, Khadijah
2017-01-01
Determining the current carrying capacity (ampacity) of wire bundles in aerospace vehicles is critical not only to safety but also to efficient design. Published standards provide guidance on determining wire bundle ampacity but offer little flexibility for configurations where wire bundles of mixed gauges and currents are employed with varying external insulation jacket surface properties. Thermal modeling has been employed in an attempt to develop techniques to assist in ampacity determination for these complex configurations. Previous developments allowed analysis of wire bundle configurations but were constrained to configurations comprising fewer than 50 elements. Additionally, for vacuum analyses, configurations with very low emittance external jackets suffered from numerical instability in the solution. A new thermal modeler is presented that accommodates larger configurations and remains stable for low bundle infrared emissivity calculations. The formulation of key internal radiation and interface conductance parameters is discussed, including the effects of temperature and air pressure on wire-to-wire thermal conductance. Test cases comparing model-predicted ampacity with that calculated from standards documents are presented.
NASA Astrophysics Data System (ADS)
Lu, Lin; Chang, Yunlong; Li, Yingmin; He, Youyou
2013-05-01
A transverse magnetic field was introduced to the arc plasma in the process of welding stainless steel tubes by high-speed tungsten inert gas (TIG) arc welding without filler wire, and the influence of the external magnetic field on welding quality was investigated. Nine sets of parameters were designed by means of an orthogonal experiment. Weld joint tensile strength and the weld form factor were taken as the main measures of welding quality. A binary quadratic nonlinear regression equation was established with magnetic induction and Ar gas flow rate as the predictors. The residual standard deviation was calculated to assess the accuracy of the regression model. The results showed that the regression model was correct and effective in calculating the tensile strength and aspect ratio of the weld. Two 3D regression models were constructed, and the influence of magnetic induction on welding quality was then investigated. A quadratic-surface fit of this kind is sketched below.
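As a rough sketch of the regression step referenced above, the code below fits a binary quadratic surface by ordinary least squares and reports the residual standard deviation. The nine data points and all numerical values are invented placeholders, not the paper's measurements.

```python
# Ordinary least-squares fit of a binary quadratic regression surface
# z = b0 + b1*x + b2*y + b3*x**2 + b4*y**2 + b5*x*y, with x = magnetic
# induction, y = Ar flow rate, z = tensile strength. The nine points are
# placeholders (an L9-style orthogonal layout), not the paper's data.
import numpy as np

x = np.array([2.0, 2.0, 2.0, 4.0, 4.0, 4.0, 6.0, 6.0, 6.0])        # mT
y = np.array([8.0, 10.0, 12.0, 8.0, 10.0, 12.0, 8.0, 10.0, 12.0])  # L/min
z = np.array([520.0, 545.0, 530.0, 560.0, 590.0, 570.0,
              540.0, 565.0, 550.0])                                 # MPa

# Design matrix holding the six quadratic-surface terms.
A = np.column_stack([np.ones_like(x), x, y, x**2, y**2, x * y])
coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)

residuals = z - A @ coeffs
# Residual standard deviation: 9 points minus 6 fitted coefficients.
rsd = np.sqrt(residuals @ residuals / (len(z) - len(coeffs)))
print("coefficients:", np.round(coeffs, 3))
print(f"residual standard deviation: {rsd:.2f} MPa")
```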
Inglis, Stephen; Melko, Roger G
2013-01-01
We implement a Wang-Landau sampling technique in quantum Monte Carlo (QMC) simulations for the purpose of calculating the Rényi entanglement entropies and associated mutual information. The algorithm converges an estimate for an analog to the density of states for stochastic series expansion QMC, allowing a direct calculation of Rényi entropies without explicit thermodynamic integration. We benchmark results for the mutual information on two-dimensional (2D) isotropic and anisotropic Heisenberg models, a 2D transverse field Ising model, and a three-dimensional Heisenberg model, confirming a critical scaling of the mutual information in cases with a finite-temperature transition. We discuss the benefits and limitations of broad sampling techniques compared to standard importance sampling methods.
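For readers unfamiliar with Wang-Landau sampling, the sketch below shows the classical flat-histogram version of the idea on a small 2D Ising model. This is only an illustration of the broad-sampling principle the paper builds on; the authors' algorithm operates within stochastic series expansion QMC and targets Rényi entropies, which this sketch does not attempt.

```python
# Classical Wang-Landau sampling of the density of states g(E) for a tiny
# 2D Ising model: a minimal illustration of the flat-histogram idea, not
# the authors' stochastic series expansion QMC algorithm.
import numpy as np

L = 4                                   # 4x4 lattice, periodic boundaries
rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    # Each nearest-neighbor bond counted once via shifted copies.
    return int(-np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1))))

log_g = {}                              # running estimate of ln g(E)
hist = {}
E = total_energy(spins)
f = 1.0                                 # ln of the modification factor

while f > 1e-6:
    for _ in range(10000):
        i, j = rng.integers(L, size=2)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * int(spins[i, j]) * int(nb)
        # Accept with probability g(E_old) / g(E_new): rare energies win.
        if np.log(rng.random()) < log_g.get(E, 0.0) - log_g.get(E + dE, 0.0):
            spins[i, j] *= -1
            E += dE
        log_g[E] = log_g.get(E, 0.0) + f
        hist[E] = hist.get(E, 0) + 1
    counts = np.array(list(hist.values()))
    if counts.min() > 0.8 * counts.mean():  # crude flatness criterion
        f /= 2.0                            # refine and start a new histogram
        hist = {}

print("visited energies:", sorted(log_g))
```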
Microscopic Shell Model Calculations for sd-Shell Nuclei
NASA Astrophysics Data System (ADS)
Barrett, Bruce R.; Dikmen, Erdal; Maris, Pieter; Shirokov, Andrey M.; Smirnova, Nadya A.; Vary, James P.
Several techniques now exist for performing detailed and accurate calculations of the structure of light nuclei, i.e., A ≤ 16. Going to heavier nuclei requires new techniques or extensions of old ones. One of these is the so-called No Core Shell Model (NCSM) with a Core approach, which involves an Okubo-Lee-Suzuki (OLS) transformation of a converged NCSM result into a single major shell, such as the sd-shell. The obtained effective two-body matrix elements can be separated into core and single-particle (s.p.) energies plus residual two-body interactions, which can be used for performing standard shell-model (SSM) calculations. As an example, an application of this procedure will be given for nuclei at the beginning of the sd-shell.
Nakatsuka, Haruo; Chiba, Keiko; Watanabe, Takao; Sawatari, Hideyuki; Seki, Takako
2016-11-01
Iodine intake by adults in farming districts in northeastern Japan was evaluated by two methods: (1) calculation based on government-approved food composition tables and (2) instrumental measurement. The correlation between these two values and a regression model for the calibration of calculated values are presented. Iodine intake was calculated, using the values in the Japan Standard Tables of Food Composition (FCT), through the analysis of duplicate samples of complete 24-h food consumption for 90 adult subjects. In cases where the iodine content of a food item was not available in the FCT, it was assumed to be zero (calculated values). Iodine content was also measured by ICP-MS (measured values). Calculated and measured values yielded geometric means (GM) of 336 and 279 μg/day, respectively. There was no statistically significant (p > 0.05) difference between calculated and measured values. The correlation coefficient was 0.646 (p < 0.05). With this high correlation coefficient, a simple regression line can be applied to estimate the measured value from the calculated value. A survey of the literature suggests that the values in this study were similar to values reported to date for Japan, and higher than those for other countries in Asia. Iodine intake of Japanese adults was 336 μg/day (GM, calculated) and 279 μg/day (GM, measured). The two values correlated well enough (correlation coefficient 0.646) that a regression model (Y = 130.8 + 1.9479X, where X and Y are the measured and calculated values, respectively) could be used to calibrate calculated values.
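In code, the reported calibration line is a one-liner; the sketch below (illustrative only, values in μg/day) evaluates the regression in its reported direction and inverts it to map a food-composition-table estimate back to a measured-equivalent value.

```python
# The study's reported regression: Y = 130.8 + 1.9479 * X, where X is the
# ICP-MS-measured intake and Y the FCT-calculated intake (ug/day).

def calculated_from_measured(x_meas):
    return 130.8 + 1.9479 * x_meas

def measured_from_calculated(y_calc):
    # Inverted form, usable to calibrate an FCT-based estimate.
    return (y_calc - 130.8) / 1.9479

print(calculated_from_measured(200.0))   # 520.38
print(measured_from_calculated(520.38))  # 200.0
```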
Filling the voids in the SRTM elevation model — A TIN-based delta surface approach
NASA Astrophysics Data System (ADS)
Luedeling, Eike; Siebert, Stefan; Buerkert, Andreas
The Digital Elevation Model (DEM) derived from NASA's Shuttle Radar Topography Mission is the most accurate near-global elevation model that is publicly available. However, it contains many data voids, mostly in mountainous terrain. This problem is particularly severe in the rugged Oman Mountains. This study presents a method to fill these voids using a fill surface derived from Russian military maps. For this we developed a new method, which is based on Triangular Irregular Networks (TINs). For each void, we extracted points around the edge of the void from the SRTM DEM and the fill surface. TINs were calculated from these points and converted to a base surface for each dataset. The fill base surface was subtracted from the fill surface, and the result added to the SRTM base surface. The fill surface could then seamlessly be merged with the SRTM DEM. For validation, we compared the resulting DEM to the original SRTM surface, to the fill DEM and to a surface calculated by the International Center for Tropical Agriculture (CIAT) from the SRTM data. We calculated the differences between measured GPS positions and the respective surfaces for 187,500 points throughout the mountain range (ΔGPS). Comparison of the means and standard deviations of these values showed that for the void areas, the fill surface was most accurate, with a standard deviation of the ΔGPS from the mean ΔGPS of 69 m, and only little accuracy was lost by merging it to the SRTM surface (standard deviation of 76 m). The CIAT model was much less accurate in these areas (standard deviation of 128 m). The results show that our method is capable of transferring the relative vertical accuracy of a fill surface to the void areas in the SRTM model, without introducing uncertainties about the absolute elevation of the fill surface. It is well suited for datasets with varying altitude biases, which is a common problem of older topographic information.
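A minimal sketch of the delta-surface idea described above, using SciPy's Delaunay-based linear interpolator in place of a GIS TIN; the function name and the grid/mask inputs are hypothetical, and the real workflow operates per void on the projected SRTM grid.

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.ndimage import binary_dilation

def fill_void(srtm, fill, void_mask):
    """Fill masked cells of `srtm` using `fill`, shifted by a delta surface."""
    # Edge cells: valid SRTM cells bordering the void.
    edge = binary_dilation(void_mask) & ~void_mask
    edge_pts = np.argwhere(edge)

    # TIN-like base surfaces interpolated from the edge points of each
    # dataset (interior void cells lie inside the edge ring, so the
    # interpolants are defined there).
    srtm_base = LinearNDInterpolator(edge_pts, srtm[edge])
    fill_base = LinearNDInterpolator(edge_pts, fill[edge])

    void_pts = np.argwhere(void_mask)
    # Delta surface: the fill data minus its own base, added to the SRTM
    # base, so the merged surface meets the SRTM values at the void edge.
    out = srtm.copy()
    out[void_mask] = srtm_base(void_pts) + (fill[void_mask] - fill_base(void_pts))
    return out
```

Because both base surfaces pass exactly through the shared edge points, the filled cells join the surrounding SRTM data seamlessly while inheriting the relative relief of the fill dataset, which is the property validated with the ΔGPS comparison above.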
Liu, S Z; Yu, L; Chen, Q; Quan, P L; Cao, X Q; Sun, X B
2017-05-06
Objective: To investigate the incidence and survival of esophageal cancer of different histological types and to understand the incidence trend and burden of esophageal cancer in Linzhou during 2003-2012. Methods: All incidence records of esophageal cancer and the reported population were collected from the Linzhou Cancer Registry for 2003-2012. Incidence rates were calculated by gender and histological type. Age-standardized incidence rates were calculated according to Segi's world standard population and the Chinese census data of 2000. The age-standardized incidence rate by world population between 2003 and 2012 was analyzed with a JoinPoint regression model, and the estimated annual percentage change (EAPC) was calculated. The 5-year survival rate was calculated with a Kaplan-Meier model. Results: There were 8,229 esophageal cancer cases in Linzhou during 2003-2012. The average annual incidence rate was 80.08 per 100,000 (8,229/10,276,481). Among all esophageal cancer cases, 7,019 (85.3%) were diagnosed as esophageal squamous cell carcinoma (ESCC). In Linzhou, the age-standardized incidence rates by Chinese standard population and by world standard population were 80.92 per 100,000 and 81.85 per 100,000 in 2003, and 67.97 per 100,000 and 68.63 per 100,000 in 2012. The JoinPoint regression model showed an EAPC of -12.9% (95% CI: -16.4% to -9.1%) for other and unspecified histological types between 2003 and 2012. The EAPC was -5.5% (95% CI: -9.2% to -1.6%) for esophageal cancer between 2007 and 2012, -5.4% (95% CI: -7.0% to -3.9%) for esophageal cancer in females between 2006 and 2012, and -4.9% (95% CI: -9.5% to -0.1%) for ESCC between 2007 and 2012. The 5-year prevalence of esophageal cancer was 215.49 per 100,000 (2,337/1,084,493), and 5,489 patients died within 5 years of diagnosis. The 5-year survival rate of esophageal cancer was 34.6% (95% CI: 33.5%-35.6%). Conclusion: Esophageal cancer showed a decreasing incidence trend in Linzhou and the survival rate was increasing, but esophageal cancer remains a major burden in Linzhou. The major histological type was ESCC, which showed a decreasing trend similar to that of esophageal cancer overall.
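For reference, direct age standardization of the kind used here reduces to a weighted average of age-specific rates. The sketch below uses the commonly quoted Segi world standard weights; the case counts and person-years are invented for illustration, not the Linzhou registry data.

```python
# Direct age standardization (illustrative): the ASR is a weighted average
# of age-specific rates, weighted by a standard population. Weights below
# are the commonly quoted Segi world standard (18 five-year age bands,
# summing to 100,000); the case counts and person-years are invented.
SEGI_WORLD = [12000, 10000, 9000, 9000, 8000, 8000, 6000, 6000, 6000,
              6000, 5000, 4000, 4000, 3000, 2000, 1000, 500, 500]

def age_standardized_rate(cases, person_years, weights=SEGI_WORLD):
    rates = [c / py for c, py in zip(cases, person_years)]  # per person-year
    asr = sum(w * r for w, r in zip(weights, rates)) / sum(weights)
    return asr * 100000                                     # per 100,000

cases = [0, 0, 0, 0, 1, 2, 5, 12, 30, 60, 110, 170,
         220, 260, 250, 200, 120, 60]
pys = [50000] * 18
print(f"ASR = {age_standardized_rate(cases, pys):.1f} per 100,000")
```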
NASA Astrophysics Data System (ADS)
Tarumi, Moto; Nakai, Hiromi
2018-05-01
This letter proposes an approximate treatment of the harmonic solvation model (HSM) assuming the solute to be a rigid body (RB-HSM). The HSM method can appropriately estimate the Gibbs free energy for condensed phases even where an ideal gas model used by standard quantum chemical programs fails. The RB-HSM method eliminates calculations for intra-molecular vibrations in order to reduce the computational costs. Numerical assessments indicated that the RB-HSM method can evaluate entropies and internal energies with the same accuracy as the HSM method but with lower calculation costs.
A Hybrid Approach To Tandem Cylinder Noise
NASA Technical Reports Server (NTRS)
Lockard, David P.
2004-01-01
Aeolian tone generation from tandem cylinders is predicted using a hybrid approach. A standard computational fluid dynamics (CFD) code is used to compute the unsteady flow around the cylinders, and the acoustics are calculated using the acoustic analogy. The CFD code is nominally second order in space and time and includes several turbulence models, but the SST k-omega model is used for most of the calculations. Significant variation is observed between laminar and turbulent cases, and with changes in the turbulence model. A two-dimensional implementation of the Ffowcs Williams-Hawkings (FW-H) equation is used to predict the far-field noise.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Leheta, D; Shvydka, D; Parsai, E
2015-06-15
Purpose: For photon dose calculation, the Philips Pinnacle Treatment Planning System (TPS) uses a collapsed cone convolution algorithm, which relies on the energy spectrum of the beam in computing the scatter component. The spectrum is modeled based on the Linac's standard commissioning data and typically is not independently verified. We explored a methodology of using transmission measurements in combination with regularization data processing to unfold Linac spectra. The measured spectra were compared to those modeled by the TPS, and the effect on patient plans was evaluated. Methods: Transmission measurements were conducted in narrow-beam geometry using a standard Farmer ionization chamber. Two attenuating materials and two build-up caps, having different atomic numbers, served to enhance discrimination between absorption of the low- and high-energy portions of the spectra, thus improving the accuracy of the results. The data were analyzed using a regularization technique implemented through spreadsheet-based calculations. Results: The unfolded spectra were found to deviate from the TPS beam models. The effect of such deviations on treatment planning was evaluated for patient plans through dose distribution calculations with either the TPS-modeled or the measured energy spectra. The differences were reviewed through comparison of isodose distributions, and quantified based on maximum dose values for critical structures. While in most cases no drastic differences in the calculated doses were observed, plans with deviations of 4 to 8% in the maximum dose values for critical structures were discovered. Anatomical sites with large scatter contributions are the most vulnerable to inaccuracies in the modeled spectrum. Conclusion: An independent check of the TPS model spectrum is highly desirable and should be included as part of commissioning of a new Linac. The effect is particularly important for dose calculations in high-heterogeneity regions. The developed approach makes acquisition of megavoltage Linac beam spectra achievable in a typical radiation oncology clinic.
Villa, Tomaso; La Barbera, Luigi; Galbusera, Fabio
2014-04-01
Preclinical evaluation of the long-term reliability of devices for lumbar fixation is mandatory before they are put on the market. The experimental setups are described in two different standards, edited by the International Organization for Standardization (ISO) and the American Society for Testing and Materials (ASTM), but the suitability of such tests to simulate actual in vivo loading had never been evaluated. The aims were to calculate, through finite element (FE) simulations, the stress in the rods of a fixator subjected to the ASTM and ISO standards, and to compare those stresses with the stresses arising in the same fixator once virtually mounted in a physiological environment and loaded with physiological forces and moments. The study design comprised FE simulations and validation experimental tests. FE models of the ISO and ASTM setups were created to conduct simulations of the tests prescribed by the standards and to calculate stresses in the rods. Validation of the simulations was performed through experimental tests; the same fixator was then virtually mounted in an L2-L4 FE model of the lumbar spine, and stresses in the rods were calculated when the spine was subjected to physiological forces and moments. The comparison between FE simulations and experimental tests showed good agreement between results obtained using the two methodologies, confirming the suitability of the FE method to evaluate stresses in the device in different loading situations. The use of a physiological load with the ASTM standard is impossible owing to the extreme severity of the ASTM configuration; in this circumstance, the presence of an anterior support is suggested. The ISO prescriptions, although the choice of setup correctly simulates the mechanical contribution of the discs, also seem to overstress the device as compared with a physiological loading condition. Some daily activities other than walking can induce a further state of stress in the device that should be taken into account in setting up new experimental procedures. In summary, the ISO standard loading prescriptions seem to be more severe than the expected physiological ones. The ASTM standard should be completed by including an anterior supporting device and declaring the value of the load to be imposed. Moreover, a further enhancement of the standards would be to simulate movements representative of daily activities other than walking. Copyright © 2014 Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Boudreau, R. D.
1973-01-01
A numerical model is developed that calculates the atmospheric corrections to infrared radiometric measurements due to absorption and emission by water vapor, carbon dioxide, and ozone. Corrections due to aerosols are not accounted for. The transmission functions for water vapor, carbon dioxide, and ozone are given. The model requires as input the vertical distribution of temperature and water vapor as determined by a standard radiosonde. The vertical distribution of carbon dioxide is assumed to be constant, and the vertical distribution of ozone is an average of observed values. The model also requires as input the spectral response function of the radiometer and the nadir angle at which the measurements were made. A listing of the FORTRAN program is given with details for its use and examples of input and output listings. Calculations for four model atmospheres are presented.
Anderson, H Glenn; Frazier, Lisa; Anderson, Stephanie L; Stanton, Robert; Gillette, Chris; Broedel-Zaugg, Kim; Yingling, Kevin
2017-05-01
Objective. To compare learning outcomes achieved from a pharmaceutical calculations course taught in a traditional lecture (lecture model) and a flipped classroom (flipped model). Methods. Students were randomly assigned to the lecture model and the flipped model. Course instructors, content, assessments, and instructional time for both models were equivalent. Overall group performance and pass rates on a standardized assessment (Pcalc OSCE) were compared at six weeks and at six months post-course completion. Results. Student mean exam scores in the flipped model were higher than those in the lecture model at six weeks and six months later. Significantly more students passed the OSCE the first time in the flipped model at six weeks; however, this effect was not maintained at six months. Conclusion. Within a 6 week course of study, use of a flipped classroom improves student pharmacy calculation skill achievement relative to a traditional lecture andragogy. Further study is needed to determine if the effect is maintained over time.
NASA Astrophysics Data System (ADS)
Paula Leite, Rodolfo; Freitas, Rodrigo; Azevedo, Rodolfo; de Koning, Maurice
2016-11-01
The Uhlenbeck-Ford (UF) model was originally proposed for the theoretical study of imperfect gases, given that all its virial coefficients can be evaluated exactly, in principle. Here, in addition to computing the previously unknown coefficients B11 through B13, we assess its applicability as a reference system in fluid-phase free-energy calculations using molecular simulation techniques. Our results demonstrate that, although the UF model itself is too soft, appropriately scaled Uhlenbeck-Ford (sUF) models provide robust reference systems that allow accurate fluid-phase free-energy calculations without the need for an intermediate reference model. Indeed, in addition to the accuracy with which their free energies are known and their convenient scaling properties, the fluid is the only thermodynamically stable phase for a wide range of sUF models. This set of favorable properties may potentially put the sUF fluid-phase reference systems on par with the standard role that harmonic and Einstein solids play as reference systems for solid-phase free-energy calculations.
Searching for new physics at the frontiers with lattice quantum chromodynamics.
Van de Water, Ruth S
2012-07-01
Numerical lattice quantum chromodynamics (QCD) simulations, when combined with experimental measurements, allow the determination of fundamental parameters of the particle-physics Standard Model and enable searches for physics beyond the Standard Model. We present the current status of lattice-QCD weak matrix element calculations needed to obtain the elements and phase of the Cabibbo-Kobayashi-Maskawa (CKM) matrix and to test the Standard Model in the quark-flavor sector. We then discuss evidence that may hint at the presence of new physics beyond the Standard Model CKM framework. Finally, we discuss two opportunities where we expect lattice QCD to play a pivotal role in searching for, and possibly discovering, new physics at upcoming high-intensity experiments: rare decays and the muon anomalous magnetic moment. The next several years may witness the discovery of new elementary particles at the Large Hadron Collider (LHC). The interplay between lattice QCD, high-energy experiments at the LHC, and high-intensity experiments will be needed to determine the underlying structure of whatever beyond-the-Standard-Model physics is realized in nature. © 2012 New York Academy of Sciences.
Modeling Nuclear Decay: A Point of Integration between Chemistry and Mathematics.
ERIC Educational Resources Information Center
Crippen, Kent J.; Curtright, Robert D.
1998-01-01
Describes four activities that use graphing calculators to model nuclear-decay phenomena. Students ultimately develop a notion about the radioactive waste produced by nuclear fission. These activities are in line with national educational standards and allow for the integration of science and mathematics. Contains 13 references. (Author/WRM)
The standard WASP7 stream transport model calculates water flow through a branching stream network that may include both free-flowing and ponded segments. This supplemental user manual documents the hydraulic algorithms, including the transport and hydrogeometry equations, the m...
Bent Bonds and Multiple Bonds.
ERIC Educational Resources Information Center
Robinson, Edward A.; Gillespie, Ronald J.
1980-01-01
Considers carbon-carbon multiple bonds in terms of Pauling's bent bond model, which allows direct calculation of double- and triple-bond lengths from the length of a CC single bond. Lengths of these multiple bonds are estimated from direct measurements on "bent-bond" models constructed of plastic tubing and standard kits. (CS)
Monte Carlo based toy model for fission process
NASA Astrophysics Data System (ADS)
Kurniadi, R.; Waris, A.; Viridi, S.
2014-09-01
There are many models and calculation techniques for obtaining a visible image of the fission yield process. In particular, fission yield can be calculated using two approaches, namely a macroscopic approach and a microscopic approach. This work proposes another calculation approach in which the nucleus is treated as a toy model; hence, the fission process does not completely represent the real fission process in nature. The toy model is formed by a Gaussian distribution of random numbers that randomizes distances, such as the distance between a particle and a central point. The scission process is started by smashing the compound nucleus central point into two parts, the left and right central points. These three points have different Gaussian distribution parameters, namely means (μCN, μL, μR) and standard deviations (σCN, σL, σR). By overlaying the three distributions, the numbers of particles (NL, NR) trapped by the central points can be obtained. This process is iterated until (NL, NR) become constant. The smashing process is repeated by changing σL and σR randomly.
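A heavily simplified reading of this toy model is sketched below: particle positions are drawn from the compound-nucleus Gaussian and repeatedly assigned to the fragment center whose Gaussian density is higher at that position, until (NL, NR) stop changing. All parameter values are illustrative guesses, not the authors' settings.

```python
# Simplified Gaussian-overlay toy model (all parameter values are
# illustrative guesses, not taken from the paper).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
A = 236                              # number of "particles"
x = rng.normal(0.0, 1.0, size=A)     # compound nucleus: mu_CN = 0, sigma_CN = 1

mu_L, sig_L = -0.5, 0.6              # left fragment center, initial guess
mu_R, sig_R = +0.5, 0.6              # right fragment center, initial guess

counts = None
for _ in range(100):                 # iterate until (NL, NR) become constant
    left = norm.pdf(x, mu_L, sig_L) > norm.pdf(x, mu_R, sig_R)
    new_counts = (int(left.sum()), int(A - left.sum()))
    if new_counts == counts:
        break
    counts = new_counts
    # Re-estimate each fragment Gaussian from the particles it trapped.
    mu_L, sig_L = x[left].mean(), x[left].std() + 1e-9
    mu_R, sig_R = x[~left].mean(), x[~left].std() + 1e-9

print("NL, NR =", counts)
```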
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2013 CFR
2013-01-01
10 Energy, ... HIGH RISE RESIDENTIAL BUILDINGS, Building Energy Cost Compliance Alternative, § 434.510 Standard calculation procedure. 510.1 The Standard Calculation Procedure consists of methods and assumptions for...
Towards a dispersive determination of the pion transition form factor
NASA Astrophysics Data System (ADS)
Leupold, Stefan; Hoferichter, Martin; Kubis, Bastian; Niecknig, Franz; Schneider, Sebastian P.
2018-01-01
We start with a brief motivation of why the pion transition form factor is interesting and, in particular, how it is related to the high-precision standard-model calculation of the gyromagnetic ratio of the muon. We then report on the current status of our ongoing project to calculate the pion transition form factor using dispersion theory. Finally, we present and discuss a wish list of experimental data that would help to improve the input for our calculations and/or to cross-check our results.
Beyond standard model calculations with Sherpa
Höche, Stefan; Kuttimalai, Silvan; Schumann, Steffen; ...
2015-03-24
We present a fully automated framework as part of the Sherpa event generator for the computation of tree-level cross sections in beyond Standard Model scenarios, making use of model information given in the Universal FeynRules Output format. Elementary vertices are implemented into C++ code automatically and provided to the matrix-element generator Comix at runtime. Widths and branching ratios for unstable particles are computed from the same building blocks. The corresponding decays are simulated with spin correlations. Parton showers, QED radiation and hadronization are added by Sherpa, providing a full simulation of arbitrary BSM processes at the hadron level.
Simulation of gamma-initiated showers
NASA Technical Reports Server (NTRS)
Stamenov, Y.; Vancov, K.; Vodenicharova, T.
1985-01-01
The main average characteristics of the muon, electron, and hadron components of extensive air showers were calculated using a standard model of nuclear interactions. The results obtained are in good agreement with Tien Shan experimental data.
Mark Hitchcock; Alan Ager
1992-01-01
National Forests in the Pacific Northwest Region have incorporated elk habitat standards into Forest plans to ensure that elk habitat objectives are met on multiple use land allocations. Many Forests have employed versions of the habitat effectiveness index (HEI) as a standard method to evaluate habitat. Field application of the HEI model unfortunately is a formidable...
Mapping AmeriFlux footprints: Towards knowing the flux source area across a network of towers
NASA Astrophysics Data System (ADS)
Menzer, O.; Pastorello, G.; Metzger, S.; Poindexter, C.; Agarwal, D.; Papale, D.
2014-12-01
The AmeriFlux network collects long-term carbon, water and energy flux measurements obtained with the eddy covariance method. In order to attribute fluxes to specific areas of the land surface, flux source calculations are essential. Consequently, footprint models can support flux up-scaling exercises to larger regions, often based on remote sensing data. However, flux footprints are not currently being routinely calculated; different approaches exist but have not been standardized. In part, this is due to varying instrumentation and data processing methods at the site level. The goal of this work is to map tower footprints for a future standardized AmeriFlux product to be generated at the network level. These footprints can be estimated by analytical models, Lagrangian simulations, and large-eddy simulations. However, for many sites, the datasets currently submitted to central databases generally do not include all variables required. The AmeriFlux network is moving to collection of raw data and expansion of the variables requested from sites, giving the possibility to calculate all parameters and variables needed to run most of the available footprint models. In this pilot study, we are applying state of the art footprint models across a subset of AmeriFlux sites, to evaluate the feasibility and merit of developing standardized footprint results. In addition to comparing outcomes from several footprint models, we will attempt to verify and validate the results in two ways: (i) Verification of our footprint calculations at sites where footprints have been experimentally estimated. (ii) Validation at towers situated in heterogeneous landscapes: here, variations in the observed fluxes are expected to correlate with spatiotemporal variations of the source area composition. Once implemented, the footprint results can be used as additional information within the AmeriFlux database that can support data interpretation and data assimilation. Lastly, we will explore the expandability of this approach to other flux networks by collaborating with and including sites from the ICOS and NEON networks in our analyses. This can enable utilizing the footprint model output to improve network interoperability, thus further promoting synthesis analyses and understanding of system-level questions in the future.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivest, R; Venkataraman, S; McCurdy, B
The objective of this work is to commission the 6MV-SRS beam model in COMPASS (v2.1, IBA Dosimetry) and validate its use for patient-specific QA of hypofractionated prostate treatments. The COMPASS system consists of a 2D ion chamber array (MatriXX Evolution), an independent gantry angle sensor, and associated software. The system can either directly calculate or reconstruct (using measured detector responses) a 3D dose distribution on the patient CT dataset for plan verification. Beam models are developed and commissioned in the same manner as a beam model is commissioned in a standard treatment planning system. Model validation was initially performed by comparing both COMPASS calculations and reconstructions to measured open-field beam data. Next, 10 hypofractionated prostate RapidArc plans were delivered to both the COMPASS system and a phantom with ion chamber and film inserted. COMPASS dose distributions calculated and reconstructed on the phantom CT dataset were compared to the chamber and film measurements. The mean (± standard deviation) difference between the COMPASS reconstructed dose and the ion chamber measurement was 1.4 ± 1.0%, with a maximum discrepancy of 2.6%. Corresponding values for the COMPASS calculation were 0.9 ± 0.9% and 2.6%, respectively. The average gamma agreement index (3%/3mm) between COMPASS reconstruction and film was 96.7% and 95.3% when using 70% and 20% dose thresholds, respectively; the corresponding values for the COMPASS calculation were 97.1% and 97.1%. Based on our results, COMPASS can be used for the patient-specific QA of hypofractionated prostate treatments delivered with the 6MV-SRS beam.
MODTRAN4 radiative transfer modeling for atmospheric correction
NASA Astrophysics Data System (ADS)
Berk, Alexander; Anderson, Gail P.; Bernstein, Lawrence S.; Acharya, Prabhat K.; Dothe, H.; Matthew, Michael W.; Adler-Golden, Steven M.; Chetwynd, James H.; Richtsmeier, Steven C.; Pukall, Brian; Allred, Clark L.; Jeong, Laila S.; Hoke, Michael L.
1999-10-01
MODTRAN4, the latest publicly released version of MODTRAN, provides many new and important options for modeling atmospheric radiation transport. A correlated-k algorithm improves multiple scattering, eliminates Curtis-Godson averaging, and introduces Beer's Law dependencies into the band model. An optimized 15 cm⁻¹ band model provides more than a 10-fold increase in speed over the standard MODTRAN 1 cm⁻¹ band model with comparable accuracy when higher-spectral-resolution results are unnecessary. The MODTRAN ground surface has been upgraded to include the effects of Bidirectional Reflectance Distribution Functions (BRDFs) and adjacency. The BRDFs are entered using standard parameterizations and are coupled into line-of-sight surface radiance calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polsdofer, E; Crilly, R
Purpose: This study investigates the effect of eye size and eccentricity on doses to critical tissues by simulating doses in the Plaque Simulator (v. 6.3.1) software. Present OHSU plaque brachytherapy treatment focuses on delivering radiation to the tumor as measured with ocular ultrasound, plus a small margin, and assumes the orbit has the dimensions of a "standard eye." Accurately modeling the dimensions of the orbit requires a high-resolution ocular CT. This study quantifies how standard differences in equatorial diameter and eccentricity affect calculated doses to critical structures, in order to determine whether adding the CT scan to the treatment planning process is justified. Methods: Tumors of 10 mm × 10 mm × 5 mm were modeled at the 12 o'clock position with a latitude of 45 degrees. Right eyes were modeled at a number of equatorial diameters from 17.5 to 28 mm for each of the standard non-notched COMS plaques with silastic inserts. The COMS plaques were fully loaded with uniform activity, centered on the tumor, and prescribed to a common tumor dose (85 Gy/100 hours). Variations in the calculated doses to normal structures were examined to see if the changes were significant. Results: The calculated doses to normal structures show a marked dependence on eye geometry. This is exemplified by the fovea dose, which more than doubled in the smaller eyes and nearly halved in the larger model. The calculated dose also depended significantly on plaque size, even though all plaques delivered the same dose to the prescription point. Conclusion: The variation in dose with eye dimension fully justifies the addition of a high-resolution ocular CT to the planning technique. Attention must also be paid to plaque size, beyond simply covering the tumor, when considering normal tissue dose.
NASA Astrophysics Data System (ADS)
Beecken, B. P.; Fossum, E. R.
1996-07-01
Standard statistical theory is used to calculate how the accuracy of a conversion-gain measurement depends on the number of samples. During the development of a theoretical basis for this calculation, a model is developed that predicts how the noise levels from different elements of an ideal detector array are distributed. The model can also be used to determine how the accuracy of measured noise depends on the size of the sample. These features have been confirmed by experiment, thus enhancing the credibility of the method for calculating the uncertainty of a measured conversion gain. Keywords: detector-array uniformity, charge-coupled device, active pixel sensor.
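The dependence of measurement accuracy on sample size can be seen numerically with a photon-transfer-style simulation, sketched below under the assumption of a shot-noise-limited signal, so that the conversion gain K is estimated as mean/variance in DN. The gain value and frame counts are arbitrary.

```python
# Photon-transfer-style simulation of a conversion-gain measurement
# (illustrative; K_true and frame counts are arbitrary). For a
# shot-noise-limited signal in DN, var = mean / K, so K = mean / var,
# and the spread of repeated estimates shrinks as the sample grows.
import numpy as np

rng = np.random.default_rng(2)
K_true = 4.0                # electrons per DN
mean_electrons = 2000.0

def measure_gain(n_samples):
    electrons = rng.poisson(mean_electrons, size=n_samples)
    signal_dn = electrons / K_true
    return signal_dn.mean() / signal_dn.var(ddof=1)

for n in (10, 100, 1000):
    estimates = [measure_gain(n) for _ in range(200)]
    print(f"n = {n:4d}: K = {np.mean(estimates):.2f} "
          f"+/- {np.std(estimates):.2f} e-/DN")
```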
Pressure calculation in hybrid particle-field simulations
NASA Astrophysics Data System (ADS)
Milano, Giuseppe; Kawakatsu, Toshihiro
2010-12-01
In the framework of a recently developed scheme for hybrid particle-field simulation techniques in which self-consistent field (SCF) theory and particle models (molecular dynamics) are combined [J. Chem. Phys. 130, 214106 (2009)], we developed a general formulation for the calculation of the instantaneous pressure and stress tensor. The expressions have been derived from the statistical mechanical definition of the pressure, starting from the expression for the free-energy functional in SCF theory. An implementation of the derived formulation suitable for hybrid particle-field molecular dynamics/self-consistent field simulations is described. A series of test simulations on model systems is reported, comparing the calculated pressure with values obtained from standard molecular dynamics simulations based on pair potentials.
Universality, twisted fans, and the Ising model. [Renormalization, two-loop calculations, scale...]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dash, J.W.; Harrington, S.J.
1975-06-24
Critical exponents are evaluated for the Ising model using universality in the form of "twisted fans" previously introduced in Reggeon field theory. The universality is with respect to scales induced through renormalization. Exact twists are obtained at β = 0 in one loop for D = 2, 3 with ν = 0.75 and 0.60, respectively. In two loops one obtains ν ≈ 1.32 and 0.68. No twists are obtained for η, however. The results of the standard two-loop calculations are also presented as functions of a scale.
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets comprising 585 test points. The mean deviation was found to be -0.4 percent and the standard deviation 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, the mean deviation from 54 data points was -1.3 percent and the standard deviation 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard deviation). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
Squeezed light from conventionally pumped multi-level lasers
NASA Technical Reports Server (NTRS)
Ralph, T. C.; Savage, C. M.
1992-01-01
We have calculated the amplitude squeezing in the output of several conventionally pumped multi-level lasers. We present results which show that standard laser models can produce significantly squeezed outputs in certain parameter ranges.
Calculation of change in brain temperatures due to exposure to a mobile phone
NASA Astrophysics Data System (ADS)
Van Leeuwen, G. M. J.; Lagendijk, J. J. W.; Van Leersum, B. J. A. M.; Zwamborn, A. P. M.; Hornsleth, S. N.; Kotte, A. N. T. J.
1999-10-01
In this study we evaluated, for a realistic head model, the 3D temperature rise induced by a mobile phone. This was done numerically with the consecutive use of an FDTD model to predict the absorbed electromagnetic power distribution and a thermal model describing bioheat transfer by both conduction and blood flow. We calculated a maximum rise in brain temperature of 0.11 °C for an antenna with an average emitted power of 0.25 W, the maximum value in common mobile phones, and indefinite exposure; the maximum temperature rise is at the skin. The power distributions were characterized by a maximum SAR, averaged over an arbitrarily shaped 10 g volume, of approximately 1.6 W kg⁻¹. Although these power distributions are not in compliance with all proposed safety standards, the temperature rises are far too small to have lasting effects. We verified our simulations by measuring the skin temperature rise experimentally. Our simulation method can be instrumental in the further development of safety standards.
The discount rate in the economic evaluation of prevention: a thought experiment.
Bonneux, L; Birnie, E
2001-02-01
In the standard economic model of evaluation, constant discount rates strongly devalue the long-term health benefits of prevention. This study shows that it is unlikely that this reflects societal preference. A thought experiment using a cause-elimination life table calculates the savings from eliminating cardiovascular disease from the Dutch population, and a cost-effectiveness analysis calculates the acceptable costs of such an intervention at a threshold of 18,000 Euro per saved life year. The data are cause-specific mortality (all cardiovascular causes of death and all other causes) and health care costs (all costs of cardiovascular disease and all other costs) by age for men in 1994. At a 0% discount rate, an intervention eliminating cardiovascular disease may cost 71,100 Euro. At the same threshold but at discount rates of 3% or 6%, the same intervention may cost 8,100 Euro (8.8 times less) or 1,100 Euro (65 times less). The standard economic model needs more realistic, duration-dependent models of time preference that reflect societal preference.
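The arithmetic behind these ratios is ordinary constant-rate discounting; the sketch below illustrates it with an invented stream of life-years gained decades in the future, not the study's life-table data.

```python
# Constant-rate discounting of deferred health benefits (illustrative
# numbers, not the study's life-table data). One life-year gained t years
# from now has present value (1 + r) ** -t.

def present_value(benefit_by_year, rate):
    return sum(b / (1.0 + rate) ** t for t, b in enumerate(benefit_by_year))

# Prevention scenario: one life-year gained in each of years 30 through 40.
benefits = [0.0] * 30 + [1.0] * 11
for r in (0.00, 0.03, 0.06):
    pv = present_value(benefits, r)
    print(f"discount rate {r:.0%}: present value = {pv:.2f} life-years")
```

At 0% the 11 life-years count in full; at 3% they shrink to roughly 4, and at 6% to about 1.5, which is the mechanism behind the large reductions in acceptable cost reported above.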
Precision electroweak physics at LEP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mannelli, M.
1994-12-01
Copious event statistics, a precise understanding of the LEP energy scale, and a favorable experimental situation at the Z^0 resonance have allowed the LEP experiments both to provide dramatic confirmation of the Standard Model of strong and electroweak interactions and to place substantially improved constraints on the parameters of the model. The author concentrates on those measurements relevant to the electroweak sector. It will be seen that the precision of these measurements sensitively probes the structure of the Standard Model at the one-loop level, where the calculation of the observables measured at LEP is affected by the value chosen for the top quark mass. One finds that the LEP measurements are consistent with the Standard Model, but only if the mass of the top quark lies within a restricted range of about 20 GeV.
Garabedian, Stephen P.
1986-01-01
A nonlinear, least-squares regression technique for the estimation of ground-water flow model parameters was applied to the regional aquifer underlying the eastern Snake River Plain, Idaho. The technique uses a computer program to simulate two-dimensional, steady-state ground-water flow. Hydrologic data for the 1980 water year were used to calculate recharge rates, boundary fluxes, and spring discharges. Ground-water use was estimated from irrigated land maps and crop consumptive-use figures. These estimates of ground-water withdrawal, recharge rates, and boundary flux, along with leakance, were used as known values in the model calibration of transmissivity. Leakance values were adjusted between regression solutions by comparing model-calculated to measured spring discharges. In other simulations, recharge and leakance also were calibrated as prior-information regression parameters, which limits the variation of these parameters using a normalized standard error of estimate. Results from a best-fit model indicate a wide areal range in transmissivity from about 0.05 to 44 feet squared per second and in leakance from about 2.2x10^-9 to 6.0x10^-8 feet per second per foot. Along with parameter values, model statistics also were calculated, including the coefficient of correlation between calculated and observed head (0.996), the standard error of the estimates for head (40 feet), and the parameter coefficients of variation (about 10-40 percent). Additional boundary flux was added in some areas during calibration to achieve proper fit to ground-water flow directions. Model fit improved significantly when areas that violated model assumptions were removed. It also improved slightly when y-direction (northwest-southeast) transmissivity values were larger than x-direction (northeast-southwest) transmissivity values. The model was most sensitive to changes in recharge, and in some areas, to changes in transmissivity, particularly near the spring discharge area from Milner Dam to King Hill.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonelli, Perry Edward
A low-level model-to-model interface is presented that will enable independent models to be linked into an integrated system of models. The interface is based on a standard set of functions that contain appropriate export and import schemas that enable models to be linked with no changes to the models themselves. These ideas are presented in the context of a specific multiscale material problem that couples atomistic-based molecular dynamics calculations to continuum calculations of fluid flow. These simulations will be used to examine the influence of interactions of the fluid with an adjacent solid on the fluid flow. The interface will also be examined by adding it to an already existing modeling code, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and comparing it with our own molecular dynamics code.
Sensitivity of solar g-modes to varying G cosmologies
NASA Technical Reports Server (NTRS)
Guenther, D. B.; Sills, Ken; Demarque, Pierre; Krauss, Lawrence M.
1995-01-01
The sensitivity of the solar g-mode oscillation spectrum to variability in the universal gravitational constant G is described. Solar models in varying G cosmologies were constructed by evolving a zero-age main-sequence stellar model to the Sun's current age, while allowing the value of G to change according to the power law G(t) ∝ t^(-β), where β ≈ (ΔG/G)/H and H is the Hubble constant. All solar models were constrained to the observed luminosity and radius at the current age of the Sun by adjusting the helium abundance and the mixing-length parameter of the models in the usual way for standard stellar models. Low-l g-mode oscillation periods were calculated for each of the models and compared to the claimed observation of the solar g-mode oscillation spectrum by Hill & Gu (1990). If one accepts Hill & Gu's claims, then within the uncertainties of the physics of the solar model calculation, our models rule out all but (ΔG/G)/H less than approximately 0.05. In other words, we conclude that G could not have varied by more than 2% over the past 4.5 Gyr, the lifetime of the present-day Sun. This result lends independent support to the validity of the standard solar model.
Identification of the numerical model of FEM in reference to measurements in situ
NASA Astrophysics Data System (ADS)
Jukowski, Michał; Bec, Jarosław; Błazik-Borowa, Ewa
2018-01-01
The paper deals with the verification of various numerical models against pilot-phase measurements of a rail bridge subjected to dynamic loading. Three types of FEM models were elaborated for this purpose, and static, modal, and dynamic analyses were performed. The study consisted of measuring the accelerations of structural components of the bridge as trains passed. Based on these records, FFT analysis was performed, the main natural frequencies of the bridge were determined, and the structural damping ratio and the dynamic amplification factor (DAF) were calculated and compared with the standard values. Calculations were made using Autodesk Simulation Multiphysics (Algor).
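The frequency-extraction step is a standard FFT of the measured acceleration records. A minimal sketch on a synthetic signal; the sampling rate and mode frequencies below are invented, not the bridge's:

    import numpy as np

    fs = 200.0                                  # sampling rate, Hz (assumed)
    t = np.arange(0, 20, 1 / fs)
    accel = (np.sin(2 * np.pi * 2.4 * t)        # two synthetic "bridge modes"
             + 0.5 * np.sin(2 * np.pi * 7.1 * t)
             + 0.2 * np.random.randn(t.size))   # measurement noise

    spec = np.abs(np.fft.rfft(accel * np.hanning(t.size)))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    # crude peak pick: report the strongest spectral bin above 1 Hz
    band = freqs > 1.0
    print("dominant natural frequency (Hz):", freqs[band][np.argmax(spec[band])])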
NASA Technical Reports Server (NTRS)
Crane, R. K.; Blood, D. W.
1979-01-01
A single model is proposed as a standard of comparison for other models dealing with rain attenuation problems in system design and experimentation. Refinements to the Global Rain Production Model are incorporated. Path loss and noise estimation procedures are provided as the basic input to systems design for earth-to-space microwave links operating at frequencies from 1 to 300 GHz. Topics covered include gaseous absorption, attenuation by rain, ionospheric and tropospheric scintillation, low elevation angle effects, radome attenuation, diversity schemes, link calculation, and receiver noise emission by atmospheric gases, rain, and antenna contributions.
NASA Astrophysics Data System (ADS)
Sandhu, Paramvir; Zong, Jing; Yang, Delian; Wang, Qiang
2013-05-01
To highlight the importance of quantitative and parameter-fitting-free comparisons among different models/methods, we revisited the comparisons made by Groot and Madden [J. Chem. Phys. 108, 8713 (1998), 10.1063/1.476300] and Chen et al. [J. Chem. Phys. 122, 104907 (2005), 10.1063/1.1860351] between their dissipative particle dynamics (DPD) simulations of the DPD model and the self-consistent field (SCF) calculations of the "standard" model done by Matsen and Bates [Macromolecules 29, 1091 (1996), 10.1021/ma951138i] for diblock copolymer (DBC) A-B melts. The small values of the invariant degree of polymerization used in the DPD simulations do not justify the use of the fluctuation theory of Fredrickson and Helfand [J. Chem. Phys. 87, 697 (1987), 10.1063/1.453566] by Groot and Madden, and their fitting between the DPD interaction parameters and the Flory-Huggins χ parameter in the "standard" model also has no rigorous basis. Even with their use of the fluctuation theory and the parameter-fitting, we do not find the "quantitative match" for the order-disorder transition of symmetric DBC claimed by Groot and Madden. For lamellar and cylindrical structures, we find that the system fluctuations/correlations decrease the bulk period and greatly suppress the large depletion of the total segmental density at the A-B interfaces as well as its oscillations in A- and B-domains predicted by our SCF calculations of the DPD model. At all values of the A-block volume fractions in the copolymer f (which are integer multiples of 0.1), our SCF calculations give the same sequence of phase transitions with varying χN as the "standard" model, where N denotes the number of segments on each DBC chain. All phase boundaries, however, are shifted to higher χN due to the finite interaction range in the DPD model, except at f = 0.1 (and 0.9), where χN at the transition between the disordered phase and the spheres arranged on a body-centered cubic lattice is lower due to N = 10 in the DPD model. Finally, in 11 of the total 20 cases (f-χN combinations) studied in the DPD simulations, a morphology different from the SCF prediction was obtained due to the differences between these two methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Philipona, J. R.; Dutton, Ellsworth G.; Stoffel, T.
2001-06-04
Because atmospheric longwave radiation is one of the most fundamental elements of an expected climate change, there has been a strong interest in improving measurements and model calculations in recent years. Important questions are how reliable and consistent are atmospheric longwave radiation measurements and calculations, and what are the uncertainties? The First International Pyrgeometer and Absolute Sky-scanning Radiometer Comparison, which was held at the Atmospheric Radiation Measurement program's Southern Great Plains site in Oklahoma, answers these questions at least for midlatitude summer conditions and reflects the state of the art for atmospheric longwave radiation measurements and calculations. The 15 participating pyrgeometers were all calibration-traced standard instruments chosen from a broad international community. Two new chopped pyrgeometers also took part in the comparison. An absolute sky-scanning radiometer (ASR), which includes a pyroelectric detector and a reference blackbody source, was used for the first time as a reference standard instrument to field calibrate pyrgeometers during clear-sky nighttime measurements. Owner-provided and uniformly determined blackbody calibration factors were compared. Remarkable improvements and higher pyrgeometer precision were achieved with field calibration factors. Results of nighttime and daytime pyrgeometer precision and absolute uncertainty are presented for eight consecutive days of measurements, during which period downward longwave irradiance varied between 260 and 420 W m-2. Comparisons between pyrgeometers and the absolute ASR, the atmospheric emitted radiance interferometer, and the radiative transfer models LBLRTM and MODTRAN show a surprisingly good agreement of <2 W m-2 for nighttime atmospheric longwave irradiance measurements and calculations.
NASA Astrophysics Data System (ADS)
Hawes, Frederick T.; Berk, Alexander; Richtsmeier, Steven C.
2016-05-01
A validated, polarimetric 3-dimensional simulation capability, P-MCScene, is being developed by generalizing Spectral Sciences' Monte Carlo-based synthetic scene simulation model, MCScene, to include calculation of all four Stokes components. P-MCScene polarimetric optical databases will be generated by a new version (MODTRAN7) of the government-standard MODTRAN radiative transfer algorithm. The conversion of MODTRAN6 to a polarimetric model is being accomplished by (1) introducing polarimetric data, (2) vectorizing the MODTRAN radiation calculations, and (3) integrating the newly revised and validated vector discrete ordinate model VDISORT3. Early results, presented here, demonstrate a clear pathway to the long-term goal of fully validated polarimetric models.
Refiners Switch to RFG Complex Model
1998-01-01
On January 1, 1998, domestic and foreign refineries and importers must stop using the "simple" model and begin using the "complex" model to calculate emissions of volatile organic compounds (VOC), toxic air pollutants (TAP), and nitrogen oxides (NOx) from motor gasoline. The primary difference between the two models is that some refineries may have to meet stricter standards for the sulfur and olefin content of the reformulated gasoline (RFG) they produce, and all refineries will now be held accountable for NOx emissions. Requirements for calculating emissions from conventional gasoline under the anti-dumping rule change similarly for exhaust TAP and NOx. However, the change to the complex model is not expected to result in an increase in the price premium for RFG or to constrain supplies.
NASA Astrophysics Data System (ADS)
Monson, D. J.; Seegmiller, H. L.; McConnaughey, P. K.
1990-06-01
In this paper experimental measurements are compared with Navier-Stokes calculations using seven different turbulence models for the internal flow in a two-dimensional U-duct. The configuration is representative of many internal flows of engineering interest that experience strong curvature. In an effort to improve agreement, this paper tests several versions of the two-equation k-epsilon turbulence model including the standard version, an extended version with a production range time scale, and a version that includes curvature time scales. Each is tested in its high and low Reynolds number formulations. Calculations using these new models and the original mixing length model are compared here with measurements of mean and turbulence velocities, static pressure and skin friction in the U-duct at two Reynolds numbers. The comparisons show that only the low Reynolds number version of the extended k-epsilon model does a reasonable job of predicting the important features of this flow at both Reynolds numbers tested.
SU(6) GUT breaking on a projective plane
NASA Astrophysics Data System (ADS)
Anandakrishnan, Archana; Raby, Stuart
2013-03-01
We consider a 6-dimensional supersymmetric SU(6) gauge theory and compactify two extra-dimensions on a multiply-connected manifold with non-trivial topology. The SU(6) is broken down to the Standard Model gauge groups in two steps by an orbifold projection, followed by a Wilson line. The Higgs doublets of the low energy electroweak theory come from a chiral adjoint of SU(6). We thus have gauge-Higgs unification. The three families of the Standard Model can either be located in the 6D bulk or at 4D N=1 supersymmetric fixed points. We calculate the Kaluza-Klein spectrum of states arising as a result of the orbifolding. We also calculate the threshold corrections to the coupling constants due to this tower of states at the lowest compactification scale. We study the regions of parameter space of this model where the threshold corrections are consistent with low energy physics. We find that the couplings receive only logarithmic corrections at all scales. This feature can be attributed to the large N=2 6D SUSY of the underlying model.
Refining new-physics searches in B→Dτν with lattice QCD.
Bailey, Jon A; Bazavov, A; Bernard, C; Bouchard, C M; Detar, C; Du, Daping; El-Khadra, A X; Foley, J; Freeland, E D; Gámiz, E; Gottlieb, Steven; Heller, U M; Kim, Jongjeong; Kronfeld, A S; Laiho, J; Levkova, L; Mackenzie, P B; Meurice, Y; Neil, E T; Oktay, M B; Qiu, Si-Wei; Simone, J N; Sugar, R; Toussaint, D; Van de Water, R S; Zhou, Ran
2012-08-17
The semileptonic decay channel B→Dτν is sensitive to the presence of a scalar current, such as that mediated by a charged-Higgs boson. Recently, the BABAR experiment reported the first observation of the exclusive semileptonic decay B→Dτ⁻ν, finding an approximately 2σ disagreement with the standard-model prediction for the ratio R(D)=BR(B→Dτν)/BR(B→Dℓν), where ℓ = e,μ. We compute this ratio of branching fractions using hadronic form factors computed in unquenched lattice QCD and obtain R(D)=0.316(12)(7), where the errors are statistical and total systematic, respectively. This result is the first standard-model calculation of R(D) from ab initio full QCD. Its error is smaller than that of previous estimates, primarily due to the reduced uncertainty in the scalar form factor f_0(q^2). Our determination of R(D) is approximately 1σ higher than previous estimates and, thus, reduces the tension with experiment. We also compute R(D) in models with electrically charged scalar exchange, such as the type-II two-Higgs-doublet model. Once again, our result is consistent with, but approximately 1σ higher than, previous estimates for phenomenologically relevant values of the scalar coupling in the type-II model. As a by-product of our calculation, we also present the standard-model prediction for the longitudinal-polarization ratio P_L(D)=0.325(4)(3).
A robust approach to using of the redundant information in the temperature calibration
NASA Astrophysics Data System (ADS)
Strnad, R.; Kňazovická, L.; Šindelář, M.; Kukal, J.
2013-09-01
Calibration laboratories use standard procedures for calculating calibration model coefficients based on well-described standards (EN 60751, ITS-90, EN 60584, etc.). In practice, sensors are mostly calibrated at more points than the model requires, and the redundant information is used to validate the model. This paper presents the influence of including all measured points, weighted with respect to their uncertainties, in the fitted models using standard weighted least-squares methods. A special case that handles the different levels of uncertainty of the measured points through a robust approach will be discussed. This leads to different minimization criteria and a different uncertainty propagation methodology, and it also eliminates the influence of outlier measurements on the calibration. The practical part presents three cases of this approach: industrial calibration according to EN 60751, SPRT calibration according to ITS-90, and thermocouple calibration according to EN 60584.
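For the EN 60751 case, the fit in question is a weighted least-squares estimate of the Callendar-Van Dusen coefficients, with each calibration point weighted by the reciprocal of its squared uncertainty. A sketch with hypothetical Pt100 data:

    import numpy as np

    # Weighted least-squares fit of the EN 60751 (Callendar-Van Dusen)
    # coefficients A and B for a Pt100 above 0 degC: R(t) = R0(1 + A*t + B*t^2).
    # Calibration data and uncertainties are hypothetical.
    t = np.array([0.0, 50.0, 100.0, 150.0, 200.0])          # deg C
    R = np.array([100.0, 119.40, 138.51, 157.33, 175.86])   # ohm
    u = np.array([0.005, 0.01, 0.01, 0.02, 0.02])           # ohm, per-point uncertainty

    R0 = 100.0
    X = np.column_stack([R0 * t, R0 * t**2])   # model: R - R0 = R0*(A*t + B*t^2)
    y = R - R0
    W = 1.0 / u**2                             # weights: reciprocal squared uncertainty

    XtW = X.T * W                              # weighted normal equations
    A, B = np.linalg.solve(XtW @ X, XtW @ y)
    print(f"A = {A:.6e}, B = {B:.6e}")

Dropping the weights (W = 1) reproduces the ordinary least-squares fit, which lets one see directly how much the outcome depends on honoring the per-point uncertainties.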
Kass, Andrea E; Balantekin, Katherine N; Fitzsimmons-Craft, Ellen E; Jacobi, Corinna; Wilfley, Denise E; Taylor, C Barr
2017-03-01
Eating disorders (EDs) are serious health problems affecting college students. This article aimed to estimate the costs, in United States (US) dollars, of a stepped care model for online prevention and treatment among US college students to inform meaningful decisions regarding resource allocation and adoption of efficient care delivery models for EDs on college campuses. Using a payer perspective, we estimated the costs of (1) delivering an online guided self-help (GSH) intervention to individuals with EDs, including the costs of "stepping up" the proportion expected to "fail"; (2) delivering an online preventive intervention compared to a "wait and treat" approach to individuals at ED risk; and (3) applying the stepped care model across a population of 1,000 students, compared to standard care. Combining results for online GSH and preventive interventions, we estimated a stepped care model would cost less and result in fewer individuals needing in-person psychotherapy (after receiving less-intensive intervention) compared to standard care, assuming everyone in need received intervention. A stepped care model was estimated to achieve modest cost savings compared to standard care, but these estimates need to be tested with sensitivity analyses. Model assumptions highlight the complexities of cost calculations to inform resource allocation, and considerations for a disseminable delivery model are presented. Efforts are needed to systematically measure the costs and benefits of a stepped care model for EDs on college campuses, improve the precision and efficacy of ED interventions, and apply these calculations to non-US care systems with different cost structures. © 2017 Wiley Periodicals, Inc.
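The underlying arithmetic is a simple expected-cost comparison. The sketch below shows its shape; every prevalence, cost, and step-up rate is an assumption for illustration, not an estimate from the article:

    # Back-of-the-envelope stepped-care cost comparison for a population
    # of 1,000 students. All numbers are illustrative assumptions.
    at_risk, with_ed = 200, 50                 # hypothetical counts out of 1,000
    cost_online, cost_gsh, cost_therapy = 50, 200, 1500   # USD per person, assumed
    fail_rate_gsh = 0.4                        # proportion stepped up to therapy

    stepped = (at_risk * cost_online           # online prevention for those at risk
               + with_ed * cost_gsh            # guided self-help for those with EDs
               + with_ed * fail_rate_gsh * cost_therapy)   # step-up cases
    standard = with_ed * cost_therapy          # everyone in need gets psychotherapy

    print(f"stepped care: ${stepped:,.0f}, standard care: ${standard:,.0f}")

As the article notes, conclusions from this kind of calculation hinge on the assumed step-up rate and unit costs, which is why sensitivity analyses are needed.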
NASA Astrophysics Data System (ADS)
Mimasu, Ken; Sanz, Verónica; Williams, Ciaran
2016-08-01
We present predictions for the associated production of a Higgs boson at NLO+PS accuracy, including the effect of anomalous interactions between the Higgs and gauge bosons. We present our results in different frameworks, one in which the interaction vertex between the Higgs boson and Standard Model W and Z bosons is parameterized in terms of general Lorentz structures, and one in which Electroweak symmetry breaking is manifestly linear and the resulting operators arise through a six-dimensional effective field theory framework. We present analytic calculations of the Standard Model and Beyond the Standard Model contributions, and discuss the phenomenological impact of the higher order pieces. Our results are implemented in the NLO Monte Carlo program MCFM, and interfaced to shower Monte Carlos through the Powheg box framework.
Hadron spectrum in quenched lattice QCD and distribution of zero modes
NASA Astrophysics Data System (ADS)
Iwasaki, Yoichi
1989-06-01
I report the results of the calculation of the hadron spectrum with the standard one-plaquette gauge action on a 16^3 x 48 lattice at β=5.85 in quenched lattice QCD. The result remarkably agrees with that of quark potential models for the case where the quark mass is equal to or larger than the strange quark mass, even when one uses the standard one-plaquette gauge action. This is contrary to what is stated in the literature. We clarify the reason for the discrepancy, paying close attention to systematic errors in numerical calculations. Further, I show the distribution of zero modes of the quark matrix, both for an RG-improved gauge action and for the standard action, and discuss the difference between the two cases.
Calculation of electron Dose Point Kernel in water with GEANT4 for medical application
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guimaraes, C. C.; Sene, F. F.; Martinelli, J. R.
2009-06-03
The rapid insertion of new technologies in medical physics in the last years, especially in nuclear medicine, has been followed by a great development of faster Monte Carlo algorithms. GEANT4 is a Monte Carlo toolkit that contains the tools to simulate the problems of particle transport through matter. In this work, GEANT4 was used to calculate the dose-point-kernel (DPK) for monoenergetic electrons in water, which is an important reference medium for nuclear medicine. The three different physical models of electromagnetic interactions provided by GEANT4 - Low Energy, Penelope and Standard - were employed. To verify the adequacy of these models, the results were compared with references from the literature. For all energies and physical models, the agreement between calculated DPKs and reported values is satisfactory.
NASA Technical Reports Server (NTRS)
Krueger, Ronald; Goetze, Dirk; Ransom, Jonathon (Technical Monitor)
2006-01-01
Strain energy release rates were computed along straight delamination fronts of Double Cantilever Beam, End-Notched Flexure and Single Leg Bending specimens using the Virtual Crack Closure Technique (VCCT). The results were based on finite element analyses using ABAQUS and ANSYS and were calculated from the finite element results using the same post-processing routine to assure a consistent procedure. Mixed-mode strain energy release rates obtained from post-processing finite element results were in good agreement for all element types used and all specimens modeled. Compared to previous studies, the models made of solid twenty-node hexahedral elements and solid eight-node incompatible mode elements yielded excellent results. For both codes, models made of standard brick elements and elements with reduced integration did not correctly capture the distribution of the energy release rate across the width of the specimens for the models chosen. The results suggested that element types with similar formulation yield matching results independent of the finite element software used. For comparison, mixed-mode strain energy release rates were also calculated within ABAQUS/Standard using the VCCT for ABAQUS add-on. For all specimens modeled, mixed-mode strain energy release rates obtained from ABAQUS finite element results using post-processing were almost identical to results calculated using the VCCT for ABAQUS add-on.
The general ventilation multipliers calculated by using a standard Near-Field/Far-Field model.
Koivisto, Antti J; Jensen, Alexander C Ø; Koponen, Ismo K
2018-05-01
In conceptual exposure models, the transmission of pollutants in an imperfectly mixed room is usually described with general ventilation multipliers. This is the approach used in the Advanced REACH Tool (ART) and Stoffenmanager® exposure assessment tools. The multipliers used in these tools were reported by Cherrie (1999; http://dx.doi.org/10.1080/104732299302530) and Cherrie et al. (2011; http://dx.doi.org/10.1093/annhyg/mer092), who developed them by positing input values for a standard Near-Field/Far-Field (NF/FF) model and then calculating ratios between NF and FF concentrations. This study revisited the calculations that produce the multipliers used in ART and Stoffenmanager and found that the recalculated general ventilation multipliers were up to 2.8 times (280%) higher than the values reported by Cherrie (1999), and that the recalculated NF and FF multipliers were up to 1.2 times (17%) smaller for 1-hr exposure and up to 1.7 times (41%) smaller for 8-hr exposure than the values reported by Cherrie et al. (2011). Considering that Stoffenmanager and the ART are classified as higher-tier regulatory exposure assessment tools, the errors in the general ventilation multipliers should not be ignored. We recommend revising the general ventilation multipliers. A better solution is to integrate the NF/FF model into Stoffenmanager and the ART.
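For reference, at steady state the standard two-box NF/FF model reduces to two lines of algebra: the far-field concentration is the emission rate divided by the room supply air flow, and the near field adds a term governed by the inter-zone exchange flow β. A sketch with illustrative inputs:

    # Steady-state concentrations from the standard two-box
    # Near-Field/Far-Field model underlying the multipliers above.
    # Input values are illustrative.
    S = 10.0      # emission rate, mg/min
    Q = 10.0      # room supply air flow, m^3/min
    beta = 5.0    # NF/FF exchange flow, m^3/min (0.5 * free surface area * air speed)

    C_FF = S / Q                 # far-field concentration, mg/m^3
    C_NF = C_FF + S / beta       # near-field concentration, mg/m^3

    print(f"C_NF = {C_NF:.2f} mg/m^3, C_FF = {C_FF:.2f} mg/m^3")
    print(f"NF/FF concentration ratio: {C_NF / C_FF:.2f}")

The multipliers discussed above are, in effect, tabulations of such ratios for posited combinations of Q and β (plus the transient terms for 1-hr and 8-hr averages), which is why errors in the posited inputs propagate directly into the tools.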
Muthu, Satish; Childress, Amy; Brant, Jonathan
2014-08-15
Membrane fouling was assessed from a fundamental standpoint within the context of the Derjaguin-Landau-Verwey-Overbeek (DLVO) model. The DLVO model requires that the properties of the membrane and foulant(s) be quantified. Membrane surface charge (zeta potential) and free energy values are characterized using streaming potential and contact angle measurements, respectively. Comparing theoretical assessments of membrane-colloid interactions between research groups requires that the variability of the measured inputs be established. The impact of such variability in input values on the outcome of interfacial models must be quantified to determine an acceptable variance in inputs. An interlaboratory study was conducted to quantify the variability in streaming potential and contact angle measurements when using standard protocols. The propagation of uncertainty from these errors was evaluated in terms of their impact on the quantitative and qualitative conclusions drawn from extended DLVO (XDLVO) calculated interaction terms. The error introduced into XDLVO calculated values was of the same magnitude as the calculated free energy values at contact and at any given separation distance. For two independent laboratories to draw similar quantitative conclusions regarding membrane-foulant interfacial interactions, the standard error in contact angle values must be ⩽2.5°, while that for the zeta potential values must be ⩽7 mV. Copyright © 2014 Elsevier Inc. All rights reserved.
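One way to see how such input scatter propagates is a Monte Carlo sweep through the interfacial calculation. In the sketch below, f() is a placeholder for the XDLVO free-energy expression, and the input means and standard errors are illustrative:

    import numpy as np

    # Monte Carlo propagation of measurement uncertainty through an
    # interfacial-energy-type calculation. f() is a stand-in, not the
    # actual XDLVO model; means and standard errors are illustrative.
    rng = np.random.default_rng(1)
    n = 100_000

    contact_angle = rng.normal(60.0, 2.5, n)   # deg; std error at the stated limit
    zeta = rng.normal(-20.0, 7.0, n)           # mV; std error at the stated limit

    def f(theta_deg, zeta_mv):
        # placeholder combining the two inputs; replace with the real
        # XDLVO free-energy expression
        return np.cos(np.radians(theta_deg)) * 50.0 + 0.1 * zeta_mv

    out = f(contact_angle, zeta)
    print(f"output: {out.mean():.2f} +/- {out.std():.2f} (1 sigma)")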
CPsuperH2.3: An updated tool for phenomenology in the MSSM with explicit CP violation
NASA Astrophysics Data System (ADS)
Lee, J. S.; Carena, M.; Ellis, J.; Pilaftsis, A.; Wagner, C. E. M.
2013-04-01
We describe the Fortran code CPsuperH2.3, which incorporates the following updates compared with its predecessor CPsuperH2.0. It implements improved calculations of the Higgs-boson masses and mixing including stau contributions and finite threshold effects on the tau-lepton Yukawa coupling. It incorporates the LEP limits on the processes e+e-→HiZ,HiHj and the CMS limits on Hi→τ¯τ obtained from 4.6 fb-1 of data at a center-of-mass energy of 7 TeV. It also includes the decay mode Hi→Zγ and the Schiff-moment contributions to the electric dipole moments of Mercury and Radium 225, with several calculational options for the case of Mercury. These additions make CPsuperH2.3 a suitable tool for analyzing possible CP-violating effects in the MSSM in the era of the LHC and a new generation of EDM experiments.
CPsuperH2.3: an Updated Tool for Phenomenology in the MSSM with Explicit CP Violation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, J.S.; Carena, M.; Ellis, J.
2013-04-01
We describe the Fortran code CPsuperH2.3, which incorporates the following updates compared with its predecessor CPsuperH2.0. It implements improved calculations of the Higgs-boson masses and mixing including stau contributions and finite threshold effects on the tau-lepton Yukawa coupling. It incorporates the LEP limits on the processes e+e- → H_iZ, H_iH_j and the CMS limits on H_i → τ̄τ obtained from 4.6 fb^-1 of data at a center-of-mass energy of 7 TeV. It also includes the decay mode H_i → Zγ and the Schiff-moment contributions to the electric dipole moments of Mercury and Radium 225, with several calculational options for the case of Mercury. These additions make CPsuperH2.3 a suitable tool for analyzing possible CP-violating effects in the MSSM in the era of the LHC and a new generation of EDM experiments. Program summary: Program title: CPsuperH2.3 Catalogue identifier: ADSR_v3_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSR_v3_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 24058 No. of bytes in distributed program, including test data, etc.: 158721 Distribution format: tar.gz Programming language: Fortran77. Computer: PC running under Linux and computers in Unix environment. Operating system: Linux. RAM: 32 MB Classification: 11.1. Does the new version supersede the previous version?: Yes Catalogue identifier of previous version: ADSR_v2_0 Journal reference of previous version: Comput. Phys. Comm. 180(2009)312 Nature of problem: The calculations of mass spectrum, decay widths and branching ratios of the neutral and charged Higgs bosons in the Minimal Supersymmetric Standard Model with explicit CP violation have been improved. The program is based on renormalization-group-improved diagrammatic calculations that include dominant higher-order logarithmic and threshold corrections, b-quark and τ-lepton Yukawa-coupling resummation effects and improved treatment of Higgs-boson pole-mass shifts. The couplings of the Higgs bosons to the Standard Model gauge bosons and fermions, to their supersymmetric partners and all the trilinear and quartic Higgs-boson self-couplings are also calculated. Also included are a full treatment of the 4x4 (2x2) neutral (charged) Higgs propagator matrix together with the center-of-mass dependent Higgs-boson couplings to gluons and photons, and an integrated treatment of several B-meson observables. The new implementations include the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon, as well as the anomalous magnetic moment of the muon, (g_μ-2), the top-quark decays, improved calculations of the Higgs-boson masses and mixing including stau contributions, the LEP limits, and the CMS limits on H_i → τ̄τ. It also implements the decay mode H_i → Zγ and includes the corresponding Standard Model branching ratios of the three neutral Higgs bosons in the array GAMBRN(IM,IWB=2,IH). Solution method: One-dimensional numerical integration for several Higgs-decay modes and EDMs, iterative treatment of the threshold corrections and Higgs-boson pole masses, and the numerical diagonalization of the neutralino mass matrix.
Reasons for new version: Mainly to provide the full calculations of the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon as well as (g_μ-2), improved calculations of the Higgs-boson masses and mixing including stau contributions, the LEP limits, the CMS limits on H_i → τ̄τ, the top-quark decays, the H_i → Zγ decay, and the corresponding Standard Model branching ratios of the three neutral Higgs bosons. Summary of revisions: Full calculations of the EDMs of Thallium, neutron, Mercury, Deuteron, Radium, and muon as well as (g_μ-2). Improved treatment of Higgs-boson masses and mixing including stau contributions. The LEP limits. The CMS limits on H_i → τ̄τ. The top-quark decays. The H_i → Zγ decay. The corresponding Standard Model branching ratios of the three neutral Higgs bosons. Running time: Less than 1.0 s.
SPheno 3.1: extensions including flavour, CP-phases and models beyond the MSSM
NASA Astrophysics Data System (ADS)
Porod, W.; Staub, F.
2012-11-01
We describe recent extensions of the program SPheno including flavour aspects, CP-phases, R-parity violation and low energy observables. In case of flavour mixing all masses of supersymmetric particles are calculated including the complete flavour structure and all possible CP-phases at the 1-loop level. We give details on implemented seesaw models, low energy observables and the corresponding extension of the SUSY Les Houches Accord. Moreover, we comment on the possibilities to include MSSM extensions in SPheno. Catalogue identifier: ADRV_v2_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADRV_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 154062 No. of bytes in distributed program, including test data, etc.: 1336037 Distribution format: tar.gz Programming language: Fortran95. Computer: PC running under Linux, should run in every Unix environment. Operating system: Linux, Unix. Classification: 11.6. Catalogue identifier of previous version: ADRV_v1_0 Journal reference of previous version: Comput. Phys. Comm. 153(2003)275 Does the new version supersede the previous version?: Yes Nature of problem: The first issue is the determination of the masses and couplings of supersymmetric particles in various supersymmetric models, the R-parity conserved MSSM with generation mixing and including CP-violating phases, various seesaw extensions of the MSSM and the MSSM with bilinear R-parity breaking. Low energy data on Standard Model fermion masses, gauge couplings and electroweak gauge boson masses serve as constraints. Radiative corrections from supersymmetric particles to these inputs must be calculated. Theoretical constraints on the soft SUSY breaking parameters from a high scale theory are imposed and the parameters at the electroweak scale are obtained from the high scale parameters by evaluating the corresponding renormalisation group equations. These parameters must be consistent with the requirement of correct electroweak symmetry breaking. The second issue is to use the obtained masses and couplings for calculating decay widths and branching ratios of supersymmetric particles as well as the cross sections for these particles in electron-positron annihilation. The third issue is to calculate low energy constraints in the B-meson sector, such as BR(b → sγ) and ΔM_{B_s}, rare lepton decays, such as BR(μ → eγ), the SUSY contributions to anomalous magnetic moments and electric dipole moments of leptons, the SUSY contributions to the ρ parameter as well as lepton flavour violating Z decays. Solution method: The renormalisation connecting a high scale and the electroweak scale is calculated by the Runge-Kutta method. Iteration provides a solution consistent with the multi-boundary conditions. In case of three-body decays and for the calculation of initial state radiation Gaussian quadrature is used for the numerical solution of the integrals. Reasons for new version: Inclusion of new models as well as additional observables. Moreover, a new standard for data transfer had been established, which is now supported. Summary of revisions: The already existing models have been extended to include also CP-violation and flavour mixing. The data transfer is done using the so-called SLHA2 standard. In addition new models have been included: all three types of seesaw models as well as bilinear R-parity violation.
Moreover, additional observables are calculated: branching ratios for flavour violating lepton decays, EDMs of leptons and of the neutron, the CP-violating mass difference in the B-meson sector and branching ratios for flavour violating b-quark decays. Restrictions: In case of R-parity violation the cross sections are not calculated. Running time: 0.2 seconds on an Intel(R) Core(TM)2 Duo CPU T9900 at 3.06 GHz
Conformal standard model with an extended scalar sector
NASA Astrophysics Data System (ADS)
Latosinski, Adam; Lewandowski, Adrian; Meissner, Krzysztof A.; Nicolai, Hermann
2015-10-01
We present an extended version of the Conformal Standard Model (characterized by the absence of any new intermediate scales between the electroweak scale and the Planck scale) with an enlarged scalar sector coupling to right-chiral neutrinos. The scalar potential and the Yukawa couplings involving only right-chiral neutrinos are invariant under a new global symmetry SU(3)_N that complements the standard U(1)_{B-L} symmetry, and is broken explicitly only by the Yukawa interaction, of order O(10^-6), coupling right-chiral neutrinos and the electroweak lepton doublets. We point out four main advantages of this enlargement, namely: (1) the economy of the (non-supersymmetric) Standard Model, and thus its observational success, is preserved; (2) thanks to the enlarged scalar sector the RG-improved one-loop effective potential is everywhere positive with a stable global minimum, thereby avoiding the notorious instability of the Standard Model vacuum; (3) the pseudo-Goldstone bosons resulting from spontaneous breaking of the SU(3)_N symmetry are natural Dark Matter candidates with calculable small masses and couplings; and (4) the Majorana Yukawa coupling matrix acquires a form naturally adapted to leptogenesis. The model is made perturbatively consistent up to the Planck scale by imposing the vanishing of quadratic divergences at the Planck scale ('softly broken conformal symmetry'). Observable consequences of the model occur mainly via the mixing of the new scalars and the Standard Model Higgs boson.
High precision UTDR measurements by sonic velocity compensation with reference transducer.
Stade, Sam; Kallioinen, Mari; Mänttäri, Mika; Tuuva, Tuure
2014-07-02
An ultrasonic sensor design with sonic velocity compensation was developed to improve the accuracy of distance measurement in membrane modules. High-accuracy real-time distance measurements are needed in membrane fouling and compaction studies. The accuracy of sonic velocity compensation with a reference transducer is compared with that of the sonic velocity calculated from the measured temperature and pressure using the model by Belogol'skii, Sekoyan et al. In the experiments the temperature was changed from 25 to 60 °C at pressures of 0.1, 0.3 and 0.5 MPa. The set measurement distance was 17.8 mm. Distance measurements with sonic velocity compensation were over ten times more accurate than the ones calculated based on the model. Using the sonic velocity measured by the reference transducer, the standard deviations of the distance measurements varied from 0.6 to 2.0 µm, while using the calculated sonic velocity the standard deviations were 21-39 µm. In industrial liquors, not only the temperature and the pressure, which were studied in this paper, but also the properties of the filtered solution, such as solute concentration, density, viscosity, etc., may vary greatly, leading to inaccuracy in the use of the Belogol'skii, Sekoyan et al. model. Therefore, calibration of the sonic velocity with reference transducers is needed for accurate distance measurements.
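The compensation itself is simple arithmetic: measure the transit time over a known reference path to obtain the in-situ sonic velocity, then apply that velocity to the pulse-echo time on the measurement path. A sketch with invented timing values:

    # Sonic velocity compensation with a reference transducer.
    # All timing values below are invented for illustration.
    L_ref = 0.0100           # known reference path length, m
    t_ref = 6.6225e-6        # measured one-way transit time on reference path, s

    v = L_ref / t_ref        # in-situ sonic velocity, m/s (~1510 m/s here)

    t_echo = 2.358e-5        # round-trip echo time on the measurement path, s
    d = v * t_echo / 2.0     # pulse-echo distance

    print(f"sonic velocity: {v:.1f} m/s, distance: {d * 1e3:.3f} mm")

Because the reference path sits in the same liquid at the same temperature and pressure, changes in solution properties cancel out of the distance estimate, which is the advantage over a temperature-and-pressure-based velocity model.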
Genkawa, Takuma; Shinzawa, Hideyuki; Kato, Hideaki; Ishikawa, Daitaro; Murayama, Kodai; Komiyama, Makoto; Ozaki, Yukihiro
2015-12-01
An alternative baseline correction method for diffuse reflection near-infrared (NIR) spectra, searching region standard normal variate (SRSNV), was proposed. Standard normal variate (SNV) is an effective pretreatment method for baseline correction of diffuse reflection NIR spectra of powder and granular samples; however, its baseline correction performance depends on the NIR region used for SNV calculation. To search for an optimal NIR region for baseline correction using SNV, SRSNV employs moving window partial least squares regression (MWPLSR), and an optimal NIR region is identified based on the root mean square error (RMSE) of cross-validation of the partial least squares regression (PLSR) models with the first latent variable (LV). The performance of SRSNV was evaluated using diffuse reflection NIR spectra of mixture samples consisting of wheat flour and granular glucose (0-100% glucose at 5% intervals). From the obtained NIR spectra of the mixture in the 10 000-4000 cm^-1 region at 4 cm^-1 intervals (1501 spectral channels), a series of spectral windows consisting of 80 spectral channels was constructed, and then SNV spectra were calculated for each spectral window. Using these SNV spectra, a series of PLSR models with the first LV for glucose concentration was built. A plot of RMSE versus the spectral window position obtained using the PLSR models revealed that the 8680-8364 cm^-1 region was optimal for baseline correction using SNV. In the SNV spectra calculated using the 8680-8364 cm^-1 region (SRSNV spectra), a remarkable relative intensity change between a band due to wheat flour at 8500 cm^-1 and that due to glucose at 8364 cm^-1 was observed owing to successful baseline correction using SNV. A PLSR model with the first LV based on the SRSNV spectra yielded a determination coefficient (R^2) of 0.999 and an RMSE of 0.70%, while a PLSR model with three LVs based on SNV spectra calculated in the full spectral region gave an R^2 of 0.995 and an RMSE of 2.29%. Additional evaluation of SRSNV was carried out using diffuse reflection NIR spectra of marzipan and corn samples, and PLSR models based on SRSNV spectra showed good prediction results. These evaluation results indicate that SRSNV is effective in baseline correction of diffuse reflection NIR spectra and provides regression models with good prediction accuracy.
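The SNV transform at the core of SRSNV is per-spectrum centering and scaling over the chosen window. A minimal sketch; the MWPLSR window search is only indicated, not implemented:

    import numpy as np

    # Standard normal variate (SNV): each spectrum is centered and scaled
    # by its own mean and standard deviation over the chosen window.
    # `spectra` is a (samples x channels) array.
    def snv(spectra):
        mu = spectra.mean(axis=1, keepdims=True)
        sd = spectra.std(axis=1, ddof=1, keepdims=True)
        return (spectra - mu) / sd

    # SRSNV idea (sketch): slide an 80-channel window across the spectrum,
    # fit a 1-LV PLSR model on each window's SNV spectra, and keep the
    # window with the lowest cross-validated RMSE.
    def window_snv(spectra, start, width=80):
        return snv(spectra[:, start:start + width])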
Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR
NASA Astrophysics Data System (ADS)
Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng
2017-06-01
The purpose of this study is to build a hypertension prediction model by examining the meteorological factors associated with hypertension incidence. Standardized data on relative humidity, air temperature, visibility, wind speed, and air pressure in Lanzhou from 2010 to 2012 (with the maximum, minimum, and average calculated over 5-day units) were selected as the input variables of Support Vector Regression (SVR), and standardized data on hypertension incidence for the same period were the output variables; the optimal prediction parameters were obtained by a cross-validation algorithm, and an SVR forecast model for hypertension incidence was built through SVR learning and training. The result shows that the hypertension prediction model comprises 15 input variables, the training accuracy is 0.005, and the final error is 0.0026389. The forecast accuracy of the SVR model is 97.1429%, which is higher than that of a statistical forecast equation and a neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, with simple calculation, small error, good fit to historical samples, and good independent-sample forecast capability.
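A sketch of the described workflow using scikit-learn's SVR with cross-validated parameter selection; the arrays, parameter grid, and fold count are placeholders, not the study's settings:

    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.svm import SVR

    # 15 standardized meteorological inputs (max/min/mean of 5 variables
    # per 5-day unit) and standardized admission counts. Placeholders only.
    X = np.random.rand(200, 15)
    y = np.random.rand(200)

    search = GridSearchCV(
        SVR(kernel="rbf"),
        param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1], "epsilon": [0.005, 0.01]},
        cv=5,                        # cross-validation picks the parameters
    )
    search.fit(X, y)
    print("best parameters:", search.best_params_)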
Optimization Based Efficiencies in First Order Reliability Analysis
NASA Technical Reports Server (NTRS)
Peck, Jeffrey A.; Mahadevan, Sankaran
2003-01-01
This paper develops a method for updating the gradient vector of the limit state function in reliability analysis using Broyden's rank one updating technique. In problems that use a commercial code as a black box, the gradient calculations are usually done using a finite difference approach, which becomes very expensive for large system models. The proposed method replaces the finite difference gradient calculations in a standard first order reliability method (FORM) with Broyden's quasi-Newton technique. The resulting algorithm of Broyden updates within a FORM framework (BFORM) is used to run several example problems, and the results are compared to standard FORM results. It is found that BFORM typically requires fewer function evaluations than FORM to converge to the same answer.
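The rank-one update itself is a secant correction of the previous gradient estimate, so each new iterate costs one limit-state evaluation instead of a full finite-difference stencil. A sketch, with a quadratic limit state as the test function:

    import numpy as np

    # Broyden-style rank-one (secant) update of the limit-state gradient:
    # correct the previous gradient estimate with the newest function value
    # instead of re-computing a finite-difference gradient.
    def broyden_gradient_update(grad, x_old, x_new, g_old, g_new):
        dx = x_new - x_old
        return grad + (g_new - g_old - grad @ dx) / (dx @ dx) * dx

    # Example with a quadratic limit state g(x) = x.x - 4 (true gradient 2x)
    g = lambda x: x @ x - 4.0
    x0, x1 = np.array([1.0, 1.0]), np.array([1.2, 0.9])
    grad0 = 2 * x0                            # assume gradient at x0 is known
    grad1 = broyden_gradient_update(grad0, x0, x1, g(x0), g(x1))
    print(grad1)                              # secant approximation at x1

Here the update gives [2.2, 1.9] against the exact [2.4, 1.8], using only the two function values; in BFORM this estimate is refreshed at every FORM iteration.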
Single Top Production at Next-to-Leading Order in the Standard Model Effective Field Theory.
Zhang, Cen
2016-04-22
Single top production processes at hadron colliders provide information on the relation between the top quark and the electroweak sector of the standard model. We compute the next-to-leading order QCD corrections to the three main production channels: t-channel, s-channel, and tW associated production, in the standard model including operators up to dimension six. The calculation can be matched to parton shower programs and can therefore be directly used in experimental analyses. The QCD corrections are found to significantly impact the extraction of the current limits on the operators, because of both the improved accuracy and the better precision of the theoretical predictions. In addition, the distributions of some of the key discriminating observables are modified in a nontrivial way, which could change the interpretation of measurements in terms of UV complete models.
Impact of combustion products from Space Shuttle launches on ambient air quality
NASA Technical Reports Server (NTRS)
Dumbauld, R. K.; Bowers, J. F.; Cramer, H. E.
1974-01-01
The present work describes multilayer diffusion models, and a computer program implementing them, developed to predict the impact of ground clouds formed during Space Shuttle launches on ambient air quality. The diffusion models are based on the Gaussian plume equation for an instantaneous volume source. Cloud growth is estimated on the basis of measurable meteorological parameters: the standard deviation of the wind azimuth angle, the standard deviation of the wind elevation angle, vertical wind-speed shear, vertical wind-direction shear, and the depth of the surface mixing layer. Calculations using these models indicate that Space Shuttle launches under a variety of meteorological regimes at Kennedy Space Center and Vandenberg AFB are unlikely to violate the exposure standards for HCl; similar results have been obtained for CO and Al2O3. However, the possibility that precipitation scavenging of the ground cloud might result in an acidic rain that could damage vegetation has not been investigated.
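The core of such a model is the Gaussian puff equation for an instantaneous volume source. A sketch, with the dispersion parameters (which the models derive from the listed meteorological inputs) simply assumed, and ground reflection ignored:

    import numpy as np

    # Concentration from an instantaneous Gaussian puff at an offset
    # (dx, dy, dz) from the cloud centroid. Sigmas are assumed here; the
    # models derive them from the measured wind statistics.
    def puff_concentration(q, dx, dy, dz, sx, sy, sz):
        norm = q / ((2 * np.pi) ** 1.5 * sx * sy * sz)
        return norm * np.exp(-0.5 * ((dx / sx) ** 2 + (dy / sy) ** 2 + (dz / sz) ** 2))

    # e.g. 1000 kg (1.0e9 mg) of HCl, 200 m cloud spreads, receptor 500 m away
    c = puff_concentration(1.0e9, 500.0, 0.0, 0.0, 200.0, 200.0, 200.0)
    print(f"concentration: {c:.2e} mg/m^3")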
Quantum crystallographic charge density of urea
Wall, Michael E.
2016-06-08
Standard X-ray crystallography methods use free-atom models to calculate mean unit-cell charge densities. Real molecules, however, have shared charge that is not captured accurately using free-atom models. To address this limitation, a charge density model of crystalline urea was calculated using high-level quantum theory and was refined against publicly available ultra-high-resolution experimental Bragg data, including the effects of atomic displacement parameters. The resulting quantum crystallographic model was compared with models obtained using spherical atom or multipole methods. Despite using only the same number of free parameters as the spherical atom model, the agreement of the quantum model with the data is comparable to the multipole model. The static, theoretical crystalline charge density of the quantum model is distinct from the multipole model, indicating the quantum model provides substantially new information. Hydrogen thermal ellipsoids in the quantum model were very similar to those obtained using neutron crystallography, indicating that quantum crystallography can increase the accuracy of the X-ray crystallographic atomic displacement parameters. Lastly, the results demonstrate the feasibility and benefits of integrating fully periodic quantum charge density calculations into ultra-high-resolution X-ray crystallographic model building and refinement.
NASA Astrophysics Data System (ADS)
Altenkamp, Lukas; Boggia, Michele; Dittmaier, Stefan
2018-04-01
We consider an extension of the Standard Model by a real singlet scalar field with a ℤ2-symmetric Lagrangian and spontaneous symmetry breaking with a vacuum expectation value for the singlet. Considering the lighter of the two scalars of the theory to be the 125 GeV Higgs particle, we parametrize the scalar sector by the mass of the heavy Higgs boson, a mixing angle α, and a scalar Higgs self-coupling λ_12. Taking into account theoretical constraints from perturbativity and vacuum stability, we compute next-to-leading-order electroweak and QCD corrections to the decays h → WW/ZZ → 4 fermions of the light Higgs boson for some scenarios proposed in the literature. We formulate two renormalization schemes and investigate the conversion of the input parameters between the schemes, finding sizeable effects. Solving the renormalization-group equations for the MS-bar parameters α and λ_12, we observe a significantly reduced scale and scheme dependence in the next-to-leading-order results. For some scenarios suggested in the literature, the total decay width for the process h → 4f is computed as a function of the mixing angle and compared to the width of a corresponding Standard Model Higgs boson, revealing deviations below 10%. Differential distributions do not show significant distortions by effects beyond the Standard Model. The calculations are implemented in the Monte Carlo generator Prophecy4f, which is ready for applications in data analyses in the framework of the singlet extension.
NASA Technical Reports Server (NTRS)
Rauch, T.; Rudkowski, A.; Kampka, D.; Werner, K.; Kruk, J. W.; Moehler, S.
2014-01-01
Context. In the framework of the Virtual Observatory (VO), the German Astrophysical VO (GAVO) developed the registered service TheoSSA (Theoretical Stellar Spectra Access). It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code, generally for all effective temperatures, surface gravities, and elemental compositions. We will establish a database of SEDs of flux standards that are easily accessible via TheoSSA's web interface. Aims. The OB-type subdwarf Feige 110 is a standard star for flux calibration. State-of-the-art non-local thermodynamic equilibrium stellar-atmosphere models that consider opacities of species up to trans-iron elements will be used to provide a reliable synthetic spectrum to compare with observations. Methods. In the case of Feige 110, we demonstrate that the model reproduces not only its overall continuum shape from the far-ultraviolet (FUV) to the optical wavelength range but also the numerous metal lines exhibited in its FUV spectrum. Results. We present a state-of-the-art spectral analysis of Feige 110. We determined Teff = 47 250 +/- 2000 K, log g = 6.00 +/- 0.20, and the abundances of He, N, P, S, Ti, V, Cr, Mn, Fe, Co, Ni, Zn, and Ge. Ti, V, Mn, Co, Zn, and Ge were identified for the first time in this star. Upper abundance limits were derived for C, O, Si, Ca, and Sc. Conclusions. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of astronomical data and cross-calibration between different instruments can be based on models and SEDs calculated with state-of-the-art model atmosphere codes.
A new radiation infrastructure for the Modular Earth Submodel System (MESSy, based on version 2.51)
NASA Astrophysics Data System (ADS)
Dietmüller, Simone; Jöckel, Patrick; Tost, Holger; Kunze, Markus; Gellhorn, Catrin; Brinkop, Sabine; Frömming, Christine; Ponater, Michael; Steil, Benedikt; Lauer, Axel; Hendricks, Johannes
2016-06-01
The Modular Earth Submodel System (MESSy) provides an interface to couple submodels to a base model via a highly flexible data management facility (Jöckel et al., 2010). In this paper we present the four new radiation-related submodels RAD, AEROPT, CLOUDOPT, and ORBIT. The submodel RAD (including the shortwave radiation scheme RAD_FUBRAD) simulates the radiative transfer, the submodel AEROPT calculates the aerosol optical properties, the submodel CLOUDOPT calculates the cloud optical properties, and the submodel ORBIT is responsible for Earth orbit calculations. These submodels are coupled via the standard MESSy infrastructure and are largely based on the original radiation scheme of the general circulation model ECHAM5, but expanded with additional features. These features comprise, among others, user-friendly and flexibly controllable (by namelists) online radiative forcing calculations by multiple diagnostic calls of the radiation routines. With this, it is now possible to calculate the radiative forcing (instantaneous as well as stratosphere-adjusted) of various greenhouse gases simultaneously in a single simulation, as well as the radiative forcing of cloud perturbations. Examples of online radiative forcing calculations in the ECHAM/MESSy Atmospheric Chemistry (EMAC) model are presented.
Optimal and Most Exact Confidence Intervals for Person Parameters in Item Response Theory Models
ERIC Educational Resources Information Center
Doebler, Anna; Doebler, Philipp; Holling, Heinz
2013-01-01
The common way to calculate confidence intervals for item response theory models is to assume that the standardized maximum likelihood estimator for the person parameter θ is normally distributed. However, this approximation is often inadequate for short and medium test lengths. As a result, the coverage probabilities fall below the given…
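The interval being criticized is the standard Wald construction: the point estimate plus or minus a normal quantile times the standard error, with the standard error taken as the inverse square root of the test information. A minimal sketch:

    import math

    # Wald 95% confidence interval for the person parameter theta,
    # assuming the ML estimator is approximately normal.
    def wald_ci95(theta_hat, test_information):
        se = 1.0 / math.sqrt(test_information)
        return theta_hat - 1.96 * se, theta_hat + 1.96 * se

    # e.g. a short test with total information 4.0 at the estimate:
    print(wald_ci95(0.3, 4.0))   # (-0.68, 1.28) -- wide, and for short
                                 # tests the nominal coverage can fail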
PWR Facility Dose Modeling Using MCNP5 and the CADIS/ADVANTG Variance-Reduction Methodology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blakeman, Edward D; Peplow, Douglas E.; Wagner, John C
2007-09-01
The feasibility of modeling a pressurized-water-reactor (PWR) facility and calculating dose rates at all locations within the containment and adjoining structures using MCNP5 with mesh tallies is presented. Calculations of dose rates resulting from neutron and photon sources from the reactor (operating and shut down for various periods) and the spent fuel pool, as well as for the photon source from the primary coolant loop, were all of interest. Identification of the PWR facility, development of the MCNP-based model and automation of the run process, calculation of the various sources, and development of methods for visually examining mesh tally files and extracting dose rates were all a significant part of the project. Advanced variance reduction, which was required because of the size of the model and the large amount of shielding, was performed via the CADIS/ADVANTG approach. This methodology uses an automatically generated three-dimensional discrete ordinates model to calculate adjoint fluxes from which MCNP weight windows and source bias parameters are generated. Investigative calculations were performed using a simple block model and a simplified full-scale model of the PWR containment, in which the adjoint source was placed in various regions. In general, it was shown that placement of the adjoint source on the periphery of the model provided adequate results for regions reasonably close to the source (e.g., within the containment structure for the reactor source). A modification to the CADIS/ADVANTG methodology was also studied in which a global adjoint source is weighted by the reciprocal of the dose response calculated by an earlier forward discrete ordinates calculation. This method showed improved results over those using the standard CADIS/ADVANTG approach, and its further investigation is recommended for future efforts.
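In CADIS, the adjoint (importance) flux from the deterministic calculation fixes both the biased source and the consistent weight-window centers. A toy sketch of those two formulas; the mesh values are invented:

    import numpy as np

    # CADIS in one line per quantity: given an adjoint ("importance")
    # flux per cell and the true source, build the biased source and the
    # consistent weight-window centers. Toy placeholder arrays.
    phi_adj = np.array([1e-2, 1e-1, 1.0, 10.0])   # adjoint flux per cell
    q = np.array([0.7, 0.2, 0.1, 0.0])            # normalized true source

    R = np.sum(q * phi_adj)                       # estimated detector response
    q_biased = q * phi_adj / R                    # CADIS biased source (sums to 1)
    w_center = R / phi_adj                        # weight-window centers per cell

    print("biased source:", q_biased)
    print("weight-window centers:", w_center)

Particles are thus born preferentially in important regions but with correspondingly reduced weights, keeping the estimate unbiased while cutting the variance.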
Taming Many-Parameter BSM Models with Bayesian Neural Networks
NASA Astrophysics Data System (ADS)
Kuchera, M. P.; Karbo, A.; Prosper, H. B.; Sanchez, A.; Taylor, J. Z.
2017-09-01
The search for physics Beyond the Standard Model (BSM) is a major focus of large-scale high energy physics experiments. One method is to look for specific deviations from the Standard Model that are predicted by BSM models. In cases where the model has a large number of free parameters, standard search methods become intractable due to computation time. This talk presents results using Bayesian Neural Networks, a supervised machine learning method, to enable the study of higher-dimensional models. The popular phenomenological Minimal Supersymmetric Standard Model was studied as an example of the feasibility and usefulness of this method. Graphics Processing Units (GPUs) are used to expedite the calculations. Cross-section predictions for 13 TeV proton collisions will be presented. My participation in the Conference Experience for Undergraduates (CEU) in 2004-2006 exposed me to the national and global significance of cutting-edge research. At the 2005 CEU, I presented work from the previous summer's SULI internship at Lawrence Berkeley Laboratory, where I learned to program while working on the Majorana Project. That work inspired me to follow a similar research path, which led me to my current work on computational methods applied to BSM physics.
Flame trench analysis of NLS vehicles
NASA Technical Reports Server (NTRS)
Zeytinoglu, Nuri
1993-01-01
The present study takes the initial steps toward establishing better flame trench design criteria for future National Launch System vehicles. A three-dimensional finite element computer model for predicting the transient thermal and structural behavior of the flame trench walls was developed using both the I-DEAS and MSC/NASTRAN software packages. The results of JANNAF Standardized Plume flowfield calculations of the sea-level exhaust plumes of the Space Shuttle Main Engine (SSME), the Space Transportation Main Engine (STME), and the Advanced Solid Rocket Motors (ASRM) were analyzed for different axial distances. The results of sample calculations, using the developed finite element model, are included. Further suggestions for enhancing the overall analysis of the flame trench model are also reported.
NASA Technical Reports Server (NTRS)
Karol, Igor L.; Frolkis, Victor A.
1994-01-01
Radiative and temperature effects of the observed ozone and greenhouse gas atmospheric content changes in 1980-1990 are evaluated using a two-dimensional energy balance radiative-convective model of the zonally and annually averaged troposphere and stratosphere. Calculated radiative flux changes for standard conditions quantitatively agree with their estimates in the WMO/UNEP 1991 review. Model estimates indicate a rather small influence of ozone depletion in the lower stratosphere on the tropospheric greenhouse warming rate, the influence being more significant in the non-tropical Southern Hemisphere. The calculated cooling of the lower stratosphere is close to the observed temperature trends there in the last decade.
Determination of streamflow of the Arkansas River near Bentley in south-central Kansas
Perry, Charles A.
2012-01-01
The Kansas Department of Agriculture, Division of Water Resources, requires that the streamflow of the Arkansas River just upstream from Bentley in south-central Kansas be measured or calculated before groundwater can be pumped from the well field. When the daily streamflow of the Arkansas River near Bentley is less than 165 cubic feet per second (ft³/s), pumping must be curtailed. Daily streamflow near Bentley was calculated by determining the relations between streamflow data from two reference streamgages with a concurrent record of 24 years, one located 17.2 miles (mi) upstream and one located 10.9 mi downstream, and streamflow at a temporary gage located just upstream from Bentley (Arkansas River near Bentley, Kansas). Flow-duration curves for the two reference streamgages indicate that during 1988–2011, the mean daily streamflow was less than 165 ft³/s 30 to 35 percent of the time. During extreme low-flow (drought) conditions, the reach of the Arkansas River between Hutchinson and Maize can lose flow to the adjacent alluvial aquifer, with streamflow losses as much as 1.6 cubic feet per second per mile. Three models were developed to calculate the streamflow of the Arkansas River near Bentley, Kansas. The model chosen depends on the data available and on whether the reach of the Arkansas River between Hutchinson and Maize is gaining or losing groundwater from or to the adjacent alluvial aquifer. The first model was a pair of equations developed from linear regressions of the relation between daily streamflow data from the Bentley streamgage and daily streamflow data from either the Arkansas River near Hutchinson, Kansas, station (station number 07143330) or the Arkansas River near Maize, Kansas, station (station number 07143375). The standard error of the Hutchinson-only equation was 22.8 ft³/s, and the standard error of the Maize-only equation was 22.3 ft³/s. The single-station model would be used if only one streamgage was available. In the second model, the flow gradient between the streamflow near Hutchinson and the streamflow near Maize was used to calculate the streamflow at the Bentley streamgage. This equation resulted in a standard error of 26.7 ft³/s. In the third model, a multiple regression analysis between both the daily streamflow of the Arkansas River near Hutchinson, Kansas, and the daily streamflow of the Arkansas River near Maize, Kansas, was used to calculate the streamflow at the Bentley streamgage. The multiple regression equation had a standard error of 21.2 ft³/s, which was the smallest of the standard errors for all the models. An analysis of the number of low-flow days and the number of days when the reach between Hutchinson and Maize loses flow to the adjacent alluvial aquifer indicates that the long-term trend is toward fewer days of losing conditions. This trend may indicate a long-term increase in water levels in the alluvial aquifer, which could be caused by one or more of several conditions, including an increase in rainfall, a decrease in pumping, a decrease in temperature, and an increase in streamflow upstream from the Hutchinson-to-Maize reach of the Arkansas River.
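As an illustration of the third model's form, the sketch below fits a multiple linear regression of the Bentley flow on the two reference gages and reports the standard error of the estimate; the data are synthetic stand-ins, not the published records, and the coefficients are not the published regression.

```python
import numpy as np

# Synthetic daily streamflows (ft^3/s) standing in for the Hutchinson,
# Maize, and Bentley gages; real use would read the concurrent records.
rng = np.random.default_rng(0)
q_hutch = rng.gamma(2.0, 150.0, size=1000)
q_maize = 1.1 * q_hutch + rng.normal(0.0, 20.0, size=1000)
q_bentley = 0.4 * q_hutch + 0.6 * q_maize + rng.normal(0.0, 21.0, size=1000)

# Multiple linear regression: Q_bentley ~ a*Q_hutch + b*Q_maize + c
X = np.column_stack([q_hutch, q_maize, np.ones_like(q_hutch)])
coef, *_ = np.linalg.lstsq(X, q_bentley, rcond=None)
resid = q_bentley - X @ coef

# Standard error of the estimate, the figure quoted per model above
se = np.sqrt(resid @ resid / (len(q_bentley) - X.shape[1]))
print(f"coefficients: {coef}, standard error: {se:.1f} ft^3/s")
```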
Planetary atmosphere models: A research and instructional web-based resource
NASA Astrophysics Data System (ADS)
Gray, Samuel Augustine
The effects of altitude change on temperature, pressure, density, and speed of sound were investigated. These effects have been documented in Global Reference Atmospheric Models (GRAMs), which are used to calculate the conditions in various parts of the atmosphere for several planets. Besides GRAMs, there are several websites that provide online calculators for the 1976 US Standard Atmosphere. This thesis presents the creation of an online calculator of the atmospheres of Earth, Mars, Venus, Titan, and Neptune. The websites consist of input forms for altitude and temperature adjustment followed by a results table for the calculated data. The first phase involved creating a spreadsheet reference based on the 1976 US Standard Atmosphere and the other planetary GRAMs available. Microsoft Excel was used to implement the equations obtained from the GRAMs and to graph how temperature, pressure, density, and speed of sound change with altitude. These spreadsheets were later used as a reference for the JavaScript code in both the design and the comparison of the data output of the calculators. The websites were created using the HTML, CSS, and JavaScript coding languages. The calculators can accurately display the temperature, pressure, density, and speed of sound of these planets from surface values to various stages within the atmosphere. These websites provide a resource for students involved in projects and classes that require knowledge of the changes in these atmospheres. This project also created a chance for new project topics to arise for future students involved in aeronautics and astronautics.
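For context, the core of such an online calculator is compact. The sketch below applies the widely published 1976 US Standard Atmosphere relations for the troposphere (0-11 km); it is an illustration, not the thesis code.

```python
import math

# 1976 US Standard Atmosphere constants, troposphere layer only
T0, P0 = 288.15, 101325.0   # sea-level temperature (K) and pressure (Pa)
LAPSE = 0.0065              # temperature lapse rate (K/m)
G, R = 9.80665, 287.053     # gravity (m/s^2), gas constant for air (J/kg/K)
GAMMA = 1.4                 # ratio of specific heats for air

def troposphere(h_m):
    """Temperature, pressure, density, and speed of sound at altitude h_m."""
    if not 0.0 <= h_m <= 11000.0:
        raise ValueError("valid for the troposphere layer only (0-11 km)")
    T = T0 - LAPSE * h_m
    p = P0 * (T / T0) ** (G / (R * LAPSE))
    rho = p / (R * T)
    a = math.sqrt(GAMMA * R * T)
    return T, p, rho, a

print(troposphere(5000.0))  # ~255.7 K, ~54 kPa, ~0.74 kg/m^3, ~320 m/s
```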
NASA Astrophysics Data System (ADS)
Sihver, L.; Matthiä, D.; Koi, T.; Mancusi, D.
2008-10-01
Radiation exposure of aircrew is more and more recognized as an occupational hazard. The ionizing environment at standard commercial aircraft flight altitudes consists mainly of secondary particles, of which the neutrons give a major contribution to the dose equivalent. Accurate estimations of neutron spectra in the atmosphere are therefore essential for correct calculations of aircrew doses. Energetic solar particle events (SPE) could also lead to significantly increased dose rates, especially on routes close to the North Pole, e.g. for flights between Europe and the USA. It is also well known that the radiation environment encountered by personnel aboard low Earth orbit (LEO) spacecraft or aboard a spacecraft traveling outside the Earth's protective magnetosphere is much harsher compared with that within the atmosphere, since the personnel are exposed to radiation from both galactic cosmic rays (GCR) and SPE. The relative contribution to the dose from GCR when traveling outside the Earth's magnetosphere, e.g. to the Moon or Mars, is even greater, and reliable and accurate particle and heavy ion transport codes are essential to calculate the radiation risks for both aircrew and personnel on spacecraft. We have therefore performed calculations of neutron distributions in the atmosphere, total dose equivalents, and quality factors at different depths in a water sphere in an imaginary spacecraft during solar minimum in a geosynchronous orbit. The calculations were performed with the GEANT4 Monte Carlo (MC) code using both the binary cascade (BIC) model, which is part of the standard GEANT4 package, and the JQMD model, which is used in the particle and heavy ion transport code PHITS.
Higgs Boson Searches at Hadron Colliders (1/4)
Jakobs, Karl
2018-05-21
In these Academic Training lectures, the phenomenology of Higgs bosons and search strategies at hadron colliders are discussed. After a brief introduction to Higgs bosons in the Standard Model and a discussion of present direct and indirect constraints on the Higgs boson mass, the status of the theoretical cross section calculations for Higgs boson production at hadron colliders is reviewed. In the following lectures, important experimental issues relevant for Higgs boson searches (trigger, measurements of leptons, jets and missing transverse energy) are presented. This is followed by a detailed discussion of the discovery potential for the Standard Model Higgs boson for both the Tevatron and the LHC experiments. In addition, various scenarios beyond the Standard Model, primarily the MSSM, are considered. Finally, the potential and strategies to measure Higgs boson parameters and to investigate alternative symmetry-breaking scenarios are addressed.
Martin, Shelby; Wagner, Jesse; Lupulescu-Mann, Nicoleta; Ramsey, Katrina; Cohen, Aaron; Graven, Peter; Weiskopf, Nicole G; Dorr, David A
2017-08-02
To measure variation among four different Electronic Health Record (EHR) system documentation locations versus 'gold standard' manual chart review for risk stratification in patients with multiple chronic illnesses. Adults seen in primary care with EHR evidence of at least one of 13 conditions were included. EHRs were manually reviewed to determine the presence of active diagnoses, and risk scores were calculated using three different methodologies and five EHR documentation locations. Claims data were used to assess cost and utilization for the following year. Descriptive and diagnostic statistics were calculated for each EHR location. Criterion validity testing compared the gold-standard-verified diagnoses versus other EHR locations and risk scores in predicting future cost and utilization. Nine hundred patients had 2,179 probable diagnoses. About 70% of the diagnoses from the EHR were verified by the gold standard. For the subset of patients having baseline and prediction year data (n=750), modeling showed that the gold standard was, on average, the best predictor of outcomes. However, combining all data sources together performed nearly as well as the gold standard for prediction. EHR data locations were inaccurate 30% of the time, so overall modeling improved when a gold standard based on chart review was used for individual diagnoses. However, the impact on identification of the highest risk patients was minor, and combining data from different EHR locations was equivalent to gold standard performance. The reviewer's ability to identify a diagnosis as correct was influenced by a variety of factors, including completeness, temporality, and perceived accuracy of chart data.
Phenomenology of ultrahigh energy neutrino interactions and fluxes
NASA Astrophysics Data System (ADS)
Hussain, Shahid
There are several models that predict the existence of high and ultrahigh energy (UHE) neutrinos; neutrinos that have amazingly high energies, above 10¹⁵ eV. No man-made machines, existing or planned, can produce particles of such high energies. It is the energies of these neutrinos that make them very interesting for the particle physics and astrophysics community; these neutrinos can be a unique tool to study the unknown regimes of energy, space, and time. Consequently, there is intense experimental activity focused on the detection of these neutrinos; no UHE neutrinos have been detected by these experiments so far. However, most of the UHE neutrino flux models predict that the fluxes of these neutrinos might be too small to be detected by the current detectors. Therefore, more powerful detectors are being built and we are at the beginning of a new and exciting era in neutrino astronomy. The interactions and fluxes of UHE neutrinos are both unknown experimentally. Our focus here is to explore, by numerically calculating observable signals from these neutrinos, different scenarios that can arise from the interplay of UHE neutrino interaction and flux models. Given several AGN and cosmogenic neutrino flux models, we look at two possibilities for neutrino interactions: (i) neutrinos have standard model weak interactions at ultrahigh energies; (ii) neutrino interactions are enhanced around a TeV mass-scale, as implied by low scale gravity models with extra dimensions. The standard model weak and low scale gravity enhanced neutrino-nucleon interactions of UHE neutrinos both produce observable signals. In the standard model, the charged current neutrino-nucleon interactions give muons, taus, and particle showers, and the neutral current interactions give particle showers. In low scale gravity, the micro black hole formation (and its subsequent decay) and the graviton exchange both give particle showers. Muons, taus, and the showers can be detected by the optical Cherenkov radiation they produce; showers can also be detected by the coherent radio Cherenkov signal they produce, which is much more powerful than their optical Cherenkov signal. We give the formalism for calculating muon, tau, and shower rates for the optical (IceCube-like) and the shower rates for the radio (RICE-like) Cherenkov detectors. Our focus is on simulation of the radio signal from neutrino-initiated showers and calculation of the expected neutrino-initiated shower rates for RICE. Finally, given the calculated rates for muons, taus, and showers, we discuss what we can say about the models for UHE neutrino fluxes and interactions.
NASA Astrophysics Data System (ADS)
Davidson, S.; Cui, J.; Followill, D.; Ibbott, G.; Deasy, J.
2008-02-01
The Dose Planning Method (DPM) is one of several 'fast' Monte Carlo (MC) computer codes designed to produce an accurate dose calculation for advanced clinical applications. We have developed a flexible machine modeling process and validation tests for open-field and IMRT calculations. To complement the DPM code, a practical and versatile source model has been developed, whose parameters are derived from a standard set of planning system commissioning measurements. The primary photon spectrum and the spectrum resulting from the flattening filter are modeled by a Fatigue function, cut off by a multiplying Fermi function, which effectively regularizes the difficult energy spectrum determination process. Commonly used functions are applied to represent the off-axis softening, the increasing primary fluence with increasing angle ('the horn effect'), and the electron contamination. The patient-dependent aspect of the MC dose calculation utilizes the multi-leaf collimator (MLC) leaf sequence file exported from the treatment planning system DICOM output, coupled with the source model, to derive the particle transport. This model has been commissioned for Varian 2100C 6 MV and 18 MV photon beams using percent depth dose, dose profiles, and output factors. A 3-D conformal plan and an IMRT plan delivered to an anthropomorphic thorax phantom were used to benchmark the model. The calculated results were compared to Pinnacle v7.6c results and measurements made using radiochromic film and thermoluminescent detectors (TLD).
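The spectrum parameterization described, a Fatigue (Birnbaum-Saunders) function multiplied by a Fermi cut-off, can be sketched as follows; all parameter values are invented for illustration and would in practice be fitted to commissioning measurements as the abstract describes.

```python
import numpy as np
from scipy.stats import fatiguelife

def photon_spectrum(E, c=0.5, scale=2.0, E_cut=6.0, width=0.1):
    """Unnormalized photon energy spectrum: a Fatigue (Birnbaum-Saunders)
    function cut off by a multiplying Fermi function. All parameter
    values here are hypothetical, not fitted commissioning values."""
    fatigue = fatiguelife.pdf(E, c, scale=scale)
    fermi = 1.0 / (1.0 + np.exp((E - E_cut) / width))
    return fatigue * fermi

E = np.linspace(0.01, 7.0, 500)   # photon energy grid (MeV)
w = photon_spectrum(E)
w /= np.trapz(w, E)               # normalize to unit area
print(f"mean photon energy: {np.trapz(E * w, E):.2f} MeV")
```

The Fermi factor enforces the kinematic cut-off smoothly, which is what makes the otherwise ill-posed spectrum determination well behaved during fitting.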
Standard model light-by-light scattering in SANC: Analytic and numeric evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bardin, D. Yu., E-mail: bardin@nu.jinr.ru; Kalinovskaya, L. V., E-mail: kalinov@nu.jinr.ru; Uglov, E. D., E-mail: corner@nu.jinr.r
2010-11-15
The implementation of the Standard Model process γγ → γγ through a fermion and boson loop into the framework of the SANC system, and the additional precomputation modules used for calculation of massive box diagrams, are described. The computation of this process takes into account the nonzero mass of the loop particles. The covariant and helicity amplitudes for this process, some particular cases of the D₀ and C₀ Passarino-Veltman functions, and also numerical results of the corresponding SANC module evaluation are presented. Whenever possible, the results are compared with those existing in the literature.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 2
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameters, and thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.
Random Predictor Models for Rigorous Uncertainty Quantification: Part 1
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2015-01-01
This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictor Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, and thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.
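A minimal sketch of the optimization idea, not the authors' exact formulation: fit a polynomial model and a constant half-width by linear programming so that the prediction band is as tight as possible while containing every observation. The data and degree are invented.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 60)
y = 1.0 + 2.0 * x - x**2 + rng.uniform(-0.3, 0.3, 60)  # synthetic data

deg = 2
V = np.vander(x, deg + 1)            # polynomial basis: columns x^2, x, 1
n, m = V.shape

# Variables: polynomial coefficients c (m of them) and half-width w.
# Minimize w subject to  -w <= y_i - V_i c <= w  for every observation.
c_obj = np.zeros(m + 1)
c_obj[-1] = 1.0
A_ub = np.block([[ V, -np.ones((n, 1))],   #  V c - w <= y
                 [-V, -np.ones((n, 1))]])  # -V c - w <= -y
b_ub = np.concatenate([y, -y])
res = linprog(c_obj, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * m + [(0, None)])
coef, width = res.x[:m], res.x[-1]
print(f"coefficients: {coef}, half-width: {width:.3f}")
```

The "tightest band containing all observations" objective is the elementary analogue of the tightest-prediction optimality the abstracts describe; the convexity of this program is what makes rigorous reliability bounds possible.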
La Barbera, Luigi; Ottardi, Claudia; Villa, Tomaso
2015-10-01
Preclinical evaluation of the mechanical reliability of fixation devices is a mandatory activity before their introduction into the market. There are two standardized protocols for preclinical testing of spinal implants. The American Society for Testing and Materials (ASTM) recommends the F1717 standard, which describes a vertebrectomy condition that is relatively simple to implement, whereas the International Organization for Standardization (ISO) suggests the 12189 standard, which describes a more complex physiological anterior-support-based setup. Moreover, ASTM F1717 is nowadays well established, whereas ISO 12189 has received little attention: a few studies tried to accurately describe the ISO experimental procedure through numerical models, but these studies entirely neglected the recommended precompression step. This study aimed to build a reliable, validated numerical model capable of describing the stress on the rods of a spinal fixator assembled according to the ISO 12189 standard procedure. Such a model would more adequately represent the in vitro testing condition. This study used finite element (FE) simulations and experimental validation testing. An FE model of the ISO setup was built to calculate the stress on the rods. The simulation was validated by comparison with experimental strain gauge measurements. The same fixator had previously been virtually mounted in an L2-L4 FE model of the lumbar spine, and the stresses in the rods were calculated when the spine was subjected to physiological forces and moments. The FE predictions and the experimental measurements are in good agreement, thus confirming the suitability of the FE method to evaluate the stresses in the device. The initial precompression induces a significant extension of the assembled construct. As the applied load increases, the initial extension is gradually compensated, so that at peak load the rods are bent in flexion: the final stress value predicted is thus reduced to about 50%, compared with the previous model in which the precompression was not considered. Neglecting the initial preload due to the assembly of the overall construct according to the ISO 12189 standard could lead to an overestimation of the stress on the rods of up to 50%. To correctly describe the state of stress on the posterior spinal fixator tested according to the ISO procedure, it is important to take into account the initial preload due to the assembly of the overall construct. Copyright © 2015 Elsevier Inc. All rights reserved.
Chromosphere Active Region Plasma Diagnostics Based On Observations Of Millimeter Radiation
NASA Astrophysics Data System (ADS)
Loukitcheva, M.; Nagnibeda, V.
1999-10-01
In this paper we present the results of millimeter radiation calculations for different elements of chromospheric and transition region structures of the quiet Sun and the S-component: elements of the chromospheric network, sunspot groups, and plages. The calculations were done on the basis of standard optical and UV models (the models by Vernazza et al. (1981, VAL) and their modifications by Fontenla et al. (1993, FAL)). We also considered the sunspot model by Lites and Skumanich (1982, LS), the S-component model by Staude et al. (1984), and the modifications of the VAL and FAL models by Bocchialini and Vial, the models NET and CELL. We compare these model calculations with the observed characteristics of components of millimeter solar radiation for the quiet Sun and the S-component obtained with the radio telescope RT-7.5 MGTU (wavelength 3.4 mm) and the radioheliograph Nobeyama (wavelength 17.6 mm). From the observations we derived the spectral characteristics of millimeter sources and the active region source structure. The comparison has shown that the observed radio data are clearly in disagreement with all the considered models. Finally, we propose further improvement of chromospheric and transition region models based on optical and UV observations, in order to make use of the information obtained from radio data in the modelling.
Sokkar, Pandian; Boulanger, Eliot; Thiel, Walter; Sanchez-Garcia, Elsa
2015-04-14
We present a hybrid quantum mechanics/molecular mechanics/coarse-grained (QM/MM/CG) multiresolution approach for solvated biomolecular systems. The chemically important active-site region is treated at the QM level. The biomolecular environment is described by an atomistic MM force field, and the solvent is modeled with the CG Martini force field using standard or polarizable (pol-CG) water. Interactions within the QM, MM, and CG regions, and between the QM and MM regions, are treated in the usual manner, whereas the CG-MM and CG-QM interactions are evaluated using the virtual sites approach. The accuracy and efficiency of our implementation are tested for two enzymes, chorismate mutase (CM) and p-hydroxybenzoate hydroxylase (PHBH). In CM, the QM/MM/CG potential energy scans along the reaction coordinate yield reaction energies that are too large, for both the standard and polarizable Martini CG water models, which can be attributed to adverse effects of using large CG water beads. The inclusion of an atomistic MM water layer (10 Å for uncharged CG water and 5 Å for polarizable CG water) around the QM region improves the energy profiles compared to the reference QM/MM calculations. In analogous QM/MM/CG calculations on PHBH, the use of the pol-CG description for the outer water does not affect the stabilization of the highly charged FADHOOH-pOHB transition state compared to the fully atomistic QM/MM calculations. Detailed performance analysis in a glycine-water model system indicates that computation times for QM energy and gradient evaluations at the density functional level are typically reduced by 40-70% for QM/MM/CG relative to fully atomistic QM/MM calculations.
GAMBIT: the global and modular beyond-the-standard-model inference tool
NASA Astrophysics Data System (ADS)
Athron, Peter; Balazs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Dickinson, Hugh; Edsjö, Joakim; Farmer, Ben; Gonzalo, Tomás E.; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Lundberg, Johan; McKay, James; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Ripken, Joachim; Rogan, Christopher; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Seo, Seon-Hee; Serra, Nicola; Weniger, Christoph; White, Martin; Wild, Sebastian
2017-11-01
We describe the open-source global fitting package GAMBIT: the Global And Modular Beyond-the-Standard-Model Inference Tool. GAMBIT combines extensive calculations of observables and likelihoods in particle and astroparticle physics with a hierarchical model database, advanced tools for automatically building analyses of essentially any model, a flexible and powerful system for interfacing to external codes, a suite of different statistical methods and parameter scanning algorithms, and a host of other utilities designed to make scans faster, safer and more easily-extendible than in the past. Here we give a detailed description of the framework, its design and motivation, and the current models and other specific components presently implemented in GAMBIT. Accompanying papers deal with individual modules and present first GAMBIT results. GAMBIT can be downloaded from gambit.hepforge.org.
Efficient model learning methods for actor-critic control.
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
2012-06-01
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
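A minimal sketch, under our own simplifying assumptions and not the authors' implementation, of the second algorithm's key step: with a local linear process model x' ≈ Ax + Bu + c fitted by local linear regression, the action that drives the state toward the reference model's desired next state follows from a least-squares inverse.

```python
import numpy as np

def llr_local_model(X, U, X_next, query, k=10):
    """Fit a local linear model x' = A x + B u + c around `query`
    using the k nearest stored transitions (local linear regression)."""
    d = np.linalg.norm(X - query, axis=1)
    idx = np.argsort(d)[:k]
    Phi = np.hstack([X[idx], U[idx], np.ones((k, 1))])
    W, *_ = np.linalg.lstsq(Phi, X_next[idx], rcond=None)
    n, m = X.shape[1], U.shape[1]
    return W[:n].T, W[n:n + m].T, W[-1]   # A, B, c

def inverse_model_action(A, B, c, x, x_desired):
    """Least-squares action driving the local model toward x_desired."""
    u, *_ = np.linalg.lstsq(B, x_desired - A @ x - c, rcond=None)
    return u

# Hypothetical 1-D transition memory for the system x' = 0.9 x + 0.2 u
rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (200, 1))
U = rng.uniform(-1, 1, (200, 1))
Xn = 0.9 * X + 0.2 * U
A, B, c = llr_local_model(X, U, Xn, query=np.array([0.5]))
print(inverse_model_action(A, B, c, np.array([0.5]), np.array([0.6])))  # ~0.75
```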
The Radiological Physics Center's standard dataset for small field size output factors.
Followill, David S; Kry, Stephen F; Qin, Lihong; Lowenstein, Jessica; Molineu, Andrea; Alvarez, Paola; Aguirre, Jose Francisco; Ibbott, Geoffrey S
2012-08-08
Delivery of accurate intensity-modulated radiation therapy (IMRT) or stereotactic radiotherapy depends on a multitude of steps in the treatment delivery process. These steps range from imaging of the patient to dose calculation to machine delivery of the treatment plan. Within the treatment planning system's (TPS) dose calculation algorithm, various unique small field dosimetry parameters are essential, such as multileaf collimator modeling and field size dependence of the output. One of the largest challenges in this process is determining accurate small field size output factors. The Radiological Physics Center (RPC), as part of its mission to ensure that institutions deliver comparable and consistent radiation doses to their patients, conducts on-site dosimetry review visits to institutions. As a part of the on-site audit, the RPC measures the small field size output factors as might be used in IMRT treatments, and compares the resulting field size dependent output factors to values calculated by the institution's TPS. The RPC has gathered multiple small field size output factor datasets for X-ray energies ranging from 6 to 18 MV from Varian, Siemens and Elekta linear accelerators. These datasets were measured at 10 cm depth and ranged from 10 × 10 cm² to 2 × 2 cm². The field sizes were defined by the MLC, and for the Varian machines the secondary jaws were maintained at 10 × 10 cm². The RPC measurements were made with a micro-ion chamber whose volume was small enough to gather a full ionization reading even for the 2 × 2 cm² field size. The RPC-measured output factors are tabulated and are reproducible with standard deviations (SD) ranging from 0.1% to 1.5%, while the institutions' calculated values had a much larger SD range, ranging up to 7.9%. The absolute average percent differences were greater for the 2 × 2 cm² than for the other field sizes. The RPC's measured small field output factors provide institutions with a standard dataset against which to compare their TPS-calculated values. Any discrepancies noted between the standard dataset and calculated values should be investigated with careful measurements and with attention to the specific beam model.
Creating NDA working standards through high-fidelity spent fuel modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Skutnik, Steven E; Gauld, Ian C; Romano, Catherine E
2012-01-01
The Next Generation Safeguards Initiative (NGSI) is developing advanced non-destructive assay (NDA) techniques for spent nuclear fuel assemblies to advance the state-of-the-art in safeguards measurements. These measurements aim beyond the capabilities of existing methods to include the evaluation of plutonium and fissile material inventory, independent of operator declarations. Testing and evaluation of advanced NDA performance will require reference assemblies with well-characterized compositions to serve as working standards against which the NDA methods can be benchmarked and for uncertainty quantification. To support the development of standards for the NGSI spent fuel NDA project, high-fidelity modeling of irradiated fuel assemblies is being performed to characterize fuel compositions and radiation emission data. The assembly depletion simulations apply detailed operating history information and core simulation data as it is available to perform high fidelity axial and pin-by-pin fuel characterization for more than 1600 nuclides. The resulting pin-by-pin isotopic inventories are used to optimize the NDA measurements and provide information necessary to unfold and interpret the measurement data, e.g., passive gamma emitters, neutron emitters, neutron absorbers, and fissile content. A key requirement of this study is the analysis of uncertainties associated with the calculated compositions and signatures for the standard assemblies; uncertainties introduced by the calculation methods, nuclear data, and operating information. An integral part of this assessment involves the application of experimental data from destructive radiochemical assay to assess the uncertainty and bias in computed inventories, the impact of parameters such as assembly burnup gradients and burnable poisons, and the influence of neighboring assemblies on periphery rods. This paper will present the results of high fidelity assembly depletion modeling and uncertainty analysis from independent calculations performed using SCALE and MCNP. This work is supported by the Next Generation Safeguards Initiative, Office of Nuclear Safeguards and Security, National Nuclear Security Administration.
Czakon, Michal; Fiedler, Paul; Mitov, Alexander
2015-07-31
We determine the dominant missing standard model (SM) contribution to the top quark pair forward-backward asymmetry at the Tevatron. Contrary to past expectations, we find a large, around 27%, shift relative to the well-known value of the inclusive asymmetry in next-to-leading order QCD. Combining all known standard model corrections, we find that A(FB)(SM) = 0.095 ± 0.007. This value is in agreement with the latest DØ measurement [V. M. Abazov et al. (D0 Collaboration), Phys. Rev. D 90, 072011 (2014)] A(FB)(DØ) = 0.106 ± 0.03 and about 1.5σ below that of CDF [T. Aaltonen et al. (CDF Collaboration), Phys. Rev. D 87, 092002 (2013)] A(FB)(CDF) = 0.164 ± 0.047. Our result is derived from a fully differential calculation of the next-to-next-to-leading order (NNLO) QCD corrections to inclusive top pair production at hadron colliders and includes, without any approximation, all partonic channels contributing to this process. This is the first complete fully differential calculation in NNLO QCD of a two-to-two scattering process with all colored partons.
A virtual photon energy fluence model for Monte Carlo dose calculation.
Fippel, Matthias; Haryanto, Freddy; Dohm, Oliver; Nüsslin, Fridtjof; Kriesen, Stephan
2003-03-01
The presented virtual energy fluence (VEF) model of the patient-independent part of medical linear accelerator heads consists of two Gaussian-shaped photon sources and one uniform electron source. The planar photon sources are located close to the bremsstrahlung target (primary source) and to the flattening filter (secondary source), respectively. The electron contamination source is located in the plane defining the lower end of the filter. The standard deviations or widths and the relative weights of each source are free parameters. Five other parameters correct for fluence variations, i.e., the horn or central depression effect. If these parameters and the field widths in the X and Y directions are given, the corresponding energy fluence distribution can be calculated analytically and compared to measured dose distributions in air. This provides a method of fitting the free parameters using the measurements for various square and rectangular fields and a fixed number of monitor units. The next step in generating the whole set of base data is to calculate monoenergetic central axis depth dose distributions in water, which are used to derive the energy spectrum by deconvolving the measured depth dose curves. This spectrum is also corrected to take the off-axis softening into account. The VEF model is implemented together with geometry modules for the patient-specific part of the treatment head (jaws, multileaf collimator) into the XVMC dose calculation engine. The implementation into other Monte Carlo codes is possible based on the information in this paper. Experiments were performed to verify the model by comparing measured and calculated dose distributions and output factors in water. It is demonstrated that open photon beams of linear accelerators from two different vendors are accurately simulated using the VEF model. The commissioning procedure of the VEF model is clinically feasible because it is based on standard measurements in air and water. It is also useful for IMRT applications because a full Monte Carlo simulation of the treatment head would be too time-consuming for many small fields.
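The geometric core of such a dual-source model can be sketched as follows, with invented geometry, widths, and weights: each point below the head sees each Gaussian source through the collimator opening, and the energy fluence is the weighted sum of the visible source fractions. This is an illustration of the idea, not the VEF model's actual analytic expressions.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical geometry (mm from target) and source parameters
Z_PRIM, Z_SEC, Z_JAW, SID = 0.0, 125.0, 400.0, 1000.0
SIGMA_PRIM, SIGMA_SEC = 1.0, 15.0     # Gaussian source widths (mm)
W_PRIM, W_SEC = 0.93, 0.07            # relative source weights
FIELD_HALF = 50.0                     # half field size at isocenter (mm)

def visible_fraction(x_iso, z_src, sigma):
    """Fraction of a 1-D Gaussian source (centered on the axis) that a
    point at off-axis position x_iso in the isocenter plane can see
    through the jaw opening, by back-projecting the jaw edges."""
    jaw_edge = FIELD_HALF * Z_JAW / SID
    lo = x_iso + (-jaw_edge - x_iso) * (SID - z_src) / (SID - Z_JAW)
    hi = x_iso + (jaw_edge - x_iso) * (SID - z_src) / (SID - Z_JAW)
    return norm.cdf(hi, scale=sigma) - norm.cdf(lo, scale=sigma)

x = np.linspace(-80.0, 80.0, 9)       # off-axis positions at isocenter
fluence = (W_PRIM * visible_fraction(x, Z_PRIM, SIGMA_PRIM)
           + W_SEC * visible_fraction(x, Z_SEC, SIGMA_SEC))
print(np.round(fluence, 3))           # flat inside, penumbra near +/-50 mm
```

The broad secondary source is what produces the extra-focal contribution outside the geometric field edge, which is why its width and weight can be fitted from in-air measurements.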
Thermodynamics of Anharmonic Systems: Uncoupled Mode Approximations for Molecules
Li, Yi-Pei; Bell, Alexis T.; Head-Gordon, Martin
2016-05-26
The partition functions, heat capacities, entropies, and enthalpies of selected molecules were calculated using uncoupled mode (UM) approximations, where the full-dimensional potential energy surface for internal motions was modeled as a sum of independent one-dimensional potentials for each mode. The computational cost of such approaches scales the same with molecular size as standard harmonic oscillator vibrational analysis using harmonic frequencies (HO hf). To compute thermodynamic properties, a computational protocol for obtaining the energy levels of each mode was established. The accuracy of the UM approximation depends strongly on how the one-dimensional potentials of each mode are defined. If the potentials are determined by the energy as a function of displacement along each normal mode (UM-N), the accuracies of the calculated thermodynamic properties are not significantly improved versus the HO hf model. Significant improvements can be achieved by constructing potentials for internal rotations and vibrations using the energy surfaces along the torsional coordinates and the remaining vibrational normal modes, respectively (UM-VT). For hydrogen peroxide and its isotopologs at 300 K, UM-VT captures more than 70% of the partition functions on average. By contrast, the HO hf model and UM-N can capture no more than 50%. For a selected test set of C2 to C8 linear and branched alkanes and species with different moieties, the enthalpies calculated using the HO hf model, UM-N, and UM-VT are all quite accurate compared with reference values, though the RMS errors of the HO hf model and UM-N are slightly higher than those of UM-VT. However, the accuracies in entropy calculations differ significantly between these three models. For the same test set, the RMS error of the standard entropies calculated by UM-VT is 2.18 cal mol⁻¹ K⁻¹ at 1000 K. By contrast, the RMS errors obtained using the HO hf model and UM-N are 6.42 and 5.73 cal mol⁻¹ K⁻¹, respectively. For a test set composed of nine alkanes ranging from C5 to C8, the heat capacities calculated with the UM-VT model agree with the experimental values to within an RMS error of 0.78 cal mol⁻¹ K⁻¹, which is less than one-third of the RMS error of the HO hf (2.69 cal mol⁻¹ K⁻¹) and UM-N (2.41 cal mol⁻¹ K⁻¹) models.
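The uncoupled-mode bookkeeping itself is straightforward. Below is a sketch, with hypothetical energy levels, of how per-mode partition functions combine into a total partition function, internal energy, and entropy; the UM-N/UM-VT distinction lies entirely in how the one-dimensional potentials, and hence the levels, are obtained.

```python
import numpy as np

KB = 1.380649e-23 / 4.3597447e-18   # Boltzmann constant in hartree/K

def mode_partition(levels, T):
    """Partition function of one uncoupled mode from its 1-D energy
    levels (hartree), measured from the mode's ground state."""
    E = np.asarray(levels)
    return np.sum(np.exp(-(E - E[0]) / (KB * T)))

def thermo(modes, T, dT=1e-3):
    """Total q, internal energy U, and entropy S for independent modes,
    with U = k T^2 d ln q / dT by finite differences (a sketch only)."""
    lnq = lambda temp: sum(np.log(mode_partition(m, temp)) for m in modes)
    dlnq_dT = (lnq(T + dT) - lnq(T - dT)) / (2.0 * dT)
    U = KB * T**2 * dlnq_dT          # hartree per molecule
    S = U / T + KB * lnq(T)          # hartree/K per molecule
    return np.exp(lnq(T)), U, S

# Hypothetical levels: one stiff vibration and one softer hindered rotor
modes = [np.arange(6) * 0.010, np.cumsum([0.0] + [0.002] * 11)]
print(thermo(modes, 1000.0))
```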
The formation of cosmic structure in a texture-seeded cold dark matter cosmogony
NASA Technical Reports Server (NTRS)
Gooding, Andrew K.; Park, Changbom; Spergel, David N.; Turok, Neil; Gott, Richard, III
1992-01-01
The growth of density fluctuations induced by global texture in an Omega = 1 cold dark matter (CDM) cosmogony is calculated. The resulting power spectra are in good agreement with each other, with more power on large scales than in the standard inflation plus CDM model. Calculation of related statistics (two-point correlation functions, mass variances, cosmic Mach number) indicates that the texture plus CDM model compares more favorably than standard CDM with observations of large-scale structure. Texture produces coherent velocity fields on large scales, as observed. Excessive small-scale velocity dispersions, and voids less empty than those observed may be remedied by including baryonic physics. The topology of the cosmic structure agrees well with observation. The non-Gaussian texture induced density fluctuations lead to earlier nonlinear object formation than in Gaussian models and may also be more compatible with recent evidence that the galaxy density field is non-Gaussian on large scales. On smaller scales the density field is strongly non-Gaussian, but this appears to be primarily due to nonlinear gravitational clustering. The velocity field on smaller scales is surprisingly Gaussian.
Comparisons of a standard galaxy model with stellar observations in five fields
NASA Technical Reports Server (NTRS)
Bahcall, J. N.; Soneira, R. M.
1984-01-01
Modern data on the distribution of stellar colors and on the number of stars as a function of apparent magnitude in five directions in the Galaxy are analyzed. It is found that the standard model is consistent with all the available data. Detailed comparisons with the data for five separate fields are presented. The bright end of the spheroid luminosity function and the blue tip of the spheroid horizontal branch are analyzed. The allowed range of the disk scale heights and of fluctuations in the volume density is determined, and a lower limit is set on the disk scale length. Calculations based on the thick disk model of Gilmore and Reid (1983) are presented.
Madsen, Kristoffer H; Ewald, Lars; Siebner, Hartwig R; Thielscher, Axel
2015-01-01
Field calculations for transcranial magnetic stimulation (TMS) are increasingly implemented online in neuronavigation systems and in more realistic offline approaches based on finite-element methods. They are often based on simplified and/or non-validated models of the magnetic vector potential of the TMS coils. To develop an approach to reconstruct the magnetic vector potential based on automated measurements. We implemented a setup that simultaneously measures the three components of the magnetic field with high spatial resolution. This is complemented by a novel approach to determine the magnetic vector potential via volume integration of the measured field. The integration approach reproduces the vector potential with very good accuracy. The vector potential distribution of a standard figure-of-eight shaped coil determined with our setup corresponds well with that calculated using a model reconstructed from x-ray images. The setup can supply validated models for existing and newly appearing TMS coils. Copyright © 2015 Elsevier Inc. All rights reserved.
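The abstract does not spell out the integral; one standard identity consistent with "volume integration of the measured field" (assuming a divergence-free field that decays at the boundary of the measurement volume, in the Coulomb gauge, and not necessarily the paper's exact scheme) is

```latex
% Coulomb gauge; B divergence-free and decaying at the boundary of V
\mathbf{A}(\mathbf{r}) \;=\; \frac{1}{4\pi}\int_{V}
  \frac{\nabla'\times\mathbf{B}(\mathbf{r}')}{\lvert\mathbf{r}-\mathbf{r}'\rvert}\,
  \mathrm{d}^{3}r' ,
\qquad \nabla\times\mathbf{A} \;=\; \mathbf{B} .
```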
Steinbach, Sarah M L; Sturgess, Christopher P; Dunning, Mark D; Neiger, Reto
2015-06-01
Assessment of renal function by means of plasma clearance of a suitable marker has become standard procedure for estimation of glomerular filtration rate (GFR). Sinistrin, a polyfructan solely cleared by the kidney, is often used for this purpose. Pharmacokinetic modeling using adequate software is necessary to calculate the disappearance rate and half-life of sinistrin. The purpose of this study was to describe the use of a Microsoft Excel-based add-in program to calculate plasma sinistrin clearance, as well as additional pharmacokinetic parameters such as transfer rates (k), half-life (t1/2), and volume of distribution (Vss), for sinistrin in dogs with varying degrees of renal function. Copyright © 2015 Elsevier Ltd. All rights reserved.
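A sketch of the pharmacokinetic arithmetic involved, not the authors' Excel add-in: fit a biexponential disappearance curve to plasma samples and obtain clearance as dose over the area under the curve. The dose, sampling times, and concentrations below are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, A, alpha, B, beta):
    """Two-compartment plasma disappearance curve."""
    return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

# Hypothetical plasma sinistrin concentrations (mg/L) after a 500 mg bolus
t = np.array([5, 10, 20, 40, 60, 90, 120, 180], dtype=float)   # minutes
c = np.array([83.0, 67.0, 45.4, 25.4, 17.7, 12.6, 9.6, 5.9])
dose_mg = 500.0

(A, alpha, B, beta), _ = curve_fit(biexp, t, c, p0=(80, 0.05, 20, 0.01))
auc = A / alpha + B / beta                  # mg*min/L, extrapolated to infinity
clearance_ml_min = 1000.0 * dose_mg / auc   # Cl = dose / AUC
t_half_min = np.log(2) / min(alpha, beta)   # terminal half-life
print(f"clearance ~ {clearance_ml_min:.0f} mL/min, "
      f"terminal t1/2 ~ {t_half_min:.0f} min")
```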
Low-temperature behavior of the quark-meson model
NASA Astrophysics Data System (ADS)
Tripolt, Ralf-Arno; Schaefer, Bernd-Jochen; von Smekal, Lorenz; Wambach, Jochen
2018-02-01
We revisit the phase diagram of strong-interaction matter for the two-flavor quark-meson model using the functional renormalization group. In contrast to standard mean-field calculations, an unusual phase structure is encountered at low temperatures and large quark chemical potentials. In particular, we identify a regime where the pressure decreases with increasing temperature and discuss possible reasons for this unphysical behavior.
Including electromagnetism in K → ππ decay calculations
NASA Astrophysics Data System (ADS)
Christ, Norman; Feng, Xu
2018-03-01
Because of the small size of the ratio A2/A0 of the I = 2 to I = 0 K → ππ decay amplitudes (the ΔI = 1/2 rule), the effects of electromagnetism on A2 may be a factor of 20 larger than given by a naive O(α_EM) estimate. Thus, if future calculations of A2 and ε′/ε are to achieve 10% accuracy, these effects need to be included. Here we present the first steps toward including electromagnetism in a calculation of the standard model K → ππ decay amplitudes using lattice QCD.
Associated Higgs-W-boson production at hadron colliders: a fully exclusive QCD calculation at NNLO.
Ferrera, Giancarlo; Grazzini, Massimiliano; Tramontano, Francesco
2011-10-07
We consider QCD radiative corrections to standard model Higgs-boson production in association with a W boson in hadron collisions. We present a fully exclusive calculation up to next-to-next-to-leading order (NNLO) in QCD perturbation theory. To perform this NNLO computation, we use a recently proposed version of the subtraction formalism. Our calculation includes finite-width effects, the leptonic decay of the W boson with its spin correlations, and the decay of the Higgs boson into a bb̄ pair. We present selected numerical results at the Tevatron and the LHC.
NASA Astrophysics Data System (ADS)
Sergievskii, V. V.; Rudakov, A. M.
2006-11-01
An analysis of the accepted methods for calculating the activity coefficients for the components of binary aqueous solutions was performed. It was demonstrated that the use of the osmotic coefficients in auxiliary calculations decreases the accuracy of estimates of the activity coefficients. The possibility of calculating the activity coefficient of the solute from the concentration dependence of the water activity was examined. It was established that, for weak electrolytes, the interpretation of data on heterogeneous equilibria within the framework of the standard assumption that the dissociation is complete encounters serious difficulties.
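The link from measured water activity to the solute activity coefficient rests on the Gibbs-Duhem relation; for a binary solution of a nondissociating solute at constant temperature and pressure, the standard textbook route (a general relation, not a result specific to this paper) reads

```latex
% Binary solution at constant T and p (Gibbs-Duhem):
x_w\,\mathrm{d}\ln a_w + x_s\,\mathrm{d}\ln a_s = 0 .
% For a nondissociating solute of molality m (55.51 = mol H2O per kg),
% with the osmotic coefficient  \phi = -\tfrac{55.51}{m}\,\ln a_w :
\ln\gamma_s = (\phi - 1) + \int_{0}^{m}\frac{\phi - 1}{m'}\,\mathrm{d}m' .
```

The appearance of φ in the integrand is precisely why errors in the osmotic coefficient propagate into the estimated activity coefficients, as the abstract argues.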
NASA Astrophysics Data System (ADS)
Barraclough, Brendan; Li, Jonathan G.; Lebron, Sharon; Fan, Qiyong; Liu, Chihray; Yan, Guanghua
2015-08-01
The ionization chamber volume averaging effect is a well-known issue without an elegant solution. The purpose of this study is to propose a novel convolution-based approach to address the volume averaging effect in model-based treatment planning systems (TPSs). Ionization chamber-measured beam profiles can be regarded as the convolution between the detector response function and the implicit real profiles. Existing approaches address the issue by trying to remove the volume averaging effect from the measurement. In contrast, our proposed method imports the measured profiles directly into the TPS and addresses the problem by reoptimizing pertinent parameters of the TPS beam model. In the iterative beam modeling process, the TPS-calculated beam profiles are convolved with the same detector response function. Beam model parameters responsible for the penumbra are optimized to drive the convolved profiles to match the measured profiles. Since the convolved and the measured profiles are subject to identical volume averaging effect, the calculated profiles match the real profiles when the optimization converges. The method was applied to reoptimize a CC13 beam model commissioned with profiles measured with a standard ionization chamber (Scanditronix Wellhofer, Bartlett, TN). The reoptimized beam model was validated by comparing the TPS-calculated profiles with diode-measured profiles. Its performance in intensity-modulated radiation therapy (IMRT) quality assurance (QA) for ten head-and-neck patients was compared with the CC13 beam model and a clinical beam model (manually optimized, clinically proven) using standard Gamma comparisons. The beam profiles calculated with the reoptimized beam model showed excellent agreement with diode measurement at all measured geometries. Performance of the reoptimized beam model was comparable with that of the clinical beam model in IMRT QA. The average passing rates using the reoptimized beam model increased substantially from 92.1% to 99.3% with 3%/3 mm and from 79.2% to 95.2% with 2%/2 mm when compared with the CC13 beam model. These results show the effectiveness of the proposed method. Less inter-user variability can be expected of the final beam model. It is also found that the method can be easily integrated into model-based TPS.
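A minimal sketch of the central idea under stated assumptions: represent the chamber response as a Gaussian kernel, convolve the model-calculated profile with that same kernel, and reoptimize a penumbra parameter until the convolved profile matches the chamber-measured one. The profile shape, kernel width, and parameter names are invented, not the paper's beam model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import minimize_scalar

DX = 0.2                        # profile grid spacing (cm)
CHAMBER_SIGMA = 0.24 / DX       # assumed detector response width (grid units)

def model_profile(x, penumbra_sigma):
    """Idealized beam profile: a 10 cm field with error-function edges
    whose width is the (tunable) beam-model penumbra parameter."""
    return gaussian_filter1d((np.abs(x) < 5.0).astype(float),
                             penumbra_sigma / DX)

x = np.arange(-8.0, 8.0, DX)
# Stand-in "measurement": true profile seen through the chamber kernel
measured = gaussian_filter1d(model_profile(x, 0.30), CHAMBER_SIGMA)

def objective(sigma):
    convolved = gaussian_filter1d(model_profile(x, sigma), CHAMBER_SIGMA)
    return np.sum((convolved - measured) ** 2)   # same kernel on both sides

best = minimize_scalar(objective, bounds=(0.05, 1.0), method="bounded")
print(f"reoptimized penumbra parameter: {best.x:.3f} cm (true 0.300)")
```

Because the identical kernel is applied to both the calculated and the measured profiles, the volume averaging cancels at the optimum, which is the essence of the approach described above.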
Standard model effective field theory: Integrating out neutralinos and charginos in the MSSM
NASA Astrophysics Data System (ADS)
Han, Huayong; Huo, Ran; Jiang, Minyuan; Shu, Jing
2018-05-01
We apply the covariant derivative expansion method to integrate out the neutralinos and charginos in the minimal supersymmetric Standard Model. The results are presented as a set of pure bosonic dimension-six operators in the Standard Model effective field theory. Nontrivial chirality dependence in the fermionic covariant derivative expansion is discussed carefully. The results are checked by computing the hγγ effective coupling and the electroweak oblique parameters, both with the Standard Model effective field theory using our effective operators and by direct loop calculation. In the global fitting with the proposed lepton collider constraint projections, special phenomenological emphasis is placed on the gaugino mass unification scenario (M2 ≃ 2M1) and the anomaly mediation scenario (M1 ≃ 3.3M2). These results show that precision measurements at future lepton colliders will play a very useful complementary role in probing the electroweakino sector, in particular filling the gap left by traditional collider searches in the soft-lepton-plus-missing-ET channel, where the neutralino as the lightest supersymmetric particle is nearly degenerate with the next-to-lightest chargino/neutralino.
Use of Navier-Stokes methods for the calculation of high-speed nozzle flow fields
NASA Technical Reports Server (NTRS)
Georgiadis, Nicholas J.; Yoder, Dennis A.
1994-01-01
Flows through three reference nozzles have been calculated to determine the capabilities and limitations of the widely used Navier-Stokes solver, PARC. The nozzles examined have similar dominant flow characteristics as those considered for supersonic transport programs. Flows from an inverted velocity profile (IVP) nozzle, an underexpanded nozzle, and an ejector nozzle were examined. PARC calculations were obtained with its standard algebraic turbulence model, Thomas, and the two-equation turbulence model, Chien k-epsilon. The Thomas model was run with the coefficient of mixing set both at the default value of 0.09 and at a larger value of 0.13 to improve the mixing prediction. Calculations using the default value substantially underpredicted the mixing for all three flows. The calculations obtained with the higher mixing coefficient better predicted mixing in the IVP and underexpanded nozzle flows but adversely affected PARC's convergence characteristics for the IVP nozzle case. The ejector nozzle case did not converge with the Thomas model and the higher mixing coefficient. The Chien k-epsilon results were in better agreement with the experimental data overall than were those of the Thomas model run with the default mixing coefficient, but the default boundary conditions for k and epsilon underestimated the levels of mixing near the nozzle exits.
Population-Based Analysis and Projections of Liver Supply Under Redistricting.
Parikh, Neehar D; Marrero, Wesley J; Sonnenday, Christopher J; Lok, Anna S; Hutton, David W; Lavieri, Mariel S
2017-09-01
To reduce the geographic heterogeneity in liver transplant allocation, the United Network for Organ Sharing has proposed redistricting, which is impacted by both donor supply and liver transplantation demand. We aimed to determine the impact of demographic changes on the redistricting proposal and to characterize the causes behind geographic heterogeneity in donor supply. We analyzed adult donors from 2002 to 2014 from the United Network for Organ Sharing database and calculated regional liver donation and utilization stratified by age, race, and body mass index. We used US population data to make regional projections of available donors from 2016 to 2025, incorporating the proposed 8-region redistricting plan. We used donors/100 000 population age 18 to 84 years (D/100K) as a measure of equity. We calculated a coefficient of variation (standard deviation/mean) for each regional model. We performed an exploratory analysis in which we applied national rates of donation, utilization, and both to each regional model. The overall projected D/100K will decrease from 2.53 to 2.49 from 2016 to 2025. The coefficient of variation in 2016 is expected to be 20.3% in the 11-region model and 13.2% in the 8-region model. We found that standardizing regional donation and utilization rates would reduce geographic heterogeneity to 4.9% in the 8-region model and 4.6% in the 11-region model. The 8-region allocation model will reduce geographic variation in donor supply to a significant extent; however, we project that geographic disparity will marginally increase over time. Though challenging, interventions to better standardize donation and utilization rates would be impactful in reducing geographic heterogeneity in organ supply.
NASA Astrophysics Data System (ADS)
Kitahara, Teppei; Nierste, Ulrich; Tremper, Paul
2016-12-01
The standard analytic solution of the renormalization group (RG) evolution for the ΔS = 1 Wilson coefficients involves several singularities, which complicate analytic solutions. In this paper we derive a singularity-free solution of the next-to-leading order (NLO) RG equations, which greatly facilitates the calculation of ε′_K, the measure of direct CP violation in K → ππ decays. Using our new RG evolution and the latest lattice results for the hadronic matrix elements, we calculate the ratio ε′_K/ε_K (with ε_K quantifying indirect CP violation) in the Standard Model (SM) at NLO to ε′_K/ε_K = (1.06 ± 5.07) × 10⁻⁴, which is 2.8σ below the experimental value. We also present the evolution matrix in the high-energy regime for calculations of new physics contributions and derive easy-to-use approximate formulae. We find that the RG amplification of new-physics contributions to Wilson coefficients of the electroweak penguin operators is further enhanced by the NLO corrections: if the new contribution is generated at the scale of 1-10 TeV, the RG evolution between the new-physics scale and the electroweak scale enhances these coefficients by 50-100%. Our solution contains a term of order α²_EM/α²_s, which is numerically unimportant for the SM case but should be included in studies of high-scale new physics.
NASA Astrophysics Data System (ADS)
Allanach, B. C.; Athron, P.; Tunstall, Lewis C.; Voigt, A.; Williams, A. G.
2014-09-01
We describe an extension to the SOFTSUSY program that provides for the calculation of the sparticle spectrum in the Next-to-Minimal Supersymmetric Standard Model (NMSSM), in which a chiral superfield that is a singlet of the Standard Model gauge group is added to the Minimal Supersymmetric Standard Model (MSSM) fields. Often, a Z3 symmetry is imposed upon the model. SOFTSUSY can calculate the spectrum in this case as well as in the case where general Z3-violating terms are added to the soft supersymmetry breaking terms and the superpotential. The user provides a theoretical boundary condition for the couplings and mass terms of the singlet. Radiative electroweak symmetry breaking data along with electroweak and CKM matrix data are used as weak-scale boundary conditions. The renormalisation group equations are solved numerically between the weak scale and a high energy scale using a nested iterative algorithm. This paper serves as a manual to the NMSSM mode of the program, detailing the approximations and conventions used.
Catalogue identifier: ADPM_v4_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADPM_v4_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 154886
No. of bytes in distributed program, including test data, etc.: 1870890
Distribution format: tar.gz
Programming language: C++, Fortran
Computer: Personal computer
Operating system: Tested on Linux 3.x
Word size: 64 bits
Classification: 11.1, 11.6
Does the new version supersede the previous version?: Yes
Catalogue identifier of previous version: ADPM_v3_0
Journal reference of previous version: Comput. Phys. Comm. 183 (2012) 785
Nature of problem: Calculating the supersymmetric particle spectrum and mixing parameters in the next-to-minimal supersymmetric standard model. The solution to the renormalisation group equations must be consistent with boundary conditions on supersymmetry breaking parameters, as well as with the weak-scale boundary conditions on gauge couplings, Yukawa couplings and the Higgs potential parameters.
Solution method: Nested iterative algorithm and numerical minimisation of the Higgs potential.
Reasons for new version: Major extension to include the next-to-minimal supersymmetric standard model.
Summary of revisions: Added additional supersymmetric and supersymmetry breaking parameters associated with the additional gauge singlet. Electroweak symmetry breaking conditions are significantly changed in the next-to-minimal mode, and some sparticle mixing changes. An interface to NMSSMTools has also been included. Some of the object structure has also changed, and the command line interface has been made more user friendly.
Restrictions: SOFTSUSY will provide a solution only in the perturbative regime and it assumes that all couplings of the model are real (i.e. CP-conserving). If the parameter point under investigation is non-physical for some reason (for example because the electroweak potential does not have an acceptable minimum), SOFTSUSY returns an error message.
Running time: A few seconds per parameter point.
Form Factor Measurements at BESIII for an Improved Standard Model Prediction of the Muon g-2
NASA Astrophysics Data System (ADS)
Destefanis, Marco
The anomalous part of the magnetic moment of the muon, (g-2)μ, allows for one of the most precise tests of the Standard Model of particle physics. We report on recent results by the BESIII Collaboration on exclusive hadronic cross section channels, such as the 2π, 3π, and 4π final states. These measurements are of utmost importance for an improved calculation of the hadronic vacuum polarization contribution to (g-2)μ, which currently limits the overall Standard Model prediction of this quantity. BESIII has furthermore initiated a programme of spacelike transition form factor measurements, which can be used for a determination of the hadronic light-by-light contribution to (g-2)μ in a data-driven approach. These results are of relevance in view of the new and direct measurements of (g-2)μ foreseen at Fermilab/USA and J-PARC/Japan.
NASA Astrophysics Data System (ADS)
Heilman, Jesse Alan
The search for the production of four top quarks decaying in the dileptonic channel in proton-proton collisions at the LHC is presented. The analysis utilises the data recorded by the CMS experiment at √s = 13 TeV in 2015, corresponding to an integrated luminosity of 2.6 inverse femtobarns. A boosted decision tree algorithm is used to select signal and suppress background events. Upper limits on dileptonic four top quark production of 14.9 times the predicted standard model cross section (observed) and 22.3 +16.2/−8.4 times the predicted standard model cross section (expected) are calculated at the 95% confidence level. A combination is then performed with a parallel analysis of the single lepton channel to extend the reach of the search.
Modeling Bloch oscillations in ultra-small Josephson junctions
NASA Astrophysics Data System (ADS)
Vora, Heli; Kautz, Richard; Nam, Sae Woo; Aumentado, Jose
In a seminal paper, Likharev et al. developed a theory for ultra-small Josephson junctions with Josephson coupling energy (Ej) less than the charging energy (Ec) and showed that such junctions exhibit Bloch oscillations, which could be used to make a fundamental current standard that is a dual of the Josephson volt standard. Here, based on the model of Geigenmüller and Schön, we numerically calculate the current-voltage relationship of such an ultra-small junction, including various error processes present in a nanoscale Josephson junction, such as random quasiparticle tunneling events and Zener tunneling between bands. This model allows us to explore the parameter space to see the effect of each process on the width and height of the Bloch step and serves as a guide to determine whether it is possible to build a quantum current standard of metrological precision using Bloch oscillations.
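The duality mentioned above ties a DC current to a Bloch-oscillation frequency via f = I/2e; a one-line illustration (the relation is standard, the sample current is arbitrary):

```python
# Bloch-oscillation duality: a current I corresponds to a frequency f = I / (2e),
# the basis of the proposed quantum current standard (a sketch, not from the paper).
E_CHARGE = 1.602176634e-19  # C, exact SI value

def bloch_frequency(current_amps: float) -> float:
    return current_amps / (2.0 * E_CHARGE)

print(f"{bloch_frequency(1e-9) / 1e9:.2f} GHz for 1 nA")  # ~3.12 GHz
```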
B→πll Form Factors for New Physics Searches from Lattice QCD.
Bailey, Jon A; Bazavov, A; Bernard, C; Bouchard, C M; DeTar, C; Du, Daping; El-Khadra, A X; Freeland, E D; Gámiz, E; Gottlieb, Steven; Heller, U M; Kronfeld, A S; Laiho, J; Levkova, L; Liu, Yuzhi; Lunghi, E; Mackenzie, P B; Meurice, Y; Neil, E; Qiu, Si-Wei; Simone, J N; Sugar, R; Toussaint, D; Van de Water, R S; Zhou, Ran
2015-10-09
The rare decay B→πℓ⁺ℓ⁻ arises from b→d flavor-changing neutral currents and could be sensitive to physics beyond the standard model. Here, we present the first ab initio QCD calculation of the B→π tensor form factor f_T. Together with the vector and scalar form factors f₊ and f₀ from our companion work [J. A. Bailey et al., Phys. Rev. D 92, 014024 (2015)], these parametrize the hadronic contribution to B→π semileptonic decays in any extension of the standard model. We obtain the total branching ratio BR(B⁺→π⁺μ⁺μ⁻) = 20.4(2.1)×10⁻⁹ in the standard model, which is the most precise theoretical determination to date, and agrees with the recent measurement from the LHCb experiment [R. Aaij et al., J. High Energy Phys. 12 (2012) 125].
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allanach, B
2004-03-01
The work contained herein constitutes a report of the "Beyond the Standard Model" working group for the Workshop "Physics at TeV Colliders", Les Houches, France, 26 May-6 June, 2003. The research presented is original, and was performed specifically for the workshop. Tools for calculations in the minimal supersymmetric standard model are presented, including a comparison of the dark matter relic density predicted by public codes. Reconstruction of supersymmetric particle masses at the LHC and a future linear collider facility is examined. Less orthodox supersymmetric signals such as non-pointing photons and R-parity violating signals are studied. Features of extra dimensional models are examined next, including measurement strategies for radions and Higgs bosons, as well as the virtual effects of Kaluza-Klein modes of gluons. Finally, there is an update on LHC Z' studies.
Introduction to Elementary Particle Physics
NASA Astrophysics Data System (ADS)
Bettini, Alessandro
The Standard Model is the most comprehensive physical theory ever developed. This textbook conveys the basic elements of the Standard Model using elementary concepts, without the theoretical rigor found in most other texts on this subject. It contains examples of basic experiments, allowing readers to see how measurements and theory interplay in the development of physics. The author examines leptons, hadrons and quarks, before presenting the dynamics and the surprising properties of the charges of the different forces. The textbook concludes with a brief discussion on the recent discoveries of physics beyond the Standard Model, and its connections with cosmology. Quantitative examples are given, and the reader is guided through the necessary calculations. Each chapter ends with exercises, and solutions to some problems are included in the book. Complete solutions are available to instructors at www.cambridge.org/9780521880213. This textbook is suitable for advanced undergraduate students and graduate students.
B→πℓℓ Form Factors for New-Physics Searches from Lattice QCD
Bailey, Jon A.
2015-10-07
The rare decay B→πℓ⁺ℓ⁻ arises from b→d flavor-changing neutral currents and could be sensitive to physics beyond the standard model. Here, we present the first ab initio QCD calculation of the B→π tensor form factor f_T. Together with the vector and scalar form factors f₊ and f₀ from our companion work [J. A. Bailey et al., Phys. Rev. D 92, 014024 (2015)], these parametrize the hadronic contribution to B→π semileptonic decays in any extension of the standard model. We obtain the total branching ratio BR(B⁺→π⁺μ⁺μ⁻) = 20.4(2.1)×10⁻⁹ in the standard model, which is the most precise theoretical determination to date, and agrees with the recent measurement from the LHCb experiment [R. Aaij et al., J. High Energy Phys. 12 (2012) 125].
Moerbeek, Mirjam; van Schie, Sander
2016-07-11
The number of clusters in a cluster randomized trial is often low. It is therefore likely that random assignment of clusters to treatment conditions results in covariate imbalance. There are no studies that quantify the consequences of covariate imbalance in cluster randomized trials on parameter and standard error bias and on power to detect treatment effects. The consequences of covariate imbalance in unadjusted and adjusted linear mixed models are investigated by means of a simulation study. The factors in this study are the degree of imbalance, the covariate effect size, the cluster size and the intraclass correlation coefficient. The covariate is binary and measured at the cluster level; the outcome is continuous and measured at the individual level. The results show covariate imbalance results in negligible parameter bias and small standard error bias in adjusted linear mixed models. Ignoring the possibility of covariate imbalance while calculating the sample size at the cluster level may result in a loss in power of at most 25% in the adjusted linear mixed model. The results are more severe for the unadjusted linear mixed model: parameter biases up to 100% and standard error biases up to 200% may be observed. Power levels based on the unadjusted linear mixed model are often too low. The consequences are most severe for large clusters and/or small intraclass correlation coefficients, since then the required number of clusters to achieve a desired power level is smallest. The possibility of covariate imbalance should be taken into account while calculating the sample size of a cluster randomized trial; otherwise more sophisticated methods of randomizing clusters to treatments should be used, such as stratification or balance algorithms. All relevant covariates should be carefully identified, actually be measured and included in the statistical model to avoid severe levels of parameter and standard error bias and insufficient power levels.
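The cluster-level sample-size arithmetic behind that last point is the standard design-effect calculation; a sketch (not the paper's simulation code) showing why large clusters with small ICCs need very few clusters:

```python
import math

# Standard design-effect arithmetic for cluster randomized trials: the
# individually randomized sample size is inflated by 1 + (m - 1) * ICC
# before dividing into clusters of size m.
def clusters_per_arm(n_individual: int, cluster_size: int, icc: float) -> int:
    design_effect = 1 + (cluster_size - 1) * icc
    return math.ceil(n_individual * design_effect / cluster_size)

# Large clusters with small ICCs need very few clusters, which is exactly the
# regime where the paper finds covariate imbalance to be most damaging.
print(clusters_per_arm(n_individual=128, cluster_size=50, icc=0.01))  # -> 4
```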
NASA Astrophysics Data System (ADS)
Polichtchouk, Yuri; Tokareva, Olga; Bulgakova, Irina V.
2003-03-01
Methodological problems of space-image processing for assessing the impact of atmospheric pollution on forest ecosystems using geoinformation systems are developed. The approach to quantitative assessment of atmospheric pollution impact on forest ecosystems is based on calculating the relative areas of forest landscapes that lie inside atmospheric pollution zones. The landscape structure of forested territories in the southern part of Western Siberia is determined by processing medium-resolution space images from the spaceborne Resource-O satellite. Particular attention is given to modeling atmospheric pollution zones caused by gas flaring on oil-field territories. Pollution zones were revealed by modeling contaminant dispersal in the atmosphere with standard models. Polluted landscape areas are calculated as a function of atmospheric pollution level.
NASA Astrophysics Data System (ADS)
Polichtchouk, Yuri; Ryukhko, Viatcheslav; Tokareva, Olga; Alexeeva, Mary
2002-02-01
The structure of a geoinformation modeling system for assessing the environmental impact of atmospheric pollution on the forest-swamp ecosystems of West Siberia is considered. A complex approach to the assessment of man-caused impact, based on the combination of sanitary-hygienic and landscape-geochemical approaches, is reported. Methodological problems of analyzing atmospheric pollution impact on vegetation biosystems using geoinformation systems and remote sensing data are developed. The landscape structure of oil production territories in the southern part of West Siberia is determined by processing space images from the spaceborne Resource-O satellite. Particular attention is given to modeling atmospheric pollution zones caused by gas flaring on oil-field territories. Pollution zones were revealed by modeling contaminant dispersal in the atmosphere with a standard model. Polluted landscape areas are calculated as a function of oil production volume; the calculated data are shown to be well approximated by polynomial models.
NASA Technical Reports Server (NTRS)
Rauch, T.; Werner, K.; Bohlin, R.; Kruk, J. W.
2013-01-01
Hydrogen-rich, DA-type white dwarfs are particularly suited as primary standard stars for flux calibration. State-of-the-art NLTE models consider opacities of species up to trans-iron elements and provide reliable synthetic stellar-atmosphere spectra to compare with observations. Aims. We will establish a database of theoretical spectra of stellar flux standards that are easily accessible via a web interface. Methods. In the framework of the Virtual Observatory, the German Astrophysical Virtual Observatory developed the registered service TheoSSA. It provides easy access to stellar spectral energy distributions (SEDs) and is intended to ingest SEDs calculated by any model-atmosphere code. In the case of the DA white dwarf G191-B2B, we demonstrate that the model reproduces not only its overall continuum shape but also the numerous metal lines exhibited in its ultraviolet spectrum. Results. TheoSSA is in operation and presently contains a variety of SEDs for DA-type white dwarfs. It will be extended in the near future and can host SEDs of all primary and secondary flux standards. The spectral analysis of G191-B2B has shown that our hydrostatic models reproduce the observations best at Teff = 60,000 ± 2,000 K and log g = 7.60 ± 0.05. We newly identified Fe VI, Ni VI, and Zn IV lines. For the first time, we determined the photospheric zinc abundance, with a logarithmic mass fraction of -4.89 (7.5 × solar). The abundances of He (upper limit), C, N, O, Al, Si, P, S, Fe, Ni, Ge, and Sn were precisely determined. Upper abundance limits of about 10% solar were derived for Ti, Cr, Mn, and Co. Conclusions. The TheoSSA database of theoretical SEDs of stellar flux standards guarantees that the flux calibration of all astronomical data and cross-calibration between different instruments can be based on the same models, and that SEDs calculated with different model-atmosphere codes are easy to compare.
Heterotic line bundle models on elliptically fibered Calabi-Yau three-folds
NASA Astrophysics Data System (ADS)
Braun, Andreas P.; Brodie, Callum R.; Lukas, Andre
2018-04-01
We analyze heterotic line bundle models on elliptically fibered Calabi-Yau three-folds over weak Fano bases. In order to facilitate Wilson line breaking to the standard model group, we focus on elliptically fibered three-folds with a second section and a freely-acting involution. Specifically, we consider toric weak Fano surfaces as base manifolds and identify six such manifolds with the required properties. The requisite mathematical tools for the construction of line bundle models on these spaces, including the calculation of line bundle cohomology, are developed. A computer scan leads to more than 400 line bundle models with the right number of families and an SU(5) GUT group which could descend to standard-like models after taking the ℤ2 quotient. A common and surprising feature of these models is the presence of a large number of vector-like states.
Asymmetries of the B →K*μ+μ- decay and the search of new physics beyond the standard model
NASA Astrophysics Data System (ADS)
Fu, Hai-Bing; Wu, Xing-Gang; Cheng, Wei; Zhong, Tao; Sun, Zhan
2018-03-01
In this paper, we compute the forward-backward asymmetry and the isospin asymmetry of the B → K*μ⁺μ⁻ decay. The B → K* transition form factors (TFFs) are key components of the decay. To achieve a more accurate QCD prediction, we adopt a chiral correlator for calculating the QCD light cone sum rules for those TFFs, with the purpose of suppressing the uncertain high-twist distribution amplitudes. Our predictions show that the asymmetries under the standard model and the minimal supersymmetric standard model with minimal flavor violation are close in shape for q² ≥ 6 GeV² and are consistent with the Belle, LHCb, and CDF data within errors. When q² < 2 GeV², their predictions behave quite differently. Thus, a careful study of the B → K*μ⁺μ⁻ decay within the small q² region could be helpful in searching for new physics beyond the standard model. As a further application, we also apply the B → K* TFFs to the branching ratio and longitudinal polarization fraction of the B → K*νν̄ decay within different models.
NASA Astrophysics Data System (ADS)
Perrot, Y.; Degoul, F.; Auzeloux, P.; Bonnet, M.; Cachin, F.; Chezal, J. M.; Donnarieix, D.; Labarre, P.; Moins, N.; Papon, J.; Rbah-Vidal, L.; Vidal, A.; Miot-Noirault, E.; Maigne, L.
2014-05-01
The GATE Monte Carlo simulation platform based on the Geant4 toolkit is under constant improvement for dosimetric calculations. In this study, we explore its use for the dosimetry of preclinical targeted radiotherapy of melanoma using a new specific melanin-targeting radiotracer labeled with iodine-131. Calculated absorbed fractions and S values for spheres and murine models (digital and CT-scan-based mouse phantoms) are compared between the GATE and EGSnrc Monte Carlo codes, considering monoenergetic electrons and the detailed energy spectrum of iodine-131. The behavior of Geant4 standard and low-energy models is also tested. Following the different authors' guidelines concerning the parameterization of electron physics models, this study demonstrates agreement with EGSnrc of 1.2% and 1.5%, respectively, for the calculation of S values for small spheres and mouse phantoms. S values calculated with GATE are then used to compute the dose distribution in organs of interest using the activity distribution in mouse phantoms. This study gives the dosimetric data required for the translation of the new treatment to the clinic.
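The final step of that workflow, turning activity distributions and S values into organ doses, follows the standard MIRD formalism:

```latex
D_{\text{target}} \;=\; \sum_{\text{source } s} \tilde{A}_s \, S(\text{target} \leftarrow s)
```

where the time-integrated (cumulated) activity in each source region s multiplies the precomputed absorbed dose per unit cumulated activity, S.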
Woodings, S
2004-09-01
Iodine-131 patients pose a radiation risk to their family members, carers and colleagues. Doses from thyrotoxicosis and thyroid cancer patients undergoing standard treatments have been well characterised in the literature. However, the resulting precautions cannot be easily adapted to circumstances where the patient has an unusual affliction, or an atypical family or occupational environment. In this study, a model for calculating dose from an I-131 patient is derived from first principles. The model is combined with existing results from the literature to determine a distance weighting factor between patients and family members. This technique reduces the uncertainty in the dose calculations by removing the need to guess the unknown patterns of close contact, a problem common to all previous dose calculation techniques. Data are presented for four unusual I-131 treatments: a child thyroid cancer patient, two thyroid cancer dialysis patients and a phaeochromocytoma patient. The model is used to calculate appropriate periods of restricted contact for these patients. The recommendations provide a useful guide for future unusual I-131 treatments.
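A minimal sketch of the kind of first-principles dose model described, treating the patient as a decaying point source at a fixed distance; the dose-rate constant and scenario numbers are assumptions, not values from the paper:

```python
import math

# Dose rate falls as 1/d^2 and decays with the effective half-life; integrating
# to a time horizon gives the cumulative dose to a family member.
GAMMA_I131 = 5.2e-5   # mSv*m^2/(MBq*h), assumed dose-rate constant for I-131

def cumulative_dose(a0_mbq, distance_m, t_half_eff_h, horizon_h):
    lam = math.log(2) / t_half_eff_h
    return (GAMMA_I131 * a0_mbq / distance_m**2) * (1 - math.exp(-lam * horizon_h)) / lam

# e.g. 800 MBq retained activity, 1 m separation, 7-day effective half-life,
# 2-week restriction period (all illustrative)
print(f"{cumulative_dose(800, 1.0, 7 * 24, 14 * 24):.1f} mSv")
```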
Two-loop mass splittings in electroweak multiplets: Winos and minimal dark matter
NASA Astrophysics Data System (ADS)
McKay, James; Scott, Pat
2018-03-01
The radiatively-induced splitting of masses in electroweak multiplets is relevant for both collider phenomenology and dark matter. Precision two-loop corrections of O(MeV) to the triplet mass splitting in the wino limit of the minimal supersymmetric standard model can affect particle lifetimes by up to 40%. We improve on previous two-loop self-energy calculations for the wino model by obtaining consistent input parameters to the calculation via two-loop renormalization-group running, and including the effect of finite light quark masses. We also present the first two-loop calculation of the mass splitting in an electroweak fermionic quintuplet, corresponding to the viable form of minimal dark matter (MDM). We place significant constraints on the lifetimes of the charged and doubly-charged fermions in this model. We find that the two-loop mass splittings in the MDM quintuplet are not constant in the large-mass limit, as might naively be expected from the triplet calculation. This is due to the influence of the additional heavy fermions in loop corrections to the gauge boson propagators.
A user-friendly one-dimensional model for wet volcanic plumes
Mastin, Larry G.
2007-01-01
This paper presents a user-friendly graphically based numerical model of one-dimensional steady state homogeneous volcanic plumes that calculates and plots profiles of upward velocity, plume density, radius, temperature, and other parameters as a function of height. The model considers effects of water condensation and ice formation on plume dynamics as well as the effect of water added to the plume at the vent. Atmospheric conditions may be specified through input parameters of constant lapse rates and relative humidity, or by loading profiles of actual atmospheric soundings. To illustrate the utility of the model, we compare calculations with field-based estimates of plume height (∼9 km) and eruption rate (≳4 × 10⁵ kg/s) during a brief tephra eruption at Mount St. Helens on 8 March 2005. Results show that the atmospheric conditions on that day boosted plume height by 1–3 km over that in a standard dry atmosphere. Although the eruption temperature was unknown, model calculations most closely match the observations for a temperature that is below magmatic but above 100°C.
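As a rough cross-check of these numbers, a widely used empirical plume-height fit (often attributed to Mastin et al., 2009, and not part of the 1-D model described here) gives roughly 7 km for the quoted eruption rate, consistent with the reported 1–3 km atmospheric boost to the observed ∼9 km:

```python
# Empirical fit H[km] = 2.00 * V^0.241, with V the dense-rock-equivalent
# volume flux in m^3/s; the magma density below is an assumed typical value.
rho_dre = 2500.0            # kg/m^3, assumed dense-rock density
mass_flux = 4e5             # kg/s, the lower bound quoted in the abstract
v = mass_flux / rho_dre     # m^3/s DRE
h = 2.00 * v ** 0.241
print(f"{h:.1f} km")        # ~6.8 km in a standard atmosphere
```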
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 3 2013-01-01 2013-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
10 CFR 434.605 - Standard Calculation Procedure.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Standard Calculation Procedure. 434.605 Section 434.605 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Compliance Alternative § 434.605 Standard Calculation...
A per-cent-level determination of the nucleon axial coupling from quantum chromodynamics.
Chang, C C; Nicholson, A N; Rinaldi, E; Berkowitz, E; Garron, N; Brantley, D A; Monge-Camacho, H; Monahan, C J; Bouchard, C; Clark, M A; Joó, B; Kurth, T; Orginos, K; Vranas, P; Walker-Loud, A
2018-06-01
The axial coupling of the nucleon, g_A, is the strength of its coupling to the weak axial current of the standard model of particle physics, in much the same way as the electric charge is the strength of the coupling to the electromagnetic current. This axial coupling dictates the rate at which neutrons decay to protons, the strength of the attractive long-range force between nucleons and other features of nuclear physics. Precision tests of the standard model in nuclear environments require a quantitative understanding of nuclear physics that is rooted in quantum chromodynamics, a pillar of the standard model. The importance of g_A makes it a benchmark quantity to determine theoretically, a difficult task because quantum chromodynamics is non-perturbative, precluding known analytical methods. Lattice quantum chromodynamics provides a rigorous, non-perturbative definition of quantum chromodynamics that can be implemented numerically. It has been estimated that a precision of two per cent would be possible by 2020 if two challenges are overcome [1,2]: contamination of g_A from excited states must be controlled in the calculations, and statistical precision must be improved markedly [2-10]. Here we use an unconventional method [11] inspired by the Feynman-Hellmann theorem that overcomes these challenges. We calculate a g_A value of 1.271 ± 0.013, which has a precision of about one per cent.
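Schematically, the Feynman-Hellmann approach referred to above extracts g_A from the response of the nucleon energy to an external coupling λ to the axial current added to the action, rather than from an explicit three-point correlation function:

```latex
\frac{\partial E_N(\lambda)}{\partial \lambda}
  = \Big\langle N(\lambda) \,\Big|\, \frac{\partial H(\lambda)}{\partial \lambda} \,\Big|\, N(\lambda) \Big\rangle ,
\qquad
g_A = \frac{\partial E_N(\lambda)}{\partial \lambda}\bigg|_{\lambda = 0}
```

This is only a schematic of the idea; the abstract's point is that working with energy shifts helps control the excited-state contamination and statistical noise that limit conventional calculations.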
DOE Office of Scientific and Technical Information (OSTI.GOV)
Orrell, Alice C.; Dixon, Douglas R.
Using the wind data collected at a location in Fort Wainwright's Donnelly Training Area (DTA) near the Cold Regions Test Center (CRTC) test track, Pacific Northwest National Laboratory (PNNL) estimated the gross and net energy production that the proposed turbine models would have generated if exposed to the wind resource measured at the meteorological tower (met tower) location during the year of measurement. Calculations are based on the proposed turbine models' standard-atmosphere power curves, the annual average wind speeds, wind shear estimates, and standard industry assumptions.
Standard Model and New Physics for ɛ'_K/ɛ_K
NASA Astrophysics Data System (ADS)
Kitahara, Teppei
2018-05-01
The first result of the lattice simulation and improved perturbative calculations have pointed to a discrepancy between data on ɛ'_K/ɛ_K and the standard-model (SM) prediction. Several new physics (NP) models can explain this discrepancy, and such NP models are likely to predict deviations of ℬ(K → πνν̄).
Evaluation of portfolio credit risk based on survival analysis for progressive censored data
NASA Astrophysics Data System (ADS)
Jaber, Jamil J.; Ismail, Noriszura; Ramli, Siti Norafidah Mohd
2017-04-01
In credit risk management, the Basel committee provides a choice of three approaches for financial institutions to calculate the required capital: the standardized approach, the Internal Ratings-Based (IRB) approach, and the Advanced IRB approach. The IRB approach is usually preferred over the standardized approach due to its higher accuracy and lower capital charges. This paper uses several parametric models (exponential, log-normal, gamma, Weibull, log-logistic, Gompertz) to evaluate the credit risk of the corporate portfolio in Jordanian banks, based on a monthly sample collected from January 2010 to December 2015. The best model is selected using several goodness-of-fit criteria (MSE, AIC, BIC). The results indicate that the Gompertz distribution is the best parametric model for the data.
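A sketch of the model-selection step on synthetic lifetimes (the paper fits monthly Jordanian bank data and also uses MSE and BIC; scipy's fisk and gompertz distributions would cover the log-logistic and Gompertz cases):

```python
import numpy as np
from scipy import stats

# Fit candidate lifetime distributions to synthetic months-to-default data
# and rank them by AIC; lower is better.
rng = np.random.default_rng(0)
times = rng.gamma(shape=2.0, scale=12.0, size=500)  # stand-in for real data

candidates = {
    "exponential": stats.expon,
    "log-normal": stats.lognorm,
    "gamma": stats.gamma,
    "weibull": stats.weibull_min,
}

for name, dist in candidates.items():
    params = dist.fit(times)                       # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(times, *params))
    aic = 2 * len(params) - 2 * loglik
    print(f"{name:12s} AIC = {aic:.1f}")
```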
Realizing three generations of the Standard Model fermions in the type IIB matrix model
NASA Astrophysics Data System (ADS)
Aoki, Hajime; Nishimura, Jun; Tsuchiya, Asato
2014-05-01
We discuss how the Standard Model particles appear from the type IIB matrix model, which is considered to be a nonperturbative formulation of superstring theory. In particular, we are concerned with a constructive definition of the theory, in which we start with finite-N matrices and take the large-N limit afterwards. In that case, it was pointed out recently that realizing chiral fermions in the model is more difficult than it had been thought from formal arguments at N = ∞ and that introduction of a matrix version of the warp factor is necessary. Based on this new insight, we show that two generations of the Standard Model fermions can be realized by considering a rather generic configuration of fuzzy S2 and fuzzy S2 × S2 in the extra dimensions. We also show that three generations can be obtained by squashing one of the S2's that appear in the configuration. Chiral fermions appear at the intersections of the fuzzy manifolds with nontrivial Yukawa couplings to the Higgs field, which can be calculated from the overlap of their wave functions.
Experimental Validation of Thermal Retinal Models of Damage from Laser Radiation
1979-08-01
[List-of-figures fragments] ... for measuring relative intensity profile with a thermocouple or fiber-optic sensor (p. 72); B-2: Calculated relative intensity profiles measured by 5- and 10-μm-radius sensors of a Gaussian beam, with standard deviation of 10 μm. [Body fragment] ... the Air Force developed a model for the mathematical prediction of thermal effects of laser radiation on the eye (8). Given the characteristics...
Light curves for bump Cepheids computed with a dynamically zoned pulsation code
NASA Technical Reports Server (NTRS)
Adams, T. F.; Castor, J. I.; Davis, C. G.
1980-01-01
The dynamically zoned pulsation code developed by Castor, Davis, and Davison was used to recalculate the Goddard model and to calculate three other Cepheid models with the same period (9.8 days). This family of models shows how the bumps and other features of the light and velocity curves change as the mass is varied at constant period. The use of a code that is capable of producing reliable light curves demonstrates that the light and velocity curves for 9.8 day Cepheid models with standard homogeneous compositions do not show bumps like those that are observed unless the mass is significantly lower than the 'evolutionary mass.' The light and velocity curves for the Goddard model presented here are similar to those computed independently by Fischel, Sparks, and Karp. They should be useful as standards for future investigators.
NASA Astrophysics Data System (ADS)
Landry, Brian R.; Subotnik, Joseph E.
2011-11-01
We evaluate the accuracy of Tully's surface hopping algorithm for the spin-boson model for the case of a small diabatic coupling parameter (V). We calculate the transition rates between diabatic surfaces, and we compare our results to the expected Marcus rates. We show that standard surface hopping yields an incorrect scaling with diabatic coupling (linear in V), which we demonstrate is due to an incorrect treatment of decoherence. By modifying standard surface hopping to include decoherence events, we recover the correct scaling (~V²).
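The benchmark rate here is the Marcus golden-rule expression, which is quadratic in the diabatic coupling; the linear-in-V scaling of standard surface hopping is what signals the missing decoherence:

```latex
k_{\mathrm{Marcus}} \;=\; \frac{2\pi}{\hbar}\,|V|^2\,
\frac{1}{\sqrt{4\pi \lambda k_B T}}\,
\exp\!\left[-\frac{(\Delta G^{\circ} + \lambda)^2}{4\lambda k_B T}\right]
\;\;\propto\;\; V^2
```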
Legaz-García, María del Carmen; Martínez-Costa, Catalina; Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás
2012-01-01
Linking Electronic Healthcare Record (EHR) content to educational materials has been considered a key international recommendation to enable clinical engagement and to promote patient safety. This would help citizens access reliable information available on the web and guide them properly. In this paper, we describe an approach in that direction, based on the use of dual-model EHR standards and standardized educational contents. The recommendation method is based on the semantic coverage of the learning content repository for a particular archetype, which is calculated by applying semantic web technologies such as ontologies and semantic annotations.
Revision of the design of a standard for the dimensions of school furniture.
Molenbroek, J F M; Kroon-Ramaekers, Y M T; Snijders, C J
2003-06-10
In this study an anthropometric design process was followed. The aim was to improve the fit of school furniture sizes for European children. It was demonstrated statistically that the draft of a European standard does not cover the target population. No literature on design criteria for sizes exists, and in practice it is common to calculate the fit for only the mean values (P50). The calculations reported here used body dimensions of Dutch children, measured by the authors' department, together with data from German and British national standards. A design process was followed that contains several steps, including: target group, anthropometric model, and percentage exclusion. The criteria developed in this study are (1) a fit on the basis of 1% exclusion (P1 or P99), and (2) a prescription based on popliteal height. Based on this new approach it was concluded that prescription of a set size should be based on popliteal height rather than body height. The drafted standard, prEN 1729, can be improved with this approach. A European standard for school furniture should include the exception that for Dutch children an extra large size is required.
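Under a normality assumption, the P1/P99 criterion reduces to a two-line percentile calculation; the mean and SD below are placeholders, not the Dutch survey values:

```python
from scipy import stats

# 1% exclusion at each tail of an assumed normal popliteal-height distribution.
mean_popliteal_mm, sd_popliteal_mm = 400.0, 28.0   # illustrative values

p1 = stats.norm.ppf(0.01, loc=mean_popliteal_mm, scale=sd_popliteal_mm)
p99 = stats.norm.ppf(0.99, loc=mean_popliteal_mm, scale=sd_popliteal_mm)
print(f"P1 = {p1:.0f} mm, P99 = {p99:.0f} mm")
```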
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. The test uses a unidirectional composite specimen with an artificial delamination, subjected to bending loads, to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data can be created using the standard that are significantly in error. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions under which the nonlinear error will remain below 5%.
A computational imaging target specific detectivity metric
NASA Astrophysics Data System (ADS)
Preece, Bradley L.; Nehmetallah, George
2017-05-01
Due to the large quantity of low-cost, high-speed computational processing available today, computational imaging (CI) systems are expected to play a major role in next generation multifunctional cameras. The purpose of this work is to quantify the performance of these CI systems in a standardized manner. The diversity of CI system designs available today or proposed for the near future poses significant challenges in modeling and calculating a standardized detection signal-to-noise ratio (SNR) to measure the performance of these systems. In this paper, we develop a path forward for a standardized detectivity metric for CI systems. The detectivity metric is designed to evaluate the performance of a CI system searching for a specific known target or signal of interest, and is defined as the optimal linear matched-filter SNR, similar to the Hotelling SNR, calculated in computational space with special considerations for standardization. The detectivity metric is therefore designed to be flexible, in order to handle various types of CI systems and specific targets, while keeping the complexity and assumptions about the systems to a minimum.
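A minimal numerical sketch of this kind of metric: the optimal linear matched-filter (Hotelling-style) SNR of a known target signature pushed through an assumed linear CI forward model; every array below is illustrative, not from the paper:

```python
import numpy as np

# Matched-filter SNR in "computational space": SNR^2 = s^T Sigma^-1 s, where s
# is the target signature after the forward model and Sigma the noise covariance.
rng = np.random.default_rng(1)
H = rng.normal(size=(64, 128))               # assumed linear CI forward model
target = rng.normal(size=128)                # known target signature in scene space

s = H @ target                               # signature in measurement space
sigma = 0.1 * np.eye(64) + 0.01 * H @ H.T    # assumed noise covariance

snr = np.sqrt(s @ np.linalg.solve(sigma, s))
print(f"detectivity (matched-filter SNR): {snr:.1f}")
```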
Secondary ionization in a flat universe
NASA Technical Reports Server (NTRS)
Atrio-Barandela, F.; Doroshkevich, A. G.
1994-01-01
We analyze the effect of a secondary ionization on the evolution of temperature fluctuations in cosmic background radiation. The main results presented in this paper are appropriate analytic expressions of the transfer function relating temperature fluctuations to matter density perturbations at recombination for all possible recombination histories. Furthermore, we particularize our calculation to the standard cold dark matter model, where we study the erasure of primordial temperature fluctuations and calculate the magnitude and angular scale of the damping induced by a late recombination.
Planar dielectric waveguides in rotation are optical fibers: comparison with the classical model.
Peña García, Antonio; Pérez-Ocón, Francisco; Jiménez, José Ramón
2008-01-21
A novel and simpler method to calculate the main parameters in fiber optics is presented. This method is based on a planar dielectric waveguide in rotation and, as an example, it is applied to calculate the turning points and the inner caustic in an optical fiber with a parabolic refractive index. It is shown that the solution found using this method agrees with the standard (and more complex) method, whose solutions for these points are also summarized in this paper.
Recent developments in the economic modeling of photovoltaic module manufacturing
NASA Technical Reports Server (NTRS)
Chamberlain, R. G.
1979-01-01
Recent developments in the solar array manufacturing industry costing standards (SAMICS) are described. Consideration is given to the added capability to handle arbitrary operating schedules and the revised procedure for calculation of one-time costs. The results of an extensive validation study are summarized.
The equation of state of n-pentane in the atomistic model TraPPE-EH
NASA Astrophysics Data System (ADS)
Valeev, B. U.; Pisarev, V. V.
2018-01-01
In this work, we study the vapor-liquid equilibrium of n-pentane. We use the TraPPE-EH (transferable potentials for phase equilibria-explicit hydrogen) force field, in which each hydrogen and carbon atom is treated as an independent center of force. The fluid behavior was investigated at different densities and temperatures by the molecular dynamics method. The n-pentane evaporation curve was calculated in the temperature range of 290 to 390 K, and the densities of the coexisting phases were also calculated. The compression curve at 370 K was calculated and the isothermal bulk modulus was found. The simulated properties of n-pentane are in good agreement with data from the National Institute of Standards and Technology database, so the TraPPE-EH model can be recommended for simulations of hydrocarbons.
Implications of new physics in the decays Bc→(J /ψ , ηc)τ ν
NASA Astrophysics Data System (ADS)
Tran, C. T.; Ivanov, M. A.; Körner, J. G.; Santorelli, P.
2018-03-01
We study the semileptonic decays of the Bc meson into final charmonium states within the standard model and beyond. The relevant hadronic transition form factors are calculated in the framework of the covariant confined quark model developed by us. We focus on the tau mode of these decays, which may provide some hints of new physics effects. We extend the standard model by assuming a general effective Hamiltonian describing the b →c τ ν transition, which consists of the full set of the four-fermion operators. We then obtain experimental constraints on the Wilson coefficients corresponding to each operator and provide predictions for the branching fractions and other polarization observables in different new physics scenarios.
Modeling the gas-phase thermochemistry of organosulfur compounds.
Vandeputte, Aäron G; Sabbe, Maarten K; Reyniers, Marie-Françoise; Marin, Guy B
2011-06-27
Key to understanding the involvement of organosulfur compounds in a variety of radical chemistries, such as atmospheric chemistry, polymerization, pyrolysis, and so forth, is knowledge of their thermochemical properties. For organosulfur compounds and radicals, thermochemical data are, however, much less well documented than for hydrocarbons. The traditional recourse to the Benson group additivity method offers no solace, since only a very limited number of group additivity values (GAVs) is available. In this work, CBS-QB3 calculations augmented with 1D hindered rotor corrections for 122 organosulfur compounds and 45 organosulfur radicals were used to derive 93 Benson group additivity values, 18 ring-strain corrections, 2 non-nearest-neighbor interactions, and 3 resonance corrections for standard enthalpies of formation, standard molar entropies, and heat capacities for organosulfur compounds and organosulfur radicals. The reported GAVs are consistent with previously reported GAVs for hydrocarbons and hydrocarbon radicals and include 77 contributions, among which 26 radical contributions, which, to the best of our knowledge, have not been reported before. The GAVs allow one to estimate the standard enthalpies of formation at 298 K, the standard entropies at 298 K, and standard heat capacities in the temperature range 300-1500 K for a large set of organosulfur compounds, that is, thiols, thioketones, polysulfides, alkylsulfides, thials, dithioates, and cyclic sulfur compounds. For a validation set of 26 organosulfur compounds, the mean absolute deviation between experimental and group additively modeled enthalpies of formation amounts to 1.9 kJ mol⁻¹. For an additional set of 14 organosulfur compounds, it was shown that the mean absolute deviations between calculated and group additively modeled standard entropies and heat capacities are restricted to 4 and 2 J mol⁻¹ K⁻¹, respectively. As an alternative to Benson GAVs, 26 new hydrogen-bond increments are reported, which can also be useful for the prediction of radical thermochemistry.
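The group-additivity estimate itself is a simple sum of group contributions; a sketch with made-up GAVs (the paper tabulates 93 real values) for an ethanethiol-like assembly:

```python
# Benson group additivity: a molecule's standard enthalpy of formation is the
# sum of the contributions of its constituent groups. The GAVs below are
# hypothetical placeholders, not the paper's fitted values.
GAV_DFH298 = {                 # kJ/mol, hypothetical
    "C-(C)(H)3": -42.2,        # terminal methyl group
    "C-(C)(S)(H)2": -10.1,     # methylene bonded to carbon and sulfur
    "S-(C)(H)": 18.0,          # thiol sulfur
}

# CH3-CH2-SH decomposes into one group of each kind.
groups = ["C-(C)(H)3", "C-(C)(S)(H)2", "S-(C)(H)"]
dfh298 = sum(GAV_DFH298[g] for g in groups)
print(f"estimated dfH(298 K) = {dfh298:.1f} kJ/mol")
```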
Long-range Ising model for credit portfolios with heterogeneous credit exposures
NASA Astrophysics Data System (ADS)
Kato, Kensuke
2016-11-01
We propose the finite-size long-range Ising model as a model for heterogeneous credit portfolios held by a financial institution, from the viewpoint of econophysics. The model expresses the heterogeneity of the default probability and the default correlation by dividing a credit portfolio into multiple sectors characterized by credit rating and industry. The model also expresses the heterogeneity of the credit exposure, which is difficult to evaluate analytically, by applying the replica exchange Monte Carlo method to numerically calculate the loss distribution. To analyze the characteristics of the loss distribution for credit portfolios with heterogeneous credit exposures, we apply this model to various credit portfolios and evaluate credit risk. As a result, we show that the tail of the loss distribution calculated by this model has characteristics that are different from the tail of the loss distribution of the standard models used in credit risk modeling. We also show that there is a possibility of different evaluations of credit risk according to the pattern of heterogeneity.
Global ozone and air quality: a multi-model assessment of risks to human health and crops
NASA Astrophysics Data System (ADS)
Ellingsen, K.; Gauss, M.; van Dingenen, R.; Dentener, F. J.; Emberson, L.; Fiore, A. M.; Schultz, M. G.; Stevenson, D. S.; Ashmore, M. R.; Atherton, C. S.; Bergmann, D. J.; Bey, I.; Butler, T.; Drevet, J.; Eskes, H.; Hauglustaine, D. A.; Isaksen, I. S. A.; Horowitz, L. W.; Krol, M.; Lamarque, J. F.; Lawrence, M. G.; van Noije, T.; Pyle, J.; Rast, S.; Rodriguez, J.; Savage, N.; Strahan, S.; Sudo, K.; Szopa, S.; Wild, O.
2008-02-01
Within ACCENT, a European Network of Excellence, eighteen atmospheric models from the U.S., Europe, and Japan calculated present (2000) and future (2030) concentrations of ozone at the Earth's surface with hourly temporal resolution. Comparison of model results with surface ozone measurements in 14 world regions indicates that levels and seasonality of surface ozone in North America and Europe are characterized well by global models, with annual average biases typically within 5-10 nmol/mol. However, comparison with rather sparse observations over some regions suggest that most models overestimate annual ozone by 15-20 nmol/mol in some locations. Two scenarios from the International Institute for Applied Systems Analysis (IIASA) and one from the Intergovernmental Panel on Climate Change Special Report on Emissions Scenarios (IPCC SRES) have been implemented in the models. This study focuses on changes in near-surface ozone and their effects on human health and vegetation. Different indices and air quality standards are used to characterise air quality. We show that often the calculated changes in the different indices are closely inter-related. Indices using lower thresholds are more consistent between the models, and are recommended for global model analysis. Our analysis indicates that currently about two-thirds of the regions considered do not meet health air quality standards, whereas only 2-4 regions remain below the threshold. Calculated air quality exceedances show moderate deterioration by 2030 if current emissions legislation is followed and slight improvements if current emissions reduction technology is used optimally. For the "business as usual" scenario severe air quality problems are predicted. We show that model simulations of air quality indices are particularly sensitive to how well ozone is represented, and improved accuracy is needed for future projections. Additional measurements are needed to allow a more quantitative assessment of the risks to human health and vegetation from changing levels of surface ozone.
Calibrant-Free Analyte Quantitation via a Variable Velocity Flow Cell.
Beck, Jason G; Skuratovsky, Aleksander; Granger, Michael C; Porter, Marc D
2017-01-17
In this paper, we describe a novel method for analyte quantitation that does not rely on calibrants, internal standards, or calibration curves but, rather, leverages the relationship between disparate and predictable surface-directed analyte flux to an array of sensing addresses and a measured resultant signal. To reduce this concept to practice, we fabricated two flow cells such that the mean linear fluid velocity, U, was varied systematically over an array of electrodes positioned along the flow axis. This resulted in a predictable variation of the address-directed flux of a redox analyte, ferrocenedimethanol (FDM). The resultant limiting currents measured at a series of these electrodes, and accurately described by a convective-diffusive transport model, provided a means to calculate an "unknown" concentration without the use of calibrants, internal standards, or a calibration curve. Furthermore, the experiment and concentration calculation only takes minutes to perform. Deviation in calculated FDM concentrations from true values was minimized to less than 0.5% when empirically derived values of U were employed.
NASA Astrophysics Data System (ADS)
Wada, Kodai; Tomita, Koji; Takashiri, Masayuki
2018-06-01
The thermoelectric properties of bismuth telluride (Bi2Te3) nanoplate thin films were estimated using combined infrared spectroscopy and first-principles calculation, followed by comparing the estimated properties with those obtained using the standard electrical probing method. Hexagonal single-crystalline Bi2Te3 nanoplates were first prepared using solvothermal synthesis, followed by preparing Bi2Te3 nanoplate thin films using the drop-casting technique. The nanoplates were joined by thermally annealing them at 250 °C in Ar (95%)–H2 (5%) gas (atmospheric pressure). The electronic transport properties were estimated by infrared spectroscopy using the Drude model, with the effective mass being determined from the band structure using first-principles calculations based on density functional theory. The electrical conductivity and Seebeck coefficient obtained using the combined analysis were higher than those obtained using the standard electrical probing method, probably because the contact resistance between the nanoplates was excluded from the estimation procedure of the combined analysis method.
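The Drude step of such a combined analysis amounts to converting fitted carrier parameters into a conductivity; a sketch with illustrative numbers rather than the paper's fitted values:

```python
# Drude model: sigma = n e^2 tau / m*, with the effective mass ratio taken
# from a band-structure calculation. All numbers below are illustrative.
E = 1.602176634e-19      # C, elementary charge
M_E = 9.1093837015e-31   # kg, electron rest mass

def drude_sigma(carrier_density_m3: float, tau_s: float, m_eff_ratio: float) -> float:
    return carrier_density_m3 * E**2 * tau_s / (m_eff_ratio * M_E)

sigma = drude_sigma(3e25, 3e-14, 1.06)   # n ~ 3e19 cm^-3, assumed values
print(f"sigma ~ {sigma:.2e} S/m")
```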
A simple model of low-scale direct gauge mediation
NASA Astrophysics Data System (ADS)
Csáki, Csaba; Shirman, Yuri; Terning, John
2007-05-01
We construct a calculable model of low-energy direct gauge mediation making use of the metastable supersymmetry breaking vacua recently discovered by Intriligator, Seiberg and Shih. The standard model gauge group is a subgroup of the global symmetries of the SUSY breaking sector and messengers play an essential role in dynamical SUSY breaking: they are composites of a confining gauge theory, and the holomorphic scalar messenger mass appears as a consequence of the confining dynamics. The SUSY breaking scale is around 100 TeV nevertheless the model is calculable. The minimal non-renormalizable coupling of the Higgs to the DSB sector leads in a simple way to a μ-term, while the B-term arises at two-loop order resulting in a moderately large tan β. A novel feature of this class of models is that some particles from the dynamical SUSY breaking sector may be accessible at the LHC.
A Regression Framework for Effect Size Assessments in Longitudinal Modeling of Group Differences
Feingold, Alan
2013-01-01
The use of growth modeling analysis (GMA)--particularly multilevel analysis and latent growth modeling--to test the significance of intervention effects has increased exponentially in prevention science, clinical psychology, and psychiatry over the past 15 years. Model-based effect sizes for differences in means between two independent groups in GMA can be expressed in the same metric (Cohen’s d) commonly used in classical analysis and meta-analysis. This article first reviews conceptual issues regarding calculation of d for findings from GMA and then introduces an integrative framework for effect size assessments that subsumes GMA. The new approach uses the structure of the linear regression model, from which effect sizes for findings from diverse cross-sectional and longitudinal analyses can be calculated with familiar statistics, such as the regression coefficient, the standard deviation of the dependent measure, and study duration. PMID:23956615
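In this framework, the GMA effect size reduces to the treatment-by-time regression coefficient scaled by study duration and the raw outcome SD; a sketch of that arithmetic (function and argument names are ours):

```python
# Model-based d for growth modeling analysis: the group difference in slopes,
# accumulated over the study duration, in raw-SD units of the outcome.
def gma_effect_size(slope_diff_per_wave: float, n_waves: int, sd_raw: float) -> float:
    duration = n_waves - 1          # growth accrues across measurement waves
    return slope_diff_per_wave * duration / sd_raw

# e.g. groups diverge 0.25 points per wave across 5 waves, outcome SD = 2.0
print(f"d = {gma_effect_size(0.25, 5, 2.0):.2f}")   # d = 0.50
```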
Repopulation Kinetics and the Linear-Quadratic Model
NASA Astrophysics Data System (ADS)
O'Rourke, S. F. C.; McAneney, H.; Starrett, C.; O'Sullivan, J. M.
2009-08-01
The standard Linear-Quadratic (LQ) survival model for radiotherapy is used to investigate different schedules of radiation treatment planning for advanced head and neck cancer. We explore how these treatment protocols may be affected by different tumour repopulation kinetics between treatments. The laws for tumour cell repopulation include the logistic and Gompertz models, extending the work of Wheldon et al. [1], which was concerned with the case of exponential repopulation between treatments. Treatment schedules investigated include standardized and accelerated fractionation. Calculations based on the present work show that, even with growth laws scaled to ensure that the repopulation kinetics for advanced head and neck cancer are comparable, variation in the survival fraction of up to orders of magnitude emerged. Calculations show that application of the Gompertz model results in a significantly poorer prognosis for tumour eradication. Gaps in treatment also highlight the differences in the LQ model when the effect of repopulation kinetics is included.
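A minimal sketch of LQ survival with inter-fraction repopulation, using generic head-and-neck parameter values rather than the paper's; the exponential regrowth factor is the term the logistic or Gompertz laws would replace:

```python
import math

# LQ cell kill per fraction plus exponential repopulation over the treatment
# course; parameter values are generic, not taken from the paper.
ALPHA, BETA = 0.3, 0.03      # Gy^-1, Gy^-2

def surviving_fraction(dose_per_fx, n_fx, doubling_time_d, treatment_days):
    lq = math.exp(-n_fx * (ALPHA * dose_per_fx + BETA * dose_per_fx**2))
    regrow = 2.0 ** (treatment_days / doubling_time_d)   # exponential repopulation
    return lq * regrow

# Standard 2 Gy x 35 over ~7 weeks vs an accelerated course of the same dose:
print(surviving_fraction(2.0, 35, doubling_time_d=4.0, treatment_days=46))
print(surviving_fraction(2.0, 35, doubling_time_d=4.0, treatment_days=32))
```

Shortening the course by two weeks lowers the surviving fraction by roughly an order of magnitude here, which is the kind of schedule sensitivity the abstract describes.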
Observed light yield of scintillation pixels: Extending the two-ray model
NASA Astrophysics Data System (ADS)
Kantorski, Igor; Jurkowski, Jacek; Drozdowski, Winicjusz
2016-09-01
In this paper we propose an extended, two-dimensional model describing the propagation of scintillation photons inside a cuboid crystal until they reach a PMT window. In the simplest approach the model considers two main sources of light loss: standard absorption obeying the classical Lambert-Beer law, and the non-ideal reflectivity of the "mummy" covering formed by several layers of Teflon tape wrapping the sample. Results of the model calculations are juxtaposed with experimental data as well as with predictions of an earlier, one-dimensional model.
Histidine in Continuum Electrostatics Protonation State Calculations
Couch, Vernon; Stuchebruckhov, Alexei
2014-01-01
A modification to the standard continuum electrostatics approach to calculating protein pKas, which allows for the decoupling of histidine tautomers within a two-state model, is presented. Histidine, with four intrinsically coupled protonation states, cannot be easily incorporated into a two-state formalism because the interaction between the two protonatable sites of the imidazole ring is not purely electrostatic. The presented treatment, based on a single approximation of the interrelation between histidine's charge states, allows for a natural separation of the two protonatable sites associated with the imidazole ring as well as the inclusion of all protonation states within the calculation. PMID:22072521
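In a two-state model, the protonation probability of a site follows from the Boltzmann weight of the free-energy gap between the two states, equivalently the Henderson-Hasselbalch relation; a generic sketch, not the paper's code:

```python
import math

# Two-state protonation occupancy: the free energy of deprotonation at a given
# pH, relative to the site's pKa, sets the Boltzmann-weighted population.
LN10 = math.log(10.0)

def protonated_fraction(ph: float, pka: float) -> float:
    dG_kT = LN10 * (ph - pka)              # deprotonation free energy in units of kT
    return 1.0 / (1.0 + math.exp(dG_kT))   # equals 1 / (1 + 10**(pH - pKa))

print(f"{protonated_fraction(ph=7.0, pka=6.5):.2f}")  # ~0.24 for a histidine-like site
```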
Calculation of recirculating flow behind flame-holders
NASA Astrophysics Data System (ADS)
Zeng, Q.; Sheng, Y.; Zhou, Q.
1985-10-01
The applicability of the standard k-epsilon turbulence model to numerical calculation of recirculating flow is discussed. Many computations of recirculating flows behind bluff bodies used as flame-holders in aeroengine afterburners have been completed. A blocking-off method for treating the inclined walls of the flame-holder gives good results. In isothermal recirculating flows the flame-holder wall is assumed to be thermally isolated; it is therefore possible to remove the inactive zone from the calculation domain in programming to save computer time. The computation for a V-shaped flame-holder exhibits an interesting phenomenon: the recirculation zone extends into the cavity of the flame-holder.
CFD Modeling of Flow, Temperature, and Concentration Fields in a Pilot-Scale Rotary Hearth Furnace
NASA Astrophysics Data System (ADS)
Liu, Ying; Su, Fu-Yong; Wen, Zhi; Li, Zhi; Yong, Hai-Quan; Feng, Xiao-Hong
2014-01-01
A three-dimensional mathematical model for simulating the flow, temperature, and concentration fields in a pilot-scale rotary hearth furnace (RHF) has been developed using the commercial computational fluid dynamics software FLUENT. The layer of composite pellets on the hearth is treated as a porous media layer with a CO source and an energy sink calculated by an independent mathematical model. User-defined functions are developed and linked to FLUENT to handle the reduction process of the layer of composite pellets. The standard k-ɛ turbulence model in combination with standard wall functions is used for modeling the gas flow. Turbulence-chemistry interaction is taken into account through the eddy-dissipation model. The discrete ordinates model is used for modeling radiative heat transfer. A comparison is made between the predictions of the present model and data from a test of the pilot-scale RHF, and reasonable agreement is found. Finally, the flow, temperature, and CO concentration fields in the furnace are investigated with the model.
Radiative transfer code SHARM for atmospheric and terrestrial applications
NASA Astrophysics Data System (ADS)
Lyapustin, A. I.
2005-12-01
An overview of the publicly available radiative transfer Spherical Harmonics code (SHARM) is presented. SHARM is a rigorous code, as accurate as the Discrete Ordinate Radiative Transfer (DISORT) code, yet faster. It performs simultaneous calculations for different solar zenith angles, view zenith angles, and view azimuths and allows the user to make multiwavelength calculations in one run. The Δ-M method is implemented for calculations with highly anisotropic phase functions. Rayleigh scattering is automatically included as a function of wavelength, surface elevation, and the selected vertical profile of one of the standard atmospheric models. The current version of the SHARM code does not explicitly include atmospheric gaseous absorption, which should be provided by the user. The SHARM code has several built-in models of the bidirectional reflectance of land and wind-ruffled water surfaces that are most widely used in research and satellite data processing. A modification of the SHARM code with the built-in Mie algorithm designed for calculations with spherical aerosols is also described.
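To make the wavelength dependence of the built-in Rayleigh scattering concrete, here is a minimal sketch using the widely cited Hansen & Travis (1974) fit for Rayleigh optical depth, with a simple exponential pressure scaling for surface elevation. This is an illustration of the physics, not SHARM's internal parameterization:

```python
import math

def rayleigh_optical_depth(wavelength_um, elevation_m=0.0):
    """Approximate Rayleigh optical depth at a given wavelength and surface
    elevation (Hansen & Travis 1974 fit at sea-level pressure, scaled with
    an ~8 km pressure scale height)."""
    lam = wavelength_um
    tau0 = 0.008569 * lam**-4 * (1.0 + 0.0113 * lam**-2 + 0.00013 * lam**-4)
    return tau0 * math.exp(-elevation_m / 8000.0)   # pressure scaling

for lam in (0.35, 0.55, 0.87):
    print(f"{lam:.2f} um: tau_Rayleigh = {rayleigh_optical_depth(lam):.4f}")
```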
NASA Astrophysics Data System (ADS)
Johnson, J. N.; Dick, J. J.
2000-04-01
Data are presented for the spall fracture of Estane. Estane has been studied previously to determine its low-pressure Hugoniot properties and high-rate viscoelastic response [J.N. Johnson, J.J. Dick and R.S. Hixson, J. Appl. Phys. 84, 2520-2529, 1998]. These results are used in the current analysis of spall fracture data for this material. Calculations are carried out with the characteristics code CHARADE and the finite-difference code FIDO. Comparison of model calculations with experimental data shows the onset of spall failure to occur when the longitudinal stress reaches approximately 130 MPa in tension. At this point complete material separation does not occur; rather, the tensile strength of the material falls to approximately one-half its value at onset, as determined by CHARADE calculations. Finite-difference calculations indicate that the standard void-growth model (used previously to describe spall in metals) gives a reasonable approximation to the dynamic failure process in Estane. [Research supported by the USDOE under contract W-7405-ENG-36.]
Modeling Zone-3 Protection with Generic Relay Models for Dynamic Contingency Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Qiuhua; Vyakaranam, Bharat GNVSR; Diao, Ruisheng
This paper presents a cohesive approach for calculating and coordinating the settings of multiple zone-3 protections for dynamic contingency analysis. The zone-3 protections are represented by generic distance relay models. A two-step approach for determining zone-3 relay settings is proposed. The first step is to calculate the settings, particularly the reach, of each zone-3 relay individually by iteratively running line open-end fault short-circuit analysis; a blinder is also employed and properly set to meet the industry standard under extreme loading conditions. The second step is to systematically coordinate the protection settings of the zone-3 relays. The main objective of this coordination step is to address over-reaching issues. We have developed a tool to automate the proposed approach and generate the settings of all distance relays in a PSS/E dyr format file. The calculated zone-3 settings have been tested on a modified IEEE 300 system using a dynamic contingency analysis tool (DCAT).
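As a rough illustration of the blinder check in the first step (not the paper's tool), the sketch below tests a candidate zone-3 reach against the minimum load impedance, approximated by the common rule Zload = V_LL^2 / S_max. The voltage, loading, reach, and margin values are hypothetical:

```python
def zone3_blinder_ok(v_kv_ll, s_mva_max, reach_ohms, margin=0.85):
    """Check a zone-3 reach against load encroachment: the relay should not
    pick up on the minimum load impedance under extreme loading, here
    approximated as Zload = V_LL^2 / S_max with a safety margin."""
    z_load_min = v_kv_ll**2 / s_mva_max      # primary ohms (kV^2 / MVA)
    return reach_ohms < margin * z_load_min, z_load_min

ok, z_load = zone3_blinder_ok(v_kv_ll=230.0, s_mva_max=400.0, reach_ohms=95.0)
print(f"min load impedance = {z_load:.1f} ohm, setting acceptable: {ok}")
```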
30 CFR 250.302 - Definitions concerning air quality.
Code of Federal Regulations, 2011 CFR
2011-07-01
... and 250.304 of this part: Air pollutant means any combination of agents for which the Environmental... shown by monitored data or which is calculated by air quality modeling (or other methods determined by... standards established by EPA. Best available control technology (BACT) means an emission limitation based on...
Extraction of Coastlines with Fuzzy Approach Using SENTINEL-1 SAR Image
NASA Astrophysics Data System (ADS)
Demir, N.; Kaynarca, M.; Oy, S.
2016-06-01
Coastlines are important features for water resources, sea products, energy resources, etc. Coastlines change dynamically, so automated methods are necessary for analysing and detecting changes along them. In this study, a Sentinel-1 C-band SAR image has been used to extract the coastline with a fuzzy logic approach. The SAR image has VH polarisation and 10 x 10 m spatial resolution, and covers a 57 sq km area south-east of Puerto Rico. Radiometric calibration is applied to reduce atmospheric and orbit errors, and a speckle filter is used to reduce noise. The image is then terrain-corrected using the SRTM digital surface model. Classification of SAR imagery is a challenging task since SAR and optical sensors have very different properties; even between different bands of SAR sensors the images look very different, so classification of SAR images is difficult with traditional unsupervised methods. In this study, a fuzzy approach has been applied to distinguish coastal pixels from land-surface pixels. The standard deviation and the mean and median values are calculated for use as parameters in the fuzzy approach. The Mean-standard-deviation (MS) Large membership function is used because large amounts of land and ocean pixels dominate the SAR image with large mean and standard deviation values. The pixel values are multiplied by 1000 to simplify the calculations. The mean is calculated as 23 and the standard deviation as 12 for the whole image. The multiplier parameters are selected as a: 0.58, b: 0.05 to maximize the land-surface membership. The result is evaluated against airborne LIDAR data, for the areas where a LIDAR dataset is available, and against a manually digitized coastline. Laser points below 0.5 m are classified as ocean points, and the 3D alpha-shapes algorithm is used to detect coastline points from the LIDAR data. Minimum distances are calculated between the LIDAR coastline points and the extracted coastline; the mean is 5.82 m, the standard deviation 5.83 m, and the median 4.08 m. Secondly, the extracted coastline is evaluated against lines manually created on the SAR image. Both lines are converted to dense points at 1 m intervals, and the closest distances are calculated between the points of the extracted and manually created coastlines; the mean is 5.23 m, the standard deviation 4.52 m, and the median 4.13 m. For both quality assessment approaches, the evaluation values are within the accuracy of the SAR data used.
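The MS Large membership function referenced above is commonly defined (e.g., in GIS toolboxes) as mu(x) = 1 - b*s / (x - a*m + b*s) for x > a*m and 0 otherwise, where m is the mean and s the standard deviation. A small sketch with the paper's reported statistics (m = 23, s = 12, a = 0.58, b = 0.05); the sample pixel values are hypothetical:

```python
def ms_large(x, mean, std, a=0.58, b=0.05):
    """Mean-standard-deviation (MS) Large membership: values well above
    a*mean approach 1; values at or below a*mean get 0."""
    if x <= a * mean:
        return 0.0
    return 1.0 - (b * std) / (x - a * mean + b * std)

mean, std = 23.0, 12.0   # whole-image statistics from the paper (x1000 scaling)
for dn in (10, 14, 20, 40, 80):
    print(f"pixel value {dn:3d}: membership = {ms_large(dn, mean, std):.3f}")
```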
Reliability of engineering methods of assessment the critical buckling load of steel beams
NASA Astrophysics Data System (ADS)
Rzeszut, Katarzyna; Folta, Wiktor; Garstecki, Andrzej
2018-01-01
In this paper the reliability assessment of the buckling resistance of steel beams is presented. A number of parameters are considered, such as the boundary conditions, the section height-to-width ratio, the thickness, and the span. The examples are solved using FEM procedures and formulas proposed in the literature and standards. In the numerical models the following parameters are investigated: support conditions, mesh size, load conditions, and steel grade. The numerical results are compared with approximate solutions calculated according to the standard formulas. It was observed that for high-slenderness sections the deformation of the cross-section had to be described by the following modes: longitudinal and transverse displacement, warping, rotation, and distortion of the cross-section shape. In this case we face an interactive buckling problem. Unfortunately, neither the EN Standards nor the subject literature gives closed-form formulas for these problems. For this reason the reliability of the critical bending moment calculations is discussed.
NASA Astrophysics Data System (ADS)
Reidy, B.; Webb, J.; Misselbrook, T. H.; Menzi, H.; Luesink, H. H.; Hutchings, N. J.; Eurich-Menden, B.; Döhler, H.; Dämmgen, U.
Six N-flow models, used to calculate national ammonia (NH3) emissions from agriculture in different European countries, were compared using standard data sets. Scenarios for litter-based systems were run separately for beef cattle and for broilers, with three different levels of model standardisation: (a) standardized inputs to all models (FF scenario); (b) standard N excretion, but national values for emission factors (EFs) (FN scenario); (c) national values for N excretion and EFs (NN scenario). Results of the FF scenario for beef cattle produced very similar estimates of total losses of total ammoniacal-N (TAN) (±6% of the mean total), but large differences in NH3 emissions (±24% of the mean). These differences arose from the models' different approaches to TAN immobilization in litter, other N losses, and mineralization. As a result of those differences, estimates of TAN available at spreading differed by a factor of almost 3. Results of the FF scenario for broilers produced a range of estimates of total changes in TAN (±9% of the mean total), and larger differences in the estimates of NH3 emissions (±17% of the mean). The different approaches among the models to TAN immobilization, other N losses, and mineralization produced estimates of TAN available at spreading which differed by a factor of almost 1.7. The differences in estimates of NH3 emissions decreased as estimates of immobilization and other N losses increased. Since immobilization and denitrification also depend on the C:N ratio in manure, there would be advantages to including C flows in mass-flow models. This would also provide an integrated model for the estimation of emissions of methane, non-methane VOCs, and carbon dioxide. Estimation of these would also enable an estimate of mass loss, calculation of the N and TAN concentrations in litter-based manures, and further validation of model outputs.
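A minimal sketch of the mass-flow (TAN) bookkeeping such models perform, with TAN passed through housing, storage, and spreading stages; all emission factors and the immobilization fraction here are hypothetical placeholders, not values from any of the six models:

```python
def nh3_emissions(tan_excreted_kg, ef_housing=0.25, immobilized=0.10,
                  ef_storage=0.20, ef_spreading=0.40):
    """Litter-based N-flow sketch: each stage emits a fraction of the TAN
    entering it, and some TAN is immobilized into the litter in housing."""
    emitted = {}
    tan = tan_excreted_kg
    emitted["housing"] = tan * ef_housing
    tan -= emitted["housing"] + tan_excreted_kg * immobilized
    emitted["storage"] = tan * ef_storage
    tan -= emitted["storage"]                   # TAN available at spreading
    emitted["spreading"] = tan * ef_spreading
    return emitted, tan - emitted["spreading"]  # residual TAN to soil

stages, residual = nh3_emissions(100.0)
print(stages, f"TAN remaining after spreading: {residual:.1f} kg")
```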
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dasgupta-Schubert, N.; Reyes, M.A.
2007-11-15
The predictive accuracy of the generalized liquid drop model (GLDM) formula for alpha-decay half-lives has been investigated in a detailed manner and a variant of the formula with improved coefficients is proposed. The method employs the experimental alpha half-lives of the well-known alpha standards to obtain the coefficients of the analytical formula using the experimental Qα values (the DSR-E formula), as well as the finite range droplet model (FRDM) derived Qα values (the FRDM-FRDM formula). The predictive accuracy of these formulae was checked against the experimental alpha half-lives of an independent set of nuclei (TEST) that span approximately the same Z, A region as the standards and possess reliable alpha spectroscopic data; the DSR-E formula yielded good results but the FRDM-FRDM formula did not. The two formulae were used to obtain the alpha half-lives of superheavy elements (SHE) and heavy nuclides, where the relative accuracy was found to be markedly improved for the FRDM-FRDM formula, which corroborates the appropriateness of the FRDM masses and the GLDM prescription for high Z, A nuclides. Further improvement resulted, especially for the FRDM-FRDM formula, after a simple linear optimization over the calculated and experimental half-lives of TEST was used to re-calculate the half-lives of the SHE and heavy nuclides. The advantage of this optimization is that it requires no re-calculation of the coefficients of the basic DSR-E or FRDM-FRDM formulae. The half-lives for 324 medium-mass to superheavy alpha-decaying nuclides, calculated using these formulae, and the comparison with experimental half-lives are presented.
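The abstract does not give the DSR-E coefficients, but the general fitting procedure can be illustrated with the classic Viola-Seaborg form log10 T1/2 = (a*Z + b)/sqrt(Qα) + c*Z + d, fit by least squares to a handful of alpha standards. The (Z, Qα, T1/2) values below are approximate literature numbers used only for illustration:

```python
import numpy as np

def viola_seaborg_design(Z, Q):
    """Design matrix for log10(T1/2) = (a*Z + b)/sqrt(Q) + c*Z + d."""
    Z, Q = np.asarray(Z, float), np.asarray(Q, float)
    return np.column_stack([Z / np.sqrt(Q), 1.0 / np.sqrt(Q),
                            Z, np.ones_like(Z)])

# approximate alpha standards: (Z, Q_alpha [MeV], log10 T1/2 [s])
Z  = [84, 86, 88, 90, 92, 94]                    # 212Po ... 239Pu
Q  = [8.95, 5.59, 4.87, 4.08, 4.27, 5.24]
lt = [-6.52, 5.52, 10.70, 17.65, 17.15, 11.88]

coef, *_ = np.linalg.lstsq(viola_seaborg_design(Z, Q), np.array(lt), rcond=None)
print("fitted (a, b, c, d) =", np.round(coef, 3))
pred = viola_seaborg_design(Z, Q) @ coef
print("rms residual =", np.round(np.sqrt(np.mean((pred - np.array(lt))**2)), 3))
```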
TRAP/SEE Code Users Manual for Predicting Trapped Radiation Environments
NASA Technical Reports Server (NTRS)
Armstrong, T. W.; Colborn, B. L.
2000-01-01
TRAP/SEE is a PC-based computer code with a user-friendly interface which predicts the ionizing radiation exposure of spacecraft having orbits in the Earth's trapped radiation belts. The code incorporates the standard AP8 and AE8 trapped proton and electron models but also allows application of an improved database interpolation method. The code treats low-Earth as well as highly-elliptical Earth orbits, taking into account trajectory perturbations due to gravitational forces from the Moon and Sun, atmospheric drag, and solar radiation pressure. Orbit-average spectra, peak spectra per orbit, and instantaneous spectra at points along the orbit trajectory are calculated. Described in this report are the features, models, model limitations and uncertainties, input and output descriptions, and example calculations and applications for the TRAP/SEE code.
Stereolithographic models of the solvent-accessible surface of biopolymers. Topical report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bradford, J.; Noel, P.; Emery, J.D.
1996-11-01
The solvent-accessible surfaces of several biopolymers were calculated. As part of the DOE education outreach activity, two high school students participated in this project. Computer files containing sets of triangles were produced. These files are called stl files and follow the ISO 9001 standard. They have been written onto CD-ROMs for distribution to American companies. Stereolithographic models were made of some of them to ensure that the computer calculations were done correctly. Stereolithographic models were made of interleukin 1β (IL-1β), three antibodies (an anti-p-azobenzene arsonate, an anti-Brucella A cell wall polysaccharide, and an HIV neutralizing antibody), a triple-stranded coiled coil, and an engrailed homeodomain. The biopolymers and their files are also described.
Bimodality emerges from transport model calculations of heavy ion collisions at intermediate energy
NASA Astrophysics Data System (ADS)
Mallik, S.; Das Gupta, S.; Chaudhuri, G.
2016-04-01
This work is a continuation of our effort [S. Mallik, S. Das Gupta, and G. Chaudhuri, Phys. Rev. C 91, 034616 (2015)], 10.1103/PhysRevC.91.034616, to examine whether signatures of a phase transition can be extracted from transport model calculations of heavy ion collisions at intermediate energy. A signature of first-order phase transition is the appearance of a bimodal distribution in Pm(k) in finite systems. Here Pm(k) is the probability that the maximum of the multiplicity distribution occurs at mass number k. Using a well-known model for event generation [Boltzmann-Uehling-Uhlenbeck (BUU) plus fluctuation], we study two cases of central collision: mass 40 on mass 40 and mass 120 on mass 120. Bimodality is seen in both cases. The results are quite similar to those obtained in statistical model calculations. An intriguing feature is seen: at the energy where bimodality occurs, other phase-transition-like signatures appear. There are breaks in certain first-order derivatives. We then examine whether such breaks appear in standard BUU calculations without fluctuations. They do. The implication is interesting: if a first-order phase transition occurs, it may be possible to recognize that from ordinary BUU calculations. Probably the reason this has not been seen already is that this aspect was not investigated before.
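Taking the definition of Pm(k) at face value, a minimal estimator from simulated events is sketched below: for each event, build the multiplicity distribution n(A) and record the mass number at which it peaks. The toy events are hypothetical; real BUU output would supply the fragment lists:

```python
from collections import Counter

def p_max(events):
    """Given a list of events, each a list of fragment mass numbers,
    estimate Pm(k): the probability that the event's multiplicity
    distribution n(A) peaks at mass number k."""
    peaks = Counter()
    for fragments in events:
        n_of_a = Counter(fragments)          # multiplicity vs mass number
        k = max(n_of_a, key=n_of_a.get)      # mass where n(A) is maximal
        peaks[k] += 1
    total = sum(peaks.values())
    return {k: count / total for k, count in sorted(peaks.items())}

# toy events; a bimodal Pm(k) would show two well-separated maxima
events = [[1, 1, 2, 36], [1, 2, 3, 3], [1, 1, 1, 38], [2, 2, 4, 40]]
print(p_max(events))
```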
Dose conversion coefficients for neutron exposure to the lens of the human eye.
Manger, R P; Bellamy, M B; Eckerman, K F
2012-03-01
Dose conversion coefficients for the lens of the human eye have been calculated for neutron exposure at energies from 1 × 10^-9 to 20 MeV and several standard orientations: anterior-to-posterior, rotational, and right lateral. MCNPX version 2.6.0, a Monte Carlo-based particle transport package, was used to determine the energy deposited in the lens of the eye. The human eyeball model was updated by partitioning the lens into sensitive and insensitive volumes, since the anterior portion (sensitive volume) of the lens is more radiosensitive and prone to cataract formation. The updated eye model was used with the adult UF-ORNL mathematical phantom in the MCNPX transport calculations.
Ablation effects in oxygen-lead fragmentation at 2.1 GeV/nucleon
NASA Technical Reports Server (NTRS)
Townsend, L. W.
1984-01-01
The mechanism of particle evaporation was used to examine ablation effects in the fragmentation of 2.1 GeV/nucleon oxygen nuclei by lead targets. Following the initial abrasion process, the excited projectile prefragment is assumed to statistically decay in a manner analogous to that of a compound nucleus. The decay probabilities for the various particle emission channels are calculated using the EVAP-4 Monte Carlo computer program. The input excitation energy spectrum for the prefragment is estimated from the geometric "clean cut" abrasion-ablation model. Isotope production cross sections are calculated and compared with experimental data and with the predictions of the standard geometric abrasion-ablation fragmentation model.
BIG BANG NUCLEOSYNTHESIS WITH A NON-MAXWELLIAN DISTRIBUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertulani, C. A.; Fuqua, J.; Hussein, M. S.
The abundances of light elements based on the big bang nucleosynthesis model are calculated using the Tsallis non-extensive statistics. The impact of varying the non-extensive parameter q away from unity is compared to observations and to the abundance yields from the standard big bang model. We find large differences between the reaction rates and the abundances of light elements calculated with the extensive and the non-extensive statistics. We find that the observations are consistent with a non-extensive parameter q = 1 (+0.05, -0.12), indicating that a large deviation from the Boltzmann-Gibbs statistics (q = 1) is highly unlikely.
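A minimal sketch of the non-extensive ingredient: the Tsallis weight replaces the Maxwell-Boltzmann factor exp(-E/kT) in the rate integrals and reduces to it as q -> 1. The temperature and energy grid below are illustrative, and no nuclear cross sections are included:

```python
import numpy as np

def tsallis_weight(E, kT, q):
    """Non-extensive (Tsallis) energy weight; reduces to the
    Maxwell-Boltzmann factor exp(-E/kT) as q -> 1."""
    if abs(q - 1.0) < 1e-8:
        return np.exp(-E / kT)
    base = np.maximum(1.0 - (1.0 - q) * E / kT, 0.0)   # energy cutoff, q < 1
    return base ** (1.0 / (1.0 - q))

E = np.linspace(1e-6, 2.0, 2000)     # MeV
dE = E[1] - E[0]
kT = 0.086                           # roughly T = 10^9 K in MeV
for q in (0.9, 1.0, 1.1):
    f = np.sqrt(E) * tsallis_weight(E, kT, q)   # include phase-space factor
    f /= f.sum() * dE                            # normalize the distribution
    print(f"q = {q}: mean energy = {(E * f).sum() * dE:.4f} MeV")
```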
Variance of transionospheric VLF wave power absorption
NASA Astrophysics Data System (ADS)
Tao, X.; Bortnik, J.; Friedrich, M.
2010-07-01
To investigate the effects of D-region electron-density variance on wave power absorption, we calculate the power reduction of very low frequency (VLF) waves propagating through the ionosphere with a full wave method, using the standard ionospheric model IRI and in situ observational data. We first verify Helliwell's classic absorption curves using our full wave code. Then we show that the IRI model gives smaller overall wave absorption than Helliwell's. Using D-region electron densities measured by rockets during the past 60 years, we demonstrate that the power absorption of VLF waves is subject to large variance, even though Helliwell's absorption curves are within ±1 standard deviation of absorption values calculated from data. Finally, we use a subset of the rocket data that is more representative of the D region above middle- and low-latitude VLF wave transmitters and show that the average quiet-time wave absorption is smaller than Helliwell's by up to 100 dB at 20 kHz and 60 dB at 2 kHz, which would make the model-observation discrepancy shown by previous work even larger. This result suggests that additional processes may be needed to explain the discrepancy.
Absolute Spectrophotometric Calibration to 1% from the FUV through the near-IR
NASA Astrophysics Data System (ADS)
Finley, David
2005-07-01
We propose a significant improvement to the existing HST calibration. The current calibration is based on three primary DA white dwarf standards, GD 71, GD 153, and G 191-B2B. The standard fluxes are calculated using NLTE models, with effective temperatures and gravities that were derived from Balmer line fits using LTE models. We propose to improve the accuracy and internal consistency of the calibration by deriving corrected effective temperatures and gravities based on fitting the observed line profiles with updated NLTE models, and including the fit results from multiple STIS spectra rather than the (usually) 1 or 2 ground-based spectra used previously. We will also determine the fluxes for 5 new, fainter primary or secondary standards, extending the standard V magnitude faint limit from 13.4 to 16.5 and the wavelength coverage from 0.1 to 2.5 micron. The goal is to achieve an overall flux accuracy of 1%, which will be needed, for example, for the upcoming supernova survey missions to measure the equation of state of the dark energy that is accelerating the expansion of the universe.
NASA Astrophysics Data System (ADS)
Tian, C.; Weng, J.; Liu, Y.
2017-11-01
The convection heat transfer coefficient is one of the evaluation indexes of brake disc performance. Because results calculated with empirical formulas differ widely, the method used in this paper to calculate the convection heat transfer coefficient is a fluid-solid coupled simulation. A model including a brake disc, a car body, a bogie, and the flow field was built, meshed, and simulated in the software FLUENT, using the standard k-epsilon turbulence model and the energy model and taking the working condition of the brake disc into account. The coefficients of the various parts can be obtained with this method. The simulation results show that, at a speed of 160 km/h, the radiating ribs have the maximum convection heat transfer coefficient, 129.6 W/(m²·K), while the average coefficient of the whole disc is 100.4 W/(m²·K); the windward side of the ribs is a positive-pressure area and the leeward side a negative-pressure area, with a maximum pressure of 2663.53 Pa.
Extended optical model for fission
Sin, M.; Capote, R.; Herman, M. W.; ...
2016-03-07
A comprehensive formalism to calculate fission cross sections based on the extension of the optical model for fission is presented. It can be used for the description of nuclear reactions on actinides featuring multi-humped fission barriers with partial absorption in the wells and direct transmission through discrete and continuum fission channels. The formalism describes the gross fluctuations observed in the fission probability due to vibrational resonances, and can be easily implemented in existing statistical reaction model codes. The extended optical model for fission is applied to neutron-induced fission cross-section calculations on 234,235,238U and 239Pu targets. A triple-humped fission barrier is used for 234,235U(n,f), while a double-humped fission barrier is used for the 238U(n,f) and 239Pu(n,f) reactions, as predicted by theoretical barrier calculations. The impact of partial damping of class-II/III states, and of direct transmission through discrete and continuum fission channels, is shown to be critical for a proper description of the measured fission cross sections for the 234,235,238U(n,f) reactions. The 239Pu(n,f) reaction can be calculated in the complete damping approximation. Calculated cross sections for the 235,238U(n,f) and 239Pu(n,f) reactions agree within 3% with the corresponding cross sections derived within the Neutron Standards least-squares fit of available experimental data. Lastly, the extended optical model for fission can be used for both theoretical fission studies and nuclear data evaluation.
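For orientation, barrier transmission in such models is often built from Hill-Wheeler factors for parabolic humps; the sketch below combines two humps in the complete-damping limit, T = Ta*Tb/(Ta + Tb). The barrier heights and curvatures are hypothetical, and the paper's formalism (partial damping, discrete and continuum channels) is far more elaborate:

```python
import math

def hill_wheeler(E, V, hbar_omega):
    """Hill-Wheeler transmission through a parabolic barrier:
    T(E) = 1 / (1 + exp(2*pi*(V - E)/hbar_omega))."""
    return 1.0 / (1.0 + math.exp(2.0 * math.pi * (V - E) / hbar_omega))

def double_hump_complete_damping(E, barrier_a=(6.0, 0.8), barrier_b=(5.5, 0.6)):
    """Two humps in the complete damping limit (heights and curvatures
    in MeV; hypothetical values)."""
    Ta = hill_wheeler(E, *barrier_a)
    Tb = hill_wheeler(E, *barrier_b)
    return Ta * Tb / (Ta + Tb)

for E in (4.5, 5.5, 6.5):   # MeV
    print(f"E = {E} MeV: T_fission = {double_hump_complete_damping(E):.3e}")
```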
Cooper, Justin; Marx, Bernd; Buhl, Johannes; Hombach, Volker
2002-09-01
This paper investigates the minimum distance for a human body in the near field of a cellular telephone base station antenna at which there is compliance with the IEEE or ICNIRP threshold values for radio frequency electromagnetic energy absorption in the human body. First, local maximum specific absorption rates (SARs), measured and averaged over volumes equivalent to 1 and to 10 g tissue within the trunk region of a physical, liquid-filled shell phantom facing and irradiated by a typical GSM 900 base station antenna, were compared to corresponding calculated SAR values. The calculation used a homogeneous Visible Human body model in front of a simulated base station antenna of the same type. Both real and simulated base station antennas operated at 935 MHz. Antenna-body distances were between 1 and 65 cm. The agreement between measurements and calculations was excellent. This gave confidence in the subsequently calculated SAR values for the heterogeneous Visible Human model, for which each tissue was assigned the currently accepted values for permittivity and conductivity at 935 MHz. Calculated SAR values within the trunk of the body were found to be about double those for the homogeneous case. When the IEEE standard and the ICNIRP guidelines are both to be complied with, the local SAR averaged over 1 g tissue was found to be the determining parameter: the emitted antenna power that produces the maximum 1 g SAR specified in the IEEE standard is less than that needed to reach the ICNIRP threshold for the local SAR averaged over 10 g. For the GSM base station antenna investigated here, operating at 935 MHz with 40 W emitted power, the model indicates that the human body should not be closer to the antenna than 18 cm for controlled-environment exposure, or about 95 cm for uncontrolled-environment exposure. These safe distance limits are for SARs averaged over 1 g tissue. The corresponding safety distance limits under the ICNIRP guidelines for SAR taken over 10 g tissue are 5 cm for occupational exposure and about 75 cm for general-public exposure.
Mechanisms of Plasma Acceleration in Coronal Jets
NASA Astrophysics Data System (ADS)
Soto, N.; Reeves, K.; Savcheva, A. S.
2016-12-01
Jets are small explosions that occur frequently on the Sun, possibly driven by the local reconfiguration, or reconnection, of the magnetic field. There are two types of coronal jets: standard jets and blowout jets. The purpose of this project is to determine which mechanisms accelerate plasma in two different jets, one that occurred on January 17, 2015 on the disk of the Sun and another on October 24, 2015 at the limb. Two possible acceleration mechanisms are chromospheric evaporation and magnetic acceleration. Using SDO/AIA, Hinode/XRT, and IRIS data, we create height-time plots and calculate the velocities in each wavelength for both jets. We calculate the potential magnetic field of the jet and the general region around it to gain a more detailed understanding of its structure, and to determine whether the jet is likely to be a standard or a blowout jet. Finally, we calculate the magnetic field strength at different heights along the jet spire, and use differential emission measures to calculate the plasma density. From these two values we calculate the Alfven speed. When analyzing our results we look for certain patterns in the velocities. If the plasma in a jet is accelerated by chromospheric evaporation, we expect the velocities to increase as a function of temperature, which is what we observed in the October 24th jet. The magnetic models for this jet also show the Eiffel-Tower-shaped structure characteristic of standard jets, which tend to have plasma accelerated by this mechanism. On the other hand, if the acceleration mechanism were magnetic acceleration, we would expect the velocities to be similar regardless of temperature. For the January 17th jet, we saw that along the spire the velocities were approximately 200 km/s in all wavelengths, but the velocities of hot plasma detected at the base were closer to the Alfven speed, which was estimated to be about 2,000 km/s. These observations suggest that the plasma in the January 17th jet is magnetically accelerated. The magnetic model for this jet needs to be studied further using a NLFFF magnetic field model rather than just the potential magnetic field. This work was supported by the NSF-REU solar physics program at SAO, grant number AGS-1560313, and NASA Grant NNX15AF43G.
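The Alfven speed estimate mentioned above follows from v_A = B / sqrt(mu0 * rho). A minimal sketch for a hydrogen plasma; the field strength and electron density below are hypothetical values chosen to land near the quoted ~2,000 km/s:

```python
import math

MU0 = 4e-7 * math.pi          # vacuum permeability [H/m]
M_P = 1.6726e-27              # proton mass [kg]

def alfven_speed(B_gauss, n_e_cm3):
    """Alfven speed v_A = B / sqrt(mu0 * rho) for a hydrogen plasma,
    taking the mass density from the electron number density."""
    B = B_gauss * 1e-4                      # gauss -> tesla
    rho = n_e_cm3 * 1e6 * M_P               # kg/m^3
    return B / math.sqrt(MU0 * rho) / 1e3   # km/s

# hypothetical jet-base values: ~30 G field, n_e ~ 1e9 cm^-3
print(f"v_A ~ {alfven_speed(30.0, 1e9):.0f} km/s")
```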
Zen, E.-A.
1973-01-01
Reversed univariant hydrothermal phase-equilibrium reactions, in which a redox reaction occurs and is controlled by oxygen buffers, can be used to extract thermochemical data on minerals. The dominant gaseous species present, even for relatively oxidizing buffers such as the QFM buffer, are H2O and H2; the main problem is to calculate the chemical potentials of these components in a binary mixture. The mixing of these two species in the gas phase was assumed by Eugster and Wones (1962) to be ideal; this assumption allows calculation of the chemical potentials of the two components in a binary gas mixture, using data in the literature. A simple-mixture model of nonideal mixing, such as that proposed by Shaw (1967), can also be combined with the equations of state for oxygen buffers to permit derivation of the chemical potentials of the two components. The two mixing models yield closely comparable results for the more oxidizing buffers such as the QFM buffer. For reducing buffers such as IQF, the nonideal-mixing correction can be significant and the Shaw model is better. The procedure for calculating mineralogical thermochemical data, in reactions where hydrogen and H2O appear simultaneously, is applied to the experimental data on annite, given by Wones et al. (1971), and on almandine, given by Hsu (1968). For annite the results are: standard entropy of formation from the elements, Sf0(298, 1) = -283.35 ± 2.2 gb/gf; S0(298, 1) = +92.5 gb/gf; Gf0(298, 1) = -1148.2 ± 6 kcal; and Hf0(298, 1) = -1232.7 ± 7 kcal. For almandine, the calculation takes into account the mutual solution of FeAl2O4 (Hc) in magnetite and of Fe3O4 (Mt) in hercynite and the temperature dependence of this solid solution, as given by Turnock and Eugster (1962); the calculations assume a regular-solution model for this binary spinel system. The standard entropy of formation of almandine, Sf,A0(298, 1), is -272.33 ± 3 gb/gf. The third-law entropy, S0(298, 1), is +68.3 ± 3 gb/gf, a value much less than the oxide-sum estimate, but the deviation is nearly the same as that of grossularite, referring to a comparable set of oxide standard states. The Gibbs free energy Gf,A0(298, 1) is -1192.36 ± 4 kcal, and the enthalpy Hf,A0(298, 1) is -1273.56 ± 5 kcal.
Extending the Standard Model with Confining and Conformal Dynamics
NASA Astrophysics Data System (ADS)
McRaven, John Emory
This dissertation will provide a survey of models that involve extending the standard model with confining and conformal dynamics. We will study a series of models, describe them in detail, outline their phenomenology, and provide some search strategies for finding them. The Gaugephobic Higgs model provides an interpolation between three different models of electroweak symmetry breaking: Higgsless models, Randall-Sundrum models, and the Standard Model. At parameter points between the extremes, Standard Model Higgs signals are present at reduced rates, and Higgsless Kaluza-Klein excitations are present with shifted masses and couplings, as well as signals from exotic quarks necessary to protect the Zbb coupling. Using a new implementation of the model in SHERPA, we show the LHC signals which differentiate the generic Gaugephobic Higgs model from its limiting cases. These are all signals involving a Higgs coupling to a Kaluza-Klein gauge boson or quark. We identify the clean signal pp → W(i) → WH mediated by a Kaluza-Klein W, which can be present at large rates and is enhanced for even Kaluza-Klein numbers. Due to the very hard lepton coming from the W+/- decay, this signature has little background, and provides a better discovery channel for the Higgs than any of the Standard Model modes, over its entire mass range. A Higgs radiated from new heavy quarks also has large rates, but is much less promising due to very high multiplicity final states. The AdS/CFT correspondence conjectures a relation between Extra Dimensional models in AdS5 space, such as the Gaugephobic Higgs Model, and 4D Conformal Field theories. The notion of conformality has found its way into several phenomenological models for TeV-scale physics extending the standard model. We proceed to explore the phenomenology of a new heavy quark that transforms under a hidden strongly coupled conformal gauge group in addition to transforming under QCD. This object would form states similar to R-Hadrons. The heavy state would leave very little of its energy in the calorimeter, so while detecting the presence of a heavy stable state would be easy, measuring the strength of its interactions would require accurate measurements of missing energy, or the ability to identify it in the muon tracker. We then study the phenomenology of a 4D model of electroweak symmetry breaking through the condensation of magnetic monopoles. A new generation of fermions with magnetic charges in addition to electric charges is introduced. The dyons condense and break the electroweak symmetry. The magnetic coupling is inversely proportional to the electric coupling, causing it to be strong. The processes involving magnetic couplings thus provide interesting phenomenology to study. We primarily study the processes involving di-photon production and compare them to early LHC results. Finally, we calculate triangle anomalies for fermions with non-canonical scaling dimensions. The best-known example of such fermions (aka unfermions) occurs in Seiberg duality, where the matching of anomalies (including mesinos with scaling dimensions between 3/2 and 5/2) is a crucial test of duality. By weakly gauging the non-local action for an unfermion, we calculate the one-loop three-current amplitude. Despite the fact that there are more graphs with more complicated propagators and vertices, we find that the calculation can be completed in a way that nearly parallels the usual case.
We show that the anomaly factor for fermionic unparticles is independent of the scaling dimension and identical to that for ordinary fermions. This can be viewed as a confirmation that unparticle actions correctly capture the physics of conformal fixed point theories like Banks-Zaks or SUSY QCD.
The virialization density of peaks with general density profiles under spherical collapse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rubin, Douglas; Loeb, Abraham, E-mail: dsrubin@physics.harvard.edu, E-mail: aloeb@cfa.harvard.edu
2013-12-01
We calculate the non-linear virialization density, Δc, of halos under spherical collapse from peaks with an arbitrary initial and final density profile. This is in contrast to the standard calculation of Δc, which assumes top-hat profiles. Given our formalism, the non-linear halo density can be calculated once the shape of the initial peak's density profile and the shape of the virialized halo's profile are provided. We solve for Δc for halos in an Einstein-de Sitter and a ΛCDM universe. As examples, we consider power-law initial profiles as well as spherically averaged peak profiles calculated from the statistics of a Gaussian random field. We find that, depending on the profiles used, Δc is smaller by a factor of a few to as much as a factor of 10 compared to the density given by the standard calculation ( ≈ 200). Using our results, we show that, for halo-finding algorithms that identify halos through an over-density threshold, the halo mass function measured from cosmological simulations can be enhanced at all halo masses by a factor of a few. This difference could be important when using numerical simulations to assess the validity of analytic models of the halo mass function.
Astrophysical tests for radiative decay of neutrinos and fundamental physics implications
NASA Technical Reports Server (NTRS)
Stecker, F. W.; Brown, R. W.
1981-01-01
The radiative lifetime tau for the decay of massive neutrinos was calculated using various physical models for neutrino decay. The results were then related to the astrophysical problem of the detectability of the decay photons from cosmic neutrinos. Conversely, the astrophysical data were used to place lower limits on tau. These limits are all well below predicted values. However, an observed feature at approximately 1700 Å in the ultraviolet background radiation at high galactic latitudes may be from the decay of neutrinos with mass of approximately 14 eV. This would require a decay rate much larger than the predictions of standard models, but could be indicative of a decay rate possible in composite models or other new physics. Thus an important test for substructure in leptons and quarks, or other physics beyond the standard electroweak model, may have been found.
The behavior of the Higgs field in the new inflationary universe
NASA Technical Reports Server (NTRS)
Guth, Alan H.; Pi, So-Young
1986-01-01
Answers are provided to questions about the standard model of the new inflationary universe (NIU) which have raised concerns about the model's validity. A toy problem, consisting of a single particle moving in one dimension under the influence of a potential with the form of an upside-down harmonic oscillator, is studied first, showing that the quantum mechanical wave function at large times is accurately described by classical physics. Then, an exactly soluble toy model for the behavior of the Higgs field in the NIU is described, which should provide a reasonable approximation to its actual behavior. The dynamics of the toy model is described, and calculational results are reviewed which, the authors claim, provide strong evidence that the basic features of the standard picture are correct.
Raghubar, Kimberly P.; Barnes, Marcia A.; Dennis, Maureen; Cirino, Paul T.; Taylor, Heather; Landry, Susan
2015-01-01
Objective Math and attention are related in neurobiological and behavioral models of mathematical cognition. This study employed model-driven assessments of attention and math in children with spina bifida myelomeningocele (SBM), who have known math difficulties and specific attentional deficits, to more directly examine putative relations between attention and mathematical processing. The relation of other domain general abilities and math was also investigated. Method Participants were 9.5-year-old children with SBM (N = 44) and typically developing children (N = 50). Participants were administered experimental exact and approximate arithmetic tasks, and standardized measures of math fluency and calculation. Cognitive measures included the Attention Network Test (ANT), and standardized measures of fine motor skills, verbal working memory (WM), and visual-spatial WM. Results Children with SBM performed similarly to peers on exact arithmetic but more poorly on approximate and standardized arithmetic measures. On the ANT, children with SBM differed from controls on orienting attention but not alerting and executive attention. Multiple mediation models showed that: fine motor skills and verbal WM mediated the relation of group to approximate arithmetic; fine motor skills and visual-spatial WM mediated the relation of group to math fluency; and verbal and visual-spatial WM mediated the relation of group to math calculation. Attention was not a significant mediator of the effects of group for any aspect of math in this study. Conclusions Results are discussed with reference to models of attention, WM, and mathematical cognition. PMID:26011113
A generalized estimating equations approach for resting-state functional MRI group analysis.
D'Angelo, Gina M; Lazar, Nicole A; Eddy, William F; Morris, John C; Sheline, Yvette I
2011-01-01
An Alzheimer's fMRI study has motivated us to evaluate inter-regional correlations between groups. The overall objective is to assess inter-regional correlations in a resting state with no stimulus or task. We propose using a generalized estimating equation (GEE) transition model and a GEE marginal model to model the within-subject correlation for each region. Residuals calculated from the GEE models are used to correlate brain regions and assess between-group differences. The standard approach for comparing group correlations pools group averages of the Fisher-z transformation, assuming temporal independence. The GEE approaches and the standard Fisher-z pooling approach are demonstrated with an Alzheimer's disease (AD) connectivity study in a population of AD subjects and healthy control subjects. We also compare these methods using simulation studies and show that the transition model may have better statistical properties.
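For comparison, the standard pooling approach mentioned above can be sketched in a few lines: Fisher-z transform each subject's correlation, then compare group means with a z statistic whose variance assumes temporal independence (1/(n-3) per subject). The correlation values below are hypothetical:

```python
import numpy as np

def fisher_z_group_test(r_group1, r_group2, n_timepoints):
    """Fisher-z pooling: transform per-subject correlations, then compare
    group means with a two-sample z test assuming temporal independence
    (variance 1/(n-3) per subject)."""
    z1, z2 = np.arctanh(r_group1), np.arctanh(r_group2)
    se = np.sqrt(1.0 / (n_timepoints - 3) / len(z1)
                 + 1.0 / (n_timepoints - 3) / len(z2))
    return (z1.mean() - z2.mean()) / se

r_ad = np.array([0.42, 0.35, 0.50, 0.28])   # hypothetical AD correlations
r_hc = np.array([0.61, 0.55, 0.66, 0.58])   # hypothetical control correlations
print(f"z = {fisher_z_group_test(r_ad, r_hc, n_timepoints=120):.2f}")
```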
NASA Astrophysics Data System (ADS)
Elwina; Yunardi; Bindar, Yazid
2018-04-01
This paper presents results obtained from the application of the computational fluid dynamics (CFD) code Fluent 6.3 to the modelling of temperature in propane flames with and without air preheat. The study focuses on investigating the effect of air preheat temperature on the flame temperature. A standard k-ε model and the eddy dissipation model are utilized to represent the flow field and the combustion of the flame, respectively. The results of the calculations are compared with experimental data for propane flames taken from the literature. The results of the study show that the combination of the standard k-ε turbulence model and the eddy dissipation model is capable of producing reasonable predictions of temperature, particularly in the axial profiles of all three flames. Both the experiments and the numerical simulations show that increasing the temperature of the combustion air significantly increases the flame temperature.
Identifying fMRI Model Violations with Lagrange Multiplier Tests
Cassidy, Ben; Long, Christopher J; Rae, Caroline; Solo, Victor
2013-01-01
The standard modeling framework in Functional Magnetic Resonance Imaging (fMRI) is predicated on assumptions of linearity, time invariance and stationarity. These assumptions are rarely checked because doing so requires specialised software, although failure to do so can lead to bias and mistaken inference. Identifying model violations is an essential but largely neglected step in standard fMRI data analysis. Using Lagrange Multiplier testing methods we have developed simple and efficient procedures for detecting model violations such as non-linearity, non-stationarity and validity of the common Double Gamma specification for hemodynamic response. These procedures are computationally cheap and can easily be added to a conventional analysis. The test statistic is calculated at each voxel and displayed as a spatial anomaly map which shows regions where a model is violated. The methodology is illustrated with a large number of real data examples. PMID:22542665
Systematic investigations of deep sub-barrier fusion reactions using an adiabatic approach
NASA Astrophysics Data System (ADS)
Ichikawa, Takatoshi
2015-12-01
Background: At extremely low incident energies, unexpected decreases in fusion cross sections, compared to standard coupled-channels (CC) calculations, have been observed in a wide range of fusion reactions. These significant reductions of the fusion cross sections are often referred to as the fusion hindrance. However, the physical origin of the fusion hindrance is still unclear. Purpose: To describe the fusion hindrance based on an adiabatic approach, I propose a novel extension of the standard CC model by introducing a damping factor that describes a smooth transition from sudden to adiabatic processes, that is, the transition from the separated two-body system to the united dinuclear system. I demonstrate the performance of this model by systematically investigating various deep sub-barrier fusion reactions. Method: I extend the standard CC model by introducing a damping factor into the coupling matrix elements of the standard CC model. This avoids double counting of the CC effects when the two colliding nuclei overlap one another. I adopt the Yukawa-plus-exponential (YPE) model as the basic heavy ion-ion potential, which is advantageous for a unified description of the one- and two-body potentials. For the purpose of these systematic investigations, I approximate the one-body potential with a third-order polynomial function based on the YPE model. Results: Calculated fusion cross sections for the medium-heavy mass systems 64Ni+64Ni, 58Ni+58Ni, and 58Ni+54Fe, the medium-light mass systems 40Ca+40Ca, 48Ca+48Ca, and 24Mg+30Si, and the mass-asymmetric systems 48Ca+96Zr and 16O+208Pb are consistent with the experimental data. The astrophysical S factor and logarithmic derivative representations of these are also in good agreement with the experimental data. The values obtained for the individual radius and diffuseness parameters in the damping factor, which reproduce the fusion cross sections well, are nearly equal to the average values for all the systems. Conclusions: Since the results calculated with the damping factor are in excellent agreement with the experimental data in all systems, I conclude that a coordinate-dependent coupling strength is responsible for the fusion hindrance. In all systems, the potential energies at the touching point VTouch correlate strongly with the incident threshold energies at which the fusion hindrance starts to emerge, except for the medium-light mass systems.
Supersymmetric and non-supersymmetric models without catastrophic Goldstone bosons
NASA Astrophysics Data System (ADS)
Braathen, Johannes; Goodsell, Mark D.; Staub, Florian
2017-11-01
The calculation of the Higgs mass in general renormalisable field theories has been plagued by the so-called "Goldstone Boson Catastrophe," where light (would-be) Goldstone bosons give infra-red divergent loop integrals. In supersymmetric models, previous approaches included a workaround that ameliorated the problem for most, but not all, parameter space regions; while giving divergent results everywhere for non-supersymmetric models! We present an implementation of a general solution to the problem in the public code SARAH, along with new calculations of some necessary loop integrals and generic expressions. We discuss the validation of our code in the Standard Model, where we find remarkable agreement with the known results. We then show new applications in Split SUSY, the NMSSM, the Two-Higgs-Doublet Model, and the Georgi-Machacek model. In particular, we take some first steps to exploring where the habit of using tree-level mass relations in non-supersymmetric models breaks down, and show that the loop corrections usually become very large well before naive perturbativity bounds are reached.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification, and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias) and accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g., forecast error, scaled error) of each metric are also provided. To compare models the package provides a generic skill score and percent better. Robust measures of scale, including median absolute deviation, robust standard deviation, robust coefficient of variation, and the Sn estimator, are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, and bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
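Generic implementations of two of the listed metric families are sketched below; these illustrate the definitions and are not PyForecastTools' actual API. Median symmetric accuracy is 100*(exp(median|ln(pred/obs)|) - 1), and the 2x2 contingency scores follow the usual verification definitions:

```python
import numpy as np

def median_symmetric_accuracy(y_true, y_pred):
    """100 * (exp(median(|ln(pred/obs)|)) - 1), in percent."""
    q = np.abs(np.log(np.asarray(y_pred) / np.asarray(y_true)))
    return 100.0 * (np.exp(np.median(q)) - 1.0)

def contingency_2x2(hits, misses, false_alarms, correct_negatives):
    """A few binary-classification metrics from 2x2 table counts."""
    pod = hits / (hits + misses)                    # probability of detection
    far = false_alarms / (hits + false_alarms)      # false alarm ratio
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return pod, far, bias

obs  = [120.0, 80.0, 95.0, 210.0]   # hypothetical observed values
pred = [100.0, 90.0, 110.0, 180.0]  # hypothetical forecasts
print(f"MdSA = {median_symmetric_accuracy(obs, pred):.1f}%")
print("POD, FAR, bias =", contingency_2x2(40, 10, 20, 130))
```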
NASA Technical Reports Server (NTRS)
Thuan, T. X.; Hart, M. H.; Ostriker, J. P.
1975-01-01
The two basic approaches of physical theory required to calculate the evolution of a galactic system are considered, taking into account stellar evolution theory and the dynamics of a gas-star system. Attention is given to intrinsic (stellar) physics, extrinsic (dynamical) physics, and computations concerning the fractionation of an initial mass of gas into stars. The characteristics of a 'standard' model and its variants are discussed along with the results obtained with the aid of these models.
NASA Astrophysics Data System (ADS)
Buzan, J. R.; Huber, M.
2015-12-01
The summer of 2015 experienced major heat waves on four continents, and heat stress left ~4000 people dead in India and Pakistan. Heat stress is caused by a combination of meteorological factors: temperature, humidity, and radiation. The International Organization for Standardization (ISO) uses Wet Bulb Globe Temperature (WBGT), an empirical metric that is calibrated with temperature, humidity, and radiation, for determining labor capacity during heat stress. Unfortunately, most literature studying global heat stress focuses on extreme temperature events, and a limited number of studies use the combination of temperature and humidity. Recent global assessments use WBGT, yet omit the radiation component without recalibrating the metric. Here we explicitly calculate future WBGT within a land surface model, including radiative fluxes as produced by a modeled globe thermometer. We use the Community Land Model version 4.5 (CLM4.5), which is a component model of the Community Earth System Model (CESM) and is maintained by the National Center for Atmospheric Research (NCAR). To drive our CLM4.5 simulations, we use greenhouse gas forcing from Representative Concentration Pathway 8.5 (business as usual) and atmospheric output from the CMIP5 archive. Humans work in a variety of environments, so we place the modeled globe thermometer in a variety of environments: we modify the CLM4.5 code to calculate solar and thermal radiation fluxes below and above canopy vegetation and over bare ground. To calculate wet bulb temperature, we implemented the HumanIndexMod in CLM4.5. The temperature, wet bulb temperature, and radiation fields are calculated at every model time step and are output four times daily. We use these fields to calculate WBGT and labor capacity for two time slices: 2026-2045 and 2081-2100.
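The outdoor WBGT combination referenced by ISO 7243 is WBGT = 0.7*Tnwb + 0.2*Tg + 0.1*Ta. A minimal sketch follows; the labor-capacity ramp is a hypothetical placeholder for illustration, not the ISO limit curves or the paper's calibration:

```python
def wbgt_outdoor(t_wetbulb, t_globe, t_air):
    """ISO 7243 outdoor formula: WBGT = 0.7*Tnwb + 0.2*Tg + 0.1*Ta (deg C)."""
    return 0.7 * t_wetbulb + 0.2 * t_globe + 0.1 * t_air

def labor_capacity(wbgt_c):
    """Hypothetical linear ramp: full capacity at/below 25 C WBGT,
    zero at/above 39 C (illustration only)."""
    return max(0.0, min(1.0, (39.0 - wbgt_c) / 14.0))

wbgt = wbgt_outdoor(t_wetbulb=28.0, t_globe=45.0, t_air=36.0)
print(f"WBGT = {wbgt:.1f} C, labor capacity ~ {100 * labor_capacity(wbgt):.0f}%")
```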
NASA Technical Reports Server (NTRS)
Keating, G. M. (Editor)
1989-01-01
A set of preliminary reference atmosphere models of significant trace species which play important roles in controlling the chemistry, radiation budget, and circulation patterns of the atmosphere were produced. These models of trace species distributions are considered to be reference models rather than standard models; thus, it was not crucial that they be correct in an absolute sense. These reference models can serve as a means of comparison between individual observations, as a first guess in inversion algorithms, and as an approximate representation of observations for comparison to theoretical calculations.
A comparison of linear and nonlinear statistical techniques in performance attribution.
Chan, N H; Genovese, C R
2001-01-01
Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and on three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
Treatment evolution and new standards of care: implications for cost-effectiveness analysis.
Shechter, Steven M
2011-01-01
Traditional approaches to cost-effectiveness analysis have not considered the downstream possibility of a new standard of care coming out of the research and development pipeline. However, the treatment landscape for patients may change significantly over the course of their lifetimes. Objective: to present a Markov modeling framework that incorporates the possibility of treatment evolution into the incremental cost-effectiveness ratio (ICER) comparing treatments available at the present time. Design: a Markov model evaluated by matrix algebra. Measurements: the author evaluates the difference between the new and traditional ICER calculations for patients with chronic diseases facing a lifetime of treatment. The bias of the traditional ICER calculation may be substantial, and further testing reveals that it may be either positive or negative depending on the model parameters. The author also performs probabilistic sensitivity analyses with respect to the possible timing of a new treatment discovery and notes the increase in the magnitude of the bias when the new treatment is likely to appear sooner rather than later. Limitations: the modeling framework is intended as a proof of concept and therefore makes simplifying assumptions, such as time stationarity of model parameters and consideration of a single new drug discovery. For diseases with a more active research and development pipeline, the possibility of a new treatment paradigm may be at least as important to consider in sensitivity analysis as other parameters that are often considered.
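A minimal cohort Markov sketch of the ICER calculation being discussed: propagate a state distribution through a transition matrix, accumulate discounted costs and QALYs for two strategies, and form ICER = ΔC/ΔE. All transition probabilities, costs, and utilities below are hypothetical:

```python
import numpy as np

def discounted_totals(P, costs, qalys, horizon=40, disc=0.03):
    """Cohort Markov model: propagate the state distribution with
    transition matrix P and accumulate discounted costs and QALYs."""
    state = np.array([1.0, 0.0, 0.0])          # everyone starts 'well'
    total_cost = total_qaly = 0.0
    for t in range(horizon):
        d = (1.0 + disc) ** -t                 # annual discount factor
        total_cost += d * state @ costs
        total_qaly += d * state @ qalys
        state = state @ P                      # advance one cycle
    return total_cost, total_qaly

# states: well, sick, dead -- all numbers hypothetical
P_old = np.array([[0.90, 0.08, 0.02], [0.00, 0.85, 0.15], [0, 0, 1]])
P_new = np.array([[0.93, 0.05, 0.02], [0.00, 0.90, 0.10], [0, 0, 1]])
costs = np.array([1000.0, 8000.0, 0.0])        # annual cost per state
qalys = np.array([0.95, 0.60, 0.0])            # utility per state

c_old, q_old = discounted_totals(P_old, costs, qalys)
c_new, q_new = discounted_totals(P_new, costs + [2000.0, 2000.0, 0.0], qalys)
print(f"ICER = {(c_new - c_old) / (q_new - q_old):,.0f} per QALY")
```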
Lin, Ting; Harmsen, Stephen C.; Baker, Jack W.; Luco, Nicolas
2013-01-01
The conditional spectrum (CS) is a target spectrum (with conditional mean and conditional standard deviation) that links seismic hazard information with ground-motion selection for nonlinear dynamic analysis. Probabilistic seismic hazard analysis (PSHA) estimates the ground-motion hazard by incorporating the aleatory uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties in ground-motion prediction models (GMPMs) and seismic source models. Typical CS calculations to date are produced for a single earthquake scenario using a single GMPM, but more precise use requires consideration of at least multiple causal earthquakes and multiple GMPMs that are often considered in a PSHA computation. This paper presents the mathematics underlying these more precise CS calculations. Despite requiring more effort to compute than approximate calculations using a single causal earthquake and GMPM, the proposed approach produces an exact output that has a theoretical basis. To demonstrate the results of this approach and compare the exact and approximate calculations, several example calculations are performed for real sites in the western United States. The results also provide some insights regarding the circumstances under which approximate results are likely to closely match more exact results. To facilitate these more precise calculations for real applications, the exact CS calculations can now be performed for real sites in the United States using new deaggregation features in the U.S. Geological Survey hazard mapping tools. Details regarding this implementation are discussed in this paper.
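One plausible reading of the combination step is the law of total mean and variance applied over causal earthquakes and GMPMs, weighted by deaggregation probabilities; the sketch below implements that reading with hypothetical weights, conditional means, and dispersions (the paper gives the exact mathematics).

    import numpy as np

    # Hedged sketch: combining per-scenario conditional spectra into one
    # target using deaggregation weights; all numbers are hypothetical.
    w = np.array([0.5, 0.3, 0.2])            # P(scenario & GMPM | Sa(T*) = target)
    mu = np.array([[0.30, 0.25, 0.20],       # conditional mean lnSa per scenario,
                   [0.35, 0.28, 0.22],       #   at a few periods T
                   [0.25, 0.21, 0.18]])
    sd = np.array([[0.45, 0.50, 0.55],
                   [0.40, 0.48, 0.52],
                   [0.50, 0.55, 0.60]])

    mean_cs = w @ mu                                  # law of total mean
    var_cs = w @ (sd**2 + mu**2) - mean_cs**2         # law of total variance
    print("CS mean:", mean_cs.round(3), "CS sigma:", np.sqrt(var_cs).round(3))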
Comparative pulsation calculations with OP and OPAL opacities
NASA Technical Reports Server (NTRS)
Kanbur, Shashi M.; Simon, Norman R.
1994-01-01
Comparative linear nonadiabatic pulsation calculations are presented using the OPAL and Opacity Project (OP) opacities. The two sets of opacities include effects due to intermediate coupling and fine structure as well as new abundances. We used two mass-luminosity (M-L) relations, one standard (BIT) and one employing substantial convective core overshoot (COV). The two sets of opacities cannot be differentiated on the basis of the stellar pulsation calculations presented here. The BIT relation can model the beat and bump Cepheids with masses between 4 and 7 solar masses, while if the overshoot relation is used, masses between 2 and 6 solar masses are required. In the RR Lyrae regime, we find the inferred masses of globular cluster RRd stars to be little influenced by the choice of OPAL or OP. Finally, the limited modeling we have done is not able to constrain the Cepheid M-L relation based upon the period ratios observed in the beat and bump stars.
Freight Calculation Model: A Case Study of Coal Distribution
NASA Astrophysics Data System (ADS)
Yunianto, I. T.; Lazuardi, S. D.; Hadi, F.
2018-03-01
Coal has been known as one of the energy alternatives used as an energy source for several power plants in Indonesia. Transporting coal from mining sites to power plant locations requires shipping services that can provide the best freight rate. This study therefore aims to derive standardized formulations for determining ocean freight rates for coal distribution from theoretical concepts. The freight calculation model considers three alternative transport modes commonly used in coal distribution: tug-barge, vessel, and self-propelled barge. The results show that two cost components dominate the freight rate, together accounting for 90% or more of it: time charter hire and fuel cost. Moreover, three main factors have significant impacts on the freight calculation: waiting time at ports, the time charter rate, and the fuel oil price.
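The cost build-up the study describes can be sketched as a per-tonne freight calculation; all rates, times, and the cargo size below are hypothetical placeholders, chosen only to show how time charter hire and fuel cost come to dominate.

    # Hedged sketch of a per-tonne freight build-up; all inputs hypothetical.
    charter_rate = 4500.0      # USD/day, time charter hire
    fuel_price = 550.0         # USD/tonne fuel oil
    fuel_cons = 6.0            # tonne/day at sea
    port_cost = 8000.0         # USD per round voyage
    cargo = 7500.0             # tonnes per shipment (e.g. tug-barge)

    sea_days, port_days, waiting_days = 6.0, 3.0, 2.0   # round-voyage time budget
    voyage_days = sea_days + port_days + waiting_days

    charter_hire = charter_rate * voyage_days
    fuel_cost = fuel_price * fuel_cons * sea_days
    total = charter_hire + fuel_cost + port_cost
    print(f"freight: {total / cargo:.2f} USD/tonne;",
          f"hire+fuel share: {(charter_hire + fuel_cost) / total:.0%}")

With these placeholder numbers the hire and fuel components already reach about 90% of the total, mirroring the proportion the study reports.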
Distributed activation energy model parameters of some Turkish coals
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gunes, M.; Gunes, S.K.
2008-07-01
A multi-reaction model based on a distributed activation energy has been applied to some Turkish coals. The kinetic parameters of the distributed activation energy model were calculated via a computer program developed for this purpose. It was observed that the mean of the activation energy distribution varies between 218 and 248 kJ/mol, and its standard deviation between 32 and 70 kJ/mol. The correlations between the kinetic parameters of the distributed activation energy model and certain properties of the coals have been investigated.
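For reference, the sketch below evaluates a Gaussian distributed activation energy model (DAEM) numerically for a constant heating rate: the unreacted fraction is the activation-energy integral of per-reaction first-order survival. The frequency factor and heating rate are assumed; only the mean and standard deviation of the distribution are taken from the reported ranges.

    import numpy as np

    R = 8.314                       # gas constant, J/(mol K)
    A = 1e13                        # frequency factor, 1/s (assumed)
    beta = 10.0 / 60.0              # heating rate, K/s (10 K/min, assumed)
    E0, sigma = 230e3, 50e3         # mean / std of E distribution, J/mol (reported range)

    E = np.linspace(E0 - 4 * sigma, E0 + 4 * sigma, 800)
    fE = np.exp(-(E - E0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

    T_grid = np.linspace(400.0, 1400.0, 200)              # K
    unreacted = np.empty_like(T_grid)
    for i, T in enumerate(T_grid):
        Tp = np.linspace(300.0, T, 200)
        # psi(E, T): integral of k = A exp(-E/RT') over temperature, divided by beta
        psi = np.trapz(A * np.exp(-E[:, None] / (R * Tp[None, :])), Tp, axis=1) / beta
        unreacted[i] = np.trapz(fE * np.exp(-psi), E)     # Gaussian-weighted survival
    X = 1.0 - unreacted                                   # overall conversion
    print("T at 50%% conversion ~ %.0f K" % T_grid[np.abs(X - 0.5).argmin()])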
Paudel, Moti R; Kim, Anthony; Sarfehnia, Arman; Ahmad, Sayed B; Beachey, David J; Sahgal, Arjun; Keller, Brian M
2016-11-08
A new GPU-based Monte Carlo dose calculation algorithm (GPUMCD), developed by the vendor Elekta for the Monaco treatment planning system (TPS), is capable of modeling dose for both a standard linear accelerator and an Elekta MRI linear accelerator. We have experimentally evaluated this algorithm for a standard Elekta Agility linear accelerator. A beam model was developed in the Monaco TPS (research version 5.09.06) using the commissioned beam data for a 6 MV Agility linac. A heterogeneous phantom representing several scenarios - tumor-in-lung, lung, and bone-in-tissue - was designed and built. Dose calculations in Monaco were done using both the current clinical Monte Carlo algorithm, XVMC, and the new GPUMCD algorithm. Dose calculations in a Pinnacle TPS were also produced using the collapsed cone convolution (CCC) algorithm with heterogeneity correction. Calculations were compared with the measured doses using an ionization chamber (A1SL) and Gafchromic EBT3 films for 2 × 2 cm2, 5 × 5 cm2, and 10 × 10 cm2 field sizes. The percentage depth doses (PDDs) calculated by XVMC and GPUMCD in a homogeneous solid water phantom were within 2%/2 mm of film measurements and within 1% of ion chamber measurements. For the tumor-in-lung phantom, the calculated doses were within 2.5%/2.5 mm of film measurements for GPUMCD. For the lung phantom, doses calculated by all of the algorithms were within 3%/3 mm of film measurements, except for the 2 × 2 cm2 field size, where the CCC algorithm underestimated the depth dose by ~5% over a larger extent of the lung region. For the bone phantom, all of the algorithms were equivalent and calculated dose to within 2%/2 mm of film measurements, except at the interfaces. Both GPUMCD and XVMC showed interface effects, which were more pronounced for GPUMCD and were comparable to film measurements, whereas the CCC algorithm reproduced these effects poorly. © 2016 The Authors.
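Agreement criteria of the "X%/Y mm" form quoted above are commonly scored with a gamma analysis; the following 1D sketch (not the authors' code) shows the calculation on toy depth-dose curves, using global dose normalization.

    import numpy as np

    # Minimal 1D gamma-index sketch for dose-profile comparison (e.g. PDDs).
    def gamma_1d(x, d_ref, d_eval, dose_crit=0.02, dist_crit=2.0):
        """x in mm; dose_crit is a fraction of the reference maximum."""
        g = np.empty_like(d_ref)
        for i, (xi, di) in enumerate(zip(x, d_ref)):
            dd = (d_eval - di) / (dose_crit * d_ref.max())   # dose-difference term
            dx = (x - xi) / dist_crit                        # distance-to-agreement term
            g[i] = np.sqrt(dd**2 + dx**2).min()
        return g

    x = np.linspace(0, 100, 201)                 # depth, mm
    ref = np.exp(-x / 60.0)                      # toy depth-dose curves
    ev = np.exp(-(x - 0.5) / 60.0) * 1.01
    g = gamma_1d(x, ref, ev)
    print(f"gamma pass rate (2%/2mm): {(g <= 1).mean():.1%}")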
Wear Calculation Approach for Sliding - Friction Pairs
NASA Astrophysics Data System (ADS)
Springis, G.; Rudzitis, J.; Lungevics, J.; Berzins, K.
2017-05-01
Predicting the service life of a product depends critically on the choice of an adequate method. Advances in production technologies and in measuring devices of ever-increasing precision now provide data suitable for analytic calculations. Several theoretical wear calculation methods have been developed historically, but there is still no exact wear calculation model applicable to all wear processes, owing to the variety of parameters involved when two or more surfaces wear against each other. Analysing the wear prediction theories, which can be classified into definite groups, one can state that each has shortcomings that may distort the results and render theoretical calculations unreliable. The proposed wear calculation method draws on theories from several branches of science. It describes 3D surface micro-topography using standardized roughness parameters, explains the regularities of particle separation from the material during wear using fatigue theory, and takes into account the material's physical and mechanical characteristics and the specific conditions of the product's working time. The proposed wear calculation model could be of value for predicting the service life of sliding friction pairs, allowing the best technologies to be chosen for many mechanical components.
Selection of reference standard during method development using the analytical hierarchy process.
Sun, Wan-yang; Tong, Ling; Li, Dong-xiang; Huang, Jing-yi; Zhou, Shui-ping; Sun, Henry; Bi, Kai-shun
2015-03-25
A reference standard is critical for ensuring reliable and accurate method performance. One important issue is how to select the ideal one from the alternatives. Unlike the optimization of parameters, the criteria for a reference standard are often not directly measurable. The aim of this paper is to recommend a quantitative approach for the selection of reference standards during method development, based on the analytical hierarchy process (AHP) as a decision-making tool. Six alternative single reference standards were assessed for the quantitative analysis of six phenolic acids from Salvia miltiorrhiza and its preparations by ultra-performance liquid chromatography. The AHP model simultaneously considered six criteria related to reference standard characteristics and method performance: feasibility of obtaining, abundance in samples, chemical stability, accuracy, precision, and robustness. The priority of each alternative was calculated using the standard AHP analysis method. The results showed that protocatechuic aldehyde is the ideal reference standard, and rosmarinic acid, with about 79.8% of its priority, is the second choice. The determination results successfully verified the evaluation ability of this model. The AHP allowed us to consider comprehensively the benefits and risks of the alternatives. It is an effective and practical tool for selecting reference standards during method development. Copyright © 2015 Elsevier B.V. All rights reserved.
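The AHP machinery referred to above reduces to an eigenvector computation: priorities are the normalized principal eigenvector of a pairwise comparison matrix, checked with Saaty's consistency ratio. The sketch below uses a hypothetical 3x3 matrix rather than the paper's six-criteria data.

    import numpy as np

    # Hedged AHP sketch: priorities from the principal eigenvector of a
    # pairwise comparison matrix, plus the consistency ratio.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3., 1.0, 2.0],
                  [1/5., 1/2., 1.0]])         # hypothetical pairwise judgments

    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                              # priority vector

    n = A.shape[0]
    ci = (vals.real[k] - n) / (n - 1)         # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}[n]   # Saaty's random index
    print("priorities:", w.round(3), "CR:", round(ci / ri, 3))  # CR < 0.1 acceptable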
Enting, I. G.; Wigley, M. L.; Heimann, M.
1995-01-01
This database contains the results of various projections of the relation between future CO2 concentrations and future industrial emissions. These projections were contributed by groups from a number of countries as part of the scientific assessment for the report, "Radiative Forcing of Climate Change" (1994), issued by Working Group 1 of the Intergovernmental Panel on Climate Change. There were three types of calculations: (1) forward projections, calculating the atmospheric CO2 concentrations resulting from specified emissions scenarios; (2) inverse calculations, determining the emission rates that would be required to achieve stabilization of CO2 concentrations via specified pathways; (3) impulse response function calculations, required for determining Global Warming Potentials. The projections were extrapolations of global carbon cycle models from pre-industrial times (starting at 1765) to 2100 or 2200 A.D. There were two aspects to the exercise: (1) an assessment of the uncertainty due to uncertainties regarding the current carbon budget, and (2) an assessment of the uncertainties arising from differences between models. To separate these effects, a set of standard conditions was used to explore inter-model differences and then a series of sensitivity studies was used to explore the consequences of current uncertainties in the carbon cycle.
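The impulse-response use of such models can be sketched as a convolution of emissions with a pulse-response function; the multi-exponential response and the emissions path below are hypothetical stand-ins, not an assessed IPCC function or scenario.

    import numpy as np

    # Hedged sketch: atmospheric CO2 as a convolution of emissions with an
    # impulse response function; response coefficients are assumed.
    years = np.arange(1765, 2101)
    t = years - years[0]
    emissions = 0.03 * np.exp(0.02 * t)        # GtC/yr, stylized growth

    def irf(t):
        # fraction of an emitted pulse remaining after t years (assumed)
        return 0.2 + 0.3*np.exp(-t/200.) + 0.3*np.exp(-t/70.) + 0.2*np.exp(-t/10.)

    ppm_per_gtc = 1 / 2.12                     # standard airborne conversion factor
    conc = 278.0 + ppm_per_gtc * np.array(
        [np.trapz(emissions[:i+1] * irf(t[i] - t[:i+1]), t[:i+1])
         for i in range(len(t))])
    print("CO2 in 2100 ~", round(conc[-1], 1), "ppm")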
Standardized Automated CO2/H2O Flux Systems for Individual Research Groups and Flux Networks
NASA Astrophysics Data System (ADS)
Burba, George; Begashaw, Israel; Fratini, Gerardo; Griessbaum, Frank; Kathilankal, James; Xu, Liukang; Franz, Daniela; Joseph, Everette; Larmanou, Eric; Miller, Scott; Papale, Dario; Sabbatini, Simone; Sachs, Torsten; Sakai, Ricardo; McDermitt, Dayle
2017-04-01
In recent years, spatial and temporal flux data coverage has improved significantly, on multiple scales from a single station to continental networks, due to standardization, automation, and management of data collection, and better handling of the extensive amounts of generated data. With more stations and networks, larger data flows from each station, and smaller operating budgets, modern tools are required to handle the entire process effectively and efficiently. Such tools are needed to maximize the time dedicated to authoring publications and answering research questions, and to minimize the time and expense spent on data acquisition, processing, and quality control. Thus, these tools should produce standardized verifiable datasets and provide a way to cross-share the standardized data with external collaborators to leverage available funding and promote data analyses and publications. LI-COR gas analyzers are widely used in past and present flux networks such as AmeriFlux, ICOS, AsiaFlux, OzFlux, NEON, CarboEurope, and FluxNet-Canada. These analyzers have gone through several major improvements over the past 30 years. In 2016, however, a three-pronged development was completed to create an automated flux system which can accept multiple sonic anemometer and datalogger models, compute final and complete fluxes on-site, merge final fluxes with supporting weather, soil, and radiation data, monitor station outputs and send automated alerts to researchers, and allow secure sharing and cross-sharing of station and data access. Two types of these research systems were developed: open-path (LI-7500RS) and enclosed-path (LI-7200RS). Key developments included: • improvement of gas analyzer performance; • standardization and automation of final flux calculations on-site and in real time; • seamless integration with the latest site management and data sharing tools. In terms of gas analyzer performance, the RS analyzers are based on the established LI-7500/A and LI-7200 models, and the improvements focused on increased stability in the presence of contamination, refined temperature control and compensation, and more accurate fast gas concentration measurements. In terms of flux calculations, the improvements focused on automating the on-site flux calculations using EddyPro® software run by a weatherized, fully digital microcomputer, SmartFlux2. In terms of site management and data sharing, the development focused on web-based software, FluxSuite, which allows real-time station monitoring and data access by multiple users. The presentation will describe the key developments in detail and will include results from field tests of the RS gas analyzer models in comparison with older models and control reference instruments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Flores-Tlalpa, A.; Montano, J.; Ramirez-Zavaleta, F.
We perform a complete calculation at the one-loop level for the Zggg and Z'ggg couplings in the context of the minimal 331 model, which predicts the existence of a new Z' gauge boson and new exotic quarks. Bose symmetry is exploited to write a compact and manifestly SU_C(3)-invariant vertex function for the Vggg (V = Z, Z') coupling. Previous results on the Z → ggg decay in the standard model are reproduced. It is found that this decay is insensitive to the effects of the new exotic quarks. This is in contrast with the Z' → ggg decay, which is sensitive to both the standard model and exotic quarks and whose branching ratio is larger than that of the Z → ggg transition by about a factor of 4.
NASA Technical Reports Server (NTRS)
Steffen, C. J., Jr.
1993-01-01
Turbulent backward-facing step flow was examined using four low turbulent Reynolds number k-epsilon models and one standard high Reynolds number technique. A tunnel configuration of 1:9 (step height: exit tunnel height) was used. The models tested include: the original Jones and Launder; Chien; Launder and Sharma; and the recent Shih and Lumley formulation. The experimental reference of Driver and Seegmiller was used to make detailed comparisons between reattachment length, velocity, pressure, turbulent kinetic energy, Reynolds shear stress, and skin friction predictions. The results indicated that the use of a wall function for the standard k-epsilon technique did not reduce the calculation accuracy for this separated flow when compared to the low turbulent Reynolds number techniques.
Toriihara, Akira; Ohtake, Makoto; Tateishi, Kensuke; Hino-Shishikura, Ayako; Yoneyama, Tomohiro; Kitazume, Yoshio; Inoue, Tomio; Kawahara, Nobutaka; Tateishi, Ukihide
2018-05-01
The potential of positron emission tomography/computed tomography using 62Cu-diacetyl-bis(N4-methylthiosemicarbazone) (62Cu-ATSM) PET/CT, originally developed as a hypoxic tracer, to predict therapeutic resistance and prognosis has been reported in various cancers. Our purpose was to investigate the prognostic value of 62Cu-ATSM PET/CT in patients with glioma, compared to PET/CT using 2-deoxy-2-[18F]fluoro-D-glucose (18F-FDG). 56 patients with glioma of World Health Organization grade 2-4 were enrolled. All participants had undergone both 62Cu-ATSM PET/CT and 18F-FDG PET/CT within a mean of 33.5 days prior to treatment. The maximum standardized uptake value and tumor/background ratio were calculated within areas of increased radiotracer uptake. The prognostic significance for progression-free survival and overall survival was assessed by the log-rank test and Cox's proportional hazards model. Disease progression and death were confirmed in 37 and 27 patients during follow-up, respectively. In univariate analysis, there were significant differences in both progression-free survival and overall survival by age, tumor grade, history of chemoradiotherapy, and the maximum standardized uptake value and tumor/background ratio calculated using 62Cu-ATSM PET/CT. Multivariate analysis revealed that the maximum standardized uptake value calculated using 62Cu-ATSM PET/CT was an independent predictor of both progression-free survival and overall survival (p < 0.05). In a subgroup analysis of grade 4 gliomas, only the maximum standardized uptake value calculated using 62Cu-ATSM PET/CT showed a significant difference in progression-free survival (p < 0.05). 62Cu-ATSM PET/CT is a more promising imaging method for predicting the prognosis of patients with glioma compared to 18F-FDG PET/CT.
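For reference, the two uptake metrics used above are simple ratios; the sketch below computes a standardized uptake value and a tumor/background ratio from hypothetical activity concentrations, injected dose, and body weight.

    # Hedged sketch of SUV and tumor/background (T/B) ratio; values hypothetical.
    def suv(activity_kbq_per_ml, injected_dose_mbq, weight_kg):
        # SUV = tissue activity / (injected dose / body weight), decay-corrected,
        # with 1 kg of tissue taken as ~1000 mL
        return activity_kbq_per_ml / (injected_dose_mbq * 1000.0 / (weight_kg * 1000.0))

    suv_max_tumor = suv(25.0, 600.0, 70.0)    # peak voxel in the uptake region
    suv_background = suv(6.0, 600.0, 70.0)    # reference (background) region
    print("SUVmax:", round(suv_max_tumor, 2),
          "T/B:", round(suv_max_tumor / suv_background, 2))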
The Dualized Standard Model and its Applications — AN Interim Report
NASA Astrophysics Data System (ADS)
Chan, Hong-Mo; Tsou, Sheung Tsun
Based on a non-Abelian generalization of electric-magnetic duality, the Dualized Standard Model (DSM) suggests a natural explanation for exactly three generations of fermions as the "dual colour" SU(3)-tilde symmetry broken in a particular manner. The resulting scheme then offers, on the one hand, a fermion mass hierarchy and a perturbative method for calculating the mass and mixing parameters of the Standard Model fermions, and on the other hand, testable predictions for new phenomena ranging from rare meson decays to ultra-high-energy cosmic rays. Calculations to one-loop order give, at the cost of adjusting only three real parameters, values for the following quantities all (except one) in very good agreement with experiment: the quark CKM matrix elements |Vrs|, the lepton CKM matrix elements |Urs|, and the second-generation masses mc, ms, mμ. This means, in particular, that it gives near-maximal mixing Uμ3 between νμ and ντ, as observed by SuperKamiokande, Kamiokande, and Soudan, while keeping the corresponding quark angles Vcb, Vts small. In addition, the scheme gives (i) rough order-of-magnitude estimates for the masses of the lowest generation, (ii) predictions for low-energy FCNC effects such as KL → eμ, and (iii) a possible explanation for the long-standing puzzle of air showers beyond the GZK cut-off. All these together, however, still represent but a portion of the possible physical consequences derivable from the DSM scheme, the majority of which are yet to be explored.
Analysis of household refrigerators for different testing standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bansal, P.K.; McGill, I.
This study highlights the salient differences among various testing standards for household refrigerator-freezers and proposes a methodology for predicting the performance of a single-evaporator vapor-compression refrigeration system (either refrigerator or freezer) from one test standard (where the test data are available, the reference case) to another (the alternative case). The standards studied during this investigation include the Australian-New Zealand Standard (ANZS), the International Standard (ISO), the American National Standard (ANSI), the Japanese Industrial Standard (JIS), and the Chinese National Standard (CNS). A simple analysis in conjunction with the BICYCLE model (Bansal and Rice 1993) is used to calculate the energy consumption of two refrigerator cabinets from the reference case to the alternative cases. The proposed analysis includes the effect of door openings (as required by the JIS) as well as defrost heaters. The analytical results are found to agree reasonably well with the experimental observations for translating energy consumption information from one standard to another.
NASA Technical Reports Server (NTRS)
Whitlock, C. H.; Suttles, J. T.; Lecroy, S. R.
1985-01-01
Tabular values of phase function, Legendre polynomial coefficients, 180 deg backscatter, and extinction cross section are given for eight wavelengths in the atmospheric windows between 0.4 and 2.2 microns. Also included are single scattering albedo, asymmetry factor, and refractive indices. These values are based on Mie theory calculations for the standard radiation atmospheres (continental, maritime, urban, unperturbed stratospheric, volcanic, upper atmospheric, soot, oceanic, dust, and water-soluble) as well as measured volcanic aerosols at several time intervals following the El Chichon eruption. Comparisons of extinction to 180 deg backscatter for different aerosol models are presented and related to lidar data.
Renormalization group invariant of lepton Yukawa couplings
NASA Astrophysics Data System (ADS)
Tsuyuki, Takanao
2015-04-01
By using quark Yukawa matrices only, we can construct renormalization invariants that are exact at the one-loop level in the standard model. One of them, Iq, is accidentally consistent with unity, even though quark masses are strongly hierarchical. We calculate a lepton version of the invariant, Il, for the Dirac and Majorana neutrino cases and find that Il can also be close to unity. For the Dirac neutrino, inverted hierarchy case, if the lightest neutrino mass is 3.0 meV to 8.8 meV, the equality Iq = Il can be satisfied. These invariants are unchanged even if new particles couple to the standard model particles, as long as those couplings are generation independent.
A combination strategy for tracking the serial criminal
NASA Astrophysics Data System (ADS)
He, Chuan; Zhang, Yuan-Biao; Wan, Jiadi; Yu, Wenjing
2010-08-01
We build a Geographic Profiling Model to generate the criminal's geographical profile by combining two complementary strategies: the Spatial Distribution Strategy and the Probability Distance Strategy. In the first strategy, we designate the mean of all the known crime sites as the anchor point and build a Standard Deviational Ellipse Model that considers the effect of landscape. In the second strategy, we take factors such as the buffer zone and distance decay theory into consideration and calculate the probability of the offender's residence lying in a certain area by using Bayes' theorem and the Rossmo algorithm. We then combine the results of the two strategies to obtain three search areas suited to different operational conditions of the police for tracking the serial criminal. Applying the model to the case of the English serial killer Peter Sutcliffe shows that it can be used effectively to track a serial criminal.
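The Rossmo component of the second strategy can be sketched as a grid scoring rule that combines distance decay with a buffer zone around the crime sites; the exponents, buffer radius, and crime locations below are illustrative, not calibrated values.

    import numpy as np

    # Hedged sketch of a Rossmo-style scoring surface over a grid.
    def rossmo_surface(crimes, nx=100, ny=100, B=5.0, f=1.2, g=1.2):
        surf = np.zeros((ny, nx))
        for i in range(ny):
            for j in range(nx):
                for (cx, cy) in crimes:
                    d = abs(j - cx) + abs(i - cy)            # Manhattan distance
                    if d > B:
                        surf[i, j] += 1.0 / d**f             # distance-decay term
                    else:
                        surf[i, j] += B**(g - f) / (2*B - d)**g   # buffer-zone term
        return surf / surf.sum()                             # normalize to probabilities

    crimes = [(20, 30), (35, 28), (28, 45), (40, 40)]        # hypothetical crime sites
    p = rossmo_surface(crimes)
    iy, ix = np.unravel_index(p.argmax(), p.shape)
    print("most probable anchor-point cell:", (ix, iy))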
Testing collapse models by a thermometer
NASA Astrophysics Data System (ADS)
Bahrami, M.
2018-05-01
Collapse models postulate that space is filled with a collapse noise field, inducing quantum Brownian motions, which are dominant during the measurement and thus cause the collapse of the wave function. An important manifestation of the collapse noise field, if any, is thermal energy generation, which disturbs the temperature profile of a system. The experimental investigation of collapse-driven heating has so far provided the most promising test of collapse models against standard quantum theory. In this paper, we calculate the collapse-driven heat generation for a three-dimensional multi-atomic Bravais lattice by solving stochastic Heisenberg equations. We perform our calculation for the mass-proportional continuous spontaneous localization collapse model with nonwhite noise. We obtain the temperature distribution of a sphere under stationary-state and insulated-surface conditions. However, the exact quantification of the collapse-driven heat-generation effect depends strongly on the actual value of the cutoff in the collapse noise spectrum.
Spin-flip transitions and departure from the Rashba model in the Au(111) surface
NASA Astrophysics Data System (ADS)
Ibañez-Azpiroz, Julen; Bergara, Aitor; Sherman, E. Ya.; Eiguren, Asier
2013-09-01
We present a detailed analysis of the spin-flip excitations induced by a periodic time-dependent electric field in the Rashba prototype Au(111) noble metal surface. Our calculations incorporate the full spinor structure of the spin-split surface states and employ a Wannier-based scheme for the spin-flip matrix elements. We find that the spin-flip excitations associated with the surface states exhibit a strong dependence on the electron momentum magnitude, a feature that is absent in the standard Rashba model [E. I. Rashba, Sov. Phys. Solid State 2, 1109 (1960)]. Furthermore, we demonstrate that the maximum of the calculated spin-flip absorption rate is about twice the model prediction. These results show that, although the Rashba model accurately describes the spectrum and spin polarization, it does not fully account for the dynamical properties of the surface states.
NASA Astrophysics Data System (ADS)
Pan, Kok-Kwei
We have generalized the linked cluster expansion method to solve a wider range of many-body quantum systems, such as quantum spin systems with crystal-field potentials and the Hubbard model. The technique sums all connected diagrams to a certain order of the perturbative Hamiltonian. The modified multiple-site Wick reduction theorem and the simple tau dependence of the standard basis operators have been used to facilitate the evaluation of the integrations in the perturbation expansion. Computational methods are developed to calculate all terms in the series expansion. As a first example, the perturbation series expansion of thermodynamic quantities of the single-band Hubbard model has been obtained using a linked cluster series expansion technique. We have made corrections to all previous results of several papers (up to fourth order). The behaviors of the three-dimensional simple cubic and body-centered cubic systems have been discussed from a qualitative analysis of the perturbation series up to fourth order. We have also calculated the sixth-order perturbation series of this model. As a second example, we present the magnetic properties of the spin-one Heisenberg model with arbitrary crystal-field potential using a linked cluster series expansion. The calculation of the thermodynamic properties using this method covers the whole range of temperature, in both the magnetically ordered and disordered phases. The series for the susceptibility and magnetization have been obtained up to fourth order for this model. The method sums all perturbation terms to a certain order and estimates the result using a well-developed and highly successful extrapolation method (the standard ratio method). The dependence of the critical temperature on the crystal-field potential and the magnetization as a function of temperature and crystal-field potential are shown. The critical behaviors at zero temperature are also shown. The range of the crystal-field potential for Ni(2+) compounds is roughly estimated based on this model using known experimental results.
Sphalerons in composite and nonstandard Higgs models
NASA Astrophysics Data System (ADS)
Spannowsky, Michael; Tamarit, Carlos
2017-01-01
After the discovery of the Higgs boson and the rather precise measurement of all electroweak boson masses, the local structure of the electroweak symmetry breaking potential is already quite well established. However, despite being a key ingredient of a fundamental understanding of the underlying mechanism of electroweak symmetry breaking, the global structure of the electroweak potential remains entirely unknown. The existence of sphalerons, unstable solutions of the classical equations of motion that interpolate between topologically distinct vacua, is a direct consequence of the Standard Model's SU(2)_L gauge group. Nevertheless, the sphaleron energy depends on the shape of the Higgs potential away from the minimum and can therefore be a litmus test for its global structure. Focusing on two scenarios, the minimal composite Higgs model SO(5)/SO(4) and an elementary Higgs with a deformed electroweak potential, we calculate the change of the sphaleron energy compared to the Standard Model prediction. We find that the sphaleron energy would have to be measured to O(10)% accuracy to exclude sizeable global deviations from the Standard Model Higgs potential. We further find that, because of the periodicity of the scalar potential in composite Higgs models, a second sphaleron branch with larger energy arises.
Carbon Management In the Post-Cap-and-Trade Carbon Economy-Part II
NASA Astrophysics Data System (ADS)
DeGroff, F. A.
2014-12-01
This is the second installment in our search for a comprehensive economic model to mitigate climate change due to anthropogenic activity. Last year we presented how the unique features of our economic model measure changes in carbon flux due to anthropogenic activity, referred to as carbon quality or CQ, and how the model is used to value such changes in the climate system. This year, our paper focuses on how carbon quality can be implemented to capture the effect of economic activity and international trade on the climate system, thus allowing us to calculate a Return on Climate System (RoCS) for all economic assets and activity. The result is that the RoCS for each public and private economic activity and entity can be calculated by summing the RoCS for each individual economic asset and activity in which an entity is engaged. Such a macro-level scale is used to rank public and private entities including corporations, governments, and even entire nations, as well as human adaptation and carbon storage activities, providing status and trend insights to evaluate policies on both a micro- and macro-economic level. With international trade, RoCS measures the embodied effects on climate change that will be needed to assess border fees to ensure carbon parity on all imports and exports. At the core of our vision is a comprehensive, 'open-source' construct of which our carbon quality metric is the first element. One goal is to recognize each country's endemic resources and infrastructure that affect its ability to manage carbon, while preventing spatial and temporal shifting of carbon emissions that would reduce or reverse efforts to mitigate climate change. The standards for calculating the RoCS can be promulgated as part of the Generally Accepted Accounting Principles (GAAP) and the International Financial Reporting Standards (IFRS) to ensure standard and consistent reporting. Insight into the climate system at all levels will be crucial to managing anthropogenic activity so as to minimize its effect on the climate system. Without the insights provided by a comprehensive, standardized, and verifiable RoCS, managing anthropogenic activity will be elusive and difficult to achieve, at best. Such a model may also be useful for managing the effect of anthropogenic activity on the nitrogen and phosphorus cycles.
40 CFR 91.207 - Credit calculation and manufacturer compliance with emission standards.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of nitrogen credit status for an engine family, whether generating positive credits or negative... with model year 2000, a manufacturer having a negative credit balance during one period of up to four... regulation under this part of 1000 or less; and (2) The manufacturer has not had a negative credit balance...
40 CFR 91.207 - Credit calculation and manufacturer compliance with emission standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of nitrogen credit status for an engine family, whether generating positive credits or negative... with model year 2000, a manufacturer having a negative credit balance during one period of up to four... regulation under this part of 1000 or less; and (2) The manufacturer has not had a negative credit balance...
40 CFR 91.207 - Credit calculation and manufacturer compliance with emission standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of nitrogen credit status for an engine family, whether generating positive credits or negative... with model year 2000, a manufacturer having a negative credit balance during one period of up to four... regulation under this part of 1000 or less; and (2) The manufacturer has not had a negative credit balance...
40 CFR 91.207 - Credit calculation and manufacturer compliance with emission standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... of nitrogen credit status for an engine family, whether generating positive credits or negative... with model year 2000, a manufacturer having a negative credit balance during one period of up to four... regulation under this part of 1000 or less; and (2) The manufacturer has not had a negative credit balance...
40 CFR 91.207 - Credit calculation and manufacturer compliance with emission standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of nitrogen credit status for an engine family, whether generating positive credits or negative... with model year 2000, a manufacturer having a negative credit balance during one period of up to four... regulation under this part of 1000 or less; and (2) The manufacturer has not had a negative credit balance...
Ensuring Positiveness of the Scaled Difference Chi-Square Test Statistic
ERIC Educational Resources Information Center
Satorra, Albert; Bentler, Peter M.
2010-01-01
A scaled difference test statistic T̃_d that can be computed from standard software for structural equation models (SEM) by hand calculations was proposed in Satorra and Bentler (Psychometrika 66:507-514, 2001). The statistic T̃_d is asymptotically equivalent to the scaled difference test statistic T̄_d…
49 CFR 571.301 - Standard No. 301; Fuel system integrity.
Code of Federal Regulations, 2011 CFR
2011-10-01
... regarding which of the compliance options it has selected for a particular vehicle or make/model. S6... more than one manufacturer. For the purpose of calculating average annual production of vehicles for... of gravity is located 1,372 mm ±38 mm rearward of the front wheel axis, in the vertical longitudinal...
Quantum Electrodynamics: Theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lincoln, Don
The Standard Model of particle physics is composed of several theories that are added together. The most precise component theory is the theory of quantum electrodynamics or QED. In this video, Fermilab’s Dr. Don Lincoln explains how theoretical QED calculations can be done. This video links to other videos, giving the viewer a deep understanding of the process.
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2013 CFR
2013-07-01
... different strategies are and why they are used. (i) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust... as defined in § 86.1818-12. The model type carbon-related exhaust emission results determined...
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2011 CFR
2011-07-01
... different strategies are and why they are used. (i) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust... as defined in § 86.1818-12. The model type carbon-related exhaust emission results determined...
40 CFR 86.1865-12 - How to comply with the fleet average CO2 standards.
Code of Federal Regulations, 2012 CFR
2012-07-01
... different strategies are and why they are used. (i) Calculating the fleet average carbon-related exhaust emissions. (1) Manufacturers must compute separate production-weighted fleet average carbon-related exhaust... as defined in § 86.1818-12. The model type carbon-related exhaust emission results determined...
Value-Added Results for Public Virtual Schools in California
ERIC Educational Resources Information Center
Ford, Richard; Rice, Kerry
2015-01-01
The objective of this paper is to present value-added calculation methods that were applied to determine whether online schools performed at the same or different levels relative to standardized testing. This study includes information on how we approached our value-added model development and the results for 32 online public high schools in…
NASA Astrophysics Data System (ADS)
Muduli, Pradyut; Das, Sarat
2014-06-01
This paper discusses the evaluation of the liquefaction potential of soil based on a standard penetration test (SPT) dataset using an evolutionary artificial intelligence technique, multi-gene genetic programming (MGGP). The liquefaction classification accuracy (94.19%) of the developed liquefaction index (LI) model is found to be better than that of an available artificial neural network (ANN) model (88.37%) and on par with an available support vector machine (SVM) model (94.19%) on the basis of the testing data. Further, an empirical equation derived using MGGP is presented to approximate the unknown limit state function representing the cyclic resistance ratio (CRR) of soil based on the developed LI model. Using an independent database of 227 cases, the overall rates of successful prediction of the occurrence of liquefaction and non-liquefaction are found to be 87, 86, and 84% for the developed MGGP-based model, the available ANN model, and the statistical model, respectively, on the basis of the calculated factor of safety (Fs) against liquefaction occurrence.
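The factor-of-safety step reported above is the ratio of cyclic resistance to cyclic stress; below is a minimal sketch using the standard Seed-Idriss simplified CSR and a placeholder CRR standing in for the fitted limit-state model.

    # Hedged sketch: Fs = CRR / CSR with the Seed-Idriss simplified CSR.
    def csr(a_max_g, sigma_v, sigma_v_eff, depth_m):
        # Liao-Whitman stress reduction coefficient rd, valid to ~23 m depth
        rd = 1.0 - 0.00765 * depth_m if depth_m <= 9.15 else 1.174 - 0.0267 * depth_m
        return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

    crr_predicted = 0.22       # placeholder for the MGGP/ANN limit-state prediction
    fs = crr_predicted / csr(a_max_g=0.25, sigma_v=100.0, sigma_v_eff=60.0, depth_m=6.0)
    print(f"Fs = {fs:.2f} ->", "liquefaction" if fs < 1.0 else "no liquefaction")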
NASA Technical Reports Server (NTRS)
Bahcall, J. N.; Pinsonneault, M. H.
1992-01-01
We calculate improved standard solar models using the new Livermore (OPAL) opacity tables, an accurate (exportable) nuclear energy generation routine which takes account of recent measurements and analyses, and the recent Anders-Grevesse determination of heavy element abundances. We also evaluate directly the effect of the diffusion of helium with respect to hydrogen on the calculated neutrino fluxes, on the primordial solar helium abundance, and on the depth of the convective zone. Helium diffusion increases the predicted event rates by about 0.8 SNU, or 11 percent of the total rate, in the chlorine solar neutrino experiment, by about 3.5 SNU, or 3 percent, in the gallium solar neutrino experiments, and by about 12 percent in the Kamiokande and SNO solar neutrino experiments. The best standard solar model including helium diffusion and the most accurate nuclear parameters, element abundances, and radiative opacity predicts a value of 8.0 +/- 3.0 SNU for the Cl-37 experiment and 132 +21/-17 SNU for the Ga-71 experiment, where the uncertainties include 3 sigma errors for all measured input parameters.
The lagRST Model: A Turbulence Model for Non-Equilibrium Flows
NASA Technical Reports Server (NTRS)
Lillard, Randolph P.; Oliver, A. Brandon; Olsen, Michael E.; Blaisdell, Gregory A.; Lyrintzis, Anastasios S.
2011-01-01
This study presents a new class of turbulence model designed for wall-bounded, high Reynolds number flows with separation. The model addresses deficiencies seen in the modeling of nonequilibrium turbulent flows. These flows generally have variable adverse pressure gradients which cause the turbulent quantities to react at a finite rate to changes in the mean flow quantities. This "lag" in the response of the turbulent quantities cannot be modeled by most standard turbulence models, which are designed to model equilibrium turbulent boundary layers. The model presented uses a standard 2-equation model as the baseline for turbulent equilibrium calculations, but adds transport equations to account directly for non-equilibrium effects in the Reynolds Stress Tensor (RST) that are seen in large pressure gradients involving shock waves and separation. Comparisons are made to several standard turbulence modeling validation cases, including an incompressible boundary layer (both neutral and adverse pressure gradients), an incompressible mixing layer, and a transonic bump flow. In addition, a hypersonic shock wave turbulent boundary layer interaction (SWTBLI) with separation is assessed along with a transonic capsule flow. Results show a substantial improvement over the baseline models for transonic separated flows. The results are mixed for the SWTBLI flows assessed. Separation predictions are not as good as the baseline models, but the overprediction of the peak heat flux downstream of the reattachment shock that plagues many models is reduced.
Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.
2014-01-01
This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would fall within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.
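One common way to compute an IPM of this kind is a linear program that minimizes the average interval spread subject to containing every observation; the sketch below uses that formulation with a hypothetical polynomial feature basis (the paper's hyper-rectangular parameter-set formulation differs in detail).

    import numpy as np
    from scipy.optimize import linprog

    # Hedged IPM sketch: lower/upper parameter vectors bound the output;
    # an LP minimizes the mean interval width while containing all data.
    rng = np.random.default_rng(1)
    x = rng.uniform(0, 1, 60)
    y = 1.0 + 2.0*x + 0.3*np.sin(8*x) + 0.2*rng.normal(size=60)
    phi = np.column_stack([np.ones_like(x), x, x**2])   # nonnegative features on [0, 1]
    m = phi.mean(axis=0)
    p = phi.shape[1]

    c = np.concatenate([-m, m])                         # minimize mean(up - lo) @ phi
    A = np.block([[phi, np.zeros_like(phi)],            #  lo @ phi_i <= y_i
                  [np.zeros_like(phi), -phi]])          # -up @ phi_i <= -y_i
    b = np.concatenate([y, -y])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * (2 * p))
    lo, up = res.x[:p], res.x[p:]
    print("avg interval width:", round((phi @ up - phi @ lo).mean(), 3))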
Using the Full Cycle of GOCE Data in the Quasi-Geoid Modelling of Finland
NASA Astrophysics Data System (ADS)
Saari, Timo; Bilker-Koivula, Mirjam; Poutanen, Markku
2016-08-01
In the Dragon 3 project 10519, "Case study on heterogeneous geoid/quasigeoid based on space borne and terrestrial data combination with special consideration of GOCE mission data impact", we combined the latest GOCE models with the terrestrial gravity data of Finland and surrounding areas to calculate a quasi-geoid model for Finland. Altogether 249 geoid models with different modifications were calculated using the GOCE DIR5 models up to spherical harmonic degree and order 240 and 300 and the EIGEN-6C4 up to degree and order 1000 and 2190. The calculated quasi-geoid models were compared against the ground truth in Finland with two independent GPS-levelling datasets. The best GOCE-only models gave standard deviations of 2.8 cm and 2.6 cm (DIR5 d/o 240) and 2.7 cm and 2.3 cm (DIR5 d/o 300) in Finnish territory for the NLS-FIN and EUVN-DA datasets, respectively. For the high-resolution model EIGEN-6C4 (which includes the full cycle of GOCE data), the results were 2.4 cm and 1.8 cm (d/o 1000) and 2.5 cm and 1.7 cm (d/o 2190). The sub-2-centimetre (and near 2 cm with GOCE-only) accuracy is an improvement over the previous and current Finnish geoid models, demonstrating the great impact of the GOCE mission on regional geoid modelling.
Verification of calculated skin doses in postmastectomy helical tomotherapy.
Ito, Shima; Parker, Brent C; Levine, Renee; Sanders, Mary Ella; Fontenot, Jonas; Gibbons, John; Hogstrom, Kenneth
2011-10-01
To verify the accuracy of calculated skin doses in helical tomotherapy for postmastectomy radiation therapy (PMRT). In vivo thermoluminescent dosimeters (TLDs) were used to measure the skin dose at multiple points in each of 14 patients throughout the course of treatment on a TomoTherapy Hi·Art II system, for a total of 420 TLD measurements. Five patients were evaluated near the location of the mastectomy scar, whereas 9 patients were evaluated throughout the treatment volume. The measured dose at each location was compared with calculations from the treatment planning system. The mean difference and standard error of the mean difference between measurement and calculation for the scar measurements was -1.8% ± 0.2% (standard deviation [SD], 4.3%; range, -11.1% to 10.6%). The mean difference and standard error of the mean difference between measurement and calculation for measurements throughout the treatment volume was -3.0% ± 0.4% (SD, 4.7%; range, -18.4% to 12.6%). The mean difference and standard error of the mean difference between measurement and calculation for all measurements was -2.1% ± 0.2% (SD, 4.5%; range, -18.4% to 12.6%). The mean difference between measured and calculated TLD doses was statistically significant at two standard deviations of the mean, but was not clinically significant (i.e., was <5%). However, 23% of the measured TLD doses differed from the calculated TLD doses by more than 5%. The mean of the measured TLD doses agreed with TomoTherapy calculated TLD doses within our clinical criterion of 5%. Copyright © 2011 Elsevier Inc. All rights reserved.
Impacts of Climate Policy on Regional Air Quality, Health, and Air Quality Regulatory Procedures
NASA Astrophysics Data System (ADS)
Thompson, T. M.; Selin, N. E.
2011-12-01
Both the changing climate and the policy implemented to address climate change can impact regional air quality. We evaluate the impacts of selected potential climate policies on modeled regional air quality with respect to national pollution standards, human health, and the sensitivity of health uncertainty ranges. To assess changes in air quality due to climate policy, we couple output from a regional computable general equilibrium economic model (the US Regional Energy Policy [USREP] model) with a regional air quality model (the Comprehensive Air Quality Model with Extensions [CAMx]). USREP uses economic variables to determine how potential future U.S. climate policy would change emissions of regional pollutants (CO, VOC, NOx, SO2, NH3, black carbon, and organic carbon) from ten emissions-heavy sectors of the economy (electricity, coal, gas, crude oil, refined oil, energy-intensive industry, other industry, service, agriculture, and transportation [light duty and heavy duty]). Changes in emissions are then modeled using CAMx to determine the impact on air quality in several cities in the Northeast US. We first calculate the impact of climate policy using the regulatory procedures employed to show attainment of the National Ambient Air Quality Standards (NAAQS) for ozone and particulate matter. Building on previous work, we compare those results with the calculated results and uncertainties associated with human health impacts due to climate policy. This work addresses a potential disconnect between NAAQS regulatory procedures and the cost/benefit analysis required for and by the Clean Air Act.
Electromagnetic plasma simulation in realistic geometries
NASA Astrophysics Data System (ADS)
Brandon, S.; Ambrosiano, J. J.; Nielsen, D.
1991-08-01
Particle-in-Cell (PIC) calculations have become an indispensable tool for modeling the nonlinear collective behavior of charged particle species in electromagnetic fields. Traditional finite difference codes, such as CONDOR (2-D) and ARGUS (3-D), are used extensively to design experiments and develop new concepts. A wide variety of physical processes can be modeled simply and efficiently by these codes. However, experiments have become more complex. Geometrical shapes and length scales are becoming increasingly difficult to model. Spatial resolution requirements for the electromagnetic calculation force large grids and small time steps. Many hours of CRAY YMP time may be required to complete a 2-D calculation, and many more for a 3-D calculation. In principle, the number of mesh points and particles need only be increased until all relevant physical processes are resolved. In practice, the size of a calculation is limited by the computer budget. As a result, experimental design is being limited by the ability to calculate, not by the experimenter's ingenuity or understanding of the physical processes involved. Several approaches to meet these computational demands are being pursued. Traditional PIC codes continue to be the major design tools. These codes are being actively maintained, optimized, and extended to handle larger and more complex problems. Two new formulations are being explored to relax the geometrical constraints of the finite difference codes. A modified finite volume test code, TALUS, uses a data structure compatible with that of standard finite difference meshes. This allows a basic conformal boundary/variable grid capability to be retrofitted to CONDOR. We are also pursuing an unstructured grid finite element code, MadMax. The unstructured mesh approach provides maximum flexibility in the geometrical model while also allowing local mesh refinement.
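For orientation, the PIC cycle these codes implement (deposit charge, solve fields, interpolate fields to particles, push particles) can be shown in a minimal 1D electrostatic sketch with a two-stream setup in normalized units; production codes such as CONDOR and ARGUS solve the full electromagnetic problem on 2-D/3-D meshes, and all parameters here are illustrative.

    import numpy as np

    # Minimal 1D electrostatic PIC sketch (two counter-streaming electron
    # beams over a fixed ion background); plasma frequency and density = 1.
    ng, L, N, dt, steps = 64, 2 * np.pi, 20000, 0.05, 400
    dx = L / ng
    rng = np.random.default_rng(2)
    x = rng.uniform(0, L, N)
    v = np.where(np.arange(N) % 2 == 0, 1.0, -1.0)   # two opposing beams
    v += 0.01 * np.sin(x) * np.sign(v)               # seed the instability

    kvec = 2 * np.pi * np.fft.fftfreq(ng, d=dx)
    kvec[0] = 1.0                                    # k = 0 mode zeroed below
    for _ in range(steps):
        # 1. charge deposit (cloud-in-cell): electron density with mean 1
        g = x / dx
        i0 = np.floor(g).astype(int) % ng
        w = g - np.floor(g)
        n = (np.bincount(i0, 1 - w, ng) + np.bincount((i0 + 1) % ng, w, ng)) * ng / N
        rho = 1.0 - n                                # ions minus electrons
        # 2. field solve: dE/dx = rho via FFT
        Ek = np.fft.fft(rho) / (1j * kvec)
        Ek[0] = 0.0
        E = np.fft.ifft(Ek).real
        # 3. interpolate field to particles; 4. push (electrons, q/m = -1)
        Ep = E[i0] * (1 - w) + E[(i0 + 1) % ng] * w
        v -= Ep * dt
        x = (x + v * dt) % L
    print("field energy after run:", 0.5 * np.sum(E**2) * dx)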
Theoretical Study of Watershed Eco-Compensation Standards
NASA Astrophysics Data System (ADS)
Yan, Dandan; Fu, Yicheng; Liu, Biu; Sha, Jinxia
2018-01-01
Watershed eco-compensation is an effective way to solve conflicts over water allocation and ecological destruction problems in the exploitation of water resources. Despite increasing interest in the topic, previous research has neglected the effect of water quality and lacked a systematic calculation method. In this study, we reviewed and analyzed the current literature and proposed a theoretical framework to improve the calculation of the eco-compensation standard. Considering river, forest, and wetland ecosystems, the benefit compensation standard was determined from the input-output relationship. The eco-compensation standard was then calculated from the opportunity costs related to limiting development and water conservation losses. Finally, to eliminate defects in eco-compensation implementation, suggestions were proposed to improve the calculation and implementation of the compensation standard.
Biological effects and equivalent doses in radiotherapy: A software solution
Voyant, Cyril; Julian, Daniel; Roustit, Rudy; Biffi, Katia; Lantieri, Céline
2013-01-01
Background: The limits of TDF (time, dose, and fractionation) and linear-quadratic models have been known for a long time. Medical physicists and physicians are required to provide fast and reliable interpretations regarding delivered doses or any future prescriptions relating to treatment changes. Aim: We therefore propose a calculation interface under the GNU license to be used for equivalent doses, biological doses, and normal tissue complication probability (Lyman model). Materials and methods: The methodology used draws from several sources: the linear-quadratic-linear model of Astrahan, the repopulation effects of Dale, and the prediction of multi-fractionated treatments of Thames. Results and conclusions: The results are obtained from an algorithm that minimizes an ad hoc cost function, and then compared to equivalent doses computed using standard calculators in seven French radiotherapy centers. PMID:24936319
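A minimal sketch of the basic linear-quadratic quantities such an interface exposes, biologically effective dose (BED) and its 2 Gy-per-fraction equivalent (EQD2), is given below; the LQ-L, repopulation, and multi-fractionation extensions cited above add correction terms on top of these.

    # Hedged sketch of standard LQ-model dose conversions.
    def bed(n, d, ab):
        """BED for n fractions of d Gy, with alpha/beta ratio ab in Gy."""
        return n * d * (1 + d / ab)

    def eqd2(n, d, ab):
        """Equivalent total dose delivered in 2 Gy fractions."""
        return bed(n, d, ab) / (1 + 2.0 / ab)

    # example: 20 x 2.75 Gy hypofractionation, tumor alpha/beta = 10 Gy
    print("BED:", round(bed(20, 2.75, 10.0), 1), "Gy;",
          "EQD2:", round(eqd2(20, 2.75, 10.0), 1), "Gy")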
Wang, Xiao-Lan; Zhan, Ting-Ting; Zhan, Xian-Cheng; Tan, Xiao-Ying; Qu, Xiao-You; Wang, Xin-Yue; Li, Cheng-Rong
2014-01-01
The osmotic pressure of ammonium sulfate solutions has been measured by well-established freezing point osmometry in dilute solutions and, as we recently reported, by air humidity osmometry over a much wider range of concentrations. Air humidity osmometry cross-validated the theoretical calculations of osmotic pressure based on the Pitzer model at high concentrations by two one-sided tests (TOST) of equivalence with multiple testing corrections, where no other experimental method could serve as a reference for comparison. Although stricter equivalence criteria were established between the measurements of freezing point osmometry and the calculations based on the Pitzer model at low concentrations, air humidity osmometry is the only currently available osmometry applicable to high concentrations and serves as an economical addition to standard osmometry.
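The TOST procedure mentioned above tests paired differences against symmetric equivalence bounds; the sketch below implements a paired TOST with hypothetical readings and an assumed bound delta.

    import numpy as np
    from scipy import stats

    # Hedged sketch of a paired two one-sided test (TOST) of equivalence.
    def tost_paired(a, b, delta):
        d = np.asarray(a) - np.asarray(b)
        n, mean, se = len(d), d.mean(), d.std(ddof=1) / np.sqrt(len(d))
        t_lower = (mean + delta) / se            # H0: mean <= -delta
        t_upper = (mean - delta) / se            # H0: mean >= +delta
        p1 = 1 - stats.t.cdf(t_lower, n - 1)
        p2 = stats.t.cdf(t_upper, n - 1)
        return max(p1, p2)                       # equivalent if p < alpha

    measured = [2.41, 2.55, 2.68, 2.80, 2.95]    # osmotic pressure, arbitrary units
    pitzer = [2.45, 2.52, 2.70, 2.78, 2.99]      # hypothetical model values
    print("TOST p-value:", round(tost_paired(measured, pitzer, delta=0.1), 4))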
A composite model for the 750 GeV diphoton excess
Harigaya, Keisuke; Nomura, Yasunori
2016-03-14
We study a simple model in which the recently reported 750 GeV diphoton excess arises from a composite pseudo Nambu-Goldstone boson — hidden pion — produced by gluon fusion and decaying into two photons. The model only introduces an extra hidden gauge group at the TeV scale with a vectorlike quark in the bifundamental representation of the hidden and standard model gauge groups. We calculate the masses of all the hidden pions and analyze their experimental signatures and constraints. We find that two colored hidden pions must be near the current experimental limits, and hence will be probed in the near future. We study the physics of would-be stable particles — the composite states that do not decay purely by the hidden and standard model gauge dynamics — in detail, including constraints from cosmology. We discuss possible theoretical structures above the TeV scale, e.g. conformal dynamics and supersymmetry, and their phenomenological implications. We also discuss an extension of the minimal model in which there is an extra hidden quark that is singlet under the standard model and has a mass smaller than the hidden dynamical scale. This provides two standard model singlet hidden pions that can both be viewed as diphoton/diboson resonances produced by gluon fusion. We discuss several scenarios in which these (and other) resonances can be used to explain various excesses seen in the LHC data.
De Koster, J; Hostens, M; Hermans, K; Van den Broeck, W; Opsomer, G
2016-10-01
The aim of the present research was to compare different measures of insulin sensitivity in dairy cows at the end of the dry period. To do so, 10 clinically healthy dairy cows with a varying body condition score were selected. By performing hyperinsulinemic euglycemic clamp (HEC) tests, we previously demonstrated a negative association between the insulin sensitivity and insulin responsiveness of glucose metabolism and the body condition score of these animals. In the same animals, other measures of insulin sensitivity were determined and their correlation with the HEC test, which is considered the gold standard, was calculated. Measures derived from the intravenous glucose tolerance test (IVGTT) are based on the disappearance of glucose after an intravenous glucose bolus. Glucose concentrations during the IVGTT were used to calculate the area under the curve of glucose and the clearance rate of glucose. In addition, glucose and insulin data from the IVGTT were fitted in the minimal model to derive the insulin sensitivity parameter, Si. Based on blood samples taken before the start of the IVGTT, basal concentrations of glucose, insulin, NEFA, and β-hydroxybutyrate were determined and used to calculate surrogate indices for insulin sensitivity, such as the homeostasis model of insulin resistance, the quantitative insulin sensitivity check index, the revised quantitative insulin sensitivity check index and the revised quantitative insulin sensitivity check index including β-hydroxybutyrate. Correlation analysis revealed no association between the results obtained by the HEC test and any of the surrogate indices for insulin sensitivity. For the measures derived from the IVGTT, the area under the curve for the first 60 min of the test and the Si derived from the minimal model demonstrated good correlation with the gold standard. Copyright © 2016 Elsevier Inc. All rights reserved.
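Two of the surrogate indices named above are simple functions of basal glucose and insulin. A minimal sketch, assuming the common human unit conventions (glucose in mmol/L or mg/dL, insulin in microU/mL), which may differ from the bovine formulas actually used:

```python
# Surrogate insulin-sensitivity indices from basal blood values.
# Input values below are illustrative, not data from the study.
import math

def homa_ir(glucose_mmol_l: float, insulin_uU_ml: float) -> float:
    """Homeostasis model assessment of insulin resistance."""
    return glucose_mmol_l * insulin_uU_ml / 22.5

def quicki(glucose_mg_dl: float, insulin_uU_ml: float) -> float:
    """Quantitative insulin sensitivity check index."""
    return 1.0 / (math.log10(insulin_uU_ml) + math.log10(glucose_mg_dl))

print(homa_ir(4.5, 8.0))    # ~1.6
print(quicki(81.0, 8.0))    # ~0.36
```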
Stability of radiomic features in CT perfusion maps
NASA Astrophysics Data System (ADS)
Bogowicz, M.; Riesterer, O.; Bundschuh, R. A.; Veit-Haibach, P.; Hüllner, M.; Studer, G.; Stieb, S.; Glatz, S.; Pruschy, M.; Guckenberger, M.; Tanadini-Lang, S.
2016-12-01
This study aimed to identify a set of stable radiomic parameters in CT perfusion (CTP) maps with respect to CTP calculation factors and image discretization, as an input for future prognostic models for local tumor response to chemo-radiotherapy. Pre-treatment CTP images of eleven patients with oropharyngeal carcinoma and eleven patients with non-small cell lung cancer (NSCLC) were analyzed. 315 radiomic parameters were studied per perfusion map (blood volume, blood flow and mean transit time). Radiomics robustness was investigated regarding the potentially standardizable (image discretization method, Hounsfield unit (HU) threshold, voxel size and temporal resolution) and non-standardizable (artery contouring and noise threshold) perfusion calculation factors using the intraclass correlation (ICC). To gain added value for our model radiomic parameters correlated with tumor volume, a well-known predictive factor for local tumor response to chemo-radiotherapy, were excluded from the analysis. The remaining stable radiomic parameters were grouped according to inter-parameter Spearman correlations and for each group the parameter with the highest ICC was included in the final set. The acceptance level was 0.9 and 0.7 for the ICC and correlation, respectively. The image discretization method using fixed number of bins or fixed intervals gave a similar number of stable radiomic parameters (around 40%). The potentially standardizable factors introduced more variability into radiomic parameters than the non-standardizable ones with 56-98% and 43-58% instability rates, respectively. The highest variability was observed for voxel size (instability rate >97% for both patient cohorts). Without standardization of CTP calculation factors none of the studied radiomic parameters were stable. After standardization with respect to non-standardizable factors ten radiomic parameters were stable for both patient cohorts after correction for inter-parameter correlations. Voxel size, image discretization, HU threshold and temporal resolution have to be standardized to build a reliable predictive model based on CTP radiomics analysis.
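A rough sketch of the selection logic just described (ICC acceptance at 0.9, inter-parameter Spearman correlation at 0.7), assuming per-feature ICC values have already been computed; `features` and `icc` are hypothetical inputs, not the study's data:

```python
# Keep only stable features (ICC >= 0.9); within groups of mutually
# correlated features (|Spearman rho| >= 0.7), keep the highest-ICC member.
import numpy as np
from scipy.stats import spearmanr

def select_stable(features: dict, icc: dict, icc_min=0.9, rho_max=0.7):
    names = [n for n in features if icc[n] >= icc_min]
    names.sort(key=lambda n: icc[n], reverse=True)   # best ICC first
    kept = []
    for n in names:
        independent = all(
            abs(spearmanr(features[n], features[k])[0]) < rho_max
            for k in kept)
        if independent:
            kept.append(n)
    return kept

rng = np.random.default_rng(0)
feats = {"energy": rng.normal(size=22), "entropy": rng.normal(size=22)}
print(select_stable(feats, icc={"energy": 0.95, "entropy": 0.92}))
```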
Dose conversion coefficients for neutron exposure to the lens of the human eye
DOE Office of Scientific and Technical Information (OSTI.GOV)
Manger, Ryan P; Bellamy, Michael B; Eckerman, Keith F
Dose conversion coefficients for the lens of the human eye have been calculated for neutron exposure at energies from 1 x 10{sup -9} to 20 MeV and several standard orientations: anterior-to-posterior, rotational and right lateral. MCNPX version 2.6.0, a Monte Carlo-based particle transport package, was used to determine the energy deposited in the lens of the eye. The human eyeball model was updated by partitioning the lens into sensitive and insensitive volumes, since the anterior portion (sensitive volume) of the lens is more radiosensitive and prone to cataract formation. The updated eye model was used with the adult UF-ORNL mathematical phantom in the MCNPX transport calculations.
Standardized input for Hanford environmental impact statements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Napier, B.A.
1981-05-01
Models and computer programs for simulating the environmental behavior of radionuclides in the environment and the resulting radiation dose to humans have been developed over the years by the Environmental Analysis Section staff, Ecological Sciences Department at the Pacific Northwest Laboratory (PNL). Methodologies have evolved for calculating radiation doses from many exposure pathways for any type of release mechanism. Depending on the situation or process being simulated, different sets of computer programs, assumptions, and modeling techniques must be used. This report is a compilation of recommended computer programs and necessary input information for use in calculating doses to members of the general public for environmental impact statements prepared for DOE activities to be conducted on or near the Hanford Reservation.
Predicting mosaics and wildlife diversity resulting from fire disturbance to a forest ecosystem
NASA Astrophysics Data System (ADS)
Potter, Meredith W.; Kessell, Stephen R.
1980-05-01
A model for predicting community mosaics and wildlife diversity resulting from fire disturbance to a forest ecosystem is presented. It applies an algorithm that delineates the size and shape of each patch from grid-based input data and calculates standard diversity measures for the entire mosaic of community patches and their included animal species. The user can print these diversity calculations, maps of the current community-type-age-class mosaic, and maps of habitat utilization by each animal species. Furthermore, the user can print estimates of the changes in each that result from natural disturbance. Although independent of data and resolution level, the model is demonstrated and tested with data from the Lewis and Clark National Forest in Montana.
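One standard diversity measure such a model might report is the Shannon index over the patch mosaic. A minimal sketch with illustrative community types and areas, not data from the Lewis and Clark study:

```python
# Shannon diversity of a community mosaic from patch areas per type.
import math

def shannon_diversity(areas: dict) -> float:
    """H' = -sum(p_i * ln p_i) over community-type area proportions."""
    total = sum(areas.values())
    return -sum((a / total) * math.log(a / total)
                for a in areas.values() if a > 0)

mosaic = {"lodgepole pine": 420.0, "spruce-fir": 130.0, "meadow": 50.0}  # ha
print(shannon_diversity(mosaic))  # ~0.73
```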
Mathematical modeling of tomographic scanning of cylindrically shaped test objects
NASA Astrophysics Data System (ADS)
Kapranov, B. I.; Vavilova, G. V.; Volchkova, A. V.; Kuznetsova, I. S.
2018-05-01
The paper formulates mathematical relationships that describe the length of the radiation absorption path in the test object for a first-generation tomographic scanning scheme. A cylindrically shaped test object containing an arbitrary number of standard circular irregularities is used for the mathematical modeling. The obtained relationships are corrected with respect to the chemical composition and density of the test object material. Equations are derived to calculate the attenuation of cobalt-60 radiation passing through the test object, and an algorithm to calculate the radiation flux intensity is provided. The presented graphs describe the dependence of the change in the γ-quantum flux intensity on the radiation source position and the scanning angle of the test object.
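The geometric core of such a model is the length of a ray's intersection with a circular irregularity, which then enters a Beer-Lambert attenuation term. A hedged sketch with illustrative attenuation coefficients, not the paper's corrected values:

```python
# Chord length of a ray through a circular irregularity, and the
# transmitted intensity of a monoenergetic gamma beam (Beer-Lambert).
import math

def chord_length(ray_offset: float, radius: float) -> float:
    """Length of a straight ray through a circle at perpendicular offset."""
    if abs(ray_offset) >= radius:
        return 0.0
    return 2.0 * math.sqrt(radius**2 - ray_offset**2)

def transmitted_intensity(i0, mu_body, path_body, mu_defect, path_defect):
    """Attenuation along a path partly inside a defect of different mu."""
    return i0 * math.exp(-mu_body * (path_body - path_defect)
                         - mu_defect * path_defect)

d = chord_length(5.0, 12.0)          # ray 5 mm off-center, 12 mm cavity
print(d, transmitted_intensity(1.0, 0.06, 80.0, 0.0, d))
```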
Neutron Spectroscopic Factors from Transfer Reactions
NASA Astrophysics Data System (ADS)
Lee, Jenny; Tsang, M. B.
2007-05-01
We have extracted the ground state to ground state neutron spectroscopic factors for 80 nuclei ranging in Z from 3 to 24 by analyzing past measurements of the angular distributions from (d,p) and (p,d) reactions. We demonstrate an approach that provides systematic and consistent values with a minimum of assumptions. A three-body model with global optical potentials and a standard geometry for the neutron potential is applied. For the 60 nuclei where modern shell model calculations are available, such analysis reproduces, to within 20%, the experimental spectroscopic factors for most nuclei. If we constrain the nucleon-target optical potential and the geometries of the bound neutron wave function with modern Hartree-Fock calculations, our deduced neutron spectroscopic factors are reduced by 30% on average.
Search for a dark photon in e(+)e(-) collisions at BABAR.
Lees, J P; Poireau, V; Tisserand, V; Grauges, E; Palano, A; Eigen, G; Stugu, B; Brown, D N; Feng, M; Kerth, L T; Kolomensky, Yu G; Lee, M J; Lynch, G; Koch, H; Schroeder, T; Hearty, C; Mattison, T S; McKenna, J A; So, R Y; Khan, A; Blinov, V E; Buzykaev, A R; Druzhinin, V P; Golubev, V B; Kravchenko, E A; Onuchin, A P; Serednyakov, S I; Skovpen, Yu I; Solodov, E P; Todyshev, K Yu; Lankford, A J; Mandelkern, M; Dey, B; Gary, J W; Long, O; Campagnari, C; Franco Sevilla, M; Hong, T M; Kovalskyi, D; Richman, J D; West, C A; Eisner, A M; Lockman, W S; Panduro Vazquez, W; Schumm, B A; Seiden, A; Chao, D S; Cheng, C H; Echenard, B; Flood, K T; Hitlin, D G; Miyashita, T S; Ongmongkolkul, P; Porter, F C; Andreassen, R; Huard, Z; Meadows, B T; Pushpawela, B G; Sokoloff, M D; Sun, L; Bloom, P C; Ford, W T; Gaz, A; Smith, J G; Wagner, S R; Ayad, R; Toki, W H; Spaan, B; Bernard, D; Verderi, M; Playfer, S; Bettoni, D; Bozzi, C; Calabrese, R; Cibinetto, G; Fioravanti, E; Garzia, I; Luppi, E; Piemontese, L; Santoro, V; Calcaterra, A; de Sangro, R; Finocchiaro, G; Martellotti, S; Patteri, P; Peruzzi, I M; Piccolo, M; Rama, M; Zallo, A; Contri, R; Lo Vetere, M; Monge, M R; Passaggio, S; Patrignani, C; Robutti, E; Bhuyan, B; Prasad, V; Adametz, A; Uwer, U; Lacker, H M; Dauncey, P D; Mallik, U; Chen, C; Cochran, J; Prell, S; Ahmed, H; Gritsan, A V; Arnaud, N; Davier, M; Derkach, D; Grosdidier, G; Le Diberder, F; Lutz, A M; Malaescu, B; Roudeau, P; Stocchi, A; Wormser, G; Lange, D J; Wright, D M; Coleman, J P; Fry, J R; Gabathuler, E; Hutchcroft, D E; Payne, D J; Touramanis, C; Bevan, A J; Di Lodovico, F; Sacco, R; Cowan, G; Bougher, J; Brown, D N; Davis, C L; Denig, A G; Fritsch, M; Gradl, W; Griessinger, K; Hafner, A; Schubert, K R; Barlow, R J; Lafferty, G D; Cenci, R; Hamilton, B; Jawahery, A; Roberts, D A; Cowan, R; Sciolla, G; Cheaib, R; Patel, P M; Robertson, S H; Neri, N; Palombo, F; Cremaldi, L; Godang, R; Sonnek, P; Summers, D J; Simard, M; Taras, P; De Nardo, G; Onorato, G; Sciacca, C; Martinelli, M; Raven, G; Jessop, C P; LoSecco, J M; Honscheid, K; Kass, R; Feltresi, E; Margoni, M; Morandin, M; Posocco, M; Rotondo, M; Simi, G; Simonetto, F; Stroili, R; Akar, S; Ben-Haim, E; Bomben, M; Bonneaud, G R; Briand, H; Calderini, G; Chauveau, J; Leruste, Ph; Marchiori, G; Ocariz, J; Biasini, M; Manoni, E; Pacetti, S; Rossi, A; Angelini, C; Batignani, G; Bettarini, S; Carpinelli, M; Casarosa, G; Cervelli, A; Chrzaszcz, M; Forti, F; Giorgi, M A; Lusiani, A; Oberhof, B; Paoloni, E; Perez, A; Rizzo, G; Walsh, J J; Lopes Pegna, D; Olsen, J; Smith, A J S; Faccini, R; Ferrarotto, F; Ferroni, F; Gaspero, M; Li Gioi, L; Pilloni, A; Piredda, G; Bünger, C; Dittrich, S; Grünberg, O; Hartmann, T; Hess, M; Leddig, T; Voß, C; Waldi, R; Adye, T; Olaiya, E O; Wilson, F F; Emery, S; Vasseur, G; Anulli, F; Aston, D; Bard, D J; Cartaro, C; Convery, M R; Dorfan, J; Dubois-Felsmann, G P; Dunwoodie, W; Ebert, M; Field, R C; Fulsom, B G; Graham, M T; Hast, C; Innes, W R; Kim, P; Leith, D W G S; Lewis, P; Lindemann, D; Luitz, S; Luth, V; Lynch, H L; MacFarlane, D B; Muller, D R; Neal, H; Perl, M; Pulliam, T; Ratcliff, B N; Roodman, A; Salnikov, A A; Schindler, R H; Snyder, A; Su, D; Sullivan, M K; Va'vra, J; Wisniewski, W J; Wulsin, H W; Purohit, M V; White, R M; Wilson, J R; Randle-Conde, A; Sekula, S J; Bellis, M; Burchat, P R; Puccio, E M T; Alam, M S; Ernst, J A; Gorodeisky, R; Guttman, N; Peimer, D R; Soffer, A; Spanier, S M; Ritchie, J L; Ruland, A M; Schwitters, R F; Wray, B C; Izen, J M; Lou, X C; Bianchi, F; De Mori, F; 
Filippi, A; Gamba, D; Lanceri, L; Vitale, L; Martinez-Vidal, F; Oyanguren, A; Villanueva-Perez, P; Albert, J; Banerjee, Sw; Beaulieu, A; Bernlochner, F U; Choi, H H F; King, G J; Kowalewski, R; Lewczuk, M J; Lueck, T; Nugent, I M; Roney, J M; Sobie, R J; Tasneem, N; Gershon, T J; Harrison, P F; Latham, T E; Band, H R; Dasu, S; Pan, Y; Prepost, R; Wu, S L
2014-11-14
Dark sectors charged under a new Abelian interaction have recently received much attention in the context of dark matter models. These models introduce a light new mediator, the so-called dark photon (A^{'}), connecting the dark sector to the standard model. We present a search for a dark photon in the reaction e^{+}e^{-}→γA^{'}, A^{'}→e^{+}e^{-}, μ^{+}μ^{-} using 514 fb^{-1} of data collected with the BABAR detector. We observe no statistically significant deviations from the standard model predictions, and we set 90% confidence level upper limits on the mixing strength between the photon and dark photon at the level of 10^{-4}-10^{-3} for dark photon masses in the range 0.02-10.2 GeV. We further constrain the range of the parameter space favored by interpretations of the discrepancy between the calculated and measured anomalous magnetic moment of the muon.
Beard, Brian B; Kainz, Wolfgang
2004-10-13
We reviewed articles using computational RF dosimetry to compare the Specific Anthropomorphic Mannequin (SAM) to anatomically correct models of the human head. Published conclusions based on such comparisons have varied widely. We looked for reasons that might cause apparently similar comparisons to produce dissimilar results. We also looked at the information needed to adequately compare the results of computational RF dosimetry studies. We concluded studies were not comparable because of differences in definitions, models, and methodology. Therefore we propose a protocol, developed by an IEEE standards group, as an initial step in alleviating this problem. The protocol calls for a benchmark validation study comparing the SAM phantom to two anatomically correct models of the human head. It also establishes common definitions and reporting requirements that will increase the comparability of all computational RF dosimetry studies of the human head.
Interference between light and heavy neutrinos for 0 νββ decay in the left–right symmetric model
Ahmed, Fahim; Neacsu, Andrei; Horoi, Mihai
2017-03-31
Neutrinoless double-beta decay is proposed as an important low energy phenomenon that could test beyond the Standard Model physics. There are several potentially competing beyond the Standard Model mechanisms that can induce the process. It thus becomes important to disentangle the different processes. In the present study we consider the interference effect between the light left-handed and heavy right-handed Majorana neutrino exchange mechanisms. The decay rate, and consequently, the phase-space factors for the interference term are derived, based on the left–right symmetric model. The numerical values for the interference phase-space factors for several nuclides are calculated, taking into consideration the relativistic Coulomb distortion of the electron wave function and the finite size of the nucleus. As a result, the variation of the interference effect with the Q-value of the process is studied.
Hyperquarks and bosonic preon bound states
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmid, Michael L.; Buchmann, Alfons J.
2009-11-01
In a model in which leptons, quarks, and the recently introduced hyperquarks are built up from two fundamental spin-(1/2) preons, the standard model weak gauge bosons emerge as preon bound states. In addition, the model predicts a host of new composite gauge bosons, in particular, those responsible for hyperquark and proton decay. Their presence entails a left-right symmetric extension of the standard model weak interactions and a scheme for a partial and grand unification of nongravitational interactions based on, respectively, the effective gauge groups SU(6){sub P} and SU(9){sub G}. This leads to a prediction of the Weinberg angle at low energies in good agreement with experiment. Furthermore, using evolution equations for the effective coupling strengths, we calculate the partial and grand unification scales, the hyperquark mass scale, as well as the mass and decay rate of the lightest hyperhadron.
Straub, Niels; Grunert, Philipp; von Kries, Rüdiger; Koletzko, Berthold
2011-12-01
The reported effect sizes of early nutrition programming on long-term health outcomes are often small, and it has been questioned whether early interventions would be worthwhile in enhancing public health. We explored the possible health economic consequences of early nutrition programming by performing a model calculation, based on the only published study currently available for analysis, to evaluate the effects of supplementing infant formula with long-chain polyunsaturated fatty acids (LC-PUFAs) on lowering blood pressure and lowering the risk of hypertension-related diseases in later life. The costs and health effects of LC-PUFA-enriched and standard infant formulas were compared by using a Markov model, including all relevant direct and indirect costs based on German statistics. We assessed the effect size of blood pressure reduction from LC-PUFA-supplemented formula, the long-term persistence of the effect, and the effect of lowered blood pressure on hypertension-related morbidity. The cost-effectiveness analysis showed an increased life expectancy of 1.2 quality-adjusted life-years and an incremental cost-effectiveness ratio of -630 Euros (discounted to present value) for the LC-PUFA formula in comparison with standard formula. LC-PUFA nutrition was the superior strategy even when the blood pressure-lowering effect was reduced to the lower 95% CI. Breastfeeding is the recommended feeding practice, but infants who are not breastfed should receive an appropriate infant formula. Following this model calculation, LC-PUFA supplementation of infant formula represents an economically worthwhile prevention strategy, based on the costs derived from hypertension-linked diseases in later life. However, because our analysis was based on a single randomized controlled trial, further studies are required to verify the validity of this thesis.
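The headline numbers above follow from standard incremental cost-effectiveness arithmetic. A minimal sketch; the cost and QALY inputs are placeholders chosen only so that the quotient matches the reported -630 Euros per QALY, not the study's Markov-model outputs:

```python
# Incremental cost-effectiveness ratio (ICER) of a new strategy vs standard.
def icer(cost_new: float, cost_std: float,
         effect_new: float, effect_std: float) -> float:
    """Incremental cost per unit of effect (e.g., Euros per QALY)."""
    return (cost_new - cost_std) / (effect_new - effect_std)

# A negative ICER with a positive incremental effect means the new
# strategy is dominant: more QALYs at lower cost.
print(icer(cost_new=9_500.0, cost_std=10_256.0,
           effect_new=71.2, effect_std=70.0))   # -630.0
```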
Size exclusion deep bed filtration: Experimental and modelling uncertainties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Badalyan, Alexander, E-mail: alexander.badalyan@adelaide.edu.au; You, Zhenjiang; Aji, Kaiser
A detailed uncertainty analysis associated with carboxyl-modified latex particle capture in glass bead-formed porous media enabled verification of two theoretical stochastic models for prediction of particle retention due to size exclusion. At the beginning of this analysis it is established that size exclusion is the dominant particle capture mechanism in the present study: the calculated significant repulsive Derjaguin-Landau-Verwey-Overbeek potential between latex particles and glass beads indicates their mutual repulsion, thus fulfilling the necessary condition for size exclusion. Applying the linear uncertainty propagation method in the form of a truncated Taylor's series expansion, combined standard uncertainties (CSUs) in normalised suspended particle concentrations are calculated using CSUs in experimentally determined parameters such as the inlet volumetric flowrate of suspension, particle numbers in suspensions, particle concentrations in inlet and outlet streams, and particle and pore throat size distributions. Weathering of glass beads in highly alkaline solutions does not appreciably change the particle size distribution and is therefore not considered an additional contributor to the weighted mean particle radius and the corresponding weighted mean standard deviation. The weighted mean particle radius and the LogNormal mean pore throat radius are characterised by the highest CSUs among all experimental parameters, translating to a high CSU in the jamming ratio factor (dimensionless particle size). Normalised suspended particle concentrations calculated via the two theoretical models are characterised by higher CSUs than those for the experimental data. The model accounting for the fraction of inaccessible flow as a function of latex particle radius excellently predicts normalised suspended particle concentrations over the whole range of jamming ratios. The presented uncertainty analysis can also be used for comparison of intra- and inter-laboratory particle size exclusion data.
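A minimal sketch of first-order (truncated Taylor series) uncertainty propagation for uncorrelated inputs, using numerical partial derivatives; the example function is an assumption for illustration, not the paper's model:

```python
# Combined standard uncertainty of y = f(x1..xn) from input standard
# uncertainties u_i, via central-difference partial derivatives.
import math

def combined_standard_uncertainty(f, x, u, h=1e-6):
    """CSU of f at point x, inputs assumed uncorrelated."""
    var = 0.0
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        dfdx = (f(xp) - f(xm)) / (2.0 * h)
        var += (dfdx * u[i]) ** 2
    return math.sqrt(var)

# e.g. a normalised concentration c_out / c_in
f = lambda v: v[0] / v[1]
print(combined_standard_uncertainty(f, x=[0.82, 1.00], u=[0.02, 0.01]))
```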
Dynamic performance of a suspended reinforced concrete footbridge under pedestrian movements
NASA Astrophysics Data System (ADS)
Drygala, I.; Dulinska, J.; Kondrat, K.
2018-02-01
In the paper the dynamic analysis of a suspended reinforced concrete footbridge over a national road in South Poland was carried out. Firstly, the modes and natural frequencies of vibration of the structure were calculated. The numerical modal investigation showed that the natural frequencies of the structure coincide with the step frequencies of humans in motion (walking fast or running). Hence, to assess the comfort standards, the dynamic response of the footbridge to a running pedestrian had to be calculated. Secondly, the dynamic response of the footbridge was calculated for two models of the dynamic force produced by a single running pedestrian: a 'sine' and a 'half-sine' model. The accelerations and displacements obtained for the 'half-sine' force model were up to 20% greater than those obtained for the 'sine' model. The 'sine' model is appropriate only for walking users of the walkway, because their motion is continuous in character. For running users this assumption fails, since the force produced by a running pedestrian is discontinuous; in this scenario the 'half-sine' model is more suitable. Finally, the comfort conditions for the footbridge were evaluated. The analysis proved that the vertical comfort criteria were not exceeded for a single user of the footbridge running or walking fast.
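A hedged sketch of the two single-pedestrian vertical force models being compared: the 'sine' model superposes continuous harmonics of the step frequency, while the 'half-sine' model applies an impulse only during ground contact. Weight, step frequency, contact time and coefficients are illustrative assumptions, not the paper's parameters:

```python
# Vertical pedestrian force models: continuous 'sine' vs impulsive 'half-sine'.
import math

G, F_STEP, T_CONTACT = 700.0, 2.8, 0.18     # weight (N), step rate (Hz), contact (s)

def force_sine(t: float, alphas=(0.4, 0.1)) -> float:
    """Static weight plus harmonics of the step frequency (walking model)."""
    return G * (1.0 + sum(a * math.sin(2.0 * math.pi * (i + 1) * F_STEP * t)
                          for i, a in enumerate(alphas)))

def force_half_sine(t: float, peak_factor: float = 2.6) -> float:
    """Half-sine impulse during contact, zero while airborne (running model)."""
    phase = t % (1.0 / F_STEP)              # time since last footfall
    if phase < T_CONTACT:
        return peak_factor * G * math.sin(math.pi * phase / T_CONTACT)
    return 0.0
```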
Arivazhagan, M; Muniappan, P; Meenakshi, R; Rajavel, G
2013-03-15
This study represents an integral approach towards understanding the electronic and structural aspects of 1-bromo-2,3-dichlorobenzene (BDCB). The experimental spectral bands were structurally assigned with the aid of theoretical calculations, and the thermodynamic properties of the studied compound were obtained from the theoretically calculated frequencies. The relationship between the structure and the absorption spectrum, and the effects of solvents, are discussed. It turns out that the hybrid PBE1PBE functional with the 6-311+G(d,p) basis provides reliable λ(max) values when solvent effects are included in the model. The NBO analysis reveals that the studied compound presents a structural characteristic of electron transfer within the compound. The frontier molecular orbitals (HOMO-LUMO) are responsible for the electron polarization and electron-transfer properties. The reactivity sites are identified by mapping the electron density onto the electrostatic potential surface (MESP). Besides, (13)C and (1)H chemical shifts have been calculated using the gauge-invariant atomic orbital (GIAO) method. The thermodynamic properties at different temperatures were calculated, revealing the correlations between standard heat capacity, standard entropy, standard enthalpy changes and temperature. Furthermore, the studied compound can be used as a good nonlinear optical material due to its high first hyperpolarizability (5.7 times greater than that of urea (0.37289×10(-30) esu)). Finally, it is worth mentioning that the solvent induces a considerable red shift of the absorption maximum going from the gas phase, and a slight blue shift of the S(0)→S(1) transition going from less polar to more polar solvents. Copyright © 2012 Elsevier B.V. All rights reserved.
[Development of ophthalmologic software for handheld devices].
Grottone, Gustavo Teixeira; Pisa, Ivan Torres; Grottone, João Carlos; Debs, Fernando; Schor, Paulo
2006-01-01
The formulas for calculation of intraocular lenses (IOLs) have evolved since the first theoretical formulas by Fyodorov. Among the second-generation formulas, the SRK-I formula has a simple calculation, involving only the anteroposterior (axial) length, an IOL constant and average keratometry. As those formulas evolved, their complexity increased, making the reconfiguration of parameters in special situations impracticable. The production and development of software for this purpose can therefore help surgeons recalculate those values when needed. The aims were to conceive, develop and test a Brazilian software package for calculation of IOL dioptric power on handheld computers. For the development and programming of the IOL calculation software, we used the PocketC program (OrbWorks Concentrated Software, USA). We compared the results collected from a gold-standard device (Ultrascan/Alcon Labs) with a simulation of 100 fictitious patients, using the same IOL parameters. The results were grouped as ULTRASCAN data and SOFTWARE data. Using the SRK/T formula, the parameters ranged over keratometry between 35 and 55 D, axial length between 20 and 28 mm, and IOL constants of 118.7, 118.3 and 115.8. The Wilcoxon test showed that the groups did not differ (p=0.314). The Ultrascan sample varied between 11.82 and 27.97; the variation in the tested program sample was practically identical (11.83-27.98). The average of the Ultrascan group was 20.93, and the software group had a similar average. The standard deviation of the samples was also similar (4.53). The precision of the IOL software for handheld devices was similar to that of the standard device using the SRK/T formula. The software worked properly and was stable, without bugs in the tested versions of the operating system.
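The second-generation SRK-I relation mentioned above is simple enough to show in a few lines; note the study's actual comparisons used the longer vergence-based SRK/T formula, so this is only an illustration with assumed inputs:

```python
# SRK-I intraocular lens power: P = A - 2.5 * L - 0.9 * K,
# with A the IOL constant, L axial length (mm), K average keratometry (D).
def srk1_iol_power(a_constant: float, k_avg: float, axial_length: float) -> float:
    return a_constant - 2.5 * axial_length - 0.9 * k_avg

print(srk1_iol_power(a_constant=118.7, k_avg=44.0, axial_length=23.5))  # ~20.3 D
```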
ERIC Educational Resources Information Center
Larsen, Ralph I.
1973-01-01
Makes recommendations for a single air quality data system (using averaging time) for interrelating air pollution effects, air quality standards, air quality monitoring, diffusion calculations, source-reduction calculations, and emission standards. (JR)
NASA Astrophysics Data System (ADS)
Osipova, Irina Y.; Chyzh, Igor H.
2001-06-01
The influence of eye jumps on the accuracy of estimating Zernike coefficients from eye transverse aberration measurements was investigated. Ametropia and astigmatism were examined by computer modeling, and the standard deviation of the wave aberration function was calculated. It was determined that the standard deviation of the wave aberration function reaches its minimum value when the number of scanning points equals the number of eye jumps in the scanning period. Recommendations for the duration of measurement were worked out.
Towards a well-founded and reproducible snow load map for Austria
NASA Astrophysics Data System (ADS)
Winkler, Michael; Schellander, Harald
2017-04-01
"EN 1991-1-3 Eurocode 1: Part 1-3: Snow Loads" provides standard for the determination of the snow load to be used for the structural design of buildings etc. Since 2006 national specifications for Austria define a snow load map with four "load zones", allowing the calculation of the characteristic ground snow load sk for locations below 1500 m asl. A quadratic regression between altitude and sk is used, as suggested by EN 1991-1-3. The actual snow load map is based on best meteorological practice, but still it is somewhat subjective and non-reproducible. Underlying snow data series often end in the 1980s; in the best case data until about 2005 is used. Moreover, extreme value statistics only rely on the Gumbel distribution and the way in which snow depths are converted to snow loads is generally unknown. This might be enough reasons to rethink the snow load standard for Austria, all the more since today's situation is different to what it was some 15 years ago: Firstly, Austria is rich of multi-decadal, high quality snow depth measurements. These data are not well represented in the actual standard. Secondly, semi-empirical snow models allow sufficiently precise calculations of snow water equivalents and snow loads from snow depth measurements without the need of other parameters like temperature etc. which often are not available at the snow measurement sites. With the help of these tools, modelling of daily snow load series from daily snow depth measurements is possible. Finally, extreme value statistics nowadays offers convincing methods to calculate snow depths and loads with a return period of 50 years, which is the base of sk, and allows reproducible spatial extrapolation. The project introduced here will investigate these issues in order to update the Austrian snow load standard by providing a well-founded and reproducible snow load map for Austria. Not least, we seek for contact with standards bodies of neighboring countries to find intersections as well as to avoid inconsistencies and duplications of effort.
40 CFR 1066.630 - PDP, SSV, and CFV flow rate calculations.
Code of Federal Regulations, 2014 CFR
2014-07-01
... per revolution, as determined in paragraph (a)(2) of this section. T_std = standard temperature = 293.15 K. p_std = standard pressure = 101.325 kPa. (2) Calculate V_rev using the equation given in the regulation (Federal Register equation graphic ER28AP14), where T_std = standard temperature, p_std = standard pressure, Z = compressibility factor, and M_mix = molar mass of the gas mixture.
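The gist of these provisions is referencing a measured flow to standard temperature and pressure. A minimal sketch of such a correction for a positive-displacement pump (PDP), assuming the ideal-gas form; the function and inputs are illustrative, not the regulation's exact equation:

```python
# Volumetric flow referenced to standard conditions for a PDP:
# actual flow scaled by (p_in / p_std) * (T_std / T_in).
T_STD_K, P_STD_KPA = 293.15, 101.325

def pdp_std_flow(v_rev_m3: float, speed_rev_s: float,
                 p_in_kpa: float, t_in_k: float) -> float:
    """Standard volumetric flow rate (m^3/s) from pump speed and inlet state."""
    return v_rev_m3 * speed_rev_s * (p_in_kpa / P_STD_KPA) * (T_STD_K / t_in_k)

print(pdp_std_flow(v_rev_m3=0.006, speed_rev_s=25.0,
                   p_in_kpa=98.0, t_in_k=308.0))   # ~0.138 m^3/s
```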
SU-F-T-69: Correction Model of NIPAM Gel and Presage for Electron and Proton PDD Measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, C; Lin, C; Tu, P
Purpose: The current standard equipment for proton PDD measurement is the multilayer parallel ion chamber, which is expensive and complex to operate. NIPAM gel and Presage are alternative options for PDD measurement, but because of their different stopping powers their results need to be corrected. This study aims to create a correction model for NIPAM-gel and Presage PDD measurement. Methods: Standard water-based PDD profiles for 6 MeV and 12 MeV electrons and 90 MeV protons were acquired. The electron PDD profile was measured with 1 cm of NIPAM gel added on top of the water, and again with an extra 1 cm of solid water (PTW RW3). The distance shifts among the standard PDD, NIPAM-gel PDD, and solid-water PDD at R50% were compared and a water equivalent thickness correction factor (WET) was calculated. A similar process was repeated to obtain WETs for electrons with Presage, protons with NIPAM gel, and protons with Presage. PDD profiles of electrons and protons with NIPAM-gel and Presage columns were corrected with each WET, and the corrected profiles were compared with the standard profiles. Results: The WET for 12 MeV electrons with NIPAM gel was 1.135, and 1.034 for 12 MeV electrons with Presage. After correction, the PDD profiles matched the standard profiles well in the fall-off region; the differences at R50% were 0.26 mm shallower and 0.39 mm deeper, respectively. The same WETs were used to correct the 6 MeV electron profiles, demonstrating the energy independence of the electron WET; the difference at R50% was 0.17 mm deeper for NIPAM gel and 0.54 mm deeper for Presage. The WET for 90 MeV protons with NIPAM gel was 1.056, with a difference at R50% of 0.37 mm deeper. A quenching effect at the Bragg peak was revealed, with the dose there underestimated by 27%. Conclusion: This correction model can be used to correct PDD profiles to a depth error within 1 mm. With this correction model, NIPAM gel and Presage become practical for PDD profile measurement.
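A hedged sketch of how a WET factor follows from the measured R50% shift, assuming a slab of thickness t shifts the fall-off by (WET - 1) * t; the helper names and numbers are illustrative:

```python
# Water-equivalent thickness (WET) factor from an R50% shift measurement,
# and the resulting depth correction for profiles measured beneath the slab.
def wet_factor(slab_thickness_mm: float, r50_shift_mm: float) -> float:
    """WET of a slab from the extra range loss (shift) it causes."""
    return (slab_thickness_mm + r50_shift_mm) / slab_thickness_mm

def corrected_depth(measured_depth_mm: float, slab_thickness_mm: float,
                    wet: float) -> float:
    """Map a depth measured beneath the slab to water-equivalent depth."""
    return measured_depth_mm + slab_thickness_mm * (wet - 1.0)

print(wet_factor(10.0, 1.35))           # 1.135, cf. the electron/NIPAM value
print(corrected_depth(45.0, 10.0, 1.135))
```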
Simulating Turbulent Wind Fields for Offshore Turbines in Hurricane-Prone Regions (Poster)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Y.; Damiani, R.; Musial, W.
Extreme wind load cases are among the most important external conditions in the design of offshore wind turbines in hurricane-prone regions. Furthermore, in these areas the increase in load with storm return period is higher than in extra-tropical regions. However, current standards contain limited information on appropriate models for simulating wind loads from hurricanes. This study investigates turbulent wind models for load analysis of offshore wind turbines subjected to hurricane conditions. The extreme wind models suggested in IEC 61400-3 and API/ABS (a widely used standard in the oil and gas industry) are investigated, and the wind turbine response to hurricane wind loads is examined. The three-dimensional wind simulator TurbSim is modified to include the API wind model. Wind fields simulated using the IEC and API wind models are applied to an offshore wind turbine model established in FAST to calculate turbine loads and response.
Blum, Thomas; Chowdhury, Saumitra; Hayakawa, Masashi; Izubuchi, Taku
2015-01-09
The most compelling possibility for a new law of nature beyond the four fundamental forces comprising the standard model of high-energy physics is the discrepancy between measurements and calculations of the muon anomalous magnetic moment. Until now a key part of the calculation, the hadronic light-by-light contribution, has only been accessible from models of QCD, the quantum description of the strong force, whose accuracy at the required level may be questioned. A first principles calculation with systematically improvable errors is needed, along with the upcoming experiments, to decisively settle the matter. For the first time, the form factor that yields the light-by-light scattering contribution to the muon anomalous magnetic moment is computed in such a framework, lattice QCD+QED and QED. A nonperturbative treatment of QED is used and checked against perturbation theory. The hadronic contribution is calculated for unphysical quark and muon masses, and only the diagram with a single quark loop is computed for which statistically significant signals are obtained. Initial results are promising, and the prospect for a complete calculation with physical masses and controlled errors is discussed.
Jobs and Renewable Energy Project
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sterzinger, George
2006-12-19
Early in 2002, REPP developed the Jobs Calculator, a tool that calculates the number of direct jobs resulting from renewable energy development under RPS (Renewable Portfolio Standard) legislation or other programs to accelerate renewable energy development. The calculator is based on a survey of current industry practices to assess the number and type of jobs that will result from the enactment of an RPS. This project built upon and significantly enhanced the initial Jobs Calculator model by (1) expanding the survey to include other renewable technologies (the original model was limited to wind, solar PV and biomass co-firing technologies); (2) more precisely calculating the economic development benefits related to renewable energy development; (3) completing and regularly updating the survey of the commercially active renewable energy firms to determine kinds and number of jobs directly created; and (4) developing and implementing a technology to locate where the economic activity related to each type of renewable technology is likely to occur. REPP worked directly with groups in the State of Nevada to interpret the results and develop policies to capture as much of the economic benefits as possible for the state through technology selection, training program options, and outreach to manufacturing groups.
BARTTest: Community-Standard Atmospheric Radiative-Transfer and Retrieval Tests
NASA Astrophysics Data System (ADS)
Harrington, Joseph; Himes, Michael D.; Cubillos, Patricio E.; Blecic, Jasmina; Challener, Ryan C.
2018-01-01
Atmospheric radiative transfer (RT) codes are used both to predict planetary and brown-dwarf spectra and in retrieval algorithms to infer atmospheric chemistry, clouds, and thermal structure from observations. Observational plans, theoretical models, and scientific results depend on the correctness of these calculations. Yet, the calculations are complex and the codes implementing them are often written without modern software-verification techniques. The community needs a suite of test calculations with analytically, numerically, or at least community-verified results. We therefore present the Bayesian Atmospheric Radiative Transfer Test Suite, or BARTTest. BARTTest has four categories of tests: analytically verified RT tests of simple atmospheres (single line in single layer, line blends, saturation, isothermal, multiple line-list combination, etc.), community-verified RT tests of complex atmospheres, synthetic retrieval tests on simulated data with known answers, and community-verified real-data retrieval tests. BARTTest is open-source software intended for community use and further development. It is available at https://github.com/ExOSPORTS/BARTTest. We propose this test suite as a standard for verifying atmospheric RT and retrieval codes, analogous to the Held-Suarez test for general circulation models. This work was supported by NASA Planetary Atmospheres grant NX12AI69G, NASA Astrophysics Data Analysis Program grant NNX13AF38G, and NASA Exoplanets Research Program grant NNX17AB62G.
Thermochemistry of myricetin flavonoid
NASA Astrophysics Data System (ADS)
Abil'daeva, A. Z.; Kasenova, Sh. B.; Kasenov, B. K.; Sagintaeva, Zh. I.; Kuanyshbekov, E. E.; Rakhimova, B. B.; Polyakov, V. V.; Adekenov, S. M.
2014-08-01
The enthalpies of myricetin dissolution are measured by calorimetry at flavonoid : 96 mol % ethanol dilutions of 1:9000, 1:18000, and 1:36000. The standard enthalpies of dissolution of the biologically active substance in an infinitely diluted (standard) solution of 96% ethanol are calculated from the experimental data. Physicochemical approximation methods are used to estimate the standard enthalpy of combustion, and the enthalpy of melting is calculated for the investigated flavonoid. Finally, the compound's standard enthalpy of formation is calculated using a Hess cycle.
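A minimal sketch of the Hess-cycle arithmetic used in the last step: the formation enthalpy follows from the formation enthalpies of the combustion products and the combustion enthalpy. The stoichiometry is that of myricetin (C15H10O8), but the combustion enthalpy below is an illustrative assumption, not the paper's estimate:

```python
# Standard enthalpy of formation via a combustion Hess cycle:
# dHf(compound) = sum dHf(products) - dHc(compound).
def delta_h_formation(dh_combustion_kj: float, n_co2: int, n_h2o: int,
                      dhf_co2: float = -393.5, dhf_h2o: float = -285.8) -> float:
    """Standard formation enthalpy (kJ/mol) from combustion data."""
    return n_co2 * dhf_co2 + n_h2o * dhf_h2o - dh_combustion_kj

# Myricetin: C15H10O8 + O2 -> 15 CO2 + 5 H2O (dHc value is hypothetical)
print(delta_h_formation(dh_combustion_kj=-6300.0, n_co2=15, n_h2o=5))
```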
NASA Astrophysics Data System (ADS)
Glushkov, A. V.; Khetselius, O. Yu; Agayar, E. V.; Buyadzhi, V. V.; Romanova, A. V.; Mansarliysky, V. F.
2017-10-01
We present a new effective approach to analysing and modelling natural air ventilation in the atmosphere of an industrial city, based on the Arakawa-Schubert and Glushkov models, modified to calculate the current entrainment of the cloud ensemble, and on advanced mathematical methods for modelling unsteady turbulence in the urban area. For the first time, methods of a plane complex field and spectral expansion algorithms are applied to calculate the air circulation for cloud layer arrays penetrating the territory of an industrial city. We have also accounted for the mechanisms of transformation of cloud-system advection over the urban area. The results of test computations of air ventilation characteristics are presented for the city of Odessa. All the cited methods and models, together with standard monitoring and management systems, can be considered a basis for a comprehensive "Green City" construction technology.
Three-dimensional finite element modelling of muscle forces during mastication.
Röhrle, Oliver; Pullan, Andrew J
2007-01-01
This paper presents a three-dimensional finite element model of human mastication. Specifically, an anatomically realistic model of the masseter muscles and associated bones is used to investigate the dynamics of chewing. A motion capture system is used to track the jaw motion of a subject chewing standard foods. The three-dimensional nonlinear deformation of the masseter muscles is calculated via the finite element method, using the jaw motion data as boundary conditions. Motion-driven muscle activation patterns and a transversely isotropic material law, defined in a muscle-fibre coordinate system, are used in the calculations. Time-force relationships are presented and analysed with respect to different tasks during mastication, e.g. opening, closing, and biting, and are also compared to a more traditional one-dimensional model. The results strongly suggest that, due to the complex arrangement of muscle force directions, modelling skeletal muscles as conventional one-dimensional lines of action might introduce a significant source of error.
3D Printing of Preoperative Simulation Models of a Splenic Artery Aneurysm: Precision and Accuracy.
Takao, Hidemasa; Amemiya, Shiori; Shibata, Eisuke; Ohtomo, Kuni
2017-05-01
Three-dimensional (3D) printing is attracting increasing attention in the medical field. This study aimed to apply 3D printing to the production of hollow splenic artery aneurysm models for use in the simulation of endovascular treatment, and to evaluate the precision and accuracy of the simulation model. From 3D computed tomography (CT) angiography data of a splenic artery aneurysm, 10 hollow models reproducing the vascular lumen were created using a fused deposition modeling-type desktop 3D printer. After filling with water, each model was scanned using T2-weighted magnetic resonance imaging for the evaluation of the lumen. All images were coregistered, binarized, and then combined to create an overlap map. The cross-sectional area of the splenic artery aneurysm and its standard deviation (SD) were calculated perpendicular to the x- and y-axes. Most voxels overlapped among the models. The cross-sectional areas were similar among the models, with SDs < 0.05 cm². The mean cross-sectional areas of the splenic artery aneurysm were slightly smaller than those calculated from the original mask images. The maximum mean cross-sectional areas calculated perpendicular to the x- and y-axes were 3.90 cm² (SD, 0.02) and 4.33 cm² (SD, 0.02), whereas those calculated from the original mask images were 4.14 cm² and 4.66 cm², respectively. The mean cross-sectional areas of the afferent artery were, however, almost the same as those calculated from the original mask images. The results suggest that 3D simulation modeling of a visceral artery aneurysm using a fused deposition modeling-type desktop 3D printer and computed tomography angiography data is highly precise and accurate. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.
van den Besselaar, A M H P; Chantarangkul, V; Angeloni, F; Binder, N B; Byrne, M; Dauer, R; Gudmundsdottir, B R; Jespersen, J; Kitchen, S; Legnani, C; Lindahl, T L; Manning, R A; Martinuzzo, M; Panes, O; Pengo, V; Riddell, A; Subramanian, S; Szederjesi, A; Tantanate, C; Herbel, P; Tripodi, A
2018-01-01
Essentials Two candidate International Standards for thromboplastin (coded RBT/16 and rTF/16) are proposed. The International Sensitivity Index (ISI) of the proposed standards was assessed in a 20-centre study. The mean ISI for RBT/16 was 1.21 with a between-centre coefficient of variation of 4.6%. The mean ISI for rTF/16 was 1.11 with a between-centre coefficient of variation of 5.7%. Background The availability of International Standards for thromboplastin is essential for the calibration of routine reagents and hence the calculation of the International Normalized Ratio (INR). Stocks of the current Fourth International Standards are running low. Candidate replacement materials have been prepared. This article describes the calibration of the proposed Fifth International Standards for thromboplastin, rabbit, plain (coded RBT/16) and for thromboplastin, recombinant, human, plain (coded rTF/16). Methods An international collaborative study was carried out for the assignment of International Sensitivity Indexes (ISIs) to the candidate materials, according to the World Health Organization (WHO) guidelines for thromboplastins and plasma used to control oral anticoagulant therapy with vitamin K antagonists. Results Results were obtained from 20 laboratories. In several cases, deviations from the ISI calibration model were observed, but the average INR deviation attributable to the model was not greater than 10%. Only valid ISI assessments were used to calculate the mean ISI for each candidate. The mean ISI for RBT/16 was 1.21 (between-laboratory coefficient of variation [CV]: 4.6%), and the mean ISI for rTF/16 was 1.11 (between-laboratory CV: 5.7%). Conclusions The between-laboratory variation of the ISI for candidate material RBT/16 was similar to that of the Fourth International Standard (RBT/05), and the between-laboratory variation of the ISI for candidate material rTF/16 was slightly higher than that of the Fourth International Standard (rTF/09). The candidate materials have been accepted by WHO as the Fifth International Standards for thromboplastin, rabbit plain, and thromboplastin, recombinant, human, plain. © 2017 International Society on Thrombosis and Haemostasis.
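An assigned ISI is used to convert a measured prothrombin time (PT) into an INR against the mean normal prothrombin time (MNPT). A minimal sketch with illustrative inputs:

```python
# INR from prothrombin time via the standard relation INR = (PT/MNPT)^ISI.
def inr(patient_pt_s: float, mnpt_s: float, isi: float) -> float:
    return (patient_pt_s / mnpt_s) ** isi

# Hypothetical PT and MNPT with a reagent calibrated against RBT/16 (ISI 1.21)
print(inr(patient_pt_s=28.0, mnpt_s=12.5, isi=1.21))  # ~2.65
```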
Liu, Xiang; Makeyev, Oleksandr; Besio, Walter
2011-01-01
We have simulated a four-layer concentric spherical head model. We calculated the spline and tripolar Laplacian estimates and compared them to the analytical Laplacian on the spherical surface. In the simulations we used five different dipole groups and two electrode configurations. The comparison shows that the tripolar Laplacian has a higher correlation coefficient with the analytical Laplacian in both electrode configurations tested (19 electrodes at standard 10/20 locations, and 64 electrodes).
Assessment of a novel biomechanical fracture model for distal radius fractures
2012-01-01
Background Distal radius fractures (DRF) are one of the most common fractures and often need surgical treatment, which has been validated through biomechanical tests. Currently a number of different fracture models are used, none of which resemble the in vivo fracture location. The aim of the study was to develop a new standardized fracture model for DRF (AO-23.A3) and compare its biomechanical behavior to the current gold standard. Methods Variable angle locking volar plates (ADAPTIVE, Medartis) were mounted on 10 pairs of fresh-frozen radii. The osteotomy location was alternated within each pair (New: 10 mm wedge 8 mm / 12 mm proximal to the dorsal / volar apex of the articular surface; Gold standard: 10 mm wedge 20 mm proximal to the articular surface). Each specimen was tested in cyclic axial compression (increasing load by 100 N per cycle) until failure or −3 mm displacement. Parameters assessed were stiffness, displacement and dissipated work calculated for each cycle, and ultimate load. Significance was tested using a linear mixed model and Wald test as well as t-tests. Results 7 female and 3 male pairs of radii aged 74 ± 9 years were tested. In most cases (7/10), the two groups showed similar mechanical behavior at low loads, with increasing differences at increasing loads. Overall the novel fracture model showed significantly different biomechanical behavior from the gold standard model (p < 0.001). The average final loads resisted were significantly lower in the novel model (860 N ± 232 N vs. 1250 N ± 341 N; p = 0.001). Conclusion The novel biomechanical fracture model for DRF more closely mimics the in vivo fracture site and shows a significantly different biomechanical behavior with increasing loads when compared to the current gold standard. PMID:23244634
NASA Astrophysics Data System (ADS)
Akhmetova, I. G.; Chichirova, N. D.
2017-11-01
A pressing problem at present is the precise determination of normative and actual heat losses. Existing methods (experimental measurement, metering devices, and mathematical modeling) all have drawbacks. Heat losses arising during heat-carrier transport affect the tariff structure of heat supply organizations. Determining this quantity also supports the proper choice of main and auxiliary equipment capacity, of the temperature chart of heat supply networks, and of the heating system structure under decentralization. Calculating actual heat losses and comparing them with normative values justifies work on improving heat networks by replacing piping or its insulation. To determine the cause of discrepancies between normative and actual heat losses, thermal tests measuring the actual heat losses were carried out in 124 sections of the heat networks of Kazan. As a result, a mathematical model for the normative determination of heat losses was developed and tested. This model differs from existing ones in that it accounts for the type of piping insulation. Applying this factor brings the calculated normative heat energy losses closer to their actual values. This is of great importance for enterprises operating distribution networks that, owing to their configuration and extent, do not have the technical ability to perform thermal testing.
Modeling single event induced crosstalk in nanometer technologies
NASA Astrophysics Data System (ADS)
Boorla, Vijay K.
Radiation effects are becoming more important in combinational logic circuits with newer technologies. When a high-energy particle strikes a sensitive region within a combinational logic circuit, a voltage pulse called a single event transient is created. Recently, researchers have reported single event (SE) crosstalk because of increasing coupling effects. In this work, a closed-form expression for SE crosstalk noise is formulated for the first time. All calculations use the 4-pi model. The crosstalk model uses a reduced transfer function between the aggressor coupling node and the victim node to reduce information loss. The aggressor coupling-node waveform is obtained and then applied to the transfer function between the coupling node and the victim output to obtain the victim noise voltage. This work includes both the effect of passive aggressor loading on the victim and of victim loading on the aggressor by considering the resistive shielding effect. The noise peak expressions derived in this work agree very well with HSPICE results: the average error for the noise peak is 3.794%, while allowing for very fast analysis. Once the SE crosstalk noise is calculated, one can employ mitigation techniques such as driver sizing. A standard DTMOS technique combined with sizing is proposed in this work to mitigate SE crosstalk; this combined approach can save area in some cases compared to driver sizing alone. Key Words: Crosstalk Noise, Closed Form Modeling, Standard DTMOS
Hendriks, A Jan; Awkerman, Jill A; de Zwart, Dick; Huijbregts, Mark A J
2013-11-01
While the variable sensitivity of model species to common toxicants has been addressed in previous studies, a systematic analysis of inter-species variability for different test types, modes of action and species has so far been lacking. Hence, the aim of the present study was to identify similarities and differences in the contaminant levels affecting cold-blooded and warm-blooded species when administered via different routes. To that end, data on lethal water concentrations (LC50), tissue residues (LR50) and oral doses (LD50) were collected from databases, each representing the largest of its kind. LC50 data were multiplied by a bioconcentration factor (BCF) to convert them to internal concentrations that allow comparison among species. For each endpoint data set, we calculated the mean and standard deviation of species' lethal levels per compound. Next, the means and standard deviations were averaged by mode of action. Both the means and the standard deviations depended on the number of species tested, which is at odds with quality-standard-setting procedures. Means calculated from (BCF) LC50, LR50 and LD50 were largely similar, suggesting that the different administration routes yield roughly similar internal levels. Levels of compounds interfering biochemically with elementary life processes were about one order of magnitude below those of narcotics disturbing membranes, and neurotoxic pesticides and dioxins induced death at even lower levels. Standard deviations for LD50 data were similar across modes of action, while the variability of LC50 values was lower for narcotics than for substances with a specific mode of action. The study indicates several directions for efficient use of available data in risk assessment and for reduction of species testing. Copyright © 2013 Elsevier Inc. All rights reserved.
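A minimal sketch of the data handling described above: convert water LC50s to internal levels with a bioconcentration factor, then take the mean and standard deviation of the log lethal levels across species for one compound. All values are placeholders, not the study's data:

```python
# Internal lethal levels from water LC50s and species-specific BCFs,
# summarized as mean and SD of log10 levels per compound.
import numpy as np

lc50_mg_l = np.array([0.8, 1.5, 0.3, 2.1])     # one compound, four species
bcf_l_kg = np.array([120.0, 95.0, 200.0, 80.0])  # bioconcentration factors

log_internal = np.log10(lc50_mg_l * bcf_l_kg)  # log10 mg/kg tissue
print(log_internal.mean(), log_internal.std(ddof=1))
```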
Ginsberg, Gary; Adunsky, Abraham; Rasooly, Iris
2013-01-01
The economic burden associated with hip fractures calls for cost-utility investigation of innovative new forms of organisation and integration of services for these patients. The aim was to carry out a cost-utility analysis, integrating epidemiological and economic aspects, of hip fracture patients treated within a comprehensive orthogeriatric model (COGM) of care, as compared with a standard-of-care model (SOCM). A demonstration study was conducted in a major tertiary medical centre operating both a COGM ward and standard orthopaedic and rehabilitation wards. Data were collected on the clinical outcomes and health care costs of the two treatment modalities in order to calculate the ratio of absolute cost to disability-adjusted life years (DALY). The COGM model used 23% fewer resources per patient ($14,919 vs. $19,363) than the SOCM model and averted 0.226 additional DALY per patient, mainly as a result of lower 1-year mortality rates among COGM patients (14.8% vs. 17.3%). A comprehensive orthogeriatric care modality is thus more cost-effective, providing additional quality-adjusted life years (QALY) while using fewer resources compared with the standard-of-care approach. The results should assist health policy-makers in optimising healthcare use and healthcare planning.
Bouwman, R W; van Engen, R E; Young, K C; Veldkamp, W J H; Dance, D R
2015-01-07
Slabs of polymethyl methacrylate (PMMA), or a combination of PMMA and polyethylene (PE) slabs, are used to simulate standard model breasts for the evaluation of the average glandular dose (AGD) in digital mammography (DM) and digital breast tomosynthesis (DBT). These phantoms are optimized for the energy spectra used in DM and DBT, which normally have a lower average energy than those used in contrast-enhanced digital mammography (CEDM). In this study we investigated whether these phantoms can be used for the evaluation of AGD with the high-energy x-ray spectra used in CEDM. For this purpose, the calculated values of the incident air kerma for dosimetry phantoms and standard model breasts were compared in a zero-degree projection with the use of an anti-scatter grid. It was found that the difference in incident air kerma relative to standard model breasts ranges from -10% to +4% for PMMA slabs and from +6% to +15% for PMMA-PE slabs. The estimated systematic errors in the measured AGD for both sets of phantoms were considered sufficiently small for the evaluation of AGD in quality control procedures for CEDM. However, the systematic error can be substantial if AGD values from different phantoms are compared.
Rolison, John M.; Treinen, Kerri C.; McHugh, Kelly C.; ...
2017-11-06
Uranium certified reference materials (CRMs) issued by New Brunswick Laboratory were subjected to dating using four independent uranium-series radiochronometers. In all cases, there was acceptable agreement between the model ages calculated using the 231Pa-235U, 230Th-234U, 227Ac-235U or 226Ra-234U radiochronometers and either the certified 230Th-234U model date (CRM 125-A and CRM U630) or the known purification date (CRM U050 and CRM U100). The agreement between the four independent radiochronometers establishes these uranium certified reference materials as ideal informal standards for validating dating techniques utilized in nuclear forensic investigations, in the absence of standards with certified model ages for multiple radiochronometers.
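For orientation, the 230Th-234U model age has a closed form under the standard assumption that thorium was completely removed at purification: the measured 230Th/234U atom ratio R satisfies R = lambda_234/(lambda_230 - lambda_234) * (1 - exp(-(lambda_230 - lambda_234) t)). A sketch (half-lives rounded; the measured ratio is a made-up example):

```python
import numpy as np

LN2 = np.log(2.0)
lam_230 = LN2 / 75_584.0    # 230Th decay constant, 1/yr (half-life ~75,584 yr)
lam_234 = LN2 / 245_500.0   # 234U decay constant, 1/yr (half-life ~245,500 yr)

def th230_u234_model_age(atom_ratio_230_234):
    """Model age (years) from a measured 230Th/234U atom ratio, assuming
    complete Th removal at purification (the usual radiochronometric model)."""
    d = lam_230 - lam_234
    return -np.log(1.0 - atom_ratio_230_234 * d / lam_234) / d

# Example: a ratio corresponding to several decades of 230Th ingrowth
print(f"model age = {th230_u234_model_age(2.0e-4):.1f} yr")
```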
A probabilistic assessment of health risks associated with short-term exposure to tropospheric ozone
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whitfield, R.G.; Biller, W.F.; Jusko, M.J.
1996-06-01
The work described in this report is part of a larger risk assessment sponsored by the U.S. Environmental Protection Agency. Earlier efforts developed exposure-response relationships for acute health effects among populations engaged in heavy exertion. Those efforts also developed a probabilistic national ambient air quality standards exposure model and a general methodology for integrating probabilistic exposure-response relationships and exposure estimates to calculate overall risk results. Recently published data make it possible to model additional health endpoints (for exposure at moderate exertion), including hospital admissions. New air quality and exposure estimates for alternative national ambient air quality standards for ozone are combined with exposure-response models to produce the risk results for hospital admissions and acute health effects. Sample results explain the methodology and introduce risk output formats.
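The integration step described here reduces, in its simplest head-count form, to summing response probabilities over an exposure distribution. A toy sketch with invented numbers:

```python
import numpy as np

# Illustrative head-count risk roll-up: combine a probabilistic exposure
# distribution with an exposure-response curve (all numbers hypothetical).
population = 1_000_000
exposure_ppm = np.array([0.06, 0.08, 0.10, 0.12])      # 1-h ozone exposure bins
p_exposed    = np.array([0.40, 0.30, 0.20, 0.10])      # fraction of person-days per bin
p_response   = np.array([0.001, 0.004, 0.010, 0.025])  # response probability per bin

expected_cases = population * np.sum(p_exposed * p_response)
print(f"expected cases per day = {expected_cases:,.0f}")
```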
Folded supersymmetry with a twist
Cohen, Timothy; Craig, Nathaniel; Lou, Hou Keong; ...
2016-03-30
Folded supersymmetry (f-SUSY) stabilizes the weak scale against radiative corrections from the top sector via scalar partners whose gauge quantum numbers differ from their Standard Model counterparts. This non-trivial pairing of states can be realized in extra-dimensional theories with appropriate supersymmetry-breaking boundary conditions. We present a class of calculable f-SUSY models that are parametrized by a non-trivial twist in 5D boundary conditions and can accommodate the observed Higgs mass and couplings. Although the distinctive phenomenology associated with the novel folded states should provide strong evidence for this mechanism, the most stringent constraints are currently placed by conventional supersymmetry searches. As a result, these models remain minimally fine-tuned in light of LHC8 data and provide a range of both standard and exotic signatures accessible at LHC13.
MSSM-inspired multifield inflation
NASA Astrophysics Data System (ADS)
Dubinin, M. N.; Petrova, E. Yu.; Pozdeeva, E. O.; Sumin, M. V.; Vernov, S. Yu.
2017-12-01
Although experimentally only a single Standard Model-like Higgs boson has been discovered at the LHC with a high degree of statistical significance, extended Higgs sectors with multiple scalar fields, not excluded by combined fits of the data, remain theoretically preferable for internally consistent, realistic models of particle physics. We analyze the inflationary scenarios that could be induced by the two-Higgs-doublet potential of the Minimal Supersymmetric Standard Model (MSSM), in which five scalar fields have non-minimal couplings to gravity. Observables following from such MSSM-inspired multifield inflation are calculated, and a number of consistent inflationary scenarios are constructed. Cosmological evolution with different initial conditions for the multifield system leads to consequences fully compatible with observational data on the spectral index and the tensor-to-scalar ratio. It is demonstrated that the strong-coupling approximation is precise enough to describe such inflationary scenarios.
Testing stellar evolution models with detached eclipsing binaries
NASA Astrophysics Data System (ADS)
Higl, J.; Weiss, A.
2017-12-01
Stellar evolution codes, like all other numerical tools, need to be verified. Detached eclipsing binaries are among the standard stellar objects that allow stringent tests of stellar evolution theory and models. We have used 19 such objects to test our stellar evolution code, in order to see whether standard methods and assumptions suffice to reproduce the observed global properties. In this paper we concentrate on three effects that each carry a specific uncertainty: atomic diffusion as used in standard solar model calculations, overshooting from convective regions, and a simple model for the effect of stellar spots on stellar radius, which is one of the possible solutions for the radius problem of M dwarfs. We find that, in general, old systems need diffusion to allow for, or at least improve, an acceptable fit, and that systems with convective cores indeed need overshooting. Only one system (AI Phe) requires its absence for a successful fit. To match stellar radii for very low-mass stars, the spot model proved an effective approach but, depending on model details, requires a high percentage of the surface to be covered by spots. We briefly discuss the improvements needed to further reduce the freedom in modelling and to allow an even more restrictive test using these objects.
Neutrinoless double-β decay of 48Ca in the shell model: Closure versus nonclosure approximation
NASA Astrophysics Data System (ADS)
Sen'kov, R. A.; Horoi, M.
2013-12-01
Neutrinoless double-β decay (0νββ) is a unique process that could reveal physics beyond the Standard Model. Essential ingredients in the analysis of 0νββ rates are the associated nuclear matrix elements. Most of the approaches used to calculate these matrix elements rely on the closure approximation. Here we analyze the light neutrino-exchange matrix elements of 48Ca 0νββ decay and test the closure approximation in a shell-model approach. We calculate the 0νββ nuclear matrix elements for 48Ca using both the closure approximation and a nonclosure approach, and we estimate the uncertainties associated with the closure approximation. We demonstrate that the nonclosure approach has excellent convergence properties which allow us to avoid unmanageable computational cost. Combining the nonclosure and closure approaches we propose a new method of calculation for 0νββ decay rates which can be applied to the 0νββ decay rates of heavy nuclei, such as 76Ge or 82Se.
Zhu, Tong; Zhang, John Z H; He, Xiao
2014-09-14
In this work, protein side-chain 1H chemical shifts are used as probes to detect and correct side-chain packing errors in protein NMR structures through structural refinement. By applying the automated fragmentation quantum mechanics/molecular mechanics (AF-QM/MM) method for ab initio calculation of chemical shifts, incorrect side-chain packing was detected in the NMR structures of the Pin1 WW domain. The NMR structure was then refined using molecular dynamics simulation and the polarized protein-specific charge (PPC) model. The computationally refined structure of the Pin1 WW domain is in excellent agreement with the corresponding X-ray structure. In particular, the use of the PPC model yields a more accurate structure than the standard (nonpolarizable) force field. For comparison, some of the widely used empirical models for chemical shift calculation are unable to correctly describe the relationship between a particular proton chemical shift and the protein structure. The AF-QM/MM method can thus be used as a powerful tool for protein NMR structure validation and structural flaw detection.
Calculations of microwave brightness temperature of rough soil surfaces: Bare field
NASA Technical Reports Server (NTRS)
Mo, T.; Schmugge, T. J.; Wang, J. R.
1985-01-01
A model for simulating the brightness temperatures of soils with rough surfaces is developed. The surface emissivity of the soil medium is obtained by integrating the bistatic scattering coefficients for rough surfaces. The roughness of a soil surface is characterized by two parameters, the surface height standard deviation sigma and its horizontal correlation length l. The model calculations are compared to the measured angular variations of the polarized brightness temperatures at both 1.4 GHz and 5 GHz frequencies. A nonlinear least-squares fitting method is used to obtain the values of sigma and l that best characterize the surface roughness. The effect of shadowing is incorporated by introducing a function S(theta), which represents the probability that a point on a rough surface is not shadowed by other parts of the surface. The model results for the horizontal polarization are in excellent agreement with the data. However, for the vertical polarization, some discrepancies exist between the calculations and the data, particularly at 1.4 GHz. Possible causes of the discrepancy are discussed.
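The fitting step can be sketched with scipy's nonlinear least squares. The brightness-temperature model below is a purely illustrative two-parameter stand-in (the paper integrates bistatic scattering coefficients instead), and the "observations" are synthetic:

```python
import numpy as np
from scipy.optimize import curve_fit

def tb_model(theta_deg, sigma, ell):
    """Illustrative rough-surface TB(theta) with two free parameters; NOT the
    paper's model, just a stand-in so the fitting workflow can be shown."""
    theta = np.radians(theta_deg)
    refl = 0.35 * np.exp(-(0.2 * sigma) ** 2 * np.cos(theta) ** 2)
    refl /= 1.0 + (theta_deg / ell) ** 2   # ell controls the angular falloff
    return 270.0 * (1.0 - refl)

theta_obs = np.array([0.0, 10.0, 20.0, 30.0, 40.0, 50.0])
tb_obs = tb_model(theta_obs, 1.2, 30.0) \
         + np.random.default_rng(0).normal(0.0, 0.5, theta_obs.size)

# Nonlinear least squares for (sigma, l), as in the paper's fitting step.
(sigma_fit, ell_fit), _ = curve_fit(tb_model, theta_obs, tb_obs, p0=[1.0, 20.0])
print(f"sigma = {sigma_fit:.2f}, l = {ell_fit:.1f}")
```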
Calculating the Malliavin derivative of some stochastic mechanics problems
Hauseux, Paul; Hale, Jack S.
2017-01-01
The Malliavin calculus is an extension of the classical calculus of variations from deterministic functions to stochastic processes. In this paper we aim to show, in a practical and didactic way, how to calculate the Malliavin derivative (the derivative of the expectation of a quantity of interest of a model with respect to its underlying stochastic parameters) for four problems found in mechanics. The non-intrusive approach uses the Malliavin Weight Sampling (MWS) method in conjunction with a standard Monte Carlo method. The models are expressed as ODEs or PDEs and discretised using the finite difference or finite element methods. Specifically, we consider stochastic extensions of: a 1D Kelvin-Voigt viscoelastic model discretised with finite differences, a 1D linear elastic bar, a hyperelastic bar undergoing buckling, and incompressible Navier-Stokes flow around a cylinder, all discretised with finite elements. A further contribution of this paper is an extension of the MWS method to the more difficult case of non-Gaussian random variables and to the calculation of second-order derivatives. We provide open-source code for the numerical examples in this paper. PMID:29261776
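A minimal MWS sketch on a problem small enough to check analytically: for the linear SDE dX = -theta X dt + b dW discretised with Euler-Maruyama, the sensitivity dE[X_T]/dtheta is estimated as E[X_T q_T], where the standard MWS weight for additive noise accumulates q += (da/dtheta) dW / b along each path. This toy example is ours, not one of the paper's four mechanics problems:

```python
import numpy as np

rng = np.random.default_rng(1)

# dX = -theta*X dt + b dW, X_0 = x0; analytic check: dE[X_T]/dtheta = -T*x0*e^{-theta*T}
theta, b, x0, T, n_steps, n_paths = 0.7, 0.4, 1.0, 1.0, 200, 200_000
dt = T / n_steps

x = np.full(n_paths, x0)
q = np.zeros(n_paths)                 # Malliavin weights, one per path
for _ in range(n_steps):
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    q += (-x) * dw / b                # drift a(x) = -theta*x, so da/dtheta = -x
    x += -theta * x * dt + b * dw     # Euler-Maruyama step

estimate = np.mean(x * q)             # dE[X_T]/dtheta ~ E[X_T * q_T]
exact = -T * x0 * np.exp(-theta * T)
print(f"MWS: {estimate:.4f}   analytic: {exact:.4f}")
```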
Permeability of Two Parachute Fabrics: Measurements, Modeling, and Application
NASA Technical Reports Server (NTRS)
Cruz, Juan R.; O'Farrell, Clara; Hennings, Elsa; Runnells, Paul
2017-01-01
Two parachute fabrics, described by Parachute Industry Specifications PIA-C-7020D Type I and PIA-C-44378D Type I, were tested to obtain their permeabilities in air (i.e., flow-through volume of air per area per time) over the range of differential pressures from 0.146 psf (7 Pa) to 25 psf (1197 Pa). Both fabrics met their specification permeabilities at the standard differential pressure of 0.5 inch of water (2.60 psf, 124 Pa). The permeability results were transformed into an effective porosity for use in calculations related to parachutes. Models were created that related the effective porosity to the unit Reynolds number for each of the fabrics. As an application example, these models were used to calculate the total porosities for two geometrically-equivalent subscale Disk-Gap-Band (DGB) parachutes fabricated from each of the two fabrics, and tested at the same operating conditions in a wind tunnel. Using the calculated total porosities and the results of the wind tunnel tests, the drag coefficient of a geometrically-equivalent full-scale DGB operating on Mars was estimated.
Linear and non-linear perturbations in dark energy models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Escamilla-Rivera, Celia; Casarini, Luciano; Fabris, Júlio C.
2016-11-01
In this work we discuss observational aspects of three time-dependent parameterisations of the dark energy equation of state w(z). In order to determine the dynamics associated with these models, we calculate their background evolution and perturbations in a scalar field representation. After performing a complete treatment of linear perturbations, we also show that the non-linear contribution of the selected w(z) parameterisations to the matter power spectra is almost the same for all scales, with no significant difference from the predictions of the standard ΛCDM model.
Higgs bosons in heavy supersymmetry with an intermediate m_A
Lee, Gabriel; Wagner, Carlos E. M.
2015-10-23
The minimal supersymmetric standard model leads to precise predictions of the properties of the light Higgs boson degrees of freedom that depend on only a few relevant supersymmetry-breaking parameters. In particular, there is an upper bound on the mass of the lightest neutral Higgs boson, which for a supersymmetric spectrum of the order of a TeV is barely above that of the Higgs resonance recently observed at the LHC. This bound can be raised by considering a heavier supersymmetric spectrum, relaxing the tension between theory and experiment. In a previous article, we studied the predictions for the lightest CP-even Higgs mass for large values of the scalar-top and heavy Higgs boson masses. In this article we perform a similar analysis, considering also the case of a CP-odd Higgs boson mass m_A of the order of the weak scale. We perform the calculation using effective theory techniques, considering a two-Higgs-doublet model and a Standard Model-like theory and resumming the large logarithmic corrections that appear at scales above and below m_A, respectively. Finally, we calculate the mass and couplings of the lightest CP-even Higgs boson and compare our results with those obtained by other methods.
Wang, A H; Leng, P B; Bian, G L; Li, X H; Mao, G C; Zhang, M B
2016-10-20
Objective: To explore the applicability of two different occupational health risk assessment models in the wooden furniture manufacturing industry. Methods: The American EPA inhalation risk model and the ICMM occupational health risk assessment model were each applied to assess occupational health risk in a small wooden furniture enterprise. Results: Occupational disease protective measures and equipment in the plant were poor. The concentration of wood dust in the air of two workshops was above the occupational exposure limit (OEL), with C-TWA values of 8.9 mg/m3 and 3.6 mg/m3, respectively. According to the EPA model, workers exposed to benzene in this plant had a high risk (9.7×10^-6 to 34.3×10^-6) of leukemia, and those exposed to formaldehyde had a high risk (11.4×10^-6) of squamous cell carcinoma. The two ICMM tools, the standard-based matrix and the calculated risk rating, gave inconsistent results: workers exposed to wood dust had a very high risk of nasal carcinoma under the calculated risk rating but a high risk under the standard-based matrix, and for workers exposed to noise the risk of noise-induced deafness was unacceptable under one tool and medium under the other. Conclusion: Both the EPA and ICMM models can appropriately predict and assess occupational health risk in wooden furniture manufacturing; the ICMM model, owing to its relatively simple operation, readily obtained evaluation parameters, and comprehensive assessment of occupational-disease-inducing factors, is more suitable for wooden furniture production enterprises.
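For reference, the EPA inhalation pathway boils down to an exposure-concentration calculation followed by multiplication with an inhalation unit risk (IUR). The sketch below uses invented exposure inputs together with the published upper-bound IRIS unit risk for benzene; it is illustrative, not a reconstruction of the study's numbers:

```python
# EPA-style inhalation risk sketch (illustrative inputs, not the study's data).
c_air = 0.05                     # benzene air concentration, mg/m3 (assumed)
et, ef, ed = 8.0, 250.0, 25.0    # h/day, days/yr, years of exposure (assumed)
at_hours = 70.0 * 365.0 * 24.0   # averaging time for carcinogens, hours

ec = c_air * (et * ef * ed) / at_hours   # exposure concentration, mg/m3
iur = 7.8e-6                             # per ug/m3, upper-bound IRIS value for benzene
risk = iur * ec * 1000.0                 # convert mg/m3 -> ug/m3
print(f"lifetime excess cancer risk = {risk:.1e}")
```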
Infinite variance in fermion quantum Monte Carlo calculations.
Shi, Hao; Zhang, Shiwei
2016-03-01
For important classes of many-fermion problems, quantum Monte Carlo (QMC) methods allow exact calculations of ground-state and finite-temperature properties without the sign problem. The list spans condensed matter, nuclear physics, and high-energy physics, including the half-filled repulsive Hubbard model, the spin-balanced atomic Fermi gas, and lattice quantum chromodynamics calculations at zero density with Wilson fermions, and is growing rapidly as a number of problems have recently been discovered to be free of the sign problem. In these situations, QMC calculations are relied on to provide definitive answers. Their results are instrumental to our ability to understand and compute properties in fundamental models important to multiple subareas of quantum physics. It is shown, however, that the most commonly employed algorithms in such situations have an infinite variance problem. A diverging variance causes the estimated Monte Carlo statistical error bar to be incorrect, which can render the results of the calculation unreliable or meaningless. We discuss how to identify the infinite variance problem. An approach is then proposed to solve it: adding a "bridge link" to the imaginary-time path integral, which does not require major modifications to standard algorithms. The general idea is applicable to a variety of situations where the infinite variance problem may be present. Illustrative results are presented for the ground state of the Hubbard model at half-filling.
Duality linking standard and tachyon scalar field cosmologies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Avelino, P. P.; Bazeia, D.; Losano, L.
2010-09-15
In this work we investigate the duality linking standard and tachyon scalar field homogeneous and isotropic cosmologies in N+1 dimensions. We determine the transformation between standard and tachyon scalar fields and between their associated potentials, corresponding to the same background evolution. We show that, in general, the duality is broken at the perturbative level, when deviations from a homogeneous and isotropic background are taken into account. However, we find that for slow-rolling fields the duality is still preserved at the linear level. We illustrate our results with specific examples of cosmological relevance, where the correspondence between scalar and tachyon scalar field models can be calculated explicitly.
40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?
Code of Federal Regulations, 2013 CFR
2013-07-01
... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...
40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?
Code of Federal Regulations, 2011 CFR
2011-07-01
... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...
40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?
Code of Federal Regulations, 2014 CFR
2014-07-01
... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...
40 CFR 1048.510 - What transient duty cycles apply for laboratory testing?
Code of Federal Regulations, 2012 CFR
2012-07-01
... model year, measure emissions by testing the engine on a dynamometer with the duty cycle described in Appendix II to determine whether it meets the transient emission standards in § 1048.101(a). (b) Calculate cycle statistics and compare with the established criteria as specified in 40 CFR 1065.514 to confirm...
49 CFR 571.301 - Standard No. 301; Fuel system integrity.
Code of Federal Regulations, 2013 CFR
2013-10-01
... regarding which of the compliance options it has selected for a particular vehicle or make/model. S6... more than one manufacturer. For the purpose of calculating average annual production of vehicles for... gravity is located 1,372 mm ±38 mm rearward of the front wheel axis, in the vertical longitudinal plane of...
49 CFR 571.301 - Standard No. 301; Fuel system integrity.
Code of Federal Regulations, 2012 CFR
2012-10-01
... regarding which of the compliance options it has selected for a particular vehicle or make/model. S6... more than one manufacturer. For the purpose of calculating average annual production of vehicles for... gravity is located 1,372 mm ±38 mm rearward of the front wheel axis, in the vertical longitudinal plane of...
49 CFR 571.301 - Standard No. 301; Fuel system integrity.
Code of Federal Regulations, 2014 CFR
2014-10-01
... regarding which of the compliance options it has selected for a particular vehicle or make/model. S6... more than one manufacturer. For the purpose of calculating average annual production of vehicles for... gravity is located 1,372 mm ±38 mm rearward of the front wheel axis, in the vertical longitudinal plane of...
Simulation Platform for Vision Aided Inertial Navigation
2014-09-18
Brown, R. G., & Hwang, P. Y. (1992). Introduction to Random Signals and Applied Kalman Filtering (2nd ed.). New York: John Wiley & Sons. Chowdhary, G. ... Parameters for Various Timing Standards (Brown & Hwang, 1992) ... were then calculated using the true PVA information from the ASPN data. Next, a two-state clock from (Brown & Hwang, 1992) was used to model the ...
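The two-state clock model referenced here (Brown & Hwang, 1992) propagates clock bias and drift as an integrated random walk. A sketch of the discrete-time propagation, with spectral densities chosen only for illustration:

```python
import numpy as np

# Two-state receiver clock (bias + drift) in the style of Brown & Hwang:
# discrete-time propagation with process noise from spectral densities Sf, Sg.
dt = 1.0                       # s
Sf, Sg = 4.0e-19, 1.58e-18     # illustrative spectral densities, not from the report

Phi = np.array([[1.0, dt], [0.0, 1.0]])                      # state transition
Q = np.array([[Sf * dt + Sg * dt**3 / 3.0, Sg * dt**2 / 2.0],
              [Sg * dt**2 / 2.0,           Sg * dt         ]])  # process noise cov

rng = np.random.default_rng(0)
L = np.linalg.cholesky(Q)
x = np.zeros(2)                # [clock bias (s), clock drift (s/s)]
for _ in range(3600):          # simulate one hour of clock wander
    x = Phi @ x + L @ rng.normal(size=2)
print(f"bias after 1 h: {x[0]:.3e} s, drift: {x[1]:.3e} s/s")
```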
BSM Kaon Mixing at the Physical Point
NASA Astrophysics Data System (ADS)
Boyle, Peter; Garron, Nicolas; Kettle, Julia; Khamseh, Ava; Tsang, Justus Tobias
2018-03-01
We present a progress update on the RBC-UKQCD calculation of beyond the standard model (BSM) kaon mixing matrix elements at the physical point. Simulations are performed using 2+1 flavour domain wall lattice QCD with the Iwasaki gauge action at 3 lattice spacings and with pion masses ranging from 430 MeV to the physical pion mass.
Quantum Electrodynamics: Theory
Lincoln, Don
2018-01-16
The Standard Model of particle physics is composed of several theories that are added together. The most precise component theory is the theory of quantum electrodynamics, or QED. In this video, Fermilab's Dr. Don Lincoln explains how theoretical QED calculations can be done. This video links to other videos, giving the viewer a deep understanding of the process.
Quantifying falsifiability of scientific theories
NASA Astrophysics Data System (ADS)
Nemenman, Ilya
I argue that the notion of falsifiability, a key concept in defining a valid scientific theory, can be quantified using Bayesian Model Selection, which is a standard tool in modern statistics. This relates falsifiability to the quantitative version of the statistical Occam's razor, and allows transforming some long-running arguments about validity of scientific theories from philosophical discussions to rigorous mathematical calculations.
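As a toy illustration of quantifying the Occam penalty via Bayesian model selection, the sketch below compares a sharp (more falsifiable) model against a flexible one on the same data by their marginal likelihoods; the setup and numbers are invented:

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

# Toy Bayesian model selection: data y ~ N(mu, 1).
# M0 ("falsifiable", sharp): mu = 0.  M1 (flexible): mu ~ N(0, 10^2) prior.
rng = np.random.default_rng(3)
y = rng.normal(0.3, 1.0, size=20)

def evidence_m0():
    return np.exp(np.sum(stats.norm.logpdf(y, 0.0, 1.0)))

def evidence_m1():
    # Marginal likelihood: integrate the likelihood over the broad prior on mu.
    def integrand(mu):
        return np.exp(np.sum(stats.norm.logpdf(y, mu, 1.0))) * stats.norm.pdf(mu, 0.0, 10.0)
    val, _ = quad(integrand, -50.0, 50.0)
    return val

bayes_factor = evidence_m0() / evidence_m1()
print(f"Bayes factor B01 = {bayes_factor:.1f}  (>1 favours the sharper model)")
```

With data clustered near zero, the flexible model pays an Occam penalty for its wide prior, which is the quantitative razor the abstract refers to.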
Evaluation of MOSTAS computer code for predicting dynamic loads in two bladed wind turbines
NASA Technical Reports Server (NTRS)
Kaza, K. R. V.; Janetzke, D. C.; Sullivan, T. L.
1979-01-01
Calculated dynamic blade loads were compared with measured loads over a range of yaw stiffnesses of the DOE/NASA Mod-0 wind turbine to evaluate the performance of two versions of the MOSTAS computer code. The first version uses a time-averaged-coefficient approximation in conjunction with a multi-blade coordinate transformation for two-bladed rotors to solve the equations of motion by standard eigenanalysis. The second version accounts for the periodic coefficients while solving the equations by time-history integration. A hypothetical three-degree-of-freedom dynamic model was also investigated: the exact equations of motion of this model were solved using the Floquet-Liapunov method, and the equations with time-averaged coefficients were solved by standard eigenanalysis.
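The Floquet analysis mentioned here amounts to integrating the periodic-coefficient system over one period from unit initial conditions and examining the eigenvalues of the resulting monodromy matrix. A self-contained sketch on the Mathieu equation (our stand-in, not the turbine model):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Floquet-Liapunov stability check: integrate a linear system with periodic
# coefficients over one period and examine the monodromy matrix.
# Example system: Mathieu equation x'' + (delta + eps*cos(t)) x = 0.
delta, eps, T = 1.2, 0.8, 2.0 * np.pi

def rhs(t, y):
    return [y[1], -(delta + eps * np.cos(t)) * y[0]]

# Columns of the monodromy matrix = solutions from unit initial conditions.
M = np.empty((2, 2))
for i, y0 in enumerate(np.eye(2)):
    sol = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)
    M[:, i] = sol.y[:, -1]

mults = np.linalg.eigvals(M)          # Floquet (characteristic) multipliers
print("multipliers:", np.abs(mults))  # any |multiplier| > 1 => instability
```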
Using SPL (Spent Pot-Lining) as an Alternative Fuel in Metallurgical Furnaces
NASA Astrophysics Data System (ADS)
Gao, Lei; Mostaghel, Sina; Ray, Shamik; Chattopadyay, Kinnor
2016-09-01
Replacing coke (coal) in a metallurgical furnace with alternative fuels is beneficial for process economics and environmental performance. Coal injection is common practice in blast furnace ironmaking, and spent pot-lining (SPL) was conceptualized as an alternative to coal. SPL is a carbon-rich waste from primary aluminium production. Equilibrium thermodynamics was used to calculate the energy content of SPL and the compositional changes during SPL combustion. In order to capture the kinetics and mass-transfer aspects, a CFD model of the blast furnace tuyere region was developed. The results of SPL combustion were compared with standard PCI coals, which are commonly used in blast furnaces. The CFD model was validated against experimental results for standard high-volatile coals.
Standard model CP violation and cold electroweak baryogenesis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tranberg, Anders
2011-10-15
Using large-scale real-time lattice simulations, we calculate the baryon asymmetry generated at a fast, cold electroweak symmetry-breaking transition. CP-violation is provided by the leading effective bosonic term resulting from integrating out the fermions in the Minimal Standard Model at zero temperature and performing a covariant gradient expansion [A. Hernandez, T. Konstandin, and M. G. Schmidt, Nucl. Phys. B812, 290 (2009)]. This is an extension of the work presented in [A. Tranberg, A. Hernandez, T. Konstandin, and M. G. Schmidt, Phys. Lett. B 690, 207 (2010)]. The numerical implementation is described in detail, and we address issues specifically related to using this CP-violating term in the context of cold electroweak baryogenesis.
High precision Hugoniot measurements of D2 near maximum compression
NASA Astrophysics Data System (ADS)
Benage, John; Knudson, Marcus; Desjarlais, Michael
2015-11-01
The Hugoniot response of liquid deuterium has been widely studied, owing both to its general importance and to the significant discrepancy in the shock response inferred from early experiments. With improvements in dynamic compression platforms and experimental standards, these results have converged and show general agreement with several equation of state (EOS) models, including quantum molecular dynamics (QMD) calculations within the Generalized Gradient Approximation (GGA). This approach to modeling the EOS has also proven quite successful for other materials and is rapidly becoming a standard approach. However, small differences remain among predictions obtained using different local and semi-local density functionals; these small differences show up in the deuterium Hugoniot at ~30-40 GPa, near the region of maximum compression. Here we present experimental results focusing on that region of the Hugoniot, taking advantage of advancements in the platform and standards and resulting in data with significantly higher precision than obtained in previous studies. These new data may make it possible to distinguish between the subtle differences predicted by the various density functionals. Results of these experiments will be presented along with comparisons to various QMD calculations. Sandia National Laboratories is a multi-program laboratory operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin company, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
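For context, each measured (shock velocity, particle velocity) pair maps to a pressure-density state on the Hugoniot through the Rankine-Hugoniot jump conditions; the linear Us-up relation below is only an illustrative stand-in for real data:

```python
import numpy as np

# Rankine-Hugoniot jump relations: convert (Us, up) pairs into
# pressure-density states, with rho0 the initial density of liquid D2.
rho0 = 0.171                          # g/cm3, liquid deuterium near 20 K
us = np.array([18.0, 22.0, 26.0])     # shock velocity, km/s (illustrative)
up = (us - 1.2) / 1.27                # particle velocity, km/s (illustrative fit)

P = rho0 * us * up                    # g/cm3 * (km/s)^2 = GPa
rho = rho0 * us / (us - up)           # shocked density from mass conservation
for u, p, r in zip(us, P, rho):
    print(f"Us={u:4.1f} km/s  P={p:6.1f} GPa  rho/rho0={r / rho0:4.2f}")
```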
Decision analysis with cumulative prospect theory.
Bayoumi, A M; Redelmeier, D A
2000-01-01
Individuals sometimes express preferences that do not follow expected utility theory. Cumulative prospect theory adjusts for some phenomena by using decision weights rather than probabilities when analyzing a decision tree. The authors examined how probability transformations from cumulative prospect theory might alter a decision analysis of a prophylactic therapy in AIDS, eliciting utilities from patients with HIV infection (n = 75) and calculating expected outcomes using an established Markov model. They next focused on transformations of three sets of probabilities: 1) the probabilities used in calculating standard-gamble utility scores; 2) the probabilities of being in discrete Markov states; 3) the probabilities of transitioning between Markov states. The same prophylaxis strategy yielded the highest quality-adjusted survival under all transformations. For the average patient, prophylaxis appeared relatively less advantageous when standard-gamble utilities were transformed. Prophylaxis appeared relatively more advantageous when state probabilities were transformed and relatively less advantageous when transition probabilities were transformed. Transforming standard-gamble and transition probabilities simultaneously decreased the gain from prophylaxis by almost half. Sensitivity analysis indicated that even near-linear probability weighting transformations could substantially alter quality-adjusted survival estimates. The magnitude of benefit estimated in a decision-analytic model can change significantly after using cumulative prospect theory. Incorporating cumulative prospect theory into decision analysis can provide a form of sensitivity analysis and may help describe when people deviate from expected utility theory.
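A sketch of the core transformation: cumulative prospect theory replaces raw probabilities with decision weights obtained by applying a weighting function to cumulative (rank-ordered) probabilities. The Tversky-Kahneman weighting form is shown; the three-state probabilities are invented:

```python
import numpy as np

def tk_weight(p, gamma=0.61):
    """Tversky-Kahneman (1992) probability weighting function for gains."""
    return p**gamma / (p**gamma + (1.0 - p)**gamma) ** (1.0 / gamma)

# Hypothetical one-step outcome probabilities (well, sick, dead) and their
# cumulative-prospect transformation into decision weights.
p = np.array([0.85, 0.10, 0.05])
ranks = np.cumsum(p)                                    # cumulative probabilities
w = np.diff(np.concatenate(([0.0], tk_weight(ranks))))  # decision weights
print("raw:", p, " weighted:", w.round(3))
```

Because the weights are built from cumulative probabilities, small tail probabilities are overweighted and middling ones underweighted, which is what drives the shifts in quality-adjusted survival the authors report.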
Forecasting coconut production in the Philippines with ARIMA model
NASA Astrophysics Data System (ADS)
Lim, Cristina Teresa
2015-02-01
The study aimed to depict the future situation of the coconut industry in the Philippines by applying the Autoregressive Integrated Moving Average (ARIMA) method. Data on coconut production, one of the major industrial crops of the country, for the period 1990 to 2012 were analyzed using time-series methods. Autocorrelation (ACF) and partial autocorrelation functions (PACF) were calculated for the data, and an appropriate Box-Jenkins autoregressive integrated moving average model was fitted. The validity of the model was tested using standard statistical techniques, and the fitted model was then used to forecast coconut production for the following eight years.
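A minimal statsmodels version of the workflow (the series below is synthetic; the paper fits actual Philippine production data, with the (p, d, q) order chosen from the ACF/PACF):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic stand-in for the 1990-2012 annual production series (million tonnes).
years = pd.period_range("1990", "2012", freq="Y")
rng = np.random.default_rng(7)
y = pd.Series(12.0 + 0.15 * np.arange(len(years)) + rng.normal(0, 0.4, len(years)),
              index=years)

fit = ARIMA(y, order=(1, 1, 1)).fit()   # Box-Jenkins order assumed for illustration
print(f"AIC: {fit.aic:.1f}")
print(fit.forecast(steps=8))            # the following eight years, as in the study
```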
NASA Astrophysics Data System (ADS)
Sharma, R.; McCalley, J. D.
2016-12-01
Geomagnetic disturbances (GMD) cause geomagnetically induced currents (GIC) to flow in the power transmission system, which may cause large-scale power outages and damage to power system equipment. In order to plan a defense against GMD, it is necessary to accurately estimate the flow of GICs in the transmission system. The current calculation per NERC standards uses 1-D earth conductivity models that do not reflect the coupling between the geoelectric and geomagnetic field components in the same direction. For accurate estimation of GICs, it is important to have spatially granular 3-D earth conductivity tensors, an accurate DC network model of the transmission system, and precisely estimated or measured inputs in the form of geomagnetic or geoelectric field data. Using these models and data, pre-event, post-event, and online planning and assessment can be performed by calculating GIC, analyzing voltage stability margins, identifying protection system vulnerabilities, and estimating heating in transmission equipment. These tasks require an established GIC calculation and analysis procedure that uses improved geophysical and DC network models obtained by model parameter tuning. The issue is addressed by performing the following tasks: 1) geomagnetic field data and improved 3-D earth conductivity tensors are used to map the geoelectric field of a given area; the geoelectric field map then serves as input to the PSS/E platform, where GIC flows are calculated through DC circuit analysis. 2) The computed GIC is evaluated against GIC measurements in order to fine-tune the geophysical and DC network model parameters for any mismatch between calculated and measured GIC. 3) The GIC calculation procedure is then adapted to a one-in-100-year storm, in order to assess the impact of a worst-case GMD on the power system. 4) Using the transformer models, the voltage stability margin is analyzed for various real and synthetic geomagnetic or geoelectric field inputs by calculating the reactive power absorbed by the transformers during an event. All four steps will help electric utilities and planners use better and more accurate estimation techniques for GIC calculation and impact assessment for future GMD events.
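The DC circuit analysis in step 1 can be illustrated with the smallest possible network: one line joining two grounded substations in a uniform geoelectric field. All resistances and field values below are hypothetical:

```python
import numpy as np

# Minimal DC-network GIC sketch: two substations joined by one line, each
# grounded through a grounding/transformer resistance (values hypothetical).
E_field = 1.0e-3           # uniform geoelectric field along the line, V/m
length = 200e3             # line length, m
R_line = 5.0               # line resistance, ohm
Rg = np.array([0.5, 0.5])  # grounding resistances at nodes 1 and 2, ohm

V = E_field * length       # EMF induced along the line, V
# Nodal equations G v = J, with the line EMF as equivalent current injections.
G = np.array([[1 / R_line + 1 / Rg[0], -1 / R_line],
              [-1 / R_line,             1 / R_line + 1 / Rg[1]]])
J = np.array([-V / R_line, +V / R_line])
v = np.linalg.solve(G, J)
gic = v / Rg               # current to ground at each substation, A
print(f"GIC: {gic[0]:+.1f} A / {gic[1]:+.1f} A")
```

Sanity check: for this single loop the answer must equal V divided by the total loop resistance, 200 V / (5 + 0.5 + 0.5) ohm = 33.3 A, which the nodal solve reproduces.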
Naumov, Sergej; von Sonntag, Clemens
2011-11-01
Free radicals are common intermediates in the chemistry of ozone in aqueous solution. Their reactions with ozone have been probed by calculating the standard Gibbs free energies of such reactions using density functional theory (Jaguar 7.6 program). O2 reacts fast and irreversibly only with simple carbon-centered radicals. In contrast, ozone also reacts irreversibly with conjugated carbon-centered radicals such as bisallylic (hydroxycyclohexadienyl) radicals, with conjugated carbon/oxygen-centered radicals such as phenoxyl radicals, and even with nitrogen-, oxygen-, sulfur-, and halogen-centered radicals. In these reactions, further ozone-reactive radicals are generated. Chain reactions may destroy ozone without giving rise to products other than O2. This may be of importance when ozonation is used in pollution control, and reactions of free radicals with ozone have to be taken into account in modeling such processes.
Davidson, Scott E; Cui, Jing; Kry, Stephen; Deasy, Joseph O; Ibbott, Geoffrey S; Vicic, Milos; White, R Allen; Followill, David S
2016-08-01
A dose calculation tool, which combines the accuracy of the dose planning method (DPM) Monte Carlo code with the versatility of a practical analytical multisource model and which was previously reported, has been improved and validated for the Varian 6 and 10 MV linear accelerators (linacs). The calculation tool can be used to calculate doses in advanced clinical application studies. One shortcoming of current clinical trials that report dose from patient plans is the lack of a standardized dose calculation methodology. Because commercial treatment planning systems (TPSs) have their own dose calculation algorithms, and the clinical trial participant who uses these systems is responsible for commissioning the beam model, variation exists in the reported calculated dose distributions. Today's modern linac is manufactured to tight specifications, so variability within a linac model is quite low. The expectation is that a single dose calculation tool for a specific linac model can be used to accurately recalculate dose from patient plans submitted to the clinical trial community from any institution, providing a more meaningful outcome analysis. The analytical source model is described by a primary point source, a secondary extra-focal source, and a contaminant electron source. Off-axis energy softening and fluence effects are also included. Hyperbolic functions have been incorporated into the model to correct for the changes in output and in electron contamination with field size. A multileaf collimator (MLC) model is included to facilitate phantom and patient dose calculations. An offset to the MLC leaf positions is used to correct for the rudimentary assumed primary point source. Dose calculations of depth dose and profiles for field sizes 4 × 4 to 40 × 40 cm agree with measurement within 2% of the maximum dose or 2 mm distance to agreement (DTA) for 95% of the data points tested. The model was capable of predicting the depth of the maximum dose within 1 mm. Anthropomorphic phantom benchmark testing of modulated and patterned MLC treatment plans showed agreement with measurement within 3% in target regions using thermoluminescent dosimeters (TLDs). Using radiochromic film normalized to TLD, a gamma criterion of 3% of maximum dose and 2 mm DTA was applied, with a pass rate of at least 85% in the high-dose, high-gradient, and low-dose regions. Finally, recalculations of patient plans using DPM showed good agreement relative to a commercial TPS when comparing dose-volume histograms and 2D dose distributions. A unique analytical source model coupled to the DPM Monte Carlo dose calculation code has been modified and validated using basic beam data and anthropomorphic phantom measurements. While this tool can be applied in general use for a particular linac model, it was developed specifically to provide a single methodology for independently assessing treatment plan dose distributions from clinical institutions participating in National Cancer Institute trials.
Experimental Guidance for Isospin Symmetry Breaking Calculations via Single Neutron Pickup Reactions
NASA Astrophysics Data System (ADS)
Leach, K. G.; Garrett, P. E.; Bangay, J. C.; Bianco, L.; Demand, G. A.; Finlay, P.; Green, K. L.; Phillips, A. A.; Rand, E. T.; Sumithrarachchi, C. S.; Svensson, C. E.; Triambak, S.; Wong, J.; Ball, G.; Faestermann, T.; Krücken, R.; Hertenberger, R.; Wirth, H.-F.; Towner, I. S.
2013-03-01
Recent activity in superallowed isospin-symmetry-breaking correction calculations has prompted interest in experimental confirmation of these calculation techniques. The shell-model set of Towner and Hardy (2008) includes the opening of specific core orbitals that were previously frozen. This has resulted in significant shifts in some of the δC values, and an improved agreement of the individual corrected Ft values with the adopted world average of the 13 cases currently included in the high-precision evaluation of Vud. While the nucleus-to-nucleus variation of Ft is consistent with the conserved-vector-current (CVC) hypothesis of the Standard Model, these new calculations must be thoroughly tested, and guidance must be given for their improvement. Presented here are details of a 64Zn(d⃗,t)63Zn experiment undertaken to provide such guidance.
Kholod, N; Evans, M; Gusev, E; Yu, S; Malyshev, V; Tretyakova, S; Barinov, A
2016-03-15
This paper presents a methodology for calculating exhaust emissions from on-road transport in cities with low-quality traffic data and outdated vehicle registries. The methodology consists of data collection approaches and emission calculation methods. For data collection, the paper suggests using video survey and parking lot survey methods developed for the International Vehicular Emissions model. Additional sources of information include data from the largest transportation companies, vehicle inspection stations, and official vehicle registries. The paper suggests using the European Computer Programme to Calculate Emissions from Road Transport (COPERT) 4 model to calculate emissions, especially in countries that implemented European emissions standards. If available, the local emission factors should be used instead of the default COPERT emission factors. The paper also suggests additional steps in the methodology to calculate emissions only from diesel vehicles. We applied this methodology to calculate black carbon emissions from diesel on-road vehicles in Murmansk, Russia. The results from Murmansk show that diesel vehicles emitted 11.7 tons of black carbon in 2014. The main factors determining the level of emissions are the structure of the vehicle fleet and the level of vehicle emission controls. Vehicles without controls emit about 55% of black carbon emissions.
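The core accounting in such an inventory is a fleet roll-up of activity times emission factor. A schematic version with hypothetical fleet numbers (COPERT itself uses speed-dependent factors per technology class):

```python
# COPERT-style roll-up sketch: annual black-carbon emissions as
# sum(fleet_size * annual_km * EF). All numbers are hypothetical.
fleet = {
    # category: (vehicles, km per vehicle per year, BC emission factor g/km)
    "diesel car, Euro 3": (12_000, 15_000, 0.015),
    "diesel LCV, Euro 2": (4_000,  25_000, 0.060),
    "diesel bus, Euro 1": (600,    60_000, 0.180),
}
total_g = sum(n * km * ef for n, km, ef in fleet.values())
print(f"BC emissions = {total_g / 1e6:.1f} t/yr")
```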
Model for Correlating Real-Time Survey Results to Contaminant Concentrations - 12183
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Stuart A.
2012-07-01
The U.S. Environmental Protection Agency (EPA) Superfund program is developing a new Counts Per Minute (CPM) calculator to correlate real-time survey results, which are often expressed in counts per minute, with contaminant concentrations that are more typically provided in risk assessments or for cleanup levels, usually expressed in pCi/g or pCi/m2. Currently there is no EPA guidance for Superfund sites on correlating count-per-minute field survey readings back to risk-, dose-, or other ARAR-based concentrations. The CPM calculator is a web-based model that estimates a gamma detector response for a given level of contamination. The intent of the CPM calculator is to facilitate more real-time measurements within a Superfund response framework. The draft of the CPM calculator is still undergoing internal EPA review; this will be followed by external peer review. It is expected that the CPM calculator will at least be in peer review by the time of WM2012 and possibly finalized at that time. The CPM calculator should facilitate greater use of real-time measurement at Superfund sites and may also standardize the process of converting lab data to real-time measurements. It will thus lessen the amount of lab sampling needed for site characterization and confirmation surveys, but it will not remove the need for sampling. (authors)
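The conversion such a calculator performs can be caricatured in one line: concentration times decay rate, photon yield, effective source mass, and detector efficiency. Every factor in the sketch below is an assumed placeholder; a real detector-response model, such as the EPA tool, derives the efficiency from geometry and shielding:

```python
# Rough counts-per-minute estimate for a gamma surveyor over a wide
# contaminated soil area; every factor below is an assumed, illustrative value.
conc_pci_g = 5.0          # contaminant concentration, pCi/g
dps_per_pci = 0.037       # disintegrations per second per pCi (exact by definition)
gamma_yield = 0.85        # gammas per disintegration (nuclide-dependent, assumed)
eff = 0.002               # net counts per emitted gamma (geometry + intrinsic, assumed)
soil_mass_g = 50_000.0    # effective soil mass "seen" by the detector, g (assumed)

cpm = conc_pci_g * dps_per_pci * soil_mass_g * gamma_yield * eff * 60.0
print(f"expected response = {cpm:,.0f} CPM")
```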
Wave vector modification of the infinite order sudden approximation
NASA Astrophysics Data System (ADS)
Sachs, Judith Grobe; Bowman, Joel M.
1980-10-01
A simple method is proposed to modify the infinite order sudden approximation (IOS) in order to extend its region of quantitative validity. The method involves modifying the phase of the IOS scattering matrix to include a part calculated at the outgoing relative kinetic energy as well as a part calculated at the incoming kinetic energy. An immediate advantage of this modification is that the resulting S matrix is symmetric. We also present a closely related method in which the relative kinetic energies used in the calculation of the phase are determined from quasiclassical trajectory calculations. A set of trajectories is run with the initial state being the incoming state, and another set is run with the initial state being the outgoing state, and the average final relative kinetic energy of each set is obtained. One part of the S-operator phase is then calculated at each of these kinetic energies. We apply these methods to vibrationally inelastic collinear collisions of an atom and a harmonic oscillator, and calculate transition probabilities Pn1→nf for three model systems. For systems which are sudden, or nearly so, the agreement with exact quantum close-coupling calculations is substantially improved over standard IOS ones when Δn=‖nf-ni‖ is large, and the corresponding transition probability is small, i.e., less than 0.1. However, the modifications we propose will not improve the accuracy of the IOS transition probabilities for any collisional system unless the standard form of IOS already gives at least qualitative agreement with exact quantal calculations. We also suggest comparisons between some classical quantities and sudden predictions which should help in determining the validity of the sudden approximation. This is useful when exact quantal data is not available for comparison.
LANL* V1.0: a radiation belt drift shell model suitable for real-time and reanalysis applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koller, Josep; Reeves, Geoffrey D; Friedel, Reiner H W
2008-01-01
Space weather modeling, forecasts, and predictions, especially for the radiation belts in the inner magnetosphere, require detailed information about the Earth's magnetic field. Results depend on the magnetic field model and on the L* (pronounced L-star) values used to describe particle drift shells. Space weather models require integrating particle motions along trajectories that encircle the Earth. Numerical integration typically takes on the order of 10^5 calls to a magnetic field model, which makes the L* calculation very slow, in particular when using a dynamic and more accurate magnetic field model. Researchers currently tend to pick simplistic models over more accurate ones, risking large inaccuracies and even wrong conclusions. For example, magnetic field models affect the calculation of electron phase space density through the adiabatic invariants, including the drift shell value L*. We present here a new method using a surrogate model based on a neural network technique to replace the time-consuming L* calculations made with modern magnetic field models. The advantage of surrogate models (or meta-models) is that they can compute the same output in a fraction of the time while adding only a marginal error. Our drift shell model LANL* (Los Alamos National Lab L-star) is based on L* calculations using the TSK03 model. The surrogate model has currently been tested and validated only for geosynchronous regions, but the method is generally applicable to any satellite orbit. Computations with the new model are several million times faster than the standard integration method while adding less than 1% error. Currently, real-time applications for forecasting and even nowcasting inner magnetospheric space weather are limited partly by the long computing time of accurate L* values; without them, real-time applications are limited in accuracy. Reanalysis of past conditions in the inner magnetosphere is used to understand physical processes and their effects; without sufficiently accurate L* values, the interpretation of reanalysis results becomes difficult and uncertain. However, with a method that can calculate accurate L* values orders of magnitude faster, analyzing whole solar cycles' worth of data suddenly becomes feasible.
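In outline, building such a surrogate means generating training pairs with the expensive code and fitting a regressor. The sketch below substitutes a cheap analytic function for the expensive drift-shell calculation, so it shows the workflow only, not LANL* itself:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Surrogate-model workflow sketch: learn a cheap mapping inputs -> L* from
# examples labelled by an expensive field/drift-shell code. Here the "truth"
# is a made-up analytic stand-in, not TSK03 output.
rng = np.random.default_rng(0)
X = rng.uniform([5.5, 0.0, 0.0], [7.5, 24.0, 9.0], size=(20_000, 3))  # r, MLT, Kp
lstar = X[:, 0] - 0.05 * X[:, 2] * (1 + np.cos(2 * np.pi * X[:, 1] / 24.0))

X_tr, X_te, y_tr, y_te = train_test_split(X, lstar, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0)
net.fit(X_tr, y_tr)
err = np.abs(net.predict(X_te) - y_te) / y_te
print(f"median relative error: {100 * np.median(err):.2f}%")
```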
A systematic uncertainty analysis of an evaluative fate and exposure model.
Hertwich, E G; McKone, T E; Pease, W S
2000-08-01
Multimedia fate and exposure models are widely used to regulate the release of toxic chemicals, to set cleanup standards for contaminated sites, and to evaluate emissions in life-cycle assessment. CalTOX, one of these models, is used to calculate the potential dose, an outcome that is combined with the toxicity of the chemical to determine the Human Toxicity Potential (HTP), used to aggregate and compare emissions. The comprehensive assessment of the uncertainty in the potential dose calculation in this article provides the information necessary to evaluate the reliability of decisions based on the HTP. A framework for uncertainty analysis in multimedia risk assessment is proposed and evaluated with four types of uncertainty. Parameter uncertainty is assessed through Monte Carlo analysis. The variability in landscape parameters is assessed through a comparison of potential dose calculations for different regions in the United States. Decision rule uncertainty is explored through a comparison of the HTP values under open and closed system boundaries. Model uncertainty is evaluated through two case studies, one using alternative formulations for calculating the plant concentration and the other testing the steady-state assumption for wet deposition. This investigation shows that steady-state conditions for the removal of chemicals from the atmosphere are not appropriate and result in an underestimate of the potential dose for 25% of the 336 chemicals evaluated.
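Parameter uncertainty assessment of the kind described here is conceptually simple: sample the uncertain inputs, run the model per sample, and summarize the output distribution. A toy stand-in for the dose model, with all distributions invented:

```python
import numpy as np

# Monte Carlo parameter-uncertainty sketch: propagate lognormal input
# uncertainty through a toy potential-dose model dose = emission * intake / k.
rng = np.random.default_rng(42)
n = 100_000
emission = rng.lognormal(mean=np.log(1.0), sigma=0.5, size=n)   # kg/day (assumed)
intake   = rng.lognormal(mean=np.log(0.02), sigma=0.3, size=n)  # m3/kg-day (assumed)
k_deg    = rng.lognormal(mean=np.log(0.1), sigma=0.8, size=n)   # 1/day (assumed)

dose = emission * intake / k_deg
lo, med, hi = np.percentile(dose, [2.5, 50.0, 97.5])
print(f"potential dose: median {med:.3f}, 95% interval [{lo:.3f}, {hi:.3f}]")
```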
NASA Astrophysics Data System (ADS)
Xianqiang, He; Delu, Pan; Yan, Bai; Qiankun, Zhu
2005-10-01
A numerical model of the vector radiative transfer of the coupled ocean-atmosphere system, named PCOART, is developed based on the matrix-operator method. In PCOART, using Fourier analysis, the vector radiative transfer equation (VRTE) is split into a set of independent equations with zenith angle as the only angular coordinate. Using Gaussian quadrature, the VRTE is then transformed into a matrix equation, which is solved using the adding-doubling method. According to the reflective and refractive properties of the ocean-atmosphere interface, the vector radiative transfer models of ocean and atmosphere are coupled in PCOART. Comparison with the exact Rayleigh scattering look-up table of MODIS (Moderate-resolution Imaging Spectroradiometer) shows that PCOART is an exact numerical calculation model and that its treatments of multiple scattering and polarization are correct. Validation against standard problems of radiative transfer in water shows that PCOART can also be used to calculate underwater radiative transfer problems. Therefore, PCOART is a useful tool for exactly calculating the vector radiative transfer of the coupled ocean-atmosphere system, which can be used to study the polarization properties of the radiance in the whole ocean-atmosphere system and the remote sensing of the atmosphere and ocean.
Propagation of the Hawaiian-Emperor volcano chain by Pacific plate cooling stress
Stuart, W.D.; Foulger, G.R.; Barall, M.
2007-01-01
The lithosphere crack model, the main alternative to the mantle plume model for age-progressive magma emplacement along the Hawaiian-Emperor volcano chain, requires the maximum horizontal tensile stress to be normal to the volcano chain. However, published stress fields calculated from Pacific lithosphere tractions and body forces (e.g., subduction pull, basal drag, lithosphere density) are not optimal for southeast propagation of a stress-free, vertical tensile crack coincident with the Hawaiian segment of the Hawaiian-Emperor chain. Here we calculate the thermoelastic stress rate for present-day cooling of the Pacific plate using a spherical shell finite element representation of the plate geometry. We use observed seafloor isochrons and a standard model for lithosphere cooling to specify the time dependence of vertical temperature profiles. The calculated stress rate multiplied by a time increment (e.g., 1 m.y.) then gives a thermoelastic stress increment for the evolving Pacific plate. Near the Hawaiian chain position, the calculated stress increment in the lower part of the shell is tensional, with maximum tension normal to the chain direction. Near the projection of the chain trend to the southeast beyond Hawaii, the stress increment is compressive. This incremental stress field has the form necessary to maintain and propagate a tensile crack or similar lithosphere flaw and is thus consistent with the crack model for the Hawaiian volcano chain. © 2007 The Geological Society of America.
Benavides, A L; Aragones, J L; Vega, C
2016-03-28
The solubility of NaCl in water is evaluated using three force-field models: Joung-Cheatham NaCl dissolved in two different water models (SPC/E and TIP4P/2005), and Smith-Dang NaCl in SPC/E water. The methodology based on free-energy calculations [E. Sanz and C. Vega, J. Chem. Phys. 126, 014507 (2007)] and [J. L. Aragones et al., J. Chem. Phys. 136, 244508 (2012)] has been used, except that all calculations for NaCl in solution were performed with molecular dynamics simulations using the GROMACS package instead of homemade MC programs. We have explored new, lower molalities and made longer runs to improve the accuracy of the calculations. Exploring the low-molality region allowed us to obtain an analytical expression for the chemical potential of the ions in solution as a function of molality, valid for a wider range of molalities and including the infinitely dilute case. These new results are in better agreement with recent estimates of the solubility obtained with other methodologies. In addition, two simple empirical rules have been obtained for roughly estimating the solubility of a given model, by analyzing ion-pair formation as a function of molality and/or by calculating the difference between the NaCl solid chemical potential and the standard chemical potential of the salt in solution.
NASA Technical Reports Server (NTRS)
Adams, Mitzi; Habash Krause, Linda
2012-01-01
Recent interest in using electrodynamic tethers (EDTs) for orbital maneuvering in Low Earth Orbit (LEO) has prompted the development of the Marshall ElectroDynamic Tether Orbit Propagator (MEDTOP) model. The model comprises several modules which address various aspects of EDT propulsion, including calculation of state vectors using a standard orbit propagator (e.g., J2), an atmospheric drag model, realistic ionospheric and magnetic field models, space weather effects, and tether librations. The natural electromotive force (EMF) attained along a radially-aligned conductive tether results in electrons flowing down the tether and accumulating on the lower-altitude spacecraft. The energy that drives this EMF is drawn from the orbital energy of the system; thus, EDTs are often proposed as de-orbiting systems. However, when the current is reversed using satellite charged particle sources, propulsion is possible. One of the most difficult challenges of the modeling effort is to ascertain the equivalent circuit between the spacecraft and the ionospheric plasma. The present study investigates the use of the NASA Charging Analyzer Program (NASCAP) to calculate currents to and from the tethered satellites and the ionospheric plasma. NASCAP is a sophisticated set of computational tools to model the surface charging of three-dimensional (3D) spacecraft surfaces in a time-varying space environment. The model's surface is tessellated into a collection of facets, and NASCAP calculates currents and potentials for each one. Additionally, NASCAP provides for the construction of one or more nested grids to calculate space potential and time-varying electric fields. This provides the capability to track individual particle orbits, to model charged particle wakes, and to incorporate external charged particle sources. In this study, we have developed a model for calculating currents incident on an electrodynamic tethered satellite system, and first results are shown here.
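The motional EMF that drives the tether current follows from the Lorentz force on charges in the moving conductor, EMF = ∫ (v × B) · dL. A minimal sketch with illustrative LEO numbers (assumptions for the sketch, not MEDTOP outputs):

```python
import numpy as np

# Illustrative LEO values
v = np.array([7500.0, 0.0, 0.0])   # orbital velocity, m/s
B = np.array([0.0, 0.0, 3.0e-5])   # geomagnetic field, T (~0.3 gauss)
L = np.array([0.0, 5000.0, 0.0])   # straight 5 km tether, radially aligned

# Motional EMF of a straight conductor: EMF = (v x B) . L
emf = np.dot(np.cross(v, B), L)
print(f"motional EMF ~ {emf:.0f} V")   # about -1100 V for these numbers
```

A kilovolt-scale EMF across a multi-kilometre tether is why the equivalent circuit with the ionospheric plasma, rather than the EMF itself, dominates the modeling difficulty.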
Patient-specific catheter shaping for the minimally invasive closure of the left atrial appendage.
Graf, Eva C; Ott, Ilka; Praceus, Julian; Bourier, Felix; Lueth, Tim C
2018-06-01
The minimally invasive closure of the left atrial appendage is a promising alternative to anticoagulation for stroke prevention in patients suffering from atrial fibrillation. One of the challenges of this procedure is the correct positioning and coaxial alignment of the tip of the catheter sheath with the implant landing zone. In this paper, a novel preoperative planning system is proposed that allows patient-individual shaping of catheters, facilitating correct positioning of the catheter sheath by offering a patient-specific catheter shape. Based on preoperative three-dimensional image data, anatomical points and the planned implant position are marked interactively, and a patient-specific catheter shape is calculated if the standard catheter is not considered suitable. An approach to calculating a catheter shape with four bends by maximizing the bending radii is presented. Shaping of the catheter is supported by a bending form that is automatically generated in the planning program and can be manufactured directly using additive manufacturing methods. The feasibility of planning and shaping the catheter was successfully demonstrated using six data sets. The patient-specific catheters were tested against standard catheters by physicians on heart models. In four of the six tested models, the participating physicians rated the patient-individual catheters better than the standard catheter. The novel approach for preoperatively planned and shaped patient-specific catheters designed for the minimally invasive closure of the left atrial appendage was successfully implemented, and a feasibility test showed promising results in anatomies that are difficult to access with the standard catheter.
Modeling oil generation with time-temperature index graphs based on the Arrhenius equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hunt, J.M.; Lewan, M.D.; Hennet, R.J.C.
1991-04-01
The time and depth of oil generation from petroleum source rocks containing type II kerogens can be determined using time-temperature index (TTI) graphs based on the Arrhenius equation. Activation energies (E) and frequency factors (A) used in the Arrhenius equation were obtained from hydrous pyrolysis experiments on rock samples in which the kerogens represent the range of type II kerogen compositions encountered in most petroleum basins. The E and A values obtained were used to construct graphs that define the beginning and end of oil generation for most type II kerogens having chemical compositions in the range of these standards. Activation energies of these standard kerogens vary inversely with their sulfur content. The kerogen with the highest sulfur content had the lowest E value and was the fastest in generating oil, whereas the kerogen with the lowest sulfur content had the highest E value and was the slowest in generating oil. These standard kerogens were designated as types IIA, B, C, and D on the basis of decreasing sulfur content and corresponding increasing time-temperature requirements for generating oil. The ΣTTI_ARR values determined graphically with these type II kerogen standards in two basin models were compared with a computer calculation using 2,000 increments. The graphical method came within ±3% of the computer calculation. As type II kerogens are the major oil generators in the world, these graphs should have wide application in making preliminary evaluations of the depth of the oil window in exploration areas.
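The underlying arithmetic is a discretized integral of the Arrhenius rate over the burial history, ΣTTI_ARR = Σ A exp(-E/RT(t)) Δt. A sketch under assumed kinetic parameters and a linear burial-heating history (the E and A values here are placeholders, not the hydrous-pyrolysis fits):

```python
import numpy as np

# Placeholder type II kerogen kinetics; the paper's E and A come from hydrous
# pyrolysis and vary with kerogen sulfur content (types IIA-IID).
E = 220e3                 # activation energy, J/mol
A = 1.0e14                # frequency factor, 1/s
R = 8.314                 # J/(mol K)
SEC_PER_MY = 3.15576e13   # seconds per million years

def sum_tti(t_pts_my, temp_pts_c, n):
    """Sum A*exp(-E/RT) over a burial history T(t) using n time increments."""
    t = np.linspace(t_pts_my[0], t_pts_my[-1], n + 1)
    T = np.interp(t, t_pts_my, temp_pts_c) + 273.15
    rate = A * np.exp(-E / (R * T))              # instantaneous rate, 1/s
    return rate.mean() * (t[-1] - t[0]) * SEC_PER_MY

# Assumed linear burial history: 20 C at deposition to 160 C after 100 m.y.
t_pts, T_pts = np.array([0.0, 100.0]), np.array([20.0, 160.0])

fine = sum_tti(t_pts, T_pts, n=2000)     # "computer" calculation
coarse = sum_tti(t_pts, T_pts, n=10)     # graph-like coarse increments
print(f"SumTTI fine = {fine:.3e}, coarse = {coarse:.3e}, "
      f"deviation = {100 * (coarse / fine - 1):+.1f}%")
```

The comparison of a coarse-increment sum against a 2,000-increment sum mirrors the paper's graphical-versus-computer check, though the deviation here depends on the assumed history.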
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 3 2011-01-01 2011-01-01 false Standard calculation procedure. 434.510 Section 434.510 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard...
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 3 2012-01-01 2012-01-01 false Standard calculation procedure. 434.510 Section 434.510 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard...
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 3 2014-01-01 2014-01-01 false Standard calculation procedure. 434.510 Section 434.510 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard...
40 CFR 420.04 - Calculation of pretreatment standards.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Calculation of pretreatment standards. 420.04 Section 420.04 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) EFFLUENT GUIDELINES AND STANDARDS IRON AND STEEL MANUFACTURING POINT SOURCE CATEGORY General Provisions § 420.04...
10 CFR 434.510 - Standard calculation procedure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Standard calculation procedure. 434.510 Section 434.510 Energy DEPARTMENT OF ENERGY ENERGY CONSERVATION ENERGY CODE FOR NEW FEDERAL COMMERCIAL AND MULTI-FAMILY HIGH RISE RESIDENTIAL BUILDINGS Building Energy Cost Compliance Alternative § 434.510 Standard...
Conservative Tests under Satisficing Models of Publication Bias.
McCrary, Justin; Christensen, Garret; Fanelli, Daniele
2016-01-01
Publication bias leads consumers of research to observe a selected sample of statistical estimates calculated by producers of research. We calculate critical values for statistical significance that could help to adjust after the fact for the distortions created by this selection effect, assuming that the only source of publication bias is file drawer bias. These adjusted critical values are easy to calculate and differ from unadjusted critical values by approximately 50%: rather than rejecting a null hypothesis when the t-ratio exceeds 2, the analysis suggests rejecting a null hypothesis when the t-ratio exceeds 3. Samples of published social science research indicate that on average, across research fields, approximately 30% of published t-statistics fall between the standard and adjusted cutoffs.
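The adjusted cutoff of roughly 3 can be reproduced with a short calculation. Under the stated file-drawer model, only estimates with |t| > 1.96 are published, so under the null the published t-ratios follow a truncated standard normal, and a size-0.05 test requires P(|Z| > c) = 0.05 × P(|Z| > 1.96). A sketch:

```python
from scipy.stats import norm

alpha = 0.05
publish_cut = norm.ppf(0.975)            # 1.96: only |t| > 1.96 is published

# Null distribution of published t-ratios is standard normal truncated to
# |t| > 1.96, so the size-alpha cutoff c solves P(|Z|>c) = alpha*P(|Z|>1.96).
tail = alpha * 2 * norm.sf(publish_cut)  # required two-sided tail mass
c = norm.isf(tail / 2)
print(f"adjusted critical value: {c:.2f}")   # ~3.02, vs the unadjusted 1.96
```

The result, about 3.02, is roughly 50% larger than 1.96, matching the rule of thumb in the abstract.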
McCaffrey, J P; Mainegra-Hing, E; Kawrakow, I; Shortt, K R; Rogers, D W O
2004-06-21
The basic equation for establishing a 60Co air-kerma standard based on a cavity ionization chamber includes a wall correction term that corrects for the attenuation and scatter of photons in the chamber wall. For over a decade, the validity of the wall correction terms determined by extrapolation methods (K_w K_cep) has been strongly challenged by Monte Carlo (MC) calculation methods (K_wall). Using the linear extrapolation method with experimental data, K_w K_cep was determined in this study for three different styles of primary-standard-grade graphite ionization chamber: cylindrical, spherical and plane-parallel. For measurements taken with the same 60Co source, the air-kerma rates for these three chambers, determined using extrapolated K_w K_cep values, differed by up to 2%. The MC code EGSnrc was used to calculate the values of K_wall for these three chambers. Use of the calculated K_wall values gave air-kerma rates that agreed within 0.3%. The accuracy of this code was affirmed by its reliability in modelling the complex structure of the response curve obtained by rotation of the non-rotationally symmetric plane-parallel chamber. These results demonstrate that the linear extrapolation technique leads to errors in the determination of air kerma.
NASA Astrophysics Data System (ADS)
Abe, M.; Prasannaa, V. S.; Das, B. P.
2018-03-01
Heavy polar diatomic molecules are currently among the most promising probes of fundamental physics. Constraining the electric dipole moment of the electron (eEDM), in order to explore physics beyond the standard model, requires a synergy of molecular experiment and theory. Recent experimental advances in this field have motivated us to implement a finite-field coupled-cluster (FFCC) approach. This work has distinct advantages over the theoretical methods that we had used earlier in the analysis of eEDM searches. We used relativistic FFCC to calculate molecular properties of interest to eEDM experiments, that is, the effective electric field (E_eff) and the permanent electric dipole moment (PDM). We theoretically determine these quantities for the alkaline-earth monofluorides (AEMs), the mercury monohalides (HgX), and PbF. The latter two systems, as well as BaF from the AEMs, are of interest to eEDM searches. We also report the calculation of the properties using a relativistic finite-field coupled-cluster approach with single, double, and partial triple excitations, which is considered the gold standard of electronic structure calculations. We also present a detailed error estimate, including errors that stem from our choice of basis sets, and higher-order correlation effects.
Reithinger, Richard; Coleman, Paul G
2007-01-01
Background: Although Kabul city, Afghanistan, is currently the largest focus of cutaneous leishmaniasis (CL) worldwide, with an estimated 67,500 cases, donor interest in CL has been comparatively poor because the disease is non-fatal. Since 1998 HealthNet TPO (HNTPO) has implemented leishmaniasis diagnosis and treatment services in Kabul, and in 2003 alone 16,390 patients were treated in six health clinics in and around the city. The aim of our study was to calculate the cost-effectiveness of the treatment regimen implemented for CL patients attending HNTPO clinics in the Afghan complex emergency setting. Methods: Using clinical and cost data from the ongoing operational HNTPO program in Kabul, published and unpublished sources, and discussions with researchers, we developed models that included probabilistic sensitivity analysis to calculate ranges for the cost per disability-adjusted life year (DALY) averted for the implemented CL treatment regimen. We calculated the cost-effectiveness of intralesional and intramuscular administration of the pentavalent antimonial drug sodium stibogluconate, HNTPO's current CL 'standard treatment'. Results: The cost of the standard treatment was calculated to be US$ 27 (95% CI 20-36) per patient treated and cured. The cost per DALY averted per patient cured with the standard treatment was estimated to be approximately US$ 1,200 (761-1,827). Conclusion: According to WHO-CHOICE criteria, treatment of CL in Kabul, Afghanistan, is not a cost-effective health intervention. The rationale for treating CL patients in Afghanistan and elsewhere is discussed. PMID:17263879
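The probabilistic sensitivity analysis amounts to propagating parameter uncertainty through the cost-per-DALY ratio by Monte Carlo. A schematic sketch with invented parameter distributions (placeholders chosen to match the headline numbers, not HNTPO program data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical input distributions (means tuned to the reported estimates)
cost_per_patient = rng.gamma(16.0, 27.0 / 16.0, size=n)   # mean ~US$27
daly_averted = rng.gamma(9.0, 0.0225 / 9.0, size=n)       # mean ~0.0225 DALY

cost_per_daly = cost_per_patient / daly_averted
lo, med, hi = np.percentile(cost_per_daly, [2.5, 50, 97.5])
print(f"cost per DALY averted: US$ {med:,.0f} (95% CI {lo:,.0f} - {hi:,.0f})")
```

Drawing the ratio's distribution, rather than dividing point estimates, is what yields the reported interval around US$ 1,200.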
Element distributions after binary fission of ⁴⁴Ti
DOE Office of Scientific and Technical Information (OSTI.GOV)
Płaneta, R.; Belery, P.; Brzychczyk, J.
1986-08-01
Inclusive and coincidence measurements have been performed to study symmetric fragmentation of ⁴⁴Ti binary decay from the ³²S + ¹²C reaction at 280 MeV incident energy. Element distributions after binary decay were measured. Angular distributions and fragment correlations are presented. Total c.m. kinetic energy for the symmetric products is extracted from our data and from Monte-Carlo model calculations including Q-value fluctuations. This result was compared to liquid drop model calculations and standard fission systematics. Comparison between the experimental value of the total kinetic energy and the rotating liquid-drop model predictions locates the angular momentum window for symmetric splitting of ⁴⁴Ti between 33ħ and 38ħ. It also showed that 50% of the corresponding rotational energy contributes to the total kinetic energy values. The dominant reaction mechanism was found to be symmetric splitting followed by evaporation.
NASA Technical Reports Server (NTRS)
Spiers, Gary D.
1994-01-01
Section 1 details the theory used to build the lidar model, provides results of using the model to evaluate AEOLUS instrument designs, and provides snapshots of the visual appearance of the coded model. Appendix A contains a Fortran program to calculate various forms of the refractive index structure function. This program was used to determine the refractive index structure function used in the main lidar simulation code. Appendix B contains a memo on the optimization of the lidar telescope geometry for a line-scan geometry. Appendix C contains the code for the main lidar simulation and brief instructions on running the code. Appendix D contains a Fortran code to calculate the maximum permissible exposure for the eye from the ANSI Z136.1-1992 eye safety standards. Appendix E contains a paper on the eye safety analysis of a space-based coherent lidar presented at the 7th Coherent Laser Radar Applications and Technology Conference, Paris, France, 19-23 July 1993.
Nitric oxide concentration near the mesopause as deduced from ionospheric absorption measurements
NASA Astrophysics Data System (ADS)
Lastovicka, J.
The upper-D-region NO concentration is calculated on the basis of published 2775-kHz absorption, Lyman-alpha (OSO-5), and X-ray (Solrad-9) data obtained over Central Europe in June-August 1969, 1970, and 1972. Ionization-rate and radio-wave-absorption profiles for solar zenith angles of 60, 70 and 40 deg are computed, presented graphically, and compared with model calculations to derive the NO-concentration correction coefficients necessary to make the Lyman-alpha/X-ray flux ratios of the models of Meira (1971), Baker et al. (1977), Tohmatsu and Iwagami (1976), and Tisone (1973) agree with the observed ratios. Values of the corrected NO concentration include 6.5 × 10¹³ m⁻³ at 78 km and 8.5 × 10¹³ m⁻³ at 90 km. The values are shown to be higher than those of standard models but within the range of observed concentrations.
A multilayer model of time dependent deformation following an earthquake on a strike-slip fault
NASA Technical Reports Server (NTRS)
Cohen, S. C.
1981-01-01
A multilayer finite element model of the Earth for calculating time-dependent deformation and stress following an earthquake on a strike-slip fault is discussed. The model involves shear properties of an elastic upper lithosphere, a standard-linear-solid viscoelastic lower lithosphere, a Maxwell viscoelastic asthenosphere, and an elastic mesosphere. Systematic variations of fault and layer depths and comparisons with simpler elastic-lithosphere-over-viscoelastic-asthenosphere calculations are analyzed. Both the creep of the lower lithosphere and that of the asthenosphere contribute to the postseismic deformation. The magnitude of the deformation is enhanced by a short distance between the bottom of the fault (slip zone) and the top of the creep region but is less sensitive to the thickness of the creeping layer. Postseismic restressing is increased as the lower lithosphere becomes more viscoelastic, but the tendency for the width of the restressed zone to grow with time is retarded.
Lee, Mi Kyung; Coker, David F
2016-08-18
An accurate approach for computing intermolecular and intrachromophore contributions to spectral densities to describe the electronic-nuclear interactions relevant for modeling excitation energy transfer processes in light harvesting systems is presented. The approach is based on molecular dynamics (MD) calculations of classical correlation functions of long-range contributions to excitation energy fluctuations and a separate harmonic analysis and single-point gradient quantum calculations for electron-intrachromophore vibrational couplings. A simple model is also presented that enables detailed analysis of the shortcomings of standard MD-based excitation energy fluctuation correlation function approaches. The method introduced here avoids these problems, and its reliability is demonstrated in accurate predictions for bacteriochlorophyll molecules in the Fenna-Matthews-Olson pigment-protein complex, where excellent agreement with experimental spectral densities is found. This efficient approach can provide instantaneous spectral densities for treating the influence of fluctuations in environmental dissipation on fast electronic relaxation.
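The long-range (intermolecular) piece of such a spectral density is typically obtained from the classical autocorrelation function C(t) of the MD excitation-energy gap through a cosine transform with a harmonic quantum prefactor, J(ω) = (βω/2) ∫₀^∞ C(t) cos(ωt) dt. A sketch on synthetic data; the damped-oscillation C(t) below is a stand-in for an MD trajectory, and units are kept loose:

```python
import numpy as np

# Stand-in classical energy-gap autocorrelation C(t): a damped oscillation.
dt = 1.0                                   # fs
t = np.arange(0.0, 2000.0, dt)             # fs
c_light = 2.998e-5                         # cm/fs, converts cm^-1 to phase
C = np.exp(-t / 300.0) * np.cos(2 * np.pi * c_light * 180.0 * t)

# Harmonic prefactor: J(w) = (beta*w/2) * int_0^inf C(t) cos(w t) dt
kT = 208.5                                 # cm^-1 at ~300 K
omega = np.linspace(1.0, 600.0, 600)       # cm^-1
phase = 2 * np.pi * c_light * omega[:, None] * t[None, :]
cos_transform = (C[None, :] * np.cos(phase)).sum(axis=1) * dt
J = 0.5 * (omega / kT) * cos_transform     # spectral density, arbitrary units

print(f"J(omega) peaks near {omega[np.argmax(J)]:.0f} cm^-1 "
      "(the 180 cm^-1 mode of the stand-in C(t))")
```

The paper's point is that this standard route misses intrachromophore vibrational couplings, which the authors supply instead from harmonic analysis and single-point gradient quantum calculations.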
Effect of clustering on the emission of light charged particles
NASA Astrophysics Data System (ADS)
Kundu, Samir; Bhattacharya, C.; Rana, T. K.; Bhattacharya, S.; Pandey, R.; Banerjee, K.; Roy, Pratap; Meena, J. K.; Mukherjee, G.; Ghosh, T. K.; Mukhopadhyay, S.; Saha, A. K.; Sahoo, J. K.; Mandal Saha, R.; Srivastava, V.; Sinha, M.; Asgar, Md. A.
2018-04-01
Energy spectra of light charged particles emitted in the reaction p + ²⁷Al → ²⁸Si* have been studied and compared with statistical model calculations. Unlike ¹⁶O + ¹²C, where a large deformation was observed in ²⁸Si*, the energy spectra of α particles were well explained by the statistical model calculation with standard "deformability parameters" obtained using the rotating liquid drop model. It seems that α-clustering in the entrance channel causes extra deformation in ²⁸Si* in the case of ¹⁶O + ¹²C, but reanalysis of other published data shows several cases where extra deformation was observed for composites produced via non-α-cluster entrance channels as well. An empirical relation was found between mass asymmetry in the entrance channel and deformation, which indicates that, along with α-clustering, mass asymmetry may also affect the emission of light charged particles.
HepSim: A repository with predictions for high-energy physics experiments
Chekanov, S. V.
2015-02-03
A file repository for calculations of cross sections and kinematic distributions using Monte Carlo generators for high-energy collisions is discussed. The repository is used to facilitate effective preservation and archiving of data from theoretical calculations and for comparisons with experimental data. The HepSim data library is publicly accessible and includes a number of Monte Carlo event samples with Standard Model predictions for current and future experiments. The HepSim project includes a software package to automate the process of downloading and viewing online Monte Carlo event samples. Data streaming over a network for end-user analysis is discussed.
Nelson, Daniel R; Fleming, George T; Kilcup, Gregory W
2003-01-17
A standing mystery in the standard model is the unnatural smallness of the strong CP-violating phase. A massless up quark has long been proposed as one potential solution. A lattice calculation of the constants of the chiral Lagrangian essential for the determination of the up quark mass, 2α₈ − α₅, is presented. We find 2α₈ − α₅ = 0.29 ± 0.18, which corresponds to m_u/m_d = 0.410 ± 0.036. This is the first such calculation using a physical number of dynamical light quarks, N_f = 3.
Revealing the ISO/IEC 9126-1 Clique Tree for COTS Software Evaluation
NASA Technical Reports Server (NTRS)
Morris, A. Terry
2007-01-01
Previous research has shown that acyclic dependency models, if they exist, can be extracted from software quality standards and that these models can be used to assess software safety and product quality. In the case of commercial off-the-shelf (COTS) software, the extracted dependency model can be used in a probabilistic Bayesian network context for COTS software evaluation. Furthermore, while experts typically employ Bayesian networks to encode domain knowledge, secondary structures (clique trees) from Bayesian network graphs can be used to determine the probabilistic distribution of any software variable (attribute) using any clique that contains that variable. Secondary structures, therefore, provide insight into the fundamental nature of graphical networks. This paper will apply secondary structure calculations to reveal the clique tree of the acyclic dependency model extracted from the ISO/IEC 9126-1 software quality standard. Suggestions will be provided to describe how the clique tree may be exploited to aid efficient transformation of an evaluation model.
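The secondary-structure construction can be sketched with networkx: moralize the acyclic dependency model, find the maximal cliques, and take a maximum-weight spanning tree of the clique graph with separator sizes as weights. The small DAG below is an arbitrary stand-in, not the ISO/IEC 9126-1 extraction, and its moral graph is assumed to be already triangulated:

```python
import itertools
import networkx as nx

# Stand-in acyclic dependency model (not the ISO/IEC 9126-1 extraction)
dag = nx.DiGraph([("functionality", "reliability"),
                  ("functionality", "usability"),
                  ("reliability", "maintainability"),
                  ("usability", "maintainability")])

# 1. Moralize: marry common parents, drop edge directions
moral = nx.moral_graph(dag)

# 2. Maximal cliques of the (assumed chordal) moral graph
cliques = [frozenset(c) for c in nx.find_cliques(moral)]

# 3. Clique graph weighted by separator size; max spanning tree = clique tree
cg = nx.Graph()
cg.add_nodes_from(cliques)
for a, b in itertools.combinations(cliques, 2):
    if a & b:
        cg.add_edge(a, b, weight=len(a & b))
tree = nx.maximum_spanning_tree(cg)
for a, b in tree.edges():
    print(sorted(a), "--", sorted(a & b), "--", sorted(b))
```

Each clique in the resulting tree can then serve as the marginalization scope for any attribute it contains, which is the exploitation route the paper suggests.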
Haberman, Shelby J; Sinharay, Sandip; Chon, Kyong Hee
2013-07-01
Residual analysis (e.g. Hambleton & Swaminathan, Item response theory: principles and applications, Kluwer Academic, Boston, 1985; Hambleton, Swaminathan, & Rogers, Fundamentals of item response theory, Sage, Newbury Park, 1991) is a popular method to assess fit of item response theory (IRT) models. We suggest a form of residual analysis that may be applied to assess item fit for unidimensional IRT models. The residual analysis consists of a comparison of the maximum-likelihood estimate of the item characteristic curve with an alternative ratio estimate of the item characteristic curve. The large-sample distribution of the residual is proved to be standard normal when the IRT model fits the data. We compare the performance of our suggested residual to the standardized residual of Hambleton et al. (Fundamentals of item response theory, Sage, Newbury Park, 1991) in a detailed simulation study. We then calculate our suggested residuals using data from an operational test. The residuals appear to be useful in assessing the item fit for unidimensional IRT models.
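The flavor of such a residual check can be sketched for a 2PL item: bin examinees on ability, compare observed proportions correct with the model item characteristic curve, and standardize by the binomial standard error. The binning and the specific ratio estimator below are illustrative simplifications, not the authors' exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)

def icc(theta, a=1.2, b=0.3):
    """2PL item characteristic curve."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Simulate responses for one item under the fitted model
theta = rng.normal(size=20_000)
y = rng.random(theta.size) < icc(theta)

# Standardized residuals per ability bin: (O - E) / sqrt(E(1-E)/n)
edges = np.linspace(-3, 3, 13)
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (theta >= lo) & (theta < hi)
    n = m.sum()
    if n < 50:
        continue
    obs = y[m].mean()
    exp = icc(theta[m]).mean()
    z = (obs - exp) / np.sqrt(exp * (1 - exp) / n)
    print(f"[{lo:+.1f},{hi:+.1f})  n={n:5d}  obs={obs:.3f}  exp={exp:.3f}  z={z:+.2f}")
```

When the model fits, the z values behave like draws from a standard normal, which is the property the paper proves for its residual in large samples.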
NASA Technical Reports Server (NTRS)
Kim, Y.-C.; Demarque, P.; Guenther, D. B.
1991-01-01
The Yale Rotating Stellar Evolution Code (YREC) has been improved by incorporating the Mihalas-Hummer-Daeppen equation of state, an improved opacity interpolation routine, and the effects of molecular opacities calculated at Los Alamos. The effect of each of the improvements on the standard solar model has been tested independently by computing the corresponding solar nonradial oscillation frequencies. According to these tests, the Mihalas-Hummer-Daeppen equation of state has very little effect on the model's low-l p-mode oscillation spectrum compared to the model using the existing analytical equation of state implemented in YREC. On the other hand, the molecular opacity does improve the model's oscillation spectrum. The effect of molecular opacity on the computed solar oscillation frequencies is much larger than that of the Mihalas-Hummer-Daeppen equation of state. Together, the two improvements to the physics reduce the discrepancy with observations by 10 microHz for the low-l modes.
Probing particle and nuclear physics models of neutrinoless double beta decay with different nuclei
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fogli, G. L.; Rotunno, A. M.; Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Via Orabona 4, 70126 Bari
2009-07-01
Half-life estimates for neutrinoless double beta decay depend on particle physics models for lepton-flavor violation, as well as on nuclear physics models for the structure and transitions of candidate nuclei. Different models considered in the literature can be contrasted, via prospective data, with a 'standard' scenario characterized by light Majorana neutrino exchange and by the quasiparticle random phase approximation, for which the theoretical covariance matrix has been recently estimated. We show that, assuming future half-life data in four promising nuclei (⁷⁶Ge, ⁸²Se, ¹³⁰Te, and ¹³⁶Xe), the standard scenario can be distinguished from a few nonstandard physics models, while being compatible with alternative state-of-the-art nuclear calculations (at 95% C.L.). Future signals in different nuclei may thus help to discriminate at least some decay mechanisms, without being spoiled by current nuclear uncertainties. Prospects for possible improvements are also discussed.
Numerical compliance testing of human exposure to electromagnetic radiation from smart-watches.
Hong, Seon-Eui; Lee, Ae-Kyoung; Kwon, Jong-Hwa; Pack, Jeong-Ki
2016-10-07
In this study, we investigated the electromagnetic dosimetry of smart-watches. At present, the standard for compliance testing of body-mounted and handheld devices specifies the use of a flat phantom to provide conservative estimates of the peak spatial-averaged specific absorption rate (SAR). This means that the SAR estimated using a flat phantom should be higher than the SAR in the exposed part of an anatomical human-body model. To verify this, we numerically calculated the SAR for a flat phantom and compared it with the SAR calculated for four anatomical human-body models of different ages. The numerical analysis was performed using the finite-difference time-domain (FDTD) method. Three antennas were considered for the smart-watch models: a shorted planar inverted-F antenna (PIFA), a loop antenna, and a monopole antenna. Numerical smart-watch models were implemented for cellular communication and wireless local-area network operation at 835, 1850, and 2450 MHz. The peak spatial-averaged SARs of the smart-watch models were calculated for the flat phantom and the anatomical human-body models in the wrist-worn and next-to-mouth positions. The results show that the flat phantom does not provide a consistently conservative SAR estimate. We concluded that the difference in SAR results between an anatomical human-body model and a flat phantom can be attributed to the different phantom shapes and tissue structures.
A process-based standard for the Solar Energetic Particle Event Environment
NASA Astrophysics Data System (ADS)
Gabriel, Stephen
For 10 years or more, there has been a lack of consensus on what the ISO standard model for the Solar Energetic Particle Event (SEPE) environment should be. Despite many technical discussions between the world experts in this field, it has been impossible to agree on which of the several models available should be selected as the standard. Most of these discussions at the ISO WG4 meetings and conferences have centred on the differences in modelling approach between the MSU model and the several remaining models from elsewhere worldwide (mainly the USA and Europe). The topic is considered timely given the inclusion of a session on reference data sets at the Space Weather Workshop in Boulder in April 2014. The original idea of a 'process-based' standard was conceived by Dr Kent Tobiska as a way of getting round the problems associated with the presence of different models, which not only could embody quite distinct modelling approaches but could also be based on different data sets. In essence, a process-based standard overcomes these issues by allowing there to be more than one model rather than a single standard model; however, any such model has to be completely transparent, in that the data set and the modelling techniques used have not only to be clearly and unambiguously defined but also to be subject to peer review. If the model meets all of these requirements then it should be acceptable as a standard model. So how does this process-based approach resolve the differences between the existing modelling approaches for the SEPE environment and remove the impasse? In a sense, it does not remove all of the differences but only some of them; most importantly, however, it allows something which so far has been impossible without ambiguities and disagreement: a comparison of the results of the various models. To date, one of the problems (if not the major one) in comparing the results of the different SEPE statistical models has been caused by two things: 1) the data set and 2) the definition of an event. Because unravelling the dependencies of the outputs of different statistical models on these two parameters is extremely difficult if not impossible, comparison of the results from the different models is currently also extremely difficult and can lead to controversies, especially over which model is the correct one. Hence, when it comes to using these models for engineering purposes to calculate, for example, the radiation dose for a particular mission, the user, who is in all likelihood not an expert in this field, could be given two (or even more) very different environments and find it impossible to know how to select one (or even how to compare them). What is proposed, then, is a process-based standard which, in common with nearly all of the current models, is composed of three elements: a standard data set, a standard event definition, and a resulting standard event list. A standard event list is the output of this standard and can then be used with any of the existing (or indeed future) models that are based on events. This standard event list is completely traceable and transparent and represents a reference event list for the whole community. When coupled with a statistical model, the results when compared will depend only on the statistical model and not on the data set or event definition.
NASA Astrophysics Data System (ADS)
Li, Lesheng; Giokas, Paul G.; Kanai, Yosuke; Moran, Andrew M.
2014-06-01
Kinetic models based on Fermi's Golden Rule are commonly employed to understand photoinduced electron transfer dynamics at molecule-semiconductor interfaces. Implicit in such second-order perturbative descriptions is the assumption that nuclear relaxation of the photoexcited electron donor is fast compared to electron injection into the semiconductor. This approximation breaks down in systems where electron transfer transitions occur on 100-fs time scale. Here, we present a fourth-order perturbative model that captures the interplay between time-coincident electron transfer and nuclear relaxation processes initiated by light absorption. The model consists of a fairly small number of parameters, which can be derived from standard spectroscopic measurements (e.g., linear absorbance, fluorescence) and/or first-principles electronic structure calculations. Insights provided by the model are illustrated for a two-level donor molecule coupled to both (i) a single acceptor level and (ii) a density of states (DOS) calculated for TiO2 using a first-principles electronic structure theory. These numerical calculations show that second-order kinetic theories fail to capture basic physical effects when the DOS exhibits narrow maxima near the energy of the molecular excited state. Overall, we conclude that the present fourth-order rate formula constitutes a rigorous and intuitive framework for understanding photoinduced electron transfer dynamics that occur on the 100-fs time scale.
Favazza, Christopher P; Fetterly, Kenneth A; Hangiandreou, Nicholas J; Leng, Shuai; Schueler, Beth A
2015-01-01
Evaluation of flat-panel angiography equipment through conventional image quality metrics is limited by the scope of standard spatial-domain image quality metrics, such as contrast-to-noise ratio and spatial resolution, or by restricted access to the data needed to calculate Fourier-domain measurements, such as the modulation transfer function, noise power spectrum, and detective quantum efficiency. Observer models have been shown capable of overcoming these limitations and are able to comprehensively evaluate medical-imaging systems. We present a spatial-domain channelized Hotelling observer model to calculate the detectability index (DI) of disks of different sizes and to compare the performance of different imaging conditions and angiography systems. When appropriate, changes in DIs were compared to expectations based on the classical Rose model of signal detection to assess linearity of the model with quantum signal-to-noise ratio (SNR) theory. For these experiments, the estimated uncertainty of the DIs was less than 3%, allowing for precise comparison of imaging systems or conditions. For most experimental variables, DI changes were linear with expectations based on quantum SNR theory. DIs calculated for the smallest objects demonstrated nonlinearity with quantum SNR theory due to system blur. Two angiography systems with different detector element sizes were shown to perform similarly across the majority of the detection tasks.
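The core computation of a channelized Hotelling observer is compact: project signal-present and signal-absent images onto a small set of channels, then DI² = Δv̄ᵀ K⁻¹ Δv̄, with Δv̄ the mean channel-output difference and K the channel covariance. A toy sketch with simple Gaussian channels and synthetic white noise (not the authors' channel set or phantom data):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 64                                    # image size, pixels

# Gaussian channel profiles of increasing width (a common simple choice)
x = np.arange(N) - N / 2
r2 = x[:, None] ** 2 + x[None, :] ** 2
U = np.stack([np.exp(-r2 / (2 * s ** 2)).ravel() for s in (2, 4, 8, 16)],
             axis=1)                      # pixels x channels

# Synthetic disk signal plus white-noise backgrounds
signal = (r2 <= 3 ** 2).astype(float).ravel() * 0.5
absent = rng.normal(0.0, 1.0, size=(500, N * N))
present = absent + signal

v_a, v_p = absent @ U, present @ U        # channel outputs (trials x channels)
dv = v_p.mean(axis=0) - v_a.mean(axis=0)
K = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))
di = float(np.sqrt(dv @ np.linalg.solve(K, dv)))
print(f"detectability index DI = {di:.2f}")
```

The few-percent DI uncertainty quoted in the abstract comes from repeating such estimates over independent trial sets; the channelization is what keeps K small enough to invert stably.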
The Reference Forward Model (RFM)
NASA Astrophysics Data System (ADS)
Dudhia, Anu
2017-01-01
The Reference Forward Model (RFM) is a general purpose line-by-line radiative transfer model, currently supported by the UK National Centre for Earth Observation. This paper outlines the algorithms used by the RFM, focusing on standard calculations of terrestrial atmospheric infrared spectra, followed by a brief summary of some additional capabilities and extensions to microwave wavelengths and extraterrestrial atmospheres. At its most basic level, the 'line-by-line' component calculates molecular absorption cross-sections by applying the Voigt lineshape to all transitions up to ±25 cm⁻¹ from line-centre. Alternatively, absorptions can be directly interpolated from various forms of tabulated data. These cross-sections are then used to construct infrared radiance or transmittance spectra for ray paths through homogeneous cells, plane-parallel or circular atmospheres. At a higher level, the RFM can apply instrumental convolutions to simulate measurements from Fourier transform spectrometers. It can also calculate Jacobian spectra and so act as a stand-alone forward model within a retrieval scheme. The RFM is designed for robustness, flexibility and ease-of-use (particularly by the non-expert), and no claims are made for superior accuracy, or indeed novelty, compared to other line-by-line codes. Its main limitations at present are a lack of scattering and simplified modelling of surface reflectance and line-mixing.
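The Voigt lineshape at the heart of any line-by-line code is conveniently evaluated through the complex Faddeeva function w(z): V(x; σ, γ) = Re w((x + iγ)/(σ√2)) / (σ√(2π)). A standard sketch of that construction (the textbook route, not RFM source code; the line parameters are placeholders):

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Voigt profile: Gaussian (sigma) convolved with Lorentzian (gamma HWHM)."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

# Absorption cross-section from one line, truncated at +/-25 cm^-1 as in RFM
nu = np.linspace(-25.0, 25.0, 5001)           # detuning from line centre, cm^-1
S = 1.0e-20                                   # line intensity (placeholder)
xsec = S * voigt(nu, sigma=0.02, gamma=0.05)  # schematic cross-section

norm = xsec.sum() * (nu[1] - nu[0]) / S
print(f"normalization within the +/-25 cm^-1 cutoff: {norm:.4f}")  # ~0.999
```

The slight shortfall from unity is exactly the Lorentzian wing mass beyond the ±25 cm⁻¹ cutoff that line-by-line codes either accept or patch with continuum terms.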
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andreani, Michele
The pretest calculations of phase A of the International Standard Problem 42 (ISP-42) using the GOTHIC containment code are presented in this paper, together with the comparison with the experimental results. The focus of the analyses presented is on the mixing process in the drywells (DWs), initially filled with air, during the initial steam purging transient. Consequently, a large effort has been made to capture the flow pattern produced by the jet created by the steam injection, including in the model a large number of nodes for the three-dimensional (3-D) representation of the two vessels. The influence of the nodalization of the DWs on the calculation was investigated by means of two additional models using one volume for each of the DWs and a 3-D calculation using a much coarser mesh, respectively. Since the fluid in the DWs was well mixed and stratification occurred only below the injection level, all the models could predict very accurately the global variables such as pressure and temperature. The 3-D simulation also reproduced the wall and gas temperature distributions fairly well. The only (inferred) discrepancy with the test was the overprediction in the upward deflection of the buoyant steam jet.
FluxPyt: a Python-based free and open-source software for 13C-metabolic flux analyses.
Desai, Trunil S; Srivastava, Shireesh
2018-01-01
13C-Metabolic flux analysis (MFA) is a powerful approach to estimate intracellular reaction rates which can be used in strain analysis and design. Processing and analysis of labeling data for calculation of fluxes and associated statistics is an essential part of MFA. However, various software packages currently available for data analysis employ proprietary platforms and thus limit accessibility. We developed FluxPyt, a Python-based, truly open-source software package for conducting stationary 13C-MFA data analysis. The software is based on the efficient elementary metabolite unit framework. The standard deviations in the calculated fluxes are estimated using Monte-Carlo analysis. FluxPyt also automatically creates flux maps based on a template for visualization of the MFA results. The flux distributions calculated by FluxPyt for two separate models (a small tricarboxylic acid cycle model and a larger Corynebacterium glutamicum model) were found to be in good agreement with those calculated by a previously published software. FluxPyt was tested in Microsoft™ Windows 7 and 10, as well as in Linux Mint 18.2. The availability of a free and open 13C-MFA software that works in various operating systems will enable more researchers to perform 13C-MFA and to further modify and develop the package.
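The Monte-Carlo error propagation described above can be sketched generically: re-fit the fluxes many times against measurement vectors perturbed by their noise model and take the spread of the re-fitted values. The linear toy model below is a stand-in for the (nonlinear) elementary-metabolite-unit labeling model, and all numbers are invented:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(3)

def predict(v):
    """Stand-in forward model: labeling fractions from two free fluxes."""
    return np.array([0.4 * v[0] + 0.1 * v[1],
                     0.2 * v[0] + 0.5 * v[1],
                     0.3 * v[0] + 0.3 * v[1]])

meas = np.array([0.45, 0.62, 0.55])
sd = np.full(3, 0.01)                    # measurement standard deviations

def fit(y):
    res = least_squares(lambda v: (predict(v) - y) / sd, x0=[1.0, 1.0])
    return res.x

best = fit(meas)
# Monte-Carlo: refit against noise-perturbed measurements
samples = np.array([fit(meas + rng.normal(0.0, sd)) for _ in range(500)])
print("fluxes:", best.round(3), "+/-", samples.std(axis=0).round(3))
```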
Calculation of heat flux through a wall containing a cavity: Comparison of several models
NASA Astrophysics Data System (ADS)
Park, J. E.; Kirkpatrick, J. R.; Tunstall, J. N.; Childs, K. W.
1986-02-01
This paper describes the calculation of the heat transfer through the standard stud wall structure of a residential building. The wall cavity contains no insulation. Results from five test cases are presented. The first four represent progressively more complicated approximations to the heat transfer through and within a hollow wall structure. The fifth adds the model components necessary to severely inhibit the radiative energy transport across the empty cavity. Flow within the wall cavity is calculated from the Navier-Stokes equations and the energy conservation equation for an ideal gas using an improvement to the Implicit-Compressible Eulerian (ICE) algorithm of Harlow and Amsden. An algorithm is described to efficiently couple the fluid flow calculations to the radiation-conduction model for the solid portions of the system. Results indicate that conduction through the sill plates contributes less than 2% of the total heat transferred through a composite wall. All of the other elements (conduction through wallboard, sheathing, and siding; convection from siding and wallboard to ambient air; and radiation across the wall cavity) are required to accurately predict the heat transfer through a wall. Addition of a foil liner on one inner surface of the wall cavity reduces the total heat transferred by almost 50%.
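The relative contributions can be estimated with a much simpler thermal-resistance network: layered solids and surface films in series, with a linearized radiative conductance h_r = 4ε_eff σT̄³ acting in parallel with cavity convection. The sketch below uses illustrative film coefficients and R-values, not the paper's inputs, so the foil effect it predicts is only qualitative:

```python
# Series/parallel resistance estimate for an uninsulated stud-wall cavity
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2K^4

def h_rad(emiss1, emiss2, T_mean=290.0):
    """Linearized radiative conductance across the cavity, W/m^2K."""
    e_eff = 1.0 / (1.0 / emiss1 + 1.0 / emiss2 - 1.0)
    return 4.0 * e_eff * SIGMA * T_mean ** 3

def wall_U(cavity_h_conv=2.0, e1=0.9, e2=0.9):
    films = 1 / 8.0 + 1 / 25.0            # inside/outside film resistances
    solids = 0.08 + 0.10 + 0.08           # wallboard + sheathing + siding
    r_cavity = 1.0 / (cavity_h_conv + h_rad(e1, e2))
    return 1.0 / (films + solids + r_cavity)

plain = wall_U()                          # both cavity faces high-emissivity
foil = wall_U(e2=0.05)                    # foil liner on one cavity face
print(f"U plain = {plain:.2f}, U with foil = {foil:.2f} W/m^2K "
      f"({100 * (1 - foil / plain):.0f}% reduction)")
```

Even this crude network reproduces the paper's qualitative findings: the cavity radiation path carries a large share of the load, and suppressing it with a low-emissivity liner cuts the total transfer substantially.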
NASA Technical Reports Server (NTRS)
Conley, Julianne M.; Leonard, B. P.
1994-01-01
The modified mixing length (MML) turbulence model was installed in the Proteus Navier-Stokes code, then modified to make it applicable to a wider range of flows typical of aerospace propulsion applications. The modifications are based on experimental data for three flat-plate flows having zero, mild adverse, and strong adverse pressure gradients. Three transonic diffuser test cases were run with the new version of the model in order to evaluate its performance. All results are compared with experimental data and show improvements over calculations made using the Baldwin-Lomax turbulence model, the standard algebraic model in Proteus.
NASA Technical Reports Server (NTRS)
Decker, A. J.; Fite, E. B.; Thorp, S. A.; Mehmed, O.
1998-01-01
The responses of artificial neural networks to experimental and model-generated inputs are compared for detection of damage in twisted fan blades using electronic holography. The training-set inputs, for this work, are experimentally generated characteristic patterns of the vibrating blades. The outputs are damage-flag indicators or second derivatives of the sensitivity-vector-projected displacement vectors from a finite element model. Artificial neural networks have been trained in the past with computational-model-generated training sets. This approach avoids the difficult inverse calculations traditionally used to compare interference fringes with the models. But the high modeling standards are hard to achieve, even with fan-blade finite-element models.
Model predictive control of P-time event graphs
NASA Astrophysics Data System (ADS)
Hamri, H.; Kara, R.; Amari, S.
2016-12-01
This paper deals with model predictive control of discrete event systems modelled by P-time event graphs. First, the model is obtained by writing the dater evolution equations in standard algebra. The control law is then computed using finite-horizon model predictive control. For closed-loop control, we use infinite-horizon model predictive control (IH-MPC), an approach that calculates static feedback gains that ensure stability of the closed-loop system while respecting the constraints on the control vector. The IH-MPC problem is formulated as a linear convex program subject to a linear matrix inequality. Finally, the proposed methodology is applied to a transportation system.
Mechanics and statistics of the worm-like chain
NASA Astrophysics Data System (ADS)
Marantan, Andrew; Mahadevan, L.
2018-02-01
The worm-like chain model is a simple continuum model for the statistical mechanics of a flexible polymer subject to an external force. We offer a tutorial introduction to it using three approaches. First, we use a mesoscopic view, treating a long polymer (in two dimensions) as though it were made of many groups of correlated links or "clinks," allowing us to calculate its average extension as a function of the external force via scaling arguments. We then provide a standard statistical mechanics approach, obtaining the average extension by two different means: the equipartition theorem and the partition function. Finally, we work in a probabilistic framework, taking advantage of the Gaussian properties of the chain in the large-force limit to improve upon the previous calculations of the average extension.
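The large-force behaviour derived in the tutorial can be checked numerically against the widely used Marko-Siggia interpolation formula, fℓp/kBT = x + 1/[4(1 − x)²] − 1/4 with x = ⟨X⟩/L, whose large-force limit is x ≈ 1 − (kBT/(4fℓp))^{1/2}. A sketch with DNA-like parameters (the persistence length and temperature are illustrative choices, not values from the paper):

```python
import numpy as np

KT = 4.114          # thermal energy at ~298 K, pN*nm
LP = 50.0           # persistence length, nm (DNA-like)

def ms_force(x):
    """Marko-Siggia force (pN) at relative extension x = <X>/L."""
    return (KT / LP) * (x + 1.0 / (4.0 * (1.0 - x) ** 2) - 0.25)

for x in (0.5, 0.9, 0.99):
    f = ms_force(x)
    # large-force WLC asymptote: x = 1 - sqrt(kT / (4 f lp))
    x_asym = 1.0 - np.sqrt(KT / (4.0 * f * LP))
    print(f"x = {x:.2f}: f = {f:6.3f} pN, asymptote predicts x = {x_asym:.3f}")
```

At high extension the asymptote matches the interpolation formula closely, while at x = 0.5 it overshoots, illustrating the crossover from the Gaussian to the strongly stretched regime discussed in the tutorial.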
Schwinger-Keldysh diagrammatics for primordial perturbations
NASA Astrophysics Data System (ADS)
Chen, Xingang; Wang, Yi; Xianyu, Zhong-Zhi
2017-12-01
We present a systematic introduction to the diagrammatic method for practical calculations in inflationary cosmology, based on Schwinger-Keldysh path integral formalism. We show in particular that the diagrammatic rules can be derived directly from a classical Lagrangian even in the presence of derivative couplings. Furthermore, we use a quasi-single-field inflation model as an example to show how this formalism, combined with the trick of mixed propagator, can significantly simplify the calculation of some in-in correlation functions. The resulting bispectrum includes the lighter scalar case (m<3H/2) that has been previously studied, and the heavier scalar case (m>3H/2) that has not been explicitly computed for this model. The latter provides a concrete example of quantum primordial standard clocks, in which the clock signals can be observably large.
Froning, M; Kozielewski, T; Schläger, M; Hill, P
2004-01-01
In 1987, a worker was internally contaminated with 137Cs as a result of an accident during the handling of high temperature reactor fuel element ash. In the long-term follow-up monitoring an unusual retention behaviour was found. The observed time dependence of caesium retention does not agree with the standard models of ICRP Publication 30. The present case can be better explained by assuming an intake of a mixture of type F and type S compounds. However, experimental data can be best described by a four-exponential retention function with two long-lived components, which was used as an ad hoc model for dose calculation. The resulting dose is compared with doses calculated on the basis of ICRP Publication 66.
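Fitting such an ad hoc retention function is a routine nonlinear least-squares task. A sketch with synthetic follow-up data; the amplitudes and half-lives are invented, and only two exponential components are fitted for brevity, whereas the case used four:

```python
import numpy as np
from scipy.optimize import curve_fit

def retention(t, a1, T1, a2, T2):
    """Two-component exponential retention (fractions a_i, half-lives T_i in days)."""
    lam = np.log(2.0)
    return a1 * np.exp(-lam * t / T1) + a2 * np.exp(-lam * t / T2)

rng = np.random.default_rng(5)
t = np.array([10, 30, 90, 180, 365, 730, 1460, 2920], float)   # days post-intake
true = retention(t, 0.6, 70.0, 0.4, 500.0)
y = true * rng.normal(1.0, 0.05, t.size)                       # 5% measurement noise

p, cov = curve_fit(retention, t, y, p0=[0.5, 50.0, 0.5, 400.0])
print("fractions:", p[[0, 2]].round(2), " half-lives (d):", p[[1, 3]].round(0))
```

Integrating the fitted retention function over time is then what feeds the dose calculation compared in the paper against the ICRP Publication 66 approach.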
Radiative energy balance of the Venus mesosphere
NASA Astrophysics Data System (ADS)
Haus, R.; Goering, H.
1990-03-01
An accurate radiative transfer model for line-by-line gaseous absorption, as well as for cloud absorption and multiple scattering, is used in the present calculation of solar heating and thermal cooling rates for standard temperature profiles and temperatures yielded by the Venera 15 Fourier Spectrometer Experiment. A strong dependency is noted for heating and cooling rates on cloud-structure variations. The Venus mesosphere is characterized by main cloud-cover heating and overlying-haze cooling. These results are applicable to Venus atmosphere dynamical models.
Study of the parameters of a single-frequency laser for pumping cesium frequency standards
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuravleva, O V; Ivanov, A V; Kurnosov, V D
2008-04-30
A model for calculating the parameters of a laser diode with an external fibre cavity containing a fibre Bragg grating (FBG) is presented. It is shown that single-mode lasing can be obtained with this model when spectral hole burning of carriers is neglected. The regions of the laser-diode current and temperature and the FBG temperature in which the laser can be tuned to the D₂ line of cesium are determined experimentally. (lasers and amplifiers)