Running coupling constant from lattice studies of gluon and ghost propagators
NASA Astrophysics Data System (ADS)
Cucchieri, A.; Mendes, T.
2004-12-01
We present a numerical study of the running coupling constant in four-dimensional pure-SU(2) lattice gauge theory. The running coupling is evaluated by fitting data for the gluon and ghost propagators in minimal Landau gauge. Following Refs. [1, 2], the fitting formulae are obtained by a simultaneous integration of the β function and of a function coinciding with the anomalous dimension of the propagator in the momentum-subtraction scheme. We consider these formulae at three and four loops. The fitting method works well, especially for the ghost case, for which statistical errors and hyper-cubic effects are very small. Our present result for Λ_MS is 200 +60/−40 MeV, where the error is purely systematic. We are currently extending this analysis to five loops in order to reduce this systematic error.
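The fitting strategy above rests on integrating the β function of the coupling. As a hedged illustration only (not the authors' actual fit code, and with placeholder coefficients b0, b1 and starting value), the sketch below integrates a generic two-loop running coupling with classical RK4 and checks it against the closed-form one-loop solution:

```python
def run_coupling(alpha0, t, b0, b1=0.0, steps=1000):
    """Integrate d(alpha)/dt = -(b0*alpha^2 + b1*alpha^3),
    with t = ln(mu^2 / mu0^2), using classical fourth-order Runge-Kutta."""
    h = t / steps
    f = lambda a: -(b0 * a**2 + b1 * a**3)
    a = alpha0
    for _ in range(steps):
        k1 = f(a)
        k2 = f(a + 0.5 * h * k1)
        k3 = f(a + 0.5 * h * k2)
        k4 = f(a + h * k3)
        a += h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return a

# One-loop cross-check: alpha(t) = alpha0 / (1 + b0*alpha0*t) when b1 = 0.
a_num = run_coupling(0.3, 5.0, b0=0.7)
a_ana = 0.3 / (1.0 + 0.7 * 0.3 * 5.0)
```

Higher-loop terms enter the same way, as additional powers of α in the integrand; the fit formulae of the paper come from integrating such a system together with the anomalous-dimension equation.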
Precision determination of the πN scattering lengths and the charged πNN coupling constant
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.; Thomas, A. W.
2000-01-01
We critically evaluate the isovector GMO sum rule for the charged πNN coupling constant using recent precision data from π−p and π−d atoms and with careful attention to systematic errors. From the π−d scattering length we deduce the pion-proton scattering lengths (1/2)(a_π−p + a_π−n) = (−20 ± 6(statistical) ± 10(systematic)) × 10^−4 m_π^−1 and (1/2)(a_π−p − a_π−n) = (903 ± 14) × 10^−4 m_π^−1. From this a direct evaluation gives g_c^2(GMO)/4π = 14.20 ± 0.07(statistical) ± 0.13(systematic), or f_c^2/4π = 0.0786 ± 0.0008.
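Abstracts like this quote statistical and systematic uncertainties separately. When a single total uncertainty is needed, one common (though convention-dependent, and only valid for independent components) choice is to add them in quadrature; a minimal sketch using the g_c^2/4π numbers above:

```python
import math

def combine_in_quadrature(*errors):
    """Total uncertainty as the root-sum-square of independent error components."""
    return math.sqrt(sum(e * e for e in errors))

central = 14.20
# Statistical and systematic components quoted in the abstract.
total_err = combine_in_quadrature(0.07, 0.13)  # ~0.148
```

When systematic errors are correlated or asymmetric, quadrature addition is not appropriate, which is one reason the components are reported separately.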
Dynamically correcting two-qubit gates against any systematic logical error
NASA Astrophysics Data System (ADS)
Calderon Vargas, Fernando Antonio
The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.
Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients
Bruno, Mattia; Lehner, Christoph; Soni, Amarjit
2018-04-20
Here, we propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2, related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.
Volcanic ash modeling with the NMMB-MONARCH-ASH model: quantification of offline modeling errors
NASA Astrophysics Data System (ADS)
Marti, Alejandro; Folch, Arnau
2018-03-01
Volcanic ash modeling systems are used to simulate the atmospheric dispersion of volcanic ash and to generate forecasts that quantify the impacts from volcanic eruptions on infrastructures, air quality, aviation, and climate. The efficiency of response and mitigation actions is directly associated with the accuracy of the volcanic ash cloud detection and modeling systems. Operational forecasts build on offline coupled modeling systems in which meteorological variables are updated at the specified coupling intervals. Despite the concerns from other communities regarding the accuracy of this strategy, the quantification of the systematic errors and shortcomings associated with the offline modeling systems has received no attention. This paper employs the NMMB-MONARCH-ASH model to quantify these errors by employing different quantitative and categorical evaluation scores. The skills of the offline coupling strategy are compared against those from an online forecast considered to be the best estimate of the true outcome. Case studies are considered for a synthetic eruption with constant eruption source parameters and for two historical events, which suitably illustrate the severe aviation disruptive effects of European (2010 Eyjafjallajökull) and South American (2011 Cordón Caulle) volcanic eruptions. Evaluation scores indicate that systematic errors due to the offline modeling are of the same order of magnitude as those associated with the source term uncertainties. In particular, traditional offline forecasts employed in operational model setups can result in significant uncertainties, failing to reproduce, in the worst cases, up to 45-70 % of the ash cloud of an online forecast. These inconsistencies are anticipated to be even more relevant in scenarios in which the meteorological conditions change rapidly in time. 
The outcome of this paper encourages operational groups responsible for real-time advisories for aviation to consider employing computationally efficient online dispersal models.
Strong coupling constant from Adler function in lattice QCD
NASA Astrophysics Data System (ADS)
Hudspith, Renwick J.; Lewis, Randy; Maltman, Kim; Shintani, Eigo
2016-09-01
We compute the QCD coupling constant, αs, from the Adler function built from the vector hadronic vacuum polarization (HVP) function. On the lattice, the Adler function can be measured from the difference of the HVP at two different momentum scales. The HVP is measured from the conserved-local vector current correlator using nf = 2 + 1 flavor domain-wall lattice data with three different lattice cutoffs, up to a^−1 ≈ 3.14 GeV. To avoid lattice artifacts due to O(4) symmetry breaking, we apply a cylinder cut on the lattice momenta, with a reflection projection onto the vector current correlator, which then provides a smooth function of the momentum scale for the extracted HVP. We present a global fit of the lattice data at justified momentum scales, with three lattice cutoffs, using continuum perturbation theory at O(αs^4) to obtain the coupling in the continuum limit at an arbitrary scale. We run the coupling to the Z boson mass through the appropriate thresholds and obtain αs^(5)(MZ) = 0.1191(24)(37), where the first error is statistical and the second is systematic.
Hamiltonian lattice field theory: Computer calculations using variational methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zako, Robert L.
1991-12-03
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems.
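The Rayleigh-Ritz principle at the heart of the algorithm guarantees that the expectation value of the Hamiltonian in any trial state bounds the ground-state energy from above. A toy sketch with a 3x3 symmetric "Hamiltonian" matrix (illustrative numbers only, not the paper's lattice theory):

```python
import math

def rayleigh_quotient(A, v):
    """<v|A|v> / <v|v> for a real symmetric matrix A given as a list of rows."""
    n = len(v)
    Av = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    num = sum(v[i] * Av[i] for i in range(n))
    den = sum(x * x for x in v)
    return num / den

# Tridiagonal test matrix whose smallest eigenvalue is 2 - sqrt(2),
# with eigenvector proportional to (1, sqrt(2), 1).
H = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]
e_min = 2.0 - math.sqrt(2.0)

# Any trial vector gives an upper bound; one close to the true
# eigenvector gives a tight bound.
bound = rayleigh_quotient(H, [1.0, 1.4, 1.0])
```

In the paper's setting the matrix is the lattice Hamiltonian in a truncated Fock basis, and Temple-Kato formulae turn such variational values into two-sided error bounds.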
NASA Astrophysics Data System (ADS)
Harmanec, Petr; Prša, Andrej
2011-08-01
The increasing precision of astronomical observations of stars and stellar systems is gradually getting to a level where the use of slightly different values of the solar mass, radius, and luminosity, as well as different values of fundamental physical constants, can lead to measurable systematic differences in the determination of basic physical properties. An equivalent issue with an inconsistent value of the speed of light was resolved by adopting a nominal value that is constant and has no error associated with it. Analogously, we suggest that the systematic error in stellar parameters may be eliminated by (1) replacing the solar radius R⊙ and luminosity L⊙ by nominal values that are by definition exact and expressed in SI units; (2) computing stellar masses in terms of M⊙ by noting that the measurement error of the product GM⊙ is 5 orders of magnitude smaller than the error in G; (3) computing stellar masses and temperatures in SI units by using the derived values; and (4) clearly stating the reference for the values of the fundamental physical constants used. We discuss the need for and demonstrate the advantages of such a paradigm shift.
Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G
NASA Astrophysics Data System (ADS)
DeSalvo, Riccardo
2015-06-01
Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.
Bs and Ds decay constants in three-flavor lattice QCD.
Wingate, Matthew; Davies, Christine T H; Gray, Alan; Lepage, G Peter; Shigemitsu, Junko
2004-04-23
Capitalizing on recent advances in lattice QCD, we present a calculation of the leptonic decay constants f_Bs and f_Ds that includes effects of one strange sea quark and two light sea quarks via an improved staggered action. By shedding the quenched approximation and the associated lattice scale uncertainty, lattice QCD greatly increases its predictive power. Nonrelativistic QCD is used to simulate heavy quarks with masses between 1.5 m_c and m_b. We arrive at the following results: f_Bs = 260 ± 7 ± 26 ± 8 ± 5 MeV and f_Ds = 290 ± 20 ± 29 ± 29 ± 6 MeV. The first quoted error is the statistical uncertainty, and the rest estimate the sizes of higher-order terms neglected in this calculation. All of these uncertainties are systematically improvable by including another order in the weak coupling expansion, the nonrelativistic expansion, or the Symanzik improvement program.
NASA Astrophysics Data System (ADS)
Terray, P.; Sooraj, K. P.; Masson, S.; Krishna, R. P. M.; Samson, G.; Prajeesh, A. G.
2017-07-01
State-of-the-art global coupled models used in seasonal prediction systems and climate projections still have important deficiencies in representing the boreal summer tropical rainfall climatology. These errors include prominently a severe dry bias over all the Northern Hemisphere monsoon regions, excessive rainfall over the ocean and an unrealistic double inter-tropical convergence zone (ITCZ) structure in the tropical Pacific. While these systematic errors can be partly reduced by increasing the horizontal atmospheric resolution of the models, they also illustrate our incomplete understanding of the key mechanisms controlling the position of the ITCZ during boreal summer. Using a large collection of coupled models and dedicated coupled experiments, we show that these tropical rainfall errors are partly associated with insufficient surface thermal forcing and incorrect representation of the surface albedo over the Northern Hemisphere continents. Improving the parameterization of the land albedo in two global coupled models leads to a large reduction of these systematic errors and further demonstrates that the Northern Hemisphere subtropical deserts play a seminal role in these improvements through a heat low mechanism.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and the lack of prediction capability. Therefore, the multiplicative error model is a better choice.
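A minimal sketch of a multiplicative error model in the spirit described above (synthetic data and placeholder parameters, not the authors' actual formulation): observations are generated as y = B·x^b·exp(ε), and ordinary least squares in log space recovers the systematic part (B, b) separately from the random part (the residual spread).

```python
import math
import random

random.seed(42)

# Synthetic "truth" and multiplicative observations y = B * x**b * exp(eps).
B_true, b_true, sigma = 0.8, 1.0, 0.3
x = [random.lognormvariate(1.0, 1.0) for _ in range(2000)]
y = [B_true * xi**b_true * math.exp(random.gauss(0.0, sigma)) for xi in x]

# Ordinary least squares on (ln x, ln y): slope/intercept capture the
# systematic error, the residual spread captures the random error.
lx = [math.log(v) for v in x]
ly = [math.log(v) for v in y]
mx = sum(lx) / len(lx)
my = sum(ly) / len(ly)
b_hat = (sum((a - mx) * (c - my) for a, c in zip(lx, ly))
         / sum((a - mx) ** 2 for a in lx))
lnB_hat = my - b_hat * mx
resid_sd = math.sqrt(sum((c - (lnB_hat + b_hat * a)) ** 2
                         for a, c in zip(lx, ly)) / (len(lx) - 2))
```

An additive fit on the raw (x, y) pairs would instead show residual variance growing with precipitation amount, the nonconstant-variance weakness the letter describes.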
NASA Astrophysics Data System (ADS)
Kubas, Adam; Hoffmann, Felix; Heck, Alexander; Oberhofer, Harald; Elstner, Marcus; Blumberger, Jochen
2014-03-01
We introduce a database (HAB11) of electronic coupling matrix elements (Hab) for electron transfer in 11 π-conjugated organic homo-dimer cations. High-level ab initio calculations at the multireference configuration interaction (MRCI+Q) level of theory, n-electron valence state perturbation theory (NEVPT2), and the (spin-component-scaled) approximate coupled cluster model (SCS-)CC2 are reported for this database to assess the performance of three DFT methods of decreasing computational cost: constrained density functional theory (CDFT), fragment-orbital DFT (FODFT), and self-consistent charge density functional tight-binding (FODFTB). We find that the CDFT approach in combination with a modified PBE functional containing 50% Hartree-Fock exchange gives the best results for absolute Hab values (mean relative unsigned error = 5.3%) and exponential distance decay constants β (4.3%). CDFT in combination with pure PBE overestimates couplings by 38.7% due to a too diffuse excess charge distribution, whereas the economic FODFT and highly cost-effective FODFTB methods underestimate couplings by 37.6% and 42.4%, respectively, due to neglect of the interaction between donor and acceptor. The errors are systematic, however, and can be significantly reduced by applying a uniform scaling factor for each method. Applications to dimers outside the database, specifically rotated thiophene dimers and larger acenes up to pentacene, suggest that the same scaling procedure significantly improves the FODFT and FODFTB results for larger π-conjugated systems relevant to organic semiconductors and DNA.
Continuous quantum error correction for non-Markovian decoherence
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oreshkov, Ognyan; Brun, Todd A.; Communication Sciences Institute, University of Southern California, Los Angeles, California 90089
2007-08-15
We study the effect of continuous quantum error correction in the case where each qubit in a codeword is subject to a general Hamiltonian interaction with an independent bath. We first consider the scheme in the case of a trivial single-qubit code, which provides useful insights into the workings of continuous error correction and the difference between Markovian and non-Markovian decoherence. We then study the model of a bit-flip code with each qubit coupled to an independent bath qubit and subject to continuous correction, and find its solution. We show that for sufficiently large error-correction rates, the encoded state approximately follows an evolution of the type of a single decohering qubit, but with an effectively decreased coupling constant. The factor by which the coupling constant is decreased scales quadratically with the error-correction rate. This is compared to the case of Markovian noise, where the decoherence rate is effectively decreased by a factor which scales only linearly with the rate of error correction. The quadratic enhancement depends on the existence of a Zeno regime in the Hamiltonian evolution which is absent in purely Markovian dynamics. We analyze the range of validity of this result and identify two relevant time scales. Finally, we extend the result to more general codes and argue that the performance of continuous error correction will exhibit the same qualitative characteristics.
Methods for constraining fine structure constant evolution with OH microwave transitions.
Darling, Jeremy
2003-07-04
We investigate the constraints that OH microwave transitions in megamasers and molecular absorbers at cosmological distances may place on the evolution of the fine structure constant α = e²/ħc. The centimeter OH transitions are a combination of hyperfine splitting and lambda doubling that can constrain the cosmic evolution of α from a single species, avoiding systematic errors in α measurements from multiple species which may have relative velocity offsets. The most promising method compares the 18 and 6 cm OH lines, includes a calibration of systematic errors, and offers multiple determinations of α in a single object. Comparisons of OH lines to the HI 21 cm line and CO rotational transitions also show promise.
A probabilistic approach to remote compositional analysis of planetary surfaces
Lapotre, Mathieu G.A.; Ehlmann, Bethany L.; Minson, Sarah E.
2017-01-01
Reflected light from planetary surfaces provides information, including mineral/ice compositions and grain sizes, by study of albedo and absorption features as a function of wavelength. However, deconvolving the compositional signal in spectra is complicated by the nonuniqueness of the inverse problem. Trade-offs between mineral abundances and grain sizes in setting reflectance, instrument noise, and systematic errors in the forward model are potential sources of uncertainty, which are often unquantified. Here we adopt a Bayesian implementation of the Hapke model to determine sets of acceptable-fit mineral assemblages, as opposed to single best fit solutions. We quantify errors and uncertainties in mineral abundances and grain sizes that arise from instrument noise, compositional end members, optical constants, and systematic forward model errors for two suites of ternary mixtures (olivine-enstatite-anorthite and olivine-nontronite-basaltic glass) in a series of six experiments in the visible-shortwave infrared (VSWIR) wavelength range. We show that grain sizes are generally poorly constrained from VSWIR spectroscopy. Abundance and grain size trade-offs lead to typical abundance errors of ≤1 wt % (occasionally up to ~5 wt %), while ~3% noise in the data increases errors by up to ~2 wt %. Systematic errors further increase inaccuracies by a factor of 4. Finally, phases with low spectral contrast or inaccurate optical constants can further increase errors. Overall, typical errors in abundance are <10%, but sometimes significantly increase for specific mixtures, prone to abundance/grain-size trade-offs that lead to high unmixing uncertainties. These results highlight the need for probabilistic approaches to remote determination of planetary surface composition.
Charmed-meson decay constants in three-flavor lattice QCD.
Aubin, C; Bernard, C; Detar, C; Di Pierro, M; Freeland, E D; Gottlieb, Steven; Heller, U M; Hetrick, J E; El-Khadra, A X; Kronfeld, A S; Levkova, L; Mackenzie, P B; Menscher, D; Maresca, F; Nobes, M; Okamoto, M; Renner, D; Simone, J; Sugar, R; Toussaint, D; Trottier, H D
2005-09-16
We present the first lattice QCD calculation with realistic sea quark content of the D+-meson decay constant f_D+. We use the MILC Collaboration's publicly available ensembles of lattice gauge fields, which have a quark sea with two flavors (up and down) much lighter than a third (strange). We obtain f_D+ = 201 ± 3 ± 17 MeV, where the errors are statistical and a combination of systematic errors. We also obtain f_Ds = 249 ± 3 ± 16 MeV for the Ds meson.
Systematic error of diode thermometer.
Iskrenovic, Predrag S
2009-08-01
Semiconductor diodes are often used for measuring temperatures. The forward voltage across a diode decreases, approximately linearly, with increasing temperature. The method applied is usually the simplest one: a constant direct current flows through the diode, and the voltage is measured at the diode terminals. The direct current that flows through the diode, putting it into its operating mode, also heats it. The resulting increase in the temperature of the diode sensor, i.e., the systematic error due to self-heating, depends predominantly on the intensity of the current, and also on other factors. This paper presents measurements of the systematic error due to heating by the forward-bias current. The measurements were made on several diodes over a wide range of bias-current intensities.
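The self-heating error described above can be estimated from the dissipated power and the sensor's junction-to-ambient thermal resistance. A minimal sketch, in which the thermal resistance, temperature coefficient, and reference point are illustrative assumptions rather than values from the paper:

```python
def self_heating_error(current_a, voltage_v, theta_k_per_w):
    """Junction temperature rise above ambient: dT = P * theta,
    where P = I*V is the dissipated power."""
    return current_a * voltage_v * theta_k_per_w

def temperature_from_voltage(v, v_ref=0.600, t_ref=25.0, tc=-0.002):
    """Invert an assumed linear V(T) law with ~ -2 mV/K slope
    around a (v_ref, t_ref) calibration point."""
    return t_ref + (v - v_ref) / tc

# Example: 1 mA at 0.6 V with an assumed 300 K/W thermal resistance.
dt = self_heating_error(1e-3, 0.6, 300.0)      # systematic error, ~0.18 K
t_raw = temperature_from_voltage(0.580)        # indicated temperature
t_corrected = t_raw - dt                       # subtract the self-heating bias
```

Lowering the bias current reduces dT quadratically in V ∝ I terms of dissipated power, which is why the paper studies the error as a function of current intensity.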
NASA Technical Reports Server (NTRS)
Larson, T. J.; Ehernberger, L. J.
1985-01-01
The flight test technique described uses controlled survey runs to determine horizontal atmospheric pressure variations and systematic altitude errors that result from space positioning measurements. The survey data can be used not only for improved air data calibrations, but also for studies of atmospheric structure and space positioning accuracy performance. The examples presented cover a wide range of radar tracking conditions for both subsonic and supersonic flight to an altitude of 42,000 ft.
Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L
2017-01-01
The regulatory Community Multiscale Air Quality (CMAQ) model is a means to understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogenous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error with only a minority of error being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.
Non-minimal derivative coupling gravity in cosmology
NASA Astrophysics Data System (ADS)
Gumjudpai, Burin; Rangdee, Phongsaphat
2015-11-01
We give a brief review of the non-minimal derivative coupling (NMDC) scalar field theory, in which there is non-minimal coupling between the scalar field derivative term and the Einstein tensor. We assume that the expansion is of power-law type or super-acceleration type for small redshift. The Lagrangian includes the NMDC term, a free kinetic term, a cosmological constant term and a barotropic matter term. For a value of the coupling constant that is compatible with inflation, we use the combined WMAP9 (WMAP9 + eCMB + BAO + H_0) dataset, the PLANCK + WP dataset, and the PLANCK TT, TE, EE + lowP + Lensing + ext datasets to find the value of the cosmological constant in the model. Modeling the expansion with a power-law gives a negative cosmological constant, while the phantom power-law (super-acceleration) expansion gives a positive cosmological constant with a large error bar. The value obtained is of the same order as in the ΛCDM model, since at late times the NMDC effect is tiny due to small curvature.
Systematic errors in Monsoon simulation: importance of the equatorial Indian Ocean processes
NASA Astrophysics Data System (ADS)
Annamalai, H.; Taguchi, B.; McCreary, J. P., Jr.; Nagura, M.; Miyama, T.
2015-12-01
In climate models, simulating the monsoon precipitation climatology remains a grand challenge. Compared to CMIP3, the multi-model-mean (MMM) errors for Asian-Australian monsoon (AAM) precipitation climatology in CMIP5, relative to GPCP observations, have shown little improvement. One implication is that uncertainties in the future projections of time-mean changes to AAM rainfall may not have been reduced from CMIP3 to CMIP5. Despite dedicated efforts by the modeling community, progress in monsoon modeling is rather slow. This leads us to wonder: has the scientific community reached a "plateau" in modeling mean monsoon precipitation? Our focus here is a better understanding of the coupled air-sea interactions and moist processes that govern the precipitation characteristics over the tropical Indian Ocean, where large-scale errors persist. A series of idealized coupled model experiments is performed to test the hypothesis that errors in the coupled processes along the equatorial Indian Ocean during inter-monsoon seasons could potentially influence systematic errors during the monsoon season. Moist static energy budget diagnostics have been performed to identify the leading moist and radiative processes that account for the large-scale errors in the simulated precipitation. As a way forward, we propose three coordinated efforts: (i) idealized coupled model experiments; (ii) process-based diagnostics; and (iii) direct observations to constrain model physics. We argue that a systematic and coordinated approach to identifying the various interactive processes that shape the precipitation basic state needs to be carried out, and that high-quality observations over the data-sparse monsoon region are needed to validate models and further improve model physics.
The role of the basic state in the ENSO-monsoon relationship and implications for predictability
NASA Astrophysics Data System (ADS)
Turner, A. G.; Inness, P. M.; Slingo, J. M.
2005-04-01
The impact of systematic model errors on a coupled simulation of the Asian summer monsoon and its interannual variability is studied. Although the mean monsoon climate is reasonably well captured, systematic errors in the equatorial Pacific mean that the monsoon-ENSO teleconnection is rather poorly represented in the general-circulation model. A system of ocean-surface heat flux adjustments is implemented in the tropical Pacific and Indian Oceans in order to reduce the systematic biases. In this version of the general-circulation model, the monsoon-ENSO teleconnection is better simulated, particularly the lag-lead relationships in which weak monsoons precede the peak of El Niño. In part this is related to changes in the characteristics of El Niño, which has a more realistic evolution in its developing phase. A stronger ENSO amplitude in the new model version also feeds back to further strengthen the teleconnection. These results have important implications for the use of coupled models for seasonal prediction of systems such as the monsoon, and suggest that some form of flux correction may have significant benefits where model systematic error compromises important teleconnections and modes of interannual variability.
Coupling constant for N*(1535)Nρ
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie Jujun; Graduate University of Chinese Academy of Sciences, Beijing 100049; Wilkin, Colin
2008-05-15
The value of the N*(1535)Nρ coupling constant g_{N*Nρ} derived from the N*(1535) → Nρ → Nππ decay is compared with that deduced from the radiative decay N*(1535) → Nγ using the vector-meson-dominance model. On the basis of an effective Lagrangian approach, we show that the values of g_{N*Nρ} extracted from the available experimental data on the two decays are consistent, though the error bars are rather large.
Lewis, Matthew S; Maruff, Paul; Silbert, Brendan S; Evered, Lis A; Scott, David A
2007-02-01
The reliable change index (RCI) expresses change relative to its associated error, and is useful in the identification of postoperative cognitive dysfunction (POCD). This paper examines four common RCIs that each account for error in different ways. Three rules incorporate a constant correction for practice effects and are contrasted with the standard RCI that had no correction for practice. These rules are applied to 160 patients undergoing coronary artery bypass graft (CABG) surgery who completed neuropsychological assessments preoperatively and 1 week postoperatively using error and reliability data from a comparable healthy nonsurgical control group. The rules all identify POCD in a similar proportion of patients, but the use of the within-subject standard deviation (WSD), expressing the effects of random error, as an error estimate is a theoretically appropriate denominator when a constant error correction, removing the effects of systematic error, is deducted from the numerator in a RCI.
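A minimal sketch of an RCI with a constant practice-effect correction, in the spirit of the rules compared above; the practice-effect and within-subject standard deviation (WSD) values below are hypothetical stand-ins for estimates from a comparable control group:

```python
def rci_practice_corrected(pre, post, practice_effect, wsd):
    """Reliable change index: observed change minus the expected practice
    gain (systematic error), scaled by the within-subject standard
    deviation (random error)."""
    return (post - pre - practice_effect) / wsd

# Hypothetical patient: scored 50 preoperatively and 45 one week after
# surgery, while controls improve by 2 points on retest with WSD = 3.
z = rci_practice_corrected(50.0, 45.0, practice_effect=2.0, wsd=3.0)
declined = z <= -1.96  # flag reliable decline at the two-tailed 5% level
```

Subtracting a constant practice effect in the numerator removes the systematic retest error, which is why the WSD, a pure random-error estimate, is the theoretically appropriate denominator.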
Systematic errors of EIT systems determined by easily-scalable resistive phantoms.
Hahn, G; Just, A; Dittmar, J; Hellige, G
2008-06-01
We present a simple method to determine systematic errors that will occur in the measurements by EIT systems. The approach is based on very simple scalable resistive phantoms for EIT systems using a 16 electrode adjacent drive pattern. The output voltage of the phantoms is constant for all combinations of current injection and voltage measurements and the trans-impedance of each phantom is determined by only one component. It can be chosen independently from the input and output impedance, which can be set in order to simulate measurements on the human thorax. Additional serial adapters allow investigation of the influence of the contact impedance at the electrodes on resulting errors. Since real errors depend on the dynamic properties of an EIT system, the following parameters are accessible: crosstalk, the absolute error of each driving/sensing channel and the signal to noise ratio in each channel. Measurements were performed on a Goe-MF II EIT system under four different simulated operational conditions. We found that systematic measurement errors always exceeded the error level of stochastic noise since the Goe-MF II system had been optimized for a sufficient signal to noise ratio but not for accuracy. In time difference imaging and functional EIT (f-EIT) systematic errors are reduced to a minimum by dividing the raw data by reference data. This is not the case in absolute EIT (a-EIT) where the resistivity of the examined object is determined on an absolute scale. We conclude that a reduction of systematic errors has to be one major goal in future system design.
Model parameter-related optimal perturbations and their contributions to El Niño prediction errors
NASA Astrophysics Data System (ADS)
Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua
2018-04-01
Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found to be uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas, based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for future improvements in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.
Huchra, J P
1992-04-17
The Hubble constant is the constant of proportionality between recession velocity and distance in the expanding universe. It is a fundamental property of cosmology that sets both the scale and the expansion age of the universe. It is determined by measurement of galaxy distances and recession velocities. Despite the development of new techniques for the measurement of galaxy distances, both calibration uncertainties and debates over systematic errors remain. Current determinations still range over nearly a factor of 2; the higher values favored by most local measurements are not consistent with many theories of the origin of large-scale structure and stellar evolution.
3j Symbols: To Normalize or Not to Normalize?
ERIC Educational Resources Information Center
van Veenendaal, Michel
2011-01-01
The systematic use of alternative normalization constants for 3j symbols can lead to a more natural expression of quantities, such as vector products and spherical tensor operators. The redefined coupling constants directly equate tensor products to the inner and outer products without any additional square roots. The approach is extended to…
Determination of the pion-nucleon coupling constant and scattering lengths
NASA Astrophysics Data System (ADS)
Ericson, T. E.; Loiseau, B.; Thomas, A. W.
2002-07-01
We critically evaluate the isovector Goldberger-Miyazawa-Oehme (GMO) sum rule for forward πN scattering using the recent precision measurements of π-p and π-d scattering lengths from pionic atoms. We deduce the charged-pion-nucleon coupling constant, with careful attention to systematic and statistical uncertainties. This determination gives, directly from data, g²c(GMO)/4π = 14.11 ± 0.05 (statistical) ± 0.19 (systematic) or f²c/4π = 0.0783(11). This value is intermediate between that of indirect methods and the direct determination from backward np differential scattering cross sections. We also use the pionic atom data to deduce the coherent symmetric and antisymmetric sums of the pion-proton and pion-neutron scattering lengths with high precision, namely, (aπ-p + aπ-n)/2 = [−12 ± 2 (statistical) ± 8 (systematic)] × 10^-4 m_π^-1 and (aπ-p − aπ-n)/2 = [895 ± 3 (statistical) ± 13 (systematic)] × 10^-4 m_π^-1. For the needs of the present analysis, we improve the theoretical description of the pion-deuteron scattering length.
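Determinations quoted with separate statistical and systematic components, like the one above, can be compared by combining the two components in quadrature. The combination rule is a standard convention assumed here, not something the papers themselves prescribe.

```python
import math

def total_uncertainty(stat, syst):
    # Combine independent statistical and systematic errors in quadrature.
    return math.hypot(stat, syst)

# Values quoted in these abstracts for g^2_c(GMO)/4pi:
g2_a, err_a = 14.11, total_uncertainty(0.05, 0.19)  # Ericson et al. (2002)
g2_b, err_b = 14.17, total_uncertainty(0.09, 0.17)  # GMO sum rule entry
# The central values differ by less than either combined uncertainty,
# so the two determinations are mutually consistent.
```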
Ronchi, Roberta; Revol, Patrice; Katayama, Masahiro; Rossetti, Yves; Farnè, Alessandro
2011-01-01
During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensorimotor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot visual target locations to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual, and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right side of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, exposure to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensorimotor after-effects by mere observation of movement errors. PMID:21731649
Bernard, A M; Burgot, J L
1981-12-01
The reversibility of the determination reaction is the most frequent cause of deviations from linearity of thermometric titration curves. Because of this, determination of the equivalence point by the tangent method is associated with a systematic error. The authors propose a relationship which connects this error quantitatively with the equilibrium constant. The relation, verified experimentally, is deduced from a mathematical study of the thermograms and could probably be generalized to apply to other linear methods of determination.
Gaia Data Release 1. Validation of the photometry
NASA Astrophysics Data System (ADS)
Evans, D. W.; Riello, M.; De Angeli, F.; Busso, G.; van Leeuwen, F.; Jordi, C.; Fabricius, C.; Brown, A. G. A.; Carrasco, J. M.; Voss, H.; Weiler, M.; Montegriffo, P.; Cacciari, C.; Burgess, P.; Osborne, P.
2017-04-01
Aims: The photometric validation of the Gaia DR1 release of the ESA Gaia mission is described and the quality of the data shown. Methods: This is carried out via an internal analysis of the photometry using the most constant sources. Comparisons with external photometric catalogues are also made, but are limited by the accuracies and systematics present in these catalogues. An analysis of the quoted errors is also described. Investigations of the calibration coefficients reveal some of the systematic effects that affect the fluxes. Results: The analysis of the constant sources shows that the early-stage photometric calibrations can reach an accuracy as low as 3 mmag.
Omens of coupled model biases in the CMIP5 AMIP simulations
NASA Astrophysics Data System (ADS)
Găinuşă-Bogdan, Alina; Hourdin, Frédéric; Traore, Abdoul Khadre; Braconnot, Pascale
2018-02-01
Despite decades of efforts and improvements in the representation of processes as well as in model resolution, current global climate models still suffer from a set of important, systematic biases in sea surface temperature (SST), not much different from the previous generation of climate models. Many studies have looked at errors in the wind field, cloud representation or oceanic upwelling in coupled models to explain the SST errors. In this paper we highlight the relationship between latent heat flux (LH) biases in forced atmospheric simulations and the SST biases models develop in coupled mode, at the scale of the entire intertropical domain. By analyzing 22 pairs of forced atmospheric and coupled ocean-atmosphere simulations from the CMIP5 database, we show a systematic, negative correlation between the spatial patterns of these two biases. This link between forced and coupled bias patterns is also confirmed by two sets of dedicated sensitivity experiments with the IPSL-CM5A-LR model. The analysis of the sources of the atmospheric LH bias pattern reveals that the near-surface wind speed bias dominates the zonal structure of the LH bias and that the near-surface relative humidity dominates the east-west contrasts.
Improved RF Measurements of SRF Cavity Quality Factors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holzbauer, J. P.; Contreras, C.; Pischalnikov, Y.
SRF cavity quality factors can be accurately measured using RF-power-based techniques only when the cavity is very close to critically coupled. This limitation stems from systematic errors driven by non-ideal RF components. When the cavity is not close to critically coupled, these systematic effects limit the accuracy of the measurements. The combination of the complex base-band envelopes of the cavity RF signals with a trombone in the circuit allows the relative calibration of the RF signals to be extracted from the data, and systematic effects to be characterized and suppressed. The improved calibration allows accurate measurements to be made over a much wider range of couplings. Demonstration of these techniques during testing of a single-spoke resonator with a coupling factor near 7 will be presented, along with recommendations for application of these techniques.
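The coupling-dependence of the problem can be seen in the textbook relation between loaded and unloaded quality factor, sketched below. This is the standard RF formula, not the paper's full calibration procedure.

```python
def unloaded_q(loaded_q, beta):
    """Standard relation Q0 = (1 + beta) * QL.

    beta is the input coupling factor (beta = 1 is critical coupling).
    Far from critical coupling, small systematic errors in the measured
    power ratios translate into large errors in beta, and hence in Q0,
    which is the limitation the improved calibration addresses.
    """
    return (1.0 + beta) * loaded_q
```

For the single-spoke resonator mentioned above (beta near 7), Q0 is roughly eight times the loaded Q, so errors in beta propagate almost directly into Q0.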
The GMO Sumrule and the πNN Coupling Constant
NASA Astrophysics Data System (ADS)
Ericson, T. E. O.; Loiseau, B.; Thomas, A. W.
The isovector GMO sum rule for forward πN scattering is critically evaluated using the precise π-p and π-d scattering lengths obtained recently from pionic atom measurements. The charged πNN coupling constant is then deduced with careful analysis of systematic and statistical sources of uncertainty. This determination gives, directly from data, g²c(GMO)/4π = 14.17 ± 0.09 (statistical) ± 0.17 (systematic) or f²c/4π = 0.078(11). This value is half-way between that of indirect methods (phase-shift analyses) and the direct evaluation from backward np differential scattering cross sections (extrapolation to the pion pole). From the π-p and π-d scattering lengths our analysis also leads to accurate values for (1/2)(aπ-p + aπ-n) and (1/2)(aπ-p − aπ-n).
Observing Climate with GNSS Radio Occultation: Characterization and Mitigation of Systematic Errors
NASA Astrophysics Data System (ADS)
Foelsche, U.; Scherllin-Pirscher, B.; Danzer, J.; Ladstädter, F.; Schwarz, J.; Steiner, A. K.; Kirchengast, G.
2013-05-01
GNSS Radio Occultation (RO) data are very well suited for climate applications, since they require no external calibration and only short-term measurement stability over the occultation event duration (1-2 min), which is provided by the atomic clocks onboard the GPS satellites. With this "self-calibration", it is possible to combine data from different sensors and different missions without need for inter-calibration and overlap (which is extremely hard to achieve for conventional satellite data). Using the same retrieval for all datasets we obtained monthly refractivity and temperature climate records from multiple radio occultation satellites, which are consistent within 0.05 % and 0.05 K in almost any case (taking global averages over the altitude range 10 km to 30 km). Longer-term average deviations are even smaller. Even though the RO record is still short, its high quality already allows us to see statistically significant temperature trends in the lower stratosphere. The value of RO data for climate monitoring is therefore increasingly recognized by the scientific community, but there is also concern about potential residual systematic errors in RO climatologies, which might be common to data from all satellites. We started to look at different error sources, like the influence of the quality control and the high-altitude initialization. We will focus on recent results regarding (apparent) constants used in the retrieval and systematic ionospheric errors. (1) All current RO retrievals use a "classic" set of (measured) constants, relating atmospheric microwave refractivity with atmospheric parameters. With the increasing quality of RO climatologies, errors in these constants are not negligible anymore. We show how these parameters can be related to more fundamental physical quantities (fundamental constants, the molecular/atomic polarizabilities of the constituents of air, and the dipole moment of water vapor).
This approach also allows computing sensitivities to changes in atmospheric composition. We found that changes caused by the anthropogenic CO2 increase are still almost exactly offset by the concurrent O2 decrease. (2) Since the ionospheric correction of RO data is an approximation to first order, we have to consider an ionospheric residual, which can be expected to be larger when the ionization is high (day vs. night, high vs. low solar activity). In climate applications this could lead to a time dependent bias, which could induce wrong trends in atmospheric parameters at high altitudes. We studied this systematic ionospheric residual by analyzing the bending angle bias characteristics of CHAMP and COSMIC RO data from the years 2001 to 2011. We found that the night time bending angle bias stays constant over the whole period of 11 years, while the day time bias increases from low to high solar activity. As a result, the difference between night and day time bias increases from -0.05 μrad to -0.4 μrad. This behavior paves the way to correct the (small) solar cycle dependent bias of large ensembles of day time RO profiles.
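The kind of relation discussed in point (1) can be sketched with a Smith-Weintraub-type refractivity formula. The constants used here are commonly quoted textbook values and are an assumption for illustration; they are not necessarily the exact set used in any given RO retrieval.

```python
def refractivity(p_dry_hpa, e_hpa, temp_k,
                 k1=77.6, k2=70.4, k3=3.73e5):
    """Microwave refractivity N (in N-units), Smith-Weintraub form.

    N = k1 * Pd/T + k2 * e/T + k3 * e/T^2, with dry pressure Pd and
    water-vapour partial pressure e in hPa, temperature T in K.
    The 'classic' constants k1, k2, k3 are measured quantities whose
    errors become non-negligible as RO climatologies improve.
    """
    return (k1 * p_dry_hpa / temp_k
            + k2 * e_hpa / temp_k
            + k3 * e_hpa / temp_k**2)
```

For dry air at 1000 hPa and 288 K the first term alone gives N of roughly 270 N-units; the water-vapour terms dominate the humidity sensitivity in the lower troposphere.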
Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. Given that, a high-accuracy decoupling estimation method of the above systematic coordinate errors based on the constrained least squares (CLS) method is proposed in this paper. The reference frame of the IHDST is firstly converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate errors accurately. After compensating for error, the attitude accuracy of an INS can be assessed based on IHDST accurately. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
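The constrained least squares machinery underlying the method above can be sketched generically. This is a minimal equality-constrained least-squares solver via the KKT system, under the usual textbook formulation; it is not the paper's actual error model or constraint set.

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Minimize ||A x - b||^2 subject to C x = d.

    Solves the KKT system
        [A^T A  C^T] [x     ]   [A^T b]
        [C      0  ] [lambda] = [d    ]
    which couples the normal equations with the equality constraints.
    """
    n, m = A.shape[1], C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]  # discard the Lagrange multipliers
```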
New limits on variation of the fine-structure constant using atomic dysprosium.
Leefer, N; Weber, C T M; Cingöz, A; Torgerson, J R; Budker, D
2013-08-09
We report on the spectroscopy of radio-frequency transitions between nearly degenerate, opposite-parity excited states in atomic dysprosium (Dy). Theoretical calculations predict that these states are very sensitive to variation of the fine-structure constant α owing to large relativistic corrections of opposite sign for the opposite-parity levels. The near degeneracy reduces the relative precision necessary to place constraints on variation of α, competitive with results obtained from the best atomic clocks in the world. Additionally, the existence of several abundant isotopes of Dy allows isotopic comparisons that suppress common-mode systematic errors. The frequencies of the 754-MHz transition in 164Dy and the 235-MHz transition in 162Dy were measured over the span of two years. The linear variation of α is α̇/α = (−5.8 ± 6.9 [1σ]) × 10^-17 yr^-1, consistent with zero. The same data are used to constrain the dimensionless parameter k_α characterizing a possible coupling of α to a changing gravitational potential. We find that k_α = (−5.5 ± 5.2 [1σ]) × 10^-7, essentially consistent with zero and the best constraint to date.
NASA Astrophysics Data System (ADS)
Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.
2006-06-01
The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single catchment or a few catchments. A more important issue, namely how a model's response to input data errors changes as catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. A monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law with zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among other factors, on the type of error, the magnitude of the error, the physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to random error than to systematic error. Catchments with smaller runoff coefficients were more influenced by input data errors than catchments with higher values. Dry months were more sensitive to precipitation errors than wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
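The corruption scenarios described above can be sketched as follows. The function name and exact conventions (e.g. clipping negative values to zero) are illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def corrupt_precipitation(monthly_p, systematic_frac=0.10, random_frac=0.15):
    """Corrupt a monthly precipitation series with both error types.

    systematic_frac: constant bias added as a fraction of the mean
        monthly precipitation (5-15% in the study).
    random_frac: std of independent zero-mean Gaussian noise, as a
        fraction of the series' standard deviation (5-25% in the study).
    """
    p = np.asarray(monthly_p, dtype=float)
    systematic = systematic_frac * p.mean()
    random_err = rng.normal(0.0, random_frac * p.std(ddof=1), size=p.size)
    return np.clip(p + systematic + random_err, 0.0, None)  # no negative rain
```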
NASA Astrophysics Data System (ADS)
Follin, B.; Knox, L.
2018-03-01
Recent determination of the Hubble constant via Cepheid-calibrated supernovae by Riess et al. (2016) (R16) finds ∼3σ tension with inferences based on cosmic microwave background temperature and polarization measurements from Planck. This tension could be an indication of inadequacies in the concordance ΛCDM model. Here we investigate the possibility that the discrepancy could instead be due to systematic bias or uncertainty in the Cepheid calibration step of the distance ladder measurement by R16. We consider variations in total-to-selective extinction of Cepheid flux as a function of line-of-sight, hidden structure in the period-luminosity relationship, and potentially different intrinsic colour distributions of Cepheids as a function of host galaxy. Considering all potential sources of error, our final determination of H0 = 73.3 ± 1.7 km/s/Mpc (not including systematic errors from the treatment of geometric distances or Type Ia Supernovae) shows remarkable robustness and agreement with R16. We conclude systematics from the modelling of Cepheid photometry, including Cepheid selection criteria, cannot explain the observed tension between Cepheid-variable and CMB-based inferences of the Hubble constant. Considering a `model-independent' approach to relating Cepheids in galaxies with known distances to Cepheids in galaxies hosting a Type Ia supernova and finding agreement with the R16 result, we conclude no generalization of the model relating anchor and host Cepheid magnitude measurements can introduce significant bias in the H0 inference.
NASA Astrophysics Data System (ADS)
Follin, B.; Knox, L.
2018-07-01
Recent determination of the Hubble constant via Cepheid-calibrated supernovae by Riess et al. finds ∼3σ tension with inferences based on cosmic microwave background (CMB) temperature and polarization measurements from Planck. This tension could be an indication of inadequacies in the concordance Λ cold dark matter model. Here, we investigate the possibility that the discrepancy could instead be due to systematic bias or uncertainty in the Cepheid calibration step of the distance ladder measurement by Riess et al. We consider variations in total-to-selective extinction of Cepheid flux as a function of line of sight, hidden structure in the period-luminosity relationship, and potentially different intrinsic colour distributions of Cepheids as a function of host galaxy. Considering all potential sources of error, our final determination of H0 = 73.3 ± 1.7 km s^-1 Mpc^-1 (not including systematic errors from the treatment of geometric distances or Type Ia supernovae) shows remarkable robustness and agreement with Riess et al. We conclude systematics from the modelling of Cepheid photometry, including Cepheid selection criteria, cannot explain the observed tension between Cepheid-variable and CMB-based inferences of the Hubble constant. Considering a `model-independent' approach to relating Cepheids in galaxies with known distances to Cepheids in galaxies hosting a Type Ia supernova and finding agreement with the Riess et al. result, we conclude no generalization of the model relating anchor and host Cepheid magnitude measurements can introduce significant bias in the H0 inference.
High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu
2017-05-01
Inertial navigation systems are the core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy. However, the errors caused by misalignment angles and scale factor error cannot be eliminated through dual-axis rotation modulation, and discrete calibration methods cannot meet the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error during one modulation period and presents a new systematic self-calibration method for dual-axis rotation-modulating RLG-INS. A procedure for self-calibration of dual-axis rotation-modulating RLG-INS has been designed. Simulation results show that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and demonstrate its importance for accuracy improvement of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.
Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN
NASA Astrophysics Data System (ADS)
Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.
2016-12-01
In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 in 69 pixels over the Urmia Lake basin in northwest Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in the detection and estimation of rainfall rate. The residuals are decomposed into Hit, Miss, and False Alarm (FA) estimation biases, while a continuous decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, termed "reliability of PERSIANN estimations", is introduced, and the behavior of existing categorical/statistical measures and error components is also analyzed seasonally over different rainfall rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall rate categories is nearly invariant, a sign of systematic error. - A low level of reliability is observed for PERSIANN estimations at different categories, mostly associated with high levels of FA error. However, as the rate of precipitation increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty modeling the system and pattern of rainfall than it has bias due to rainfall uncertainties. The level of systematic error also increases considerably for heavier rainfall.
It is also important to note that PERSIANN error characteristics vary with season, following the conditions and rainfall patterns of each season, which shows the necessity of a seasonally differentiated approach to the calibration of this product. Overall, we believe that the error-component analyses performed in this study can substantially help further local studies on post-calibration and bias reduction of PERSIANN estimations.
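A decomposition of total error into systematic and random components, as used in satellite-precipitation evaluations of this kind, can be sketched with the common linear-regression formulation. The study does not spell out its exact decomposition, so this is an assumed, standard variant.

```python
import numpy as np

def decompose_mse(obs, est):
    """Split the MSE of estimates vs. observations into parts.

    Regress the estimates on the observations; the fitted line captures
    the systematic (bias-like) error, while the residual scatter about
    the line is attributed to random error:
        MSE = MSE_systematic + MSE_random.
    """
    obs, est = np.asarray(obs, float), np.asarray(est, float)
    b, a = np.polyfit(obs, est, 1)   # est ~ a + b * obs
    fitted = a + b * obs
    mse_systematic = np.mean((fitted - obs) ** 2)
    mse_random = np.mean((est - fitted) ** 2)
    return mse_systematic, mse_random
```

A purely multiplicative bias (e.g. estimates 20% too high) shows up entirely in the systematic term, with the random term near zero.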
Reverberant acoustic energy in auditoria that comprise systems of coupled rooms
NASA Astrophysics Data System (ADS)
Summers, Jason Erik
A frequency-dependent model for levels and decay rates of reverberant energy in systems of coupled rooms is developed and compared with measurements conducted in a 1:10 scale model and in Bass Hall, Fort Worth, TX. Schroeder frequencies of subrooms, f_Sch, characteristic size of coupling apertures, a, relative to wavelength λ, and characteristic size of room surfaces, l, relative to λ define the frequency regions. At high frequencies [HF (f ≫ f_Sch, a ≫ λ, l ≫ λ)], this work improves upon prior statistical-acoustics (SA) coupled-ODE models by incorporating geometrical-acoustics (GA) corrections for the model of decay within subrooms and the model of energy transfer between subrooms. Previous researchers developed prediction algorithms based on computational GA. Comparisons of predictions derived from beam-axis tracing with scale-model measurements indicate that systematic errors for coupled rooms result from earlier tail-correction procedures that assume constant quadratic growth of reflection density. A new algorithm is developed that uses ray tracing rather than tail correction in the late part and is shown to correct this error. At midfrequencies [MF (f ≫ f_Sch, a ∼ λ)], HF models are modified to account for wave effects at coupling apertures by including analytically or heuristically derived power transmission coefficients τ. This work improves upon prior SA models of this type by developing more accurate estimates of random-incidence τ. While the accuracy of the MF models is difficult to verify, scale-model measurements evidence the expected behavior. The Biot-Tolstoy-Medwin-Svensson (BTMS) time-domain edge-diffraction model is newly adapted to study transmission through apertures. Multiple-order BTMS scattering is theoretically and experimentally shown to be inaccurate due to the neglect of slope diffraction.
At low frequencies (f ∼ f_Sch), scale-model measurements have been qualitatively explained by application of previously developed perturbation models. Measurements newly confirm that coupling strength between three-dimensional rooms is related to the unperturbed pressure distribution on the coupling surface. In Bass Hall, measurements are conducted to determine the acoustical effects of the coupled stage house on stage and in the audience area. The high-frequency predictions of statistical- and geometrical-acoustics models agree well with measured results. Predictions of the transmission coefficients of the coupling apertures agree, at least qualitatively, with the observed behavior.
Sokolenko, Stanislav; Aucoin, Marc G
2015-09-04
The growing ubiquity of metabolomic techniques has facilitated high frequency time-course data collection for an increasing number of applications. While the concentration trends of individual metabolites can be modeled with common curve fitting techniques, a more accurate representation of the data needs to consider effects that act on more than one metabolite in a given sample. To this end, we present a simple algorithm that uses nonparametric smoothing carried out on all observed metabolites at once to identify and correct systematic error from dilution effects. In addition, we develop a simulation of metabolite concentration time-course trends to supplement available data and explore algorithm performance. Although we focus on nuclear magnetic resonance (NMR) analysis in the context of cell culture, a number of possible extensions are discussed. Realistic metabolic data was successfully simulated using a 4-step process. Starting with a set of metabolite concentration time-courses from a metabolomic experiment, each time-course was classified as either increasing, decreasing, concave, or approximately constant. Trend shapes were simulated from generic functions corresponding to each classification. The resulting shapes were then scaled to simulated compound concentrations. Finally, the scaled trends were perturbed using a combination of random and systematic errors. To detect systematic errors, a nonparametric fit was applied to each trend and percent deviations calculated at every timepoint. Systematic errors could be identified at time-points where the median percent deviation exceeded a threshold value, determined by the choice of smoothing model and the number of observed trends. Regardless of model, increasing the number of observations over a time-course resulted in more accurate error estimates, although the improvement was not particularly large between 10 and 20 samples per trend. 
The presented algorithm was able to identify systematic errors as small as 2.5% under a wide range of conditions. Both the simulation framework and the error correction method represent examples of time-course analysis that can be applied to further developments in ¹H-NMR methodology and the more general application of quantitative metabolomics.
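The detection step described above (smooth every metabolite trend at once, then flag time points where the median percent deviation crosses a threshold) can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code: the moving-average smoother stands in for whatever nonparametric model is chosen, and the function names are hypothetical.

```python
def moving_average(y, window=3):
    """Simple smoother standing in for the nonparametric fit."""
    n = len(y)
    out = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out

def flag_systematic_errors(trends, threshold=0.025):
    """trends: list of equal-length metabolite concentration time-courses
    (assumed strictly positive). Returns indices of time points whose
    median absolute percent deviation from the smoothed fit exceeds
    `threshold` (2.5% by default), i.e. candidate dilution errors that
    act on all metabolites in a sample at once."""
    deviations = []
    for y in trends:
        fit = moving_average(y)
        deviations.append([abs(y[t] - fit[t]) / fit[t] for t in range(len(y))])
    flagged = []
    for t in range(len(trends[0])):
        col = sorted(d[t] for d in deviations)
        m = len(col)
        median = col[m // 2] if m % 2 else 0.5 * (col[m // 2 - 1] + col[m // 2])
        if median > threshold:
            flagged.append(t)
    return flagged
```

Because a dilution error shifts every metabolite in the same sample, taking the median across trends at each time point separates sample-wide (systematic) deviations from metabolite-specific (random) noise.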
Good coupling for the multiscale patch scheme on systems with microscale heterogeneity
NASA Astrophysics Data System (ADS)
Bunder, J. E.; Roberts, A. J.; Kevrekidis, I. G.
2017-05-01
Computational simulation of microscale detailed systems is frequently only feasible over spatial domains much smaller than the macroscale of interest. The 'equation-free' methodology couples many small patches of microscale computations across space to empower efficient computational simulation over macroscale domains of interest. Motivated by molecular or agent simulations, we analyse the performance of various coupling schemes for patches when the microscale is inherently 'rough'. As a canonical problem in this universality class, we systematically analyse the case of heterogeneous diffusion on a lattice. Computer algebra explores how the dynamics of coupled patches predict the large scale emergent macroscale dynamics of the computational scheme. We determine good design for the coupling of patches by comparing the macroscale predictions from patch dynamics with the emergent macroscale on the entire domain, thus minimising the computational error of the multiscale modelling. The minimal error on the macroscale is obtained when the coupling utilises averaging regions which are between a third and a half of the patch. Moreover, when the symmetry of the inter-patch coupling matches that of the underlying microscale structure, patch dynamics predicts the desired macroscale dynamics to any specified order of error. The results confirm that the patch scheme is useful for macroscale computational simulation of a range of systems with microscale heterogeneity.
Jackson, Neal
2015-01-01
I review the current state of determinations of the Hubble constant, which gives the length scale of the Universe by relating the expansion velocity of objects to their distance. There are two broad categories of measurements. The first uses individual astrophysical objects which have some property that allows their intrinsic luminosity or size to be determined, or allows the determination of their distance by geometric means. The second category comprises the use of all-sky cosmic microwave background data, or correlations between large samples of galaxies, to determine information about the geometry of the Universe and hence the Hubble constant, typically in combination with other cosmological parameters. Many, but not all, object-based measurements give H₀ values of around 72-74 km s⁻¹ Mpc⁻¹, with typical errors of 2-3 km s⁻¹ Mpc⁻¹. This is in mild discrepancy with CMB-based measurements, in particular those from the Planck satellite, which give values of 67-68 km s⁻¹ Mpc⁻¹ with typical errors of 1-2 km s⁻¹ Mpc⁻¹. The size of the remaining systematics indicates that accuracy rather than precision is the remaining problem in a good determination of the Hubble constant. Whether a discrepancy exists, and whether new physics is needed to resolve it, depends on details of the systematics of the object-based methods, and also on the assumptions about other cosmological parameters and which datasets are combined in the case of the all-sky methods.
Barkley, Sarice S; Deng, Zhao; Gates, Richard S; Reitsma, Mark G; Cannara, Rachel J
2012-02-01
Two independent lateral-force calibration methods for the atomic force microscope (AFM)--the hammerhead (HH) technique and the diamagnetic lateral force calibrator (D-LFC)--are systematically compared and found to agree to within 5 % or less, but with precision limited to about 15 %, using four different tee-shaped HH reference probes. The limitations of each method, both of which offer independent yet feasible paths toward traceable accuracy, are discussed and investigated. We find that stiff cantilevers may produce inconsistent D-LFC values through the application of excessively high normal loads. In addition, D-LFC results vary when the method is implemented using different modes of AFM feedback control, constant height and constant force modes, where the latter is more consistent with the HH method and closer to typical experimental conditions. Specifically, for the D-LFC apparatus used here, calibration in constant height mode introduced errors up to 14 %. In constant force mode using a relatively stiff cantilever, we observed an ≈ 4 % systematic error per μN of applied load for loads ≤ 1 μN. The issue of excessive load typically emerges for cantilevers whose flexural spring constant is large compared with the normal spring constant of the D-LFC setup (such that relatively small cantilever flexural displacements produce relatively large loads). Overall, the HH method carries a larger uncertainty, which is dominated by uncertainty in measurement of the flexural spring constant of the HH cantilever as well as in the effective length dimension of the cantilever probe. The D-LFC method relies on fewer parameters and thus has fewer uncertainties associated with it. We thus show that it is the preferred method of the two, as long as care is taken to perform the calibration in constant force mode with low applied loads.
Gladstone-Dale constant for CF4. [experimental design
NASA Technical Reports Server (NTRS)
Burner, A. W., Jr.; Goad, W. K.
1980-01-01
The Gladstone-Dale constant, which relates the refractive index to density, was measured for CF4 by counting fringes of a two-beam interferometer, one beam of which passes through a cell containing the test gas. The experimental approach and the sources of systematic and random error are discussed. The constant for CF4 was measured at several wavelengths in the visible region of the spectrum. A value of 0.122 cu cm/g with an uncertainty of plus or minus 0.001 cu cm/g was determined for use in the visible region. A procedure for noting the departure of the gas density from the ideal-gas law is discussed.
NASA Astrophysics Data System (ADS)
Akbarashrafi, F.; Al-Attar, D.; Deuss, A.; Trampert, J.; Valentine, A. P.
2018-04-01
Seismic free oscillations, or normal modes, provide a convenient tool to calculate low-frequency seismograms in heterogeneous Earth models. A procedure called `full mode coupling' allows the seismic response of the Earth to be computed. However, in order to be theoretically exact, such calculations must involve an infinite set of modes. In practice, only a finite subset of modes can be used, introducing an error into the seismograms. By systematically increasing the number of modes beyond the highest frequency of interest in the seismograms, we investigate the convergence of full-coupling calculations. As a rule-of-thumb, it is necessary to couple modes 1-2 mHz above the highest frequency of interest, although results depend upon the details of the Earth model. This is significantly higher than has previously been assumed. Observations of free oscillations also provide important constraints on the heterogeneous structure of the Earth. Historically, this inference problem has been addressed by the measurement and interpretation of splitting functions. These can be seen as secondary data extracted from low frequency seismograms. The measurement step necessitates the calculation of synthetic seismograms, but current implementations rely on approximations referred to as self- or group-coupling and do not use fully accurate seismograms. We therefore also investigate whether a systematic error might be present in currently published splitting functions. We find no evidence for any systematic bias, but published uncertainties must be doubled to properly account for the errors due to theoretical omissions and regularization in the measurement process. Correspondingly, uncertainties in results derived from splitting functions must also be increased. As is well known, density has only a weak signal in low-frequency seismograms. Our results suggest this signal is of similar scale to the true uncertainties associated with currently published splitting functions. 
Thus, it seems that great care must be taken in any attempt to robustly infer details of Earth's density structure using current splitting functions.
Stability of colloidal gold and determination of the Hamaker constant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demirci, S.; Enuestuen, B.V.; Turkevich, J.
1978-12-14
Previous computation of stability factors of colloidal gold from coagulation data was found to be in systematic error due to an underestimation of the particle concentration by electron microscopy. A new experimental technique was developed for determination of this concentration. Stability factors were recalculated from the previous data using the correct concentration. While most of the previously reported conclusions remain unchanged, the absolute rate of fast coagulation is found to agree with that predicted by the theory. A value of the Hamaker constant was determined from the corrected data.
Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M
2011-01-01
Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig through which water circulated at different constant rates with ports to insert catheters into a flow chamber was assembled. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. 
Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate reading, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
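The error budget above combines independent random and systematic components in quadrature, with only the random part shrinking as readings are averaged. A hedged sketch of that arithmetic (not the authors' code; the function name and the 95% factor z = 1.96 are assumptions) reproduces the quoted figures from the averaged coefficients of variation (≈5.1% random, ≈5.9% systematic):

```python
import math

def precision_error(cv_random, cv_systematic, n_readings=1, z=1.96):
    """Combine random and systematic CVs (in %) into a 95% precision
    error (in %). Averaging n readings reduces only the random
    component, by a factor of sqrt(n); the systematic (between-catheter)
    component is unaffected by repeating injections."""
    random_pe = z * cv_random / math.sqrt(n_readings)
    systematic_pe = z * cv_systematic
    return math.sqrt(random_pe**2 + systematic_pe**2)
```

With these inputs the function returns approximately ±15.3% for a single thermodilution reading and approximately ±13.0% for the mean of triplicate readings, consistent with the abstract; the modest gain from triplicates reflects the dominance of the systematic component.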
A relativistic coupled-cluster interaction potential and rovibrational constants for the xenon dimer
NASA Astrophysics Data System (ADS)
Jerabek, Paul; Smits, Odile; Pahl, Elke; Schwerdtfeger, Peter
2018-01-01
An accurate potential energy curve has been derived for the xenon dimer using state-of-the-art relativistic coupled-cluster theory up to quadruple excitations accounting for both basis set superposition and incompleteness errors. The data obtained is fitted to a computationally efficient extended Lennard-Jones potential form and to a modified Tang-Toennies potential function treating the short- and long-range part separately. The vibrational spectrum of Xe2 obtained from a numerical solution of the rovibrational Schrödinger equation and subsequently derived spectroscopic constants are in excellent agreement with experimental values. We further present solid-state calculations for xenon using a static many-body expansion up to fourth-order in the xenon interaction potential including dynamic effects within the Einstein approximation. Again we find very good agreement with the experimental (face-centred cubic) lattice constant and cohesive energy.
Mathematical foundations of hybrid data assimilation from a synchronization perspective
NASA Astrophysics Data System (ADS)
Penny, Stephen G.
2017-12-01
The state-of-the-art data assimilation methods used today in operational weather prediction centers around the world can be classified as generalized one-way coupled impulsive synchronization. This classification permits the investigation of hybrid data assimilation methods, which combine dynamic error estimates of the system state with long time-averaged (climatological) error estimates, from a synchronization perspective. Illustrative results show how dynamically informed formulations of the coupling matrix (via an Ensemble Kalman Filter, EnKF) can lead to synchronization when observing networks are sparse and how hybrid methods can lead to synchronization when those dynamic formulations are inadequate (due to small ensemble sizes). A large-scale application with a global ocean general circulation model is also presented. Results indicate that the hybrid methods also have useful applications in generalized synchronization, in particular, for correcting systematic model errors.
Intrinsic errors in transporting a single-spin qubit through a double quantum dot
NASA Astrophysics Data System (ADS)
Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.
2017-07-01
Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunnelings, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.
López-Vallejo, Fabian; Fragoso-Serrano, Mabel; Suárez-Ortiz, Gloria Alejandra; Hernández-Rojas, Adriana C; Cerda-García-Rojas, Carlos M; Pereda-Miranda, Rogelio
2011-08-05
A protocol for stereochemical analysis, based on the systematic comparison between theoretical and experimental vicinal ¹H-¹H NMR coupling constants, was developed and applied to a series of flexible compounds (1-8) derived from the 6-heptenyl-5,6-dihydro-2H-pyran-2-one framework. The method included a broad conformational search, followed by geometry optimization at the DFT B3LYP/DGDZVP level, calculation of the vibrational frequencies, thermochemical parameters, magnetic shielding tensors, and the total NMR spin-spin coupling constants. Three scaling factors, depending on the carbon atom hybridizations, were found for the ¹H-C-C-¹H vicinal coupling constants: f(sp³-sp³) = 0.910, f(sp³-sp²) = 0.929, and f(sp²-sp²) = 0.977. A remarkable correlation between the theoretical (J(pre)) and experimental ¹H-¹H NMR (J(exp)) coupling constants for spicigerolide (1), a cytotoxic natural product, and some of its synthetic stereoisomers (2-4) demonstrated the predictive value of this approach for the stereochemical assignment of highly flexible compounds containing multiple chiral centers. The stereochemistry of two natural 6-heptenyl-5,6-dihydro-2H-pyran-2-ones (14 and 15) containing diverse functional groups in the heptenyl side chain was also analyzed by application of this combined theoretical and experimental approach, confirming its reliability. Additionally, a geometrical analysis of the conformations of 1-8 revealed that weak hydrogen bonds substantially guide the conformational behavior of the tetraacyloxy-6-heptenyl-2H-pyran-2-ones.
Systematics of the K X-Ray Multiplicity for Transitional Nuclei with A ≈ 200
NASA Astrophysics Data System (ADS)
Karwowski, H. J.; Vigdor, S. E.; Jacobs, W. W.; Throwe, T. G.; Wark, D. L.; Kailas, S.; Singh, P. P.; Soga, F.; Ward, T. E.; Wiggins, J.
1981-11-01
Measurements of the multiplicity of K x rays accompanying (Li,xn) reactions to residual nuclei with Z ≈ 80 exhibit plateaus of high and constant multiplicity for neutron numbers between 110 and 120, with rapid falloff for both smaller and larger N. A proposed explanation for this systematic behavior assumes that strongly coupled, high-K rotational bands are a much more general feature of this transitional mass region than existing data indicate.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bauer, Carl A., E-mail: bauerca@colorado.ed; Werner, Gregory R.; Cary, John R.
A new frequency-domain electromagnetics algorithm is developed for simulating curved interfaces between anisotropic dielectrics embedded in a Yee mesh with second-order error in resonant frequencies. The algorithm is systematically derived using the finite integration formulation of Maxwell's equations on the Yee mesh. Second-order convergence of the error in resonant frequencies is achieved by guaranteeing first-order error on dielectric boundaries and second-order error in bulk (possibly anisotropic) regions. Convergence studies, conducted for an analytically solvable problem and for a photonic crystal of ellipsoids with anisotropic dielectric constant, both show second-order convergence of frequency error; the convergence is sufficiently smooth that Richardson extrapolation yields roughly third-order convergence. The convergence of electric fields near the dielectric interface for the analytic problem is also presented.
A spectral filter for ESMR's sidelobe errors
NASA Technical Reports Server (NTRS)
Chesters, D.
1979-01-01
Fourier analysis was used to remove periodic errors from a series of NIMBUS-5 electronically scanned microwave radiometer brightness temperatures. The observations were all taken from the midnight orbits over fixed sites in the Australian grasslands. The angular dependence of the data indicates that the calibration errors consisted of broad sidelobes and some miscalibration as a function of beam position. Even though an angular recalibration curve cannot be derived from the available data, the systematic errors can be removed with a spectral filter. The 7-day cycle in the drift of the orbit of NIMBUS-5, coupled to the look-angle biases, produces an error pattern with peaks in its power spectrum at the weekly harmonics. About plus or minus 4 K of error is removed by simply blocking the variations near two and three cycles per week.
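The blocking filter described here amounts to projecting out the offending Fourier components. A minimal sketch (not the original processing code; the function name is hypothetical) that removes the two- and three-cycle-per-week harmonics from a uniformly sampled series:

```python
import math

def remove_harmonics(samples, period, harmonics=(2, 3)):
    """Remove the given harmonics (cycles per `period` samples, e.g.
    period=7 for daily data and weekly cycles) from a uniformly sampled
    series by subtracting their projected sine/cosine components.
    Exact cancellation assumes the record spans whole cycles."""
    n = len(samples)
    cleaned = list(samples)
    for k in harmonics:
        freq = k / period  # cycles per sample
        # Project the series onto cosine and sine at this frequency.
        c = sum(samples[t] * math.cos(2 * math.pi * freq * t) for t in range(n))
        s = sum(samples[t] * math.sin(2 * math.pi * freq * t) for t in range(n))
        c *= 2.0 / n
        s *= 2.0 / n
        # Subtract the reconstructed harmonic from every sample.
        for t in range(n):
            cleaned[t] -= (c * math.cos(2 * math.pi * freq * t)
                           + s * math.sin(2 * math.pi * freq * t))
    return cleaned
```

Blocking only the weekly harmonics leaves the slowly varying geophysical signal untouched, which is why the filter can suppress the orbit-drift error pattern without an explicit angular recalibration.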
A test of Gaia Data Release 1 parallaxes: implications for the local distance scale
NASA Astrophysics Data System (ADS)
Casertano, Stefano; Riess, Adam G.; Bucciarelli, Beatrice; Lattanzi, Mario G.
2017-03-01
Aims: We present a comparison of Gaia Data Release 1 (DR1) parallaxes with photometric parallaxes for a sample of 212 Galactic Cepheids at a median distance of 2 kpc, and explore their implications on the distance scale and the local value of the Hubble constant H0. Methods: The Cepheid distances are estimated from a recent calibration of the near-infrared period-luminosity (P-L) relation. The comparison is carried out in parallax space, where the DR1 parallax errors, with a median value of half the median parallax, are expected to be well-behaved. Results: With the exception of one outlier, the DR1 parallaxes are in very good global agreement with the predictions from a well-established P-L relation, with a possible indication that the published errors may be conservatively overestimated by about 20%. This confirms that the quality of DR1 parallaxes for the Cepheids in our sample is well within their stated errors. We find that the parallaxes of 9 Cepheids brighter than G = 6 may be systematically underestimated. If interpreted as an independent calibration of the Cepheid luminosities and assumed to be otherwise free of systematic uncertainties, DR1 parallaxes are in very good agreement (within 0.3%) with the current estimate of the local Hubble constant, and in conflict at the level of 2.5σ (3.5σ if the errors are scaled) with the value inferred from Planck cosmic microwave background data used in conjunction with ΛCDM. We also test for a zeropoint error in Gaia parallaxes and find none to a precision of 20 μas. We caution however that with this early release, the complete systematic properties of the measurements may not be fully understood at the statistical level of the Cepheid sample mean, a level an order of magnitude below the individual uncertainties. The early results from DR1 demonstrate again the enormous impact that the full mission will likely have on fundamental questions in astrophysics and cosmology.
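The claim that the published DR1 errors may be overestimated by about 20% is the kind of statement one checks with error-normalized residuals: if the quoted errors are correct, the reduced chi-square of the (DR1 minus photometric) parallax differences should be near unity, while a value well below 1 suggests overestimated errors. A generic sketch (not the authors' pipeline; the function name is hypothetical):

```python
def error_scale(measured, predicted, sigma):
    """Reduced chi-square of (measured - predicted) / sigma over a
    sample. A value near 1 means the quoted errors are about right;
    well below 1 suggests the errors are conservatively overestimated
    (roughly by a factor of 1/sqrt(value))."""
    n = len(measured)
    chi2 = sum(((m - p) / s) ** 2
               for m, p, s in zip(measured, predicted, sigma))
    return chi2 / n
```

Working in parallax space, as the authors do, keeps the residuals approximately Gaussian even for distant Cepheids, where inverting noisy parallaxes to distances would skew the statistics.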
DOE Office of Scientific and Technical Information (OSTI.GOV)
Songaila, A.; Cowie, L. L., E-mail: acowie@ifa.hawaii.edu
2014-10-01
The unequivocal demonstration of temporal or spatial variability in a fundamental constant of nature would be of enormous significance. Recent attempts to measure the variability of the fine-structure constant α over cosmological time, using high-resolution spectra of high-redshift quasars observed with 10 m class telescopes, have produced conflicting results. We use the many multiplet (MM) method with Mg II and Fe II lines on very high signal-to-noise, high-resolution (R = 72,000) Keck HIRES spectra of eight narrow quasar absorption systems. We consider both systematic uncertainties in spectrograph wavelength calibration and also velocity offsets introduced by complex velocity structure in even apparently simple and weak narrow lines, and analyze their effect on claimed variations in α. We find no significant change in α, Δα/α = (0.43 ± 0.34) × 10⁻⁵, in the redshift range z = 0.7-1.5, where this includes both statistical and systematic errors. We also show that the scatter in measurements of Δα/α arising from absorption line structure can be considerably larger than the assigned statistical errors, even for apparently simple and narrow absorption systems. We find a null result of Δα/α = (−0.59 ± 0.55) × 10⁻⁵ in a system at z = 1.7382 using lines of Cr II, Zn II, and Mn II, whereas using Cr II and Zn II lines in a system at z = 1.6614 we find a systematic velocity trend that, if interpreted as a shift in α, would correspond to Δα/α = (1.88 ± 0.47) × 10⁻⁵, where both results include both statistical and systematic errors. This latter result is almost certainly caused by varying ionic abundances in subcomponents of the line: using Mn II, Ni II, and Cr II in the analysis changes the result to Δα/α = (−0.47 ± 0.53) × 10⁻⁵. Combining the Mg II and Fe II results with estimates based on Mn II, Ni II, and Cr II gives Δα/α = (−0.01 ± 0.26) × 10⁻⁵. We conclude that spectroscopic measurements of quasar absorption lines are not yet capable of unambiguously detecting variation in α using the MM method.
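Quoted combinations such as Δα/α = (−0.01 ± 0.26) × 10⁻⁵ are typically inverse-variance weighted means of the individual system results. A generic sketch of that standard combination (not the authors' code, and not guaranteed to reproduce their exact rounding):

```python
import math

def weighted_mean(values, errors):
    """Inverse-variance weighted combination of independent estimates.
    Returns (mean, 1-sigma error of the mean). Assumes the individual
    errors already include both statistical and systematic parts."""
    weights = [1.0 / e**2 for e in errors]
    total = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / total
    return mean, 1.0 / math.sqrt(total)
```

The combined error 1/sqrt(Σ wᵢ) shrinks only if the systematic components are genuinely independent between systems, which is exactly the assumption the abstract's scatter analysis calls into question.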
The effect of interacting dark energy on local measurements of the Hubble constant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Odderskov, Io; Baldi, Marco; Amendola, Luca, E-mail: isho07@phys.au.dk, E-mail: marco.baldi5@unibo.it, E-mail: l.amendola@thphys.uni-heidelberg.de
2016-05-01
In the current state of cosmology, where cosmological parameters are being measured to percent accuracy, it is essential to understand all sources of error to high precision. In this paper we present the results of a study of the local variations in the Hubble constant measured at the distance scale of the Coma Cluster, and test the validity of correcting for the peculiar velocities predicted by gravitational instability theory. The study is based on N-body simulations, and includes models featuring a coupling between dark energy and dark matter, as well as two ΛCDM simulations with different values of σ₈. It is found that the variance in the local flows is significantly larger in the coupled models, which increases the uncertainty in the local measurements of the Hubble constant in these scenarios. By comparing the results from the different simulations, it is found that most of the effect is caused by the higher value of σ₈ in the coupled cosmologies, though this cannot account for all of the additional variance. Given the discrepancy between different estimates of the Hubble constant in the universe today, cosmological models that cause a greater cosmic variance are something that we should be aware of.
Global Warming Estimation From Microwave Sounding Unit
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, R., Jr.; Yoo, J.-M.; Dalu, G.
1998-01-01
Microwave Sounding Unit (MSU) Ch 2 data sets, collected from sequential, polar-orbiting, Sun-synchronous National Oceanic and Atmospheric Administration operational satellites, contain systematic calibration errors that are coupled to the diurnal temperature cycle over the globe. Since these coupled errors in MSU data differ between successive satellites, it is necessary to make compensatory adjustments to these multisatellite data sets in order to determine long-term global temperature change. With the aid of the observations during overlapping periods of successive satellites, we can determine such adjustments and use them to account for the coupled errors in the long-term time series of MSU Ch 2 global temperature. In turn, these adjusted MSU Ch 2 data sets can be used to yield the global temperature trend. In a pioneering study, Spencer and Christy (SC) (1990) developed a procedure to derive the global temperature trend from MSU Ch 2 data. Such a procedure can leave unaccounted residual errors in the time series of the temperature anomalies deduced by SC, which could lead to a spurious long-term temperature trend derived from their analysis. In the present study, we have developed a method that avoids the shortcomings of the SC procedure: the magnitude of the coupled errors is not determined explicitly; instead, based on some assumptions, these coupled errors are eliminated in three separate steps. Based on our analysis, we find there is a global warming of 0.23 ± 0.12 K between 1980 and 1991.
Also, in this study, the time series of global temperature anomalies constructed by removing the global mean annual temperature cycle compares favorably with a similar time series obtained from conventional observations of temperature.
Muon Energy Calibration of the MINOS Detectors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Miyagawa, Paul S.
MINOS is a long-baseline neutrino oscillation experiment designed to search for conclusive evidence of neutrino oscillations and to measure the oscillation parameters precisely. MINOS comprises two iron tracking calorimeters located at Fermilab and Soudan. The Calibration Detector at CERN is a third MINOS detector used as part of the detector response calibration programme. A correct energy calibration between these detectors is crucial for the accurate measurement of oscillation parameters. This thesis presents a calibration developed to produce a uniform response within a detector using cosmic muons. Reconstruction of tracks in cosmic ray data is discussed. These data are utilized to calculate calibration constants for each readout channel of the Calibration Detector. These constants have an average statistical error of 1.8%. The consistency of the constants is demonstrated both within a single run and between runs separated by a few days. Results are presented from applying the calibration to test beam particles measured by the Calibration Detector. The responses are calibrated to within a 1.8% systematic error. The potential impact of the calibration on the measurement of oscillation parameters by MINOS is also investigated. Applying the calibration reduces the errors in the measured parameters by ~10%, which is equivalent to increasing the amount of data by 20%.
Analysis of High-Resolution Infrared and CARS Spectra of ³²S¹⁸O₃
DOE Office of Scientific and Technical Information (OSTI.GOV)
Masiello, Tony; Vulpanovici, Nicolae; Barber, Jeffrey B.
2004-09-11
As part of a series of investigations of isotopic forms of sulfur trioxide, high-resolution infrared and coherent anti-Stokes Raman spectroscopies were used to study the fundamental modes and several hot bands of ³²S¹⁸O₃. Hot bands originating from the ν₂ and ν₄ bending mode levels have been found to couple strongly to the IR-inactive ν₁ symmetric stretching mode through indirect Coriolis interactions and Fermi resonances. Coriolis coupling effects are particularly noticeable in ³²S¹⁸O₃ due to the close proximity of the ν₂ and ν₄ fundamental vibrations, whose deperturbed wavenumber values are 486.488 13(4) and 504.284 77(4) cm⁻¹. The uncertainties in the last digits are shown in parentheses and are two standard deviations. From the infrared transitions, accurate rovibrational constants are deduced for all of the mixed states, leading to deperturbed values for ν₁ and of 1004.68(2), 0.000 713(2), and 0.000 348(2) cm⁻¹, respectively. The Bₑ value is found to be 0.310 820(2) cm⁻¹, yielding an equilibrium bond length rₑ of 141.7333(4) pm that is, within experimental error, identical to the value of 141.7339(3) pm reported previously for ³⁴S¹⁸O₃. With this work, precise and accurate spectroscopic constants have now been determined in a systematic and consistent manner for all the fundamental vibrational modes of the sulfur trioxide D₃h isotopomeric forms ³²S¹⁶O₃, ³⁴S¹⁶O₃, ³²S¹⁸O₃, and ³⁴S¹⁸O₃.
The nuclear electric quadrupole moment of copper.
Santiago, Régis Tadeu; Teodoro, Tiago Quevedo; Haiduke, Roberto Luiz Andrade
2014-06-21
The nuclear electric quadrupole moment (NQM) of the ⁶³Cu nucleus was determined by an indirect approach combining accurate experimental nuclear quadrupole coupling constants (NQCCs) with relativistic Dirac-Coulomb coupled cluster calculations of the electric field gradient (EFG). The data obtained at the highest level of calculation, DC-CCSD-T, from 14 linear molecules containing the copper atom give rise to an indicated NQM of -198(10) mbarn. This result deviates slightly from the previously accepted standard value given by the muonic method, -220(15) mbarn, although the error bars overlap.
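The indirect approach divides each experimental NQCC by the corresponding computed EFG. A hedged sketch using the conventional conversion NQCC[MHz] = 234.9647 × Q[barn] × q[a.u.]; this factor is the standard one, assumed here rather than quoted from the paper:

```python
# Conventional conversion factor linking NQCC (MHz), Q (barn), and the
# EFG q (atomic units): NQCC = 234.9647 * Q * q  (assumed standard value)
K = 234.9647

def nqm_mb(nqcc_mhz, efg_au):
    """Nuclear quadrupole moment in millibarn from an experimental NQCC
    and a computed electric field gradient."""
    return 1000.0 * nqcc_mhz / (K * efg_au)
```

In the paper's scheme, values from many molecules are combined; here a single NQCC/EFG pair illustrates the arithmetic.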
NASA Astrophysics Data System (ADS)
Appleby, Graham; Rodríguez, José; Altamimi, Zuheir
2016-12-01
Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique be kept to an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating alongside these a weekly average range error for each of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed into the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that over the past two decades the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, in their treatment, or in both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.
Modified expression for bulb-tracer depletion—Effect on argon dating standards
Fleck, Robert J.; Calvert, Andrew T.
2014-01-01
⁴⁰Ar/³⁹Ar geochronology depends critically on well-calibrated standards, often traceable to first-principles K-Ar age calibrations using bulb-tracer systems. Tracer systems also provide precise standards for noble-gas studies and interlaboratory calibration. The exponential expression long used for calculating isotope tracer concentrations in K-Ar age dating and calibration of ⁴⁰Ar/³⁹Ar age standards provides a close approximation of those values, but it is not correct. Appropriate equations are derived that accurately describe the depletion of tracer reservoirs and the concentrations of sequential tracers. In the modified expression the depletion constant does not appear in the exponent; the exponent is simply the integer tracer number. Evaluation of the expressions demonstrates that the systematic error introduced through use of the original expression may be substantial where reservoir volumes are small and the resulting depletion constants are large. Traditional use of large reservoir-to-tracer volume ratios, and the resulting small depletion constants, has kept errors well below experimental uncertainties in most previous K-Ar and calibration studies. Use of the proper expression, however, permits use of volumes appropriate to the problems addressed.
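The distinction above can be made concrete: in the corrected form, each tracer draw leaves a fixed fraction of gas behind and the integer tracer number is the exponent, whereas the traditional form places the depletion constant itself in an exponential. A minimal sketch, assuming the depletion fraction f is the tracer volume divided by the total (reservoir plus tracer) volume, which is this example's assumption rather than the paper's notation:

```python
import math

def tracer_concentration_exact(c0, f, n):
    # After each draw the reservoir retains the fraction (1 - f);
    # the integer tracer number n, not the depletion constant, is the exponent.
    return c0 * (1.0 - f) ** n

def tracer_concentration_exponential(c0, f, n):
    # Traditional approximation with the depletion constant in the exponent.
    return c0 * math.exp(-f * n)

# For a small reservoir (large f) the two expressions diverge noticeably:
# with f = 0.05 and n = 20 the exponential form overestimates the remaining
# concentration by roughly 3%.
```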
Reverberant acoustic energy in auditoria that comprise systems of coupled rooms
NASA Astrophysics Data System (ADS)
Summers, Jason E.
2003-11-01
A frequency-dependent model for reverberant energy in coupled rooms is developed and compared with measurements for a 1:10 scale model and for Bass Hall, Ft. Worth, TX. At high frequencies, prior statistical-acoustics models are improved by geometrical-acoustics corrections for decay within sub-rooms and for energy transfer between sub-rooms. Comparisons of computational geometrical-acoustics predictions based on beam-axis tracing with scale model measurements reveal errors resulting from tail-correction that assumes constant quadratic growth of reflection density. Using ray tracing in the late part corrects this error. For mid-frequencies, the models are modified to account for wave effects at coupling apertures by including power transmission coefficients. Similarly, statistical-acoustics models are improved through more accurate estimates of power transmission. Scale model measurements are in accord with the predicted behavior. The edge-diffraction model is adapted to study transmission through apertures. Multiple-order scattering is shown, theoretically and experimentally, to be inaccurate due to neglect of slope diffraction. At low frequencies, perturbation models qualitatively explain scale model measurements. Measurements confirm the relation of coupling strength to the unperturbed pressure distribution on coupling surfaces. Measurements in Bass Hall exhibit effects of the coupled stage house. High-frequency predictions of the statistical-acoustics and geometrical-acoustics models, and the predictions for coupling apertures, all agree with measurements.
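The statistical-acoustics picture referenced above treats each sub-room's reverberant energy with its own decay rate plus coupling terms between rooms. A minimal two-room Euler integration; the decay and coupling rates below are illustrative choices, not values from the thesis:

```python
def coupled_room_decay(delta1, delta2, k12, k21, e1=1.0, e2=0.0,
                       dt=1e-3, steps=5000):
    """Integrate dE1/dt = -delta1*E1 + k21*E2 and
    dE2/dt = -delta2*E2 + k12*E1, a simple statistical-acoustics
    model of reverberant energy in two coupled rooms."""
    history = []
    for _ in range(steps):
        de1 = (-delta1 * e1 + k21 * e2) * dt
        de2 = (-delta2 * e2 + k12 * e1) * dt
        e1 += de1
        e2 += de2
        history.append((e1, e2))
    return history
```

With a slowly decaying second room (small delta2) feeding energy back into the first, the first room's decay curve develops the characteristic double slope of coupled-room reverberation.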
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tweardy, Matthew C.; McConchie, Seth; Hayward, Jason P.
2017-06-13
An extension of the point kinetics model is developed in this paper to describe the neutron multiplicity response of a bare uranium object under interrogation by an associated particle imaging deuterium-tritium (D-T) measurement system. This extended model is used to estimate the total neutron multiplication of the uranium. Both MCNPX-PoliMi simulations and data from active interrogation measurements of highly enriched and depleted uranium geometries are used to evaluate the potential of this method and to identify the sources of systematic error. The detection efficiency correction for measured coincidence response is identified as a large source of systematic error. If the detection process is not considered, results suggest that the method can estimate total multiplication to within 13% of the simulated value. Values for multiplicity constants in the point kinetics equations are sensitive to enrichment due to (n, xn) interactions by D-T neutrons and can introduce another significant source of systematic bias. This can theoretically be corrected if isotopic composition is known a priori. Finally, the spatial dependence of multiplication is also suspected of introducing further systematic bias for high multiplication uranium objects.
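For orientation on the quantity being estimated: in standard point kinetics, the total neutron multiplication of a subcritical object follows from the effective multiplication factor as M = 1/(1 - k_eff). This is the textbook relation, not the paper's extended model:

```python
def total_multiplication(k_eff):
    """Standard point-kinetics estimate of total neutron multiplication
    for a subcritical object with effective multiplication factor k_eff."""
    if not 0.0 <= k_eff < 1.0:
        raise ValueError("expects a subcritical system, 0 <= k_eff < 1")
    return 1.0 / (1.0 - k_eff)

# e.g. k_eff = 0.6 gives M = 2.5; the 13% figure quoted above is the
# systematic spread of the extended method's estimate around such values.
```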
Absolute Timing of the Crab Pulsar with RXTE
NASA Technical Reports Server (NTRS)
Rots, Arnold H.; Jahoda, Keith; Lyne, Andrew G.
2004-01-01
We have monitored the phase of the main X-ray pulse of the Crab pulsar with the Rossi X-ray Timing Explorer (RXTE) for almost eight years, since the start of the mission in January 1996. The absolute time of RXTE's clock is sufficiently accurate to allow this phase to be compared directly with the radio profile. Our monitoring observations of the pulsar took place bi-weekly (during the periods when it was at least 30 degrees from the Sun), and we correlated the data with radio timing ephemerides derived from observations made at Jodrell Bank. We have determined the phase of the X-ray main pulse for each observation with a typical error in the individual data points of 50 microseconds. The total ensemble is consistent with a phase that is constant over the monitoring period, with the X-ray pulse leading the radio pulse by 0.01025 ± 0.00120 period in phase, or 344 ± 40 microseconds in time. The error estimate is dominated by a systematic error of 40 microseconds, most likely constant, arising from uncertainties in the instrumental calibration of the radio data. The statistical error is 0.00015 period, or 5 microseconds. The separation of the main pulse and interpulse appears to be unchanging at time scales of a year or less, with an average value of 0.4001 ± 0.0002 period. There is no apparent variation in these values with energy over the 2-30 keV range. The lag between the radio and X-ray pulses may be constant in phase (i.e., rotational in nature) or constant in time (i.e., due to a pathlength difference). We are not (yet) able to distinguish between these two interpretations.
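As a consistency check of the quoted numbers, a phase lag converts to a time lag by multiplying by the pulse period. Assuming a Crab period of roughly 33.6 ms (a typical value for this epoch; the abstract does not state the period), 0.01025 period indeed comes out near 344 microseconds:

```python
CRAB_PERIOD_MS = 33.6  # approximate Crab pulsar period (assumed value)

def phase_lag_to_time_us(phase, period_ms=CRAB_PERIOD_MS):
    """Convert a phase lag (fraction of a period) into microseconds."""
    return phase * period_ms * 1000.0

# 0.01025 period -> roughly 344 microseconds, matching the quoted lag
```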
Enhancing the sensitivity to new physics in the tt¯ invariant mass distribution
NASA Astrophysics Data System (ADS)
Álvarez, Ezequiel
2012-08-01
We propose selection cuts on the LHC tt̄ production sample that should enhance the sensitivity to new-physics signals in the study of the tt̄ invariant mass distribution. We show that selecting events in which the tt̄ object has little transverse and large longitudinal momentum enlarges the quark-fusion fraction of the sample and therefore increases its sensitivity to new physics that couples to quarks and not to gluons. We find that systematic error bars play a fundamental role, and we assume a simple model for them. We check how an otherwise invisible new particle would become visible once the selection cuts enhance its resonance bump. A final realistic analysis should be done by the experimental groups with a correct evaluation of the systematic error bars.
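The proposed cuts amount to a simple event filter on the kinematics of the tt̄ system. A minimal sketch; the field names and the numerical thresholds (in GeV) are hypothetical, since the abstract quotes no cut values:

```python
def select_quark_fusion_enriched(events, pt_max=30.0, pz_min=500.0):
    """Keep events whose ttbar system has small transverse momentum and
    large longitudinal momentum; thresholds are illustrative only."""
    return [ev for ev in events
            if abs(ev["pt_ttbar"]) < pt_max and abs(ev["pz_ttbar"]) > pz_min]

# Quark-antiquark initial states carry unbalanced longitudinal momentum,
# so this selection enriches the quark-fusion fraction of the sample.
```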
New dimension analyses with error analysis for quaking aspen and black spruce
NASA Technical Reports Server (NTRS)
Woods, K. D.; Botkin, D. B.; Feiveson, A. H.
1987-01-01
Dimension analyses for black spruce in wetland stands and for trembling aspen are reported, including new approaches to error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the biomass error, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost-effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and the significance of nutrient conditions for spruce.
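Dimension-analysis predictive equations are conventionally power laws, biomass = a·D^b, fitted as a straight line in log-log space. A minimal sketch of that standard fitting form; the paper's actual predictors and coefficients are not reproduced here:

```python
import math

def fit_power_law(diameters, biomasses):
    """Least-squares fit of log(biomass) = log(a) + b*log(diameter),
    the usual form of dimension-analysis predictive equations."""
    xs = [math.log(d) for d in diameters]
    ys = [math.log(m) for m in biomasses]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = math.exp(ybar - b * xbar)
    return a, b
```

Residuals of such a fit in log space give the empirically derived prediction-error estimators the abstract mentions.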
Chiang, Kai-Wei; Duong, Thanh Trung; Liao, Jhen-Kai
2013-01-01
The integration of an Inertial Navigation System (INS) and the Global Positioning System (GPS) is common in mobile mapping and navigation applications to seamlessly determine the position, velocity, and orientation of the mobile platform. In most INS/GPS integrated architectures, the GPS is considered to be an accurate reference with which to correct for the systematic errors of the inertial sensors, which are composed of biases, scale factors and drift. However, the GPS receiver may produce abnormal pseudo-range errors mainly caused by ionospheric delay, tropospheric delay and the multipath effect. These errors degrade the overall position accuracy of an integrated system that uses conventional INS/GPS integration strategies such as loosely coupled (LC) and tightly coupled (TC) schemes. Conventional tightly coupled INS/GPS integration schemes apply the Klobuchar model and the Hopfield model to reduce pseudo-range delays caused by ionospheric delay and tropospheric delay, respectively, but do not address the multipath problem. However, the multipath effect (from reflected GPS signals) affects the position error far more significantly in a consumer-grade GPS receiver than in an expensive, geodetic-grade GPS receiver. To avoid this problem, a new integrated INS/GPS architecture is proposed. The proposed method is described and applied in a real-time integrated system with two integration strategies, namely, loosely coupled and tightly coupled schemes, respectively. To verify the effectiveness of the proposed method, field tests with various scenarios are conducted and the results are compared with a reliable reference system. PMID:23955434
NASA Astrophysics Data System (ADS)
Shimizu, N.; Aihara, H.; Epifanov, D.; Adachi, I.; Al Said, S.; Asner, D. M.; Aulchenko, V.; Aushev, T.; Ayad, R.; Babu, V.; Badhrees, I.; Bakich, A. M.; Bansal, V.; Barberio, E.; Bhardwaj, V.; Bhuyan, B.; Biswal, J.; Bobrov, A.; Bozek, A.; Bračko, M.; Browder, T. E.; Červenkov, D.; Chang, M.-C.; Chang, P.; Chekelian, V.; Chen, A.; Cheon, B. G.; Chilikin, K.; Cho, K.; Choi, S.-K.; Choi, Y.; Cinabro, D.; Czank, T.; Dash, N.; Di Carlo, S.; Doležal, Z.; Dutta, D.; Eidelman, S.; Fast, J. E.; Ferber, T.; Fulsom, B. G.; Garg, R.; Gaur, V.; Gabyshev, N.; Garmash, A.; Gelb, M.; Goldenzweig, P.; Greenwald, D.; Guido, E.; Haba, J.; Hayasaka, K.; Hayashii, H.; Hedges, M. T.; Hirose, S.; Hou, W.-S.; Iijima, T.; Inami, K.; Inguglia, G.; Ishikawa, A.; Itoh, R.; Iwasaki, M.; Jaegle, I.; Jeon, H. B.; Jia, S.; Jin, Y.; Joo, K. K.; Julius, T.; Kang, K. H.; Karyan, G.; Kawasaki, T.; Kiesling, C.; Kim, D. Y.; Kim, J. B.; Kim, S. H.; Kim, Y. J.; Kinoshita, K.; Kodyž, P.; Korpar, S.; Kotchetkov, D.; Križan, P.; Kroeger, R.; Krokovny, P.; Kulasiri, R.; Kuzmin, A.; Kwon, Y.-J.; Lange, J. S.; Lee, I. S.; Li, L. K.; Li, Y.; Li Gioi, L.; Libby, J.; Liventsev, D.; Masuda, M.; Merola, M.; Miyabayashi, K.; Miyata, H.; Mohanty, G. B.; Moon, H. K.; Mori, T.; Mussa, R.; Nakano, E.; Nakao, M.; Nanut, T.; Nath, K. J.; Natkaniec, Z.; Nayak, M.; Niiyama, M.; Nisar, N. K.; Nishida, S.; Ogawa, S.; Okuno, S.; Ono, H.; Pakhlova, G.; Pal, B.; Park, C. W.; Park, H.; Paul, S.; Pedlar, T. K.; Pestotnik, R.; Piilonen, L. E.; Popov, V.; Ritter, M.; Rostomyan, A.; Sakai, Y.; Salehi, M.; Sandilya, S.; Sato, Y.; Savinov, V.; Schneider, O.; Schnell, G.; Schwanda, C.; Seino, Y.; Senyo, K.; Sevior, M. E.; Shebalin, V.; Shibata, T.-A.; Shiu, J.-G.; Shwartz, B.; Sokolov, A.; Solovieva, E.; Starič, M.; Strube, J. F.; Sumisawa, K.; Sumiyoshi, T.; Tamponi, U.; Tanida, K.; Tenchini, F.; Trabelsi, K.; Uchida, M.; Uglov, T.; Unno, Y.; Uno, S.; Usov, Y.; Van Hulse, C.; Varner, G.; Vorobyev, V.; Vossen, A.; Wang, C. H.; Wang, M.-Z.; Wang, P.; Watanabe, M.; Widmann, E.; Won, E.; Yamashita, Y.; Ye, H.; Yuan, C. Z.; Zhang, Z. P.; Zhilich, V.; Zhukova, V.; Zhulanov, V.; Zupanc, A.
2018-02-01
We present a measurement of the Michel parameters of the τ lepton, η̄ and ξκ, in the radiative leptonic decay τ⁻ → ℓ⁻ ν_τ ν̄_ℓ γ using 711 fb⁻¹ of collision data collected with the Belle detector at the KEKB e⁺e⁻ collider. The Michel parameters are measured in an unbinned maximum likelihood fit to the kinematic distribution of e⁺e⁻ → τ⁺τ⁻ → (π⁺π⁰ν̄_τ)(ℓ⁻ν_τν̄_ℓγ), where ℓ = e or μ. The measured values of the Michel parameters are η̄ = -1.3 ± 1.5 ± 0.8 and ξκ = 0.5 ± 0.4 ± 0.2, where the first error is statistical and the second is systematic. This is the first measurement of these parameters. These results are consistent with the Standard Model predictions within their uncertainties, and constrain the coupling constants of the generalized weak interaction.
Dissipative quantum error correction and application to quantum sensing with trapped ions.
Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A
2017-11-28
Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
NASA Astrophysics Data System (ADS)
Liu, G. C.; Lu, Y. J.; Xie, L. Z.; Chen, X. L.; Zhao, Y. H.
2016-01-01
Context. Massive luminous red galaxies (LRGs) are believed to be evolving passively and can be used as cosmic chronometers to estimate the Hubble constant (the differential age method). However, different LRGs may be located in different environments. The environmental effects, if any, on the mean ages of LRGs, and the ages of the oldest LRGs at different redshift, may limit the use of the LRGs as cosmic chronometers. Aims: We aim to investigate the environmental and mass dependence of the formation of "quiescent" LRGs, selected from the Sloan Digital Sky Survey (SDSS) data release 8, and to pave the way for using LRGs as cosmic chronometers. Methods: Using the population synthesis software STARLIGHT, we derive the stellar populations in each LRG through full spectrum fitting and obtain the mean age distribution and the mean star formation history (SFH) of those LRGs. Results: We find that there is no apparent dependence of the mean age and the SFH of quiescent LRGs on their environment, while the ages of those quiescent LRGs depend weakly on their mass. We compare the SFHs of the SDSS LRGs with those obtained from a semi-analytical galaxy formation model and find that they are roughly consistent with each other if we consider the errors in the STARLIGHT-derived ages. We find that a small fraction of later star formation in LRGs leads to a systematic overestimation (~28%) of the Hubble constant by the differential age method, and the systematic errors in the STARLIGHT-derived ages may lead to an underestimation (~16%) of the Hubble constant. However, these errors can be corrected by a detailed study of the mean SFH of those LRGs and by calibrating the STARLIGHT-derived ages with those obtained independently by other methods.
Conclusions: The environmental effects do not play a significant role in the age estimates of quiescent LRGs; and the quiescent LRGs as a population can be used securely as cosmic chronometers, and the Hubble constant can be measured with high precision by using the differential age method.
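The differential age method used above rests on H(z) = -[1/(1+z)] dz/dt, so any systematic shift in the derived galaxy ages propagates directly into the Hubble constant. A minimal sketch with synthetic numbers; the 977.8 factor converts Gyr⁻¹ to km/s/Mpc:

```python
def hubble_from_ages(z1, t1_gyr, z2, t2_gyr):
    """Differential age (cosmic chronometer) method:
    H(z) = -1/(1+z) * dz/dt, evaluated at the midpoint of two
    narrow redshift bins with mean galaxy ages t1, t2 in Gyr."""
    z_mid = 0.5 * (z1 + z2)
    dz_dt = (z2 - z1) / (t2_gyr - t1_gyr)   # 1/Gyr (negative: age drops with z)
    h_per_gyr = -dz_dt / (1.0 + z_mid)      # 1/Gyr
    return h_per_gyr * 977.8                # 1/Gyr -> km/s/Mpc
```

If later star formation makes the fitted ages too young by a common factor, dt shrinks and H is overestimated, which is the ~28% bias discussed above.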
QCD Coupling from a Nonperturbative Determination of the Three-Flavor Λ Parameter
Bruno, Mattia; Brida, Mattia Dalla; Fritzsch, Patrick; ...
2017-09-08
We present a lattice determination of the Λ parameter in three-flavor QCD and the strong coupling at the Z pole mass. Computing the nonperturbative running of the coupling in the range from 0.2 to 70 GeV, and using experimental input values for the masses and decay constants of the pion and the kaon, we obtain Λ_MS(3) = 341(12) MeV. The nonperturbative running up to very high energies guarantees that systematic effects associated with perturbation theory are well under control. Using the four-loop prediction for Λ_MS(5)/Λ_MS(3) yields α_MS(5)(m_Z) = 0.11852(84).
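For orientation only: at one loop, the Λ parameter sets the scale of the running coupling through α_s(μ) = 1/(b₀ ln(μ²/Λ²)) with b₀ = (33 - 2n_f)/(12π). The paper's running is nonperturbative, so this sketch merely illustrates Λ's role:

```python
import math

def alpha_s_one_loop(mu_gev, lam_gev=0.341, nf=3):
    """One-loop running coupling alpha_s(mu) for a given Lambda parameter.
    Illustration only; the paper's determination is nonperturbative."""
    b0 = (33 - 2 * nf) / (12.0 * math.pi)
    return 1.0 / (b0 * math.log(mu_gev ** 2 / lam_gev ** 2))
```

With Λ(3) = 0.341 GeV this gives a coupling of order 0.1 at the Z mass and a steeply growing coupling toward low scales, the expected asymptotic-freedom pattern.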
DOE Office of Scientific and Technical Information (OSTI.GOV)
Saravanan, Ramalingam
2011-10-30
During the course of this project, we have accomplished the following: a) Carried out studies of climate changes in the past using a hierarchy of intermediate coupled models (Chang et al., 2008; Wan et al., 2009; Wen et al., 2010a,b) b) Completed the development of a Coupled Regional Climate Model (CRCM; Patricola et al., 2011a,b) c) Carried out studies testing hypotheses on the origin of systematic errors in the CRCM (Patricola et al., 2011a,b) d) Carried out studies of the impact of air-sea interaction on hurricanes, in the context of barrier layer interactions (Balaguru et al)
Zubenko, Dmitry; Tsentalovich, Yuri; Lebedeva, Nataly; Kirilyuk, Igor; Roshchupkina, Galina; Zhurko, Irina; Reznikov, Vladimir; Marque, Sylvain R A; Bagryanskaya, Elena
2006-08-04
Time-resolved chemically induced dynamic nuclear polarization (TR-CIDNP) and laser flash photolysis (LFP) techniques have been used to measure rate constants for coupling between acrylate-type radicals and a series of newly synthesized stable imidazolidine N-oxyl radicals. The carbon-centered radicals under investigation were generated by photolysis of their corresponding ketone precursors RC(O)R (R = C(CH₃)₂-C(O)OCH₃ and CH(CH₃)-C(O)-OtBu) in the presence of stable nitroxides. The coupling rate constants k_c for modeling studies of nitroxide-mediated polymerization (NMP) experiments were determined, and the influence of steric and electronic factors on k_c values was addressed by using a Hammett linear free energy relationship. The systematic changes in k_c due to the varied steric (E_s,n) and electronic (σ_L,n) characters of the substituents are well described by the biparameter equation log(k_c/M⁻¹ s⁻¹) = 3.52σ_L,n + 0.47E_s,n + 10.62. Hence, k_c decreases with increasing steric demand and increases with increasing electron-withdrawing character of the substituents on the nitroxide.
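The biparameter correlation quoted above can be applied directly to predict coupling rate constants. A minimal sketch; the substituent-parameter values passed in are hypothetical, only the coefficients come from the abstract:

```python
def log_kc(sigma_l, es):
    """Biparameter Hammett-type correlation from the abstract:
    log(kc / M^-1 s^-1) = 3.52*sigma_L + 0.47*Es + 10.62."""
    return 3.52 * sigma_l + 0.47 * es + 10.62

# Larger electron-withdrawing character (sigma_L) raises kc, while
# bulkier substituents (more negative Es) lower it.
```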
A highly accurate ab initio potential energy surface for methane.
Owens, Alec; Yurchenko, Sergei N; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter
2016-09-14
A new nine-dimensional potential energy surface (PES) for methane has been generated using state-of-the-art ab initio theory. The PES is based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set limit and incorporates a range of higher-level additive energy corrections. These include core-valence electron correlation, higher-order coupled cluster terms beyond perturbative triples, scalar relativistic effects, and the diagonal Born-Oppenheimer correction. Sub-wavenumber accuracy is achieved for the majority of experimentally known vibrational energy levels, with the four fundamentals of ¹²CH₄ reproduced with a root-mean-square error of 0.70 cm⁻¹. The computed ab initio equilibrium C-H bond length is in excellent agreement with previous values, despite pure rotational energies displaying minor systematic errors as J (rotational excitation) increases. It is shown that these errors can be significantly reduced by adjusting the equilibrium geometry. The PES represents the most accurate ab initio surface to date and will serve as a good starting point for empirical refinement.
Measuring cosmological parameters
Freedman, Wendy L.
1998-01-01
In this review, the status of measurements of the matter density (Ωm), the vacuum energy density or cosmological constant (ΩΛ), the Hubble constant (H0), and the ages of the oldest measured objects (t0) are summarized. Three independent types of methods for measuring the Hubble constant are considered: the measurement of time delays in multiply imaged quasars, the Sunyaev–Zel’dovich effect in clusters, and Cepheid-based extragalactic distances. Many recent independent dynamical measurements are yielding a low value for the matter density (Ωm ≈ 0.2–0.3). A wide range of Hubble constant measurements appear to be converging in the range of 60–80 km/sec per megaparsec. Areas where future improvements are likely to be made soon are highlighted—in particular, measurements of anisotropies in the cosmic microwave background. Particular attention is paid to sources of systematic error and the assumptions that underlie many of the measurement methods. PMID:9419315
Synchronized Chaos in Geophysical Fluid Dynamics and in the Predictive Modeling of Natural Systems
NASA Astrophysics Data System (ADS)
Duane, Gregory S.
2008-03-01
The ubiquitous phenomenon of synchronization among regular oscillators in Nature has been shown, in the past two decades, to extend to chaotic systems. Despite sensitive dependence on initial conditions, two chaotic systems will commonly fall into synchronized motion along their strange attractors when only some of the many degrees of freedom of one system are coupled to corresponding variables in the other. In geophysical fluid models, synchronization can mediate scale interactions, so that coupling of degrees of freedom that describe medium-scale components of the flow can result in synchronization, or partial synchronization, at all scales. Chaos synchronization has been used to interpret non-local "teleconnection" patterns in the Earth's climate system and to predict new ones. In the realm of practical meteorology, the fact that two PDE systems, conceived as "truth" and "model", respectively, can be made to synchronize when coupled at only a discrete set of points explains how observations at a discrete set of weather stations can be sufficient for weather prediction by a synchronously coupled model. Minimizing synchronization error leads to general recipes for assimilation of observed data into a running model that systematize the treatment of nonlinearities in the dynamical equations. Equations can generally be added to adapt parameters as well as states as the model is running, so that the model "learns". The synchronization view of predictive modelling extends to any translationally invariant system, that is, any PDE with constant coefficients, the general form of physical theories. The reliance on synchronicity as an organizing principle in Nature, alternative to causality, has philosophical roots in the collaboration of Carl Jung and Wolfgang Pauli, on the one hand, and in traditions outside of European science, on the other.
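Synchronization of two chaotic systems coupled through a single variable can be demonstrated with a pair of Lorenz systems, the standard toy model for this phenomenon. The coupling gain and step size below are illustrative choices, not taken from the talk:

```python
def lorenz_step(state, drive_x=None, k=0.0, dt=0.002,
                sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One Euler step of the Lorenz system; if drive_x is given, the x
    equation is diffusively coupled to the drive signal with gain k."""
    x, y, z = state
    coupling = k * (drive_x - x) if drive_x is not None else 0.0
    dx = sigma * (y - x) + coupling
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def synchronization_error(steps=20000, k=10.0):
    """Run a free drive system and a response system coupled only through
    x, starting far apart, and return their final separation."""
    drive = (1.0, 1.0, 1.0)
    response = (-5.0, 7.0, 20.0)   # very different initial condition
    for _ in range(steps):
        new_drive = lorenz_step(drive)
        response = lorenz_step(response, drive_x=drive[0], k=k)
        drive = new_drive
    return sum((a - b) ** 2 for a, b in zip(drive, response)) ** 0.5
```

With the coupling on, the separation collapses toward zero despite the chaotic dynamics of both systems, even though only one of the three variables is shared, mirroring the partial-coupling synchronization described above.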
Christ, Norman H.; Flynn, Jonathan M.; Izubuchi, Taku; ...
2015-03-10
We calculate the B-meson decay constants f_B, f_Bs, and their ratio in unquenched lattice QCD using domain-wall light quarks and relativistic b quarks. We use gauge-field ensembles generated by the RBC and UKQCD collaborations using the domain-wall fermion action and Iwasaki gauge action with three flavors of light dynamical quarks. We analyze data at two lattice spacings of a ≈ 0.11, 0.086 fm with unitary pion masses as light as M_π ≈ 290 MeV; this enables us to control the extrapolation to the physical light-quark masses and continuum. For the b quarks we use the anisotropic clover action with the relativistic heavy-quark interpretation, such that discretization errors from the heavy-quark action are of the same size as from the light-quark sector. We renormalize the lattice heavy-light axial-vector current using a mostly nonperturbative method in which we compute the bulk of the matching factor nonperturbatively, with a small correction, that is close to unity, in lattice perturbation theory. We also improve the lattice heavy-light current through O(α_s a). We extrapolate our results to the physical light-quark masses and continuum using SU(2) heavy-meson chiral perturbation theory, and provide a complete systematic error budget. We obtain f_B0 = 199.5(12.6) MeV, f_B+ = 195.6(14.9) MeV, f_Bs = 235.4(12.2) MeV, f_Bs/f_B0 = 1.197(50), and f_Bs/f_B+ = 1.223(71), where the errors are statistical and total systematic added in quadrature. Finally, these results are in good agreement with other published results and provide an important independent cross-check of other three-flavor determinations of B-meson decay constants using staggered light quarks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Christ, Norman H.; Flynn, Jonathan M.; Izubuchi, Taku
2015-03-10
We calculate the B-meson decay constants f_B, f_Bs, and their ratio in unquenched lattice QCD using domain-wall light quarks and relativistic b quarks. We use gauge-field ensembles generated by the RBC and UKQCD collaborations using the domain-wall fermion action and Iwasaki gauge action with three flavors of light dynamical quarks. We analyze data at two lattice spacings of a ≈ 0.11, 0.086 fm with unitary pion masses as light as M_π ≈ 290 MeV; this enables us to control the extrapolation to the physical light-quark masses and continuum. For the b quarks we use the anisotropic clover action with the relativistic heavy-quark interpretation, such that discretization errors from the heavy-quark action are of the same size as from the light-quark sector. We renormalize the lattice heavy-light axial-vector current using a mostly nonperturbative method in which we compute the bulk of the matching factor nonperturbatively, with a small correction, that is close to unity, in lattice perturbation theory. We also improve the lattice heavy-light current through O(α_s a). We extrapolate our results to the physical light-quark masses and continuum using SU(2) heavy-meson chiral perturbation theory, and provide a complete systematic error budget. We obtain f_B0 = 196.2(15.7) MeV, f_B+ = 195.4(15.8) MeV, f_Bs = 235.4(12.2) MeV, f_Bs/f_B0 = 1.193(59), and f_Bs/f_B+ = 1.220(82), where the errors are statistical and total systematic added in quadrature. In addition, these results are in good agreement with other published results and provide an important independent cross-check of other three-flavor determinations of B-meson decay constants using staggered light quarks.
Gottfried Sum Rule in QCD Nonsinglet Analysis of DIS Fixed-Target Data
NASA Astrophysics Data System (ADS)
Kotikov, A. V.; Krivokhizhin, V. G.; Shaikhatdenov, B. G.
2018-03-01
Deep-inelastic-scattering data from fixed-target experiments on the structure function F₂ were analyzed in the valence-quark approximation at the next-to-next-to-leading-order accuracy level in the strong coupling constant. In this analysis, parton distributions were parametrized by employing information from the Gottfried sum rule. The strong coupling constant was found to be α_s(M_Z²) = 0.1180 ± 0.0020 (total experimental error), which is in perfect agreement with the world-averaged value from an updated Particle Data Group (PDG) report, α_s^PDG(M_Z²) = 0.1181 ± 0.0011. Also, the value of ⟨x⟩_(u-d) = 0.187 ± 0.021 found for the second moment of the difference between the u- and d-quark distributions complies very well with the most recent lattice result ⟨x⟩_(u-d)^LATTICE = 0.208 ± 0.024.
Magnetic effect in the test of the weak equivalence principle using a rotating torsion pendulum
NASA Astrophysics Data System (ADS)
Zhu, Lin; Liu, Qi; Zhao, Hui-Hui; Yang, Shan-Qing; Luo, Pengshun; Shao, Cheng-Gang; Luo, Jun
2018-04-01
The high precision test of the weak equivalence principle (WEP) using a rotating torsion pendulum requires thorough analysis of systematic effects. Here we investigate one of the main systematic effects, the coupling of the ambient magnetic field to the pendulum. It is shown that the dominant term, the interaction between the average magnetic field and the magnetic dipole of the pendulum, is decreased by a factor of 1.1 × 10^4 with multi-layer magnetic shield shells. The shield shells reduce the magnetic field to 1.9 × 10^-9 T in the transverse direction, so that the dipole-interaction-limited WEP test is expected at η ≲ 10^-14 for a pendulum dipole less than 10^-9 A m^2. The high-order effect, the coupling of the magnetic field gradient to the magnetic quadrupole of the pendulum, would also contribute to the systematic errors for a test precision down to η ~ 10^-14.
Temperature equilibration rate with Fermi-Dirac statistics.
Brown, Lowell S; Singleton, Robert L
2007-12-01
We calculate analytically the electron-ion temperature equilibration rate in a fully ionized, weakly to moderately coupled plasma, using an exact treatment of the Fermi-Dirac electrons. The temperature is sufficiently high so that the quantum-mechanical Born approximation to the scattering is valid. It should be emphasized that we do not build a model of the energy exchange mechanism, but rather, we perform a systematic first principles calculation of the energy exchange. At the heart of this calculation lies the method of dimensional continuation, a technique that we borrow from quantum field theory and use in a different fashion to regulate the kinetic equations in a consistent manner. We can then perform a systematic perturbation expansion and thereby obtain a finite first-principles result to leading and next-to-leading order. Unlike model building, this systematic calculation yields an estimate of its own error and thus prescribes its domain of applicability. The calculational error is small for a weakly to moderately coupled plasma, for which our result is nearly exact. It should also be emphasized that our calculation becomes unreliable for a strongly coupled plasma, where the perturbative expansion that we employ breaks down, and one must then utilize model building and computer simulations. Besides providing different and potentially useful results, we use this calculation as an opportunity to explain the method of dimensional continuation in a pedagogical fashion. Interestingly, in the regime of relevance for many inertial confinement fusion experiments, the degeneracy corrections are comparable in size to the subleading quantum correction below the Born approximation. For consistency, we therefore present this subleading quantum-to-classical transition correction in addition to the degeneracy correction.
The mathematical origins of the kinetic compensation effect: 2. The effect of systematic errors.
Barrie, Patrick J
2012-01-07
The kinetic compensation effect states that there is a linear relationship between Arrhenius parameters ln A and E for a family of related processes. It is a widely observed phenomenon in many areas of science, notably heterogeneous catalysis. This paper explores mathematical, rather than physicochemical, explanations for the compensation effect in certain situations. Three different topics are covered theoretically and illustrated by examples. Firstly, the effect of systematic errors in experimental kinetic data is explored, and it is shown that these create apparent compensation effects. Secondly, analysis of kinetic data when the Arrhenius parameters depend on another parameter is examined. In the case of temperature programmed desorption (TPD) experiments when the activation energy depends on surface coverage, it is shown that a common analysis method induces a systematic error, causing an apparent compensation effect. Thirdly, the effect of analysing the temperature dependence of an overall rate of reaction, rather than a rate constant, is investigated. It is shown that this can create an apparent compensation effect, but only under some conditions. This result is illustrated by a case study for a unimolecular reaction on a catalyst surface. Overall, the work highlights the fact that, whenever a kinetic compensation effect is observed experimentally, the possibility of it having a mathematical origin should be carefully considered before any physicochemical conclusions are drawn.
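The first mechanism (systematic errors creating an apparent compensation effect) can be shown with a minimal numerical sketch; all numbers are invented for illustration. A single true Arrhenius process is "measured" by a series of experiments, each with a different systematic thermometry offset; fitting each biased data set yields (E, ln A) pairs that fall on a near-perfect straight line even though nothing physical varies:

```python
import numpy as np

R = 8.314          # J mol^-1 K^-1
E_true = 80e3      # J/mol, single true activation energy (illustrative)
lnA_true = 23.0    # single true pre-exponential (illustrative)

T_actual = np.linspace(300.0, 400.0, 8)       # true sample temperatures
lnk = lnA_true - E_true / (R * T_actual)      # exact rate data

fits = []
for dT in np.linspace(-8.0, 8.0, 9):          # systematic thermometry offsets
    T_believed = T_actual + dT                # analyst's (biased) temperatures
    slope, intercept = np.polyfit(1.0 / T_believed, lnk, 1)
    fits.append((-slope * R, intercept))      # (E_fit, lnA_fit)

E_fit, lnA_fit = np.array(fits).T
r = np.corrcoef(E_fit, lnA_fit)[0, 1]         # apparent "compensation" line
print(f"corr(E_fit, lnA_fit) = {r:.4f}")
```

The correlation is essentially 1 because each biased fit must still reproduce the data near the mean experimental temperature, which forces lnA_fit ≈ const + E_fit/(R T_m), i.e. exactly the form of a compensation law.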
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.
Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang
2016-06-22
An inertial navigation system (INS) is widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future, with a predicted accuracy of 5 × 10^-6 °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever-arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests show that the position accuracy over five days of inertial navigation is improved by about 8% with the proposed calibration method, and by at least 20% when the position accuracy of the atomic-gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, with more error sources and higher-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic-gyro INSs.
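The 51-state filter itself is far too large to reproduce here, but the core idea, augmenting the state with constant sensor-error parameters and letting a Kalman filter estimate them from redundant measurements, can be sketched with a hypothetical one-state example (a constant gyro bias observed through noisy stationary readings; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)

b_true = 0.04        # deg/h, hypothetical constant gyro bias
sigma_z = 0.5        # deg/h, measurement noise
z = b_true + sigma_z * rng.standard_normal(2000)   # stationary readings

# Scalar Kalman filter for a constant state: x_{k+1} = x_k, z_k = x_k + v_k
x_hat, P = 0.0, 1.0  # initial estimate and its variance
Rn = sigma_z**2
for zk in z:
    K = P / (P + Rn)                    # Kalman gain
    x_hat = x_hat + K * (zk - x_hat)    # measurement update
    P = (1.0 - K) * P                   # variance update

print(f"estimated bias = {x_hat:.4f} deg/h (true {b_true})")
```

For a constant state with no process noise this filter converges to the sample mean; the paper's filter extends the same machinery to 51 coupled error states excited by turntable maneuvers.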
Zavala, Baltazar; Pogosyan, Alek; Ashkan, Keyoumars; Zrinzo, Ludvic; Foltynie, Thomas; Limousin, Patricia; Brown, Peter
2014-01-01
Monitoring and evaluating movement errors to guide subsequent movements is a critical feature of normal motor control. Previously, we showed that the postmovement increase in electroencephalographic (EEG) beta power over the sensorimotor cortex reflects neural processes that evaluate motor errors consistent with Bayesian inference (Tan et al., 2014). Whether such neural processes are limited to this cortical region or involve the basal ganglia is unclear. Here, we recorded EEG over the cortex and local field potential (LFP) activity in the subthalamic nucleus (STN) from electrodes implanted in patients with Parkinson's disease, while they moved a joystick-controlled cursor to visual targets displayed on a computer screen. After movement offsets, we found increased beta activity in both local STN LFP and sensorimotor cortical EEG and in the coupling between the two, which was affected by both error magnitude and its contextual saliency. The postmovement increase in the coupling between STN and cortex was dominated by information flow from sensorimotor cortex to STN. However, an information drive appeared from STN to sensorimotor cortex in the first phase of the adaptation, when a constant rotation was applied between joystick inputs and cursor outputs. The strength of the STN to cortex drive correlated with the degree of adaption achieved across subjects. These results suggest that oscillatory activity in the beta band may dynamically couple the sensorimotor cortex and basal ganglia after movements. In particular, beta activity driven from the STN to cortex indicates task-relevant movement errors, information that may be important in modifying subsequent motor responses. PMID:25505327
NASA Astrophysics Data System (ADS)
Zarycz, M. Natalia C.; Provasi, Patricio F.; Sauer, Stephan P. A.
2015-12-01
It is investigated whether the number of excited (pseudo)states can be truncated in the sum-over-states expression for indirect spin-spin coupling constants (SSCCs), which is used in the Contributions from Localized Orbitals within the Polarization Propagator Approach and Inner Projections of the Polarization Propagator (IPPP-CLOPPA) approach to analyzing SSCCs in terms of localized orbitals. As a test set we have studied nine simple compounds: CH4, NH3, H2O, SiH4, PH3, SH2, C2H2, C2H4, and C2H6. The excited (pseudo)states were obtained from time-dependent density functional theory (TD-DFT) calculations with the B3LYP exchange-correlation functional and the specialized core-property basis set aug-cc-pVTZ-J. We investigated both how the calculated coupling constants depend on the number of (pseudo)states included in the summation and whether the summation can be truncated in a systematic way at a smaller number of states and extrapolated to the total number of (pseudo)states for the given one-electron basis set. We find that this is possible and that for some of the couplings it is sufficient to include only about 30% of the excited (pseudo)states.
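The truncate-and-extrapolate strategy can be illustrated generically; this is not the paper's actual extrapolation formula, only a hypothetical analogue in which a slowly convergent sum-over-states is approximated by partial sums and the leading 1/N truncation error is removed by Richardson extrapolation:

```python
import numpy as np

# Toy "sum over states": S = sum_{n>=1} 1/n^2 = pi^2/6, whose partial sums
# obey S_N = S - 1/N + O(1/N^2), mimicking a property that converges slowly
# in the number of included (pseudo)states.
def S(N):
    n = np.arange(1, N + 1)
    return np.sum(1.0 / n**2)

N = 50
S_exact = np.pi**2 / 6
S_trunc = S(N)                    # plain truncation: error ~ 1/N
S_extrap = 2 * S(2 * N) - S(N)    # Richardson step: leading 1/N error cancels

print(f"truncation error   = {abs(S_trunc - S_exact):.2e}")
print(f"extrapolated error = {abs(S_extrap - S_exact):.2e}")
```

A single extrapolation step here gains roughly two orders of magnitude in accuracy over the bare truncation, which is the kind of payoff that makes including only a fraction of the (pseudo)states viable.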
Cladé, Pierre; de Mirandes, Estefania; Cadoret, Malo; Guellati-Khélifa, Saïda; Schwob, Catherine; Nez, François; Julien, Lucile; Biraben, François
2006-01-27
We report an accurate measurement of the recoil velocity of 87Rb atoms based on Bloch oscillations in a vertically accelerated optical lattice. We transfer about 900 recoil momenta with an efficiency of 99.97% per recoil. A set of 72 measurements of the recoil velocity, each with a relative uncertainty of about 33 ppb in 20 min integration time, leads to a determination of the fine structure constant with a statistical relative uncertainty of 4.4 ppb. A detailed analysis of the different systematic errors yields a relative uncertainty of 6.7 ppb. The deduced value of α^-1 is 137.035 998 78(91).
A New Proposal to Redefine Kilogram by Measuring the Planck Constant Based on Inertial Mass
NASA Astrophysics Data System (ADS)
Liu, Yongmeng; Wang, Dawei
2018-04-01
A novel method to measure the Planck constant based on inertial mass is proposed here, distinguishing it from the conventional Kibble balance experiment, which is based on gravitational mass. The kilogram unit is linked to the Planck constant by calculating the difference of the parameters (resistance, voltage, velocity and time) measured in a two-mode experiment: an unloaded-mass mode and a loaded-mass mode. In principle, all parameters measured in this experiment can reach high accuracy, as in the Kibble balance experiment. This method has the advantage that some systematic errors are eliminated in the difference calculation of the measurements. In addition, the method is insensitive to air buoyancy, and the alignment work in the experiment is easy. Finally, the initial design of the apparatus is presented.
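The benefit of the two-mode difference can be shown with a deliberately simplified model; the linear readout, symbols and numbers below are invented stand-ins, not the paper's apparatus. If each mode's readout carries the same unknown systematic offset s, the offset drops out of the mode difference and the added mass is recovered:

```python
# Hypothetical linear readout: M(m) = k*m + s, with k a known calibration
# factor and s an unknown systematic offset common to both modes.
k = 2.5             # readout units per kilogram (assumed known)
s = 0.137           # unknown systematic offset (same in both modes)
m0, m = 10.0, 1.0   # tare mass and added test mass, kg

M_unloaded = k * m0 + s         # mode 1: unloaded
M_loaded = k * (m0 + m) + s     # mode 2: loaded

m_est = (M_loaded - M_unloaded) / k   # the offset s cancels in the difference
print(f"recovered mass = {m_est} kg")
```

Any error source that enters both modes identically cancels the same way, which is the point of the differential two-mode scheme.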
Towards a multiconfigurational method of increments
NASA Astrophysics Data System (ADS)
Fertitta, E.; Koch, D.; Paulus, B.; Barcza, G.; Legeza, Ö.
2018-06-01
The method of increments (MoI) allows one to successfully calculate cohesive energies of bulk materials with high accuracy, but it encounters difficulties when calculating dissociation curves. The reason is that its standard formalism is based on a single Hartree-Fock (HF) configuration whose orbitals are localised and used for the many-body expansion. In situations where HF does not allow a size-consistent description of the dissociation, the MoI cannot be guaranteed to yield proper results either. Herein, we address the problem by employing a size-consistent multiconfigurational reference for the MoI formalism. This leads to a matrix equation where a coupling derived from the reference itself is employed. In principle, such an approach allows one to evaluate approximate values for the ground- as well as excited-state energies. While the latter are accurate close to the avoided crossing only, the ground state results are very promising for the whole dissociation curve, as shown by comparison with density matrix renormalisation group benchmarks. We tested this two-state constant-coupling MoI on beryllium rings of different sizes and studied the error introduced by the constant coupling.
Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals
NASA Technical Reports Server (NTRS)
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2012-01-01
Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1-4% (3-12 K) over desert and 1-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5-2% (2-6 K). In particular, at 85.0/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.
Generation of Rayleigh waves into mortar and concrete samples.
Piwakowski, B; Fnine, Abdelilah; Goueygou, M; Buyle-Bodin, F
2004-04-01
The paper deals with a non-destructive method for characterizing the degraded cover of concrete structures using high-frequency ultrasound. In a preliminary study, the authors emphasized the interest of using higher-frequency Rayleigh waves (within the 0.2-1 MHz frequency band) for on-site inspection of concrete structures with subsurface damage. The present study continues that work and aims at optimizing the generation and reception of Rayleigh waves in mortar and concrete by means of wedge transducers. This is performed experimentally by checking the influence of the wedge material and coupling agent on the surface wave parameters. The best wedge/coupling combination is selected by searching separately for the best wedge material and the best coupling material. Three wedge materials and five coupling agents were tested. For each setup, the five parameters obtained from the surface wave measurement (the frequency band, the maximal available central frequency, the group velocity error and its standard deviation, and the error in the velocity dispersion characteristic) were investigated and ranked as a function of the wedge material and the coupling agent. The selection criteria were chosen so as to minimize the absorption of both materials, the randomness of measurements, and the systematic error of the group velocity and of the dispersion characteristic. Among the three tested wedge materials, Teflon was found to be the best. The investigation of the coupling agents shows that gel-type materials are the best solution, whereas "thick" materials of higher viscosity were the worst. The results also show that the use of a thin plastic film combined with the coupling agent even increases the bandwidth and decreases the uncertainty of measurements.
Short-range optical air data measurements for aircraft control using rotational Raman backscatter.
Fraczek, Michael; Behrendt, Andreas; Schmitt, Nikolaus
2013-07-15
A first laboratory prototype of a novel concept for a short-range optical air data system for aircraft control and safety was built. The measurement methodology was introduced in [Appl. Opt. 51, 148 (2012)] and is based on techniques known from lidar detecting elastic and Raman backscatter from air. A wide range of flight-critical parameters, such as air temperature, molecular number density and pressure can be measured as well as data on atmospheric particles and humidity can be collected. In this paper, the experimental measurement performance achieved with the first laboratory prototype using 532 nm laser radiation of a pulse energy of 118 mJ is presented. Systematic measurement errors and statistical measurement uncertainties are quantified separately. The typical systematic temperature, density and pressure measurement errors obtained from the mean of 1000 averaged signal pulses are small amounting to < 0.22 K, < 0.36% and < 0.31%, respectively, for measurements at air pressures varying from 200 hPa to 950 hPa but constant air temperature of 298.95 K. The systematic measurement errors at air temperatures varying from 238 K to 308 K but constant air pressure of 946 hPa are even smaller and < 0.05 K, < 0.07% and < 0.06%, respectively. A focus is put on the system performance at different virtual flight altitudes as a function of the laser pulse energy. The virtual flight altitudes are precisely generated with a custom-made atmospheric simulation chamber system. In this context, minimum laser pulse energies and pulse numbers are experimentally determined, which are required using the measurement system, in order to meet measurement error demands for temperature and pressure specified in aviation standards. The aviation error margins limit the allowable temperature errors to 1.5 K for all measurement altitudes and the pressure errors to 0.1% for 0 m and 0.5% for 13000 m. 
With regard to 100-pulse-averaged temperature measurements, the pulse energy using 532 nm laser radiation has to be larger than 11 mJ (35 mJ) for 1-σ (3-σ) uncertainties at all measurement altitudes. For 100-pulse-averaged pressure measurements, the laser pulse energy has to be larger than 95 mJ (355 mJ), respectively. Based on these experimental results, the laser pulse energy requirements are also extrapolated to the ultraviolet wavelength region, resulting in significantly lower pulse energy demands of 1.5-3 mJ (4-10 mJ) and 12-27 mJ (45-110 mJ) for 1-σ (3-σ) 100-pulse-averaged temperature and pressure measurements, respectively.
NASA Astrophysics Data System (ADS)
Parnis, J. Mark; Mackay, Donald; Harner, Tom
2015-06-01
Henry's law constants (H) and octanol-air partition coefficients (KOA) for polycyclic aromatic hydrocarbons (PAHs) and selected nitrogen-, oxygen- and sulfur-containing derivatives have been computed using the COSMO-RS method between -5 and 40 °C in 5 °C intervals. The accuracy of the estimation was assessed by comparison of COSMOtherm values with published experimental temperature-dependence data for these and similar PAHs. COSMOtherm log H estimates with temperature variation for parent PAHs are shown to have a root-mean-square (RMS) error of 0.38, based on available validation data. Estimates of O-, N- and S-substituted derivative log H values are found to have RMS errors of 0.30 at 25 °C. Log KOA estimates with temperature variation from COSMOtherm are shown to be strongly correlated with experimental values for a small set of unsubstituted PAHs, but with a systematic underestimation and an associated RMS error of 1.11. A similar RMS error of 1.64 was found for COSMO-RS estimates of a group of critically evaluated log KOA values at room temperature. Validation demonstrates that COSMOtherm estimates of H and KOA are of sufficient accuracy to be used for property screening and preliminary environmental risk assessment, and perform very well for modeling the influence of temperature on partitioning behavior in the temperature range -5 to 40 °C. Temperature-dependent shifts of up to 2 log units in log H and one log unit in log KOA are predicted for PAH species over the range -5 to 40 °C. Within the family of PAH molecules, COSMO-RS is sufficiently accurate to serve as a source of estimates for modeling purposes, following corrections for the systematic underestimation of KOA. Average changes in the values of log H and log KOA upon substitution are given for various PAH substituent categories, with the most significant shifts being associated with the ionizing nitro functionality and keto groups.
NASA Astrophysics Data System (ADS)
Vieira, Daniel; Krems, Roman
2017-04-01
Fine-structure transitions in collisions of O(3Pj) with atomic hydrogen are an important cooling mechanism in the interstellar medium; knowledge of the rate coefficients for these transitions has a wide range of astrophysical applications. The accuracy of the theoretical calculation is limited by inaccuracies in the ab initio interaction potentials used in the coupled-channel quantum scattering calculations from which the rate coefficients are obtained. In this work we use the latest ab initio results for the O(3Pj) + H interaction potentials to improve on previous calculations of the rate coefficients. We further present a machine-learning technique based on Gaussian process regression to determine the sensitivity of the rate coefficients to variations of the underlying adiabatic interaction potentials. To account for the inaccuracy inherent in the ab initio calculations, we compute error bars for the rate coefficients corresponding to 20% variation in each of the interaction potentials. We obtain these error bars by fitting a Gaussian process model to a data set of potential curves and rate constants. We use the fitted model for sensitivity analysis, determining the relative importance of individual adiabatic potential curves to a given fine-structure transition.
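A minimal version of this Gaussian-process sensitivity analysis can be sketched as follows; the rate-coefficient model, kernel choice, and all parameters are hypothetical stand-ins, not the authors' data. A GP is fit to rate constants computed on a grid of potential-scaling factors (±20%, as in the abstract), and sensitivities are then read off from finite differences of the GP mean:

```python
import numpy as np

# Toy "rate coefficient" as a function of two potential-scaling factors
# (lam1, lam2); in this invented model the first potential dominates.
def rate(lam1, lam2):
    return np.exp(-2.0 * (lam1 - 1.0)) * (1.0 + 0.1 * (lam2 - 1.0))

# Training grid: +/-20% variation of each potential
g = np.linspace(0.8, 1.2, 5)
X = np.array([(a, b) for a in g for b in g])
y = rate(X[:, 0], X[:, 1])

# Squared-exponential kernel GP with a small jitter for numerical stability
def kernel(A, B, ell=0.2):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

K = kernel(X, X) + 1e-6 * np.eye(len(X))
alpha = np.linalg.solve(K, y)

def gp_mean(Xs):
    return kernel(np.atleast_2d(Xs), X) @ alpha

# Sensitivities of the GP surrogate at the nominal point (1, 1)
h = 1e-3
s1 = (gp_mean([[1 + h, 1]]) - gp_mean([[1 - h, 1]]))[0] / (2 * h)
s2 = (gp_mean([[1, 1 + h]]) - gp_mean([[1, 1 - h]]))[0] / (2 * h)
print(f"d(rate)/d(lam1) ~ {s1:.3f}, d(rate)/d(lam2) ~ {s2:.3f}")
```

The surrogate correctly identifies lam1 as the dominant input, which is the kind of ranking of adiabatic curves the authors extract, and also allows cheap propagation of the ±20% potential variation into rate-coefficient error bars.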
NASA Astrophysics Data System (ADS)
Hauser, Andreas W.; Filatov, Michael; Ernst, Wolfgang E.
2013-06-01
We predict He-droplet-induced changes of the isotropic HFS constant a_{HFS} of the alkali-metal atoms M = Li, Na, K and Rb on the basis of a model description. Optically detected electron spin resonance spectroscopy has allowed high resolution measurements that show the influence of the helium droplet and its size on the unpaired electron spin density at the alkali nucleus. Our theoretical approach to describe this dependence is based on a combination of two well established techniques: Results of relativistic coupled-cluster calculations on the alkali-He dimers (energy and HFS constant as functions of the binding length) are mapped onto the doped-droplet situation with the help of helium-density functional theory. We simulate doped droplets He_{N} with N ranging from 50 to 10000, using the diatomic alkali-He potential energy curves as input. From the obtained density profiles we evaluate average distances between the dopant atom and its direct helium neighborhood. The distances are then set in relation to the variation of the HFS constant with binding length in the simplified alkali-He dimer model picture. This method yields reliable relative shifts but involves a systematic absolute error. Hence, the absolute values of the shifts are tied to one experimentally determined HFS constant for ^{85}Rb-He_{N = 2000}. With this parameter choice we obtain results in good agreement with the available experimental data for Rb and K^{a,b}, confirming the predicted 1/N trend of the functional dependence^{c}. M. Koch, G. Auböck, C. Callegari, and W. E. Ernst, Phys. Rev. Lett. 103, 035302-1-4 (2009) M. Koch, C. Callegari, and W. E. Ernst, Mol. Phys. 108 (7), 1005-1011 (2010) A. W. Hauser, T. Gruber, M. Filatov, and W. E. Ernst, ChemPhysChem (2013) online DOI: 10.1002/cphc.201200697
NASA Astrophysics Data System (ADS)
Grzybowski, J. M. V.; Macau, E. E. N.; Yoneyama, T.
2017-05-01
This paper presents a self-contained framework for the stability assessment of isochronal synchronization in networks of chaotic and limit-cycle oscillators. The results are based on the Lyapunov-Krasovskii theorem, and they establish a sufficient condition for local synchronization stability as a function of the system and network parameters. With this in mind, a network of mutually delay-coupled oscillators subject to direct self-coupling is considered, and the resulting error equations are block-diagonalized for the purpose of studying their stability. These error equations are evaluated by means of analytical stability results derived from the Lyapunov-Krasovskii theorem. The proposed approach is shown to be a feasible option for the investigation of local stability of isochronal synchronization for a variety of oscillators coupled through linear functions of the state variables under a given undirected graph structure. This ultimately permits the systematic identification of stability regions within the high-dimensional network parameter space. Examples of applications of the results to a number of networks of delay-coupled chaotic and limit-cycle oscillators are provided, such as the Lorenz, Rössler, cubic Chua's circuit, Van der Pol oscillator and the Hindmarsh-Rose neuron.
Crack Growth Properties of Sealing Glasses
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Tandon, R.
2008-01-01
The crack growth properties of several sealing glasses were measured using constant stress rate testing in 2% and 95% RH (relative humidity). The crack growth parameters n and B measured in high humidity are systematically smaller than those measured in low humidity, and velocities in dry environments are approximately 100x lower than in wet environments. The crack velocity is very sensitive to small changes in RH at low RH. Confidence intervals on parameters estimated from propagation of errors were comparable to those from Monte Carlo simulation.
The Anharmonic Force Field of Ethylene, C2H4, by Means of Accurate Ab Initio Calculations
NASA Technical Reports Server (NTRS)
Martin, Jan M. L.; Lee, Timothy J.; Taylor, Peter R.; Francois, Jean-Pierre; Langhoff, Stephen R. (Technical Monitor)
1995-01-01
The quartic force field of ethylene, C2H4, has been calculated ab initio using augmented coupled cluster, CCSD(T), methods and correlation-consistent basis sets of spdf quality. For the C-12 isotopomers C2H4, C2H3D, H2CCD2, cis-C2H2D2, trans-C2H2D2, C2HD3, and C2D4, all fundamentals could be reproduced to better than 10 cm^-1, except for three cases of severe Fermi type 1 resonance. The problem with these three bands is identified as a systematic overestimate of the k_iij Fermi resonance constants by a factor of two or more; if this is corrected for, the predicted fundamentals come into excellent agreement with experiment. No such systematic overestimate is seen for Fermi type 2 resonances. Our computed harmonic frequencies suggest a thorough revision of the accepted experimentally derived values. Our computed and empirically corrected r_e geometry differs substantially from experimentally derived values; both the predicted r_z geometry and the ground-state rotational constants are, however, in excellent agreement with experiment, suggesting revision of the older values. Anharmonicity constants agree well with experiment for stretches, but differ substantially for stretch-bend interaction constants, due to equality constraints in the experimental analysis that do not hold. Improved criteria for detecting Fermi and Coriolis resonances are proposed and found to work well, contrary to the established method based on harmonic frequency differences, which fails to detect several important resonances for C2H4 and its isotopomers. Surprisingly good results are obtained with a small spd basis at the CCSD(T) level. The well-documented strong basis set effect on the ν8 out-of-plane motion is present to a much lesser extent when correlation-optimized polarization functions are used. Complete sets of anharmonic, rovibrational coupling, and centrifugal distortion constants for the isotopomers are available as supplementary material to the paper.
Ong, Robert H.; King, Andrew J. C.; Mullins, Benjamin J.; Cooper, Timothy F.; Caley, M. Julian
2012-01-01
We present Computational Fluid Dynamics (CFD) models of the coupled dynamics of water flow, heat transfer and irradiance in and around corals to predict temperatures experienced by corals. These models were validated against controlled laboratory experiments, under constant and transient irradiance, for hemispherical and branching corals. Our CFD models agree very well with the experimental studies. A linear relationship between irradiance and coral surface warming was evident in both the simulation and experimental results, agreeing with heat transfer theory. However, CFD models for the steady-state simulation produced a better fit to the linear relationship than the experimental data, likely due to experimental error in the empirical measurements. The consistency of our modelling results with experimental observations demonstrates the applicability of CFD simulations, such as the models developed here, to coral bleaching studies. A study of the influence of coral skeletal porosity and skeletal bulk density on surface warming was also undertaken, demonstrating boundary layer behaviour, and interstitial flow magnitude and temperature profiles in coral cross sections. Our models complement recent studies showing systematic changes in these parameters in some coral colonies and have utility in the prediction of coral bleaching. PMID:22701582
NASA Astrophysics Data System (ADS)
Hensley, Winston; Giovanetti, Kevin
2008-10-01
A 1 ppm precision measurement of the muon lifetime is being conducted by the MULAN collaboration. The motivation for this new measurement lies in recent advances in theory that have reduced the uncertainty in calculating the Fermi coupling constant from the measured lifetime to a few tenths of a ppm; the largest uncertainty is now experimental. To achieve a 1 ppm level of precision it is necessary to control all sources of systematic error and to understand their influence on the lifetime measurement. James Madison University is contributing by examining the response of the timing system to uncorrelated events (randoms). A radioactive source was placed in front of paired detectors similar to those in the main experiment. These detectors were integrated in an identical fashion into the data acquisition and measurement system, and data from them were recorded during the entire experiment. The pair was placed in a shielded enclosure away from the main experiment to minimize interference. The data from these detectors should have a flat time spectrum, as the decay of a radioactive source is a random process with no time correlation. The spectrum can thus be used as an important diagnostic in studying the method of determining event times and the timing system performance.
Error function attack of chaos synchronization based encryption schemes.
Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu
2004-03-01
Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes, and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged by the quality factor. Copyright 2004 American Institute of Physics.
QCD Coupling from a Nonperturbative Determination of the Three-Flavor Λ Parameter.
Bruno, Mattia; Brida, Mattia Dalla; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Schaefer, Stefan; Simma, Hubert; Sint, Stefan; Sommer, Rainer
2017-09-08
We present a lattice determination of the Λ parameter in three-flavor QCD and the strong coupling at the Z pole mass. Computing the nonperturbative running of the coupling in the range from 0.2 to 70 GeV, and using experimental input values for the masses and decay constants of the pion and the kaon, we obtain Λ_MS̄^(3) = 341(12) MeV. The nonperturbative running up to very high energies guarantees that systematic effects associated with perturbation theory are well under control. Using the four-loop prediction for Λ_MS̄^(5)/Λ_MS̄^(3) yields α_MS̄^(5)(m_Z) = 0.11852(84).
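For orientation, the connection between a Λ parameter and the running coupling can be sketched at one loop. This is purely illustrative: the paper runs the coupling nonperturbatively and matches across flavor thresholds at four loops, so this fixed-nf one-loop formula will not reproduce their α_MS̄^(5)(m_Z):

```python
import math

# One-loop running of the strong coupling from a Lambda parameter.
# Illustrative only: keeps nf = 3 at all scales and ignores higher loops
# and flavor thresholds, unlike the nonperturbative analysis in the paper.
def alpha_s(mu_gev, lam_gev=0.341, nf=3):
    b0 = (33 - 2 * nf) / (12 * math.pi)          # one-loop beta coefficient
    return 1.0 / (b0 * math.log(mu_gev ** 2 / lam_gev ** 2))

for mu in (2.0, 10.0, 91.19):                     # GeV; 91.19 ~ Z pole mass
    print(f"alpha_s({mu:6.2f} GeV) ~ {alpha_s(mu):.3f}")
```

Even this crude formula shows the qualitative behavior: the coupling falls logarithmically with scale, and Λ sets where it blows up.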
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Jae Hyeok; Essig, Rouven; McDermott, Samuel D.
We consider the constraints from Supernova 1987A on particles with small couplings to the Standard Model. We discuss a model with a fermion coupled to a dark photon, with various mass relations in the dark sector; millicharged particles; dark-sector fermions with inelastic transitions; the hadronic QCD axion; and an axion-like particle that couples to Standard Model fermions with couplings proportional to their mass. In the fermion cases, we develop a new diagnostic for assessing when such a particle is trapped at large mixing angles. Our bounds for a fermion coupled to a dark photon constrain small couplings and masses < 200 MeV, and do not decouple for low fermion masses. They exclude parameter space that is otherwise unconstrained by existing accelerator-based and direct-detection searches. In addition, our bounds are complementary to proposed laboratory searches for sub-GeV dark matter, and do not constrain several "thermal" benchmark-model targets. For a millicharged particle, we exclude charges between 10^(-9) and a few times 10^(-6) in units of the electron charge; this excludes parameter space to higher millicharges and masses than previous bounds. For the QCD axion and an axion-like particle, we apply several updated nuclear physics calculations and include the energy dependence of the optical depth to accurately account for energy loss at large couplings. We rule out a hadronic axion of mass between 0.1 and a few hundred eV, or equivalently bound the PQ scale between a few times 10^4 and 10^8 GeV, closing the hadronic axion window. For an axion-like particle, our bounds disfavor decay constants between a few times 10^5 GeV up to a few times 10^8 GeV. In all cases, our bounds differ from previous work by more than an order of magnitude across the entire parameter space. We also provide estimated systematic errors due to the uncertainties of the progenitor.
Statistical exchange-coupling errors and the practicality of scalable silicon donor qubits
NASA Astrophysics Data System (ADS)
Song, Yang; Das Sarma, S.
2016-12-01
Recent experimental efforts have led to considerable interest in donor-based localized electron spins in Si as viable qubits for a scalable silicon quantum computer. With the use of isotopically purified 28Si and the realization of extremely long spin coherence time in single-donor electrons, the recent experimental focus is on two-coupled donors with the eventual goal of a scaled-up quantum circuit. Motivated by this development, we simulate the statistical distribution of the exchange coupling J between a pair of donors under realistic donor placement straggles, and quantify the errors relative to the intended J value. With J values in a broad range of donor-pair separation (5 < |R| < 60 nm), we work out various cases systematically, for a target donor separation R0 along the [001], [110] and [111] Si crystallographic directions, with |R0| = 10, 20 or 30 nm and standard deviation σR = 1, 2, 5 or 10 nm. Our extensive theoretical results demonstrate the great challenge for a prescribed J gate even with just a donor pair, a first step for any scalable Si-donor-based quantum computer.
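The effect of placement straggle on J can be illustrated with a toy Monte Carlo. The exponential-decay model below (with an assumed effective Bohr radius) omits the valley-interference oscillations that the actual simulations include, but already shows how nanometre-scale straggle spreads J over a wide range:

```python
import numpy as np

rng = np.random.default_rng(1)

a_B = 1.8                          # nm, assumed effective Bohr radius in Si
R0 = np.array([20.0, 0.0, 0.0])    # nm, target separation along one axis
sigma_R = 2.0                      # nm, placement straggle (std dev per axis)

def J(R):
    """Toy exchange coupling: pure exponential decay with donor separation.
    (The real J in Si also oscillates due to valley interference, omitted here.)"""
    return np.exp(-2.0 * np.linalg.norm(R, axis=-1) / a_B)

# Sample actual donor positions around the target and compare J to the intent
R = R0 + rng.normal(0.0, sigma_R, size=(100_000, 3))
ratio = J(R) / J(R0)               # realized J relative to the intended value

print(f"median J/J0 = {np.median(ratio):.2f}, interquartile span = "
      f"{np.percentile(ratio, 75) / np.percentile(ratio, 25):.1f}x")
```

Because J depends exponentially on separation, even a 2 nm straggle at 20 nm spacing spreads J over more than an order of magnitude, which is the core difficulty the paper quantifies.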
Coil motion effects in watt balances: a theoretical check
NASA Astrophysics Data System (ADS)
Li, Shisong; Schlamminger, Stephan; Haddad, Darine; Seifert, Frank; Chao, Leon; Pratt, Jon R.
2016-04-01
A watt balance is a precision apparatus for the measurement of the Planck constant that has been proposed as a primary method for realizing the unit of mass in a revised International System of Units. In contrast to an ampere balance, which was historically used to realize the unit of current in terms of the kilogram, the watt balance relates electrical and mechanical units through a virtual power measurement and has far greater precision. However, because the virtual power measurement requires the execution of a prescribed motion of a coil in a fixed magnetic field, systematic errors introduced by horizontal and rotational deviations of the coil from its prescribed path will compromise the accuracy. We model these potential errors using an analysis that accounts for the fringing field in the magnet, creating a framework for assessing the impact of this class of errors on the uncertainty of watt balance results.
Aylor, K.; Hou, Z.; Knox, L.; ...
2017-11-20
The Planck cosmic microwave background temperature data are best fit with a ΛCDM model that mildly contradicts constraints from other cosmological probes. The South Pole Telescope (SPT) 2540 deg² SPT-SZ survey offers measurements on sub-degree angular scales (multipoles 650 ≤ ℓ ≤ 2500) with sufficient precision to use as an independent check of the Planck data. Here we build on the recent joint analysis of the SPT-SZ and Planck data in Hou et al. by comparing ΛCDM parameter estimates using the temperature power spectrum from both data sets in the SPT-SZ survey region. We also restrict the multipole range used in parameter fitting to focus on modes measured well by both SPT and Planck, thereby greatly reducing sample variance as a driver of parameter differences and creating a stringent test for systematic errors. We find no evidence of systematic errors from these tests. When we expand the maximum multipole of SPT data used, we see low-significance shifts in the angular scale of the sound horizon and the physical baryon and cold dark matter densities, with a resulting trend to higher Hubble constant. When we compare SPT and Planck data on the SPT-SZ sky patch to Planck full-sky data but keep the multipole range restricted, we find differences in the parameters n_s and A_s e^(-2τ). We perform further checks, investigating instrumental effects and modeling assumptions, and we find no evidence that the effects investigated are responsible for any of the parameter shifts. Taken together, these tests reveal no evidence for systematic errors in SPT or Planck data in the overlapping sky coverage and multipole range and at most weak evidence for a breakdown of ΛCDM or systematic errors influencing either the Planck data outside the SPT-SZ survey area or the SPT data at ℓ > 2000.
A summary of the Planck constant determinations using the NRC Kibble balance
NASA Astrophysics Data System (ADS)
Wood, B. M.; Sanchez, C. A.; Green, R. G.; Liard, J. O.
2017-06-01
We present a summary of the Planck constant determinations using the NRC watt balance, now referred to as the NRC Kibble balance. The summary includes a reanalysis of the four determinations performed in late 2013, as well as three new determinations performed in 2016. We also present a number of improvements and modifications to the experiment resulting in lower noise and an improved uncertainty analysis. In addition, we describe a previously unrecognized systematic error and quantify its correction. The seven determinations, using three different nominal masses and two different materials, are reanalysed in a manner consistent with that used by the CODATA Task Group on Fundamental Constants (TGFC), including a comprehensive assessment of correlations. The result is a Planck constant of 6.626 070 133(60) × 10^-34 Js and an inferred value of the Avogadro constant of 6.022 140 772(55) × 10^23 mol^-1. These fractional uncertainties of less than 10^-8 are the smallest published to date.
Identification and correction of systematic error in high-throughput sequence data
2011-01-01
Background A feature common to all DNA sequencing technologies is the presence of base-call errors in the sequenced reads. The implications of such errors are application specific, ranging from minor informatics nuisances to major problems affecting biological inferences. Recently developed "next-gen" sequencing technologies have greatly reduced the cost of sequencing, but have been shown to be more error prone than previous technologies. Both position-specific (depending on the location in the read) and sequence-specific (depending on the sequence in the read) errors have been identified in Illumina and Life Technologies sequencing platforms. We describe a new type of systematic error that manifests as statistically unlikely accumulations of errors at specific genome (or transcriptome) locations. Results We characterize and describe systematic errors using overlapping paired reads from high-coverage data. We show that such errors occur in approximately 1 in 1000 base pairs, and that they are highly replicable across experiments. We identify motifs that are frequent at systematic error sites, and describe a classifier that distinguishes heterozygous sites from systematic error. Our classifier is designed to accommodate data from experiments in which the allele frequencies at heterozygous sites are not necessarily 0.5 (such as in the case of RNA-Seq), and can be used with single-end datasets. Conclusions Systematic errors can easily be mistaken for heterozygous sites in individuals, or for SNPs in population analyses. Systematic errors are particularly problematic in low coverage experiments, or in estimates of allele-specific expression from RNA-Seq data. Our characterization of systematic error has allowed us to develop a program, called SysCall, for identifying and correcting such errors. We conclude that correction of systematic errors is important to consider in the design and interpretation of high-throughput sequencing experiments. PMID:22099972
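The core idea, flagging sites where errors accumulate beyond what a per-base error model allows, can be sketched with a binomial tail test. This is not the SysCall classifier; the error rate, depth, and threshold below are assumptions for illustration:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

p_err = 0.01        # assumed per-base error rate
depth = 200         # assumed read depth at each site

# Simulated mismatch counts: most sites follow the binomial error model,
# one "systematic error" site accumulates errors far above expectation
mismatches = rng.binomial(depth, p_err, size=1000)
mismatches[500] = 60

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p) -- survival function, stdlib only."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Bonferroni-corrected threshold: flag sites whose excess is statistically unlikely
alpha = 0.05 / len(mismatches)
flagged = [i for i, k in enumerate(mismatches)
           if binom_sf(int(k), depth, p_err) < alpha]
print(flagged)
```

A real classifier additionally uses read-pair overlap, strand, and sequence-context features, as the abstract describes; the tail test only captures the "statistically unlikely accumulation" part.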
Variability-induced transition in a net of neural elements: From oscillatory to excitable behavior.
Glatt, Erik; Gassel, Martin; Kaiser, Friedemann
2006-06-01
Starting with an oscillatory net of neural elements, increasing variability induces a phase transition to excitability. This transition is explained by a systematic effect of the variability, which stabilizes the formerly unstable, spatially uniform, temporally constant solution of the net. Multiplicative noise may also influence the net in a systematic way and may thus induce a similar transition. Adding noise into the model, the interplay of noise and variability with respect to the reported transition is investigated. Finally, pattern formation in a diffusively coupled net is studied, because excitability implies the ability of pattern formation and information transmission.
Hanni, Matti; Lantto, Perttu; Runeberg, Nino; Jokisaari, Jukka; Vaara, Juha
2004-09-22
Quantum chemical calculations of the nuclear shielding tensor, the nuclear quadrupole coupling tensor, and the spin-rotation tensor are reported for the Xe dimer using ab initio quantum chemical methods. The binary chemical shift δ, the anisotropy of the shielding tensor Δσ, the nuclear quadrupole coupling tensor component along the internuclear axis χ‖, and the spin-rotation constant C⊥ are presented as functions of internuclear distance. The basis set superposition error is approximately corrected for by using the counterpoise correction (CP) method. Electron correlation effects are systematically studied via the Hartree-Fock, complete active space self-consistent field, second-order Møller-Plesset many-body perturbation, and coupled-cluster singles and doubles (CCSD) theories, the last one without and with noniterative triples, at the nonrelativistic all-electron level. We also report a high-quality theoretical interatomic potential for the Xe dimer, obtained using the relativistic effective potential/core polarization potential scheme. These calculations used a valence basis set of cc-pVQZ quality supplemented with a set of midbond functions. The second virial coefficient of Xe nuclear shielding, which is probably the experimentally best-characterized intermolecular interaction effect in nuclear magnetic resonance spectroscopy, is computed as a function of temperature, and compared to experiment and earlier theoretical results. The best results for the second virial coefficient, obtained using the CCSD(CP) binary chemical shift curve and either our best theoretical potential or the empirical potentials from the literature, are in good agreement with experiment. Zero-point vibrational corrections of δ, Δσ, χ‖, and C⊥ in the ν=0, J=0 rovibrational ground state of the xenon dimer are also reported.
Drought Persistence in Models and Observations
NASA Astrophysics Data System (ADS)
Moon, Heewon; Gudmundsson, Lukas; Seneviratne, Sonia
2017-04-01
Many regions of the world experienced drought events in the 20th century that persisted for several years and caused substantial economic and ecological impacts. However, it remains unclear whether there are significant trends in the frequency or severity of these prolonged drought events. In particular, an important issue is linked to systematic biases in the representation of persistent drought events in climate models, which impedes analysis related to the detection and attribution of drought trends. This study assesses drought persistence errors in global climate model (GCM) simulations from the 5th phase of the Coupled Model Intercomparison Project (CMIP5) over the period 1901-2010. The model simulations are compared with five gridded observational data products. The analysis focuses on two aspects: the identification of systematic biases in the models and the partitioning of the spread of the drought-persistence error into four possible sources of uncertainty: model uncertainty, observation uncertainty, internal climate variability, and the estimation error of drought persistence. We use monthly and yearly dry-to-dry transition probabilities as estimates of drought persistence, with drought conditions defined as negative precipitation anomalies. For both time scales we find that most model simulations consistently underestimate drought persistence, except in a few regions such as India and eastern South America. Partitioning the spread of the drought-persistence error shows that at the monthly time scale model uncertainty and observation uncertainty are dominant, while the contribution from internal variability plays only a minor role in most cases. At the yearly scale, the spread of the drought-persistence error is dominated by the estimation error, indicating that the partitioning is not statistically significant due to the limited number of considered time steps.
These findings reveal systematic errors in the representation of drought persistence in current climate models and highlight the main contributors to the uncertainty of the drought-persistence error. Future analyses will focus on investigating the temporal propagation of drought persistence to better understand the causes of the identified errors in the representation of drought persistence in state-of-the-art climate models.
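The dry-to-dry transition probability used as the persistence estimate is straightforward to compute. A minimal sketch on synthetic, uncorrelated monthly anomalies (an assumption; for white noise the expected value is about 0.5, while persistent drought pushes it higher):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical monthly precipitation anomalies (e.g. one grid cell, 110 years)
anom = rng.normal(0.0, 1.0, size=110 * 12)
dry = anom < 0.0                       # drought condition: negative anomaly

# Dry-to-dry transition probability: P(dry at t+1 | dry at t)
p_dd = (dry[:-1] & dry[1:]).sum() / dry[:-1].sum()
print(f"P(dry->dry) = {p_dd:.2f}")
```

Comparing this statistic between model output and gridded observations, month by month or year by year, is exactly the kind of persistence comparison the study performs.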
Gauging hidden symmetries in two dimensions
NASA Astrophysics Data System (ADS)
Samtleben, Henning; Weidner, Martin
2007-08-01
We initiate the systematic construction of gauged matter-coupled supergravity theories in two dimensions. Subgroups of the affine global symmetry group of toroidally compactified supergravity can be gauged by coupling vector fields with minimal couplings and a particular topological term. The gauge groups typically include hidden symmetries that are not among the target-space isometries of the ungauged theory. The gaugings constructed in this paper are described group-theoretically in terms of a constant embedding tensor subject to a number of constraints which parametrizes the different theories and entirely encodes the gauged Lagrangian. The prime example is the bosonic sector of the maximally supersymmetric theory whose ungauged version admits an affine 𝔢₉ global symmetry algebra. The various parameters (related to higher-dimensional p-form fluxes, geometric and non-geometric fluxes, etc.) which characterize the possible gaugings combine into an embedding tensor transforming in the basic representation of 𝔢₉. This yields an infinite-dimensional class of maximally supersymmetric theories in two dimensions. We work out and discuss several examples of higher-dimensional origin which can be systematically analyzed using the different gradings of 𝔢₉.
Fitting the constitution type Ia supernova data with the redshift-binned parametrization method
NASA Astrophysics Data System (ADS)
Huang, Qing-Guo; Li, Miao; Li, Xiao-Dong; Wang, Shuang
2009-10-01
In this work, we explore the cosmological consequences of the recently released Constitution sample of 397 Type Ia supernovae (SNIa). By revisiting the Chevallier-Polarski-Linder (CPL) parametrization, we find that, for fitting the Constitution set alone, the behavior of dark energy (DE) significantly deviates from the cosmological constant Λ: the equation of state (EOS) w and the energy density ρΛ of DE rapidly decrease with increasing redshift z. Inspired by this clue, we separate the redshifts into different bins, and discuss models of a constant w or a constant ρΛ in each bin, respectively. It is found that for fitting the Constitution set alone, w and ρΛ also rapidly decrease with increasing z, consistent with the result of the CPL model. Moreover, a step-function model in which ρΛ rapidly decreases at redshift z ≈ 0.331 presents a significant improvement (Δχ² = -4.361) over the CPL parametrization, and performs better than other DE models. We also plot the error bars of the DE density of this model, and find that this model deviates from the cosmological constant Λ at the 68.3% confidence level (CL); this may arise from some biasing systematic errors in the handling of SNIa data, or, more interestingly, from the nature of DE itself. In addition, for models with the same number of redshift bins, a piecewise constant ρΛ model always performs better than a piecewise constant w model; this shows the advantage of using ρΛ, instead of w, to probe the variation of DE.
Quantum computation with realistic magic-state factories
NASA Astrophysics Data System (ADS)
O'Gorman, Joe; Campbell, Earl T.
2017-03-01
Leading approaches to fault-tolerant quantum computation dedicate a significant portion of the hardware to computational factories that churn out high-fidelity ancillas called magic states. Consequently, efficient and realistic factory design is of paramount importance. Here we present the most detailed resource assessment to date of magic-state factories within a surface code quantum computer, along the way introducing a number of techniques. We show that the block codes of Bravyi and Haah [Phys. Rev. A 86, 052329 (2012), 10.1103/PhysRevA.86.052329] have been systematically undervalued; we track correlated errors both numerically and analytically, providing fidelity estimates without appeal to the union bound. We also introduce a subsystem code realization of these protocols with constant time and low ancilla cost. Additionally, we confirm that magic-state factories have space-time costs that scale as a constant factor of surface code costs. We find that the magic-state factory required for postclassical factoring can be as small as 6.3 million data qubits, ignoring ancilla qubits, assuming 10^-4 error gates and the availability of long-range interactions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirata, So; Yanai, Takeshi; De Jong, Wibe A.
Coupled-cluster methods including through and up to the connected single, double, triple, and quadruple substitutions (CCSD, CCSDT, and CCSDTQ) have been automatically derived and implemented for sequential and parallel executions for use in conjunction with a one-component third-order Douglas-Kroll (DK3) approximation for relativistic corrections. A combination of the converging electron-correlation methods, the accurate relativistic reference wave functions, and the use of systematic basis sets tailored to the relativistic approximation has been shown to predict the experimental singlet-triplet separations within 0.02 eV (0.5 kcal/mol) for five triatomic hydrides (CH2, NH2+, SiH2, PH2+, and AsH2+), the experimental bond lengths within 0.002 angstroms, rotational constants within 0.02 cm^-1, vibration-rotation constants within 0.01 cm^-1, centrifugal distortion constants within 2%, harmonic vibration frequencies within 9 cm^-1 (0.4%), anharmonic vibrational constants within 2 cm^-1, and dissociation energies within 0.03 eV (0.8 kcal/mol) for twenty diatomic hydrides (BH, CH, NH, OH, FH, AlH, SiH, PH, SH, ClH, GaH, GeH, AsH, SeH, BrH, InH, SnH, SbH, TeH, and IH) containing main-group elements across the second through fifth periods of the periodic table. In these calculations, spin-orbit effects on dissociation energies, which were assumed to be additive, were estimated from the measured spin-orbit coupling constants of atoms and diatomic molecules, and an electronic energy in the complete-basis-set, complete-electron-correlation limit has been extrapolated by a formula based on the exponential-Gaussian extrapolation formula of the basis set dependence.
NASA Astrophysics Data System (ADS)
Fleming, Donald G.; Arseneau, Donald J.; Sukhorukov, Oleksandr; Brewer, Jess H.; Mielke, Steven L.; Truhlar, Donald G.; Schatz, George C.; Garrett, Bruce C.; Peterson, Kirk A.
2011-11-01
The neutral muonic helium atom 4Heμ, in which one of the electrons of He is replaced by a negative muon, may be effectively regarded as the heaviest isotope of the hydrogen atom, with a mass of 4.115 amu. We report details of the first muon spin rotation (μSR) measurements of the chemical reaction rate constant of 4Heμ with molecular hydrogen, 4Heμ + H2 → 4HeμH + H, at temperatures of 295.5, 405, and 500 K, as well as a μSR measurement of the hyperfine coupling constant of muonic He at high pressures. The experimental rate constants, kHeμ, are compared with the predictions of accurate quantum mechanical (QM) dynamics calculations carried out on a well converged Born-Huang (BH) potential energy surface, based on complete configuration interaction calculations and including a Born-Oppenheimer diagonal correction. At the two highest measured temperatures the agreement between the quantum theory and experiment is good to excellent, well within experimental uncertainties that include an estimate of possible systematic error, but at 295.5 K the quantum calculations for kHeμ are below the experimental value by 2.1 times the experimental uncertainty estimates. Possible reasons for this discrepancy are discussed. Variational transition state theory calculations with multidimensional tunneling have also been carried out for kHeμ on the BH surface, and they agree with the accurate QM rate constants to within 30% over a wider temperature range of 200-1000 K. Comparisons between theory and experiment are also presented for the rate constants for both the D + H2 and Mu + H2 reactions in a novel study of kinetic isotope effects for the H + H2 reactions over a factor of 36.1 in isotopic mass of the atomic reactant.
Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals
NASA Technical Reports Server (NTRS)
Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko
2013-01-01
Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1% to 4% (3-12 K) over desert and from 1% to 7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors of up to 10-17 K under the most severe conditions.
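A hedged sketch of the error decomposition implied above: when the true emissivity at a site is assumed constant, the mean difference between two retrieval time series estimates their systematic offset, and the scatter of the de-meaned difference estimates the random error. The sample values are illustrative, not the study's retrievals.

```python
# Split the difference between two emissivity retrieval time series into a
# systematic part (mean offset) and a random part (std of the residual).
import numpy as np

def systematic_and_random(e1, e2):
    """Return (mean difference, sample std of difference) of two series."""
    diff = np.asarray(e1, float) - np.asarray(e2, float)
    return float(diff.mean()), float(diff.std(ddof=1))

# Illustrative emissivities over a desert site from two hypothetical sensors:
e_sensor_a = np.array([0.950, 0.952, 0.949, 0.951])
e_sensor_b = np.array([0.930, 0.933, 0.929, 0.931])
bias, noise = systematic_and_random(e_sensor_a, e_sensor_b)
```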
A cautionary note on the use of some mass flow controllers
NASA Astrophysics Data System (ADS)
Weinheimer, Andrew J.; Ridley, Brian A.
1990-06-01
Commercial mass flow controllers are widely used in atmospheric research where precise and constant gas flows are required. We have determined, however, that some commonly used controllers are far more sensitive to ambient pressure than is acknowledged in the literature of the manufacturers. Since a flow error can lead directly to a measurement error of the same magnitude, this is a matter of great concern. Indeed, in our particular application, were we not aware of this problem, our measurements would be subject to a systematic error that increased with altitude (i.e., a drift), up to a factor of 2 at the highest altitudes (~37 km). In this note we present laboratory measurements of the errors of two brands of flow controllers when operated at pressures down to a few millibars. The errors are as large as a factor of 2 to 3 and depend not simply on the ambient pressure at a given time, but also on the pressure history. In addition there is a large dependence on flow setting. In light of these flow errors, some past measurements of chemical species in the stratosphere will need to be revised.
Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry
2010-12-01
Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error [2, 3]. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted on the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.
How important is mode-coupling in global surface wave tomography?
NASA Astrophysics Data System (ADS)
Mikesell, Dylan; Nolet, Guust; Voronin, Sergey; Ritsema, Jeroen; Van Heijst, Hendrik-Jan
2016-04-01
To investigate the influence of mode coupling for fundamental-mode Rayleigh waves with periods between 64 and 174 s, we analysed 3,505,902 phase measurements obtained along minor-arc trajectories as well as 2,163,474 phases along major arcs. This is a selection of five frequency bands from the data set of Van Heijst and Woodhouse, extended with more recent earthquakes, that served to define upper-mantle S velocity in model S40RTS. Since accurate estimation of the misfits (as represented by χ2) is essential, we used the method of Voronin et al. (GJI 199:276, 2014) to obtain objective estimates of the standard errors in this data set. We adapted Voronin's method slightly so that systematic errors along clusters of raypaths cannot be accommodated by source corrections. This was done by simultaneously analysing multiple clusters of raypaths originating from the same group of earthquakes but traveling in different directions. For the minor-arc data, phase errors at the one-sigma level range from 0.26 rad at a period of 174 s to 0.89 rad at 64 s. For the major arcs, these errors are roughly twice as high (0.40 and 2.09 rad, respectively). In the subsequent inversion we removed any outliers that could not be fitted at the 3-sigma level in an almost undamped inversion. Using these error estimates and the theory of finite-frequency tomography to include the effects of scattering, we solved for models with χ2 = N (the number of data) both including and excluding the effect of mode coupling between Love and Rayleigh waves. We shall present some dramatic differences between the two models, notably near ocean-continent boundaries (e.g. California) where mode conversions are likely to be largest. But a sharpening of other features, such as cratons and high-velocity blobs in the oceanic domain, is also observed when mode coupling is taken into account.
An investigation of the influence of coupling on azimuthal anisotropy is still under way at the time of writing of this abstract, but the results of this will be included in the presentation.
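The 3-sigma outlier screening described in the abstract can be sketched as a simple clipping step on the residuals of a trial inversion. The function name and sample residuals are illustrative, not the study's data; only the one-sigma level of 0.26 rad for 174 s minor-arc data comes from the abstract.

```python
# Keep only measurements whose residuals from a trial (almost undamped)
# inversion fall within n_sigma standard errors; the rest are outliers.
import numpy as np

def sigma_clip(residuals, sigma, n_sigma=3.0):
    """Return a boolean mask of measurements kept at the n_sigma level."""
    r = np.asarray(residuals, float)
    return np.abs(r) <= n_sigma * sigma

phase_residuals = np.array([0.1, -0.2, 2.9, 0.05, -0.15])  # radians
keep = sigma_clip(phase_residuals, sigma=0.26)  # 174 s minor-arc error level
```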
Kaye, Stephen B
2009-04-01
To provide a scalar measure of refractive error, based on geometric lens power through principal, orthogonal and oblique meridians, that is not limited to the paraxial and sag height approximations. A function is derived to model sections through the principal meridian of a lens, followed by rotation of the section through orthogonal and oblique meridians. Average focal length is determined using the definition for the average of a function. Average univariate power in the principal meridian (including spherical aberration) can be computed from the average of a function over the angle of incidence as determined by the parameters of the given lens, or adequately computed from an integrated series function. Average power through orthogonal and oblique meridians can be similarly determined using the derived formulae. The widely used computation for measuring refractive error, the spherical equivalent, introduces non-constant approximations, leading to a systematic bias. The equations proposed provide a good univariate representation of average lens power and are not subject to a systematic bias. They are particularly useful for the analysis of aggregate data, for correlating with biological treatment variables, and for developing analyses that require a scalar equivalent representation of refractive power.
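A minimal worked example of "average power as the average of a function": for a sphero-cylindrical lens with paraxial power P(θ) = S + C·sin²(θ − α) through the meridian at angle θ, averaging P over all meridians recovers the spherical equivalent S + C/2. This reproduces only the paraxial part; the paper's full treatment also averages over angle of incidence to capture spherical aberration, which this sketch does not attempt.

```python
# Average the meridional power function P(theta) = S + C*sin^2(theta - alpha)
# over theta in [0, pi), i.e. the definition of the average of a function.
import math

def average_meridional_power(S, C, alpha=0.0, n=10000):
    """Numerically average P(theta) over all meridians."""
    total = 0.0
    for i in range(n):
        theta = math.pi * i / n
        total += S + C * math.sin(theta - alpha) ** 2
    return total / n

avg = average_meridional_power(S=-2.0, C=1.5)  # ≈ spherical equivalent -1.25 D
```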
Uncertainty Analysis and Order-by-Order Optimization of Chiral Nuclear Interactions
Carlsson, Boris; Forssen, Christian; Fahlin Strömberg, D.; ...
2016-02-24
Chiral effective field theory (χEFT) provides a systematic approach to describe low-energy nuclear forces. Moreover, χEFT is able to provide well-founded estimates of statistical and systematic uncertainties, although this unique advantage has not yet been fully exploited. We fill this gap by performing an optimization and statistical analysis of all the low-energy constants (LECs) up to next-to-next-to-leading order. Our optimization protocol corresponds to a simultaneous fit to scattering and bound-state observables in the pion-nucleon, nucleon-nucleon, and few-nucleon sectors, thereby utilizing the full model capabilities of χEFT. Finally, we study the effect on other observables by demonstrating forward-error-propagation methods that can easily be adopted by future works. We employ mathematical optimization and implement automatic differentiation to attain efficient and machine-precise first- and second-order derivatives of the objective function with respect to the LECs. This is also vital for the regression analysis. We use power-counting arguments to estimate the systematic uncertainty that is inherent to χEFT, and we construct chiral interactions at different orders with quantified uncertainties. Statistical error propagation is compared with Monte Carlo sampling, showing that statistical errors are in general small compared to systematic ones. In conclusion, we find that a simultaneous fit to different sets of data is critical to (i) identify the optimal set of LECs, (ii) capture all relevant correlations, (iii) reduce the statistical uncertainty, and (iv) attain order-by-order convergence in χEFT. Furthermore, certain systematic uncertainties in the few-nucleon sector are shown to get substantially magnified in the many-body sector, in particular when varying the cutoff in the chiral potentials. The methodology and results presented in this paper open a new frontier for uncertainty quantification in ab initio nuclear theory.
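The forward-error-propagation idea above can be sketched with linear (first-order) propagation: for an observable O(p) depending on fitted constants p with covariance Σ, var(O) ≈ J Σ Jᵀ with Jacobian J. The paper uses automatic differentiation for the derivatives; this sketch substitutes central finite differences, and the toy observable and covariance are assumptions for illustration.

```python
# Linear forward error propagation: var(O) ≈ J @ Sigma @ J, where J is the
# gradient of the observable with respect to the fitted parameters,
# estimated here by central finite differences.
import numpy as np

def propagate(observable, p, cov, h=1e-6):
    """Return the propagated variance of observable(p) given cov(p)."""
    p = np.asarray(p, float)
    J = np.empty_like(p)
    for i in range(p.size):
        dp = np.zeros_like(p)
        dp[i] = h
        J[i] = (observable(p + dp) - observable(p - dp)) / (2 * h)
    return float(J @ np.asarray(cov) @ J)

# Toy observable, linear in the "LECs", so the propagation is exact:
obs = lambda p: 2.0 * p[0] - 0.5 * p[1]
var = propagate(obs, [1.0, 3.0], [[0.04, 0.0], [0.0, 0.01]])
```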
The Effects of Lever Arm (Instrument Offset) Error on GRAV-D Airborne Gravity Data
NASA Astrophysics Data System (ADS)
Johnson, J. A.; Youngman, M.; Damiani, T.
2017-12-01
High quality airborne gravity collection with a 2-axis, stabilized platform gravity instrument, such as with a Micro-g LaCoste Turnkey Airborne Gravity System (TAGS), is dependent on the aircraft's ability to maintain "straight and level" flight. However, during flight there is constant rotation about the aircraft's center of gravity. Standard practice is to install the scientific equipment close to the aircraft's estimated center of gravity to minimize the relative rotations with aircraft motion. However, there remain small offsets between the instruments. These distance offsets, the lever arm, are used to define the rigid-body, spatial relationship between the IMU, GPS antenna, and airborne gravimeter within the aircraft body frame. The Gravity for the Redefinition of the American Vertical Datum (GRAV-D) project, which is collecting airborne gravity data across the U.S., uses a commercial software package for coupled IMU-GNSS aircraft positioning. This software incorporates a lever arm correction to calculate a precise position for the airborne gravimeter. The positioning software must do a coordinate transformation to relate each epoch of the coupled GNSS-IMU derived position to the position of the gravimeter within the constantly-rotating aircraft. This transformation requires three inputs: accurate IMU-measured aircraft rotations, GNSS positions, and lever arm distances between instruments. Previous studies show that correcting for the lever arm distances improves gravity results, but no sensitivity tests have been done to investigate how error in the lever arm distances affects the final airborne gravity products. This research investigates the effects of lever arm measurement error on airborne gravity data. GRAV-D lever arms are nominally measured to the cm-level using surveying equipment. "Truth" data sets will be created by processing GRAV-D flight lines with both relatively small lever arms and large lever arms. 
Then negative and positive incremental errors will be introduced independently in the x, y, and z directions during GPS-IMU processing. Finally, the post-processed gravity data obtained using the erroneous lever arms will be compared to the post-processed truth sets to identify relationships between error in the lever arm measurement and the final gravity product.
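The lever-arm coordinate transformation described above can be sketched as a rigid-body operation: the gravimeter position is the GNSS antenna position plus the body-frame lever arm rotated into the navigation frame by the IMU attitude. The Z-Y-X (yaw-pitch-roll) rotation convention and the sample numbers are assumptions, not the GRAV-D processing software's actual conventions.

```python
# Rigid-body lever-arm correction: rotate the body-frame offset between the
# GNSS antenna and the gravimeter into the navigation frame using the
# IMU-measured attitude, then add it to the antenna position.
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    """Body-to-navigation rotation, Rz(yaw) @ Ry(pitch) @ Rx(roll)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def gravimeter_position(antenna_pos, lever_arm_body, roll, pitch, yaw):
    return antenna_pos + rotation_matrix(roll, pitch, yaw) @ lever_arm_body

# Level flight: the lever arm adds directly to the antenna position.
pos = gravimeter_position(np.array([0.0, 0.0, 5000.0]),
                          np.array([1.2, -0.4, 0.8]), 0.0, 0.0, 0.0)
```

A lever-arm measurement error enters this transformation directly, which is why the sensitivity study above perturbs the x, y, and z components independently.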
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sayre, E.V.; Sancier, K.M.; Freed, S.
1958-07-01
In an analysis of term splitting in the absorption spectrum of 24 samples of praseodymium chloride, Judd (Proc. Roy. Soc. (London) A241, 414(1957)) found all but two of the authors' results to be constant with his. A discussion of reconciliation is presentrd, and the authors point out that the error is due to a mistake in descrimination between electronic transitions and the weak vibrationally coupled lines. (J.R.D.)
NASA Astrophysics Data System (ADS)
Alpha Collaboration; Amole, C.; Ashkezari, M. D.; Baquero-Ruiz, M.; Bertsche, W.; Butler, E.; Capra, A.; Cesar, C. L.; Charlton, M.; Eriksson, S.; Fajans, J.; Friesen, T.; Fujiwara, M. C.; Gill, D. R.; Gutierrez, A.; Hangst, J. S.; Hardy, W. N.; Hayden, M. E.; Isaac, C. A.; Jonsell, S.; Kurchaninov, L.; Little, A.; Madsen, N.; McKenna, J. T. K.; Menary, S.; Napoli, S. C.; Nolan, P.; Olin, A.; Pusa, P.; Rasmussen, C. Ø.; Robicheaux, F.; Sarid, E.; Silveira, D. M.; So, C.; Thompson, R. I.; van der Werf, D. P.; Wurtele, J. S.; Zhmoginov, A. I.; Charman, A. E.
2013-04-01
Physicists have long wondered whether the gravitational interactions between matter and antimatter might be different from those between matter and itself. Although there are many indirect indications that no such differences exist and that the weak equivalence principle holds, there have been no direct, free-fall style, experimental tests of gravity on antimatter. Here we describe a novel direct test methodology; we search for a propensity for antihydrogen atoms to fall downward when released from the ALPHA antihydrogen trap. In the absence of systematic errors, we can reject ratios of the gravitational to inertial mass of antihydrogen >75 at a statistical significance level of 5%; worst-case systematic errors increase the minimum rejection ratio to 110. A similar search places somewhat tighter bounds on a negative gravitational mass, that is, on antigravity. This methodology, coupled with ongoing experimental improvements, should allow us to bound the ratio within the more interesting near equivalence regime.
Danel, J-F; Kazandjian, L; Zérah, G
2012-06-01
Computations of the self-diffusion coefficient and viscosity in warm dense matter are presented with an emphasis on obtaining numerical convergence and a careful evaluation of the standard deviation. The transport coefficients are computed with the Green-Kubo relation and orbital-free molecular dynamics at the Thomas-Fermi-Dirac level. The numerical parameters are varied until the Green-Kubo integral is equal to a constant in the t→+∞ limit; the transport coefficients are deduced from this constant and not by extrapolation of the Green-Kubo integral. The latter method, which gives rise to an unknown error, is tested for the computation of viscosity; it appears that it should be used with caution. In the large domain of coupling constant considered, both the self-diffusion coefficient and viscosity turn out to be well approximated by simple analytical laws using a single effective atomic number calculated in the average-atom model.
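The plateau procedure described above can be sketched with an analytic exponential autocorrelation in place of molecular-dynamics data: the running Green-Kubo integral of C(t) = C0·exp(-t/τ) converges to the constant C0·τ, and the transport coefficient is read off from that plateau rather than extrapolated. The signal model and parameters are assumptions for illustration.

```python
# Running Green-Kubo integral of a decaying autocorrelation function;
# the plateau value in the t -> +inf limit gives the transport coefficient.
import numpy as np

C0, tau, dt = 2.0, 0.5, 1e-3
t = np.arange(0.0, 10.0 * tau, dt)
acf = C0 * np.exp(-t / tau)        # stand-in for an MD autocorrelation
running = np.cumsum(acf) * dt      # running Green-Kubo integral
plateau = running[-1]              # ≈ C0 * tau once converged
```

With noisy MD data the integral fluctuates at long times instead of settling cleanly, which is why the authors vary the numerical parameters until a genuine plateau appears before reading off the coefficient.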
Influence of silane coupling agent on microstructure and properties of CCTO-P(VDF-CTFE) composites
NASA Astrophysics Data System (ADS)
Tong, Yang; Zhang, Lin; Bass, Patrick; Rolin, Terry D.; Cheng, Z.-Y.
Influence of the coupling agent on the microstructure and dielectric properties of ceramic-polymer composites is systematically studied using CaCu3Ti4O12 (CCTO) as the filler, trichloro-(1H,1H,2H,2H-perfluorooctyl)-silane (Cl3-silane) as the coupling agent, and P(VDF-CTFE) 88/12 mol.% copolymer as the matrix. It is demonstrated that Cl3-silane molecules can be attached onto the CCTO surface using a simple process. The experimental results show that coating CCTO with Cl3-silane can improve the microstructure uniformity of the composites due to the good wettability between Cl3-silane and P(VDF-CTFE), which also significantly improves the electric breakdown field of the composites. It is found that the composites using CCTO coated with 1.0 wt.% Cl3-silane exhibit a higher dielectric constant with a higher electric breakdown field. For the composites with 15 vol.% CCTO coated with 1.0 wt.% Cl3-silane, an electric breakdown field of more than 240 MV/m is obtained with an energy density of more than 4.5 J/cm3. It is also experimentally found that the dielectric constant can be used to easily identify the optimized content of coupling agent.
NASA Astrophysics Data System (ADS)
Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang
2018-01-01
The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which may help reduce the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation between them. All the simulation results are consistent with the theoretical analysis.
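The correlation mechanism identified above can be sketched numerically: estimate the normalized cross-correlation between a simulated dynamic-flexure signal and the ship angular motion; a value near one signals the condition under which the coupling error is largest. The signal model (a shared sinusoid plus an uncorrelated component) is an assumption for illustration, not the paper's simulation setup.

```python
# Correlation between a simulated dynamic-flexure signal and ship angular
# motion; the shared 0.5 rad/s component makes them strongly correlated.
import numpy as np

t = np.linspace(0.0, 100.0, 2001)
angular_motion = np.sin(0.5 * t)
dynamic_flexure = 0.3 * np.sin(0.5 * t) + 0.1 * np.sin(2.0 * t)

def correlation(x, y):
    """Normalized (Pearson) cross-correlation at zero lag."""
    return float(np.corrcoef(x, y)[0, 1])

rho = correlation(dynamic_flexure, angular_motion)
```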
A Systematic Approach to Error Free Telemetry
2017-06-28
A Systematic Approach to Error-Free Telemetry, 412TW-TIM-17-03. Distribution A: approved for public release. This Technical Information Memorandum (dates covered: from February 2016) was submitted by the Commander, 412th Test Wing, Edwards AFB, California 93524.
Critical analysis of fragment-orbital DFT schemes for the calculation of electronic coupling values
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schober, Christoph; Reuter, Karsten; Oberhofer, Harald, E-mail: harald.oberhofer@ch.tum.de
2016-02-07
We present a critical analysis of the popular fragment-orbital density-functional theory (FO-DFT) scheme for the calculation of electronic coupling values. We discuss the characteristics of different possible formulations or “flavors” of the scheme, which differ by the number of electrons in the calculation of the fragments and the construction of the Hamiltonian. In addition to two previously described variants based on neutral fragments, we present a third version taking a different route to the approximate diabatic state by explicitly considering charged fragments. In applying these FO-DFT flavors to the two molecular test sets HAB7 (electron transfer) and HAB11 (hole transfer), we find that our new scheme gives improved electronic couplings for HAB7 (−6.2% decrease in mean relative signed error) and greatly improved electronic couplings for HAB11 (−15.3% decrease in mean relative signed error). A systematic investigation of the influence of exact exchange on the electronic coupling values shows that the use of hybrid functionals in FO-DFT calculations improves the electronic couplings, giving values close to or even better than more sophisticated constrained DFT calculations. Comparing the accuracy and computational cost of each variant, we devise simple rules to choose the best possible flavor depending on the task. For accuracy, our new scheme with charged-fragment calculations performs best, while the variant with neutral fragments is numerically more efficient at reasonable accuracy.
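The mean relative signed error quoted above as the benchmarking figure of merit can be sketched as follows; the sample numbers are illustrative, not the HAB7/HAB11 data.

```python
# Mean relative signed error (MRSE): the signed average of the relative
# deviation of computed values from reference values, in percent. Unlike an
# unsigned error, it preserves the direction of a systematic bias.
def mean_relative_signed_error(computed, reference):
    """Average of (computed - reference) / reference, as a percentage."""
    terms = [(c - r) / r for c, r in zip(computed, reference)]
    return 100.0 * sum(terms) / len(terms)

mrse = mean_relative_signed_error([9.0, 21.0], [10.0, 20.0])  # ≈ -2.5 %
```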
Observational constraint on the interacting dark energy models including the Sandage-Loeb test
NASA Astrophysics Data System (ADS)
Zhang, Ming-Jian; Liu, Wen-Biao
2014-05-01
Two types of interacting dark energy models are investigated using type Ia supernovae (SNIa), observational Hubble data (OHD), the cosmic microwave background shift parameter, and the secular Sandage-Loeb (SL) test. In the investigation, we have used two sets of parameter priors, WMAP-9 and Planck 2013, which show some interesting differences. We find that the inclusion of the SL test can obviously provide a more stringent constraint on the parameters in both models. For the constant coupling model, the interaction term has been improved to only half of the original scale on the corresponding errors. Compared with only SNIa and OHD, we find that the inclusion of the SL test almost reduces the best-fit interaction to zero, which indicates that higher-redshift observations including the SL test are necessary to track the evolution of the interaction. For the varying coupling model, data including the SL test constrain the parameter at C.L. in Planck priors, where the constant is characteristic of the severity of the coincidence problem. This indicates that the coincidence problem will be less severe. We then reconstruct the interaction, and we find that the best-fit interaction is also negative, similar to the constant coupling model. However, at high redshift the interaction generally vanishes at infinity. We also find that the phantom-like dark energy is favored over the ΛCDM model.
NSLS-II BPM System Protection from Rogue Mode Coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Blednykh, A.; Bach, B.; Borrelli, A.
2011-03-28
Rogue-mode RF shielding has been successfully designed and implemented into the production multipole vacuum chambers. In order to avoid systematic errors in the NSLS-II BPM system, we introduced a frequency shift of the higher-order modes (HOMs) by using RF metal shielding located in the antechamber slot of each multipole vacuum chamber. To satisfy the pumping requirement, the face of the shielding has been perforated with roughly 50 percent transparency. It stays clear of synchrotron radiation in each chamber.
The contributions of human factors on human error in Malaysia aviation maintenance industries
NASA Astrophysics Data System (ADS)
Padil, H.; Said, M. N.; Azizan, A.
2018-05-01
Aviation maintenance is a multitasking activity in which individuals perform varied tasks under constant pressure to meet deadlines as well as challenging work conditions. These situational characteristics combined with human factors can lead to various types of human-related errors. The primary objective of this research is to develop a structural relationship model that incorporates human factors, organizational factors, and their impact on human errors in aviation maintenance. Towards that end, a questionnaire was developed and administered to Malaysian aviation maintenance professionals. A Structural Equation Modelling (SEM) approach was used in this study utilizing AMOS software. Results showed a significant relationship between human factors and human errors in the tested model. Human factors had a partial effect on organizational factors, while organizational factors had a direct and positive impact on human errors. It was also revealed that organizational factors contributed to human errors when coupled with the human factors construct. This study has contributed to the advancement of knowledge on human factors affecting safety, has provided guidelines for improving human factors performance relating to aviation maintenance activities, and could be used as a reference for improving safety performance in Malaysian aviation maintenance companies.
A Systematic Error Correction Method for TOVS Radiances
NASA Technical Reports Server (NTRS)
Joiner, Joanna; Rokke, Laurie; Einaudi, Franco (Technical Monitor)
2000-01-01
Treatment of systematic errors is crucial for the successful use of satellite data in a data assimilation system. Systematic errors in TOVS radiance measurements and radiative transfer calculations can be as large or larger than random instrument errors. The usual assumption in data assimilation is that observational errors are unbiased. If biases are not effectively removed prior to assimilation, the impact of satellite data will be lessened and can even be detrimental. Treatment of systematic errors is important for short-term forecast skill as well as the creation of climate data sets. A systematic error correction algorithm has been developed as part of a 1D radiance assimilation. This scheme corrects for spectroscopic errors, errors in the instrument response function, and other biases in the forward radiance calculation for TOVS. Such algorithms are often referred to as tuning of the radiances. The scheme is able to account for the complex, air-mass dependent biases that are seen in the differences between TOVS radiance observations and forward model calculations. We will show results of systematic error correction applied to the NOAA 15 Advanced TOVS as well as its predecessors. We will also discuss the ramifications of inter-instrument bias with a focus on stratospheric measurements.
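A hedged sketch of radiance "tuning" as described above: fit the observed-minus-calculated (O-C) radiance differences to a set of air-mass-dependent predictors by least squares, then subtract the predicted bias before assimilation. The choice of predictors and the synthetic numbers are assumptions for illustration, not the scheme of the paper.

```python
# Air-mass-dependent bias correction: regress O-C differences onto
# predictors (here a constant and one air-mass-like quantity), then remove
# the predicted systematic part, leaving only random error.
import numpy as np

def fit_bias(predictors, omc):
    """predictors: (n_obs, n_pred) matrix; omc: obs-minus-calc vector."""
    coeffs, *_ = np.linalg.lstsq(predictors, omc, rcond=None)
    return coeffs

def correct(predictors, omc, coeffs):
    return omc - predictors @ coeffs

# Synthetic example: bias linear in a lapse-rate-like predictor.
X = np.column_stack([np.ones(5), np.array([0.0, 1.0, 2.0, 3.0, 4.0])])
omc = 0.5 + 0.2 * X[:, 1]               # purely systematic difference
residual = correct(X, omc, fit_bias(X, omc))
```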
Testing the Concept of Quark-Hadron Duality with the ALEPH τ Decay Data
NASA Astrophysics Data System (ADS)
Magradze, B. A.
2010-12-01
We propose a modified procedure for extracting the numerical value of the strong coupling constant α_s from the τ lepton hadronic decay rate into non-strange particles in the vector channel. We employ the concept of quark-hadron duality specifically, introducing a boundary energy squared s_p > 0, the onset of the perturbative QCD continuum in Minkowski space (Bertlmann et al. in Nucl Phys B 250:61, 1985; de Rafael in An introduction to sum rules in QCD. In: Lectures at the Les Houches Summer School. arXiv:hep-ph/9802448, 1997; Peris et al. in JHEP 9805:011, 1998). To approximate the hadronic spectral function in the region s > s_p, we use analytic perturbation theory (APT) up to the fifth order. A new feature of our procedure is that it enables us to extract from the data simultaneously the QCD scale parameter Λ_MS-bar and the boundary energy squared s_p. We carefully determine the experimental errors on these parameters, which come from the errors on the invariant mass squared distribution. For the MS-bar scheme coupling constant, we obtain α_s(m_τ^2) = 0.3204 ± 0.0159 (exp.). We show that our numerical analysis is much more stable against higher-order corrections than the standard one. Additionally, we recalculate the “experimental” Adler function in the infrared region using the final ALEPH results. The uncertainty on this function is also determined.
Leptonic-decay-constant ratio f(K+)/f(π+) from lattice QCD with physical light quarks.
Bazavov, A; Bernard, C; DeTar, C; Foley, J; Freeman, W; Gottlieb, Steven; Heller, U M; Hetrick, J E; Kim, J; Laiho, J; Levkova, L; Lightman, M; Osborn, J; Qiu, S; Sugar, R L; Toussaint, D; Van de Water, R S; Zhou, R
2013-04-26
A calculation of the ratio of leptonic decay constants f(K+)/f(π+) makes possible a precise determination of the ratio of Cabibbo-Kobayashi-Maskawa (CKM) matrix elements |V(us)|/|V(ud)| in the standard model, and places a stringent constraint on the scale of new physics that would lead to deviations from unitarity in the first row of the CKM matrix. We compute f(K+)/f(π+) numerically in unquenched lattice QCD using recently generated gauge-field ensembles that include four flavors of dynamical quarks: up, down, strange, and charm. We analyze data at four lattice spacings a ≈ 0.06, 0.09, 0.12, and 0.15 fm with simulated pion masses down to the physical value of 135 MeV. We obtain f(K+)/f(π+) = 1.1947(26)(37), where the errors are statistical and total systematic, respectively. This is our first physics result from our N(f) = 2+1+1 ensembles, and the first calculation of f(K+)/f(π+) from lattice-QCD simulations at the physical point. Our result is the most precise lattice-QCD determination of f(K+)/f(π+), with an error comparable to the current world average. When combined with experimental measurements of the leptonic branching fractions, it leads to a precise determination of |V(us)|/|V(ud)| = 0.2309(9)(4), where the errors are theoretical and experimental, respectively.
Bazavov, A.; Bernard, C.; Komijani, J.; ...
2014-10-30
We compute the leptonic decay constants f_D+, f_Ds, and f_K+, and the quark-mass ratios m_c/m_s and m_s/m_l in unquenched lattice QCD using the experimentally determined value of f_π+ for normalization. We use the MILC Highly Improved Staggered Quark (HISQ) ensembles with four dynamical quark flavors -- up, down, strange, and charm -- and with both physical and unphysical values of the light sea-quark masses. The use of physical pions removes the need for a chiral extrapolation, thereby eliminating a significant source of uncertainty in previous calculations. Four different lattice spacings ranging from a ≈ 0.06 fm to 0.15 fm are included in the analysis to control the extrapolation to the continuum limit. Our primary results are f_D+ = 212.6(0.4)(+1.0/-1.2) MeV, f_Ds = 249.0(0.3)(+1.1/-1.5) MeV, and f_Ds/f_D+ = 1.1712(10)(+29/-32), where the errors are statistical and total systematic, respectively. The errors on our results for the charm decay constants and their ratio are approximately two to four times smaller than those of the most precise previous lattice calculations. We also obtain f_K+/f_π+ = 1.1956(10)(+26/-18), updating our previous result, and determine the quark-mass ratios m_s/m_l = 27.35(5)(+10/-7) and m_c/m_s = 11.747(19)(+59/-43). When combined with experimental measurements of the decay rates, our results lead to precise determinations of the CKM matrix elements |V_us| = 0.22487(51)(29)(20)(5), |V_cd| = 0.217(1)(5)(1), and |V_cs| = 1.010(5)(18)(6), where the errors are from this calculation of the decay constants, the uncertainty in the experimental decay rates, structure-dependent electromagnetic corrections, and, in the case of |V_us|, the uncertainty in |V_ud|, respectively.
More on Systematic Error in a Boyle's Law Experiment
ERIC Educational Resources Information Center
McCall, Richard P.
2012-01-01
A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.
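As an illustration of the kind of systematic error such a Boyle's law activity can exhibit (the specific analysis in the cited article is not reproduced here), the sketch below uses an invented "dead volume" in the apparatus: the product P·V of the recorded volume drifts systematically, and fitting V against 1/P recovers both the gas-law constant and the offset.

```python
import numpy as np

# True behavior: P * (V + v_dead) = C, but the experimenter records only V,
# so the apparent product P*V varies systematically with pressure.
v_dead = 2.0                       # hypothetical unrecorded dead volume (mL)
C = 2000.0                         # P * V_total constant (kPa mL)
V = np.linspace(10, 50, 9)         # recorded syringe volumes (mL)
P = C / (V + v_dead)               # measured pressures (kPa)

apparent = P * V                   # a naive analysis would expect this constant
print(apparent.max() - apparent.min())   # nonzero: a systematic trend, not noise

# Fitting V = C/P - v_dead (linear in 1/P) recovers both constants.
slope, intercept = np.polyfit(1.0 / P, V, 1)
print(round(slope, 1), round(-intercept, 1))   # recovers C and v_dead
```

The signature of the systematic error is that the trend in P·V is monotone in pressure, unlike random scatter.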
Sobel, Michael E; Lindquist, Martin A
2014-07-01
Functional magnetic resonance imaging (fMRI) has facilitated major advances in understanding human brain function. Neuroscientists are interested in using fMRI to study the effects of external stimuli on brain activity and causal relationships among brain regions, but have not stated what is meant by causation or defined the effects they purport to estimate. Building on Rubin's causal model, we construct a framework for causal inference using blood oxygenation level dependent (BOLD) fMRI time series data. In the usual statistical literature on causal inference, potential outcomes, assumed to be measured without systematic error, are used to define unit and average causal effects. However, in general the potential BOLD responses are measured with stimulus dependent systematic error. Thus we define unit and average causal effects that are free of systematic error. In contrast to the usual case of a randomized experiment where adjustment for intermediate outcomes leads to biased estimates of treatment effects (Rosenbaum, 1984), here the failure to adjust for task dependent systematic error leads to biased estimates. We therefore adjust for systematic error using measured "noise covariates", using a linear mixed model to estimate the effects and the systematic error. Our results are important for neuroscientists, who typically do not adjust for systematic error. They should also prove useful to researchers in other areas where responses are measured with error and in fields where large amounts of data are collected on relatively few subjects. To illustrate our approach, we re-analyze data from a social evaluative threat task, comparing the findings with results that ignore systematic error.
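The core idea of adjusting for task-dependent systematic error with a measured noise covariate can be sketched as follows. A plain fixed-effects regression stands in for the linear mixed model used in the paper, and all data are simulated; the point is only that the naive contrast is biased while the covariate-adjusted estimate is not.

```python
import numpy as np

# Simulated responses: the measured signal is the true task effect plus a
# task-dependent systematic error that tracks a measured noise covariate.
rng = np.random.default_rng(1)
n = 200
task = rng.integers(0, 2, size=n)              # stimulus on/off
noise_cov = rng.normal(size=n) + 1.0 * task    # systematic error tracks the task
true_effect = 1.0
y = true_effect * task + 0.8 * noise_cov + rng.normal(scale=0.3, size=n)

# Naive contrast ignores the systematic error and is biased upward.
naive = y[task == 1].mean() - y[task == 0].mean()

# Adjusted estimate: include the noise covariate in the design matrix.
X = np.column_stack([np.ones(n), task, noise_cov])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adjusted = beta[1]
print(round(naive, 2), round(adjusted, 2))
```

The naive estimate absorbs the task-correlated systematic error; conditioning on the covariate recovers the true effect, mirroring the paper's argument that failing to adjust yields biased estimates.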
Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.
Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
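The balanced one-factor random-effects estimation referred to above can be sketched with simulated setup errors. For m patients with n fractions each, the within-patient mean square (MSW) estimates the random variance σ², and the between-patient mean square (MSB) estimates nΣ² + σ², so Σ² = (MSB − MSW)/n. The parameter values below are illustrative.

```python
import numpy as np

# Simulated setup errors: m patients, n fractions each (balanced design).
# Sigma = inter-patient (systematic) SD, sigma = intra-patient (random) SD.
rng = np.random.default_rng(2)
m, n = 200, 30
Sigma_true, sigma_true = 2.0, 1.0
patient_mean = rng.normal(0.0, Sigma_true, size=m)
errors = patient_mean[:, None] + rng.normal(0.0, sigma_true, size=(m, n))

# ANOVA mean squares: MSW estimates sigma^2; MSB estimates n*Sigma^2 + sigma^2.
grand = errors.mean()
msb = n * np.sum((errors.mean(axis=1) - grand) ** 2) / (m - 1)
msw = np.sum((errors - errors.mean(axis=1, keepdims=True)) ** 2) / (m * (n - 1))
sigma_hat = np.sqrt(msw)
Sigma_hat = np.sqrt(max((msb - msw) / n, 0.0))
print(round(Sigma_hat, 2), round(sigma_hat, 2))   # close to 2.0 and 1.0
```

Subtracting MSW before dividing by n is what prevents the overestimation of the systematic component that the note attributes to the conventional method, which is most pronounced when n is small (hypofractionation).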
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kharkov, B. B.; Chizhik, V. I.; Dvinskikh, S. V., E-mail: sergeid@kth.se
2016-01-21
Dipolar recoupling is an essential part of current solid-state NMR methodology for probing atomic-resolution structure and dynamics in solids and soft matter. The recently described magic-echo amplitude- and phase-modulated cross-polarization heteronuclear recoupling strategy aims at efficient and robust recoupling over the entire range of coupling constants, in both rigid and highly dynamic molecules. In the present study, the properties of this recoupling technique are investigated by theoretical analysis, spin-dynamics simulation, and experiment. The resonance conditions and the efficiency of suppressing rf field errors are examined and compared to those for other recoupling sequences based on similar principles. The experimental data obtained in a variety of rigid and soft solids illustrate the scope of the method and corroborate the results of analytical and numerical calculations. The technique benefits from dipolar resolution over a wider range of coupling constants than other state-of-the-art methods and is thus advantageous in studies of complex solids with a broad range of dynamic processes and degrees of molecular mobility.
Propagation of stage measurement uncertainties to streamflow time series
NASA Astrophysics Data System (ADS)
Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary
2016-04-01
Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating-curve errors (parametric and structural) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution, instrument precision, and non-stationary waves, and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is overall satisfactory. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast strongly depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating between systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
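The distinction between systematic and non-systematic stage errors can be illustrated with a small Monte Carlo propagation through a power-law rating curve Q = a(h − b)^c. The curve parameters and error magnitudes below are invented; the structural point is that the systematic error is drawn once per replicate (shared by all stages), while the non-systematic error is drawn independently for every stage sample.

```python
import numpy as np

# Hypothetical rating-curve parameters and recorded stages.
rng = np.random.default_rng(3)
a, b, c = 40.0, 0.2, 1.6
stage = np.array([0.8, 1.2, 2.0, 3.1])     # recorded stages (m)

n_mc = 5000
sys_sd, rand_sd = 0.01, 0.005              # systematic vs non-systematic SD (m)
# One systematic offset per replicate, shared across the whole series,
# plus an independent non-systematic error for every stage sample.
sys_err = rng.normal(0, sys_sd, size=(n_mc, 1))
rand_err = rng.normal(0, rand_sd, size=(n_mc, stage.size))
h = stage + sys_err + rand_err
q = a * np.clip(h - b, 0, None) ** c       # propagate through the rating curve

q_mean = q.mean(axis=0)
q_sd = q.std(axis=0)
print(np.round(q_mean, 1))
print(np.round(100 * q_sd / q_mean, 2))    # relative streamflow uncertainty (%)
```

Because the systematic component is perfectly correlated across samples, it does not average out in long-term flow means, which is why the abstract stresses discriminating the two error types.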
$B$- and $D$-meson leptonic decay constants from four-flavor lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bazavov, A.; Bernard, C.; Brown, N.
We calculate the leptonic decay constants of heavy-light pseudoscalar mesons with charm and bottom quarks in lattice quantum chromodynamics on four-flavor QCD gauge-field configurations with dynamical u, d, s, and c quarks. We analyze over twenty isospin-symmetric ensembles with six lattice spacings down to a ≈ 0.03 fm and several values of the light-quark mass down to the physical value (m_u + m_d)/2. We employ the highly improved staggered quark (HISQ) action for the sea and valence quarks; on the finest lattice spacings, discretization errors are sufficiently small that we can calculate the B-meson decay constants with the HISQ action for the first time directly at the physical b-quark mass. We obtain the most precise determinations to date of the D- and B-meson decay constants and their ratios: f_B+ = 189.4(1.4) MeV, f_Bs = 230.7(1.2) MeV, f_Bs/f_B+ = 1.2180(49), f_D+ = 212.6(0.5) MeV, f_Ds = 249.8(0.4) MeV, and f_Ds/f_D+ = 1.1749(11), where the errors include statistical and all systematic uncertainties. Our results for the B-meson decay constants are three times more precise than the previous best lattice-QCD calculations, and bring the QCD errors in the Standard-Model predictions for the rare leptonic decays B̄(B_s → μ+μ−) = 3.65(11) × 10⁻⁹, B̄(B⁰ → μ+μ−) = 1.00(3) × 10⁻¹¹, and B̄(B⁰ → μ+μ−)/B̄(B_s → μ+μ−) = 0.00264(7) to well below other sources of uncertainty. As a byproduct of our analysis, we also update our previously published results for the light-quark-mass ratios and the scale-setting quantities f_p4s, M_p4s, and R_p4s. We obtain the most precise lattice-QCD determination to date of the ratio f_K+/f_π+ = 1.1950(+15/-22).
Systematic errors in transport calculations of shear viscosity using the Green-Kubo formalism
NASA Astrophysics Data System (ADS)
Rose, J. B.; Torres-Rincon, J. M.; Oliinychenko, D.; Schäfer, A.; Petersen, H.
2018-05-01
The purpose of this study is to provide a reproducible framework for the use of the Green-Kubo formalism to extract transport coefficients. More specifically, in the case of shear viscosity, we investigate the limitations and technical details of fitting the auto-correlation function to a decaying exponential. This fitting procedure is found to be applicable for systems interacting through both constant and energy-dependent cross sections, although in the latter case only for sufficiently dilute systems. We find that the optimal fit technique consists of simultaneously fixing the intercept of the correlation function and using a fitting interval constrained by the relative error on the correlation function. The formalism is then applied to the full hadron gas, for which we obtain the shear viscosity to entropy ratio.
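The fitting procedure described above can be sketched on synthetic data: fix the intercept at C(0), restrict the fit window by the relative error on the correlation function (here approximated by a threshold well above the noise floor), and fit only the relaxation time of C(t) = C(0)·exp(−t/τ). Units, prefactors, and noise levels are schematic, not those of the paper.

```python
import numpy as np

# Synthetic autocorrelation function with small multiplicative noise.
rng = np.random.default_rng(4)
dt, tau_true, c0 = 0.1, 2.0, 5.0
t = np.arange(0, 10, dt)
acf = c0 * np.exp(-t / tau_true) * (1 + rng.normal(scale=0.01, size=t.size))

# Fit window: keep points where the ACF is still well above the noise floor,
# a stand-in for constraining by the relative error on the ACF.
window = acf > 0.1 * c0

# With the intercept fixed at acf[0], log(acf/acf[0]) = -t/tau is linear
# through the origin; one-parameter least squares gives tau directly.
y = np.log(acf[window] / acf[0])
tau_fit = -np.sum(t[window] ** 2) / np.sum(t[window] * y)

# Green-Kubo: the transport coefficient is proportional to the ACF integral,
# which for the exponential model is simply C(0)*tau.
integral = acf[0] * tau_fit
print(round(tau_fit, 2), round(integral, 2))
```

Fixing the intercept removes one noisy degree of freedom, which is the stability gain the abstract refers to; the shear viscosity then follows from the integral up to the usual V/T prefactor.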
PANCHROMATIC HUBBLE ANDROMEDA TREASURY. XII. MAPPING STELLAR METALLICITY DISTRIBUTIONS IN M31
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gregersen, Dylan; Seth, Anil C.; Williams, Benjamin F.
We present a study of spatial variations in the metallicity of old red giant branch stars in the Andromeda galaxy. Photometric metallicity estimates are derived by interpolating isochrones for over seven million stars in the Panchromatic Hubble Andromeda Treasury (PHAT) survey. This is the first systematic study of stellar metallicities over the inner 20 kpc of Andromeda’s galactic disk. We see a clear metallicity gradient of −0.020 ± 0.004 dex kpc⁻¹ from ∼4-20 kpc assuming a constant red giant branch age. This metallicity gradient is derived after correcting for the effects of photometric bias and completeness and dust extinction, and is quite insensitive to these effects. The unknown age gradient in M31's disk creates the dominant systematic uncertainty in our derived metallicity gradient. However, spectroscopic analyses of galaxies similar to M31 show that they typically have small age gradients that make this systematic error comparable to the 1σ error on our metallicity gradient measurement. In addition to the metallicity gradient, we observe an asymmetric local enhancement in metallicity at radii of 3-6 kpc that appears to be associated with Andromeda’s elongated bar. This same region also appears to have an enhanced stellar density and velocity dispersion.
NASA Astrophysics Data System (ADS)
Zhang, Rong-Hua; Tao, Ling-Jiang; Gao, Chuan
2017-09-01
Large uncertainties exist in real-time predictions of the 2015 El Niño event, which have systematic intensity biases that are strongly model-dependent. It is critically important to characterize those model biases so they can be reduced appropriately. In this study, the conditional nonlinear optimal perturbation (CNOP)-based approach was applied to an intermediate coupled model (ICM) equipped with a four-dimensional variational data assimilation technique. The CNOP-based approach was used to quantify prediction errors that can be attributed to initial conditions (ICs) and model parameters (MPs). Two key MPs were considered in the ICM: one represents the intensity of the thermocline effect, and the other represents the relative coupling intensity between the ocean and atmosphere. Two experiments were performed to illustrate the effects of error corrections, one with a standard simulation and another with an optimized simulation in which errors in the ICs and MPs derived from the CNOP-based approach were optimally corrected. The results indicate that simulations of the 2015 El Niño event can be effectively improved by using CNOP-derived error corrections. In particular, the El Niño intensity in late 2015 was adequately captured when simulations were started from early 2015. Quantitatively, the Niño3.4 SST index simulated in Dec. 2015 increased to 2.8 °C in the optimized simulation, compared with only 1.5 °C in the standard simulation. The feasibility and effectiveness of using the CNOP-based technique to improve ENSO simulations are demonstrated in the context of the 2015 El Niño event. The limitations and further applications are also discussed.
Del Bene, Janet E; Elguero, José
2006-08-01
Ab initio equation-of-motion coupled cluster calculations have been carried out to evaluate one-, two-, and three-bond 13C-13C, 15N-13C, and 31P-13C coupling constants in benzene, pyridine, pyridinium, phosphinine, and phosphininium. The introduction of N or P heteroatoms into the aromatic ring not only changes the magnitudes of the corresponding X-C coupling constants (J, for X = C, N, or P) but also the signs and magnitudes of the corresponding reduced coupling constants (K). Protonation of the heteroatoms also produces dramatic changes in coupling constants and, by removing the lone pair of electrons from the sigma-electron framework, leads to the same signs for the corresponding reduced coupling constants of benzene, pyridinium, and phosphininium. C-C coupling constants are rather insensitive to the presence of the heteroatoms and to protonation. All terms that contribute to the total coupling constant (except for the diamagnetic spin-orbit (DSO) term) must be computed if good agreement with experimental data is to be obtained. Copyright 2006 John Wiley & Sons, Ltd.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth’s surface, which amplifies the conservation error associated with the surface moisture flux correction.
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.
Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)
NASA Astrophysics Data System (ADS)
Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng
2018-06-01
The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. 
We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
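One of the fixers mentioned above, for the error introduced by clipping negative water concentrations, can be sketched as follows. Plain clipping creates mass out of nothing; a mass-conserving variant removes the clipped deficit from the remaining positive cells so that the mass-weighted column total is unchanged. The layer values and weights below are invented, and this simple proportional rescaling is only one plausible form of such a fixer, not necessarily the one used in EAMv1.

```python
import numpy as np

def clip_conserving(q, dp):
    """Clip negatives in mixing ratio q while conserving sum(q * dp)."""
    q = np.asarray(q, dtype=float)
    total = np.sum(q * dp)                  # column total before fixing
    q_pos = np.clip(q, 0.0, None)
    excess = np.sum(q_pos * dp) - total     # spurious mass created by clipping
    scale = 1.0 - excess / np.sum(q_pos * dp)
    return q_pos * scale                    # rescale positives to restore total

dp = np.array([10.0, 20.0, 30.0, 40.0])    # layer pressure thicknesses
q = np.array([0.003, -0.001, 0.004, 0.002])
q_fixed = clip_conserving(q, dp)

print(np.all(q_fixed >= 0))                              # no negatives remain
print(np.isclose(np.sum(q_fixed * dp), np.sum(q * dp)))  # column mass conserved
```

As the abstract notes, such fixers are remedies rather than root-cause solutions: they restore the budget without removing the numerical source of the negatives.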
A proposed atom interferometry determination of G at 10⁻⁵ using a cold atomic fountain
NASA Astrophysics Data System (ADS)
Rosi, G.
2018-02-01
In precision metrology, the determination of the Newtonian gravitational constant G remains a real problem, since its history is plagued by large unexplained discrepancies among many independent experiments. In this paper, we propose a novel experimental setup for measuring G with a relative accuracy of 10⁻⁵, using a standard cold atomic fountain and matter-wave interferometry. We discuss in detail the major sources of systematic error and provide the expected statistical uncertainty. The feasibility of determining G at the 10⁻⁶ level is also discussed.
Effect of patient setup errors on simultaneously integrated boost head and neck IMRT treatment plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siebers, Jeffrey V.; Keall, Paul J.; Wu Qiuwen
2005-10-01
Purpose: The purpose of this study is to determine dose delivery errors that could result from random and systematic setup errors for head-and-neck patients treated using the simultaneous integrated boost (SIB)-intensity-modulated radiation therapy (IMRT) technique. Methods and Materials: Twenty-four patients who participated in an intramural Phase I/II parotid-sparing IMRT dose-escalation protocol using the SIB treatment technique had their dose distributions reevaluated to assess the impact of random and systematic setup errors. The dosimetric effect of random setup error was simulated by convolving the two-dimensional fluence distribution of each beam with the random setup error probability density distribution. Random setup errors of σ = 1, 3, and 5 mm were simulated. Systematic setup errors were simulated by randomly shifting the patient isocenter along each of the three Cartesian axes, with each shift selected from a normal distribution. Systematic setup error distributions with Σ = 1.5 and 3.0 mm along each axis were simulated. Combined systematic and random setup errors were simulated for Σ = σ = 1.5 and 3.0 mm along each axis. For each dose calculation, the gross tumor volume (GTV) dose received by 98% of the volume (D98), the clinical target volume (CTV) D90, nodes D90, cord D2, parotid D50, and parotid mean dose were evaluated with respect to the plan used for treatment, both for the structure dose and for an effective planning target volume (PTV) with a 3-mm margin. Results: Simultaneous integrated boost-IMRT head-and-neck treatment plans were found to be less sensitive to random setup errors than to systematic setup errors. For random-only errors, dose errors exceeded 3% only when the random setup error σ exceeded 3 mm.
Simulated systematic setup errors with Σ = 1.5 mm resulted in approximately 10% of plans having more than a 3% dose error, whereas Σ = 3.0 mm resulted in half of the plans having more than a 3% dose error and 28% with a 5% dose error. Combined random and systematic dose errors with Σ = σ = 3.0 mm resulted in more than 50% of plans having at least a 3% dose error and 38% of the plans having at least a 5% dose error. Evaluation with respect to a 3-mm expanded PTV reduced the observed dose deviations greater than 5% for the Σ = σ = 3.0 mm simulations to 5.4% of the plans simulated. Conclusions: Head-and-neck SIB-IMRT dosimetric accuracy would benefit from methods to reduce patient systematic setup errors. When GTV, CTV, or nodal volumes are used for dose evaluation, plans simulated including the effects of random and systematic errors deviate substantially from the nominal plan. The use of PTVs for dose evaluation in the nominal plan improves agreement with evaluated GTV, CTV, and nodal dose values under simulated setup errors. PTV concepts should be used for SIB-IMRT head-and-neck squamous cell carcinoma patients, although the size of the margins may be less than those used with three-dimensional conformal radiation therapy.
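The convolution model for random setup error used above can be sketched in one dimension: blur an idealized fluence profile with a Gaussian whose standard deviation equals the setup-error σ. The profile, grid, and field size below are invented for illustration.

```python
import numpy as np

# Idealized 1D open-field fluence on a 1 mm grid.
dx = 1.0                                   # grid spacing (mm)
x = np.arange(-50, 51) * dx
fluence = ((x > -20) & (x < 20)).astype(float)

# Gaussian setup-error PDF with sigma = 3 mm, normalized to unit area.
sigma = 3.0
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()

# The expected fluence under random setup error is the convolution.
blurred = np.convolve(fluence, kernel, mode="same")

# Convolution conserves total fluence but softens the field penumbra.
print(np.isclose(blurred.sum(), fluence.sum()))
print(blurred.max() <= 1.0, blurred[x == 25].item() > 0.01)
```

The blurred profile spills dose beyond the nominal field edge and rounds the shoulders, which is why random errors mainly affect gradients while systematic shifts displace the whole distribution.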
Impact of lateral boundary conditions on regional analyses
NASA Astrophysics Data System (ADS)
Chikhar, Kamel; Gauthier, Pierre
2017-04-01
Regional and global climate models are usually validated by comparison to derived observations or reanalyses. Using a model within a data assimilation system instead allows a direct comparison to observations through the production of its own analyses, which may reveal systematic errors. In this study, regional analyses over North America are produced based on the fifth-generation Canadian Regional Climate Model (CRCM5) combined with the variational data assimilation system of the Meteorological Service of Canada (MSC). CRCM5 is driven at its boundaries by global analyses from ERA-Interim or produced with the global configuration of the CRCM5. Assimilation cycles for the months of January and July 2011 revealed systematic errors in winter through large values in the mean analysis increments. This bias is attributed to the coupling of the lateral boundary conditions of the regional model with the driving data, particularly over the northern boundary, where a rapidly changing large-scale circulation created significant cross-boundary flows. Increasing the time frequency of the lateral driving and applying a large-scale spectral nudging significantly improved the circulation through the lateral boundaries, which translated into much better agreement with observations.
Effective Inertial Frame in an Atom Interferometric Test of the Equivalence Principle
NASA Astrophysics Data System (ADS)
Overstreet, Chris; Asenbaum, Peter; Kovachy, Tim; Notermans, Remy; Hogan, Jason M.; Kasevich, Mark A.
2018-05-01
In an ideal test of the equivalence principle, the test masses fall in a common inertial frame. A real experiment is affected by gravity gradients, which introduce systematic errors by coupling to initial kinematic differences between the test masses. Here we demonstrate a method that reduces the sensitivity of a dual-species atom interferometer to initial kinematics by using a frequency shift of the mirror pulse to create an effective inertial frame for both atomic species. Using this method, we suppress the gravity-gradient-induced dependence of the differential phase on initial kinematic differences by two orders of magnitude and precisely measure these differences. We realize a relative precision of Δg/g ≈ 6 × 10⁻¹¹ per shot, which improves on the best previous result for a dual-species atom interferometer by more than three orders of magnitude. By reducing gravity-gradient systematic errors to one part in 10¹³, these results pave the way for an atomic test of the equivalence principle at an accuracy comparable with state-of-the-art classical tests.
Asymptotic safety of higher derivative quantum gravity non-minimally coupled with a matter system
NASA Astrophysics Data System (ADS)
Hamada, Yuta; Yamada, Masatoshi
2017-08-01
We study the asymptotic safety of models of higher-derivative quantum gravity with and without matter. The beta functions are derived by utilizing the functional renormalization group, and non-trivial fixed points are found. It turns out that all couplings in the gravity sector, namely the cosmological constant, the Newton constant, and the R² and R_μν² coupling constants, are relevant in the case of higher-derivative pure gravity. For the Higgs-Yukawa model non-minimally coupled to higher-derivative gravity, we find a stable fixed point at which the scalar-quartic and Yukawa coupling constants become relevant. The relevant Yukawa coupling is crucial to realizing finite values of the Yukawa coupling constants in the standard model.
Guan, W; Meng, X F; Dong, X M
2014-12-01
Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error use only one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce constant acceleration and vibration simultaneously, and we tested the rectification error due to the composite accelerations. First, we deduced an expression for the rectification error from the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with the rectification error model. Then a detailed experimental procedure and testing results are described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of vibration. The experimental results showed the distinctive characteristics of the rectification error caused by the composite accelerations. The linear relation between the constant acceleration and the rectification error was demonstrated. The experimental procedure and results presented here can be referenced for investigating the characteristics of accelerometers with multiple inputs.
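The origin of rectification error can be sketched with a generic second-order sensor model (the paper's static model is not reproduced here; the coefficients below are invented). A quadratic nonlinearity K₂·a² turns zero-mean vibration into a DC offset of K₂·A²/2 on top of the response to the constant acceleration.

```python
import numpy as np

# Illustrative accelerometer model: output = K1*a + K2*a^2.
K1, K2 = 1.0, 0.01                 # scale factor and 2nd-order coefficient
a0, A, f = 5.0, 2.0, 50.0          # constant accel. (g), vibration amp (g) / freq (Hz)
t = np.linspace(0, 1, 100_000, endpoint=False)
a = a0 + A * np.sin(2 * np.pi * f * t)      # composite input

output = K1 * a + K2 * a ** 2
# Rectification error: mean output minus the response to a0 alone.
rectification = output.mean() - (K1 * a0 + K2 * a0 ** 2)
print(round(rectification, 4))     # equals K2 * A^2 / 2 = 0.02
```

Because the offset scales with A² and with the nonlinearity coefficient, characterizing it requires simultaneous constant and vibratory inputs, which is the motivation for the DTC setup described above.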
Moodie, Sheila; Pietrobon, Jonathan; Rall, Eileen; Lindley, George; Eiten, Leisha; Gordey, Dave; Davidson, Lisa; Moodie, K Shane; Bagatto, Marlene; Haluschak, Meredith Magathan; Folkeard, Paula; Scollie, Susan
2016-03-01
Real-ear-to-coupler difference (RECD) measurements are used for the purposes of estimating degree and configuration of hearing loss (in dB SPL ear canal) and predicting hearing aid output from coupler-based measures. Accurate measurements of hearing threshold, derivation of hearing aid fitting targets, and predictions of hearing aid output in the ear canal assume consistent matching of the RECD coupling procedure (i.e., foam tip or earmold) with that used during assessment and in verification of the hearing aid fitting. When there is a mismatch between these coupling procedures, errors are introduced. The goal of this study was to quantify the systematic difference in measured RECD values obtained when using a foam tip versus an earmold with various tube lengths. Assuming that systematic errors exist, the second goal was to investigate the use of a foam tip to earmold correction for the purposes of improving fitting accuracy when mismatched RECD coupling conditions occur (e.g., foam tip at assessment, earmold at verification). Eighteen adults and 17 children (age range: 3-127 mo) participated in this study. Data were obtained using simulated ears of various volumes and earmold tubing lengths and from patients using their own earmolds. Derived RECD values based on simulated ear measurements were compared with RECD values obtained for adult and pediatric ears for foam tip and earmold coupling. Results indicate that differences between foam tip and earmold RECDs are consistent across test ears for adults and children, which supports the development of a correction between foam tip and earmold couplings for RECDs that can be applied across individuals. The foam tip to earmold correction values developed in this study can be used to provide improved estimations of earmold RECDs.
This may support better accuracy in acoustic transforms related to transforming thresholds and/or hearing aid coupler responses to ear canal sound pressure level for the purposes of fitting behind-the-ear hearing aids. American Academy of Audiology.
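The correction described above can be sketched as a simple per-frequency addition. The frequencies and correction values below are hypothetical placeholders, not the study's published corrections:

```python
# Sketch: applying a foam-tip-to-earmold RECD correction per frequency.
# A hearing aid's coupler output is converted to ear-canal SPL by adding the
# RECD; if the RECD was measured with a foam tip but the fitting uses an
# earmold, a correction is added first. All dB values here are hypothetical.
FREQS_HZ = [500, 1000, 2000, 4000]
FOAM_TO_EARMOLD_DB = {500: 1.5, 1000: 2.0, 2000: 2.5, 4000: 3.0}  # hypothetical

def earmold_recd(foam_recd_db):
    """Estimate the earmold RECD from a foam-tip RECD measurement (dB)."""
    return {f: foam_recd_db[f] + FOAM_TO_EARMOLD_DB[f] for f in FREQS_HZ}

foam = {500: 4.0, 1000: 6.0, 2000: 8.0, 4000: 10.0}  # hypothetical foam-tip RECD
corrected = earmold_recd(foam)
```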
Artificial neural networks for processing fluorescence spectroscopy data in skin cancer diagnostics
NASA Astrophysics Data System (ADS)
Lenhardt, L.; Zeković, I.; Dramićanin, T.; Dramićanin, M. D.
2013-11-01
Over the years various optical spectroscopic techniques have been widely used as diagnostic tools in the discrimination of many types of malignant diseases. Recently, synchronous fluorescent spectroscopy (SFS) coupled with chemometrics has been applied in cancer diagnostics. The SFS method involves simultaneous scanning of both emission and excitation wavelengths while keeping the interval of wavelengths (constant-wavelength mode) or frequencies (constant-energy mode) between them constant. This method is fast, relatively inexpensive, sensitive and non-invasive. Total synchronous fluorescence spectra of normal skin, nevus and melanoma samples were used as input for training of artificial neural networks. Two different types of artificial neural networks were trained, the self-organizing map and the feed-forward neural network. Histopathology results of investigated skin samples were used as the gold standard for network output. Based on the obtained classification success rate of neural networks, we concluded that both networks provided high sensitivity with classification errors between 2 and 4%.
NASA Astrophysics Data System (ADS)
Varberg, Thomas D.; Gray, Jeffrey A.; Field, Robert W.; Merer, Anthony J.
1992-12-01
The A⁷Π–X⁷Σ⁺ (0, 0) band of MnH at 568 nm has been recorded by laser fluorescence excitation spectroscopy. The original rotational analysis of Nevin [Proc. R. Irish Acad. 48A, 1-45 (1942); 50A, 123-137 (1945)] has been extended, with some corrections at low J. Systematic internal hyperfine perturbations in the X⁷Σ⁺ state, caused by the ΔN = 0, ΔJ = ±1 matrix elements of the ⁵⁵Mn hyperfine term in the Hamiltonian, have been observed in all seven electron-spin components over the entire range of N″ studied. These perturbations destroy the "goodness" of J″ as a quantum number, giving rise to hyperfine-induced ΔJ = ±2 rotational branches and to observable energy shifts of the most severely affected levels. The A⁷Π state, with A = 40.5 cm⁻¹ and B = 6.35 cm⁻¹, evolves rapidly from Hund's case (a) to case (b) coupling, which produces anomalous branch patterns at low J. A total of 156 rotational branches have been identified and fitted by least squares to an effective Hamiltonian, providing precise values for the rotational and fine-structure constants (1σ errors in units of the last digit in parentheses). The fine structures of the A⁷Π and X⁷Σ⁺ states confirm the assignment of the A ← X transition as Mn 4pπ ← 4sσ in the presence of a spectator, nonbonding Mn 3d⁵ (⁶S) open core.
XYZ-like spectra from Laplace sum rule at N2LO in the chiral limit
NASA Astrophysics Data System (ADS)
Albuquerque, R.; Narison, S.; Fanomezana, F.; Rabemananjara, A.; Rabetiarivony, D.; Randriamanatrika, G.
2016-12-01
We present new compact integrated expressions for the QCD spectral functions of heavy-light molecules and four-quark XYZ-like states at lowest order (LO) of perturbative (PT) QCD and up to d = 8 condensates of the Operator Product Expansion (OPE). Then, by including next-to-next-to-leading order (N2LO) PT QCD corrections, which we estimate by assuming the factorization of the four-quark spectral functions, we improve previous LO results from QCD spectral sum rules (QSSR) for the XYZ-like masses and decay constants, which at LO suffer from an ill-defined heavy quark mass. PT N3LO corrections are estimated using a geometric growth of the PT series and are included in the systematic errors. Our optimal results, based on stability criteria, are summarized in Tables 11-14 and compared, in Sec. 10, with experimental candidates and some LO QSSR results. We conclude that the masses of the observed XZ states are compatible with (almost) pure JPC = 1+±, 0++ molecule and/or four-quark states. Those of the 1-±, 0-± molecule/four-quark states are about 1.5 GeV above the Yc,b experimental candidates and the hadronic thresholds. We also find that the couplings of these exotics to the associated interpolating currents are weaker than those of ordinary D, B mesons (fDD ≈ 10⁻³ fD) and may behave numerically as 1/m̄b^(3/2) (respectively 1/m̄b) for the 1+, 0+ (respectively 1-, 0-) states, which may stimulate further theoretical studies of these decay constants.
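The geometric-growth estimate of the uncalculated N3LO term mentioned above is a standard heuristic; schematically (the coefficients below are symbolic, not the paper's values):

```latex
R = 1 + c_1\, a_s + c_2\, a_s^2 + c_3\, a_s^3 + \dots,
\qquad
c_3 \simeq c_2\left(\frac{c_2}{c_1}\right),
```

with the estimated term $\pm\, c_3\, a_s^3$ folded into the systematic error rather than added to the central value.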
Pierens, Gregory K; Venkatachalam, Taracad K; Reutens, David C
2016-12-01
Two- and three-bond coupling constants (²JHC and ³JHC) were determined for a series of 12 substituted cinnamic acids using a selective 2D inphase/antiphase (IPAP) heteronuclear single quantum multiple bond correlation (HSQMBC) experiment and 1D proton-coupled ¹³C NMR experiments. The coupling constants from the two methods were compared and found to give very similar values. The results showed coupling constant values ranging from 1.7 to 9.7 Hz and from 1.0 to 9.6 Hz for the IPAP-HSQMBC and the direct ¹³C NMR experiments, respectively. The experimental coupling constants were compared with density functional theory (DFT) calculated values and were found to be in good agreement for ³JHC. However, the DFT method underestimated the ²JHC coupling constants. Knowing the limitations of the measurement and calculation of these multibond coupling constants will add confidence to the assignment of conformation or stereochemistry in complex molecules such as natural products. Copyright © 2016 John Wiley & Sons, Ltd.
An approach to get thermodynamic properties from speed of sound
NASA Astrophysics Data System (ADS)
Núñez, M. A.; Medina, L. A.
2017-01-01
An approach for estimating thermodynamic properties of gases from the speed of sound u is proposed. The squared speed u², the compression factor Z, and the molar heat capacity at constant volume C_V are connected by two coupled nonlinear partial differential equations. Previous approaches to solving this system differ in the conditions imposed over the temperature range [Tmin, Tmax]. In this work we propose the use of Dirichlet boundary conditions at Tmin and Tmax. Substituting the virial series of the compression factor, Z = 1 + Bρ + Cρ² + …, and of the other properties reduces the problem to the solution of a recursive set of linear ordinary differential equations for the virial coefficients B, C, …. Analytic solutions of the B equation for argon are used to study the stability of our approach and of previous ones under perturbation errors in the input data. The results show that the approach yields B with a relative error bounded essentially by that of the boundary values, while the error of other approaches can be some orders of magnitude larger.
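The Dirichlet boundary-condition treatment above can be illustrated on a generic linear second-order ODE. This is not the paper's actual B(T) equation (which couples to the speed-of-sound data); it is only a minimal finite-difference sketch of solving y″(T) = f(T) with fixed values at both ends of the temperature interval:

```python
import numpy as np

# Generic sketch: solve y''(T) = f(T) on [Tmin, Tmax] with Dirichlet
# boundary conditions y(Tmin) = ya, y(Tmax) = yb, by central finite
# differences, which yields a tridiagonal linear system for the interior
# points. Illustrates the boundary-condition treatment only.
def solve_dirichlet(f, Tmin, Tmax, ya, yb, n=100):
    T = np.linspace(Tmin, Tmax, n)
    h = T[1] - T[0]
    A = np.zeros((n - 2, n - 2))
    np.fill_diagonal(A, -2.0)        # main diagonal
    np.fill_diagonal(A[1:], 1.0)     # sub-diagonal
    np.fill_diagonal(A[:, 1:], 1.0)  # super-diagonal
    rhs = f(T[1:-1]) * h**2
    rhs[0] -= ya                     # fold boundary values into the RHS
    rhs[-1] -= yb
    y = np.empty(n)
    y[0], y[-1] = ya, yb
    y[1:-1] = np.linalg.solve(A, rhs)
    return T, y

# Sanity check: y'' = 0 with y(0) = 0, y(1) = 1 has the exact solution y = T.
T, y = solve_dirichlet(lambda t: np.zeros_like(t), 0.0, 1.0, 0.0, 1.0)
```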
Scalar-tensor theory of gravitation with negative coupling constant
NASA Technical Reports Server (NTRS)
Smalley, L. L.; Eby, P. B.
1976-01-01
The possibility of a Brans-Dicke scalar-tensor gravitation theory with a negative coupling constant is considered. The admissibility of a negative-coupling theory is investigated, and a simplified cosmological solution is obtained which allows a negative derivative of the gravitation constant. It is concluded that a Brans-Dicke theory with a negative coupling constant can be a viable alternative to general relativity and that a large negative value for the coupling constant seems to bring the original scalar-tensor theory into close agreement with perihelion-precession results in view of recent observations of small solar oblateness.
Direct evidence for a position input to the smooth pursuit system.
Blohm, Gunnar; Missal, Marcus; Lefèvre, Philippe
2005-07-01
When objects move in our environment, the orientation of the visual axis in space requires the coordination of two types of eye movements: saccades and smooth pursuit. The principal input to the saccadic system is position error, whereas it is velocity error for the smooth pursuit system. Recently, it has been shown that catch-up saccades to moving targets are triggered and programmed by using velocity error in addition to position error. Here, we show that, when a visual target is flashed during ongoing smooth pursuit, it evokes a smooth eye movement toward the flash. The velocity of this evoked smooth movement is proportional to the position error of the flash; it is neither influenced by the velocity of the ongoing smooth pursuit eye movement nor by the occurrence of a saccade, but the effect is absent if the flash is ignored by the subject. Furthermore, the response started around 85 ms after the flash presentation and decayed with an average time constant of 276 ms. Thus this is the first direct evidence of a position input to the smooth pursuit system. This study shows further evidence for a coupling between saccadic and smooth pursuit systems. It also suggests that there is an interaction between position and velocity error signals in the control of more complex movements.
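The evoked response described above (onset ~85 ms after the flash, exponential decay with a ~276 ms time constant, velocity proportional to position error) can be sketched as a simple model. The gain constant below is a hypothetical placeholder, not a fitted value from the study:

```python
import math

# Sketch of the flash-evoked smooth eye movement: zero before a ~85 ms
# latency, then a velocity proportional to the flash's position error that
# decays with a ~276 ms time constant. GAIN is hypothetical.
LATENCY_S = 0.085
TAU_S = 0.276
GAIN = 1.0  # hypothetical velocity per degree of position error

def evoked_velocity(position_error_deg, t_s):
    """Evoked smooth eye velocity t_s seconds after the flash."""
    if t_s < LATENCY_S:
        return 0.0
    return GAIN * position_error_deg * math.exp(-(t_s - LATENCY_S) / TAU_S)

v_small = evoked_velocity(5.0, 0.1)    # response to a 5 deg error
v_large = evoked_velocity(10.0, 0.1)   # doubling the error doubles the velocity
```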
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows monotonic convergence of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy of either the numerical or the modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. It was found that the new three-equation method is robust, as it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS)-based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes. However, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
Errors in causal inference: an organizational schema for systematic error and random error.
Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji
2016-11-01
To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic error result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.
Sadybekov, Arman; Krylov, Anna I.
2017-07-07
A theoretical approach for calculating core-level states in the condensed phase is presented. The approach is based on equation-of-motion coupled-cluster (EOM-CC) theory and the effective fragment potential (EFP) method. By introducing an approximate treatment of double excitations in the EOM-CCSD (EOM-CC with single and double substitutions) ansatz, we address the poor convergence issues that are encountered for core-level states and significantly reduce computational costs. While the approximations introduce relatively large errors in the absolute values of transition energies, the errors are systematic. Consequently, chemical shifts, i.e., changes in ionization energies relative to reference systems, are reproduced reasonably well. By using different protonation forms of solvated glycine as a benchmark system, we show that our protocol is capable of reproducing the experimental chemical shifts with quantitative accuracy. The results demonstrate that chemical shifts are very sensitive to solvent interactions and that explicit treatment of the solvent, such as EFP, is essential for achieving quantitative accuracy.
Sursyakova, Viktoria V; Burmakina, Galina V; Rubaylo, Anatoly I
2016-08-01
The influence of the analyte concentration, relative to the concentration of a charged ligand in the background electrolyte (BGE), on the measured values of electrophoretic mobilities and stability constants (association, binding, or formation constants) is studied using capillary electrophoresis (CE) and a dynamic mathematical simulator of CE. The study is performed using labile complexes (with fast kinetics) of iron(III) and 5-sulfosalicylate ions (ISC) as an example. It is shown that because the ligand concentration in the analyte zone is not equal to that in the BGE, considerable changes in the migration times and electrophoretic mobilities are observed, resulting in systematic errors in the stability constant values. Of crucial significance is the slope of the dependence of the electrophoretic mobility decrease on the equilibrium ligand concentration. Without prior information on this dependence, to evaluate the stability constants accurately for similar systems the total ligand concentration must be at least 50-100 times higher than the total analyte concentration. Fronting of the experimental ISC peak, and the difference in direction between the experimental and simulated pH dependences of the electrophoretic mobility decrease, suggest interaction with the capillary wall. © The Author 2016. Published by Oxford University Press. All rights reserved.
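The bias mechanism above can be illustrated with the standard effective-mobility model used in affinity CE for a labile 1:1 complex. This is a deliberate simplification (the Fe(III)/5-sulfosalicylate system forms higher-order complexes), and all numerical values are hypothetical:

```python
# Sketch: effective mobility of an analyte forming a labile 1:1 complex
# with ligand L. If the ligand concentration in the analyte zone differs
# from that in the BGE, the mobility computed with the BGE concentration
# is biased, which propagates into the fitted stability constant.
def effective_mobility(mu_free, mu_complex, K, ligand_conc):
    """mu_eff = (mu_free + K[L]*mu_complex) / (1 + K[L])."""
    kl = K * ligand_conc
    return (mu_free + kl * mu_complex) / (1.0 + kl)

# Hypothetical values: free/complex mobilities (m^2/V/s), K (L/mol), [L] (mol/L).
mu_bge = effective_mobility(3.0e-8, 1.0e-8, 1.0e3, 5.0e-3)   # ligand at BGE level
mu_zone = effective_mobility(3.0e-8, 1.0e-8, 1.0e3, 2.0e-3)  # depleted analyte zone
```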
New methods for B meson decay constants and form factors from lattice NRQCD
NASA Astrophysics Data System (ADS)
Hughes, C.; Davies, C. T. H.; Monahan, C. J.; Hpqcd Collaboration
2018-03-01
We determine the normalization of scalar and pseudoscalar current operators made from nonrelativistic b quarks and highly improved staggered light quarks in lattice quantum chromodynamics (QCD) through O(αs) and ΛQCD/mb. We use matrix elements of these operators to extract B meson decay constants and form factors, and then compare to those obtained using the standard vector and axial-vector operators. This provides a test of systematic errors in the lattice QCD determination of the B meson decay constants and form factors. We provide a new value for the B and Bs meson decay constants from lattice QCD calculations on ensembles that include u, d, s, and c quarks in the sea and that have the u/d quark mass going down to its physical value. Our results are fB = 0.196(6) GeV, fBs = 0.236(7) GeV, and fBs/fB = 1.207(7), agreeing well with earlier results using the temporal axial current. By combining with these previous results, we provide updated values of fB = 0.190(4) GeV, fBs = 0.229(5) GeV, and fBs/fB = 1.206(5).
Scalable video transmission over Rayleigh fading channels using LDPC codes
NASA Astrophysics Data System (ADS)
Bansal, Manu; Kondi, Lisimachos P.
2005-03-01
In this paper, we investigate the problem of efficiently utilizing the available resources for video transmission over wireless channels while maintaining good decoded video quality and resilience to channel impairments. Our system consists of a video codec based on the 3-D set partitioning in hierarchical trees (3-D SPIHT) algorithm and employs two different schemes using low-density parity-check (LDPC) codes for channel error protection. The first method uses the serial concatenation of a constant-rate LDPC code and rate-compatible punctured convolutional (RCPC) codes. A cyclic redundancy check (CRC) is used to detect transmission errors. In the other scheme, we use a product code structure consisting of a constant-rate LDPC/CRC code across the rows of the blocks of source data and an erasure-correcting systematic Reed-Solomon (RS) code as the column code. In both schemes, we use fixed-length source packets protected with unequal forward error correction coding, ensuring strictly decreasing protection across the bitstream. A Rayleigh flat-fading channel with additive white Gaussian noise (AWGN) is modeled for the transmission. A rate-distortion optimization algorithm is developed and used to select the source coding and channel coding rates via Lagrangian optimization. The experimental results demonstrate the effectiveness of this system under different wireless channel conditions, and both proposed methods (LDPC+RCPC/CRC and RS+LDPC/CRC) outperform more conventional schemes such as those employing RCPC/CRC.
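The Lagrangian rate selection above can be sketched as a discrete search over operating points minimizing D + λR. The operating-point table below is entirely hypothetical, for illustration only:

```python
# Sketch of Lagrangian rate-distortion optimization: choose the
# (source rate, channel code rate) pair minimizing D + lambda * R over a
# table of operating points. Values are hypothetical (rates in kbps,
# distortion in MSE); the paper's actual tables come from its codec/channel.
OPERATING_POINTS = [
    # (source_kbps, channel_code_rate, total_kbps, expected_distortion)
    (100, 1 / 2, 200, 40.0),
    (150, 1 / 2, 300, 30.0),
    (150, 2 / 3, 225, 35.0),
    (200, 2 / 3, 300, 28.0),
]

def best_point(lam):
    """Operating point minimizing the Lagrangian cost D + lam * R."""
    return min(OPERATING_POINTS, key=lambda p: p[3] + lam * p[2])

low_rate = best_point(1.0)     # large lambda: rate dominates the cost
low_dist = best_point(0.001)   # small lambda: distortion dominates
```

Sweeping λ traces out the convex hull of the rate-distortion points, which is how the bit budget is matched in practice.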
NASA Astrophysics Data System (ADS)
Amemiya, Naoyuki; Tominaga, Naoki; Toyomoto, Ryuki; Nishimoto, Takuma; Sogabe, Yusuke; Yamano, Satoshi; Sakamoto, Hisaki
2018-07-01
The shielding-current-induced field is a serious concern for the applications of coated conductors to magnets. The striation of the coated conductor is one of the countermeasures, but it is effective only after the decay of the coupling current, which is characterised with the coupling time constant. In a non-twisted striated coated conductor, the coupling time constant is determined primarily by its length and the transverse resistance between superconductor filaments, because the coupling current could flow along its entire length. We measured and numerically calculated the frequency dependences of magnetisation losses in striated and copper-plated coated conductors with various lengths and their stacks at 77 K and determined their coupling time constants. Stacked conductors simulate the turns of a conductor wound into a pancake coil. Coupling time constants are proportional to the square of the conductor length. Stacking striated coated conductors increases the coupling time constants because the coupling currents in stacked conductors are coupled to one another magnetically to increase the mutual inductances for the coupling current paths. We carried out the numerical electromagnetic field analysis of conductors wound into pancake coils and determined their coupling time constants. They can be explained by the length dependence and mutual coupling effect observed in stacked straight conductors. Even in pancake coils with practical numbers of turns, i.e. conductor lengths, the striation is effective to reduce the shielding-current-induced fields for some dc applications.
Systematic Error Study for ALICE charged-jet v2 Measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heinz, M.; Soltz, R.
We study the treatment of systematic errors in the determination of v2 for charged jets in √s_NN = 2.76 TeV Pb-Pb collisions by the ALICE Collaboration. Working with the reported values and errors for the 0-5% centrality data, we evaluate the χ² according to the formulas given for the statistical and systematic errors, where the latter are separated into correlated and shape contributions. We reproduce both the χ² and p-values relative to a null (zero) result. We then re-cast the systematic errors into an equivalent covariance matrix and obtain identical results, demonstrating that the two methods are equivalent.
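The covariance-matrix re-casting above can be sketched for the simplest case of a fully correlated systematic added to diagonal statistical errors. The data points below are illustrative, not the ALICE measurements:

```python
import numpy as np

# Sketch: chi-square against a null (zero) hypothesis with an equivalent
# covariance matrix C = diag(stat^2) + s s^T, where s is a fully correlated
# systematic error vector. Values are illustrative only.
v2 = np.array([0.05, 0.04, 0.03])           # measured points (illustrative)
stat = np.array([0.01, 0.01, 0.02])         # statistical errors
sys_corr = np.array([0.005, 0.005, 0.005])  # fully correlated systematic

C = np.diag(stat**2) + np.outer(sys_corr, sys_corr)
r = v2.copy()                               # residuals relative to the null (zero) result
chi2 = float(r @ np.linalg.solve(C, r))
ndf = len(v2)
```

Including the correlated systematic enlarges C, so the resulting χ² can never exceed the statistical-errors-only value.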
Mancini, Alessio; Esposto, Giampaolo; Manfrini, Silvana; Rilli, Silvia; Tinti, Gessica; Carta, Giuseppe; Petrolati, Laura; Vidali, Matteo; Barocci, Simone
2018-05-01
The aim of this retrospective study is to evaluate the reliability and robustness of six glucose meters used for point-of-care testing in our wards, using a new protocol. During a 30-day study period, a total of 50 diabetes patients underwent venous blood sampling and glucose meter blood analysis. The results of the six glucose meters were compared with our laboratory reference assay. GlucoMen Plus (Menarini), with 82% of results acceptable, was the most robust glucose meter. Although the Passing-Bablok analysis demonstrated the presence of constant systematic errors and the Bland-Altman test highlighted a possible overestimation, the surveillance error grid analysis showed that this glucose meter can be used safely. We showed that portable glucose meters are not always reliable in routine clinical settings.
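The Bland-Altman comparison used above reduces to the mean difference (bias) and 95% limits of agreement between paired readings. The paired values below are hypothetical, not the study's data:

```python
import statistics

# Sketch: Bland-Altman bias and 95% limits of agreement between a glucose
# meter and the laboratory reference. Paired readings (mg/dL) are hypothetical.
meter = [92, 110, 145, 180, 250, 310]
lab = [95, 108, 150, 178, 260, 300]

diffs = [m - l for m, l in zip(meter, lab)]
bias = statistics.mean(diffs)           # systematic over/underestimation
sd = statistics.stdev(diffs)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

A nonzero bias with narrow limits indicates a constant systematic error; wide limits indicate poor agreement even if the bias is small.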
Isolation and characterization of high affinity aptamers against DNA polymerase iota.
Lakhin, Andrei V; Kazakov, Andrei A; Makarova, Alena V; Pavlov, Yuri I; Efremova, Anna S; Shram, Stanislav I; Tarantul, Viacheslav Z; Gening, Leonid V
2012-02-01
Human DNA polymerase iota (Pol ι) is an extremely error-prone enzyme whose fidelity depends on the sequence context of the template. Using the in vitro systematic evolution of ligands by exponential enrichment (SELEX) procedure, we obtained an oligoribonucleotide with high affinity for human Pol ι, named aptamer IKL5. We determined its dissociation constant with a homogeneous preparation of Pol ι and predicted its putative secondary structure. The aptamer IKL5 specifically inhibited the DNA-polymerase activity of the purified enzyme Pol ι, but not the DNA-polymerase activities of human DNA polymerases beta and kappa. IKL5 also suppressed the error-prone DNA-polymerase activity of Pol ι in cellular extracts of the tumor cell line SKOV-3. The aptamer IKL5 is useful for studies of the biological role of Pol ι and as a potential drug to suppress the increased activity of this enzyme in malignant cells.
An accurate ab initio quartic force field for ammonia
NASA Technical Reports Server (NTRS)
Martin, J. M. L.; Lee, Timothy J.; Taylor, Peter R.
1992-01-01
The quartic force field of ammonia is computed using basis sets of spdf/spd and spdfg/spdf quality and an augmented coupled-cluster method. After correcting for Fermi resonance, the computed fundamentals and ν4 overtones agree on average to better than 3 cm⁻¹ with the experimental ones, except for ν2. The discrepancy for ν2 is principally due to higher-order anharmonicity effects. The computed ω1, ω3, and ω4 confirm the recent experimental determination by Lehmann and Coy (1988) but are associated with smaller error bars. The discrepancy between the computed and experimental ω2 is far outside the expected error range, which is also attributed to higher-order anharmonicity effects not accounted for in the experimental determination. Spectroscopic constants are predicted for a number of symmetric- and asymmetric-top isotopomers of NH3.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bai, Sen; Li, Guangjun; Wang, Maojie
The purpose of this study was to investigate the effect of multileaf collimator (MLC) leaf position, collimator rotation angle, and accelerator gantry rotation angle errors on intensity-modulated radiotherapy plans for nasopharyngeal carcinoma. To compare dosimetric differences between the simulated plans and the clinical plans using evaluation parameters, 6 patients with nasopharyngeal carcinoma were selected for simulation of systematic and random MLC leaf position errors, collimator rotation angle errors, and accelerator gantry rotation angle errors. Dose distribution was highly sensitive to systematic MLC leaf position errors in response to field size. When the systematic MLC position errors were 0.5, 1, and 2 mm, respectively, the maximum values of the mean dose deviation, observed in the parotid glands, were 4.63%, 8.69%, and 18.32%, respectively. The dosimetric effect was comparatively small for systematic MLC shift errors. For random MLC errors up to 2 mm and collimator and gantry rotation angle errors up to 0.5°, the dosimetric effect was negligible. We suggest that quality control be conducted regularly for the MLC leaves, so as to ensure that systematic MLC leaf position errors are within 0.5 mm. Because the dosimetric effect of 0.5° collimator and gantry rotation angle errors is negligible, setting a proper threshold for allowed collimator and gantry rotation angle errors may increase treatment efficacy and reduce treatment time.
Fred L. Tobiason; Stephen S. Kelley; M. Mark Midland; Richard W. Hemingway
1997-01-01
The pyran-ring proton coupling constants for (+)-catechin have been experimentally determined in deuterated methanol over a temperature range of 213 K to 313 K. The experimental coupling constants were reproduced to within 0.04 Hz on average, at a 90 percent confidence limit, using the LAOCOON method. The temperature dependence of the coupling constants was reproduced from the...
Aquarius L-Band Microwave Radiometer: Three Years of Radiometric Performance and Systematic Effects
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey R.; Hong, Liang; Pellerano, Fernando A.
2015-01-01
The Aquarius L-band microwave radiometer is a three-beam pushbroom instrument designed to measure sea surface salinity. Results are analyzed for performance and systematic effects over three years of operation. The thermal control system maintains tight temperature stability, promoting good gain stability. The gain spectrum exhibits the expected orbital variations, with 1/f noise appearing at longer time periods. The on-board detection and integration scheme, coupled with the calibration algorithm, produces antenna temperatures with an NEDT of 0.16 K for 1.44-s samples. Nonlinearity is characterized before launch, and the derived correction is verified with cold-sky calibration data. Finally, a long-term drift is discovered in all channels, with 1-K amplitude and a 100-day time constant. Nonetheless, it is adeptly corrected using an exponential model.
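The exponential drift model above can be sketched directly from the quoted parameters (1-K amplitude, 100-day time constant). The saturating-exponential form and its sign are assumptions for illustration; the abstract does not give the exact functional form:

```python
import math

# Sketch of a long-term drift model with ~1 K amplitude and a ~100-day
# time constant. The saturating form A*(1 - exp(-t/tau)) is an assumption;
# the correction subtracts this drift from the antenna temperatures.
AMPLITUDE_K = 1.0
TAU_DAYS = 100.0

def drift_k(t_days):
    """Modeled drift (K) t_days after the start of the record."""
    return AMPLITUDE_K * (1.0 - math.exp(-t_days / TAU_DAYS))
```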
Validation and upgrading of physically based mathematical models
NASA Technical Reports Server (NTRS)
Duval, Ronald
1992-01-01
The validation of the results of physically-based mathematical models against experimental results was discussed. Systematic techniques are used for: (1) isolating subsets of the simulator mathematical model and comparing the response of each subset to its experimental response for the same input conditions; (2) evaluating the response error to determine whether it is the result of incorrect parameter values, incorrect structure of the model subset, or unmodeled external effects of cross coupling; and (3) modifying and upgrading the model and its parameter values to determine the most physically appropriate combination of changes.
Elevation Change of the Southern Greenland Ice Sheet from Satellite Radar Altimeter Data
NASA Technical Reports Server (NTRS)
Haines, Bruce J.
1999-01-01
Long-term changes in the thickness of the polar ice sheets are important indicators of climate change. Understanding the contributions to the global water mass balance from the accumulation or ablation of grounded ice in Greenland and Antarctica is considered crucial for determining the source of the approximately 2 mm/yr sea-level rise of the last century. Though the Antarctic ice sheet is much larger than its northern counterpart, the Greenland ice sheet is more likely to undergo dramatic changes in response to a warming trend. This can be attributed to the warmer Greenland climate, as well as a potential for amplification of a global warming trend in the polar regions of the Northern Hemisphere. In collaboration with Drs. Curt Davis and Craig Kluever of the University of Missouri, we are using data from satellite radar altimeters to measure changes in the elevation of the Southern Greenland ice sheet from 1978 to the present. Difficulties with systematic altimeter measurement errors, particularly in intersatellite comparisons, beset earlier studies of the Greenland ice sheet thickness. We use altimeter data collected contemporaneously over the global ocean to establish a reference for correcting ice-sheet data. In addition, the waveform data from the ice-sheet radar returns are reprocessed to better determine the range from the satellite to the ice surface. At JPL, we are focusing our efforts principally on the reduction of orbit errors and range biases in the measurement systems of the various altimeter missions. Our approach emphasizes global characterization and reduction of the long-period orbit errors and range biases using altimeter data from NASA's Ocean Pathfinder program. Along-track sea-height residuals are sequentially filtered and backward smoothed, and the radial orbit errors are modeled as sinusoids with a wavelength equal to one revolution of the satellite.
The amplitudes of the sinusoids are treated as exponentially-correlated noise processes with a time-constant of six days. Measurement errors (e.g., altimeter range bias) are simultaneously recovered as constant parameters. The corrections derived from the global ocean analysis are then applied over the Greenland ice sheet. The orbit error and measurement bias corrections for different missions are developed in a single framework to enable robust linkage of ice-sheet measurements from 1978 to the present. In 1998, we completed our re-evaluation of the 1978 Seasat and 1985-1989 Geosat Exact Repeat Mission data. The estimates of ice thickness over Southern Greenland (south of 72N and above 2000 m) from 1978 to 1988 show large regional variations (+/-18 cm/yr), but yield an overall rate of +1.5 +/- 0.5 cm/yr (one standard error). Accounting for systematic errors, the estimate may not be significantly different from the null growth rate. The average elevation change from 1978 to 1988 is too small to assess whether the Greenland ice sheet is undergoing a long-term change.
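The once-per-revolution orbit-error model above can be sketched as a linear least-squares fit of a sinusoid plus a constant range bias to along-track height residuals. The data here are synthetic; the actual processing uses a sequential filter/smoother with time-varying amplitudes rather than this single batch fit:

```python
import numpy as np

# Sketch: radial orbit error modeled as a once-per-revolution sinusoid
# (a*sin + b*cos of orbital phase) plus a constant range bias, recovered
# by linear least squares from synthetic along-track sea-height residuals.
rng = np.random.default_rng(0)
n = 200
phase = np.linspace(0.0, 4.0 * np.pi, n)        # two revolutions of orbital phase
true_a, true_b, true_bias = 0.30, -0.10, 0.05   # metres (synthetic truth)
resid = (true_a * np.sin(phase) + true_b * np.cos(phase) + true_bias
         + 0.01 * rng.standard_normal(n))       # 1-cm measurement noise

A = np.column_stack([np.sin(phase), np.cos(phase), np.ones(n)])
a_hat, b_hat, bias_hat = np.linalg.lstsq(A, resid, rcond=None)[0]
```

The recovered sinusoid amplitudes correct the orbit, while the constant term estimates the altimeter range bias, allowing measurements from different missions to be tied together.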
Measurement of a Cosmographic Distance Ratio with Galaxy and Cosmic Microwave Background Lensing.
Miyatake, Hironao; Madhavacheril, Mathew S; Sehgal, Neelima; Slosar, Anže; Spergel, David N; Sherwin, Blake; van Engelen, Alexander
2017-04-21
We measure the gravitational lensing shear signal around dark matter halos hosting constant mass galaxies using light sources at z∼1 (background galaxies) and at the surface of last scattering at z∼1100 (the cosmic microwave background). The galaxy shear measurement uses data from the CFHTLenS survey, and the microwave background shear measurement uses data from the Planck satellite. The ratio of shears from these cross-correlations provides a purely geometric distance measurement across the longest possible cosmological lever arm. This is because the matter distribution around the halos, including uncertainties in galaxy bias and systematic errors such as miscentering, cancels in the ratio for halos in thin redshift slices. We measure this distance ratio in three different redshift slices of the constant mass (CMASS) sample and combine them to obtain a 17% measurement of the distance ratio, r=0.390_{-0.062}^{+0.070}, at an effective redshift of z=0.53. This is consistent with the predicted ratio from the Planck best-fit cold dark matter model with a cosmological constant cosmology of r=0.419.
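The quoted consistency between the measured ratio r = 0.390 (+0.070/−0.062) and the predicted r = 0.419 can be checked with a simple Gaussian pull that respects the asymmetric errors; this is a generic sketch, not the authors' statistical procedure:

```python
def n_sigma(measured, err_plus, err_minus, predicted):
    """Gaussian pull of a prediction against an asymmetric measurement:
    use the upper error when the prediction lies above the central value,
    the lower error otherwise."""
    err = err_plus if predicted > measured else err_minus
    return (predicted - measured) / err
```

For the values in the abstract the pull is about 0.4σ, i.e., well within one standard deviation.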
Uncertainty Propagation in OMFIT
NASA Astrophysics Data System (ADS)
Smith, Sterling; Meneghini, Orso; Sung, Choongki
2017-10-01
A rigorous comparison of power balance fluxes and turbulent model fluxes requires the propagation of uncertainties in the kinetic profiles and their derivatives. Making extensive use of the python uncertainties package, the OMFIT framework has been used to propagate covariant uncertainties to provide an uncertainty in the power balance calculation from the ONETWO code, as well as through the turbulent fluxes calculated by the TGLF code. The covariant uncertainties arise from fitting 1D (constant on flux surface) density and temperature profiles and associated random errors with parameterized functions such as a modified tanh. The power balance and model fluxes can then be compared with quantification of the uncertainties. No effort is made at propagating systematic errors. A case study will be shown for the effects of resonant magnetic perturbations on the kinetic profiles and fluxes at the top of the pedestal. A separate attempt at modeling the random errors with Monte Carlo sampling will be compared to the method of propagating the fitting function parameter covariant uncertainties. Work supported by US DOE under DE-FC02-04ER54698, DE-FG2-95ER-54309, DE-SC 0012656.
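Covariant (linear) propagation of fit-parameter uncertainties through a fitted profile can be sketched as below. The tanh form, parameter values, and covariance matrix are illustrative assumptions, and the snippet uses a hand-rolled numerical Jacobian rather than the python `uncertainties` package that OMFIT employs:

```python
import math

def profile(psi, h, w, c):
    """Simplified 'modified tanh' pedestal profile (illustrative form):
    height h, width w, center c, flux coordinate psi."""
    return 0.5 * h * (1.0 - math.tanh((psi - c) / w))

def profile_variance(psi, p, cov):
    """Linear propagation: var(f) = J . cov . J^T via a numerical Jacobian."""
    eps, n = 1e-6, len(p)
    base = profile(psi, *p)
    J = []
    for i in range(n):
        q = list(p)
        q[i] += eps
        J.append((profile(psi, *q) - base) / eps)
    return sum(J[i] * cov[i][j] * J[j] for i in range(n) for j in range(n))
```

Off-diagonal covariance terms from the fit enter the double sum directly, which is what distinguishes covariant propagation from naive quadrature of independent errors.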
Optimal solutions for a biomathematical model for the evolution of smoking habit
NASA Astrophysics Data System (ADS)
Sikander, Waseem; Khan, Umar; Ahmed, Naveed; Mohyud-Din, Syed Tauseef
In this study, we apply the Variation of Parameters Method (VPM) coupled with an auxiliary parameter to obtain approximate solutions for the epidemic model for the evolution of smoking habit in a constant population. Convergence of the developed algorithm, namely VPM with an auxiliary parameter, is studied. Furthermore, a simple procedure is presented for obtaining an optimal value of the auxiliary parameter by minimizing the total residual error over the domain of the problem. Comparison of the obtained results with standard VPM shows that the auxiliary parameter is effective and reliable in controlling the convergence of the approximate solutions.
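Choosing the auxiliary parameter by minimizing the total residual error over the domain can be illustrated on a toy problem. The one-term trial solution and the grid search below are illustrative stand-ins, not the paper's VPM algorithm:

```python
def residual_sq(h, n=200):
    """Total squared residual of y' + y = 0, y(0) = 1 for the one-term
    trial solution y_h(t) = 1 - h*t on [0, 1] (trapezoid rule).
    The residual is R(t) = y' + y = 1 - h*(1 + t)."""
    def r(t):
        return 1.0 - h * (1.0 + t)
    dt = 1.0 / n
    s = 0.5 * (r(0.0) ** 2 + r(1.0) ** 2) + sum(r(i * dt) ** 2 for i in range(1, n))
    return s * dt

def optimal_h(lo=0.0, hi=2.0, step=1e-3):
    """Grid search for the auxiliary parameter minimizing the total residual."""
    best = min((residual_sq(lo + i * step), lo + i * step)
               for i in range(int((hi - lo) / step) + 1))
    return best[1]
```

For this quadratic residual the minimizer can also be found analytically (h = 9/14 ≈ 0.643), which makes the grid search easy to verify.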
Breakup effects on alpha spectroscopic factors of 16O
NASA Astrophysics Data System (ADS)
Adhikari, S.; Basu, C.; Sugathan, P.; Jhinghan, A.; Behera, B. R.; Saneesh, N.; Kaur, G.; Thakur, M.; Mahajan, R.; Dubey, R.; Mitra, A. K.
2017-01-01
The triton angular distribution for the 12C(7Li,t)16O* reaction is measured at 20 MeV, populating discrete states of 16O. Continuum-discretized coupled reaction channel calculations are used to extract the alpha spectroscopic properties of 16O states, instead of distorted-wave Born approximation theory, in order to include the effects of breakup on the transfer process. The alpha reduced widths, spectroscopic factors and the asymptotic normalization constants (ANC) of 16O states are extracted. The error in the spectroscopic factor is about 35% and that in the ANC about 27%.
Critical temperature of the Ising ferromagnet on the fcc, hcp, and dhcp lattices
NASA Astrophysics Data System (ADS)
Yu, Unjong
2015-02-01
By an extensive Monte-Carlo calculation together with finite-size scaling and the multiple-histogram method, the critical coupling constants (Kc = J /kBTc) of the Ising ferromagnet on the fcc, hcp, and double hcp (dhcp) lattices were obtained with unprecedented precision: Kcfcc = 0.1020707(2), Kchcp = 0.1020702(1), and Kcdhcp = 0.1020706(2). The critical temperature Tc of the hcp lattice is found to be higher than those of the fcc and dhcp lattices. The dhcp lattice appears to have a higher Tc than the fcc lattice, but the difference is within the error bars.
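A minimal Metropolis sketch conveys the kind of Monte Carlo sampling involved, here on a 2D square lattice rather than the fcc/hcp/dhcp lattices of the study, and without the finite-size-scaling and multiple-histogram machinery needed for precision estimates of Kc:

```python
import math
import random

def metropolis_ising(L=8, K=1.0, sweeps=200, seed=1):
    """Metropolis sampling of a 2D square-lattice Ising model at coupling
    K = J/kT (a toy stand-in for the 3D lattices of the paper).
    Returns the absolute magnetization per spin."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]           # start fully ordered
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = s[(i + 1) % L][j] + s[(i - 1) % L][j] \
           + s[i][(j + 1) % L] + s[i][(j - 1) % L]
        dE = 2.0 * K * s[i][j] * nb           # energy change in units of kT
        if dE <= 0.0 or rng.random() < math.exp(-dE):
            s[i][j] = -s[i][j]
    return abs(sum(map(sum, s))) / (L * L)
```

At K well above the 2D critical coupling (Kc ≈ 0.4407) the magnetization stays near 1; at K = 0 the state randomizes.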
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, T. S.
Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
NASA Astrophysics Data System (ADS)
Bernard, C.; Toussaint, D.
2018-04-01
We study the effects of failure to equilibrate the squared topological charge Q2 on lattice calculations of pseudoscalar masses and decay constants. The analysis is based on chiral perturbation theory calculations of the dependence of these quantities on the QCD vacuum angle θ . For the light-light partially quenched case, we rederive the known chiral perturbation theory results of Aoki and Fukaya, but using the nonperturbatively valid chiral theory worked out by Golterman, Sharpe and Singleton, and by Sharpe and Shoresh. We then extend these calculations to heavy-light mesons. Results when staggered taste violations are important are also presented. The derived Q2 dependence is compared to that of simulations using the MILC Collaboration's ensembles of lattices with four flavors of dynamical highly improved staggered quarks. We find agreement, albeit with large statistical errors. These results can be used to correct for the leading effects of unequilibrated Q2, or to make estimates of the systematic error coming from the failure to equilibrate Q2. In an appendix, we show that the partially quenched chiral theory may be extended beyond a lower bound on valence masses discovered by Sharpe and Shoresh. Subtleties occurring when a sea-quark mass vanishes are discussed in another appendix.
NASA Astrophysics Data System (ADS)
Kloppstech, K.; Könne, N.; Worbes, L.; Hellmann, D.; Kittel, A.
2015-11-01
We report on a precise in situ procedure to calibrate the heat flux sensor of a near-field scanning thermal microscope. This sensitive thermal measurement is based on a 1ω modulation technique and utilizes a hot-wire method to build an accessible and controllable heat reservoir. This reservoir is coupled thermally by near-field interactions to our probe. Thus, the sensor's conversion relation V_th(Q*_GS) can be precisely determined, where V_th is the thermopower generated in the sensor's coaxial thermocouple and Q*_GS is the thermal flux from the reservoir through the sensor. We analyze our method with Gaussian error calculus, with an error estimate on all involved quantities. The overall relative uncertainty of the calibration procedure is evaluated to be about 8% for the measured conversion constant, i.e., (2.40 ± 0.19) μV/μW. Furthermore, we determine the sensor's thermal resistance to be about 0.21 K/μW and find the thermal resistance of the near-field mediated coupling, at a distance between calibration standard and sensor of about 250 pm, to be 53 K/μW.
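The quoted ~8% overall uncertainty is a quadrature combination in the sense of Gaussian error calculus. The individual contributions below are hypothetical, chosen only to illustrate how such an error budget combines into the stated (2.40 ± 0.19) μV/μW:

```python
import math

def combined_relative_uncertainty(rel_errors):
    """Gaussian error calculus for independent multiplicative factors:
    relative uncertainties add in quadrature."""
    return math.sqrt(sum(r * r for r in rel_errors))

# Hypothetical error budget (assumed values, not from the paper),
# chosen to reproduce a total near 8%:
budget = [0.05, 0.045, 0.04, 0.02]
total = combined_relative_uncertainty(budget)   # ~0.081, i.e. ~8%
conversion = 2.40                               # measured constant [muV/muW]
sigma = total * conversion                      # absolute uncertainty, ~0.19
```
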
Systematic Errors in an Air Track Experiment.
ERIC Educational Resources Information Center
Ramirez, Santos A.; Ham, Joe S.
1990-01-01
Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emanuela Barzi et al.
Nb₃Sn is the superconductor most used in the R&D of high-field accelerator magnets, by either the wind-and-react or the react-and-wind technique. In order to program the low-temperature steps of the heat treatment, the growth kinetics of Cu-Sn intermetallics was investigated as a function of duration and temperature. The diffusion constants of the η, ε and δ phases between 150 and 550 °C were evaluated using Cu-Sn model samples. Statistical and systematic errors were thoroughly evaluated for an accurate data analysis. Next, the behavior of Internal Tin and Modified Jelly Roll Nb₃Sn composites was compared with the model predictions.
NASA Technical Reports Server (NTRS)
Robertson, Franklin R.; Fitzjarrald, Dan; Marshall, Susan; Oglesby, Robert; Roads, John; Arnold, James E. (Technical Monitor)
2001-01-01
This paper focuses on how fresh water and radiative fluxes over the tropical oceans change during ENSO warm and cold events and how these changes affect the tropical energy balance. At present, ENSO remains the most prominent known mode of natural variability at interannual time scales. While this natural perturbation to climate is quite distinct from possible anthropogenic changes in climate, adjustments in the tropical water and energy budgets during ENSO may give insight into feedback processes involving water vapor and cloud feedbacks. Although great advances have been made in understanding this phenomenon and realizing prediction skill over the past decade, our ability to document the coupled water and energy changes observationally and to represent them in climate models seems far from settled (Soden, 2000 J Climate). In a companion paper we have presented observational analyses, based principally on space-based measurements which document systematic changes in rainfall, evaporation, and surface and top-of-atmosphere (TOA) radiative fluxes. Here we analyze several contemporary climate models run with observed SSTs over recent decades and compare SST-induced changes in radiation, precipitation, evaporation, and energy transport to observational results. Among these are the NASA / NCAR Finite Volume Model, the NCAR Community Climate Model, the NCEP Global Spectral Model, and the NASA NSIPP Model. Key disagreements between model and observational results noted in the recent literature are shown to be due predominantly to observational shortcomings. A reexamination of the Langley 8-Year Surface Radiation Budget data reveals errors in the SST surface longwave emission due to biased SSTs. Subsequent correction allows use of this data set along with ERBE TOA fluxes to infer net atmospheric radiative heating. 
Further analysis of recent rainfall algorithms provides new estimates for precipitation variability in line with interannual evaporation changes inferred from the da Silva, Young, Levitus COADS analysis. The overall results from our analysis suggest an increase (decrease) of the hydrologic cycle during ENSO warm (cold) events at the rate of about 5 W/sq m per K of SST change. Model results agree reasonably well with this estimate of sensitivity. This rate is slightly less than that which would be expected for constant relative humidity over the tropical oceans. There remain, however, significant quantitative uncertainties in cloud forcing changes in the models as compared to observations. These differences are examined in relationship to model convection and cloud parameterizations. An analysis of possible sampling and measurement errors, compared to systematic model errors, is also presented.
Mayo-Wilson, Evan; Ng, Sueko Matsumura; Chuck, Roy S; Li, Tianjing
2017-09-05
Systematic reviews should inform American Academy of Ophthalmology (AAO) Preferred Practice Pattern® (PPP) guidelines. The quality of systematic reviews related to the forthcoming Preferred Practice Pattern® guideline (PPP) Refractive Errors & Refractive Surgery is unknown. We sought to identify reliable systematic reviews to assist the AAO Refractive Errors & Refractive Surgery PPP. Systematic reviews were eligible if they evaluated the effectiveness or safety of interventions included in the 2012 PPP Refractive Errors & Refractive Surgery. To identify potentially eligible systematic reviews, we searched the Cochrane Eyes and Vision United States Satellite database of systematic reviews. Two authors identified eligible reviews and abstracted information about the characteristics and quality of the reviews independently using the Systematic Review Data Repository. We classified systematic reviews as "reliable" when they (1) defined criteria for the selection of studies, (2) conducted comprehensive literature searches for eligible studies, (3) assessed the methodological quality (risk of bias) of the included studies, (4) used appropriate methods for meta-analyses (which we assessed only when meta-analyses were reported), (5) presented conclusions that were supported by the evidence provided in the review. We identified 124 systematic reviews related to refractive error; 39 met our eligibility criteria, of which we classified 11 to be reliable. Systematic reviews classified as unreliable did not define the criteria for selecting studies (5; 13%), did not assess methodological rigor (10; 26%), did not conduct comprehensive searches (17; 44%), or used inappropriate quantitative methods (3; 8%). The 11 reliable reviews were published between 2002 and 2016. They included 0 to 23 studies (median = 9) and analyzed 0 to 4696 participants (median = 666). Seven reliable reviews (64%) assessed surgical interventions. 
Most systematic reviews of interventions for refractive error are of low methodological quality. Following widely accepted guidance, such as the Cochrane or Institute of Medicine standards for conducting systematic reviews, would contribute to improved patient care and inform future research.
An analysis of the least-squares problem for the DSN systematic pointing error model
NASA Technical Reports Server (NTRS)
Alvarez, L. S.
1991-01-01
A systematic pointing error model is used to calibrate antennas in the Deep Space Network. The least-squares problem is described and analyzed, along with the solution methods used to determine the model's parameters. Specifically studied are the rank-degeneracy problems resulting from beam pointing error measurement sets that incorporate inadequate sky coverage. A least-squares parameter subset selection method is described, and its applicability to the systematic error modeling process is demonstrated on a Voyager 2 measurement distribution.
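The rank degeneracy caused by inadequate sky coverage can be illustrated with a toy three-term pointing model: when all measurements share one elevation, the constant and cos(el) regressors are collinear and the normal-equation (Gram) matrix becomes singular. The model terms here are hypothetical, not the actual DSN pointing model:

```python
import math

def rows_for(track):
    """Regressors [1, cos(el), sin(az)] per observation for a hypothetical
    three-term pointing-error model (az, el in radians)."""
    return [[1.0, math.cos(el), math.sin(az)] for az, el in track]

def gram_det(rows):
    """Determinant of the 3x3 normal-equation (Gram) matrix A^T A.
    A (near-)zero value signals rank degeneracy of the least-squares problem."""
    g = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    return (g[0][0] * (g[1][1] * g[2][2] - g[1][2] * g[2][1])
          - g[0][1] * (g[1][0] * g[2][2] - g[1][2] * g[2][0])
          + g[0][2] * (g[1][0] * g[2][1] - g[1][1] * g[2][0]))
```

A subset selection method would drop one of the collinear columns (here, either the bias or the cos(el) term) before solving, instead of inverting a singular system.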
The FIM-iHYCOM Model in SubX: Evaluation of Subseasonal Errors and Variability
NASA Astrophysics Data System (ADS)
Green, B.; Sun, S.; Benjamin, S.; Grell, G. A.; Bleck, R.
2017-12-01
NOAA/ESRL/GSD has produced both real-time and retrospective forecasts for the Subseasonal Experiment (SubX) using the FIM-iHYCOM model. FIM-iHYCOM couples the atmospheric Flow-following finite volume Icosahedral Model (FIM) to an icosahedral-grid version of the Hybrid Coordinate Ocean Model (HYCOM). This coupled model is unique in terms of its grid structure: in the horizontal, the icosahedral meshes are perfectly matched for FIM and iHYCOM, eliminating the need for a flux interpolator; in the vertical, both models use adaptive arbitrary Lagrangian-Eulerian hybrid coordinates. For SubX, FIM-iHYCOM initializes four time-lagged ensemble members around each Wednesday, which are integrated forward to provide 32-day forecasts. While it has already been shown that this model has predictive skill similar to that of NOAA's operational CFSv2 in terms of the RMM index, FIM-iHYCOM is still fairly new and thus its overall performance needs to be thoroughly evaluated. To that end, this study examines model errors as a function of forecast lead week (1-4) - i.e., model drift - for key variables including 2-m temperature, precipitation, and SST. Errors are evaluated against two reanalysis products: CFSR, from which FIM-iHYCOM initial conditions are derived, and the quasi-independent ERA-Interim. The week 4 error magnitudes are similar between FIM-iHYCOM and CFSv2, albeit with different spatial distributions. Also, intraseasonal variability as simulated in these two models will be compared with reanalyses. The impact of hindcast frequency (4 times per week, once per week, or once per day) on the model climatology is also examined to determine the implications for systematic error correction in FIM-iHYCOM.
Model-independent determination of the triple Higgs coupling at e+e- colliders
Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon; ...
2018-03-20
Here, the observation of Higgs pair production at high-energy colliders can give evidence for the presence of a triple Higgs coupling. However, the actual determination of the value of this coupling is more difficult. In the context of general models for new physics, double Higgs production processes can receive contributions from many possible beyond-Standard-Model effects. This dependence must be understood if one is to make a definite statement about the deviation of the Higgs field potential from the Standard Model. In this paper, we study the extraction of the triple Higgs coupling from the process e+e- → Zhh. We show that, by combining the measurement of this process with other measurements available at a 500 GeV e+e- collider, it is possible to quote model-independent limits on the effective field theory parameter c6 that parametrizes modifications of the Higgs potential. We present precise error estimates based on the anticipated International Linear Collider physics program, studied with full simulation. Our analysis also gives new insight into the model-independent extraction of the Higgs boson coupling constants and total width from e+e- data.
Model-independent determination of the triple Higgs coupling at e+e- colliders
NASA Astrophysics Data System (ADS)
Barklow, Tim; Fujii, Keisuke; Jung, Sunghoon; Peskin, Michael E.; Tian, Junping
2018-03-01
The observation of Higgs pair production at high-energy colliders can give evidence for the presence of a triple Higgs coupling. However, the actual determination of the value of this coupling is more difficult. In the context of general models for new physics, double Higgs production processes can receive contributions from many possible beyond-Standard-Model effects. This dependence must be understood if one is to make a definite statement about the deviation of the Higgs field potential from the Standard Model. In this paper, we study the extraction of the triple Higgs coupling from the process e+e-→Z h h . We show that, by combining the measurement of this process with other measurements available at a 500 GeV e+e- collider, it is possible to quote model-independent limits on the effective field theory parameter c6 that parametrizes modifications of the Higgs potential. We present precise error estimates based on the anticipated International Linear Collider physics program, studied with full simulation. Our analysis also gives new insight into the model-independent extraction of the Higgs boson coupling constants and total width from e+e- data.
Experimental determination of the effective strong coupling constant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alexandre Deur; Volker Burkert; Jian-Ping Chen
2007-07-01
We extract an effective strong coupling constant from low-Q² data on the Bjorken sum. Using sum rules, we establish its Q² behavior over the complete Q² range. The result is compared to effective coupling constants extracted from different processes and to calculations based on Schwinger-Dyson equations, hadron spectroscopy or lattice QCD. Although the connection between the experimentally extracted effective coupling constant and the calculations is not clear, the results agree surprisingly well.
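In this approach the effective coupling is defined directly from the Bjorken sum rule truncated at leading order, Γ₁^{p−n} = (g_A/6)(1 − α_g1/π), so that each measured value of the sum yields a coupling value. A minimal sketch of that inversion (values of Γ₁^{p−n} would come from the data):

```python
import math

G_A = 1.267  # nucleon axial charge

def alpha_g1(gamma1_pn):
    """Effective strong coupling defined from the Bjorken sum:
    Gamma_1^{p-n} = (g_A / 6) * (1 - alpha_g1 / pi)."""
    return math.pi * (1.0 - 6.0 * gamma1_pn / G_A)
```

In the Q² → 0 limit the sum vanishes and the coupling freezes at π, while larger values of the Bjorken sum map to smaller couplings, reproducing the qualitative Q² behavior discussed in the abstract.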
An ab initio study of the low-lying doublet states of AgO and AgS
NASA Astrophysics Data System (ADS)
Bauschlicher, Charles W.; Partridge, Harry; Langhoff, Stephen R.
1990-11-01
Spectroscopic constants (D₀, rₑ, μₑ, Tₑ) are determined for the doublet states of AgO and AgS below ≈30000 cm⁻¹. Valence basis sets are employed in conjunction with relativistic effective core potentials (RECPs). Electron correlation is included using the modified coupled-pair functional (MCPF) and multireference configuration interaction (MRCI) methods. The A²Σ⁺-X²Π band system is found to occur in the near infrared (≈9000 cm⁻¹) and to be relatively weak, with a radiative lifetime of 900 μs for A²Σ⁺ (ν = 0). The weakly bound C²Π state (our notation), the upper state of the blue system, is found to require high levels of theoretical treatment to determine a quantitatively accurate potential. The red system is assigned as a transition from the C²Π state to the previously unobserved A²Σ⁺ state. Several additional transitions are identified that should be detectable experimentally. A more limited study is performed for the vertical excitation spectrum of AgS. In addition, a detailed all-electron study of the X²Π and A²Σ⁺ states of AgO is carried out using large atomic natural orbital (ANO) basis sets. Our best calculated D₀ value for AgO is significantly less than the experimental value, which suggests that there may be some systematic error in the experimental determination.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise makes additional moments and centroid shift error, and those first-order effects are canceled in averaging, but the second-order effects are not canceled. We derive the formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
Tuning a climate model using nudging to reanalysis.
NASA Astrophysics Data System (ADS)
Cheedela, S. K.; Mapes, B. E.
2014-12-01
Tuning an atmospheric general circulation model involves the daunting task of adjusting non-observable parameters to adjust the mean climate. These parameters arise from the necessity to describe unresolved flow through parameterizations. Tuning a climate model is often done with a certain set of priorities, such as the global mean temperature or the net top-of-atmosphere radiation. These priorities are hard enough to reach, let alone reducing systematic biases in the models. The goal of the current study is to explore alternative ways to tune a climate model to reduce some systematic biases, which can be used in synergy with existing efforts. Nudging a climate model to a known state is a poor man's inverse of the tuning process described above. Our approach involves nudging the atmospheric model to state-of-the-art reanalysis fields, thereby providing a balanced state with respect to the global mean temperature and winds. The tendencies derived from nudging are the negative of the errors from the physical parameterizations, as the errors from the dynamical core would be small. Patterns of nudging are compared to the patterns of the different physical parameterizations to decipher the cause of certain biases in relation to tuning parameters. This approach might also help in understanding certain compensating errors that arise from the tuning process. ECHAM6 is a comprehensive general circulation model, also used in the recent Coupled Model Intercomparison Project (CMIP5). The approach used to tune it and the effect of certain parameters on its mean climate are clearly reported, hence it serves as a benchmark for our approach. Our planned experiments include nudging the ECHAM6 atmospheric model to the European Centre Reanalysis (ERA-Interim) and the reanalysis from the National Centers for Environmental Prediction (NCEP), and deciphering the choice of certain parameters that lead to systematic biases in its simulations. Of particular interest is reducing long-standing biases related to the simulation of the Asian summer monsoon.
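The nudging (relaxation) term, and the way its time-mean increment estimates the negative of the model's tendency error, can be shown with a one-variable toy. This is a sketch of the general technique, not ECHAM6's nudging implementation:

```python
def step_nudged(x, x_ref, model_tendency, dt, tau):
    """One explicit step of a model nudged toward a reference state x_ref
    with relaxation time tau.  Returns the new state and the nudging
    increment; the time mean of (increment / dt) estimates minus the
    model's systematic tendency error."""
    nudge = (x_ref - x) * dt / tau
    return x + model_tendency(x) * dt + nudge, nudge
```

In the toy test below, a model whose tendency carries a constant bias b is nudged toward a steady truth; the equilibrium nudging tendency is −b/(1 + τ), i.e., approximately the negative of the bias for strong nudging, which is the diagnostic idea behind the study.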
KNY Coupling Constants and Form Factors from the Chiral Bag Model
NASA Astrophysics Data System (ADS)
Jeong, M. T.; Cheon, Il-T.
2000-09-01
The form factors and coupling constants for the KNΛ and KNΣ interactions have been calculated in the framework of the Chiral Bag Model with vector mesons. Taking into account vector meson (ρ, ω, K*) field effects, we find -3.88 ≤ gKNΛ ≤ -3.67 and 1.15 ≤ gKNΣ ≤ 1.24, where the quark-meson coupling constants are determined by fitting the renormalized πNN coupling constant, [gπNN(0)]²/4π = 14.3. It is shown that vector mesons make significant contributions to the coupling constants gKNΛ and gKNΣ. Our values lie within the experimental limits set by the phenomenological values extracted from kaon photoproduction experiments.
SU-E-T-613: Dosimetric Consequences of Systematic MLC Leaf Positioning Errors
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kathuria, K; Siebers, J
2014-06-01
Purpose: The purpose of this study is to determine the dosimetric consequences of systematic MLC leaf positioning errors for clinical IMRT patient plans so as to establish detection tolerances for quality assurance programs. Materials and Methods: Dosimetric consequences were simulated by extracting MLC delivery instructions from the TPS, altering the file by the specified error, reloading the delivery instructions into the TPS, recomputing dose, and extracting dose-volume metrics for one head-and-neck and one prostate patient. Machine error was simulated by offsetting MLC leaves in Pinnacle in a systematic way. Three different algorithms were followed for these systematic offsets, and are as follows: a systematic sequential one-leaf offset (one leaf offset in one segment per beam), a systematic uniform one-leaf offset (same one leaf offset per segment per beam) and a systematic offset of a given number of leaves picked uniformly at random from a given number of segments (5 out of 10 total). Dose to the PTV and normal tissue was simulated. Results: A systematic 5 mm offset of 1 leaf for all delivery segments of all beams resulted in a maximum PTV D98 deviation of 1%. Results showed very low dose error in all reasonably possible machine configurations, rare or otherwise, which could be simulated. Very low error in dose to PTV and OARs was shown in all possible cases of one leaf per beam per segment being offset (<1%), or that of only one leaf per beam being offset (<0.2%). The errors resulting from a high number of adjacent leaves (maximum of 5 out of 60 total leaf-pairs) being simultaneously offset in many (5) of the control points (total 10–18 in all beams) per beam, in both the PTV and the OARs analyzed, were similarly low (<2–3%). Conclusions: The above results show that patient shifts and anatomical changes are the main source of errors in dose delivered, not machine delivery.
These two sources of error are “visually complementary” and uncorrelated (albeit not additive in the final error) and one can easily incorporate error resulting from machine delivery in an error model based purely on tumor motion.« less
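The three leaf-offset algorithms described above can be sketched in a few lines. The function names and the flat list-of-segments representation of an MLC leaf bank are illustrative assumptions, not the authors' Pinnacle scripting:

```python
import random

def sequential_one_leaf(segments, offset_mm):
    """Systematic sequential one-leaf offset: a different leaf is
    offset in each segment of the beam (one leaf per segment)."""
    out = [row[:] for row in segments]
    for i, row in enumerate(out):
        row[i % len(row)] += offset_mm
    return out

def uniform_one_leaf(segments, leaf_index, offset_mm):
    """Systematic uniform one-leaf offset: the same leaf is offset
    in every segment of the beam."""
    out = [row[:] for row in segments]
    for row in out:
        row[leaf_index] += offset_mm
    return out

def random_leaves(segments, n_leaves, n_segments, offset_mm, seed=0):
    """Offset n_leaves leaves, picked uniformly at random, in
    n_segments randomly chosen segments (e.g. 5 of 10 total)."""
    rng = random.Random(seed)
    out = [row[:] for row in segments]
    for s in rng.sample(range(len(out)), n_segments):
        for leaf in rng.sample(range(len(out[s])), n_leaves):
            out[s][leaf] += offset_mm
    return out
```

Each function returns a perturbed copy of the leaf positions, which would then be reloaded into the TPS for dose recomputation.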
NASA Technical Reports Server (NTRS)
Deloach, Richard; Obara, Clifford J.; Goodman, Wesley L.
2012-01-01
This paper documents a check standard wind tunnel test conducted in the Langley 0.3-Meter Transonic Cryogenic Tunnel (0.3M TCT) that was designed and analyzed using the Modern Design of Experiments (MDOE). The test was designed to partition the unexplained variance of typical wind tunnel data samples into two constituent components: one attributable to ordinary random error, and one attributable to systematic error induced by covariate effects. Covariate effects in wind tunnel testing are discussed, with examples. The impact of systematic (non-random) unexplained variance on the statistical independence of sequential measurements is reviewed. The corresponding correlation among experimental errors is discussed, as is the impact of such correlation on experimental results generally. The specific experiment documented herein was organized as a formal test for the presence of unexplained variance in representative samples of wind tunnel data, in order to quantify the frequency with which such systematic error was detected, and its magnitude relative to ordinary random error. Levels of systematic and random error reported here are representative of those quantified in other facilities, as cited in the references.
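The partition of unexplained variance into a random and a systematic component can be illustrated with a one-way ANOVA-style decomposition over groups of replicated measurements: within-group scatter estimates ordinary random error, while between-group scatter reflects covariate-induced systematic error. This is a minimal sketch of the idea, not the MDOE analysis actually used in the test:

```python
def partition_variance(groups):
    """Partition the unexplained variance of replicated measurements
    into a within-group (random error) mean square and a between-group
    (systematic, covariate-induced) mean square, one-way ANOVA style."""
    k = len(groups)                         # number of replicate groups
    n = sum(len(g) for g in groups)         # total sample count
    grand = sum(sum(g) for g in groups) / n
    within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                 for g in groups)
    between = sum(len(g) * (sum(g) / len(g) - grand) ** 2
                  for g in groups)
    return within / (n - k), between / (k - 1)
```

A between-group mean square much larger than the within-group mean square signals systematic error that would violate the independence assumption of sequential measurements.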
Laissaoui, A; Mas, J L; Hurtado, S; Ziad, N; Villa, M; Benmansour, M
2013-06-01
This study presents metal concentrations (Fe, Mg, Mn, Co, Cu, Zn, Pb, As, Sr and V) and radionuclide activities ((40)K, (137)Cs, (210)Pb, (226)Ra, (228)Ac, (234)Th and (212)Pb) in surface deposits and a sediment core from the Sebou Estuary, Northwest Morocco. Samples were collected in April 2009, about 2 months after a flooding event, and analysed using a well-type coaxial gamma-ray detector and inductively coupled plasma-quadrupole mass spectrometry. Activities of radionuclides and concentrations of almost all elements in surface samples displayed only moderate spatial variation, suggesting homogenous deposition of eroded local soil in response to intense precipitation. Excess (210)Pb displayed relatively constant activity throughout the sediment core, preventing dating and precluding determination of the historical accumulation rates of pollutants at the core site. Some elements showed non-systematic trends with depth and displayed local maxima and minima. Other elements presented relatively systematic concentration trends or relatively constant levels with discrete maxima and/or minima. Except for Mn, Sr and Cr, all metal concentrations in sediment were below levels typical of polluted systems, suggesting little human impact or losses of metals from sediment particles.
Modeling and Control of a Tailsitter with a Ducted Fan
NASA Astrophysics Data System (ADS)
Argyle, Matthew Elliott
There are two traditional aircraft categories: fixed-wing aircraft, which have long endurance and a high cruise airspeed, and rotorcraft, which can take off and land vertically. The tailsitter is a type of aircraft that has the strengths of both platforms, with no additional mechanical complexity, because it takes off and lands vertically on its tail and can transition the entire aircraft horizontally into high-speed flight. In this dissertation, we develop the entire control system for a tailsitter with a ducted fan. The standard method to compute the quaternion-based attitude error does not generate ideal trajectories for a hovering tailsitter in some situations. In addition, the only approach in the literature to mitigate this breaks down for large attitude errors. We develop an alternative quaternion-based error method which generates better trajectories than the standard approach and can handle large errors. We also derive a hybrid backstepping controller with almost global asymptotic stability based on this error method. Many common altitude and airspeed control schemes for a fixed-wing airplane assume that the altitude and airspeed dynamics are decoupled, which leads to errors. The Total Energy Control System (TECS) is an approach that controls altitude and airspeed by manipulating the total energy rate and energy distribution rate of the aircraft in a manner that accounts for the dynamic coupling. In this dissertation, a nonlinear controller based on the TECS principles, which can handle inaccurate thrust and drag models, is derived. Simulation results show that the nonlinear controller has better performance than standard PI TECS control schemes. Most constant-altitude transitions are accomplished by generating an optimal trajectory, and potentially actuator inputs, based on a high-fidelity model of the aircraft. While there are several approaches to mitigate the effects of modeling errors, these do not fully remove the requirement for an accurate model.
In this dissertation, we develop two different approaches that can achieve near-constant-altitude transitions for some types of aircraft. The first method, based on multiple LQR controllers, requires a high-fidelity model of the aircraft. The second method, based on the energy along the body axes, requires almost no aerodynamic information.
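The TECS quantities mentioned above can be made concrete with the usual per-weight (specific) energy rates: throttle drives the sum of the kinetic and potential energy rates, while pitch redistributes energy between them. This sketch uses the standard small-angle definitions and is an illustrative stand-in, not the dissertation's nonlinear controller:

```python
G = 9.81  # gravitational acceleration, m/s^2

def tecs_rates(V, Vdot, hdot):
    """Specific (per-weight) total energy rate and energy distribution
    rate used by TECS. gamma is the small-angle flight path angle
    (climb rate over airspeed); Vdot/G is the normalized acceleration.
    Throttle commands the sum; pitch commands the difference."""
    gamma = hdot / V
    accel = Vdot / G
    energy_rate = gamma + accel          # total energy rate (per weight)
    distribution_rate = gamma - accel    # potential-vs-kinetic exchange
    return energy_rate, distribution_rate
```

For example, climbing at 2 m/s at 20 m/s airspeed while accelerating at 1.962 m/s² gives an energy rate of 0.3 and a distribution rate of -0.1, so the throttle must add energy while pitch trades climb for speed.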
Best-estimate coupled RELAP/CONTAIN analysis of inadvertent BWR ADS valve opening transient
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feltus, M.A.; Muftuoglu, A.K.
1993-01-01
Noncondensible gases may become dissolved in boiling water reactor (BWR) water-level instrumentation during normal operations. Any dissolved noncondensible gases inside these water columns may come out of solution during rapid depressurization events and displace water from the reference leg piping, resulting in a false high level. Significant errors in water-level indication are not expected to occur until the reactor pressure vessel (RPV) pressure has dropped below approximately 450 psig. These water level errors may cause a delay or failure in emergency core cooling system (ECCS) actuation. The RPV water level is monitored using the pressure of a water column having a varying height (reactor water level) that is compared to the pressure of a water column maintained at a constant height (reference level). The reference legs have small-diameter pipes with varying lengths that provide a constant head of water and are located outside the drywell. The amount of noncondensible gases dissolved in each reference leg is very dependent on the amount of leakage from the reference leg and its geometry and interaction of the reactor coolant system with the containment, i.e., torus or suppression pool, and reactor building. If a rapid depressurization causes an erroneously high water level, preventing automatic ECCS actuation, it becomes important to determine if there would be other adequate indications for operator response. In the postulated inadvertent opening of all seven automatic depressurization system (ADS) valves, the ECCS signal on high drywell pressure would be circumvented because the ADS valves discharge directly into the suppression pool. A best-estimate analysis of such an inadvertent opening of all ADS valves would have to consider the thermal-hydraulic coupling between the pool, drywell, reactor building, and RPV.
Sturtevant, Blake T; Davulis, Peter M; da Cunha, Mauricio Pereira
2009-04-01
This work reports on the determination of langatate elastic and piezoelectric constants and their associated temperature coefficients employing two independent methods, the pulse echo overlap (PEO) and a combined resonance technique (CRT), to measure bulk acoustic wave (BAW) phase velocities. Details on the measurement techniques are provided and discussed, including analysis of the couplant material used in the PEO technique to couple the signal to the sample, which was shown to be an order of magnitude more significant than the experimental errors involved in the data extraction. At room temperature, elastic and piezoelectric constants were extracted by the PEO and the CRT methods and showed results consistent to within a few percent for the elastic constants. Both raw acquired data and optimized constants, based on minimization routines applied to all the modes involved in the measurements, are provided and discussed. Comparison of the elastic constants and their temperature behavior with the literature reveals the recent efforts toward consistent growth and characterization of LGT, in spite of significant variations (between 1 and 30%) among the constants extracted by different groups at room temperature. The density, dielectric permittivity constants, and respective temperature coefficients used in this work have also been independently determined based on samples from the same crystal boule. The temperature behavior of the BAW modes was extracted using the CRT technique, which has the advantage of not relying on temperature-dependent acoustic couplants. Finally, the extracted temperature coefficients for the elastic and piezoelectric constants between room temperature and 120 degrees C are reported and discussed in this work.
Conformal Symmetry as a Template for QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brodsky, S
2004-08-04
Conformal symmetry is broken in physical QCD; nevertheless, one can use conformal symmetry as a template, systematically correcting for its nonzero {beta} function as well as higher-twist effects. For example, commensurate scale relations which relate QCD observables to each other, such as the generalized Crewther relation, have no renormalization scale or scheme ambiguity and retain a convergent perturbative structure which reflects the underlying conformal symmetry of the classical theory. The ''conformal correspondence principle'' also dictates the form of the expansion basis for hadronic distribution amplitudes. The AdS/CFT correspondence connecting superstring theory to superconformal gauge theory has important implications for hadron phenomenology in the conformal limit, including an all-orders demonstration of counting rules for hard exclusive processes as well as determining essential aspects of hadronic light-front wavefunctions. Theoretical and phenomenological evidence is now accumulating that QCD couplings based on physical observables such as {tau} decay become constant at small virtuality; i.e., effective charges develop an infrared fixed point in contradiction to the usual assumption of singular growth in the infrared. The near-constant behavior of effective couplings also suggests that QCD can be approximated as a conformal theory even at relatively small momentum transfer. The importance of using an analytic effective charge such as the pinch scheme for unifying the electroweak and strong couplings and forces is also emphasized.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schotland, R.M.; Hartman, J.E.
1989-02-01
The accuracy in the determination of the solar constant by means of the Langley method is strongly influenced by the spatial inhomogeneities of the atmospheric aerosol. Volcanoes frequently inject aerosol into the upper troposphere and lower stratosphere. This paper evaluates the solar constant error that would occur if observations had been taken throughout the plume of El Chichon observed by NASA aircraft in the fall of 1982 and the spring of 1983. A lidar method is suggested to minimize this error.
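The Langley method itself reduces to a log-linear extrapolation to zero airmass: fitting ln I = ln I0 - τm to observations at several airmasses m yields both the extraterrestrial intensity I0 and the optical depth τ. The sketch below (function name assumed) shows the fit; if τ varies spatially between observations, as inside a volcanic plume, the extrapolated I0 is biased:

```python
import math

def langley_extrapolate(airmasses, intensities):
    """Langley method: least-squares fit of ln I = ln I0 - tau*m,
    extrapolated to zero airmass. Returns (I0, tau). A tau that
    drifts between observations biases the recovered I0."""
    ys = [math.log(i) for i in intensities]
    n = len(airmasses)
    mbar = sum(airmasses) / n
    ybar = sum(ys) / n
    sxy = sum((m - mbar) * (y - ybar) for m, y in zip(airmasses, ys))
    sxx = sum((m - mbar) ** 2 for m in airmasses)
    tau = -sxy / sxx                       # slope is -tau
    return math.exp(ybar + tau * mbar), tau
```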
Adaptation of catch-up saccades during the initiation of smooth pursuit eye movements.
Schütz, Alexander C; Souto, David
2011-04-01
Reduction of retinal speed and alignment of the line of sight are believed to be the respective primary functions of smooth pursuit and saccadic eye movements. As eye muscle strength can change in the short term, continuous adjustments of motor signals are required to achieve constant accuracy. While adaptation of saccade amplitude to systematic position errors has been extensively studied, we know less about the adaptive response to position errors during smooth pursuit initiation, when target motion has to be taken into account to program saccades, and when position errors at the saccade endpoint could also be corrected by increasing pursuit velocity. To study short-term adaptation (250 adaptation trials) of tracking eye movements, we introduced a position error during the first catch-up saccade made during the initiation of smooth pursuit, in a ramp-step-ramp paradigm. The target position was either shifted in the direction of the horizontally moving target (forward step), against it (backward step), or orthogonally to it (vertical step). Results indicate adaptation of catch-up saccade amplitude to backward and forward steps. With vertical steps, saccades became oblique, by an inflexion of the early or late saccade trajectory. With a similar time course, post-saccadic pursuit velocity was increased in the step direction, adding further evidence that under some conditions pursuit and saccades can act synergistically to reduce position errors.
NASA Astrophysics Data System (ADS)
Soti, G.; Wauters, F.; Breitenfeldt, M.; Finlay, P.; Herzog, P.; Knecht, A.; Köster, U.; Kraev, I. S.; Porobic, T.; Prashanth, P. N.; Towner, I. S.; Tramm, C.; Zákoucký, D.; Severijns, N.
2014-09-01
Background: Precision measurements at low energy search for physics beyond the standard model in a way complementary to searches for new particles at colliders. In the weak sector the most general β-decay Hamiltonian contains, besides vector and axial-vector terms, also scalar, tensor, and pseudoscalar terms. Current limits on the scalar and tensor coupling constants from neutron and nuclear β decay are on the level of several percent. Purpose: Extracting new information on tensor coupling constants by measuring the β-asymmetry parameter in the pure Gamow-Teller decay of Cu67, thereby testing the V-A structure of the weak interaction. Method: An iron sample foil into which the radioactive nuclei were implanted was cooled down to mK temperatures in a 3He-4He dilution refrigerator. An external magnetic field of 0.1 T, in combination with the internal hyperfine magnetic field, oriented the nuclei. The anisotropic β radiation was observed with planar high-purity germanium detectors operating at a temperature of about 10 K. An on-line measurement of the β asymmetry of Cu68 was performed as well for normalization purposes. Systematic effects were investigated using geant4 simulations. Results: The experimental value, Ã=0.587(14), is in agreement with the standard model value of 0.5991(2) and is interpreted in terms of physics beyond the standard model. The limits obtained on possible tensor-type charged currents in the weak interaction Hamiltonian are -0.045<(CT+CT')/CA<0.159 (90% C.L.). Conclusions: The obtained limits are comparable to limits from other correlation measurements in nuclear β decay and contribute to further constraining tensor coupling constants.
New methods for B meson decay constants and form factors from lattice NRQCD
Hughes, C.; Davies, C. T.H.; Monahan, C. J.
2018-03-20
We determine the normalization of scalar and pseudoscalar current operators made from nonrelativistic b quarks and highly improved staggered light quarks in lattice quantum chromodynamics (QCD) through O(α_s) and Λ_QCD/m_b. We use matrix elements of these operators to extract B meson decay constants and form factors, and then compare to those obtained using the standard vector and axial-vector operators. This provides a test of systematic errors in the lattice QCD determination of the B meson decay constants and form factors. We provide a new value for the B and B_s meson decay constants from lattice QCD calculations on ensembles that include u, d, s, and c quarks in the sea and that have the u/d quark mass going down to its physical value. Our results are f_B = 0.196(6) GeV, f_Bs = 0.236(7) GeV, and f_Bs/f_B = 1.207(7), agreeing well with earlier results using the temporal axial current. By combining with these previous results, we provide updated values of f_B = 0.190(4) GeV, f_Bs = 0.229(5) GeV, and f_Bs/f_B = 1.206(5).
Black hole masses in active galactic nuclei
NASA Astrophysics Data System (ADS)
Denney, Kelly D.
2010-11-01
We present the complete results from two high-sampling-rate, multi-month spectrophotometric reverberation mapping campaigns undertaken to obtain either new or improved Hbeta reverberation lag measurements for several relatively low-luminosity active galactic nuclei (AGNs). We have reliably measured the time delay between variations in the continuum and the Hbeta emission line in seven local Seyfert 1 galaxies. These measurements are used to calculate the mass of the supermassive black hole at the center of each of these AGNs. We place our results in the context of the most current calibration of the broad-line region (BLR) R_BLR-L relationship, where our results remove many outliers and significantly reduce the scatter at the low-luminosity end of this relationship. A detailed analysis of the data from our high-sampling-rate, multi-month reverberation mapping campaign in 2007 reveals that the Hbeta emission regions within the BLRs of several nearby AGNs exhibit a variety of kinematic behaviors. Through a velocity-resolved reverberation analysis of the broad Hbeta emission-line flux variations in our sample, we reconstruct velocity-resolved kinematic signals for our entire sample and clearly see evidence for outflowing, infalling, and virialized BLR gas motions in NGC 3227, NGC 3516, and NGC 5548, respectively. Finally, we explore the nature of systematic errors that can arise in measurements of black hole masses from single-epoch spectra of AGNs by utilizing the many epochs available for NGC 5548 and PG1229+204 from reverberation mapping databases. In particular, we examine systematics due to AGN variability, contamination due to constant spectral components (i.e., narrow lines and host galaxy flux), data quality (i.e., signal-to-noise ratio, S/N), and blending of spectral features.
We investigate the effect that each of these systematics has on the precision and accuracy of single-epoch masses calculated from two commonly-used line-width measures by comparing these results to recent reverberation mapping studies. We then present an error budget which summarizes the minimum observable uncertainties as well as the amount of additional scatter and/or systematic offset that can be expected from the individual sources of error investigated.
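The reverberation and single-epoch masses discussed above rest on the virial relation M = f·cτ·ΔV²/G, combining the light-travel-time BLR radius with the broad-line velocity width. A minimal numerical sketch (constants in SI; the virial factor f = 5.5 is an assumed illustrative value, not the thesis's calibration) is:

```python
C = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def virial_mass(lag_days, line_width_kms, f=5.5):
    """Virial black hole mass from a reverberation lag and a broad-line
    velocity width: M = f * (c * tau) * dV^2 / G, in solar masses.
    f is the dimensionless virial factor absorbing the unknown BLR
    geometry and kinematics."""
    r = C * lag_days * 86400.0     # BLR radius from light travel time, m
    dv = line_width_kms * 1.0e3    # line width, m/s
    return f * r * dv * dv / G / M_SUN
```

A 10-day Hbeta lag with a 3000 km/s line width gives a mass of order 10^8 solar masses, the typical scale for local Seyfert 1 galaxies.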
Local concurrent error detection and correction in data structures using virtual backpointers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, C.C.J.; Chen, P.P.; Fuchs, W.K.
1989-11-01
A new technique, based on virtual backpointers, is presented in this paper for local concurrent error detection and correction in linked data structures. Two new data structures utilizing virtual backpointers, the Virtual Double-Linked List and the B-Tree with Virtual Backpointers, are described. For these structures, double errors within a fixed-size checking window can be detected in constant time, and single errors detected during forward moves can be corrected in constant time.
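One simple way to realize a checkable backpointer without storing a second physical pointer is to keep a redundant field derived from the neighbors' addresses, for example their XOR. The Python sketch below illustrates this local-check idea during a forward move; it is an assumed illustration of the concept, not the paper's exact Virtual Double-Linked List encoding:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.next = None   # physical forward pointer
        self.vb = 0        # virtual backpointer (checkable redundancy)

def build_vdll(keys):
    """Build a list where each node's virtual backpointer is the XOR of
    the ids of its predecessor and successor (0 at the ends)."""
    nodes = [Node(k) for k in keys]
    for i, n in enumerate(nodes):
        n.next = nodes[i + 1] if i + 1 < len(nodes) else None
        prev_id = id(nodes[i - 1]) if i > 0 else 0
        next_id = id(n.next) if n.next else 0
        n.vb = prev_id ^ next_id
    return nodes

def check_forward(prev, node):
    """Constant-time local check during a forward traversal step:
    recompute the virtual backpointer from the node actually arrived
    from and the stored forward pointer, and compare."""
    prev_id = id(prev) if prev else 0
    next_id = id(node.next) if node.next else 0
    return node.vb == (prev_id ^ next_id)
```

A corrupted forward pointer makes the recomputed value disagree with the stored `vb`, so the error is detected locally, without a full traversal.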
A non-perturbative exploration of the high energy regime in Nf=3 QCD. ALPHA Collaboration
NASA Astrophysics Data System (ADS)
Dalla Brida, Mattia; Fritzsch, Patrick; Korzec, Tomasz; Ramos, Alberto; Sint, Stefan; Sommer, Rainer
2018-05-01
Using continuum extrapolated lattice data we trace a family of running couplings in three-flavour QCD over a large range of scales from about 4 to 128 GeV. The scale is set by the finite space-time volume, so that recursive finite-size techniques can be applied, and Schrödinger functional (SF) boundary conditions enable direct simulations in the chiral limit. Compared to earlier studies we have improved on both statistical and systematic errors. Using the SF coupling to implicitly define a reference scale 1/L_0 ≈ 4 GeV through ḡ²(L_0) = 2.012, we quote L_0 Λ^{N_f=3}_{MS-bar} = 0.0791(21). This error is dominated by statistics; in particular, the remnant perturbative uncertainty is negligible and very well controlled, by connecting to infinite renormalization scale from different scales 2^n/L_0 for n = 0, 1, ..., 5. An intermediate step in this connection may involve any member of a one-parameter family of SF couplings. This provides an excellent opportunity for tests of perturbation theory, some of which have been published in a letter (ALPHA collaboration, M. Dalla Brida et al., Phys Rev Lett 117(18):182001, 2016). The results indicate that for our target precision of 3 per cent in L_0 Λ^{N_f=3}_{MS-bar}, a reliable estimate of the truncation error requires non-perturbative data for a sufficiently large range of values of α_s = ḡ²/(4π). In the present work we reach this precision by studying scales that vary by a factor 2^5 = 32, reaching down to α_s ≈ 0.1. We here provide the details of our analysis and an extended discussion.
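The running traced above can be caricatured with a two-loop perturbative integration of the coupling across the same factor-of-2 scale steps. This sketch uses the standard two-loop MS-bar beta coefficients for N_f = 3 and is only a perturbative stand-in for the non-perturbative step-scaling procedure of the paper:

```python
import math

# Two-loop MS-bar beta-function coefficients for N_f = 3:
# d(alpha)/d(ln mu^2) = -alpha^2 * (B0 + B1*alpha) + ...
B0 = (33 - 2 * 3) / (12 * math.pi)
B1 = (153 - 19 * 3) / (24 * math.pi ** 2)

def run_alpha(alpha, mu_from, mu_to, steps=10000):
    """Euler integration of the two-loop running of alpha_s
    in t = ln(mu^2), from scale mu_from to mu_to (GeV)."""
    t0, t1 = math.log(mu_from ** 2), math.log(mu_to ** 2)
    dt = (t1 - t0) / steps
    for _ in range(steps):
        alpha += -alpha ** 2 * (B0 + B1 * alpha) * dt
    return alpha
```

Running from 4 GeV up to 128 GeV (a factor 2^5 in scale) roughly halves the coupling, mirroring the range α_s ≈ 0.2 down to ≈ 0.1 covered non-perturbatively in the paper.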
Estimation of attitude sensor timetag biases
NASA Technical Reports Server (NTRS)
Sedlak, J.
1995-01-01
This paper presents an extended Kalman filter for estimating attitude sensor timing errors. Spacecraft attitude is determined by finding the mean rotation from a set of reference vectors in inertial space to the corresponding observed vectors in the body frame. Any timing errors in the observations can lead to attitude errors if either the spacecraft is rotating or the reference vectors themselves vary with time. The state vector here consists of the attitude quaternion, timetag biases, and, optionally, gyro drift rate biases. The filter models the timetags as random walk processes: their expectation values propagate as constants and white noise contributes to their covariance. Thus, this filter is applicable to cases where the true timing errors are constant or slowly varying. The observability of the state vector is studied first through an examination of the algebraic observability condition and then through several examples with simulated star tracker timing errors. The examples use both simulated and actual flight data from the Extreme Ultraviolet Explorer (EUVE). The flight data come from times when EUVE had a constant rotation rate, while the simulated data feature large angle attitude maneuvers. The tests include cases with timetag errors on one or two sensors, both constant and time-varying, and with and without gyro bias errors. Due to EUVE's sensor geometry, the observability of the state vector is severely limited when the spacecraft rotation rate is constant. In the absence of attitude maneuvers, the state elements are highly correlated, and the state estimate is unreliable. The estimates are particularly sensitive to filter mistuning in this case. The EUVE geometry, though, is a degenerate case having coplanar sensors and rotation vector. Observability is much improved and the filter performs well when the rate is either varying or noncoplanar with the sensors, as during a slew. 
Even with bad geometry and constant rates, if gyro biases are independently known, the timetag error for a single sensor can be accurately estimated as long as its boresight is not too close to the spacecraft rotation axis.
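The random-walk timetag model described above amounts to a constant state propagation with additive process noise, followed by a standard Kalman measurement update. A scalar sketch (all symbols and values illustrative, not the EUVE filter implementation) is:

```python
def propagate_bias(x, P, q, dt):
    """Random-walk model for a timetag bias: the expectation value
    propagates as a constant while white noise of strength q inflates
    the covariance, letting the filter track slowly varying errors."""
    return x, P + q * dt

def update_scalar(x, P, z, h, r):
    """Scalar Kalman measurement update for z = h*x + noise(var r)."""
    s = h * P * h + r          # innovation variance
    k = P * h / s              # Kalman gain
    x_new = x + k * (z - h * x)
    P_new = (1.0 - k * h) * P
    return x_new, P_new
```

Between attitude-sensor observations the bias covariance grows, and each observation that is sensitive to the timetag (e.g. during a slew) shrinks it again; with poor observability the update barely reduces P, matching the degraded performance reported for constant rotation rates.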
Threshold and Jet Radius Joint Resummation for Single-Inclusive Jet Production
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Xiaohui; Moch, Sven-Olaf; Ringer, Felix
2017-11-20
Here, we present the first threshold and jet radius jointly resummed cross section for single-inclusive hadronic jet production. We work at next-to-leading logarithmic accuracy and our framework allows for a systematic extension beyond the currently achieved precision. Long-standing numerical issues are overcome by performing the resummation directly in momentum space within soft collinear effective theory. We present the first numerical results for the LHC and observe an improved description of the available data. Our results are of immediate relevance for LHC precision phenomenology including the extraction of parton distribution functions and the QCD strong coupling constant.
Accurate determinations of alpha(s) from realistic lattice QCD.
Mason, Q; Trottier, H D; Davies, C T H; Foley, K; Gray, A; Lepage, G P; Nobes, M; Shigemitsu, J
2005-07-29
We obtain a new value for the QCD coupling constant by combining lattice QCD simulations with experimental data for hadron masses. Our lattice analysis is the first to (1) include vacuum polarization effects from all three light-quark flavors (using MILC configurations), (2) include third-order terms in perturbation theory, (3) systematically estimate fourth and higher-order terms, (4) use an unambiguous lattice spacing, and (5) use an O(a²)-accurate QCD action. We use 28 different (but related) short-distance quantities to obtain α_MS^(5)(M_Z) = 0.1170(12).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yuri, Yosuke, E-mail: yuri.yosuke@jaea.go.jp
Three-dimensional (3D) ordering of charged-particle beams circulating in a storage ring is systematically studied with a molecular-dynamics simulation code. An ion beam can exhibit a 3D ordered configuration at ultralow temperature as a result of powerful 3D laser cooling. Various unique characteristics of the ordered beams, different from those of crystalline beams, are revealed in detail, such as the single-particle motion in the transverse and longitudinal directions, and the dependence of the tune depression and the Coulomb coupling constant on the operating points.
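The Coulomb coupling constant mentioned above compares the nearest-neighbor Coulomb energy to the thermal energy; values much greater than one signal a strongly coupled (ordered or crystalline) state. A minimal evaluation, using the Wigner-Seitz radius derived from the particle density (all parameter values illustrative), is:

```python
import math

E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
KB = 1.380649e-23            # Boltzmann constant, J/K

def coulomb_coupling(charge_state, density_m3, temperature_K):
    """Coulomb coupling constant Gamma = q^2 / (4*pi*eps0 * a * kB*T),
    with a the Wigner-Seitz radius from the number density.
    Gamma >> 1 indicates strong coupling (ordering)."""
    a = (3.0 / (4.0 * math.pi * density_m3)) ** (1.0 / 3.0)
    q = charge_state * E_CHARGE
    return q * q / (4.0 * math.pi * EPS0 * a * KB * temperature_K)
```

Since Gamma scales as 1/T, cooling a beam by three orders of magnitude raises the coupling constant by the same factor, which is why powerful laser cooling is the route to ordered beams.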
ANSYS simulation of the capacitance coupling of quartz tuning fork gyroscope
NASA Astrophysics Data System (ADS)
Zhang, Qing; Feng, Lihui; Zhao, Ke; Cui, Fang; Sun, Yu-nan
2013-12-01
Coupling error is one of the main error sources of the quartz tuning fork gyroscope. The mechanism of capacitive coupling error is analyzed in this article. The structure of the quartz tuning fork is modeled with the Finite Element Method (FEM) in ANSYS. The voltage output induced by the capacitive coupling is simulated with harmonic analysis, and the influence of the capacitive coupling between the drive and sense electrodes on the electrical and mechanical parameters is examined with transient analysis.
A Measurement of the Michel Parameters in Leptonic Decays of the Tau
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ammar, R.; Baringer, P.; Bean, A.
1997-06-01
We have measured the spectral shape Michel parameters ρ and η using leptonic decays of the τ recorded by the CLEO II detector. Assuming e-μ universality in the vector-like couplings, we find ρ_eμ = 0.735 ± 0.013 ± 0.008 and η_eμ = -0.015 ± 0.061 ± 0.062, where the first error is statistical and the second systematic. We also present measurements of the parameters for the e and μ final states separately. © 1997 The American Physical Society
Small parameters in infrared quantum chromodynamics
NASA Astrophysics Data System (ADS)
Peláez, Marcela; Reinosa, Urko; Serreau, Julien; Tissier, Matthieu; Wschebor, Nicolás
2017-12-01
We study the long-distance properties of quantum chromodynamics in the Landau gauge in an expansion in powers of the three-gluon, four-gluon, and ghost-gluon couplings, but without expanding in the quark-gluon coupling. This is motivated by two observations. First, the gauge sector is well described by perturbation theory in the context of a phenomenological model with a massive gluon. Second, the quark-gluon coupling is significantly larger than those in the gauge sector at large distances. In order to resum the contributions of the remaining infinite set of QED-like diagrams, we further expand the theory in 1/N_c, where N_c is the number of colors. At leading order, this double expansion leads to the well-known rainbow approximation for the quark propagator. We take advantage of the systematic expansion to get a renormalization-group improvement of the rainbow resummation. A simple numerical solution of the resulting coupled set of equations reproduces the phenomenology of spontaneous chiral symmetry breaking: for a sufficiently large quark-gluon coupling constant, the constituent quark mass saturates when its valence mass approaches zero. We find very good agreement with lattice data for the scalar part of the propagator and explain why the vectorial part is poorly reproduced.
Errors in radial velocity variance from Doppler wind lidar
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, H.; Barthelmie, R. J.; Doubrawa, P.
2016-08-29
A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.
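The dependence of the variance errors on autocorrelation can be reproduced in a toy Monte Carlo with an AR(1) surrogate series: stronger autocorrelation reduces the effective sample size, inflating the random error (spread) of the sample variance and introducing a small systematic bias. This is an illustrative sketch, not the paper's lidar simulation:

```python
import math
import random

def ar1_series(n, rho, sigma=1.0, seed=0):
    """AR(1) surrogate for an autocorrelated radial-velocity record,
    initialized from the stationary distribution (unit variance)."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, sigma)]
    innov = sigma * math.sqrt(1.0 - rho * rho)
    for _ in range(n - 1):
        x.append(rho * x[-1] + rng.gauss(0.0, innov))
    return x

def variance_errors(n, rho, trials=200):
    """Systematic error (mean bias) and random error (spread) of the
    sample variance over records of length n, relative to the true
    variance of 1.0."""
    ests = []
    for t in range(trials):
        x = ar1_series(n, rho, seed=t)
        m = sum(x) / n
        ests.append(sum((v - m) ** 2 for v in x) / (n - 1))
    mean = sum(ests) / trials
    spread = math.sqrt(sum((e - mean) ** 2 for e in ests) / (trials - 1))
    return mean - 1.0, spread
```

Comparing a strongly autocorrelated record (rho = 0.9) with white noise (rho = 0) at the same record length shows a much larger random error in the autocorrelated case, qualitatively matching the paper's conclusion that the random error, not the systematic one, dominates for realistic durations.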
Non-intrusive high voltage measurement using slab coupled optical sensors
NASA Astrophysics Data System (ADS)
Stan, Nikola; Chadderdon, Spencer; Selfridge, Richard H.; Schultz, Stephen M.
2014-03-01
We present a non-intrusive optical fiber sensor for measuring high-voltage transients. The sensor converts the unknown voltage to an electric field, which is then measured using a slab-coupled optical fiber sensor (SCOS). Because everything in the sensor except the electrodes is made of dielectric materials, and because the field sensor is small, the sensor is minimally perturbing to the measured voltage. We present the details of the sensor design, which eliminates arcing and minimizes local dielectric breakdown using Teflon blocks and insulation of the whole structure with transformer oil. The structure has a capacitance of less than 3 pF and a resistance greater than 10 GΩ. We show the measurement of a 66.5 kV pulse with a 32.6 μs time constant. The measurement matches the expected value of 67.8 kV with less than 2% error.
Spray ignition measurements in a constant volume combustion vessel under engine-relevant conditions
NASA Astrophysics Data System (ADS)
Ramesh, Varun
Pressure-based and optical diagnostics for ignition delay (ID) measurement of a diesel spray from a multi-hole nozzle were investigated in a constant volume combustion vessel (CVCV) at conditions similar to those in a conventional diesel engine at the start of injection (SOI). It was first hypothesized that, compared to an engine, the shorter ID in a CVCV was caused by NO, a byproduct of premixed combustion. The presence of a significant concentration of NO+NO2 was confirmed experimentally and by using a multi-zone model of premixed combustion. Experiments measuring the effect of NO on ID were performed at conditions relevant to a conventional diesel engine. Depending on the temperature regime and the nature of the fuel, NO addition was found to advance or retard ignition. Constant volume ignition simulations were capable of describing the observed trends, although the magnitudes differed because of physical processes involved in spray ignition that were not modeled in the current study. The results of the study showed that ID is sensitive to low NO concentrations (<100 ppm) in the low-temperature regime. A second source of uncertainty in pressure-based ID measurement is the systematic error associated with the correction used to account for the speed of sound. Simultaneous measurements of volumetric OH chemiluminescence (OHC) and pressure during spray ignition found the OHC to closely resemble the pressure-based heat release rate for the full combustion duration. The start of OHC was consistently 100 ms shorter than the pressure-based ID for all fuels and conditions tested. Experiments were also conducted to measure the location and timing of high-temperature ignition and the steady-state lift-off length by high-speed imaging of OHC during spray ignition. The delay period calculated using the measured ignition location and the bulk average speed of sound was in agreement with the delay between OHC and the pressure-based ID.
Results of the study show that start of OHC is coupled to detectable heat release and the two measurements are correlated by the time required for the pressure wave to propagate at the speed of sound between the ignition site and the transducer.
Low-energy pion-nucleon scattering
NASA Astrophysics Data System (ADS)
Gibbs, W. R.; Ai, Li; Kaufmann, W. B.
1998-02-01
An analysis of low-energy charged pion-nucleon data from recent π+/-p experiments is presented. From the scattering lengths and the Goldberger-Miyazawa-Oehme (GMO) sum rule we find a value of the pion-nucleon coupling constant of f2=0.0756+/-0.0007. We also find, contrary to most previous analyses, that the scattering volumes for the P31 and P13 partial waves are equal, within errors, corresponding to a symmetry found in the Hamiltonian of many theories. For the potential models used, the amplitudes are extrapolated into the subthreshold region to estimate the value of the Σ term. Off-shell amplitudes are also provided.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soudackov, Alexander V.; Hammes-Schiffer, Sharon
2015-11-21
Rate constant expressions for vibronically nonadiabatic proton transfer and proton-coupled electron transfer reactions are presented and analyzed. The regimes covered include electronically adiabatic and nonadiabatic reactions, as well as high-frequency and low-frequency proton donor-acceptor vibrational modes. These rate constants differ from previous rate constants derived with the cumulant expansion approach in that the logarithmic expansion of the vibronic coupling in terms of the proton donor-acceptor distance includes a quadratic as well as a linear term. The analysis illustrates that inclusion of this quadratic term within the cumulant expansion framework may significantly impact the rate constants at high temperatures for proton transfer interfaces with soft proton donor-acceptor modes that are associated with small force constants and weak hydrogen bonds. The effects of the quadratic term may also become significant in these regimes when using the vibronic coupling expansion in conjunction with a thermal averaging procedure for calculating the rate constant. In this case, however, the expansion of the coupling can be avoided entirely by calculating the couplings explicitly for the range of proton donor-acceptor distances sampled. The effects of the quadratic term for weak hydrogen-bonding systems are less significant for more physically realistic models that prevent the sampling of unphysically short proton donor-acceptor distances. Additionally, the rigorous relation between the cumulant expansion and thermal averaging approaches is clarified. In particular, the cumulant expansion rate constant includes effects from dynamical interference between the proton donor-acceptor and solvent motions and becomes equivalent to the thermally averaged rate constant when these dynamical effects are neglected.
This analysis identifies the regimes in which each rate constant expression is valid and thus will be important for future applications to proton transfer and proton-coupled electron transfer in chemical and biological processes.
A procedure for the significance testing of unmodeled errors in GNSS observations
NASA Astrophysics Data System (ADS)
Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling
2018-01-01
It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. A first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, the stationary signal, and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
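The white-noise component mentioned above is the simplest to test for. As an illustration only (the paper's actual combined testing procedure is not reproduced here), a portmanteau (Ljung-Box) test can flag residuals whose autocorrelation is inconsistent with white noise; the simulated "unmodeled" signal below is a hypothetical moving-average process, not real GNSS data.

```python
import numpy as np
from scipy import stats

def ljung_box_white_noise(resid, max_lag=20, alpha=0.05):
    """Portmanteau (Ljung-Box) test: H0 = residuals are white noise.
    Returns (Q statistic, p-value, reject_H0)."""
    x = np.asarray(resid, float) - np.mean(resid)
    n = len(x)
    denom = np.dot(x, x)
    lags = np.arange(1, max_lag + 1)
    r = np.array([np.dot(x[:-k], x[k:]) / denom for k in lags])
    q = n * (n + 2) * np.sum(r**2 / (n - lags))
    p = stats.chi2.sf(q, df=max_lag)
    return q, p, p < alpha

rng = np.random.default_rng(1)
white = rng.normal(size=2000)
# Strongly correlated signal standing in for a residual unmodeled error
colored = np.convolve(rng.normal(size=2100), np.ones(100) / 100, "valid")
colored = colored + 0.01 * rng.normal(size=len(colored))

_, p_w, rej_w = ljung_box_white_noise(white)
_, p_c, rej_c = ljung_box_white_noise(colored)
print(rej_w, rej_c)
```

The correlated series is rejected as white noise with an essentially zero p-value, while pure white noise is (with the nominal 5% false-alarm rate) retained.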
Test of the cosmic transparency with the standard candles and the standard ruler
NASA Astrophysics Data System (ADS)
Chen, Jun
In this paper, the cosmic transparency is constrained by using the latest baryon acoustic oscillation (BAO) data and the type Ia supernova data with a model-independent method. We find that a transparent universe is consistent with observational data at the 1σ confidence level, except for the case of BAO+ Union 2.1 without the systematic errors where a transparent universe is favored only at the 2σ confidence level. To investigate the effect of the uncertainty of the Hubble constant on the test of the cosmic opacity, we assume h to be a free parameter and obtain that the observations favor a transparent universe at the 1σ confidence level.
Low energy determination of the QCD strong coupling constant on the lattice
Maezawa, Yu; Petreczky, Peter
2016-09-28
Here we present a determination of the strong coupling constant from lattice QCD using the moments of pseudo-scalar charmonium correlators calculated with a highly improved staggered quark action. We obtain a value αs(μ = mc) = 0.3397(56), which is the lowest-energy determination of the strong coupling constant so far.
NASA Astrophysics Data System (ADS)
He, Yingwei; Li, Ping; Feng, Guojin; Cheng, Li; Wang, Yu; Wu, Houping; Liu, Zilong; Zheng, Chundi; Sha, Dingguo
2010-11-01
For measuring large-aperture optical system transmittance, a novel sub-aperture scanning machine with double rotating arms (SSMDA) was designed to obtain a sub-aperture beam spot. Full-aperture transmittance measurements of an optical system can be achieved by applying sub-aperture beam-spot scanning technology. A mathematical model of the SSMDA based on a homogeneous coordinate transformation matrix is established to develop a detailed methodology for analyzing the beam-spot scanning errors. The error analysis methodology considers two fundamental sources of scanning errors, namely (1) length systematic errors and (2) rotational systematic errors. For the systematic errors of the parameters given beforehand, the computed scanning errors lie between -0.007 and 0.028 mm for scanning radii no larger than 400.000 mm. The results offer a theoretical and data basis for research on the transmission characteristics of large optical systems.
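A homogeneous-coordinate error model of this kind can be sketched in a few lines. The two-arm geometry, nominal dimensions, and error magnitudes below are hypothetical, not the SSMDA's actual parameters; the point is only how length and rotational systematic errors propagate through the kinematic chain to the beam-spot position.

```python
import numpy as np

def rot(theta):
    """Homogeneous 2-D rotation matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def trans(x, y):
    """Homogeneous 2-D translation matrix."""
    return np.array([[1.0, 0.0, x], [0.0, 1.0, y], [0.0, 0.0, 1.0]])

def spot(theta1, L1, theta2, L2):
    """Beam-spot position for two serially rotating arms (kinematic chain)."""
    T = rot(theta1) @ trans(L1, 0.0) @ rot(theta2) @ trans(L2, 0.0)
    return (T @ np.array([0.0, 0.0, 1.0]))[:2]

# Nominal configuration (illustrative values: mm and rad)
th1, L1, th2, L2 = 0.3, 250.0, -0.5, 150.0

# Hypothetical systematic errors: 0.01 mm length offsets, 50 urad angle offsets
dL, dth = 0.01, 5e-5
err = spot(th1 + dth, L1 + dL, th2 + dth, L2 + dL) - spot(th1, L1, th2, L2)
print(err)  # beam-spot scanning error vector, mm
```

With these assumed error magnitudes the resulting spot error is a few hundredths of a millimetre, the same order as the range quoted in the abstract.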
Enabling full-field physics-based optical proximity correction via dynamic model generation
NASA Astrophysics Data System (ADS)
Lam, Michael; Clifford, Chris; Raghunathan, Ananthan; Fenger, Germain; Adam, Kostas
2017-07-01
As extreme ultraviolet lithography becomes closer to reality for high volume production, its peculiar modeling challenges related to both inter and intrafield effects have necessitated building an optical proximity correction (OPC) infrastructure that operates with field position dependency. Previous state-of-the-art approaches to modeling field dependency used piecewise constant models where static input models are assigned to specific x/y-positions within the field. OPC and simulation could assign the proper static model based on simulation-level placement. However, in the realm of 7 and 5 nm feature sizes, small discontinuities in OPC from piecewise constant model changes can cause unacceptable levels of edge placement errors. The introduction of dynamic model generation (DMG) can be shown to effectively avoid these dislocations by providing unique mask and optical models per simulation region, allowing a near continuum of models through the field. DMG allows unique models for electromagnetic field, apodization, aberrations, etc. to vary through the entire field and provides a capability to precisely and accurately model systematic field signatures.
Flash spectroscopy of purple membrane.
Xie, A H; Nagle, J F; Lozier, R H
1987-01-01
Flash spectroscopy data were obtained for purple membrane fragments at pH 5, 7, and 9 for seven temperatures from 5 degrees to 35 degrees C, at the magic angle for actinic versus measuring beam polarizations, at fifteen wavelengths from 380 to 700 nm, and for about five decades of time from 1 microsecond to completion of the photocycle. Signal-to-noise ratios are as high as 500. Systematic errors involving beam geometries, light scattering, absorption flattening, photoselection, temperature fluctuations, partial dark adaptation of the sample, unwanted actinic effects, and cooperativity were eliminated, compensated for, or are shown to be irrelevant for the conclusions. Using nonlinear least squares techniques, all data at one temperature and one pH were fitted to sums of exponential decays, which is the form required if the system obeys conventional first-order kinetics. The rate constants obtained have well behaved Arrhenius plots. Analysis of the residual errors of the fitting shows that seven exponentials are required to fit the data to the accuracy of the noise level. PMID:3580488
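Fitting sums of exponential decays by nonlinear least squares, as described above, can be sketched on synthetic data. The rate constants, amplitudes, and noise level below are illustrative, not values from the purple-membrane measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def multi_exp(t, *p):
    """Sum of exponential decays plus baseline: p = (a1, k1, a2, k2, ..., c)."""
    c = p[-1]
    y = np.full_like(t, c)
    for a, k in zip(p[:-1:2], p[1:-1:2]):
        y = y + a * np.exp(-k * t)
    return y

# Synthetic two-exponential trace, logarithmically sampled from 1 us to 1 s
rng = np.random.default_rng(2)
t = np.logspace(-6, 0, 400)
true = (0.8, 2.0e4, 0.3, 50.0, 0.05)          # a1, k1 [1/s], a2, k2 [1/s], baseline
y = multi_exp(t, *true) + rng.normal(0.0, 0.002, t.size)  # high signal-to-noise

popt, _ = curve_fit(multi_exp, t, y, p0=(1.0, 1e4, 0.2, 100.0, 0.0))
print(popt)  # fitted amplitudes, rate constants, and baseline
```

Repeating such fits at several temperatures gives rate constants whose logarithms can be plotted against inverse temperature (an Arrhenius plot), as done in the study; deciding how many exponentials the data support requires examining the residuals, as the abstract describes.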
NASA Technical Reports Server (NTRS)
Ricks, Douglas W.
1993-01-01
There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.
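The Fourier-optics connection between random surface errors and scattering can be illustrated numerically: treat the random etch-depth error as a phase error across the pupil and Fourier transform the field. The wavelength, refractive index, and error amplitude below are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D pupil with random etch-depth (phase) errors; illustrative parameters
n = 4096
wavelength = 0.633e-6                      # m
depth_err = rng.normal(0.0, 10e-9, n)      # 10 nm rms random etch-depth error
phase = 2 * np.pi * (1.5 - 1.0) * depth_err / wavelength   # n = 1.5 material

field = np.exp(1j * phase)
far = np.fft.fftshift(np.fft.fft(field)) / n   # far-field amplitude
intensity = np.abs(far)**2                     # normalized so total power = 1

# On-axis (specular) intensity vs total power scattered into other angles
on_axis = intensity[n // 2]
scattered = intensity.sum() - on_axis
print(on_axis, scattered)
```

For small rms phase error sigma_phi, the on-axis intensity is approximately exp(-sigma_phi^2) and the scattered power is approximately sigma_phi^2, the standard small-perturbation result; systematic (deterministic) errors would instead concentrate light into discrete diffraction orders.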
Measurement of a cosmographic distance ratio with galaxy and cosmic microwave background lensing
Miyatake, Hironao; Madhavacheril, Mathew S.; Sehgal, Neelima; ...
2017-04-17
We measure the gravitational lensing shear signal around dark matter halos hosting constant mass galaxies using light sources at z~1 (background galaxies) and at the surface of last scattering at z~1100 (the cosmic microwave background). The galaxy shear measurement uses data from the CFHTLenS survey, and the microwave background shear measurement uses data from the Planck satellite. The ratio of shears from these cross-correlations provides a purely geometric distance measurement across the longest possible cosmological lever arm. This is because the matter distribution around the halos, including uncertainties in galaxy bias and systematic errors such as miscentering, cancels in the ratio for halos in thin redshift slices. We measure this distance ratio in three different redshift slices of the constant mass (CMASS) sample and combine them to obtain a 17% measurement of the distance ratio, r = 0.390 +0.070 -0.062, at an effective redshift of z = 0.53. This result is consistent with the predicted ratio from the Planck best-fit cold dark matter model with a cosmological constant, r = 0.419.
The DiskMass Survey. II. Error Budget
NASA Astrophysics Data System (ADS)
Bershady, Matthew A.; Verheijen, Marc A. W.; Westfall, Kyle B.; Andersen, David R.; Swaters, Rob A.; Martinsson, Thomas
2010-06-01
We present a performance analysis of the DiskMass Survey. The survey uses collisionless tracers in the form of disk stars to measure the surface density of spiral disks, to provide an absolute calibration of the stellar mass-to-light ratio (Υ*), and to yield robust estimates of the dark-matter halo density profile in the inner regions of galaxies. We find that a disk inclination range of 25°-35° is optimal for our measurements, consistent with our survey design to select nearly face-on galaxies. Uncertainties in disk scale heights are significant, but can be estimated from radial scale lengths to 25% now, and more precisely in the future. We detail the spectroscopic analysis used to derive line-of-sight velocity dispersions, precise at low surface brightness, and accurate in the presence of composite stellar populations. Our methods take full advantage of large-grasp integral-field spectroscopy and an extensive library of observed stars. We show that the baryon-to-total mass fraction (F_bar) is not a well-defined observational quantity because it is coupled to the halo mass model. This remains true even when the disk mass is known and spatially extended rotation curves are available. In contrast, the fraction of the rotation speed supplied by the disk at 2.2 scale lengths (disk maximality) is a robust observational indicator of the baryonic disk contribution to the potential. We construct the error budget for the key quantities: dynamical disk mass surface density (Σ_dyn), disk stellar mass-to-light ratio (Υ*^disk), and disk maximality (F*,max^disk ≡ V*,max^disk / V_c). Random and systematic errors in these quantities for individual galaxies will be ~25%, while survey precision for sample quartiles is reduced to 10%, largely devoid of systematic errors outside of distance uncertainties.
The structure and energetics of Cr(CO)6 and Cr(CO)5
NASA Technical Reports Server (NTRS)
Barnes, Leslie A.; Liu, Bowen; Lindh, Roland
1992-01-01
The geometric structure of Cr(CO)6 is optimized at the modified coupled pair functional (MCPF), single and double excitation coupled-cluster (CCSD) and CCSD(T) levels of theory (including a perturbational estimate for connected triple excitations), and the force constants for the totally symmetric representation are determined. The geometry of Cr(CO)5 is partially optimized at the MCPF, CCSD, and CCSD(T) levels of theory. Comparison with experimental data shows that the CCSD(T) method gives the best results for the structures and force constants, and that remaining errors are probably due to deficiencies in the one-particle basis sets used for CO. The total binding energies of Cr(CO)6 and Cr(CO)5 are also determined at the MCPF, CCSD, and CCSD(T) levels of theory. The CCSD(T) method gives a much larger total binding energy than either the MCPF or CCSD methods. An analysis of the basis set superposition error (BSSE) at the MCPF level of treatment points out limitations in the one-particle basis used. Calculations using larger basis sets reduce the BSSE, but the total binding energy of Cr(CO)6 is still significantly smaller than the experimental value, although the first CO bond dissociation energy of Cr(CO)6 is well described. An investigation of 3s3p correlation reveals only a small effect. In the largest basis set, the total CO binding energy of Cr(CO)6 is estimated to be 140 kcal/mol at the CCSD(T) level of theory, or about 86 percent of the experimental value. The remaining discrepancy between the experimental and theoretical value is probably due to limitations in the one-particle basis, rather than limitations in the correlation treatment. In particular an additional d function and an f function on each C and O are needed to obtain quantitative results. 
This is underscored by the fact that even using a very large primitive set (1042 primitive functions contracted to 300 basis functions), the superposition error for the total binding energy of Cr(CO)6 is 22 kcal/mol at the MCPF level of treatment.
Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors
2017-05-01
Yarnall, Timothy M.; Geisler, David J.; Schieler, Curt M.
… Massachusetts Avenue, Cambridge, MA 02139, USA
Free-space coupling to photonic lanterns is more tolerant to tilt errors and F-number mismatch than … these errors. Photonic lanterns provide a means for transitioning from the free-space regime to the single-mode fiber (SMF) regime by …
Improved estimation of anomalous diffusion exponents in single-particle tracking experiments
NASA Astrophysics Data System (ADS)
Kepten, Eldad; Bronshtein, Irena; Garini, Yuval
2013-05-01
The mean square displacement is a central tool in the analysis of single-particle tracking experiments, shedding light on various biophysical phenomena. Frequently, parameters are extracted by performing time averages on single-particle trajectories followed by ensemble averaging. This procedure, however, suffers from two systematic errors when applied to particles that perform anomalous diffusion. The first is significant at short-time lags and is induced by measurement errors. The second arises from the natural heterogeneity in biophysical systems. We show how to estimate and correct these two errors and improve the estimation of the anomalous parameters for the whole particle distribution. As a consequence, we manage to characterize ensembles of heterogeneous particles even for rather short and noisy measurements where regular time-averaged mean square displacement analysis fails. We apply this method to both simulations and in vivo measurements of telomere diffusion in 3T3 mouse embryonic fibroblast cells. The motion of telomeres is found to be subdiffusive with an average exponent constant in time. Individual telomere exponents are normally distributed around the average exponent. The proposed methodology has the potential to improve experimental accuracy while maintaining lower experimental costs and complexity.
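The short-lag error induced by measurement noise can be reproduced with a minimal simulation. The sketch below uses ordinary Brownian motion (alpha = 1) with Gaussian localization noise, and compares a naive power-law fit of the time-averaged MSD with one that includes the static noise offset; it illustrates the bias only, and is not the estimator proposed in the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def tamsd(traj, max_lag):
    """Time-averaged mean square displacement of a 2-D trajectory."""
    return np.array([np.mean(np.sum((traj[lag:] - traj[:-lag])**2, axis=1))
                     for lag in range(1, max_lag + 1)])

# Brownian trajectory (true alpha = 1) observed with localization noise sigma
n, D, dt, sigma = 20000, 0.05, 1.0, 0.5
steps = rng.normal(0.0, np.sqrt(2 * D * dt), (n, 2))
traj = np.cumsum(steps, axis=0) + rng.normal(0.0, sigma, (n, 2))

lags = np.arange(1, 51) * dt
msd = tamsd(traj, 50)

# Naive fit ignores the noise floor, biasing the exponent at short lags
naive = curve_fit(lambda t, D_, a: 4 * D_ * t**a, lags, msd, p0=(0.05, 1.0))[0]
# Corrected fit includes the static offset (2*sigma^2 per coordinate in 2-D)
corr = curve_fit(lambda t, D_, a, s2: 4 * D_ * t**a + 4 * s2,
                 lags, msd, p0=(0.05, 1.0, 0.25))[0]
print(naive[1], corr[1])  # fitted exponents: naive vs offset-corrected
```

Including the offset moves the recovered exponent back toward the true value of 1; the paper's second correction, for ensemble heterogeneity, is a separate step not sketched here.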
Tabelow, Karsten; König, Reinhard; Polzehl, Jörg
2016-01-01
Estimation of learning curves is ubiquitously based on proportions of correct responses within moving trial windows. Thereby, it is tacitly assumed that learning performance is constant within the moving windows, which, however, is often not the case. In the present study we demonstrate that violations of this assumption lead to systematic errors in the analysis of learning curves, and we explore the dependency of these errors on window size, different statistical models, and learning phase. To reduce these errors in the analysis of single-subject data as well as on the population level, we propose adequate statistical methods for the estimation of learning curves and the construction of confidence intervals, trial by trial. Applied to data from an avoidance learning experiment with rodents, these methods revealed performance changes occurring at multiple time scales within and across training sessions which were otherwise obscured in the conventional analysis. Our work shows that proper assessment of the behavioral dynamics of learning at high temporal resolution can shed new light on specific learning processes and thus makes it possible to refine existing learning concepts. It further disambiguates the interpretation of neurophysiological signal changes recorded during training in relation to learning. PMID:27303809
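The conventional moving-window estimate that the abstract criticizes, together with trial-wise binomial confidence bounds, can be sketched as follows. The window size, trial counts, and the choice of the Wilson score interval are illustrative assumptions, not the authors' methods.

```python
import numpy as np

def wilson_ci(k, n, z=1.96):
    """Wilson score interval for a binomial proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * np.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

def windowed_learning_curve(correct, w=20):
    """Proportion correct in a moving window with Wilson confidence bounds.
    Assumes (as criticized in the abstract) constant performance per window."""
    correct = np.asarray(correct, int)
    out = []
    for i in range(len(correct) - w + 1):
        k = correct[i:i + w].sum()
        lo, hi = wilson_ci(k, w)
        out.append((k / w, lo, hi))
    return np.array(out)

# Simulated trials whose true success probability ramps up during the session,
# violating the constant-performance assumption within each window
rng = np.random.default_rng(4)
p_true = np.linspace(0.1, 0.9, 200)
trials = rng.random(200) < p_true
curve = windowed_learning_curve(trials)
print(curve[0], curve[-1])  # (proportion, lower, upper) at start and end
```

Within any window straddling the ramp, the single estimated proportion smears out the underlying change, which is exactly the systematic error the paper's trial-by-trial methods are designed to avoid.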
NASA Technical Reports Server (NTRS)
Voss, P. B.; Stimpfle, R. M.; Cohen, R. C.; Hanisco, T. F.; Bonne, G. P.; Perkins, K. K.; Lanzendorf, E. J.; Anderson, J. G.; Salawitch, R. J.
2001-01-01
We examine inorganic chlorine (Cly) partitioning in the summer lower stratosphere using in situ ER-2 aircraft observations made during the Photochemistry of Ozone Loss in the Arctic Region in Summer (POLARIS) campaign. New steady state and numerical models estimate [ClONO2]/[HCl] using currently accepted photochemistry. These models are tightly constrained by observations with OH (parameterized as a function of solar zenith angle) substituting for modeled HO2 chemistry. We find that inorganic chlorine photochemistry alone overestimates observed [ClONO2]/[HCl] by approximately 55-60% at mid and high latitudes. On the basis of POLARIS studies of the inorganic chlorine budget, [ClO]/[ClONO2], and an intercomparison with balloon observations, the most direct explanation for the model-measurement discrepancy in Cly partitioning is an error in the reactions, rate constants, and measured species concentrations linking HCl and ClO (simulated [ClO]/[HCl] too high) in combination with a possible systematic error in the ER-2 ClONO2 measurement (too low). The high precision of our simulation (+/-15% 1-sigma for [ClONO2]/[HCl], which is compared with observations) increases confidence in the observations, photolysis calculations, and laboratory rate constants. These results, along with other findings, should lead to improvements in both the accuracy and precision of stratospheric photochemical models.
Biggs, Peter J
2003-04-01
The calibration and monthly QA of an electron-only linear accelerator dedicated to intra-operative radiation therapy were reviewed. Because this machine is calibrated prior to every procedure, there is no necessity to adjust the output calibration at any time except, perhaps, when the magnetron is changed, provided the machine output is reasonably stable. This gives a unique opportunity to study the dose output of the machine per monitor unit, the variation in the timer error, the flatness and symmetry of the beam, and the energy check as a function of time. The results show that, although the dose per monitor unit varied within +/- 2%, the timer error within +/- 0.005 MU, and the asymmetry within 1-2%, none of these parameters showed any systematic change with time. On the other hand, the energy check showed a linear drift with time for 6, 9, and 12 MeV (2.1, 3.5, and 2.5%, respectively, over 5 years), while at 15 and 18 MeV the energy check was relatively constant. It is further shown, based on annual calibrations and RPC TLD checks, that the energy of each beam is constant and that the energy check is therefore exquisitely sensitive. The consistency of the independent checks is demonstrated.
Heterogeneity: The key to forecasting material failure?
NASA Astrophysics Data System (ADS)
Vasseur, J.; Wadsworth, F. B.; Lavallée, Y.; Dingwell, D. B.
2014-12-01
Empirical mechanistic models have been applied to the description of the stress and strain rate upon failure for heterogeneous materials. The behaviour of porous rocks and their analogous two-phase viscoelastic suspensions are particularly well-described by such models. Nevertheless, failure cannot yet be predicted forcing a reliance on other empirical prediction tools such as the Failure Forecast Method (FFM). Measurable, accelerating rates of physical signals (e.g., seismicity and deformation) preceding failure are often used as proxies for damage accumulation in the FFM. Previous studies have already statistically assessed the applicability and performance of the FFM, but none (to the best of our knowledge) has done so in terms of intrinsic material properties. Here we use a rheological standard glass, which has been powdered and then sintered for different times (up to 32 hours) at high temperature (675°C) in order to achieve a sample suite with porosities in the range of 0.10-0.45 gas volume fraction. This sample suite was then subjected to mechanical tests in a uniaxial press at a constant strain rate of 10^-3 s^-1 and a temperature in the region of the glass transition. A dual acoustic emission (AE) rig has been employed to test the success of the FFM in these materials of systematically varying porosity. The pore-emanating crack model describes well the peak stress at failure in the elastic regime for these materials. We show that the FFM predicts failure within 0-15% error at porosities >0.2. However, when porosities are <0.2, the forecast error associated with predicting the failure time increases to >100%. We interpret these results as a function of the low efficiency with which strain energy can be released in the scenario where there are few or no heterogeneities from which cracks can propagate. These observations shed light on questions surrounding the variable efficacy of the FFM applied to active volcanoes.
In particular, they provide a systematic demonstration of the fact that a good understanding of the material properties is required. Thus, we wish to emphasize the need for a better coupling of empirical failure forecasting models with mechanical parameters, such as failure criteria for heterogeneous materials, and point to the implications of this for a broad range of material-based disciplines.
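The FFM referred to above, in its simplest linearized form, fits a straight line to the inverse rate of a precursory signal (e.g., AE event rate) and extrapolates to its zero crossing. Here is a minimal noise-free sketch, assuming the classic power-law exponent p = 2 for which the inverse rate is exactly linear in time; the times and rates are synthetic.

```python
import numpy as np

def ffm_failure_time(t, rate):
    """Linearized Failure Forecast Method (exponent p = 2): fit a straight
    line to the inverse signal rate and extrapolate to its zero crossing."""
    slope, intercept = np.polyfit(t, 1.0 / rate, 1)
    return -intercept / slope

# Synthetic precursory rate diverging as (tf - t)^-1 toward failure at tf
tf = 100.0
t = np.linspace(0.0, 90.0, 200)
rate = 1.0 / (tf - t)

t_pred = ffm_failure_time(t, rate)
print(t_pred)  # → 100.0 (exact for noise-free p = 2 data)
```

With noisy rates (or, as the abstract argues, in low-porosity materials where the acceleration is weak), the extrapolated zero crossing carries the large forecast errors the study quantifies.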
Alecu, I M; Zheng, Jingjing; Papajak, Ewa; Yu, Tao; Truhlar, Donald G
2012-12-20
Multistructural canonical variational transition-state theory with small-curvature multidimensional tunneling (MS-CVT/SCT) is employed to calculate thermal rate constants for hydrogen-atom abstraction from carbon-1 of n-butanol by the hydroperoxyl radical over the temperature range 250-2000 K. The M08-SO hybrid meta-GGA density functional was validated against CCSD(T)-F12a explicitly correlated wave function calculations with the jul-cc-pVTZ basis set. It was then used to compute the properties of all stationary points and the energies and Hessians of a few nonstationary points along the reaction path, which were then used to generate a potential energy surface by the multiconfiguration Shepard interpolation (MCSI) method. The internal rotations in the transition state for this reaction (like those in the reactant alcohol) are strongly coupled to each other and generate multiple stable conformations, which make important contributions to the partition functions. It is shown that neglecting to account for the multiple-structure effects and torsional potential anharmonicity effects that arise from the torsional modes would lead to order-of-magnitude errors in the calculated rate constants at temperatures of interest in combustion.
Is the compressibility positive or negative in a strongly-coupled dusty plasma?
NASA Astrophysics Data System (ADS)
Goree, John; Ruhunusiri, W. D. Suranga
2014-10-01
In dusty plasmas, dust particles are often strongly coupled with a large Coulomb coupling parameter Γ, while the electrons and ions that share the same volume are weakly coupled. In most substances, compressibility β must be positive; otherwise there would be an explosive instability. In a multicomponent plasma, however, one could entertain the idea that β for a single strongly coupled component could be negative, provided that the restoring force from charge separation overwhelms the destabilizing effect. Indeed, the compressibility for a strongly-coupled dust component is assumed to be negative in three theories we identified in the literature for dust acoustic waves. These theories use a multi-fluid model, with an OCP (one component plasma) or Yukawa-OCP approach for the dust fluid. We performed dusty plasma experiments designed to determine the value of the inverse compressibility β-1, and in particular its sign. We fit an experimentally measured dispersion relation to theory, with β-1 as a free parameter, taking into account the systematic errors in the experiment and model. We find that β-1 is either positive, or it has a negligibly small negative value, which is not in agreement with the assumptions of the OCP-based theories. Supported by NSF and NASA.
Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks
Besada, Juan A.
2017-01-01
In this paper, a complete and rigorous mathematical model for secondary surveillance radar (SSR) systematic errors (biases) is developed. The model takes into account the physical effects that systematically affect the measurement processes. The azimuth biases are calculated from the physical errors of the antenna calibration and of the angle-measurement device. The distance bias is calculated from the signal delay produced by the refractivity index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It is shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model improves the accuracy of the bias estimation. PMID:28934157
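The structure of such a parametrized bias model can be sketched as follows. Every coefficient below is a hypothetical placeholder, not a value calibrated in the paper; the point is only how clock, refraction, and antenna terms compose into the measured range and azimuth.

```python
import math

# Schematic SSR bias model in the spirit of the paper's decomposition;
# all coefficients are hypothetical placeholders, NOT calibrated values.
def biased_measurement(true_range_m, true_az_rad,
                       clock_bias_m=45.0,       # transponder/clock delay
                       refractivity_ppm=320.0,  # tropospheric refraction
                       az_offset_rad=1.5e-3,    # antenna-north misalignment
                       ecc_amp_rad=0.4e-3,      # encoder eccentricity amplitude
                       ecc_phase_rad=0.7):      # encoder eccentricity phase
    """Map a true (range, azimuth) to the biased measurement an SSR
    with these systematic errors would report."""
    range_meas = (true_range_m + clock_bias_m
                  + refractivity_ppm * 1e-6 * true_range_m)
    az_meas = (true_az_rad + az_offset_rad
               + ecc_amp_rad * math.sin(true_az_rad + ecc_phase_rad))
    return range_meas, az_meas

# A 100 km target due north picks up 45 m of clock bias plus 32 m of
# refraction bias:
r_meas, az_meas = biased_measurement(100_000.0, 0.0)
print(r_meas, az_meas)
```

A bias-estimation stage of the kind the paper evaluates would fit these parameters from overlapping tracks seen by several sensors; here the sketch only illustrates the structure of the error terms.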
Sources of variability and systematic error in mouse timing behavior.
Gallistel, C R; King, Adam; McDonald, Robert
2004-01-01
In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.
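The authors' conclusion can be illustrated with a toy model: if the start and stop decision criteria are multiplicative fractions of the target time placed asymmetrically about it (the fractions below are hypothetical, not fitted values from the paper), then the error in the mid-point of the bracket grows in proportion to the target, i.e., it is scalar.

```python
def peak_center(target_s, start_frac=0.75, stop_frac=1.45):
    """Mid-point of the start/stop response bracket for one target time.
    start_frac and stop_frac are hypothetical multiplicative decision
    criteria placed asymmetrically around the target."""
    start, stop = start_frac * target_s, stop_frac * target_s
    return (start + stop) / 2

for t in (5, 15, 45):              # the paper's target intervals (s)
    print(t, peak_center(t) - t)   # systematic error is proportional to t
```

With these fractions the center falls at 1.1 × target, so the systematic error is a constant 10% of the target time at all three intervals.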
Spin relaxation measurements of electrostatic bias in intermolecular exploration
NASA Astrophysics Data System (ADS)
Teng, Ching-Ling; Bryant, Robert G.
2006-04-01
We utilize the paramagnetic contribution to proton spin-lattice relaxation rate constants induced by freely diffusing charged paramagnetic centers to investigate the effect of charge on the intermolecular exploration of a protein by a small molecule. The proton NMR spectrum provided 255 resolved resonances that report how the local concentration of the explorer molecule varies with position on the protein surface. The measurements integrate over local dielectric constant variations and, in principle, provide an experimental characterization of the surface free-energy sampling biases introduced by the charge distribution on the protein. The experimental results for ribonuclease A obtained using positively charged, neutral, and negatively charged small nitroxide radicals are qualitatively similar to those expected from electrostatic calculations. However, while systematic electrostatic trends are apparent, the three different combinations of the data sets do not yield internally consistent values for the electrostatic contribution to the intermolecular free energy. We attribute this failure to the weakness of the electrostatic sampling bias for charged nitroxides in water and to local variations in the effective translational diffusion constant at the water-protein interface, which enters the nuclear spin relaxation equations for the nitroxide-proton dipolar coupling.
Error analysis and system optimization of non-null aspheric testing system
NASA Astrophysics Data System (ADS)
Luo, Yongjie; Yang, Yongying; Liu, Dong; Tian, Chao; Zhuo, Yongmo
2010-10-01
A non-null aspheric testing system employing a partial null lens (PNL) and a reverse iterative optimization reconstruction (ROR) technique is proposed in this paper. Based on system modeling in ray-tracing software, the parameters of each optical element are optimized, which makes the system model more precise. The systematic error of the non-null aspheric testing system is analyzed and can be categorized into two types: the error due to the surface parameters of the PNL in the system model, and the remainder, attributed to the non-null interferometer and obtained by the approach of error-storage subtraction. Experimental results show that, after the systematic error is removed from the test result, the aspheric surface is precisely reconstructed by the ROR technique, and accounting for the systematic error greatly increases the test accuracy of the non-null aspheric testing system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Qinghui; Chan, Maria F.; Burman, Chandra
2013-12-15
Purpose: Setting a proper margin is crucial not only for delivering the required radiation dose to a target volume, but also for reducing the unnecessary radiation to the adjacent organs at risk. This study investigated the independent one-dimensional symmetric and asymmetric margins between the clinical target volume (CTV) and the planning target volume (PTV) for linac-based single-fraction frameless stereotactic radiosurgery (SRS). Methods: The authors assumed a Dirac delta function for the systematic error of a specific machine and a Gaussian function for the residual setup errors. Margin formulas were then derived in detail to arrive at a suitable CTV-to-PTV margin for single-fraction frameless SRS. Such a margin ensured that the CTV would receive the prescribed dose in 95% of the patients. To validate our margin formalism, the authors retrospectively analyzed nine patients who were previously treated with noncoplanar conformal beams. Cone-beam computed tomography (CBCT) was used in the patient setup. The isocenter shifts between the CBCT and linac were measured for a Varian Trilogy linear accelerator for three months. For each plan, the authors shifted the isocenter of the plan in each direction by ±3 mm simultaneously to simulate the worst setup scenario. Subsequently, the asymptotic behavior of the CTV V80% for each patient was studied as the setup error approached the CTV-PTV margin. Results: The authors found that the proper margin for single-fraction frameless SRS cases with brain cancer was about 3 mm for the machine investigated in this study. The isocenter shifts between the CBCT and the linac remained almost constant over a period of three months for this specific machine. This confirmed our assumption that the machine systematic error distribution could be approximated as a delta function. This definition is especially relevant to a single-fraction treatment.
The prescribed dose coverage for all the patients investigated was 96.1% ± 5.5% with an extreme 3-mm setup error in all three directions simultaneously. It was found that the effect of the setup error on dose coverage was tumor location dependent. It mostly affected the tumors located in the posterior part of the brain, resulting in a minimum coverage of approximately 72%. This was entirely due to the unique geometry of the posterior head. Conclusions: Margin expansion formulas were derived for single-fraction frameless SRS such that the CTV would receive the prescribed dose in 95% of the patients treated for brain cancer. The margins defined in this study are machine-specific and account for nonzero mean systematic error. The margin for single-fraction SRS for a group of machines was also derived in this paper.
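The one-dimensional margin logic can be sketched numerically. The model below is a simplified reading of the abstract, not the authors' full dosimetric derivation: a fixed (delta-function) machine offset plus a Gaussian residual setup error, with the margin chosen as the smallest value that covers the total shift in 95% of patients.

```python
import math

def margin_1d(mu, sigma, coverage=0.95):
    """Smallest symmetric 1D margin m such that P(|mu + X| <= m) >= coverage,
    with a fixed (delta-function) machine offset mu and Gaussian residual
    setup error X ~ N(0, sigma^2). A simplified reading of the abstract,
    not the authors' exact formula."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    prob_inside = lambda m: phi((m - mu) / sigma) - phi((-m - mu) / sigma)
    lo, hi = 0.0, abs(mu) + 10.0 * sigma     # bracket, then bisect
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if prob_inside(mid) >= coverage:
            hi = mid
        else:
            lo = mid
    return hi

m_example = margin_1d(mu=1.0, sigma=1.0)     # e.g. both in mm
print(round(m_example, 2))                   # ~2.65 mm for these toy inputs
```

With mu = 0 the familiar two-sided Gaussian value 1.96σ is recovered; a nonzero machine offset pushes the required margin outward, which is the qualitative point of the machine-specific margins above.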
Nuclear binding energy using semi empirical mass formula
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ankita, E-mail: ankitagoyal@gmail.com; Suthar, B.
2016-05-06
In the present communication, the semi-empirical mass formula based on the liquid drop model is presented. Nuclear binding energies are calculated using the semi-empirical mass formula with the various sets of constants given by different researchers. We compare these calculated values with experimental data, and a comparative study using error plots is carried out to identify the set of constants that best reproduces the data and thus minimizes the error.
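The liquid-drop (Bethe-Weizsäcker) formula the abstract refers to can be sketched directly. The coefficient set below is one common textbook choice in MeV; the paper's point is precisely that different published sets yield different errors against experiment, so these defaults are illustrative only.

```python
import math

def binding_energy(A, Z, coeffs=(15.75, 17.8, 0.711, 23.7, 11.18)):
    """Semi-empirical (liquid drop) binding energy in MeV for mass number A
    and proton number Z. coeffs = (a_V, a_S, a_C, a_A, a_P) is one common
    textbook set; the paper compares several such published sets."""
    aV, aS, aC, aA, aP = coeffs
    N = A - Z
    # Pairing term: +delta for even-even, -delta for odd-odd, 0 for odd A.
    if A % 2 == 1:
        delta = 0.0
    elif Z % 2 == 0:
        delta = aP / math.sqrt(A)
    else:
        delta = -aP / math.sqrt(A)
    return (aV * A                           # volume term
            - aS * A ** (2 / 3)              # surface term
            - aC * Z * (Z - 1) / A ** (1 / 3)  # Coulomb term
            - aA * (N - Z) ** 2 / A          # asymmetry term
            + delta)                         # pairing term

b_fe56 = binding_energy(56, 26)
print(round(b_fe56, 1))   # ~495 MeV with this set; experiment: 492.26 MeV
```

Comparing such outputs against measured binding energies for many nuclei, one coefficient set at a time, reproduces the error-plot comparison the abstract describes.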
NASA Technical Reports Server (NTRS)
Koch, S. E.; Skillman, W. C.; Kocin, P. J.; Wetzel, P. J.; Brill, K. F.
1985-01-01
The synoptic scale performance characteristics of MASS 2.0 are determined by comparing filtered 12-24 hr model forecasts to same-case forecasts made by the National Meteorological Center's synoptic-scale Limited-area Fine Mesh model. Characteristics of the two systems are contrasted, and the analysis methodology used to determine statistical skill scores and systematic errors is described. The overall relative performance of the two models in the sample is documented, and important systematic errors uncovered are presented.
Anatomy of the Higgs fits: A first guide to statistical treatments of the theoretical uncertainties
NASA Astrophysics Data System (ADS)
Fichet, Sylvain; Moreau, Grégory
2016-04-01
The studies of the Higgs boson couplings based on the recent and upcoming LHC data open up a new window on physics beyond the Standard Model. In this paper, we propose a statistical guide to the consistent treatment of the theoretical uncertainties entering the Higgs rate fits. Both the Bayesian and frequentist approaches are systematically analysed in a unified formalism. We present analytical expressions for the marginal likelihoods, useful to implement simultaneously the experimental and theoretical uncertainties. We review the various origins of the theoretical errors (QCD, EFT, PDF, production mode contamination…). All these individual uncertainties are thoroughly combined with the help of moment-based considerations. The theoretical correlations among Higgs detection channels appear to affect the location and size of the best-fit regions in the space of Higgs couplings. We discuss the recurrent question of the shape of the prior distributions for the individual theoretical errors and find that a nearly Gaussian prior arises from the error combinations. We also develop the bias approach, which is an alternative to marginalisation providing more conservative results. The statistical framework to apply the bias principle is introduced and two realisations of the bias are proposed. Finally, depending on the statistical treatment, the Standard Model prediction for the Higgs signal strengths is found to lie within either the 68% or 95% confidence level region obtained from the latest analyses of the 7 and 8 TeV LHC datasets.
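For the Gaussian case, the marginalisation discussed above has a closed form that a short numerical check makes concrete: integrating out a Gaussian theory-error prior simply combines the experimental and theoretical widths in quadrature. This is a textbook result, not the paper's full treatment of correlations and non-Gaussian priors.

```python
import math

def marginal_loglike(mu, data, sig_exp, sig_th):
    """log L(mu) after marginalising delta ~ N(0, sig_th) out of
    data ~ N(mu + delta, sig_exp): a Gaussian whose width is the
    quadrature sum of the two errors."""
    sig = math.hypot(sig_exp, sig_th)
    return (-0.5 * ((data - mu) / sig) ** 2
            - math.log(sig * math.sqrt(2 * math.pi)))

def numeric_loglike(mu, data, sig_exp, sig_th, n=4001, span=10.0):
    """Brute-force quadrature over the theory nuisance parameter delta,
    as a cross-check of the closed form."""
    h = 2 * span * sig_th / (n - 1)
    total = 0.0
    for i in range(n):
        d = -span * sig_th + i * h
        like = (math.exp(-0.5 * ((data - mu - d) / sig_exp) ** 2)
                / (sig_exp * math.sqrt(2 * math.pi)))
        prior = (math.exp(-0.5 * (d / sig_th) ** 2)
                 / (sig_th * math.sqrt(2 * math.pi)))
        total += like * prior * h
    return math.log(total)

print(marginal_loglike(1.0, 1.2, 0.1, 0.05))
print(numeric_loglike(1.0, 1.2, 0.1, 0.05))
```

The two numbers agree to quadrature accuracy, illustrating why, for Gaussian priors, marginalisation over theoretical errors reduces to enlarging the likelihood width.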
A new systematic calibration method of ring laser gyroscope inertial navigation system
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu
2016-10-01
The inertial navigation system (INS) is the core component of both military and civil navigation systems. Before an INS is put into application, it must be calibrated in the laboratory to compensate for the repeatability errors caused by manufacturing. Discrete calibration methods cannot fulfill the high-accuracy calibration requirements of a mechanically dithered ring laser gyroscope navigation system mounted on shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated inertial measurement unit are given. Proper rotation arrangements are then chosen to establish linear relationships between the changes in velocity errors and the calibration parameter errors. Experiments were set up to compare the systematic errors obtained from the filtering calibration with those obtained from discrete calibration. The largest position and velocity errors of the filtering calibration are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for discrete calibration. These results validate the new systematic calibration method and demonstrate its importance for the optimal design and accuracy improvement of the calibration of mechanically dithered ring laser gyroscope inertial navigation systems.
Measuring Systematic Error with Curve Fits
ERIC Educational Resources Information Center
Rupright, Mark E.
2011-01-01
Systematic errors are often unavoidable in the introductory physics laboratory. As has been demonstrated in many papers in this journal, such errors can present a fundamental problem for data analysis, particularly when comparing the data to a given model. In this paper I give three examples in which my students use popular curve-fitting software…
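One version of such an exercise can be sketched with synthetic data. The free-fall setup and all numbers here are hypothetical, not from the paper: a constant timing offset biases a naive fit, while a model that carries the offset as an extra fit parameter recovers both g and the offset.

```python
import numpy as np

# Synthetic free-fall data (all numbers hypothetical, for illustration):
# distances d = 0.5*g*(t + t0)^2, where t0 is an unmodeled constant
# timing delay -- a classic constant systematic error.
g_true, t0_true = 9.81, 0.05                  # m/s^2, s
t = np.linspace(0.2, 1.0, 9)                  # recorded times (s)
d = 0.5 * g_true * (t + t0_true) ** 2         # fallen distances (m)

# Naive one-parameter fit d = 0.5*g*t^2 (ignores the offset): least
# squares gives g/2 = sum(d*t^2)/sum(t^4), biased high.
g_naive = 2.0 * np.sum(d * t**2) / np.sum(t**4)

# Two-parameter fit that models the systematic error: sqrt(d) is linear
# in t, sqrt(d) = sqrt(g/2)*(t + t0), so a straight-line fit recovers
# both g and t0.
slope, intercept = np.polyfit(t, np.sqrt(d), 1)
g_fit, t0_fit = 2.0 * slope**2, intercept / slope
print(g_naive, g_fit, t0_fit)
```

The comparison of the two fits is the pedagogical point: the residuals of the naive model expose the systematic error, and promoting it to a fit parameter measures it.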
Systematic Error Modeling and Bias Estimation
Zhang, Feihu; Knoll, Alois
2016-01-01
This paper analyzes the statistical properties of the systematic error in terms of range and bearing during the transformation process. Furthermore, we rely on a weighted nonlinear least-squares method to calculate the biases based on the proposed models. The results show the high performance of the proposed approach for error modeling and bias estimation. PMID:27213386
Ab initio structure prediction of silicon and germanium sulfides for lithium-ion battery materials
NASA Astrophysics Data System (ADS)
Hsueh, Connie; Mayo, Martin; Morris, Andrew J.
Conventional experiment-based approaches to materials discovery, which can rely heavily on trial and error, are time-intensive and costly. We discuss approaches to coupling experimental and computational techniques in order to systematize, automate, and accelerate the process of materials discovery, which is of particular relevance to developing new battery materials. We use the ab initio random structure searching (AIRSS) method to conduct a systematic investigation of Si-S and Ge-S binary compounds in order to search for novel materials for lithium-ion battery (LIB) anodes. AIRSS is a high-throughput, density-functional-theory-based approach to structure prediction that has been successful at predicting structures of LIB materials containing sulfur, silicon, and germanium. We propose a lithiation mechanism for Li-GeS2 anodes as well as report new, theoretically stable, layered and porous structures in the Si-S and Ge-S systems that pique experimental interest.
Systematic harmonic power laws inter-relating multiple fundamental constants
NASA Astrophysics Data System (ADS)
Chakeres, Donald; Buckhanan, Wayne; Andrianarijaona, Vola
2017-01-01
Power laws and harmonic systems are ubiquitous in physics. We hypothesize that 2, π, the electron, Bohr radius, Rydberg constant, neutron, fine structure constant, Higgs boson, top quark, kaons, pions, muon, tau, W, and Z, when scaled in a common single unit, are all inter-related by systematic harmonic power laws. This implies that if the power law is known, it is possible to derive a fundamental constant's scale in the absence of any direct experimental data for that constant. This is true for the case of the hydrogen constants. We created a power-law search engine, a computer program that randomly generated possible positive or negative powers, searching for cases in which the product of logical groups of constants equals 1, which confirms that they are physically valid. For 2, π, and the hydrogen constants, the search engine found Planck's constant, Coulomb's energy law, and the kinetic energy law. The product of ratios, each defined by two constants, was the standard general format. The search engine found systematic resonant power laws, based on partial harmonic-fraction powers of the neutron, for all of the constants, with products near 1 within their known experimental precision when utilized with appropriate hydrogen constants. We conclude that multiple fundamental constants are inter-related within a harmonic power-law system.
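The search-engine idea can be sketched in a few lines. The particle set, the common unit, the neutron scaling, and the exponent grid (thirds) below are all illustrative choices, not the authors' full constant list or precision analysis; the sketch only shows how exponent combinations with product near 1 can be enumerated.

```python
import itertools
import math

# Particle rest energies in a common unit (MeV); an illustrative subset only.
consts = {"electron": 0.5109989, "muon": 105.6584, "pion0": 134.9768,
          "neutron": 939.5654}

# Scale everything by the neutron, echoing the abstract's common scaling.
ratios = {k: v / consts["neutron"] for k, v in consts.items() if k != "neutron"}
names = sorted(ratios)

# Enumerate partial-fraction exponents (here: thirds from -3 to +3) and keep
# combinations whose product of powered ratios lies within ~0.1% of 1.
best = []
powers = [p / 3 for p in range(-9, 10)]
for combo in itertools.product(powers, repeat=len(names)):
    if all(p == 0 for p in combo):
        continue                          # skip the trivial identity
    log_prod = sum(p * math.log(ratios[n]) for n, p in zip(names, combo))
    if abs(log_prod) < 1e-3:
        best.append((combo, math.exp(log_prod)))

for combo, prod in best[:5]:
    print(dict(zip(names, combo)), prod)
```

A serious version would, as the abstract notes, test candidate products against the experimental precision of each constant rather than a fixed tolerance.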
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.
Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2 × 2 to 15 × 15 cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5σ to 1.5σ, which is 2-6 mm for the small spot size and 3.3-9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1σ resulted in a 2% maximum dose error; a SS larger than 1.25σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors).
Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures. However, random PE showed minimal dose error. Conclusions: Dose error dependence for PE was quantitatively and systematically characterized, and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility-specific spot position error thresholds.
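The single-spot perturbation scenario can be reproduced in a minimal one-dimensional sketch. The geometry and equal spot weights below are assumptions, far simpler than the clinical 3D plans; the sketch shows only the qualitative trend that the maximum percent dose error from one shifted Gaussian spot grows as the spacing-to-spot-size ratio increases.

```python
import numpy as np

def max_percent_dose_error(sigma, spacing, shift, n_spots=21):
    """1D sketch: a uniform field built from equally weighted Gaussian
    spots; shift the central spot by `shift` and return the maximum
    percent dose error relative to the nominal field maximum."""
    x = np.linspace(-5 * sigma, 5 * sigma, 2001)
    centers = (np.arange(n_spots) - n_spots // 2) * spacing
    gauss = lambda c: np.exp(-(x - c) ** 2 / (2 * sigma ** 2))
    nominal = sum(gauss(c) for c in centers)
    perturbed = nominal - gauss(0.0) + gauss(shift)   # move the middle spot
    return 100 * np.max(np.abs(perturbed - nominal)) / np.max(nominal)

# 4 mm (1 sigma) spots with a 1.2 mm position error, as in the abstract's
# example; the error increases with spot spacing:
for spacing_in_sigma in (1.0, 1.25, 1.5):
    print(spacing_in_sigma,
          round(max_percent_dose_error(4.0, spacing_in_sigma * 4.0, 1.2), 2))
```

The absolute percentages from this toy 1D model are larger than the volumetric 3D values quoted above; only the monotonic trend with spacing carries over.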
DOE Office of Scientific and Technical Information (OSTI.GOV)
Verma, Prakash; Morales, Jorge A., E-mail: jorge.morales@ttu.edu; Perera, Ajith
2013-11-07
Coupled cluster (CC) methods provide highly accurate predictions of molecular properties, but their high computational cost has precluded their routine application to large systems. Fortunately, recent computational developments in the ACES III program by the Bartlett group [the OED/ERD atomic integral package, the super instruction processor, and the super instruction architecture language] permit overcoming that limitation by providing a framework for massively parallel CC implementations. In that scheme, we are further extending those parallel CC efforts to systematically predict the three main electron spin resonance (ESR) tensors (A-, g-, and D-tensors), to be reported in a series of papers. In this paper inaugurating that series, we report our new ACES III parallel capabilities that calculate isotropic hyperfine coupling constants in 38 neutral, cationic, and anionic radicals that include the ¹¹B, ¹⁷O, ⁹Be, ¹⁹F, ¹H, ¹³C, ³⁵Cl, ³³S, ¹⁴N, ³¹P, and ⁶⁷Zn nuclei. Present parallel calculations are conducted at the Hartree-Fock (HF), second-order many-body perturbation theory [MBPT(2)], CC singles and doubles (CCSD), and CCSD with perturbative triples [CCSD(T)] levels using Roos augmented double- and triple-zeta atomic natural orbital basis sets. HF results consistently overestimate isotropic hyperfine coupling constants. However, inclusion of electron correlation effects in the simplest way via MBPT(2) provides significant improvements in the predictions, but not without occasional failures. In contrast, CCSD results are consistently in very good agreement with experimental results. Inclusion of perturbative triples to CCSD via CCSD(T) leads to small improvements in the predictions, which might not compensate for the extra computational effort at a non-iterative N⁷ scaling in CCSD(T).
The importance of these accurate computations of isotropic hyperfine coupling constants for elucidating experimental ESR spectra, interpreting spin-density distributions, and characterizing and identifying radical species is illustrated with our results for large organic radicals. Those include species relevant to organic chemistry, the petroleum industry, and biochemistry, such as the cyclohexyl, 1-adamantyl, and Zn-porphycene anion radicals, inter alia.
Design of a side coupled standing wave accelerating tube for NSTRI e-Linac
NASA Astrophysics Data System (ADS)
Zarei, S.; Abbasi Davani, F.; Lamehi Rachti, M.; Ghasemi, F.
2017-09-01
The design and construction of a 6 MeV electron linear accelerator (e-Linac) was defined at the Institute of Nuclear Science and Technology (NSTRI) for cargo inspection and medical applications. For this accelerator, a side-coupled standing wave tube resonant at a frequency of 2998.5 MHz in the π/2 mode was selected. In this article, the authors provide a step-by-step explanation of the design process for this tube. The design and simulation of the accelerating and coupling cavities were carried out in five steps: (1) separate design of the accelerating and coupling cavities, (2) design of the coupling aperture between the cavities, (3) design of the entire structure for resonance at the nominal frequency, (4) design of the buncher, and (5) design of the power coupling port. At all design stages, in addition to finding the dimensions of the cavities, the impact of construction tolerances and simulation errors on the electromagnetic parameters was investigated. The values obtained for the coupling coefficient, coupling constant, quality factor, and capture efficiency are 2.11, 0.011, 16203, and 36%, respectively. The beam dynamics study of the simulated tube in ASTRA yielded a value of 5.14 π-mm-mrad for the horizontal emittance, 5.06 π-mm-mrad for the vertical emittance, 1.17 mm for the horizontal beam size, 1.16 mm for the vertical beam size, and 1090 keV for the energy spread of the output beam.
Solar system anomalies: Revisiting Hubble's law
NASA Astrophysics Data System (ADS)
Plamondon, R.
2017-12-01
This paper investigates the impact of a new metric recently published [R. Plamondon and C. Ouellet-Plamondon, in On Recent Developments in Theoretical and Experimental General Relativity, Astrophysics, and Relativistic Field Theories, edited by K. Rosquist, R. T. Jantzen, and R. Ruffini (World Scientific, Singapore, 2015), p. 1301] for studying the space-time geometry of a static symmetric massive object. This metric depends on a complementary error function (erfc) potential that characterizes the emergent gravitation field predicted by the model. This results in two types of deviations as compared to computations made on the basis of a Newtonian potential: a constant and a radial outcome. One key feature of the metric is that it postulates the existence of an intrinsic physical constant σ, the massive object-specific proper length that scales measurements in its surroundings. Although σ must be evaluated experimentally, we use a heuristic to estimate its value and point out some latent relationships between the Hubble constant, the secular increase in the astronomical unit, and the Pioneer delay. Indeed, highlighting the systematic errors that emerge when the effect of σ is neglected, one can link the Hubble constant H₀ to σ_Sun and the secular increase V_AU to σ_Earth. The accuracy of the resulting numerical predictions, H₀ = 74.42(0.02) (km/s)/Mpc and V_AU ≅ 7.8 cm yr⁻¹, calls for more investigations of this new metric by specific experts. Moreover, we investigate the expected impacts of the new metric on the flyby anomalies, and we revisit the Pioneer delay. It is shown that both phenomena could be partly taken into account within the context of this unifying paradigm, with quite accurate numerical predictions. A correction for the osculating asymptotic velocity at the perigee of the order of 10 mm/s and an inward radial acceleration of 8.34 × 10⁻¹⁰ m/s² affecting the Pioneer spacecraft could be explained by this new model.
Soshi, Takahiro; Ando, Kumiko; Noda, Takamasa; Nakazawa, Kanako; Tsumura, Hideki; Okada, Takayuki
2014-01-01
Post-error slowing (PES) is an error recovery strategy that contributes to action control, and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimuli disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error for No-go trials was 15%, but PES did not take place immediately. Delayed PES was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to PES. Stimulus-locked N2 was negatively correlated with PES and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater PES and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and PES did not occur quickly. Furthermore, PES and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke impulsive behaviors.
NASA Astrophysics Data System (ADS)
Parrish, D. D.; Trainer, M.; Young, V.; Goldan, P. D.; Kuster, W. C.; Jobson, B. T.; Fehsenfeld, F. C.; Lonneman, W. A.; Zika, R. D.; Farmer, C. T.; Riemer, D. D.; Rodgers, M. O.
1998-09-01
Measurements of tropospheric nonmethane hydrocarbons (NMHCs) made in continental North America should exhibit a common pattern determined by photochemical removal and dilution acting upon the typical North American urban emissions. We analyze 11 data sets collected in the United States in the context of this hypothesis, in most cases by analyzing the geometric mean and standard deviations of ratios of selected NMHCs. In the analysis we attribute deviations from the common pattern to plausible systematic and random experimental errors. In some cases the errors have been independently verified and the specific causes identified. Thus this common pattern provides a check for internal consistency in NMHC data sets. Specific tests are presented which should provide useful diagnostics for all data sets of anthropogenic NMHC measurements collected in the United States. Similar tests, based upon the perhaps different emission patterns of other regions, presumably could be developed. The specific tests include (1) a lower limit for ethane concentrations, (2) specific NMHCs that should be detected if any are, (3) the relatively constant mean ratios of the longer-lived NMHCs with similar atmospheric lifetimes, (4) the constant relative patterns of families of NMHCs, and (5) limits on the ambient variability of the NMHC ratios. Many experimental problems are identified in the literature and the Southern Oxidant Study data sets. The most important conclusion of this paper is that a rigorous field intercomparison of simultaneous measurements of ambient NMHCs by different techniques and researchers is of crucial importance to the field of atmospheric chemistry. The tests presented here are suggestive of errors but are not definitive; only a field intercomparison can resolve the uncertainties.
CCD image sensor induced error in PIV applications
NASA Astrophysics Data System (ADS)
Legrand, M.; Nogueira, J.; Vargas, A. A.; Ventas, R.; Rodríguez-Hidalgo, M. C.
2014-06-01
The readout procedure of charge-coupled device (CCD) cameras is known to generate some image degradation in different scientific imaging fields, especially in astrophysics. In the particular field of particle image velocimetry (PIV), widely extended in the scientific community, the readout procedure of the interline CCD sensor induces a bias in the registered position of particle images. This work proposes simple procedures to predict the magnitude of the associated measurement error. Generally, there are differences in the position bias for the different images of a certain particle at each PIV frame. This leads to a substantial bias error in the PIV velocity measurement (~0.1 pixels). This is the order of magnitude that other typical PIV errors, such as peak-locking, may reach. Based on modern CCD technology and architecture, this work offers a description of the readout phenomenon and proposes a model for the magnitude of the CCD readout bias error. This bias, in turn, generates a velocity measurement bias error when there is an illumination difference between two successive PIV exposures. The model predictions match the experiments performed with two 12-bit-depth interline CCD cameras (MegaPlus ES 4.0/E incorporating the Kodak KAI-4000M CCD sensor with 4 megapixels). For different cameras, only two constant values are needed to fit the proposed calibration model and predict the error from the readout procedure. Tests by different researchers using different cameras would allow verification of the model, which can then be used to optimize acquisition setups. Simple procedures to obtain these two calibration values are also described.
Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms
NASA Astrophysics Data System (ADS)
Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.
2017-08-01
Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure meridional flow is very high, small systematic errors have a noticeable impact on measured meridional flow profiles. Additionally, the uncertainties of these measurements have historically been underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements which are unreliable near the solar limb. Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to remove these systematic errors and, in the process, improve the accuracy of the measurements by an order of magnitude. We also present a detailed analysis of uncertainties in these measurements using synthetic magnetograms, and quantify an upper limit, as a function of latitude, below which meridional flow measurements cannot be trusted.
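Sub-pixel peak location is one place where such tracking biases enter. Below is a generic three-point parabolic fit around a correlation maximum, a common choice in correlation tracking (not necessarily the pipeline used here; the function name is illustrative):

```python
import numpy as np

def subpixel_peak(corr):
    """1D parabolic sub-pixel fit around the correlation maximum.

    Three-point parabolic interpolation is a standard tracking choice;
    it carries a small systematic ("pixel-locking") bias that must be
    characterized when sub-pixel accuracy matters, as in flow tracking.
    """
    i = int(np.argmax(corr))                 # integer-pixel peak
    ym, y0, yp = corr[i - 1], corr[i], corr[i + 1]
    return i + 0.5 * (ym - yp) / (ym - 2 * y0 + yp)
```

For a correlation profile that is exactly parabolic, the fit recovers the true peak position; real profiles deviate from a parabola, which is one source of the systematic errors discussed above.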
Hellander, Andreas; Lawson, Michael J; Drawert, Brian; Petzold, Linda
2015-01-01
The efficiency of exact simulation methods for the reaction-diffusion master equation (RDME) is severely limited by the large number of diffusion events if the mesh is fine or if diffusion constants are large. Furthermore, inherent properties of exact kinetic-Monte Carlo simulation methods limit the efficiency of parallel implementations. Several approximate and hybrid methods have appeared that enable more efficient simulation of the RDME. A common feature to most of them is that they rely on splitting the system into its reaction and diffusion parts and updating them sequentially over a discrete timestep. This use of operator splitting enables more efficient simulation but it comes at the price of a temporal discretization error that depends on the size of the timestep. So far, existing methods have not attempted to estimate or control this error in a systematic manner. This makes the solvers hard to use for practitioners since they must guess an appropriate timestep. It also makes the solvers potentially less efficient than if the timesteps are adapted to control the error. Here, we derive estimates of the local error and propose a strategy to adaptively select the timestep when the RDME is simulated via a first order operator splitting. While the strategy is general and applicable to a wide range of approximate and hybrid methods, we exemplify it here by extending a previously published approximate method, the Diffusive Finite-State Projection (DFSP) method, to incorporate temporal adaptivity. PMID:26865735
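A minimal deterministic sketch of first-order (Lie) operator splitting with a step-doubling local error estimate, assuming a linear-decay reaction and explicit diffusion on a 1D grid. This is a toy analogue of the stochastic RDME setting, not the DFSP method itself; function names are illustrative:

```python
import numpy as np

def lie_split_step(u, dt, D, k, dx):
    """One first-order (Lie) splitting step: diffusion, then reaction."""
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2  # interior Laplacian
    lap[0] = (u[1] - u[0]) / dx**2                      # zero-flux boundaries
    lap[-1] = (u[-2] - u[-1]) / dx**2
    u = u + dt * D * lap                                # explicit diffusion
    return u * np.exp(-k * dt)                          # exact linear decay

def adaptive_step(u, dt, D, k, dx, tol=1e-8):
    """Step doubling: compare one dt step with two dt/2 steps to estimate
    the local splitting error and propose the next timestep."""
    big = lie_split_step(u, dt, D, k, dx)
    small = lie_split_step(lie_split_step(u, dt / 2, D, k, dx), dt / 2, D, k, dx)
    err = float(np.max(np.abs(big - small)))            # local error estimate
    # local error of a first-order split is O(dt^2) -> square-root controller
    dt_new = 0.9 * dt * min(2.0, (tol / max(err, 1e-300)) ** 0.5)
    return small, err, dt_new
```

Halving the timestep should shrink the estimated local error by roughly a factor of four, which is what an adaptive controller exploits to keep the splitting error below a user tolerance.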
Improved accuracy of intraocular lens power calculation with the Zeiss IOLMaster.
Olsen, Thomas
2007-02-01
This study aimed to demonstrate how the level of accuracy in intraocular lens (IOL) power calculation can be improved with optical biometry using partial optical coherence interferometry (PCI) (Zeiss IOLMaster) and current anterior chamber depth (ACD) prediction algorithms. Intraocular lens power in 461 consecutive cataract operations was calculated using both PCI and ultrasound and the accuracy of the results of each technique were compared. To illustrate the importance of ACD prediction per se, predictions were calculated using both a recently published 5-variable method and the Haigis 2-variable method and the results compared. All calculations were optimized in retrospect to account for systematic errors, including IOL constants and other off-set errors. The average absolute IOL prediction error (observed minus expected refraction) was 0.65 dioptres with ultrasound and 0.43 D with PCI using the 5-variable ACD prediction method (p < 0.00001). The number of predictions within +/- 0.5 D, +/- 1.0 D and +/- 2.0 D of the expected outcome was 62.5%, 92.4% and 99.9% with PCI, compared with 45.5%, 77.3% and 98.4% with ultrasound, respectively (p < 0.00001). The 2-variable ACD method resulted in an average error in PCI predictions of 0.46 D, which was significantly higher than the error in the 5-variable method (p < 0.001). The accuracy of IOL power calculation can be significantly improved using calibrated axial length readings obtained with PCI and modern IOL power calculation formulas incorporating the latest generation ACD prediction algorithms.
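The reported accuracy metrics (mean absolute prediction error, and the fraction of eyes within ±0.5, ±1.0, and ±2.0 D) reduce to simple statistics on the refraction errors. The sketch below uses made-up numbers and an illustrative function name:

```python
import numpy as np

def prediction_stats(observed, expected):
    """Mean absolute refraction prediction error and hit rates (dioptres)."""
    err = np.asarray(observed, float) - np.asarray(expected, float)
    mae = float(np.mean(np.abs(err)))
    within = {w: float(np.mean(np.abs(err) <= w)) for w in (0.5, 1.0, 2.0)}
    return mae, within

# illustrative data: observed minus expected refraction for four eyes
mae, within = prediction_stats([0.1, -0.4, 0.7, 0.2], [0.0, 0.0, 0.0, 0.0])
```

Comparing these statistics between biometry techniques (here, PCI versus ultrasound) is exactly the kind of head-to-head evaluation the study performs.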
Delocalization of Coherent Triplet Excitons in Linear Rigid Rod Conjugated Oligomers.
Hintze, Christian; Korf, Patrick; Degen, Frank; Schütze, Friederike; Mecking, Stefan; Steiner, Ulrich E; Drescher, Malte
2017-02-02
In this work, the triplet state delocalization in a series of monodisperse oligo(p-phenyleneethynylene)s (OPEs) is studied by pulsed electron paramagnetic resonance (EPR) and pulsed electron nuclear double resonance (ENDOR) determining zero-field splitting, optical spin polarization, and proton hyperfine couplings. Neither the zero-field splitting parameters nor the optical spin polarization change significantly with OPE chain length, in contrast to the hyperfine coupling constants, which showed a systematic decrease with chain length n according to a 2/(1 + n) decay law. The results provide striking evidence for the Frenkel-type nature of the triplet excitons exhibiting full coherent delocalization in the OPEs under investigation with up to five OPE repeat units and with a spin density distribution described by a nodeless particle in the box wave function. The same model is successfully applied to recently published data on π-conjugated porphyrin oligomers.
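The reported 2/(1 + n) decay law can be written down directly; the function name and numerical values are illustrative:

```python
def hyperfine_scaling(a1, n):
    """Hyperfine coupling for a triplet exciton coherently delocalized over
    n repeat units, following the 2/(1 + n) decay law reported for the OPE
    series (a1 is the single-unit coupling; values are illustrative)."""
    return a1 * 2.0 / (1.0 + n)
```

The monomer (n = 1) recovers the single-unit value, and the coupling halves by n = 3, which is the signature of a nodeless particle-in-a-box spin density spreading over the chain.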
π0 pole mass calculation in a strong magnetic field and lattice constraints
NASA Astrophysics Data System (ADS)
Avancini, Sidney S.; Farias, Ricardo L. S.; Benghi Pinto, Marcus; Tavares, William R.; Timóteo, Varese S.
2017-04-01
The π0 neutral meson pole mass is calculated in a strongly magnetized medium using the SU(2) Nambu-Jona-Lasinio model within the random phase approximation (RPA) at zero temperature and zero baryonic density. We employ a magnetic field dependent coupling, G (eB), fitted to reproduce lattice QCD results for the quark condensates. Divergent quantities are handled with a magnetic field independent regularization scheme in order to avoid unphysical oscillations. A comparison between the running and the fixed couplings reveals that the former produces results much closer to the predictions from recent lattice calculations. In particular, we find that the π0 meson mass systematically decreases when the magnetic field increases while the scalar mass remains almost constant. We also investigate how the magnetic background influences other mesonic properties such as fπ0 and gπ0qq.
A disturbance observer-based adaptive control approach for flexure beam nano manipulators.
Zhang, Yangming; Yan, Peng; Zhang, Zhen
2016-01-01
This paper presents a systematic modeling and control methodology for a two-dimensional flexure beam-based servo stage supporting micro/nano manipulations. Compared with conventional mechatronic systems, such systems pose major control challenges including cross-axis coupling, dynamical uncertainties, and input saturation, which may have adverse effects on system performance unless effectively eliminated. A novel disturbance observer-based adaptive backstepping-like control approach is developed for high-precision servo manipulation, which effectively accommodates model uncertainties and coupling dynamics. An auxiliary system is also introduced, on top of the proposed control scheme, to compensate for input saturation. The proposed control architecture is deployed on a custom-designed nano manipulating system featuring a flexure beam structure and voice coil actuators (VCA). Real-time experiments on various manipulating tasks, such as trajectory/contour tracking, demonstrate precision errors of less than 1%. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Superradiance of cold atoms coupled to a superconducting circuit
NASA Astrophysics Data System (ADS)
Braun, Daniel; Hoffman, Jonathan; Tiesinga, Eite
2011-06-01
We investigate superradiance of an ensemble of atoms coupled to an integrated superconducting LC circuit. Particular attention is paid to the effect of inhomogeneous coupling constants. Combining perturbation theory in the inhomogeneity and numerical simulations, we show that inhomogeneous coupling constants can significantly affect the superradiant relaxation process. Incomplete relaxation terminating in “dark states” can occur, from which the only escape is through individual spontaneous emission on a much longer time scale. The relaxation dynamics can be significantly accelerated or retarded, depending on the distribution of the coupling constants. On the technical side, we also generalize the previously known propagator of superradiance for identical couplings in the completely symmetric sector to the full exponentially large Hilbert space.
NASA Astrophysics Data System (ADS)
Xiao, Zhili; Tan, Chao; Dong, Feng
2017-08-01
Magnetic induction tomography (MIT) is a promising technique for continuous monitoring of intracranial hemorrhage due to its contactless nature, low cost and capacity to penetrate the high-resistivity skull. The inter-tissue inductive coupling increases with frequency, which may lead to errors in multi-frequency imaging at high frequency. The effect of inter-tissue inductive coupling was investigated to improve the multi-frequency imaging of hemorrhage. An analytical model of inter-tissue inductive coupling based on the equivalent circuit was established. A set of new multi-frequency decomposition equations separating the phase shift of hemorrhage from other brain tissues was derived by employing the coupling information to improve the multi-frequency imaging of intracranial hemorrhage. The decomposition error and imaging error are both decreased after considering the inter-tissue inductive coupling information. The study reveals that the introduction of inter-tissue inductive coupling can reduce the errors of multi-frequency imaging, promoting the development of intracranial hemorrhage monitoring by multi-frequency MIT.
Ro-vibrational averaging of the isotropic hyperfine coupling constant for the methyl radical
DOE Office of Scientific and Technical Information (OSTI.GOV)
Adam, Ahmad Y.; Jensen, Per, E-mail: jensen@uni-wuppertal.de; Yachmenev, Andrey
2015-12-28
We present the first variational calculation of the isotropic hyperfine coupling constant of the carbon-13 atom in the CH₃ radical for temperatures T = 0, 96, and 300 K. It is based on a newly calculated high-level ab initio potential energy surface and hyperfine coupling constant surface of CH₃ in the ground electronic state. The ro-vibrational energy levels, expectation values for the coupling constant, and its temperature dependence were calculated variationally using the methods implemented in the computer program TROVE. Vibrational energies and vibrational and temperature effects for the coupling constant are found to be in very good agreement with the available experimental data. We found, in agreement with previous studies, that the vibrational effects constitute about 44% of the constant's equilibrium value, originating mainly from the large-amplitude out-of-plane bending motion, and that the temperature effects play a minor role.
A new method for detecting velocity shifts and distortions between optical spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evans, Tyler M.; Murphy, Michael T., E-mail: tevans@astro.swin.edu.au
2013-12-01
Recent quasar spectroscopy from the Very Large Telescope (VLT) and Keck suggests that fundamental constants may not actually be constant. To better confirm or refute this result, systematic errors between telescopes must be minimized. We present a new method to directly compare spectra of the same object and measure any velocity shifts between them. This method allows for the discovery of wavelength-dependent velocity shifts between spectra, i.e., velocity distortions, that could produce spurious detections of cosmological variations in fundamental constants. This 'direct comparison' method has several advantages over alternative techniques: it is model-independent (cf. line-fitting approaches), it is blind, in that spectral features do not need to be identified beforehand, and it produces meaningful uncertainty estimates for the velocity shift measurements. In particular, we demonstrate that, when comparing echelle-resolution spectra with unresolved absorption features, the uncertainty estimates are reliable for signal-to-noise ratios ≳7 per pixel. We apply this method to spectra of quasar J2123-0050 observed with Keck and the VLT and find no significant distortions over long wavelength ranges (∼1050 Å) greater than ≈180 m s⁻¹. We also find no evidence for systematic velocity distortions within echelle orders greater than 500 m s⁻¹. Moreover, previous constraints on cosmological variations in the proton-electron mass ratio should not have been affected by velocity distortions in these spectra by more than 4.0 ± 4.2 parts per million. This technique may also find application in measuring stellar radial velocities in searches for extra-solar planets and in attempts to directly observe the expansion history of the universe using quasar absorption spectra.
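A toy analogue of the direct-comparison idea can be written as a brute-force grid search over trial velocities; the published method's spectral chunking and uncertainty estimation are omitted, and the function name is illustrative:

```python
import numpy as np

def velocity_shift(wave, flux1, flux2, trial_v):
    """Best-fit velocity shift of flux2 relative to flux1 (m/s).

    For each trial velocity, flux2 is Doppler de-shifted onto the grid of
    flux1 (lambda -> lambda * (1 + v/c), non-relativistic) and the summed
    squared residual is computed; the minimizing trial velocity wins.
    """
    c = 299792458.0
    chi2 = [np.sum((flux1 - np.interp(wave * (1 + v / c), wave, flux2)) ** 2)
            for v in trial_v]
    return float(trial_v[int(np.argmin(chi2))])
```

Applied to a synthetic absorption line redshifted by a known velocity, the search recovers that velocity to within the trial-grid spacing.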
Casas, Francisco J; Ortiz, David; Villa, Enrique; Cano, Juan L; Cagigas, Jaime; Pérez, Ana R; Aja, Beatriz; Terán, J Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-08-05
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process.
GREAT3 results - I. Systematic errors in shear estimation and the impact of real galaxy morphology
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandelbaum, R.; Rowe, B.; Armstrong, R.
2015-05-01
We present first results from the third GRavitational lEnsing Accuracy Testing (GREAT3) challenge, the third in a sequence of challenges for testing methods of inferring weak gravitational lensing shear distortions from simulated galaxy images. GREAT3 was divided into experiments to test three specific questions, and included simulated space- and ground-based data with constant or cosmologically varying shear fields. The simplest (control) experiment included parametric galaxies with a realistic distribution of signal-to-noise, size, and ellipticity, and a complex point spread function (PSF). The other experiments tested the additional impact of realistic galaxy morphology, multiple exposure imaging, and the uncertainty about a spatially varying PSF; the last two questions will be explored in Paper II. The 24 participating teams competed to estimate lensing shears to within systematic error tolerances for upcoming Stage-IV dark energy surveys, making 1525 submissions overall. GREAT3 saw considerable variety and innovation in the types of methods applied. Several teams now meet or exceed the targets in many of the tests conducted (to within the statistical errors). We conclude that the presence of realistic galaxy morphology in simulations changes shear calibration biases by ~1 per cent for a wide range of methods. Other effects such as truncation biases due to finite galaxy postage stamps, and the impact of galaxy type as measured by the Sérsic index, are quantified for the first time. Our results generalize previous studies regarding sensitivities to galaxy size and signal-to-noise, and to PSF properties such as seeing and defocus. Almost all methods' results support the simple model in which additive shear biases depend linearly on PSF ellipticity.
Infrared line intensity measurements in the v = 0-1 band of the ClO radical
NASA Technical Reports Server (NTRS)
Burkholder, James B.; Howard, Carleton J.; Hammer, Philip D.; Goldman, Aaron
1989-01-01
Integrated line intensity measurements in the ClO radical fundamental vibrational v = 0-1 band were carried out using a high-resolution Fourier transform spectrometer coupled to a long-path-length absorption cell. A series of measurements designed to minimize systematic errors yielded a fundamental IR band intensity for the ClO radical of 9.68 ± 1.45 cm⁻² atm⁻¹ at 296 K. This result is consistent with all the earlier published results, with the exception of measurements reported by Kostiuk et al. (1986) and Lang et al. (1988).
NASA Astrophysics Data System (ADS)
Leibfried, D.; Wineland, D. J.
2018-03-01
Effective spin-spin interactions between N qubits enable the determination of the eigenvalue of an arbitrary Pauli product of dimension N with a constant, small number of multi-qubit gates that is independent of N and encodes the eigenvalue in the measurement basis states of an extra ancilla qubit. Such interactions are available whenever qubits can be coupled to a shared harmonic oscillator, a situation that can be realized in many physical qubit implementations. For example, suitable interactions have already been realized for up to 14 qubits in ion traps. It should be possible to implement stabilizer codes for quantum error correction with a constant number of multi-qubit gates, in contrast to typical constructions with a number of two-qubit gates that increases as a function of N. The special case of finding the parity of N qubits only requires a small number of operations that is independent of N. This compares favorably to algorithms for computing the parity on conventional machines, which implies a genuine quantum advantage.
Indirect NMR spin-spin coupling constants in diatomic alkali halides
NASA Astrophysics Data System (ADS)
Jaszuński, Michał; Antušek, Andrej; Demissie, Taye B.; Komorovsky, Stanislav; Repisky, Michal; Ruud, Kenneth
2016-12-01
We report the Nuclear Magnetic Resonance (NMR) spin-spin coupling constants for diatomic alkali halides MX, where M = Li, Na, K, Rb, or Cs and X = F, Cl, Br, or I. The coupling constants are determined by supplementing the non-relativistic coupled-cluster singles-and-doubles (CCSD) values with relativistic corrections evaluated at the four-component density-functional theory (DFT) level. These corrections are calculated as the differences between relativistic and non-relativistic values determined using the PBE0 functional with 50% exact-exchange admixture. The total coupling constants obtained in this approach are in much better agreement with experiment than the standard relativistic DFT values with 25% exact-exchange, and are also noticeably better than the relativistic PBE0 results obtained with 50% exact-exchange. Further improvement is achieved by adding rovibrational corrections, estimated using literature data.
Strong diffusion formulation of Markov chain ensembles and its optimal weaker reductions
NASA Astrophysics Data System (ADS)
Güler, Marifi
2017-10-01
Two self-contained diffusion formulations, in the form of coupled stochastic differential equations, are developed for the temporal evolution of state densities over an ensemble of Markov chains evolving independently under a common transition rate matrix. Our first formulation derives from Kurtz's strong approximation theorem of density-dependent Markov jump processes [Stoch. Process. Their Appl. 6, 223 (1978), 10.1016/0304-4149(78)90020-0] and, therefore, strongly converges with an error bound of the order of ln N/N for ensemble size N. The second formulation eliminates some fluctuation variables, and correspondingly some noise terms, within the governing equations of the strong formulation, with the objective of achieving a simpler analytic formulation and a faster computation algorithm when the transition rates are constant or slowly varying. There, the reduction of the structural complexity is optimal in the sense that the elimination of any given set of variables takes place with the lowest attainable increase in the error bound. The resultant formulations are supported by numerical simulations.
Assessment of Systematic Measurement Errors for Acoustic Travel-Time Tomography of the Atmosphere
2013-01-01
measurements include assessment of the time delays in electronic circuits and mechanical hardware (e.g., drivers and microphones) of a tomography array ... hardware and electronic circuits of the tomography array and errors in synchronization of the transmitted and recorded signals. For example, if ... coordinates can be as large as 30 cm. These errors are equivalent to systematic errors in the travel times of 0.9 ms. Third, loudspeakers which are used
Hubble Space Telescope secondary mirror vertex radius/conic constant test
NASA Technical Reports Server (NTRS)
Parks, Robert
1991-01-01
The Hubble Space Telescope backup secondary mirror was tested to determine the vertex radius and conic constant. Three completely independent tests (to the same procedure) were performed. Similar measurements in the three tests were highly consistent. The values obtained for the vertex radius and conic constant were the nominal design values within the error bars associated with the tests. Visual examination of the interferometric data did not show any measurable zonal figure error in the secondary mirror.
2013-08-01
both MFE and GFV, are often similar in size. As a gross measure of the effect of geometric projection and of the use of quadrature, we also report the ... quantity of interest MFE Σ(e,ψ) or GFV Σ(e,ψ). Tables 1 and 2 show this using coarse and fine forward solutions. [Table 1 fragment: the forward problem with solution (4.1) is run with adjoint data components ψu and ψp constant everywhere and ψξ = 0; columns are adjoint grid, MFE Σ(e,ψ), Σ MFE_i, ratio, GFV Σ(e,ψ), Σ GFV_i, ratio; first row: 20x20 : 32x32, 1.96E−3, ...]
DOE Office of Scientific and Technical Information (OSTI.GOV)
Petyuk, Vladislav A.; Mayampurath, Anoop M.; Monroe, Matthew E.
2009-12-16
Hybrid two-stage mass spectrometers capable of both highly accurate mass measurement and MS/MS fragmentation have become widely available in recent years and have allowed for significantly better discrimination between true and false MS/MS peptide identifications by applying relatively narrow windows for maximum allowable deviations of parent ion mass measurements. To fully gain the advantage of highly accurate parent ion mass measurements, it is important to limit systematic mass measurement errors. The DtaRefinery software tool can correct systematic errors in parent ion masses by reading a set of fragmentation spectra, searching for MS/MS peptide identifications, fitting a model that can estimate systematic errors, and removing them. This results in a new fragmentation spectrum file with updated parent ion masses.
Evaluation of East Asian climatology as simulated by seven coupled models
NASA Astrophysics Data System (ADS)
Jiang, Dabang; Wang, Huijun; Lang, Xianmei
2005-07-01
Using observation and reanalysis data for 1961-1990, the East Asian surface air temperature, precipitation and sea level pressure climatology as simulated by seven fully coupled atmosphere-ocean models, namely CCSR/NIES, CGCM2, CSIRO-Mk2, ECHAM4/OPYC3, GFDL-R30, HadCM3, and NCAR-PCM, is systematically evaluated in this study. The models successfully reproduce the annual and seasonal surface air temperature and precipitation climatology in East Asia, with relatively good performance for boreal autumn and the annual mean. Their simulations of surface air temperature are more reliable than those of precipitation. In addition, the models dependably capture the geographical distribution pattern of annual, boreal winter, spring and autumn sea level pressure in East Asia. In contrast, relatively large simulation errors appear when simulated boreal summer sea level pressure is compared with reanalysis data. The simulation errors for surface air temperature, precipitation and sea level pressure are generally large over and around the Tibetan Plateau. No individual model is best in every aspect. As a whole, ECHAM4/OPYC3 and HadCM3 perform best, whereas CGCM2 performs relatively poorly in East Asia. Additionally, the seven-model ensemble mean usually shows relatively high reliability.
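Evaluations of this kind typically reduce to area-weighted bias and RMSE of the simulated field against the reanalysis field. The sketch below assumes regular latitude-longitude grids and an illustrative function name; it is not the paper's exact methodology:

```python
import numpy as np

def field_errors(sim, obs, lat):
    """Area-weighted bias and RMSE of a simulated lat-lon field vs. a reference.

    sim, obs : 2D arrays (nlat, nlon); lat : 1D latitudes in degrees.
    Grid-cell area on a regular lat-lon grid scales with cos(latitude).
    """
    w = np.cos(np.deg2rad(lat))[:, None] * np.ones_like(sim)  # area weights
    w = w / w.sum()
    bias = float(np.sum(w * (sim - obs)))
    rmse = float(np.sqrt(np.sum(w * (sim - obs) ** 2)))
    return bias, rmse
```

A uniform +1 offset between simulation and reference yields a bias and RMSE of exactly 1, which is a quick sanity check on the weighting.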
Autschbach, Jochen
2009-09-14
A spherical Gaussian nuclear charge distribution model has been implemented for spin-free (scalar) and two-component (spin-orbit) relativistic density functional calculations of indirect NMR nuclear spin-spin coupling (J-coupling) constants. The finite nuclear volume effects on the hyperfine integrals are quite pronounced and as a consequence they noticeably alter coupling constants involving heavy NMR nuclei such as W, Pt, Hg, Tl, and Pb. Typically, the isotropic J-couplings are reduced in magnitude by about 10 to 15% for couplings between one of the heaviest NMR nuclei and a light atomic ligand, and even more so for couplings between two heavy atoms. For a subset of the systems studied, viz. the Hg atom, Hg₂²⁺, and Tl-X where X = Br, I, the basis set convergence of the hyperfine integrals and the coupling constants was monitored. For the Hg atom, numerical and basis set calculations of the electron density and the 1s and 6s orbital hyperfine integrals are directly compared. The coupling anisotropies of TlBr and TlI increase by about 2% due to finite-nucleus effects.
On N = 1 partition functions without R-symmetry
Knodel, Gino; Liu, James T.; Zayas, Leopoldo A. Pando
2015-03-25
Here, we examine the dependence of four-dimensional Euclidean N = 1 partition functions on coupling constants. In particular, we focus on backgrounds without R-symmetry, which arise in the rigid limit of old minimal supergravity. Backgrounds preserving a single supercharge may be classified as having either trivial or SU(2) structure, with the former including S 4. We show that, in the absence of additional symmetries, the partition function depends non-trivially on all couplings in the trivial structure case, and (anti)-holomorphically on couplings in the SU(2) structure case. In both cases, this allows for ambiguities in the form of finite counterterms, which in principle render the partition function unphysical. However, we argue that on dimensional grounds, ambiguities are restricted to finite powers in relevant couplings, and can therefore be kept under control. On the other hand, for backgrounds preserving supercharges of opposite chiralities, the partition function is completely independent of all couplings. In this case, the background admits an R-symmetry, and the partition function is physical, in agreement with the results obtained in the rigid limit of new minimal supergravity. Based on a systematic analysis of supersymmetric invariants, we also demonstrate that N = 1 localization is not possible for backgrounds without R-symmetry.
Hubble's diagram and cosmic expansion
Kirshner, Robert P.
2004-01-01
Edwin Hubble's classic article on the expanding universe appeared in PNAS in 1929 [Hubble, E. P. (1929) Proc. Natl. Acad. Sci. USA 15, 168–173]. The chief result, that a galaxy's distance is proportional to its redshift, is so well known and so deeply embedded into the language of astronomy through the Hubble diagram, the Hubble constant, Hubble's Law, and the Hubble time, that the article itself is rarely referenced. Even though Hubble's distances have a large systematic error, Hubble's velocities come chiefly from Vesto Melvin Slipher, and the interpretation in terms of the de Sitter effect is out of the mainstream of modern cosmology, this article opened the way to investigation of the expanding, evolving, and accelerating universe that engages today's burgeoning field of cosmology. PMID:14695886
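Hubble's law v = H₀d reduces to a one-parameter least-squares fit through the origin. The data below are synthetic, generated from an exact relation near Hubble's 1929 value of roughly 500 km/s/Mpc (which was biased high by his systematic distance errors):

```python
import numpy as np

def hubble_constant(d_mpc, v_kms):
    """Least-squares slope of v = H0 * d through the origin (km/s/Mpc)."""
    d, v = np.asarray(d_mpc, float), np.asarray(v_kms, float)
    return float(np.sum(d * v) / np.sum(d * d))

# synthetic galaxies drawn from an exact H0 = 500 km/s/Mpc relation
d = np.array([0.5, 1.0, 1.5, 2.0])   # distances, Mpc
v = 500.0 * d                        # recession velocities, km/s
```

Because the model passes through the origin, the slope estimator is simply Σdv / Σd², rather than the two-parameter fit used for a general line.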
Test Method Variability in Slow Crack Growth Properties of Sealing Glasses
NASA Technical Reports Server (NTRS)
Salem, J. A.; Tandon, R.
2010-01-01
The crack growth properties of several sealing glasses were measured by using constant stress rate testing in 2 and 95 percent RH (relative humidity). Crack growth parameters measured in high humidity are systematically smaller (n and B) than those measured in low humidity, and crack velocities for dry environments are 100x lower than for wet environments. The crack velocity is very sensitive to small changes in RH at low RH. Biaxial and uniaxial stress states produced similar parameters. Confidence intervals on crack growth parameters that were estimated from propagation of errors solutions were comparable to those from Monte Carlo simulation. Use of scratch-like and indentation flaws produced similar crack growth parameters when residual stresses were considered.
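The comparison of propagation-of-errors and Monte Carlo confidence intervals can be sketched as follows (illustrative data and noise level, not the paper's; the relation strength ∝ (stress rate)^(1/(n+1)) is the standard constant stress-rate result):

```python
import math
import random
import statistics

# Illustrative sketch: in constant stress-rate testing, fracture strength
# scales as sigma_f ∝ (stress rate)^(1/(n+1)), so the slope of log(strength)
# vs log(stress rate) estimates 1/(n+1). We compare the propagation-of-errors
# (analytic OLS) standard error of the slope with a Monte Carlo estimate
# under the same assumed noise level.

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx

random.seed(0)
noise_sd = 0.01
x = [float(i) for i in range(10)]                        # log10 stress rate (arbitrary)
y = [0.05 * xi + random.gauss(0, noise_sd) for xi in x]  # true n = 19 -> slope 0.05

slope = ols_slope(x, y)
n_est = 1.0 / slope - 1.0                                # recovered crack-growth exponent

mx = sum(x) / len(x)
sxx = sum((xi - mx) ** 2 for xi in x)
analytic_sd = noise_sd / math.sqrt(sxx)                  # propagation of errors

mc_slopes = [ols_slope(x, [yi + random.gauss(0, noise_sd) for yi in y])
             for _ in range(2000)]
mc_sd = statistics.stdev(mc_slopes)                      # Monte Carlo

print(abs(mc_sd / analytic_sd - 1.0) < 0.2)              # the two estimates agree
```

As in the paper's finding, the two interval estimates are comparable when the noise model is the same.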
Diamond-anvil cell for radial x-ray diffraction.
Chesnut, G N; Schiferl, D; Streetman, B D; Anderson, W W
2006-06-28
We have designed a new diamond-anvil cell capable of radial x-ray diffraction to pressures of a few hundred GPa. The diffraction geometry allows access to multiple angles of Ψ, which is the angle between each reciprocal lattice vector g(hkl) and the compression axis of the cell. At the 'magic angle', Ψ≈54.7°, the effects of deviatoric stresses on the interplanar spacings, d(hkl), are significantly reduced. Because the systematic errors, which are different for each d(hkl), are significantly reduced, the crystal structures and the derived equations of state can be determined reliably. At other values of Ψ, the effects of deviatoric stresses on the diffraction pattern could eventually be used to determine elastic constants.
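The role of the magic angle can be verified in a few lines: the deviatoric-stress contribution to d(hkl) scales with the second Legendre polynomial P2(cos Ψ), which vanishes at Ψ = arccos(1/√3) ≈ 54.7° (a standard result; the sketch below is ours, not from the paper):

```python
import math

# The deviatoric-stress contribution to lattice strain in radial diffraction
# scales with the second Legendre polynomial P2(cos Ψ) = (3 cos²Ψ − 1) / 2.
def p2(psi_deg: float) -> float:
    c = math.cos(math.radians(psi_deg))
    return (3.0 * c * c - 1.0) / 2.0

# The "magic angle" is where P2 vanishes, so measured d-spacings approach
# their hydrostatic values.
magic_angle = math.degrees(math.acos(1.0 / math.sqrt(3.0)))

print(round(magic_angle, 1))        # 54.7
print(abs(p2(magic_angle)) < 1e-9)  # True
```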
Multiscale model reduction for shale gas transport in poroelastic fractured media
NASA Astrophysics Data System (ADS)
Akkutlu, I. Yucel; Efendiev, Yalchin; Vasilyeva, Maria; Wang, Yuhe
2018-01-01
Inherently coupled flow and geomechanics processes in fractured shale media have implications for shale gas production. The system involves highly complex geo-textures comprised of a heterogeneous anisotropic fracture network spatially embedded in an ultra-tight matrix. In addition, nonlinearities due to viscous flow, diffusion, and desorption in the matrix and high-velocity gas flow in the fractures complicate the transport. In this paper, we develop a multiscale model reduction approach to couple gas flow and geomechanics in fractured shale media. A Discrete Fracture Model (DFM) is used to treat the complex network of fractures on a fine grid. The coupled flow and geomechanics equations are solved using a fixed-stress splitting scheme, solving the pressure equation with a continuous Galerkin method and the displacement equation with an interior penalty discontinuous Galerkin method. We develop a coarse-grid approximation and coupling using the Generalized Multiscale Finite Element Method (GMsFEM). GMsFEM constructs the multiscale basis functions in a systematic way to capture the fracture networks and their interactions with the shale matrix. Numerical results and an error analysis are provided, showing that the proposed approach accurately captures the coupled process using a few multiscale basis functions, i.e., a small fraction of the degrees of freedom of the fine-scale problem.
NASA Astrophysics Data System (ADS)
Hu, Qing-Qing; Freier, Christian; Leykauf, Bastian; Schkolnik, Vladimir; Yang, Jun; Krutzik, Markus; Peters, Achim
2017-09-01
Precisely evaluating the systematic error induced by the quadratic Zeeman effect is important for developing atom interferometer gravimeters aiming at an accuracy in the μGal regime (1 μGal = 10⁻⁸ m/s² ≈ 10⁻⁹ g). This paper reports on the experimental investigation of Raman spectroscopy-based magnetic field measurements and the evaluation of the systematic error in the gravimetric atom interferometer (GAIN) due to the quadratic Zeeman effect. We discuss the dependence of the magnetic field measurement uncertainty on Raman pulse duration and frequency step size, present the vector and tensor light shift induced magnetic field measurement offsets, and map the absolute magnetic field inside the interferometer chamber of GAIN with an uncertainty of 0.72 nT and a spatial resolution of 12.8 mm. We evaluate the quadratic Zeeman-effect-induced gravity measurement error in GAIN as 2.04 μGal. The methods shown in this paper are important for precisely mapping the absolute magnetic field in vacuum and for reducing the quadratic Zeeman-effect-induced systematic error in Raman transition-based precision measurements, such as atom interferometer gravimeters.
Identifying causes of Western Pacific ITCZ drift in ECMWF System 4 hindcasts
NASA Astrophysics Data System (ADS)
Shonk, Jonathan K. P.; Guilyardi, Eric; Toniazzo, Thomas; Woolnough, Steven J.; Stockdale, Tim
2018-02-01
The development of systematic biases in climate models used in operational seasonal forecasting adversely affects the quality of forecasts they produce. In this study, we examine the initial evolution of systematic biases in the ECMWF System 4 forecast model, and isolate aspects of the model simulations that lead to the development of these biases. We focus on the tendency of the simulated intertropical convergence zone in the western equatorial Pacific to drift northwards by between 0.5° and 3° of latitude depending on season. Comparing observations with both fully coupled atmosphere-ocean hindcasts and atmosphere-only hindcasts (driven by observed sea-surface temperatures), we show that the northward drift is caused by a cooling of the sea-surface temperature on the Equator. The cooling is associated with anomalous easterly wind stress and excessive evaporation during the first twenty days of hindcast, both of which occur whether air-sea interactions are permitted or not. The easterly wind bias develops immediately after initialisation throughout the lower troposphere; a westerly bias develops in the upper troposphere after about 10 days of hindcast. At this point, the baroclinic structure of the wind bias suggests coupling with errors in convective heating, although the initial wind bias is barotropic in structure and appears to have an alternative origin.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gastegger, Michael; Kauffmann, Clemens; Marquetand, Philipp, E-mail: philipp.marquetand@univie.ac.at
Many approaches developed to express the potential energy of large systems exploit the locality of atomic interactions. A prominent example is given by fragmentation methods, in which quantum chemical calculations are carried out for overlapping small fragments of a given molecule and then combined in a second step to yield the system's total energy. Here we compare the accuracy of the systematic molecular fragmentation approach with the performance of high-dimensional neural network (HDNN) potentials introduced by Behler and Parrinello. HDNN potentials are similar in spirit to the fragmentation approach in that the total energy is constructed as a sum of environment-dependent atomic energies, which are derived indirectly from electronic structure calculations. As a benchmark set, we use all-trans alkanes containing up to eleven carbon atoms at the coupled cluster level of theory. These molecules were chosen because they allow one to extrapolate reliable reference energies for very long chains, enabling an assessment of the energies obtained by both methods for alkanes with up to 10 000 carbon atoms. We find that both methods predict high-quality energies, with the HDNN potentials yielding smaller errors with respect to the coupled cluster reference.
NASA Technical Reports Server (NTRS)
Sun, Jielun
1993-01-01
Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.
Casas, Francisco J.; Ortiz, David; Villa, Enrique; Cano, Juan L.; Cagigas, Jaime; Pérez, Ana R.; Aja, Beatriz; Terán, J. Vicente; de la Fuente, Luisa; Artal, Eduardo; Hoyland, Roger; Génova-Santos, Ricardo
2015-01-01
This paper presents preliminary polarization measurements and systematic-error characterization of the Thirty Gigahertz Instrument receiver developed for the QUIJOTE experiment. The instrument has been designed to measure the polarization of Cosmic Microwave Background radiation from the sky, obtaining the Q, U, and I Stokes parameters of the incoming signal simultaneously. Two kinds of linearly polarized input signals have been used as excitations in the polarimeter measurement tests in the laboratory; these show consistent results in terms of the Stokes parameters obtained. A measurement-based systematic-error characterization technique has been used in order to determine the possible sources of instrumental errors and to assist in the polarimeter calibration process. PMID:26251906
Temperature and pressure effects on capacitance probe cryogenic liquid level measurement accuracy
NASA Technical Reports Server (NTRS)
Edwards, Lawrence G.; Haberbusch, Mark
1993-01-01
The inaccuracies of liquid nitrogen and liquid hydrogen level measurements using a coaxial capacitance probe were investigated as a function of fluid temperature and pressure. Significant liquid level measurement errors were found to occur due to changes in the fluids' dielectric constants over the operating temperature and pressure ranges of cryogenic storage tanks. These inaccuracies can be reduced by using fluid dielectric correction factors based on measured fluid temperatures and pressures. The errors in the corrected liquid level measurements were estimated from the reported calibration errors of the temperature and pressure measurement systems. Experimental liquid nitrogen (LN2) and liquid hydrogen (LH2) level measurements were obtained using the calibrated capacitance probe equations and also by the dielectric constant correction factor method. The liquid levels obtained by the capacitance probe for the two methods were compared with the liquid level estimated from fluid temperature profiles. Results show that the dielectric-constant-corrected liquid levels agreed within 0.5 percent of the temperature-profile-estimated liquid level, whereas the uncorrected capacitance liquid level measurements deviated from the temperature profile level by more than 5 percent. This paper identifies the magnitude of liquid level measurement error that can occur for LN2 and LH2 due to temperature and pressure effects on the dielectric constants over tank storage conditions from 5 to 40 psia. A method of reducing the level measurement errors by using dielectric constant correction factors based on fluid temperature and pressure measurements is derived, and the improved accuracy is experimentally verified by comparison with liquid levels derived from fluid temperature profiles.
On the room temperature multiferroic BiFeO3: magnetic, dielectric and thermal properties
NASA Astrophysics Data System (ADS)
Lu, J.; Günther, A.; Schrettle, F.; Mayr, F.; Krohns, S.; Lunkenheimer, P.; Pimenov, A.; Travkin, V. D.; Mukhin, A. A.; Loidl, A.
2010-06-01
Magnetic dc susceptibility between 1.5 and 800 K, ac susceptibility and magnetization, thermodynamic properties, the temperature dependence of radio- and audio-frequency dielectric constants and conductivity, contact-free dielectric constants at mm-wavelengths, as well as ferroelectric polarization are reported for single-crystalline BiFeO3. A well-developed anomaly in the magnetic susceptibility signals the onset of antiferromagnetic order close to 635 K. Apart from this anomaly, no further phase or glass transitions are indicated in the magnetic dc and ac susceptibilities down to the lowest temperatures. The heat capacity has been measured from 2 K up to room temperature, and significant contributions from magnon excitations have been detected. From the low-temperature heat capacity, an anisotropy gap of the magnon modes of the order of 6 meV has been determined. The dielectric constants measured in a standard two-point configuration are dominated by Maxwell-Wagner-like effects for temperatures T > 300 K and frequencies below 1 MHz. At lower temperatures, the temperature dependence of the dielectric constant and loss reveals no anomalies outside the experimental errors, indicating neither phase transitions nor strong spin-phonon coupling. The temperature dependence of the dielectric constant was also measured contact-free at microwave frequencies. At room temperature the dielectric constant has an intrinsic value of 53. The loss is substantial and strongly frequency dependent, indicating the predominance of hopping conductivity. Finally, in small thin samples we were able to measure the ferroelectric polarization between 10 and 200 K. The saturation polarization is of the order of 40 μC/cm², comparable to reports in the literature.
A study for systematic errors of the GLA forecast model in tropical regions
NASA Technical Reports Server (NTRS)
Chen, Tsing-Chang; Baker, Wayman E.; Pfaendtner, James; Corrigan, Martin
1988-01-01
From the sensitivity studies performed with the Goddard Laboratory for Atmospheres (GLA) analysis/forecast system, it was revealed that the forecast errors in the tropics affect the ability to forecast midlatitude weather in some cases. Apparently, the forecast errors occurring in the tropics can propagate to midlatitudes. Therefore, the systematic error analysis of the GLA forecast system becomes a necessary step in improving the model's forecast performance. The major effort of this study is to examine the possible impact of the hydrological-cycle forecast error on dynamical fields in the GLA forecast system.
Validity of the two-level model for Viterbi decoder gap-cycle performance
NASA Technical Reports Server (NTRS)
Dolinar, S.; Arnold, S.
1990-01-01
A two-level model has previously been proposed for approximating the performance of a Viterbi decoder which encounters data received with periodically varying signal-to-noise ratio. Such cyclically gapped data is obtained from the Very Large Array (VLA), either operating as a stand-alone system or arrayed with Goldstone. This approximate model predicts that the decoder error rate will vary periodically between two discrete levels with the same period as the gap cycle. It further predicts that the length of the gapped portion of the decoder error cycle for a constraint length K decoder will be about K-1 bits shorter than the actual duration of the gap. The two-level model for Viterbi decoder performance with gapped data is subjected to detailed validation tests. Curves showing the cyclical behavior of the decoder error burst statistics are compared with the simple square-wave cycles predicted by the model. The validity of the model depends on a parameter often considered irrelevant in the analysis of Viterbi decoder performance, the overall scaling of the received signal or the decoder's branch-metrics. Three scaling alternatives are examined: optimum branch-metric scaling and constant branch-metric scaling combined with either constant noise-level scaling or constant signal-level scaling. The simulated decoder error cycle curves roughly verify the accuracy of the two-level model for both the case of optimum branch-metric scaling and the case of constant branch-metric scaling combined with constant noise-level scaling. However, the model is not accurate for the case of constant branch-metric scaling combined with constant signal-level scaling.
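The model's prediction about the gapped portion of the error cycle reduces to a one-line rule of thumb (a sketch with our own variable names; K = 7 is just a common convolutional-code example):

```python
def gapped_error_region(gap_bits: int, constraint_length: int) -> int:
    """Two-level model: the high-error portion of the decoder error cycle
    is about K-1 bits shorter than the SNR gap itself."""
    return gap_bits - (constraint_length - 1)

# Example: a 100-bit gap with a constraint length K = 7 decoder.
print(gapped_error_region(100, 7))  # 94
```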
Measurement error is often neglected in medical literature: a systematic review.
Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten
2018-06-01
In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision in exposure-outcome relations, it is unclear to what extent such issues are currently considered in research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of general medicine and epidemiology literature. Original research published in 2016 in 12 high impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and methods to investigate or correct for it were quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error. 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, it is difficult for readers to judge the robustness of presented results to the existence of measurement error in the majority of publications in high impact journals. Our systematic review highlights the need for increased awareness about the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
Novel Calibration Algorithm for a Three-Axis Strapdown Magnetometer
Liu, Yan Xia; Li, Xi Sheng; Zhang, Xiao Juan; Feng, Yi Bo
2014-01-01
A complete error calibration model with 12 independent parameters is established by analyzing the three-axis magnetometer error mechanism. This model conforms to an ellipsoid constraint; the parameters of the ellipsoid equation are estimated, and the ellipsoid coefficient matrix is derived. However, the calibration matrix cannot be determined completely, as there are fewer ellipsoid parameters than calibration model parameters. Mathematically, the calibration matrix derived from the ellipsoid coefficient matrix by different matrix decomposition methods is not unique, and there exists an unknown rotation matrix R between them. This paper puts forward a constant intersection angle method (the angle between the geomagnetic field and the gravitational field is fixed) to estimate R. The Tikhonov method is adopted to address the problem that rounding or other errors may seriously affect the calculation of R when the condition number of the matrix is very large. The geomagnetic field vector and heading error are further corrected by R. The constant intersection angle method is convenient and practical, as it is free from any additional calibration procedure or coordinate transformation. In addition, simulation indicates that the heading error declines from ±1° calibrated by classical ellipsoid fitting to ±0.2° calibrated by the constant intersection angle method at a signal-to-noise ratio of 50 dB. Experiment shows that the heading error is further corrected from ±0.8° calibrated by classical ellipsoid fitting to ±0.3° calibrated by the constant intersection angle method. PMID:24831110
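The constant intersection angle criterion itself is easy to state in code: for the correct rotation R, every calibrated magnetometer reading makes the same angle with the gravity vector. A minimal sketch with an assumed inclination of 60° (illustrative values, not the paper's data):

```python
import math

def angle_deg(u, v):
    """Angle between two 3-vectors in degrees."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.degrees(math.acos(dot / (nu * nv)))

# Gravity vector (body frame) and two calibrated magnetometer readings from
# different orientations; a correct rotation R should map all readings to the
# same site-fixed inclination, here assumed to be 60°.
g = (0.0, 0.0, 1.0)
m1 = (math.sin(math.radians(60)), 0.0, math.cos(math.radians(60)))
m2 = (0.0, math.sin(math.radians(60)), math.cos(math.radians(60)))

print(round(angle_deg(g, m1), 1))  # 60.0
print(round(angle_deg(g, m2), 1))  # 60.0
```

A candidate R that fails to keep this angle constant across orientations can be rejected, which is the basis of the estimation.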
NASA Astrophysics Data System (ADS)
Liu, Zhixiang; Xing, Tingwen; Jiang, Yadong; Lv, Baobin
2018-02-01
A two-dimensional (2-D) shearing interferometer based on an amplitude chessboard grating was designed to measure the wavefront aberration of a high numerical-aperture (NA) objective. Chessboard gratings offer better diffraction efficiencies and fewer disturbing diffraction orders than traditional cross gratings. The wavefront aberration of the tested objective was retrieved from the shearing interferogram using the Fourier transform and differential Zernike polynomial-fitting methods. Grating manufacturing errors, including the duty-cycle and pattern-deviation errors, were analyzed with the Fourier transform method. Then, according to the relation between the spherical pupil and planar detector coordinates, the influence of the distortion of the pupil coordinates was simulated. Finally, the systematic error attributable to grating alignment errors was deduced through the geometrical ray-tracing method. Experimental results indicate that the measuring repeatability (3σ) of the wavefront aberration of an objective with NA 0.4 was 3.4 mλ. The systematic-error results were consistent with previous analyses. Thus, the correct wavefront aberration can be obtained after calibration.
NASA Astrophysics Data System (ADS)
Johnson, Traci L.; Sharon, Keren
2016-11-01
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Babenko, V. A.; Petrov, N. M., E-mail: pet2@ukr.net
2016-01-15
The relation between quantities that characterize the pion–nucleon and nucleon–nucleon interactions is studied with allowance for the fact that, at low energies, nuclear forces in nucleon–nucleon systems are mediated predominantly by one-pion exchange. On the basis of the values currently recommended for the low-energy parameters of the proton–proton interaction, the charged pion–nucleon coupling constant is evaluated at g²(π±)/4π = 14.55(13). This value is in perfect agreement with the experimental value g²(π±)/4π = 14.52(26) found by the Uppsala Neutron Research Group. At the same time, the value obtained for the charged pion–nucleon coupling constant differs sizably from the pion–nucleon coupling constant for neutral pions, which is g²(π0)/4π = 13.55(13). This is indicative of a substantial charge dependence of the coupling constant.
Effect of electrical coupling on ionic current and synaptic potential measurements.
Rabbah, Pascale; Golowasch, Jorge; Nadim, Farzan
2005-07-01
Recent studies have found electrical coupling to be more ubiquitous than previously thought, and coupling through gap junctions is known to play a crucial role in neuronal function and network output. In particular, current spread through gap junctions may affect the activation of voltage-dependent conductances as well as chemical synaptic release. Using voltage-clamp recordings of two strongly electrically coupled neurons of the lobster stomatogastric ganglion and conductance-based models of these neurons, we identified effects of electrical coupling on the measurement of leak and voltage-gated outward currents, as well as synaptic potentials. Experimental measurements showed that both leak and voltage-gated outward currents are recruited by gap junctions from neurons coupled to the clamped cell. Nevertheless, in spite of the strong coupling between these neurons, the errors made in estimating voltage-gated conductance parameters were relatively minor (<10%). Thus in many cases isolation of coupled neurons may not be required if a small degree of measurement error of the voltage-gated currents or the synaptic potentials is acceptable. Modeling results show, however, that such errors may be as high as 20% if the gap-junction position is near the recording site or as high as 90% when measuring smaller voltage-gated ionic currents. Paradoxically, improved space clamp increases the errors arising from electrical coupling because voltage control across gap junctions is poor for even the highest realistic coupling conductances. Furthermore, the common procedure of leak subtraction can add an extra error to the conductance measurement, the sign of which depends on the maximal conductance.
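The leak-current error described above can be seen in a two-compartment steady-state sketch (illustrative conductances, not the paper's measurements): the unclamped neighbor inflates the measured leak by the series combination of the gap-junction and neighbor conductances.

```python
# Two-compartment sketch: a voltage-clamped cell with leak g1 is coupled by a
# gap junction gc to an unclamped neighbor with leak g2 (same reversal E).
# At steady state the neighbor sits at V2 = (gc*V + g2*E) / (gc + g2), so the
# clamp current is I = [g1 + gc*g2/(gc + g2)] * (V - E): the apparent leak is
# inflated by the series combination of gc and g2.
g1, g2, gc = 10.0, 10.0, 5.0   # conductances in nS (assumed values)

g_apparent = g1 + gc * g2 / (gc + g2)
error_pct = 100.0 * (g_apparent - g1) / g1

print(round(g_apparent, 2))  # 13.33 nS
print(round(error_pct, 1))   # 33.3 percent overestimate of the leak
```

Stronger coupling (larger gc) pushes the apparent leak toward g1 + g2, which is why improved space clamp can paradoxically worsen the error.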
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Ke; Li Yanqiu; Wang Hai
Characterization of the measurement accuracy of the phase-shifting point diffraction interferometer (PS/PDI) is usually performed by a two-pinhole null test. In this procedure, the geometrical coma and detector tilt astigmatism systematic errors are one to two orders of magnitude larger than the desired accuracy of the PS/PDI. These errors must be accurately removed from the null test result to achieve high accuracy. Published calibration methods, which can remove the geometrical coma error successfully, have some limitations in calibrating the astigmatism error. In this paper, we propose a method to simultaneously calibrate the geometrical coma and detector tilt astigmatism errors in the PS/PDI null test. Based on the measurement results obtained from two pinhole pairs in orthogonal directions, the method utilizes the orthogonality and rotational symmetry properties of Zernike polynomials over the unit circle to calculate the systematic errors introduced in the null test of the PS/PDI. An experiment using a PS/PDI operated at visible light was performed to verify the method. The results show that the method is effective in isolating the systematic errors of the PS/PDI, and the measurement accuracy of the calibrated PS/PDI is 0.0088λ rms (λ = 632.8 nm).
Correcting systematic errors in high-sensitivity deuteron polarization measurements
NASA Astrophysics Data System (ADS)
Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.
2012-02-01
This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10⁻⁵ for deliberately large errors. This may facilitate the real-time observation of vector polarization changes smaller than 10⁻⁶ in a search for an electric dipole moment using a storage ring.
NASA Astrophysics Data System (ADS)
Qing, Chun; Wu, Xiaoqing; Li, Xuebin; Tian, Qiguo; Liu, Dong; Rao, Ruizhong; Zhu, Wenyue
2018-01-01
In this paper, we introduce an approach wherein the Weather Research and Forecasting (WRF) model is coupled with the bulk aerodynamic method to estimate the surface layer refractive index structure constant (Cn²) above Taishan Station in Antarctica. First, we use the measured meteorological parameters to estimate Cn² using the bulk aerodynamic method; second, we use the WRF model output parameters to estimate Cn² using the bulk aerodynamic method. Finally, the corresponding Cn² values from a micro-thermometer are compared with the Cn² values estimated using the WRF model coupled with the bulk aerodynamic method. We analyzed the statistical operators—the bias, root mean square error (RMSE), bias-corrected RMSE (σ), and correlation coefficient (Rxy)—in a 20-day data set to assess how this approach performs. In addition, we employ contingency tables to investigate the estimation quality of this approach, which provides complementary key information with respect to the bias, RMSE, σ, and Rxy. The quantitative results are encouraging and confirm the good performance of this approach, which can help optimize observing time in astronomical applications and provides complementary key information for potential astronomical sites.
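The four statistical operators are standard and can be sketched directly (illustrative sample values, not the paper's data; note that the bias-corrected RMSE satisfies σ² = RMSE² − bias²):

```python
import math

# Score model estimates against observations with the four operators used in
# the study: bias, RMSE, bias-corrected RMSE (sigma), and correlation (Rxy).
def scores(model, obs):
    n = len(model)
    diffs = [m - o for m, o in zip(model, obs)]
    bias = sum(diffs) / n
    rmse = math.sqrt(sum(d * d for d in diffs) / n)
    sigma = math.sqrt(max(rmse * rmse - bias * bias, 0.0))  # sigma² = RMSE² - bias²
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in model))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    rxy = cov / (sm * so)
    return bias, rmse, sigma, rxy

model = [1.0, 2.0, 3.0, 4.0]   # made-up model estimates
obs = [1.1, 1.9, 3.2, 4.2]     # made-up observations

b, r, s, c = scores(model, obs)
print(round(b, 3), round(r, 3), round(s, 3), round(c, 3))
```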
Toric-boson model: Toward a topological quantum memory at finite temperature
NASA Astrophysics Data System (ADS)
Hamma, Alioscia; Castelnovo, Claudio; Chamon, Claudio
2009-06-01
We discuss the existence of stable topological quantum memory at finite temperature. At stake here is the fundamental question of whether it is, in principle, possible to store quantum information for macroscopic times without the intervention from the external world, that is, without error correction. We study the toric code in two dimensions with an additional bosonic field that couples to the defects, in the presence of a generic environment at finite temperature: the toric-boson model. Although the coupling constants for the bare model are not finite in the thermodynamic limit, the model has a finite spectrum. We show that in the topological phase, there is a finite temperature below which open strings are confined and therefore the lifetime of the memory can be made arbitrarily (polynomially) long in system size. The interaction with the bosonic field yields a long-range attractive force between the end points of open strings but leaves closed strings and topological order intact.
Systematics of constant roll inflation
NASA Astrophysics Data System (ADS)
Anguelova, Lilia; Suranyi, Peter; Wijewardhana, L. C. R.
2018-02-01
We study constant roll inflation systematically. This is a regime, in which the slow roll approximation can be violated. It has long been thought that this approximation is necessary for agreement with observations. However, recently it was understood that there can be inflationary models with a constant, and not necessarily small, rate of roll that are both stable and compatible with the observational constraint ns ≈ 1. We investigate systematically the condition for such a constant-roll regime. In the process, we find a whole new class of inflationary models, in addition to the known solutions. We show that the new models are stable under scalar perturbations. Finally, we find a part of their parameter space, in which they produce a nearly scale-invariant scalar power spectrum, as needed for observational viability.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soudackov, Alexander; Hammes-Schiffer, Sharon
2015-11-17
Rate constant expressions for vibronically nonadiabatic proton transfer and proton-coupled electron transfer reactions are presented and analyzed. The regimes covered include electronically adiabatic and nonadiabatic reactions, as well as high-frequency and low-frequency regimes for the proton donor-acceptor vibrational mode. These rate constants differ from previous rate constants derived with the cumulant expansion approach in that the logarithmic expansion of the vibronic coupling in terms of the proton donor-acceptor distance includes a quadratic as well as a linear term. The analysis illustrates that inclusion of this quadratic term does not significantly impact the rate constants derived using the cumulant expansion approach in any of the regimes studied. The effects of the quadratic term may become significant when using the vibronic coupling expansion in conjunction with a thermal averaging procedure for calculating the rate constant, however, particularly at high temperatures and for proton transfer interfaces with extremely soft proton donor-acceptor modes that are associated with extraordinarily weak hydrogen bonds. Even with the thermal averaging procedure, the effects of the quadratic term for weak hydrogen-bonding systems are less significant for more physically realistic models that prevent the sampling of unphysical short proton donor-acceptor distances, and the expansion of the coupling can be avoided entirely by calculating the couplings explicitly for the range of proton donor-acceptor distances. This analysis identifies the regimes in which each rate constant expression is valid and thus will be important for future applications to proton transfer and proton-coupled electron transfer in chemical and biological processes. We are grateful for support from National Institutes of Health Grant GM056207 (applications to enzymes) and the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. 
Department of Energy, Office of Science, Basic Energy Sciences (applications to molecular electrocatalysts).
The Hubble Constant from Supernovae
NASA Astrophysics Data System (ADS)
Saha, Abhijit; Macri, Lucas M.
The decades-long quest to obtain a precise and accurate measurement of the local expansion rate of the universe (the Hubble Constant or H0) has greatly benefited from the use of supernovae (SNe). Starting from humble beginnings (dispersions of ˜ 0.5 mag in the Hubble flow in the late 1960s/early 1970s), the increasingly more sophisticated understanding, classification, and analysis of these events turned type Ia SNe into the premier choice for a secondary distance indicator by the early 1990s. While some systematic uncertainties specific to SNe and to Cepheid-based distances to the calibrating host galaxies still contribute to the H0 error budget, the major emphasis over the past two decades has been on reducing the statistical uncertainty by obtaining ever-larger samples of distances to SN hosts. Building on early efforts with the first-generation instruments on the Hubble Space Telescope, recent observations with the latest instruments on this facility have reduced the estimated total uncertainty on H0 to 2.4% and shown a path to reach a 1% measurement by the end of the decade, aided by Gaia and the James Webb Space Telescope.
ERIC Educational Resources Information Center
Py, Bernard
A progress report is presented of a study which applies a system of generative grammar to error analysis. The objective of the study was to reconstruct the grammar of students' interlanguage, using a systematic analysis of errors. (Interlanguage refers to the linguistic competence of a student who possesses a relatively systematic body of rules,…
Economic optimization of operations for hybrid energy systems under variable markets
Chen, Jen; Garcia, Humberto E.
2016-05-21
We propose hybrid energy systems (HES) as an important element for enabling increasing penetration of clean energy. This paper investigates the operational flexibility of HES and develops a methodology for operations optimization that maximizes economic value based on predicted renewable generation and market information. A multi-environment computational platform for performing such operations optimization is also developed. To compensate for prediction error, a control strategy is designed to operate a standby energy storage element (ESE) to avoid energy imbalance within the HES. The proposed operations optimizer allows systematic control of energy conversion for maximal economic value. Simulation results for two specific HES configurations are included to illustrate the proposed methodology and computational capability. These results demonstrate the economic viability of HES under the proposed operations optimizer, suggesting the diversion of energy to alternative energy outputs while participating in the ancillary service market. The economic advantages of the optimizer and the associated flexible operations are illustrated by comparing the economic performance of flexible operations against that of constant operations. Sensitivity analyses with respect to market variability and prediction error are also performed.
NASA Astrophysics Data System (ADS)
Gao, J.; Nishida, K.
2010-10-01
This paper describes an Ultraviolet-Visible Laser Absorption-Scattering (UV-Vis LAS) imaging technique applied to asymmetric fuel sprays. Continuing from the previous studies, the detailed measurement principle was derived. It is demonstrated that, by means of this technique, cumulative masses and mass distributions of vapor/liquid phases can be quantitatively measured no matter what shape the spray is. A systematic uncertainty analysis was performed, and the measurement accuracy was also verified through a series of experiments on the completely vaporized fuel spray. The results show that the Molar Absorption Coefficient (MAC) of the test fuel, which is typically pressure and temperature dependent, is the major error source. The measurement error in the vapor determination has been shown to be approximately 18% under the assumption of constant MAC of the test fuel. Two application examples of the extended LAS technique were presented for exploring the dynamics and physical insight of the evaporating fuel sprays: diesel sprays injected by group-hole nozzles and gasoline sprays impinging on an inclined wall.
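The dominant error source identified above, a Molar Absorption Coefficient (MAC) assumed constant, propagates directly into the retrieved vapor mass. A minimal Beer-Lambert inversion sketch of this effect follows; the intensities, MAC value, and path length are hypothetical, not the paper's measurements:

```python
import math

def vapor_column_density(i0, i, mac, path_length_cm):
    """Invert the Beer-Lambert law A = log10(I0/I) = MAC * c * L to get
    the molar concentration c of absorbing vapor along the line of sight.

    mac: Molar Absorption Coefficient, assumed constant here, as in the
    ~18% vapor-mass error estimate discussed in the abstract.
    """
    absorbance = math.log10(i0 / i)   # decadic absorbance
    return absorbance / (mac * path_length_cm)

# Hypothetical reading: half the UV intensity absorbed over a 1 cm path.
c_nominal = vapor_column_density(1.0, 0.5, mac=200.0, path_length_cm=1.0)
# An 18% overestimate of MAC underestimates c by the same factor:
c_biased = vapor_column_density(1.0, 0.5, mac=200.0 * 1.18, path_length_cm=1.0)
```

Because the retrieved concentration scales as 1/MAC, the relative error in MAC maps one-to-one onto the vapor retrieval, consistent with the quoted ≈18% figure.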
NASA Astrophysics Data System (ADS)
Rufchahi, E. O. Moradi; Gilani, A. Ghanadzadeh; Taghvaei, V.; Karimi, R.; Ramezanzade, N.
2016-03-01
Malondianilide (I) derived from p-chloroaniline was cyclized to 6-chloro-4-hydroxyquinoline-2(1H)-one (II) in moderately good yield using polyphosphoric acid as catalyst. This compound was then coupled with some diazotized aromatic amines to give the corresponding azo disperse dyes 1-12. A systematic study of the effect of solvent, acid, base and pH upon the electronic absorption spectra of the dyes 1-12 was carried out. In DMSO, DMF, CH3CN, CHCl3, EtOH and acidic media (CH3COOH, acidified EtOH) these dyes that theoretically may be involved in azo-hydrazone tautomerism have been detected only as hydrazone tautomers T1 and T2. The acidic dissociation constants of the dyes were measured in 80 vol% ethanol-water solution at room temperature and ionic strength of 0.1. The results were correlated by the Hammett-type equation using the substituent constants σx.
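The Hammett-type correlation mentioned at the end can be sketched as a zero-intercept least-squares fit of the measured acidity data against substituent constants; the σx values and the reaction constant below are illustrative, not the dyes' measured data:

```python
def hammett_fit(sigmas, log_k_ratios):
    """Zero-intercept least-squares slope of log10(Kx/K0) vs sigma_x,
    i.e. the reaction constant rho in the Hammett equation."""
    numerator = sum(s * y for s, y in zip(sigmas, log_k_ratios))
    denominator = sum(s * s for s in sigmas)
    return numerator / denominator

# Illustrative substituent constants (e.g. p-OMe, H, p-Cl, p-NO2) and a
# synthetic, noise-free reaction constant of 2.0:
sigmas = [-0.27, 0.0, 0.23, 0.78]
log_k = [2.0 * s for s in sigmas]
rho = hammett_fit(sigmas, log_k)
```

With real pKa data the fit would carry scatter, and the sign and magnitude of ρ indicate how strongly electron-withdrawing substituents stabilize the dissociated dye.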
Subaru Telescope limits on cosmological variations in the fine-structure constant
NASA Astrophysics Data System (ADS)
Murphy, Michael T.; Cooksey, Kathy L.
2017-11-01
Previous, large samples of quasar absorption spectra have indicated some evidence for relative variations in the fine-structure constant (Δα/α) across the sky. However, they were likely affected by long-range distortions of the wavelength calibration, so it is important to establish a statistical sample of more reliable results from multiple telescopes. Here we triple the sample of Δα/α measurements from the Subaru Telescope which have been `supercalibrated' to correct for long-range distortions. A blinded analysis of the metallic ions in six intervening absorption systems in two Subaru quasar spectra provides no evidence for α variation, with a weighted mean of Δα/α = 3.0 ± 2.8stat ± 2.0sys parts per million (1σ statistical and systematic uncertainties). The main remaining systematic effects are uncertainties in the long-range distortion corrections, absorption profile models, and errors from redispersing multiple quasar exposures on to a common wavelength grid. The results also assume that terrestrial isotopic abundances prevail in the absorbers; assuming only the dominant terrestrial isotope is present significantly lowers Δα/α, though it is still consistent with zero. Given the location of the two quasars on the sky, our results do not support the evidence for spatial α variation, especially when combined with the 21 other recent measurements which were corrected for, or resistant to, long-range distortions. Our spectra and absorption profile fits are publicly available.
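The quoted weighted mean over the six absorption systems corresponds to a standard inverse-variance average; the per-absorber values in the usage line below are hypothetical, and the systematic term would be combined separately:

```python
def weighted_mean(values, stat_errors):
    """Inverse-variance weighted mean and its statistical uncertainty."""
    weights = [1.0 / e ** 2 for e in stat_errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / wsum
    return mean, (1.0 / wsum) ** 0.5

# Hypothetical Delta-alpha/alpha values for six absorbers, in ppm:
mean_ppm, stat_ppm = weighted_mean([2.0, 5.0, -1.0, 4.0, 6.0, 2.0],
                                   [5.0, 6.0, 7.0, 8.0, 9.0, 6.0])
```

The statistical error of the mean shrinks as more precise absorbers are added, while long-range distortion corrections enter as a common systematic floor.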
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. 
Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
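The control-limit and capability calculations described above can be sketched with a standard individuals (I) chart; the d2 = 1.128 constant is the usual bias-correction factor for moving ranges of two points, and the energy readings in the usage lines are made up:

```python
def individuals_control_limits(x):
    """3-sigma limits for an individuals (I) control chart, estimating
    sigma from the mean moving range (d2 = 1.128 for subgroups of 2)."""
    moving_ranges = [abs(b - a) for a, b in zip(x, x[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    center = sum(x) / len(x)
    sigma_hat = mr_bar / 1.128
    return center - 3 * sigma_hat, center, center + 3 * sigma_hat

def process_capability(sigma_hat, lsl, usl):
    """Cp ratio; values above 1 mean the process fits the spec limits."""
    return (usl - lsl) / (6 * sigma_hat)

# Made-up energy-constancy readings (arbitrary units) and spec limits:
energies = [9.98, 10.02, 10.00, 9.97, 10.03, 10.01, 9.99, 10.00]
lcl, center, ucl = individuals_control_limits(energies)
cp = process_capability((ucl - center) / 3.0, lsl=9.8, usl=10.2)
```

A systematic shift is flagged when new points fall outside (lcl, ucl), which, as the abstract notes, can happen well inside the wider TG-142 specification limits.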
Effects of a reentry plasma sheath on the beam pointing properties of an array antenna
NASA Astrophysics Data System (ADS)
Bai, Bowen; Liu, Yanming; Lin, Xiaofang; Li, Xiaoping
2018-03-01
The reduction in the gain of an on-board antenna caused by a reentry plasma sheath is an important effect that contributes to the reentry "blackout" problem. Using phased array antenna and beamforming technology could provide higher gain and an increase in the communication signal intensity. The attenuation and phase delay of the electromagnetic (EM) waves transmitting through the plasma sheath are direction-dependent, and the radiation pattern of the phased array antenna is affected, leading to a deviation in the beam pointing. In this paper, the far-field pattern of a planar array antenna covered by a plasma sheath is deduced analytically by considering both refraction and mutual coupling effects. A comparison between the analytic results and the results from an electromagnetic simulation is carried out. The effect of the plasma sheath on the radiation pattern and the beam pointing errors of the phased array antenna is studied systematically, and the derived results could provide useful information for the correction of pointing errors.
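How a direction-dependent phase delay deviates the beam of a phased array can be illustrated with a uniform linear array; the plasma sheath is reduced here to a simple linear per-element phase perturbation, a deliberate simplification of the paper's full refraction-plus-mutual-coupling treatment:

```python
import cmath
import math

def array_factor(theta_deg, n=8, d_over_lambda=0.5, phase_error=None):
    """|AF| of an n-element uniform linear array at angle theta from
    broadside, with an optional per-element phase perturbation (rad)."""
    phase_error = phase_error if phase_error is not None else [0.0] * n
    psi = 2.0 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    return abs(sum(cmath.exp(1j * (k * psi + phase_error[k]))
                   for k in range(n)))

def beam_peak(phase_error=None):
    """Beam-pointing direction: angle (deg) maximizing |AF| on a 0.1 deg grid."""
    angles = [a / 10.0 for a in range(-900, 901)]
    return max(angles, key=lambda a: array_factor(a, phase_error=phase_error))

# A linear phase gradient across the aperture (a crude stand-in for the
# sheath's direction-dependent phase delay) shifts the beam peak:
peak_clean = beam_peak()
peak_shifted = beam_peak(phase_error=[0.3 * k for k in range(8)])
```

The unperturbed array points at broadside, while the phase gradient steers the peak several degrees away, which is the beam-pointing error the paper sets out to correct.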
Sancho-García, J C
2011-09-13
Highly accurate coupled-cluster (CC) calculations with large basis sets have been performed to study the binding energy of the (CH)12, (CH)16, (CH)20, and (CH)24 polyhedral hydrocarbons in two forms, cage-like and planar. We also considered the effect of other minor contributions: core correlation, relativistic corrections, and extrapolations to the limit of the full CC expansion. Thus, chemically accurate values could be obtained for these complicated systems. These nearly exact results are then used to evaluate the performance of the main approximations (i.e., pure, hybrid, and double-hybrid methods) within density functional theory (DFT) in a systematic fashion. Some commonly used functionals, including the B3LYP model, are affected by large errors, and only those having reduced self-interaction error (SIE), which include the last family of conjectured expressions (double hybrids), are able to achieve reasonably low deviations of 1-2 kcal/mol, especially when an estimate for dispersion interactions is also added.
Miniaturized force/torque sensor for in vivo measurements of tissue characteristics.
Hessinger, M; Pilic, T; Werthschutzky, R; Pott, P P
2016-08-01
This paper presents the development of a surgical instrument to measure interaction forces/torques with organic tissue during operation. The focus is on the design progress of the sensor element, consisting of a spoke-wheel deformation element with a diameter of 12 mm and eight inhomogeneously doped piezoresistive silicon strain gauges on an integrated full-bridge assembly with an edge length of 500 μm. The silicon chips are contacted to flex-circuits via flip chip and bonded on the substrate with a single-component adhesive. A signal processing board with an 18 bit serial A/D converter is integrated into the sensor. The design concept of the handheld surgical sensor device consists of an instrument coupling, the six-axis sensor, a wireless communication interface and a battery. The nominal force of the sensing element is 10 N and the nominal torque is 1 N·m in all spatial directions. A first characterization of the force sensor yields a maximum systematic error of 4.92% and a random error of 1.13%.
Thirty Years of Improving the NCEP Global Forecast System
NASA Astrophysics Data System (ADS)
White, G. H.; Manikin, G.; Yang, F.
2014-12-01
Current eight day forecasts by the NCEP Global Forecast System are as accurate as five day forecasts 30 years ago. This revolution in weather forecasting reflects increases in computer power, improvements in the assimilation of observations, especially satellite data, improvements in model physics, improvements in observations and international cooperation and competition. One important component has been and is the diagnosis, evaluation and reduction of systematic errors. The effect of proposed improvements in the GFS on systematic errors is one component of the thorough testing of such improvements by the Global Climate and Weather Modeling Branch. Examples of reductions in systematic errors in zonal mean temperatures and winds and other fields will be presented. One challenge in evaluating systematic errors is uncertainty in what reality is. Model initial states can be regarded as the best overall depiction of the atmosphere, but can be misleading in areas of few observations or for fields not well observed such as humidity or precipitation over the oceans. Verification of model physics is particularly difficult. The Environmental Modeling Center emphasizes the evaluation of systematic biases against observations. Recently EMC has placed greater emphasis on synoptic evaluation and on precipitation, 2-meter temperatures and dew points and 10 meter winds. A weekly EMC map discussion reviews the performance of many models over the United States and has helped diagnose and alleviate significant systematic errors in the GFS, including a near surface summertime evening cold wet bias over the eastern US and a multi-week period when the GFS persistently developed bogus tropical storms off Central America. The GFS exhibits a wet bias for light rain and a dry bias for moderate to heavy rain over the continental United States. Significant changes to the GFS are scheduled to be implemented in the fall of 2014. 
These include higher resolution, improved physics and improvements to the assimilation. These changes significantly improve the tropospheric flow and reduce a tropical upper tropospheric warm bias. One important error remaining is the failure of the GFS to maintain deep convection over Indonesia and in the tropical west Pacific. This and other current systematic errors will be presented.
NASA Astrophysics Data System (ADS)
Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong
2017-11-01
Multi-scale modeling of localized groundwater flow problems in a large-scale aquifer has been extensively investigated in the context of the cost-benefit trade-off. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.
Rusakov, Yury Yu; Krivdin, Leonid B; Østerstrøm, Freja F; Sauer, Stephan P A; Potapov, Vladimir A; Amosova, Svetlana V
2013-08-21
This paper documents the very first example of a high-level correlated calculation of spin-spin coupling constants involving tellurium taking into account relativistic effects, vibrational corrections and solvent effects for medium sized organotellurium molecules. The (125)Te-(1)H spin-spin coupling constants of tellurophene and divinyl telluride were calculated at the SOPPA and DFT levels, in good agreement with experimental data. A new full-electron basis set, av3z-J, for tellurium derived from the "relativistic" Dyall's basis set, dyall.av3z, and specifically optimized for the correlated calculations of spin-spin coupling constants involving tellurium was developed. The SOPPA method shows a much better performance compared to DFT, if relativistic effects calculated within the ZORA scheme are taken into account. Vibrational and solvent corrections are next to negligible, while conformational averaging is of prime importance in the calculation of (125)Te-(1)H spin-spin couplings. Based on the performed calculations at the SOPPA(CCSD) level, a marked stereospecificity of geminal and vicinal (125)Te-(1)H spin-spin coupling constants originating in the orientational lone pair effect of tellurium has been established, which opens a new guideline in organotellurium stereochemistry.
Yang-Baxter σ-models, conformal twists, and noncommutative Yang-Mills theory
NASA Astrophysics Data System (ADS)
Araujo, T.; Bakhmatov, I.; Colgáin, E. Ó.; Sakamoto, J.; Sheikh-Jabbari, M. M.; Yoshida, K.
2017-05-01
The Yang-Baxter σ-model is a systematic way to generate integrable deformations of AdS5×S5. We recast the deformations as seen by open strings, where the metric is undeformed AdS5×S5 with constant string coupling, and all information about the deformation is encoded in the noncommutative (NC) parameter Θ. We identify the deformations of AdS5 as twists of the conformal algebra, thus explaining the noncommutativity. We show that the unimodularity condition on r-matrices for supergravity solutions translates into Θ being divergence-free. Integrability of the σ-model for unimodular r-matrices implies the existence and planar integrability of the dual NC gauge theory.
Error modeling for differential GPS. M.S. Thesis - MIT, 12 May 1995
NASA Technical Reports Server (NTRS)
Blerman, Gregory S.
1995-01-01
Differential Global Positioning System (DGPS) positioning is used to accurately locate a GPS receiver based upon the well-known position of a reference site. In utilizing this technique, several error sources contribute to position inaccuracy. This thesis investigates the error in DGPS operation and attempts to develop a statistical model for the behavior of this error. The model for DGPS error is developed using GPS data collected by Draper Laboratory. The Marquardt method for nonlinear curve-fitting is used to find the parameters of a first order Markov process that models the average errors from the collected data. The results show that a first order Markov process can be used to model the DGPS error as a function of baseline distance and time delay. The model's time correlation constant is 3847.1 seconds (1.07 hours) for the mean square error. The distance correlation constant is 122.8 kilometers. The total process variance for the DGPS model is 3.73 sq meters.
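The fitted model above can be sketched as a first-order Gauss-Markov process using the reported constants. The separable exponential correlation form below is an assumption for illustration; the thesis fits the actual Draper Laboratory data:

```python
import math
import random

TAU_T = 3847.1   # time correlation constant, seconds
TAU_D = 122.8    # distance correlation constant, kilometers
VAR = 3.73       # total process variance, m^2

def correlation(dt_seconds, baseline_km):
    """Assumed separable DGPS error correlation in time and distance."""
    return math.exp(-dt_seconds / TAU_T) * math.exp(-baseline_km / TAU_D)

def simulate(n_steps, dt_seconds, seed=0):
    """Discrete first-order Markov process with stationary variance VAR."""
    rng = random.Random(seed)
    phi = math.exp(-dt_seconds / TAU_T)
    q = math.sqrt(VAR * (1.0 - phi * phi))  # driving-noise std dev
    x = rng.gauss(0.0, math.sqrt(VAR))
    trace = [x]
    for _ in range(n_steps - 1):
        x = phi * x + rng.gauss(0.0, q)
        trace.append(x)
    return trace
```

Choosing the driving-noise variance as VAR·(1 − φ²) keeps the simulated error stationary at the reported 3.73 m² regardless of step size.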
Deep Coupled Integration of CSAC and GNSS for Robust PNT.
Ma, Lin; You, Zheng; Li, Bin; Zhou, Bin; Han, Runqi
2015-09-11
Global navigation satellite systems (GNSS) are the most widely used positioning, navigation, and timing (PNT) technology. However, a GNSS cannot provide effective PNT services in physical blocks, such as in a natural canyon, canyon city, underground, underwater, and indoors. With the development of micro-electromechanical system (MEMS) technology, the chip scale atomic clock (CSAC) gradually matures, and performance is constantly improved. A deep coupled integration of CSAC and GNSS is explored in this thesis to enhance PNT robustness. "Clock coasting" of CSAC provides time synchronized with GNSS and optimizes navigation equations. However, errors of clock coasting increase over time and can be corrected by GNSS time, which is stable but noisy. In this paper, weighted linear optimal estimation algorithm is used for CSAC-aided GNSS, while Kalman filter is used for GNSS-corrected CSAC. Simulations of the model are conducted, and field tests are carried out. Dilution of precision can be improved by integration. Integration is more accurate than traditional GNSS. When only three satellites are visible, the integration still works, whereas the traditional method fails. The deep coupled integration of CSAC and GNSS can improve the accuracy, reliability, and availability of PNT.
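The GNSS-corrected-CSAC direction described above can be sketched as a scalar Kalman filter tracking clock bias; all noise figures and measurements below are illustrative assumptions, not values from the paper:

```python
def kalman_clock(measurements, q=1e-4, r=1.0, x0=0.0, p0=10.0):
    """Scalar Kalman filter for clock bias: the CSAC bias random-walks
    with variance q per step, and GNSS time provides a noisy measurement
    of it with variance r."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict: clock drift inflates variance
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with the GNSS time measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates, p

# Noisy GNSS readings of a true 1.0 s clock bias:
est, p_final = kalman_clock([0.9, 1.1, 1.0, 0.95, 1.05])
```

The small process noise q encodes the CSAC's stability, so the filter leans on clock coasting between updates while the stable-but-noisy GNSS measurements bound the long-term drift.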
Houk, Ronald J. T.; Wallace, Karl J.; Hewage, Himali S.; Anslyn, Eric V.
2008-01-01
A colorimetric chemodosimeter (SQ1) for the detection of trace palladium salts in cross-coupling reactions mediated by palladium is described. Decolorization of SQ1 is effected by nucleophilic attack of ethanethiol in basic DMSO solutions. Thiol addition is determined to have an equilibrium constant (Keq) of 2.9 × 106 M-1, with a large entropic and modest enthalpic driving force. This unusual result is attributed to solvent effects arising from a strong coordinative interaction between DMSO and the parent squaraine. Palladium detection is achieved through thiol scavenging from the SQ1-ethanethiol complex leading to a color “turn-on” of the parent squaraine. It was found that untreated samples obtained directly from Suzuki couplings showed no response to the assay. However, treatment of the samples with aqueous nitric acid generates a uniform Pd(NO3)2 species, which gives an appropriate response. “Naked-eye” detection of Pd(NO3)2 was estimated to be as low as 0.5 ppm in solution, and instrument-based detection was tested as low as 100 ppb. The average error over the working range of the assay was determined to be 7%.
Within-Tunnel Variations in Pressure Data for Three Transonic Wind Tunnels
NASA Technical Reports Server (NTRS)
DeLoach, Richard
2014-01-01
This paper compares the results of pressure measurements made on the same test article with the same test matrix in three transonic wind tunnels. A comparison is presented of the unexplained variance associated with polar replicates acquired in each tunnel. The impact of a significant component of systematic (not random) unexplained variance is reviewed, and the results of analyses of variance are presented to assess the degree of significant systematic error in these representative wind tunnel tests. Total uncertainty estimates are reported for 140 samples of pressure data, quantifying the effects of within-polar random errors and between-polar systematic bias errors.
The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation
NASA Technical Reports Server (NTRS)
Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.
2006-01-01
Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reason for these systematic errors, and why these remedies work, has remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full model experiments using the latest version of the Goddard Earth Observing System GCM.
Quantifying Errors in TRMM-Based Multi-Sensor QPE Products Over Land in Preparation for GPM
NASA Technical Reports Server (NTRS)
Peters-Lidard, Christa D.; Tian, Yudong
2011-01-01
Determining uncertainties in satellite-based multi-sensor quantitative precipitation estimates over land is of fundamental importance to both data producers and hydroclimatological applications. Evaluating TRMM-era products also lays the groundwork and sets the direction for algorithm and applications development for future missions, including GPM. QPE uncertainties result mostly from the interplay of systematic errors and random errors. In this work, we will synthesize our recent results quantifying the error characteristics of satellite-based precipitation estimates. Both systematic errors and total uncertainties have been analyzed for six different TRMM-era precipitation products (3B42, 3B42RT, CMORPH, PERSIANN, NRL and GSMaP). For systematic errors, we devised an error decomposition scheme to separate errors in precipitation estimates into three independent components: hit biases, missed precipitation and false precipitation. This decomposition scheme reveals hydroclimatologically-relevant error features and provides a better link to the error sources than conventional analysis, because in the latter these error components tend to cancel one another when aggregated or averaged in space or time. For the random errors, we calculated the measurement spread from the ensemble of these six quasi-independent products, and thus produced a global map of measurement uncertainties. The map yields a global view of the error characteristics and their regional and seasonal variations, reveals many undocumented error features over areas with no validation data available, and provides better guidance to global assimilation of satellite-based precipitation data. Insights gained from these results and how they could help with GPM will be highlighted.
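The hit/miss/false decomposition described above can be sketched on paired satellite and reference series; the sign convention (total = hit bias − missed + false) and the detection threshold are assumptions here, and the identity is exact when sub-threshold amounts are zero:

```python
def decompose_errors(satellite, reference, threshold=0.1):
    """Split the total bias of a satellite precipitation estimate into
    hit bias, missed precipitation, and false precipitation components."""
    hit_bias = missed = false_precip = 0.0
    for s, r in zip(satellite, reference):
        s_rain, r_rain = s >= threshold, r >= threshold
        if s_rain and r_rain:
            hit_bias += s - r        # both detect rain: amplitude error
        elif r_rain:
            missed += r              # reference rains, satellite misses
        elif s_rain:
            false_precip += s        # satellite rains, reference does not
    total_bias = sum(satellite) - sum(reference)
    return hit_bias, missed, false_precip, total_bias

# Toy gauge-vs-satellite series (mm/h):
hit, miss, false_p, total = decompose_errors([1.0, 0.0, 0.5, 0.0],
                                             [0.8, 0.6, 0.0, 0.0])
```

Keeping the three components separate is the point of the scheme: in the aggregated total, the missed and false terms can cancel even when each is individually large.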
Chronopoulos, D
2017-01-01
A systematic expression quantifying the wave energy skewing phenomenon as a function of the mechanical characteristics of a non-isotropic structure is derived in this study. A structure of arbitrary anisotropy, layering and geometric complexity is modelled through Finite Elements (FEs) coupled to a periodic structure wave scheme. A generic approach for efficiently computing the angular sensitivity of the wave slowness for each wave type, direction and frequency is presented. The approach does not involve any finite differentiation scheme and is therefore computationally efficient and not prone to the associated numerical errors.
Performance of quantum annealing on random Ising problems implemented using the D-Wave Two
NASA Astrophysics Data System (ADS)
Wang, Zhihui; Job, Joshua; Rønnow, Troels F.; Troyer, Matthias; Lidar, Daniel A.; USC Collaboration; ETH Collaboration
2014-03-01
Detecting a possible speedup of quantum annealing compared to classical algorithms is a pressing task in experimental adiabatic quantum computing. In this talk, we discuss the performance of the D-Wave Two quantum annealing device on Ising spin glass problems. The expected time to solution for the device to solve random instances with up to 503 spins and with specified coupling ranges is evaluated while carefully addressing the issue of statistical errors. We perform a systematic comparison of the expected time to solution between the D-Wave Two and classical stochastic solvers, specifically simulated annealing, and simulated quantum annealing based on quantum Monte Carlo, and discuss the question of speedup.
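The "expected time to solution" used in such device-versus-classical comparisons is conventionally the per-run anneal time scaled by the number of repetitions needed to see the ground state at least once with 99% probability. The sketch below assumes that standard definition and illustrative numbers; it is not taken from the talk itself.

```python
import math

def time_to_solution(t_run, p_success, target=0.99):
    """Expected total time to observe the ground state at least once with
    probability `target`, given the per-run success probability."""
    if p_success >= target:
        return t_run  # a single run already suffices
    reps = math.log(1.0 - target) / math.log(1.0 - p_success)
    return t_run * reps

# Hypothetical example: 20 microsecond anneals, 1% per-run success rate.
tts = time_to_solution(t_run=20e-6, p_success=0.01, target=0.99)
```

Because `p_success` is itself estimated from repeated runs, a careful comparison must propagate its statistical error into the quoted time to solution, which is the point the abstract emphasizes.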
Adaptive attitude control and momentum management for large-angle spacecraft maneuvers
NASA Technical Reports Server (NTRS)
Parlos, Alexander G.; Sunkel, John W.
1992-01-01
The fully coupled equations of motion are systematically linearized around an equilibrium point of a gravity gradient stabilized spacecraft, controlled by momentum exchange devices. These equations are then used for attitude control system design of an early Space Station Freedom flight configuration, demonstrating the errors caused by the improper approximation of the spacecraft dynamics. A full state feedback controller, incorporating gain-scheduled adaptation of the attitude gains, is developed for use during spacecraft on-orbit assembly or operations characterized by significant mass properties variations. The feasibility of the gain adaptation is demonstrated via a Space Station Freedom assembly sequence case study. The attitude controller stability robustness and transient performance during gain adaptation appear satisfactory.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
Doss, Hani; Tan, Aixin
2017-01-01
In the classical biased sampling problem, we have k densities π1(·), …, πk(·), each known up to a normalizing constant, i.e. for l = 1, …, k, πl(·) = νl(·)/ml, where νl(·) is a known function and ml is an unknown constant. For each l, we have an iid sample from πl,·and the problem is to estimate the ratios ml/ms for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the πl’s are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case. PMID:28706463
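In the simplest single-sample case, the ratio of normalizing constants has an importance-sampling representation: if X ~ π_s, then E[ν_l(X)/ν_s(X)] = m_l/m_s. The sketch below demonstrates this identity with an assumed pair of unnormalized Gaussian densities (chosen so the estimator has finite variance); the Vardi-type estimators discussed in the abstract generalize this to multiple samples and, as the authors show, to Markov chain samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unnormalized densities nu = m * pi. Here pi_s = N(0, 4) with
# m_s = sqrt(8*pi), and pi_l = N(0, 1) with m_l = sqrt(2*pi),
# so the true ratio m_l/m_s is exactly 0.5.
nu_s = lambda x: np.exp(-x**2 / 8.0)
nu_l = lambda x: np.exp(-x**2 / 2.0)

x = rng.normal(0.0, 2.0, size=200_000)    # iid draws from pi_s
w = nu_l(x) / nu_s(x)
ratio_hat = np.mean(w)                    # estimates m_l/m_s = 0.5
se_hat = np.std(w, ddof=1) / np.sqrt(x.size)   # iid standard error
```

The naive standard error on the last line is exactly what fails when the draws are a correlated Markov chain rather than iid; the regenerative-simulation approach in the paper supplies valid standard errors in that case.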
Computational studies of metal-metal and metal-ligand interactions
NASA Technical Reports Server (NTRS)
Barnes, Leslie A.
1992-01-01
The geometric structure of Cr(CO)6 is optimized at the modified coupled-pair functional (MCPF), single and double excitation coupled-cluster (CCSD) and CCSD(T) levels of theory (including a perturbational estimate for connected triple excitations), and the force constants for the totally symmetric representation are determined. The geometry of Cr(CO)5 is partially optimized at the MCPF, CCSD and CCSD(T) levels of theory. Comparison with experimental data shows that the CCSD(T) method gives the best results for the structures and force constants, and that remaining errors are probably due to deficiencies in the one-particle basis sets used for CO. A detailed comparison of the properties of free CO is therefore given, at both the MCPF and CCSD/CCSD(T) levels of treatment, using a variety of basis sets. With very large one-particle basis sets, the CCSD(T) method gives excellent results for the bond distance, dipole moment and harmonic frequency of free CO. The total binding energies of Cr(CO)6 and Cr(CO)5 are also determined at the MCPF, CCSD and CCSD(T) levels of theory. The CCSD(T) method gives a much larger total binding energy than either the MCPF or CCSD methods. An analysis of the basis set superposition error (BSSE) at the MCPF level of treatment points out limitations in the one-particle basis used here and in a previous study. Calculations using larger basis sets reduced the BSSE, but the total binding energy of Cr(CO)6 is still significantly smaller than the experimental value, although the first CO bond dissociation energy of Cr(CO)6 is well described. An investigation of 3s3p correlation reveals only a small effect. The remaining discrepancy between the experimental and theoretical total binding energy of Cr(CO)6 is probably due to limitations in the one-particle basis, rather than limitations in the correlation treatment. In particular, an additional d function and an f function on each C and O are needed to obtain quantitative results.
This is underscored by the fact that even using a very large primitive set (1042 primitive functions contracted to 300 basis functions), the superposition error for the total binding energy of Cr(CO)6 is 22 kcal/mol at the MCPF level of treatment.
Park, S B; Kim, H; Yao, M; Ellis, R; Machtay, M; Sohn, J W
2012-06-01
To quantify the systematic error of a Deformable Image Registration (DIR) system and establish a Quality Assurance (QA) procedure. To address the shortfall of the landmark approach, which is only available at significant visible feature points, we adopted a Deformation Vector Map (DVM) comparison approach. We used two CT image sets (R and T image sets) taken of the same patient at different times and generated a DVM, which includes the DIR systematic error. The DVM was calculated using fine-tuned B-Spline DIR and an L-BFGS optimizer. Utilizing this DVM, we generated an R' image set to eliminate the systematic error in the DVM. Thus, we have a truth data set, the R' and T image sets, and the truth DVM. To test a DIR system, we input the R' and T image sets into the DIR system and compare the test DVM to the truth DVM. If there is no systematic error, they should be identical. We built a Deformation Error Histogram (DEH) for quantitative analysis. The test registration was performed with an in-house B-Spline DIR system using a stochastic gradient descent optimizer. Our example data set was generated with a head and neck patient case. We also tested CT to CBCT deformable registration. We found that skin regions which interface with the air have relatively larger errors. Mobile joints such as shoulders also had larger errors. Average errors for ROIs were as follows; CTV: 0.4 mm, brain stem: 1.4 mm, shoulders: 1.6 mm, and normal tissues: 0.7 mm. We succeeded in building the DEH approach to quantify DVM uncertainty. Our data sets are available for testing other systems on our web page. Utilizing the DEH, users can decide how much systematic error they will accept. The DEH and our data can serve as tools for an AAPM task group to compose a DIR system QA guideline. This project is partially supported by the Agency for Healthcare Research and Quality (AHRQ) grant 1R18HS017424-01A2. © 2012 American Association of Physicists in Medicine.
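The DVM-comparison step described above reduces to a per-voxel vector difference followed by a histogram. The sketch below is a minimal illustration, not the authors' implementation; the grid shape, units and bin edges are assumptions.

```python
import numpy as np

def deformation_error_histogram(dvm_truth, dvm_test, bins):
    """Per-voxel magnitude of (test - truth) deformation vectors (mm),
    binned into a Deformation Error Histogram (DEH)."""
    err = np.linalg.norm(dvm_test - dvm_truth, axis=-1)
    counts, edges = np.histogram(err, bins=bins)
    return err, counts, edges

# Toy 4x4x4 grid with 3-component deformation vectors per voxel.
truth = np.zeros((4, 4, 4, 3))
test = np.zeros((4, 4, 4, 3))
test[..., 0] = 0.5                       # uniform 0.5 mm x-direction error
err, counts, edges = deformation_error_histogram(truth, test,
                                                 bins=[0, 1, 2, 4])
mean_err = err.mean()                    # summary per ROI, e.g. the CTV
```

In practice the error map would be masked by ROI contours (CTV, brain stem, shoulders, and so on) before averaging, giving the per-structure figures quoted in the abstract.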
Low-energy pion-nucleon scattering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibbs, W.R.; Ai, L.; Kaufmann, W.B.
An analysis of low-energy charged pion-nucleon data from recent π±p experiments is presented. From the scattering lengths and the Goldberger-Miyazawa-Oehme (GMO) sum rule we find a value of the pion-nucleon coupling constant of f² = 0.0756 ± 0.0007. We also find, contrary to most previous analyses, that the scattering volumes for the P31 and P13 partial waves are equal, within errors, corresponding to a symmetry found in the Hamiltonian of many theories. For the potential models used, the amplitudes are extrapolated into the subthreshold region to estimate the value of the Σ term. Off-shell amplitudes are also provided. © 1998 The American Physical Society
NASA Astrophysics Data System (ADS)
Rivière, G.; Hua, B. L.
2004-10-01
A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation from the sole knowledge of the control flow properties at the initial time of the forecast and whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted as the AI method. In terms of spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palinstrophy norm, and significantly more efficient than those for the total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows, whereas the palinstrophy singular vector forecast sometimes leads to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on Lagrangian dynamics of the perturbation velocity orientation provides a rationale for the "bred mode" behavior.
The Effect of Systematic Error in Forced Oscillation Testing
NASA Technical Reports Server (NTRS)
Williams, Brianne Y.; Landman, Drew; Flory, Isaac L., IV; Murphy, Patrick C.
2012-01-01
One of the fundamental problems in flight dynamics is the formulation of aerodynamic forces and moments acting on an aircraft in arbitrary motion. Classically, conventional stability derivatives are used for the representation of aerodynamic loads in the aircraft equations of motion. However, for modern aircraft with highly nonlinear and unsteady aerodynamic characteristics undergoing maneuvers at high angle of attack and/or angular rates the conventional stability derivative model is no longer valid. Attempts to formulate aerodynamic model equations with unsteady terms are based on several different wind tunnel techniques: for example, captive, wind tunnel single degree-of-freedom, and wind tunnel free-flying techniques. One of the most common techniques is forced oscillation testing. However, the forced oscillation testing method does not address the systematic and systematic correlation errors from the test apparatus that cause inconsistencies in the measured oscillatory stability derivatives. The primary objective of this study is to identify the possible sources and magnitude of systematic error in representative dynamic test apparatuses. Sensitivities of the longitudinal stability derivatives to systematic errors are computed, using a high fidelity simulation of a forced oscillation test rig, and assessed using both Design of Experiments and Monte Carlo methods.
Global Warming Estimation from MSU
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Iacovazzi, Robert; Yoo, Jung-Moon
1998-01-01
Microwave Sounding Unit (MSU) radiometer observations in Ch 2 (53.74 GHz) from sequential, sun-synchronous, polar-orbiting NOAA satellites contain small systematic errors. Some of these errors are time-dependent and some are time-independent. Small errors in Ch 2 data of successive satellites arise from calibration differences. Also, successive NOAA satellites tend to have different Local Equatorial Crossing Times (LECT), which introduce differences in Ch 2 data due to the diurnal cycle. These two sources of systematic error are largely time-independent. However, because of atmospheric drag, there can be a drift in the LECT of a given satellite, which introduces time-dependent systematic errors. One of these errors is due to the progressive change in the diurnal cycle and the other is due to associated changes in instrument heating by the sun. In order to infer the global temperature trend from these MSU data, we have eliminated explicitly the time-independent systematic errors. The two time-dependent errors cannot be assessed from each satellite alone. For this reason, their cumulative effect on the global temperature trend is evaluated implicitly. Christy et al. (1998) (CSL), based on their method of analysis of the MSU Ch 2 data, infer a global temperature cooling trend (-0.046 K per decade) from 1979 to 1997, although their near-nadir measurements yield a near-zero trend (0.003 K/decade). Utilising an independent method of analysis, we infer that global temperature warmed by 0.12 +/- 0.06 C per decade from the observations of MSU Ch 2 during the period 1980 to 1997.
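The decadal trend quoted above is, in essence, an ordinary least-squares slope through a monthly brightness-temperature anomaly series, scaled from per-year to per-decade. The sketch below is an illustration with synthetic data (the 0.12 K/decade trend and noise level are assumptions), not the authors' analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# 18 years of synthetic monthly anomalies with an assumed trend of
# 0.012 K/yr (= 0.12 K/decade) plus independent 0.1 K noise.
years = 1980 + np.arange(216) / 12.0
anom = 0.012 * (years - 1980) + rng.normal(0.0, 0.1, years.size)

slope_per_year = np.polyfit(years, anom, 1)[0]
trend_per_decade = 10.0 * slope_per_year   # K per decade
```

The hard part of the real analysis is not this fit but removing the inter-satellite calibration and diurnal-drift errors first; with those left in, the fitted slope absorbs them and the sign of the trend can flip, which is exactly the disagreement the abstract describes.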
NASA Astrophysics Data System (ADS)
Glover, Paul W. J.
2016-07-01
When scientists apply Archie's first law they often include an extra parameter a, which was introduced about 10 years after the equation's first publication by Winsauer et al. (1952), and which is sometimes called the "tortuosity" or "lithology" parameter. This parameter is not, however, theoretically justified. Paradoxically, the Winsauer et al. (1952) form of Archie's law often performs better than the original, more theoretically correct version. The difference in the cementation exponent calculated from these two forms of Archie's law is important, and can lead to a misestimation of reserves by at least 20 % for typical reservoir parameter values. We have examined the apparent paradox, and conclude that while the theoretical form of the law is correct, the data that we have been analysing with Archie's law have been in error. There are at least three types of systematic error that are present in most measurements: (i) a porosity error, (ii) a pore fluid salinity error, and (iii) a temperature error. Each of these systematic errors is sufficient to ensure that a non-unity value of the parameter a is required in order to fit the electrical data well. Fortunately, the inclusion of this parameter in the fit has compensated for the presence of the systematic errors in the electrical and porosity data, leading to a value of cementation exponent that is correct. The exceptions are those cementation exponents that have been calculated for individual core plugs. We make a number of recommendations for reducing the systematic errors that contribute to the problem and suggest that the value of the parameter a may now be used as an indication of data quality.
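The mechanism described above can be demonstrated numerically: data generated exactly from the theoretical law F = φ⁻ᵐ, then contaminated with a systematic porosity offset, fit best with a non-unity a. The +0.02 offset and m = 2 below are assumed values for illustration only.

```python
import numpy as np

rng = np.random.default_rng(2)
phi_true = rng.uniform(0.08, 0.30, 50)   # true porosity
F = phi_true ** -2.0                     # theoretical Archie law: m = 2, a = 1

# Fitting log F = log a - m log(phi) to the exact data recovers a = 1, m = 2.
s0, i0 = np.polyfit(np.log(phi_true), np.log(F), 1)

# Introduce a systematic porosity measurement error (assumed +0.02):
phi_meas = phi_true + 0.02
s1, i1 = np.polyfit(np.log(phi_meas), np.log(F), 1)
m_biased = -s1            # cementation exponent drifts away from 2
a_biased = np.exp(i1)     # a non-unity "tortuosity" parameter appears
```

As the abstract argues, the fitted a here is not a physical parameter: it is the regression compensating for the systematic error in the porosity data, and its departure from unity can serve as a data-quality flag.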
[Errors in Peruvian medical journals references].
Huamaní, Charles; Pacheco-Romero, José
2009-01-01
References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals, we reviewed 515 scientific paper references selected by systematic randomized sampling and corroborated the reference information with the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, varied and multiple. We suggest systematic revision of references in the editorial process as well as extending the discussion on this theme. Keywords: references, periodicals, research, bibliometrics.
König, Caroline; Cárdenas, Martha I; Giraldo, Jesús; Alquézar, René; Vellido, Alfredo
2015-09-29
The characterization of proteins in families and subfamilies, at different levels, entails the definition and use of class labels. When the adscription of a protein to a family is uncertain, or even wrong, this becomes an instance of what has come to be known as a label noise problem. Label noise has a potentially negative effect on any quantitative analysis of proteins that depends on label information. This study investigates class C of G protein-coupled receptors, which are cell membrane proteins of relevance both to biology in general and pharmacology in particular. Their supervised classification into different known subtypes, based on primary sequence data, is hampered by label noise. The latter may stem from a combination of expert knowledge limitations and the lack of a clear correspondence between labels that mostly reflect GPCR functionality and the different representations of the protein primary sequences. In this study, we describe a systematic approach, using Support Vector Machine classifiers, to the analysis of G protein-coupled receptor misclassifications. As a proof of concept, this approach is used to assist the discovery of labeling quality problems in a curated, publicly accessible database of this type of proteins. We also investigate the extent to which physico-chemical transformations of the protein sequences reflect G protein-coupled receptor subtype labeling. The candidate mislabeled cases detected with this approach are externally validated with phylogenetic trees and against further trusted sources such as the National Center for Biotechnology Information, Universal Protein Resource, European Bioinformatics Institute and Ensembl Genome Browser information repositories. In quantitative classification problems, class labels are often by default assumed to be correct. Label noise, though, is bound to be a pervasive problem in bioinformatics, where labels may be obtained indirectly through complex, many-step similarity modelling processes. 
In the case of G protein-coupled receptors, methods capable of singling out and characterizing those sequences with consistent misclassification behaviour are required to minimize this problem. A systematic, Support Vector Machine-based method has been proposed in this study for such purpose. The proposed method enables a filtering approach to the label noise problem and might become a support tool for database curators in proteomics.
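The "consistent misclassification" idea above can be illustrated without the authors' full SVM pipeline: hold samples out repeatedly, and flag those that the classifier gets wrong in nearly every repetition. The sketch below uses a toy nearest-centroid classifier on synthetic 2-D data in place of an SVM on sequence features; all data sizes, splits and thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two well-separated synthetic classes; deliberately mislabel 5 samples.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y_noisy = np.array([0] * 50 + [1] * 50)
y_noisy[:5] = 1

def nearest_centroid_predict(Xtr, ytr, Xte):
    c0, c1 = Xtr[ytr == 0].mean(axis=0), Xtr[ytr == 1].mean(axis=0)
    return (np.linalg.norm(Xte - c1, axis=1)
            < np.linalg.norm(Xte - c0, axis=1)).astype(int)

n, reps = len(y_noisy), 50
miscls = np.zeros(n)                     # times each sample was misclassified
tested = np.zeros(n)                     # times each sample was held out
for _ in range(reps):
    held = rng.choice(n, size=n // 5, replace=False)
    train = np.setdiff1d(np.arange(n), held)
    pred = nearest_centroid_predict(X[train], y_noisy[train], X[held])
    miscls[held] += pred != y_noisy[held]
    tested[held] += 1

freq = miscls / np.maximum(tested, 1)
flagged = np.where(freq > 0.8)[0]        # candidate label-noise cases
```

In the paper's setting, the flagged sequences would then be validated externally (phylogenetic trees, NCBI, UniProt, and so on) before any relabeling, since a consistently misclassified sample may also be a genuinely atypical member of its class.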
Effective axial-vector strength and β-decay systematics
NASA Astrophysics Data System (ADS)
Delion, D. S.; Suhonen, J.
2014-09-01
We use the weak axial-vector coupling strength g_A as a key parameter to reproduce simultaneously the available data for both the Gamow-Teller β⁻ and β⁺/EC decay rates in nine triplets of isobars with mass numbers A = 70, 78, 100, 104, 106, 110, 116, 128, 130. We use the proton-neutron quasiparticle random-phase approximation (pnQRPA) with a schematic dipole interaction containing particle-particle and particle-hole parts with mass-dependent strengths. Our analysis points to a strongly quenched effective value g_A ≈ 0.3, with a relative error of 28%. We then perform a systematic computation of 218 experimentally known β⁻ and β⁺/EC decays with quite remarkable success. The presently extracted value of g_A should be taken as an effective one, specific to a given nuclear theory framework. The present studies suggest that the effective g_A is suitable for the description of decay transitions to 1⁺ states at moderate excitation, below the Gamow-Teller giant resonance region.
Error mechanism analyses of an ultra-precision stage for high speed scan motion over a large stroke
NASA Astrophysics Data System (ADS)
Wang, Shaokai; Tan, Jiubin; Cui, Jiwen
2015-02-01
The Reticle Stage (RS) is designed to complete scan motion at high speed with nanometer-scale accuracy over a large stroke. Compared with the allowable scan accuracy of a few nanometers, errors caused by any internal or external disturbances are critical and must not be ignored. In this paper, the RS is first introduced in terms of mechanical structure, forms of motion, and control method. Based on that, the mechanisms by which disturbances transfer to the final servo-related error in the scan direction are analyzed, including feedforward error, coupling between the large stroke stage (LS) and the short stroke stage (SS), and movement of the measurement reference. In particular, different forms of coupling between the SS and LS are discussed in detail. After the theoretical analysis, the contributions of these disturbances to the final error are simulated numerically. The residual positioning error caused by feedforward error in the acceleration process is about 2 nm after the settling time, that caused by the coupling between the SS and LS is about 2.19 nm, and that caused by movement of the measurement reference is about 0.6 nm.
Hasani, Mohammad; Sakieh, Yousef; Dezhkam, Sadeq; Ardakani, Tahereh; Salmanmahiny, Abdolrassoul
2017-04-01
A hierarchical intensity analysis of land-use change is applied to evaluate the dynamics of a coupled urban coastal system in Rasht County, Iran. Temporal land-use layers of 1987, 1999, and 2011 are employed, while spatial accuracy metrics are only available for 2011 data (overall accuracy of 94%). The errors in 1987 and 1999 layers are unknown, which can influence the accuracy of temporal change information. Such data were employed to examine the size and the type of errors that could justify deviations from uniform change intensities. Accordingly, errors comprising 3.31 and 7.47% of 1999 and 2011 maps, respectively, could explain all differences from uniform gains and errors including 5.21 and 1.81% of 1987 and 1999 maps, respectively, could explain all deviations from uniform losses. Additional historical information is also applied for uncertainty assessment and to separate probable map errors from actual land-use changes. In this regard, historical processes in Rasht County can explain different types of transition that are either consistent or inconsistent to known processes. The intensity analysis assisted in identification of systematic transitions and detection of competitive categories, which cannot be investigated through conventional change detection methods. Based on results, built-up area is the most active gaining category in the area and wetland category with less areal extent is more sensitive to intense land-use change processes. Uncertainty assessment results also indicated that there are no considerable classification errors in temporal land-use data and these imprecise layers can reliably provide implications for informed decision making.
NASA Astrophysics Data System (ADS)
Herbst, M.; Hellebrand, H. J.; Bauer, J.; Vanderborght, J.; Vereecken, H.
2006-12-01
The modelling of soil respiration plays an important role in the prediction of climate change. Soil respiration is usually divided into autotrophic and heterotrophic fractions, originating from root respiration and microbial decomposition of soil organic carbon, respectively. We report on the coupling of a one-dimensional water, heat and CO2 flux model (SOILCO2) with a model of carbon turnover (RothC) for the prediction of soil heterotrophic respiration. The coupled model was tested using soil temperature, soil moisture, and CO2 flux measurements in a bare soil experimental plot located in Bornim, Germany. A seven-year record of soil and CO2 measurements covering a broad range of atmospheric and soil conditions was available to evaluate the model performance. After calibrating the decomposition rate constant of the humic fraction pool, the overall model performance for CO2 efflux prediction was acceptable. The root mean square error of the CO2 efflux prediction was 0.12 cm³/cm²/d. During the severe summer drought of 2003, very high CO2 effluxes were measured, which could not be explained by the model. These high fluxes were attributed to a pressure pumping effect. The soil temperature dependency of CO2 production was well described by the model, whereas the biggest opportunity for improvement is seen in a better description of the soil moisture dependency of CO2 production. The calibration of the humus decomposition rate constant revealed a value of 0.09 1/d, which is higher than the original value suggested by the RothC model developers but within the range of literature values.
Galli, C
2001-07-01
It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorption levels generally considered 'safe' (i.e. absorption <1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of molecular extinction coefficient is required to ensure robust analytical methods.
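The polychromatic effect described above is easy to reproduce numerically: the detector averages transmitted intensity over the spectral window, and since transmission is exponential in the extinction coefficient, the apparent absorbance falls below the Beer-Lambert value wherever ε varies across the band. The rectangular 10 nm window and linear ε(λ) slope below are assumptions for illustration, not the Note's model.

```python
import numpy as np

lam = np.linspace(495.0, 505.0, 201)       # 10 nm spectral window (nm)
eps = 1000.0 + 50.0 * (lam - 500.0)        # sloped extinction, L/(mol*cm)
c, path = 1e-3, 1.0                        # concentration (mol/L), path (cm)

T = 10.0 ** (-eps * c * path)              # transmission at each wavelength
A_apparent = -np.log10(T.mean())           # detector averages intensity
A_true = eps.mean() * c * path             # monochromatic Beer-Lambert value
rel_error = (A_apparent - A_true) / A_true # negative: apparent A reads low
```

With these assumed parameters the apparent absorbance reads a few percent low even though the band-average absorbance is only 1.0, consistent with the Note's point that errors of this magnitude occur at absorption levels generally considered safe.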
Pendrill, Robert; Engström, Olof; Volpato, Andrea; Zerbetto, Mirco; Polimeno, Antonino; Widmalm, Göran
2016-01-28
The monosaccharide L-rhamnose is common in bacterial polysaccharides and the disaccharide α-L-Rhap-α-(1 → 2)-α-L-Rhap-OMe represents a structural model for a part of Shigella flexneri O-antigen polysaccharides. Utilization of [1′-¹³C] site-specific labeling in the anomeric position at the glycosidic linkage between the two sugar residues facilitated the determination of transglycosidic NMR ³JCH and ³JCC coupling constants. Based on these spin-spin couplings, the major state and the conformational distribution could be determined with respect to the ψ torsion angle, which changed between water and dimethyl sulfoxide (DMSO) as solvents, a finding mirrored by molecular dynamics (MD) simulations with explicit solvent molecules. The ¹³C NMR spin relaxation parameters T1, T2, and heteronuclear NOE of the probe were measured for the disaccharide in DMSO-d6 at two magnetic field strengths, with standard deviations ≤1%. The combination of MD simulation and a stochastic description based on the diffusive chain model resulted in excellent agreement between calculated and experimentally observed ¹³C relaxation parameters, with an average error of <2%. The coupling between the global reorientation of the molecule and the local motion of the spin probe is deemed essential if reproduction of NMR relaxation parameters is to succeed, since decoupling of the two modes of motion results in significantly worse agreement. Calculation of ¹³C relaxation parameters based on the correlation functions obtained directly from the MD simulation of the solute molecule in DMSO as solvent showed satisfactory agreement, with errors on the order of 10% or less.
NASA Astrophysics Data System (ADS)
Zhang, Y. K.; Liang, X.
2014-12-01
Effects of aquifer heterogeneity and of uncertainties in source/sink and in the initial and boundary conditions of a groundwater flow model on the spatiotemporal variations of groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x,t) in an unconfined aquifer described by a linearized Boussinesq equation with a white-noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) at early time is mainly caused by the random initial condition, and this error decreases with time, approaching a constant error at later times. The duration during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error in groundwater level at later times is due to the combined effects of the uncertain source/sink and flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited to a narrow zone near that boundary but remains more or less constant over time. The effect of heterogeneity is to increase the variation of groundwater level, and the maximum effect occurs close to the constant head boundary because of the linear mean hydraulic gradient. The correlation of groundwater level decreases with the time interval and spatial distance. In addition, heterogeneity enhances the correlation of groundwater level, especially at larger time intervals and small spatial distances.
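For orientation, a commonly used linearized form of the one-dimensional Boussinesq equation with a stochastic forcing term is sketched below; the paper's exact notation may differ, and the symbols here are assumptions:

```latex
S_y \frac{\partial h'}{\partial t}
  = T \frac{\partial^2 h'}{\partial x^2} + W(x,t),
\qquad T = K \bar{h},
```

where $h'(x,t)$ is the head fluctuation about the mean, $S_y$ is the specific yield, $T$ is the transmissivity (hydraulic conductivity $K$ times the mean saturated thickness $\bar{h}$, random when the aquifer is heterogeneous), and $W(x,t)$ is the white-noise source/sink (e.g. recharge). The variance and covariance of $h'$ then follow by treating $W$ and $T$ as random inputs to this linear equation.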
Taylor, C; Parker, J; Stratford, J; Warren, M
2018-05-01
Although all systematic and random positional setup errors can be corrected in their entirety during on-line image-guided radiotherapy, the use of a specified action level, below which no correction occurs, is also an option. This service evaluation aimed to investigate the use of a 3 mm action level for on-line image assessment and correction (online, systematic set-up error and weekly evaluation) for lower extremity sarcoma, and to understand the impact on imaging frequency and patient positioning error within one cancer centre. All patients were immobilised using a thermoplastic shell attached to a plastic base and an individually moulded footrest. A retrospective analysis of 30 patients was performed. Patient setup and correctional data derived from cone beam CT analysis were retrieved. The timing, frequency and magnitude of corrections were evaluated, and the population systematic and random errors were derived. 20% of patients had no systematic corrections over the duration of treatment, and 47% had one. The maximum number of systematic corrections per course of radiotherapy was 4, which occurred for 2 patients. 34% of correction episodes occurred within the first 5 fractions. All patients had at least one observed translational error greater than 0.3 cm during their treatment, and 80% of patients had at least one greater than 0.5 cm. The population systematic error was 0.14 cm, 0.10 cm and 0.14 cm, and the random error 0.27 cm, 0.22 cm and 0.23 cm, in the lateral, caudocranial and anteroposterior directions respectively. The required Planning Target Volume margin for the study population was 0.55 cm, 0.41 cm and 0.50 cm in the lateral, caudocranial and anteroposterior directions. The 3 mm action level for image assessment and correction prior to delivery reduced the imaging burden and focussed intervention on patients who exhibited greater positional variability. This strategy could be an efficient deployment of departmental resources if full daily correction of positional setup error is not possible. Copyright © 2017. Published by Elsevier Ltd.
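The quoted PTV margins track the widely used 2.5Σ + 0.7σ margin recipe (van Herk); the abstract does not state which formula was actually applied, so the recipe below is an assumption used only as a plausibility check, with the population errors taken from the text:

```python
# Hedged check: assume the PTV margin recipe M = 2.5*Sigma + 0.7*sigma
# (the abstract does not name its margin formula).
def ptv_margin(sys_sd, rand_sd):
    """PTV margin (cm) from population systematic (Sigma) and random (sigma) SDs."""
    return 2.5 * sys_sd + 0.7 * rand_sd

# Population errors from the abstract (lateral, caudocranial, anteroposterior), cm.
systematic = [0.14, 0.10, 0.14]
random_sd = [0.27, 0.22, 0.23]
margins = [round(ptv_margin(S, s), 2) for S, s in zip(systematic, random_sd)]
# Reported margins were 0.55, 0.41 and 0.50 cm; the recipe reproduces them
# to within about 0.01 cm in each direction.
```

The close agreement suggests the study's margins follow this standard population-based recipe, though the abstract itself leaves the formula unstated.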
SPIDER OPTIMIZATION. II. OPTICAL, MAGNETIC, AND FOREGROUND EFFECTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
O'Dea, D. T.; Clark, C. N.; Contaldi, C. R.
2011-09-01
SPIDER is a balloon-borne instrument designed to map the polarization of the cosmic microwave background (CMB) with degree-scale resolution over a large fraction of the sky. SPIDER's main goal is to measure the amplitude of primordial gravitational waves through their imprint on the polarization of the CMB if the tensor-to-scalar ratio, r, is greater than 0.03. To achieve this goal, instrumental systematic errors must be controlled with unprecedented accuracy. Here, we build on previous work to use simulations of SPIDER observations to examine the impact of several systematic effects that have been characterized through testing and modeling of various instrument components. In particular, we investigate the impact of the non-ideal spectral response of the half-wave plates, coupling between focal-plane components and Earth's magnetic field, and beam mismatches and asymmetries. We also present a model of diffuse polarized foreground emission based on a three-dimensional model of the Galactic magnetic field and dust, and study the interaction of this foreground emission with our observation strategy and instrumental effects. We find that the expected level of foreground and systematic contamination is sufficiently low for SPIDER to achieve its science goals.
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block, versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational directions, and were 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
Experimental Investigation of Jet Impingement Heat Transfer Using Thermochromic Liquid Crystals
NASA Technical Reports Server (NTRS)
Dempsey, Brian Paul
1997-01-01
Jet impingement cooling of a hypersonic airfoil leading edge is experimentally investigated using thermochromic liquid crystals (TLCs) to measure surface temperature. The experiment uses computer data acquisition with digital imaging of the TLCs to determine heat transfer coefficients during a transient experiment. The data reduction relies on analysis of a coupled transient conduction-convection heat transfer problem that characterizes the experiment. The recovery temperature of the jet is accounted for by running two experiments with different heating rates, thereby generating a second equation that is used to solve for the recovery temperature. The resulting solution requires a complicated numerical iteration that is handled by a computer. Because the computational data reduction method is complex, special attention is paid to error assessment. The error analysis considers random and systematic errors generated by the instrumentation along with errors generated by the approximate nature of the numerical methods. Results of the error analysis show that the experimentally determined heat transfer coefficients are accurate to within 15%. The error analysis also shows that the recovery temperature data may be in error by more than 50%, and that the recovery temperature data are only reliable when the recovery temperature of the jet is greater than 5 °C, i.e., when the jet velocity is in excess of 100 m/s. Parameters that were investigated include nozzle width, distance from the nozzle exit to the airfoil surface, and jet velocity. Heat transfer data are presented in graphical and tabular forms. An engineering analysis of hypersonic airfoil leading edge cooling is performed using the results from these experiments. Several suggestions for improving the experimental technique are discussed.
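The transient TLC technique rests on the classical semi-infinite-wall conduction solution; a minimal single-test sketch of the data reduction is shown below (the thesis's two-heating-rate iteration for the recovery temperature is not reproduced, and the wall properties and target temperature ratio are illustrative values, not the thesis's data):

```python
import math

# Semi-infinite transient conduction relation used in TLC data reduction:
#   theta = (Ts - Ti)/(Taw - Ti) = 1 - exp(beta^2)*erfc(beta),
# with beta = h*sqrt(alpha*t)/k. Since theta(beta) is monotone increasing,
# the heat transfer coefficient h can be recovered by bisection.
def theta_of_beta(beta):
    return 1.0 - math.exp(beta * beta) * math.erfc(beta)

def solve_h(theta_target, alpha, k, t):
    lo, hi = 0.0, 10.0             # theta(10) ~ 0.94 brackets typical targets
    for _ in range(100):           # bisection on the monotone theta(beta)
        mid = 0.5 * (lo + hi)
        if theta_of_beta(mid) < theta_target:
            lo = mid
        else:
            hi = mid
    beta = 0.5 * (lo + hi)
    return beta * k / math.sqrt(alpha * t)

# Illustrative acrylic wall: alpha = 1.1e-7 m^2/s, k = 0.19 W/(m K),
# TLC colour change observed at theta = 0.4 after t = 30 s.
h = solve_h(theta_target=0.4, alpha=1.1e-7, k=0.19, t=30.0)
```

Two such equations from tests at different heating rates can then be solved simultaneously for both h and the unknown recovery temperature, which is the coupled iteration the thesis describes.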
Parton Dynamics Inferred from High-Mass Drell-Yan Dimuons Induced by 120 GeV p+D Interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ramson, Bryan J.
2018-01-01
Fermilab Experiment 906/SeaQuest (E906/SeaQuest) is the latest in a well established tradition of studying leptoproduction from the annihilation of a quark and anti-quark, known as the Drell-Yan process. The broad goal of E906/SeaQuest is measuring various properties of nucleon structure in order to learn more about quarks and Quantum Chromodynamics (QCD), the mathematical description of the strong force. The present work investigated violations of the Lam-Tung relation between virtual photon polarization and quark and lepton angular momentum. The violation of Lam-Tung can be explained as the signature of quark-nucleon spin-orbit coupling through the use of the Transverse-Momentum-Dependent (TMD) framework, which assumes that the initial transverse momentum of quarks is smaller than the hard scattering scale, but also non-negligible. An analysis of the angular moments in Drell-Yan collected by E906/SeaQuest was performed with four different configurations in order to estimate the systematic errors attributed to each correction. After correction for background and error propagation, the final extraction of the azimuthal moment excluding contributions from the trigger was ν = 0.151 ± 0.88(stat.) ± 0.346(syst.) at an average transverse momentum of 0.87 ± 0.50 GeV/c and an average dimuon mass of 5.48 ± 0.70 GeV. In the future, the magnitude of the systematic errors on the extraction could potentially be reduced by improving the quality of the trigger efficiency calculation, improving the intensity dependent event reconstruction efficiency, considering the changes in acceptance due to a beam shift relative to the E906/SeaQuest spectrometer, and improving the modeling of background.
Participation in the TOMS Science Team
NASA Technical Reports Server (NTRS)
Chance, Kelly; Hilsenrath, Ernest (Technical Monitor)
2002-01-01
Because of the nominal funding provided by this grant, some of the relevant research is partially funded by other sources. Research performed for this funding period included the following items: We have investigated errors in TOMS ozone measurements caused by the uncertainty in wavelength calibration, coupled with the ozone cross sections in the Huggins bands and their temperature dependence. Preliminary results show that 0.1 nm uncertainty in TOMS wavelength calibration at the ozone active wavelengths corresponds to approx. 1% systematic error in O3, and thus potential 1% biases among ozone trends from the various TOMS instruments. This conclusion will be revised for absolute O3 measurements as cross sections are further investigated for inclusion in the HITRAN database at the SAO, but the potential for relative errors remains. In order to aid further comparisons among TOMS and GOME ozone measurements, we have implemented our method of direct fitting of GOME radiances (BOAS) for O3, and now obtain the best fitting precision to date for GOME O3 columns. This will aid in future comparisons of the actual quantities measured and fitted for the two instrument types. We have made comparisons between GOME ICFA cloud fraction and cloud fraction determined from GOME data using the Ring effect in the Ca II lines. There is a strong correlation, as expected, but there are substantial systematic biases between the determinations. This study will be refined in the near future using the recently-developed GOME Cloud Retrieval Algorithm (GOMECAT). We have improved the SAO Ring effect determination to include better convolution with instrument transfer functions and inclusion of interferences by atmospheric absorbers (e.g., O3). This has been made available to the general community.
Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim
2015-01-01
Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.
Internal robustness: systematic search for systematic bias in SN Ia data
NASA Astrophysics Data System (ADS)
Amendola, Luca; Marra, Valerio; Quartin, Miguel
2013-04-01
A great deal of effort is currently being devoted to understanding, estimating and removing systematic errors in cosmological data. In the particular case of Type Ia supernovae, systematics are starting to dominate the error budget. Here we propose a Bayesian tool for carrying out a systematic search for systematic contamination. This serves as an extension to the standard goodness-of-fit tests and allows one not only to cross-check raw or processed data for the presence of systematics but also to pin-point the data that are most likely contaminated. We successfully test our tool with mock catalogues and conclude that the Union2.1 data do not possess a significant amount of systematics. Finally, we show that if one includes in Union2.1 the supernovae that originally failed the quality cuts, our tool signals the presence of systematics at over 3.8σ confidence level.
Kutateladze, Andrei G; Mukhina, Olga A
2014-09-05
Spin-spin coupling constants in (1)H NMR carry a wealth of structural information and offer a powerful tool for deciphering molecular structures. However, accurate ab initio or DFT calculations of spin-spin coupling constants have been very challenging and expensive. Scaling of (easy) Fermi contacts, fc, especially in the context of recent findings by Bally and Rablen (Bally, T.; Rablen, P. R. J. Org. Chem. 2011, 76, 4818), offers a framework for achieving practical evaluation of spin-spin coupling constants. We report a faster and more precise parametrization approach utilizing a new basis set for hydrogen atoms optimized in conjunction with (i) inexpensive B3LYP/6-31G(d) molecular geometries, (ii) inexpensive 4-31G basis set for carbon atoms in fc calculations, and (iii) individual parametrization for different atom types/hybridizations, not unlike a force field in molecular mechanics, but designed for the fc's. With the training set of 608 experimental constants we achieved rmsd <0.19 Hz. The methodology performs very well as we illustrate with a set of complex organic natural products, including strychnine (rmsd 0.19 Hz), morphine (rmsd 0.24 Hz), etc. This precision is achieved with much shorter computational times: accurate spin-spin coupling constants for the two conformers of strychnine were computed in parallel on two 16-core nodes of a Linux cluster within 10 min.
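At its core, the scaled-Fermi-contact approach is a linear parametrization J ≈ a·fc + b fitted against experimental couplings; a minimal ordinary-least-squares sketch follows (the data points are invented, and the paper fits separate parameters per atom type/hybridization against its 608-constant training set rather than one global line):

```python
# Least-squares scaling of computed Fermi contacts (fc) onto experimental
# couplings, J ~ a*fc + b (invented data; slope and intercept below are
# illustrative, not the paper's fitted parameters).
def fit_scaling(fc, j_exp):
    n = len(fc)
    mx = sum(fc) / n
    my = sum(j_exp) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(fc, j_exp))
         / sum((x - mx) ** 2 for x in fc))
    b = my - a * mx
    rmsd = (sum((y - (a * x + b)) ** 2 for x, y in zip(fc, j_exp)) / n) ** 0.5
    return a, b, rmsd

fc = [1.0, 2.0, 3.0, 4.0]            # computed Fermi contacts, Hz
j_exp = [1.65, 2.90, 4.15, 5.40]     # "experimental" couplings, Hz
a, b, rmsd = fit_scaling(fc, j_exp)
```

The rmsd of such a fit is the quantity the paper reports (<0.19 Hz on its training set) as the figure of merit of the parametrization.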
Effects of mucosal loading on vocal fold vibration.
Tao, Chao; Jiang, Jack J
2009-06-01
A chain model was proposed in this study to examine the effects of mucosal loading on vocal fold vibration. Mucosal loading was defined as the loading caused by the interaction between the vocal folds and the surrounding tissue. In the proposed model, the vocal folds and the surrounding tissue were represented by a series of oscillators connected by a coupling spring. The lumped masses, springs, and dampers of the oscillators modeled the tissue properties of mass, stiffness, and viscosity, respectively. The coupling spring exemplified the tissue interactions. By numerically solving this chain model, the effects of mucosal loading on the phonation threshold pressure, phonation instability pressure, and energy distribution in a voice production system were studied. It was found that when mucosal loading is small, phonation threshold pressure increases with the damping constant R(r), the mass constant R(m), and the coupling constant R(mu) of mucosal loading but decreases with the stiffness constant R(k). Phonation instability pressure is also related to mucosal loading. It was found that phonation instability pressure increases with the coupling constant R(mu) but decreases with the stiffness constant R(k) of mucosal loading. Therefore, it was concluded that mucosal loading directly affects voice production.
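A minimal numerical sketch of such a chain is given below: a row of mass-spring-damper oscillators joined by coupling springs, integrated with semi-implicit Euler. All parameter values are illustrative, not the paper's dimensionless constants R(r), R(m), R(mu), R(k):

```python
# Chain of damped oscillators joined by coupling springs (all values
# illustrative; not the paper's mucosal-loading constants).
def simulate_chain(n=5, m=1.0, k=10.0, c=0.5, k_couple=2.0, dt=1e-3, steps=5000):
    """Release the first mass from x = 1 and return the final displacements
    after integrating with semi-implicit Euler."""
    x = [0.0] * n
    v = [0.0] * n
    x[0] = 1.0
    for _ in range(steps):
        for i in range(n):
            f = -k * x[i] - c * v[i]               # local spring and damper
            if i > 0:
                f += k_couple * (x[i - 1] - x[i])  # coupling to left neighbour
            if i < n - 1:
                f += k_couple * (x[i + 1] - x[i])  # coupling to right neighbour
            v[i] += dt * f / m
        for i in range(n):
            x[i] += dt * v[i]
    return x

final = simulate_chain()
# Damping drains energy from the chain, so all displacements shrink over time.
```

In the paper's model the first oscillators represent the vocal folds and the rest the surrounding tissue, so varying the coupling and stiffness constants of the "tissue" part of the chain is what probes the mucosal-loading effects described above.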
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, such as testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distributions of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error, and zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology allows one to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
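The zoning step can be sketched as follows: split the glucose range at a threshold, then characterize absolute error in the low zone and relative error in the high zone. The 75 mg/dl threshold and the data below are invented for illustration, and the paper's skew-normal/exponential maximum-likelihood fits are omitted; only the constant-SD zoning idea is shown:

```python
import statistics

# Sketch of the zoning idea: constant-SD absolute error at low glucose,
# constant-SD relative error at high glucose. Threshold and data are
# invented; the paper's PDF fitting step is not reproduced here.
def zone_errors(references, readings, threshold=75.0):
    abs_err = [r - ref for ref, r in zip(references, readings) if ref <= threshold]
    rel_err = [(r - ref) / ref for ref, r in zip(references, readings) if ref > threshold]
    return statistics.stdev(abs_err), statistics.stdev(rel_err)

refs = [60, 70, 65, 120, 150, 200]   # reference blood glucose, mg/dl
meas = [62, 67, 66, 126, 144, 210]   # SMBG readings, mg/dl
sd_abs, sd_rel = zone_errors(refs, meas)
```

In the paper's methodology, a skew-normal PDF (plus an exponential component for outliers) would then be fitted by maximum likelihood to the error sample within each zone and validated by goodness-of-fit tests.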
An Ab Initio Study of the Low-Lying Doublet States of AgO and AgS
NASA Technical Reports Server (NTRS)
Bauschlicher, Charles W., Jr.; Partridge, Harry; Langhoff, Stephen R.
1990-01-01
Spectroscopic constants (D(sub o), r(sub e), mu(sub e), T(sub e)) are determined for the doublet states of AgO and AgS below approx. = 30000/cm. Large valence basis sets are employed in conjunction with relativistic effective core potentials (RECPs). Electron correlation is included using the modified coupled-pair functional (MCPF) and multireference configuration interaction (MRCI) methods. The A(sup 2)Sigma(sup +) - X(sup 2)Pi band system is found to occur in the near infrared (approx. = 9000/cm) and to be relatively weak, with a radiative lifetime of 900 μs for A(sup 2)Sigma(sup +) (upsilon = 0). The weakly bound C(sup 2)Pi state (our notation), the upper state of the blue system, is found to require high levels of theoretical treatment to determine a quantitatively accurate potential. The red system is assigned as a transition from the C(sup 2)Pi state to the previously unobserved A(sup 2)Sigma(sup +) state. Several additional transitions are identified that should be detectable experimentally. A more limited study is performed for the vertical excitation spectrum of AgS. In addition, a detailed all-electron study of the X(sup 2)Pi and A(sup 2)Sigma(sup +) states of AgO is carried out using large atomic natural orbital (ANO) basis sets. Our best calculated D(sub o) value for AgO is significantly less than the experimental value, which suggests that there may be some systematic error in the experimental determination.
NASA Astrophysics Data System (ADS)
Weichert, Christoph; Köchert, Paul; Schötka, Eugen; Flügge, Jens; Manske, Eberhard
2018-06-01
The uncertainty of a straightness interferometer is independent of the component used to introduce the divergence angle between the two probing beams, and is limited by three main error sources, which are linked to each other: their resolution, the influence of refractive index gradients and the topography of the straightness reflector. To identify the configuration with minimal uncertainties under laboratory conditions, a fully fibre-coupled heterodyne interferometer was successively equipped with three different wedge prisms, resulting in three different divergence angles (4°, 8° and 20°). To separate the error sources an independent reference with a smaller reproducibility is needed. Therefore, the straightness measurement capability of the Nanometer Comparator, based on a multisensor error separation method, was improved to provide measurements with a reproducibility of 0.2 nm. The comparison results revealed that the influence of the refractive index gradients of air did not increase with interspaces between the probing beams of more than 11.3 mm. Therefore, over a movement range of 220 mm, the lowest uncertainty was achieved with the largest divergence angle. The dominant uncertainty contribution arose from the mirror topography, which was additionally determined with a Fizeau interferometer. The measured topography agreed within ±1.3 nm with the systematic deviations revealed in the straightness comparison, resulting in an uncertainty contribution of 2.6 nm for the straightness interferometer.
Bartz, Daniel; Hatrick, Kerr; Hesse, Christian W; Müller, Klaus-Robert; Lemm, Steven
2013-01-01
Robust and reliable covariance estimates play a decisive role in financial and many other applications. An important class of estimators is based on factor models. Here, we show by extensive Monte Carlo simulations that covariance matrices derived from the statistical Factor Analysis model exhibit a systematic error, which is similar to the well-known systematic error of the spectrum of the sample covariance matrix. Moreover, we introduce the Directional Variance Adjustment (DVA) algorithm, which diminishes the systematic error. In a thorough empirical study for the US, European, and Hong Kong stock market we show that our proposed method leads to improved portfolio allocation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carson, M; Molineu, A; Taylor, P
Purpose: To analyze the most recent results of IROC Houston’s anthropomorphic H&N phantom to determine the nature of failing irradiations and the feasibility of altering pass/fail credentialing criteria. Methods: IROC Houston’s H&N phantom, used for IMRT credentialing for NCI-sponsored clinical trials, requires that an institution’s treatment plan must agree with measurement within 7% (TLD doses) and that ≥85% of pixels must pass 7%/4 mm gamma analysis. 156 phantom irradiations (November 2014 – October 2015) were re-evaluated using tighter criteria: 1) 5% TLD and 5%/4 mm, 2) 5% TLD and 5%/3 mm, 3) 4% TLD and 4%/4 mm, and 4) 3% TLD and 3%/3 mm. Failure/poor performance rates were evaluated with respect to individual film and TLD performance by location in the phantom. Overall poor phantom results were characterized qualitatively as systematic (dosimetric) errors, setup errors/positional shifts, global but non-systematic errors, and errors affecting only a local region. Results: The pass rate for these phantoms using current criteria is 90%. Substituting criteria 1-4 reduces the overall pass rate to 77%, 70%, 63%, and 37%, respectively. Statistical analyses indicated the probability of noise-induced TLD failure at the 5% criterion was <0.5%. Using criteria 1, TLD results were most often the cause of failure (86% failed TLD while 61% failed film), with most failures identified in the primary PTV (77% of cases). Other criteria posed similar results. Irradiations that failed from film only were overwhelmingly associated with phantom shifts/setup errors (≥80% of cases). Results failing criteria 1 were primarily diagnosed as systematic: 58% of cases. 11% were setup/positioning errors, 8% were global non-systematic errors, and 22% were local errors. Conclusion: This study demonstrates that 5% TLD and 5%/4 mm gamma criteria may be both practically and theoretically achievable. Further work is necessary to diagnose and resolve dosimetric inaccuracy in these trials, particularly for systematic dose errors. This work is funded by NCI Grant CA180803.
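The TLD part of such a re-grading reduces to a percent-difference test at each criterion level; a toy sketch follows (the doses are invented, not IROC measurements, and the film gamma analysis is not modelled):

```python
# Re-grade a single TLD measurement against successively tighter
# percent-dose criteria (illustrative doses, not IROC data).
def tld_pass(measured, planned, percent_criterion):
    return abs(measured - planned) / planned * 100.0 <= percent_criterion

planned_dose, measured_dose = 2.00, 2.11    # Gy; a 5.5% discrepancy
verdicts = {c: tld_pass(measured_dose, planned_dose, c) for c in (7, 5, 4, 3)}
# Such a case passes the current 7% criterion but would fail all tighter ones,
# mirroring how the study's pass rate drops as the criteria tighten.
```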
Convergence study of global meshing on enamel-cement-bracket finite element model
NASA Astrophysics Data System (ADS)
Samshuri, S. F.; Daud, R.; Rojan, M. A.; Basaruddin, K. S.; Abdullah, A. B.; Ariffin, A. K.
2017-09-01
This paper presents a meshing convergence analysis of a finite element (FE) model used to simulate enamel-cement-bracket fracture. Three different materials are involved in this interface-fracture study. The complex behaviour of interface fracture due to stress concentration is the reason a well-constructed meshing strategy is needed. In FE analysis, meshing size is a critical factor that influences the accuracy and computational time of the analysis. The convergence study used a meshing scheme involving a critical area (CA) and a non-critical area (NCA) to ensure that optimum meshing sizes were acquired for this FE model. For NCA meshing, the areas of interest were the back of the enamel, the bracket ligature groove and the bracket wing. For CA meshing, the areas of interest were the enamel area close to the cement layer, the cement layer itself and the bracket base. The constant NCA meshing sizes tested were 1 and 0.4; the constant CA meshing sizes tested were 0.4 and 0.1. Manipulated variables were randomly selected, subject to the rule that the NCA size must be larger than the CA size. This study employed first principal stresses because of the brittle failure nature of the materials used. The best meshing size was selected according to a convergence error analysis. Results show that constant-CA runs are more stable than constant-NCA runs. A constant CA mesh of 0.05 was then tested to probe the accuracy of smaller meshes; however, the result was unpromising, as the errors increased. Thus a constant CA of 0.1 with NCA meshes of 0.15 to 0.3 is the most stable configuration, as the errors in this region are lowest. A convergence test was conducted on three selected coarse, medium and fine meshes in the NCA range 0.15 to 0.3, with the CA mesh held constant at 0.1. At the coarse mesh of 0.3, the error is 0.0003%, against a 3% acceptable error. Hence the global meshing converges with meshing sizes of 0.1 (CA) and 0.15 (NCA) for this model.
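The convergence-error step can be sketched as the relative change of a monitored result between successive global refinements. The stress values below are invented, not the study's first-principal-stress data; only the 3% acceptance threshold follows the abstract:

```python
# Percent change of a monitored FE result between successive mesh
# refinements, each taken relative to the finer-mesh value.
def convergence_errors(results):
    return [abs(fine - coarse) / abs(fine) * 100.0
            for coarse, fine in zip(results, results[1:])]

peak_stress = [104.0, 101.0, 100.2, 100.1]   # coarse -> fine mesh (MPa, invented)
errors = convergence_errors(peak_stress)
converged = errors[-1] < 3.0                 # 3% acceptable-error threshold
```

When the last refinement changes the result by far less than the threshold, as here, the mesh is judged converged and the coarser of the two meshes can be used to save computation.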
NASA Astrophysics Data System (ADS)
Demissie, Taye B.
2017-11-01
The NMR chemical shifts and indirect spin-spin coupling constants of 12 molecules containing 29Si, 73Ge, 119Sn, and 207Pb [X(CCMe)4, Me2X(CCMe)2, and Me3XCCH] are presented. The results are obtained from non-relativistic as well as two- and four-component relativistic density functional theory (DFT) calculations. The scalar and spin-orbit relativistic contributions as well as the total relativistic corrections are determined. The main relativistic effect in these molecules is not due to spin-orbit coupling but rather to the scalar relativistic contraction of the s-shells. The correlation between the calculated and experimental indirect spin-spin coupling constants showed that the four-component relativistic DFT approach using the PBE0 hybrid exchange-correlation functional (built from the Perdew-Burke-Ernzerhof exchange and correlation functionals) gives results in good agreement with experimental values. The indirect spin-spin coupling constants calculated using the spin-orbit zeroth order regular approximation together with the hybrid PBE0 functional and the specially designed J-coupling (JCPL) basis sets are in good agreement with the results obtained from the four-component relativistic calculations. For the coupling constants involving the heavy atoms, the relativistic corrections are of the same order of magnitude as the non-relativistically calculated results. Based on the comparisons of the calculated results with available experimental values, the best values for all the chemical shifts, and for the indirect spin-spin coupling constants that have not been measured experimentally, are reported for all the molecules, in the hope that these accurate results will be used to benchmark future DFT calculations.
The present study also demonstrates that the four-component relativistic DFT method has reached a level of maturity that makes it a convenient and accurate tool to calculate indirect spin-spin coupling constants of "large" molecular systems involving heavy atoms.
Results from a NIST-EPA Interagency Agreement on Understanding Systematic Measurement Error in Thermal-Optical Analysis for PM Black Carbon Using Response Surfaces and Surface Confidence Intervals will be presented at the American Association for Aerosol Research (AAAR) 24th Annu...
Richards, Emilie J; Brown, Jeremy M; Barley, Anthony J; Chong, Rebecca A; Thomson, Robert C
2018-02-19
The use of large genomic datasets in phylogenetics has highlighted extensive topological variation across genes. Much of this discordance is assumed to result from biological processes. However, variation among gene trees can also be a consequence of systematic error driven by poor model fit, and the relative importance of biological versus methodological factors in explaining gene tree variation is a major unresolved question. Using mitochondrial genomes to control for biological causes of gene tree variation, we estimate the extent of gene tree discordance driven by systematic error and employ posterior prediction to highlight the role of model fit in producing this discordance. We find that the amount of discordance among mitochondrial gene trees is similar to the amount of discordance found in other studies that assume only biological causes of variation. This similarity suggests that the role of systematic error in generating gene tree variation is underappreciated and critical evaluation of fit between assumed models and the data used for inference is important for the resolution of unresolved phylogenetic questions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
San Fabián, J.; Omar, S.; García de la Vega, J. M., E-mail: garcia.delavega@uam.es
The effect of a fraction of Hartree-Fock exchange on the calculated spin-spin coupling constants involving fluorine through a hydrogen bond is analyzed in detail. Coupling constants calculated using wavefunction methods are revisited in order to obtain high-level calculations with the same basis set. Accurate MCSCF results are obtained using an additive approach. These constants and their contributions are used as a reference for density functional calculations. Within density functional theory, the Hartree-Fock exchange functional is split into short- and long-range parts using a modified version of the Coulomb-attenuating method with the SLYP functional as well as with the original B3LYP. Results support the difficulties of calculating hydrogen-bond coupling constants using density functional methods when fluorine nuclei are involved. Coupling constants are very sensitive to the Hartree-Fock exchange, and it seems that, contrary to other properties, it is important to include this exchange for short-range interactions. The best functionals are tested in two different groups of complexes: those related to anionic clusters of type [F(HF)n]− and those formed by difluoroacetylene and either one or two hydrogen fluoride molecules.
The accuracy of the measurements in Ulugh Beg's star catalogue
NASA Astrophysics Data System (ADS)
Krisciunas, K.
1992-12-01
The star catalogue compiled by Ulugh Beg and his collaborators in Samarkand (ca. 1437) is the only catalogue primarily based on original observations between the times of Ptolemy and Tycho Brahe. Evans (1987) has given convincing evidence that Ulugh Beg's star catalogue was based on measurements made with a zodiacal armillary sphere graduated to 15′, with interpolation to 0.2 units. He and Shevchenko (1990) were primarily interested in the systematic errors in ecliptic longitude. Shevchenko's analysis of the random errors was limited to the twelve zodiacal constellations. We have analyzed all 843 ecliptic longitudes and latitudes attributed to Ulugh Beg by Knobel (1917). This required multiplying all the longitude errors by the respective values of the cosine of the ecliptic latitudes. We find a random error of ±17.7′ for ecliptic longitude and ±16.5′ for ecliptic latitude. On the whole, the random errors are largest near the ecliptic, decreasing towards the ecliptic poles. For all of Ulugh Beg's measurements (excluding outliers) the mean systematic error is −10.8′ ± 0.8′ for ecliptic longitude and +7.5′ ± 0.7′ for ecliptic latitude, with the errors in the sense "computed minus Ulugh Beg". For the brighter stars (those designated alpha, beta, and gamma in the respective constellations), the mean systematic errors are −11.3′ ± 1.9′ for ecliptic longitude and +9.4′ ± 1.5′ for ecliptic latitude. Within the errors this matches the systematic error in both coordinates for alpha Vir. With greater confidence we may conclude that alpha Vir was the principal reference star in the catalogues of Ulugh Beg and Ptolemy. Evans, J. 1987, J. Hist. Astr. 18, 155. Knobel, E. B. 1917, Ulugh Beg's Catalogue of Stars, Washington, D. C.: Carnegie Institution. Shevchenko, M. 1990, J. Hist. Astr. 21, 187.
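The error statistics described above (scaling each longitude residual by the cosine of the ecliptic latitude, then taking the mean as the systematic error and the standard deviation as the random error, in the sense "computed minus catalogue") can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors' code; the function name and record layout are assumptions.

```python
import math
from statistics import mean, stdev

def catalogue_errors(records):
    """Systematic (mean) and random (standard-deviation) errors, in arcmin,
    of catalogue positions.  Each record is
    (computed_lon, catalogue_lon, computed_lat, catalogue_lat) in degrees.
    Longitude differences are scaled by cos(latitude) so that they measure
    true angular offsets on the sky."""
    dlon = [(c - o) * math.cos(math.radians(clat)) * 60.0
            for c, o, clat, olat in records]
    dlat = [(clat - olat) * 60.0 for c, o, clat, olat in records]
    return (mean(dlon), stdev(dlon)), (mean(dlat), stdev(dlat))
```

With the 843 Knobel positions as input, the means would reproduce the quoted −10.8′ and +7.5′ systematic offsets and the standard deviations the ±17.7′ and ±16.5′ random errors.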
Planning for Coupling Effects in Bitoric Mixed Astigmatism Ablative Treatments.
Alpins, Noel; Ong, James K Y; Stamatelatos, George
2017-08-01
To demonstrate how to determine the historical coupling adjustments of bitoric mixed astigmatism ablative treatments and how to use these historical coupling adjustments to adjust future bitoric treatments. The individual coupling adjustments of the myopic and hyperopic cylindrical components of a bitoric treatment were derived empirically from a retrospective study where the theoretical combined treatment effect on spherical equivalent was compared to the actual change in refractive spherical equivalent. The coupling adjustments that provided the best fit in both mean and standard deviation were determined to be the historical coupling adjustments. Theoretical treatments that incorporated the historical coupling adjustments were then calculated. The actual distribution of postoperative spherical equivalent errors was compared to the theoretically adjusted distribution. The study group comprised 242 eyes and included 118 virgin right eyes and 124 virgin left eyes of 155 individuals. For the laser used, the myopic coupling adjustment was -0.02 and the hyperopic coupling adjustment was 0.30, as derived by global nonlinear optimization. This implies that almost no adjustment of the myopic component of the bitoric treatment is necessary, but that the hyperopic component of the bitoric treatment generates a large amount of unintended spherical shift. The theoretically adjusted treatments targeted zero mean spherical equivalent error, as intended, and the distribution of the theoretical spherical equivalent errors had the same spread as the distribution of actual postoperative spherical equivalent errors. Bitoric mixed astigmatism ablative treatments may display non-trivial coupling effects. Historical coupling adjustments should be taken into consideration when planning mixed astigmatism treatments to improve surgical outcomes. [J Refract Surg. 2017;33(8):545-551.]. Copyright 2017, SLACK Incorporated.
Planck 2013 results. V. LFI calibration
NASA Astrophysics Data System (ADS)
Planck Collaboration; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Cappellini, B.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Kangaslahti, P.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leach, S.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. 
R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Osborne, S.; Paci, F.; Pagano, L.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, D.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Ricciardi, S.; Riller, T.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.
2014-11-01
We discuss the methods employed to photometrically calibrate the data acquired by the Low Frequency Instrument on Planck. Our calibration is based on a combination of the orbital dipole and the solar dipole, caused respectively by the motion of the Planck spacecraft with respect to the Sun and by the motion of the solar system with respect to the cosmic microwave background (CMB) rest frame. The latter provides a signal of a few mK with the same spectrum as the CMB anisotropies and is visible throughout the mission. In this data release we rely on the characterization of the solar dipole as measured by WMAP. We also present preliminary results (at 44 GHz only) on the study of the orbital dipole, which agree with the WMAP value of the solar system speed within our uncertainties. We compute the calibration constant for each radiometer roughly once per hour, in order to keep track of changes in the detectors' gain. Since non-idealities in the optical response of the beams proved to be important, we implemented a fast convolution algorithm which considers the full beam response in estimating the signal generated by the dipole. Moreover, in order to further reduce the impact of residual systematics due to sidelobes, we estimated time variations in the calibration constant of the 30 GHz radiometers (the ones with the largest sidelobes) using the signal of an internal reference load at 4 K instead of the CMB dipole. We have estimated the accuracy of the LFI calibration following two strategies: (1) we have run a set of simulations to assess the impact of statistical errors and systematic effects in the instrument and in the calibration procedure; and (2) we have performed a number of internal consistency checks on the data and on the brightness temperature of Jupiter. Errors in the calibration of this Planck/LFI data release are expected to be about 0.6% at 44 and 70 GHz, and 0.8% at 30 GHz.
Both these preliminary results at low and high ℓ are consistent with WMAP results within uncertainties and comparison of power spectra indicates good consistency in the absolute calibration with HFI (0.3%) and a 1.4σ discrepancy with WMAP (0.9%).
CALIBRATED ULTRA FAST IMAGE SIMULATIONS FOR THE DARK ENERGY SURVEY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruderer, Claudio; Chang, Chihway; Refregier, Alexandre
2016-01-20
Image simulations are becoming increasingly important in understanding the measurement process of the shapes of galaxies for weak lensing and the associated systematic effects. For this purpose we present the first implementation of the Monte Carlo Control Loops (MCCL), a coherent framework for studying systematic effects in weak lensing. It allows us to model and calibrate the shear measurement process using image simulations from the Ultra Fast Image Generator (UFig) and the image analysis software SExtractor. We apply this framework to a subset of the data taken during the Science Verification period (SV) of the Dark Energy Survey (DES). We calibrate the UFig simulations to be statistically consistent with one of the SV images, which covers ∼0.5 square degrees. We then perform tolerance analyses by perturbing six simulation parameters and study their impact on the shear measurement at the one-point level. This allows us to determine the relative importance of different parameters. For spatially constant systematic errors and point-spread function, the calibration of the simulation reaches the weak lensing precision needed for the DES SV survey area. Furthermore, we find a sensitivity of the shear measurement to the intrinsic ellipticity distribution, and an interplay between the magnitude-size and the pixel value diagnostics in constraining the noise model. This work is the first application of the MCCL framework to data and shows how it can be used to methodically study the impact of systematics on the cosmic shear measurement.
Interstate vibronic coupling constants between electronic excited states for complex molecules
NASA Astrophysics Data System (ADS)
Fumanal, Maria; Plasser, Felix; Mai, Sebastian; Daniel, Chantal; Gindensperger, Etienne
2018-03-01
In the construction of diabatic vibronic Hamiltonians for quantum dynamics in the excited-state manifold of molecules, the coupling constants are often extracted solely from information on the excited-state energies. Here, a new protocol is applied to get access to the interstate vibronic coupling constants at the time-dependent density functional theory level through the overlap integrals between excited-state adiabatic auxiliary wavefunctions. We discuss the advantages of such a method and its potential for future applications to complex systems, in particular those where multiple electronic states lie energetically close and interact. We apply the protocol to the study of prototype rhenium carbonyl complexes [Re(CO)3(N,N)(L)]n+ for which non-adiabatic quantum dynamics within the linear vibronic coupling model, including spin-orbit coupling, have been reported recently.
13Check_RNA: A tool to evaluate 13C chemical shifts assignments of RNA.
Icazatti, A A; Martin, O A; Villegas, M; Szleifer, I; Vila, J A
2018-06-19
Chemical shifts (CS) are an important source of structural information for macromolecules such as RNA. CS data for RNA are scarce, and the observed values are prone to errors arising from incorrect re-calibration or misassignment. Different groups have dedicated their efforts to correcting systematic CS errors in RNA. Despite this, there are no automated, freely available algorithms to check the assignment of RNA 13C CS before deposition to the BMRB, or to re-reference already-deposited CS affected by systematic errors. Based on an existing method, we have implemented an open-source Python module to correct systematic errors in the 13C CS (from here on 13Cexp) of RNAs and return the results in three formats, including NMR-STAR. This software is available on GitHub at https://github.com/BIOS-IMASL/13Check_RNA under an MIT license. Supplementary data are available at Bioinformatics online.
NASA Astrophysics Data System (ADS)
Krisciunas, Kevin
2007-12-01
A gnomon, or vertical pointed stick, can be used to determine the north-south direction at a site, as well as one's latitude. If one has accurate time and knows one's time zone, it is also possible to determine one's longitude. From observations on the first day of winter and the first day of summer one can determine the obliquity of the ecliptic. Since we can obtain accurate geographical coordinates from Google Earth or a GPS device, analysis of a set of shadow-length measurements can be used by students to learn about astronomical coordinate systems, time systems, systematic errors, and random errors. Systematic latitude errors of student datasets are typically 30 nautical miles (0.5 degree) or more, but with care one can achieve systematic and random errors of less than 8 nautical miles. One advantage of this experiment is that it can be carried out during the day. It is also possible to determine whether a student has fabricated data.
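The latitude determination works as the abstract implies: at local solar noon the Sun's altitude is arctan(gnomon height / shadow length), and its zenith distance equals latitude minus solar declination for a northern-hemisphere site with the Sun culminating in the south. A minimal sketch, with hypothetical names, assuming the solar declination for the date is looked up separately:

```python
import math

def latitude_from_noon_shadow(gnomon_height, shadow_length, solar_declination_deg):
    """Estimate site latitude (degrees) from the shortest (local-noon) shadow
    of a vertical gnomon, for a northern-hemisphere site with the Sun to the
    south at culmination.  The noon altitude is h = arctan(height / shadow);
    its zenith distance (90 - h) equals latitude - declination."""
    altitude = math.degrees(math.atan2(gnomon_height, shadow_length))
    return (90.0 - altitude) + solar_declination_deg
```

At an equinox (declination 0°) a shadow equal to the gnomon's height gives a 45° altitude and hence a latitude of 45°; the few-arcminute random errors quoted above come from shadow-tip blur and timing, not from this geometry.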
Ultrafast Multi-Level Logic Gates with Spin-Valley Coupled Polarization Anisotropy in Monolayer MoS2
Wang, Yu-Ting; Luo, Chih-Wei; Yabushita, Atsushi; Wu, Kaung-Hsiung; Kobayashi, Takayoshi; Chen, Chang-Hsiao; Li, Lain-Jong
2015-01-01
The inherent valley-contrasting optical selection rules for interband transitions at the K and K′ valleys in monolayer MoS2 have attracted extensive interest. Carriers in these two valleys can be selectively excited by circularly polarized optical fields. The comprehensive dynamics of spin-valley-coupled polarization and polarized excitons are completely resolved in this work. Here, we present a systematic study of the ultrafast dynamics of monolayer MoS2, including spin randomization, exciton dissociation, free-carrier relaxation, and electron-hole recombination, by helicity- and photon-energy-resolved transient spectroscopy. The time constants for these processes are 60 fs, 1 ps, 25 ps, and ~300 ps, respectively. The ultrafast dynamics of spin polarization, valley population, and exciton dissociation provides the desired information about the mechanism of radiationless transitions in various applications of 2D transition metal dichalcogenides. For example, spin-valley-coupled polarization provides a promising way to build optically driven, valley-selective ultrafast valleytronics at room temperature. Therefore, a full understanding of the ultrafast dynamics in MoS2 is expected to provide important fundamental and technological perspectives. PMID:25656222
B*Bπ coupling using relativistic heavy quarks
Flynn, J. M.; Fritzsch, P.; Kawanai, T.; ...
2016-01-27
We report on a calculation of the B*Bπ coupling in lattice QCD. The strong matrix element ⟨Bπ|B*⟩ is directly related to the leading-order low-energy constant in heavy-meson chiral perturbation theory (HMχPT) for B mesons. We carry out our calculation directly at the b-quark mass using a non-perturbatively tuned clover action that controls discretization effects of order |p|a and (ma)^n for all n. Our analysis is performed on RBC/UKQCD gauge configurations using domain-wall fermions and the Iwasaki gauge action at two lattice spacings, a⁻¹ = 1.729(25) GeV and a⁻¹ = 2.281(28) GeV, and unitary pion masses down to 290 MeV. We achieve good statistical precision and control all systematic uncertainties, giving a final result for the HMχPT coupling g_b = 0.56(3)_stat(7)_sys in the continuum and at the physical light-quark masses. Furthermore, this is the first calculation performed directly at the physical b-quark mass, and the result lies in the region one would expect from an interpolation between previous results at the charm mass and at the static point.
Detecting and overcoming systematic errors in genome-scale phylogenies.
Rodríguez-Ezpeleta, Naiara; Brinkmann, Henner; Roure, Béatrice; Lartillot, Nicolas; Lang, B Franz; Philippe, Hervé
2007-06-01
Genome-scale data sets result in an enhanced resolution of the phylogenetic inference by reducing stochastic errors. However, there is also an increase of systematic errors due to model violations, which can lead to erroneous phylogenies. Here, we explore the impact of systematic errors on the resolution of the eukaryotic phylogeny using a data set of 143 nuclear-encoded proteins from 37 species. The initial observation was that, despite the impressive amount of data, some branches had no significant statistical support. To demonstrate that this lack of resolution is due to a mutual annihilation of phylogenetic and nonphylogenetic signals, we created a series of data sets with slightly different taxon sampling. As expected, these data sets yielded strongly supported but mutually exclusive trees, thus confirming the presence of conflicting phylogenetic and nonphylogenetic signals in the original data set. To decide on the correct tree, we applied several methods expected to reduce the impact of some kinds of systematic error. Briefly, we show that (i) removing fast-evolving positions, (ii) recoding amino acids into functional categories, and (iii) using a site-heterogeneous mixture model (CAT) are three effective means of increasing the ratio of phylogenetic to nonphylogenetic signal. Finally, our results allow us to formulate guidelines for detecting and overcoming phylogenetic artefacts in genome-scale phylogenetic analyses.
CORRELATED ERRORS IN EARTH POINTING MISSIONS
NASA Technical Reports Server (NTRS)
Bilanow, Steve; Patt, Frederick S.
2005-01-01
Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. 
Some error effects will not be obvious from attitude sensor measurement residuals, so some independent checks using imaging sensors are essential and derived science instrument attitude measurements can prove quite valuable in assessing the attitude accuracy.
Ban, Ilija; Troelsen, Anders; Kristensen, Morten Tange
2016-10-01
The Constant score (CS) has been the primary endpoint in most studies on clavicle fractures. However, the CS was not developed to assess patients with clavicle fractures. Our aim was to examine inter-rater reliability and agreement of the CS in patients with clavicle fractures. The secondary aim was to estimate the correlation between the CS and the Disabilities of the Arm, Shoulder and Hand score and the internal consistency of the 2 scores. Based on a sample-size calculation, 36 patients (31 male and 5 female patients; mean age, 41.3 years) with clavicle fractures underwent standardized CS assessment at a mean of 6.8 weeks (SD, 1.0 weeks) after injury. Reliability and agreement of the CS were determined by 2 raters. The intraclass correlation coefficient (ICC2,1), standard error of measurement, minimal detectable change, Cronbach α coefficient, and Pearson correlation coefficient were estimated. Inter-rater reliability of the total CS was excellent (intraclass correlation coefficient, 0.94; 95% confidence interval, 0.88-0.97), with no systematic difference between the 2 raters (P = .75). The standard error of measurement (measurement error at the group level) was 4.9, whereas the minimal detectable change (smallest change needed to indicate a real change for an individual) was 13.6 CS points. The internal consistency of the 10 CS items was good, with a Cronbach α of .85, and we found a strong correlation (r = -0.92) between the CS and Disabilities of the Arm, Shoulder and Hand score. The CS was found to be reliable for assessing patients with clavicle fractures, especially at the group level. With high inter-rater reliability and agreement, in addition to good internal consistency, the standardized CS used in this study can be used for comparison of results from different settings. Copyright © 2016 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
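The reported SEM and MDC values are consistent with the standard formulas SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM; that the authors used exactly these variants is an assumption, and the function names below are invented. A minimal sketch:

```python
import math

def sem_from_icc(sd, icc):
    """Standard error of measurement from the between-subject SD and the
    reliability coefficient: SEM = SD * sqrt(1 - ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem):
    """Minimal detectable change at 95% confidence for a test-retest
    difference: MDC = 1.96 * sqrt(2) * SEM."""
    return 1.96 * math.sqrt(2.0) * sem
```

With the reported SEM of 4.9 CS points, `mdc95(4.9)` returns ≈13.6, matching the abstract's minimal detectable change.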
Passive imaging of hydrofractures in the South Belridge diatomite
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilderton, D.C.; Patzek, T.W.; Rector, J.W.
1996-03-01
The authors present the results of a seismic analysis of two hydrofractures spanning the entire diatomite column (1,110--1,910 ft or 338--582 m) in Shell's Phase 2 steam drive pilot in South Belridge, California. These hydrofractures were induced at two depths (1,110--1,460 and 1,560--1,910 ft) and imaged passively using the seismic energy released during fracturing. The arrivals of shear waves from the cracking rock (microseismic events) were recorded at a 1 ms sampling rate by 56 geophones in three remote observation wells, resulting in 10 GB of raw data. These arrival times were then inverted for the event locations, from which the hydrofracture geometry was inferred. A five-dimensional conjugate-gradient algorithm with a depth-dependent, but otherwise constant, shear wave velocity model (CVM) was developed for the inversions. To validate the CVM, they created a layered shear wave velocity model of the formation and used it to calculate synthetic arrival times from known locations chosen at various depths along the estimated fracture plane. These arrival times were then inverted with the CVM and the calculated locations compared with the known ones, quantifying the systematic error associated with the assumption of constant shear wave velocity. They also performed Monte Carlo sensitivity analyses on the synthetic arrival times to account for all other random errors that exist in field data. After determining the limitations of the inversion algorithm, they hand-picked the shear wave arrival times for both hydrofractures and inverted them with the CVM.
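The location step can be illustrated with a much simpler scheme than the authors' five-dimensional conjugate-gradient inversion: under a constant shear-wave velocity, a grid search over candidate hypocenters, with the origin time solved in closed form at each trial point, minimizes the same arrival-time residuals. A hypothetical sketch (names, geometry, and the brute-force search are invented for illustration):

```python
import itertools
import math

def locate_event(geophones, arrivals, v_s, grid):
    """Locate a microseismic event by grid search.  For each trial location,
    the best-fitting origin time is the mean of (arrival - traveltime); the
    location minimizing the RMS residual wins.
    geophones: list of (x, y, z); arrivals: observed S-wave arrival times;
    v_s: constant shear-wave velocity; grid: iterable of candidate (x, y, z)."""
    best = None
    for gx, gy, gz in grid:
        tt = [math.dist((gx, gy, gz), g) / v_s for g in geophones]
        t0 = sum(a - t for a, t in zip(arrivals, tt)) / len(arrivals)
        rms = math.sqrt(sum((a - (t0 + t)) ** 2
                            for a, t in zip(arrivals, tt)) / len(arrivals))
        if best is None or rms < best[0]:
            best = (rms, (gx, gy, gz), t0)
    return best  # (rms misfit, location, origin time)
```

In practice a depth-dependent velocity model and a gradient-based solver, as in the paper, replace the constant velocity and the brute-force grid; the validation against a layered model quantifies exactly the systematic error this simplification introduces.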
Local systematic differences in 2MASS positions
NASA Astrophysics Data System (ADS)
Bustos Fierro, I. H.; Calderón, J. H.
2018-01-01
We have found that positions in the 2MASS All-Sky Catalog of Point Sources show local systematic differences with characteristic length-scales of ~5 to ~8 arcminutes when compared with several catalogs. We have observed that when 2MASS positions are used in the computation of proper motions, the mentioned systematic differences cause systematic errors in the resulting proper motions. We have developed a method to locally rectify 2MASS with respect to UCAC4 in order to diminish the systematic differences between these catalogs. The 2MASS catalog rectified with the proposed method can be regarded as an extension of UCAC4 for astrometry, with an accuracy of ~90 mas in its positions and negligible systematic errors. We also show that the use of these rectified positions removes the observed systematic pattern in proper motions derived from original 2MASS positions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keeling, V; Jin, H; Hossain, S
2014-06-15
Purpose: To evaluate setup accuracy and quantify individual systematic and random errors for the various hardware and software components of the frameless 6D BrainLAB ExacTrac system. Methods: 35 patients with cranial lesions, some with multiple isocenters (50 total lesions treated in 1, 3, or 5 fractions), were investigated. All patients were simulated with a rigid head-and-neck mask and the BrainLAB localizer. CT images were transferred to the iPlan treatment planning system, where optimized plans were generated in the stereotactic reference frame based on the localizer. The patients were set up initially with the infrared (IR) positioning ExacTrac system. Stereoscopic X-ray images (XC: X-ray correction) were registered to their corresponding digitally reconstructed radiographs, based on bony-anatomy matching, to calculate 6D translational and rotational (lateral, longitudinal, vertical, pitch, roll, yaw) shifts. XC combines the systematic errors of the mask, localizer, image registration, frame, and IR. If shifts were below tolerance (0.7 mm translational and 1 degree rotational), treatment was initiated; otherwise corrections were applied and additional X-rays were acquired to verify patient position (XV: X-ray verification). Statistical analysis was used to extract systematic and random errors of the different components of the 6D ExacTrac system and evaluate the cumulative setup accuracy. Results: Mask systematic errors (translational; rotational) were the largest and varied from one patient to another in the range (-15 to 4 mm; -2.5 to 2.5 degree), obtained from the mean of XC for each patient. Setup uncertainty in IR positioning (0.97, 2.47, 1.62 mm; 0.65, 0.84, 0.96 degree) was extracted from the standard deviation of XC. Combined systematic errors of the frame and localizer (0.32, -0.42, -1.21 mm; -0.27, 0.34, 0.26 degree) were extracted from the mean of means of the XC distributions.
Final patient setup uncertainty was obtained from the standard deviations of XV (0.57, 0.77, 0.67 mm; 0.39, 0.35, 0.30 degree). Conclusion: Statistical analysis was used to calculate cumulative and individual systematic errors from the different hardware and software components of the 6D ExacTrac system. Patients were treated with cumulative errors (<1 mm, <1 degree) under XV image guidance.
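The decomposition used above follows the standard convention: each patient's systematic error is the mean of that patient's shifts, the group systematic error is the mean of those means, the spread of the patient means quantifies the systematic component, and the typical within-patient standard deviation quantifies the random component. A generic sketch of that convention (not the authors' code; the function name and data layout are invented):

```python
from statistics import mean, stdev

def setup_error_components(shifts_by_patient):
    """Decompose setup-correction shifts along one axis (mm or degrees)
    into error components.  shifts_by_patient maps a patient ID to the
    list of that patient's measured shifts over all fractions."""
    patient_means = [mean(s) for s in shifts_by_patient.values()]
    group_systematic = mean(patient_means)            # overall offset
    sigma_systematic = stdev(patient_means)           # spread of per-patient offsets
    random_error = mean(stdev(s)                      # typical within-patient spread
                        for s in shifts_by_patient.values() if len(s) > 1)
    return group_systematic, sigma_systematic, random_error
```

Applied per axis to the XC distributions, this yields numbers analogous to the per-patient mask offsets, the mean-of-means frame/localizer error, and the IR random uncertainty quoted in the abstract.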
Fred L. Tobiason; Richard W. Hemingway
1994-01-01
A GMMX conformational search routine gives a family of conformations that reflects the Boltzmann-averaged heterocyclic ring conformation as evidenced by accurate prediction of all three coupling constants observed for tetra-O-methyl-(+)-catechin.
Quantum Error Correction with a Globally-Coupled Array of Neutral Atom Qubits
2013-02-01
We developed and implemented an array of neutral atom qubits in optical traps, loaded from a magneto-optical trap located at the center of the science cell, for studies of quantum error correction.
NASA Astrophysics Data System (ADS)
Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.
2017-11-01
Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and better dose conformality at lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated owing to their large-scale effects on the CT image, such as cupping and streaks. In comparison, the effects of weakly correlated stochastic noise are more insidious, and less attention is paid to them, partly because of the common belief that they contribute only to proton range uncertainties and not to systematic errors, thanks to averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) is highlighted here and proves not to be negligible compared with the 3.5% uncertainty reference value used for safety-margin design. Specifically, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of systematic proton range error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and of the range in the continuous slowing down approximation (CSDA) have been derived analytically for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
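The mechanism is easy to reproduce numerically: at an angular point of a piecewise-linear HU-to-RSP curve, zero-mean noise produces a nonzero mean RSP error, because the expectation of a function with a kink is not the function of the expectation. A Monte Carlo sketch with a hypothetical two-segment curve (slopes and noise level invented for illustration, not taken from the paper):

```python
import random

def rsp(hu):
    """Hypothetical two-segment HU-to-RSP calibration curve with an
    angular point at HU = 0, where the slope changes."""
    return 1.0 + (0.001 * hu if hu < 0 else 0.0011 * hu)

def mean_rsp_bias(hu_true, noise_sd, n=200_000, seed=1):
    """Monte Carlo estimate of the systematic RSP error induced by
    zero-mean Gaussian CT noise: E[rsp(HU + noise)] - rsp(HU)."""
    rng = random.Random(seed)
    acc = sum(rsp(hu_true + rng.gauss(0.0, noise_sd)) for _ in range(n))
    return acc / n - rsp(hu_true)
```

Far from the kink the curve is locally linear and the bias averages to zero; at the kink the bias is strictly positive here because the slope increases across it (local convexity), which is the zero-mean-noise-induced systematic error the abstract describes.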
Structure and NMR spectra of some [2.2]paracyclophanes. The dilemma of [2.2]paracyclophane symmetry.
Dodziuk, Helena; Szymański, Sławomir; Jaźwiński, Jarosław; Ostrowski, Maciej; Demissie, Taye Beyene; Ruud, Kenneth; Kuś, Piotr; Hopf, Henning; Lin, Shaw-Tao
2011-09-29
Density functional theory (DFT) quantum chemical calculations of the structure and NMR parameters for the highly strained hydrocarbon [2.2]paracyclophane 1 and its three derivatives are presented. The calculated NMR parameters are compared with the experimental ones. By least-squares fitting of the (1)H spectra, almost all J(HH) coupling constants could be obtained with high accuracy. Theoretical vicinal J(HH) couplings in the aliphatic bridges, calculated using different basis sets (6-311G(d,p) and Huz-IV), reproduce the experimental values with essentially the same root-mean-square (rms) error of about 1.3 Hz, regardless of the basis set used. These discrepancies could be in part due to a considerable impact of rovibrational effects on the observed J(HH) couplings, since the latter show a measurable dependence on temperature. Because of the lasting literature controversies concerning the symmetry of parent compound 1, D(2h) versus D(2), a critical analysis of the relevant literature data is carried out. The symmetry issue is prone to confusion because, according to some literature claims, the two hypothetical enantiomeric D(2) structures of 1 could be separated by a very low energy barrier that would explain the occurrence of rovibrational effects on the observed vicinal J(HH) couplings. However, the D(2h) symmetry of 1 with a flat energy minimum could also account for these effects.
NASA Astrophysics Data System (ADS)
Wang, Da-Lin; Qi, Hong
Semi-transparent materials (such as IR optical windows) are widely used for heat protection and transfer, temperature and image measurement, and safety in energy, space, military, and information technology applications. Examples include ceramic coatings for thermal barriers of spacecraft or gas turbine blades, and thermal-image observation in extreme or hazardous environments. In this paper, a coupled conduction and radiation heat transfer model is established to describe the temperature distribution in a semi-transparent thermal-barrier medium within an aerothermal environment. To investigate this numerical model, a semi-transparent sample with a black coating was considered, and its photothermal properties were measured. The Finite Volume Method (FVM) was then used to solve the coupled model, and the temperature responses of the sample surfaces were obtained. An experimental study was also carried out: aerodynamic heat flux was simulated by an electrical heater, and two experimental cases were designed according to the duration of aerodynamic heating. In the first case, the heater irradiates one surface of the sample continuously until the temperature of the other surface becomes constant; in the second case, the heater operates for only 130 s. The surface temperature responses for both cases were recorded. Finally, the FVM model of the coupled conduction-radiation heat transfer was validated against the experimental data, with a relative error of less than 5%.
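A minimal explicit finite-volume sketch of the conduction part of such a model (radiation omitted; all material properties and the heater flux are hypothetical, illustrative values) shows how the surface temperature response to a constant heat flux is obtained:

```python
import numpy as np

# Minimal explicit 1-D finite-volume conduction sketch (illustrative only;
# the paper couples conduction with a full radiative-transfer model).
N, L = 50, 0.01                    # cells, slab thickness [m]
dx = L / N
k, rho, cp = 1.5, 3000.0, 800.0    # hypothetical ceramic properties
alpha = k / (rho * cp)
dt = 0.4 * dx**2 / alpha           # explicit stability limit (Fo <= 0.5)

T = np.full(N, 300.0)              # initial temperature [K]
q_in = 5e4                         # hypothetical aerodynamic heat flux [W/m^2]

for _ in range(2000):
    flux = -k * np.diff(T) / dx    # conductive flux at interior faces
    dTdt = np.zeros(N)
    dTdt[1:-1] = -(flux[1:] - flux[:-1]) / dx / (rho * cp)
    # heated front face: imposed flux in, conduction out
    dTdt[0] = (q_in + k * (T[1] - T[0]) / dx) / (rho * cp * dx)
    # back face: adiabatic except for conduction from the interior
    dTdt[-1] = (k * (T[-2] - T[-1]) / dx) / (rho * cp * dx)
    T += dt * dTdt

print(f"front {T[0]:.1f} K, back {T[-1]:.1f} K")
```

The front surface heats first and the back surface lags, which is the qualitative response the experiment records.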
Medication errors in the Middle East countries: a systematic review of the literature.
Alsulami, Zayed; Conroy, Sharon; Choonara, Imti
2013-04-01
Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, PubMed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20%) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1% to 90.5% for prescribing and from 9.4% to 80% for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15% to 34.8% of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors.
Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality. Educational programmes on drug therapy for doctors and nurses are urgently needed.
On-board error correction improves IR earth sensor accuracy
NASA Astrophysics Data System (ADS)
Alex, T. K.; Kasturirangan, K.; Shrivastava, S. K.
1989-10-01
Infra-red earth sensors are used in satellites for attitude sensing. Their accuracy is limited by systematic and random errors. This paper analyzes the sources of error in a scanning infra-red earth sensor. The systematic errors arising from the seasonal variation of infra-red radiation, the oblate shape of the earth, the ambient temperature of the sensor, and changes in scan/spin rates are analyzed. Simple relations for on-board correction of these errors are derived using least-squares curve fitting. Random errors arising from detector and amplifier noise, alignment instability, and localized radiance anomalies are analyzed and possible correction methods are suggested. Sun and Moon interference with earth sensor performance has seriously affected a number of missions. The on-board processor detects Sun/Moon interference and corrects the errors on board. An eightfold improvement in sensing accuracy is achievable, comparable with ground-based post-facto attitude refinement.
A Renormalisation Group Method. V. A Single Renormalisation Group Step
NASA Astrophysics Data System (ADS)
Brydges, David C.; Slade, Gordon
2015-05-01
This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|^4 model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.
Hydrologic Design in the Anthropocene
NASA Astrophysics Data System (ADS)
Vogel, R. M.; Farmer, W. H.; Read, L.
2014-12-01
In an era dubbed the Anthropocene, the natural world is being transformed by a myriad of human influences. As anthropogenic impacts permeate hydrologic systems, hydrologists are challenged to fully account for such changes and develop new methods of hydrologic design. Deterministic watershed models (DWMs), which can account for the impacts of changes in land use, climate and infrastructure, are becoming increasingly popular for the design of flood and/or drought protection measures. As with all models that are calibrated to existing datasets, DWMs are subject to model error or uncertainty. In practice, the model error component of DWM predictions is typically ignored, yet DWM simulations which ignore model error produce output which cannot reproduce the statistical properties of the observations they are intended to replicate. In the context of hydrologic design, we demonstrate how ignoring model error can lead to systematic downward bias in flood quantiles, upward bias in drought quantiles and upward bias in water supply yields. By reincorporating model error, we document how DWMs can be used to generate results that mimic actual observations and preserve their statistical behavior. In addition to the use of DWMs for improved predictions in a changing world, improved communication of risk and reliability is also needed. Traditional statements of risk and reliability in hydrologic design have been characterized by return periods, but such statements often assume that the annual probability of experiencing a design event remains constant throughout the project horizon. We document the general impact of nonstationarity on the average return period and reliability in the context of hydrologic design. Our analyses reveal that return periods do not provide meaningful expressions of the likelihood of future hydrologic events.
Instead, knowledge of system reliability over future planning horizons can more effectively prepare society and communicate the likelihood of future hydrologic events of interest.
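The downward bias in flood quantiles from ignoring model error can be sketched with synthetic data (all distributions and parameter values below are illustrative, not from the study): the deterministic prediction alone has less variance than the observations, so its upper quantiles fall short.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Synthetic "observed" annual maxima: deterministic model signal plus
# a multiplicative model-error term (values purely illustrative).
signal = rng.lognormal(mean=5.0, sigma=0.3, size=n)   # DWM prediction
model_error = rng.normal(0.0, 0.25, size=n)           # error in log space
observed = signal * np.exp(model_error)

q = 0.99  # "100-year" flood quantile
flood_dwm = np.quantile(signal, q)     # model error ignored
flood_obs = np.quantile(observed, q)   # model error reincorporated
print(f"DWM-only quantile {flood_dwm:.0f} < observed quantile {flood_obs:.0f}")
```

Dropping the error term deflates the tail, which is exactly the systematic downward bias in design flood estimates the abstract describes.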
Coupled-cluster based basis sets for valence correlation calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Claudino, Daniel; Bartlett, Rodney J., E-mail: bartlett@qtp.ufl.edu; Gargano, Ricardo
Novel basis sets are generated that target the description of valence correlation in atoms H through Ar. The new contraction coefficients are obtained according to the Atomic Natural Orbital (ANO) procedure from CCSD(T) (coupled-cluster singles and doubles with perturbative triples correction) density matrices starting from the primitive functions of Dunning et al. [J. Chem. Phys. 90, 1007 (1989); ibid. 98, 1358 (1993); ibid. 100, 2975 (1993)] (correlation consistent polarized valence X-tuple zeta, cc-pVXZ). The exponents of the primitive Gaussian functions are subject to uniform scaling in order to ensure satisfaction of the virial theorem for the corresponding atoms. These new sets, named ANO-VT-XZ (Atomic Natural Orbital Virial Theorem X-tuple Zeta), have the same number of contracted functions as their cc-pVXZ counterparts in each subshell. The performance of these basis sets is assessed by the evaluation of the contraction errors in four distinct computations: correlation energies in atoms, probing the density in different regions of space via ⟨r^n⟩ (−3 ≤ n ≤ 3) in atoms, correlation energies in diatomic molecules, and the quality of fitting potential energy curves as measured by spectroscopic constants. All energy calculations with ANO-VT-QZ have contraction errors within "chemical accuracy" of 1 kcal/mol, which is not true for cc-pVQZ, suggesting some improvement compared to the correlation consistent series of Dunning and co-workers.
Contribution of relativistic quantum chemistry to electron’s electric dipole moment for CP violation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abe, M., E-mail: minoria@tmu.ac.jp; Gopakumar, G., E-mail: gopakumargeetha@gmail.com; Hada, M., E-mail: hada@tmu.ac.jp
The search for the electric dipole moment of the electron (eEDM) is important because it is a probe of Charge Conjugation-Parity (CP) violation. It can also shed light on new physics beyond the standard model. It is not possible to measure the eEDM directly. However, the interaction energy involving the effective electric field (E_eff) acting on an electron in a molecule and the eEDM can be measured. This quantity can be combined with E_eff, which is calculated by relativistic molecular orbital theory, to determine the eEDM. Previous calculations of E_eff were not sufficiently accurate in their treatment of relativistic or electron correlation effects. We therefore developed a new method to calculate E_eff based on a four-component relativistic coupled-cluster theory. We demonstrated our method for the YbF molecule, one of the promising candidates for the eEDM search. Using a very large basis set and without freezing any core orbitals, we obtain a value of 23.1 GV/cm for E_eff in YbF with an estimated error of less than 10%. The error is assessed by comparison of our calculations and experiments for two properties relevant for E_eff: the permanent dipole moment and the hyperfine coupling constant. Our method paves the way to calculate properties of various kinds of molecules which can be described by a single-reference wave function.
Patient disclosure of medical errors in paediatrics: A systematic literature review
Koller, Donna; Rummens, Anneke; Le Pouesard, Morgane; Espin, Sherry; Friedman, Jeremy; Coffey, Maitreya; Kenneally, Noah
2016-01-01
Medical errors are common within paediatrics; however, little research has examined the process of disclosing medical errors in paediatric settings. The present systematic review of current research and policy initiatives examined evidence regarding the disclosure of medical errors involving paediatric patients. Peer-reviewed research from a range of scientific journals from the past 10 years is presented, and an overview of Canadian and international policies regarding disclosure in paediatric settings is provided. The purpose of the present review was to scope the existing literature and policy, and to synthesize findings into an integrated and accessible report. Future research priorities and policy implications are then identified. PMID:27429578
System calibration method for Fourier ptychographic microscopy
NASA Astrophysics Data System (ADS)
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. It is therefore difficult to identify the dominant error source from such degraded reconstructions without prior knowledge. In addition, in real situations the systematic error is generally a mixture of various error sources that cannot be separated, owing to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective. It is based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, and involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved both in simulations and in experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic.
Li, T. S.; DePoy, D. L.; Marshall, J. L.; ...
2016-06-01
Here, we report that meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%–2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. In conclusion, the residual after correction is less than 0.3%. 
Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
NASA Astrophysics Data System (ADS)
Abbiati, Giuseppe; La Salandra, Vincenzo; Bursi, Oreste S.; Caracoglia, Luca
2018-02-01
Successful online hybrid (numerical/physical) dynamic substructuring simulations have shown their potential in enabling realistic dynamic analysis of almost any type of non-linear structural system (e.g., an as-built/isolated viaduct, a petrochemical piping system subjected to non-stationary seismic loading, etc.). Moreover, owing to faster and more accurate testing equipment, a number of different offline experimental substructuring methods, operating both in the time domain (e.g. impulse-based substructuring) and in the frequency domain (i.e. Lagrange multiplier frequency-based substructuring), have been employed in mechanical engineering to examine dynamic substructure coupling. Numerous studies have dealt with the above-mentioned methods and with consequent uncertainty propagation issues, associated either with experimental errors or with modelling assumptions. Nonetheless, only a limited number of publications have systematically cross-examined the performance of the various Experimental Dynamic Substructuring (EDS) methods and the possibility of exploiting them in a complementary way to expedite a hybrid experiment/numerical simulation. From this perspective, this paper performs a comparative uncertainty propagation analysis of three EDS algorithms for coupling physical and numerical subdomains with a dual assembly approach based on localized Lagrange multipliers. The main results and comparisons are based on a series of Monte Carlo simulations carried out on five-DoF linear/non-linear chain-like systems that include typical aleatoric uncertainties arising from measurement errors and excitation loads. In addition, we propose a new Composite-EDS (C-EDS) method to fuse both online and offline algorithms into a unique simulator. Capitalizing on the results of a more complex case study, a coupled isolated tank-piping system, we provide a feasible way to employ the C-EDS method when nonlinearities and multi-point constraints are present in the emulated system.
Hadronic Contribution to Muon g-2 with Systematic Error Correlations
NASA Astrophysics Data System (ADS)
Brown, D. H.; Worstell, W. A.
1996-05-01
We have performed a new evaluation of the hadronic contribution to a_μ = (g-2)/2 of the muon with explicit correlations of systematic errors among the experimental data on σ( e^+e^- → hadrons ). Our result for the lowest order hadronic vacuum polarization contribution is a_μ^hvp = 701.7(7.6)(13.4) × 10^-10, where the total systematic error contributions from below and above √s = 1.4 GeV are (12.5) × 10^-10 and (4.8) × 10^-10 respectively. Therefore new measurements of σ( e^+e^- → hadrons ) below 1.4 GeV in Novosibirsk, Russia can significantly reduce the total error on a_μ^hvp. This contrasts with a previous evaluation which indicated that the dominant error is due to the energy region above 1.4 GeV. The latter analysis correlated systematic errors at each energy point separately but not across energy ranges as we have done. Combination with higher order hadronic contributions is required for a new measurement of a_μ at Brookhaven National Laboratory to be sensitive to electroweak and possibly supergravity and muon substructure effects. Our analysis may also be applied to calculations of hadronic contributions to the running of α(s) at √s = M_Z, the hyperfine structure of muonium, and the running of sin^2 θ_W in Møller scattering. The analysis of the new Novosibirsk data will also be given.
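As a sketch of the error combination: adding the two quoted systematic contributions (12.5 and 4.8, in units of 10^-10) in quadrature under an assumed zero cross-range correlation reproduces the quoted total of 13.4, while a nonzero correlation between energy ranges, which is what the analysis tracks, enlarges it. The split of the central value between the two ranges below is hypothetical.

```python
import numpy as np

# Hypothetical contributions to a_mu^hvp from two energy ranges
# (units of 1e-10); the errors are the quoted systematics.
contrib = np.array([600.0, 101.7])   # illustrative split of 701.7
sys_err = np.array([12.5, 4.8])

def total_error(rho):
    """Combined systematic error for an assumed cross-range correlation rho."""
    corr = np.array([[1.0, rho], [rho, 1.0]])
    cov = np.outer(sys_err, sys_err) * corr
    return float(np.sqrt(cov.sum()))

print(f"uncorrelated: {total_error(0.0):.1f}, fully correlated: {total_error(1.0):.1f}")
# uncorrelated quadrature gives 13.4, matching the quoted total error
```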
Small, J R
1993-01-01
This paper studies the effects of experimental error on the estimated values of flux control coefficients obtained using specific inhibitors. Two possible techniques for analysing the experimental data are compared: a simple extrapolation method (the so-called graph method) and a non-linear function fitting method. For these techniques, the sources of systematic error are identified and the effects of systematic and random errors are quantified, using both statistical analysis and numerical computation. It is shown that the graph method is very sensitive to random errors and that, under all conditions studied, the fitting method outperformed the graph method, even under conditions where the assumptions underlying the fitted function do not hold. Possible ways of designing experiments to minimize the effects of experimental errors are analysed and discussed. PMID:8257434
Efficient calculation of higher-order optical waveguide dispersion.
Mores, J A; Malheiros-Silveira, G N; Fragnito, H L; Hernández-Figueroa, H E
2010-09-13
An efficient numerical strategy to compute the higher-order dispersion parameters of optical waveguides is presented. For the first time to our knowledge, a systematic study is made of the errors involved in the numerical calculation of higher-order dispersion, showing that the present strategy can accurately model those parameters. The strategy combines a full-vectorial finite element modal solver and a proper finite difference differentiation algorithm. Its performance has been carefully assessed through the analysis of several key geometries. In addition, the optimization of those higher-order dispersion parameters can also be carried out by coupling a genetic algorithm to the present scheme, as shown here through the design of a photonic crystal fiber suitable for parametric amplification applications.
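The finite-difference differentiation step can be sketched as follows. Here an analytic stand-in replaces the modal solver's output β(ω), so the recovered dispersion orders β₂ and β₃ can be checked against known values; the stencils, step size, and test coefficients are illustrative, not the paper's.

```python
import numpy as np

# Finite-difference extraction of dispersion orders beta_n = d^n beta / d omega^n
# from sampled propagation constants. beta(omega) is an analytic stand-in
# for a full-vectorial modal solver's output.
c = 299792458.0
omega0 = 1.2e15                              # expansion frequency [rad/s]
b2_true, b3_true = 2.0e-26, 1.0e-40          # illustrative dispersion values

def beta(w):
    dw = w - omega0
    return w / c + 0.5 * b2_true * dw**2 + b3_true * dw**3 / 6.0

h = 1e12                                     # frequency step [rad/s]
w = omega0 + h * np.arange(-3, 4)
b = beta(w)                                  # "modal solver" samples; b[3] is center

# Central-difference stencils (2nd-order accurate)
beta2 = (b[4] - 2*b[3] + b[2]) / h**2
beta3 = (b[5] - 2*b[4] + 2*b[2] - b[1]) / (2 * h**3)
print(beta2, beta3)
```

In practice the step size must balance truncation error against the numerical noise of the modal solver, which is precisely the error trade-off the paper studies.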
Accurate determinations of one-bond 13C-13C couplings in 13C-labeled carbohydrates
NASA Astrophysics Data System (ADS)
Azurmendi, Hugo F.; Freedberg, Darón I.
2013-03-01
Carbon plays a central role in the molecular architecture of carbohydrates, yet the availability of accurate methods for 1DCC determination has not been sufficiently explored, despite the importance that such data could play in structural studies of oligo- and polysaccharides. Existing methods require fitting intensity ratios of cross- to diagonal-peaks as a function of the constant-time (CT) in CT-COSY experiments, while other methods utilize measurement of peak separation. The former strategies suffer from complications due to peak overlap, primarily in regions close to the diagonal, while the latter strategies are negatively impacted by the common occurrence of strong coupling in sugars, which requires a reliable assessment of their influence in the context of RDC determination. We detail a 13C-13C CT-COSY method that combines a variation in the CT processed with diagonal filtering to yield 1JCC and RDCs. The strategy, which relies solely on cross-peak intensity modulation, is inspired by the cross-peak nulling method used for JHH determinations, but adapted and extended to applications where, as in sugars, large one-bond 13C-13C couplings coexist with relatively small long-range couplings. Because diagonal peaks are not utilized, overlap problems are greatly alleviated. Thus, one-bond couplings can be determined from different cross-peaks as either active or passive coupling. This results in increased accuracy when more than one determination is available, and in more opportunities to measure a specific coupling in the presence of severe overlap. In addition, we evaluate the influence of strong couplings on the determination of RDCs by computer simulations. 
We show that individual scalar couplings are notably affected by the presence of strong couplings but, at least for the simple cases studied, the obtained RDC values for use in structural calculations were not, because the errors introduced by strong couplings for the isotropic and oriented phases are very similar and therefore cancel when calculating the difference to determine 1DCC values.
Boson mapping techniques applied to constant gauge fields in QCD
NASA Technical Reports Server (NTRS)
Hess, Peter Otto; Lopez, J. C.
1995-01-01
Pairs of coordinates and derivatives of the constant gluon modes are mapped to new gluon-pair fields and their derivatives. Applying this mapping to the Hamiltonian of constant gluon fields yields, for large coupling constants, an effective Hamiltonian which separates into one part describing a scalar field and another describing a field with spin two. The ground state is dominated by pairs of gluons coupled to color and spin zero, with slight admixtures of color-zero, spin-two pairs. SU(2) was used as the color group.
Accurate acceleration of kinetic Monte Carlo simulations through the modification of rate constants.
Chatterjee, Abhijit; Voter, Arthur F
2010-05-21
We present a novel computational algorithm called the accelerated superbasin kinetic Monte Carlo (AS-KMC) method that enables a more efficient study of rare-event dynamics than the standard KMC method while maintaining control over the error. In AS-KMC, the rate constants for processes that are observed many times are lowered during the course of a simulation. As a result, rare processes are observed more frequently than in KMC and the time progresses faster. We first derive error estimates for AS-KMC when the rate constants are modified. These error estimates are next employed to develop a procedure for lowering process rates with control over the maximum error. Finally, numerical calculations are performed to demonstrate that the AS-KMC method captures the correct dynamics, while providing significant CPU savings over KMC in most cases. We show that the AS-KMC method can be employed with any KMC model, even when no time scale separation is present (although in such cases no computational speed-up is observed), without requiring the knowledge of various time scales present in the system.
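A toy version of the rate-lowering idea, not the authors' algorithm itself: every time a process has fired a fixed number of times, its rate constant is divided by a constant factor, so a rare escape from a two-state superbasin is reached in far fewer KMC steps. All rates, thresholds, and the demotion factor below are hypothetical.

```python
import math
import random

random.seed(7)

# Toy AS-KMC-style loop: a superbasin with a fast internal "hop" and a
# rare "escape". Rates and thresholds are illustrative, not from the paper.
rates = {"hop": 1e6, "escape": 1.0}
counts = {name: 0 for name in rates}
N_obs, factor = 50, 2.0      # after every N_obs firings, divide the rate by factor
t, escapes, steps = 0.0, 0, 0

while escapes < 5:
    total = sum(rates.values())
    t += -math.log(1.0 - random.random()) / total   # KMC time advance
    r, acc = random.random() * total, 0.0
    for name, k in rates.items():                   # select a process
        acc += k
        if r <= acc:
            counts[name] += 1
            if name == "escape":
                escapes += 1
            elif counts[name] % N_obs == 0:
                rates[name] /= factor               # demote the over-observed process
            break
    steps += 1

print(f"{steps} steps, simulated time {t:.3g}")
```

With the unmodified rates, roughly a million hops would be simulated per escape; here the escape is reached in on the order of a thousand steps. In the actual AS-KMC method, the demotion schedule is derived from the error estimates so the maximum error stays controlled; the fixed factor here is for illustration only.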
Adaptive optics system performance approximations for atmospheric turbulence correction
NASA Astrophysics Data System (ADS)
Tyson, Robert K.
1990-10-01
Analysis of adaptive optics system behavior can often be reduced to a few approximations and scaling laws. For atmospheric turbulence correction, the deformable mirror (DM) fitting error is most often used to determine a priori the interactuator spacing and the total number of correction zones required. This paper examines the mirror fitting error in terms of its most commonly used exponential form. The explicit constant in the error term depends on the deformable mirror influence function shape and the actuator geometry. The method of least-squares fitting of discrete influence functions to the turbulent wavefront is compared with the linear spatial filtering approximation of system performance. It is found that the spatial filtering method overestimates the correctability of the adaptive optics system by a small amount. By evaluating the fitting error for a number of DM configurations, actuator geometries, and influence functions, we obtain fitting-error constants that corroborate some earlier investigations.
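The exponential form referred to is commonly written σ²_fit = a_F (d/r₀)^{5/3}, with d the interactuator spacing and r₀ the Fried parameter. A sketch of how it sizes a DM (the constant 0.28 is a representative value for continuous-facesheet influence functions, and the Fried parameter is assumed):

```python
import math

def fitting_error_var(d, r0, a_F=0.28):
    """Residual phase variance [rad^2] after DM correction.

    d   : interactuator spacing projected onto the pupil [m]
    r0  : Fried parameter [m]
    a_F : fitting-error constant; ~0.28 is representative for
          continuous-facesheet DMs (the value varies with influence
          function shape and actuator geometry, as the paper shows).
    """
    return a_F * (d / r0) ** (5.0 / 3.0)

# Example: largest spacing so fitting error alone leaves Strehl >= 0.8
# (Marechal approximation: S ~ exp(-sigma^2), so sigma^2 <= -ln(0.8))
r0 = 0.15                       # assumed Fried parameter [m]
target = -math.log(0.8)
d_max = r0 * (target / 0.28) ** (3.0 / 5.0)
print(f"max actuator spacing ~ {d_max:.3f} m")
```

Inverting the scaling law this way is exactly the a priori sizing of interactuator spacing (and hence actuator count) that the abstract describes.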
The HST Key Project on the Extragalactic Distance Scale
NASA Astrophysics Data System (ADS)
Freedman, W. L.
1994-12-01
One of the major unresolved problems in observational cosmology is the determination of the Hubble Constant, (H_0). The Hubble Space Telescope (HST) Key Project on the Extragalactic Distance Scale aims to provide a measure of H_0 to an accuracy of 10%. Historically the route to H_0 has been plagued by systematic errors; hence there is no quick and easy route to a believable value of H_0. Achieving plausible error limits of 10% requires careful attention to eliminating potential sources of systematic error. The strategy adopted by the Key Project team is threefold: First, to discover Cepheids in spiral galaxies located in the field and in small groups that are suitable for the calibration of several independent secondary methods. Second, to make direct Cepheid measurements of 3 spiral galaxies in the Virgo cluster and 2 members of the Fornax cluster. Third, to provide a check on the Cepheid distance scale via independent distance estimates to nearby galaxies, and in addition, to undertake an empirical test of the sensitivity of the zero point of the Cepheid PL relation to heavy-element abundances. First results from the HST Key Project will be presented. We have now determined Cepheid distances to 4 galaxies using the HST: these are the nearby galaxies M81 and M101, the edge-on galaxy NGC 925, and the face-on spiral galaxy M100 in the Virgo cluster. Recently we have measured a Cepheid distance for M100 of 17 +/- 2 Mpc, which yields a value of H_0 = 80 +/- 17 km/sec/Mpc. This work was carried out in collaboration with the other members of the HST Key Project team, R. Kennicutt, J. Mould, F. Bresolin, S. Faber, L. Ferrarese, H. Ford, J. Graham, J. Gunn, M. Han, P. Harding, J. Hoessel, R. Hill, J. Huchra, S. Hughes, G. Illingworth, D. Kelson, B. Madore, R. Phelps, A. Saha, N. Silbermann, P. Stetson, and A. Turner.
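The step from the M100 distance to H_0 is simple error propagation on H_0 = v/d. The recession velocity and its uncertainty below are assumed values for illustration only; the published error budget is more detailed than this quadrature sketch.

```python
import math

# H0 = v / d with quadrature error propagation. The Virgo velocity and
# its error are hypothetical illustrative inputs, not the paper's values.
d, sigma_d = 17.0, 2.0        # Mpc (Cepheid distance to M100, from the abstract)
v, sigma_v = 1360.0, 200.0    # km/s (assumed flow-corrected velocity)

H0 = v / d
sigma_H0 = H0 * math.sqrt((sigma_v / v) ** 2 + (sigma_d / d) ** 2)
print(f"H0 = {H0:.0f} +/- {sigma_H0:.0f} km/s/Mpc")
```

The ~12% distance error alone contributes most of the total uncertainty, which is why eliminating systematic errors in the distance ladder dominates the Key Project strategy.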
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strömberg, Sten, E-mail: sten.stromberg@biotek.lu.se; Nistor, Mihaela, E-mail: mn@bioprocesscontrol.com; Liu, Jing, E-mail: jing.liu@biotek.lu.se
Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories, and much work to standardise such tests is still needed. In the current study, the effects of four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2^4 full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors' impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered for tests performed at high altitudes, due to the low ambient pressure. The results from this study illustrate the importance of considering the environmental factors' influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
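A 2^4 full factorial design enumerates all 16 combinations of the four factors at two coded levels, and main effects are then contrasts between the high- and low-level runs. A sketch (factor names paraphrased from the abstract; the response handling is hypothetical):

```python
from itertools import product

# Enumerate a 2^4 full factorial design for the four environmental factors.
factors = {
    "ambient_temperature": (-1, +1),   # coded low/high levels
    "ambient_pressure":    (-1, +1),
    "water_vapour":        (-1, +1),
    "headspace_gas":       (-1, +1),
}
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
print(len(runs))  # 16 runs

def main_effect(responses, runs, name):
    """Main effect: mean response at the +1 level minus mean at the -1 level."""
    hi = [y for y, r in zip(responses, runs) if r[name] == +1]
    lo = [y for y, r in zip(responses, runs) if r[name] == -1]
    return sum(hi) / len(hi) - sum(lo) / len(lo)
```

With measured methane potentials as `responses`, the factor with the largest absolute main effect (here, ambient temperature per the abstract) is the dominant error contributor.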
Saccadic adaptation to a systematically varying disturbance.
Cassanello, Carlos R; Ohl, Sven; Rolfs, Martin
2016-08-01
Saccadic adaptation maintains the correct mapping between eye movements and their targets, yet the dynamics of saccadic gain changes in the presence of systematically varying disturbances have not been extensively studied. Here we assessed changes in the gain of saccade amplitudes induced by continuous and periodic postsaccadic visual feedback. Observers made saccades following a sequence of target steps either along the horizontal meridian (Two-way adaptation) or with unconstrained saccade directions (Global adaptation). An intrasaccadic step, following a sinusoidal variation as a function of the trial number (with three different frequencies tested in separate blocks), consistently displaced the target along its vector. The oculomotor system responded to the resulting feedback error by modifying saccade amplitudes in a periodic fashion, with a similar frequency of variation but lagging the disturbance by a few tens of trials. This periodic response was superimposed on a drift toward stronger hypometria with similar asymptotes and decay rates across stimulus conditions. The magnitude of the periodic response decreased with increasing frequency and was smaller and more delayed for Global than for Two-way adaptation. These results suggest that, in addition to the well-characterized return-to-baseline response observed in protocols using constant visual feedback, the oculomotor system attempts to minimize the feedback error by integrating its variation across trials. This process resembles a convolution with an internal response function, whose structure would be determined by the coefficients of the learning model. Our protocol reveals this fast learning process in single short experimental sessions, qualifying it for the study of sensorimotor learning in health and disease. Copyright © 2016 the American Physiological Society.
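The reported dynamics, a periodic gain response lagging the sinusoidal disturbance by a few tens of trials superimposed on an exponential drift toward hypometria, can be illustrated with a toy simulation. All parameter values below (frequency, lag, amplitudes, time constants) are assumptions for illustration, not fitted values from the study:

```python
import numpy as np

# Toy model of the described gain time course (assumed parameters throughout).
n_trials = 600
freq = 1 / 200           # disturbance cycles per trial (assumed)
lag = 30                 # response lag in trials ("a few tens", assumed)
trials = np.arange(n_trials)

disturbance = 0.25 * np.sin(2 * np.pi * freq * trials)

# Exponential drift toward stronger hypometria (negative gain change)...
drift = -0.1 * (1 - np.exp(-trials / 150))
# ...plus a periodic response at the disturbance frequency, delayed by `lag`.
periodic = -0.15 * np.sin(2 * np.pi * freq * (trials - lag))

gain_change = drift + periodic
```

The lagged sinusoid is exactly what a convolution of the disturbance with a delayed internal response function would produce at a single frequency, which is the interpretation the abstract proposes.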
A New Approach for Coupled GCM Sensitivity Studies
NASA Astrophysics Data System (ADS)
Kirtman, B. P.; Duane, G. S.
2011-12-01
A new multi-model approach for coupled GCM sensitivity studies is presented. The purpose of the sensitivity experiments is to understand why two different coupled models have such large differences in their respective climate simulations. In the application presented here, the differences between the coupled models using the Center for Ocean-Land-Atmosphere Studies (COLA) and the National Center for Atmospheric Research (NCAR) atmospheric general circulation models (AGCMs) are examined. The intent is to isolate which component of the air-sea fluxes is most responsible for the differences between the coupled models and for the errors in their respective coupled simulations. The procedure is to simultaneously couple the two different atmospheric component models to a single ocean general circulation model (OGCM), in this case the Modular Ocean Model (MOM) developed at the Geophysical Fluid Dynamics Laboratory (GFDL). Each atmospheric component model experiences the same SST produced by the OGCM, but the OGCM is simultaneously coupled to both AGCMs using a cross coupling strategy. In the first experiment, the OGCM is coupled to the heat and fresh water flux from the NCAR AGCM (Community Atmospheric Model; CAM) and the momentum flux from the COLA AGCM. Both AGCMs feel the same SST. In the second experiment, the OGCM is coupled to the heat and fresh water flux from the COLA AGCM and the momentum flux from the CAM AGCM. Again, both atmospheric component models experience the same SST. By comparing these two experimental simulations with control simulations where only one AGCM is used, it is possible to argue which of the flux components are most responsible for the differences in the simulations and their respective errors. 
Based on these sensitivity experiments, we conclude that the tropical ocean warm bias in the COLA coupled model is due to errors in the heat flux, and that the erroneous westward shift of the tropical Pacific cold tongue minimum in the NCAR model is due to errors in the momentum flux. All the coupled simulations presented here have warm biases along the eastern boundary of the tropical oceans, suggesting that the problem is common to both AGCMs. In terms of interannual variability in the tropical Pacific, the CAM momentum flux is responsible for the erroneous westward extension of the sea surface temperature anomalies (SSTA), and errors in the COLA momentum flux cause the erroneous eastward migration of El Niño-Southern Oscillation (ENSO) events. These conclusions depend on assuming that the error due to the OGCM can be neglected.
A micro-coupling for micro mechanical systems
NASA Astrophysics Data System (ADS)
Li, Wei; Zhou, Zhixiong; Zhang, Bi; Xiao, Yunya
2016-05-01
The error motions of micro mechanical systems, such as micro-spindles, increase with increasing rotational speed, which not only decreases the rotational accuracy, but also promotes instability and limits the maximum operational speed. One effective way to deal with this is to use micro-flexible couplings between the drive and driven shafts so as to reduce error motions of the driven shaft. But conventional couplings, such as diaphragm couplings, elastomeric couplings, bellows couplings, and grooved couplings, etc., cannot be directly used because of their large and complicated structures. This study presents a novel micro-coupling that consists of a flexible coupling and a shape memory alloy (SMA)-based clamp for micro mechanical systems. It is monolithic and can be directly machined from a shaft. The study performs design optimization and provides manufacturing considerations, including thermo-mechanical training of the SMA ring for the desired two-way shape-memory effect (TWSME). A prototype micro-coupling and a prototype micro-spindle using the proposed coupling are fabricated and tested. The testing results show that the prototype micro-coupling can bear a torque above 5 N·mm and an axial force of 8.5 N, and can be fitted with an SMA ring for clamping action at room temperature (15 °C) and unclamping action below −5 °C. At the same time, the prototype micro-coupling can work at a rotational speed above 200 kr/min in application to a high-speed precision micro-spindle. Moreover, the radial runout error of the artifact, used as a substitute for the micro-tool, is less than 3 μm, while that of the turbine shaft is above 7 μm. It can be concluded that the micro-coupling successfully accommodates misalignment errors of the prototype micro-spindle.
This research proposes a micro-coupling which is featured with an SMA ring, and it is designed to clamp two shafts, and has smooth transmission, simple assembly, compact structure, zero-maintenance and balanced motions.
Vicinal fluorine-fluorine coupling constants: Fourier analysis.
San Fabián, J; Westra Hoekzema, A J A
2004-10-01
Stereochemical dependences of vicinal fluorine-fluorine nuclear magnetic resonance coupling constants (3JFF) have been studied with the multiconfigurational self-consistent field method in the restricted active space approach, with the second-order polarization propagator approximation (SOPPA), and with density functional theory. The SOPPA results show the best overall agreement with experimental couplings. The relationship with the dihedral angle between the coupled fluorines has been studied by Fourier analysis; the result is very different from that of proton-proton couplings. The Fourier coefficients do not resemble those of a typical Karplus equation. The four nonrelativistic contributions to the coupling constants of 1,2-difluoroethane configurations have been studied separately, showing that up to six Fourier coefficients are required to reproduce the calculated values satisfactorily. Comparison with Fourier coefficients for matching hydrogen fluoride dimer configurations suggests that the higher-order Fourier coefficients (Cn ≥ 3) originate mainly from the through-space Fermi contact interaction. The through-space interaction is the main reason why 3JFF do not follow the Karplus equation. (c) 2004 American Institute of Physics.
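A Fourier expansion of the dihedral dependence, J(φ) = Σ Cn cos(nφ), can be fitted by linear least squares. The sketch below uses synthetic couplings (not computed 3JFF values) and keeps terms up to n = 6, per the abstract's finding that up to six coefficients are needed:

```python
import numpy as np

# Fit J(phi) = sum_{n=0..6} C_n * cos(n*phi) by linear least squares.
# The "observed" couplings are synthetic placeholders, not ab initio data.
phi = np.radians(np.arange(0, 360, 30))           # dihedral angles (rad)
j_obs = 10 * np.cos(phi) + 4 * np.cos(3 * phi)    # synthetic example data

N = 6
design = np.column_stack([np.cos(n * phi) for n in range(N + 1)])
coeffs, *_ = np.linalg.lstsq(design, j_obs, rcond=None)
```

Because the synthetic data lie exactly in the span of the basis, the fit recovers C1 = 10 and C3 = 4; with real computed couplings, the size of the recovered Cn for n ≥ 3 is what signals the non-Karplus, through-space behaviour.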
Zhang, Tisheng; Niu, Xiaoji; Ban, Yalong; Zhang, Hongping; Shi, Chuang; Liu, Jingnan
2015-01-01
A GNSS/INS deeply-coupled system can improve satellite signal tracking performance by using INS to aid the tracking loops under dynamics. However, no literature was available on the complete modeling of the INS branch in the INS-aided tracking loop, which left no theoretical tool to guide the selection of inertial sensors, parameter optimization, and quantitative analysis of INS-aided PLLs. This paper addresses the modeling and parameter optimization of the INS branch in phase-locked loops (PLLs) for a scalar-based GNSS/INS deeply-coupled system. It establishes the transfer function between all known error sources and the PLL tracking error, which can be used to quantitatively evaluate how a candidate inertial measurement unit (IMU) affects the carrier phase tracking error. Based on that, a steady-state error model is proposed to design INS-aided PLLs and to analyze their tracking performance. Based on the modeling and error analysis, an integrated deeply-coupled hardware prototype is developed, with optimization of the aiding information. Finally, the performance of the INS-aided PLLs designed with the proposed steady-state error model is evaluated through simulation and road tests of the hardware prototype. PMID:25569751
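As a generic illustration of a PLL error-transfer analysis (not the paper's full model, which also folds in the INS-branch error sources), the error transfer function of a standard second-order loop, He(s) = s² / (s² + 2ζωn·s + ωn²), can be evaluated across frequency; the damping ratio ζ and natural frequency ωn below are assumed values:

```python
import numpy as np

# Second-order PLL error transfer function He(s) = s^2/(s^2 + 2*zeta*wn*s + wn^2).
# zeta and wn are illustrative assumptions, not the paper's designed loop.
zeta, wn = 0.707, 2 * np.pi * 15.0   # damping ratio; natural frequency (rad/s)

def error_response(f_hz):
    """Magnitude of the phase-error response at frequency f_hz (Hz)."""
    s = 1j * 2 * np.pi * f_hz
    return abs(s**2 / (s**2 + 2 * zeta * wn * s + wn**2))

# Slow dynamics are tracked (error -> 0); fast dynamics pass through as error,
# which is exactly where INS aiding helps by removing the dynamics beforehand.
lo, hi = error_response(0.1), error_response(1e4)
```

Plugging the spectra of the known error sources (oscillator noise, IMU errors) into such a transfer function is what lets one evaluate candidate IMUs against a carrier-phase tracking error budget.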
Coupled thermal-fluid analysis with flowpath-cavity interaction in a gas turbine engine
NASA Astrophysics Data System (ADS)
Fitzpatrick, John Nathan
This study seeks to improve the understanding of inlet conditions of a large rotor-stator cavity in a turbofan engine, often referred to as the drive cone cavity (DCC). The inlet flow is better understood through higher-fidelity computational fluid dynamics (CFD) modeling of the inlet to the cavity, and a coupled finite element (FE) thermal to CFD fluid analysis of the cavity, in order to accurately predict engine component temperatures. Accurately predicting temperature distribution in the cavity is important because temperatures directly affect the material properties, including Young's modulus, yield strength, fatigue strength, and creep properties. All of these properties directly affect the life of critical engine components. In addition, temperatures cause thermal expansion, which changes clearances and in turn affects engine efficiency. The DCC is fed from the last stage of the high pressure compressor. One of its primary functions is to purge the air over the rotor wall to prevent it from overheating. Aero-thermal conditions within the DCC cavity are particularly challenging to predict due to the complex air flow and high heat transfer in the rotating component. Thus, in order to accurately predict metal temperatures, a two-way coupled CFD-FE analysis is needed. Historically, when the cavity airflow is modeled for engine design purposes, the inlet condition has been over-simplified for the CFD analysis, which impacts the results, particularly in the region around the compressor disc rim. The inlet is typically simplified by circumferentially averaging the velocity field at the inlet to the cavity, which removes the effect of pressure wakes from the upstream rotor blades. The way in which these non-axisymmetric flow characteristics affect metal temperatures is not well understood. In addition, a constant air temperature scaled from a previous analysis is used as the simplified cavity inlet air temperature.
Therefore, the objectives of this study are: (a) model the DCC cavity with a more physically representative inlet condition while coupling the solid thermal analysis and compressible air flow analysis that includes the fluid velocity, pressure, and temperature fields; (b) run a coupled analysis whose boundary conditions come from computational models, rather than thermocouple data; (c) validate the model using available experimental data; and (d) based on the validation, determine if the model can be used to predict air inlet and metal temperatures for new engine geometries. Verification with experimental results showed that the coupled analysis with the 3D no-bolt CFD model with predictive boundary conditions over-predicted the HP6 offtake temperature by 16 K. The maximum error was an over-prediction of 50 K, while the average error was 17 K. The predictive model with 3D bolts also predicted cavity temperatures with an average error of 17 K. Of the two CFD models with predicted boundary conditions, the case without bolts performed better than the case with bolts. This is due to the flow errors caused by placing stationary bolts in a rotating reference frame. Therefore, it is recommended that this type of analysis only be attempted for drive cone cavities with no bolts or shielded bolts.
Fluorescence Imaging of Rotational and Vibrational Temperature in a Shock Tunnel Nozzle Flow
NASA Technical Reports Server (NTRS)
Palma, Philip C.; Danehy, Paul M.; Houwing, A. F. P.
2003-01-01
Two-dimensional rotational and vibrational temperature measurements were made at the nozzle exit of a free-piston shock tunnel using planar laser-induced fluorescence. The Mach 7 flow consisted predominantly of nitrogen with a trace quantity of nitric oxide. Nitric oxide was employed as the probe species and was excited at 225 nm. Nonuniformities in the distribution of nitric oxide in the test gas were observed and were concluded to be due to contamination of the test gas by driver gas or cold test gas. The nozzle-exit rotational temperature was measured and is in reasonable agreement with computational modeling. Nonlinearities in the detection system were responsible for systematic errors in the measurements. The vibrational temperature was measured to be constant with distance from the nozzle exit, indicating it had frozen during the nozzle expansion.
Reliable and accurate extraction of Hamaker constants from surface force measurements.
Miklavcic, S J
2018-08-15
A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least-squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version that assumes independent measurements of force and separation are both subject to error. The comparisons demonstrate that the proposed formula is not only easily implemented but also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse-square distance-dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.
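For the inverse-square van der Waals force mentioned above, the classical least-squares estimate of the Hamaker constant (the standard approach the paper compares against, with separations treated as error-free) has a simple closed form, since the model is linear in A. The sphere-flat force law F(D) = -A·R/(6D²), sphere radius, noise level, and data below are illustrative assumptions:

```python
import numpy as np

# Closed-form least-squares Hamaker estimate for F = A * x with x = -R/(6 D^2),
# assuming error-free separations D. All numbers are synthetic placeholders.
rng = np.random.default_rng(0)
A_true = 1e-20                        # Hamaker constant (J), assumed
R = 1e-6                              # sphere radius (m), assumed
D = np.linspace(2e-9, 20e-9, 40)      # separations (m)

F = -A_true * R / (6 * D**2)          # sphere-flat van der Waals force
F_noisy = F + rng.normal(0, 1e-12, F.size)   # ~pN force noise, assumed

x = -R / (6 * D**2)                   # model is linear in A: F = A * x
A_fit = np.sum(x * F_noisy) / np.sum(x * x)  # closed-form LS estimate
```

The paper's point is that this estimator is biased when D itself carries measurement error; its proposed closed-form expression (not reproduced here) accounts for that.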
Bayesian inversions of a dynamic vegetation model in four European grassland sites
NASA Astrophysics Data System (ADS)
Minet, J.; Laloy, E.; Tychon, B.; François, L.
2015-01-01
Eddy covariance data from four European grassland sites are used to probabilistically invert the CARAIB dynamic vegetation model (DVM) with ten unknown parameters, using the DREAM(ZS) Markov chain Monte Carlo (MCMC) sampler. We compare model inversions considering both homoscedastic and heteroscedastic eddy covariance residual errors, with variances either fixed a priori or jointly inferred with the model parameters. Agreement between measured and simulated data during calibration is comparable with previous studies, with root-mean-square errors (RMSE) of simulated daily gross primary productivity (GPP), ecosystem respiration (RECO) and evapotranspiration (ET) ranging from 1.73 to 2.19 g C m-2 day-1, 1.04 to 1.56 g C m-2 day-1, and 0.50 to 1.28 mm day-1, respectively. In validation, mismatches between measured and simulated data are larger, but still with Nash-Sutcliffe efficiency scores above 0.5 for three out of the four sites. Although measurement errors associated with eddy covariance data are known to be heteroscedastic, we showed that assuming a classical linear heteroscedastic model of the residual errors in the inversion does not fully remove heteroscedasticity. Since the employed heteroscedastic error model allows for larger deviations between simulated and measured data as the magnitude of the measured data increases, this error model expectedly leads to poorer data fitting compared to inversions considering a constant variance of the residual errors. Furthermore, sampling the residual error variances along with the model parameters results in overall similar model parameter posterior distributions to those obtained by fixing these variances beforehand, while slightly improving model performance. Despite the fact that the calibrated model is generally capable of fitting the data within measurement errors, systematic biases in the model simulations are observed. These are likely due to model inadequacies such as shortcomings in the photosynthesis modelling.
Besides model behaviour, differences in model parameter posterior distributions among the four grassland sites are also investigated. It is shown that the marginal distributions of the specific leaf area and characteristic mortality time parameters can be explained by site-specific ecophysiological characteristics. Lastly, the possibility of finding a common set of parameters among the four experimental sites is discussed.
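The linear heteroscedastic residual-error model discussed above, with a standard deviation that grows with the magnitude of the measured data, can be sketched as a Gaussian log-likelihood of the kind an MCMC sampler would evaluate; the parameters a, b and the data here are illustrative assumptions, not the study's inferred values:

```python
import numpy as np

# Gaussian log-likelihood with linear heteroscedastic errors:
# sigma_i = a + b * |y_obs_i|. Values of a and b are assumptions.
def log_likelihood(y_obs, y_sim, a=0.1, b=0.05):
    sigma = a + b * np.abs(y_obs)
    resid = y_obs - y_sim
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - 0.5 * (resid / sigma)**2)

y_obs = np.array([1.0, 5.0, 10.0])       # synthetic "measured" fluxes
ll_exact = log_likelihood(y_obs, y_obs)  # perfect simulation
ll_off = log_likelihood(y_obs, y_obs + 0.5)  # biased simulation scores lower
```

Because sigma grows with |y_obs|, a fixed absolute deviation is penalized less at large flux magnitudes, which is exactly why the abstract notes this error model tolerates larger misfits for large measured values.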
Quark masses and strong coupling constant in 2+1 flavor QCD
Maezawa, Y.; Petreczky, P.
2016-08-30
We present a determination of the strange, charm and bottom quark masses as well as the strong coupling constant in 2+1 flavor lattice QCD simulations using the highly improved staggered quark action. The ratios of the charm quark mass to the strange quark mass and of the bottom quark mass to the charm quark mass are obtained from the meson masses calculated on the lattice and found to be mc/ms = 11.877(91) and mb/mc = 4.528(57) in the continuum limit. We also determine the strong coupling constant and the charm quark mass using the moments of pseudoscalar charmonium correlators: αs(μ = mc) = 0.3697(85) and mc(μ = mc) = 1.267(12) GeV. Our result for αs corresponds to the determination of the strong coupling constant at the lowest energy scale so far and is translated to the value αs(μ = MZ, nf = 5) = 0.11622(84).
NASA Astrophysics Data System (ADS)
Arulraj, M.; Barros, A. P.
2017-12-01
GPM-DPR reflectivity profiles in mountainous regions are severely handicapped by low-level ground-clutter artifacts, which have different error characteristics depending on landform (upwind slopes of high mountains versus complex topography in middle mountains) and precipitation regime. These artifacts result in high detection and estimation errors, especially in mid-latitude and tropical mountain regions where low-level light precipitation and complex multi-layer clouds interact with incoming storms. Here, we present results of assessment studies in the Southern Appalachian Mountains (SAM) and preliminary results over the eastern slopes of the Andes, using ground-based observations from the long-term hydrometeorological networks and model studies, toward developing a physically-based framework to systematically identify and attribute measurement errors. Specifically, the focus is on events when the GPM-DPR Ka- and Ku-band precipitation radar misses low-level precipitation at altitudes less than 2 km AGL (above ground level). For this purpose, ground-based MRR and Parsivel disdrometer observations near the surface are compared with the reflectivity profiles observed by the GPM-DPR overpasses, and the raindrop-size spectra are used to classify the precipitation regime associated with different classes of detection and estimation errors. This information will be used along with a coupled rainfall dynamics and radar simulator model to 1) merge the low-level GPM-DPR measured reflectivity with the MRR reflectivities optimally under strict physically-based constraints and 2) build a library of reflectivity profile corrections. Finally, a preliminary 4D analysis of the organization of reflectivity correction modes, microphysical regimes, topography and storm environment will be presented toward developing a general physically-based error model.
Calculation of Host-Guest Binding Affinities Using a Quantum-Mechanical Energy Model.
Muddana, Hari S; Gilson, Michael K
2012-06-12
The prediction of protein-ligand binding affinities is of central interest in computer-aided drug discovery, but it is still difficult to achieve a high degree of accuracy. Recent studies suggesting that available force fields may be a key source of error motivate the present study, which reports the first mining minima (M2) binding affinity calculations based on a quantum-mechanical energy model, rather than an empirical force field. We apply a semi-empirical quantum-mechanical energy function, PM6-DH+, coupled with the COSMO solvation model, to 29 host-guest systems with a wide range of measured binding affinities. After correction for a systematic error, which appears to derive from the treatment of polar solvation, the computed absolute binding affinities agree well with experimental measurements, with a mean error of 1.6 kcal/mol and a correlation coefficient of 0.91. These calculations also delineate the contributions of various energy components, including solute energy, configurational entropy, and solvation free energy, to the binding free energies of these host-guest complexes. Comparison with our previous calculations, which used empirical force fields, points to significant differences in both the energetic and entropic components of the binding free energy. The present study demonstrates the successful combination of a quantum-mechanical Hamiltonian with the M2 affinity method.
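The two summary statistics quoted above (mean error after subtracting a systematic offset, and correlation coefficient) can be computed as follows; the affinity values are synthetic placeholders, not the paper's 29 host-guest results:

```python
import numpy as np

# Synthetic experimental vs computed binding free energies (kcal/mol).
dg_exp = np.array([-5.0, -7.2, -9.1, -4.3, -6.6])
dg_calc = np.array([-4.1, -6.0, -8.2, -3.9, -5.1])

# Correct for a constant systematic error, then score the corrected values.
offset = np.mean(dg_calc - dg_exp)
dg_corr = dg_calc - offset

mue = np.mean(np.abs(dg_corr - dg_exp))   # mean unsigned error
r = np.corrcoef(dg_corr, dg_exp)[0, 1]    # Pearson correlation coefficient
```

Note that subtracting a constant offset changes the mean unsigned error but leaves the correlation coefficient unchanged, so the paper's r = 0.91 reflects relative ranking quality independent of the polar-solvation offset.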
A blinded determination of H0 from low-redshift Type Ia supernovae, calibrated by Cepheid variables
NASA Astrophysics Data System (ADS)
Zhang, Bonnie R.; Childress, Michael J.; Davis, Tamara M.; Karpenka, Natallia V.; Lidman, Chris; Schmidt, Brian P.; Smith, Mathew
2017-10-01
Presently, a >3σ tension exists between values of the Hubble constant H0 derived from analysis of fluctuations in the cosmic microwave background by Planck, and local measurements of the expansion using calibrators of Type Ia supernovae (SNe Ia). We perform a blinded re-analysis of Riess et al. (2011) to measure H0 from low-redshift SNe Ia, calibrated by Cepheid variables and geometric distances, including that to NGC 4258. This paper is a demonstration of techniques to be applied to the Riess et al. (2016) data. Our end-to-end analysis starts from available Harvard-Smithsonian Center for Astrophysics (CfA3) and Lick Observatory Supernova Search (LOSS) photometry, providing an independent validation of Riess et al. (2011). We obscure the value of H0 throughout our analysis and the first stage of the referee process, because calibration of SNe Ia requires a series of often subtle choices, and the potential for results to be affected by human bias is significant. Our analysis departs from that of Riess et al. (2011) by incorporating the covariance-matrix method adopted in the Supernova Legacy Survey and the Joint Lightcurve Analysis to quantify SN Ia systematics, and by including a simultaneous fit of all SN Ia and Cepheid data. We find H0 = 72.5 ± 3.1 (stat) ± 0.77 (sys) km s-1 Mpc-1 with a three-galaxy (NGC 4258+LMC+MW) anchor. The relative uncertainties are 4.3 per cent statistical, 1.1 per cent systematic, and 4.4 per cent total, larger than in Riess et al. (2011) (3.3 per cent total) and the Efstathiou (2014) re-analysis (3.4 per cent total). Our error budget for H0 is dominated by statistical errors due to the small size of the SN sample, whilst the systematic contribution is dominated by variation in the Cepheid fits, and, for the SNe Ia, by uncertainties in the host galaxy mass dependence and Malmquist bias.
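The quoted error budget is internally consistent: the statistical and systematic relative uncertainties combine in quadrature to the stated total, which a one-line check confirms.

```python
import math

# Quadrature sum of independent relative uncertainties:
# 4.3% statistical and 1.1% systematic give ~4.4% total, as quoted.
stat_pct, sys_pct = 4.3, 1.1
total_pct = math.hypot(stat_pct, sys_pct)
```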
Scaling up the precision in a ytterbium Bose-Einstein condensate interferometer
NASA Astrophysics Data System (ADS)
McAlpine, Katherine; Plotkin-Swing, Benjamin; Gochnauer, Daniel; Saxberg, Brendan; Gupta, Subhadeep
2016-05-01
We report on progress toward a high-precision ytterbium (Yb) Bose-Einstein condensate (BEC) interferometer, with the goal of measuring h/m and thus the fine structure constant α. Here h is Planck's constant and m is the mass of a Yb atom. The use of the non-magnetic Yb atom makes our experiment insensitive to magnetic field noise. Our chosen symmetric 3-path interferometer geometry suppresses errors from vibration, rotation, and acceleration. The precision scales with the phase accrued due to the kinetic energy difference between the interferometer arms, resulting in a quadratic sensitivity to the momentum difference. We are installing and testing the laser pulses for large momentum transfer via Bloch oscillations. We will report on Yb BEC production in a new apparatus and progress toward realizing the atom optical elements for high precision measurements. We will also discuss approaches to mitigate two important systematics: (i) atom interaction effects can be suppressed by creating the BEC in a dynamically shaped optical trap to reduce the density; (ii) diffraction phase effects from the various atom-optical elements can be accounted for through an analysis of the light-atom interaction for each pulse.
Radiation Parameters of High Dose Rate Iridium -192 Sources
NASA Astrophysics Data System (ADS)
Podgorsak, Matthew B.
A lack of physical data for high dose rate (HDR) Ir-192 sources has necessitated the use of basic radiation parameters measured with low dose rate (LDR) Ir-192 seeds and ribbons in HDR dosimetry calculations. A rigorous examination of the radiation parameters of several HDR Ir-192 sources has shown that this extension of physical data from LDR to HDR Ir-192 may be inaccurate. Uncertainty in any of the basic radiation parameters used in dosimetry calculations compromises the accuracy of the calculated dose distribution and the subsequent dose delivery. Dose errors of up to 0.3%, 6%, and 2% can result from the use of currently accepted values for the half-life, exposure rate constant, and dose buildup effect, respectively. Since an accuracy of 5% in the delivered dose is essential to prevent severe complications or tumor regrowth, the use of basic physical constants with uncertainties approaching 6% is unacceptable. A systematic evaluation of the pertinent radiation parameters contributes to a reduction in the overall uncertainty in HDR Ir-192 dose delivery. Moreover, the results of the studies described in this thesis contribute significantly to the establishment of standardized numerical values to be used in HDR Ir-192 dosimetry calculations.
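To illustrate how uncertainty in a basic physical constant propagates to delivered dose, consider the half-life term: with an assumed Ir-192 half-life of 73.8 days (a commonly tabulated value; the 1% error and 30-day interval are illustrative assumptions), the fractional error in decay-corrected activity is on the ~0.3% scale quoted above.

```python
import math

# Fractional activity error from a 1% half-life error, 30 days post-calibration.
# A(t) = exp(-ln(2) * t / T_half); numbers are illustrative assumptions.
t_half = 73.8                # Ir-192 half-life (days), tabulated value
t_half_wrong = 73.8 * 1.01   # hypothetical 1% error in the adopted half-life
t = 30.0                     # elapsed time since source calibration (days)

a_true = math.exp(-math.log(2) * t / t_half)
a_wrong = math.exp(-math.log(2) * t / t_half_wrong)
dose_error = a_wrong / a_true - 1   # fractional over-estimate of activity
```

The error grows with elapsed time, so the impact of an inaccurate half-life is largest late in a source's service life.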
Adverse effects in dual-feed interferometry
NASA Astrophysics Data System (ADS)
Colavita, M. Mark
2009-11-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
NASA Astrophysics Data System (ADS)
Anugu, N.; Garcia, P.
2016-04-01
Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding algorithm results are usually biased towards the integer pixels; these errors are called systematic bias errors (Sjödahl 1994). They are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm being used. To study the systematic errors in detail, solar sub-aperture synthetic images are constructed using a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms is investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold center of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold center of gravity behaves better at low SNR, although the systematic errors in the measurement are large. It is found that no algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching.
The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is performed on the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel grid, limiting the field of search to 4 × 4 pixels centered at the initial position delivered by the first step. The sub-pixel-grid region-of-interest images are generated with bi-cubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography Sjödahl (1994); the technique is applied here to solar wave front sensing. A large dynamic range and better measurement accuracy are achieved by combining original-pixel-grid correlation matching over a large field of view with sub-pixel interpolated-grid correlation matching within a small field of view. The results reveal that the proposed method outperforms all the peak-finding algorithms studied in the first approach. It reduces both the systematic error and the RMS error by a factor of 5 when a 5-times-finer image sampling is used, at the expense of roughly twice the computational cost. With the 5-times-finer image sampling, the wave front accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wave front sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. By choosing the image-sampling increment as a trade-off between computational speed and the targeted sub-pixel image shift accuracy, it can also be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene).
The results are planned to be submitted to the journal Optics Express.
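The pixel-locking bias discussed above can be illustrated with a minimal sketch (not the authors' code): the parabola peak-finder of Poyneer (2003) applied to a synthetic Gaussian correlation surface whose peak sits at a known sub-pixel position. The small residual of the estimate is exactly the systematic bias in question.

```python
import numpy as np

def parabola_subpixel_peak(c):
    """Sub-pixel (row, col) peak position via 1-D parabola fits (Poyneer 2003)."""
    i, j = np.unravel_index(np.argmax(c), c.shape)

    def vertex(ym1, y0, yp1):
        # Vertex of the parabola through three equally spaced samples
        denom = ym1 - 2.0 * y0 + yp1
        return 0.0 if denom == 0.0 else 0.5 * (ym1 - yp1) / denom

    return (i + vertex(c[i - 1, j], c[i, j], c[i + 1, j]),
            j + vertex(c[i, j - 1], c[i, j], c[i, j + 1]))

# Synthetic Gaussian correlation surface with a known sub-pixel peak position
yy, xx = np.mgrid[0:32, 0:32]
row0, col0 = 10.3, 12.7
corr = np.exp(-((yy - row0) ** 2 + (xx - col0) ** 2) / (2.0 * 2.0 ** 2))
est = parabola_subpixel_peak(corr)
# est lands within a few hundredths of a pixel of (10.3, 12.7); the residual
# is the systematic (pixel-locking) bias of the parabola fit
```

Swapping `vertex` for a Gaussian or pyramid fit, or interpolating `corr` onto a finer grid before the fit as the paper proposes, changes the size of that residual.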
Accurate force field for molybdenum by machine learning large materials data
NASA Astrophysics Data System (ADS)
Chen, Chi; Deng, Zhi; Tran, Richard; Tang, Hanmei; Chu, Iek-Heng; Ong, Shyue Ping
2017-09-01
In this work, we present a highly accurate spectral neighbor analysis potential (SNAP) model for molybdenum (Mo) developed through the rigorous application of machine learning techniques on large materials data sets. Despite Mo's importance as a structural metal, existing force fields for Mo based on the embedded atom and modified embedded atom methods do not provide satisfactory accuracy on many properties. We will show that by fitting to the energies, forces, and stress tensors of a large density functional theory (DFT)-computed dataset on a diverse set of Mo structures, a Mo SNAP model can be developed that achieves close to DFT accuracy in the prediction of a broad range of properties, including elastic constants, melting point, phonon spectra, surface energies, grain boundary energies, etc. We will outline a systematic model development process, which includes a rigorous approach to structural selection based on principal component analysis, as well as a differential evolution algorithm for optimizing the hyperparameters in the model fitting so that both the model error and the property prediction error can be simultaneously lowered. We expect that this newly developed Mo SNAP model will find broad applications in large and long-time scale simulations.
Pasciuto, Ilaria; Ligorio, Gabriele; Bergamini, Elena; Vannozzi, Giuseppe; Sabatini, Angelo Maria; Cappozzo, Aurelio
2015-09-18
In human movement analysis, 3D body segment orientation can be obtained through the numerical integration of gyroscope signals. These signals, however, are affected by errors that, for the case of micro-electro-mechanical systems, are mainly due to: constant bias, scale factor, white noise, and bias instability. The aim of this study is to assess how the orientation estimation accuracy is affected by each of these disturbances, and whether it is influenced by the angular velocity magnitude and 3D distribution across the gyroscope axes. Reference angular velocity signals, either constant or representative of human walking, were corrupted with each of the four noise types within a simulation framework. The magnitude of the angular velocity affected the error in the orientation estimation due to each noise type, except for the white noise. Additionally, the error caused by the constant bias was also influenced by the angular velocity 3D distribution. As the orientation error depends not only on the noise itself but also on the signal it is applied to, different sensor placements could enhance or mitigate the error due to each disturbance, and special attention must be paid in providing and interpreting measures of accuracy for orientation estimation algorithms.
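The dominant effect described above can be sketched in one dimension with hypothetical numbers (this is not the paper's simulation framework): a constant gyroscope bias, integrated along with the signal, produces an orientation error that grows linearly with time regardless of the underlying angular velocity.

```python
import numpy as np

dt, T = 0.01, 10.0                         # 100 Hz sampling over 10 s (assumed)
t = np.arange(0.0, T, dt)
omega = np.sin(2.0 * np.pi * 0.5 * t)      # reference angular rate, rad/s
bias = 0.02                                # constant gyroscope bias, rad/s

theta_true = np.cumsum(omega) * dt         # ideal integrated orientation
theta_meas = np.cumsum(omega + bias) * dt  # integration of the biased signal
drift = theta_meas - theta_true            # grows linearly: drift[-1] ≈ bias * T
```

White noise, scale factor, and bias instability would each be injected the same way (added to `omega` before integration) but accumulate with different time signatures.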
Optimized constants for an ultraviolet light-adjustable intraocular lens.
Conrad-Hengerer, Ina; Dick, H Burkhard; Hütz, Werner W; Haigis, Wolfgang; Hengerer, Fritz H
2011-12-01
To determine the accuracy of intraocular lens (IOL) power calculations and to suggest adjusted constants for implantation of ultraviolet light-adjustable IOLs. Center for Vision Science, Ruhr University Eye Clinic, Bochum, Germany. Cohort study. Eyes with a visually significant cataract that had phacoemulsification with implantation of a light-adjustable IOL were evaluated. IOLMaster measurements were performed before phacoemulsification and IOL implantation and 4 weeks after surgery before the first adjustment of the IOL. The difference between the expected refraction and the achieved refraction (the prediction error) was studied. The study evaluated 125 eyes. Using the surgical constants provided by the manufacturer of the light-adjustable IOL, the SRK/T formula gave a more hyperopic refraction than the Hoffer Q and Holladay 1 formulas. The mean error of prediction was 0.93 diopter (D) ± 0.69 (SD), 0.91 ± 0.63 D, and 0.86 ± 0.65 D, respectively. The corresponding mean absolute error of prediction was 0.98 ± 0.61 D, 0.93 ± 0.61 D, and 0.90 ± 0.59 D, respectively. With optimized constants for the formulas, the mean error of prediction was 0.00 ± 0.63 D for Hoffer Q, 0.00 ± 0.64 D for Holladay 1, and 0.00 ± 0.66 D for SRK/T. With the manufacturer's constants, the expected refraction after phacoemulsification and implantation of a light-adjustable IOL fell toward the hyperopic side of the desired refraction; this systematic offset is removed by using the optimized constants for all formulas. Copyright © 2011 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
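The zero mean prediction errors reported after optimization follow by construction: a lens constant acts as an approximately additive offset in the formulas, so it can be tuned until the cohort's mean error vanishes. A minimal sketch with simulated errors follows; the location and scale mimic the SRK/T column above, but these are random draws, not the study data.

```python
import numpy as np

# Simulated per-eye prediction errors in diopters (hypothetical cohort)
rng = np.random.default_rng(1)
pred_error = rng.normal(loc=0.9, scale=0.65, size=125)

offset = pred_error.mean()             # systematic part absorbed by the constant
optimized_error = pred_error - offset  # mean prediction error is zero by construction
```

Note that optimization removes only the systematic offset: the spread (SD) of the errors is unchanged, just as in the table of results above.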
Measuring the fine structure constant with Bragg diffraction and Bloch oscillations
NASA Astrophysics Data System (ADS)
Parker, Richard; Yu, Chenghui; Zhong, Weicheng; Estey, Brian; Müller, Holger
2017-04-01
We have demonstrated a new scheme for atom interferometry based on large-momentum-transfer Bragg beam splitters and Bloch oscillations. In this new scheme, we have achieved a resolution of δα/α = 0.25 ppb in the fine structure constant measurement, which gives over 10 million radians of phase difference between freely evolving matter waves. We have suppressed many systematic effects known in most atom interferometers with Raman beam splitters, such as the light shift, the Zeeman effect shift, and vibration. We have also simulated multi-atom Bragg diffraction to understand sub-ppb systematic effects, and implemented spatial filtering to further suppress systematic effects. We present our recent progress toward a measurement of the fine structure constant, which will provide a stringent test of the standard model of particle physics.
Construction of Lines of Constant Density and Constant Refractive Index for Ternary Liquid Mixtures.
ERIC Educational Resources Information Center
Tasic, Aleksandar Z.; Djordjevic, Bojan D.
1983-01-01
Demonstrates the construction of constant-density and constant-refractive-index lines in a triangular coordinate system on the basis of systematic experimental determinations of density and refractive index for both homogeneous (single-phase) ternary liquid mixtures (of known composition) and the corresponding binary compositions. Background information,…
Tanaka, Shigenori
2016-12-07
Correlational and thermodynamic properties of homogeneous electron liquids at finite temperatures are theoretically analyzed in terms of dielectric response formalism with the hypernetted-chain (HNC) approximation and its modified version. The static structure factor and the local-field correction to describe the strong Coulomb-coupling effects beyond the random-phase approximation are self-consistently calculated through solution to integral equations in the paramagnetic (spin unpolarized) and ferromagnetic (spin polarized) states. In the ground state with the normalized temperature θ=0, the present HNC scheme well reproduces the exchange-correlation energies obtained by quantum Monte Carlo (QMC) simulations over the whole fluid phase (the coupling constant r_s ≤ 100), i.e., within 1% and 2% deviations from putative best QMC values in the paramagnetic and ferromagnetic states, respectively. As compared with earlier studies based on the Singwi-Tosi-Land-Sjölander and modified convolution approximations, some improvements on the correlation energies and the correlation functions including the compressibility sum rule are found in the intermediate to strong coupling regimes. When applied to the electron fluids at intermediate Fermi degeneracies (θ≈1), the static structure factors calculated in the HNC scheme show good agreement with the results obtained by the path integral Monte Carlo (PIMC) simulation, while a small negative region in the radial distribution function is observed near the origin, which may be associated with a slight overestimation for the exchange-correlation hole in the HNC approximation. The interaction energies are calculated for various combinations of density and temperature parameters ranging from strong to weak degeneracy and from weak to strong coupling, and the HNC values are then parametrized as functions of r_s and θ.
The HNC exchange-correlation free energies obtained through the coupling-constant integration show reasonable agreements with earlier results including the PIMC-based fitting over the whole fluid region at finite degeneracies in the paramagnetic state. In contrast, a systematic difference between the HNC and PIMC results is observed in the ferromagnetic state, which suggests a necessity of further studies on the exchange-correlation free energies from both aspects of analytical theory and simulation.
Synchronizing Two AGCMs via Ocean-Atmosphere Coupling (Invited)
NASA Astrophysics Data System (ADS)
Kirtman, B. P.
2009-12-01
A new approach for fusing or synchronizing two very different Atmospheric General Circulation Models (AGCMs) is described. The approach is also well suited for understanding why two different coupled models have such large differences in their respective climate simulations. In the application presented here, the differences between the coupled models using the Center for Ocean-Land-Atmosphere Studies (COLA) and the National Center for Atmospheric Research (NCAR) AGCMs are examined. The intent is to isolate which component of the air-sea fluxes is most responsible for the differences between the coupled models and for the errors in their respective coupled simulations. The procedure is to simultaneously couple the two different atmospheric component models to a single ocean general circulation model (OGCM), in this case the Modular Ocean Model (MOM) developed at the Geophysical Fluid Dynamics Laboratory (GFDL). Each atmospheric component model experiences the same SST produced by the OGCM, but the OGCM is simultaneously coupled to both AGCMs using a cross-coupling strategy. In the first experiment, the OGCM is coupled to the heat and fresh water flux from the NCAR AGCM (Community Atmospheric Model; CAM) and the momentum flux from the COLA AGCM. Both AGCMs feel the same SST. In the second experiment, the OGCM is coupled to the heat and fresh water flux from the COLA AGCM and the momentum flux from the CAM AGCM. Again, both atmospheric component models experience the same SST. By comparing these two experimental simulations with control simulations where only one AGCM is used, it is possible to argue which of the flux components are most responsible for the differences in the simulations and their respective errors.
Based on these sensitivity experiments we conclude that the tropical ocean warm bias in the COLA coupled model is due to errors in the heat flux, and that the erroneous westward shift in the tropical Pacific cold tongue minimum in the NCAR model is due to errors in the momentum flux. All the coupled simulations presented here have warm biases along the eastern boundary of the tropical oceans, suggesting that the problem is common to both AGCMs. In terms of interannual variability in the tropical Pacific, the CAM momentum flux is responsible for the erroneous westward extension of the sea surface temperature anomalies (SSTA), and errors in the COLA momentum flux cause the erroneous eastward migration of the El Niño-Southern Oscillation (ENSO) events. These conclusions depend on assuming that the error due to the OGCM can be neglected.
Exploring Measurement Error with Cookies: A Real and Virtual Approach via Interactive Excel
ERIC Educational Resources Information Center
Sinex, Scott A; Gage, Barbara A.; Beck, Peggy J.
2007-01-01
A simple, guided-inquiry investigation using stacked sandwich cookies is employed to develop a simple linear mathematical model and to explore measurement error by incorporating errors as part of the investigation. Both random and systematic errors are presented. The model and errors are then investigated further by engaging with an interactive…
Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
In order to provide a complete description of a material's thermoelectric power factor, in addition to the measured nominal value, an uncertainty interval is required. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
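As a hedged illustration of such an analysis (the numbers below are hypothetical, not ZEM-3 data), first-order propagation combines the relative uncertainties of the Seebeck coefficient and the resistivity in quadrature to bound the power factor S²/ρ:

```python
import math

# Hypothetical measurement values with combined (bias + precision) uncertainties
S, u_S = 180e-6, 5e-6          # Seebeck coefficient (V/K) and its uncertainty
rho, u_rho = 1.2e-5, 0.4e-6    # electrical resistivity (ohm*m) and its uncertainty

pf = S ** 2 / rho              # thermoelectric power factor S^2 / rho, W/(m*K^2)
# First-order propagation: relative uncertainties combine in quadrature,
# with the Seebeck term doubled because S enters squared
u_pf = pf * math.sqrt((2.0 * u_S / S) ** 2 + (u_rho / rho) ** 2)
```

In practice `u_S` and `u_rho` would themselves be quadrature sums of the individual bias terms (probe placement, cold-finger effect, etc.) and the statistical precision.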
Circuit-Host Coupling Induces Multifaceted Behavioral Modulations of a Gene Switch.
Blanchard, Andrew E; Liao, Chen; Lu, Ting
2018-02-06
Quantitative modeling of gene circuits is fundamentally important to synthetic biology, as it offers the potential to transform circuit engineering from trial-and-error construction to rational design and, hence, facilitates the advance of the field. Currently, typical models regard gene circuits as isolated entities and focus only on the biochemical processes within the circuits. However, such a standard paradigm is getting challenged by increasing experimental evidence suggesting that circuits and their host are intimately connected, and their interactions can potentially impact circuit behaviors. Here we systematically examined the roles of circuit-host coupling in shaping circuit dynamics by using a self-activating gene switch as a model circuit. Through a combination of deterministic modeling, stochastic simulation, and Fokker-Planck equation formalism, we found that circuit-host coupling alters switch behaviors across multiple scales. At the single-cell level, it slows the switch dynamics in the high protein production regime and enlarges the difference between stable steady-state values. At the population level, it favors cells with low protein production through differential growth amplification. Together, the two-level coupling effects induce both quantitative and qualitative modulations of the switch, with the primary component of the effects determined by the circuit's architectural parameters. This study illustrates the complexity and importance of circuit-host coupling in modulating circuit behaviors, demonstrating the need for a new paradigm, integrated modeling of the circuit-host system, for quantitative understanding of engineered gene networks. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Characterizing the impact of model error in hydrologic time series recovery inverse problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hansen, Scott K.; He, Jiachuan; Vesselinov, Velimir V.
2017-10-28
Hydrologic models are commonly over-smoothed relative to reality, owing to computational limitations and to the difficulty of obtaining accurate high-resolution information. When used in an inversion context, such models may introduce systematic biases which cannot be encapsulated by an unbiased “observation noise” term of the type assumed by standard regularization theory and typical Bayesian formulations. Despite its importance, model error is difficult to encapsulate systematically and is often neglected. In this paper, model error is considered for an important class of inverse problems that includes interpretation of hydraulic transients and contaminant source history inference: reconstruction of a time series that has been convolved against a transfer function (i.e., impulse response) that is only approximately known. Using established harmonic theory along with two results established here regarding triangular Toeplitz matrices, upper and lower error bounds are derived for the effect of systematic model error on time series recovery for both well-determined and over-determined inverse problems. It is seen that use of additional measurement locations does not improve expected performance in the face of model error. A Monte Carlo study of a realistic hydraulic reconstruction problem is presented, and the lower error bound is seen to be informative about expected behavior. Finally, a possible diagnostic criterion for blind transfer function characterization is also uncovered.
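The problem class can be sketched numerically (with hypothetical kernels, not the paper's transfer functions): data generated by a causal convolution, represented as a lower-triangular Toeplitz matrix, are inverted using a slightly over-smoothed impulse response, leaving a purely systematic reconstruction error even in the absence of noise.

```python
import numpy as np

def toeplitz_conv(h, n):
    """Lower-triangular Toeplitz matrix performing causal convolution with h."""
    T = np.zeros((n, n))
    for k, hk in enumerate(h[:n]):
        T += hk * np.eye(n, k=-k)
    return T

n = 50
t = np.arange(n, dtype=float)
s_true = np.exp(-((t - 20.0) / 5.0) ** 2)              # unknown source history
h_true = np.exp(-t / 4.0); h_true /= h_true.sum()      # true impulse response
h_model = np.exp(-t / 5.0); h_model /= h_model.sum()   # over-smoothed model kernel

d = toeplitz_conv(h_true, n) @ s_true                  # noise-free observations
s_rec = np.linalg.solve(toeplitz_conv(h_model, n), d)  # inversion with wrong kernel
model_error = np.linalg.norm(s_rec - s_true)           # purely systematic residual
```

Adding an unbiased noise term to `d` would not remove `model_error`; that is the sense in which standard "observation noise" regularization cannot encapsulate this bias.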
Systematic Biases in Parameter Estimation of Binary Black-Hole Mergers
NASA Technical Reports Server (NTRS)
Littenberg, Tyson B.; Baker, John G.; Buonanno, Alessandra; Kelly, Bernard J.
2012-01-01
Parameter estimation of binary-black-hole merger events in gravitational-wave data relies on matched filtering techniques, which, in turn, depend on accurate model waveforms. Here we characterize the systematic biases introduced in measuring astrophysical parameters of binary black holes by applying the currently most accurate effective-one-body templates to simulated data containing non-spinning numerical-relativity waveforms. For advanced ground-based detectors, we find that the systematic biases are well within the statistical error for realistic signal-to-noise ratios (SNR). These biases grow to be comparable to the statistical errors at high signal-to-noise ratios for ground-based instruments (SNR approximately 50) but never dominate the error budget. At the much larger signal-to-noise ratios expected for space-based detectors, these biases will become large compared to the statistical errors but are small enough (at most a few percent in the black-hole masses) that we expect they should not affect broad astrophysical conclusions that may be drawn from the data.
Testing the Reliability of Cluster Mass Indicators with a Systematics Limited Dataset
NASA Technical Reports Server (NTRS)
Juett, Adrienne M.; Davis, David S.; Mushotzky, Richard
2009-01-01
We present the mass X-ray observable scaling relationships for clusters of galaxies using the XMM-Newton cluster catalog of Snowden et al. Our results are roughly consistent with previous observational and theoretical work, with one major exception. We find 2-3 times the scatter around the best fit mass scaling relationships as expected from cluster simulations or seen in other observational studies. We suggest that this is a consequence of using hydrostatic mass, as opposed to virial mass, and is due to the explicit dependence of the hydrostatic mass on the gradients of the temperature and gas density profiles. We find a larger range of slope in the cluster temperature profiles at radii 500 than previous observational studies. Additionally, we find only a weak dependence of the gas mass fraction on cluster mass, consistent with a constant. Our average gas mass fraction results also argue for a closer study of the systematic errors due to instrumental calibration and modeling method variations between analyses. We suggest that a more careful study of the differences between various observational results and with cluster simulations is needed to understand sources of bias and scatter in cosmological studies of galaxy clusters.
System calibration method for Fourier ptychographic microscopy.
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and wide field of view. In current FPM imaging platforms, systematic error sources come from aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts. Therefore, it is unlikely that the dominating error could be distinguished from these degraded reconstructions without any prior knowledge. In addition, systematic error is generally a mixture of various error sources in real situations, and these sources cannot be separated owing to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, to calibrate the mixed systematic errors simultaneously from an overall perspective, based on the simulated annealing algorithm, the LED intensity correction method, the nonlinear regression process, and the adaptive step-size strategy, which involves the evaluation of an error metric at each iteration step, followed by the re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more pragmatic. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
Scherpelz, Peter; Govoni, Marco; Hamada, Ikutaro; ...
2016-06-22
We present an implementation of G0W0 calculations including spin–orbit coupling (SOC) enabling investigations of large systems, with thousands of electrons, and we discuss results for molecules, solids, and nanocrystals. Using a newly developed set of molecules with heavy elements (called GW-SOC81), we find that, when based upon hybrid density functional calculations, fully relativistic (FR) and scalar-relativistic (SR) G0W0 calculations of vertical ionization potentials both yield excellent performance compared to experiment, with errors below 1.9%. We demonstrate that while SR calculations have higher random errors, FR calculations systematically underestimate the VIP by 0.1 to 0.2 eV. We further verify that SOC effects may be well approximated at the FR density functional level and then added to SR G0W0 results for a broad class of systems. We also address the use of different root-finding algorithms for the G0W0 quasiparticle equation and the significant influence of including d electrons in the valence partition of the pseudopotential for G0W0 calculations. Lastly, we present statistical analyses of our data, highlighting the importance of separating definitive improvements from those that may occur by chance due to a limited number of samples. We suggest the statistical analyses used here will be useful in the assessment of the accuracy of a large variety of electronic structure methods.
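The additive SOC approximation verified above amounts to a one-line correction. The energies below are hypothetical placeholders chosen only to illustrate the bookkeeping, not values from the paper.

```python
# Hypothetical energies in eV (illustrative only)
vip_sr_gw = 9.80    # scalar-relativistic (SR) G0W0 vertical ionization potential
e_dft_sr = 9.10     # SR estimate of the VIP at the DFT level
e_dft_fr = 8.95     # fully relativistic (FR) estimate at the DFT level

soc_shift = e_dft_fr - e_dft_sr        # SOC correction evaluated at the DFT level
vip_fr_approx = vip_sr_gw + soc_shift  # approximate FR G0W0 VIP (close to 9.65 eV)
```

The approximation assumes the SOC shift is insensitive to the level of theory, which is what the authors verify holds for a broad class of systems.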
Autonomous Quantum Error Correction with Application to Quantum Metrology
NASA Astrophysics Data System (ADS)
Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.
2017-04-01
We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
NASA Astrophysics Data System (ADS)
Semenov, Z. V.; Labusov, V. A.
2017-11-01
Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.
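A toy version of such a simulation (a single homogeneous layer with assumed refractive indices, not the OptiReOpt coating model) shows how photodetector noise in the measured reflectance spectrum maps into a random error of the recovered layer thickness:

```python
import numpy as np

def reflectance(d, wl, n_film=2.35, n_air=1.0, n_sub=1.52):
    """Normal-incidence reflectance of one non-absorbing layer on a substrate."""
    r01 = (n_air - n_film) / (n_air + n_film)
    r1s = (n_film - n_sub) / (n_film + n_sub)
    phase = np.exp(1j * 4.0 * np.pi * n_film * d / wl)
    r = (r01 + r1s * phase) / (1.0 + r01 * r1s * phase)
    return np.abs(r) ** 2

wl = np.linspace(400e-9, 900e-9, 256)     # assumed spectrometer range, m
d_true = 120e-9                           # deposited thickness, m
rng = np.random.default_rng(2)
meas = reflectance(d_true, wl) + rng.normal(0.0, 1e-3, wl.size)  # noisy spectrum

# Inverse problem: least-squares search over candidate thicknesses (0.01 nm grid)
cands = np.linspace(100e-9, 140e-9, 4001)
misfit = [np.sum((reflectance(d, wl) - meas) ** 2) for d in cands]
d_est = cands[int(np.argmin(misfit))]
thickness_error = abs(d_est - d_true)     # random error induced by detector noise
```

Repeating the experiment over many noise realizations, or over a grid of `d_true` values, yields the error-versus-thickness curves of the kind described above; calibration errors and index errors would enter as biases in `wl` and `n_film`.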
Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth
2006-07-01
This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
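The attenuating effect of random, non-differential recall error can be sketched in a few lines (a toy linear exposure-response model, not the INTERPHONE simulation code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x_true = rng.lognormal(mean=0.0, sigma=1.0, size=n)    # true amount of phone use
risk = 0.05 * x_true                                   # toy linear excess risk
x_reported = x_true * rng.lognormal(0.0, 0.5, size=n)  # multiplicative recall error

slope_true = np.polyfit(x_true, risk, 1)[0]     # recovers the true slope, 0.05
slope_obs = np.polyfit(x_reported, risk, 1)[0]  # attenuated toward the null
```

This is the classic regression-dilution mechanism: random error in the reported exposure inflates its variance without adding covariance with the outcome, so the estimated exposure-response slope shrinks, i.e., the risk is underestimated. Differential or systematic errors and selection bias would be simulated by making the error distribution depend on case status or by subsampling controls.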
Analyzing False Positives of Four Questions in the Force Concept Inventory
ERIC Educational Resources Information Center
Yasuda, Jun-ichiro; Mae, Naohiro; Hull, Michael M.; Taniguchi, Masa-aki
2018-01-01
In this study, we analyze the systematic error from false positives of the Force Concept Inventory (FCI). We compare the systematic errors of question 6 (Q.6), Q.7, and Q.16, for which clearly erroneous reasoning has been found, with Q.5, for which clearly erroneous reasoning has not been found. We determine whether or not a correct response to a…
Partial compensation interferometry measurement system for parameter errors of conicoid surface
NASA Astrophysics Data System (ADS)
Hao, Qun; Li, Tengfei; Hu, Yao; Wang, Shaopu; Ning, Yan; Chen, Zhuo
2018-06-01
Surface parameters, such as the vertex radius of curvature and the conic constant, are used to describe the shape of an aspheric surface. Surface parameter errors (SPEs) are deviations of these parameters that affect the optical characteristics of an aspheric surface. Precise measurement of SPEs is critical in the evaluation of optical surfaces. In this paper, a partial compensation interferometry measurement system for SPEs of a conicoid surface is proposed based on the theory of slope asphericity and the best compensation distance. The system is developed to measure the SPE-caused best compensation distance change and the SPE-caused surface shape change, and then calculate the SPEs with an iteration algorithm for accuracy improvement. Experimental results indicate that the average relative measurement accuracy of the proposed system can be better than 0.02% for the vertex radius of curvature error and 2% for the conic constant error.
How the cosmic web induces intrinsic alignments of galaxies
NASA Astrophysics Data System (ADS)
Codis, S.; Dubois, Y.; Pichon, C.; Devriendt, J.; Slyz, A.
2016-10-01
Intrinsic alignments are believed to be a major source of systematics for the future generation of weak gravitational lensing surveys such as Euclid or LSST. Direct measurements of the alignment of the projected light distribution of galaxies in wide-field imaging data seem to agree on a contamination at the level of a few per cent of the shear correlation functions, although the amplitude of the effect depends on the population of galaxies considered. Given this dependency, it is difficult to use dark matter-only simulations as the sole resource to predict and control intrinsic alignments. We report here estimates of the level of intrinsic alignment in the cosmological hydrodynamical simulation Horizon-AGN, which could be a major source of systematic errors in weak gravitational lensing measurements. In particular, assuming that the spin of galaxies is a good proxy for their ellipticity, we show how those spins are spatially correlated and how they couple to the tidal field in which they are embedded. We also present theoretical calculations that illustrate and qualitatively explain the observed signals.
Electrical Coupling Between Glial Cells in the Rat Retina
Ceelen, Paul W.; Lockridge, Amber; Newman, Eric A.
2008-01-01
The strength of electrical coupling between retinal glial cells was quantified with simultaneous whole-cell current-clamp recordings from astrocyte–astrocyte, astrocyte–Müller cell, and Müller cell–Müller cell pairs in the acutely isolated rat retina. Experimental results were fit and space constants determined using a resistive model of the glial cell network that assumed a homogeneous two-dimensional glial syncytium. The effective space constant (the distance from the point of stimulation to where the voltage falls to 1/e) equaled 12.9, 6.2, and 3.7 µm, respectively for astrocyte–astrocyte, astrocyte–Müller cell, and Müller cell–Müller cell coupling. The addition of 1 mM Ba2+ had little effect on network space constants, while 0.5 mM octanol shortened the space constants to 4.7, 4.4, and 2.6 µm for the three types of coupling. For a given distance separating cell pairs, the strength of coupling showed considerable variability. This variability in coupling strength was reproduced accurately by a second resistive model of the glial cell network (incorporating discrete astrocytes spaced at varying distances from each other), demonstrating that the variability was an intrinsic property of the glial cell network. Coupling between glial cells in the retina may permit the intercellular spread of ions and small molecules, including messengers mediating Ca2+ wave propagation, but it is too weak to carry significant K+ spatial buffer currents. PMID:11424187
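The effective space constant defined above (the distance at which the voltage falls to 1/e of its value at the stimulation point) can be recovered from pair recordings by a log-linear fit. The snippet below is a simplified stand-in: the paper fits a resistive two-dimensional network model, whereas here a plain exponential decay is assumed, with the astrocyte-astrocyte value of 12.9 µm used to generate noiseless toy data.

```python
import numpy as np

# Hypothetical voltage spread between coupled glial cells, assuming a simple
# exponential decay V(r) = V0 * exp(-r / lambda); the effective space constant
# is the distance at which the voltage falls to 1/e of its initial value.
lam_true = 12.9          # µm, astrocyte-astrocyte value reported above
v0 = -10.0               # mV, deflection at the stimulated cell

r = np.array([5.0, 10.0, 15.0, 20.0, 25.0])   # pair separations, µm
v = v0 * np.exp(-r / lam_true)                # noiseless toy measurements

# Recover the space constant from a log-linear least-squares fit
slope, intercept = np.polyfit(r, np.log(v / v0), 1)
lam_fit = -1.0 / slope
print(round(lam_fit, 1))   # → 12.9
```

With real recordings the scatter about this fit would reflect the variability in coupling strength that the second, discrete-astrocyte model reproduces.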
Error Sources in Asteroid Astrometry
NASA Technical Reports Server (NTRS)
Owen, William M., Jr.
2000-01-01
Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.
Constraints on the {omega}- and {sigma}-meson coupling constants with dibaryons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faessler, A.; Buchmann, A.J.; Krivoruchenko, M.I.
The effect of narrow dibaryon resonances on basic nuclear matter properties and on the structure of neutron stars is investigated in mean-field theory and in the relativistic Hartree approximation. The existence of massive neutron stars imposes constraints on the coupling constants of the {omega} and {sigma} mesons with dibaryons. In the allowed region of the parameter space of the coupling constants, a Bose condensate of the light dibaryon candidates d{sub 1}(1920) and d{sup {prime}}(2060) is stable against compression. This proves the stability of the ground state of heterophase nuclear matter with a Bose condensate of light dibaryons. © 1997 The American Physical Society
Phases of a fermionic model with chiral condensates and Cooper pairs in 1+1 dimensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihaila, Bogdan; Blagoev, Krastan B.; MIND Institute, Albuquerque, New Mexico 87131
2006-01-01
We study the phase structure of a 4-fermi model with three bare coupling constants, which potentially has three types of bound states. This model is a generalization of the model discussed previously by [A. Chodos, F. Cooper, W. Mao, H. Minakata, and A. Singh, Phys. Rev. D 61, 045011 (2000).], which contained both chiral condensates and Cooper pairs. For this generalization we find that there are two independent renormalized coupling constants which determine the phase structure at finite density and temperature. We find that the vacuum can be in one of three distinct phases depending on the value of these two renormalized coupling constants.
Quantum-gravity predictions for the fine-structure constant
NASA Astrophysics Data System (ADS)
Eichhorn, Astrid; Held, Aaron; Wetterich, Christof
2018-07-01
Asymptotically safe quantum fluctuations of gravity can uniquely determine the value of the gauge coupling for a large class of grand unified models. In turn, this makes the electromagnetic fine-structure constant calculable. The balance of gravity and matter fluctuations results in a fixed point for the running of the gauge coupling. It is approached as the momentum scale is lowered in the transplanckian regime, leading to a uniquely predicted value of the gauge coupling at the Planck scale. The precise value of the predicted fine-structure constant depends on the matter content of the grand unified model. It is proportional to the gravitational fluctuation effects for which computational uncertainties remain to be settled.
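The approach to an interacting fixed point under a lowering of the momentum scale can be illustrated with a toy flow equation. The beta function below is purely illustrative (it is not the actual gravity-matter beta function of the paper, and the fixed-point value is an arbitrary assumption): it has an infrared-repulsive Gaussian fixed point and an attractive interacting fixed point at g*², toward which any nonzero starting coupling flows.

```python
import numpy as np

# Toy flow: fixed points of dg^2/dt at 0 and g_star_sq, with the interacting
# one attractive, so the coupling is driven to a uniquely predicted value.
g_star_sq = 0.5
def beta(g_sq):
    return -2.0 * g_sq * (g_sq - g_star_sq)

g_sq = 0.05                       # assumed transplanckian initial value
t = np.linspace(0, 20, 2001)      # t = ln(k / k_0), RG "time"
dt = t[1] - t[0]
for _ in t[1:]:
    g_sq += dt * beta(g_sq)       # forward-Euler integration of the flow
print(round(g_sq, 3))             # → 0.5 (the fixed-point value)
```

The key qualitative feature matches the abstract: the infrared value of the coupling is fixed by the location of the fixed point, not by the initial condition, which is what makes the fine-structure constant calculable.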
A novel constant-force scanning probe incorporating mechanical-magnetic coupled structures.
Wang, Hongxi; Zhao, Jian; Gao, Renjing; Yang, Yintang
2011-07-01
A one-dimensional scanning probe with constant measuring force is designed and fabricated by utilizing the negative stiffness of the magnetic coupled structure, which mainly consists of the magnetic structure, the parallel guidance mechanism, and the pre-stressed spring. Based on the theory of material mechanics and the equivalent surface current model for computing the magnetic force, an analytical model of the scanning probe subjected to multiple forces is established, and the nonlinear relationship between the measuring force and the probe displacement is obtained. The practicability of introducing the magnetic coupled structure into the constant-force probe is validated by the consistency of the numerical simulation and experimental results.
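The constant-force principle can be sketched with a linearized toy model. All stiffness and preload values below are hypothetical: the pre-stressed spring contributes a positive stiffness, the magnetic coupled structure an almost equal negative stiffness, and their near-cancellation keeps the measuring force nearly constant over the probe's travel.

```python
import numpy as np

# Linearized sketch (assumed values): net stiffness = k_spring - k_mag,
# so a close match between the two keeps the force variation small.
k_spring = 50.0      # N/m, pre-stressed spring stiffness (hypothetical)
k_mag = 49.5         # N/m, magnitude of the magnetic negative stiffness
f_preload = 0.010    # N, measuring force at zero displacement

x = np.linspace(0.0, 1e-3, 101)             # probe displacement, 0 to 1 mm
force = f_preload + (k_spring - k_mag) * x  # net measuring force

variation = (force.max() - force.min()) / f_preload
print(f"force variation over 1 mm: {variation:.1%}")
```

With a perfect stiffness match the variation vanishes; in practice the nonlinearity of the magnetic force limits the displacement range over which the cancellation holds.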
The Accuracy of GBM GRB Localizations
NASA Astrophysics Data System (ADS)
Briggs, Michael Stephen; Connaughton, V.; Meegan, C.; Hurley, K.
2010-03-01
We report a study of the accuracy of GBM GRB localizations, analyzing three types of localizations: those produced automatically by the GBM Flight Software on board GBM, those produced automatically with ground software in near real time, and localizations produced with human guidance. The two types of automatic locations are distributed in near real time via GCN Notices; the human-guided locations are distributed on a timescale of many minutes or hours via GCN Circulars. This work uses a Bayesian analysis that models the distribution of the GBM total location error by comparing GBM locations to more accurate locations obtained with other instruments. Reference locations are obtained from Swift, Super-AGILE, the LAT, and the IPN. We model the GBM total location errors as having systematic errors in addition to the statistical errors, and use the Bayesian analysis to constrain the systematic errors.
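A minimal version of such an analysis can be sketched as follows. The setup is an assumption for illustration (the paper's actual error model is more detailed): each burst's total location error is its statistical error plus an unknown systematic added in quadrature, angular offsets from the reference locations are Rayleigh-distributed, and a grid posterior under a flat prior recovers the systematic term.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated offsets (deg) between GBM and reference locations, assuming
# total error = statistical (+) systematic in quadrature, Rayleigh-distributed.
sys_true = 3.0
stat = rng.uniform(1.0, 5.0, size=300)        # per-burst statistical errors
sigma_tot = np.hypot(stat, sys_true)
offsets = sigma_tot * np.sqrt(-2 * np.log(1 - rng.random(300)))  # Rayleigh

# Grid posterior for the systematic error under a flat prior
sys_grid = np.linspace(0.0, 10.0, 501)
logpost = np.zeros_like(sys_grid)
for i, s in enumerate(sys_grid):
    st = np.hypot(stat, s)
    # Sum of Rayleigh log-likelihoods of the observed offsets
    logpost[i] = np.sum(np.log(offsets / st**2) - offsets**2 / (2 * st**2))
post = np.exp(logpost - logpost.max())
sys_map = sys_grid[np.argmax(post)]
print(sys_map)
```

With a few hundred reference comparisons, the posterior localizes the systematic component well even though each burst has a different statistical error.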
Measuring h /mCs and the Fine Structure Constant with Bragg Diffraction and Bloch Oscillations
NASA Astrophysics Data System (ADS)
Parker, Richard
2016-05-01
We have demonstrated a new scheme for atom interferometry based on large-momentum-transfer Bragg beam splitters and Bloch oscillations. With this scheme, which produces up to 4.4 million radians of phase difference between freely evolving matter waves, we have achieved a resolution of δα/α = 0.25 ppb in the fine-structure constant measurement. We suppress many systematic effects, e.g., Zeeman shifts and effects from Earth's gravity and vibrations, use Bloch oscillations to increase the signal and reduce the diffraction phase, simulate multi-atom Bragg diffraction to understand sub-ppb systematic effects, and implement spatial filtering to further suppress systematic effects. We present our recent progress toward a measurement of the fine structure constant, which will provide a stringent test of the standard model of particle physics.
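The connection between h/m_Cs and α rests on the standard photon-recoil relation α² = (2R∞/c)(m_Cs/m_e)(h/m_Cs). The sketch below evaluates it with approximate CODATA-style input values (quoted to limited precision here, so the result is only illustrative of the method, not of the experiment's accuracy).

```python
import math

# Standard recoil relation: alpha^2 = (2 * R_inf / c) * (m_Cs / m_e) * (h / m_Cs)
# Input constants are approximate CODATA-style values.
R_inf = 10973731.568160      # m^-1, Rydberg constant
c = 299792458.0              # m/s, speed of light (exact by definition)
m_ratio = 242271.86          # m_Cs / m_e, approximate mass ratio
h_over_m = 3.0023694e-9      # h / m_Cs in m^2/s, approximate

alpha = math.sqrt(2 * R_inf / c * m_ratio * h_over_m)
print(1 / alpha)             # inverse fine-structure constant, ~137.036
```

Because R∞ and the mass ratio are known to very high precision, the uncertainty of α from this route is dominated by the interferometric h/m_Cs measurement, which is why the phase resolution quoted above translates directly into a ppb-level determination of α.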
{sup 45}Sc Solid State NMR studies of the silicides ScTSi (T=Co, Ni, Cu, Ru, Rh, Pd, Ir, Pt)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harmening, Thomas; Eckert, Hellmut, E-mail: eckerth@uni-muenster.de; Fehse, Constanze M.
The silicides ScTSi (T=Fe, Co, Ni, Cu, Ru, Rh, Pd, Ir, Pt) were synthesized by arc-melting and characterized by X-ray powder diffraction. The structures of ScCoSi, ScRuSi, ScPdSi, and ScIrSi were refined from single crystal diffractometer data. These silicides crystallize with the TiNiSi type, space group Pnma. No systematic influences of the {sup 45}Sc isotropic magnetic shift and nuclear electric quadrupolar coupling parameters on various structural distortion parameters calculated from the crystal structure data can be detected. {sup 45}Sc MAS-NMR data suggest systematic trends in the local electronic structure probed by the scandium atoms: both the electric field gradients and the isotropic magnetic shifts relative to a 0.2 M aqueous Sc(NO{sub 3}){sub 3} solution decrease with increasing valence electron concentration, and within each T group the isotropic magnetic shift decreases monotonically with increasing atomic number. The {sup 45}Sc nuclear electric quadrupolar coupling constants are generally well reproduced by quantum mechanical electric field gradient calculations using the WIEN2k code. Highlights: • Arc-melting synthesis of silicides ScTSi. • Single crystal X-ray data of ScCoSi, ScRuSi, ScPdSi, and ScIrSi. • {sup 45}Sc solid state NMR of silicides ScTSi.