Sample records for dipole estimation errors

  1. Dipole estimation errors due to not incorporating anisotropic conductivities in realistic head models for EEG source analysis

    NASA Astrophysics Data System (ADS)

    Hallez, Hans; Staelens, Steven; Lemahieu, Ignace

    2009-10-01

    EEG source analysis is a valuable tool for brain functionality research and for diagnosing neurological disorders, such as epilepsy. It requires a geometrical representation of the human head or a head model, which is often modeled as an isotropic conductor. However, it is known that some brain tissues, such as the skull or white matter, have an anisotropic conductivity. Many studies reported that the anisotropic conductivities have an influence on the calculated electrode potentials. However, few studies have assessed the influence of anisotropic conductivities on the dipole estimations. In this study, we want to determine the dipole estimation errors due to not taking into account the anisotropic conductivities of the skull and/or brain tissues. Therefore, head models are constructed with the same geometry, but with an anisotropically conducting skull and/or brain tissue compartment. These head models are used in simulation studies where the dipole location and orientation error is calculated due to neglecting anisotropic conductivities of the skull and brain tissue. Results show that not taking into account the anisotropic conductivities of the skull yields a dipole location error between 2 and 25 mm, with an average of 10 mm. When the anisotropic conductivities of the brain tissues are neglected, the dipole location error ranges between 0 and 5 mm. In this case, the average dipole location error was 2.3 mm. In all simulations, the dipole orientation error was smaller than 10°. We can conclude that the anisotropic conductivities of the skull have to be incorporated to improve the accuracy of EEG source analysis. The results of the simulation, as presented here, also suggest that incorporation of the anisotropic conductivities of brain tissues is not necessary. However, more studies are needed to confirm these suggestions.
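    The two error measures reported in studies like this one — the Euclidean distance between true and estimated dipole locations, and the angle between the true and estimated moment vectors — can be computed directly. A minimal numpy sketch (function and argument names are illustrative, not from the paper):

```python
import numpy as np

def dipole_errors(r_true, r_est, q_true, q_est):
    # Location error: Euclidean distance between true and estimated positions.
    loc_err = np.linalg.norm(np.asarray(r_est) - np.asarray(r_true))
    # Orientation error: angle (degrees) between the moment vectors.
    cosang = np.dot(q_true, q_est) / (np.linalg.norm(q_true) * np.linalg.norm(q_est))
    ori_err = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
    return loc_err, ori_err
```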

  2. Cortical dipole imaging using truncated total least squares considering transfer matrix error.

    PubMed

    Hori, Junichi; Takeuchi, Kosuke

    2013-01-01

    Cortical dipole imaging has been proposed as a method to visualize electroencephalographic activity with high spatial resolution. We investigated an inverse technique for cortical dipole imaging using truncated total least squares (TTLS). TTLS is a regularization technique that reduces the influence of both measurement noise and the transfer-matrix error caused by head-model distortion. Estimation of the regularization parameter was also investigated, based on the L-curve. Computer simulations suggested that estimation accuracy was improved by TTLS compared with Tikhonov regularization. The proposed method was applied to human experimental data of visual evoked potentials. We confirmed that TTLS provided high spatial resolution in cortical dipole imaging.
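    The TTLS idea — truncate the SVD of the augmented matrix [A | b] rather than of A alone, so that errors in the transfer matrix A are treated on the same footing as measurement noise — can be sketched in a few lines of numpy. This follows the standard truncated-TLS formulation, not the authors' implementation; A, b, and the truncation index k are illustrative:

```python
import numpy as np

def ttls_solve(A, b, k):
    """Truncated total least squares for A x ~ b, truncating at rank k."""
    C = np.column_stack([A, b])              # augmented matrix [A | b]
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    V = Vt.T
    n = A.shape[1]
    V12 = V[:n, k:]                          # directions discarded by truncation
    V22 = V[n:, k:]
    # Standard TTLS solution built from the discarded right singular vectors.
    return (-V12 @ V22.T @ np.linalg.pinv(V22 @ V22.T)).ravel()
```

    For a consistent system with k equal to the number of unknowns, this reduces to the exact solution; lowering k adds regularization against noise in both A and b.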

  3. On the dipole approximation with error estimates

    NASA Astrophysics Data System (ADS)

    Boßmann, Lea; Grummt, Robert; Kolb, Martin

    2018-01-01

    The dipole approximation is employed to describe interactions between atoms and radiation. It essentially consists of neglecting the spatial variation of the external field over the atom. Heuristically, this is justified by arguing that the wavelength is considerably larger than the atomic length scale, which holds under usual experimental conditions. We prove the dipole approximation in the limit of infinite wavelengths compared to the atomic length scale and estimate the rate of convergence. Our results include N-body Coulomb potentials and experimentally relevant electromagnetic fields such as plane waves and laser pulses.

  4. Minimum-norm cortical source estimation in layered head models is robust against skull conductivity error☆☆☆

    PubMed Central

    Stenroos, Matti; Hauk, Olaf

    2013-01-01

    The conductivity profile of the head has a major effect on EEG signals, but unfortunately the conductivity for the most important compartment, skull, is only poorly known. In dipole modeling studies, errors in modeled skull conductivity have been considered to have a detrimental effect on EEG source estimation. However, as dipole models are very restrictive, those results cannot be generalized to other source estimation methods. In this work, we studied the sensitivity of EEG and combined MEG + EEG source estimation to errors in skull conductivity using a distributed source model and minimum-norm (MN) estimation. We used a MEG/EEG modeling set-up that reflected state-of-the-art practices of experimental research. Cortical surfaces were segmented and realistically-shaped three-layer anatomical head models were constructed, and forward models were built with the Galerkin boundary element method while varying the skull conductivity. Lead-field topographies and MN spatial filter vectors were compared across conductivities, and the localization and spatial spread of the MN estimators were assessed using intuitive resolution metrics. The results showed that the MN estimator is robust against errors in skull conductivity: the conductivity had a moderate effect on amplitudes of lead fields and spatial filter vectors, but the effect on corresponding morphologies was small. The localization performance of the EEG or combined MEG + EEG MN estimator was only minimally affected by the conductivity error, while the spread of the estimate varied slightly. Thus, the uncertainty with respect to skull conductivity should not prevent researchers from applying minimum norm estimation to EEG or combined MEG + EEG data. Comparing our results to those obtained earlier with dipole models shows that general judgment on the performance of an imaging modality should not be based on analysis with one source estimation method only. PMID:23639259
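    The minimum-norm estimator at the core of this study has a compact closed form: with lead field L, data y, and regularization parameter λ, the spatial filter is W = Lᵀ(LLᵀ + λI)⁻¹ and the source estimate is Wy. A minimal numpy sketch (the matrices and λ are illustrative, not taken from the paper's set-up):

```python
import numpy as np

def minimum_norm(L, y, lam):
    # L: (n_sensors, n_sources) lead-field matrix; y: (n_sensors,) data.
    G = L @ L.T
    # Regularized minimum-norm spatial filter W = L^T (L L^T + lam*I)^-1.
    W = L.T @ np.linalg.inv(G + lam * np.eye(L.shape[0]))
    return W @ y
```

    As λ → 0 (with full sensor rank) the estimate reproduces the measurements exactly while having minimal source norm; larger λ trades fit for stability against noise.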

  5. Polarizabilities and hyperpolarizabilities for the atoms Al, Si, P, S, Cl, and Ar: Coupled cluster calculations.

    PubMed

    Lupinetti, Concetta; Thakkar, Ajit J

    2005-01-22

    Accurate static dipole polarizabilities and hyperpolarizabilities are calculated for the ground states of the Al, Si, P, S, Cl, and Ar atoms. The finite-field computations use energies obtained with various ab initio methods, including Møller-Plesset perturbation theory and the coupled cluster approach. Excellent agreement with experiment is found for argon. The experimental α for Al is likely to be in error. Only limited comparisons are possible for the other atoms because hyperpolarizabilities have not been reported previously for most of them. Our recommended values of the mean dipole polarizability (in the order Al-Ar) are α/(e²a₀²E_h⁻¹) = 57.74, 37.17, 24.93, 19.37, 14.57, and 11.085, with an error estimate of ±0.5%. The recommended values of the mean second dipole hyperpolarizability (in the order Al-Ar) are γ/(e⁴a₀⁴E_h⁻³) = 2.02 × 10⁵, 4.31 × 10⁴, 1.14 × 10⁴, 6.51 × 10³, 2.73 × 10³, and 1.18 × 10³, with an error estimate of ±2%. Our recommended polarizability anisotropy values are Δα/(e²a₀²E_h⁻¹) = -25.60, 8.41, -3.63, and 1.71 for Al, Si, S, and Cl, respectively, with an error estimate of ±1%. The recommended hyperpolarizability anisotropies are Δγ/(e⁴a₀⁴E_h⁻³) = -3.88 × 10⁵, 4.16 × 10⁴, -7.00 × 10³, and 1.65 × 10³ for Al, Si, S, and Cl, respectively, with an error estimate of ±4%.
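    The finite-field procedure extracts α and γ as numerical derivatives of the field-dependent energy, using the expansion E(F) = E(0) − μF − ½αF² − (1/6)βF³ − (1/24)γF⁴ + …; central differences cancel the odd-order (μ, β) terms. A schematic sketch, with a synthetic polynomial energy standing in for the ab initio energies:

```python
import numpy as np

def finite_field_alpha_gamma(E, F=1e-2):
    # E: callable returning the energy at field strength f.
    # alpha from the second central difference, gamma from the fourth.
    alpha = -(E(F) - 2 * E(0.0) + E(-F)) / F**2
    gamma = -(E(2 * F) - 4 * E(F) + 6 * E(0.0) - 4 * E(-F) + E(-2 * F)) / F**4
    return alpha, gamma
```

    In practice the field step F must balance truncation error (too large) against numerical noise in the computed energies (too small).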

  6. An alternative subspace approach to EEG dipole source localization

    NASA Astrophysics Data System (ADS)

    Xu, Xiao-Liang; Xu, Bobby; He, Bin

    2004-01-01

    In the present study, we investigate a new approach to electroencephalography (EEG) three-dimensional (3D) dipole source localization by using a non-recursive subspace algorithm called FINES. In estimating source dipole locations, the present approach employs projections onto a subspace spanned by a small set of particular vectors (FINES vector set) in the estimated noise-only subspace instead of the entire estimated noise-only subspace in the case of classic MUSIC. The subspace spanned by this vector set is, in the sense of principal angle, closest to the subspace spanned by the array manifold associated with a particular brain region. By incorporating knowledge of the array manifold in identifying FINES vector sets in the estimated noise-only subspace for different brain regions, the present approach is able to estimate sources with enhanced accuracy and spatial resolution, thus enhancing the capability of resolving closely spaced sources and reducing estimation errors. The present computer simulations show, in EEG 3D dipole source localization, that compared to classic MUSIC, FINES has (1) better resolvability of two closely spaced dipolar sources and (2) better estimation accuracy of source locations. In comparison with RAP-MUSIC, FINES' performance is also better for the cases studied when the noise level is high and/or correlations among dipole sources exist.
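    The classic-MUSIC baseline that FINES improves on can be sketched compactly: eigendecompose the data covariance, take the estimated noise-only subspace, and scan candidate source topographies for those nearly orthogonal to it (the cost is near zero at a true source). A numpy sketch of the MUSIC scan only — the FINES vector-set selection itself is not reproduced here:

```python
import numpy as np

def music_scan(R, candidates, n_sources):
    # Eigendecomposition of the data covariance; the eigenvectors with the
    # smallest eigenvalues span the estimated noise-only subspace.
    w, V = np.linalg.eigh(R)                       # ascending eigenvalues
    En = V[:, :R.shape[0] - n_sources]             # noise subspace
    costs = []
    for a in candidates.T:                         # each column: a topography
        a = a / np.linalg.norm(a)
        costs.append(np.linalg.norm(En.T @ a) ** 2)  # ~0 at a true source
    return np.array(costs)
```

    FINES replaces En with a small set of vectors in the noise subspace chosen, via principal angles, to be closest to the array manifold of a particular brain region.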

  7. QTAIM charge-charge flux-dipole flux interpretation of electronegativity and potential models of the fluorochloromethane mean dipole moment derivatives.

    PubMed

    Silva, Arnaldo F; da Silva, João V; Haiduke, R L A; Bruns, Roy E

    2011-11-17

    Infrared fundamental vibrational intensities and quantum theory of atoms in molecules (QTAIM) charge-charge flux-dipole flux (CCFDF) contributions to the polar tensors of the fluorochloromethanes have been calculated at the QCISD/cc-pVTZ level. A root-mean-square error of 20.0 km mol⁻¹ has been found, compared to experimental error estimates of 14.4 and 21.1 km mol⁻¹ for MP2/6-311++G(3d,3p) results. The errors in the QCISD polar tensor elements and mean dipole moment derivatives are 0.059 e when compared with the experimental values. Both theoretical levels provide results showing that the dynamical charge and dipole fluxes provide significant contributions to the mean dipole moment derivatives and tend to be of opposite signs, canceling one another. Although the experimental mean dipole moment derivative values suggest that all the fluorochloromethane molecules have electronic structures consistent with a simple electronegativity model with transferable atomic charges for their terminal atoms, the QTAIM/CCFDF models confirm this only for the fluoromethanes. Whereas the fluorine atom does not suffer a saturation effect in its capacity to drain electronic charge from carbon atoms that are attached to other fluorine and chlorine atoms, the zero-flux electronic charge of the chlorine atom depends on the number and kind of the other substituent atoms. Both the QTAIM carbon charges (r = 0.990) and mean dipole moment derivatives (r = 0.996) are found to obey Siegbahn's potential model for carbon 1s electron ionization energies at the QCISD/cc-pVTZ level. The latter is a consequence of the carbon mean derivatives obeying the electronegativity model and not necessarily of their similarities with atomic charges. Atomic dipole contributions to the neighboring atom electrostatic potentials of the fluorochloromethanes are found to be of comparable size to the atomic charge contributions and increase the accuracy of Siegbahn's model for the QTAIM charge model results. Substitution effects of the hydrogen, fluorine, and chlorine atoms on the charge and dipole flux QTAIM contributions are found to be additive for the mean dipole derivatives of the fluorochloromethanes.

  8. Neutron electric dipole moment and possibilities of increasing accuracy of experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Serebrov, A. P., E-mail: serebrov@pnpi.spb.ru; Kolomenskiy, E. A.; Pirozhkov, A. N.

    The paper reports the results of an experiment searching for the neutron electric dipole moment (EDM), performed at the ILL reactor (Grenoble, France). The double-chamber magnetic resonance spectrometer (Petersburg Nuclear Physics Institute, PNPI) with prolonged holding of ultracold neutrons has been used. Sources of possible systematic errors are analyzed, and their influence on the measurement results is estimated. Ways and prospects of increasing the accuracy of the experiment are discussed.

  9. Applicability of the single equivalent point dipole model to represent a spatially distributed bio-electrical source

    NASA Technical Reports Server (NTRS)

    Armoundas, A. A.; Feldman, A. B.; Sherman, D. A.; Cohen, R. J.

    2001-01-01

    Although the single equivalent point dipole model has been used to represent well-localised bio-electrical sources, in realistic situations the source is distributed. Consequently, position estimates of point dipoles determined by inverse algorithms suffer from systematic error due to the non-exact applicability of the inverse model. In realistic situations, this systematic error cannot be avoided, a limitation that is independent of the complexity of the torso model used. This study quantitatively investigates the intrinsic limitations in the assignment of a location to the equivalent dipole due to a distributed electrical source. To simulate arrhythmic activity in the heart, a model of a wave of depolarisation spreading from a focal source over the surface of a spherical shell is used. The activity is represented by a sequence of concentric belt sources (obtained by slicing the shell with a sequence of parallel plane pairs), with constant dipole moment per unit length (circumferentially) directed parallel to the propagation direction. The distributed source is represented by N dipoles at equal arc lengths along the belt. The sum of the dipole potentials is calculated at predefined electrode locations. The inverse problem involves finding a single equivalent point dipole that best reproduces the electrode potentials due to the distributed source. The inverse problem is implemented by minimising the χ² per degree of freedom. It is found that the trajectory traced by the equivalent dipole is sensitive to the location of the spherical shell relative to the fixed electrodes. It is shown that this trajectory does not coincide with the sequence of geometrical centres of the consecutive belt sources. For distributed sources within a bounded spherical medium, displaced from the sphere's centre by 40% of the sphere's radius, it is found that the error in the equivalent dipole location varies from 3 to 20% for sources with size between 5 and 50% of the sphere's radius.
Finally, a method is devised to obtain the size of the distributed source during the cardiac cycle.
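    The inverse step described — find the single point dipole whose electrode potentials best reproduce those of the distributed source — separates naturally into a linear least-squares solve for the moment at each trial location plus a search over locations. A numpy sketch for an infinite homogeneous medium (the paper uses a bounded spherical medium and χ² weighting, which are not reproduced here); the names and the coarse location grid are illustrative:

```python
import numpy as np

def dipole_potentials(r0, p, electrodes, sigma=1.0):
    # Potential of a point current dipole in an infinite homogeneous medium.
    d = electrodes - r0
    r3 = np.linalg.norm(d, axis=1) ** 3
    return (d @ p) / (4 * np.pi * sigma * r3)

def fit_equivalent_dipole(v, electrodes, grid, sigma=1.0):
    # At each trial location the moment enters linearly (v = G @ p), so solve
    # a least-squares problem for p and keep the location with least residual.
    best = (np.inf, None, None)
    for r0 in grid:
        d = electrodes - r0
        r3 = np.linalg.norm(d, axis=1) ** 3
        G = d / (4 * np.pi * sigma * r3[:, None])
        p = np.linalg.lstsq(G, v, rcond=None)[0]
        err = np.linalg.norm(G @ p - v)
        if err < best[0]:
            best = (err, r0, p)
    return best[1], best[2]
```

    A real implementation would refine the location with a continuous optimizer rather than a grid, but the separation into a linear moment solve and a nonlinear location search is the same.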

  10. Kernel temporal enhancement approach for LORETA source reconstruction using EEG data.

    PubMed

    Torres-Valencia, Cristian A; Santamaria, M Claudia Joana; Alvarez, Mauricio A

    2016-08-01

    Reconstruction of brain sources from magnetoencephalography and electroencephalography (M/EEG) data is a well-known problem in the neuroengineering field. An inverse problem must be solved, and several methods have been proposed. Low Resolution Electromagnetic Tomography (LORETA) and its proposed variants, standardized LORETA (sLORETA) and standardized weighted LORETA (swLORETA), solve the inverse problem following a non-parametric approach, that is, by setting dipoles over the whole brain domain in order to estimate the dipole positions from the M/EEG data while assuming some spatial priors. Errors in source reconstruction arise from the low spatial resolution of the LORETA framework and the influence of noise in the observed data. In this work, a kernel temporal enhancement (kTE) is proposed as a preprocessing stage of the data that, in combination with the swLORETA method, improves source reconstruction. The results are quantified in terms of three dipole localization error metrics, and the swLORETA + kTE strategy obtained the best results across different signal-to-noise ratios (SNR) in random-dipole simulations from synthetic EEG data.

  11. Planck 2013 results. V. LFI calibration

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Aghanim, N.; Armitage-Caplan, C.; Arnaud, M.; Ashdown, M.; Atrio-Barandela, F.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bobin, J.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bridges, M.; Bucher, M.; Burigana, C.; Butler, R. C.; Cappellini, B.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Chen, X.; Chiang, L.-Y.; Christensen, P. R.; Church, S.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Dupac, X.; Efstathiou, G.; Enßlin, T. A.; Eriksen, H. K.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Gaier, T. C.; Galeotta, S.; Ganga, K.; Giard, M.; Giardino, G.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D.; Henrot-Versillé, S.; Hernández-Monteagudo, C.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Jaffe, A. H.; Jaffe, T. R.; Jewell, J.; Jones, W. C.; Juvela, M.; Kangaslahti, P.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Knox, L.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Laureijs, R. J.; Lawrence, C. R.; Leach, S.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maino, D.; Mandolesi, N.; Maris, M.; Marshall, D. J.; Martin, P. G.; Martínez-González, E.; Masi, S.; Massardi, M.; Matarrese, S.; Matthai, F.; Mazzotta, P.; Meinhold, P. 
R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Moneti, A.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Naselsky, P.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. U.; Novikov, D.; Novikov, I.; O'Dwyer, I. J.; Osborne, S.; Paci, F.; Pagano, L.; Paladini, R.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, D.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pierpaoli, E.; Pietrobon, D.; Plaszczynski, S.; Pointecouteau, E.; Polenta, G.; Ponthieu, N.; Popa, L.; Poutanen, T.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Ricciardi, S.; Riller, T.; Rocha, G.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Starck, J.-L.; Stolyarov, V.; Stompor, R.; Sureau, F.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Varis, J.; Vielva, P.; Villa, F.; Vittorio, N.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.

    2014-11-01

    We discuss the methods employed to photometrically calibrate the data acquired by the Low Frequency Instrument on Planck. Our calibration is based on a combination of the orbital dipole plus the solar dipole, caused respectively by the motion of the Planck spacecraft with respect to the Sun and by the motion of the solar system with respect to the cosmic microwave background (CMB) rest frame. The latter provides a signal of a few mK with the same spectrum as the CMB anisotropies and is visible throughout the mission. In this data release we rely on the characterization of the solar dipole as measured by WMAP. We also present preliminary results (at 44 GHz only) on the study of the orbital dipole, which agree with the WMAP value of the solar system speed within our uncertainties. We compute the calibration constant for each radiometer roughly once per hour, in order to keep track of changes in the detectors' gain. Since non-idealities in the optical response of the beams proved to be important, we implemented a fast convolution algorithm which considers the full beam response in estimating the signal generated by the dipole. Moreover, in order to further reduce the impact of residual systematics due to sidelobes, we estimated time variations in the calibration constant of the 30 GHz radiometers (the ones with the largest sidelobes) using the signal of an internal reference load at 4 K instead of the CMB dipole. We have estimated the accuracy of the LFI calibration following two strategies: (1) we have run a set of simulations to assess the impact of statistical errors and systematic effects in the instrument and in the calibration procedure; and (2) we have performed a number of internal consistency checks on the data and on the brightness temperature of Jupiter. Errors in the calibration of this Planck/LFI data release are expected to be about 0.6% at 44 and 70 GHz, and 0.8% at 30 GHz. These preliminary results at low and high ℓ are consistent with WMAP within uncertainties, and a comparison of power spectra indicates good consistency in the absolute calibration with HFI (0.3%) and a 1.4σ discrepancy with WMAP (0.9%).

  12. Galileo magnetometer results from the Millennium Mission: Rotation rate and secular variation of the internal magnetic field

    NASA Astrophysics Data System (ADS)

    Russell, C. T.; Yu, Z. J.; Kivelson, M. G.; Khurana, K. K.

    2000-10-01

    The System III (1965.0) rotation period of Jupiter, as defined by the IAU based on early radio astronomical data, is 9h 55m 29.71s. Higgins et al. (JGR, 22033, 1997) have suggested, based on more recent radio data, that this period is too high by perhaps 25 ms. In the 25 years since the Pioneer and Voyager measurements, such an error would cause a 6 degree shift in apparent longitude of features tied to the internal magnetic field. A comparison of the longitude of the projection of the dipole moment obtained over the period 1975-1979 with that obtained by Galileo today shows that the average dipole location has drifted only one degree eastward in System III (1965.0). This one-degree shift is not significant given the statistical errors. A possible resolution to this apparent paradox is that the dipole moment observation is sensitive to the lower order field while the radio measurement is sensitive to the high order field at low altitude. Estimates of the secular variation from the in situ data are being pursued.

  13. A new discrete dipole kernel for quantitative susceptibility mapping.

    PubMed

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures, such as the venous vasculature, might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel can be straightforwardly incorporated into existing QSM routines.
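    The continuous-Fourier dipole kernel used in most QSM pipelines is D(k) = 1/3 − k_z²/|k|². A numpy sketch of building it on an FFT grid, with a "discrete" variant that substitutes the discrete-Laplacian spectrum 2 − 2cos(2πf) for each (2πf)² term — an assumption standing in for the kernel actually derived in the paper, whose details differ:

```python
import numpy as np

def dipole_kernel(shape, discrete=False):
    # Frequency grids in cycles/sample, FFT ordering.
    fx, fy, fz = [np.fft.fftfreq(n) for n in shape]
    FX, FY, FZ = np.meshgrid(fx, fy, fz, indexing="ij")
    if discrete:
        # Discrete-Laplacian spectrum along each axis (illustrative variant).
        kx2, ky2, kz2 = (2 - 2 * np.cos(2 * np.pi * F) for F in (FX, FY, FZ))
    else:
        # Continuous-Fourier squared frequencies.
        kx2, ky2, kz2 = ((2 * np.pi * F) ** 2 for F in (FX, FY, FZ))
    k2 = kx2 + ky2 + kz2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz2 / k2             # D(k) = 1/3 - kz^2/|k|^2
    D[0, 0, 0] = 0.0                         # undefined DC term, zero by convention
    return D
```

    In either variant the kernel values stay within [-2/3, 1/3], since k_z²/|k|² lies in [0, 1].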

  14. Imaging dipole flow sources using an artificial lateral-line system made of biomimetic hair flow sensors

    PubMed Central

    Dagamseh, Ahmad; Wiegerink, Remco; Lammerink, Theo; Krijnen, Gijs

    2013-01-01

    In Nature, fish have the ability to localize prey, school, navigate, etc., using the lateral-line organ. Artificial hair flow sensors arranged in a linear array (inspired by the lateral-line system (LSS) in fish) have been applied to measure airflow patterns at the sensor positions. Here, we take advantage of both biomimetic artificial hair-based flow sensors arranged as an LSS and beamforming techniques to demonstrate dipole-source localization in air. Modelling and measurement results show the artificial lateral line's ability to image the position of dipole sources accurately, with an estimation error of less than 0.14 times the array length. This opens up possibilities for flow-based, near-field environment mapping that can be beneficial to, for example, biologists and robot guidance applications. PMID:23594816

  15. Model misspecification detection by means of multiple generator errors, using the observed potential map.

    PubMed

    Zhang, Z; Jewett, D L

    1994-01-01

    Due to model misspecification, currently used Dipole Source Localization (DSL) methods may contain Multiple-Generator Errors (MulGenErrs) when fitting simultaneously active dipoles. The size of the MulGenErr is a function of both the model used and the dipole parameters, including the dipoles' waveforms (time-varying magnitudes). For a given fitting model, by examining the variation of the MulGenErrs (or the fit parameters) under different waveforms for the same generating dipoles, the accuracy of the fitting model for this set of dipoles can be determined. This method of testing model misspecification can be applied to evoked potential maps even when the parameters of the generating dipoles are unknown. The dipole parameters fitted in a model should only be accepted if the model can be shown to be sufficiently accurate.

  16. Variations in the geomagnetic dipole moment during the Holocene and the past 50 kyr

    NASA Astrophysics Data System (ADS)

    Knudsen, Mads Faurschou; Riisager, Peter; Donadini, Fabio; Snowball, Ian; Muscheler, Raimund; Korhonen, Kimmo; Pesonen, Lauri J.

    2008-07-01

    All absolute paleointensity data published in peer-reviewed journals were recently compiled in the GEOMAGIA50 database. Based on the information in GEOMAGIA50, we reconstruct variations in the geomagnetic dipole moment over the past 50 kyr, with a focus on the Holocene period. A running-window approach is used to determine the axial dipole moment that provides the optimal least-squares fit to the paleointensity data, whereas associated error estimates are constrained using a bootstrap procedure. We subsequently compare the reconstruction from this study with previous reconstructions of the geomagnetic dipole moment, including those based on cosmogenic radionuclides (¹⁰Be and ¹⁴C). This comparison generally lends support to the axial dipole moments obtained in this study. Our reconstruction shows that the evolution of the dipole moment was highly dynamic, and the recently observed rates of change (5% per century) do not appear unique. We observe no apparent link between the occurrence of archeomagnetic jerks and changes in the geomagnetic dipole moment, suggesting that archeomagnetic jerks most likely represent drastic changes in the orientation of the geomagnetic dipole axis or periods characterized by large secular variation of the non-dipole field. This study also shows that the Holocene geomagnetic dipole moment was high compared to that of the preceding ~40 kyr, and that ~4 × 10²² Am² appears to represent a critical threshold below which geomagnetic excursions and reversals occur.
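    The bootstrap error bars described can be reproduced schematically: resample the paleointensity-derived moment values inside a window with replacement, and take the spread of the resampled means as the error estimate. A generic numpy sketch; the paper's actual windowing, weighting, and fitting are not reproduced:

```python
import numpy as np

def bootstrap_dipole_moment(values, n_boot=2000, seed=0):
    # Window estimate as the mean of the values in the window; bootstrap
    # resampling with replacement gives the associated error estimate.
    rng = np.random.default_rng(seed)
    values = np.asarray(values, float)
    boots = rng.choice(values, size=(n_boot, values.size), replace=True).mean(axis=1)
    return values.mean(), boots.std(ddof=1)
```

    For roughly independent data the bootstrap spread tracks the standard error of the mean, but unlike the analytic formula it extends directly to weighted fits and non-Gaussian scatter.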

  17. Magnetic dipole moment determination by near-field analysis

    NASA Technical Reports Server (NTRS)

    Eichhorn, W. L.

    1972-01-01

    A method is presented for determining the magnetic moment of a spacecraft from magnetic field data taken in a limited region of space close to the spacecraft. The spacecraft's magnetic field equations are derived from first principles. With measurements of this field restricted to certain points in space, the near-field equations for the spacecraft are derived. These equations are solved for the dipole moment by a least squares procedure. A method by which one can estimate the magnitude of the error in the calculations is also presented. This technique was thoroughly tested on a computer. The test program is described and evaluated, and partial results are presented.

  18. The Dipole Segment Model for Axisymmetrical Elongated Asteroids

    NASA Astrophysics Data System (ADS)

    Zeng, Xiangyuan; Zhang, Yonglong; Yu, Yang; Liu, Xiangdong

    2018-02-01

    Various simplified models have been investigated as a way to understand the complex dynamical environment near irregular asteroids. A dipole segment model is explored in this paper, one that is composed of a massive straight segment and two point masses at the extremities of the segment. Given an explicitly simple form of the potential function that is associated with the dipole segment model, five topological cases are identified with different sets of system parameters. Locations, stabilities, and variation trends of the system equilibrium points are investigated in a parametric way. The exterior potential distribution of nearly axisymmetrical elongated asteroids is approximated by minimizing the acceleration error in a test zone. The acceleration error minimization process determines the parameters of the dipole segment. The near-Earth asteroid (8567) 1996 HW1 is chosen as an example to evaluate the effectiveness of the approximation method for the exterior potential distribution. The advantages of the dipole segment model over the classical dipole and the traditional segment are also discussed. Percent error of acceleration and the degree of approximation are illustrated by using the dipole segment model to approximate four more asteroids. The high efficiency of the simplified model over the polyhedron is clearly demonstrated by comparing the CPU time.

  19. Europa's induced magnetic field: How much of the signal is from the ocean?

    NASA Astrophysics Data System (ADS)

    Crary, F. J.; Dols, V. J.; Jia, X.; Paty, C. S.; Hale, J. M.

    2017-12-01

    The existence of a sub-surface ocean within Europa was demonstrated by the Galileo spacecraft's measurements of an induced dipole magnetic field. This field, produced by the time-variable background magnetic field from Jupiter, is a result of currents flowing within an electrically conductive layer inside Europa, believed to be a liquid ocean. Unfortunately, interpretation of the Galileo results is complicated by the interaction between Jupiter's magnetosphere and Europa and its ionosphere. This interaction also produces magnetic field perturbations which add uncertainty and systematic errors to the determination of the induced field. Here, we estimate the contribution of the plasma interaction to the observed magnetic dipole, and discuss the implications for the properties of Europa's subsurface ocean. The Galileo data have primarily been analyzed by fitting a dipole to the observed magnetic field, without correcting for plasma effects. The data were fit to a dipole magnetic field, and the resulting magnetic moment is the sum of the induced moment from the ocean and a contribution from the plasma interaction. To estimate this contribution, we analyze the results of numerical simulations using exactly the same approach that has been used to analyze the real data. Since we know what ocean dipole was inserted in the models' boundary conditions, we can therefore calculate the contribution from the plasma interaction. We have previously used this approach to estimate the sensitivity of the results to upstream plasma conditions. However, there is no assurance that any one particular model is correct. In this work, we apply this approach to several different types of simulations, shedding light on the uncertainties in the ocean-induced signature.

  20. Exchange-Hole Dipole Dispersion Model for Accurate Energy Ranking in Molecular Crystal Structure Prediction.

    PubMed

    Whittleton, Sarah R; Otero-de-la-Roza, A; Johnson, Erin R

    2017-02-14

    Accurate energy ranking is a key facet to the problem of first-principles crystal-structure prediction (CSP) of molecular crystals. This work presents a systematic assessment of B86bPBE-XDM, a semilocal density functional combined with the exchange-hole dipole moment (XDM) dispersion model, for energy ranking using 14 compounds from the first five CSP blind tests. Specifically, the set of crystals studied comprises 11 rigid, planar compounds and 3 co-crystals. The experimental structure was correctly identified as the lowest in lattice energy for 12 of the 14 total crystals. One of the exceptions is 4-hydroxythiophene-2-carbonitrile, for which the experimental structure was correctly identified once a quasi-harmonic estimate of the vibrational free-energy contribution was included, evidencing the occasional importance of thermal corrections for accurate energy ranking. The other exception is an organic salt, where charge-transfer error (also called delocalization error) is expected to cause the base density functional to be unreliable. Provided the choice of base density functional is appropriate and an estimate of temperature effects is used, XDM-corrected density-functional theory is highly reliable for the energetic ranking of competing crystal structures.

  1. Application of the finite-field coupled-cluster method to calculate molecular properties relevant to electron electric-dipole-moment searches

    NASA Astrophysics Data System (ADS)

    Abe, M.; Prasannaa, V. S.; Das, B. P.

    2018-03-01

    Heavy polar diatomic molecules are currently among the most promising probes of fundamental physics. Constraining the electric dipole moment of the electron (eEDM), in order to explore physics beyond the standard model, requires a synergy of molecular experiment and theory. Recent experimental advances in this field have motivated us to implement a finite-field coupled-cluster (FFCC) approach. This approach has distinct advantages over the theoretical methods that we had used earlier in the analysis of eEDM searches. We used relativistic FFCC to calculate molecular properties of interest to eEDM experiments, namely the effective electric field (Eeff) and the permanent electric dipole moment (PDM). We theoretically determine these quantities for the alkaline-earth monofluorides (AEMs), the mercury monohalides (HgX), and PbF. The latter two systems, as well as BaF from the AEMs, are of interest to eEDM searches. We also report the calculation of these properties using a relativistic finite-field coupled-cluster approach with single, double, and perturbative triple excitations, which is considered the gold standard of electronic structure calculations. We also present a detailed error estimate, including errors that stem from our choice of basis sets and from higher-order correlation effects.

  2. Improved method for retinotopy constrained source estimation of visual evoked responses

    PubMed Central

    Hagler, Donald J.; Dale, Anders M.

    2011-01-01

    Retinotopy constrained source estimation (RCSE) is a method for non-invasively measuring the time courses of activation in early visual areas using magnetoencephalography (MEG) or electroencephalography (EEG). Unlike conventional equivalent current dipole or distributed source models, the use of multiple, retinotopically-mapped stimulus locations to simultaneously constrain the solutions allows for the estimation of independent waveforms for visual areas V1, V2, and V3, despite their close proximity to each other. We describe modifications that improve the reliability and efficiency of this method. First, we find that increasing the number and size of visual stimuli results in source estimates that are less susceptible to noise. Second, to create a more accurate forward solution, we have explicitly modeled the cortical point spread of individual visual stimuli. Dipoles are represented as extended patches on the cortical surface, which take into account the estimated receptive field size at each location in V1, V2, and V3 as well as the contributions from contralateral, ipsilateral, dorsal, and ventral portions of the visual areas. Third, we implemented a map fitting procedure to deform a template to match individual subject retinotopic maps derived from functional magnetic resonance imaging (fMRI). This improves the efficiency of the overall method by allowing automated dipole selection, and it makes the results less sensitive to physiological noise in fMRI retinotopy data. Finally, the iteratively reweighted least squares (IRLS) method was used to reduce the contribution from stimulus locations with high residual error for robust estimation of visual evoked responses. PMID:22102418
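The IRLS step mentioned above can be sketched for a generic linear model. This is a toy Huber-weighted version (the threshold, iteration count, and names are illustrative, not the authors' exact implementation): residuals above a threshold are down-weighted, reducing the influence of stimulus locations with high residual error.

```python
import numpy as np

def irls(A, y, n_iter=30, delta=1.0):
    # Iteratively reweighted least squares with Huber weights: rows whose
    # residual exceeds delta get weight delta/|r| instead of 1, bounding
    # the influence of outliers on the fitted coefficients.
    x, *_ = np.linalg.lstsq(A, y, rcond=None)
    for _ in range(n_iter):
        r = y - A @ x
        w = np.where(np.abs(r) <= delta,
                     1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        sw = np.sqrt(w)  # weighted LS = ordinary LS on sqrt(w)-scaled rows
        x, *_ = np.linalg.lstsq(A * sw[:, None], sw * y, rcond=None)
    return x
```

With a handful of grossly corrupted observations, the reweighted solution stays near the true coefficients while a plain least-squares fit is pulled away.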

  3. In-orbit offline estimation of the residual magnetic dipole biases of the POPSAT-HIP1 nanosatellite

    NASA Astrophysics Data System (ADS)

    Seriani, S.; Brama, Y. L.; Gallina, P.; Manzoni, G.

    2016-05-01

    The nanosatellite POPSAT-HIP1 is a CubeSat-class spacecraft launched on the 19th of June 2014 to test cold-gas-based micro-thrusters; as of April 2015, it is in a low Earth orbit at around 600 km altitude and is equipped, notably, with a magnetometer. In order to improve the attitude-control performance of nanosatellites like POPSAT, it is extremely useful to determine the main biases that act on the magnetometer while in orbit, for example those generated by the residual magnetic moment of the satellite itself and those originating from the transmitter. Thus, we present a methodology for in-orbit offline estimation of the magnetometer bias caused by the residual magnetic moment of the satellite (we refer to this as the residual magnetic dipole bias, or RMDB). The method is based on a genetic algorithm coupled with a simplex algorithm, and provides the RMDB vector as output, requiring solely the magnetometer readings. This is exploited to compute the transmitter magnetic dipole bias (TMDB) by comparing the RMDB computed with the transmitter operating and idling. An experimental investigation is carried out by acquiring the magnetometer outputs in different phases of the spacecraft's life (stabilized, maneuvering, free tumble). Results show remarkable accuracy, with an RMDB orientation error between 3.6° and 6.2° and a magnitude error of around 7%. TMDB estimates show similar consistency. Finally, we note some drawbacks of the methodology, as well as possible improvements, e.g. precise logging of transmitter operations. In general, however, the methodology proves to be quite effective even with sparse and noisy data, and promises to be valuable for improving attitude control systems.
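The paper estimates the bias with a genetic-plus-simplex search; as a simpler illustration of the same underlying idea (a constant bias shifts the readings off a sphere of constant geomagnetic field magnitude), here is a closed-form linear sphere fit. The noise-free setting and names are illustrative only:

```python
import numpy as np

def estimate_bias(readings):
    # Hard-iron bias estimation: unbiased readings lie on a sphere
    # |m - b| = R, so |m|^2 = 2 m.b + (R^2 - |b|^2), which is linear in the
    # bias b and the scalar c = R^2 - |b|^2. Solve by least squares.
    A = np.column_stack([2.0 * readings, np.ones(len(readings))])
    y = np.sum(readings**2, axis=1)
    sol, *_ = np.linalg.lstsq(A, y, rcond=None)
    return sol[:3]  # estimated bias vector
```

With noisy data this linear fit is usually only a starting point; iterative schemes such as the GA/simplex combination described above can then refine it.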

  4. Interpretation of the MEG-MUSIC scan in biomagnetic source localization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mosher, J.C.; Lewis, P.S.; Leahy, R.M.

    1993-09-01

    MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak at unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks which are less than unity. Previously we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyzed the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.
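The subspace-projection metric can be sketched in a few lines. The lead fields and dimensions below are synthetic stand-ins (random vectors, not an MEG forward model), but the mechanics are the same: extract the signal subspace from the data, then score each candidate location by how well its topography projects onto that subspace.

```python
import numpy as np

def music_scan(data, lead_fields, n_sources):
    # Estimate the signal subspace from the SVD of the spatio-temporal
    # data matrix (sensors x time samples).
    U, s, _ = np.linalg.svd(data, full_matrices=False)
    Us = U[:, :n_sources]
    # Subspace correlation of each normalized candidate lead field with the
    # signal subspace; peaks at 1 for topographies inside the subspace.
    scores = []
    for g in lead_fields:
        g = g / np.linalg.norm(g)
        scores.append(np.linalg.norm(Us.T @ g))
    return np.array(scores)
```

With noise added to `data`, the estimated subspace rotates slightly and the peaks drop below unity, which is exactly the effect the paper analyzes.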

  5. Probing the Cosmological Principle in the counts of radio galaxies at different frequencies

    NASA Astrophysics Data System (ADS)

    Bengaly, Carlos A. P.; Maartens, Roy; Santos, Mario G.

    2018-04-01

    According to the Cosmological Principle, the matter distribution on very large scales should have a kinematic dipole that is aligned with that of the CMB. We determine the dipole anisotropy in the number counts of two all-sky surveys of radio galaxies. For the first time, this analysis is presented for the TGSS survey, allowing us to check the consistency of the radio dipole at low and high frequencies by comparing the results with the well-known NVSS survey. We match the flux thresholds of the catalogues, with flux limits chosen to minimise systematics, and adopt a strict masking scheme. We find dipole directions that are in good agreement with each other and with the CMB dipole. In order to compare the amplitude of the dipoles with theoretical predictions, we produce sets of lognormal realisations. Our realisations include the theoretical kinematic dipole, galaxy clustering, Poisson noise, simulated redshift distributions which fit the NVSS and TGSS source counts, and errors in flux calibration. The measured dipole for NVSS is ~2 times larger than predicted by the mock data. For TGSS, the dipole is almost 5 times larger than predicted, even after checking for completeness and taking account of errors in source fluxes and in flux calibration. Further work is required to understand the nature of the systematics that are the likely cause of the anomalously large TGSS dipole amplitude.
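A dipole in number counts can be estimated with a simple linear fit of counts per pixel against the pixel direction, since the model N(n̂) = N0(1 + d·n̂) is linear in N0 and N0·d. This noise-free toy (no masking, clustering, or Poisson noise, unlike the realisations described above) illustrates the estimator:

```python
import numpy as np

def fit_count_dipole(directions, counts):
    # Model counts per pixel as N0 * (1 + d . n) and fit [N0, N0*d]
    # by linear least squares over unit direction vectors n.
    A = np.column_stack([np.ones(len(directions)), directions])
    coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
    return coef[1:] / coef[0]  # dipole vector d
```

In practice the masked sky and shot noise bias and scatter this estimate, which is why the paper calibrates the measured amplitude against mock realisations.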

  6. Requirements for Coregistration Accuracy in On-Scalp MEG.

    PubMed

    Zetter, Rasmus; Iivanainen, Joonas; Stenroos, Matti; Parkkonen, Lauri

    2018-06-22

    Recent advances in magnetic sensing have made on-scalp magnetoencephalography (MEG) possible. In particular, optically-pumped magnetometers (OPMs) have reached sensitivity levels that enable their use in MEG. In contrast to the SQUID sensors used in current MEG systems, OPMs do not require cryogenic cooling and can thus be placed within millimetres of the head, enabling the construction of sensor arrays that conform to the shape of an individual's head. To properly estimate the location of neural sources within the brain, one must accurately know the position and orientation of the sensors in relation to the head. With adaptable on-scalp MEG sensor arrays, this coregistration becomes more challenging than in current SQUID-based MEG systems that use rigid sensor arrays. Here, we used simulations to quantify how accurately one needs to know the position and orientation of sensors in an on-scalp MEG system. The effects that different types of localisation errors have on forward modelling and on source estimates obtained by minimum-norm estimation, dipole fitting, and beamforming are detailed. We found that sensor position errors generally have a larger effect than orientation errors and that these errors affect the localisation accuracy of superficial sources the most. To obtain similar or higher accuracy than with current SQUID-based MEG systems, RMS sensor position and orientation errors should be [Formula: see text] and [Formula: see text], respectively.
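The intuition that position errors hurt superficial sources most follows from the steep falloff of the dipole field: the relative field change for a fixed sensor displacement grows as the source-sensor distance shrinks. A toy check (point magnetic dipole, arbitrary units, hypothetical names):

```python
import numpy as np

def dipole_B(m, r):
    # Field of a point dipole m at a single observation point r (mu0/4pi = 1).
    rn = np.linalg.norm(r)
    rhat = r / rn
    return (3.0 * rhat * np.dot(rhat, m) - m) / rn**3

def forward_error(m, r, dr):
    # Relative change in the modelled field when the sensor position is
    # mis-specified by dr; this is the forward-model error a coregistration
    # error of dr induces at that sensor.
    b0 = dipole_B(m, r)
    b1 = dipole_B(m, r + dr)
    return np.linalg.norm(b1 - b0) / np.linalg.norm(b0)
```

The same 1 mm position error produces a larger relative forward-model error for a sensor at 2 cm standoff than at 4 cm, mirroring the paper's finding about superficial sources.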

  7. Contribution of relativistic quantum chemistry to electron’s electric dipole moment for CP violation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abe, M., E-mail: minoria@tmu.ac.jp; Gopakumar, G., E-mail: gopakumargeetha@gmail.com; Hada, M., E-mail: hada@tmu.ac.jp

    The search for the electric dipole moment of the electron (eEDM) is important because it is a probe of charge conjugation-parity (CP) violation. It can also shed light on new physics beyond the standard model. It is not possible to measure the eEDM directly. However, the interaction energy involving the effective electric field (Eeff) acting on an electron in a molecule and the eEDM can be measured. This quantity can be combined with Eeff, which is calculated by relativistic molecular orbital theory, to determine the eEDM. Previous calculations of Eeff were not sufficiently accurate in their treatment of relativistic or electron correlation effects. We therefore developed a new method to calculate Eeff based on a four-component relativistic coupled-cluster theory. We demonstrate our method for the YbF molecule, one of the promising candidates for the eEDM search. Using a very large basis set and without freezing any core orbitals, we obtain a value of 23.1 GV/cm for Eeff in YbF with an estimated error of less than 10%. The error is assessed by comparing our calculations with experiment for two properties relevant to Eeff: the permanent dipole moment and the hyperfine coupling constant. Our method paves the way to calculate properties of various kinds of molecules which can be described by a single-reference wave function.

  8. Electron electric dipole moment and hyperfine interaction constants for ThO

    NASA Astrophysics Data System (ADS)

    Fleig, Timo; Nayak, Malaya K.

    2014-06-01

    A recently implemented relativistic four-component configuration interaction approach to study P- and T-odd interaction constants in atoms and molecules is employed to determine the electron electric dipole moment effective electric field in the Ω=1 first excited state of the ThO molecule. We obtain a value of Eeff = 75.2 GV/cm with an estimated error bar of 3%; this value is 10% smaller than a previously reported result (Skripnikov et al., 2013). Using the same wavefunction model we obtain an excitation energy of Tv(Ω=1) = 5410 cm⁻¹, in accord with the experimental value to within 2%. In addition, we report the implementation of the magnetic hyperfine interaction constant A|| as an expectation value, resulting in A|| = -1339 MHz for the Ω=1 state in ThO. The smaller effective electric field raises the previously determined upper bound (Baron et al., 2014) on the electron electric dipole moment to |de| < 9.7×10⁻²⁹ e cm and thus mildly relaxes constraints on possible extensions of the Standard Model of particle physics.

  9. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    PubMed

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization-enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and the relationships between the true local field and the estimated local field for REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization-enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and the estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiments, no obvious errors due to artifacts were present with REV-SHARP. The proposed REV-SHARP is a new method combining variable spherical kernel sizes with Tikhonov regularization. This technique may enable more accurate background field removal and thus help achieve better accuracy in QSM.

  10. 3D magnetic sources' framework estimation using Genetic Algorithm (GA)

    NASA Astrophysics Data System (ADS)

    Ponte-Neto, C. F.; Barbosa, V. C.

    2008-05-01

    We present a method for inverting total-field anomalies to determine the frameworks of simple 3D magnetic sources such as batholiths, dikes, sills, geological contacts, and kimberlite and lamproite pipes. We use a genetic algorithm (GA) to obtain the magnetic sources' frameworks and their magnetic features simultaneously. Specifically, we estimate the magnetization direction (inclination and declination), the total dipole moment intensity, and the horizontal and vertical positions, in Cartesian coordinates, of a finite set of elementary magnetic dipoles. The spatial distribution of these magnetic dipoles composes the skeletal outline of the geologic sources. We assume that the geologic sources have a homogeneous magnetization distribution, and thus all dipoles have the same magnetization direction and dipole moment intensity. To implement the GA, we use real-valued encoding with crossover, mutation, and elitism. To obtain a unique and stable solution, we set upper and lower bounds on declination and inclination of [0°, 360°] and [-90°, 90°], respectively. We also impose a criterion of minimum scattering of the dipole-position coordinates, to guarantee that the spatial distribution of the dipoles (defining the source skeleton) is as close as possible to a continuous distribution. To this end, we fix the upper and lower bounds of the dipole moment intensity and evaluate the dipole-position estimates. If the dipole scattering is greater than a value expected by the interpreter, the upper bound of the dipole moment intensity is reduced by 10%. We repeat this procedure until the dipole scattering and the data fit are acceptable. We apply our method to noise-corrupted magnetic data from simulated 3D magnetic sources with simple geometries located at different depths. In tests simulating sources such as a sphere and a cube, all estimates of the dipole coordinates agree with the centers of mass of these sources. For prismatic sources elongated in an arbitrary direction, we estimate dipole-position coordinates coincident with the principal axis of the sources. In tests with synthetic data simulating the magnetic anomaly produced by intrusive 2D structures such as dikes and sills, the estimated dipole coordinates coincide with the principal plane of these 2D sources. We also inverted aeromagnetic data from Serra do Cabral, in southeastern Brazil, and estimated dipoles distributed on a horizontal plane at a depth of 30 km, with inclination and declination of 59.1° and -48.0°, respectively. The results show close agreement with previous interpretations.
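The forward model behind such an inversion is just superposed dipole fields projected onto the main-field direction. A minimal sketch (unit constants, equal moments for all dipoles, hypothetical names), which a GA would evaluate repeatedly when scoring candidate dipole configurations:

```python
import numpy as np

def dipole_b(m, r):
    # Vector field of a point dipole m at displacement vectors r (N, 3),
    # in units with mu0/4pi = 1.
    rn = np.linalg.norm(r, axis=-1, keepdims=True)
    rhat = r / rn
    return (3.0 * rhat * np.sum(rhat * m, axis=-1, keepdims=True) - m) / rn**3

def total_field_anomaly(dipole_pos, m, obs, t_hat):
    # Total-field anomaly of a set of equal elementary dipoles: the summed
    # dipole field projected onto the main-field unit direction t_hat.
    dT = np.zeros(len(obs))
    for p in dipole_pos:
        dT += dipole_b(m, obs - p) @ t_hat
    return dT
```

The GA's fitness would compare `total_field_anomaly` for a candidate dipole set against the observed anomaly, plus the scattering penalty described above.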

  11. The localization of focal heart activity via body surface potential measurements: tests in a heterogeneous torso phantom

    NASA Astrophysics Data System (ADS)

    Wetterling, F.; Liehr, M.; Schimpf, P.; Liu, H.; Haueisen, J.

    2009-09-01

    The non-invasive localization of focal heart activity via body surface potential measurements (BSPM) could greatly benefit the understanding and treatment of arrhythmic heart diseases. However, the in vivo validation of source localization algorithms is rather difficult with currently available measurement techniques. In this study, we used a physical torso phantom composed of different conductive compartments and seven dipoles, which were placed in the anatomical position of the human heart in order to assess the performance of the Recursively Applied and Projected Multiple Signal Classification (RAP-MUSIC) algorithm. Electric potentials were measured on the torso surface for single dipoles with and without additional uncorrelated or correlated dipole activity. The localization error averaged 11 ± 5 mm over 22 dipoles, which demonstrates the ability of RAP-MUSIC to distinguish an uncorrelated dipole from surrounding source activity. For the first time, real computational modelling errors could be included in the validation procedure thanks to the physically modelled heterogeneities. In conclusion, the introduced heterogeneous torso phantom can be used to validate state-of-the-art algorithms under nearly realistic measurement conditions.

  12. Spontaneous default mode network phase-locking moderates performance perceptions under stereotype threat

    PubMed Central

    Leitner, Jordan B.; Duran-Jordan, Kelly; Magerman, Adam B.; Schmader, Toni; Allen, John J. B.

    2015-01-01

    This study assessed whether individual differences in self-oriented neural processing were associated with performance perceptions of minority students under stereotype threat. Resting electroencephalographic activity recorded in white and minority participants was used to predict later estimates of task errors and self-doubt on a presumed measure of intelligence. We assessed spontaneous phase-locking between dipole sources in left lateral parietal cortex (LPC), precuneus/posterior cingulate cortex (P/PCC), and medial prefrontal cortex (MPFC); three regions of the default mode network (DMN) that are integral for self-oriented processing. Results revealed that minorities with greater LPC-P/PCC phase-locking in the theta band reported more accurate error estimations. All individuals experienced less self-doubt to the extent they exhibited greater LPC-MPFC phase-locking in the alpha band but this effect was driven by minorities. Minorities also reported more self-doubt to the extent they overestimated errors. Findings reveal novel neural moderators of stereotype threat effects on subjective experience. Spontaneous synchronization between DMN regions may play a role in anticipatory coping mechanisms that buffer individuals from stereotype threat. PMID:25398433

  13. Frontal midline theta and the error-related negativity: neurophysiological mechanisms of action regulation.

    PubMed

    Luu, Phan; Tucker, Don M; Makeig, Scott

    2004-08-01

    The error-related negativity (ERN) is an event-related potential (ERP) peak occurring between 50 and 100 ms after the commission of a speeded motor response that the subject immediately realizes to be in error. The ERN is believed to index brain processes that monitor action outcomes. Our previous analyses of ERP and EEG data suggested that the ERN is dominated by partial phase-locking of intermittent theta-band EEG activity. In this paper, this possibility is further evaluated. The possibility that the ERN is produced by phase-locking of theta-band EEG activity was examined by analyzing the single-trial EEG traces from a forced-choice speeded response paradigm before and after applying theta-band (4-7 Hz) filtering and by comparing the averaged and single-trial phase-locked (ERP) and non-phase-locked (other) EEG data. Electrical source analyses were used to estimate the brain sources involved in the generation of the ERN. Beginning just before incorrect button presses in a speeded choice response paradigm, midfrontal theta-band activity increased in amplitude and became partially and transiently phase-locked to the subject's motor response, accounting for 57% of ERN peak amplitude. The portion of the theta-EEG activity increase remaining after subtracting the response-locked ERP from each trial was larger and longer lasting after error responses than after correct responses, extending on average 400 ms beyond the ERN peak. Multiple equivalent-dipole source analysis suggested 3 possible equivalent dipole sources of the theta-bandpassed ERN, while the scalp distribution of non-phase-locked theta amplitude suggested the presence of additional frontal theta-EEG sources. These results appear consistent with a body of research that demonstrates a relationship between limbic theta activity and action regulation, including error monitoring and learning.

  14. Multiple-generator errors are unavoidable under model misspecification.

    PubMed

    Jewett, D L; Zhang, Z

    1995-08-01

    Model misspecification poses a major problem for dipole source localization (DSL) because it causes insidious multiple-generator errors (MulGenErrs) to occur in the fitted dipole parameters. This paper describes how and why this occurs, based upon simple algebraic considerations. MulGenErrs must occur, to some degree, in any DSL analysis of real data, because some model misspecification is always present and, mathematically, the equations for simultaneously active generators must take a different form from the equations for each generator active alone.
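The algebraic point can be made with a one-dimensional toy: fit a single-generator model to data produced by two simultaneously active generators, and the best-fitting location matches neither true generator. Everything here is illustrative (Gaussian topographies standing in for dipole scalp fields):

```python
import numpy as np

def topo(x, loc, width=1.0):
    # Toy scalp topography of a single generator at position `loc`.
    return np.exp(-(x - loc)**2 / (2.0 * width**2))

def best_single_fit(x, data, grid):
    # Grid-search the single-generator model that best explains `data`,
    # fitting the amplitude in closed form for each candidate location.
    errs = []
    for loc in grid:
        g = topo(x, loc)
        a = (g @ data) / (g @ g)  # least-squares optimal amplitude
        errs.append(np.sum((data - a * g)**2))
    return grid[int(np.argmin(errs))]
```

The fitted "generator" lands between the two true ones: a systematic multiple-generator error that no amount of noise reduction removes, because the single-source model itself is misspecified.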

  15. Distance measurements in Au nanoparticles functionalized with nitroxide radicals and Gd(3+)-DTPA chelate complexes.

    PubMed

    Yulikov, Maxim; Lueders, Petra; Warsi, Muhammad Farooq; Chechik, Victor; Jeschke, Gunnar

    2012-08-14

    Nanosized gold particles were functionalised with two types of paramagnetic surface tags, one having a nitroxide radical and the other one carrying a DTPA complex loaded with Gd(3+). Selective measurements of nitroxide-nitroxide, Gd(3+)-nitroxide and Gd(3+)-Gd(3+) distances were performed on this system and information on the distance distribution in the three types of spin pairs was obtained. A numerical analysis of the dipolar frequency distributions is presented for Gd(3+) centres with moderate magnitudes of zero-field splitting, in the range of detection frequencies and resonance fields where the high-field approximation is only roughly valid. The dipolar frequency analysis confirms the applicability of DEER for distance measurements in such complexes and gives an estimate for the magnitudes of possible systematic errors due to the non-ideality of the measurement of the dipole-dipole interaction.
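For reference, the secular dipolar coupling that DEER measures between two g ≈ 2 electron spins has a simple closed form. The 52.04 MHz·nm³ constant is the standard value for a nitroxide pair; using it here assumes both spins have g ≈ 2 and that the high-field approximation discussed above holds (only roughly valid for the Gd(3+) centres in this study):

```python
import numpy as np

def dipolar_freq_mhz(r_nm, theta=np.pi / 2):
    # Secular dipole-dipole frequency for two g ~ 2 electron spins:
    #   nu(r, theta) = (52.04 MHz nm^3 / r^3) * (1 - 3 cos^2 theta)
    # r_nm is the interspin distance in nm, theta the angle between the
    # interspin vector and the static field.
    return 52.04 / r_nm**3 * (1.0 - 3.0 * np.cos(theta)**2)
```

Inverting the perpendicular-orientation frequency, r = (52.04/ν⊥)^(1/3), is the basic distance readout; deviations from the high-field limit shift these frequencies and produce the systematic errors the paper estimates.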

  16. Progress toward measuring the 6S1/2 <--> 5D3/2 magnetic-dipole transition moment in Ba+

    NASA Astrophysics Data System (ADS)

    Williams, Spencer; Jayakumar, Anupriya; Hoffman, Matthew; Blinov, Boris; Fortson, Norval

    2015-05-01

    We report the latest results from our effort to measure the magnetic-dipole transition moment (M1) between the 6S1/2 and 5D3/2 manifolds in Ba+. We describe a new technique for calibrating view-port birefringence and how we will use it to enhance the M1 signal. To access the transition moment we use a variation of a previously proposed technique that allows us to isolate the magnetic-dipole coupling from the much larger electric-quadrupole coupling in the transition rates between particular Zeeman sub-levels. Knowledge of M1 is crucial for a parity-nonconservation experiment in the ion, where M1 will be a leading source of systematic errors. No measurement of this M1 has been made in Ba+; however, three calculations predict it to be 80×10⁻⁵ μB, 22×10⁻⁵ μB, and 17×10⁻⁵ μB. A precise measurement may help resolve this theoretical discrepancy, which originates from the calculations' different estimations of many-body effects. Supported by NSF Grant No. 09-06494F.

  17. Conventional and reciprocal approaches to the inverse dipole localization problem for N(20)-P (20) somatosensory evoked potentials.

    PubMed

    Finke, Stefan; Gulrajani, Ramesh M; Gotman, Jean; Savard, Pierre

    2013-01-01

    The non-invasive localization of the primary sensory hand area can be achieved by solving the inverse problem of electroencephalography (EEG) for N(20)-P(20) somatosensory evoked potentials (SEPs). This study compares two different mathematical approaches for the computation of the transfer matrices used to solve the EEG inverse problem. Forward transfer matrices relating dipole sources to scalp potentials are determined via conventional and reciprocal approaches using individual, realistically shaped head models. The reciprocal approach entails calculating the electric field at the dipole position when the scalp electrodes are reciprocally energized with unit current; scalp potentials are then obtained from the scalar product of this electric field and the dipole moment. Median nerve stimulation is performed on three healthy subjects and single-dipole inverse solutions for the N(20)-P(20) SEPs are then obtained by simplex minimization and validated against the primary sensory hand area identified on magnetic resonance images. Solutions are presented for different time points, filtering strategies, boundary-element method discretizations, and skull conductivity values. Both approaches produce similarly small position errors for the N(20)-P(20) SEP. The position error of single-dipole inverse solutions is inherently robust to inaccuracies in the forward transfer matrices but depends on the overlapping activity of other neural sources. Significantly smaller time and storage requirements are the principal advantages of the reciprocal approach. The reduced computational requirements and similar dipole position accuracy support the use of reciprocal approaches over conventional approaches for N(20)-P(20) SEP source localization.

  18. Exploration of resistive targets within shallow marine environments using the circular electrical dipole and the differential electrical dipole methods: a time-domain modelling study

    NASA Astrophysics Data System (ADS)

    Haroon, Amir; Mogilatov, Vladimir; Goldman, Mark; Bergers, Rainer; Tezkan, Bülent

    2016-05-01

    Two novel transient controlled-source electromagnetic methods, the circular electrical dipole (CED) and the differential electrical dipole (DED), are theoretically analysed for applications in shallow marine environments. 1-D and 3-D time-domain modelling studies are used to investigate the detectability and applicability of the methods when investigating resistive layers/targets representing hydrocarbon-saturated formations. The results are compared to the conventional time-domain horizontal electrical dipole (HED) and vertical electrical dipole (VED) sources. The theoretical modelling studies demonstrate that CED and DED have higher signal detectability towards resistive targets than conventional TD-CSEM, but exhibit significantly lower signal amplitudes; future CED/DED applications will have to address this issue prior to measurement. Furthermore, the two novel methods have detectability characteristics towards 3-D resistive targets embedded in marine sediments very similar to those of the VED, while being less susceptible to non-verticality. Due to the complex transmitter design of CED/DED, the systems are prone to geometrical errors. Modelling studies show that even small transmitter inaccuracies have strong effects on the signal characteristics of CED, making an actual marine application difficult at the present time. In contrast, the DED signal is less affected by geometrical errors than that of CED and may therefore be more suitable for marine applications.

  19. Comparison of different sets of array configurations for multichannel 2D ERT acquisition

    NASA Astrophysics Data System (ADS)

    Martorana, R.; Capizzi, P.; D'Alessandro, A.; Luzio, D.

    2017-02-01

Traditional electrode arrays such as Wenner-Schlumberger or dipole-dipole are still widely used thanks to their well-known properties, but their configurations are generally not optimized for multichannel resistivity measurements. Synthetic datasets for four different arrays, dipole-dipole (DD), pole-dipole (PD), Wenner-Schlumberger (WS) and a modified version of multiple gradient (MG), were generated for a systematic comparison between 2D resistivity models and their inverted images. Different sets of array configurations generated from simple combinations of geometric parameters (potential dipole lengths and dipole separation factors) were tested with synthetic and field data sets, also considering the influence of errors and the acquisition speed. The purpose is to establish array configurations capable of providing reliable results without involving excessive survey costs, which are linked to the acquisition time and therefore to the number of current dipoles used. For the DD, PD and WS arrays, a progression of datasets was considered, increasing the number of current dipoles while keeping approximately the same number of measurements. A multi-coverage MG array configuration is proposed by increasing the lateral coverage and thus the number of current dipoles. Noise simulating errors in both the electrode positions and the electric potentials was added. The array configurations were tested on field data acquired at the Bellolampo landfill site (Palermo, Italy), to detect and locate the leachate plumes and to identify the HDPE bottom of the landfill. The inversion results were compared using a quantitative analysis of data misfit, relative model resolution and model misfit.
The results show that the trends of the first two parameters depend on the array configuration, and that a cumulative analysis of these parameters can help to choose the array configuration that gives the best resolution and reliability for a survey, compatibly with generally short acquisition times.
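As a toy illustration of how such configuration sets can be enumerated, the snippet below generates dipole-dipole quadripoles for a multichannel line by combining dipole lengths and separation factors. The electrode count, a-values and n-range are illustrative assumptions, not the parameters used in the study.

```python
# Enumerate dipole-dipole (DD) quadripoles for a multichannel 2D ERT line
# by combining dipole lengths a and separation factors n. Electrode count,
# a-values and n-range are illustrative assumptions.

def dd_configs(n_electrodes, a_values=(1, 2), n_max=6):
    """Return (A, B, M, N) electrode indices for dipole-dipole spreads."""
    configs = []
    for a in a_values:                    # dipole length, in electrode spacings
        for n in range(1, n_max + 1):     # dipole separation factor
            for A in range(n_electrodes):
                B = A + a                 # current dipole A-B
                M = B + n * a             # potential dipole M-N, n*a away
                N = M + a
                if N < n_electrodes:
                    configs.append((A, B, M, N))
    return configs

cfgs = dd_configs(24)
print(len(cfgs), "quadripoles for a 24-electrode line")
```

Counting the quadripoles this way makes the survey-cost trade-off above concrete: each extra a or n value adds a block of measurements and current-dipole positions.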

  20. Spontaneous default mode network phase-locking moderates performance perceptions under stereotype threat.

    PubMed

    Forbes, Chad E; Leitner, Jordan B; Duran-Jordan, Kelly; Magerman, Adam B; Schmader, Toni; Allen, John J B

    2015-07-01

    This study assessed whether individual differences in self-oriented neural processing were associated with performance perceptions of minority students under stereotype threat. Resting electroencephalographic activity recorded in white and minority participants was used to predict later estimates of task errors and self-doubt on a presumed measure of intelligence. We assessed spontaneous phase-locking between dipole sources in left lateral parietal cortex (LPC), precuneus/posterior cingulate cortex (P/PCC), and medial prefrontal cortex (MPFC); three regions of the default mode network (DMN) that are integral for self-oriented processing. Results revealed that minorities with greater LPC-P/PCC phase-locking in the theta band reported more accurate error estimations. All individuals experienced less self-doubt to the extent they exhibited greater LPC-MPFC phase-locking in the alpha band but this effect was driven by minorities. Minorities also reported more self-doubt to the extent they overestimated errors. Findings reveal novel neural moderators of stereotype threat effects on subjective experience. Spontaneous synchronization between DMN regions may play a role in anticipatory coping mechanisms that buffer individuals from stereotype threat. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  1. Continuous millennial decrease of the Earth's magnetic axial dipole

    NASA Astrophysics Data System (ADS)

    Poletti, Wilbor; Biggin, Andrew J.; Trindade, Ricardo I. F.; Hartmann, Gelvam A.; Terra-Nova, Filipe

    2018-01-01

    Since the establishment of direct estimations of the Earth's magnetic field intensity in the first half of the nineteenth century, a continuous decay of the axial dipole component has been observed and variously speculated to be linked to an imminent reversal of the geomagnetic field. Furthermore, indirect estimations from anthropologically made materials and volcanic derivatives suggest that this decrease began significantly earlier than direct measurements have been available. Here, we carefully reassess the available archaeointensity dataset for the last two millennia, and show a good correspondence between direct (observatory/satellite) and indirect (archaeomagnetic) estimates of the axial dipole moment creating, in effect, a proxy to expand our analysis back in time. Our results suggest a continuous linear decay as the most parsimonious long-term description of the axial dipole variation for the last millennium. We thus suggest that a break in the symmetry of axial dipole moment advective sources occurred approximately 1100 years earlier than previously described. In addition, based on the observed dipole secular variation timescale, we speculate that the weakening of the axial dipole may end soon.
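The "continuous linear decay" description can be illustrated by a least-squares linear trend fit; the sketch below uses a synthetic axial-dipole-moment series with illustrative numbers, not the archaeomagnetic data.

```python
# Least-squares linear trend of a synthetic axial dipole moment (ADM)
# series; all numbers are illustrative, not the archaeomagnetic data.
import numpy as np

year = np.arange(1000.0, 2001.0, 100.0)        # CE
adm = 9.5 - 0.0016 * (year - 1000.0) \
      + 0.05 * np.sin(year / 90.0)             # units: 1e22 A m^2

slope, intercept = np.polyfit(year, adm, 1)
print(f"fitted trend: {slope * 100:+.3f} x 1e22 A m^2 per century")
```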

  2. Corrigendum to "The discrete dipole approximation: An overview and recent developments" [J. Quant. Spectrosc. Radiat. Transfer 106 (2007) 558-589

    NASA Astrophysics Data System (ADS)

    Yurkin, Maxim A.; Hoekstra, Alfons G.

    2016-03-01

    The review [1] is still widely used as a general reference to the discrete dipole approximation, which motivates keeping it as accurate as possible. In the following we correct several errors, mostly typographical ones, which were uncovered over the years.

  3. Effect of EEG electrode density on dipole localization accuracy using two realistically shaped skull resistivity models.

    PubMed

    Laarne, P H; Tenhunen-Eskelinen, M L; Hyttinen, J K; Eskola, H J

    2000-01-01

The effect of the number of EEG electrodes on dipole localization was studied by comparing results obtained with the 10-20 and 10-10 electrode systems. Two anatomically detailed models with skull resistivity values of 177.6 Ω m and 67.0 Ω m were applied. Simulated potential values generated by current dipoles were applied to different combinations of the volume conductors and electrode systems. With noiseless data, the high- and low-resistivity models differed only slightly, in favour of the lower skull resistivity model. The localization errors were approximately three times larger when the low-resistivity model was used to generate the potentials but the high-resistivity model was applied in the inverse solution. The difference between the two electrode systems was minor, in favour of the 10-10 electrode system, when simulated noiseless potentials were used. In the presence of noise the dipole localization algorithm operated more accurately with the denser electrode system. In conclusion, increasing the number of recording electrodes seems to improve localization accuracy in the presence of noise. The absolute skull resistivity value also affects the accuracy, but using an incorrect value in the modelling calculations seems to be the most serious source of error.

  4. Neutron Electric Dipole Moment and Tensor Charges from Lattice QCD.

    PubMed

    Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Gupta, Rajan; Lin, Huey-Wen; Yoon, Boram

    2015-11-20

    We present lattice QCD results on the neutron tensor charges including, for the first time, a simultaneous extrapolation in the lattice spacing, volume, and light quark masses to the physical point in the continuum limit. We find that the "disconnected" contribution is smaller than the statistical error in the "connected" contribution. Our estimates in the modified minimal subtraction scheme at 2 GeV, including all systematics, are g_{T}^{d-u}=1.020(76), g_{T}^{d}=0.774(66), g_{T}^{u}=-0.233(28), and g_{T}^{s}=0.008(9). The flavor diagonal charges determine the size of the neutron electric dipole moment (EDM) induced by quark EDMs that are generated in many new scenarios of CP violation beyond the standard model. We use our results to derive model-independent bounds on the EDMs of light quarks and update the EDM phenomenology in split supersymmetry with gaugino mass unification, finding a stringent upper bound of d_{n}<4×10^{-28} e cm for the neutron EDM in this scenario.
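The statement that the flavor-diagonal tensor charges set the quark-EDM contribution to the neutron EDM corresponds, at leading order, to d_n ≈ g_T^u d_u + g_T^d d_d + g_T^s d_s. A minimal sketch with the quoted central values; the quark-EDM inputs are arbitrary illustrative numbers, not model predictions.

```python
# Leading-order quark-EDM contribution to the neutron EDM,
#   d_n ~ g_T^u * d_u + g_T^d * d_d + g_T^s * d_s,
# using the central values of the tensor charges quoted above.
gT = {"u": -0.233, "d": 0.774, "s": 0.008}

def neutron_edm(d_u, d_d, d_s):
    """d_n from quark EDMs (illustrative inputs; units follow the inputs)."""
    return gT["u"] * d_u + gT["d"] * d_d + gT["s"] * d_s

# arbitrary illustrative quark EDMs, in units of 1e-27 e cm
print(f"d_n = {neutron_edm(1.0, 1.0, 1.0):+.3f} x 1e-27 e cm")
```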

  5. Using the ratio of the magnetic field to the analytic signal of the magnetic gradient tensor in determining the position of simple shaped magnetic anomalies

    NASA Astrophysics Data System (ADS)

    Karimi, Kurosh; Shirzaditabar, Farzad

    2017-08-01

The analytic signal of the magnitude of the magnetic field components and of its first derivatives has been employed for locating magnetic structures that can be treated as point dipoles or lines of dipoles. Although similar methods have been used for locating such magnetic anomalies, they cannot estimate the positions of anomalies with acceptable accuracy in noisy conditions. The methods are also inexact in determining the depth of deep anomalies. In noisy cases and at locations other than the magnetic poles, the maxima of the magnitude of the magnetic vector components and of the analytic signal amplitude Az are not located exactly above 3D bodies; consequently, the horizontal location estimates of the bodies carry errors. Here, the previous methods are modified and generalized to locate deeper models in the presence of noise, even at lower magnetic latitudes. In addition, a statistical technique is presented for working in noisy areas, and a new method that is resistant to noise, based on a 'depths mean', is introduced. Reduction-to-the-pole transformation is also used to find the most probable actual horizontal body location. Deep models are also well estimated. The method is tested on real magnetic data over an urban gas pipeline in the vicinity of Kermanshah province, Iran. The estimated location of the pipeline agrees well with the result of the half-width method.
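As a sketch of the underlying idea (not the authors' modified method): over a vertical point dipole at the magnetic pole, the analytic signal amplitude, built from the field derivatives, peaks directly above the source. Geometry and units below are illustrative.

```python
# Analytic signal amplitude |A| = sqrt((dT/dx)^2 + (dT/dz)^2) along a
# profile over a vertical point dipole at the pole; units are arbitrary.
import numpy as np

def T(x, h):
    """Total-field anomaly of a vertical dipole buried at depth h below
    the observation level (polar case, arbitrary scaling)."""
    return (2 * h**2 - x**2) / (x**2 + h**2) ** 2.5

x = np.linspace(-50.0, 50.0, 2001)
h = 10.0
dTdx = np.gradient(T(x, h), x)                      # horizontal derivative
eps = 1e-3
dTdz = (T(x, h + eps) - T(x, h - eps)) / (2 * eps)  # vertical derivative
A = np.hypot(dTdx, dTdz)                            # analytic signal amplitude
print("|A| peaks at x =", round(float(x[np.argmax(A)]), 2))
```

The peak of |A| sits at the horizontal position of the dipole, which is the property the location methods above exploit.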

  6. Zn Coordination Chemistry:  Development of Benchmark Suites for Geometries, Dipole Moments, and Bond Dissociation Energies and Their Use To Test and Validate Density Functionals and Molecular Orbital Theory.

    PubMed

    Amin, Elizabeth A; Truhlar, Donald G

    2008-01-01

We present nonrelativistic and relativistic benchmark databases (obtained by coupled cluster calculations) of 10 Zn-ligand bond distances, 8 dipole moments, and 12 bond dissociation energies in Zn coordination compounds with O, S, NH3, H2O, OH, SCH3, and H ligands. These are used to test the predictions of 39 density functionals, Hartree-Fock theory, and seven more approximate molecular orbital theories. In the nonrelativistic case, the M05-2X, B97-2, and mPW1PW functionals emerge as the most accurate ones for this test data, with unitless balanced mean unsigned errors (BMUEs) of 0.33, 0.38, and 0.43, respectively. The best local functionals (i.e., functionals with no Hartree-Fock exchange) are M06-L and τ-HCTH with BMUEs of 0.54 and 0.60, respectively. The popular B3LYP functional has a BMUE of 0.51, only slightly better than the value of 0.54 for the best local functional, which is less expensive. Hartree-Fock theory itself has a BMUE of 1.22. The M05-2X functional has a mean unsigned error of 0.008 Å for bond lengths, 0.19 D for dipole moments, and 4.30 kcal/mol for bond energies. The X3LYP functional has a smaller mean unsigned error (0.007 Å) for bond lengths but has mean unsigned errors of 0.43 D for dipole moments and 5.6 kcal/mol for bond energies. The M06-2X functional has a smaller mean unsigned error (3.3 kcal/mol) for bond energies but has mean unsigned errors of 0.017 Å for bond lengths and 0.37 D for dipole moments. The best of the semiempirical molecular orbital theories are PM3 and PM6, with BMUEs of 1.96 and 2.02, respectively. The ten most accurate functionals from the nonrelativistic benchmark analysis are then tested in relativistic calculations against new benchmarks obtained with coupled-cluster calculations and a relativistic effective core potential, resulting in M05-2X (BMUE = 0.895), PW6B95 (BMUE = 0.90), and B97-2 (BMUE = 0.93) as the top three functionals.
We find significant relativistic effects (∼0.01 Å in bond lengths, ∼0.2 D in dipole moments, and ∼4 kcal/mol in Zn-ligand bond energies) that cannot be neglected for accurate modeling, but the same density functionals that do well in all-electron nonrelativistic calculations do well with relativistic effective core potentials. Although most tests are carried out with augmented polarized triple-ζ basis sets, we also carried out some tests with an augmented polarized double-ζ basis set, and we found, on average, that with the smaller basis set DFT has no loss in accuracy for dipole moments and only ∼10% less accurate bond lengths.

  7. Methods, analysis, and the treatment of systematic errors for the electron electric dipole moment search in thorium monoxide

    NASA Astrophysics Data System (ADS)

    Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration

    2017-07-01

    We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.

  8. Analysis of fast boundary-integral approximations for modeling electrostatic contributions of molecular binding

    PubMed Central

    Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.

    2013-01-01

    We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561

  9. Linear optics measurements and corrections using an AC dipole in RHIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, G.; Bai, M.; Yang, L.

    2010-05-23

We report recent experimental results on linear optics measurements and corrections using an AC dipole. In the RHIC 2009 run, the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them with the trim quadrupoles. The measured phase beatings were reduced by 30% and 40%, respectively, in two dedicated experiments. In the RHIC 2010 run, the AC dipole was used to measure β* and the chromatic β-function. For the 0.65 m β* lattice, we observed a factor of 3 discrepancy between the model and the measured chromatic β-function in the Yellow ring.
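The SVD correction concept can be sketched as a least-squares inversion of a response matrix via the pseudo-inverse; the response matrix and simulated gradient errors below are random illustrative numbers, not RHIC optics.

```python
# Least-squares (SVD pseudo-inverse) correction of simulated phase beating.
# The response matrix and gradient errors are random illustrative numbers.
import numpy as np

rng = np.random.default_rng(0)
n_bpm, n_quad = 12, 4
R = rng.normal(size=(n_bpm, n_quad))     # model response: trims -> beating
true_err = rng.normal(size=n_quad)       # artificial gradient errors
beating = R @ true_err                   # "measured" phase beating

trims = -np.linalg.pinv(R) @ beating     # SVD-based correction strengths
residual = beating + R @ trims           # beating after correction
print("rms beating: before", round(float(np.std(beating)), 3),
      "after", round(float(np.std(residual)), 6))
```

In this noiseless toy the beating lies entirely in the range of R, so the pseudo-inverse recovers the gradient errors exactly; with measurement noise, only a partial reduction (as in the experiments above) is expected.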

  10. Atomic charge transfer-counter polarization effects determine infrared CH intensities of hydrocarbons: a quantum theory of atoms in molecules model.

    PubMed

    Silva, Arnaldo F; Richter, Wagner E; Meneses, Helen G C; Bruns, Roy E

    2014-11-14

Atomic charge transfer-counter polarization effects determine most of the infrared fundamental CH intensities of simple hydrocarbons: methane, ethylene, ethane, propyne, cyclopropane and allene. The quantum theory of atoms in molecules/charge-charge flux-dipole flux model predicted the values of 30 CH intensities ranging from 0 to 123 km mol⁻¹ with a root mean square (rms) error of only 4.2 km mol⁻¹, without including a specific equilibrium atomic charge term. Sums of the contributions from terms involving charge flux and/or dipole flux averaged 20.3 km mol⁻¹, about ten times larger than the average charge contribution of 2.0 km mol⁻¹. The only notable exceptions are the CH stretching and bending intensities of acetylene and two of the propyne vibrations for hydrogens bound to sp-hybridized carbon atoms. Calculations were carried out at four quantum levels: MP2/6-311++G(3d,3p), MP2/cc-pVTZ, QCISD/6-311++G(3d,3p) and QCISD/cc-pVTZ. The results calculated at the QCISD level are the most accurate of the four, with root mean square errors of 4.7 and 5.0 km mol⁻¹ for the 6-311++G(3d,3p) and cc-pVTZ basis sets. These values are close to the estimated aggregate experimental error of the hydrocarbon intensities, 4.0 km mol⁻¹. The atomic charge transfer-counter polarization effect is much larger than the charge effect in the results at all four quantum levels. Charge transfer-counter polarization effects are expected to also be important in vibrations of more polar molecules, for which equilibrium charge contributions can be large.

  11. Signal Conditioning for the Kalman Filter: Application to Satellite Attitude Estimation with Magnetometer and Sun Sensors

    PubMed Central

    Esteban, Segundo; Girón-Sierra, Jose M.; Polo, Óscar R.; Angulo, Manuel

    2016-01-01

Most satellites use an on-board attitude estimation system based on the available sensors. In the case of low-cost satellites, which are of increasing interest, it is usual to use magnetometers and Sun sensors. A Kalman filter is commonly recommended for the estimation, to simultaneously exploit the information from the sensors and from a mathematical model of the satellite motion. It is also convenient to adhere to a quaternion representation. This article focuses on some problems linked to this context. The state of the system should be represented in observable form. Singularities due to alignment of the measured vectors cause estimation problems. Accommodation of the Kalman filter gives rise to convergence difficulties. The article includes a new proposal that solves these problems without requiring changes to the Kalman filter algorithm. In addition, the article assesses different errors and initialization values for the Kalman filter, and considers the influence of the magnetic dipole moment perturbation, showing how to handle it as part of the Kalman filter framework. PMID:27809250

  12. Signal Conditioning for the Kalman Filter: Application to Satellite Attitude Estimation with Magnetometer and Sun Sensors.

    PubMed

    Esteban, Segundo; Girón-Sierra, Jose M; Polo, Óscar R; Angulo, Manuel

    2016-10-31

Most satellites use an on-board attitude estimation system based on the available sensors. In the case of low-cost satellites, which are of increasing interest, it is usual to use magnetometers and Sun sensors. A Kalman filter is commonly recommended for the estimation, to simultaneously exploit the information from the sensors and from a mathematical model of the satellite motion. It is also convenient to adhere to a quaternion representation. This article focuses on some problems linked to this context. The state of the system should be represented in observable form. Singularities due to alignment of the measured vectors cause estimation problems. Accommodation of the Kalman filter gives rise to convergence difficulties. The article includes a new proposal that solves these problems without requiring changes to the Kalman filter algorithm. In addition, the article assesses different errors and initialization values for the Kalman filter, and considers the influence of the magnetic dipole moment perturbation, showing how to handle it as part of the Kalman filter framework.
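The alignment singularity mentioned above is a generic feature of two-vector attitude determination; a minimal sketch (using the classic TRIAD construction rather than the paper's Kalman-filter treatment) with an explicit degeneracy guard:

```python
# TRIAD attitude from two vector observations (e.g. magnetic field and Sun
# direction), with a guard for the near-aligned (unobservable) case.
# The vectors used in the example are illustrative.
import numpy as np

def triad(r1, r2, b1, b2, min_sin=1e-3):
    """Rotation matrix mapping reference-frame vectors to body-frame ones."""
    c_r, c_b = np.cross(r1, r2), np.cross(b1, b2)
    if np.linalg.norm(c_r) < min_sin or np.linalg.norm(c_b) < min_sin:
        raise ValueError("measured vectors nearly aligned: attitude unobservable")
    def frame(v, c):
        t1 = v / np.linalg.norm(v)
        t2 = c / np.linalg.norm(c)
        return np.column_stack([t1, t2, np.cross(t1, t2)])
    return frame(b1, c_b) @ frame(r1, c_r).T

# body frame rotated 90 degrees about z relative to the reference frame
r1, r2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
b1, b2 = np.array([0.0, -1.0, 0.0]), np.array([1.0, 0.0, 0.0])
Rbr = triad(r1, r2, b1, b2)
print(np.round(Rbr @ r1, 3))   # maps r1 onto b1
```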

  13. Corrected Four-Sphere Head Model for EEG Signals.

    PubMed

    Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V; Dale, Anders M; Einevoll, Gaute T; Wójcik, Daniel K

    2017-01-01

The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals and for inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations.
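For orientation, the elementary building block underlying such forward models (not the four-sphere formulas themselves) is the current-dipole potential in an infinite homogeneous conductor, V(r) = p·(r − r0) / (4πσ|r − r0|³); a minimal sketch with an illustrative conductivity and dipole moment:

```python
# Current-dipole potential in an infinite homogeneous conductor:
#   V(r) = p . (r - r0) / (4 * pi * sigma * |r - r0|^3)
# sigma and the dipole moment are illustrative values.
import numpy as np

def dipole_potential(r, r0, p, sigma=0.33):
    """Potential (V) of a current dipole p (A m) at r0; sigma in S/m."""
    d = np.asarray(r, float) - np.asarray(r0, float)
    return float(np.dot(p, d) / (4.0 * np.pi * sigma * np.linalg.norm(d) ** 3))

p = np.array([0.0, 0.0, 1e-8])                  # 10 nA m dipole along z
r0 = np.zeros(3)
v1 = dipole_potential([0.0, 0.0, 0.05], r0, p)
v2 = dipole_potential([0.0, 0.0, 0.10], r0, p)
print(v1 / v2)   # on-axis potential falls off as 1/r^2, so the ratio is 4
```

The four-sphere model adds the boundary terms for the CSF, skull, and scalp layers on top of this homogeneous-medium term.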

  14. Corrected Four-Sphere Head Model for EEG Signals

    PubMed Central

    Næss, Solveig; Chintaluri, Chaitanya; Ness, Torbjørn V.; Dale, Anders M.; Einevoll, Gaute T.; Wójcik, Daniel K.

    2017-01-01

The EEG signal is generated by electrical brain cell activity, often described in terms of current dipoles. By applying EEG forward models we can compute the contribution from such dipoles to the electrical potential recorded by EEG electrodes. Forward models are key both for generating understanding and intuition about the neural origin of EEG signals and for inverse modeling, i.e., the estimation of the underlying dipole sources from recorded EEG signals. Different models of varying complexity and biological detail are used in the field. One such analytical model is the four-sphere model which assumes a four-layered spherical head where the layers represent brain tissue, cerebrospinal fluid (CSF), skull, and scalp, respectively. While conceptually clear, the mathematical expression for the electric potentials in the four-sphere model is cumbersome, and we observed that the formulas presented in the literature contain errors. Here, we derive and present the correct analytical formulas with a detailed derivation. A useful application of the analytical four-sphere model is that it can serve as ground truth to test the accuracy of numerical schemes such as the Finite Element Method (FEM). We performed FEM simulations of the four-sphere head model and showed that they were consistent with the corrected analytical formulas. For future reference we provide scripts for computing EEG potentials with the four-sphere model, both by means of the correct analytical formulas and numerical FEM simulations. PMID:29093671

  15. Predicting geomagnetic reversals via data assimilation: a feasibility study

    NASA Astrophysics Data System (ADS)

    Morzfeld, Matthias; Fournier, Alexandre; Hulot, Gauthier

    2014-05-01

The system of three ordinary differential equations (ODE) presented by Gissinger in [1] was shown to exhibit chaotic reversals whose statistics compare well with those from the paleomagnetic record. We explore the geophysical relevance of this low-dimensional model via data assimilation, i.e. we update the solution of the ODE with information from data of the dipole variable. The data set we use is 'SINT' (Valet et al. [2]), which provides the signed virtual axial dipole moment over the past 2 million years. We can obtain an accurate reconstruction of these dipole data using implicit sampling (a fully nonlinear Monte Carlo sampling strategy) and assimilating 5 kyr of data per sweep. We confirm our calibration of the model using the PADM2M dipole data set of Ziegler et al. [3]. The Monte Carlo sampling strategy provides us with quantitative information about the uncertainty of our estimates, and, in principle, we can use this information for making (robust) predictions under uncertainty. We perform synthetic data experiments to explore the predictive capability of the ODE model updated by data assimilation. For each experiment, we produce 2 Myr of synthetic data (with error levels similar to those found in the SINT data), calibrate the model to this record, and then check whether this calibrated model can reliably predict a reversal within the next 5 kyr. By performing a large number of such experiments, we can estimate the statistics that describe how reliably our calibrated model can predict a reversal of the geomagnetic field. The 1 kyr-ahead predictions of reversals produced by the model appear to be accurate and reliable. These encouraging results prompted us to also test predictions of the five reversals of the SINT (and PADM2M) data set, using a similarly calibrated model. Results will be presented and discussed.
References
[1] Gissinger, C., 2012, A new deterministic model for chaotic reversals, European Physical Journal B, 85:137.
[2] Valet, J.P., Meynadier, L. and Guyodo, Y., 2005, Geomagnetic field strength and reversal rate over the past 2 million years, Nature, 435, 802-805.
[3] Ziegler, L.B., Constable, C.G., Johnson, C.L. and Tauxe, L., 2011, PADM2M: a penalized maximum likelihood model of the 0-2 Ma paleomagnetic axial dipole moment, Geophysical Journal International, 184, 1069-1089.

  16. Infrared and far-infrared laser magnetic resonance spectroscopy of the GeH radical - Determination of ground state parameters

    NASA Technical Reports Server (NTRS)

    Brown, J. M.; Evenson, K. M.; Sears, T. J.

    1985-01-01

The GeH radical has been detected in its ground ²Π state in the gas-phase reaction of fluorine atoms with GeH4 by laser magnetic resonance techniques. Rotational transitions within both the ²Π1/2 and ²Π3/2 manifolds have been observed at far-infrared wavelengths, and rotational transitions between the two fine-structure components have been detected at infrared wavelengths (10 microns). Signals have been observed for all five naturally occurring isotopes of germanium. Nuclear hyperfine structure for ¹H and ⁷³Ge has also been observed. The data for the dominant isotopologue (⁷⁴GeH) have been fitted to within experimental error by an effective Hamiltonian, giving a very nearly complete set of molecular parameters for the X ²Π state. In addition, the dipole moment of GeH in its ground state has been estimated from the relative intensities of electric and magnetic dipole transitions in the 10 micron spectrum to be 1.24 ± 0.10 D.

  17. Neutron Electric Dipole Moment and Tensor Charges from Lattice QCD

    DOE PAGES

    Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Gupta, Rajan; ...

    2015-11-17

In this paper, we present lattice QCD results on the neutron tensor charges including, for the first time, a simultaneous extrapolation in the lattice spacing, volume, and light quark masses to the physical point in the continuum limit. We find that the "disconnected" contribution is smaller than the statistical error in the "connected" contribution. Our estimates in the modified minimal subtraction scheme at 2 GeV, including all systematics, are g_{T}^{d-u}=1.020(76), g_{T}^{d}=0.774(66), g_{T}^{u}=-0.233(28), and g_{T}^{s}=0.008(9). The flavor diagonal charges determine the size of the neutron electric dipole moment (EDM) induced by quark EDMs that are generated in many new scenarios of CP violation beyond the standard model. Finally, we use our results to derive model-independent bounds on the EDMs of light quarks and update the EDM phenomenology in split supersymmetry with gaugino mass unification, finding a stringent upper bound of d_{n}<4×10^{-28} e cm for the neutron EDM in this scenario.

  18. Vibrationally averaged dipole moments of methane and benzene isotopologues.

    PubMed

    Arapiraca, A F C; Mohallem, J R

    2016-04-14

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% to the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging on the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.

  19. Development of Monopole Interaction Models for Ionic Compounds. Part I: Estimation of Aqueous Henry's Law Constants for Ions and Gas Phase pKa Values for Acidic Compounds.

    PubMed

    Hilal, S H; Saravanaraj, A N; Carreira, L A

    2014-02-01

The SPARC (SPARC Performs Automated Reasoning in Chemistry) physicochemical mechanistic models for neutral compounds have been extended to estimate the Henry's Law Constant (HLC) for charged species by incorporating ionic electrostatic interaction models. Combinations of absolute aqueous pKa values, relative pKa values in the gas phase, and aqueous HLCs for neutral compounds have been used to develop monopole interaction models that quantify the energy differences upon moving an ionic solute molecule from the gas phase to the liquid phase. Inter-molecular interaction energies were factored into mechanistic contributions of monopoles with polarizability, dipole, H-bonding, and resonance. The monopole ionic models were validated against a wide range of measured gas-phase pKa data for 450 acidic compounds. The RMS deviation error and R² for the OH, SH, CO2H, CH3 and NR2 acidic reaction centers (C) were 16.9 kcal/mol and 0.87, respectively. The calculated HLCs of ions were compared to the HLCs of 142 ions calculated by quantum mechanics. Effects of the inter-molecular interaction of the monopoles with polarizability, dipole, H-bonding, and resonance on the acidity of the solutes in the gas phase are discussed. Copyright © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. Methods for Room Acoustic Analysis and Synthesis using a Monopole-Dipole Microphone Array

    NASA Technical Reports Server (NTRS)

    Abel, J. S.; Begault, Durand R.; Null, Cynthia H. (Technical Monitor)

    1998-01-01

In recent work, a microphone array consisting of an omnidirectional microphone and colocated dipole microphones with orthogonally aligned dipole axes was used to examine the directional nature of a room impulse response. The arrival of significant reflections was indicated by peaks in the power of the omnidirectional microphone response; the reflection direction of arrival was revealed by comparing the zero-lag cross-correlations between the omnidirectional response and the dipole responses with the omnidirectional response power, to estimate arrival direction cosines with respect to the dipole axes.
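The direction-cosine estimate described above can be sketched for a single noiseless plane-wave arrival, where each dipole response is the omni response scaled by an arrival direction cosine; the signal and arrival direction below are illustrative.

```python
# Zero-lag cross-correlation direction-cosine estimate for one noiseless
# plane-wave arrival; signal and arrival direction are illustrative.
import numpy as np

rng = np.random.default_rng(1)
s = rng.normal(size=4096)              # omnidirectional (pressure) response
u = np.array([0.6, 0.64, 0.48])        # unit arrival direction (assumed)
dipoles = np.outer(u, s)               # three colocated orthogonal dipoles

est = dipoles @ s / (s @ s)            # xcorr at lag 0, over omni power
print(np.round(est, 3))                # recovers the direction cosines
```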

  1. A storage ring experiment to detect a proton electric dipole moment

    DOE PAGES

    Anastassopoulos, V.; Andrianov, S.; Baartman, R.; ...

    2016-11-29

We describe a new experiment to detect a permanent electric dipole moment of the proton with a sensitivity of 10⁻²⁹ e·cm by using polarized "magic" momentum 0.7 GeV/c protons in an all-electric storage ring. Systematic errors relevant to the experiment are discussed and techniques to address them are presented. The measurement is sensitive to new physics beyond the Standard Model at the scale of 3000 TeV.

  2. A storage ring experiment to detect a proton electric dipole moment.

    PubMed

    Anastassopoulos, V; Andrianov, S; Baartman, R; Baessler, S; Bai, M; Benante, J; Berz, M; Blaskiewicz, M; Bowcock, T; Brown, K; Casey, B; Conte, M; Crnkovic, J D; D'Imperio, N; Fanourakis, G; Fedotov, A; Fierlinger, P; Fischer, W; Gaisser, M O; Giomataris, Y; Grosse-Perdekamp, M; Guidoboni, G; Hacıömeroğlu, S; Hoffstaetter, G; Huang, H; Incagli, M; Ivanov, A; Kawall, D; Kim, Y I; King, B; Koop, I A; Lazarus, D M; Lebedev, V; Lee, M J; Lee, S; Lee, Y H; Lehrach, A; Lenisa, P; Levi Sandri, P; Luccio, A U; Lyapin, A; MacKay, W; Maier, R; Makino, K; Malitsky, N; Marciano, W J; Meng, W; Meot, F; Metodiev, E M; Miceli, L; Moricciani, D; Morse, W M; Nagaitsev, S; Nayak, S K; Orlov, Y F; Ozben, C S; Park, S T; Pesce, A; Petrakou, E; Pile, P; Podobedov, B; Polychronakos, V; Pretz, J; Ptitsyn, V; Ramberg, E; Raparia, D; Rathmann, F; Rescia, S; Roser, T; Kamal Sayed, H; Semertzidis, Y K; Senichev, Y; Sidorin, A; Silenko, A; Simos, N; Stahl, A; Stephenson, E J; Ströher, H; Syphers, M J; Talman, J; Talman, R M; Tishchenko, V; Touramanis, C; Tsoupas, N; Venanzoni, G; Vetter, K; Vlassis, S; Won, E; Zavattini, G; Zelenski, A; Zioutas, K

    2016-11-01

    A new experiment is described to detect a permanent electric dipole moment of the proton with a sensitivity of 10^-29 e·cm by using polarized "magic" momentum 0.7 GeV/c protons in an all-electric storage ring. Systematic errors relevant to the experiment are discussed and techniques to address them are presented. The measurement is sensitive to new physics beyond the standard model at the scale of 3000 TeV.

  3. A storage ring experiment to detect a proton electric dipole moment

    NASA Astrophysics Data System (ADS)

    Anastassopoulos, V.; Andrianov, S.; Baartman, R.; Baessler, S.; Bai, M.; Benante, J.; Berz, M.; Blaskiewicz, M.; Bowcock, T.; Brown, K.; Casey, B.; Conte, M.; Crnkovic, J. D.; D'Imperio, N.; Fanourakis, G.; Fedotov, A.; Fierlinger, P.; Fischer, W.; Gaisser, M. O.; Giomataris, Y.; Grosse-Perdekamp, M.; Guidoboni, G.; Hacıömeroǧlu, S.; Hoffstaetter, G.; Huang, H.; Incagli, M.; Ivanov, A.; Kawall, D.; Kim, Y. I.; King, B.; Koop, I. A.; Lazarus, D. M.; Lebedev, V.; Lee, M. J.; Lee, S.; Lee, Y. H.; Lehrach, A.; Lenisa, P.; Levi Sandri, P.; Luccio, A. U.; Lyapin, A.; MacKay, W.; Maier, R.; Makino, K.; Malitsky, N.; Marciano, W. J.; Meng, W.; Meot, F.; Metodiev, E. M.; Miceli, L.; Moricciani, D.; Morse, W. M.; Nagaitsev, S.; Nayak, S. K.; Orlov, Y. F.; Ozben, C. S.; Park, S. T.; Pesce, A.; Petrakou, E.; Pile, P.; Podobedov, B.; Polychronakos, V.; Pretz, J.; Ptitsyn, V.; Ramberg, E.; Raparia, D.; Rathmann, F.; Rescia, S.; Roser, T.; Kamal Sayed, H.; Semertzidis, Y. K.; Senichev, Y.; Sidorin, A.; Silenko, A.; Simos, N.; Stahl, A.; Stephenson, E. J.; Ströher, H.; Syphers, M. J.; Talman, J.; Talman, R. M.; Tishchenko, V.; Touramanis, C.; Tsoupas, N.; Venanzoni, G.; Vetter, K.; Vlassis, S.; Won, E.; Zavattini, G.; Zelenski, A.; Zioutas, K.

    2016-11-01

    A new experiment is described to detect a permanent electric dipole moment of the proton with a sensitivity of 10^-29 e·cm by using polarized "magic" momentum 0.7 GeV/c protons in an all-electric storage ring. Systematic errors relevant to the experiment are discussed and techniques to address them are presented. The measurement is sensitive to new physics beyond the standard model at the scale of 3000 TeV.

  4. Quantification of tracer plume transport parameters in 2D saturated porous media by cross-borehole ERT imaging

    NASA Astrophysics Data System (ADS)

    Lekmine, G.; Auradou, H.; Pessel, M.; Rayner, J. L.

    2017-04-01

    Cross-borehole ERT imaging was tested to quantify the average velocity and transport parameters of tracer plumes in saturated porous media. Seven tracer tests were performed at different flow rates and monitored by either a vertical or horizontal dipole-dipole ERT sequence. These sequences were tested to reconstruct the shape and temporally follow the spread of the tracer plumes through a background regularization procedure. Data sets were inverted with the same inversion parameters, and 2D model sections of resistivity ratios were converted to tracer concentrations. Both array types provided an accurate estimation of the average pore velocity vz. The total mass Mtot recovered was always overestimated by the horizontal dipole-dipole and underestimated by the vertical dipole-dipole. The vertical dipole-dipole was, however, reliable for quantifying the longitudinal dispersivity λz, while the horizontal dipole-dipole returned a better estimate of the transverse component λx. λ and Mtot were mainly influenced by the 2D distribution of the cumulated electrical sensitivity and the shadow effects induced by the third dimension. The size reduction of the edge of the plume was also related to the inability of the inversion process to reconstruct sharp resistivity contrasts at the interface. Smoothing was counterbalanced by a non-realistic rise of the ERT concentrations around the centre of mass, returning overpredicted total masses. A sensitivity analysis on the cementation factor m and the porosity ϕ demonstrated that an 8% change in either parameter produced non-negligible variations of 30% and 40%, respectively, in the dispersion coefficients and mass recovery.
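    As a rough illustration of why such sensitivity to the petrophysical parameters is plausible, the sketch below inverts Archie's law (bulk conductivity σ_b = σ_w φ^m, with fluid conductivity assumed linear in tracer concentration) and perturbs the cementation factor m by 8%; all numerical values here are invented for illustration and are not the paper's.

```python
# Toy Archie's-law sensitivity check (all values invented, not the paper's):
# bulk conductivity sigma_b = sigma_w * phi**m, with fluid conductivity taken
# linear in tracer concentration c: sigma_w = sigma_w0 + k * c.

def concentration(sigma_b, phi, m, sigma_w0=0.05, k=0.15):
    """Invert Archie's law for tracer concentration (sketch)."""
    sigma_w = sigma_b * phi ** (-m)
    return (sigma_w - sigma_w0) / k

phi, m = 0.30, 1.8                       # assumed porosity and cementation factor
sigma_b = (0.05 + 0.15 * 0.5) * phi**m   # bulk conductivity for a true c = 0.5

c_ref = concentration(sigma_b, phi, m)
c_pert = concentration(sigma_b, phi, m * 1.08)  # 8% error in m
rel_change = abs(c_pert - c_ref) / c_ref        # ~30% concentration error
```

    With these assumed values, an 8% error in m propagates to a roughly 30% error in the recovered concentration, the same order as the variations reported above.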

  5. Estimation of hyper-parameters for a hierarchical model of combined cortical and extra-brain current sources in the MEG inverse problem.

    PubMed

    Morishige, Ken-ichi; Yoshioka, Taku; Kawawaki, Dai; Hiroe, Nobuo; Sato, Masa-aki; Kawato, Mitsuo

    2014-11-01

    One of the major obstacles in estimating cortical currents from MEG signals is the disturbance caused by magnetic artifacts derived from extra-cortical current sources such as heartbeats and eye movements. To remove the effect of such extra-brain sources, we improved the hybrid hierarchical variational Bayesian method (hyVBED) proposed by Fujiwara et al. (NeuroImage, 2009). hyVBED simultaneously estimates cortical and extra-brain source currents by placing dipoles on cortical surfaces as well as extra-brain sources. This method requires EOG data for an EOG forward model that describes the relationship between eye dipoles and electric potentials. In contrast, our improved approach requires no EOG and less a priori knowledge about the current variance of extra-brain sources. We propose a new method, "extra-dipole," that optimally selects hyper-parameter values regarding current variances of the cortical surface and extra-brain source dipoles. With the selected parameter values, the cortical and extra-brain dipole currents were accurately estimated from the simulated MEG data. The performance of this method was demonstrated to be better than conventional approaches, such as principal component analysis and independent component analysis, which use only statistical properties of MEG signals. Furthermore, we applied our proposed method to measured MEG data during covert pursuit of a smoothly moving target and confirmed its effectiveness. Copyright © 2014 Elsevier Inc. All rights reserved.

  6. Heart sounds as a result of acoustic dipole radiation of heart valves

    NASA Astrophysics Data System (ADS)

    Kasoev, S. G.

    2005-11-01

    Heart sounds are associated with impulses of force acting on heart valves at the moment they close under the action of blood-pressure difference. A unified model for all the valves represents this impulse as an acoustic dipole. The near pressure field of this dipole creates a distribution of the normal velocity on the breast surface with features typical of auscultation practice: a pronounced localization of heart sound audibility areas, an individual area for each of the valves, and a noncoincidence of these areas with the projections of the valves onto the breast surface. In the framework of the dipole theory, the optimum size of the stethoscope’s bell is found and the spectrum of the heart sounds is estimated. The estimates are compared with the measured spectrum.

  7. Quantitative susceptibility mapping: Report from the 2016 reconstruction challenge.

    PubMed

    Langkammer, Christian; Schweser, Ferdinand; Shmueli, Karin; Kames, Christian; Li, Xu; Guo, Li; Milovic, Carlos; Kim, Jinsuh; Wei, Hongjiang; Bredies, Kristian; Buch, Sagar; Guo, Yihao; Liu, Zhe; Meineke, Jakob; Rauscher, Alexander; Marques, José P; Bilgic, Berkin

    2018-03-01

    The aim of the 2016 quantitative susceptibility mapping (QSM) reconstruction challenge was to test the ability of various QSM algorithms to recover the underlying susceptibility from phase data faithfully. Gradient-echo images of a healthy volunteer were acquired at 3T in a single orientation with 1.06 mm isotropic resolution. A reference susceptibility map was provided, which was computed using the susceptibility tensor imaging algorithm on data acquired at 12 head orientations. Susceptibility maps calculated from the single orientation data were compared against the reference susceptibility map. Deviations were quantified using the following metrics: root mean squared error (RMSE), structure similarity index (SSIM), high-frequency error norm (HFEN), and the error in selected white and gray matter regions. Twenty-seven submissions were evaluated. Most of the best scoring approaches estimated the spatial frequency content in the ill-conditioned domain of the dipole kernel using compressed sensing strategies. The top 10 maps in each category had similar error metrics but substantially different visual appearance. Because QSM algorithms were optimized to minimize error metrics, the resulting susceptibility maps suffered from over-smoothing and conspicuity loss in fine features such as vessels. As such, the challenge highlighted the need for better numerical image quality criteria. Magn Reson Med 79:1661-1673, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
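    The ill-conditioning mentioned above comes from the unit dipole kernel in k-space, D(k) = 1/3 − kz²/|k|², which vanishes on the cone kz²/|k|² = 1/3 (the "magic angle", ≈54.7°). A minimal numpy sketch of the kernel, with an arbitrary grid size:

```python
import numpy as np

# Unit dipole kernel in k-space on an arbitrary 64^3 grid.
n = 64
k = np.fft.fftfreq(n)
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
k2 = kx**2 + ky**2 + kz**2
k2[0, 0, 0] = np.inf             # avoid 0/0 at the k-space origin
D = 1.0 / 3.0 - kz**2 / k2       # ranges from -2/3 (kz axis) to +1/3 (kz = 0 plane)

# Near the magic-angle cone the kernel is tiny, so naive inversion
# chi = IFFT(FFT(phase) / D) would blow up noise there; regularised or
# compressed-sensing reconstructions avoid this.
ill_fraction = np.mean(np.abs(D) < 0.01)
```

    A nonzero fraction of k-space samples falls near the zero cone, which is exactly the "ill-conditioned domain" that the best-scoring challenge entries had to estimate.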

  8. Inclusion of Theta(12) dependence in the Coulomb-dipole theory of the ionization threshold

    NASA Technical Reports Server (NTRS)

    Srivastava, M. K.; Temkin, A.

    1991-01-01

    The Coulomb-dipole (CD) theory of the electron-atom impact-ionization threshold law is extended to include the full electronic repulsion. It is found that the threshold law is altered to a form that contrasts with the previous angle-independent model. A second energy regime is also identified, wherein the 'threshold' law reverts to its angle-independent form. In the final part of the paper the dipole parameter is estimated to be about 28. This yields numerical estimates of E(a) = about 0.0003 and E(b) = about 0.25 eV.

  9. Electric dipole moment of diatomic molecules by configuration interaction. IV.

    NASA Technical Reports Server (NTRS)

    Green, S.

    1972-01-01

    The theory of basis set dependence in configuration interaction calculations is discussed, taking into account a perturbation model which is valid for small changes in the self-consistent field orbitals. It is found that basis set corrections are essentially additive through first order. It is shown that an error found in a previously published dipole moment calculation by Green (1972) for the metastable first excited state of CO was indeed due to an inadequate basis set as claimed.

  10. A storage ring experiment to detect a proton electric dipole moment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anastassopoulos, V.; Andrianov, S.; Baartman, R.

    2016-11-01

    A new experiment is described to detect a permanent electric dipole moment of the proton with a sensitivity of 10^-29 e·cm by using polarized "magic" momentum 0.7 GeV/c protons in an all-electric storage ring. Systematic errors relevant to the experiment are discussed and techniques to address them are presented. The measurement is sensitive to new physics beyond the Standard Model at the scale of 3000 TeV.

  11. Dipole-relaxation parameters for Ce3+-Fint- complexes in CaF2:Ce and CaF2:Ce,Mn

    NASA Astrophysics Data System (ADS)

    Jassemnejad, B.; McKeever, S. W. S.

    1987-12-01

    Dipole-relaxation parameters for Ce3+-Fint- centers (C4v symmetry) in CaF2 are calculated using the method of ionic thermocurrents (ITC). The data indicate concentration-dependent effects if analyzed using the traditional ITC equation, assuming a single value for the reorientation activation energy. This analysis is unable to account for an observed broadening of the ITC peak as more Ce is added to the crystals. However, as has been published for other MF2:R3+ systems, we find that the broadening can be successfully accounted for by adopting a modified ITC equation which allows for a Gaussian distribution of activation energies about a mean value E0 and with a distribution width p. The parameter E0 is found to be independent of dipole content while p is found to increase with increasing dipole concentration. The data are consistent with a perturbation of the dipole-relaxation parameters due to interactions with other defects within the system. However, the strength of the observed effects is difficult to explain by invoking electrostatic dipole-dipole interactions only. Other perturbations, due perhaps to monopole-dipole interactions or elastic interactions, must be taking place. The data indicate that dipole concentrations calculated by ITC will be in error in the presence of such interactions due to a reduction in the mean contribution per dipole to the overall polarization density. For samples in which interaction effects are negligible, we calculate a dipole moment of 3.12×10^-29 C·m. The data further indicate that the addition of Mn to the system causes a decrease in the interaction effects via a reduction in the Ce C4v center dipole moment. It appears that the broadening of the ITC curve is sensitive to the defect structure surrounding the dipoles.

  12. Real-Time Localization of Moving Dipole Sources for Tracking Multiple Free-Swimming Weakly Electric Fish

    PubMed Central

    Jun, James Jaeyoon; Longtin, André; Maler, Leonard

    2013-01-01

    In order to survive, animals must quickly and accurately locate prey, predators, and conspecifics using the signals they generate. The signal source location can be estimated using multiple detectors and the inverse relationship between the received signal intensity (RSI) and the distance, but difficulty of the source localization increases if there is an additional dependence on the orientation of a signal source. In such cases, the signal source could be approximated as an ideal dipole for simplification. Based on a theoretical model, the RSI can be directly predicted from a known dipole location; but estimating a dipole location from RSIs has no direct analytical solution. Here, we propose an efficient solution to the dipole localization problem by using a lookup table (LUT) to store RSIs predicted by our theoretically derived dipole model at many possible dipole positions and orientations. For a given set of RSIs measured at multiple detectors, our algorithm found a dipole location having the closest matching normalized RSIs from the LUT, and further refined the location at higher resolution. Studying the natural behavior of weakly electric fish (WEF) requires efficiently computing their location and the temporal pattern of their electric signals over extended periods. Our dipole localization method was successfully applied to track single or multiple freely swimming WEF in shallow water in real-time, as each fish could be closely approximated by an ideal current dipole in two dimensions. Our optimized search algorithm found the animal’s positions, orientations, and tail-bending angles quickly and accurately under various conditions, without the need for calibrating individual-specific parameters. Our dipole localization method is directly applicable to studying the role of active sensing during spatial navigation, or social interactions between multiple WEF. 
Furthermore, our method could be extended to other application areas involving dipole source localization. PMID:23805244
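    The LUT search described above can be sketched as follows; the 2-D detector layout and the cos(θ)/r² intensity model are illustrative stand-ins for the authors' derived dipole model.

```python
import numpy as np

# LUT-based dipole localization sketch (2-D, illustrative detector layout and
# cos(theta)/r^2 intensity model; the paper derives its own dipole model).

detectors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.5]])

def rsi(pos, angle):
    """Received signal intensities of an ideal dipole at the detectors."""
    d = detectors - pos
    r = np.linalg.norm(d, axis=1)
    cos_t = d @ np.array([np.cos(angle), np.sin(angle)]) / r
    return np.abs(cos_t) / r**2

def normalize(v):
    return v / np.linalg.norm(v)

# Precompute normalized RSI patterns over a grid of candidate poses.
xs = np.linspace(0.1, 0.9, 9)
angles = np.linspace(0.0, np.pi, 13)
lut = [((x, y, a), normalize(rsi(np.array([x, y]), a)))
       for x in xs for y in xs for a in angles]

def localize(measured):
    """Return the candidate pose whose normalized RSIs best match."""
    m = normalize(np.asarray(measured, dtype=float))
    return min(lut, key=lambda entry: np.linalg.norm(entry[1] - m))[0]

pose = localize(rsi(np.array([xs[2], xs[6]]), angles[5]))
```

    Normalizing the RSI vectors before matching removes the unknown source strength, and the coarse LUT hit can then seed a local refinement at higher resolution, as the authors describe.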

  13. Vibrationally averaged dipole moments of methane and benzene isotopologues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arapiraca, A. F. C.; Centro Federal de Educação Tecnológica de Minas Gerais, Coordenação de Ciências, CEFET-MG, Campus I, 30.421-169 Belo Horizonte, MG; Mohallem, J. R., E-mail: rachid@fisica.ufmg.br

    DFT-B3LYP post-Born-Oppenheimer (finite-nuclear-mass-correction (FNMC)) calculations of vibrationally averaged isotopic dipole moments of methane and benzene, which compare well with experimental values, are reported. For methane, in addition to the principal vibrational contribution to the molecular asymmetry, FNMC accounts for the surprisingly large Born-Oppenheimer error of about 34% to the dipole moments. This unexpected result is explained in terms of concurrent electronic and vibrational contributions. The calculated dipole moment of C6H3D3 is about twice as large as the measured dipole moment of C6H5D. Computational progress is advanced concerning applications to larger systems and the choice of appropriate basis sets. The simpler procedure of performing vibrational averaging on the Born-Oppenheimer level and then adding the FNMC contribution evaluated at the equilibrium distance is shown to be appropriate. Also, the basis set choice is made by heuristic analysis of the physical behavior of the systems, instead of by comparison with experiments.

  14. Demonstration of Protection of a Superconducting Qubit from Energy Decay

    NASA Astrophysics Data System (ADS)

    Lin, Yen-Hsiang; Nguyen, Long B.; Grabon, Nicholas; San Miguel, Jonathan; Pankratova, Natalia; Manucharyan, Vladimir E.

    2018-04-01

    Long-lived transitions occur naturally in atomic systems due to the abundance of selection rules inhibiting spontaneous emission. By contrast, transitions of superconducting artificial atoms typically have large dipoles, and hence their lifetimes are determined by the dissipative environment of a macroscopic electrical circuit. We designed a multilevel fluxonium artificial atom such that the qubit's transition dipole can be exponentially suppressed by flux tuning, while it continues to dispersively interact with a cavity mode by virtual transitions to the noncomputational states. Remarkably, the energy decay time T1 grew by 2 orders of magnitude, proportionally to the inverse square of the transition dipole, and exceeded the benchmark value of T1 > 2 ms (quality factor Q1 > 4×10^7) without showing signs of saturation. The dephasing time was limited by the first-order coupling to flux noise to about 4 μs. Our circuit validated the general principle of hardware-level protection against bit-flip errors and can be upgraded to the 0-π circuit [P. Brooks, A. Kitaev, and J. Preskill, Phys. Rev. A 87, 052306 (2013), 10.1103/PhysRevA.87.052306], adding protection against dephasing and certain gate errors.

  15. Qubit-qubit interaction in quantum computers: errors and scaling laws

    NASA Astrophysics Data System (ADS)

    Gea-Banacloche, Julio R.

    1998-07-01

    This paper explores the limitations that interaction between the physical qubits making up a quantum computer may impose on the computer's performance. For computers using atoms as qubits, magnetic dipole-dipole interactions are likely to be dominant; various types of errors which they might introduce are considered here. The strength of the interaction may be reduced by increasing the distance between qubits, which in general will make the computer slower. For ion-chain based quantum computers the slowing down due to this effect is found to be generally more severe than that due to other causes. In particular, this effect alone would be enough to make these systems unacceptably slow for large-scale computation, whether they use the center-of-mass motion or an optical cavity mode as the 'bus'.

  16. Computation of forces from deformed visco-elastic biological tissues

    NASA Astrophysics Data System (ADS)

    Muñoz, José J.; Amat, David; Conte, Vito

    2018-04-01

    We present a least-squares based inverse analysis of visco-elastic biological tissues. The proposed method computes the set of contractile forces (dipoles) at the cell boundaries that induce the observed and quantified deformations. We show that the computation of these forces requires the regularisation of the problem functional for some load configurations that we study here. The functional measures the error of the dynamic problem being discretised in time with a second-order implicit time-stepping and in space with standard finite elements. We analyse the uniqueness of the inverse problem and estimate the regularisation parameter by means of an L-curved criterion. We apply the methodology to a simple toy problem and to an in vivo set of morphogenetic deformations of the Drosophila embryo.
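    The L-curve criterion mentioned above can be illustrated on a generic ill-conditioned least-squares problem (a toy system, not the paper's visco-elastic tissue model): Tikhonov-regularised solutions are computed over a range of regularisation parameters, and the curve of log residual norm versus log solution norm traces the familiar "L", whose corner is a common heuristic for the parameter choice.

```python
import numpy as np

# Tikhonov regularisation on a toy ill-conditioned system, tracing the L-curve:
# min_f ||A f - b||^2 + lam * ||f||^2.  Matrix, data, and noise level invented.

rng = np.random.default_rng(1)
n = 40
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -6, n)          # rapidly decaying singular values
A = U @ np.diag(s) @ V.T
b = A @ rng.standard_normal(n) + 1e-4 * rng.standard_normal(n)

def tikhonov(lam):
    """Residual norm and solution norm of the regularised solution."""
    f = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)
    return np.linalg.norm(A @ f - b), np.linalg.norm(f)

lams = 10.0 ** np.linspace(-10, 2, 60)
curve = np.array([tikhonov(lam) for lam in lams])
# Plotting log(curve[:, 0]) against log(curve[:, 1]) gives the "L"; its corner
# balances data misfit against solution size and selects the regularisation weight.
```

    Small weights underpenalise and inflate the solution norm, large weights inflate the residual; the corner sits between the two regimes.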

  17. A new estimate of average dipole field strength for the last five million years

    NASA Astrophysics Data System (ADS)

    Cromwell, G.; Tauxe, L.; Halldorsson, S. A.

    2013-12-01

    The Earth's ancient magnetic field can be approximated by a geocentric axial dipole (GAD) where the average field intensity is twice as strong at the poles as at the equator. The present day geomagnetic field, and some global paleointensity datasets, support the GAD hypothesis with a virtual axial dipole moment (VADM) of about 80 ZAm2. Significant departures from GAD for 0-5 Ma are found in Antarctica and Iceland where paleointensity experiments on massive flows (Antarctica) (1) and volcanic glasses (Iceland) produce average VADM estimates of 41.4 ZAm2 and 59.5 ZAm2, respectively. These combined intensities are much closer to a lower estimate for long-term dipole field strength, 50 ZAm2 (2), and some other estimates of average VADM based on paleointensities strictly from volcanic glasses. Proposed explanations for the observed non-GAD behavior, from otherwise high-quality paleointensity results, include incomplete temporal sampling, effects from the tangent cylinder, and hemispheric asymmetry. Differences in estimates of average magnetic field strength likely arise from inconsistent selection protocols and experiment methodologies. We address these possible biases and estimate the average dipole field strength for the last five million years by compiling measurement level data of IZZI-modified paleointensity experiments from lava flows around the globe (including new results from Iceland and the HSDP-2 Hawaii drill core). We use the Thellier Gui paleointensity interpreter (3) in order to apply objective criteria to all specimens, ensuring consistency between sites. Specimen level selection criteria are determined from a recent paleointensity investigation of modern Hawaiian lava flows where the expected magnetic field strength was accurately recovered when following certain selection parameters.
Our new estimate of average dipole field strength for the last five million years incorporates multiple paleointensity studies on lava flows with diverse global and temporal distributions, and objectively constrains site level estimates by applying uniform selection requirements on measurement level data. (1) Lawrence, K.P., L. Tauxe, H. Staudigel, C.G. Constable, A. Koppers, W. McIntosh, C.L. Johnson, Paleomagnetic field properties at high southern latitude, Geochemistry Geophysics Geosystems, 10, 2009. (2) Selkin, P.A., L. Tauxe, Long-term variations in palaeointensity, Phil. Trans. R. Soc. Lond., 358, 1065-1088, 2000. (3) Shaar, R., L. Tauxe, Thellier GUI: An integrated tool for analyzing paleointensity data from Thellier-type experiments, Geochemistry Geophysics Geosystems, 14, 2013
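    The VADM values quoted above follow from the geocentric axial dipole field geometry: at geographic latitude λ the surface intensity is B = (μ0 m / 4π r³) √(1 + 3 sin²λ), so a site paleointensity converts to a dipole moment as sketched below (standard constants; the 30 μT example value is illustrative).

```python
import numpy as np

# VADM from a site paleointensity, assuming a geocentric axial dipole (GAD).
MU0 = 4e-7 * np.pi   # vacuum permeability, T m / A
R_E = 6.371e6        # Earth radius, m

def vadm(B, lat_deg):
    """Virtual axial dipole moment (A m^2) from intensity B (T) at latitude lat_deg."""
    lam = np.radians(lat_deg)
    return 4 * np.pi * R_E**3 * B / (MU0 * np.sqrt(1 + 3 * np.sin(lam) ** 2))

# A 30 microtesla equatorial field gives roughly 78 ZAm^2 (1 ZAm^2 = 1e21 A m^2),
# near the ~80 ZAm^2 present-day value quoted above; the same moment predicts a
# field twice as strong at the poles, consistent with the GAD statement.
m_eq = vadm(30e-6, 0.0)
```

    Converting every site to a VADM in this way is what makes paleointensities from different latitudes directly comparable.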

  18. Permanent electric dipole moments of PtX (X = H, F, Cl, Br, and I) by the composite approach

    NASA Astrophysics Data System (ADS)

    Deng, Dan; Lian, Yongqin; Zou, Wenli

    2017-11-01

    Using the FPD composite approach of Peterson et al., we calculate the permanent electric dipole moments of PtX (X = H, F, Cl, Br, and I) at the equilibrium geometries of their ground states. The dipole moment of PtF is estimated to be 3.421 Debye, very close to the experimental value of 3.42(6) Debye. This research also suggests that the ordering of the dipole moments of PtX is proportional to the electronegativity of X.

  19. Monopole and dipole estimation for multi-frequency sky maps by linear regression

    NASA Astrophysics Data System (ADS)

    Wehus, I. K.; Fuskeland, U.; Eriksen, H. K.; Banday, A. J.; Dickinson, C.; Ghosh, T.; Górski, K. M.; Lawrence, C. R.; Leahy, J. P.; Maino, D.; Reich, P.; Reich, W.

    2017-01-01

    We describe a simple but efficient method for deriving a consistent set of monopole and dipole corrections for multi-frequency sky map data sets, allowing robust parametric component separation with the same data set. The computational core of this method is linear regression between pairs of frequency maps, often called T-T plots. Individual contributions from monopole and dipole terms are determined by performing the regression locally in patches on the sky, while the degeneracy between different frequencies is lifted whenever the dominant foreground component exhibits a significant spatial spectral index variation. Based on this method, we present two different, but each internally consistent, sets of monopole and dipole coefficients for the nine-year WMAP, Planck 2013, SFD 100 μm, Haslam 408 MHz and Reich & Reich 1420 MHz maps. The two sets have been derived with different analysis assumptions and data selection, and provide an estimate of residual systematic uncertainties. In general, our values are in good agreement with previously published results. Among the most notable results are a relative dipole between the WMAP and Planck experiments of 10-15 μK (depending on frequency), an estimate of the 408 MHz map monopole of 8.9 ± 1.3 K, and a non-zero dipole in the 1420 MHz map of 0.15 ± 0.03 K pointing towards Galactic coordinates (l,b) = (308°,-36°) ± 14°. These values represent the sum of any instrumental and data processing offsets, as well as any Galactic or extra-Galactic component that is spectrally uniform over the full sky.
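    The computational core, a T-T plot, is ordinary linear regression between the pixels of two maps within a patch: if the common foreground scales by some factor between frequencies, the fitted slope estimates that factor and the intercept absorbs the relative monopole offsets. A synthetic sketch (all numbers invented):

```python
import numpy as np

# T-T plot sketch: regress pixels of one frequency map against another over a
# patch. Synthetic numbers: a common foreground scaled by 2.0 between the maps,
# with monopole offsets +0.3 and -0.1 added to the two maps.

rng = np.random.default_rng(2)
foreground = np.abs(rng.standard_normal(500))   # shared sky signal in the patch
map1 = foreground + 0.3
map2 = 2.0 * foreground - 0.1

slope, intercept = np.polyfit(map1, map2, 1)
# slope recovers the scaling (2.0); intercept = -0.1 - 2.0 * 0.3 = -0.7 mixes
# the two monopoles, so a single patch cannot separate them.
```

    Repeating the fit patch by patch, and exploiting spatial variation of the foreground spectral index (i.e. of the slope), is what lifts this degeneracy between frequencies, as described above.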

  20. Investigation on the individual contributions of N-H...O=C and C-H...O=C interactions to the binding energies of beta-sheet models.

    PubMed

    Wang, Chang-Sheng; Sun, Chang-Liang

    2010-04-15

    In this article, the binding energies of 16 antiparallel and parallel beta-sheet models are estimated using the analytic potential energy function we proposed recently and the results are compared with those obtained from MP2, AMBER99, OPLSAA/L, and CHARMM27 calculations. The comparisons indicate that the analytic potential energy function can produce reasonable binding energies for beta-sheet models. Further comparisons suggest that the binding energy of the beta-sheet models might come mainly from dipole-dipole attractive and repulsive interactions and VDW interactions between the two strands. The dipole-dipole attractive and repulsive interactions are further obtained in this article. The total of the N-H...H-N and C=O...O=C dipole-dipole repulsive interactions (the secondary electrostatic repulsive interaction) in the small ring of the antiparallel beta-sheet models is estimated to be about 6.0 kcal/mol. The individual N-H...O=C dipole-dipole attractive interaction is predicted to be -6.2 +/- 0.2 kcal/mol in the antiparallel beta-sheet models and -5.2 +/- 0.6 kcal/mol in the parallel beta-sheet models. The individual C(alpha)-H...O=C attractive interaction is -1.2 +/- 0.2 kcal/mol in the antiparallel beta-sheet models and -1.5 +/- 0.2 kcal/mol in the parallel beta-sheet models. These values are important in understanding the interactions at protein-protein interfaces and developing a more accurate force field for peptides and proteins. Copyright © 2009 Wiley Periodicals, Inc.

  1. Quantum theory of atoms in molecules charge-charge flux-dipole flux models for the infrared intensities of X(2)CY (X = H, F, Cl; Y = O, S) molecules.

    PubMed

    Faria, Sergio H D M; da Silva, João Viçozo; Haiduke, Roberto L A; Vidal, Luciano N; Vazquez, Pedro A M; Bruns, Roy E

    2007-08-16

    The molecular dipole moments, their derivatives, and the fundamental IR intensities of the X2CY (X = H, F, Cl; Y = O, S) molecules are determined from QTAIM atomic charges and dipoles and their fluxes at the MP2/6-311++G(3d,3p) level. Root-mean-square errors of +/-0.03 D and +/-1.4 km mol(-1) are found for the molecular dipole moments and fundamental IR intensities calculated using quantum theory of atoms in molecules (QTAIM) parameters when compared with those obtained directly from the MP2/6-311++G(3d,3p) calculations and +/-0.05 D and 51.2 km mol(-1) when compared with the experimental values. Charge (C), charge flux (CF), and dipole flux (DF) contributions are reported for all the normal vibrations of these molecules. A large negative correlation coefficient of -0.83 is calculated between the charge flux and dipole flux contributions and indicates that electronic charge transfer from one side of the molecule to the other during vibrations is accompanied by a relaxation effect with electron density polarization in the opposite direction. The characteristic substituent effect that has been observed for experimental infrared intensity parameters and core electron ionization energies has been applied to the CCFDF/QTAIM parameters of F2CO, Cl2CO, F2CS, and Cl2CS. The individual atomic charge, atomic charge flux, and atomic dipole flux contributions are seen to obey the characteristic substituent effect equation just as accurately as the total dipole moment derivative. The CH, CF, and CCl stretching normal modes of these molecules are shown to have characteristic sets of charge, charge flux, and dipole flux contributions.
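    The C/CF/DF partition used above is just the product rule applied to p = Σᵢ (qᵢ xᵢ + mᵢ): differentiating along a nuclear displacement gives a charge term qᵢ ∂xᵢ, a charge-flux term xᵢ ∂qᵢ, and a dipole-flux term ∂mᵢ. The one-dimensional toy below uses invented charge and dipole functions purely to show that the three contributions sum to the total derivative.

```python
import numpy as np

# Toy 1-D charge / charge-flux / dipole-flux partition of a dipole derivative:
# p(Q) = sum_i q_i(Q) * x_i(Q) + m_i(Q), so by the product rule
# dp/dQ = sum_i q_i dx_i/dQ (C) + x_i dq_i/dQ (CF) + dm_i/dQ (DF).
# The position, charge, and atomic-dipole functions are invented for illustration.

def atoms(Q):
    x = np.array([-0.5 - Q / 2, 0.5 + Q / 2])      # symmetric stretch coordinate
    q = np.array([0.3 - 0.1 * Q, -0.3 + 0.1 * Q])  # charge transfer along Q
    m = np.array([0.05 * Q, 0.05 * Q])             # induced atomic dipoles
    return x, q, m

def total_dipole(Q):
    x, q, m = atoms(Q)
    return float(np.sum(q * x + m))

h = 1e-6
x0, q0, m0 = atoms(0.0)
xp, qp, mp = atoms(h)
xm, qm, mm = atoms(-h)
charge = float(np.sum(q0 * (xp - xm)) / (2 * h))
charge_flux = float(np.sum(x0 * (qp - qm)) / (2 * h))
dipole_flux = float(np.sum(mp - mm) / (2 * h))
total = (total_dipole(h) - total_dipole(-h)) / (2 * h)
```

    In the toy model the charge and charge-flux terms partly cancel, echoing the negative correlation between the CF and DF contributions reported above.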

  2. Neutron Electric Dipole Moment in the Standard Model: Complete Three-Loop Calculation of the Valence Quark Contributions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Czarnecki, A.; Krause, B.

    1997-06-01

    We present a complete three-loop calculation of the electric dipole moment of the u and d quarks in the standard model. For the d quark, more relevant for the experimentally important neutron electric dipole moment, we find cancellations which lead to an order of magnitude suppression compared with previous estimates. © 1997 The American Physical Society.

  3. Towards an effective record of dipole moment variations since the Precambrian using new reliability criteria and outputs from numerical dynamo simulations

    NASA Astrophysics Data System (ADS)

    Biggin, A. J.; Suttie, N.; Paterson, G. A.; Aubert, J.; Hurst, E.; Clarke, A.

    2013-12-01

    On timescales over which mantle convection may be affecting the geodynamo (10-100s of millions of years), magnetic reversal frequency is the best documented aspect of geomagnetic behaviour. Suitable, continuous recorders of this parameter become very sparse, however, before a few hundred million years ago, presenting a major challenge to documenting and understanding geomagnetic variations on the timescale of even the most recent supercontinent cycle. It is hypothetically possible to measure the absolute geomagnetic palaeointensity from any geological material that has cooled from above the Curie temperature of its constituent magnetic remanence carriers. Since igneous rocks are abundant in the geological record, estimates of dipole moment from these present a vital resource for documenting geomagnetic variations into deep time. In practice, a host of practical problems makes obtaining such measurements reliably from geological materials challenging. Nevertheless, the absolute palaeointensity database PINT, newly linked to the comprehensive Magnetics Information Consortium (MagIC) database, already contains 3,941 published dipole moment estimates from rocks older than 50,000 years and continues to grow rapidly. In order that even the existing record may be used to maximum effectiveness in characterising geomagnetic behaviour, two challenges must be met: 1. the variable reliability of individual measurements must be reasonably assessed; 2. the impact of the inhomogeneous distribution of dipole moment estimates in space and time must be ascertained. Here, we will report efforts attempting to address these two challenges using novel approaches. A new set of quality criteria for palaeointensity data (QPI) has been developed and tested by application to studies recently incorporated into PINT. To address challenge 1, we propose that every published dipole moment estimate eventually be given a QPI score indicating the number of these criteria fulfilled.
To begin to address challenge 2, we take an approach using the outputs of numerical dynamo simulations. This involves subsampling synthetic global time series of full-vector magnetic field data, converting these datasets into virtual (axial) dipole moments, and comparing these to the entire distribution to ascertain how well secular variation is averaged and characterised. Finally, the two approaches will be combined. Datasets of real dipole moment estimates, filtered by QPI, will be compared to the synthetic distributions in order to present more robust characterisations of geomagnetic behaviour in different time intervals than have previously been possible.
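The virtual (axial) dipole moment conversion described above follows the standard geocentric dipole field relation. A minimal sketch in Python (function name and workflow are illustrative, not taken from PINT, MagIC, or the dynamo simulations):

```python
import math

MU0 = 4 * math.pi * 1e-7      # vacuum permeability (T m / A)
R_EARTH = 6.371e6             # mean Earth radius (m)

def virtual_axial_dipole_moment(intensity_t, latitude_deg):
    """Convert a palaeointensity (tesla) at a site latitude into a VADM (A m^2).

    Geocentric axial dipole relation:
        F = (mu0 * m / (4 pi R^3)) * sqrt(1 + 3 sin^2(lat)),
    solved here for the moment m.
    """
    lat = math.radians(latitude_deg)
    return (4 * math.pi * R_EARTH**3 / MU0) * intensity_t \
        / math.sqrt(1 + 3 * math.sin(lat)**2)

# 50 microtesla measured at the equator -> roughly 1.3e23 A m^2,
# comparable to the present-day dipole moment (~8e22 A m^2)
m = virtual_axial_dipole_moment(50e-6, 0.0)
```

A useful self-consistency check is that a polar site records twice the equatorial intensity for the same moment, so 100 microtesla at latitude 90 returns the same VADM.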

  4. Solvatochromic studies on 4-Bromomethyl-7-methyl coumarins

    NASA Astrophysics Data System (ADS)

    Khanapurmath, Netravati; Kulkarni, Manohar V.; Pallavi, L.; Yenagi, Jayashree; Tonannavar, Jagdish

    2018-05-01

    Nitro-free and dinitro 4-bromomethyl-7-methyl coumarins, together with new mono- and trinitro derivatives, have been synthesized. The effect of the nitro groups on the photophysical properties of the parent 4-bromomethyl-7-methyl coumarin is reported. Their ground- and excited-state dipole moments have been estimated by the solvatochromic method using nine solvents. Reasonable agreement is observed between calculated and observed dipole moments. A reduction in dipole moment is observed for the mono- and dinitro compounds, whereas the trinitro compound was found to have a higher dipole moment in the excited state.

  5. A polarizable dipole-dipole interaction model for evaluation of the interaction energies for N-H···O=C and C-H···O=C hydrogen-bonded complexes.

    PubMed

    Li, Shu-Shi; Huang, Cui-Ying; Hao, Jiao-Jiao; Wang, Chang-Sheng

    2014-03-05

    In this article, a polarizable dipole-dipole interaction model is established to estimate the equilibrium hydrogen bond distances and interaction energies of hydrogen-bonded complexes containing peptide amides and nucleic acid bases. We regard the chemical bonds N-H, C=O, and C-H as bond dipoles, with the magnitude of each bond dipole moment varying according to its environment. We apply this polarizable dipole-dipole interaction model to a series of hydrogen-bonded complexes containing N-H···O=C and C-H···O=C hydrogen bonds, such as simple amide-amide dimers, base-base dimers, peptide-base dimers, and β-sheet models. We find that a simple two-term function, containing only the permanent dipole-dipole interactions and the van der Waals interactions, produces equilibrium hydrogen bond distances that compare favorably with those from the MP2/6-31G(d) method, whereas the high-quality counterpoise-corrected (CP-corrected) MP2/aug-cc-pVTZ interaction energies for the hydrogen-bonded complexes can be well reproduced by a four-term function that involves the permanent dipole-dipole interactions, the van der Waals interactions, the polarization contributions, and a correction term. Based on the results obtained from this polarizable dipole-dipole interaction model, the nature of the hydrogen bonding interactions in these complexes is further discussed.
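The permanent dipole-dipole term at the heart of such a model is the classical point-dipole interaction energy. A hedged sketch of that textbook formula (not the paper's parameterized four-term function):

```python
import numpy as np

K_E = 8.9875517923e9  # Coulomb constant 1/(4 pi eps0), N m^2 / C^2

def dipole_dipole_energy(p1, p2, r_vec):
    """Interaction energy (J) of two point dipoles p1, p2 (C m)
    separated by r_vec (m):
        U = k_e / r^3 * (p1.p2 - 3 (p1.rhat)(p2.rhat))
    """
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return K_E / r**3 * (np.dot(p1, p2) - 3 * np.dot(p1, rhat) * np.dot(p2, rhat))

# Two 1 D bond dipoles (1 D = 3.33564e-30 C m) aligned head-to-tail,
# 3 angstroms apart: the collinear arrangement is attractive (U < 0).
D = 3.33564e-30
U = dipole_dipole_energy(np.array([D, 0.0, 0.0]), np.array([D, 0.0, 0.0]),
                         np.array([3e-10, 0.0, 0.0]))
```

For this head-to-tail geometry the bracket reduces to -2 p1 p2, giving roughly -7.4e-21 J (about -4.5 kJ/mol), a plausible scale for a single dipolar contact.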

  6. Persistent Axial Dipole Decay for Past 400 Years Deduced from Lava Flows in Japan

    NASA Astrophysics Data System (ADS)

    Fukuma, K.

    2017-12-01

    Temporal variation of the axial dipole moment g10 over the last 400 years was deduced from paleointensity data obtained from the volcanic islands Izu-Oshima and Miyakejima in Japan, combined with the historical field model gufm1. The basaltic lava flows are precisely dated from ancient documents describing the eruptions, so age errors are essentially negligible. Thellier paleointensity measurements were performed with a fully automated magnetometer-furnace system, "tspin", on about 450 specimens collected mainly from clinkers and scoria. Appropriate Thellier temperature steps for each specimen were chosen based on the thermomagnetic curve, which varied considerably with vertical position within a lava flow. The newly obtained paleointensities are much more consistent between sites and provide a more reliable paleointensity variation than previous data from lava interiors. I applied the method of Gubbins et al. [2006] to this single-spot paleointensity record from Japan and obtained a persistent decay of the axial dipole moment over the last 400 years. Contrary to gufm1's assumption that g10 decayed linearly from 1590 to 1840, extrapolating the post-1840 instrumental records, Gubbins et al. [2006] argued that no definite temporal trend in g10 is recognizable from the existing archeointensity database. The g10 variation calculated from the previous paleointensity data is seriously compromised by both age and intensity errors resulting from the various materials, locations, and experimental methods involved. Our single-spot, well-dated paleointensity data are free from these problems and support persistent axial dipole decay over the past 400 years, as assumed in gufm1.

  7. Fast Electron Correlation Methods for Molecular Clusters without Basis Set Superposition Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kamiya, Muneaki; Hirata, So; Valiev, Marat

    2008-02-19

    Two critical extensions to our fast, accurate, and easy-to-implement binary or ternary interaction method for weakly interacting molecular clusters [Hirata et al. Mol. Phys. 103, 2255 (2005)] have been proposed, implemented, and applied to water hexamers, hydrogen fluoride chains and rings, and neutral and zwitterionic glycine–water clusters, with excellent results in an initial performance assessment. Our original method included up to two- or three-body Coulomb, exchange, and correlation energies exactly and higher-order Coulomb energies in the dipole–dipole approximation. In this work, the dipole moments are replaced by atom-centered point charges determined so that they reproduce the electrostatic potentials of the cluster subunits as closely as possible and also self-consistently with one another in the cluster environment. They have been shown to lead to dramatic improvement in the description of short-range electrostatic potentials, not only of large, charge-separated subunits like zwitterionic glycine but also of small subunits. Furthermore, basis set superposition errors (BSSE), known to plague direct evaluation of weak interactions, have been eliminated by combining the Valiron–Mayer function counterpoise (VMFC) correction with our binary or ternary interaction method in an economical fashion (quadratic scaling n^2 with respect to the number of subunits n when n is small, and linear scaling when n is large). A new variant of VMFC has also been proposed in which three-body and all higher-order Coulomb effects on BSSE are estimated approximately. The BSSE-corrected ternary interaction method with atom-centered point charges reproduces the VMFC-corrected results of conventional electron correlation calculations within 0.1 kcal/mol. The proposed method is significantly more accurate, and also more efficient, than conventional correlation methods uncorrected for BSSE.

  8. Ab Initio Calculations of Water Line Strengths

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Partridge, Harry

    1998-01-01

    We report on the determination of a high-quality ab initio potential energy surface (PES) and dipole moment function for water. This PES is empirically adjusted to improve the agreement between the computed line positions and those from the HITRAN 92 database with J less than 6 for H2O. The changes in the PES are small; nonetheless, including an estimate of core (oxygen 1s) electron correlation greatly improves the agreement with experiment. Using this adjusted PES, we can match 30,092 of the 30,117 transitions in the HITRAN 96 database for H2O with theoretical lines. The 10, 25, 50, 75, and 90 percentiles of the difference between the calculated and tabulated line positions are -0.11, -0.04, -0.01, 0.02, and 0.07 cm^-1. Non-adiabatic effects are not explicitly included. About 3% of the tabulated line positions appear to be incorrect. Similar agreement using this adjusted PES is obtained for the oxygen-17 and oxygen-18 isotopes. For HDO, the agreement is not as good, with a root-mean-square error of 0.25 cm^-1 for lines with J less than 6. This error is reduced to 0.02 cm^-1 by including a small asymmetric correction to the PES, which is parameterized by simultaneously fitting to HDO and D2O data. Scaling this correction by mass factors yields good results for T2O and HTO. The intensities summed over vibrational bands are usually in good agreement between the calculations and the tabulated results, but individual line strengths can differ greatly. A high-temperature list consisting of 307,721,352 lines is generated for H2O using our PES and dipole moment function.

  9. A priori predictions of the rotational constants for HC13N, HC15N, C5O

    NASA Technical Reports Server (NTRS)

    DeFrees, D. J.; McLean, A. D.

    1989-01-01

    Ab initio molecular orbital theory is used to estimate the rotational constants of several carbon-chain molecules that are candidates for discovery in interstellar space. These estimated rotational constants can be used in laboratory or astronomical searches for the molecules. The rotational constant of HC13N is estimated to be 0.1073 +/- 0.0002 GHz and its dipole moment 5.4 D. The rotational constant of HC15N is estimated to be 0.0724 GHz, with a somewhat larger uncertainty. The rotational constant of C5O is estimated to be 1.360 GHz +/- 2% and its dipole moment 4.4 D.
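A rotational constant of this kind follows from the moment of inertia about the centre of mass, B = h/(8π²I). A small illustrative check against CO (not one of the molecules in the study; masses and bond length are standard reference values):

```python
import math

H = 6.62607015e-34       # Planck constant (J s)
AMU = 1.66053906660e-27  # atomic mass unit (kg)

def rotational_constant_ghz(masses_amu, positions_angstrom):
    """Rotational constant B = h / (8 pi^2 I) in GHz for a linear molecule,
    with the moment of inertia I taken about the centre of mass
    (atom positions given along a single axis)."""
    m = [mi * AMU for mi in masses_amu]
    x = [xi * 1e-10 for xi in positions_angstrom]
    xcm = sum(mi * xi for mi, xi in zip(m, x)) / sum(m)
    inertia = sum(mi * (xi - xcm) ** 2 for mi, xi in zip(m, x))
    return H / (8 * math.pi**2 * inertia) / 1e9

# Sanity check on CO (r_e ~ 1.128 angstrom): B_e is about 57.9 GHz
b_co = rotational_constant_ghz([12.000, 15.9949], [0.0, 1.128])
```

For longer chains like C5O the same routine just takes more atoms; the larger inertia is what drives B down to the ~1 GHz scale quoted above.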

  10. Gaussian polarizable-ion tight binding.

    PubMed

    Boleininger, Max; Guilbert, Anne Ay; Horsfield, Andrew P

    2016-10-14

    To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).

  11. Gaussian polarizable-ion tight binding

    NASA Astrophysics Data System (ADS)

    Boleininger, Max; Guilbert, Anne AY; Horsfield, Andrew P.

    2016-10-01

    To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).

  12. Dipole oscillator strength distributions with improved high-energy behavior: Dipole sum rules and dispersion coefficients for Ne, Ar, Kr, and Xe revisited

    NASA Astrophysics Data System (ADS)

    Kumar, Ashok; Thakkar, Ajit J.

    2010-02-01

    The construction of the dipole oscillator strength distribution (DOSD) from theoretical and experimental photoabsorption cross sections combined with constraints provided by the Kuhn-Reiche-Thomas sum rule and molar refractivity data is a well-established technique that has been successfully applied to more than 50 species. Such DOSDs are insufficiently accurate at large photon energies. A novel iterative procedure is developed that rectifies this deficiency by using the high-energy asymptotic behavior of the dipole oscillator strength density as an additional constraint. Pilot applications are made for the neon, argon, krypton, and xenon atoms. The resulting DOSDs improve the agreement of the predicted S2 and S1 sum rules with ab initio calculations while preserving the accuracy of the remainder of the moments. Our DOSDs exploit new and more accurate experimental data. Improved estimates of dipole properties for these four atoms and of dipole-dipole C6 and triple-dipole C9 dispersion coefficients for the interactions among them are reported.
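The sum rules S(k) and C6 dispersion coefficients mentioned above are simple functionals of a discrete DOSD. A toy sketch in atomic units (a one-term model distribution for illustration, not the constrained DOSDs of the paper):

```python
def dosd_sum_rule(f, e, k):
    """S(k) = sum_i f_i * E_i**k for a discrete dipole oscillator
    strength distribution (energies in hartree, atomic units)."""
    return sum(fi * ei**k for fi, ei in zip(f, e))

def c6_dispersion(f_a, e_a, f_b, e_b):
    """London-type double sum for the dipole-dipole C6 coefficient (a.u.):
        C6 = (3/2) sum_i sum_j f_i f_j / (E_i E_j (E_i + E_j))
    """
    return 1.5 * sum(
        fi * fj / (ei * ej * (ei + ej))
        for fi, ei in zip(f_a, e_a)
        for fj, ej in zip(f_b, e_b)
    )

# One-term toy distribution for hydrogen: f = 1 at E = 0.5 hartree.
# The Kuhn-Reiche-Thomas rule S(0) equals the electron count, and the
# toy C6(H,H) comes out 6.0 a.u., close to the accurate 6.499 a.u.
s0 = dosd_sum_rule([1.0], [0.5], 0)
c6 = c6_dispersion([1.0], [0.5], [1.0], [0.5])
```

The paper's constraint set amounts to demanding that S(0), S(-2) (via the molar refractivity), and the high-energy density behaviour of such distributions all take their known values.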

  13. Improving Planck calibration by including frequency-dependent relativistic corrections

    NASA Astrophysics Data System (ADS)

    Quartin, Miguel; Notari, Alessio

    2015-09-01

    The Planck satellite detectors are calibrated in the 2015 release using the "orbital dipole", the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. This effect also has relativistic time-dependent corrections of relative magnitude 10^-3, due to coupling with the "solar dipole" (the motion of the Sun relative to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since Planck calibration errors are currently dominated by systematics, to the point that polarization data is unreliable at large scales, such a correction can in principle be highly relevant for future data releases.

  14. Bandpass mismatch error for satellite CMB experiments I: estimating the spurious signal

    NASA Astrophysics Data System (ADS)

    Thuong Hoang, Duc; Patanchon, Guillaume; Bucher, Martin; Matsumura, Tomotake; Banerji, Ranajoy; Ishino, Hirokazu; Hazumi, Masashi; Delabrouille, Jacques

    2017-12-01

    Future Cosmic Microwave Background (CMB) satellite missions aim to use the B-mode polarization to measure the tensor-to-scalar ratio r with a sensitivity σ_r ≲ 10^-3. Achieving this goal will require not only sufficient detector array sensitivity but also unprecedented control of all systematic errors inherent in CMB polarization measurements. Since polarization measurements derive from differences between observations at different times and from different sensors, detector response mismatches introduce leakage from intensity to polarization and thus lead to a spurious B-mode signal. Because the expected primordial B-mode polarization signal is dwarfed by the known unpolarized intensity signal, such leakage could contribute substantially to the final error budget for measuring r. Using simulations, we estimate the magnitude and angular spectrum of the spurious B-mode signal resulting from bandpass mismatch between different detectors. It is assumed here that the detectors are calibrated, for example using the CMB dipole, so that their sensitivity to the primordial CMB signal has been perfectly matched. Consequently, the mismatch in the frequency bandpass shape between detectors introduces differences in the relative calibration of galactic emission components. We simulate this effect using a range of scanning patterns being considered for future satellite missions. We find that the spurious contribution to r from the reionization bump on large angular scales (l < 10) is ≈ 10^-3, assuming large detector arrays and 20 percent of the sky masked. We show how the amplitude of the leakage depends on the nonuniformity of the angular coverage in each pixel that results from the scan pattern.

  15. Spectral and physicochemical properties of difluoroboranyls containing N,N-dimethylamino group studied by solvatochromic methods

    NASA Astrophysics Data System (ADS)

    Jędrzejewska, Beata; Grabarz, Anna; Bartkowiak, Wojciech; Ośmiałowski, Borys

    2018-06-01

    The solvatochromism of the dyes was analyzed with the four-parameter scale proposed by Catalán, comprising polarizability (SP), dipolarity (SdP), acidity (SA), and basicity (SB). Changing to a more polar solvent caused a red shift of the absorption and fluorescence band positions; these frequency shifts reflect the change in dipole moment upon excitation. The ground-state dipole moments of the difluoroboranyls were estimated from changes in molecular polarization with temperature. Moreover, the Stokes shifts were used to calculate the excited-state dipole moments of the dyes. For this calculation, the ground-state dipole moments and the Onsager cavity radius were also determined theoretically using density functional theory (DFT). The experimentally determined excited-state dipole moments of the compounds are higher than the corresponding ground-state values. The increase in dipole moment is discussed in terms of the nature of the excited state.
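Extracting an excited-state dipole moment from Stokes shifts is commonly done with the Lippert-Mataga relation; whether this or a related solvatochromic equation was used here is not stated, so the following is a generic, hedged sketch run on synthetic data:

```python
import numpy as np

EPS0_4PI = 1.112650056e-10   # 4*pi*eps0, C^2 J^-1 m^-1
H = 6.62607015e-34           # Planck constant (J s)
C_CM = 2.99792458e10         # speed of light in cm/s (shifts in cm^-1)
DEBYE = 3.33564e-30          # 1 debye in C m

def orientation_polarizability(eps, n):
    """Lippert-Mataga solvent polarity function delta_f."""
    return (eps - 1) / (2 * eps + 1) - (n**2 - 1) / (2 * n**2 + 1)

def delta_mu_debye(df_values, stokes_cm1, cavity_radius_m):
    """Linear fit of Stokes shift vs delta_f over a solvent series;
    the Lippert-Mataga slope 2 (mu_e - mu_g)^2 / (4 pi eps0 h c a^3)
    yields |mu_e - mu_g|, returned in debye."""
    slope = np.polyfit(df_values, stokes_cm1, 1)[0]
    return np.sqrt(slope * EPS0_4PI * H * C_CM * cavity_radius_m**3 / 2) / DEBYE

# Synthetic check: data generated from a known 5 D dipole change with an
# Onsager cavity radius a = 4 angstroms and nine "solvents".
a = 4e-10
df = np.linspace(0.10, 0.32, 9)
true_slope = 2 * (5 * DEBYE) ** 2 / (EPS0_4PI * H * C_CM * a**3)
shifts = true_slope * df + 1500.0   # plus a solvent-independent offset
dmu = delta_mu_debye(df, shifts, a)
```

The fit recovers the 5 D input exactly here because the synthetic data are noiseless; with real shifts, the cavity radius (here from DFT, per the abstract) dominates the uncertainty through its cubed appearance.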

  16. Force on an electric/magnetic dipole and classical approach to spin-orbit coupling in hydrogen-like atoms

    NASA Astrophysics Data System (ADS)

    Kholmetskii, A. L.; Missevitch, O. V.; Yarman, T.

    2017-09-01

    We carry out the classical analysis of spin-orbit coupling in hydrogen-like atoms, using the modern expressions for the force and energy of an electric/magnetic dipole in an electromagnetic field. We disclose a novel physical meaning of this effect and show that for a laboratory observer the energy of spin-orbit interaction is represented solely by the mechanical energy of the spinning electron (considered as a gyroscope) due to the Thomas precession of its spin. Concurrently we disclose some errors in the old and new publications on this subject.

  17. Magnetoencephalographic accuracy profiles for the detection of auditory pathway sources.

    PubMed

    Bauer, Martin; Trahms, Lutz; Sander, Tilmann

    2015-04-01

    The detection limits for cortical and brain stem sources associated with the auditory pathway are examined in order to analyse brain responses at the limits of the audible frequency range. The results obtained from this study are also relevant to other issues of auditory brain research. A complementary approach consisting of recordings of magnetoencephalographic (MEG) data and simulations of magnetic field distributions is presented in this work. A biomagnetic phantom consisting of a spherical volume filled with a saline solution and four current dipoles is built. The magnetic fields outside of the phantom generated by the current dipoles are then measured for a range of applied electric dipole moments with a planar multichannel SQUID magnetometer device and a helmet MEG gradiometer device. The inclusion of a magnetometer system is expected to be more sensitive to brain stem sources compared with a gradiometer system. The same electrical and geometrical configuration is simulated in a forward calculation. From both the measured and the simulated data, the dipole positions are estimated using an inverse calculation. Results are obtained for the reconstruction accuracy as a function of applied electric dipole moment and depth of the current dipole. We found that both systems can localize cortical and subcortical sources at physiological dipole strength even for brain stem sources. Further, we found that a planar magnetometer system is more suitable if the position of the brain source can be restricted in a limited region of the brain. If this is not the case, a helmet-shaped sensor system offers more accurate source estimation.

  18. Detection, localization and classification of multiple dipole-like magnetic sources using magnetic gradient tensor data

    NASA Astrophysics Data System (ADS)

    Gang, Yin; Yingtang, Zhang; Hongbo, Fan; Zhining, Li; Guoquan, Ren

    2016-05-01

    We have developed a method for automatic detection, localization and classification (DLC) of multiple dipole sources using magnetic gradient tensor data. First, we define modified tilt angles to estimate the approximate horizontal locations of the multiple dipole-like magnetic sources simultaneously and detect the number of magnetic sources using a fixed threshold. Secondly, based on the isotropy of the normalized source strength (NSS) response of a dipole, we obtain accurate horizontal locations of the dipoles. Then the vertical locations are calculated using magnitude magnetic transforms of magnetic gradient tensor data. Finally, we invert for the magnetic moments of the sources using the measured magnetic gradient tensor data and forward model. Synthetic and field data sets demonstrate effectiveness and practicality of the proposed method.
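The forward model underlying such tensor methods is the point-dipole field and its spatial derivatives. A minimal numerical sketch (central differences stand in for the analytic gradient tensor; names are illustrative):

```python
import numpy as np

MU0_4PI = 1e-7  # mu0 / (4 pi), T m / A

def dipole_field(m, r_vec):
    """Magnetic field (T) of a point dipole m (A m^2) at r_vec (m):
        B = mu0/(4 pi r^3) * (3 (m.rhat) rhat - m)
    """
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return MU0_4PI / r**3 * (3 * np.dot(m, rhat) * rhat - m)

def gradient_tensor(m, r_vec, h=1e-4):
    """Central-difference magnetic gradient tensor G[i, j] = dB_i / dx_j."""
    cols = []
    for j in range(3):
        dr = np.zeros(3)
        dr[j] = h
        cols.append((dipole_field(m, r_vec + dr)
                     - dipole_field(m, r_vec - dr)) / (2 * h))
    return np.column_stack(cols)

m = np.array([0.0, 0.0, 100.0])            # 100 A m^2 vertical source
G = gradient_tensor(m, np.array([3.0, 4.0, 5.0]))
```

Outside the source the tensor is traceless and symmetric (divergence- and curl-free field), which is what makes rotation-invariant quantities like the normalized source strength computable from its eigenvalues.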

  19. Magnetic design and field optimization of a superferric dipole for the RISP fragment separator

    NASA Astrophysics Data System (ADS)

    Zaghloul, A.; Kim, J. Y.; Kim, D. G.; Jo, H. C.; Kim, M. J.

    2015-10-01

    The in-flight fragment separator of the Rare Isotope Science Project requires eight dipole magnets to produce a gap field of 1.7 T in a 30-degree deflection sector with a 6-m central radius. If the beam-optics requirements are to be met, an integral field homogeneity of a few units (1 unit = 10^-4) must be achieved. A superferric dipole magnet has been designed using the low-temperature superconducting wire NbTi and soft iron of grade SAE1010. The 3D magnetic design and field optimization were performed using the Opera code. The length and width of the air slots in the poles were determined in an optimization process that considered not only the uniformity of the field in the straight section but also the field errors in the end regions. The field uniformity has also been studied over the magnet's operating range from 0.4 T to 1.7 T. The magnetic design and field uniformity are discussed.

  20. Excitation of transverse dipole and quadrupole modes in a pure ion plasma in a linear Paul trap to study collective processes in intense beams

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.

    Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore how the perturbation voltage's effect depends on the perturbation duration and amplitude. Perturbations were also applied that simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one-component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help us to understand the physics of quiescent intense beam propagation over large distances, are compared with analytic models.

  1. CCSD(T) potential energy and induced dipole surfaces for N2–H2(D2): retrieval of the collision-induced absorption integrated intensities in the regions of the fundamental and first overtone vibrational transitions.

    PubMed

    Buryak, Ilya; Lokshtanov, Sergei; Vigasin, Andrey

    2012-09-21

    The present work aims at ab initio characterization of the integrated intensity temperature variation of collision-induced absorption (CIA) in N(2)-H(2)(D(2)). Global fits of potential energy surface (PES) and induced dipole moment surface (IDS) were made on the basis of CCSD(T) (coupled cluster with single and double and perturbative triple excitations) calculations with aug-cc-pV(T,Q)Z basis sets. Basis set superposition error correction and extrapolation to complete basis set (CBS) limit techniques were applied to both energy and dipole moment. Classical second cross virial coefficient calculations accounting for the first quantum correction were employed to prove the quality of the obtained PES. The CIA temperature dependence was found in satisfactory agreement with available experimental data.

  2. Dephasing due to Nuclear Spins in Large-Amplitude Electric Dipole Spin Resonance.

    PubMed

    Chesi, Stefano; Yang, Li-Ping; Loss, Daniel

    2016-02-12

    We analyze effects of the hyperfine interaction on electric dipole spin resonance when the amplitude of the quantum-dot motion becomes comparable or larger than the quantum dot's size. Away from the well-known small-drive regime, the important role played by transverse nuclear fluctuations leads to a Gaussian decay with characteristic dependence on drive strength and detuning. A characterization of spin-flip gate fidelity, in the presence of such additional drive-dependent dephasing, shows that vanishingly small errors can still be achieved at sufficiently large amplitudes. Based on our theory, we analyze recent electric dipole spin resonance experiments relying on spin-orbit interactions or the slanting field of a micromagnet. We find that such experiments are already in a regime with significant effects of transverse nuclear fluctuations and the form of decay of the Rabi oscillations can be reproduced well by our theory.

  3. Improving Planck calibration by including frequency-dependent relativistic corrections

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quartin, Miguel; Notari, Alessio, E-mail: mquartin@if.ufrj.br, E-mail: notari@ffn.ub.es

    2015-09-01

    The Planck satellite detectors are calibrated in the 2015 release using the 'orbital dipole', which is the time-dependent dipole generated by the Doppler effect due to the motion of the satellite around the Sun. Such an effect has also relativistic time-dependent corrections of relative magnitude 10^-3, due to coupling with the 'solar dipole' (the motion of the Sun compared to the CMB rest frame), which are included in the data calibration by the Planck collaboration. We point out that such corrections are subject to a frequency-dependent multiplicative factor. This factor differs from unity especially at the highest frequencies, relevant for the HFI instrument. Since currently Planck calibration errors are dominated by systematics, to the point that polarization data is currently unreliable at large scales, such a correction can in principle be highly relevant for future data releases.

  4. Neutron Electric Dipole Moment from Gauge-String Duality.

    PubMed

    Bartolini, Lorenzo; Bigazzi, Francesco; Bolognesi, Stefano; Cotrone, Aldo L; Manenti, Andrea

    2017-03-03

    We compute the electric dipole moment of nucleons in the large-N_c QCD model of Witten, Sakai, and Sugimoto with N_f = 2 degenerate massive flavors. Baryons in the model are instantonic solitons of an effective five-dimensional action describing the whole tower of mesonic fields. We find that the dipole electromagnetic form factor of the nucleons, induced by a finite topological θ angle, exhibits complete vector meson dominance. We are able to evaluate the contribution of each vector meson to the final result: a small number of modes are relevant to obtain an accurate estimate. Extrapolating the model parameters to real QCD data, the neutron electric dipole moment is evaluated to be d_n = 1.8×10^-16 θ e cm. The electric dipole moment of the proton is exactly the opposite.

  5. Exploiting Synoptic-Scale Climate Processes to Develop Nonstationary, Probabilistic Flood Hazard Projections

    NASA Astrophysics Data System (ADS)

    Spence, C. M.; Brown, C.; Doss-Gollin, J.

    2016-12-01

    Climate model projections are commonly used for water resources management and planning under nonstationarity, but they do not reliably reproduce intense short-term precipitation and are instead more skilled at broader spatial scales. To provide a credible estimate of flood trend that reflects climate uncertainty, we present a framework that exploits the connections between synoptic-scale oceanic and atmospheric patterns and local-scale flood-producing meteorological events to develop long-term flood hazard projections. We demonstrate the method for the Iowa River, where high flow episodes have been found to correlate with tropical moisture exports that are associated with a pressure dipole across the eastern continental United States. We characterize the relationship between flooding on the Iowa River and this pressure dipole through a nonstationary Pareto-Poisson peaks-over-threshold probability distribution estimated from the historic record. We then combine the results of a trend analysis of the dipole index in the historic record with those of a trend analysis of the dipole index as simulated by General Circulation Models (GCMs) under climate change conditions through a Bayesian framework. The resulting nonstationary posterior distribution of the dipole index, combined with the dipole-conditioned peaks-over-threshold flood frequency model, connects local flood hazard to changes in large-scale atmospheric pressure and circulation patterns that are related to flooding in a process-driven framework. The Iowa River example demonstrates that the resulting nonstationary, probabilistic flood hazard projection may be used to inform risk-based flood adaptation decisions.
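A peaks-over-threshold fit of the kind described can be sketched in its simplest stationary form. The study's model is a nonstationary Pareto-Poisson distribution conditioned on the dipole index, so the exponential-tail (shape parameter ξ → 0) version below is only a hedged illustration:

```python
import numpy as np

def pot_return_level(flows, threshold, record_years, return_period_years):
    """Stationary peaks-over-threshold estimate with an exponential tail
    (the xi -> 0 limit of the generalized Pareto):
      rate   lam   = exceedances per year (Poisson arrival rate)
      scale  sigma = mean excess over the threshold
      T-year level = u + sigma * ln(lam * T)
    """
    excess = flows[flows > threshold] - threshold
    lam = excess.size / record_years
    sigma = excess.mean()
    return threshold + sigma * np.log(lam * return_period_years)

# Toy record: four exceedances of a threshold of 10 in a 2-year record
# (excesses 2, 4, 6, 8 -> sigma = 5, lam = 2 per year).
flows = np.array([3.0, 5.0, 12.0, 14.0, 7.0, 16.0, 18.0, 2.0])
level_100 = pot_return_level(flows, 10.0, 2.0, 100.0)
```

The nonstationary extension in the paper amounts to letting lam and sigma vary with the dipole index, so the return level becomes a function of the projected climate state rather than a single number.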

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaBrecque, Douglas J; Adkins, Paula L

    The objective of this research was to determine the feasibility of building and operating an ERT system that allows measurement precision an order of magnitude better than existing systems on the market today, and in particular whether this can be done without significantly greater manufacturing or operating costs than existing commercial systems. Under this proposal, we performed an estimation of measurement errors in galvanic resistivity data that arise as a consequence of the type of electrode material used to make the measurements. In our laboratory, measurement errors for both magnitude and induced polarization (IP) were estimated using the reciprocity of data from an array of electrodes as might be used for electrical resistance tomography, using 14 different metals as well as one non-metal, carbon. In a second phase of this study, using archival data from two long-term ERT surveys, we examined the long-term survivability of electrodes over periods of several years. The survey sites were the Drift Scale Test at Yucca Mountain, Nevada (sponsored by the U.S. Department of Energy as part of the civilian radioactive waste management program), and a water infiltration test at a site adjacent to the New Mexico Institute of Mines and Technology in Socorro, New Mexico (sponsored by the Sandia/Tech vadose program). This enabled us to compare recent values with historical values and determine electrode performance over the long term, as well as the percentage of electrodes that have failed entirely. We have constructed a prototype receiver system, made modifications, and revised the receiver design. The revised prototype uses a new 24-bit analog-to-digital converter from Linear Technologies with amplifier chips from Texas Instruments. The input impedance of the system will be increased from 10^7 Ohms to approximately 10^10 Ohms.
The input noise level of the system has been decreased to approximately 10 nanovolts, and the system resolution to about 1 nanovolt at the highest gain range of 125 to 1. The receiver also uses very high precision, high temperature stability components. The goal is to improve the accuracy to better than 0.1%. The system has more receiver channels, eight, to allow efficient data collection at lower base frequencies. We are also implementing a frequency-domain acquisition mode in addition to the time-domain acquisition mode used in the earlier systems. Initial field tests were started in the fall of 2008. We conducted tests on a number of types of cable commonly used for resistivity surveys. A series of different tests was designed to determine whether the couplings were primarily resistive, capacitive, or inductive in nature and to ascertain that the response was due to cable cross-talk and did not depend on the receiver electronics. The results show that the problem appears to be primarily capacitive in nature and does not appear to be due to problems in the receiver electronics. Thus a great deal of emphasis has been placed on finding appropriate cables as well as stable electrodes that have low contact impedance at the very low current flows observed at the receiver. One of the issues in survey design and data collection has been determining how long one must wait before using the same electrode as a transmitter and as a receiver. A series of tests was completed in the laboratory sand tank where four-electrode measurements were made using the same dipole transmitters and dipole receivers (the dipoles used adjacent electrodes). For each data series, a single set of normal measurements was collected with no reciprocals, and electrodes were never reused as a receiver after being used as a transmitter. After waiting a specified length of time, the reciprocal measurements were collected using a schedule of measurements.
The order of this second schedule was rearranged such that, if this second set of measurements were performed without first using the normal schedule, no electrode would be used as a receiver after being used as a transmitter. For this study, we cannot conclude that increasing the wait time increased or decreased the reciprocal errors, only that there was not a dramatic change in results with different wait times. Another issue in ERT data collection is the potential for the transmitter as well as the receiver end of an ERT system to create problems with reciprocity readings. Existing ERT systems typically use a constant voltage source. For the transmitter dipole, a constant voltage source has low output impedance, whereas a constant current source has high output impedance. Therefore, we devised an experiment to determine whether a constant current source transmitter might produce smaller errors than a constant voltage source. These preliminary results suggest there is little or no difference in either resistivity or chargeability reciprocal errors between a constant voltage and a constant current dipole drive source.
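    The reciprocal-error statistic used throughout this study has a simple form: by electromagnetic reciprocity, swapping the transmitter and receiver dipoles should return the same transfer resistance, so the normal/reciprocal disagreement is a direct estimate of measurement error. A minimal sketch (voltages hypothetical):

```python
def reciprocal_error_pct(v_normal, v_reciprocal):
    """Percent reciprocal error between a normal four-electrode measurement
    and its reciprocal (transmitter and receiver dipoles swapped).

    By reciprocity the two transfer resistances should be identical, so
    their disagreement estimates the measurement error."""
    mean = 0.5 * (abs(v_normal) + abs(v_reciprocal))
    if mean == 0.0:
        return 0.0
    return 100.0 * abs(v_normal - v_reciprocal) / mean

# Hypothetical normal/reciprocal voltage pairs from three electrode choices.
errors = [reciprocal_error_pct(n, r)
          for n, r in [(10.00, 10.01), (5.20, 5.00), (0.98, 1.02)]]
```

    Averaging such errors over an array, per electrode material, is one way to compare electrode metals as the study does.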

  7. Error Correction for the JLEIC Ion Collider Ring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Guohui; Morozov, Vasiliy; Lin, Fanglei

    2016-05-01

    The sensitivity to misalignment, magnet strength error, and BPM noise is investigated in order to specify design tolerances for the ion collider ring of the Jefferson Lab Electron Ion Collider (JLEIC) project. These errors, including horizontal, vertical, and longitudinal displacement, roll error in the transverse plane, strength errors of the main magnets (dipole, quadrupole, and sextupole), BPM noise, and strength jitter of correctors, cause closed orbit distortion, tune change, beta-beat, coupling, chromaticity problems, etc. These problems generally reduce the dynamic aperture at the Interaction Point (IP). Guided by real commissioning experience at other machines, closed orbit correction, tune matching, beta-beat correction, decoupling, and chromaticity correction have been performed in the study. Finally, we find that the dynamic aperture at the IP is restored. This paper describes that work.

  8. Systematic effects in the HfF+ ion experiment to search for the electron electric dipole moment

    NASA Astrophysics Data System (ADS)

    Petrov, A. N.

    2018-05-01

    The energy splittings of the J = 1, F = 3/2, |mF| = 3/2 hyperfine levels of the 3Δ1 electronic state of the 180Hf19F+ ion are calculated as functions of the external variable electric and magnetic fields within two approaches. In the first, the transition to the rotating frame is performed, whereas in the second, the rotating electromagnetic field is quantized. These calculations are required for understanding possible systematic errors in the experiment to search for the electron electric dipole moment (eEDM) with the 180Hf19F+ ion.

  9. Calculation of the atomic electric dipole moment of Pb2+ induced by nuclear Schiff moment

    NASA Astrophysics Data System (ADS)

    Ramachandran, S. M.; Latha, K. V. P.; Meenakshisundaram, N.

    2017-07-01

    We report the atomic electric dipole moment induced by the P, T violating interactions at the nuclear/sub-nuclear level for 207Pb2+ and 207Pb, owing to the recent interest in the ferroelectric crystal PbTiO3 as one of the candidates for investigating macroscopic P, T-odd effects. In this paper, we calculate the atomic electric dipole moments of 207Pb and Pb2+, parametrized in terms of the P, T-odd coupling parameter, the nuclear Schiff moment (NSM), S, in the framework of coupled-perturbed Hartree-Fock theory. We estimate the Schiff moment of Pb2+ using the experimental result of a system which is electronically similar to the Pb2+ ion. We present the dominant contributions of the electric dipole moment (EDM) matrix elements and the important correlation effects contributing to the atomic EDM of Pb2+. Our results provide the first ever calculated EDM of the Pb2+ ion, and an estimate of its NSM from which the P, T-odd energy shift in a PbTiO3 crystal can be evaluated.

  10. Joint inversion of apparent resistivity and seismic surface and body wave data

    NASA Astrophysics Data System (ADS)

    Garofalo, Flora; Sauvin, Guillaume; Valentina Socco, Laura; Lecomte, Isabelle

    2013-04-01

    A novel inversion algorithm has been implemented to jointly invert apparent resistivity curves from vertical electric soundings, surface wave dispersion curves, and P-wave travel times. The algorithm works in the case of laterally varying layered sites. Surface wave dispersion curves and P-wave travel times can be extracted from the same seismic dataset, and apparent resistivity curves can be obtained from continuous vertical electric sounding acquisition. The inversion scheme is based on a series of local 1D layered models whose unknown parameters are thickness h, S-wave velocity Vs, P-wave velocity Vp, and resistivity R of each layer. 1D models are linked to surface-wave dispersion curves and apparent resistivity curves through classical 1D forward modelling, while a 2D model is created by interpolating the 1D models and is linked to refracted P-wave hodograms. A priori information can be included in the inversion, and a spatial regularization is introduced as a set of constraints between model parameters of adjacent models and layers. Both a priori information and regularization are weighted by covariance matrices. We show the comparison of individual inversions and joint inversion for a synthetic dataset that presents smooth lateral variations. Performing individual inversions, the poor sensitivity to some model parameters leads to estimation errors up to 62.5%, whereas for joint inversion the cooperation of different techniques reduces most of the model estimation errors below 5%, with few exceptions up to 39%, and an overall improvement. Even though the final model retrieved by joint inversion is internally consistent and more reliable, the analysis of the results reveals unacceptable values of the Vp/Vs ratio for some layers, implying negative Poisson's ratio values. To further improve the inversion performance, an additional constraint is added, imposing Poisson's ratio in the range 0-0.5.
The final results are globally improved by the introduction of this constraint, which further reduces the maximum error to 30%. The same test was performed on field data acquired in a landslide-prone area close to the town of Hvittingfoss, Norway. Seismic data were recorded on two 160-m long profiles in roll-along mode using a 5-kg sledgehammer as source and 24 4.5-Hz vertical geophones with 4-m separation. First-arrival travel times were picked at every shot location, and surface wave dispersion curves were extracted at 8 locations for each profile. 2D resistivity measurements were carried out on the same profiles using Gradient and Dipole-Dipole arrays with 2-m electrode spacing. The apparent resistivity curves were extracted at the same locations as the dispersion curves. The data were subsequently jointly inverted and the resulting model compared to individual inversions. Although models from both individual and joint inversions are consistent, the estimation error is smaller for joint inversion, especially for first-arrival travel times. The joint inversion exploits the different sensitivities of the methods to model parameters and therefore mitigates solution nonuniqueness and the effects of intrinsic limitations of the different techniques. Moreover, it produces an internally consistent multi-parametric final model that can be profitably interpreted to provide a better understanding of subsurface properties.
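    The covariance-weighted cooperation of methods described above can be illustrated with a toy linear joint inversion. The operators, noise levels, and two-parameter model below are hypothetical, and the real algorithm is nonlinear and regularized; the sketch only shows how per-dataset covariance weighting assembles a single set of normal equations for a shared model.

```python
def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

def joint_inversion(datasets):
    """Weighted normal equations  (sum_k G_k^T W_k G_k) m = sum_k G_k^T W_k d_k
    for a shared 2-parameter model m, with W_k = 1/sigma_k^2 (diagonal covariance)."""
    A = [[0.0, 0.0], [0.0, 0.0]]
    b = [0.0, 0.0]
    for G, d, sigma in datasets:
        w = 1.0 / sigma ** 2
        for row, obs in zip(G, d):
            for i in range(2):
                b[i] += w * row[i] * obs
                for j in range(2):
                    A[i][j] += w * row[i] * row[j]
    return solve2(A[0][0], A[0][1], A[1][0], A[1][1], b[0], b[1])

# Two toy "methods" observing the same model m = (2, -1) with complementary
# sensitivities and different noise levels; jointly they resolve both parameters.
m_true = (2.0, -1.0)
G1 = [[1.0, 0.1], [1.0, 0.2]]          # method 1: mostly sensitive to m[0]
G2 = [[0.1, 1.0], [0.2, 1.0]]          # method 2: mostly sensitive to m[1]
d1 = [g[0] * m_true[0] + g[1] * m_true[1] for g in G1]
d2 = [g[0] * m_true[0] + g[1] * m_true[1] for g in G2]
m_est = joint_inversion([(G1, d1, 0.05), (G2, d2, 0.10)])
```

    The same accumulation pattern extends to more parameters, to regularization rows linking adjacent 1D models, and to full (non-diagonal) covariance matrices.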

  11. CORRELATED ERRORS IN EARTH POINTING MISSIONS

    NASA Technical Reports Server (NTRS)

    Bilanow, Steve; Patt, Frederick S.

    2005-01-01

    Two different Earth-pointing missions dealing with attitude control and dynamics changes illustrate concerns with correlated error sources and coupled effects that can occur. On the OrbView-2 (OV-2) spacecraft, the assumption of a nearly-inertially-fixed momentum axis was called into question when a residual dipole bias apparently changed magnitude. The possibility that alignment adjustments and/or sensor calibration errors may compensate for actual motions of the spacecraft is discussed, and uncertainties in the dynamics are considered. Particular consideration is given to basic orbit frequency and twice orbit frequency effects and their high correlation over the short science observation data span. On the Tropical Rainfall Measuring Mission (TRMM) spacecraft, the switch to a contingency Kalman filter control mode created changes in the pointing error patterns. Results from independent checks on the TRMM attitude using science instrument data are reported, and bias shifts and error correlations are discussed. Various orbit frequency effects are common with the flight geometry for Earth pointing instruments. In both dual-spin momentum stabilized spacecraft (like OV-2) and three axis stabilized spacecraft with gyros (like TRMM under Kalman filter control), changes in the initial attitude state propagate into orbit frequency variations in attitude and some sensor measurements. At the same time, orbit frequency measurement effects can arise from dynamics assumptions, environment variations, attitude sensor calibrations, or ephemeris errors. Also, constant environment torques for dual spin spacecraft have similar effects to gyro biases on three axis stabilized spacecraft, effectively shifting the one-revolution-per-orbit (1-RPO) body rotation axis. Highly correlated effects can create a risk for estimation errors particularly when a mission switches an operating mode or changes its normal flight environment. 
Some error effects will not be obvious from attitude sensor measurement residuals, so some independent checks using imaging sensors are essential and derived science instrument attitude measurements can prove quite valuable in assessing the attitude accuracy.

  12. Electron-atom spin asymmetry and two-electron photodetachment - Addenda to the Coulomb-dipole threshold law

    NASA Technical Reports Server (NTRS)

    Temkin, A.

    1984-01-01

    Temkin (1982) has derived the ionization threshold law based on a Coulomb-dipole theory of the ionization process. The present investigation is concerned with a reexamination of several aspects of the Coulomb-dipole threshold law. Attention is given to the energy scale of the logarithmic denominator, the spin-asymmetry parameter, and an estimate of alpha and the energy range of validity of the threshold law, taking into account the result of the two-electron photodetachment experiment conducted by Donahue et al. (1984).

  13. Optimal Design for Parameter Estimation in EEG Problems in a 3D Multilayered Domain

    DTIC Science & Technology

    2014-03-30

    dipole, C(x) = q δ(x − r_q), where δ is the Dirac distribution, r_q is a fixed point in the brain which represents the dipole location, and q is the dipole… again based on the formulations discussed above, we consider a function F of the form F(x, θ) = q δ(x − r_q), where δ denotes the Dirac distribution… Inverse Problems, 12, (1996), 565-577. [5] H.T. Banks, M.W. Buksas and T. Lin, Electromagnetic Material Interrogation Using Conductive Interfaces and

  14. Exclusive vector meson production with leading neutrons in a saturation model for the dipole amplitude in mixed space

    NASA Astrophysics Data System (ADS)

    Amaral, J. T.; Becker, V. M.

    2018-05-01

    We investigate ρ vector meson production in e p collisions at HERA with leading neutrons in the dipole formalism. The interaction of the dipole and the pion is described in a mixed-space approach, in which the dipole-pion scattering amplitude is given by the Marquet-Peschanski-Soyez saturation model, which is based on the traveling wave solutions of the nonlinear Balitsky-Kovchegov equation. We estimate the magnitude of the absorption effects and compare our results with a previous analysis of the same process in full coordinate space. In contrast with this approach, the present study leads to absorption K factors in the range of those predicted by previous theoretical studies on semi-inclusive processes.

  15. 0-2 Ma Paleomagnetic Field Behavior from Lava Flow Data Sets

    NASA Astrophysics Data System (ADS)

    Johnson, C. L.; Constable, C.; Tauxe, L.; Cromwell, G.

    2010-12-01

    The global time-averaged field (TAF) structure of the paleomagnetic field and paleosecular variation (PSV) provide important constraints for numerical geodynamo simulations. Studies of the TAF have sought to characterize the nature of non-geocentric-axial-dipole contributions to the field, in particular any such contributions that may be diagnostic of the influence of core-mantle boundary conditions on field generation. Similarly, geographical variations in PSV are of interest, in particular the long-standing debate concerning anomalously low VGP (virtual geomagnetic pole) dispersion at Hawaii. Here, we analyze updated global directional data sets from lava flows. We present global models for the time-averaged field for the Brunhes and Matuyama epochs. New TAF models based on lava flow directional data for the Brunhes show longitudinal structure. In particular, high latitude flux lobes are observed, constrained by improved data sets from N. and S. America, Japan, and New Zealand. Anomalous TAF structure is also observed in the region around Hawaii. At Hawaii, previous inferences of the anomalous TAF (large inclination anomaly) and PSV (low VGP dispersion) have been argued to be the result of temporal sampling bias toward young flows. We use resampling techniques to examine possible biases in the TAF and PSV incurred by uneven temporal sampling. Resampling of the paleodirectional data onto a uniform temporal distribution, incorporating site ages and age errors, leads to a TAF estimate for the Brunhes that is close to that reported for the actual data set, but an estimate for VGP dispersion that is increased relative to that obtained from the unevenly sampled data. Future investigations will incorporate the temporal resampling procedures into TAF modeling efforts, as well as recent progress in modeling the 0-2 Ma paleomagnetic dipole moment.
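    The temporal-resampling idea can be sketched as a bootstrap: perturb site ages by their stated errors, draw sites evenly over the time window, and recompute VGP dispersion. Everything below (the site tuple format, bin counts, and the use of VGP angular distances from the spin axis) is a simplified assumption, not the authors' procedure.

```python
import math, random

def vgp_dispersion(deltas_deg):
    """Angular dispersion S of VGPs about the mean pole,
    S = sqrt(sum(delta_i^2) / (N - 1)), with delta_i in degrees."""
    n = len(deltas_deg)
    return math.sqrt(sum(d * d for d in deltas_deg) / (n - 1))

def resampled_dispersion(sites, t_max=780.0, n_bins=20, n_boot=500, seed=1):
    """Bootstrap estimate of VGP dispersion on a uniform temporal grid.

    sites : list of (age_ka, age_sigma_ka, vgp_angular_distance_deg); hypothetical.
    Each replicate perturbs the ages by their errors, then draws one site per
    uniform time bin so that over-sampled young flows do not dominate."""
    rng = random.Random(seed)
    out = []
    edges = [t_max * i / n_bins for i in range(n_bins + 1)]
    for _ in range(n_boot):
        perturbed = [(rng.gauss(a, s), d) for a, s, d in sites]
        picked = []
        for lo, hi in zip(edges, edges[1:]):
            in_bin = [d for t, d in perturbed if lo <= t < hi]
            if in_bin:
                picked.append(rng.choice(in_bin))
        if len(picked) > 1:
            out.append(vgp_dispersion(picked))
    return sum(out) / len(out)
```

    With a real data set, comparing this resampled dispersion to the raw-data dispersion quantifies the bias introduced by the uneven age distribution.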

  16. Helping Students Assess the Relative Importance of Different Intermolecular Interactions

    ERIC Educational Resources Information Center

    Jasien, Paul G.

    2008-01-01

    A semi-quantitative model has been developed to estimate the relative effects of dispersion, dipole-dipole interactions, and H-bonding on the normal boiling points ("T[subscript b]") for a subset of simple organic systems. The model is based upon a statistical analysis using multiple linear regression on a series of straight-chain organic…

  17. Evolution of the substructure of a novel 12% Cr steel under creep conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yadav, Surya Deo, E-mail: surya.yadav@tugraz.at; Kalácska, Szilvia, E-mail: kalacska@metal.elte.hu; Dománková, Mária, E-mail: maria.domankova@stuba.sk

    2016-05-15

    In this work we study the microstructure evolution of a newly developed 12% Cr martensitic/ferritic steel in the as-received condition and after creep at 650 °C under 130 MPa and 80 MPa. The microstructure is described as consisting of mobile dislocations, dipole dislocations, boundary dislocations, precipitates, lath boundaries, block boundaries, packet boundaries, and prior austenitic grain boundaries. The material is characterized employing light optical microscopy (LOM), scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), and electron backscatter diffraction (EBSD). TEM is used to characterize the dislocations (mobile + dipole) inside the subgrains, and XRD measurements are used to characterize the mobile dislocations. Based on the subgrain boundary misorientations obtained from EBSD measurements, the boundary dislocation density is estimated. The total dislocation density is estimated for the as-received and crept conditions by adding the mobile, boundary, and dipole dislocation densities. Additionally, the subgrain size is estimated from the EBSD measurements. In this publication we propose that the combined use of three characterization techniques (TEM, XRD, and EBSD) is necessary to characterize all types of dislocations and quantify the total dislocation density in martensitic/ferritic steels. - Highlights: • Creep properties of a novel 12% Cr steel alloyed with Ta • Experimental characterization of different types of dislocations: mobile, dipole and boundary • Characterization and interpretation of the substructure evolution using a unique combination of TEM, XRD and EBSD.
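    The bookkeeping behind the total density is a plain sum of the three populations, with the boundary term derived from the EBSD misorientations. The sketch below uses a common low-angle-boundary estimate of the form rho_b ≈ k·θ/(b·d), where the geometric factor k varies between published derivations; all numerical values are hypothetical, not the paper's measurements.

```python
def boundary_density(theta_rad, burgers_m, subgrain_m, k=2.0):
    """Low-angle boundary dislocation density from mean misorientation theta,
    Burgers vector b and subgrain size d:  rho_b ~ k * theta / (b * d).
    The geometric factor k (here 2) differs between published derivations."""
    return k * theta_rad / (burgers_m * subgrain_m)

# Hypothetical values of the order expected for a tempered 12% Cr steel.
rho_mobile   = 3.0e14          # from XRD line-profile analysis, 1/m^2
rho_dipole   = 1.5e14          # TEM counts inside subgrains minus mobile part
rho_boundary = boundary_density(theta_rad=0.03,      # ~1.7 deg from EBSD
                                burgers_m=2.48e-10,  # bcc Fe Burgers vector
                                subgrain_m=0.5e-6)   # subgrain size from EBSD
rho_total = rho_mobile + rho_dipole + rho_boundary
```

    The point of the decomposition is that each technique is blind to one or more of the three terms, which is why the abstract argues all three techniques are needed.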

  18. Dual polarized receiving steering antenna array for measurement of ultrawideband pulse polarization structure

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balzovsky, E. V.; Buyanov, Yu. I.; Koshelev, V. I., E-mail: koshelev@lhfe.hcei.tsc.ru

    To measure simultaneously two orthogonal components of electromagnetic fields of nano- and subnanosecond duration, an antenna array has been developed. The antenna elements of the array are crossed dipoles of dimension 5 × 5 cm. The arms of the dipoles are connected to active four-pole devices to compensate for the frequency response variations of a short dipole in the frequency band ranging from 0.4 to 4 GHz. The dipoles have superimposed phase centers, allowing measurement of the polarization structure of the field in different directions. The developed antenna array is a linear one containing four elements. The pattern maximum position is controlled by means of switched ultrawideband true time delay lines. Discrete steering in seven directions in the range from −40° to +40° has been realized. The error in setting the pattern maximum position is less than 4°. The isolation of the polarizations exceeds 29 dB in the direction orthogonal to the array axis, and in the whole steering range it exceeds 23 dB. Measurement results for the polarization structure of radiated and scattered pulses with different polarizations are presented as well.

  19. Localization of heart vectors produced by epicardial burns and ectopic stimuli; validation of a dipole ranging method.

    PubMed

    Ideker, R E; Bandura, J P; Larsen, R A; Cox, J W; Keller, F W; Brody, D A

    1975-01-01

    Location of the equivalent cardiac dipole has been estimated but not fully verified in several laboratories. To test the accuracy of such a procedure, injury vectors were produced in 14 isolated, perfused rabbit hearts by epicardial searing. Strongly dipolar excitation fronts were produced in 6 additional hearts by left ventricular pacing. Twenty computer-processed signals, derived from surface electrodes on a spherical electrolyte-filled tank containing the test preparation, were optimally fitted with a locatable cardiac dipole that accounted for over 99% of the root-mean-square surface potential. For the 14 burns (mean radius 5.0 mm), the S-T injury dipole was located 3.4 ± 0.7 (SD) mm from the burn center. For the 6 paced hearts, the dipole early in the ectopic beat was located 3.7 mm (range 2.6 to 4.6 mm) from the stimulating electrode. Phase inhomogeneities within the chamber appeared to have a small but predictable effect on dipole site determination. The study demonstrates that equivalent dipole location can be determined with acceptable accuracy from potential measurements of the external cardiac field.
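    The fitting procedure, a locatable dipole whose moment is linear in the data once a location is fixed, can be sketched as follows. The forward model here is a point current dipole in an unbounded homogeneous conductor, a deliberate simplification of the bounded electrolyte-tank geometry used in the study; electrode layout and dipole values are synthetic.

```python
import math

def potential(r0, p, r, sigma=1.0):
    """Potential of a point current dipole in an unbounded homogeneous
    conductor: V = p . (r - r0) / (4 pi sigma |r - r0|^3)."""
    d = [r[i] - r0[i] for i in range(3)]
    dist = math.sqrt(sum(x * x for x in d))
    return sum(p[i] * d[i] for i in range(3)) / (4 * math.pi * sigma * dist ** 3)

def solve3(A, b):
    """Gauss-Jordan solution of a 3x3 linear system with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda k: abs(M[k][c]))
        M[c], M[piv] = M[piv], M[c]
        for k in range(3):
            if k != c:
                f = M[k][c] / M[c][c]
                M[k] = [a - f * b_ for a, b_ in zip(M[k], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def fit_dipole(electrodes, v_obs, candidates):
    """At each candidate location the moment enters linearly, so solve the
    3x3 normal equations and keep the location with the lowest RMS misfit."""
    best = None
    units = ((1, 0, 0), (0, 1, 0), (0, 0, 1))
    for r0 in candidates:
        G = [[potential(r0, u, r) for u in units] for r in electrodes]
        A = [[sum(g[i] * g[j] for g in G) for j in range(3)] for i in range(3)]
        b = [sum(g[i] * v for g, v in zip(G, v_obs)) for i in range(3)]
        p = solve3(A, b)
        rms = math.sqrt(sum((sum(g[i] * p[i] for i in range(3)) - v) ** 2
                            for g, v in zip(G, v_obs)) / len(G))
        if best is None or rms < best[0]:
            best = (rms, r0, p)
    return best

# Synthetic check: 20 electrodes on a unit sphere, true dipole at (0.2, 0, 0).
electrodes = [(math.sqrt(1 - z * z) * math.cos(a), math.sqrt(1 - z * z) * math.sin(a), z)
              for z in (-0.4, 0.4) for a in [2 * math.pi * k / 10 for k in range(10)]]
v_obs = [potential((0.2, 0.0, 0.0), (0.0, 0.0, 1.0), r) for r in electrodes]
rms, loc, p_fit = fit_dipole(electrodes, v_obs, [(x / 10, 0.0, 0.0) for x in range(-3, 4)])
```

    The fraction of RMS surface potential explained, the "over 99%" figure in the abstract, is one minus the ratio of the residual RMS to the RMS of the observations.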

  20. Towards a fully self-consistent inversion combining historical and paleomagnetic data for geomagnetic field reconstructions

    NASA Astrophysics Data System (ADS)

    Arneitz, P.; Leonhardt, R.; Fabian, K.; Egli, R.

    2017-12-01

    Historical and paleomagnetic data are the two main sources of information about the long-term geomagnetic field evolution. Historical observations extend to the late Middle Ages, and prior to the 19th century, they consisted mainly of pure declination measurements from navigation and orientation logs. Field reconstructions going back further in time rely solely on magnetization acquired by rocks, sediments, and archaeological artefacts. The combined dataset is characterized by a strongly inhomogeneous spatio-temporal distribution and highly variable data reliability and quality. Therefore, an adequate weighting of the data that correctly accounts for data density, type, and realistic error estimates represents the major challenge for an inversion approach. Until now, there has not been a fully self-consistent geomagnetic model that correctly recovers the variation of the geomagnetic dipole together with the higher-order spherical harmonics. Here we present a new geomagnetic field model for the last 4 kyrs based on historical, archeomagnetic and volcanic records. The iterative Bayesian inversion approach targets the implementation of reliable error treatment, which allows different record types to be combined in a fully self-consistent way. Modelling results will be presented along with a thorough analysis of model limitations, validity and sensitivity.

  1. Plasma-Based Detector of Outer-Space Dust Particles

    NASA Technical Reports Server (NTRS)

    Tsurutani, Bruce; Brinza, David E.; Henry, Michael D.; Clay, Douglas R.

    2006-01-01

    A report presents a concept for an instrument to be flown in outer space, where it would detect dust particles - especially those associated with comets. The instrument would include a flat plate that would intercept the dust particles. The anticipated spacecraft/dust-particle relative speeds are so high that the impingement of a dust particle on the plate would generate a plasma cloud. Simple electric dipole sensors located equidistantly along the circumference of the plate would detect the dust particle indirectly by detecting the plasma cloud. The location of the dust hit could be estimated from the timing of the detection pulses of the different dipoles. The mass and composition of the dust particle could be estimated from the shapes and durations of the pulses from the dipoles. In comparison with other instruments for detecting hypervelocity dust particles, the proposed instrument offers advantages of robustness, large collection area, and simplicity.
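    The timing-based localization can be sketched as a grid search: for a candidate impact point, the implied plasma emission times at all sensors should agree. The rim geometry, expansion speed, and grid below are illustrative assumptions, not the reported instrument design.

```python
import math

def hit_location(sensors, arrival_times, speed, grid):
    """Estimate the impact point from plasma-cloud arrival times at dipole
    sensors on the plate rim.  For each candidate point, the implied emission
    times t_i - |x_i - x| / v should agree; pick the candidate that
    minimizes their spread."""
    def spread(x):
        t0 = [t - math.dist(s, x) / speed for s, t in zip(sensors, arrival_times)]
        mean = sum(t0) / len(t0)
        return sum((t - mean) ** 2 for t in t0)
    return min(grid, key=spread)

# Four dipoles equally spaced on a unit-radius plate rim (hypothetical units).
sensors = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
true_hit, v = (0.3, -0.2), 50.0          # plasma expansion speed, arbitrary units
times = [math.dist(s, true_hit) / v + 1e-4 for s in sensors]  # + emission time
grid = [(x / 20, y / 20) for x in range(-19, 20) for y in range(-19, 20)]
estimate = hit_location(sensors, times, v, grid)
```

    The pulse shapes and durations, which the report says encode mass and composition, would require a plasma model beyond this purely geometric sketch.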

  2. Improved ensemble-mean forecasting of ENSO events by a zero-mean stochastic error model of an intermediate coupled model

    NASA Astrophysics Data System (ADS)

    Zheng, Fei; Zhu, Jiang

    2017-04-01

    How to design a reliable ensemble prediction strategy that accounts for the major uncertainties of a forecasting system is a crucial issue for performing an ensemble forecast. In this study, a new stochastic perturbation technique is developed to improve the prediction skill for El Niño-Southern Oscillation (ENSO) using an intermediate coupled model. We first estimate and analyze the model uncertainties from ensemble Kalman filter analysis results obtained by assimilating observed sea surface temperatures. Then, based on the pre-analyzed properties of the model errors, we develop a zero-mean stochastic model-error model to characterize the model uncertainties mainly induced by the physical processes missing from the original model (e.g., stochastic atmospheric forcing, extra-tropical effects, the Indian Ocean Dipole). Finally, we perturb each member of an ensemble forecast at each step with the developed stochastic model-error model during the 12-month forecasting process, adding the zero-mean perturbations to the physical fields to mimic the presence of missing processes and high-frequency stochastic noises. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr hindcast experiments, which are initialized from the same initial conditions and differ only in whether they include the stochastic perturbations. The comparison shows that the stochastic perturbations significantly improve the ensemble-mean prediction skill throughout the 12-month forecasting process. This improvement occurs mainly because the nonlinear terms in the model can form a positive ensemble-mean from a series of zero-mean perturbations, which reduces the forecasting biases and then corrects the forecast through this nonlinear heating mechanism.
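    The nonlinear-rectification mechanism mentioned at the end, zero-mean noise acquiring a nonzero ensemble-mean effect through a nonlinearity, can be demonstrated with a toy scalar model (not the intermediate coupled model itself; the tendency and amplitudes are invented):

```python
import random

def step(x, forcing, dt=0.1):
    """Toy nonlinear tendency: linear growth saturated by a cubic term."""
    return x + dt * (0.5 * x - x ** 3) + forcing

def ensemble_forecast(x0, n_members=2000, n_steps=120, amp=0.3, seed=7):
    """Each member receives an independent zero-mean perturbation per step.
    Because the cubic term is nonlinear, E[(x + eta)^3] != x^3 for zero-mean
    eta, so the ensemble mean departs from the unperturbed control run."""
    rng = random.Random(seed)
    members = [x0] * n_members
    control = x0
    for _ in range(n_steps):
        control = step(control, 0.0)
        members = [step(x, rng.uniform(-amp, amp)) for x in members]
    return control, sum(members) / n_members

control, ens_mean = ensemble_forecast(0.8)
```

    In this toy, the control run settles near the stable fixed point while the perturbed ensemble mean is systematically displaced from it; in the ENSO model the analogous displacement is what corrects the forecast bias.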

  3. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

    The geomagnetic spatial power spectrum R(sub n)(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum (R(sub nc)(c)) is inversely proportional to (2n + 1) for 1 less than n less than or equal to N(sub E). McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R(sub n) for 3 less than or equal to n less than or equal to 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 x 10(exp 22) A m(exp 2) rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 (mu)T rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution chi(exp 2) with 2n+1 degrees of freedom is assigned to (2n + 1)R(sub nc)/(R(sub nc)(c)). Extending this to the dipole implies that an exceptionally weak absolute dipole moment (less than or equal to 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration of such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate.
McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field intensity, and mean geomagnetic dipole power excursion and axial dipole reversal frequencies. We conclude that McLeod's Rule helps unify geo-paleomagnetism, correctly relates theoretically predictable statistical properties of the core geodynamo to magnetic observation, and provides a priori information required for stochastic inversion of paleo-, archeo-, and/or historical geomagnetic measurements.
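    The quoted 2.5% figure follows directly from the assigned chi-squared distribution. A sketch of the arithmetic: for the dipole, n = 1, so 2n + 1 = 3 degrees of freedom, whose CDF has a closed form, and dipole power scales as moment squared.

```python
import math

def chi2_cdf_3dof(x):
    """Closed-form chi-squared CDF for 3 degrees of freedom:
    F(x) = erf(sqrt(x/2)) - sqrt(2x/pi) * exp(-x/2)."""
    return math.erf(math.sqrt(x / 2)) - math.sqrt(2 * x / math.pi) * math.exp(-x / 2)

# The rms moment is 74.5% of the 1980 value, so a moment at 20% of the 1980
# value corresponds to a power ratio R_1/<R_1> = (0.20 / 0.745)^2, and the
# statistic (2n+1) R_1/<R_1> with n = 1 is chi-squared with 3 dof.
ratio = (0.20 / 0.745) ** 2
p_weak = chi2_cdf_3dof(3.0 * ratio)   # fraction of time with such a weak dipole
```

    Evaluating this gives roughly 0.025, i.e. the "2.5% of geologic time" quoted in the abstract.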

  4. Rotation of a Spherical Particle with Electrical Dipole Moment Induced by Steady Irradiation in a Static Electric Field

    NASA Astrophysics Data System (ADS)

    Grachev, A. I.

    2018-04-01

    Rotation of a spherical particle in a static electric field under steady irradiation that induces an electric dipole moment in the particle is studied for the first time. Along with a general treatment of the phenomenon, we analyze possible mechanisms underlying the photoinduction of a dipole moment in the particle. Estimates of the angular velocity and the power expended by the rotating particle are provided. These characteristics reach their maximum values when the particle size is in the range of 10 nm to 10 μm.

  5. On the injection of fine dust from the Jovian magnetosphere

    NASA Technical Reports Server (NTRS)

    Maravilla, D.; Flammer, K. R.; Mendis, D. A.

    1995-01-01

    Using a simple aligned dipole model of the Jovian magnetic field, and exploiting integrals of the gravito-electrodynamic equation of motion of charged dust, we obtain an analytic result which characterizes the nature of the orbits of grains of different (fixed) charge-to-mass ratios launched at different velocities from different radial distances from Jupiter. This enables us to consider various possible sources of the dust-streams emanating from Jupiter which have been observed by the Ulysses spacecraft. We conclude that Jupiter's volcanically active satellite Io is the likely source, in agreement with the earlier calculations and simulations of Horanyi et al. using a detailed three-dimensional model of the Jovian magnetosphere. Our estimates of the size range and the velocity range of these dust grains are also in good agreement with those of the above authors and are within the error bars of the observations.

  6. Accurate van der Waals coefficients from density functional theory

    PubMed Central

    Tao, Jianmin; Perdew, John P.; Ruzsinszky, Adrienn

    2012-01-01

    The van der Waals interaction is a weak, long-range correlation, arising from quantum electronic charge fluctuations. This interaction affects many properties of materials. A simple and yet accurate estimate of this effect will facilitate computer simulation of complex molecular materials and drug design. Here we develop a fast approach for accurate evaluation of dynamic multipole polarizabilities and van der Waals (vdW) coefficients of all orders from the electron density and static multipole polarizabilities of each atom or other spherical object, without empirical fitting. Our dynamic polarizabilities (dipole, quadrupole, octupole, etc.) are exact in the zero- and high-frequency limits, and exact at all frequencies for a metallic sphere of uniform density. Our theory predicts dynamic multipole polarizabilities in excellent agreement with more expensive many-body methods, and yields therefrom vdW coefficients C6, C8, C10 for atom pairs with a mean absolute relative error of only 3%. PMID:22205765
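    The dipole (C6) coefficient illustrates the recipe: evaluate polarizabilities at imaginary frequency and integrate over frequency. Below, a single-oscillator model stands in for the paper's density-based input; for that model the Casimir-Polder integral has the closed London form, which the quadrature should reproduce. Inputs are illustrative, roughly hydrogen-like atomic units, not the paper's values.

```python
import math

def alpha_iw(alpha0, omega0, w):
    """Single-oscillator (London) model of the dynamic dipole polarizability
    at imaginary frequency: alpha(i w) = alpha0 / (1 + (w/omega0)^2)."""
    return alpha0 / (1.0 + (w / omega0) ** 2)

def c6_casimir_polder(a1, w1, a2, w2, n=20000, w_max=200.0):
    """C6 = (3/pi) * integral_0^inf alpha1(i w) alpha2(i w) dw  (trapezoid rule)."""
    h = w_max / n
    total = 0.5 * (alpha_iw(a1, w1, 0.0) * alpha_iw(a2, w2, 0.0)
                   + alpha_iw(a1, w1, w_max) * alpha_iw(a2, w2, w_max))
    for k in range(1, n):
        w = k * h
        total += alpha_iw(a1, w1, w) * alpha_iw(a2, w2, w)
    return 3.0 / math.pi * h * total

def c6_london(a1, w1, a2, w2):
    """Closed form for the same model: C6 = (3/2) a1 a2 w1 w2 / (w1 + w2)."""
    return 1.5 * a1 * a2 * w1 * w2 / (w1 + w2)

# Rough hydrogen-like inputs in atomic units (illustrative, not fitted).
c6_num = c6_casimir_polder(4.5, 0.43, 4.5, 0.43)
c6_ref = c6_london(4.5, 0.43, 4.5, 0.43)
```

    Higher-order coefficients (C8, C10) follow the same pattern with quadrupole and octupole polarizabilities in place of the dipole one.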

  7. Estimation of ground and excited state dipole moment of laser dyes C504T and C521T using solvatochromic shifts of absorption and fluorescence spectra.

    PubMed

    Basavaraja, Jana; Suresh Kumar, H M; Inamdar, S R; Wari, M N

    2016-02-05

The absorption and fluorescence spectra of the laser dyes coumarin 504T (C504T) and coumarin 521T (C521T) have been recorded at room temperature in a series of non-polar and polar solvents. The spectra of these dyes showed a bathochromic shift with increasing solvent polarity, indicating the involvement of a π→π* transition. Kamlet-Taft and Catalan solvent parameters were used to analyze the effect of solvents on C504T and C521T molecules. The study reveals that both general solute-solvent interactions and specific interactions are operative in these two systems. The ground state dipole moment was estimated using Guggenheim's method and also by quantum mechanical calculations. The solvatochromic data were used to determine the excited state dipole moment (μ(e)). It is observed that the dipole moment value of the excited state (μ(e)) is higher than that of the ground state in both laser dyes, indicating that these dyes are more polar in the excited state than in the ground state. Copyright © 2015. Published by Elsevier B.V.

  8. Comment on “Error made in reports of main field decay”

    NASA Astrophysics Data System (ADS)

    IAGA Working Group V-MOD on Geomagnetic Field Modeling,; Maus, Stefan; Macmillan, Susan

    2004-09-01

As the International Association of Geomagnetism and Aeronomy (IAGA) Working Group on Geomagnetic Field Modeling (http://www.ngdc.noaa.gov/IAGA/vmod/), responsible for the International Geomagnetic Reference Field (IGRF) [Macmillan et al., 2003], we would like to comment on the Forum article by Wallace H. Campbell (Eos, 85(16), 20 April 2004). Campbell claims that reports of dipole decay at a special session held at the AGU 2003 Fall Meeting were misleading due to an incorrect choice of the coordinate system for the spherical harmonic analysis (SHA) of the geomagnetic field used for the IGRF, the model on which the decay calculation was based. Campbell alleges that the dipole moment of a spherical harmonic expansion depends on the choice of the origin of the coordinate system. In his textbook on geomagnetism, Campbell goes one step further in asserting that, without changing the origin, the process of "tilting the analysis axis to align with the geomagnetic axis…would enhance the dipole term at the expense of the higher multipoles" [Campbell, 2003].

  9. Magnetic effect in the test of the weak equivalence principle using a rotating torsion pendulum

    NASA Astrophysics Data System (ADS)

    Zhu, Lin; Liu, Qi; Zhao, Hui-Hui; Yang, Shan-Qing; Luo, Pengshun; Shao, Cheng-Gang; Luo, Jun

    2018-04-01

The high precision test of the weak equivalence principle (WEP) using a rotating torsion pendulum requires thorough analysis of systematic effects. Here we investigate one of the main systematic effects, the coupling of the ambient magnetic field to the pendulum. It is shown that the dominant term, the interaction between the average magnetic field and the magnetic dipole of the pendulum, is decreased by a factor of 1.1 × 10^4 with multi-layer magnetic shield shells. The shield shells reduce the magnetic field to 1.9 × 10^-9 T in the transverse direction so that the dipole-interaction limited WEP test is expected at η ≲ 10^-14 for a pendulum dipole less than 10^-9 A m^2. The high-order effect, the coupling of the magnetic field gradient to the magnetic quadrupole of the pendulum, would also contribute to the systematic errors for a test precision down to η ˜ 10^-14.

  10. Magnetic effect in the test of the weak equivalence principle using a rotating torsion pendulum.

    PubMed

    Zhu, Lin; Liu, Qi; Zhao, Hui-Hui; Yang, Shan-Qing; Luo, Pengshun; Shao, Cheng-Gang; Luo, Jun

    2018-04-01

The high precision test of the weak equivalence principle (WEP) using a rotating torsion pendulum requires thorough analysis of systematic effects. Here we investigate one of the main systematic effects, the coupling of the ambient magnetic field to the pendulum. It is shown that the dominant term, the interaction between the average magnetic field and the magnetic dipole of the pendulum, is decreased by a factor of 1.1 × 10^4 with multi-layer magnetic shield shells. The shield shells reduce the magnetic field to 1.9 × 10^-9 T in the transverse direction so that the dipole-interaction limited WEP test is expected at η ≲ 10^-14 for a pendulum dipole less than 10^-9 A m^2. The high-order effect, the coupling of the magnetic field gradient to the magnetic quadrupole of the pendulum, would also contribute to the systematic errors for a test precision down to η ∼ 10^-14.

  11. Experimental foundation of the Gabor-Nelson theory applied to boundaries which are non-insulating.

    PubMed

    Troquet, J; Lambin, P; Nelson, C V

    1985-06-07

To establish the application of the Gabor-Nelson theory to non-insulating boundaries, we have used a network which we have divided into two parts: a core energized by a source-sink pair and an appendage, the conductivity of which may or may not differ from that of the core. By ignoring the appendage and by applying the Gabor-Nelson method to the restricted perimeter as if it were totally insulating, we stress the errors made in computing the dipole strength, orientation and position, and how they are influenced by the dipole eccentricity, by its orientation with respect to the junction between the added portion and the core, and by a change in conductivity between the same compartments. Finally, we restore the dipole characteristics by using the appropriate correction derived from theory. Comparing the latter results to those obtained by applying the Gabor-Nelson method to the whole insulating boundary leads to the conclusion that the correction is well founded and must be taken into account.

  12. Manifestations of geometric phases in a proton electric-dipole-moment experiment in an all-electric storage ring

    NASA Astrophysics Data System (ADS)

    Silenko, Alexander J.

    2017-12-01

We consider a proton electric-dipole-moment experiment in an all-electric storage ring when the spin is frozen and local longitudinal and vertical electric fields alternate. In this experiment, the geometric (Berry) phases are very important. Due to these phases, the spin rotates about the radial axis. The corresponding systematic error is rather important, but it can be canceled with clockwise and counterclockwise beams. The geometric phases also lead to the spin rotation about the radial axis. This effect can be canceled with clockwise and counterclockwise beams as well. The sign of the azimuthal component of the angular velocity of the spin precession depends on the starting point where the spin orientation is perfect. The radial component of this quantity keeps its value and sign for each starting point. When the longitudinal and vertical electric fields are joined in the same sections without any alternation, the systematic error due to the geometric phases does not appear, but another systematic effect of the spin rotation about the azimuthal axis takes place. It has opposite signs for clockwise and counterclockwise beams.

  13. C6 Coefficients and Dipole Polarizabilities for All Atoms and Many Ions in Rows 1-6 of the Periodic Table.

    PubMed

    Gould, Tim; Bučko, Tomáš

    2016-08-09

Using time-dependent density functional theory (TDDFT) with exchange kernels, we calculate and test imaginary frequency-dependent dipole polarizabilities for all atoms and many ions in rows 1-6 of the periodic table. These are then integrated over frequency to produce C6 coefficients. Results are presented under different models: straight TDDFT calculations using two different kernels; "benchmark" TDDFT calculations corrected by more accurate quantum chemical and experimental data; and "benchmark" TDDFT with frozen orbital anions. Parametrizations are presented for 411+ atoms and ions, allowing results to be easily used by other researchers. A curious relationship, C6,XY ∝ [αX(0)αY(0)]^0.73, is found between C6 coefficients and static polarizabilities α(0). The relationship C6,XY = 2C6,XC6,Y/[(αX/αY)C6,Y + (αY/αX)C6,X] is tested and found to work well (<5% errors) in ∼80% of the cases, but can break down badly (>30% errors) in a small fraction of cases.
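
    The combination rule quoted in this abstract is straightforward to evaluate numerically. The sketch below is a hedged illustration, not code from the paper: the function name is our own, and the hydrogen/helium inputs are well-known atomic-unit values for the homonuclear C6 coefficients and static dipole polarizabilities rather than the paper's TDDFT data.

    ```python
    def c6_mixed(c6_x, c6_y, alpha_x, alpha_y):
        """Combination rule for the heteronuclear dispersion coefficient
        C6,XY of an X-Y pair, from the homonuclear coefficients C6,X and
        C6,Y and static dipole polarizabilities alpha_X and alpha_Y
        (all quantities in atomic units)."""
        return 2.0 * c6_x * c6_y / ((alpha_x / alpha_y) * c6_y
                                    + (alpha_y / alpha_x) * c6_x)

    # Standard atomic-unit values for H and He (not taken from this paper):
    C6_HH, ALPHA_H = 6.499, 4.5
    C6_HEHE, ALPHA_HE = 1.461, 1.383

    # The rule reproduces the accurate H-He value of about 2.82 a.u.
    print(round(c6_mixed(C6_HH, C6_HEHE, ALPHA_H, ALPHA_HE), 2))
    ```

    For this pair the rule lands within a percent of the accurate coefficient, consistent with the abstract's claim that it works to better than 5% in most cases.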

  14. Correcting systematic errors in high-sensitivity deuteron polarization measurements

    NASA Astrophysics Data System (ADS)

    Brantjes, N. P. M.; Dzordzhadze, V.; Gebel, R.; Gonnella, F.; Gray, F. E.; van der Hoek, D. J.; Imig, A.; Kruithof, W. L.; Lazarus, D. M.; Lehrach, A.; Lorentz, B.; Messi, R.; Moricciani, D.; Morse, W. M.; Noid, G. A.; Onderwater, C. J. G.; Özben, C. S.; Prasuhn, D.; Levi Sandri, P.; Semertzidis, Y. K.; da Silva e Silva, M.; Stephenson, E. J.; Stockhorst, H.; Venanzoni, G.; Versolato, O. O.

    2012-02-01

This paper reports deuteron vector and tensor beam polarization measurements taken to investigate the systematic variations due to geometric beam misalignments and high data rates. The experiments used the In-Beam Polarimeter at the KVI-Groningen and the EDDA detector at the Cooler Synchrotron COSY at Jülich. By measuring with very high statistical precision, the contributions that are second-order in the systematic errors become apparent. By calibrating the sensitivity of the polarimeter to such errors, it becomes possible to obtain information from the raw count rate values on the size of the errors and to use this information to correct the polarization measurements. During the experiment, it was possible to demonstrate that corrections were satisfactory at the level of 10^-5 for deliberately large errors. This may facilitate the real time observation of vector polarization changes smaller than 10^-6 in a search for an electric dipole moment using a storage ring.

  15. Spectral performance of Square Kilometre Array Antennas - II. Calibration performance

    NASA Astrophysics Data System (ADS)

    Trott, Cathryn M.; de Lera Acedo, Eloy; Wayth, Randall B.; Fagnoni, Nicolas; Sutinjo, Adrian T.; Wakley, Brett; Punzalan, Chris Ivan B.

    2017-09-01

    We test the bandpass smoothness performance of two prototype Square Kilometre Array (SKA) SKA1-Low log-periodic dipole antennas, SKALA2 and SKALA3 ('SKA Log-periodic Antenna'), and the current dipole from the Murchison Widefield Array (MWA) precursor telescope. Throughout this paper, we refer to the output complex-valued voltage response of an antenna when connected to a low-noise amplifier, as the dipole bandpass. In Paper I, the bandpass spectral response of the log-periodic antenna being developed for the SKA1-Low was estimated using numerical electromagnetic simulations and analysed using low-order polynomial fittings, and it was compared with the HERA antenna against the delay spectrum metric. In this work, realistic simulations of the SKA1-Low instrument, including frequency-dependent primary beam shapes and array configuration, are used with a weighted least-squares polynomial estimator to assess the ability of a given prototype antenna to perform the SKA Epoch of Reionisation (EoR) statistical experiments. This work complements the ideal estimator tolerances computed for the proposed EoR science experiments in Trott & Wayth, with the realized performance of an optimal and standard estimation (calibration) procedure. With a sufficient sky calibration model at higher frequencies, all antennas have bandpasses that are sufficiently smooth to meet the tolerances described in Trott & Wayth to perform the EoR statistical experiments, and these are primarily limited by an adequate sky calibration model and the thermal noise level in the calibration data. At frequencies of the Cosmic Dawn, which is of principal interest to SKA as one of the first next-generation telescopes capable of accessing higher redshifts, the MWA dipole and SKALA3 antenna have adequate performance, while the SKALA2 design will impede the ability to explore this era.

  16. Hybrid analysis of multiaxis electromagnetic data for discrimination of munitions and explosives of concern

    USGS Publications Warehouse

    Friedel, M.J.; Asch, T.H.; Oden, C.

    2012-01-01

    The remediation of land containing munitions and explosives of concern, otherwise known as unexploded ordnance, is an ongoing problem facing the U.S. Department of Defense and similar agencies worldwide that have used or are transferring training ranges or munitions disposal areas to civilian control. The expense associated with cleanup of land previously used for military training and war provides impetus for research towards enhanced discrimination of buried unexploded ordnance. Towards reducing that expense, a multiaxis electromagnetic induction data collection and software system, called ALLTEM, was designed and tested with support from the U.S. Department of Defense Environmental Security Technology Certification Program. ALLTEM is an on-time time-domain system that uses a continuous triangle-wave excitation to measure the target-step response rather than traditional impulse response. The system cycles through three orthogonal transmitting loops and records a total of 19 different transmitting and receiving loop combinations with a nominal spatial data sampling interval of 20 cm. Recorded data are pre-processed and then used in a hybrid discrimination scheme involving both data-driven and numerical classification techniques. The data-driven classification scheme is accomplished in three steps. First, field observations are used to train a type of unsupervised artificial neural network, a self-organizing map (SOM). Second, the SOM is used to simultaneously estimate target parameters (depth, azimuth, inclination, item type and weight) by iterative minimization of the topographic error vectors. Third, the target classification is accomplished by evaluating histograms of the estimated parameters. The numerical classification scheme is also accomplished in three steps. 
First, the Biot–Savart law is used to model the primary magnetic fields from the transmitter coils and the secondary magnetic fields generated by currents induced in the target materials in the ground. Second, the target response is modelled by three orthogonal dipoles from prolate, oblate and triaxial ellipsoids with one long axis and two shorter axes. Each target consists of all three dipoles. Third, unknown target parameters are determined by comparing modelled to measured target responses. By comparing the rms error among the self-organizing map and numerical classification results, we achieved greater than 95 per cent detection and correct classification of the munitions and explosives of concern at the direct fire and indirect fire test areas at the UXO Standardized Test Site at the Aberdeen Proving Ground, Maryland in 2010.

  17. Hybrid analysis of multiaxis electromagnetic data for discrimination of munitions and explosives of concern

    NASA Astrophysics Data System (ADS)

    Friedel, M. J.; Asch, T. H.; Oden, C.

    2012-08-01

    The remediation of land containing munitions and explosives of concern, otherwise known as unexploded ordnance, is an ongoing problem facing the U.S. Department of Defense and similar agencies worldwide that have used or are transferring training ranges or munitions disposal areas to civilian control. The expense associated with cleanup of land previously used for military training and war provides impetus for research towards enhanced discrimination of buried unexploded ordnance. Towards reducing that expense, a multiaxis electromagnetic induction data collection and software system, called ALLTEM, was designed and tested with support from the U.S. Department of Defense Environmental Security Technology Certification Program. ALLTEM is an on-time time-domain system that uses a continuous triangle-wave excitation to measure the target-step response rather than traditional impulse response. The system cycles through three orthogonal transmitting loops and records a total of 19 different transmitting and receiving loop combinations with a nominal spatial data sampling interval of 20 cm. Recorded data are pre-processed and then used in a hybrid discrimination scheme involving both data-driven and numerical classification techniques. The data-driven classification scheme is accomplished in three steps. First, field observations are used to train a type of unsupervised artificial neural network, a self-organizing map (SOM). Second, the SOM is used to simultaneously estimate target parameters (depth, azimuth, inclination, item type and weight) by iterative minimization of the topographic error vectors. Third, the target classification is accomplished by evaluating histograms of the estimated parameters. The numerical classification scheme is also accomplished in three steps. 
First, the Biot-Savart law is used to model the primary magnetic fields from the transmitter coils and the secondary magnetic fields generated by currents induced in the target materials in the ground. Second, the target response is modelled by three orthogonal dipoles from prolate, oblate and triaxial ellipsoids with one long axis and two shorter axes. Each target consists of all three dipoles. Third, unknown target parameters are determined by comparing modelled to measured target responses. By comparing the rms error among the self-organizing map and numerical classification results, we achieved greater than 95 per cent detection and correct classification of the munitions and explosives of concern at the direct fire and indirect fire test areas at the UXO Standardized Test Site at the Aberdeen Proving Ground, Maryland in 2010.

  18. Optimal Super Dielectric Material

    DTIC Science & Technology

    2015-09-01

...containing liquid with dissolved ionic species will form large dipoles, polarized opposite the applied field. Large dipole SDM placed between the...electrodes of a parallel plate capacitor will reduce the net field to an unprecedented extent. This family of materials can form materials with

  19. Multilevel effects on the balance of dipole-allowed to dipole-forbidden transitions in Rydberg atoms induced by a dipole interaction with slow charged projectiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Syrkin, M.I.

    1996-02-01

In collisions of Rydberg atoms with charged projectiles at velocities approximately matching the speed of the Rydberg electron v_n (matching velocity), n being the principal quantum number of the Rydberg level, the dipole-forbidden transitions with large angular-momentum transfer Δl > 1 substantially dominate over dipole-allowed transitions Δl = 1, although both are induced by the dipole interaction. Here it is shown that as the projectile velocity decreases the adiabatic character of the depopulation depends on the energy distribution of states in the vicinity of the initial level. If the spectrum is close to degeneracy (as for high-l levels) the dipole-forbidden depopulation prevails practically over the entire low-velocity region, down to velocities ∼ n^3 [ΔE/Ry] v_n, where ΔE is the energy spacing adjoining the level due to either a quantum defect or the relevant level width or splitting, whichever is greater. If the energy gaps are substantial (as for strongly nonhydrogenic s and p levels in alkali-metal atoms), then the fraction of dipole transitions in the total depopulation reaches a flat minimum just below the matching velocity and then grows again, making a progressively increasing contribution to the low-velocity depopulation. The analytic models based on the first-order Born amplitudes (rather than the two-level adiabatic approximation) furnish reasonable estimates of the fractional dipole-allowed and dipole-forbidden depopulations.

  20. Fingering instabilities and pattern formation in a two-component dipolar Bose-Einstein condensate

    NASA Astrophysics Data System (ADS)

    Xi, Kui-Tian; Byrnes, Tim; Saito, Hiroki

    2018-02-01

    We study fingering instabilities and pattern formation at the interface of an oppositely polarized two-component Bose-Einstein condensate with strong dipole-dipole interactions in three dimensions. It is shown that the rotational symmetry is spontaneously broken by fingering instability when the dipole-dipole interactions are strengthened. Frog-shaped and mushroom-shaped patterns emerge during the dynamics due to the dipolar interactions. We also demonstrate the spontaneous density modulation and domain growth of a two-component dipolar BEC in the dynamics. Bogoliubov analyses in the two-dimensional approximation are performed, and the characteristic lengths of the domains are estimated analytically. Patterns resembling those in magnetic classical fluids are modulated when the number ratio of atoms, the trap ratio of the external potential, or tilted polarization with respect to the z direction is varied.

  1. Cross-Correlation of the X-Ray Background with Nearby Galaxies: Erratum

    NASA Astrophysics Data System (ADS)

    Jahoda, Keith; Lahav, Ofer; Mushotzky, Richard F.; Boldt, Elihu

    1992-11-01

In the Letter "Cross-Correlation of the X-Ray Background with Nearby Galaxies" by Keith Jahoda, Ofer Lahav, Richard F. Mushotzky, & Elihu Boldt (ApJ, 378, L37 [1991]) there is an error in the evaluation of equation (5): the numerical constant is too small by a factor of 4.5 (the solid angle of the HEAO 1 A2 beam). The revised X-ray emissivity values (over the volume sampled by the UGC and ESO galaxies) are as follows. For UGC (using the median of Table 1) ρ_x_ = (10.5 +/- 6.0) x 10^38^ h_50_ ergs s^-1^ Mpc^-3^, where the error reflects the scatter in Table 1 and the uncertainty in R_*_, the effective depth of the catalogs (the Hubble constant is in units of H_0_ = 50 h_50_ km s^-1^ Mpc^-1^). Similarly for ESO ρ_x_ = (14.5 +/- 8.0) x 10^38^ h_50_ ergs s^-1^ Mpc^-3^. For the combined data (UGC and ESO) our revised value is the mean of the two samples, ρ_x_ = (12.5 +/- 7.0) x 10^38^ h_50_ ergs s^-1^ Mpc^-3^. This correction has important consequences for the discussion section of the paper. First, the fraction of the X-ray background which can be produced by nonevolving X-ray sources distributed out to high redshift (assuming a look-back factor of f = 0.5) can be as large as 50% +/- 30% and 70% +/- 40% for UGC and ESO, respectively. Second, this measurement of ρ_x_ exceeds the upper limit calculated by E. Boldt (IAU Colloq. 123, 451 [1990]) based on an approximation of the total extragalactic X-ray dipole, unless b{OMEGA}^-0.6^ <~ 1.3, less than about half the value derived for bright X-ray AGNs by T. Miyaji & E. Boldt (ApJ, 353, L3 [1990]) and T. Miyaji, K. Jahoda, & E. Boldt (AIP Conf. Proc. 222, 431 [1991]). However, an improved determination of the extragalactic X-ray dipole, now obtained by performing a direct vector sum of the all-sky X-ray data (excluding only points near known Galactic point sources and their antipodes and points with |b| < 20^deg^), and subtracting the high-latitude contribution predicted by the Galactic model of D. Iwan et al. 
(ApJ, 260, 111 [1982]) and that arising from the Compton-Getting effect, gives an estimate (rather than an upper limit) that ρ_x_ ~ 30 x 10^38^ h_50_ (b{OMEGA}^-0.6^)^-1^ ergs s^-1^ Mpc^-3^, consistent with the revised volume emissivity estimated in this work and the same bias parameter deduced for the bright AGNs. These two results suggest that a substantial fraction of the X-ray background could be produced by present-epoch objects and that these have a bias parameter similar to or only slightly smaller than the X-ray bright AGNs. We would also like to point out a minor error after equation (3). The correct sentence should be "where W_gg_, a galaxy autocorrelation estimator defined in analogy to equation (1), is, for example, 0.23 and 0.24 for UGC and ESO, respectively (for 17 deg^2^ cells)." The rest of the calculation is as before. We thank Andy Fabian for bringing the error to our attention and Takamitsu Miyaji for discussion.

  2. Anatomically constrained dipole adjustment (ANACONDA) for accurate MEG/EEG focal source localizations

    NASA Astrophysics Data System (ADS)

    Im, Chang-Hwan; Jung, Hyun-Kyo; Fujimaki, Norio

    2005-10-01

    This paper proposes an alternative approach to enhance localization accuracy of MEG and EEG focal sources. The proposed approach assumes anatomically constrained spatio-temporal dipoles, initial positions of which are estimated from local peak positions of distributed sources obtained from a pre-execution of distributed source reconstruction. The positions of the dipoles are then adjusted on the cortical surface using a novel updating scheme named cortical surface scanning. The proposed approach has many advantages over the conventional ones: (1) as the cortical surface scanning algorithm uses spatio-temporal dipoles, it is robust with respect to noise; (2) it requires no a priori information on the numbers and initial locations of the activations; (3) as the locations of dipoles are restricted only on a tessellated cortical surface, it is physiologically more plausible than the conventional ECD model. To verify the proposed approach, it was applied to several realistic MEG/EEG simulations and practical experiments. From the several case studies, it is concluded that the anatomically constrained dipole adjustment (ANACONDA) approach will be a very promising technique to enhance accuracy of focal source localization which is essential in many clinical and neurological applications of MEG and EEG.

  3. Reactivity of fluoroalkanes in reactions of coordinated molecular decomposition

    NASA Astrophysics Data System (ADS)

    Pokidova, T. S.; Denisov, E. T.

    2017-08-01

Experimental results on the coordinated molecular decomposition of RF fluoroalkanes to olefin and HF are analyzed using the model of intersecting parabolas (IPM). The kinetic parameters are calculated to allow estimates of the activation energy (E) and rate constant (k) of these reactions, based on enthalpy and IPM algorithms. Parameters E and k are found for the first time for eight RF decomposition reactions. The factors that affect the activation energy E of RF decomposition (the enthalpy of the reaction, the electronegativity of the atoms of reaction centers, and the dipole-dipole interaction of polar groups) are determined. The values of E and k for the reverse reactions of addition are estimated.

  4. Long-range, collision-induced hyperpolarizabilities of atoms or centrosymmetric linear molecules: Theory and numerical results for pairs containing H or He

    NASA Astrophysics Data System (ADS)

    Li, Xiaoping; Hunt, Katharine L. C.; Pipin, Janusz; Bishop, David M.

    1996-12-01

    For atoms or molecules of D∞h or higher symmetry, this work gives equations for the long-range, collision-induced changes in the first (Δβ) and second (Δγ) hyperpolarizabilities, complete to order R-7 in the intermolecular separation R for Δβ, and order R-6 for Δγ. The results include nonlinear dipole-induced-dipole (DID) interactions, higher multipole induction, induction due to the nonuniformity of the local fields, back induction, and dispersion. For pairs containing H or He, we have used ab initio values of the static (hyper)polarizabilities to obtain numerical results for the induction terms in Δβ and Δγ. For dispersion effects, we have derived analytic results in the form of integrals of the dynamic (hyper)polarizabilities over imaginary frequencies, and we have evaluated these numerically for the pairs H...H, H...He, and He...He using the values of the fourth dipole hyperpolarizability ɛ(-iω; iω, 0, 0, 0, 0) obtained in this work, along with other hyperpolarizabilities calculated previously by Bishop and Pipin. For later numerical applications to molecular pairs, we have developed constant ratio approximations (CRA1 and CRA2) to estimate the dispersion effects in terms of static (hyper)polarizabilities and van der Waals energy or polarizability coefficients. Tests of the approximations against accurate results for the pairs H...H, H...He, and He...He show that the root mean square (rms) error in CRA1 is ˜20%-25% for Δβ and Δγ; for CRA2 the error in Δβ is similar, but the rms error in Δγ is less than 4%. At separations ˜1.0 a.u. outside the van der Waals minima of the pair potentials for H...H, H...He, and He...He, the nonlinear DID interactions make the dominant contributions to Δγzzzz (where z is the interatomic axis) and to Δγxxxx, accounting for ˜80%-123% of the total value. 
Contributions due to higher-multipole induction and the nonuniformity of the local field (Qα terms) may exceed 15%, while dispersion effects contribute ˜4%-9% of the total Δγzzzz and Δγxxxx. For Δγxxzz, the α term is roughly equal to the nonlinear DID term in absolute value, but opposite in sign. Other terms in Δγxxzz are smaller, but they are important in determining its net value because of the near cancellation of the two dominant terms. When Δγ is averaged isotropically over the orientations of the interatomic vector to give Δγ¯, dispersion effects dominate, contributing 76% of the total Δγ¯ (through order R-6) for H...H, 81% for H...He, and 73% for He...He.

  5. The compensation of quadrupole errors and space charge effects by using trim quadrupoles

    NASA Astrophysics Data System (ADS)

    An, YuWen; Wang, Sheng

    2011-12-01

The China Spallation Neutron Source (CSNS) accelerators consist of an H-linac and a proton Rapid Cycling Synchrotron (RCS). The RCS is designed to accumulate and accelerate the proton beam from 80 MeV to 1.6 GeV with a repetition rate of 25 Hz. The main dipole and quadrupole magnets will operate in AC mode. Due to the adoption of resonant power supplies, saturation errors of the magnetic field cannot be compensated by the power supplies. These saturation errors will disturb the linear optics parameters, such as the tunes, beta function and dispersion function. The strong space charge effects will cause emittance growth. The compensation of these effects by using trim quadrupoles is studied, and the corresponding results are presented.

  6. Differential phase measurements of D-region partial reflections

    NASA Technical Reports Server (NTRS)

    Wiersma, D. J.; Sechrist, C. F., Jr.

    1972-01-01

    Differential phase partial reflection measurements were used to deduce D region electron density profiles. The phase difference was measured by taking sums and differences of amplitudes received on an array of crossed dipoles. The reflection model used was derived from Fresnel reflection theory. Seven profiles obtained over the period from 13 October 1971 to 5 November 1971 are presented, along with the results from simultaneous measurements of differential absorption. Some possible sources of error and error propagation are discussed. A collision frequency profile was deduced from the electron concentration calculated from differential phase and differential absorption.

  7. Dipole Resonances of 76Ge

    NASA Astrophysics Data System (ADS)

    Ilieva, R. S.; Cooper, N.; Werner, V.; Rusev, G.; Pietralla, N.; Kelly, J. H.; Tornow, W.; Yates, S. W.; Crider, B. P.; Peters, E.

    2013-10-01

Dipole resonances in 76Ge have been studied using the method of Nuclear Resonance Fluorescence (NRF). The experiment was performed using the Free Electron Laser facility at HIγS/TUNL, which produced linearly polarised quasi-monoenergetic photons in the 4-9 MeV energy range. Photon strength, in particular dipole strength, is an important ingredient in nuclear reaction calculations, and recent interest in its study has been stimulated by observations of a pygmy dipole resonance near the neutron separation energy Sn of certain nuclei. Furthermore, 76Ge is a candidate for 0ν2β decay. The results are complementary to a relevant experiment done at TU Darmstadt using Bremsstrahlung beams. Single-resonance parities and a preliminary estimate of the total photo-excitation cross section will be presented. This work was supported by the U.S. DOE under grant no. DE-FG02-91ER40609.

  8. Influence of silver nanoparticles on relaxation processes and efficiency of dipole-dipole energy transfer between dye molecules in polymethylmethacrylate films

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bryukhanov, V V; Borkunov, R Yu; Tsarkov, M V

    The fluorescence and phosphorescence of dyes in thin polymethylmethacrylate (PMMA) films in the presence of ablated silver nanoparticles have been investigated over a wide temperature range by methods of femtosecond and picosecond laser photoexcitation. The fluorescence and phosphorescence lifetimes, as well as the spectral and kinetic characteristics of rhodamine 6G (R6G) molecules in PMMA films, are measured in the temperature range of 80-330 K. The activation energy of temperature quenching of the fluorescence of R6G molecules in the presence of ablated silver nanoparticles is found. The vibrational relaxation rate of R6G in PMMA films is estimated, the efficiency of the dipole-dipole electron energy transfer between R6G and brilliant green molecules (enhanced by plasmonic interaction with ablated silver nanoparticles) is analysed, and the constants of this energy transfer are determined.

  9. Terahertz radiation-induced sub-cycle field electron emission across a split-gap dipole antenna

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jingdi; Averitt, Richard D., E-mail: xinz@bu.edu, E-mail: raveritt@ucsd.edu; Department of Physics, Boston University, Boston, Massachusetts 02215

    We use intense terahertz pulses to excite the resonant mode (0.6 THz) of a micro-fabricated dipole antenna with a vacuum gap. The dipole antenna structure enhances the peak amplitude of the in-gap THz electric field by a factor of ∼170. Above an in-gap E-field threshold amplitude of ∼10 MV cm⁻¹, THz-induced field electron emission is observed, as indicated by the field-induced electric current across the dipole antenna gap. Field emission occurs within a fraction of the driving THz period. Our analysis of the current (I) and incident electric field (E) is in agreement with a Millikan-Lauritsen analysis, where log(I) exhibits a linear dependence on 1/E. Numerical estimates indicate that the electrons are accelerated to approximately one tenth of the speed of light.
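    The Millikan-Lauritsen analysis above can be sketched numerically: log10(I) is linear in 1/E, so a first-degree fit against the reciprocal field recovers the coefficients. The numbers below are synthetic illustrations, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
E = np.linspace(10.0, 30.0, 12)        # peak in-gap field (illustrative units)
a_true, b_true = 3.0, -40.0            # assumed Millikan-Lauritsen coefficients
logI = a_true + b_true / E + rng.normal(0.0, 0.01, E.size)  # synthetic log10(current)

# Field emission: log10(I) is linear in 1/E, so a linear fit recovers (a, b).
b_fit, a_fit = np.polyfit(1.0 / E, logI, 1)
```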

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moyotl, A.; Rosado, A.; Tavares-Velasco, G.

    The magnetic dipole moment and the electric dipole moment of leptons are calculated under the assumption of lepton flavor violation (LFV) induced by spin-1 unparticles with both vector and axial-vector couplings to leptons, including a CP-violating phase. The experimental limits on the muon magnetic dipole moment and on LFV processes, such as the decay l_i^- → l_j^- l_k^- l_k^+, are then used to constrain the LFV couplings for particular values of the unparticle operator dimension d_U and the unparticle scale Λ_U, assuming that LFV transitions between the tau and muon leptons are dominant. It is found that the current experimental constraints favor a scenario with dominance of the vector couplings over the axial-vector couplings. We also obtain estimates for the electric dipole moments of the electron and the muon, which are well below the experimental values.

  11. The effect of memory in the stochastic master equation analyzed using the stochastic Liouville equation of motion. Electronic energy migration transfer between reorienting donor-donor, donor-acceptor chromophores

    NASA Astrophysics Data System (ADS)

    Håkansson, Pär; Westlund, Per-Olof

    2005-01-01

    This paper discusses the process of energy migration transfer within reorienting chromophores using the stochastic master equation (SME) and the stochastic Liouville equation (SLE) of motion. We have found that the SME overestimates the rate of the energy migration compared to the SLE solution for a case of weakly interacting chromophores. This discrepancy between the SME and the SLE is caused by a memory effect occurring when fluctuations in the dipole-dipole Hamiltonian H(t) are on the same timescale as the intrinsic fast transverse relaxation rate characterized by 1/T2. Thus the timescale critical for energy-transfer experiments is T2 ≈ 10⁻¹³ s. An extended SME is constructed, accounting for the memory effect of the dipole-dipole Hamiltonian dynamics. The influence of memory on the interpretation of experiments is discussed.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Cohen, Saul D.

    Here, we present results for the isovector axial, scalar, and tensor charges g_A^{u-d}, g_S^{u-d}, and g_T^{u-d} of the nucleon needed to probe the Standard Model and novel physics. The axial charge is a fundamental parameter describing the weak interactions of nucleons. The scalar and tensor charges probe novel interactions at the TeV scale in neutron and nuclear β-decays, and the flavor-diagonal tensor charges g_T^u, g_T^d, and g_T^s are needed to quantify the contribution of the quark electric dipole moment (EDM) to the neutron EDM. The lattice-QCD calculations were done using nine ensembles of gauge configurations generated by the MILC Collaboration using the highly improved staggered quarks action with 2+1+1 dynamical flavors. These ensembles span three lattice spacings a ≈ 0.06, 0.09, and 0.12 fm and light-quark masses corresponding to the pion masses M_π ≈ 135, 225, and 315 MeV. High-statistics estimates on five ensembles using the all-mode-averaging method allow us to quantify all systematic uncertainties and perform a simultaneous extrapolation in the lattice spacing, lattice volume, and light-quark masses for the connected contributions. Our final estimates, in the MS-bar scheme at 2 GeV, of the isovector charges are g_A^{u-d} = 1.195(33)(20), g_S^{u-d} = 0.97(12)(6), and g_T^{u-d} = 0.987(51)(20). The first error includes statistical and all systematic uncertainties except that due to the extrapolation Ansatz, which is given by the second error estimate. Combining our estimate for g_S^{u-d} with the difference of light-quark masses (m_d - m_u)^QCD = 2.67(35) MeV given by the Flavour Lattice Averaging Group, we obtain (M_N - M_P)^QCD = 2.59(49) MeV. Estimates of the connected part of the flavor-diagonal tensor charges of the proton are g_T^u = 0.792(42) and g_T^d = -0.194(14). Combining our new estimates with precision low-energy experiments, we present updated constraints on novel scalar and tensor interactions, ε_{S,T}, at the TeV scale.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koppal, V. V., E-mail: varshakoppal@gmail.com; Muddapur, G. V., E-mail: muddapur.gangadhar@gmail.com; Patil, N. R., E-mail: patilnr23@gmail.com

    In this paper we record the absorption and emission spectra of the laser dye 2-acetyl-3H-benzo[f]chromen-3-one (2AHBC) in solvents of varying polarity to investigate its solvatochromic behavior. The dipole moments of the two electronic states of 2AHBC are calculated from solvatochromic spectral shifts, which are correlated with the dielectric constant (ε) and refractive index (n) of the various solvents. A systematic approach is made to estimate the ground- and excited-state dipole moments on the basis of different solvent-correlation methods, namely the Bilot-Kawski, Lippert-Mataga, Bakhshiev, Kawski-Chamma-Viallet and Reichardt methods. The dipole moment in the excited state was found to be higher than that in the ground state, confirming a π→π* transition.
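    A minimal sketch of one of the solvent-correlation methods named above, the Lippert-Mataga analysis: the Stokes shift is fitted linearly against the orientation polarizability f(ε, n), and the slope gives the dipole-moment change for an assumed Onsager cavity radius. All solvent parameters, shifts, and the radius below are hypothetical, not the 2AHBC data.

```python
import numpy as np

def lippert_f(eps, n):
    """Orientation polarizability f(eps, n)."""
    return (eps - 1.0) / (2.0 * eps + 1.0) - (n**2 - 1.0) / (2.0 * n**2 + 1.0)

# Assumed solvent parameters (dielectric constant, refractive index) and
# Stokes shifts in cm^-1 -- illustrative numbers only.
eps = np.array([2.0, 4.8, 20.7, 36.7])
n = np.array([1.50, 1.45, 1.36, 1.34])
stokes = np.array([2100.0, 2600.0, 3900.0, 4300.0])

m_slope, _ = np.polyfit(lippert_f(eps, n), stokes, 1)   # slope in cm^-1

# Lippert-Mataga: slope = 2*(mu_e - mu_g)^2 / (h*c*a^3); solve for the change.
h, c = 6.626e-27, 2.998e10        # CGS units: erg*s, cm/s
a = 4.0e-8                        # assumed Onsager cavity radius (4 Angstrom, in cm)
dmu = np.sqrt(m_slope * h * c * a**3 / 2.0)   # in esu*cm
dmu_debye = dmu / 1e-18           # 1 debye = 1e-18 esu*cm
```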

  14. Effect of Loop Geometry on TEM Response Over Layered Earth

    NASA Astrophysics Data System (ADS)

    Qi, Youzheng; Huang, Ling; Wu, Xin; Fang, Guangyou; Yu, Gang

    2014-09-01

    A large horizontal loop located on the ground or carried by an aircraft is the most common source for the transient electromagnetic (TEM) method. Although topographical factors or the aircraft outline can make the loop arbitrary in shape, magnetic sources are generally represented as a magnetic dipole or a circular loop, which may introduce significant errors in the calculated response. In this paper, we present a method for calculating the response of a loop of arbitrary shape (whose description can be obtained by different means, including GPS localization) in air or on the surface of a stratified earth. The principle of reciprocity is first used to exchange the roles of the transmitting loop and the dipole receiver; the response of a vertical or a horizontal magnetic dipole is then calculated beforehand; and finally a line integral of the second kind is employed to obtain the transient response. Analytical comparisons show that the method yields accurate results in many situations. Synthetic and field examples are given at the end to show the effect of loop geometry and how our method improves the precision of the EM response.
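    The line-integral step can be illustrated on a simpler magnetostatic analogue: by Stokes' theorem, the flux of a magnetic dipole through an arbitrary closed loop equals the loop integral of its vector potential, ∮ A·dl, so an arbitrary polygon is handled by integrating along its segments. The square-loop geometry below is assumed for illustration, not taken from the paper.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def A_dipole(r, m=np.array([0.0, 0.0, 1.0])):
    """Vector potential of a unit point magnetic dipole at the origin."""
    rn = np.linalg.norm(r)
    return MU0 / (4.0 * np.pi) * np.cross(m, r) / rn**3

def flux_line_integral(vertices, n_seg=200):
    """Flux through a closed polygon as the loop integral of A . dl (midpoint rule)."""
    v = np.asarray(vertices, dtype=float)
    phi = 0.0
    for p, q in zip(v, np.roll(v, -1, axis=0)):
        for k in range(n_seg):
            r = p + (k + 0.5) / n_seg * (q - p)
            phi += A_dipole(r) @ ((q - p) / n_seg)
    return phi

# Illustrative geometry: a 2 m x 2 m square loop centered 5 m above the dipole.
square = np.array([[-1.0, -1.0, 5.0], [1.0, -1.0, 5.0],
                   [1.0, 1.0, 5.0], [-1.0, 1.0, 5.0]])
phi_line = flux_line_integral(square)

# Cross-check: direct surface integration of Bz over the square (midpoint rule).
n = 200
dx = 2.0 / n
xs = -1.0 + (np.arange(n) + 0.5) * dx
X, Y = np.meshgrid(xs, xs)
R2 = X**2 + Y**2 + 5.0**2
Bz = MU0 / (4.0 * np.pi) * (3.0 * 5.0**2 / R2**2.5 - 1.0 / R2**1.5)
phi_surface = Bz.sum() * dx * dx
```

    The two estimates agree closely, which is the property the paper's reciprocity-plus-line-integral construction exploits for loops of arbitrary shape.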

  15. Implementation of the incremental scheme for one-electron first-order properties in coupled-cluster theory.

    PubMed

    Friedrich, Joachim; Coriani, Sonia; Helgaker, Trygve; Dolg, Michael

    2009-10-21

    A fully automated parallelized implementation of the incremental scheme for coupled-cluster singles-and-doubles (CCSD) energies has been extended to treat molecular (unrelaxed) first-order one-electron properties such as the electric dipole and quadrupole moments. The convergence and accuracy of the incremental approach for the dipole and quadrupole moments have been studied for a variety of chemically interesting systems. It is found that the electric dipole moment can be obtained to within 5% and 0.5% accuracy with respect to the exact CCSD value at the third and fourth orders of the expansion, respectively. Furthermore, we find that the incremental expansion of the quadrupole moment converges to the exact result with increasing order of the expansion: the convergence of nonaromatic compounds is fast with errors less than 16 mau and less than 1 mau at third and fourth orders, respectively (1 mau = 10⁻³ e a₀²); the aromatic compounds converge slowly with maximum absolute deviations of 174 and 72 mau at third and fourth orders, respectively.

  16. Quantum Computation using Arrays of N Polar Molecules in Pendular States.

    PubMed

    Wei, Qi; Cao, Yudong; Kais, Sabre; Friedrich, Bretislav; Herschbach, Dudley

    2016-11-18

    We investigate several aspects of realizing quantum computation using entangled polar molecules in pendular states. Quantum algorithms typically start from a product state |00⋯0⟩, and we show that up to a negligible error, the ground states of polar molecule arrays can be considered as the unentangled qubit basis state |00⋯0⟩. This state can be prepared by simply allowing the system to reach thermal equilibrium at low temperature (<1 mK). We also evaluate entanglement, characterized by the concurrence of pendular state qubits in dipole arrays as governed by the external electric field, dipole-dipole coupling and number N of molecules in the array. In the parameter regime that we consider for quantum computing, we find that qubit entanglement is modest, typically no greater than 10⁻⁴, confirming the negligible entanglement in the ground state. We discuss methods for realizing quantum computation in the gate model, measurement-based model, instantaneous quantum polynomial time circuits and the adiabatic model using polar molecules in pendular states. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
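    Concurrence, the entanglement measure used above, can be evaluated with Wootters' formula. The sketch below applies it to a generic nearly-product state |00⟩ + ε|11⟩ (an illustrative stand-in for the weakly entangled pendular-state pairs, not a pendular-state calculation):

```python
import numpy as np

def concurrence(rho):
    """Wootters' concurrence of a two-qubit density matrix."""
    sy = np.array([[0.0, -1.0j], [1.0j, 0.0]])
    S = np.kron(sy, sy)                       # spin-flip operator sigma_y x sigma_y
    R = rho @ S @ rho.conj() @ S              # rho * rho-tilde
    lam = np.sqrt(np.abs(np.sort(np.linalg.eigvals(R).real)[::-1]))
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Nearly-product state |00> + eps|11>: entanglement is small, as in the dipole arrays.
eps = 1e-2
psi = np.array([1.0, 0.0, 0.0, eps]) / np.sqrt(1.0 + eps**2)
rho = np.outer(psi, psi.conj())
```

    For a state a|00⟩ + b|11⟩ the concurrence is 2|ab|, so the value here is ≈ 2ε, matching the "modest entanglement" regime described in the abstract.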

  17. Developing an A Priori Database for Passive Microwave Snow Water Retrievals Over Ocean

    NASA Astrophysics Data System (ADS)

    Yin, Mengtao; Liu, Guosheng

    2017-12-01

    A physically optimized a priori database is developed for Global Precipitation Measurement Microwave Imager (GMI) snow water retrievals over ocean. The initial snow water content profiles are derived from CloudSat Cloud Profiling Radar (CPR) measurements. A radiative transfer model, in which the single-scattering properties of nonspherical snowflakes are based on discrete dipole approximation results, is employed to simulate brightness temperatures and their gradients. Snow water content profiles are then optimized through a one-dimensional variational (1D-Var) method. The standard deviations of the difference between observed and simulated brightness temperatures are of similar magnitude to the observation errors defined in the observation error covariance matrix after the 1D-Var optimization, indicating that this variational method is successful. This optimized database is applied in a Bayesian snow water retrieval algorithm. The retrieval results indicate that the 1D-Var approach has a positive impact on the GMI retrieved snow water content profiles by improving the physical consistency between snow water content profiles and observed brightness temperatures. The global distribution of snow water contents retrieved from the a priori database is compared with CloudSat CPR estimates. Results show that the two estimates have a similar pattern of global distribution, and the difference of their global means is small. In addition, we investigate the impact of using physical parameters to subset the database on snow water retrievals. It is shown that using total precipitable water to subset the database with 1D-Var optimization is beneficial for snow water retrievals.
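    A 1D-Var optimization minimizes a background-plus-observation cost function; for a linear(ized) observation operator the minimizer has a closed form. The toy sketch below uses assumed dimensions, operators and error covariances, not the GMI/CPR configuration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_levels, n_chan = 5, 3

xb = np.full(n_levels, 0.1)                         # background profile (hypothetical)
B = 0.02 * np.eye(n_levels)                         # background error covariance (assumed)
H = rng.uniform(0.5, 1.5, size=(n_chan, n_levels))  # linearized observation operator (assumed)
R = 0.5 * np.eye(n_chan)                            # observation error covariance (assumed)

x_true = np.array([0.05, 0.1, 0.3, 0.2, 0.05])      # profile used to synthesize observations
y = H @ x_true                                      # simulated brightness temperatures

# Closed-form minimizer of J(x) = (x-xb)' B^-1 (x-xb) + (y-Hx)' R^-1 (y-Hx):
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)        # gain matrix
xa = xb + K @ (y - H @ xb)                          # analysis profile
```

    The analysis xa fits the observations better than the background xb while staying tied to it through B, which is the consistency improvement the abstract describes.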

  18. Biases in Time-Averaged Field and Paleosecular Variation Studies

    NASA Astrophysics Data System (ADS)

    Johnson, C. L.; Constable, C.

    2009-12-01

    Challenges to constructing time-averaged field (TAF) and paleosecular variation (PSV) models of Earth’s magnetic field over million year time scales are the uneven geographical and temporal distribution of paleomagnetic data and the absence of full vector records of the magnetic field variability at any given site. Recent improvements in paleomagnetic data sets now allow regional assessment of the biases introduced by irregular temporal sampling and the absence of full vector information. We investigate these effects over the past few Myr for regions with large paleomagnetic data sets, where the TAF and/or PSV have been of previous interest (e.g., significant departures of the TAF from the field predicted by a geocentric axial dipole). We calculate the effects of excluding paleointensity data from TAF calculations, and find these to be small. For example, at Hawaii, we find that for the past 50 ka, estimates of the TAF direction are minimally affected if only paleodirectional data versus the full paleofield vector are used. We use resampling techniques to investigate biases incurred by the uneven temporal distribution. Key to the latter issue is temporal information on a site-by-site basis. At Hawaii, resampling of the paleodirectional data onto a uniform temporal distribution, assuming no error in the site ages, reduces the magnitude of the inclination anomaly for the Brunhes, Gauss and Matuyama epochs. However inclusion of age errors in the sampling procedure leads to TAF estimates that are close to those reported for the original data sets. We discuss the implications of our results for global field models.

  19. Polarized-interferometer feasibility study

    NASA Technical Reports Server (NTRS)

    Raab, F. H.

    1983-01-01

    The feasibility of using a polarized-interferometer system as a rendezvous and docking sensor for two cooperating spacecraft was studied. The polarized interferometer is a radio frequency system for long range, real time determination of relative position and attitude. Range is determined by round trip signal timing. Direction is determined by radio interferometry. Relative roll is determined from signal polarization. Each spacecraft is equipped with a transponder and an antenna array. The antenna arrays consist of four crossed dipoles that can transmit or receive either circularly or linearly polarized signals. The active spacecraft is equipped with a sophisticated transponder and makes all measurements. The transponder on the passive spacecraft is a relatively simple repeater. An initialization algorithm is developed to estimate position and attitude without any a priori information. A tracking algorithm based upon minimum variance linear estimators is also developed. Techniques to simplify the transponder on the passive spacecraft are investigated and a suitable configuration is determined. A multiple carrier CW signal format is selected. The dependences of range accuracy and ambiguity-resolution error probability are derived and used to design a candidate system. The validity of the design and the feasibility of the polarized interferometer concept are verified by simulation.

  20. Relating polarizability to volume, ionization energy, electronegativity, hardness, moments of momentum, and other molecular properties

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blair, Shamus A.; Thakkar, Ajit J., E-mail: ajit@unb.ca

    2014-08-21

    Semiquantitative relationships between the mean static dipole polarizability and other molecular properties such as the volume, ionization energy, electronegativity, hardness, and moments of momentum are explored. The relationships are tested using density functional theory computations on the 1641 neutral, ground-state, organic molecules in the TABS database. The best polarizability approximations have median errors under 5%.

  2. AE monitoring instrumentation for high performance superconducting dipoles and quadrupoles, Phase 2

    NASA Astrophysics Data System (ADS)

    Iwasa, Y.

    1986-01-01

    In the past year and a half, attention has been focused on the development of instrumentation for on-line monitoring of high-performance superconducting dipoles and quadrupoles. This instrumentation has been completed and satisfactorily demonstrated on a prototype Fermi dipole. Conductor motion is the principal source of acoustic emission (AE) and the major cause of quenches in the dipole, except during the virgin run when other sources are also present. The motion events are mostly microslips. The middle of the magnet is most susceptible to quenches. This result agrees with the peak field location in the magnet. In the virgin state the top and bottom of the magnet appeared acoustically similar but diverged after training, possibly due to minute structural asymmetry, for example differences in clamping and welding strength; however, the results do not indicate any major structural defects. There is good correlation between quench current and AE starting current. The correlation is reasonable if mechanical disturbances are indeed responsible for quench. Based on AE cumulative history, the average frictional power dissipation in the whole dipole winding is estimated to be approximately 10 μW cm⁻³. We expect to implement the following in the next phase of this project: application of room-temperature techniques to detecting structural defects in the dipole; application of the system to other dipoles and quadrupoles in the same series to compare their performances; and further investigation of the AE starting current ≈ quench current relationship. Work has begun on the room temperature measurements. Preliminary Stress Wave Factor measurements have been made on a model dipole casing.

  3. Highly Accurate Potential Energy Surface, Dipole Moment Surface, Rovibrational Energy Levels, and Infrared Line List for ³²S¹⁶O₂ up to 8000 cm⁻¹

    NASA Technical Reports Server (NTRS)

    Huang, Xinchuan; Schwenke, David W.; Lee, Timothy J.

    2014-01-01

    A purely ab initio potential energy surface (PES) was refined with selected ³²S¹⁶O₂ HITRAN data. Compared to HITRAN, the root-mean-square (RMS) error for all J = 0-80 rovibrational energy levels computed on the refined PES (denoted Ames-1) is 0.013 cm⁻¹. Combined with a CCSD(T)/aug-cc-pV(Q+d)Z dipole moment surface (DMS), an infrared (IR) line list (denoted Ames-296K) has been computed at 296 K and covers up to 8000 cm⁻¹. Compared to the HITRAN and CDMS databases, the intensity agreement for most vibrational bands is better than 85-90%. Our predictions for ³⁴S¹⁶O₂ band origins, higher-energy ³²S¹⁶O₂ band origins and missing ³²S¹⁶O₂ IR bands have been verified by the most recent experiments and available HITRAN data. We conclude that the Ames-1 PES is able to predict ³²S¹⁶O₂ and ³⁴S¹⁶O₂ band origins below 5500 cm⁻¹ with 0.01-0.03 cm⁻¹ uncertainties, and that the Ames-296K line list provides continuous, reliable and accurate IR simulations. The Ka-dependence of both line-position and line-intensity errors is discussed. The line list will greatly facilitate SO2 IR spectral experimental analysis, as well as the elimination of SO2 lines in high-resolution astronomical observations.

  4. Robust quantum logic in neutral atoms via adiabatic Rydberg dressing

    DOE PAGES

    Keating, Tyler; Cook, Robert L.; Hankin, Aaron M.; ...

    2015-01-28

    We study a scheme for implementing a controlled-Z (CZ) gate between two neutral-atom qubits based on the Rydberg blockade mechanism in a manner that is robust to errors caused by atomic motion. By employing adiabatic dressing of the ground electronic state, we can protect the gate from decoherence due to random phase errors that typically arise because of atomic thermal motion. In addition, the adiabatic protocol allows for a Doppler-free configuration that involves counterpropagating lasers in a σ⁺/σ⁻ orthogonal polarization geometry that further reduces motional errors due to Doppler shifts. The residual motional error is dominated by dipole-dipole forces acting on doubly-excited Rydberg atoms when the blockade is imperfect. As a result, for reasonable parameters, with qubits encoded into the clock states of 133Cs, we predict that our protocol could produce a CZ gate in under 10 μs with an error probability on the order of 10⁻³.

  5. Localization of source with unknown amplitude using IPMC sensor arrays

    NASA Astrophysics Data System (ADS)

    Abdulsadda, Ahmad T.; Zhang, Feitian; Tan, Xiaobo

    2011-04-01

    The lateral line system, consisting of arrays of neuromasts functioning as flow sensors, is an important sensory organ for fish that enables them to detect predators, locate prey, perform rheotaxis, and coordinate schooling. Creating artificial lateral line systems is of significant interest since it will provide a new sensing mechanism for control and coordination of underwater robots and vehicles. In this paper we propose recursive algorithms for localizing a vibrating sphere, also known as a dipole source, based on measurements from an array of flow sensors. A dipole source is frequently used in the study of biological lateral lines, as a surrogate for underwater motion sources such as a flapping fish fin. We first formulate a nonlinear estimation problem based on an analytical model for the dipole-generated flow field. Two algorithms are presented to estimate both the source location and the vibration amplitude, one based on the least squares method and the other based on the Newton-Raphson method. Simulation results show that both methods deliver comparable performance in source localization. A prototype of an artificial lateral line system comprising four ionic polymer-metal composite (IPMC) sensors is built, and experimental results are further presented to demonstrate the effectiveness of IPMC lateral line systems and the proposed estimation algorithms.

  6. Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions

    PubMed Central

    Onufriev, Alexey V.

    2013-01-01

    We propose an approach for approximating electrostatic charge distributions with a small number of point charges to optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance comparable to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order.
PMID:23861790
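    The premise behind such point-charge approximations can be checked numerically: a pair of opposite charges ±q at separation s carries dipole moment p = qs, and its potential converges to the ideal point-dipole potential as the evaluation radius grows, with the mid-field error larger than the far-field error. All values below are illustrative, not the PPCA/OPCA construction itself.

```python
import numpy as np

q, s = 0.5, 0.4                       # charge and separation (arbitrary units)
p = q * s                             # dipole moment of the +-q pair

def pot_pair(r_vec):
    """Exact potential of charges +q and -q placed at +-s/2 on the z-axis."""
    rp = r_vec - np.array([0.0, 0.0, s / 2.0])
    rm = r_vec + np.array([0.0, 0.0, s / 2.0])
    return q / np.linalg.norm(rp) - q / np.linalg.norm(rm)

def pot_dipole(r_vec):
    """Ideal point-dipole potential p*z / r^3."""
    r = np.linalg.norm(r_vec)
    return p * r_vec[2] / r**3

def rms_rel_error(radius, n=500):
    """Relative RMS mismatch of the two potentials on a sphere of given radius."""
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(n, 3))
    pts *= radius / np.linalg.norm(pts, axis=1, keepdims=True)
    a = np.array([pot_pair(v) for v in pts])
    b = np.array([pot_dipole(v) for v in pts])
    return np.sqrt(np.mean((a - b)**2) / np.mean(b**2))

err_mid = rms_rel_error(1.0)   # mid-field: radius comparable to the extent
err_far = rms_rel_error(4.0)   # far field
```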

  7. Planck 2015 results: V. LFI calibration

    DOE PAGES

    Ade, P. A. R.; Aghanim, N.; Ashdown, M.; ...

    2016-09-20

    In this paper, we present a description of the pipeline used to calibrate the Planck Low Frequency Instrument (LFI) timelines into thermodynamic temperatures for the Planck 2015 data release, covering four years of uninterrupted operations. As in the 2013 data release, our calibrator is provided by the spin-synchronous modulation of the cosmic microwave background dipole, but we now use the orbital component, rather than adopting the Wilkinson Microwave Anisotropy Probe (WMAP) solar dipole. This allows our 2015 LFI analysis to provide an independent Solar dipole estimate, which is in excellent agreement with that of HFI and within 1σ (0.3% in amplitude) of the WMAP value. This 0.3% shift in the peak-to-peak dipole temperature from WMAP and a general overhaul of the iterative calibration code increases the overall level of the LFI maps by 0.45% (30 GHz), 0.64% (44 GHz), and 0.82% (70 GHz) in temperature with respect to the 2013 Planck data release, thus reducing the discrepancy with the power spectrum measured by WMAP. We estimate that the LFI calibration uncertainty is now at the level of 0.20% for the 70 GHz map, 0.26% for the 44 GHz map, and 0.35% for the 30 GHz map. Finally, we provide a detailed description of the impact of all the changes implemented in the calibration since the previous data release.

  9. Planck 2015 results. V. LFI calibration

    NASA Astrophysics Data System (ADS)

    Planck Collaboration; Ade, P. A. R.; Aghanim, N.; Ashdown, M.; Aumont, J.; Baccigalupi, C.; Banday, A. J.; Barreiro, R. B.; Bartolo, N.; Battaglia, P.; Battaner, E.; Benabed, K.; Benoît, A.; Benoit-Lévy, A.; Bernard, J.-P.; Bersanelli, M.; Bielewicz, P.; Bock, J. J.; Bonaldi, A.; Bonavera, L.; Bond, J. R.; Borrill, J.; Bouchet, F. R.; Bucher, M.; Burigana, C.; Butler, R. C.; Calabrese, E.; Cardoso, J.-F.; Catalano, A.; Chamballu, A.; Christensen, P. R.; Colombi, S.; Colombo, L. P. L.; Crill, B. P.; Curto, A.; Cuttaia, F.; Danese, L.; Davies, R. D.; Davis, R. J.; de Bernardis, P.; de Rosa, A.; de Zotti, G.; Delabrouille, J.; Dickinson, C.; Diego, J. M.; Dole, H.; Donzelli, S.; Doré, O.; Douspis, M.; Ducout, A.; Dupac, X.; Efstathiou, G.; Elsner, F.; Enßlin, T. A.; Eriksen, H. K.; Fergusson, J.; Finelli, F.; Forni, O.; Frailis, M.; Franceschi, E.; Frejsel, A.; Galeotta, S.; Galli, S.; Ganga, K.; Giard, M.; Giraud-Héraud, Y.; Gjerløw, E.; González-Nuevo, J.; Górski, K. M.; Gratton, S.; Gregorio, A.; Gruppuso, A.; Hansen, F. K.; Hanson, D.; Harrison, D. L.; Henrot-Versillé, S.; Herranz, D.; Hildebrandt, S. R.; Hivon, E.; Hobson, M.; Holmes, W. A.; Hornstrup, A.; Hovest, W.; Huffenberger, K. M.; Hurier, G.; Jaffe, A. H.; Jaffe, T. R.; Juvela, M.; Keihänen, E.; Keskitalo, R.; Kisner, T. S.; Knoche, J.; Krachmalnicoff, N.; Kunz, M.; Kurki-Suonio, H.; Lagache, G.; Lähteenmäki, A.; Lamarre, J.-M.; Lasenby, A.; Lattanzi, M.; Lawrence, C. R.; Leahy, J. P.; Leonardi, R.; Lesgourgues, J.; Levrier, F.; Liguori, M.; Lilje, P. B.; Linden-Vørnle, M.; López-Caniego, M.; Lubin, P. M.; Macías-Pérez, J. F.; Maggio, G.; Maino, D.; Mandolesi, N.; Mangilli, A.; Maris, M.; Martin, P. G.; Martínez-González, E.; Masi, S.; Matarrese, S.; McGehee, P.; Meinhold, P. R.; Melchiorri, A.; Mendes, L.; Mennella, A.; Migliaccio, M.; Mitra, S.; Montier, L.; Morgante, G.; Mortlock, D.; Moss, A.; Munshi, D.; Murphy, J. A.; Naselsky, P.; Nati, F.; Natoli, P.; Netterfield, C. B.; Nørgaard-Nielsen, H. 
U.; Novikov, D.; Novikov, I.; Paci, F.; Pagano, L.; Pajot, F.; Paoletti, D.; Partridge, B.; Pasian, F.; Patanchon, G.; Pearson, T. J.; Peel, M.; Perdereau, O.; Perotto, L.; Perrotta, F.; Pettorino, V.; Piacentini, F.; Pierpaoli, E.; Pietrobon, D.; Pointecouteau, E.; Polenta, G.; Pratt, G. W.; Prézeau, G.; Prunet, S.; Puget, J.-L.; Rachen, J. P.; Rebolo, R.; Reinecke, M.; Remazeilles, M.; Renzi, A.; Rocha, G.; Romelli, E.; Rosset, C.; Rossetti, M.; Roudier, G.; Rubiño-Martín, J. A.; Rusholme, B.; Sandri, M.; Santos, D.; Savelainen, M.; Scott, D.; Seiffert, M. D.; Shellard, E. P. S.; Spencer, L. D.; Stolyarov, V.; Sutton, D.; Suur-Uski, A.-S.; Sygnet, J.-F.; Tauber, J. A.; Tavagnacco, D.; Terenzi, L.; Toffolatti, L.; Tomasi, M.; Tristram, M.; Tucci, M.; Tuovinen, J.; Türler, M.; Umana, G.; Valenziano, L.; Valiviita, J.; Van Tent, B.; Vassallo, T.; Vielva, P.; Villa, F.; Wade, L. A.; Wandelt, B. D.; Watson, R.; Wehus, I. K.; Wilkinson, A.; Yvon, D.; Zacchei, A.; Zonca, A.

    2016-09-01

    We present a description of the pipeline used to calibrate the Planck Low Frequency Instrument (LFI) timelines into thermodynamic temperatures for the Planck 2015 data release, covering four years of uninterrupted operations. As in the 2013 data release, our calibrator is provided by the spin-synchronous modulation of the cosmic microwave background dipole, but we now use the orbital component, rather than adopting the Wilkinson Microwave Anisotropy Probe (WMAP) solar dipole. This allows our 2015 LFI analysis to provide an independent solar dipole estimate, which is in excellent agreement with that of HFI and within 1σ (0.3% in amplitude) of the WMAP value. This 0.3% shift in the peak-to-peak dipole temperature from WMAP and a general overhaul of the iterative calibration code increase the overall level of the LFI maps by 0.45% (30 GHz), 0.64% (44 GHz), and 0.82% (70 GHz) in temperature with respect to the 2013 Planck data release, thus reducing the discrepancy with the power spectrum measured by WMAP. We estimate that the LFI calibration uncertainty is now at the level of 0.20% for the 70 GHz map, 0.26% for the 44 GHz map, and 0.35% for the 30 GHz map. We provide a detailed description of the impact of all the changes implemented in the calibration since the previous data release.
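    As a rough numerical illustration (a sketch, not the LFI pipeline itself): to first order in v/c, motion at speed v relative to the CMB rest frame produces a dipole ΔT = T_CMB (v/c) cos θ, so the precisely known orbital velocity provides an absolute calibrator. The speeds below are illustrative round numbers.

```python
import numpy as np

T_CMB = 2.7255        # K, CMB monopole temperature
C = 299_792_458.0     # m/s, speed of light

def dipole_amplitude(v):
    """Peak Doppler dipole amplitude (K) for speed v (m/s) relative to the CMB."""
    return T_CMB * v / C

# Orbital component: ~30 km/s orbital speed, known to high accuracy from the
# ephemeris, which is what makes it usable as an absolute calibrator.
orbital = dipole_amplitude(30.0e3)
# Solar component: ~370 km/s motion of the Solar System barycentre.
solar = dipole_amplitude(370.0e3)
print(f"orbital dipole ~ {orbital*1e6:.0f} uK, solar dipole ~ {solar*1e3:.2f} mK")
```

The orbital signal is roughly a tenth of the solar dipole, but its amplitude is fixed by celestial mechanics rather than by a prior CMB measurement.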

  10. Precise measurements of the atomic masses of silicon-28, phosphorus-31, sulfur-32, krypton-84,86, xenon-129,132,136, and the dipole moment of PH+ using single-ion and two-ion Penning trap techniques

    NASA Astrophysics Data System (ADS)

    Redshaw, Matthew

    This dissertation describes high precision measurements of atomic masses by measuring the cyclotron frequency of ions trapped singly, or in pairs, in a precision, cryogenic Penning trap. By building on techniques developed at MIT for measuring the cyclotron frequency of single trapped ions, the atomic masses of 84,86Kr and 129,132,136Xe have been measured to better than a part in 10^10 fractional precision. By developing a new technique for measuring the cyclotron frequency ratio of a pair of simultaneously trapped ions, the atomic masses of 28Si, 31P and 32S have been measured to 2 or 3 parts in 10^11. This new technique has also been used to measure the dipole moment of PH+. During the course of these measurements, two significant but previously unsuspected sources of systematic error were discovered, characterized and eliminated. Extensive tests for other sources of systematic error were performed and are described in detail. The mass measurements presented here provide a significant increase in precision over previous values for these masses, by factors of 3 to 700. The results have a broad range of physics applications: The mass of 136Xe is important for searches for neutrinoless double-beta-decay; the mass of 28Si is relevant to the re-definition of the artifact kilogram in terms of an atomic mass standard; the masses of 84,86Kr and 129,132,136Xe provide convenient reference masses for less precise mass spectrometers in diverse fields such as nuclear physics and chemistry; and the dipole moment of PH+ provides a test of molecular structure calculations.
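    The mass comparison rests on the free-space cyclotron relation f_c = qB/(2πm): the frequency ratio of two ions in the same field gives their mass ratio, with B dropping out. The toy sketch below uses an illustrative field strength and masses, not the dissertation's data or analysis.

```python
import numpy as np

def cyclotron_freq(q, m, B):
    """Free-space cyclotron frequency f_c = q*B/(2*pi*m) in Hz."""
    return q * B / (2 * np.pi * m)

E_CHARGE = 1.602176634e-19   # C, elementary charge
AMU = 1.66053906660e-27      # kg, atomic mass unit
B = 8.5                      # T, illustrative trap field

f_a = cyclotron_freq(E_CHARGE, 28.00 * AMU, B)   # hypothetical ion A
f_b = cyclotron_freq(E_CHARGE, 28.01 * AMU, B)   # hypothetical reference ion B

# For equal charge states, f_b / f_a = m_a / m_b regardless of B.
mass_ratio = f_b / f_a
print(mass_ratio)
```

In practice the precision comes from how well the two frequencies can be compared; trapping both ions simultaneously suppresses the field drift that limits sequential comparisons.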

  11. Language and music: differential hemispheric dominance in detecting unexpected errors in the lyrics and melody of memorized songs.

    PubMed

    Yasui, Takuya; Kaga, Kimitaka; Sakai, Kuniyoshi L

    2009-02-01

    Using magnetoencephalography (MEG), we report here the hemispheric dominance of the auditory cortex that is selectively modulated by unexpected errors in the lyrics and melody of songs (lyrics and melody deviants), thereby elucidating under which conditions the lateralization of auditory processing changes. In experiment 1, using familiar songs, we found that the dipole strength of responses to the lyrics deviants was left-dominant at 140 ms (M140), whereas that of responses to the melody deviants was right-dominant at 130 ms (M130). In experiment 2, using familiar songs with a constant syllable or pitch, the dipole strength of the frequency mismatch negativity elicited by oddballs was left-dominant. There were significant main effects of experiment (1 and 2) on the peak latencies and on the coordinates of the dipoles, indicating that the M140 and M130 were not the frequency mismatch negativity. In experiment 3, using newly memorized songs, the right-dominant M130 was observed only when the presented note was an unexpected one, independent of perceiving unnatural pitch transitions (i.e., perceptual saliency) and of selective attention to the melody of songs. The consistent right-dominance of the M130 between experiments 1 and 3 suggests that the M130 in experiment 1 is due to unexpected notes deviating from well-memorized songs. On the other hand, the left-dominant M140 was elicited by lyrics deviants, suggesting the influence of top-down linguistic information and the memory of the familiar songs. We thus conclude that the left-lateralized M140 and right-lateralized M130 reflect expectation based on top-down information of language and music, respectively.

  12. Development of a bio-magnetic measurement system and sensor configuration analysis for rats

    NASA Astrophysics Data System (ADS)

    Kim, Ji-Eun; Kim, In-Seon; Kim, Kiwoong; Lim, Sanghyun; Kwon, Hyukchan; Kang, Chan Seok; Ahn, San; Yu, Kwon Kyu; Lee, Yong-Ho

    2017-04-01

    Magnetoencephalography (MEG) based on superconducting quantum interference devices enables the measurement of very weak magnetic fields (10-1000 fT) generated from the human or animal brain. In this article, we introduce a small MEG system that we developed specifically for use with rats. Our system has the following characteristics: (1) variable distance between the pick-up coil and outer Dewar bottom (~5 mm), (2) small pick-up coil (4 mm) for high spatial resolution, (3) good field sensitivity (45-80 fT/cm/√Hz), (4) a sensor interval that satisfies the Nyquist spatial sampling theorem, and (5) small source localization error for the region to be investigated. To reduce source localization error, it is necessary to establish an optimal sensor layout. To this end, we simulated confidence volumes at each point on a grid on the surface of a virtual rat head. In this simulation, we used locally fitted spheres as model rat heads. This enabled us to consider more realistic volume currents. We constrained the model such that the dipoles could have only four possible orientations: the x- and y-axes from the original coordinates, and two tangentially layered dipoles (local x- and y-axes) in the locally fitted spheres. We considered the confidence volumes according to the sensor layout and the dipole orientations and positions. We then conducted a preliminary test with a 4-channel MEG system prior to manufacturing the multi-channel system. Using the 4-channel MEG system, we measured rat magnetocardiograms. We obtained well defined P-, QRS-, and T-waves in rats with a maximum value of 15 pT/cm. Finally, we measured auditory evoked fields and steady state auditory evoked fields with maximum values of 400 fT/cm and 250 fT/cm, respectively.

  13. Modeling super-resolution SERS using a T-matrix method to elucidate molecule-nanoparticle coupling and the origins of localization errors

    NASA Astrophysics Data System (ADS)

    Heaps, Charles W.; Schatz, George C.

    2017-06-01

    A computational method to model diffraction-limited images from super-resolution surface-enhanced Raman scattering microscopy is introduced. Despite significant experimental progress in plasmon-based super-resolution imaging, theoretical predictions of the diffraction-limited images remain a challenge. The method is used to calculate localization errors and image intensities for a single spherical gold nanoparticle-molecule system. The light scattering is calculated using a modification of generalized Mie (T-matrix) theory with a point dipole source, and diffraction-limited images are calculated using vectorial diffraction theory. The calculation produces the multipole expansion for each emitter and the coherent superposition of all fields. Imaging the constituent fields in addition to the total field provides new insight into the strong coupling between the molecule and the nanoparticle. Regardless of whether the molecular dipole moment is oriented parallel or perpendicular to the nanoparticle surface, the anisotropic excitation shifts the apparent center of the nanoparticle, as measured by the point spread function, by approximately fifty percent of the particle radius toward the molecule. Inspection of the nanoparticle multipoles reveals that the distortion arises from a weak quadrupole resonance interfering with the dipole field in the nanoparticle. When the nanoparticle and molecule fields are in phase, the distorted nanoparticle field dominates the observed image. When out of phase, the nanoparticle and molecule are of comparable intensity and interference between the two emitters dominates the observed image. The method is also applied to different wavelengths and particle radii. At off-resonant wavelengths, the method predicts images closer to the molecule, not because of relative intensities but because of greater distortion in the nanoparticle. The method is a promising approach to improving the understanding of plasmon-enhanced super-resolution experiments.

  14. An Ab Initio Based Potential Energy Surface for Water

    NASA Technical Reports Server (NTRS)

    Partridge, Harry; Schwenke, David W.; Langhoff, Stephen R. (Technical Monitor)

    1996-01-01

    We report a new determination of the water potential energy surface. A high quality ab initio potential energy surface (PES) and dipole moment function of water have been computed. This PES is empirically adjusted to improve the agreement between the computed line positions and those from the HITRAN 92 data base. The adjustment is small; nonetheless, including an estimate of core (oxygen 1s) electron correlation greatly improves the agreement with experiment. Of the 27,245 assigned transitions in the HITRAN 92 data base for H2(O-16), the overall root mean square (rms) deviation between the computed and observed line positions is 0.125/cm. However, the deviations do not correspond to a normal distribution: 69% of the lines have errors less than 0.05/cm. Overall, the agreement between the line intensities computed in the present work and those contained in the data base is quite good; however, a significant number of line strengths differ greatly.
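    The pair of statistics quoted above (an rms deviation much larger than the typical line error) is the signature of a heavy-tailed residual distribution. A minimal sketch on synthetic residuals, not the HITRAN comparison itself:

```python
import numpy as np

# Synthetic computed-minus-observed line positions (cm^-1); a heavy-tailed
# Student-t draw stands in for the non-normal residuals described above.
rng = np.random.default_rng(0)
residuals = rng.standard_t(df=3, size=27_245) * 0.05

rms = np.sqrt(np.mean(residuals**2))
frac_small = np.mean(np.abs(residuals) < 0.05)  # fraction of lines under 0.05/cm
print(f"rms = {rms:.3f}/cm, {100*frac_small:.0f}% of lines under 0.05/cm")
```

As in the abstract, most lines sit well inside the rms figure because a minority of large outliers dominates the quadratic mean.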

  15. Particle swarm optimization and its application in MEG source localization using single time sliced data

    NASA Astrophysics Data System (ADS)

    Lin, Juan; Liu, Chenglian; Guo, Yongning

    2014-10-01

    The estimation of neural active sources from magnetoencephalography (MEG) data is a critical issue for both clinical neurology and brain function research. A widely accepted source-modeling technique for MEG involves calculating a set of equivalent current dipoles (ECDs). Source depth in the brain is one of the difficulties in MEG source localization. Particle swarm optimization (PSO) is widely used to solve various optimization problems. In this paper we discuss its ability and robustness in finding the global optimum at different depths in the brain when using a single equivalent current dipole (sECD) model and single time sliced data. The results show that PSO is an effective global optimization method for MEG source localization when a single dipole is placed at different depths.
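    A minimal PSO sketch with standard inertia/acceleration coefficients. The forward-model misfit that a real localization would minimize is replaced here by a simple quadratic stand-in around a hypothetical dipole location, so the example shows only the optimizer, not the MEG physics.

```python
import numpy as np

rng = np.random.default_rng(1)
true_pos = np.array([0.02, -0.01, 0.05])   # hypothetical dipole location (m)

def misfit(p):
    """Stand-in objective; a real version would compare measured vs modeled fields."""
    return np.sum((p - true_pos)**2, axis=-1)

n, dim, iters = 30, 3, 200
w, c1, c2 = 0.72, 1.49, 1.49               # commonly used PSO coefficients
x = rng.uniform(-0.08, 0.08, (n, dim))     # particles inside a head-sized box
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), misfit(x)
gbest = pbest[np.argmin(pbest_f)]

for _ in range(iters):
    r1, r2 = rng.random((2, n, dim))
    v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
    x = x + v
    f = misfit(x)
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)]

print(gbest)  # close to true_pos
```

Because PSO only evaluates the objective, the same loop applies unchanged when `misfit` is a genuine lead-field residual, which is what makes it attractive for deep-source fits with rough error surfaces.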

  16. On Geomagnetism and Paleomagnetism

    NASA Technical Reports Server (NTRS)

    Voorhies, Coerte V.

    1998-01-01

    A statistical description of Earth's broad scale, core-source magnetic field has been developed and tested. The description features an expected, or mean, spatial magnetic power spectrum that is neither "flat" nor "white" at any depth, but is akin to spectra advanced by Stevenson and McLeod. This multipole spectrum describes the magnetic energy range; it is not steep enough for Gubbins' magnetic dissipation range. Natural variations of core multipole powers about their mean values are to be expected over geologic time and are described via trial probability distribution functions that neither require nor prohibit magnetic isotropy. The description is thus applicable to core-source dipole and low degree non-dipole fields despite axial dipole anisotropy. The description is combined with main field models of modern satellite and surface geomagnetic measurements to make testable predictions of: (1) the radius of Earth's core, (2) mean paleomagnetic field intensity, and (3) the mean rates and durations of both dipole power excursions and durable axial dipole reversals. The predicted core radius is 0.7% above the 3480 km seismologic value. The predicted root mean square paleointensity (35.6 μT) and mean Virtual Axial Dipole Moment (about 6.2 × 10^22 A m^2) are within the range of various mean paleointensity estimates. The predicted mean rate of dipole power excursions, as defined by an absolute dipole moment <20% of the 1980 value, is 9.04/Myr, 14% less than that obtained by analysis of a 4 Myr paleointensity record. The predicted mean rate of durable axial dipole reversals (2.26/Myr) is 2.3% more than established by the polarity time-scale for the past 84 Myr. The predicted mean duration of axial dipole reversals (5533 yr) is indistinguishable from an observational value. The accuracy of these predictions demonstrates the power and utility of the description, which is thought to merit further development and testing.
It is suggested that strong stable stratification of Earth's uppermost outer core leads to a geologically long interval of no dipole reversals and a very nearly axisymmetric field outside the core. Statistical descriptions of other planetary magnetic fields are outlined.

  17. Noise covariance incorporated MEG-MUSIC algorithm: a method for multiple-dipole estimation tolerant of the influence of background brain activity.

    PubMed

    Sekihara, K; Poeppel, D; Marantz, A; Koizumi, H; Miyashita, Y

    1997-09-01

    This paper proposes a method for localizing multiple current dipoles from spatio-temporal biomagnetic data. The method is based on the multiple signal classification (MUSIC) algorithm and is tolerant of the influence of background brain activity. In this method, the noise covariance matrix is estimated using a portion of the data that contains noise but does not contain any signal information. Then, a modified noise subspace projector is formed using the generalized eigenvectors of the noise and measured-data covariance matrices. The MUSIC localizer is calculated using this noise subspace projector and the noise covariance matrix. The results from a computer simulation have verified the effectiveness of the method. The method was then applied to source estimation for auditory-evoked fields elicited by syllable speech sounds. The results strongly suggest the method's effectiveness in removing the influence of background activity.
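    The core linear algebra can be sketched as follows. The lead fields here are random placeholders rather than a biomagnetic forward model, and the noise covariance is taken as known; the point is the generalized eigendecomposition and the resulting noise-subspace scan.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(2)
m, n_src, n_t = 16, 2, 500

L = rng.standard_normal((m, 40))        # placeholder lead fields, 40 grid points
src_idx = [5, 20]                       # hypothetical true source indices
s = rng.standard_normal((n_src, n_t))   # source time courses
data = L[:, src_idx] @ s + 0.1 * rng.standard_normal((m, n_t))

C_data = data @ data.T / n_t
C_noise = 0.01 * np.eye(m)              # noise covariance (here known exactly)

# Generalized eigenproblem C_data v = lam * C_noise v; eigenvectors for the
# m - n_src smallest eigenvalues span the (whitened) noise subspace.
lam, V = eigh(C_data, C_noise)
E_noise = V[:, : m - n_src]

# Localizer: large where a lead field is nearly orthogonal to the noise subspace.
scores = [(l @ np.linalg.solve(C_noise, l)) / (l @ E_noise @ (E_noise.T @ l))
          for l in L.T]
peaks = sorted(int(i) for i in np.argsort(scores)[-n_src:])
print(peaks)
```

Using the generalized eigenvectors is equivalent to whitening the data by the noise covariance first, which is how correlated background activity is kept from masquerading as signal subspace.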

  18. Muscle and eye movement artifact removal prior to EEG source localization.

    PubMed

    Hallez, Hans; Vergult, Anneleen; Phlypo, Ronald; Van Hese, Peter; De Clercq, Wim; D'Asseler, Yves; Van de Walle, Rik; Vanrumste, Bart; Van Paesschen, Wim; Van Huffel, Sabine; Lemahieu, Ignace

    2006-01-01

    Muscle and eye movement artifacts are very prominent in the ictal EEG of patients suffering from epilepsy, thus making the dipole localization of ictal activity very unreliable. Recently, two techniques (BSS-CCA and pSVD) were developed to remove those artifacts. The purpose of this study is to assess whether the removal of muscle and eye movement artifacts improves EEG dipole source localization. We used a total of 8 EEG fragments, each from a different patient, first unfiltered, then filtered by BSS-CCA and pSVD. In both the filtered and unfiltered EEG fragments we estimated multiple dipoles using RAP-MUSIC. The resulting dipoles were subjected to a K-means clustering algorithm to extract the most prominent cluster. We found that the removal of muscle and eye movement artifacts results in tighter and more clearly defined dipole clusters. Furthermore, we found that localization from the filtered EEG corresponded with the localization derived from the ictal SPECT in 7 of the 8 patients. Therefore, we can conclude that BSS-CCA and pSVD improve localization of ictal activity, thus making the localization more reliable for the presurgical evaluation of the patient.
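    The clustering step can be sketched with a minimal K-means (Lloyd's algorithm with random restarts) on synthetic dipole locations; coordinates and cluster sizes below are illustrative, not patient data.

```python
import numpy as np

rng = np.random.default_rng(3)
focus = rng.normal([30.0, -20.0, 45.0], 3.0, (40, 3))  # tight cluster (ictal focus), mm
scatter = rng.uniform(-70, 70, (15, 3))                # spurious/artifactual dipoles
dipoles = np.vstack([focus, scatter])

def kmeans(X, k, iters=50, restarts=10):
    """Minimal Lloyd's algorithm; keeps the restart with the lowest inertia."""
    best = None
    for _ in range(restarts):
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            labels = np.argmin(((X[:, None] - centers)**2).sum(-1), axis=1)
            centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                                else X[rng.integers(len(X))] for j in range(k)])
        inertia = ((X - centers[labels])**2).sum()
        if best is None or inertia < best[0]:
            best = (inertia, labels, centers)
    return best[1], best[2]

labels, centers = kmeans(dipoles, k=3)
main = int(np.bincount(labels).argmax())   # most populated = most prominent cluster
print(centers[main])                       # near the synthetic focus
```

Taking the most populated cluster as "most prominent" is one simple reading of the selection step; tightness (within-cluster spread) is another criterion one could apply to the same labels.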

  19. Self-calibration method without joint iteration for distributed small satellite SAR systems

    NASA Astrophysics Data System (ADS)

    Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan

    2013-12-01

    The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In this modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, by using the zero Doppler bin data, the phase error estimation can be performed independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Meanwhile, the joint iteration between gain-phase error estimation and position error estimation is not required. Therefore, the problem of suboptimal convergence that occurs in the conventional method is avoided, at low computational cost. The modified method converges faster and has lower estimation error than the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.

  20. Dipole Approximation to Predict the Resonances of Dimers Composed of Dielectric Resonators for Directional Emission: Dielectric Dimers Dipole Approximation

    DOE PAGES

    Campione, Salvatore; Warne, Larry K.; Basilio, Lorena I.

    2017-09-29

    In this paper we develop a fully-retarded, dipole approximation model to estimate the effective polarizabilities of a dimer made of dielectric resonators. They are computed from the polarizabilities of the two resonators composing the dimer. We analyze the situation of full-cubes as well as split-cubes, which have been shown to exhibit overlapping electric and magnetic resonances. We compare the effective dimer polarizabilities to ones retrieved via full-wave simulations as well as ones computed via a quasi-static, dipole approximation. We observe good agreement between the fully-retarded solution and the full-wave results, whereas the quasi-static approximation is less accurate for the problem at hand. The developed model can be used to predict the electric and magnetic resonances of a dimer under parallel or orthogonal (to the dimer axis) excitation. This is particularly helpful when interested in locating frequencies at which the dimer will emit directional radiation.
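    A quasi-static coupled-dipole sketch of the kind the paper benchmarks against (the fully-retarded model adds radiation and retardation corrections). For two identical point polarizabilities α a distance d apart, the symmetric-mode effective polarizability per resonator is α_eff = α / (1 − αG), with near-field coupling G = 2/(4πε₀d³) for excitation along the dimer axis and G = −1/(4πε₀d³) orthogonal to it. The numbers below are illustrative.

```python
import numpy as np

EPS0 = 8.8541878128e-12  # F/m, vacuum permittivity

def dimer_polarizability(alpha, d, parallel=True):
    """Quasi-static symmetric-mode polarizability of one resonator in a dimer."""
    g = (2.0 if parallel else -1.0) / (4 * np.pi * EPS0 * d**3)
    return alpha / (1 - alpha * g)

# Illustrative single-resonator polarizability, ~4*pi*eps0*a^3 for size a.
alpha = 4 * np.pi * EPS0 * (0.5e-6)**3
a_par = dimer_polarizability(alpha, d=1.5e-6, parallel=True)
a_perp = dimer_polarizability(alpha, d=1.5e-6, parallel=False)
print(a_par > alpha, a_perp < alpha)
```

The sign of G is why the two excitation geometries shift the dimer resonance in opposite directions, which is the handle the paper uses to place frequencies of directional emission.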

  1. Direct evaluation of electrical dipole moment and oxygen density ratio at high-k dielectrics/SiO2 interface by X-ray photoelectron spectroscopy analysis

    NASA Astrophysics Data System (ADS)

    Fujimura, Nobuyuki; Ohta, Akio; Ikeda, Mitsuhisa; Makihara, Katsunori; Miyazaki, Seiichi

    2018-04-01

    The electrical dipole moment at an ultrathin high-k (HfO2, Al2O3, TiO2, Y2O3, and SrO)/SiO2 interface and its correlation with the oxygen density ratio at the interface have been directly evaluated by X-ray photoelectron spectroscopy (XPS) under monochromatized Al Kα radiation. The electrical dipole moment at the high-k/SiO2 interface has been measured from the change in the cut-off energy of secondary photoelectrons. Moreover, the oxygen density ratio at the interface between high-k and SiO2 has been estimated from cation core-line signals, such as Hf 4f, Al 2p, Y 3d, Ti 2p, Sr 3d, and Si 2p. We have experimentally clarified the relationship between the measured electrical dipole moment and the oxygen density ratio at the high-k/SiO2 interface.

  2. First Year Wilkinson Microwave Anisotropy Probe(WMAP) Observations: Data Processing Methods and Systematic Errors Limits

    NASA Technical Reports Server (NTRS)

    Hinshaw, G.; Barnes, C.; Bennett, C. L.; Greason, M. R.; Halpern, M.; Hill, R. S.; Jarosik, N.; Kogut, A.; Limon, M.; Meyer, S. S.

    2003-01-01

    We describe the calibration and data processing methods used to generate full-sky maps of the cosmic microwave background (CMB) from the first year of Wilkinson Microwave Anisotropy Probe (WMAP) observations. Detailed limits on residual systematic errors are assigned based largely on analyses of the flight data supplemented, where necessary, with results from ground tests. The data are calibrated in flight using the dipole modulation of the CMB due to the observatory's motion around the Sun. This constitutes a full-beam calibration source. An iterative algorithm simultaneously fits the time-ordered data to obtain calibration parameters and pixelized sky map temperatures. The noise properties are determined by analyzing the time-ordered data with this sky signal estimate subtracted. Based on this, we apply a pre-whitening filter to the time-ordered data to remove a low level of 1/f noise. We infer and correct for a small (approx. 1%) transmission imbalance between the two sky inputs to each differential radiometer, and we subtract a small sidelobe correction from the 23 GHz (K band) map prior to further analysis. No other systematic error corrections are applied to the data. Calibration and baseline artifacts, including the response to environmental perturbations, are negligible. Systematic uncertainties are comparable to statistical uncertainties in the characterization of the beam response. Both are accounted for in the covariance matrix of the window function and are propagated to uncertainties in the final power spectrum. We characterize the combined upper limits to residual systematic uncertainties through the pixel covariance matrix.

  3. Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions

    NASA Astrophysics Data System (ADS)

    Cairncross, William B.; Gresh, Daniel N.; Grau, Matt; Cossel, Kevin C.; Roussy, Tanya S.; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A.

    2017-10-01

    We describe the first precision measurement of the electron's electric dipole moment (de) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on 180Hf19F+ in its metastable 3Δ1 electronic state, we obtain de = (0.9 ± 7.7(stat) ± 1.7(syst)) × 10^-29 e cm, resulting in an upper bound of |de| < 1.3 × 10^-28 e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |de| < 9.4 × 10^-29 e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), 10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.

  4. Low-degree Structure in Mercury's Planetary Magnetic Field

    NASA Technical Reports Server (NTRS)

    Anderson, Brian J.; Johnson, Catherine L.; Korth, Haje; Winslow, Reka M.; Borovsky, Joseph E.; Purucker, Michael E.; Slavin, James A.; Solomon, Sean C.; Zuber, Maria T.; McNutt, Ralph L. Jr.

    2012-01-01

    The structure of Mercury's internal magnetic field has been determined from analysis of orbital magnetometer measurements by the MESSENGER spacecraft. We identified the magnetic equator on 531 low-altitude and 120 high-altitude equator crossings from the zero in the radial cylindrical magnetic field component, B_ρ. The low-altitude crossings are offset 479 +/- 6 km northward, indicating an offset of the planetary dipole. The tilt of the magnetic pole relative to the planetary spin axis is less than 0.8°. The high-altitude crossings yield a northward offset of the magnetic equator of 486 +/- 74 km. A field with only nonzero dipole and octupole coefficients also matches the low-altitude observations but cannot yield off-equatorial B_ρ = 0 at radial distances greater than 3520 km. We compared offset dipole and other descriptions of the field with vector field observations below 600 km for 13 longitudinally distributed, magnetically quiet orbits. An offset dipole with southward directed moment of 190 nT R_M^3 yields root-mean-square (RMS) residuals below 14 nT, whereas a field with only dipole and octupole terms tuned to match the polar field and the low-altitude magnetic equator crossings yields RMS residuals up to 68 nT. Attributing the residuals from the offset-dipole field to axial degree 3 and 4 contributions, we estimate that the Gauss coefficient magnitudes for the additional terms are less than 4% and 7%, respectively, relative to the dipole. The axial alignment and prominent quadrupole are consistent with a non-convecting layer above a deep dynamo in Mercury's fluid outer core.

  5. Field evaluation of distance-estimation error during wetland-dependent bird surveys

    USGS Publications Warehouse

    Nadeau, Christopher P.; Conway, Courtney J.

    2012-01-01

    Context: The most common methods to estimate detection probability during avian point-count surveys involve recording a distance between the survey point and individual birds detected during the survey period. Accurately measuring or estimating distance is an important assumption of these methods; however, this assumption is rarely tested in the context of aural avian point-count surveys. Aims: We expand on recent bird-simulation studies to document the error associated with estimating distance to calling birds in a wetland ecosystem. Methods: We used two approaches to estimate the error associated with five surveyors' distance estimates between the survey point and calling birds, and to determine the factors that affect a surveyor's ability to estimate distance. Key results: We observed biased and imprecise distance estimates when estimating distance to simulated birds in a point-count scenario (mean error = -9 m, s.d. = 47 m) and when estimating distances to real birds during field trials (mean error = 39 m, s.d. = 79 m). The amount of bias and precision in distance estimates differed among surveyors; surveyors with more training and experience were less biased and more precise when estimating distance to both real and simulated birds. Three environmental factors were important in explaining the error associated with distance estimates: the measured distance from the bird to the surveyor, the volume of the call, and the species of bird. Surveyors tended to make large overestimations to birds close to the survey point, which is an especially serious error in distance sampling. 
Implications: The present study has demonstrated how relatively simple field trials can be used to estimate the error associated with distance estimates used to estimate detection probability during avian point-count surveys. Evaluating distance-estimation errors will allow investigators to better evaluate the accuracy of avian density and trend estimates. Moreover, investigators who evaluate distance-estimation errors could employ recently developed models to incorporate distance-estimation error into analyses. We encourage further development of such models, including the inclusion of such models into distance-analysis software.
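    The two summary statistics reported above (bias and precision of the distance errors) are straightforward to compute; the sketch below uses synthetic draws whose error moments are chosen to resemble the field trials, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(4)
true_dist = rng.uniform(20, 200, 300)            # m, simulated calling birds
estimated = true_dist + rng.normal(39, 79, 300)  # errors shaped like the field trials

errors = estimated - true_dist
bias = errors.mean()                             # mean error
precision = errors.std(ddof=1)                   # s.d. of errors
print(f"bias = {bias:.0f} m, s.d. = {precision:.0f} m")
```

In a real trial the same computation would be run per surveyor and per covariate (distance band, call volume, species) to reproduce the factor analysis described above.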

  6. Rotation Detection Using the Precession of Molecular Electric Dipole Moment

    NASA Astrophysics Data System (ADS)

    Ke, Yi; Deng, Xiao-Bing; Hu, Zhong-Kun

    2017-11-01

    We present a method to detect rotation by using the precession of the molecular electric dipole moment in a static electric field. The molecular electric dipole moments are polarized under the static electric field and a nonzero electric polarization vector emerges in the molecular gas. A resonant radio-frequency pulse electric field is applied to realize a 90° flip of the electric polarization vector of a particular rotational state. After the pulse electric field, the electric polarization vector precesses under the static electric field. The rotation induces a shift in the precession frequency, which is measured to deduce the angular velocity of the rotation. The fundamental sensitivity limit of this method is estimated. This work is only a proposal and does not involve experimental results.

  7. Optimal spacecraft formation establishment and reconfiguration propelled by the geomagnetic Lorentz force

    NASA Astrophysics Data System (ADS)

    Huang, Xu; Yan, Ye; Zhou, Yang

    2014-12-01

    The Lorentz force acting on an electrostatically charged spacecraft as it moves through the planetary magnetic field could be utilized as propellantless electromagnetic propulsion for orbital maneuvering, such as spacecraft formation establishment and formation reconfiguration. By assuming that the Earth's magnetic field could be modeled as a tilted dipole located at the center of Earth that corotates with Earth, a dynamical model that describes the relative orbital motion of Lorentz spacecraft is developed. Based on the proposed dynamical model, the energy-optimal open-loop trajectories of control inputs, namely, the required specific charges of Lorentz spacecraft, for Lorentz-propelled spacecraft formation establishment or reconfiguration problems with both fixed and free final conditions constraints are derived via Gauss pseudospectral method. The effect of the magnetic dipole tilt angle on the optimal control inputs and the relative transfer trajectories for formation establishment or reconfiguration is also investigated by comparisons with the results derived from a nontilted dipole model. Furthermore, a closed-loop integral sliding mode controller is designed to guarantee the trajectory tracking in the presence of external disturbances and modeling errors. The stability of the closed-loop system is proved by a Lyapunov-based approach. Numerical simulations are presented to verify the validity of the proposed open-loop control methods and demonstrate the performance of the closed-loop controller. Also, the results indicate the dipole tilt angle should be considered when designing control strategies for Lorentz-propelled spacecraft formation establishment or reconfiguration.
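    A minimal sketch of the underlying force model: the propellantless acceleration is a = (q/m) v_rel × B, evaluated here for a non-tilted, centered dipole field with an illustrative equatorial orbit and specific charge (the paper's model adds the dipole tilt and co-rotation of the field).

```python
import numpy as np

B0_M3 = 8.0e15         # T*m^3, approx. (mu0/4pi) * Earth's dipole moment

def dipole_B(r_vec):
    """Field (T) of a centered axial dipole with southward moment at r_vec (m)."""
    r = np.linalg.norm(r_vec)
    m_hat = np.array([0.0, 0.0, -1.0])
    r_hat = r_vec / r
    return B0_M3 / r**3 * (3 * np.dot(m_hat, r_hat) * r_hat - m_hat)

def lorentz_accel(q_over_m, v_rel, r_vec):
    """Specific Lorentz force a = (q/m) * v_rel x B."""
    return q_over_m * np.cross(v_rel, dipole_B(r_vec))

r = np.array([7.0e6, 0.0, 0.0])      # ~600 km altitude, equatorial position
v = np.array([0.0, 7.5e3, 0.0])      # orbital velocity relative to the field (m/s)
a = lorentz_accel(0.01, v, r)        # illustrative specific charge, 0.01 C/kg
print(a)                             # ~mm/s^2-level radial acceleration
```

Even at an ambitious specific charge of 0.01 C/kg the acceleration is at the mm/s² level, which is why the optimal-control formulation (shaping the charge history over many orbits) matters for formation maneuvers.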

  8. Thermal noise calculation method for precise estimation of the signal-to-noise ratio of ultra-low-field MRI with an atomic magnetometer.

    PubMed

    Yamashita, Tatsuya; Oida, Takenori; Hamada, Shoji; Kobayashi, Tetsuo

    2012-02-01

    In recent years, there has been considerable interest in developing an ultra-low-field magnetic resonance imaging (ULF-MRI) system using an optically pumped atomic magnetometer (OPAM). However, a precise estimation of the signal-to-noise ratio (SNR) of ULF-MRI has not been carried out. Conventionally, to calculate the SNR of an MR image, thermal noise, also called Nyquist noise, has been estimated by considering a resistor that is electrically equivalent to a biological-conductive sample and is connected in series to a pickup coil. However, this method has major limitations in that the receiver has to be a coil and that it cannot be applied directly to a system using OPAM. In this paper, we propose a method to estimate the thermal noise of an MRI system using OPAM. We calculate the thermal noise from the variance of the magnetic sensor output produced by current-dipole moments that simulate thermally fluctuating current sources in a biological sample. We assume that the random magnitude of the current dipole in each volume element of the biological sample is described by the Maxwell-Boltzmann distribution. The sensor output produced by each current-dipole moment is calculated either by an analytical formula or a numerical method based on the boundary element method. We validate the proposed method by comparing our results with those obtained by conventional methods that consider resistors connected in series to a pickup coil using single-layered sphere, multi-layered sphere, and realistic head models. Finally, we apply the proposed method to the ULF-MRI model using OPAM as the receiver with multi-layered sphere and realistic head models and estimate their SNR. Copyright © 2011 Elsevier Inc. All rights reserved.

  9. Bias in the Wagner-Nelson estimate of the fraction of drug absorbed.

    PubMed

    Wang, Yibin; Nedelman, Jerry

    2002-04-01

    To examine and quantify bias in the Wagner-Nelson estimate of the fraction of drug absorbed resulting from the estimation error of the elimination rate constant (k), measurement error of the drug concentration, and the truncation error in the area under the curve (AUC). Bias in the Wagner-Nelson estimate was derived as a function of post-dosing time (t), k, the ratio of the absorption rate constant to k (r), and the coefficient of variation for estimates of k (CVk) or for the observed concentration (CVc), by assuming a one-compartment model and using an independent estimate of k. The derived functions were used for evaluating the bias with r = 0.5, 3, or 6; k = 0.1 or 0.2; CVk = 0.2 or 0.4; and CVc = 0.2 or 0.4; for t = 0 to 30 or 60. Estimation error of k resulted in an upward bias in the Wagner-Nelson estimate that could lead to the estimate of the fraction absorbed being greater than unity. The bias resulting from the estimation error of k inflates the fraction-absorbed-vs.-time profiles mainly in the early post-dosing period. The magnitude of the bias in the Wagner-Nelson estimate resulting from estimation error of k was mainly determined by CVk. The bias in the Wagner-Nelson estimate resulting from estimation error in k can be dramatically reduced by use of the mean of several independent estimates of k, as in studies for development of an in vivo-in vitro correlation. The truncation error in the area under the curve can introduce a negative bias in the Wagner-Nelson estimate. This can partially offset the bias resulting from estimation error of k in the early post-dosing period. Measurement error of concentration does not introduce bias in the Wagner-Nelson estimate. Estimation error of k results in an upward bias in the Wagner-Nelson estimate, mainly in the early drug absorption phase. The truncation error in AUC can result in a downward bias, which may partially offset the upward bias due to estimation error of k in the early absorption phase. 
Measurement error of concentration does not introduce bias. The joint effect of estimation error of k and truncation error in AUC can result in a non-monotonic fraction-of-drug-absorbed-vs-time profile. However, only estimation error of k can lead to the Wagner-Nelson estimate of fraction of drug absorbed greater than unity.
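
    The upward bias from estimation error in k is a Jensen-type effect: the Wagner-Nelson fraction F(t) = [C(t) + k·AUC(0..t)] / (k·AUC(0..inf)) is convex in the k estimate, so symmetric errors in k inflate the average F early in absorption. A minimal sketch with hypothetical one-compartment (Bateman) parameters, not the paper's simulation settings:

```python
import math

def conc(t, ka=0.5, k=0.1, dose=10.0):
    # Hypothetical one-compartment oral model (Bateman function).
    return dose * (math.exp(-k * t) - math.exp(-ka * t))

def wagner_nelson(times, concs, k):
    # F(t) = [C(t) + k*AUC(0..t)] / (k*AUC(0..inf)), trapezoidal AUC,
    # with the terminal tail extrapolated as C_last / k.
    auc, aucs = 0.0, []
    for i, t in enumerate(times):
        if i:
            auc += 0.5 * (concs[i] + concs[i - 1]) * (t - times[i - 1])
        aucs.append(auc)
    auc_inf = aucs[-1] + concs[-1] / k
    return [(c + k * a) / (k * auc_inf) for c, a in zip(concs, aucs)]

times = [0.5 * i for i in range(121)]          # t = 0 .. 60
concs = [conc(t) for t in times]
f_true = wagner_nelson(times, concs, k=0.1)    # correct k
f_low = wagner_nelson(times, concs, k=0.06)    # k under-estimated by 40%
f_high = wagner_nelson(times, concs, k=0.14)   # k over-estimated by 40%
# Early on (t = 2), averaging over symmetric errors in k inflates the
# estimated fraction absorbed (Jensen's inequality); late values approach 1.
```

    With these numbers the under-estimated-k profile even exceeds unity at t = 2, mirroring the abstract's observation that the estimate of the fraction absorbed can be greater than one.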

  10. Application of database methods to the prediction of B3LYP-optimized polyhedral water cluster geometries and electronic energies

    NASA Astrophysics Data System (ADS)

    Anick, David J.

    2003-12-01

    A method is described for a rapid prediction of B3LYP-optimized geometries for polyhedral water clusters (PWCs). Starting with a database of 121 B3LYP-optimized PWCs containing 2277 H-bonds, linear regressions yield formulas correlating O-O distances, O-O-O angles, and H-O-H orientation parameters, with local and global cluster descriptors. The formulas predict O-O distances with a rms error of 0.85 pm to 1.29 pm and predict O-O-O angles with a rms error of 0.6° to 2.2°. An algorithm is given which uses the O-O and O-O-O formulas to determine coordinates for the oxygen nuclei of a PWC. The H-O-H formulas then determine positions for two H's at each O. For 15 test clusters, the gap between the electronic energy of the predicted geometry and the true B3LYP optimum ranges from 0.11 to 0.54 kcal/mol or 4 to 18 cal/mol per H-bond. Linear regression also identifies 14 parameters that strongly correlate with PWC electronic energy. These descriptors include the number of H-bonds in which both oxygens carry a non-H-bonding H, the number of quadrilateral faces, the number of symmetric angles in 5- and in 6-sided faces, and the square of the cluster's estimated dipole moment.

  11. Jacobian-Based Iterative Method for Magnetic Localization in Robotic Capsule Endoscopy

    PubMed Central

    Di Natali, Christian; Beccani, Marco; Simaan, Nabil; Valdastri, Pietro

    2016-01-01

    The purpose of this study is to validate a Jacobian-based iterative method for real-time localization of magnetically controlled endoscopic capsules. The proposed approach applies finite-element solutions to the magnetic field problem and least-squares interpolations to obtain closed-form and fast estimates of the magnetic field. By defining a closed-form expression for the Jacobian of the magnetic field relative to changes in the capsule pose, we are able to obtain an iterative localization at a faster computational time when compared with prior works, without suffering from the inaccuracies stemming from dipole assumptions. This new algorithm can be used in conjunction with an absolute localization technique that provides initialization values at a slower refresh rate. The proposed approach was assessed via simulation and experimental trials, adopting a wireless capsule equipped with a permanent magnet, six magnetic field sensors, and an inertial measurement unit. The overall refresh rate, including sensor data acquisition and wireless communication, was 7 ms, thus enabling closed-loop control strategies for magnetic manipulation running faster than 100 Hz. The average localization error, expressed in cylindrical coordinates, was below 7 mm in both the radial and axial components and 5° in the azimuthal component. The average error for the capsule orientation angles, obtained by fusing gyroscope and inclinometer measurements, was below 5°. PMID:27087799

  12. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). 
Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
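
    The inclusion rule and bias-error definition can be sketched in a few lines. The numbers below are illustrative, not GPCP data, and taking m as the mean of the included products is an assumption of this sketch:

```python
import statistics

def bias_error(base, products):
    """Bias-error sketch following the GPCP procedure: keep input products
    within +/-50% of the base (GPCP) estimate, take their standard deviation
    s as the estimated bias error, and report the relative error s/m."""
    included = [p for p in products if 0.5 * base <= p <= 1.5 * base]
    s = statistics.stdev(included)
    m = statistics.mean(included)
    return included, s, s / m

# Hypothetical zonal-mean precipitation estimates (mm/day): the 6.5 product
# falls outside +/-50% of the 3.2 base and is excluded.
included, s, rel = bias_error(3.2, [3.0, 3.4, 2.8, 6.5])
```

    Applied grid cell by grid cell, this spread map is what locates the regions of higher and lower confidence discussed above.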

  13. Studying Room Acoustics using a Monopole-Dipole Microphone Array

    NASA Technical Reports Server (NTRS)

    Begault, Durand R.; Abel, Jonathan S.; Gills, Stephen R. (Technical Monitor)

    1997-01-01

    The use of a soundfield microphone for examining the directional nature of a room impulse response was reported recently. By cross-correlating monopole and co-located dipole microphone signals aligned with left-right, up-down, and front-back axes, a sense of signal direction of arrival is revealed. The current study is concerned with the array's ability to detect individual reflections and directions of arrival, as a function of the cross-correlation window duration. If the window is too long, weak reflections are overlooked; if too short, spurious detections result. Guidelines are presented for setting the window width according to perceptual criteria. Formulas are presented describing the accuracy with which direction of arrival can be estimated as a function of room specifics and measurement noise. The direction of arrival of early reflections is more accurately determined than that of later reflections, which are quieter and more numerous. The transition from a fairly directional sound field at the beginning of the room impulse response to a non-directional diffuse field is examined. Finally, it is shown that measurements from additional dipole orientations can significantly improve the ability to detect reflections and estimate their directions of arrival.
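
    The monopole-dipole cross-correlation idea can be sketched for a single synthetic reflection (a simplified 2-D version, not the paper's formulas): a dipole aligned with an axis weights the arriving signal by the direction cosine along that axis, so its zero-lag cross-correlation with the monopole signal, normalized by the monopole energy within the window, recovers that cosine.

```python
import math

def doa_estimate(mono, dip_x, dip_y):
    # Zero-lag cross-correlations, normalized by the monopole energy, give
    # the direction cosines; atan2 turns them into an azimuth estimate.
    energy = sum(s * s for s in mono)
    cx = sum(a * b for a, b in zip(mono, dip_x)) / energy
    cy = sum(a * b for a, b in zip(mono, dip_y)) / energy
    return math.degrees(math.atan2(cy, cx))

# Synthetic reflection arriving from 30 degrees azimuth: the x/y dipole
# outputs are the monopole signal weighted by cos/sin of the true angle.
theta = math.radians(30.0)
mono = [math.exp(-0.5 * ((n - 20) / 3.0) ** 2) for n in range(64)]  # pulse
dip_x = [math.cos(theta) * s for s in mono]
dip_y = [math.sin(theta) * s for s in mono]
azimuth = doa_estimate(mono, dip_x, dip_y)
```

    In a real impulse response the window duration matters because several reflections and noise overlap within it, which is exactly the trade-off the study quantifies.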

  14. Geomagnetic cutoffs: A review for space dosimetry applications

    NASA Astrophysics Data System (ADS)

    Smart, D. F.; Shea, M. A.

    1994-10-01

    The earth's magnetic field acts as a shield against charged particle radiation from interplanetary space, technically described as the geomagnetic cutoff. The cutoff rigidity problem (except for the dipole special case) has 'no solution in closed form'. The dipole case yields the Stormer equation, which has been repeatedly applied to the earth in hopes of providing useful approximations of cutoff rigidities. Unfortunately the earth's magnetic field has significant deviations from dipole geometry, and the Stormer cutoffs are not adequate for most applications. By application of massive digital computer power it is possible to determine realistic geomagnetic cutoffs derived from high order simulation of the geomagnetic field. Using this technique, 'world-grids' of directional cutoffs for the earth's surface and for a limited number of satellite altitudes have been derived. However, this approach is so expensive and time-consuming that it is impractical for most spacecraft orbits, and approximations must be used. The world grids of cutoff rigidities are extensively used as lookup tables, normalization points and interpolation aids to estimate the effective geomagnetic cutoff rigidity of a specific location in space. We review the various options for estimating the cutoff rigidity for earth-orbiting satellites.
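
    For reference, the dipole (Stormer) vertical cutoff that the review notes is inadequate for realistic work fits in one line. The sketch below uses the commonly quoted Stormer constant of about 14.9 GV for Earth; treat that constant as approximate, since it tracks the epoch-dependent dipole moment.

```python
import math

def stormer_vertical_cutoff(lat_deg, r=1.0, c_st=14.9):
    """Vertical Stormer cutoff rigidity in GV for a centered dipole:
    Rc = c_st * cos^4(geomagnetic latitude) / r^2, with r in Earth radii."""
    lam = math.radians(lat_deg)
    return c_st * math.cos(lam) ** 4 / (r * r)
```

    At the geomagnetic equator on the surface this gives about 14.9 GV, falling to roughly 0.9 GV at 60° latitude and essentially zero at the poles; doubling the geocentric distance reduces the cutoff by a factor of four. The world-grid lookup tables exist precisely because the real field departs from this cos^4 law.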

  15. High-Accuracy Analysis of Compton Scattering in Chiral EFT: Proton and Neutron Polarisabilities

    NASA Astrophysics Data System (ADS)

    Griesshammer, Harald W.; Phillips, Daniel R.; McGovern, Judith A.

    2013-10-01

    Compton scattering from protons and neutrons provides important insight into the structure of the nucleon. A new extraction of the static electric and magnetic dipole polarisabilities αE1 and βM1 of the proton and neutron from all published elastic data below 300 MeV in Chiral Effective Field Theory shows that within the statistics-dominated errors, the proton and neutron polarisabilities are identical, i.e. no iso-spin breaking effects of the pion cloud are seen. Particular attention is paid to the precision and accuracy of each data set, and to an estimate of residual theoretical uncertainties. ChiEFT is ideal for that purpose since it provides a model-independent estimate of higher-order corrections and encodes the correct low-energy dynamics of QCD, including, for few-nucleon systems used to extract neutron polarisabilities, consistent nuclear currents, rescattering effects and wave functions. It therefore automatically respects the low-energy theorems for photon-nucleus scattering. The Δ(1232) as an active degree of freedom is essential to realise the full power of the world's Compton data. Its parameters are constrained in the resonance region. A brief outlook is provided on what kind of future experiments can improve the database. Supported in part by UK STFC, DOE, NSF, and the Sino-German CRC 110.

  16. Gas-phase spectroscopy of synephrine by laser desorption supersonic jet technique.

    PubMed

    Ishiuchi, Shun-ichi; Asakawa, Toshiro; Mitsuda, Haruhiko; Miyazaki, Mitsuhiko; Chakraborty, Shamik; Fujii, Masaaki

    2011-09-22

    In our previous work, we found that synephrine has six conformers in the gas phase, while adrenaline, which is a catecholamine and has the same side chain as synephrine, has been reported to have only two conformers. To determine the conformational geometries of synephrine, we measured resonance enhanced multiphoton ionization, ultraviolet-ultraviolet hole burning, and infrared dip spectra by utilizing the laser desorption supersonic jet technique. By comparing the observed infrared spectra with theoretical ones, we assigned geometries except for the orientations of the phenolic OH group. Comparison between the determined structures of synephrine and those of 2-methylamino-1-phenylethanol, which has the same side chain as synephrine but no phenol OH group, leads to the conclusion that the phenolic OH group in synephrine does not affect the conformational flexibility of the side chain. In the case of adrenaline, which is expected to have 12 conformers if there are no interactions between the catecholic OH groups and the side chain, some interactions possibly exist between them because only two conformations are observed. By estimation of the dipole-dipole interaction energy between partial dipole moments of the catecholic OH groups and the side chain, it was concluded that the dipole-dipole interaction stabilizes specific conformers which are actually observed. © 2011 American Chemical Society
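
    The kind of estimate mentioned, the interaction energy between two partial point dipoles, follows the standard formula E = [p1·p2 − 3(p1·r̂)(p2·r̂)] / (4πε0 r³). A sketch with illustrative values (not the paper's partial moments or geometry):

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
DEBYE = 3.33564e-30      # 1 debye in C*m

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def dipole_dipole_energy(p1, p2, r_vec):
    # E = [p1.p2 - 3(p1.rhat)(p2.rhat)] / (4*pi*eps0*r^3), in joules.
    r = math.sqrt(dot(r_vec, r_vec))
    rhat = [c / r for c in r_vec]
    return (dot(p1, p2) - 3.0 * dot(p1, rhat) * dot(p2, rhat)) / \
           (4.0 * math.pi * EPS0 * r ** 3)

p = [0.0, 0.0, 1.5 * DEBYE]  # two parallel 1.5 D dipoles
side = dipole_dipole_energy(p, p, [3e-10, 0.0, 0.0])  # side by side
head = dipole_dipole_energy(p, p, [0.0, 0.0, 3e-10])  # head to tail
```

    Side-by-side parallel dipoles repel (E > 0) while the head-to-tail arrangement attracts with twice the magnitude; at 3 Å separation and 1.5 D the energies are of order a few kJ/mol, the scale relevant for stabilizing or destabilizing individual conformers.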

  17. Dynamics of elastic interactions in soft and biological matter.

    PubMed

    Yuval, Janni; Safran, Samuel A

    2013-04-01

    Cells probe their mechanical environment and can change the organization of their cytoskeletons when the elastic and viscous properties of their environment are modified. We use a model in which the forces exerted by small, contractile acto-myosin filaments (e.g., nascent stress fibers in stem cells) on the extracellular matrix are modeled as local force dipoles. In some cases, the strain field caused by these force dipoles propagates quickly enough so that only static elastic interactions need be considered. On the other hand, in the case of significant energy dissipation, strain propagation is slower and may be eliminated completely by the relaxation of the cellular cytoskeleton (e.g., by cross-link dissociation). Here, we consider several dissipative mechanisms that affect the propagation of the strain field in adhered cells and consider these effects on the interaction between force dipoles and their resulting mutual orientations. This is a first step in understanding the development of orientational (nematic) or layering (smectic) order in the cytoskeleton. We use the theory to estimate the propagation time of the strain fields over a cellular distance for different mechanisms and find that in some cases it can be of the order of seconds, thus competing with the cytoskeletal relaxation time. Furthermore, for a simple system of two force dipoles, we predict that in some cases the orientation of force dipoles might change significantly with time, e.g., for short times the dipoles exhibit parallel alignment while for later times they align perpendicularly.

  18. Towards a unified description of total and diffractive structure functions at DESY HERA in the QCD dipole picture

    NASA Astrophysics Data System (ADS)

    Bialas, A.; Peschanski, R.; Royon, Ch.

    1998-06-01

    It is argued that the QCD dipole picture allows us to build a unified theoretical description, based on Balitskii-Fadin-Kuraev-Lipatov dynamics, of the total and diffractive nucleon structure functions. This description is in qualitative agreement with the present collection of data obtained by the H1 Collaboration. More precise theoretical estimates, in particular the determination of the normalizations and proton transverse momentum behavior of the diffractive components, are shown to be required in order to reach definite conclusions.

  19. Axial, scalar, and tensor charges of the nucleon from 2 + 1 + 1 -flavor lattice QCD

    DOE PAGES

    Bhattacharya, Tanmoy; Cirigliano, Vincenzo; Cohen, Saul D.; ...

    2016-09-19

    Here, we present results for the isovector axial, scalar, and tensor charges g_A^(u-d), g_S^(u-d), and g_T^(u-d) of the nucleon needed to probe the Standard Model and novel physics. The axial charge is a fundamental parameter describing the weak interactions of nucleons. The scalar and tensor charges probe novel interactions at the TeV scale in neutron and nuclear β-decays, and the flavor-diagonal tensor charges g_T^u, g_T^d, and g_T^s are needed to quantify the contribution of the quark electric dipole moment (EDM) to the neutron EDM. The lattice-QCD calculations were done using nine ensembles of gauge configurations generated by the MILC Collaboration using the highly improved staggered quarks action with 2+1+1 dynamical flavors. These ensembles span three lattice spacings a ≈ 0.06, 0.09, and 0.12 fm and light-quark masses corresponding to the pion masses M_π ≈ 135, 225, and 315 MeV. High-statistics estimates on five ensembles using the all-mode-averaging method allow us to quantify all systematic uncertainties and perform a simultaneous extrapolation in the lattice spacing, lattice volume, and light-quark masses for the connected contributions. Our final estimates, in the MS-bar scheme at 2 GeV, of the isovector charges are g_A^(u-d) = 1.195(33)(20), g_S^(u-d) = 0.97(12)(6), and g_T^(u-d) = 0.987(51)(20). The first error includes statistical and all systematic uncertainties except that due to the extrapolation Ansatz, which is given by the second error estimate. Combining our estimate for g_S^(u-d) with the difference of light-quark masses (m_d − m_u)^QCD = 2.67(35) MeV given by the Flavour Lattice Averaging Group, we obtain (M_N − M_P)^QCD = 2.59(49) MeV. Estimates of the connected part of the flavor-diagonal tensor charges of the proton are g_T^u = 0.792(42) and g_T^d = −0.194(14). Combining our new estimates with precision low-energy experiments, we present updated constraints on novel scalar and tensor interactions, ε_S,T, at the TeV scale.

  20. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies

    PubMed Central

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-01-01

    Abstract Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476

  1. A map of the cosmic background radiation at 3 millimeters

    NASA Technical Reports Server (NTRS)

    Lubin, P.; Villela, T.; Epstein, G.; Smoot, G.

    1985-01-01

    Data from a series of balloon flights covering both the Northern and Southern Hemispheres, measuring the large angular scale anisotropy in the cosmic background radiation at 3.3 mm wavelength, are presented. The data cover 85 percent of the sky to a limiting sensitivity of 0.7 mK per 7 deg field of view. The data show a 50-sigma (statistical error only) dipole anisotropy with an amplitude of 3.44 ± 0.17 mK and a direction of α = 11.2 ± 0.1 h and δ = −6.0° ± 1.5°. A 90 percent confidence level upper limit of 0.00007 is obtained for the rms quadrupole amplitude. Flights separated by 6 months show the motion of earth around the sun. Galactic contamination is very small, with less than a 0.1 mK contribution to the dipole and quadrupole terms. A map of the sky has been generated from the data.

  2. Precision Measurement of the Electron's Electric Dipole Moment Using Trapped Molecular Ions.

    PubMed

    Cairncross, William B; Gresh, Daniel N; Grau, Matt; Cossel, Kevin C; Roussy, Tanya S; Ni, Yiqi; Zhou, Yan; Ye, Jun; Cornell, Eric A

    2017-10-13

    We describe the first precision measurement of the electron's electric dipole moment (d_e) using trapped molecular ions, demonstrating the application of spin interrogation times over 700 ms to achieve high sensitivity and stringent rejection of systematic errors. Through electron spin resonance spectroscopy on ^180Hf^19F^+ in its metastable ^3Δ_1 electronic state, we obtain d_e = (0.9 ± 7.7_stat ± 1.7_syst) × 10^-29 e cm, resulting in an upper bound of |d_e| < 1.3 × 10^-28 e cm (90% confidence). Our result provides independent confirmation of the current upper bound of |d_e| < 9.4 × 10^-29 e cm [J. Baron et al., New J. Phys. 19, 073029 (2017), doi:10.1088/1367-2630/aa708e], and offers the potential to improve on this limit in the near future.

  3. How well do static electronic dipole polarizabilities from gas-phase experiments compare with density functional and MP2 computations?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thakkar, Ajit J., E-mail: ajit@unb.ca; Wu, Taozhe

    2015-10-14

    Static electronic dipole polarizabilities for 135 molecules are calculated using second-order Møller-Plesset perturbation theory and six density functionals recently recommended for polarizabilities. Comparison is made with the best gas-phase experimental data. The lowest mean absolute percent deviations from the best experimental values for all 135 molecules are 3.03% and 3.08% for the LC-τHCTH and M11 functionals, respectively. Excluding the eight extreme outliers for which the experimental values are almost certainly in error, the mean absolute percent deviation for the remaining 127 molecules drops to 2.42% and 2.48% for the LC-τHCTH and M11 functionals, respectively. Detailed comparison enables us to identify 32 molecules for which the discrepancy between the calculated and experimental values warrants further investigation.

  4. Study of top quark dipole interactions in tt̄ production associated with two heavy gauge bosons at the LHC

    NASA Astrophysics Data System (ADS)

    Etesami, Seyed Mohsen; Khatibi, Sara; Mohammadi Najafabadi, Mojtaba

    2018-04-01

    In this paper, we investigate the prospects of measuring the strong and weak dipole moments of the top quark at the Large Hadron Collider (LHC). Measurements of these couplings provide an excellent opportunity to probe new physics interactions as they have quite small magnitudes in the standard model. Our analyses are performed using the production cross sections of the tt̄WW and tt̄ZZ processes in the same sign dilepton and four-lepton final states, respectively. The sensitivities to strong and weak top quark dipole interactions at the 95% confidence level for various integrated luminosity scenarios are derived and compared with other studies. To estimate the constraints, the main sources of background and a realistic simulation of the detector response are considered.

  5. Structural properties of glucose-dimethylsulfoxide solutions probed by Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Paolantoni, Marco; Gallina, Maria Elena; Sassi, Paola; Morresi, Assunta

    2009-04-01

    Raman spectroscopy was employed to achieve a molecular level description of solvation properties in glucose-dimethylsulfoxide (DMSO) solutions. The analysis of Raman spectra confirms the importance of the dipole-dipole interaction in determining structural properties of pure DMSO; the overall intermolecular structure is maintained in the whole 20-75 °C temperature range investigated. The blueshift of the CH stretching modes observed at higher temperatures points out that CH3⋯O contacts contribute to the cohesive energy of the DMSO liquid system. The addition of glucose perturbs the intermolecular ordering of DMSO owing to the formation of stable solute-solvent hydrogen bonds. The average number of OH⋯OS contacts (3.2±0.3) and their corresponding energy (~20 kJ/mol) were estimated. In addition, the concentration dependence of the CH stretching bands and the behavior of the noncoincidence effect on the SO band suggest that the dipole-dipole and CH3⋯O interactions among DMSO molecules are disfavored within the glucose solvation layer. These findings improve our understanding of the microscopic origin of the solvent properties of DMSO toward more complex biomolecular systems.

  6. Channel branching ratios in CH2CN- photodetachment: Rotational structure and vibrational energy redistribution in autodetachment

    NASA Astrophysics Data System (ADS)

    Lyle, Justin; Wedig, Olivia; Gulania, Sahil; Krylov, Anna I.; Mabbs, Richard

    2017-12-01

    We report photoelectron spectra of CH2CN-, recorded at photon energies between 13 460 and 15 384 cm-1, which show rapid intensity variations in particular detachment channels. The branching ratios for various spectral features reveal rotational structure associated with autodetachment from an intermediate anion state. Calculations using equation-of-motion coupled-cluster method with single and double excitations reveal the presence of two dipole-bound excited anion states (a singlet and a triplet). The computed oscillator strength for the transition to the singlet dipole-bound state provides an estimate of the autodetachment channel contribution to the total photoelectron yield. Analysis of the different spectral features allows identification of the dipole-bound and neutral vibrational levels involved in the autodetachment processes. For the most part, the autodetachment channels are consistent with the vibrational propensity rule and normal mode expectation. However, examination of the rotational structure shows that autodetachment from the ν3 (v = 1 and v = 2) levels of the dipole-bound state displays behavior counter to the normal mode expectation with the final state vibrational level belonging to a different mode.

  7. EEG minimum-norm estimation compared with MEG dipole fitting in the localization of somatosensory sources at S1.

    PubMed

    Komssi, S; Huttunen, J; Aronen, H J; Ilmoniemi, R J

    2004-03-01

    Dipole models, which are frequently used in attempts to solve the electromagnetic inverse problem, require explicit a priori assumptions about the cerebral current sources. This is not the case for solutions based on minimum-norm estimates. In the present study, we evaluated the spatial accuracy of the L2 minimum-norm estimate (MNE) in realistic noise conditions by assessing its ability to localize sources of evoked responses at the primary somatosensory cortex (SI). Multichannel somatosensory evoked potentials (SEPs) and magnetic fields (SEFs) were recorded in 5 subjects while stimulating the median and ulnar nerves at the left wrist. A Tikhonov-regularized L2-MNE, constructed on a spherical surface from the SEP signals, was compared with an equivalent current dipole (ECD) solution obtained from the SEFs. Primarily tangential current sources accounted for both SEP and SEF distributions at around 20 ms (N20/N20m) and 70 ms (P70/P70m); these deflections were chosen for comparative analysis. The distances between the locations of the maximum current densities obtained from MNE and the locations of ECDs were on average 12-13 mm for both deflections and nerves stimulated. In accordance with the somatotopical order of SI, both the MNE and ECD tended to localize median nerve activation more laterally than ulnar nerve activation for the N20/N20m deflection. Simulation experiments further indicated that, with a proper estimate of the source depth and with a good fit of the head model, the MNE can reach a mean accuracy of 5 mm in 0.2-microV root-mean-square noise. When compared with previously reported localizations based on dipole modelling of SEPs, it appears that equally accurate localization of SI can be obtained with the MNE. MNE can be used to verify parametric source modelling results. 
Having a relatively good localization accuracy and requiring minimal assumptions, the MNE may be useful for the localization of poorly known activity distributions and for tracking activity changes between brain areas as a function of time.
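
    A minimal sketch of the Tikhonov-regularized L2-MNE itself (a generic implementation, not the authors' spherical-surface setup): for lead field L, data y, and regularization parameter λ, the estimate is x̂ = Lᵀ(LLᵀ + λI)⁻¹y, the smallest-norm current distribution consistent with the data.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def mne(L, y, lam):
    # x_hat = L^T (L L^T + lam*I)^{-1} y  -- Tikhonov-regularized L2-MNE.
    m, n = len(L), len(L[0])
    G = [[sum(L[i][k] * L[j][k] for k in range(n)) + (lam if i == j else 0.0)
          for j in range(m)] for i in range(m)]
    w = solve(G, y)
    return [sum(L[i][j] * w[i] for i in range(m)) for j in range(n)]

# Toy lead field: 2 sensors, 3 sources. The data y = L x for x = [2, 1, 1],
# which is already the minimum-norm solution, so MNE recovers it.
L = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 1.0]]
x_hat = mne(L, [2.0, 2.0], lam=1e-9)
```

    The regularization λ is what trades noise sensitivity against spatial blurring in the realistic-noise conditions the study evaluates; no explicit dipole assumption enters, only the lead field.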

  8. A function space approach to smoothing with applications to model error estimation for flexible spacecraft control

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1981-01-01

    A function space approach to smoothing is used to obtain a set of model error estimates inherent in a reduced-order model. By establishing knowledge of inevitable deficiencies in the truncated model, the error estimates provide a foundation for updating the model and thereby improving system performance. The function space smoothing solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for spacecraft attitude control.

  9. Model error estimation for distributed systems described by elliptic equations

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1983-01-01

    A function space approach is used to develop a theory for estimation of the errors inherent in an elliptic partial differential equation model for a distributed parameter system. By establishing knowledge of the inevitable deficiencies in the model, the error estimates provide a foundation for updating the model. The function space solution leads to a specification of a method for computation of the model error estimates and development of model error analysis techniques for comparison between actual and estimated errors. The paper summarizes the model error estimation approach as well as an application arising in the area of modeling for static shape determination of large flexible systems.

  10. Effects of refractive errors on visual evoked magnetic fields.

    PubMed

    Suzuki, Masaya; Nagae, Mizuki; Nagata, Yuko; Kumagai, Naoya; Inui, Koji; Kakigi, Ryusuke

    2015-11-09

    The latency and amplitude of visual evoked cortical responses are known to be affected by refractive states, suggesting that they may be used as an objective index of refractive errors. In order to establish an easy and reliable method for this purpose, we herein examined the effects of refractive errors on visual evoked magnetic fields (VEFs). Binocular VEFs following the presentation of a simple grating of 0.16 cd/m^2 in the lower visual field were recorded in 12 healthy volunteers and compared among four refractive states: 0D, +1D, +2D, and +4D, by using plus lenses. The low-luminance visual stimulus evoked a main MEG response at approximately 120 ms (M100) that reversed its polarity between the upper and lower visual field stimulations and originated from the occipital midline area. When refractive errors were induced by plus lenses, the latency of M100 increased, while its amplitude decreased with an increase in power of the lens. Differences from the control condition (+0D) were significant for all three lenses examined. The results of dipole analyses showed that evoked fields for the control (+0D) condition were explainable by one dipole in the primary visual cortex (V1), while other sources, presumably in V3 or V6, slightly contributed to shape M100 for the +2D or +4D condition. The present results showed that the latency and amplitude of M100 are both useful indicators for assessing refractive states. The contribution of neural sources other than V1 to M100 was modest under the 0D and +1D conditions. By considering the nature of the activity of M100, including its high sensitivity to spatial frequency and lower visual field dominance, a simple low-luminance grating stimulus at an optimal spatial frequency in the lower visual field appears appropriate for obtaining data with high S/N ratios and reducing the load on subjects.

  11. Shift of the Magnetopause Reconnection Line to the Winter Hemisphere Under Southward IMF Conditions: Geotail and MMS Observations

    NASA Technical Reports Server (NTRS)

    Kitamura, N.; Hasegawa, H.; Saito, Y.; Shinohara, I.; Yokota, S.; Nagai, T.; Pollock, C. J.; Giles, B. L.; Moore, T. E.; Dorelli, J. C.

    2016-01-01

    At 02:13 UT on 18 November 2015, when the geomagnetic dipole was tilted by −27°, the MMS spacecraft observed southward reconnection jets near the subsolar magnetopause under southward and dawnward interplanetary magnetic field conditions. Based on four-spacecraft estimations of the magnetic field direction near the separatrix and the motion and direction of the current sheet, the reconnection line was estimated to lie approximately 1.8 R_E or farther northward of MMS. The Geotail spacecraft at GSM Z ≈ 1.4 R_E also observed southward reconnection jets at the dawnside magnetopause 30-40 min later. The estimated reconnection line location was northward of GSM Z ≈ 2 R_E. This crossing occurred when MMS observed purely southward magnetic fields in the magnetosheath. The simultaneous observations are thus consistent with the hypothesis that the dayside magnetopause reconnection line shifts from the subsolar point toward the northern (winter) hemisphere due to the effect of geomagnetic dipole tilt.

  12. Energy dissipation of rigid dipoles in a viscous fluid under the action of a time-periodic field: The influence of thermal bath and dipole interaction

    NASA Astrophysics Data System (ADS)

    Lyutyy, T. V.; Reva, V. V.

    2018-05-01

    Ferrofluid heating by an external alternating field is studied based on the rigid dipole model, in which the magnetization of each particle in the fluid is assumed to be firmly fixed in the crystal lattice. The equations of motion employ Newton's second law for rotational motion, the condition of rigid-body rotation, and the assumption that the friction torque is proportional to the angular velocity. This simplification permits the model to be extended easily to account for thermal noise and interparticle interaction, allowing the roles of thermal activation and dipole interaction in the heating process to be assessed within a unified framework. Our studies are conducted in three stages. First, exact expressions for the average power loss of a single particle are obtained within the dynamical approximation. Then, in the stochastic case, the power loss of a single particle is estimated analytically using the Fokker-Planck equation and numerically using the effective Langevin equation. Finally, the power loss for the particle ensemble is obtained using the molecular dynamics method, with the local dipole fields calculated approximately using the Barnes-Hut algorithm. The trends revealed in the behavior of both a single particle and the particle ensemble suggest how to choose conditions for maximum heating efficiency. The competition between interparticle interaction and thermal noise is investigated in detail. Two situations in which thermal noise offsets the power loss reduction caused by the interaction are described. The first is related to the complete destruction of dense clusters at high noise intensity. The second originates from the rare switching of particles in clusters due to thermal activation when the noise intensity is relatively weak. In this way, the constructive role of noise appears in the system.
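
    In the dynamical (noise-free) approximation, the single-particle power loss can be sketched by integrating the overdamped torque-balance equation numerically. The following is a minimal dimensionless toy with made-up parameter values, not the paper's full model with thermal noise or interparticle interactions:

```python
import math

# Overdamped rigid dipole in a viscous fluid driven by a linearly polarized field.
# xi: rotational friction, m: dipole moment, H0: field amplitude, w: field frequency
# (all dimensionless, made-up values for illustration).
xi, m, H0, w = 1.0, 1.0, 1.0, 1.0
dt = 1e-3
T = 40 * math.pi                    # integrate over 20 field periods
theta, t, heat = 0.5, 0.0, 0.0
for _ in range(int(T / dt)):
    H = H0 * math.cos(w * t)                 # instantaneous field
    omega = -(m * H / xi) * math.sin(theta)  # torque balance: xi*omega = -m*H*sin(theta)
    heat += xi * omega**2 * dt               # viscous dissipation accumulates as heat
    theta += omega * dt                      # forward-Euler update of the dipole angle
    t += dt
p_avg = heat / T                    # time-averaged power loss per particle
print(round(p_avg, 4))
```

    Sweeping `w` or `H0` in such a loop is how one would locate the conditions of maximum heating efficiency in this deterministic limit.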

  13. Negative ions of p-nitroaniline: Photodetachment, collisions, and ab initio calculations

    NASA Astrophysics Data System (ADS)

    Smith, Byron H.; Buonaugurio, Angela; Chen, Jing; Collins, Evan; Bowen, Kit H.; Compton, Robert N.; Sommerfeld, Thomas

    2013-06-01

    The structures of parent anion, M-, and deprotonated molecule, [M-H]-, anions of the highly polar p-nitroaniline (pNA) molecule are studied experimentally and theoretically. Photoelectron spectroscopy (PES) of the parent anion is employed to estimate the adiabatic electron affinity (EAa = 0.75 ± 0.1 eV) and vertical detachment energy (VDE = 1.1 eV). These measured energies are in good agreement with computed values of 0.73 eV for the EAa and the range of 0.85 to 1.0 eV for the VDE at the EOM-CCSD/Aug-cc-pVTZ level. Collision induced dissociation (CID) of deprotonated pNA, [pNA - H]-, with argon yielded [pNA - H - NO]- (i.e., rearrangement to give loss of NO) with a threshold energy of 2.36 eV. Calculations of the energy difference between [pNA - H]- and [pNA - H - NO]- give 1.64 eV, allowing an estimate of a 0.72 eV activation barrier for the rearrangement reaction. Direct dissociation of [pNA - H]- yielding NO₂⁻ occurs at a threshold energy of 3.80 eV, in good agreement with theory (between 3.39 eV and 4.30 eV). As a result of the exceedingly large dipole moment of pNA (6.2 Debye measured in acetone), we predict two dipole-bound states, one at ~110 meV and an excited state at 2 meV. No dipole-bound states are observed in the photodetachment experiments due to the pronounced mixing between states with dipole-bound and valence character, similar to what has been observed in other nitro systems. For the same reason, dipole-bound states are expected to provide highly efficient "doorway states" for the formation of the pNA- valence anion, and these states should be observable as resonances in the reverse process, that is, in the photodetachment spectrum of pNA- near the photodetachment threshold.

  14. Negative ions of p-nitroaniline: photodetachment, collisions, and ab initio calculations.

    PubMed

    Smith, Byron H; Buonaugurio, Angela; Chen, Jing; Collins, Evan; Bowen, Kit H; Compton, Robert N; Sommerfeld, Thomas

    2013-06-21

    The structures of parent anion, M(-), and deprotonated molecule, [M-H](-), anions of the highly polar p-nitroaniline (pNA) molecule are studied experimentally and theoretically. Photoelectron spectroscopy (PES) of the parent anion is employed to estimate the adiabatic electron affinity (EAa = 0.75 ± 0.1 eV) and vertical detachment energy (VDE = 1.1 eV). These measured energies are in good agreement with computed values of 0.73 eV for the EAa and the range of 0.85 to 1.0 eV for the VDE at the EOM-CCSD/Aug-cc-pVTZ level. Collision induced dissociation (CID) of deprotonated pNA, [pNA - H](-), with argon yielded [pNA - H - NO](-) (i.e., rearrangement to give loss of NO) with a threshold energy of 2.36 eV. Calculations of the energy difference between [pNA - H](-) and [pNA - H - NO](-) give 1.64 eV, allowing an estimate of a 0.72 eV activation barrier for the rearrangement reaction. Direct dissociation of [pNA - H](-) yielding NO2(-) occurs at a threshold energy of 3.80 eV, in good agreement with theory (between 3.39 eV and 4.30 eV). As a result of the exceedingly large dipole moment of pNA (6.2 Debye measured in acetone), we predict two dipole-bound states, one at ~110 meV and an excited state at 2 meV. No dipole-bound states are observed in the photodetachment experiments due to the pronounced mixing between states with dipole-bound and valence character, similar to what has been observed in other nitro systems. For the same reason, dipole-bound states are expected to provide highly efficient "doorway states" for the formation of the pNA(-) valence anion, and these states should be observable as resonances in the reverse process, that is, in the photodetachment spectrum of pNA(-) near the photodetachment threshold.

  15. Bias in error estimation when using cross-validation for model selection.

    PubMed

    Varma, Sudhir; Simon, Richard

    2006-02-23

    Cross-validation (CV) is an effective method for estimating the prediction error of a classifier. Some recent articles have proposed methods for optimizing classifiers by choosing classifier parameter values that minimize the CV error estimate. We have evaluated the validity of using the CV error estimate of the optimized classifier as an estimate of the true error expected on independent data. We used CV to optimize the classification parameters for two kinds of classifiers: Shrunken Centroids and Support Vector Machines (SVM). Random training datasets were created, with no difference in the distribution of the features between the two classes. Using these "null" datasets, we selected classifier parameter values that minimized the CV error estimate. 10-fold CV was used for Shrunken Centroids, while leave-one-out CV (LOOCV) was used for the SVM. Independent test data were created to estimate the true error. With "null" and "non-null" (with differential expression between the classes) data, we also tested a nested CV procedure, in which an inner CV loop is used to tune the parameters while an outer CV loop is used to compute an estimate of the error. The CV error estimate for the classifier with the optimal parameters was found to be a substantially biased estimate of the true error that the classifier would incur on independent data. Even though there is no real difference between the two classes for the "null" datasets, the CV error estimate for Shrunken Centroids with the optimal parameters was less than 30% on 18.5% of the simulated training datasets. For SVM with optimal parameters the estimated error rate was less than 30% on 38% of the "null" datasets. Performance of the optimized classifiers on the independent test set was no better than chance. The nested CV procedure reduces the bias considerably and gives an estimate of the error that is very close to that obtained on the independent test set for both Shrunken Centroids and SVM classifiers for "null" and "non-null" data distributions. We show that using CV to compute an error estimate for a classifier that has itself been tuned using CV gives a significantly biased estimate of the true error. Proper use of CV for estimating the true error of a classifier developed using a well-defined algorithm requires that all steps of the algorithm, including classifier parameter tuning, be repeated in each CV loop. A nested CV procedure provides an almost unbiased estimate of the true error.
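
    The biased and nested procedures can be contrasted in a few lines. The following toy uses a simple k-nearest-neighbour rule on a "null" dataset as a hypothetical stand-in for the paper's Shrunken Centroid and SVM classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_error(X_tr, y_tr, X_te, y_te, k):
    # k-nearest-neighbour majority vote; returns the test error rate
    errs = 0
    for x, y in zip(X_te, y_te):
        d = np.linalg.norm(X_tr - x, axis=1)
        vote = y_tr[np.argsort(d)[:k]].mean() > 0.5
        errs += int(vote != y)
    return errs / len(y_te)

def cv_error(X, y, k, folds=5):
    # plain k-fold cross-validation error for a fixed parameter k
    parts = np.array_split(np.arange(len(y)), folds)
    return np.mean([knn_error(np.delete(X, p, 0), np.delete(y, p), X[p], y[p], k)
                    for p in parts])

# "null" data: features carry no class information, so the true error is 50%
X = rng.normal(size=(60, 10))
y = rng.integers(0, 2, 60)
ks = [1, 3, 5, 7, 9]

# Biased: tune k by CV, then report that same minimized CV error
naive = min(cv_error(X, y, k) for k in ks)

# Nested: tune k by inner CV on each outer training fold, test on the held-out fold
outer = np.array_split(np.arange(len(y)), 5)
nested = []
for p in outer:
    X_tr, y_tr = np.delete(X, p, 0), np.delete(y, p)
    best_k = min(ks, key=lambda k: cv_error(X_tr, y_tr, k))
    nested.append(knn_error(X_tr, y_tr, X[p], y[p], best_k))
nested = float(np.mean(nested))

print(round(float(naive), 3), round(nested, 3))
```

    On null data the minimized CV estimate is typically optimistic, while the nested estimate tends to stay near chance level, mirroring the paper's conclusion.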

  16. Parallel computers - Estimate errors caused by imprecise data

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne

    1991-01-01

    A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. An ideal solution would be a software device capable of computing the errors of arbitrary programs. The software engineering aspect of the problem is to describe such a device for computing error estimates in software terms and then to provide the user with precise numbers accompanied by their error estimates. The feasibility of a program capable of computing both a quantity and its error estimate over the range of possible measurement errors is demonstrated.
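
    A classical way to make a program return both a quantity and a guaranteed error bound is interval arithmetic. This minimal sketch (not the authors' parallel implementation) propagates a measurement uncertainty through f(x) = x² + x:

```python
class Interval:
    """Closed interval [lo, hi] tracking worst-case bounds through arithmetic."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        # the product's bounds are the min/max over all endpoint products
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
    def __repr__(self):
        return f"[{self.lo:.4g}, {self.hi:.4g}]"

x = Interval(1.9, 2.1)        # a measurement of 2.0 with +/- 0.1 imprecision
y = x * x + x                 # propagate through f(x) = x^2 + x
print(y)                      # -> [5.51, 6.51]
```

    Each Interval operation is independent of the others, which is what makes the approach naturally parallelizable across the expression tree.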

  17. Forward modeling to investigate inversion artifacts resulting from time-lapse electrical resistivity tomography during rainfall simulations

    NASA Astrophysics Data System (ADS)

    Carey, Austin M.; Paige, Ginger B.; Carr, Bradley J.; Dogan, Mine

    2017-10-01

    Time-lapse electrical resistivity tomography (ERT) is commonly used as a minimally invasive tool to study infiltration processes. In 2014, we conducted field studies coupling variable-intensity rainfall simulation with high-resolution ERT to study the real-time partitioning of rainfall into surface and subsurface response. The significant resistivity contrast in the subsurface caused by large changes in subsurface moisture resulted in artifacts during the inversion of the time-lapse ERT data collected using a dipole-dipole electrode array. These artifacts, which are not representative of real subsurface moisture dynamics, have been shown to arise during time-lapse inversion of ERT data and may be subject to misinterpretation. Forward modeling of the infiltration process after the field experiments, using a two-layer system (saprolite overlain by a soil layer), was used to generate synthetic datasets. The synthetic data were used to investigate the influence of both changes in volumetric moisture content and electrode configuration on the development of the artifacts identified in the field datasets. For the dipole-dipole array, we found that a decrease in the resistivity of the bottom layer by 67% resulted in a 50% reduction in artifact development. Artifacts for the seven additional array configurations tested ranged from a 19% increase in artifact development (using an extended dipole-dipole array) to as much as a 96% decrease (using a Wenner-alpha array), compared with the dipole-dipole array. Moreover, these arrays varied in their ability to accurately delineate the infiltration front. Model results showed that the modified pole-dipole array was able to accurately image the infiltration zone and presented fewer artifacts for our experiments. In this study, we identify an optimal array type for imaging rainfall-infiltration dynamics that reduces artifacts. The influence of the moisture contrast between the infiltrating water and the bulk subsurface material was characterized and shown to be a major factor contributing to artifact development. Through forward modeling, this study highlights the importance of considering array type and subsurface moisture conditions when using time-lapse resistivity to obtain reliable estimates of vadose zone flow processes during rainfall-infiltration events.

  18. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    PubMed

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
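
    Of the standard-error variants compared above, the bootstrap is the simplest to sketch. The following generic example (for a sample mean, not a TSRI estimator) checks the bootstrap standard error against the analytic one:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=2.0, scale=1.0, size=200)   # made-up sample data

def estimator(sample):
    # the statistic whose standard error we want; here the sample mean
    return sample.mean()

# resample with replacement, re-estimate, and take the spread of the estimates
boots = np.array([estimator(rng.choice(x, size=len(x), replace=True))
                  for _ in range(2000)])
se_boot = boots.std(ddof=1)
se_analytic = x.std(ddof=1) / np.sqrt(len(x))  # known closed form for the mean
print(round(float(se_boot), 4), round(float(se_analytic), 4))
```

    The same resample-and-re-estimate loop applies unchanged to estimators with no closed-form standard error, which is why it is a natural fallback for two-stage estimators like TSRI.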

  19. A mathematical model of extremely low frequency ocean induced electromagnetic noise

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dautta, Manik, E-mail: manik.dautta@anyeshan.com; Faruque, Rumana Binte, E-mail: rumana.faruque@anyeshan.com; Islam, Rakibul, E-mail: rakibul.islam@anyeshan.com

    2016-07-12

    Magnetic Anomaly Detection (MAD) system uses the principle that ferromagnetic objects disturb the magnetic lines of force of the earth. These lines of force are able to pass through both water and air in similar manners. A MAD system, usually mounted on an aerial vehicle, is thus often employed to confirm the detection and accomplish localization of large ferromagnetic objects submerged in a sea-water environment. However, the total magnetic signal encountered by a MAD system includes contributions from a myriad of low to Extremely Low Frequency (ELF) sources. The goal of the MAD system is to detect small anomaly signals in the midst of these low-frequency interfering signals. Both the Range of Detection (R_d) and the Probability of Detection (P_d) are limited by the ratio of anomaly signal strength to the interfering magnetic noise. In this paper, we report a generic mathematical model to estimate the signal-to-noise ratio (SNR). Since time-variant electromagnetic signals are affected by conduction losses due to sea-water conductivity and the presence of the air-water interface, we employ the general formulation of dipole-induced electromagnetic field propagation in stratified media [1]. As a first step we employ a volumetric distribution of isolated elementary magnetic dipoles, each having its own dipole strength and orientation, to estimate the magnetic noise observed by a MAD system. Numerical results are presented for a few realizations out of an ensemble of possible realizations of elementary dipole source distributions.

  20. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate completely model-free error estimation is impossible. This paper provides analytic, asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.
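
    The RMS performance metric discussed above, RMS = sqrt(E[(ê − ε)²]), can be illustrated by Monte Carlo. This sketch uses a nearest-mean (LDA-like) rule in a Gaussian model and the simple resubstitution estimator as a stand-in for the paper's Bayesian MMSE estimator:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, reps = 2, 20, 300                 # dimension, per-class sample size, replicates
mu0, mu1 = np.zeros(d), np.full(d, 1.0) # the two Gaussian class means

def nearest_mean_predict(X, m0, m1):
    # assign class 1 when closer to m1 (LDA with identity pooled covariance)
    return (np.linalg.norm(X - m1, axis=1) < np.linalg.norm(X - m0, axis=1)).astype(int)

# a large independent sample serves as a near-exact proxy for the true error
X_big = np.vstack([rng.normal(mu0, 1, (5000, d)), rng.normal(mu1, 1, (5000, d))])
y_big = np.r_[np.zeros(5000, int), np.ones(5000, int)]

sq = []
for _ in range(reps):
    X = np.vstack([rng.normal(mu0, 1, (n, d)), rng.normal(mu1, 1, (n, d))])
    y = np.r_[np.zeros(n, int), np.ones(n, int)]
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    resub = np.mean(nearest_mean_predict(X, m0, m1) != y)         # training-data error
    true = np.mean(nearest_mean_predict(X_big, m0, m1) != y_big)  # near-true error
    sq.append((resub - true) ** 2)
rms = float(np.sqrt(np.mean(sq)))
print(round(rms, 3))
```

    The same loop, with the resubstitution line swapped for any other estimator, gives the RMS comparison that the paper's analytic approximations replace.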

  1. Multi-transmitter multi-receiver null coupled systems for inductive detection and characterization of metallic objects

    NASA Astrophysics Data System (ADS)

    Smith, J. Torquil; Morrison, H. Frank; Doolittle, Lawrence R.; Tseng, Hung-Wen

    2007-03-01

    Equivalent dipole polarizabilities are a succinct way to summarize the inductive response of an isolated conductive body at distances greater than the scale of the body. Their estimation requires measurement of secondary magnetic fields due to currents induced in the body by time-varying magnetic fields in at least three linearly independent (e.g., orthogonal) directions. Secondary fields due to an object are typically orders of magnitude smaller than the primary inducing fields near the primary field sources (transmitters). Receiver coils may be oriented orthogonal to primary fields from one or two transmitters, nulling their response to those fields, but simultaneously nulling to fields of additional transmitters is problematic. If transmitter coils are constructed symmetrically with respect to inversion in a point, their magnetic fields are symmetric with respect to that point. If receiver coils are operated in pairs symmetric with respect to inversion in the same point, then their differenced output is insensitive to the primary fields of any symmetrically constructed transmitters, allowing nulling to three (or more) transmitters. With a sufficient number of receiver pairs, object equivalent dipole polarizabilities can be estimated in situ from measurements at a single instrument sitting, eliminating effects of inaccurate instrument location on polarizability estimates. The method is illustrated with data from a multi-transmitter multi-receiver system with primary field nulling through differenced receiver pairs, interpreted in terms of principal equivalent dipole polarizabilities as a function of time.
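
    The inversion symmetry the nulling scheme relies on is easy to verify numerically: a point dipole's field is identical at r and −r, so a differenced receiver pair placed symmetrically about the source sees zero primary field. A minimal check, with arbitrary made-up vectors:

```python
import numpy as np

def dipole_B(m, r):
    # field of a point magnetic dipole m at offset r (SI, mu0/4pi folded in as 1e-7)
    rn = np.linalg.norm(r)
    rhat = r / rn
    return 1e-7 * (3 * rhat * (rhat @ m) - m) / rn**3

m = np.array([0.3, -1.2, 2.0])     # arbitrary transmitter dipole moment
r = np.array([1.5, 0.7, -0.4])     # receiver position relative to the transmitter
diff = dipole_B(m, r) - dipole_B(m, -r)
print(np.max(np.abs(diff)))        # ~0: the differenced pair rejects the primary field
```

    A target's secondary field, originating away from the symmetry point, does not cancel under the same differencing, which is what leaves the secondary response measurable.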

  2. A comparison of two estimates of standard error for a ratio-of-means estimator for a mapped-plot sample design in southeast Alaska.

    Treesearch

    Willem W.S. van Hees

    2002-01-01

    Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...

  3. Minimal nuclear energy density functional

    NASA Astrophysics Data System (ADS)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; Perez, Rodrigo Navarro; Schunck, Nicolas

    2018-04-01

    We present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ε_r = 0.022 fm and a standard deviation σ_r = 0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. We identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  4. Low resolution brain electromagnetic tomography in a realistic geometry head model: a simulation study

    NASA Astrophysics Data System (ADS)

    Ding, Lei; Lai, Yuan; He, Bin

    2005-01-01

    It is of importance to localize neural sources from scalp-recorded EEG. Low resolution brain electromagnetic tomography (LORETA) has received considerable attention for localizing brain electrical sources. However, most such efforts have used spherical head models in representing the head volume conductor. Investigation of the performance of LORETA in a realistic geometry head model, as compared with the spherical model, will provide useful information guiding interpretation of data obtained by using the spherical head model. The performance of LORETA was evaluated by means of computer simulations. The boundary element method was used to solve the forward problem. A three-shell realistic geometry (RG) head model was constructed from MRI scans of a human subject. Source configurations of a single dipole located at different regions of the brain with varying depth were used to assess the performance of LORETA in different regions of the brain. A three-sphere head model was also used to approximate the RG head model, similar simulations were performed, and the results were compared with those of the RG-LORETA with reference to the locations of the simulated sources. Multi-source localizations were discussed and examples given in the RG head model. Localization errors employing the spherical LORETA, with reference to the source locations within the realistic geometry head, were about 20-30 mm for the four brain regions evaluated: frontal, parietal, temporal, and occipital. Localization errors employing the RG head model were about 10 mm over the same four brain regions. The present simulation results suggest that the use of the RG head model reduces the localization error of LORETA, and that the RG head model based LORETA is desirable if high localization accuracy is needed.

  5. Minimal nuclear energy density functional

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi

    In this paper, we present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ε_r = 0.022 fm and a standard deviation σ_r = 0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. Finally, we identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  6. Minimal nuclear energy density functional

    DOE PAGES

    Bulgac, Aurel; Forbes, Michael McNeil; Jin, Shi; ...

    2018-04-17

    In this paper, we present a minimal nuclear energy density functional (NEDF) called "SeaLL1" that has the smallest number of possible phenomenological parameters to date. SeaLL1 is defined by seven significant phenomenological parameters, each related to a specific nuclear property. It describes the nuclear masses of even-even nuclei with a mean energy error of 0.97 MeV and a standard deviation of 1.46 MeV, two-neutron and two-proton separation energies with rms errors of 0.69 MeV and 0.59 MeV respectively, and the charge radii of 345 even-even nuclei with a mean error ε_r = 0.022 fm and a standard deviation σ_r = 0.025 fm. SeaLL1 incorporates constraints on the equation of state (EoS) of pure neutron matter from quantum Monte Carlo calculations with chiral effective field theory two-body (NN) interactions at the next-to-next-to-next-to leading order (N3LO) level and three-body (NNN) interactions at the next-to-next-to leading order (N2LO) level. Two of the seven parameters are related to the saturation density and the energy per particle of the homogeneous symmetric nuclear matter, one is related to the nuclear surface tension, two are related to the symmetry energy and its density dependence, one is related to the strength of the spin-orbit interaction, and one is the coupling constant of the pairing interaction. Finally, we identify additional phenomenological parameters that have little effect on ground-state properties but can be used to fine-tune features such as the Thomas-Reiche-Kuhn sum rule, the excitation energy of the giant dipole and Gamow-Teller resonances, the static dipole electric polarizability, and the neutron skin thickness.

  7. Whole head quantitative susceptibility mapping using a least-norm direct dipole inversion method.

    PubMed

    Sun, Hongfu; Ma, Yuhan; MacDonald, M Ethan; Pike, G Bruce

    2018-06-15

    A new dipole field inversion method for whole head quantitative susceptibility mapping (QSM) is proposed. Instead of performing background field removal and local field inversion sequentially, the proposed method performs dipole field inversion directly on the total field map in a single step. To aid this under-determined and ill-posed inversion process and obtain robust QSM images, Tikhonov regularization is implemented to seek the local susceptibility solution with the least norm (LN) using the L-curve criterion. The proposed LN-QSM does not require brain edge erosion, thereby preserving the cerebral cortex in the final images. This should improve its applicability for QSM-based cortical grey matter measurement, functional imaging, and full-brain venography. Furthermore, LN-QSM also enables susceptibility mapping of the entire head without the need for brain extraction, which makes QSM reconstruction more automated and less dependent on intermediate pre-processing methods and their associated parameters. It is shown that the proposed LN-QSM method reduced errors in a numerical phantom simulation, improved accuracy in a gadolinium phantom experiment, and suppressed artefacts in nine subjects, as compared to two-step and other single-step QSM methods. Measurements of deep grey matter and skull susceptibilities from LN-QSM are consistent with established reconstruction methods. Copyright © 2018 Elsevier Inc. All rights reserved.
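
    The Tikhonov-regularized least-norm step has the closed form x_λ = (AᵀA + λI)⁻¹Aᵀb. This toy sketch uses a generic ill-conditioned system as a stand-in for the dipole-field operator (not the authors' pipeline) to show how the regularization tames an otherwise unstable inversion:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(40, 40))
A[:, -1] = A[:, 0] + 1e-8 * rng.normal(size=40)  # near-dependent column -> ill-posed
x_true = rng.normal(size=40)
b = A @ x_true + 1e-3 * rng.normal(size=40)      # noisy "total field" measurements

def tikhonov(A, b, lam):
    # least-norm regularized solution: (A^T A + lam*I)^{-1} A^T b
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

x_naive = np.linalg.solve(A, b)   # unregularized: noise is hugely amplified
x_reg = tikhonov(A, b, 1e-3)      # lam would be chosen via the L-curve in practice
print(float(np.linalg.norm(x_naive)), float(np.linalg.norm(x_reg)))
```

    In actual QSM the operator A is a convolution with the dipole kernel, applied efficiently in Fourier space rather than as an explicit matrix; the regularization logic is the same.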

  8. A-posteriori error estimation for second order mechanical systems

    NASA Astrophysics Data System (ADS)

    Ruiner, Thomas; Fehr, Jörg; Haasdonk, Bernard; Eberhard, Peter

    2012-06-01

    One important issue in the simulation of flexible multibody systems is the reduction of the flexible bodies' degrees of freedom. As far as safety questions are concerned, knowledge about the error introduced by the reduction of the flexible degrees of freedom is helpful and very important. In this work, an a-posteriori error estimator for linear first-order systems is extended to error estimation for mechanical second-order systems. Due to the special second-order structure of mechanical systems, an improvement of the a-posteriori error estimator is achieved. A major advantage of the a-posteriori error estimator is that it is independent of the reduction technique used. Therefore, it can be used for moment-matching-based, Gramian-matrix-based, or modal model reduction techniques. The capability of the proposed technique is demonstrated by the a-posteriori error estimation of a mechanical system, and a sensitivity analysis of the parameters involved in the error estimation process is conducted.
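
    The reduction error that such an estimator is meant to bound can be seen directly in a toy second-order system. This sketch (modal truncation on a made-up spring-mass chain, not the paper's estimator) compares the full and reduced harmonic responses:

```python
import numpy as np

n, r = 20, 10
# spring-mass chain with unit masses: a made-up second-order test system M q'' + K q = f
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
f = np.zeros(n); f[0] = 1.0      # harmonic force on the first mass
w = 0.5                          # forcing frequency (dimensionless)

vals, vecs = np.linalg.eigh(K)   # vibration modes (M = I, so K alone suffices)
V = vecs[:, :r]                  # reduction basis: the r lowest-frequency modes

q_full = np.linalg.solve(K - w**2 * M, f)                       # full response
q_red = V @ np.linalg.solve(V.T @ (K - w**2 * M) @ V, V.T @ f)  # reduced response
err = np.linalg.norm(q_full - q_red) / np.linalg.norm(q_full)
print(err)   # relative error introduced by truncating the high-frequency modes
```

    An a-posteriori estimator bounds this quantity without ever forming `q_full`, which is what makes it usable when the full model is too large to solve.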

  9. A Four Lake Latitudinal Comparison Along Coastal Southern to Central California: A Late-Holocene Perspective on the Western US Precipitation Dipole.

    NASA Astrophysics Data System (ADS)

    Kirby, M.; Nichols, K. E.; Ramezan, R.; Palermo, J. A.; Hiner, C.; Bonuso, N.; Patterson, W. P.; Silveira, E.

    2016-12-01

    One of the dominant hydroclimatic features of the western United States is the winter-season precipitation dipole. The dipole is characterized by a N-S antiphased precipitation regime presently centered on 40° N latitude (Cayan et al., 1998; Dettinger et al., 1998; Wise, 2010). For example, the position of the dipole dictates where CA receives its winter precipitation; thus, it is critical to understand the dipole from a paleoperspective, which at present is poorly known. Here, we present four lake sites spanning 33°-36° N latitude along coastal CA. These sites include: Lake Elsinore, Crystal Lake, Zaca Lake, and Abbott Lake. All four of these sites are located south of the dipole's average historic (since 1950 AD) latitude. The predominant hydroclimatic indicator is similar for each basin (i.e., grain size), although several other indicators are used for independent verification/assessment of the grain size interpretation. Notably, these lakes have varied age control, which limits site-to-site correlation without consideration of age-model dependence. Following a Bayesian framework, MCMC algorithms in conjunction with radiocarbon dating will be used to estimate timestamps of sediment deposits with a degree of statistical uncertainty. Samples from the posterior distribution will be used to correlate hydroclimatic features between sites. Included in this analysis are tree-ring records from the region to assess the similarities and differences as recorded in annually resolved tree-ring drought reconstructions and decadally resolved lake-sediment hydroclimatic records. Finally, the four sites are assessed in the context of tropical and north Pacific SST forcing.

  10. Ratiometric fluorescence measurements and imaging of the dipole potential in cell plasma membranes

    NASA Astrophysics Data System (ADS)

    Shynkar, Vasyl V.; Klymchenko, Andrey S.; Duportail, Guy; Demchenko, Alexander P.; Mély, Yves

    2004-09-01

    Development of fluorescence microscopic methods is limited by the application of new dyes, the response of which could be sensitive to different functional states in the living cells, and, in particular, to electrostatic potentials on their plasma membranes. Recently, we showed that newly designed 3-hydroxyflavone fluorescence dyes are highly electrochromic and show a strong two-band ratiometric response to electric dipole potential in lipid membranes. In the present report we extend these observations and describe a new generation of these dyes as electrochromic probes in biomembrane research. Modification of the membrane dipole potential was achieved by addition of 6-ketocholestanol (6-KC), cholesterol and phloretin. The dipole potential was also estimated by the reference probe di-8-ANEPPS. As an example, we show that on addition of 6-KC there occurs a dramatic change of the intensity ratio of the two emission bands, which is easily detected as a change of color. We describe in detail the applications of one of these dyes, PPZ8, to the studies of cells in suspension or attached to the glass surface. Confocal microscopy demonstrates strong preference of the probe for the cell plasma membrane, which allows us to apply this dye for studying electrostatic and other biomembrane properties. We demonstrate that the two-color response provides a direct and convenient way to measure the dipole potential in the plasma membrane. Applying PPZ8 in confocal microcopy and two-photon microspectroscopy allowed us to provide two-color imaging of the membrane dipole potential on the level of a single cell.

  11. Direct evidence of three-body interactions in a cold 85Rb Rydberg gas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Han Jianing

    2010-11-15

    Cold Rydberg atoms trapped in a magneto-optical trap (MOT) are not isolated and they interact through dipole-dipole and multipole-multipole interactions. First-order dipole-dipole interactions and van der Waals interactions between two atoms have been intensively studied. However, the facts that the first-order dipole-dipole interactions and van der Waals interactions show the same size of broadening [A. Reinhard, K. C. Younge, T. C. Liebisch, B. Knuffman, P. R. Berman, and G. Raithel, Phys. Rev. Lett. 100, 233201 (2008)] and that there are transitions between two dimer states [S. M. Farooqi, D. Tong, S. Krishnan, J. Stanojevic, Y. P. Zhang, J. R. Ensher, A. S. Estrin, C. Boisseau, R. Cote, E. E. Eyler, and P. L. Gould, Phys. Rev. Lett. 91, 183002 (2003); K. R. Overstreet, Arne Schwettmann, Jonathan Tallant, and James P. Shaffer, Phys. Rev. A 76, 011403(R) (2007)] cannot be explained by the two-atom picture. The purpose of this article is to show the few-body nature of a dense cold Rydberg gas by studying the molecular-state microwave spectra. Specifically, three-body energy levels have been calculated. Moreover, the transition from three-body energy levels to two-body coupled molecular energy levels and to isolated atomic energy levels as a function of the internuclear spacing is studied. Finally, single-body, two-body, and three-body interaction regions are estimated according to the experimental data. The results reported here provide useful information for plasma formation, further cooling, and superfluid formation.

  12. Optimal estimation of large structure model errors. [in Space Shuttle controller design]

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.

    1979-01-01

    In-flight estimation of large structure model errors is usually required as a means of detecting inevitable deficiencies in large structure controller/estimator models. The present paper deals with a least-squares formulation which seeks to minimize a quadratic functional of the model errors. The properties of these error estimates are analyzed. It is shown that an arbitrary model error can be decomposed as the sum of two components that are orthogonal in a suitably defined function space. Relations between true and estimated errors are defined. The estimates are found to be approximations that retain many of the significant dynamics of the true model errors. Current efforts are directed toward application of the analytical results to a reference large structure model.
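
    The orthogonal decomposition described above can be illustrated with a small numerical sketch (the matrix A, the error vector, and the minimum-norm formula below are illustrative assumptions, not taken from the paper): the least-squares estimate is the projection of the true model error onto the observable subspace, and the unestimated remainder is orthogonal to it.

```python
def matvec(A, x):
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def solve2(M, b):
    # 2x2 solve via Cramer's rule -- enough for this toy problem
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(b[0] * M[1][1] - b[1] * M[0][1]) / det,
            (b[1] * M[0][0] - b[0] * M[1][0]) / det]

# Two measurements of a three-component model-error vector: under-determined
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
e_true = [0.5, -0.2, 0.4]
y = matvec(A, e_true)

# Minimum-norm least-squares estimate: e_hat = A^T (A A^T)^(-1) y
AAt = [[sum(p * q for p, q in zip(r1, r2)) for r2 in A] for r1 in A]
w = solve2(AAt, y)
e_hat = [w[0] * A[0][j] + w[1] * A[1][j] for j in range(3)]

# The unestimated remainder is orthogonal to the estimate
resid = [t - h for t, h in zip(e_true, e_hat)]
inner = sum(r * h for r, h in zip(resid, e_hat))
print(inner)  # ~0.0: the two components are orthogonal
```

    The estimate reproduces the measurements exactly while leaving the unobservable component untouched, which is the sense in which the estimated error "retains many of the significant dynamics" of the true error.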

  13. Spherical harmonic representation of the main geomagnetic field for world charting and investigations of some fundamental problems of physics and geophysics

    NASA Technical Reports Server (NTRS)

    Barraclough, D. R.; Hide, R.; Leaton, B. R.; Lowes, F. J.; Malin, S. R. C.; Wilson, R. L. (Principal Investigator)

    1981-01-01

    Quiet-day data from MAGSAT were examined for effects which might test the validity of Maxwell's equations. Both external and toroidal fields which might represent a violation of the equations appear to exist, well within the associated errors. The external field might be associated with the ring current, and varies on a time-scale of one day or less. Its orientation is parallel to the geomagnetic dipole. The toroidal field can be confused with an orientation error (in yaw). If the toroidal field really exists, it can be related either to ionospheric currents, or to toroidal fields in the Earth's core in accordance with Einstein's unified field theory, or to both.

  14. Managing Systematic Errors in a Polarimeter for the Storage Ring EDM Experiment

    NASA Astrophysics Data System (ADS)

    Stephenson, Edward J.; Storage Ring EDM Collaboration

    2011-05-01

    The EDDA plastic scintillator detector system at the Cooler Synchrotron (COSY) has been used to demonstrate that it is possible, using a thick target at the edge of the circulating beam, to meet the requirements for a polarimeter to be used in the search for an electric dipole moment on the proton or deuteron. Emphasizing elastic and low Q-value reactions leads to large analyzing powers and, along with thick targets, to efficiencies near 1%. Using only information obtained by comparing count rates for oppositely vector-polarized beam states and a calibration of the sensitivity of the polarimeter to rate and geometric changes, the contribution of systematic errors can be suppressed below the level of one part per million.

  15. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    PubMed

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.

  16. Reply to "Comment on `Protecting bipartite entanglement by quantum interferences' "

    NASA Astrophysics Data System (ADS)

    Das, Sumanta; Agarwal, G. S.

    2018-03-01

    In a recent Comment [Nair and Arun, Phys. Rev. A 97, 036301 (2018), 10.1103/PhysRevA.97.036301], it was concluded that the two-qubit entanglement protection reported in our work [Das and Agarwal, Phys. Rev. A 81, 052341 (2010), 10.1103/PhysRevA.81.052341] is erroneous. While we acknowledge the error in the analytical results on concurrence when the dipole matrix elements were unequal, the essential conclusions on entanglement protection are not affected.

  17. Is interstellar detection of higher members of the linear radicals CnCH and CnN feasible?

    NASA Technical Reports Server (NTRS)

    Pauzat, F.; Ellinger, Y.; Mclean, A. D.

    1991-01-01

    Rotational constants and dipole moments for linear-chain radicals CnCH and CnN are estimated using a combination of ab initio molecular orbital calculations and observed data on the starting members of the series. CnCH with n = 0-5 have been observed by radioastronomy in carbon-rich interstellar clouds; higher members of the series have 2Pi ground states with large dipole moments and are strong candidates for observation. CN and C3N have also been observed by radioastronomy; higher members of the series, with the possible exception of C5N, have 2Pi ground states with near-zero dipole moments making their interstellar detection hopeless under present observational conditions. C5N can be a strong candidate only if it has a 2Sigma ground state, and best computations so far indicate that this is not the case.

  18. Is interstellar detection of higher members of the linear radicals CnCH and CnN feasible?

    PubMed

    Pauzat, F; Ellinger, Y; McLean, A D

    1991-03-01

    Rotational constants and dipole moments for linear-chain radicals CnCH and CnN are estimated using a combination of ab initio molecular orbital calculations and observed data on the starting members of the series. CnCH with n = 0-5 have been observed by radioastronomy in carbon-rich interstellar clouds; higher members of the series have 2 pi ground states with large dipole moments and are strong candidates for observation. CN and C3N have also been observed by radioastronomy; higher members of the series, with the possible exception of C5N, have 2 pi ground states with near-zero dipole moments making their interstellar detection hopeless under present observational conditions. C5N can be a strong candidate only if it has a 2 sigma ground state, and our best computations so far indicate that this is not the case.

  19. Is interstellar detection of higher members of the linear radicals CnCH and CnN feasible

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pauzat, F.; Ellinger, Y.; Mclean, A.D.

    1991-03-01

    Rotational constants and dipole moments for linear-chain radicals CnCH and CnN are estimated using a combination of ab initio molecular orbital calculations and observed data on the starting members of the series. CnCH with n = 0-5 have been observed by radioastronomy in carbon-rich interstellar clouds; higher members of the series have 2Pi ground states with large dipole moments and are strong candidates for observation. CN and C3N have also been observed by radioastronomy; higher members of the series, with the possible exception of C5N, have 2Pi ground states with near-zero dipole moments making their interstellar detection hopeless under present observational conditions. C5N can be a strong candidate only if it has a 2Sigma ground state, and best computations so far indicate that this is not the case. 20 refs.

  20. Relativistic Coulomb Excitation within the Time Dependent Superfluid Local Density Approximation

    NASA Astrophysics Data System (ADS)

    Stetcu, I.; Bertulani, C. A.; Bulgac, A.; Magierski, P.; Roche, K. J.

    2015-01-01

    Within the framework of the unrestricted time-dependent density functional theory, we present for the first time an analysis of the relativistic Coulomb excitation of the heavy deformed open shell nucleus 238U. The approach is based on the superfluid local density approximation formulated on a spatial lattice that can take into account coupling to the continuum, enabling self-consistent studies of superfluid dynamics of any nuclear shape. We compute the energy deposited in the target nucleus as a function of the impact parameter, finding it to be significantly larger than the estimate using the Goldhaber-Teller model. The isovector giant dipole resonance, the dipole pygmy resonance, and giant quadrupole modes are excited during the process. The one-body dissipation of collective dipole modes is shown to lead to a damping width Γ↓≈0.4 MeV and the number of preequilibrium neutrons emitted has been quantified.

  1. Large-scale galactic motions: test of the Dipole Repeller model with the RFGC galaxies data

    NASA Astrophysics Data System (ADS)

    Parnovsky, S.

    2017-06-01

    The paper "The Dipole Repeller" in Nature Astronomy by Hoffman et al. states that the local large-scale galactic flow is dominated by a single attractor - associated with the Shapley Concentration - and a single previously unidentified repeller. We check this hypothesis using the data for 1459 galaxies from the RFGC catalogue with distances up to 100 h-1 Mpc. We compared models with a multipole velocity field for pure Hubble expansion and dipole, quadrupole and octopole motion with models with two attractors in the regions indicated by Hoffman et al. on a multipole velocity-field background. The results do not support the hypothesis, but do not contradict it. In any case, the inclusion of the next multipole is more effective than the addition of two attractors. Estimates of the excess mass of the attractors vary greatly, even changing sign, depending on the highest multipole used in the model.

  2. Relativistic Coulomb excitation within the time dependent superfluid local density approximation

    DOE PAGES

    Stetcu, I.; Bertulani, C. A.; Bulgac, A.; ...

    2015-01-06

    Within the framework of the unrestricted time-dependent density functional theory, we present for the first time an analysis of the relativistic Coulomb excitation of the heavy deformed open shell nucleus 238U. The approach is based on the superfluid local density approximation formulated on a spatial lattice that can take into account coupling to the continuum, enabling self-consistent studies of superfluid dynamics of any nuclear shape. We compute the energy deposited in the target nucleus as a function of the impact parameter, finding it to be significantly larger than the estimate using the Goldhaber-Teller model. The isovector giant dipole resonance, the dipole pygmy resonance, and giant quadrupole modes are excited during the process. As a result, the one-body dissipation of collective dipole modes is shown to lead to a damping width Γ↓≈0.4 MeV and the number of preequilibrium neutrons emitted has been quantified.

  3. Temperature dependent impedance spectroscopy and Thermally Stimulated Depolarization Current (TSDC) analysis of disperse red 1-co-poly(methyl methacrylate) copolymers

    NASA Astrophysics Data System (ADS)

    Ko, Yee Song; Cuervo-Reyes, Eduardo; Nüesch, Frank A.; Opris, Dorina M.

    2016-04-01

    The dielectric relaxation processes of polymethyl methacrylates that have been functionalized with Disperse Red 1 (DR1) in the side chain (DR1-co-MMA) were studied with temperature-dependent impedance spectroscopy and thermally stimulated depolarization current (TSDC) techniques. Copolymers with dipole contents varying between 10 mol% and 70 mol% were prepared. All samples showed dipole relaxations above the structural-glass transition temperature (Tg). The β-relaxation of the methyl methacrylate (MMA) repeating unit was most visible in DR1(10%)-co-MMA and rapidly vanishes with higher dipole contents. DSC data reveal an increase of the Tg by 20 °C, to 125 °C, with the inclusion of the dipole into the polymethyl methacrylate (PMMA) as a side chain. The impedance data of samples with several DR1 concentrations, taken at several temperatures above Tg, have been fitted with the Havriliak-Negami (HN) function. In all cases, the fits reveal a dielectric response that corresponds to power-law dipolar relaxations. TSDC measurements show that the copolymer can be poled, and that the induced polarization can be frozen by lowering the temperature well below the glass transition. Relaxation strengths Δε estimated by integrating the depolarization current are similar to those obtained from the impedance data, confirming the efficient freezing of the dipoles in the structural glass state.

  4. Effect of lipid structure on the dipole potential of phosphatidylcholine bilayers.

    PubMed

    Clarke, R J

    1997-07-25

    A fluorescent ratio method utilizing styrylpyridinium dyes has recently been suggested for the measurement of the membrane dipole potential. Up to now only qualitative measurements have been possible. Here the fluorescence excitation ratio of the dye di-8-ANEPPS has been measured in lipid vesicles composed of a range of saturated and unsaturated phosphatidylcholines. It has been found that the fluorescence ratio is inversely proportional to the surface area occupied by the lipid in its fully hydrated state. This finding allows, by extra- and interpolation, the packing density to be estimated for phosphatidylcholines for which X-ray crystallographic data are not yet available. Comparison of the fluorescence data with literature data of the dipole potential from electrical measurements on monolayers and bilayers allows a calibration curve to be constructed, so that a quantitative determination of the dipole potential using di-8-ANEPPS is possible. It has been found that the value of the dipole potential decreases with increasing unsaturation and, in the case of unsaturated lipids, with increasing length of the hydrocarbon chains. This effect can be explained by the effects of chain packing on the spacing between the headgroups. In addition to the effects of lipid structure on membrane fluidity, these measurements demonstrate the possibility of a direct electrical mechanism for lipid regulation of protein function, in particular of ion transport proteins.

  5. Estimating locations and total magnetization vectors of compact magnetic sources from scalar, vector, or tensor magnetic measurements through combined Helbig and Euler analysis

    USGS Publications Warehouse

    Phillips, J.D.; Nabighian, M.N.; Smith, D.V.; Li, Y.

    2007-01-01

    The Helbig method for estimating total magnetization directions of compact sources from magnetic vector components is extended so that tensor magnetic gradient components can be used instead. Depths of the compact sources can be estimated using the Euler equation, and their dipole moment magnitudes can be estimated using a least squares fit to the vector component or tensor gradient component data. ?? 2007 Society of Exploration Geophysicists.
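
    As a rough illustration of the least-squares moment fit mentioned above (the geometry, constants, and station layout below are invented for the sketch; the paper works with measured vector-component or tensor-gradient data), note that once the source location and magnetization direction are fixed, the dipole field is linear in the moment magnitude:

```python
import math

def dipole_field(m_vec, r_vec):
    """Magnetic field of a point dipole m_vec observed at displacement r_vec."""
    k = 1e-7  # mu0 / (4 pi), SI units
    r = math.sqrt(sum(c * c for c in r_vec))
    rhat = [c / r for c in r_vec]
    m_dot_rhat = sum(a * b for a, b in zip(m_vec, rhat))
    return [k * (3.0 * m_dot_rhat * rh - mc) / r**3 for rh, mc in zip(rhat, m_vec)]

# Synthetic vector-component "measurements" from a dipole of known direction
direction = [0.0, 0.0, 1.0]  # assumed total magnetization direction
true_moment = 2.0
stations = [[10.0, 0.0, 5.0], [0.0, 8.0, 6.0], [-7.0, 3.0, 9.0]]
data = [dipole_field([true_moment * d for d in direction], r) for r in stations]

# Unit-moment template fields; the least-squares moment is <g, b> / <g, g>
templates = [dipole_field(direction, r) for r in stations]
num = sum(g * b for gs, bs in zip(templates, data) for g, b in zip(gs, bs))
den = sum(g * g for gs in templates for g in gs)
moment_hat = num / den
print(moment_hat)  # recovers 2.0 in this noise-free setting
```

    With noisy data the same one-parameter fit gives the least-squares dipole moment magnitude; the paper's Helbig and Euler steps supply the direction and depth that this sketch takes as given.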

  6. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model-error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the lengths of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
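
    The composite estimate described above, a weighted average of forecasts and backcasts across the record gap, can be sketched as follows (the linear weighting scheme is an assumption for illustration; the paper's weights depend on the forecast lead):

```python
def composite(forecast, backcast):
    """Blend a forecast series with a backcast (a forecast of the reversed
    series): trust the forecast near the start of the gap and the backcast
    near the end, with linear weights."""
    L = len(forecast)
    out = []
    for i in range(L):
        w = (L - 1 - i) / (L - 1) if L > 1 else 0.5  # weight on the forecast
        out.append(w * forecast[i] + (1.0 - w) * backcast[i])
    return out

# A 5-day gap in the record: the forecast drifts high, the backcast low
fc = [10.0, 10.5, 11.0, 11.5, 12.0]
bc = [9.0, 9.5, 10.0, 10.5, 11.0]
est = composite(fc, bc)
print(est)  # [10.0, 10.25, 10.5, 10.75, 11.0]
```

    The blend matches the forecast at the start of the gap and the backcast at its end, which is how the composite ensures the gradual transition between estimated and measured flows noted in the abstract.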

  7. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
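
    The contrast the paper draws between the current least-squares (equation-error) technique and output-error estimation can be sketched with a toy first-order model (the model, parameter grid, and noise level below are invented for illustration and are not MIT's transformer model):

```python
import random

random.seed(1)

def simulate(a, b, u, theta0=0.0):
    """Toy first-order temperature-rise model: theta[k+1] = a*theta[k] + b*u[k]."""
    theta, out = theta0, []
    for uk in u:
        theta = a * theta + b * uk
        out.append(theta)
    return out

# Synthetic record: constant load, measurement noise on the output
a_true, b_true = 0.9, 0.5
u = [1.0] * 50
meas = [y + random.gauss(0.0, 0.1) for y in simulate(a_true, b_true, u)]

# Output-error fit: simulate each candidate (a, b) forward and keep the one
# whose *simulated output* best matches the whole measured record, rather
# than regressing each sample on the previous noisy measurement
best = None
for i in range(16):
    for j in range(21):
        a, b = 0.80 + 0.01 * i, 0.40 + 0.01 * j
        sse = sum((s - m) ** 2 for s, m in zip(simulate(a, b, u), meas))
        if best is None or sse < best[0]:
            best = (sse, a, b)
print(best[1], best[2])  # close to the true (0.9, 0.5)
```

    Because the candidate model is simulated forward from its own state, measurement noise does not feed back into the regressors, which is one reason output-error methods tend to give less biased parameter estimates than equation-error least squares.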

  8. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equations with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.
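
    A minimal sketch of why the memory matters (an AR(1) process as a stand-in for the time-correlated truncation errors; all parameters are assumptions for illustration):

```python
import random
import statistics

random.seed(2)

def accumulated_error_sd(phi, n_steps=200, n_ens=1000, sd=1e-3):
    """Std. dev. of the summed local truncation error when each step's error
    follows an AR(1) process with lag-one correlation phi; the innovation is
    scaled so the per-step (stationary) spread is the same for every phi."""
    totals = []
    for _ in range(n_ens):
        e, total = 0.0, 0.0
        for _ in range(n_steps):
            e = phi * e + (1 - phi ** 2) ** 0.5 * random.gauss(0.0, sd)
            total += e
        totals.append(total)
    return statistics.stdev(totals)

sd_memoryless = accumulated_error_sd(phi=0.0)  # independent truncation errors
sd_memory = accumulated_error_sd(phi=0.8)      # time-correlated truncation errors
print(sd_memoryless < sd_memory)  # True: memory inflates the accumulated error
```

    With identical per-step error spread, positive time correlation inflates the accumulated error (by roughly sqrt((1+phi)/(1-phi)) for long runs), which is why a memory-less estimator can underestimate goal errors.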

  9. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
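
    The propagation step can be sketched with a single simplified power curve (the curve shape and parameters below are invented for illustration; the paper fits 28 measured curves by Lagrange's method and works with the full wind-speed distribution):

```python
def power_curve(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_p=2000.0):
    """Simplified turbine power curve in kW: cubic ramp up to rated speed."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_v:
        return rated_p
    return rated_p * (v**3 - cut_in**3) / (rated_v**3 - cut_in**3)

def power_error(v, rel_speed_err=0.10):
    """Propagate a relative wind-speed error through the power curve."""
    p = power_curve(v)
    hi = power_curve(v * (1.0 + rel_speed_err))
    lo = power_curve(v * (1.0 - rel_speed_err))
    return p, max(abs(hi - p), abs(lo - p))

p_ramp, err_ramp = power_error(8.0)   # on the cubic ramp: error is large
p_flat, err_flat = power_error(15.0)  # on the flat rated section: error vanishes
print(err_ramp > err_flat)  # True
```

    The propagated error depends strongly on where the operating point sits on the curve, which is why averaging over the measured wind-speed distribution (as the paper does) can yield an overall power error smaller than the naive cubic scaling would suggest.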

  10. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  11. Decorrelation of the true and estimated classifier errors in high-dimensional settings.

    PubMed

    Hanczar, Blaise; Hua, Jianping; Dougherty, Edward R

    2007-01-01

    The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three commonly used error estimators (leave-one-out cross-validation, k-fold cross-validation, and .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) known feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. We observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection or using all features, with the better correlation between the latter two showing no general trend, but differing for different models.
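
    The variance decomposition underlying this argument, Var(est - true) = Var(est) + Var(true) - 2 Cov(est, true), can be checked with a small simulation (synthetic Gaussian errors; the correlation values and spreads are illustrative, not from the paper's data):

```python
import random
import statistics

random.seed(0)

def deviation_variance(rho, n=20000, sd=0.05):
    """Sample Var(est - true) when est and true errors have correlation rho."""
    devs = []
    for _ in range(n):
        true_err = random.gauss(0.0, sd)
        # estimated error shares a fraction rho of the true error's signal
        est_err = rho * true_err + (1 - rho ** 2) ** 0.5 * random.gauss(0.0, sd)
        devs.append(est_err - true_err)
    return statistics.variance(devs)

v_corr = deviation_variance(rho=0.9)    # well-correlated (low-dimensional) case
v_decorr = deviation_variance(rho=0.1)  # decorrelated (high-dimensional) case
print(v_corr < v_decorr)  # True: decorrelation widens the deviation distribution
```

    With equal marginal variances the deviation variance is 2*sd^2*(1 - rho), so the spread of (estimated minus true) error grows as the correlation is lost even though Var(est) itself is unchanged, which is the paper's point about high dimensionality.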

  12. Error estimates for ice discharge calculated using the flux gate approach

    NASA Astrophysics Data System (ADS)

    Navarro, F. J.; Sánchez Gámez, P.

    2017-12-01

    Ice discharge to the ocean is usually estimated using the flux gate approach, in which ice flux is calculated through predefined flux gates close to the marine glacier front. However, published results usually lack a proper error estimate. In the flux calculation, both errors in cross-sectional area and errors in velocity are relevant. While there are well-established procedures for estimating the errors in velocity, the calculation of the error in the cross-sectional area requires the availability of ground penetrating radar (GPR) profiles transverse to the ice-flow direction. In this contribution, we use Operation IceBridge GPR profiles collected in Ellesmere and Devon Islands, Nunavut, Canada, to compare the cross-sectional areas estimated using various approaches with the cross-sections estimated from GPR ice-thickness data. These error estimates are combined with those for ice velocities calculated from Sentinel-1 SAR data to get the error in ice discharge. Our preliminary results suggest, regarding area, that the parabolic cross-section approaches perform better than the quartic ones, which tend to overestimate the cross-sectional area for flight lines close to the central flowline. Furthermore, the results show that regional ice-discharge estimates made using parabolic approaches provide reasonable results, but estimates for individual glaciers can have large errors, up to 20% in cross-sectional area.
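
    A back-of-the-envelope sketch of the flux-gate bookkeeping (the idealized parabolic and quartic cross-section formulas are standard results; the gate dimensions and error figures below are illustrative, not from the study):

```python
import math

def parabolic_area(width, max_depth):
    # depth profile d * (1 - (2x/w)^2)  ->  area = (2/3) * w * d
    return (2.0 / 3.0) * width * max_depth

def quartic_area(width, max_depth):
    # depth profile d * (1 - (2x/w)^4)  ->  area = (4/5) * w * d
    return (4.0 / 5.0) * width * max_depth

def discharge_rel_error(rel_area_err, rel_vel_err):
    # independent relative errors in area and velocity combine in quadrature
    return math.hypot(rel_area_err, rel_vel_err)

w, d = 4000.0, 300.0  # flux-gate width and centreline ice thickness, m
ratio = quartic_area(w, d) / parabolic_area(w, d)
print(ratio)                            # 1.2: the quartic section is 20% larger
print(discharge_rel_error(0.20, 0.05))  # ~0.21: the area error dominates
```

    For the same width and centreline thickness the quartic section is 20% larger than the parabolic one, consistent with the reported tendency of quartic approaches to overestimate area, and a 20% area error dominates a 5% velocity error in the combined discharge error.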

  13. The Role of Skull Modeling in EEG Source Imaging for Patients with Refractory Temporal Lobe Epilepsy.

    PubMed

    Montes-Restrepo, Victoria; Carrette, Evelien; Strobbe, Gregor; Gadeyne, Stefanie; Vandenberghe, Stefaan; Boon, Paul; Vonck, Kristl; Mierlo, Pieter van

    2016-07-01

    We investigated the influence of different skull modeling approaches on EEG source imaging (ESI), using data of six patients with refractory temporal lobe epilepsy who later underwent successful epilepsy surgery. Four realistic head models with different skull compartments, based on finite difference methods, were constructed for each patient: (i) Three models had skulls with compact and spongy bone compartments as well as air-filled cavities, segmented from either computed tomography (CT), magnetic resonance imaging (MRI), or a CT-template, and (ii) one model included an MRI-based skull with a single compact bone compartment. In all patients, we performed ESI of single and averaged spikes marked in the clinical 27-channel EEG by the epileptologist. To analyze at which time point the dipole estimations were closer to the resected zone, ESI was performed at two time instants: the half-rising phase and peak of the spike. The estimated sources for each model were validated against the resected area, as indicated by the postoperative MRI. Our results showed that single spike analysis was highly influenced by the signal-to-noise ratio (SNR), yielding estimations with smaller distances to the resected volume at the peak of the spike. Although averaging reduced the SNR effects, it did not always result in dipole estimations lying closer to the resection. The proposed skull modeling approaches did not lead to significant differences in the localization of the irritative zone from clinical EEG data with low spatial sampling density. Furthermore, we showed that a simple skull model (MRI-based) resulted in similar accuracy in dipole estimation compared to more complex head models (based on CT or a CT-template). Therefore, all the considered head models can be used in the presurgical evaluation of patients with temporal lobe epilepsy to localize the irritative zone from low-density clinical EEG recordings.

  14. Finite Element A Posteriori Error Estimation for Heat Conduction. Degree awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Lang, Christapher G.; Bey, Kim S. (Technical Monitor)

    2002-01-01

    This research investigates residual-based a posteriori error estimates for finite element approximations of heat conduction in single-layer and multi-layered materials. The finite element approximation, based upon hierarchical modelling combined with p-version finite elements, is described with specific application to a two-dimensional, steady state, heat-conduction problem. Element error indicators are determined by solving an element equation for the error with the element residual as a source, and a global error estimate in the energy norm is computed by collecting the element contributions. Numerical results of the performance of the error estimate are presented by comparisons to the actual error. Two methods are discussed and compared for approximating the element boundary flux. The equilibrated flux method provides more accurate results for estimating the error than the average flux method. The error estimation is applied to multi-layered materials with a modification to the equilibrated flux method to approximate the discontinuous flux along a boundary at the material interfaces. A directional error indicator is developed which distinguishes between the hierarchical modeling error and the finite element error. Numerical results are presented for single-layered materials which show that the directional indicators accurately determine which contribution to the total error dominates.
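
Schematically, the element-residual procedure described above consists of local problems driven by the residual, element indicators, and a global energy-norm estimate (the notation here is illustrative, not taken from the thesis itself):

```latex
% Local problem on element K: the error approximation e_K is driven by the
% interior residual r and the approximated boundary flux \hat q_n
% (averaged or equilibrated):
a_K(e_K, v) \;=\; \int_K r\, v \,dx \;+\; \int_{\partial K} \big(\hat q_n - q_{h,n}\big)\, v \,ds
\qquad \forall\, v \in V_K ,
% Element indicators and the collected global estimate:
\qquad
\eta_K = \lVert e_K \rVert_{E,K},
\qquad
\lVert e \rVert_E \;\approx\; \Big( \sum_K \eta_K^{2} \Big)^{1/2} .
```

The equilibrated versus averaged choice mentioned in the abstract enters only through how the boundary flux term is approximated.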

  15. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    NASA Astrophysics Data System (ADS)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models can use atmospheric concentration measurements to estimate surface fluxes. This study is an evaluation of the errors in a regional flux inversion model for different provinces of Canada, Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, the synthetic data experiment analyses examined the impacts of the errors from the Bayesian optimisation method, prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by the Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM) methods. The CFM method results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. Experiment results show that the estimation error increases with the number of sub-regions using the CFM method. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region of AB/SK combined and the eastern region of ON are 11 and 4 respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %) respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) resulting from transport model error alone. When prior and transport model errors co-exist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This result indicates that the estimation errors are dominated by the transport model error; the individual error sources can partially cancel each other and propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target.
The systematic evaluation of the different components of the inversion model can help in understanding the posterior estimates and percentage errors. Stable and realistic sub-regional and monthly flux estimates can be obtained for the western region of AB/SK, but not for the eastern region of ON. This indicates that a real observation-based inversion for the annual provincial emissions is likely to work for the western region, whereas improvements to the current inversion setup are needed before a real inversion is performed for the eastern region.
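
For Gaussian errors, the CFM step reduces to minimising a standard Bayesian cost function with a closed-form solution. A minimal sketch with synthetic data (the transport matrix H, the error covariances B and R, and all sizes are hypothetical stand-ins, not the study's configuration):

```python
import numpy as np

def cfm_scaling_factors(H, z, s_prior, B, R):
    """Closed-form minimiser of the Bayesian cost function
    J(s) = (z - H s)^T R^-1 (z - H s) + (s - s_p)^T B^-1 (s - s_p),
    i.e. a CFM-style estimate of the sub-regional scaling factors s."""
    Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
    A = Binv + H.T @ Rinv @ H
    return s_prior + np.linalg.solve(A, H.T @ Rinv @ (z - H @ s_prior))

rng = np.random.default_rng(1)
n_obs, n_reg = 50, 4                     # hypothetical problem sizes
H = rng.random((n_obs, n_reg))           # stand-in transport/footprint matrix
s_true = np.array([1.2, 0.8, 1.0, 1.5])  # target scaling factors
z = H @ s_true + 0.05 * rng.standard_normal(n_obs)
s_prior = np.ones(n_reg)

# The estimate is sensitive to the assumed ratio of the model-observation
# mismatch variance (R) to the prior flux error variance (B).
for r_var in (0.01, 1.0):
    s_hat = cfm_scaling_factors(H, z, s_prior,
                                np.eye(n_reg), r_var * np.eye(n_obs))
    print(r_var, np.round(s_hat, 2))
```

The loop illustrates the sensitivity noted above: the solution shifts between data-driven and prior-driven as the assumed R/B ratio changes.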

  16. Magnetization in the South Pole-Aitken basin: Implications for the lunar dynamo and true polar wander

    DTIC Science & Technology

    2016-10-14

    We introduce new Monte Carlo methods to quantify errors in our inversions arising from Gaussian time-dependent changes in the external field and the...all study areas; Appendix A shows details of magnetic inversions for all these areas (see Sections 2.3 and 2.4). Supplementary Appendix B shows maps...of the total field for all available days that were considered, but not used. 2.3. Inversion algorithm 1: defined dipoles, constant magnetization (DD)

  17. Decomposing the electromagnetic response of magnetic dipoles to determine the geometric parameters of a dipole conductor

    NASA Astrophysics Data System (ADS)

    Desmarais, Jacques K.; Smith, Richard S.

    2016-03-01

    A novel automatic data interpretation algorithm is presented for modelling airborne electromagnetic (AEM) data acquired over resistive environments, using a single-component (vertical) transmitter, where the position and orientation of a dipole conductor is allowed to vary in three dimensions. The algorithm assumes that the magnetic fields produced from compact vortex currents can be expressed as a linear combination of the fields arising from dipoles in the subsurface oriented parallel to the [1, 0, 0], [0, 1, 0], and [0, 0, 1] unit vectors. In this manner, AEM responses can be represented as 12 terms. The relative size of each term in the decomposition can be used to determine geometrical information about the orientation of the subsurface conductivity structure. The geometrical parameters of the dipole (location, depth, dip, strike) are estimated using a combination of a look-up table and a matrix inverted in a least-squares sense. Tests on 703 synthetic models show that the algorithm is capable of extracting most of the correct geometrical parameters of a dipole conductor when three-component receiver data is included in the interpretation procedure. The algorithm is unstable when the target is perfectly horizontal, as the strike is undefined. Ambiguities may occur in predicting the orientation of the dipole conductor if y-component data is excluded from the analysis. Application of our approach to an anomaly on line 15 of the Reid Mahaffy test site yields geometrical parameters in reasonable agreement with previous authors. However, our algorithm provides additional information on the strike and offset from the traverse line of the conductor. Disparities in the values of predicted dip and depth are within the range of numerical precision. The index of fit was better when strike and offset were included in the interpretation procedure.
Tests on the data from line 15701 of the Chibougamau MEGATEM survey show that the algorithm is applicable to situations where three-component AEM data is available.
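
A heavily simplified illustration of the decomposition idea: static dipole fields in place of the full frequency-domain AEM response, and three basis terms rather than twelve (all geometry and moments below are hypothetical):

```python
import numpy as np

def dipole_field(m, r_dipole, r_obs):
    """Static-dipole field at r_obs for a dipole of moment m at r_dipole:
    B ~ (3 (m.rhat) rhat - m) / |r|^3, with constants omitted."""
    r = np.asarray(r_obs, float) - np.asarray(r_dipole, float)
    d = np.linalg.norm(r)
    rhat = r / d
    return (3.0 * np.dot(m, rhat) * rhat - np.asarray(m, float)) / d**3

# Basis: fields of the three unit dipoles [1,0,0], [0,1,0], [0,0,1] at a
# hypothetical conductor location, sampled along a synthetic flight line.
loc = np.array([0.0, 10.0, -50.0])
obs = [np.array([x, 0.0, 0.0]) for x in np.linspace(-100.0, 100.0, 41)]
basis = np.array([[dipole_field(e, loc, p) for p in obs]
                  for e in np.eye(3)]).reshape(3, -1).T

# Synthetic "measured" response from a tilted dipole, then a least-squares
# fit for the decomposition coefficients; their relative sizes encode the
# dipole orientation (dip and strike).
m_true = np.array([0.3, 0.0, 0.9])
data = np.array([dipole_field(m_true, loc, p) for p in obs]).ravel()
coef, *_ = np.linalg.lstsq(basis, data, rcond=None)
print(np.round(coef, 3))
```

Because the field is linear in the moment, the least-squares fit recovers the orientation exactly here; the instability the authors report for horizontal targets corresponds to the strike becoming unidentifiable in this fit.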

  18. Direct evidence of three-body interactions in a cold Rb85 Rydberg gas

    NASA Astrophysics Data System (ADS)

    Han, Jianing

    2010-11-01

    Cold Rydberg atoms trapped in a magneto-optical trap (MOT) are not isolated and they interact through dipole-dipole and multipole-multipole interactions. First-order dipole-dipole interactions and van der Waals interactions between two atoms have been intensively studied. However, the facts that the first-order dipole-dipole interactions and van der Waals interactions show the same size of broadening [A. Reinhard, K. C. Younge, T. C. Liebisch, B. Knuffman, P. R. Berman, and G. Raithel, Phys. Rev. Lett. 100, 233201 (2008)] and there are transitions between two dimer states [S. M. Farooqi, D. Tong, S. Krishnan, J. Stanojevic, Y. P. Zhang, J. R. Ensher, A. S. Estrin, C. Boisseau, R. Cote, E. E. Eyler, and P. L. Gould, Phys. Rev. Lett. 91, 183002 (2003); K. R. Overstreet, Arne Schwettmann, Jonathan Tallant, and James P. Shaffer, Phys. Rev. A 76, 011403(R) (2007)] cannot be explained by the two-atom picture. The purpose of this article is to show the few-body nature of a dense cold Rydberg gas by studying the molecular-state microwave spectra. Specifically, three-body energy levels have been calculated. Moreover, the transition from three-body energy levels to two-body coupled molecular energy levels and to isolated atomic energy levels as a function of the internuclear spacing is studied. Finally, single-body, two-body, and three-body interaction regions are estimated according to the experimental data. The results reported here provide useful information for plasma formation, further cooling, and superfluid formation.

  19. Formation of iron metal and grain coagulation in the solar nebula

    NASA Technical Reports Server (NTRS)

    Nuth, Joseph A., III; Berg, Otto

    1994-01-01

    The interstellar grain population in the giant molecular cloud from which the sun formed contained little or no iron metal. However, thermal processing of individual interstellar silicates in the solar nebula is likely to result in the formation of a population of very small iron metal grains. If such grains are exposed to even transient magnetic fields, each will become a tiny dipole magnet capable of interacting with other such dipoles over spatial scales orders of magnitude larger than the radii of individual grains. Such interactions will greatly increase the coagulation cross-section for this grain population. Furthermore, the magnetic attraction between two iron dipoles will significantly increase both the collisional sticking coefficient and the strength of the interparticle binding energy for iron aggregates. Formation of iron metal may therefore be a key step in the aggregation of planetesimals in a protoplanetary nebula. Such aggregates may have already been observed in protoplanetary systems. The enhancement in the effective interaction distance between two magnetic dipoles is directly proportional to the strength of the magnetic dipoles and inversely proportional to the relative velocity. It is less sensitive to the reduced mass of the interacting particles (∝ M(exp -1/2)) and almost insensitive to the initial number density of magnetic dipoles (∝ n(sub o)(exp 1/6)). We are in the process of measuring the degree of coagulation in our condensation flow apparatus as a function of applied magnetic field and correlating these results by means of magnetic remanence acquisition measurements on our iron grains with the strength of the magnetic field to which the grains are exposed. Results of our magnetic remanence acquisition measurements and the magnetically induced coagulation study will be presented as well as an estimate of the importance of such processes near the nebular midplane.
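
Collecting the quoted scalings into a single proportionality (the symbols are illustrative: $\mu$ for dipole strength, $v_{\mathrm{rel}}$ for relative velocity, $M$ for reduced mass, $n_{0}$ for initial dipole number density):

```latex
R_{\mathrm{eff}} \;\propto\; \frac{\mu}{v_{\mathrm{rel}}}\; M^{-1/2}\; n_{0}^{1/6}
```

The weak $n_{0}^{1/6}$ dependence is why the mechanism is argued to operate over a wide range of nebular grain densities.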

  20. Finite difference modelling of dipole acoustic logs in a poroelastic formation with anisotropic permeability

    NASA Astrophysics Data System (ADS)

    He, Xiao; Hu, Hengshan; Wang, Xiuming

    2013-01-01

    Sedimentary rocks can exhibit strong permeability anisotropy due to layering, pre-stresses and the presence of aligned microcracks or fractures. In this paper, we develop a modified cylindrical finite-difference algorithm to simulate the borehole acoustic wavefield in a saturated poroelastic medium with transverse isotropy of permeability and tortuosity. A linear interpolation process is proposed to guarantee the leapfrog finite difference scheme for the generalized dynamic equations and Darcy's law for anisotropic porous media. First, the modified algorithm is validated by comparison against the analytical solution when the borehole axis is parallel to the symmetry axis of the formation. The same algorithm is then used to numerically model the dipole acoustic log in a borehole with its axis being arbitrarily deviated from the symmetry axis of transverse isotropy. The simulation results show that the amplitudes of flexural modes vary with the dipole orientation because the permeability tensor of the formation is dependent on the wellbore azimuth. It is revealed that the attenuation of the flexural wave increases approximately linearly with the radial permeability component in the direction of the transmitting dipole. Particularly, when the borehole axis is perpendicular to the symmetry axis of the formation, it is possible to estimate the anisotropy of permeability by evaluating attenuation of the flexural wave using a cross-dipole sonic logging tool according to the results of sensitivity analyses. Finally, the dipole sonic logs in a deviated borehole surrounded by a stratified porous formation are modelled using the proposed finite difference code. Numerical results show that the arrivals and amplitudes of transmitted flexural modes near the layer interface are sensitive to the wellbore inclination.

  1. Simultaneous treatment of unspecified heteroskedastic model error distribution and mismeasured covariates for restricted moment models.

    PubMed

    Garcia, Tanya P; Ma, Yanyuan

    2017-10-01

    We develop consistent and efficient estimation of parameters in general regression models with mismeasured covariates. We assume the model error and covariate distributions are unspecified, and the measurement error distribution is a general parametric distribution with unknown variance-covariance. We construct root-n consistent, asymptotically normal and locally efficient estimators using the semiparametric efficient score. We do not estimate any unknown distribution or model error heteroskedasticity. Instead, we form the estimator under possibly incorrect working distribution models for the model error, error-prone covariate, or both. Empirical results demonstrate robustness to different incorrect working models in homoscedastic and heteroskedastic models with error-prone covariates.

  2. New Experiment to Measure the Electron Electric Dipole Moment

    NASA Technical Reports Server (NTRS)

    Kittle, Melanie

    2003-01-01

    An electron can possess an electric dipole moment (edm) only if time reversal symmetry (T) is violated. No edm of any particle has yet been discovered. CP-violation, equivalent to T-violation by the CPT theorem, does occur in Kaon decays and can be accounted for by the standard model. However, this mechanism leads to an electron edm d(sub e) of the order of 10(exp -38) e cm, whereas the current experimental bound on d(sub e) is about 10(exp -27) e cm. However, well-motivated extensions of the standard model such as supersymmetric theories do predict that d(sub e) could be as large as the current bound. In addition, CP violation in the early universe is required to explain the preponderance of matter over anti-matter, but the exact mechanism of this CP violation is unclear. For these reasons, we are undertaking a new experimental program to determine d(sub e) to an improved accuracy of 10(exp -29) e cm. Our experiment will use laser-cooled, trapped Cesium atoms to measure the atomic edm d(sub Cs) that occurs if d(sub e) is not zero. In order to do this, we will measure the energy splitting between the atom's spin states in parallel electric and magnetic fields. The signature of an edm would be a linear dependence of the splitting on the electric field E due to the interaction - d(sub Cs) dot E. Our measurement will be much more sensitive than previous measurements because atoms can be stored in the trap for tens of seconds, allowing for much narrower Zeeman resonance linewidths. Also, our method eliminates the most important systematic errors, proportional to atomic velocity, which have limited previous experiments. In this presentation, we will describe the design of our new apparatus, which is presently under construction. An important feature of our experimental apparatus is that magnetic field noise will be suppressed to a very low value of the order of 1 fT/(Hz)(exp 1/2).
This requires careful attention to the Johnson noise currents in the chamber, which have not been important in previous experiments. In addition we will present estimates of the limits of the various errors that we expect for our experiment.
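
In schematic form (ignoring the spin-structure factors specific to Cesium), the edm signature described above is a frequency shift linear in the applied field:

```latex
H_{\mathrm{int}} \;=\; -\,\boldsymbol{\mu}\cdot\mathbf{B} \;-\; \mathbf{d}_{\mathrm{Cs}}\cdot\mathbf{E},
\qquad
h\,\delta\nu_{\mathrm{edm}} \;\propto\; d_{\mathrm{Cs}}\,E .
```

Reversing E flips the sign of only the edm term while the magnetic (Zeeman) splitting is unchanged, which is how the linear-in-E contribution is isolated experimentally.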

  3. Feasibility of clinical magnetoencephalography (MEG) functional mapping in the presence of dental artefacts.

    PubMed

    Hillebrand, A; Fazio, P; de Munck, J C; van Dijk, B W

    2013-01-01

    To evaluate the viability of MEG source reconstruction in the presence of large interference due to orthodontic material. We recorded the magnetic fields following a simple hand movement and following electrical stimulation of the median nerve (somatosensory evoked field, SEF). These two tasks were performed twice, once with and once without artificial dental artefacts. Temporal Signal Space Separation (tSSS) was applied to spatially filter the data and source reconstruction was performed according to standard procedures for pre-surgical mapping of eloquent cortex, applying dipole fitting to the SEF data and beamforming to the hand movement data. Comparing the data with braces to the data without braces, the observed distances between the activations following hand movement in the two conditions were on average 6.4 and 4.5 mm for the left and right hand, respectively, whereas the dipole localisation errors for the SEF were 4.1 and 5.4 mm, respectively. Without tSSS it was generally not possible to obtain reliable dipole fit or beamforming results when wearing braces. We confirm that tSSS is a required and effective pre-processing step for data recorded with the Elekta-MEG system. Moreover, we have shown that even the presence of large interference from orthodontic material does not significantly alter the results from dipole localisation or beamformer analysis, provided the data are spatially filtered by tSSS. State-of-the-art signal processing techniques enable the use of MEG for pre-surgical evaluation in a much larger clinical population than previously thought possible. Copyright © 2012 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  4. Magnetoencephalography Phantom Comparison and Validation: Hospital Universiti Sains Malaysia (HUSM) Requisite.

    PubMed

    Omar, Hazim; Ahmad, Alwani Liyan; Hayashi, Noburo; Idris, Zamzuri; Abdullah, Jafri Malin

    2015-12-01

    Magnetoencephalography (MEG) has been extensively used to measure small-scale neuronal brain activity. Although it is widely acknowledged as a sensitive tool for deciphering brain activity and source localisation, the accuracy of the MEG system must be critically evaluated. Typically, on-site calibration with the provided phantom (LocalPhantom) is used. However, this method is still questionable due to the uncertainty that may originate from the phantom itself. Ideally, the validation of MEG data measurements would require cross-site comparability. A simple method of phantom testing was used twice in addition to a measurement taken with a calibrated reference phantom (RefPhantom) obtained from Elekta Oy of Helsinki, Finland. The comparisons of two main aspects were made in terms of the dipole moment (Qpp) and the difference in the dipole distance from the origin (d) after the tests of statistically equal means and variance were confirmed. The result of Qpp measurements for the LocalPhantom and RefPhantom were 978 (SD 24) nAm and 988 (SD 32) nAm, respectively, and were still optimally within the accepted range of 900 to 1100 nAm. Moreover, the shifted d results for the LocalPhantom and RefPhantom were 1.84 mm (SD 0.53) and 2.14 mm (SD 0.78), respectively, and these values were below the maximum acceptance limit of 5.0 mm from the nominal dipole location. The LocalPhantom seems to outperform the reference phantom as indicated by the small standard error of the former (SE 0.094) compared with the latter (SE 0.138). The result indicated that the HUSM MEG system was in excellent working condition in terms of the dipole magnitude and localisation measurements as these values passed the acceptance limits criteria of the phantom test.

  5. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers.

    PubMed

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-11-18

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Any inaccuracy in tracking error estimation will decrease the signal tracking ability of the tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or a Kalman filter-based pre-filter. The pre-filter can be divided into two categories: coherent and non-coherent. This paper focuses on the performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration (which are the basis of tracking error estimation) are analyzed in detail. After that, the probability distribution of estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced to the non-coherent pre-filter design, and thus its effective working range for carrier phase error estimation extends from (-0.25 cycle, 0.25 cycle) to (-0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of discriminator, coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through the carefully designed experiment scenario. The pre-filter outperforms traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with coherent pre-filter.
The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz.
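
The four-quadrant arctangent discriminator analysed above can be sketched in a few lines; a minimal noise-free example (the correlator values are synthetic):

```python
import math

def atan2_phase_discriminator(I, Q):
    """Four-quadrant arctangent (ATAN2) carrier phase discriminator:
    maps prompt correlator outputs (I, Q) to a phase error in cycles,
    unambiguous over (-0.5, 0.5) cycle; the two-quadrant atan(Q/I)
    form only covers (-0.25, 0.25) cycle."""
    return math.atan2(Q, I) / (2.0 * math.pi)

# Noise-free correlator pair at a true carrier phase error of 0.3 cycle,
# outside the two-quadrant discriminator's unambiguous range:
phi = 0.3
I = math.cos(2.0 * math.pi * phi)
Q = math.sin(2.0 * math.pi * phi)
print(atan2_phase_discriminator(I, Q))
```

The (-0.5, 0.5) cycle range of ATAN2 is exactly the extended working range the FDE-augmented pre-filter achieves for carrier phase error estimation.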

  6. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters

    PubMed Central

    Park, Chan Gook

    2018-01-01

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms. PMID:29690539
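
A simplified sketch of the zero-velocity (stance) detection that ZUPT-based PDR relies on; the windowing scheme and thresholds below are illustrative, not the paper's:

```python
import numpy as np

def zupt_stance_detector(accel_norm, gyro_norm, win=5,
                         acc_var_thresh=0.5, gyro_thresh=0.6):
    """Flag stance-phase (zero-velocity) samples of a foot-mounted IMU:
    a sample is stance when the windowed accelerometer-magnitude variance
    and the gyro magnitude are both small. Thresholds are illustrative."""
    n = len(accel_norm)
    stance = np.zeros(n, dtype=bool)
    for i in range(n):
        lo, hi = max(0, i - win), min(n, i + win + 1)
        stance[i] = (np.var(accel_norm[lo:hi]) < acc_var_thresh
                     and gyro_norm[i] < gyro_thresh)
    return stance

# Synthetic gait: 20 quiet stance samples followed by a swing burst.
acc = np.array([9.81] * 20 + [12.0, 7.5, 14.0, 6.0] * 5)
gyr = np.array([0.05] * 20 + [3.0] * 20)
stance = zupt_stance_detector(acc, gyr)
print(stance.astype(int))
```

During the flagged stance intervals the velocity is known to be zero, which is the pseudo-measurement the INS-EKF-ZUPT stage uses to bound drift.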

  7. Physical Validation of TRMM TMI and PR Monthly Rain Products Over Oklahoma

    NASA Technical Reports Server (NTRS)

    Fisher, Brad L.

    2004-01-01

    The Tropical Rainfall Measuring Mission (TRMM) provides monthly rainfall estimates using data collected by the TRMM satellite. These estimates cover a substantial fraction of the earth's surface. The physical validation of TRMM estimates involves corroborating the accuracy of spaceborne estimates of areal rainfall by inferring errors and biases from ground-based rain estimates. The TRMM error budget consists of two major sources of error: retrieval and sampling. Sampling errors are intrinsic to the process of estimating monthly rainfall and occur because the satellite extrapolates monthly rainfall from a small subset of measurements collected only during satellite overpasses. Retrieval errors, on the other hand, are related to the process of collecting measurements while the satellite is overhead. One of the big challenges confronting the TRMM validation effort is how to best estimate these two main components of the TRMM error budget, which are not easily decoupled. This four-year study computed bulk sampling and retrieval errors for the TRMM microwave imager (TMI) and the precipitation radar (PR) by applying a technique that sub-samples gauge data at TRMM overpass times. Gridded monthly rain estimates are then computed from the monthly bulk statistics of the collected samples, providing a sensor-dependent gauge rain estimate that is assumed to include a TRMM equivalent sampling error. The sub-sampled gauge rain estimates are then used in conjunction with the monthly satellite and gauge (without sub-sampling) estimates to decouple retrieval and sampling errors. The computed mean sampling errors for the TMI and PR were 5.9% and 7.7%, respectively, in good agreement with theoretical predictions. The PR year-to-year retrieval biases exceeded corresponding TMI biases, but it was found that these differences were partially due to negative TMI biases during cold months and positive TMI biases during warm months.
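
The sub-sampling idea can be sketched with synthetic data; the rain statistics, overpass count, and the injected 5% retrieval bias are all illustrative, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical month of hourly gauge rain rates (mm/hr): mostly dry, with
# exponential rain intensities in ~10% of hours.
hours = 720
gauge = np.where(rng.random(hours) < 0.1, rng.exponential(2.0, hours), 0.0)

# The satellite sees the field only during overpasses (~2 per day here).
overpass = np.sort(rng.choice(hours, size=60, replace=False))

monthly_gauge = gauge.sum()                          # full-record "truth"
monthly_subsampled = gauge[overpass].mean() * hours  # gauge at overpass times
satellite = 1.05 * monthly_subsampled                # inject a 5% retrieval bias

# Sub-sampling the gauge at overpass times decouples the two error terms:
sampling_error = monthly_subsampled - monthly_gauge  # due to sparse visits
retrieval_error = satellite - monthly_subsampled     # due to the sensor itself
print(round(float(sampling_error), 1), round(float(retrieval_error), 1))
```

Comparing the satellite against the sub-sampled gauge isolates retrieval error, while comparing the sub-sampled gauge against the full gauge record isolates sampling error.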

  8. Smooth empirical Bayes estimation of observation error variances in linear systems

    NASA Technical Reports Server (NTRS)

    Martz, H. F., Jr.; Lian, M. W.

    1972-01-01

    A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.

  9. Elimination of Emergency Department Medication Errors Due To Estimated Weights.

    PubMed

    Greenwalt, Mary; Griffen, David; Wilkerson, Jim

    2017-01-01

    From 7/2014 through 6/2015, 10 emergency department (ED) medication dosing errors were reported through the electronic incident reporting system of an urban academic medical center. Analysis of these medication errors identified inaccurate estimated weight on patients as the root cause. The goal of this project was to reduce weight-based dosing medication errors due to inaccurate estimated weights on patients presenting to the ED. Chart review revealed that 13.8% of estimated weights documented on admitted ED patients varied more than 10% from subsequent actual admission weights recorded. A random sample of 100 charts containing estimated weights revealed 2 previously unreported significant medication dosage errors (a significant-error rate of 0.02). Key improvements included removing barriers to weighing ED patients, storytelling to engage staff and change culture, and removal of the estimated weight documentation field from the ED electronic health record (EHR) forms. With these improvements, estimated weights on ED patients, and the resulting medication errors, were eliminated.

  10. An error-based micro-sensor capture system for real-time motion estimation

    NASA Astrophysics Data System (ADS)

    Yang, Lin; Ye, Shiwei; Wang, Zhibo; Huang, Zhipei; Wu, Jiankang; Kong, Yongmei; Zhang, Li

    2017-10-01

    A wearable micro-sensor motion capture system with 16 IMUs and an error-compensatory complementary filter algorithm for real-time motion estimation has been developed to acquire accurate 3D orientation and displacement in real life activities. In the proposed filter algorithm, the gyroscope bias error, orientation error and magnetic disturbance error are estimated and compensated, significantly reducing the orientation estimation error due to sensor noise and drift. Displacement estimation, especially for activities such as jumping, has been the challenge in micro-sensor motion capture. An adaptive gait phase detection algorithm has been developed to accommodate accurate displacement estimation in different types of activities. The performance of this system is benchmarked with respect to the results of VICON optical capture system. The experimental results have demonstrated effectiveness of the system in daily activities tracking, with estimation error 0.16 ± 0.06 m for normal walking and 0.13 ± 0.11 m for jumping motions. Research supported by the National Natural Science Foundation of China (Nos. 61431017, 81272166).
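
A minimal one-axis illustration of the error-compensation idea (a basic complementary filter, far simpler than the 16-IMU system described; the bias and gain values are illustrative):

```python
def complementary_tilt(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """One-axis complementary filter: integrate the gyro for short-term
    accuracy and lean on the accelerometer tilt angle at low frequency,
    which bounds the error caused by gyro bias drift."""
    angle = accel_angle[0]
    out = []
    for w, a in zip(gyro_rate, accel_angle):
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        out.append(angle)
    return out

# Stationary sensor with a constant 0.5 deg/s gyro bias: pure integration
# drifts without bound, while the filtered estimate stays near level.
n, dt = 5000, 0.01
gyro = [0.5] * n            # biased gyro rate, deg/s
acc = [0.0] * n             # accelerometer tilt says "level"
filtered = complementary_tilt(gyro, acc, dt)
drift_only = sum(w * dt for w in gyro)
print(round(drift_only, 1), round(filtered[-1], 3))
```

The filtered estimate settles to a small bounded offset instead of growing linearly, which is the mechanism behind the bias and drift compensation described above.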

  11. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the problem studied here, the truth model uses gravity with spherical, J2 and J4 terms, plus a standard exponential atmosphere with simple diurnal and random-walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem, a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
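For a linear measurement model, the idea of scaling the theoretical covariance by the average weighted residual variance can be sketched as follows. This is a simplified stand-in for the batch orbit-determination formulation; `H`, `y`, and `w` are a generic design matrix, measurement vector, and assumed weights, not quantities from the paper.

```python
import numpy as np

def wls_with_empirical_covariance(H, y, w):
    """Weighted least squares returning both the theoretical and a
    residual-scaled ('empirical') state error covariance matrix."""
    W = np.diag(w)
    info = H.T @ W @ H
    P_theory = np.linalg.inv(info)      # maps only the *assumed* obs errors
    x_hat = P_theory @ H.T @ W @ y
    r = y - H @ x_hat                   # measurement residuals carry all errors
    # average weighted residual variance scales the theoretical covariance,
    # folding unmodeled error sources into the uncertainty estimate
    s2 = (r @ W @ r) / len(y)
    P_emp = s2 * P_theory
    return x_hat, P_theory, P_emp
```

If the assumed measurement weights understate the actual noise (or the model is imperfect), the residuals inflate `s2` and hence the empirical covariance, while the theoretical covariance stays unchanged.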

  12. Neutron and proton electric dipole moments from Nf = 2 + 1 domain-wall fermion lattice QCD

    DOE PAGES

    Shintani, Eigo; Blum, Thomas; Izubuchi, Taku; ...

    2016-05-05

    We present a lattice calculation of the neutron and proton electric dipole moments (EDMs) with Nf = 2 + 1 flavors of domain-wall fermions. The neutron and proton EDM form factors are extracted from three-point functions at the next-to-leading order in the θ vacuum of QCD. In this computation, we use pion masses of 330 and 420 MeV on 2.7 fm³ lattices with the Iwasaki gauge action, and a 170 MeV pion on a 4.6 fm³ lattice with the I-DSDR gauge action, all generated by the RBC and UKQCD collaborations. The all-mode-averaging technique enables an efficient, high-statistics calculation; however, the statistical errors on our results are still relatively large, so we investigate a new direction to reduce them: reweighting with the local topological charge density, which appears promising. Furthermore, we discuss the chiral behavior and finite-size effects of the EDMs in the context of baryon chiral perturbation theory.

  13. Measurements of the cosmic background radiation

    NASA Technical Reports Server (NTRS)

    Lubin, P.; Villela, T.

    1987-01-01

    Maps of the large-scale structure (theta greater than 6 deg) of the cosmic background radiation covering 90 percent of the sky are now available. The data show a very strong 50-100 sigma (statistical error) dipole component, interpreted as being due to our motion, with a direction of alpha = 11.5 ± 0.15 hours, delta = -5.6 ± 2.0 deg. The inferred direction of the velocity of our galaxy relative to the cosmic background radiation is alpha = 10.6 ± 0.3 hours, delta = -2.3 ± 5 deg. This is 44 deg from the center of the Virgo cluster. After removing the dipole component, the data show a galactic signature but no apparent residual structure. An autocorrelation of the residual data, after subtraction of the galactic component from combined Berkeley (3 mm) and Princeton (12 mm) data sets, shows no apparent structure from 10 to 180 deg, with an rms of 0.01 mK². At the 90 percent confidence level, a limit of 0.00007 is placed on a quadrupole component.

  14. Dipole excitation of surface plasmon on a conducting sheet: Finite element approximation and validation

    NASA Astrophysics Data System (ADS)

    Maier, Matthias; Margetis, Dionisios; Luskin, Mitchell

    2017-06-01

    We formulate and validate a finite element approach to the propagation of a slowly decaying electromagnetic wave, called surface plasmon-polariton, excited along a conducting sheet, e.g., a single-layer graphene sheet, by an electric Hertzian dipole. By using a suitably rescaled form of time-harmonic Maxwell's equations, we derive a variational formulation that enables a direct numerical treatment of the associated class of boundary value problems by appropriate curl-conforming finite elements. The conducting sheet is modeled as an idealized hypersurface with an effective electric conductivity. The requisite weak discontinuity for the tangential magnetic field across the hypersurface can be incorporated naturally into the variational formulation. We carry out numerical simulations for an infinite sheet with constant isotropic conductivity embedded in two spatial dimensions; and validate our numerics against the closed-form exact solution obtained by the Fourier transform in the tangential coordinate. Numerical aspects of our treatment such as an absorbing perfectly matched layer, as well as local refinement and a posteriori error control are discussed.

  15. The Method of Fundamental Solutions using the Vector Magnetic Dipoles for Calculation of the Magnetic Fields in the Diagnostic Problems Based on Full-Scale Modelling Experiment

    NASA Astrophysics Data System (ADS)

    Bakhvalov, Yu A.; Grechikhin, V. V.; Yufanova, A. L.

    2016-04-01

    The article describes the calculation of magnetic fields in technical-system diagnostic problems based on full-scale modeling experiments. Use of the gridless method of fundamental solutions and its variants, in combination with grid methods (finite differences and finite elements), considerably reduces the dimensionality of the field-calculation task and hence the calculation time. Fictitious magnetic charges are used when implementing the method. Much attention is also given to calculation accuracy: errors occur when the distance between the charges is chosen incorrectly. The authors propose using vector magnetic dipoles to improve the accuracy of magnetic field calculations, and examples of this approach are given. The article presents research results that support recommending this approach, within the method of fundamental solutions, for full-scale modeling tests of technical systems.

  16. Silicon quantum processor with robust long-distance qubit couplings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tosi, Guilherme; Mohiyaddin, Fahd A.; Schmitt, Vivien

    Practical quantum computers require a large network of highly coherent qubits, interconnected in a design robust against errors. Donor spins in silicon provide state-of-the-art coherence and quantum gate fidelities, in a platform adapted from industrial semiconductor processing. Here we present a scalable design for a silicon quantum processor that does not require precise donor placement and leaves ample space for the routing of interconnects and readout devices. We introduce the flip-flop qubit, a combination of the electron-nuclear spin states of a phosphorus donor that can be controlled by microwave electric fields. Two-qubit gates exploit a second-order electric dipole-dipole interaction, allowing selective coupling beyond the nearest neighbor, at separations of hundreds of nanometers, while microwave resonators can extend the entanglement to macroscopic distances. We predict gate fidelities within fault-tolerance thresholds using realistic noise models. This design provides a realizable blueprint for scalable spin-based quantum computers in silicon.

  17. Self-replication with magnetic dipolar colloids

    NASA Astrophysics Data System (ADS)

    Dempster, Joshua M.; Zhang, Rui; Olvera de la Cruz, Monica

    2015-10-01

    Colloidal self-replication represents an exciting research frontier in soft matter physics. Currently, all reported self-replication schemes involve coating colloidal particles with stimuli-responsive molecules to allow switchable interactions. In this paper, we introduce a scheme using ferromagnetic dipolar colloids and preprogrammed external magnetic fields to create an autonomous self-replication system. Interparticle dipole-dipole forces and periodically varying weak-strong magnetic fields cooperate to drive colloid monomers from the solute onto templates, bind them into replicas, and dissolve template complexes. We present three general design principles for autonomous linear replicators, derived from a focused study of a minimalist sphere-dimer magnetic system in which single binding sites allow formation of dimeric templates. We show via statistical models and computer simulations that our system exhibits nonlinear growth of templates and produces nearly exponential growth (low error rate) upon adding an optimized competing electrostatic potential. We devise experimental strategies for constructing the required magnetic colloids based on documented laboratory techniques. We also present qualitative ideas about building more complex self-replicating structures utilizing magnetic colloids.

  18. Magnetic localization and orientation of the capsule endoscope based on a random complex algorithm.

    PubMed

    He, Xiaoqi; Zheng, Zizhao; Hu, Chao

    2015-01-01

    The development of the capsule endoscope has made possible the examination of the whole gastrointestinal tract without much pain. However, there are still some important problems to be solved, among which one important problem is the localization of the capsule. Currently, magnetic positioning technology is a suitable method for capsule localization, and it depends on a reliable system and algorithm. In this paper, based on the magnetic dipole model as well as a magnetic sensor array, we propose nonlinear optimization algorithms using a random complex algorithm, applied to the nonlinear function of the dipole, to determine the three-dimensional position parameters and two-dimensional direction parameters. The stability and the anti-noise ability of the algorithm are compared with those of the Levenberg-Marquardt algorithm. The simulation and experiment results show that, in terms of the error level of the initial guess of the magnet location, the random complex algorithm is more accurate, more stable, and has a higher "denoise" capacity, with a larger range for initial guess values.
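A toy version of dipole-based localization: model the magnet as a point dipole and fit its position to sensor-array readings by minimizing squared field residuals. The random complex optimizer of the paper is replaced here by a plain annealed random search, and the sensor layout, search box, and known moment are illustrative assumptions.

```python
import numpy as np

MU0_4PI = 1e-7  # mu_0 / 4*pi in SI units

def dipole_field(pos, moment, sensor):
    """Flux density of a point dipole at `pos` with moment `moment`,
    evaluated at `sensor` (all 3-vectors, SI units)."""
    r = sensor - pos
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0_4PI * (3.0 * np.dot(moment, rhat) * rhat - moment) / d**3

def localize(sensors, readings, moment, lo, hi, n_iter=6000, seed=1):
    """Estimate the dipole position by annealed random search in a box
    (a crude stand-in for the paper's random complex optimizer)."""
    rng = np.random.default_rng(seed)
    def cost(p):
        return sum(np.sum((dipole_field(p, moment, s) - b) ** 2)
                   for s, b in zip(sensors, readings))
    best_p = rng.uniform(lo, hi)
    best_c = cost(best_p)
    step = 0.05
    for _ in range(n_iter):
        cand = np.clip(best_p + rng.normal(scale=step, size=3), lo, hi)
        c = cost(cand)
        if c < best_c:
            best_p, best_c = cand, c
        else:
            step = max(step * 0.999, 1e-4)  # anneal the step on rejection
    return best_p
```

Restricting the box to one side of the sensor plane avoids the mirror ambiguity; a real implementation would also fit the moment orientation, as the paper does.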

  19. Error analysis of finite element method for Poisson–Nernst–Planck equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sun, Yuzhou; Sun, Pengtao; Zheng, Bin

    A priori error estimates of the finite element method for time-dependent Poisson-Nernst-Planck equations are studied in this work. We obtain optimal error estimates in the L∞(H1) and L2(H1) norms and suboptimal error estimates in the L∞(L2) norm with linear elements, and optimal error estimates in the L∞(L2) norm with quadratic or higher-order elements, for both semi- and fully discrete finite element approximations. Numerical experiments are also given to validate the theoretical results.

  20. Possible 6-qubit NMR quantum computer device material; simulator of the NMR line width

    NASA Astrophysics Data System (ADS)

    Hashi, K.; Kitazawa, H.; Shimizu, T.; Goto, A.; Eguchi, S.; Ohki, S.

    2002-12-01

    For an NMR quantum computer, the splitting of an NMR spectrum must be larger than the line width. In order to find the best device material for a solid-state NMR quantum computer, we have written a simulation program that calculates the NMR line width due to the nuclear dipole field by the second-moment method. The program utilizes lattice information prepared by commercial crystal-structure drawing software. By applying this program, we can estimate the NMR line width due to the nuclear dipole field without measurements and find candidate materials for a 6-qubit solid-state NMR quantum computer device.
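The structural core of such a second-moment calculation is a lattice sum over neighboring nuclei; in the Van Vleck treatment the dipolar second moment is this geometric sum multiplied by spin- and isotope-dependent constants, which are omitted in this sketch. The simple-cubic-style neighbor list and field axis below are illustrative assumptions.

```python
import math

def dipolar_lattice_sum(neighbors, b_axis=(0.0, 0.0, 1.0)):
    """Geometric lattice sum  S = sum_k (1 - 3 cos^2 theta_k)^2 / r_k^6
    entering the Van Vleck second moment of a dipolar-broadened NMR line.
    theta_k is the angle between the internuclear vector and the field axis;
    neighbors are (x, y, z) offsets from the probed nucleus."""
    bx, by, bz = b_axis
    bn = math.sqrt(bx * bx + by * by + bz * bz)
    s = 0.0
    for (x, y, z) in neighbors:
        r = math.sqrt(x * x + y * y + z * z)
        cos_t = (x * bx + y * by + z * bz) / (r * bn)
        # each neighbor's dipole field contributes with the classic
        # (1 - 3 cos^2 theta) angular factor, squared, and falls off as r^-6
        s += (1.0 - 3.0 * cos_t ** 2) ** 2 / r ** 6
    return s
```

Because of the r⁻⁶ falloff, summing over a modest shell of neighbors already converges well, which is what makes a crystal-structure-driven line-width estimate practical.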

  1. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    NASA Astrophysics Data System (ADS)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction; no adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates from past error correction data. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
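The Gaussian-process step can be sketched as ordinary GP regression of noisy per-window error-rate estimates against time; the RBF kernel and the hyperparameters below are illustrative assumptions, not the authors' choices.

```python
import numpy as np

def gp_predict(t_train, y_train, t_test, length=5.0, sigma_f=0.05, sigma_n=0.01):
    """Gaussian-process regression with an RBF kernel: a minimal sketch of
    tracking a slowly drifting error rate from noisy windowed estimates."""
    def k(a, b):
        return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(t_train, t_train) + sigma_n**2 * np.eye(len(t_train))
    Ks = k(t_test, t_train)
    alpha = np.linalg.solve(K, y_train)          # kernel weights on the data
    mean = Ks @ alpha                            # posterior mean prediction
    var = sigma_f**2 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, var
```

Evaluating `t_test` beyond the last training time gives the prediction of the upcoming error rate, which is what a decoder would use before the corresponding syndrome data exist.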

  2. Improvement in error propagation in the Shack-Hartmann-type zonal wavefront sensors.

    PubMed

    Pathak, Biswajit; Boruah, Bosanta R

    2017-12-01

    Estimation of the wavefront from measured slope values is an essential step in a Shack-Hartmann-type wavefront sensor. Using an appropriate estimation algorithm, these measured slopes are converted into wavefront phase values. Hence, accuracy in wavefront estimation lies in proper interpretation of these measured slope values using the chosen estimation algorithm. There are two important sources of errors associated with the wavefront estimation process, namely, the slope measurement error and the algorithm discretization error. The former type is due to the noise in the slope measurements or to the detector centroiding error, and the latter is a consequence of solving equations of a basic estimation algorithm adopted onto a discrete geometry. These errors deserve particular attention, because they decide the preference of a specific estimation algorithm for wavefront estimation. In this paper, we investigate these two important sources of errors associated with the wavefront estimation algorithms of Shack-Hartmann-type wavefront sensors. We consider the widely used Southwell algorithm and the recently proposed Pathak-Boruah algorithm [J. Opt. 16, 055403 (2014); doi:10.1088/2040-8978/16/5/055403] and perform a comparative study between the two. We find that the latter algorithm is inherently superior to the Southwell algorithm in terms of the error propagation performance. We also conduct experiments that further establish the correctness of the comparative study between the said two estimation algorithms.
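For reference, Southwell-type zonal estimation solves a least-squares system in which phase differences between neighboring nodes are matched to averaged measured slopes. The following is a minimal dense-matrix sketch of that idea; the papers' actual algorithms, including the Pathak-Boruah variant, differ in geometry and solution method.

```python
import numpy as np

def southwell_reconstruct(sx, sy, h=1.0):
    """Zonal least-squares wavefront reconstruction (Southwell geometry):
    the phase difference between adjacent nodes equals the average of the
    two measured slopes there. Piston is removed via a zero-mean row."""
    n, m = sx.shape
    idx = lambda i, j: i * m + j
    rows, rhs = [], []
    for i in range(n):                       # x-direction neighbor equations
        for j in range(m - 1):
            row = np.zeros(n * m)
            row[idx(i, j + 1)], row[idx(i, j)] = 1.0, -1.0
            rows.append(row)
            rhs.append(h * 0.5 * (sx[i, j] + sx[i, j + 1]))
    for i in range(n - 1):                   # y-direction neighbor equations
        for j in range(m):
            row = np.zeros(n * m)
            row[idx(i + 1, j)], row[idx(i, j)] = 1.0, -1.0
            rows.append(row)
            rhs.append(h * 0.5 * (sy[i, j] + sy[i + 1, j]))
    rows.append(np.ones(n * m))              # zero-mean (piston) constraint
    rhs.append(0.0)
    phi, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return phi.reshape(n, m)
```

Because the averaged-slope difference equations are exact for quadratic wavefronts, a parabolic test surface is recovered to machine precision (up to piston), which makes a convenient sanity check.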

  3. Fuel Burn Estimation Model

    NASA Technical Reports Server (NTRS)

    Chatterji, Gano

    2011-01-01

    Conclusions: The fuel estimation procedure was validated using flight test data. A good fuel model can be created if weight and fuel data are available. An error in the assumed takeoff weight results in a similar amount of error in the fuel estimate. Fuel estimation error bounds can be determined.

  4. An Enhanced Non-Coherent Pre-Filter Design for Tracking Error Estimation in GNSS Receivers

    PubMed Central

    Luo, Zhibin; Ding, Jicheng; Zhao, Lin; Wu, Mouyan

    2017-01-01

    Tracking error estimation is of great importance in global navigation satellite system (GNSS) receivers. Inaccurate tracking error estimation degrades the signal tracking ability of the tracking loops and the accuracies of position fixing, velocity determination, and timing. Tracking error estimation can be done by a traditional discriminator or by a Kalman filter-based pre-filter. Pre-filters can be divided into two categories: coherent and non-coherent. This paper focuses on performance improvements of the non-coherent pre-filter. Firstly, the signal characteristics of coherent and non-coherent integration—which are the basis of tracking error estimation—are analyzed in detail. After that, the probability distribution of the estimation noise of the four-quadrant arctangent (ATAN2) discriminator is derived according to the mathematical model of coherent integration. Secondly, the statistical property of the observation noise of the non-coherent pre-filter is studied through Monte Carlo simulation in order to set the observation noise variance matrix correctly. Thirdly, a simple fault detection and exclusion (FDE) structure is introduced into the non-coherent pre-filter design, extending its effective working range for carrier phase error estimation from (−0.25 cycle, 0.25 cycle) to (−0.5 cycle, 0.5 cycle). Finally, the estimation accuracies of the discriminator, the coherent pre-filter, and the enhanced non-coherent pre-filter are evaluated comprehensively through a carefully designed experiment scenario. The pre-filter outperforms the traditional discriminator in estimation accuracy. In a highly dynamic scenario, the enhanced non-coherent pre-filter provides accuracy improvements of 41.6%, 46.4%, and 50.36% for carrier phase error, carrier frequency error, and code phase error estimation, respectively, when compared with the coherent pre-filter. 
The enhanced non-coherent pre-filter outperforms the coherent pre-filter in code phase error estimation when carrier-to-noise density ratio is less than 28.8 dB-Hz, in carrier frequency error estimation when carrier-to-noise density ratio is less than 20 dB-Hz, and in carrier phase error estimation when carrier-to-noise density belongs to (15, 23) dB-Hz ∪ (26, 50) dB-Hz. PMID:29156581
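The ATAN2 discriminator mentioned above maps the prompt correlator outputs to a carrier phase error estimate that is unambiguous over (−0.5, 0.5) cycle, which is the wider range the enhanced pre-filter exploits. A noise-free sketch with idealized correlator outputs (data bits and code misalignment are ignored here):

```python
import math

def atan2_discriminator(i_p, q_p):
    """Four-quadrant arctangent (ATAN2) carrier-phase discriminator.
    Returns the phase error estimate in cycles, valid over (-0.5, 0.5)."""
    return math.atan2(q_p, i_p) / (2.0 * math.pi)

def prompt_iq(phase_err_cycles, amplitude=1.0):
    """Idealized noise-free prompt correlator outputs for a given
    carrier phase error (illustrative model, not a full signal chain)."""
    ph = 2.0 * math.pi * phase_err_cycles
    return amplitude * math.cos(ph), amplitude * math.sin(ph)
```

A two-quadrant atan(Q/I) discriminator would wrap at ±0.25 cycle, which is the narrower working range the abstract contrasts against.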

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuhn, Heinz-Dieter.

    The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments, each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.

  6. A simulation test of the effectiveness of several methods for error-checking non-invasive genetic data

    USGS Publications Warehouse

    Roon, David A.; Waits, L.P.; Kendall, K.C.

    2005-01-01

    Non-invasive genetic sampling (NGS) is becoming a popular tool for population estimation. However, multiple NGS studies have demonstrated that polymerase chain reaction (PCR) genotyping errors can bias demographic estimates. These errors can be detected by comprehensive data filters such as the multiple-tubes approach, but this approach is expensive and time consuming, as it requires three to eight PCR replicates per locus. Thus, researchers have attempted to correct PCR errors in NGS datasets using non-comprehensive error-checking methods, but these approaches have not been evaluated for reliability. We simulated NGS studies with and without PCR error, 'filtered' the datasets using non-comprehensive approaches derived from published studies, and calculated mark-recapture estimates using CAPTURE. In the absence of data filtering, simulated error resulted in serious inflations in CAPTURE estimates; some estimates exceeded N by 200% or more. When data filters were used, CAPTURE estimate reliability varied with the per-locus error rate. At a per-locus error rate of 0.01, CAPTURE estimates from filtered data displayed < 5% deviance from error-free estimates. When the per-locus error rate was 0.05 or 0.09, some CAPTURE estimates from filtered data displayed biases in excess of 10%. Biases were positive at high sampling intensities; negative biases were observed at low sampling intensities. We caution researchers against using non-comprehensive data filters in NGS studies, unless they can achieve baseline per-locus error rates below 0.05 and, ideally, near 0.01. However, we suggest that data filters can be combined with careful technique and thoughtful NGS study design to yield accurate demographic information. © 2005 The Zoological Society of London.
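The inflation mechanism is easy to reproduce in a toy simulation: per-locus genotyping errors create spurious "new" genotypes, inflating the apparent number of distinct individuals that a mark-recapture estimator would see. This sketch uses made-up single-allele loci and a uniform error rate, far simpler than the paper's simulations.

```python
import random

def simulate_unique_genotypes(n_ind, n_loci, err, samples_per_ind=3, seed=0):
    """Count distinct observed genotypes when each individual is sampled
    several times and each locus is mis-typed with probability `err`.
    A toy model: loci carry one allele drawn from 10 possibilities."""
    rng = random.Random(seed)
    truth = [tuple(rng.randrange(10) for _ in range(n_loci))
             for _ in range(n_ind)]
    seen = set()
    for g in truth:
        for _ in range(samples_per_ind):
            # each locus keeps its true allele unless an error replaces it
            obs = tuple(a if rng.random() >= err else rng.randrange(10)
                        for a in g)
            seen.add(obs)
    return len(seen)
```

With zero error the count cannot exceed the true number of individuals, while even modest per-locus error manufactures extra genotypes, which is exactly the positive bias the study observed in unfiltered CAPTURE estimates.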

  7. High-redshift radio galaxies and divergence from the CMB dipole

    NASA Astrophysics Data System (ADS)

    Colin, Jacques; Mohayaee, Roya; Rameez, Mohamed; Sarkar, Subir

    2017-10-01

    Previous studies have found our velocity in the rest frame of radio galaxies at high redshift to be much larger than that inferred from the dipole anisotropy of the cosmic microwave background. We construct a full-sky catalogue, NVSUMSS, by merging the NRAO VLA Sky Survey and the Sydney University Molonglo Sky Survey catalogues and removing local sources by various means, including cross-correlating with the 2MASS Redshift Survey catalogue. We take into account both aberration and Doppler boost to deduce our velocity from the hemispheric number count asymmetry, as well as via a three-dimensional linear estimator. Both its magnitude and direction depend on cuts made to the catalogue, e.g. on the lowest source flux; however, these effects are small. From the hemispheric number count asymmetry we obtain a velocity of 1729 ± 187 km s⁻¹, i.e. about four times larger than that obtained from the cosmic microwave background dipole, but close in direction, towards RA = 149° ± 2°, Dec. = -17° ± 12°. With the three-dimensional estimator, the derived velocity is 1355 ± 174 km s⁻¹ towards RA = 141° ± 11°, Dec. = -9° ± 10°. We assess the statistical significance of these results by comparison with catalogues of random distributions, finding it to be 2.81σ (99.75 per cent confidence).
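A three-dimensional linear dipole estimator of this kind can be sketched as follows: for unit source directions drawn from a distribution proportional to (1 + d·n̂), three times the sample mean vector estimates the dipole d (converting d to a velocity via the aberration/Doppler amplitude relation is omitted here). The rejection sampler and the dipole amplitude are illustrative assumptions.

```python
import math
import random

def sample_directions(n, d=(0.0, 0.0, 0.0), seed=3):
    """Draw unit vectors from a sky density proportional to 1 + d.n
    by rejection sampling over the isotropic distribution."""
    rng = random.Random(seed)
    dmag = math.sqrt(sum(c * c for c in d))
    out = []
    while len(out) < n:
        z = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        s = math.sqrt(1.0 - z * z)
        v = (s * math.cos(phi), s * math.sin(phi), z)
        dot = sum(a * b for a, b in zip(d, v))
        if rng.random() * (1.0 + dmag) < 1.0 + dot:  # accept with prob ∝ 1 + d.n
            out.append(v)
    return out

def dipole_estimator(directions):
    """Linear estimator: E[n] = d/3 for density ∝ 1 + d.n,
    so three times the sample mean vector estimates d."""
    n = len(directions)
    sx = sum(v[0] for v in directions) / n
    sy = sum(v[1] for v in directions) / n
    sz = sum(v[2] for v in directions) / n
    return (3.0 * sx, 3.0 * sy, 3.0 * sz)
```

The factor of three follows from the spherical average of n̂ (d·n̂), which equals d/3; in real catalogues, masks and flux cuts complicate this simple picture, as the abstract notes.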

  8. Electron Cloud Trapping in Recycler Combined Function Dipole Magnets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antipov, Sergey A.; Nagaitsev, S.

    2016-10-04

    Electron cloud can lead to a fast instability in intense proton and positron beams in circular accelerators. In the Fermilab Recycler the electron cloud is confined within its combined function magnets. We show that the field of combined function magnets traps the electron cloud, present the results of analytical estimates of trapping, and compare them to numerical simulations of electron cloud formation. The electron cloud is located at the beam center and up to 1% of the particles can be trapped by the magnetic field. Since the process of electron cloud build-up is exponential, once trapped this amount of electrons significantly increases the density of the cloud on the next revolution. In a Recycler combined function dipole this multi-turn accumulation allows the electron cloud to reach final intensities orders of magnitude greater than in a pure dipole. The multi-turn build-up can be stopped by injection of a clearing bunch of 10¹⁰ p at any position in the ring.

  9. Lattice calculation of electric dipole moments and form factors of the nucleon

    NASA Astrophysics Data System (ADS)

    Abramczyk, M.; Aoki, S.; Blum, T.; Izubuchi, T.; Ohki, H.; Syritsyn, S.

    2017-07-01

    We analyze commonly used expressions for computing the nucleon electric dipole form factors (EDFFs) F3 and moments (EDMs) on a lattice and find that they lead to spurious contributions from the Pauli form factor F2, due to an inadequate definition of these form factors when parity mixing of lattice nucleon fields is involved. Using chirally symmetric domain wall fermions, we calculate the proton and the neutron EDFF induced by the CP-violating quark chromo-EDM interaction using the corrected expression. In addition, we calculate the electric dipole moment of the neutron using a background electric field that respects time translation invariance and boundary conditions, and we find that it decidedly agrees with the new formula but not with the old formula for F3. Finally, we analyze some selected lattice results for the nucleon EDM and observe that after the correction is applied, they either agree with zero or are substantially reduced in magnitude, thus reconciling their difference from phenomenological estimates of the nucleon EDM.

  10. Control of systematic uncertainties in the storage ring search for an electric dipole moment by measuring the electric quadrupole moment

    NASA Astrophysics Data System (ADS)

    Magiera, Andrzej

    2017-09-01

    Measurements of the electric dipole moment (EDM) of light hadrons with use of a storage ring have been proposed. The expected effect is very small, so various subtle effects need to be considered. In particular, the interaction of a particle's magnetic dipole moment and electric quadrupole moment with electromagnetic field gradients can produce an effect of a similar order of magnitude to that expected for the EDM. This paper describes a very promising method employing an rf Wien filter that allows disentangling that contribution from the genuine EDM effect. It is shown that the two effects can be separated by proper setting of the rf Wien filter frequency and phase. In an EDM measurement the magnitude of the systematic uncertainties plays a key role, and they should be kept under strict control. It is shown that the particles' interaction with field gradients also offers the possibility to estimate global systematic uncertainties with the precision necessary for an EDM measurement of the planned accuracy.

  11. Central Compact Objects: some of them could be spinning up?

    NASA Astrophysics Data System (ADS)

    Benli, O.; Ertan, Ü.

    2018-05-01

    Among confirmed central compact objects (CCOs), only three sources have measured periods and period derivatives. We have investigated possible evolutionary paths of these three CCOs in the fallback disc model. The model can account for the individual X-ray luminosities and rotational properties of the sources consistently with their estimated supernova ages. For these sources, reasonable model curves can be obtained with dipole field strengths of a few × 10⁹ G on the surface of the star. The model curves indicate that these CCOs were in the spin-up state in the early phase of evolution. The spin-down starts, while accretion is going on, at a time t ~ 10³-10⁴ yr depending on the current accretion rate, period and the magnetic dipole moment of the star. This implies that some of the CCOs with relatively long periods, weak dipole fields and high X-ray luminosities could be strong candidates to show spin-up behavior if they indeed evolve with fallback discs.

  12. Adjusting for radiotelemetry error to improve estimates of habitat use.

    Treesearch

    Scott L. Findholt; Bruce K. Johnson; Lyman L. McDonald; John W. Kern; Alan Ager; Rosemary J. Stussy; Larry D. Bryant

    2002-01-01

    Animal locations estimated from radiotelemetry have traditionally been treated as error-free when analyzed in relation to habitat variables. Location error lowers the power of statistical tests of habitat selection. We describe a method that incorporates the error surrounding point estimates into measures of environmental variables determined from a geographic...

  13. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.

  14. Empirical State Error Covariance Matrix for Batch Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joe

    2015-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted batch least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. The proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. This empirical error covariance matrix may be calculated as a side computation for each unique batch solution. Results based on the proposed technique will be presented for a simple, two observer and measurement error only problem.

  15. Software for Quantifying and Simulating Microsatellite Genotyping Error

    PubMed Central

    Johnson, Paul C.D.; Haydon, Daniel T.

    2007-01-01

    Microsatellite genetic marker data are exploited in a variety of fields, including forensics, gene mapping, kinship inference and population genetics. In all of these fields, inference can be thwarted by failure to quantify and account for data errors, and kinship inference in particular can benefit from separating errors into two distinct classes: allelic dropout and false alleles. Pedant is MS Windows software for estimating locus-specific maximum likelihood rates of these two classes of error. Estimation is based on comparison of duplicate error-prone genotypes: neither reference genotypes nor pedigree data are required. Other functions include: plotting of error rate estimates and confidence intervals; simulations for performing power analysis and for testing the robustness of error rate estimates to violation of the underlying assumptions; and estimation of expected heterozygosity, which is a required input. The program, documentation and source code are available from http://www.stats.gla.ac.uk/~paulj/pedant.html. PMID:20066126
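The principle of estimating error rates purely from duplicate genotypes can be shown with a deliberately collapsed model: assume each call is wrong independently with probability ε and any wrong call mismatches its duplicate, so P(mismatch) = 2ε(1−ε), and the maximum likelihood estimate simply inverts that. Pedant's actual likelihood separates allelic dropout from false alleles; this sketch collapses them into one rate to convey the idea.

```python
import math

def error_rate_from_duplicates(n_pairs, n_mismatch):
    """ML estimate of a per-genotype error rate from duplicate typings
    under the toy model P(mismatch) = 2*eps*(1-eps).
    Inverting gives eps = (1 - sqrt(1 - 2*k/n)) / 2 for eps < 0.5."""
    p = n_mismatch / n_pairs
    if p >= 0.5:
        raise ValueError("mismatch fraction too high for this model")
    # the binomial MLE of the mismatch probability is k/n; by invariance,
    # the MLE of eps is the (monotone) inverse of 2*eps*(1-eps) at k/n
    return 0.5 * (1.0 - math.sqrt(1.0 - 2.0 * p))
```

As in Pedant, no reference genotypes or pedigrees are needed: the estimate comes entirely from re-typing the same samples and counting disagreements.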

  16. Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation

    NASA Technical Reports Server (NTRS)

    Park, Michael A.

    2002-01-01

    Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
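In generic notation (not necessarily the report's own symbols), the adjoint-weighted residual correction behind such estimates reads:

```latex
% u_h: approximate flow solution; \lambda_h: discrete adjoint solution for
% the output functional J; R(\cdot): flow-equation residual evaluated on an
% embedded finer space.  The correction term is added to the computed output;
% the remaining (bounded) error is what drives the mesh adaptation.
J(u) \;\approx\; J(u_h) + \lambda_h^{T} R(u_h),
\qquad
\bigl|\,J(u) - J(u_h) - \lambda_h^{T} R(u_h)\,\bigr|
\;\lesssim\; \bigl|(\lambda - \lambda_h)^{T} R(u_h)\bigr|
```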

  17. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

  18. Online machining error estimation method of numerical control gear grinding machine tool based on data analysis of internal sensors

    NASA Astrophysics Data System (ADS)

    Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin

    2016-12-01

    This paper presents an online method for estimating cutting error by analyzing internal sensor readings. The internal sensors of the numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model of cutting error estimation is proposed that computes the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. To verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generating grinding process. The cutting error of the gear was estimated, and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of a workpiece during the machining process.

  19. An Application of Semi-parametric Estimator with Weighted Matrix of Data Depth in Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Pan, X. G.; Wang, J. Q.; Zhou, H. Y.

    2013-05-01

    A variance component estimation (VCE) method based on a semi-parametric estimator with a weighted matrix of data depth is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of space and ground combined TT&C (Telemetry, Tracking and Command). The uncertain model error is estimated with the semi-parametric estimator model, and outliers are restrained with the weighted matrix of data depth. With the model error and outliers thus restricted, the VCE can be improved and used to estimate the weight matrix for observation data containing uncertain model errors or outliers. A simulation experiment was carried out under space and ground combined TT&C conditions. The results show that the new VCE, based on model error compensation, can determine rational weights for the multi-source heterogeneous data and restrain outlier data.

  20. Bootstrap Standard Errors for Maximum Likelihood Ability Estimates When Item Parameters Are Unknown

    ERIC Educational Resources Information Center

    Patton, Jeffrey M.; Cheng, Ying; Yuan, Ke-Hai; Diao, Qi

    2014-01-01

    When item parameter estimates are used to estimate the ability parameter in item response models, the standard error (SE) of the ability estimate must be corrected to reflect the error carried over from item calibration. For maximum likelihood (ML) ability estimates, a corrected asymptotic SE is available, but it requires a long test and the…

  1. Satellite Sampling and Retrieval Errors in Regional Monthly Rain Estimates from TMI AMSR-E, SSM/I, AMSU-B and the TRMM PR

    NASA Technical Reports Server (NTRS)

    Fisher, Brad; Wolff, David B.

    2010-01-01

    Passive and active microwave rain sensors onboard earth-orbiting satellites estimate monthly rainfall from the instantaneous rain statistics collected during satellite overpasses. It is well known that climate-scale rain estimates from meteorological satellites incur sampling errors resulting from the process of discrete temporal sampling and statistical averaging. Sampling and retrieval errors ultimately become entangled in the estimation of the mean monthly rain rate. The sampling component of the error budget effectively introduces statistical noise into climate-scale rain estimates that obscures the error component associated with the instantaneous rain retrieval. Estimating the accuracy of the retrievals on monthly scales therefore necessitates a decomposition of the total error budget into sampling and retrieval error quantities. This paper presents results from a statistical evaluation of the sampling and retrieval errors for five different space-borne rain sensors on board nine orbiting satellites. Using an error decomposition methodology developed by one of the authors, sampling and retrieval errors were estimated at 0.25° resolution within 150 km of ground-based weather radars located at Kwajalein, Marshall Islands and Melbourne, Florida. Error and bias statistics were calculated according to the land, ocean and coast classifications of the surface terrain mask developed for the Goddard Profiling (GPROF) rain algorithm. Variations in the comparative error statistics are attributed to various factors related to differences in the swath geometry of each rain sensor, the orbital and instrument characteristics of the satellite and the regional climatology. The most significant result from this study is that each of the satellites incurred negative long-term oceanic retrieval biases of 10 to 30%.
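A minimal synthetic sketch of this sampling/retrieval decomposition (all numbers illustrative, not taken from the study): the satellite's monthly estimate is compared against the ground radar averaged over the full month and against the radar sampled only at overpass times, which splits the total error into its two components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic month of ground-radar rain rates, one value per minute.
minutes = 30 * 24 * 60
truth = rng.gamma(0.1, 2.0, size=minutes)

# Satellite visits: ~2 overpasses per day, each seeing the instantaneous rate.
overpasses = rng.choice(minutes, size=60, replace=False)

monthly_truth = truth.mean()              # full ground-radar monthly mean
sampled_truth = truth[overpasses].mean()  # radar mean at overpass times only
retrieved = truth[overpasses] * 0.85      # instantaneous retrieval, low-biased
monthly_sat = retrieved.mean()            # satellite monthly estimate

sampling_error = sampled_truth - monthly_truth  # noise from discrete sampling
retrieval_error = monthly_sat - sampled_truth   # bias of the retrieval itself
total_error = monthly_sat - monthly_truth       # the two components add up
```

The sampling term here is pure statistical noise (it would average out over many months), while the retrieval term carries the systematic bias that the decomposition is designed to isolate.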

  2. Authigenic 10Be/9Be Ratio Signatures of the Cosmogenic Nuclide Production Linked to Geomagnetic Dipole Moment Variation During and Since the Brunhes/Matuyama Boundary

    NASA Astrophysics Data System (ADS)

    Simon, Q.; Thouveny, N.; Bourles, D. L.; Ménabréaz, L.; Valet, J. P.; Valery, G.; Choy, S.

    2015-12-01

    The atmospheric production rate of cosmogenic nuclides is linked to the geomagnetic dipole moment (GDM) by a non-linear inverse relationship. Large amplitude GDM variations associated with reversals and excursions can potentially be reconstructed using time variation of the cosmogenic beryllium-10 (10Be) production recorded in ocean sediments. Downcore profiles of authigenic 10Be/9Be ratios (a proxy of atmospheric 10Be production) in oceanic cores provide independent and additional records of the evolution of the geomagnetic intensity and complement previous information derived from relative paleointensity (RPI). Here we present new authigenic 10Be/9Be results obtained from core MD05-2920 and from the top of core MD05-2930, collected in the West Equatorial Pacific Ocean. Complementing the data of Ménabréaz et al. (2012, 2014), these results provide the first continuous 10Be production rate sedimentary record covering the last 800 ka. Along these cores, authigenic 10Be/9Be ratio peaks are recorded - within methodological errors - at the stratigraphic level of RPI lows. High-resolution (δ18O-derived) chronologies lead us to interpret these peaks as successive global 10Be overproduction events triggered by the geomagnetic dipole lows present in the PISO-1500 and Sint-2000 stacks. The largest amplitude 10Be production enhancement is synchronous with the very large decrease of the dipole field associated with the last polarity reversal (772 ka). It is consistent in shape and duration with the peak recorded in core MD90-0961 from the Maldive area (Indian Ocean) (Valet et al. 2014). Two significant 10Be production enhancements are coeval with the Laschamp (41 ka) and Iceland Basin (190 ka) excursions, while 10Be production peaks of lower amplitude correlate with other recognized excursions such as the Blake (120 ka), Pringle Falls (215 ka), Portuguese Margin (290 ka), and Big Lost (540 ka), among others. 
This study provides new data on the amplitude and timing of dipole field variations, helping to understand the difference between paleosecular variation, excursions, aborted reversals and reversals regimes.

  3. Calibration of remotely sensed proportion or area estimates for misclassification error

    Treesearch

    Raymond L. Czaplewski; Glenn P. Catts

    1992-01-01

    Classifications of remotely sensed data contain misclassification errors that bias areal estimates. Monte Carlo techniques were used to compare two statistical methods that correct or calibrate remotely sensed areal estimates for misclassification bias using reference data from an error matrix. The inverse calibration estimator was consistently superior to the...

  4. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator eθ, which is based on the spatial derivative of the velocity direction field, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction have a disparity. Through benchmarking computed variables against the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator performs better than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using eθ achieves the best match with the true θ field, and that it is asymptotic to the true θ variation field, with a promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element classes and node classes can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. 
The adaptive scheme is applied to viscous incompressible flow at different Reynolds numbers. It is found that the velocity angle error estimator can detect most flow characteristics and produce dense grids in the regions where flow velocity directions change abruptly. In addition, the eθ estimator distributes the derivative error dilutely over the whole computational domain and also allows the refinement to be conducted in regions of high error. Through comparison of the velocity angle error across the interface with neighbouring cells, it is verified that the adaptive scheme using eθ provides an optimum mesh which can clearly resolve local flow features in a precise way. The adaptive results justify the applicability of the eθ estimator and prove that this error estimator is a valuable adaptive indicator for the automatic refinement of unstructured grids.
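One plausible way to write such a curvature-type indicator (generic notation; the thesis' precise definition may differ):

```latex
% \theta: velocity direction angle built from the velocity components (u, v);
% e_\theta on an element K measures the gradient of the discrete angle field
% \theta_h, i.e. the local curvature of the streamlines, so it is large where
% the flow direction turns sharply (vortices, separation).
\theta(x, y) \;=\; \arctan\!\left(\frac{v(x, y)}{u(x, y)}\right),
\qquad
e_{\theta}\big|_{K} \;=\; \left( \int_{K} \lvert \nabla \theta_h \rvert^{2} \, d\Omega \right)^{1/2}
```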

  5. On the Calculation of Uncertainty Statistics with Error Bounds for CFD Calculations Containing Random Parameters and Fields

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2016-01-01

    This chapter discusses the ongoing development of combined uncertainty and error bound estimates for computational fluid dynamics (CFD) calculations subject to imposed random parameters and random fields. An objective of this work is the construction of computable error bound formulas for output uncertainty statistics that guide CFD practitioners in systematically determining how accurately CFD realizations should be approximated and how accurately uncertainty statistics should be approximated for output quantities of interest. Formal error bound formulas for moment statistics that properly account for the presence of numerical errors in CFD calculations and numerical quadrature errors in the calculation of moment statistics have been previously presented in [8]. In this past work, hierarchical node-nested dense and sparse tensor product quadratures are used to calculate moment statistics integrals. In the present work, a framework has been developed that exploits the hierarchical structure of these quadratures in order to simplify the calculation of an estimate of the quadrature error needed in error bound formulas. When signed estimates of realization error are available, this signed error may also be used to estimate output quantity of interest probability densities as a means to assess the impact of realization error on these density estimates. Numerical results are presented for CFD problems with uncertainty to demonstrate the capabilities of this framework.

  6. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  7. Communication: Correct charge transfer in CT complexes from the Becke'05 density functional

    NASA Astrophysics Data System (ADS)

    Becke, Axel D.; Dale, Stephen G.; Johnson, Erin R.

    2018-06-01

    It has been known for over twenty years that density functionals of the generalized-gradient approximation (GGA) type and exact-exchange-GGA hybrids with low exact-exchange mixing fraction yield enormous errors in the properties of charge-transfer (CT) complexes. Manifestations of this error have also plagued computations of CT excitation energies. GGAs transfer far too much charge in CT complexes. This error has therefore come to be called "delocalization" error. It remains, to this day, a vexing unsolved problem in density-functional theory (DFT). Here we report that a 100% exact-exchange-based density functional known as Becke'05 or "B05" [A. D. Becke, J. Chem. Phys. 119, 2972 (2003); 122, 064101 (2005)] predicts excellent charge transfers in classic CT complexes involving the electron donors NH3, C2H4, HCN, and C2H2 and electron acceptors F2 and Cl2. Our approach is variational, as in our recent "B05min" dipole moments paper [Dale et al., J. Chem. Phys. 147, 154103 (2017)]. Therefore B05 is not only an accurate DFT for thermochemistry but is promising as a solution to the delocalization problem as well.

  8. Supervised local error estimation for nonlinear image registration using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Eppenhof, Koen A. J.; Pluim, Josien P. W.

    2017-02-01

    Error estimation in medical image registration is valuable when validating, comparing, or combining registration methods. To validate a nonlinear image registration method, ideally the registration error should be known for the entire image domain. We propose a supervised method for the estimation of a registration error map for nonlinear image registration. The method is based on a convolutional neural network that estimates the norm of the residual deformation from patches around each pixel in two registered images. This norm is interpreted as the registration error, and is defined for every pixel in the image domain. The network is trained using a set of artificially deformed images. Each training example is a pair of images: the original image, and a random deformation of that image. No manually labeled ground truth error is required. At test time, only the two registered images are required as input. We train and validate the network on registrations in a set of 2D digital subtraction angiography sequences, such that errors up to eight pixels can be estimated. We show that for this range of errors the convolutional network is able to learn the registration error in pairs of 2D registered images at subpixel precision. Finally, we present a proof of principle for the extension to 3D registration problems in chest CTs, showing that the method has the potential to estimate errors in 3D registration problems.
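A minimal sketch of how such training pairs can be generated (pure NumPy, with an illustrative smooth sinusoidal deformation rather than the paper's random deformations; all helper names are hypothetical): the original image plays the fixed image, its warped copy the moving image, and the per-pixel norm of the applied displacement is the dense error label, so no manual ground truth is needed.

```python
import numpy as np

def bilinear_sample(img, y, x):
    # Sample img at float coordinates (y, x) with bilinear interpolation,
    # clamping to the image border.
    h, w = img.shape
    y = np.clip(y, 0, h - 1); x = np.clip(x, 0, w - 1)
    y0 = np.floor(y).astype(int); x0 = np.floor(x).astype(int)
    y1 = np.minimum(y0 + 1, h - 1); x1 = np.minimum(x0 + 1, w - 1)
    fy = y - y0; fx = x - x0
    top = img[y0, x0] * (1 - fx) + img[y0, x1] * fx
    bot = img[y1, x0] * (1 - fx) + img[y1, x1] * fx
    return top * (1 - fy) + bot * fy

def make_training_pair(image, max_disp=8.0, seed=0):
    # One supervised example: (fixed, moving) image pair plus a dense
    # ground-truth label = per-pixel norm of the applied deformation.
    rng = np.random.default_rng(seed)
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    # Smooth displacement field: low-frequency sinusoids with random phase.
    dy = max_disp * np.sin(2 * np.pi * yy / h + rng.uniform(0, 2 * np.pi))
    dx = max_disp * np.sin(2 * np.pi * xx / w + rng.uniform(0, 2 * np.pi))
    moving = bilinear_sample(image, yy + dy, xx + dx)
    error_map = np.hypot(dy, dx)   # registration error label, per pixel
    return image, moving, error_map
```

The network is then trained on patches from `(fixed, moving)` with `error_map` as the regression target; at test time only the two registered images are needed.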

  9. Goal-oriented explicit residual-type error estimates in XFEM

    NASA Astrophysics Data System (ADS)

    Rüter, Marcus; Gerasimov, Tymofiy; Stein, Erwin

    2013-08-01

    A goal-oriented a posteriori error estimator is derived to control the error obtained while approximately evaluating a quantity of engineering interest, represented in terms of a given linear or nonlinear functional, using extended finite elements of Q1 type. The same approximation method is used to solve the dual problem as required for the a posteriori error analysis. It is shown that for both problems to be solved numerically the same singular enrichment functions can be used. The goal-oriented error estimator presented can be classified as explicit residual type, i.e. the residuals of the approximations are used directly to compute upper bounds on the error of the quantity of interest. This approach therefore extends the explicit residual-type error estimator for classical energy norm error control as recently presented in Gerasimov et al. (Int J Numer Meth Eng 90:1118-1155, 2012a). Without loss of generality, the a posteriori error estimator is applied to the model problem of linear elastic fracture mechanics. Thus, emphasis is placed on the fracture criterion, here the J-integral, as the chosen quantity of interest. Finally, various illustrative numerical examples are presented where, on the one hand, the error estimator is compared to its finite element counterpart and, on the other hand, improved enrichment functions, as introduced in Gerasimov et al. (2012b), are discussed.

  10. Deuteron Compton scattering below pion photoproduction threshold

    NASA Astrophysics Data System (ADS)

    Levchuk, M. I.; L'vov, A. I.

    2000-07-01

    Deuteron Compton scattering below pion photoproduction threshold is considered in the framework of the nonrelativistic diagrammatic approach with the Bonn OBE potential. A complete gauge-invariant set of diagrams is taken into account which includes resonance diagrams without and with NN-rescattering and diagrams with one- and two-body seagulls. The seagull operators are analyzed in detail, and their relations with free- and bound-nucleon polarizabilities are discussed. It is found that both dipole and higher-order polarizabilities of the nucleon are required for a quantitative description of recent experimental data. An estimate of the isospin-averaged dipole electromagnetic polarizabilities of the nucleon and the polarizabilities of the neutron is obtained from the data.

  11. Ultralight gravitons with tiny electric dipole moment are seeping from the vacuum

    NASA Astrophysics Data System (ADS)

    Novikov, Evgeny A.

    2016-05-01

    The mass and electric dipole moment (EDM) of the graviton, which is identified as the dark matter particle (DMP), are estimated. This changes the concept of dark matter and can help to explain the baryon asymmetry of the universe. The calculations are based on a quantum modification of general relativity (Qmoger) with two additional terms in the Einstein equations, which take into account production/absorption of gravitons. In this theory, there is no Big Bang in the beginning (some local bangs during the evolution of the universe are probable), no critical density of the universe, no dark energy (no need for a cosmological constant) and no inflation. The theory (without fitting) is in good quantitative agreement with cosmic data.

  12. Radiative lifetimes and cooling functions for astrophysically important molecules

    NASA Astrophysics Data System (ADS)

    Tennyson, Jonathan; Hulme, Kelsey; Naim, Omree K.; Yurchenko, Sergei N.

    2016-02-01

    Extensive line lists generated as part of the ExoMol project are used to compute lifetimes for individual rotational, rovibrational and rovibronic excited states, and temperature-dependent cooling functions by summing over all dipole-allowed transitions for the states concerned. Results are presented for SiO, CaH, AlO, ScH, H2O and methane. The results for CH4 are particularly unusual with four excited states with no dipole-allowed decay route and several others, where these decays lead to exceptionally long lifetimes. These lifetime data should be useful in models of masers and estimates of critical densities, and can provide a link with laboratory measurements. Cooling functions are important in stellar and planet formation.

  13. The combined effects of forward masking by noise and high click rate on monaural and binaural human auditory nerve and brainstem potentials.

    PubMed

    Pratt, Hillel; Polyakov, Andrey; Bleich, Naomi; Mittelman, Naomi

    2004-07-01

    To study effects of forward masking and rapid stimulation on human monaurally- and binaurally-evoked brainstem potentials and suggest their relation to synaptic fatigue and recovery and to neuronal action potential refractoriness. Auditory brainstem evoked potentials (ABEPs) were recorded from 12 normally- and symmetrically hearing adults, in response to each click (50 dB nHL, condensation and rarefaction) in a train of nine, with an inter-click interval of 11 ms, that followed a white noise burst of 100 ms duration (50 dB nHL). Sequences of white noise and click train were repeated at a rate of 2.89 s⁻¹. The interval between noise and first click in the train was 2, 11, 22, 44, 66 or 88 ms in different runs. ABEPs were averaged (8000 repetitions) using a dwell time of 25 μs/address/channel. The binaural interaction components (BICs) of ABEPs were derived and the single, centrally located equivalent dipoles of ABEP waves I and V and of the BIC major wave were estimated. The latencies of dipoles I and V of ABEP, their inter-dipole interval and the dipole magnitude of component V were significantly affected by the interval between noise and clicks and by the serial position of the click in the train. The latency and dipole magnitude of the major BIC component were significantly affected by the interval between noise and clicks. Interval from noise and the click's serial position in the train interacted to affect dipole V latency, dipole V magnitude, BIC latencies and the V-I inter-dipole latency difference. Most of the effects were fully apparent by the first few clicks in the train, and the trend (increase or decrease) was affected by the interval between noise and clicks. 
The changes in latency and magnitude of ABEP and BIC components with advancing position in the click train and the interactions of click position in the train with the intervals from noise indicate an interaction of fatigue and recovery, compatible with synaptic depletion and replenishing, respectively. With the 2 ms interval between noise and the first click in the train, neuronal action potential refractoriness may also be involved.

  14. Non-Dipole Features of the Geomagnetic Field May Persist for Millions of Years

    NASA Astrophysics Data System (ADS)

    Biasi, J.; Kirschvink, J. L.

    2017-12-01

    Here we present paleointensity results from within the South Atlantic Anomaly (SAA), which is a large non-dipole feature of the geomagnetic field. Within the area of the SAA, anomalous declinations, inclinations, and intensities are observed. Our results suggest that the SAA has been present for at least 5 Ma. This is orders of magnitude greater than any previous estimate, and suggests that some non-dipole features do not `average out' over geologic time, which is a fundamental assumption in all paleodirectional studies. The SAA has been steadily growing in size since the first magnetic measurements were made in the South Atlantic, and it is widely believed to have appeared 400 years ago. Recent studies from South Africa (Tarduno et al. (2015)) and Tristan da Cunha (Shah et al. (2016)) have suggested that the SAA has persisted for 1 ka and 96 ka, respectively. We conducted paleointensity (PI) experiments on basaltic lavas from James Ross Island, on the Antarctic Peninsula. This large shield volcano has been erupting regularly over the last 6+ Ma (dated via Ar/Ar geochronology), and therefore contains the most complete volcanostratigraphic record in the South Atlantic. Our PI experiments used the Thellier-Thellier method, the IZZI protocol, and the same selection criteria as the Lawrence et al. (2009) study of Ross Island lavas (near McMurdo Station), which is the only comparable PI study on the Antarctic continent. We determined an average paleointensity at JRI of 13.8±5.2 μT, which is far lower than what we would expect from a dipole field (55 μT). In addition, this is far lower than the current value over James Ross Island of 36 μT. These results support the following conclusions: The time-averaged field model of Juarez et al. (1998) and Tauxe et al. (2013) is strongly favored by our PI data. The SAA has persisted over James Ross Island for at least 5 Ma, and has not drifted significantly over that time. 
The strength of non-dipole features such as the SAA scales with the dipole moment of the earth. Non-dipole features like the SAA can survive geomagnetic reversals. The fundamental assumption that non-dipole features of the geomagnetic field are `averaged out' over geologic timescales needs to be reconsidered.

  15. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  16. Regression-assisted deconvolution.

    PubMed

    McIntyre, Julie; Stefanski, Leonard A

    2011-06-30

    We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.

  17. Dipole strength in 80Se below the neutron-separation energy for the nuclear transmutation of 79Se

    NASA Astrophysics Data System (ADS)

    Makinaga, Ayano; Massarczyk, Ralph; Beard, Mary; Schwengner, Ronald; Otsu, Hideaki; Müller, Stefan; Röder, Marko; Schmidt, Konrad; Wagner, Andreas

    2017-09-01

    The γ-ray strength function (γSF) in 80Se is an important parameter for estimating the neutron-capture cross section of 79Se, which is one of the long-lived fission products (LLFPs). Until now, the γSF method was applied to 80Se only above the neutron-separation energy (Sn), and the evaluated 79Se(n,γ) cross section has an instability caused by the γSF below Sn. We studied the dipole-strength distribution of 80Se in a photon-scattering experiment using bremsstrahlung produced by an electron beam of an energy of 11.5 MeV at the linear accelerator ELBE at HZDR. The present photoabsorption cross section of 80Se was combined with results of (γ,n) experiments and compared with predictions using the TALYS code. We also estimated the 79Se(n,γ) cross sections and compare them with TALYS predictions and earlier work by other groups.

  18. Morphology and mixing state of aged soot particles at a remote marine free troposphere site: Implications for optical properties

    DOE PAGES

    China, Swarup; Scarnato, Barbara; Owen, Robert C.; ...

    2015-01-14

    The radiative properties of soot particles depend on their morphology and mixing state, but their evolution during transport is still elusive. In this paper, we report observations from an electron microscopy analysis of individual particles transported in the free troposphere over long distances to the remote Pico Mountain Observatory in the Azores in the North Atlantic. Approximately 70% of the soot particles were highly compact and, of those, 26% were thinly coated. Discrete dipole approximation simulations indicate that this compaction results in an increase in soot single scattering albedo by a factor of ≤2.17. The top of the atmosphere direct radiative forcing is typically smaller for highly compact than mass-equivalent lacy soot. Lastly, the forcing estimated using Mie theory is within 12% of the forcing estimated using the discrete dipole approximation for a high surface albedo, implying that Mie calculations may provide a reasonable approximation for compact soot above remote marine clouds.

  19. Error analysis and new dual-cosine window for estimating the sensor frequency response function from the step response data

    NASA Astrophysics Data System (ADS)

    Yang, Shuang-Long; Liang, Li-Ping; Liu, Hou-De; Xu, Ke-Jun

    2018-03-01

    Aiming at reducing the estimation error of the sensor frequency response function (FRF) estimated by the commonly used window-based spectral estimation method, the error models of interpolation and transient errors are derived in the form of non-parametric models. Accordingly, window effects on the errors are analyzed, revealing that the commonly used Hanning window leads to a smaller interpolation error, which can be further reduced by cubic spline interpolation when estimating the FRF from step response data, and that a window with a smaller front-end value suppresses more of the transient error. Thus, a new dual-cosine window, whose non-zero discrete Fourier transform bins lie at -3, -1, 0, 1, and 3, is constructed for FRF estimation. Compared with the Hanning window, the new dual-cosine window has equivalent interpolation-error suppression and better transient-error suppression when estimating the FRF from the step response; specifically, it improves the asymptotic decay of the transient error from the O(N^-2) of the Hanning window method to O(N^-4), while only slightly increasing the uncertainty (by about 0.4 dB). Then, one direction of a wind tunnel strain gauge balance, which is a high-order, small-damping, non-minimum-phase system, is employed as an example for verifying the new dual-cosine window-based spectral estimation method. The model simulation shows that the new dual-cosine window method outperforms the Hanning window method for FRF estimation and, compared with the Gans and LPM methods, has the advantages of simple computation, low time consumption, and short data requirements; the calculation on actual balance FRF data is consistent with the simulation result. Thus, the new dual-cosine window is effective and practical for FRF estimation.
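
    The bin-placement idea can be checked numerically. Below is a hypothetical dual-cosine construction (the coefficients here are chosen only to zero the window's value and curvature at the front end; the paper's actual coefficients are not given in the abstract), verified to have non-zero DFT bins only at 0, ±1 and ±3:

```python
import numpy as np

N = 64
n = np.arange(N)

# Hanning window: non-zero DFT bins only at 0 and +/-1.
hanning = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)

# Hypothetical dual-cosine window with non-zero DFT bins at 0, +/-1, +/-3.
# These coefficients are chosen only so that w(0) = 0 and w''(0) = 0,
# i.e. a flatter front end, which the abstract links to stronger
# transient-error suppression; the paper's coefficients may differ.
a0, a1, a3 = 1.0, -9.0 / 8.0, 1.0 / 8.0
dual = a0 + a1 * np.cos(2 * np.pi * n / N) + a3 * np.cos(6 * np.pi * n / N)
```

    Taking the FFT of `dual` confirms that energy sits only in the stated bins, while its value at the first sample is exactly zero.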

  20. Sampling Errors of SSM/I and TRMM Rainfall Averages: Comparison with Error Estimates from Surface Data and a Sample Model

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
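
    The predicted dependence of sampling error on rain statistics can be illustrated with a toy Monte Carlo (an intermittent Bernoulli-exponential rain model of our own choosing, not the model used in the paper): the RMS error of an N-sample grid-box mean follows σ/√N, where σ grows with the local mean rain rate:

```python
import numpy as np

rng = np.random.default_rng(42)

def rms_sampling_error(p_rain, mean_when_raining, n_obs, n_months=20000):
    # Monthly mean built from n_obs snapshots of an intermittent
    # Bernoulli-exponential rain process (toy model, not the paper's).
    rain = (rng.random((n_months, n_obs)) < p_rain) * \
           rng.exponential(mean_when_raining, (n_months, n_obs))
    true_mean = p_rain * mean_when_raining
    return np.sqrt(np.mean((rain.mean(axis=1) - true_mean) ** 2))

# theory for this toy model: sigma/sqrt(N) with sigma^2 = p * mu^2 * (2 - p)
p, mu, n_obs = 0.1, 5.0, 30
theory = np.sqrt(p * mu**2 * (2 - p) / n_obs)
mc = rms_sampling_error(p, mu, n_obs)
```

    The Monte Carlo RMS matches the analytic σ/√N to within a few percent, and raising the mean rain rate raises the sampling error, as the simple model predicts.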

  1. The Sensitivity of Adverse Event Cost Estimates to Diagnostic Coding Error

    PubMed Central

    Wardle, Gavin; Wodchis, Walter P; Laporte, Audrey; Anderson, Geoffrey M; Baker, Ross G

    2012-01-01

    Objective: To examine the impact of diagnostic coding error on estimates of hospital costs attributable to adverse events. Data Sources: Original and reabstracted medical records of 9,670 complex medical and surgical admissions at 11 hospital corporations in Ontario from 2002 to 2004. Patient-specific costs, not including physician payments, were retrieved from the Ontario Case Costing Initiative database. Study Design: Adverse events were identified among the original and reabstracted records using ICD10-CA (Canadian adaptation of ICD10) codes flagged as postadmission complications. Propensity score matching and multivariate regression analysis were used to estimate the cost of the adverse events and to determine the sensitivity of cost estimates to diagnostic coding error. Principal Findings: Estimates of the cost of the adverse events ranged from $16,008 (metabolic derangement) to $30,176 (upper gastrointestinal bleeding). Coding errors caused the total cost attributable to the adverse events to be underestimated by 16 percent. The impact of coding error on adverse event cost estimates was highly variable at the organizational level. Conclusions: Estimates of adverse event costs are highly sensitive to coding error. Adverse event costs may be significantly underestimated if the likelihood of error is ignored. PMID:22091908

  2. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Although every effort is made in various online SOC estimation methods to increase the estimation accuracy as much as possible within the limited on-chip resources, little of the literature discusses the error sources of those SOC estimation methods. This paper first reviews the commonly studied SOC estimation methods using a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed in terms of measured values, models, algorithms and state parameters. Subsequently, error flow charts are proposed to trace the error sources from signal measurement through the models and algorithms of the online SOC estimation methods widely used in new energy vehicles. Finally, the choice of more reliable and applicable SOC estimation methods under different working conditions is discussed, and the future development of promising online SOC estimation methods is suggested.
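
    As a minimal sketch of one such error source (all numbers illustrative, not from the paper), consider coulomb counting, where a constant current-sensor bias integrates into a linearly growing SOC error:

```python
import numpy as np

capacity_As = 50.0 * 3600.0            # 50 Ah cell, in ampere-seconds
dt = 1.0                               # 1 s sampling
t = np.arange(0.0, 3600.0, dt)         # one hour of discharge
true_current = np.full_like(t, 10.0)   # 10 A true current
measured = true_current + 0.05         # +50 mA sensor bias (hypothetical)

# coulomb counting: SOC_k = SOC_0 - sum(I * dt) / Q
soc_true = 1.0 - np.cumsum(true_current) * dt / capacity_As
soc_est = 1.0 - np.cumsum(measured) * dt / capacity_As
drift = soc_true[-1] - soc_est[-1]     # grows linearly as bias * t / Q
```

    After one hour the estimate has drifted by bias·t/Q, which is why pure integration methods need periodic correction from a model or open-circuit-voltage reference.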

  3. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. A posteriori error estimates can also be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  4. A Complementary Note to 'A Lag-1 Smoother Approach to System-Error Estimation': The Intrinsic Limitations of Residual Diagnostics

    NASA Technical Reports Server (NTRS)

    Todling, Ricardo

    2015-01-01

    Recently, this author studied an approach to the estimation of system error based on combining observation residuals derived from a sequential filter and fixed lag-1 smoother. While extending the methodology to a variational formulation, experimenting with simple models and making sure consistency was found between the sequential and variational formulations, the limitations of the residual-based approach came clearly to the surface. This note uses the sequential assimilation application to simple nonlinear dynamics to highlight the issue. Only when some of the underlying error statistics are assumed known is it possible to estimate the unknown component. In general, when considerable uncertainties exist in the underlying statistics as a whole, attempts to obtain separate estimates of the various error covariances are bound to lead to misrepresentation of errors. The conclusions are particularly relevant to present-day attempts to estimate observation-error correlations from observation residual statistics. A brief illustration of the issue is also provided by comparing estimates of error correlations derived from a quasi-operational assimilation system and a corresponding Observing System Simulation Experiments framework.

  5. Optics measurement and correction for the Relativistic Heavy Ion Collider

    NASA Astrophysics Data System (ADS)

    Shen, Xiaozhe

    The quality of beam optics is of great importance for the performance of a high energy accelerator like the Relativistic Heavy Ion Collider (RHIC). The turn-by-turn (TBT) beam position monitor (BPM) data can be used to derive beam optics. However, the accuracy of the derived beam optics is often limited by the performance and imperfections of instruments as well as measurement methods and conditions. Therefore, a robust and model-independent data analysis method is highly desired to extract noise-free information from TBT BPM data. As a robust signal-processing technique, an independent component analysis (ICA) algorithm called second order blind identification (SOBI) has been proven to be particularly efficient in extracting physical beam signals from TBT BPM data even in the presence of instrument noise and errors. We applied the SOBI ICA algorithm to RHIC during the 2013 polarized proton operation to extract accurate linear optics from TBT BPM data of AC dipole driven coherent beam oscillation. From the same data, a first systematic estimation of RHIC BPM noise performance was also obtained by the SOBI ICA algorithm, and showed good agreement with the RHIC BPM configurations. Based on the accurate linear optics measurement, a beta-beat response matrix correction method and a scheme of using horizontal closed orbit bumps at sextupoles for arc beta-beat correction were successfully applied to reach a record-low beam optics error at RHIC. This thesis presents the principles of the SOBI ICA algorithm together with theory and experimental results of optics measurement and correction at RHIC.
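
    A flavor of second-order blind identification can be given with the one-lag AMUSE algorithm, a simplified relative of the SOBI algorithm used in the thesis (this sketch is not the thesis implementation):

```python
import numpy as np

def amuse(X, lag=1):
    # One-lag second-order blind identification (AMUSE), a simplified
    # relative of SOBI. X has shape (channels, samples).
    X = X - X.mean(axis=1, keepdims=True)
    # whiten the data
    U, s, _ = np.linalg.svd(X @ X.T / X.shape[1])
    W = (U / np.sqrt(s)).T
    Z = W @ X
    # diagonalize one symmetrized time-lagged covariance
    C = Z[:, :-lag] @ Z[:, lag:].T / (Z.shape[1] - lag)
    _, V = np.linalg.eigh((C + C.T) / 2)
    return V.T @ Z

# demo: unmix two sinusoidal "beam" signals from mixed observations
t = np.arange(4000)
S = np.vstack([np.sin(0.10 * t), np.sin(0.23 * t)])
A = np.array([[1.0, 0.6], [0.5, 1.0]])   # arbitrary mixing matrix
Y = amuse(A @ S)
```

    Each recovered component correlates almost perfectly (up to sign and order) with one of the source signals, which is the property that lets SOBI pull coherent betatron signals out of noisy multi-BPM data.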

  6. Optical absorption spectra of the uranium (4+) ion in the thorium germanate matrix

    NASA Astrophysics Data System (ADS)

    Gajek, Z.; Krupa, J. C.; Antic-Fidancev, E.

    1997-01-01

    Visible and infrared absorption measurements on the U4+ ion in the tetragonal zircon-type matrix ThGeO4 are reported and analysed in terms of the standard parametrization scheme. The observed 17 main peaks and a number of less intense lines have been assigned and fitted to most of the 32 allowed electric dipole transitions. The free-ion parameters obtained for the model Hamiltonian, as well as the corresponding crystal-field parameters, agree fairly well with the initial theoretical estimations. The results are discussed in relation to the previous spectroscopic study on the scheelite-type matrix.

  7. Benchmarking the Performance of Exchange-Correlation Functionals for Predicting Two-Photon Absorption Strengths.

    PubMed

    Beerepoot, Maarten T P; Alam, Md Mehboob; Bednarska, Joanna; Bartkowiak, Wojciech; Ruud, Kenneth; Zaleśny, Robert

    2018-06-15

    The present work investigates the performance of exchange-correlation functionals in the prediction of two-photon absorption (2PA) strengths. For this purpose, we considered six common functionals used for studying 2PA processes and tested these on six organoboron chelates. The set consisted of two semilocal (PBE and BLYP), two hybrid (B3LYP and PBE0), and two range-separated (LC-BLYP and CAM-B3LYP) functionals. The RI-CC2 method was chosen as a reference level and was found to give results consistent with the experimental data that are available for three of the molecules considered. Of the six exchange-correlation functionals studied, only the range-separated functionals predict an ordering of the 2PA strengths that is consistent with experimental and RI-CC2 results. Even though the range-separated functionals predict correct relative trends, the absolute values for the 2PA strengths are underestimated by a factor of 2-6 for the molecules considered. An in-depth analysis, on the basis of the derived generalized few-state model expression for the 2PA strength for a coupled-cluster wave function, reveals that the problem with these functionals can be linked to underestimated excited-state dipole moments and, to a lesser extent, overestimated excitation energies. The semilocal and hybrid functionals exhibit less predictable errors and a variation in the 2PA strengths in disagreement with the reference results. The semilocal and hybrid functionals show smaller average errors than the range-separated functionals, but our analysis reveals that this is due to fortuitous error cancellation between excitation energies and the transition dipole moments. Our results constitute a warning against using currently available exchange-correlation functionals in the prediction of 2PA strengths and highlight the need for functionals that correctly describe the electron density of excited electronic states.

  8. Can we estimate total magnetization directions from aeromagnetic data using Helbig's integrals?

    USGS Publications Warehouse

    Phillips, J.D.

    2005-01-01

    An algorithm that implements Helbig's (1963) integrals for estimating the vector components (mx, my, mz) of the magnetic dipole moment from the first-order moments of the vector magnetic field components (ΔX, ΔY, ΔZ) is tested on real and synthetic data. After a grid of total field aeromagnetic data is converted to vector component grids using Fourier filtering, Helbig's infinite integrals are evaluated as finite integrals in small moving windows using a quadrature algorithm based on the 2-D trapezoidal rule. Prior to integration, best-fit planar surfaces must be removed from the component data within the data windows in order to make the results independent of the coordinate system origin. Two different approaches are described for interpreting the results of the integration. In the "direct" method, results from pairs of different window sizes are compared to identify grid nodes where the angular difference between solutions is small. These solutions provide valid estimates of total magnetization directions for compact sources such as spheres or dipoles, but not for horizontally elongated or 2-D sources. In the "indirect" method, which is more forgiving of source geometry, results of the quadrature analysis are scanned for solutions that are parallel to a specified total magnetization direction.
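
    The two preprocessing ingredients, best-fit plane removal and 2-D trapezoidal quadrature over a window, can be sketched as follows (a schematic of the described steps, not the USGS code):

```python
import numpy as np

def remove_best_fit_plane(grid, dx=1.0, dy=1.0):
    # Subtract the least-squares plane a + b*x + c*y from a data window,
    # the step needed to make the window integrals origin-independent.
    ny, nx = grid.shape
    y, x = np.mgrid[0:ny, 0:nx]
    G = np.column_stack([np.ones(grid.size),
                         (x * dx).ravel(), (y * dy).ravel()])
    coeffs, *_ = np.linalg.lstsq(G, grid.ravel(), rcond=None)
    return grid - (G @ coeffs).reshape(ny, nx)

def trapezoid_2d(grid, dx=1.0, dy=1.0):
    # Finite window integral via the 2-D trapezoidal rule
    # (half weights on the window edges).
    wx = np.ones(grid.shape[1])
    wx[[0, -1]] = 0.5
    wy = np.ones(grid.shape[0])
    wy[[0, -1]] = 0.5
    return dx * dy * (wy @ grid @ wx)
```

    Plane removal annihilates an exact plane, and the quadrature of a constant field recovers the window area, the two sanity checks one would want before combining moments of ΔX, ΔY, ΔZ into moment estimates.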

  9. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2015-03-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which is to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.
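
    For the indirect approach, the standard first-order propagation rule for a product of independent components can be sketched as follows (illustrative only; the paper combines the error maps of its own component models):

```python
import numpy as np

def product_stock_error(values, errors):
    # First-order error propagation for a stock computed as a product of
    # independent components (e.g. SOC concentration x bulk density x
    # depth): relative variances of the components add.
    values = np.asarray(values, dtype=float)
    errors = np.asarray(errors, dtype=float)
    stock = values.prod()
    rel_var = ((errors / values) ** 2).sum()
    return stock, stock * np.sqrt(rel_var)

# e.g. 15 g/kg SOC, 1.4 g/cm3 bulk density, 0.3 m depth; the values and
# error magnitudes here are hypothetical
stock, sigma = product_stock_error([15.0, 1.4, 0.3], [2.0, 0.1, 0.02])
```

    Applied cell by cell, this is the kind of rule that turns per-component error maps into a single estimated-error map for the indirect stock estimate.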

  10. Comparison of spatial association approaches for landscape mapping of soil organic carbon stocks

    NASA Astrophysics Data System (ADS)

    Miller, B. A.; Koszinski, S.; Wehrhan, M.; Sommer, M.

    2014-11-01

    The distribution of soil organic carbon (SOC) can be variable at small analysis scales, but consideration of its role in regional and global issues demands the mapping of large extents. There are many different strategies for mapping SOC, among which are to model the variables needed to calculate the SOC stock indirectly or to model the SOC stock directly. The purpose of this research is to compare direct and indirect approaches to mapping SOC stocks from rule-based, multiple linear regression models applied at the landscape scale via spatial association. The final products for both strategies are high-resolution maps of SOC stocks (kg m-2), covering an area of 122 km2, with accompanying maps of estimated error. For the direct modelling approach, the estimated error map was based on the internal error estimations from the model rules. For the indirect approach, the estimated error map was produced by spatially combining the error estimates of component models via standard error propagation equations. We compared these two strategies for mapping SOC stocks on the basis of the qualities of the resulting maps as well as the magnitude and distribution of the estimated error. The direct approach produced a map with less spatial variation than the map produced by the indirect approach. The increased spatial variation represented by the indirect approach improved R2 values for the topsoil and subsoil stocks. Although the indirect approach had a lower mean estimated error for the topsoil stock, the mean estimated error for the total SOC stock (topsoil + subsoil) was lower for the direct approach. For these reasons, we recommend the direct approach to modelling SOC stocks be considered a more conservative estimate of the SOC stocks' spatial distribution.

  11. Fully anisotropic goal-oriented mesh adaptation for 3D steady Euler equations

    NASA Astrophysics Data System (ADS)

    Loseille, A.; Dervieux, A.; Alauzet, F.

    2010-04-01

    This paper studies the coupling between anisotropic mesh adaptation and goal-oriented error estimate. The former is very well suited to the control of the interpolation error. It is generally interpreted as a local geometric error estimate. On the contrary, the latter is preferred when studying approximation errors for PDEs. It generally involves non local error contributions. Consequently, a full and strong coupling between both is hard to achieve due to this apparent incompatibility. This paper shows how to achieve this coupling in three steps. First, a new a priori error estimate is proved in a formal framework adapted to goal-oriented mesh adaptation for output functionals. This estimate is based on a careful analysis of the contributions of the implicit error and of the interpolation error. Second, the error estimate is applied to the set of steady compressible Euler equations which are solved by a stabilized Galerkin finite element discretization. A goal-oriented error estimation is derived. It involves the interpolation error of the Euler fluxes weighted by the gradient of the adjoint state associated with the observed functional. Third, rewritten in the continuous mesh framework, the previous estimate is minimized on the set of continuous meshes thanks to a calculus of variations. The optimal continuous mesh is then derived analytically. Thus, it can be used as a metric tensor field to drive the mesh adaptation. From a numerical point of view, this method is completely automatic, intrinsically anisotropic, and does not depend on any a priori choice of variables to perform the adaptation. 3D examples of steady flows around supersonic and transonic jets are presented to validate the current approach and to demonstrate its efficiency.

  12. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    DOE PAGES

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
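
    The core adjoint-weighted-residual idea can be illustrated on a plain linear algebraic problem, where the estimate is exact (a sketch of the general mechanism, not of the sparse grid implementation in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear model problem: A u = b, quantity of interest J(u) = g^T u.
# For a linear problem the adjoint-weighted residual recovers the
# functional error exactly; the paper uses the same mechanism to
# estimate the interpolation error of a sparse grid surrogate.
n = 50
A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
g = rng.standard_normal(n)

u = np.linalg.solve(A, b)                 # "truth"
u_h = u + 0.01 * rng.standard_normal(n)   # an inexact approximation
lam = np.linalg.solve(A.T, g)             # adjoint solve: A^T lam = g
eta = lam @ (b - A @ u_h)                 # adjoint-weighted residual
true_err = g @ u - g @ u_h                # actual functional error
```

    Since eta = lam^T A (u - u_h) = g^T (u - u_h), the estimate equals the true functional error here; for nonlinear or interpolated problems it becomes an estimate that can both correct the functional value and flag where to refine.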

  13. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable grid-search method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method is distinguished by its reliability and rigorous geometric modelling of the orbital error signal but does not account for interfering large-scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.
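
    The network idea, observing only differences of per-image parameters and resolving the datum defect with a minimum-norm condition, can be sketched with a toy adjustment (not the authors' implementation):

```python
import numpy as np

# Each interferogram (i, j) observes the difference e_j - e_i of
# per-image orbit-error parameters. The design matrix has a rank defect
# of one (an unknown common offset), so a minimum-norm least-squares
# solution (pseudoinverse) yields quasi-absolute per-image estimates.
pairs = [(0, 1), (1, 2), (0, 2), (2, 3), (1, 3)]
e_true = np.array([0.4, -0.1, 0.3, -0.6])   # zero-mean by construction

A = np.zeros((len(pairs), len(e_true)))
for row, (i, j) in enumerate(pairs):
    A[row, i], A[row, j] = -1.0, 1.0

rng = np.random.default_rng(0)
d = A @ e_true + 0.001 * rng.standard_normal(len(pairs))
e_hat = np.linalg.pinv(A) @ d               # minimum-norm solution
```

    Because the minimum-norm solution lies in the row space of A, the recovered per-image errors automatically sum to zero, which is one way to pin down the otherwise unobservable common offset.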

  14. An improved estimator for the hydration of fat-free mass from in vivo measurements subject to additive technical errors.

    PubMed

    Kinnamon, Daniel D; Lipsitz, Stuart R; Ludwig, David A; Lipshultz, Steven E; Miller, Tracie L

    2010-04-01

    The hydration of fat-free mass, or hydration fraction (HF), is often defined as a constant body composition parameter in a two-compartment model and then estimated from in vivo measurements. We showed that the widely used estimator for the HF parameter in this model, the mean of the ratios of measured total body water (TBW) to fat-free mass (FFM) in individual subjects, can be inaccurate in the presence of additive technical errors. We then proposed a new instrumental variables estimator that accurately estimates the HF parameter in the presence of such errors. In Monte Carlo simulations, the mean of the ratios of TBW to FFM was an inaccurate estimator of the HF parameter, and inferences based on it had actual type I error rates more than 13 times the nominal 0.05 level under certain conditions. The instrumental variables estimator was accurate and maintained an actual type I error rate close to the nominal level in all simulations. When estimating and performing inference on the HF parameter, the proposed instrumental variables estimator should yield accurate estimates and correct inferences in the presence of additive technical errors, but the mean of the ratios of TBW to FFM in individual subjects may not.
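
    The contrast between the two estimators can be reproduced with a toy simulation (all distributions and noise levels are illustrative choices, and the instrument here, a second independent FFM measurement, is only one possible instrument):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy illustration: with additive technical error on FFM, the mean of
# TBW/FFM ratios is biased for the hydration fraction, while an
# instrumental-variables slope stays consistent. Numbers are
# illustrative, not the paper's.
HF = 0.73
n = 200_000
ffm = rng.uniform(40.0, 60.0, n)             # true fat-free mass, kg
tbw = HF * ffm                               # true total body water
ffm_meas = ffm + rng.normal(0.0, 4.0, n)     # additive technical error
tbw_meas = tbw + rng.normal(0.0, 1.0, n)
instrument = ffm + rng.normal(0.0, 4.0, n)   # independent repeat measurement

ratio_mean = np.mean(tbw_meas / ffm_meas)    # biased upward (Jensen)
iv_slope = (np.cov(instrument, tbw_meas)[0, 1]
            / np.cov(instrument, ffm_meas)[0, 1])
```

    The mean of ratios lands visibly above 0.73 because 1/(FFM + error) is convex in the error, while the instrumental-variables slope recovers the true hydration fraction.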

  15. Optimal post-experiment estimation of poorly modeled dynamic systems

    NASA Technical Reports Server (NTRS)

    Mook, D. Joseph

    1988-01-01

    Recently, a novel strategy for post-experiment state estimation of discretely-measured dynamic systems has been developed. The method accounts for errors in the system dynamic model equations in a more general and rigorous manner than do filter-smoother algorithms. The dynamic model error terms do not require the usual process noise assumptions of zero-mean, symmetrically distributed random disturbances. Instead, the model error terms require no prior assumptions other than piecewise continuity. The resulting state estimates are more accurate than filters for applications in which the dynamic model error clearly violates the typical process noise assumptions, and the available measurements are sparse and/or noisy. Estimates of the dynamic model error, in addition to the states, are obtained as part of the solution of a two-point boundary value problem, and may be exploited for numerous reasons. In this paper, the basic technique is explained, and several example applications are given. Included among the examples are both state estimation and exploitation of the model error estimates.

  16. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  17. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    PubMed Central

    Shi, Yun; Xu, Peiliang; Peng, Junhuan; Shi, Chuang; Liu, Jingnan

    2014-01-01

    Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as in GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if the errors were additive. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM. PMID:24434880
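
    The defining property of a multiplicative error model, error spread proportional to the true value, is easy to simulate (an illustrative sketch, not the authors' adjustment methods):

```python
import numpy as np

rng = np.random.default_rng(3)

# Multiplicative error model: y = x * (1 + eps), so the error standard
# deviation scales with the true value (illustrative numbers only).
x_small, x_large = 10.0, 100.0
eps_sd = 0.02                                   # 2% relative error
y_small = x_small * (1.0 + rng.normal(0.0, eps_sd, 100_000))
y_large = x_large * (1.0 + rng.normal(0.0, eps_sd, 100_000))

sd_small, sd_large = y_small.std(), y_large.std()
```

    The empirical spreads come out near 0.2 and 2.0, i.e. in the same 10:1 ratio as the true values, which is why treating such errors as additive (constant-variance) misweights the large measurements.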

  18. Effects of shape, size, and chromaticity of stimuli on estimated size in normally sighted, severely myopic, and visually impaired students.

    PubMed

    Huang, Kuo-Chen; Wang, Hsiu-Feng; Chen, Chun-Ching

    2010-06-01

    Effects of shape, size, and chromaticity of stimuli on participants' errors when estimating the size of simultaneously presented standard and comparison stimuli were examined. 48 Taiwanese college students ages 20 to 24 years old (M = 22.3, SD = 1.3) participated. Analysis showed that the error for estimated size was significantly greater for those in the low-vision group than for those in the normal-vision and severe-myopia groups. The errors were significantly greater with green and blue stimuli than with red stimuli. Circular stimuli produced smaller mean errors than did square stimuli. The actual size of the standard stimulus significantly affected the error for estimated size. Errors for estimations using smaller sizes were significantly higher than when the sizes were larger. Implications of the results for graphics-based interface design, particularly when taking account of visually impaired users, are discussed.

  19. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    NASA Astrophysics Data System (ADS)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.

  20. Optimized tuner selection for engine performance estimation

    NASA Technical Reports Server (NTRS)

    Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)

    2013-01-01

    A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
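
    The underdetermined tuner-selection idea can be sketched as an exhaustive search over tuner subsets that minimizes a theoretical mean-squared error. This is not the patented algorithm: the matrices below are random stand-ins for engine sensitivities, and a one-shot least-squares estimator replaces the Kalman filter, but the structure — more unknown parameters than sensors, and a subset chosen to minimize error in an output of interest — is the same:

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(9)

    # Hypothetical setup: 4 unknown health parameters, only 2 sensors, so a
    # 2-parameter "tuner" subset must be chosen.  We evaluate the
    # theoretical mean-squared error of an output of interest for every
    # subset and keep the best one.
    H = rng.normal(size=(2, 4))        # sensor sensitivity matrix (invented)
    G = rng.normal(size=(1, 4))        # output of interest (e.g. thrust)
    P0 = np.eye(4)                     # prior covariance of the parameters
    R = 0.01 * np.eye(2)               # sensor noise covariance

    def theoretical_mse(subset):
        V = np.eye(4)[:, list(subset)]           # selection matrix
        A = np.linalg.pinv(H @ V)                # LS estimator for the tuners
        B = G @ V @ A @ H - G                    # bias from untuned parameters
        # bias-induced error + noise-induced variance in the output
        return (B @ P0 @ B.T + G @ V @ A @ R @ A.T @ V.T @ G.T).item()

    best = min(combinations(range(4), 2), key=theoretical_mse)
    print(best, theoretical_mse(best))
    ```

    The derived error naturally splits into a bias term (from the parameters left untuned) and a variance term (from sensor noise), mirroring the bias and variance values derived in the methodology.
    
    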

  1. Characterizing the SWOT discharge error budget on the Sacramento River, CA

    NASA Astrophysics Data System (ADS)

    Yoon, Y.; Durand, M. T.; Minear, J. T.; Smith, L.; Merry, C. J.

    2013-12-01

    The Surface Water and Ocean Topography (SWOT) mission is an upcoming satellite (planned for 2020) that will provide surface-water elevation and surface-water extent globally. One goal of SWOT is the estimation of river discharge directly from SWOT measurements. SWOT discharge uncertainty has two sources. First, SWOT cannot directly measure the channel bathymetry and roughness coefficients necessary for discharge calculations; these parameters must be estimated from the measurements or from a priori information. Second, SWOT measurement errors directly affect the accuracy of the discharge estimate. This study focuses on characterizing parameter and measurement uncertainties for SWOT river discharge estimation. A Bayesian Markov chain Monte Carlo scheme is used to calculate parameter estimates, given the measurements of river height, slope and width, and mass and momentum constraints. The algorithm is evaluated using both simulated SWOT and AirSWOT (the airborne version of SWOT) observations over seven reaches (about 40 km) of the Sacramento River. The SWOT and AirSWOT observations are simulated by corrupting the 'true' HEC-RAS hydraulic modeling results with instrument error. The first experiment addresses how unknown bathymetry and roughness coefficients affect the accuracy of the river discharge algorithm. We find that the discharge error budget is almost completely dominated by unknown bathymetry and roughness; 81% of the error variance is explained by uncertainties in bathymetry and roughness. Second, we show how errors in the water-surface, slope, and width observations influence the accuracy of discharge estimates. There is significant sensitivity to water-surface, slope, and width errors because the bathymetry and roughness estimates are themselves sensitive to measurement errors. Increasing the water-surface error above 10 cm leads to a correspondingly sharp increase in bathymetry and roughness errors.
    Increasing the slope error above 1.5 cm/km significantly degrades the discharge estimates directly. As the width error increases past 20%, the discharge error budget becomes dominated by the width error. The two experiments above are based on AirSWOT scenarios; in addition, we explore the sensitivity of the algorithm under SWOT scenarios.
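
    A minimal sketch of the Bayesian estimation step, assuming a hypothetical wide rectangular reach governed by Manning's equation with invented numbers; the actual algorithm works from HEC-RAS "truth" with mass and momentum constraints, whereas here the discharges are taken as known and only the unobservable roughness and bed elevation are sampled:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy reach: Manning's equation Q = (1/n) * w * d**(5/3) * sqrt(S),
    # with unknown roughness n and bed elevation zb (depth d = h - zb).
    n_true, zb_true = 0.03, 10.0
    w, S = 80.0, 1e-4                            # width (m), slope (known here)
    Q = np.array([150.0, 300.0, 600.0, 900.0])   # discharges (invented)

    def surface(n, zb, Q):
        d = (n * Q / (w * np.sqrt(S))) ** 0.6    # invert Manning for depth
        return zb + d

    # Observed water-surface heights with 10 cm measurement error.
    h_obs = surface(n_true, zb_true, Q) + 0.10 * rng.standard_normal(Q.size)

    def log_post(theta):
        n, zb = theta
        if not (0.01 < n < 0.1 and 5.0 < zb < 15.0):   # flat priors
            return -np.inf
        r = h_obs - surface(n, zb, Q)
        return -0.5 * np.sum(r**2) / 0.10**2

    # Random-walk Metropolis over (n, zb).
    theta = np.array([0.05, 12.0])
    lp = log_post(theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.standard_normal(2) * [0.002, 0.05]
        lp_prop = log_post(prop)
        if np.log(rng.random()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    samples = np.array(samples[5000:])           # discard burn-in
    print(samples.mean(axis=0))                  # posterior mean of (n, zb)
    ```

    The posterior spread of (n, zb) widens as the height-error level grows, which is the mechanism behind the reported sensitivity of bathymetry and roughness to water-surface errors.
    
    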

  2. Joint estimation over multiple individuals improves behavioural state inference from animal movement data.

    PubMed

    Jonsen, Ian

    2016-02-08

    State-space models provide a powerful way to scale up inference of movement behaviours from individuals to populations when the inference is made across multiple individuals. Here, I show how a joint estimation approach that assumes individuals share identical movement parameters can lead to improved inference of behavioural states associated with different movement processes. I use simulated movement paths with known behavioural states to compare estimation error between nonhierarchical and joint estimation formulations of an otherwise identical state-space model. Behavioural state estimation error was strongly affected by the degree of similarity between movement patterns characterising the behavioural states, with less error when movements were strongly dissimilar between states. The joint estimation model improved behavioural state estimation relative to the nonhierarchical model for simulated data with heavy-tailed Argos location errors. When applied to Argos telemetry datasets from 10 Weddell seals, the nonhierarchical model estimated highly uncertain behavioural state switching probabilities for most individuals whereas the joint estimation model yielded substantially less uncertainty. The joint estimation model better resolved the behavioural state sequences across all seals. Hierarchical or joint estimation models should be the preferred choice for estimating behavioural states from animal movement data, especially when location data are error-prone.

  3. Economic measurement of medical errors using a hospital claims database.

    PubMed

    David, Guy; Gunnarsson, Candace L; Waters, Heidi C; Horblyuk, Ruslan; Kaplan, Harold S

    2013-01-01

    The primary objective of this study was to estimate the occurrence and costs of medical errors from the hospital perspective. Methods from a recent actuarial study of medical errors were used to identify medical injuries. A visit qualified as an injury visit if at least 1 of 97 injury groupings occurred at that visit, and the percentage of injuries caused by medical error was estimated. Visits with more than four injuries were removed from the population to avoid overestimation of cost. Population estimates were extrapolated from the Premier hospital database to all US acute care hospitals. There were an estimated 161,655 medical errors in 2008 and 170,201 medical errors in 2009. Extrapolated to the entire US population, there were more than 4 million unique injury visits containing more than 1 million unique medical errors each year. This analysis estimated that the total annual cost of measurable medical errors in the United States was $985 million in 2008 and just over $1 billion in 2009. The median cost per error to hospitals was $892 for 2008 and rose to $939 in 2009. Nearly one third of all medical injuries were due to error in each year. Medical errors directly impact patient outcomes and hospitals' profitability, especially since 2008 when Medicare stopped reimbursing hospitals for care related to certain preventable medical errors. Hospitals must rigorously analyze causes of medical errors and implement comprehensive preventative programs to reduce their occurrence as the financial burden of medical errors shifts to hospitals. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  4. On-line estimation of error covariance parameters for atmospheric data assimilation

    NASA Technical Reports Server (NTRS)

    Dee, Dick P.

    1995-01-01

    A simple scheme is presented for on-line estimation of covariance parameters in statistical data assimilation systems. The scheme is based on a maximum-likelihood approach in which estimates are produced on the basis of a single batch of simultaneous observations. Single-sample covariance estimation is reasonable as long as the number of available observations exceeds the number of tunable parameters by two or three orders of magnitude. Not much is known at present about model error associated with actual forecast systems. Our scheme can be used to estimate some important statistical model error parameters such as regionally averaged variances or characteristic correlation length scales. The advantage of the single-sample approach is that it does not rely on any assumptions about the temporal behavior of the covariance parameters: time-dependent parameter estimates can be continuously adjusted on the basis of current observations. This is of practical importance since it is likely that both model error and observation error depend strongly on the actual state of the atmosphere. The single-sample estimation scheme can be incorporated into any four-dimensional statistical data assimilation system that involves explicit calculation of forecast error covariances, including optimal interpolation (OI) and the simplified Kalman filter (SKF). The computational cost of the scheme is high but not prohibitive; on-line estimation of one or two covariance parameters in each analysis box of an operational boxed-OI system is currently feasible. A number of numerical experiments performed with an adaptive SKF and an adaptive version of OI, using a linear two-dimensional shallow-water model and artificially generated model error, are described. The performance of the nonadaptive versions of these methods turns out to depend rather strongly on correct specification of model error parameters.
These parameters are estimated under a variety of conditions, including uniformly distributed model error and time-dependent model error statistics.
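
    The single-batch maximum-likelihood idea can be sketched on a toy scalar problem. The variances, batch size and the simple grid search below are invented stand-ins (an operational system would maximize the innovation likelihood over its actual covariance model):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy single-batch tuning: observation-minus-forecast residuals
    # ("innovations") are modeled as v_i ~ N(0, s_b + s_o), with known
    # observation-error variance s_o and an unknown forecast (model) error
    # variance s_b estimated on-line from one batch of observations.
    s_b_true, s_o = 4.0, 1.0
    v = rng.normal(0.0, np.sqrt(s_b_true + s_o), size=2000)   # one batch

    def neg_log_lik(s_b):
        s = s_b + s_o
        return 0.5 * np.sum(np.log(s) + v**2 / s)

    # One tunable parameter vs. 2000 observations: a coarse grid suffices.
    grid = np.linspace(0.1, 10.0, 1000)
    s_b_hat = grid[np.argmin([neg_log_lik(s) for s in grid])]
    print(s_b_hat)   # close to the true value of 4
    ```

    Because the estimate uses only the current batch, it can be recomputed at every analysis time, which is what makes the parameters adaptive to the changing atmospheric state.
    
    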

  5. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    NASA Astrophysics Data System (ADS)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    In this study, the PERSIANN daily dataset is evaluated from 2000 to 2011 over 69 pixels covering the Urmia Lake basin in the northwest of Iran. Different analytical approaches and indexes are used to examine PERSIANN's precision in detecting and estimating rainfall rates. The residuals are decomposed into Hit, Miss and False Alarm (FA) estimation biases, while the decomposition into systematic and random error components is also analyzed seasonally and categorically. A new interpretation of estimation accuracy, named "reliability of PERSIANN estimations," is introduced, and the behavior of existing categorical/statistical measures and error components is analyzed seasonally over different rainfall-rate categories. This study yields new insights into the nature of PERSIANN errors over the Urmia Lake basin, a semi-arid region in the Middle East, including the following: - The contingency table indexes indicate better detection precision during spring and fall. - A relatively constant level of error is generally observed among different categories. The range of precipitation estimates at different rainfall-rate categories is nearly invariant, a sign of systematic error. - Low reliability of PERSIANN estimations is observed at different categories, mostly associated with a high level of FA error. However, as the precipitation rate increases, the ability and precision of PERSIANN in rainfall detection also increase. - The systematic and random error decomposition in this area shows that PERSIANN has more difficulty modeling the system and pattern of rainfall than it has bias due to rainfall uncertainties. The level of systematic error also increases considerably for heavier rainfalls.
    It is also important to note that PERSIANN's error characteristics vary with the conditions and rainfall patterns of each season, which shows the need for a seasonally distinct approach to calibrating this product. Overall, we believe the error-component analyses performed in this study can substantially help further local studies on post-calibration and bias reduction of PERSIANN estimations.
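
    One common way to split mean-square error into systematic and random parts is the Willmott-style regression decomposition, shown here on synthetic satellite-vs-gauge pairs rather than PERSIANN data:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Synthetic "gauge" rainfall and a biased, noisy satellite-style estimate.
    obs = rng.gamma(2.0, 4.0, size=500)                 # mm/day (invented)
    est = 0.6 * obs + 2.0 + rng.normal(0, 2.0, 500)

    b, a = np.polyfit(obs, est, 1)                      # OLS fit: est ~ obs
    fit = a + b * obs
    mse = np.mean((est - obs) ** 2)
    mse_systematic = np.mean((fit - obs) ** 2)          # pattern/model error
    mse_random = np.mean((est - fit) ** 2)              # noise error

    print(mse, mse_systematic + mse_random)             # the two parts sum to MSE
    ```

    Because the OLS residuals are orthogonal to the fitted line, the two components sum exactly to the total mean-square error, so a rise in the systematic share (as reported here for heavier rainfall) points to a pattern problem rather than mere noise.
    
    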

  6. Sampling Errors in Monthly Rainfall Totals for TRMM and SSM/I, Based on Statistics of Retrieved Rain Rates and Simple Models

    NASA Technical Reports Server (NTRS)

    Bell, Thomas L.; Kundu, Prasun K.; Einaudi, Franco (Technical Monitor)

    2000-01-01

    Estimates from TRMM satellite data of monthly total rainfall over an area are subject to substantial sampling errors due to the limited number of visits to the area by the satellite during the month. Quantitative comparisons of TRMM averages with data collected by other satellites and by ground-based systems require some estimate of the size of this sampling error. A method of estimating this sampling error based on the actual statistics of the TRMM observations and on some modeling work has been developed. "Sampling error" in TRMM monthly averages is defined here relative to the monthly total a hypothetical satellite permanently stationed above the area would have reported. "Sampling error" therefore includes contributions from the random and systematic errors introduced by the satellite remote sensing system. As part of our long-term goal of providing error estimates for each grid point accessible to the TRMM instruments, sampling error estimates for TRMM based on rain retrievals from TRMM microwave (TMI) data are compared for different times of the year and different oceanic areas (to minimize changes in the statistics due to algorithmic differences over land and ocean). Changes in sampling error estimates due to changes in rain statistics arising 1) from the evolution of the official algorithms used to process the data, and 2) from differences from other remote sensing systems, such as the Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave/Imager (SSM/I), are analyzed.

  7. Dependence of Interaction Free Energy between Solutes on an External Electrostatic Field

    PubMed Central

    Yang, Pei-Kun

    2013-01-01

    To explore the athermal effect of an external electrostatic field on the stabilities of protein conformations and the binding affinities of protein-protein/ligand interactions, the dependences of the polar and hydrophobic interactions on the external electrostatic field, −Eext, were studied using molecular dynamics (MD) simulations. By decomposing Eext into components along and perpendicular to the direction formed by the two solutes, the effect of Eext on the interactions between these two solutes can be estimated from the effects of these two components. Eext was applied along the direction of the electric dipole formed by two solutes with opposite charges. The attractive interaction free energy between these two solutes decreased for solutes treated as point charges. In contrast, the attractive interaction free energy between these two solutes increased, as observed by MD simulations, for Eext = 40 or 60 MV/cm. Eext was applied perpendicular to the direction of the electric dipole formed by these two solutes. The attractive interaction free energy was increased for Eext = 100 MV/cm as a result of dielectric saturation. The force on the solutes along the direction of Eext computed from MD simulations was greater than that estimated from a continuum solvent in which the solutes were treated as point charges. To explore the hydrophobic interactions, Eext was applied to a water cluster containing two neutral solutes. The repulsive force between these solutes was decreased/increased for Eext along/perpendicular to the direction of the electric dipole formed by these two solutes. PMID:23852018

  8. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
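
    The bias mechanism for the simple linear case is the classical errors-in-variables attenuation result; the rainfall-runoff numbers below are synthetic and unrelated to the USGS model:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Regressing runoff on rainfall measured with error attenuates the
    # estimated slope by the factor var(x) / (var(x) + var(u)).
    n = 100_000
    rain_true = rng.normal(50.0, 10.0, n)          # true rainfall (invented units)
    runoff = 0.8 * rain_true + rng.normal(0, 2.0, n)
    rain_meas = rain_true + rng.normal(0, 5.0, n)  # measurement error, sd = 5

    slope_obs = np.polyfit(rain_meas, runoff, 1)[0]
    attenuation = 10.0**2 / (10.0**2 + 5.0**2)     # = 0.8
    print(slope_obs, 0.8 * attenuation)            # both near 0.64, not 0.8
    ```

    The fitted slope is biased toward zero even with enormous samples, which is why calibrating a model against erroneous inputs yields biased parameters rather than merely noisier ones.
    
    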

  9. Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife

    ERIC Educational Resources Information Center

    Jennrich, Robert I.

    2008-01-01

    The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…

  10. Characterization of electrophysiological propagation by multichannel sensors

    PubMed Central

    Bradshaw, L. Alan; Kim, Juliana H.; Somarajan, Suseela; Richards, William O.; Cheng, Leo K.

    2016-01-01

    Objective The propagation of electrophysiological activity measured by multichannel devices could have significant clinical implications. Gastric slow waves normally propagate along longitudinal paths that are evident in recordings of serosal potentials and transcutaneous magnetic fields. We employed a realistic model of gastric slow wave activity to simulate the transabdominal magnetogastrogram (MGG) recorded in a multichannel biomagnetometer and to determine characteristics of electrophysiological propagation from MGG measurements. Methods Using MGG simulations of slow wave sources in a realistic abdomen (both superficial and deep sources) and in a horizontally-layered volume conductor, we compared two analytic methods (Second Order Blind Identification, SOBI and Surface Current Density, SCD) that allow quantitative characterization of slow wave propagation. We also evaluated the performance of the methods with simulated experimental noise. The methods were also validated in an experimental animal model. Results Mean square errors in position estimates were within 2 cm of the correct position, and average propagation velocities within 2 mm/s of the actual velocities. SOBI propagation analysis outperformed the SCD method for dipoles in the superficial and horizontal layer models with and without additive noise. The SCD method gave better estimates for deep sources, but did not handle additive noise as well as SOBI. Conclusion SOBI-MGG and SCD-MGG were used to quantify slow wave propagation in a realistic abdomen model of gastric electrical activity. Significance These methods could be generalized to any propagating electrophysiological activity detected by multichannel sensor arrays. PMID:26595907

  11. Stress Recovery and Error Estimation for 3-D Shell Structures

    NASA Technical Reports Server (NTRS)

    Riggs, H. R.

    2000-01-01

    The C1-continuous stress fields obtained from finite element analyses are in general lower-order accurate than are the corresponding displacement fields. Much effort has focused on increasing their accuracy and/or their continuity, both for improved stress prediction and especially error estimation. A previous project developed a penalized, discrete least squares variational procedure that increases the accuracy and continuity of the stress field. The variational problem is solved by a post-processing, 'finite-element-type' analysis to recover a smooth, more accurate, C1-continuous stress field given the 'raw' finite element stresses. This analysis has been named the SEA/PDLS. The recovered stress field can be used in a posteriori error estimators, such as the Zienkiewicz-Zhu error estimator or equilibrium error estimators. The procedure was well-developed for the two-dimensional (plane) case involving low-order finite elements. It has been demonstrated that, if optimal finite element stresses are used for the post-processing, the recovered stress field is globally superconvergent. Extension of this work to three-dimensional solids is straightforward. Attachment: Stress recovery and error estimation for shell structures (abstract only). A 4-node, shear-deformable flat shell element developed via explicit Kirchhoff constraints (abstract only). A novel four-node quadrilateral smoothing element for stress enhancement and error estimation (abstract only).

  12. Joint nonparametric correction estimator for excess relative risk regression in survival analysis with exposure measurement error

    PubMed Central

    Wang, Ching-Yun; Cullings, Harry; Song, Xiao; Kopecky, Kenneth J.

    2017-01-01

    Observational epidemiological studies often confront the problem of estimating exposure-disease relationships when the exposure is not measured exactly. In this paper, we investigate exposure measurement error in excess relative risk regression, which is a widely used model in radiation exposure effect research. In the study cohort, a surrogate variable is available for the true unobserved exposure variable. The surrogate variable satisfies a generalized version of the classical additive measurement error model, but it may or may not have repeated measurements. In addition, an instrumental variable is available for individuals in a subset of the whole cohort. We develop a nonparametric correction (NPC) estimator using data from the subcohort, and further propose a joint nonparametric correction (JNPC) estimator using all observed data to adjust for exposure measurement error. An optimal linear combination estimator of JNPC and NPC is further developed. The proposed estimators are nonparametric: they are consistent without imposing a covariate or error distribution and are robust to heteroscedastic errors. Finite sample performance is examined via a simulation study. We apply the developed methods to data from the Radiation Effects Research Foundation, in which chromosome aberration is used to adjust for the effects of radiation dose measurement error on the estimation of radiation dose responses. PMID:29354018

  13. Numerically accurate computational techniques for optimal estimator analyses of multi-parameter models

    NASA Astrophysics Data System (ADS)

    Berger, Lukas; Kleinheinz, Konstantin; Attili, Antonio; Bisetti, Fabrizio; Pitsch, Heinz; Mueller, Michael E.

    2018-05-01

    Modelling unclosed terms in partial differential equations typically involves two steps: First, a set of known quantities needs to be specified as input parameters for a model, and second, a specific functional form needs to be defined to model the unclosed terms by the input parameters. Both steps involve a certain modelling error, with the former known as the irreducible error and the latter referred to as the functional error. Typically, only the total modelling error, which is the sum of functional and irreducible error, is assessed, but the concept of the optimal estimator enables the separate analysis of the total and the irreducible errors, yielding a systematic modelling error decomposition. In this work, attention is paid to the techniques themselves required for the practical computation of irreducible errors. Typically, histograms are used for optimal estimator analyses, but this technique is found to add a non-negligible spurious contribution to the irreducible error if models with multiple input parameters are assessed. Thus, the error decomposition of an optimal estimator analysis becomes inaccurate, and misleading conclusions concerning modelling errors may be drawn. In this work, numerically accurate techniques for optimal estimator analyses are identified and a suitable evaluation of irreducible errors is presented. Four different computational techniques are considered: a histogram technique, artificial neural networks, multivariate adaptive regression splines, and an additive model based on a kernel method. For multiple input parameter models, only artificial neural networks and multivariate adaptive regression splines are found to yield satisfactorily accurate results. Beyond a certain number of input parameters, the assessment of models in an optimal estimator analysis even becomes practically infeasible if histograms are used. 
The optimal estimator analysis in this paper is applied to modelling the filtered soot intermittency in large eddy simulations using a dataset of a direct numerical simulation of a non-premixed sooting turbulent flame.
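
    A minimal sketch of an optimal-estimator analysis using the histogram (binned conditional mean) technique, on synthetic single-parameter data rather than the soot DNS dataset:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # The irreducible error of modelling Y from input X is E[(Y - E[Y|X])^2];
    # here E[Y|X] is estimated with binned conditional means.
    n = 200_000
    x = rng.uniform(0, 1, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)   # true irreducible var = 0.09

    bins = np.linspace(0, 1, 51)
    idx = np.digitize(x, bins) - 1
    cond_mean = np.array([y[idx == k].mean() for k in range(50)])
    irreducible = np.mean((y - cond_mean[idx]) ** 2)

    # A deliberately crude (linear) model: its total error exceeds the
    # irreducible part, and the surplus is the functional error.
    coef = np.polyfit(x, y, 1)
    total = np.mean((y - np.polyval(coef, x)) ** 2)
    functional = total - irreducible
    print(irreducible, functional)
    ```

    With a single input parameter the histogram technique is adequate; the spurious contribution to the irreducible error that the authors analyze arises as the number of input parameters (and hence empty or sparsely populated bins) grows.
    
    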

  14. Sonographic estimation of fetal weight: comparison of bias, precision and consistency using 12 different formulae.

    PubMed

    Anderson, N G; Jolley, I J; Wells, J E

    2007-08-01

    To determine the major sources of error in ultrasonographic assessment of fetal weight and whether they have changed over the last decade. We performed a prospective observational study in 1991 and again in 2000 of a mixed-risk pregnancy population, estimating fetal weight within 7 days of delivery. In 1991, the Rose and McCallum formula was used for 72 deliveries. Inter- and intraobserver agreement was assessed within this group. Bland-Altman measures of agreement from log data were calculated as ratios. We repeated the study in 2000 in 208 consecutive deliveries, comparing predicted and actual weights for 12 published equations using Bland-Altman and percentage error methods. We compared bias (mean percentage error), precision (SD percentage error), and their consistency across the weight ranges. 95% limits of agreement ranged from -4.4% to +3.3% for inter- and intraobserver estimates, but were -18.0% to +24.0% for estimated and actual birth weight. There was no improvement in accuracy between 1991 and 2000. In 2000 only six of the 12 published formulae had overall bias within 7% and precision within 15%. There was greater bias and poorer precision in nearly all equations if the birth weight was < 1,000 g. Observer error is a relatively minor component of the error in estimating fetal weight; error due to the equation is a larger source of error. Improvements in ultrasound technology have not improved the accuracy of estimating fetal weight. Comparison of methods of estimating fetal weight requires statistical methods that can separate out bias, precision and consistency. Estimating fetal weight in the very low birth weight infant is subject to much greater error than it is in larger babies. Copyright (c) 2007 ISUOG. Published by John Wiley & Sons, Ltd.
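
    The bias/precision/limits-of-agreement summary used to compare formulae can be sketched as follows, with synthetic birth weights and an invented multiplicative error model standing in for the study data:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Synthetic actual birth weights (g) and a formula's estimates with a
    # small multiplicative bias and scatter (both invented).
    actual = rng.normal(3400.0, 500.0, 208)
    estimated = actual * np.exp(rng.normal(0.01, 0.08, actual.size))

    pct_error = 100.0 * (estimated - actual) / actual
    bias = pct_error.mean()                  # mean percentage error
    precision = pct_error.std(ddof=1)        # SD of percentage error
    loa = (bias - 1.96 * precision, bias + 1.96 * precision)   # 95% limits
    print(round(bias, 1), round(precision, 1), loa)
    ```

    Separating bias (systematic offset) from precision (scatter) is what lets observer error be distinguished from equation error, as the abstract argues.
    
    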

  15. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
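
    A sketch of the one-factor random-effects (ANOVA) computation on synthetic balanced data; the patient/fraction counts and the 2 mm/3 mm components are invented, and the confidence-interval machinery of the note is omitted:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # One-factor random-effects model of setup error:
    # x_ij = mu + b_i + e_ij for patient i, fraction j (synthetic data).
    patients, fractions = 30, 10
    sys_sd, rand_sd = 2.0, 3.0       # "true" systematic and random SDs (mm)
    b = rng.normal(0, sys_sd, (patients, 1))
    x = 1.0 + b + rng.normal(0, rand_sd, (patients, fractions))

    patient_means = x.mean(axis=1)
    msb = fractions * patient_means.var(ddof=1)   # between-patient mean square
    msw = np.mean(x.var(axis=1, ddof=1))          # within-patient mean square

    sigma_random = np.sqrt(msw)                                  # random part
    sigma_systematic = np.sqrt(max((msb - msw) / fractions, 0.0))  # systematic part
    print(sigma_systematic, sigma_random)
    ```

    The (msb - msw)/fractions correction is the point of the note: taking the SD of patient means directly, as the conventional method does, leaves the random component's contribution inside and overestimates systematic error, most severely when the number of fractions is small.
    
    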

  16. Advanced dynamic statistical parametric mapping with MEG in localizing epileptogenicity of the bottom of sulcus dysplasia.

    PubMed

    Nakajima, Midori; Wong, Simeon; Widjaja, Elysa; Baba, Shiro; Okanishi, Tohru; Takada, Lynne; Sato, Yosuke; Iwata, Hiroki; Sogabe, Maya; Morooka, Hikaru; Whitney, Robyn; Ueda, Yuki; Ito, Tomoshiro; Yagyu, Kazuyori; Ochi, Ayako; Carter Snead, O; Rutka, James T; Drake, James M; Doesburg, Sam; Takeuchi, Fumiya; Shiraishi, Hideaki; Otsubo, Hiroshi

    2018-06-01

    To investigate whether advanced dynamic statistical parametric mapping (AdSPM) using magnetoencephalography (MEG) can better localize focal cortical dysplasia at the bottom of a sulcus (FCDB). We analyzed 15 children with a diagnosis of FCDB confirmed in the surgical specimen and visible on 3 T MRI, using MEG. Using AdSPM, we analyzed a ±50 ms epoch relative to each single moving dipole (SMD) and applied a summation technique to estimate the source activity. The most active area in AdSPM was defined as the location of the AdSPM spike source. We compared the spatial congruence between MRI-visible FCDB and (1) the dipole cluster in the SMD method; and (2) the AdSPM spike source. AdSPM localized FCDB in 12 (80%) of 15 children, whereas the dipole cluster localized it in six (40%). The AdSPM spike source was concordant with the seizure onset zone in nine (82%) of 11 children who underwent intracranial video EEG. Eleven children who underwent resective surgery achieved seizure freedom over a follow-up period of 1.9 ± 1.5 years. Ten (91%) of them had an AdSPM spike source in the resection area. AdSPM can noninvasively and neurophysiologically localize epileptogenic FCDB, whether or not it overlaps with the dipole cluster. This is the first study to localize epileptogenic FCDB using MEG. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.

  17. Induced CMB quadrupole from pointing offsets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moss, Adam; Scott, Douglas; Sigurdson, Kris, E-mail: adammoss@phas.ubc.ca, E-mail: dscott@phas.ubc.ca, E-mail: krs@phas.ubc.ca

    2011-01-01

    Recent claims in the literature have suggested that the WMAP quadrupole is not primordial in origin, and arises from an aliasing of the much larger dipole field because of incorrect satellite pointing. We attempt to reproduce this result and delineate the key physics leading to the effect. We find that, even if real, the induced quadrupole would be smaller than the WMAP value. We discuss reasons why the WMAP data are unlikely to suffer from this particular systematic effect, including the implications for observations of point sources. Given this evidence against the reality of the effect, the similarity between the pointing-offset-induced signal and the actual quadrupole then appears to be quite puzzling. However, we find that the effect arises from a convolution between the gradient of the dipole field and anisotropic coverage of the scan direction at each pixel. There is something of a directional conspiracy here — the dipole signal lies close to the Ecliptic Plane, and its direction, together with the WMAP scan strategy, results in a strong coupling to the Y_{2,−1} component in Ecliptic co-ordinates. The dominant strength of this component in the measured quadrupole suggests that one should exercise increased caution in interpreting its estimated amplitude. The Planck satellite has a different scan strategy which does not so directly couple the dipole and quadrupole in this way and will soon provide an independent measurement.

  18. A stopping criterion for the iterative solution of partial differential equations

    NASA Astrophysics Data System (ADS)

    Rao, Kaustubh; Malan, Paul; Perot, J. Blair

    2018-01-01

    A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
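    The idea of estimating the remaining solution error from prior solution changes can be sketched as follows. This is a minimal illustration under an assumed linear convergence rate, not the authors' actual estimator; the function name and test problem are hypothetical:

    ```python
    def solve_with_error_stop(g, x0, tol=1e-8, max_iter=1000):
        """Fixed-point iteration x <- g(x) with an error-based stopping criterion.

        Assumes roughly linear convergence, so the true error after an update
        satisfies e ~ rho * |d| / (1 - rho), where d is the latest solution
        change and rho is estimated from ratios of successive changes.
        """
        x = x0
        d_prev = None
        for _ in range(max_iter):
            x_new = g(x)
            d = x_new - x
            if d_prev is not None and abs(d_prev) > 0.0:
                rho = abs(d) / abs(d_prev)      # estimated contraction rate
                if rho < 1.0:                   # only trust a converging ratio
                    err_est = rho * abs(d) / (1.0 - rho)
                    if err_est < tol:
                        return x_new, err_est
            d_prev = d
            x = x_new
        return x, float("inf")

    # Contraction with fixed point x* = 2 and contraction rate rho = 0.5
    x, err = solve_with_error_stop(lambda x: 0.5 * x + 1.0, x0=0.0, tol=1e-10)
    ```

    Note that the loop stops on the estimated *solution* error rather than on the residual, which is the point of the criterion; when the change ratios are noisy, the abstract's fallback to a singular-value estimate would take over.
    
    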

  19. Consequences of Secondary Calibrations on Divergence Time Estimates.

    PubMed

    Schenk, John J

    2016-01-01

    Secondary calibrations (calibrations based on the results of previous molecular dating studies) are commonly applied in divergence time analyses in groups that lack fossil data; however, the consequences of applying secondary calibrations in a relaxed-clock approach are not fully understood. I tested whether applying the posterior estimate from a primary study as a prior distribution in a secondary study results in consistent age and uncertainty estimates. I compared age estimates from simulations with 100 randomly replicated secondary trees. On average, the 95% credible intervals of node ages for secondary estimates were significantly younger and narrower than primary estimates. The primary and secondary age estimates were significantly different in 97% of the replicates after Bonferroni corrections. Greater error in magnitude was associated with deeper than with shallower nodes, but the opposite was found when standardized by median node age, and a significant positive relationship was determined between the number of tips/age of secondary trees and the total amount of error. When two secondary calibrated nodes were analyzed, estimates remained significantly different, and although the minimum and median estimates were associated with less error, maximum age estimates and credible interval widths had greater error. The shape of the prior also influenced error, with a normal, rather than uniform, prior distribution resulting in greater error. Secondary calibrations, in summary, lead to a false impression of precision, and the distribution of age estimates shifts away from those that would be inferred by the primary analysis. These results suggest that secondary calibrations should not be applied as the only source of calibration in divergence time analyses that test time-dependent hypotheses until the additional error associated with secondary calibrations is more properly modeled to take into account increased uncertainty in age estimates.
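    The core mechanism, re-using a primary posterior as a secondary prior, can be illustrated with a conjugate normal update; a toy sketch showing how the secondary interval is pulled narrower (all values are hypothetical, not from the paper's simulations):

    ```python
    import math

    def normal_posterior(prior_mean, prior_sd, data_mean, data_sd):
        """Conjugate normal update: treat a primary posterior as the secondary prior."""
        w0 = 1.0 / prior_sd ** 2          # prior precision
        w1 = 1.0 / data_sd ** 2           # likelihood precision
        post_var = 1.0 / (w0 + w1)
        post_mean = post_var * (w0 * prior_mean + w1 * data_mean)
        return post_mean, math.sqrt(post_var)

    # Hypothetical primary posterior for a node age (Ma): N(50, 5).
    # A secondary study re-uses it as a prior and adds its own weak signal.
    m, s = normal_posterior(50.0, 5.0, 48.0, 10.0)
    width_primary = 2 * 1.96 * 5.0
    width_secondary = 2 * 1.96 * s
    ```

    Even with a weak secondary likelihood, the secondary interval is narrower than the primary one, illustrating the "false impression of precision" the abstract describes.
    
    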

  20. Comparing interval estimates for small sample ordinal CFA models

    PubMed Central

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
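    Coverage of an interval estimator is checked by simulation: generate many samples, build the interval each time, and count how often it contains the true value. A minimal sketch for a known-variance normal mean (not the ordinal-CFA setting of the study; all values hypothetical):

    ```python
    import math
    import random

    def ci_coverage(n=20, mu=0.0, sigma=1.0, reps=2000, seed=1):
        """Monte Carlo coverage of the nominal 95% z-interval for a normal mean."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(reps):
            xs = [rng.gauss(mu, sigma) for _ in range(n)]
            xbar = sum(xs) / n
            half = 1.96 * sigma / math.sqrt(n)   # half-width of the interval
            if xbar - half <= mu <= xbar + half:
                hits += 1
        return hits / reps

    cov = ci_coverage()
    ```

    In this idealized case the empirical coverage sits near the nominal 95%; the study's point is that for small-sample ordinal CFA the non-Bayesian analogues of this check fall noticeably below nominal.
    
    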

  1. Comparing interval estimates for small sample ordinal CFA models.

    PubMed

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading. Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research.

  2. The mean sea surface height and geoid along the Geosat subtrack from Bermuda to Cape Cod

    NASA Astrophysics Data System (ADS)

    Kelly, Kathryn A.; Joyce, Terrence M.; Schubert, David M.; Caruso, Michael J.

    1991-07-01

    Measurements of near-surface velocity and concurrent sea level along an ascending Geosat subtrack were used to estimate the mean sea surface height and the Earth's gravitational geoid. Velocity measurements were made on three traverses of a Geosat subtrack within 10 days, using an acoustic Doppler current profiler (ADCP). A small bias in the ADCP velocity was removed by considering a mass balance for two pairs of triangles for which expendable bathythermograph measurements were also made. Because of the large curvature of the Gulf Stream, the gradient wind balance was used to estimate the cross-track component of geostrophic velocity from the ADCP vectors; this component was then integrated to obtain the sea surface height profile. The mean sea surface height was estimated as the difference between the instantaneous sea surface height from ADCP and the Geosat residual sea level, with mesoscale errors reduced by low-pass filtering. The error estimates were divided into a bias, tilt, and mesoscale residual; the bias was ignored because profiles were only determined within a constant of integration. The calculated mean sea surface height agreed, within the expected errors, with an independent estimate of the mean sea surface height from Geosat obtained by modeling the Gulf Stream as a Gaussian jet: the tilt error was 0.10 m, and the mesoscale error was 0.044 m. To minimize mesoscale errors in the estimate, the along-track geoid estimate was computed as the difference between the mean sea level from the Geosat Exact Repeat Mission and an estimate of the mean sea surface height, rather than as the difference between instantaneous profiles of sea level and sea surface height. In the critical region near the Gulf Stream the estimated error reduction using this method was about 0.07 m. Differences between the geoid estimate and a gravimetric geoid were not within the expected errors: the rms mesoscale difference was 0.24 m.
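    The conversion from cross-track velocity to a height profile follows from the balance between the pressure gradient and the Coriolis force. A minimal sketch using plain geostrophic balance (the paper uses the gradient-wind balance to handle Gulf Stream curvature); latitude, velocities, and spacing are hypothetical illustration values:

    ```python
    import math

    def height_from_velocity(v, dx, lat_deg=38.0, g=9.81):
        """Integrate cross-track geostrophic velocity to a sea-surface-height profile.

        Geostrophy gives v = (g / f) * dh/dx, so h(x) = h0 + (f / g) * integral of v dx.
        The constant of integration h0 is arbitrary, as noted in the abstract.
        """
        f = 2 * 7.292e-5 * math.sin(math.radians(lat_deg))  # Coriolis parameter (1/s)
        h = [0.0]
        for i in range(1, len(v)):
            # trapezoidal rule for the along-track integral
            h.append(h[-1] + (f / g) * 0.5 * (v[i] + v[i - 1]) * dx)
        return h

    # A uniform 1 m/s cross-track jet sampled every 10 km over 100 km
    v = [1.0] * 11
    h = height_from_velocity(v, dx=10e3)
    ```

    A 1 m/s jet over 100 km at mid-latitudes corresponds to roughly 0.9 m of sea-surface-height change, which is the order of magnitude of the Gulf Stream signal the paper works with.
    
    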

  3. Space-Time Error Representation and Estimation in Navier-Stokes Calculations

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2006-01-01

    The mathematical framework for a posteriori error estimation of functionals elucidated by Eriksson et al. [7] and Becker and Rannacher [3] is revisited in a space-time context. Using these theories, a hierarchy of exact and approximate error representation formulas is presented for use in error estimation and mesh adaptivity. Numerical space-time results for simple model problems, as well as compressible Navier-Stokes flow at Re = 300 over a 2D circular cylinder, are then presented to demonstrate elements of the error representation theory for time-dependent problems.

  4. Estimating Solar Proton Flux at LEO From a Geomagnetic Cutoff Model

    DTIC Science & Technology

    2015-07-14

    simple shadow cones (using nomenclature from Stormer theory of particle motion in a dipole magnetic field [6]) that result from particle trajectories...basic Stormer theory [7]. However, in LEO the changes would be small relative to uncertainties in the model and therefore unnecessary. If the model were

  5. Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas

    USGS Publications Warehouse

    Puente, Celso

    1978-01-01

    The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.

  6. Electrical imaging for localizing historical tunnels at an urban environment

    NASA Astrophysics Data System (ADS)

    Osella, Ana; Martinelli, Patricia; Grunhut, Vivian; de la Vega, Matías; Bonomo, Néstor; Weissel, Marcelo

    2015-08-01

    We performed a geophysical study at a historical site in Buenos Aires, Argentina, corresponding to the location of a Jesuit Mission established during the 17th century, which remained there until the 18th century. The site consisted of a church, cloisters, a school, orchards and a procurator’s office; several tunnels were also built, connecting the mission with different public buildings in the town. In the 19th century the Faculty of Sciences of the University of Buenos Aires was built in a sector of the site originally occupied by an orchard, and it functioned until its demolition in 1973. At present, this area is a cobbled square. With the aim of preserving and restoring the buried structures, work was carried out in this square looking for tunnels and remains of the basement of the old building. Considering the conductive features of the subsoil, mainly formed by clays and silt, the complex characteristics of the buried structures, and the urban location of the study area with its consequent high level of environmental electromagnetic noise, we performed pre-feasibility studies to determine the usefulness of different geophysical methods. The best results were achieved with the geoelectrical method. Dipole-dipole profiles with electrode spacings of 1.5 and 3 m provided enough lateral and vertical resolution and the required penetration depth. Reliable data were obtained as long as the electrodes were buried at least 15 cm deep between the cobblestones. Nine 2D electrical resistivity tomographies were obtained by using a robust inversion procedure to reduce the effect of possible data outliers in the resulting models. The effect on these models of different error estimations was also analyzed. Then, we built up a pseudo-3D model by laterally interpolating the 2D inversion results. Finally, by correlating the resulting model with the original plans, the remains of the expected main structures embedded in the site were characterized. 
In addition, an anomaly was identified that indicates the presence of a tunnel not previously reported.

  7. Adaptive Error Estimation in Linearized Ocean General Circulation Models

    NASA Technical Reports Server (NTRS)

    Chechelnitsky, Michael Y.

    1999-01-01

    Data assimilation methods are routinely used in oceanography. The statistics of the model and measurement errors need to be specified a priori. This study addresses the problem of estimating model and measurement error statistics from observations. We start by testing innovation based methods of adaptive error estimation with low-dimensional models in the North Pacific (5-60 deg N, 132-252 deg E) to TOPEX/POSEIDON (TIP) sea level anomaly data, acoustic tomography data from the ATOC project, and the MIT General Circulation Model (GCM). A reduced state linear model that describes large scale internal (baroclinic) error dynamics is used. The methods are shown to be sensitive to the initial guess for the error statistics and the type of observations. A new off-line approach is developed, the covariance matching approach (CMA), where covariance matrices of model-data residuals are "matched" to their theoretical expectations using familiar least squares methods. This method uses observations directly instead of the innovations sequence and is shown to be related to the MT method and the method of Fu et al. (1993). Twin experiments using the same linearized MIT GCM suggest that altimetric data are ill-suited to the estimation of internal GCM errors, but that such estimates can in theory be obtained using acoustic data. The CMA is then applied to T/P sea level anomaly data and a linearization of a global GFDL GCM which uses two vertical modes. We show that the CMA method can be used with a global model and a global data set, and that the estimates of the error statistics are robust. We show that the fraction of the GCM-T/P residual variance explained by the model error is larger than that derived in Fukumori et al.(1999) with the method of Fu et al.(1993). Most of the model error is explained by the barotropic mode. However, we find that impact of the change in the error statistics on the data assimilation estimates is very small. 
This is explained by the large representation error, i.e., the dominance in the T/P signal of mesoscale eddies, which are not part of the 2° by 1° GCM. Therefore, the impact of the observations on the assimilation is very small even after the adjustment of the error statistics. This work demonstrates that simultaneous estimation of the model and measurement error statistics for data assimilation with global ocean data sets and linearized GCMs is possible. However, the error covariance estimation problem is in general highly underdetermined, much more so than the state estimation problem. In other words, there exist a very large number of statistical models that can be made consistent with the available data. Therefore, methods for obtaining quantitative error estimates, powerful though they may be, cannot replace physical insight. Used in the right context, as a tool for guiding the choice of a small number of model error parameters, covariance matching can be a useful addition to the repertory of tools available to oceanographers.
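The covariance matching idea, fitting sample residual covariances to their theoretical expectation by least squares, can be sketched on a toy problem. The covariance structures, scalings, and sample sizes below are hypothetical stand-ins, not those of the GCM study:

```python
import numpy as np

def covariance_match(residuals, A):
    """Covariance matching (CMA sketch): fit Cov(residual) ~ q*A + r*I.

    q scales an assumed model-error covariance structure A and r is the
    measurement-error variance; both are found by linear least squares
    between the sample covariance and its theoretical expectation.
    """
    C = np.cov(residuals, rowvar=False)          # sample residual covariance
    I = np.eye(C.shape[0])
    # stack the two structures as columns and solve min ||C - q*A - r*I||
    M = np.column_stack([A.ravel(), I.ravel()])
    coef, *_ = np.linalg.lstsq(M, C.ravel(), rcond=None)
    return coef                                   # [q, r]

rng = np.random.default_rng(0)
A = np.array([[2.0, 1.0], [1.0, 2.0]])            # assumed model-error structure
L = np.linalg.cholesky(0.5 * A)                   # true scaling q = 0.5
samples = (L @ rng.standard_normal((2, 20000))).T + rng.normal(0.0, 1.0, (20000, 2))
q, r = covariance_match(samples, A)               # true r = 1.0
```

Because the fit uses the data covariances directly rather than an innovations sequence, it is an "off-line" estimate in the sense the abstract describes.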

  8. Measuring coverage in MNCH: total survey error and the interpretation of intervention coverage estimates from household surveys.

    PubMed

    Eisele, Thomas P; Rhoda, Dale A; Cutts, Felicity T; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J D; Arnold, Fred

    2013-01-01

    Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used.
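    The measurable sampling-error component can be illustrated with a standard confidence interval for a survey proportion, widened by a design effect for cluster sampling. A minimal sketch; the coverage figure, sample size, and design effect are hypothetical illustration values:

    ```python
    import math

    def coverage_ci(p_hat, n, deff=1.0, z=1.96):
        """95% confidence interval for a survey coverage proportion.

        deff is the design effect (cluster sampling inflates sampling error),
        so the effective sample size is n / deff.
        """
        n_eff = n / deff
        se = math.sqrt(p_hat * (1.0 - p_hat) / n_eff)
        return max(0.0, p_hat - z * se), min(1.0, p_hat + z * se)

    # e.g. 65% estimated coverage from 1200 households with design effect 2
    lo, hi = coverage_ci(0.65, 1200, deff=2.0)
    ```

    The interval quantifies only sampling error; as the abstract stresses, non-sampling error (information error and bias) adds further, unmeasurable uncertainty on top of it.
    
    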

  9. Measuring Coverage in MNCH: Total Survey Error and the Interpretation of Intervention Coverage Estimates from Household Surveys

    PubMed Central

    Eisele, Thomas P.; Rhoda, Dale A.; Cutts, Felicity T.; Keating, Joseph; Ren, Ruilin; Barros, Aluisio J. D.; Arnold, Fred

    2013-01-01

    Nationally representative household surveys are increasingly relied upon to measure maternal, newborn, and child health (MNCH) intervention coverage at the population level in low- and middle-income countries. Surveys are the best tool we have for this purpose and are central to national and global decision making. However, all survey point estimates have a certain level of error (total survey error) comprising sampling and non-sampling error, both of which must be considered when interpreting survey results for decision making. In this review, we discuss the importance of considering these errors when interpreting MNCH intervention coverage estimates derived from household surveys, using relevant examples from national surveys to provide context. Sampling error is usually thought of as the precision of a point estimate and is represented by 95% confidence intervals, which are measurable. Confidence intervals can inform judgments about whether estimated parameters are likely to be different from the real value of a parameter. We recommend, therefore, that confidence intervals for key coverage indicators should always be provided in survey reports. By contrast, the direction and magnitude of non-sampling error is almost always unmeasurable, and therefore unknown. Information error and bias are the most common sources of non-sampling error in household survey estimates and we recommend that they should always be carefully considered when interpreting MNCH intervention coverage based on survey data. Overall, we recommend that future research on measuring MNCH intervention coverage should focus on refining and improving survey-based coverage estimates to develop a better understanding of how results should be interpreted and used. PMID:23667331

  10. New dimension analyses with error analysis for quaking aspen and black spruce

    NASA Technical Reports Server (NTRS)

    Woods, K. D.; Botkin, D. B.; Feiveson, A. H.

    1987-01-01

    Dimension analysis for black spruce in wetland stands and trembling aspen are reported, including new approaches in error analysis. Biomass estimates for sacrificed trees have standard errors of 1 to 3%; standard errors for leaf areas are 10 to 20%. Bole biomass estimation accounts for most of the error for biomass, while estimation of branch characteristics and area/weight ratios accounts for the leaf area error. Error analysis provides insight for cost effective design of future analyses. Predictive equations for biomass and leaf area, with empirically derived estimators of prediction error, are given. Systematic prediction errors for small aspen trees and for leaf area of spruce from different site-types suggest a need for different predictive models within species. Predictive equations are compared with published equations; significant differences may be due to species responses to regional or site differences. Proportional contributions of component biomass in aspen change in ways related to tree size and stand development. Spruce maintains comparatively constant proportions with size, but shows changes corresponding to site. This suggests greater morphological plasticity of aspen and significance for spruce of nutrient conditions.
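    Predictive equations in dimension analysis are typically power laws fitted as log-log regressions. A minimal sketch of the fitting step (the coefficients and diameters below are hypothetical, not the paper's fitted values):

    ```python
    import math

    def fit_allometric(dbh, biomass):
        """Fit ln(B) = a + b * ln(D) by ordinary least squares."""
        x = [math.log(d) for d in dbh]
        y = [math.log(w) for w in biomass]
        n = len(x)
        xbar, ybar = sum(x) / n, sum(y) / n
        b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
        a = ybar - b * xbar
        return a, b        # B = exp(a) * D**b

    # exact power law B = 0.1 * D**2.4 is recovered from noiseless data
    dbh = [5.0, 10.0, 20.0, 40.0]
    biomass = [0.1 * d ** 2.4 for d in dbh]
    a, b = fit_allometric(dbh, biomass)
    ```

    With real harvest data the regression residuals supply the empirically derived prediction-error estimators the abstract refers to.
    
    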

  11. Pure rotation spectrum of CF4 in the v3 = 1 state using THz synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Boudon, V.; Carlos, M.; Richard, C.; Pirali, O.

    2018-06-01

    Spherical-top tetrahedral species like CH4, SiH4, CF4, … possess no permanent dipole moment. Probing their pure rotation spectrum is therefore very challenging, since only a very weak dipole moment can be induced by centrifugal distortion and/or rovibrational interaction. Although some Q-branch lines have been recorded thanks to microwave techniques, R-branch lines in the THz region had been poorly explored until recently. In previous studies, we reported the pure rotation THz spectrum of cold and hot band lines of methane recorded at the SOLEIL Synchrotron facility. Here, we present the first recorded THz spectrum of the R branch of CF4, a powerful greenhouse gas, in its v3 = 1 state. This Fourier transform spectrum covers the R(20) to R(37) line clusters, in the 20-37 cm-1 spectral range. It was recorded using a 150 m multipass cell at room temperature. We were able to estimate the vibration-induced dipole moment and to include the recorded line positions in a global fit of many CF4 transitions.

  12. Lattice calculation of electric dipole moments and form factors of the nucleon

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abramczyk, M.; Aoki, S.; Blum, T.

    In this paper, we analyze commonly used expressions for computing the nucleon electric dipole form factors (EDFF) F3 and moments (EDM) on a lattice and find that they lead to spurious contributions from the Pauli form factor F2, due to an inadequate definition of these form factors when parity mixing of lattice nucleon fields is involved. Using chirally symmetric domain wall fermions, we calculate the proton and neutron EDFF induced by the CP-violating quark chromo-EDM interaction using the corrected expression. In addition, we calculate the electric dipole moment of the neutron using a background electric field that respects time translation invariance and boundary conditions, and we find that it decidedly agrees with the new formula but not the old formula for F3. Finally, we analyze some selected lattice results for the nucleon EDM and observe that, after the correction is applied, they either agree with zero or are substantially reduced in magnitude, thus reconciling their difference from phenomenological estimates of the nucleon EDM.

  13. Relativistic coupled-cluster-theory analysis of energies, hyperfine-structure constants, and dipole polarizabilities of Cd+

    NASA Astrophysics Data System (ADS)

    Li, Cheng-Bin; Yu, Yan-Mei; Sahoo, B. K.

    2018-02-01

    The roles of electron correlation effects in the determination of attachment energies, magnetic-dipole hyperfine-structure constants, and electric-dipole (E1) matrix elements of the low-lying states in the singly charged cadmium ion (Cd+) have been analyzed. We employ the singles-and-doubles approximated relativistic coupled-cluster (RCC) method to calculate these properties. Intermediate results from the Dirac-Hartree-Fock approximation, the second-order many-body perturbation theory, and the linear terms of the RCC method are given to demonstrate the propagation of electron correlation effects in this ion. Contributions from important RCC terms are also given to highlight the importance of various correlation effects in the evaluation of these properties. Finally, we also determine the E1 polarizabilities (αE1) of the ground and 5p 2P1/2;3/2 states of Cd+ in an ab initio approach. We then re-estimate them by replacing some of the E1 matrix elements and energies with measured values to reduce their uncertainties, so that they can be used in high-precision experiments on this ion.

  14. Lattice calculation of electric dipole moments and form factors of the nucleon

    DOE PAGES

    Abramczyk, M.; Aoki, S.; Blum, T.; ...

    2017-07-10

    In this paper, we analyze commonly used expressions for computing the nucleon electric dipole form factors (EDFF) F3 and moments (EDM) on a lattice and find that they lead to spurious contributions from the Pauli form factor F2, due to an inadequate definition of these form factors when parity mixing of lattice nucleon fields is involved. Using chirally symmetric domain wall fermions, we calculate the proton and neutron EDFF induced by the CP-violating quark chromo-EDM interaction using the corrected expression. In addition, we calculate the electric dipole moment of the neutron using a background electric field that respects time translation invariance and boundary conditions, and we find that it decidedly agrees with the new formula but not the old formula for F3. Finally, we analyze some selected lattice results for the nucleon EDM and observe that, after the correction is applied, they either agree with zero or are substantially reduced in magnitude, thus reconciling their difference from phenomenological estimates of the nucleon EDM.

  15. Modeling the thermal structure and magnetic properties of the crust of active regions with application to the Rio Grande rift

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Experiments in Curie depth estimation from long wavelength magnetic anomalies are summarized. The heart of the work is equivalent-layer-type magnetization models derived by inversion of high-elevation, long wavelength magnetic anomaly data. The methodology is described in detail in the above references. The goal is a magnetization distribution in a thin equivalent layer at the Earth's surface that has maximum detail while retaining physical significance, and that gives rise to a synthetic anomaly field best fitting the observed field in a least-squares sense. The apparent magnetization contrast in the equivalent layer is approximated using an array of dipoles distributed in equal area at the Earth's surface. The dipoles point in the direction of the main magnetic field, which carries the implicit assumption that crustal magnetization is dominantly induced or viscous. The closest possible dipole spacing giving a stable inversion, with a solution having physical significance, is determined by plotting the standard deviation of the solution parameters against their spatial separation for a series of solutions.
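    The equivalent-layer inversion can be sketched in one dimension: build a kernel matrix of unit-dipole fields and solve for the dipole moments by least squares. The geometry and kernel below are simplified, hypothetical stand-ins for the report's spherical treatment (vertical dipoles, a straight observation profile, noiseless data):

    ```python
    import numpy as np

    def dipole_kernel(obs_x, obs_z, src_x):
        """Vertical field of unit vertical dipoles at depth 0, observed at height obs_z."""
        G = np.empty((len(obs_x), len(src_x)))
        for j, xs in enumerate(src_x):
            dx = obs_x - xs
            r2 = dx ** 2 + obs_z ** 2
            G[:, j] = (2 * obs_z ** 2 - dx ** 2) / r2 ** 2.5   # (3cos^2(t)-1)/r^3 form
        return G

    def invert_equivalent_layer(G, data):
        """Least-squares magnetization of the equivalent layer: min ||G m - data||."""
        m, *_ = np.linalg.lstsq(G, data, rcond=None)
        return m

    obs_x = np.linspace(-10.0, 10.0, 81)
    src_x = np.linspace(-8.0, 8.0, 9)          # dipole spacing chosen for stability
    G = dipole_kernel(obs_x, obs_z=2.0, src_x=src_x)
    true_m = np.zeros(9); true_m[4] = 3.0      # a single magnetized source
    m = invert_equivalent_layer(G, G @ true_m)  # noiseless synthetic anomaly
    ```

    Shrinking the dipole spacing makes G ill-conditioned, which is exactly the stability trade-off the report explores by plotting solution standard deviations against dipole separation.
    
    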

  16. Evaluation of the predicted error of the soil moisture retrieval from C-band SAR by comparison against modelled soil moisture estimates over Australia

    PubMed Central

    Doubková, Marcela; Van Dijk, Albert I.J.M.; Sabel, Daniel; Wagner, Wolfgang; Blöschl, Günter

    2012-01-01

    The Sentinel-1 will carry onboard a C-band radar instrument that will map the European continent once every four days and the global land surface at least once every twelve days with finest 5 × 20 m spatial resolution. The high temporal sampling rate and operational configuration make Sentinel-1 of interest for operational soil moisture monitoring. Currently, updated soil moisture data are made available at 1 km spatial resolution as a demonstration service using Global Mode (GM) measurements from the Advanced Synthetic Aperture Radar (ASAR) onboard ENVISAT. The service demonstrates the potential of the C-band observations to monitor variations in soil moisture. Importantly, a retrieval error estimate is also available; these are needed to assimilate observations into models. The retrieval error is estimated by propagating sensor errors through the retrieval model. In this work, the existing ASAR GM retrieval error product is evaluated using independent top soil moisture estimates produced by the grid-based landscape hydrological model (AWRA-L) developed within the Australian Water Resources Assessment system (AWRA). The ASAR GM retrieval error estimate, an assumed prior AWRA-L error estimate and the variance in the respective datasets were used to spatially predict the root mean square error (RMSE) and the Pearson's correlation coefficient R between the two datasets. These were compared with the RMSE calculated directly from the two datasets. The predicted and computed RMSE showed a very high level of agreement in spatial patterns as well as good quantitative agreement; the RMSE was predicted within accuracy of 4% of saturated soil moisture over 89% of the Australian land mass. Predicted and calculated R maps corresponded within accuracy of 10% over 61% of the continent. The strong correspondence between the predicted and calculated RMSE and R builds confidence in the retrieval error model and derived ASAR GM error estimates. 
The ASAR GM and Sentinel-1 have the same basic physical measurement characteristics, and therefore very similar retrieval error estimation method can be applied. Because of the expected improvements in radiometric resolution of the Sentinel-1 backscatter measurements, soil moisture estimation errors can be expected to be an order of magnitude less than those for ASAR GM. This opens the possibility for operationally available medium resolution soil moisture estimates with very well-specified errors that can be assimilated into hydrological or crop yield models, with potentially large benefits for land-atmosphere fluxes, crop growth, and water balance monitoring and modelling. PMID:23483015
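    Predicting the RMSE and correlation between two datasets from their individual error estimates follows from standard identities for independent errors sharing a common signal. A minimal sketch; the error standard deviations and signal variance are hypothetical illustration values:

    ```python
    import math

    def predict_rmsd_and_r(err_a, err_b, signal_var):
        """Predicted RMSD and correlation of two datasets with independent errors.

        For uncorrelated errors: RMSD^2 = err_a^2 + err_b^2, and
        R = signal_var / sqrt((signal_var + err_a^2) * (signal_var + err_b^2)).
        """
        rmsd = math.sqrt(err_a ** 2 + err_b ** 2)
        r = signal_var / math.sqrt((signal_var + err_a ** 2)
                                   * (signal_var + err_b ** 2))
        return rmsd, r

    # hypothetical per-dataset error std devs (3 and 4 units) and signal variance 25
    rmsd, r = predict_rmsd_and_r(3.0, 4.0, 25.0)
    ```

    Comparing these predictions against the RMSE and R computed directly from the two datasets, as done in the paper for ASAR GM and AWRA-L, is what validates the retrieval error model.
    
    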

  17. FEL Trajectory Analysis for the VISA Experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuhn, Heinz-Dieter

    1998-10-06

    The Visual to Infrared SASE Amplifier (VISA) [1] FEL is designed to achieve saturation at radiation wavelengths between 800 and 600 nm with a 4-m pure permanent magnet undulator. The undulator comprises four 99-cm segments, each of which has four FODO focusing cells superposed on the beam by means of permanent magnets in the gap alongside the beam. Each segment will also have two beam position monitors and two sets of x-y dipole correctors. The trajectory walk-off in each segment will be reduced to a value smaller than the rms beam radius by means of magnet sorting, precise fabrication, and post-fabrication shimming and trim magnets. However, this leaves possible inter-segment alignment errors. A trajectory analysis code has been used in combination with the FRED3D [2] FEL code to simulate the effect of the shimming procedure and segment alignment errors on the electron beam trajectory and to determine the sensitivity of the FEL gain process to trajectory errors. The paper describes the technique used to establish tolerances for the segment alignment.

  18. Precision Møller Polarimetry

    NASA Astrophysics Data System (ADS)

    Henry, William; Jefferson Lab Hall A Collaboration

    2017-09-01

    Jefferson Lab's cutting-edge parity-violating electron scattering program has increasingly stringent requirements for systematic errors, and beam polarimetry is often one of the dominant systematic errors in these experiments. A new Møller polarimeter was installed in Hall A of Jefferson Lab (JLab) in 2015 and has taken first measurements for a polarized scattering experiment. Upcoming parity-violation experiments in Hall A include CREX, PREX-II, MOLLER, and SoLID, with the latter two requiring <0.5% precision on beam polarization measurements. The polarimeter measures the Møller scattering rates of the polarized electron beam incident upon an iron target placed in a saturating magnetic field. The spectrometer consists of four focusing quadrupoles and one momentum-selection dipole. The detector is designed to measure the scattered and knocked-out target electrons in coincidence. Beam polarization is extracted by constructing an asymmetry from the scattering rates when the incident electron spin is parallel and antiparallel to the target electron spin. Initial data will be presented, and sources of systematic error, including target magnetization, spectrometer acceptance, the Levchuk effect, and radiative corrections, will be discussed. National Science Foundation.
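
    The asymmetry extraction described above reduces to a short calculation. The sketch below is illustrative, not the Hall A analysis: the default analyzing power assumes detection at 90° in the center of mass, and the ~8% iron-foil target polarization is a nominal, assumed number.

```python
def measured_asymmetry(rate_parallel, rate_antiparallel):
    # Counting asymmetry between the two beam-helicity states.
    return (rate_parallel - rate_antiparallel) / (rate_parallel + rate_antiparallel)

def beam_polarization(asym, analyzing_power=-7.0 / 9.0, target_polarization=0.08):
    # A_meas = P_beam * P_target * <A_zz>, so solve for P_beam.
    # -7/9 is the Moller analyzing power at 90 degrees CM; the 8%
    # target polarization is an illustrative assumption.
    return asym / (analyzing_power * target_polarization)

# Example: a beam with P = 0.85 would produce this raw asymmetry.
a_true = 0.85 * 0.08 * (-7.0 / 9.0)
p_est = beam_polarization(measured_asymmetry(1.0 + a_true, 1.0 - a_true))
```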

  19. Cardiac conduction velocity estimation from sequential mapping assuming known Gaussian distribution for activation time estimation error.

    PubMed

    Shariat, Mohammad Hassan; Gazor, Saeed; Redfearn, Damian

    2016-08-01

    In this paper, we study the problem of cardiac conduction velocity (CCV) estimation for sequential intracardiac mapping. We assume that the intracardiac electrograms of several cardiac sites are sequentially recorded, their activation times (ATs) are extracted, and the corresponding wavefronts are specified. The locations of the mapping catheter's electrodes and the ATs of the wavefronts are used for the CCV estimation. We assume that the extracted ATs include estimation errors, which we model as zero-mean white Gaussian noise with known variances. Assuming stable planar wavefront propagation, we derive the maximum likelihood CCV estimator for the case in which the synchronization times between the various recording sites are unknown. We analytically evaluate the performance of the CCV estimator and provide its mean square estimation error. Our simulation results confirm the accuracy of the proposed method and the error analysis of the proposed CCV estimator.
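
    For a stable planar wavefront with Gaussian AT errors, the ML estimate reduces to a (weighted) least-squares fit of a plane t_i = t0 + s·p_i, where s is the slowness vector. The sketch below illustrates that reduction only; it omits the paper's handling of unknown inter-acquisition synchronization times.

```python
import numpy as np

def estimate_ccv(positions, times, variances=None):
    """Fit a planar wavefront t_i = t0 + s . p_i by (weighted) least
    squares. With zero-mean Gaussian AT errors this weighted fit is
    the ML estimate. Returns the speed 1/|s| and the unit direction."""
    positions = np.asarray(positions, float)
    times = np.asarray(times, float)
    A = np.column_stack([np.ones(len(times)), positions])
    w = np.ones(len(times)) if variances is None else 1.0 / np.asarray(variances, float)
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], times * sw, rcond=None)
    s = coef[1:]                        # slowness vector
    speed = 1.0 / np.linalg.norm(s)     # conduction velocity
    return speed, s / np.linalg.norm(s)

# Electrode positions (mm) hit by a wave travelling along +x at 500 mm/s.
pos = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
ats = [p[0] / 500.0 for p in pos]
speed, direction = estimate_ccv(pos, ats)
```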

  20. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Improved Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.; hide

    2006-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and nonconvective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud-resolving model simulations, and from the Bayesian formulation itself. Synthetic rain-rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in TMI instantaneous rain-rate estimates at 0.5° resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. Errors in collocated spaceborne radar rain-rate estimates are roughly 50%-80% of the TMI errors at this resolution. The estimated algorithm random error in TMI rain rates at monthly, 2.5° resolution is relatively small (less than 6% at 5 mm day⁻¹) in comparison with the random error resulting from infrequent satellite temporal sampling (8%-35% at the same rain rate). Percentage errors resulting from sampling decrease with increasing rain rate, and sampling errors in latent heating rates follow the same trend. Averaging over 3 months reduces sampling errors in rain rates to 6%-15% at 5 mm day⁻¹, with proportionate reductions in latent heating sampling errors.
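
    The database-search-and-composite step can be sketched as a Gaussian-likelihood weighted average. This is a generic illustration, not the actual TMI algorithm; the brightness-temperature values and the two-channel noise covariance below are made-up inputs.

```python
import numpy as np

def bayesian_composite(obs_tb, db_tb, db_rainrate, noise_cov):
    """Weight each database profile by the Gaussian likelihood of its
    simulated brightness temperatures given the observed ones, then
    composite (weighted-average) the associated rain rates."""
    Cinv = np.linalg.inv(noise_cov)
    d = db_tb - obs_tb                                  # (n_profiles, n_channels)
    w = np.exp(-0.5 * np.einsum('ij,jk,ik->i', d, Cinv, d))
    return float(np.sum(w * db_rainrate) / np.sum(w))

# Observation matching the first database profile almost exactly.
obs = np.array([200.0, 250.0])
db = np.array([[200.0, 250.0], [260.0, 280.0]])
rain = np.array([2.0, 10.0])
estimate = bayesian_composite(obs, db, rain, np.eye(2))
```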

  1. Decay in blood loss estimation skills after web-based didactic training.

    PubMed

    Toledo, Paloma; Eosakul, Stanley T; Goetz, Kristopher; Wong, Cynthia A; Grobman, William A

    2012-02-01

    Accuracy in blood loss estimation has been shown to improve immediately after didactic training. The objective of this study was to evaluate retention of blood loss estimation skills 9 months after a didactic web-based training. Forty-four participants were recruited from a cohort that had undergone web-based training and testing in blood loss estimation. The web-based posttraining test, consisting of pictures of simulated blood loss, was repeated 9 months after the initial training and testing. The primary outcome was the difference in accuracy of estimated blood loss (percent error) at 9 months compared with immediately posttraining. At the 9-month follow-up, the median error in estimation worsened to -34.6%. Although better than the pretraining error of -47.8% (P = 0.003), the 9-month error was significantly less accurate than the immediate posttraining error of -13.5% (P = 0.01). Decay in blood loss estimation skills occurs by 9 months after didactic training.
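
    The accuracy metric used in such studies is a signed percent error, so underestimates come out negative. The 654 mL figure below is a made-up example, not data from the study.

```python
def percent_error(estimated_ml, actual_ml):
    # Signed percent error: negative values mean underestimation.
    return (estimated_ml - actual_ml) / actual_ml * 100.0

# Estimating 654 mL when the simulated loss is 1000 mL gives -34.6%.
err = percent_error(654.0, 1000.0)
```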

  2. Evaluation of monthly rainfall estimates derived from the special sensor microwave/imager (SSM/I) over the tropical Pacific

    NASA Technical Reports Server (NTRS)

    Berg, Wesley; Avery, Susan K.

    1995-01-01

    Estimates of monthly rainfall have been computed over the tropical Pacific using passive microwave satellite observations from the special sensor microwave/imager (SSM/I) for the period from July 1987 through December 1990. These monthly estimates are calibrated using data from a network of Pacific atoll rain gauges in order to account for systematic biases and are then compared with several visible and infrared satellite-based rainfall estimation techniques for the purpose of evaluating the performance of the microwave-based estimates. Although several key differences among the various techniques are observed, the general features of the monthly rainfall time series agree very well. Finally, the significant error sources contributing to uncertainties in the monthly estimates are examined and an estimate of the total error is produced. The sampling error characteristics are investigated using data from two SSM/I sensors and a detailed analysis of the characteristics of the diurnal cycle of rainfall over the oceans and its contribution to sampling errors in the monthly SSM/I estimates is made using geosynchronous satellite data. Based on the analysis of the sampling and other error sources the total error was estimated to be of the order of 30 to 50% of the monthly rainfall for estimates averaged over 2.5 deg x 2.5 deg latitude/longitude boxes, with a contribution due to diurnal variability of the order of 10%.

  3. Collisional X- and A-State Kinetics of CN Using Transient Sub-Doppler Hole Burning

    NASA Astrophysics Data System (ADS)

    Hause, Michael L.; Sears, Trevor J.; Hall, Gregory E.

    2010-06-01

    We examine the collisional kinetics of the CN radical using transient hole-burning and saturation recovery. Narrow velocity groups of individual hyperfine levels in CN are depleted (X²Σ⁺) and excited (A²Π) with a saturation laser and probed by a counterpropagating, frequency-modulated probe beam. Recovery of the unsaturated absorption is recorded following abrupt termination of an electro-optically switched pulse of saturation light. Pressure-dependent recovery kinetics are measured for the precursors ethanedinitrile (NCCN) and pyruvonitrile (CH₃COCN) and for the buffer gases helium, argon, and nitrogen, with rate coefficients ranging from 0.7-2.0 × 10⁻⁹ cm³ s⁻¹ molecule⁻¹. In the case of NCCN, the recovery kinetics are for two-level saturation resonances, where the observed signal is a combination of X- and A-state kinetics. Similar rates occur for three-level crossover resonances, which can be chosen to probe selectively the hole-filling in the X state or the decay of velocity-selected A-state radicals. However, in the case of CH₃COCN, which has a dipole moment of 3.45 D, the X-state kinetics are faster than the A-state kinetics because of an efficient dipole-dipole rotational energy transfer mechanism: the X-state dipole moment is 1.5 D, while the A-state dipole moment is 0.06 D. The observed recovery rates are 2-3 times faster than the estimated rotationally inelastic contribution and reflect a combination of inelastic and velocity-changing elastic collisions. Acknowledgment: This work was carried out under Contract No. DE-AC02-98CH10886 with the U.S. Department of Energy.

  4. Dilution effects on combined magnetic and electric dipole interactions: A study of ferromagnetic cobalt nanoparticles with tuneable interactions

    NASA Astrophysics Data System (ADS)

    Hod, M.; Dobroserdova, A.; Samin, S.; Dobbrow, C.; Schmidt, A. M.; Gottlieb, M.; Kantorovich, S.

    2017-08-01

    Improved understanding of complex interactions between nanoparticles will facilitate control over the ensuing self-assembled structures. In this work, we consider the dynamic changes occurring upon dilution in the self-assembly of a system of ferromagnetic cobalt nanoparticles that combine magnetic, electric, and steric interactions. The systems examined here vary in the strength of the magnetic dipole interactions and in the number of point charges per particle. Scattering techniques are employed for the characterization of the self-assembled aggregates, and zeta-potential measurements are employed for the estimation of surface charges. Our experiments show that for particles with a relatively small initial number of surface electric dipoles, an increase in particle concentration results in an increase in diffusion coefficients, whereas for particles with a relatively high number of surface dipoles, no effect is observed upon concentration changes. We attribute these changes to a shift in the adsorption/desorption equilibrium of the tri-n-octylphosphine oxide (TOPO) molecules on the particle surface. We put forward an explanation based on the combination of two theoretical models. The first predicts that the growing concentration of electric dipoles on the particle surface, stemming from the addition of TOPO as a co-surfactant during particle synthesis, results in an overall repulsive interaction. The second, using density functional theory, explains that the observed behaviour of the diffusion coefficient can be treated as a result of concentration-dependent nanoparticle self-assembly: the additional repulsion reduces the self-assembled aggregate size despite the shorter average interparticle distances, and thereby accounts for the growth of the diffusion coefficient.

  5. Dilution effects on combined magnetic and electric dipole interactions: A study of ferromagnetic cobalt nanoparticles with tuneable interactions.

    PubMed

    Hod, M; Dobroserdova, A; Samin, S; Dobbrow, C; Schmidt, A M; Gottlieb, M; Kantorovich, S

    2017-08-28

    Improved understanding of complex interactions between nanoparticles will facilitate control over the ensuing self-assembled structures. In this work, we consider the dynamic changes occurring upon dilution in the self-assembly of a system of ferromagnetic cobalt nanoparticles that combine magnetic, electric, and steric interactions. The systems examined here vary in the strength of the magnetic dipole interactions and in the number of point charges per particle. Scattering techniques are employed for the characterization of the self-assembled aggregates, and zeta-potential measurements are employed for the estimation of surface charges. Our experiments show that for particles with a relatively small initial number of surface electric dipoles, an increase in particle concentration results in an increase in diffusion coefficients, whereas for particles with a relatively high number of surface dipoles, no effect is observed upon concentration changes. We attribute these changes to a shift in the adsorption/desorption equilibrium of the tri-n-octylphosphine oxide (TOPO) molecules on the particle surface. We put forward an explanation based on the combination of two theoretical models. The first predicts that the growing concentration of electric dipoles on the particle surface, stemming from the addition of TOPO as a co-surfactant during particle synthesis, results in an overall repulsive interaction. The second, using density functional theory, explains that the observed behaviour of the diffusion coefficient can be treated as a result of concentration-dependent nanoparticle self-assembly: the additional repulsion reduces the self-assembled aggregate size despite the shorter average interparticle distances, and thereby accounts for the growth of the diffusion coefficient.

  6. BATSE Observations of the Large-Scale Isotropy of Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Briggs, Michael S.; Paciesas, William S.; Pendleton, Geoffrey N.; Meegan, Charles A.; Fishman, Gerald J.; Horack, John M.; Brock, Martin N.; Kouveliotou, Chryssa; Hartmann, Dieter H.; Hakkila, Jon

    1996-01-01

    We use dipole and quadrupole statistics to test the large-scale isotropy of the first 1005 gamma-ray bursts observed by the Burst and Transient Source Experiment (BATSE). In addition to the entire sample of 1005 gamma-ray bursts, many subsets are examined. We use a variety of dipole and quadrupole statistics to search for Galactic and other predicted anisotropies, and for anisotropies in a coordinate-system-independent manner. We find the gamma-ray burst locations to be consistent with isotropy; e.g., for the total sample the observed Galactic dipole moment ⟨cos θ⟩ differs from the value predicted for isotropy by 0.9 sigma and the observed Galactic quadrupole moment ⟨sin² b − 1/3⟩ by 0.3 sigma. We estimate for various models the anisotropies that could have been detected. If one-half of the locations were within 86 deg of the Galactic center, or within 28 deg of the Galactic plane, the ensuing dipole or quadrupole moment would typically have been detected at the 99% confidence level. We compare the observations with the dipole and quadrupole moments of various Galactic models. Several Galactic gamma-ray burst models have moments within 2 sigma of the observations; most of the Galactic models proposed to date are no longer in acceptable agreement with the data. Although a spherical dark matter halo distribution could be consistent with the data, the required core radius is larger than the core radius of the dark matter halo used to explain the Galaxy's rotation curve. Gamma-ray bursts are much more isotropic than any observed Galactic population, strongly favoring but not requiring an origin at cosmological distances.
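
    A minimal sketch of the dipole and quadrupole statistics (illustrative only; the BATSE analysis also handles sky-exposure corrections not shown here): for an isotropic sky, the mean of cos θ toward the Galactic center and the mean of sin² b minus 1/3 both have expectation zero, with standard errors 1/√(3N) and √(4/(45N)), respectively.

```python
import numpy as np

def isotropy_moments(l, b):
    """Galactic dipole and quadrupole statistics for sky positions
    (l, b in radians). Returns each moment with the standard error
    expected under isotropy."""
    n = len(l)
    cos_theta = np.cos(b) * np.cos(l)   # cosine of the angle from the Galactic center
    dipole = cos_theta.mean()
    quadrupole = (np.sin(b) ** 2).mean() - 1.0 / 3.0
    return (dipole, 1.0 / np.sqrt(3.0 * n)), (quadrupole, np.sqrt(4.0 / (45.0 * n)))

# An isotropic synthetic sky: l uniform, sin(b) uniform on [-1, 1].
rng = np.random.default_rng(1)
l = rng.uniform(0.0, 2.0 * np.pi, 200000)
b = np.arcsin(rng.uniform(-1.0, 1.0, 200000))
(dip, dip_sig), (quad, quad_sig) = isotropy_moments(l, b)
```

    For a truly isotropic sample both moments come out within a few standard errors of zero, which is the comparison quoted (0.9 sigma and 0.3 sigma) in the abstract.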

  7. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.

  8. Bayes Error Rate Estimation Using Classifier Ensembles

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep

    2003-01-01

    The Bayes error rate gives a statistical lower bound on the error achievable for a given classification problem and the associated choice of features. By reliably estimating this rate, one can assess the usefulness of the feature set that is being used for classification. Moreover, by comparing the accuracy achieved by a given classifier with the Bayes rate, one can quantify how effective that classifier is. Classical approaches for estimating or bounding the Bayes error generally yield rather weak results for small sample sizes, unless the problem has some simple characteristics, such as Gaussian class-conditional likelihoods. This article shows how the outputs of a classifier ensemble can be used to provide reliable and easily obtainable estimates of the Bayes error with negligible extra computation. Three methods of varying sophistication are described. First, we present a framework that estimates the Bayes error when multiple classifiers, each providing an estimate of the a posteriori class probabilities, are combined through averaging. Second, we bolster this approach by adding an information-theoretic measure of output correlation to the estimate. Finally, we discuss a more general method that looks only at the class labels indicated by ensemble members and provides error estimates based on the disagreements among classifiers. The methods are illustrated for artificial data, a difficult four-class problem involving underwater acoustic data, and two problems from the Proben1 benchmarks. For data sets with known Bayes error, the combiner-based methods introduced in this article outperform existing methods. The estimates obtained by the proposed methods also seem quite reliable for the real-life data sets for which the true Bayes rates are unknown.
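
    The first (averaging-based) method can be sketched as a plug-in estimate: average the a posteriori probability estimates across the ensemble, then take the expected complement of the winning class probability. This is a simplified illustration of the idea, not the article's full estimator, which also accounts for added error and output correlation.

```python
import numpy as np

def bayes_error_plugin(posteriors):
    """posteriors: array of shape (n_classifiers, n_samples, n_classes),
    each slice holding one classifier's class-probability estimates.
    Averaging across the ensemble reduces the variance of the estimates;
    the plug-in Bayes error is then E[1 - max_c p_bar(c | x)]."""
    p_bar = np.mean(posteriors, axis=0)
    return float(np.mean(1.0 - p_bar.max(axis=1)))

# Two classifiers, two samples, two classes.
p = np.array([
    [[0.9, 0.1], [0.6, 0.4]],
    [[0.7, 0.3], [0.8, 0.2]],
])
estimate = bayes_error_plugin(p)   # averaged posteriors: [0.8, 0.2] and [0.7, 0.3]
```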

  9. Porous silicon nanoparticles as biocompatible contrast agents for magnetic resonance imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gongalsky, M. B., E-mail: mgongalsky@gmail.com; Kargina, Yu. V.; Osminkina, L. A.

    2015-12-07

    We propose porous silicon nanoparticles (PSi NPs) with a natural oxide coating as biocompatible and bioresorbable contrast agents for magnetic resonance imaging (MRI). A strong shortening of the transversal proton relaxation time (T₂) was observed for aqueous suspensions of PSi NPs, whereas the longitudinal relaxation time (T₁) changed moderately. The longitudinal and transversal relaxivities are estimated to be 0.03 and 0.4 l/(g·s), respectively, which is promising for biomedical studies. The proton relaxation is suggested to proceed via the magnetic dipole-dipole interaction with Si dangling bonds on the surfaces of PSi NPs. MRI experiments with phantoms have revealed the remarkable contrasting properties of PSi NPs for medical diagnostics.

  10. Understanding ferromagnetic hysteresis: A theoretical approach

    NASA Astrophysics Data System (ADS)

    Gangopadhyay, Bijan Kumar

    2018-05-01

    This work presents a theoretical-mathematical model of ferromagnetic hysteresis. A theoretical understanding of ferromagnetism can be achieved by addressing the self-interaction between the magnetic dipole moments associated with the magnetic domains, in conjunction with the pinning of the dipoles at defects in the domain sites. An expression relating the ferromagnetic magnetization to the effective magnetic field was established in our previous work (AIP Conference Proceedings 1665, 130042 (2015)). Using this relation and solving for the reversible and irreversible components of the magnetization, we show that the magnetic saturation and the magnetic remanence can be reproduced theoretically. This work also estimates the range of the external field that can be used to trace a reversible M-H curve.

  11. Optimizing estimation of hemispheric dominance for language using magnetic source imaging

    PubMed Central

    Passaro, Antony D.; Rezaie, Roozbeh; Moser, Dana C.; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C.

    2011-01-01

    The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold-standard Wada procedure have found inconsistent and often less-than-accurate estimates of laterality. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol, involving word recognition and the single-dipole method, with a sentence comprehension task and a beamformer approach that localizes neural oscillations. Beamformer analysis of the word recognition and sentence comprehension tasks revealed a desynchronization in the 10–18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10–18 Hz) revealed left-hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single-dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and the localization of oscillatory activity using a beamformer appear to be in line with general estimates of left-hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing. PMID:21890118

  12. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors often exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low-precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. The strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time and can estimate the model errors on-line in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; the filter therefore has the advantage of handling various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low-precision sensors than the traditional unscented Kalman filter (UKF).

  13. A Sensitive, Multifunctional Spinner Magnetometer Using Magneto-impedance Sensor: a Rapid and Convenient Tool for the Quantification of Inhomogeneity of Magnetization

    NASA Astrophysics Data System (ADS)

    Kodama, K.

    2016-12-01

    A new type of spinner magnetometer with a wide dynamic range, from 10⁻⁷ mAm² to 10⁻¹ mAm², and a resolution of 10⁻⁸ mAm² was developed. The high sensitivity was achieved by using a magneto-impedance (MI) sensor, a compact, high-performance magnetic sensor used in industrial fields. The slow spinning speed (5 Hz) and a unique mechanism enabling adjustment of the sample-sensor distance allow measurements of fragile samples of any shape and size. A differential arrangement connecting a pair of MI sensors in series with opposing polarity reduces external noise and temperature drift. The differential sensor output is fed to an amplification circuit with a programmable low-pass filter, and the signal referenced to the spinning frequency is detected with a digital lock-in amplifier. The spinner magnetometer has two selectable measurement modes, the fundamental mode (F-mode) and the harmonic mode (H-mode). Measurements in the F-mode detect signals oscillating at the fundamental frequency (5 Hz), as conventional spinner magnetometers do; in the H-mode, the second (10 Hz) and third (15 Hz) harmonic components can additionally be measured. Tests in the H-mode were performed using a small coil whose position was changed to simulate an offset dipole. The results demonstrate that the dipole moment of the fundamental component is systematically biased by both quadrupole and octupole components, arising in practice from inhomogeneity of magnetization or irregularity of sample shape. Combined with theoretical and numerical analyses, this study proposes a quantification of such non-dipole effects and the associated errors in the determination of the dipole moment of a sample, as well as their correction, which may be necessary, for example, when measuring irregularly shaped samples in the proximity of the sensor.

  14. Tissue resistivity estimation in the presence of positional and geometrical uncertainties.

    PubMed

    Baysal, U; Eyüboğlu, B M

    2000-08-01

    Geometrical uncertainties (organ boundary variation and electrode position uncertainties) are the biggest sources of error in estimating electrical resistivity of tissues from body surface measurements. In this study, in order to decrease estimation errors, the statistically constrained minimum mean squared error estimation algorithm (MiMSEE) is constrained with a priori knowledge of the geometrical uncertainties in addition to the constraints based on geometry, resistivity range, linearization and instrumentation errors. The MiMSEE calculates an optimum inverse matrix, which maps the surface measurements to the unknown resistivity distribution. The required data are obtained from four-electrode impedance measurements, similar to injected-current electrical impedance tomography (EIT). In this study, the surface measurements are simulated by using a numerical thorax model. The data are perturbed with additive instrumentation noise. Simulated surface measurements are then used to estimate the tissue resistivities by using the proposed algorithm. The results are compared with the results of conventional least squares error estimator (LSEE). Depending on the region, the MiMSEE yields an estimation error between 0.42% and 31.3% compared with 7.12% to 2010% for the LSEE. It is shown that the MiMSEE is quite robust even in the case of geometrical uncertainties.
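
    The core of a statistically constrained linear MMSE estimator like the MiMSEE can be written in one line. The sketch below shows only the generic Gauss-Markov form under zero-mean priors; the actual MiMSEE builds its prior covariance from the geometry, resistivity-range, linearization, and instrumentation constraints described above.

```python
import numpy as np

def lmmse_estimate(H, y, Cx, Cn):
    """Linear MMSE estimate of x from y = H x + n with zero-mean priors:
    x_hat = Cx H^T (H Cx H^T + Cn)^{-1} y. Cx encodes the a priori
    constraints; Cn is the instrumentation-noise covariance."""
    gain = Cx @ H.T @ np.linalg.inv(H @ Cx @ H.T + Cn)
    return gain @ y

# One unknown, one measurement: prior variance 4, noise variance 1.
x_hat = lmmse_estimate(np.array([[1.0]]), np.array([5.0]),
                       np.array([[4.0]]), np.array([[1.0]]))
```

    Unlike an unconstrained least-squares estimate (which would return 5.0 here), the MMSE estimate shrinks toward the prior mean in proportion to the noise level, which is why it stays robust when the measurements are unreliable.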

  15. New GRACE-Derived Storage Change Estimates Using Empirical Mode Extraction

    NASA Astrophysics Data System (ADS)

    Aierken, A.; Lee, H.; Yu, H.; Ate, P.; Hossain, F.; Basnayake, S. B.; Jayasinghe, S.; Saah, D. S.; Shum, C. K.

    2017-12-01

    Mass change estimates from GRACE spherical harmonic solutions have north/south stripes and east/west banded errors due to random noise and modeling errors. Low-pass filters such as decorrelation and Gaussian smoothing are typically applied to reduce the noise and errors. However, these filters introduce leakage errors that need to be addressed. GRACE mascon estimates (the JPL and CSR mascon solutions) do not need decorrelation or Gaussian smoothing and offer larger signal magnitudes compared to the filtered GRACE spherical harmonic (SH) results. However, a recent study [Chen et al., JGR, 2017] demonstrated that both the JPL and CSR mascon solutions also have leakage errors. We developed a new post-processing method based on empirical mode decomposition to estimate mass change from GRACE SH solutions without decorrelation and Gaussian smoothing, the two main sources of leakage errors. We found that, without any post-processing, the noise and errors in the spherical harmonic solutions introduce very clear high-frequency components in the spatial domain. By removing these high-frequency components while preserving the overall pattern of the signal, we obtained better mass estimates with minimal leakage errors. The new global mass change estimates captured all the signals observed by GRACE without the stripe errors. Results were compared with traditional methods over the Tonle Sap Basin in Cambodia, Northwestern India, the Central Valley in California, and the Caspian Sea. Our results provide larger signal magnitudes that are in good agreement with the leakage-corrected (forward-modeled) SH results.

  16. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). By using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors of GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K, and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is reduced from 0.66 K to 0.44 K and the MAPE is 1.3%. PMID:22164030
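
    A minimal back-propagation network of the kind described can be sketched with NumPy. The data are synthetic stand-ins for the predictors (air temperature, relative humidity, wind speed) and the SST retrieval error; the architecture, learning rate and iteration count are assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in data: three predictors, one target (SST error).
X = rng.normal(size=(200, 3))
y = (0.4 * X[:, 0] - 0.3 * X[:, 1] + 0.2 * X[:, 2]).reshape(-1, 1)

# One-hidden-layer back-propagation network (BPN), tanh hidden units.
W1 = rng.normal(scale=0.5, size=(3, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))
lr = 0.1

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

for _ in range(2000):
    h, pred = forward(X)
    err = pred - y
    # Back-propagate the squared-error gradient layer by layer.
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0, keepdims=True)
    dh = (err @ W2.T) * (1 - h**2)          # tanh derivative
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0, keepdims=True)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
```

After training, the network's residual RMSE is well below the standard deviation of the target, mirroring the paper's use of the BPN both to rank error sources and to correct the SST estimates.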

  17. Noise-induced errors in geophysical parameter estimation from retarding potential analyzers in low Earth orbit

    NASA Astrophysics Data System (ADS)

    Debchoudhury, Shantanab; Earle, Gregory

    2017-04-01

    Retarding potential analyzers (RPAs) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables of the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to simulated data derived from existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits, since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.
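
    The Monte Carlo approach (add noise to simulated data, refit, and examine the spread of the estimates) can be sketched on a toy two-parameter retarding curve. The exponential model and noise level below are assumptions for illustration, not an RPA physics model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for a current-voltage characteristic: i(v) = i0 * exp(-v / vt),
# with hypothetical parameter values.
i0_true, vt_true = 1.0, 0.25
v = np.linspace(0.0, 0.6, 50)
i_clean = i0_true * np.exp(-v / vt_true)

def fit(i_noisy):
    # Log-linear least squares: log i = log i0 - v / vt.
    z = np.log(np.clip(i_noisy, 1e-3, None))
    slope, intercept = np.polyfit(v, z, 1)
    return np.exp(intercept), -1.0 / slope

# Monte Carlo: add additive instrument noise, refit, collect the estimates.
n_trials, sigma = 500, 0.01
estimates = np.array([fit(i_clean + rng.normal(0.0, sigma, v.size))
                      for _ in range(n_trials)])

bias_vt = estimates[:, 1].mean() - vt_true   # noise-induced bias
std_vt = estimates[:, 1].std()               # noise-induced scatter
```

The bias and scatter of the fitted parameters across trials are the kind of noise-induced estimation errors the study quantifies, here for a single noise level rather than across geomagnetic conditions.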

  18. Using Audit Information to Adjust Parameter Estimates for Data Errors in Clinical Trials

    PubMed Central

    Shepherd, Bryan E.; Shaw, Pamela A.; Dodd, Lori E.

    2013-01-01

    Background Audits are often performed to assess the quality of clinical trial data, but beyond detecting fraud or sloppiness, the audit data are generally ignored. In earlier work using data from a non-randomized study, Shepherd and Yu (2011) developed statistical methods to incorporate audit results into study estimates, and demonstrated that audit data could be used to eliminate bias. Purpose In this manuscript we examine the usefulness of audit-based error-correction methods in clinical trial settings where a continuous outcome is of primary interest. Methods We demonstrate the bias of multiple linear regression estimates in general settings with an outcome that may have errors and a set of covariates for which some may have errors and others, including treatment assignment, are recorded correctly for all subjects. We study this bias under different assumptions, including independence between treatment assignment, covariates, and data errors (conceivable in a double-blinded randomized trial) and independence between treatment assignment and covariates but not data errors (possible in an unblinded randomized trial). We review moment-based estimators to incorporate the audit data and propose new multiple imputation estimators. The performance of estimators is studied in simulations. Results When treatment is randomized and unrelated to data errors, estimates of the treatment effect using the original error-prone data (i.e., ignoring the audit results) are unbiased. In this setting, both moment and multiple imputation estimators incorporating audit data are more variable than standard analyses using the original data. In contrast, in settings where treatment is randomized but correlated with data errors and in settings where treatment is not randomized, standard treatment effect estimates will be biased. In all settings, parameter estimates for the original, error-prone covariates will be biased.
Treatment and covariate effect estimates can be corrected by incorporating audit data using either the multiple imputation or moment-based approaches. Bias, precision, and coverage of confidence intervals improve as the audit size increases. Limitations The extent of bias and the performance of methods depend on the extent and nature of the error as well as the size of the audit. This work only considers methods for the linear model. Settings much different from those considered here need further study. Conclusions In randomized trials with continuous outcomes and treatment assignment independent of data errors, standard analyses of treatment effects will be unbiased and are recommended. However, if treatment assignment is correlated with data errors or other covariates, naive analyses may be biased. In these settings, and when covariate effects are of interest, approaches for incorporating audit results should be considered. PMID:22848072
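
    The moment-based idea (use an audit subsample, where source documents reveal the true values, to estimate how much the data errors attenuate a naive slope, and then divide the attenuation out) can be sketched as follows; all variances and the audit size are assumed values, not the paper's simulation design.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trial data: true covariate x, error-prone recorded copy
# w = x + u, and an outcome y that depends on the true x.
n = 2000
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 0.7, n)            # data errors in the database copy
y = 2.0 * x + rng.normal(0.0, 0.5, n)

# The naive slope from the error-prone data is attenuated toward zero.
beta_naive = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Audit a random subsample: source documents reveal the true x there.
audit = rng.choice(n, size=200, replace=False)
reliability = np.var(x[audit], ddof=1) / np.var(w[audit], ddof=1)

# Moment-based correction divides out the estimated attenuation factor.
beta_corrected = beta_naive / reliability
```

Consistent with the abstract, a larger audit gives a more precise reliability estimate and hence a less variable corrected effect estimate.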

  19. Estimation of geopotential differences over intercontinental locations using satellite and terrestrial measurements. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Pavlis, Nikolaos K.

    1991-01-01

    An error analysis study was conducted in order to assess the current accuracies and the anticipated future improvements in the estimation of geopotential differences over intercontinental locations. An observation/estimation scheme was proposed and studied, whereby gravity disturbance measurements on the Earth's surface, in caps surrounding the estimation points, are combined with corresponding data in caps directly over these points at the altitude of a low orbiting satellite, for the estimation of the geopotential difference between the terrestrial stations. The mathematical modeling required to relate the primary observables to the parameters to be estimated was studied for both the terrestrial data and the data at altitude. Emphasis was placed on the examination of systematic effects and on the corresponding reductions that need to be applied to the measurements to avoid systematic errors. The error estimation for the geopotential differences was performed using both truncation theory and least squares collocation with ring averages, in the case where only observations on the Earth's surface are used. The error analysis indicated that with the currently available global geopotential model OSU89B and with gravity disturbance data in 2 deg caps surrounding the estimation points, the error of the geopotential difference arising from errors in the reference model and the cap data is about 23 kgal cm, for 30 deg station separation.

  20. (How) do we learn from errors? A prospective study of the link between the ward's learning practices and medication administration errors.

    PubMed

    Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O

    2014-03-01

    Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. The aim was to test the effectiveness of four types of learning practices, namely non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of four hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and was measured by a validated structured observation sheet. Wards' use of medication administration technologies, the location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and a quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate=.03, p<.05 and estimate=-.17, p<.01, respectively), while workload was significantly linked to inflated medication administration errors (estimate=.04, p<.05). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate=-.04, p<.05). Integrated and patchy learning were significantly linked to higher levels of medication administration errors (estimate=-.03, p<.05 and estimate=-.04, p<.01, respectively). Non-integrated learning was not associated with medication administration errors (p>.05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technological risk factors.
Head nurses can facilitate learning from errors by "management by walking around" and by monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Sliding mode output feedback control based on tracking error observer with disturbance estimator.

    PubMed

    Xiao, Lingfei; Zhu, Yue

    2014-07-01

    For a class of systems subject to disturbances, an original output feedback sliding mode control method is presented, based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, with unknown bounds. By constructing a differential sliding surface and employing the reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, a novel tracking error observer is constructed. By using the observation of the tracking error and the estimation of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable, and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearing system verify the effectiveness of the proposed method. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
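
    A minimal sketch of the reaching-law construction on a first-order plant with an unknown bounded disturbance is shown below; this is not the paper's observer-based output feedback design (the state is assumed directly measurable here), and the plant, reference and gains are invented.

```python
import numpy as np

# First-order plant dx/dt = u + d(t) with a bounded disturbance d that the
# controller does not know, tracking the reference r(t) = sin(t).
dt, T = 1e-3, 5.0
t = np.arange(0.0, T, dt)
r = np.sin(t)
d = 0.5 * np.sin(3.0 * t) + 0.2          # nonvanishing disturbance, |d| < 1

x = 0.5
k, eta = 2.0, 1.0                        # reaching-law gains (assumed values)
err_hist = np.empty_like(t)

for i, ti in enumerate(t):
    e = x - r[i]
    s = e                                # sliding surface for a first-order plant
    # Reaching law u = r_dot - k*s - eta*sign(s); the sign term rejects d
    # because eta exceeds the disturbance bound.
    u = np.cos(ti) - k * s - eta * np.sign(s)
    x += dt * (u + d[i])                 # Euler integration of the plant
    err_hist[i] = abs(e)

final_err = err_hist[-int(1.0 / dt):].max()   # max |e| over the last second
```

The tracking error converges to a small chattering band around the sliding surface, illustrating why reachability holds as long as the switching gain dominates the disturbance bound.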

  2. Bias correction by use of errors-in-variables regression models in studies with K-X-ray fluorescence bone lead measurements.

    PubMed

    Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard

    2011-01-01

    In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, the considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of the bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from the uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of the OLS estimates. In conclusion, EIV is a better alternative than OLS for estimating the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
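
    The proposed correction can be imitated with simulated data: compute a reliability coefficient from the instrument-reported uncertainties and divide the attenuated OLS slope by it. All numbers below (true bone lead distribution, per-measurement KXRF uncertainties, outcome model) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)

# True bone lead x; the KXRF reading is w = x + u, and the instrument reports
# each measurement's uncertainty sigma_u.
n = 5000
x = rng.normal(20.0, 8.0, n)                  # true bone lead (ug/g), assumed
sigma_u = rng.uniform(3.0, 7.0, n)            # reported per-measurement uncertainty
w = x + rng.normal(0.0, sigma_u)
y = 1.5 + 0.1 * x + rng.normal(0.0, 1.0, n)   # outcome depends on true exposure

# OLS on the error-prone w is attenuated by the reliability coefficient.
beta_ols = np.cov(w, y)[0, 1] / np.var(w, ddof=1)

# Reliability estimated from the instrument-reported uncertainties:
# lambda = (Var(w) - mean(sigma_u^2)) / Var(w).
reliability = (np.var(w, ddof=1) - np.mean(sigma_u**2)) / np.var(w, ddof=1)
beta_eiv = beta_ols / reliability
```

The corrected slope is nearly unbiased, at the cost of somewhat larger variance, which is the OLS-vs-EIV trade-off the simulations in the paper quantify.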

  3. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    EIA Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.

  4. Fisher classifier and its probability of error estimation

    NASA Technical Reports Server (NTRS)

    Chittineni, C. B.

    1979-01-01

    Computationally efficient expressions are derived for estimating the probability of error using the leave-one-out method. The optimal threshold for the classification of patterns projected onto Fisher's direction is derived. A simple generalization of the Fisher classifier to multiple classes is presented. Computational expressions are developed for estimating the probability of error of the multiclass Fisher classifier.
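
    A brute-force leave-one-out error estimate for a two-class Fisher classifier looks like the following; the paper derives computationally efficient expressions precisely to avoid this n-fold refitting, and the Gaussian toy data below are assumed.

```python
import numpy as np

rng = np.random.default_rng(5)

# Two Gaussian classes in 2-D (toy data).
X = np.vstack([rng.normal([0.0, 0.0], 1.0, size=(40, 2)),
               rng.normal([2.5, 2.5], 1.0, size=(40, 2))])
labels = np.array([0] * 40 + [1] * 40)

def fisher_fit(Xs, ys):
    X0, X1 = Xs[ys == 0], Xs[ys == 1]
    m0, m1 = X0.mean(0), X1.mean(0)
    # Within-class scatter matrix.
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw, m1 - m0)        # Fisher's direction
    thr = 0.5 * (w @ m0 + w @ m1)           # midpoint threshold on projections
    return w, thr

# Leave-one-out estimate of the probability of error: refit without each
# sample, then classify the held-out sample.
mistakes = 0
for i in range(len(X)):
    keep = np.arange(len(X)) != i
    w, thr = fisher_fit(X[keep], labels[keep])
    mistakes += int(int(w @ X[i] > thr) != labels[i])

loo_error = mistakes / len(X)
```

With well-separated classes the leave-one-out error is small; the efficient expressions in the paper produce the same estimate without refitting the classifier once per sample.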

  5. Disturbance torque rejection properties of the NASA/JPL 70-meter antenna axis servos

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1989-01-01

    Analytic methods for evaluating pointing errors caused by external disturbance torques are developed and applied to determine the effects of representative values of wind and friction torque. The expressions relating pointing errors to disturbance torques are shown to be strongly dependent upon the state estimator parameters, as well as upon the state feedback gain and the flow versus pressure characteristics of the hydraulic system. Under certain conditions, when control is derived from an uncorrected estimate of integral position error, the desired type 2 servo properties are not realized and finite steady-state position errors result. Methods for reducing these errors to negligible proportions through the proper selection of control gain and estimator correction parameters are demonstrated. The steady-state error produced by a disturbance torque is found to be directly proportional to the hydraulic internal leakage. This property can be exploited to provide a convenient method of determining system leakage from field measurements of estimator error, axis rate, and hydraulic differential pressure.

  6. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio, i.e., the ratio of the response error variance to the measurement error variance.
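
    The role of the variance ratio can be illustrated with a classical estimator in the same spirit, Deming regression, which likewise weighs response error variance against measurement error variance. This is a sketch of that classical method, not of the modified least squares proposed in the paper, and the simulated values are assumed.

```python
import numpy as np

rng = np.random.default_rng(6)

# Calibration sketch: true factor x, measured factor w = x + u, response
# y = 2x + e. The variance ratio delta = var(e)/var(u) is assumed known.
n = 4000
x = rng.normal(0.0, 1.0, n)
w = x + rng.normal(0.0, 0.5, n)       # measurement error in the factor
y = 2.0 * x + rng.normal(0.0, 1.0, n)
delta = 1.0**2 / 0.5**2               # response-error var / measurement-error var

sxx = np.var(w, ddof=1)
syy = np.var(y, ddof=1)
sxy = np.cov(w, y)[0, 1]

# Plain OLS ignores the factor's measurement error (attenuated slope).
beta_ols = sxy / sxx

# Deming-style slope incorporating the variance ratio.
beta_deming = ((syy - delta * sxx) +
               np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy**2)) / (2 * sxy)
```

OLS recovers an attenuated slope near 1.6, while the variance-ratio-aware estimator recovers the true slope of 2, illustrating why incorporating the magnitude of the measurement errors matters in calibration.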

  7. Quantifying the impact of material-model error on macroscale quantities-of-interest using multiscale a posteriori error-estimation techniques

    DOE PAGES

    Brown, Judith A.; Bishop, Joseph E.

    2016-07-20

    An a posteriori error-estimation framework is introduced to quantify and reduce modeling errors resulting from approximating complex mesoscale material behavior with a simpler macroscale model. Such errors may be prevalent when modeling welds and additively manufactured structures, where spatial variations and material textures may be present in the microstructure. We consider a case where a <100> fiber texture develops in the longitudinal scanning direction of a weld. Transversely isotropic elastic properties are obtained through homogenization of a microstructural model with this texture and are considered the reference weld properties within the error-estimation framework. Conversely, isotropic elastic properties are considered approximate weld properties, since they contain no representation of texture. Errors introduced by using isotropic material properties to represent a weld are assessed through a quantified error bound in the elastic regime. Lastly, an adaptive error reduction scheme is used to determine the optimal spatial variation of the isotropic weld properties to reduce the error bound.

  8. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released into the atmosphere during the accident at the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such a critical context, where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq, with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.
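
    The need to estimate each data set's error statistics jointly with the source term can be sketched with a crude iterative stand-in for the maximum-likelihood machinery: alternate between a weighted inversion and re-estimating each data set's error scale from its own residuals. The operators, noise levels and release profile below are invented, not the Fukushima data.

```python
import numpy as np

rng = np.random.default_rng(10)

# Two data sets observe the same source profile sigma through different
# operators (think air concentrations vs. deposition), with different,
# initially unknown, error levels.
n_src = 5
sigma_true = np.array([0.0, 3.0, 8.0, 2.0, 0.0])   # hypothetical release profile

H1 = np.abs(rng.normal(size=(30, n_src)))           # "air concentration" operator
H2 = np.abs(rng.normal(size=(10, n_src)))           # "deposition" operator
y1 = H1 @ sigma_true + rng.normal(0.0, 0.5, 30)
y2 = H2 @ sigma_true + rng.normal(0.0, 2.0, 10)

def solve(s1, s2):
    # Weighted least squares with per-data-set error standard deviations.
    A = np.vstack([H1 / s1, H2 / s2])
    b = np.concatenate([y1 / s1, y2 / s2])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Iteratively re-estimate each data set's error scale from its residuals.
s1, s2 = 1.0, 1.0
for _ in range(20):
    est = solve(s1, s2)
    s1 = np.sqrt(np.mean((y1 - H1 @ est) ** 2))
    s2 = np.sqrt(np.mean((y2 - H2 @ est) ** 2))

sigma_hat = solve(s1, s2)
total_release = sigma_hat.sum()
```

The recovered error scales settle near the true noise levels, so each data set receives an appropriate weight in the joint inversion instead of the noisier set dominating it.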

  9. Line Intensities in the ν8 Band of HNO3

    NASA Astrophysics Data System (ADS)

    Wang, W. F.; Looi, E. C.; Tan, T. L.; Ong, P. P.

    1996-07-01

    Line intensity measurements have been made on the ν8 band of HNO3 using a high-resolution Fourier transform infrared spectrum in the region 739-800 cm⁻¹. A least-squares fit of a total of 710 line intensities in the P and R branches was performed, leading to an accurate determination of five dipole moment operator constants. By utilizing these constants, the observed line intensities are well reproduced, with an average random error of 6%, and the integrated band intensity is found to be 15.7 ± 0.9 cm⁻² atm⁻¹ at 296 K.

  10. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques inspire only limited confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows how to determine the standard empirical state error covariance matrix.
This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple two-observer triangulation problem with range-only measurements. Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
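
    The core computation (rescale the theoretical covariance by the average weighted residual variance, so that unmodeled errors are reflected) can be sketched for a linear batch problem. The measurement model and the deliberately mis-specified noise level below are assumed, not the paper's triangulation example.

```python
import numpy as np

rng = np.random.default_rng(7)

# Batch weighted least squares with an optimistic noise model:
# the actual noise std is 2.0, but the weights assume unit variance.
m, n = 100, 3
A = rng.normal(size=(m, n))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + rng.normal(0.0, 2.0, m)

W = np.eye(m)                        # weights from the assumed unit variance
N = A.T @ W @ A                      # normal matrix
x_hat = np.linalg.solve(N, A.T @ W @ y)

# Traditional covariance only maps the *assumed* noise into state space.
P_theory = np.linalg.inv(N)

# Empirical covariance rescales by the average weighted residual variance,
# so the actual (including unmodeled) errors are reflected.
r = y - A @ x_hat
s2 = (r @ W @ r) / (m - n)
P_emp = s2 * np.linalg.inv(N)
```

Because the true noise variance is four times the assumed one, the scale factor comes out near four and the empirical covariance correctly inflates the optimistic theoretical one.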

  11. Coupled-resonator waveguide perfect transport single-photon by interatomic dipole-dipole interaction

    NASA Astrophysics Data System (ADS)

    Yan, Guo-an; Lu, Hua; Qiao, Hao-xue; Chen, Ai-xi; Wu, Wan-qing

    2018-06-01

    We theoretically investigate single-photon coherent transport in a one-dimensional coupled-resonator waveguide coupled to two quantum emitters with dipole-dipole interactions. The numerical simulations demonstrate that the transmission spectrum of the photon depends on the two atoms' dipole-dipole interaction and on the photon-atom couplings. The dipole-dipole interaction may change the dip positions in the spectra, and the coupling strength may broaden the frequency bandwidth of the transmission spectrum. We further demonstrate that the typical transmission spectra split into two dips due to the dipole-dipole interaction. This phenomenon may be used to manufacture new quantum waveguide devices.

  12. An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II. A Posteriori Error Estimates and Adaptivity.

    DTIC Science & Technology

    1983-03-01

    An Analysis of a Finite Element Method for Convection-Diffusion Problems. Part II: A Posteriori Error Estimates and Adaptivity, by W. G. Szymczak and I. Babuška.

  13. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    NASA Astrophysics Data System (ADS)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
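
    The iterative solution step can be sketched as symmetric Gauss-Seidel sweeps on a small symmetric positive definite system; the 1-D stiffness-like matrix below is a stand-in for the global hierarchical error problem, not the paper's setup.

```python
import numpy as np

def sym_gauss_seidel(A, b, z, sweeps):
    # One symmetric sweep = a forward pass over the unknowns followed by a
    # backward pass, updating each component in place.
    n = len(b)
    for _ in range(sweeps):
        for i in list(range(n)) + list(range(n - 1, -1, -1)):
            z[i] = (b[i] - A[i, :i] @ z[:i] - A[i, i + 1:] @ z[i + 1:]) / A[i, i]
    return z

# Tridiagonal SPD test system, as in a 1-D finite element stiffness matrix.
n = 20
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

z = sym_gauss_seidel(A, b, np.zeros(n), sweeps=1000)
residual = np.linalg.norm(b - A @ z)
```

In the mesh-adaptation context only a few such sweeps are needed, since the goal is directional information for the metric tensor rather than an exact solution of the error problem.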

  14. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    NASA Technical Reports Server (NTRS)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher-fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes perfect sensor and estimator models. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher-fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate so that the results are similar to CAST. The signal generation model has mean, variance and power spectral density characteristics similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher-fidelity spacecraft dynamics modeling of the CAST software.
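
    The signal-generation idea (shape white noise so that its mean, variance and spectral content match a measured error signal) can be sketched with an AR(1) filter; the fitted values below are assumed for illustration, not SMAP numbers.

```python
import numpy as np

rng = np.random.default_rng(8)

# Target statistics of the estimation-error signal to be mimicked, and an
# AR(1) coefficient standing in for a fitted spectral shape.
n = 20000
target_mean, target_var, phi = 0.02, 0.3**2, 0.9

w = rng.normal(0.0, 1.0, n)
e = np.empty(n)
e[0] = w[0]
for k in range(1, n):
    e[k] = phi * e[k - 1] + w[k]     # AR(1): PSD ~ 1 / |1 - phi e^{-j w}|^2

# Rescale to the target mean and variance (the PSD shape is preserved).
e = (e - e.mean()) / e.std() * np.sqrt(target_var) + target_mean

sample_mean, sample_var = float(e.mean()), float(e.var())
```

Injecting such a sequence into the lower-fidelity simulation reproduces the first- and second-order statistics of the higher-fidelity estimation error, which is the substitution described in the abstract.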

  15. Effect of correlated observation error on parameters, predictions, and uncertainty

    USGS Publications Warehouse

    Tiedeman, Claire; Green, Christopher T.

    2013-01-01

    Correlations among observation errors are typically omitted when calculating observation weights for model calibration by inverse methods. We explore the effects of omitting these correlations on estimates of parameters, predictions, and uncertainties. First, we develop a new analytical expression for the difference in parameter variance estimated with and without error correlations for a simple one-parameter two-observation inverse model. Results indicate that omitting error correlations from both the weight matrix and the variance calculation can either increase or decrease the parameter variance, depending on the values of error correlation (ρ) and the ratio of dimensionless scaled sensitivities (rdss). For small ρ, the difference in variance is always small, but for large ρ, the difference varies widely depending on the sign and magnitude of rdss. Next, we consider a groundwater reactive transport model of denitrification with four parameters and correlated geochemical observation errors that are computed by an error-propagation approach that is new for hydrogeologic studies. We compare parameter estimates, predictions, and uncertainties obtained with and without the error correlations. Omitting the correlations modestly to substantially changes parameter estimates, and causes both increases and decreases of parameter variances, consistent with the analytical expression. Differences in predictions for the models calibrated with and without error correlations can be greater than parameter differences when both are considered relative to their respective confidence intervals. These results indicate that including observation error correlations in weighting for nonlinear regression can have important effects on parameter estimates, predictions, and their respective uncertainties.
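
    The effect of omitting error correlations from the weights can be illustrated for a one-parameter model: compare the variance of the generalized least squares estimator, which uses the full error covariance, with the variance of the estimator that ignores the correlations. The AR(1)-style correlation structure below is an assumed example.

```python
import numpy as np

rng = np.random.default_rng(9)

# One-parameter model y = a * theta with correlated observation errors.
n = 50
a = rng.normal(1.0, 0.2, n)                                      # sensitivities
rho = 0.8
C = np.fromfunction(lambda i, j: rho ** np.abs(i - j), (n, n))   # error covariance

# GLS uses the full covariance in the weights.
Ci = np.linalg.inv(C)
var_gls = 1.0 / (a @ Ci @ a)

# Ignoring correlations (unit diagonal here) gives the OLS estimator
# theta_hat = (a @ y) / (a @ a); its variance under the *true* correlated
# errors is larger.
var_ols = (a @ C @ a) / (a @ a) ** 2

ratio = var_ols / var_gls   # >= 1 by the Gauss-Markov theorem
```

The variance ratio quantifies the efficiency lost by leaving the correlations out of the weight matrix, the same comparison the analytical expression in the paper makes for its two-observation model.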

  16. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters needing adjustment by the analyst are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output-error and frequency-domain equation-error methods to demonstrate the effectiveness of the approach.

  17. Performance analysis of adaptive equalization for coherent acoustic communications in the time-varying ocean environment.

    PubMed

    Preisig, James C

    2005-07-01

    Equations are derived for analyzing the performance of channel-estimate-based equalizers. The performance is characterized in terms of the mean squared soft decision error (σ²_s) of each equalizer. This error is decomposed into two components: the minimum achievable error (σ²_0) and the excess error (σ²_e). The former is the soft decision error that would be realized by the equalizer if the filter coefficient calculation were based upon perfect knowledge of the channel impulse response and the statistics of the interfering noise field. The latter is the additional soft decision error that is realized due to errors in the estimates of these channel parameters. These expressions accurately predict the equalizer errors observed in the processing of experimental data by a channel-estimate-based decision feedback equalizer (DFE) and a passive time-reversal equalizer. Further expressions are presented that allow equalizer performance to be predicted given the scattering function of the acoustic channel. The analysis using these expressions yields insights into the features of surface scattering that most significantly impact equalizer performance in shallow water environments, and motivates the implementation of a DFE that is robust with respect to channel estimation errors.

  18. Noise Estimation and Adaptive Encoding for Asymmetric Quantum Error Correcting Codes

    NASA Astrophysics Data System (ADS)

Florjanczyk, Jan; Brun, Todd; Center for Quantum Information Science and Technology Team

We present a technique that improves the performance of asymmetric quantum error correcting codes in the presence of biased qubit noise channels. Our study is motivated by considering what useful information can be learned from the statistics of syndrome measurements in stabilizer quantum error correcting codes (QECCs). We consider the case of a qubit dephasing channel where the dephasing axis is unknown and time-varying. We are able to estimate the dephasing angle from the statistics of the standard syndrome measurements used in stabilizer QECCs. We use this estimate to rotate the computational basis of the code in such a way that the most likely type of error is covered by the highest distance of the asymmetric code. In particular, we use the [[15,1,3]] shortened Reed-Muller code, which can correct one phase-flip error but up to three bit-flip errors. In our simulations, we tune the computational basis to match the estimated dephasing axis, which in turn leads to a decrease in the probability of a phase-flip error. With a sufficiently accurate estimate of the dephasing axis, our memory's effective error is dominated by the much lower probability of four bit-flips. ARO MURI Grant No. W911NF-11-1-0268.

  19. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    NASA Astrophysics Data System (ADS)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology (including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records), rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have on subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and with a single representative error source (e.g. radar-rain gauge differences, spatial and temporal resolution). Moreover, rainfall error models have mostly been developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of the propagation of rain gauge record errors through urban drainage models, and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models (originally developed for large scales) have been tested at urban scales [2], and they have been shown to fail to capture small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations.
In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case the radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). The model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using as a case study a detailed rainfall and flow dataset, and a carefully verified urban drainage model, of a small (~9 km²) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22(6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.

  20. Quantitative estimation of localization errors of 3d transition metal pseudopotentials in diffusion Monte Carlo

    DOE PAGES

    Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.

    2017-07-10

The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and the T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single-determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, though not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an estimated 40% on average when the locality approximation is used. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error dominates the locality error when a single determinant is used.

  1. A measurement error model for physical activity level as measured by a questionnaire with application to the 1999-2006 NHANES questionnaire.

    PubMed

    Tooze, Janet A; Troiano, Richard P; Carroll, Raymond J; Moshfegh, Alanna J; Freedman, Laurence S

    2013-06-01

    Systematic investigations into the structure of measurement error of physical activity questionnaires are lacking. We propose a measurement error model for a physical activity questionnaire that uses physical activity level (the ratio of total energy expenditure to basal energy expenditure) to relate questionnaire-based reports of physical activity level to true physical activity levels. The 1999-2006 National Health and Nutrition Examination Survey physical activity questionnaire was administered to 433 participants aged 40-69 years in the Observing Protein and Energy Nutrition (OPEN) Study (Maryland, 1999-2000). Valid estimates of participants' total energy expenditure were also available from doubly labeled water, and basal energy expenditure was estimated from an equation; the ratio of those measures estimated true physical activity level ("truth"). We present a measurement error model that accommodates the mixture of errors that arise from assuming a classical measurement error model for doubly labeled water and a Berkson error model for the equation used to estimate basal energy expenditure. The method was then applied to the OPEN Study. Correlations between the questionnaire-based physical activity level and truth were modest (r = 0.32-0.41); attenuation factors (0.43-0.73) indicate that the use of questionnaire-based physical activity level would lead to attenuated estimates of effect size. Results suggest that sample sizes for estimating relationships between physical activity level and disease should be inflated, and that regression calibration can be used to provide measurement error-adjusted estimates of relationships between physical activity and disease.
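The attenuation-factor logic can be sketched in a few lines of simulation (all values invented, not OPEN Study data): with a classical error whose variance equals that of the true exposure, the naive slope is attenuated by a factor of about 0.5, and dividing by the attenuation factor (regression calibration) restores it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.normal(0.0, 1.0, n)             # true physical activity level (standardized)
u = rng.normal(0.0, 1.0, n)             # classical measurement error
w = x + u                               # questionnaire-based report
y = 0.5 * x + rng.normal(0.0, 1.0, n)   # outcome; true slope 0.5

def slope(a, b):
    """OLS slope of b regressed on a."""
    return np.cov(a, b)[0, 1] / np.var(a)

naive = slope(w, y)            # attenuated toward 0 (about 0.25 here)
lam = np.var(x) / np.var(w)    # attenuation factor; in practice estimated from
                               # validation data such as doubly labeled water
calibrated = naive / lam       # regression-calibration correction (about 0.5)
```

The attenuation factor here (~0.5) sits in the same role as the 0.43-0.73 range reported in the abstract; smaller factors mean greater attenuation of effect estimates.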

2. Computer simulation comparison of tripolar, bipolar, and spline Laplacian electrocardiogram estimators.

    PubMed

    Chen, T; Besio, W; Dai, W

    2009-01-01

A comparison of the performance of the tripolar and bipolar concentric as well as spline Laplacian electrocardiograms (LECGs) and body surface Laplacian mappings (BSLMs) for localizing and imaging the cardiac electrical activation has been investigated based on computer simulation. In the simulation, a simplified eccentric heart-torso sphere-cylinder homogeneous volume conductor model was developed. Multiple dipoles with different orientations were used to simulate the underlying cardiac electrical activity. Results show that the tripolar concentric ring electrodes produce the most accurate LECG and BSLM estimation among the three estimators, with the best performance in spatial resolution.

  3. Base Level Management of Radio Frequency Radiation Protection Program

    DTIC Science & Technology

    1989-04-01

[Report excerpt; contents fragments] Estimated Hazard Distance for Vertical Monopole Antennae; Permissible Exposure Limits; Monopole Antennas; Radiation Pattern of Monopole Antennas; correction factors for determining power density values in the near-field of an emitter: Power Density = (4 x P_av)/(Antenna Area) (14), for dipole and monopole antennas.
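The near-field relation quoted in the report (Eq. 14) is a simple ratio; the sketch below evaluates it for hypothetical values of the average power and antenna area (the numbers are illustrative, not from the report):

```python
# Power Density = (4 x P_av) / (Antenna Area)   -- Eq. 14 of the report
P_av = 100.0          # average power delivered to the antenna, W (hypothetical)
area_m2 = 2.0         # physical antenna aperture area, m^2 (hypothetical)

S = 4.0 * P_av / area_m2   # near-field power density, W/m^2
S_mw_cm2 = S * 0.1         # unit conversion: 1 W/m^2 = 0.1 mW/cm^2
# S_mw_cm2 would then be compared against the permissible exposure limit
# for the emitter's frequency band.
```

For these inputs S is 200 W/m^2 (20 mW/cm^2); the comparison against a permissible exposure limit is the hazard-assessment step the report describes.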

  4. The generation of piezoelectricity and flexoelectricity in graphene by breaking the materials symmetries.

    PubMed

    Javvaji, Brahmanandam; He, Bo; Zhuang, Xiaoying

    2018-06-01

Graphene is a non-piezoelectric material. Engineering piezoelectricity in graphene is possible with the help of impurities, defects and structural modifications. This study reports the mechanism of strain-induced polarization and the estimation of piezoelectric and flexoelectric coefficients for graphene systems. A combination of a charge-dipole potential and a strong many-body potential is employed to describe the inter-atomic interactions. The breaking of symmetry in the graphene material is utilized to generate the polarization. Pristine graphene, graphene with a circular defect, graphene with a triangular defect and trapezium-shaped graphene are considered. Molecular dynamics simulations are performed to strain the graphene atomic systems. Optimization of the charge-dipole potential functions measures the polarization for these systems. The pristine and circular-defect graphene systems show a constant polarization with strain. The polarization varies with strain for the triangular-defect and trapezium-shaped graphene systems. The local atomic deformation produces a change in polarization with respect to the strain gradient. The estimated piezoelectric and flexoelectric coefficients motivate the usage of graphene in electro-mechanical devices.

  5. The generation of piezoelectricity and flexoelectricity in graphene by breaking the materials symmetries

    NASA Astrophysics Data System (ADS)

    Javvaji, Brahmanandam; He, Bo; Zhuang, Xiaoying

    2018-06-01

Graphene is a non-piezoelectric material. Engineering piezoelectricity in graphene is possible with the help of impurities, defects and structural modifications. This study reports the mechanism of strain-induced polarization and the estimation of piezoelectric and flexoelectric coefficients for graphene systems. A combination of a charge-dipole potential and a strong many-body potential is employed to describe the inter-atomic interactions. The breaking of symmetry in the graphene material is utilized to generate the polarization. Pristine graphene, graphene with a circular defect, graphene with a triangular defect and trapezium-shaped graphene are considered. Molecular dynamics simulations are performed to strain the graphene atomic systems. Optimization of the charge-dipole potential functions measures the polarization for these systems. The pristine and circular-defect graphene systems show a constant polarization with strain. The polarization varies with strain for the triangular-defect and trapezium-shaped graphene systems. The local atomic deformation produces a change in polarization with respect to the strain gradient. The estimated piezoelectric and flexoelectric coefficients motivate the usage of graphene in electro-mechanical devices.

  6. Partial Deconvolution with Inaccurate Blur Kernel.

    PubMed

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to an inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel, and partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by an inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  7. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers lead to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.

  8. Enhanced and tunable electric dipole-dipole interactions near a planar metal film

    NASA Astrophysics Data System (ADS)

    Zhou, Lei-Ming; Yao, Pei-Jun; Zhao, Nan; Sun, Fang-Wen

    2017-08-01

We investigate the enhanced electric dipole-dipole interaction of surface plasmon polaritons (SPPs) supported by a planar metal film waveguide. Taking two nitrogen-vacancy (NV) center electric dipoles in diamond as an example, both the coupling strength and the collective relaxation of the two dipoles are studied with the numerical Green's function method. Compared to two-dipole coupling on a planar surface, the metal film provides stronger and tunable coupling coefficients. Enhancement of the interaction between coupled NV center dipoles could have applications in both quantum information and energy transfer investigations. Our investigation provides systematic results for experimental applications based on dipole-dipole interactions mediated by SPPs on a planar metal film.

  9. Pressure profiles of plasmas confined in the field of a dipole magnet

    NASA Astrophysics Data System (ADS)

    Davis, Matthew Stiles

    Understanding the maintenance and stability of plasma pressure confined by a strong magnetic field is a fundamental challenge in both laboratory and space plasma physics. Using magnetic and X-ray measurements on the Levitated Dipole Experiment (LDX), the equilibrium plasma pressure has been reconstructed, and variations of the plasma pressure for different plasma conditions have been examined. The relationship of these profiles to the magnetohydrodynamic (MHD) stability limit, and to the enhanced stability limit that results from a fraction of energetic trapped electrons, has been analyzed. In each case, the measured pressure profiles and the estimated fractional densities of energetic electrons were qualitatively consistent with expectations of plasma stability. LDX confines high temperature and high pressure plasma in the field of a superconducting dipole magnet. The strong dipole magnet can be either mechanically supported or magnetically levitated. When the dipole was mechanically supported, the plasma density profile was generally uniform while the plasma pressure was highly peaked. The uniform density was attributed to the thermal plasma being rapidly lost along the field to the mechanical supports. In contrast, the strongly peaked plasma pressure resulted from a fraction of energetic, mirror trapped electrons created by microwave heating at the electron cyclotron resonance (ECRH). These hot electrons are known to be gyrokinetically stabilized by the background plasma and can adopt pressure profiles steeper than the MHD limit. X-ray measurements indicated that this hot electron population could be described by an energy distribution in the range 50-100 keV. 
Combining information from the magnetic reconstruction of the pressure profile, multi-chord interferometer measurements of the electron density profile, and X-ray measurements of the hot electron energy distribution, the fraction of energetic electrons at the pressure peak was estimated to be ˜ 35% of the total electron population. When the dipole was magnetically levitated the plasma density increased substantially because particle losses to the mechanical supports were eliminated so particles could only be lost via slower cross-field transport processes. The pressure profile was observed to be broader during levitated operation than it was during supported operation, and the pressure appeared to be contained in both a thermal population and an energetic electron population. X-ray spectra indicated that the X-rays came from a similar hot electron population during levitated and supported operation; however, the hot electron fraction was an order of magnitude smaller during levitated operation (<3% of the total electron population). Pressure gradients for both supported and levitated plasmas were compared to the MHD limit. Levitated plasmas had pressure profiles that were (i) steeper than, (ii) shallower than, or (iii) near the MHD limit dependent on plasma conditions. However, those profiles that exceeded the MHD limit were observed to have larger fractions of energetic electrons. When the dipole magnet was supported, high pressure plasmas always had profiles that exceeded the MHD interchange stability limit, but the high pressure in these plasmas appeared to arise entirely from a population of energetic trapped electrons.

  10. Do Survey Data Estimate Earnings Inequality Correctly? Measurement Errors among Black and White Male Workers

    ERIC Educational Resources Information Center

    Kim, ChangHwan; Tamborini, Christopher R.

    2012-01-01

    Few studies have considered how earnings inequality estimates may be affected by measurement error in self-reported earnings in surveys. Utilizing restricted-use data that links workers in the Survey of Income and Program Participation with their W-2 earnings records, we examine the effect of measurement error on estimates of racial earnings…

  11. A-Posteriori Error Estimation for Hyperbolic Conservation Laws with Constraint

    NASA Technical Reports Server (NTRS)

    Barth, Timothy

    2004-01-01

    This lecture considers a-posteriori error estimates for the numerical solution of conservation laws with time invariant constraints such as those arising in magnetohydrodynamics (MHD) and gravitational physics. Using standard duality arguments, a-posteriori error estimates for the discontinuous Galerkin finite element method are then presented for MHD with solenoidal constraint. From these estimates, a procedure for adaptive discretization is outlined. A taxonomy of Green's functions for the linearized MHD operator is given which characterizes the domain of dependence for pointwise errors. The extension to other constrained systems such as the Einstein equations of gravitational physics are then considered. Finally, future directions and open problems are discussed.

  12. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    PubMed

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended the phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors, based on theoretical calculations, is small and may not be of clinical or research significance.

  13. High dimensional linear regression models under long memory dependence and measurement error

    NASA Astrophysics Data System (ADS)

    Kaul, Abhishek

This dissertation consists of three chapters. The first chapter introduces the models under consideration and motivates the problems of interest. A brief literature review is also provided in this chapter. The second chapter investigates the properties of Lasso under long range dependent model errors. Lasso is a computationally efficient approach to model selection and estimation, and its properties are well studied when the regression errors are independent and identically distributed. We study the case where the regression errors form a long memory moving average process. We establish a finite sample oracle inequality for the Lasso solution. We then show the asymptotic sign consistency in this setup. These results are established in the high dimensional setup (p > n) where p can increase exponentially with n. Finally, we show the n^(1/2-d)-consistency of Lasso, along with the oracle property of adaptive Lasso, in the case where p is fixed. Here d is the memory parameter of the stationary error sequence. The performance of Lasso is also analysed in the present setup with a simulation study. The third chapter proposes and investigates the properties of a penalized quantile based estimator for measurement error models. Standard formulations of prediction problems in high dimension regression models assume the availability of fully observed covariates and sub-Gaussian and homogeneous model errors. This makes these methods inapplicable to measurement error models, where covariates are unobservable and observations are possibly non sub-Gaussian and heterogeneous. We propose weighted penalized corrected quantile estimators for the regression parameter vector in linear regression models with additive measurement errors, where the unobservable covariates are nonrandom. The proposed estimators forgo the need for the above mentioned model assumptions.
We study these estimators in both the fixed-dimension and the high dimensional sparse setups; in the latter, the dimensionality can grow exponentially with the sample size. In the fixed dimensional setting we provide the oracle properties associated with the proposed estimators. In the high dimensional setting, we provide bounds for the statistical error associated with the estimation that hold with asymptotic probability 1, thereby providing the ℓ1-consistency of the proposed estimator. We also establish the model selection consistency in terms of the correctly estimated zero components of the parameter vector. A simulation study that investigates the finite sample accuracy of the proposed estimator is also included in this chapter.
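A minimal sketch of the penalized estimation at the heart of the second chapter: a plain Lasso fitted by proximal gradient descent (ISTA) on synthetic data whose errors follow an AR(1) process, a crude stand-in for the long-memory errors studied in the thesis. All dimensions and parameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def lasso_ista(X, y, lam, n_iter=500):
    """Lasso via proximal gradient (ISTA): min_b 0.5/n*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n      # Lipschitz constant of the gradient
    b = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) / n
        z = b - grad / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-thresholding
    return b

n, p = 400, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, -2.0, 1.5]                # sparse ground truth
e = np.zeros(n)                            # AR(1) errors: a short-memory proxy
for t in range(1, n):                      # for dependent regression noise
    e[t] = 0.7 * e[t - 1] + rng.standard_normal()
y = X @ beta + e
b_hat = lasso_ista(X, y, lam=0.2)
```

The active coefficients are recovered (with the usual shrinkage bias) while the remaining coordinates are driven to or near zero; genuinely long-memory errors decay much more slowly than this AR(1) proxy, which is what makes the thesis's analysis nontrivial.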

  14. Difference-based ridge-type estimator of parameters in restricted partial linear model with correlated errors.

    PubMed

    Wu, Jibo

    2016-01-01

In this article, a generalized difference-based ridge estimator is proposed for the vector parameter in a partial linear model when the errors are dependent. It is supposed that some additional linear constraints may hold on the whole parameter space. The estimator's mean-squared error matrix is compared with that of the generalized restricted difference-based estimator. Finally, the performance of the new estimator is illustrated by a simulation study and a numerical example.

  15. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

Inverse probability weighting estimation has been popularly used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
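The paper's finding for continuous outcomes (that additive, mean-zero outcome error leaves the naive inverse probability weighting estimator consistent) can be checked in a toy simulation with a known propensity score; all parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
x = rng.standard_normal(n)                  # confounder
p = 1.0 / (1.0 + np.exp(-x))                # true propensity score P(A=1|X)
a = rng.binomial(1, p)                      # treatment assignment
y = 2.0 * a + x + rng.standard_normal(n)    # continuous outcome; true ATE = 2
y_err = y + rng.normal(0.0, 0.5, n)         # additive, mean-zero measurement error

def ipw_ate(a, y, p):
    """Inverse probability weighting estimator of the average treatment effect."""
    return np.mean(a * y / p) - np.mean((1 - a) * y / (1 - p))

ate_clean = ipw_ate(a, y, p)       # close to 2
ate_noisy = ipw_ate(a, y_err, p)   # still close to 2: naive analysis stays consistent
```

The mean-zero error simply inflates the variance of the estimate; the binary-outcome bias the paper derives in closed form does not appear in this continuous case.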

  16. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    NASA Technical Reports Server (NTRS)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
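The standard rounding model behind such bounds can be sketched by hand for a single expression x*y + z evaluated in binary32: each rounded operation contributes a relative error of at most the unit roundoff u = 2^-24, giving the a priori bound u|xy|(1+u) + u|xy+z|. This is an illustrative hand derivation, not PRECiSA's certified analysis:

```python
from fractions import Fraction

import numpy as np

U = Fraction(1, 2 ** 24)   # unit roundoff of binary32

def fl_expr(x, y, z):
    """binary32 evaluation of x*y + z (two rounded operations)."""
    return float(np.float32(x) * np.float32(y) + np.float32(z))

def error_bound(x, y, z):
    """A priori round-off bound, assuming no overflow/underflow:
    |error| <= U*|x*y|*(1 + U) + U*|x*y + z|."""
    prod = Fraction(x) * Fraction(y)
    total = prod + Fraction(z)
    return U * abs(prod) * (1 + U) + U * abs(total)

# Round the decimal literals to binary32 once, so the analysis starts
# from exactly representable inputs.
x, y, z = (float(np.float32(v)) for v in (0.1, 0.3, -0.02))
exact = Fraction(x) * Fraction(y) + Fraction(z)     # exact rational value
err = abs(Fraction(fl_expr(x, y, z)) - exact)       # true round-off error, exact
```

Because binary32 values embed exactly in Python floats and Fractions, both the true error and the bound are computed exactly, and the bound provably dominates the error for these inputs.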

  17. Transfer of dipolar gas through the discrete localized mode.

    PubMed

    Bai, Xiao-Dong; Zhang, Ai-Xia; Xue, Ju-Kui

    2013-12-01

By considering the discrete nonlinear Schrödinger model with dipole-dipole interactions for a dipolar condensate, the existence, types, stability, and dynamics of localized modes in a nonlinear lattice are discussed. It is found that the contact interaction and the dipole-dipole interactions play important roles in determining the existence, type, and stability of the localized modes. Because of the coupled effects of the contact interaction and the dipole-dipole interactions, rich localized modes with varied stability properties can exist: when the contact interaction is large and the dipole-dipole interactions are small, a discrete bright breather occurs. In this case, while the on-site interaction can stabilize the discrete breather, the dipole-dipole interactions will destabilize it; when both the contact interaction and the dipole-dipole interactions are large, a discrete kink appears. In this case, both the on-site interaction and the dipole-dipole interactions can stabilize the discrete kink, but the discrete kink is more unstable than the ordinary discrete breather. The predicted results provide deep insight into the dynamics of blocking, filtering, and transfer of the norm in nonlinear lattices for dipolar condensates.
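A minimal sketch of the kind of lattice model described: a discrete nonlinear Schrödinger equation with an on-site contact term and a 1/|n-m|^3 dipole-dipole sum, integrated with a fixed-step RK4 scheme. All coefficients and the initial profile are illustrative; the check is simply that the norm, whose blocking and transfer the abstract discusses, is conserved by the dynamics:

```python
import numpy as np

N = 41
sites = np.arange(N)
C, g, d = 1.0, -2.0, 0.3   # hopping, contact, dipole-dipole strengths (illustrative)

# Dipole-dipole kernel 1/|n-m|^3 with no self-interaction.
dist = np.abs(sites[:, None] - sites[None, :]).astype(float)
np.fill_diagonal(dist, np.inf)
K = 1.0 / dist ** 3

def rhs(psi):
    """i dpsi_n/dt = -C(psi_{n+1}+psi_{n-1}) + g|psi_n|^2 psi_n
                     + d * sum_m |psi_m|^2 / |n-m|^3 * psi_n."""
    hop = np.roll(psi, 1) + np.roll(psi, -1)
    hop[0] -= psi[-1]
    hop[-1] -= psi[0]      # open (non-periodic) chain
    return -1j * (-C * hop + g * np.abs(psi) ** 2 * psi
                  + d * (K @ np.abs(psi) ** 2) * psi)

# Localized initial mode at the chain centre, normalized to unit norm.
psi = np.exp(-0.5 * ((sites - N // 2) / 2.0) ** 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2))

dt, steps = 0.001, 1000
norm0 = np.sum(np.abs(psi) ** 2)
for _ in range(steps):     # classic fourth-order Runge-Kutta step
    k1 = rhs(psi)
    k2 = rhs(psi + 0.5 * dt * k1)
    k3 = rhs(psi + 0.5 * dt * k2)
    k4 = rhs(psi + dt * k3)
    psi = psi + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
norm_drift = abs(np.sum(np.abs(psi) ** 2) - norm0)
```

Stability analysis of the breather and kink branches (the abstract's subject) would additionally require linearizing about stationary solutions; this sketch only propagates a localized profile.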

  18. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    NASA Astrophysics Data System (ADS)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively.
For misregistration, errors in organ activity estimations were linear in the shift for both the QSPECT and QPlanar methods. QPlanar was less sensitive to object definition perturbations than QSPECT, especially for dilation and erosion cases. Up to 1 voxel misregistration or misdefinition resulted in up to 8% error in organ activity estimates, with the largest errors for small or low uptake organs. Both types of VOI definition errors produced larger errors in activity estimates for a small and low uptake organ (i.e. -7.5% to 5.3% for the left kidney) than for a large and high uptake organ (i.e. -2.9% to 2.1% for the liver). We observed that misregistration generally had larger effects than misdefinition, with errors ranging from -7.2% to 8.4%. The different imaging methods evaluated responded differently to the errors from misregistration and misdefinition. We found that QSPECT was more sensitive to misdefinition errors, but less sensitive to misregistration errors, as compared to the QPlanar method. Thus, sensitivity to VOI definition errors should be an important criterion in evaluating quantitative imaging methods.
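The misregistration experiment described above can be mimicked in a few lines: shift a toy activity image against a fixed VOI and record the fractional change in the estimated activity. The phantom, the whole-voxel shift, and the function name are assumptions for illustration only (the study used sub-voxel shifts of reconstructed SPECT images).

```python
import numpy as np

def activity_error_vs_shift(image, voi, shifts):
    """Fractional change in the VOI activity estimate for whole-voxel shifts."""
    ref = image[voi].sum()                    # unshifted activity estimate
    errors = []
    for s in shifts:
        shifted = np.roll(image, s, axis=0)   # crude integer-voxel misregistration
        errors.append((shifted[voi].sum() - ref) / ref)
    return np.array(errors)
```

On a smoothly varying toy image the error grows linearly with the shift, consistent with the linear dependence reported above.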

  19. An accurate global potential energy surface, dipole moment surface, and rovibrational frequencies for NH3

    NASA Astrophysics Data System (ADS)

    Huang, Xinchuan; Schwenke, David W.; Lee, Timothy J.

    2008-12-01

    A global potential energy surface (PES) that includes short and long range terms has been determined for the NH3 molecule. The singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations and the internally contracted averaged coupled-pair functional electronic structure methods have been used in conjunction with very large correlation-consistent basis sets, including diffuse functions. Extrapolation to the one-particle basis set limit was performed and core correlation and scalar relativistic contributions were included directly, while the diagonal Born-Oppenheimer correction was added. Our best purely ab initio PES, denoted "mixed," is constructed from two PESs which differ in whether the ic-ACPF higher-order correlation correction was added or not. Rovibrational transition energies computed from the mixed PES agree well with experiment and the best previous theoretical studies, but most importantly the quality does not deteriorate even up to 10 300 cm-1 above the zero-point energy (ZPE). The mixed PES was improved further by empirical refinement using the most reliable J = 0-2 rovibrational transitions in the HITRAN 2004 database. Agreement between high-resolution experiment and rovibrational transition energies computed from our refined PES for J = 0-6 is excellent. Indeed, the root mean square (rms) error for 13 HITRAN 2004 bands for J = 0-2 is 0.023 cm-1 and that for each band is always ⩽ 0.06 cm-1. For J = 3-5 the rms error is always ⩽ 0.15 cm-1. This agreement means that transition energies computed with our refined PES should be useful in the assignment of new high-resolution NH3 spectra and in correcting mistakes in previous assignments. Ideas for further improvements to our refined PES and for extension to other isotopologues are discussed.

  20. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    PubMed

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel scheme for estimating Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
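For context, a minimal sketch of the classical error reduction (phase retrieval) iteration that the method builds on: alternate between enforcing a known Fourier magnitude and re-imposing known pixel values, leaving missing pixels free. The interface and initialization are our assumptions; the paper's patch selection and magnitude estimation steps are not shown.

```python
import numpy as np

def error_reduction(magnitude, known, known_vals, n_iter=200, seed=0):
    """ER iteration: alternate a Fourier-magnitude constraint with known pixels.

    magnitude : target |FFT| of the patch (2D array)
    known     : boolean mask of known pixels
    known_vals: intensities at the known pixels
    """
    rng = np.random.default_rng(seed)
    x = rng.random(magnitude.shape)                # random start for missing areas
    x[known] = known_vals
    for _ in range(n_iter):
        F = np.fft.fft2(x)
        F = magnitude * np.exp(1j * np.angle(F))   # impose the measured magnitude
        x = np.real(np.fft.ifft2(F))               # back to the image domain
        x[known] = known_vals                      # re-impose known intensities
    return x
```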

  1. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    PubMed

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
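The core claim above, that asymmetric error costs shift the optimal estimate away from its maximum likelihood value, can be demonstrated numerically. The quadratic cost with different weights for positive and negative errors is our toy construction, not the paper's model; for a symmetric Gaussian uncertainty, the cost-minimizing estimate lands visibly below the mean.

```python
import numpy as np

def optimal_estimate(samples, cost_pos=4.0, cost_neg=1.0):
    """Grid-search the estimate minimizing mean asymmetric quadratic error cost."""
    grid = np.linspace(samples.min(), samples.max(), 801)
    err = grid[:, None] - samples[None, :]              # estimate minus true value
    cost = np.where(err > 0, cost_pos, cost_neg) * err ** 2
    return grid[cost.mean(axis=1).argmin()]

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 4000)   # symmetric uncertainty about the parameter
ml_estimate = samples.mean()           # maximum-likelihood-style point estimate
est = optimal_estimate(samples)        # shifted downward: overshooting costs more
```

Because positive errors are penalized four times as heavily here, the optimum sits well below the sample mean, which is the systematic deviation the abstract describes.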

  2. Quantitative characterization of non-classic polarization of cations on clay aggregate stability.

    PubMed

    Hu, Feinan; Li, Hang; Liu, Xinmin; Li, Song; Ding, Wuquan; Xu, Chenyang; Li, Yue; Zhu, Longhui

    2015-01-01

    Soil particle interactions are strongly influenced by the concentration, valence and ion species and the pH of the bulk solution, which will also affect aggregate stability and particle transport. In this study, we investigated clay aggregate stability in the presence of different alkali ions (Li+, Na+, K+, and Cs+) at concentrations from 10^-5 to 10^-1 mol L^-1. Strong specific ion effects on clay aggregate stability were observed, and showed the order Cs+>K+>Na+>Li+. We found that it was not the effects of ion size, hydration, and dispersion forces in the cation-surface interactions but strong non-classic polarization of adsorbed cations that resulted in these specific effects. In this study, the non-classic dipole moments of each cation species resulting from the non-classic polarization were estimated. By comparing non-classic dipole moments with classic values, the observed dipole moments of adsorbed cations were up to 10^4 times larger than the classic values for the same cation. The observed non-classic dipole moments sharply increased with decreasing electrolyte concentration. We conclude that strong non-classic polarization could significantly suppress the thickness of the diffuse layer, thereby weakening the electric field near the clay surface and resulting in improved clay aggregate stability. Even though we only demonstrated specific ion effects on aggregate stability with several alkali ions, our results indicate that these effects could be universally important in soil aggregate stability.

  3. Quantitative Characterization of Non-Classic Polarization of Cations on Clay Aggregate Stability

    PubMed Central

    Hu, Feinan; Li, Hang; Liu, Xinmin; Li, Song; Ding, Wuquan; Xu, Chenyang; Li, Yue; Zhu, Longhui

    2015-01-01

    Soil particle interactions are strongly influenced by the concentration, valence and ion species and the pH of the bulk solution, which will also affect aggregate stability and particle transport. In this study, we investigated clay aggregate stability in the presence of different alkali ions (Li+, Na+, K+, and Cs+) at concentrations from 10^−5 to 10^−1 mol L^−1. Strong specific ion effects on clay aggregate stability were observed, and showed the order Cs+>K+>Na+>Li+. We found that it was not the effects of ion size, hydration, and dispersion forces in the cation–surface interactions but strong non-classic polarization of adsorbed cations that resulted in these specific effects. In this study, the non-classic dipole moments of each cation species resulting from the non-classic polarization were estimated. By comparing non-classic dipole moments with classic values, the observed dipole moments of adsorbed cations were up to 10^4 times larger than the classic values for the same cation. The observed non-classic dipole moments sharply increased with decreasing electrolyte concentration. We conclude that strong non-classic polarization could significantly suppress the thickness of the diffuse layer, thereby weakening the electric field near the clay surface and resulting in improved clay aggregate stability. Even though we only demonstrated specific ion effects on aggregate stability with several alkali ions, our results indicate that these effects could be universally important in soil aggregate stability. PMID:25874864

  4. Estimating population genetic parameters and comparing model goodness-of-fit using DNA sequences with error

    PubMed Central

    Liu, Xiaoming; Fu, Yun-Xin; Maxwell, Taylor J.; Boerwinkle, Eric

    2010-01-01

    It is known that sequencing error can bias estimation of evolutionary or population genetic parameters. This problem is more prominent in deep resequencing studies because of their large sample size n and the higher probability of error at each nucleotide site. We propose a new method based on the composite likelihood of the observed SNP configurations to infer population mutation rate θ = 4Neμ, population exponential growth rate R, and error rate ɛ, simultaneously. Using simulation, we show the combined effects of the parameters θ, n, ɛ, and R on the accuracy of parameter estimation. We compared our maximum composite likelihood estimator (MCLE) of θ with other θ estimators that take into account the error. The results show the MCLE performs well when the sample size is large or the error rate is high. Using parametric bootstrap, composite likelihood can also be used as a statistic for testing the model goodness-of-fit of the observed DNA sequences. The MCLE method is applied to sequence data on the ANGPTL4 gene in 1832 African American and 1045 European American individuals. PMID:19952140

  5. A preliminary estimate of geoid-induced variations in repeat orbit satellite altimeter observations

    NASA Technical Reports Server (NTRS)

    Brenner, Anita C.; Beckley, B. D.; Koblinsky, C. J.

    1990-01-01

    Altimeter satellites are often maintained in a repeating orbit to facilitate the separation of sea-height variations from the geoid. However, atmospheric drag and solar radiation pressure cause a satellite orbit to drift. For Geosat this drift causes the ground track to vary by ±1 km about the nominal repeat path. This misalignment leads to an error in the estimates of sea surface height variations because of the local slope in the geoid. This error has been estimated globally for the Geosat Exact Repeat Mission using a mean sea surface constructed from Geos 3 and Seasat altimeter data. Over most of the ocean the geoid gradient is small, and the repeat-track misalignment leads to errors of only 1 to 2 cm. However, in the vicinity of trenches, continental shelves, islands, and seamounts, errors can exceed 20 cm. The estimated error is compared with direct estimates from Geosat altimetry, and a strong correlation is found in the vicinity of the Tonga and Aleutian trenches. This correlation increases as the orbit error is reduced because of the increased signal-to-noise ratio.
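The error mechanism described above reduces to a one-line product: the apparent sea-surface-height error equals the cross-track offset times the local geoid slope. The numbers below are illustrative, not Geosat values.

```python
def repeat_track_error_m(offset_m, geoid_slope_m_per_km):
    """Apparent height error (m) from a cross-track offset and local geoid slope."""
    return offset_m * geoid_slope_m_per_km / 1000.0

open_ocean = repeat_track_error_m(1000.0, 0.02)   # ~2 cm for a gentle 2 cm/km slope
near_trench = repeat_track_error_m(1000.0, 0.25)  # >20 cm where the geoid is steep
```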

  6. Changes in the electric dipole vector of human serum albumin due to complexing with fatty acids.

    PubMed Central

    Scheider, W; Dintzis, H M; Oncley, J L

    1976-01-01

    The magnitude of the electric dipole vector of human serum albumin, as measured by the dielectric increment of the isoionic solution, is found to be a sensitive, monotonic indicator of the number of moles (up to at least 5) of long chain fatty acid complexed. The sensitivity is about three times as great as it is in bovine albumin. New methods of analysis of the frequency dispersion of the dielectric constant were developed to ascertain if molecular shape changes also accompany the complexing with fatty acid. Direct two-component rotary diffusion constant analysis is found to be too strongly affected by cross modulation between small systematic errors and physically significant data components to be a reliable measure of structural modification. Multicomponent relaxation profiles are more useful as recognition patterns for structural comparisons, but the equations involved are ill-conditioned and solutions based on standard least-squares regression contain mathematical artifacts which mask the physically significant spectrum. By constraining the solution to non-negative coefficients, the magnitude of the artifacts is reduced to well below the magnitudes of the spectral components. Profiles calculated in this way show no evidence of significant dipole direction or molecular shape change as the albumin is complexed with 1 mol of fatty acid. In these experiments albumin was defatted by incubation with adipose tissue at physiological pH, which avoids passing the protein through the pH of the N-F transition usually required in defatting. Addition of fatty acid from solution in small amounts of ethanol appears to form a complex indistinguishable from the "native" complex. PMID:6087

  7. Solid core dipoles and switching power supplies: Lower cost light sources?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benesch, Jay; Philip, Sarin

    As a result of improvements in power semiconductors, moderate frequency switching supplies can now provide the hundreds of amps typically required by accelerators with zero-to-peak noise in the kHz region ~ 0.06% in current or voltage mode. Modeling was undertaken using a finite electromagnetic program to determine if eddy currents induced in the solid steel of CEBAF magnets and small supplemental additions would bring the error fields down to the 5 ppm level needed for beam quality. The expected maximum field of the magnet under consideration is 0.85 T and the DC current required to produce that field is used in the calculations. An additional 0.1% current ripple is added to the DC current at discrete frequencies 360 Hz, 720 Hz or 7200 Hz. Over the region of the pole within 0.5% of the central integrated BdL the resulting AC field changes can be reduced to less than 1% of the 0.1% input ripple for all frequencies, and a sixth of that at 7200 Hz. Doubling the current, providing 1.5 T central field, yielded the same fractional reduction in ripple at the beam for the cases checked. A small dipole was measured at 60, 120, 360 and 720 Hz in two conditions and the results compared to the larger model for the latter two frequencies with surprisingly good agreement. Thus, for light sources with aluminum vacuum vessels and full energy linac injection, the combination of solid core dipoles and switching power supplies may result in significant cost savings.

  8. Impact of transport model errors on the global and regional methane emissions estimated by inverse modelling

    DOE PAGES

    Locatelli, R.; Bousquet, P.; Chevallier, F.; ...

    2013-10-08

    A modelling experiment has been conceived to assess the impact of transport model errors on methane emissions estimated in an atmospheric inversion system. Synthetic methane observations, obtained from 10 different model outputs from the international TransCom-CH4 model inter-comparison exercise, are combined with a prior scenario of methane emissions and sinks, and integrated into the three-component PYVAR-LMDZ-SACS (PYthon VARiational-Laboratoire de Météorologie Dynamique model with Zooming capability-Simplified Atmospheric Chemistry System) inversion system to produce 10 different methane emission estimates at the global scale for the year 2005. The same methane sinks, emissions and initial conditions have been applied to produce the 10 synthetic observation datasets. The same inversion set-up (statistical errors, prior emissions, inverse procedure) is then applied to derive flux estimates by inverse modelling. Consequently, only differences in the modelling of atmospheric transport may cause differences in the estimated fluxes. In this framework, we show that transport model errors lead to a discrepancy of 27 Tg yr -1 at the global scale, representing 5% of total methane emissions. At continental and annual scales, transport model errors are proportionally larger than at the global scale, with errors ranging from 36 Tg yr -1 in North America to 7 Tg yr -1 in Boreal Eurasia (from 23 to 48%, respectively). At the model grid-scale, the spread of inverse estimates can reach 150% of the prior flux. Therefore, transport model errors contribute significantly to overall uncertainties in emission estimates by inverse modelling, especially when small spatial scales are examined. Sensitivity tests have been carried out to estimate the impact of the measurement network and the advantage of higher horizontal resolution in transport models. The large differences found between methane flux estimates inferred in these different configurations call into question the consistency of transport model errors in current inverse systems.

  9. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    NASA Astrophysics Data System (ADS)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the possibility that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with our approach, which we call the relaxed filtering method and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
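A compact sketch of the classical rescaled-range estimator (the H_R of the abstract): compute R/S over windows of doubling size and fit the log-log slope. The window sizes and fitting details are our assumptions, not the authors' exact procedure.

```python
import numpy as np

def hurst_rs(x, min_window=8):
    """Estimate the Hurst coefficient via the classical rescaled range (R/S)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    windows, rs = [], []
    w = min_window
    while w <= n // 2:
        ratios = []
        for start in range(0, n - w + 1, w):
            seg = x[start:start + w]
            dev = np.cumsum(seg - seg.mean())     # cumulative departures from mean
            r = dev.max() - dev.min()             # range of the departures
            s = seg.std()                         # segment standard deviation
            if s > 0:
                ratios.append(r / s)
        windows.append(w)
        rs.append(np.mean(ratios))
        w *= 2
    # H is the slope of log(R/S) against log(window size)
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope
```

For uncorrelated noise H is near 0.5; H > 0.5 signals the long-range persistence discussed above.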

  10. Beam debunching due to ISR-induced energy diffusion

    DOE PAGES

    Yampolsky, Nikolai A.; Carlsten, Bruce E.

    2017-06-20

    One of the options for increasing the longitudinal coherence of X-ray free electron lasers (XFELs) is seeding with a microbunched electron beam. Several schemes leading to a significant amplitude of the beam bunching at X-ray wavelengths were recently proposed. All these schemes rely on beam optics with several magnetic dipoles. While the beam passes through a dipole, its energy spread increases due to quantum effects of synchrotron radiation. As a result, the bunching factor at small wavelengths is reduced since electrons having different energies follow different trajectories in the bend. We rigorously calculate the reduction in the bunching factor due to incoherent synchrotron radiation while the beam travels in an arbitrary beamline. Finally, we apply the general results to estimate the reduction of harmonic current in common schemes proposed for XFEL seeding.

  11. Methods for the evaluation of quench temperature profiles and their application for LHC superconducting short dipole magnets

    NASA Astrophysics Data System (ADS)

    Sanfilippo, S.; Siemko, A.

    2000-08-01

    This paper presents a study of the thermal effects on quench performance for several Large Hadron Collider (LHC) single aperture short dipole models. The analysis is based on the temperature profile in a superconducting magnet evaluated after a quench. Peak temperatures and temperature gradients in the magnet coil are estimated for different thicknesses of the insulation layer between the quench heaters and the coil and for different powering and protection parameters. The results show a clear correlation between the thermo-mechanical response of the magnet and quench performance. They also show that optimising the position of the quench heaters can reduce the loss of training performance caused by the coexistence of a mechanically weak region and a local temperature rise.

  12. Accurate electric multipole moment, static polarizability and hyperpolarizability derivatives for N2

    NASA Astrophysics Data System (ADS)

    Maroulis, George

    2003-02-01

    We report accurate values of the electric moments, static polarizabilities, hyperpolarizabilities and their respective derivatives for N2. Our values have been extracted from finite-field Møller-Plesset perturbation theory and coupled cluster calculations performed with carefully designed basis sets. A large [15s12p9d7f] basis set consisting of 290 CGTF is expected to provide reference self-consistent-field values of near-Hartree-Fock quality for all properties. The Hartree-Fock limit for the mean hyperpolarizability is estimated at γ¯ = 715 ± 4 e^4 a_0^4 E_h^-3 at the experimental bond length R_e = 2.07432 a_0. Accurate estimates of the electron correlation effects were obtained with a [10s7p6d4f] basis set. Our best values are Θ = -1.1258 e a_0^2 for the quadrupole and Φ = -6.75 e a_0^4 for the hexadecapole moment, ᾱ = 11.7709 and Δα = 4.6074 e^2 a_0^2 E_h^-1 for the mean and the anisotropy of the dipole polarizability, C¯ = 41.63 e^2 a_0^4 E_h^-1 for the mean quadrupole polarizability and γ¯ = 927 e^4 a_0^4 E_h^-3 for the dipole hyperpolarizability. The latter value is quite close to Shelton's experimental estimate of 917 ± 5 e^4 a_0^4 E_h^-3 [D. P. Shelton, Phys. Rev. A 42, 2578 (1990)]. The R dependence of all properties has been calculated with a [7s5p4d2f] basis set. At the CCSD(T) level of theory the dipole polarizability varies around R_e as ᾱ(R)/(e^2 a_0^2 E_h^-1) = 11.8483 + 6.1758(R-R_e) + 0.9191(R-R_e)^2 - 0.8212(R-R_e)^3 - 0.0006(R-R_e)^4 and Δα(R)/(e^2 a_0^2 E_h^-1) = 4.6032 + 7.0301(R-R_e) + 1.9340(R-R_e)^2 - 0.5708(R-R_e)^3 + 0.1949(R-R_e)^4. For the Cartesian components and the mean of γ_αβγδ, (dγ_zzzz/dR)_e = 1398, (dγ_xxxx/dR)_e = 867, (dγ_xxzz/dR)_e = 317, and (dγ¯/dR)_e = 994 e^4 a_0^3 E_h^-3. For the quadrupole polarizability C_αβ,γδ, we report (dC_zz,zz/dR)_e = 19.20, (dC_xz,xz/dR)_e = 16.55, (dC_xx,xx/dR)_e = 10.20, and (dC¯/dR)_e = 23.31 e^2 a_0^3 E_h^-1. At the MP2 level of theory, for the components of the dipole-octopole polarizability (E_α,βγδ) and the mean dipole-dipole-octopole hyperpolarizability B¯ we have obtained (dE_z,zzz/dR)_e = 36.71, (dE_x,xxx/dR)_e = -12.94 e^2 a_0^3 E_h^-1, and (dB¯/dR)_e = -108 e^3 a_0^3 E_h^-2.
In comparison with some other 14-electron systems, N2 appears to be less (hyper)polarizable than most: near the Hartree-Fock limit we observe ᾱ(N2) < ᾱ(CO) < ᾱ(HCN) < ᾱ(BF) < ᾱ(HCCH) and γ¯(N2) < γ¯(CO) < γ¯(HCN) < γ¯(HCCH) < γ¯(BF).
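The reported CCSD(T) expansion of the mean dipole polarizability around the equilibrium bond length can be evaluated directly; the snippet below uses the coefficients quoted above (atomic units assumed throughout, function name ours).

```python
def alpha_mean(R, Re=2.07432):
    """Mean dipole polarizability of N2 (atomic units) from the quoted expansion."""
    d = R - Re
    return 11.8483 + 6.1758 * d + 0.9191 * d**2 - 0.8212 * d**3 - 0.0006 * d**4
```

At R = R_e this returns the quoted 11.8483 e^2 a_0^2 E_h^-1, and its slope there is the quoted first derivative 6.1758.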

  13. Ensemble-type numerical uncertainty information from single model integrations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter

    2015-07-01

    We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.

  14. Axioms of adaptivity

    PubMed Central

    Carstensen, C.; Feischl, M.; Page, M.; Praetorius, D.

    2014-01-01

    This paper aims first at a simultaneous axiomatic presentation of the proof of optimal convergence rates for adaptive finite element methods and second at some refinements of particular questions like the avoidance of (discrete) lower bounds, inexact solvers, inhomogeneous boundary data, or the use of equivalent error estimators. Solely four axioms guarantee the optimality in terms of the error estimators. Compared to the state of the art in the contemporary literature, the improvements of this article can be summarized as follows: First, a general framework is presented which covers the existing literature on optimality of adaptive schemes. The abstract analysis covers linear as well as nonlinear problems and is independent of the underlying finite element or boundary element method. Second, efficiency of the error estimator is neither needed to prove convergence nor quasi-optimal convergence behavior of the error estimator. In this paper, efficiency exclusively characterizes the approximation classes involved in terms of the best-approximation error and data resolution and so the upper bound on the optimal marking parameters does not depend on the efficiency constant. Third, some general quasi-Galerkin orthogonality is not only sufficient, but also necessary for the R-linear convergence of the error estimator, which is a fundamental ingredient in the current quasi-optimality analysis due to Stevenson 2007. Finally, the general analysis allows for equivalent error estimators and inexact solvers as well as different non-homogeneous and mixed boundary conditions. PMID:25983390

  15. Modified fast frequency acquisition via adaptive least squares algorithm

    NASA Technical Reports Server (NTRS)

    Kumar, Rajendra (Inventor)

    1992-01-01

    A method and the associated apparatus for estimating the amplitude, frequency, and phase of a signal of interest are presented. The method comprises the following steps: (1) inputting the signal of interest; (2) generating a reference signal with adjustable amplitude, frequency and phase at an output thereof; (3) mixing the signal of interest with the reference signal and a signal 90 deg out of phase with the reference signal to provide a pair of quadrature sample signals comprising respectively a difference between the signal of interest and the reference signal and a difference between the signal of interest and the signal 90 deg out of phase with the reference signal; (4) using the pair of quadrature sample signals to compute estimates of the amplitude, frequency, and phase of an error signal comprising the difference between the signal of interest and the reference signal employing a least squares estimation; (5) adjusting the amplitude, frequency, and phase of the reference signal from the numerically controlled oscillator in a manner which drives the error signal towards zero; and (6) outputting the estimates of the amplitude, frequency, and phase of the error signal in combination with the reference signal to produce a best estimate of the amplitude, frequency, and phase of the signal of interest. The preferred method includes the step of providing the error signal as a real time confidence measure as to the accuracy of the estimates wherein the closer the error signal is to zero, the higher the probability that the estimates are accurate. A matrix in the estimation algorithm provides an estimate of the variance of the estimation error.
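The least-squares step of the scheme above can be sketched as a linear fit of amplitude and phase against quadrature basis functions at the current reference frequency; the frequency update and loop control described in the patent are omitted, and the function name is ours.

```python
import numpy as np

def fit_amp_phase(t, samples, freq):
    """Least-squares amplitude and phase of samples ~ A*cos(2*pi*freq*t + phi)."""
    # A*cos(w t + phi) = (A cos phi) * cos(w t) - (A sin phi) * sin(w t),
    # so the fit is linear in the two coefficients a = A cos phi, b = A sin phi.
    basis = np.column_stack([np.cos(2 * np.pi * freq * t),
                             -np.sin(2 * np.pi * freq * t)])
    (a, b), *_ = np.linalg.lstsq(basis, samples, rcond=None)
    return np.hypot(a, b), np.arctan2(b, a)
```

In the full scheme these estimates would drive the numerically controlled oscillator so that the residual error signal tends to zero.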

  16. Genotyping-by-sequencing for estimating relatedness in nonmodel organisms: Avoiding the trap of precise bias.

    PubMed

    Attard, Catherine R M; Beheregaray, Luciano B; Möller, Luciana M

    2018-05-01

    There has been remarkably little attention to using the high resolution provided by genotyping-by-sequencing (i.e., RADseq and similar methods) for assessing relatedness in wildlife populations. A major hurdle is the genotyping error, especially allelic dropout, often found in this type of data that could lead to downward-biased, yet precise, estimates of relatedness. Here, we assess the applicability of genotyping-by-sequencing for relatedness inferences given its relatively high genotyping error rate. Individuals of known relatedness were simulated under genotyping error, allelic dropout and missing data scenarios based on an empirical ddRAD data set, and their true relatedness was compared to that estimated by seven relatedness estimators. We found that an estimator chosen through such analyses can circumvent the influence of genotyping error, with the estimator of Ritland (Genetics Research, 67, 175) shown to be unaffected by allelic dropout and to be the most accurate when there is genotyping error. We also found that the choice of estimator should not rely solely on the strength of correlation between estimated and true relatedness as a strong correlation does not necessarily mean estimates are close to true relatedness. We also demonstrated how even a large SNP data set with genotyping error (allelic dropout or otherwise) or missing data still performs better than a perfectly genotyped microsatellite data set of tens of markers. The simulation-based approach used here can be easily implemented by others on their own genotyping-by-sequencing data sets to confirm the most appropriate and powerful estimator for their data. © 2017 John Wiley & Sons Ltd.
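
    The downward-bias mechanism described above can be illustrated with a toy simulation (not the paper's pipeline; the locus count, dropout rate, and simple identity-by-state score are all assumptions): allelic dropout reads heterozygotes as homozygotes, which lowers the mean similarity of truly related (parent-offspring) pairs, while the large number of loci keeps the biased estimate precise.

```python
import random

random.seed(1)
LOCI, PAIRS, DROPOUT = 500, 200, 0.3   # assumed toy values

def genotype():
    """Random biallelic genotype, allele frequency 0.5."""
    return sorted(random.choice("AB") for _ in range(2))

def offspring(parent):
    """Child inherits one parental allele; the other is random."""
    return sorted([random.choice(parent), random.choice("AB")])

def drop(g):
    """Allelic dropout: a heterozygote may be read as a homozygote."""
    if g[0] != g[1] and random.random() < DROPOUT:
        a = random.choice(g)
        return [a, a]
    return list(g)

def ibs(g1, g2):
    """Shared-allele (identity-by-state) score in [0, 1] for sorted genotypes."""
    return sum(a == b for a, b in zip(g1, g2)) / 2

def pair_score(with_dropout):
    s = 0.0
    for _ in range(LOCI):
        parent = genotype()
        child = offspring(parent)
        if with_dropout:
            parent, child = drop(parent), drop(child)
        s += ibs(parent, child)
    return s / LOCI

true_mean = sum(pair_score(False) for _ in range(PAIRS)) / PAIRS
noisy_mean = sum(pair_score(True) for _ in range(PAIRS)) / PAIRS
# noisy_mean sits clearly below true_mean: precise, but biased downward
```

    Swapping in different relatedness estimators at the `ibs` step is exactly the kind of simulation-based comparison the abstract recommends.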

  17. Relative-Error-Covariance Algorithms

    NASA Technical Reports Server (NTRS)

    Bierman, Gerald J.; Wolff, Peter J.

    1991-01-01

    Two algorithms compute the error covariance of the difference between optimal estimates of the state of a discrete linear system, based on data acquired during overlapping or disjoint intervals. This provides a quantitative measure of the mutual consistency or inconsistency of the state estimates. The relative-error-covariance concept is applied to determine the degree of correlation between trajectories calculated from two overlapping sets of measurements and to construct a real-time test of the consistency of state estimates based upon recently acquired data.

  18. Effects of Measurement Errors on Individual Tree Stem Volume Estimates for the Austrian National Forest Inventory

    Treesearch

    Ambros Berger; Thomas Gschwantner; Ronald E. McRoberts; Klemens Schadauer

    2014-01-01

    National forest inventories typically estimate individual tree volumes using models that rely on measurements of predictor variables such as tree height and diameter, both of which are subject to measurement error. The aim of this study was to quantify the impacts of these measurement errors on the uncertainty of the model-based tree stem volume estimates. The impacts...

  19. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation error (e.g., antenna emissivity, nonlinearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.

  20. Probing lipid membrane electrostatics

    NASA Astrophysics Data System (ADS)

    Yang, Yi

    The electrostatic properties of lipid bilayer membranes play a significant role in many biological processes. Atomic force microscopy (AFM) is highly sensitive to membrane surface potential in electrolyte solutions. With fully characterized probe tips, AFM can perform quantitative electrostatic analysis of lipid membranes. Electrostatic interactions between silicon nitride probes and a supported zwitterionic dioleoylphosphatidylcholine (DOPC) bilayer with a variable fraction of anionic dioleoylphosphatidylserine (DOPS) were measured by AFM. Classical Gouy-Chapman theory was used to model the membrane electrostatics. The nonlinear Poisson-Boltzmann equation was solved numerically with the finite element method to provide the potential distribution around the AFM tips. Theoretical tip-sample electrostatic interactions were calculated as the surface integral of both the Maxwell and osmotic stress tensors over the tip surface. The measured forces were interpreted with the theoretical forces, and the resulting surface charge densities of the membrane surfaces were in quantitative agreement with the Gouy-Chapman-Stern model of membrane charge regulation. It was demonstrated that the AFM can quantitatively detect membrane surface potential at a separation of several screening lengths, and that the AFM probe only perturbs the membrane surface potential by <2%. One important application of this technique is to estimate the dipole density of lipid membranes. Electrostatic analysis of DOPC lipid bilayers with the AFM reveals a repulsive force between the negatively charged probe tips and the zwitterionic lipid bilayers. This unexpected interaction has been analyzed quantitatively to reveal that the repulsion is due to a weak external field created by the internal membrane dipole moment. The analysis yields a dipole moment of 1.5 Debye per lipid with a dipole potential of +275 mV for supported DOPC membranes.
This new ability to quantitatively measure the membrane dipole density in a noninvasive manner will be useful in identifying the biological effects of the dipole potential. Finally, heterogeneous model membranes were studied with fluid electric force microscopy (FEFM). Electrostatic mapping was demonstrated with 50 nm resolution. The capabilities of quantitative electrostatic measurement and lateral charge density mapping make AFM a unique and powerful probe of membrane electrostatics.
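
    For orientation, the "screening lengths" mentioned above are Debye lengths; a back-of-envelope sketch for a symmetric 1:1 electrolyte at room temperature (assumed conditions) gives the relevant scale:

```python
import math

# Physical constants and assumed conditions (water, 298 K, 1:1 electrolyte)
EPS0 = 8.8541878128e-12    # vacuum permittivity, F/m
EPS_R = 78.5               # relative permittivity of water
KB, T = 1.380649e-23, 298.0
E = 1.602176634e-19        # elementary charge, C
NA = 6.02214076e23         # Avogadro constant, 1/mol

def debye_length(conc_molar):
    """Debye screening length (m) for a symmetric 1:1 electrolyte."""
    c = conc_molar * 1000.0 * NA          # number density per species, 1/m^3
    return math.sqrt(EPS0 * EPS_R * KB * T / (2 * c * E * E))

lam = debye_length(0.1)    # ~1 nm at 100 mM
```

    At physiological ionic strengths, "several screening lengths" therefore corresponds to only a few nanometres of tip-sample separation.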

  1. Modeling Multiplicative Error Variance: An Example Predicting Tree Diameter from Stump Dimensions in Baldcypress

    Treesearch

    Bernard R. Parresol

    1993-01-01

    In the context of forest modeling, it is often reasonable to assume a multiplicative heteroscedastic error structure to the data. Under such circumstances ordinary least squares no longer provides minimum variance estimates of the model parameters. Through study of the error structure, a suitable error variance model can be specified and its parameters estimated. This...

  2. Highly accurate potential energy surface, dipole moment surface, rovibrational energy levels, and infrared line list for {sup 32}S{sup 16}O{sub 2} up to 8000 cm{sup −1}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Xinchuan, E-mail: Xinchuan.Huang-1@nasa.gov, E-mail: Timothy.J.Lee@nasa.gov; Schwenke, David W., E-mail: David.W.Schwenke@nasa.gov; Lee, Timothy J., E-mail: Xinchuan.Huang-1@nasa.gov, E-mail: Timothy.J.Lee@nasa.gov

    2014-03-21

    A purely ab initio potential energy surface (PES) was refined with selected {sup 32}S{sup 16}O{sub 2} HITRAN data. Compared to HITRAN, the root-mean-square error (σ{sub RMS}) for all J = 0–80 rovibrational energy levels computed on the refined PES (denoted Ames-1) is 0.013 cm{sup −1}. Combined with a CCSD(T)/aug-cc-pV(Q+d)Z dipole moment surface (DMS), an infrared (IR) line list (denoted Ames-296K) has been computed at 296 K and covers up to 8000 cm{sup −1}. Compared to the HITRAN and CDMS databases, the intensity agreement for most vibrational bands is better than 85%–90%. Our predictions for {sup 34}S{sup 16}O{sub 2} band origins, higher energy {sup 32}S{sup 16}O{sub 2} band origins and missing {sup 32}S{sup 16}O{sub 2} IR bands have been verified by most recent experiments and available HITRAN data. We conclude that the Ames-1 PES is able to predict {sup 32/34}S{sup 16}O{sub 2} band origins below 5500 cm{sup −1} with 0.01–0.03 cm{sup −1} uncertainties, and the Ames-296K line list provides continuous, reliable and accurate IR simulations. The K{sub a}-dependence of both line position and line intensity errors is discussed. The line list will greatly facilitate SO{sub 2} IR spectral experimental analysis, as well as elimination of SO{sub 2} lines in high-resolution astronomical observations.

  3. First Demonstration of ECHO: an External Calibrator for Hydrogen Observatories

    NASA Astrophysics Data System (ADS)

    Jacobs, Daniel C.; Burba, Jacob; Bowman, Judd D.; Neben, Abraham R.; Stinnett, Benjamin; Turner, Lauren; Johnson, Kali; Busch, Michael; Allison, Jay; Leatham, Marc; Serrano Rodriguez, Victoria; Denney, Mason; Nelson, David

    2017-03-01

    Multiple instruments are pursuing constraints on dark energy, observing reionization and opening a window on the dark ages through the detection and characterization of the 21 cm hydrogen line for redshifts ranging from ~1 to 25. These instruments, including CHIME in the sub-meter and HERA in the meter bands, are wide-field arrays with multiple-degree beams, typically operating in transit mode. Accurate knowledge of their primary beams is critical for separation of bright foregrounds from the desired cosmological signals, but difficult to achieve through astronomical observations alone. Previous beam calibration work at low frequencies has focused on model verification and does not address the need of 21 cm experiments for routine beam mapping, to the horizon, of the as-built array. We describe the design and methodology of a drone-mounted calibrator, the External Calibrator for Hydrogen Observatories (ECHO), that aims to address this need. We report on a first set of trials to calibrate low-frequency dipoles at 137 MHz and compare ECHO measurements to an established beam-mapping system based on transmissions from the Orbcomm satellite constellation. We create beam maps of two dipoles at a 9° resolution and find sample noise ranging from 1% at the zenith to 100% in the far sidelobes. Assuming this sample noise represents the error in the measurement, the higher end of this range is not yet consistent with the desired requirement but is an improvement on Orbcomm. The overall performance of ECHO suggests that the desired precision and angular coverage is achievable in practice with modest improvements. We identify the main sources of systematic error and uncertainty in our measurements and describe the steps needed to overcome them.

  4. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE PAGES

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...

    2017-01-07

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.

  5. Radial orbit error reduction and sea surface topography determination using satellite altimetry

    NASA Technical Reports Server (NTRS)

    Engelis, Theodossios

    1987-01-01

    A method of satellite altimetry is presented that attempts to simultaneously determine the geoid and sea surface topography with minimum wavelengths of about 500 km and to reduce the radial orbit error caused by geopotential errors. The radial orbit error is modeled using the linearized Lagrangian perturbation theory. Secular and second-order effects are also included. After a rather extensive validation of the linearized equations, alternative expressions of the radial orbit error are derived. Numerical estimates for the radial orbit error and geoid undulation error are computed using the differences of two geopotential models as potential coefficient errors, for a SEASAT orbit. To provide statistical estimates of the radial distances and the geoid, a covariance propagation is made based on the full geopotential covariance. Accuracy estimates for the SEASAT orbits are given which agree quite well with already published results. Observation equations are developed using sea surface heights and crossover discrepancies as observables. A minimum variance solution with prior information provides estimates of parameters representing the sea surface topography and corrections to the gravity field that is used for the orbit generation. The simulation results show that the method can effectively reduce the radial orbit error and recover the sea surface topography.

  6. Bootstrap Estimates of Standard Errors in Generalizability Theory

    ERIC Educational Resources Information Center

    Tong, Ye; Brennan, Robert L.

    2007-01-01

    Estimating standard errors of estimated variance components has long been a challenging task in generalizability theory. Researchers have speculated about the potential applicability of the bootstrap for obtaining such estimates, but they have identified problems (especially bias) in using the bootstrap. Using Brennan's bias-correcting procedures…

  7. Efficient Measurement of Quantum Gate Error by Interleaved Randomized Benchmarking

    NASA Astrophysics Data System (ADS)

    Magesan, Easwar; Gambetta, Jay M.; Johnson, B. R.; Ryan, Colm A.; Chow, Jerry M.; Merkel, Seth T.; da Silva, Marcus P.; Keefe, George A.; Rothwell, Mary B.; Ohki, Thomas A.; Ketchen, Mark B.; Steffen, M.

    2012-08-01

    We describe a scalable experimental protocol for estimating the average error of individual quantum computational gates. This protocol consists of interleaving random Clifford gates between the gate of interest and provides an estimate as well as theoretical bounds for the average error of the gate under test, so long as the average noise variation over all Clifford gates is small. This technique takes into account both state preparation and measurement errors and is scalable in the number of qubits. We apply this protocol to a superconducting qubit system and find a bounded average error of 0.003 [0,0.016] for the single-qubit gates Xπ/2 and Yπ/2. These bounded values provide better estimates of the average error than those extracted via quantum process tomography.
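
    The arithmetic behind the interleaved estimate can be sketched in a noise-free toy (the depolarizing parameters are assumed, and a simple two-point recovery of the decay constant stands in for a full exponential fit): the gate error follows from the ratio of the interleaved and reference decay constants.

```python
# Assumed depolarizing parameters for a single qubit (d = 2); the decay
# model F(m) = A p^m + B is the standard randomized-benchmarking form.
d = 2
p_ref, p_gate = 0.995, 0.998     # reference decay; extra depolarization of the gate
p_int = p_ref * p_gate           # interleaved sequences decay faster

def decay(p, m, A=0.5, B=0.5):
    return A * p ** m + B

# Recover the interleaved decay constant from two sequence lengths,
# then form the gate-error estimate from the ratio of decay constants.
m1, m2 = 10, 50
p_rec = ((decay(p_int, m2) - 0.5) / (decay(p_int, m1) - 0.5)) ** (1 / (m2 - m1))
r_gate = (d - 1) / d * (1 - p_rec / p_ref)   # average gate error, here 0.001
```

    In an experiment the fitted decay constants carry uncertainty, which is what produces the quoted interval around the point estimate.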

  8. Optimizing estimation of hemispheric dominance for language using magnetic source imaging.

    PubMed

    Passaro, Antony D; Rezaie, Roozbeh; Moser, Dana C; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C

    2011-10-06

    The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold-standard Wada procedure have found inconsistent and often less-than-accurate estimates of laterality across various MEG studies. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol involving word recognition and the single dipole method with a sentence comprehension task and a beamformer approach localizing neural oscillations. Beamformer analysis of word recognition and sentence comprehension tasks revealed a desynchronization in the 10–18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10–18 Hz) revealed left hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and localization of oscillatory activity using a beamformer appear to be in line with general estimates of left hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing. Published by Elsevier B.V.

  9. Combining wrist age and third molars in forensic age estimation: how to calculate the joint age estimate and its error rate in age diagnostics.

    PubMed

    Gelbrich, Bianca; Frerking, Carolin; Weiss, Sandra; Schwerdt, Sebastian; Stellzig-Eisenhauer, Angelika; Tausche, Eve; Gelbrich, Götz

    2015-01-01

    Forensic age estimation in living adolescents is based on several methods, e.g. the assessment of skeletal and dental maturation. Combination of several methods is mandatory, since age estimates from a single method are too imprecise due to biological variability. The correlation of the errors of the methods being combined must be known to calculate the precision of combined age estimates. The aims of this study were to examine the correlation of the errors of the hand and third molar methods and to demonstrate how to calculate the combined age estimate. Routine clinical radiographs of the hand and dental panoramic images of 383 patients (aged 7.8-19.1 years, 56% female) were assessed. Lack of correlation (r = -0.024, 95% CI = -0.124 to +0.076, p = 0.64) allows calculating the combined age estimate as the weighted average of the estimates from hand bones and third molars. Combination improved the standard deviations of the errors (hand = 0.97 years, teeth = 1.35 years) to 0.79 years. Uncorrelated errors of the age estimates obtained from both methods allow straightforward determination of the combined estimate and its variance. This is also possible when reference data for the hand and third molar methods are established independently from each other, using different samples.
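
    Assuming the weighted average is the usual inverse-variance weighting of two uncorrelated estimates, the reported combined standard deviation of 0.79 years follows directly from the individual values of 0.97 and 1.35 years:

```python
import math

# Reported standard deviations of the single-method errors (years)
sd_hand, sd_teeth = 0.97, 1.35
w_hand, w_teeth = 1 / sd_hand**2, 1 / sd_teeth**2   # inverse-variance weights

def combined_age(age_hand, age_teeth):
    """Weighted average of the two independent age estimates."""
    return (w_hand * age_hand + w_teeth * age_teeth) / (w_hand + w_teeth)

# Standard deviation of the combined estimate (uncorrelated errors)
sd_combined = math.sqrt(1 / (w_hand + w_teeth))     # ~0.79 years
```

    The uncorrelated-errors assumption is exactly what the reported r = -0.024 justifies; correlated errors would require a covariance term in the denominator.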

  10. Photophysical study of some 3-benzoylmethyleneindol-2-ones and estimation of ground and excited states dipole moments from solvatochromic methods using solvent polarity parameters

    NASA Astrophysics Data System (ADS)

    Saroj, Manju K.; Sharma, Neera; Rastogi, Ramesh C.

    2012-03-01

    3-Benzoylmethyleneindol-2-ones, isatin-based chalcones containing donor and acceptor moieties that exhibit excited-state intramolecular charge transfer, have been studied in different solvents by absorption and emission spectroscopy. The excited-state behavior of these compounds is strongly dependent on the nature of the substituents and the environment. These compounds show multiple emissions arising from a locally excited (LE) state and two states due to intramolecular processes, viz. intramolecular charge transfer (ICT) and excited-state intramolecular proton transfer (ESIPT). Excited-state dipole moments have been calculated from the Stokes shifts of the LE and ICT states using solvatochromic methods. The higher values of the dipole moments obtained support the formation of the ICT state as one of the prominent species in the excited states of all 3-benzoylmethyleneindol-2-ones. The correlation of the solvatochromic Stokes shifts with the microscopic solvent polarity parameter (ETN) was found to be superior to that obtained using bulk solvent polarity functions. The absorption and fluorescence spectral characteristics have also been investigated as a function of acidity and basicity (Ho/pH) in the aqueous phase.

  11. Relativistic calculations of atomic properties

    NASA Astrophysics Data System (ADS)

    Kaur, Jasmeet; Sahoo, B. K.; Arora, Bindiya

    2017-04-01

    Singly charged ions are engaging candidates in many areas of physics. They are especially important in astrophysics for evaluating the radiative properties of stellar objects, in optical frequency standards, and for fundamental physics studies such as searches for permanent electric dipole moments and atomic parity violation. Interpretation of these experiments often requires knowledge of their transition wavelengths and electric dipole amplitudes. In this work, we discuss the calculation of various properties of alkaline earth ions. The relativistic all-order SD method, in which all single and double excitations of the Dirac-Fock wave function are included, is used to calculate these atomic properties. We use this method to evaluate electric dipole matrix elements of alkaline earth ions. Combining these matrix elements with experimental energies allows us to obtain the polarizabilities of the ground and excited states of the ions. We discuss the applications of the estimated polarizabilities as a function of imaginary frequency in the calculation of long-range atom-ion interactions. We have also located the magic wavelengths for nS1/2 - nD3/2,5/2 transitions of alkaline earth ions. These calculated properties will be highly valuable to the atomic physics and astrophysics communities. UGC-BSR Grant No. F.7-273/2009/BSR.

  12. Nondestructive evaluation using dipole model analysis with a scan type magnetic camera

    NASA Astrophysics Data System (ADS)

    Lee, Jinyi; Hwang, Jiseong

    2005-12-01

    Large structures such as nuclear power, thermal power, and chemical and petroleum-refining plants are drawing interest with regard to the economics of extending component life, given the harsh environment created by high pressure, high temperature, and fatigue, the need to secure safety from corrosion, and operation beyond the designated life span. Therefore, technology that accurately calculates and predicts the degradation and defects of aging materials is extremely important. Among the available methods, nondestructive testing using magnetic methods is effective in predicting and evaluating defects on or near the surface of ferromagnetic structures. For magnetic methods applicable to industrial nondestructive evaluation, it is important to estimate the distribution of magnetic field intensity. A magnetic camera provides the distribution of a quantitative magnetic field with a homogeneous lift-off and spatial resolution. The distribution of the magnetic field can be interpreted when a dipole model is introduced. This study proposes an algorithm for nondestructive evaluation using dipole model analysis with a scan type magnetic camera. The numerical and experimental considerations of the quantitative evaluation of cracks of several sizes and shapes using magnetic field images from the magnetic camera are examined.
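
    In its simplest point-dipole form, the dipole model referred to above reduces to a closed-form field expression that a scan-type magnetic camera samples on a grid at fixed lift-off; a minimal sketch (z-oriented dipole, illustrative values):

```python
import math

MU0 = 4 * math.pi * 1e-7    # vacuum permeability, T*m/A

def bz_dipole(m, x, y, z):
    """z-component of B (tesla) from a z-oriented point dipole of moment m
    (A*m^2) at the origin, evaluated at (x, y, z) in metres."""
    r2 = x * x + y * y + z * z
    r = math.sqrt(r2)
    return MU0 / (4 * math.pi) * m * (3 * z * z - r2) / r ** 5

# One scan row of a simulated camera image at a fixed lift-off
liftoff = 0.01                                  # 10 mm, assumed
row = [bz_dipole(1.0, x * 1e-3, 0.0, liftoff) for x in range(-20, 21)]
```

    Fitting such maps against measured images is the inverse step that yields quantitative crack size and shape estimates.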

  13. Adaptive control of theophylline therapy: importance of blood sampling times.

    PubMed

    D'Argenio, D Z; Khakmahd, K

    1983-10-01

    A two-observation protocol for estimating theophylline clearance during a constant-rate intravenous infusion is used to examine the importance of blood sampling schedules with regard to the information content of resulting concentration data. Guided by a theory for calculating maximally informative sample times, population simulations are used to assess the effect of specific sampling times on the precision of resulting clearance estimates and subsequent predictions of theophylline plasma concentrations. The simulations incorporated noise terms for intersubject variability, dosing errors, sample collection errors, and assay error. Clearance was estimated using Chiou's method, least squares, and a Bayesian estimation procedure. The results of these simulations suggest that clinically significant estimation and prediction errors may result when using the above two-point protocol for estimating theophylline clearance if the time separating the two blood samples is less than one population mean elimination half-life.

  14. Anisotropic solvent model of the lipid bilayer. 1. Parameterization of long-range electrostatics and first solvation shell effects.

    PubMed

    Lomize, Andrei L; Pogozheva, Irina D; Mosberg, Henry I

    2011-04-25

    A new implicit solvation model was developed for calculating free energies of transfer of molecules from water to any solvent with defined bulk properties. The transfer energy was calculated as a sum of the first solvation shell energy and the long-range electrostatic contribution. The first term was proportional to the solvent-accessible surface area and solvation parameters (σ(i)) for different atom types. The electrostatic term was computed as a product of group dipole moments and a dipolar solvation parameter (η) for neutral molecules, or using a modified Born equation for ions. The regression coefficients in linear dependencies of the solvation parameters σ(i) and η on dielectric constant, solvatochromic polarizability parameter π*, and hydrogen-bonding donor and acceptor capacities of solvents were optimized using 1269 experimental transfer energies from 19 organic solvents to water. The root-mean-square errors for neutral compounds and ions were 0.82 and 1.61 kcal/mol, respectively. Quantification of the energy components demonstrates the dominant roles of the hydrophobic effect for nonpolar atoms and of hydrogen bonding for polar atoms. The estimated first solvation shell energy outweighs the long-range electrostatics for most compounds, including ions. The simplicity and computational efficiency of the model allow its application to the modeling of macromolecules in anisotropic environments, such as biological membranes.
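
    The paper's ionic term uses a modified Born equation; the standard (unmodified) Born expression, shown here as an illustrative sketch with an assumed ionic radius, already gives the characteristic tens-of-kcal/mol scale of ionic solvation:

```python
import math

E = 1.602176634e-19      # elementary charge, C
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m
NA = 6.02214076e23       # Avogadro constant, 1/mol

def born_energy(charge_units, radius_m, eps_solvent):
    """Standard Born solvation free energy, J/mol (negative = favorable)."""
    q = charge_units * E
    return -NA * q * q / (8 * math.pi * EPS0 * radius_m) * (1 - 1 / eps_solvent)

# Monovalent ion, assumed Born radius 2.0 Angstrom, water (eps = 78.5)
dG_kcal = born_energy(1, 2.0e-10, 78.5) / 4184.0   # on the order of -80 kcal/mol
```

    The large magnitude of this single term shows why the 1.61 kcal/mol RMS error reported for ions is a demanding target.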

  15. A COUPLED 2 × 2D BABCOCK–LEIGHTON SOLAR DYNAMO MODEL. I. SURFACE MAGNETIC FLUX EVOLUTION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lemerle, Alexandre; Charbonneau, Paul; Carignan-Dugas, Arnaud, E-mail: lemerle@astro.umontreal.ca, E-mail: paulchar@astro.umontreal.ca

    The need for reliable predictions of the solar activity cycle motivates the development of dynamo models incorporating a representation of surface processes sufficiently detailed to allow assimilation of magnetographic data. In this series of papers we present one such dynamo model, and document its behavior and properties. This first paper focuses on one of the model’s key components, namely surface magnetic flux evolution. Using a genetic algorithm, we obtain best-fit parameters of the transport model by least-squares minimization of the differences between the associated synthetic synoptic magnetogram and real magnetographic data for activity cycle 21. Our fitting procedure also returns Monte Carlo-like error estimates. We show that the range of acceptable surface meridional flow profiles is in good agreement with Doppler measurements, even though the latter are not used in the fitting process. Using a synthetic database of bipolar magnetic region (BMR) emergences reproducing the statistical properties of observed emergences, we also ascertain the sensitivity of global cycle properties, such as the strength of the dipole moment and timing of polarity reversal, to distinct realizations of BMR emergence, and on this basis argue that this stochasticity represents a primary source of uncertainty for predicting solar cycle characteristics.

  16. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  17. A posteriori error estimation for multi-stage Runge–Kutta IMEX schemes

    DOE PAGES

    Chaudhry, Jehanzeb H.; Collins, J. B.; Shadid, John N.

    2017-02-05

    Implicit–Explicit (IMEX) schemes are widely used time integration methods for approximating solutions to a large class of problems. In this work, we develop accurate a posteriori error estimates of a quantity-of-interest for approximations obtained from multi-stage IMEX schemes. This is done by first defining a finite element method that is nodally equivalent to an IMEX scheme, then using typical methods for adjoint-based error estimation. Furthermore, the use of a nodally equivalent finite element method allows a decomposition of the error into multiple components, each describing the effect of a different portion of the method on the total error in a quantity-of-interest.

  18. National suicide rates a century after Durkheim: do we know enough to estimate error?

    PubMed

    Claassen, Cynthia A; Yip, Paul S; Corcoran, Paul; Bossarte, Robert M; Lawrence, Bruce A; Currier, Glenn W

    2010-06-01

    Durkheim's nineteenth-century analysis of national suicide rates dismissed prior concerns about mortality data fidelity. Over the intervening century, however, evidence documenting various types of error in suicide data has only mounted, and surprising levels of such error continue to be routinely uncovered. Yet the annual suicide rate remains the most widely used population-level suicide metric today. After reviewing the unique sources of bias incurred during stages of suicide data collection and concatenation, we propose a model designed to uniformly estimate error in future studies. A standardized method of error estimation uniformly applied to mortality data could produce data capable of promoting high quality analyses of cross-national research questions.

  19. View Estimation Based on Value System

    NASA Astrophysics Data System (ADS)

    Takahashi, Yasutake; Shimada, Kouki; Asada, Minoru

Estimation of a caregiver's view is one of the most important capabilities for a child to understand the behavior demonstrated by the caregiver, that is, to infer the intention of the behavior and/or to learn the observed behavior efficiently. We hypothesize that the child develops this ability in the same way as behavior learning motivated by an intrinsic reward: while imitating a behavior observed from the caregiver, he/she updates a model of his/her own estimated view so as to minimize the estimation error of the reward during the behavior. From this viewpoint, this paper presents a method for acquiring such a capability based on a value system from which values can be obtained by reinforcement learning. The parameters of the view estimation are updated based on the temporal difference error (hereafter TD error: the estimation error of the state value), analogous to the way the parameters of the state value of the behavior are updated based on the TD error. Experiments with simple humanoid robots show the validity of the method, and a developmental process parallel to young children's estimation of their own view during imitation of a caregiver's observed behavior is discussed.
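The TD-error update that the abstract reuses for view estimation can be sketched with plain TD(0) value learning on a deterministic chain. This is a minimal toy setup, not the robots' system:

```python
# TD(0) on a 5-state deterministic chain: s0 -> s1 -> ... -> s4 -> terminal,
# reward 1 on the final transition. V(s_k) should approach gamma**(4-k).
gamma, alpha, n_states = 0.9, 0.1, 5
V = [0.0] * n_states

for _ in range(3000):                # episodes
    for s in range(n_states):
        nxt_v = V[s + 1] if s + 1 < n_states else 0.0  # terminal value is 0
        r = 1.0 if s == n_states - 1 else 0.0
        td_error = r + gamma * nxt_v - V[s]            # the TD error
        V[s] += alpha * td_error                       # parameter update

expected = [gamma ** (n_states - 1 - k) for k in range(n_states)]
print([round(v, 3) for v in V])
```

The same scalar `td_error` drives both the value update shown here and, in the paper's method, the view-estimation parameters.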

  20. Reference-free error estimation for multiple measurement methods.

    PubMed

    Madan, Hennadii; Pernuš, Franjo; Špiclin, Žiga

    2018-01-01

    We present a computational framework to select the most accurate and precise method of measurement of a certain quantity, when there is no access to the true value of the measurand. A typical use case is when several image analysis methods are applied to measure the value of a particular quantitative imaging biomarker from the same images. The accuracy of each measurement method is characterized by systematic error (bias), which is modeled as a polynomial in true values of measurand, and the precision as random error modeled with a Gaussian random variable. In contrast to previous works, the random errors are modeled jointly across all methods, thereby enabling the framework to analyze measurement methods based on similar principles, which may have correlated random errors. Furthermore, the posterior distribution of the error model parameters is estimated from samples obtained by Markov chain Monte-Carlo and analyzed to estimate the parameter values and the unknown true values of the measurand. The framework was validated on six synthetic and one clinical dataset containing measurements of total lesion load, a biomarker of neurodegenerative diseases, which was obtained with four automatic methods by analyzing brain magnetic resonance images. The estimates of bias and random error were in a good agreement with the corresponding least squares regression estimates against a reference.
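A heavily simplified, non-MCMC version of reference-free comparison can be sketched with alternating least squares: several methods with linear biases measure the same unknown quantities, one method is anchored to fix the scale (the framework itself resolves identifiability differently), and biases are estimated jointly without any reference values. All numbers are synthetic:

```python
import random

random.seed(0)
n = 400
truth = [random.uniform(0.0, 10.0) for _ in range(n)]
# Three "methods": y = a_j + b_j * x + noise. Method 1 anchors the scale (a=0, b=1).
params = [(0.0, 1.0), (0.5, 1.2), (-0.3, 0.8)]
y = [[a + b * x + random.gauss(0.0, 0.05) for x in truth] for a, b in params]

# Alternating least squares, fixing method 1 to identity for identifiability.
est = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
x_hat = [sum(col) / 3.0 for col in zip(*y)]
for _ in range(50):
    # Update latent values given current bias parameters.
    x_hat = [sum((y[j][i] - est[j][0]) / est[j][1] for j in range(3)) / 3.0
             for i in range(n)]
    # Update each method's (a_j, b_j) by simple linear regression on x_hat.
    new = [(0.0, 1.0)]  # method 1 stays anchored
    mx = sum(x_hat) / n
    sxx = sum((x - mx) ** 2 for x in x_hat)
    for j in (1, 2):
        my = sum(y[j]) / n
        b = sum((x - mx) * (v - my) for x, v in zip(x_hat, y[j])) / sxx
        new.append((my - b * mx, b))
    est = new

print(est[1], est[2])   # recovered (bias, slope) for methods 2 and 3
```

The recovered intercepts and slopes approach the simulated biases even though the true values were never used, which is the core idea the framework develops with a full posterior over error-model parameters.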

  1. Improving estimation of flight altitude in wildlife telemetry studies

    USGS Publications Warehouse

    Poessel, Sharon; Duerr, Adam E.; Hall, Jonathan C.; Braham, Melissa A.; Katzner, Todd

    2018-01-01

Altitude measurements from wildlife tracking devices, combined with elevation data, are commonly used to estimate the flight altitude of volant animals. However, these data often include measurement error. Understanding this error may improve estimation of flight altitude and benefit applied ecology. There are a number of different approaches that have been used to address this measurement error. These include filtering based on GPS data, filtering based on behaviour of the study species, and use of state-space models to correct measurement error. The effectiveness of these approaches is highly variable. Recent studies have based inference of flight altitude on misunderstandings about avian natural history and technical or analytical tools. In this Commentary, we discuss these misunderstandings and suggest alternative strategies both to resolve some of these issues and to improve estimation of flight altitude. These strategies also can be applied to other measures derived from telemetry data. Synthesis and applications. Our Commentary is intended to clarify and improve upon some of the assumptions made when estimating flight altitude and, more broadly, when using GPS telemetry data. We also suggest best practices for identifying flight behaviour, addressing GPS error, and using flight altitudes to estimate collision risk with anthropogenic structures. Addressing the issues we describe would help improve estimates of flight altitude and advance understanding of the treatment of error in wildlife telemetry studies.

  2. Comment on Hoffman and Rovine (2007): SPSS MIXED can estimate models with heterogeneous variances.

    PubMed

    Weaver, Bruce; Black, Ryan A

    2015-06-01

    Hoffman and Rovine (Behavior Research Methods, 39:101-117, 2007) have provided a very nice overview of how multilevel models can be useful to experimental psychologists. They included two illustrative examples and provided both SAS and SPSS commands for estimating the models they reported. However, upon examining the SPSS syntax for the models reported in their Table 3, we found no syntax for models 2B and 3B, both of which have heterogeneous error variances. Instead, there is syntax that estimates similar models with homogeneous error variances and a comment stating that SPSS does not allow heterogeneous errors. But that is not correct. We provide SPSS MIXED commands to estimate models 2B and 3B with heterogeneous error variances and obtain results nearly identical to those reported by Hoffman and Rovine in their Table 3. Therefore, contrary to the comment in Hoffman and Rovine's syntax file, SPSS MIXED can estimate models with heterogeneous error variances.

  3. Identification of dynamic systems, theory and formulation

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Iliff, K. W.

    1985-01-01

The problem of estimating parameters of dynamic systems is addressed in order to present the theoretical basis of system identification and parameter estimation in a manner that is complete and rigorous, yet understandable with minimal prerequisites. Maximum likelihood and related estimators are highlighted. The approach used requires familiarity with calculus, linear algebra, and probability, but does not require knowledge of stochastic processes or functional analysis. The treatment emphasizes unification of the various approaches: estimation in dynamic systems is treated as a direct outgrowth of static-system theory. Topics covered include basic concepts and definitions; numerical optimization methods; probability; statistical estimators; estimation in static systems; stochastic processes; state estimation in dynamic systems; output error, filter error, and equation error methods of parameter estimation in dynamic systems; and the accuracy of the estimates.
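The output-error method the book covers can be sketched for a one-parameter scalar system: simulate the model for a candidate parameter with no measurement feedback, and pick the parameter minimising the sum of squared output errors. A grid search stands in here for the Gauss-Newton/maximum-likelihood machinery, and all values are illustrative:

```python
import random

random.seed(1)
# True system: x_{k+1} = a*x_k + u_k, measured output y_k = x_k + noise.
a_true, N = 0.9, 200
u = [random.gauss(0.0, 1.0) for _ in range(N)]

def simulate(a):
    x, xs = 0.0, []
    for uk in u:
        xs.append(x)
        x = a * x + uk
    return xs

y = [x + random.gauss(0.0, 0.05) for x in simulate(a_true)]

# Output-error estimation: the candidate model is run open-loop and
# compared to the measured outputs.
def cost(a):
    return sum((yk - xk) ** 2 for yk, xk in zip(y, simulate(a)))

grid = [0.5 + i * 0.0005 for i in range(1001)]   # candidates in [0.5, 1.0]
a_hat = min(grid, key=cost)
print(a_hat)
```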

  4. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part II: Evaluation of Estimates Using Independent Data

    NASA Technical Reports Server (NTRS)

    Yang, Song; Olson, William S.; Wang, Jian-Jian; Bell, Thomas L.; Smith, Eric A.; Kummerow, Christian D.

    2006-01-01

Rainfall rate estimates from spaceborne microwave radiometers are generally accepted as reliable by a majority of the atmospheric science community. One of the Tropical Rainfall Measuring Mission (TRMM) facility rain-rate algorithms is based upon passive microwave observations from the TRMM Microwave Imager (TMI). In Part I of this series, improvements of the TMI algorithm that are required to introduce latent heating as an additional algorithm product are described. Here, estimates of surface rain rate, convective proportion, and latent heating are evaluated using independent ground-based estimates and satellite products. Instantaneous, 0.5°-resolution estimates of surface rain rate over ocean from the improved TMI algorithm are well correlated with independent radar estimates (r approx. 0.88 over the Tropics), but bias reduction is the most significant improvement over earlier algorithms. The bias reduction is attributed to the greater breadth of cloud-resolving model simulations that support the improved algorithm and the more consistent and specific convective/stratiform rain separation method utilized. The bias of monthly 2.5°-resolution estimates is similarly reduced, with comparable correlations to radar estimates. Although the amount of independent latent heating data is limited, TMI-estimated latent heating profiles compare favorably with instantaneous estimates based upon dual-Doppler radar observations, and time series of surface rain-rate and heating profiles are generally consistent with those derived from rawinsonde analyses. Still, some biases in profile shape are evident, and these may be resolved with (a) additional contextual information brought to the estimation problem and/or (b) physically consistent and representative databases supporting the algorithm. A model of the random error in instantaneous 0.5°-resolution rain-rate estimates appears to be consistent with the levels of error determined from TMI comparisons with collocated radar. Error model modifications for nonraining situations will be required, however. Sampling error represents only a portion of the total error in monthly 2.5°-resolution TMI estimates; the remaining error is attributed to random and systematic algorithm errors arising from the physical inconsistency and/or nonrepresentativeness of cloud-resolving-model-simulated profiles that support the algorithm.

  5. Terrestrial Water Mass Load Changes from Gravity Recovery and Climate Experiment (GRACE)

    NASA Technical Reports Server (NTRS)

Seo, K.-W.; Wilson, C. R.; Famiglietti, J. S.; Chen, J. L.; Rodell, M.

    2006-01-01

    Recent studies show that data from the Gravity Recovery and Climate Experiment (GRACE) is promising for basin- to global-scale water cycle research. This study provides varied assessments of errors associated with GRACE water storage estimates. Thirteen monthly GRACE gravity solutions from August 2002 to December 2004 are examined, along with synthesized GRACE gravity fields for the same period that incorporate simulated errors. The synthetic GRACE fields are calculated using numerical climate models and GRACE internal error estimates. We consider the influence of measurement noise, spatial leakage error, and atmospheric and ocean dealiasing (AOD) model error as the major contributors to the error budget. Leakage error arises from the limited range of GRACE spherical harmonics not corrupted by noise. AOD model error is due to imperfect correction for atmosphere and ocean mass redistribution applied during GRACE processing. Four methods of forming water storage estimates from GRACE spherical harmonics (four different basin filters) are applied to both GRACE and synthetic data. Two basin filters use Gaussian smoothing, and the other two are dynamic basin filters which use knowledge of geographical locations where water storage variations are expected. Global maps of measurement noise, leakage error, and AOD model errors are estimated for each basin filter. Dynamic basin filters yield the smallest errors and highest signal-to-noise ratio. Within 12 selected basins, GRACE and synthetic data show similar amplitudes of water storage change. Using 53 river basins, covering most of Earth's land surface excluding Antarctica and Greenland, we document how error changes with basin size, latitude, and shape. Leakage error is most affected by basin size and latitude, and AOD model error is most dependent on basin latitude.

  6. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
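The attenuation mechanism the study quantifies has a textbook linear-regression analogue: classical measurement error in the exposure shrinks the fitted slope by the reliability ratio var(x)/(var(x)+var(error)). This sketch is not the paper's Poisson time-series design, just the simplest illustration of the bias:

```python
import random

random.seed(2)
n, beta = 20000, 1.0
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
x_obs = [x + random.gauss(0.0, 1.0) for x in x_true]   # classical error, variance 1
y = [beta * x + random.gauss(0.0, 0.5) for x in x_true]

mx = sum(x_obs) / n
my = sum(y) / n
slope = (sum((xo - mx) * (yi - my) for xo, yi in zip(x_obs, y))
         / sum((xo - mx) ** 2 for xo in x_obs))

# Expected attenuation factor: var(x) / (var(x) + var(error)) = 1 / (1 + 1) = 0.5
print(slope)
```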

  7. Precipitation and Latent Heating Distributions from Satellite Passive Microwave Radiometry. Part 1; Method and Uncertainties

    NASA Technical Reports Server (NTRS)

    Olson, William S.; Kummerow, Christian D.; Yang, Song; Petty, Grant W.; Tao, Wei-Kuo; Bell, Thomas L.; Braun, Scott A.; Wang, Yansen; Lang, Stephen E.; Johnson, Daniel E.

    2004-01-01

    A revised Bayesian algorithm for estimating surface rain rate, convective rain proportion, and latent heating/drying profiles from satellite-borne passive microwave radiometer observations over ocean backgrounds is described. The algorithm searches a large database of cloud-radiative model simulations to find cloud profiles that are radiatively consistent with a given set of microwave radiance measurements. The properties of these radiatively consistent profiles are then composited to obtain best estimates of the observed properties. The revised algorithm is supported by an expanded and more physically consistent database of cloud-radiative model simulations. The algorithm also features a better quantification of the convective and non-convective contributions to total rainfall, a new geographic database, and an improved representation of background radiances in rain-free regions. Bias and random error estimates are derived from applications of the algorithm to synthetic radiance data, based upon a subset of cloud resolving model simulations, and from the Bayesian formulation itself. Synthetic rain rate and latent heating estimates exhibit a trend of high (low) bias for low (high) retrieved values. The Bayesian estimates of random error are propagated to represent errors at coarser time and space resolutions, based upon applications of the algorithm to TRMM Microwave Imager (TMI) data. Errors in instantaneous rain rate estimates at 0.5 deg resolution range from approximately 50% at 1 mm/h to 20% at 14 mm/h. These errors represent about 70-90% of the mean random deviation between collocated passive microwave and spaceborne radar rain rate estimates. The cumulative algorithm error in TMI estimates at monthly, 2.5 deg resolution is relatively small (less than 6% at 5 mm/day) compared to the random error due to infrequent satellite temporal sampling (8-35% at the same rain rate).
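The database-search step of such Bayesian retrievals can be sketched as Gaussian-weighted compositing: each database entry is weighted by its radiative consistency with the observation, and the retrieved rain rate is the weighted mean. The forward model and all numbers below are made up for illustration, not TRMM physics:

```python
import math
import random

random.seed(3)
# Synthetic "cloud-radiative database": rain rate R -> brightness temperature Tb.
db_rain = [20.0 * i / 4999 for i in range(5000)]
db_tb = [150.0 + 10.0 * math.log1p(r) for r in db_rain]   # hypothetical forward model

# Observation generated from a true rain rate of 5 mm/h, with 1 K radiometer noise.
sigma = 1.0
tb_obs = 150.0 + 10.0 * math.log1p(5.0) + random.gauss(0.0, sigma)

# Bayesian compositing: weight each database profile by its radiative consistency,
# then take the weighted mean as the best estimate.
weights = [math.exp(-0.5 * ((tb_obs - tb) / sigma) ** 2) for tb in db_tb]
r_hat = sum(w * r for w, r in zip(weights, db_rain)) / sum(weights)
print(r_hat)
```

The same weighted sums give composited latent heating profiles when each database entry carries a profile rather than a scalar.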

  8. A Low Frequency Electromagnetic Sensor for Underwater Geo-Location

    DTIC Science & Technology

    2011-05-01

used a set of commercially available fluxgate magnetometers to measure the magnetic field gradients associated with a magnetic dipole transmitter...insight into the operational capabilities of commercial fluxgate sensors (Figure 42: Applied Physics Systems 1540 magnetometer)...a magnetic field gradient receiver array. Highest quality gradient estimates were achieved with three vector magnetometers equally spaced and

  9. Use of Earth's magnetic field for mitigating gyroscope errors regardless of magnetic perturbation.

    PubMed

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth's magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth's magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment.
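The quasi-static field detection at the heart of the QSF scheme can be approximated by a sliding-window variance test on the field magnitude: windows where the field barely varies are flagged as usable for attitude updates. This is a simplification of the paper's scheme, with made-up thresholds and synthetic data:

```python
import math
import random

random.seed(4)
# Synthetic field-magnitude track: static near 50 uT, a perturbed stretch, static again.
mag = ([random.gauss(50.0, 0.05) for _ in range(100)] +
       [random.gauss(55.0, 3.0) for _ in range(100)] +
       [random.gauss(50.0, 0.05) for _ in range(100)])

def is_qsf(window, threshold=0.2):
    # Quasi-static if the in-window standard deviation is below the threshold.
    m = sum(window) / len(window)
    return math.sqrt(sum((v - m) ** 2 for v in window) / len(window)) < threshold

W = 20
flags = [is_qsf(mag[i:i + W]) for i in range(0, len(mag) - W, W)]
print(flags)
```

Only the flagged windows would feed magnetic measurements to the EKF attitude estimator; perturbed stretches are rejected.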

  10. Use of Earth’s Magnetic Field for Mitigating Gyroscope Errors Regardless of Magnetic Perturbation

    PubMed Central

    Afzal, Muhammad Haris; Renaudin, Valérie; Lachapelle, Gérard

    2011-01-01

    Most portable systems like smart-phones are equipped with low cost consumer grade sensors, making them useful as Pedestrian Navigation Systems (PNS). Measurements of these sensors are severely contaminated by errors caused due to instrumentation and environmental issues rendering the unaided navigation solution with these sensors of limited use. The overall navigation error budget associated with pedestrian navigation can be categorized into position/displacement errors and attitude/orientation errors. Most of the research is conducted for tackling and reducing the displacement errors, which either utilize Pedestrian Dead Reckoning (PDR) or special constraints like Zero velocity UPdaTes (ZUPT) and Zero Angular Rate Updates (ZARU). This article targets the orientation/attitude errors encountered in pedestrian navigation and develops a novel sensor fusion technique to utilize the Earth’s magnetic field, even perturbed, for attitude and rate gyroscope error estimation in pedestrian navigation environments where it is assumed that Global Navigation Satellite System (GNSS) navigation is denied. As the Earth’s magnetic field undergoes severe degradations in pedestrian navigation environments, a novel Quasi-Static magnetic Field (QSF) based attitude and angular rate error estimation technique is developed to effectively use magnetic measurements in highly perturbed environments. The QSF scheme is then used for generating the desired measurements for the proposed Extended Kalman Filter (EKF) based attitude estimator. Results indicate that the QSF measurements are capable of effectively estimating attitude and gyroscope errors, reducing the overall navigation error budget by over 80% in urban canyon environment. PMID:22247672

  11. Estimation of population mean in the presence of measurement error and non response under stratified random sampling

    PubMed Central

    Shabbir, Javid

    2018-01-01

    In the present paper we propose an improved class of estimators in the presence of measurement error and non-response under stratified random sampling for estimating the finite population mean. The theoretical and numerical studies reveal that the proposed class of estimators performs better than other existing estimators. PMID:29401519
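The baseline such estimator classes build on is the plain stratified mean, a weighted combination of stratum sample means (no measurement-error or non-response correction here; stratum sizes and samples are illustrative):

```python
# Stratified estimate of a population mean: ybar_st = sum_h W_h * ybar_h,
# with W_h = N_h / N the stratum weights.
strata = {
    "urban": {"N": 600, "sample": [12.1, 11.8, 12.6, 12.0, 11.9]},
    "rural": {"N": 400, "sample": [8.2, 7.9, 8.5, 8.1]},
}
N = sum(s["N"] for s in strata.values())
ybar_st = sum(s["N"] / N * (sum(s["sample"]) / len(s["sample"]))
              for s in strata.values())
print(round(ybar_st, 3))
```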

  12. An Investigation of the Standard Errors of Expected A Posteriori Ability Estimates.

    ERIC Educational Resources Information Center

    De Ayala, R. J.; And Others

    Expected a posteriori has a number of advantages over maximum likelihood estimation or maximum a posteriori (MAP) estimation methods. These include ability estimates (thetas) for all response patterns, less regression towards the mean than MAP ability estimates, and a lower average squared error. R. D. Bock and R. J. Mislevy (1982) state that the…
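An EAP ability estimate is just the posterior mean of theta over a quadrature grid, with the posterior standard deviation as its standard error. The sketch below uses a Rasch model with made-up item difficulties; note the estimate stays finite even for a perfect response pattern, where maximum likelihood diverges:

```python
import math

def eap(responses, difficulties, n_q=81):
    # Posterior mean of theta under a N(0,1) prior with a Rasch likelihood.
    grid = [-4.0 + 8.0 * k / (n_q - 1) for k in range(n_q)]
    post = []
    for th in grid:
        like = 1.0
        for u, b in zip(responses, difficulties):
            p = 1.0 / (1.0 + math.exp(-(th - b)))
            like *= p if u == 1 else 1.0 - p
        post.append(like * math.exp(-0.5 * th * th))   # prior * likelihood
    z = sum(post)
    mean = sum(t * w for t, w in zip(grid, post)) / z
    sd = math.sqrt(sum((t - mean) ** 2 * w for t, w in zip(grid, post)) / z)
    return mean, sd   # sd is the posterior standard deviation ("standard error")

theta, se = eap([1, 1, 1], [-1.0, 0.0, 1.0])   # perfect pattern on 3 items
print(theta, se)
```

The shrinkage toward the prior mean visible here is the regression toward the mean the abstract mentions.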

  13. Crop area estimation based on remotely-sensed data with an accurate but costly subsample

    NASA Technical Reports Server (NTRS)

    Gunst, R. F.

    1983-01-01

Alternatives to sampling-theory stratified and regression estimators of crop production and timber biomass were examined. An alternative estimator which is viewed as especially promising is the errors-in-variables regression estimator. Investigations established the need for caution with this estimator when the ratio of two error variances is not precisely known.
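One common errors-in-variables estimator, Deming regression, makes the role of that variance ratio explicit: the slope formula takes the assumed ratio delta as an input, and misspecifying it shifts the estimate. A simulated sketch (illustrative numbers, not the crop data):

```python
import random

random.seed(5)
n, beta = 5000, 2.0
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
x = [v + random.gauss(0.0, 0.5) for v in x_true]   # regressor measured with error
y = [beta * v + random.gauss(0.0, 0.5) for v in x_true]

mx, my = sum(x) / n, sum(y) / n
sxx = sum((v - mx) ** 2 for v in x) / n
syy = sum((v - my) ** 2 for v in y) / n
sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n

def deming_slope(delta):
    # delta = var(y-error) / var(x-error), assumed known.
    d = syy - delta * sxx
    return (d + (d * d + 4.0 * delta * sxy ** 2) ** 0.5) / (2.0 * sxy)

b_right = deming_slope(1.0)   # correct ratio: recovers beta
b_wrong = deming_slope(4.0)   # misspecified ratio: biased estimate
print(b_right, b_wrong)
```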

  14. Experimental determination of the navigation error of the 4-D navigation, guidance, and control systems on the NASA B-737 airplane

    NASA Technical Reports Server (NTRS)

    Knox, C. E.

    1978-01-01

    Navigation error data from these flights are presented in a format utilizing three independent axes - horizontal, vertical, and time. The navigation position estimate error term and the autopilot flight technical error term are combined to form the total navigation error in each axis. This method of error presentation allows comparisons to be made between other 2-, 3-, or 4-D navigation systems and allows experimental or theoretical determination of the navigation error terms. Position estimate error data are presented with the navigation system position estimate based on dual DME radio updates that are smoothed with inertial velocities, dual DME radio updates that are smoothed with true airspeed and magnetic heading, and inertial velocity updates only. The normal mode of navigation with dual DME updates that are smoothed with inertial velocities resulted in a mean error of 390 m with a standard deviation of 150 m in the horizontal axis; a mean error of 1.5 m low with a standard deviation of less than 11 m in the vertical axis; and a mean error as low as 252 m with a standard deviation of 123 m in the time axis.

  15. Proton radius from electron scattering data

    NASA Astrophysics Data System (ADS)

    Higinbotham, Douglas W.; Kabir, Al Amin; Lin, Vincent; Meekins, David; Norum, Blaine; Sawatzky, Brad

    2016-05-01

Background: The proton charge radius extracted from recent muonic hydrogen Lamb shift measurements is significantly smaller than that extracted from atomic hydrogen and electron scattering measurements. The discrepancy has become known as the proton radius puzzle. Purpose: In an attempt to understand the discrepancy, we review high-precision electron scattering results from Mainz, Jefferson Lab, Saskatoon, and Stanford. Methods: We make use of stepwise regression techniques using the F test as well as the Akaike information criterion to systematically determine the predictive variables to use for a given set and range of electron scattering data as well as to provide multivariate error estimates. Results: Starting with the precision, low four-momentum transfer (Q²) data from Mainz (1980) and Saskatoon (1974), we find that a stepwise regression of the Maclaurin series using the F test as well as the Akaike information criterion justify using a linear extrapolation which yields a value for the proton radius that is consistent with the result obtained from muonic hydrogen measurements. Applying the same Maclaurin series and statistical criteria to the 2014 Rosenbluth results on GE from Mainz, we again find that the stepwise regression tends to favor a radius consistent with the muonic hydrogen radius but produces results that are extremely sensitive to the range of data included in the fit. Making use of the high-Q² data on GE to select functions which extrapolate to high Q², we find that a Padé (N = M = 1) statistical model works remarkably well, as does a dipole function with a 0.84 fm radius, GE(Q²) = (1 + Q²/0.66 GeV²)⁻².
Conclusions: Rigorous applications of stepwise regression techniques and multivariate error estimates result in the extraction of a proton charge radius that is consistent with the muonic hydrogen result of 0.84 fm; either from linear extrapolation of the extremely-low-Q² data or by use of the Padé approximant for extrapolation using a larger range of data. Thus, based on a purely statistical analysis of electron scattering data, we conclude that the electron scattering results and the muonic hydrogen results are consistent. It is the atomic hydrogen results that are the outliers.
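The fit-range sensitivity the authors report can be sketched directly: take the 0.66 GeV² dipole from the abstract as the "true" form factor, fit the linear Maclaurin term GE ≈ 1 - r²Q²/6 over different Q² ranges, and watch the extracted radius move. The fit ranges and point counts below are illustrative:

```python
import math

hbar_c = 0.1973  # GeV*fm

def dipole_GE(q2):
    return (1.0 + q2 / 0.66) ** -2   # the 0.66 GeV^2 dipole from the abstract

def radius_from_linear_fit(q2_max, n=50):
    # Least-squares fit GE ~ c0 + c1*Q^2 over (0, q2_max]; r^2 = -6*c1 in GeV^-2.
    q2 = [q2_max * (k + 1) / n for k in range(n)]
    g = [dipole_GE(v) for v in q2]
    mq, mg = sum(q2) / n, sum(g) / n
    c1 = (sum((a - mq) * (b - mg) for a, b in zip(q2, g))
          / sum((a - mq) ** 2 for a in q2))
    return math.sqrt(-6.0 * c1) * hbar_c

r_low = radius_from_linear_fit(0.005)   # very low Q^2: close to the true 0.84 fm
r_wide = radius_from_linear_fit(0.5)    # wide range: curvature biases r downward
print(r_low, r_wide)
```

With only the linear term, widening the fit range pulls the apparent radius well below the true value, which is why the stepwise selection of higher-order terms matters.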

  16. Simulations in site error estimation for direction finders

    NASA Astrophysics Data System (ADS)

    López, Raúl E.; Passi, Ranjit M.

    1991-08-01

The performance of an algorithm for the recovery of site-specific errors of direction finder (DF) networks is tested under controlled simulated conditions. The simulations show that the algorithm has some inherent shortcomings for the recovery of site errors from the measured azimuth data. These limitations are fundamental to the problem of site error estimation using azimuth information. Several ways for resolving or ameliorating these basic complications are tested by means of simulations. From these it appears that for the effective implementation of the site error determination algorithm, one should design the networks with at least four DFs, improve the alignment of the antennas, and increase the gain of the DFs as much as is compatible with other operational requirements. The use of a nonzero initial estimate of the site errors when working with data from networks of four or more DFs also improves the accuracy of the site error recovery. Even for networks of three DFs, reasonable site error corrections could be obtained if the antennas could be well aligned.

  17. Comparison of estimated and observed stormwater runoff for fifteen watersheds in west-central Florida, using five common design techniques

    USGS Publications Warehouse

    Trommer, J.T.; Loper, J.E.; Hammett, K.M.; Bowman, Georgia

    1996-01-01

    Hydrologists use several traditional techniques for estimating peak discharges and runoff volumes from ungaged watersheds. However, applying these techniques to watersheds in west-central Florida requires that empirical relationships be extrapolated beyond tested ranges. As a result there is some uncertainty as to their accuracy. Sixty-six storms in 15 west-central Florida watersheds were modeled using (1) the rational method, (2) the U.S. Geological Survey regional regression equations, (3) the Natural Resources Conservation Service (formerly the Soil Conservation Service) TR-20 model, (4) the Army Corps of Engineers HEC-1 model, and (5) the Environmental Protection Agency SWMM model. The watersheds ranged between fully developed urban and undeveloped natural watersheds. Peak discharges and runoff volumes were estimated using standard or recommended methods for determining input parameters. All model runs were uncalibrated and the selection of input parameters was not influenced by observed data. The rational method, only used to calculate peak discharges, overestimated 45 storms, underestimated 20 storms and estimated the same discharge for 1 storm. The mean estimation error for all storms indicates the method overestimates the peak discharges. Estimation errors were generally smaller in the urban watersheds and larger in the natural watersheds. The U.S. Geological Survey regression equations provide peak discharges for storms of specific recurrence intervals. Therefore, direct comparison with observed data was limited to sixteen observed storms that had precipitation equivalent to specific recurrence intervals. The mean estimation error for all storms indicates the method overestimates both peak discharges and runoff volumes. Estimation errors were smallest for the larger natural watersheds in Sarasota County, and largest for the small watersheds located in the eastern part of the study area. 
The Natural Resources Conservation Service TR-20 model, overestimated peak discharges for 45 storms and underestimated 21 storms, and overestimated runoff volumes for 44 storms and underestimated 22 storms. The mean estimation error for all storms modeled indicates that the model overestimates peak discharges and runoff volumes. The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. The HEC-1 model overestimated peak discharge rates for 55 storms and underestimated 11 storms. Runoff volumes were overestimated for 44 storms and underestimated for 22 storms using the Army Corps of Engineers HEC-1 model. The mean estimation error for all the storms modeled indicates that the model overestimates peak discharge rates and runoff volumes. Generally, the smaller estimation errors in peak discharges were for storms occurring in the urban watersheds, and the larger errors were for storms occurring in the natural watersheds. Estimation errors in runoff volumes, however, were smallest for the 3 natural watersheds located in the southernmost part of Sarasota County. The Environmental Protection Agency Storm Water Management model produced similar peak discharges and runoff volumes when using both the Green-Ampt and Horton infiltration methods. Estimated peak discharge and runoff volume data calculated with the Horton method were only slightly higher than those calculated with the Green-Ampt method. The mean estimation error for all the storms modeled indicates the model using the Green-Ampt infiltration method overestimates peak discharges and slightly underestimates runoff volumes. Using the Horton infiltration method, the model overestimates both peak discharges and runoff volumes. 
The smaller estimation errors in both peak discharges and runoff volumes were for storms occurring in the five natural watersheds in Sarasota County with the least amount of impervious cover and the lowest slopes. The largest er
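The first of the techniques compared, the rational method, is a one-line formula: Q = C·i·A, where C is a runoff coefficient, i the rainfall intensity, and A the drainage area. In US customary units the conversion factor is 1.008 cfs per acre-inch/hour, customarily rounded to 1. The numbers below are hypothetical:

```python
def rational_peak_discharge(C, i_in_per_hr, A_acres):
    # Q [cfs] ~= C * i * A; 1 acre-inch/hour = 1.008 cfs, usually taken as 1.
    return C * i_in_per_hr * A_acres

Q = rational_peak_discharge(C=0.7, i_in_per_hr=2.0, A_acres=10.0)
print(Q)
```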

  18. Approximation of Bit Error Rates in Digital Communications

    DTIC Science & Technology

    2007-06-01

and Technology Organisation DSTO-TN-0761 ABSTRACT This report investigates the estimation of bit error rates in digital communications, motivated by...recent work in [6]. In the latter, bounds are used to construct estimates for bit error rates in the case of differentially coherent quadrature phase
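For a simpler baseline than the report's differentially coherent QPSK case, the bit error rate of coherent BPSK over AWGN has the closed form P_b = Q(sqrt(2·Eb/N0)) = 0.5·erfc(sqrt(Eb/N0)), which a Monte Carlo simulation reproduces:

```python
import math
import random

random.seed(6)
ebn0_db = 4.0
ebn0 = 10.0 ** (ebn0_db / 10.0)
sigma = math.sqrt(1.0 / (2.0 * ebn0))   # noise std for unit-energy BPSK symbols

n_bits = 200_000
errors = 0
for _ in range(n_bits):
    bit = random.getrandbits(1)
    s = 1.0 if bit else -1.0              # BPSK mapping
    r = s + random.gauss(0.0, sigma)      # AWGN channel
    if (r > 0.0) != bool(bit):            # hard decision at zero
        errors += 1
ber_sim = errors / n_bits

# Theory: P_b = Q(sqrt(2*Eb/N0)) = 0.5*erfc(sqrt(Eb/N0))
ber_theory = 0.5 * math.erfc(math.sqrt(ebn0))
print(ber_sim, ber_theory)
```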

  19. The Infinitesimal Jackknife with Exploratory Factor Analysis

    ERIC Educational Resources Information Center

    Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.

    2012-01-01

    The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…
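The ordinary delete-one jackknife (the finite-difference cousin of the infinitesimal variant the article develops) is easy to sketch; for the sample mean its standard error reproduces s/sqrt(n) exactly. Data values are arbitrary:

```python
import math

def jackknife_se(data, stat):
    # Delete-one jackknife standard error of an arbitrary statistic.
    n = len(data)
    reps = [stat(data[:i] + data[i + 1:]) for i in range(n)]
    rbar = sum(reps) / n
    return math.sqrt((n - 1) / n * sum((r - rbar) ** 2 for r in reps))

data = [2.3, 4.1, 3.7, 5.0, 2.9, 4.4, 3.1, 4.8]
mean = lambda xs: sum(xs) / len(xs)

se_jack = jackknife_se(data, mean)
n = len(data)
s = math.sqrt(sum((x - mean(data)) ** 2 for x in data) / (n - 1))
print(se_jack, s / math.sqrt(n))
```

For rotated factor loadings the statistic is far more expensive, which is one motivation for the infinitesimal version's analytic derivatives.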

  20. Improving the S-Shape Solar Radiation Estimation Method for Supporting Crop Models

    PubMed Central

    Fodor, Nándor

    2012-01-01

In line with the critical comments formulated in relation to the S-shape global solar radiation estimation method, the original formula was improved via a 5-step procedure. The improved method was compared to four reference methods on a large North-American database. According to the investigated error indicators, the final 7-parameter S-shape method has the same or even better estimation efficiency than the original formula. The improved formula is able to provide radiation estimates with a particularly low error pattern index (PIdoy) which is especially important concerning the usability of the estimated radiation values in crop models. Using site-specific calibration, the radiation estimates of the improved S-shape method caused an average of 2.72 ± 1.02 (α = 0.05) relative error in the calculated biomass. Using only readily available site-specific metadata, the radiation estimates caused less than 5% relative error in the crop model calculations when they were used for locations in the middle, plain territories of the USA. PMID:22645451

Top