High-Order Polynomial Expansions (HOPE) for flux-vector splitting
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing; Steffen, Chris J., Jr.
1991-01-01
The Van Leer flux splitting is known to produce excessive numerical dissipation in Navier-Stokes calculations. The authors attempt to remedy this deficiency by introducing a higher-order polynomial expansion (HOPE) for the mass flux. In addition to Van Leer's splitting, a term is introduced so that the mass diffusion error vanishes at M = 0. Several splittings for pressure are proposed and examined. The effectiveness of the HOPE scheme is illustrated for 1-D hypersonic conical viscous flow and 2-D supersonic shock-wave/boundary-layer interactions. The authors also discuss the weaknesses of the scheme and suggest areas for further investigation.
NASA Technical Reports Server (NTRS)
Dong, D.; Fang, P.; Bock, F.; Webb, F.; Prawirondirdjo, L.; Kedar, S.; Jamason, P.
2006-01-01
Spatial filtering is an effective way to improve the precision of coordinate time series for regional GPS networks by reducing so-called common mode errors, thereby providing better resolution for detecting weak or transient deformation signals. The commonly used approach to regional filtering assumes that the common mode error is spatially uniform, which is a good approximation for networks a few hundred kilometers in extent but breaks down as the spatial extent increases. A more rigorous approach should remove the assumption of a spatially uniform distribution and let the data themselves reveal the spatial distribution of the common mode error. Principal component analysis (PCA) and the Karhunen-Loeve expansion (KLE) both decompose network time series into a set of temporally varying modes and their spatial responses, and therefore provide a mathematical framework to perform spatiotemporal filtering. We apply the combination of PCA and KLE to daily station coordinate time series of the Southern California Integrated GPS Network (SCIGN) for the period 2000 to 2004. We demonstrate that spatially and temporally correlated common mode errors are the dominant error source in daily GPS solutions. The spatial characteristics of the common mode errors are close to uniform for the east, north, and vertical components, which implies a very long wavelength source for the common mode errors compared to the spatial extent of the GPS network in southern California. Furthermore, the common mode errors exhibit temporally nonrandom patterns.
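As a hedged illustration of the PCA step described in this abstract (a minimal sketch on synthetic data, not the authors' SCIGN processing; the stacking convention and noise levels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_sites = 1500, 40

# Synthetic residual time series: a network-wide common mode plus site noise.
common_mode = np.cumsum(rng.normal(0, 0.2, n_days))              # mm, temporally correlated
X = common_mode[:, None] + rng.normal(0, 1.0, (n_days, n_sites)) # days x sites

# PCA via SVD of the demeaned data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

# First principal component = dominant temporal mode; Vt[0] = its spatial response.
pc1 = U[:, 0] * s[0]
filtered = Xc - np.outer(pc1, Vt[0])   # remove the common mode at every site

print("RMS before filtering:", Xc.std())
print("RMS after  filtering:", filtered.std())
```

If the spatial response of the first mode is nearly uniform across sites, removing it reproduces the classical uniform common-mode filter; a strongly varying response is the regime where the full PCA/KLE treatment matters.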
Scattering of point particles by black holes: Gravitational radiation
NASA Astrophysics Data System (ADS)
Hopper, Seth; Cardoso, Vitor
2018-02-01
Gravitational waves can teach us not only about sources and the environment where they were generated, but also about the gravitational interaction itself. Here we study the features of gravitational radiation produced during the scattering of a pointlike mass by a black hole. Our results are exact (to numerical error) at any order in a velocity expansion, and are compared against various approximations. At large impact parameter and relatively small velocities our results agree to within percent level with various post-Newtonian and weak-field results. Further, we find good agreement with scaling predictions in the weak-field/high-energy regime. Lastly, we achieve striking agreement with zero-frequency estimates.
Multibody local approximation: Application to conformational entropy calculations on biomolecules
NASA Astrophysics Data System (ADS)
Suárez, Ernesto; Suárez, Dimas
2012-08-01
Multibody type expansions like mutual information expansions are widely used for computing or analyzing properties of large composite systems. The power of such expansions stems from their generality. Their weaknesses, however, are the large computational cost of including high order terms due to the combinatorial explosion and the fact that truncation errors do not decrease strictly with the expansion order. Herein, we take advantage of the redundancy of multibody expansions in order to derive an efficient reformulation that captures implicitly all-order correlation effects within a given cutoff, avoiding the combinatorial explosion. This approach, which is cutoff dependent rather than order dependent, keeps the generality of the original expansions and simultaneously mitigates their limitations provided that a reasonable cutoff can be used. An application of particular interest can be the computation of the conformational entropy of flexible peptide molecules from molecular dynamics trajectories. By combining the multibody local estimations of conformational entropy with average values of the rigid-rotor and harmonic-oscillator entropic contributions, we obtain a far tighter upper bound on the absolute entropy than that obtained with the widely used quasi-harmonic method.
Charge renormalization at the large-D limit for N-electron atoms and weakly bound systems
NASA Astrophysics Data System (ADS)
Kais, S.; Bleil, R.
1995-05-01
We develop a systematic way to determine an effective nuclear charge ZRD such that the Hartree-Fock results will be significantly closer to the exact energies, by utilizing the analytically known large-D limit energies. This method yields an expansion for the effective nuclear charge in powers of (1/D), which we have evaluated to first order. This first-order approximation to the desired effective nuclear charge has been applied to two-electron atoms with Z = 2-20 and to weakly bound systems such as H-. Compared with exact results, the errors for the two-electron atoms were reduced from ~0.2% for Z = 2 to ~0.002% for large Z. Although the usual Hartree-Fock calculation for H- shows it to be unstable, our results reduce the percent error of the Hartree-Fock energy from 7.6% to 1.86% and predict the anion to be stable. For N-electron atoms (N = 3-18, Z = 3-28), using only the zeroth-order approximation for the effective charge significantly reduces the error of Hartree-Fock calculations and recovers more than 80% of the correlation energy.
A family of chaotic pure analog coding schemes based on baker's map function
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Jing; Lu, Xuanxuan; Yuen, Chau; Wu, Jun
2015-12-01
This paper considers a family of pure analog coding schemes constructed from dynamic systems which are governed by chaotic functions: baker's map function and its variants. Various decoding methods, including maximum likelihood (ML), minimum mean square error (MMSE), and mixed ML-MMSE decoding algorithms, have been developed for these novel encoding schemes. The proposed mirrored baker's and single-input baker's analog codes provide balanced protection against fold errors (large distortion) and weak distortion, and outperform the classical chaotic analog coding and analog joint source-channel coding schemes in the literature. Compared to a conventional digital communication system, where quantization and digital error correction codes are used, the proposed analog coding system has graceful performance evolution, low decoding latency, and no quantization noise. Numerical results show that under the same bandwidth expansion, the proposed analog system outperforms the digital ones over a wide signal-to-noise ratio (SNR) range.
NASA Astrophysics Data System (ADS)
Friedrich, Oliver; Eifler, Tim
2018-01-01
Computing the inverse covariance matrix (or precision matrix) of large data vectors is crucial in weak lensing (and multiprobe) analyses of the large-scale structure of the Universe. Analytically computed covariances are noise-free and hence straightforward to invert; however, the model approximations might be insufficient for the statistical precision of future cosmological data. Estimating covariances from numerical simulations improves on these approximations, but the sample covariance estimator is inherently noisy, which introduces uncertainties in the error bars on cosmological parameters and also additional scatter in their best-fitting values. For future surveys, reducing both effects to an acceptable level requires an unfeasibly large number of simulations. In this paper we describe a way to expand the precision matrix around a covariance model and show how to estimate the leading order terms of this expansion from simulations. This is especially powerful if the covariance matrix is the sum of two contributions, C = A+B, where A is well understood analytically and can be turned off in simulations (e.g. shape noise for cosmic shear) to yield a direct estimate of B. We test our method in mock experiments resembling tomographic weak lensing data vectors from the Dark Energy Survey (DES) and the Large Synoptic Survey Telescope (LSST). For DES we find that 400 N-body simulations are sufficient to achieve negligible statistical uncertainties on parameter constraints. For LSST this is achieved with 2400 simulations. The standard covariance estimator would require >10^5 simulations to reach a similar precision. We extend our analysis to a DES multiprobe case finding a similar performance.
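A minimal numerical sketch of the central idea, expanding the precision matrix around a known model and estimating the correction from simulations (matrix sizes, the diagonal model for A, and the first-order truncation are illustrative assumptions, not the paper's estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
p = 20

# "Analytic" part A (known exactly) and a small contribution B estimated from sims.
A = np.diag(np.linspace(1.0, 2.0, p))       # e.g., shape-noise covariance
chol = 0.05 * rng.normal(size=(p, p))
B_true = chol @ chol.T                      # e.g., cosmic-variance-like term

# Simulations with A "turned off" give noisy draws of B alone.
n_sims = 400
draws = rng.multivariate_normal(np.zeros(p), B_true, size=n_sims)
B_hat = np.cov(draws, rowvar=False)

# First-order expansion of the precision matrix around the model C0 = A:
#   (A + B)^(-1) ~= A^(-1) - A^(-1) B A^(-1)
P0 = np.linalg.inv(A)
P_first_order = P0 - P0 @ B_hat @ P0

P_exact = np.linalg.inv(A + B_true)
err = np.linalg.norm(P_first_order - P_exact) / np.linalg.norm(P_exact)
print(f"relative error of first-order precision estimate: {err:.3e}")
```

Because only B is estimated from simulations, the noise enters through a correction term rather than through a full sample-covariance inversion, which is the mechanism behind the reduced simulation requirements quoted above.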
Bs and Ds decay constants in three-flavor lattice QCD.
Wingate, Matthew; Davies, Christine T H; Gray, Alan; Lepage, G Peter; Shigemitsu, Junko
2004-04-23
Capitalizing on recent advances in lattice QCD, we present a calculation of the leptonic decay constants f_Bs and f_Ds that includes effects of one strange sea quark and two light sea quarks via an improved staggered action. By shedding the quenched approximation and the associated lattice scale uncertainty, lattice QCD greatly increases its predictive power. Nonrelativistic QCD is used to simulate heavy quarks with masses between 1.5m_c and m_b. We arrive at the following results: f_Bs = 260 ± 7 ± 26 ± 8 ± 5 MeV and f_Ds = 290 ± 20 ± 29 ± 29 ± 6 MeV. The first quoted error is the statistical uncertainty, and the rest estimate the sizes of higher order terms neglected in this calculation. All of these uncertainties are systematically improvable by including another order in the weak coupling expansion, the nonrelativistic expansion, or the Symanzik improvement program.
Temperature equilibration rate with Fermi-Dirac statistics.
Brown, Lowell S; Singleton, Robert L
2007-12-01
We calculate analytically the electron-ion temperature equilibration rate in a fully ionized, weakly to moderately coupled plasma, using an exact treatment of the Fermi-Dirac electrons. The temperature is sufficiently high so that the quantum-mechanical Born approximation to the scattering is valid. It should be emphasized that we do not build a model of the energy exchange mechanism, but rather, we perform a systematic first principles calculation of the energy exchange. At the heart of this calculation lies the method of dimensional continuation, a technique that we borrow from quantum field theory and use in a different fashion to regulate the kinetic equations in a consistent manner. We can then perform a systematic perturbation expansion and thereby obtain a finite first-principles result to leading and next-to-leading order. Unlike model building, this systematic calculation yields an estimate of its own error and thus prescribes its domain of applicability. The calculational error is small for a weakly to moderately coupled plasma, for which our result is nearly exact. It should also be emphasized that our calculation becomes unreliable for a strongly coupled plasma, where the perturbative expansion that we employ breaks down, and one must then utilize model building and computer simulations. Besides providing different and potentially useful results, we use this calculation as an opportunity to explain the method of dimensional continuation in a pedagogical fashion. Interestingly, in the regime of relevance for many inertial confinement fusion experiments, the degeneracy corrections are comparable in size to the subleading quantum correction below the Born approximation. For consistency, we therefore present this subleading quantum-to-classical transition correction in addition to the degeneracy correction.
Moment expansion for ionospheric range error
NASA Technical Reports Server (NTRS)
Mallinckrodt, A.; Reich, R.; Parker, H.; Berbert, J.
1972-01-01
On a plane earth, the ionospheric or tropospheric range error depends only on the total refractivity content or zeroth moment of the refracting layer and the elevation angle. On a spherical earth, however, the dependence is more complex, so for more accurate results it has been necessary to resort to complex ray-tracing calculations. A simple, high-accuracy alternative to the ray-tracing calculation is presented. By appropriate expansion of the angular dependence in the ray-tracing integral in a power series in height, an expression is obtained for the range error in terms of a simple function of the elevation angle, E, at the expansion height and of the mth moment of the refractivity (N) distribution about the expansion height. The rapidity of convergence is heavily dependent on the choice of expansion height. For expansion heights in the neighborhood of the centroid of the layer (300-490 km), the expansion to m = 2 (three terms) gives results accurate to about 0.4% at E = 10 deg. As an analytic tool, the expansion affords some insight on the influence of layer shape on range errors in special problems.
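For orientation, the plane-earth (zeroth-moment) term corresponds to the standard first-order ionospheric delay, proportional to total electron content and mapped to slant by 1/sin E; the sketch below evaluates that familiar relation with illustrative numbers and is not the paper's higher-moment expansion:

```python
import numpy as np

def ionospheric_range_error(tec_el_m2, freq_hz, elev_deg):
    """First-order (zeroth-moment, plane-earth) ionospheric range error in meters.

    delta_R = 40.3 * TEC / f^2, mapped to slant range by 1/sin(E).
    """
    vertical_delay = 40.3 * tec_el_m2 / freq_hz**2
    return vertical_delay / np.sin(np.radians(elev_deg))

tec = 50e16        # 50 TEC units, in electrons/m^2 (illustrative)
f_l1 = 1575.42e6   # GPS L1 frequency, Hz
for elev in (10, 30, 90):
    print(f"E = {elev:2d} deg: {ionospheric_range_error(tec, f_l1, elev):6.2f} m")
```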
On nonlinear evolution of low-frequency Alfvén waves in weakly-expanding solar wind plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nariyuki, Y.
A multi-dimensional nonlinear evolution equation for Alfvén waves in weakly-expanding solar wind plasmas is derived by using the reductive perturbation method. The expansion of solar wind plasma parcels is modeled by an expanding box model, which includes the accelerating expansion. It is shown that the resultant equation agrees with the Wentzel-Kramers-Brillouin prediction of the low-frequency Alfvén waves in the linear limit. In the cold and one-dimensional limit, a modified derivative nonlinear Schrödinger equation is obtained. Direct numerical simulations are carried out to discuss the effect of the expansion on the modulational instability of monochromatic Alfvén waves and the propagation of Alfvén solitons. By using the instantaneous frequency, it is quantitatively shown that as far as the expansion rate is much smaller than wave frequencies, effects of the expansion are almost adiabatic. It is also confirmed that while shapes of Alfvén solitons temporally change due to the expansion, some of them can stably propagate after their collision in weakly-expanding plasmas.
Tolerance analysis of optical telescopes using coherent addition of wavefront errors
NASA Technical Reports Server (NTRS)
Davenport, J. W.
1982-01-01
A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to produce a three-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter, and despace. A three-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.
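A hedged sketch of the underlying representation, fitting low-order Zernike coefficients to a wavefront error map by least squares (the specific polynomial terms and the synthetic wavefront are assumptions; the Ramsey-Korsch package itself is not reproduced):

```python
import numpy as np

# Sample the unit pupil on a grid.
x, y = np.meshgrid(np.linspace(-1, 1, 128), np.linspace(-1, 1, 128))
r, t = np.hypot(x, y), np.arctan2(y, x)
inside = r <= 1.0

# A few low-order Zernike terms (tilt, defocus, astigmatism, coma, spherical).
terms = [
    r * np.cos(t),                       # x tilt
    r * np.sin(t),                       # y tilt
    2 * r**2 - 1,                        # defocus
    r**2 * np.cos(2 * t),                # astigmatism
    (3 * r**3 - 2 * r) * np.cos(t),      # coma
    6 * r**4 - 6 * r**2 + 1,             # third-order spherical
]
A = np.column_stack([z[inside] for z in terms])

# Synthetic wavefront error: defocus + coma + noise, then fit coefficients.
w = 0.3 * terms[2] + 0.1 * terms[4] + 0.01 * np.random.default_rng(2).normal(size=r.shape)
coeffs, *_ = np.linalg.lstsq(A, w[inside], rcond=None)
print("fitted Zernike coefficients:", np.round(coeffs, 3))
```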
NASA Astrophysics Data System (ADS)
Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn
The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential.
Guided-mode interactions in thin films with surface corrugation
NASA Astrophysics Data System (ADS)
Seshadri, S. R.
1994-12-01
The guided modes in a thin-film planar dielectric waveguide sandwiched between a cover and a substrate (two different dielectrics) are considered. The interface between the cover and the film has a smooth corrugation in the longitudinal direction. For weak corrugations, the guided-mode interactions are investigated using the expansion in terms of ideal normal modes. A corresponding treatment is given for the not-so-weak corrugations using the expansion in terms of local normal modes. The coupling coefficients are evaluated and reduced to simple forms. The theories are specialized for the treatment of contradirectional coupling between two guided modes taking place selectively in the neighborhood of the Bragg frequency. The coupled-mode equations governing the contradirectional interaction obtained from the local normal mode expansion procedure, in the limit of weak periodic corrugations, are identical to those deduced directly using the ideal normal mode expansion technique. The treatments for both the transverse electric and the transverse magnetic modes are included.
NASA Astrophysics Data System (ADS)
Qiang, Bo; Brigham, John C.; Aristizabal, Sara; Greenleaf, James F.; Zhang, Xiaoming; Urban, Matthew W.
2015-02-01
In this paper, we propose a method to model shear wave propagation in transversely isotropic, viscoelastic, and incompressible media. The targeted application is ultrasound-based shear wave elastography for viscoelasticity measurements in anisotropic tissues such as the kidney and skeletal muscles. The proposed model predicts that if the viscoelastic parameters both across and along fiber directions can be characterized as a Voigt material, then the spatial phase velocity at any angle is also governed by a Voigt material model. Further, with the aid of Taylor expansions, it is shown that the spatial group velocity at any angle is close to a Voigt type for weakly attenuative materials within a certain bandwidth. The model is implemented in a finite element code by a time domain explicit integration scheme and shear wave simulations are conducted. The results of the simulations are analyzed to extract the shear wave elasticity and viscosity for both the spatial phase and group velocities. The estimated values match well with theoretical predictions. The proposed theory is further verified by an ex vivo experiment on porcine skeletal muscle using an ultrasound shear wave elastography method. The applicability of the Taylor expansion to analyze the spatial velocities is also discussed. We demonstrate that the approximations from the Taylor expansions are subject to errors when the viscosities across or along the fiber directions are large or the maximum frequency considered is beyond the bandwidth defined by the radii of convergence of the Taylor expansions.
Cohesive Errors in Writing among ESL Pre-Service Teachers
ERIC Educational Resources Information Center
Kwan, Lisa S. L.; Yunus, Melor Md
2014-01-01
Writing is a complex skill and one of the most difficult to master. A teacher's weak writing skills may negatively influence their students. Therefore, reinforcing teacher education by first determining pre-service teachers' writing weaknesses is imperative. This mixed-methods error analysis study aims to examine the cohesive errors in the writing…
NASA Astrophysics Data System (ADS)
Zhang, Yunchao; Charles, Christine; Boswell, Roderick W.
2017-07-01
This experimental study shows the validity of Sheridan's method in determining plasma density in low-pressure, weakly magnetized RF plasmas using ion saturation current data measured by a planar Langmuir probe. The ion density derived from Sheridan's method, which takes into account the sheath expansion around the negatively biased probe tip, is in good agreement with the electron density measured by a cylindrical RF-compensated Langmuir probe using the Druyvesteyn theory. The ion density obtained from the simplified method, which neglects the sheath expansion effect, overestimates the true density magnitude, e.g., by a factor of 3 to 12 for the present experiment.
NASA Astrophysics Data System (ADS)
Liu, B.; McLean, A. D.
1989-08-01
We report the LM-2 helium dimer interaction potential, from helium separations of 1.6 Å to dissociation, obtained by careful convergence studies with respect to configuration space, through a sequence of interacting correlated fragment (ICF) wave functions, and with respect to the primitive Slater-type basis used for orbital expansion. Parameters of the LM-2 potential are re=2.969 Å, rm=2.642 Å, and De=10.94 K, in near complete agreement with those of the best experimental potential of Aziz, McCourt, and Wong [Mol. Phys. 61, 1487 (1987)], which are re=2.963 Å, rm=2.637 Å, and De=10.95 K. The computationally estimated accuracy of each point on the potential is given; at re it is 0.03 K. Extrapolation procedures used to produce the LM-2 potential make use of the orbital basis inconsistency (OBI) and configuration base inconsistency (CBI) adjustments to separated fragment energies when computing the interaction energy. These components of basis set superposition error (BSSE) are given a full discussion.
Kowalevski's analysis of the swinging Atwood's machine
NASA Astrophysics Data System (ADS)
Babelon, O.; Talon, M.; Capdequi Peyranère, M.
2010-02-01
We study the Kowalevski expansions near singularities of the swinging Atwood's machine. We show that there is an infinite number of mass ratios M/m where such expansions exist with the maximal number of arbitrary constants. These expansions are of the so-called weak Painlevé type. However, in view of these expansions, it is not possible to distinguish between integrable and nonintegrable cases.
Catastrophic photometric redshift errors: Weak-lensing survey requirements
Bernstein, Gary; Huterer, Dragan
2010-01-11
We study the sensitivity of weak lensing surveys to the effects of catastrophic redshift errors - cases where the true redshift is misestimated by a significant amount. To compute the biases in cosmological parameters, we adopt an efficient linearized analysis where the redshift errors are directly related to shifts in the weak lensing convergence power spectra. We estimate the number N_spec of unbiased spectroscopic redshifts needed to determine the catastrophic error rate well enough that biases in cosmological parameters are below statistical errors of weak lensing tomography. While the straightforward estimate of N_spec is ~10^6, we find that using only the photometric redshifts with z ≤ 2.5 leads to a drastic reduction in N_spec to ~30,000 while negligibly increasing statistical errors in dark energy parameters. Therefore, the size of spectroscopic survey needed to control catastrophic errors is similar to that previously deemed necessary to constrain the core of the z_s - z_p distribution. We also study the efficacy of the recent proposal to measure redshift errors by cross-correlation between the photo-z and spectroscopic samples. We find that this method requires ~10% a priori knowledge of the bias and stochasticity of the outlier population, and is also easily confounded by lensing magnification bias. The cross-correlation method is therefore unlikely to supplant the need for a complete spectroscopic redshift survey of the source population.
Uncorrelated measurements of the cosmic expansion history and dark energy from supernovae
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Yun; Tegmark, Max
We present a method for measuring the cosmic expansion history H(z) in uncorrelated redshift bins, and apply it to current and simulated type Ia supernova data assuming spatial flatness. If the matter density parameter Ω_m can be accurately measured from other data, then the dark-energy density history X(z) = ρ_X(z)/ρ_X(0) can trivially be derived from this expansion history H(z). In contrast to customary 'black box' parameter fitting, our method is transparent and easy to interpret: the measurement of H(z)^{-1} in a redshift bin is simply a linear combination of the measured comoving distances for supernovae in that bin, making it obvious how systematic errors propagate from input to output. We find the Riess et al. (2004) gold sample to be consistent with the vanilla concordance model where the dark energy is a cosmological constant. We compare two mission concepts for the NASA/DOE Joint Dark-Energy Mission (JDEM), the Joint Efficient Dark-energy Investigation (JEDI) and the Supernova Acceleration Probe (SNAP), using simulated data including the effect of weak lensing (based on numerical simulations) and a systematic bias from K corrections. Estimating H(z) in seven uncorrelated redshift bins, we find that both provide dramatic improvements over current data: JEDI can measure H(z) to about 10% accuracy and SNAP to 30%-40% accuracy.
Ristić-Djurović, Jasna L; Ćirković, Saša; Mladenović, Pavle; Romčević, Nebojša; Trbovich, Alexander M
2018-04-01
A rough estimate indicated that the use of samples of size no larger than ten is not uncommon in biomedical research, and that many such studies are limited to strong effects because of sample sizes smaller than six. For data collected from biomedical experiments, it is also often unknown whether the mathematical requirements built into the sample comparison methods are satisfied. Computer-simulated experiments were used to examine the performance of methods for qualitative sample comparison and its dependence on the effectiveness of exposure, effect intensity, distribution of the studied parameter values in the population, and sample size. The Type I and Type II errors, their average, as well as the maximal errors were considered. A sample size of 9 with the t-test method at p = 5% ensured an error smaller than 5% even for weak effects. For sample sizes 6-8, the same method enabled detection of weak effects with errors smaller than 20%. If the sample sizes were 3-5, weak effects could not be detected with an acceptable error; however, the smallest maximal error in the most general case that includes weak effects is granted by the standard error of the mean method. The increase of sample size from 5 to 9 led to seven times more accurate detection of weak effects. Strong effects were detected regardless of the sample size and method used. The minimal recommended sample size for biomedical experiments is 9. Use of smaller sizes and the method of their comparison should be justified by the objective of the experiment.
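A minimal sketch of this kind of simulated comparison (the normal populations, the 1.5 SD "weak" effect, and the trial count are assumptions for illustration, not the paper's simulation design):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def error_rates(n, effect, trials=5000, alpha=0.05):
    """Monte Carlo Type I and Type II error rates for a two-sample t-test."""
    type1 = type2 = 0
    for _ in range(trials):
        a = rng.normal(0.0, 1.0, n)
        b_null = rng.normal(0.0, 1.0, n)      # no true effect
        b_eff = rng.normal(effect, 1.0, n)    # true effect present
        if stats.ttest_ind(a, b_null).pvalue < alpha:
            type1 += 1                        # false positive
        if stats.ttest_ind(a, b_eff).pvalue >= alpha:
            type2 += 1                        # missed detection
    return type1 / trials, type2 / trials

for n in (3, 6, 9):
    t1, t2 = error_rates(n, effect=1.5)       # "weak" effect of 1.5 SD (assumed)
    print(f"n = {n}: Type I = {t1:.3f}, Type II = {t2:.3f}")
```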
DOE Office of Scientific and Technical Information (OSTI.GOV)
Degroote, M.; Henderson, T. M.; Zhao, J.
We present a similarity transformation theory based on a polynomial form of a particle-hole pair excitation operator. In the weakly correlated limit, this polynomial becomes an exponential, leading to coupled cluster doubles. In the opposite, strongly correlated limit, the polynomial becomes an extended Bessel expansion and yields the projected BCS wavefunction. In between, we interpolate using a single parameter. The effective Hamiltonian is non-hermitian, and this Polynomial Similarity Transformation Theory follows the philosophy of traditional coupled cluster, left-projecting the transformed Hamiltonian onto subspaces of the Hilbert space in which the wave function variance is forced to be zero. Similarly, the interpolation parameter is obtained by minimizing the next residual in the projective hierarchy. We rationalize and demonstrate how and why coupled cluster doubles is ill suited to the strongly correlated limit whereas the Bessel expansion remains well behaved. The model provides accurate wave functions with energy errors that in its best variant are smaller than 1% across all interaction strengths. The numerical cost is polynomial in system size and the theory can be straightforwardly applied to any realistic Hamiltonian.
NASA Technical Reports Server (NTRS)
Drake, Jeremy J.; Lambert, David L.
1994-01-01
Sodium abundances have been determined for eight weak G-band giants whose atmospheres are greatly enriched with products of the CN-cycling H-burning reactions. Systematic errors are minimized by comparing the weak G-band giants to a sample of similar but normal giants. If, further, Ca is selected as a reference element, model atmosphere-related errors should largely be removed. For the weak G-band stars (Na/Ca) = 0.16 ± 0.01, which is just possibly greater than the result (Na/Ca) = 0.10 ± 0.03 from the normal giants. This result demonstrates that the atmospheres of the weak G-band giants are not seriously contaminated with products of ON cycling.
Convergence of Spectral Discretizations of the Vlasov--Poisson System
Manzini, G.; Funaro, D.; Delzanno, G. L.
2017-09-26
Here we prove the convergence of a spectral discretization of the Vlasov-Poisson system. The velocity term of the Vlasov equation is discretized using either Hermite functions on the infinite domain or Legendre polynomials on a bounded domain. The spatial term of the Vlasov and Poisson equations is discretized using periodic Fourier expansions. Boundary conditions are treated in weak form through a penalty-type term that can be applied also in the Hermite case. In fact, the stability properties of the approximated scheme follow from this added term. The convergence analysis is carried out in detail for the 1D-1V case, but the results can be generalized to multidimensional domains, obtained as Cartesian products, in both space and velocity. The error estimates show spectral convergence under suitable regularity assumptions on the exact solution.
Brain State Before Error Making in Young Patients With Mild Spastic Cerebral Palsy.
Hakkarainen, Elina; Pirilä, Silja; Kaartinen, Jukka; van der Meere, Jaap J
2015-10-01
In the present experiment, children with mild spastic cerebral palsy and a control group carried out a memory recognition task. The key question was whether errors of the patient group are foreshadowed by attention lapses, by weak motor preparation, or by both. Reaction times together with event-related potentials associated with motor preparation (frontal late contingent negative variation), attention (parietal P300), and response evaluation (parietal error-preceding positivity) were investigated in instances where 3 subsequent correct trials preceded an error. The findings indicated that error responses of the patient group are foreshadowed by weak motor preparation in correct trials directly preceding an error.
Weak charge form factor and radius of 208Pb through parity violation in electron scattering
Horowitz, C. J.; Ahmed, Z.; Jen, C. -M.; ...
2012-03-26
We use distorted wave electron scattering calculations to extract the weak charge form factor F_W(q̄), the weak charge radius R_W, and the point neutron radius R_n of 208Pb from the PREX parity-violating asymmetry measurement. The form factor is the Fourier transform of the weak charge density at the average momentum transfer q̄ = 0.475 fm^-1. We find F_W(q̄) = 0.204 ± 0.028(exp) ± 0.001(model). We use the Helm model to infer the weak radius from F_W(q̄). We find R_W = 5.826 ± 0.181(exp) ± 0.027(model) fm. Here the (exp) error includes PREX statistical and systematic errors, while the (model) error describes the uncertainty in R_W from uncertainties in the surface thickness σ of the weak charge density. The weak radius is larger than the charge radius, implying a 'weak charge skin' where the surface region is relatively enriched in weak charges compared to (electromagnetic) charges. We extract the point neutron radius R_n = 5.751 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm from R_W. Here there is only a very small error (strange) from possible strange quark contributions. We find R_n to be slightly smaller than R_W because of the nucleon's size. As a result, we find a neutron skin thickness of R_n - R_p = 0.302 ± 0.175(exp) ± 0.026(model) ± 0.005(strange) fm, where R_p is the point proton radius.
High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.
Wang, Fei; Xie, Zhaoxin; Chen, Zuo
2014-01-01
Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: a relatively low signal-to-noise ratio (SNR) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control capability. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by our proposed scheme. Experiments show that this algorithm improves the SNR of embedded audio signals and the embedding capacity, drastically reduces the length of the location map, and enhances capacity control capability.
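The embed/extract cycle of basic prediction error expansion can be sketched in a few lines (a minimal sketch: the previous-sample predictor is an assumption, and the threshold, overflow handling, and histogram shifting of the actual scheme are omitted):

```python
import numpy as np

def pee_embed(x, bits):
    """Embed one payload bit per sample by expanding prediction errors."""
    y = x.copy()
    for i, b in zip(range(1, len(x)), bits):
        p = int(y[i - 1])          # causal predictor: previous stego sample
        e = int(x[i]) - p          # prediction error
        y[i] = p + 2 * e + b       # expanded error carries the payload bit
    return y

def pee_extract(y):
    """Recover the payload bits and losslessly restore the original samples."""
    x = y.copy()
    bits = []
    for i in range(1, len(y)):
        p = int(y[i - 1])          # same predictor value the embedder used
        e2 = int(y[i]) - p         # expanded prediction error
        bits.append(e2 & 1)        # payload bit is the LSB of the expanded error
        x[i] = p + (e2 >> 1)       # floor(e2 / 2) restores the original error
    return x, bits

x = np.array([100, 102, 101, 105, 104, 106], dtype=np.int64)
payload = [1, 0, 1, 1, 0]
y = pee_embed(x, payload)
x_rec, bits = pee_extract(y)
assert np.array_equal(x, x_rec) and bits == payload
print("stego:", y, "| lossless recovery:", np.array_equal(x, x_rec), "| bits:", bits)
```

Real schemes restrict expansion to errors below a threshold and record skipped positions in a location map, which is exactly the auxiliary information the histogram shifting step above is designed to shrink.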
Baron, Charles A.; Awan, Musaddiq J.; Mohamed, Abdallah S. R.; Akel, Imad; Rosenthal, David I.; Gunn, G. Brandon; Garden, Adam S.; Dyer, Brandon A.; Court, Laurence; Sevak, Parag R; Kocak-Uzel, Esengul; Fuller, Clifton D.
2016-01-01
Larynx may alternatively serve as a target or organ-at-risk (OAR) in head and neck cancer (HNC) image-guided radiotherapy (IGRT). The objective of this study was to estimate IGRT parameters required for larynx positional error independent of isocentric alignment and suggest population-based compensatory margins. Ten HNC patients receiving radiotherapy (RT) with daily CT-on-rails imaging were assessed. Seven landmark points were placed on each daily scan. Taking the most superior-anterior point of the C5 vertebra as a reference isocenter for each scan, residual displacement vectors to the other 6 points were calculated post-isocentric alignment. Subsequently, using the first scan as a reference, the magnitudes of the vector differences for all 6 points over the course of treatment were calculated. Residual systematic and random error, and the necessary compensatory CTV-to-PTV and OAR-to-PRV margins, were calculated using both observational cohort data and a bootstrap-resampled population estimator. The grand mean displacement for all anatomical points was 5.07 mm, with a mean systematic error of 1.1 mm and a mean random setup error of 2.63 mm, while the bootstrapped POIs' grand mean displacement was 5.09 mm, with a mean systematic error of 1.23 mm and a mean random setup error of 2.61 mm. The required margin for CTV-to-PTV expansion was 4.6 mm for all cohort points, while the bootstrap estimator of the equivalent margin was 4.9 mm. The calculated OAR-to-PRV expansion for the observed residual setup error was 2.7 mm, with a bootstrap-estimated expansion of 2.9 mm. We conclude that the interfractional larynx setup error is a significant source of RT setup/delivery error in HNC, both when the larynx is considered as a CTV or an OAR. We estimate the need for a uniform expansion of 5 mm to compensate for setup error if the larynx is a target, or 3 mm if the larynx is an OAR, when using a non-laryngeal bony isocenter.
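The quoted margins are consistent with the standard population recipes; as an inference on our part (the van Herk CTV-to-PTV recipe 2.5Σ + 0.7σ and the McKenzie OAR-to-PRV recipe 1.3Σ + 0.5σ are assumed, not stated in the abstract), the reported errors reproduce the quoted values:

```python
# Population-based margin recipes applied to the residual errors reported above.
# Assumed formulas (not stated in the abstract): van Herk CTV-to-PTV margin
# 2.5*Sigma + 0.7*sigma and McKenzie OAR-to-PRV margin 1.3*Sigma + 0.5*sigma.
sigma_systematic = 1.1   # mm, mean systematic error (Sigma)
sigma_random = 2.63      # mm, mean random setup error (sigma)

ctv_to_ptv = 2.5 * sigma_systematic + 0.7 * sigma_random
oar_to_prv = 1.3 * sigma_systematic + 0.5 * sigma_random

print(f"CTV-to-PTV margin: {ctv_to_ptv:.1f} mm")  # 4.6 mm, as quoted
print(f"OAR-to-PRV margin: {oar_to_prv:.1f} mm")  # 2.7 mm, as quoted
```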
Identifying and reducing error in cluster-expansion approximations of protein energies.
Hahn, Seungsoo; Ashenberg, Orr; Grigoryan, Gevorg; Keating, Amy E
2010-12-01
Protein design involves searching a vast space for sequences that are compatible with a defined structure. This can pose significant computational challenges. Cluster expansion is a technique that can accelerate the evaluation of protein energies by generating a simple functional relationship between sequence and energy. The method consists of several steps. First, for a given protein structure, a training set of sequences with known energies is generated. Next, this training set is used to expand energy as a function of clusters consisting of single residues, residue pairs, and higher order terms, if required. The accuracy of the sequence-based expansion is monitored and improved using cross-validation testing and iterative inclusion of additional clusters. As a trade-off for evaluation speed, the cluster-expansion approximation causes prediction errors, which can be reduced by including more training sequences, including higher order terms in the expansion, and/or reducing the sequence space described by the cluster expansion. This article analyzes the sources of error and introduces a method whereby accuracy can be improved by judiciously reducing the described sequence space. The method is applied to describe the sequence-stability relationship for several protein structures: coiled-coil dimers and trimers, a PDZ domain, and T4 lysozyme as examples with computationally derived energies, and SH3 domains in amphiphysin-1 and endophilin-1 as examples where the expanded pseudo-energies are obtained from experiments. Our open-source software package Cluster Expansion Version 1.0 allows users to expand their own energy function of interest and thereby apply cluster expansion to custom problems in protein design.
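A toy version of the training step described above, regressing energies of a training set onto single-residue and residue-pair cluster functions (the synthetic alphabet, energy function, and plain least-squares fit are assumptions; the published package is not reproduced):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(4)
n_pos, n_aa, n_train = 8, 4, 600   # positions, alphabet size, training sequences

def features(seq):
    """One-hot cluster functions: single-residue and residue-pair indicators."""
    f = []
    for a in seq:                                            # point clusters
        f.extend(1.0 if a == k else 0.0 for k in range(n_aa))
    for (i, a), (j, b) in combinations(enumerate(seq), 2):   # pair clusters
        f.extend(1.0 if (a, b) == (k, l) else 0.0
                 for k in range(n_aa) for l in range(n_aa))
    return np.array(f)

# Synthetic "true" energies with point and pair contributions plus noise.
seqs = rng.integers(0, n_aa, size=(n_train, n_pos))
X = np.array([features(s) for s in seqs])
w_true = rng.normal(0, 1, X.shape[1])
E = X @ w_true + rng.normal(0, 0.05, n_train)

# Fit the cluster-expansion coefficients by least squares.
w_fit = np.linalg.lstsq(X, E, rcond=None)[0]

# Evaluate the prediction error on held-out sequences.
test = rng.integers(0, n_aa, size=(200, n_pos))
Xt = np.array([features(s) for s in test])
rmse = np.sqrt(np.mean((Xt @ w_fit - Xt @ w_true) ** 2))
print(f"held-out RMSE of cluster-expansion energies: {rmse:.3f}")
```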
Dhatt, Sharmistha; Bhattacharyya, Kamal
2012-08-01
Appropriate constructions of Padé approximants are believed to provide reasonable estimates of the asymptotic (large-coupling) amplitude and exponent of an observable, given its weak-coupling expansion to some desired order. In many instances, however, sequences of such approximants are seen to converge very poorly. We outline here a strategy that exploits the idea of fractional calculus to considerably improve the convergence behavior. Pilot calculations on the ground-state perturbative energy series of quartic, sextic, and octic anharmonic oscillators clearly reveal the worth of our endeavor.
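A plain (non-fractional) diagonal Padé resummation can be built directly from series coefficients; the toy example below uses f(g) = (1+g)^(-1/2), whose weak-coupling series is known exactly, rather than the oscillator series, and illustrates the failure mode the paper targets:

```python
from mpmath import mp, binomial, mpf, pade

mp.dps = 30

# Weak-coupling Taylor coefficients of f(g) = (1 + g)^(-1/2), a toy stand-in
# for a perturbative energy series.
order = 16
coeffs = [binomial(mpf(-0.5), k) for k in range(order + 1)]

p, q = pade(coeffs, order // 2, order // 2)   # diagonal [8/8] approximant

def poly(c, x):
    """Evaluate a polynomial given ascending-order coefficients."""
    return sum(ck * x**k for k, ck in enumerate(c))

for g in (1, 10, 100, 1000):
    approx = poly(p, g) / poly(q, g)
    exact = (1 + mpf(g)) ** mpf(-0.5)
    print(f"g = {g:5d}: Pade = {float(approx):.6f}, exact = {float(exact):.6f}")
```

Since a diagonal [N/N] approximant tends to a constant as g grows, it cannot reproduce the true g^(-1/2) falloff; this is the kind of poor large-coupling convergence that motivates the fractional-calculus modification.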
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chan, Poh Kam; Kosaka, Wataru; Oikawa, Shun-ichi
We have solved the Heisenberg equation of motion for the time evolution of the position and momentum operators for a non-relativistic spinless charged particle in the presence of a weakly non-uniform electric and magnetic field. It is shown that the drift velocity operator obtained in this study agrees with the classical counterpart and that, using the time-dependent operators, the variances in position and momentum grow with time. The expansion rates of the variances in position and momentum depend on the magnetic gradient scale length but are independent of the electric gradient scale length. In the presence of a weakly non-uniform electric and magnetic field, the theoretical variance expansion rates are in good agreement with the numerical analysis. It is analytically shown that the variance in position reaches the square of the interparticle separation in a characteristic time much shorter than the proton collision time of fusion plasmas. After this time, the wavefunctions of neighboring particles would overlap, and as a result the conventional classical analysis may lose its validity.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@nao.ac.jp, E-mail: tof@astr.tohoku.ac.jp
This is the third paper on the improvement of systematic errors in weak lensing analysis using an elliptical weight function, referred to as E-HOLICs. In previous papers, we succeeded in avoiding errors that depend on the ellipticity of the background image. In this paper, we investigate the systematic error that depends on the signal-to-noise ratio of the background image. We find that the origin of this error is the random count noise that comes from the Poisson noise of sky counts. The random count noise makes additional moments and centroid shift error, and those first-order effects are canceled in averaging, but the second-order effects are not canceled. We derive the formulae that correct this systematic error due to the random count noise in measuring the moments and ellipticity of the background image. The correction formulae obtained are expressed as combinations of complex moments of the image, and thus can correct the systematic errors caused by each object. We test their validity using a simulated image and find that the systematic error becomes less than 1% in the measured ellipticity for objects with an IMCAT significance threshold of ν ≈ 11.7.
Older drivers: On-road and off-road test results.
Selander, Helena; Lee, Hoe C; Johansson, Kurt; Falkmer, Torbjörn
2011-07-01
Eighty-five volunteer drivers, 65-85 years old, without cognitive impairments impacting their driving were examined in order to investigate driving errors characteristic of older drivers. In addition, any relationships between cognitive off-road and on-road test results, the latter being the gold standard, were identified. Performance measurements included the Trail Making Test (TMT), Nordic Stroke Driver Screening Assessment (NorSDSA), Useful Field of View (UFOV), self-rated driving performance, and the two on-road protocols P-Drive and ROA. Some of the older drivers displayed questionable driving behaviour. In total, 21% of the participants failed the on-road assessment. Some of the specific errors were more serious than others. The most common driving errors involved speed: exceeding the speed limit or not controlling the speed. Correlations with the P-Drive protocol were established for the NorSDSA total score (weak), UFOV subtest 2 (weak), and UFOV subtest 3 (moderate). Correlations with the ROA protocol were established for UFOV subtest 2 (weak) and UFOV subtest 3 (weak). P-Drive and self-ratings correlated weakly, whereas no correlation between self-ratings and the ROA protocol was found. The results suggest that specific problems or errors seen in an older person's driving can actually be "normal driving behaviours".
NASA Technical Reports Server (NTRS)
Tolson, R. H.
1981-01-01
A technique is described for evaluating the influence of spatial sampling on the determination of global mean total columnar ozone. A finite number of coefficients in the expansion are determined, and the truncated part of the expansion is shown to contribute an error to the estimate that depends strongly on the spatial sampling and is relatively insensitive to data noise. First- and second-order statistics are derived for each term in a spherical harmonic expansion representing the ozone field, and the statistics are used to estimate systematic and random errors in the estimates of total ozone.
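A hedged sketch of the kind of truncated spherical-harmonic fit involved (the synthetic field, sampling pattern, and degree cutoff are assumptions; note that scipy's sph_harm takes the azimuthal angle before the polar angle):

```python
import numpy as np
from scipy.special import sph_harm

rng = np.random.default_rng(6)

# Synthetic ozone field: a global mean plus a few low-degree harmonics.
def field(colat, az):
    return (300.0
            + 40.0 * np.real(sph_harm(0, 2, az, colat))
            + 25.0 * np.real(sph_harm(1, 3, az, colat)))

# Sparse, nonuniform sampling, loosely mimicking satellite ground tracks.
colat = np.arccos(rng.uniform(-1, 1, 200))
az = rng.uniform(0, 2 * np.pi, 200)
obs = field(colat, az) + rng.normal(0, 2.0, 200)

# Least-squares fit of a real harmonic basis truncated at degree L.
L = 4
cols = []
for n in range(L + 1):
    cols.append(np.real(sph_harm(0, n, az, colat)))
    for m in range(1, n + 1):
        Y = sph_harm(m, n, az, colat)
        cols.append(np.sqrt(2) * np.real(Y))
        cols.append(np.sqrt(2) * np.imag(Y))
A = np.column_stack(cols)
c, *_ = np.linalg.lstsq(A, obs, rcond=None)

# The Y00 coefficient carries the global mean: mean = c00 * Y00.
y00 = float(np.real(sph_harm(0, 0, 0.0, 0.0)))
print(f"estimated global mean: {c[0] * y00:.1f} (true 300.0)")
```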
ERIC Educational Resources Information Center
Liu, Xiaochen; Marchis, Lavinia; DeBiase, Emily; Breaux, Kristina C.; Courville, Troy; Pan, Xingyu; Hatcher, Ryan C.; Koriakin, Taylor; Choi, Dowon; Kaufman, Alan S.
2017-01-01
This study investigated the relationship between specific cognitive patterns of strengths and weaknesses (PSWs) and the errors children make in reading, writing, and spelling tests from the Kaufman Test of Educational Achievement-Third Edition (KTEA-3). Participants were selected from the KTEA-3 standardization sample based on five cognitive…
Patterns of Cognitive Strengths and Weaknesses and Relationships to Math Errors
ERIC Educational Resources Information Center
Koriakin, Taylor; White, Erica; Breaux, Kristina C.; DeBiase, Emily; O'Brien, Rebecca; Howell, Meiko; Costa, Michael; Liu, Xiaochen; Pan, Xingyu; Courville, Troy
2017-01-01
This study investigated cognitive patterns of strengths and weaknesses (PSW) and their relationship to patterns of math errors on the Kaufman Test of Educational Achievement (KTEA-3). Participants, ages 5 to 18, were selected from the KTEA-3 standardization sample if they met one of two PSW profiles: high crystallized ability (Gc) paired with low…
Interferometer for Measuring Displacement to Within 20 pm
NASA Technical Reports Server (NTRS)
Zhao, Feng
2003-01-01
An optical heterodyne interferometer that can be used to measure linear displacements with an error ≤ 20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces self-interference (relative to the amplitude splits used in other interferometers) and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams, and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.
A new method to generate large order low temperature expansions for discrete spin models
NASA Astrophysics Data System (ADS)
Bhanot, Gyan
1993-03-01
I describe work done in collaboration with Michael Creutz at BNL and Jan Lacki at IAS Princeton. We have developed a method to generate very high order low temperature (weak coupling) expansions for discrete spin systems. For the 3-d and 4-d Ising model, we give results for the low temperature expansion of the average free energy to 50 and 44 excited bonds respectively.
NASA Astrophysics Data System (ADS)
Choi, J. H.; Kim, S. W.; Won, J. S.
2017-12-01
The objective of this study is monitoring and evaluating the stability of buildings in Seoul, Korea. This study includes both algorithm development and application to a case study. The development focuses on improving the PSI approach for discriminating various geophysical phase components and separating them from the target displacement phase. Thermal expansion is one of the key components that make precise displacement measurement difficult. The core idea is to optimize the thermal expansion factor using air temperature data and to model the corresponding phase by fitting the residual phase. We used TerraSAR-X SAR data acquired over two years from 2011 to 2013 in Seoul, Korea, where the seasonal temperature fluctuation is considerable. Another problem is the highly developed skyscrapers in Seoul, which contribute substantially to DEM errors. To avoid a high computational burden and an unstable solution of the nonlinear equation due to the unknown parameters (a thermal expansion parameter as well as two conventional parameters, linear velocity and DEM error), we separate the phase model into two main steps. First, multi-baseline pairs with very short time intervals, in which deformation components and thermal expansion are negligible, are used to estimate DEM errors. Second, single-baseline pairs are used to estimate the two remaining parameters, linear deformation rate and thermal expansion. The thermal expansion of buildings correlates closely with the seasonal temperature fluctuation. Figure 1 shows the deformation patterns of two selected buildings in Seoul. In the left column of Figure 1, it is difficult to observe the true ground subsidence because of a large cyclic pattern caused by thermal dilation of the buildings; the thermal dilation often misleads the results into wrong conclusions. After correction by the proposed method, the true ground subsidence could be precisely measured, as in the bottom-right panel of Figure 1. The results demonstrate how the thermal expansion phase obscures the time-series measurement of ground motion and how well the proposed approach removes the noise phases caused by thermal expansion and DEM errors. Some of the detected displacements matched well with pre-reported events, such as ground subsidence and sinkholes.
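A schematic joint fit separating a linear deformation rate from thermal dilation in an unwrapped PS phase time series (the wavelength, epochs, temperatures, and noise level are invented for illustration; the paper's two-step DEM-error estimation is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)
wavelength = 0.031                    # m, X-band (TerraSAR-X)

# Acquisition epochs (years) and air temperature at each epoch (deg C).
t = np.sort(rng.uniform(0, 2, 40))
temp = 12 + 18 * np.sin(2 * np.pi * t) + rng.normal(0, 2, 40)

# Simulated unwrapped phase: linear subsidence + thermal dilation of the building.
v_true, k_true = -0.008, 0.0004       # m/yr subsidence, m/degC thermal dilation
los = v_true * t + k_true * (temp - temp.mean())
phase = (4 * np.pi / wavelength) * los + rng.normal(0, 0.3, 40)

# Joint least-squares estimate of velocity and thermal-expansion coefficient.
G = (4 * np.pi / wavelength) * np.column_stack([t, temp - temp.mean()])
(v_est, k_est), *_ = np.linalg.lstsq(G, phase, rcond=None)
print(f"velocity: {v_est*1000:+.2f} mm/yr (true {v_true*1000:+.2f})")
print(f"thermal coeff: {k_est*1000:.3f} mm/degC (true {k_true*1000:.3f})")
```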
Falsification of dark energy by fluid mechanics
NASA Astrophysics Data System (ADS)
Gibson, Carl H.
2011-11-01
The 2011 Nobel Prize in Physics was awarded for the discovery, from observations of increased supernova dimness interpreted as distance, that the Universe's expansion rate has changed from one decreasing since the big bang to one that is now increasing, driven by anti-gravity forces of a mysterious dark energy material comprising 70% of the Universe's mass-energy. Fluid mechanical considerations falsify both the accelerating expansion and dark energy concepts. Kinematic viscosity is neglected in current standard models of self-gravitational structure formation, which rely on cold dark matter CDM condensations and clusterings that are also falsified by fluid mechanics. Weakly collisional CDM particles do not condense but diffuse away. Photon viscosity predicts supercluster-void fragmentation early in the plasma epoch and protogalaxies at the end. At the plasma-gas transition, the plasma fragments into Earth-mass gas planets in trillion-planet clumps (proto-globular-star-cluster PGCs). The hydrogen planets freeze to form the dark matter of galaxies and merge to form their stars. Dark energy is a systematic dimming error for Supernovae Ia caused by dark matter planets near hot white dwarf stars at the Chandrasekhar carbon limit. Evaporated planet atmospheres may or may not scatter light from the events depending on the line of sight.
ERIC Educational Resources Information Center
Uehara, Soichi
This study was made to determine the most prevalent errors, areas of weakness, and their frequency in the writing of letters so that a course in business communications classes at Kapiolani Community College (Hawaii) could be prepared that would help students learn to write effectively. The 55 participating students were divided into two groups…
ERIC Educational Resources Information Center
Ottone-Cross, Karen L.; Dulong-Langley, Susan; Root, Melissa M.; Gelbar, Nicholas; Bray, Melissa A.; Luria, Sarah R.; Choi, Dowon; Kaufman, James C.; Courville, Troy; Pan, Xingyu
2017-01-01
An understanding of the strengths, weaknesses, and achievement profiles of students with giftedness and learning disabilities (G&LD) is needed to address their asynchronous development. This study examines the subtests and error factors in the Kaufman Test of Educational Achievement--Third Edition (KTEA-3) for strength and weakness patterns of…
3D Anisotropy of Solar Wind Turbulence, Tubes, or Ribbons?
NASA Astrophysics Data System (ADS)
Verdini, Andrea; Grappin, Roland; Alexandrova, Olga; Lion, Sonny
2018-01-01
We study the anisotropy of turbulent magnetic fluctuations at magnetofluid scales in the solar wind with respect to the local magnetic field. Previous measurements in the fast solar wind found axisymmetric anisotropy, even though the analysis method allows for nonaxisymmetric structures. These results are probably contaminated by the wind expansion, which introduces another symmetry axis, namely the radial direction, as indicated by recent numerical simulations. These simulations also show that when the expansion is strong, the principal fluctuations lie in the plane perpendicular to the radial direction. Using this property, we separate 11 yr of Wind spacecraft data into two subsets characterized by strong and weak expansion and determine the corresponding turbulence anisotropy. Under strong expansion, the small-scale anisotropy is consistent with the Goldreich & Sridhar critical balance; as in previous works in which the radial symmetry axis is not eliminated, the turbulent structures are field-aligned tubes. Under weak expansion, we find the 3D anisotropy predicted by the Boldyrev model, that is, the turbulent structures are ribbons rather than tubes. However, the very basis of the Boldyrev phenomenology, namely a cross-helicity increasing at small scales, is not observed in the solar wind: the origin of the ribbon formation is unknown.
Weak meson decays and the 1/Nc expansion
NASA Astrophysics Data System (ADS)
Tadić, Dubravko; Trampetić, Josip
1982-07-01
In the QCD-corrected weak Hamiltonian, the leading terms in the large-N_c limit give a reasonable description of D → Kπ decays and good values for the K → ππ decay amplitudes.
Feischl, Michael; Gantner, Gregor; Praetorius, Dirk
2015-01-01
We consider the Galerkin boundary element method (BEM) for weakly singular integral equations of the first kind in 2D. We analyze a residual-type a posteriori error estimator which provides both a lower and an upper bound for the unknown Galerkin BEM error. The required assumptions are weak and allow for piecewise smooth parametrizations of the boundary, local mesh-refinement, and related standard piecewise polynomials as well as NURBS. In particular, our analysis gives a first contribution to adaptive BEM in the framework of isogeometric analysis (IGABEM), for which we formulate an adaptive algorithm which steers the local mesh-refinement and the multiplicity of the knots. Numerical experiments underline the theoretical findings and show that the proposed adaptive strategy leads to optimal convergence. PMID:26085698
NASA Technical Reports Server (NTRS)
Bhatt, R. T.; Palczer, A. R.
1994-01-01
Thermal expansion curves for SiC fiber-reinforced reaction-bonded Si3N4 matrix composites (SiC/RBSN) and unreinforced RBSN were measured from 25 to 1400 C in nitrogen and in oxygen. The effects of fiber/matrix bonding and cycling on the thermal expansion curves and room-temperature tensile properties of unidirectional composites were determined. The measured thermal expansion curves were compared with those predicted from composite theory; the predicted curves parallel to the fiber direction for both bonding cases were similar to those of the weakly bonded composites, while the curves normal to the fiber direction differed between the two bonding cases. Thermal cycling in nitrogen resulted in no net dimensional changes at room temperature and no loss in tensile properties from the as-fabricated condition. In contrast, thermal cycling in oxygen for both composites caused volume expansion, primarily due to internal oxidation of the RBSN. Cyclic oxidation affected the mechanical properties of the weakly bonded SiC/RBSN composites the most, resulting in loss of strain capability beyond matrix fracture and catastrophic, brittle fracture. Increased bonding between the SiC fiber and RBSN matrix due to oxidation of the carbon-rich fiber surface coating, and an altered residual stress pattern in the composite due to internal oxidation of the matrix, are the main reasons for the poor mechanical performance of these composites.
ERIC Educational Resources Information Center
Breaux, Kristina C.; Avitia, Maria; Koriakin, Taylor; Bray, Melissa A.; DeBiase, Emily; Courville, Troy; Pan, Xingyu; Witholt, Thomas; Grossman, Sandy
2017-01-01
This study investigated the relationship between specific cognitive patterns of strengths and weaknesses and the errors children make on oral language, reading, writing, spelling, and math subtests from the Kaufman Test of Educational Achievement-Third Edition (KTEA-3). Participants with scores from the KTEA-3 and either the Wechsler Intelligence…
NASA Astrophysics Data System (ADS)
Shaw, Jeremy A.; Daescu, Dacian N.
2017-08-01
This article presents the mathematical framework to evaluate the sensitivity of a forecast error aspect to the input parameters of a weak-constraint four-dimensional variational data assimilation system (w4D-Var DAS), extending the established theory from strong-constraint 4D-Var. Emphasis is placed on the derivation of the equations for evaluating the forecast sensitivity to parameters in the DAS representation of the model error statistics, including bias, standard deviation, and correlation structure. A novel adjoint-based procedure for adaptive tuning of the specified model error covariance matrix is introduced. Results from numerical convergence tests establish the validity of the model error sensitivity equations. Preliminary experiments providing a proof-of-concept are performed using the Lorenz multi-scale model to illustrate the theoretical concepts and potential benefits for practical applications.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shirasaki, Masato; Yoshida, Naoki, E-mail: masato.shirasaki@utap.phys.s.u-tokyo.ac.jp
2014-05-01
The measurement of cosmic shear using weak gravitational lensing is a challenging task that involves a number of complicated procedures. We study in detail the systematic errors in the measurement of weak-lensing Minkowski functionals (MFs). Specifically, we focus on systematics associated with galaxy shape measurements, photometric redshift errors, and the shear calibration correction. We first generate mock weak-lensing catalogs that directly incorporate the actual observational characteristics of the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS). We then perform a Fisher analysis using the large set of mock catalogs for various cosmological models. We find that the statistical error associated with the observational effects degrades the cosmological parameter constraints by a factor of a few. The Subaru Hyper Suprime-Cam (HSC) survey, with a sky coverage of ∼1400 deg^2, will constrain the dark energy equation-of-state parameter with an error of Δw_0 ∼ 0.25 by the lensing MFs alone, but biases induced by the systematics can be comparable to the 1σ error. We conclude that the lensing MFs are powerful statistics beyond two-point statistics only if well-calibrated measurements of both the redshifts and the shapes of source galaxies are performed. Finally, we analyze the CFHTLenS data to explore the ability of the MFs to break degeneracies between a few cosmological parameters. Using a combined analysis of the MFs and the shear correlation function, we derive the matter density Ω_{m0} = 0.256^{+0.054}_{-0.046}.
Estimates of the Modeling Error of the α-Models of Turbulence in Two and Three Space Dimensions
NASA Astrophysics Data System (ADS)
Dunca, Argus A.
2017-12-01
This report investigates the convergence rate of the weak solutions w^α of the Leray-α, modified Leray-α, Navier-Stokes-α, and zeroth ADM turbulence models to a weak solution u of the Navier-Stokes equations. It is assumed that this weak solution u of the NSE belongs to the space L^4(0,T; H^1). It is shown that under this regularity condition the error u − w^α is O(α) in the norms L^2(0,T; H^1) and L^∞(0,T; L^2), thus improving related known results. It is also shown that the averaged error \overline{u} − \overline{w^α} is of higher order, O(α^{1.5}), in the same norms; therefore, the α-regularizations considered herein approximate filtered flow structures better than the exact (unfiltered) flow velocities.
Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise
2013-05-01
To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced <10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores than preschoolers who produced fewer distortion errors. Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschoolers may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.
Expansion shock waves in regularized shallow-water theory
NASA Astrophysics Data System (ADS)
El, Gennady A.; Hoefer, Mark A.; Shearer, Michael
2016-05-01
We identify a new type of shock wave by constructing a stationary expansion shock solution of a class of regularized shallow-water equations that include the Benjamin-Bona-Mahony and Boussinesq equations. An expansion shock exhibits divergent characteristics, thereby contravening the classical Lax entropy condition. The persistence of the expansion shock in initial value problems is analysed and justified using matched asymptotic expansions and numerical simulations. The expansion shock's existence is traced to the presence of a non-local dispersive term in the governing equation. We establish the algebraic decay of the shock as it is gradually eroded by a simple wave on either side. More generally, we observe a robustness of the expansion shock in the presence of weak dissipation and in simulations of asymmetric initial conditions where a train of solitary waves is shed from one side of the shock.
Understanding the many-body expansion for large systems. II. Accuracy considerations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lao, Ka Un; Liu, Kuan-Yu; Richard, Ryan M.
2016-04-28
To complement our study of the role of finite precision in electronic structure calculations based on a truncated many-body expansion (MBE, or "n-body expansion"), we examine the accuracy of such methods in the present work. Accuracy may be defined either with respect to a supersystem calculation computed at the same level of theory as the n-body calculations, or alternatively with respect to high-quality benchmarks. Both metrics are considered here. In applications to a sequence of water clusters, (H_2O)_{N=6-55}, described at the B3LYP/cc-pVDZ level, we obtain mean absolute errors (MAEs) per H_2O monomer of ∼1.0 kcal/mol for two-body expansions, where the benchmark is a B3LYP/cc-pVDZ calculation on the entire cluster. Three- and four-body expansions exhibit MAEs of 0.5 and 0.1 kcal/mol/monomer, respectively, without resort to charge embedding. A generalized many-body expansion truncated at two-body terms [GMBE(2)], using 3-4 H_2O molecules per fragment, outperforms all of these methods and affords a MAE of ∼0.02 kcal/mol/monomer, also without charge embedding. GMBE(2) requires significantly fewer (although somewhat larger) subsystem calculations as compared to MBE(4), reducing problems associated with floating-point roundoff errors. When compared to high-quality benchmarks, we find that error cancellation often plays a critical role in the success of MBE(n) calculations, even at the four-body level, as basis-set superposition error can compensate for higher-order polarization interactions. A many-body counterpoise correction is introduced for the GMBE, and its two-body truncation [GMBCP(2)] is found to afford good results without error cancellation. Together with a method such as ωB97X-V/aug-cc-pVTZ that can describe both covalent and non-covalent interactions, the GMBE(2)+GMBCP(2) approach provides an accurate, stable, and tractable approach for large systems.
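As a generic illustration of the truncated expansion discussed above (not the authors' fragmentation code), the two-body approximation assembles the total energy from precomputed monomer and dimer energies as E ≈ Σ_i E_i + Σ_{i<j} (E_ij − E_i − E_j); the fragment energies below are hypothetical hartree values.

```python
from itertools import combinations

def mbe2_energy(monomer_E, dimer_E):
    """Two-body truncated many-body expansion, MBE(2):
    E ≈ Σ_i E_i + Σ_{i<j} (E_ij − E_i − E_j)."""
    total = sum(monomer_E.values())
    for i, j in combinations(sorted(monomer_E), 2):
        total += dimer_E[(i, j)] - monomer_E[i] - monomer_E[j]
    return total

# Hypothetical fragment energies (hartree) for a 3-monomer cluster
monomers = {0: -76.40, 1: -76.41, 2: -76.39}
dimers = {(0, 1): -152.82, (0, 2): -152.80, (1, 2): -152.81}
print(mbe2_energy(monomers, dimers))  # monomer sum plus pairwise corrections
```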
NASA Technical Reports Server (NTRS)
Bursik, J. W.; Hall, R. M.
1980-01-01
The saturated equilibrium expansion approximation for two-phase flow often involves ideal-gas and latent-heat assumptions to simplify the solution procedure. This approach is well documented by Wegener and Mack and works best at low pressures, where deviations from ideal-gas behavior are small. A thermodynamic expression for liquid mass fraction that is decoupled from the equations of fluid mechanics is used to compare the effects of the various assumptions on nitrogen-gas saturated equilibrium expansion flow starting at 8.81 atm, 2.99 atm, and 0.45 atm, conditions representative of transonic cryogenic wind tunnels. For the highest pressure case, the full set of ideal-gas and latent-heat assumptions is shown to be in error by 62 percent in the values of heat capacity and latent heat. An approximation of the exact real-gas expression is also developed, using a constant two-phase isentropic expansion coefficient, which results in an error of only 2 percent for the high-pressure case.
Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise
2012-01-01
Purpose To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost four years later. Method Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 and followed up at 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors were used to predict later speech sound production, PA, and literacy outcomes. Results Group averages revealed below-average school-age articulation scores and low-average PA, but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom more than 10% of their speech sound errors were atypical had lower PA and literacy scores at school-age than children who produced fewer than 10% atypical errors. Preschoolers who produced more distortion errors were likely to have lower school-age articulation scores. Conclusions Different preschool speech error patterns predict different school-age clinical outcomes. Many atypical speech sound errors in preschool may be indicative of weak phonological representations, leading to long-term PA weaknesses. Preschool distortions may be resistant to change over time, leading to persisting speech sound production problems. PMID:23184137
NASA Technical Reports Server (NTRS)
Long, S. A. T.
1975-01-01
Formulas for the general-altitude (height above the ellipsoid) transformation from geocentric to geodetic coordinates and vice versa are derived. The set of four formulas is expressed in each of two useful forms: series expansions in powers of the earth's flattening and series expansions in powers of the earth's eccentricity. The error incurred in these expansions is of the order of one part in 30 million.
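The general-altitude formulas themselves are not reproduced in the abstract, but the flavor of a flattening expansion can be sketched for the simplest surface (h = 0) case, where the exact relation tan ψ = (1 − e²) tan φ between geodetic latitude φ and geocentric latitude ψ reduces, to first order in the flattening f, to ψ ≈ φ − f sin 2φ (the flattening value below is an assumed reference value).

```python
import numpy as np

F = 1.0 / 298.257          # Earth's flattening, assumed reference value
E2 = 2.0 * F - F * F       # squared first eccentricity

def geocentric_lat_exact(phi):
    """Geocentric latitude of a point on the ellipsoid surface (h = 0)."""
    return np.arctan((1.0 - E2) * np.tan(phi))

def geocentric_lat_series(phi):
    """First-order-in-flattening series: ψ ≈ φ − f·sin(2φ)."""
    return phi - F * np.sin(2.0 * phi)

phi = np.radians(45.0)
print(geocentric_lat_exact(phi), geocentric_lat_series(phi))
# The two values agree to O(f²), illustrating the series' convergence.
```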
Elderly Onset of Weakness in Facioscapulohumeral Muscular Dystrophy
Fee, Dominic B.
2012-01-01
A 77-year-old male is presented. He had onset of proximal weakness 10 years earlier, and his course was slowly progressive. Despite having phenotypic features of facioscapulohumeral muscular dystrophy (FSH), genetic testing was delayed because of his age at onset, lack of family history, and benign-appearing muscle biopsy. This case represents one of the oldest onsets of weakness in genetically confirmed FSH and highlights the recognized expansion in phenotype that has occurred since the advent of genetic testing. PMID:23024867
Elderly onset of weakness in facioscapulohumeral muscular dystrophy.
Fee, Dominic B
2012-01-01
A 77-year-old male is presented. He had onset of proximal weakness 10 years earlier, and his course was slowly progressive. Despite having phenotypic features of facioscapulohumeral muscular dystrophy (FSH), genetic testing was delayed because of his age at onset, lack of family history, and benign-appearing muscle biopsy. This case represents one of the oldest onsets of weakness in genetically confirmed FSH and highlights the recognized expansion in phenotype that has occurred since the advent of genetic testing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bovier, A.; Klein, A.
We show that the formal perturbation expansion of the invariant measure for the Anderson model in one dimension has singularities at all energies E_0 = 2 cos(πp/q); we derive a modified expansion near these energies that we show to have finite coefficients to all orders. Moreover, we show that the first q − 3 of them coincide with those of the naive expansion, while there is an anomaly in the (q − 2)th term. This also gives a weak disorder expansion for the Liapunov exponent and for the density of states. This generalizes previous results of Kappus and Wegner and of Derrida and Gardner.
NASA Astrophysics Data System (ADS)
Trung, Ha Duyen
2017-12-01
In this paper, the end-to-end performance of a free-space optical (FSO) communication system employing amplify-and-forward (AF)-assisted, fixed-gain relaying with subcarrier quadrature amplitude modulation (SC-QAM) is studied over weak atmospheric turbulence channels modeled by a log-normal distribution with pointing-error impairments. More specifically, unlike previous studies of AF relaying FSO communication systems without pointing errors, the pointing-error effect is studied by taking into account the influence of beamwidth, aperture size, and jitter variance. In addition, a combination of these models is used to analyze the joint effect of atmospheric turbulence and pointing errors on AF relaying FSO/SC-QAM systems. Finally, an analytical expression is derived to evaluate the average symbol error rate (ASER) performance of such systems. The numerical results show the impact of pointing errors on the performance of AF relaying FSO/SC-QAM systems and how proper choices of aperture size and beamwidth can improve the performance of such systems. Some of the analytical results are confirmed by Monte Carlo simulations.
On Complicated Expansions of Solutions to ODEs
NASA Astrophysics Data System (ADS)
Bruno, A. D.
2018-03-01
Polynomial ordinary differential equations are studied by asymptotic methods. The truncated equation associated with a vertex or a nonhorizontal edge of the polygon of the initial equation is assumed to have a solution containing the logarithm of the independent variable. It is shown that, under very weak constraints, this nonpower asymptotic form of solutions to the original equation can be extended to an asymptotic expansion of these solutions. This is an expansion in powers of the independent variable with coefficients that are Laurent series in decreasing powers of the logarithm. Such expansions are sometimes called psi-series. Algorithms for such computations are described. Six examples are given, four of which concern Painlevé equations. An unexpected property of these expansions is revealed.
Titration Curves: Fact and Fiction.
ERIC Educational Resources Information Center
Chamberlain, John
1997-01-01
Discusses ways in which datalogging equipment can enable titration curves to be measured accurately and how computing power can be used to predict the shape of curves. Highlights include sources of error, use of spreadsheets to generate titration curves, titration of a weak acid with a strong alkali, dibasic acids, weak acid and weak base, and…
Expansion shock waves in regularized shallow-water theory
El, Gennady A.; Shearer, Michael
2016-01-01
We identify a new type of shock wave by constructing a stationary expansion shock solution of a class of regularized shallow-water equations that include the Benjamin–Bona–Mahony and Boussinesq equations. An expansion shock exhibits divergent characteristics, thereby contravening the classical Lax entropy condition. The persistence of the expansion shock in initial value problems is analysed and justified using matched asymptotic expansions and numerical simulations. The expansion shock's existence is traced to the presence of a non-local dispersive term in the governing equation. We establish the algebraic decay of the shock as it is gradually eroded by a simple wave on either side. More generally, we observe a robustness of the expansion shock in the presence of weak dissipation and in simulations of asymmetric initial conditions where a train of solitary waves is shed from one side of the shock. PMID:27279780
Stochastic Evolution Equations Driven by Fractional Noises
2016-11-28
rate of convergence to zero of the error and the limit in distribution of the error fluctuations. We have studied time-discrete numerical schemes based on Taylor expansions for rough differential equations and for stochastic differential equations driven by fractional Brownian motion.
Fixed-point theorems for families of weakly non-expansive maps
NASA Astrophysics Data System (ADS)
Mai, Jie-Hua; Liu, Xin-He
2007-10-01
In this paper, we present some fixed-point theorems for families of weakly non-expansive maps under relatively weak and general conditions. Our results generalize and improve several results due to Jungck [G. Jungck, Fixed points via a generalized local commutativity, Int. J. Math. Math. Sci. 25 (8) (2001) 497-507], Jachymski [J. Jachymski, A generalization of the theorem by Rhoades and Watson for contractive type mappings, Math. Japon. 38 (6) (1993) 1095-1102], Guo [C. Guo, An extension of fixed point theorem of Krasnoselski, Chinese J. Math. (P.O.C.) 21 (1) (1993) 13-20], Rhoades [B.E. Rhoades, A comparison of various definitions of contractive mappings, Trans. Amer. Math. Soc. 226 (1977) 257-290], and others.
Identifiability Of Systems With Modeling Errors
NASA Technical Reports Server (NTRS)
Hadaegh, Yadolah " fred"
1988-01-01
Advances in the theory of modeling errors are reported in a recent paper on errors in mathematical models of deterministic linear or weakly nonlinear systems. The paper extends theoretical work described in NPO-16661 and NPO-16785 and presents a concrete way of accounting for the difference in structure between a mathematical model and the physical process or system that it represents.
Exploratory Factor Analysis of Reading, Spelling, and Math Errors
ERIC Educational Resources Information Center
O'Brien, Rebecca; Pan, Xingyu; Courville, Troy; Bray, Melissa A.; Breaux, Kristina; Avitia, Maria; Choi, Dowon
2017-01-01
Norm-referenced error analysis is useful for understanding individual differences in students' academic skill development and for identifying areas of skill strength and weakness. The purpose of the present study was to identify underlying connections between error categories across five language and math subtests of the Kaufman Test of…
Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions
Onufriev, Alexey V.
2013-01-01
We propose an approach for approximating electrostatic charge distributions with a small number of point charges that optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing the OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation (PPCA), which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential, relative to that produced by the original charge distribution, at a distance comparable to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas-phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water in the mid-field (2.8 Å from the oxygen atom) is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
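The OPCA optimization itself is not sketched here, but the paper's accuracy metric, the RMS error of the electrostatic potential sampled on a mid-field sphere, is straightforward to reproduce; the charges, positions, and evaluation radius below are placeholders, with the potential in Gaussian units.

```python
import numpy as np

def coulomb_potential(points, charges, positions):
    """Potential of a set of point charges (Gaussian units, k = 1)."""
    v = np.zeros(len(points))
    for q, r in zip(charges, positions):
        v += q / np.linalg.norm(points - r, axis=1)
    return v

def rms_potential_error(q_apx, r_apx, q_ref, r_ref, radius, n=2000, seed=1):
    """RMS difference between approximate and reference potentials,
    sampled on a sphere of the given radius (the 'mid-field')."""
    g = np.random.default_rng(seed).normal(size=(n, 3))
    pts = radius * g / np.linalg.norm(g, axis=1, keepdims=True)
    diff = (coulomb_potential(pts, q_apx, r_apx)
            - coulomb_potential(pts, q_ref, r_ref))
    return np.sqrt(np.mean(diff ** 2))

# Placeholder example: a 2-charge approximation of a 3-charge source
q_ref = np.array([-0.8, 0.4, 0.4])
r_ref = np.array([[0.0, 0.0, 0.0], [0.76, 0.59, 0.0], [-0.76, 0.59, 0.0]])
q_apx = np.array([-0.8, 0.8])
r_apx = np.array([[0.0, -0.1, 0.0], [0.0, 0.6, 0.0]])
print(rms_potential_error(q_apx, r_apx, q_ref, r_ref, radius=2.8))
```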
The neutral emergence of error minimized genetic codes superior to the standard genetic code.
Massey, Steven E
2016-11-07
The standard genetic code (SGC) assigns amino acids to codons in such a way that the impact of point mutations is reduced; this is termed 'error minimization' (EM). The occurrence of EM has been attributed to the direct action of selection; however, it is difficult to explain how the search through alternative codes for an error-minimized code could occur via codon reassignments, given that these are likely to be disruptive to the proteome. An alternative scenario is that EM arose via the process of genetic code expansion, facilitated by the duplication of genes encoding charging enzymes and adaptor molecules. This is likely to have led to similar amino acids being assigned to similar codons. Strikingly, we show that if, during code expansion, the amino acid most similar to the parent amino acid (out of the set of unassigned amino acids) is assigned to codons related to those of the parent amino acid, then genetic codes with EM superior to the SGC easily arise. This scheme mimics code expansion via the gene duplication of charging enzymes and adaptors. The result is obtained for a variety of different schemes of genetic code expansion and provides a mechanistically realistic manner in which EM could have arisen in the SGC. These observations might be taken as evidence for self-organization in the earliest stages of life.
Cosmology with cosmic shear observations: a review.
Kilbinger, Martin
2015-07-01
Cosmic shear is the distortion of images of distant galaxies due to weak gravitational lensing by the large-scale structure in the Universe. Such images are coherently deformed by the tidal field of matter inhomogeneities along the line of sight. By measuring galaxy shape correlations, we can study the properties and evolution of structure on large scales as well as the geometry of the Universe. Thus, cosmic shear has become a powerful probe of the nature of dark matter and the origin of the current accelerated expansion of the Universe. In recent years, cosmic shear has evolved into a reliable and robust cosmological probe, providing measurements of the expansion history of the Universe and the growth of its structure. We review here the principles of weak gravitational lensing and show how cosmic shear is interpreted in a cosmological context. Then we give an overview of weak-lensing measurements and present the main observational cosmic-shear results since its discovery 15 years ago, as well as the implications for cosmology. We conclude with an outlook on the various future surveys and missions for which cosmic shear is one of the main science drivers, and discuss promising new weak cosmological lensing techniques for future observations.
Dynamical Analogy of Cabibbo-Kobayashi-Maskawa Matrices
NASA Astrophysics Data System (ADS)
Beshtoev, Khamidbi M.
1996-12-01
The dynamical analogy of the Cabibbo-Kobayashi-Maskawa matrices is constructed; that is, a phenomenological expansion of the weak interaction theory with the inclusion of three doublets of vector bosons B±, C±, D±, leading to quark mixing, is suggested. However, this expansion works only at tree level. An estimate of the boson masses is performed. The quasielastic processes proceeding through the exchange of these bosons are given.
Non-integer expansion embedding techniques for reversible image watermarking
NASA Astrophysics Data System (ADS)
Xiang, Shijun; Wang, Yi
2015-12-01
This work aims at reducing the embedding distortion of prediction-error expansion (PE)-based reversible watermarking. In the classical PE embedding method proposed by Thodi and Rodriguez, the predicted value is rounded to an integer for integer prediction-error expansion (IPE) embedding; the rounding operation places a constraint on the predictor's performance. In this paper, we propose a non-integer PE (NIPE) embedding approach, which can process non-integer prediction errors for embedding data into an audio or image file by expanding only the integer element of a prediction error while keeping its fractional element unchanged. The advantage of the NIPE technique is that it can bring a predictor fully into play by estimating a sample/pixel in a noncausal way in a single pass, since there is no rounding operation. A new noncausal image prediction method that estimates a pixel from its four immediate neighbors in a single pass is included in the proposed scheme. The proposed noncausal image predictor provides better performance than Sachnev et al.'s noncausal double-set prediction method (where prediction in two passes introduces distortion because half of the pixels are predicted from watermarked pixels). In comparison with several existing state-of-the-art works, experimental results show that the NIPE technique with the new noncausal prediction strategy reduces the embedding distortion for the same embedding payload.
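For context, the classical integer prediction-error expansion (IPE) of Thodi and Rodriguez that NIPE generalizes can be sketched in a few lines (overflow handling and the location map, which a complete scheme needs, are omitted).

```python
def ipe_embed(x, x_hat, bit):
    """Classical integer prediction-error expansion (sketch):
    expand e = x − x̂ to e' = 2e + b and rewrite the sample."""
    e = x - x_hat
    return x_hat + 2 * e + bit

def ipe_extract(x_marked, x_hat):
    """Recover the embedded bit and the original sample value."""
    e_exp = x_marked - x_hat
    bit = e_exp % 2            # embedded bit (0 or 1)
    e = e_exp // 2             # floor division inverts 2e + b
    return x_hat + e, bit

x, x_hat = 130, 128            # sample and its prediction
x_marked = ipe_embed(x, x_hat, 1)
print(ipe_extract(x_marked, x_hat))  # -> (130, 1): lossless recovery
```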
Hermite Functional Link Neural Network for Solving the Van der Pol-Duffing Oscillator Equation.
Mall, Susmita; Chakraverty, S
2016-08-01
A Hermite polynomial-based functional link artificial neural network (FLANN) is proposed here to solve the Van der Pol-Duffing oscillator equation. A single-layer Hermite neural network (HeNN) model is used, where the hidden layer is replaced by an expansion block that maps the input pattern onto Hermite orthogonal polynomials. A feedforward neural network model with the unsupervised error backpropagation principle is used for modifying the network parameters and minimizing the computed error function. The Van der Pol-Duffing and Duffing oscillator equations cannot, in general, be solved exactly; here, approximate solutions of these types of equations are obtained by applying the HeNN model for the first time. Three mathematical example problems and two real-life applications of the Van der Pol-Duffing oscillator equation, extracting the features of early mechanical failure signals and weak signal detection, are solved using the proposed HeNN method. The HeNN approximate solutions are compared with results obtained by the well-known Runge-Kutta method, and computed results are depicted as graphs. After training the HeNN model, it may be used as a black box to obtain numerical results at any arbitrary point in the domain; thus, the proposed HeNN method is efficient. The results reveal that this method is reliable and can be applied to other nonlinear problems too.
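A minimal sketch of the expansion block only (the trainable output layer and the parameter-update loop are omitted; names and the chosen order are illustrative): each input value is mapped to physicists' Hermite polynomials through the recurrence H_{n+1}(x) = 2x·H_n(x) − 2n·H_{n−1}(x).

```python
import numpy as np

def hermite_features(x, order):
    """Expand input x into physicists' Hermite polynomials H_0..H_order
    via the recurrence H_{n+1} = 2x·H_n − 2n·H_{n−1}."""
    H = [np.ones_like(x), 2.0 * x]
    for n in range(1, order):
        H.append(2.0 * x * H[n] - 2.0 * n * H[n - 1])
    return np.stack(H[: order + 1], axis=-1)

# In a FLANN, these features feed a single trainable linear layer
x = np.linspace(-1.0, 1.0, 5)
print(hermite_features(x, 3).shape)  # (5, 4): H_0..H_3 per input point
```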
NASA Astrophysics Data System (ADS)
Tugendhat, Tim M.; Schäfer, Björn Malte
2018-05-01
We investigate a physical, composite alignment model for both spiral and elliptical galaxies and its impact on cosmological parameter estimation from weak lensing in a tomographic survey. Ellipticity correlation functions and angular ellipticity spectra for spiral and elliptical galaxies are derived on the basis of tidal interactions with the cosmic large-scale structure and compared to the tomographic weak-lensing signal. We find that elliptical galaxies contribute to the weak-lensing dominated ellipticity correlation on intermediate angular scales, between ℓ ≃ 40 and ℓ ≃ 400, before the contribution of spiral galaxies dominates at higher multipoles. The predominant term on intermediate scales is the negative cross-correlation between intrinsic alignments and weak gravitational lensing (GI alignment). We simulate parameter inference from weak gravitational lensing with intrinsic alignments left unaccounted for; the bias induced by ignoring intrinsic alignments in a survey like Euclid is shown to be several times larger than the statistical error and can lead to faulty conclusions when comparing to other observations. The biases generally point in different directions in parameter space, such that in some cases a partial cancellation effect can be observed. Furthermore, it is shown that the biases increase with the number of tomographic bins used for the parameter estimation process. We quantify this parameter estimation bias in units of the statistical error and compute the loss of Bayesian evidence for a model due to the presence of systematic errors, as well as the Kullback-Leibler divergence to quantify the distance between the true model and the wrongly inferred one.
A wide-angle high Mach number modal expansion for infrasound propagation.
Assink, Jelle; Waxler, Roger; Velea, Doru
2017-03-01
The use of modal expansions to solve the problem of atmospheric infrasound propagation is revisited. A different form of the associated modal equation is introduced, valid for wide-angle propagation in atmospheres with high-Mach-number flow. The modal equation can be formulated as a quadratic eigenvalue problem, for which there are simple and efficient numerical implementations (a generic sketch is given below). A perturbation expansion for the treatment of attenuation, valid for stratified media with background flow, is derived as well. Comparisons are carried out between the proposed algorithm and a modal algorithm assuming an effective sound speed, including a real-data case study. The comparisons show that the effective sound speed approximation overestimates the effect of horizontal wind on sound propagation, leading to errors in travel time, propagation path, trace velocity, and absorption. The error is found to depend on propagation angle and Mach number.
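The quadratic eigenvalue formulation can be solved by standard companion linearization; a generic numerical sketch follows, in which the matrices A, B, and C stand for a discretized modal equation of the form (k²A + kB + C)ψ = 0 and are not the paper's exact operators.

```python
import numpy as np
from scipy.linalg import eig

def solve_qep(A, B, C):
    """Solve (k²·A + k·B + C)ψ = 0 via companion linearization to the
    generalized eigenproblem L z = k M z with z = [ψ, kψ]."""
    n = A.shape[0]
    I, Z = np.eye(n), np.zeros((n, n))
    L = np.block([[Z, I], [-C, -B]])
    M = np.block([[I, Z], [Z, A]])
    k, z = eig(L, M)
    return k, z[:n]            # eigen-wavenumbers and mode shapes

# Tiny random example just to exercise the routine
rng = np.random.default_rng(4)
A, B, C = (rng.normal(size=(3, 3)) for _ in range(3))
k, modes = solve_qep(A, B, C)
print(k)                       # 2n eigenvalues of the quadratic problem
```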
[Comparison of three stand-level biomass estimation methods].
Dong, Li Hu; Li, Feng Ri
2016-12-01
At present, regional-scale forest biomass estimation methods attract much research attention, and developing stand-level biomass models is popular. Based on forest inventory data for larch (Larix olgensis) plantations in Jilin Province, we used nonlinear seemingly unrelated regression (NSUR) to estimate the parameters in two additive systems of stand-level biomass equations: equations based on stand variables (Model system 1) and equations based on the biomass expansion factor (Model system 2). We also listed the constant biomass expansion factor for larch plantations and compared the prediction accuracy of the three stand-level biomass estimation methods. The results indicated that for the two additive systems of biomass equations, the adjusted coefficient of determination (R_a^2) of the total and stem equations was more than 0.95, and the root mean squared error (RMSE), mean prediction error (MPE), and mean absolute error (MAE) were smaller. The branch and foliage biomass equations performed worse than the total and stem biomass equations, with R_a^2 less than 0.95. The prediction accuracy of the constant biomass expansion factor was lower than that of Model system 1 and Model system 2. Overall, although the stand-level biomass equation based on the biomass expansion factor belongs to the volume-derived biomass estimation methods and differs in essence from the stand biomass equations based on stand variables, the prediction accuracies of the two methods were similar; the constant biomass expansion factor had the lowest prediction accuracy and is inappropriate. In addition, to make the parameter estimation more effective, the established stand-level biomass equations should account for additivity in a system of all tree component biomass and total biomass equations. (A toy contrast of the two estimation routes is sketched below.)
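A toy contrast of the two estimation routes (the functional form and all coefficients are entirely hypothetical; the NSUR fitting is not reproduced): one route predicts stand biomass directly from stand variables, the other converts stem volume with a biomass expansion factor.

```python
def biomass_from_bef(stem_volume_m3_ha, bef_mg_m3):
    """Volume-derived route: biomass = BEF × stem volume."""
    return bef_mg_m3 * stem_volume_m3_ha

def biomass_from_stand_variables(basal_area_m2_ha, mean_height_m, a, b, c):
    """Direct route with a generic power-form stand equation
    W = a · G^b · H^c (form and coefficients are assumptions)."""
    return a * basal_area_m2_ha ** b * mean_height_m ** c

# Hypothetical larch stand: 250 m³/ha stem volume, 30 m²/ha, 18 m height
print(biomass_from_bef(250.0, 0.6))                        # Mg/ha
print(biomass_from_stand_variables(30.0, 18.0, 0.8, 1.0, 0.9))
```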
WiLE: A Mathematica package for weak coupling expansion of Wilson loops in ABJ(M) theory
NASA Astrophysics Data System (ADS)
Preti, M.
2018-06-01
We present WiLE, a Mathematica® package designed to perform the weak coupling expansion of any Wilson loop in ABJ(M) theory at arbitrary perturbative order. For a given set of fields on the loop and internal vertices, the package displays all the possible Feynman diagrams and their integral representations. The user can also choose to exclude non-planar diagrams, tadpoles, and self-energies. Through the use of interactive input windows, the package should be easily accessible to users with little or no previous experience. The package manual provides some pedagogical examples and the computation of all three-loop ladder diagrams relevant for the cusp anomalous dimension in ABJ(M). The latter application also gives support to some recent results computed in different contexts.
Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.
Limongi, Roberto; Silva, Angélica M
2016-11-01
The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.
Ginzburg-Landau expansion in strongly disordered attractive Anderson-Hubbard model
NASA Astrophysics Data System (ADS)
Kuchinskii, E. Z.; Kuleeva, N. A.; Sadovskii, M. V.
2017-07-01
We have studied disorder effects on the coefficients of the Ginzburg-Landau expansion in powers of the superconducting order parameter in the attractive Anderson-Hubbard model within the generalized DMFT+Σ approximation. We consider a wide region of attractive potentials U, from the weak-coupling region, where superconductivity is described by the BCS model, to the strong-coupling region, where the superconducting transition is related to the Bose-Einstein condensation (BEC) of compact Cooper pairs formed at temperatures substantially higher than the superconducting transition temperature, and a wide range of disorder, from weak to strong, where the system is in the vicinity of the Anderson transition. For a semielliptic bare density of states, disorder's influence upon the coefficients A and B of the square and fourth power of the order parameter is universal for any value of the electron correlation and is related only to the general disorder-induced widening of the bare band (generalized Anderson theorem). Such universality is absent for the gradient-term expansion coefficient C. In the usual theory of "dirty" superconductors, the C coefficient drops with the growth of disorder. In the BCS limit with strong disorder, the coefficient C is very sensitive to the effects of Anderson localization, which lead to its further drop with disorder growth up to the region of the Anderson insulator. In the region of the BCS-BEC crossover and in the BEC limit, the coefficient C and all related physical properties depend only weakly on disorder. In particular, this leads to a relatively weak disorder dependence of both the penetration depth and the coherence length, as well as of the related slope of the upper critical magnetic field at the superconducting transition, in the region of very strong coupling.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soudackov, Alexander V.; Hammes-Schiffer, Sharon
2015-11-21
Rate constant expressions for vibronically nonadiabatic proton transfer and proton-coupled electron transfer reactions are presented and analyzed. The regimes covered include electronically adiabatic and nonadiabatic reactions, as well as high-frequency and low-frequency proton donor-acceptor vibrational modes. These rate constants differ from previous rate constants derived with the cumulant expansion approach in that the logarithmic expansion of the vibronic coupling in terms of the proton donor-acceptor distance includes a quadratic as well as a linear term. The analysis illustrates that inclusion of this quadratic term in the framework of the cumulant expansion may significantly impact the rate constants at high temperatures for proton transfer interfaces with soft proton donor-acceptor modes that are associated with small force constants and weak hydrogen bonds. The effects of the quadratic term may also become significant in these regimes when using the vibronic coupling expansion in conjunction with a thermal averaging procedure for calculating the rate constant. In this case, however, the expansion of the coupling can be avoided entirely by calculating the couplings explicitly for the range of proton donor-acceptor distances sampled. The effects of the quadratic term for weak hydrogen-bonding systems are less significant for more physically realistic models that prevent the sampling of unphysically short proton donor-acceptor distances. Additionally, the rigorous relation between the cumulant expansion and thermal averaging approaches is clarified. In particular, the cumulant expansion rate constant includes effects from dynamical interference between the proton donor-acceptor and solvent motions and becomes equivalent to the thermally averaged rate constant when these dynamical effects are neglected. This analysis identifies the regimes in which each rate constant expression is valid and thus will be important for future applications to proton transfer and proton-coupled electron transfer in chemical and biological processes.
ERIC Educational Resources Information Center
Yee, Ng Kin; Lam, Toh Tin
2008-01-01
This paper reports on students' errors in performing integration of rational functions, a topic of calculus in the pre-university mathematics classrooms. Generally the errors could be classified as those due to the students' weak algebraic concepts and their lack of understanding of the concept of integration. With the students' inability to link…
Separating Dark Physics from Physical Darkness: Minimalist Modified Gravity vs. Dark Energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huterer, Dragan; Linder, Eric V.
The acceleration of the cosmic expansion may be due to a new component of physical energy density or a modification of physics itself. Mapping the expansion of cosmic scales and the growth of large scale structure in tandem can provide insights to distinguish between the two origins. Using Minimal Modified Gravity (MMG) - a single parameter gravitational growth index formalism to parameterize modified gravity theories - we examine the constraints that cosmological data can place on the nature of the new physics. For next generation measurements combining weak lensing, supernovae distances, and the cosmic microwave background we can extend the reach of physics to allow for fitting gravity simultaneously with the expansion equation of state, diluting the equation of state estimation by less than 25 percent relative to when general relativity is assumed, and determining the growth index to 8 percent. For weak lensing we examine the level of understanding needed of quasi- and nonlinear structure formation in modified gravity theories, and the trade off between stronger precision but greater susceptibility to bias as progressively more nonlinear information is used.
Separating dark physics from physical darkness: Minimalist modified gravity versus dark energy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huterer, Dragan; Linder, Eric V.
The acceleration of the cosmic expansion may be due to a new component of physical energy density or a modification of physics itself. Mapping the expansion of cosmic scales and the growth of large scale structure in tandem can provide insights to distinguish between the two origins. Using Minimal Modified Gravity (MMG) - a single parameter gravitational growth index formalism to parametrize modified gravity theories - we examine the constraints that cosmological data can place on the nature of the new physics. For next generation measurements combining weak lensing, supernovae distances, and the cosmic microwave background we can extend the reach of physics to allow for fitting gravity simultaneously with the expansion equation of state, diluting the equation of state estimation by less than 25% relative to when general relativity is assumed, and determining the growth index to 8%. For weak lensing we examine the level of understanding needed of quasi- and nonlinear structure formation in modified gravity theories, and the trade off between stronger precision but greater susceptibility to bias as progressively more nonlinear information is used.
Corrigendum and addendum. Modeling weakly nonlinear acoustic wave propagation
Christov, Ivan; Christov, C. I.; Jordan, P. M.
2014-12-18
This article presents errors, corrections, and additions to the research outlined in the following citation: Christov, I., Christov, C. I., & Jordan, P. M. (2007). Modeling weakly nonlinear acoustic wave propagation. The Quarterly Journal of Mechanics and Applied Mathematics, 60(4), 473-495.
Thermal expansion and elastic anisotropy in single crystal Al2O3 and SiC reinforcements
NASA Technical Reports Server (NTRS)
Salem, Jonathan A.; Li, Zhuang; Bradt, Richard C.
1994-01-01
In single crystal form, SiC and Al2O3 are attractive reinforcing components for high temperature composites. In this study, the axial coefficients of thermal expansion and single crystal elastic constants of SiC and Al2O3 were used to determine their coefficients of thermal expansion and Young's moduli as a function of crystallographic orientation and temperature. SiC and Al2O3 exhibit a strong variation of Young's modulus with orientation; however, their moduli and anisotropies are weak functions of temperature below 1000 C. The coefficients of thermal expansion exhibit significant temperature dependence, and that of the non-cubic Al2O3 is also a function of crystallographic orientation.
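For a uniaxial crystal such as Al2O3, the orientation dependence of the CTE follows from the second-rank tensor rotation α(θ) = α_c·cos²θ + α_a·sin²θ, with θ measured from the c-axis; the sketch below uses assumed room-temperature magnitudes, not the paper's measured values.

```python
import numpy as np

def cte_uniaxial(theta_rad, alpha_a, alpha_c):
    """Direction-dependent CTE of a uniaxial crystal:
    α(θ) = α_c·cos²θ + α_a·sin²θ, θ measured from the c-axis."""
    return (alpha_c * np.cos(theta_rad) ** 2
            + alpha_a * np.sin(theta_rad) ** 2)

# Illustrative (assumed) Al2O3 magnitudes near room temperature, in 1/K
alpha_a, alpha_c = 5.0e-6, 6.0e-6
for deg in (0, 45, 90):
    print(deg, cte_uniaxial(np.radians(deg), alpha_a, alpha_c))
```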
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahn, Charlene; Wiseman, Howard; Jacobs, Kurt
2004-08-01
It was shown by Ahn, Wiseman, and Milburn [Phys. Rev. A 67, 052310 (2003)] that feedback control could be used as a quantum error correction process for errors induced by weak continuous measurement, given one perfectly measured error channel per qubit. Here we point out that this method can be easily extended to an arbitrary number of error channels per qubit. We show that the feedback protocols generated by our method encode n-2 logical qubits in n physical qubits, thus requiring just one more physical qubit than in the previous case.
49 CFR 180.511 - Acceptable results of inspections and tests.
Code of Federal Regulations, 2011 CFR
2011-10-01
...) Safety system inspection. A tank car successfully passes the safety system inspection when each thermal..., distortion, excessive permanent expansion, or other evidence of weakness that might render the tank car...
Neutrino mass and dark energy from weak lensing.
Abazajian, Kevork N; Dodelson, Scott
2003-07-25
Weak gravitational lensing of background galaxies by intervening matter directly probes the mass distribution in the Universe. This distribution is sensitive to both the dark energy and neutrino mass. We examine the potential of lensing experiments to measure features of both simultaneously. Focusing on the radial information contained in a future deep 4000 deg(2) survey, we find that the expected (1-sigma) error on a neutrino mass is 0.1 eV, if the dark-energy parameters are allowed to vary. The constraints on dark-energy parameters are similarly restrictive, with errors on w of 0.09.
On Using a Space Telescope to Detect Weak-lensing Shear
NASA Astrophysics Data System (ADS)
Tung, Nathan; Wright, Edward
2017-11-01
Ignoring redshift dependence, the statistical performance of a weak-lensing survey is set by two numbers: the effective shape noise of the sources, which includes the intrinsic ellipticity dispersion and the measurement noise, and the density of sources that are useful for weak-lensing measurements. In this paper, we provide some general guidance for weak-lensing shear measurements from a “generic” space telescope by looking for the optimum wavelength bands to maximize the galaxy flux signal-to-noise ratio (S/N) and minimize ellipticity measurement error. We also calculate an effective galaxy number per square degree across different wavelength bands, taking into account the density of sources that are useful for weak-lensing measurements and the effective shape noise of sources. Galaxy data collected from the ultra-deep UltraVISTA Ks-selected and R-selected photometric catalogs (Muzzin et al. 2013) are fitted to radially symmetric Sérsic galaxy light profiles. The Sérsic galaxy profiles are then stretched to impose an artificial weak-lensing shear, and then convolved with a pure Airy Disk PSF to simulate imaging of weak gravitationally lensed galaxies from a hypothetical diffraction-limited space telescope. For our model calculations and sets of galaxies, our results show that the peak in the average galaxy flux S/N, the minimum average ellipticity measurement error, and the highest effective galaxy number counts all lie around the K-band near 2.2 μm.
NASA Astrophysics Data System (ADS)
Hui, Wei-Hua; Bao, Fu-Ting; Wei, Xiang-Geng; Liu, Yang
2015-12-01
In this paper, a new method of measuring the ablation rate is proposed, based on X-ray three-dimensional (3D) reconstruction. The ablation of 4-direction carbon/carbon composite nozzles was investigated in the combustion environment of a solid rocket motor, and the macroscopic ablation and linear recession rate were studied through the X-ray 3D reconstruction method. The results showed that the maximum relative error of the X-ray 3D reconstruction was 0.0576%, which met the minimum accuracy required for the ablation analysis. Along the nozzle axial direction, from the convergence segment through the throat to the expansion segment, the ablation gradually weakened; in terms of defect ablation, ablation in the middle was weak, while ablation on both sides was more serious. In short, the proposed X-ray reconstruction method for C/C nozzle ablation can construct a clear model of the ablated nozzle that captures details of micro-cracks, deposition, pores, and the surface, so that the ablation curve of any surface can be obtained clearly for analysis.
Momentum-space cluster dual-fermion method
NASA Astrophysics Data System (ADS)
Iskakov, Sergei; Terletska, Hanna; Gull, Emanuel
2018-03-01
Recent years have seen the development of two types of nonlocal extensions to the single-site dynamical mean field theory. On one hand, cluster approximations, such as the dynamical cluster approximation, recover short-range momentum-dependent correlations nonperturbatively. On the other hand, diagrammatic extensions, such as the dual-fermion theory, recover long-ranged corrections perturbatively. The correct treatment of both strong short-ranged and weak long-ranged correlations within the same framework is therefore expected to lead to a quick convergence of results, and offers the potential of obtaining smooth self-energies in nonperturbative regimes of phase space. In this paper, we present an exact cluster dual-fermion method based on an expansion around the dynamical cluster approximation. Unlike previous formulations, our method does not employ a coarse-graining approximation to the interaction, which we show to be the leading source of error at high temperature, and converges to the exact result independently of the size of the underlying cluster. We illustrate the power of the method with results for the second-order cluster dual-fermion approximation to the single-particle self-energies and double occupancies.
Smolek, Michael K.
2011-01-01
Purpose The significance of ocular or corneal aberrations may be subject to misinterpretation whenever eyes with different pupil sizes or the application of different Zernike expansion orders are compared. A method is shown that uses simple mathematical interpolation techniques based on normal data to rapidly determine the clinical significance of aberrations, without concern for pupil and expansion order. Methods Corneal topography (Tomey, Inc.; Nagoya, Japan) from 30 normal corneas was collected and the corneal wavefront error analyzed by Zernike polynomial decomposition into specific aberration types for pupil diameters of 3, 5, 7, and 10 mm and Zernike expansion orders of 6, 8, 10 and 12. Using this 4×4 matrix of pupil sizes and fitting orders, best-fitting 3-dimensional functions were determined for the mean and standard deviation of the RMS error for specific aberrations. The functions were encoded into software to determine the significance of data acquired from non-normal cases. Results The best-fitting functions for 6 types of aberrations were determined: defocus, astigmatism, prism, coma, spherical aberration, and all higher-order aberrations. A clinical screening method of color-coding the significance of aberrations in normal, postoperative LASIK, and keratoconus cases having different pupil sizes and different expansion orders is demonstrated. Conclusions A method to calibrate wavefront aberrometry devices by using a standard sample of normal cases was devised. This method could be potentially useful in clinical studies involving patients with uncontrolled pupil sizes or in studies that compare data from aberrometers that use different Zernike fitting-order algorithms. PMID:22157570
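A sketch of the screening idea (the normative values below are placeholders, not the study's fitted functions): interpolate the normal mean and SD of the RMS error over the pupil-size/expansion-order grid, then grade a measurement against, say, 2-SD and 3-SD limits.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

pupils = np.array([3.0, 5.0, 7.0, 10.0])    # pupil diameter [mm]
orders = np.array([6.0, 8.0, 10.0, 12.0])   # Zernike expansion order

# Hypothetical normative grids (mean and SD of RMS error, in µm),
# standing in for the paper's best-fitting 3D functions
norm_mean = 0.005 * np.outer(pupils, orders)
norm_sd = 0.15 * norm_mean

mean_f = RegularGridInterpolator((pupils, orders), norm_mean)
sd_f = RegularGridInterpolator((pupils, orders), norm_sd)

def significance_grade(rms_um, pupil_mm, order):
    """Grade a measured RMS against interpolated normal limits."""
    p = [[pupil_mm, order]]
    m, s = mean_f(p)[0], sd_f(p)[0]
    if rms_um <= m + 2.0 * s:
        return "within normal limits"
    return "borderline" if rms_um <= m + 3.0 * s else "abnormal"

print(significance_grade(0.6, 4.2, 8))  # arbitrary pupil, order, RMS
```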
NASA Technical Reports Server (NTRS)
Padilla, Peter A.
1991-01-01
An investigation was made in AIRLAB of the fault handling performance of the Fault Tolerant MultiProcessor (FTMP). Fault handling errors detected during fault injection experiments were characterized. In these fault injection experiments, the FTMP disabled a working unit instead of the faulted unit once in every 500 faults, on the average. System design weaknesses allow active faults to exercise a part of the fault management software that handles Byzantine or lying faults. Byzantine faults behave such that the faulted unit points to a working unit as the source of errors. The design's problems involve: (1) the design and interface between the simplex error detection hardware and the error processing software, (2) the functional capabilities of the FTMP system bus, and (3) the communication requirements of a multiprocessor architecture. These weak areas in the FTMP's design increase the probability that, for any hardware fault, a good line replacement unit (LRU) is mistakenly disabled by the fault management software.
NASA Astrophysics Data System (ADS)
Bailey, David H.; Frolov, Alexei M.
2003-12-01
Since the above paper was published we have received a suggestion from T K Rebane that our variational energy, -402.261 928 652 266 220 998 au, for the 3S(L = 0) state from table 4 (right-hand column) is wrong in the fourth and fifth decimal digits. Our original variational energies were E(2000) = -402.192 865 226 622 099 583 au and E(3000) = -402.192 865 226 622 099 838 au. Unfortunately, table 4 contains a simple typographic error. The first two digits after the decimal point (26) in the published energies must be removed. Then the results exactly coincide with the original energies. These digits (26) were left in table 4 from the original version, which also included the 2S(L = 0) states of the helium-muonic atoms. A similar typographic error was found in table 4 of another paper by A M Frolov (2001 J. Phys. B: At. Mol. Opt. Phys. 34 3813). The computed ground state energy for the ppµ muonic molecular ion was -0.494 386 820 248 934 546 94 mau. In table 4 of that paper the first figure '8' (fifth digit after the decimal point) was lost from the energy value presented in this table. We wish to thank T K Rebane of the Fock Physical Institute in St Petersburg for pointing out the misprint related to the helium(4)-muonic atom.
Atmospheric Dispersion Effects in Weak Lensing Measurements
Plazas, Andrés Alejandro; Bernstein, Gary
2012-10-01
The wavelength dependence of atmospheric refraction causes elongation of finite-bandwidth images along the elevation vector, which produces spurious signals in weak gravitational lensing shear measurements unless this atmospheric dispersion is calibrated and removed to high precision. Because astrometric solutions and PSF characteristics are typically calibrated from stellar images, differences between the reference stars' spectra and the galaxies' spectra will leave residual errors in both the astrometric positions (dr) and in the second moment (width) of the wavelength-averaged PSF (dv) for galaxies. We estimate the level of dv that will induce spurious weak lensing signals in PSF-corrected galaxy shapes that exceed the statistical errors of the DES and the LSST cosmic-shear experiments. We also estimate the dr signals that will produce unacceptable spurious distortions after stacking of exposures taken at different airmasses and hour angles. We also calculate the errors in the griz bands, and find that dispersion systematics, uncorrected, are up to 6 and 2 times larger in g and r bands, respectively, than the requirements for the DES error budget, but can be safely ignored in i and z bands. For the LSST requirements, the factors are about 30, 10, and 3 in g, r, and i bands, respectively. We find that a simple correction linear in galaxy color is accurate enough to reduce dispersion shear systematics to insignificant levels in the r band for DES and the i band for LSST, but it still leaves residuals as much as 5 times larger than the requirements for LSST r-band observations. More complex corrections will likely be able to reduce the systematic cosmic-shear errors below statistical errors for LSST r band. But g-band effects remain large enough that induced systematics will likely dominate the statistical errors of both surveys, and cosmic-shear measurements should rely on the redder bands.
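A minimal sketch of the linear-in-color correction mentioned above (Python; the color values, dv measurements, and fitted coefficients are hypothetical placeholders, not survey calibrations):

```python
import numpy as np

# Hypothetical calibration sample: PSF second-moment shifts dv measured
# from stars of known g-i color, fit with a straight line.
star_color = np.array([0.4, 0.7, 1.0, 1.3, 1.6])
star_dv = np.array([2.1e-4, 1.4e-4, 0.8e-4, 0.2e-4, -0.5e-4])
slope, intercept = np.polyfit(star_color, star_dv, 1)

def corrected_dv(dv_obs, galaxy_color):
    """Residual dispersion signal after subtracting the color-linear model."""
    return dv_obs - (slope * galaxy_color + intercept)

print(corrected_dv(1.0e-4, 0.9))
```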
The thermal expansion of hard magnetic materials of the Nd-Fe-B system
NASA Astrophysics Data System (ADS)
Savchenko, Igor; Kozlovskii, Yurii; Samoshkin, Dmitriy; Yatsuk, Oleg
2017-10-01
The results of dilatometric measurements of the thermal expansion of hard magnetic materials of brands N35M, N35H, and N35SH, whose main component is the crystalline Nd2Fe14B phase, are presented. The temperature range from 200 to 750 K has been investigated by dilatometry with an error of 1.5–2×10⁻⁷ K⁻¹. Approximation dependences of the linear thermal expansion coefficient have been obtained. The character of the changes of the linear thermal expansion coefficient in the region of the Curie point has been specified, and its critical indices and critical amplitudes have been determined.
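For concreteness, a small Python sketch of how such approximation dependences can be obtained from dilatometric data; the synthetic elongation curve below is hypothetical, not the measured N35 data:

```python
import numpy as np

# Hypothetical dilatometric record: relative elongation dL/L0 versus T.
T = np.linspace(200.0, 750.0, 23)                      # K
dl_over_l = 5.0e-6 * (T - 293.0) + 4.0e-9 * (T - 293.0) ** 2

# Fit dL/L0 with a cubic polynomial; the linear thermal expansion
# coefficient is its temperature derivative, alpha(T) = d(dL/L0)/dT.
poly = np.polyfit(T, dl_over_l, 3)
alpha = np.polyval(np.polyder(poly), T)
print(alpha[0], alpha[-1])                             # in 1/K
```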
Elias, Gabriel A.; Bieszczad, Kasia M.; Weinberger, Norman M.
2015-01-01
Primary sensory cortical fields develop highly specific associative representational plasticity, notably enlarged area of representation of reinforced signal stimuli within their topographic maps. However, overtraining subjects after they have solved an instrumental task can reduce or eliminate the expansion while the successful behavior remains. As the development of this plasticity depends on the learning strategy used to solve a task, we asked whether the loss of expansion is due to the strategy used during overtraining. Adult male rats were trained in a three-tone auditory discrimination task to bar-press to the CS+ for water reward and refrain from doing so during the CS− tones and silent intertrial intervals; errors were punished by a flashing light and time-out penalty. Groups acquired this task to a criterion within seven training sessions by relying on a strategy that was “bar-press from tone-onset-to-error signal” (“TOTE”). Three groups then received different levels of overtraining: Group ST, none; Group RT, one week; Group OT, three weeks. Post-training mapping of their primary auditory fields (A1) showed that Groups ST and RT had developed significantly expanded representational areas, specifically restricted to the frequency band of the CS+ tone. In contrast, the A1 of Group OT was no different from naïve controls. Analysis of learning strategy revealed this group had shifted strategy to a refinement of TOTE in which they self-terminated bar-presses before making an error (“iTOTE”). Across all animals, the greater the use of iTOTE, the smaller was the representation of the CS+ in A1. Thus, the loss of cortical expansion is attributable to a shift or refinement in strategy. This reversal of expansion was considered in light of a novel theoretical framework (CONCERTO) highlighting four basic principles of brain function that resolve anomalous findings and explaining why even a minor change in strategy would involve concomitant shifts of involved brain sites, including reversal of cortical expansion. PMID:26596700
Elias, Gabriel A; Bieszczad, Kasia M; Weinberger, Norman M
2015-12-01
Primary sensory cortical fields develop highly specific associative representational plasticity, notably enlarged area of representation of reinforced signal stimuli within their topographic maps. However, overtraining subjects after they have solved an instrumental task can reduce or eliminate the expansion while the successful behavior remains. As the development of this plasticity depends on the learning strategy used to solve a task, we asked whether the loss of expansion is due to the strategy used during overtraining. Adult male rats were trained in a three-tone auditory discrimination task to bar-press to the CS+ for water reward and refrain from doing so during the CS- tones and silent intertrial intervals; errors were punished by a flashing light and time-out penalty. Groups acquired this task to a criterion within seven training sessions by relying on a strategy that was "bar-press from tone-onset-to-error signal" ("TOTE"). Three groups then received different levels of overtraining: Group ST, none; Group RT, one week; Group OT, three weeks. Post-training mapping of their primary auditory fields (A1) showed that Groups ST and RT had developed significantly expanded representational areas, specifically restricted to the frequency band of the CS+ tone. In contrast, the A1 of Group OT was no different from naïve controls. Analysis of learning strategy revealed this group had shifted strategy to a refinement of TOTE in which they self-terminated bar-presses before making an error ("iTOTE"). Across all animals, the greater the use of iTOTE, the smaller was the representation of the CS+ in A1. Thus, the loss of cortical expansion is attributable to a shift or refinement in strategy. This reversal of expansion was considered in light of a novel theoretical framework (CONCERTO) highlighting four basic principles of brain function that resolve anomalous findings and explaining why even a minor change in strategy would involve concomitant shifts of involved brain sites, including reversal of cortical expansion. Published by Elsevier Inc.
Factors influencing body image in individuals after a first heart attack.
Zarek, Aleksandra; Barański, Jarosław
Experiencing a heart attack can change patients' attitude towards their corporeality. Body image may significantly influence patients' recovery, their adherence to medical recommendations, and their adoption of a healthy lifestyle. The aim of this study was to analyze the relationship between body image and personality characteristics, as well as sociodemographic, physical and medical factors, in patients after a first myocardial infarction. The study comprised 160 patients after a first heart attack (80 women and 80 men) aged 34–65 years (mean = 53.44; SD = 6.40). Body image was measured with the Body Image Questionnaire, and personality was analyzed according to the Adjective Check List. The level of body satisfaction was shaped by two dimensions of personality (Sociability, and Weakness and Inhibition) and by respondents' gender. In the respondents' personality profile, lower body satisfaction was associated with elevated Weakness and Inhibition and with lowered Sociability. Women were less satisfied with their bodies than men. The significance attributed to one's own body was shaped by two dimensions of personality (Expansiveness, and Weakness and Inhibition) and by respondents' age. Patients with a higher degree of Expansiveness, a lower degree of Weakness and Inhibition, and more advanced age gave greater priority to corporeality. Improving body image in persons after a first heart attack should be combined with the development of personality abilities important for self-efficacy and social competency.
The calculation of average error probability in a digital fibre optical communication system
NASA Astrophysics Data System (ADS)
Rugemalira, R. A. M.
1980-03-01
This paper deals with the problem of determining the average error probability in a digital fibre optical communication system in the presence of message-dependent inhomogeneous non-stationary shot noise, additive Gaussian noise, and intersymbol interference. A zero-forcing equalization receiver filter is considered. Three techniques for error rate evaluation are compared: the Chernoff bound and the Gram-Charlier series expansion methods are compared to the characteristic function technique. The latter predicts a higher receiver sensitivity.
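To illustrate why bounding techniques are pessimistic relative to a characteristic-function evaluation, here is a toy Python comparison for the purely additive-Gaussian case (a simplification; the paper's channel also includes shot noise and intersymbol interference):

```python
import numpy as np
from scipy.special import erfc

def q_exact(x):
    """Exact Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2.0))

def q_chernoff(x):
    """Chernoff bound Q(x) <= exp(-x^2 / 2): always an overestimate."""
    return np.exp(-x * x / 2.0)

for snr in (2.0, 3.0, 4.0):
    print(snr, q_exact(snr), q_chernoff(snr))
```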
Scientific Impacts of Wind Direction Errors
NASA Technical Reports Server (NTRS)
Liu, W. Timothy; Kim, Seung-Bum; Lee, Tong; Song, Y. Tony; Tang, Wen-Qing; Atlas, Robert
2004-01-01
An assessment was made of the scientific impact of random errors in wind direction (less than 45 deg) retrieved from space-based observations under weak wind (less than 7 m/s) conditions; such weak winds cover most of the tropical, sub-tropical, and coastal oceans. Introduction of these errors in the semi-daily winds causes, on average, 5% changes in the yearly mean Ekman and Sverdrup volume transports computed directly from the winds. These poleward movements of water are the main mechanisms that redistribute heat from the warmer tropical region to the colder high-latitude regions, and they are the major manifestations of the ocean's function in modifying Earth's climate. Simulation by an ocean general circulation model shows that the wind errors introduce a 5% error in the meridional heat transport at tropical latitudes. The simulation also shows that the erroneous winds cause a pile-up of warm surface water in the eastern tropical Pacific, similar to conditions during an El Nino episode. Similar wind directional errors cause significant changes in sea-surface temperature and sea-level patterns in coastal oceans in a coastal model simulation. Previous studies have shown that assimilation of scatterometer winds improves 3-5 day weather forecasts in the Southern Hemisphere; when directional information below 7 m/s was withheld, approximately 40% of the improvement was lost.
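As a toy illustration of the transport sensitivity (Python; the drag coefficient, latitude, uniform error distribution, and easterly-wind setup are assumptions, not the study's configuration — the resulting bias depends strongly on the assumed error distribution):

```python
import numpy as np

rng = np.random.default_rng(1)
rho_air, rho_w, Cd, f = 1.2, 1025.0, 1.3e-3, 5.0e-5   # SI units, ~20 deg latitude

speed = rng.uniform(2.0, 7.0, 100_000)        # weak winds only (< 7 m/s)
err = np.deg2rad(rng.uniform(-45.0, 45.0, speed.size))

# Easterly wind: meridional Ekman transport per unit width V = -tau_x/(rho_w*f).
tau_x = -rho_air * Cd * speed ** 2            # zonal wind stress, N/m^2
tau_x_err = tau_x * np.cos(err)               # a direction error rotates the stress

V_true = -tau_x.mean() / (rho_w * f)
V_err = -tau_x_err.mean() / (rho_w * f)
print((V_err - V_true) / V_true)              # fractional change in mean transport
```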
NASA Astrophysics Data System (ADS)
van Putten, Maurice H. P. M.
2018-01-01
The H0-tension problem poses a confrontation of dark energy driving late-time cosmological expansion measured by the Hubble parameter H(z) over an extended range of redshifts z. Distinct values H0 ≃ 73 km s⁻¹ Mpc⁻¹ and H0 ≃ 68 km s⁻¹ Mpc⁻¹ obtain from surveys of the Local Universe and, respectively, ΛCDM analysis of the CMB. These are representative of accelerated expansion with H'(0) ≃ 0 and, respectively, H'(0) > 0 in ΛCDM, expressed in terms of a fundamental frequency of the cosmological horizon in a Friedmann-Robertson-Walker universe with deceleration parameter q(z) = -1 + (1+z)H⁻¹H'(z). The corresponding explicit solutions H(z) are here compared with recent data on H(z) over 0 ≲ z ≲ 2. The first is found to be free of tension with H0 from local surveys, while the latter is disfavored at 2.7σ. A further confrontation obtains in galaxy dynamics by a finite sensitivity of inertia to background cosmology in weak gravity, putting an upper bound of m ≲ 10⁻³⁰ eV on the mass of dark matter. A C⁰ onset of weak gravity at the de Sitter scale of acceleration adS = cH(z), where c denotes the velocity of light, can be seen in galaxy rotation curves covering 0 ≲ z ≲ 2. Weak gravity in galaxy dynamics hereby provides a proxy for cosmological evolution.
Multipolar Ewald methods, 1: theory, accuracy, and performance.
Giese, Timothy J; Panteva, Maria T; Chen, Haoyuan; York, Darrin M
2015-02-10
The Ewald, Particle Mesh Ewald (PME), and Fast Fourier–Poisson (FFP) methods are developed for systems composed of spherical multipole moment expansions. A unified set of equations is derived that takes advantage of a spherical tensor gradient operator formalism in both real space and reciprocal space to allow extension to arbitrary multipole order. The implementation of these methods into a novel linear-scaling modified “divide-and-conquer” (mDC) quantum mechanical force field is discussed. The evaluation times and relative force errors are compared between the three methods, as a function of multipole expansion order. Timings and errors are also compared within the context of the quantum mechanical force field, which encounters primary errors related to the quality of reproducing electrostatic forces for a given density matrix and secondary errors resulting from the propagation of the approximate electrostatics into the self-consistent field procedure, which yields a converged, variational, but nonetheless approximate density matrix. Condensed-phase simulations of an mDC water model are performed with the multipolar PME method and compared to an electrostatic cutoff method, which is shown to artificially increase the density of water and heat of vaporization relative to full electrostatic treatment.
Chen, Hsing Hung; Shen, Tao; Xu, Xin-Long; Ma, Chao
2013-01-01
The characteristics of a firm's expansion by differentiated products and by diversified products are quite different. However, a study employing absorptive capacity to examine the impacts of these different modes of expansion on the performance of small solar energy firms has not been reported before. Therefore, a conceptual model to analyze the tension between strategies and corporate performance is proposed to fill the vacancy. After empirical investigation, the results show that stronger organizational institutions help small solar energy firms that expanded by differentiated products increase consistency between strategies and corporate performance; conversely, stronger working attitudes with weak management controls help small solar energy firms that expanded by diversified products reduce the variance between strategies and corporate performance.
Shear Recovery Accuracy in Weak-Lensing Analysis with the Elliptical Gauss-Laguerre Method
NASA Astrophysics Data System (ADS)
Nakajima, Reiko; Bernstein, Gary
2007-04-01
We implement the elliptical Gauss-Laguerre (EGL) galaxy-shape measurement method proposed by Bernstein & Jarvis and quantify the shear recovery accuracy in weak-lensing analysis. This method uses a deconvolution fitting scheme to remove the effects of the point-spread function (PSF). The test simulates >10⁷ noisy galaxy images convolved with anisotropic PSFs and attempts to recover an input shear. The tests are designed to be immune to statistical (random) distributions of shapes, selection biases, and crowding, in order to test more rigorously the effects of detection significance (signal-to-noise ratio [S/N]), PSF, and galaxy resolution. The systematic error in shear recovery is divided into two classes, calibration (multiplicative) and additive, with the latter arising from PSF anisotropy. At S/N > 50, the deconvolution method measures the galaxy shape and input shear to ~1% multiplicative accuracy and suppresses >99% of the PSF anisotropy. These systematic errors increase to ~4% for the worst conditions, with poorly resolved galaxies at S/N ≃ 20. The EGL weak-lensing analysis has the best demonstrated accuracy to date, sufficient for the next generation of weak-lensing surveys.
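A minimal sketch of how the two bias classes are typically quantified (Python; the simulated numbers are illustrative, not the paper's measurements): recovered shear is modeled as g_obs = (1 + m) g_true + c, where m is the calibration (multiplicative) error and c the additive PSF-induced term.

```python
import numpy as np

rng = np.random.default_rng(2)
g_true = np.linspace(-0.05, 0.05, 21)           # input shears
m_true, c_true = 0.01, 2.0e-4                   # ~1% calibration, small additive
g_obs = (1.0 + m_true) * g_true + c_true + rng.normal(0.0, 1.0e-4, g_true.size)

# A straight-line fit recovers the two biases.
slope, intercept = np.polyfit(g_true, g_obs, 1)
print("m =", slope - 1.0, "c =", intercept)
```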
Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients
Bruno, Mattia; Lehner, Christoph; Soni, Amarjit
2018-04-20
Here, we propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C 1 and C 2, related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.
Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients
NASA Astrophysics Data System (ADS)
Bruno, Mattia; Lehner, Christoph; Soni, Amarjit; Rbc; Ukqcd Collaborations
2018-04-01
We propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C1 and C2 , related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.
Towards a nonperturbative calculation of weak Hamiltonian Wilson coefficients
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bruno, Mattia; Lehner, Christoph; Soni, Amarjit
Here, we propose a method to compute the Wilson coefficients of the weak effective Hamiltonian to all orders in the strong coupling constant using Lattice QCD simulations. We perform our calculations adopting an unphysically light weak boson mass of around 2 GeV. We demonstrate that systematic errors for the Wilson coefficients C 1 and C 2, related to the current-current four-quark operators, can be controlled and present a path towards precise determinations in subsequent works.
Ferreiro-Rangel, Carlos A; Gelb, Lev D
2013-06-13
Structural and mechanical properties of silica aerogels are studied using a flexible coarse-grained model and a variety of simulation techniques. The model, introduced in a previous study (J. Phys. Chem. C 2007, 111, 15792-15802), consists of spherical "primary" gel particles that interact through weak nonbonded forces and through microscopically motivated interparticle bonds that may break and form during the simulations. Aerogel models are prepared using a three-stage protocol consisting of separate simulations of gelation, aging, and a final relaxation during which no further bond formation is permitted. Models of varying particle size, density, and size dispersity are considered. These are characterized in terms of fractal dimensions and pore size distributions, and generally good agreement with experimental data is obtained for these metrics. The bulk moduli of these materials are studied in detail. Two different techniques for obtaining the bulk modulus are considered, fluctuation analysis and direct compression/expansion simulations. We find that the fluctuation result can be subject to systematic error due to coupling with the simulation barostat but, if performed carefully, yields results equivalent with those of compression/expansion experiments. The dependence of the bulk modulus on density follows a power law with an exponent between 3.00 and 3.15, in agreement with reported experimental results. The best correlate for the bulk modulus appears to be the volumetric bond density, on which there is also a power law dependence. Polydisperse models exhibit lower bulk moduli than comparable monodisperse models, which is due to lower bond densities in the polydisperse materials.
NASA Astrophysics Data System (ADS)
Boumaza, R.; Bencheikh, K.
2017-12-01
Using the so-called operator product expansion to lowest order, we extend the work in Campbell et al (2015 Phys. Rev. Lett 114 125302) by deriving a simple analytical expression for the long-time asymptotic one-body reduced density matrix during free expansion for a one-dimensional system of bosons with large atom number interacting through a repulsive delta potential initially confined by a potential well. This density matrix allows direct access to the momentum distribution and also to the mass current density. For initially confining power-law potentials we give explicit expressions, in the limits of very weak and very strong interaction, for the current density distributions during the free expansion. In the second part of the work we consider the expansion of ultracold gas from a confining harmonic trap to another harmonic trap with a different frequency. For the case of a quantum impenetrable gas of bosons (a Tonks-Girardeau gas) with a given atom number, we present an exact analytical expression for the mass current distribution (mass transport) after release from one harmonic trap to another harmonic trap. It is shown that, for a harmonically quenched Tonks-Girardeau gas, the current distribution is a suitable collective observable and under the weak quench regime, it exhibits oscillations at the same frequencies as those recently predicted for the peak momentum distribution in the breathing mode. The analysis is extended to other possible quenched systems.
CFRP composite mirrors for space telescopes and their micro-dimensional stability
NASA Astrophysics Data System (ADS)
Utsunomiya, Shin; Kamiya, Tomohiro; Shimizu, Ryuzo
2010-07-01
Ultra-lightweight and high-accuracy CFRP (carbon fiber reinforced plastics) mirrors for space telescopes were fabricated to demonstrate their feasibility for visible-wavelength applications. The CTE (coefficient of thermal expansion) of the all-CFRP sandwich panels was tailored to be smaller than 1×10⁻⁷/K. The surface accuracy of mirrors of 150 mm in diameter was 1.8 μm RMS as fabricated, and the surface smoothness was improved to 20 nm RMS by using a replica technique. Moisture expansion was considered the largest source of unpredictable surface-accuracy errors. The moisture expansion affected not only homologous shape change but also out-of-plane distortion, especially in unsymmetrical compositions. Dimensional stability under moisture expansion was compared with a structural mathematical model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Jinkoo, E-mail: jkim3@hfhs.or; Hammoud, Rabih; Pradhan, Deepak
2010-07-15
Purpose: To evaluate different similarity metrics (SM) using natural calcifications and observation-based measures to determine the most accurate prostate and seminal vesicle localization on daily cone-beam CT (CBCT) images. Methods and Materials: CBCT images of 29 patients were retrospectively analyzed; 14 patients with prostate calcifications (calcification data set) and 15 patients without calcifications (no-calcification data set). Three groups of test registrations were performed. Test 1: 70 CT/CBCT pairs from the calcification data set were registered using 17 SMs (6,580 registrations) and compared using the calcification mismatch error as an endpoint. Test 2: Using the four best SMs from Test 1, 75 CT/CBCT pairs in the no-calcification data set were registered (300 registrations). Accuracy of contour overlays was ranked visually. Test 3: For the best SM from Tests 1 and 2, accuracy was estimated using 356 CT/CBCT registrations. Additionally, target expansion margins were investigated for generating registration regions of interest. Results: Test 1: Incremental sign correlation (ISC), gradient correlation (GC), gradient difference (GD), and normalized cross correlation (NCC) showed the smallest errors (μ ± σ: 1.6 ± 0.9 to 2.9 ± 2.1 mm). Test 2: Two of the three reviewers ranked GC higher. Test 3: Using GC, 96% of registrations showed <3-mm error when calcifications were filtered. Errors were left/right: 0.1 ± 0.5 mm, anterior/posterior: 0.8 ± 1.0 mm, and superior/inferior: 0.5 ± 1.1 mm. The existence of calcifications increased the success rate to 97%. Expansion margins of 4-10 mm were equally successful. Conclusion: Gradient-based SMs were most accurate. Estimated error was found to be <3 mm (1.1 mm SD) in 96% of the registrations. Results suggest that the contour expansion margin should be no less than 4 mm.
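A minimal Python sketch of two of the similarity metrics named above, normalized cross correlation and a simple gradient correlation (simplified 2D versions for illustration; the study's implementations operate on 3D CT/CBCT volumes within a registration region of interest):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross correlation of two equally shaped images."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

def gradient_correlation(a, b):
    """Mean NCC of the vertical and horizontal image gradients."""
    gy_a, gx_a = np.gradient(a)
    gy_b, gx_b = np.gradient(b)
    return 0.5 * (ncc(gy_a, gy_b) + ncc(gx_a, gx_b))

img = np.random.default_rng(0).random((64, 64))
moved = np.roll(img, 2, axis=0)                 # a 2-pixel misregistration
print(ncc(img, moved), gradient_correlation(img, moved))
```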
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soudackov, Alexander; Hammes-Schiffer, Sharon
2015-11-17
Rate constant expressions for vibronically nonadiabatic proton transfer and proton-coupled electron transfer reactions are presented and analyzed. The regimes covered include electronically adiabatic and nonadiabatic reactions, as well as high-frequency and low-frequency regimes for the proton donor-acceptor vibrational mode. These rate constants differ from previous rate constants derived with the cumulant expansion approach in that the logarithmic expansion of the vibronic coupling in terms of the proton donor-acceptor distance includes a quadratic as well as a linear term. The analysis illustrates that inclusion of this quadratic term does not significantly impact the rate constants derived using the cumulant expansion approach in any of the regimes studied. The effects of the quadratic term may become significant when using the vibronic coupling expansion in conjunction with a thermal averaging procedure for calculating the rate constant, however, particularly at high temperatures and for proton transfer interfaces with extremely soft proton donor-acceptor modes that are associated with extraordinarily weak hydrogen bonds. Even with the thermal averaging procedure, the effects of the quadratic term for weak hydrogen-bonding systems are less significant for more physically realistic models that prevent the sampling of unphysical short proton donor-acceptor distances, and the expansion of the coupling can be avoided entirely by calculating the couplings explicitly for the range of proton donor-acceptor distances. This analysis identifies the regimes in which each rate constant expression is valid and thus will be important for future applications to proton transfer and proton-coupled electron transfer in chemical and biological processes. We are grateful for support from National Institutes of Health Grant GM056207 (applications to enzymes) and the Center for Molecular Electrocatalysis, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Basic Energy Sciences (applications to molecular electrocatalysts).
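For orientation, a hedged sketch of the expansion under discussion (generic symbols: α and β are attenuation parameters and R̄ a reference distance, not values from the paper). The logarithm of the vibronic coupling V is expanded in the proton donor-acceptor distance R as

$$ V(R) \;=\; V(\bar{R})\,\exp\!\left[-\alpha\,(R-\bar{R}) \;-\; \beta\,(R-\bar{R})^{2}\right], $$

so setting β = 0 recovers the purely linear (single-exponential) attenuation assumed in the earlier cumulant-expansion rate constants.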
Ensemble codes involving hippocampal neurons are at risk during delayed performance tests.
Hampson, R E; Deadwyler, S A
1996-11-26
Multielectrode recording techniques were used to record ensemble activity from 10 to 16 simultaneously active CA1 and CA3 neurons in the rat hippocampus during performance of a spatial delayed-nonmatch-to-sample task. Extracted sources of variance were used to assess the nature of two different types of errors that accounted for 30% of total trials. The two types of errors included ensemble "miscodes" of sample phase information and errors associated with delay-dependent corruption or disappearance of sample information at the time of the nonmatch response. Statistical assessment of trial sequences and associated "strength" of hippocampal ensemble codes revealed that miscoded error trials always followed delay-dependent error trials in which encoding was "weak," indicating that the two types of errors were "linked." It was determined that the occurrence of weakly encoded, delay-dependent error trials initiated an ensemble encoding "strategy" that increased the chances of being correct on the next trial and avoided the occurrence of further delay-dependent errors. Unexpectedly, the strategy involved "strongly" encoding response position information from the prior (delay-dependent) error trial and carrying it forward to the sample phase of the next trial. This produced a miscode type error on trials in which the "carried over" information obliterated encoding of the sample phase response on the next trial. Application of this strategy, irrespective of outcome, was sufficient to reorient the animal to the proper between trial sequence of response contingencies (nonmatch-to-sample) and boost performance to 73% correct on subsequent trials. The capacity for ensemble analyses of strength of information encoding combined with statistical assessment of trial sequences therefore provided unique insight into the "dynamic" nature of the role hippocampus plays in delay type memory tasks.
Modified multiple time scale method for solving strongly nonlinear damped forced vibration systems
NASA Astrophysics Data System (ADS)
Razzak, M. A.; Alam, M. Z.; Sharif, M. N.
2018-03-01
In this paper, a modified multiple time scale (MTS) method is employed to solve strongly nonlinear forced vibration systems. Only the first-order approximation is considered, in order to avoid complexity. The formulation and the solution procedure are easy and straightforward. The classical multiple scales (MS) method and the multiple scales Lindstedt-Poincare (MSLP) method do not give the desired results for strongly nonlinear forced vibration systems with strong damping effects; the main aim of this paper is to remove these limitations. Two examples are considered to illustrate the effectiveness and convenience of the present procedure. The approximate external frequencies and the corresponding approximate solutions are determined by the present method. The results agree well with the corresponding numerical solutions (considered to be exact) and improve on existing results. For weak nonlinearities with weak damping, the absolute relative error of the first-order approximate external frequency in this paper is only 0.07% when the amplitude A = 1.5, while the MSLP method gives a relative error of 28.81%. Furthermore, for strong nonlinearities with strong damping, the absolute relative error found in this article is only 0.02%, whereas the relative error obtained by the MSLP method is 24.18%. Therefore, the present method is not only valid for weakly nonlinear damped forced systems, but also gives better results for strongly nonlinear systems with both small and strong damping effects.
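As a small worked illustration of the error figures quoted above (Python; the frequency values are hypothetical stand-ins chosen to reproduce the reported percentages):

```python
def relative_error(approx, exact):
    """Absolute relative error, in percent."""
    return abs(approx - exact) / abs(exact) * 100.0

# Hypothetical frequencies: a numerical reference and two approximations.
w_num, w_mts, w_mslp = 1.0000, 1.0007, 1.2881
print(relative_error(w_mts, w_num))    # 0.07 (present method, A = 1.5)
print(relative_error(w_mslp, w_num))   # 28.81 (MSLP method)
```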
Joint Source-Channel Coding by Means of an Oversampled Filter Bank Code
NASA Astrophysics Data System (ADS)
Marinkovic, Slavica; Guillemot, Christine
2006-12-01
Quantized frame expansions based on block transforms and oversampled filter banks (OFBs) have been considered recently as joint source-channel codes (JSCCs) for erasure and error-resilient signal transmission over noisy channels. In this paper, we consider a coding chain involving an OFB-based signal decomposition followed by scalar quantization and a variable-length code (VLC) or a fixed-length code (FLC). This paper first examines the problem of channel error localization and correction in quantized OFB signal expansions. The error localization problem is treated as an M-ary hypothesis testing problem. The likelihood values are derived from the joint pdf of the syndrome vectors under various hypotheses of impulse noise positions, and in a number of consecutive windows of the received samples. The error amplitudes are then estimated by solving the syndrome equations in the least-squares sense. The message signal is reconstructed from the corrected received signal by a pseudoinverse receiver. We then improve the error localization procedure by introducing per-symbol reliability information in the hypothesis testing procedure of the OFB syndrome decoder. The per-symbol reliability information is produced by the soft-input soft-output (SISO) VLC/FLC decoders. This leads to the design of an iterative algorithm for joint decoding of an FLC and an OFB code. The performance of the algorithms developed is evaluated in a wavelet-based image coding system.
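A toy Python sketch of the amplitude-estimation step described above: once impulse-noise positions are hypothesized, the error amplitudes follow from solving the syndrome equations in the least-squares sense (the matrix H here is a random stand-in for the OFB parity-check operator):

```python
import numpy as np

rng = np.random.default_rng(3)
H = rng.normal(size=(8, 32))            # stand-in syndrome (parity-check) matrix

e_true = np.zeros(32)
e_true[[5, 17]] = [0.8, -1.2]           # two impulse-noise errors
s = H @ e_true                          # observed syndrome vector

pos = [5, 17]                           # hypothesized error positions
e_hat, *_ = np.linalg.lstsq(H[:, pos], s, rcond=None)
print(e_hat)                            # recovers ~[0.8, -1.2]
```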
Diagnosis of Cognitive Errors by Statistical Pattern Recognition Methods.
ERIC Educational Resources Information Center
Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.
The rule space model permits measurement of cognitive skill acquisition, diagnosis of cognitive errors, and detection of the strengths and weaknesses of knowledge possessed by individuals. Two ways to classify an individual into his or her most plausible latent state of knowledge include: (1) hypothesis testing--Bayes' decision rules for minimum…
[Coalition tactics of the weak in the power struggle].
Yamaguchi, H
1991-02-01
This study investigated the coalition tactics of the weak in a situation where four players in a power relationship such that "A > B = C = D, A < (B + C + D)" struggled for new resources of power. Subjects were 128 male undergraduates divided into 32 groups of four members each. The experimental design was 2 (determinant of power strength: resource size or rank order) x 2 (range of power distance between the strong and the weak: large or small). The results revealed that the weak players preferred the revolutional coalition "BCD" under the condition where resource size determined power strength, while they preferred the getting-ahead coalitions "AB, AC, AD" under the condition where rank order determined it, and that expansion of power distance reinforced these tendencies. It was also shown, however, that the weak players did not always form the coalitions they had hoped for before bargaining. In conclusion, the necessity of examining the characteristics of the weak players' mentalities and behaviors in coalition bargaining was suggested.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu Wei; Li Hui; Li Shengtai
Nonlinear ideal magnetohydrodynamic (MHD) simulations of the propagation and expansion of a magnetic "bubble" plasma into a lower density, weakly magnetized background plasma are presented. These simulations mimic the geometry and parameters of the Plasma Bubble Expansion Experiment (PBEX) [A. G. Lynn, Y. Zhang, S. C. Hsu, H. Li, W. Liu, M. Gilmore, and C. Watts, Bull. Am. Phys. Soc. 52, 53 (2007)], which is studying magnetic bubble expansion as a model for extragalactic radio lobes. The simulations predict several key features of the bubble evolution. First, the direction of bubble expansion depends on the ratio of the bubble toroidal to poloidal magnetic field, with a higher ratio leading to expansion predominantly in the direction of propagation and a lower ratio leading to expansion predominantly normal to the direction of propagation. Second, an MHD shock and a trailing slow-mode compressible MHD wavefront are formed ahead of the bubble as it propagates into the background plasma. Third, the bubble expansion and propagation develop asymmetries about the propagation axis due to reconnection facilitated by numerical resistivity and to inhomogeneous angular momentum transport mainly due to the background magnetic field. These results will help guide the initial experiments and diagnostic measurements on PBEX.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Applegate, D. E; Mantz, A.; Allen, S. W.
This is the fourth in a series of papers studying the astrophysics and cosmology of massive, dynamically relaxed galaxy clusters. Here, we use measurements of weak gravitational lensing from the Weighing the Giants project to calibrate Chandra X-ray measurements of total mass that rely on the assumption of hydrostatic equilibrium. This comparison of X-ray and lensing masses measures the combined bias of X-ray hydrostatic masses from both astrophysical and instrumental sources. While we cannot disentangle the two sources of bias, only the combined bias is relevant for calibrating cosmological measurements using relaxed clusters. Assuming a fixed cosmology, and within a characteristic radius (r2500) determined from the X-ray data, we measure a lensing to X-ray mass ratio of 0.96 ± 9% (stat) ± 9% (sys). We find no significant trends of this ratio with mass, redshift or the morphological indicators used to select the sample. Our results imply that any departures from hydrostatic equilibrium at these radii are offset by calibration errors of comparable magnitude, with large departures of tens-of-percent unlikely. In addition, we find a mean concentration of the sample measured from lensing data of c200 = 3.0 (+4.4/−1.8). In conclusion, anticipated short-term improvements in lensing systematics, and a modest expansion of the relaxed lensing sample, can easily increase the measurement precision by 30-50%, leading to similar improvements in cosmological constraints that employ X-ray hydrostatic mass estimates, such as on Ωm from the cluster gas mass fraction.
Applegate, D. E; Mantz, A.; Allen, S. W.; ...
2016-02-04
This is the fourth in a series of papers studying the astrophysics and cosmology of massive, dynamically relaxed galaxy clusters. Here, we use measurements of weak gravitational lensing from the Weighing the Giants project to calibrate Chandra X-ray measurements of total mass that rely on the assumption of hydrostatic equilibrium. This comparison of X-ray and lensing masses measures the combined bias of X-ray hydrostatic masses from both astrophysical and instrumental sources. While we cannot disentangle the two sources of bias, only the combined bias is relevant for calibrating cosmological measurements using relaxed clusters. Assuming a fixed cosmology, and within a characteristic radius (r2500) determined from the X-ray data, we measure a lensing to X-ray mass ratio of 0.96 ± 9% (stat) ± 9% (sys). We find no significant trends of this ratio with mass, redshift or the morphological indicators used to select the sample. Our results imply that any departures from hydrostatic equilibrium at these radii are offset by calibration errors of comparable magnitude, with large departures of tens-of-percent unlikely. In addition, we find a mean concentration of the sample measured from lensing data of c200 = 3.0 (+4.4/−1.8). In conclusion, anticipated short-term improvements in lensing systematics, and a modest expansion of the relaxed lensing sample, can easily increase the measurement precision by 30-50%, leading to similar improvements in cosmological constraints that employ X-ray hydrostatic mass estimates, such as on Ωm from the cluster gas mass fraction.
NASA Astrophysics Data System (ADS)
Applegate, D. E.; Mantz, A.; Allen, S. W.; von der Linden, A.; Morris, R. Glenn; Hilbert, S.; Kelly, Patrick L.; Burke, D. L.; Ebeling, H.; Rapetti, D. A.; Schmidt, R. W.
2016-04-01
This is the fourth in a series of papers studying the astrophysics and cosmology of massive, dynamically relaxed galaxy clusters. Here, we use measurements of weak gravitational lensing from the Weighing the Giants project to calibrate Chandra X-ray measurements of total mass that rely on the assumption of hydrostatic equilibrium. This comparison of X-ray and lensing masses measures the combined bias of X-ray hydrostatic masses from both astrophysical and instrumental sources. While we cannot disentangle the two sources of bias, only the combined bias is relevant for calibrating cosmological measurements using relaxed clusters. Assuming a fixed cosmology, and within a characteristic radius (r2500) determined from the X-ray data, we measure a lensing to X-ray mass ratio of 0.96 ± 9 per cent (stat) ± 9 per cent (sys). We find no significant trends of this ratio with mass, redshift or the morphological indicators used to select the sample. Our results imply that any departures from hydrostatic equilibrium at these radii are offset by calibration errors of comparable magnitude, with large departures of tens-of-percent unlikely. In addition, we find a mean concentration of the sample measured from lensing data of c_{200} = 3.0_{-1.8}^{+4.4}. Anticipated short-term improvements in lensing systematics, and a modest expansion of the relaxed lensing sample, can easily increase the measurement precision by 30-50 per cent, leading to similar improvements in cosmological constraints that employ X-ray hydrostatic mass estimates, such as on Ωm from the cluster gas mass fraction.
Exploration of multiphoton entangled states by using weak nonlinearities
He, Ying-Qiu; Ding, Dong; Yan, Feng-Li; Gao, Ting
2016-01-01
We propose a fruitful scheme for exploring multiphoton entangled states based on linear optics and weak nonlinearities. Compared with the previous schemes the present method is more feasible because there are only small phase shifts instead of a series of related functions of photon numbers in the process of interaction with Kerr nonlinearities. In the absence of decoherence we analyze the error probabilities induced by homodyne measurement and show that the maximal error probability can be made small enough even when the number of photons is large. This implies that the present scheme is quite tractable and it is possible to produce entangled states involving a large number of photons. PMID:26751044
Quintessential inflation from a variable cosmological constant in a 5D vacuum
NASA Astrophysics Data System (ADS)
Membiela, Agustin; Bellini, Mauricio
2006-10-01
We explore an effective 4D cosmological model for the universe in which a variable cosmological constant governs its evolution and the pressure remains negative throughout the expansion. This model is introduced from a 5D vacuum state where the (space-like) extra coordinate is considered noncompact. The expansion is driven by the inflaton field, which is considered nonminimally coupled to gravity. We conclude from experimental data that the coupling of the inflaton to gravity should be weak, but variable in different epochs of the evolution of the universe.
NASA Astrophysics Data System (ADS)
Aarts, Gert; Laurie, Nathan; Tranberg, Anders
2008-12-01
The 1/N expansion of the two-particle irreducible effective action offers a powerful approach to study quantum field dynamics far from equilibrium. We investigate the effective convergence of the 1/N expansion in the O(N) model by comparing results obtained numerically in 1+1 dimensions at leading, next-to-leading and next-to-next-to-leading order in 1/N as well as in the weak coupling limit. A comparison in classical statistical field theory, where exact numerical results are available, is made as well. We focus on early-time dynamics and quasiparticle properties far from equilibrium and observe rapid effective convergence already for moderate values of 1/N or the coupling.
Pinning by rare defects and effective mobility for elastic interfaces in high dimensions
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Démery, Vincent; Rosso, Alberto
2018-06-01
The existence of a depinning transition for a high-dimensional interface in a weakly disordered medium is controversial. Following Larkin arguments and a perturbative expansion, one expects a linear response with a renormalized mobility. In this paper, we compare these predictions with the exact solution of a fully connected model, which displays a finite critical force. At small disorder, we unveil an intermediary linear regime characterized by the renormalized mobility. Our results suggest that in high dimension the critical force is always finite and determined by the effect of rare impurities that is missed by the perturbative expansion. However, the perturbative expansion correctly describes an intermediate regime that should be visible at small disorder.
Cluster-Expansion Model for Complex Quinary Alloys: Application to Alnico Permanent Magnets
NASA Astrophysics Data System (ADS)
Nguyen, Manh Cuong; Zhou, Lin; Tang, Wei; Kramer, Matthew J.; Anderson, Iver E.; Wang, Cai-Zhuang; Ho, Kai-Ming
2017-11-01
An accurate and transferable cluster-expansion model for complex quinary alloys is developed. Lattice Monte Carlo simulation enabled by this cluster-expansion model is used to investigate the temperature-dependent atomic structure of alnico alloys, which are considered promising high-performance non-rare-earth permanent-magnet materials for high-temperature applications. The results of the Monte Carlo simulations are consistent with available experimental data and provide useful insights into phase decomposition, selection, and chemical ordering in alnico. The simulations also reveal a previously unrecognized D0₃ alloy phase. This phase is very rich in Ni and exhibits very weak magnetization. Manipulating the size and location of this phase provides a possible route to improve the magnetic properties of alnico, especially coercivity.
Chen, Hsing Hung; Shen, Tao; Xu, Xin-long; Ma, Chao
2013-01-01
The characteristics of a firm's expansion by differentiated products and by diversified products are quite different. However, a study employing absorptive capacity to examine the impacts of these different modes of expansion on the performance of small solar energy firms has not been reported before. Therefore, a conceptual model to analyze the tension between strategies and corporate performance is proposed to fill the vacancy. After empirical investigation, the results show that stronger organizational institutions help small solar energy firms that expanded by differentiated products increase consistency between strategies and corporate performance; conversely, stronger working attitudes with weak management controls help small solar energy firms that expanded by diversified products reduce the variance between strategies and corporate performance. PMID:24453837
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuchinskii, E. Z., E-mail: kuchinsk@iep.uran.ru; Kuleeva, N. A.; Sadovskii, M. V., E-mail: sadovski@iep.uran.ru
We derive a Ginzburg–Landau (GL) expansion in the disordered attractive Hubbard model within the combined Nozieres–Schmitt-Rink and DMFT+Σ approximation. Restricting ourselves to the homogeneous expansion, we analyze the disorder dependence of the GL expansion coefficients for a wide range of attractive potentials U, from the weak-coupling BCS region to the strong-coupling limit, where superconductivity is described by Bose–Einstein condensation (BEC) of preformed Cooper pairs. We show that for a semielliptic "bare" density of states of the conduction band, the disorder influence on the coefficients A and B before the quadratic and quartic terms of the order parameter, as well as on the specific heat discontinuity at the superconducting transition, is of a universal nature at any strength of the attractive interaction and is related only to the general widening of the conduction band by disorder. In general, disorder growth increases the values of the coefficients A and B, leading either to a suppression of the specific heat discontinuity (in the weak-coupling limit) or to its significant growth (in the strong-coupling region). However, this behavior actually confirms the validity of the generalized Anderson theorem, because the disorder dependence of the superconducting transition temperature Tc is also controlled only by the disorder widening of the conduction band (density of states).
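As a reference for the coefficients A and B discussed above, a minimal sketch of the standard homogeneous GL free-energy expansion in the order parameter Δ (generic textbook form, not the paper's microscopic derivation):

$$ F[\Delta] \;=\; F_{n} + A\,|\Delta|^{2} + \frac{B}{2}\,|\Delta|^{4}, \qquad A \propto (T - T_{c}), $$

so that minimization below T_c gives |Δ|² = −A/B, and the specific heat discontinuity at the transition scales as ΔC ∝ T_c (dA/dT)²/B, which is why the disorder dependence of A and B controls the discontinuity.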
Msh2-Msh3 Interferes with Okazaki Fragment Processing to Promote Trinucleotide Repeat Expansions
Kantartzis, Athena; Williams, Gregory M.; Balakrishnan, Lata; Roberts, Rick L.; Surtees, Jennifer A.; Bambara, Robert A.
2012-01-01
Trinucleotide repeat (TNR) expansions are the underlying cause of more than forty neurodegenerative and neuromuscular diseases, including myotonic dystrophy and Huntington's disease. Although genetic evidence has attributed the cause of these diseases to errors in DNA replication and/or repair, clear molecular mechanisms have not been described. We have focused on the role of the mismatch repair complex Msh2-Msh3 in promoting TNR expansions. We demonstrate that Msh2-Msh3 promotes CTG and CAG repeat expansions in vivo in Saccharomyces cerevisiae. We further provide biochemical evidence that Msh2-Msh3 directly interferes with normal Okazaki fragment processing by flap endonuclease 1 (Rad27) and DNA ligase I (Cdc9) in the presence of TNR sequences, thereby producing small, incremental expansion events. We believe that this is the first mechanistic evidence showing the interplay of replication and repair proteins in the expansion of sequences during lagging strand DNA replication. PMID:22938864
Msh2-Msh3 interferes with Okazaki fragment processing to promote trinucleotide repeat expansions.
Kantartzis, Athena; Williams, Gregory M; Balakrishnan, Lata; Roberts, Rick L; Surtees, Jennifer A; Bambara, Robert A
2012-08-30
Trinucleotide repeat (TNR) expansions are the underlying cause of more than 40 neurodegenerative and neuromuscular diseases, including myotonic dystrophy and Huntington's disease. Although genetic evidence points to errors in DNA replication and/or repair as the cause of these diseases, clear molecular mechanisms have not been described. Here, we focused on the role of the mismatch repair complex Msh2-Msh3 in promoting TNR expansions. We demonstrate that Msh2-Msh3 promotes CTG and CAG repeat expansions in vivo in Saccharomyces cerevisiae. Furthermore, we provide biochemical evidence that Msh2-Msh3 directly interferes with normal Okazaki fragment processing by flap endonuclease 1 (Rad27) and DNA ligase I (Cdc9) in the presence of TNR sequences, thereby producing small, incremental expansion events. We believe that this is the first mechanistic evidence showing the interplay of replication and repair proteins in the expansion of sequences during lagging-strand DNA replication. Copyright © 2012 The Authors. Published by Elsevier Inc. All rights reserved.
A technique for measuring hypersonic flow velocity profiles
NASA Technical Reports Server (NTRS)
Gartrell, L. R.
1973-01-01
A technique for measuring hypersonic flow velocity profiles is described. This technique utilizes an arc-discharge electron-beam system to produce a luminous disturbance in the flow, and the time of flight of this disturbance was measured. Experimental tests were conducted in the Langley pilot model expansion tube. The measured velocities were of the order of 6000 m/sec over a free-stream density range from 0.000196 to 0.00186 kg/cu m. The fractional error in the velocity measurements was less than 5 percent. Long arc-discharge columns (0.356 m) were generated under hypersonic flow conditions in the expansion tube modified to operate as an expansion tunnel.
Analysis of Errors Made by Students Solving Genetics Problems.
ERIC Educational Resources Information Center
Costello, Sandra Judith
The purpose of this study was to analyze the errors made by students solving genetics problems. A sample of 10 non-science undergraduate students was obtained from a private college in Northern New Jersey. The results support prior research in the area of genetics education and show that a weak understanding of the relationship of meiosis to…
Dark Energy Survey Year 1 Results: Weak Lensing Mass Calibration of redMaPPer Galaxy Clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
McClintock, T.; et al.
We constrain the mass-richness scaling relation of redMaPPer galaxy clusters identified in the Dark Energy Survey Year 1 data using weak gravitational lensing. We split clusters into $4\times3$ bins of richness $\lambda$ and redshift $z$ for $\lambda\geq20$ and $0.2 \leq z \leq 0.65$ and measure the mean masses of these bins using their stacked weak lensing signal. By modeling the scaling relation as $\langle M_{\rm 200m}|\lambda,z\rangle = M_0 (\lambda/40)^F ((1+z)/1.35)^G$, we constrain the normalization of the scaling relation at the 5.0 per cent level as $M_0 = [3.081 \pm 0.075 ({\rm stat}) \pm 0.133 ({\rm sys})] \cdot 10^{14}\ {\rm M}_\odot$ at $\lambda=40$ and $z=0.35$. The richness scaling index is constrained to be $F=1.356 \pm 0.051\ ({\rm stat})\pm 0.008\ ({\rm sys})$ and the redshift scaling index $G=-0.30\pm 0.30\ ({\rm stat})\pm 0.06\ ({\rm sys})$. These are the tightest measurements of the normalization and richness scaling index made to date. We use a semi-analytic covariance matrix to characterize the statistical errors in the recovered weak lensing profiles. Our analysis accounts for the following sources of systematic error: shear and photometric redshift errors, cluster miscentering, cluster member dilution of the source sample, systematic uncertainties in the modeling of the halo-mass correlation function, halo triaxiality, and projection effects. We discuss prospects for reducing this systematic error budget, which dominates the uncertainty on $M_0$. Our result is in excellent agreement with, but has significantly smaller uncertainties than, previous measurements in the literature, and augurs well for the power of the DES cluster survey as a tool for precision cosmology and upcoming galaxy surveys such as LSST, Euclid and WFIRST.
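A minimal Python sketch evaluating the best-fit scaling relation quoted above (central values only; the mean_mass helper is illustrative and uncertainties are ignored):

```python
M0, F, G = 3.081, 1.356, -0.30   # best-fit values from the abstract

def mean_mass(lam, z):
    """<M_200m | lambda, z> in units of 1e14 solar masses."""
    return M0 * (lam / 40.0) ** F * ((1.0 + z) / 1.35) ** G

print(mean_mass(40.0, 0.35))     # 3.081 by construction (the pivot point)
print(mean_mass(80.0, 0.50))     # a richer, higher-redshift bin
```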
A New Closed Form Approximation for BER for Optical Wireless Systems in Weak Atmospheric Turbulence
NASA Astrophysics Data System (ADS)
Kaushik, Rahul; Khandelwal, Vineet; Jain, R. C.
2018-04-01
The weak atmospheric turbulence condition in an optical wireless communication (OWC) system is captured by the log-normal distribution. The analytical evaluation of the average bit error rate (BER) of an OWC system under weak turbulence is intractable, as it involves the statistical averaging of the Gaussian Q-function over the log-normal distribution. In this paper, a simple closed-form approximation for the BER of an OWC system under weak turbulence is given. Computation of the BER for various modulation schemes is carried out using the proposed expression. The results obtained using the proposed expression compare favorably with those obtained using the Gauss-Hermite quadrature approximation and Monte Carlo simulations.
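A small Python sketch of the two reference evaluations mentioned above, averaging the Gaussian Q-function over unit-mean log-normal fading by Gauss-Hermite quadrature and by Monte Carlo (the snr and sigma values are illustrative):

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss
from scipy.special import erfc

def q_func(x):
    return 0.5 * erfc(x / np.sqrt(2.0))

def ber_gauss_hermite(snr, sigma, n=30):
    """E[Q(snr*h)] over unit-mean log-normal fading h, by quadrature."""
    x, w = hermgauss(n)
    h = np.exp(np.sqrt(2.0) * sigma * x - sigma ** 2 / 2.0)
    return np.sum(w * q_func(snr * h)) / np.sqrt(np.pi)

def ber_monte_carlo(snr, sigma, n=200_000, seed=0):
    h = np.exp(np.random.default_rng(seed).normal(-sigma ** 2 / 2.0, sigma, n))
    return q_func(snr * h).mean()

print(ber_gauss_hermite(3.0, 0.3), ber_monte_carlo(3.0, 0.3))
```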
Key management and encryption under the bounded storage model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Draelos, Timothy John; Neumann, William Douglas; Lanzone, Andrew J.
2005-11-01
There are several engineering obstacles that need to be solved before key management and encryption under the bounded storage model can be realized. One of the critical obstacles hindering its adoption is the construction of a scheme that achieves reliable communication in the event that timing synchronization errors occur. One of the main accomplishments of this project was the development of a new scheme that solves this problem. We show in general that there exist message encoding techniques under the bounded storage model that provide an arbitrarily small probability of transmission error. We compute the maximum capacity of this channel using the unsynchronized key-expansion as side-channel information at the decoder and provide tight lower bounds for a particular class of key-expansion functions that are pseudo-invariant to timing errors. Using our results in combination with the encryption scheme of Dziembowski et al. [11], we can construct a scheme that solves the timing synchronization error problem. In addition to this work we conducted a detailed case study of current and future storage technologies. We analyzed the cost, capacity, and storage data rate of various technologies, so that precise security parameters can be developed for bounded storage encryption schemes. This will provide an invaluable tool for developing these schemes in practice.
ERIC Educational Resources Information Center
Rajendran, Gnanathusharan; Mitchell, Peter
2007-01-01
This article considers three theories of autism: The Theory of Mind Deficit, Executive Dysfunction and the Weak Central Coherence accounts. It outlines each along with studies relevant to their emergence, their expansion, their limitations and their possible integration. Furthermore, consideration is given to any implication from the theories in…
Instabilities in rapid directional solidification under weak flow
NASA Astrophysics Data System (ADS)
Kowal, Katarzyna N.; Davis, Stephen H.; Voorhees, Peter W.
2017-12-01
We examine a rapidly solidifying binary alloy under directional solidification with nonequilibrium interfacial thermodynamics, viz. the segregation coefficient and the liquidus slope are speed dependent, and attachment-kinetic effects are present. Both of these effects alone give rise to (steady) cellular instabilities, mode S, and a pulsatile instability, mode P. We examine how a weak imposed boundary-layer flow of magnitude |V| affects these instabilities. For small |V|, mode S becomes a traveling wave and the flow stabilizes (destabilizes) the interface for small (large) surface energies. For small |V|, mode P has a critical wave number that shifts from zero to nonzero, giving spatial structure. The flow promotes this instability, and the frequencies of the complex conjugate pairs each increase (decrease) with flow for large (small) wave numbers. These results are obtained by regular perturbation theory in powers of V far from the point where the neutral curves cross, but a modified expansion in powers of $V^{1/3}$ is required near the crossing. A uniform composite expansion is then obtained, valid for all small |V|.
Improvement of Accuracy for Background Noise Estimation Method Based on TPE-AE
NASA Astrophysics Data System (ADS)
Itai, Akitoshi; Yasukawa, Hiroshi
This paper proposes a method of background noise estimation based on the tensor product expansion with a median and a Monte Carlo simulation. We have shown that a tensor product expansion with the absolute error method is effective for estimating background noise; however, the background noise might not be estimated properly by the conventional method. In this paper, it is shown that the estimation accuracy can be improved by using the proposed methods.
ZZ-Type a posteriori error estimators for adaptive boundary element methods on a curve
Feischl, Michael; Führer, Thomas; Karkulik, Michael; Praetorius, Dirk
2014-01-01
In the context of the adaptive finite element method (FEM), ZZ-error estimators named after Zienkiewicz and Zhu (1987) [52] are mathematically well-established and widely used in practice. In this work, we propose and analyze ZZ-type error estimators for the adaptive boundary element method (BEM). We consider weakly singular and hyper-singular integral equations and prove, in particular, convergence of the related adaptive mesh-refining algorithms. Throughout, the theoretical findings are underlined by numerical experiments. PMID:24748725
Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students
NASA Astrophysics Data System (ADS)
Priyani, H. A.; Ekawati, R.
2018-01-01
Indonesian students' competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors made. Hence, this study aimed at identifying students' errors in solving mathematical problems in TIMSS on the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students who made the most errors on the test indicators, selected from 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving Applying-level problems, the type of error that students made was operational errors. In addition, for Reasoning-level problems, three types of errors were made: conceptual errors, operational errors, and principle errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.
NASA Astrophysics Data System (ADS)
Matras, A.
2017-08-01
The paper discusses the impact of feed-screw heating on machining accuracy. The test stand was built based on a Haas Mini Mill 2 CNC milling machine and a FLIR SC620 infrared camera. Measurements of the workpiece were performed on a Taylor Hobson Talysurf Intra 50 profilometer. The research showed that 60 minutes of intensive milling-machine operation caused thermal expansion of the feed screw, which influenced the dimensional error of the workpiece.
Polyakov loop correlator in perturbation theory
Berwein, Matthias; Brambilla, Nora; Petreczky, Péter; ...
2017-07-25
We study the Polyakov loop correlator in the weak coupling expansion and show how the perturbative series re-exponentiates into singlet and adjoint contributions. We calculate the order $g^7$ correction to the Polyakov loop correlator in the short distance limit. We show how the singlet and adjoint free energies arising from the re-exponentiation formula of the Polyakov loop correlator are related to the gauge invariant singlet and octet free energies that can be defined in pNRQCD, namely we find that the two definitions agree at leading order in the multipole expansion, but differ at first order in the quark-antiquark distance.
Lattice QCD phase diagram in and away from the strong coupling limit.
de Forcrand, Ph; Langelage, J; Philipsen, O; Unger, W
2014-10-10
We study lattice QCD with four flavors of staggered quarks. In the limit of infinite gauge coupling, "dual" variables can be introduced, which render the finite-density sign problem mild and allow a full determination of the μ-T phase diagram by Monte Carlo simulations, also in the chiral limit. However, the continuum limit coincides with the weak coupling limit. We propose a strong-coupling expansion approach towards the continuum limit. We show first results, including the phase diagram and its chiral critical point, from this expansion truncated at next-to-leading order.
Breathing pulses in singularly perturbed reaction-diffusion systems
NASA Astrophysics Data System (ADS)
Veerman, Frits
2015-07-01
The weakly nonlinear stability of pulses in general singularly perturbed reaction-diffusion systems near a Hopf bifurcation is determined using a centre manifold expansion. A general framework to obtain leading order expressions for the (Hopf) centre manifold expansion for scale separated, localised structures is presented. Using the scale separated structure of the underlying pulse, directly calculable expressions for the Hopf normal form coefficients are obtained in terms of solutions to classical Sturm-Liouville problems. The developed theory is used to establish the existence of breathing pulses in a slowly nonlinear Gierer-Meinhardt system, and is confirmed by direct numerical simulation.
The JPL Cryogenic Dilatometer: Measuring the Thermal Expansion Coefficient of Aerospace Materials
NASA Technical Reports Server (NTRS)
Halverson, Peter G.; Dudick, Matthew J.; Karlmann, Paul; Klein, Kerry J.; Levine, Marie; Marcin, Martin; Parker, Tyler J.; Peters, Robert D.; Shaklan, Stuart; VanBuren, David
2007-01-01
This slide presentation details the cryogenic dilatometer used by JPL to measure the thermal expansion coefficient of aerospace materials. Included are a system diagram, pictures of the dilatometer chamber and the laser source, a description of the laser source, pictures of the interferometer, and block diagrams and pictures of the electronics and software. There is also a brief review of the accuracy/error budget. The materials tested are described, the results are shown as strain curves, the JPL measured-strain fits are described, and the coefficient of thermal expansion (CTE) is given for the materials tested.
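The quantity being reported, the CTE, is the temperature derivative of the measured strain curve. A minimal sketch of extracting a CTE curve from strain-versus-temperature data by polynomial fitting (the data here are synthetic, not JPL's):

```python
import numpy as np
from numpy.polynomial import Polynomial

# alpha(T) = d(strain)/dT: fit a low-order polynomial to measured
# strain-vs-temperature data, then differentiate the fit.
T = np.linspace(40.0, 300.0, 27)                          # K
strain = 1e-6 * (0.5 * (T - 40) + 2e-3 * (T - 40) ** 2)   # dL/L, synthetic

fit = Polynomial.fit(T, strain, deg=3)
cte = fit.deriv()                                         # alpha(T), 1/K

print(f"CTE at 293 K: {cte(293.0):.3e} 1/K")
```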
A Weak Galerkin Method for the Reissner–Mindlin Plate in Primary Form
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
We developed a new finite element method for the Reissner–Mindlin equations in its primary form by using the weak Galerkin approach. Like other weak Galerkin finite element methods, this one is highly flexible and robust by allowing the use of discontinuous approximating functions on arbitrary shapes of polygons and, at the same time, is parameter independent in its stability and convergence. Furthermore, error estimates of optimal order in mesh size h are established for the corresponding weak Galerkin approximations. Numerical experiments are conducted for verifying the convergence theory, as well as suggesting some superconvergence and a uniform convergence of the method with respect to the plate thickness.
[The SWOT analysis and strategic considerations for the present medical devices' procurement].
Li, Bin; He, Meng-qiao; Cao, Jian-wen
2006-05-01
In this paper, the SWOT analysis method is used to find out the internal strength, weakness, exterior opportunities and threats of the present medical devices' procurements in hospitals and some strategic considerations are suggested as "one direction, two expansions, three changes and four countermeasures".
Magnetometer-augmented IMU simulator: in-depth elaboration.
Brunner, Thomas; Lauffenburger, Jean-Philippe; Changey, Sébastien; Basset, Michel
2015-03-04
The location of objects is a growing research topic due, for instance, to the expansion of civil drones or intelligent vehicles. This expansion was made possible through the development of microelectromechanical systems (MEMS), inexpensive and miniaturized inertial sensors. In this context, this article describes the development of a new simulator which generates sensor measurements, giving a specific input trajectory. This will allow the comparison of pose estimation algorithms. To develop this simulator, the measurement equations of every type of sensor have to be analytically determined. To achieve this objective, classical kinematic equations are used for the more common sensors, i.e., accelerometers and rate gyroscopes. As nowadays, the MEMS inertial measurement units (IMUs) are generally magnetometer-augmented, an absolute world magnetic model is implemented. After the determination of the perfect measurement (through the error-free sensor models), realistic error models are developed to simulate real IMU behavior. Finally, the developed simulator is subjected to different validation tests.
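A minimal illustration of the simulator's two stages, an error-free measurement followed by a realistic error model, might look as follows for a single gyro axis; the bias, scale-factor and noise values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

t = np.linspace(0.0, 10.0, 1000)                 # s
true_rate = 0.2 * np.sin(2 * np.pi * 0.5 * t)    # rad/s, error-free gyro

bias = 0.01          # rad/s, constant bias
scale_error = 0.02   # 2% scale-factor error
noise_std = 0.005    # rad/s, white measurement noise

measured = (1.0 + scale_error) * true_rate + bias \
           + rng.normal(0.0, noise_std, t.size)

# Integrate both signals to see how sensor errors accumulate in the angle.
dt = t[1] - t[0]
angle_true = np.cumsum(true_rate) * dt
angle_meas = np.cumsum(measured) * dt
print(f"final angle drift: {angle_meas[-1] - angle_true[-1]:.4f} rad")
```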
The Rate of Beneficial Mutations Surfing on the Wave of a Range Expansion
Lehe, Rémi; Hallatschek, Oskar; Peliti, Luca
2012-01-01
Many theoretical and experimental studies suggest that range expansions can have severe consequences for the gene pool of the expanding population. Due to strongly enhanced genetic drift at the advancing frontier, neutral and weakly deleterious mutations can reach large frequencies in the newly colonized regions, as if they were surfing the front of the range expansion. These findings raise the question of how frequently beneficial mutations successfully surf at shifting range margins, thereby promoting adaptation towards a range-expansion phenotype. Here, we use individual-based simulations to study the surfing statistics of recurrent beneficial mutations on wave-like range expansions in linear habitats. We show that the rate of surfing depends on two strongly antagonistic factors, the probability of surfing given the spatial location of a novel mutation and the rate of occurrence of mutations at that location. The surfing probability strongly increases towards the tip of the wave. Novel mutations are unlikely to surf unless they enjoy a spatial head start compared to the bulk of the population. The needed head start is shown to be proportional to the inverse fitness of the mutant type, and only weakly dependent on the carrying capacity. The precise location dependence of surfing probabilities is derived from the non-extinction probability of a branching process within a moving field of growth rates. The second factor, the mutation occurrence, strongly decreases towards the tip of the wave. Thus, most successful mutations arise at an intermediate position in the front of the wave. We present an analytic theory for the tradeoff between these factors that allows one to predict how frequently substitutions by beneficial mutations occur at invasion fronts. We find that small amounts of genetic drift increase the fixation rate of beneficial mutations at the advancing front, and thus could be important for adaptation during species invasions. PMID:22479175
Comment on "Infants' perseverative search errors are induced by pragmatic misinterpretation".
Spencer, John P; Dineva, Evelina; Smith, Linda B
2009-09-25
Topál et al. (Reports, 26 September 2008, p. 1831) proposed that infants' perseverative search errors can be explained by ostensive cues from the experimenter. We use the dynamic field theory to test the proposal that infants encode locations more weakly when social cues are present. Quantitative simulations show that this account explains infants' performance without recourse to the theory of natural pedagogy.
Modeling Errors in Daily Precipitation Measurements: Additive or Multiplicative?
NASA Technical Reports Server (NTRS)
Tian, Yudong; Huffman, George J.; Adler, Robert F.; Tang, Ling; Sapiano, Matthew; Maggioni, Viviana; Wu, Huan
2013-01-01
The definition and quantification of uncertainty depend on the error model used. For uncertainties in precipitation measurements, two types of error models have been widely adopted: the additive error model and the multiplicative error model. This leads to incompatible specifications of uncertainties and impedes intercomparison and application. In this letter, we assess the suitability of both models for satellite-based daily precipitation measurements in an effort to clarify the uncertainty representation. Three criteria were employed to evaluate the applicability of either model: (1) better separation of the systematic and random errors; (2) applicability to the large range of variability in daily precipitation; and (3) better predictive skills. It is found that the multiplicative error model is a much better choice under all three criteria. It extracted the systematic errors more cleanly, was more consistent with the large variability of precipitation measurements, and produced superior predictions of the error characteristics. The additive error model had several weaknesses, such as nonconstant variance resulting from systematic errors leaking into random errors, and a lack of prediction capability. Therefore, the multiplicative error model is the better choice.
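The contrast between the two models is easy to see with synthetic data: when data are generated multiplicatively, the additive residuals show exactly the heteroscedasticity the letter identifies as a weakness. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truth x and a "satellite" estimate y following the multiplicative model
#   y = a * x^b * exp(e),  e ~ N(0, sigma^2)
x = rng.gamma(shape=0.8, scale=10.0, size=5000) + 0.1     # daily rain, mm
y = 1.2 * x ** 0.9 * np.exp(rng.normal(0.0, 0.4, x.size))

resid_add = y - x                    # additive-model residuals
resid_mult = np.log(y) - np.log(x)   # multiplicative-model residuals

light, heavy = x < 5.0, x > 20.0
print("additive resid std (light, heavy):",
      resid_add[light].std(), resid_add[heavy].std())    # grows with x
print("multiplicative resid std (light, heavy):",
      resid_mult[light].std(), resid_mult[heavy].std())  # roughly constant
```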
NASA Technical Reports Server (NTRS)
Sidi, A.; Israeli, M.
1986-01-01
High accuracy numerical quadrature methods for integrals of singular periodic functions are proposed. These methods are based on the appropriate Euler-Maclaurin expansions of trapezoidal rule approximations and their extrapolations. They are used to obtain accurate quadrature methods for the solution of singular and weakly singular Fredholm integral equations. Such periodic equations are used in the solution of planar elliptic boundary value problems, elasticity, potential theory, conformal mapping, boundary element methods, free surface flows, etc. The use of the quadrature methods is demonstrated with numerical examples.
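These methods build on the fact, made precise by the Euler-Maclaurin expansion, that the trapezoidal rule is spectrally accurate for smooth periodic integrands. A quick numerical check of that base fact (the singular-kernel corrections of the paper are not implemented here):

```python
import numpy as np

def f(x):
    return np.exp(np.cos(x))        # smooth, 2*pi-periodic

exact = 2 * np.pi * np.i0(1.0)      # integral over one period: 2*pi*I0(1)

for n in (4, 8, 16, 32):
    x = 2 * np.pi * np.arange(n) / n
    approx = 2 * np.pi * f(x).mean()   # n-point trapezoidal rule
    print(n, abs(approx - exact))      # error drops faster than any power
```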
Stopping distance for high energy jets in weakly coupled quark-gluon plasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Peter; Cantrell, Sean; Xiao Wei
2010-02-15
We derive a simple formula for the stopping distance for a high-energy quark traveling through a weakly coupled quark-gluon plasma. The result is given to next-to-leading order in an expansion in inverse logarithms ln(E/T), where T is the temperature of the plasma. We also define a stopping distance for gluons and give a leading-log result. Discussion of stopping distance has a theoretical advantage over discussion of energy loss rates in that stopping distances can be generalized to the case of strong coupling, where one may not speak of individual partons.
Deblurring of Class-Averaged Images in Single-Particle Electron Microscopy.
Park, Wooram; Madden, Dean R; Rockmore, Daniel N; Chirikjian, Gregory S
2010-03-01
This paper proposes a method for deblurring of class-averaged images in single-particle electron microscopy (EM). Since EM images of biological samples are very noisy, nominally identical projection images are often grouped, aligned and averaged in order to cancel or reduce the background noise. However, the noise in the individual EM images generates errors in the alignment process, which creates an inherent limit on the accuracy of the resulting class averages. This inaccurate class average due to the alignment errors can be viewed as the result of a convolution of an underlying clear image with a blurring function. In this work, we develop a deconvolution method that gives an estimate for the underlying clear image from a blurred class-averaged image using precomputed statistics of misalignment. Since this convolution is over the group of rigid body motions of the plane, SE(2), we use the Fourier transform for SE(2) in order to convert the convolution into a matrix multiplication in the corresponding Fourier space. For practical implementation we use a Hermite-function-based image modeling technique, because Hermite expansions enable lossless Cartesian-polar coordinate conversion using the Laguerre-Fourier expansions, and the Hermite and Laguerre-Fourier expansions retain their structures under the Fourier transform. Based on these mathematical properties, we can obtain the deconvolution of the blurred class average using simple matrix multiplication. Tests of the proposed deconvolution method using synthetic and experimental EM images confirm the performance of our method.
Stoycheva, Diana; Deiser, Katrin; Stärck, Lilian; Nishanth, Gopala; Schlüter, Dirk; Uckert, Wolfgang; Schüler, Thomas
2015-01-15
In response to primary Ag contact, naive mouse CD8(+) T cells undergo clonal expansion and differentiate into effector T cells. After pathogen clearance, most effector T cells die, and only a small number of memory T cell precursors (TMPs) survive to form a pool of long-lived memory T cells (TMs). Although high- and low-affinity CD8(+) T cell clones are recruited into the primary response, the TM pool consists mainly of high-affinity clones. It remains unclear whether the more efficient expansion of high-affinity clones and/or cell-intrinsic processes exclude low-affinity T cells from the TM pool. In this article, we show that the lack of IFN-γR signaling in CD8(+) T cells promotes TM formation in response to weak, but not strong, TCR agonists. The IFN-γ-sensitive accumulation of TMs correlates with reduced mammalian target of rapamycin activation and the accumulation of long-lived CD62L(hi)Bcl-2(hi)Eomes(hi) TMPs. Reconstitution of mammalian target of rapamycin or IFN-γR signaling is sufficient to block this process. Hence, our data suggest that IFN-γR signaling actively blocks the formation of TMPs responding to weak TCR agonists, thereby promoting the accumulation of high-affinity T cells finally dominating the TM pool. Copyright © 2015 by The American Association of Immunologists, Inc.
Uncertainty Quantification for Polynomial Systems via Bernstein Expansions
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.
2012-01-01
This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The approach proposed, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
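The core mechanism can be shown in a few lines: the Bernstein coefficients of a polynomial on [0,1] enclose its range, and raising the expansion degree tightens the enclosure. This sketch uses the standard power-to-Bernstein conversion; the example polynomial and degrees are ours.

```python
import numpy as np
from math import comb

def bernstein_coeffs(a, n):
    """Bernstein coefficients of degree n >= deg(p) for power coeffs a."""
    return np.array([sum(comb(i, k) / comb(n, k) * a[k]
                         for k in range(min(i, len(a) - 1) + 1))
                     for i in range(n + 1)])

a = [0.1, -1.0, 3.0, -2.0]  # p(x) = 0.1 - x + 3x^2 - 2x^3
for n in (3, 6, 12, 24):
    b = bernstein_coeffs(a, n)
    print(n, b.min(), b.max())  # enclosure [min b, max b] of p on [0,1]

x = np.linspace(0.0, 1.0, 1001)
p = np.polynomial.polynomial.polyval(x, a)
print("true range:", p.min(), p.max())
```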
Amplification of seismic waves beneath active volcanoes
NASA Astrophysics Data System (ADS)
Navon, O.; Lensky, N. G.; Collier, L.; Neuberg, J.; Lyakhovsky, V.
2003-04-01
Long-period (LP) seismic events are typical of many volcanoes and are attributed to energy leaking from waves traveling through the volcanic conduit or along the conduit - country-rock interface. The LP events are triggered locally, at the volcanic edifice, but the source of energy for the formation of tens of events per day is not clear. Energy may be supplied by volatile-release from a supersaturated melt. If bubbles are present in equilibrium with the melt in the conduit, and the melt is suddenly decompressed, transfer of volatiles from the supersaturated melt into the bubbles transforms stored potential energy into expansion work. For example, small dome collapses may decompress the conduit by a few bars and lead to solubility decrease, exsolution of volatiles and, consequently, to work done by the expansion of the bubbles under pressure. This energy is released over a timescale that is similar to that of LP events and may amplify the original weak seismic signals associated with the collapse. Using the formulation of Lensky et al. (2002), following the decompression, when the transfer of volatiles into bubbles is fast enough, expansion accelerates and the bulk viscosity of the bubbly magma is negative. New calculations show that under such conditions a sinusoidal P-wave is amplified. We note that seismic waves created by tectonic earthquakes that are not associated with net decompression, do not lead to net release of volatiles or to net expansion. In this case, the bulk viscosity is positive and waves traveling through the magma should attenuate. The proposed model explains how weak seismic signals may be amplified as they travel through a conduit that contains supersaturated bubbly magma. It provides the general framework for amplifying volcanic seismicity such as the signals associated with long-period events.
Analysis of the thermal expansivity near the tricritical point in dilute chromium alloys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yurtseven, H., E-mail: hamit@metu.edu.tr; Tari, Ö., E-mail: ozlemilgin@arel.edu.tr
Chromium (Cr) undergoes a first-order Neel transition as an antiferromagnetic material. When V, Mo and Mn atoms are substituted in the Cr lattice, the weak first-order Neel transition in pure Cr changes toward a second-order transition, and a possible tricritical point in CrV occurs close to 0.2 at.% V, as observed experimentally from measurements of the thermal expansivity at various temperatures. In this study, we analyze the experimental data for the thermal expansivity from the literature as a function of temperature using the power-law formula for Cr alloys (Cr - 0.1V, 0.2V, 0.5V and Cr - 0.1Mn, Cr - 0.2Mo, 0.3Mo, 0.4Mo). Our results are interpreted near the tricritical point in dilute chromium alloys.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Okura, Yuki; Futamase, Toshifumi, E-mail: yuki.okura@riken.jp
We improve the ellipticity of re-smeared artificial image (ERA) method of point-spread function (PSF) correction in weak lensing shear analysis in order to treat realistic shapes of galaxies and the PSF. This is done by re-smearing the PSF and the observed galaxy image using a re-smearing function (RSF), which allows us to use a new PSF with a simple shape and to correct the PSF effect without any approximations or assumptions. We perform a numerical test to show that the method, applied to galaxies and PSFs with complicated shapes, can correct the PSF effect with a systematic error of less than 0.1%. We also apply the ERA method to real data of the Abell 1689 cluster to confirm that it is able to detect the systematic weak lensing shear pattern. The ERA method requires less than 0.1 or 1 s to correct the PSF for each object in the numerical test and the real data analysis, respectively.
Michael, J Robert; Koritsanszky, Tibor
2017-05-28
The convergence of nucleus-centered multipolar expansion of the quantum-chemical electron density (QC-ED), gradient, and Laplacian is investigated in terms of numerical radial functions derived by projecting stockholder atoms onto real spherical harmonics at each center. The partial sums of this exact one-center expansion are compared with the corresponding Hansen-Coppens pseudoatom (HC-PA) formalism [Hansen, N. K. and Coppens, P., "Testing aspherical atom refinements on small-molecule data sets," Acta Crystallogr., Sect. A 34, 909-921 (1978)] commonly utilized in experimental electron density studies. It is found that the latter model, due to its inadequate radial part, lacks pointwise convergence and fails to reproduce the local topology of the target QC-ED even at a high-order expansion. The significance of the quantitative agreement often found between HC-PA-based (quadrupolar-level) experimental and extended-basis QC-EDs can thus be challenged.
NASA Astrophysics Data System (ADS)
Li, Yan-Chao; Wang, Chun-Hui; Qu, Yang; Gao, Long; Cong, Hai-Fang; Yang, Yan-Ling; Gao, Jie; Wang, Ao-You
2011-01-01
This paper proposes a novel method of multi-beam laser heterodyne measurement of the metal linear expansion coefficient. Based on the Doppler effect and heterodyne technology, the information on length variation is loaded onto the frequency difference of the multi-beam laser heterodyne signal by the frequency modulation of the oscillating mirror. This method can simultaneously obtain many values of the length variation caused by temperature variation after demodulation of the multi-beam laser heterodyne signal. By processing these values with a weighted average, the length variation can be obtained accurately, and eventually the value of the linear expansion coefficient of the metal is obtained by calculation. This novel method is used to simulate the measurement of the linear expansion coefficient of a metal rod at different temperatures using MATLAB; the obtained result shows that the relative measurement error of this method is just 0.4%.
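One natural reading of the weighted-average step is an inverse-variance weighted mean of the per-channel length variations; the sketch below assumes that choice, with invented measurement values.

```python
import numpy as np

dL = np.array([10.2e-6, 9.8e-6, 10.5e-6, 10.1e-6])   # m, measured dL values
sigma = np.array([0.3e-6, 0.2e-6, 0.4e-6, 0.25e-6])  # m, per-value errors

w = 1.0 / sigma ** 2
dL_best = np.sum(w * dL) / np.sum(w)   # inverse-variance weighted mean
dL_err = np.sqrt(1.0 / np.sum(w))

L0, dT = 0.10, 5.0                     # m, K: rod length, temperature step
alpha = dL_best / (L0 * dT)            # linear expansion coefficient, 1/K
print(f"alpha = {alpha:.3e} +/- {dL_err / (L0 * dT):.1e} 1/K")
```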
ERIC Educational Resources Information Center
Chen, Susan Tiffany
2012-01-01
Recent and ongoing expansion of online opportunities for teacher education and training continue in response to calls for better teacher preparation and professional development opportunities. However, with the introduction of online learning, the already controversial debate over educational technology has taken on a new dimension. Today's…
Dislocation dynamics and crystal plasticity in the phase-field crystal model
NASA Astrophysics Data System (ADS)
Skaugen, Audun; Angheluta, Luiza; Viñals, Jorge
2018-02-01
A phase-field model of a crystalline material is introduced to develop the necessary theoretical framework to study plastic flow due to dislocation motion. We first obtain the elastic stress from the phase-field crystal free energy under weak distortion and show that it obeys the stress-strain relation of linear elasticity. We focus next on dislocations in a two-dimensional hexagonal lattice. They are composite topological defects in the weakly nonlinear amplitude equation expansion of the phase field, with topological charges given by the standard Burgers vector. This allows us to introduce a formal relation between the dislocation velocity and the evolution of the slowly varying amplitudes of the phase field. Standard dissipative dynamics of the phase-field crystal model is shown to determine the velocity of the dislocations. When the amplitude expansion is valid and under additional simplifications, we find that the dislocation velocity is determined by the Peach-Koehler force. As an application, we compute the defect velocity for a dislocation dipole in two setups, pure glide and pure climb, and compare it with the analytical predictions.
STARS 2.0: 2nd-generation open-source archiving and query software
NASA Astrophysics Data System (ADS)
Winegar, Tom
2008-07-01
The Subaru Telescope is in the process of developing an open-source alternative to the 1st-generation software and databases (STARS 1) used for archiving and query. For STARS 2, we have chosen PHP and Python for scripting and MySQL as the database software. We have collected feedback from staff and observers, and used this feedback to significantly improve the design and functionality of our future archiving and query software. Archiving - We identified two weaknesses in 1st-generation STARS archiving software: a complex and inflexible table structure and uncoordinated system administration for our business model: taking pictures from the summit and archiving them in both Hawaii and Japan. We adopted a simplified and normalized table structure with passive keyword collection, and we are designing an archive-to-archive file transfer system that automatically reports real-time status and error conditions and permits error recovery. Query - We identified several weaknesses in 1st-generation STARS query software: inflexible query tools, poor sharing of calibration data, and no automatic file transfer mechanisms to observers. We are developing improved query tools and sharing of calibration data, and multi-protocol unassisted file transfer mechanisms for observers. In the process, we have redefined a 'query': from an invisible search result that can only transfer once in-house right now, with little status and error reporting and no error recovery - to a stored search result that can be monitored, transferred to different locations with multiple protocols, reporting status and error conditions and permitting recovery from errors.
Refractive errors and strabismus in Down's syndrome in Korea.
Han, Dae Heon; Kim, Kyun Hyung; Paik, Hae Jung
2012-12-01
The aims of this study were to examine the distribution of refractive errors and clinical characteristics of strabismus in Korean patients with Down's syndrome. A total of 41 Korean patients with Down's syndrome were screened for strabismus and refractive errors in 2009. A total of 41 patients with an average age of 11.9 years (range, 2 to 36 years) were screened. Eighteen patients (43.9%) had strabismus. Ten (23.4%) of 18 patients exhibited esotropia and the others had intermittent exotropia. The most frequently detected type of esotropia was acquired non-accommodative esotropia, and that of exotropia was the basic type. Fifteen patients (36.6%) had hypermetropia and 20 (48.8%) had myopia. The patients with esotropia had refractive errors of +4.89 diopters (D, ±3.73) and the patients with exotropia had refractive errors of -0.31 D (±1.78). Six of ten patients with esotropia had an accommodation weakness. Twenty one patients (63.4%) had astigmatism. Eleven (28.6%) of 21 patients had anisometropia and six (14.6%) of those had clinically significant anisometropia. In Korean patients with Down's syndrome, esotropia was more common than exotropia and hypermetropia more common than myopia. Especially, Down's syndrome patients with esotropia generally exhibit clinically significant hyperopic errors (>+3.00 D) and evidence of under-accommodation. Thus, hypermetropia and accommodation weakness could be possible factors in esotropia when it occurs in Down's syndrome patients. Based on the results of this study, eye examinations of Down's syndrome patients should routinely include a measure of accommodation at near distances, and bifocals should be considered for those with evidence of under-accommodation.
A weak Galerkin least-squares finite element method for div-curl systems
NASA Astrophysics Data System (ADS)
Li, Jichun; Ye, Xiu; Zhang, Shangyou
2018-06-01
In this paper, we introduce a weak Galerkin least-squares method for solving the div-curl problem. This finite element method leads to a symmetric positive definite system and has the flexibility to work with general meshes, such as hybrid meshes, polytopal meshes and meshes with hanging nodes. Error estimates for the finite element solution are derived. The numerical examples demonstrate the robustness and flexibility of the proposed method.
NASA Astrophysics Data System (ADS)
Heaps, Charles W.; Schatz, George C.
2017-06-01
A computational method to model diffraction-limited images from super-resolution surface-enhanced Raman scattering microscopy is introduced. Despite significant experimental progress in plasmon-based super-resolution imaging, theoretical predictions of the diffraction-limited images remain a challenge. The method is used to calculate localization errors and image intensities for a single spherical gold nanoparticle-molecule system. The light scattering is calculated using a modification of generalized Mie (T-matrix) theory with a point dipole source, and diffraction-limited images are calculated using vectorial diffraction theory. The calculation produces the multipole expansion for each emitter and the coherent superposition of all fields. Imaging the constituent fields in addition to the total field provides new insight into the strong coupling between the molecule and the nanoparticle. Regardless of whether the molecular dipole moment is oriented parallel or perpendicular to the nanoparticle surface, the anisotropic excitation distorts the center of the nanoparticle, as measured by the point spread function, by approximately fifty percent of the particle radius toward the molecule. Inspection of the nanoparticle multipoles reveals that the distortion arises from a weak quadrupole resonance interfering with the dipole field in the nanoparticle. When the nanoparticle and molecule fields are in phase, the distorted nanoparticle field dominates the observed image. When out of phase, the nanoparticle and molecule are of comparable intensity and interference between the two emitters dominates the observed image. The method is also applied to different wavelengths and particle radii. At off-resonant wavelengths, the method predicts images closer to the molecule, not because of relative intensities but because of greater distortion in the nanoparticle. The method is a promising approach to improving the understanding of plasmon-enhanced super-resolution experiments.
Numerical method based on the lattice Boltzmann model for the Fisher equation.
Yan, Guangwu; Zhang, Jianying; Dong, Yinfeng
2008-06-01
In this paper, a lattice Boltzmann model for the Fisher equation is proposed. First, the Chapman-Enskog expansion and the multiscale time expansion are used to describe the higher-order moments of the equilibrium distribution functions and a series of partial differential equations on different time scales. Second, the modified partial differential equation of the Fisher equation with the higher-order truncation error is obtained. Third, a comparison between numerical results of the lattice Boltzmann models and the exact solution is given. The numerical results agree well with the classical ones.
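A generic lattice Boltzmann construction for the Fisher equation (D1Q3 lattice, BGK collision, pointwise reaction source) can be sketched as follows. This is a textbook-style model for illustration; it is not claimed to match the paper's specific scheme or its truncation-error analysis.

```python
import numpy as np

# Solve u_t = D u_xx + r u (1 - u) in lattice units (dx = dt = 1).
nx, steps = 400, 4000
tau, r = 0.8, 1e-3
D = (tau - 0.5) / 3.0                      # c_s^2 = 1/3 for D1Q3

w = np.array([2 / 3, 1 / 6, 1 / 6])        # weights: velocities 0, +1, -1
u = np.where(np.arange(nx) < nx // 10, 1.0, 0.0)   # step initial profile
f = w[:, None] * u                         # start at equilibrium

for _ in range(steps):
    u = f.sum(axis=0)
    feq = w[:, None] * u
    f += -(f - feq) / tau + w[:, None] * (r * u * (1.0 - u))  # collide+react
    f[1] = np.roll(f[1], 1)                # stream right-movers
    f[2] = np.roll(f[2], -1)               # stream left-movers

front = np.argmax(u < 0.5)                 # position of the 0.5 level
print("front at", front, "- Fisher speed 2*sqrt(D*r) =", 2 * np.sqrt(D * r))
```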
Yang, Qi; Al Amin, Abdullah; Chen, Xi; Ma, Yiran; Chen, Simin; Shieh, William
2010-08-02
High-order modulation formats and advanced error correcting codes (ECC) are two promising techniques for improving the performance of ultrahigh-speed optical transport networks. In this paper, we present record receiver sensitivity for 107 Gb/s CO-OFDM transmission via constellation expansion to 16-QAM and rate-1/2 LDPC coding. We also show the single-channel transmission of a 428-Gb/s CO-OFDM signal over 960-km standard-single-mode-fiber (SSMF) without Raman amplification.
Evaluation of WES One-Dimensional Dynamic Soil Testing Procedures.
1983-06-01
... Dividing the stress reduction Δσ by σ from equation (5), we obtain an estimate of the fractional error in ... the radial expansion of the soil caused by the expansion of the steel side walls, and the nonuniform stress and strain states in the sample ... is applied rapidly, the stress state in the soil at the steel base may be very different from that at the top surface of the soil.
The F(N) method for the one-angle radiative transfer equation applied to plant canopies
NASA Technical Reports Server (NTRS)
Ganapol, B. D.; Myneni, R. B.
1992-01-01
The paper presents a semianalytical solution method, called the F(N) method, for the one-angle radiative transfer equation in slab geometry. The F(N) method is based on two integral equations specifying the intensities exiting the boundaries of the vegetation canopy; the solution is obtained through an expansion in a set of basis functions with expansion coefficients to be determined. The advantage of this method is that it avoids spatial truncation error entirely because it requires discretization only in the angular variable.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Alves, L. M. S., E-mail: leandro-fisico@hotmail.com; Lima, B. S. de; Santos, C. A. M. dos
K$_{0.05}$MoO$_2$ has been studied by x-ray and neutron diffractometry, electrical resistivity, magnetization, heat capacity, and thermal expansion measurements. The compound displays two phase transitions, a first-order phase transition near room temperature and a second-order transition near 54 K. Below the transition at 54 K, a weak magnetic anomaly is observed and the electrical resistivity is well described by a power-law temperature dependence with an exponent near 0.5. The phase transitions in the K-doped MoO$_2$ compound have been discussed for the first time using neutron diffraction, high resolution thermal expansion, and heat capacity measurements as a function of temperature.
Transverse flow induced by inhomogeneous magnetic fields in the Bjorken expansion
NASA Astrophysics Data System (ADS)
Pu, Shi; Yang, Di-Lun
2016-03-01
We investigate the magnetohydrodynamics in the presence of an external magnetic field following a power-law decay in proper time and having spatial inhomogeneity characterized by a Gaussian distribution in one of the transverse coordinates under the Bjorken expansion. The leading-order solution is obtained in the weak-field approximation, where both the energy density and the fluid velocity are modified. It is found that the spatial gradient of the magnetic field results in transverse flow, where the flow direction depends on the decay exponents of the magnetic field. We suggest that such a magnetic-field-induced effect might influence anisotropic flow in heavy ion collisions.
U(1) current from the AdS/CFT: diffusion, conductivity and causality
NASA Astrophysics Data System (ADS)
Bu, Yanyan; Lublinsky, Michael; Sharon, Amir
2016-04-01
For a holographically defined finite temperature theory, we derive an off-shell constitutive relation for a global U(1) current driven by a weak external non-dynamical electromagnetic field. The constitutive relation involves an all order gradient expansion resummed into three momenta-dependent transport coefficient functions: diffusion, electric conductivity, and "magnetic" conductivity. These transport functions are first computed analytically in the hydrodynamic limit, up to third order in the derivative expansion, and then numerically for generic values of momenta. We also compute a diffusion memory function, which, as a result of all order gradient resummation, is found to be causal.
NASA Technical Reports Server (NTRS)
Zimmerman, M.
1979-01-01
The classical mechanics results for free precession which are needed in order to calculate the weak field, slow-motion, quadrupole-moment gravitational waves are reviewed. Within that formalism, algorithms are given for computing the exact gravitational power radiated and waveforms produced by arbitrary rigid-body freely-precessing sources. The dominant terms are presented in series expansions of the waveforms for the case of an almost spherical object precessing with a small wobble angle. These series expansions, which retain the precise frequency dependence of the waves, may be useful for gravitational astronomers when freely-precessing sources begin to be observed.
Hydrodynamics in a Degenerate, Strongly Attractive Fermi Gas
NASA Technical Reports Server (NTRS)
Thomas, John E.; Kinast, Joseph; Hemmer, Staci; Turlapov, Andrey; O'Hara, Ken; Gehm, Mike; Granade, Stephen
2004-01-01
In summary, we use all-optical methods with evaporative cooling near a Feshbach resonance to produce a strongly interacting degenerate Fermi gas. We observe hydrodynamic behavior in the expansion dynamics. At low temperatures, collisions may not explain the expansion dynamics. We observe hydrodynamics in the trapped gas. Our observations include collisionally-damped excitation spectra at high temperature which were not discussed above. In addition, we observe weakly damped breathing modes at low temperature. The observed temperature dependence of the damping time and hydrodynamic frequency are not consistent with collisional dynamics nor with collisionless mean field interactions. These observations constitute the first evidence for superfluid hydrodynamics in a Fermi gas.
Spatio-temporal error growth in the multi-scale Lorenz'96 model
NASA Astrophysics Data System (ADS)
Herrera, S.; Fernández, J.; Rodríguez, M. A.; Gutiérrez, J. M.
2010-07-01
The influence of multiple spatio-temporal scales on the error growth and predictability of atmospheric flows is analyzed throughout the paper. To this aim, we consider the two-scale Lorenz'96 model and study the interplay of the slow and fast variables on the error growth dynamics. It is shown that when the coupling between slow and fast variables is weak the slow variables dominate the evolution of fluctuations whereas in the case of strong coupling the fast variables impose a non-trivial complex error growth pattern on the slow variables with two different regimes, before and after saturation of fast variables. This complex behavior is analyzed using the recently introduced Mean-Variance Logarithmic (MVL) diagram.
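For readers who want to reproduce this kind of experiment, a minimal two-scale Lorenz'96 twin-run sketch is given below; the parameter values, including the coupling strength h, are standard illustrative choices rather than those of the paper.

```python
import numpy as np

K, J = 8, 32                    # slow variables, fast variables per slow one
F, h, c, b = 10.0, 0.5, 10.0, 10.0

def rhs(s):
    x, yf = s[:K], s[K:]
    dx = (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + F \
         - (h * c / b) * yf.reshape(K, J).sum(axis=1)
    dy = c * b * (np.roll(yf, 1) - np.roll(yf, -2)) * np.roll(yf, -1) \
         - c * yf + (h * c / b) * np.repeat(x, J)
    return np.concatenate([dx, dy])

def rk4(s, dt):
    k1 = rhs(s); k2 = rhs(s + 0.5 * dt * k1)
    k3 = rhs(s + 0.5 * dt * k2); k4 = rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rng = np.random.default_rng(3)
s = rng.normal(0.0, 1.0, K + K * J)
for _ in range(5000):            # spin-up onto the attractor
    s = rk4(s, 0.001)

s2 = s.copy(); s2[:K] += 1e-8    # perturb the slow variables only
for n in range(4001):
    if n % 500 == 0:
        print(n, np.linalg.norm(s2[:K] - s[:K]))   # slow-variable error
    s, s2 = rk4(s, 0.001), rk4(s2, 0.001)
```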
Bit-error rate for free-space adaptive optics laser communications.
Tyson, Robert K
2002-04-01
An analysis of adaptive optics compensation for atmospheric-turbulence-induced scintillation is presented with the figure of merit being the laser communications bit-error rate. The formulation covers weak, moderate, and strong turbulence; on-off keying; and amplitude-shift keying, over horizontal propagation paths or on a ground-to-space uplink or downlink. The theory shows that under some circumstances the bit-error rate can be improved by a few orders of magnitude with the addition of adaptive optics to compensate for the scintillation. Low-order compensation (less than 40 Zernike modes) appears to be feasible as well as beneficial for reducing the bit-error rate and increasing the throughput of the communication link.
NASA Astrophysics Data System (ADS)
Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.
2018-04-01
Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer with abundant cloud. For NLW, due to the high sensitivity of the algorithm and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. The AT strongly influences the NLW error in southern China because of the large AT error there. The total precipitable water has a weak influence on the Rn error even with the high sensitivity of the algorithm. In order to improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be decreased.
Xu, Meng; Yan, Yaming; Liu, Yanying; Shi, Qiang
2018-04-28
The Nakajima-Zwanzig generalized master equation provides a formally exact framework to simulate quantum dynamics in condensed phases. Yet, the exact memory kernel is hard to obtain and calculations based on perturbative expansions are often employed. By using the spin-boson model as an example, we assess the convergence of high order memory kernels in the Nakajima-Zwanzig generalized master equation. The exact memory kernels are calculated by combining the hierarchical equation of motion approach and the Dyson expansion of the exact memory kernel. High order expansions of the memory kernels are obtained by extending our previous work to calculate perturbative expansions of open system quantum dynamics [M. Xu et al., J. Chem. Phys. 146, 064102 (2017)]. It is found that the high order expansions do not necessarily converge in certain parameter regimes where the exact kernel shows a long memory time, especially in cases of a slow bath, weak system-bath coupling, and low temperature. The effectiveness of the Padé and Landau-Zener resummation approaches is tested, and the convergence of higher order rate constants beyond Fermi's golden rule is investigated.
Inference for dynamics of continuous variables: the extended Plefka expansion with hidden nodes
NASA Astrophysics Data System (ADS)
Bravi, B.; Sollich, P.
2017-06-01
We consider the problem of a subnetwork of observed nodes embedded into a larger bulk of unknown (i.e. hidden) nodes, where the aim is to infer these hidden states given information about the subnetwork dynamics. The biochemical networks underlying many cellular and metabolic processes are important realizations of such a scenario as typically one is interested in reconstructing the time evolution of unobserved chemical concentrations starting from the experimentally more accessible ones. We present an application to this problem of a novel dynamical mean field approximation, the extended Plefka expansion, which is based on a path integral description of the stochastic dynamics. As a paradigmatic model we study the stochastic linear dynamics of continuous degrees of freedom interacting via random Gaussian couplings. The resulting joint distribution is known to be Gaussian and this allows us to fully characterize the posterior statistics of the hidden nodes. In particular the equal-time hidden-to-hidden variance—conditioned on observations—gives the expected error at each node when the hidden time courses are predicted based on the observations. We assess the accuracy of the extended Plefka expansion in predicting these single node variances as well as error correlations over time, focussing on the role of the system size and the number of observed nodes.
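The central object here, the posterior variance of the hidden nodes given the observed ones, can be illustrated with plain Gaussian conditioning on a synthetic covariance; the extended Plefka expansion approximates this quantity efficiently for large systems. The sketch below is a toy stand-in, not the expansion itself.

```python
import numpy as np

rng = np.random.default_rng(7)

N, No = 20, 8                                  # total nodes, observed nodes
A = rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))
C = np.linalg.inv(np.eye(N) + A @ A.T)         # synthetic SPD covariance

obs, hid = np.arange(No), np.arange(No, N)
Coo = C[np.ix_(obs, obs)]
Cho = C[np.ix_(hid, obs)]
Chh = C[np.ix_(hid, hid)]

# Gaussian conditioning: posterior covariance of hidden given observed.
post = Chh - Cho @ np.linalg.solve(Coo, Cho.T)
print("prior hidden variances:    ", np.diag(Chh)[:4])
print("posterior hidden variances:", np.diag(post)[:4])
```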
Foreign Language Teaching and the Computer.
ERIC Educational Resources Information Center
Garrett, Nina; Hart, Robert S.
1988-01-01
A review of the APPLE MACINTOSH-compatible software "Conjugate! Spanish," intended to drill Spanish verb forms, points out its strengths (error feedback, user manual, user interface, and feature control) and its weaknesses (pedagogical approach). (CB)
Seeing in the Dark: Weak Lensing from the Sloan Digital Sky Survey
NASA Astrophysics Data System (ADS)
Huff, Eric Michael
Statistical weak lensing by large-scale structure (cosmic shear) is a promising cosmological tool, which has motivated the design of several large upcoming astronomical surveys. This Thesis presents a measurement of cosmic shear using coadded Sloan Digital Sky Survey (SDSS) imaging in 168 square degrees of the equatorial region, with r < 23.5 and i < 22.5, a source number density of 2.2 per arcmin^2 and median redshift of $z_{\rm med} = 0.52$. These coadds were generated using a new rounding kernel method that was intended to minimize systematic errors in the lensing measurement due to coherent PSF anisotropies that are otherwise prevalent in the SDSS imaging data. Measurements of cosmic shear out to angular separations of 2 degrees are presented, along with systematics tests of the catalog generation and shear measurement steps that demonstrate that these results are dominated by statistical rather than systematic errors. Assuming a cosmological model corresponding to WMAP7 (Komatsu et al., 2011) and allowing only the amplitude of matter fluctuations $\sigma_8$ to vary, the best-fit value of the amplitude of matter fluctuations is $\sigma_8 = 0.636^{+0.109}_{-0.154}$ ($1\sigma$); without systematic errors this would be $\sigma_8 = 0.636^{+0.099}_{-0.137}$ ($1\sigma$). Assuming a flat $\Lambda$CDM model, the combined constraints with WMAP7 are $\sigma_8 = 0.784^{+0.028}_{-0.026}$ ($1\sigma$). The $2\sigma$ error range is 14 percent smaller than WMAP7 alone. Aside from the intrinsic value of such cosmological constraints from the growth of structure, some important lessons are identified for upcoming surveys that may face similar issues when combining multi-epoch data to measure cosmic shear. Motivated by the challenges faced in the cosmic shear measurement, two new lensing probes are suggested for increasing the available weak lensing signal. Both use galaxy scaling relations to control for scatter in lensing observables. The first employs a version of the well-known fundamental plane relation for early-type galaxies. This modified "photometric fundamental plane" replaces velocity dispersions with photometric galaxy properties, thus obviating the need for spectroscopic data. We present the first detection of magnification using this method by applying it to photometric catalogs from the Sloan Digital Sky Survey. This analysis shows that the derived magnification signal is comparable to that available from conventional methods using gravitational shear. We suppress the dominant sources of systematic error and discuss modest improvements that may allow this method to equal or even surpass the signal-to-noise achievable with shear. Moreover, some of the dominant sources of systematic error are substantially different from those of shear-based techniques. The second outlines an idea for using the optical Tully-Fisher relation to dramatically improve the signal-to-noise and systematic error control for shear measurements. The expected error properties and potential advantages of such a measurement are proposed, and a pilot study is suggested in order to test the viability of Tully-Fisher weak lensing in the context of the forthcoming generation of large spectroscopic surveys.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, Yong-Seon; Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Portsmouth, PO1 3FX; Zhao Gongbo
We explore the complementarity of weak lensing and galaxy peculiar velocity measurements to better constrain modifications to General Relativity. We find no evidence for deviations from General Relativity on cosmological scales from a combination of peculiar velocity measurements (for Luminous Red Galaxies in the Sloan Digital Sky Survey) with weak lensing measurements (from the Canada-France-Hawaii Telescope Legacy Survey). We provide a Fisher error forecast for a Euclid-like space-based survey including both lensing and peculiar velocity measurements and show that the expected constraints on modified gravity will be at least an order of magnitude better than with present data, i.e., we will obtain ≈5% errors on the modified gravity parametrization described here. We also present a model-independent method for constraining modified gravity parameters using tomographic peculiar velocity information, and apply this methodology to the present data set.
NASA Technical Reports Server (NTRS)
Vasilyev, Y. M.; Lagunov, L. F.
1973-01-01
The schematic diagram of a noise-measuring device is presented; the device uses pulse-expansion modeling of the peak or any other measured value to obtain instrument readings with very low noise error.
Self-calibration of photometric redshift scatter in weak-lensing surveys
Zhang, Pengjie; Pen, Ue -Li; Bernstein, Gary
2010-06-11
Photo-z errors, especially catastrophic errors, are a major uncertainty for precision weak lensing cosmology. We find that the shear-(galaxy number) density and density-density cross-correlation measurements between photo-z bins, available from the same lensing surveys, contain valuable information for self-calibration of the scattering probabilities between the true-z and photo-z bins. The self-calibration technique we propose relies on neither cosmological priors nor a parameterization of the photo-z probability distribution function, and preserves all of the cosmological information available from shear-shear measurement. We estimate the calibration accuracy through the Fisher matrix formalism. We find that, for advanced lensing surveys such as the planned Stage IV surveys, the rate of photo-z outliers can be determined with statistical uncertainties of 0.01-1% for z < 2 galaxies. Among the several sources of calibration error that we identify and investigate, the galaxy distribution bias is likely the most dominant systematic error, whereby photo-z outliers have different redshift distributions and/or bias than non-outliers from the same bin. This bias affects all photo-z calibration techniques based on correlation measurements. As a result, galaxy bias variations of O(0.1) produce biases in photo-z outlier rates similar to the statistical errors of our method, so this galaxy distribution bias may bias the reconstructed scatters at the several-σ level, but is unlikely to completely invalidate the self-calibration technique.
Practical scheme for error control using feedback
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sarovar, Mohan; Milburn, Gerard J.; Ahn, Charlene
2004-05-01
We describe a scheme for quantum error correction that employs feedback and weak measurement rather than the standard tools of projective measurement and fast controlled unitary gates. The advantage of this scheme over previous protocols [for example, Ahn et al., Phys. Rev. A 65, 042301 (2001)] is that it requires little side processing while remaining robust to measurement inefficiency, and is therefore considerably more practical. We evaluate the performance of our scheme by simulating the correction of bit flips. We also consider implementation in a solid-state quantum-computation architecture and estimate the maximal error rate that could be corrected with current technology.
Brouwer, Anne-Marie; López-Moliner, Joan; Brenner, Eli; Smeets, Jeroen B J
2006-02-01
We propose and evaluate a source of information that ball catchers may use to determine whether a ball will land behind or in front of them. It combines estimates for the ball's horizontal and vertical speed. These estimates are based, respectively, on the rate of angular expansion and vertical velocity. Our variable could account for ball catchers' data of Oudejans et al. [The effects of baseball experience on movement initiation in catching fly balls. Journal of Sports Sciences, 15, 587-595], but those data could also be explained by the use of angular expansion alone. We therefore conducted additional experiments in which we asked subjects where simulated balls would land under conditions in which both angular expansion and vertical velocity must be combined for obtaining a correct response. Subjects made systematic errors. We found evidence for the use of angular velocity but hardly any indication for the use of angular expansion. Thus, if catchers use a strategy that involves combining vertical and horizontal estimates of the ball's speed, they do not obtain their estimates of the horizontal component from the rate of expansion alone.
Geometric Integration of Weakly Dissipative Systems
NASA Astrophysics Data System (ADS)
Modin, K.; Führer, C.; Söderlind, G.
2009-09-01
Some problems in mechanics, e.g. in bearing simulation, contain subsystems that are conservative as well as subsystems that are weakly dissipative. Our experience is that geometric integration methods are often superior for such systems as long as the dissipation is weak. Here we develop adaptive methods for dissipative perturbations of Hamiltonian systems. The methods are "geometric" in the sense that the form of the dissipative perturbation is preserved. The methods are linearly explicit, i.e., they require the solution of a linear subsystem. We sketch an analysis in terms of backward error analysis, and numerical comparisons with a conventional RK method of the same order are given.
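To make "preserving the form of the dissipative perturbation" concrete, here is a minimal sketch of a splitting integrator for a separable Hamiltonian with weak linear damping. It is not the linearly explicit adaptive method of the abstract; the function names and the damping model are illustrative assumptions:

```python
import numpy as np

def dissipative_verlet_step(q, p, h, grad_V, gamma):
    """One step for  dq = p dt,  dp = -grad_V(q) dt - gamma*p dt.

    The Hamiltonian part is integrated with velocity Verlet (symplectic),
    and the weak dissipative part dp/dt = -gamma*p with its exact flow,
    so the structure of the perturbation is preserved by the scheme."""
    p = p * np.exp(-0.5 * gamma * h)   # exact half-flow of the damping
    p = p - 0.5 * h * grad_V(q)        # half kick (Hamiltonian part)
    q = q + h * p                      # drift
    p = p - 0.5 * h * grad_V(q)        # half kick
    p = p * np.exp(-0.5 * gamma * h)   # second damping half-flow
    return q, p

# Example: weakly damped pendulum, V(q) = -cos(q), gamma = 1e-3
q, p = 1.0, 0.0
for _ in range(10000):
    q, p = dissipative_verlet_step(q, p, 0.01, np.sin, 1e-3)
```

In the limit gamma = 0 the step reduces to plain velocity Verlet, which is the sense in which the geometric structure survives the weak perturbation.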
Weak-lensing magnification as a probe for the dark Universe
DOE Office of Scientific and Technical Information (OSTI.GOV)
García Fernández, Manuel
This Thesis is devoted to the analysis of weak-lensing magnification in the Dark Energy Survey. Two analyses, each with a different goal, are performed on different data sets: the Science Verification (DES-SV) and the Year 1 (DES-Y1). The DES-SV analysis aims at the development of techniques to detect the weak-lensing number-count magnification signal and to mitigate systematic errors. The DES-Y1 analysis employs the methods developed with the DES-SV data to measure the convergence profile of the emptiest regions of the Universe (voids and troughs) in order to use them as a new cosmological probe.
NASA Astrophysics Data System (ADS)
Taitano, W. T.; Chacón, L.; Simakov, A. N.
2017-06-01
The Fokker-Planck collision operator is an advection-diffusion operator that describes dynamical systems such as weakly coupled plasmas [1,2], photons in high-temperature environments [3,4], and biological [5] and even social [6] systems. For plasmas in the continuum, the Fokker-Planck collision operator supports such important physical properties as conservation of number, momentum, and energy, as well as positivity. It also obeys Boltzmann's H-theorem [7-11], i.e., the operator increases the system entropy while simultaneously driving the distribution function towards a Maxwellian. In the discrete, when these properties are not ensured, numerical simulations can either fail catastrophically or suffer from significant numerical pollution [12,13]. There is strong emphasis in the literature on developing numerical techniques to solve the Fokker-Planck equation while preserving these properties [12-24]. In this short note, we focus on the analytical equilibrium-preserving property, meaning that the Fokker-Planck collision operator vanishes when acting on an analytical Maxwellian distribution function. The equilibrium-preservation property is especially important, for example, when one is attempting to capture subtle transport physics. Since transport arises from small O(ε) corrections to the equilibrium [25] (where ε is a small expansion parameter), numerical truncation error present in the equilibrium solution may dominate, overwhelming transport dynamics.
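As a worked statement of the equilibrium-preservation property (generic notation assumed here, not taken from the note itself): writing the collision operator in advection-diffusion form, it must vanish exactly on a Maxwellian,

```latex
C[f] \;=\; \frac{\partial}{\partial \mathbf{v}} \cdot
      \left( \mathbf{D}\,\frac{\partial f}{\partial \mathbf{v}}
             \;+\; \mathbf{A}\, f \right),
\qquad
C[f_M] \;=\; 0
\quad\text{for}\quad
f_M(\mathbf{v}) \;=\; \frac{n}{(2\pi v_t^2)^{3/2}}
   \exp\!\left(-\frac{|\mathbf{v}-\mathbf{u}|^2}{2 v_t^2}\right),
```

so any discretization whose drift and diffusion terms fail to cancel on f_M leaves a truncation-level residual that can swamp the O(ε) corrections carrying the transport physics.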
The skewed weak lensing likelihood: why biases arise, despite data and theory being sound
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim
2018-07-01
We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
Non-supersymmetric Wilson loop in N = 4 SYM and defect 1d CFT
NASA Astrophysics Data System (ADS)
Beccaria, Matteo; Giombi, Simone; Tseytlin, Arkady A.
2018-03-01
Following Polchinski and Sully (arXiv:1104.5077), we consider a generalized Wilson loop operator containing a constant parameter ζ in front of the scalar coupling term, so that ζ = 0 corresponds to the standard Wilson loop, while ζ = 1 to the locally supersymmetric one. We compute the expectation value of this operator for a circular loop as a function of ζ to second order in the planar weak-coupling expansion in N = 4 SYM theory. We then explain the relation of the expansion near the two conformal points ζ = 0 and ζ = 1 to the correlators of scalar operators inserted on the loop. We also discuss the AdS5 × S5 string 1-loop correction to the strong-coupling expansion of the standard circular Wilson loop, as well as its generalization to the case of mixed boundary conditions on the five-sphere coordinates, corresponding to general ζ. From the point of view of the defect CFT1 defined on the Wilson line, the ζ-dependent term can be seen as a perturbation driving an RG flow from the standard Wilson loop in the UV to the supersymmetric Wilson loop in the IR. Both at weak and strong coupling we find that the logarithm of the expectation value of the standard Wilson loop for the circular contour is larger than that of the supersymmetric one, which appears to be in agreement with the 1d analog of the F-theorem.
NASA Astrophysics Data System (ADS)
Bonde, Jeffrey; Vincena, Stephen; Gekelman, Walter
2018-04-01
The momentum coupled to a magnetized, ambient argon plasma from a high-β, laser-produced carbon plasma is examined in a collisionless, weakly coupled limit. The total electric field was measured by separately examining the induced component associated with the rapidly changing magnetic field of the high-β (kinetic β ∼ 10^6), expanding plasma and the electrostatic component due to polarization of the expansion. Their temporal and spatial structures are discussed and their effect on the ambient argon plasma (thermal β ∼ 10^-2) is confirmed with a laser-induced fluorescence diagnostic, which directly probed the argon ion velocity distribution function. For the given experimental conditions, the electrostatic field is shown to dominate the interaction between the high-β expansion and the ambient plasma. Specifically, the expanding plasma couples energy and momentum into the ambient plasma by pulling ions inward against the flow direction.
NASA Technical Reports Server (NTRS)
Gopalswamy, Nat; Akiyama, Sachiko; Yashiro, Seiji; Xie, Hong; Makela, Pertti; Michalek, Grzegorz
2014-01-01
The familiar correlation between the speed and angular width of coronal mass ejections (CMEs) is also found in solar cycle 24, but the regression line has a larger slope: for a given CME speed, cycle 24 CMEs are significantly wider than those in cycle 23. The slope change indicates a significant change in the physical state of the heliosphere, due to the weak solar activity. The total pressure in the heliosphere (magnetic + plasma) is reduced by approximately 40%, which leads to the anomalous expansion of CMEs explaining the increased slope. The excess CME expansion contributes to the diminished effectiveness of CMEs in producing magnetic storms during cycle 24, both because the magnetic content of the CMEs is diluted and also because of the weaker ambient fields. The reduced magnetic field in the heliosphere may contribute to the lack of solar energetic particles accelerated to very high energies during this cycle.
On mathematical modelling of flameless combustion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mancini, Marco; Schwoeppe, Patrick; Weber, Roman
2007-07-15
A further analysis of the IFRF semi-industrial-scale experiments on flameless (mild) combustion of natural gas is carried out. The experimental burner features a strong oxidizer jet and two weak natural gas jets. Numerous publications have shown the inability of various RANS-based mathematical models to predict the structure of the weak jet. We have proven that the failure lies in erroneous predictions of the entrainment and is therefore not related to any chemistry submodels, as has been postulated. (author)
2015-04-01
...to successfully operate after being exposed to the harsh launch vibration environment. 2. Uncover workmanship flaws such as loose fasteners or weak... The tests did not uncover any workmanship errors, in spite of exposing the PPUs to vibration levels in excess of what is expected for flight on any of the launchers.
Gluon Bremsstrahlung in Weakly-Coupled Plasmas
NASA Astrophysics Data System (ADS)
Arnold, Peter
2009-11-01
I report on some theoretical progress concerning the calculation of gluon bremsstrahlung for very high energy particles crossing a weakly-coupled quark-gluon plasma. (i) I advertise that two of the several formalisms used to study this problem, the BDMPS-Zakharov formalism and the AMY formalism (the latter used only for infinite, uniform media), can be made equivalent when appropriately formulated. (ii) A standard technique to simplify calculations is to expand in inverse powers of logarithms ln(E/T). I give an example where such expansions are found to work well for ω/T≳10 where ω is the bremsstrahlung gluon energy. (iii) Finally, I report on perturbative calculations of q̂.
Hakkarainen, Elina; Pirilä, Silja; Kaartinen, Jukka; van der Meere, Jaap J
2013-06-01
This study evaluated the brain activation state during error making in youth with mild spastic cerebral palsy and a peer control group while carrying out a stimulus recognition task. The key question was whether patients were detecting their own errors and subsequently improving their performance in a future trial. Findings indicated that error responses of the group with cerebral palsy were associated with weak motor preparation, as indexed by the amplitude of the late contingent negative variation. However, patients were detecting their errors as indexed by the amplitude of the response-locked negativity and thus improved their performance in a future trial. Findings suggest that the consequence of error making on future performance is intact in a sample of youth with mild spastic cerebral palsy. Because the study group is small, the present findings need replication using a larger sample.
Improved Reweighting of Accelerated Molecular Dynamics Simulations for Free Energy Calculation.
Miao, Yinglong; Sinko, William; Pierce, Levi; Bucher, Denis; Walker, Ross C; McCammon, J Andrew
2014-07-08
Accelerated molecular dynamics (aMD) simulations greatly improve the efficiency of conventional molecular dynamics (cMD) for sampling biomolecular conformations, but they require proper reweighting for free energy calculation. In this work, we systematically compare the accuracy of different reweighting algorithms including the exponential average, Maclaurin series, and cumulant expansion on three model systems: alanine dipeptide, chignolin, and Trp-cage. Exponential average reweighting can recover the original free energy profiles easily only when the distribution of the boost potential is narrow (e.g., the range ≤ 20 kBT) as found in dihedral-boost aMD simulation of alanine dipeptide. In dual-boost aMD simulations of the studied systems, exponential average generally leads to high energetic fluctuations, largely due to the fact that the Boltzmann reweighting factors are dominated by a very few high boost potential frames. In comparison, reweighting based on Maclaurin series expansion (equivalent to cumulant expansion to the first order) greatly suppresses the energetic noise but often gives incorrect energy minimum positions and significant errors at the energy barriers (∼2-3 kBT). Finally, reweighting using cumulant expansion to the second order is able to recover the most accurate free energy profiles within statistical errors of ∼kBT, particularly when the distribution of the boost potential exhibits low anharmonicity (i.e., near-Gaussian distribution), and should be of wide applicability. A toolkit of Python scripts for aMD reweighting, "PyReweighting", is distributed free of charge at http://mccammon.ucsd.edu/computing/amdReweighting/.
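To make the second-order cumulant reweighting concrete, here is a minimal sketch; it is not the PyReweighting toolkit itself, and the function name, binning scheme, and units are illustrative assumptions:

```python
import numpy as np

def reweight_cumulant2(coord, dV, beta, nbins=50):
    """Second-order cumulant reweighting of an aMD trajectory.

    coord : reaction coordinate per frame
    dV    : boost potential per frame (same energy units as 1/beta)
    Returns bin centers and a reweighted free energy profile in kB*T."""
    edges = np.linspace(coord.min(), coord.max(), nbins + 1)
    which = np.clip(np.digitize(coord, edges) - 1, 0, nbins - 1)
    pmf = np.full(nbins, np.inf)
    for j in range(nbins):
        sel = dV[which == j]
        if sel.size == 0:
            continue
        p_biased = sel.size / coord.size        # biased bin probability
        c1 = beta * sel.mean()                  # first cumulant
        c2 = 0.5 * beta**2 * sel.var()          # second cumulant
        # ln <exp(beta*dV)>_j truncated after c1 + c2: this avoids the
        # noise of direct exponential averaging over rare high-boost frames
        pmf[j] = -(np.log(p_biased) + c1 + c2)
    return 0.5 * (edges[:-1] + edges[1:]), pmf - pmf.min()
```

Truncating the log-average at the variance term is what suppresses the dominance of the few high-boost frames noted in the abstract, at the cost of assuming a near-Gaussian boost distribution.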
Bayesian estimation of Karhunen–Loève expansions: A random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen-Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L2 sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (singular value decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components, or, equivalently, the basis functions of the KLE. Furthermore, it is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen-Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
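The Bayesian machinery above (matrix Bingham posterior, Gibbs sampling) does not fit in a few lines, but the point estimate it generalizes is just an SVD of the centered data matrix. A minimal sketch, with illustrative names:

```python
import numpy as np

def kle_from_samples(X, num_modes):
    """Truncated Karhunen-Loeve expansion estimated from samples.

    X : (n_samples, n_dim) realizations of a random field.
    Returns the mean, orthonormal modes, eigenvalues, and unit-variance
    coefficients, so that x ~ mu + sum_k sqrt(lam_k) * xi_k * phi_k."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centered data matrix yields the empirical KLE basis;
    # the sampling error in phi is exactly what the Bayesian treatment targets
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    lam = s**2 / (X.shape[0] - 1)          # sample covariance eigenvalues
    phi = Vt[:num_modes]                   # orthonormal spatial modes
    xi = (Xc @ phi.T) / np.sqrt(lam[:num_modes])
    return mu, phi, lam[:num_modes], xi

# Demo on a Brownian-motion-like process: 200 sample paths, 100 grid points
rng = np.random.default_rng(0)
X = np.cumsum(rng.normal(size=(200, 100)), axis=1)
mu, phi, lam, xi = kle_from_samples(X, num_modes=5)
```

With only 200 samples in 100 dimensions, the trailing modes are poorly determined, which is the small-sample regime the paper's probabilistic model is built to quantify.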
Exporting Hong Kong's Higher Education in Asian Markets: A SWOT Analysis
ERIC Educational Resources Information Center
Cheung, Alan Chi Keung; Yuen, Timothy Wai Wa; Yuen, Celeste Yuet Mui
2008-01-01
With the rapid growth and expansion of the Asian economies in recent years, there has been a continued rise of students in Asia who are studying outside their home countries. This study attempts to highlight the major strengths, weaknesses, opportunities, and threats of Hong Kong's higher education in relation to its potential of being a regional…
Robust model comparison disfavors power law cosmology
NASA Astrophysics Data System (ADS)
Shafer, Daniel L.
2015-05-01
Late-time power law expansion has been proposed as an alternative to the standard cosmological model and shown to be consistent with some low-redshift data. We test power law expansion against the standard flat ΛCDM cosmology using goodness-of-fit and model comparison criteria. We consider type Ia supernova (SN Ia) data from two current compilations (JLA and Union2.1) along with a current set of baryon acoustic oscillation (BAO) measurements that includes the high-redshift Lyman-α forest measurements from BOSS quasars. We find that neither power law expansion nor ΛCDM is strongly preferred over the other when the SN Ia and BAO data are analyzed separately, but that power law expansion is strongly disfavored by the combination. We treat the R_h = ct cosmology (a constant rate of expansion) separately and find that it is conclusively disfavored by all combinations of data that include SN Ia observations, and that it provides a poor overall fit when systematic errors in the SN Ia measurements are ignored, despite a recent claim to the contrary. We discuss this claim and some concerns regarding hidden model dependence in the SN Ia data.
On the optimization of Gaussian basis sets
NASA Astrophysics Data System (ADS)
Petersson, George A.; Zhong, Shijun; Montgomery, John A.; Frisch, Michael J.
2003-01-01
A new procedure for the optimization of the exponents, α_j, of Gaussian basis functions, Y_lm(ϑ,φ) r^l exp(−α_j r²), is proposed and evaluated. The direct optimization of the exponents is hindered by the very strong coupling between these nonlinear variational parameters. However, expansion of the logarithms of the exponents in the orthonormal Legendre polynomials, P_k, of the index, j: ln α_j = Σ_{k=0}^{k_max} A_k P_k((2j−2)/(N_prim−1) − 1), yields a new set of well-conditioned parameters, A_k, and a complete sequence of well-conditioned exponent optimizations proceeding from the even-tempered basis set (k_max = 1) to a fully optimized basis set (k_max = N_prim − 1). The error relative to the exact numerical self-consistent field limit for a six-term expansion is consistently no more than 25% larger than the error for the completely optimized basis set. Thus, there is no need to optimize more than six well-conditioned variational parameters, even for the largest sets of Gaussian primitives.
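A minimal sketch of the reparameterization (the function name and coefficient values are illustrative; the mapping of the index j onto [−1, 1] follows the formula above):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def exponents_from_legendre(A, n_prim):
    """Gaussian exponents alpha_j from well-conditioned parameters A_k:
       ln(alpha_j) = sum_k A_k * P_k(x_j),  x_j = (2j-2)/(n_prim-1) - 1."""
    j = np.arange(1, n_prim + 1)
    x = (2.0 * j - 2.0) / (n_prim - 1) - 1.0   # index mapped onto [-1, 1]
    return np.exp(legval(x, np.asarray(A, dtype=float)))

# k_max = 1 (two parameters) gives an even-tempered geometric series;
# adding higher-order A_k relaxes it toward the fully optimized set.
print(exponents_from_legendre([0.5, -3.0], 6))
```

Because ln α_j is linear in the Legendre coefficients, the A_k are far less strongly coupled than the raw exponents, which is the conditioning gain the abstract describes.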
NASA Astrophysics Data System (ADS)
Sharma, Navneet; Rawat, Tarun Kumar; Parthasarathy, Harish; Gautam, Kumar
2016-06-01
The aim of this paper is to design a current source obtained as a representation of p information symbols {I_k} so that the electromagnetic (EM) field generated interacts with a quantum atomic system, producing after a fixed duration T a unitary gate U(T) that is as close as possible to a given unitary gate U_g. The design procedure involves calculating the EM field produced by {I_k}, hence the perturbing Hamiltonian produced by {I_k}, and finally the evolution operator produced by {I_k} up to cubic order, based on the Dyson series expansion. The gate error energy is thus obtained as a cubic polynomial in {I_k}, which is minimized using a gravitational search algorithm. The signal-to-noise ratio (SNR) in the designed gate is higher than that obtained using the quadratic Dyson series expansion. The SNR is calculated as the ratio of the Frobenius norm square of the desired gate to that of the gate error.
Density-functional expansion methods: Grand challenges.
Giese, Timothy J; York, Darrin M
2012-03-01
We discuss the source of errors in semiempirical density-functional expansion (VE) methods. In particular, we show that VE methods are capable of reproducing their standard Kohn-Sham density functional method counterparts well, but suffer from large errors upon using one or more of these approximations: the limited size of the atomic orbital basis, the Slater monopole auxiliary basis description of the response density, and the one- and two-body treatment of the core-Hamiltonian matrix elements. In the process of discussing these approximations and highlighting their symptoms, we introduce a new model that supplements the second-order density-functional tight-binding model with a self-consistent charge-dependent chemical potential equalization correction; we review our recently reported method for generalizing the auxiliary basis description of the atomic orbital response density; and we decompose the first-order potential into a summation of additive atomic components and many-body corrections. From this examination, we provide new insights and preliminary results that motivate and inspire new approximate treatments of the core Hamiltonian.
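For context, the "second-order density-functional tight-binding model" referred to here builds on the standard DFTB2/SCC-DFTB energy expression (a generic form with assumed notation, not taken from the paper):

```latex
E_{\mathrm{DFTB2}}
\;=\; \sum_{i}^{\mathrm{occ}} \langle \psi_i \,|\, \hat{H}^{0} \,|\, \psi_i \rangle
\;+\; \frac{1}{2} \sum_{a,b} \gamma_{ab}\, \Delta q_a\, \Delta q_b
\;+\; E_{\mathrm{rep}},
```

where the Δq_a are self-consistent atomic charge fluctuations and γ_ab is the screened charge-charge kernel; the correction proposed in the abstract adds a charge-dependent chemical-potential-equalization term on top of this second-order expansion.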
Multispectral optical telescope alignment testing for a cryogenic space environment
NASA Astrophysics Data System (ADS)
Newswander, Trent; Hooser, Preston; Champagne, James
2016-09-01
Multispectral space telescopes with visible to long-wave infrared spectral bands present difficult alignment challenges. The visible channels require precise alignment and stability to provide good image quality at short wavelengths. This is most often accomplished by choosing near-zero-thermal-expansion glass or ceramic mirrors metered with carbon fiber reinforced polymer (CFRP) designed to have a matching thermal expansion. The IR channels are less sensitive to alignment, but they often require cryogenic cooling for improved sensitivity with the reduced radiometric background. Finding efficient solutions to this difficult problem of maintaining good visible image quality at cryogenic temperatures has been explored through the building and testing of a telescope simulator: an on-axis, CFRP-metered set of optics with a ZERODUR® mirror. Testing has been completed to accurately measure telescope optical element alignment and mirror figure changes in a cryogenic space-simulated environment. Measured alignment error and mirror figure error test results are reported with a discussion of their impact on system optical performance.
THE KINEMATICS OF THE NEBULAR SHELLS AROUND LOW MASS PROGENITORS OF PNe WITH LOW METALLICITY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pereyra, Margarita; López, José Alberto; Richer, Michael G., E-mail: mally@astrosen.unam.mx, E-mail: jal@astrosen.unam.mx, E-mail: richer@astrosen.unam.mx
2016-03-15
We analyze the internal kinematics of 26 planetary nebulae (PNe) with low metallicity that appear to derive from progenitor stars of the lowest masses, including the halo PN population. Based upon spatially resolved, long-slit, echelle spectroscopy drawn from the San Pedro Mártir Kinematic Catalog of PNe, we characterize the kinematics of these PNe, measuring their global expansion velocities based upon the largest sample used to date for this purpose. We find kinematics that follow the trends observed and predicted in other studies, but also find that most of the PNe studied here tend to have expansion velocities less than 20 km s^-1 in all of the emission lines considered. The low expansion velocities that we observe in this sample of low-metallicity PNe with low-mass progenitors are most likely a consequence of a weak central star (CS) wind driving the kinematics of the nebular shell. This study complements previous results that link the expansion velocities of the PN shells with the characteristics of the CS.
NASA Astrophysics Data System (ADS)
Chao, Luo
2015-11-01
In this paper, a novel digital secure communication scheme is proposed. Different from the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems, namely their susceptibility to environmental interference. Moreover, with regard to transmission errors and data loss in the process of communication, the proposed scheme can perform error checking and error correction in real time. In order to guarantee security, the fractional-order complex chaotic system with a shifting of order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.
NASA Astrophysics Data System (ADS)
Yilmaz, T. I.; Hess, K. U.; Vasseur, J.; Wadsworth, F. B.; Gilg, H. A.; Nakada, S.; Dingwell, D. B.
2017-12-01
When hot magma intrudes the crust, the surrounding rocks expand. Similarly, the cooling magma contracts. The expansion and contraction of these multiphase materials is not simple and often requires empirical constraint. Therefore, we constrained the thermal expansivity of Unzen dome and conduit samples using a NETZSCH® DIL 402C. Following the experiments, those samples were scanned using a Phoenix v|tome|x m to observe the cracks that may have developed during the heating and cooling. The dome samples do not show petrological or chemical signs of alteration. The alteration of the conduit dykes, however, is represented by the occurrence of main secondary phases such as chlorite, sulfides, carbonates, R1 (Reichweite parameter) illite-smectite, and kaolinite. These alteration products indicate (I) an early weak to moderate argillic magmatic alteration, and (II) a second-stage weak to moderate propylitic hydrothermal alteration. The linear thermal expansion coefficient α_L of the dome material between 150 °C and 800 °C shows a sharp peak around the alpha-beta quartz transition (~573 °C). In contrast, α_L of the hydrothermally altered conduit samples starts to increase around 180 °C and continues rising to 400 °C. We interpret this effect as being due to the water content of the kaolinite and the R1 illite-smectite, which induces larger expansions per degree of temperature change. Furthermore, the altered conduit samples show more pronounced increases of α_L between 500 °C and 650 °C, peaking where chlorite, iron-rich dolomite solid solutions, calcite, and pyrite break down. We use a 1D conductive model of heat transfer to explore how the country rock around the Unzen conduit zone would heat up after intrusion. In turn, we convert these temperature profiles to thermal stress profiles, assuming the edifice is largely undeformable. We show that the high linear thermal expansion coefficients of the hydrothermally altered conduit rocks may induce large thermal stresses in the surrounding host rock and therefore promote cracking, which may in turn lead to edifice instability.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments.
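A minimal sketch of the three interval sampling rules being compared (the function name and the per-time-unit boolean event representation are illustrative assumptions, not the study's program):

```python
import numpy as np

def interval_estimates(events, interval_len):
    """Fraction-of-session estimates of event occurrence from one
    simulated observation session.

    events : boolean array, one entry per time unit (True = event occurring)
    interval_len : number of time units per observation interval"""
    n = len(events) // interval_len
    chunks = events[: n * interval_len].reshape(n, interval_len)
    truth = events.mean()                  # true proportion of time engaged
    mts = chunks[:, -1].mean()             # momentary time sampling: last instant
    partial = chunks.any(axis=1).mean()    # partial-interval: any occurrence
    whole = chunks.all(axis=1).mean()      # whole-interval: fills the interval
    return truth, mts, partial, whole

# Randomly distributed events, as in the simulation study:
rng = np.random.default_rng(1)
events = rng.random(3600) < 0.3            # 1-h session, 30% prevalence
print(interval_estimates(events, 10))
```

Run on random data like this, partial-interval recording systematically overestimates and whole-interval recording underestimates the true proportion, the kind of inherent bias the study's error tables quantify.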
Data Quality Control and Maintenance for the Qweak Experiment
NASA Astrophysics Data System (ADS)
Heiner, Nicholas; Spayde, Damon
2014-03-01
The Qweak collaboration seeks to quantify the weak charge of the proton through the analysis of the parity-violating electron asymmetry in elastic electron-proton scattering. The asymmetry is calculated by measuring how many electrons deflect from a hydrogen target at the chosen scattering angle for aligned and anti-aligned electron spins, then evaluating the difference between the numbers of deflections for the two polarization states. The weak charge can then be extracted from these data. Knowing the weak charge will allow us to calculate the electroweak mixing angle at the particular Q² value of the chosen electrons, for which the Standard Model makes a firm prediction. Any significant deviation from this prediction would be a prime indicator of the existence of physics beyond what the Standard Model describes. After the experiment was conducted at Jefferson Lab, the collected data were stored within a MySQL database for further analysis. I will present an overview of the database and its functions as well as a demonstration of the quality checks and maintenance performed on the data itself. These checks include an analysis of errors occurring throughout the experiment, specifically data acquisition errors within the main detector array, and an analysis of data cuts.
Weak lensing measurement of the mass–richness relation of SDSS redMaPPer clusters
Simet, Melanie; McClintock, Tom; Mandelbaum, Rachel; ...
2016-12-15
Here, we perform a measurement of the mass-richness relation of the redMaPPer galaxy cluster catalogue using weak lensing data from the Sloan Digital Sky Survey. We carefully characterized a broad range of systematic uncertainties, including shear calibration errors, photo-z biases, dilution by member galaxies, source obscuration, magnification bias, incorrect assumptions about cluster mass profiles, cluster centering, halo triaxiality, and projection effects. We then compare measurements of the lensing signal from two independently produced shear and photometric redshift catalogues to characterize systematic errors in the lensing signal itself. Using a sample of 5,570 clusters from 0.1 ≤ z ≤ 0.33, the normalization of our power-law mass vs. λ relation is log10[M200m/h^-1 M⊙] = 14.344 ± 0.021 (statistical) ± 0.023 (systematic) at a richness λ = 40, a 7 per cent calibration uncertainty, with a power-law index of 1.33 +0.09/-0.10 (1σ). Finally, the detailed systematics characterization in this work renders it the definitive weak lensing mass calibration for SDSS redMaPPer clusters at this time.
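Written out, the quoted normalization and index correspond to the power-law scaling relation (restating the numbers above in equation form):

```latex
M_{200\mathrm{m}}(\lambda) \;=\; M_{0}\left(\frac{\lambda}{40}\right)^{1.33^{+0.09}_{-0.10}},
\qquad
\log_{10}\!\left[ M_{0}/(h^{-1} M_{\odot}) \right]
\;=\; 14.344 \;\pm\; 0.021\ (\mathrm{stat.}) \;\pm\; 0.023\ (\mathrm{sys.}).
```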
New Developments in Error Detection and Correction Strategies for Critical Applications
NASA Technical Reports Server (NTRS)
Berg, Melanie; LaBel, Ken
2016-01-01
The presentation will cover a variety of mitigation strategies that were developed for critical applications. An emphasis is placed on strengths and weaknesses per mitigation technique as it pertains to different FPGA device types.
Note: Focus error detection device for thermal expansion-recovery microscopy (ThERM).
Domené, E A; Martínez, O E
2013-01-01
An innovative focus error detection method is presented that is sensitive only to surface curvature variations, canceling both thermoreflectance and photodeflection effects. The detection scheme consists of an astigmatic probe laser and a four-quadrant detector. Nonlinear curve fitting of the defocusing signal allows the retrieval of a cutoff frequency, which depends only on the thermal diffusivity of the sample and the pump beam size. Therefore, a straightforward retrieval of the thermal diffusivity of the sample is possible with microscopic lateral resolution and high axial resolution (~100 pm).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Falsaperla, P.; Fonte, G.
1993-05-01
Applying a method based on some results due to Kato [J. Phys. Soc. Jpn. 4, 334 (1949)], we show that series of Rydberg eigenvalues and Rydberg eigenfunctions of hydrogen in a uniform magnetic field can be calculated with a rigorous error estimate. The efficiency of the method decreases as the eigenvalue density increases and as γn³ → 1, where γ is the magnetic-field strength in units of 2.35×10⁹ G and n is the principal quantum number of the unperturbed hydrogenic manifold from which the diamagnetic Rydberg states evolve. Fixing γ at the laboratory value 2×10⁻⁵ and confining our calculations to the region γn³ < 1 (weak-field regime), we obtain extremely accurate results up to states corresponding to the n = 32 manifold.
Gao, Wei; Liu, Yalong; Xu, Bo
2014-12-19
A new algorithm called Huber-based iterated divided difference filtering (HIDDF) is derived and applied to cooperative localization of autonomous underwater vehicles (AUVs) supported by a single surface leader. The position states are estimated using acoustic range measurements relative to the leader, in which disadvantages such as weak observability, large initial error, and measurements contaminated with outliers are inherent. By integrating the merits of both iterated divided difference filtering (IDDF) and Huber's M-estimation methodology, the new filtering method not only achieves more accurate estimation and faster convergence than standard divided difference filtering (DDF) under conditions of weak observability and large initial error, but also exhibits robustness with respect to outlier measurements, for which the standard IDDF would exhibit severe degradation in estimation accuracy. The correctness as well as validity of the algorithm is demonstrated through experiment results.
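To illustrate the M-estimation ingredient, here is a generic sketch of the Huber reweighting idea, not the full HIDDF recursion; the threshold value is a common default and the names are assumptions:

```python
import numpy as np

def huber_weights(residuals, sigma, k=1.345):
    """Huber M-estimation weights for normalized measurement residuals.

    Residuals within k standard deviations keep full weight (quadratic
    cost, as in a least-squares filter update); larger residuals are
    down-weighted linearly, limiting the influence of outliers."""
    z = np.abs(residuals) / sigma
    return np.where(z <= k, 1.0, k / np.maximum(z, 1e-12))

# In a filter update, each range measurement's noise variance would be
# inflated by 1/w before computing the gain, taming outlier ranges:
print(huber_weights(np.array([0.2, -0.8, 5.0]), sigma=1.0))
```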
Stochastic series expansion simulation of the t-V model
NASA Astrophysics Data System (ADS)
Wang, Lei; Liu, Ye-Hua; Troyer, Matthias
2016-04-01
We present an algorithm for the efficient simulation of the half-filled spinless t-V model on bipartite lattices, which combines the stochastic series expansion method with determinantal quantum Monte Carlo techniques widely used in fermionic simulations. The algorithm scales linearly in the inverse temperature, cubically with the system size, and is free from time-discretization error. We use it to map out the finite-temperature phase diagram of the spinless t-V model on the honeycomb lattice and observe a suppression of the critical temperature of the charge-density-wave phase in the vicinity of a fermionic quantum critical point.
2-stage stochastic Runge-Kutta for stochastic delay differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosli, Norhayati; Jusoh Awang, Rahimah; Bahar, Arifah
2015-05-15
This paper proposes a newly developed one-step derivative-free method, the 2-stage stochastic Runge-Kutta scheme (SRK2), to approximate the solution of stochastic delay differential equations (SDDEs) with a constant time lag, r > 0. A general formulation of stochastic Runge-Kutta methods for SDDEs is introduced, and the Stratonovich Taylor series expansion for the numerical solution of SRK2 is presented. The local truncation error of SRK2 is measured by comparing the Stratonovich Taylor expansion of the exact solution with that of the computed solution. A numerical experiment is performed to verify the validity of the method in simulating the strong solution of SDDEs.
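For orientation, the delay bookkeeping such schemes share can be sketched with the simpler one-stage (stochastic Euler) method; SRK2 adds a second stage to raise the strong order. The names and the constant-history assumption are illustrative:

```python
import numpy as np

def sdde_euler(f, g, x0_hist, r, h, n_steps, rng=np.random.default_rng(0)):
    """Stochastic Euler scheme for the scalar SDDE
       dX(t) = f(X(t), X(t-r)) dt + g(X(t), X(t-r)) dW(t),
    with constant lag r > 0 and constant history x0_hist on [-r, 0]."""
    lag = int(round(r / h))               # delay measured in steps
    x = np.empty(n_steps + 1)
    x[0] = x0_hist
    past = lambda n: x0_hist if n - lag < 0 else x[n - lag]
    for n in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(h))  # Brownian increment
        x[n + 1] = x[n] + f(x[n], past(n)) * h + g(x[n], past(n)) * dW
    return x

# Linear test equation: dX = (-2 X(t) + 0.1 X(t-1)) dt + 0.2 X(t-1) dW
path = sdde_euler(lambda x, xd: -2.0 * x + 0.1 * xd,
                  lambda x, xd: 0.2 * xd,
                  x0_hist=1.0, r=1.0, h=0.01, n_steps=500)
```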
Parameterised post-Newtonian expansion in screened regions
NASA Astrophysics Data System (ADS)
McManus, Ryan; Lombriser, Lucas; Peñarrubia, Jorge
2017-12-01
The parameterised post-Newtonian (PPN) formalism has enabled stringent tests of static weak-field gravity in a theory-independent manner. Here we incorporate screening mechanisms of modified gravity theories into the framework by introducing an effective gravitational coupling and defining the PPN parameters as functions of position. To determine these functions we develop a general method for efficiently performing the post-Newtonian expansion in screened regimes. For illustration, we derive all the PPN functions for a cubic galileon and a chameleon model. We also analyse the Shapiro time delay effect for these two models and find no deviations from General Relativity insofar as the signal path and the perturbing mass reside in a screened region of space.
Inversion of residual stress profiles from ultrasonic Rayleigh wave dispersion data
NASA Astrophysics Data System (ADS)
Mora, P.; Spies, M.
2018-05-01
We investigate theoretically and with synthetic data the performance of several inversion methods used to infer a residual stress state from ultrasonic surface wave dispersion data. We show that this particular problem may reveal, in relevant materials, undesired behaviors in methods that can otherwise be reliably applied to infer other properties. We focus on two methods: one based on a Taylor expansion, and another based on a piecewise-linear expansion regularized by a singular value decomposition. We explain the instabilities of the Taylor-based method by highlighting singularities in the series of coefficients. At the same time, we show that the other method can successfully provide performance that depends only weakly on the material.
ABJ theory in the higher spin limit
NASA Astrophysics Data System (ADS)
Hirano, Shinji; Honda, Masazumi; Okuyama, Kazumi; Shigemori, Masaki
2016-08-01
We study the conjecture made by Chang, Minwalla, Sharma, and Yin on the duality between the N = 6 Vasiliev higher spin theory on AdS4 and the N = 6 Chern-Simons-matter theory, the so-called ABJ theory, with gauge group U(N) × U(N + M). Building on our earlier results on the ABJ partition function, we develop the systematic 1/M expansion, corresponding to the weak coupling expansion in the higher spin theory, and compare the leading 1/M correction, with our proposed prescription, to the one-loop free energy of the N = 6 Vasiliev theory. We find an agreement between the two sides up to an ambiguity that appears in the bulk one-loop calculation.
Metric Tests for Curvature from Weak Lensing and Baryon Acoustic Oscillations
NASA Astrophysics Data System (ADS)
Bernstein, G.
2006-02-01
We describe a practical measurement of the curvature of the universe which, unlike current constraints, relies purely on the properties of the Robertson-Walker metric rather than any assumed model for the dynamics and content of the universe. The observable quantity is the cross-correlation between foreground mass and gravitational shear of background galaxies, which depends on the angular diameter distances d_A(z_l), d_A(z_s), and d_A(z_l, z_s) on the degenerate triangle formed by observer, source, and lens. In a flat universe, d_A(z_l, z_s) = d_A(z_s) − d_A(z_l), but in curved universes an additional term ∝ Ω_k appears and alters the lensing observables even if d_A(z) is fixed. We describe a method whereby weak-lensing data can be used to solve simultaneously for d_A and the curvature. This method is completely insensitive to the equation of state of the contents of the universe, or to amendments to general relativity that alter the gravitational deflection of light or the growth of structure. The curvature estimate is also independent of biases in the photometric redshift scale. This measurement is shown to be subject to a degeneracy among d_A, Ω_k, and the galaxy bias factors that may be broken by using the same imaging data to measure the angular scale of baryon acoustic oscillations. Simplified estimates of the accuracy attainable by this method indicate that ambitious weak-lensing + baryon-oscillation surveys would measure Ω_k to an accuracy ~0.04 f_sky^(-1/2) (σ_lnz/0.04)^(1/2), where σ_lnz is the photometric redshift error. The Fisher-matrix formalism developed here is also useful for predicting bounds on curvature and other characteristics of parametric dark energy models. We forecast some representative error levels and compare ours to other analyses of the weak-lensing cross-correlation method. We find both curvature and parametric constraints to be surprisingly insensitive to systematic shear calibration errors.
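The curvature term mentioned above follows from the standard closed-form relation between comoving angular diameter distances in a Robertson-Walker metric (a textbook result, stated here with assumed notation D = (1 + z) d_A):

```latex
D_{ls} \;=\; D_s \sqrt{1 + \Omega_k \frac{H_0^2 D_l^2}{c^2}}
      \;-\; D_l \sqrt{1 + \Omega_k \frac{H_0^2 D_s^2}{c^2}}
\;\simeq\; \left( D_s - D_l \right)
      \;+\; \frac{\Omega_k H_0^2}{2 c^2}\, D_l D_s \left( D_l - D_s \right),
```

so at fixed D(z) the lens-source distance, and hence the lensing observables, pick up a first-order shift proportional to Ω_k, which is what the cross-correlation method isolates.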
Barbieri, Ana A; Scoralick, Raquel A; Naressi, Suely C M; Moraes, Mari E L; Daruge, Eduardo; Daruge, Eduardo
2013-01-01
The objective of this study was to demonstrate the effectiveness of rugoscopy as a human identification method even when the patient has undergone rapid palatal expansion, which in theory would introduce doubt. With this intent, the Rugoscopic Identity was obtained for each subject using the classification formula proposed by Santos, based on intra-oral casts made before and after treatment of patients who underwent palatal expansion. The casts were labeled with the patients' initials and randomly arranged for study. The palatine rugae kept the same patterns in every case studied. The technical error of the intra-evaluator measurement provided a confidence interval of 95%, making rugoscopy a reliable identification method for patients submitted to rapid palatal expansion: even in the presence of intra-oral changes owing to the use of palatal expanders, the palatine rugae retained the biological and technical requirements for the human identification process.
QCD equation of state to O (μB6) from lattice QCD
NASA Astrophysics Data System (ADS)
Bazavov, A.; Ding, H.-T.; Hegde, P.; Kaczmarek, O.; Karsch, F.; Laermann, E.; Maezawa, Y.; Mukherjee, Swagato; Ohno, H.; Petreczky, P.; Sandmeyer, H.; Steinbrecher, P.; Schmidt, C.; Sharma, S.; Soeldner, W.; Wagner, M.
2017-03-01
We calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV] using up to four different sets of lattice cutoffs corresponding to lattices of size N_σ³ × N_τ with aspect ratio N_σ/N_τ = 4 and N_τ = 6-16. The strange quark mass is tuned to its physical value, and we use two strange-to-light quark mass ratios, m_s/m_l = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 and 140 MeV, respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (μ_B ≤ 2T). The fourth-order equation of state thus is suitable for the modeling of dense matter created in heavy ion collisions with center-of-mass energies down to √s_NN ~ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth-order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-μ_B plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy ion collisions. These coefficients also provide estimates for the location of a possible critical point. We argue that results on sixth-order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for μ_B/T ≤ 2 and T/T_c(μ_B = 0) > 0.9.
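The Taylor expansion in question has the standard form (a generic statement with assumed notation, where the hatted chemical potentials are the dimensionless ratios μ/T):

```latex
\frac{P(T,\mu_B,\mu_Q,\mu_S)}{T^4}
\;=\; \sum_{i,j,k \,\ge\, 0} \frac{\chi^{BQS}_{ijk}(T)}{i!\, j!\, k!}\,
\hat{\mu}_B^{\,i}\, \hat{\mu}_Q^{\,j}\, \hat{\mu}_S^{\,k},
\qquad \hat{\mu} \equiv \frac{\mu}{T},
```

truncated at total order i + j + k ≤ 6 in this work; comparing the partial sums at fourth and sixth order is what provides the truncation-error estimate quoted above.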
The Kadomtsev-Petviashvili equation under rapid forcing
NASA Astrophysics Data System (ADS)
Moroz, Irene M.
1997-06-01
We consider the initial value problem for the forced Kadomtsev-Petviashvili equation (KP) when the forcing is assumed to be fast compared to the evolution of the unforced equation. This suggests the introduction of two time scales. Solutions to the forced KP are sought by expanding the dependent variable in powers of a small parameter, which is inversely related to the forcing time scale. The unforced system describes weakly nonlinear, weakly dispersive, weakly two-dimensional wave propagation and is studied in two forms, depending upon whether gravity dominates surface tension or vice versa. We focus on the effect that the forcing has on the one-lump solution to the KPI equation (where surface tension dominates) and on the one- and two-line soliton solutions to the KPII equation (when gravity dominates). Solutions to second order in the expansion are computed analytically for some specific choices of the forcing function, which are related to the choice of initial data.
The tyrosine phosphatase PTPN22 discriminates weak self peptides from strong agonist TCR signals.
Salmond, Robert J; Brownlie, Rebecca J; Morrison, Vicky L; Zamoyska, Rose
2014-09-01
T cells must be tolerant of self antigens to avoid autoimmunity but responsive to foreign antigens to provide protection against infection. We found that in both naive T cells and effector T cells, the tyrosine phosphatase PTPN22 limited signaling via the T cell antigen receptor (TCR) by weak agonists and self antigens while not impeding responses to strong agonist antigens. T cells lacking PTPN22 showed enhanced formation of conjugates with antigen-presenting cells pulsed with weak peptides, which led to activation of the T cells and their production of inflammatory cytokines. This effect was exacerbated under conditions of lymphopenia, with the formation of potent memory T cells in the absence of PTPN22. Our data address how loss-of-function PTPN22 alleles can lead to the population expansion of effector and/or memory T cells and a predisposition to human autoimmunity.
A Weak Value Based QKD Protocol Robust Against Detector Attacks
NASA Astrophysics Data System (ADS)
Troupe, James
2015-03-01
We propose a variation of the BB84 quantum key distribution protocol that utilizes the properties of weak values to ensure the validity of the quantum bit error rate estimates used to detect an eavesdropper. The protocol is shown theoretically to be secure against recently demonstrated attacks utilizing detector blinding and control, and should also be robust against all detector-based hacking. Importantly, the new protocol promises to achieve this additional security without negatively impacting the secure key generation rate as compared to that originally promised by the standard BB84 scheme. Implementation of the weak measurements needed by the protocol should be very feasible using standard quantum optical techniques.
NASA Astrophysics Data System (ADS)
Van Weverberg, K.; Morcrette, C. J.; Petch, J.; Klein, S. A.; Ma, H.-Y.; Zhang, C.; Xie, S.; Tang, Q.; Gustafson, W. I.; Qian, Y.; Berg, L. K.; Liu, Y.; Huang, M.; Ahlgrimm, M.; Forbes, R.; Bazile, E.; Roehrig, R.; Cole, J.; Merryfield, W.; Lee, W.-S.; Cheruy, F.; Mellul, L.; Wang, Y.-C.; Johnson, K.; Thieman, M. M.
2018-04-01
Many Numerical Weather Prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. This paper presents an attribution study on the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate in the cloud-related radiation errors. Some models have compensating errors between excessive occurrence of deep cloud but largely underestimating their radiative effect, while other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies.
Carrez, Laurent; Bouchoud, Lucie; Fleury-Souverain, Sandrine; Combescure, Christophe; Falaschi, Ludivine; Sadeghipour, Farshid; Bonnabry, Pascal
2017-03-01
Background and objectives Centralized chemotherapy preparation units have established systematic strategies to avoid errors. Our work aimed to evaluate the accuracy of manual preparations associated with different control methods. Method A simulation study in an operational setting used phenylephrine and lidocaine as markers. Each operator prepared syringes that were controlled using a different method during each of three sessions (no control, visual double-checking, and gravimetric control). Eight reconstitutions and dilutions were prepared in each session, with variable doses and volumes, using different concentrations of stock solutions. Results were analyzed according to qualitative (choice of stock solution) and quantitative criteria (accurate, <5% deviation from the target concentration; weakly accurate, 5%-10%; inaccurate, 10%-30%; wrong, >30% deviation). Results Eleven operators carried out 19 sessions. No final preparation (n = 438) contained a wrong drug. The protocol involving no control failed to detect 1 of 3 dose errors made and double-checking failed to detect 3 of 7 dose errors. The gravimetric control method detected all 5 out of 5 dose errors. The accuracy of the doses measured was equivalent across the control methods ( p = 0.63 Kruskal-Wallis). The final preparations ranged from 58% to 60% accurate, 25% to 27% weakly accurate, 14% to 17% inaccurate and 0.9% wrong. A high variability was observed between operators. Discussion Gravimetric control was the only method able to detect all dose errors, but it did not improve dose accuracy. A dose accuracy with <5% deviation cannot always be guaranteed using manual production. Automation should be considered in the future.
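The accuracy categories used in this study map directly onto a simple classification of percentage deviation from the target concentration; a minimal sketch (function and variable names are ours):

```python
def classify_dose(measured, target):
    """Classify a preparation by % deviation from the target concentration,
    using the study's bands: <5% accurate, 5-10% weakly accurate,
    10-30% inaccurate, >30% wrong."""
    dev = abs(measured - target) / target * 100.0
    if dev < 5.0:
        return "accurate"
    elif dev <= 10.0:
        return "weakly accurate"
    elif dev <= 30.0:
        return "inaccurate"
    return "wrong"

print(classify_dose(9.6, 10.0))   # 4% deviation -> 'accurate'
print(classify_dose(7.2, 10.0))   # 28% deviation -> 'inaccurate'
```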
Smailes, David; Meins, Elizabeth; Fernyhough, Charles
2015-01-01
People who experience intrusive thoughts are at increased risk of developing hallucinatory experiences, as are people who have weak reality discrimination skills. No study has yet examined whether these two factors interact to make a person especially prone to hallucinatory experiences. The present study examined this question in a non-clinical sample. Participants were 160 students, who completed a reality discrimination task, as well as self-report measures of cannabis use, negative affect, intrusive thoughts and auditory hallucination-proneness. The possibility of an interaction between reality discrimination performance and level of intrusive thoughts was assessed using multiple regression. The number of reality discrimination errors and level of intrusive thoughts were independent predictors of hallucination-proneness. The reality discrimination errors × intrusive thoughts interaction term was significant, with participants who made many reality discrimination errors and reported high levels of intrusive thoughts being especially prone to hallucinatory experiences. Hallucinatory experiences are more likely to occur in people who report high levels of intrusive thoughts and have weak reality discrimination skills. If applicable to clinical samples, these findings suggest that improving patients' reality discrimination skills and reducing the number of intrusive thoughts they experience may reduce the frequency of hallucinatory experiences.
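A hedged sketch of the interaction analysis described, with simulated data (the variable names and effect sizes are ours; the study used its own measures):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 160
df = pd.DataFrame({
    "rd_errors": rng.poisson(3, n),      # reality discrimination errors
    "intrusive": rng.normal(0, 1, n),    # intrusive-thoughts score
})
# Simulate hallucination-proneness with main effects plus an interaction.
df["halluc"] = (0.5 * df.rd_errors + 0.8 * df.intrusive
                + 0.4 * df.rd_errors * df.intrusive
                + rng.normal(0, 1, n))

# Multiple regression with the errors x intrusive-thoughts interaction term.
model = smf.ols("halluc ~ rd_errors * intrusive", data=df).fit()
print(model.summary())
```

In the formula interface, `rd_errors * intrusive` expands to both main effects and their product, which is the interaction term tested in the study.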
Cascading activation from lexical processing to letter-level processing in written word production.
Buchwald, Adam; Falconer, Carolyn
2014-01-01
Descriptions of language production have identified processes involved in producing language and the presence and type of interaction among those processes. In the case of spoken language production, consensus has emerged that there is interaction among lexical selection processes and phoneme-level processing. This issue has received less attention in written language production. In this paper, we present a novel analysis of the writing-to-dictation performance of an individual with acquired dysgraphia revealing cascading activation from lexical processing to letter-level processing. The individual produced frequent lexical-semantic errors (e.g., chipmunk → SQUIRREL) as well as letter errors (e.g., inhibit → INBHITI) and had a profile consistent with impairment affecting both lexical processing and letter-level processing. The presence of cascading activation is suggested by lower letter accuracy on words that are more weakly activated during lexical selection than on those that are more strongly activated. We operationalize weakly activated lexemes as those lexemes that are produced as lexical-semantic errors (e.g., lethal in deadly → LETAHL) compared to strongly activated lexemes where the intended target word (e.g., lethal) is the lexeme selected for production.
An integral formulation for wave propagation on weakly non-uniform potential flows
NASA Astrophysics Data System (ADS)
Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel
2016-12-01
An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.
ERIC Educational Resources Information Center
Thomas, Matthew A. M.
2018-01-01
This article explores two distinct strategies suggested by academics in Tanzania for publishing and disseminating their research amidst immense higher education expansion. It draws on Arjun Appadurai's notions of 'strong' and 'weak' internationalisation to analyse the perceived binary between 'international' and 'local' academic journals and their…
NASA Astrophysics Data System (ADS)
Konovalov, Dmitry A.; Cocks, Daniel G.; White, Ronald D.
2017-10-01
The velocity distribution function and transport coefficients for charged particles in weakly ionized plasmas are calculated via a multi-term solution of Boltzmann's equation and benchmarked using a Monte-Carlo simulation. A unified framework for the solution of the original full Boltzmann's equation is presented which is valid for ions and electrons, avoiding any recourse to approximate forms of the collision operator in various limiting mass ratio cases. This direct method using Lebedev quadratures over the velocity and scattering angles avoids the need to represent the ion mass dependence in the collision operator through an expansion in terms of the charged particle to neutral mass ratio. For the two-temperature Burnett function method considered in this study, this amounts to avoiding the need for the complex Talmi-transformation methods and associated mass-ratio expansions. More generally, we highlight the deficiencies in the two-temperature Burnett function method for heavy ions at high electric fields to calculate the ion velocity distribution function, even though the transport coefficients have converged. Contribution to the Topical Issue "Physics of Ionized Gases (SPIG 2016)", edited by Goran Poparic, Bratislav Obradovic, Dragana Maric and Aleksandar Milosavljevic.
The Weak Spots in Contemporary Science (and How to Fix Them)
2017-01-01
Simple Summary Several fraud cases, widespread failure to replicate or reproduce seminal findings, and pervasive error in the scientific literature have led to a crisis of confidence in the biomedical, behavioral, and social sciences. In this review, the author discusses some of the core findings that point at weak spots in contemporary science and considers the human factors that underlie them. He delves into the human tendencies that create errors and biases in data collection, analyses, and reporting of research results. He presents several solutions to deal with observer bias, publication bias, the researcher’s tendency to exploit degrees of freedom in their analysis of data, low statistical power, and errors in the reporting of results, with a focus on the specific challenges in animal welfare research. Abstract In this review, the author discusses several of the weak spots in contemporary science, including scientific misconduct, the problems of post hoc hypothesizing (HARKing), outcome switching, theoretical bloopers in formulating research questions and hypotheses, selective reading of the literature, selective citing of previous results, improper blinding and other design failures, p-hacking or researchers’ tendency to analyze data in many different ways to find positive (typically significant) results, errors and biases in the reporting of results, and publication bias. The author presents some empirical results highlighting problems that lower the trustworthiness of reported results in scientific literatures, including that of animal welfare studies. Some of the underlying causes of these biases are discussed based on the notion that researchers are only human and hence are not immune to confirmation bias, hindsight bias, and minor ethical transgressions. The author discusses solutions in the form of enhanced transparency, sharing of data and materials, (post-publication) peer review, pre-registration, registered reports, improved training, reporting guidelines, replication, dealing with publication bias, alternative inferential techniques, power, and other statistical tools. PMID:29186879
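Low statistical power, one of the weak spots listed, is easy to quantify; a short illustration using a standard power calculation (the effect size and design are chosen for illustration, not taken from the review):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
# Sample size per group needed to detect a medium effect (d = 0.5)
# with alpha = 0.05 and 80% power in a two-sample t-test.
n = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.8)
print(round(n))   # ~64 per group

# Conversely, the power actually achieved by a small study (n = 15/group):
print(analysis.power(effect_size=0.5, nobs1=15, alpha=0.05))  # ~0.26
```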
NASA Astrophysics Data System (ADS)
Kim, Donghyun; Lee, Yong Il; Hyeong, Kiseong; Yoo, Chan Min
2016-09-01
The appearance and expansion of C4 plants in the Late Cenozoic was a dramatic example of terrestrial ecological change. The fire hypothesis, which suggests fire as a major cause of C4 grassland is gaining support, yet a more detailed relationship between fire and vegetation-type change remains unresolved. We report the content and stable carbon isotope record of black carbon (BC) in a sediment core retrieved from the northeastern equatorial Pacific that covers the past 14.3 million years. The content record of BC suggests the development process of a flammable ecosystem. The stable carbon isotope record of BC reveals the existence of the Late Miocene C4 expansion, the ‘C4 maximum period of burned biomass’ during the Pliocene to Early Pleistocene, and the collapse of the C4 in the Late Pleistocene. Records showing the initial expansion of C4 plants after large fire support the role of fire as a destructive agent of C3-dominated forest, yet the weak relationships between fire and vegetation after initial expansion suggest that environmental advantages for C4 plants were necessary to maintain the development of C4 plants during the late Neogene. Among the various environmental factors, aridity is likely most influential in C4 expansion.
Quantitative Analysis of Temperature Dependence of Raman shift of monolayer WS2
NASA Astrophysics Data System (ADS)
Huang, Xiaoting; Gao, Yang; Yang, Tianqi; Ren, Wencai; Cheng, Hui-Ming; Lai, Tianshu
2016-08-01
We report the temperature-dependent evolution of Raman spectra of monolayer WS2 directly CVD-grown on a gold foil and then transferred onto quartz substrates, over a wide temperature range from 84 to 543 K. A nonlinear temperature dependence of the Raman shifts of both the in-plane E2g and out-of-plane A1g modes has been observed. The first-order temperature coefficients of the Raman shifts are obtained to be -0.0093 cm-1/K and -0.0122 cm-1/K for the E2g and A1g peaks, respectively. A physical model, including thermal expansion and three- and four-phonon anharmonic effects, is used to quantitatively analyze the observed nonlinear temperature dependence. The thermal expansion coefficient (TEC) of monolayer WS2 is extracted from the experimental data for the first time. It is found that the TEC of the out-of-plane mode is larger than that of the in-plane mode, and that the TECs of the E2g and A1g modes are weakly and strongly temperature-dependent, respectively. It is also found that the nonlinear temperature dependence of the Raman shift of the E2g mode mainly originates from the anharmonic effect of the three-phonon process, whereas that of the A1g mode is mainly contributed by the thermal expansion effect in the high-temperature region, revealing that the thermal expansion effect cannot be ignored.
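A hedged sketch of extracting a first-order temperature coefficient from Raman shift data; the data below are synthetic, generated with the reported A1g coefficient of -0.0122 cm⁻¹/K and an assumed room-temperature peak position of about 418 cm⁻¹:

```python
import numpy as np

# Synthetic A1g peak positions over 84-543 K with the reported linear
# coefficient chi = -0.0122 cm^-1/K plus measurement noise.
rng = np.random.default_rng(1)
T = np.linspace(84, 543, 25)
omega = 417.5 - 0.0122 * T + rng.normal(0, 0.05, T.size)  # omega_0 assumed

# First-order fit omega(T) = omega_0 + chi * T:
chi, omega0 = np.polyfit(T, omega, 1)   # polyfit returns slope first
print(f"chi = {chi:.4f} cm^-1/K, omega_0 = {omega0:.1f} cm^-1")
```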
Huntington disease without CAG expansion: Phenocopies or errors in assignment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Andrew, S.E.; Goldberg, Y.P.; Kremer, B.
1994-05-01
Huntington disease (HD) has been shown to be associated with an expanded CAG repeat within a novel gene on 4p16.3 (IT15). A total of 30 of 1,022 affected persons (2.9% of the cohort) did not have an expanded CAG in the disease range. The reasons for not observing expansion in affected individuals are important for determining the sensitivity of using repeat length both for diagnosis of affected patients and for predictive testing programs, and may have biological relevance for the understanding of the molecular mechanism underlying HD. Here the authors show that the majority (18) of the individuals with normal-sized alleles represent misdiagnosis, sample mix-up, or clerical error. The remaining 12 patients represent possible phenocopies for HD. In at least four cases, family studies of these phenocopies excluded 4p16.3 as the region responsible for the phenotype. Mutations in the HD gene other than CAG expansion have not been excluded for the remaining eight cases; however, in as many as seven of these persons, retrospective review of the patients' clinical features identified characteristics not typical for HD. This study shows that on rare occasions mutations in other, as-yet-undefined genes can present with a clinical phenotype very similar to that of HD. 30 refs., 4 figs., 3 tabs.
Detonation energies of explosives by optimized JCZ3 procedures
NASA Astrophysics Data System (ADS)
Stiel, Leonard I.; Baker, Ernest L.
1998-07-01
Procedures for calculating the detonation properties of explosives have been extended to the calculation of detonation energies at adiabatic expansion conditions. The use of the JCZ3 equation of state with optimized Exp-6 potential parameters yields smaller errors relative to JWL detonation energies than the other methods tested.
Mathes, Tim; Klaßen, Pauline; Pieper, Dawid
2017-11-28
Our objective was to assess the frequency of data extraction errors and its potential impact on results in systematic reviews. Furthermore, we evaluated the effect of different extraction methods, reviewer characteristics and reviewer training on error rates and results. We performed a systematic review of methodological literature in PubMed, Cochrane methodological registry, and by manual searches (12/2016). Studies were selected by two reviewers independently. Data were extracted in standardized tables by one reviewer and verified by a second. The analysis included six studies; four studies on extraction error frequency, one study comparing different reviewer extraction methods and two studies comparing different reviewer characteristics. We did not find a study on reviewer training. There was a high rate of extraction errors (up to 50%). Errors often had an influence on effect estimates. Different data extraction methods and reviewer characteristics had moderate effect on extraction error rates and effect estimates. The evidence base for established standards of data extraction seems weak despite the high prevalence of extraction errors. More comparative studies are needed to get deeper insights into the influence of different extraction methods.
Opar, David A; Piatkowski, Timothy; Williams, Morgan D; Shield, Anthony J
2013-09-01
Reliability and case-control injury study. To determine if a novel device designed to measure eccentric knee flexor strength via the Nordic hamstring exercise displays acceptable test-retest reliability; to determine normative values for eccentric knee flexor strength derived from the device in individuals without a history of hamstring strain injury (HSI); and to determine if the device can detect weakness in elite athletes with a previous history of unilateral HSI. HSI and reinjury are the most common cause of lost playing time in a number of sports. Eccentric knee flexor weakness is a major modifiable risk factor for future HSI. However, at present, there is a lack of easily accessible equipment to assess eccentric knee flexor strength. Thirty recreationally active males without a history of HSI completed the Nordic hamstring exercise on the device on 2 separate occasions. Intraclass correlation coefficients, typical error, typical error as a coefficient of variation, and minimal detectable change at a 95% confidence level were calculated. Normative strength data were determined using the most reliable measurement. An additional 20 elite athletes with a unilateral history of HSI within the previous 12 months performed the Nordic hamstring exercise on the device to determine if residual eccentric muscle weakness existed in the previously injured limb. The device displayed high to moderate reliability (intraclass correlation coefficient = 0.83-0.90; typical error, 21.7-27.5 N; typical error as a coefficient of variation, 5.8%-8.5%; minimal detectable change at a 95% confidence level, 60.1-76.2 N). Mean ± SD normative eccentric flexor strength in the uninjured group was 344.7 ± 61.1 N for the left and 361.2 ± 65.1 N for the right side. The previously injured limb was 15% weaker than the contralateral uninjured limb (mean difference, 50.3 N; 95% confidence interval: 25.7, 74.9; P<.01), 15% weaker than the normative left limb (mean difference, 50.0 N; 95% confidence interval: 1.4, 98.5; P = .04), and 18% weaker than the normative right limb (mean difference, 66.5 N; 95% confidence interval: 18.0, 115.1; P<.01). The experimental device offers a reliable method to measure eccentric knee flexor strength and strength asymmetry and to detect residual weakness in previously injured elite athletes.
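The test-retest statistics reported here (typical error, typical error as a coefficient of variation, minimal detectable change) follow standard formulas; a minimal sketch with synthetic paired data:

```python
import numpy as np

rng = np.random.default_rng(2)
day1 = rng.normal(350, 60, 30)        # eccentric strength (N), session 1
day2 = day1 + rng.normal(0, 30, 30)   # session 2 with test-retest noise

diff = day2 - day1
typical_error = diff.std(ddof=1) / np.sqrt(2)           # TE = SDdiff / sqrt(2)
cv = 100 * typical_error / np.mean([day1.mean(), day2.mean()])  # TE as %CV
mdc95 = 1.96 * np.sqrt(2) * typical_error               # 95% minimal detectable change

print(f"TE = {typical_error:.1f} N, CV = {cv:.1f}%, MDC95 = {mdc95:.1f} N")
```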
EIT images of ventilation: what contributes to the resistivity changes?
Zhang, Jie; Patterson, Robert P
2005-04-01
One promising application of electrical impedance tomography (EIT) is the monitoring of pulmonary ventilation and edema. Using three-dimensional (3D) finite difference human models as virtual phantoms, the factors that contribute to the observed lung resistivity changes in the EIT images were investigated. The results showed that the factors included not only tissue resistivity or vessel volume changes, but also chest expansion and tissue/organ movement. The chest expansion introduced artifacts in the center of the EIT images, ranging from -2% to 31% of the image magnitude. With the increase of simulated chest expansion, the percentage contribution of chest expansion relative to lung resistivity change in the EIT image remained relatively constant. The averaged resistivity changes in the lung regions caused by chest expansion ranged from 0.65% to 18.31%. Tissue/organ movement resulted in an increased resistivity in the lung region and in the center anterior region of EIT images. The increased resistivity with inspiration observed in the heart region was caused mainly by a drop in the heart position, which reduced the heart area at the electrode level and was replaced by the lung tissue with higher resistivity. This study indicates that for the analysis of EIT, data errors caused by chest expansion and tissue/organ movement need to be considered.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dietrich, J.P.; et al.
Uncertainty in the mass-observable scaling relations is currently the limiting factor for galaxy cluster based cosmology. Weak gravitational lensing can provide a direct mass calibration and reduce the mass uncertainty. We present new ground-based weak lensing observations of 19 South Pole Telescope (SPT) selected clusters and combine them with previously reported space-based observations of 13 galaxy clusters to constrain the cluster mass scaling relations with the Sunyaev-Zel'dovich effect (SZE), the cluster gas mass M_gas, and Y_X, the product of M_gas and X-ray temperature. We extend a previously used framework for the analysis of scaling relations and cosmological constraints obtained from SPT-selected clusters to make use of weak lensing information. We introduce a new approach to estimate the effective average redshift distribution of background galaxies and quantify a number of systematic errors affecting the weak lensing modelling. These errors include a calibration of the bias incurred by fitting a Navarro-Frenk-White profile to the reduced shear using N-body simulations. We blind the analysis to avoid confirmation bias. We are able to limit the systematic uncertainties to 6.4% in cluster mass (68% confidence). Our constraints on the mass-X-ray observable scaling relation parameters are consistent with those obtained by earlier studies, and our constraints for the mass-SZE scaling relation are consistent with the simulation-based prior used in the most recent SPT-SZ cosmology analysis. We can now replace the external mass calibration priors used in previous SPT-SZ cosmology studies with a direct, internal calibration obtained on the same clusters.
Influence of a weak gravitational wave on a bound system of two point-masses. [of binary stars
NASA Technical Reports Server (NTRS)
Turner, M. S.
1979-01-01
The problem of a weak gravitational wave impinging upon a nonrelativistic bound system of two point masses is considered. The geodesic equation for each mass is expanded in terms of two small parameters, v/c and dimensionless wave amplitude, in a manner similar to the post-Newtonian expansion; the geodesic equations are resolved into orbital and center-of-mass equations of motion. The effect of the wave on the orbit is determined by using Lagrange's planetary equations to calculate the time evolution of the orbital elements. The gauge properties of the solutions and, in particular, the gauge invariance of the secular effects are discussed.
Extension of lattice cluster theory to strongly interacting, self-assembling polymeric systems.
Freed, Karl F
2009-02-14
A new extension of the lattice cluster theory is developed to describe the influence of monomer structure and local correlations on the free energy of strongly interacting and self-assembling polymer systems. This extension combines a systematic high dimension (1/d) and high temperature expansion (that is appropriate for weakly interacting systems) with a direct treatment of strong interactions. The general theory is illustrated for a binary polymer blend whose two components contain "sticky" donor and acceptor groups, respectively. The free energy is determined as an explicit function of the donor-acceptor contact probabilities that depend, in turn, on the local structure and both the strong and weak interactions.
Limits on negative information in language input.
Morgan, J L; Travis, L L
1989-10-01
Hirsh-Pasek, Treiman & Schneiderman (1984) and Demetras, Post & Snow (1986) have recently suggested that certain types of parental repetitions and clarification questions may provide children with subtle cues to their grammatical errors. We further investigated this possibility by examining parental responses to inflectional over-regularizations and wh-question auxiliary-verb omission errors in the sets of transcripts from Adam, Eve and Sarah (Brown 1973). These errors were chosen because they are exemplars of overgeneralization, the type of mistake for which negative information is, in theory, most critically needed. Expansions and Clarification Questions occurred more often following ill-formed utterances in Adam's and Eve's input, but not in Sarah's. However, these corrective responses formed only a small proportion of all adult responses following Adam's and Eve's grammatical errors. Moreover, corrective responses appear to drop out of children's input while they continue to make overgeneralization errors. Whereas negative feedback may occasionally be available, in the light of these findings the contention that language input generally incorporates negative information appears to be unfounded.
Data error and highly parameterized groundwater models
Hill, M.C.
2008-01-01
Strengths and weaknesses of highly parameterized models, in which the number of parameters exceeds the number of observations, are demonstrated using a synthetic test case. Results suggest that the approach can yield close matches to observations but also serious errors in system representation. It is proposed that avoiding the difficulties of highly parameterized models requires close evaluation of: (1) model fit, (2) performance of the regression, and (3) estimated parameter distributions. Comparisons to hydrogeologic information are expected to be critical to obtaining credible models. Copyright © 2008 IAHS Press.
Ersland, Karen; Pick-Jacobs, John C.; Gern, Benjamin H.; Frye, Christopher A.; Sullivan, Thomas D.; Brennan, Meghan B.; Filutowicz, Hanna I.; O'Brien, Kevin; Korthauer, Keegan D.; Schultz-Cherry, Stacey; Klein, Bruce S.
2012-01-01
CD4+ T cells are the key players of vaccine resistance to fungi. The generation of effective T cell-based vaccines requires an understanding of how to induce and maintain CD4+ T cells and memory. The kinetics of fungal antigen (Ag)-specific CD4+ T cell memory development has not been studied due to the lack of any known protective epitopes and clonally restricted T cell subsets with complementary T cell receptors (TCRs). Here, we investigated the expansion and function of CD4+ T cell memory after vaccination with transgenic (Tg) Blastomyces dermatitidis yeasts that display a model Ag, Eα-mCherry (Eα-mCh). We report that Tg yeast led to Eα display on Ag-presenting cells and induced robust activation, proliferation, and expansion of adoptively transferred TEa cells in an Ag-specific manner. Despite robust priming by Eα-mCh yeast, antifungal TEa cells recruited and produced cytokines weakly during a recall response to the lung. The addition of exogenous Eα-red fluorescent protein (RFP) to the Eα-mCh yeast boosted the number of cytokine-producing TEa cells that migrated to the lung. Thus, model epitope expression on yeast enables the interrogation of Ag presentation to CD4+ T cells and primes Ag-specific T cell activation, proliferation, and expansion. However, the limited availability of model Ag expressed by Tg fungi during T cell priming blunts the downstream generation of effector and memory T cells. PMID:22124658
NASA Astrophysics Data System (ADS)
Zou, Z.; Scott, M. A.; Borden, M. J.; Thomas, D. C.; Dornisch, W.; Brivadis, E.
2018-05-01
In this paper we develop the isogeometric Bézier dual mortar method. It is based on Bézier extraction and projection and is applicable to any spline space which can be represented in Bézier form (i.e., NURBS, T-splines, LR-splines, etc.). The approach weakly enforces the continuity of the solution at patch interfaces, and the error can be adaptively controlled by leveraging the refineability of the underlying dual spline basis without introducing any additional degrees of freedom. We also develop weakly continuous geometry as a particular application of isogeometric Bézier dual mortaring. Weakly continuous geometry is a geometry description where the weak continuity constraints are built into properly modified Bézier extraction operators. As a result, multi-patch models can be processed in a solver directly without having to employ a mortaring solution strategy. We demonstrate the utility of the approach on several challenging benchmark problems. Keywords: Mortar methods, Isogeometric analysis, Bézier extraction, Bézier projection
Hamedi Sangsari, Adrien; Sadr-Eshkevari, Pooyan; Al-Dam, Ahmed; Friedrich, Reinhard E; Freymiller, Earl; Rashad, Ashkan
2016-02-01
The purpose of this review was to evaluate the outcome measurements of anterior expansion, posterior expansion, and complications after surgically assisted rapid palatal expansion (SARPE) with or without pterygomaxillary disjunction (PMD). A computerized database search was performed using PubMed, CINAHL, Cochrane, Scopus, and Web of Science. Then, a computerized search was conducted in Google Scholar and ProQuest to overcome publication bias. From the original 125 combined results, 3 met the inclusion criteria. The Quality Assessment Tool for Quantitative Studies of the Effective Public Health Practice Project assessed 2 articles as weak and 1 as moderate. The systematic review included a total of 48 patients (11 male and 37 female). For 25 patients, SARPE was performed with PMD and for 23 patients SARPE was performed without PMD. A tooth-borne fixed hyrax-type palatal expansion screw appliance was used for all cases, activated 1 to 2 mm intraoperatively, and, after a latency period of 3 to 7 days, activated 0.5 to 0.6 mm per day for 38 patients and 0.25 mm for the other 10 until adequate expansion. Postexpansion retention was performed using ligature wired hyrax in 18 patients for 4 months. Comparisons were based on cone-beam computed tomographic projections, study models only, or a combination of study models, anteroposterior cephalometric radiographs, and occlusal radiographs. The time to measure the changes ranged from before fixed orthodontic retention to 6 months after the completion of active expansion. A meta-analysis was possible only for anterior (intercanine) and posterior (inter-molar) dental expansions. The literature is inconclusive regarding the effect of PMD on the outcomes of SARPE. Further controlled trials are needed. Copyright © 2016 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Melendez, Jordan; Wesolowski, Sarah; Furnstahl, Dick
2017-09-01
Chiral effective field theory (EFT) predictions are necessarily truncated at some order in the EFT expansion, which induces an error that must be quantified for robust statistical comparisons to experiment. A Bayesian model yields posterior probability distribution functions for these errors based on expectations of naturalness encoded in Bayesian priors and the observed order-by-order convergence pattern of the EFT. As a general example of a statistical approach to truncation errors, the model was applied to chiral EFT for neutron-proton scattering using various semi-local potentials of Epelbaum, Krebs, and Meißner (EKM). Here we discuss how our model can learn correlation information from the data and how to perform Bayesian model checking to validate that the EFT is working as advertised. Supported in part by NSF PHY-1614460 and DOE NUCLEI SciDAC DE-SC0008533.
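A minimal sketch of the first-omitted-term estimate that underlies such Bayesian truncation-error models, written for the generic EFT convergence pattern X = X_ref Σ c_n Qⁿ (the numbers below are illustrative, not EKM values):

```python
import numpy as np

def truncation_error(X_ref, coeffs, Q):
    """Crude stand-in for the Bayesian posterior width: estimate the error
    of truncating X = X_ref * sum_n c_n Q^n after the highest computed
    order by assuming the next coefficient is of natural size, i.e.,
    comparable to the largest observed coefficient."""
    k = len(coeffs) - 1                  # highest computed order
    c_bar = np.max(np.abs(coeffs))       # naturalness scale from the data
    return X_ref * c_bar * Q ** (k + 1)

# Illustrative order-by-order convergence pattern for one observable:
coeffs = [1.0, -0.6, 0.4, 0.3]           # c_0..c_3 (made up)
print(truncation_error(X_ref=50.0, coeffs=coeffs, Q=0.33))
```

The Bayesian model in the work above replaces the crude `c_bar` by a prior on natural coefficient sizes and returns a full posterior rather than a point estimate.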
Achievable flatness in a large microwave power transmitting antenna
NASA Technical Reports Server (NTRS)
Ried, R. C.
1980-01-01
A dual reference SPS system with pseudoisotropic graphite composite as a representative dimensionally stable composite was studied. The loads, accelerations, thermal environments, temperatures and distortions were calculated for a variety of operational SPS conditions, along with statistical considerations of material properties, manufacturing tolerances, measurement accuracy and the resulting line of sight (LOS) and local slope distributions. A LOS error and a subarray rms slope error of two arc minutes can be achieved with a passive system. Results show that existing materials measurement, manufacturing, assembly and alignment techniques can be used to build the microwave power transmission system antenna structure. Manufacturing tolerance can be critical to rms slope error. The slope error budget can be met with a passive system. Structural joints without free play are essential in the assembly of the large truss structure. Variations in material properties from part to part, particularly in the coefficient of thermal expansion, are more significant than the actual values.
NICER Detection of Strong Photospheric Expansion during a Thermonuclear X-Ray Burst from 4U 1820–30
NASA Astrophysics Data System (ADS)
Keek, L.; Arzoumanian, Z.; Chakrabarty, D.; Chenevez, J.; Gendreau, K. C.; Guillot, S.; Güver, T.; Homan, J.; Jaisawal, G. K.; LaMarr, B.; Lamb, F. K.; Mahmoodifar, S.; Markwardt, C. B.; Okajima, T.; Strohmayer, T. E.; in ’t Zand, J. J. M.
2018-04-01
The Neutron Star Interior Composition Explorer (NICER) on the International Space Station (ISS) observed strong photospheric expansion of the neutron star in 4U 1820–30 during a Type I X-ray burst. A thermonuclear helium flash in the star’s envelope powered a burst that reached the Eddington limit. Radiation pressure pushed the photosphere out to ∼200 km, while the blackbody temperature dropped to 0.45 keV. Previous observations of similar bursts were performed with instruments that are sensitive only above 3 keV, and the burst signal was weak at low temperatures. NICER's 0.2–12 keV passband enables the first complete detailed observation of strong expansion bursts. The strong expansion lasted only 0.6 s, and was followed by moderate expansion with a 20 km apparent radius, before the photosphere finally settled back down at 3 s after the burst onset. In addition to thermal emission from the neutron star, the NICER spectra reveal a second component that is well fit by optically thick Comptonization. During the strong expansion, this component is six times brighter than prior to the burst, and it accounts for 71% of the flux. In the moderate expansion phase, the Comptonization flux drops, while the thermal component brightens, and the total flux remains constant at the Eddington limit. We speculate that the thermal emission is reprocessed in the accretion environment to form the Comptonization component, and that changes in the covering fraction of the star explain the evolution of the relative contributions to the total flux.
Two-parameter asymptotics in magnetic Weyl calculus
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lein, Max
2010-12-15
This paper is concerned with small parameter asymptotics of magnetic quantum systems. In addition to a semiclassical parameter ε, the case of small coupling λ to the magnetic vector potential naturally occurs in this context. Magnetic Weyl calculus is adapted to incorporate both parameters, at least one of which needs to be small. Of particular interest is the expansion of the Weyl product, which can be used to expand the product of operators in a small parameter, a technique prominently used to obtain perturbation expansions. Three asymptotic expansions for the magnetic Weyl product of two Hörmander class symbols are proven: (i) ε ≪ 1 and λ ≪ 1, (ii) ε ≪ 1 and λ = 1, as well as (iii) ε = 1 and λ ≪ 1. Expansions (i) and (iii) are impossible to obtain with ordinary Weyl calculus. Furthermore, I relate the results derived by ordinary Weyl calculus with those obtained with magnetic Weyl calculus by one- and two-parameter expansions. To show the power and versatility of magnetic Weyl calculus, I derive the semirelativistic Pauli equation as a scaling limit from the Dirac equation up to errors of fourth order in 1/c.
New Developments in Error Detection and Correction Strategies for Critical Applications
NASA Technical Reports Server (NTRS)
Berg, Melanie; Label, Ken
2017-01-01
The presentation will cover a variety of mitigation strategies that were developed for critical applications. An emphasis is placed on strengths and weaknesses per mitigation technique as it pertains to different Field programmable gate array (FPGA) device types.
Exchange-Correlation Effects for Noncovalent Interactions in Density Functional Theory.
Otero-de-la-Roza, A; DiLabio, Gino A; Johnson, Erin R
2016-07-12
In this article, we develop an understanding of how errors from exchange-correlation functionals affect the modeling of noncovalent interactions in dispersion-corrected density-functional theory. Computed CCSD(T) reference binding energies for a collection of small-molecule clusters are decomposed via a molecular many-body expansion and are used to benchmark density-functional approximations, including the effect of semilocal approximation, exact-exchange admixture, and range separation. Three sources of error are identified. Repulsion error arises from the choice of semilocal functional approximation. This error affects intermolecular repulsions and is present in all n-body exchange-repulsion energies with a sign that alternates with the order n of the interaction. Delocalization error is independent of the choice of semilocal functional but does depend on the exact exchange fraction. Delocalization error misrepresents the induction energies, leading to overbinding in all induction n-body terms, and underestimates the electrostatic contribution to the 2-body energies. Deformation error affects only monomer relaxation (deformation) energies and behaves similarly to bond-dissociation energy errors. Delocalization and deformation errors affect systems with significant intermolecular orbital interactions (e.g., hydrogen- and halogen-bonded systems), whereas repulsion error is ubiquitous. Many-body errors from the underlying exchange-correlation functional greatly exceed in general the magnitude of the many-body dispersion energy term. A functional built to accurately model noncovalent interactions must contain a dispersion correction, semilocal exchange, and correlation components that minimize the repulsion error independently and must also incorporate exact exchange in such a way that delocalization error is absent.
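A hedged sketch of the molecular many-body expansion used for the decomposition above, computed by inclusion-exclusion over subsystem energies (the energies here are placeholders supplied by the caller, not CCSD(T) values):

```python
from itertools import combinations

def many_body_terms(E, n_frag):
    """Given E[frozenset_of_fragments] = total energy of that cluster,
    return n-body interaction terms eps(S) = E(S) - sum of eps over
    all proper subsets of S (inclusion-exclusion)."""
    eps = {}
    for size in range(1, n_frag + 1):
        for S in combinations(range(n_frag), size):
            S = frozenset(S)
            sub = sum(eps[frozenset(T)]
                      for k in range(1, size)
                      for T in combinations(sorted(S), k))
            eps[S] = E[S] - sub
    return eps

# Example: 3 fragments, pairwise-additive energies plus a small 3-body term.
E = {frozenset([0]): -1.0, frozenset([1]): -1.0, frozenset([2]): -1.0,
     frozenset([0, 1]): -2.3, frozenset([0, 2]): -2.2, frozenset([1, 2]): -2.1,
     frozenset([0, 1, 2]): -3.65}
terms = many_body_terms(E, 3)
print(terms[frozenset([0, 1, 2])])   # 3-body contribution (-0.05 here)
```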
NASA Astrophysics Data System (ADS)
Krasnoshchekov, Sergey V.; Schutski, Roman S.; Craig, Norman C.; Sibaev, Marat; Crittenden, Deborah L.
2018-02-01
Three dihalogenated methane derivatives (CH2F2, CH2FCl, and CH2Cl2) were used as model systems to compare and assess the accuracy of two different approaches for predicting observed fundamental frequencies: canonical operator Van Vleck vibrational perturbation theory (CVPT) and vibrational configuration interaction (VCI). For convenience and consistency, both methods employ the Watson Hamiltonian in rectilinear normal coordinates, expanding the potential energy surface (PES) as a Taylor series about equilibrium and constructing the wavefunction from a harmonic oscillator product basis. At the highest levels of theory considered here (fourth-order CVPT, and VCI in a harmonic oscillator basis with up to 10 quanta of vibrational excitation, in conjunction with a 4-mode representation sextic force field (SFF-4MR) computed at MP2/cc-pVTZ with replacement CCSD(T)/aug-cc-pVQZ harmonic force constants), the agreement between computed fundamentals is within 0.3 cm-1 on average, with a maximum difference of 1.7 cm-1. The major remaining accuracy-limiting factors are the accuracy of the underlying electronic structure model, followed by the incompleteness of the PES expansion. Nonetheless, computed and experimental fundamentals agree to within 5 cm-1, with an average difference of 2 cm-1, confirming the utility and accuracy of both theoretical models. One exception to this rule is the formally IR-inactive but weakly allowed (through Coriolis coupling) H-C-H out-of-plane twisting mode of dichloromethane, whose spectrum we therefore revisit and reassign. We also investigate convergence with respect to order of CVPT, VCI excitation level, and order of PES expansion, concluding that premature truncation substantially decreases accuracy, although VCI(6)/SFF-4MR results are still of acceptable accuracy, and some error cancellation is observed with CVPT2 using a quartic force field.
Validation of the activity expansion method with ultrahigh pressure shock equations of state
NASA Astrophysics Data System (ADS)
Rogers, Forrest J.; Young, David A.
1997-11-01
Laser shock experiments have recently been used to measure the equation of state (EOS) of matter in the ultrahigh pressure region between condensed matter and a weakly coupled plasma. Some ultrahigh pressure data from nuclear-generated shocks are also available. Matter at these conditions has proven very difficult to treat theoretically. The many-body activity expansion method (ACTEX) has been used for some time to calculate EOS and opacity data in this region, for use in modeling inertial confinement fusion and stellar interior plasmas. In the present work, we carry out a detailed comparison with the available experimental data in order to validate the method. The agreement is good, showing that ACTEX adequately describes strongly shocked matter.
Error estimates of Lagrange interpolation and orthonormal expansions for Freud weights
NASA Astrophysics Data System (ADS)
Kwon, K. H.; Lee, D. W.
2001-08-01
Let S_n[f] be the nth partial sum of the orthonormal polynomial expansion with respect to a Freud weight. Then we obtain sufficient conditions for the boundedness of S_n[f] and discuss the speed of convergence of S_n[f] in weighted L^p space. We also find sufficient conditions for the boundedness of the Lagrange interpolation polynomial L_n[f], whose nodal points are the zeros of the orthonormal polynomials with respect to a Freud weight. In particular, if W(x) = e^(-x²/2) is the Hermite weight function, then we obtain sufficient conditions for the corresponding inequalities to hold, where k = 0, 1, 2, ..., r.
NASA Technical Reports Server (NTRS)
Cecil, R. W.; White, R. A.; Szczur, M. R.
1972-01-01
The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.
Fitting by Orthonormal Polynomials of Silver Nanoparticles Spectroscopic Data
NASA Astrophysics Data System (ADS)
Bogdanova, Nina; Koleva, Mihaela
2018-02-01
Our original Orthonormal Polynomial Expansion Method (OPEM), in its one-dimensional version, is applied for the first time to describe silver nanoparticle (NP) spectroscopic data. The weights for the approximation include the experimental errors in the variables. In this way we construct an orthonormal polynomial expansion approximating the curve on a non-equidistant point grid. The corridors of the given data and the fitting criteria define the optimal behavior of the curve sought. The most important subinterval of the spectral data, where the minimum (surface plasmon resonance absorption) is sought, is investigated. This study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming an AgNPs/ZnO nanocomposite heterostructure.
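A hedged sketch of a weighted orthogonal-polynomial fit of the kind OPEM performs, using numpy's Chebyshev basis as a stand-in (the actual OPEM basis and weighting scheme differ, and the data here are synthetic):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(3)
# Synthetic absorbance curve with a minimum, on a non-equidistant grid.
x = np.sort(rng.uniform(350, 550, 40))                  # wavelength, nm
y = 0.5 + 2e-5 * (x - 430) ** 2 + rng.normal(0, 0.01, x.size)
sigma = np.full_like(y, 0.01)                           # experimental errors

# Weighted least-squares fit in an orthogonal basis; weights ~ 1/sigma.
coef = C.chebfit(x, y, deg=6, w=1.0 / sigma)

# Locate the extremum of the fitted curve (the resonance minimum here).
xx = np.linspace(x[0], x[-1], 2001)
print("minimum near", xx[np.argmin(C.chebval(xx, coef))], "nm")
```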
Lattice cluster theory of associating polymers. I. Solutions of linear telechelic polymer chains.
Dudowicz, Jacek; Freed, Karl F
2012-02-14
The lattice cluster theory (LCT) for the thermodynamics of a wide array of polymer systems has been developed by using an analogy to Mayer's virial expansions for non-ideal gases. However, the high-temperature expansion inherent to the LCT has heretofore precluded its application to systems exhibiting strong, specific "sticky" interactions. The present paper describes a reformulation of the LCT necessary to treat systems with both weak and strong, "sticky" interactions. This initial study concerns solutions of linear telechelic chains (with stickers at the chain ends) as the self-assembling system. The main idea behind this extension of the LCT lies in the extraction of terms associated with the strong interactions from the cluster expansion. The generalized LCT for sticky systems reduces to the quasi-chemical theory of hydrogen bonding of Panyioutou and Sanchez when correlation corrections are neglected in the LCT. A diagrammatic representation is employed to facilitate the evaluation of the corrections to the zeroth-order approximation from short range correlations. © 2012 American Institute of Physics
An Ocean Tale of Two Climates: Modern and Last Glacial Maximum
NASA Astrophysics Data System (ADS)
Ferrari, R. M.
2014-12-01
In the present climate, the ocean below 2 km is mainly filled by waters sinking into the abyss around Antarctica and in the North Atlantic. Paleo proxies indicate that waters of North Atlantic origin were instead absent below 2 km at the Last Glacial Maximum (LGM), resulting in an expansion of the volume occupied by Antarctic origin waters. I will argue that this rearrangement of deep water masses is dynamically connected to the expansion of summer sea ice around Antarctica. A simple theory will be introduced to suggest that these deep waters only came to the surface under summer sea ice, which insulated them from atmospheric forcing, and were weakly mixed with overlying waters, thus being able to store carbon for long times. I will show that this unappreciated link between the expansion of sea ice and the appearance of a voluminous and insulated water mass appear to be crucial in explaining the ocean's role in regulating atmospheric carbon dioxide on glacial-interglacial timescales.
Design of fiber optic based respiratory sensor for newborn incubator application
NASA Astrophysics Data System (ADS)
Dhia, Arika; Devara, Kresna; Abuzairi, Tomy; Poespawati, N. R.; Purnamaningsih, Retno W.
2018-02-01
This paper reports the design of a respiratory sensor using optical fiber for newborn incubator applications. The sensor works based on differences in light intensity losses caused by thorax movement during respiration. The sensor output is fed to supporting electronic circuits and processed by an Arduino Uno microcontroller so that the real-time respiratory rate (breaths per minute) can be presented on an LCD. Experimental results using the thorax expansion of a newborn simulator show that the system is able to measure respiratory rates from 10 up to 130 breaths per minute with 0.595% error and 0.2% hysteresis error.
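A hedged sketch of the rate computation such a system performs downstream of the sensor, based on peak counting (thresholds and names are ours, not the paper's firmware):

```python
import numpy as np
from scipy.signal import find_peaks

def breaths_per_minute(intensity, fs):
    """Count respiration cycles in a fiber-intensity trace sampled at
    fs Hz and convert the count to breaths per minute."""
    peaks, _ = find_peaks(intensity, distance=int(0.4 * fs),
                          prominence=0.1 * np.ptp(intensity))
    duration_min = len(intensity) / fs / 60.0
    return len(peaks) / duration_min

# Simulated 30 s trace at 50 Hz: 0.5 Hz breathing (30 bpm) plus noise.
fs = 50
t = np.arange(0, 30, 1 / fs)
signal = (np.sin(2 * np.pi * 0.5 * t)
          + 0.05 * np.random.default_rng(4).normal(size=t.size))
print(breaths_per_minute(signal, fs))   # ~30
```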
Calculation of the Nucleon Axial Form Factor Using Staggered Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meyer, Aaron S.; Hill, Richard J.; Kronfeld, Andreas S.
The nucleon axial form factor is a dominant contribution to errors in neutrino oscillation studies. Lattice QCD calculations can help control theory errors by providing first-principles information on nucleon form factors. In these proceedings, we present preliminary results on a blinded calculation of g_A and the axial form factor using HISQ staggered baryons with 2+1+1 flavors of sea quarks. Calculations are done using physical light quark masses and are absolutely normalized. We discuss fitting form factor data with the model-independent z expansion parametrization.
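For context, the z expansion referred to maps the momentum transfer t onto a variable |z| < 1 and fits the form factor as a truncated power series; conventions for t_cut and t_0 vary between analyses:

```latex
% Conformal map and truncated expansion of the axial form factor:
z(t) = \frac{\sqrt{t_\mathrm{cut} - t} - \sqrt{t_\mathrm{cut} - t_0}}
            {\sqrt{t_\mathrm{cut} - t} + \sqrt{t_\mathrm{cut} - t_0}},
\qquad
F_A(t) = \sum_{k=0}^{k_\mathrm{max}} a_k\, z(t)^k
```

Because |z| stays small over the kinematic range, the series converges quickly and the fit does not impose a model-dependent shape such as a dipole.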
Vector space methods of photometric analysis - Applications to O stars and interstellar reddening
NASA Technical Reports Server (NTRS)
Massa, D.; Lillie, C. F.
1978-01-01
A multivariate vector-space formulation of photometry is developed which accounts for error propagation. An analysis of uvby and H-beta photometry of O stars is presented, with attention given to observational errors, reddening, general uvby photometry, early stars, and models of O stars. The number of observable parameters in O-star continua is investigated, the way these quantities compare with model-atmosphere predictions is considered, and an interstellar reddening law is derived. It is suggested that photospheric expansion affects the formation of the continuum in at least some O stars.
Major strengths and weaknesses of the lod score method.
Ott, J
2001-01-01
Strengths and weaknesses of the lod score method for human genetic linkage analysis are discussed. The main weakness is its requirement for the specification of a detailed inheritance model for the trait. Various strengths are identified. For example, the lod score (likelihood) method has optimality properties when the trait to be studied is known to follow a Mendelian mode of inheritance. The ELOD is a useful measure for information content of the data. The lod score method can emulate various "nonparametric" methods, and this emulation is equivalent to the nonparametric methods. Finally, the possibility of building errors into the analysis will prove to be essential for the large amount of linkage and disequilibrium data expected in the near future.
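As a reminder of the method's core quantity, the lod score compares the likelihood of the data at a candidate recombination fraction θ with that under free recombination:

```latex
% Lod score; theta = 1/2 corresponds to no linkage.
Z(\theta) = \log_{10}\frac{L(\theta)}{L(\theta = \tfrac{1}{2})}
```

A maximum lod of 3, corresponding to odds of 1000:1, is the conventional threshold for declaring linkage.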
NASA Astrophysics Data System (ADS)
Mishin, V. M.; Russell, C. T.; Saifudinova, T. I.; Bazarzhapov, A. D.
2000-10-01
We define an expansion onset (synonymous with the main breakup) to be one with sufficient signatures of open tail reconnection. Earlier onsets, which we term initial onsets, occur before the expansion onset, without the signatures of open tail reconnection but with other signs of a clear substorm onset. These two types of substorm onsets and their timing are discussed herein in a study of selected substorm-like events. During the 10-hour interval studied, five impulses of the Perreault-Akasofu index ɛ were observed with comparable peak values. However, the observed magnetospheric responses were very different in terms of equatorward motion and poleward expansion of the auroral oval. We conclude that the occurrence either of an initial onset or of a full onset (under similar boundary conditions) depends on the amount of stored free energy, proportional to the tail length, which is controlled by the input power. The earlier or initial onset marks a sudden change in the convection pattern in the nightside. This onset could mark the initiation of reconnection on closed field lines while the expansion onset could mark the initiation of reconnection on open field lines.
Experimental study of an adaptive CFRC reflector for high order wave-front error correction
NASA Astrophysics Data System (ADS)
Lan, Lan; Fang, Houfei; Wu, Ke; Jiang, Shuidong; Zhou, Yang
2018-03-01
Recent developments in radio frequency communication systems are generating the need for lightweight, high-precision space antennas. Carbon fiber reinforced composite (CFRC) materials have been used to manufacture high-precision reflectors. Wave-front errors caused by fabrication and on-orbit distortion are inevitable, and the adaptive CFRC reflector has received much attention as a means of correcting them. Due to the uneven stress distribution introduced by actuation forces and fabrication, high-order wave-front errors such as print-through error are found on the reflector surface. However, an adaptive CFRC reflector with PZT actuators has essentially no control authority over these high-order wave-front errors. A new design architecture with secondary ribs assembled at the weak triangular surfaces is presented in this paper. A virtual experimental study of the new adaptive CFRC reflector has been conducted, and the controllability of the original adaptive CFRC reflector and of the new design with secondary ribs is investigated. The virtual experimental investigation shows that the new adaptive CFRC reflector is feasible and effective in diminishing high-order wave-front errors.
Investigating the Impact of Uncertainty about Item Parameters on Ability Estimation
ERIC Educational Resources Information Center
Zhang, Jinming; Xie, Minge; Song, Xiaolan; Lu, Ting
2011-01-01
Asymptotic expansions of the maximum likelihood estimator (MLE) and weighted likelihood estimator (WLE) of an examinee's ability are derived while item parameter estimators are treated as covariates measured with error. The asymptotic formulae present the amount of bias of the ability estimators due to the uncertainty of item parameter estimators.…
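To make the flavor of this result concrete, the following minimal Python sketch simulates a two-parameter logistic (2PL) ability MLE with exact versus noisy item parameters. All names, parameter ranges, and the noise level are illustrative assumptions, not the authors' asymptotic derivation.

import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
n_items, true_theta = 40, 1.0
a = rng.uniform(0.8, 2.0, n_items)      # true item discriminations
b = rng.normal(0.0, 1.0, n_items)       # true item difficulties

def p(theta, a, b):                     # 2PL response probability
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def mle(x, a, b):                       # ability MLE given item parameters
    nll = lambda t: -np.sum(x * np.log(p(t, a, b)) + (1 - x) * np.log(1.0 - p(t, a, b)))
    return minimize_scalar(nll, bounds=(-4, 4), method="bounded").x

est_exact, est_noisy, se = [], [], 0.15  # se = assumed item-parameter error
for _ in range(500):
    x = rng.binomial(1, p(true_theta, a, b))
    est_exact.append(mle(x, a, b))
    est_noisy.append(mle(x, a + rng.normal(0, se, n_items),
                             b + rng.normal(0, se, n_items)))

print("bias with exact item parameters:", np.mean(est_exact) - true_theta)
print("bias with noisy item parameters:", np.mean(est_noisy) - true_theta)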
Rapid Non-Gaussian Uncertainty Quantification of Seismic Velocity Models and Images
NASA Astrophysics Data System (ADS)
Ely, G.; Malcolm, A. E.; Poliannikov, O. V.
2017-12-01
Conventional seismic imaging typically provides a single estimate of the subsurface without any error bounds. Noise in the observed raw traces, as well as uncertainty in the velocity model, directly impacts the uncertainty of the final seismic image and its resulting interpretation. We present a Bayesian inference framework to quantify uncertainty in both the velocity model and seismic images, given noise statistics of the observed data. To estimate velocity model uncertainty, we combine the field expansion method, a fast frequency-domain wave equation solver, with the adaptive Metropolis-Hastings algorithm. The speed of the field expansion method and its reduced parameterization allow us to perform the tens or hundreds of thousands of forward solves needed for non-parametric posterior estimation. We then migrate the observed data with the distribution of velocity models to generate uncertainty estimates of the resulting subsurface image. This procedure allows us both to create qualitative descriptions of seismic image uncertainty and to put error bounds on quantities of interest, such as the dip angle of a subduction slab or the thickness of a stratigraphic layer.
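As an illustration of the sampling step described above, here is a schematic random-walk Metropolis-Hastings sampler in Python. The toy forward model F, the noise level, and the fixed step size are placeholder assumptions standing in for the field expansion solver and the adaptive proposal tuning used in the study.

import numpy as np

rng = np.random.default_rng(1)

def F(m):                                   # toy forward model (placeholder)
    return np.array([m[0] + m[1], m[0] * m[1]])

sigma = 0.05                                # assumed data noise level
d_obs = F(np.array([2.0, 0.5])) + rng.normal(0, sigma, 2)

def log_post(m):                            # Gaussian likelihood, flat prior
    r = F(m) - d_obs
    return -0.5 * np.sum((r / sigma) ** 2)

m = np.array([1.0, 1.0])
lp = log_post(m)
samples = []
for _ in range(20000):
    m_prop = m + rng.normal(0, 0.05, 2)     # random-walk proposal
    lp_prop = log_post(m_prop)
    if np.log(rng.uniform()) < lp_prop - lp:  # Metropolis accept/reject
        m, lp = m_prop, lp_prop
    samples.append(m.copy())

samples = np.array(samples[5000:])          # discard burn-in
print("posterior mean:", samples.mean(axis=0))
print("posterior std :", samples.std(axis=0))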
Linear shoaling of free-surface waves in multi-layer non-hydrostatic models
NASA Astrophysics Data System (ADS)
Bai, Yefei; Cheung, Kwok Fai
2018-01-01
The capability to describe shoaling over a sloping bottom is fundamental to modeling of coastal wave transformation. The linear shoaling gradient provides a metric to measure this property in non-hydrostatic models with layer-integrated formulations. The governing equations in Boussinesq form facilitate derivation of the linear shoaling gradient, which takes the form of a [2P+2, 2P] expansion in the water depth parameter kd, with P equal to 1 for a one-layer model and (4N − 4) for an N-layer model. The expansion reproduces the analytical solution from Airy wave theory at the shallow water limit and maintains a reasonable approximation up to kd = 1.2 and 2 for the one- and two-layer models, respectively. Additional layers provide rapid and monotonic convergence of the shoaling gradient into deep water. Numerical experiments of wave propagation over a plane slope illustrate how shoaling errors manifest through the transformation processes from deep to shallow water. Even outside the zone of active wave transformation, shoaling errors accumulated from deep to intermediate water produce an appreciable impact on the wave amplitude in shallow water.
Consistent lattice Boltzmann methods for incompressible axisymmetric flows
NASA Astrophysics Data System (ADS)
Zhang, Liangqi; Yang, Shiliang; Zeng, Zhong; Yin, Linmao; Zhao, Ya; Chew, Jia Wei
2016-08-01
In this work, consistent lattice Boltzmann (LB) methods for incompressible axisymmetric flows are developed based on two efficient axisymmetric LB models available in the literature. In accord with their respective original models, the proposed axisymmetric models evolve within the framework of the standard LB method and the source terms contain no gradient calculations. Moreover, the incompressibility conditions are realized with the Hermite expansion, thus the compressibility errors arising in the existing models are expected to be reduced by the proposed incompressible models. In addition, an extra relaxation parameter is added to the Bhatnagar-Gross-Krook collision operator to suppress the effect of the ghost variable and thus the numerical stability of the present models is significantly improved. Theoretical analyses, based on the Chapman-Enskog expansion and the equivalent moment system, are performed to derive the macroscopic equations from the LB models and the resulting truncation terms (i.e., the compressibility errors) are investigated. In addition, numerical validations are carried out based on four well-acknowledged benchmark tests and the accuracy and applicability of the proposed incompressible axisymmetric LB models are verified.
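For orientation, the sketch below shows a generic D2Q9 BGK collision-and-streaming step, the standard LB framework within which the proposed models evolve. The axisymmetric source terms, the Hermite-based incompressible equilibrium, and the paper's extra ghost-mode relaxation parameter are deliberately omitted; grid size, relaxation time, and the initial shear wave are illustrative assumptions.

import numpy as np

w = np.array([4/9] + [1/9]*4 + [1/36]*4)                # D2Q9 weights
c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],
              [1,1],[-1,1],[-1,-1],[1,-1]])             # lattice velocities
tau, nx, ny = 0.8, 64, 64                               # relaxation time, grid

def equilibrium(rho, u):                                # standard 2nd-order feq
    cu = np.einsum('qa,axy->qxy', c, u)
    uu = np.sum(u * u, axis=0)
    return rho * w[:, None, None] * (1 + 3*cu + 4.5*cu**2 - 1.5*uu)

u0 = np.zeros((2, nx, ny))
u0[0] = 0.05 * np.sin(2 * np.pi * np.arange(ny) / ny)   # decaying shear wave
f = equilibrium(np.ones((nx, ny)), u0)

for step in range(100):
    rho = f.sum(axis=0)                                 # macroscopic moments
    u = np.einsum('qa,qxy->axy', c, f) / rho
    f += -(f - equilibrium(rho, u)) / tau               # BGK collision
    for q in range(9):                                  # periodic streaming
        f[q] = np.roll(f[q], shift=tuple(c[q]), axis=(0, 1))

print("peak shear velocity after 100 steps:", np.abs(u[0]).max())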
NASA Technical Reports Server (NTRS)
Davis, John H.
1993-01-01
Lunar spherical harmonic gravity coefficients are estimated from simulated observations of a near-circular low altitude polar orbiter disturbed by lunar mascons. Lunar gravity sensing missions using earth-based nearside observations with and without satellite-based far-side observations are simulated and least squares maximum likelihood estimates are developed for spherical harmonic expansion fit models. Simulations and parameter estimations are performed by a modified version of the Smithsonian Astrophysical Observatory's Planetary Ephemeris Program. Two different lunar spacecraft mission phases are simulated to evaluate the estimated fit models. Results for predicting state covariances one orbit ahead are presented along with the state errors resulting from the mismodeled gravity field. The position errors from planning a lunar landing maneuver with a mismodeled gravity field are also presented. These simulations clearly demonstrate the need to include observations of satellite motion over the far side in estimating the lunar gravity field. The simulations also illustrate that the eighth degree and order expansions used in the simulated fits were unable to adequately model lunar mascons.
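The estimation step can be caricatured as a linear least-squares fit. In the sketch below, a generic cosine basis stands in for the partial derivatives of satellite motion with respect to the spherical harmonic coefficients, so everything except the least-squares machinery is an illustrative assumption.

import numpy as np

rng = np.random.default_rng(2)
n_obs, n_coef = 400, 15
t = np.linspace(0, 2 * np.pi, n_obs)
A = np.column_stack([np.cos(k * t) for k in range(1, n_coef + 1)])  # stand-in partials
c_true = rng.normal(0, 1, n_coef) / np.arange(1, n_coef + 1)        # decaying spectrum
y = A @ c_true + rng.normal(0, 0.05, n_obs)                         # noisy observations

c_hat, *_ = np.linalg.lstsq(A, y, rcond=None)       # least-squares estimate
cov = 0.05**2 * np.linalg.inv(A.T @ A)              # formal covariance of the fit
print("max coefficient error :", np.abs(c_hat - c_true).max())
print("formal 1-sigma bounds :", np.sqrt(np.diag(cov))[:3])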
Self-calibration method without joint iteration for distributed small satellite SAR systems
NASA Astrophysics Data System (ADS)
Xu, Qing; Liao, Guisheng; Liu, Aifei; Zhang, Juan
2013-12-01
The performance of distributed small satellite synthetic aperture radar systems degrades significantly due to unavoidable array errors, including gain, phase, and position errors, in real operating scenarios. In the conventional method proposed in (IEEE T Aero. Elec. Sys. 42:436-451, 2006), the spectrum components within one Doppler bin are considered as calibration sources. However, it is found in this article that the gain error estimation and the position error estimation in the conventional method can interact with each other. The conventional method may converge to suboptimal solutions under large position errors, since it requires joint iteration between gain-phase error estimation and position error estimation. In addition, it is also found that phase errors can be estimated well regardless of position errors when the zero Doppler bin is chosen. In this article, we propose a method obtained by modifying the conventional one, based on these two observations. In the modified method, gain errors are first estimated and compensated, which eliminates the interaction between gain error estimation and position error estimation. Then, using the zero Doppler bin data, phase error estimation can be performed independently of position errors. Finally, position errors are estimated based on a Taylor-series expansion. Joint iteration between gain-phase error estimation and position error estimation is not required, so the problem of suboptimal convergence that occurs in the conventional method is avoided at low computational cost. The modified method has the merits of faster convergence and lower estimation error compared to the conventional one. Theoretical analysis and computer simulation results verify the effectiveness of the modified method.
Eglinton, Elizabeth; Annett, Marian
2008-06-01
Poor spellers in normal schools, who were not poor readers, were studied for handedness, visuospatial and other cognitive abilities in order to explore contrasts between poor spellers with and without good phonology. It was predicted by the right shift (RS) theory of handedness and cerebral dominance that those with good phonology would have strong bias to dextrality and relative weakness of the right hemisphere, while those without good phonology would have reduced bias to dextrality and relative weakness of the left hemisphere. Poor spellers with good phonetic equivalent spelling errors (GFEs) included fewer left-handers (2.4%) than poor spellers without GFEs (24.4%). Differences for hand skill were as predicted. Tests of visuospatial processing found no differences between the groups in levels of ability, but there was a marked difference in pattern of correlations between visuospatial test scores and homophonic word discrimination. Whereas good spellers (GS) and poor spellers without GFEs showed positive correlations between word discrimination and visuospatial ability, there were no significant correlations for poor spellers with GFEs. The differences for handedness and possibly for the utilisation of visuospatial skills suggest that surface dyslexics differ from phonological dyslexics in cerebral specialisation and perhaps in the quality of inter-hemispheric relations.
Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.
Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J
2012-01-01
The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on the mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors, and 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block.
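A minimal sketch of how such setup statistics feed a CTV-to-PTV margin, assuming toy shift data: the systematic error Σ is the standard deviation of per-patient mean shifts, and the random error σ is the RMS of per-patient standard deviations. The 2.5Σ + 0.7σ recipe at the end is the widely quoted van Herk margin formula, used here as a common convention rather than as the margin recipe adopted in this particular study.

import numpy as np

# shifts[i, j] = measured shift (mm) of patient i at fraction j (toy data)
rng = np.random.default_rng(3)
shifts = rng.normal(0, 1.0, (10, 15)) + rng.normal(0, 1.2, (10, 1))

patient_means = shifts.mean(axis=1)
Sigma = patient_means.std(ddof=1)                             # systematic error
sigma_r = np.sqrt((shifts.std(axis=1, ddof=1) ** 2).mean())   # random error (RMS)
M = 2.5 * Sigma + 0.7 * sigma_r                               # van Herk margin
print(f"Sigma = {Sigma:.2f} mm, sigma = {sigma_r:.2f} mm, margin = {M:.2f} mm")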
NASA Astrophysics Data System (ADS)
Mananga, Eugene S.; Reid, Alicia E.
2013-01-01
This paper presents a study of finite pulse widths for the BABA pulse sequence using the Floquet-Magnus expansion (FME) approach. In the FME scheme, the first-order average Hamiltonian is identical to its counterparts in average Hamiltonian theory (AHT) and Floquet theory (FT). However, the timing part in the FME approach is introduced via the Λ(t) function, which is not present in the other schemes. This function provides an easy way to evaluate the spin evolution during 'the time in between' through the Magnus expansion of the operator connected to the timing part of the evolution. The evaluation of Λ(t) is particularly useful for the analysis of non-stroboscopic evolution. Here, the importance of the boundary conditions, which provide a natural choice of Λ(0), is ignored. This work uses the Λ(t) function to compare the efficiency of the BABA pulse sequence with ideal (δ) pulses and the BABA pulse sequence with finite pulses. Calculations of the first-order average Hamiltonian and of Λ(t) are presented.
Steady-state phase error for a phase-locked loop subjected to periodic Doppler inputs
NASA Technical Reports Server (NTRS)
Chen, C.-C.; Win, M. Z.
1991-01-01
The performance of a carrier phase-locked loop (PLL) driven by a periodic Doppler input is studied. By expanding the Doppler input into a Fourier series and applying the linearized PLL approximations, it is easy to show that, for periodic frequency disturbances, the resulting steady-state phase error is also periodic. Compared to the method of expanding the frequency excursion into a power series, the Fourier expansion method can be used to predict the maximum phase error excursion for a periodic Doppler input. For systems with a large Doppler rate fluctuation, such as an optical transponder aboard an Earth orbiting spacecraft, the method can be applied to test whether a lower order tracking loop can provide satisfactory tracking and thereby save the effort of a higher order loop design.
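The Fourier-expansion estimate can be sketched in a few lines: for a linearized second-order loop, each Doppler harmonic at frequency ω_k is attenuated by the error transfer function E(s) = s²/(s² + 2ζω_n s + ω_n²), and summing the filtered harmonic amplitudes bounds the steady-state phase error. The loop parameters and Doppler phase coefficients below are illustrative assumptions.

import numpy as np

zeta, wn = 0.707, 2 * np.pi * 50.0      # loop damping, natural frequency (rad/s)
T = 600.0                                # assumed Doppler (orbital) period, s
w0 = 2 * np.pi / T

def E(jw):                               # error transfer function E(jw)
    return (jw**2) / (jw**2 + 2 * zeta * wn * jw + wn**2)

# assumed Fourier coefficients of the input phase excursion (rad), harmonics 1-5
theta_k = np.array([2.0e3, 5.0e2, 1.0e2, 3.0e1, 1.0e1])
err_k = [abs(E(1j * (k + 1) * w0)) * th for k, th in enumerate(theta_k)]
print("worst-case steady-state phase error (rad):", sum(err_k))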
Autonomous Quantum Error Correction with Application to Quantum Metrology
NASA Astrophysics Data System (ADS)
Reiter, Florentin; Sorensen, Anders S.; Zoller, Peter; Muschik, Christine A.
2017-04-01
We present a quantum error correction scheme that stabilizes a qubit by coupling it to an engineered environment which protects it against spin flips or phase flips. Our scheme uses always-on couplings that run continuously in time and operates in a fully autonomous fashion, without the need to perform measurements or feedback operations on the system. The correction of errors takes place entirely at the microscopic level through a built-in feedback mechanism. Our dissipative error correction scheme can be implemented in a system of trapped ions and can be used for improving high-precision sensing. We show that the enhanced coherence time that results from the coupling to the engineered environment translates into a significantly enhanced precision for measuring weak fields. In a broader context, this work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.
Geometry and growth contributions to cosmic shear observables
Matilla, Jose Manuel Zorrilla; Haiman, Zoltan; Petri, Andrea; ...
2017-07-13
We explore the sensitivity of weak-lensing observables to the expansion history of the Universe and to the growth of cosmic structures, as well as the relative contribution of both effects to constraining cosmological parameters. We utilize ray-tracing dark-matter-only N-body simulations and validate our technique by comparing our results for the convergence power spectrum with analytic results from past studies. We then extend our analysis to non-Gaussian observables which cannot be easily treated analytically. We study the convergence (equilateral) bispectrum and two topological observables, lensing peaks and Minkowski functionals, focusing on their sensitivity to the matter density Ω_m and the dark energy equation of state w. We find that a cancellation between the geometry and growth effects is a common feature for all observables and exists at the map level. It weakens the overall sensitivity by factors of up to 3 and 1.5 for w and Ω_m, respectively, with the bispectrum worst affected. However, combining geometry and growth information alleviates the degeneracy between Ω_m and w from either effect alone. As a result, the magnitudes of marginalized errors remain similar to those obtained from growth-only effects, but with the correlation between the two parameters switching sign. Furthermore, these results shed light on the origin of the cosmology sensitivity of non-Gaussian statistics and should be useful in optimizing combinations of observables.
`Un-Darkening' the Cosmos: New laws of physics for an expanding universe
NASA Astrophysics Data System (ADS)
George, William
2017-11-01
Dark matter is believed to exist because Newton's laws are inconsistent with the visible matter in galaxies, and dark energy is invoked to explain the expansion of the universe. Earlier work by the author (also available from www.turbulence-online.com) suggested that the equations themselves might be in error because they implicitly assume that time is measured in linear increments. This presentation couples the possible non-linearity of time with an expanding universe. Maxwell's equations for an expanding universe with constant speed of light are shown to be invariant only if time itself is non-linear. Both linear and exponential expansion rates are considered. A linearly expanding universe corresponds to logarithmic time, while exponential expansion corresponds to exponentially varying time. Revised Newton's laws using either lead to different definitions of mass and kinetic energy, both of which appear time-dependent if expressed in linear time. They thereby offer the possibility of explaining the astronomical observations without either dark matter or dark energy. We would never have noticed the differences on Earth, since the leading term in both expansions is linear in δt/t_o, where t_o is the current age.
Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.
Götz, Andreas W; Kollmar, Christian; Hess, Bernd A
2005-09-01
We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li–F, and Na–Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods.
NASA Astrophysics Data System (ADS)
Kompan, T. A.; Korenev, A. S.; Lukin, A. Ya.
2008-10-01
The artificial material sitall CO-115M was developed specifically as a material with extra-low thermal expansion. The controlled crystallization of an aluminosilicate glass melt leads to the formation of a mixture of β-spodumene, β-eucryptite, and β-silica anisotropic microcrystals in a matrix of residual glass. Due to the small size of the microcrystals, the material is homogeneous and transparent. The specific lattice anharmonicity of these microcrystals results in a close-to-zero average thermal linear expansion coefficient (TLEC) for the sitall material. The thermal expansion coefficient of this material was measured using an interferometric method in line with the classical approach of Fizeau. To obtain the highest accuracy, the light intensity of the total interference field was registered, and the parameters of the interference pattern were then calculated. Due to the large amount of information in the interference pattern, the error of the calculated fringe position was less than the size of a pixel of the optical registration system. The thermal expansion coefficient of the sitall CO-115M and its temperature dependence were measured. A TLEC value of about 3 × 10⁻⁸ K⁻¹ to 5 × 10⁻⁸ K⁻¹ was obtained in the temperature interval from −20 °C to +60 °C. A special investigation was carried out to show the homogeneity of the material.
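A worked arithmetic sketch of Fizeau-type dilatometry, with made-up but plausible numbers chosen to land in the reported 10⁻⁸ K⁻¹ range: a fringe shift ΔN at wavelength λ corresponds to a length change ΔL = ΔN·λ/2, and the TLEC follows as α = ΔL/(L·ΔT).

# All values below are illustrative assumptions, not the paper's measured data.
lam = 633e-9          # He-Ne laser wavelength, m
L = 0.05              # sample length, m
delta_T = 80.0        # temperature interval, K (-20 C to +60 C)
delta_N = 0.5         # measured fringe shift (sub-fringe resolution)

delta_L = delta_N * lam / 2.0        # optical path change -> length change
alpha = delta_L / (L * delta_T)      # thermal linear expansion coefficient
print(f"alpha = {alpha:.2e} 1/K")    # ~4e-8 1/K, within the reported range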
Exploratory Lattice QCD Study of the Rare Kaon Decay K^{+}→π^{+}νν[over ¯].
Bai, Ziyuan; Christ, Norman H; Feng, Xu; Lawson, Andrew; Portelli, Antonin; Sachrajda, Christopher T
2017-06-23
We report a first, complete lattice QCD calculation of the long-distance contribution to the K^{+}→π^{+}νν[over ¯] decay within the standard model. This is a second-order weak process involving two four-Fermi operators that is highly sensitive to new physics and being studied by the NA62 experiment at CERN. While much of this decay comes from perturbative, short-distance physics, there is a long-distance part, perhaps as large as the planned experimental error, which involves nonperturbative phenomena. The calculation presented here, with unphysical quark masses, demonstrates that this contribution can be computed using lattice methods by overcoming three technical difficulties: (i) a short-distance divergence that results when the two weak operators approach each other, (ii) exponentially growing, unphysical terms that appear in Euclidean, second-order perturbation theory, and (iii) potentially large finite-volume effects. A follow-on calculation with physical quark masses and controlled systematic errors will be possible with the next generation of computers.
Spatial range of illusory effects in Müller-Lyer figures.
Predebon, J
2001-11-01
The spatial range of the illusory effects in Müller-Lyer (M-L) figures was examined in three experiments. Experiments 1 and 2 assessed the pattern of bisection errors along the shaft of the standard or double-angle (experiment 1) and the single-angle (experiment 2) M-L figures: Subjects bisected the shaft and the resulting two half-segments of the shaft to produce apparently equal quarters, and then each of the quarters to produce eight equal-appearing segments. The bisection judgments of each segment were referenced to the segment's physical midpoints. The expansion or wings-out and the contraction or wings-in figures yielded similar patterns of bisection errors. For the standard M-L figures, there were significant errors in bisecting each half, and each end-quarter, but not the two central quarters of the shaft. For the single-angle M-L figures, there were significant errors in bisecting the length of the shaft, the half-segment, and the quarter, of the shaft adjacent to the vertex but not the second quarter from the vertex nor in dividing the half of the shaft at the open end of the figure into four equal intervals. Experiment 3 assessed the apparent length of the half-segment of the shaft at the open end of the single-angle figures. Length judgments were unaffected by the vertex at the opposite end of the shaft. Taken together, the results indicate that the length distortions in both the standard and single-angle M-L figures are not uniformly distributed along the shaft but rather are confined mainly to the quarters adjacent to the vertices. The present findings imply that theories of the M-L illusion which assume uniform expansion or contraction of the shafts are incomplete.
Optimal configurations of spatial scale for grid cell firing under noise and uncertainty
Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil
2014-01-01
We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144
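The decoding experiment can be miniaturized as follows: simulate Poisson spike counts from a few modules with periodic one-dimensional tuning curves, then decode position by maximum likelihood over candidate locations. Module scales, cell counts, firing rates, and the 1-D setting are all illustrative assumptions, far smaller than the simulations in the paper.

import numpy as np

rng = np.random.default_rng(4)
scales = [30.0, 42.0, 59.0]                  # module scales (cm)
n_cells, rate, dt = 20, 10.0, 0.5            # cells/module, peak Hz, window s
xs = np.linspace(0, 200, 2001)               # candidate positions (cm)

def rates(x, scale, phases):                 # periodic tuning for one module
    return rate * 0.5 * (1 + np.cos(2 * np.pi * (x - phases[:, None]) / scale))

phases = [rng.uniform(0, s, n_cells) for s in scales]
x_true = 123.4
counts = [rng.poisson(rates(np.array([x_true]), s, p)[:, 0] * dt)
          for s, p in zip(scales, phases)]

logL = np.zeros_like(xs)
for s, p, k in zip(scales, phases, counts):  # sum Poisson log-likelihoods
    lam = rates(xs, s, p) * dt + 1e-12
    logL += (k[:, None] * np.log(lam) - lam).sum(axis=0)

print("decoded x =", xs[np.argmax(logL)], "true x =", x_true)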
Calculation of neutral weak nucleon form factors with the AdS/QCD correspondence
NASA Astrophysics Data System (ADS)
Lohmann, Mark
AdS/QCD (anti-de Sitter/quantum chromodynamics) is a mathematical formalism applied to theories based on the original AdS/CFT (anti-de Sitter/conformal field theory) correspondence. The aim is to describe properties of the strong force in an essentially non-perturbative way. AdS/QCD theories break the conformal symmetry of the AdS metric (a sacrifice) to arrive at a boundary theory which is QCD-like (a payoff). This correspondence has been used to calculate well-known quantities in nucleon spectra and structure, such as Regge trajectories and form factors, within an error of less than 20% from experiment. This is impressive considering that ordinary perturbation theory in QCD applied to the strongly interacting domain usually obtains an error of about 30%. In this thesis, the AdS/QCD correspondence method of light-front holography established by Brodsky and de Teramond is used in an attempt to calculate the Dirac and Pauli neutral weak form factors, F_1^Z(Q²) and F_2^Z(Q²) respectively, for both the proton and the neutron. With this approach, we were able to determine the neutral weak Dirac form factor for both nucleons and the Pauli form factor for the proton, while the method did not succeed at determining the neutral weak Pauli form factor for the neutron. With these we were also able to extract the proton's strange electric and magnetic form factors, which addresses important questions in nucleon sub-structure that are currently being investigated through experiments at the Thomas Jefferson National Accelerator Facility.
Thermal error analysis and compensation for digital image/volume correlation
NASA Astrophysics Data System (ADS)
Pan, Bing
2018-02-01
Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.
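The recommended reference-sample compensation can be sketched as follows: fit a drift model to the displacements measured on a nominally stationary reference sample, then subtract that drift from the test-sample field. The linear (shift plus uniform apparent strain) drift model and all data below are illustrative assumptions, not the paper's experimental results.

import numpy as np

rng = np.random.default_rng(5)
x_ref = np.linspace(-10, 10, 50)                  # reference points (mm)
drift = 0.002 + 1.5e-4 * x_ref                    # assumed shift + apparent strain
u_ref = drift + rng.normal(0, 1e-5, x_ref.size)   # measured reference "motion"

A = np.column_stack([np.ones_like(x_ref), x_ref])
(shift, strain), *_ = np.linalg.lstsq(A, u_ref, rcond=None)  # fit drift model

x_s = np.linspace(-10, 10, 200)                   # test-sample points
u_meas = 0.01 * np.sin(x_s / 3) + shift + strain * x_s   # true field + drift
u_corr = u_meas - (shift + strain * x_s)          # drift-compensated field
print(f"estimated drift: shift = {shift:.4f} mm, apparent strain = {strain:.2e}")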
NASA Astrophysics Data System (ADS)
Bacha, Tulu
The Goddard Lidar Observatory for Wind (GLOW), a mobile direct-detection Doppler lidar based on molecular backscattering for measurement of wind in the troposphere and lower stratosphere, was operated and its errors characterized. It was operated at the Howard University Beltsville Center for Climate Observation System (BCCOS) side by side with other operating instruments: the NASA/Langley Research Center Validation Lidar (VALIDAR), the Leosphere WLS70, and other standard wind sensing instruments. The performance of GLOW is presented for various optical thicknesses of cloud, and it was compared to VALIDAR under various conditions, including clear and cloudy sky regions. The performance degradation due to the presence of cirrus clouds is quantified by comparing the wind speed error to cloud thickness. The cloud thickness is quantified in terms of the aerosol backscatter ratio (ASR) and cloud optical depth (COD), both determined from the Howard University Raman Lidar (HURL) operating at the same station as GLOW. The wind speed error of GLOW was correlated with COD and ASR, yielding a weak linear relationship. Finally, the wind speed measurements of GLOW were corrected using this quantitative relation. Using ASR reduced the GLOW wind error from 19% to 8% in a thin cirrus cloud and from 58% to 28% in a relatively thick cloud. After correcting for cloud-induced error, the remaining error is due to shot noise and atmospheric variability. Shot-noise error, the statistical random error of backscattered photons detected by the photomultiplier tube (PMT), can only be minimized by averaging a large number of data records. The atmospheric backscatter measured by GLOW along its line-of-sight direction is also used to analyze the error due to atmospheric variability within the volume of measurement. GLOW scans in five different directions (vertical and at elevation angles of 45° to the north, south, east, and west) to generate wind profiles. The non-uniformity of the atmosphere across these scanning directions is a factor contributing to the measurement error of GLOW, since atmospheric variability in the scanning region leads to differences in the intensity of backscattered signals among the directions. Taking the ratio of the north (east) to south (west) signals and comparing the statistical differences leads to a weak linear relation between atmospheric variability and line-of-sight wind speed differences. This relation was used to make a correction that reduced the error due to atmospheric variability by about 50%.
Avoiding Misdiagnosis in Patients with Neurological Emergencies
Pope, Jennifer V.; Edlow, Jonathan A.
2012-01-01
Approximately 5% of patients presenting to emergency departments have neurological symptoms. The most common symptoms or diagnoses include headache, dizziness, back pain, weakness, and seizure disorder. Little is known about the actual misdiagnosis of these patients, which can have disastrous consequences for both the patients and the physicians. This paper reviews the existing literature about the misdiagnosis of neurological emergencies and analyzes the reason behind the misdiagnosis by specific presenting complaint. Our goal is to help emergency physicians and other providers reduce diagnostic error, understand how these errors are made, and improve patient care. PMID:22888439
Multiple maternal origins and weak phylogeographic structure in domestic goats
Luikart, Gordon; Gielly, Ludovic; Excoffier, Laurent; Vigne, Jean-Denis; Bouvet, Jean; Taberlet, Pierre
2001-01-01
Domestic animals have played a key role in human history. Despite their importance, however, the origins of most domestic species remain poorly understood. We assessed the phylogenetic history and population structure of domestic goats by sequencing a hypervariable segment (481 bp) of the mtDNA control region from 406 goats representing 88 breeds distributed across the Old World. Phylogeographic analysis revealed three highly divergent goat lineages (estimated divergence >200,000 years ago), with one lineage occurring only in eastern and southern Asia. A remarkably similar pattern exists in cattle, sheep, and pigs. These results, combined with recent archaeological findings, suggest that goats and other farm animals have multiple maternal origins with a possible center of origin in Asia, as well as in the Fertile Crescent. The pattern of goat mtDNA diversity suggests that all three lineages have undergone population expansions, but that the expansion was relatively recent for two of the lineages (including the Asian lineage). Goat populations are surprisingly less genetically structured than cattle populations. In goats only ≈10% of the mtDNA variation is partitioned among continents. In cattle the amount is ≥50%. This weak structuring suggests extensive intercontinental transportation of goats and has intriguing implications about the importance of goats in historical human migrations and commerce. PMID:11344314
Petruzielo, F R; Toulouse, Julien; Umrigar, C J
2011-02-14
A simple yet general method for constructing basis sets for molecular electronic structure calculations is presented. These basis sets consist of atomic natural orbitals from a multiconfigurational self-consistent field calculation supplemented with primitive functions, chosen such that the asymptotics are appropriate for the potential of the system. Primitives are optimized for the homonuclear diatomic molecule to produce a balanced basis set. Two general features that facilitate this basis construction are demonstrated. First, weak coupling exists between the optimal exponents of primitives with different angular momenta. Second, the optimal primitive exponents for a chosen system depend weakly on the particular level of theory employed for optimization. The explicit case considered here is a basis set appropriate for the Burkatzki-Filippi-Dolg pseudopotentials. Since these pseudopotentials are finite at nuclei and have a Coulomb tail, the recently proposed Gauss-Slater functions are the appropriate primitives. Double- and triple-zeta bases are developed for elements hydrogen through argon. These new bases offer significant gains over the corresponding Burkatzki-Filippi-Dolg bases at various levels of theory. Using a Gaussian expansion of the basis functions, these bases can be employed in any electronic structure method. Quantum Monte Carlo provides an added benefit: expansions are unnecessary since the integrals are evaluated numerically.
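The closing remark about Gaussian expansions can be illustrated with an STO-nG-style least-squares fit of a Slater 1s function on a radial grid. The fixed exponents below are assumptions chosen for the sketch; a production basis would optimize the exponents as well as the coefficients.

import numpy as np

r = np.linspace(1e-3, 8, 2000)                     # radial grid (a.u.)
slater = np.exp(-r)                                # target Slater 1s function
exponents = np.array([0.11, 0.41, 2.23, 13.0])     # assumed Gaussian exponents
G = np.exp(-np.outer(exponents, r**2)).T           # Gaussian primitives (grid x 4)

coef, *_ = np.linalg.lstsq(G, slater, rcond=None)  # least-squares contraction
rms = np.sqrt(np.mean((G @ coef - slater) ** 2))
print("expansion coefficients:", np.round(coef, 4))
print("RMS fit error on grid :", rms)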
A simulation study of radial expansion of an electron beam injected into an ionospheric plasma
NASA Technical Reports Server (NTRS)
Koga, J.; Lin, C. S.
1994-01-01
Injections of nonrelativistic electron beams from a finite equipotential conductor into an ionospheric plasma have been simulated using a two-dimensional electrostatic particle code. The purpose of the study is to survey the simulation parameters for understanding the dependence of beam radius on physical variables. The conductor is charged to a high potential when the background plasma density is less than the beam density. Beam electrons attracted by the charged conductor are decelerated to zero velocity near the stagnation point, which is at a few Debye lengths from the conductor. The simulations suggest that the beam electrons at the stagnation point receive a large transverse kick and the beam expands radially thereafter. The buildup of beam electrons at the stagnation point produces a large electrostatic force responsible for the transverse kick. However, for the weak charging cases where the background plasma density is larger than the beam density, the radial expansion mechanism is different; the beam plasma instability is found to be responsible for the radial expansion. The simulations show that the electron beam radius for high spacecraft charging cases is of the order of the beam gyroradius, defined as the beam velocity divided by the gyrofrequency. In the weak charging cases, the beam radius is only a fraction of the beam gyroradius. The parameter survey indicates that the beam radius increases with beam density and decreases with magnetic field and beam velocity. The beam radius normalized by the beam gyroradius is found to scale according to the ratio of the beam electron Debye length to the ambient electron Debye length. The parameter dependence deduced would be useful for interpreting the beam radius and beam density of electron beam injection experiments conducted from rockets and the space shuttle.
Zhang, Li-Juan; Cai, Wan-Zhi; Luo, Jun-Yu; Zhang, Shuai; Wang, Chun-Yi; Lv, Li-Min; Zhu, Xiang-Zhen; Wang, Li; Cui, Jin-Jie
2017-01-01
Lygus pratensis (L.) is an important cotton pest in China, especially in the northwest region. Nymphs and adults cause serious quality and yield losses. However, the genetic structure and geographic distribution of L. pratensis is not well known. We analyzed genetic diversity, geographical structure, gene flow, and population dynamics of L. pratensis in northwest China using mitochondrial and nuclear sequence datasets to study phylogeographical patterns and demographic history. L. pratensis (n = 286) were collected at sites across an area spanning 2,180,000 km2, including the Xinjiang and Gansu-Ningxia regions. Populations in the two regions could be distinguished based on mitochondrial criteria but the overall genetic structure was weak. The nuclear dataset revealed a lack of diagnostic genetic structure across sample areas. Phylogenetic analysis indicated a lack of population level monophyly that may have been caused by incomplete lineage sorting. The Mantel test showed a significant correlation between genetic and geographic distances among the populations based on the mtDNA data. However the nuclear dataset did not show significant correlation. A high level of gene flow among populations was indicated by migration analysis; human activities may have also facilitated insect movement. The availability of irrigation water and ample cotton hosts makes the Xinjiang region well suited for L. pratensis reproduction. Bayesian skyline plot analysis, star-shaped network, and neutrality tests all indicated that L. pratensis has experienced recent population expansion. Climatic changes and extensive areas occupied by host plants have led to population expansion of L. pratensis. In conclusion, the present distribution and phylogeographic pattern of L. pratensis was influenced by climate, human activities, and availability of plant hosts.
Xue, Dong-Xiu; Wang, Hai-Yan; Zhang, Tao; Liu, Jin-Xian
2014-01-01
The pen shell, Atrina pectinata, is one of the commercial bivalves in East Asia and thought to be recently affected by anthropogenic pressure (habitat destruction and/or fishing pressure). Information on its population genetic structure is crucial for the conservation of A. pectinata. Considering its long pelagic larval duration and iteroparity with high fecundity, the genetic structure for A. pectinata could be expected to be weak at a fine scale. However, the unusual oceanography in the coasts of China and Korea suggests potential for restricted dispersal of pelagic larvae and geographical differentiation. In addition, environmental changes associated with Pleistocene sea level fluctuations on the East China Sea continental shelf may also have strongly influenced historical population demography and genetic diversity of marine organisms. Here, partial sequences of the mitochondrial Cytochrome c oxidase subunit I (COI) gene and seven microsatellite loci were used to estimate population genetic structure and demographic history of seven samples from Northern China coast and one sample from North Korea coast. Despite high levels of genetic diversity within samples, there was no genetic differentiation among samples from Northern China coast and low but significant genetic differentiation between some of the Chinese samples and the North Korean sample. A late Pleistocene population expansion, probably after the Last Glacial Maximum, was also demonstrated for A. pectinata samples. No recent genetic bottleneck was detected in any of the eight samples. We concluded that both historical recolonization (through population range expansion and demographic expansion in the late Pleistocene) and current gene flow (through larval dispersal) were responsible for the weak level of genetic structure detected in A. pectinata. PMID:24789175
Storing Data from Qweak--A Precision Measurement of the Proton's Weak Charge
NASA Astrophysics Data System (ADS)
Pote, Timothy
2008-10-01
The Qweak experiment will perform a precision measurement of the proton's parity violating weak charge at low Q-squared. The experiment will do so by measuring the asymmetry in parity-violating electron scattering. The proton's weak charge is directly related to the value of the weak mixing angle--a fundamental quantity in the Standard Model. The Standard Model makes a firm prediction for the value of the weak mixing angle and thus Qweak may provide insight into shortcomings in the SM. The Qweak experiment will run at Thomas Jefferson National Accelerator Facility in Newport News, VA. A database was designed to hold data directly related to the measurement of the proton's weak charge such as detector and beam monitor yield, asymmetry, and error as well as control structures such as the voltage across photomultiplier tubes and the temperature of the liquid hydrogen target. In order to test the database for speed and stability, it was filled with fake data that mimicked the data that Qweak is expected to collect. I will give a brief overview of the Qweak experiment and database design, and present data collected during these tests.
Protostellar Collapse with a Shock
NASA Technical Reports Server (NTRS)
Tsai, John C.; Hsu, Juliana J.
1995-01-01
We reexamine both numerically and analytically the collapse of the singular isothermal sphere in the context of low-mass star formation. We consider the case where the onset of collapse is initiated by some arbitrary process which is accompanied by a central output of either heat or kinetic energy. We find two classes of numerical solutions describing this manner of collapse. The first approaches in time the expansion wave solution of Shu, while the second class is characterized by an ever-decreasing central accretion rate and the presence of an outwardly propagating weak shock. The collapse solution which represents the dividing case between these two classes is determined analytically by a similarity analysis. This solution shares with the expansion wave solution the properties that the gas remains stationary with an r⁻² density profile at large radius and that, at small radius, the gas free-falls onto a nascent core at a constant rate which depends only on the isothermal sound speed. This accretion rate is a factor of approx. 0.1 that predicted by the expansion wave solution. This reduction is due in part to the presence of a weak shock which propagates outward at 1.26 times the sound speed. Gas in the postshock region first moves out subsonically but is then decelerated and begins to collapse. The existence of two classes of numerical collapse solutions is explained in terms of the instability to radial perturbations of the analytic solution. Collapse occurring in the manner described by some of our solutions would eventually unbind a finite-sized core. However, this does not constitute a violation of the instability properties of the singular isothermal sphere which is unstable both to collapse and to expansion. To emphasize this, we consider a purely expanding solution for isothermal spheres. This solution is found to be self-similar and results in a uniform density core in the central regions of the gas. Our solutions may be relevant to the 'luminosity' problem of protostellar cores since the predicted central accretion rates are significantly reduced relative to that of the expansion wave solution. Furthermore, our calculations indicate that star-forming cloud cores are not very tightly bound and that modest disturbances can easily result in both termination of infall and dispersal of unaccreted material.
Impact of theoretical priors in cosmological analyses: The case of single field quintessence
NASA Astrophysics Data System (ADS)
Peirone, Simone; Martinelli, Matteo; Raveri, Marco; Silvestri, Alessandra
2017-09-01
We investigate the impact of general conditions of theoretical stability and cosmological viability on dynamical dark energy models. As a powerful example, we study whether minimally coupled, single field quintessence models that are safe from ghost instabilities can source the Chevallier-Polarski-Linder (CPL) expansion history recently shown to be mildly favored by a combination of cosmic microwave background (Planck) and weak lensing (KiDS) data. We find that in their most conservative form, the theoretical conditions impact the analysis in such a way that smooth single field quintessence becomes significantly disfavored with respect to the standard ΛCDM cosmological model. This is due to the fact that these conditions cut a significant portion of the (w0, wa) parameter space for CPL, in particular eliminating the region that would be favored by weak lensing data. Within the scenario of a smooth dynamical dark energy parametrized with CPL, weak lensing data favor a region that would require multiple fields to ensure gravitational stability.
SKA weak lensing - I. Cosmological forecasts and the power of radio-optical cross-correlations
NASA Astrophysics Data System (ADS)
Harrison, Ian; Camera, Stefano; Zuntz, Joe; Brown, Michael L.
2016-12-01
We construct forecasts for cosmological parameter constraints from weak gravitational lensing surveys involving the Square Kilometre Array (SKA). Considering matter content, dark energy and modified gravity parameters, we show that the first phase of the SKA (SKA1) can be competitive with other Stage III experiments such as the Dark Energy Survey and that the full SKA (SKA2) can potentially form tighter constraints than Stage IV optical weak lensing experiments, such as those that will be conducted with LSST, WFIRST-AFTA or Euclid-like facilities. Using weak lensing alone, going from SKA1 to SKA2 represents improvements by factors of ˜10 in matter, ˜10 in dark energy and ˜5 in modified gravity parameters. We also show, for the first time, the powerful result that comparably tight constraints (within ˜5 per cent) for both Stage III and Stage IV experiments, can be gained from cross-correlating shear maps between the optical and radio wavebands, a process which can also eliminate a number of potential sources of systematic errors which can otherwise limit the utility of weak lensing cosmology.
Numerical investigation of over expanded flow behavior in a single expansion ramp nozzle
NASA Astrophysics Data System (ADS)
Mousavi, Seyed Mahmood; Pourabidi, Reza; Goshtasbi-Rad, Ebrahim
2018-05-01
The single expansion ramp nozzle is severely over-expanded when the vehicle is at low speed, which hinders its ability to provide optimal configurations for combined cycle engines. The over-expansion leads to flow separation as a result of shock wave/boundary-layer interaction. Flow separation, and the presence of the shocks themselves, results in a performance loss in the single expansion ramp nozzle, leading to reduced thrust and increased pressure losses. In the present work, the unsteady two-dimensional compressible flow in an over-expanded single expansion ramp nozzle has been investigated using a finite volume code. To this end, the Reynolds stress turbulence model and full multigrid initialization, in addition to Smirnov's method for examining error accumulation, have been employed, and the results are compared with available experimental data. The results show that the numerical code is capable of predicting the experimental data with high accuracy. Afterward, the effects of a discontinuity jump in wall temperature, as well as of the length of the straight ramp, on flow behavior have been studied. It is concluded that variations in wall temperature and straight-ramp length change the shock wave/boundary-layer interaction, shock structure, and shock strength, as well as the distance between the lambda shocks.
Oil Palm expansion over Southeast Asia: land use change and air quality
NASA Astrophysics Data System (ADS)
Silva, S. J.; Heald, C. L.; Geddes, J.; Marlier, M. E.; Austin, K.; Kasibhatla, P. S.
2015-12-01
Over recent decades oil palm plantations have rapidly expanded across Southeast Asia (SEA). Much of this expansion has come at the expense of natural forests and grasslands. Aircraft measurements from a 2008 campaign, OP3, found that oil palm plantations emit as much as 7 times more isoprene than nearby natural forests. Furthermore, SEA is a rapidly developing region, with increasing urban population, and growing air quality concerns. Thus, SEA represents an ideal case study to examine the impacts of land use change on air quality in the region, and whether those changes can be detected from satellite observations of atmospheric composition. We investigate the impacts of historical and future oil palm expansion in SEA using satellite data, high-resolution land maps, and the chemical transport model GEOS-Chem. We examine the impact of palm plantations on surface-atmosphere processes (dry deposition, biogenic emissions). We show the sensitivity of air quality to current and future oil palm expansion scenarios, and discuss the limitations of current satellite measurements in capturing these changes. Our results indicate that while the impact of oil palm expansion on air quality can be significant, the retrieval error and sensitivity of the satellite measurements limit our ability to observe these impacts from space.
Reduction of image-based ADI-to-AEI overlay inconsistency with improved algorithm
NASA Astrophysics Data System (ADS)
Chen, Yen-Liang; Lin, Shu-Hong; Chen, Kai-Hsiung; Ke, Chih-Ming; Gau, Tsai-Sheng
2013-04-01
In image-based overlay (IBO) measurement, the measurement quality of various measurement spectra can be judged by quality indicators and also by the ADI-to-AEI similarity, to determine the optimum light spectrum. However, we found that some IBO results erroneously indicated wafer expansion, based on the difference between the ADI and AEI maps, even after the measurement spectra were optimized. To reduce this inconsistency, an improved image calculation algorithm is proposed in this paper. Different gray levels composed of inner- and outer-box contours are extracted to calculate their ADI overlay errors. The symmetry of the intensity distribution at the thresholds dictated by a range of gray levels is used to determine the particular gray level that minimizes the ADI-to-AEI overlay inconsistency. After this improvement, the ADI is more similar to the AEI, with a smaller expansion difference. The same wafer was also checked by a diffraction-based overlay (DBO) tool to verify that there is no physical wafer expansion. When there is actual wafer expansion induced by large internal stress, both the IBO and the DBO measurements indicate similar expansion results. A scanning white-light interference microscope was used to check the variation of wafer warpage during the ADI and AEI stages; it shows a trend similar to that of the overlay difference map, confirming the internal stress.
QCD equation of state to O ( μ B 6 ) from lattice QCD
Bazavov, A.; Ding, H. -T.; Hegde, P.; ...
2017-03-07
In this work, we calculated the QCD equation of state using Taylor expansions that include contributions from up to sixth order in the baryon, strangeness and electric charge chemical potentials. Calculations have been performed with the Highly Improved Staggered Quark action in the temperature range T ∈ [135 MeV, 330 MeV], using up to four different sets of lattice cut-offs corresponding to lattices of size N_σ³ × N_τ with aspect ratio N_σ/N_τ = 4 and N_τ = 6-16. The strange quark mass is tuned to its physical value, and we use two strange to light quark mass ratios, m_s/m_l = 20 and 27, which in the continuum limit correspond to a pion mass of about 160 MeV and 140 MeV, respectively. Sixth-order results for Taylor expansion coefficients are used to estimate truncation errors of the fourth-order expansion. We show that truncation errors are small for baryon chemical potentials less than twice the temperature (μ_B ≤ 2T). The fourth-order equation of state thus is suitable for the modeling of dense matter created in heavy ion collisions with center-of-mass energies down to √s_NN ~ 12 GeV. We provide a parametrization of basic thermodynamic quantities that can be readily used in hydrodynamic simulation codes. The results on up to sixth-order expansion coefficients of bulk thermodynamics are used for the calculation of lines of constant pressure, energy and entropy densities in the T-μ_B plane and are compared with the crossover line for the QCD chiral transition as well as with experimental results on freeze-out parameters in heavy ion collisions. These coefficients also provide estimates for the location of a possible critical point. Lastly, we argue that results on sixth-order expansion coefficients disfavor the existence of a critical point in the QCD phase diagram for μ_B/T ≤ 2 and T/T_c(μ_B = 0) > 0.9.
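The bookkeeping behind the truncation-error estimate is simple. The sketch below evaluates P/T⁴ = c₀ + c₂(μ_B/T)² + c₄(μ_B/T)⁴ + c₆(μ_B/T)⁶ and uses the sixth-order term as the truncation estimate for the fourth-order result; the coefficient values are made up for illustration and are not the lattice results.

# Illustrative Taylor coefficients at a fixed temperature (assumed values)
c0, c2, c4, c6 = 1.0, 0.25, 0.01, -0.002

for mu_over_T in (0.5, 1.0, 2.0):
    x2 = mu_over_T**2
    p4 = c0 + c2 * x2 + c4 * x2**2        # fourth-order pressure
    p6 = p4 + c6 * x2**3                   # sixth-order pressure
    rel_trunc = abs(p6 - p4) / p4          # truncation-error estimate
    print(f"muB/T = {mu_over_T}: P4/T^4 = {p4:.4f}, trunc. est. = {rel_trunc:.3%}")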
NASA Technical Reports Server (NTRS)
Gao, Feng; DeColstoun, Eric Brown; Ma, Ronghua; Weng, Qihao; Masek, Jeffrey G.; Chen, Jin; Pan, Yaozhong; Song, Conghe
2012-01-01
Cities have been expanding rapidly worldwide, especially over the past few decades. Mapping the dynamic expansion of impervious surface in both space and time is essential for an improved understanding of the urbanization process, land-cover and land-use change, and their impacts on the environment. Landsat and other medium-resolution satellites provide the necessary spatial details and temporal frequency for mapping impervious surface expansion over the past four decades. Since the US Geological Survey opened the historical record of the Landsat image archive for free access in 2008, the decades-old bottleneck of data limitation is gone. Remote-sensing scientists are now rich with data, and the challenge is how to make best use of this precious resource. In this article, we develop an efficient algorithm to map the continuous expansion of impervious surface using a time series of four decades of medium-resolution satellite images. The algorithm is based on a supervised classification of the time-series image stack using a decision tree. Each impervious class represents urbanization starting in a different image. The algorithm also allows us to remove inconsistent training samples, because impervious expansion is not reversible during the study period. The objective is to extract a time series of complete and consistent impervious surface maps from a corresponding time series of images collected from multiple sensors, with a minimal amount of image preprocessing effort. The approach was tested in the lower Yangtze River Delta region, one of the fastest-growing urban areas in China. Results from nearly four decades of medium-resolution satellite data from the Landsat Multispectral Scanner (MSS), Thematic Mapper (TM), Enhanced Thematic Mapper plus (ETM+) and China-Brazil Earth Resources Satellite (CBERS) reveal an urbanization process that is consistent with economic development plans and policies. The time-series impervious spatial extent maps derived from this study agree well with an existing urban extent polygon data set that was previously developed independently. The overall mapping accuracy was estimated at about 92.5%, with 3% commission error and 12% omission error for the impervious type from all images, regardless of image quality and initial spatial resolution.
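The consistency rule described above (impervious cover cannot revert to pervious within the study period) lends itself to a simple screening of training samples. The sketch below is a minimal illustration of that idea with hypothetical labels; it is not the authors' implementation.

```python
# Minimal sketch of the irreversibility rule: once a training pixel is
# labelled impervious, it must stay impervious in all later images.
# Labels and years are hypothetical.
import numpy as np

years = [1978, 1988, 1998, 2008]
# rows = training pixels, columns = years; 1 = impervious, 0 = pervious
labels = np.array([
    [0, 0, 1, 1],   # consistent: urbanised between 1988 and 1998
    [0, 1, 0, 1],   # inconsistent: reverts to pervious in 1998
    [1, 1, 1, 1],   # consistent: impervious throughout
])

def is_consistent(row):
    # a label sequence is consistent iff it never decreases (no reversal)
    return np.all(np.diff(row) >= 0)

keep = np.array([is_consistent(r) for r in labels])
print("retained training pixels:", np.where(keep)[0])   # -> [0 2]
```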
Delving Into Dissipative Quantum Dynamics: From Approximate to Numerically Exact Approaches
NASA Astrophysics Data System (ADS)
Chen, Hsing-Ta
In this thesis, I explore dissipative quantum dynamics of several prototypical model systems via various approaches, ranging from approximate to numerically exact schemes. In particular, in the realm of the approximate I explore the accuracy of Padé-resummed master equations and the fewest-switches surface hopping (FSSH) algorithm for the spin-boson model, and non-crossing approximations (NCAs) for the Anderson-Holstein model. Next, I develop new and exact Monte Carlo approaches and test them on the spin-boson model. I propose well-defined criteria for assessing the accuracy of Padé-resummed quantum master equations, which correctly demarcate the regions of parameter space where the Padé approximation is reliable. I continue the investigation of spin-boson dynamics with benchmark comparisons of the semiclassical FSSH algorithm against exact dynamics over a wide range of parameters. Despite small deviations from golden-rule scaling in the Marcus regime, the standard surface hopping algorithm is found to be accurate over a large portion of parameter space. The inclusion of decoherence corrections via the augmented FSSH algorithm improves the accuracy of the dynamical behavior compared to exact simulations, but the effects are generally not dramatic for the cases I consider. Next, I introduce new methods for numerically exact real-time simulation based on real-time diagrammatic quantum Monte Carlo (dQMC) and the inchworm algorithm. These methods optimally recycle Monte Carlo information from earlier times to greatly suppress the dynamical sign problem. In the context of the spin-boson model, I formulate the inchworm expansion in two distinct ways: the first with respect to an expansion in the system-bath coupling and the second as an expansion in the diabatic coupling. In addition, a cumulant version of the inchworm Monte Carlo method is motivated by the latter expansion, which allows for further suppression of the growth of the sign error. I provide a comprehensive comparison of the performance of the inchworm Monte Carlo algorithms to other exact methodologies, as well as a discussion of the relative advantages and disadvantages of each. Finally, I investigate the dynamical interplay between the electron-electron interaction and the electron-phonon coupling within the Anderson-Holstein model via two complementary NCAs: the first constructed around the weak-coupling limit and the second around the polaron limit. The influence of phonons on spectral and transport properties is explored in equilibrium, in a non-equilibrium steady state, and for transient dynamics after a quench. I find the two NCAs disagree in nontrivial ways, indicating that more reliable approaches to the problem are needed. The complementary frameworks used here pave the way for numerically exact methods based on inchworm dQMC algorithms capable of treating open systems simultaneously coupled to multiple fermionic and bosonic baths.
Local non-Calderbank-Shor-Steane quantum error-correcting code on a three-dimensional lattice
NASA Astrophysics Data System (ADS)
Kim, Isaac H.
2011-05-01
We present a family of non-Calderbank-Shor-Steane quantum error-correcting codes consisting of geometrically local stabilizer generators on a 3D lattice. We study the Hamiltonian constructed from the ferromagnetic interaction of an overcomplete set of local stabilizer generators. The degenerate ground state of the system is characterized by a quantum error-correcting code whose number of encoded qubits is equal to the second Betti number of the manifold. These models (i) have solely local interactions; (ii) admit a strong-weak duality relation with an Ising model on a dual lattice; (iii) have topological order in the ground state, some of which survives at finite temperature; and (iv) behave as classical memory at finite temperature.
A flexible environmental reuse/recycle policy based on economic strength.
Tsiliyannis, C A
2007-01-01
Environmental policies based on fixed recycling rates may lead to increased environmental impacts (e.g., landfilled wastes) during economic expansion. A rate policy is proposed, which is adjusted according to the overall strength or weakness of the economy, as reflected by overall packaging demand and consumption, production and imports-exports. During economic expansion featuring rising consumption, production or exports, the proposed flexible policy suggests a higher reuse/recycle rate. During economic slowdown a lower rate results in lower impacts. The flexible target rates are determined in terms of annual data, including consumption, imports-exports and production. Higher environmental gains can be achieved at lower cost if the flexible policy is applied to widely consumed packaging products and materials associated with low rates, or if cleaner recycling technology is adopted.
Exponential gain of randomness certified by quantum contextuality
NASA Astrophysics Data System (ADS)
Um, Mark; Zhang, Junhua; Wang, Ye; Wang, Pengfei; Kim, Kihwan
2017-04-01
We demonstrate a protocol for exponential gain of randomness certified by quantum contextuality in a trapped-ion system. Genuine randomness can be produced by quantum principles and certified by quantum inequalities. Recently, randomness expansion protocols based on Bell-test inequalities and the Kochen-Specker (KS) theorem have been demonstrated. These schemes have been theoretically developed to exponentially expand randomness and to amplify randomness from a weak initial random seed. Here, we report experimental evidence of such exponential expansion of randomness. In the experiment, we use three states of a 138Ba+ ion: a ground state and two quadrupole states. In the 138Ba+ ion system there is no detection loophole, and we apply a method to rule out certain hidden-variable models that obey a kind of extended noncontextuality.
From dipolar to multipolar interactions between ultracold Feshbach molecules
NASA Astrophysics Data System (ADS)
Quéméner, Goulven; Lepers, Maxence; Luc-Koenig, Eliane; Dulieu, Olivier
2016-05-01
Using the multipolar expansion of electrostatic and magnetostatic potential energies, we characterize the long-range interactions between two weakly-bound diatomic molecules, taking as an example the paramagnetic Er2 Feshbach molecules which were produced recently. The interaction between atomic magnetic dipoles gives rise to the usual R-3 leading term of the multipolar expansion, where R is the intermolecular distance. We show that additional terms scaling as R-5, R-7 and so on also appear, which are strongly anisotropic with respect to the orientation of the molecules. These terms can be seen as effective molecular multipole moments reflecting the spatial extension of the molecules which is non-negligible compared to R. We acknowledge the financial support of the COPOMOL project (ANR-13-IS04-0004) from Agence Nationale de la Recherche.
Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals
NASA Astrophysics Data System (ADS)
Buchert, Thomas; France, Martin J.; Steiner, Frank
2017-05-01
Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2), which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second-order expansions of Matsubara to arbitrary order in the standard deviation σ0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity, (1-2)σ, of the foreground-corrected masked Planck 2015 maps.
Assessing the Prospects for Employment in an Expansion of US Aquaculture
NASA Astrophysics Data System (ADS)
Ngo, N.
2006-12-01
The United States imports 60 percent of its seafood, leading to a $7 billion seafood trade deficit. To mitigate this deficit, the National Oceanic and Atmospheric Administration (NOAA), a branch of the U.S. Department of Commerce, has promoted the expansion of U.S. production of seafood by aquaculture. NOAA projects that the future expansion of a U.S. aquaculture industry could produce as much as $5 billion in annual sales. NOAA claims that one of the benefits of this expansion would be an increase in employment from 180,000 to 600,000 persons (100,000 indirect jobs and 500,000 direct jobs). The sources of these estimates and the assumptions upon which they are based are unclear, however. The Marine Aquaculture Task Force (MATF), an independent scientific panel, has been skeptical of NOAA's employment estimates, claiming that its sources of information are weak and based upon dubious assumptions. If NOAA has exaggerated its employment projections, then the benefits from an expansion of U.S. aquaculture production would not be as large as projected. My study examined published estimates of labor productivity from the domestic and foreign aquaculture of a variety of species, and I projected the potential increase in employment associated with a $5 billion aquaculture industry, as proposed by NOAA. Results showed that employment estimates range from only 40,000 to 128,000 direct jobs by 2025 as a consequence of the proposed expansion. Consequently, NOAA may have overestimated its employment projections, possibly by as much as 170 percent, implying that NOAA's employment estimate requires further research or adjustment.
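The core of the projection is simple arithmetic: direct jobs = projected sales / sales per worker. The sketch below reproduces the reported 40,000-128,000 range; the two bounding labor-productivity figures are back-solved from those numbers, and the middle one is purely illustrative.

```python
# Back-of-envelope version of the projection: direct jobs implied by
# $5 billion in annual sales under different labour-productivity
# assumptions (sales per worker). Figures are illustrative placeholders.
projected_sales = 5e9  # NOAA's projected annual sales, USD

for sales_per_job in (39_000, 75_000, 125_000):
    jobs = projected_sales / sales_per_job
    print(f"${sales_per_job:,}/worker -> {jobs:,.0f} direct jobs")
```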
Hubble confirms cosmic acceleration with weak lensing
2017-12-08
NASA/ESA Hubble Release Date: March 25, 2010 This image shows a smoothed reconstruction of the total (mostly dark) matter distribution in the COSMOS field, created from data taken by the NASA/ESA Hubble Space Telescope and ground-based telescopes. It was inferred from the weak gravitational lensing distortions that are imprinted onto the shapes of background galaxies. The colour coding indicates the distance of the foreground mass concentrations as gathered from the weak lensing effect. Structures shown in white, cyan, and green are typically closer to us than those indicated in orange and red. To improve the resolution of the map, data from galaxies both with and without redshift information were used. The new study presents the most comprehensive analysis of data from the COSMOS survey. The researchers have, for the first time ever, used Hubble and the natural "weak lenses" in space to characterise the accelerated expansion of the Universe. Credit: NASA, ESA, P. Simon (University of Bonn) and T. Schrabback (Leiden Observatory) To learn more about this image go to: www.spacetelescope.org/news/html/heic1005.html For more information about Goddard Space Flight Center go here: www.nasa.gov/centers/goddard/home/index.html
Measurement of the inclusive radiative B-meson decay B → X_s γ
NASA Astrophysics Data System (ADS)
Ozcan, Veysi Erkcan
Radiative decays of the B meson, B → X_s γ, proceed via virtual flavor-changing neutral-current processes that are sensitive to contributions from high mass scales, either within the Standard Model of electroweak interactions or beyond. In the Standard Model, these transitions are sensitive to the weak interactions of the top quark, and relatively robust predictions of the inclusive decay rate exist. Significant deviation from these predictions could be interpreted as an indication of processes not included in the minimal Standard Model, such as interactions of charged Higgs or SUSY particles. The analysis of the inclusive photon spectrum from B → X_s γ decays is rather challenging due to high backgrounds from photons emitted in the decay of mesons in B decays, as well as e+e− annihilation to low-mass quark and lepton pairs. Based on 88.5 million BB̄ events collected by the BABAR detector, the photon spectrum above 1.9 GeV is presented. By comparison of the first and second moments of the photon spectrum with QCD predictions (calculated in the kinetic scheme), QCD parameters describing the bound state of the b quark in the B meson are extracted: m_b = 4.45 ± 0.16 GeV/c^2 and μ_π^2 = 0.65 ± 0.29 GeV^2. These parameters are useful input to non-perturbative QCD corrections to the semileptonic B decay rate and the determination of the CKM parameter V_ub. Based on these parameters and the heavy quark expansion, the full branching fraction is obtained as BR(B → X_s γ; E_γ > 1.6 GeV) = (4.05 ± 0.32_stat ± 0.38_syst ± 0.29_model) × 10^-4. This result is in good agreement with previous measurements; the statistical and systematic errors are comparable. It is also in good agreement with theoretical Standard Model predictions, and thus within the present errors there is no indication of any interactions not accounted for in the Standard Model. This finding implies strong constraints on physics beyond the Standard Model.
Advanced repair solution of clear defects on HTPSM by using nanomachining tool
NASA Astrophysics Data System (ADS)
Lee, Hyemi; Kim, Munsik; Jung, Hoyong; Kim, Sangpyo; Yim, Donggyu
2015-10-01
As the mask specifications become tighter for low-k1 lithography, more aggressive repair accuracy is required below the sub-20 nm technology node. To meet tight defect specifications, many mask shops select effective repair tools according to defect type. Normally, pattern defects are repaired with an e-beam repair tool, and soft defects such as particles are repaired with a nanomachining tool. It is difficult for an e-beam repair tool to remove particle defects because it relies on a chemical reaction between a gas and the electron beam, while a nanomachining tool, which uses a physical interaction between a nano-tip and the defect, cannot be applied to repairing clear defects. Generally, a film deposition process is widely used for repairing clear defects. However, the deposited film has weak cleaning durability, so it is easily removed by accumulated cleaning processes. Although the deposited film adheres strongly to the MoSiN (or Qz) film, the adhesive strength between the deposited Cr film and the MoSiN (or Qz) film becomes progressively weaker with the energy accumulated when masks are exposed in a scanner tool, owing to the different coefficients of thermal expansion of the materials. Therefore, whenever a re-pellicle process is needed for a mask, all deposited repair points have to be checked to confirm whether the deposited films are damaged, and if a deposition point is damaged, the repair process must be performed again, making the flow longer and more complex. In this paper, the basic theory and principle of recovering clear defects using a nanomachining tool are introduced, and the evaluated results are reviewed for dense line (L/S) patterns and contact hole (C/H) patterns. The results using nanomachining are also compared with those using an e-beam repair tool, including the cleaning durability evaluated by the accumulated cleaning process. Besides, we discuss the phase-shift issue and the solution to the image placement error caused by phase error.
Theoretical and experimental studies of error in square-law detector circuits
NASA Technical Reports Server (NTRS)
Stanley, W. D.; Hearn, C. P.; Williams, J. B.
1984-01-01
Square-law detector circuits were investigated to determine errors relative to the ideal input/output characteristic function. The nonlinear circuit response is analyzed by a power series expansion containing terms through the fourth degree, from which the significant deviation from square law can be predicted. Both fixed-bias-current and flexible-bias-current configurations are considered. The latter case corresponds to the situation where the mean current can change with the application of a signal. Experimental investigations of the circuit arrangements are described. Agreement between the analytical models and the experimental results is established. Factors which contribute to differences under certain conditions are outlined.
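A minimal sketch of the kind of analysis described above: model the detector response as a quartic power series and quantify its fractional departure from the ideal square-law term. The coefficients are hypothetical, not those of the studied circuits.

```python
# Sketch of the deviation analysis: quartic power-series response vs the
# pure square-law term. Coefficients a1..a4 are illustrative.
import numpy as np

a1, a2, a3, a4 = 0.02, 1.0, 0.01, -0.005   # hypothetical series coefficients

def response(v):
    return a1*v + a2*v**2 + a3*v**3 + a4*v**4

v = np.linspace(0.1, 0.5, 5)               # input signal amplitudes
ideal = a2 * v**2                           # pure square-law term
dev = (response(v) - ideal) / ideal        # fractional deviation
for vi, d in zip(v, dev):
    print(f"v = {vi:.2f}: deviation from square law = {d:+.2%}")
```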
Weakly-tunable transmon qubits in a multi-qubit architecture
NASA Astrophysics Data System (ADS)
Hertzberg, Jared; Bronn, Nicholas; Corcoles, Antonio; Brink, Markus; Keefe, George; Takita, Maika; Hutchings, M.; Plourde, B. L. T.; Gambetta, Jay; Chow, Jerry
Quantum error-correction employing a 2D lattice of qubits requires a strong coupling between adjacent qubits and consistently high gate fidelity among them. In such a system, all-microwave cross-resonance gates offer simplicity of setup and operation. However, the relative frequencies of adjacent qubits must be carefully arranged in order to optimize gate rates and eliminate unwanted couplings. We discuss the incorporation of weakly-flux-tunable transmon qubits into such an architecture. Using DC tuning through filtered flux-bias lines, we adjust qubit frequencies while minimizing the effects of flux noise on decoherence.
Insight into organic reactions from the direct random phase approximation and its corrections
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ruzsinszky, Adrienn; Zhang, Igor Ying; Scheffler, Matthias
2015-10-14
The performance of the random phase approximation (RPA) and beyond-RPA approximations for the treatment of electron correlation is benchmarked on three different molecular test sets. The test sets are chosen to represent three typical sources of error which can contribute to the failure of most density functional approximations in chemical reactions. The first test set (atomization and n-homodesmotic reactions) offers a gradually increasing balance of error from the chemical environment. The second test set (Diels-Alder reaction cycloaddition = DARC) reflects more the effect of weak dispersion interactions in chemical reactions. Finally, the third test set (self-interaction error 11 = SIE11) represents reactions which are exposed to noticeable self-interaction errors. This work seeks to answer whether any one of the many-body approximations considered here successfully addresses all these challenges.
The random coding bound is tight for the average code.
NASA Technical Reports Server (NTRS)
Gallager, R. G.
1973-01-01
The random coding bound of information theory provides a well-known upper bound to the probability of decoding error for the best code of a given rate and block length. The bound is constructed by upperbounding the average error probability over an ensemble of codes. The bound is known to give the correct exponential dependence of error probability on block length for transmission rates above the critical rate, but it gives an incorrect exponential dependence at rates below a second lower critical rate. Here we derive an asymptotic expression for the average error probability over the ensemble of codes used in the random coding bound. The result shows that the weakness of the random coding bound at rates below the second critical rate is due not to upperbounding the ensemble average, but rather to the fact that the best codes are much better than the average at low rates.
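For readers who want to experiment with the quantities discussed here, the sketch below evaluates the textbook random coding exponent E_r(R) = max_{0≤ρ≤1}[E_0(ρ) - ρR] for a binary symmetric channel (the generic Gallager form, not the asymptotic ensemble-average expression derived in the paper).

```python
# Random coding exponent for a BSC with crossover probability p and
# uniform inputs, in nats. Generic textbook computation.
import numpy as np

def E0(rho, p):
    # Gallager's E0 function for the BSC
    s = p**(1/(1 + rho)) + (1 - p)**(1/(1 + rho))
    return rho*np.log(2) - (1 + rho)*np.log(s)

def random_coding_exponent(R, p, grid=1001):
    rhos = np.linspace(0.0, 1.0, grid)
    return max(E0(r, p) - r*R for r in rhos)

p = 0.05                                            # channel crossover probability
C = np.log(2) + p*np.log(p) + (1 - p)*np.log(1 - p) # capacity in nats
for R in (0.2*C, 0.5*C, 0.9*C):
    print(f"R = {R:.3f} nats: E_r(R) = {random_coding_exponent(R, p):.4f}")
```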
Hellier, Elizabeth; Tucker, Mike; Kenny, Natalie; Rowntree, Anna; Edworthy, Judy
2010-09-01
This study aimed to examine the utility of using color and shape to differentiate drug strength information on over-the-counter medicine packages. Medication errors are an important threat to patient safety, and confusions between drug strengths are a significant source of medication error. A visual search paradigm required laypeople to search for medicine packages of a particular strength from among distracter packages of different strengths, and measures of reaction time and error were recorded. Using color to differentiate drug strength information conferred an advantage on search times and accuracy. Shape differentiation did not improve search times and had only a weak effect on search accuracy. Using color to differentiate drug strength information improves drug strength identification performance. Color differentiation of drug strength information may be a useful way of reducing medication errors and improving patient safety.
Bias Reduction and Filter Convergence for Long Range Stereo
NASA Technical Reports Server (NTRS)
Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav
2005-01-01
We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
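The triangulation bias is easy to reproduce numerically: because range = f·b/d is convex in the disparity d, zero-mean disparity noise inflates the mean triangulated range, and the effect grows with range. The following Monte-Carlo sketch uses illustrative camera parameters, not those of the paper.

```python
# Monte-Carlo sketch of the range-dependent triangulation bias.
import numpy as np

rng = np.random.default_rng(0)
f, b = 500.0, 0.3          # focal length [px] and baseline [m], hypothetical
sigma_d = 0.3              # disparity noise, pixels

for true_range in (5.0, 20.0, 50.0):
    d_true = f*b/true_range                       # true disparity
    d_meas = d_true + sigma_d*rng.standard_normal(200_000)
    z = f*b/d_meas                                # naive triangulated range
    print(f"true {true_range:5.1f} m -> mean estimate {z.mean():6.2f} m "
          f"(bias {z.mean() - true_range:+.3f} m)")
```

The overestimation visible at long range is exactly what a series expansion of E[f·b/(d+ε)] predicts, and what the paper removes analytically.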
The influence of random element displacement on DOA estimates obtained with (Khatri-Rao-)root-MUSIC.
Inghelbrecht, Veronique; Verhaevert, Jo; van Hecke, Tanja; Rogier, Hendrik
2014-11-11
Although a wide range of direction of arrival (DOA) estimation algorithms has been described for a diverse range of array configurations, no specific stochastic analysis framework has been established to assess the probability density function of the error on DOA estimates due to random errors in the array geometry. Therefore, we propose a stochastic collocation method that relies on a generalized polynomial chaos expansion to connect the statistical distribution of random position errors to the resulting distribution of the DOA estimates. We apply this technique to the conventional root-MUSIC and the Khatri-Rao-root-MUSIC methods. According to Monte-Carlo simulations, this novel approach yields a speedup by a factor of more than 100 in terms of CPU-time for a one-dimensional case and by a factor of 56 for a two-dimensional case.
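As a simplified one-dimensional stand-in for the stochastic collocation idea, the sketch below propagates a Gaussian position error through a hypothetical nonlinear response g using a handful of Gauss-Hermite collocation points and compares the result with brute-force Monte Carlo; the speedup reported in the paper comes from this same few-evaluations principle.

```python
# 1-D stochastic collocation vs Monte Carlo for a Gaussian input error.
# g is a purely illustrative stand-in for the DOA estimator.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def g(x):
    # hypothetical nonlinear response of a DOA estimate to a position error x
    return np.sin(1.0 + 0.5*x) + 0.1*x**2

sigma = 0.1                         # std of the random element displacement
nodes, weights = hermegauss(7)      # 7 probabilists' Gauss-Hermite points
w = weights / np.sqrt(2*np.pi)      # normalise to a unit Gaussian measure
mean_coll = np.sum(w * g(sigma*nodes))
var_coll = np.sum(w * g(sigma*nodes)**2) - mean_coll**2

rng = np.random.default_rng(1)
samples = g(sigma*rng.standard_normal(1_000_000))
print(f"collocation : mean={mean_coll:.6f} var={var_coll:.3e} (7 evaluations)")
print(f"Monte Carlo : mean={samples.mean():.6f} var={samples.var():.3e} (1e6 evaluations)")
```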
Weak crystallization theory of metallic alloys
Martin, Ivar; Gopalakrishnan, Sarang; Demler, Eugene A.
2016-06-20
Crystallization is one of the most familiar, but hardest to analyze, phase transitions. The principal reason is that crystallization typically occurs via a strongly first-order phase transition, and thus rigorous treatment would require comparing the energies of an infinite number of possible crystalline states with the energy of the liquid. A great simplification occurs when the crystallization transition happens to be weakly first order. In this case, weak crystallization theory, based on an unbiased Ginzburg-Landau expansion, can be applied. Even beyond its strict range of validity, it has been a useful qualitative tool for understanding crystallization. In its standard form, however, weak crystallization theory cannot explain the existence of a majority of observed crystalline and quasicrystalline states. Here we extend the weak crystallization theory to the case of metallic alloys. In this paper, we identify a singular effect of itinerant electrons on the form of the weak crystallization free energy. It is geometric in nature, generating a strong dependence of the free energy on the angles between the ordering wave vectors of the ionic density. This leads to stabilization of fcc, rhombohedral, and icosahedral quasicrystalline (iQC) phases, which are absent in the generic theory with only local interactions. Finally, as an application, we find the condition for stability of iQC that is consistent with the Hume-Rothery rules known empirically for the majority of stable iQC; namely, the length of the primary Bragg-peak wave vector is approximately equal to the diameter of the Fermi sphere.
NASA Astrophysics Data System (ADS)
Jouvel, S.; Kneib, J.-P.; Bernstein, G.; Ilbert, O.; Jelinsky, P.; Milliard, B.; Ealet, A.; Schimd, C.; Dahlen, T.; Arnouts, S.
2011-08-01
Context. With the discovery of the accelerated expansion of the universe, different observational probes have been proposed to investigate the presence of dark energy, including possible modifications of the laws of gravitation, by accurately measuring the expansion of the Universe and the growth of structures. The return from future dark energy surveys must be optimized to obtain the best results from these probes. Aims: A high-precision weak-lensing analysis requires not only an accurate measurement of galaxy shapes but also a precise and unbiased measurement of galaxy redshifts. The survey strategy has to be defined following both the photometric redshift and the shape measurement accuracy. Methods: We define the key properties of the weak-lensing instrument and compute the effective PSF and the overall throughput and sensitivities. We then investigate the impact of the pixel scale on the sampling of the effective PSF and place upper limits on the pixel scale. We then define the survey strategy, computing the survey area and accounting in particular for both the Galactic absorption and the Zodiacal light variation across the sky. Using the Le Phare photometric redshift code and a realistic galaxy mock catalog, we investigate the properties of different filter sets and the importance of the u-band photometry quality for optimizing the photometric redshifts and the dark energy figure of merit (FoM). Results: Using the predicted photometric redshift quality, simple shape measurement requirements, and a proper sky model, we explore what could be an optimal weak-lensing dark energy mission based on the FoM calculation. We find that we can derive the most accurate photometric redshifts for the bulk of the faint galaxy population when filters have a resolution ℛ ~ 3.2. We show that an optimal mission would survey the sky through eight filters using two cameras (visible and near-infrared). Assuming a five-year mission duration, a mirror size of 1.5 m, and a 0.5 deg2 FOV with a visible pixel scale of 0.15'', we found that a homogeneous survey reaching IAB = 25.6 (10σ) with a sky coverage of ~11 000 deg2 maximizes the weak-lensing FoM. The effective number density of galaxies used for WL is then ~45 gal/arcmin2, at least a factor of two higher than for ground-based surveys. Conclusions: This study demonstrates that a full account of the observational strategy is required to properly optimize the instrument parameters and maximize the FoM of a future space-based weak-lensing dark energy mission.
Identifying Genetic Traces of Historical Expansions: Phoenician Footprints in the Mediterranean
Zalloua, Pierre A.; Platt, Daniel E.; El Sibai, Mirvat; Khalife, Jade; Makhoul, Nadine; Haber, Marc; Xue, Yali; Izaabel, Hassan; Bosch, Elena; Adams, Susan M.; Arroyo, Eduardo; López-Parra, Ana María; Aler, Mercedes; Picornell, Antònia; Ramon, Misericordia; Jobling, Mark A.; Comas, David; Bertranpetit, Jaume; Wells, R. Spencer; Tyler-Smith, Chris
2008-01-01
The Phoenicians were the dominant traders in the Mediterranean Sea two thousand to three thousand years ago and expanded from their homeland in the Levant to establish colonies and trading posts throughout the Mediterranean, but then they disappeared from history. We wished to identify their male genetic traces in modern populations. Therefore, we chose Phoenician-influenced sites on the basis of well-documented historical records and collected new Y-chromosomal data from 1330 men from six such sites, as well as comparative data from the literature. We then developed an analytical strategy to distinguish between lineages specifically associated with the Phoenicians and those spread by geographically similar but historically distinct events, such as the Neolithic, Greek, and Jewish expansions. This involved comparing historically documented Phoenician sites with neighboring non-Phoenician sites for the identification of weak but systematic signatures shared by the Phoenician sites that could not readily be explained by chance or by other expansions. From these comparisons, we found that haplogroup J2, in general, and six Y-STR haplotypes, in particular, exhibited a Phoenician signature that contributed > 6% to the modern Phoenician-influenced populations examined. Our methodology can be applied to any historically documented expansion in which contact and noncontact sites can be identified. PMID:18976729
Bergmann's rule is maintained during a rapid range expansion in a damselfly.
Hassall, Christopher; Keat, Simon; Thompson, David J; Watts, Phillip C
2014-02-01
Climate-induced range shifts result in the movement of a sample of genotypes from source populations to new regions. The phenotypic consequences of those shifts depend upon the sample characteristics of the dispersive genotypes, which may act to either constrain or promote phenotypic divergence, and the degree to which plasticity influences the genotype-environment interaction. We sampled populations of the damselfly Erythromma viridulum from northern Europe to quantify the phenotypic (latitude-body size relationship based on seven morphological traits) and genetic (variation at microsatellite loci) patterns that occur during a range expansion itself. We find a weak spatial genetic structure that is indicative of high gene flow during a rapid range expansion. Despite the potentially homogenizing effect of high gene flow, however, there is extensive phenotypic variation among samples along the invasion route that manifests as a strong, positive correlation between latitude and body size consistent with Bergmann's rule. This positive correlation cannot be explained by variation in the length of larval development (voltinism). While the adaptive significance of latitudinal variation in body size remains obscure, geographical patterns in body size in odonates are apparently underpinned by phenotypic plasticity and this permits a response to one or more environmental correlates of latitude during a range expansion. © 2013 John Wiley & Sons Ltd.
Variations of cosmic large-scale structure covariance matrices across parameter space
NASA Astrophysics Data System (ADS)
Reischke, Robert; Kiessling, Alina; Schäfer, Björn Malte
2017-03-01
The likelihood function for cosmological parameters, given by e.g. weak lensing shear measurements, depends on contributions to the covariance induced by the non-linear evolution of the cosmic web. As highly non-linear clustering to date has only been described by numerical N-body simulations in a reliable and sufficiently precise way, the necessary computational costs for estimating those covariances at different points in parameter space are tremendous. In this work, we describe the change of the matter covariance and the weak lensing covariance matrix as a function of cosmological parameters by constructing a suitable basis, where we model the contribution to the covariance from non-linear structure formation using Eulerian perturbation theory at third order. We show that our formalism is capable of dealing with large matrices and reproduces expected degeneracies and scaling with cosmological parameters in a reliable way. Comparing our analytical results to numerical simulations, we find that the method describes the variation of the covariance matrix found in the SUNGLASS weak lensing simulation pipeline within the errors at one-loop and tree-level for the spectrum and the trispectrum, respectively, for multipoles up to ℓ ≤ 1300. We show that it is possible to optimize the sampling of parameter space where numerical simulations should be carried out by minimizing interpolation errors and propose a corresponding method to distribute points in parameter space in an economical way.
Solute boundary layer on a rotating crystal
NASA Astrophysics Data System (ADS)
Povinelli, Michelle L.; Korpela, Seppo A.; Chait, Arnon
1994-11-01
A perturbation analysis has been carried out for the solutal boundary layer next to a rotating crystal. Our aim is to extend the classical results of Burton, Prim and Slichter [1] in order to obtain higher-order terms in the asymptotic expansions for the concentration field and the boundary-layer thickness. Expressions for the effective segregation coefficient are directly obtained from the concentration solution in the two limits that correspond to weak and strong rotation.
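For context, the zeroth-order result that this perturbation analysis extends is the classical Burton-Prim-Slichter expression k_eff = k_0 / [k_0 + (1 - k_0) e^{-Vδ/D}], where V is the growth velocity, δ the boundary-layer thickness, and D the solute diffusivity. The sketch below evaluates it for illustrative parameter values, not values from the paper.

```python
# Classical Burton-Prim-Slichter effective segregation coefficient.
import numpy as np

def k_eff(k0, V, delta, D):
    """BPS effective segregation coefficient."""
    return k0 / (k0 + (1.0 - k0)*np.exp(-V*delta/D))

k0 = 0.1                     # equilibrium segregation coefficient
D = 1e-9                     # solute diffusivity, m^2/s (illustrative)
delta = 1e-4                 # boundary-layer thickness, m (illustrative)
for V in (1e-6, 1e-5, 1e-4):             # growth velocities, m/s
    print(f"V = {V:.0e} m/s -> k_eff = {k_eff(k0, V, delta, D):.3f}")
```

As expected, k_eff interpolates between the equilibrium value k_0 at slow growth and complete solute trapping (k_eff → 1) at fast growth.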
NASA Astrophysics Data System (ADS)
Seadawy, A. R.; El-Rashidy, K.
2018-03-01
The Kadomtsev-Petviashvili (KP) and modified KP equations are two of the most universal models in nonlinear wave theory; they arise as reductions of systems with quadratic nonlinearity that admit weakly dispersive waves. The generalized extended tanh method and the F-expansion method are used to derive exact solitary wave solutions of the KP and modified KP equations. The regions of the solutions are displayed graphically.
Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data
NASA Technical Reports Server (NTRS)
Voorhies, C. V.; Santana, J.; Sabaka, T.
1999-01-01
Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and from perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate the Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree-14 model. A degree-14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) of added noise into a 60 nT error (rmss); however, a degree-12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Real geomagnetic measurements are unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).
Consistency condition for inflation from (broken) conformal symmetry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schalm, Koenraad; Aalst, Ted van der; Shiu, Gary, E-mail: kschalm@lorentz.leidenuniv.nl, E-mail: shiu@physics.wisc.edu, E-mail: vdaalst@lorentz.leidenuniv.nl
2013-03-01
We investigate the symmetry constraints on the bispectrum, i.e. the three-point correlation function of primordial density fluctuations, in slow-roll inflation. It follows from the defining property of slow-roll inflation that primordial correlation functions inherit most of their structure from weakly broken de Sitter symmetries. Using holographic techniques borrowed from the AdS/CFT correspondence, the symmetry constraints on the bispectrum can be mapped to a set of stress-tensor Ward identities in a weakly broken 2+1-dimensional Euclidean CFT. We construct the consistency condition from these Ward identities using conformal perturbation theory. This requires a second-order Ward identity and the use of the evolution equation. Our result also illustrates a subtle difference between conformal perturbation theory and the slow-roll expansion.
NASA Astrophysics Data System (ADS)
Tuckness, D. G.; Jost, B.
1995-08-01
Current knowledge of the lunar gravity field is presented. The various methods used in determining these gravity fields are investigated and analyzed. It is shown that weaknesses exist in the current models of the lunar gravity field. The dominant part of this weakness is caused by the lack of lunar tracking data (farside, polar areas), which makes modeling the total lunar potential difficult. Comparisons of the various lunar models reveal agreement in the low-order coefficients of the Legendre polynomial expansions. However, substantial differences between the models can exist in the higher-order harmonics. The main purpose of this study is to assess today's lunar gravity field models for use in tomorrow's lunar mission designs and operations.
Bremsstrahlung function, leading Lüscher correction at weak coupling and localization
NASA Astrophysics Data System (ADS)
Bonini, Marisa; Griguolo, Luca; Preti, Michelangelo; Seminara, Domenico
2016-02-01
We discuss the near-BPS expansion of the generalized cusp anomalous dimension with L units of R-charge. Integrability provides an exact solution, obtained by solving a general TBA equation in the appropriate limit; we propose here an alternative method based on supersymmetric localization. The basic idea is to relate the computation to the vacuum expectation value of certain 1/8 BPS Wilson loops with local operator insertions along the contour. These observables localize on a two-dimensional gauge theory on S^2, opening the possibility of exact calculations. As a test of our proposal, we reproduce the leading Lüscher correction at weak coupling to the generalized cusp anomalous dimension. This result is also checked against a genuine Feynman diagram approach in N = 4 super Yang-Mills theory.
[European health systems and the integration problem of modern societies].
Lüschen, G
2000-04-01
With reference to the national health systems in Germany and the UK we must acknowledge that it was in particular Bismarck's Reform, originally directed toward a solidarity among the socially weak, which entailed in its development a marked redistribution via progressive health fees and standardized health services. In view of Alfred Marshall's original expectations this has resulted in a specific integration of the socially weak and with some difference for nationally tax-financed and social security financed health systems to a genuine contribution towards integration of modern society. An open research question is whether as a consequence of solidarity and integration through health systems there is a decline of social inequality for health. Equally open is the question as to the socio-structural and economic consequences the expansion of modern health systems has.
Pyroelectric property of SrTiO3/Si ferroelectric-semiconductor heterojunctions near room temperature
NASA Astrophysics Data System (ADS)
Bai, Gang; Wu, Dongmei; Xie, Qiyun; Guo, Yanyan; Li, Wei; Deng, Licheng; Liu, Zhiguo
2015-12-01
A nonlinear thermodynamic formalism is developed to calculate the pyroelectric properties of epitaxial single-domain SrTiO3/Si heterojunctions, taking into account the thermal expansion misfit strain at different temperatures. It is demonstrated that a crucial role in the pyroelectricity is played by the contribution associated with the structural order parameter arising from the rotations of the oxygen octahedra. A dramatic decrease in the pyroelectric coefficient due to the strong coupling between the polarization and the structural order parameter is found at the ferroelectric TF1-TF2 phase transition. At the same time, the thermal expansion mismatch between film and substrate is found to produce an additional weak decrease of the pyroelectricity. The analytic relationship between the out-of-plane pyroelectric coefficient and the dielectric constant of the ferroelectric phases, accounting for the thermal expansion of the thin films and substrates, is determined for the first time. Our research provides another avenue for investigating the pyroelectric effects of ferroic thin films, especially materials such as antiferroelectrics and multiferroics that have two or more order parameters.
Ground state energies from converging and diverging power series expansions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lisowski, C.; Norris, S.; Pelphrey, R.
2016-10-15
It is often assumed that bound states of quantum mechanical systems are intrinsically non-perturbative in nature and therefore any power series expansion methods should be inapplicable to predict the energies for attractive potentials. However, if the spatial domain of the Schrödinger Hamiltonian for attractive one-dimensional potentials is confined to a finite length L, the usual Rayleigh–Schrödinger perturbation theory can converge rapidly and is perfectly accurate in the weak-binding region where the ground state's spatial extension is comparable to L. Once the binding strength is so strong that the ground state's extension is less than L, the power expansion becomes divergent, consistent with the expectation that bound states are non-perturbative. However, we propose a new truncated Borel-like summation technique that can recover the bound state energy from the diverging sum. We also show that perturbation theory becomes divergent in the vicinity of an avoided-level crossing. Here the same numerical summation technique can be applied to reproduce the energies from the diverging perturbative sums.
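A generic illustration of the Borel-type rescue of a divergent series (using the textbook Euler series rather than the paper's perturbation series): the partial sums of Σ_n (-1)^n n! x^n blow up, while the Borel sum ∫_0^∞ e^{-t}/(1 + xt) dt stays finite.

```python
# Borel summation of the factorially divergent Euler series at x = 0.2.
import math
from scipy.integrate import quad

x = 0.2

def partial_sum(N):
    # naive partial sum of sum_n (-1)^n n! x^n, which diverges with N
    return sum((-1)**n * math.factorial(n) * x**n for n in range(N + 1))

# Borel transform turns a_n = (-1)^n n! into a geometric series; the
# Laplace integral below resums it exactly.
borel_sum, _ = quad(lambda t: math.exp(-t)/(1.0 + x*t), 0.0, math.inf)

for N in (2, 5, 10, 15):
    print(f"N = {N:2d}: partial sum = {partial_sum(N):12.4f}")
print(f"Borel sum = {borel_sum:.6f}")
```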
A time-corrector device for adjusting streamflow records
Raymond W. Lavigne
1960-01-01
The first job in compiling streamflow data from streamflow charts is to mark storm rises and storm peaks, make corrections as necessary for time and stage height, and account for irregularities on the chart. Errors in the time scale can result from faulty clock operation, irregularities in chart take-up by the drum, or expansion of the paper. This note suggests a...
The expansion of neighborhood and pattern formation on spatial prisoner's dilemma
NASA Astrophysics Data System (ADS)
Qian, Xiaolan; Xu, Fangqian; Yang, Junzhong; Kurths, Jürgen
2015-04-01
The prisoner's dilemma (PD), in which players can either cooperate or defect, is considered a paradigm for studying the evolution of cooperation in spatially structured populations. There, the compact cooperator cluster is identified as a characteristic pattern, and the probability of forming such a pattern depends in turn on the features of the network. In this paper, we investigate the influence of the expansion of the neighborhood on pattern formation by taking a weak PD game with one free parameter T, the temptation to defect. Two different expansion methods for the neighborhood are considered. One is based on a square lattice and expands along four directions, generating networks with degree K = 4m. The other is based on a lattice with a Moore neighborhood and expands along eight directions, generating networks with degree K = 8m. Individuals are placed on the nodes of the networks, interact with their neighbors, and learn from the better one. We find that cooperators can survive for a broad range of degrees 4 ≤ K ≤ 70 by forming a loose type of cooperator cluster. The former simple correspondence between macroscopic patterns and the microscopic PD interactions is broken. Under conditions that are unfavorable for cooperators, such as large T and K, systems prefer to evolve to a loose type of cooperator cluster to support cooperation. However, compared to the well-known compact pattern, it is a suboptimal strategy because it cannot help cooperators dominate the population and always corresponds to a low cooperation level.
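A minimal re-implementation of this kind of model, assuming the standard weak-PD payoff convention R = 1, P = S = 0 with temptation T, and deterministic imitation of the best-performing neighbor; the neighborhood generator mimics the four-direction expansion with K = 4m. This is an illustrative sketch, not the authors' code.

```python
# Weak prisoner's dilemma on an L x L torus with imitate-the-best updating.
import numpy as np

rng = np.random.default_rng(0)
L, T, m, steps = 50, 1.1, 1, 50          # grid size, temptation, radius, rounds
coop = rng.integers(0, 2, size=(L, L))   # 1 = cooperator, 0 = defector

def neighbours(i, j):
    # neighbourhood expanded m sites along 4 directions (degree K = 4m)
    for d in range(1, m + 1):
        yield (i + d) % L, j
        yield (i - d) % L, j
        yield i, (j + d) % L
        yield i, (j - d) % L

def payoffs(c):
    pay = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            for ni, nj in neighbours(i, j):
                if c[i, j] and c[ni, nj]:
                    pay[i, j] += 1.0      # mutual cooperation: R = 1
                elif not c[i, j] and c[ni, nj]:
                    pay[i, j] += T        # defector exploits cooperator
    return pay

for _ in range(steps):
    pay = payoffs(coop)
    new = coop.copy()
    for i in range(L):
        for j in range(L):
            best_i, best_j, best = i, j, pay[i, j]
            for ni, nj in neighbours(i, j):
                if pay[ni, nj] > best:
                    best_i, best_j, best = ni, nj, pay[ni, nj]
            new[i, j] = coop[best_i, best_j]   # imitate the better player
    coop = new

print(f"cooperator fraction after {steps} rounds: {coop.mean():.2f}")
```

Increasing m (and hence K) in this toy model is the knob that, in the paper, drives the transition from compact to loose cooperator clusters.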
SKA weak lensing - III. Added value of multiwavelength synergies for the mitigation of systematics
NASA Astrophysics Data System (ADS)
Camera, Stefano; Harrison, Ian; Bonaldi, Anna; Brown, Michael L.
2017-02-01
In this third paper of a series on radio weak lensing for cosmology with the Square Kilometre Array, we scrutinize synergies between cosmic shear measurements in the radio and optical/near-infrared (IR) bands for mitigating systematic effects. We focus on three main classes of systematics: (i) experimental systematic errors in the observed shear; (ii) signal contamination by intrinsic alignments; and (iii) systematic effects due to incorrect modelling of non-linear scales. First, we show that a comprehensive, multiwavelength analysis provides a self-calibration method for experimental systematic effects, implying only a <50 per cent increment in the errors on cosmological parameters. We also illustrate how the cross-correlation between radio and optical/near-IR surveys alone is able to remove residual systematics with variance as large as 10^-5, i.e. of the same order of magnitude as the cosmological signal. This also opens the possibility of using such a cross-correlation as a means to detect unknown experimental systematics. Secondly, we demonstrate that, thanks to polarization information, radio weak lensing surveys will be able to mitigate contamination by intrinsic alignments, in a way similar but fully complementary to available self-calibration methods based on position-shear correlations. Lastly, we illustrate how radio weak lensing experiments, reaching higher redshifts than those accessible to optical surveys, will probe dark energy and the growth of cosmic structures in regimes less contaminated by non-linearities in the matter perturbations. For instance, the higher redshift bins of radio catalogues peak at z ≃ 0.8-1, whereas their optical/near-IR counterparts are limited to z ≲ 0.5-0.7. This translates into a cosmological signal 2-5 times less contaminated by non-linear perturbations.
NASA Technical Reports Server (NTRS)
Gentry, R. C.; Rodgers, E.; Steranka, J.; Shenk, W. E.
1978-01-01
A regression technique was developed to forecast 24-hour changes in the maximum winds of weak (maximum winds ≤ 65 kt) and strong (maximum winds > 65 kt) tropical cyclones by utilizing satellite-measured equivalent blackbody temperatures around the storm, alone and together with the changes in maximum winds during the preceding 24 hours and the current maximum winds. Independent testing of these regression equations shows that their mean errors are lower than the errors of forecasts made by persistence techniques.
Improved Extreme Learning Machine based on the Sensitivity Analysis
NASA Astrophysics Data System (ADS)
Cui, Licheng; Zhai, Huawei; Wang, Benchao; Qu, Zengtang
2018-03-01
The extreme learning machine (ELM) and its improved variants have weaknesses, such as computational complexity and learning error. After detailed analysis, and drawing on the importance of hidden nodes in SVMs, a novel sensitivity analysis method is proposed that matches intuitive expectations. Based on this analysis, an improved ELM is proposed that can remove hidden nodes before the learning-error criterion is met and can efficiently manage the number of hidden nodes, thereby improving performance. Comparative tests show that it performs better in learning time, accuracy and other respects.
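A basic ELM with hidden-node pruning can be written in a few lines. The sensitivity measure used below (output-weight magnitude scaled by the spread of the hidden activation) is a simple stand-in, since the abstract does not specify the paper's criterion.

```python
# Basic extreme learning machine with a crude sensitivity-based pruning step.
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3*X[:, 0]) + 0.05*rng.standard_normal(200)

n_hidden = 50
W = rng.standard_normal((1, n_hidden))      # random input weights (fixed)
b = rng.standard_normal(n_hidden)           # random biases (fixed)

H = np.tanh(X @ W + b)                      # hidden-layer output matrix
beta = np.linalg.pinv(H) @ y                # least-squares output weights

sensitivity = np.abs(beta) * H.std(axis=0)  # stand-in per-node importance
keep = np.argsort(sensitivity)[-20:]        # prune to the 20 most sensitive

H_p = H[:, keep]
beta_p = np.linalg.pinv(H_p) @ y            # refit on the retained nodes
mse_full = np.mean((H @ beta - y)**2)
mse_pruned = np.mean((H_p @ beta_p - y)**2)
print(f"MSE: {mse_full:.5f} (50 nodes) vs {mse_pruned:.5f} (20 nodes)")
```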
The DES Science Verification Weak Lensing Shear Catalogs
Jarvis, M.
2016-05-01
We present weak lensing shear catalogs for 139 square degrees of data taken during the Science Verification (SV) time for the new Dark Energy Camera (DECam) being used for the Dark Energy Survey (DES). We describe our object selection, point spread function estimation and shear measurement procedures using two independent shear pipelines, IM3SHAPE and NGMIX, which produce catalogs of 2.12 million and 3.44 million galaxies respectively. We also detail a set of null tests for the shear measurements and find that they pass the requirements for systematic errors at the level necessary for weak lensing science applications using the SV data. Furthermore, we discuss some of the planned algorithmic improvements that will be necessary to produce sufficiently accurate shear catalogs for the full 5-year DES, which is expected to cover 5000 square degrees.
NASA Astrophysics Data System (ADS)
Bateni, S. M.; Xu, T.
2015-12-01
Accurate estimation of water and heat fluxes is required for irrigation scheduling, weather prediction, and water resources planning and management. A weak-constraint variational data assimilation (WC-VDA) scheme is developed to estimate water and heat fluxes by assimilating sequences of land surface temperature (LST) observations. The commonly used strong-constraint VDA systems adversely affect the accuracy of water and heat flux estimates because they assume the model is perfect. The WC-VDA approach accounts for structural and model errors and generates more accurate results by adding a model error term to the surface energy balance equation. The two key unknown parameters of the WC-VDA system (CHN, the bulk heat transfer coefficient, and EF, the evaporative fraction) and the model error term are optimized by minimizing the cost function. The WC-VDA model was tested at two sites with contrasting hydrological and vegetative conditions: the Daman site (a wet site located in an oasis area and covered by seeded corn) and the Huazhaizi site (a dry site located in a desert area and covered by sparse grass), both in the middle reaches of the Heihe River basin, northwest China. Compared to the strong-constraint VDA system, the WC-VDA method generates more accurate estimates of water and energy fluxes over both the dry desert site and the wet oasis site.
Structure and thermal expansion of Lu 2O 3 and Yb 2O 3 up to the melting points
Pavlik, Alfred; Ushakov, Sergey V.; Navrotsky, Alexandra; ...
2017-08-24
Knowledge of thermal expansion and high-temperature phase transformations is essential for predicting and interpreting materials behavior under the extreme conditions of high temperature and intense radiation encountered in nuclear reactors. The structure and thermal expansion of Lu2O3 and Yb2O3 were studied in oxygen and argon atmospheres up to their melting temperatures using synchrotron X-ray diffraction on laser-heated levitated samples. Both oxides retained the cubic bixbyite C-type structure in oxygen and argon up to melting. In contrast to fluorite-type structures, the increase in the unit cell parameter of Yb2O3 and Lu2O3 with temperature is linear within experimental error from room temperature to the melting point, with mean thermal expansion coefficients of (8.5 ± 0.6) × 10^-6 K^-1 and (7.7 ± 0.6) × 10^-6 K^-1, respectively. There is no indication of a superionic (Bredig) transition in the C-type structure, or of a previously suggested Yb2O3 phase transformation to a hexagonal phase prior to melting.
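For reference, a mean linear expansion coefficient of this kind follows from a linear fit of the lattice parameter against temperature, α = (da/dT)/a(T0). The sketch below applies this to synthetic data generated with α ≈ 8.5 × 10^-6 K^-1 and a cell parameter of the bixbyite order of magnitude; these are not the measured diffraction data.

```python
# Mean linear thermal expansion coefficient from a linear fit of a(T).
# The data are synthetic, generated to mimic an expansion of ~8.5e-6 K^-1.
import numpy as np

T = np.array([300, 800, 1300, 1800, 2300], dtype=float)   # K
a0, alpha_true = 10.39, 8.5e-6                             # angstrom, K^-1
rng = np.random.default_rng(2)
a = a0*(1 + alpha_true*(T - 300)) + 1e-4*rng.standard_normal(T.size)

slope, intercept = np.polyfit(T, a, 1)        # linear fit of a vs T
alpha = slope / (intercept + slope*300.0)     # referenced to a(300 K)
print(f"fitted mean expansion coefficient: {alpha:.2e} K^-1")
```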
Stable forming conditions and geometrical expansion of L-shape rings in ring rolling process
NASA Astrophysics Data System (ADS)
Quagliato, Luca; Berti, Guido A.; Kim, Dongwook; Kim, Naksoo
2018-05-01
Based on previous research results concerning the radial-axial ring rolling of flat rings, this paper details an innovative approach for determining the stable forming conditions needed to successfully simulate the radial ring rolling of L-shape profiled rings. In addition, an analytical model for estimating the geometrical expansion of an L-shape ring from its initial flat ring preform is proposed and validated by comparing its results with those of numerical simulations. By utilizing the proposed approach, steady forming conditions could be achieved, granting a uniform expansion of the ring throughout the process for all six tested cases of rings with final flange outer diameters ranging from 545 mm to 1440 mm. The validation allowed concluding that the geometrical expansion of the ring, as estimated by the proposed analytical model, is in good agreement with the results of the numerical simulation, with maximum errors of 2.18% in the estimation of the ring wall diameter, 1.42% for the ring flange diameter and 1.87% for the inner diameter of the ring.
NASA Astrophysics Data System (ADS)
Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin
2017-10-01
The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model for full parameter optimization in biased decoy-state QKD with phase-randomized sources. We then adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using an HSPS suffers less than that of decoy-state QKD using a WCS.
Trans-dimensional joint inversion of seabed scattering and reflection data.
Steininger, Gavin; Dettmer, Jan; Dosso, Stan E; Holland, Charles W
2013-03-01
This paper examines joint inversion of acoustic scattering and reflection data to resolve seabed interface roughness parameters (spectral strength, exponent, and cutoff) and geoacoustic profiles. Trans-dimensional (trans-D) Bayesian sampling is applied with both the number of sediment layers and the order (zeroth or first) of auto-regressive parameters in the error model treated as unknowns. A prior distribution that allows fluid sediment layers over an elastic basement in a trans-D inversion is derived and implemented. Three cases are considered: Scattering-only inversion, joint scattering and reflection inversion, and joint inversion with the trans-D auto-regressive error model. Including reflection data improves the resolution of scattering and geoacoustic parameters. The trans-D auto-regressive model further improves scattering resolution and correctly differentiates between strongly and weakly correlated residual errors.
Nagata, Takeshi; Iwata, Suehiro
2004-02-22
The locally projected self-consistent field molecular orbital method for molecular interaction (LP SCF MI) is reformulated for multifragment systems. For the perturbation expansion, two types of local excited orbitals are defined: one is fully local in the basis set on a fragment, and the other has to be partially delocalized over the basis sets on the other fragments. Perturbation expansion calculations restricted to single excitations (LP SE MP2) are tested for the water dimer, the hydrogen fluoride dimer, and collinear symmetric ArM⁺Ar (M = Na and K). The calculated LP SE MP2 binding energies are all close to the corresponding counterpoise-corrected SCF binding energies. By adding the single excitations, the deficiency of LP SCF MI is thus removed. The results suggest that the exclusion of charge-transfer effects in LP SCF MI might indeed be the cause of the underestimation of the binding energy.
Patra, Bikash; Jana, Subrata; Samal, Prasanjit
2018-03-28
The exchange hole, which is one of the principal constituents of the density functional formalism, can be used to design accurate range-separated hybrid functionals in association with an appropriate correlation. In this regard, the exchange hole derived from the density matrix expansion has gained attention due to its fulfillment of some of the desired exact constraints. The new long-range corrected density functional proposed here combines a meta generalized gradient approximation level exchange functional, designed from the density matrix expansion based exchange hole, with ab initio Hartree-Fock exchange through range separation of the Coulomb interaction operator using the standard error-function technique. In association with the Lee-Yang-Parr correlation functional, assessment and benchmarking of this newly constructed range-separated functional on various well-known test sets shows reasonable performance for a broad range of molecular properties, such as thermochemistry, non-covalent interactions, and barrier heights of chemical reactions.
Linear-scaling generation of potential energy surfaces using a double incremental expansion
DOE Office of Scientific and Technical Information (OSTI.GOV)
König, Carolin, E-mail: carolink@kth.se; Christiansen, Ove, E-mail: ove@chem.au.dk
We present a combination of the incremental expansion of potential energy surfaces (PESs), known as the n-mode expansion, with the incremental evaluation of the electronic energy in a many-body approach. The application of semi-local coordinates in this context allows the generation of PESs in a very cost-efficient way. For this, we employ the recently introduced flexible adaptation of local coordinates of nuclei (FALCON) coordinates. By introducing an additional transformation step, concerning only a fraction of the vibrational degrees of freedom, we can achieve linear scaling of the accumulated cost of the single point calculations required in the PES generation. Numerical examples of these double incremental approaches for oligo-phenyl systems show fast convergence with respect to the maximum number of simultaneously treated fragments and only a modest error introduced by the additional transformation step. The approach presented here represents a major step towards the applicability of vibrational wave function methods to sizable, covalently bound systems.
NASA Astrophysics Data System (ADS)
Arai, Shun; Nishizawa, Atsushi
2018-05-01
Gravitational waves (GWs) are generally affected by modifications of the gravity theory during propagation over cosmological distances. We numerically perform a quantitative analysis of Horndeski theory on cosmological scales to constrain it with GW observations in a model-independent way. We formulate a parametrization for a numerical simulation based on the Monte Carlo method and classify the models that agree with the observed accelerating cosmic expansion within the observational errors on the Hubble parameter. As a result, we find that a large group of models in Horndeski theory that mimic the cosmic expansion of the ΛCDM model can be excluded by the simultaneous detection of a GW and its electromagnetic transient counterpart. Based on our result and the latest detection of GW170817 and GRB170817A, we conclude that the subclass of Horndeski theory including the arbitrary functions G4 and G5 can hardly explain cosmic accelerating expansion without fine-tuning.
Effects of thermal inhomogeneity on 4m class mirror substrates
NASA Astrophysics Data System (ADS)
Jedamzik, Ralf; Kunisch, Clemens; Westerhoff, Thomas
2016-07-01
The new ground-based telescope generation is moving to a next stage of performance and resolution. Tolerances on mirror substrate material properties and their homogeneity are coming into focus. The coefficient of thermal expansion (CTE) homogeneity is even more important than the absolute CTE. The error in shape of a mirror, even one of ZERODUR, is affected by changes in temperature and by gradients in temperature. Front-to-back gradients will change the radius of curvature R, which in turn will change the focus. Some systems rely on passive athermalization and do not have means to focus. Similarly, changes in soak temperature will result in surface changes to the extent that there is a non-zero coefficient of thermal expansion. When there are inhomogeneities in CTE, the mirror will react accordingly. Results of numerical experiments are presented discussing the impact of CTE inhomogeneities on the optical performance of 4 m class mirror substrates. Latest improvements in 4 m class ZERODUR CTE homogeneity and in thermal expansion metrology are presented as well.
Non-minimal derivative coupling gravity in cosmology
NASA Astrophysics Data System (ADS)
Gumjudpai, Burin; Rangdee, Phongsaphat
2015-11-01
We give a brief review of the non-minimal derivative coupling (NMDC) scalar field theory, in which there is a non-minimal coupling between the scalar field derivative term and the Einstein tensor. We assume that the expansion is of power-law type or super-acceleration type for small redshift. The Lagrangian includes the NMDC term, a free kinetic term, a cosmological constant term, and a barotropic matter term. For a value of the coupling constant that is compatible with inflation, we use the combined WMAP9 (WMAP9 + eCMB + BAO + H_0) dataset, the PLANCK + WP dataset, and the PLANCK TT, TE, EE + lowP + Lensing + ext datasets to find the value of the cosmological constant in the model. Modeling the expansion with a power-law gives a negative cosmological constant, while the phantom power-law (super-acceleration) expansion gives a positive cosmological constant with a large error bar. The value obtained is of the same order as in the ΛCDM model, since at late times the NMDC effect is tiny due to small curvature.
NASA Astrophysics Data System (ADS)
Khan, Mehbub; Hao, Yun; Hsu, Jong-Ping
2018-01-01
Based on baryon charge conservation and a generalized Yang-Mills symmetry for Abelian (and non-Abelian) groups, we discuss a new baryonic gauge field and its linear potential for two point-like baryon charges. The force between two point-like baryons is repulsive, extremely weak, and independent of distance. For two extended baryonic systems, however, we have a dominant linear force proportional to the separation r. Thus, only in the later stage of the cosmic evolution, when two baryonic galaxies are separated by an extremely large distance, can the new repulsive baryonic force overcome the gravitational attraction. Such a model provides a gauge-field-theoretic understanding of the late-time accelerated cosmic expansion. The baryonic force can be tested by measuring the accelerated Wu-Doppler frequency shifts of supernovae at different distances.
Large-N kinetic theory for highly occupied systems
NASA Astrophysics Data System (ADS)
Walz, R.; Boguslavski, K.; Berges, J.
2018-06-01
We consider an effective kinetic description for quantum many-body systems, which is not based on a weak-coupling or diluteness expansion. Instead, it employs an expansion in the number of field components N of the underlying scalar quantum field theory. Extending previous studies, we demonstrate that the large-N kinetic theory at next-to-leading order is able to describe important aspects of highly occupied systems, which are beyond standard perturbative kinetic approaches. We analyze the underlying quasiparticle dynamics by computing the effective scattering matrix elements analytically and solve numerically the large-N kinetic equation for a highly occupied system far from equilibrium. This allows us to compute the universal scaling form of the distribution function at an infrared nonthermal fixed point within a kinetic description, and we compare to existing lattice field theory simulation results.
High temperature expanding cement composition and use
Nelson, Erik B.; Eilers, Louis H.
1982-01-01
A hydratable cement composition useful for preparing a pectolite-containing expanding cement at temperatures above about 150 °C, comprising a water-soluble sodium salt of a weak acid (a 0.1 molar aqueous solution of which has a pH between about 7.5 and about 11.5), a calcium source, and a silicon source, where the atomic ratio of sodium to calcium to silicon ranges from about 0.3:0.6:1 to about 0.03:1:1; aqueous slurries prepared therefrom; and the use of such slurries for plugging subterranean cavities at a temperature of at least about 150 °C. The invention composition is useful for preparing a pectolite-containing expansive cement having about 0.2 to about 2 percent expansion, by volume, when cured at temperatures of at least 150 °C.
Galactoseismology and the local density of dark matter
Banik, Nilanjan; Widrow, Lawrence M.; Dodelson, Scott
2016-10-08
Here, we model vertical breathing mode perturbations in the Milky Way's stellar disc and study their effects on estimates of the local dark matter density, surface density, and vertical force. Evidence for these perturbations, which involve compression and expansion of the Galactic disc perpendicular to its midplane, comes from the SEGUE, RAVE, and LAMOST surveys. We show that their existence may lead to systematic errors of 10% or greater in the vertical force K_z(z) at |z| = 1.1 kpc. These errors translate to ≳25% errors in estimates of the local dark matter density. Using different mono-abundant subpopulations as tracers offers a way out: if the inferences from all tracers in the Gaia era agree, then the dark matter determination will be robust. Disagreement in the inferences from different tracers will signal the breakdown of the unperturbed model and perhaps provide the means for determining the nature of the perturbation.
Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm
Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; ...
2018-02-12
Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. Here, we use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.
Computation of Molecular Spectra on a Quantum Processor with an Error-Resilient Algorithm
NASA Astrophysics Data System (ADS)
Colless, J. I.; Ramasesh, V. V.; Dahlen, D.; Blok, M. S.; Kimchi-Schwartz, M. E.; McClean, J. R.; Carter, J.; de Jong, W. A.; Siddiqi, I.
2018-02-01
Harnessing the full power of nascent quantum processors requires the efficient management of a limited number of quantum bits with finite coherent lifetimes. Hybrid algorithms, such as the variational quantum eigensolver (VQE), leverage classical resources to reduce the required number of quantum gates. Experimental demonstrations of VQE have resulted in calculation of Hamiltonian ground states, and a new theoretical approach based on a quantum subspace expansion (QSE) has outlined a procedure for determining excited states that are central to dynamical processes. We use a superconducting-qubit-based processor to apply the QSE approach to the H2 molecule, extracting both ground and excited states without the need for auxiliary qubits or additional minimization. Further, we show that this extended protocol can mitigate the effects of incoherent errors, potentially enabling larger-scale quantum simulations without the need for complex error-correction techniques.
Modeling Systematic Error Effects for a Sensitive Storage Ring EDM Polarimeter
NASA Astrophysics Data System (ADS)
Stephenson, Edward; Imig, Astrid
2009-10-01
The Storage Ring EDM Collaboration has obtained a set of measurements detailing the sensitivity of a storage ring polarimeter for deuterons to small geometrical and rate changes. Various schemes, such as the calculation of the cross ratio [1], can cancel effects due to detector acceptance differences and luminosity differences for states of opposite polarization. Such schemes fail at second order in the errors, becoming sensitive to geometrical changes, polarization magnitude differences between opposite polarization states, and changes to the detector response with changing data rates. An expansion of the polarimeter response in a Taylor series based on small errors about the polarimeter operating point can parametrize such effects, primarily in terms of the logarithmic derivatives of the cross section and analyzing power. A comparison will be made to measurements obtained with the EDDA detector at COSY-Jülich. [1] G.G. Ohlsen and P.W. Keaton, Jr., NIM 109, 41 (1973).
Anisotropic thermal expansion in a metal-organic framework.
Madsen, Solveig Røgild; Lock, Nina; Overgaard, Jacob; Iversen, Bo Brummerstedt
2014-06-01
Ionothermal reaction between Mn(II)(acetate)2·4H2O and 1,3,5-benzenetricarboxylic acid (H3BTC) in either of the two ionic liquids 1-ethyl-3-methylimidazolium bromide (EMIMBr) and 1-ethyl-3-methylimidazolium tosylate (EMIMOTs) resulted in the formation of the new metal-organic framework (MOF) EMIM[Mn(II)BTC] (BTC = 1,3,5-benzenetricarboxylate). The compound crystallizes in the orthorhombic space group Pbca with unit-cell parameters a = 14.66658 (12), b = 12.39497 (9), c = 16.63509 (14) Å at 100 K. Multi-temperature single-crystal (15-340 K) and powder X-ray diffraction studies (100-400 K) reveal strongly anisotropic thermal expansion properties. The linear thermal expansion coefficients, αL(l), attain maximum values at 400 K along the a- and b-axes, with αL(a) = 115 × 10⁻⁶ K⁻¹ and αL(b) = 75 × 10⁻⁶ K⁻¹. At 400 K a negative thermal expansion coefficient of −40 × 10⁻⁶ K⁻¹ is observed along the c-axis. The thermal expansion is coupled to a continuous deformation of the framework, which causes the structure to expand in two directions. Due to the rigidity of the linker, the expansion in the ab plane causes the network to contract along the c-axis. Hirshfeld surface analysis has been used to describe the interaction between the framework structure and the EMIM cation that resides within the channel. This reveals a number of rather weak interactions and one governing hydrogen-bonding interaction.
Measurement Techniques for Transmit Source Clock Jitter for Weak Serial RF Links
NASA Technical Reports Server (NTRS)
Lansdowne, Chatwin A.; Schlesinger, Adam M.
2010-01-01
Techniques for filtering clock jitter measurements are developed, in the context of controlling data modulation jitter on an RF carrier to accommodate low signal-to-noise ratio thresholds of high-performance error correction codes. Measurement artifacts from sampling are considered, and a tutorial on interpretation of direct readings is included.
The Strengths and Weaknesses of ISO 9000 in Vocational Education
ERIC Educational Resources Information Center
Bevans-Gonzales, Theresa L.; Nair, Ajay T.
2004-01-01
ISO 9000 is a set of quality standards that assists an organization to identify, correct and prevent errors, and to promote continual improvement. Educational institutions worldwide are implementing ISO 9000 as they face increasing external pressure to maintain accountability for funding. Similar to other countries, in the United States vocational…
ERIC Educational Resources Information Center
Hariri, Ruaa Osama
2016-01-01
Children with Attention-Deficiency/Hyperactive Disorder (ADHD) often have co-existing learning disabilities and developmental weaknesses or delays in some areas including speech (Rief, 2005). Seeing that phonological disorders include articulation errors and other forms of speech disorders, studies pertaining to children with ADHD symptoms who…
Is It True What They Say about Dixie?
ERIC Educational Resources Information Center
Kell, Carl L.
In analyzing the reasons for George McGovern's failure in the presidential election of 1972, the author cites weaknesses in rhetoric, rhetorical strategy, and confrontation with and answers to the issues, and the apt handling of the South by Richard Nixon's aide, Harry Dent. McGovern's continual citation of the "errors of our ways" and…
Current Trends in Computer-Based Language Instruction.
ERIC Educational Resources Information Center
Hart, Robert S.
1987-01-01
A discussion of computer-based language instruction examines the quality of materials currently in use and looks at developments in the field. It is found that language courseware is generally weak in the areas of error analysis and feedback, communicative realism, and convenience of lesson authoring. A review of research under way to improve…
Fourier/Chebyshev methods for the incompressible Navier-Stokes equations in finite domains
NASA Technical Reports Server (NTRS)
Corral, Roque; Jimenez, Javier
1992-01-01
A fully spectral numerical scheme is presented for the incompressible Navier-Stokes equations in domains which are infinite or semi-infinite in one dimension. The domain is not mapped, and standard Fourier or Chebyshev expansions can be used. The handling of the infinite domain does not introduce any significant overhead. The scheme assumes that the vorticity in the flow is essentially concentrated in a finite region, which is represented numerically by standard spectral collocation methods. To accommodate the slow exponential decay of the velocities at infinity, extra expansion functions are introduced, which are handled analytically. A detailed error analysis is presented, and two applications to Direct Numerical Simulation of turbulent flows are discussed in relation to the numerical performance of the scheme.
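For readers unfamiliar with the Chebyshev collocation machinery such schemes rest on, the following sketch builds the standard Chebyshev differentiation matrix (in the form popularized by Trefethen) and verifies spectral accuracy on a smooth function. It is a generic illustration, not the authors' code.

    # Standard Chebyshev collocation differentiation matrix; generic
    # illustration of the spectral machinery, not the authors' scheme.
    import numpy as np

    def cheb(N):
        """Chebyshev differentiation matrix D and Gauss-Lobatto nodes x."""
        if N == 0:
            return np.zeros((1, 1)), np.array([1.0])
        x = np.cos(np.pi * np.arange(N + 1) / N)
        c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
        X = np.tile(x, (N + 1, 1)).T
        dX = X - X.T
        D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
        D -= np.diag(D.sum(axis=1))   # negative-sum trick for the diagonal
        return D, x

    D, x = cheb(16)
    err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))  # d/dx e^x = e^x
    print(f"max derivative error: {err:.1e}")        # spectrally small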
Discrete conservation properties for shallow water flows using mixed mimetic spectral elements
NASA Astrophysics Data System (ADS)
Lee, D.; Palha, A.; Gerritsma, M.
2018-03-01
A mixed mimetic spectral element method is applied to solve the rotating shallow water equations. The mixed method uses the recently developed spectral element histopolation functions, which exactly satisfy the fundamental theorem of calculus with respect to the standard Lagrange basis functions in one dimension. These are used to construct tensor product solution spaces which satisfy the generalized Stokes theorem, as well as the annihilation of the gradient operator by the curl and of the curl by the divergence. This allows for the exact conservation of first-order moments (mass, vorticity), as well as higher moments (energy, potential enstrophy), subject to the truncation error of the time-stepping scheme. The continuity equation is solved in the strong form, such that mass conservation holds pointwise, while the momentum equation is solved in the weak form such that vorticity is globally conserved. While mass, vorticity and energy conservation hold for any quadrature rule, potential enstrophy conservation depends on exact spatial integration. The method possesses a weak form statement of geostrophic balance due to the compatible nature of the solution spaces, and exhibits arbitrarily high order spatial error convergence.
Sayago, Ana; Asuero, Agustin G
2006-09-14
A bilogarithmic hyperbolic cosine method for the spectrophotometric evaluation of stability constants of 1:1 weak complexes from continuous variation data has been devised and applied to literature data. A weighting scheme, however, is necessary in order to take into account the transformation for linearization. The method may be considered a useful alternative to methods in which one variable is involved on both sides of the basic equation (i.e., those of Heller and Schwarzenbach, Likussar, and Adsul and Ramanathan). Classical least squares leads in those instances to biased and approximate stability constants and limiting absorbance values. The advantages of the proposed method are that it gives a clear indication of the existence of only one complex in solution, it is flexible enough to allow for weighting of measurements, and the computation procedure yields the best value of log β11 and its limit of error. The agreement between the values obtained by applying the weighted hyperbolic cosine method and the non-linear regression (NLR) method is good, with the mean quadratic error at a minimum in both cases.
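The role of the weighting scheme can be illustrated on a generic linearization (not the authors' hyperbolic cosine transform): when a model is fitted after a transformation z = f(y), error propagation gives Var(z_i) ≈ f'(y_i)² σ_i², so the weights must scale as 1/f'(y_i)². The sketch below shows this for a logarithmic linearization of an exponential decay; all values are invented.

    # Weighted least squares after a log linearization: for z = ln(y)
    # with uniform noise on y, Var(z_i) ~ (sigma / y_i)^2, so w_i ~ y_i^2.
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.linspace(0.1, 4.0, 40)
    y = 2.5 * np.exp(-0.8 * x) + rng.normal(0.0, 0.01, x.size)  # A e^{-kx}
    x, y = x[y > 0], y[y > 0]        # the log requires positive data

    z = np.log(y)
    W = np.diag(y**2)                # weights from the transformation
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ z)  # weighted normal eqs.
    print(f"A = {np.exp(beta[0]):.3f}, k = {-beta[1]:.3f}")  # ~2.5 and ~0.8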
NASA Astrophysics Data System (ADS)
Gnedin, Nickolay Y.; Semenov, Vadim A.; Kravtsov, Andrey V.
2018-04-01
An optimally efficient explicit numerical scheme for solving fluid dynamics equations, or any other parabolic or hyperbolic system of partial differential equations, should allow local regions to advance in time with their own, locally constrained time steps. However, such a scheme can result in violation of the Courant-Friedrichs-Lewy (CFL) condition, which is manifestly non-local. Although the violations can be considered to be "weak" in a certain sense and the corresponding numerical solution may be stable, such a calculation does not guarantee the correct propagation speed for arbitrary waves. We use an experimental fluid dynamics code that allows cubic "patches" of grid cells to step with independent, locally constrained time steps to demonstrate how the CFL condition can be enforced by imposing a constraint on the time steps of neighboring patches. We perform several numerical tests that illustrate errors introduced in the numerical solutions by weak CFL condition violations and show how strict enforcement of the CFL condition eliminates these errors. In all our tests the strict enforcement of the CFL condition does not impose a significant performance penalty.
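A minimal sketch of the kind of neighbor constraint described above, under the assumption that it suffices to cap each patch's step at a fixed multiple of its neighbors' smallest step and iterate to a fixed point; the scheme and numbers are illustrative, not the authors' implementation.

    # Assumed neighbor-constraint scheme (illustrative only): cap each
    # patch's time step at ratio * min(neighbor steps), relax to fixed point.
    def enforce_cfl(dt_local, neighbors, ratio=2.0):
        """dt_local: locally constrained steps; neighbors: adjacency list.
        No patch may outrun the signal arriving from adjacent patches."""
        dt = list(dt_local)
        changed = True
        while changed:               # iterate until globally consistent
            changed = False
            for i, nbrs in enumerate(neighbors):
                cap = ratio * min(dt[j] for j in nbrs)
                if dt[i] > cap:
                    dt[i] = cap
                    changed = True
        return dt

    # Four patches in a line; patch 3 wants a far larger step than patch 0.
    dt = enforce_cfl([1.0, 4.0, 16.0, 64.0], [[1], [0, 2], [1, 3], [2]])
    print(dt)   # [1.0, 2.0, 4.0, 8.0]: large steps are pulled down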
Heralded creation of photonic qudits from parametric down-conversion using linear optics
NASA Astrophysics Data System (ADS)
Yoshikawa, Jun-ichi; Bergmann, Marcel; van Loock, Peter; Fuwa, Maria; Okada, Masanori; Takase, Kan; Toyama, Takeshi; Makino, Kenzo; Takeda, Shuntaro; Furusawa, Akira
2018-05-01
We propose an experimental scheme to generate, in a heralded fashion, arbitrary quantum superpositions of two-mode optical states with a fixed total photon number n, based on weakly squeezed two-mode squeezed state resources (obtained via weak parametric down-conversion), linear optics, and photon detection. Arbitrary d-level (qudit) states can be created this way, where d = n + 1. Furthermore, we experimentally demonstrate our scheme for n = 2. The resulting qutrit states are characterized via optical homodyne tomography. We also discuss possible extensions to more than two modes, concluding that, in general, our approach ceases to work in this case. For illustration and with regard to possible applications, we explicitly calculate a few examples such as NOON states and logical qubit states for quantum error correction. In particular, our approach enables one to construct bosonic qubit error-correction codes against amplitude damping (photon loss), with a typical suppression of √n − 1 losses, spanned by two logical codewords that each correspond to an n-photon superposition for two bosonic modes.
Gravitational particle production in braneworld cosmology.
Bambi, C; Urban, F R
2007-11-09
Gravitational particle production in a time-variable metric of an expanding universe is efficient only when the Hubble parameter H is not too small in comparison with the particle mass. In standard cosmology, the huge value of the Planck mass M_Pl makes the mechanism phenomenologically irrelevant. On the other hand, in braneworld cosmology the expansion rate of the early Universe can be much faster, and many weakly interacting particles can be abundantly created. Cosmological implications are discussed.
Strategic business planning for internal medicine.
Ervin, F R
1996-07-01
The internal medicine generalist is at market risk with the expansion of managed care. The cottage industry of academic departments of internal medicine should apply more business tools to the internal medicine business problem. A strengths, weaknesses, opportunities, threats (SWOT) analysis demonstrates the high vulnerability of the internal medicine generalist initiative. Recommitment to the professional values of internal medicine and an enhanced focus on the master clinician as the competitive core competency of internal medicine will be necessary to retain image and market share.
Ferromagnetism versus slow paramagnetic relaxation in Fe-doped Li3N
NASA Astrophysics Data System (ADS)
Fix, M.; Jesche, A.; Jantz, S. G.; Bräuninger, S. A.; Klauss, H.-H.; Manna, R. S.; Pietsch, I. M.; Höppe, H. A.; Canfield, P. C.
2018-02-01
We report on isothermal magnetization, Mössbauer spectroscopy, and magnetostriction as well as temperature-dependent alternating-current (ac) susceptibility, specific heat, and thermal expansion of single crystalline and polycrystalline Li2(Li1−xFex)N with x = 0 and x ≈ 0.30. Magnetic hysteresis emerges at temperatures below T ≈ 50 K with coercivity fields of up to μ0H = 11.6 T at T = 2 K and magnetic anisotropy energies of 310 K (27 meV). The ac susceptibility is strongly frequency-dependent (f = 10–10 000 Hz) and reveals an effective energy barrier for spin reversal of ΔE ≈ 1100 K (90 meV). The relaxation times follow Arrhenius behavior for T > 25 K. For T < 10 K, however, the relaxation times of τ ≈ 10¹⁰ s are only weakly temperature-dependent, indicating the relevance of a quantum tunneling process instead of thermal excitations. The magnetic entropy amounts to more than 25 J mol_Fe⁻¹ K⁻¹, which significantly exceeds R ln 2, the value expected for the entropy of a ground-state doublet. Thermal expansion and magnetostriction indicate a weak magnetoelastic coupling, in accordance with slow relaxation of the magnetization. The classification of Li2(Li1−xFex)N as a ferromagnet is stressed and contrasted with highly anisotropic and slowly relaxing paramagnetic behavior.
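The quoted barrier of ΔE ≈ 1100 K is the slope of ln τ against 1/T in the Arrhenius regime, τ = τ₀ exp(ΔE/T) with k_B absorbed into ΔE. A minimal sketch of that extraction, with invented relaxation-time data:

    # Arrhenius fit of relaxation times from ac susceptibility:
    # ln(tau) = ln(tau0) + DeltaE / T. Data values below are invented.
    import numpy as np

    T = np.array([30.0, 35.0, 40.0, 45.0, 50.0])                # K (mock)
    tau = 1e-9 * np.exp(1100.0 / T) * (1 + 0.05 * np.sin(T))    # s, mock

    slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
    print(f"DeltaE ~ {slope:.0f} K, tau0 ~ {np.exp(intercept):.1e} s")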
Biological effects due to weak magnetic field on plants
NASA Astrophysics Data System (ADS)
Belyavskaya, N. A.
2004-01-01
Throughout the evolution process, Earth's magnetic field (MF, about 50 μT) was a natural component of the environment for living organisms. Biological objects flying on planned long-term interplanetary missions would experience much weaker magnetic fields, since the galactic MF is known to be 0.1-1 nT. However, the role of weak magnetic fields and their influence on the functioning of biological organisms are still insufficiently understood and are actively studied. Numerous experiments with seedlings of different plant species placed in a weak magnetic field have shown that the growth of their primary roots is inhibited during early germination stages in comparison with controls. The proliferative activity and cell reproduction in the meristem of plant roots are reduced in a weak magnetic field. The cell reproductive cycle slows down due to the expansion of the G1 phase in many plant species (and of the G2 phase in flax and lentil roots), while other phases of the cell cycle remain relatively stable. In plant cells exposed to a weak magnetic field, the functional activity of the genome at the early pre-replicate period is shown to decrease. A weak magnetic field causes intensification of protein synthesis and disintegration in plant roots. At the ultrastructural level, changes in the distribution of condensed chromatin and nucleolus compactization in nuclei, noticeable accumulation of lipid bodies, development of a lytic compartment (vacuoles, cytosegresomes and paramural bodies), and reduction of phytoferritin in plastids were observed in meristem cells of pea roots exposed to a weak magnetic field. Mitochondria were found to be very sensitive to a weak magnetic field: their size and relative volume in cells increase, the matrix becomes electron-transparent, and the cristae are reduced. Cytochemical studies indicate that cells of plant roots exposed to a weak magnetic field show Ca²⁺ over-saturation in all organelles and in the cytoplasm, unlike the controls. The data presented suggest that prolonged exposure of plants to a weak magnetic field may cause different biological effects at the cellular, tissue and organ levels, which may be functionally related to systems that regulate plant metabolism, including intracellular Ca²⁺ homeostasis. However, our understanding of the very complex fundamental mechanisms and sites of interaction between weak magnetic fields and biological systems is still incomplete and deserves strong research effort.
NASA Astrophysics Data System (ADS)
Chaves-Montero, Jonás; Angulo, Raúl E.; Hernández-Monteagudo, Carlos
2018-07-01
In the upcoming era of high-precision galaxy surveys, it becomes necessary to understand the impact of redshift uncertainties on cosmological observables. In this paper we explore the effect of sub-percent photometric redshift errors (photo-z errors) on galaxy clustering and baryonic acoustic oscillations (BAOs). Using analytic expressions and results from 1000 N-body simulations, we show how photo-z errors modify the amplitude of moments of the 2D power spectrum, their variances, the amplitude of BAOs, and the cosmological information in them. We find that (a) photo-z errors suppress the clustering on small scales, increasing the relative importance of shot noise, and thus reducing the interval of scales available for BAO analyses; (b) photo-z errors decrease the smearing of BAOs due to non-linear redshift-space distortions (RSDs) by giving less weight to line-of-sight modes; and (c) photo-z errors (and small-scale RSD) induce a scale dependence on the information encoded in the BAO scale, and that reduces the constraining power on the Hubble parameter. Using these findings, we propose a template that extracts unbiased cosmological information from samples with photo-z errors with respect to cases without them. Finally, we provide analytic expressions to forecast the precision in measuring the BAO scale, showing that spectro-photometric surveys will measure the expansion history of the Universe with a precision competitive to that of spectroscopic surveys.
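The suppression of line-of-sight modes described above is commonly modeled by a Gaussian damping of the power spectrum, P_obs(k, μ) = P(k) exp[−(k μ σ_r)²] with σ_r = c σ_z / H(z). The snippet below evaluates that standard model for illustrative numbers; the exact form and values are assumptions for illustration, not taken from the paper.

    # Standard Gaussian photo-z damping of the redshift-space power
    # spectrum; numbers are illustrative only.
    import numpy as np

    c = 299792.458                   # km/s
    sigma_z = 0.003 * (1 + 1.0)      # sub-percent photo-z error at z = 1
    H = 120.0                        # km/s/Mpc at z ~ 1 (approximate)
    sigma_r = c * sigma_z / H        # Mpc, radial smearing scale

    k = 0.1                          # wavenumber of a BAO-scale mode
    for mu in (0.0, 0.5, 1.0):       # cosine of angle to the line of sight
        damp = np.exp(-(k * mu * sigma_r) ** 2)
        print(f"mu = {mu:.1f}: power suppressed to {damp:.2f} of true value")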
NASA Astrophysics Data System (ADS)
Plazas, A. A.; Shapiro, C.; Kannawadi, A.; Mandelbaum, R.; Rhodes, J.; Smith, R.
2016-10-01
Weak gravitational lensing (WL) is one of the most powerful techniques to learn about the dark sector of the universe. To extract the WL signal from astronomical observations, galaxy shapes must be measured and corrected for the point-spread function (PSF) of the imaging system with extreme accuracy. Future WL missions, such as NASA's Wide-Field Infrared Survey Telescope (WFIRST), will use a family of hybrid near-infrared complementary metal-oxide-semiconductor detectors (HAWAII-4RG) that are untested for accurate WL measurements. Like all image sensors, these devices are subject to conversion gain nonlinearities (voltage response to collected photo-charge) that bias the shape and size of bright objects such as reference stars that are used in PSF determination. We study this type of detector nonlinearity (NL) and show how to derive requirements on it from WFIRST PSF size and ellipticity requirements. We simulate the PSF optical profiles expected for WFIRST and measure the fractional error in the PSF size (ΔR/R) and the absolute error in the PSF ellipticity (Δe) as a function of star magnitude and the NL model. For our nominal NL model (a quadratic correction), we find that, uncalibrated, NL can induce an error of ΔR/R = 1 × 10⁻² and Δe₂ = 1.75 × 10⁻³ in the H158 bandpass for the brightest unsaturated stars in WFIRST. In addition, our simulations show that to limit the bias of ΔR/R and Δe in the H158 band to ~10% of the estimated WFIRST error budget, the quadratic NL model parameter β must be calibrated to ~1% and ~2.4%, respectively. We present a fitting formula that can be used to estimate WFIRST detector NL requirements once a true PSF error budget is established.
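A minimal sketch of the quadratic NL model named above, under the common convention S = Q − βQ² for collected charge Q, together with its exact inversion; the β value and the convention itself are assumptions for illustration.

    # Quadratic conversion-gain nonlinearity: S = Q - beta * Q^2, and its
    # exact inversion for calibration. beta and charges are hypothetical.
    import numpy as np

    beta = 2e-7                        # NL parameter (1/electron), assumed

    def apply_nl(Q):
        return Q - beta * Q**2         # brighter pixels suppressed more

    def correct_nl(S, beta_est):
        # Invert S = Q - beta*Q^2 for Q (physical root of the quadratic).
        return (1.0 - np.sqrt(1.0 - 4.0 * beta_est * S)) / (2.0 * beta_est)

    Q = np.array([1e3, 1e4, 1e5])      # electrons, faint to bright pixels
    S = apply_nl(Q)
    print(S / Q)                       # fractional suppression grows with flux
    print(correct_nl(S, beta) / Q)     # ~1.0: exact beta restores linearity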
Mass Mapping Abell 2261 with Kinematic Weak Lensing: A Pilot Study for NASA's WFIRST Mission
NASA Astrophysics Data System (ADS)
Eifler, Tim
2015-02-01
We propose to investigate a new method to extract cosmological information from weak gravitational lensing in the context of the mission design and requirements of NASA's Wide-Field Infrared Survey Telescope (WFIRST). In a recent paper (Huff, Krause, Eifler, George, Schlegel 2013) we describe a new method for reducing the shape noise in weak lensing measurements by an order of magnitude. Our method relies on spectroscopic measurements of disk galaxy rotation and makes use of the well-established Tully-Fisher (TF) relation in order to control for the intrinsic orientations of galaxy disks. Whereas shape noise is one of the major limitations for current weak lensing experiments, it ceases to be an important source of statistical error in our newly proposed technique. Specifically, we propose a pilot study that maps the projected mass distribution in the massive cluster Abell 2261 (z = 0.225) to infer whether this promising technique faces systematics that prohibit its application to WFIRST. In addition to the cosmological weak lensing prospects, these measurements will also allow us to test kinematic lensing in the context of cluster mass reconstruction with a drastically improved signal-to-noise ratio (S/N) per galaxy.
Model test on partial expansion in stratified subsidence during foundation pit dewatering
NASA Astrophysics Data System (ADS)
Wang, Jianxiu; Deng, Yansheng; Ma, Ruiqiang; Liu, Xiaotian; Guo, Qingfeng; Liu, Shaoli; Shao, Yule; Wu, Linbo; Zhou, Jie; Yang, Tianliang; Wang, Hanmei; Huang, Xinlei
2018-02-01
Partial expansion was observed in stratified subsidence during foundation pit dewatering. However, the phenomenon was suspected to be an error, because compression of the layers is what is expected when subsidence occurs. A slice of the subsidence cone induced by drawdown was selected as the prototype, and model tests were performed to investigate the phenomenon. The underlying confined aquifer was represented by a movable rigid plate with a hinge at one end. The overlying layers were simulated with remolded materials collected from a construction site. Model tests performed under this conceptual model indicated that partial expansion occurred in stratified settlements under coordinated deformation and consolidation conditions. During foundation pit dewatering, rapid drawdown resulted in rapid subsidence in the dewatered confined aquifer. The rapidly subsiding confined aquifer top formed the bottom deformation boundary of the overlying layers. Non-coordinated deformation was observed at the top and bottom of the subsiding overlying layers: the subsidence of the overlying layers was larger at the bottom than at the top, so the layers expanded and became thicker. The phenomenon was verified using a numerical simulation based on the finite difference method. Compared with the numerical simulation results, the boundary effect of the physical tests was obvious at the observation point close to the movable endpoint. The tensile stress induced in the overlying soil layers by the settlement of the dewatered confined aquifer contributed to the expansion phenomenon. The partial expansion of overlying soil layers is defined as inversed rebound, induced by inversed coordinated deformation. Compression was induced by consolidation in the overlying soil layers because of drainage, and partial expansion occurred when the expansion exceeded the compression. Considering the inversed rebound, the traditional layer-wise summation method for calculating subsidence should be revised and improved.
Second-order shaped pulses for solid-state quantum computation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sengupta, Pinaki
2008-01-01
We present the construction and detailed analysis of highly optimized self-refocusing pulse shapes for several rotation angles. We characterize the constructed pulses by the coefficients appearing in the Magnus expansion up to second order. This allows a semianalytical analysis of the performance of the constructed shapes in sequences and composite pulses by computing the corresponding leading-order error operators. Higher orders can be analyzed with the numerical technique suggested by us previously. We illustrate the technique by analyzing several composite pulses designed to protect against pulse amplitude errors, and decoupling sequences for potentially long chains of qubits with on-site and nearest-neighbor couplings.
Tunnel ionization of atoms and molecules: How accurate are the weak-field asymptotic formulas?
NASA Astrophysics Data System (ADS)
Labeye, Marie; Risoud, François; Maquet, Alfred; Caillat, Jérémie; Taïeb, Richard
2018-05-01
Weak-field asymptotic formulas for the tunnel ionization rate of atoms and molecules in strong laser fields are often used in the analysis of strong-field recollision experiments. We investigate their accuracy and domain of validity for different model systems by confronting them with exact numerical results, obtained by solving the time-dependent Schrödinger equation. We find that corrections that take the dc Stark shift into account are a simple and efficient way to improve the formulas. Furthermore, analyzing the different approximations used, we show that error compensation plays a crucial role in the fair agreement between exact and analytical results.
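As a concrete instance of such a formula, the textbook weak-field asymptotic rate for ground-state hydrogen in a static field F (atomic units) is w(F) = (4/F) exp[−2/(3F)]. The sketch below evaluates it, with a generic exponential factor standing in for a dc-Stark-type correction; the correction coefficient is a placeholder, not the one derived in the paper.

    # Weak-field asymptotic ionization rate for H(1s) in a static field
    # (atomic units), plus a placeholder Stark-type correction factor.
    import math

    def rate_lo(F):
        """Leading-order static-field rate, w = (4/F) exp(-2/(3F))."""
        return (4.0 / F) * math.exp(-2.0 / (3.0 * F))

    def rate_corrected(F, c=2.25):   # c is a hypothetical coefficient
        return rate_lo(F) * math.exp(-c * F)

    for F in (0.02, 0.05, 0.08):
        print(f"F = {F}: LO = {rate_lo(F):.3e}, "
              f"corrected = {rate_corrected(F):.3e}")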
A negentropy minimization approach to adaptive equalization for digital communication systems.
Choi, Sooyong; Lee, Te-Won
2004-07-01
In this paper, we introduce and investigate a new adaptive equalization method based on minimizing the approximate negentropy of the estimation error for a finite-length equalizer. We consider an approximate negentropy using nonpolynomial expansions of the estimation error as a new performance criterion to improve on the performance of a linear equalizer based on the minimum mean squared error (MMSE). Negentropy includes higher-order statistical information, and its minimization provides improved convergence, performance, and accuracy compared to traditional methods such as MMSE in terms of bit error rate (BER). The proposed negentropy minimization (NEGMIN) equalizer has two kinds of solutions, the MMSE solution and an alternative one, depending on the ratio of the normalization parameters. The NEGMIN equalizer has the best BER performance when the ratio of the normalization parameters is properly adjusted to maximize the output power (variance) of the equalizer. Simulation experiments show that the BER performance of the NEGMIN equalizer with the non-MMSE solution has similar characteristics to the adaptive minimum bit error rate (AMBER) equalizer. The main advantage of the proposed equalizer is that it needs significantly fewer training symbols than the AMBER equalizer. Furthermore, the proposed equalizer is more robust to nonlinear distortions than the MMSE equalizer.
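For orientation, a widely used nonpolynomial approximation of negentropy is Hyvärinen's two-term form J(e) ≈ k₁ E[e·exp(−e²/2)]² + k₂ (E[exp(−e²/2)] − √(1/2))². The paper's expansion may differ, so the sketch below is only a generic illustration of how such a criterion scores the non-Gaussianity of an error signal.

    # Hyvarinen-style nonpolynomial negentropy approximation; generic
    # illustration, not necessarily the expansion used in the paper.
    import numpy as np

    def negentropy_approx(e):
        e = (e - e.mean()) / e.std()       # standardize the error signal
        k1 = 36.0 / (8.0 * np.sqrt(3.0) - 9.0)
        k2 = 24.0 / (16.0 * np.sqrt(3.0) - 27.0)
        t1 = np.mean(e * np.exp(-e**2 / 2)) ** 2
        t2 = (np.mean(np.exp(-e**2 / 2)) - np.sqrt(0.5)) ** 2
        return k1 * t1 + k2 * t2

    rng = np.random.default_rng(7)
    print(negentropy_approx(rng.normal(size=100_000)))   # ~0 for Gaussian
    print(negentropy_approx(rng.laplace(size=100_000)))  # > 0, non-Gaussian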
[Improving blood safety: error management in transfusion medicine].
Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana
2014-01-01
The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the donor and ending with transfusion to the patient. The concept involves a quality management system with systematic monitoring of adverse reactions and incidents involving the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. A one-year retrospective study was based on the collection, analysis, and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were classified according to type, frequency, and the part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor or patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain, with most identified in the preanalytical phase. The human factor was responsible for the largest number of errors. An error reporting system has an important role in error management and in reducing the transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. A large percentage of errors in transfusion medicine can be avoided, and prevention is cost-effective, systematic, and applicable.
Efficient Z gates for quantum computing
NASA Astrophysics Data System (ADS)
McKay, David C.; Wood, Christopher J.; Sheldon, Sarah; Chow, Jerry M.; Gambetta, Jay M.
2017-08-01
For superconducting qubits, microwave pulses drive rotations around the Bloch sphere. The phase of these drives can be used to generate zero-duration arbitrary virtual Z gates which, combined with two Xπ/2 gates, can generate any SU(2) gate. Here we show how to best utilize these virtual Z gates to both improve algorithms and correct pulse errors. We perform randomized benchmarking using a Clifford set of Hadamard and Z gates and show that the error per Clifford is reduced versus a set consisting of standard finite-duration X and Y gates. Z gates can correct unitary rotation errors for weakly anharmonic qubits as an alternative to pulse-shaping techniques such as derivative removal by adiabatic gate (DRAG). We investigate leakage and show that a combination of DRAG pulse shaping to minimize leakage and Z gates to correct rotation errors realizes a 13.3 ns Xπ/2 gate characterized by low error [1.95(3) × 10⁻⁴] and low leakage [3.1(6) × 10⁻⁶]. Ultimately leakage is limited by the finite temperature of the qubit, but this limit is two orders of magnitude smaller than pulse errors due to decoherence.
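The statement that two Xπ/2 pulses plus virtual Z's generate any SU(2) gate can be checked directly: up to a global phase, U(θ, φ, λ) = Z(φ) X(π/2) Z(π − θ) X(π/2) Z(λ + π) in one common convention. Phase conventions vary between papers, so treat the exact angle bookkeeping here as an assumption; the numerical check below verifies this particular form.

    # Numerical check of the ZXZXZ decomposition behind virtual-Z gates.
    import numpy as np

    def Z(t):
        return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

    def X(t):
        c, s = np.cos(t / 2), np.sin(t / 2)
        return np.array([[c, -1j * s], [-1j * s, c]])

    def u3(theta, phi, lam):
        """Generic single-qubit gate from two X(pi/2) pulses, three Z's."""
        return (Z(phi) @ X(np.pi / 2) @ Z(np.pi - theta)
                @ X(np.pi / 2) @ Z(lam + np.pi))

    theta, phi, lam = 1.1, 0.4, -0.7
    U = u3(theta, phi, lam)
    T = np.array([[np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
                  [np.exp(1j * phi) * np.sin(theta / 2),
                   np.exp(1j * (phi + lam)) * np.cos(theta / 2)]])
    print(abs(np.trace(T.conj().T @ U)) / 2)   # 1.0 up to rounding error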
NASA Technical Reports Server (NTRS)
Otterman, J.; Susskind, J.; Dalu, G.; Kratz, D.; Goldberg, I. L.
1992-01-01
The impact of water-emission anisotropy on remotely sensed long-wave data has been studied. Water emission from a calm body is formulated for facile computation of radiative transfer in the atmosphere. The errors stemming from the blackbody assumption are calculated for cases of a purely absorbing or a purely scattering atmosphere, taking the optical properties of the atmosphere as known. For an absorbing atmosphere, the errors in the sea-surface temperature (SST) are found to be always reduced and to be the same whether measurements are made from space or at any level of the atmosphere. The inferred optical thickness τ of an absorbing layer can be in error under the blackbody assumption by a Δτ of 0.01-0.08, while the inferred optical thickness of a scattering layer can be in error by a larger amount, a Δτ of 0.03-0.13. It is concluded that the error Δτ depends only weakly on the actual optical thickness and the viewing angle, but is rather sensitive to the wavelength of the measurement.
NASA Technical Reports Server (NTRS)
Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.
2002-01-01
The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb-scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present a detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest and can produce retrieval errors on the order of 10-20 percent for a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given an assumed instrument signal-to-noise ratio of 200). These can produce errors of up to 10 percent in the retrieved ozone at altitudes below 20 km, but little error above that level.
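The optimal estimation technique referred to above reduces, in the linear Gaussian case, to the standard maximum a posteriori update x̂ = x_a + (Kᵀ S_e⁻¹ K + S_a⁻¹)⁻¹ Kᵀ S_e⁻¹ (y − K x_a). The sketch below exercises that formula on a toy problem; all matrices and sizes are invented, not SOLSE/LORE values.

    # Toy linear optimal-estimation (MAP) retrieval step; all quantities
    # are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 20, 12                    # state (profile) and measurement sizes
    K = rng.normal(size=(m, n))      # toy Jacobian (weighting functions)
    x_true = np.linspace(1.0, 2.0, n)
    x_a = np.ones(n)                 # a priori profile
    Se = 0.01 * np.eye(m)            # measurement-noise covariance
    Sa = 0.25 * np.eye(n)            # a priori covariance

    y = K @ x_true + rng.multivariate_normal(np.zeros(m), Se)
    A = K.T @ np.linalg.inv(Se) @ K + np.linalg.inv(Sa)
    x_hat = x_a + np.linalg.solve(A, K.T @ np.linalg.inv(Se) @ (y - K @ x_a))
    print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))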
Expansion of Countermine Lidar UAV-based System (CLUBS)
2012-09-30
Park, Joong Yong (Optech Inc., Kiln, Mississippi)
...of conventional "half-peak to half-peak" algorithm to calculate slant distance results in significantly high depth error compared to the counterpart
DOE Office of Scientific and Technical Information (OSTI.GOV)
Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno
2016-09-15
The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the "curse of dimensionality", namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor-product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.
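The key idea of a PCE meta-model can be shown in one dimension: expand the response of an "expensive" model onto Hermite polynomials of a standard-normal input and fit the coefficients by least squares from a small experimental design. The sketch below is a toy illustration, not the sparse PCE or LRA algorithms discussed in the text.

    # One-dimensional polynomial chaos expansion fitted by least squares;
    # toy illustration of the meta-modeling idea.
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander

    rng = np.random.default_rng(3)
    model = lambda x: np.sin(x) + 0.1 * x**2   # stand-in "expensive" model
    xi = rng.standard_normal(60)               # experimental design
    y = model(xi)

    P = 8                                      # truncation order
    Psi = hermevander(xi, P)                   # probabilists' Hermite basis
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

    xi_test = rng.standard_normal(10_000)
    err = np.mean((hermevander(xi_test, P) @ coef - model(xi_test)) ** 2)
    print(f"validation MSE: {err:.2e}")        # small for this smooth model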
A penalty-based nodal discontinuous Galerkin method for spontaneous rupture dynamics
NASA Astrophysics Data System (ADS)
Ye, R.; De Hoop, M. V.; Kumar, K.
2017-12-01
Numerical simulation of dynamic rupture processes with slip is critical to understanding the earthquake source process and the generation of ground motions. However, it can be challenging due to the nonlinear friction laws interacting with seismicity, coupled with the discontinuous boundary conditions across the rupture plane. In practice, inhomogeneities in topography, fault geometry, elastic parameters and permeability add extra complexity. We develop a nodal discontinuous Galerkin method to simulate seismic wave phenomena with slipping boundary conditions, including fluid-solid boundaries and ruptures. By introducing a novel penalty flux, we avoid solving Riemann problems on interfaces, which makes our method capable of handling general anisotropic and poro-elastic materials. Based on unstructured tetrahedral meshes in 3D, the code can capture various geometries in the geological model, and uses polynomial expansion to achieve high-order accuracy. We consider the rate-and-state friction law, in the spontaneous rupture dynamics, as part of a nonlinear transmitting boundary condition, which is weakly enforced across the fault surface as a numerical flux. An iterative coupling scheme is developed based on implicit time stepping, containing a constrained optimization process that accounts for the nonlinear part. To validate the method, we prove the convergence of the coupled system with error estimates. We test our algorithm on a well-established numerical example (TPV102) of the SCEC/USGS Spontaneous Rupture Code Verification Project, and benchmark against simulations from PyLith and SPECFEM3D, finding good agreement.
Weak field equations and generalized FRW cosmology on the tangent Lorentz bundle
NASA Astrophysics Data System (ADS)
Triantafyllopoulos, A.; Stavrinos, P. C.
2018-04-01
We study field equations for a weak anisotropic model on the tangent Lorentz bundle TM of a spacetime manifold. A geometrical extension of general relativity (GR) is considered by introducing the concept of local anisotropy, i.e. a direct dependence of geometrical quantities on observer 4‑velocity. In this approach, we consider a metric on TM given as the sum of an h-Riemannian metric structure and a weak anisotropic perturbation; field equations with extra terms are obtained for this model. In addition, extended Raychaudhuri equations are studied in the framework of Finsler-like extensions. The canonical momentum and mass-shell equation are also generalized in relation to their GR counterparts. Quantization of the mass-shell equation leads to a generalization of the Klein–Gordon equation and of the dispersion relation for a scalar field. In this model the accelerated expansion of the universe can be attributed to the geometry itself. A cosmological bounce is modeled with the introduction of an anisotropic scalar field. Also, the electromagnetic field equations are directly incorporated in this framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Ziyang; Yang, Tao; Li, Guoqi
We study synchronization of coupled linear systems over networks with weak connectivity and time-varying delays. We focus on the case that the internal dynamics are time-varying but non-expansive. Both uniformly connected and infinitely connected communication topologies are considered. A new concept of P-synchronization is introduced and we first show that global asymptotic P-synchronization can be achieved over directed networks with uniform joint connectivity and arbitrarily bounded delays. We then study the case of the infinitely jointly connected communication topology. In particular, for undirected communication topologies, it turns out that the existence of a uniform time interval for the communication topology is not necessary and P-synchronization can be achieved when the time-varying delays are arbitrarily bounded. Simulations are given to validate the theoretical results.
NASA Astrophysics Data System (ADS)
Tchoufag, Joël; Fabre, David; Magnaudet, Jacques
2015-09-01
Gravity- or buoyancy-driven bodies moving in a slightly viscous fluid frequently follow fluttering or helical paths. Current models of such systems are largely empirical and fail to predict several of the key features of their evolution, especially close to the onset of path instability. Here, using a weakly nonlinear expansion of the full set of governing equations, we present a new generic reduced-order model based on a pair of amplitude equations with exact coefficients that drive the evolution of the first pair of unstable modes. We show that the predictions of this model for the style (e.g., fluttering or spiraling) and characteristics (e.g., frequency and maximum inclination angle) of path oscillations compare well with various recent data for both solid disks and air bubbles.
NASA Astrophysics Data System (ADS)
Magnaudet, Jacques; Tchoufag, Joel; Fabre, David
2015-11-01
Gravity/buoyancy-driven bodies moving in a slightly viscous fluid frequently follow fluttering or helical paths. Current models of such systems are largely empirical and fail to predict several of the key features of their evolution, especially close to the onset of path instability. Using a weakly nonlinear expansion of the full set of governing equations, we derive a new generic reduced-order model of this class of phenomena based on a pair of amplitude equations with exact coefficients that drive the evolution of the first pair of unstable modes. We show that the predictions of this model for the style (e.g., fluttering or spiraling) and characteristics (e.g., frequency and maximum inclination angle) of path oscillations compare well with various recent data for both solid disks and air bubbles.
Analysis of general power counting rules in effective field theory
Gavela, Belen; Jenkins, Elizabeth E.; Manohar, Aneesh V.; ...
2016-09-02
We derive the general counting rules for a quantum effective field theory (EFT) in d dimensions. The rules are valid for strongly and weakly coupled theories, and they predict that all kinetic energy terms are canonically normalized. They determine the energy dependence of scattering cross sections in the range of validity of the EFT expansion. We show that the size of the cross sections is controlled by the Λ power counting of EFT, not by chiral counting, even for chiral perturbation theory (χPT). The relation between Λ and f is generalized to d dimensions. We show that the naive dimensional analysis 4π counting is related to ℏ counting. The EFT counting rules are applied to χPT, low-energy weak interactions, Standard Model EFT and the non-trivial case of Higgs EFT.
NASA Astrophysics Data System (ADS)
Galloway, Duncan K.; Psaltis, Dimitrios; Chakrabarty, Deepto; Muno, Michael P.
2003-06-01
We investigate the limitations of thermonuclear X-ray bursts as a distance indicator for the weakly magnetized accreting neutron star 4U 1728-34. We measured the unabsorbed peak flux of 81 bursts in public data from the Rossi X-Ray Timing Explorer (RXTE). The distribution of peak fluxes was bimodal: 66 bursts exhibited photospheric radius expansion (presumably reaching the local Eddington limit) and were distributed about a mean bolometric flux of 9.2×10⁻⁸ erg cm⁻² s⁻¹, while the remaining (non-radius-expansion) bursts reached 4.5×10⁻⁸ erg cm⁻² s⁻¹, on average. The peak fluxes of the radius expansion bursts were not constant, exhibiting a standard deviation of 9.4% and a total variation of 46%. These bursts showed significant correlations between their peak flux and the X-ray colors of the persistent emission immediately prior to the burst. We also found evidence for quasi-periodic variation of the peak fluxes of radius expansion bursts, with a timescale of ≈40 days. The persistent flux observed with RXTE/ASM over 5.8 yr exhibited quasi-periodic variability on a similar timescale. We suggest that these variations may have a common origin in reflection from a warped accretion disk. Once the systematic variation of the peak burst fluxes is subtracted, the residual scatter is only ≈3%, roughly consistent with the measurement uncertainties. The narrowness of this distribution strongly suggests that (1) the radiation from the neutron star atmosphere during radius expansion episodes is nearly spherically symmetric and (2) the radius expansion bursts reach a common peak flux that may be interpreted as a standard candle intensity. Adopting the minimum peak flux for the radius expansion bursts as the Eddington flux limit, we derive a distance for the source of 4.4-4.8 kpc (assuming R_NS = 10 km), with the uncertainty arising from the probable range of the neutron star mass M_NS = 1.4-2 M_⊙.
A hybridized formulation for the weak Galerkin mixed finite element method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mu, Lin; Wang, Junping; Ye, Xiu
This paper presents a hybridized formulation for the weak Galerkin mixed finite element method (WG-MFEM) which was introduced and analyzed in Wang and Ye (2014) for second order elliptic equations. The WG-MFEM method was designed by using discontinuous piecewise polynomials on finite element partitions consisting of polygonal or polyhedral elements of arbitrary shape. The key to WG-MFEM is the use of a discrete weak divergence operator which is defined and computed by solving inexpensive problems locally on each element. The hybridized formulation of this paper leads to a significantly reduced system of linear equations involving only the unknowns arising from the Lagrange multiplier in hybridization. Optimal-order error estimates are derived for the hybridized WG-MFEM approximations. In conclusion, some numerical results are reported to confirm the theory and a superconvergence for the Lagrange multiplier.
A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.
Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong
2015-05-05
The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially under environmental disturbances and with the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of FOG-based strapdown inertial navigation systems (SINS) degrades. This research focuses on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the demerits of the standard AFSA during the optimization process (e.g., failure to use the artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost). To address these weak points, the functional behaviors and overall procedures of AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. The NAFSA is then verified with simulations and experiments, and its performance is compared with that of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.
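To make the prey/swarm/follow vocabulary concrete, here is a heavily simplified AFSA-style minimizer; it is not the paper's NAFSA (none of the proposed improvements or the Monte Carlo coupling are included), and all parameter values and the toy objective are illustrative.

```python
import numpy as np

def afsa_minimize(f, lo, hi, n_fish=30, visual=1.0, step=0.3, tries=5, iters=200, seed=0):
    """Schematic artificial-fish-swarm minimizer with prey/swarm/follow moves."""
    rng = np.random.default_rng(seed)
    dim = lo.size
    X = rng.uniform(lo, hi, (n_fish, dim))
    fx = np.array([f(x) for x in X])
    best, fbest = X[fx.argmin()].copy(), fx.min()
    for _ in range(iters):
        for i in range(n_fish):
            d = np.linalg.norm(X - X[i], axis=1)
            nbr = np.where((d < visual) & (d > 0))[0]
            cand = None
            if nbr.size:                                   # follow, then swarm
                for target in (X[nbr[fx[nbr].argmin()]], X[nbr].mean(axis=0)):
                    if f(target) < fx[i]:
                        v = target - X[i]
                        cand = X[i] + step * v / (np.linalg.norm(v) + 1e-12)
                        break
            if cand is None:                               # prey: random local trials
                for _ in range(tries):
                    trial = X[i] + visual * rng.uniform(-1, 1, dim)
                    if f(trial) < fx[i]:
                        v = trial - X[i]
                        cand = X[i] + step * v / (np.linalg.norm(v) + 1e-12)
                        break
            if cand is None:                               # default random move
                cand = X[i] + step * rng.uniform(-1, 1, dim)
            X[i] = np.clip(cand, lo, hi)
            fx[i] = f(X[i])
            if fx[i] < fbest:
                fbest, best = fx[i], X[i].copy()
    return best, fbest

# toy stand-in for error-coefficient identification: least-squares misfit
true_params = np.array([1.2, -0.5, 0.8])
misfit = lambda p: float(np.sum((p - true_params) ** 2))
p_hat, res = afsa_minimize(misfit, np.full(3, -2.0), np.full(3, 2.0))
print(p_hat, res)
```

In an actual recalibration setting the misfit would compare SINS navigation output against a reference trajectory rather than the parameters themselves.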
Balancing aggregation and smoothing errors in inverse models
Turner, A. J.; Jacob, D. J.
2015-06-30
Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
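As a minimal illustration of reduction option (1), grid coarsening can be written as an aggregation operator acting on the native-resolution state; the toy state below is invented, and the sketch shows only the representation error introduced by aggregation, not a full inversion.

```python
import numpy as np

n_native, block = 16, 4
Gamma = np.zeros((n_native // block, n_native))
for k in range(n_native // block):
    Gamma[k, k * block:(k + 1) * block] = 1.0 / block   # block-averaging rows

x_native = np.sin(np.linspace(0, np.pi, n_native))       # hypothetical state
x_reduced = Gamma @ x_native                             # coarsened state
x_back = Gamma.T @ x_reduced * block                     # piecewise-constant mapping back
print("aggregation residual:", np.linalg.norm(x_native - x_back))
```

Shrinking block reduces this residual (less aggregation error) at the cost of more elements for the observations to constrain (more smoothing error), which is precisely the trade-off the paper quantifies.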
DOE Office of Scientific and Technical Information (OSTI.GOV)
Van Weverberg, K.; Morcrette, C. J.; Petch, J.
Many numerical weather prediction (NWP) and climate models exhibit too-warm lower tropospheres near the mid-latitude continents. This warm bias has been extensively studied before, but evidence about its origin remains inconclusive. Some studies point to deficiencies in the deep convective or low clouds. Other studies found an important contribution from errors in the land surface properties. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. Documenting these radiation errors is hence an important step towards understanding and alleviating the warm bias. This paper presents an attribution study to quantify the net radiation biases in 9 model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, integrated water vapor (IWV) and aerosols are quantified, using an array of radiation measurement stations near the ARM SGP site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface SW radiation is overestimated (LW underestimated) in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation in all but one model, which has a dominant albedo issue. Using a cloud regime analysis, it was shown that missing deep cloud events and/or simulating deep clouds with too-weak cloud-radiative effects account for most of these cloud-related radiation errors. Some models have compensating errors, simulating deep cloud too frequently while largely underestimating its radiative effect, whereas other models miss deep cloud events altogether. Surprisingly, however, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that, rather than issues with the triggering of deep convection, the deep cloud problem in many models could be related to too-weak convective cloud detrainment and too-large precipitation efficiencies. This does not rule out that previously documented issues with the evaporative fraction contribute to the warm bias as well, since the majority of the models underestimate the surface rain rates overall, as they miss the observed large nocturnal precipitation peak.
Analyzing a stochastic time series obeying a second-order differential equation.
Lehle, B; Peinke, J
2015-06-01
The stochastic properties of a Langevin-type Markov process can be extracted from a given time series by a Markov analysis. Processes that obey a stochastically forced second-order differential equation can also be analyzed this way by employing a particular embedding approach: To obtain a Markovian process in 2N dimensions from a non-Markovian signal in N dimensions, the system is described in a phase space that is extended by the temporal derivative of the signal. For a discrete time series, however, this derivative can only be calculated by a differencing scheme, which introduces an error. If the effects of this error are not accounted for, this leads to systematic errors in the estimation of the drift and diffusion functions of the process. In this paper we analyze these errors and propose an approach that correctly accounts for them. This approach allows an accurate parameter estimation and, additionally, is able to cope with weak measurement noise, which may be superimposed on a given time series.
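The systematic error from differencing is easy to reproduce numerically. The sketch below (toy parameters, Euler-Maruyama integration; not the authors' correction procedure) reconstructs the velocity of a stochastically forced oscillator by central differencing and shows the classic factor-of-two bias in the naive diffusion estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
k, g, D, dt, n = 1.0, 0.5, 0.2, 1e-3, 200_000
x, v = np.empty(n), np.empty(n)
x[0] = v[0] = 0.0
noise = np.sqrt(2 * D * dt) * rng.standard_normal(n - 1)
for i in range(n - 1):                     # x'' = -k x - g x' + sqrt(2D) xi(t)
    x[i + 1] = x[i] + v[i] * dt
    v[i + 1] = v[i] + (-k * x[i] - g * v[i]) * dt + noise[i]

v_diff = np.gradient(x, dt)                # velocity recovered by differencing x

D_true = np.mean(np.diff(v) ** 2) / (2 * dt)        # ~ D, from the true velocity
D_naive = np.mean(np.diff(v_diff) ** 2) / (2 * dt)  # ~ D/2: bias from differencing
print(f"D = {D}, from true v = {D_true:.3f}, from differenced v = {D_naive:.3f}")
```

The central difference averages two consecutive velocities, so increments of the differenced series carry only half the noise variance; the uncorrected estimator therefore converges to D/2 rather than D.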
NASA Astrophysics Data System (ADS)
Matsui, Chihiro; Kinoshita, Reika; Takeuchi, Ken
2018-04-01
A hybrid of storage class memory (SCM) and NAND flash is a promising technology for high performance storage. Error correction is inevitable on SCM and NAND flash because their bit error rate (BER) increases with write/erase (W/E) cycles, data retention, and program/read disturb. In addition, scaling and multi-level cell technologies increase BER. However, error-correcting code (ECC) degrades storage performance because of extra memory reading and encoding/decoding time. Therefore, the applicable ECC strength of SCM and NAND flash is evaluated independently by fixing the ECC strength of one memory in the hybrid storage. As a result, weak BCH ECC with a small number of correctable bits is recommended for the hybrid storage with large SCM capacity because SCM is accessed frequently. In contrast, strong, long-latency LDPC ECC can be applied to NAND flash in the hybrid storage with large SCM capacity because large-capacity SCM improves the storage performance.
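To make the strength/latency trade-off concrete, the codeword failure probability of a t-error-correcting block code under a raw bit error rate follows from a binomial model; the code length, BER, and t values below are illustrative assumptions, not parameters from the paper.

```python
from math import comb

def codeword_fail_prob(n, t, ber):
    """P(more than t bit errors in an n-bit codeword), binomial channel model."""
    p_ok = sum(comb(n, k) * ber**k * (1 - ber) ** (n - k) for k in range(t + 1))
    return 1.0 - p_ok

n = 8 * 1024                       # hypothetical 1 KiB codeword
for t in (8, 16, 24):              # increasing correction strength
    print(f"t = {t:2d}: fail prob = {codeword_fail_prob(n, t, ber=1e-3):.3e}")
```

With a mean of about 8 raw errors per codeword at this BER, t = 8 fails roughly half the time while t = 24 is already rare; the stronger code buys reliability at the decoding-latency cost the abstract describes.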
Woolf, Steven H.; Kuzel, Anton J.; Dovey, Susan M.; Phillips, Robert L.
2004-01-01
BACKGROUND Notions about the most common errors in medicine currently rest on conjecture and weak epidemiologic evidence. We sought to determine whether cascade analysis is of value in clarifying the epidemiology and causes of errors and whether physician reports are sensitive to the impact of errors on patients. METHODS Eighteen US family physicians participating in a 6-country international study filed 75 anonymous error reports. The narratives were examined to identify the chain of events and the predominant proximal errors. We tabulated the consequences to patients, both reported by physicians and inferred by investigators. RESULTS A chain of errors was documented in 77% of incidents. Although 83% of the errors that ultimately occurred were mistakes in treatment or diagnosis, 2 of 3 were set in motion by errors in communication. Fully 80% of the errors that initiated cascades involved informational or personal miscommunication. Examples of informational miscommunication included communication breakdowns among colleagues and with patients (44%), misinformation in the medical record (21%), mishandling of patients’ requests and messages (18%), inaccessible medical records (12%), and inadequate reminder systems (5%). When asked whether the patient was harmed, physicians answered affirmatively in 43% of cases in which their narratives described harms. Psychological and emotional effects accounted for 17% of physician-reported consequences but 69% of investigator-inferred consequences. CONCLUSIONS Cascade analysis of physicians’ error reports is helpful in understanding the precipitant chain of events, but physicians provide incomplete information about how patients are affected. Miscommunication appears to play an important role in propagating diagnostic and treatment mistakes. PMID:15335130
Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N
2014-12-01
Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields conclusions similar to those of the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
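For readers unfamiliar with why gPCE makes prioritization cheap: once expansion coefficients over an orthonormal polynomial basis are available, variance-based (Sobol) sensitivity indices follow directly from sums of squared coefficients, with no further model runs. The sketch below assumes an orthonormalized basis and uses invented coefficients; it is not the authors' implementation.

```python
import numpy as np

def sobol_from_pce(coef, alphas):
    """First-order (S1) and total (ST) Sobol indices from orthonormal-PCE
    coefficients; alphas are the multi-indices of the basis terms."""
    coef = np.asarray(coef, dtype=float)
    dim = len(alphas[0])
    var = np.sum(coef[1:] ** 2)                     # total variance (mean term excluded)
    S1, ST = np.zeros(dim), np.zeros(dim)
    for c, a in zip(coef, alphas):
        active = [k for k, d in enumerate(a) if d > 0]
        if not active:
            continue                                # skip the mean term
        if len(active) == 1:
            S1[active[0]] += c * c
        for k in active:
            ST[k] += c * c
    return S1 / var, ST / var

# illustrative 2-parameter expansion: mean, x1, x2, and an x1*x2 interaction term
coef = [1.0, 0.8, 0.3, 0.1]
alphas = [(0, 0), (1, 0), (0, 1), (1, 1)]
S1, ST = sobol_from_pce(coef, alphas)
print("S1 =", np.round(S1, 3), " ST =", np.round(ST, 3))
```

Parameters with small total indices are candidates for fixing; large first-order indices mark the parameters worth prioritizing in personalization.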
Continuum limit of Bk from 2+1 flavor domain wall QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soni, A.; Izubuchi, T.; et al.
2011-07-01
We determine the neutral kaon mixing matrix element $B_K$ in the continuum limit with 2+1 flavors of domain wall fermions, using the Iwasaki gauge action at two different lattice spacings. These lattice fermions have near-exact chiral symmetry and therefore avoid artificial lattice operator mixing. We introduce a significant improvement to the conventional nonperturbative renormalization (NPR) method in which the bare matrix elements are renormalized nonperturbatively in the regularization invariant momentum scheme (RI-MOM) and are then converted into the $\overline{\rm MS}$ scheme using continuum perturbation theory. In addition to RI-MOM, we introduce and implement four nonexceptional intermediate momentum schemes that suppress infrared nonperturbative uncertainties in the renormalization procedure. We compute the conversion factors relating the matrix elements in this family of regularization invariant symmetric momentum schemes (RI-SMOM) and $\overline{\rm MS}$ at one-loop order. Comparison of the results obtained using these different intermediate schemes allows for a more reliable estimate of the unknown higher-order contributions and hence for a correspondingly more robust estimate of the systematic error. We also apply a recently proposed approach in which twisted boundary conditions are used to control the Symanzik expansion for off-shell vertex functions, leading to better control of the renormalization in the continuum limit. We control chiral extrapolation errors by considering both next-to-leading-order SU(2) chiral effective theory and an analytic mass expansion. We obtain $B_K^{\overline{\rm MS}}(3\,{\rm GeV}) = 0.529(5)_{\rm stat}(15)_{\chi}(2)_{\rm FV}(11)_{\rm NPR}$. This corresponds to $\hat{B}_K^{\rm RGI} = 0.749(7)_{\rm stat}(21)_{\chi}(3)_{\rm FV}(15)_{\rm NPR}$. Adding all sources of error in quadrature, we obtain $\hat{B}_K^{\rm RGI} = 0.749(27)_{\rm combined}$, with an overall combined error of 3.6%.
Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling
NASA Astrophysics Data System (ADS)
Sung, Chih-Jen; Niemeyer, Kyle E.
2010-05-01
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.
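The DRGEP ingredient of DRGEPSA propagates species-interaction coefficients along graph paths, keeping a species only if its maximum path product to a retained target exceeds a threshold. A toy max-product (Dijkstra-style) search is sketched below; the species names, edge weights, and threshold are invented for illustration.

```python
import heapq

def drgep_coefficients(edges, target):
    """Max-product path coefficients R[target -> s] on a weighted digraph.
    edges: dict species -> list of (neighbor, r) with direct coefficients r in [0, 1]."""
    R = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        neg_r, s = heapq.heappop(heap)
        r = -neg_r
        if r < R.get(s, 0.0):
            continue                       # stale heap entry
        for t, w in edges.get(s, []):
            rt = r * w                     # path product never increases (w <= 1)
            if rt > R.get(t, 0.0):
                R[t] = rt
                heapq.heappush(heap, (-rt, t))
    return R

# hypothetical interaction graph for a toy mechanism
edges = {
    "fuel": [("O2", 0.9), ("radical", 0.8)],
    "radical": [("minor", 0.05), ("O2", 0.5)],
    "O2": [("minor", 0.1)],
}
R = drgep_coefficients(edges, "fuel")
keep = {s for s, r in R.items() if r >= 0.1}   # threshold eps = 0.1
print(R)
print("retained species:", keep)
```

Species surviving this graph pass would then go to the sensitivity-analysis stage, which is what lets DRGEPSA catch unimportant species that the graph metric alone cannot identify.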
Transmutation of a trans-series: the Gross-Witten-Wadia phase transition
NASA Astrophysics Data System (ADS)
Ahmed, Anees; Dunne, Gerald V.
2017-11-01
We study the change in the resurgent asymptotic properties of a trans-series in two parameters, a coupling g 2 and a gauge index N, as a system passes through a large N phase transition, using the universal example of the Gross-Witten-Wadia third-order phase transition in the unitary matrix model. This transition is well-studied in the immediate vicinity of the transition point, where it is characterized by a double-scaling limit Painlevé II equation, and also away from the transition point using the pre-string difference equation. Here we present a complementary analysis of the transition at all coupling and all finite N, in terms of a differential equation, using the explicit Tracy-Widom mapping of the Gross-Witten-Wadia partition function to a solution of a Painlevé III equation. This mapping provides a simple method to generate trans-series expansions in all parameter regimes, and to study their transmutation as the parameters are varied. For example, at any finite N the weak coupling expansion is divergent, with a non-perturbative trans-series completion; on the other hand, the strong coupling expansion is convergent, and yet there is still a non-perturbative trans-series completion. We show how the different instanton terms `condense' at the transition point to match with the double-scaling limit trans-series. We also define a uniform large N strong-coupling expansion (a non-linear analogue of uniform WKB), which is much more precise than the conventional large N expansion through the transition region, and apply it to the evaluation of Wilson loops.
ERIC Educational Resources Information Center
Arieli-Attali, Meirav; Liu, Ying
2016-01-01
Diagnostic assessment approaches intend to provide fine-grained reports of what students know and can do, focusing on their areas of strengths and weaknesses. However, current application of such diagnostic approaches is limited by the scoring method for item responses; important diagnostic information, such as type of errors and strategy use is…
Five Flaws of Staff Development and the Future Beyond
ERIC Educational Resources Information Center
Hargreaves, Andy
2007-01-01
Student learning and development do not occur without teacher learning and development. Not any teacher development will do, though. The old flaws of weak and wayward staff development are well-known--no staff development, in which trial and error are assumed to be enough; staff development that is all ideas and no implementation, i.e. the…
Profile models for estimating log end diameters in the Rocky Mountain Region
Raymond L. Czaplewski; Amy S. Brown; Raymond C. Walker
1989-01-01
The segmented polynomial stem profile model of Max and Burkhart was applied to seven tree species in the Rocky Mountain Region of the Forest Service. Errors were reduced over the entire data set by use of second-stage models that adjust for transformation bias and explain weak patterns in the residual diameter predictions.
Efficient Learning Algorithms with Limited Information
ERIC Educational Resources Information Center
De, Anindya
2013-01-01
The thesis explores efficient learning algorithms in settings which are more restrictive than the PAC model of learning (Valiant) in one of the following two senses: (i) The learning algorithm has a very weak access to the unknown function, as in, it does not get labeled samples for the unknown function (ii) The error guarantee required from the…
Analysis of Mongolian Students' Common Translation Errors and Its Solutions
ERIC Educational Resources Information Center
Zhao, Changhua
2013-01-01
In Inner Mongolia, those Mongolian students face lots of difficulties in learning English. Especially the English translation ability of Mongolian students is a weak point. It is worth to think a problem that how to let our students use the English freely on a certain foundation. This article investigates the problems of Mongolian English learners…
NASA Astrophysics Data System (ADS)
Pavošević, Fabijan; Neese, Frank; Valeev, Edward F.
2014-08-01
We present a production implementation of reduced-scaling explicitly correlated (F12) coupled-cluster singles and doubles (CCSD) method based on pair-natural orbitals (PNOs). A key feature is the reformulation of the explicitly correlated terms using geminal-spanning orbitals that greatly reduce the truncation errors of the F12 contribution. For the standard S66 benchmark of weak intermolecular interactions, the cc-pVDZ-F12 PNO CCSD F12 interaction energies reproduce the complete basis set CCSD limit with mean absolute error <0.1 kcal/mol, and at a greatly reduced cost compared to the conventional CCSD F12.
Gain loss and noise temperature degradation due to subreflector rotations in a Cassegrain antenna
NASA Astrophysics Data System (ADS)
Lamb, J. W.; Olver, A. D.
An evaluation of performance degradation due to subreflector rotations is reported for the 15 m UK/NL Millimetrewave Radio Telescope Cassegrain antenna. The analytical treatment of the phase errors shows that the optimum point for the center of rotation of the subreflector is the primary focus, indicating astigmatic error, and it is shown that a compromise must be made between mechanical and electrical performance. Gain deterioration due to spillover is only weakly dependent on z_c, and this loss decreases as z_c moves towards the subreflector vertex. The associated spillover gives rise to a noise temperature which is calculated to be a few degrees K.
A statistical assessment of zero-polarization catalogues
NASA Astrophysics Data System (ADS)
Clarke, D.; Naghizadeh-Khouei, J.; Simmons, J. F. L.; Stewart, B. G.
1993-03-01
The statistical behavior associated with polarization measurements is presented. The cumulative distribution function for measurements of unpolarized sources normalized by the measurement error is considered, and Kolmogorov tests have been applied to data which might be considered as representative of assemblies of unpolarized stars. Tinbergen's (1979, 1982) and Piirola's (1977) catalogs have been examined and reveal shortcomings, the former indicating the presence of uncorrected instrumental polarization in part of the data and both suggesting that the quoted errors are in general slightly underestimated. Citations of these catalogs as providing evidence that middle-type stars in general exhibit weak intrinsic polarizations are shown to be invalid.
Observations of interstellar zinc
NASA Technical Reports Server (NTRS)
Jura, M.; York, D.
1981-01-01
The International Ultraviolet Explorer observations of interstellar zinc toward 10 stars are examined. It is found that zinc is at most only slightly depleted in the interstellar medium; its abundance may serve as a tracer of the true metallicity in the gas. The local interstellar medium has abundances that apparently are homogeneous to within a factor of two, when integrated over paths of about 500 pc, and this result is important for understanding the history of nucleosynthesis in the solar neighborhood. The intrinsic errors in detecting weak interstellar lines are analyzed and suggestions are made as to how this error limit may be lowered to 5 mÅ per target observation.
Global sea level trend in the past century
NASA Technical Reports Server (NTRS)
Gornitz, V.; Lebedeff, S.; Hansen, J.
1982-01-01
Data derived from tide-gauge stations throughout the world indicate that the mean sea level rose by about 12 centimeters in the past century. The sea level change has a high correlation with the trend of global surface air temperature. A large part of the sea level rise can be accounted for in terms of the thermal expansion of the upper layers of the ocean. The results also represent weak indirect evidence for a net melting of the continental ice sheets.
Quasilinear theory of plasma turbulence. Origins, ideas, and evolution of the method
NASA Astrophysics Data System (ADS)
Bakunin, O. G.
2018-01-01
The quasilinear method of describing weak plasma turbulence is one of the most important elements of current plasma physics research. Today, this method is not only a tool for solving individual problems but a full-fledged theory of general physical interest. The author's objective is to show how the early ideas of describing the wave-particle interactions in a plasma have evolved as a result of the rapid expansion of the research interests of turbulence and turbulent transport theorists.
2016-11-01
validation (Daniell) (months 1-16):
1a. Plant transgenic tobacco seeds and grow (months 1-4, typical growing period; repeated 2 times).
1b. Harvest ... GFP. Adult mice were orally fed with leaf materials from transgenic tobacco plants, with the amount adjusted to GFP expression levels, for three ...
3. Breeding Cmdx and C57 mice for colony maintenance and expansion (Barton) (months 1-30).
3a. Purchase mice (month 1).
3b. Breed mice for older age
Study of intelligent building system based on the internet of things
NASA Astrophysics Data System (ADS)
Wan, Liyong; Xu, Renbo
2017-03-01
To address problems such as isolated subsystems, weak system linkage, and limited expansibility in bus-type building management systems, this paper, taking modern intelligent buildings as its setting, studies technologies related to intelligent buildings and the Internet of Things, and designs a system architecture for intelligent buildings based on the Internet of Things. In addition, this paper analyzes the wireless networking modes, wireless communication protocols and wireless routing protocols for intelligent buildings based on the Internet of Things.
Weak lensing calibration of mass bias in the REFLEX+BCS X-ray galaxy cluster catalogue
NASA Astrophysics Data System (ADS)
Simet, Melanie; Battaglia, Nicholas; Mandelbaum, Rachel; Seljak, Uroš
2017-04-01
The use of large, X-ray-selected galaxy cluster catalogues for cosmological analyses requires a thorough understanding of the X-ray mass estimates. Weak gravitational lensing is an ideal method to shed light on such issues, due to its insensitivity to the cluster dynamical state. We perform a weak lensing calibration of 166 galaxy clusters from the REFLEX and BCS cluster catalogue and compare our results to the X-ray masses based on scaled luminosities from that catalogue. To interpret the weak lensing signal in terms of cluster masses, we compare the lensing signal to simple theoretical Navarro-Frenk-White models and to simulated cluster lensing profiles, including complications such as cluster substructure, projected large-scale structure and Eddington bias. We find evidence of underestimation in the X-ray masses, as expected, with
The relationships among work stress, strain and self-reported errors in UK community pharmacy.
Johnson, S J; O'Connor, E M; Jacobs, S; Hassell, K; Ashcroft, D M
2014-01-01
Changes in the UK community pharmacy profession including new contractual frameworks, expansion of services, and increasing levels of workload have prompted concerns about rising levels of workplace stress and overload. This has implications for pharmacist health and well-being and the occurrence of errors that pose a risk to patient safety. Despite these concerns being voiced in the profession, few studies have explored work stress in the community pharmacy context. To investigate work-related stress among UK community pharmacists and to explore its relationships with pharmacists' psychological and physical well-being, and the occurrence of self-reported dispensing errors and detection of prescribing errors. A cross-sectional postal survey of a random sample of practicing community pharmacists (n = 903) used ASSET (A Shortened Stress Evaluation Tool) and questions relating to self-reported involvement in errors. Stress data were compared to general working population norms, and regressed on well-being and self-reported errors. Analysis of the data revealed that pharmacists reported significantly higher levels of workplace stressors than the general working population, with concerns about work-life balance, the nature of the job, and work relationships being the most influential on health and well-being. Despite this, pharmacists were not found to report worse health than the general working population. Self-reported error involvement was linked to both high dispensing volume and being troubled by perceived overload (dispensing errors), and resources and communication (detection of prescribing errors). This study contributes to the literature by benchmarking community pharmacists' health and well-being, and investigating sources of stress using a quantitative approach. A further important contribution to the literature is the identification of a quantitative link between high workload and self-reported dispensing errors. Copyright © 2014 Elsevier Inc. All rights reserved.
Dzubak, Allison L.; Krogel, Jaron T.; Reboredo, Fernando A.
2017-07-10
The necessarily approximate evaluation of non-local pseudopotentials in diffusion Monte Carlo (DMC) introduces localization errors. In this paper, we estimate these errors for two families of non-local pseudopotentials for the first-row transition metal atoms Sc–Zn using an extrapolation scheme and multideterminant wavefunctions. Sensitivities of the error in the DMC energies to the Jastrow factor are used to estimate the quality of two sets of pseudopotentials with respect to locality error reduction. The locality approximation and T-moves scheme are also compared for accuracy of total energies. After estimating the removal of the locality and T-moves errors, we present the range of fixed-node energies between a single determinant description and a full valence multideterminant complete active space expansion. The results for these pseudopotentials agree with previous findings that the locality approximation is less sensitive to changes in the Jastrow than T-moves, yielding more accurate total energies, however not necessarily more accurate energy differences. For both the locality approximation and T-moves, we find decreasing Jastrow sensitivity moving left to right across the series Sc–Zn. The recently generated pseudopotentials of Krogel et al. reduce the magnitude of the locality error compared with the pseudopotentials of Burkatzki et al. by an average estimated 40% using the locality approximation. The estimated locality error is equivalent for both sets of pseudopotentials when T-moves is used. Finally, for the Sc–Zn atomic series with these pseudopotentials, and using up to three-body Jastrow factors, our results suggest that the fixed-node error is dominant over the locality error when a single determinant is used.
NASA Astrophysics Data System (ADS)
Sturgess, G. J.; Syed, S. A.
1982-06-01
A numerical simulation is made of the flow in the Wright Aeronautical Propulsion Laboratory diffusion flame research combustor operating with a strong central jet of carbon dioxide in a weak and removed co-axial jet of air. The simulation is based on a finite difference solution of the time-average, steady-state, elliptic form of the Reynolds equations. Closure for these equations is provided by a two-equation turbulence model. Comparisons between measurements and predictions are made for centerline axial velocities and radial profiles of CO2 concentration. Earlier findings for a single-species, constant density, single jet flow, namely that a large expansion ratio confined jet behaves initially as if it were unconfined, are confirmed for the multiple-species, variable density, multiple-jet system. The lack of universality in the turbulence model constants and the turbulent Schmidt/Prandtl number is discussed.
Valera Yepes, Rocío; Virgili Casas, Maria; Povedano Panades, Monica; Guerrero Gual, Mireia; Villabona Artero, Carles
2015-05-01
Kennedy's disease, also known as bulbospinal muscular atrophy, is a rare, X-linked recessive neurodegenerative disorder affecting adult males. It is caused by expansion of an unstable cytosine-adenine-guanine tandem-repeat in exon 1 of the androgen-receptor gene on chromosome Xq11-12, and is characterized by spinal motor neuron progressive degeneration. Endocrinologically, these patients often have the features of hypogonadism associated to the androgen insensitivity syndrome, particularly its partial forms. We report 4 cases with the typical neurological presentation, consisting of slowly progressing generalized muscle weakness with atrophy and bulbar muscle involvement; these patients also had several endocrine manifestations; the most common non-neurological manifestation was gynecomastia. In all cases reported, molecular analysis showed an abnormal cytosine-adenine-guanine triplet repeat expansion in the androgen receptor gene. Copyright © 2014 SEEN. Published by Elsevier España, S.L.U. All rights reserved.
Roshid, Harun-Or; Kabir, Md Rashed; Bhowmik, Rajandra Chadra; Datta, Bimal Kumar
2014-01-01
In this paper, we describe two powerful methods for solving nonlinear partial differential equations, known as the exp-function method and the exp(-ϕ(ξ))-expansion method. Several methods have recently been used for finding analytical solutions of nonlinear partial differential equations. The methods are diverse and useful for solving nonlinear evolution equations. With the help of these methods, we investigate the exact travelling wave solutions of the Vakhnenko–Parkes equation. The soliton solutions obtained for this equation describe many physical phenomena of weakly nonlinear surface and internal waves in a rotating ocean. Further, three-dimensional plots of the solutions, such as solitons, singular solitons, bell-type solitary waves (i.e., non-topological solitons) and periodic solutions, are also given to visualize the dynamics of the equation.
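For orientation, the exp(-ϕ(ξ))-expansion method is conventionally built on an ansatz of the following form (the generic textbook form of the method, not notation copied from this paper):

\[
u(\xi) = \sum_{n=0}^{N} a_n \left( e^{-\phi(\xi)} \right)^{n},
\qquad
\phi'(\xi) = e^{-\phi(\xi)} + \mu \, e^{\phi(\xi)} + \lambda ,
\]

where N is fixed by balancing the highest-order derivative against the strongest nonlinearity, and the families of solutions of the auxiliary ODE (hyperbolic, trigonometric, rational) generate the soliton, singular, and periodic solutions referred to above.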
Finite-amplitude, pulsed, ultrasonic beams
NASA Astrophysics Data System (ADS)
Coulouvrat, François; Frøysa, Kjell-Eivind
An analytical, approximate solution of the inviscid KZK equation for a nonlinear pulsed sound beam radiated by an acoustic source with a Gaussian velocity distribution is obtained by means of the renormalization method. This method involves two steps. First, the transient, weakly nonlinear field is computed. However, because of cumulative nonlinear effects, that expansion is non-uniform and breaks down at some distance away from the source. So, in order to extend its validity, it is re-written in a new frame of co-ordinates, better suited to following the nonlinear distortion of the wave profile. Basically, the nonlinear coordinate transform introduces additional terms in the expansion, which are chosen so as to counterbalance the non-uniform ones. Special care is devoted to the treatment of shock waves. Finally, comparisons with the results of a finite-difference scheme turn out favorable, and show the efficiency of the method for a rather large range of parameters.
NASA Astrophysics Data System (ADS)
Schau, Kyle A.
This thesis presents a complete method of modeling the autospectra of turbulence in closed form via an expansion series using the von Karman model as a basis function. It is capable of modeling turbulence in all three directions of fluid flow: longitudinal, lateral, and vertical, separately, thus eliminating the assumption of homogeneous, isotropic flow. A thorough investigation into the expansion series is presented, with the strengths and weaknesses highlighted. Furthermore, numerical aspects and theoretical derivations are provided. This method is then tested against three highly complex flow fields: wake turbulence inside wind farms, helicopter downwash, and helicopter downwash coupled with turbulence shed from a ship superstructure. These applications demonstrate that this method is remarkably robust, that the developed autospectral models are virtually tailored to the design of white noise driven shaping filters, and that these models in closed form facilitate a greater understanding of complex flow fields in wind engineering.
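A minimal sketch of the basis-function idea follows, assuming the standard normalized von Karman longitudinal spectrum shape; the length scales and the synthetic target spectrum are invented for illustration and do not correspond to the flow fields studied in the thesis.

```python
import numpy as np

def von_karman(f, L, U=1.0):
    """Normalized von Karman longitudinal autospectrum shape (assumed form)."""
    x = f * L / U
    return (4.0 * L / U) / (1.0 + 70.8 * x**2) ** (5.0 / 6.0)

f = np.linspace(0.01, 10.0, 400)                     # frequency grid
L_basis = [0.2, 1.0, 5.0]                            # assumed basis length scales
Phi = np.column_stack([von_karman(f, L) for L in L_basis])

# synthetic "measured" spectrum: a mixture of two von Karman shapes
S_meas = 0.7 * von_karman(f, 0.8) + 0.3 * von_karman(f, 3.0)

coef, *_ = np.linalg.lstsq(Phi, S_meas, rcond=None)  # expansion coefficients
resid = np.linalg.norm(Phi @ coef - S_meas) / np.linalg.norm(S_meas)
print("coefficients:", np.round(coef, 3), " relative residual:", f"{resid:.2e}")
```

Because each basis term already has the correct high-frequency roll-off, a short expansion can track spectra that a single von Karman curve cannot, which is the property that makes such fits convenient for driving white-noise shaping filters.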
A bead-spring chain as a one-dimensional polyelectrolyte gel.
Manning, Gerald S
2018-05-23
The physical principles underlying expansion of a single-chain polyelectrolyte coil caused by Coulomb repulsions among its ionized groups, and the expansion of a cross-linked polyelectrolyte gel, are probably the same. In this paper, we analyze a "one-dimensional" version of a gel, namely, a linear chain of charged beads connected by Hooke's law springs. In the Debye-Hückel range of relatively weak Coulomb strength, where counterion condensation does not occur, the springs are realistically stretched on a nanolength scale by the repulsive interactions among the beads, if we use a spring constant normalized by the inverse square of the solvent Bjerrum length. The persistence length and radius of gyration counter-intuitively decrease when Coulomb strength is increased, if analyzed in the framework of an OSF-type theory; however, a buckling theory generates the increase that is consistent with bead-spring simulations.
Eby, Joshua; Leembruggen, Madelyn; Suranyi, Peter; ...
2016-12-15
Axion stars, gravitationally bound states of low-energy axion particles, have a maximum mass allowed by gravitational stability. Weakly bound states obtaining this maximum mass have sufficiently large radii such that they are dilute, and as a result, they are well described by a leading-order expansion of the axion potential. Here, heavier states are susceptible to gravitational collapse. Inclusion of higher-order interactions, present in the full potential, can give qualitatively different results in the analysis of collapsing heavy states, as compared to the leading-order expansion. In this work, we find that collapsing axion stars are stabilized by repulsive interactions present in the full potential, providing evidence that such objects do not form black holes. In the last moments of collapse, the binding energy of the axion star grows rapidly, and we provide evidence that a large amount of its energy is lost through rapid emission of relativistic axions.
Ultrafast photo-induced dynamics across the metal-insulator transition of VO2
NASA Astrophysics Data System (ADS)
Wang, Siming; Ramírez, Juan Gabriel; Jeffet, Jonathan; Bar-Ad, Shimshon; Huppert, Dan; Schuller, Ivan K.
2017-04-01
The transient reflectivity of VO2 films across the metal-insulator transition clearly shows that with low-fluence excitation, when insulating domains are dominant, energy transfer from the optically excited electrons to the lattice is not instantaneous, but precedes the superheating-driven expansion of the metallic domains. This implies that the phase transition in the coexistence regime is lattice-, not electronically-driven, at weak laser excitation. The superheated phonons provide the latent heat required for the propagation of the optically-induced phase transition. For VO2 this transition path is significantly different from what has been reported in the strong-excitation regime. We also observe a slow-down of the superheating-driven expansion of the metallic domains around the metal-insulator transition, which is possibly due to the competition among several co-existing phases, or an emergent critical-like behavior.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bunge, C.F.; Barrientos, J.A.; Bunge, A.V.
1993-01-01
Roothaan-Hartree-Fock orbitals expressed in a Slater-type basis are reported for the ground states of He through Xe. Energy accuracy ranges between 8 and 10 significant figures, reducing by between 21 and 2,770 times the energy errors of the previous such compilation (E. Clementi and C. Roetti, Atomic Data and Nuclear Data Tables 14, 177, 1974). For each atom, the total energy, kinetic energy, potential energy, virial ratio, electron density at the nucleus, and the Kato cusp are given together with radial expectation values ⟨rⁿ⟩ with n from -3 to 2 for each orbital, orbital energies, and orbital expansion coefficients. 29 refs., 1 tab.
NASA Astrophysics Data System (ADS)
Mitchell, Roger H.; Cranswick, Lachlan M. D.; Swainson, Ian
2006-11-01
The cell dimensions of the fluoroperovskite KMgF3 synthesized by solid state methods have been determined by powder neutron diffraction and Rietveld refinement over the temperature range 293-3.6 K using Pt metal as an internal standard for calibration of the neutron wavelength. These data demonstrate conclusively that cubic Pm-3m KMgF3 does not undergo any phase transitions to structures of lower symmetry with decreasing temperature. Cell dimensions range from 3.9924(2) Å at 293 K to 3.9800(2) Å at 3.6 K, and are essentially constant within experimental error from 50 to 3.6 K. The thermal expansion data are described using a fourth order polynomial function.
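A fourth-order polynomial description of a(T) of this kind is straightforward to reproduce; in the sketch below the lattice-parameter curve is synthetic (chosen only to span roughly the reported endpoint values of 3.9800 and 3.9924 Å), not the measured neutron data.

```python
import numpy as np

rng = np.random.default_rng(4)
T = np.linspace(3.6, 293.0, 30)                      # temperatures [K]
a_true = 3.9800 + 1.45e-7 * T**2                     # synthetic a(T), flat near 0 K
a_obs = a_true + rng.normal(0.0, 2e-4, T.size)       # add "experimental" scatter

p = np.polynomial.Polynomial.fit(T, a_obs, deg=4)    # fourth-order description
alpha_293 = p.deriv()(293.0) / p(293.0)              # linear expansion coefficient
print(f"a(293 K) = {p(293.0):.4f} A, alpha(293 K) = {alpha_293:.2e} /K")
```

The flatness of the synthetic curve near 0 K mirrors the reported near-constancy of the cell dimension below 50 K, and the fitted polynomial's derivative gives the temperature-dependent expansion coefficient directly.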
Gaussian polarizable-ion tight binding.
Boleininger, Max; Guilbert, Anne Ay; Horsfield, Andrew P
2016-10-14
To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).
Cosmic microwave background snapshots: pre-WMAP and post-WMAP.
Bond, J Richard; Contaldi, Carlo; Pogosyan, Dmitry
2003-11-15
We highlight the remarkable evolution in the cosmic microwave background (CMB) power spectrum C_l as a function of multipole l over the past few years, and in the cosmological parameters for minimal inflation models derived from it: from anisotropy results before 2000; in 2000 and 2001 from Boomerang, Maxima and the Degree Angular Scale Interferometer (DASI), extending l to approximately 1000; and in 2002 from the Cosmic Background Imager (CBI), Very Small Array (VSA), ARCHEOPS and Arcminute Cosmology Bolometer Array Receiver (ACBAR), extending l to approximately 3000, with more from Boomerang and DASI as well. Pre-WMAP (pre-Wilkinson Microwave Anisotropy Probe) optimal band powers are in good agreement with each other and with the exquisite one-year WMAP results, unveiled in February 2003, which now dominate the l ≲ 600 bands. These CMB experiments significantly increased the case for accelerated expansion in the early Universe (the inflationary paradigm) and at the current epoch (dark energy dominance) when they were combined with "prior" probabilities on the parameters. The minimal inflation parameter set, {ω_b, ω_cdm, Ω_tot, Ω_Λ, n_s, τ_C, σ_8}, is applied in the same way to the evolving data. C_l database and Monte Carlo Markov Chain (MCMC) methods are shown to give similar values, which are highly stable over time and for different prior choices, with the increasing precision best characterized by decreasing errors on uncorrelated "parameter eigenmodes". Priors applied range from weak ones to stronger constraints from the expansion rate (HST-h), from cosmic acceleration from supernovae (SN1) and from galaxy clustering, gravitational lensing and local cluster abundance (LSS). After marginalizing over the other cosmic and experimental variables for the weak + LSS prior, the pre-WMAP data of January 2003 compared with the post-WMAP data of March 2003 give Ω_tot = 1.03 (+0.05/-0.04) compared with 1.02 (+0.04/-0.03), consistent with (non-Baroque) inflation theory. Adding the flat Ω_tot = 1 prior, we find a nearly scale-invariant spectrum, n_s = 0.95 (+0.07/-0.04) compared with 0.97 ± 0.02. The evidence for a logarithmic variation of the spectral tilt is ≲2σ. The densities are: for baryons, ω_b ≡ Ω_b h² = 0.0217 ± 0.002 (compared with 0.0228 ± 0.001), near the Big Bang nucleosynthesis (BBN) estimate of 0.0214 ± 0.002; for CDM, ω_cdm = Ω_cdm h² = 0.126 ± 0.012 (compared with 0.121 ± 0.010); for the substantial dark (unclustered) energy, Ω_Λ ≈ 0.66 (+0.07/-0.09) (compared with 0.70 ± 0.05). The dark energy pressure-to-density ratio w_Q is not well constrained by our weak + LSS prior, but adding SN1 gives w_Q ≲ -0.7 for January 2003 and March 2003, consistent with the w_Q = -1 cosmological constant case. We find σ_8 = 0.89 (+0.06/-0.07) (compared with 0.86 ± 0.04), implying a sizable Sunyaev-Zel'dovich (SZ) effect from clusters and groups; the high-l power found in the January 2003 data suggests σ_8 ≈ 0.94 (+0.08/-0.16) is needed to be SZ-compatible.
ERIC Educational Resources Information Center
Watts, Jerry G.
1983-01-01
Contends that Benjamin Bowser's essay (title cited above) contains conceptual and factual errors on such matters as the relationship of African slavery to European economic expansion; influence of Social Darwinism; and the role of Robert Park, W.E.B. DuBois, E. Franklin Frazier, and other Black historians in sociological research in this country.…
High order cell-centered scheme totally based on cell average
NASA Astrophysics Data System (ADS)
Liu, Ze-Yu; Cai, Qing-Dong
2018-05-01
This work clarifies the concept of the cell average by pointing out the differences between the cell average and the cell centroid value, which are the averaged cell-centered value and the pointwise cell-centered value, respectively. An interpolation based on cell averages is constructed, and a high order QUICK-like numerical scheme is designed for this interpolation. A new approach to error analysis, similar to Taylor expansion, is introduced in this work.
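To make the cell-average versus cell-centroid distinction concrete, here is a small sketch (illustrative, not the paper's scheme): for smooth u the two quantities differ at O(h²), so subtracting a second-difference estimate of h²u''/24 from the averages recovers centre values to fourth order.

```python
import numpy as np

# For smooth u, ubar_i = u(x_i) + h^2/24 * u''(x_i) + O(h^4), so a
# second-difference "deconvolution" of the averages yields a
# fourth-order-accurate pointwise centre value.
n, h = 64, 2 * np.pi / 64
x = (np.arange(n) + 0.5) * h                         # cell centres
edges = np.arange(n + 1) * h
ubar = (np.cos(edges[:-1]) - np.cos(edges[1:])) / h  # exact averages of sin
u_centre = ubar - (np.roll(ubar, -1) - 2 * ubar + np.roll(ubar, 1)) / 24.0
print("error of ubar used as centre value:", np.abs(ubar - np.sin(x)).max())
print("error after deconvolution         :", np.abs(u_centre - np.sin(x)).max())
```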
Information preserving coding for multispectral data
NASA Technical Reports Server (NTRS)
Duan, J. R.; Wintz, P. A.
1973-01-01
A general formulation of the data compression system is presented. For error-free coding of data with incomplete knowledge of the probability density function, a method of instantaneous expansion of the quantization levels is implemented by reserving two codewords in the codebook to perform a fold-over in quantization. Results for simple DPCM with folding and for an adaptive transform coding technique followed by DPCM are compared using ERTS-1 data.
A. Broido; F.A. Williams
1973-01-01
An earlier numerical analysis showed that the second approximate method of Horowitz and Metzger can be rendered exceedingly accurate for the reduction of thermogravimetry data. It is demonstrated here that this result can be justified on the basis of an asymptotic expansion with a nondimensional activation energy as the large parameter. The order of magnitude of the error...
Dynamic Target Definition: a novel approach for PTV definition in ion beam therapy.
Cabal, Gonzalo A; Jäkel, Oliver
2013-05-01
To present a beam-arrangement-specific approach for PTV definition in ion beam therapy. By means of a Monte Carlo error propagation analysis, a criterion is formulated to assess whether a voxel is safely treated. Based on this, a non-isotropic expansion rule is proposed, aiming to minimize the impact of uncertainties on the dose delivered. The method is exemplified in two cases: a head-and-neck case and a prostate case. In both cases the modality is proton beam irradiation, and the sources of uncertainty taken into account are positioning (set-up) errors and range uncertainties. It is shown how different beam arrangements affect plan robustness, leading to different target expansions necessary to assure a predefined level of plan robustness. The relevance of appropriate beam angle arrangements as a way to minimize uncertainties is demonstrated. A novel method for PTV definition in ion beam therapy is presented. The method shows promising results, improving the probability of correct CTV dose coverage while reducing the size of the PTV. In a clinical scenario this translates into an enhanced tumor control probability while reducing the volume of healthy tissue being irradiated. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
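A minimal sketch of this kind of Monte Carlo error propagation, under stated toy assumptions: a one-dimensional step-function depth dose, Gaussian set-up shifts and a fractional range error. The dose model, parameter values and the coverage criterion are all illustrative, not the paper's clinical implementation.

```python
import numpy as np
rng = np.random.default_rng(1)

def dose(depth, range_cm):
    # Toy depth-dose stand-in: flat dose up to the beam range, then zero
    # (a crude proxy for a spread-out Bragg peak; not a clinical model).
    return np.where(depth <= range_cm, 1.0, 0.0)

def coverage_probability(voxel_depth, range_cm=15.0, n=10_000,
                         setup_sd=0.3, range_sd_frac=0.03):
    # Monte Carlo propagation of set-up (shift) and range uncertainties.
    shifts = rng.normal(0.0, setup_sd, n)                  # cm
    ranges = range_cm * (1 + rng.normal(0.0, range_sd_frac, n))
    d = dose(voxel_depth + shifts, ranges)
    return np.mean(d >= 0.95)  # fraction of scenarios with adequate dose

for depth in (10.0, 14.0, 14.8):
    p = coverage_probability(depth)
    print(f"voxel at {depth:5.1f} cm: P(covered) = {p:.3f}",
          "-> safe" if p > 0.95 else "-> expand margin along this beam")
```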
B → Kl⁺l⁻ decay form factors from three-flavor lattice QCD
Bailey, Jon A.
2016-01-27
We compute the form factors for the B → Kl⁺l⁻ semileptonic decay process in lattice QCD using gauge-field ensembles with 2+1 flavors of sea quark, generated by the MILC Collaboration. The ensembles span lattice spacings from 0.12 to 0.045 fm and have multiple sea-quark masses to help control the chiral extrapolation. The asqtad improved staggered action is used for the light valence and sea quarks, and the clover action with the Fermilab interpretation is used for the heavy b quark. We present results for the form factors f+(q²), f0(q²), and fT(q²), where q² is the momentum transfer, together with a comprehensive examination of systematic errors. Lattice QCD determines the form factors for a limited range of q², and we use the model-independent z expansion to cover the whole kinematically allowed range. We present our final form-factor results as coefficients of the z expansion and the correlations between them, where the errors on the coefficients include statistical and all systematic uncertainties. Lastly, we use this complete description of the form factors to test QCD predictions of the form factors at high and low q².
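For readers unfamiliar with the z expansion, the sketch below implements the standard conformal map from q² to the small expansion variable z, plus a bare truncated series. The masses, the common "optimal" choice of t₀ and the series coefficients are illustrative, and the pole/outer factors used in real analyses are omitted.

```python
import numpy as np

mB, mK = 5.2793, 0.4937                      # GeV, illustrative masses
t_plus = (mB + mK) ** 2
t0 = t_plus * (1 - np.sqrt(1 - (mB - mK) ** 2 / t_plus))  # a common choice

def z(q2):
    """Conformal map taking the kinematic q^2 range to a small |z| disk."""
    a = np.sqrt(t_plus - q2)
    b = np.sqrt(t_plus - t0)
    return (a - b) / (a + b)

def form_factor(q2, coeffs):
    # Truncated series f(q^2) = sum_k a_k z^k; real analyses multiply by
    # pole/outer functions, omitted here for brevity.
    return np.polynomial.polynomial.polyval(z(q2), coeffs)

q2 = np.linspace(0.0, (mB - mK) ** 2, 5)
print(z(q2))                                 # |z| stays small over the range
print(form_factor(q2, [0.33, -0.7, 0.1]))    # hypothetical coefficients
```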
Transmission network of the 2014-2015 Ebola epidemic in Sierra Leone.
Yang, Wan; Zhang, Wenyi; Kargbo, David; Yang, Ruifu; Chen, Yong; Chen, Zeliang; Kamara, Abdul; Kargbo, Brima; Kandula, Sasikiran; Karspeck, Alicia; Liu, Chao; Shaman, Jeffrey
2015-11-06
Understanding the growth and spatial expansion of (re)emerging infectious disease outbreaks, such as Ebola and avian influenza, is critical for the effective planning of control measures; however, such efforts are often compromised by data insufficiencies and observational errors. Here, we develop a spatial-temporal inference methodology using a modified network model in conjunction with the ensemble adjustment Kalman filter, a Bayesian inference method equipped to handle observational errors. The combined method is capable of revealing the spatial-temporal progression of infectious disease, while requiring only limited, readily compiled data. We use this method to reconstruct the transmission network of the 2014-2015 Ebola epidemic in Sierra Leone and identify source and sink regions. Our inference suggests that, in Sierra Leone, transmission within the network introduced Ebola to neighbouring districts and initiated self-sustaining local epidemics; two of the more populous and connected districts, Kenema and Port Loko, facilitated two independent transmission pathways. Epidemic intensity differed by district and was highly correlated with population size (r = 0.76, p = 0.0015), and a critical window of opportunity for containing local Ebola epidemics at the source (ca. one month) existed. This novel methodology can be used to help identify and contain the spatial expansion of future (re)emerging infectious disease outbreaks. © 2015 The Author(s).
NASA Astrophysics Data System (ADS)
Yahampath, Pradeepa
2017-12-01
Consider communicating a correlated Gaussian source over a Rayleigh fading channel with no knowledge of the channel signal-to-noise ratio (CSNR) at the transmitter. In this case, a digital system cannot be optimal for a range of CSNRs. Analog transmission, however, is optimal at all CSNRs if the source and channel are memoryless and bandwidth matched. This paper presents new hybrid digital-analog (HDA) systems for sources with memory and channels with bandwidth expansion, which outperform both digital-only and analog-only systems over a wide range of CSNRs. The digital part is either a predictive quantizer or a transform code, used to achieve a coding gain. The analog part uses linear encoding to transmit the quantization error, which improves the performance under CSNR variations. The hybrid encoder is optimized to achieve the minimum AMMSE (average minimum mean square error) over the CSNR distribution. To this end, analytical expressions are derived for the AMMSE of asymptotically optimal systems. It is shown that the outage CSNR of the channel code and the analog-digital power allocation must be jointly optimized to achieve the minimum AMMSE. In the case of HDA predictive quantization, a simple algorithm is presented to solve the optimization problem. Experimental results are presented for both Gauss-Markov sources and speech signals.
Accurate Acoustic Thermometry I: The Triple Point of Gallium
NASA Astrophysics Data System (ADS)
Moldover, M. R.; Trusler, J. P. M.
1988-01-01
The speed of sound in argon has been accurately measured in the pressure range 25-380 kPa at the temperature of the triple point of gallium (Tg) and at 340 kPa at the temperature of the triple point of water (Tt). The results are combined with previously published thermodynamic and transport property data to obtain Tg = (302.9169 +/- 0.0005) K on the thermodynamic scale. Among recent determinations of T68 (the temperature on IPTS-68) at the gallium triple point, those with the smallest measurement uncertainty fall in the range 302.923 71 to 302.923 98 K. We conclude that T-T68 = (-6.9 +/- 0.5) mK near 303 K, in agreement with results obtained from other primary thermometers. The speed of sound was measured with a spherical resonator. The volume and thermal expansion of the resonator were determined by weighing the mercury required to fill it at Tt and Tg. The largest part of the standard error in the present determination of Tg is systematic. It results from imperfect knowledge of the thermal expansion of mercury between Tt and Tg. Smaller parts of the error result from imperfections in the measurement of the temperature of the resonator and of the resonance frequencies.
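For context, the relation that turns these speed-of-sound measurements into a temperature can be sketched as follows (textbook primary acoustic thermometry, assuming an ideal monatomic gas in the zero-pressure limit; not the paper's full working with virial and resonator corrections):

```latex
\[
  u_0^2(T) = \frac{\gamma_0 R T}{M}, \qquad \gamma_0 = \tfrac{5}{3}
  \quad\Longrightarrow\quad
  T_g = T_t\,\frac{u_0^2(T_g)}{u_0^2(T_t)},
\]
```

with u₀² obtained by extrapolating the measured isotherms u²(T, p) to p → 0. The resonator's dimensions enter through the frequency-to-speed conversion, which is why its thermal expansion between T_t and T_g (tracked here by weighing the mercury filling) dominates the error budget.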
Primordial nucleosynthesis and neutrino physics
NASA Astrophysics Data System (ADS)
Smith, Christel Johanna
We study primordial nucleosynthesis abundance yields for assumed ranges of cosmological lepton numbers, sterile neutrino mass-squared differences and active-sterile vacuum mixing angles. We fix the baryon-to-photon ratio at the value derived from the cosmic microwave background (CMB) data and then calculate the deviation of the ²H, ⁴He, and ⁷Li abundance yields from those expected in the zero-lepton-number, no-new-neutrino-physics case. We conclude that high precision (<5% error) measurements of the primordial ²H abundance from, e.g., QSO absorption line observations, coupled with high precision (<1% error) baryon density measurements from the CMB, could have the power to either: (1) reveal or rule out the existence of a light sterile neutrino if the sign of the cosmological lepton number is known; or (2) place strong constraints on lepton numbers, sterile neutrino mixing properties and resonance sweep physics. Similar conclusions would hold if the primordial ⁴He abundance could be determined to better than 10%. We have performed new Big Bang Nucleosynthesis calculations which employ arbitrarily specified, time-dependent neutrino and antineutrino distribution functions for each of up to four neutrino flavors. We self-consistently couple these distributions to the thermodynamics, the expansion rate and scale factor-time/temperature relationship, as well as to all relevant weak, electromagnetic, and strong nuclear reaction processes in the early universe. With this approach, we can treat any scenario in which neutrino or antineutrino spectral distortion might arise. These scenarios might include, for example, decaying particles, active-sterile neutrino oscillations, and active-active neutrino oscillations in the presence of significant lepton numbers. Our calculations allow lepton numbers and sterile neutrinos to be constrained with observationally determined primordial helium and deuterium abundances. We have modified a standard BBN code to perform these calculations and have made it available to the community. We have applied a fully relativistic Coulomb wave correction to the weak reactions in the full Kawano/Wagoner Big Bang Nucleosynthesis (BBN) code. We have also added the zero-temperature radiative correction. We find that using this higher-accuracy Coulomb correction results in good agreement with previous work, giving only a modest ~0.04% increase in helium mass fraction over correction prescriptions applied previously in BBN calculations. We have calculated the effect of these corrections on other light element abundance yields in BBN and we have studied these yields as functions of electron neutrino lepton number. This has allowed insights into the role of the Coulomb correction in the setting of the neutron-to-proton ratio during the BBN epoch. We find that the lepton capture processes' contributions to this ratio are only second order in the Coulomb correction.
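For orientation, the standard lever arm by which an electron-neutrino degeneracy parameter (lepton number) moves the helium yield is the weak-equilibrium neutron-to-proton ratio (a textbook relation, not a result of this work):

```latex
\[
  \left.\frac{n}{p}\right|_{\rm eq}
  = \exp\!\left(-\frac{\Delta m\,c^{2}}{kT} - \xi_{\nu_e}\right),
  \qquad \Delta m\,c^{2} \simeq 1.293~\mathrm{MeV},
  \quad \xi_{\nu_e} = \mu_{\nu_e}/kT,
\]
```

so a positive ξ_νe suppresses n/p and hence the ⁴He mass fraction, which is why percent-level ⁴He and ²H measurements constrain lepton numbers and, through resonant sweeps, active-sterile mixing.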
Antarctic sea ice control on ocean circulation in present and glacial climates
Ferrari, Raffaele; Jansen, Malte F.; Adkins, Jess F.; Burke, Andrea; Stewart, Andrew L.; Thompson, Andrew F.
2014-01-01
In the modern climate, the ocean below 2 km is mainly filled by waters sinking into the abyss around Antarctica and in the North Atlantic. Paleoproxies indicate that waters of North Atlantic origin were instead absent below 2 km at the Last Glacial Maximum, resulting in an expansion of the volume occupied by Antarctic origin waters. In this study we show that this rearrangement of deep water masses is dynamically linked to the expansion of summer sea ice around Antarctica. A simple theory further suggests that these deep waters only came to the surface under sea ice, which insulated them from atmospheric forcing, and were weakly mixed with overlying waters, thus being able to store carbon for long times. This unappreciated link between the expansion of sea ice and the appearance of a voluminous and insulated water mass may help quantify the ocean’s role in regulating atmospheric carbon dioxide on glacial–interglacial timescales. Previous studies pointed to many independent changes in ocean physics to account for the observed swings in atmospheric carbon dioxide. Here it is shown that many of these changes are dynamically linked and therefore must co-occur. PMID:24889624
Tensor integrand reduction via Laurent expansion
Hirschi, Valentin; Peraro, Tiziano
2016-06-09
We introduce a new method for the application of one-loop integrand reduction via the Laurent expansion algorithm, as implemented in the public C++ library Ninja. We show how the coefficients of the Laurent expansion can be computed by suitable contractions of the loop numerator tensor with cut-dependent projectors, making it possible to interface Ninja to any one-loop matrix element generator that can provide the components of this tensor. We implemented this technique in the Ninja library and interfaced it to MadLoop, which is part of the public MadGraph5_aMC@NLO framework. We performed a detailed performance study, comparing against other public reduction tools, namely CutTools, Samurai, IREGI, PJFry++ and Golem95. We find that Ninja outperforms traditional integrand reduction in both speed and numerical stability, the latter being on par with that of the tensor integral reduction tool Golem95, which is however more limited and slower than Ninja. Lastly, we considered many benchmark multi-scale processes of increasing complexity, involving QCD and electroweak corrections as well as effective non-renormalizable couplings, showing that Ninja's performance scales well with both the rank and multiplicity of the considered process.
Anomalous transport from holography: part II
NASA Astrophysics Data System (ADS)
Bu, Yanyan; Lublinsky, Michael; Sharon, Amir
2017-03-01
This is a second study of chiral anomaly-induced transport within a holographic model consisting of anomalous U(1)_V × U(1)_A Maxwell theory in Schwarzschild-AdS_5 spacetime. In the first part, chiral magnetic/separation effects (CME/CSE) are considered in the presence of a static spatially inhomogeneous external magnetic field. Gradient corrections to the CME/CSE are analytically evaluated up to third order in the derivative expansion. Some of the third order gradient corrections lead to an anomaly-induced negative B²-correction to the diffusion constant. We also find modifications to the chiral magnetic wave nonlinear in B. In the second part, we focus on the experimentally interesting case of the axial chemical potential being induced dynamically by a constant magnetic field and a time-dependent electric field. Constitutive relations for the vector/axial currents are computed employing two different approximations: (a) a derivative expansion (up to third order) fully nonlinear in the external fields, and (b) the weak electric field limit, resumming all orders in the derivative expansion. A non-vanishing nonlinear axial current (CSE) is found in the first case. The dependence of the linear transport coefficient functions on magnetic field and frequency is explored in the second.
NASA Astrophysics Data System (ADS)
Yamada, Y.; Shimokawa, T.; Shinomoto, S.; Yano, T.; Gouda, N.
2009-09-01
For the purpose of determining the celestial coordinates of stellar positions, consecutive observational images are overlaid on each other, using stars that appear on multiple plates as tie points. In the analysis, one has to estimate not only the coordinates of individual plates, but also the possible expansion and distortion of the frame. This problem reduces to a least-squares fit that could in principle be solved by a huge matrix inversion, which is, however, impracticable. Here, we propose using Kalman filtering to perform the least-squares fit and implement a practical iterative algorithm. We also estimate errors associated with this iterative method and suggest a design of overlapping plates to minimize the error.
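The idea can be sketched in a few lines: treat the plate parameters as a static state and process each plate's observations as one Kalman (sequential least-squares) update, so the full normal matrix is never formed or inverted. The two-parameter offset-plus-scale "plate" model below is hypothetical.

```python
import numpy as np

def kalman_ls_update(x, P, H, y, R):
    """One sequential least-squares (Kalman) update for a static parameter
    vector x with covariance P, given observations y = H x + noise(R).
    Processing plates one at a time avoids inverting the full normal
    matrix of the simultaneous fit."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)            # gain
    x = x + K @ (y - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Toy usage on a hypothetical 2-parameter plate model (offset + scale):
rng = np.random.default_rng(0)
true = np.array([0.5, 1.001])
x, P = np.zeros(2), np.eye(2) * 1e3           # diffuse prior
for _ in range(200):                          # 200 overlapping "plates"
    s = rng.uniform(-1, 1, size=(4, 1))       # star coordinates on the plate
    H = np.hstack([np.ones_like(s), s])
    y = H @ true + rng.normal(0, 1e-3, size=4)
    x, P = kalman_ls_update(x, P, H, y, np.eye(4) * 1e-6)
print(x, np.sqrt(np.diag(P)))                 # estimate and formal errors
```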
The Zeeman effect or linear birefringence? VLA polarimetric spectral line observations of H2O masers
NASA Astrophysics Data System (ADS)
Zhao, Jun-Hui; Goss, W. M.; Diamond, P.
We present line profiles of the four Stokes parameters of H2O masers at 22 GHz observed with the VLA in full polarimetric spectral line mode. With careful calibration, instrumental effects such as linear leakage and the difference in antenna gain between RCP and LCP can be minimized. Our measurements show a few percent linear polarization. Weak circular polarization was detected at a level of 0.1 percent of the peak intensity. A large uncertainty in the measurements of weak circular polarization is caused by telescope pointing errors. The observed polarization of H2O masers can be interpreted as either the Zeeman effect or linear birefringence.
Melchior, P.; Gruen, D.; McClintock, T.; ...
2017-05-16
Here, we use weak-lensing shear measurements to determine the mean mass of optically selected galaxy clusters in Dark Energy Survey Science Verification data. In a blinded analysis, we split the sample of more than 8000 redMaPPer clusters into 15 subsets, spanning ranges in the richness parameter 5 ≤ λ ≤ 180 and redshift 0.2 ≤ z ≤ 0.8, and fit the averaged mass density contrast profiles with a model that accounts for seven distinct sources of systematic uncertainty: shear measurement and photometric redshift errors; cluster-member contamination; miscentring; deviations from the NFW halo profile; halo triaxiality and line-of-sight projections.
Boosting Learning Algorithm for Stock Price Forecasting
NASA Astrophysics Data System (ADS)
Wang, Chengzhang; Bai, Xiaoming
2018-03-01
To tackle the complexity and uncertainty of stock market behavior, many studies have introduced machine learning algorithms to forecast stock prices. The ANN (artificial neural network) is one of the most successful and promising of these applications. We propose a boosting-ANN model in this paper to predict the stock close price. On the basis of boosting theory, multiple weak predicting machines, i.e. ANNs, are assembled to build a stronger predictor, i.e. the boosting-ANN model. New error criteria for the weak learning machines and rules for updating the weights are adopted in this study. We select technical factors from financial markets as forecasting input variables. Final results demonstrate that the boosting-ANN model works better than others for stock price forecasting.
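A sketch in the spirit of the boosting-ANN: each weak network is fitted to the residuals of the ensemble built so far (one common boosting variant; the paper's specific error criteria and weight-update rules may differ). The "technical factor" inputs here are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def fit_boosted_anns(X, y, n_rounds=5, lr=0.5):
    """Assemble several weak ANNs into a stronger predictor by fitting
    each new network to the current residuals."""
    models, resid = [], y.astype(float)
    for _ in range(n_rounds):
        m = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                         random_state=0).fit(X, resid)
        resid -= lr * m.predict(X)
        models.append(m)
    return lambda Xn: lr * sum(m.predict(Xn) for m in models)

# Toy usage with synthetic "technical factor" inputs
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))        # e.g. momentum, MA gap, volume (fake)
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.05 * rng.normal(size=300)
predict = fit_boosted_anns(X, y)
print("train RMSE:", np.sqrt(np.mean((predict(X) - y) ** 2)))
```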
NASA Astrophysics Data System (ADS)
Zhang, Xiaoping; Pan, Delu; Chen, Jianyu; Zhan, Yuanzeng; Mao, Zhihua
2013-01-01
Islands are an important part of the marine ecosystem. Increasing impervious surfaces in the Zhoushan Islands due to new development and increased population have an ecological impact on runoff and water quality. Based on time-series classification and the complement of vegetation fraction in urban regions, Landsat thematic mapper and other high-resolution satellite images were applied to monitor the dynamics of impervious surface area (ISA) in the Zhoushan Islands from 1986 to 2011. Landsat-derived ISA results were validated against the high-resolution Worldview-2 and aerial photographs. The validation shows that the mean relative errors of these ISA maps are <15%. The results reveal that the ISA in the Zhoushan Islands increased from 19.2 km² in 1986 to 86.5 km² in 2011, and the period from 2006 to 2011 had the fastest expansion rate, 5.59 km² per year. The major land conversions to high densities of ISA were from the tidal zone and arable lands. The expansions of ISA were unevenly distributed, and most were located along the periphery of these islands. Time-series maps revealed that ISA expansion happened continuously over the last 25 years. Our analysis indicated that policy and topography were the dominant factors controlling the spatial patterns of ISA and its expansion in the Zhoushan Islands. With continuing urbanization, the rapid ISA expansion is unlikely to stop in the near future.
NASA Astrophysics Data System (ADS)
Liu, Z.; Li, Y.
2018-04-01
From the perspective of the neighborhood cellular space, this paper proposes a new urban spatial expansion model based on a new multi-objective grey decision method and cellular automata (CA). The model addresses the difficulty that traditional cellular-automaton transition rules have in meeting the needs of the inner spatiotemporal analysis of urban change, and overcomes the uncertainty in combining urban driving factors with urban cellular automata. Taking Pidu District as the study area, we carry out urban spatial simulation, prediction and analysis, and draw the following conclusions: (1) The design idea of the proposed urban spatial expansion model is that the urban driving factors and the neighborhood function are tightly coupled by the multi-objective grey decision method based on geographical conditions. The simulation results show that the simulation error of urban spatial expansion is less than 5.27% and the Kappa coefficient is 0.84, indicating that the model can better capture the inner transformation mechanism of the city. (2) Treating Pidu District of Chengdu as a system instance, we made a simulation prediction and analyzed the urban growth tendency of this area, which presents a contiguous increasing mode known as "urban intensive development". This expansion mode accords with sustainable development theory and ecological urbanization design theory.
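To fix ideas, here is a generic urban-CA iteration of the kind such models build on; the random suitability scores and the product-form transition rule are stand-ins for the paper's grey-decision coupling of driving factors and neighborhood function.

```python
import numpy as np
rng = np.random.default_rng(0)

def ca_step(urban, suitability, threshold=0.5):
    """One generic urban-CA iteration: a non-urban cell converts when the
    product of its factor-based suitability and the urban density of its
    3x3 neighbourhood exceeds a stochastic threshold (a stand-in for the
    paper's multi-objective grey-decision coupling)."""
    n = sum(np.roll(np.roll(urban, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1)) - urban
    density = n / 8.0
    p = suitability * density
    convert = (~urban.astype(bool)) & (p > threshold * rng.random(urban.shape))
    return urban | convert.astype(int)

urban = np.zeros((50, 50), int)
urban[24:26, 24:26] = 1                      # seed settlement
suitability = rng.random((50, 50))           # hypothetical driver score
for _ in range(20):
    urban = ca_step(urban, suitability)
print("urban cells after 20 steps:", urban.sum())
```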
Stages of polymer transformation during remote plasma oxidation (RPO) at atmospheric pressure
NASA Astrophysics Data System (ADS)
Luan, P.; Oehrlein, G. S.
2018-04-01
The interaction of cold temperature plasma sources with materials can be separated into two types: ‘direct’ and ‘remote’ treatments. Compared to the ‘direct’ treatment which involves energetic charged species along with short-lived, strongly oxidative neutral species, ‘remote’ treatment by the long-lived weakly oxidative species is less invasive and better for producing uniformly treated surfaces. In this paper, we examine the prototypical case of remote plasma oxidation (RPO) of polymer materials by employing a surface micro-discharge (in a N2/O2 mixture environment) treatment on polystyrene. Using material characterization techniques including real-time ellipsometry, x-ray photoelectron spectroscopy, and Fourier-transform infrared spectroscopy, the time evolution of polymer film thickness, refractive index, surface, and bulk chemical composition were evaluated. These measurements revealed three consecutive stages of polymer transformation, i.e. surface adsorption and oxidation, bulk film permeation and thickness expansion followed by the material removal as a result of RPO. By correlating the observed film thickness changes with simultaneously obtained chemical information, we found that the three stages were due to the three effects of weakly oxidative species on polymers: (1) surface oxidation and nitrate (R-ONO2) chemisorption, (2) bulk oxidation, and (3) etching. Our results demonstrate that surface adsorption and oxidation, bulk oxidation, and etching can all happen during one continuous plasma treatment. We show that surface nitrate is only adsorbed on the top few nanometers of the polymer surface. The polymer film expansion also provided evidence for the diffusion and reaction of long-lived plasma species in the polymer bulk. Besides, we found that the remote plasma etched surface was relatively rich in O-C=O (ester or carboxylic acid). These findings clarify the roles of long-lived weakly oxidative plasma species on polymers and advance the understanding of plasma-polymer interactions on a molecular scale.
NASA Astrophysics Data System (ADS)
Udovydchenkov, Ilya A.
2017-07-01
Modal pulses are broadband contributions to an acoustic wave field with fixed mode number. Stable weakly dispersive modal pulses (SWDMPs) are special modal pulses that are characterized by weak dispersion and weak scattering-induced broadening and are thus suitable for communications applications. This paper investigates, using numerical simulations, receiver array requirements for recovering information carried by SWDMPs under various signal-to-noise ratio conditions without performing channel equalization. Two groups of weakly dispersive modal pulses are common in typical mid-latitude deep ocean environments: the lowest order modes (typically modes 1-3 at 75 Hz), and intermediate order modes whose waveguide invariant is near-zero (often around mode 20 at 75 Hz). Information loss is quantified by the bit error rate (BER) of a recovered binary phase-coded signal. With fixed receiver depths, low BERs (less than 1%) are achieved at ranges up to 400 km with three hydrophones for mode 1 with 90% probability and with 34 hydrophones for mode 20 with 80% probability. With optimal receiver depths, depending on propagation range, only a few, sometimes only two, hydrophones are often sufficient for low BERs, even with intermediate mode numbers. Full modal resolution is unnecessary to achieve low BERs. Thus, a flexible receiver array of autonomous vehicles can outperform a cabled array.
Traveltime inversion and error analysis for layered anisotropy
NASA Astrophysics Data System (ADS)
Jiang, Fan; Zhou, Hua-wei
2011-02-01
While tilted transverse isotropy (TTI) is a good approximation of the velocity structure for many dipping and fractured strata, it is still challenging to estimate anisotropic depth models even when the tilt angle is known. Under the assumption of weak anisotropy, we present a TTI traveltime inversion approach for models consisting of several thickness-varying layers where the anisotropic parameters are constant for each layer. For each model layer the inversion variables consist of the anisotropic parameters ɛ and δ, the tilt angle φ of its symmetry axis, the layer velocity along the symmetry axis, and the thickness variation of the layer. Using this method and synthetic data, we evaluate the effects of errors in some of the model parameters on the inverted values of the other parameters in crosswell and Vertical Seismic Profile (VSP) acquisition geometry. The analyses show that errors in the layer symmetry axes strongly affect the inverted values of the other parameters, especially δ. However, the impact of errors in δ on the inversion of the other parameters is much smaller than the impact on δ of errors in the other parameters. Hence, a practical strategy is first to invert for the most error-tolerant parameter, layer velocity, and then progressively invert for ɛ in crosswell geometry or δ in VSP geometry.
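For reference, the weak-anisotropy (Thomsen) P-wave phase velocity behind this parametrization, with θ measured from the (tilted) symmetry axis, is the standard relation:

```latex
\[
  v_P(\theta) \approx v_{P0}
  \left(1 + \delta \sin^{2}\theta\,\cos^{2}\theta
          + \varepsilon \sin^{4}\theta\right),
\]
```

so δ governs propagation near the symmetry axis (the rays sampled in VSP geometry) while ε dominates at large angles from the axis (crosswell geometry), consistent with the inversion strategy above.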
Friedrich, Joachim; Coriani, Sonia; Helgaker, Trygve; Dolg, Michael
2009-10-21
A fully automated parallelized implementation of the incremental scheme for coupled-cluster singles-and-doubles (CCSD) energies has been extended to treat molecular (unrelaxed) first-order one-electron properties such as the electric dipole and quadrupole moments. The convergence and accuracy of the incremental approach for the dipole and quadrupole moments have been studied for a variety of chemically interesting systems. It is found that the electric dipole moment can be obtained to within 5% and 0.5% accuracy with respect to the exact CCSD value at the third and fourth orders of the expansion, respectively. Furthermore, we find that the incremental expansion of the quadrupole moment converges to the exact result with increasing order of the expansion: the convergence of nonaromatic compounds is fast, with errors less than 16 mau and less than 1 mau at third and fourth orders, respectively (1 mau = 10⁻³ e a₀²); the aromatic compounds converge slowly, with maximum absolute deviations of 174 and 72 mau at third and fourth orders, respectively.
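The bookkeeping of the incremental scheme is easy to state generically: evaluate the property on single domains, pairs, triples, and so on, subtracting all lower-order increments so that each order adds only new correlation. The sketch below applies this to a black-box evaluator; the toy property is constructed so that the expansion is exact at second order.

```python
from itertools import combinations

def incremental_expansion(prop, domains, order=3):
    """Many-body incremental expansion of a property over domains:
    P ~ sum_i p(i) + sum_{i<j} dp(ij) + ..., where each increment removes
    all lower-order contributions. `prop` is a black-box evaluator on a
    frozenset of domains (a stand-in for a CCSD run on that subsystem)."""
    inc, total = {}, 0.0
    for k in range(1, order + 1):
        for sub in combinations(domains, k):
            val = prop(frozenset(sub))
            for r in range(1, k):
                for lower in combinations(sub, r):
                    val -= inc[frozenset(lower)]
            inc[frozenset(sub)] = val
            total += val
    return total

# Toy usage: an additive-plus-pairwise "dipole" converges at second order
pair = {(1, 2): 0.05, (2, 3): -0.02}
def prop(s):
    return 0.1 * len(s) + sum(v for (a, b), v in pair.items()
                              if a in s and b in s)
for n in (1, 2, 3):
    print(f"order {n}:", incremental_expansion(prop, [1, 2, 3, 4], n))
```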
Mananga, Eugene S.; Reid, Alicia E.
2013-01-01
This paper presents the study of finite pulse widths for the BABA pulse sequence using the Floquet-Magnus expansion (FME) approach. In the FME scheme, the first order F1 is identical to its counterparts in average Hamiltonian theory (AHT) and Floquet theory (FT). However, the timing part in the FME approach is introduced via the Λ1(t) function not present in other schemes. This function provides an easy way of evaluating the spin evolution during "the time in between" through the Magnus expansion of the operator connected to the timing part of the evolution. The evaluation of Λ1(t) is useful especially for the analysis of the non-stroboscopic evolution. Here, the importance of the boundary conditions, which provide a natural choice of Λ1(0), is ignored. This work uses the Λ1(t) function to compare the efficiency of the BABA pulse sequence with δ-pulses and the BABA pulse sequence with finite pulses. Calculations of Λ1(t) and F1 are presented. PMID:25792763
Toward unbiased estimations of the statefinder parameters
NASA Astrophysics Data System (ADS)
Aviles, Alejandro; Klapp, Jaime; Luongo, Orlando
2017-09-01
With the use of simulated supernova catalogs, we show that the statefinder parameters are poorly estimated, and with significant bias, by standard cosmography. To this end, we compute their standard deviations and several bias statistics on cosmologies near the concordance model, demonstrating that these are very large and make standard cosmography unsuitable for future, wider compilations of data. To overcome this issue, we propose a new method that consists of introducing the series of the Hubble function into the luminosity distance, instead of considering the usual direct Taylor expansions of the luminosity distance. Moreover, in order to speed up the numerical computations, we estimate the coefficients of our expansions in a hierarchical manner, in which the order of the expansion depends on the redshift of every single piece of data. In addition, we propose two hybrid methods that incorporate standard cosmography at low redshifts. The methods presented here perform better than the standard approach of cosmography in both the errors and the bias of the estimated statefinders. We further propose a one-parameter diagnostic to reject non-viable methods in cosmography.
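A sketch of the proposed construction: expand H(z) cosmographically and integrate it numerically for the luminosity distance, instead of Taylor-expanding d_L itself. The parameter values (H₀ = 70 km s⁻¹ Mpc⁻¹, q₀ = −0.55, j₀ = 1), the truncation order and the flatness assumption are illustrative.

```python
import numpy as np
from scipy.integrate import quad

C = 299792.458  # speed of light, km/s

def H_series(z, H0=70.0, q0=-0.55, j0=1.0):
    """Hubble function expanded to second order in z (cosmographic series);
    the statefinder-type parameters enter through q0 and j0."""
    return H0 * (1 + (1 + q0) * z + 0.5 * (j0 - q0 ** 2) * z ** 2)

def d_L(z, **kw):
    """Luminosity distance (flat universe) in Mpc, obtained by integrating
    the H(z) series rather than Taylor-expanding d_L directly."""
    integral, _ = quad(lambda zp: 1.0 / H_series(zp, **kw), 0.0, z)
    return (1 + z) * C * integral

for z in (0.1, 0.5, 1.0):
    print(f"z = {z}: d_L = {d_L(z):8.1f} Mpc")
```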
Artificial Intelligence, DNA Mimicry, and Human Health.
Stefano, George B; Kream, Richard M
2017-08-14
The molecular evolution of genomic DNA across diverse plant and animal phyla involved dynamic registrations of sequence modifications to maintain existential homeostasis to increasingly complex patterns of environmental stressors. As an essential corollary, driver effects of positive evolutionary pressure are hypothesized to effect concerted modifications of genomic DNA sequences to meet expanded platforms of regulatory controls for successful implementation of advanced physiological requirements. It is also clearly apparent that preservation of updated registries of advantageous modifications of genomic DNA sequences requires coordinate expansion of convergent cellular proofreading/error correction mechanisms that are encoded by reciprocally modified genomic DNA. Computational expansion of operationally defined DNA memory extends to coordinate modification of coding and previously under-emphasized noncoding regions that now appear to represent essential reservoirs of untapped genetic information amenable to evolutionary driven recruitment into the realm of biologically active domains. Additionally, expansion of DNA memory potential via chemical modification and activation of noncoding sequences is targeted to vertical augmentation and integration of an expanded cadre of transcriptional and epigenetic regulatory factors affecting linear coding of protein amino acid sequences within open reading frames.
Looking for trouble? Diagnostics expanding disease and producing patients.
Hofmann, Bjørn
2018-05-23
Novel tests give great opportunities for earlier and more precise diagnostics. At the same time, new tests expand disease, produce patients, and cause unnecessary harm in overdiagnosis and overtreatment. How can we evaluate diagnostics to obtain the benefits and avoid harm? One way is to pay close attention to the diagnostic process and its core concepts. Doing so reveals 3 errors that expand disease and increase overdiagnosis. The first error is to decouple diagnostics from harm, eg, by diagnosing insignificant conditions. The second error is to bypass proper validation of the relationship between test indicator and disease, eg, by introducing biomarkers for Alzheimer's disease before the tests are properly validated. The third error is to couple the name of disease to insignificant or indecisive indicators, eg, by lending the cancer name to preconditions, such as ductal carcinoma in situ. We need to avoid these errors to promote beneficial testing, bar harmful diagnostics, and evade unwarranted expansion of disease. Accordingly, we must stop identifying and testing for conditions that are only remotely associated with harm. We need more stringent verification of tests, and we must avoid naming indicators and indicative conditions after diseases. If not, we will end like ancient tragic heroes, succumbing because of our very best abilities. © 2018 John Wiley & Sons, Ltd.
Weakly sheared active suspensions: hydrodynamics, stability, and rheology.
Cui, Zhenlu
2011-03-01
We present a kinetic model for flowing active suspensions and analyze the behavior of a suspension subjected to a weak steady shear. Asymptotic solutions are sought in Deborah number expansions. At the leading order, we explore the steady states and perform their stability analysis. We predict the rheology of active systems including an activity thickening or thinning behavior of the apparent viscosity and a negative apparent viscosity depending on the particle type, flow alignment, and the anchoring conditions, which can be tested on bacterial suspensions. We find remarkable dualities that show that flow-aligning rodlike contractile (extensile) particles are dynamically and rheologically equivalent to flow-aligning discoid extensile (contractile) particles for both tangential and homeotropic anchoring conditions. Another key prediction of this work is the role of the concentration of active suspensions in controlling the rheological behavior: the apparent viscosity may decrease with the increase of the concentration.
Anomalous dimensions of spinning operators from conformal symmetry
NASA Astrophysics Data System (ADS)
Gliozzi, Ferdinando
2018-01-01
We compute, to the first non-trivial order in the ɛ-expansion of a perturbed scalar field theory, the anomalous dimensions of an infinite class of primary operators with arbitrary spin ℓ = 0, 1, . . . , including as a particular case the weakly broken higher-spin currents, using only constraints from conformal symmetry. Following the bootstrap philosophy, no reference is made to any Lagrangian, equations of motion or coupling constants. Even the space dimensions d are left free. The interaction is implicitly turned on through the local operators by letting them acquire anomalous dimensions. When matching certain four-point and five-point functions with the corresponding quantities of the free field theory in the ɛ → 0 limit, no free parameter remains. It turns out that only the expected discrete d values are permitted and the ensuing anomalous dimensions reproduce known results for the weakly broken higher-spin currents and provide new results for the other spinning operators.
NASA Technical Reports Server (NTRS)
Iijima, T.; Kim, J. S.; Sugiura, M.
1984-01-01
The development of the polar cap current and the relationship of that development to the evolution of auroral electrojets during individual polar geomagnetic disturbances is studied using 1 min average data from US-Canada IMS network stations and standard magnetograms from sites on the polar cap and in the auroral zone. It is found that even when the auroral electrojet activity is weak, polar cap currents producing fields of magnitude approximately 100-200 nT almost always exist. A normal convection current system exists quasi-persistently in the polar cap during extended quiet or weakly disturbed periods of auroral electrojet activity. After one such period, some drastic changes occur in the polar cap currents, which are followed by phases of growth, expansion, and recovery. Polar cap currents cannot all be completely ascribed to a single source mechanism.
NASA Astrophysics Data System (ADS)
Xu, Dazhi; Cao, Jianshu
2016-08-01
The concept of the polaron, which emerged from condensed matter physics, describes the dynamical interaction of a moving particle with its surrounding bosonic modes. This concept has been developed into a useful method to treat open quantum systems across the complete range of system-bath coupling strengths. In particular, the polaron transformation approach is valid in the intermediate coupling regime, in which the Redfield equation or Fermi's golden rule fails. In the polaron frame, the equilibrium distribution obtained by perturbative expansion deviates from the canonical distribution, which is beyond the usual weak coupling assumption in thermodynamics. A polaron-transformed Redfield equation (PTRE) not only reproduces the dissipative quantum dynamics but also provides an accurate and efficient way to calculate non-equilibrium steady states. Applications of the PTRE approach to problems such as exciton diffusion, heat transport and light-harvesting energy transfer are presented.
Condensate statistics and thermodynamics of weakly interacting Bose gas: Recursion relation approach
NASA Astrophysics Data System (ADS)
Dorfman, K. E.; Kim, M.; Svidzinsky, A. A.
2011-03-01
We study condensate statistics and thermodynamics of a weakly interacting Bose gas with a fixed total number N of particles in a cubic box. We find the exact recursion relation for the canonical ensemble partition function. Using this relation, we calculate the distribution function of condensate particles for N = 200. We also calculate the distribution function based on a multinomial expansion of the characteristic function. Similar to the ideal gas, both approaches give exact statistical moments for all temperatures in the framework of the Bogoliubov model. We compare them with the results of the unconstrained canonical-ensemble quasiparticle formalism and the hybrid master equation approach. The present recursion relation can be used for any external potential and boundary conditions. We investigate the temperature dependence of the first few statistical moments of condensate fluctuations, as well as thermodynamic potentials and heat capacity, analytically and numerically in the whole temperature range.
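For the ideal gas, the recursion in question is the classic canonical-ensemble relation Z_N(β) = (1/N) Σ_{k=1}^{N} Z₁(kβ) Z_{N−k}(β); the paper's contribution is the analogous exact relation for the weakly interacting gas within the Bogoliubov model. A sketch for N = 200 particles in a cubic box (dimensionless level spacing; parameter values illustrative):

```python
import numpy as np

def single_particle_Z(beta, nmax=20):
    """Z_1(beta) for one particle in a cubic box, spectrum nx^2+ny^2+nz^2
    in units of the level spacing, shifted so the ground state sits at 0."""
    n2 = np.arange(1, nmax + 1) ** 2
    e = n2[:, None, None] + n2[None, :, None] + n2[None, None, :] - 3.0
    return float(np.exp(-beta * e).sum())

def canonical_Z(N, beta):
    """Ideal-gas canonical recursion Z_n = (1/n) sum_k Z_1(k beta) Z_{n-k};
    the weakly interacting case generalizes this within the Bogoliubov
    model."""
    z1 = np.array([single_particle_Z(k * beta) for k in range(1, N + 1)])
    Z = np.zeros(N + 1)
    Z[0] = 1.0
    for n in range(1, N + 1):
        Z[n] = np.dot(z1[:n], Z[n - 1::-1]) / n
    return Z

N, beta = 200, 0.1
Z = canonical_Z(N, beta)
# Ground-state occupation from P(N0 >= k) = Z_{N-k}/Z_N (ground energy = 0)
N0 = sum(Z[N - k] / Z[N] for k in range(1, N + 1))
print(f"condensate fraction <N0>/N = {N0 / N:.3f}")
```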
DOE Office of Scientific and Technical Information (OSTI.GOV)
Meng, Ziyang; Yang, Tao; Li, Guoqi
Here, we study synchronization of coupled linear systems over networks with weak connectivity and nonuniform time-varying delays. We focus on the case where the internal dynamics are time-varying but non-expansive (stable dynamics with a quadratic Lyapunov function). Both uniformly jointly connected and infinitely jointly connected communication topologies are considered. A new concept of quadratic synchronization is introduced. We first show that global asymptotic quadratic synchronization can be achieved over directed networks with uniform joint connectivity and arbitrarily bounded delays. We then study the case of infinitely jointly connected communication topology. In particular, for undirected communication topologies, it turns out that the existence of a uniform time interval for the jointly connected communication topology is not necessary, and quadratic synchronization can be achieved when the time-varying nonuniform delays are arbitrarily bounded. Finally, simulation results are provided to validate the theoretical results.